{"id":1811,"date":"2026-04-11T16:54:29","date_gmt":"2026-04-11T16:54:29","guid":{"rendered":"https:\/\/inphronesys.com\/?p=1811"},"modified":"2026-04-11T16:54:29","modified_gmt":"2026-04-11T16:54:29","slug":"design-for-ai-what-40-years-of-dfma-teaches-us-about-working-with-llms","status":"publish","type":"post","link":"https:\/\/inphronesys.com\/?p=1811","title":{"rendered":"Design for AI: What 40 Years of DFMA Teaches Us About Working With LLMs"},"content":{"rendered":"<h2>Your Prompt Has Nine Fasteners<\/h2>\n<p>In 1985, IBM set out to redesign its Proprinter \u2014 the dot-matrix printer competing head-to-head with the Epson MX80. Engineers working from the Design for Manufacturability and Assembly methodology developed by Geoffrey Boothroyd and Peter Dewhurst at the University of Massachusetts asked a simple, brutal question: does this product need all these parts?<\/p>\n<p>The answer was no. IBM&#8217;s DFMA redesign eliminated all fasteners and reached the theoretical minimum part count. The competing Epson design required approximately 111 more parts for the same function. Assembly time came in at 170 seconds. The Proprinter cost less to build, less to fail, and less to service \u2014 and none of that improvement came from the factory. It came from the design. (Dewhurst &amp; Boothroyd, &quot;Design for Assembly in Action,&quot; <em>Assembly Engineering<\/em>, 1987.)<\/p>\n<p>Their insight became the discipline of <strong>Design for Manufacturability and Assembly (DFMA)<\/strong>: manufacturing cost is not a factory problem. It is a design problem. By the time the product reached the factory floor, 70\u201380% of its cost was already locked in by design decisions made weeks or months earlier.<\/p>\n<p>This idea spread fast. Ford. IBM. Motorola. Procter &amp; Gamble. Industry after industry discovered that they had been optimizing their factories while the real waste was upstream, in the design stage. 
DFMA delivered 20\u201350% cost reductions across automotive and electronics \u2014 not by building better robots, but by designing simpler products.<\/p>\n<p>In 2026, we have the exact same problem with AI.<\/p>\n<p>The reason your AI assistant keeps missing the point isn&#8217;t Claude, GPT, or Gemini. It is that the prompt you wrote \u2014 the document you handed it, the SOP you uploaded, the ticket you assigned it \u2014 has the communicative equivalent of nine fasteners. It assumes a reader who already knows the unspoken rules, tolerates ambiguity, can infer missing context, and will ask for clarification when confused.<\/p>\n<p>LLMs are not that reader. And that mismatch is costing organizations millions in failed pilots, wasted tokens, and re-work.<\/p>\n<p>The fix is not smarter models. It is a new discipline: <strong>Design for AI (DFAI)<\/strong> \u2014 treating the way we write, structure, document, and hand off work as engineering artifacts designed to be executed by a literal machine. By the end of this post, you will have 7 concrete rules you can apply to the next prompt you write.<\/p>\n<hr \/>\n<h2>Section 1 \u2014 DFMA in One Minute<\/h2>\n<p>Before the analogy, the source material. DFMA is a methodology developed by Boothroyd, Dewhurst, and later Knight for analyzing product designs from the perspective of the manufacturing process that will produce them. 
The core idea: most production cost is determined at design time, not on the factory floor.<\/p>\n<p>The canonical rules (Boothroyd &amp; Dewhurst, <em>Product Design for Manufacture and Assembly<\/em>, 1994):<\/p>\n<ol>\n<li><strong>Minimize part count<\/strong> \u2014 every part is a potential failure, assembly step, and inventory item<\/li>\n<li><strong>Self-locating features<\/strong> \u2014 parts should position themselves; don&#8217;t rely on the assembler to interpret<\/li>\n<li><strong>Eliminate adjustments<\/strong> \u2014 variation in assembly should be handled by design, not by the worker<\/li>\n<li><strong>Single-direction assembly<\/strong> \u2014 all parts insert from the same direction; no flipping, no reorientation<\/li>\n<li><strong>Standardize fasteners<\/strong> \u2014 fewer fastener types means fewer tools, fewer errors, faster assembly<\/li>\n<li><strong>Poka-yoke (mistake-proof)<\/strong> \u2014 design parts so they can only go in the correct way<\/li>\n<li><strong>Design for ease of handling<\/strong> \u2014 parts that are easy to pick up, orient, and place<\/li>\n<li><strong>Eliminate secondary operations<\/strong> \u2014 no rework, no post-assembly adjustments<\/li>\n<\/ol>\n<p>The results were consistent across industries: 20\u201350% cost reductions, with the biggest gains coming not from process optimization but from part-count reduction. Less to assemble = faster assembly, fewer defects, lower cost.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/dfai_part_count-1.png\" alt=\"DFMA before\/after: 9 fasteners \u2192 1\" \/><\/p>\n<p>The most important insight from DFMA was philosophical: it forced engineers to think about downstream processes before the design was frozen. 
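<\/p>
<p>Boothroyd and Dewhurst made that thinking measurable with a single score: the DFA assembly-efficiency index, where 3 seconds is their benchmark time to handle and insert one ideal part:<\/p>

```latex
% DFA assembly-efficiency index (Boothroyd & Dewhurst):
%   N_min : theoretical minimum part count
%   t_ma  : estimated assembly time of the candidate design, in seconds
E_{ma} = \frac{3 \, N_{\min}}{t_{ma}}
```

<p>With illustrative numbers (not the Proprinter&#8217;s published figures): a design that reaches N_min = 30 and assembles in 170 seconds scores roughly 0.53, while a fastener-heavy design taking twice as long scores half that. The index rewards removing parts, not speeding up assemblers.<\/p>
<p>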
The people writing the spec had to understand how the spec would be executed \u2014 by a machine, with no ability to fill in gaps.<\/p>\n<p>Sound familiar?<\/p>\n<hr \/>\n<h2>Section 2 \u2014 Why Your Prompts and Pilots Stall<\/h2>\n<p>The numbers are uncomfortable. Only <strong>4% of companies have developed the AI capabilities needed to consistently generate significant value from it<\/strong> (BCG, <em>Where&#8217;s the Value in AI?<\/em>, 2024). Gartner predicts organizations will abandon <strong>60% of AI projects through 2026<\/strong> for lack of AI-ready data. These are not numbers about AI capability. They are numbers about AI readiness \u2014 and specifically, about the quality of the inputs organizations hand to AI systems.<\/p>\n<p>The root causes, framed honestly, are all design failures:<\/p>\n<p><strong>LLMs don&#8217;t hold ambiguity open \u2014 they commit.<\/strong> Research on LLM ambiguity (arXiv 2505.11679, 2025) confirms what practitioners observe daily: when faced with an ambiguous input, LLMs don&#8217;t hold multiple interpretations open the way a human colleague might \u2014 they commit to one reading and generate forward. The ambiguity does not surface as confusion; it surfaces as a confidently wrong answer.<\/p>\n<p><strong>46% of organizations identify operations as their most tribal-knowledge-dependent function<\/strong> (Lucid.co AI Readiness Report, 2025). That knowledge lives in someone&#8217;s head. It is never written down, never specified, never externalized. When an LLM processes a document that assumes that tribal knowledge, it is operating with a missing assembly manual.<\/p>\n<p><strong>More context is not the same as better context.<\/strong> Anthropic&#8217;s context engineering research (2025) makes the point directly: it is not about filling the context window, it is about filling it with the right information. 
Structured, high-signal context systematically outperforms a raw dump of documents \u2014 regardless of how large the context window is.<\/p>\n<p><strong>11,000 Baby Boomers retire daily<\/strong> (Alliance for Lifetime Income, 2024), taking with them the institutional knowledge that most organizational documentation assumed was already in the reader&#8217;s head.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/dfai_funnel-1.png\" alt=\"From GenAI pilot to production: where projects die\" \/><\/p>\n<p>None of these are AI failures. They are failures of design \u2014 of the inputs, the documents, the prompts, the processes that LLMs are asked to execute. You cannot prompt your way out of a system that was never designed to be read by a literal machine.<\/p>\n<p>This post is not about prompt engineering techniques, which I covered in <a href=\"https:\/\/inphronesys.com\"><em>Prompting Just Split Into 4 Different Skills<\/em><\/a>. It is not about the history of prompting methods from 2020 to 2026, which is in <a href=\"https:\/\/inphronesys.com\"><em>Anthropic Just Killed One of Its Own Prompting Tricks<\/em><\/a>. And it is not another argument that structured frameworks improve your prompts \u2014 that case was made in <a href=\"https:\/\/inphronesys.com\"><em>The $50,000 Prompt<\/em><\/a> using McKinsey methodology.<\/p>\n<p>DFAI is a different lens. It is not about which technique you use when you open a chat window. 
It is about the design quality of everything the LLM reads \u2014 before it writes a single token.<\/p>\n<hr \/>\n<h2>Section 3 \u2014 The 7 Principles of Design for AI<\/h2>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/dfai_principles_card-1.png\" alt=\"The 7 Principles of Design for AI\" \/><\/p>\n<p>Each principle below maps to a DFMA rule, names the DFAI equivalent, cites the research behind it, and gives a concrete before\/after example across different knowledge-work domains.<\/p>\n<hr \/>\n<h3>Principle 1: Eliminate Ambiguity<\/h3>\n<p><strong>DFMA analogue:<\/strong> Poka-yoke \/ mistake-proofing \u2014 design parts so they can only be assembled correctly.<\/p>\n<p><strong>DFAI rule:<\/strong> Write prompts and documents with exactly one reasonable interpretation. If a human reader could parse it two ways, an LLM will pick one without telling you which.<\/p>\n<p><strong>Why it matters:<\/strong> arXiv 2505.11679 (2025) documents what practitioners already feel: LLMs are biased toward generating one interpretation among many for an ambiguous input, rather than holding alternatives open. In practice, vague verbs (&quot;analyze,&quot; &quot;improve,&quot; &quot;summarize&quot;), unspecified audiences, and missing constraints are not just stylistic weaknesses \u2014 they are architectural failure points in your input design.<\/p>\n<p>The fix is not wordsmithing. It is adding what DFMA calls a locating feature: a constraint that makes the wrong interpretation physically impossible. Specify audience, scope, format, tone, and exclusions explicitly, and the interpretation space collapses to one.<\/p>\n<blockquote>\n<p>&#x274c; <strong>Before:<\/strong> &quot;Write a follow-up email to the client about the delay.&quot;<\/p>\n<p>&#x2705; <strong>After:<\/strong> &quot;Write a 3-paragraph email to our enterprise client contact (technical buyer, skeptical tone expected). Topic: our 2-week delivery delay on Order #8821. 
Paragraph 1: acknowledge the delay and take responsibility. Paragraph 2: specific cause (supplier lead time extension, not internal). Paragraph 3: revised delivery date (April 28) + what we are doing to prevent recurrence. Tone: direct, professional, no apologies after the first sentence. Do not offer discounts or concessions.&quot;<\/p>\n<\/blockquote>\n<hr \/>\n<h3>Principle 2: Externalize Tribal Knowledge<\/h3>\n<p><strong>DFMA analogue:<\/strong> Standardize to a single source of truth \u2014 eliminate process variation by writing it down, once, definitively.<\/p>\n<p><strong>DFAI rule:<\/strong> Every piece of knowledge the LLM needs to execute correctly must exist as an artifact in the prompt or context. If it lives only in someone&#8217;s head, it does not exist for the model.<\/p>\n<p><strong>Why it matters:<\/strong> Lucid.co&#8217;s 2025 AI Readiness Report found that 46% of organizations identify operations as their most tribal-knowledge-dependent function. When that knowledge is missing from the context, LLMs fill the gap with pattern-matching from their training data \u2014 which is often generic and occasionally wrong. The result is plausible-sounding output that violates internal conventions the model had no way to know about.<\/p>\n<blockquote>\n<p>&#x274c; <strong>Before:<\/strong> &quot;Write a Python function that follows our team&#8217;s style.&quot;<\/p>\n<p>&#x2705; <strong>After:<\/strong> &quot;Write a Python function that follows these team conventions: (1) type hints on all function signatures, (2) Google-style docstrings, (3) errors raise custom exceptions from <code>exceptions.py<\/code>, never bare <code>Exception<\/code>, (4) no logic in <code>__init__<\/code> files, (5) max function length 40 lines. 
Function to write: [spec follows].&quot;<\/p>\n<\/blockquote>\n<hr \/>\n<h3>Principle 3: Chunk and Label<\/h3>\n<p><strong>DFMA analogue:<\/strong> Single-direction assembly \u2014 all components should load from the same direction, with clear sequencing.<\/p>\n<p><strong>DFAI rule:<\/strong> Break complex inputs into labeled, discrete sections. Give every section a clear header. Long undivided documents are the textual equivalent of an assembly with no orientation markers.<\/p>\n<p><strong>Why it matters:<\/strong> Anthropic&#8217;s context engineering research (2025) states it directly: find the smallest set of high-signal tokens that maximize the likelihood of the desired outcome. A focused, well-structured context block outperforms an unstructured document dump \u2014 regardless of raw token count. The information architecture of the context is a variable that determines output quality, not just the information itself. Chunked, labeled inputs reduce the model&#8217;s cognitive overhead in parsing, which increases the precision of generation.<\/p>\n<blockquote>\n<p>&#x274c; <strong>Before:<\/strong> &quot;Here is the research paper, three analyst reports, and some notes I took last week. 
Can you extract the main themes and any contradictions?&quot;<\/p>\n<p>&#x2705; <strong>After:<\/strong><br \/>\n&quot;## Task: Extract themes and contradictions across 4 sources.<\/p>\n<p><strong>Source 1 \u2014 Research paper (Smith et al., 2024):<\/strong> [paste]<\/p>\n<p><strong>Source 2 \u2014 Analyst report (Goldman, Feb 2026):<\/strong> [paste]<\/p>\n<p><strong>Source 3 \u2014 Analyst report (Bernstein, Mar 2026):<\/strong> [paste]<\/p>\n<p><strong>Source 4 \u2014 My notes (field interview, Apr 2026):<\/strong> [paste]<\/p>\n<p><strong>Output format:<\/strong> (a) 3\u20135 themes all sources agree on, (b) 2\u20133 points where sources contradict, (c) one claim in my notes not corroborated elsewhere.&quot;<\/p>\n<\/blockquote>\n<hr \/>\n<h3>Principle 4: Prefer Structured Over Narrative<\/h3>\n<p><strong>DFMA analogue:<\/strong> Machine-readable format \u2014 parts described in engineering drawings, not prose descriptions, because machines need coordinates, not stories.<\/p>\n<p><strong>DFAI rule:<\/strong> Wherever the input data has structure, preserve it explicitly. Tables, JSON, bullet lists, and key:value pairs outperform equivalent prose for any downstream computation or extraction task.<\/p>\n<p><strong>Why it matters:<\/strong> As of 2025, over 600 websites had adopted the <code>\/llms.txt<\/code> standard (Howard, Answer.AI, 2024) \u2014 a structured, Markdown-formatted file designed to give LLMs a machine-readable summary of a site&#8217;s content rather than a scraped prose page. The premise is the same as a CAD drawing versus a verbal description: structured formats are unambiguous by construction.<\/p>\n<p><strong>SCM example (RFQ intake):<\/strong><\/p>\n<blockquote>\n<p>&#x274c; <strong>Before (narrative RFQ):<\/strong> &quot;We&#8217;re looking for a supplier who can provide approximately 10,000 units of our standard M8 bracket, ideally in Q3, with delivery to our Munich facility. 
Pricing should be competitive and we need reliable quality.&quot;<\/p>\n<p>&#x2705; <strong>After (structured RFQ fields):<\/strong><\/p>\n<pre><code>Part number: M8-BKT-001\nQuantity: 10,000 units\nDelivery: 2026-09-01 (hard), Munich (plant code MU-03)\nSpec: DIN 912, Grade 8.8, zinc-plated\nPrice target: \u2264 \u20ac0.42\/unit\nRequired certificates: EN 15048, ISO 9001\nBid deadline: 2026-05-15\n<\/code><\/pre>\n<\/blockquote>\n<p>The structured version eliminates the interpretation layer entirely. The LLM \u2014 or the human \u2014 reading the second version has no decisions to make.<\/p>\n<hr \/>\n<h3>Principle 5: Make State Explicit<\/h3>\n<p><strong>DFMA analogue:<\/strong> Self-locating features \u2014 a part that knows where it belongs, without the assembler having to decide.<\/p>\n<p><strong>DFAI rule:<\/strong> Never rely on implicit context accumulated across a conversation or workflow. In every handoff, every new context window, every new session, state the current state explicitly.<\/p>\n<p><strong>Why it matters:<\/strong> Anthropic&#8217;s <em>Building Effective Agents<\/em> (2024) identifies implicit state assumptions as a primary source of agentic failure. In multi-step workflows, agents do not automatically carry forward information from prior steps unless that information is explicitly present in the current context. A human colleague can reconstruct context from memory; an LLM can only work with what is in front of it.<\/p>\n<blockquote>\n<p>&#x274c; <strong>Before (continuing a conversation from yesterday):<\/strong> &quot;Ok, continue with the analysis you started.&quot;<\/p>\n<p>&#x2705; <strong>After:<\/strong> &quot;## Current state: We are mid-way through a competitive analysis for a new SaaS pricing decision. Completed: market segment sizing (see table below), competitor price benchmarking for Tier 1 and Tier 2 segments. Not yet done: Tier 3 pricing, sensitivity analysis, recommendation. 
Next step: analyze Tier 3 pricing options using the same framework as Tier 2 (attached). Constraint: target gross margin \u2265 72%.&quot;<\/p>\n<\/blockquote>\n<hr \/>\n<h3>Principle 6: Standardize the Human&#x2194;Agent Handoff<\/h3>\n<p><strong>DFMA analogue:<\/strong> Standardized fasteners \u2014 fewer fastener types means fewer tools, fewer errors, fewer cognitive switching costs.<\/p>\n<p><strong>DFAI rule:<\/strong> Define a consistent, reusable protocol for how humans hand tasks to AI agents and receive results back. Ad-hoc handoffs accumulate variation; variation accumulates errors.<\/p>\n<p><strong>Why it matters:<\/strong> Klein and Hoffman&#8217;s work on human-AI teaming (2025) identifies interface consistency as a critical variable in collaborative performance. When humans use different prompting formats, different output schemas, and different escalation procedures across tasks, teams lose the ability to review, audit, and improve AI outputs systematically. A standardized handoff is not a constraint on creativity \u2014 it is what makes quality reproducible.<\/p>\n<blockquote>\n<p>&#x274c; <strong>Before (ad hoc):<\/strong> &quot;Hey, can you review this contract and flag anything weird? Also check if the payment terms are ok. 
Let me know what you think.&quot;<\/p>\n<p>&#x2705; <strong>After (standardized handoff template):<\/strong><\/p>\n<pre><code>TASK: [one-line description]\nCONTEXT: [relevant background \u2014 max 200 words]\nCONSTRAINTS: [hard limits the output must respect]\nOUTPUT FORMAT: [exact structure, e.g., JSON schema or Markdown template]\nESCALATION CRITERIA: [conditions under which the agent should stop and flag for human review]\n<\/code><\/pre>\n<\/blockquote>\n<hr \/>\n<h3>Principle 7: Observability First<\/h3>\n<p><strong>DFMA analogue:<\/strong> Mistake-proofing + cost visibility \u2014 design so that errors are visible immediately and cheaply, not after they have propagated downstream.<\/p>\n<p><strong>DFAI rule:<\/strong> Design your AI workflows to fail loudly and visibly. Build evals, assertions, and output validators before you scale. Unobservable AI output is a design defect, not an operational risk.<\/p>\n<p><strong>Why it matters:<\/strong> arXiv 2512.12791 (2024) argues that agentic AI evaluation must go beyond binary task completion \u2014 behavioral failures invisible to &quot;did it finish?&quot; metrics only surface through comprehensive assessment of tool use, memory management, and environmental interaction. An AI agent that confidently produces wrong output with no visibility mechanism is the LLM equivalent of an assembly line with no quality gate: the defects ship.<\/p>\n<blockquote>\n<p>&#x274c; <strong>Before:<\/strong> Deploy a document classification agent, review a sample of outputs monthly, fix issues as they surface.<\/p>\n<p>&#x2705; <strong>After:<\/strong> Define expected output schema before deployment. Write 20 test cases with known correct answers. Instrument the agent to log confidence scores and flag low-confidence outputs for human review. Set a threshold (e.g., \u226595% classification agreement on the test set) as a go\/no-go criterion. 
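<\/p>
<p>A minimal version of that gate (a sketch, not a framework; <code>classify<\/code> is a stub standing in for the real agent, and the test set is abbreviated):<\/p>

```python
# Sketch of the eval gate described above. "classify" is a stub standing in
# for the real document-classification agent; swap in your actual call.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

def classify(doc: str) -> Prediction:
    # Stub logic so the sketch runs end-to-end.
    is_invoice = "invoice" in doc.lower()
    return Prediction("invoice" if is_invoice else "other",
                      0.97 if is_invoice else 0.61)

# Test cases with known correct answers (abbreviated; use ~20 in practice).
TEST_SET = [
    ("Invoice #441 from Acme GmbH, due 2026-05-31", "invoice"),
    ("Meeting notes, April 3: pricing discussion", "other"),
]

GO_THRESHOLD = 0.95       # go/no-go: >= 95% agreement with known answers
REVIEW_THRESHOLD = 0.80   # below this confidence, flag for human review

def run_eval(cases):
    correct, flagged = 0, []
    for doc, expected in cases:
        pred = classify(doc)
        correct += (pred.label == expected)
        if pred.confidence < REVIEW_THRESHOLD:
            flagged.append(doc)
    accuracy = correct / len(cases)
    return accuracy, accuracy >= GO_THRESHOLD, flagged

accuracy, go, flagged = run_eval(TEST_SET)
print(f"accuracy={accuracy:.0%} go={go} flagged={len(flagged)}")
```

<p>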
Review the eval dashboard weekly; trigger investigation if accuracy drops &gt;2 percentage points.<\/p>\n<\/blockquote>\n<hr \/>\n<h2>Section 4 \u2014 Three Things DFMA Didn&#8217;t Warn Us About<\/h2>\n<p>DFMA was a manufacturing discipline. It was not designed for the specific pathologies of knowledge work. Three inversions surprised even DFMA devotees when they tried to port the discipline to AI.<\/p>\n<h3>1. Humans must become more literal, not agents more human.<\/h3>\n<p>DFMA placed the burden on engineers: design the product to suit the process. DFAI places the burden on knowledge workers: write for a reader that cannot infer intent.<\/p>\n<p>This is culturally harder than it sounds. Most professional writing \u2014 business emails, strategy documents, meeting notes, project briefs \u2014 is proudly ambiguous. It is written for human colleagues who share context, who ask questions, who can read between the lines. Ambiguity in that context is not a bug; it is efficient shorthand.<\/p>\n<p>An LLM has none of those social repair mechanisms. It cannot ask a clarifying question in the middle of executing a 35-minute agentic task. It reads what is in front of it, commits to one interpretation, and generates forward. The cultural ask of DFAI \u2014 &quot;write as if your reader is intelligent but has no unspoken context, cannot infer, and will not ask&quot; \u2014 runs directly against how most professionals have been trained to communicate. That is not a technical problem. It is an organizational change management problem.<\/p>\n<h3>2. Eliminating steps beats optimizing them.<\/h3>\n<p>The DFMA version of this insight was that the best cost reduction often came from removing a part entirely, not redesigning it. The same inversion applies to DFAI.<\/p>\n<p>When a prompt fails, the instinct is to engineer a more sophisticated prompt for the same underlying task structure. Add more instructions. More examples. More constraints. 
More chain-of-thought scaffolding.<\/p>\n<p>Often the right move is simpler and harder: <strong>redesign the task so the agent has fewer decisions to make<\/strong>. Fewer branches, fewer interpretations required, fewer opportunities to go wrong. The best DFAI move is frequently &quot;restructure the upstream input so this decision doesn&#8217;t need to be made at all&quot; \u2014 not &quot;write a cleverer prompt to handle a structural mess.&quot;<\/p>\n<p>This matters because it shifts the intervention point. Prompt optimization is local. Input redesign is systemic. Organizations that only ever optimize prompts are iterating around the edges of a design problem they have not named.<\/p>\n<h3>3. Your notes, docs, and templates are now production infrastructure.<\/h3>\n<p>The moment an LLM reads something, the quality of that thing becomes the ceiling of the output.<\/p>\n<p>Your README. Your onboarding document. Your meeting-notes-to-CRM template. Your procurement policy SOP. Your style guide. Your RFQ form. All of them are now <strong>executable substrate<\/strong>, not help files. If they are vague, your AI outputs will be vague. If they contain outdated information, your AI outputs will be confidently wrong. If they assume tribal knowledge, your AI outputs will silently omit what was never written down.<\/p>\n<p>This is the organizational consequence that most AI adoption strategies miss entirely. The bottleneck is not model capability. It is documentation quality \u2014 and documentation quality has been neglected for decades precisely because human readers could compensate for it. LLMs cannot.<\/p>\n<p>The 11,000 Baby Boomers who retire daily (Alliance for Lifetime Income, 2024) are not just taking their experience with them. 
They are making visible, for the first time, just how much organizational functioning depended on knowledge that was never written down at all.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/dfai_quadrant-1.png\" alt=\"The DFAI quadrant: implicit vs explicit, narrative vs structured\" \/><\/p>\n<p>The quadrant above plots common workplace artifacts on two axes: how explicit they are (does the document contain everything the reader needs?) versus how structured they are (is the information machine-readable?). Well-written API documentation and structured data schemas sit in the upper-right. Tribal knowledge and watercooler conversation sit in the lower-left. Most organizational documentation clusters somewhere in the middle \u2014 written for humans who could fill in the gaps, and now insufficient for machines that cannot.<\/p>\n<hr \/>\n<h2>Section 5 \u2014 Where the Analogy Breaks<\/h2>\n<p>Every analogy has limits. Here are DFAI&#8217;s:<\/p>\n<p><strong>DFMA designs for deterministic processes. LLMs are probabilistic.<\/strong> DFAI can reduce ambiguity, improve input structure, and make outputs more consistent \u2014 but it cannot eliminate non-determinism. Two identical prompts may produce different outputs. Design reduces variance; it does not eliminate it. Build your workflows accordingly: treat AI output as a high-quality first draft that requires a defined review step, not a guaranteed result.<\/p>\n<p><strong>DFMA&#8217;s improvements are durable. DFAI requires continuous re-evaluation.<\/strong> A part designed to DFMA principles in 1990 still works the same way in 2026. A prompt tuned for Claude 3.5 may behave meaningfully differently on Claude 4, GPT-5, or a fine-tuned domain model. Model versions change frequently, and optimization against a specific model can become a liability when that model is deprecated. 
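<\/p>
<p>One lightweight guard: pin a small regression suite per prompt and re-run it on every model change. A sketch, where <code>call_model<\/code> is a hypothetical stand-in for a provider client, not a real API:<\/p>

```python
# Sketch: pin a regression suite per prompt and re-run it whenever the model
# version changes. "call_model" is a hypothetical stand-in, not a real API.

def call_model(model: str, prompt: str) -> str:
    # Stub so the sketch runs; replace with a real completion call.
    return "The PO number is PO-8821." if "PO-8821" in prompt else "No PO found."

# Golden cases: (prompt, substring the answer must contain).
REGRESSION_SUITE = [
    ("Extract the PO number: 'Order confirmation for PO-8821 attached.'", "PO-8821"),
    ("Extract the PO number: 'Lunch menu for Thursday.'", "No PO"),
]

def revalidate(model: str) -> list[str]:
    """Return prompts whose behavior regressed on the given model version."""
    return [p for p, expected in REGRESSION_SUITE
            if expected not in call_model(model, p)]

failures = revalidate("model-v2")
print(failures or "suite passed")
```

<p>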
DFAI principles should be robust to model changes, but specific prompt implementations should be re-validated when models change.<\/p>\n<p><strong>Works best for structured, repeatable work.<\/strong> DFAI is powerful for high-volume, repeatable knowledge tasks: document classification, RFQ intake, report generation, ticket triage, data extraction, meeting summarization. It is weaker for genuinely ambiguous, creative, or relationship-driven work where the right output is not specifiable in advance. Know the difference before you apply the discipline.<\/p>\n<p><strong>Some rules are artifacts of current constraints.<\/strong> Chunking and context minimization are partially driven by today&#8217;s context-window and attention limitations. Those constraints may relax significantly in the next two to five years. The underlying principle \u2014 structured inputs outperform unstructured inputs \u2014 is robust. The specific implementation guidance may need updating.<\/p>\n<hr \/>\n<h2>Section 6 \u2014 The 5-Step DFAI Audit You Can Run This Week<\/h2>\n<p>Theory without a handle is a decoration. Here is how to apply DFAI immediately \u2014 two variants.<\/p>\n<h3>A. For individuals and prompt authors<\/h3>\n<ol>\n<li>\n<p><strong>Take your last 10 chat prompts.<\/strong> Count how many contain a word or phrase that has more than one reasonable interpretation. 
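<\/p>
<p>A throwaway script keeps the count honest (a sketch; the verb list is illustrative, not canonical):<\/p>

```python
# Throwaway sketch for step 1: count prompts containing vague verbs.
# The verb list is illustrative; extend it with your team's repeat offenders.
import re

VAGUE_VERBS = {"analyze", "improve", "review", "summarize", "optimize"}

def vague_hits(prompt: str) -> set[str]:
    """Return the vague verbs that appear in a prompt (case-insensitive)."""
    text = prompt.lower()
    return {v for v in VAGUE_VERBS if re.search(rf"\b{v}\b", text)}

prompts = [
    "Improve this report.",
    "Write a 3-paragraph email to the technical buyer about Order #8821.",
    "Summarize the meeting and analyze the risks.",
]

flagged = [p for p in prompts if vague_hits(p)]
print(f"{len(flagged)}/{len(prompts)} prompts contain a vague verb")
```

<p>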
Be honest: &quot;analyze,&quot; &quot;improve,&quot; &quot;review,&quot; &quot;summarize&quot; almost always fail this test.<\/p>\n<\/li>\n<li>\n<p><strong>Pick the worst offender.<\/strong> The one most likely to generate a plausible-but-wrong response because the goal was underspecified.<\/p>\n<\/li>\n<li>\n<p><strong>Rewrite it to pass the one-interpretation test.<\/strong> Add: audience, scope, constraints, output format, and at least one explicit exclusion (what the model should <em>not<\/em> do).<\/p>\n<\/li>\n<li>\n<p><strong>Add a state block at the top.<\/strong> Even for single-turn prompts: what is the current situation, what has already been done, what does &quot;done&quot; look like.<\/p>\n<\/li>\n<li>\n<p><strong>Save it as a reusable template.<\/strong> Strip out the specifics and leave the structure. You have just built a DFAI-compliant part \u2014 one that can be reused, improved, and handed to a colleague without explaining what you meant.<\/p>\n<\/li>\n<\/ol>\n<h3>B. For teams and workflow owners<\/h3>\n<ol>\n<li>\n<p><strong>Pick one high-volume process<\/strong> \u2014 RFQ intake, ticket triage, invoice coding, meeting notes \u2192 CRM, contract review. Pick the one where AI output quality is most variable.<\/p>\n<\/li>\n<li>\n<p><strong>Map every implicit decision.<\/strong> Walk through the process step by step. Every place where a human thinks &quot;obviously, I would&#8230;&quot; \u2014 that is a gap in the design. Write them all down.<\/p>\n<\/li>\n<li>\n<p><strong>Score each gap:<\/strong> Can it be written as an explicit rule? (Yes \/ No \/ Maybe.) The &quot;yes&quot; pile is your DFAI backlog.<\/p>\n<\/li>\n<li>\n<p><strong>Identify the 3 biggest nine-fastener moments<\/strong> \u2014 places where the design itself could eliminate a step, not just improve it. 
Where does the process require the LLM to make an inferential leap that good input design could eliminate entirely?<\/p>\n<\/li>\n<li>\n<p><strong>Redesign one this quarter.<\/strong> Not a transformation program. One process, one rule, one structured template. Measure the output quality difference. Then do the next one.<\/p>\n<\/li>\n<\/ol>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/dfai_audit-1.png\" alt=\"The 5-step DFAI audit\" \/><\/p>\n<hr \/>\n<details>\n<summary><strong>Show R Code<\/strong><\/summary>\n<pre><code class=\"language-r\"># =============================================================================\n# Design for AI (DFAI) \u2014 Image Generation\n# =============================================================================\n# Generates all 5 visualizations for the &quot;Design for AI&quot; (DFMA \u2192 DFAI) post.\n#\n# Required packages: ggplot2, dplyr, scales, ggrepel\n# Output: Images\/dfai_*.png (800px wide, white background)\n# =============================================================================\n\nsource(&quot;Scripts\/theme_inphronesys.R&quot;)\n\nlibrary(ggplot2)\nlibrary(dplyr)\nlibrary(scales)\n\nimg_dir &lt;- &quot;Images&quot;\n\n# =============================================================================\n# IMAGE 1: dfai_part_count.png (800x400)\n# DFMA before\/after bar chart \u2014 consumer hairdryer fastener count\n# =============================================================================\n\ndf_parts &lt;- data.frame(\n  design   = factor(\n    c(&quot;Legacy design\\n(pre-DFMA)&quot;, &quot;DFMA-redesigned&quot;),\n    levels = c(&quot;Legacy design\\n(pre-DFMA)&quot;, &quot;DFMA-redesigned&quot;)\n  ),\n  parts    = c(9, 1),\n  bar_fill = c(&quot;legacy&quot;, &quot;redesigned&quot;)\n)\n\np1 &lt;- ggplot(df_parts, aes(x = design, y = parts, fill = bar_fill)) +\n  geom_col(width = 0.5) +\n  geom_text(\n    aes(label = paste0(parts, &quot; fastener&quot;, 
ifelse(parts &gt; 1, &quot;s&quot;, &quot;&quot;))),\n    vjust = -0.45, fontface = &quot;bold&quot;, size = 6, color = iph_colors$dark\n  ) +\n  scale_fill_manual(\n    values = c(legacy = iph_colors$lightgrey, redesigned = iph_colors$blue),\n    guide  = &quot;none&quot;\n  ) +\n  scale_y_continuous(limits = c(0, 11.5), breaks = NULL) +\n  labs(\n    title    = &quot;DFMA: A Design Problem, Not a Factory Problem&quot;,\n    subtitle = &quot;Illustrative DFMA part-count reduction (Boothroyd &amp; Dewhurst)&quot;,\n    caption  = &quot;Boothroyd &amp; Dewhurst: 20\u201350% cost reduction is a design problem, not a factory problem.&quot;,\n    x = NULL, y = NULL\n  ) +\n  theme_inphronesys(grid = &quot;none&quot;)\n\nggsave(file.path(img_dir, &quot;dfai_part_count.png&quot;), p1,\n       width = 8, height = 4, dpi = 100, bg = &quot;white&quot;)\n\n\n# =============================================================================\n# IMAGE 2: dfai_funnel.png (800x500)\n# Pilot-to-production attrition funnel\n# =============================================================================\n\nsteps &lt;- c(&quot;GenAI pilots\\nlaunched&quot;, &quot;Still active\\nafter 6 months&quot;, &quot;Reach\\nproduction scale&quot;)\npcts  &lt;- c(100, 49, 4)\n\ndf_funnel &lt;- data.frame(\n  step = factor(steps, levels = rev(steps)),\n  pct  = pcts\n)\n\np2 &lt;- ggplot() +\n  geom_col(data = df_funnel, aes(y = step, x = 100),\n           fill = iph_colors$lightgrey, width = 0.55) +\n  geom_col(data = df_funnel, aes(y = step, x = pct),\n           fill = iph_colors$blue, width = 0.55) +\n  geom_text(data = df_funnel, aes(y = step, x = pct, label = paste0(pct, &quot;%&quot;)),\n            hjust = -0.25, fontface = &quot;bold&quot;, size = 5.5, color = iph_colors$dark) +\n  annotate(&quot;text&quot;, y = 2, x = 53,\n           label = &quot;Ambiguous requirements,\\nfragmented data&quot;,\n           hjust = 0, size = 3.5, color = iph_colors$grey,\n           lineheight = 1.25, 
fontface = &quot;italic&quot;) +\n  annotate(&quot;text&quot;, y = 1, x = 8,\n           label = &quot;No observability,\\nunclear ownership&quot;,\n           hjust = 0, size = 3.5, color = iph_colors$grey,\n           lineheight = 1.25, fontface = &quot;italic&quot;) +\n  scale_x_continuous(limits = c(0, 130), breaks = NULL) +\n  labs(\n    title    = &quot;The GenAI Pilot Graveyard&quot;,\n    subtitle = &quot;Of every 100 AI pilots launched, only 4 reach production scale&quot;,\n    caption  = &quot;Source: BCG AI Adoption Survey, 2024&quot;,\n    x = NULL, y = NULL\n  ) +\n  theme_inphronesys(grid = &quot;none&quot;) +\n  theme(axis.text.y = element_text(size = 12, color = iph_colors$dark))\n\nggsave(file.path(img_dir, &quot;dfai_funnel.png&quot;), p2,\n       width = 8, height = 5, dpi = 100, bg = &quot;white&quot;)\n\n\n# =============================================================================\n# IMAGE 3: dfai_principles_card.png (800x700)\n# The 7 DFAI principles comparison card\n# =============================================================================\n\nprinciples &lt;- data.frame(\n  num  = 1:7,\n  dfma = c(\n    &quot;Poka-yoke (mistake-proofing)&quot;,\n    &quot;Standardize parts &amp; processes&quot;,\n    &quot;Single assembly direction&quot;,\n    &quot;Machine-readable geometry&quot;,\n    &quot;Self-locating features&quot;,\n    &quot;Standardized fasteners&quot;,\n    &quot;Design for ease of inspection&quot;\n  ),\n  dfai = c(\n    &quot;Eliminate ambiguity&quot;,\n    &quot;Externalize tribal knowledge&quot;,\n    &quot;Chunk and label&quot;,\n    &quot;Prefer structured over narrative&quot;,\n    &quot;Make state explicit&quot;,\n    &quot;Standardize the human\\u2194agent handoff&quot;,\n    &quot;Observability first&quot;\n  )\n)\n\nrow_h &lt;- 0.1\n\np3 &lt;- ggplot() +\n  annotate(&quot;rect&quot;, xmin = 0, xmax = 1, ymin = 0.9, ymax = 1.0,\n           fill = iph_colors$blue, color = NA) +\n  annotate(&quot;text&quot;, x = 
0.5, y = 0.95,\n           label = &quot;Design for AI: The 7 Principles&quot;,\n           hjust = 0.5, vjust = 0.5, size = 7, fontface = &quot;bold&quot;, color = &quot;white&quot;) +\n  annotate(&quot;rect&quot;, xmin = 0, xmax = 0.47, ymin = 0.8, ymax = 0.9,\n           fill = iph_colors$navy, color = NA) +\n  annotate(&quot;text&quot;, x = 0.235, y = 0.85,\n           label = &quot;DFMA Principle&quot;, hjust = 0.5, vjust = 0.5,\n           size = 4.5, fontface = &quot;bold&quot;, color = &quot;white&quot;) +\n  annotate(&quot;rect&quot;, xmin = 0.47, xmax = 0.53, ymin = 0.8, ymax = 0.9,\n           fill = iph_colors$navy, color = NA) +\n  annotate(&quot;text&quot;, x = 0.5, y = 0.85, label = &quot;\\u2192&quot;,\n           hjust = 0.5, vjust = 0.5, size = 5, color = &quot;white&quot;) +\n  annotate(&quot;rect&quot;, xmin = 0.53, xmax = 1.0, ymin = 0.8, ymax = 0.9,\n           fill = iph_colors$navy, color = NA) +\n  annotate(&quot;text&quot;, x = 0.765, y = 0.85,\n           label = &quot;DFAI Rule&quot;, hjust = 0.5, vjust = 0.5,\n           size = 4.5, fontface = &quot;bold&quot;, color = &quot;white&quot;)\n\nfor (i in 1:7) {\n  row_top    &lt;- 1.0 - (i + 1) * row_h\n  row_bottom &lt;- row_top - row_h\n  row_mid    &lt;- (row_top + row_bottom) \/ 2\n  row_fill   &lt;- if (i %% 2 == 0) iph_colors$lightgrey else &quot;#ffffff&quot;\n\n  p3 &lt;- p3 +\n    annotate(&quot;rect&quot;, xmin = 0, xmax = 1,\n             ymin = row_bottom, ymax = row_top, fill = row_fill, color = NA) +\n    annotate(&quot;point&quot;, x = 0.04, y = row_mid, size = 8,\n             color = iph_colors$blue, shape = 16) +\n    annotate(&quot;text&quot;, x = 0.04, y = row_mid,\n             label = as.character(principles$num[i]),\n             hjust = 0.5, vjust = 0.5, size = 3.8,\n             fontface = &quot;bold&quot;, color = &quot;white&quot;) +\n    annotate(&quot;text&quot;, x = 0.10, y = row_mid,\n             label = principles$dfma[i], hjust = 0, vjust = 0.5,\n           
  size = 3.8, color = iph_colors$dark) +\n    annotate(&quot;text&quot;, x = 0.5, y = row_mid, label = &quot;\\u2192&quot;,\n             hjust = 0.5, vjust = 0.5, size = 4.5, color = iph_colors$grey) +\n    annotate(&quot;text&quot;, x = 0.56, y = row_mid,\n             label = principles$dfai[i], hjust = 0, vjust = 0.5,\n             size = 3.8, fontface = &quot;bold&quot;, color = iph_colors$blue)\n}\n\np3 &lt;- p3 +\n  annotate(&quot;rect&quot;, xmin = 0, xmax = 1, ymin = 0, ymax = 0.1,\n           fill = iph_colors$lightgrey, color = NA) +\n  annotate(&quot;text&quot;, x = 0.5, y = 0.05,\n           label = &quot;inphronesys.com  \\u00b7  2026&quot;,\n           hjust = 0.5, vjust = 0.5, size = 3.5, color = iph_colors$grey) +\n  coord_cartesian(xlim = c(0, 1), ylim = c(0, 1), expand = FALSE) +\n  theme_void()\n\nggsave(file.path(img_dir, &quot;dfai_principles_card.png&quot;), p3,\n       width = 8, height = 7, dpi = 100, bg = &quot;white&quot;)\n\n\n# =============================================================================\n# IMAGE 4: dfai_quadrant.png (800x500)\n# 2x2 AI-readiness quadrant\n# =============================================================================\n\nlibrary(ggrepel)\n\ndf_quad &lt;- data.frame(\n  label      = c(\n    &quot;Slack message&quot;, &quot;Hallway conversation&quot;, &quot;Meeting minutes&quot;, &quot;Email thread&quot;,\n    &quot;README \/ SOP&quot;, &quot;Well-written\\nAPI docs&quot;, &quot;Master data\\nrecord&quot;,\n    &quot;JSON schema \/\\nOpenAPI spec&quot;\n  ),\n  structured = c(18, 5,  38, 28, 65, 85, 80, 95),\n  explicit   = c(22, 5,  45, 32, 63, 90, 88, 95),\n  priority   = c(FALSE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE)\n)\n\np4 &lt;- ggplot(df_quad, aes(x = structured, y = explicit)) +\n  annotate(&quot;rect&quot;, xmin = 50, xmax = 100, ymin = 50, ymax = 100,\n           fill = iph_colors$blue, alpha = 0.08) +\n  annotate(&quot;rect&quot;, xmin = 0, xmax = 50, ymin = 0, ymax = 50,\n           
fill = iph_colors$lightgrey, alpha = 0.6) +\n  annotate(&quot;text&quot;, x = 75, y = 97, label = &quot;AI-ready&quot;,\n           hjust = 0.5, vjust = 1, size = 5, fontface = &quot;bold&quot;,\n           color = iph_colors$blue) +\n  annotate(&quot;text&quot;, x = 25, y = 3, label = &quot;AI-hostile&quot;,\n           hjust = 0.5, vjust = 0, size = 5, fontface = &quot;bold&quot;,\n           color = iph_colors$grey) +\n  geom_vline(xintercept = 50, linetype = &quot;dashed&quot;,\n             color = iph_colors$grey, linewidth = 0.5, alpha = 0.7) +\n  geom_hline(yintercept = 50, linetype = &quot;dashed&quot;,\n             color = iph_colors$grey, linewidth = 0.5, alpha = 0.7) +\n  geom_point(aes(color = priority, size = priority), alpha = 0.9) +\n  scale_color_manual(\n    values = c(&quot;FALSE&quot; = iph_colors$navy, &quot;TRUE&quot; = iph_colors$blue),\n    guide  = &quot;none&quot;\n  ) +\n  scale_size_manual(values = c(&quot;FALSE&quot; = 3, &quot;TRUE&quot; = 4), guide = &quot;none&quot;) +\n  geom_text_repel(aes(label = label), size = 3.4, color = iph_colors$dark,\n                  segment.color = iph_colors$grey, segment.size = 0.3,\n                  max.overlaps = 15, box.padding = 0.5, seed = 42,\n                  lineheight = 0.9) +\n  scale_x_continuous(limits = c(0, 100), breaks = c(8, 92),\n                     labels = c(&quot;Narrative&quot;, &quot;Structured&quot;)) +\n  scale_y_continuous(limits = c(0, 100), breaks = c(8, 92),\n                     labels = c(&quot;Implicit&quot;, &quot;Explicit&quot;)) +\n  labs(\n    title    = &quot;How AI-Ready Is Your Information?&quot;,\n    subtitle = &quot;Documents and artifacts mapped by structure and explicitness&quot;,\n    x = NULL, y = NULL\n  ) +\n  theme_inphronesys(grid = &quot;none&quot;) +\n  theme(\n    axis.text.x = element_text(size = 12, color = iph_colors$dark, face = &quot;bold&quot;),\n    axis.text.y = element_text(size = 12, color = iph_colors$dark, face = &quot;bold&quot;)\n  
)\n\nggsave(file.path(img_dir, &quot;dfai_quadrant.png&quot;), p4,\n       width = 8, height = 5, dpi = 100, bg = &quot;white&quot;)\n\n\n# =============================================================================\n# IMAGE 5: dfai_audit.png (800x500)\n# 5-step DFAI audit \u2014 two columns (individuals + teams)\n# =============================================================================\n\nind_steps &lt;- c(\n  &quot;List your last\\n10 prompts&quot;,\n  &quot;Find the most\\nambiguous&quot;,\n  &quot;Rewrite: scope +\\nformat + exclusions&quot;,\n  &quot;Move implicit\\ncontext \\u2192 explicit&quot;,\n  &quot;Save as\\ntemplate&quot;\n)\n\nteam_steps &lt;- c(\n  &quot;Pick one high-\\nvolume process&quot;,\n  &quot;List implicit\\ndecisions&quot;,\n  &quot;Score: rule-able?\\ny \/ n \/ maybe&quot;,\n  &quot;Find 3 '9-fastener\\nmoments'&quot;,\n  &quot;Redesign one\\nthis quarter&quot;\n)\n\nn_steps &lt;- 5\ncx_ind  &lt;- 0.165; tx_ind  &lt;- 0.215\ncx_team &lt;- 0.665; tx_team &lt;- 0.715\ntop_y   &lt;- 0.695; step_h  &lt;- 0.135\n\np5 &lt;- ggplot() +\n  annotate(&quot;rect&quot;, xmin = 0, xmax = 1, ymin = 0.9, ymax = 1.0,\n           fill = iph_colors$blue, color = NA) +\n  annotate(&quot;text&quot;, x = 0.5, y = 0.95,\n           label = &quot;Run this DFAI audit this week&quot;,\n           hjust = 0.5, vjust = 0.5, size = 6.5, fontface = &quot;bold&quot;, color = &quot;white&quot;) +\n  annotate(&quot;rect&quot;, xmin = 0.02, xmax = 0.48, ymin = 0.82, ymax = 0.90,\n           fill = iph_colors$navy, color = NA) +\n  annotate(&quot;text&quot;, x = 0.25, y = 0.86, label = &quot;For individuals&quot;,\n           hjust = 0.5, vjust = 0.5, size = 4.5, fontface = &quot;bold&quot;, color = &quot;white&quot;) +\n  annotate(&quot;rect&quot;, xmin = 0.52, xmax = 0.98, ymin = 0.82, ymax = 0.90,\n           fill = iph_colors$navy, color = NA) +\n  annotate(&quot;text&quot;, x = 0.75, y = 0.86, label = &quot;For teams&quot;,\n           hjust = 0.5, vjust = 
0.5, size = 4.5, fontface = &quot;bold&quot;, color = &quot;white&quot;) +\n  annotate(&quot;rect&quot;, xmin = 0.02, xmax = 0.48, ymin = 0.04, ymax = 0.82,\n           fill = iph_colors$light, color = NA) +\n  annotate(&quot;rect&quot;, xmin = 0.52, xmax = 0.98, ymin = 0.04, ymax = 0.82,\n           fill = iph_colors$light, color = NA)\n\nfor (i in seq_len(n_steps)) {\n  y_pos &lt;- top_y - (i - 1) * step_h\n  p5 &lt;- p5 +\n    annotate(&quot;point&quot;, x = cx_ind,  y = y_pos, size = 9,\n             color = iph_colors$blue, shape = 16) +\n    annotate(&quot;text&quot;,  x = cx_ind,  y = y_pos, label = as.character(i),\n             hjust = 0.5, vjust = 0.5, size = 4, fontface = &quot;bold&quot;, color = &quot;white&quot;) +\n    annotate(&quot;text&quot;,  x = tx_ind,  y = y_pos, label = ind_steps[i],\n             hjust = 0, vjust = 0.5, size = 3.4,\n             color = iph_colors$dark, lineheight = 1.15) +\n    annotate(&quot;point&quot;, x = cx_team, y = y_pos, size = 9,\n             color = iph_colors$blue, shape = 16) +\n    annotate(&quot;text&quot;,  x = cx_team, y = y_pos, label = as.character(i),\n             hjust = 0.5, vjust = 0.5, size = 4, fontface = &quot;bold&quot;, color = &quot;white&quot;) +\n    annotate(&quot;text&quot;,  x = tx_team, y = y_pos, label = team_steps[i],\n             hjust = 0, vjust = 0.5, size = 3.4,\n             color = iph_colors$dark, lineheight = 1.15)\n\n  if (i &lt; n_steps) {\n    y_from &lt;- y_pos - 0.04; y_to &lt;- y_pos - step_h + 0.04\n    p5 &lt;- p5 +\n      annotate(&quot;segment&quot;, x = cx_ind,  xend = cx_ind,  y = y_from, yend = y_to,\n               arrow = arrow(length = unit(0.1, &quot;cm&quot;), type = &quot;closed&quot;),\n               color = iph_colors$grey, linewidth = 0.4) +\n      annotate(&quot;segment&quot;, x = cx_team, xend = cx_team, y = y_from, yend = y_to,\n               arrow = arrow(length = unit(0.1, &quot;cm&quot;), type = &quot;closed&quot;),\n               color = 
iph_colors$grey, linewidth = 0.4)\n  }\n}\n\np5 &lt;- p5 +\n  coord_cartesian(xlim = c(0, 1), ylim = c(0, 1), expand = FALSE) +\n  theme_void()\n\nggsave(file.path(img_dir, &quot;dfai_audit.png&quot;), p5,\n       width = 8, height = 5, dpi = 100, bg = &quot;white&quot;)\n\ncat(&quot;\\nAll 5 DFAI images generated successfully.\\n&quot;)\n<\/code><\/pre>\n<\/details>\n<hr \/>\n<h2>Interactive Dashboard<\/h2>\n<p>Assess where you stand on all 7 DFAI principles \u2014 get your weighted Readiness Score (0\u2013100) and targeted recommendations for your two weakest areas.<\/p>\n<div class=\"dashboard-link\" style=\"margin:2em 0; padding:1.5em; background:#f8f9fa; border-left:4px solid #0073aa; border-radius:4px;\">\n<p style=\"margin:0 0 0.5em 0; font-size:1.1em;\"><strong>Interactive Dashboard<\/strong><\/p>\n<p style=\"margin:0 0 1em 0;\">Explore the data yourself \u2014 adjust parameters and see the results update in real time.<\/p>\n<p><a href=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/2026-04-11_Design_For_AI_dashboard-1.html\" target=\"_blank\" style=\"display:inline-block; padding:0.6em 1.2em; background:#0073aa; color:#fff; text-decoration:none; border-radius:4px; font-weight:bold;\">Open Interactive Dashboard &rarr;<\/a><\/p>\n<\/div>\n<hr \/>\n<h2>References<\/h2>\n<ol>\n<li>Boothroyd, G., &amp; Dewhurst, P. (1994). <em>Product Design for Manufacture and Assembly<\/em>. Marcel Dekker.<\/li>\n<li>Dewhurst, P., &amp; Boothroyd, G. (1987). Design for Assembly in Action. <em>Assembly Engineering<\/em>, 30(1), 64\u201368.<\/li>\n<li>Anthropic. (2025). <em>Effective Context Engineering for AI Agents<\/em>. Anthropic Engineering Blog.<\/li>\n<li>Anthropic. (2024). <em>Building Effective AI Agents<\/em>. Anthropic Research.<\/li>\n<li>Howard, J. (2024). <em>llms.txt: A proposal<\/em>. Answer.AI. [llmstxt.org]<\/li>\n<li>Weng, L. (2023). <em>Prompt Engineering<\/em>. Lil&#8217;Log. [lilianweng.github.io]<\/li>\n<li>arXiv 2505.11679 (2025). 
Hu et al., <em>Ambiguity in LLMs is a concept missing problem<\/em>.<\/li>\n<li>arXiv 2512.12791 (2024). <em>Beyond Task Completion: An Assessment Framework for Evaluating Agentic AI Systems<\/em>.<\/li>\n<li>Deloitte. (2024\u201325). <em>Procurement Data Quality Standards for AI Adoption<\/em>.<\/li>\n<li>Gartner (via Lucid.co, 2025). Organizations will abandon 60% of AI projects through 2026 without AI-ready data.<\/li>\n<li>Lucid.co. (2025). <em>AI Readiness Report 2025<\/em>. [lucid.co\/ai-readiness]<\/li>\n<li>BCG. (2024). <em>Where&#8217;s the Value in AI?<\/em> Only 4% of companies have developed AI capabilities to consistently generate significant value.<\/li>\n<li>Klein, G., &amp; Hoffman, R. (2025). Human-AI teaming: interface consistency and collaborative performance.<\/li>\n<li>Fern. (2026). <em>How to write LLM-friendly documentation<\/em>.<\/li>\n<li>Alliance for Lifetime Income. (2024). <em>Peak 65: 11,200 Americans turn 65 per day through 2027<\/em>. [protectedincome.org\/peak65]<\/li>\n<\/ol>\n<p><strong>Related posts on inphronesys.com:<\/strong><\/p>\n<ul>\n<li><a href=\"https:\/\/inphronesys.com\">Prompting Just Split Into 4 Different Skills \u2014 Here&#8217;s How to Master Each One<\/a> \u2014 for the taxonomy of prompting disciplines (Prompt Craft, Context Engineering, Intent Engineering, Specification Engineering)<\/li>\n<li><a href=\"https:\/\/inphronesys.com\">Anthropic Just Killed One of Its Own Prompting Tricks<\/a> \u2014 for the 2020\u20132026 history of prompting techniques<\/li>\n<li><a href=\"https:\/\/inphronesys.com\">The $50,000 Prompt: How McKinsey Frameworks Turn AI Into Your Best Supply Chain Consultant<\/a> \u2014 for using structured consulting frameworks to improve prompt quality<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Forty years ago, Boothroyd &#038; Dewhurst showed that manufacturing cost was a design problem. Today, bad prompts and failing AI pilots are the same kind of mistake. 
Here are 7 principles for designing work that LLMs can actually execute.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[139,269],"tags":[272,274,158,275,156,270,271,273,60,157],"class_list":["post-1811","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence","category-strategy","tag-agentic-ai","tag-ai-adoption","tag-ai-strategy","tag-boothroyd-dewhurst","tag-context-engineering","tag-design-for-ai","tag-dfma","tag-knowledge-work","tag-llm","tag-prompt-engineering-2"],"_links":{"self":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1811","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1811"}],"version-history":[{"count":1,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1811\/revisions"}],"predecessor-version":[{"id":1813,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1811\/revisions\/1813"}],"wp:attachment":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1811"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1811"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1811"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}