{"id":1268,"date":"2026-03-03T22:44:13","date_gmt":"2026-03-03T22:44:13","guid":{"rendered":"https:\/\/inphronesys.com\/?p=1268"},"modified":"2026-03-03T22:44:13","modified_gmt":"2026-03-03T22:44:13","slug":"the-experience-curve-the-most-powerful-cost-model-youre-probably-not-using","status":"publish","type":"post","link":"https:\/\/inphronesys.com\/?p=1268","title":{"rendered":"The Experience Curve: The Most Powerful Cost Model You&#8217;re Probably Not Using"},"content":{"rendered":"<h2>The Price Gap Nobody Noticed<\/h2>\n<p>Anna Berger, senior procurement manager at an automotive tier-1 supplier in Stuttgart, was preparing for her annual negotiation with ProSense GmbH, a mid-size manufacturer of industrial IoT sensors. ProSense had been supplying vibration sensors for predictive maintenance systems since 2018. Back then, Anna negotiated a unit price of \u20ac138 for an initial order of 2,000 units per year.<\/p>\n<p>Eight years later, ProSense had scaled up substantially. Based on trade data and ProSense&#8217;s own growth announcements, Anna estimated their cumulative sensor production had more than tripled since she began buying \u2014 close to two doublings&#8217; worth of accumulated manufacturing experience. Production lines were mature, yields were high, the workforce was experienced, and material purchasing volumes had grown significantly.<\/p>\n<p>Yet the price Anna was paying? \u20ac118. A decline of just 14% over eight years.<\/p>\n<p>Anna pulled up a spreadsheet and did a rough calculation. If ProSense&#8217;s costs followed a standard 80% experience curve \u2014 meaning costs drop 20% every time cumulative production doubles \u2014 close to two doublings should have reduced costs to roughly two-thirds of their starting level. Applied to her \u20ac138 baseline, that suggested a fair current price closer to \u20ac92. That&#8217;s a gap of \u20ac26 per unit. 
Across her company&#8217;s total vibration sensor spend of 64,000 units per year from four suppliers with similar growth profiles, that kind of gap added up to <strong>\u20ac1.66 million per year<\/strong> left on the table.<\/p>\n<p>She wasn&#8217;t accusing ProSense of fraud. They were probably reinvesting in R&amp;D, absorbing some cost reductions into margin, and genuinely improving the product. But she now had a data-driven starting point for the conversation \u2014 not a vague &quot;we&#8217;d like a better price,&quot; but a specific, defensible cost trajectory based on decades of empirical research.<\/p>\n<p>That starting point is the experience curve. And if you&#8217;re not using it, you&#8217;re negotiating blind.<\/p>\n<h2>From Paper Airplanes to Strategic Weapons<\/h2>\n<p>The idea that costs decline with experience is older than most people think. In 1936, Theodore Paul Wright, an aeronautical engineer at Curtiss-Wright Corporation, published a study of aircraft manufacturing costs. He found that every time cumulative aircraft production doubled, the direct labor hours per unit dropped by a consistent percentage \u2014 typically around 20%. The pattern held across multiple aircraft types and factories.<\/p>\n<p>Wright&#8217;s discovery was primarily about <em>learning<\/em> \u2014 workers getting faster at repetitive assembly tasks. It was useful for military procurement during World War II (the U.S. government used it to negotiate bomber contracts), but it stayed confined to direct labor in manufacturing.<\/p>\n<p>Three decades later, Bruce Henderson and the Boston Consulting Group blew the concept wide open. In the mid-1960s, BCG analyzed cost data across dozens of industries and found that the decline wasn&#8217;t limited to direct labor. <em>Total<\/em> unit costs \u2014 including materials, overhead, distribution, administration \u2014 declined by 20-30% every time cumulative industry output doubled. 
They called this the &quot;experience curve&quot; to distinguish it from Wright&#8217;s narrower &quot;learning curve.&quot;<\/p>\n<p>Henderson turned it into a strategic weapon. BCG argued that market share was destiny: the company with the most cumulative production would have the lowest costs, the highest margins, and an insurmountable competitive advantage. This thinking drove the growth-at-all-costs strategies of the 1970s and 1980s. Companies acquired competitors, cut prices below cost to gain volume, and bet everything on riding the experience curve down faster than rivals.<\/p>\n<p>Then the concept fell out of fashion. Critics pointed out that BCG&#8217;s advice had led companies into price wars, overexpansion, and neglect of innovation. The experience curve couldn&#8217;t explain why IBM lost to smaller, nimbler PC manufacturers, or why high-market-share companies sometimes had <em>lower<\/em> margins than focused niche players.<\/p>\n<p>But here&#8217;s the thing: the underlying empirical relationship never went away. Costs <em>do<\/em> decline with cumulative output. The pattern has been documented in semiconductors (Moore&#8217;s Law is an experience curve), solar panels (a breathtaking 99% cost reduction since 1976), lithium-ion batteries, wind turbines, genome sequencing, and hundreds of manufactured products. What was flawed was the <em>strategic advice<\/em> built on top of it, not the cost model itself.<\/p>\n<p>With modern data science tools, the experience curve is coming back \u2014 not as a grand corporate strategy, but as a precise analytical tool for procurement, cost forecasting, and supplier management. That&#8217;s how Anna Berger used it, and that&#8217;s how we&#8217;ll use it here.<\/p>\n<h2>Learning Curve vs. Experience Curve: They&#8217;re Not the Same Thing<\/h2>\n<p>These two terms get used interchangeably, and that&#8217;s a problem because they measure different things. 
If you confuse them, you&#8217;ll underestimate cost reduction potential.<\/p>\n<table style=\"width:100%;border-collapse:collapse;margin:1.5em 0;font-size:0.95em\">\n<thead>\n<tr style=\"background:#2980b9;color:#fff\">\n<th style=\"padding:10px 14px;text-align:left;border:1px solid #ddd\"><\/th>\n<th style=\"padding:10px 14px;text-align:left;border:1px solid #ddd\">Learning Curve<\/th>\n<th style=\"padding:10px 14px;text-align:left;border:1px solid #ddd\">Experience Curve<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr style=\"background:#f8fafc\">\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\"><strong>Scope<\/strong><\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">Direct labor hours on a single task<\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">All costs: labor, materials, overhead, distribution<\/td>\n<\/tr>\n<tr>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\"><strong>Driver<\/strong><\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">Individual practice and repetition<\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">Cumulative organizational\/industry output<\/td>\n<\/tr>\n<tr style=\"background:#f8fafc\">\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\"><strong>Typical slope<\/strong><\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">75\u201385% (labor-intensive assembly)<\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">70\u201385% (varies by industry and cost structure)<\/td>\n<\/tr>\n<tr>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\"><strong>Unit of analysis<\/strong><\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">A single worker or workstation<\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">An entire product, factory, or industry<\/td>\n<\/tr>\n<tr style=\"background:#f8fafc\">\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\"><strong>First documented<\/strong><\/td>\n<td style=\"padding:8px 
14px;border:1px solid #e2e8f0\">Wright (1936), aircraft assembly<\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">Henderson\/BCG (1968), cross-industry analysis<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>The learning curve says: &quot;Workers get faster at doing the same thing.&quot; That&#8217;s true, and it matters \u2014 but it&#8217;s only one piece of the puzzle.<\/p>\n<p>The experience curve says: &quot;Everything gets cheaper \u2014 labor, materials procurement, process engineering, quality control, logistics, overhead allocation \u2014 as cumulative volume grows.&quot; The mechanisms include:<\/p>\n<ul>\n<li><strong>Labor learning<\/strong> (Wright&#8217;s original insight)<\/li>\n<li><strong>Process improvements<\/strong> \u2014 equipment upgrades, layout optimization, automation<\/li>\n<li><strong>Purchasing leverage<\/strong> \u2014 bulk discounts on materials as volumes grow<\/li>\n<li><strong>Standardization<\/strong> \u2014 reducing product variants, simplifying designs<\/li>\n<li><strong>Scale effects<\/strong> \u2014 fixed costs spread across more units<\/li>\n<li><strong>Technology adoption<\/strong> \u2014 newer, cheaper production technologies become justified at higher volumes<\/li>\n<\/ul>\n<p>When a procurement team uses a learning curve (75-85% slope) to estimate a supplier&#8217;s cost trajectory but the real driver is broader experience effects, they&#8217;ll predict too little cost reduction. 
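<\/p>
<p>A quick numerical sketch makes that gap concrete. The cost shares and per-component learning rates below follow the illustrative ProSense figures used later in this post; the three-doubling horizon is an assumption for this example.<\/p>

```r
# Illustrative sketch: why a labor-only learning model underestimates savings.
# Cost shares and per-component learning rates follow the ProSense example
# used later in this post (assumed figures, per EUR 100 of initial cost).
shares <- c(materials = 55, labor = 30, overhead = 15)
rates  <- c(materials = 0.85, labor = 0.75, overhead = 0.80)
doublings <- 3   # assumed horizon for this example

# Learning-curve view: only direct labor gets cheaper
labor_only <- unname(shares["materials"] +
                     shares["labor"] * rates["labor"]^doublings +
                     shares["overhead"])

# Experience-curve view: every component declines at its own rate
experience <- sum(shares * rates^doublings)

round(c(labor_only, experience), 1)   # about 82.7 vs 54.1 per 100 initial
```

<p>Under these assumptions, the labor-only model predicts costs falling to only about 83% of the starting level after three doublings, while the full experience model predicts about 54%.<\/p>
<p>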
The experience curve \u2014 encompassing all cost elements \u2014 is the right tool for strategic sourcing.<\/p>\n<h2>The Math That Matters<\/h2>\n<p>The experience curve follows a power law:<\/p>\n<p><strong>C(x) = C\u2081 \u00b7 x^b<\/strong><\/p>\n<p>Where:<\/p>\n<ul>\n<li><strong>C(x)<\/strong> = cost per unit when cumulative production is x<\/li>\n<li><strong>C\u2081<\/strong> = cost of the first unit<\/li>\n<li><strong>x<\/strong> = cumulative production volume<\/li>\n<li><strong>b<\/strong> = learning exponent (negative, since costs decline)<\/li>\n<\/ul>\n<p>The learning exponent <em>b<\/em> relates to the <strong>learning rate<\/strong> (also called the slope) by:<\/p>\n<p><strong>b = log(learning_rate) \/ log(2)<\/strong><\/p>\n<p>An 80% learning rate means costs fall to 80% of their previous level every time cumulative volume doubles. The exponent <em>b<\/em> for an 80% curve is:<\/p>\n<p><strong>b = log(0.80) \/ log(2) = -0.322<\/strong><\/p>\n<p>The beauty of this model is the log-log transformation. Take the natural log of both sides:<\/p>\n<p><strong>ln(C) = ln(C\u2081) + b \u00b7 ln(x)<\/strong><\/p>\n<p>This is a straight line in log-log space: the intercept is ln(C\u2081) and the slope is <em>b<\/em>. That means fitting an experience curve is just linear regression on log-transformed data. Any spreadsheet or statistical tool can do it.<\/p>\n<h3>Worked Example: ProSense GmbH<\/h3>\n<p>Let&#8217;s make this concrete with ProSense&#8217;s sensor production. Here are the key facts:<\/p>\n<ul>\n<li>Started production in Q1 2018 at 500 units\/quarter<\/li>\n<li>Grew to 4,000 units\/quarter by Q4 2025<\/li>\n<li>Unit cost at 500 cumulative units: \u20ac145<\/li>\n<li>32 quarters of production data<\/li>\n<li>Cumulative production by Q4 2025: approximately 55,000 units<\/li>\n<\/ul>\n<p>If ProSense follows an 80% experience curve, we can predict unit costs at any cumulative volume. 
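<\/p>
<p>That prediction is a one-line calculation. The sketch below assumes the anchor point from the text (\u20ac145 at 500 cumulative units) and an 80% learning rate:<\/p>

```r
# Predict unit cost at cumulative volume x on an experience curve,
# anchored at a known point (EUR 145 at 500 cumulative units, from the text).
cost_at <- function(x, anchor_cost = 145, anchor_vol = 500, learning_rate = 0.80) {
  b <- log(learning_rate) / log(2)   # learning exponent, about -0.322
  anchor_cost * (x / anchor_vol)^b
}

round(cost_at(c(1000, 2000, 64000)))   # 116  93  30
```

<p>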
The first doubling \u2014 from 500 to 1,000 cumulative units \u2014 should bring costs from \u20ac145 to \u20ac116. The next \u2014 1,000 to 2,000 \u2014 to \u20ac93. Then \u20ac74, \u20ac59, and so on. By 64,000 cumulative units (approximately 7 doublings from 500), the predicted cost is:<\/p>\n<p><strong>C(64,000) = 145 \u00d7 (64,000 \/ 500)^(-0.322) = 145 \u00d7 0.21 \u2248 \u20ac30<\/strong><\/p>\n<p>Or more intuitively: 7 doublings at 80% per doubling = 0.80^7 = 0.21. So predicted cost = 145 \u00d7 0.21 = <strong>~\u20ac30<\/strong>.<\/p>\n<p>Wait \u2014 that seems too low. And it would be, if ProSense were a pure assembly operation. But sensors require expensive electronic components (MEMS accelerometers, signal processing ICs) whose costs decline on a shallower curve. This is why component decomposition matters, which we&#8217;ll get to in a moment.<\/p>\n<p>In practice, fitting the actual data with regression tells us the <em>real<\/em> learning rate rather than assuming one. That&#8217;s what the R code does.<\/p>\n<h2>Fitting Experience Curves with R<\/h2>\n<p>Let&#8217;s walk through the process step by step, using ProSense GmbH&#8217;s simulated production data.<\/p>\n<p><strong>Step 1: Prepare the data.<\/strong> We need quarterly cumulative production and unit cost. The R script generates realistic data following an 80% experience curve with noise \u2014 because real-world data is never perfectly smooth.<\/p>\n<p><strong>Step 2: Log-log scatter plot.<\/strong> Plot ln(cumulative_production) on the x-axis and ln(unit_cost) on the y-axis. If the experience curve holds, you&#8217;ll see a clear linear relationship.<\/p>\n<p><strong>Step 3: Fit the regression.<\/strong> Run <code>lm(log(unit_cost) ~ log(cum_production))<\/code>. The slope coefficient is your learning exponent <em>b<\/em>. 
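<\/p>
<p>Steps 1 through 3 fit in a few lines. This sketch uses synthetic, noise-free data generated from an exact 80% curve, so the regression recovers the learning rate perfectly; real data will scatter around the line:<\/p>

```r
# Fit an experience curve by linear regression in log-log space.
# Synthetic, noise-free data from an exact 80% curve (for illustration),
# so the fit recovers the true learning rate.
cum_prod  <- 500 * 2^(0:7)                 # 500 up to 64,000 cumulative units
unit_cost <- 145 * (cum_prod / 500)^(log(0.80) / log(2))

fit   <- lm(log(unit_cost) ~ log(cum_prod))
b_hat <- unname(coef(fit)[2])              # fitted learning exponent
learning_rate <- 2^b_hat

c(b = round(b_hat, 3), rate = round(learning_rate, 3))   # -0.322, 0.8
```

<p>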
Convert it to a learning rate with <code>2^b<\/code>.<\/p>\n<p><strong>Step 4: Interpret the results.<\/strong> Check R-squared (should be above 0.90 for a good fit), confidence intervals on the slope, and residual patterns. A learning rate of 0.80 with tight confidence intervals means you can confidently predict future costs.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/03\/exp_curve_linear-1.png\" alt=\"Cost vs cumulative production on linear scale\" \/><\/p>\n<p>The linear-scale chart shows the classic concave curve \u2014 steep cost declines early, flattening as volume grows. But the shape can be misleading because it&#8217;s hard to tell whether the relationship is truly a power law or some other functional form.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/03\/exp_curve_loglog-1.png\" alt=\"Log-log regression showing experience curve fit\" \/><\/p>\n<p>The log-log plot resolves this immediately. A straight line in log-log space confirms the power law. The slope gives us the learning exponent directly, and the regression line lets us extrapolate costs to future volumes.<\/p>\n<p>For ProSense, our regression yields:<\/p>\n<ul>\n<li><strong>Learning exponent (b):<\/strong> -0.318 (95% CI: -0.340 to -0.296)<\/li>\n<li><strong>Learning rate:<\/strong> 80.2% (meaning costs drop ~20% per doubling)<\/li>\n<li><strong>R-squared:<\/strong> 0.97<\/li>\n<li><strong>First-unit cost (C\u2081):<\/strong> approximately \u20ac1,070<\/li>\n<\/ul>\n<p>The high R-squared and tight confidence intervals tell us the experience curve is a strong fit for ProSense&#8217;s cost data. The estimated learning rate of 80.2% is right in the typical range for electronic components manufacturing.<\/p>\n<h2>Not All Costs Learn Equally: Component Decomposition<\/h2>\n<p>Here&#8217;s where most experience curve analyses go wrong: they treat the total unit cost as a monolith. 
In reality, different cost components decline at different rates, and understanding this decomposition is crucial for realistic forecasting.<\/p>\n<p>Consider ProSense&#8217;s sensor unit cost breakdown at the start of production:<\/p>\n<table style=\"width:100%;border-collapse:collapse;margin:1.5em 0;font-size:0.95em\">\n<thead>\n<tr style=\"background:#2980b9;color:#fff\">\n<th style=\"padding:10px 14px;text-align:left;border:1px solid #ddd\">Component<\/th>\n<th style=\"padding:10px 14px;text-align:center;border:1px solid #ddd\">Share<\/th>\n<th style=\"padding:10px 14px;text-align:center;border:1px solid #ddd\">Learning Rate<\/th>\n<th style=\"padding:10px 14px;text-align:left;border:1px solid #ddd\">Why<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr style=\"background:#f8fafc\">\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\"><strong>Materials<\/strong> (MEMS, ICs, PCBs)<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">55%<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">85%<\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">Commodity-ish; limited by supplier&#8217;s own curve<\/td>\n<\/tr>\n<tr>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\"><strong>Direct Labor<\/strong> (assembly, testing)<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">30%<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">75%<\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">Classic Wright learning; high improvement potential<\/td>\n<\/tr>\n<tr style=\"background:#f8fafc\">\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\"><strong>Overhead<\/strong> (equipment, facilities, quality)<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">15%<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">80%<\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">Utilization 
improves; fixed costs spread over more units<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Materials have the shallowest curve because they&#8217;re largely purchased components \u2014 your learning rate is bounded by your supplier&#8217;s learning rate (and their willingness to share those savings). Direct labor has the steepest curve because human learning effects are powerful and well-documented.<\/p>\n<p>This explains why labor-intensive products (apparel, furniture, manual assembly) typically show steeper experience curves (75-80%) than material-intensive products (electronics, chemicals) where curves are shallower (85-90%).<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/03\/exp_cost_components-1.png\" alt=\"Component experience curves on log-log scale\" \/><\/p>\n<p>The component plot reveals what a single aggregate curve hides: over ProSense&#8217;s full production history, labor costs have fallen by more than 80%, while materials costs have declined by roughly two-thirds. The aggregate curve sits somewhere in the weighted middle.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/03\/exp_component_waterfall-1.png\" alt=\"Cost breakdown waterfall at low vs high cumulative volume\" \/><\/p>\n<p>The waterfall chart makes the strategic implications clear. At low cumulative volume, ProSense&#8217;s cost structure was roughly 55% materials, 30% labor, 15% overhead. At high cumulative volume, the mix has shifted even further toward materials (now over 60% of total) because materials declined less than labor and overhead. Future cost reduction efforts should focus on materials \u2014 negotiating with component suppliers, redesigning for cheaper parts, or vertically integrating.<\/p>\n<h2>Strategic Applications for Supply Chain<\/h2>\n<p>The experience curve isn&#8217;t just an academic exercise. 
It&#8217;s a practical tool for at least four critical supply chain decisions.<\/p>\n<h3>Procurement: Know What Your Supplier&#8217;s Costs <em>Should<\/em> Be<\/h3>\n<p>This is Anna Berger&#8217;s use case, and it&#8217;s the most immediately actionable. If you know (or can estimate) your supplier&#8217;s cumulative production volume and learning rate, you can calculate what their costs should be \u2014 independent of what they&#8217;re charging you.<\/p>\n<table style=\"width:100%;border-collapse:collapse;margin:1.5em 0;font-size:0.95em\">\n<thead>\n<tr style=\"background:#2980b9;color:#fff\">\n<th style=\"padding:10px 14px;text-align:center;border:1px solid #ddd\">Year<\/th>\n<th style=\"padding:10px 14px;text-align:center;border:1px solid #ddd\">Supplier&#8217;s Price<\/th>\n<th style=\"padding:10px 14px;text-align:center;border:1px solid #ddd\">Curve Prediction<\/th>\n<th style=\"padding:10px 14px;text-align:center;border:1px solid #ddd\">Gap<\/th>\n<th style=\"padding:10px 14px;text-align:left;border:1px solid #ddd\">Your Opportunity<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr style=\"background:#f8fafc\">\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">2018<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">\u20ac138<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">\u20ac138<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">\u20ac0<\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">Baseline \u2014 fair price at launch<\/td>\n<\/tr>\n<tr>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">2020<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">\u20ac131<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">\u20ac121<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0;color:#d97706;font-weight:600\">\u20ac10<\/td>\n<td 
style=\"padding:8px 14px;border:1px solid #e2e8f0\">Price lagging cost reductions<\/td>\n<\/tr>\n<tr style=\"background:#f8fafc\">\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">2022<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">\u20ac125<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">\u20ac108<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0;color:#d97706;font-weight:600\">\u20ac17<\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">Supplier capturing 60% of savings<\/td>\n<\/tr>\n<tr>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">2024<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">\u20ac120<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0\">\u20ac98<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0;color:#e74c3c;font-weight:600\">\u20ac22<\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0\">Gap widening \u2014 time for a conversation<\/td>\n<\/tr>\n<tr style=\"background:#fef3c7\">\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0;font-weight:700\">2025<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0;font-weight:700\">\u20ac118<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0;font-weight:700\">\u20ac92<\/td>\n<td style=\"padding:8px 14px;text-align:center;border:1px solid #e2e8f0;color:#e74c3c;font-weight:700\">\u20ac26<\/td>\n<td style=\"padding:8px 14px;border:1px solid #e2e8f0;font-weight:700\">\u20ac26\/unit opportunity = \u20ac1.66M\/year<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>You&#8217;re not accusing the supplier of gouging. You&#8217;re showing them that you understand cost dynamics and expect pricing to reflect productivity gains. 
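<\/p>
<p>The &quot;Curve Prediction&quot; column in the table can be reproduced in a few lines of R. Note that the cumulative-volume multiples here are illustrative assumptions for this sketch, not figures disclosed by the supplier:<\/p>

```r
# Reproduce the "Curve Prediction" column. The cumulative-volume multiples
# are illustrative assumptions for this sketch, not supplier-disclosed data.
baseline   <- 138                          # EUR, negotiated 2018 price
price_paid <- c(138, 131, 125, 120, 118)   # actual prices over time
cum_mult   <- c(1, 1.5, 2.15, 2.9, 3.5)    # est. cumulative volume vs. 2018

b <- log(0.80) / log(2)                    # 80% experience curve
predicted <- round(baseline * cum_mult^b)

data.frame(year = c(2018, 2020, 2022, 2024, 2025),
           price = price_paid, predicted = predicted,
           gap = price_paid - predicted)
```

<p>Under those assumed multiples, the gap column comes out as \u20ac0, \u20ac10, \u20ac17, \u20ac22, \u20ac26, matching the table above.<\/p>
<p>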
The conversation shifts from &quot;give us a discount&quot; to &quot;let&#8217;s talk about how we share the benefits of your growing efficiency.&quot;<\/p>\n<p><strong>Important caveat:<\/strong> suppliers have legitimate reasons for prices not tracking the experience curve exactly \u2014 R&amp;D investment, quality improvements, regulatory compliance, raw material inflation. The curve gives you a starting point for negotiation, not a final answer.<\/p>\n<h3>Operations: Forecast Your Own Production Costs<\/h3>\n<p>If you&#8217;re ramping up production of a new product, the experience curve helps you forecast when you&#8217;ll hit cost targets, break even, or achieve target margins. Plot your actual cost data against the fitted curve quarterly. If costs are above the curve, investigate \u2014 you may have process issues preventing normal learning. If below, you&#8217;re outperforming expectations.<\/p>\n<h3>Competitive Intelligence: Why Market Leaders Win on Price<\/h3>\n<p>A company with 3x your cumulative production is 1.58 doublings ahead on the experience curve (log\u2082(3) = 1.58). At an 80% learning rate, their unit costs are approximately 0.80^1.58 = 70% of yours. They can undercut your price by 20% and still maintain higher margins. This is why experience-intensive industries tend toward consolidation \u2014 the leader&#8217;s cost advantage is self-reinforcing.<\/p>\n<h3>Make vs. Buy: When to Surrender to the Specialist<\/h3>\n<p>If a contract manufacturer has produced 500,000 units and you&#8217;ve produced 5,000, they&#8217;re roughly 6.6 doublings ahead. At an 80% curve, their costs are 0.80^6.6 = 23% of yours. You will <em>never<\/em> catch up unless you can find a way to leapfrog their volume. 
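<\/p>
<p>Both of these calculations are the same one-liner: the relative cost position implied by a cumulative-volume ratio. A minimal sketch:<\/p>

```r
# Relative cost position implied by a cumulative-volume ratio
# on a given experience curve (default: 80% learning rate).
cost_ratio <- function(volume_ratio, learning_rate = 0.80) {
  volume_ratio^(log(learning_rate) / log(2))
}

round(cost_ratio(3), 2)               # competitor with 3x your volume: 0.70
round(cost_ratio(500000 / 5000), 2)   # contract manufacturer at 100x: 0.23
```

<p>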
This is a strong signal to buy rather than make.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/03\/exp_industry_slopes-1.png\" alt=\"Typical experience curve slopes by industry\" \/><\/p>\n<p>The industry comparison chart shows why one-size-fits-all assumptions are dangerous. Semiconductors have famously steep curves (around 70%) driven by Moore&#8217;s Law dynamics. Commodity chemicals are near 90% \u2014 costs decline, but slowly. Knowing the typical slope for your industry gives you a reasonable prior before you have enough data to estimate your own.<\/p>\n<h2>Where the Curve Breaks<\/h2>\n<p>The experience curve is powerful, but it&#8217;s not a law of nature. It&#8217;s an empirical pattern that holds under certain conditions and breaks under others. Honest analysis requires knowing when <em>not<\/em> to rely on it.<\/p>\n<p><strong>Depleting natural resources.<\/strong> If your key input is a finite resource whose extraction gets harder over time (deep-water oil, rare earth elements, high-grade ores), input costs may <em>rise<\/em> with cumulative volume industry-wide. The experience curve for extraction efficiency exists but can be overwhelmed by geological depletion.<\/p>\n<p><strong>Regulatory barriers and patents.<\/strong> A pharmaceutical company with patent protection has no competitive pressure to reduce prices along the experience curve. Regulatory compliance costs (GMP manufacturing, clinical trials, documentation) don&#8217;t always decline with volume because they&#8217;re driven by regulatory requirements, not production efficiency.<\/p>\n<p><strong>Mature commodities.<\/strong> When an entire industry is far down the experience curve \u2014 think steel, commodity plastics, basic chemicals \u2014 further doublings of cumulative output yield trivially small cost reductions. The curve has largely been exhausted. 
Competition shifts to other dimensions (service, delivery, quality, sustainability).<\/p>\n<p><strong>Technological disruptions.<\/strong> A disruptive technology can reset the curve entirely. The experience curve for incandescent light bulbs became irrelevant when LEDs arrived with a new, steeper curve starting from a higher cost point but declining faster. If your supplier is riding a technology that&#8217;s about to be displaced, their experience curve is worthless for long-term forecasting.<\/p>\n<p><strong>Products with high intangible value.<\/strong> Luxury goods, branded products, and IP-heavy offerings don&#8217;t follow cost-based pricing. Hermes isn&#8217;t going to cut handbag prices because their artisans got more experienced. The experience curve applies to costs, not to prices in markets driven by brand, exclusivity, or intellectual property.<\/p>\n<p><strong>The management effort trap.<\/strong> Perhaps the most dangerous misconception: cost decline along the experience curve is <em>not automatic<\/em>. It requires deliberate management action \u2014 continuous improvement programs, capital investment in better equipment, workforce training, supplier development. Companies that assume costs will drop simply because volume grew often find that they don&#8217;t. The curve describes what <em>can<\/em> happen with good management, not what <em>will<\/em> happen with passive management.<\/p>\n<h2>Interactive Dashboard<\/h2>\n<p>Explore the experience curve with your own numbers. 
Adjust learning rates, volume growth, and cost components to see how costs decline \u2014 and set data-driven negotiation targets.<\/p>\n<div class=\"dashboard-link\" style=\"margin:2em 0; padding:1.5em; background:#f8f9fa; border-left:4px solid #0073aa; border-radius:4px;\">\n<p style=\"margin:0 0 0.5em 0; font-size:1.1em;\"><strong>Interactive Dashboard<\/strong><\/p>\n<p style=\"margin:0 0 1em 0;\">Explore the data yourself \u2014 adjust parameters and see the results update in real time.<\/p>\n<p><a href=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/03\/2026-03-03_Experience_Curve_Supply_Chain_dashboard-1.html\" target=\"_blank\" style=\"display:inline-block; padding:0.6em 1.2em; background:#0073aa; color:#fff; text-decoration:none; border-radius:4px; font-weight:bold;\">Open Interactive Dashboard &rarr;<\/a><\/div>\n<h2>Your Next Steps<\/h2>\n<p>The experience curve is one of the most well-documented empirical relationships in business. Here&#8217;s how to start using it this week:<\/p>\n<ol>\n<li>\n<p><strong>Pull price history for your top 5 purchased items.<\/strong> Get 3-5 years of unit prices alongside your estimates of the supplier&#8217;s cumulative production volume (or use your own purchase volume as a proxy). Fit an experience curve to each \u2014 the R code in the collapsible section below does exactly this. Any item where the fitted learning rate is above 85% has significant cost reduction potential you may not be capturing.<\/p>\n<\/li>\n<li>\n<p><strong>Calculate the &quot;experience gap.&quot;<\/strong> Compare your supplier&#8217;s actual price trajectory to the experience curve prediction. A widening gap over time means the supplier is capturing most of the cost savings from their growing efficiency. 
This gap is your negotiation starting point \u2014 specific, data-driven, and hard to argue with.<\/p>\n<\/li>\n<li>\n<p><strong>Decompose your own production costs into components.<\/strong> Estimate learning rates for each \u2014 labor, materials, overhead. This tells you where to focus improvement efforts. If your labor curve is 90% when 80% is typical for your industry, you have a learning problem (inadequate training, high turnover, poor process standardization).<\/p>\n<\/li>\n<li>\n<p><strong>Use the interactive dashboard to model what-if scenarios.<\/strong> What happens if you double your order volume with a single supplier? How much should that be worth in unit cost reduction? What if you switch to a supplier with more cumulative experience? The dashboard lets you explore these questions in seconds.<\/p>\n<\/li>\n<li>\n<p><strong>Start tracking cumulative production for key items.<\/strong> The experience curve only works with good data, and most companies don&#8217;t track cumulative volume systematically. Add it to your supplier scorecards. 
In 12 months, you&#8217;ll have enough data to fit reliable curves \u2014 and your suppliers will know you&#8217;re watching.<\/p>\n<\/li>\n<\/ol>\n<details>\n<summary><strong>Show R Code<\/strong><\/summary>\n<pre><code class=\"language-r\"># =============================================================================\n# The Experience Curve \u2014 Complete R Code\n# =============================================================================\n# This script reproduces all analysis and visualizations from the blog post.\n# Fictional scenario: ProSense GmbH \u2014 German IoT sensor manufacturer.\n#\n# Required packages: ggplot2, dplyr, tidyr, scales, patchwork\n# =============================================================================\n\n# === Setup ===================================================================\n\nlibrary(ggplot2)\nlibrary(dplyr)\nlibrary(tidyr)\nlibrary(scales)\nlibrary(patchwork)\n\nset.seed(42)\n\n# Custom theme \u2014 minimal, clean, publication-ready\ntheme_exp &lt;- theme_minimal(base_size = 13) +\n  theme(\n    plot.title       = element_text(face = &quot;bold&quot;, size = 14),\n    plot.subtitle    = element_text(color = &quot;grey40&quot;, size = 11),\n    panel.grid.minor = element_blank(),\n    legend.position  = &quot;bottom&quot;\n  )\n\n# Color palette\ncol_red    &lt;- &quot;#e74c3c&quot;\ncol_blue   &lt;- &quot;#2980b9&quot;\ncol_green  &lt;- &quot;#27ae60&quot;\ncol_orange &lt;- &quot;#e67e22&quot;\ncol_purple &lt;- &quot;#8b5cf6&quot;\n\n# === Experience Curve Function ===============================================\n#\n# The experience curve follows a power law:\n#   C(x) = C1 * x^b\n#\n# Where:\n#   C(x) = unit cost at cumulative production x\n#   C1   = theoretical cost of the first unit\n#   b    = log(learning_rate) \/ log(2)\n#\n# A learning rate of 80% means costs drop to 80% every time\n# cumulative production doubles.\n\nexp_curve &lt;- function(x, C1, learning_rate) {\n  b &lt;- log(learning_rate) \/ log(2)\n 
 C1 * x^b\n}\n\n# === Data Generation: ProSense GmbH =========================================\n#\n# Scenario: 8 years (32 quarters) of IoT sensor production data\n# - Production ramps from ~500 units\/quarter to ~4,000 units\/quarter\n# - Unit costs started at ~EUR 145 (at cumulative volume ~500)\n# - Costs follow an 80% experience curve\n\nn_quarters &lt;- 32\n\n# Quarterly production ramps exponentially from 500 to 4,000\nquarterly_prod &lt;- round(\n  500 * exp(log(4000 \/ 500) * (0:(n_quarters - 1)) \/ (n_quarters - 1))\n)\n\n# Cumulative production\ncum_prod &lt;- cumsum(quarterly_prod)\ncat(&quot;Cumulative production range:&quot;,\n    comma(min(cum_prod)), &quot;to&quot;, comma(max(cum_prod)), &quot;units\\n&quot;)\n\n# Cost parameters\nlr_total &lt;- 0.80\nb_total  &lt;- log(lr_total) \/ log(2)\n\n# Calibrate C1 so that cost at cum_prod ~500 = EUR 145\nC1_true &lt;- 145 \/ (500^b_total)\ncat(&quot;Theoretical first-unit cost (C1):&quot;, round(C1_true, 1), &quot;EUR\\n&quot;)\n\n# True total cost at each cumulative volume\ntrue_cost &lt;- exp_curve(cum_prod, C1_true, lr_total)\n\n# Add realistic noise (+-3-5%)\nnoise &lt;- rnorm(n_quarters, mean = 0, sd = 0.04)\nobserved_cost &lt;- true_cost * (1 + noise)\n\n# Build the main data frame\ndf &lt;- data.frame(\n  quarter        = 1:n_quarters,\n  quarterly_prod = quarterly_prod,\n  cum_prod       = cum_prod,\n  true_cost      = true_cost,\n  observed_cost  = observed_cost\n)\n\n# Quick look at the data\ncat(&quot;\\nFirst 5 quarters:\\n&quot;)\nprint(head(df, 5))\ncat(&quot;\\nLast 5 quarters:\\n&quot;)\nprint(tail(df, 5))\n\n# === Component Cost Data =====================================================\n#\n# Three cost components, each with a different learning rate:\n#   Materials: 55% of initial cost, 85% learning rate (slowest decline)\n#   Labor:     30% of initial cost, 75% learning rate (fastest decline)\n#   Overhead:  15% of initial cost, 80% learning rate\n\ncomp_params &lt;- list(\n  materials = 
list(share = 0.55, lr = 0.85),\n  labor     = list(share = 0.30, lr = 0.75),\n  overhead  = list(share = 0.15, lr = 0.80)\n)\n\nfor (comp_name in names(comp_params)) {\n  params  &lt;- comp_params[[comp_name]]\n  C1_comp &lt;- C1_true * params$share\n  true_comp  &lt;- exp_curve(cum_prod, C1_comp, params$lr)\n  noise_comp &lt;- rnorm(n_quarters, mean = 0, sd = 0.03)\n  df[[paste0(&quot;cost_&quot;, comp_name)]] &lt;- true_comp * (1 + noise_comp)\n}\n\n# === Fitting the Experience Curve ============================================\n#\n# Method: Log-log linear regression\n#   log(C) = a + b * log(x)\n#\n# This is equivalent to the power law C = 10^a * x^b\n\ndf$log_cum  &lt;- log10(df$cum_prod)\ndf$log_cost &lt;- log10(df$observed_cost)\n\nlm_loglog &lt;- lm(log_cost ~ log_cum, data = df)\n\n# Model results\ncat(&quot;\\n=== Log-Log Regression Results ===\\n&quot;)\nprint(summary(lm_loglog))\n\nr_sq       &lt;- summary(lm_loglog)$r.squared\nslope_b    &lt;- coef(lm_loglog)[2]\nintercept_a &lt;- coef(lm_loglog)[1]\nimplied_lr &lt;- 2^slope_b\n\ncat(&quot;\\nKey metrics:\\n&quot;)\ncat(&quot;  R-squared:     &quot;, round(r_sq, 4), &quot;\\n&quot;)\ncat(&quot;  Slope (b):     &quot;, round(slope_b, 4), &quot;\\n&quot;)\ncat(&quot;  Intercept (a): &quot;, round(intercept_a, 4), &quot;\\n&quot;)\ncat(&quot;  Implied LR:    &quot;, round(implied_lr * 100, 1), &quot;%\\n&quot;)\ncat(&quot;  Cost reduction per doubling:&quot;,\n    round((1 - implied_lr) * 100, 1), &quot;%\\n&quot;)\n\n# Also fit with nls for the linear-scale overlay\nfit_nls &lt;- nls(observed_cost ~ C1 * cum_prod^b,\n               data = df,\n               start = list(C1 = 1000, b = -0.3))\ndf$fitted_cost &lt;- predict(fit_nls)\n\n# === Chart 1: Experience Curve \u2014 Linear Scale ================================\n# The classic &quot;hockey stick&quot; shape\n\nfirst_pt &lt;- df[1, ]\nlast_pt  &lt;- df[n_quarters, ]\nmid_pt   &lt;- df[which.min(abs(df$cum_prod - 20000)), ]\n\np1 &lt;- ggplot(df, aes(x 
= cum_prod, y = observed_cost)) +\n  geom_point(color = col_blue, size = 2.5, alpha = 0.8) +\n  geom_line(aes(y = fitted_cost), color = col_red, linewidth = 1.2) +\n  annotate(&quot;segment&quot;,\n           x = first_pt$cum_prod, y = first_pt$observed_cost,\n           xend = first_pt$cum_prod + 5000, yend = first_pt$observed_cost + 5,\n           color = &quot;grey40&quot;, linewidth = 0.4) +\n  annotate(&quot;label&quot;,\n           x = first_pt$cum_prod + 5500, y = first_pt$observed_cost + 5,\n           label = paste0(&quot;Q1: EUR&quot;, round(first_pt$observed_cost)),\n           size = 3.5, fill = &quot;white&quot;) +\n  annotate(&quot;segment&quot;,\n           x = mid_pt$cum_prod, y = mid_pt$observed_cost,\n           xend = mid_pt$cum_prod + 8000, yend = mid_pt$observed_cost + 15,\n           color = &quot;grey40&quot;, linewidth = 0.4) +\n  annotate(&quot;label&quot;,\n           x = mid_pt$cum_prod + 8500, y = mid_pt$observed_cost + 15,\n           label = paste0(comma(mid_pt$cum_prod), &quot; units: EUR&quot;,\n                          round(mid_pt$observed_cost)),\n           size = 3.5, fill = &quot;white&quot;) +\n  annotate(&quot;segment&quot;,\n           x = last_pt$cum_prod, y = last_pt$observed_cost,\n           xend = last_pt$cum_prod - 12000, yend = last_pt$observed_cost + 12,\n           color = &quot;grey40&quot;, linewidth = 0.4) +\n  annotate(&quot;label&quot;,\n           x = last_pt$cum_prod - 12500, y = last_pt$observed_cost + 12,\n           label = paste0(&quot;Current: EUR&quot;, round(last_pt$observed_cost)),\n           size = 3.5, fill = &quot;white&quot;) +\n  scale_x_continuous(labels = comma_format()) +\n  scale_y_continuous(labels = dollar_format(prefix = &quot;\\u20ac&quot;)) +\n  labs(\n    title    = &quot;ProSense GmbH \u2014 Unit Cost vs. 
Cumulative Production&quot;,\n    subtitle = &quot;Classic experience curve: steep decline early, flattening with volume&quot;,\n    x = &quot;Cumulative Production (units)&quot;,\n    y = &quot;Unit Cost (\u20ac)&quot;\n  ) +\n  theme_exp\n\nprint(p1)\n\n# === Chart 2: Log-Log Scale \u2014 The Power Law Revealed ========================\n# On log-log axes, the experience curve becomes a straight line\n\neq_label &lt;- paste0(\n  &quot;log(C) = &quot;, round(intercept_a, 3), &quot; + (&quot;, round(slope_b, 3),\n  &quot;) \u00b7 log(x)\\n&quot;,\n  &quot;R\u00b2 = &quot;, round(r_sq, 4), &quot;\\n&quot;,\n  &quot;Slope b = &quot;, round(slope_b, 3), &quot;\\n&quot;,\n  &quot;Learning Rate = &quot;, round(implied_lr * 100, 1), &quot;%&quot;\n)\n\np2 &lt;- ggplot(df, aes(x = cum_prod, y = observed_cost)) +\n  geom_point(color = col_blue, size = 2.5, alpha = 0.8) +\n  geom_smooth(method = &quot;lm&quot;, formula = y ~ x, color = col_red,\n              linewidth = 1.1, fill = col_red, alpha = 0.12) +\n  scale_x_log10(labels = comma_format()) +\n  scale_y_log10(labels = dollar_format(prefix = &quot;\\u20ac&quot;)) +\n  annotate(&quot;label&quot;,\n           x = 10^(mean(range(df$log_cum)) - 0.1),\n           y = 10^(max(df$log_cost) - 0.02),\n           label = eq_label,\n           hjust = 0.5, vjust = 1, size = 3.8,\n           fill = &quot;white&quot;, family = &quot;mono&quot;) +\n  labs(\n    title    = &quot;Log-Log Scale \u2014 The Power Law Becomes Obvious&quot;,\n    subtitle = &quot;A straight line on log-log axes confirms the experience curve&quot;,\n    x = &quot;Cumulative Production (log scale)&quot;,\n    y = &quot;Unit Cost (log scale)&quot;\n  ) +\n  theme_exp\n\nprint(p2)\n\n# === Chart 3: Cost Components \u2014 Different Learning Rates =====================\n# Each cost component declines at its own rate\n\nx_smooth &lt;- seq(min(df$cum_prod), max(df$cum_prod), length.out = 200)\n\ncomp_smooth &lt;- data.frame(\n  cum_prod  = rep(x_smooth, 4),\n  
cost      = c(\n    exp_curve(x_smooth, C1_true * 0.55, 0.85),\n    exp_curve(x_smooth, C1_true * 0.30, 0.75),\n    exp_curve(x_smooth, C1_true * 0.15, 0.80),\n    exp_curve(x_smooth, C1_true, lr_total)\n  ),\n  component = rep(c(&quot;Materials (85% LR)&quot;, &quot;Labor (75% LR)&quot;,\n                     &quot;Overhead (80% LR)&quot;, &quot;Total (80% LR)&quot;), each = 200)\n)\n\ncomp_smooth$component &lt;- factor(comp_smooth$component,\n  levels = c(&quot;Total (80% LR)&quot;, &quot;Materials (85% LR)&quot;,\n             &quot;Labor (75% LR)&quot;, &quot;Overhead (80% LR)&quot;))\n\ncomp_colors &lt;- c(\n  &quot;Total (80% LR)&quot;     = col_red,\n  &quot;Materials (85% LR)&quot; = col_blue,\n  &quot;Labor (75% LR)&quot;     = col_green,\n  &quot;Overhead (80% LR)&quot;  = col_orange\n)\n\n# Scatter points for observed component costs\ndf_comp_long &lt;- df %&gt;%\n  select(cum_prod, cost_materials, cost_labor, cost_overhead) %&gt;%\n  pivot_longer(-cum_prod, names_to = &quot;component&quot;, values_to = &quot;cost&quot;) %&gt;%\n  mutate(component = case_when(\n    component == &quot;cost_materials&quot; ~ &quot;Materials (85% LR)&quot;,\n    component == &quot;cost_labor&quot;     ~ &quot;Labor (75% LR)&quot;,\n    component == &quot;cost_overhead&quot;  ~ &quot;Overhead (80% LR)&quot;\n  ))\n\np3 &lt;- ggplot() +\n  geom_line(data = comp_smooth,\n            aes(x = cum_prod, y = cost, color = component),\n            linewidth = 1.1) +\n  geom_point(data = df_comp_long,\n             aes(x = cum_prod, y = cost, color = component),\n             size = 1.5, alpha = 0.5) +\n  scale_x_log10(labels = comma_format()) +\n  scale_y_log10(labels = dollar_format(prefix = &quot;\\u20ac&quot;)) +\n  scale_color_manual(values = comp_colors) +\n  labs(\n    title    = &quot;Cost Components \u2014 Different Learning Rates&quot;,\n    subtitle = &quot;Labor declines fastest (75%), materials slowest (85%)&quot;,\n    x = &quot;Cumulative Production (log 
scale)&quot;,\n    y = &quot;Unit Cost Component (log scale)&quot;,\n    color = NULL\n  ) +\n  theme_exp +\n  guides(color = guide_legend(nrow = 1))\n\nprint(p3)\n\n# === Chart 4: Cost Breakdown \u2014 Early vs. Mature Production ===================\n# Shows how the cost MIX shifts as volume grows\n\ncalc_costs &lt;- function(vol) {\n  data.frame(\n    component = c(&quot;Materials&quot;, &quot;Labor&quot;, &quot;Overhead&quot;),\n    cost      = c(\n      exp_curve(vol, C1_true * 0.55, 0.85),\n      exp_curve(vol, C1_true * 0.30, 0.75),\n      exp_curve(vol, C1_true * 0.15, 0.80)\n    )\n  )\n}\n\nvol_early  &lt;- 5000\nvol_mature &lt;- 50000\n\ndf_early  &lt;- calc_costs(vol_early)  %&gt;%\n  mutate(stage = paste0(&quot;Early\\n(&quot;, comma(vol_early), &quot; units)&quot;))\ndf_mature &lt;- calc_costs(vol_mature) %&gt;%\n  mutate(stage = paste0(&quot;Mature\\n(&quot;, comma(vol_mature), &quot; units)&quot;))\n\ndf_waterfall &lt;- bind_rows(df_early, df_mature) %&gt;%\n  mutate(\n    component = factor(component,\n                       levels = c(&quot;Overhead&quot;, &quot;Labor&quot;, &quot;Materials&quot;)),\n    stage = factor(stage, levels = c(\n      paste0(&quot;Early\\n(&quot;, comma(vol_early), &quot; units)&quot;),\n      paste0(&quot;Mature\\n(&quot;, comma(vol_mature), &quot; units)&quot;)\n    ))\n  )\n\ntotal_early  &lt;- sum(df_early$cost)\ntotal_mature &lt;- sum(df_mature$cost)\npct_reduction &lt;- round((1 - total_mature \/ total_early) * 100, 1)\n\ncat(&quot;\\n=== Cost Breakdown ===\\n&quot;)\ncat(&quot;At&quot;, comma(vol_early), &quot;units (early):\\n&quot;)\nprint(df_early %&gt;% mutate(pct = round(cost \/ sum(cost) * 100, 1)))\ncat(&quot;Total:&quot;, round(total_early, 1), &quot;EUR\\n\\n&quot;)\ncat(&quot;At&quot;, comma(vol_mature), &quot;units (mature):\\n&quot;)\nprint(df_mature %&gt;% mutate(pct = round(cost \/ sum(cost) * 100, 1)))\ncat(&quot;Total:&quot;, round(total_mature, 1), &quot;EUR\\n&quot;)\ncat(&quot;Reduction:&quot;, 
pct_reduction, &quot;%\\n&quot;)\n\ncomp_fill &lt;- c(\n  &quot;Materials&quot; = col_blue,\n  &quot;Labor&quot;     = col_green,\n  &quot;Overhead&quot;  = col_orange\n)\n\np4 &lt;- ggplot(df_waterfall, aes(x = stage, y = cost, fill = component)) +\n  geom_col(width = 0.55, color = &quot;white&quot;, linewidth = 0.3) +\n  geom_text(aes(label = paste0(&quot;\u20ac&quot;, round(cost, 1))),\n            position = position_stack(vjust = 0.5),\n            size = 3.8, color = &quot;white&quot;, fontface = &quot;bold&quot;) +\n  annotate(&quot;text&quot;,\n           x = 1, y = total_early + 3,\n           label = paste0(&quot;Total: \u20ac&quot;, round(total_early, 1)),\n           size = 4.2, fontface = &quot;bold&quot;, color = &quot;grey20&quot;) +\n  annotate(&quot;text&quot;,\n           x = 2, y = total_mature + 3,\n           label = paste0(&quot;Total: \u20ac&quot;, round(total_mature, 1)),\n           size = 4.2, fontface = &quot;bold&quot;, color = &quot;grey20&quot;) +\n  annotate(&quot;segment&quot;,\n           x = 1.25, xend = 1.75,\n           y = (total_early + total_mature) \/ 2 + 5,\n           yend = (total_early + total_mature) \/ 2 + 5,\n           arrow = arrow(length = unit(0.25, &quot;cm&quot;)),\n           color = col_red, linewidth = 1) +\n  annotate(&quot;text&quot;,\n           x = 1.5, y = (total_early + total_mature) \/ 2 + 10,\n           label = paste0(&quot;-&quot;, pct_reduction, &quot;%&quot;),\n           size = 4.5, fontface = &quot;bold&quot;, color = col_red) +\n  scale_fill_manual(values = comp_fill) +\n  scale_y_continuous(labels = dollar_format(prefix = &quot;\u20ac&quot;),\n                     expand = expansion(mult = c(0, 0.15))) +\n  labs(\n    title    = &quot;Cost Breakdown \u2014 Early vs. 
Mature Production&quot;,\n    subtitle = &quot;Materials become dominant as labor and overhead decline faster&quot;,\n    x = NULL, y = &quot;Unit Cost (\u20ac)&quot;, fill = NULL\n  ) +\n  theme_exp +\n  theme(panel.grid.major.x = element_blank())\n\nprint(p4)\n\n# === Chart 5: Industry Learning Rates ========================================\n# Typical experience curve slopes across industries\n\nindustry_data &lt;- data.frame(\n  industry      = c(&quot;Semiconductors&quot;, &quot;Electronics Assembly&quot;, &quot;Automotive&quot;,\n                     &quot;Aerospace&quot;, &quot;Chemical Processing&quot;, &quot;Textiles&quot;,\n                     &quot;Food Processing&quot;, &quot;Steel Production&quot;, &quot;Pharmaceutical&quot;),\n  learning_rate = c(70, 75, 78, 80, 80, 82, 83, 85, 88)\n) %&gt;%\n  arrange(learning_rate) %&gt;%\n  mutate(industry = factor(industry, levels = industry))\n\nlr_range &lt;- range(industry_data$learning_rate)\nindustry_data$color_val &lt;-\n  (industry_data$learning_rate - lr_range[1]) \/ diff(lr_range)\n\np5 &lt;- ggplot(industry_data, aes(y = industry, color = color_val)) +\n  geom_segment(aes(x = 65, xend = learning_rate, yend = industry),\n               linewidth = 10, lineend = &quot;butt&quot;) +\n  geom_text(aes(x = learning_rate, label = paste0(learning_rate, &quot;%&quot;)),\n            hjust = -0.3, size = 4, fontface = &quot;bold&quot;, color = &quot;grey30&quot;) +\n  geom_vline(xintercept = 80, linetype = &quot;dashed&quot;,\n             color = &quot;grey50&quot;, linewidth = 0.6) +\n  # y = 9.4 keeps the label inside the panel (discrete scales expand to ~9.6)\n  annotate(&quot;text&quot;, x = 80, y = 9.4, label = &quot;Typical: 80%&quot;,\n           size = 3.5, color = &quot;grey40&quot;, hjust = 0.5) +\n  scale_color_gradient(low = col_green, high = col_orange, guide = &quot;none&quot;) +\n  scale_x_continuous(limits = c(65, 93), breaks = seq(65, 90, 5),\n                     labels = function(x) paste0(x, &quot;%&quot;)) +\n  labs(\n    title    = &quot;Experience Curve Learning Rates 
by Industry&quot;,\n    subtitle = &quot;Lower % = steeper learning (costs drop faster per doubling)&quot;,\n    x = &quot;Learning Rate (%)&quot;, y = NULL\n  ) +\n  theme_exp +\n  theme(panel.grid.major.y = element_blank())\n\nprint(p5)\n\n# === Predicting Future Costs =================================================\n# Use the fitted model to forecast costs at future cumulative volumes.\n# Predict from fit_nls: a real analysis only has the estimated curve,\n# never the true C1 (which we know here only because the data is simulated).\n\nfuture_volumes &lt;- c(100000, 150000, 200000)\nfuture_costs   &lt;- predict(fit_nls, newdata = data.frame(cum_prod = future_volumes))\n\ncat(&quot;\\n=== Cost Forecast ===\\n&quot;)\nfor (i in seq_along(future_volumes)) {\n  cat(&quot;At&quot;, comma(future_volumes[i]), &quot;cumulative units:&quot;,\n      round(future_costs[i], 2), &quot;EUR\/unit\\n&quot;)\n}\n\n# === Apply to Your Own Data ==================================================\n#\n# Replace the example data below with your actual cost\/volume data.\n# The script will fit an experience curve and report the learning rate.\n\n# --- STEP 1: Enter your data ---\n# my_data &lt;- data.frame(\n#   cum_production = c(1000, 2000, 5000, 10000, 20000),\n#   unit_cost      = c(120, 98, 75, 62, 51)\n# )\n#\n# --- STEP 2: Fit the experience curve ---\n# my_data$log_vol  &lt;- log10(my_data$cum_production)\n# my_data$log_cost &lt;- log10(my_data$unit_cost)\n# my_fit &lt;- lm(log_cost ~ log_vol, data = my_data)\n#\n# my_slope &lt;- coef(my_fit)[2]\n# my_lr    &lt;- 2^my_slope\n#\n# cat(&quot;Your learning rate:&quot;, round(my_lr * 100, 1), &quot;%\\n&quot;)\n# cat(&quot;R-squared:&quot;, round(summary(my_fit)$r.squared, 4), &quot;\\n&quot;)\n# cat(&quot;Cost reduction per doubling:&quot;, round((1 - my_lr) * 100, 1), &quot;%\\n&quot;)\n#\n# --- STEP 3: Plot your data ---\n# ggplot(my_data, aes(x = cum_production, y = unit_cost)) +\n#   geom_point(color = col_blue, size = 3) +\n#   geom_smooth(method = &quot;lm&quot;, formula = y ~ x,\n#               color = col_red, linewidth = 1) +\n#   scale_x_log10(labels = comma_format()) +\n#   
scale_y_log10(labels = dollar_format(prefix = &quot;\u20ac&quot;)) +\n#   labs(title = &quot;Your Experience Curve&quot;,\n#        x = &quot;Cumulative Production (log scale)&quot;,\n#        y = &quot;Unit Cost (log scale)&quot;) +\n#   theme_exp\n<\/code><\/pre>\n<\/details>\n<h2>References<\/h2>\n<ol>\n<li>Wright, T.P. (1936). &quot;Factors Affecting the Cost of Airplanes.&quot; <em>Journal of the Aeronautical Sciences<\/em>, 3(4), 122-128.<\/li>\n<li>Henderson, B.D. (1974). <em>The Experience Curve Reviewed: III. Why Does It Work?<\/em> Boston Consulting Group.<\/li>\n<li>Boston Consulting Group (1968). <em>Perspectives on Experience<\/em>. BCG Publications.<\/li>\n<li>Dutton, J.M. &amp; Thomas, A. (1984). &quot;Treating Progress Functions as a Managerial Opportunity.&quot; <em>Academy of Management Review<\/em>, 9(2), 235-247.<\/li>\n<li>Argote, L. &amp; Epple, D. (1990). &quot;Learning Curves in Manufacturing.&quot; <em>Science<\/em>, 247(4945), 920-924.<\/li>\n<li>Yelle, L.E. (1979). &quot;The Learning Curve: Historical Review and Comprehensive Survey.&quot; <em>Decision Sciences<\/em>, 10(2), 302-328.<\/li>\n<li>Lafond, F. et al. (2018). &quot;How Well Do Experience Curves Predict Technological Progress? A Method for Making Distributional Forecasts.&quot; <em>Technological Forecasting and Social Change<\/em>, 128, 104-117.<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Every time cumulative production doubles, costs fall 20-30%. 
Here&#8217;s how to fit experience curves with R and use them for supplier negotiations, cost forecasting, and strategic sourcing.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[13,129,115],"tags":[198,184,196,197,53,119,181,26],"class_list":["post-1268","post","type-post","status-publish","format-standard","hentry","category-data-science","category-procurement","category-supply-chain-management","tag-cost-management","tag-cost-reduction","tag-experience-curve","tag-learning-curve","tag-procurement","tag-r-programming","tag-strategic-sourcing","tag-supply-chain-analytics"],"_links":{"self":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1268","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1268"}],"version-history":[{"count":1,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1268\/revisions"}],"predecessor-version":[{"id":1269,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1268\/revisions\/1269"}],"wp:attachment":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1268"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1268"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1268"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}