{"id":1729,"date":"2026-04-06T00:05:09","date_gmt":"2026-04-06T00:05:09","guid":{"rendered":"https:\/\/inphronesys.com\/?p=1729"},"modified":"2026-04-06T00:05:09","modified_gmt":"2026-04-06T00:05:09","slug":"taking-the-engine-apart-time-series-decomposition-for-supply-chain-forecasters","status":"publish","type":"post","link":"https:\/\/inphronesys.com\/?p=1729","title":{"rendered":"Taking the Engine Apart: Time Series Decomposition for Supply Chain Forecasters"},"content":{"rendered":"<p>Every time series tells three stories at once.<\/p>\n<p>There&#8217;s the long arc \u2014 the trend. Is demand growing? Shrinking? Flattening out? Then there&#8217;s the rhythm \u2014 the seasonality. Summer peaks, winter troughs, end-of-quarter spikes from customers gaming their own budgets. And finally, there&#8217;s everything else \u2014 the remainder, the noise, the &#8222;what on earth happened in March 2024?&#8220; part that no pattern can explain.<\/p>\n<p>The problem is that when you look at a raw time series, you hear all three stories simultaneously. It&#8217;s like listening to a symphony with the melody, harmony, and percussion layered on top of each other \u2014 beautiful to experience, impossible to analyze. You can&#8217;t tune the violins if you can&#8217;t isolate them from the cellos.<\/p>\n<p><a href=\"\/your-line-chart-is-hiding-8-patterns\">Last week<\/a>, we learned to <em>see<\/em> our data \u2014 eight chart types that reveal hidden patterns in demand data. We popped the hood. Today, we take the engine apart.<\/p>\n<p><strong>Time series decomposition<\/strong> is the statistical technique that separates a time series into its component parts: trend, seasonality, and remainder. 
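<\/p>
<p>As a first concrete picture, the additive form y_t = T_t + S_t + R_t can be assembled in a few lines. The sketch below is plain Python with invented numbers, purely to make the three stories tangible:<\/p>
<pre><code class=\"language-python\">import math, random

n = 36                                              # three years of monthly data
trend     = [100 + 2.0 * t for t in range(n)]       # the long arc: steady growth
seasonal  = [15 * math.sin(math.radians(30 * t)) for t in range(n)]  # 12-month rhythm
random.seed(1)
remainder = [random.gauss(0, 3) for _ in range(n)]  # everything else: noise

# the observed series is simply the sum of the three components
y = [trend[t] + seasonal[t] + remainder[t] for t in range(n)]
<\/code><\/pre>
<p>Decomposition runs this construction in reverse: given only y, recover the three parts.<\/p>
<p>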
It&#8217;s one of the most powerful diagnostic tools in the forecaster&#8217;s toolkit, and it&#8217;s been evolving for over a century \u2014 from the crude methods of the 1920s to sophisticated algorithms that can handle multiple seasonal patterns, holidays, and abrupt structural changes.<\/p>\n<p>By the end of this post, you&#8217;ll understand not just <em>how<\/em> decomposition works, but <em>which<\/em> method to use for your specific data. That distinction matters more than most textbooks let on.<\/p>\n<h2>The Three Components (And Why They Matter for Planning)<\/h2>\n<p>Mathematically, decomposition models a time series y_t as a combination of three components:<\/p>\n<ul>\n<li><strong>Trend-cycle (T_t):<\/strong> The long-term direction and medium-term cycles. In supply chain terms, this is your underlying demand trajectory \u2014 market growth, product lifecycle, gradual customer base changes.<\/li>\n<li><strong>Seasonal (S_t):<\/strong> Repeating patterns with a fixed, known period. Quarterly budget cycles, summer demand peaks, December retail surges, Monday warehouse volume spikes. The key word is <em>fixed<\/em> \u2014 if the pattern repeats at the same interval, it&#8217;s seasonal.<\/li>\n<li><strong>Remainder (R_t):<\/strong> Everything that isn&#8217;t trend or seasonality. Promotions, supply disruptions, one-time events, random variation. The &#8222;unexplained&#8220; part \u2014 and often the most interesting part to investigate.<\/li>\n<\/ul>\n<p>The question is: how do these three components combine?<\/p>\n<h3>Additive vs. Multiplicative: The Decision That Shapes Everything<\/h3>\n<p>In an <strong>additive<\/strong> model, the components simply add up:<\/p>\n<p><strong>y_t = S_t + T_t + R_t<\/strong><\/p>\n<p>This means seasonal swings are roughly constant in absolute terms. 
If your product sells 200 extra units every December regardless of whether baseline demand is 1,000 or 5,000, that&#8217;s additive seasonality.<\/p>\n<p>In a <strong>multiplicative<\/strong> model, the components multiply:<\/p>\n<p><strong>y_t = S_t x T_t x R_t<\/strong><\/p>\n<p>Here, seasonal swings scale proportionally with the level. If December demand is always about 20% higher than baseline \u2014 whether baseline is 1,000 (so +200) or 5,000 (so +1,000) \u2014 that&#8217;s multiplicative seasonality.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/decomp_additive_vs_multiplicative.png\" alt=\"Additive vs. multiplicative decomposition: constant swings vs. proportional swings\" \/><\/p>\n<p><strong>Why this matters for your supply chain:<\/strong> Get this choice wrong and your seasonal factors will be systematically biased. Additive factors applied to multiplicative data will underestimate peaks at high demand levels and overestimate them at low levels. Your safety stock calculations, production plans, and procurement schedules all inherit that error.<\/p>\n<p><strong>The practical test:<\/strong> Plot your data. If the seasonal &#8222;amplitude&#8220; (the distance from peak to trough) stays roughly constant as the level changes, use additive. If it grows proportionally with the level, use multiplicative. When in doubt, there&#8217;s an elegant trick: apply a <strong>log transformation<\/strong> to your data. Since log(S_t x T_t x R_t) = log(S_t) + log(T_t) + log(R_t), taking logarithms converts a multiplicative model into an additive one. Decompose the log-transformed series additively, then exponentiate back. Problem solved.<\/p>\n<p>Most retail and consumer goods demand data is multiplicative \u2014 higher-volume products have proportionally larger seasonal swings. Most industrial and MRO data is closer to additive. But always check. 
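<\/p>
<p>The arithmetic of that check, and of the log trick, fits in a few lines of plain Python (numbers invented for illustration):<\/p>
<pre><code class=\"language-python\">import math

dec_index = 1.20   # December sits about 20 percent above baseline

# multiplicative seasonality: the absolute lift scales with the level
lifts = [base * dec_index - base for base in (1000, 5000)]  # about 200, then 1000

# the log trick: log(S x T x R) = log(S) + log(T) + log(R),
# so on the log scale the December swing is the same constant at both levels
log_lifts = [math.log(base * dec_index) - math.log(base) for base in (1000, 5000)]
# both entries equal log(1.20): additive again
<\/code><\/pre>
<p>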
Your data doesn&#8217;t care about rules of thumb.<\/p>
<h2>Moving Averages: The Foundation of Trend Estimation<\/h2>
<p>Before we decompose anything, we need a way to estimate the trend. The classic tool is the <strong>moving average<\/strong> \u2014 and despite its simplicity, it&#8217;s worth understanding properly, because every decomposition method builds on this idea.<\/p>
<p>An m-order moving average estimates the trend at time t by averaging m consecutive observations centered around t:<\/p>
<p><strong>T\u0302_t = (1\/m) (y_{t-(m-1)\/2} + &#8230; + y_t + &#8230; + y_{t+(m-1)\/2})<\/strong><\/p>
<p>For odd values of m, this is straightforward \u2014 a 5-MA averages the two observations before, the current one, and the two after. For even periods (like monthly data with m = 12), you need a small adjustment: the <strong>2xm-MA<\/strong>, which takes a moving average of the moving average to re-center it. A 2&#215;12-MA is the standard approach for monthly data.<\/p>
<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/decomp_moving_averages.png\" alt=\"Moving averages with different orders: more smoothing reveals broader trends\" \/><\/p>
<p><strong>What you&#8217;re seeing:<\/strong> Lower-order moving averages (m = 3, m = 5) follow the data closely but don&#8217;t fully remove the seasonal pattern. Higher-order averages (m = 12 for monthly data) smooth out the seasonality entirely, revealing the pure trend underneath. The 2&#215;12-MA is the sweet spot for monthly supply chain data \u2014 it removes the 12-month seasonal cycle while preserving the trend.<\/p>
<p>There&#8217;s a trade-off here that mirrors a fundamental tension in supply chain planning: <strong>responsiveness vs. stability.<\/strong> A short moving average responds quickly to changes but is noisy. A long moving average is stable but slow to react. Sound familiar? 
It&#8217;s the same trade-off you face choosing between responsive and stable safety stock parameters, or between short and long planning horizons in your MRP.<\/p>
<p><strong>Weighted moving averages<\/strong> generalize this by assigning different weights to each observation (weights must sum to 1 and be symmetric). The 2&#215;12-MA is actually a weighted MA where the first and last observations get half weight. This matters because it means the trend estimate is less influenced by extreme values at the edges of the window.<\/p>
<h2>Classical Decomposition: Where It All Started<\/h2>
<p>Classical decomposition has been around since the 1920s \u2014 longer than most supply chain concepts. It&#8217;s beautifully simple, which is both its charm and its fatal flaw.<\/p>
<h3>The Algorithm (Additive Version)<\/h3>
<ol>
<li><strong>Estimate the trend<\/strong> using a 2xm-MA (e.g., 2&#215;12-MA for monthly data)<\/li>
<li><strong>Detrend the data<\/strong> by subtracting the trend: y_t &#8211; T\u0302_t<\/li>
<li><strong>Estimate the seasonal component<\/strong> by averaging the detrended values for each season (all Januaries together, all Februaries together, etc.), then adjusting so the seasonal factors sum to zero over a complete cycle<\/li>
<li><strong>Calculate the remainder:<\/strong> R_t = y_t &#8211; T\u0302_t &#8211; S\u0302_t<\/li>
<\/ol>
<p>For the multiplicative version, replace subtraction with division in steps 2 and 4, and adjust the seasonal factors in step 3 so they average to one rather than summing to zero.<\/p>
<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/decomp_classical.png\" alt=\"Classical decomposition of employment data: trend, seasonal, and remainder\" \/><\/p>
<p><strong>What you&#8217;re seeing:<\/strong> The top panel is the original series. Below it, the smooth trend-cycle estimated by the 2&#215;12-MA. Then the seasonal component \u2014 a repeating pattern that&#8217;s <em>identical<\/em> every year (this is a critical limitation we&#8217;ll discuss). 
And finally the remainder, which should ideally look like random noise if the decomposition captured everything meaningful.<\/p>\n<h3>Why Hyndman Says &#8222;Not Recommended&#8220;<\/h3>\n<p>Classical decomposition is still taught \u2014 and still used in many supply chain organizations \u2014 but Rob Hyndman&#8217;s verdict in <em>Forecasting: Principles and Practice<\/em> is blunt: <strong>&#8222;not recommended.&#8220;<\/strong> Here&#8217;s why:<\/p>\n<ol>\n<li><strong>Missing edges.<\/strong> The 2xm-MA can&#8217;t produce estimates for the first and last m\/2 observations. For a 2&#215;12-MA on monthly data, you lose the first 6 and last 6 months. That&#8217;s a full year of your most recent data \u2014 exactly the part you care about most for planning.<\/li>\n<li><strong>Over-smoothing.<\/strong> The trend estimate can smooth out rapid changes, like a sudden demand shift from a new product launch or a lost customer. The MA just averages right through it, as if nothing happened.<\/li>\n<li><strong>Static seasonality.<\/strong> Classical decomposition assumes the seasonal pattern is <em>identical<\/em> every year. That&#8217;s almost never true in practice. Consumer preferences shift. Product mixes change. Supply chains reconfigure. A method that can&#8217;t handle evolving seasonality is going to drift further from reality every year.<\/li>\n<li><strong>Outlier sensitivity.<\/strong> A single unusual observation \u2014 a massive one-time order, a data entry error, a pandemic \u2014 gets absorbed into the seasonal and trend estimates, contaminating both. There&#8217;s no mechanism to say &#8222;that point is weird, let&#8217;s downweight it.&#8220;<\/li>\n<\/ol>\n<p>These aren&#8217;t minor quibbles. They&#8217;re structural problems that make classical decomposition unreliable for real forecasting work. 
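<\/p>
<p>To make the four steps, and the first of those flaws, concrete, here is the additive algorithm written out in plain Python (the function names are mine, not from any library). Note how the 2xm-MA leaves the first and last m\/2 trend values undefined:<\/p>
<pre><code class=\"language-python\">def centered_ma(y, m):
    # 2xm moving average for even m: a window of m+1 points in which the
    # first and last observations get half weight
    n, half = len(y), m // 2
    trend = [None] * n
    for t in range(half, n - half):
        w = y[t - half : t + half + 1]
        trend[t] = (0.5 * w[0] + sum(w[1:-1]) + 0.5 * w[-1]) / m
    return trend

def classical_additive(y, m):
    trend = centered_ma(y, m)                         # step 1: estimate the trend
    detrended = [yt - tt if tt is not None else None  # step 2: detrend
                 for yt, tt in zip(y, trend)]
    factors = []                                      # step 3: average per season...
    for s in range(m):
        vals = [d for d in detrended[s::m] if d is not None]
        factors.append(sum(vals) / len(vals))
    mean_f = sum(factors) / m
    factors = [f - mean_f for f in factors]           # ...and make them sum to zero
    seasonal = [factors[t % m] for t in range(len(y))]
    remainder = [yt - tt - st if tt is not None else None  # step 4: what is left
                 for yt, tt, st in zip(y, trend, seasonal)]
    return trend, seasonal, remainder
<\/code><\/pre>
<p>On a toy quarterly series with a linear trend and a fixed seasonal pattern, this recovers the components exactly in the middle of the series, while the first and last few trend values come back as <code>None<\/code>: the missing-edges problem from point 1.<\/p>
<p>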
It&#8217;s like using a typewriter in 2026 \u2014 historically interesting, good for understanding the concept, but you wouldn&#8217;t build your production plan on it.<\/p>\n<h2>STL: The Method You Should Actually Be Using<\/h2>\n<p><strong>STL \u2014 Seasonal and Trend decomposition using LOESS<\/strong> \u2014 was developed by Cleveland, Cleveland, McRae, and Terpenning in 1990, and it remains the workhorse of modern time series decomposition. If classical decomposition is the typewriter, STL is the word processor.<\/p>\n<h3>How STL Works<\/h3>\n<p>Instead of simple moving averages, STL uses <strong>LOESS<\/strong> (LOcally Estimated Scatterplot Smoothing) \u2014 a form of local weighted regression that fits smooth curves through data points by giving more weight to nearby observations. The algorithm alternates between two nested loops:<\/p>\n<p><strong>Inner loop<\/strong> (runs multiple times per iteration):<\/p>\n<ol>\n<li>Remove the current seasonal estimate from the data<\/li>\n<li>Apply LOESS smoothing to estimate the trend<\/li>\n<li>Remove the trend estimate from the data<\/li>\n<li>Apply LOESS smoothing to estimate the seasonality<\/li>\n<li>Repeat until convergence<\/li>\n<\/ol>\n<p><strong>Outer loop<\/strong> (for robustness):<\/p>\n<ul>\n<li>Calculates weights based on the size of each remainder value<\/li>\n<li>Large remainders (potential outliers) get downweighted<\/li>\n<li>The inner loop re-runs with these weights, so outliers have less influence<\/li>\n<\/ul>\n<p>This iterative LOESS-based approach is what gives STL its superpowers:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/decomp_stl.png\" alt=\"STL decomposition with tuned parameters\" \/><\/p>\n<h3>Why STL Beats Classical (Every Time)<\/h3>\n<table style=\"border-collapse: collapse; width: 100%; margin: 1.5em 0; font-size: 0.95em; line-height: 1.5;\">\n<thead>\n<tr>\n<th style=\"border: 1px solid #ddd; padding: 10px 14px; background: 
#0073aa; color: #fff; font-weight: 600; text-align: left;\">Feature<\/th>\n<th style=\"border: 1px solid #ddd; padding: 10px 14px; background: #0073aa; color: #fff; font-weight: 600; text-align: left;\">Classical<\/th>\n<th style=\"border: 1px solid #ddd; padding: 10px 14px; background: #0073aa; color: #fff; font-weight: 600; text-align: left;\">STL<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr style=\"background: #f8f9fa;\">\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Handles any seasonal period<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Limited<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes \u2014 daily, weekly, monthly, any<\/td>\n<\/tr>\n<tr style=\"background: #ffffff;\">\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Evolving seasonality<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No \u2014 static pattern<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes \u2014 seasonal shape can change over time<\/td>\n<\/tr>\n<tr style=\"background: #f8f9fa;\">\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Adjustable smoothness<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No \u2014 fixed<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes \u2014 tune trend and seasonal windows<\/td>\n<\/tr>\n<tr style=\"background: #ffffff;\">\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Robust to outliers<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes \u2014 via outer loop weighting<\/td>\n<\/tr>\n<tr style=\"background: #f8f9fa;\">\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Edge estimates<\/td>\n<td 
style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Missing first\/last m\/2<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Available for the full series<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><strong>Evolving seasonality<\/strong> is the game-changer. In the real world, your December peak might be getting stronger every year as e-commerce grows. Your Q3 trough might be shifting as your customer base changes regions. STL captures this because it re-estimates the seasonal component at every time point, rather than averaging all Decembers together and assuming they&#8217;re identical.<\/p>\n<h3>The Two Parameters That Matter<\/h3>\n<p>STL has several parameters, but two dominate:<\/p>\n<ul>\n<li><strong><code>season(window = ...)<\/code><\/strong> \u2014 Controls how quickly the seasonal component can change. Larger values mean more stable seasonality (closer to classical decomposition). Smaller values allow the seasonal pattern to evolve more freely. A common starting point for monthly data is <code>season(window = 13)<\/code>, which uses just over one year&#8217;s worth of data to estimate each seasonal value.<\/li>\n<li><strong><code>trend(window = ...)<\/code><\/strong> \u2014 Controls how smooth the trend is. Larger values produce smoother trends. Smaller values let the trend respond to shorter-term changes. A rough rule of thumb: set it to about 1.5 times the seasonal period, then adjust based on what the remainder looks like.<\/li>\n<\/ul>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/decomp_stl_comparison.png\" alt=\"STL with different window parameters: how smoothness choices shape the decomposition\" \/><\/p>\n<p><strong>What you&#8217;re seeing:<\/strong> The same data decomposed with three different parameter settings. Narrow windows (left) produce a wiggly trend and rapidly changing seasonality \u2014 responsive but noisy. 
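<\/p>
<p>If you want to reproduce this experiment in Python rather than fpp3, statsmodels&#8217; STL implementation exposes the same two knobs: its <code>seasonal<\/code> and <code>trend<\/code> arguments (odd integers) play the role of the season and trend windows. A sketch on synthetic monthly data, assuming statsmodels is installed:<\/p>
<pre><code class=\"language-python\">import numpy as np
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(7)
t = np.arange(120)                                   # ten years of monthly data
y = 100 + 0.8 * t + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 4, 120)

# season window 13, trend window 19 (about 1.5 x the seasonal period),
# with the robustness loop switched on
res = STL(y, period=12, seasonal=13, trend=19, robust=True).fit()
# res.trend, res.seasonal and res.resid cover the full series, no missing edges
<\/code><\/pre>
<p>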
Wide windows (right) produce a smooth trend and near-constant seasonality \u2014 stable but potentially missing real changes. The middle ground is where the art meets the science.<\/p>\n<p><strong>The diagnostic trick:<\/strong> Look at the remainder. If it shows obvious patterns \u2014 clear seasonality, systematic trends \u2014 your decomposition hasn&#8217;t captured everything. Go back and adjust. A good decomposition produces a remainder that looks like white noise (random, unpatterned, centered on zero).<\/p>\n<h3>STL&#8217;s Limitations<\/h3>\n<p>STL isn&#8217;t perfect. Two limitations matter for supply chain work:<\/p>\n<ol>\n<li><strong>Additive only.<\/strong> STL natively handles additive decomposition. For multiplicative data, you need to log-transform first, decompose, then exponentiate back. It works, but it&#8217;s an extra step and the back-transformed confidence intervals can be asymmetric.<\/li>\n<li><strong>No calendar adjustment.<\/strong> STL treats every month (or quarter, or week) as equally long. It doesn&#8217;t know that February has fewer days than March, or that some months have five Mondays while others have four. For daily-level supply chain data \u2014 warehouse throughput, order volumes \u2014 this can matter.<\/li>\n<\/ol>\n<h3>The Payoff: Seasonally Adjusted Data<\/h3>\n<p>One of the most valuable outputs of decomposition isn&#8217;t a forecast \u2014 it&#8217;s the <strong>seasonally adjusted<\/strong> series. By removing the seasonal component, you can see the underlying trend and unusual events without the seasonal noise:<\/p>\n<p><strong>Seasonally adjusted = y_t &#8211; S\u0302_t<\/strong> (additive)<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/decomp_seasonally_adjusted.png\" alt=\"Original data vs. 
seasonally adjusted: seeing through the seasonal noise\" \/><\/p>\n<p><strong>Why supply chain teams should care:<\/strong> Seasonally adjusted data makes it much easier to detect real changes in underlying demand. Did that jump in March represent genuine demand growth, or was it just the seasonal March peak? Subtracting out the seasonal component gives you the answer. It&#8217;s also what central banks and government statisticians use when they report &#8222;seasonally adjusted&#8220; GDP or employment figures \u2014 the same technique, applied at a very different scale.<\/p>\n<h2>Beyond STL: Modern Decomposition Methods<\/h2>\n<p>STL was published in 1990. A lot has happened in 36 years. The core idea \u2014 iterative LOESS smoothing \u2014 remains sound, but researchers have extended it to handle increasingly complex real-world data. Here are three methods pushing the frontier.<\/p>\n<h3>MSTL: Multiple Seasonal Patterns<\/h3>\n<p><strong>MSTL (Multiple Seasonal-Trend decomposition using LOESS)<\/strong>, developed by Bandara, Hyndman, and Bergmeir, extends STL to handle data with <em>multiple<\/em> seasonal patterns simultaneously.<\/p>\n<p>Why does this matter? Because much real-world supply chain data has more than one seasonal cycle. Daily warehouse data might have both a <strong>weekly<\/strong> pattern (lower volumes on weekends) and an <strong>annual<\/strong> pattern (holiday peaks in December). Hourly electricity demand has <strong>daily<\/strong> cycles (morning ramp-up, evening peak), <strong>weekly<\/strong> cycles (weekday vs. weekend), and <strong>annual<\/strong> cycles (summer cooling, winter heating).<\/p>\n<p>STL can only handle one seasonal period at a time. You could nest multiple STL passes, but the order of decomposition affects the results, and parameter tuning becomes a nightmare. 
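<\/p>
<p>As a concrete taste of the alternative, statsmodels also ships an MSTL implementation (assumed installed here; the data are synthetic) that takes a tuple of periods and returns one seasonal column per period:<\/p>
<pre><code class=\"language-python\">import numpy as np
from statsmodels.tsa.seasonal import MSTL

rng = np.random.default_rng(3)
t = np.arange(1095)                                  # three years of daily data
weekly = 10 * np.sin(2 * np.pi * t / 7)              # weekday vs. weekend rhythm
annual = 50 * np.sin(2 * np.pi * t / 365)            # holiday-season swing
y = 200 + 0.1 * t + weekly + annual + rng.normal(0, 2, 1095)

res = MSTL(y, periods=(7, 365)).fit()
seasonal = np.asarray(res.seasonal)                  # one column per seasonal period
<\/code><\/pre>
<p>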
MSTL handles all seasonal patterns in a single, principled algorithm \u2014 and with fewer tuning parameters than alternatives like Prophet or TBATS.<\/p>\n<p>In fpp3, you can use MSTL directly with the <code>STL()<\/code> function by specifying multiple seasonal periods. It&#8217;s already built into the tidyverts ecosystem.<\/p>\n<h3>STAHL: When Holidays Break Your Patterns<\/h3>\n<p><strong>STAHL (Seasonal, Trend, and Holiday Decomposition with LOESS)<\/strong> is a 2025 innovation from Quantcube Technology that adds something STL and MSTL can&#8217;t handle: an explicit <strong>holiday component<\/strong>.<\/p>\n<p>Think about it: Easter moves. Chinese New Year moves. Ramadan moves. These events create demand spikes (or dips) that don&#8217;t align with fixed seasonal periods. STL dumps them into the remainder. Your seasonal factors end up contaminated by holiday effects that shift from year to year, and your remainder contains systematic patterns that aren&#8217;t truly random.<\/p>\n<p>STAHL solves this with three key innovations:<\/p>\n<ol>\n<li><strong>Spectral frequency identification<\/strong> \u2014 automatically detects which seasonal periods are present in the data, rather than requiring you to specify them<\/li>\n<li><strong>Explicit holiday component<\/strong> \u2014 separates holiday effects from the regular seasonal pattern<\/li>\n<li><strong>Second inner loop<\/strong> \u2014 disentangles outliers from holiday effects (a distinction that earlier methods blur together)<\/li>\n<\/ol>\n<p>The results are striking: in automated quality assessments, STAHL showed a <strong>42% improvement<\/strong> over comparable methods on key decomposition quality metrics. 
For supply chain data with significant holiday effects \u2014 retail, food &amp; beverage, consumer electronics \u2014 this is a major advance.<\/p>\n<h3>BASTION: Bayesian Decomposition with Uncertainty<\/h3>\n<p><strong>BASTION (Bayesian Adaptive Seasonality and Trend DecompositION)<\/strong> takes a fundamentally different approach. Instead of LOESS smoothing, it uses a <strong>Bayesian framework<\/strong> to decompose time series into trend and multiple seasonal components.<\/p>\n<p>What makes BASTION interesting for supply chain applications:<\/p>\n<ul>\n<li><strong>Uncertainty quantification:<\/strong> Every component estimate comes with a credible interval. You don&#8217;t just get &#8222;the trend is going up&#8220; \u2014 you get &#8222;the trend is going up, and we&#8217;re 95% confident it&#8217;s between X and Y.&#8220; For safety stock calculations and scenario planning, this is gold.<\/li>\n<li><strong>Abrupt change handling:<\/strong> BASTION can detect and adapt to sudden level shifts \u2014 a new customer, a lost contract, a supply chain disruption that permanently alters your demand pattern. STL&#8217;s LOESS smoothing tends to gradually absorb these shifts rather than detecting them cleanly.<\/li>\n<li><strong>Robustness to outliers:<\/strong> The Bayesian framework naturally handles unusual observations through its probabilistic model, without needing a separate robustness loop.<\/li>\n<\/ul>\n<p>BASTION is still relatively new and not yet part of the standard fpp3 toolkit, but it represents where the field is heading: probabilistic decomposition that quantifies what we know and what we don&#8217;t.<\/p>\n<h3>X-11 and SEATS: The Official Statistics Workhorses<\/h3>\n<p>No discussion of decomposition is complete without mentioning the methods used by statistical agencies worldwide. <strong>X-11<\/strong> (developed at the U.S. 
Census Bureau) and <strong>SEATS<\/strong> (developed at the Bank of Spain) are the backbone of official economic statistics \u2014 the methods behind every &#8222;seasonally adjusted employment figure&#8220; and &#8222;seasonally adjusted GDP growth rate&#8220; you&#8217;ve ever seen in the news.<\/p>\n<p>The modern implementation, <strong>X-13ARIMA-SEATS<\/strong>, combines both approaches with ARIMA modeling for trend extension. In R, the <code>seasonal<\/code> package by Christoph Sax provides a clean interface:<\/p>\n<pre><code class=\"language-r\">library(seasonal)\nseas_model &lt;- seas(AirPassengers)\nplot(seas_model)\n<\/code><\/pre>\n<p>For most supply chain applications, STL or MSTL will serve you better \u2014 X-11\/SEATS are optimized for the specific needs of macroeconomic statistics (calendar adjustment, trading day effects, holiday correction) that matter less for demand planning. But if your organization needs to produce official or regulatory-grade seasonally adjusted figures, these are the methods to use.<\/p>\n<h2>Method Comparison: Choosing the Right Tool<\/h2>\n<p>Here&#8217;s the cheat sheet. 
Every method has trade-offs \u2014 there is no single &#8222;best&#8220; decomposition method for all data.<\/p>\n<table style=\"border-collapse: collapse; width: 100%; margin: 1.5em 0; font-size: 0.95em; line-height: 1.5;\">\n<thead>\n<tr>\n<th style=\"border: 1px solid #ddd; padding: 10px 14px; background: #0073aa; color: #fff; font-weight: 600; text-align: left;\">Criterion<\/th>\n<th style=\"border: 1px solid #ddd; padding: 10px 14px; background: #0073aa; color: #fff; font-weight: 600; text-align: left;\">Classical<\/th>\n<th style=\"border: 1px solid #ddd; padding: 10px 14px; background: #0073aa; color: #fff; font-weight: 600; text-align: left;\">STL<\/th>\n<th style=\"border: 1px solid #ddd; padding: 10px 14px; background: #0073aa; color: #fff; font-weight: 600; text-align: left;\">X-11 \/ SEATS<\/th>\n<th style=\"border: 1px solid #ddd; padding: 10px 14px; background: #0073aa; color: #fff; font-weight: 600; text-align: left;\">MSTL<\/th>\n<th style=\"border: 1px solid #ddd; padding: 10px 14px; background: #0073aa; color: #fff; font-weight: 600; text-align: left;\">STAHL<\/th>\n<th style=\"border: 1px solid #ddd; padding: 10px 14px; background: #0073aa; color: #fff; font-weight: 600; text-align: left;\">BASTION<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr style=\"background: #f8f9fa;\">\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Evolving seasonality<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<\/tr>\n<tr style=\"background: #ffffff;\">\n<td style=\"border: 1px 
solid #ddd; padding: 9px 14px; text-align: left;\">Multiple seasonal periods<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<\/tr>\n<tr style=\"background: #f8f9fa;\">\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Robust to outliers<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Partial<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<\/tr>\n<tr style=\"background: #ffffff;\">\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Holiday handling<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes (X-11)<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<\/tr>\n<tr style=\"background: #f8f9fa;\">\n<td style=\"border: 1px solid 
#ddd; padding: 9px 14px; text-align: left;\">Uncertainty estimates<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<\/tr>\n<tr style=\"background: #ffffff;\">\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Calendar adjustment<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Yes<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">No<\/td>\n<\/tr>\n<tr style=\"background: #f8f9fa;\">\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Ease of use in R<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Easy<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Easy (fpp3)<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Moderate (<code>seasonal<\/code>)<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Easy (fpp3)<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Specialized<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Specialized<\/td>\n<\/tr>\n<tr style=\"background: 
#ffffff;\">\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Best for<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Teaching<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">General use<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Official statistics<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Complex seasonality<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Holiday-heavy data<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">Scenario planning<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><strong>The decision tree for supply chain forecasters:<\/strong><\/p>\n<ol>\n<li><strong>Single seasonal period, no holidays?<\/strong> Use <strong>STL<\/strong>. It&#8217;s the default for good reason.<\/li>\n<li><strong>Multiple seasonal periods<\/strong> (e.g., daily data with weekly + annual cycles)? Use <strong>MSTL<\/strong>.<\/li>\n<li><strong>Significant holiday effects<\/strong> (retail, consumer goods)? Consider <strong>STAHL<\/strong> or <strong>X-11<\/strong>.<\/li>\n<li><strong>Need uncertainty bands<\/strong> on components for planning? Watch <strong>BASTION<\/strong> as it matures.<\/li>\n<li><strong>Government reporting<\/strong> or official statistics? Use <strong>X-13ARIMA-SEATS<\/strong> via the <code>seasonal<\/code> package.<\/li>\n<li><strong>Just learning?<\/strong> Start with STL. Seriously. 
It handles 90% of cases well, and everything else is a refinement.<\/li>\n<\/ol>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/decomp_method_comparison.png\" alt=\"Comparing decomposition methods on the same data\" \/><\/p>\n<h2>Interactive Dashboard<\/h2>\n<p>Explore the decomposition methods yourself \u2014 adjust parameters, switch between additive and multiplicative models, and see how different STL window settings change the results in real time.<\/p>\n<div class=\"dashboard-link\" style=\"margin: 2em 0; padding: 1.5em; background: #f8f9fa; border-left: 4px solid #0073aa; border-radius: 4px;\">\n<p><a style=\"display: inline-block; padding: 0.6em 1.2em; background: #0073aa; color: #fff; text-decoration: none; border-radius: 4px; font-weight: bold;\" href=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/2026-04-11_Time_Series_Decomposition_FPP3_dashboard.html\" target=\"_blank\" rel=\"noopener\">Open Interactive Dashboard \u2192<\/a><\/p>\n<\/div>\n<h2>Your Next Steps<\/h2>\n<p>Decomposition is a diagnostic tool, not a forecast. It tells you <em>what&#8217;s inside<\/em> your time series so you can make better modeling decisions. Here are five things you can do with this knowledge right now:<\/p>\n<ol>\n<li><strong>Decompose one real demand series this week.<\/strong> Pick your highest-volume SKU, pull 3+ years of monthly data, and run <code>STL()<\/code> in fpp3. Look at the remainder \u2014 does it look random? If not, your current forecast model is missing something.<\/li>\n<li><strong>Check your additive\/multiplicative assumption.<\/strong> Plot your data. If seasonal amplitude grows with the level, log-transform before decomposing. 
Getting this wrong silently biases every seasonal factor downstream.<\/li>\n<li><strong>Use seasonally adjusted data in your next S&amp;OP meeting.<\/strong> When someone says &#8220;demand jumped 15% last month,&#8221; pull out the seasonally adjusted series and check \u2014 was it a real jump, or just the seasonal March peak? You&#8217;ll be the smartest person in the room.<\/li>\n<li><strong>Compare your ERP&#8217;s seasonal factors to STL&#8217;s.<\/strong> Many ERP systems use classical decomposition (or worse, static seasonal indices that haven&#8217;t been updated in years). Run STL on the same data and compare. The differences will tell you how much forecast accuracy you&#8217;re leaving on the table.<\/li>\n<li><strong>Read ahead.<\/strong> Next week, we&#8217;ll use what we&#8217;ve learned here \u2014 the trend, the seasonality, the choice between additive and multiplicative \u2014 to build actual forecast models with ETS and ARIMA. Decomposition was the diagnosis. Forecasting is the treatment.<\/li>\n<\/ol>\n<details>\n<summary><strong>Show R Code<\/strong><\/summary>\n<pre><code class=\"language-r\"># =============================================================================\n# Time Series Decomposition in R \u2014 Complete Reproducible Code\n# =============================================================================\n# This code accompanies the blog post on inphronesys.com\n# All examples use the fpp3 ecosystem (tidyverts)\n# =============================================================================\n\n# --- Load packages ---\nlibrary(fpp3)        # Loads tsibble, tsibbledata, feasts, fable, ggplot2, etc.\nlibrary(patchwork)   # Multi-panel plot layouts\nlibrary(slider)      # Efficient moving average calculations\nlibrary(scales)      # Axis label formatting\n\nsource(\"Scripts\/theme_inphronesys.R\")\n\n# --- Prepare the data ---\n# US Retail Employment (monthly, 1990 onward)\nretail &lt;- us_employment |&gt;\n  filter(year(Month) &gt;= 
1990, Title == \"Retail Trade\") |&gt;\n  select(Month, Employed)\n\n# =============================================================================\n# Step 1: Understand Additive vs. Multiplicative Decomposition\n# =============================================================================\n\n# Additive: Y = Trend + Seasonal + Remainder\n#   Use when seasonal swings stay CONSTANT regardless of level\nbeer &lt;- aus_production |&gt;\n  filter(year(Quarter) &gt;= 1992) |&gt;\n  select(Quarter, Beer)\n\nautoplot(beer, Beer) +\n  labs(title = \"Beer Production \u2014 Constant Seasonal Amplitude (Additive)\")\n\n# Multiplicative: Y = Trend \u00d7 Seasonal \u00d7 Remainder\n#   Use when seasonal swings GROW with the level\na10 &lt;- PBS |&gt;\n  filter(ATC2 == \"A10\") |&gt;\n  summarise(Cost = sum(Cost))\n\nautoplot(a10, Cost) +\n  labs(title = \"Drug Sales \u2014 Growing Seasonal Amplitude (Multiplicative)\")\n\n# =============================================================================\n# Step 2: Moving Averages \u2014 Estimating the Trend\n# =============================================================================\n\nretail_ma &lt;- retail |&gt;\n  as_tibble() |&gt;\n  mutate(\n    MA_5    = slide_dbl(Employed, mean, .before = 2, .after = 2, .complete = TRUE),  # centered 5-term MA\n    ma12    = slide_dbl(Employed, mean, .before = 5, .after = 6, .complete = TRUE),  # 12-term MA (off-center for even order)\n    MA_2x12 = slide_dbl(ma12, mean, .before = 1, .after = 0, .complete = TRUE)       # 2x12-MA: average two adjacent 12-MAs to re-center\n  )\n\n# =============================================================================\n# Step 3: Classical Decomposition\n# =============================================================================\n\nretail |&gt;\n  model(classical_decomposition(Employed, type = \"additive\")) |&gt;\n  components() |&gt;\n  autoplot() +\n  labs(title = \"Classical Additive Decomposition\")\n\n# =============================================================================\n# Step 4: STL Decomposition (the modern standard)\n# 
=============================================================================\n\ndcmp &lt;- retail |&gt;\n  model(STL(Employed ~ trend(window = 13) + season(window = \"periodic\"))) |&gt;\n  components()\n\nautoplot(dcmp) +\n  labs(title = \"STL Decomposition of US Retail Employment\")\n\n# =============================================================================\n# Step 5: Tuning STL Parameters\n# =============================================================================\n\n# Compare flexible vs. smooth trend windows\nstl_flex &lt;- retail |&gt;\n  model(STL(Employed ~ trend(window = 7) + season(window = \"periodic\"))) |&gt;\n  components()\n\nstl_smooth &lt;- retail |&gt;\n  model(STL(Employed ~ trend(window = 21) + season(window = \"periodic\"))) |&gt;\n  components()\n\n# Plot each to see how the trend window changes the estimate\nautoplot(stl_flex) +\n  labs(title = \"STL \u2014 Flexible Trend (window = 7)\")\n\nautoplot(stl_smooth) +\n  labs(title = \"STL \u2014 Smooth Trend (window = 21)\")\n\n# =============================================================================\n# Step 6: Extracting the Seasonally Adjusted Series\n# =============================================================================\n\ndcmp |&gt;\n  as_tibble() |&gt;\n  mutate(\n    Month = as.Date(Month),\n    Seasonally_Adjusted = Employed - season_year\n  ) |&gt;\n  ggplot(aes(x = Month)) +\n  geom_line(aes(y = Employed), color = \"grey80\") +\n  geom_line(aes(y = Seasonally_Adjusted), color = \"#0073aa\") +\n  labs(\n    title = \"Original vs. 
Seasonally Adjusted\",\n    y = \"Employed (thousands)\", x = NULL\n  )\n\n# =============================================================================\n# Step 7: Diagnostics \u2014 Is the Remainder White Noise?\n# =============================================================================\n\n# Ljung-Box test for autocorrelation in the remainder \u2014 a large p-value\n# suggests no structure is left for the decomposition to explain:\ndcmp |&gt;\n  as_tsibble() |&gt;\n  features(remainder, ljung_box, lag = 24)\n\n# =============================================================================\n# Apply to Your Own Data\n# =============================================================================\n\n# Replace this with your own time series:\n#\n# my_data &lt;- read_csv(\"your_data.csv\") |&gt;\n#   mutate(Date = yearmonth(Date)) |&gt;\n#   as_tsibble(index = Date)\n#\n# # Visualize first\n# autoplot(my_data, Value)\n#\n# # Check: additive or multiplicative?\n# # If seasonal swings grow \u2192 use log transform for additive\n# # (and then decompose Log_Value instead of Value below)\n# my_data &lt;- my_data |&gt; mutate(Log_Value = log(Value))\n#\n# # Decompose with STL\n# my_dcmp &lt;- my_data |&gt;\n#   model(STL(Value ~ trend(window = 13) + season(window = \"periodic\"))) |&gt;\n#   components()\n#\n# # Inspect components\n# autoplot(my_dcmp)\n#\n# # Extract seasonally adjusted series\n# my_sa &lt;- my_dcmp |&gt;\n#   as_tibble() |&gt;\n#   mutate(SA = Value - season_year)\n<\/code><\/pre>\n<\/details>\n<h2>References<\/h2>\n<ul>\n<li>Hyndman, R.J., &amp; Athanasopoulos, G. (2021). <em>Forecasting: Principles and Practice<\/em>, 3rd edition. OTexts. Chapter 3: Time Series Decomposition. <a href=\"https:\/\/otexts.com\/fpp3\/decomposition.html\">otexts.com\/fpp3\/decomposition.html<\/a><\/li>\n<li>Cleveland, R.B., Cleveland, W.S., McRae, J.E., &amp; Terpenning, I. (1990). STL: A Seasonal-Trend Decomposition Procedure Based on Loess (with Discussion). <em>Journal of Official Statistics<\/em>, 6(1), 3\u201373.<\/li>\n<li>Bandara, K., Hyndman, R.J., &amp; Bergmeir, C. (2025). 
MSTL: A Seasonal-Trend Decomposition Algorithm for Time Series with Multiple Seasonal Patterns. <em>International Journal of Operational Research<\/em>, 52(1), 79\u201398.<\/li>\n<li>Haller, V., Daniel, S., &amp; Bellone, B. (2025). STAHL: Seasonal, Trend, and Holiday Decomposition with Loess. <em>Journal of Official Statistics<\/em>, 41(4).<\/li>\n<li>Sax, C., &amp; Eddelbuettel, D. (2018). Seasonal Adjustment by X-13ARIMA-SEATS in R. <em>Journal of Statistical Software<\/em>, 87(11), 1\u201317.<\/li>\n<li>Persons, W.M. (1919). Indices of Business Conditions. <em>Review of Economics and Statistics<\/em>, 1, 5\u2013107. (Early classical decomposition methods.)<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Every time series is a cocktail of trend, seasonality, and noise. Decomposition is how you separate the ingredients \u2014 and once you can see each one, choosing the right forecast model stops being a guessing game.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[13,206,115],"tags":[263,8,127,265,15,124,264,26,208,266],"class_list":["post-1729","post","type-post","status-publish","format-standard","hentry","category-data-science","category-forecasting","category-supply-chain-management","tag-decomposition","tag-forecasting","tag-fpp3","tag-mstl","tag-r","tag-seasonality","tag-stl","tag-supply-chain-analytics","tag-time-series-2","tag-trend"],"_links":{"self":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1729","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/inphrones
ys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1729"}],"version-history":[{"count":1,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1729\/revisions"}],"predecessor-version":[{"id":1730,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1729\/revisions\/1730"}],"wp:attachment":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1729"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1729"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1729"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}