{"id":1096,"date":"2026-02-22T14:09:59","date_gmt":"2026-02-22T14:09:59","guid":{"rendered":"https:\/\/inphronesys.com\/?p=1096"},"modified":"2026-02-22T14:20:37","modified_gmt":"2026-02-22T14:20:37","slug":"time-series-analysis-for-supply-chain-management-reading-the-rhythm-of-demand-2","status":"publish","type":"post","link":"https:\/\/inphronesys.com\/?p=1096","title":{"rendered":"Time Series Analysis for Supply Chain Management: Reading the Rhythm of Demand"},"content":{"rendered":"<h2>Your Demand Data Has a Heartbeat<\/h2>\n<p>Every product in your supply chain has a rhythm. Some pulse with the seasons \u2014 sunscreen in June, heating oil in November, and ice cream in July (always July). Others march to a steady upward beat as markets grow. A few just thrash around unpredictably, like a drummer who lost the sheet music.<\/p>\n<p>The problem is that most supply chain teams treat demand data as a single, monolithic number: &quot;We sold 145,000 cases last July, so budget for something similar.&quot; That&#8217;s not analysis \u2014 that&#8217;s looking at a heart monitor, seeing the spikes, and concluding the patient has a heartbeat. True, but not exactly actionable.<\/p>\n<p>Time series analysis does something fundamentally more useful: it separates the signal into its component parts. That 145,000-case July figure is actually the sum of a long-term growth trend (the business is expanding), a seasonal pattern (people eat more ice cream when it&#8217;s hot \u2014 shocking, I know), and random noise (a heat wave, a competitor&#8217;s recall, a TikTok trend involving your mint chocolate chip flavor). Once you can see these components independently, you can plan against each one differently.<\/p>\n<p>This post walks through the complete toolkit \u2014 from decomposition to diagnostics to forecasting \u2014 using five years of ice cream demand data. 
We&#8217;ll use R&#8217;s modern <code>fpp3<\/code> ecosystem, which turns time series analysis from an arcane statistical ritual into something surprisingly readable. And we&#8217;ll be honest about where these methods fall flat, because overselling a forecasting technique is how you end up with a warehouse full of ice cream in January.<\/p>\n<h2>The Data: Five Years of Frozen Profits<\/h2>\n<p>Let&#8217;s start with our running example: monthly ice cream shipments (in thousands of cases) from a mid-size manufacturer serving the U.S. Northeast region, January 2020 through December 2024.<\/p>\n<table>\n<thead>\n<tr>\n<th style=\"text-align:center\">Year<\/th>\n<th style=\"text-align:center\">Jan<\/th>\n<th style=\"text-align:center\">Feb<\/th>\n<th style=\"text-align:center\">Mar<\/th>\n<th style=\"text-align:center\">Apr<\/th>\n<th style=\"text-align:center\">May<\/th>\n<th style=\"text-align:center\">Jun<\/th>\n<th style=\"text-align:center\">Jul<\/th>\n<th style=\"text-align:center\">Aug<\/th>\n<th style=\"text-align:center\">Sep<\/th>\n<th style=\"text-align:center\">Oct<\/th>\n<th style=\"text-align:center\">Nov<\/th>\n<th style=\"text-align:center\">Dec<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td style=\"text-align:center\">2020<\/td>\n<td style=\"text-align:center\">42<\/td>\n<td style=\"text-align:center\">45<\/td>\n<td style=\"text-align:center\">52<\/td>\n<td style=\"text-align:center\">68<\/td>\n<td style=\"text-align:center\">89<\/td>\n<td style=\"text-align:center\">118<\/td>\n<td style=\"text-align:center\">138<\/td>\n<td style=\"text-align:center\">132<\/td>\n<td style=\"text-align:center\">105<\/td>\n<td style=\"text-align:center\">72<\/td>\n<td style=\"text-align:center\">51<\/td>\n<td style=\"text-align:center\">48<\/td>\n<\/tr>\n<tr>\n<td style=\"text-align:center\">2021<\/td>\n<td style=\"text-align:center\">44<\/td>\n<td style=\"text-align:center\">47<\/td>\n<td style=\"text-align:center\">55<\/td>\n<td 
style=\"text-align:center\">72<\/td>\n<td style=\"text-align:center\">93<\/td>\n<td style=\"text-align:center\">124<\/td>\n<td style=\"text-align:center\">145<\/td>\n<td style=\"text-align:center\">137<\/td>\n<td style=\"text-align:center\">108<\/td>\n<td style=\"text-align:center\">75<\/td>\n<td style=\"text-align:center\">53<\/td>\n<td style=\"text-align:center\">50<\/td>\n<\/tr>\n<tr>\n<td style=\"text-align:center\">2022<\/td>\n<td style=\"text-align:center\">43<\/td>\n<td style=\"text-align:center\">46<\/td>\n<td style=\"text-align:center\">54<\/td>\n<td style=\"text-align:center\">70<\/td>\n<td style=\"text-align:center\">91<\/td>\n<td style=\"text-align:center\">121<\/td>\n<td style=\"text-align:center\">142<\/td>\n<td style=\"text-align:center\">135<\/td>\n<td style=\"text-align:center\">107<\/td>\n<td style=\"text-align:center\">74<\/td>\n<td style=\"text-align:center\">52<\/td>\n<td style=\"text-align:center\">49<\/td>\n<\/tr>\n<tr>\n<td style=\"text-align:center\">2023<\/td>\n<td style=\"text-align:center\">45<\/td>\n<td style=\"text-align:center\">48<\/td>\n<td style=\"text-align:center\">57<\/td>\n<td style=\"text-align:center\">74<\/td>\n<td style=\"text-align:center\">96<\/td>\n<td style=\"text-align:center\">128<\/td>\n<td style=\"text-align:center\">149<\/td>\n<td style=\"text-align:center\">141<\/td>\n<td style=\"text-align:center\">112<\/td>\n<td style=\"text-align:center\">77<\/td>\n<td style=\"text-align:center\">55<\/td>\n<td style=\"text-align:center\">52<\/td>\n<\/tr>\n<tr>\n<td style=\"text-align:center\">2024<\/td>\n<td style=\"text-align:center\">46<\/td>\n<td style=\"text-align:center\">49<\/td>\n<td style=\"text-align:center\">58<\/td>\n<td style=\"text-align:center\">76<\/td>\n<td style=\"text-align:center\">99<\/td>\n<td style=\"text-align:center\">131<\/td>\n<td style=\"text-align:center\">153<\/td>\n<td style=\"text-align:center\">145<\/td>\n<td style=\"text-align:center\">115<\/td>\n<td style=\"text-align:center\">79<\/td>\n<td 
style=\"text-align:center\">56<\/td>\n<td style=\"text-align:center\">53<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Even eyeballing this table reveals the two dominant signals: July is always the peak (3.2x the January trough), and demand creeps upward about 2-3% per year. The question is whether we can quantify those signals precisely enough to forecast what comes next \u2014 and, more importantly, to understand how confident we should be in that forecast.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/02\/ts_demand_lineplot.png\" alt=\"Monthly ice cream demand from Jan 2020 to Dec 2024 showing repeating seasonal peaks in summer and mild upward trend\" \/><\/p>\n<h2>Decomposition: Taking the Signal Apart<\/h2>\n<h3>STL \u2014 The Swiss Army Knife of Decomposition<\/h3>\n<p>STL (Seasonal and Trend decomposition using Loess) is the most versatile decomposition method available. Developed by Cleveland et al. in 1990, it uses locally weighted regression (LOESS) to iteratively separate a time series into three additive components:<\/p>\n<p><strong>y_t = Trend + Seasonal + Remainder<\/strong><\/p>\n<ul>\n<li><strong>Trend<\/strong>: The underlying long-term trajectory. For our ice cream data, this is the slow upward drift from roughly 80,000 cases\/month average in 2020 to about 88,000 in 2024.<\/li>\n<li><strong>Seasonal<\/strong>: The repeating calendar-driven pattern. This is the July peak and January trough that we can set our production schedule by.<\/li>\n<li><strong>Remainder<\/strong>: Everything else \u2014 the noise, the one-off events, the unexplained variation. This is where surprises live.<\/li>\n<\/ul>\n<p>What makes STL superior to classical decomposition is flexibility. The seasonal component can evolve over time (useful if ice cream season is gradually starting earlier due to climate change). The trend can bend without breaking. 
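<\/p>
<p>To make the additive identity concrete, here is a deliberately crude classical decomposition in base R: a centred moving-average trend plus per-month averages. This is a sketch, not STL (no loess, no iteration, no robustness option), so its numbers will differ slightly from what the <code>fpp3<\/code> pipeline in the appendix reports:<\/p>
<pre><code class=\"language-r\"># Monthly shipments (thousand cases), Jan 2020 - Dec 2024, from the table\ndemand &lt;- c(42, 45, 52, 68, 89, 118, 138, 132, 105, 72, 51, 48,   # 2020\n            44, 47, 55, 72, 93, 124, 145, 137, 108, 75, 53, 50,   # 2021\n            43, 46, 54, 70, 91, 121, 142, 135, 107, 74, 52, 49,   # 2022\n            45, 48, 57, 74, 96, 128, 149, 141, 112, 77, 55, 52,   # 2023\n            46, 49, 58, 76, 99, 131, 153, 145, 115, 79, 56, 53)   # 2024\n\n# Trend: centred 2x12 moving average (NA for the first and last 6 months)\ntrend &lt;- stats::filter(demand, c(0.5, rep(1, 11), 0.5) \/ 12, sides = 2)\n\n# Seasonal: mean detrended value per calendar month, centred on zero\nmonthly  &lt;- tapply(demand - trend, rep(1:12, 5), mean, na.rm = TRUE)\nseasonal &lt;- rep(monthly - mean(monthly), 5)\n\nremainder &lt;- demand - trend - seasonal\n\n# Strength of seasonality, as feat_stl() defines it:\n# F_S = max(0, 1 - Var(Remainder) \/ Var(Seasonal + Remainder))\nok  &lt;- !is.na(remainder)\nF_S &lt;- max(0, 1 - var(remainder[ok]) \/ var(seasonal[ok] + remainder[ok]))<\/code><\/pre>
<p>Even this crude version leaves almost nothing in the remainder once the July-to-January swing comes out, which is exactly what the strength-of-seasonality measure quantifies.<\/p>
<p>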
And when you turn on the <code>robust<\/code> option, outliers won&#8217;t derail the entire decomposition \u2014 which matters when your 2020 data includes a few pandemic-era anomalies.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/02\/ts_stl_decomposition.png\" alt=\"STL decomposition of ice cream demand showing original series, trend, seasonal, and remainder components\" \/><\/p>\n<p>The decomposition tells us something quantitatively useful: the <strong>strength of seasonality<\/strong> for this data is approximately 0.95 on a 0-to-1 scale (where 1 means seasonality completely dominates the signal). That&#8217;s extremely strong \u2014 but expected for ice cream, where July volumes run more than three times January&#8217;s. The <strong>strength of trend<\/strong> is around 0.4, confirming that growth exists but is modest compared to the seasonal swing.<\/p>\n<p>This matters for planning. A product with F_S = 0.95 demands a seasonally differentiated inventory strategy. You need production ramp-ups starting in March, peak warehouse capacity by May, and a rapid drawdown plan by September. Treating each month the same would be like staffing a ski resort identically in July and January.<\/p>\n<h2>Reading the Seasonal Fingerprint<\/h2>\n<p>Decomposition gives you the broad picture, but two specialized visualizations from the <code>feasts<\/code> package let you inspect the seasonal pattern at a finer resolution.<\/p>\n<h3>The Seasonal Plot: Year-Over-Year Overlay<\/h3>\n<p>A seasonal plot (<code>gg_season()<\/code>) overlays each year&#8217;s data on a single January-through-December axis. 
For our ice cream data, this produces five nearly parallel curves \u2014 one per year \u2014 that rise from January through July and fall back to December.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/02\/ts_seasonal_plot.png\" alt=\"Seasonal plot with five overlaid annual curves (2020-2024) showing consistent July peak\" \/><\/p>\n<p>What to look for:<\/p>\n<ul>\n<li><strong>Consistency<\/strong>: If the curves stack neatly, the seasonal pattern is stable. Our ice cream data is textbook-consistent.<\/li>\n<li><strong>Shifting peaks<\/strong>: If July&#8217;s peak starts migrating toward June across years, that&#8217;s a structural change worth investigating (earlier summers, shifting promotion calendars, changing consumer behavior).<\/li>\n<li><strong>Outlier years<\/strong>: A year that breaks the pattern \u2014 say 2020 dipping in April due to lockdown disruptions \u2014 stands out immediately.<\/li>\n<\/ul>\n<h3>The Subseries Plot: Is Each Month Stable?<\/h3>\n<p>A subseries plot (<code>gg_subseries()<\/code>) shows a separate mini time-series for each month, with a horizontal blue line marking that month&#8217;s mean. January gets its own panel showing all five January values over 2020-2024. February gets its own panel. And so on.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/02\/ts_subseries_plot.png\" alt=\"Seasonal subseries plot with 12 monthly panels showing 5-year trajectories and mean lines\" \/><\/p>\n<p>This plot answers a question the seasonal plot cannot: <strong>Is the seasonal pattern shifting over time?<\/strong> If January&#8217;s five data points show a clear upward trend within that panel, it means winter demand is growing faster than you might assume from the overall trend. 
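<\/p>
<p>That within-panel drift is easy to quantify. As a rough base-R check (my own sketch, using the table&#8217;s values in thousands of cases), fit a tiny linear regression to each calendar month&#8217;s five observations and compare the slopes:<\/p>
<pre><code class=\"language-r\">demand &lt;- c(42, 45, 52, 68, 89, 118, 138, 132, 105, 72, 51, 48,   # 2020\n            44, 47, 55, 72, 93, 124, 145, 137, 108, 75, 53, 50,   # 2021\n            43, 46, 54, 70, 91, 121, 142, 135, 107, 74, 52, 49,   # 2022\n            45, 48, 57, 74, 96, 128, 149, 141, 112, 77, 55, 52,   # 2023\n            46, 49, 58, 76, 99, 131, 153, 145, 115, 79, 56, 53)   # 2024\n\nby_month &lt;- matrix(demand, nrow = 12)  # one row per month, one column per year\n\n# Slope of each month over 2020-2024, in thousand cases per year\nslopes &lt;- apply(by_month, 1, function(y) coef(lm(y ~ I(1:5)))[2])\nround(slopes, 2)\n# January rises ~0.9 per year and July ~3.4; in relative terms both are\n# roughly 2% per year, so no month is quietly outgrowing the rest here<\/code><\/pre>
<p>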
For production planning, that&#8217;s the difference between maintaining a flat winter production schedule and gradually increasing it.<\/p>\n<h2>Autocorrelation: The Demand Memory Test<\/h2>\n<p>The Autocorrelation Function (ACF) measures how strongly today&#8217;s demand correlates with demand at various lags. For monthly data, the ACF at lag 1 asks: &quot;Does this month&#8217;s demand tell me anything about next month&#8217;s?&quot; At lag 12: &quot;Does this month tell me about the same month next year?&quot;<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/02\/ts_acf_plot.png\" alt=\"ACF plot showing strong annual seasonality with spikes at lags 12, 24, and 36\" \/><\/p>\n<p>For ice cream demand, the ACF tells a clear story:<\/p>\n<ul>\n<li><strong>Strong positive spikes at lags 12, 24, and 36<\/strong>: Annual seasonality is dominant. July 2023 is highly correlated with July 2022 and July 2021. This is your most exploitable pattern.<\/li>\n<li><strong>Negative correlation around lag 6<\/strong>: Demand six months ago is <em>anti-correlated<\/em> with current demand. Makes sense \u2014 when July was high, January was low. These are literally opposite seasons.<\/li>\n<li><strong>Slow decay at seasonal lags<\/strong>: The correlations at lag 12, 24, 36 remain strong rather than dying off quickly. This tells us the seasonal pattern is stable over the full 5-year window.<\/li>\n<\/ul>\n<p>The Partial Autocorrelation Function (PACF) strips out indirect effects: it shows the <em>direct<\/em> correlation between y_t and y_{t-k} after removing the influence of intermediate lags. A sharp cutoff in the PACF at lag 12, with the spike at lag 1 also significant, suggests an ARIMA model with both non-seasonal and seasonal autoregressive terms \u2014 something like ARIMA(1,0,0)(1,1,0)_12. 
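<\/p>
<p>You don&#8217;t need the full <code>feasts<\/code> machinery to confirm the headline lags. Base R&#8217;s <code>acf()<\/code> on the table&#8217;s 60 values (a quick sketch, separate from the plots above) shows the same structure:<\/p>
<pre><code class=\"language-r\">demand &lt;- c(42, 45, 52, 68, 89, 118, 138, 132, 105, 72, 51, 48,   # 2020\n            44, 47, 55, 72, 93, 124, 145, 137, 108, 75, 53, 50,   # 2021\n            43, 46, 54, 70, 91, 121, 142, 135, 107, 74, 52, 49,   # 2022\n            45, 48, 57, 74, 96, 128, 149, 141, 112, 77, 55, 52,   # 2023\n            46, 49, 58, 76, 99, 131, 153, 145, 115, 79, 56, 53)   # 2024\n\nr &lt;- acf(demand, lag.max = 36, plot = FALSE)$acf\n# r[1] is lag 0, so lag k lives at r[k + 1]\nr[13]   # lag 12: strongly positive (same month, last year)\nr[7]    # lag 6:  strongly negative (opposite season)\nr[25]   # lag 24: still clearly positive two years out<\/code><\/pre>
<p>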
But we&#8217;ll let the automated model selection sort out the specifics.<\/p>\n<h2>Forecasting: ETS and ARIMA<\/h2>\n<p>With the diagnostic work done, we can fit proper forecasting models. Two families dominate supply chain time series forecasting: <strong>ETS<\/strong> (Exponential Smoothing State Space) and <strong>ARIMA<\/strong> (AutoRegressive Integrated Moving Average). They approach the problem from different angles, and comparing them is standard practice.<\/p>\n<h3>ETS: Exponential Smoothing, Properly<\/h3>\n<p>ETS models are named by three components: <strong>Error<\/strong> (additive or multiplicative), <strong>Trend<\/strong> (none, additive, or additive damped), and <strong>Seasonal<\/strong> (none, additive, or multiplicative). For ice cream demand with its constant seasonal amplitude and mild linear trend, automatic model selection typically lands on <strong>ETS(M,A,M)<\/strong> \u2014 multiplicative errors and seasonality with an additive trend.<\/p>\n<p>The model estimates three smoothing parameters:<\/p>\n<ul>\n<li><strong>Alpha<\/strong> (level): How fast the model adapts to changes in the baseline demand level. Higher alpha = more reactive, but noisier forecasts.<\/li>\n<li><strong>Beta<\/strong> (trend): How fast the model adjusts the growth rate. For a slow, steady growth market like ice cream, this tends to be small.<\/li>\n<li><strong>Gamma<\/strong> (seasonal): How fast the seasonal pattern can evolve. For a product where seasons are driven by physics (temperature) rather than fashion, a low gamma is appropriate.<\/li>\n<\/ul>\n<h3>ARIMA: The Pattern Matching Approach<\/h3>\n<p>Where ETS thinks in terms of smoothed levels, ARIMA thinks in terms of differencing and correlations. 
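<\/p>
<p>Differencing is worth seeing once with actual numbers. In base R (a sketch on the table&#8217;s values), one seasonal difference, the D=1 step, subtracts the same month a year earlier and collapses the 42-to-153 seasonal swing into small year-over-year increments:<\/p>
<pre><code class=\"language-r\">demand &lt;- c(42, 45, 52, 68, 89, 118, 138, 132, 105, 72, 51, 48,   # 2020\n            44, 47, 55, 72, 93, 124, 145, 137, 108, 75, 53, 50,   # 2021\n            43, 46, 54, 70, 91, 121, 142, 135, 107, 74, 52, 49,   # 2022\n            45, 48, 57, 74, 96, 128, 149, 141, 112, 77, 55, 52,   # 2023\n            46, 49, 58, 76, 99, 131, 153, 145, 115, 79, 56, 53)   # 2024\n\nsdiff &lt;- diff(demand, lag = 12)   # y_t minus y_{t-12}\nlength(sdiff)   # 48: the first year is consumed by the difference\nrange(demand)   # 42 to 153: dominated by the seasonal cycle\nrange(sdiff)    # -3 to 7: just growth plus noise remains<\/code><\/pre>
<p>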
For strongly seasonal monthly data, automatic selection typically produces a <strong>SARIMA<\/strong> model \u2014 something like ARIMA(1,0,1)(0,1,1)_12 \u2014 which means:<\/p>\n<ul>\n<li>Seasonal differencing (D=1): Subtract last year&#8217;s same-month value to remove the seasonal pattern<\/li>\n<li>A seasonal moving average term (Q=1): Account for shocks that persist across seasonal cycles<\/li>\n<li>Non-seasonal AR(1) and MA(1) terms: Capture month-to-month dynamics<\/li>\n<\/ul>\n<p>The beauty of <code>fable::ARIMA()<\/code> is that it automates this selection process using AICc (corrected Akaike Information Criterion), testing hundreds of candidate specifications and choosing the most parsimonious model that adequately captures the data&#8217;s structure.<\/p>\n<h3>The Benchmark: Seasonal Naive<\/h3>\n<p>Before celebrating any model&#8217;s performance, we need a benchmark. The <strong>Seasonal Naive<\/strong> method is the simplest possible seasonal forecast: predict that next July&#8217;s demand will equal this July&#8217;s demand. Period. No smoothing, no parameters, no optimization.<\/p>\n<p>If your ETS or ARIMA model can&#8217;t beat Seasonal Naive, it isn&#8217;t worth the complexity. 
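<\/p>
<p>The benchmark is simple enough to compute by hand from the data table. A base-R sketch, holding out 2024 and forecasting it with the 2023 values:<\/p>
<pre><code class=\"language-r\">demand &lt;- c(42, 45, 52, 68, 89, 118, 138, 132, 105, 72, 51, 48,   # 2020\n            44, 47, 55, 72, 93, 124, 145, 137, 108, 75, 53, 50,   # 2021\n            43, 46, 54, 70, 91, 121, 142, 135, 107, 74, 52, 49,   # 2022\n            45, 48, 57, 74, 96, 128, 149, 141, 112, 77, 55, 52,   # 2023\n            46, 49, 58, 76, 99, 131, 153, 145, 115, 79, 56, 53)   # 2024\n\nactual_2024 &lt;- demand[49:60]\nsnaive_fc   &lt;- demand[37:48]   # seasonal naive: repeat 2023\n\nerr &lt;- actual_2024 - snaive_fc\nsqrt(mean(err^2))                     # RMSE ~2.45\nmean(abs(err))                        # MAE  ~2.17\n100 * mean(abs(err) \/ actual_2024)    # MAPE ~2.3<\/code><\/pre>
<p>These are the seasonal naive figures in the accuracy table below; any candidate model has to beat them to justify its complexity.<\/p>
<p>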
This sounds obvious, but it&#8217;s a test that embarrassingly many production forecasting systems fail, particularly for highly seasonal products where last year&#8217;s pattern is already an excellent predictor.<\/p>\n<h3>Putting Models Head to Head<\/h3>\n<p>We split the data into training (2020-2023) and test (2024) sets, fit all three models on training data, and forecast 12 months ahead.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/02\/ts_forecast_comparison.png\" alt=\"Forecast comparison: Seasonal Naive, ETS, and ARIMA with prediction intervals vs actual 2024 data\" \/><\/p>\n<p>The accuracy comparison uses four standard metrics:<\/p>\n<table>\n<thead>\n<tr>\n<th style=\"text-align:left\">Model<\/th>\n<th style=\"text-align:center\">RMSE<\/th>\n<th style=\"text-align:center\">MAE<\/th>\n<th style=\"text-align:center\">MAPE<\/th>\n<th style=\"text-align:center\">MASE<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td style=\"text-align:left\">Seasonal Naive<\/td>\n<td style=\"text-align:center\">~2.4<\/td>\n<td style=\"text-align:center\">~2.2<\/td>\n<td style=\"text-align:center\">~2.3%<\/td>\n<td style=\"text-align:center\">1.00<\/td>\n<\/tr>\n<tr>\n<td style=\"text-align:left\">ETS<\/td>\n<td style=\"text-align:center\">~1.5<\/td>\n<td style=\"text-align:center\">~1.2<\/td>\n<td style=\"text-align:center\">~1.4%<\/td>\n<td style=\"text-align:center\">~0.55<\/td>\n<\/tr>\n<tr>\n<td style=\"text-align:left\">ARIMA<\/td>\n<td style=\"text-align:center\">~1.3<\/td>\n<td style=\"text-align:center\">~1.1<\/td>\n<td style=\"text-align:center\">~1.2%<\/td>\n<td style=\"text-align:center\">~0.50<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><em>Note: This synthetic data is very clean \u2014 real-world demand data typically produces larger errors across all models. Run the R code below for exact values on your own data.<\/em><\/p>\n<ul>\n<li><strong>RMSE<\/strong> (Root Mean Squared Error): Penalizes large misses. 
Lower is better.<\/li>\n<li><strong>MAE<\/strong> (Mean Absolute Error): Average miss in the same units as demand. More robust to outliers.<\/li>\n<li><strong>MAPE<\/strong> (Mean Absolute Percentage Error): Scale-independent \u2014 useful for comparing across products.<\/li>\n<li><strong>MASE<\/strong> (Mean Absolute Scaled Error): Ratio of your model&#8217;s MAE to the Naive forecast&#8217;s MAE. Below 1.0 means you&#8217;re beating the benchmark.<\/li>\n<\/ul>\n<p>Both ETS and ARIMA achieve MASE values well below 1.0, confirming they add value beyond the naive benchmark. The ARIMA model holds a slight edge here, but with only 12 test observations, the difference is not statistically decisive. In practice, many supply chain teams run both and average the forecasts \u2014 ensemble approaches tend to be more robust than either model alone.<\/p>\n<h2>Prediction Intervals: What You Don&#8217;t Know Matters Most<\/h2>\n<p>Point forecasts are dangerous. A forecast of &quot;148,000 cases in July&quot; sounds precise, but supply chain decisions require understanding the <em>range<\/em> of plausible outcomes. This is where prediction intervals earn their keep.<\/p>\n<p>Both ETS and ARIMA produce proper probability distributions, not just point estimates. The 80% prediction interval tells you: &quot;There&#8217;s an 80% chance actual demand falls in this range.&quot; The 95% interval is wider, covering more extreme scenarios.<\/p>\n<p>For our ice cream data, a 12-month-ahead July forecast might look like:<\/p>\n<ul>\n<li><strong>Point forecast<\/strong>: 158,000 cases<\/li>\n<li><strong>80% interval<\/strong>: 146,000 to 170,000 cases<\/li>\n<li><strong>95% interval<\/strong>: 139,000 to 177,000 cases<\/li>\n<\/ul>\n<p>This is vastly more useful for planning than a single number. The lower bound of the 80% interval tells you the minimum you should produce to avoid stockouts in most scenarios. The upper bound tells you the maximum warehouse capacity you need. 
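<\/p>
<p>Under the usual assumption of normally distributed forecast errors (which is how these intervals are constructed), the bounds are quantiles of a single distribution, so you can convert between coverage levels and service-level targets with plain quantile arithmetic. A sketch using the July numbers above:<\/p>
<pre><code class=\"language-r\">point &lt;- 158                           # point forecast, thousand cases\nsigma &lt;- (170 - point) \/ qnorm(0.90)   # sd implied by the 80% interval: ~9.4\n\npoint + c(-1, 1) * qnorm(0.975) * sigma  # ~139.6 to ~176.4: close to the\n                                         # quoted 95% interval\npoint + qnorm(0.95) * sigma              # ~173: one-sided 95% service level<\/code><\/pre>
<p>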
The gap between them \u2014 about 24,000 cases \u2014 is the quantified cost of uncertainty.<\/p>\n<p>Notice something important: the intervals widen as the forecast horizon increases. A 1-month-ahead forecast is much tighter than a 12-month-ahead forecast. This is not a weakness of the model \u2014 it&#8217;s an accurate reflection of reality. Uncertainty genuinely grows with time. Any forecasting system that doesn&#8217;t show this widening is hiding risk from you.<\/p>\n<h2>Cross-Validation: Don&#8217;t Trust a Single Test<\/h2>\n<p>A single train\/test split can be misleading. Maybe 2024 was an unusually easy year to predict. Time series cross-validation uses a <strong>rolling origin<\/strong> approach: start with 36 months of training data, forecast 6 months ahead, add one more month of training data, forecast again, and repeat. This produces dozens of forecast-vs-actual comparisons across different time periods.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/02\/ts_cv_rmse_horizon.png\" alt=\"Cross-validation RMSE by forecast horizon for ETS, ARIMA, and Seasonal Naive\" \/><\/p>\n<p>The cross-validation results confirm the patterns we saw in the single test: both ETS and ARIMA beat Seasonal Naive at all horizons, ARIMA holds a marginal advantage, and the accuracy gap narrows as the forecast horizon lengthens. By 6 months out, even the best statistical model is only modestly better than &quot;same as last year.&quot; This is useful information \u2014 it tells you exactly how far ahead your sophisticated models provide value.<\/p>\n<h2>Where Time Series Analysis Breaks Down<\/h2>\n<p>Here&#8217;s where most blog posts would wrap up with a triumphant conclusion about the power of statistical forecasting. 
Instead, let&#8217;s talk about the failure modes, because knowing when your tool <em>won&#8217;t<\/em> work is at least as important as knowing when it will.<\/p>\n<h3>The Past-Equals-Future Assumption<\/h3>\n<p>Every model we&#8217;ve discussed extrapolates historical patterns. This works beautifully when the future cooperates \u2014 which it usually does for ice cream demand, because the physics of temperature and human preferences for cold treats haven&#8217;t changed much. But it fails catastrophically during structural breaks.<\/p>\n<p>In March 2020, no ARIMA model on earth would have predicted the simultaneous collapse of food-service ice cream demand and spike in retail take-home demand. The model doesn&#8217;t know about pandemics. It knows about lags and seasonal patterns and moving averages. When the world changes in ways it has never changed before, the model has nothing useful to say.<\/p>\n<h3>No External Drivers<\/h3>\n<p>Pure ETS and ARIMA models are <strong>univariate<\/strong> \u2014 they only look at the demand series itself. They can&#8217;t incorporate temperature forecasts (which would obviously help for ice cream), promotional calendars, pricing changes, or competitor activity. The models will happily forecast &quot;normal&quot; July demand right through a planned July price increase that could boost volumes 20%.<\/p>\n<p>You <em>can<\/em> add external regressors via dynamic regression (<code>ARIMA(demand ~ temperature + promo)<\/code>), but this requires having those variables available at forecast time \u2014 which for temperature means weather forecasts, not historical weather. It helps, but it&#8217;s not a free lunch.<\/p>\n<h3>Intermittent Demand: The Spare Parts Problem<\/h3>\n<p>Time series analysis works best with continuous, relatively smooth demand patterns. Products with <strong>intermittent demand<\/strong> \u2014 many zeros punctuated by occasional, unpredictable orders \u2014 violate the basic assumptions. 
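<\/p>
<p>To see why, here is a toy intermittent series with a minimal sketch of Croston&#8217;s method (the toy data and function are mine, for illustration only; production code should also apply the Syntetos-Boylan bias correction, roughly a factor of 1 - alpha\/2):<\/p>
<pre><code class=\"language-r\"># Croston: smooth the non-zero demand sizes and the gaps between them\n# separately, then forecast their ratio (expected demand per period)\ncroston &lt;- function(y, alpha = 0.1) {\n  hits  &lt;- which(y &gt; 0)\n  sizes &lt;- y[hits]\n  gaps  &lt;- diff(c(0, hits))           # periods between successive demands\n  z &lt;- sizes[1]; p &lt;- gaps[1]\n  for (i in seq_along(hits)[-1]) {\n    z &lt;- z + alpha * (sizes[i] - z)   # SES on demand size\n    p &lt;- p + alpha * (gaps[i] - p)    # SES on inter-demand interval\n  }\n  z \/ p\n}\n\nspare_part &lt;- c(0, 0, 5, 0, 0, 0, 8, 0, 3, 0, 0, 6, 0, 0, 0, 0, 9, 0, 0, 4)\ncroston(spare_part)   # ~1.7 units per period: never zero, never a spike<\/code><\/pre>
<p>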
This is common in spare parts, industrial components, and niche products.<\/p>\n<p>For these items, standard ETS and ARIMA produce forecasts that are always slightly positive and never match either the zeros or the spikes. Specialized methods like Croston&#8217;s method or the Syntetos-Boylan Approximation model demand occurrence and demand size separately, which is a fundamentally different approach.<\/p>\n<h3>The Cold-Start Problem<\/h3>\n<p>New products have no history. Time series models need at least two full seasonal cycles \u2014 that&#8217;s 24 months for annual seasonality \u2014 before they can reliably estimate the seasonal pattern. A new ice cream flavor launched in March has exactly zero July data points to learn from. For new product launches, you&#8217;re better served by analogous product data, market research, or judgment-based methods until sufficient history accumulates.<\/p>\n<h3>Overfitting: When the Model Memorizes the Noise<\/h3>\n<p>A model with too many parameters can fit the training data exquisitely \u2014 capturing every wiggle and bump \u2014 while being terrible at forecasting new data. This is called overfitting, and it&#8217;s the statistical equivalent of studying the answer key instead of the material. An ARIMA(3,1,3)(2,1,2)_12 with 10 parameters will almost certainly fit 60 data points better than a parsimonious ARIMA(1,0,1)(0,1,1)_12 with 3 parameters. But the simpler model will usually forecast better.<\/p>\n<p>AICc penalizes model complexity and helps guard against overfitting. Cross-validation is even more reliable \u2014 it directly measures out-of-sample performance, which is what actually matters for supply chain planning.<\/p>\n<h3>Correlation, Not Causation<\/h3>\n<p>Time series models identify <em>patterns<\/em> \u2014 they don&#8217;t explain <em>why<\/em>. 
The ACF tells you that July demand is correlated with last July&#8217;s demand, but it doesn&#8217;t tell you whether the driver is temperature, school holidays, Fourth of July marketing campaigns, or all three. If you need causal understanding to design interventions (e.g., &quot;Would moving our summer promotion from July to June shift the seasonal peak?&quot;), you need causal inference methods and domain expertise, not time series analysis.<\/p>\n<h2>Your Next Steps<\/h2>\n<p>Time series analysis won&#8217;t solve all your demand planning problems, but it will solve the most common one: extracting actionable structure from historical demand data. Here&#8217;s how to start this week:<\/p>\n<ol>\n<li>\n<p><strong>Decompose your top 10 SKUs.<\/strong> Run STL decomposition on your highest-volume products and compute the strength of seasonality (F_S) and strength of trend (F_T) for each. Products with F_S &gt; 0.6 need seasonally differentiated safety stock and production plans. Products with F_T &gt; 0.5 need trend-adjusted procurement \u2014 your buyer shouldn&#8217;t be ordering the same volume as last year for a product growing 8% annually.<\/p>\n<\/li>\n<li>\n<p><strong>Benchmark your current forecasts against Seasonal Naive.<\/strong> Compute the MASE for whatever forecasting method you&#8217;re using today. If MASE &gt; 1.0, your current approach is <em>losing<\/em> to &quot;same as last year&quot; and you need to fix that before layering on more complexity. The R code below gives you a direct template.<\/p>\n<\/li>\n<li>\n<p><strong>Run ETS and ARIMA on 3-5 product families.<\/strong> Use cross-validation (not a single train\/test split) and compare RMSE by forecast horizon. 
This tells you not just <em>which<\/em> model is better, but <em>how far ahead<\/em> each model provides meaningful accuracy over the naive baseline.<\/p>\n<\/li>\n<li>\n<p><strong>Use prediction intervals for safety stock calculations.<\/strong> Stop treating forecasts as point estimates. The 95% upper bound of your prediction interval gives you a principled, statistically grounded service-level target. This replaces the guesswork of &quot;add 20% buffer&quot; with a calculation tied to actual forecast uncertainty.<\/p>\n<\/li>\n<li>\n<p><strong>Document your limitations.<\/strong> For every product family, note whether any of the failure modes apply: new product (cold start), intermittent demand, heavy promotion influence, or known upcoming structural changes. Flag these for methods beyond pure time series analysis \u2014 regression, judgment, or Croston&#8217;s method as appropriate.<\/p>\n<\/li>\n<\/ol>\n<details>\n<summary><strong>Show R Code<\/strong><\/summary>\n<pre><code class=\"language-r\"># =============================================================================\n# Time Series Analysis of Ice Cream Demand\n# Complete FPP3 Pipeline\n# =============================================================================\n# Required: install.packages(&quot;fpp3&quot;)\n# This loads: tsibble, feasts, fable, fabletools, ggplot2, and more\n# =============================================================================\n\nlibrary(fpp3)\n\n# =============================================================================\n# 1. 
DATA PREPARATION\n# =============================================================================\n\n# Five years of monthly ice cream shipments (thousands of cases)\n# Northeast US manufacturer, 2020-2024\nice_cream &lt;- tibble(\n  month = yearmonth(seq(as.Date(&quot;2020-01-01&quot;),\n                        as.Date(&quot;2024-12-01&quot;),\n                        by = &quot;month&quot;)),\n  demand = c(\n    42, 45, 52, 68, 89, 118, 138, 132, 105, 72, 51, 48,  # 2020\n    44, 47, 55, 72, 93, 124, 145, 137, 108, 75, 53, 50,  # 2021\n    43, 46, 54, 70, 91, 121, 142, 135, 107, 74, 52, 49,  # 2022\n    45, 48, 57, 74, 96, 128, 149, 141, 112, 77, 55, 52,  # 2023\n    46, 49, 58, 76, 99, 131, 153, 145, 115, 79, 56, 53   # 2024\n  )\n) |&gt;\n  as_tsibble(index = month)\n\n# =============================================================================\n# 2. EXPLORATORY VISUALIZATION\n# =============================================================================\n\n# Time series plot\nice_cream |&gt;\n  autoplot(demand) +\n  labs(title = &quot;Monthly Ice Cream Demand (2020-2024)&quot;,\n       subtitle = &quot;Thousands of cases \u2014 Northeast US manufacturer&quot;,\n       y = &quot;Demand (thousand cases)&quot;,\n       x = NULL) +\n  theme_minimal(base_size = 13)\n\n# Seasonal plot \u2014 overlay each year\nice_cream |&gt;\n  gg_season(demand) +\n  labs(title = &quot;Seasonal Pattern: Ice Cream Demand&quot;,\n       subtitle = &quot;Each line represents one year \u2014 note the consistent July peak&quot;,\n       y = &quot;Demand (thousand cases)&quot;) +\n  theme_minimal(base_size = 13)\n\n# Seasonal subseries \u2014 one panel per month\nice_cream |&gt;\n  gg_subseries(demand) +\n  labs(title = &quot;Monthly Demand Subseries (2020-2024)&quot;,\n       subtitle = &quot;Blue lines show monthly means \u2014 all months show slight upward drift&quot;,\n       y = &quot;Demand (thousand cases)&quot;) +\n  theme_minimal(base_size = 13)\n\n# ACF \u2014 
autocorrelation diagnostics\nice_cream |&gt;\n  ACF(demand, lag_max = 36) |&gt;\n  autoplot() +\n  labs(title = &quot;Autocorrelation: Clear Annual Seasonality&quot;,\n       subtitle = &quot;Strong spikes at lags 12, 24, 36 confirm the annual cycle&quot;) +\n  theme_minimal(base_size = 13)\n\n# PACF \u2014 partial autocorrelation\nice_cream |&gt;\n  PACF(demand, lag_max = 36) |&gt;\n  autoplot() +\n  labs(title = &quot;Partial Autocorrelation&quot;,\n       subtitle = &quot;Sharp cutoff at lag 12 suggests seasonal AR term&quot;) +\n  theme_minimal(base_size = 13)\n\n# Combined display\nice_cream |&gt;\n  gg_tsdisplay(demand, plot_type = &quot;season&quot;)\n\n# =============================================================================\n# 3. STL DECOMPOSITION\n# =============================================================================\n\n# STL with fixed seasonality (appropriate for stable seasonal products)\nstl_decomp &lt;- ice_cream |&gt;\n  model(STL(demand ~ season(window = &quot;periodic&quot;))) |&gt;\n  components()\n\n# Plot all components\nstl_decomp |&gt;\n  autoplot() +\n  labs(title = &quot;STL Decomposition of Ice Cream Demand&quot;,\n       subtitle = &quot;Trend + Seasonal + Remainder \u2014 seasonality dominates&quot;) +\n  theme_minimal(base_size = 13)\n\n# Quantify strength of trend and seasonality\nice_cream |&gt;\n  features(demand, feat_stl(s.window = &quot;periodic&quot;))\n# Expected output:\n# trend_strength ~ 0.4    (moderate trend)\n# seasonal_strength_year ~ 0.95  (very strong seasonality)\n# seasonal_peak_year = 7  (July)\n# seasonal_trough_year = 1 (January)\n\n# Seasonal amplitude\nstl_decomp |&gt;\n  as_tibble() |&gt;\n  summarise(\n    seasonal_amplitude = max(season_year) - min(season_year),\n    peak_month = month(month[which.max(season_year)]),\n    trough_month = month(month[which.min(season_year)])\n  )\n\n# STL with flexible seasonality (allows seasonal pattern to evolve)\nstl_flex &lt;- ice_cream |&gt;\n  
model(STL(demand ~ trend(window = 21) +\n                      season(window = 13),\n            robust = TRUE)) |&gt;\n  components()\n\nstl_flex |&gt;\n  autoplot() +\n  labs(title = &quot;STL Decomposition (Flexible Seasonality)&quot;,\n       subtitle = &quot;season(window = 13) allows the seasonal shape to evolve over time&quot;) +\n  theme_minimal(base_size = 13)\n\n# =============================================================================\n# 4. TRAIN \/ TEST SPLIT\n# =============================================================================\n\ntrain &lt;- ice_cream |&gt; filter(year(month) &lt;= 2023)\ntest  &lt;- ice_cream |&gt; filter(year(month) == 2024)\n\ncat(&quot;Training set:&quot;, nrow(train), &quot;observations (2020-2023)\\n&quot;)\ncat(&quot;Test set:    &quot;, nrow(test), &quot;observations (2024)\\n&quot;)\n\n# =============================================================================\n# 5. MODEL FITTING\n# =============================================================================\n\n# Fit three models on training data\nfit &lt;- train |&gt;\n  model(\n    snaive = SNAIVE(demand),\n    ets    = ETS(demand),\n    arima  = ARIMA(demand)\n  )\n\n# Inspect ETS selection\nfit |&gt; select(ets) |&gt; report()\n# Typically selects ETS(M,A,M) or ETS(A,A,A) for this data\n\n# Inspect ARIMA selection\nfit |&gt; select(arima) |&gt; report()\n# Typically selects ARIMA(1,0,1)(0,1,1)[12] or similar\n\n# Residual diagnostics \u2014 check for remaining autocorrelation\nfit |&gt; select(ets) |&gt; gg_tsresiduals()\nfit |&gt; select(arima) |&gt; gg_tsresiduals()\n\n# Ljung-Box test on ARIMA residuals\naugment(fit) |&gt;\n  filter(.model == &quot;arima&quot;) |&gt;\n  features(.innov, ljung_box, lag = 24, dof = 3)\n# p-value &gt; 0.05 = no evidence of remaining autocorrelation;\n# residuals are consistent with white noise (good)\n\n# =============================================================================\n# 6. 
FORECASTING WITH PREDICTION INTERVALS\n# =============================================================================\n\n# Generate 12-month forecasts\nfc &lt;- fit |&gt; forecast(h = 12)\n\n# Plot forecasts vs actuals\nfc |&gt;\n  autoplot(ice_cream, level = c(80, 95)) +\n  labs(title = &quot;12-Month Demand Forecast Comparison&quot;,\n       subtitle = &quot;Shaded regions show 80% and 95% prediction intervals&quot;,\n       y = &quot;Demand (thousand cases)&quot;,\n       x = NULL) +\n  facet_wrap(~ .model, ncol = 1, scales = &quot;free_y&quot;) +\n  theme_minimal(base_size = 13)\n\n# All models on one plot\nfc |&gt;\n  autoplot(ice_cream, level = NULL) +\n  labs(title = &quot;Forecast Comparison: ETS vs ARIMA vs Seasonal Naive&quot;,\n       y = &quot;Demand (thousand cases)&quot;,\n       x = NULL) +\n  theme_minimal(base_size = 13)\n\n# Extract prediction intervals for a specific month\nfc |&gt;\n  hilo(level = c(80, 95)) |&gt;\n  filter(month == yearmonth(&quot;2024 Jul&quot;))\n\n# =============================================================================\n# 7. FORECAST ACCURACY EVALUATION\n# =============================================================================\n\n# Compare against actual 2024 data\naccuracy_results &lt;- accuracy(fc, test)\nprint(accuracy_results)\n\n# Formatted comparison table\naccuracy_results |&gt;\n  select(.model, RMSE, MAE, MAPE, MASE) |&gt;\n  arrange(MASE) |&gt;\n  mutate(across(where(is.numeric), ~ round(., 2)))\n\n# =============================================================================\n# 8. 
TIME SERIES CROSS-VALIDATION\n# =============================================================================\n\n# Rolling-origin cross-validation\n# Start with 36 months, add 1 month at a time, forecast 6 months ahead\ncv_data &lt;- ice_cream |&gt;\n  stretch_tsibble(.init = 36, .step = 1)\n\ncat(&quot;Number of CV folds:&quot;, max(cv_data$.id), &quot;\\n&quot;)\n\n# Fit models on each expanding training set\ncv_fit &lt;- cv_data |&gt;\n  model(\n    ets   = ETS(demand),\n    arima = ARIMA(demand),\n    snaive = SNAIVE(demand)\n  )\n\n# Forecast 6 months ahead from each origin\ncv_fc &lt;- cv_fit |&gt;\n  forecast(h = 6) |&gt;\n  group_by(.id) |&gt;\n  mutate(h = row_number()) |&gt;\n  ungroup() |&gt;\n  as_fable(response = &quot;demand&quot;, distribution = demand)\n\n# Accuracy by forecast horizon\ncv_accuracy &lt;- cv_fc |&gt;\n  accuracy(ice_cream, by = c(&quot;h&quot;, &quot;.model&quot;))\n\n# Plot RMSE by horizon\ncv_accuracy |&gt;\n  ggplot(aes(x = h, y = RMSE, colour = .model)) +\n  geom_line(linewidth = 1) +\n  geom_point(size = 2.5) +\n  scale_color_manual(values = c(&quot;arima&quot; = &quot;#e74c3c&quot;,\n                                &quot;ets&quot; = &quot;#2980b9&quot;,\n                                &quot;snaive&quot; = &quot;#95a5a6&quot;)) +\n  labs(title = &quot;Forecast Accuracy by Horizon (Cross-Validation)&quot;,\n       subtitle = &quot;Lower RMSE is better \u2014 all models degrade with longer horizons&quot;,\n       x = &quot;Forecast Horizon (months ahead)&quot;,\n       y = &quot;RMSE (thousand cases)&quot;,\n       color = &quot;Model&quot;) +\n  theme_minimal(base_size = 13)\n\n# MASE by horizon\ncv_accuracy |&gt;\n  ggplot(aes(x = h, y = MASE, colour = .model)) +\n  geom_line(linewidth = 1) +\n  geom_point(size = 2.5) +\n  geom_hline(yintercept = 1, linetype = &quot;dashed&quot;, color = &quot;grey50&quot;) +\n  annotate(&quot;text&quot;, x = 5.5, y = 1.05, label = &quot;Naive benchmark&quot;,\n           color = 
&quot;grey50&quot;, size = 3.5) +\n  scale_color_manual(values = c(&quot;arima&quot; = &quot;#e74c3c&quot;,\n                                &quot;ets&quot; = &quot;#2980b9&quot;,\n                                &quot;snaive&quot; = &quot;#95a5a6&quot;)) +\n  labs(title = &quot;MASE by Horizon: Are We Beating Naive?&quot;,\n       subtitle = &quot;Values below 1.0 = better than Seasonal Naive&quot;,\n       x = &quot;Forecast Horizon (months ahead)&quot;,\n       y = &quot;MASE&quot;,\n       color = &quot;Model&quot;) +\n  theme_minimal(base_size = 13)\n\n# =============================================================================\n# 9. SPECIFIC ETS MODEL VARIANTS\n# =============================================================================\n\n# Compare ETS specifications explicitly\nfit_ets_variants &lt;- train |&gt;\n  model(\n    auto     = ETS(demand),\n    additive = ETS(demand ~ error(&quot;A&quot;) + trend(&quot;A&quot;) + season(&quot;A&quot;)),\n    multiplicative = ETS(demand ~ error(&quot;M&quot;) + trend(&quot;A&quot;) + season(&quot;M&quot;)),\n    damped   = ETS(demand ~ error(&quot;M&quot;) + trend(&quot;Ad&quot;) + season(&quot;M&quot;))\n  )\n\n# Compare AICc values (lower is better)\nglance(fit_ets_variants) |&gt;\n  select(.model, AICc, BIC) |&gt;\n  arrange(AICc)\n\n# Forecast comparison\nfit_ets_variants |&gt;\n  forecast(h = 12) |&gt;\n  accuracy(test) |&gt;\n  select(.model, RMSE, MAE, MASE) |&gt;\n  arrange(MASE)\n\n# =============================================================================\n# 10. 
APPLY TO YOUR OWN DATA\n# =============================================================================\n#\n# Replace the ice cream data with your own demand data:\n#\n# my_data &lt;- read_csv(&quot;my_demand_data.csv&quot;) |&gt;\n#   mutate(month = yearmonth(date_column)) |&gt;\n#   as_tsibble(index = month, key = product_id)\n#\n# # Quick diagnostic\n# my_data |&gt;\n#   features(demand, feat_stl(s.window = &quot;periodic&quot;)) |&gt;\n#   select(product_id, trend_strength, seasonal_strength_year,\n#          seasonal_peak_year, seasonal_trough_year)\n#\n# # Fit and forecast\n# my_fit &lt;- my_data |&gt;\n#   model(\n#     snaive = SNAIVE(demand),\n#     ets    = ETS(demand),\n#     arima  = ARIMA(demand)\n#   )\n#\n# my_fc &lt;- my_fit |&gt; forecast(h = 12)\n#\n# # Evaluate\n# my_fc |&gt; accuracy(my_test_data) |&gt;\n#   select(.model, RMSE, MAE, MASE) |&gt;\n#   arrange(MASE)\n#\n# # If MASE &gt; 1.0 for ETS\/ARIMA, your data may have characteristics\n# # that these models can't handle well (intermittent demand, structural\n# # breaks, strong external drivers). 
Consider:\n# # - Croston's method for intermittent demand\n# # - Dynamic regression for external drivers\n# # - Judgment-based adjustments for known structural changes\n<\/code><\/pre>\n<\/details>\n<h2>Interactive Dashboard<\/h2>\n<p>Explore the data yourself \u2014 adjust the ETS smoothing parameters, switch between model types, and see how decomposition, seasonal patterns, and forecast accuracy change in real time.<\/p>\n<div class=\"dashboard-link\" style=\"margin:2em 0; padding:1.5em; background:#f8f9fa; border-left:4px solid #0073aa; border-radius:4px;\">\n<p style=\"margin:0 0 0.5em 0; font-size:1.1em;\"><strong>Interactive Dashboard<\/strong><\/p>\n<p><a href=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/02\/2026-02-22_Time_Series_Analysis_Supply_Chain_dashboard-1.html\" target=\"_blank\" style=\"display:inline-block; padding:0.6em 1.2em; background:#0073aa; color:#fff; text-decoration:none; border-radius:4px; font-weight:bold;\">Open Interactive Dashboard &rarr;<\/a><\/p>\n<\/div>\n<h2>References<\/h2>\n<ol>\n<li>Hyndman, R.J., &amp; Athanasopoulos, G. (2021). <em>Forecasting: Principles and Practice<\/em>, 3rd edition. OTexts. <a href=\"https:\/\/otexts.com\/fpp3\/\">https:\/\/otexts.com\/fpp3\/<\/a><\/li>\n<li>Cleveland, R.B., Cleveland, W.S., McRae, J.E., &amp; Terpenning, I. (1990). &quot;STL: A Seasonal-Trend Decomposition Procedure Based on Loess.&quot; <em>Journal of Official Statistics<\/em>, 6(1), 3-73.<\/li>\n<li>Wang, X., Smith, K.A., &amp; Hyndman, R.J. (2006). &quot;Characteristic-based clustering for time series data.&quot; <em>Data Mining and Knowledge Discovery<\/em>, 13(3), 335-364.<\/li>\n<li>FRED Blog (2024). &quot;Ice cream is a seasonal product, right?&quot; Federal Reserve Bank of St. Louis. 
<a href=\"https:\/\/fredblog.stlouisfed.org\/2024\/05\/ice-cream-is-a-seasonal-product-right\/\">https:\/\/fredblog.stlouisfed.org\/2024\/05\/ice-cream-is-a-seasonal-product-right\/<\/a><\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Your demand data is trying to tell you something. We use STL decomposition, seasonal diagnostics, and ETS\/ARIMA models to extract trend, seasonality, and noise from ice cream sales data \u2014 then honestly discuss where these methods break down.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[13,115],"tags":[126,122,125,128,127,119,124,123,26,121],"class_list":["post-1096","post","type-post","status-publish","format-standard","hentry","category-data-science","category-supply-chain-management","tag-arima","tag-demand-forecasting","tag-ets","tag-forecast-accuracy","tag-fpp3","tag-r-programming","tag-seasonality","tag-stl-decomposition","tag-supply-chain-analytics","tag-time-series-analysis"],"_links":{"self":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1096","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1096"}],"version-history":[{"count":2,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1096\/revisions"}],"predecessor-version":[{"id":1098,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1096\/revisions\/1098"}],"wp:attachment":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1096"}],"wp:te
rm":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1096"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1096"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}