{"id":1909,"date":"2026-04-21T20:31:28","date_gmt":"2026-04-21T20:31:28","guid":{"rendered":"https:\/\/inphronesys.com\/?p=1909"},"modified":"2026-04-21T20:31:28","modified_gmt":"2026-04-21T20:31:28","slug":"global-forecasting-with-xgboost-in-r-a-walmart-weekly-walkthrough","status":"publish","type":"post","link":"https:\/\/inphronesys.com\/?p=1909","title":{"rendered":"Global Forecasting with XGBoost in R: A Walmart Weekly Walkthrough"},"content":{"rendered":"<p>Gradient-boosted trees like XGBoost have become the default ML choice for multi-series forecasting \u2014 not because they beat classical methods by huge margins, but because they scale across hundreds of SKUs and let you fold in real covariates (promotions, holidays, prices) that ETS and SNAIVE can&#8217;t touch. This post is a hands-on walkthrough: we apply the same family of techniques popularised by M5-era gradient-boosted models to a classic public dataset (Walmart weekly sales), and honestly measure where the complexity earns its keep \u2014 and where it doesn&#8217;t.<\/p>\n<p>It&#8217;s the practical follow-up to two earlier posts in this series: <a href=\"https:\/\/www.inphronesys.com\/the-m5-lesson-why-simple-still-beats-fancy-in-supply-chain-forecasting\/\">The M5 Lesson<\/a> explained <em>why<\/em> gradient-boosted trees won the biggest forecasting competition ever run; <a href=\"https:\/\/www.inphronesys.com\/i-ran-6-models-on-real-demand-data-heres-how-i-picked-the-winner\/\">The Horse Race<\/a> showed how to pick a forecasting winner with MASE and cross-validation. Today is the <em>how<\/em> \u2014 and specifically <em>how in R<\/em>, with the tidymodels + modeltime stack.<\/p>\n<h2>The Dataset: Walmart Weekly Sales<\/h2>\n<p>Our example is <code>timetk::walmart_sales_weekly<\/code> \u2014 a public sample distributed with the <code>timetk<\/code> R package, originally released for the 2014 Walmart Recruiting Kaggle competition. 
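<\/p>
<p>Getting it into a session takes one call; a minimal sketch (the column names shown are the ones the sample ships with):<\/p>
<pre><code class=\"language-r\">library(timetk)\n\n# Load the bundled sample: one row per department-week\ndata(\"walmart_sales_weekly\", package = \"timetk\")\n\n# Columns used in this post: id, Date, Weekly_Sales, IsHoliday\ndplyr::glimpse(walmart_sales_weekly)\n<\/code><\/pre>
<p>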
It contains <strong>7 departments at a single Walmart store<\/strong> across 143 weeks (February 2010 to October 2012). Think of each series as a product category in one store: apparel, grocery, electronics, and so on. It&#8217;s a convenient teaching example for end-to-end ML forecasting \u2014 retail, weekly, multi-series, with a real holiday covariate baked in \u2014 without the operational weight of a production-scale dataset.<\/p>\n<p>That last point matters more than it looks. Most demand series come with context: promotions, holidays, price changes, weather. Classical statistical models like ETS and SNAIVE can&#8217;t use any of it \u2014 they see only the sales number. Machine learning models can use everything you hand them, which is why they tend to pull ahead on rich retail data and stay even on sparse industrial data.<\/p>\n<p>The <code>IsHoliday<\/code> flag in this dataset captures exactly the kind of event that theoretically breaks a naive forecaster: Thanksgiving, Black Friday, Christmas week. A model that knows &#8220;next week is a holiday week&#8221; has an advantage over a model that doesn&#8217;t \u2014 at least in principle. Whether it actually helps in practice on this particular dataset is a question we&#8217;ll return to in the Feature Importance section, and the answer may surprise you.<\/p>\n<p>We train on the first 131 weeks and hold out the last 12 weeks (from August 10, 2012 onward) as the test window \u2014 a realistic quarter-ahead forecasting horizon.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/xgb_walmart_series-2.png\" alt=\"7 Walmart department series with 12-week test window shaded\" \/><\/p>\n<h2>Why Feature Engineering Is Everything<\/h2>\n<p>Here is the one thing that trips up everyone new to ML forecasting: <strong>XGBoost has no idea it is looking at time series data.<\/strong><\/p>\n<p>ETS knows. 
Its equations explicitly encode level, trend, and seasonality \u2014 that&#8217;s literally what the letters stand for. SNAIVE knows \u2014 its entire method is &#8220;what happened one season ago.&#8221; These models come pre-wired to recognise the rhythm of demand.<\/p>\n<p>XGBoost doesn&#8217;t. To XGBoost, every row is an independent observation, every column just a number. It has no concept of &#8220;last week&#8221; or &#8220;same week last year&#8221; unless you explicitly build a column called <code>lag_52<\/code> and hand it over. If you don&#8217;t, the model cannot learn annual seasonality \u2014 not because it isn&#8217;t smart enough, but because the information literally isn&#8217;t in its input.<\/p>\n<p>Think of it like this: ETS is a specialist who has memorised the seasonal calendar. XGBoost is a brilliant generalist who can learn anything \u2014 but only if you point at it. Feature engineering is the pointing.<\/p>\n<p>This is why the M5 winners talked about features, not algorithms. Everyone at the top of the leaderboard used gradient-boosted trees. The difference between rank 1 and rank 1,000 was how thoughtfully each team had translated demand history into tabular columns the model could read.<\/p>\n<h2>Building the Feature Engineering Recipe<\/h2>\n<p>Our feature pipeline adds a specific kind of column for each thing we want the model to see. Crucially, every lag and rolling feature is computed <strong>strictly from the past<\/strong> (t-n through t-1, never including t), and computed <strong>per department<\/strong> so a high-volume category can&#8217;t leak its numbers into a low-volume one.<\/p>\n<p><strong>1. Lag features (1, 4, 13, 26, 52 weeks).<\/strong> These capture memory at different horizons:<\/p>\n<ul>\n<li><code>lag_1<\/code> \u2014 last week&#8217;s sales. Short-term momentum.<\/li>\n<li><code>lag_4<\/code> \u2014 one month ago. Captures the monthly cycle.<\/li>\n<li><code>lag_13<\/code> \u2014 one quarter. 
Quarterly rhythm.<\/li>\n<li><code>lag_26<\/code> \u2014 half-year. Seasonal mid-point.<\/li>\n<li><code>lag_52<\/code> \u2014 same week last year. The annual anchor.<\/li>\n<\/ul>\n<p><strong>2. Rolling window features (4w, 13w, 26w means; 4w standard deviation).<\/strong> These smooth out the noise:<\/p>\n<ul>\n<li>4-week rolling mean \u2014 the recent trajectory.<\/li>\n<li>13-week rolling mean \u2014 the quarterly level.<\/li>\n<li>26-week rolling mean \u2014 the long-run baseline.<\/li>\n<li>4-week rolling sd \u2014 local volatility, which helps the model calibrate its response on noisier series.<\/li>\n<\/ul>\n<p><strong>3. Calendar features (month, quarter, week-of-year, year).<\/strong> Time-stamp derivatives (via <code>step_timeseries_signature()<\/code>) that let the model pick up structural patterns like &#8220;week 47 is always Black Friday.&#8221;<\/p>\n<p><strong>4. The <code>IsHoliday<\/code> covariate.<\/strong> Pre-built into the dataset. This is the feature ETS and SNAIVE cannot use \u2014 at least in principle. Whether it earns its keep on a given dataset is an empirical question, and on this one the answer is not what you&#8217;d expect.<\/p>\n<p><strong>5. Department identity dummies.<\/strong> One-hot encoded series IDs. These are what turn this into a <strong>global model<\/strong> \u2014 one model fit across all 7 departments, with department identity as just another feature. 
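<\/p>
<p>All of these columns are built per department and strictly from the past; a minimal sketch of the pattern with <code>dplyr<\/code> and <code>slider<\/code>, assuming <code>walmart<\/code> is the panel tibble prepared in the appendix script:<\/p>
<pre><code class=\"language-r\">library(dplyr)\nlibrary(slider)\n\nwalmart %&gt;%\n  group_by(id) %&gt;%                 # per department: no cross-series leakage\n  arrange(Date, .by_group = TRUE) %&gt;%\n  mutate(\n    lag_52      = lag(Weekly_Sales, 52),   # same week last year\n    # trailing 4-week mean over t-4..t-1: lag the window so week t is excluded\n    roll_mean_4 = lag(slide_dbl(Weekly_Sales, mean, .before = 3, .complete = TRUE), 1)\n  ) %&gt;%\n  ungroup()\n<\/code><\/pre>
<p>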
The model learns the shared structure (everyone spikes at Christmas) while still letting each department have its own intercept.<\/p>\n<p>A small sketch of what the transformation does:<\/p>\n<table style=\"border-collapse: collapse; width: 100%; margin: 1.5em 0; font-size: 0.95em; line-height: 1.5;\">\n<thead>\n<tr>\n<th style=\"border: 1px solid #ddd; padding: 10px 14px; background: #0073aa; color: #fff; font-weight: 600; text-align: left;\">Raw columns<\/th>\n<th style=\"border: 1px solid #ddd; padding: 10px 14px; background: #0073aa; color: #fff; font-weight: 600; text-align: left;\">\u2192<\/th>\n<th style=\"border: 1px solid #ddd; padding: 10px 14px; background: #0073aa; color: #fff; font-weight: 600; text-align: left;\">Engineered feature columns<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr style=\"background: #f8f9fa;\">\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\"><code>Date<\/code>, <code>Sales<\/code>, <code>Dept_ID<\/code>, <code>IsHoliday<\/code><\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\">\u2192<\/td>\n<td style=\"border: 1px solid #ddd; padding: 9px 14px; text-align: left;\"><code>lag_1<\/code>, <code>lag_4<\/code>, <code>lag_13<\/code>, <code>lag_26<\/code>, <code>lag_52<\/code>, <code>roll_mean_4<\/code>, <code>roll_mean_13<\/code>, <code>roll_mean_26<\/code>, <code>roll_sd_4<\/code>, <code>month<\/code>, <code>quarter<\/code>, <code>week<\/code>, <code>year<\/code>, <code>IsHoliday<\/code>, <code>id_1_1<\/code>\u2026<code>id_1_95<\/code><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Four columns in. Twenty-plus columns out. XGBoost gets to pick which of them actually matter \u2014 and as we&#8217;ll see, its choices are not what you&#8217;d expect.<\/p>\n<h2>Training the Global Model<\/h2>\n<p>The tidymodels + modeltime stack lets a machine-learning model and a statistical model compete on the same footing. 
The workflow \u2014 stripped to its essentials:<\/p>\n<pre><code class=\"language-r\"># Feature engineering recipe \u2014 lag\/rolling features are computed upstream\n# per department (strictly past-window only) to prevent cross-series leakage\nrecipe_xgb &lt;- recipe(Weekly_Sales ~ ., data = train_feat) %&gt;%\n  step_timeseries_signature(Date) %&gt;%\n  step_dummy(all_nominal_predictors()) %&gt;%\n  step_naomit(all_predictors())\n\n# XGBoost spec\nxgb_spec &lt;- boost_tree(\n  trees = 500, min_n = 10, tree_depth = 6,\n  learn_rate = 0.01, sample_size = 0.8\n) %&gt;% set_engine(\"xgboost\") %&gt;% set_mode(\"regression\")\n\n# Fit globally \u2014 one model, all 7 departments\nfit_xgb &lt;- workflow() %&gt;%\n  add_recipe(recipe_xgb) %&gt;%\n  add_model(xgb_spec) %&gt;%\n  fit(data = train_feat)\n<\/code><\/pre>\n<p>Three things worth flagging:<\/p>\n<ol>\n<li><strong>One model, all 7 departments.<\/strong> <code>fit()<\/code> receives the entire panel, not seven per-series datasets. The department dummies carry the identity. This &#8222;global model&#8220; approach is the single biggest architectural difference between M5-style ML forecasting and classical one-series-at-a-time ETS.<\/li>\n<li><strong>The recipe runs inside the workflow.<\/strong> Feature engineering is versioned with the model, not a separate script. When you <code>predict()<\/code> on new data, the same transformations replay automatically.<\/li>\n<li><strong>No hand-tuning for the first pass.<\/strong> We use sensible defaults (500 trees, depth 6, learn rate 0.01). 
A small 3\u00d73 hyperparameter grid later shows the best combination we found (depth 9, learn rate 0.01) is only 2\u20134% better than this shipping spec (depending on whether we compare against the main fit or the same-spec cell inside the grid) \u2014 the features are doing the real work, not the knobs.<\/li>\n<\/ol>\n<p>The full reproducible script \u2014 including the per-department lag generation and the baseline fits \u2014 is in the collapsible section at the bottom of this post.<\/p>\n<h2>Results: The Model Comparison<\/h2>\n<p>We now run three models on the 12-week test window:<\/p>\n<ul>\n<li><strong>XGBoost<\/strong> \u2014 one global model across all 7 departments.<\/li>\n<li><strong>ETS<\/strong> \u2014 one model per department, via <code>forecast::stlf()<\/code> (STL decomposition + ETS on the seasonally-adjusted series). Plain <code>forecast::ets()<\/code> caps its seasonal period at 24, so for weekly data (m = 52) <code>stlf()<\/code> is the forecast package&#8217;s recommended path. It is a strong benchmark, not a handicapped one.<\/li>\n<li><strong>SNAIVE<\/strong> \u2014 one model per department. &#8220;This week = the same week last year.&#8221; Our floor.<\/li>\n<\/ul>\n<p>We score them with MASE. A MASE of 1.0 means &#8220;my forecast errors are about the same size as a seasonal-naive forecast on the training history.&#8221; Below 1.0 beats that seasonal-naive benchmark; above 1.0 falls short of it. 
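<\/p>
<p>The scoring function itself is small; a minimal sketch matching the implementation in the appendix script (seasonal period m = 52):<\/p>
<pre><code class=\"language-r\"># MASE: test-window MAE scaled by the IN-SAMPLE seasonal-naive MAE\nmase &lt;- function(actual, forecast, train_actual, m = 52) {\n  denom &lt;- mean(abs(diff(train_actual, lag = m)), na.rm = TRUE)\n  mean(abs(actual - forecast), na.rm = TRUE) \/ denom\n}\n<\/code><\/pre>
<p>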
Winner per row is bold.<\/p>\n<table style=\"border-collapse: collapse; width: 100%; margin: 1.5em 0; font-size: 0.95em; line-height: 1.5;\">\n<thead>\n<tr>\n<th style=\"text-align: left;\">Department<\/th>\n<th style=\"text-align: center;\">XGBoost<\/th>\n<th style=\"text-align: center;\">ETS<\/th>\n<th style=\"text-align: center;\">SNAIVE<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr style=\"background: #f8f9fa;\">\n<td style=\"text-align: left;\">1_1<\/td>\n<td style=\"text-align: center;\">0.547<\/td>\n<td style=\"text-align: center;\">0.327<\/td>\n<td style=\"text-align: center;\"><strong>0.318<\/strong><\/td>\n<\/tr>\n<tr style=\"background: #ffffff;\">\n<td style=\"text-align: left;\">1_3<\/td>\n<td style=\"text-align: center;\"><strong>5.685<\/strong><\/td>\n<td style=\"text-align: center;\">6.105<\/td>\n<td style=\"text-align: center;\">6.166<\/td>\n<\/tr>\n<tr style=\"background: #f8f9fa;\">\n<td style=\"text-align: left;\">1_8<\/td>\n<td style=\"text-align: center;\"><strong>0.433<\/strong><\/td>\n<td style=\"text-align: center;\">0.655<\/td>\n<td style=\"text-align: center;\">2.044<\/td>\n<\/tr>\n<tr style=\"background: #ffffff;\">\n<td style=\"text-align: left;\">1_13<\/td>\n<td style=\"text-align: center;\">0.831<\/td>\n<td style=\"text-align: center;\"><strong>0.820<\/strong><\/td>\n<td style=\"text-align: center;\">1.062<\/td>\n<\/tr>\n<tr style=\"background: #f8f9fa;\">\n<td style=\"text-align: left;\">1_38<\/td>\n<td style=\"text-align: center;\"><strong>0.516<\/strong><\/td>\n<td style=\"text-align: center;\">0.603<\/td>\n<td style=\"text-align: center;\">0.635<\/td>\n<\/tr>\n<tr style=\"background: #ffffff;\">\n<td style=\"text-align: left;\">1_93<\/td>\n<td style=\"text-align: center;\">0.588<\/td>\n<td style=\"text-align: center;\"><strong>0.516<\/strong><\/td>\n<td style=\"text-align: center;\">0.709<\/td>\n<\/tr>\n<tr style=\"background: #f8f9fa;\">\n<td style=\"text-align: left;\">1_95<\/td>\n<td style=\"text-align: 
center;\"><strong>0.748<\/strong><\/td>\n<td style=\"text-align: center;\">0.935<\/td>\n<td style=\"text-align: center;\">0.757<\/td>\n<\/tr>\n<tr style=\"background: #ffffff;\">\n<td style=\"text-align: left;\"><strong>Overall mean<\/strong><\/td>\n<td style=\"text-align: center;\"><strong>1.336<\/strong><\/td>\n<td style=\"text-align: center;\">1.423<\/td>\n<td style=\"text-align: center;\">1.670<\/td>\n<\/tr>\n<tr style=\"background: #f8f9fa;\">\n<td style=\"text-align: left;\">Median<\/td>\n<td style=\"text-align: center;\"><strong>0.588<\/strong><\/td>\n<td style=\"text-align: center;\">0.655<\/td>\n<td style=\"text-align: center;\">0.757<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/xgb_model_comparison-2.png\" alt=\"MASE comparison: XGBoost vs ETS vs SNAIVE across 7 departments\" \/><\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/xgb_forecast_vs_actual-2.png\" alt=\"XGBoost forecast vs actuals for Department 1_1\" \/><\/p>\n<p>XGBoost wins on mean MASE \u2014 <strong>1.336 versus ETS at 1.423<\/strong>, a <strong>6.1% improvement<\/strong>. It beats SNAIVE by 20%. Those are real gains, but they are not a revolution: per-department, the three models trade blows. XGBoost takes four departments outright (1_3, 1_8, 1_38, 1_95). ETS takes two (1_13, 1_93). And on department 1_1, neither of them beats SNAIVE.<\/p>\n<p>Two results in this table deserve their own paragraphs, because they are where the interesting supply chain lessons live.<\/p>\n<p><strong>Department 1_3 is where MASE itself becomes the story.<\/strong> All three MASE values are around 6 \u2014 which looks catastrophic until you look at the raw series. The August spike from ~$10k baseline to above $50k is not unprecedented: it&#8217;s an annual event. 
The series spiked to $51,159 on 2010-08-27 and $49,776 on 2011-08-26, both in training, before the 2012-08-31 spike of $50,701 in the test window. The models mis-time and mis-magnitude the spike by enough that MASE \u2014 which scales against a <em>tiny<\/em> in-sample SNAIVE denominator on an otherwise very regular series \u2014 balloons to ~6. The supply-chain lesson still holds: flag it, raise an alert, don&#8217;t auto-post. But the reason isn&#8217;t that the spike was unprecedented. It&#8217;s that MASE is unforgiving when your baseline series is predictable and your forecast misses the one event that matters. Excluding 1_3, the XGBoost mean drops to 0.61, ETS to 0.64, SNAIVE to 0.92.<\/p>\n<p><strong>Department 1_8 is where feature engineering pays off.<\/strong><\/p>\n<blockquote><p>XGBoost: MASE 0.43 | SNAIVE: MASE 2.04 \u2014 a <strong>79% reduction in forecast error.<\/strong><\/p><\/blockquote>\n<p>This is the one department where the model&#8217;s memory features clearly outperform naive seasonality, because 1_8&#8217;s history has a repeatable week-to-week structure that SNAIVE&#8217;s once-a-year lookup misses entirely. When people talk about ML beating naive forecasting by &#8222;huge margins,&#8220; they&#8217;re usually describing a series like 1_8. The honest caveat is that most of your SKUs won&#8217;t look like 1_8 \u2014 they&#8217;ll look like 1_1.<\/p>\n<p><strong>Department 1_1 is the humility pill.<\/strong> Its final 12 weeks happened to land almost exactly on top of the same 12 weeks from the previous year \u2014 which is SNAIVE&#8217;s entire method. A 500-tree gradient-boosted model that has seen every other department&#8217;s history cannot beat a one-line formula that just looks up the past, because there&#8217;s nothing left to beat. The forecast-vs-actual chart above is Department 1_1, and you can see all three models converging on roughly the same answer \u2014 that&#8217;s the point. 
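<\/p>
<p>That one-line formula, for reference (assuming <code>y<\/code> holds one department&#8217;s weekly training series):<\/p>
<pre><code class=\"language-r\">library(forecast)\n\n# Seasonal naive: each forecast = the observation from 52 weeks earlier\nsnaive(ts(y, frequency = 52), h = 12)$mean\n<\/code><\/pre>
<p>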
On a series where last year is right, ML can only match; it cannot improve.<\/p>\n<p>The good news in the table is consistency against SNAIVE: XGBoost beats SNAIVE on 6 of 7 departments \u2014 the only miss is Department 1_1. The honest news is that &#8220;6.1% better than ETS on the mean, 10.2% better on the median&#8221; is the real magnitude of the ML advantage on this kind of data. Budget your expectations accordingly.<\/p>\n<h2>Feature Importance: What the Model Actually Learned<\/h2>\n<p>Feature importance answers the question every stakeholder asks: <em>what is the model actually using to make predictions?<\/em> XGBoost&#8217;s &#8220;Gain&#8221; metric measures how much each feature contributed to reducing the training loss across all splits it appeared in.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/xgb_feature_importance-2.png\" alt=\"Top 15 features by XGBoost Gain importance\" \/><\/p>\n<p>The ranking confirms a pattern consistently observed across ML forecasting studies (see Januschowski et al. 2022 for a review of tree-based forecasting): memory features dominate. The most important feature is <strong><code>roll_mean_13<\/code><\/strong> \u2014 the 13-week trailing average \u2014 at <strong>23.0% of total Gain<\/strong>. <code>lag_26<\/code> (six months ago) is second at 16.5%. <code>roll_mean_4<\/code> is third at 15.6%. <code>lag_4<\/code> is fourth at 14.6%. <code>lag_52<\/code> \u2014 the &#8220;same week last year&#8221; feature we spent the most time building intuition around \u2014 comes in at only fifth place, 11.1% of Gain. 
Add <code>lag_1<\/code> (10.8%) and the top six features alone account for over 91% of everything the model learned.<\/p>\n<h3>A Known Pattern, Reconfirmed: <code>IsHoliday<\/code> Didn&#8217;t Crack the Top 15<\/h3>\n<p>Here is what you&#8217;ll find when you inspect the feature importance table: <strong>the holiday flag contributed nothing measurable to the model.<\/strong> <code>IsHoliday<\/code> didn&#8217;t rank in the top 15 features. None of the calendar features (week-of-year, month, quarter, year) contributed more than half a percent of Gain each. On a dataset where Thanksgiving, Black Friday, and Christmas should \u2014 in theory \u2014 be exactly the kind of event a covariate helps with, the model quietly ignored the label we gave it. This isn&#8217;t novel: it&#8217;s a recurring pattern in the ML forecasting literature. Once your history covers two or more seasonal cycles, lag features tend to absorb the signal that calendar indicators were meant to carry.<\/p>\n<p>Why? Because <code>lag_52<\/code> got there first. &#8220;Sales in the same week of last year&#8221; already encodes every recurring event on the calendar. Thanksgiving week 2012&#8217;s best predictor isn&#8217;t a boolean <code>IsHoliday = 1<\/code>; it&#8217;s the actual sales number from Thanksgiving week 2011, which is exactly what <code>lag_52<\/code> carries. By the time XGBoost considered <code>IsHoliday<\/code>, the seasonal signal had been fully absorbed by the lag and rolling features. Adding a flag to mark &#8220;this is a holiday&#8221; was redundant with a feature that already said &#8220;last year in this exact week, we sold $X.&#8221;<\/p>\n<p>The lesson for your own feature engineering: <strong>lags eat calendar features for breakfast<\/strong>, as long as your history is long enough for a full seasonal cycle. If you only have 18 months of data, <code>lag_52<\/code> isn&#8217;t available for the first year, and <code>IsHoliday<\/code> might genuinely earn its keep. 
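<\/p>
<p>Checking this on your own fit is cheap; a minimal sketch (assuming <code>fit_xgb<\/code> is the fitted workflow from the training section):<\/p>
<pre><code class=\"language-r\">library(xgboost)\n\n# Gain per feature, summed over every split that used it\nimp &lt;- xgb.importance(model = workflows::extract_fit_engine(fit_xgb))\nhead(imp, 15)\n<\/code><\/pre>
<p>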
With two full cycles, the covariate usually becomes decorative.<\/p>\n<p>In supply chain terms: the features that moved the needle were <strong>memory<\/strong>, not <strong>context<\/strong>. The model&#8217;s most predictive inputs are &#8220;the smoothed 13-week level,&#8221; &#8220;sales six months ago,&#8221; and &#8220;the smoothed one-month level.&#8221; Raw year-ago values helped, but smoothed windows helped <em>more<\/em> \u2014 because the rolling means filter out the week-to-week noise that a single lag carries. The M5 lesson still holds: feature engineering is the whole story. On this data, the story is specifically about <strong>lag and rolling-window<\/strong> engineering. Calendar engineering turned out to be optional.<\/p>\n<h2>The Honest Verdict: When XGBoost Beats ETS at SKU Level<\/h2>\n<p>The headline finding: <strong>single-digit-percent improvements are the norm for ML forecasters on multi-series retail data<\/strong>, not the exception. The M5 Lesson post covered a sobering result from the competition: at the finest hierarchy level (SKU-store, the one your ERP actually plans against), the M5 winner beat the strongest classical benchmark by only about <strong>3%<\/strong>. Not 30%. Not 300%. Single digits.<\/p>\n<p>Our walkthrough here lands in the same broad neighbourhood \u2014 XGBoost is 6.1% better than STL-ETS on mean MASE on this particular 7-department sample, 10.2% better on the median. The metrics and datasets aren&#8217;t directly comparable to M5&#8217;s WRMSSE, but the order of magnitude is consistent with what&#8217;s been reported across the ML forecasting literature on weekly retail data. Expect single-digit-percent gains, budget for the engineering effort accordingly, and don&#8217;t promise the business more than that. 
Anyone selling you a 40% accuracy lift from &#8222;AI forecasting&#8220; has not actually run the benchmark.<\/p>\n<p>Here is when XGBoost earns its complexity:<\/p>\n<ul>\n<li><strong>You have multiple related series.<\/strong> Seven departments, 500 SKUs, 40 distribution centres \u2014 the global model amortises feature engineering across all of them and learns shared structure. A single isolated series is ETS territory.<\/li>\n<li><strong>You have real covariates that actually move demand.<\/strong> Promotion calendars, price changes, weather, or holiday flags with genuinely different magnitude than non-holiday weeks. If <code>IsHoliday<\/code> barely moves your series (as we found here), the covariate advantage evaporates.<\/li>\n<li><strong>You have enough history.<\/strong> Two full seasonal cycles is the minimum. Below that, there aren&#8217;t enough <code>lag_52<\/code> observations for the model to learn the annual anchor, and XGBoost collapses onto short-term memory.<\/li>\n<li><strong>Your series have heterogeneous structure.<\/strong> Different products with different seasonality, different trend directions, different volatility regimes. XGBoost handles this natively via series dummies; fitting 500 separate ETS models is slow and brittle.<\/li>\n<\/ul>\n<p>And here is when ETS (or even SNAIVE) will quietly beat you:<\/p>\n<ul>\n<li><strong>Single isolated series.<\/strong> No cross-learning advantage to exploit.<\/li>\n<li><strong>Short history.<\/strong> Under two full seasonal cycles, lag features go to NA and the model loses its best predictor.<\/li>\n<li><strong>Sparse intermittent demand.<\/strong> Spare parts, dead stock, anything with lots of zeros. Croston&#8217;s method and its descendants are purpose-built; XGBoost is not.<\/li>\n<li><strong>The next period happens to look like last year.<\/strong> Department 1_1 above. 
SNAIVE wins and there is nothing you can do about it \u2014 which is why SNAIVE belongs in every benchmark.<\/li>\n<\/ul>\n<p>The verdict: XGBoost is the right default when you have multi-series data with real demand drivers. It is the wrong default when you don&#8217;t. Don&#8217;t let anyone tell you the choice is always obvious \u2014 the whole point of running the horse race is that you can&#8217;t predict the winner.<\/p>\n<h2>Your Next Steps<\/h2>\n<ol>\n<li><strong>Run this script on your own ERP weekly data.<\/strong> The recipe adapts with a single column rename. If you have store\/DC\/SKU identifiers, one-hot encode them and go global \u2014 you&#8217;ll get an M5-style model on your own data in an afternoon.<\/li>\n<li><strong>Segment A\/B\/C items before fitting globally.<\/strong> Mixing fast-movers with dead stock in one global model is how you get mediocre forecasts on both. Fit A-items globally, B-items globally, and handle C-items with intermittent-demand methods like Croston.<\/li>\n<li><strong>Flag volatility outliers before you trust a forecast.<\/strong> Department 1_3 above is every forecaster&#8217;s nightmare \u2014 an annual spike that every model mis-timed and mis-sized badly enough to send MASE above 6. Calculate MASE per series and escalate anything &gt; 2 for human review instead of auto-posting it to your plan.<\/li>\n<li><strong>Always run SNAIVE as your benchmark before deploying anything fancier.<\/strong> If XGBoost can&#8217;t beat SNAIVE on a given series (Department 1_1 above), either your features are wrong, your history is too short, or the series just happens to repeat itself. Whichever it is, you needed to know.<\/li>\n<li><strong>Start with lag + rolling features, add calendar + covariates only when they earn their keep.<\/strong> On this dataset the memory features (lags and rolling windows) account for over 99% of total model Gain; the holiday flag contributed nothing. 
Your own data may differ \u2014 but the way you find out is empirically, not by assumption.<\/li>\n<\/ol>\n<h2>Interactive Dashboard<\/h2>\n<p>Explore the forecast comparison across all 7 Walmart departments \u2014 pick any department, toggle models, and see where XGBoost wins, where ETS stays competitive, and where 1_3&#8217;s demand spike breaks every model at once.<\/p>\n<div class=\"dashboard-link\" style=\"margin: 2em 0; padding: 1.5em; background: #f8f9fa; border-left: 4px solid #0073aa; border-radius: 4px;\">\n<p style=\"margin: 0 0 0.5em 0; font-size: 1.1em;\"><strong>Interactive Dashboard<\/strong><\/p>\n<p style=\"margin: 0 0 1em 0;\">Explore the data yourself \u2014 adjust parameters and see the results update in real time.<\/p>\n<p><a style=\"display: inline-block; padding: 0.6em 1.2em; background: #0073aa; color: #fff; text-decoration: none; border-radius: 4px; font-weight: bold;\" href=\"https:\/\/inphronesys.com\/wp-content\/uploads\/2026\/04\/2026-04-20_XGBoost_Supply_Chain_Forecasting_dashboard-2.html\" target=\"_blank\" rel=\"noopener\">Open Interactive Dashboard \u2192<\/a><\/p>\n<\/div>\n<details>\n<summary><strong>Show R Code<\/strong><\/summary>\n<pre><code class=\"language-r\"># =============================================================================\n# generate_xgboost_images.R\n#   Part of the April 2026 Forecasting Month series\n#   Blog post: \"XGBoost for Supply Chain Forecasting: The Feature Engineering\n#              Is the Whole Story\"\n#\n# Produces 4 images in Images\/:\n#   1. xgb_walmart_series.png       \u2014 7 Walmart departments, weekly sales, test shaded\n#   2. xgb_feature_importance.png   \u2014 XGBoost top-15 features by Gain\n#   3. xgb_model_comparison.png     \u2014 MASE by department \u00d7 (XGBoost, ETS, SNAIVE)\n#   4. 
xgb_forecast_vs_actual.png   \u2014 Dept 1_1: forecasts + actuals, test window\n#\n# Run from project root:\n#   Rscript Scripts\/generate_xgboost_images.R\n# =============================================================================\n\nsuppressPackageStartupMessages({\n  library(tidymodels)\n  library(modeltime)\n  library(timetk)\n  library(tidyverse)\n  library(xgboost)\n  library(lubridate)\n  library(slider)\n  library(forecast)\n})\n\nsource(\"Scripts\/theme_inphronesys.R\")\nset.seed(42)\n\n# ---- 1. Data -----------------------------------------------------------------\ndata(\"walmart_sales_weekly\", package = \"timetk\")\n\nwalmart &lt;- walmart_sales_weekly %&gt;%\n  select(id, Date, Weekly_Sales, IsHoliday) %&gt;%\n  mutate(\n    id        = as.character(id),\n    IsHoliday = as.integer(IsHoliday)\n  ) %&gt;%\n  arrange(id, Date)\n\ndepts   &lt;- sort(unique(walmart$id))\nhorizon &lt;- 12\n\n# Time-aware split: the last 12 distinct dates become the test window\nall_dates  &lt;- sort(unique(walmart$Date))\ntest_dates &lt;- tail(all_dates, horizon)\nsplit_date &lt;- min(test_dates)\n\ntrain_data &lt;- walmart %&gt;% filter(Date &lt;  split_date)\ntest_data  &lt;- walmart %&gt;% filter(Date &gt;= split_date)\n\n# ---- 2. Honest feature engineering (per-id; no cross-series leakage) ---------\n# Rolling mean \/ sd are computed over the STRICTLY PAST window (t-n..t-1).\n# Lags use only past Weekly_Sales. 
Grouping by id prevents series-boundary bleed.\nlag_roll &lt;- function(x, n, fun = mean) {\n  out &lt;- rep(NA_real_, length(x))\n  for (i in seq_along(x)) {\n    if (i &gt; n) out[i] &lt;- fun(x[(i - n):(i - 1)], na.rm = TRUE)\n  }\n  out\n}\n\nadd_features &lt;- function(df) {\n  df %&gt;%\n    group_by(id) %&gt;%\n    arrange(Date) %&gt;%\n    mutate(\n      lag_1        = dplyr::lag(Weekly_Sales, 1),\n      lag_4        = dplyr::lag(Weekly_Sales, 4),\n      lag_13       = dplyr::lag(Weekly_Sales, 13),\n      lag_26       = dplyr::lag(Weekly_Sales, 26),\n      lag_52       = dplyr::lag(Weekly_Sales, 52),\n      roll_mean_4  = lag_roll(Weekly_Sales,  4, mean),\n      roll_mean_13 = lag_roll(Weekly_Sales, 13, mean),\n      roll_mean_26 = lag_roll(Weekly_Sales, 26, mean),\n      roll_sd_4    = lag_roll(Weekly_Sales,  4, sd)\n    ) %&gt;%\n    ungroup() %&gt;%\n    mutate(id = factor(id))\n}\n\nwalmart_feat &lt;- add_features(walmart)\ntrain_feat   &lt;- walmart_feat %&gt;% filter(Date &lt;  split_date)\ntest_feat    &lt;- walmart_feat %&gt;% filter(Date &gt;= split_date)\n\n# ---- 3. Recipe ---------------------------------------------------------------\nrecipe_xgb &lt;- recipe(Weekly_Sales ~ ., data = train_feat) %&gt;%\n  step_timeseries_signature(Date) %&gt;%\n  step_rm(contains(\"iso\"),   contains(\"xts\"),    contains(\"hour\"),\n          contains(\"minute\"),contains(\"second\"), contains(\"am.pm\"),\n          contains(\"mday\"),  contains(\"yday\")) %&gt;%\n  step_rm(Date) %&gt;%\n  step_normalize(Date_index.num, Date_year) %&gt;%\n  step_dummy(all_nominal_predictors(), one_hot = FALSE) %&gt;%\n  step_naomit(all_predictors())\n\n# ---- 4. 
XGBoost model + workflow ---------------------------------------------\nmodel_xgb &lt;- boost_tree(\n    trees       = 500,\n    min_n       = 10,\n    tree_depth  = 6,\n    learn_rate  = 0.01,\n    sample_size = 0.8\n  ) %&gt;%\n  set_engine(\"xgboost\") %&gt;%\n  set_mode(\"regression\")\n\nwf_xgb  &lt;- workflow() %&gt;% add_recipe(recipe_xgb) %&gt;% add_model(model_xgb)\nfit_xgb &lt;- fit(wf_xgb, data = train_feat)\n\npred_xgb &lt;- predict(fit_xgb, new_data = test_feat) %&gt;%\n  bind_cols(test_feat %&gt;% select(id, Date, Weekly_Sales)) %&gt;%\n  rename(xgb = .pred) %&gt;%\n  mutate(id = as.character(id))\n\n# ---- 5. Baselines: SNAIVE and ETS, fit per department ------------------------\nper_dept_baselines &lt;- map_df(depts, function(s) {\n  train_s &lt;- train_data %&gt;% filter(id == s) %&gt;% arrange(Date)\n  test_s  &lt;- test_data  %&gt;% filter(id == s) %&gt;% arrange(Date)\n\n  y &lt;- ts(train_s$Weekly_Sales, frequency = 52)\n\n  sn   &lt;- forecast::snaive(y, h = horizon)$mean\n  # forecast::ets() caps seasonal period at 24, so for weekly (m = 52) we use\n  # stlf(): STL decomposition + ETS on the seasonally-adjusted series.\n  ets_ &lt;- forecast::stlf(y, h = horizon, method = \"ets\")$mean\n\n  tibble(\n    id     = s,\n    Date   = test_s$Date,\n    actual = test_s$Weekly_Sales,\n    snaive = as.numeric(sn),\n    ets    = as.numeric(ets_)\n  )\n})\n\n# ---- 6. 
MASE (seasonal, m = 52) ----------------------------------------------\n# MASE denominator = mean(|y_t - y_{t-m}|) on the IN-SAMPLE training data.\nmase &lt;- function(actual, forecast, train_actual, m = 52) {\n  denom &lt;- mean(abs(diff(train_actual, lag = m)), na.rm = TRUE)\n  mean(abs(actual - forecast), na.rm = TRUE) \/ denom\n}\n\n# Pre-build per-department training vectors so MASE uses the correct denominator.\ntrain_map &lt;- train_data %&gt;%\n  arrange(id, Date) %&gt;%\n  group_by(id) %&gt;%\n  summarise(train_vec = list(Weekly_Sales), .groups = \"drop\")\n\nscores &lt;- per_dept_baselines %&gt;%\n  left_join(pred_xgb %&gt;% select(id, Date, xgb), by = c(\"id\", \"Date\")) %&gt;%\n  left_join(train_map, by = \"id\") %&gt;%\n  group_by(id) %&gt;%\n  summarise(\n    mase_xgb    = mase(actual, xgb,    train_vec[[1]], 52),\n    mase_ets    = mase(actual, ets,    train_vec[[1]], 52),\n    mase_snaive = mase(actual, snaive, train_vec[[1]], 52),\n    .groups = \"drop\"\n  )\n\nmean_mase &lt;- scores %&gt;%\n  summarise(\n    XGBoost = mean(mase_xgb),\n    ETS     = mean(mase_ets),\n    SNAIVE  = mean(mase_snaive)\n  )\n\n# ---- 7. Feature importance ---------------------------------------------------\nimportance &lt;- xgboost::xgb.importance(model = extract_fit_engine(fit_xgb)) %&gt;%\n  as_tibble()\n\ntop15 &lt;- importance %&gt;%\n  arrange(desc(Gain)) %&gt;%\n  slice_head(n = 15) %&gt;%\n  mutate(\n    feature_type = case_when(\n      str_detect(Feature, \"^lag_\")         ~ \"Lag\",\n      str_detect(Feature, \"^roll\")         ~ \"Rolling mean \/ sd\",\n      str_detect(Feature, \"(?i)holiday\")   ~ \"Calendar \/ Holiday\",\n      str_detect(Feature, \"^Date_\")        ~ \"Calendar \/ Holiday\",\n      str_detect(Feature, \"^id_\")          ~ \"Series ID\",\n      TRUE                                 ~ \"Other\"\n    )\n  )\n\n# ---- 8. 
Images ---------------------------------------------------------------\n\n# --- 8.1 Walmart weekly series, faceted, test shaded ---\ntest_shade &lt;- tibble(\n  xmin = split_date, xmax = max(walmart$Date),\n  ymin = -Inf,       ymax = Inf\n)\n\np1 &lt;- ggplot(walmart, aes(x = Date, y = Weekly_Sales)) +\n  geom_rect(data = test_shade,\n            aes(xmin = xmin, xmax = xmax, ymin = ymin, ymax = ymax),\n            fill = iph_colors$blue, alpha = 0.18, inherit.aes = FALSE) +\n  geom_line(color = iph_colors$dark, linewidth = 0.45) +\n  facet_wrap(~ id, ncol = 2, scales = \"free_y\") +\n  scale_x_date(date_breaks = \"6 months\", date_labels = \"%b '%y\") +\n  scale_y_continuous(labels = scales::dollar_format(scale = 1e-3, suffix = \"k\")) +\n  labs(title    = \"Walmart Weekly Sales \u2014 7 Departments, 143 Weeks\",\n       subtitle = sprintf(\"Shaded band = 12-week test window (from %s)\",\n                          format(split_date, \"%b %d, %Y\")),\n       x = NULL, y = \"Weekly Sales\") +\n  theme_inphronesys(grid = \"y\")\n\nggsave(\"xgb_walmart_series.png\", p1,\n       width = 8, height = 7, dpi = 100, bg = \"white\")\n\n# --- 8.2 Feature importance bar chart ---\nfeature_colors &lt;- c(\"Lag\"                = iph_colors$blue,\n                    \"Rolling mean \/ sd\"  = iph_colors$navy,\n                    \"Calendar \/ Holiday\" = iph_colors$orange,\n                    \"Series ID\"          = iph_colors$teal,\n                    \"Other\"              = iph_colors$grey)\n\np2 &lt;- ggplot(top15, aes(x = Gain, y = reorder(Feature, Gain), fill = feature_type)) +\n  geom_col(width = 0.75) +\n  scale_fill_manual(values = feature_colors, name = NULL) +\n  scale_x_continuous(labels = scales::percent_format(accuracy = 1)) +\n  labs(title    = \"XGBoost Feature Importance \u2014 Top 15\",\n       subtitle = \"Gain = each feature's share of total reduction in training loss\",\n       x = \"Gain\", y
= NULL) +\n  theme_inphronesys(grid = \"x\")\n\nggsave(\"xgb_feature_importance.png\", p2,\n       width = 8, height = 5, dpi = 100, bg = \"white\")\n\n# --- 8.3 MASE comparison (grouped bars) ---\nmase_long &lt;- scores %&gt;%\n  pivot_longer(starts_with(\"mase_\"), names_to = \"model\", values_to = \"mase\") %&gt;%\n  mutate(model = recode(model,\n                        mase_xgb    = \"XGBoost\",\n                        mase_ets    = \"ETS\",\n                        mase_snaive = \"SNAIVE\"),\n         model = factor(model, levels = c(\"XGBoost\", \"ETS\", \"SNAIVE\")))\n\np3 &lt;- ggplot(mase_long, aes(x = id, y = mase, fill = model)) +\n  geom_col(position = position_dodge(0.8), width = 0.72) +\n  geom_hline(yintercept = 1, linetype = \"dashed\",\n             color = iph_colors$red, linewidth = 0.5) +\n  annotate(\"text\", x = 0.6, y = 1,\n           label = \"MASE = 1  (in-sample seasonal naive)\",\n           hjust = 0, vjust = -0.6,\n           color = iph_colors$red, size = 3.2, fontface = \"italic\",\n           family = \"Inter\") +\n  scale_fill_manual(values = c(\"XGBoost\" = iph_colors$blue,\n                               \"ETS\"     = iph_colors$green,\n                               \"SNAIVE\"  = iph_colors$lightgrey),\n                    name = NULL) +\n  labs(title    = \"Forecast Accuracy by Department (MASE \u2014 lower is better)\",\n       subtitle = \"12-week holdout across 7 Walmart departments\",\n       x = \"Department ID\", y = \"MASE\") +\n  theme_inphronesys(grid = \"y\")\n\nggsave(\"xgb_model_comparison.png\", p3,\n       width = 8, height = 5, dpi = 100, bg = \"white\")\n\n# --- 8.4 Dept 1_1: context + forecasts vs actual ---\ndept_focus    &lt;- \"1_1\"\ncontext_weeks &lt;- 24\n\ncontext_df &lt;- train_data %&gt;%\n  filter(id == dept_focus) %&gt;%\n  arrange(Date) %&gt;%\n  slice_tail(n =
context_weeks)\n\nforecast_df &lt;- per_dept_baselines %&gt;%\n  filter(id == dept_focus) %&gt;%\n  left_join(pred_xgb %&gt;% select(id, Date, xgb), by = c(\"id\", \"Date\")) %&gt;%\n  pivot_longer(c(xgb, ets, snaive), names_to = \"model\", values_to = \"forecast\") %&gt;%\n  mutate(model = recode(model, xgb = \"XGBoost\", ets = \"ETS\", snaive = \"SNAIVE\"),\n         model = factor(model, levels = c(\"XGBoost\", \"ETS\", \"SNAIVE\")))\n\nactuals_df &lt;- per_dept_baselines %&gt;%\n  filter(id == dept_focus) %&gt;%\n  select(Date, actual)\n\np4 &lt;- ggplot() +\n  geom_line(data = context_df, aes(x = Date, y = Weekly_Sales),\n            color = iph_colors$grey, linewidth = 0.55) +\n  geom_vline(xintercept = split_date,\n             linetype = \"dashed\", color = iph_colors$dark, linewidth = 0.4) +\n  geom_line(data = forecast_df,\n            aes(x = Date, y = forecast, color = model),\n            linewidth = 0.9) +\n  geom_point(data = actuals_df,\n             aes(x = Date, y = actual),\n             color = iph_colors$dark, size = 2.2) +\n  scale_color_manual(values = c(\"XGBoost\" = iph_colors$blue,\n                                \"ETS\"     = iph_colors$green,\n                                \"SNAIVE\"  = iph_colors$red), name = NULL) +\n  scale_y_continuous(labels = scales::dollar_format(scale = 1e-3, suffix = \"k\")) +\n  labs(title    = sprintf(\"Department %s \u2014 12-Week Forecast vs Actual\", dept_focus),\n       subtitle = \"Grey line = last 24 weeks of training context \u2022 Black dots = actual test values\",\n       x = NULL, y = \"Weekly Sales\") +\n  theme_inphronesys(grid = \"y\")\n\nggsave(\"xgb_forecast_vs_actual.png\", p4,\n       width = 8, height = 5, dpi = 100, bg = \"white\")\n\n# ---- 9.
Hyperparameter grid (3x3 tree_depth x learn_rate) --------------------\n# NOTE: combos are scored on the 12-week holdout purely for illustration;\n# proper tuning would use time-series cross-validation inside the training\n# window, keeping the holdout untouched until the very end.\ngrid &lt;- expand_grid(tree_depth = c(3, 6, 9),\n                    learn_rate = c(0.05, 0.01, 0.005))\n\nfit_one_combo &lt;- function(td, lr) {\n  m &lt;- boost_tree(trees = 500, min_n = 10,\n                  tree_depth = td, learn_rate = lr,\n                  sample_size = 0.8) %&gt;%\n    set_engine(\"xgboost\") %&gt;% set_mode(\"regression\")\n  wf &lt;- workflow() %&gt;% add_recipe(recipe_xgb) %&gt;% add_model(m)\n  ft &lt;- fit(wf, data = train_feat)\n  pr &lt;- predict(ft, new_data = test_feat) %&gt;%\n    bind_cols(test_feat %&gt;% select(id, Date, Weekly_Sales)) %&gt;%\n    mutate(id = as.character(id))\n\n  per_dept_mase &lt;- vapply(depts, function(s) {\n    sub &lt;- pr %&gt;% filter(id == s) %&gt;% arrange(Date)\n    tr  &lt;- train_map %&gt;% filter(id == s) %&gt;% pull(train_vec) %&gt;% .[[1]]\n    mase(sub$Weekly_Sales, sub$.pred, tr, 52)\n  }, numeric(1))\n\n  mean(per_dept_mase)\n}\n\nhyp_grid &lt;- grid %&gt;%\n  mutate(mean_mase = map2_dbl(tree_depth, learn_rate,\n                              ~ fit_one_combo(.x, .y)))\n\nhyp_grid %&gt;% arrange(mean_mase) %&gt;% print(n = Inf)\n<\/code><\/pre>\n<\/details>\n<h2>References<\/h2>\n<ul>\n<li>Makridakis, S., Spiliotis, E., &amp; Assimakopoulos, V. (2022). <em>The M5 competition: Background, organization, and implementation.<\/em> International Journal of Forecasting, 38(4), 1325\u20131336.<\/li>\n<li>Januschowski, T., Wang, Y., Torkkola, K., Erkkil\u00e4, T., Hasson, H., &amp; Gasthaus, J. (2022). <em>Forecasting with trees.<\/em> International Journal of Forecasting, 38(4), 1473\u20131481.<\/li>\n<li>Hyndman, R. J., &amp; Athanasopoulos, G. (2021). <em>Forecasting: Principles and Practice (3rd ed.).<\/em> OTexts. <a href=\"https:\/\/otexts.com\/fpp3\/\">https:\/\/otexts.com\/fpp3\/<\/a><\/li>\n<li>Chen, T., &amp; Guestrin, C. (2016). <em>XGBoost: A Scalable Tree Boosting System.<\/em> KDD &#8217;16.<\/li>\n<li>Dancho, M. (2024).
<em>modeltime: The Tidymodels Extension for Time Series Modeling.<\/em> R package. <a href=\"https:\/\/business-science.github.io\/modeltime\/\">https:\/\/business-science.github.io\/modeltime\/<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>A hands-on walkthrough of global XGBoost forecasting in R with tidymodels and modeltime, applied to the Walmart weekly sales dataset. What the feature importance reveals, when ML earns its complexity, and when ETS or SNAIVE quietly wins.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[13,206,20],"tags":[289,284,290,15,291,292,288],"class_list":["post-1909","post","type-post","status-publish","format-standard","hentry","category-data-science","category-forecasting","category-supply-chain","tag-feature-engineering","tag-machine-learning-2","tag-modeltime","tag-r","tag-supply-chain-forecasting","tag-tidymodels","tag-xgboost"],"_links":{"self":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1909","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1909"}],"version-history":[{"count":1,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1909\/revisions"}],"predecessor-version":[{"id":1911,"href":"https:\/\/inphronesys.com\/index.php?rest_route=\/wp\/v2\/posts\/1909\/revisions\/1911"}],"wp:attachment":[{"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1909"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"htt
ps:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1909"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/inphronesys.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1909"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}