Interpretability
Silverkite generates easily interpretable forecasting models when using its default ML algorithms (e.g. Ridge). This is because, after transforming the raw features into basis functions (transformed features), the model uses an additive structure. Silverkite can break down each forecast into various summable components, e.g. long-term growth, seasonality, holidays, events, short-term growth (auto-regression), regressor effects, etc.
The approach to generate these breakdowns consists of two steps, sketched in the toy example after this list:
Group the transformed variables into various meaningful groups.
Calculate the sum of the features multiplied by their regression coefficients within each group.
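Below is a toy illustration of these two steps with made-up feature names and coefficients (a minimal sketch, not Greykite's internal code): group the columns of a design matrix, then sum feature times coefficient within each group.

import pandas as pd

# Hypothetical design matrix (transformed features) and fitted coefficients.
x_mat = pd.DataFrame({
    "ct1": [0.1, 0.2, 0.3],               # trend basis function
    "sin1_tow_weekly": [0.5, -0.5, 0.0],  # seasonality basis function
    "y_lag1": [0.9, 1.1, 1.0]})           # autoregression feature
coefs = {"ct1": 2.0, "sin1_tow_weekly": -1.0, "y_lag1": 0.5}

# Step 1: group the transformed variables into meaningful groups.
groups = {"trend": ["ct1"], "seasonality": ["sin1_tow_weekly"], "AR": ["y_lag1"]}

# Step 2: within each group, sum the features multiplied by their coefficients.
components = pd.DataFrame({
    group: sum(x_mat[col] * coefs[col] for col in cols)
    for group, cols in groups.items()})
print(components)              # one summable component per group
print(components.sum(axis=1))  # adds up to the forecast (up to the intercept)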
These breakdowns can then be used to answer questions such as:
Question 1: How is the forecast value generated?
Question 2: What is driving the change of the forecast as new data comes in?
Forecast components can also help us analyze model behavior and sensitivity. This is because while it is not feasible to compare a large set of features across two model settings, it can be quite practical and informative to compare a few well-defined components.
This tutorial discusses in detail the usage of forecast_breakdown and how to estimate forecast components using custom component dictionaries. Some of this functionality is built into the estimators via the method plot_components(...). An example of that usage is in the "Simple Forecast" tutorial in the Quick Start.
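For reference, the built-in component plot can be obtained directly from a trained estimator. A minimal sketch, assuming ``result`` is the output of ``run_forecast_config`` as produced later in this tutorial:

# Built-in component plot from the trained estimator (last pipeline step).
trained_estimator = result.model[-1]
fig = trained_estimator.plot_components()
plotly.io.show(fig)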
# required imports
import plotly
import warnings
import pandas as pd
from greykite.framework.benchmark.data_loader_ts import DataLoaderTS
from greykite.framework.templates.autogen.forecast_config import EvaluationPeriodParam
from greykite.framework.templates.autogen.forecast_config import ForecastConfig
from greykite.framework.templates.autogen.forecast_config import MetadataParam
from greykite.framework.templates.autogen.forecast_config import ModelComponentsParam
from greykite.framework.templates.forecaster import Forecaster
from greykite.framework.templates.model_templates import ModelTemplateEnum
from greykite.framework.utils.result_summary import summarize_grid_search_results
from greykite.common.viz.timeseries_plotting import plot_multivariate

warnings.filterwarnings("ignore")
Function to load and prepare data
This is the code to load and prepare the daily bike-sharing data in Washington DC.
def prepare_bikesharing_data():
    """Loads bike-sharing data and adds proper regressors."""
    dl = DataLoaderTS()
    agg_func = {"count": "sum", "tmin": "mean", "tmax": "mean", "pn": "mean"}
    df = dl.load_bikesharing(agg_freq="daily", agg_func=agg_func)

    # There are some zero values which cause an issue for MAPE.
    # This adds a small number to all data to avoid that issue.
    value_col = "count"
    df[value_col] += 10
    # We drop the last value, as it might be incomplete since the original data is hourly.
    df.drop(df.tail(1).index, inplace=True)
    # We only use data from 2018 for demonstration purposes (shorter run time).
    df = df.loc[df["ts"] > "2018-01-01"]
    df.reset_index(drop=True, inplace=True)

    print(f"\n df.tail(): \n {df.tail()}")

    # Creates useful regressors from existing raw regressors.
    df["bin_pn"] = (df["pn"] > 5).map(float)
    df["bin_heavy_pn"] = (df["pn"] > 20).map(float)
    df.columns = [
        "ts",
        value_col,
        "regressor_tmin",
        "regressor_tmax",
        "regressor_pn",
        "regressor_bin_pn",
        "regressor_bin_heavy_pn"]

    forecast_horizon = 7
    train_df = df.copy()
    test_df = df.tail(forecast_horizon).reset_index(drop=True)
    # When using the pipeline (as done in ``fit_forecast`` below),
    # fitting and prediction are done in one step.
    # Therefore, for demonstration purposes, we remove the response values of the last 7 days.
    # This is needed because we are using regressors,
    # and future regressor data must be augmented to ``df``.
    # We mimic that by removing the values of the response.
    train_df.loc[(len(train_df) - forecast_horizon):len(train_df), value_col] = None

    print(f"train_df shape: \n {train_df.shape}")
    print(f"test_df shape: \n {test_df.shape}")
    print(f"train_df.tail(14): \n {train_df.tail(14)}")
    print(f"test_df: \n {test_df}")

    return {
        "train_df": train_df,
        "test_df": test_df}
Function to fit Silverkite
This is the code for fitting a Silverkite model to the data.
def fit_forecast(
        df,
        time_col,
        value_col):
    """Fits a daily model for this use case.
    The daily model is a generic Silverkite model with regressors."""

    meta_data_params = MetadataParam(
        time_col=time_col,
        value_col=value_col,
        freq="D",
    )

    # Autoregression to be used in the function
    autoregression = {
        "autoreg_dict": {
            "lag_dict": {"orders": [1, 2, 3]},
            "agg_lag_dict": {
                "orders_list": [[7, 7*2, 7*3]],
                "interval_list": [(1, 7), (8, 7*2)]},
            "series_na_fill_func": lambda s: s.bfill().ffill()},
        "fast_simulation": True
    }

    # Changepoints configuration
    # The config includes changepoints both in trend and seasonality
    changepoints = {
        "changepoints_dict": {
            "method": "auto",
            "yearly_seasonality_order": 15,
            "resample_freq": "2D",
            "actual_changepoint_min_distance": "100D",
            "potential_changepoint_distance": "50D",
            "no_changepoint_distance_from_end": "50D"},
        "seasonality_changepoints_dict": {
            "method": "auto",
            "yearly_seasonality_order": 15,
            "resample_freq": "2D",
            "actual_changepoint_min_distance": "100D",
            "potential_changepoint_distance": "50D",
            "no_changepoint_distance_from_end": "50D"}
    }

    regressor_cols = [
        "regressor_tmin",
        "regressor_bin_pn",
        "regressor_bin_heavy_pn",
    ]

    # Model parameters
    model_components = ModelComponentsParam(
        growth=dict(growth_term="linear"),
        seasonality=dict(
            yearly_seasonality=[15],
            quarterly_seasonality=[False],
            monthly_seasonality=[False],
            weekly_seasonality=[7],
            daily_seasonality=[False]
        ),
        custom=dict(
            fit_algorithm_dict=dict(fit_algorithm="ridge"),
            extra_pred_cols=None,
            normalize_method="statistical"
        ),
        regressors=dict(regressor_cols=regressor_cols),
        autoregression=autoregression,
        uncertainty=dict(uncertainty_dict=None),
        events=dict(holiday_lookup_countries=["US"]),
        changepoints=changepoints
    )

    # Evaluation is done on the same ``forecast_horizon`` as desired for the output.
    # ``forecast_horizon`` is defined at the module level before this function is called.
    evaluation_period_param = EvaluationPeriodParam(
        test_horizon=None,
        cv_horizon=forecast_horizon,
        cv_min_train_periods=365*2,
        cv_expanding_window=True,
        cv_use_most_recent_splits=False,
        cv_periods_between_splits=None,
        cv_periods_between_train_test=0,
        cv_max_splits=5,
    )

    # Runs the forecast model using the "SILVERKITE" template
    forecaster = Forecaster()
    result = forecaster.run_forecast_config(
        df=df,
        config=ForecastConfig(
            model_template=ModelTemplateEnum.SILVERKITE.name,
            coverage=0.95,
            forecast_horizon=forecast_horizon,
            metadata_param=meta_data_params,
            evaluation_period_param=evaluation_period_param,
            model_components_param=model_components
        )
    )

    # Gets cross-validation results
    grid_search = result.grid_search
    cv_results = summarize_grid_search_results(
        grid_search=grid_search,
        decimals=2,
        cv_report_metrics=None)
    cv_results = cv_results.transpose()
    cv_results = pd.DataFrame(cv_results)
    cv_results.columns = ["err_value"]
    cv_results["err_name"] = cv_results.index
    cv_results = cv_results.reset_index(drop=True)
    cv_results = cv_results[["err_name", "err_value"]]

    print(f"\n cv_results: \n {cv_results}")

    return result
Loads and prepares data
The data is loaded and some information about the input data is printed. We use the number of daily rented bikes in Washington DC over time. The data is augmented with weather data (precipitation, min/max daily temperature).
data = prepare_bikesharing_data()
Out:
df.tail():
ts count tmin tmax pn
602 2019-08-27 12216 17.2 26.7 0.0
603 2019-08-28 11401 18.3 27.8 0.0
604 2019-08-29 12685 16.7 28.9 0.0
605 2019-08-30 12097 14.4 32.8 0.0
606 2019-08-31 11281 17.8 31.1 0.0
train_df shape:
(607, 7)
test_df shape:
(7, 7)
train_df.tail(14):
ts count regressor_tmin regressor_tmax regressor_pn regressor_bin_pn regressor_bin_heavy_pn
593 2019-08-18 9655.0 22.2 35.6 0.3 0.0 0.0
594 2019-08-19 10579.0 21.1 37.2 0.0 0.0 0.0
595 2019-08-20 8898.0 22.2 36.1 0.0 0.0 0.0
596 2019-08-21 11648.0 21.7 35.0 1.8 0.0 0.0
597 2019-08-22 11724.0 21.7 35.0 30.7 1.0 1.0
598 2019-08-23 8158.0 17.8 23.3 1.8 0.0 0.0
599 2019-08-24 12475.0 16.7 26.1 0.0 0.0 0.0
600 2019-08-25 NaN 15.6 26.7 0.0 0.0 0.0
601 2019-08-26 NaN 17.2 25.0 0.0 0.0 0.0
602 2019-08-27 NaN 17.2 26.7 0.0 0.0 0.0
603 2019-08-28 NaN 18.3 27.8 0.0 0.0 0.0
604 2019-08-29 NaN 16.7 28.9 0.0 0.0 0.0
605 2019-08-30 NaN 14.4 32.8 0.0 0.0 0.0
606 2019-08-31 NaN 17.8 31.1 0.0 0.0 0.0
test_df:
ts count regressor_tmin regressor_tmax regressor_pn regressor_bin_pn regressor_bin_heavy_pn
0 2019-08-25 11634 15.6 26.7 0.0 0.0 0.0
1 2019-08-26 11747 17.2 25.0 0.0 0.0 0.0
2 2019-08-27 12216 17.2 26.7 0.0 0.0 0.0
3 2019-08-28 11401 18.3 27.8 0.0 0.0 0.0
4 2019-08-29 12685 16.7 28.9 0.0 0.0 0.0
5 2019-08-30 12097 14.4 32.8 0.0 0.0 0.0
6 2019-08-31 11281 17.8 31.1 0.0 0.0 0.0
Fits model to daily data
In this step we fit a Silverkite model that uses weather regressors, holidays, auto-regression, etc.
df = data["train_df"]
time_col = "ts"
value_col = "count"
forecast_horizon = 7

result = fit_forecast(
    df=df,
    time_col=time_col,
    value_col=value_col)
trained_estimator = result.model[-1]
# Checks model coefficients and p-values
print("\n Model Summary:")
print(trained_estimator.summary())
Out:
Fitting 1 folds for each of 1 candidates, totalling 1 fits
cv_results:
err_name err_value
0 rank_test_MAPE 1
1 mean_test_MAPE 10.28
2 split_test_MAPE (10.28,)
3 mean_train_MAPE 21.71
4 param_estimator__auto_holiday_params None
5 params []
6 param_estimator__yearly_seasonality 15
7 param_estimator__weekly_seasonality 7
8 param_estimator__uncertainty_dict None
9 param_estimator__training_fraction None
10 param_estimator__train_test_thresh None
11 param_estimator__time_properties {'period': 86400, 'simple_freq': SimpleTimeFre...
12 param_estimator__simulation_num 10
13 param_estimator__seasonality_changepoints_dict {'method': 'auto', 'yearly_seasonality_order':...
14 param_estimator__remove_intercept False
15 param_estimator__regressor_cols [regressor_tmin, regressor_bin_pn, regressor_b...
16 param_estimator__regression_weight_col None
17 param_estimator__quarterly_seasonality False
18 param_estimator__origin_for_time_vars None
19 param_estimator__normalize_method statistical
20 param_estimator__monthly_seasonality False
21 param_estimator__min_admissible_value None
22 param_estimator__max_weekly_seas_interaction_o... 2
23 param_estimator__max_daily_seas_interaction_order 5
24 param_estimator__max_admissible_value None
25 param_estimator__lagged_regressor_dict None
26 param_estimator__holidays_to_model_separately auto
27 param_estimator__holiday_pre_post_num_dict None
28 param_estimator__holiday_pre_num_days 2
29 param_estimator__holiday_post_num_days 2
30 param_estimator__holiday_lookup_countries [US]
31 param_estimator__growth_term linear
32 param_estimator__fit_algorithm_dict {'fit_algorithm': 'ridge'}
33 param_estimator__feature_sets_enabled auto
34 param_estimator__fast_simulation True
35 param_estimator__extra_pred_cols None
36 param_estimator__explicit_pred_cols None
37 param_estimator__drop_pred_cols None
38 param_estimator__daily_seasonality False
39 param_estimator__daily_event_shifted_effect None
40 param_estimator__daily_event_neighbor_impact None
41 param_estimator__daily_event_df_dict None
42 param_estimator__changepoints_dict {'method': 'auto', 'yearly_seasonality_order':...
43 param_estimator__autoreg_dict {'lag_dict': {'orders': [1, 2, 3]}, 'agg_lag_d...
44 param_estimator__auto_seasonality False
45 param_estimator__auto_holiday False
46 param_estimator__auto_growth False
47 split_train_MAPE (21.71,)
48 mean_fit_time 9.8
49 std_fit_time 0.0
50 mean_score_time 16.87
51 std_score_time 0.0
52 split0_test_MAPE 10.28
53 std_test_MAPE 0.0
54 split0_train_MAPE 21.71
55 std_train_MAPE 0.0
Model Summary:
============================ Forecast Model Summary ============================
Number of observations: 600, Number of features: 134
Method: Ridge regression
Number of nonzero features: 133
Regularization parameter: 174.3
Residuals:
Min 1Q Median 3Q Max
-7532.0 -907.2 85.25 986.3 7618.0
Pred_col Estimate Std. Err Pr(>)_boot sig. code 95%CI
Intercept 9633.0 74.94 <2e-16 *** (9525.0, 9813.0)
events_Christmas Day -144.7 76.48 <2e-16 *** (-187.5, 1.078e-27)
events_C...as Day-1 -135.8 72.04 <2e-16 *** (-174.5, 1.095e-27)
events_C...as Day-2 -51.6 30.28 0.012 * (-82.37, 1.123e-27)
events_C...as Day+1 -72.73 40.69 <2e-16 *** (-106.5, 9.996e-28)
events_C...as Day+2 -23.44 17.26 0.118 (-50.68, 1.113e-27)
events_I...ence Day 45.49 22.02 0.022 * (-6.385e-28, 79.69)
events_I...ce Day-1 -27.7 20.25 0.134 (-62.19, 9.044)
events_I...ce Day-2 -14.33 28.76 0.582 (-66.74, 41.58)
events_I...ce Day+1 -16.08 15.18 0.248 (-43.28, 14.76)
events_I...ce Day+2 -65.19 46.12 0.114 (-133.0, 14.21)
events_Labor Day -61.5 34.43 0.002 ** (-91.23, 1.105e-27)
events_Labor Day-1 92.35 48.83 <2e-16 *** (-7.956e-28, 128.1)
events_Labor Day-2 -59.02 31.99 0.004 ** (-85.76, 1.061e-27)
events_Labor Day+1 -51.11 28.52 0.018 * (-82.65, 9.789e-28)
events_Labor Day+2 -3.722 11.37 0.492 (-28.71, 20.98)
events_Memorial Day -41.91 21.66 0.022 * (-73.18, 8.113e-28)
events_M...al Day-1 125.5 72.99 0.032 * (-1.283e-27, 237.0)
events_M...al Day-2 -28.66 20.71 0.136 (-65.88, 8.725)
events_M...al Day+1 -57.03 51.44 0.310 (-138.8, 34.07)
events_M...al Day+2 -35.61 18.11 0.026 * (-63.46, 6.578e-28)
events_New Years Day -46.77 26.61 0.014 * (-73.26, 1.063e-27)
events_N...rs Day-1 -42.77 26.51 0.028 * (-75.23, 1.063e-27)
events_N...rs Day-2 7.616 11.81 0.380 (-17.7, 32.23)
events_N...rs Day+1 -23.72 33.89 0.486 (-87.51, 35.24)
events_N...rs Day+2 33.46 33.8 0.340 (-33.64, 87.23)
events_Other -109.0 53.41 0.038 * (-196.4, 1.537)
events_Other-1 43.86 45.8 0.336 (-63.26, 120.4)
events_Other-2 -93.26 42.38 0.036 * (-174.9, -8.795)
events_Other+1 48.48 44.46 0.294 (-36.77, 134.3)
events_Other+2 -31.5 66.1 0.638 (-160.6, 105.1)
events_Thanksgiving -184.1 97.22 <2e-16 *** (-232.0, 7.381e-28)
events_T...giving-1 -46.3 27.66 0.008 ** (-76.77, 8.502e-28)
events_T...giving-2 2.582 9.677 0.554 (-22.2, 18.26)
events_T...giving+1 -128.7 69.34 0.002 ** (-169.7, 8.803e-28)
events_T...giving+2 -52.92 33.23 0.044 * (-96.53, 1.093e-27)
events_Veterans Day -31.03 20.15 0.072 . (-60.22, 1.137e-27)
events_V...ns Day-1 -42.18 25.08 0.036 * (-78.57, 8.032e-28)
events_V...ns Day-2 -77.52 41.36 <2e-16 *** (-109.1, 8.803e-28)
events_V...ns Day+1 24.4 17.51 0.148 (-8.979, 51.61)
events_V...ns Day+2 1.288 12.45 0.576 (-28.94, 26.4)
str_dow_2-Tue 19.98 26.64 0.462 (-27.82, 75.96)
str_dow_3-Wed 20.59 22.93 0.390 (-26.65, 60.26)
str_dow_4-Thu 28.45 27.06 0.280 (-22.94, 80.27)
str_dow_5-Fri 42.23 31.66 0.180 (-28.1, 98.26)
str_dow_6-Sat -9.575 37.97 0.808 (-77.24, 65.55)
str_dow_7-Sun -105.6 29.68 <2e-16 *** (-165.5, -47.42)
regressor_tmin 599.7 63.98 <2e-16 *** (443.8, 695.7)
regressor_bin_pn -836.8 65.61 <2e-16 *** (-957.3, -688.0)
regresso...heavy_pn -363.7 82.71 <2e-16 *** (-515.1, -190.8)
ct1 -7.363 30.15 0.802 (-68.59, 52.06)
is_weekend:ct1 -13.75 23.89 0.602 (-59.13, 31.7)
str_dow_2-Tue:ct1 20.77 22.74 0.372 (-24.65, 66.21)
str_dow_3-Wed:ct1 16.21 20.85 0.450 (-23.75, 59.84)
str_dow_4-Thu:ct1 3.699 23.88 0.876 (-41.64, 53.63)
str_dow_5-Fri:ct1 8.268 28.93 0.782 (-47.48, 63.65)
str_dow_6-Sat:ct1 17.78 31.02 0.578 (-37.18, 77.04)
str_dow_7-Sun:ct1 -36.41 30.07 0.220 (-93.01, 26.27)
cp0_2018_07_21_00 -155.3 26.65 <2e-16 *** (-202.0, -96.91)
is_weeke...07_21_00 -26.32 29.09 0.370 (-76.82, 29.53)
str_dow_...07_21_00 -38.36 32.68 0.240 (-105.1, 22.48)
str_dow_...07_21_00 -34.16 23.3 0.134 (-78.41, 13.83)
str_dow_...07_21_00 -13.82 32.45 0.656 (-75.56, 48.43)
str_dow_...07_21_00 -88.42 42.58 0.042 * (-165.8, 4.909)
str_dow_...07_21_00 16.16 41.7 0.716 (-63.25, 98.97)
str_dow_...07_21_00 -52.69 41.84 0.212 (-129.5, 26.47)
ct1:sin1_tow_weekly 20.6 22.56 0.372 (-25.91, 61.92)
ct1:cos1_tow_weekly -34.91 23.25 0.140 (-82.57, 9.324)
ct1:sin2_tow_weekly 30.22 22.65 0.160 (-15.94, 74.16)
ct1:cos2_tow_weekly -30.71 22.71 0.188 (-75.74, 14.96)
cp0_2018...w_weekly -2.956 25.81 0.922 (-56.8, 45.85)
cp0_2018...w_weekly -25.93 34.06 0.470 (-92.81, 36.41)
cp0_2018...w_weekly -12.03 31.67 0.710 (-70.04, 54.61)
cp0_2018...w_weekly -61.05 31.23 0.054 . (-122.1, 1.68)
sin1_tow_weekly 60.01 28.39 0.038 * (6.205, 114.1)
cos1_tow_weekly -57.42 29.78 0.066 . (-115.8, 7.316)
sin2_tow_weekly 59.25 28.3 0.032 * (7.321, 114.1)
cos2_tow_weekly 27.97 31.17 0.382 (-37.6, 86.48)
sin3_tow_weekly 8.528 30.76 0.760 (-44.82, 78.6)
cos3_tow_weekly 35.31 29.52 0.224 (-20.32, 94.2)
sin4_tow_weekly -8.528 30.76 0.760 (-78.6, 44.82)
cos4_tow_weekly 35.31 29.52 0.224 (-20.32, 94.2)
sin5_tow_weekly -59.25 28.3 0.032 * (-114.1, -7.321)
cos5_tow_weekly 27.97 31.17 0.382 (-37.6, 86.48)
sin6_tow_weekly -60.01 28.39 0.038 * (-114.1, -6.205)
cos6_tow_weekly -57.42 29.78 0.066 . (-115.8, 7.316)
sin7_tow_weekly 63.68 26.51 0.014 * (12.76, 114.5)
cos7_tow_weekly 0. 0. 1.000 (0., 0.)
sin1_ct1_yearly 14.3 50.05 0.768 (-81.59, 112.9)
cos1_ct1_yearly -524.4 39.57 <2e-16 *** (-588.8, -437.4)
sin2_ct1_yearly -206.7 48.19 <2e-16 *** (-300.1, -114.1)
cos2_ct1_yearly -85.62 55.77 0.130 (-180.1, 30.24)
sin3_ct1_yearly -73.14 52.68 0.170 (-171.6, 34.86)
cos3_ct1_yearly -41.59 49.27 0.384 (-136.4, 61.64)
sin4_ct1_yearly 33.59 52.73 0.520 (-73.84, 133.7)
cos4_ct1_yearly 36.9 51.64 0.462 (-64.12, 137.9)
sin5_ct1_yearly -54.65 56.34 0.322 (-162.3, 58.86)
cos5_ct1_yearly -57.41 53.97 0.306 (-156.3, 61.91)
sin6_ct1_yearly -17.45 56.17 0.744 (-119.1, 96.69)
cos6_ct1_yearly -194.4 61.41 <2e-16 *** (-299.0, -63.29)
sin7_ct1_yearly -29.26 56.59 0.634 (-150.1, 76.32)
cos7_ct1_yearly 59.93 59.49 0.318 (-53.67, 176.3)
sin8_ct1_yearly 11.46 60.32 0.850 (-101.3, 126.2)
cos8_ct1_yearly 22.87 57.61 0.670 (-90.03, 137.2)
sin9_ct1_yearly -20.75 57.95 0.754 (-118.3, 90.16)
cos9_ct1_yearly -36.6 54.06 0.486 (-130.5, 77.85)
sin10_ct1_yearly 98.24 62.16 0.118 (-31.22, 209.1)
cos10_ct1_yearly -19.53 54.73 0.732 (-124.3, 90.24)
sin11_ct1_yearly -15.28 54.15 0.752 (-133.3, 78.24)
cos11_ct1_yearly 1.071 61.62 0.988 (-134.7, 111.1)
sin12_ct1_yearly -46.26 54.83 0.416 (-157.6, 51.31)
cos12_ct1_yearly 126.5 60.24 0.026 * (8.399, 238.5)
sin13_ct1_yearly -78.99 56.6 0.160 (-184.6, 28.92)
cos13_ct1_yearly -50.87 56.6 0.360 (-168.6, 56.94)
sin14_ct1_yearly -50.02 58.0 0.410 (-163.4, 57.04)
cos14_ct1_yearly -25.4 58.76 0.680 (-147.7, 82.53)
sin15_ct1_yearly -161.4 60.38 0.004 ** (-274.0, -43.27)
cos15_ct1_yearly -38.67 57.1 0.506 (-154.6, 65.96)
sin1_con...07_21_00 37.09 45.03 0.440 (-47.25, 122.7)
cos1_con...07_21_00 -136.3 55.06 0.016 * (-247.8, -37.53)
sin2_con...07_21_00 30.48 47.62 0.520 (-62.32, 125.9)
cos2_con...07_21_00 -172.3 59.6 <2e-16 *** (-282.8, -49.76)
sin3_con...07_21_00 -29.37 52.75 0.572 (-140.5, 61.77)
cos3_con...07_21_00 -4.81 48.42 0.914 (-87.07, 92.21)
sin4_con...07_21_00 -18.05 45.55 0.684 (-103.4, 72.62)
cos4_con...07_21_00 42.7 53.17 0.410 (-64.07, 142.2)
sin5_con...07_21_00 46.26 58.6 0.432 (-63.39, 160.1)
cos5_con...07_21_00 69.93 50.05 0.182 (-23.47, 170.8)
y_lag1 608.2 81.67 <2e-16 *** (423.2, 758.6)
y_lag2 87.56 66.91 0.178 (-22.87, 243.8)
y_lag3 157.6 69.93 0.018 * (24.31, 299.5)
y_avglag_7_14_21 333.6 58.85 <2e-16 *** (220.7, 435.8)
y_avglag_1_to_7 235.1 43.5 <2e-16 *** (151.5, 328.6)
y_avglag_8_to_14 337.7 55.08 <2e-16 *** (232.7, 452.9)
Signif. Code: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Multiple R-squared: 0.7793, Adjusted R-squared: 0.7498
F-statistic: 22.607 on 70 and 528 DF, p-value: 1.110e-16
Model AIC: 12945.0, model BIC: 13260.0
WARNING: the F-ratio and its p-value on regularized methods might be misleading, they are provided only for reference purposes.
WARNING: the following columns have estimated coefficients equal to zero, while ridge is not supposed to have zero estimates. This is probably because these columns are degenerate in the design matrix. Make sure these columns do not have constant values.
['cos7_tow_weekly']
WARNING: the following columns are degenerate, do you really want to include them in your model? This may cause some of them to show unrealistic significance. Consider using the `drop_degenerate` transformer.
['Intercept', 'cos7_tow_weekly']
Grouping of variables
Regex expressions are used to group variables in the breakdown plot. Each group is given as one key of this dictionary. The grouping is done using variable names, and multiple regexes can be given per group: a variable that matches ANY of a group's regexes is assigned to that group. Note that this grouping assumes that regressor variables start with "regressor_". Also note that the order of the groups matters (Python treats the dictionary as ordered in 3.6+): a variable picked up by an earlier group will not be picked up again by a later one. Variables that do not match any group are collected into "OTHER". The following breakdown dictionary should work for many use cases, but users can customize it as needed.
Two alternative dictionaries are included in constants, in the variables DEFAULT_COMPONENTS_REGEX_DICT and DETAILED_SEASONALITY_COMPONENTS_REGEX_DICT.
grouping_regex_patterns_dict = {
    "regressors": "regressor_.*",  # regressor effects
    "AR": ".*lag",  # autoregression component
    "events": ".*events_.*",  # events and holidays
    "seasonality": ".*quarter.*|.*month.*|.*C\(dow.*|.*C\(dow_hr.*|sin.*|cos.*|.*doq.*|.*dom.*|.*str_dow.*|.*is_weekend.*|.*tow_weekly.*",  # seasonality
    "trend": "ct1|ct2|ct_sqrt|ct3|ct_root3|.*changepoint.*",  # long-term trend (includes changepoints)
}
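The sketch below illustrates the grouping logic described above (an illustration of the first-match-wins behavior, not Greykite's internal implementation): each column goes to the first group whose regex matches it, and leftover columns fall into "OTHER".

import re

def group_columns(columns, regex_dict):
    """Assigns each column to the first matching group; leftovers go to OTHER."""
    grouping = {group: [] for group in regex_dict}
    grouping["OTHER"] = []
    for col in columns:
        for group, pattern in regex_dict.items():
            if re.match(pattern, col):
                grouping[group].append(col)
                break
        else:
            grouping["OTHER"].append(col)
    return grouping

# "y_lag1" is claimed by "AR" before any later group can match it.
example_cols = ["regressor_tmin", "y_lag1", "ct1", "sin1_tow_weekly", "Intercept"]
print(group_columns(example_cols, grouping_regex_patterns_dict))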
Creates forecast breakdown
This is generated for the observed data plus the prediction data (available in df). Each component is centered around zero, and the sum of all components equals the forecast.
breakdown_result = trained_estimator.forecast_breakdown(
    grouping_regex_patterns_dict=grouping_regex_patterns_dict,
    center_components=True,
    plt_title="forecast breakdowns")
forecast_breakdown_df = breakdown_result["breakdown_df_with_index_col"]
forecast_components_fig = breakdown_result["breakdown_fig"]
plotly.io.show(forecast_components_fig)
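As a sanity check on the statement above, the component columns should add back up to the forecast. A minimal sketch, assuming the time column is the only non-component column in the breakdown data frame:

# Sums all components (including "Intercept" and, if present, "OTHER");
# the result should equal the forecast at each time point.
component_only_cols = [
    col for col in forecast_breakdown_df.columns if col != time_col]
reconstructed_forecast = forecast_breakdown_df[component_only_cols].sum(axis=1)
print(reconstructed_forecast.head())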
Standardization of the components
Next we provide a more "standardized" view of the breakdown. This is achieved by dividing all components by the mean of the observed absolute values of the metric. By doing so, the intercept is mapped to approximately 1, and changes on the y-axis can be read relative to the average magnitude of the series. The sum of all components at each time point will be equal to "forecast / obs_abs_mean".
column_grouping_result = breakdown_result["column_grouping_result"]
component_cols = list(grouping_regex_patterns_dict.keys())
forecast_breakdown_stdzd_df = forecast_breakdown_df.copy()
obs_abs_mean = abs(df[value_col]).mean()
for col in component_cols + ["Intercept", "OTHER"]:
    if col in forecast_breakdown_stdzd_df.columns:
        forecast_breakdown_stdzd_df[col] /= obs_abs_mean
forecast_breakdown_stdzd_fig = plot_multivariate(
    df=forecast_breakdown_stdzd_df,
    x_col=time_col,
    title="forecast breakdowns divided by mean of abs value of response",
    ylabel="component")
forecast_breakdown_stdzd_fig.update_layout(yaxis_range=[-1.1, 1.1])
plotly.io.show(forecast_breakdown_stdzd_fig)
Breaking down the predictions
Next we perform a prediction and generate a breakdown plot for that prediction.
test_df = data["test_df"].reset_index()
test_df[value_col] = None
print(f"\n test_df: \n {test_df}")
pred_df = trained_estimator.predict(test_df)
forecast_x_mat = trained_estimator.forecast_x_mat
# Generates the breakdown plot
breakdown_result = trained_estimator.forecast_breakdown(
    grouping_regex_patterns_dict=grouping_regex_patterns_dict,
    forecast_x_mat=forecast_x_mat,
    time_values=pred_df[time_col])

breakdown_fig = breakdown_result["breakdown_fig"]
plotly.io.show(breakdown_fig)
Out:
test_df:
index ts count regressor_tmin regressor_tmax regressor_pn regressor_bin_pn regressor_bin_heavy_pn
0 0 2019-08-25 None 15.6 26.7 0.0 0.0 0.0
1 1 2019-08-26 None 17.2 25.0 0.0 0.0 0.0
2 2 2019-08-27 None 17.2 26.7 0.0 0.0 0.0
3 3 2019-08-28 None 18.3 27.8 0.0 0.0 0.0
4 4 2019-08-29 None 16.7 28.9 0.0 0.0 0.0
5 5 2019-08-30 None 14.4 32.8 0.0 0.0 0.0
6 6 2019-08-31 None 17.8 31.1 0.0 0.0 0.0
Demonstrating a scenario-based breakdown
We artificially inject a "bad weather" day into the test data on the second day of the prediction period. This is done to see whether the breakdown plot captures the resulting decrease in the collective regressors' effect. The impact of the change in the regressor values can be clearly seen in the updated breakdown.
# Alters the test data:
# we change the normal weather conditions on the second day to heavy precipitation and low temperature.
test_df["regressor_bin_pn"] = [0, 1, 0, 0, 0, 0, 0]
test_df["regressor_bin_heavy_pn"] = [0, 1, 0, 0, 0, 0, 0]
test_df["regressor_tmin"] = [15, 0, 15, 15, 15, 15, 15]
print(f"altered test_df: \n {test_df}")

# Gets predictions and the design matrix used during predictions.
pred_df = trained_estimator.predict(test_df.reset_index())
forecast_x_mat = trained_estimator.forecast_x_mat

# Generates the breakdown plot.
breakdown_result = trained_estimator.forecast_breakdown(
    grouping_regex_patterns_dict=grouping_regex_patterns_dict,
    forecast_x_mat=forecast_x_mat,
    time_values=pred_df[time_col])
breakdown_fig = breakdown_result["breakdown_fig"]
plotly.io.show(breakdown_fig)
Out:
altered test_df:
index ts count regressor_tmin regressor_tmax regressor_pn regressor_bin_pn regressor_bin_heavy_pn
0 0 2019-08-25 None 15 26.7 0.0 0 0
1 1 2019-08-26 None 0 25.0 0.0 1 1
2 2 2019-08-27 None 15 26.7 0.0 0 0
3 3 2019-08-28 None 15 27.8 0.0 0 0
4 4 2019-08-29 None 15 28.9 0.0 0 0
5 5 2019-08-30 None 15 32.8 0.0 0 0
6 6 2019-08-31 None 15 31.1 0.0 0 0
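To quantify what the injected bad weather did to the forecast (Question 2 above), the two breakdowns can be compared directly. A minimal sketch, assuming the breakdown data frame of the unaltered prediction was saved beforehand as ``breakdown_df_before``:

# Difference in the "regressors" component between the altered and the
# original scenario; a large negative value is expected on the second day.
breakdown_df_after = breakdown_result["breakdown_df_with_index_col"]
regressor_change = (
    breakdown_df_after["regressors"].values
    - breakdown_df_before["regressors"].values)
print(regressor_change)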
Total running time of the script: 1 minute 0.861 seconds