Forecast One By One

A useful feature for short-term forecasts in the Silverkite model family is autoregression. Silverkite has an “auto” option for autoregression, which automatically selects the autoregression lag orders based on the data frequency and forecast horizon. One important rule of this “auto” option is that the minimum autoregression lag order is at least the forecast horizon. For example, if the forecast horizon is 3 on a daily model, the minimum autoregression lag order is set to 3. The “auto” option will not use an order of 2 in this case, because the 3rd day’s forecast would need the 1st day’s observation, which is not available at prediction time. The model can still make predictions with an autoregression lag order smaller than the forecast horizon via simulation, but simulation takes longer to run, so it is not the preferred behavior of the “auto” option.

However, in many cases, using smaller autoregression lag orders gives more accurate forecasts. Note that the only barrier to using an autoregression lag order of 2 in the 3-day forecast model is the 3rd day; the first 2 days can use it freely. Similarly, the 1st day can use an autoregression lag order of 1. In a 3-day forecast, if the accuracy of all 3 days matters, replacing the models for the first 2 days with models that use shorter autoregression lag orders can improve the overall accuracy. The forecast-one-by-one algorithm is designed for this situation.
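
To make the reasoning concrete, the following sketch in plain Python (illustration only, not Greykite code; the candidate lag orders are made up) lists which lag orders are usable without simulation at each step of a 3-day forecast on a daily model:

 # Illustration only (not Greykite code): which lag orders can be used without
 # simulation for each step of a 3-day forecast on a daily model.
 forecast_horizon = 3
 candidate_lags = [1, 2, 3, 7, 14]  # hypothetical lag orders to consider
 for step in range(1, forecast_horizon + 1):
     # Lag j is already observed at prediction time for step i only when j >= i.
     usable = [lag for lag in candidate_lags if lag >= step]
     print(f"forecast step {step}: usable lag orders without simulation = {usable}")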

These observations motivate the forecast-one-by-one algorithm. When the forecast horizon is longer than 1, the algorithm fits multiple models, each using the “auto” option for autoregression. For each model, the “auto” option selects the smallest autoregression lag order available for the forecast steps that model covers, which improves the forecast accuracy for the early steps.

In this example, we will cover how to activate the forecast-one-by-one approach via the ForecastConfig and the Forecaster classes. For a detailed API reference, please see the ForecastConfig and OneByOneEstimator classes.

 import warnings

 warnings.filterwarnings("ignore")

 import plotly
 from greykite.common.data_loader import DataLoader
 from greykite.framework.templates.autogen.forecast_config import ForecastConfig
 from greykite.framework.templates.autogen.forecast_config import ModelComponentsParam
 from greykite.framework.templates.forecaster import Forecaster
 from greykite.framework.templates.model_templates import ModelTemplateEnum
 from greykite.framework.utils.result_summary import summarize_grid_search_results

 # Loads dataset into pandas DataFrame
 dl = DataLoader()
 df = dl.load_peyton_manning()

The forecast-one-by-one option

The forecast-one-by-one option is specified through the forecast_one_by_one parameter in ForecastConfig.

 config = ForecastConfig(
     model_template=ModelTemplateEnum.SILVERKITE.name,
     forecast_horizon=3,
     model_components_param=ModelComponentsParam(
         autoregression=dict(autoreg_dict="auto")
     ),
     forecast_one_by_one=True
 )

The forecast_one_by_one parameter can be specified in the following ways:

  • ``True``: each forecast step is predicted by a separate model. The number of models equals the forecast horizon. In this example, 3 models will be fit for the 3 forecast steps.

  • ``False``: the forecast-one-by-one method is turned off. This is the default behavior and a single model is used for all forecast steps.

  • A list of integers: each integer corresponds to a model and specifies how many forecast steps that model covers. For example, in a 7-day forecast, specifying forecast_one_by_one=[1, 2, 4] will result in 3 models. The first model forecasts the 1st day with forecast horizon 1; the second model forecasts the 2nd-3rd days with forecast horizon 3; the third model forecasts the 4th-7th days with forecast horizon 7. The entries of the list must sum to the forecast horizon. (A configuration using this form is sketched after this list.)

  • An integer ``n``: every model accounts for ``n`` steps, and the last model accounts for the remaining steps (fewer than ``n``) when the forecast horizon is not divisible by ``n``. For example, in a 7-day forecast, specifying forecast_one_by_one=2 will result in 4 models, which is equivalent to forecast_one_by_one=[2, 2, 2, 1].
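
As a concrete example of the list form, the sketch below (reusing the classes imported in the setup code above; config_7d is just a placeholder name) configures a hypothetical 7-day forecast split into 3 models:

 # Sketch of the list form: a 7-day forecast split into 3 models.
 # The entries of [1, 2, 4] sum to the forecast horizon of 7.
 config_7d = ForecastConfig(
     model_template=ModelTemplateEnum.SILVERKITE.name,
     forecast_horizon=7,
     model_components_param=ModelComponentsParam(
         autoregression=dict(autoreg_dict="auto")  # needed for forecast_one_by_one to take effect
     ),
     forecast_one_by_one=[1, 2, 4]  # model 1: day 1; model 2: days 2-3; model 3: days 4-7
 )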

Note

forecast_one_by_one is activated only when the model has parameters that depend on the forecast horizon. Currently, the only such parameter is autoreg_dict="auto". If you do not specify autoreg_dict="auto", the forecast_one_by_one parameter is ignored.

Note

Forecast-one-by-one fits multiple models to increase accuracy, which may cause the training time to increase linearly with the number of models. Please make sure your forecast_one_by_one parameter and forecast horizon result in a reasonable number of models.
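
To sanity-check the number of models a given setting implies before fitting anything, a small helper like the one below (a sketch, not part of Greykite; the function name is made up) mirrors the rules listed above:

 # Sketch (not part of Greykite): count the models implied by forecast_one_by_one.
 def num_one_by_one_models(forecast_one_by_one, forecast_horizon):
     if forecast_one_by_one is True:
         return forecast_horizon           # one model per forecast step
     if forecast_one_by_one is False:
         return 1                          # a single model for all steps
     if isinstance(forecast_one_by_one, int):
         n = forecast_one_by_one
         return -(-forecast_horizon // n)  # ceiling division; last model covers the remainder
     return len(forecast_one_by_one)       # a list: one model per entry

 assert num_one_by_one_models(True, 3) == 3
 assert num_one_by_one_models([1, 2, 4], 7) == 3
 assert num_one_by_one_models(2, 7) == 4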

Next, let’s run the model and look at the result.

 # Runs the forecast
 forecaster = Forecaster()
 result = forecaster.run_forecast_config(
     df=df.iloc[-365:].reset_index(drop=True),  # Uses less data to speed up this example.
     config=config
 )

Out:

Fitting 3 folds for each of 1 candidates, totalling 3 fits

You may see a few warnings like “The future x length is 0, which doesn’t match the model forecast horizon 3, using only the model with the longest forecast horizon for prediction.” This is expected behavior when calculating the training errors: the models are mapped only to the forecast period, not to the training period, so only the last model is used to compute the fitted values on the training period. You don’t need to worry about it.

Everything at the forecast_result level is the same as when forecast-one-by-one is not activated. For example, we can view the cross-validation results in the same way.

 # Summarizes the CV results
 cv_results = summarize_grid_search_results(
     grid_search=result.grid_search,
     decimals=1,
     # The below saves space in the printed output. Remove to show all available metrics and columns.
     cv_report_metrics=None,
     column_order=["rank", "mean_test", "split_test", "mean_train", "split_train", "mean_fit_time", "mean_score_time", "params"])
 cv_results["params"] = cv_results["params"].astype(str)
 cv_results.set_index("params", drop=True, inplace=True)
 cv_results.transpose()

Out:

params                            []
rank_test_MAPE                     1
mean_test_MAPE                   5.9
split_test_MAPE     (11.3, 2.2, 4.2)
mean_train_MAPE                  3.2
split_train_MAPE     (2.8, 3.4, 3.4)
mean_fit_time                    6.7
mean_score_time                  1.9


When you need to access estimator-level attributes, for example, the model summary or component plots, the returned result will be a list of the original type, because multiple models are fit. The model summaries can be accessed in the same way, and you can use an index to get the summary of a single model.

 # Gets the model summary list
 one_by_one_estimator = result.model[-1]
 summaries = one_by_one_estimator.summary()
 # Prints the model summary for 1st model only
 print(summaries[0])

Out:

================================ Model Summary =================================

Number of observations: 367,   Number of features: 98
Method: Ridge regression
Number of nonzero features: 98
Regularization parameter: 0.3039

Residuals:
         Min           1Q       Median           3Q          Max
     -0.8651      -0.1649     -0.04823      0.09419        3.082

             Pred_col   Estimate Std. Err Pr(>)_boot sig. code                95%CI
            Intercept        6.3   0.1573     <2e-16       ***       (5.987, 6.575)
  events_C...New Year      0.371   0.2837      0.180              (-0.1302, 0.8009)
  events_C...w Year-1     -0.188   0.2248      0.424              (-0.5412, 0.2496)
  events_C...w Year-2    -0.1857   0.1581      0.228              (-0.4615, 0.1513)
  events_C...w Year+1   -0.09105   0.2123      0.602              (-0.3709, 0.4397)
  events_C...w Year+2    -0.1242   0.2224      0.596               (-0.4655, 0.454)
 events_Christmas Day    -0.4278    0.256      0.044         *        (-0.7431, 0.)
  events_C...as Day-1    -0.5503   0.2989      0.006        **        (-0.8153, 0.)
  events_C...as Day-2     -0.112  0.09064      0.156              (-0.2811, 0.0216)
  events_C...as Day+1    -0.1558   0.1182      0.128                  (-0.3614, 0.)
  events_C...as Day+2     0.7455   0.3911     <2e-16       ***            (0., 1.0)
  events_E...Ireland]   -0.01245  0.08503      0.568              (-0.1982, 0.1693)
  events_E...eland]-1    -0.0711   0.0658      0.190             (-0.1943, 0.02119)
  events_E...eland]-2   -0.00544   0.0464      0.572              (-0.1045, 0.1091)
  events_E...eland]+1   -0.06126  0.07196      0.266              (-0.211, 0.07553)
  events_E...eland]+2    0.04205  0.05783      0.326              (-0.06463, 0.178)
   events_Good Friday     0.1017  0.09566      0.222             (-0.09518, 0.2974)
 events_Good Friday-1    -0.2085   0.1381      0.074         .        (-0.4209, 0.)
 events_Good Friday-2    -0.0396   0.0763      0.424              (-0.1994, 0.1364)
 events_Good Friday+1   -0.00544   0.0464      0.572              (-0.1045, 0.1091)
 events_Good Friday+2    -0.0711   0.0658      0.190             (-0.1943, 0.02119)
  events_I...ence Day     0.1564   0.1036      0.090         .     (-0.0354, 0.324)
  events_I...ce Day-1    -0.2586   0.1508      0.068         .        (-0.5218, 0.)
  events_I...ce Day-2   -0.02023  0.09141      0.850                (-0.227, 0.156)
  events_I...ce Day+1    -0.2072   0.1107      0.038         *        (-0.3907, 0.)
  events_I...ce Day+2    -0.1595  0.09581      0.064         .        (-0.3351, 0.)
     events_Labor Day   -0.07172   0.1643      0.594              (-0.4032, 0.2276)
   events_Labor Day-1  -0.001754   0.1256      0.994              (-0.2703, 0.2475)
   events_Labor Day-2    0.09462  0.09004      0.228              (-0.0704, 0.2568)
   events_Labor Day+1    0.01312  0.09496      0.780              (-0.1889, 0.2024)
   events_Labor Day+2    0.05622   0.2096      0.694              (-0.3532, 0.4145)
  events_Memorial Day    -0.5502   0.3015      0.008        **        (-0.8686, 0.)
  events_M...al Day-1    -0.2851   0.1819      0.070         .         (-0.552, 0.)
  events_M...al Day-2    0.01612  0.08031      0.536              (-0.1611, 0.1894)
  events_M...al Day+1   -0.08116  0.08703      0.248              (-0.2655, 0.0494)
  events_M...al Day+2    0.04334  0.06804      0.386             (-0.09907, 0.1799)
 events_New Years Day    -0.3467   0.2061      0.034         *        (-0.6194, 0.)
  events_N...rs Day-1   -0.06156   0.1044      0.400              (-0.3005, 0.1145)
  events_N...rs Day-2    -0.1653   0.1217      0.120              (-0.369, 0.02099)
  events_N...rs Day+1      0.346   0.1822      0.008        **         (0., 0.5238)
  events_N...rs Day+2  -0.008239   0.1054      0.618              (-0.2291, 0.2046)
         events_Other     0.1747   0.1042      0.080         .  (-0.006864, 0.3977)
       events_Other-1   -0.05532  0.06389      0.412             (-0.1654, 0.07621)
       events_Other-2    0.03676  0.05258      0.478             (-0.05232, 0.1486)
       events_Other+1    0.01104  0.06219      0.842              (-0.1002, 0.1467)
       events_Other+2   -0.02959  0.05906      0.606             (-0.1434, 0.07897)
  events_Thanksgiving    -0.2239   0.1338      0.058         .        (-0.4101, 0.)
  events_T...giving-1   -0.07815  0.08277      0.274             (-0.2592, 0.07277)
  events_T...giving-2    -0.1601    0.113      0.104                   (-0.359, 0.)
  events_T...giving+1    0.07259  0.09596      0.324              (-0.1482, 0.2542)
  events_T...giving+2    -0.1017  0.09238      0.220             (-0.2719, 0.06596)
  events_Veterans Day    -0.2146   0.1522      0.088         .          (-0.45, 0.)
  events_V...ns Day-1    -0.2913   0.1687      0.028         *        (-0.4875, 0.)
  events_V...ns Day-2    0.09101   0.1233      0.332               (-0.1635, 0.318)
  events_V...ns Day+1    0.01912  0.06997      0.538              (-0.1533, 0.1548)
  events_V...ns Day+2    -0.2864   0.1711      0.036         *        (-0.4963, 0.)
        str_dow_2-Tue    0.03991  0.03905      0.300              (-0.0311, 0.1285)
        str_dow_3-Wed    0.06449   0.0412      0.118              (-0.0179, 0.1466)
        str_dow_4-Thu    0.06466  0.04736      0.180             (-0.02395, 0.1665)
        str_dow_5-Fri   -0.01393   0.0355      0.690            (-0.08899, 0.05008)
        str_dow_6-Sat    -0.1267  0.03962      0.002        **   (-0.2169, -0.0566)
        str_dow_7-Sun   -0.07119  0.05375      0.192             (-0.1664, 0.03438)
                  ct1     0.3831   0.1406      0.004        **     (0.1228, 0.6907)
       is_weekend:ct1    0.05423  0.08391      0.502              (-0.1279, 0.2225)
    str_dow_2-Tue:ct1    -0.2062   0.1796      0.262              (-0.5937, 0.1091)
    str_dow_3-Wed:ct1   -0.06916  0.08475      0.424             (-0.2335, 0.09851)
    str_dow_4-Thu:ct1    -0.1697   0.1255      0.194              (-0.416, 0.05266)
    str_dow_5-Fri:ct1    0.06403   0.1149      0.602              (-0.1524, 0.2853)
    str_dow_6-Sat:ct1    -0.1465  0.09957      0.132              (-0.3232, 0.0653)
    str_dow_7-Sun:ct1     0.2003    0.157      0.210              (-0.1446, 0.4681)
  ct1:sin1_tow_weekly    -0.1764  0.06479      0.008        **  (-0.3095, -0.05105)
  ct1:cos1_tow_weekly     0.4485   0.2291      0.054         .    (-0.0166, 0.8943)
  ct1:sin2_tow_weekly    -0.1274  0.07885      0.112               (-0.2692, 0.046)
  ct1:cos2_tow_weekly     0.4418   0.1931      0.022         *    (0.08819, 0.8368)
      sin1_tow_weekly     0.1576  0.04655      0.002        **    (0.07076, 0.2493)
      cos1_tow_weekly  -0.004555  0.08314      0.954              (-0.1506, 0.1608)
      sin2_tow_weekly    -0.0185  0.04752      0.716             (-0.1143, 0.07516)
      cos2_tow_weekly    0.07223  0.07231      0.308              (-0.0585, 0.2326)
      sin3_tow_weekly   -0.01263  0.02602      0.632            (-0.06244, 0.03726)
      cos3_tow_weekly    0.01097  0.04597      0.824             (-0.0819, 0.09566)
      sin4_tow_weekly    0.01263  0.02602      0.632            (-0.03726, 0.06244)
      cos4_tow_weekly    0.01097  0.04597      0.824             (-0.0819, 0.09566)
   sin1_toq_quarterly    0.01325  0.06044      0.842              (-0.1182, 0.1207)
   cos1_toq_quarterly   -0.09165  0.06605      0.182             (-0.2226, 0.03126)
   sin2_toq_quarterly   0.006561  0.05822      0.886               (-0.1054, 0.121)
   cos2_toq_quarterly   0.007935  0.05742      0.914             (-0.09494, 0.1241)
   sin3_toq_quarterly   -0.08042  0.04741      0.084         .   (-0.1806, 0.01156)
   cos3_toq_quarterly  -0.005312  0.06691      0.920              (-0.1319, 0.1294)
   sin4_toq_quarterly   -0.04617  0.05473      0.370             (-0.1509, 0.06724)
   cos4_toq_quarterly   -0.06097  0.06059      0.322             (-0.1727, 0.05537)
   sin5_toq_quarterly   -0.07137  0.07079      0.306              (-0.1947, 0.0775)
   cos5_toq_quarterly    0.03116  0.05056      0.556              (-0.0705, 0.1266)
               y_lag1      3.005   0.4763     <2e-16       ***       (1.916, 3.735)
               y_lag2    -0.3528   0.3743      0.384              (-0.9337, 0.4773)
               y_lag3    0.09896    0.285      0.718              (-0.4472, 0.5924)
     y_avglag_7_14_21      0.737   0.2793      0.006        **       (0.2405, 1.36)
      y_avglag_1_to_7     0.2508   0.3122      0.414              (-0.3319, 0.8551)
     y_avglag_8_to_14      0.191   0.2009      0.344              (-0.2071, 0.5715)
Signif. Code: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Multiple R-squared: 0.7862,   Adjusted R-squared: 0.7362
F-statistic: 14.885 on 69 and 296 DF,   p-value: 1.110e-16
Model AIC: 1503.6,   model BIC: 1778.5

WARNING: the condition number is large, 9.02e+03. This might indicate that there are strong multicollinearity or other numerical problems.
WARNING: the F-ratio and its p-value on regularized methods might be misleading, they are provided only for reference purposes.

We can access the component plots in a similar way.

 # Gets the fig list
 figs = one_by_one_estimator.plot_components()
 # Shows the component plot for 1st model only
 plotly.io.show(figs[0])
