All Templates

class greykite.framework.templates.forecaster.Forecaster(model_template_enum: Type[enum.Enum] = <enum 'ModelTemplateEnum'>, default_model_template_name: str = 'SILVERKITE')[source]

The main entry point to create a forecast.

Call the run_forecast_config method to create a forecast. It takes a dataset and forecast configuration parameters.

Notes

This class can create forecasts using any of the model templates in ModelTemplateEnum. Model templates provide suitable default values for the available forecast estimators depending on the data characteristics.

The model template is selected via the config.model_template parameter to run_forecast_config.

To add your own custom algorithms or template classes in our framework, pass model_template_enum and default_model_template_name to the constructor.
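The extension mechanism can be sketched with the standard library alone. All names below (`MyTemplateInterface`, `MyModelTemplate`, `MyModelTemplateEnum`, `get_template_class`) are hypothetical stand-ins for the real classes; an actual extension would implement TemplateInterface and pass the enum and default name to the Forecaster constructor.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Type


class MyTemplateInterface:
    """Stand-in for a class implementing TemplateInterface."""


@dataclass
class MyModelTemplate:
    """Mirrors ModelTemplate: a template class plus a description."""
    template_class: Type[MyTemplateInterface]
    description: str


class MyModelTemplateEnum(Enum):
    # Member names are the template names; values are ModelTemplate-like.
    MY_TEMPLATE = MyModelTemplate(
        template_class=MyTemplateInterface,
        description="A custom template.",
    )


def get_template_class(enum_cls, name: Optional[str], default_name: str):
    """Rough sketch of how a template name (or the default) resolves to a class."""
    member = enum_cls[name if name is not None else default_name]
    return member.value.template_class


# Resolving by explicit name or by falling back to the default.
cls_explicit = get_template_class(MyModelTemplateEnum, "MY_TEMPLATE", "MY_TEMPLATE")
cls_default = get_template_class(MyModelTemplateEnum, None, "MY_TEMPLATE")
```

This mimics what passing `model_template_enum` and `default_model_template_name` to the constructor achieves with the real classes.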

model_template_enum: Type[Enum]

The available template names. An Enum class where names are template names, and values are of type ModelTemplate.

default_model_template_name: str

The default template name if not provided by config.model_template. Should be a name in model_template_enum. Used by __get_template_class.

template_class: Optional[Type[TemplateInterface]]

Template class used. Must implement TemplateInterface and be one of the classes in self.model_template_enum. Available for debugging purposes. Set by run_forecast_config.

template: Optional[TemplateInterface]

Instance of template_class used to run the forecast. See the docstring of the specific template class used.

Available for debugging purposes. Set by run_forecast_config.

config: Optional[ForecastConfig]

ForecastConfig passed to the template class. Set by run_forecast_config.

pipeline_params: Optional[Dict]

Parameters used to call forecast_pipeline. Available for debugging purposes. Set by run_forecast_config.

forecast_result: Optional[ForecastResult]

The forecast result, returned by forecast_pipeline. Set by run_forecast_config.

apply_forecast_config(df: pandas.core.frame.DataFrame, config: Optional[greykite.framework.templates.autogen.forecast_config.ForecastConfig] = None) → Dict[source]

Fetches pipeline parameters from df and config, but does not run the pipeline to generate a forecast.

run_forecast_config calls this function and also runs the forecast pipeline.

Available for debugging purposes to check pipeline parameters before running a forecast. Sets these attributes for debugging:

  • pipeline_params : the parameters passed to forecast_pipeline.

  • template_class, template : the template class used to generate the pipeline parameters.

  • config : the ForecastConfig passed as input to template class, to translate into pipeline parameters.

Provides basic validation of the compatibility of config.model_template with config.model_components_param.

Parameters
  • df (pandas.DataFrame) – Timeseries data to forecast. Contains columns [time_col, value_col] and optional regressor columns. Regressor columns should include future values for prediction.

  • config (ForecastConfig or None) – Config object for template class to use. See ForecastConfig.

Returns

pipeline_params – Input to forecast_pipeline.

Return type

dict [str, any]

run_forecast_config(df: pandas.core.frame.DataFrame, config: Optional[greykite.framework.templates.autogen.forecast_config.ForecastConfig] = None) → greykite.framework.pipeline.pipeline.ForecastResult[source]

Creates a forecast from input data and config. The result is also stored as self.forecast_result.

Parameters
  • df (pandas.DataFrame) – Timeseries data to forecast. Contains columns [time_col, value_col] and optional regressor columns. Regressor columns should include future values for prediction.

  • config (ForecastConfig) – Config object for template class to use. See ForecastConfig.

Returns

forecast_result – Forecast result, an object of type ForecastResult.

The output of forecast_pipeline, according to the df and config configuration parameters.

Return type

ForecastResult

run_forecast_json(df: pandas.core.frame.DataFrame, json_str: str = '{}') → greykite.framework.pipeline.pipeline.ForecastResult[source]

Calls forecast_pipeline according to the json_str configuration parameters.

Parameters
  • df (pandas.DataFrame) – Timeseries data to forecast. Contains columns [time_col, value_col] and optional regressor columns. Regressor columns should include future values for prediction.

  • json_str (str) – JSON string of the config object for Forecaster to use. See ForecastConfig.

Returns

forecast_result – Forecast result. The output of forecast_pipeline, called using the template class with specified configuration. See ForecastResult for details.

Return type

ForecastResult
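A JSON configuration mirrors the ForecastConfig fields by name. The column names, horizon, and coverage below are illustrative values, and building the string with json.dumps avoids quoting mistakes:

```python
import json

# Keys follow the ForecastConfig / MetadataParam field names; the
# specific values here are examples only.
config = {
    "model_template": "SILVERKITE",
    "forecast_horizon": 30,        # periods ahead; must be > 0
    "coverage": 0.95,              # prediction band coverage
    "metadata_param": {
        "time_col": "ts",          # illustrative column names
        "value_col": "y",
        "freq": "D",
    },
}
json_str = json.dumps(config)

# With a Forecaster instance and a suitable DataFrame df (not shown):
#   result = forecaster.run_forecast_json(df=df, json_str=json_str)
```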

class greykite.framework.templates.model_templates.ModelTemplate(template_class: Type[greykite.framework.templates.template_interface.TemplateInterface], description: str)[source]

A model template consists of a template class, a description, and a name.

This class holds the template class and description. The model template name is the member name in greykite.framework.templates.model_templates.ModelTemplateEnum.

template_class: Type[greykite.framework.templates.template_interface.TemplateInterface]

A class that implements the template interface.

description: str

A description of the model template.

class greykite.framework.templates.model_templates.ModelTemplateEnum(value)[source]

Available model templates.

Enumerates the possible values for the model_template attribute of ForecastConfig.

The value has type ModelTemplate which contains:

  • the template class that recognizes the model_template. Template classes implement the TemplateInterface interface.

  • a plain-text description of what the model_template is for.

The description should be unique across enum members. The template class can be shared, because a template class can recognize multiple model templates. For example, the same template class may use different default values for ForecastConfig.model_components_param depending on ForecastConfig.model_template.

Notes

The template classes SilverkiteTemplate and ProphetTemplate recognize only the model templates explicitly enumerated here.

However, the SimpleSilverkiteTemplate template class allows additional model templates to be specified generically. Any object of type SimpleSilverkiteTemplateOptions can be used as the model_template. These generic model templates are valid but not enumerated here.

SILVERKITE = ModelTemplate(template_class=<class 'greykite.framework.templates.simple_silverkite_template.SimpleSilverkiteTemplate'>, description='Silverkite model with automatic growth, seasonality, holidays, and interactions. Best for hourly and daily frequencies. Uses `SimpleSilverkiteEstimator`.')

Silverkite model with automatic growth, seasonality, holidays, and interactions. Best for hourly and daily frequencies. Uses SimpleSilverkiteEstimator.

SILVERKITE_DAILY_90 = ModelTemplate(template_class=<class 'greykite.framework.templates.simple_silverkite_template.SimpleSilverkiteTemplate'>, description='Silverkite model specifically tuned for daily data with 90 days forecast horizon. Contains 4 hyperparameter combinations for grid search. Uses `SimpleSilverkiteEstimator`.')

Silverkite model specifically tuned for daily data with 90 days forecast horizon. Contains 4 hyperparameter combinations for grid search. Uses SimpleSilverkiteEstimator.

SILVERKITE_WEEKLY = ModelTemplate(template_class=<class 'greykite.framework.templates.simple_silverkite_template.SimpleSilverkiteTemplate'>, description='Silverkite model specifically tuned for weekly data. Contains 4 hyperparameter combinations for grid search. Uses `SimpleSilverkiteEstimator`.')

Silverkite model specifically tuned for weekly data. Contains 4 hyperparameter combinations for grid search. Uses SimpleSilverkiteEstimator.

SILVERKITE_HOURLY_1 = ModelTemplate(template_class=<class 'greykite.framework.templates.simple_silverkite_template.SimpleSilverkiteTemplate'>, description='Silverkite model specifically tuned for hourly data with 1 hour forecast horizon. Contains 4 hyperparameter combinations for grid search. Uses `SimpleSilverkiteEstimator`.')

Silverkite model specifically tuned for hourly data with 1 hour forecast horizon. Contains 4 hyperparameter combinations for grid search. Uses SimpleSilverkiteEstimator.

SILVERKITE_HOURLY_24 = ModelTemplate(template_class=<class 'greykite.framework.templates.simple_silverkite_template.SimpleSilverkiteTemplate'>, description='Silverkite model specifically tuned for hourly data with 24 hours (1 day) forecast horizon. Contains 4 hyperparameter combinations for grid search. Uses `SimpleSilverkiteEstimator`.')

Silverkite model specifically tuned for hourly data with 24 hours (1 day) forecast horizon. Contains 4 hyperparameter combinations for grid search. Uses SimpleSilverkiteEstimator.

SILVERKITE_HOURLY_168 = ModelTemplate(template_class=<class 'greykite.framework.templates.simple_silverkite_template.SimpleSilverkiteTemplate'>, description='Silverkite model specifically tuned for hourly data with 168 hours (1 week) forecast horizon. Contains 4 hyperparameter combinations for grid search. Uses `SimpleSilverkiteEstimator`.')

Silverkite model specifically tuned for hourly data with 168 hours (1 week) forecast horizon. Contains 4 hyperparameter combinations for grid search. Uses SimpleSilverkiteEstimator.

SILVERKITE_HOURLY_336 = ModelTemplate(template_class=<class 'greykite.framework.templates.simple_silverkite_template.SimpleSilverkiteTemplate'>, description='Silverkite model specifically tuned for hourly data with 336 hours (2 weeks) forecast horizon. Contains 4 hyperparameter combinations for grid search. Uses `SimpleSilverkiteEstimator`.')

Silverkite model specifically tuned for hourly data with 336 hours (2 weeks) forecast horizon. Contains 4 hyperparameter combinations for grid search. Uses SimpleSilverkiteEstimator.

SILVERKITE_EMPTY = ModelTemplate(template_class=<class 'greykite.framework.templates.simple_silverkite_template.SimpleSilverkiteTemplate'>, description='Silverkite model with no component included by default. Fits only a constant intercept. Select and customize this template to add only the terms you want. Uses `SimpleSilverkiteEstimator`.')

Silverkite model with no component included by default. Fits only a constant intercept. Select and customize this template to add only the terms you want. Uses SimpleSilverkiteEstimator.

SK = ModelTemplate(template_class=<class 'greykite.framework.templates.silverkite_template.SilverkiteTemplate'>, description='Silverkite model with low-level interface. For flexible model tuning if SILVERKITE template is not flexible enough. Not for use out-of-the-box: customization is needed for good performance. Uses `SilverkiteEstimator`.')

Silverkite model with low-level interface. For flexible model tuning if SILVERKITE template is not flexible enough. Not for use out-of-the-box: customization is needed for good performance. Uses SilverkiteEstimator.

PROPHET = ModelTemplate(template_class=<class 'greykite.framework.templates.prophet_template.ProphetTemplate'>, description='Prophet model with growth, seasonality, holidays, additional regressors and prediction intervals. Uses `ProphetEstimator`.')

Prophet model with growth, seasonality, holidays, additional regressors and prediction intervals. Uses ProphetEstimator.

class greykite.framework.templates.autogen.forecast_config.ForecastConfig(computation_param: Optional[greykite.framework.templates.autogen.forecast_config.ComputationParam] = None, coverage: Optional[float] = None, evaluation_metric_param: Optional[greykite.framework.templates.autogen.forecast_config.EvaluationMetricParam] = None, evaluation_period_param: Optional[greykite.framework.templates.autogen.forecast_config.EvaluationPeriodParam] = None, forecast_horizon: Optional[int] = None, metadata_param: Optional[greykite.framework.templates.autogen.forecast_config.MetadataParam] = None, model_components_param: Optional[Union[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam, List[Optional[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam]]]] = None, model_template: Optional[Union[str, dataclasses.dataclass, List[Union[str, dataclasses.dataclass]]]] = None)[source]

Config for providing parameters to the Forecast library

computation_param: Optional[greykite.framework.templates.autogen.forecast_config.ComputationParam] = None

How to compute the result. See ComputationParam.

coverage: Optional[float] = None

Intended coverage of the prediction bands (0.0 to 1.0). If None, the upper/lower predictions are not returned.

evaluation_metric_param: Optional[greykite.framework.templates.autogen.forecast_config.EvaluationMetricParam] = None

What metrics to evaluate. See EvaluationMetricParam.

evaluation_period_param: Optional[greykite.framework.templates.autogen.forecast_config.EvaluationPeriodParam] = None

How to split data for evaluation. See EvaluationPeriodParam.

forecast_horizon: Optional[int] = None

Number of periods to forecast into the future. Must be > 0. If None, default is determined from input data frequency.

metadata_param: Optional[greykite.framework.templates.autogen.forecast_config.MetadataParam] = None

Information about the input data. See MetadataParam.

model_components_param: Optional[Union[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam, List[Optional[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam]]]] = None

Parameters to tune the model. Typically a single ModelComponentsParam, but the SimpleSilverkiteTemplate template also allows a list of ModelComponentsParam for grid search. A single ModelComponentsParam corresponds to one grid, and a list corresponds to a list of grids. See ModelComponentsParam.

model_template: Optional[Union[str, dataclasses.dataclass, List[Union[str, dataclasses.dataclass]]]] = None

Name of the model template. Typically a single string, but the SimpleSilverkiteTemplate template also allows a list of strings for grid search. See ModelTemplateEnum for valid names.

class greykite.framework.templates.autogen.forecast_config.MetadataParam(anomaly_info: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, date_format: Optional[str] = None, freq: Optional[str] = None, time_col: Optional[str] = None, train_end_date: Optional[str] = None, value_col: Optional[str] = None)[source]

Properties of the input data

anomaly_info: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None

Anomaly adjustment info. Anomalies in df are corrected before any forecasting is done. If None, no adjustments are made. See forecast_pipeline.
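Conceptually, anomaly adjustment masks known-bad rows so the model does not fit to them. A minimal sketch of the idea (greykite's actual adjustment, configured via anomaly_info, is richer; see forecast_pipeline):

```python
# Rows flagged as anomalies are masked before fitting (illustrative
# logic only, not greykite's implementation).
values = [10.0, 11.0, 250.0, 12.0]      # index 2 is a known anomaly
anomaly_indices = {2}

adjusted = [
    None if i in anomaly_indices else v  # None plays the role of NaN here
    for i, v in enumerate(values)
]
```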

date_format: Optional[str] = None

See forecast_pipeline.

freq: Optional[str] = None

See forecast_pipeline.

time_col: Optional[str] = None

See forecast_pipeline.

train_end_date: Optional[str] = None

Last date to use for fitting the model. Forecasts are generated after this date. If None, it is set to the last date with a non-null value in value_col of df. See forecast_pipeline.

value_col: Optional[str] = None

See forecast_pipeline.

class greykite.framework.templates.autogen.forecast_config.EvaluationMetricParam(agg_func: Optional[Callable] = None, agg_periods: Optional[int] = None, cv_report_metrics: Optional[Union[str, List[str]]] = None, cv_selection_metric: Optional[str] = None, null_model_params: Optional[Dict[str, Any]] = None, relative_error_tolerance: Optional[float] = None)[source]

What metrics to evaluate

agg_func: Optional[Callable] = None

See forecast_pipeline.

agg_periods: Optional[int] = None

See forecast_pipeline.

cv_report_metrics: Optional[Union[str, List[str]]] = None

See score_func in forecast_pipeline.

cv_selection_metric: Optional[str] = None

See score_func in forecast_pipeline.

null_model_params: Optional[Dict[str, Any]] = None

See forecast_pipeline.

relative_error_tolerance: Optional[float] = None

See forecast_pipeline.

class greykite.framework.templates.autogen.forecast_config.EvaluationPeriodParam(cv_expanding_window: Optional[bool] = None, cv_horizon: Optional[int] = None, cv_max_splits: Optional[int] = None, cv_min_train_periods: Optional[int] = None, cv_periods_between_splits: Optional[int] = None, cv_periods_between_train_test: Optional[int] = None, cv_use_most_recent_splits: Optional[bool] = None, periods_between_train_test: Optional[int] = None, test_horizon: Optional[int] = None)[source]

How to split data for evaluation.

cv_expanding_window: Optional[bool] = None

See forecast_pipeline.

cv_horizon: Optional[int] = None

See forecast_pipeline.

cv_max_splits: Optional[int] = None

See forecast_pipeline.

cv_min_train_periods: Optional[int] = None

See forecast_pipeline.

cv_periods_between_splits: Optional[int] = None

See forecast_pipeline.

cv_periods_between_train_test: Optional[int] = None

See forecast_pipeline.

cv_use_most_recent_splits: Optional[bool] = None

See forecast_pipeline.

periods_between_train_test: Optional[int] = None

See forecast_pipeline.

test_horizon: Optional[int] = None

See forecast_pipeline.

class greykite.framework.templates.autogen.forecast_config.ModelComponentsParam(autoregression: Optional[Dict[str, Any]] = None, changepoints: Optional[Dict[str, Any]] = None, custom: Optional[Dict[str, Any]] = None, events: Optional[Dict[str, Any]] = None, growth: Optional[Dict[str, Any]] = None, hyperparameter_override: Optional[Union[Dict, List[Optional[Dict]]]] = None, regressors: Optional[Dict[str, Any]] = None, seasonality: Optional[Dict[str, Any]] = None, uncertainty: Optional[Dict[str, Any]] = None)[source]

Parameters to tune the model.

autoregression: Optional[Dict[str, Any]] = None

For modeling autoregression, see template for details

changepoints: Optional[Dict[str, Any]] = None

For modeling changepoints, see template for details

custom: Optional[Dict[str, Any]] = None

Additional parameters used by template, see template for details

events: Optional[Dict[str, Any]] = None

For modeling events, see template for details

growth: Optional[Dict[str, Any]] = None

For modeling growth (trend), see template for details

hyperparameter_override: Optional[Union[Dict, List[Optional[Dict]]]] = None

After the above model components are used to create a hyperparameter grid, the result is updated by this dictionary, to create new keys or override existing ones. Allows for complete customization of the grid search.
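The override behaves like a dictionary update applied on top of the generated grid. A rough sketch of these semantics, where the grid keys shown are illustrative rather than the exact keys a template produces:

```python
# Hypothetical hyperparameter grid generated from the model components.
grid = {
    "estimator__yearly_seasonality": ["auto"],
    "estimator__growth_term": ["linear"],
}

# hyperparameter_override replaces existing keys or creates new ones.
hyperparameter_override = {
    "estimator__growth_term": ["linear", "quadratic"],              # override
    "estimator__fit_algorithm_dict": [{"fit_algorithm": "ridge"}],  # new key
}

updated_grid = {**grid, **hyperparameter_override}
```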

regressors: Optional[Dict[str, Any]] = None

For modeling regressors, see template for details

seasonality: Optional[Dict[str, Any]] = None

For modeling seasonality, see template for details

uncertainty: Optional[Dict[str, Any]] = None

For modeling uncertainty, see template for details

class greykite.framework.templates.autogen.forecast_config.ComputationParam(hyperparameter_budget: Optional[int] = None, n_jobs: Optional[int] = None, verbose: Optional[int] = None)[source]

How to compute the result.

hyperparameter_budget: Optional[int] = None

See forecast_pipeline.

n_jobs: Optional[int] = None

See forecast_pipeline.

verbose: Optional[int] = None

See forecast_pipeline.

Silverkite Template

class greykite.framework.templates.simple_silverkite_template.SimpleSilverkiteTemplate(constants: greykite.framework.templates.simple_silverkite_template_config.SimpleSilverkiteTemplateConstants = SimpleSilverkiteTemplateConstants(COMMON_MODELCOMPONENTPARAM_PARAMETERS={'SEAS': {'HOURLY': {'LT': {'yearly_seasonality': 8, 'quarterly_seasonality': 0, 'monthly_seasonality': 0, 'weekly_seasonality': 3, 'daily_seasonality': 5}, 'NM': {'yearly_seasonality': 15, 'quarterly_seasonality': 0, 'monthly_seasonality': 0, 'weekly_seasonality': 4, 'daily_seasonality': 8}, 'HV': {'yearly_seasonality': 25, 'quarterly_seasonality': 0, 'monthly_seasonality': 0, 'weekly_seasonality': 6, 'daily_seasonality': 12}, 'LTQM': {'yearly_seasonality': 8, 'quarterly_seasonality': 2, 'monthly_seasonality': 2, 'weekly_seasonality': 3, 'daily_seasonality': 5}, 'NMQM': {'yearly_seasonality': 15, 'quarterly_seasonality': 3, 'monthly_seasonality': 3, 'weekly_seasonality': 4, 'daily_seasonality': 8}, 'HVQM': {'yearly_seasonality': 25, 'quarterly_seasonality': 4, 'monthly_seasonality': 4, 'weekly_seasonality': 6, 'daily_seasonality': 12}, 'NONE': {'yearly_seasonality': 0, 'quarterly_seasonality': 0, 'monthly_seasonality': 0, 'weekly_seasonality': 0, 'daily_seasonality': 0}}, 'DAILY': {'LT': {'yearly_seasonality': 8, 'quarterly_seasonality': 0, 'monthly_seasonality': 0, 'weekly_seasonality': 3, 'daily_seasonality': 0}, 'NM': {'yearly_seasonality': 15, 'quarterly_seasonality': 0, 'monthly_seasonality': 0, 'weekly_seasonality': 3, 'daily_seasonality': 0}, 'HV': {'yearly_seasonality': 25, 'quarterly_seasonality': 0, 'monthly_seasonality': 0, 'weekly_seasonality': 4, 'daily_seasonality': 0}, 'LTQM': {'yearly_seasonality': 8, 'quarterly_seasonality': 3, 'monthly_seasonality': 2, 'weekly_seasonality': 3, 'daily_seasonality': 0}, 'NMQM': {'yearly_seasonality': 15, 'quarterly_seasonality': 4, 'monthly_seasonality': 4, 'weekly_seasonality': 3, 'daily_seasonality': 0}, 'HVQM': {'yearly_seasonality': 25, 
'quarterly_seasonality': 6, 'monthly_seasonality': 4, 'weekly_seasonality': 4, 'daily_seasonality': 0}, 'NONE': {'yearly_seasonality': 0, 'quarterly_seasonality': 0, 'monthly_seasonality': 0, 'weekly_seasonality': 0, 'daily_seasonality': 0}}, 'WEEKLY': {'LT': {'yearly_seasonality': 8, 'quarterly_seasonality': 0, 'monthly_seasonality': 0, 'weekly_seasonality': 0, 'daily_seasonality': 0}, 'NM': {'yearly_seasonality': 15, 'quarterly_seasonality': 0, 'monthly_seasonality': 0, 'weekly_seasonality': 0, 'daily_seasonality': 0}, 'HV': {'yearly_seasonality': 25, 'quarterly_seasonality': 0, 'monthly_seasonality': 0, 'weekly_seasonality': 0, 'daily_seasonality': 0}, 'LTQM': {'yearly_seasonality': 8, 'quarterly_seasonality': 2, 'monthly_seasonality': 2, 'weekly_seasonality': 0, 'daily_seasonality': 0}, 'NMQM': {'yearly_seasonality': 15, 'quarterly_seasonality': 3, 'monthly_seasonality': 3, 'weekly_seasonality': 0, 'daily_seasonality': 0}, 'HVQM': {'yearly_seasonality': 25, 'quarterly_seasonality': 4, 'monthly_seasonality': 4, 'weekly_seasonality': 0, 'daily_seasonality': 0}, 'NONE': {'yearly_seasonality': 0, 'quarterly_seasonality': 0, 'monthly_seasonality': 0, 'weekly_seasonality': 0, 'daily_seasonality': 0}}}, 'GR': {'LINEAR': {'growth_term': 'linear'}, 'NONE': {'growth_term': None}}, 'CP': {'HOURLY': {'LT': {'method': 'auto', 'resample_freq': 'D', 'regularization_strength': 0.6, 'potential_changepoint_distance': '7D', 'no_changepoint_distance_from_end': '30D', 'yearly_seasonality_order': 15, 'yearly_seasonality_change_freq': None}, 'NM': {'method': 'auto', 'resample_freq': 'D', 'regularization_strength': 0.5, 'potential_changepoint_distance': '15D', 'no_changepoint_distance_from_end': '30D', 'yearly_seasonality_order': 15, 'yearly_seasonality_change_freq': '365D'}, 'HV': {'method': 'auto', 'resample_freq': 'D', 'regularization_strength': 0.3, 'potential_changepoint_distance': '15D', 'no_changepoint_distance_from_end': '30D', 'yearly_seasonality_order': 15, 
'yearly_seasonality_change_freq': '365D'}, 'NONE': None}, 'DAILY': {'LT': {'method': 'auto', 'resample_freq': '7D', 'regularization_strength': 0.6, 'potential_changepoint_distance': '15D', 'no_changepoint_distance_from_end': '90D', 'yearly_seasonality_order': 15, 'yearly_seasonality_change_freq': None}, 'NM': {'method': 'auto', 'resample_freq': '7D', 'regularization_strength': 0.5, 'potential_changepoint_distance': '15D', 'no_changepoint_distance_from_end': '180D', 'yearly_seasonality_order': 15, 'yearly_seasonality_change_freq': '365D'}, 'HV': {'method': 'auto', 'resample_freq': '7D', 'regularization_strength': 0.3, 'potential_changepoint_distance': '15D', 'no_changepoint_distance_from_end': '180D', 'yearly_seasonality_order': 15, 'yearly_seasonality_change_freq': '365D'}, 'NONE': None}, 'WEEKLY': {'LT': {'method': 'auto', 'resample_freq': '7D', 'regularization_strength': 0.6, 'potential_changepoint_distance': '14D', 'no_changepoint_distance_from_end': '180D', 'yearly_seasonality_order': 15, 'yearly_seasonality_change_freq': None}, 'NM': {'method': 'auto', 'resample_freq': '7D', 'regularization_strength': 0.5, 'potential_changepoint_distance': '14D', 'no_changepoint_distance_from_end': '180D', 'yearly_seasonality_order': 15, 'yearly_seasonality_change_freq': '365D'}, 'HV': {'method': 'auto', 'resample_freq': '7D', 'regularization_strength': 0.3, 'potential_changepoint_distance': '14D', 'no_changepoint_distance_from_end': '180D', 'yearly_seasonality_order': 15, 'yearly_seasonality_change_freq': '365D'}, 'NONE': None}}, 'HOL': {'SP1': {'holidays_to_model_separately': 'auto', 'holiday_lookup_countries': 'auto', 'holiday_pre_num_days': 1, 'holiday_post_num_days': 1, 'holiday_pre_post_num_dict': None, 'daily_event_df_dict': None}, 'SP2': {'holidays_to_model_separately': 'auto', 'holiday_lookup_countries': 'auto', 'holiday_pre_num_days': 2, 'holiday_post_num_days': 2, 'holiday_pre_post_num_dict': None, 'daily_event_df_dict': None}, 'SP4': 
{'holidays_to_model_separately': 'auto', 'holiday_lookup_countries': 'auto', 'holiday_pre_num_days': 4, 'holiday_post_num_days': 4, 'holiday_pre_post_num_dict': None, 'daily_event_df_dict': None}, 'TG': {'holidays_to_model_separately': [], 'holiday_lookup_countries': 'auto', 'holiday_pre_num_days': 3, 'holiday_post_num_days': 3, 'holiday_pre_post_num_dict': None, 'daily_event_df_dict': None}, 'NONE': {'holidays_to_model_separately': [], 'holiday_lookup_countries': [], 'holiday_pre_num_days': 0, 'holiday_post_num_days': 0, 'holiday_pre_post_num_dict': None, 'daily_event_df_dict': None}}, 'FEASET': {'AUTO': 'auto', 'ON': True, 'OFF': False}, 'ALGO': {'LINEAR': {'fit_algorithm': 'linear', 'fit_algorithm_params': None}, 'RIDGE': {'fit_algorithm': 'ridge', 'fit_algorithm_params': None}, 'SGD': {'fit_algorithm': 'sgd', 'fit_algorithm_params': None}, 'LASSO': {'fit_algorithm': 'lasso', 'fit_algorithm_params': None}}, 'AR': {'AUTO': {'autoreg_dict': 'auto'}, 'OFF': {'autoreg_dict': None}}, 'DSI': {'HOURLY': {'AUTO': 5, 'OFF': 0}, 'DAILY': {'AUTO': 0, 'OFF': 0}, 'WEEKLY': {'AUTO': 0, 'OFF': 0}}, 'WSI': {'HOURLY': {'AUTO': 2, 'OFF': 0}, 'DAILY': {'AUTO': 2, 'OFF': 0}, 'WEEKLY': {'AUTO': 0, 'OFF': 0}}}, MULTI_TEMPLATES={'SILVERKITE_DAILY_90': ['DAILY_SEAS_LTQM_GR_LINEAR_CP_LT_HOL_SP2_FEASET_AUTO_ALGO_LINEAR_AR_OFF_DSI_AUTO_WSI_AUTO', 'DAILY_SEAS_LTQM_GR_LINEAR_CP_NONE_HOL_SP2_FEASET_AUTO_ALGO_LINEAR_AR_OFF_DSI_AUTO_WSI_AUTO', 'DAILY_SEAS_LTQM_GR_LINEAR_CP_LT_HOL_SP2_FEASET_AUTO_ALGO_RIDGE_AR_OFF_DSI_AUTO_WSI_AUTO', 'DAILY_SEAS_NM_GR_LINEAR_CP_LT_HOL_SP4_FEASET_AUTO_ALGO_RIDGE_AR_OFF_DSI_AUTO_WSI_AUTO'], 'SILVERKITE_WEEKLY': ['WEEKLY_SEAS_NM_GR_LINEAR_CP_NONE_HOL_NONE_FEASET_OFF_ALGO_LINEAR_AR_OFF_DSI_AUTO_WSI_AUTO', 'WEEKLY_SEAS_NM_GR_LINEAR_CP_LT_HOL_NONE_FEASET_OFF_ALGO_LINEAR_AR_OFF_DSI_AUTO_WSI_AUTO', 'WEEKLY_SEAS_HV_GR_LINEAR_CP_NM_HOL_NONE_FEASET_OFF_ALGO_RIDGE_AR_OFF_DSI_AUTO_WSI_AUTO', 
'WEEKLY_SEAS_HV_GR_LINEAR_CP_LT_HOL_NONE_FEASET_OFF_ALGO_RIDGE_AR_OFF_DSI_AUTO_WSI_AUTO'], 'SILVERKITE_HOURLY_1': ['HOURLY_SEAS_LT_GR_LINEAR_CP_NONE_HOL_TG_FEASET_AUTO_ALGO_LINEAR_AR_AUTO', 'HOURLY_SEAS_NM_GR_LINEAR_CP_LT_HOL_SP4_FEASET_AUTO_ALGO_LINEAR_AR_AUTO', 'HOURLY_SEAS_LT_GR_LINEAR_CP_NM_HOL_SP4_FEASET_OFF_ALGO_RIDGE_AR_AUTO', 'HOURLY_SEAS_NM_GR_LINEAR_CP_NM_HOL_SP1_FEASET_AUTO_ALGO_RIDGE_AR_AUTO'], 'SILVERKITE_HOURLY_24': ['HOURLY_SEAS_LT_GR_LINEAR_CP_NM_HOL_SP4_FEASET_AUTO_ALGO_RIDGE_AR_AUTO', 'HOURLY_SEAS_LT_GR_LINEAR_CP_NONE_HOL_SP4_FEASET_AUTO_ALGO_RIDGE_AR_AUTO', 'HOURLY_SEAS_NM_GR_LINEAR_CP_LT_HOL_SP1_FEASET_OFF_ALGO_LINEAR_AR_AUTO', 'HOURLY_SEAS_NM_GR_LINEAR_CP_NM_HOL_SP4_FEASET_AUTO_ALGO_RIDGE_AR_AUTO'], 'SILVERKITE_HOURLY_168': ['HOURLY_SEAS_LT_GR_LINEAR_CP_LT_HOL_SP4_FEASET_AUTO_ALGO_RIDGE_AR_OFF', 'HOURLY_SEAS_LT_GR_LINEAR_CP_LT_HOL_SP2_FEASET_AUTO_ALGO_RIDGE_AR_OFF', 'HOURLY_SEAS_NM_GR_LINEAR_CP_NONE_HOL_SP4_FEASET_OFF_ALGO_LINEAR_AR_AUTO', 'HOURLY_SEAS_NM_GR_LINEAR_CP_NM_HOL_SP1_FEASET_AUTO_ALGO_RIDGE_AR_OFF'], 'SILVERKITE_HOURLY_336': ['HOURLY_SEAS_LT_GR_LINEAR_CP_LT_HOL_SP2_FEASET_AUTO_ALGO_RIDGE_AR_OFF', 'HOURLY_SEAS_LT_GR_LINEAR_CP_LT_HOL_SP4_FEASET_AUTO_ALGO_RIDGE_AR_OFF', 'HOURLY_SEAS_NM_GR_LINEAR_CP_LT_HOL_SP1_FEASET_AUTO_ALGO_LINEAR_AR_OFF', 'HOURLY_SEAS_NM_GR_LINEAR_CP_NM_HOL_SP1_FEASET_AUTO_ALGO_LINEAR_AR_AUTO']}, SILVERKITE=ModelComponentsParam(autoregression={'autoreg_dict': None}, changepoints={'changepoints_dict': None, 'seasonality_changepoints_dict': None}, custom={'fit_algorithm_dict': {'fit_algorithm': 'ridge', 'fit_algorithm_params': None}, 'feature_sets_enabled': 'auto', 'max_daily_seas_interaction_order': 5, 'max_weekly_seas_interaction_order': 2, 'extra_pred_cols': [], 'min_admissible_value': None, 'max_admissible_value': None}, events={'holidays_to_model_separately': 'auto', 'holiday_lookup_countries': 'auto', 'holiday_pre_num_days': 2, 'holiday_post_num_days': 2, 'holiday_pre_post_num_dict': None, 'daily_event_df_dict': 
None}, growth={'growth_term': 'linear'}, hyperparameter_override=None, regressors={'regressor_cols': []}, seasonality={'yearly_seasonality': 'auto', 'quarterly_seasonality': 'auto', 'monthly_seasonality': 'auto', 'weekly_seasonality': 'auto', 'daily_seasonality': 'auto'}, uncertainty={'uncertainty_dict': None}), SILVERKITE_COMPONENT_KEYWORDS=<enum 'SILVERKITE_COMPONENT_KEYWORDS'>, SILVERKITE_EMPTY='DAILY_SEAS_NONE_GR_NONE_CP_NONE_HOL_NONE_FEASET_OFF_ALGO_LINEAR_AR_OFF_DSI_OFF_WSI_OFF', VALID_FREQ=['HOURLY', 'DAILY', 'WEEKLY'], SimpleSilverkiteTemplateOptions=<class 'greykite.framework.templates.simple_silverkite_template_config.SimpleSilverkiteTemplateOptions'>), estimator: greykite.sklearn.estimator.base_forecast_estimator.BaseForecastEstimator = SimpleSilverkiteEstimator())[source]

A template for SimpleSilverkiteEstimator.

Takes input data and optional configuration parameters to customize the model. Returns a set of parameters to call forecast_pipeline.

Notes

The attributes of a ForecastConfig for SimpleSilverkiteEstimator are:

computation_param: ComputationParam or None, default None

How to compute the result. See ComputationParam.

coverage: float or None, default None

Intended coverage of the prediction bands (0.0 to 1.0). Same as coverage in forecast_pipeline. You may tune how the uncertainty is computed via model_components.uncertainty[“uncertainty_dict”].

evaluation_metric_param: EvaluationMetricParam or None, default None

What metrics to evaluate. See EvaluationMetricParam.

evaluation_period_param: EvaluationPeriodParam or None, default None

How to split data for evaluation. See EvaluationPeriodParam.

forecast_horizon: int or None, default None

Number of periods to forecast into the future. Must be > 0. If None, the default is determined from the input data frequency. Same as forecast_horizon in forecast_pipeline.

metadata_param: MetadataParam or None, default None

Information about the input data. See MetadataParam.

model_components_param: ModelComponentsParam, list [ModelComponentsParam] or None, default None

Parameters to tune the model. See ModelComponentsParam. The fields are dictionaries with the following items.

See inline comments on which values accept lists for grid search.

seasonality: dict [str, any] or None, optional

Seasonality configuration dictionary with the following optional keys (the keys are SilverkiteSeasonalityEnum members in lower case).

The keys are parameters of forecast_simple_silverkite. Refer to that function for more details.

"yearly_seasonality": str or bool or int or a list of such values for grid search, default ‘auto’

Determines the yearly seasonality ‘auto’, True, False, or a number for the Fourier order

"quarterly_seasonality": str or bool or int or a list of such values for grid search, default ‘auto’

Determines the quarterly seasonality ‘auto’, True, False, or a number for the Fourier order

"monthly_seasonality": str or bool or int or a list of such values for grid search, default ‘auto’

Determines the monthly seasonality. ‘auto’, True, False, or a number for the Fourier order.

"weekly_seasonality": str or bool or int or a list of such values for grid search, default ‘auto’

Determines the weekly seasonality. ‘auto’, True, False, or a number for the Fourier order.

"daily_seasonality": str or bool or int or a list of such values for grid search, default ‘auto’

Determines the daily seasonality. ‘auto’, True, False, or a number for the Fourier order.
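Taken together, the seasonality keys above can be collected into one configuration dictionary. The sketch below uses illustrative values (not library defaults); list values request grid search, as described above:

```python
# Hypothetical seasonality settings for ModelComponentsParam(seasonality=...).
# A list value (as for "yearly_seasonality") is expanded during grid search.
seasonality = {
    "yearly_seasonality": [10, 15],  # grid search over two Fourier orders
    "quarterly_seasonality": False,  # disable quarterly seasonality
    "monthly_seasonality": "auto",   # let the template decide
    "weekly_seasonality": True,      # use the default Fourier order
    "daily_seasonality": 8,          # explicit Fourier order
}
```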

growth: dict [str, any] or None, optional

Growth configuration dictionary with the following optional key:

"growth_term": str or None or a list of such values for grid search

How to model the growth. Valid options are “linear”, “quadratic”, “sqrt”, “cubic”, “cuberoot”. All these terms have their origin at the train start date.

events: dict [str, any] or None, optional

Holiday/events configuration dictionary with the following optional keys:

"holiday_lookup_countries": list [str] or “auto” or None or a list of such values for grid search, default “auto”

The countries that contain the holidays you intend to model (holidays_to_model_separately).

  • If “auto”, uses a default list of countries that contain the default holidays_to_model_separately. See HOLIDAY_LOOKUP_COUNTRIES_AUTO.

  • If a list, must be a list of country names.

  • If None or an empty list, no holidays are modeled.

"holidays_to_model_separately": list [str] or “auto” or ALL_HOLIDAYS_IN_COUNTRIES or None or a list of such values for grid search, default “auto” # noqa: E501

Which holidays to include in the model. The model creates a separate key, value for each item in holidays_to_model_separately. The other holidays in the countries are grouped together as a single effect.

  • If “auto”, uses a default list of important holidays. See HOLIDAYS_TO_MODEL_SEPARATELY_AUTO.

  • If ALL_HOLIDAYS_IN_COUNTRIES, uses all available holidays in holiday_lookup_countries. This can often create a model that has too many parameters, and should typically be avoided.

  • If a list, must be a list of holiday names.

  • If None or an empty list, all holidays in holiday_lookup_countries are grouped together as a single effect.

Use holiday_lookup_countries to provide a list of countries where these holidays occur.

"holiday_pre_num_days": int or a list of such values for grid search, default 2

Model holiday effects for pre_num days before the holiday. The unit is days, not periods; it does not depend on the input data frequency.

"holiday_post_num_days": int or a list of such values for grid search, default 2

Model holiday effects for post_num days after the holiday. The unit is days, not periods; it does not depend on the input data frequency.

"holiday_pre_post_num_dict": dict [str, (int, int)] or None, default None

Overrides pre_num and post_num for each holiday in holidays_to_model_separately. For example, if holidays_to_model_separately contains “Thanksgiving” and “Labor Day”, this parameter can be set to {"Thanksgiving": [1, 3], "Labor Day": [1, 2]}, denoting that the “Thanksgiving” pre_num is 1 and post_num is 3, and “Labor Day” pre_num is 1 and post_num is 2. Holidays not specified use the default given by pre_num and post_num.

"daily_event_df_dict": dict [str, pandas.DataFrame] or None, default None

A dictionary of data frames, each representing events data for the corresponding key. Specifies additional events to include besides the holidays specified above. The format is the same as in forecast. The DataFrame has two columns:

  • The first column contains event dates. Must be in a format recognized by pandas.to_datetime. Must be at daily frequency for proper join. It is joined against the time in df, converted to a day: pd.to_datetime(pd.DatetimeIndex(df[time_col]).date).

  • The second column contains the event label for each date.

The column order is important; column names are ignored. The event dates must span their occurrences in both the training and future prediction period.

During modeling, each key in the dictionary is mapped to a categorical variable named f"{EVENT_PREFIX}_{key}", whose value at each timestamp is specified by the corresponding DataFrame.

For example, to manually specify a yearly event on September 1 during a training/forecast period that spans 2020-2022:

import pandas as pd

daily_event_df_dict = {
    "custom_event": pd.DataFrame({
        "date": ["2020-09-01", "2021-09-01", "2022-09-01"],
        "label": ["is_event", "is_event", "is_event"]
    })
}

It’s possible to specify multiple events in the same DataFrame. Two events, "sep" and "oct", are specified below for 2020-2021:

daily_event_df_dict = {
    "custom_event": pd.DataFrame({
        "date": ["2020-09-01", "2020-10-01", "2021-09-01", "2021-10-01"],
        "event_name": ["sep", "oct", "sep", "oct"]
    })
}

Use multiple keys if two events may fall on the same date. These events must be in separate DataFrames:

daily_event_df_dict = {
    "fixed_event": pd.DataFrame({
        "date": ["2020-09-01", "2021-09-01", "2022-09-01"],
        "event_name": "fixed_event"
    }),
    "moving_event": pd.DataFrame({
        "date": ["2020-09-01", "2021-08-28", "2022-09-03"],
        "event_name": "moving_event"
    }),
}

The multiple event specification can be used even if events never overlap. An equivalent specification to the second example:

daily_event_df_dict = {
    "sep": pd.DataFrame({
        "date": ["2020-09-01", "2021-09-01"],
        "event_name": "is_event"
    }),
    "oct": pd.DataFrame({
        "date": ["2020-10-01", "2021-10-01"],
        "event_name": "is_event"
    }),
}

Note: All these events are automatically added to the model. There is no need to specify them in extra_pred_cols as you would for forecast.

Note: Do not use EVENT_DEFAULT in the second column. This is reserved to indicate dates that do not correspond to an event.

changepoints: dict [str, dict] or None, optional

Specifies the changepoint configuration. Dictionary with the following optional key:

"changepoints_dict": dict or None or a list of such values for grid search

Changepoints dictionary passed to forecast_simple_silverkite. A dictionary with the following optional keys:

"method": str

The method to locate changepoints. Valid options:

  • “uniform”. Places n_changepoints evenly spaced changepoints to allow growth to change.

  • “custom”. Places changepoints at the specified dates.

  • “auto”. Automatically detects change points.

Additional keys to provide parameters for each particular method are described below.

"continuous_time_col": str or None

Column to apply growth_func to, to generate changepoint features. Typically, this should match the growth term in the model.

"growth_func": callable or None

Growth function (numeric -> numeric). Changepoint features are created by applying growth_func to “continuous_time_col” with offsets. If None, uses the identity function, i.e. continuous_time_col is used directly as the growth term.

If changepoints_dict[“method”] == “uniform”, this other key is required:

"n_changepoints": int

Number of changepoints to evenly space across the training period.

If changepoints_dict[“method”] == “custom”, this other key is required:

"dates": list [int or float or str or datetime]

Changepoint dates. Must be parsable by pd.to_datetime. Changepoints are set at the closest time on or after these dates in the dataset.

If changepoints_dict[“method”] == “auto”, optional keys can be passed that match the parameters in find_trend_changepoints (except df, time_col and value_col, which are already known). To add manually specified changepoints to the automatically detected ones, the keys dates, combine_changepoint_min_distance and keep_detected can be specified, which correspond to the three parameters custom_changepoint_dates, min_distance and keep_detected in combine_detected_and_custom_trend_changepoints.
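The three methods can be sketched as plain dictionaries. All values below are illustrative, not defaults, and the "auto" keys should be checked against find_trend_changepoints and combine_detected_and_custom_trend_changepoints:

```python
# Illustrative changepoints_dict values for each supported method.
uniform_cp = {
    "method": "uniform",
    "n_changepoints": 20,  # evenly spaced across the training period
}
custom_cp = {
    "method": "custom",
    "dates": ["2020-06-01", "2021-01-15"],  # parsed by pd.to_datetime
}
auto_cp = {
    "method": "auto",
    # optionally add manual changepoints to the detected ones:
    "dates": ["2020-06-01"],
    "keep_detected": True,
}

# A list requests grid search over the three configurations.
changepoints = {"changepoints_dict": [uniform_cp, custom_cp, auto_cp]}
```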

"seasonality_changepoints_dict": dict or None or a list of such values for grid search

Seasonality changepoints dictionary passed to forecast_simple_silverkite. The optional keys are the parameters of find_seasonality_changepoints. You don’t need to provide df, time_col, value_col or trend_changepoints; they are passed by the class automatically.

autoregression: dict [str, dict] or None, optional

Specifies the autoregression configuration. Dictionary with the following optional key:

"autoreg_dict": dict or None or a list of such values for grid search

A dictionary with arguments for build_autoreg_df. That function’s parameter value_col is inferred from the input to forecast_silverkite. Other keys are:

  • "lag_dict" : dict or None

  • "agg_lag_dict" : dict or None

  • "series_na_fill_func" : callable

See more details for above parameters in build_autoreg_df.
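A sketch of an autoregression configuration follows. The inner key names ("orders", "orders_list", "interval_list") follow build_autoreg_df but are assumptions here; verify them against that function's documentation:

```python
# Hypothetical autoreg_dict for ModelComponentsParam(autoregression=...).
autoregression = {
    "autoreg_dict": {
        "lag_dict": {"orders": [1, 2, 3]},    # individual lag terms
        "agg_lag_dict": {
            "orders_list": [[7, 14, 21]],     # average of these lags
            "interval_list": [(1, 7)],        # average over a lag range
        },
        # how to fill missing values before computing lags
        "series_na_fill_func": lambda s: s.bfill().ffill(),
    }
}
```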

regressors: dict [str, any] or None, optional

Specifies the regressors to include in the model (e.g. macro-economic factors). Dictionary with the following optional keys:

"regressor_cols": list [str] or None or a list of such values for grid search

The columns in df to use as regressors. Note that regressor values must be available in df for all prediction dates. Thus, df must contain timestamps for both training and future prediction:

  • regressors must be available on all dates

  • the response must be available for training dates (metadata[“value_col”])

Use extra_pred_cols to specify interactions of any model terms with the regressors.
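The requirement above can be illustrated with a small input frame (column names "ts", "y", and "temperature" are hypothetical): the response is known only for training dates, while the regressor extends through the forecast period.

```python
import pandas as pd

# 7 training dates plus 3 future dates; the regressor covers all 10,
# the response is missing for the 3 forecast dates.
df = pd.DataFrame({
    "ts": pd.date_range("2022-01-01", periods=10, freq="D"),
    "y": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] + [None] * 3,
    "temperature": [20, 21, 19, 22, 23, 21, 20, 19, 18, 22],
})
regressors = {"regressor_cols": ["temperature"]}
```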

uncertainty: dict [str, dict] or None, optional

Along with coverage, specifies the uncertainty interval configuration. Use coverage to set interval size. Use uncertainty to tune the calculation.

"uncertainty_dict": str or dict or None or a list of such values for grid search

“auto” or a dictionary on how to fit the uncertainty model. If a dictionary, valid keys are:

"uncertainty_method": str

The name of the method. Only "simple_conditional_residuals" is implemented in fit_ml_model, which calculates intervals using residuals.

"params": dict

A dictionary of parameters needed for the requested uncertainty_method. For example, for uncertainty_method="simple_conditional_residuals", see parameters of conf_interval:

  • "conditional_cols"

  • "quantiles"

  • "quantile_estimation_method"

  • "sample_size_thresh"

  • "small_sample_size_method"

  • "small_sample_size_quantile"

The default value for quantiles is inferred from coverage.

If “auto”, see get_silverkite_uncertainty_dict for the default value. If coverage is not None and uncertainty_dict is not provided, then the “auto” setting is used.

If coverage is None and uncertainty_dict is None, then no intervals are returned.
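Using the parameter names listed above, an uncertainty configuration might look like the following sketch. The parameter values are illustrative assumptions, not library defaults; see conf_interval for the actual semantics:

```python
# Hypothetical ModelComponentsParam(uncertainty=...) configuration.
uncertainty = {
    "uncertainty_dict": {
        "uncertainty_method": "simple_conditional_residuals",
        "params": {
            "conditional_cols": ["dow"],   # condition residuals on day of week
            "quantiles": [0.025, 0.975],   # ~95% interval; inferred from coverage if omitted
            "quantile_estimation_method": "normal_fit",
            "sample_size_thresh": 5,
            "small_sample_size_method": "std_quantiles",
            "small_sample_size_quantile": 0.98,
        },
    }
}
```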

custom: dict [str, any] or None, optional

Custom parameters that don’t fit the categories above. Dictionary with the following optional keys:

"fit_algorithm_dict": dict or a list of such values for grid search

How to fit the model. A dictionary with the following optional keys.

"fit_algorithm": str, optional, default “ridge”

The type of predictive model used in fitting.

See fit_model_via_design_matrix for available options and their parameters.

"fit_algorithm_params": dict or None, optional, default None

Parameters passed to the requested fit_algorithm. If None, uses the defaults in fit_model_via_design_matrix.
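Because fit_algorithm_dict accepts a list for grid search, two candidate fitting algorithms can be compared in one run. The algorithm names and the "cv" parameter below are assumptions; check fit_model_via_design_matrix for the supported options:

```python
# Sketch: grid search over two fitting algorithms.
custom = {
    "fit_algorithm_dict": [
        {"fit_algorithm": "ridge"},  # default-style ridge fit
        {"fit_algorithm": "elastic_net",
         "fit_algorithm_params": {"cv": 3}},  # hypothetical parameter override
    ]
}
```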

"feature_sets_enabled": dict [str, bool or “auto” or None] or bool or “auto” or None; or a list of such values for grid search

Whether to include interaction terms and categorical variables to increase model flexibility.

If a dict, the boolean values indicate whether to include the various sets of features in the model. The following keys are recognized (from SilverkiteColumn):

"COLS_HOUR_OF_WEEK": str

Constant hour of week effect

"COLS_WEEKEND_SEAS": str

Daily seasonality interaction with is_weekend

"COLS_DAY_OF_WEEK_SEAS": str

Daily seasonality interaction with day of week

"COLS_TREND_DAILY_SEAS": str

Allow daily seasonality to change over time by is_weekend

"COLS_EVENT_SEAS": str

Allow sub-daily event effects

"COLS_EVENT_WEEKEND_SEAS": str

Allow sub-daily event effect to interact with is_weekend

"COLS_DAY_OF_WEEK": str

Constant day of week effect

"COLS_TREND_WEEKEND": str

Allow trend (growth, changepoints) to interact with is_weekend

"COLS_TREND_DAY_OF_WEEK": str

Allow trend to interact with day of week

"COLS_TREND_WEEKLY_SEAS": str

Allow weekly seasonality to change over time

The following dictionary values are recognized:

  • True: include the feature set in the model

  • False: do not include the feature set in the model

  • None: do not include the feature set in the model

  • “auto” or not provided: use the default setting based on data frequency and size

If not a dict:

  • if a boolean, equivalent to a dictionary with all values set to the boolean.

  • if None, equivalent to a dictionary with all values set to False.

  • if “auto”, equivalent to a dictionary with all values set to “auto”.
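The scalar shorthands expand as described above. The helper below is a sketch of that expansion (not library code), using the SilverkiteColumn key names as plain strings:

```python
# The recognized feature-set keys, as plain strings for illustration.
KEYS = [
    "COLS_HOUR_OF_WEEK", "COLS_WEEKEND_SEAS", "COLS_DAY_OF_WEEK_SEAS",
    "COLS_TREND_DAILY_SEAS", "COLS_EVENT_SEAS", "COLS_EVENT_WEEKEND_SEAS",
    "COLS_DAY_OF_WEEK", "COLS_TREND_WEEKEND", "COLS_TREND_DAY_OF_WEEK",
    "COLS_TREND_WEEKLY_SEAS",
]

def expand_feature_sets_enabled(value):
    """Expands a scalar feature_sets_enabled value into its dict form."""
    if isinstance(value, dict):
        return value                          # already a dict: use as-is
    if value is None:
        return {k: False for k in KEYS}       # None -> all False
    if value == "auto":
        return {k: "auto" for k in KEYS}      # "auto" -> all "auto"
    return {k: bool(value) for k in KEYS}     # bool -> all set to that bool
```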

"max_daily_seas_interaction_order": int or None or a list of such values for grid search, default 5

Max fourier order to use for interactions with daily seasonality. (COLS_EVENT_SEAS, COLS_EVENT_WEEKEND_SEAS, COLS_WEEKEND_SEAS, COLS_DAY_OF_WEEK_SEAS, COLS_TREND_DAILY_SEAS).

Model includes interaction terms specified by feature_sets_enabled up to the order limited by this value and the available order from seasonality.

"max_weekly_seas_interaction_order"int or None or a list of such values for grid search, default 2

Max fourier order to use for interactions with weekly seasonality (COLS_TREND_WEEKLY_SEAS).

Model includes interaction terms specified by feature_sets_enabled up to the order limited by this value and the available order from seasonality.

"extra_pred_cols": list [str] or None or a list of such values for grid search, default None

Names of extra predictor columns to pass to forecast_silverkite. The standard interactions can be controlled via the feature_sets_enabled parameter. Accepts any valid patsy model formula term. Can be used to model complex interactions of time features, events, seasonality, changepoints, regressors. Columns should be generated by build_silverkite_features or included with the input data. These are added to any features already included by feature_sets_enabled and any terms specified by the model.

"min_admissible_value": float or double or int or None, default None

The lowest admissible value for the forecasts and prediction intervals. Any value below this will be mapped back to this value. If None, there is no lower bound.

"max_admissible_value": float or double or int or None, default None

The highest admissible value for the forecasts and prediction intervals. Any value above this will be mapped back to this value. If None, there is no upper bound.

hyperparameter_override: dict [str, any] or None or list [dict [str, any] or None], optional

After the above model components are used to create a hyperparameter grid, the result is updated by this dictionary, to create new keys or override existing ones. Allows for complete customization of the grid search.

Keys should have format {named_step}__{parameter_name} for the named steps of the sklearn.pipeline.Pipeline returned by this function. See sklearn.pipeline.Pipeline.

For example:

hyperparameter_override={
    "estimator__silverkite": SimpleSilverkiteForecast(),
    "estimator__silverkite_diagnostics": SilverkiteDiagnostics(),
    "estimator__growth_term": "linear",
    "input__response__null__impute_algorithm": "ts_interpolate",
    "input__response__null__impute_params": {"orders": [7, 14]},
    "input__regressors_numeric__normalize__normalize_algorithm": "RobustScaler",
}

If a list of dictionaries, grid search is done for each dictionary in the list. Each dictionary in the list overrides the defaults. This enables grid search over specific combinations of parameters to reduce the search space.

  • For example, the first dictionary could define combinations of parameters for a “complex” model, and the second dictionary could define combinations of parameters for a “simple” model, to prevent mixed combinations of simple and complex.

  • Or the first dictionary could grid search over fit algorithm, and the second dictionary could use a single fit algorithm and grid search over seasonality.

The result is passed as the param_distributions parameter to sklearn.model_selection.RandomizedSearchCV.
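For instance, the "complex vs. simple" split described above can be expressed as two override dictionaries. The estimator parameter names below are illustrative; any estimator__{parameter} exposed by the pipeline's estimator step may be used:

```python
# Sketch: search a "complex" and a "simple" configuration separately,
# so their parameters are never mixed in one grid.
hyperparameter_override = [
    {   # complex model: ridge fit, search over high seasonality orders
        "estimator__fit_algorithm_dict": [{"fit_algorithm": "ridge"}],
        "estimator__yearly_seasonality": [10, 15, 25],
    },
    {   # simple model: linear fit, low seasonality only
        "estimator__fit_algorithm_dict": [{"fit_algorithm": "linear"}],
        "estimator__yearly_seasonality": [2],
    },
]
```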

model_template: str, list [str] or None, default None

The simple silverkite template supports single templates, multi templates, or a list of single/multi templates. A valid single template must either be SILVERKITE or consist of

{FREQ}_SEAS_{VAL}_GR_{VAL}_CP_{VAL}_HOL_{VAL}_FEASET_{VAL}_ALGO_{VAL}_AR_{VAL}

For example, DAILY_SEAS_NM_GR_LINEAR_CP_LT_HOL_NONE_FEASET_ON_ALGO_RIDGE_AR_ON is valid. The valid FREQ and VAL values can be found at template_defaults. The components stand for seasonality, growth, changepoints_dict, events, feature_sets_enabled, fit_algorithm and autoregression in ModelComponentsParam, which is used in SimpleSilverkiteTemplate. Users may

  • Omit any number of component-value pairs; the omitted ones are filled with default values.

  • Switch the order of different component-value pairs.

A valid multi template must belong to MULTI_TEMPLATES or must be a list of single or multi template names.
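Some example model_template values under this naming scheme (the composed names are illustrative instances of the pattern, not a list of recommended settings):

```python
# Valid forms of config.model_template for this template class.
single = "SILVERKITE"    # a predefined single template
composed = "DAILY_SEAS_NM_GR_LINEAR_CP_LT_HOL_NONE_FEASET_ON_ALGO_RIDGE_AR_ON"
partial = "DAILY_CP_LT"  # omitted component-value pairs use defaults
templates = [single, composed, partial]  # a list is also accepted
```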

DEFAULT_MODEL_TEMPLATE = 'SILVERKITE'

The default model template. See ModelTemplateEnum. Uses a string to avoid circular imports. Overrides the value from ForecastConfigDefaults.

property allow_model_template_list

SimpleSilverkiteTemplate allows config.model_template to be a list.

property allow_model_components_param_list

SimpleSilverkiteTemplate allows config.model_components_param to be a list.

property constants

Constants used by the template class. Includes the model templates and their default values.

get_regressor_cols()[source]

Returns regressor column names from the model components.

Implements the method in BaseTemplate.

Uses these attributes:

model_components: ModelComponentsParam, list [ModelComponentsParam] or None, default None

Configuration of model growth, seasonality, holidays, etc. See SimpleSilverkiteTemplate for details.

Returns

regressor_cols – The names of regressor columns used in any hyperparameter set requested by model_components. None if there are no regressors.

Return type

list [str] or None

get_hyperparameter_grid()[source]

Returns hyperparameter grid.

Implements the method in BaseTemplate.

Converts model components, time properties, and model template into SimpleSilverkiteEstimator hyperparameters.

Uses these attributes:

model_components: ModelComponentsParam, list [ModelComponentsParam] or None, default None

Configuration of model growth, seasonality, events, etc. See SimpleSilverkiteTemplate for details.

time_properties: dict [str, any] or None, default None

Time properties dictionary (likely produced by get_forecast_time_properties) with keys:

"period": int

Period of each observation (i.e. minimum time between observations, in seconds).

"simple_freq": SimpleTimeFrequencyEnum

SimpleTimeFrequencyEnum member corresponding to data frequency.

"num_training_points": int

Number of observations for training.

"num_training_days": int

Number of days for training.

"start_year": int

Start year of the training period.

"end_year": int

End year of the forecast period.

"origin_for_time_vars": float

Continuous time representation of the first date in df.

model_template: str, default “SILVERKITE”

The name of model template, must be one of the valid templates defined in SimpleSilverkiteTemplate.

Notes

forecast_pipeline handles the train/test splits according to EvaluationPeriodParam, so estimator__train_test_thresh and estimator__training_fraction are always None.

Similarly, estimator__origin_for_time_vars is set to None.

Returns

hyperparameter_grid – hyperparameter_grid for grid search in forecast_pipeline. The output dictionary values are lists, combined in grid search.

Return type

dict [str, list [any]] or list [ dict [str, list [any]] ]

check_template_type(template)[source]

Checks the template name is valid and whether it is single or multi template. Raises an error if the template is not recognized.

A valid single template must either be SILVERKITE or consist of

{FREQ}_SEAS_{VAL}_GR_{VAL}_CP_{VAL}_HOL_{VAL}_FEASET_{VAL}_ALGO_{VAL}_AR_{VAL}

For example, DAILY_SEAS_NM_GR_LINEAR_CP_LT_HOL_NONE_FEASET_ON_ALGO_RIDGE_AR_ON is valid. The valid FREQ and VAL values can be found at template_defaults. The components stand for seasonality, growth, changepoints_dict, events, feature_sets_enabled, fit_algorithm and autoregression in ModelComponentsParam, which is used in SimpleSilverkiteTemplate. Users may

  • Omit any number of component-value pairs; the omitted ones are filled with default values.

  • Switch the order of different component-value pairs.

A valid multi template must belong to MULTI_TEMPLATES or must be a list of single or multi template names.

Parameters

template (str, SimpleSilverkiteTemplateName or list [str, SimpleSilverkiteTemplateName]) – The model_template parameter fed into ForecastConfig for simple silverkite templates.

Returns

template_type – “single” or “multi”.

Return type

str

get_model_components_from_model_template(template)[source]

Gets the ModelComponentsParam class from model template.

The template could be a name string, a SimpleSilverkiteTemplateOptions dataclass, or a list of such strings and/or dataclasses. If a list is given, a list of ModelComponentsParam is returned. If a single element is given, a list of length 1 is returned.

Parameters

template (str, SimpleSilverkiteTemplateOptions or list [str, SimpleSilverkiteTemplateOptions]) – The model_template in ForecastConfig, could be a name string, a SimpleSilverkiteTemplateOptions dataclass, or a list of such strings and/or dataclasses.

Returns

model_components_param – The list of ModelComponentsParam class(es) that correspond to template.

Return type

list [ModelComponentsParam]

static apply_computation_defaults(computation: Optional[greykite.framework.templates.autogen.forecast_config.ComputationParam] = None) → greykite.framework.templates.autogen.forecast_config.ComputationParam

Applies the default ComputationParam values to the given object. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input object is None, it creates a ComputationParam object.

Parameters

computation (ComputationParam or None) – The ComputationParam object.

Returns

computation – Valid ComputationParam object with the provided attribute values, and default values for any attributes that were not provided.

Return type

ComputationParam

static apply_evaluation_metric_defaults(evaluation: Optional[greykite.framework.templates.autogen.forecast_config.EvaluationMetricParam] = None) → greykite.framework.templates.autogen.forecast_config.EvaluationMetricParam

Applies the default EvaluationMetricParam values to the given object. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input object is None, it creates an EvaluationMetricParam object.

Parameters

evaluation (EvaluationMetricParam or None) – The EvaluationMetricParam object.

Returns

evaluation – Valid EvaluationMetricParam object with the provided attribute values, and default values for any attributes that were not provided.

Return type

EvaluationMetricParam

static apply_evaluation_period_defaults(evaluation: Optional[greykite.framework.templates.autogen.forecast_config.EvaluationPeriodParam] = None) → greykite.framework.templates.autogen.forecast_config.EvaluationPeriodParam

Applies the default EvaluationPeriodParam values to the given object. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input object is None, it creates an EvaluationPeriodParam object.

Parameters

evaluation (EvaluationPeriodParam or None) – The EvaluationPeriodParam object.

Returns

evaluation – Valid EvaluationPeriodParam object with the provided attribute values, and default values for any attributes that were not provided.

Return type

EvaluationPeriodParam

apply_forecast_config_defaults(config: Optional[greykite.framework.templates.autogen.forecast_config.ForecastConfig] = None) → greykite.framework.templates.autogen.forecast_config.ForecastConfig

Applies the default ForecastConfig values to the given config. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input config is None, it creates a ForecastConfig.

Parameters

config (ForecastConfig or None) – Forecast configuration if available. See ForecastConfig.

Returns

config – A valid ForecastConfig containing the provided attribute values, with default values for any that were not provided.

Return type

ForecastConfig

static apply_metadata_defaults(metadata: Optional[greykite.framework.templates.autogen.forecast_config.MetadataParam] = None) → greykite.framework.templates.autogen.forecast_config.MetadataParam

Applies the default MetadataParam values to the given object. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input object is None, it creates a MetadataParam object.

Parameters

metadata (MetadataParam or None) – The MetadataParam object.

Returns

metadata – Valid MetadataParam object with the provided attribute values, and default values for any attributes that were not provided.

Return type

MetadataParam

static apply_model_components_defaults(model_components: Optional[Union[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam, List[Optional[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam]]]] = None) → Union[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam, List[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam]]

Applies the default ModelComponentsParam values to the given object.

Converts None to a ModelComponentsParam object. Unpacks a list of a single element to the element itself.

Parameters

model_components (ModelComponentsParam or None or list of such items) – The ModelComponentsParam object.

Returns

model_components – Valid ModelComponentsParam object with the provided attribute values, and default values for any attributes that were not provided.

Return type

ModelComponentsParam or list of such items

apply_model_template_defaults(model_template: Optional[Union[str, List[Optional[str]]]] = None) → Union[str, List[str]]

Applies the default model template value to the given object.

Unpacks a list of a single element to the element itself. Sets default value if None.

Parameters

model_template (str or None or list [None, str]) – The model template name. See valid names in ModelTemplateEnum.

Returns

model_template – The model template name, with defaults value used if not provided.

Return type

str or list [str]

static apply_template_decorator(func)

Decorator for apply_template_for_pipeline_params function.

By default, this applies apply_forecast_config_defaults to config.

Subclass may override this for pre/post processing of apply_template_for_pipeline_params, such as input validation. In this case, apply_template_for_pipeline_params must also be implemented in the subclass.

apply_template_for_pipeline_params(df: pandas.core.frame.DataFrame, config: Optional[greykite.framework.templates.autogen.forecast_config.ForecastConfig] = None) → Dict

Implements template interface method. Takes input data and optional configuration parameters to customize the model. Returns a set of parameters to call forecast_pipeline.

See template interface for parameters and return value.

Uses the methods in this class to set:

  • "regressor_cols" : get_regressor_cols()

  • "pipeline" : get_pipeline()

  • "time_properties" : get_forecast_time_properties()

  • "hyperparameter_grid" : get_hyperparameter_grid()

All other parameters are taken directly from config.

property estimator

The estimator instance to use as the final step in the pipeline. An instance of BaseForecastEstimator.

get_forecast_time_properties()

Returns forecast time parameters.

Uses self.df, self.config, self.regressor_cols.

Available parameters:

  • self.df

  • self.config

  • self.score_func

  • self.score_func_greater_is_better

  • self.regressor_cols

  • self.estimator

  • self.pipeline

Returns

time_properties – Time properties dictionary with the following keys:

"period"int

Period of each observation (i.e. minimum time between observations, in seconds).

"simple_freq"SimpleTimeFrequencyEnum

SimpleTimeFrequencyEnum member corresponding to data frequency.

"num_training_points"int

Number of observations for training.

"num_training_days"int

Number of days for training.

"start_year"int

Start year of the training period.

"end_year"int

End year of the forecast period.

"origin_for_time_vars"float

Continuous time representation of the first date in df.

Return type

dict [str, any] or None, default None

get_pipeline()

Returns pipeline.

Implementation may be overridden by subclass if a different pipeline is desired.

Uses self.estimator, self.score_func, self.score_func_greater_is_better, self.config, self.regressor_cols.

Available parameters:

  • self.df

  • self.config

  • self.score_func

  • self.score_func_greater_is_better

  • self.regressor_cols

  • self.estimator

Returns

pipeline – See forecast_pipeline.

Return type

sklearn.pipeline.Pipeline

class greykite.sklearn.estimator.simple_silverkite_estimator.SimpleSilverkiteEstimator(silverkite: greykite.algo.forecast.silverkite.forecast_simple_silverkite.SimpleSilverkiteForecast = <greykite.algo.forecast.silverkite.forecast_simple_silverkite.SimpleSilverkiteForecast object>, silverkite_diagnostics: greykite.algo.forecast.silverkite.silverkite_diagnostics.SilverkiteDiagnostics = <greykite.algo.forecast.silverkite.silverkite_diagnostics.SilverkiteDiagnostics object>, score_func: callable = <function mean_squared_error>, coverage: float = None, null_model_params: Optional[Dict] = None, time_properties: Optional[Dict] = None, freq: Optional[str] = None, forecast_horizon: Optional[int] = None, origin_for_time_vars: Optional[float] = None, train_test_thresh: Optional[datetime.datetime] = None, training_fraction: Optional[float] = None, fit_algorithm_dict: Optional[Dict] = None, holidays_to_model_separately: Optional[Union[str, List[str]]] = 'auto', holiday_lookup_countries: Optional[Union[str, List[str]]] = 'auto', holiday_pre_num_days: int = 2, holiday_post_num_days: int = 2, holiday_pre_post_num_dict: Optional[Dict] = None, daily_event_df_dict: Optional[Dict] = None, changepoints_dict: Optional[Dict] = None, yearly_seasonality: Union[bool, str, int] = 'auto', quarterly_seasonality: Union[bool, str, int] = 'auto', monthly_seasonality: Union[bool, str, int] = 'auto', weekly_seasonality: Union[bool, str, int] = 'auto', daily_seasonality: Union[bool, str, int] = 'auto', max_daily_seas_interaction_order: Optional[int] = None, max_weekly_seas_interaction_order: Optional[int] = None, autoreg_dict: Optional[Dict] = None, seasonality_changepoints_dict: Optional[Dict] = None, min_admissible_value: Optional[float] = None, max_admissible_value: Optional[float] = None, uncertainty_dict: Optional[Dict] = None, growth_term: Optional[str] = 'linear', regressor_cols: Optional[List[str]] = None, feature_sets_enabled: Optional[Union[bool, Dict[str, bool]]] = None, extra_pred_cols: Optional[List[str]] = None, regression_weight_col: Optional[str] = None, simulation_based: Optional[bool] = False)[source]

Wrapper for forecast_simple_silverkite.

Parameters
  • score_func (callable, optional, default mean_squared_error) – See BaseForecastEstimator.

  • coverage (float between [0.0, 1.0] or None, optional) – See BaseForecastEstimator.

  • null_model_params (dict or None, optional) – Dictionary with arguments to define DummyRegressor null model, default is None. See BaseForecastEstimator.

  • fit_algorithm_dict (dict or None, optional) –

    How to fit the model. A dictionary with the following optional keys.

    "fit_algorithm" : str, optional, default “ridge”

    The type of predictive model used in fitting.

    See fit_model_via_design_matrix for available options and their parameters.

    "fit_algorithm_params" : dict or None, optional, default None

    Parameters passed to the requested fit_algorithm. If None, uses the defaults in fit_model_via_design_matrix.

  • uncertainty_dict (dict or str or None, optional) – How to fit the uncertainty model. See forecast. Note that this is allowed to be “auto”. If None or “auto”, will be set to a default value by coverage before calling forecast_silverkite. See BaseForecastEstimator for details.

  • kwargs (additional parameters) –

    Other parameters are the same as in forecast_simple_silverkite.

    See source code __init__ for the parameter names, and refer to forecast_simple_silverkite for their description.

    If this Estimator is called from forecast_pipeline, train_test_thresh and training_fraction should almost always be None, because train/test is handled outside this Estimator.
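The dictionary shapes above can be sketched in plain Python (no Greykite import needed); the ridge alpha value below is purely illustrative, not a recommended setting:

```python
# Illustrative fit_algorithm_dict, using the two documented optional keys.
# The alpha value is an arbitrary example.
fit_algorithm_dict = {
    "fit_algorithm": "ridge",                # default for SimpleSilverkiteEstimator
    "fit_algorithm_params": {"alpha": 0.5},  # forwarded to the fitting routine
}

# "auto" lets the estimator derive the uncertainty model from `coverage`.
uncertainty_dict = "auto"
```

These dictionaries would be passed to the SimpleSilverkiteEstimator constructor as fit_algorithm_dict and uncertainty_dict.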

Notes

Attributes match those of BaseSilverkiteEstimator.

See also

BaseSilverkiteEstimator

For attributes and details on fit, predict, and component plots.

forecast_simple_silverkite

Function to transform the parameters to call the forecast_silverkite fit.

SimpleSilverkiteForecast

Functions performing the fit and predict.

fit(X, y=None, time_col='ts', value_col='y', **fit_params)[source]

Fits Silverkite forecast model.

Parameters
  • X (pandas.DataFrame) – Input timeseries, with timestamp column, value column, and any additional regressors. The value column is the response, included in X to allow transformation by sklearn.pipeline.

  • y (ignored) – The original timeseries values, ignored. (The y for fitting is included in X).

  • time_col (str) – Time column name in X.

  • value_col (str) – Value column name in X.

  • fit_params (dict) – additional parameters for null model.

Returns

self – Fitted model is stored in self.model_dict.

Return type

self

finish_fit()

Makes important values of self.model_dict conveniently accessible.

To be called by subclasses at the end of their fit method. Sets {pred_cols, feature_cols, and coef_}.

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

dict

plot_seasonalities(title=None)

Convenience function to plot the data and the seasonality components.

Parameters

title (str, optional, default None) – Plot title.

Returns

fig – Figure.

Return type

plotly.graph_objs.Figure

plot_trend(title=None)

Convenience function to plot the data and the trend component.

Parameters

title (str, optional, default None) – Plot title.

Returns

fig – Figure.

Return type

plotly.graph_objs.Figure

plot_trend_changepoint_detection(params=None)

Convenience function to plot the original trend changepoint detection results.

Parameters

params (dict or None, default None) –

The parameters in plot. If set to None, all components will be plotted.

Note: plotting seasonality components is not currently supported; the plot parameter must be False.

Returns

fig – Figure.

Return type

plotly.graph_objs.Figure

property pred_category

A dictionary that stores the predictor names in each category.

This property is not initialized until used. This speeds up the fitting process. The categories include:

  • “intercept” : the intercept.

  • “time_features” : the predictors that include TIME_FEATURES but not SEASONALITY_REGEX.

  • “event_features” : the predictors that include EVENT_PREFIX.

  • “trend_features” : the predictors that include TREND_REGEX but not SEASONALITY_REGEX.

  • “seasonality_features” : the predictors that include SEASONALITY_REGEX.

  • “lag_features” : the predictors that include LAG_REGEX.

  • “regressor_features” : external regressors and other predictors manually passed to extra_pred_cols, but not in the categories above.

  • “interaction_features” : the predictors that include interaction terms, i.e., including a colon.

Note that each predictor falls into at least one category. Some “time_features” may also be “trend_features”. Predictors with an interaction are classified into all categories matched by the interaction components. Thus, “interaction_features” are already included in the other categories.
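As a hedged illustration of the overlap rules above (the predictor names, such as sin1_tow_weekly, are hypothetical, not actual model output):

```python
# A hypothetical pred_category-style mapping. The term
# "ct1:sin1_tow_weekly" contains a colon, so it is an interaction
# feature, and it is also classified into every category matched by
# its components (trend via ct1, seasonality via sin1_tow_weekly).
pred_category = {
    "intercept": ["Intercept"],
    "trend_features": ["ct1", "ct1:sin1_tow_weekly"],
    "seasonality_features": ["sin1_tow_weekly", "ct1:sin1_tow_weekly"],
    "interaction_features": ["ct1:sin1_tow_weekly"],
}

interaction = "ct1:sin1_tow_weekly"
```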

predict(X, y=None)

Creates forecast for the dates specified in X.

Parameters
  • X (pandas.DataFrame) – Input timeseries with timestamp column and any additional regressors. Timestamps are the dates for prediction. Value column, if provided in X, is ignored.

  • y (ignored.) –

Returns

predictions

Forecasted values for the dates in X. Columns:

  • TIME_COL: dates

  • PREDICTED_COL: predictions

  • PREDICTED_LOWER_COL: lower bound of predictions, optional

  • PREDICTED_UPPER_COL: upper bound of predictions, optional

  • [other columns], optional

PREDICTED_LOWER_COL and PREDICTED_UPPER_COL are present if self.coverage is not None.

Return type

pandas.DataFrame
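A minimal sketch of the output shape when self.coverage is not None; the literal column names below stand in for the TIME_COL / PREDICTED_* constants, and the numbers are made up:

```python
import pandas as pd

# Made-up forecast values, shaped like the predict() output with
# coverage set (hence the lower/upper bound columns).
predictions = pd.DataFrame({
    "ts": pd.date_range("2024-01-01", periods=3, freq="D"),
    "forecast": [10.0, 11.5, 9.8],
    "forecast_lower": [8.0, 9.2, 7.9],
    "forecast_upper": [12.0, 13.8, 11.7],
})
```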

score(X, y, sample_weight=None)

Default scorer for the estimator (used in GridSearchCV/RandomizedSearchCV if scoring=None).

Notes

If null_model_params is not None, returns R2_null_model_score of model error relative to null model, evaluated by score_func.

If null_model_params is None, returns score_func of the model itself.

By default, grid search (with no scoring parameter) optimizes improvement of score_func against null model.

To optimize a different score function, pass scoring to GridSearchCV/RandomizedSearchCV.

Parameters
Returns

score – Comparison of predictions against null predictions, according to specified score function

Return type

float or None

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

estimator instance
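The <component>__<parameter> convention is standard scikit-learn behavior; here it is demonstrated with a plain sklearn Pipeline rather than a Greykite estimator:

```python
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline

# A one-step pipeline; the step is named "model", so its parameters
# are addressed as model__<parameter> in set_params/get_params.
pipe = Pipeline([("model", Ridge(alpha=1.0))])
pipe.set_params(model__alpha=0.1)
```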

summary(max_colwidth=20)

Creates a human-readable string of how the model works, including relevant diagnostics. These details cannot be extracted from the forecast alone. Prints the model configuration. Extend this in a child class to print the trained model parameters.

Log message is printed to the cst.LOGGER_NAME logger.

class greykite.sklearn.estimator.silverkite_estimator.SilverkiteEstimator(silverkite: greykite.algo.forecast.silverkite.forecast_silverkite.SilverkiteForecast = <greykite.algo.forecast.silverkite.forecast_silverkite.SilverkiteForecast object>, silverkite_diagnostics: greykite.algo.forecast.silverkite.silverkite_diagnostics.SilverkiteDiagnostics = <greykite.algo.forecast.silverkite.silverkite_diagnostics.SilverkiteDiagnostics object>, score_func=<function mean_squared_error>, coverage=None, null_model_params=None, origin_for_time_vars=None, extra_pred_cols=None, train_test_thresh=None, training_fraction=None, fit_algorithm_dict=None, daily_event_df_dict=None, fs_components_df= name period order seas_names 0 tod 24.0 3 daily 1 tow 7.0 3 weekly 2 conti_year 1.0 5 yearly, autoreg_dict=None, changepoints_dict=None, seasonality_changepoints_dict=None, changepoint_detector=None, min_admissible_value=None, max_admissible_value=None, uncertainty_dict=None, normalize_method=None, adjust_anomalous_dict=None, impute_dict=None, regression_weight_col=None, forecast_horizon=None, simulation_based=False)[source]

Wrapper for forecast.

Parameters
  • score_func (callable, optional, default mean_squared_error) – See BaseForecastEstimator.

  • coverage (float between [0.0, 1.0] or None, optional) – See BaseForecastEstimator.

  • null_model_params (dict or None, optional) – Dictionary with arguments to define DummyRegressor null model, default is None. See BaseForecastEstimator.

  • fit_algorithm_dict (dict or None, optional) –

    How to fit the model. A dictionary with the following optional keys.

    "fit_algorithm" : str, optional, default “linear”

    The type of predictive model used in fitting.

    See fit_model_via_design_matrix for available options and their parameters.

    "fit_algorithm_params" : dict or None, optional, default None

    Parameters passed to the requested fit_algorithm. If None, uses the defaults in fit_model_via_design_matrix.

  • uncertainty_dict (dict or str or None, optional) – How to fit the uncertainty model. See forecast. Note that this is allowed to be “auto”. If None or “auto”, will be set to a default value by coverage before calling forecast_silverkite. See BaseForecastEstimator for details.

  • fs_components_df (pandas.DataFrame or None, optional) –

    A dataframe with information about Fourier series generation. If provided, it must contain columns with the following names:

    • ”name”: name of the timeseries feature (e.g. tod, tow, etc.).

    • ”period”: period of the Fourier series.

    • ”order”: order of the Fourier series.

    • ”seas_names”: label for the type of seasonality (e.g. daily, weekly, etc.); must be unique. validate_fs_components_df checks this, so that component plots don’t have duplicate y-axis labels.

    This differs from the expected input of forecast_silverkite, where “period”, “order” and “seas_names” are optional. This restriction facilitates appropriate computation of component effects (e.g. trend, seasonality, and holidays). See the Notes section in this docstring for a more detailed explanation with examples.

Other parameters are the same as in forecast.

If this Estimator is called from forecast_pipeline, train_test_thresh and training_fraction should almost always be None, because train/test is handled outside this Estimator.
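The default fs_components_df shown in the class signature above can be reconstructed explicitly with pandas; note the unique seas_names values required by validate_fs_components_df:

```python
import pandas as pd

# The default Fourier-series specification from the SilverkiteEstimator
# signature: time-of-day (daily), time-of-week (weekly), and
# continuous-year (yearly) components.
fs_components_df = pd.DataFrame({
    "name": ["tod", "tow", "conti_year"],
    "period": [24.0, 7.0, 1.0],
    "order": [3, 3, 5],
    "seas_names": ["daily", "weekly", "yearly"],
})
```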

The attributes are the same as BaseSilverkiteEstimator.

See also

BaseSilverkiteEstimator

For details on fit, predict, and component plots.

SilverkiteForecast

Functions performing the fit and predict.

validate_inputs()[source]

Validates the inputs to SilverkiteEstimator.

fit(X, y=None, time_col='ts', value_col='y', **fit_params)[source]

Fits Silverkite forecast model.

Parameters
  • X (pandas.DataFrame) – Input timeseries, with timestamp column, value column, and any additional regressors. The value column is the response, included in X to allow transformation by sklearn.pipeline.

  • y (ignored) – The original timeseries values, ignored. (The y for fitting is included in X).

  • time_col (str) – Time column name in X.

  • value_col (str) – Value column name in X.

  • fit_params (dict) – additional parameters for null model.

static validate_fs_components_df(fs_components_df)[source]

Validates the inputs of a Fourier series components dataframe. Called by SilverkiteEstimator to validate the input fs_components_df.

Parameters

fs_components_df (pandas.DataFrame) –

A DataFrame with information about fourier series generation. Must contain columns with following names:

  • ”name”: name of the timeseries feature (e.g. “tod”, “tow” etc.)

  • ”period”: Period of the fourier series

  • ”order”: Order of the fourier series

  • ”seas_names”: seas_name corresponding to the name (e.g. “daily”, “weekly” etc.).

finish_fit()

Makes important values of self.model_dict conveniently accessible.

To be called by subclasses at the end of their fit method. Sets {pred_cols, feature_cols, and coef_}.

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

dict

plot_seasonalities(title=None)

Convenience function to plot the data and the seasonality components.

Parameters

title (str, optional, default None) – Plot title.

Returns

fig – Figure.

Return type

plotly.graph_objs.Figure

plot_trend(title=None)

Convenience function to plot the data and the trend component.

Parameters

title (str, optional, default None) – Plot title.

Returns

fig – Figure.

Return type

plotly.graph_objs.Figure

plot_trend_changepoint_detection(params=None)

Convenience function to plot the original trend changepoint detection results.

Parameters

params (dict or None, default None) –

The parameters in plot. If set to None, all components will be plotted.

Note: plotting seasonality components is not currently supported; the plot parameter must be False.

Returns

fig – Figure.

Return type

plotly.graph_objs.Figure

property pred_category

A dictionary that stores the predictor names in each category.

This property is not initialized until used. This speeds up the fitting process. The categories include:

  • “intercept” : the intercept.

  • “time_features” : the predictors that include TIME_FEATURES but not SEASONALITY_REGEX.

  • “event_features” : the predictors that include EVENT_PREFIX.

  • “trend_features” : the predictors that include TREND_REGEX but not SEASONALITY_REGEX.

  • “seasonality_features” : the predictors that include SEASONALITY_REGEX.

  • “lag_features” : the predictors that include LAG_REGEX.

  • “regressor_features” : external regressors and other predictors manually passed to extra_pred_cols, but not in the categories above.

  • “interaction_features” : the predictors that include interaction terms, i.e., including a colon.

Note that each predictor falls into at least one category. Some “time_features” may also be “trend_features”. Predictors with an interaction are classified into all categories matched by the interaction components. Thus, “interaction_features” are already included in the other categories.

predict(X, y=None)

Creates forecast for the dates specified in X.

Parameters
  • X (pandas.DataFrame) – Input timeseries with timestamp column and any additional regressors. Timestamps are the dates for prediction. Value column, if provided in X, is ignored.

  • y (ignored.) –

Returns

predictions

Forecasted values for the dates in X. Columns:

  • TIME_COL: dates

  • PREDICTED_COL: predictions

  • PREDICTED_LOWER_COL: lower bound of predictions, optional

  • PREDICTED_UPPER_COL: upper bound of predictions, optional

  • [other columns], optional

PREDICTED_LOWER_COL and PREDICTED_UPPER_COL are present if self.coverage is not None.

Return type

pandas.DataFrame

score(X, y, sample_weight=None)

Default scorer for the estimator (used in GridSearchCV/RandomizedSearchCV if scoring=None).

Notes

If null_model_params is not None, returns R2_null_model_score of model error relative to null model, evaluated by score_func.

If null_model_params is None, returns score_func of the model itself.

By default, grid search (with no scoring parameter) optimizes improvement of score_func against null model.

To optimize a different score function, pass scoring to GridSearchCV/RandomizedSearchCV.

Parameters
Returns

score – Comparison of predictions against null predictions, according to specified score function

Return type

float or None

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

estimator instance

summary(max_colwidth=20)

Creates a human-readable string of how the model works, including relevant diagnostics. These details cannot be extracted from the forecast alone. Prints the model configuration. Extend this in a child class to print the trained model parameters.

Log message is printed to the cst.LOGGER_NAME logger.

class greykite.sklearn.estimator.base_silverkite_estimator.BaseSilverkiteEstimator(silverkite: greykite.algo.forecast.silverkite.forecast_silverkite.SilverkiteForecast = <greykite.algo.forecast.silverkite.forecast_silverkite.SilverkiteForecast object>, silverkite_diagnostics: greykite.algo.forecast.silverkite.silverkite_diagnostics.SilverkiteDiagnostics = <greykite.algo.forecast.silverkite.silverkite_diagnostics.SilverkiteDiagnostics object>, score_func: callable = <function mean_squared_error>, coverage: float = None, null_model_params: Optional[Dict] = None, uncertainty_dict: Optional[Dict] = None)[source]

A base class for forecast estimators that fit using forecast.

Notes

Allows estimators that fit using forecast to share the same functions for input data validation, fit postprocessing, predict, summary, plot_components, etc.

Subclasses should:

  • Implement their own __init__ that uses a superset of the parameters here.

  • Implement their own fit, with this sequence of steps:

    • calls super().fit

    • calls SilverkiteForecast.forecast or SimpleSilverkiteForecast.forecast_simple and stores the result in self.model_dict

    • calls super().finish_fit

Uses coverage to set prediction band width. Even though coverage is not needed by forecast_silverkite, it is included in every BaseForecastEstimator to be used universally for forecast evaluation.

Therefore, uncertainty_dict must be consistent with coverage if provided as a dictionary. If uncertainty_dict is None or “auto”, an appropriate default value is set, according to coverage.
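The consistency requirement can be sketched as follows; the key names mirror the "simple_conditional_residuals" style used by Silverkite, but treat them as assumptions rather than the exact library schema:

```python
# For coverage = 0.95, a consistent uncertainty_dict should use the
# symmetric quantiles [0.025, 0.975], whose span equals the coverage.
coverage = 0.95
alpha = (1.0 - coverage) / 2.0
uncertainty_dict = {
    "uncertainty_method": "simple_conditional_residuals",  # assumed method name
    "params": {"quantiles": [alpha, 1.0 - alpha]},
}
lo, hi = uncertainty_dict["params"]["quantiles"]
implied_coverage = hi - lo
```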

Parameters
  • score_func (callable, optional, default mean_squared_error) – See BaseForecastEstimator.

  • coverage (float between [0.0, 1.0] or None, optional) – See BaseForecastEstimator.

  • null_model_params (dict, optional) – Dictionary with arguments to define DummyRegressor null model, default is None. See BaseForecastEstimator.

  • uncertainty_dict (dict or str or None, optional) – How to fit the uncertainty model. See forecast. Note that this is allowed to be “auto”. If None or “auto”, will be set to a default value by coverage before calling forecast_silverkite.

silverkite

The silverkite algorithm instance used for forecasting

Type

Class or a derived class of SilverkiteForecast

silverkite_diagnostics

The silverkite class used for plotting and generating model summary.

Type

Class or a derived class of SilverkiteDiagnostics

model_dict

A dict with fitted model and its attributes. The output of forecast.

Type

dict or None

pred_cols

Names of the features used in the model.

Type

list [str] or None

feature_cols

Column names of the patsy design matrix built by design_mat_from_formula.

Type

list [str] or None

df

The training data used to fit the model.

Type

pandas.DataFrame or None

coef_

Estimated coefficient matrix for the model. Not available for random forest and gradient boosting methods and set to the default value None.

Type

pandas.DataFrame or None

_pred_category

A dictionary with keys being the predictor category and values being the predictors belonging to the category. For details, see pred_category.

Type

dict or None

extra_pred_cols

User provided extra predictor names, for details, see SimpleSilverkiteEstimator or SilverkiteEstimator.

Type

list or None

forecast

Output of predict_silverkite, set by self.predict.

Type

pandas.DataFrame or None

model_summary

The ModelSummary class.

Type

class or None

See also

SilverkiteForecast

Function performing the fit and predict.

Notes

The subclasses will pass fs_components_df to forecast_silverkite. The model terms it creates internally are used to generate the component plots.

  • fourier_series_multi_fcn uses fs_components_df["names"] (e.g. tod, tow) to build the fourier series and to create column names.

  • fs_components_df["seas_names"] (e.g. daily, weekly) is appended to the column names, if provided.

plot_silverkite_components groups based on fs_components_df["seas_names"] passed to forecast_silverkite during fit. E.g. any column containing daily is added to daily seasonality effect. The reason is as follows:

1. User can provide tow and str_dow for weekly seasonality. These should be aggregated, and we can do that only based on “seas_names”.

2. yearly and quarterly seasonality both use ct1 as the “names” column. The only way to distinguish those effects is via “seas_names”.

3. ct1 is also used for growth. If it is interacted with seasonality, the columns become indistinguishable without “seas_names”.

Additionally, the function sets yaxis labels based on seas_names: daily as ylabel is much more informative than tod as ylabel in component plots.

fit(X, y=None, time_col='ts', value_col='y', **fit_params)[source]

Pre-processing before fitting Silverkite forecast model.

Parameters
  • X (pandas.DataFrame) – Input timeseries, with timestamp column, value column, and any additional regressors. The value column is the response, included in X to allow transformation by sklearn.pipeline.

  • y (ignored) – The original timeseries values, ignored. (The y for fitting is included in X).

  • time_col (str) – Time column name in X.

  • value_col (str) – Value column name in X.

  • fit_params (dict) – additional parameters for null model.

Notes

Subclasses are expected to call this at the beginning of their fit method, before calling forecast.

finish_fit()[source]

Makes important values of self.model_dict conveniently accessible.

To be called by subclasses at the end of their fit method. Sets {pred_cols, feature_cols, and coef_}.

predict(X, y=None)[source]

Creates forecast for the dates specified in X.

Parameters
  • X (pandas.DataFrame) – Input timeseries with timestamp column and any additional regressors. Timestamps are the dates for prediction. Value column, if provided in X, is ignored.

  • y (ignored.) –

Returns

predictions

Forecasted values for the dates in X. Columns:

  • TIME_COL: dates

  • PREDICTED_COL: predictions

  • PREDICTED_LOWER_COL: lower bound of predictions, optional

  • PREDICTED_UPPER_COL: upper bound of predictions, optional

  • [other columns], optional

PREDICTED_LOWER_COL and PREDICTED_UPPER_COL are present if self.coverage is not None.

Return type

pandas.DataFrame

property pred_category

A dictionary that stores the predictor names in each category.

This property is not initialized until used. This speeds up the fitting process. The categories include:

  • “intercept” : the intercept.

  • “time_features” : the predictors that include TIME_FEATURES but not SEASONALITY_REGEX.

  • “event_features” : the predictors that include EVENT_PREFIX.

  • “trend_features” : the predictors that include TREND_REGEX but not SEASONALITY_REGEX.

  • “seasonality_features” : the predictors that include SEASONALITY_REGEX.

  • “lag_features” : the predictors that include LAG_REGEX.

  • “regressor_features” : external regressors and other predictors manually passed to extra_pred_cols, but not in the categories above.

  • “interaction_features” : the predictors that include interaction terms, i.e., including a colon.

Note that each predictor falls into at least one category. Some “time_features” may also be “trend_features”. Predictors with an interaction are classified into all categories matched by the interaction components. Thus, “interaction_features” are already included in the other categories.

summary(max_colwidth=20)[source]

Creates a human-readable string of how the model works, including relevant diagnostics. These details cannot be extracted from the forecast alone. Prints the model configuration. Extend this in a child class to print the trained model parameters.

Log message is printed to the cst.LOGGER_NAME logger.

plot_trend(title=None)[source]

Convenience function to plot the data and the trend component.

Parameters

title (str, optional, default None) – Plot title.

Returns

fig – Figure.

Return type

plotly.graph_objs.Figure

plot_seasonalities(title=None)[source]

Convenience function to plot the data and the seasonality components.

Parameters

title (str, optional, default None) – Plot title.

Returns

fig – Figure.

Return type

plotly.graph_objs.Figure

plot_trend_changepoint_detection(params=None)[source]

Convenience function to plot the original trend changepoint detection results.

Parameters

params (dict or None, default None) –

The parameters in plot. If set to None, all components will be plotted.

Note: plotting seasonality components is not currently supported; the plot parameter must be False.

Returns

fig – Figure.

Return type

plotly.graph_objs.Figure

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

dict

score(X, y, sample_weight=None)

Default scorer for the estimator (used in GridSearchCV/RandomizedSearchCV if scoring=None).

Notes

If null_model_params is not None, returns R2_null_model_score of model error relative to null model, evaluated by score_func.

If null_model_params is None, returns score_func of the model itself.

By default, grid search (with no scoring parameter) optimizes improvement of score_func against null model.

To optimize a different score function, pass scoring to GridSearchCV/RandomizedSearchCV.

Parameters
Returns

score – Comparison of predictions against null predictions, according to specified score function

Return type

float or None

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

estimator instance

class greykite.framework.templates.simple_silverkite_template_config.SimpleSilverkiteTemplateOptions(freq: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_FREQ = <SILVERKITE_FREQ.DAILY: 'DAILY'>, seas: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_SEAS = <SILVERKITE_SEAS.LT: 'LT'>, gr: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_GR = <SILVERKITE_GR.LINEAR: 'LINEAR'>, cp: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_CP = <SILVERKITE_CP.NONE: 'NONE'>, hol: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_HOL = <SILVERKITE_HOL.NONE: 'NONE'>, feaset: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_FEASET = <SILVERKITE_FEASET.OFF: 'OFF'>, algo: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_ALGO = <SILVERKITE_ALGO.LINEAR: 'LINEAR'>, ar: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_AR = <SILVERKITE_AR.OFF: 'OFF'>, dsi: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_DSI = <SILVERKITE_DSI.AUTO: 'AUTO'>, wsi: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_WSI = <SILVERKITE_WSI.AUTO: 'AUTO'>)[source]

Defines generic simple silverkite template options.

Attributes can be set to different values using SILVERKITE_COMPONENT_KEYWORDS for high level tuning.

freq represents data frequency.

The other attributes stand for seasonality, growth, changepoints_dict, events, feature_sets_enabled, fit_algorithm and autoregression in ModelComponentsParam, which are used in SimpleSilverkiteTemplate.

freq: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_FREQ = 'DAILY'

Valid values for simple silverkite template string name frequency. See SILVERKITE_FREQ.

seas: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_SEAS = 'LT'

Valid values for simple silverkite template string name seasonality. See SILVERKITE_SEAS.

gr: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_GR = 'LINEAR'

Valid values for simple silverkite template string name growth. See SILVERKITE_GR.

cp: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_CP = 'NONE'

Valid values for simple silverkite template string name changepoints. See SILVERKITE_CP.

hol: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_HOL = 'NONE'

Valid values for simple silverkite template string name holiday. See SILVERKITE_HOL.

feaset: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_FEASET = 'OFF'

Valid values for simple silverkite template string name feature sets enabled. See SILVERKITE_FEASET.

algo: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_ALGO = 'LINEAR'

Valid values for simple silverkite template string name fit algorithm. See SILVERKITE_ALGO.

ar: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_AR = 'OFF'

Valid values for simple silverkite template string name autoregression. See SILVERKITE_AR.

dsi: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_DSI = 'AUTO'

Valid values for simple silverkite template string name max daily seasonality interaction order. See SILVERKITE_DSI.

wsi: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_WSI = 'AUTO'

Valid values for simple silverkite template string name max weekly seasonality interaction order. See SILVERKITE_WSI.

class greykite.framework.templates.silverkite_template.SilverkiteTemplate[source]

A template for SilverkiteEstimator.

Takes input data and optional configuration parameters to customize the model. Returns a set of parameters to call forecast_pipeline.

Notes

The attributes of a ForecastConfig for SilverkiteEstimator are:

computation_param: ComputationParam or None, default None

How to compute the result. See ComputationParam.

coverage: float or None, default None

Intended coverage of the prediction bands (0.0 to 1.0). Same as coverage in forecast_pipeline. You may tune how the uncertainty is computed via model_components.uncertainty[“uncertainty_dict”].

evaluation_metric_param: EvaluationMetricParam or None, default None

What metrics to evaluate. See EvaluationMetricParam.

evaluation_period_param: EvaluationPeriodParam or None, default None

How to split data for evaluation. See EvaluationPeriodParam.

forecast_horizon: int or None, default None

Number of periods to forecast into the future. Must be > 0. If None, the default is determined from the input data frequency. Same as forecast_horizon in forecast_pipeline.

metadata_param: MetadataParam or None, default None

Information about the input data. See MetadataParam.

model_components_param: ModelComponentsParam or None, default None

Parameters to tune the model. See ModelComponentsParam. The fields are dictionaries with the following items.

See inline comments on which values accept lists for grid search.

seasonality: dict [str, any] or None, optional

How to model the seasonality. A dictionary with keys corresponding to parameters in forecast.

Allowed keys: "fs_components_df".

growth: dict [str, any] or None, optional

How to model the growth.

Allowed keys: None. (Use model_components.custom["extra_pred_cols"] to specify growth terms.)

events: dict [str, any] or None, optional

How to model the holidays/events. A dictionary with keys corresponding to parameters in forecast.

Allowed keys: "daily_event_df_dict".

Note

Event names derived from daily_event_df_dict must be specified via model_components.custom["extra_pred_cols"] to be included in the model. This parameter has no effect on the model unless event names are passed to extra_pred_cols.

The function get_event_pred_cols can be used to extract all event names from daily_event_df_dict.
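A hypothetical daily_event_df_dict sketch (the event name and the "date" column label are illustrative assumptions, not the exact schema):

```python
import pandas as pd

# One named event with its occurrence dates. Column naming here is an
# assumption for illustration; check the Silverkite docs for the schema
# expected by daily_event_df_dict.
daily_event_df_dict = {
    "product_launch": pd.DataFrame(
        {"date": pd.to_datetime(["2023-03-01", "2024-03-01"])}
    ),
}
# These events only enter the model if their names are also passed via
# model_components.custom["extra_pred_cols"].
```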

changepoints: dict [str, any] or None, optional

How to model changes in trend and seasonality. A dictionary with keys corresponding to parameters in forecast.

Allowed keys: “changepoints_dict”, “seasonality_changepoints_dict”, “changepoint_detector”.

autoregression: dict [str, any] or None, optional

Specifies the autoregression configuration. Dictionary with the following optional key:

"autoreg_dict"dict or None or a list of such values for grid search

A dictionary with arguments for build_autoreg_df. That function’s parameter value_col is inferred from the input of current function forecast_silverkite. Other keys are:

  • "lag_dict" : dict or None

  • "agg_lag_dict" : dict or None

  • "series_na_fill_func" : callable

See more details for above parameters in build_autoreg_df.
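
A sketch of one such dictionary, written as plain Python (the key names follow build_autoreg_df, but the specific values are illustrative assumptions, not recommendations):

```python
# Illustrative "autoreg_dict" (values are assumptions, not recommendations).
autoreg_dict = {
    "lag_dict": {"orders": [1, 2, 3]},    # individual lags: y(t-1), y(t-2), y(t-3)
    "agg_lag_dict": {
        "orders_list": [[7, 14, 21]],     # average of the listed lags
        "interval_list": [(1, 7)],        # average over a lag interval
    },
    "series_na_fill_func": lambda s: s.bfill().ffill(),  # how to fill missing values
}

autoregression = {"autoreg_dict": autoreg_dict}
```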

regressors: dict [str, any] or None, optional

How to model the regressors.

Allowed keys: None. (Use model_components.custom["extra_pred_cols"] to specify regressors.)

uncertainty: dict [str, any] or None, optional

How to model the uncertainty. A dictionary with keys corresponding to parameters in forecast.

Allowed keys: "uncertainty_dict".

custom: dict [str, any] or None, optional

Custom parameters that don’t fit the categories above. A dictionary with keys corresponding to parameters in forecast.

Allowed keys: "silverkite", "silverkite_diagnostics", "origin_for_time_vars", "extra_pred_cols", "fit_algorithm_dict", "min_admissible_value", "max_admissible_value".

Note

"extra_pred_cols" should contain the desired growth terms, regressor names, and event names.

fit_algorithm_dict is a dictionary with fit_algorithm and fit_algorithm_params parameters to forecast:

fit_algorithm_dict: dict or None, optional

How to fit the model. A dictionary with the following optional keys.

"fit_algorithm"str, optional, default “linear”

The type of predictive model used in fitting.

See fit_model_via_design_matrix for available options and their parameters.

"fit_algorithm_params"dict or None, optional, default None

Parameters passed to the requested fit_algorithm. If None, uses the defaults in fit_model_via_design_matrix.
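
For example, a "custom" dictionary selecting ridge regression might look like the following sketch (the "ridge" option and the alpha parameter are assumptions; consult fit_model_via_design_matrix for the supported algorithms and their parameters):

```python
# Illustrative "custom" component (plain dict; values are assumptions).
custom = {
    "fit_algorithm_dict": {
        "fit_algorithm": "ridge",                # instead of the default "linear"
        "fit_algorithm_params": {"alpha": 0.5},  # passed to the fit algorithm
    },
    "extra_pred_cols": ["ct1"],                  # e.g. a growth term
}
```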

hyperparameter_override: dict [str, any] or None or list [dict [str, any] or None], optional

After the above model components are used to create a hyperparameter grid, the result is updated by this dictionary, to create new keys or override existing ones. Allows for complete customization of the grid search.

Keys should have format {named_step}__{parameter_name} for the named steps of the sklearn.pipeline.Pipeline returned by this function. See sklearn.pipeline.Pipeline.

For example:

hyperparameter_override={
    "estimator__origin_for_time_vars": 2018.0,
    "input__response__null__impute_algorithm": "ts_interpolate",
    "input__response__null__impute_params": {"orders": [7, 14]},
    "input__regressors_numeric__normalize__normalize_algorithm": "RobustScaler",
}

If a list of dictionaries, grid search will be done for each dictionary in the list. Each dictionary in the list overrides the defaults. This enables grid search over specific combinations of parameters to reduce the search space.

  • For example, the first dictionary could define combinations of parameters for a “complex” model, and the second dictionary could define combinations of parameters for a “simple” model, to prevent mixed combinations of simple and complex.

  • Or the first dictionary could grid search over fit algorithm, and the second dictionary could use a single fit algorithm and grid search over seasonality.

The result is passed as the param_distributions parameter to sklearn.model_selection.RandomizedSearchCV.
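
The list form can be sketched as follows (the step and parameter names echo the example above; the particular values are illustrative assumptions):

```python
# Two disjoint search spaces: each dict is expanded separately, so the
# "simple" settings are never mixed with the "complex" ones.
hyperparameter_override = [
    {   # "complex" candidates: grid over imputation orders
        "input__response__null__impute_algorithm": ["ts_interpolate"],
        "input__response__null__impute_params": [{"orders": [7]}, {"orders": [7, 14]}],
    },
    {   # "simple" candidate: a single fixed configuration
        "input__response__null__impute_algorithm": ["ts_interpolate"],
        "input__response__null__impute_params": [{"orders": [7]}],
    },
]
```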

model_template: str

This class only accepts “SK”.

DEFAULT_MODEL_TEMPLATE = 'SK'

The default model template. See ModelTemplateEnum. Uses a string to avoid circular imports. Overrides the value from ForecastConfigDefaults.

property allow_model_template_list

SilverkiteTemplate does not allow config.model_template to be a list.

property allow_model_components_param_list

SilverkiteTemplate does not allow config.model_components_param to be a list.

get_regressor_cols()[source]

Returns regressor column names.

Implements the method in BaseTemplate.

The intersection of extra_pred_cols from model components and self.df columns, excluding time_col and value_col.

Returns

regressor_cols – See forecast_pipeline.

Return type

list [str] or None

get_hyperparameter_grid()[source]

Returns hyperparameter grid.

Implements the method in BaseTemplate.

Uses self.time_properties and self.config to generate the hyperparameter grid.

Converts model components and time properties into SilverkiteEstimator hyperparameters.

Notes

forecast_pipeline handles the train/test splits according to EvaluationPeriodParam, so estimator__train_test_thresh and estimator__training_fraction are always None.

estimator__changepoint_detector is always None, to prevent leaking future information into the past. Pass changepoints_dict with method=”auto” for automatic detection.

Returns

hyperparameter_grid – See forecast_pipeline. The output dictionary values are lists, combined in grid search.

Return type

dict, list [dict] or None
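
The returned grid has the shape sketched below: a plain dict whose values are lists, keyed by "estimator__<param>" (the specific entries are illustrative):

```python
# Shape of a hyperparameter grid: keys use the "estimator__" prefix and
# every value is a list of candidates for grid search.
hyperparameter_grid = {
    "estimator__origin_for_time_vars": [2018.0],
    "estimator__fit_algorithm_dict": [{"fit_algorithm": "linear"}],
    "estimator__train_test_thresh": [None],  # always None: forecast_pipeline
    "estimator__training_fraction": [None],  # handles train/test splits
}
```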

static apply_computation_defaults(computation: Optional[greykite.framework.templates.autogen.forecast_config.ComputationParam] = None) → greykite.framework.templates.autogen.forecast_config.ComputationParam

Applies the default ComputationParam values to the given object. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input object is None, it creates a ComputationParam object.

Parameters

computation (ComputationParam or None) – The ComputationParam object.

Returns

computation – Valid ComputationParam object with the provided attribute values and the default attribute values if not.

Return type

ComputationParam

static apply_evaluation_metric_defaults(evaluation: Optional[greykite.framework.templates.autogen.forecast_config.EvaluationMetricParam] = None) → greykite.framework.templates.autogen.forecast_config.EvaluationMetricParam

Applies the default EvaluationMetricParam values to the given object. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input object is None, it creates an EvaluationMetricParam object.

Parameters

evaluation (EvaluationMetricParam or None) – The EvaluationMetricParam object.

Returns

evaluation – Valid EvaluationMetricParam object with the provided attribute values and the default attribute values if not.

Return type

EvaluationMetricParam

static apply_evaluation_period_defaults(evaluation: Optional[greykite.framework.templates.autogen.forecast_config.EvaluationPeriodParam] = None) → greykite.framework.templates.autogen.forecast_config.EvaluationPeriodParam

Applies the default EvaluationPeriodParam values to the given object. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input object is None, it creates an EvaluationPeriodParam object.

Parameters

evaluation (EvaluationPeriodParam or None) – The EvaluationPeriodParam object.

Returns

evaluation – Valid EvaluationPeriodParam object with the provided attribute values and the default attribute values if not.

Return type

EvaluationPeriodParam

apply_forecast_config_defaults(config: Optional[greykite.framework.templates.autogen.forecast_config.ForecastConfig] = None) → greykite.framework.templates.autogen.forecast_config.ForecastConfig

Applies the default Forecast Config values to the given config. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input config is None, it creates a Forecast Config.

Parameters

config (ForecastConfig or None) – Forecast configuration if available. See ForecastConfig.

Returns

config – A valid Forecast Config which contains the provided attribute values and the default attribute values if not.

Return type

ForecastConfig

static apply_metadata_defaults(metadata: Optional[greykite.framework.templates.autogen.forecast_config.MetadataParam] = None) → greykite.framework.templates.autogen.forecast_config.MetadataParam

Applies the default MetadataParam values to the given object. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input object is None, it creates a MetadataParam object.

Parameters

metadata (MetadataParam or None) – The MetadataParam object.

Returns

metadata – Valid MetadataParam object with the provided attribute values and the default attribute values if not.

Return type

MetadataParam

static apply_model_components_defaults(model_components: Optional[Union[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam, List[Optional[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam]]]] = None) → Union[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam, List[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam]]

Applies the default ModelComponentsParam values to the given object.

Converts None to a ModelComponentsParam object. Unpacks a list of a single element to the element itself.

Parameters

model_components (ModelComponentsParam or None or list of such items) – The ModelComponentsParam object.

Returns

model_components – Valid ModelComponentsParam object with the provided attribute values and the default attribute values if not.

Return type

ModelComponentsParam or list of such items

apply_model_template_defaults(model_template: Optional[Union[str, List[Optional[str]]]] = None) → Union[str, List[str]]

Applies the default model template value to the given object.

Unpacks a list of a single element to the element itself. Sets default value if None.

Parameters

model_template (str or None or list [None, str]) – The model template name. See valid names in ModelTemplateEnum.

Returns

model_template – The model template name, with the default value used if not provided.

Return type

str or list [str]

apply_template_for_pipeline_params(df: pandas.core.frame.DataFrame, config: Optional[greykite.framework.templates.autogen.forecast_config.ForecastConfig] = None) → Dict[source]

Explicitly calls the method in BaseTemplate to make use of the decorator in this class.

Parameters
  • df (pandas.DataFrame) – The time series dataframe with time_col and value_col and optional regressor columns.

  • config (ForecastConfig) – The ForecastConfig that includes model training parameters.

Returns

pipeline_parameters – The pipeline parameters consumable by forecast_pipeline.

Return type

dict

property estimator

The estimator instance to use as the final step in the pipeline. An instance of BaseForecastEstimator.

get_forecast_time_properties()

Returns forecast time parameters.

Uses self.df, self.config, self.regressor_cols.

Available parameters:

  • self.df

  • self.config

  • self.score_func

  • self.score_func_greater_is_better

  • self.regressor_cols

  • self.estimator

  • self.pipeline

Returns

time_properties – Time properties dictionary (likely produced by get_forecast_time_properties) with keys:

"period"int

Period of each observation (i.e. minimum time between observations, in seconds).

"simple_freq"SimpleTimeFrequencyEnum

SimpleTimeFrequencyEnum member corresponding to data frequency.

"num_training_points"int

Number of observations for training.

"num_training_days"int

Number of days for training.

"start_year"int

Start year of the training period.

"end_year"int

End year of the forecast period.

"origin_for_time_vars"float

Continuous time representation of the first date in df.

Return type

dict [str, any] or None, default None

get_pipeline()

Returns pipeline.

Implementation may be overridden by subclass if a different pipeline is desired.

Uses self.estimator, self.score_func, self.score_func_greater_is_better, self.config, self.regressor_cols.

Available parameters:

  • self.df

  • self.config

  • self.score_func

  • self.score_func_greater_is_better

  • self.regressor_cols

  • self.estimator

Returns

pipeline – See forecast_pipeline.

Return type

sklearn.pipeline.Pipeline

static apply_template_decorator(func)[source]

Decorator for apply_template_for_pipeline_params function.

Overrides the method in BaseTemplate.

Raises

ValueError if config.model_template != "SK"

Prophet Template

class greykite.framework.templates.prophet_template.ProphetTemplate(estimator: greykite.sklearn.estimator.base_forecast_estimator.BaseForecastEstimator = ProphetEstimator())[source]

A template for ProphetEstimator.

Takes input data and optional configuration parameters to customize the model. Returns a set of parameters to call forecast_pipeline.

Notes

The attributes of a ForecastConfig for ProphetEstimator are:

computation_param: ComputationParam or None, default None

How to compute the result. See ComputationParam.

coverage: float or None, default None

Intended coverage of the prediction bands (0.0 to 1.0). If None, the upper/lower predictions are not returned. Same as coverage in forecast_pipeline.

evaluation_metric_param: EvaluationMetricParam or None, default None

What metrics to evaluate. See EvaluationMetricParam.

evaluation_period_param: EvaluationPeriodParam or None, default None

How to split data for evaluation. See EvaluationPeriodParam.

forecast_horizon: int or None, default None

Number of periods to forecast into the future. Must be > 0. If None, the default is determined from the input data frequency. Same as forecast_horizon in forecast_pipeline.

metadata_param: MetadataParam or None, default None

Information about the input data. See MetadataParam.

model_components_param: ModelComponentsParam or None, default None

Parameters to tune the model. See ModelComponentsParam. The fields are dictionaries with the following items.

seasonality: dict [str, any] or None

Seasonality config dictionary, with the following optional keys.

"seasonality_mode": str or None or list of such values for grid search

Can be ‘additive’ (default) or ‘multiplicative’.

"seasonality_prior_scale": float or None or list of such values for grid search

Parameter modulating the strength of the seasonality model. Larger values allow the model to fit larger seasonal fluctuations, smaller values dampen the seasonality. Specify for individual seasonalities using add_seasonality_dict.

"yearly_seasonality": str or bool or int or list of such values for grid search, default ‘auto’

Determines the yearly seasonality Can be ‘auto’, True, False, or a number of Fourier terms to generate.

"weekly_seasonality": str or bool or int or list of such values for grid search, default ‘auto’

Determines the weekly seasonality Can be ‘auto’, True, False, or a number of Fourier terms to generate.

"daily_seasonality": str or bool or int or list of such values for grid search, default ‘auto’

Determines the daily seasonality Can be ‘auto’, True, False, or a number of Fourier terms to generate.

"add_seasonality_dict": dict or None or list of such values for grid search

Dictionary of custom seasonality parameters to be added to the model, default None. The key is the seasonality component name, e.g. ‘monthly’; parameters are specified via dict. See ProphetEstimator for details.

growth: dict [str, any] or None

Specifies the growth parameter configuration. Dictionary with the following optional key:

"growth_term": str or None or list of such values for grid search

How to model the growth. Valid options are “linear” and “logistic”, specifying a linear or logistic trend; these terms have their origin at the train start date.

events: dict [str, any] or None

Holiday/events configuration dictionary with the following optional keys:

"holiday_lookup_countries": list [str] or “auto” or None

Which countries’ holidays to include. Must contain all the holidays you intend to model. If “auto”, uses default list of countries with large contribution to Internet traffic. If None or an empty list, no holidays are modeled.

"holidays_prior_scale": float or None or list of such values for grid search, default 10.0

Modulates the strength of the holiday effect.

"holiday_pre_num_days": list [int] or None, default 2

Model holiday effects for holiday_pre_num_days days before the holiday. Grid search is not supported. Must be a list with one element or None.

"holiday_post_num_days": list [int] or None, default 2

Model holiday effects for holiday_post_num_days days after the holiday. Grid search is not supported. Must be a list with one element or None.
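
Putting these keys together, an events dictionary might look like the following sketch (the particular values are illustrative, not recommendations):

```python
# Illustrative Prophet "events" component.
events = {
    "holiday_lookup_countries": ["UnitedStates", "India"],  # explicit list instead of "auto"
    "holidays_prior_scale": [5.0, 15.0],  # list of values -> grid search
    "holiday_pre_num_days": [2],          # one-element list; grid search not supported
    "holiday_post_num_days": [2],
}
```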

changepoints: dict [str, any] or None

Specifies the changepoint configuration. Dictionary with the following optional keys:

"changepoint_prior_scale"float or None or list of such values for grid search, default 0.05

Parameter modulating the flexibility of the automatic changepoint selection. Large values will allow many changepoints, small values will allow few changepoints.

"changepoints"list [datetime.datetime] or None or list of such values for grid search, default None

List of dates at which to include potential changepoints. If not specified, potential changepoints are selected automatically.

"n_changepoints"int or None or list of such values for grid search, default 25

Number of potential changepoints to include. Not used if input changepoints is supplied. If changepoints is not supplied, then n_changepoints potential changepoints are selected uniformly from the first changepoint_range proportion of the history.

"changepoint_range"float or None or list of such values for grid search, default 0.8

Proportion of history in which trend changepoints will be estimated. Permitted values: (0,1] Not used if input changepoints is supplied.
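
For example, a sketch with illustrative values:

```python
# Illustrative Prophet "changepoints" component.
changepoints = {
    "changepoint_prior_scale": [0.05, 0.5],  # grid search over flexibility
    "changepoints": [None],                  # let Prophet pick candidates
    "n_changepoints": [25],
    "changepoint_range": [0.8],              # estimate within the first 80% of history
}
```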

regressors: dict [str, any] or None

Specifies the regressors to include in the model (e.g. macro-economic factors). Dictionary with the following optional keys:

"add_regressor_dict"dict or None or list of such values for grid search, default None

Dictionary of extra regressors to be modeled. See ProphetEstimator for details.

uncertainty: dict [str, any] or None

Specifies the uncertainty configuration. A dictionary with the following optional keys:

"mcmc_samples"int or None or list of such values for grid search, default 0

if greater than 0, will do full Bayesian inference with the specified number of MCMC samples. If 0, will do MAP estimation.

"uncertainty_samples"int or None or list of such values for grid search, default 1000

Number of simulated draws used to estimate uncertainty intervals. Setting this value to 0 or False will disable uncertainty estimation and speed up the calculation.

hyperparameter_override: dict [str, any] or None or list [dict [str, any] or None]

After the above model components are used to create a hyperparameter grid, the result is updated by this dictionary, to create new keys or override existing ones. Allows for complete customization of the grid search.

Keys should have format {named_step}__{parameter_name} for the named steps of the sklearn.pipeline.Pipeline returned by this function. See sklearn.pipeline.Pipeline.

For example:

hyperparameter_override={
    "estimator__yearly_seasonality": [True, False],
    "estimator__seasonality_prior_scale": [5.0, 15.0],
    "input__response__null__impute_algorithm": "ts_interpolate",
    "input__response__null__impute_params": {"orders": [7, 14]},
    "input__regressors_numeric__normalize__normalize_algorithm": "RobustScaler",
}

If a list of dictionaries, grid search will be done for each dictionary in the list. Each dictionary in the list overrides the defaults. This enables grid search over specific combinations of parameters to reduce the search space.

  • For example, the first dictionary could define combinations of parameters for a “complex” model, and the second dictionary could define combinations of parameters for a “simple” model, to prevent mixed combinations of simple and complex.

  • Or the first dictionary could grid search over fit algorithm, and the second dictionary could use a single fit algorithm and grid search over seasonality.

The result is passed as the param_distributions parameter to sklearn.model_selection.RandomizedSearchCV.
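
The two-dictionary pattern described above can be sketched as follows (the parameter values are illustrative assumptions):

```python
# Each dict is expanded into its own grid, so additive and multiplicative
# settings are never combined in one candidate.
hyperparameter_override = [
    {   # "complex": multiplicative seasonality, several prior scales
        "estimator__seasonality_mode": ["multiplicative"],
        "estimator__seasonality_prior_scale": [5.0, 10.0, 15.0],
    },
    {   # "simple": additive seasonality with defaults elsewhere
        "estimator__seasonality_mode": ["additive"],
    },
]
```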

autoregression: dict [str, any] or None

Ignored. Prophet template does not support autoregression.

custom: dict [str, any] or None

Ignored. There are no custom options.

model_template: str

This class only accepts “PROPHET”.

DEFAULT_MODEL_TEMPLATE = 'PROPHET'

The default model template. See ModelTemplateEnum. Uses a string to avoid circular imports. Overrides the value from ForecastConfigDefaults.

HOLIDAY_LOOKUP_COUNTRIES_AUTO = ('UnitedStates', 'UnitedKingdom', 'India', 'France', 'China')

Default holiday countries to use if countries="auto".

property allow_model_template_list

ProphetTemplate does not allow config.model_template to be a list.

property allow_model_components_param_list

ProphetTemplate does not allow config.model_components_param to be a list.

get_prophet_holidays(year_list, countries='auto', lower_window=-2, upper_window=2)[source]

Generates holidays for Prophet model.

Parameters
  • year_list (list [int]) – List of years for selecting the holidays across given countries.

  • countries (list [str] or “auto” or None, default “auto”) –

    Countries for selecting holidays.

    • If “auto”, uses Top Countries for internet traffic.

    • If a list, a list of country names.

    • If None, the function returns None.

  • lower_window (int or None, default -2) – Negative integer. Model holiday effects for given number of days before the holiday.

  • upper_window (int or None, default 2) – Positive integer. Model holiday effects for given number of days after the holiday.

Returns

holidays – holidays dataframe to pass to Prophet’s holidays argument.

Return type

pandas.DataFrame
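
The returned dataframe follows the column layout Prophet expects for its holidays argument, shown here as plain records (the dates and holiday names are illustrative):

```python
# Expected column layout of the holidays table (plain records for illustration).
holidays_records = [
    {"holiday": "New Year's Day", "ds": "2023-01-01",
     "lower_window": -2, "upper_window": 2},
    {"holiday": "Christmas Day", "ds": "2023-12-25",
     "lower_window": -2, "upper_window": 2},
]
required_cols = {"holiday", "ds", "lower_window", "upper_window"}
assert all(required_cols <= set(record) for record in holidays_records)
```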

get_regressor_cols()[source]

Returns regressor column names.

Implements the method in BaseTemplate.

Returns

regressor_cols – The names of regressor columns used in any hyperparameter set requested by model_components. None if there are no regressors.

Return type

list [str] or None

apply_prophet_model_components_defaults(model_components=None, time_properties=None)[source]

Sets default values for model_components.

Called by get_hyperparameter_grid after time_properties is defined. Requires time_properties as well as model_components, so we do not simply override apply_model_components_defaults.

Parameters
  • model_components (ModelComponentsParam or None, default None) – Configuration of model growth, seasonality, events, etc. See the docstring of this class for details.

  • time_properties (dict [str, any] or None, default None) –

    Time properties dictionary (likely produced by get_forecast_time_properties) with keys:

    "period"int

    Period of each observation (i.e. minimum time between observations, in seconds).

    "simple_freq"SimpleTimeFrequencyEnum

    SimpleTimeFrequencyEnum member corresponding to data frequency.

    "num_training_points"int

    Number of observations for training.

    "num_training_days"int

    Number of days for training.

    "start_year"int

    Start year of the training period.

    "end_year"int

    End year of the forecast period.

    "origin_for_time_vars"float

    Continuous time representation of the first date in df.

    If None, start_year is set to 2015 and end_year to 2030.

Returns

model_components – The provided model_components with default values set.

Return type

ModelComponentsParam

get_hyperparameter_grid()[source]

Returns hyperparameter grid.

Implements the method in BaseTemplate.

Uses self.time_properties and self.config to generate the hyperparameter grid.

Converts model components and time properties into ProphetEstimator hyperparameters.

Returns

hyperparameter_grid – ProphetEstimator hyperparameters.

See forecast_pipeline. The output dictionary values are lists, combined in grid search.

Return type

dict [str, list [any]] or None

static apply_computation_defaults(computation: Optional[greykite.framework.templates.autogen.forecast_config.ComputationParam] = None) → greykite.framework.templates.autogen.forecast_config.ComputationParam

Applies the default ComputationParam values to the given object. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input object is None, it creates a ComputationParam object.

Parameters

computation (ComputationParam or None) – The ComputationParam object.

Returns

computation – Valid ComputationParam object with the provided attribute values and the default attribute values if not.

Return type

ComputationParam

static apply_evaluation_metric_defaults(evaluation: Optional[greykite.framework.templates.autogen.forecast_config.EvaluationMetricParam] = None) → greykite.framework.templates.autogen.forecast_config.EvaluationMetricParam

Applies the default EvaluationMetricParam values to the given object. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input object is None, it creates an EvaluationMetricParam object.

Parameters

evaluation (EvaluationMetricParam or None) – The EvaluationMetricParam object.

Returns

evaluation – Valid EvaluationMetricParam object with the provided attribute values and the default attribute values if not.

Return type

EvaluationMetricParam

static apply_evaluation_period_defaults(evaluation: Optional[greykite.framework.templates.autogen.forecast_config.EvaluationPeriodParam] = None) → greykite.framework.templates.autogen.forecast_config.EvaluationPeriodParam

Applies the default EvaluationPeriodParam values to the given object. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input object is None, it creates an EvaluationPeriodParam object.

Parameters

evaluation (EvaluationPeriodParam or None) – The EvaluationPeriodParam object.

Returns

evaluation – Valid EvaluationPeriodParam object with the provided attribute values and the default attribute values if not.

Return type

EvaluationPeriodParam

apply_forecast_config_defaults(config: Optional[greykite.framework.templates.autogen.forecast_config.ForecastConfig] = None) → greykite.framework.templates.autogen.forecast_config.ForecastConfig

Applies the default Forecast Config values to the given config. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input config is None, it creates a Forecast Config.

Parameters

config (ForecastConfig or None) – Forecast configuration if available. See ForecastConfig.

Returns

config – A valid Forecast Config which contains the provided attribute values and the default attribute values if not.

Return type

ForecastConfig

static apply_metadata_defaults(metadata: Optional[greykite.framework.templates.autogen.forecast_config.MetadataParam] = None) → greykite.framework.templates.autogen.forecast_config.MetadataParam

Applies the default MetadataParam values to the given object. If an expected attribute value is provided, the value is unchanged. Otherwise the default value for it is used. Other attributes are untouched. If the input object is None, it creates a MetadataParam object.

Parameters

metadata (MetadataParam or None) – The MetadataParam object.

Returns

metadata – Valid MetadataParam object with the provided attribute values and the default attribute values if not.

Return type

MetadataParam

static apply_model_components_defaults(model_components: Optional[Union[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam, List[Optional[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam]]]] = None) → Union[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam, List[greykite.framework.templates.autogen.forecast_config.ModelComponentsParam]]

Applies the default ModelComponentsParam values to the given object.

Converts None to a ModelComponentsParam object. Unpacks a list of a single element to the element itself.

Parameters

model_components (ModelComponentsParam or None or list of such items) – The ModelComponentsParam object.

Returns

model_components – Valid ModelComponentsParam object with the provided attribute values and the default attribute values if not.

Return type

ModelComponentsParam or list of such items

apply_model_template_defaults(model_template: Optional[Union[str, List[Optional[str]]]] = None) → Union[str, List[str]]

Applies the default model template value to the given object.

Unpacks a list of a single element to the element itself. Sets default value if None.

Parameters

model_template (str or None or list [None, str]) – The model template name. See valid names in ModelTemplateEnum.

Returns

model_template – The model template name, with the default value used if not provided.

Return type

str or list [str]

property estimator

The estimator instance to use as the final step in the pipeline. An instance of BaseForecastEstimator.

get_forecast_time_properties()

Returns forecast time parameters.

Uses self.df, self.config, self.regressor_cols.

Available parameters:

  • self.df

  • self.config

  • self.score_func

  • self.score_func_greater_is_better

  • self.regressor_cols

  • self.estimator

  • self.pipeline

Returns

time_properties – Time properties dictionary (likely produced by get_forecast_time_properties) with keys:

"period"int

Period of each observation (i.e. minimum time between observations, in seconds).

"simple_freq"SimpleTimeFrequencyEnum

SimpleTimeFrequencyEnum member corresponding to data frequency.

"num_training_points"int

Number of observations for training.

"num_training_days"int

Number of days for training.

"start_year"int

Start year of the training period.

"end_year"int

End year of the forecast period.

"origin_for_time_vars"float

Continuous time representation of the first date in df.

Return type

dict [str, any] or None, default None

get_pipeline()

Returns pipeline.

Implementation may be overridden by subclass if a different pipeline is desired.

Uses self.estimator, self.score_func, self.score_func_greater_is_better, self.config, self.regressor_cols.

Available parameters:

  • self.df

  • self.config

  • self.score_func

  • self.score_func_greater_is_better

  • self.regressor_cols

  • self.estimator

Returns

pipeline – See forecast_pipeline.

Return type

sklearn.pipeline.Pipeline

apply_template_for_pipeline_params(df: pandas.core.frame.DataFrame, config: Optional[greykite.framework.templates.autogen.forecast_config.ForecastConfig] = None) → Dict[source]

Explicitly calls the method in BaseTemplate to make use of the decorator in this class.

Parameters
  • df (pandas.DataFrame) – The time series dataframe with time_col and value_col and optional regressor columns.

  • config (ForecastConfig) – The ForecastConfig that includes model training parameters.

Returns

pipeline_parameters – The pipeline parameters consumable by forecast_pipeline.

Return type

dict

static apply_template_decorator(func)[source]

Decorator for apply_template_for_pipeline_params function.

Overrides the method in BaseTemplate.

Raises

ValueError if config.model_template != "PROPHET"

class greykite.sklearn.estimator.prophet_estimator.ProphetEstimator(score_func=<function mean_squared_error>, coverage=0.8, null_model_params=None, growth='linear', changepoints=None, n_changepoints=25, changepoint_range=0.8, yearly_seasonality='auto', weekly_seasonality='auto', daily_seasonality='auto', holidays=None, seasonality_mode='additive', seasonality_prior_scale=10.0, holidays_prior_scale=10.0, changepoint_prior_scale=0.05, mcmc_samples=0, uncertainty_samples=1000, add_regressor_dict=None, add_seasonality_dict=None)[source]

Wrapper for Facebook Prophet model.

Parameters
  • score_func (callable) – see BaseForecastEstimator

  • coverage (float between [0.0, 1.0]) – see BaseForecastEstimator

  • null_model_params (dict with arguments to define DummyRegressor null model, optional, default=None) – see BaseForecastEstimator

  • add_regressor_dict (dictionary of extra regressors to be added to the model, optional, default=None) –

    These should be available for training and entire prediction interval.

    Dictionary format:

    add_regressor_dict={  # we can add as many regressors as we want, in the following format
        "reg_col1": {
            "prior_scale": 10,
            "standardize": True,
            "mode": 'additive'
        },
        "reg_col2": {
            "prior_scale": 20,
            "standardize": True,
            "mode": 'multiplicative'
        }
    }
    

  • add_seasonality_dict (dict of custom seasonality parameters to be added to the model, optional, default=None) –

    Parameter details: https://github.com/facebook/prophet/blob/master/python/fbprophet/forecaster.py (refer to the add_seasonality() function). The key is the seasonality component name, e.g. ‘monthly’; its parameters are specified via dict.

    Dictionary format:

    add_seasonality_dict={
        'monthly': {
            'period': 30.5,
            'fourier_order': 5
        },
        'weekly': {
            'period': 7,
            'fourier_order': 20,
            'prior_scale': 0.6,
            'mode': 'additive',
            'condition_name': 'condition_col'  # takes a bool column in df with True/False values. This means that
            # the seasonality will only be applied to dates where the condition_name column is True.
        },
        'yearly': {
            'period': 365.25,
            'fourier_order': 10,
            'prior_scale': 0.2,
            'mode': 'additive'
        }
    }
    

    Note: If a built-in and a custom seasonality conflict, e.g. both define “yearly”, then the custom seasonality is used and the model logs a message such as: INFO:fbprophet:Found custom seasonality named “yearly”, disabling built-in yearly seasonality.

  • kwargs (additional parameters) –

    Other parameters are the same as Prophet model, with one exception: interval_width is specified by coverage.

    See the source code __init__ for the parameter names, and refer to the Prophet documentation for a description.

model

Prophet model object

Type

Prophet object

forecast

Output of predict method of Prophet.

Type

pandas.DataFrame

fit(X, y=None, time_col='ts', value_col='y', **fit_params)[source]

Fits fbprophet model.

Parameters
  • X (pandas.DataFrame) – Input timeseries, with timestamp column, value column, and any additional regressors. The value column is the response, included in X to allow transformation by sklearn.pipeline.Pipeline

  • y (ignored) – The original timeseries values, ignored. (The y for fitting is included in X.)

  • time_col (str) – Time column name in X

  • value_col (str) – Value column name in X

  • fit_params (dict) – additional parameters for null model

Returns

self – Fitted model is stored in self.model.

Return type

self

predict(X, y=None)[source]

Creates forecast for dates specified in X.

Parameters
  • X (pandas.DataFrame) – Input timeseries with timestamp column and any additional regressors. Timestamps are the dates for prediction. Value column, if provided in X, is ignored.

  • y (ignored) –

Returns

predictions

Forecasted values for the dates in X. Columns:

  • TIME_COL dates

  • PREDICTED_COL predictions

  • PREDICTED_LOWER_COL lower bound of predictions, optional

  • PREDICTED_UPPER_COL upper bound of predictions, optional

  • [other columns], optional

PREDICTED_LOWER_COL and PREDICTED_UPPER_COL are present if and only if coverage is not None.

Return type

pandas.DataFrame
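A minimal usage sketch of fit and predict. The column names "ts" and "y" match the defaults above; the estimator settings are illustrative, and the greykite/prophet calls are shown commented since they require those packages to be installed:

```python
import numpy as np
import pandas as pd

# Synthetic daily series in the layout fit() expects: the timestamp
# column and the value column live in the same DataFrame.
df = pd.DataFrame({
    "ts": pd.date_range("2020-01-01", periods=365, freq="D"),
    "y": np.arange(365) * 0.1 + np.sin(np.arange(365) * 2 * np.pi / 7),
})

# Assumes greykite and prophet are installed; commented so the
# data-prep part above runs standalone.
# from greykite.sklearn.estimator.prophet_estimator import ProphetEstimator
# model = ProphetEstimator(coverage=0.8, weekly_seasonality=True)
# model.fit(df, time_col="ts", value_col="y")
# future = df[["ts"]].tail(30)         # predict() ignores the value column
# predictions = model.predict(future)  # includes lower/upper bounds since coverage is set
```

With coverage set, the returned predictions DataFrame includes the PREDICTED_LOWER_COL and PREDICTED_UPPER_COL bounds alongside the point forecasts.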

summary()[source]

Prints input parameters and Prophet model parameters.

Returns

log_message – log message printed to logging.info()

Return type

str

plot_components(uncertainty=True, plot_cap=True, weekly_start=0, yearly_start=0, figsize=None)[source]

Plot the Prophet forecast components on the dataset passed to predict.

Will plot whichever are available of: trend, holidays, weekly seasonality, and yearly seasonality.

Parameters
  • uncertainty (bool, optional, default True) – Boolean to plot uncertainty intervals.

  • plot_cap (bool, optional, default True) – Boolean indicating if the capacity should be shown in the figure, if available.

  • weekly_start (int, optional, default 0) – Specifying the start day of the weekly seasonality plot. 0 (default) starts the week on Sunday. 1 shifts by 1 day to Monday, and so on.

  • yearly_start (int, optional, default 0) – Specifying the start day of the yearly seasonality plot. 0 (default) starts the year on Jan 1. 1 shifts by 1 day to Jan 2, and so on.

  • figsize (tuple , optional, default None) – Width, height in inches.

Returns

fig – A matplotlib figure.

Return type

matplotlib.figure.Figure

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

dict

score(X, y, sample_weight=None)

Default scorer for the estimator (used in GridSearchCV/RandomizedSearchCV if scoring=None).

Notes

If null_model_params is not None, returns R2_null_model_score of model error relative to null model, evaluated by score_func.

If null_model_params is None, returns score_func of the model itself.

By default, grid search (with no scoring parameter) optimizes improvement of score_func against null model.

To optimize a different score function, pass scoring to GridSearchCV/RandomizedSearchCV.

Parameters
  • X (pandas.DataFrame) – Input timeseries for prediction, with timestamp column and any additional regressors.

  • y (array-like) – Actual values to score against.

  • sample_weight (array-like or None, default None) – Sample weights.
Returns

score – Comparison of predictions against null predictions, according to specified score function

Return type

float or None

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

estimator instance

Forecast Pipeline

greykite.framework.pipeline.pipeline.forecast_pipeline(df: pandas.core.frame.DataFrame, time_col='ts', value_col='y', date_format=None, tz=None, freq=None, train_end_date=None, anomaly_info=None, pipeline=None, regressor_cols=None, estimator=SimpleSilverkiteEstimator(), hyperparameter_grid=None, hyperparameter_budget=None, n_jobs=1, verbose=1, forecast_horizon=None, coverage=0.95, test_horizon=None, periods_between_train_test=None, agg_periods=None, agg_func=None, score_func='MeanAbsolutePercentError', score_func_greater_is_better=False, cv_report_metrics='ALL', null_model_params=None, relative_error_tolerance=None, cv_horizon=None, cv_min_train_periods=None, cv_expanding_window=False, cv_use_most_recent_splits=False, cv_periods_between_splits=None, cv_periods_between_train_test=None, cv_max_splits=3)[source]

Computation pipeline for end-to-end forecasting.

Trains a forecast model end-to-end:

  1. checks input data

  2. runs cross-validation to select optimal hyperparameters e.g. best model

  3. evaluates best model on test set

  4. provides forecast of best model (re-trained on all data) into the future

Returns forecasts with methods to plot and see diagnostics. Also returns the fitted pipeline and CV results.

Provides a high degree of customization over training and evaluation parameters:

  1. model

  2. cross validation

  3. evaluation

  4. forecast horizon

See test cases for examples.

Parameters
  • df (pandas.DataFrame) – Timeseries data to forecast. Contains columns [time_col, value_col], and optional regressor columns. Regressor columns should include future values for prediction.

  • time_col (str, default TIME_COL in constants.py) – name of timestamp column in df

  • value_col (str, default VALUE_COL in constants.py) – name of value column in df (the values to forecast)

  • date_format (str or None, default None) – strftime format to parse time column, eg %m/%d/%Y. Note that %f will parse all the way up to nanoseconds. If None (recommended), inferred by pandas.to_datetime.

  • tz (str or None, default None) – Passed to pandas.tz_localize to localize the timestamp

  • freq (str or None, default None) – Frequency of input data. Used to generate future dates for prediction. Frequency strings can have multiples, e.g. ‘5H’. See https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases for a list of frequency aliases. If None, inferred by pandas.infer_freq. Provide this parameter if df has missing timepoints.

  • train_end_date (datetime.datetime, optional, default None) – Last date to use for fitting the model. Forecasts are generated after this date. If None, it is set to the last date with a non-null value in value_col of df.

  • anomaly_info (dict or list [dict] or None, default None) –

    Anomaly adjustment info. Anomalies in df are corrected before any forecasting is done.

    If None, no adjustments are made.

    A dictionary containing the parameters to adjust_anomalous_data. See that function for details. The possible keys are:

    "value_col"str

    The name of the column in df to adjust. You may adjust the value to forecast as well as any numeric regressors.

    "anomaly_df"pandas.DataFrame

    Adjustments to correct the anomalies.

    "start_date_col": str, default START_DATE_COL

    Start date column in anomaly_df.

    "end_date_col": str, default END_DATE_COL

    End date column in anomaly_df.

    "adjustment_delta_col": str or None, default None

    Impact column in anomaly_df.

    "filter_by_dict": dict or None, default None

    Used to filter anomaly_df to the relevant anomalies for the value_col in this dictionary. Key specifies the column name, value specifies the filter value.

    "filter_by_value_col"": str or None, default None

    Adds {filter_by_value_col: value_col} to filter_by_dict if not None, for the value_col in this dictionary.

    "adjustment_method"str (“add” or “subtract”), default “add”

    How to make the adjustment, if adjustment_delta_col is provided.

    Accepts a list of such dictionaries to adjust multiple columns in df.

  • pipeline (sklearn.pipeline.Pipeline or None, default None) – Pipeline to fit. The final named step must be called “estimator”. If None, will use the default Pipeline from get_basic_pipeline.

  • regressor_cols (list [str] or None, default None) – A list of regressor columns used in the training and prediction DataFrames. It should contain only the regressors that are being used in the grid search. If None, no regressor columns are used. Regressor columns that are unavailable in df are dropped.

  • estimator (instance of an estimator that implements greykite.algo.models.base_forecast_estimator.BaseForecastEstimator) – Estimator to use as the final step in the pipeline. Ignored if pipeline is provided.

  • forecast_horizon (int or None, default None) – Number of periods to forecast into the future. Must be > 0. If None, default is determined from the input data frequency.

  • coverage (float or None, default=0.95) – Intended coverage of the prediction bands (0.0 to 1.0). If None, the upper/lower predictions are not returned. Ignored if pipeline is provided; the coverage of the pipeline estimator is used instead.

  • test_horizon (int or None, default None) – Number of periods held back from the end of df for test. The rest is used for cross validation. If None, default is forecast_horizon. Set to 0 to skip backtest.

  • periods_between_train_test (int or None, default None) – Number of periods for the gap between train and test data. If None, default is 0.

  • agg_periods (int or None, default None) –

    Number of periods to aggregate before evaluation.

    Model is fit and forecasted on the dataset’s original frequency.

    Before evaluation, the actual and forecasted values are aggregated, using rolling windows of size agg_periods and the function agg_func. (e.g. if the dataset is hourly, use agg_periods=24, agg_func=np.sum, to evaluate performance on the daily totals).

    If None, does not aggregate before evaluation.

    Currently, this is only used when calculating CV metrics and the R2_null_model_score metric in backtest/forecast. No pre-aggregation is applied for the other backtest/forecast evaluation metrics.

  • agg_func (callable or None, default None) –

    Takes an array and returns a number, e.g. np.max, np.sum.

    Defines how to aggregate rolling windows of actual and predicted values before evaluation.

    Ignored if agg_periods is None.

    Currently, this is only used when calculating CV metrics and the R2_null_model_score metric in backtest/forecast. No pre-aggregation is applied for the other backtest/forecast evaluation metrics.

  • score_func (str or callable, default EvaluationMetricEnum.MeanAbsolutePercentError.name) – Score function used to select optimal model in CV. If a callable, takes arrays y_true, y_pred and returns a float. If a string, must be either a EvaluationMetricEnum member name or FRACTION_OUTSIDE_TOLERANCE.

  • score_func_greater_is_better (bool, default False) – True if score_func is a score function, meaning higher is better, and False if it is a loss function, meaning lower is better. Must be provided if score_func is a callable (custom function). Ignored if score_func is a string, because the direction is known.

  • cv_report_metrics (str, or list [str], or None, default CV_REPORT_METRICS_ALL) –

    Additional metrics to compute during CV, besides the one specified by score_func.

    • If the string constant greykite.framework.constants.CV_REPORT_METRICS_ALL, computes all metrics in EvaluationMetricEnum. Also computes FRACTION_OUTSIDE_TOLERANCE if relative_error_tolerance is not None. The results are reported by the short name (.get_metric_name()) for EvaluationMetricEnum members and FRACTION_OUTSIDE_TOLERANCE_NAME for FRACTION_OUTSIDE_TOLERANCE. These names appear in the keys of forecast_result.grid_search.cv_results_ returned by this function.

    • If a list of strings, each of the listed metrics is computed. Valid strings are EvaluationMetricEnum member names and FRACTION_OUTSIDE_TOLERANCE.

      For example:

      ["MeanSquaredError", "MeanAbsoluteError", "MeanAbsolutePercentError", "MedianAbsolutePercentError", "FractionOutsideTolerance2"]
      
    • If None, no additional metrics are computed.

  • null_model_params (dict or None, default None) –

    Defines baseline model to compute R2_null_model_score evaluation metric. R2_null_model_score is the improvement in the loss function relative to a null model. It can be used to evaluate model quality with respect to a simple baseline. For details, see r2_null_model_score.

    The null model is a DummyRegressor, which returns constant predictions.

    Valid keys are “strategy”, “constant”, “quantile”. See DummyRegressor. For example:

    null_model_params = {
        "strategy": "mean",
    }
    null_model_params = {
        "strategy": "median",
    }
    null_model_params = {
        "strategy": "quantile",
        "quantile": 0.8,
    }
    null_model_params = {
        "strategy": "constant",
        "constant": 2.0,
    }
    

    If None, R2_null_model_score is not calculated.

    Note: CV model selection always optimizes score_func, not R2_null_model_score.

  • relative_error_tolerance (float or None, default None) – Threshold to compute the Outside Tolerance metric, defined as the fraction of forecasted values whose relative error is strictly greater than relative_error_tolerance. For example, 0.05 allows for 5% relative error. If None, the metric is not computed.

  • hyperparameter_grid (dict, list [dict] or None, default None) –

    Sets properties of the steps in the pipeline, and specifies combinations to search over. Should be valid input to sklearn.model_selection.GridSearchCV (param_grid) or sklearn.model_selection.RandomizedSearchCV (param_distributions).

    Prefix transform/estimator attributes by the name of the step in the pipeline. See details at: https://scikit-learn.org/stable/modules/compose.html#nested-parameters

    If None, uses the default pipeline parameters.

  • hyperparameter_budget (int or None, default None) –

    Max number of hyperparameter sets to try within the hyperparameter_grid search space.

    Runs a full grid search if hyperparameter_budget is sufficient to exhaust full hyperparameter_grid, otherwise samples uniformly at random from the space.

    If None, uses defaults:

    • full grid search if all values are constant

    • 10 if any value is a distribution to sample from

  • n_jobs (int or None, default COMPUTATION_N_JOBS) – Number of jobs to run in parallel (the maximum number of concurrently running workers). -1 uses all CPUs. -2 uses all CPUs but one. None is treated as 1 unless in a joblib.Parallel backend context that specifies otherwise.

  • verbose (int, default 1) – Verbosity level during CV. If > 0, prints the number of fits; if > 1, also prints fit parameters, total score, and fit time; if > 2, also prints train/test scores.

  • cv_horizon (int or None, default None) – Number of periods in each CV test set. If None, default is forecast_horizon. Set either cv_horizon or cv_max_splits to 0 to skip CV.

  • cv_min_train_periods (int or None, default None) – Minimum number of periods for training each CV fold. If cv_expanding_window is False, every training period is this size. If None, default is 2 * cv_horizon.

  • cv_expanding_window (bool, default False) – If True, training window for each CV split is fixed to the first available date. Otherwise, train start date is sliding, determined by cv_min_train_periods.

  • cv_use_most_recent_splits (bool, default False) – If True, splits from the end of the dataset are used. Else a sampling strategy is applied. Check _sample_splits for details.

  • cv_periods_between_splits (int or None, default None) – Number of periods to slide the test window between CV splits. If None, default is cv_horizon.

  • cv_periods_between_train_test (int or None, default None) – Number of periods for the gap between train and test in a CV split. If None, default is periods_between_train_test.

  • cv_max_splits (int or None, default 3) – Maximum number of CV splits. Given the above configuration, samples up to max_splits train/test splits, preferring splits toward the end of available data. If None, uses all splits. Set either cv_horizon or cv_max_splits to 0 to skip CV.

Returns

forecast_result – Forecast result. See ForecastResult for details.

  • If cv_horizon=0, forecast_result.grid_search.best_estimator_ and forecast_result.grid_search.best_params_ attributes are defined according to the provided single set of parameters. There must be a single set of parameters to skip cross-validation.

  • If test_horizon=0, forecast_result.backtest is None.

Return type

ForecastResult
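A hedged sketch of a typical call. The column names ("ts", "y") match the defaults; "temperature" is a hypothetical regressor. The forecast_pipeline call is shown commented since it requires greykite to be installed:

```python
import numpy as np
import pandas as pd

# Hourly input with a regressor column. Regressor values must extend
# into the forecast period: here the last 24 rows have a known
# regressor but no target value yet.
n = 24 * 60
df = pd.DataFrame({
    "ts": pd.date_range("2021-01-01", periods=n, freq="h"),
    "y": np.sin(np.arange(n) * 2 * np.pi / 24),
    "temperature": np.random.default_rng(0).normal(size=n),  # hypothetical regressor
})
df.loc[df.index[-24:], "y"] = np.nan  # future rows: regressor known, target unknown

# Assumes greykite is installed; commented so the data-prep above runs standalone.
# from greykite.framework.pipeline.pipeline import forecast_pipeline
# result = forecast_pipeline(
#     df, time_col="ts", value_col="y",
#     freq="H", forecast_horizon=24, coverage=0.95,
#     regressor_cols=["temperature"],
#     test_horizon=24, cv_max_splits=2,
# )
# result.backtest.plot()  # holdout evaluation
# result.forecast.df      # future predictions
```

The returned ForecastResult (described below) bundles the input timeseries, CV grid search, fitted pipeline, backtest, and forecast.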

class greykite.framework.pipeline.pipeline.ForecastResult(timeseries: greykite.framework.input.univariate_time_series.UnivariateTimeSeries = None, grid_search: sklearn.model_selection._search.RandomizedSearchCV = None, model: sklearn.pipeline.Pipeline = None, backtest: greykite.framework.output.univariate_forecast.UnivariateForecast = None, forecast: greykite.framework.output.univariate_forecast.UnivariateForecast = None)[source]

Forecast results. Contains results from cross-validation, backtest, and forecast, the trained model, and the original input data.

timeseries: greykite.framework.input.univariate_time_series.UnivariateTimeSeries = None

Input time series in standard format with stats and convenient plot functions.

grid_search: sklearn.model_selection._search.RandomizedSearchCV = None

Result of cross-validation grid search on the training dataset. The relevant attributes are:

  • cv_results_ cross-validation scores

  • best_estimator_ the model used for backtesting

  • best_params_ the optimal parameters used for backtesting.

Also see summarize_grid_search_results. We recommend using this function to extract results, rather than accessing cv_results_ directly.

model: sklearn.pipeline.Pipeline = None

Model fitted on full dataset, using the best parameters selected via cross-validation. Has .fit(), .predict(), and diagnostic functions depending on the model.

backtest: greykite.framework.output.univariate_forecast.UnivariateForecast = None

Forecast on backtest period. Backtest period is a holdout test set to check forecast quality against the most recent actual values available. The best model from cross validation is refit on data prior to this period. The timestamps in backtest.df are sorted in ascending order. Has a .plot() method and attributes to get forecast vs actuals, evaluation results.

forecast: greykite.framework.output.univariate_forecast.UnivariateForecast = None

Forecast on future period. Future dates are after the train end date, following the holdout test set. The best model from cross validation is refit on data prior to this period. The timestamps in forecast.df are sorted in ascending order. Has a .plot() method and attributes to get forecast vs actuals, evaluation results.

Template Output

class greykite.framework.input.univariate_time_series.UnivariateTimeSeries[source]

Defines univariate time series input. The dataset can include regressors, but only one metric is designated as the target metric to forecast.

Loads time series into a standard format. Provides statistics, plotting functions, and ability to generate future dataframe for prediction.

df

Data frame containing timestamp and value, with standardized column names for internal use (TIME_COL, VALUE_COL). Rows are sorted by time index, and missing gaps between dates are filled in so that dates are spaced at regular intervals. Values are adjusted for anomalies according to anomaly_info. The index can be timezone aware (but TIME_COL is not).

Type

pandas.DataFrame

y

Value of time series to forecast.

Type

pandas.Series, dtype float64

time_stats

Summary statistics about the timestamp column.

Type

dict

value_stats

Summary statistics about the value column.

Type

dict

original_time_col

Name of time column in original input data.

Type

str

original_value_col

Name of value column in original input data.

Type

str

regressor_cols

A list of regressor columns in the training and prediction DataFrames.

Type

list [str]

last_date_for_val

Date or timestamp corresponding to last non-null value in df[original_value_col].

Type

datetime.datetime or None, default None

last_date_for_reg

Date or timestamp corresponding to last non-null value in df[regressor_cols]. If regressor_cols is None, last_date_for_reg is None.

Type

datetime.datetime or None, default None

train_end_date

Last date or timestamp in fit_df. It is always less than or equal to the minimum of the non-null values among last_date_for_val and last_date_for_reg.

Type

datetime.datetime

fit_cols

A list of columns used in the training and prediction DataFrames.

Type

list [str]

fit_df

Data frame containing timestamp and value, with standardized column names for internal use. Will be used for fitting (train, cv, backtest).

Type

pandas.DataFrame

fit_y

Value of time series for fit_df.

Type

pandas.Series, dtype float64

freq

timeseries frequency, DateOffset alias, e.g. ‘T’ (minute), ‘H’ (hour), ‘D’ (day), ‘W’ (week), ‘M’ (month end), ‘MS’ (month start), ‘Y’ (year end), ‘YS’ (year start). See https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases

Type

str

anomaly_info

Anomaly adjustment info. Anomalies in df are corrected before any forecasting is done. See self.load_data()

Type

dict or list [dict] or None, default None

df_before_adjustment

self.df before adjustment by anomaly_info. Used by self.plot() to show the adjustment.

Type

pandas.DataFrame or None, default None

load_data(df: pandas.core.frame.DataFrame, time_col: str = 'ts', value_col: str = 'y', freq: str = None, date_format: str = None, tz: str = None, train_end_date: datetime.datetime = None, regressor_cols: List[str] = None, anomaly_info: Optional[Union[Dict, List[Dict]]] = None)[source]

Loads data to internal representation. Parses date column, sets timezone aware index. Checks for irregularities and raises an error if input is invalid. Adjusts for anomalies according to anomaly_info.

Parameters
  • df (pandas.DataFrame) – Input timeseries. A data frame which includes the timestamp column as well as the value column.

  • time_col (str) – The column name in df representing time for the time series data. The time column can be anything that can be parsed by pandas DatetimeIndex.

  • value_col (str) – The column name which has the value of interest to be forecasted.

  • freq (str, optional, default None) – Timeseries frequency, DateOffset alias, If None automatically inferred.

  • date_format (str, optional, default None) – strftime format to parse time column, eg %m/%d/%Y. Note that %f will parse all the way up to nanoseconds. If None (recommended), inferred by pandas.to_datetime.

  • tz (str or pytz.timezone object, optional, default None) – Passed to pandas.tz_localize to localize the timestamp.

  • train_end_date (datetime.datetime, optional, default None) – Last date to use for fitting the model. Forecasts are generated after this date. If None, it is set to the minimum of self.last_date_for_val and self.last_date_for_reg.

  • regressor_cols (list [str], optional, default None) – A list of regressor columns used in the training and prediction DataFrames. If None, no regressor columns are used. Regressor columns that are unavailable in df are dropped.

  • anomaly_info (dict or list [dict] or None, default None) –

    Anomaly adjustment info. Anomalies in df are corrected before any forecasting is done.

    If None, no adjustments are made.

    A dictionary containing the parameters to adjust_anomalous_data. See that function for details. The possible keys are:

    "value_col"str

    The name of the column in df to adjust. You may adjust the value to forecast as well as any numeric regressors.

    "anomaly_df"pandas.DataFrame

    Adjustments to correct the anomalies.

    "start_date_col": str, default START_DATE_COL

    Start date column in anomaly_df.

    "end_date_col": str, default END_DATE_COL

    End date column in anomaly_df.

    "adjustment_delta_col": str or None, default None

    Impact column in anomaly_df.

    "filter_by_dict": dict or None, default None

    Used to filter anomaly_df to the relevant anomalies for the value_col in this dictionary. Key specifies the column name, value specifies the filter value.

    "filter_by_value_col"": str or None, default None

    Adds {filter_by_value_col: value_col} to filter_by_dict if not None, for the value_col in this dictionary.

    "adjustment_method"str (“add” or “subtract”), default “add”

    How to make the adjustment, if adjustment_delta_col is provided.

    Accepts a list of such dictionaries to adjust multiple columns in df.

Returns

self – Sets self.df with standard column names, values adjusted for anomalies, and time gaps filled in, sorted by time index.

Return type

UnivariateTimeSeries
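A sketch of the anomaly_info setup described above. The column names in anomaly_df ("start_date", "end_date", "adjustment_delta") are illustrative and passed explicitly rather than relying on the defaults; the load_data call is shown commented since it requires greykite to be installed:

```python
import pandas as pd

df = pd.DataFrame({
    "date": pd.date_range("2020-03-01", periods=90, freq="D"),
    "sessions": range(90),
})

# Anomaly records: each row marks a date range whose values should be
# corrected before forecasting.
anomaly_df = pd.DataFrame({
    "start_date": [pd.Timestamp("2020-03-15")],
    "end_date": [pd.Timestamp("2020-03-17")],
    "adjustment_delta": [-120.0],  # applied per "adjustment_method"
})
anomaly_info = {
    "value_col": "sessions",
    "anomaly_df": anomaly_df,
    "start_date_col": "start_date",
    "end_date_col": "end_date",
    "adjustment_delta_col": "adjustment_delta",
    "adjustment_method": "add",
}

# Assumes greykite is installed; commented so the dict setup runs standalone.
# from greykite.framework.input.univariate_time_series import UnivariateTimeSeries
# ts = UnivariateTimeSeries()
# ts.load_data(df, time_col="date", value_col="sessions",
#              freq="D", anomaly_info=anomaly_info)
```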

describe_time_col()[source]

Basic descriptive stats on the timeseries time column.

Returns

time_stats

Dictionary with descriptive stats on the timeseries time column.

  • data_points: int

    number of time points

  • mean_increment_secs: float

    mean frequency

  • min_timestamp: datetime64

    start date

  • max_timestamp: datetime64

    end date

Return type

dict
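The reported statistics can be reproduced with plain pandas. A sketch of the equivalent computation on an hourly index (not the library's implementation, but it yields the same dictionary keys):

```python
import pandas as pd

# 48 hourly timestamps; diff() gives the spacing between points.
ts = pd.Series(pd.date_range("2020-01-01", periods=48, freq="h"))
time_stats = {
    "data_points": len(ts),
    "mean_increment_secs": ts.diff().dt.total_seconds().mean(),
    "min_timestamp": ts.min(),
    "max_timestamp": ts.max(),
}
```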

describe_value_col()[source]

Basic descriptive stats on the timeseries value column.

Returns

value_stats – Dict with keys: count, mean, std, min, 25%, 50%, 75%, max

Return type

dict [str, float]

make_future_dataframe(periods: int = None, include_history=True)[source]

Extends the input data for prediction into the future.

Includes the historical values (VALUE_COL) so this can be fed into a Pipeline that transforms input data for fitting, and for use in evaluation.

Parameters
  • periods (int or None) – Number of periods to forecast. If there are no regressors, default is 30. If there are regressors, default is to predict all available dates.

  • include_history (bool) – Whether to return historical dates and values with future dates.

Returns

future_df – Dataframe with future timestamps for prediction. Contains columns for:

  • prediction dates (TIME_COL),

  • values (VALUE_COL),

  • optional regressors

Return type

pandas.DataFrame

plot(color='rgb(32, 149, 212)', show_anomaly_adjustment=False, **kwargs)[source]

Returns interactive plotly graph of the value against time.

If anomaly info is provided, there is an option to show the anomaly adjustment.

Parameters
  • color (str, default “rgb(32, 149, 212)” (light blue)) – Color of the value line (after adjustment, if applicable).

  • show_anomaly_adjustment (bool, default False) – Whether to show the anomaly adjustment.

  • kwargs (additional parameters) – Additional parameters to pass to plot_univariate such as title and color.

Returns

fig – Interactive plotly graph of the value against time.

See plot_forecast_vs_actual return value for how to plot the figure and add customization.

Return type

plotly.graph_objs.Figure

get_grouping_evaluation(aggregation_func=<function nanmean>, aggregation_func_name='mean', groupby_time_feature=None, groupby_sliding_window_size=None, groupby_custom_column=None)[source]

Group-wise computation of an aggregated timeseries value. Can be used to evaluate an error or aggregated value by a time feature, over time, or by a user-provided column.

Exactly one of: groupby_time_feature, groupby_sliding_window_size, groupby_custom_column must be provided.

Parameters
  • aggregation_func (callable, optional, default numpy.nanmean) – Function that aggregates an array to a number. Signature (y: array) -> aggregated value: float.

  • aggregation_func_name (str or None, optional, default “mean”) – Name of grouping function, used to report results. If None, defaults to “aggregation”.

  • groupby_time_feature (str or None, optional) – If provided, groups by a column generated by build_time_features_df. See that function for valid values.

  • groupby_sliding_window_size (int or None, optional) – If provided, sequentially partitions data into groups of size groupby_sliding_window_size.

  • groupby_custom_column (pandas.Series or None, optional) – If provided, groups by this column value. Should be same length as the DataFrame.

Returns

grouped_df

  1. grouping_func_name: the aggregated timeseries value for each group (column named by aggregation_func_name, e.g. “mean”).

  2. group name, which depends on the grouping method:

    • groupby_time_feature for groupby_time_feature

    • cst.TIME_COL for groupby_sliding_window_size

    • groupby_custom_column.name for groupby_custom_column

Return type

pandas.DataFrame with two columns
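The grouping semantics can be emulated with plain pandas. A sketch equivalent to calling get_grouping_evaluation with groupby_time_feature="str_dow" and the default mean aggregation (the column naming here is illustrative, not the library's exact output):

```python
import pandas as pd

# Four weeks of daily data starting on a Monday, alternating weekly
# between the values 1.0 and 2.0.
df = pd.DataFrame({
    "ts": pd.date_range("2020-01-06", periods=28, freq="D"),
    "y": [1.0] * 7 + [2.0] * 7 + [1.0] * 7 + [2.0] * 7,
})

# Group by day of week and take the mean per group: each weekday sees
# the values 1, 2, 1, 2 across the four weeks, so each mean is 1.5.
grouped_df = (
    df.assign(str_dow=df["ts"].dt.day_name())
      .groupby("str_dow")["y"]
      .mean()
      .rename("mean of y")
      .reset_index()
)
```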

plot_grouping_evaluation(aggregation_func=<function nanmean>, aggregation_func_name='mean', groupby_time_feature=None, groupby_sliding_window_size=None, groupby_custom_column=None, xlabel=None, ylabel=None, title=None)[source]

Computes aggregated timeseries by group and plots the result. Can be used to plot aggregated timeseries by a time feature, over time, or by a user-provided column.

Exactly one of: groupby_time_feature, groupby_sliding_window_size, groupby_custom_column must be provided.

Parameters
  • aggregation_func (callable, optional, default numpy.nanmean) – Function that aggregates an array to a number. Signature (y: array) -> aggregated value: float.

  • aggregation_func_name (str or None, optional, default “mean”) – Name of grouping function, used to report results. If None, defaults to “aggregation”.

  • groupby_time_feature (str or None, optional) – If provided, groups by a column generated by build_time_features_df. See that function for valid values.

  • groupby_sliding_window_size (int or None, optional) – If provided, sequentially partitions data into groups of size groupby_sliding_window_size.

  • groupby_custom_column (pandas.Series or None, optional) – If provided, groups by this column value. Should be same length as the DataFrame.

  • xlabel (str, optional, default None) – X-axis label of the plot.

  • ylabel (str, optional, default None) – Y-axis label of the plot.

  • title (str or None, optional) – Plot title. If None, default is based on axis labels.

Returns

fig – plotly graph object showing aggregated timeseries by group. The x-axis label depends on the grouping method: groupby_time_feature for groupby_time_feature, TIME_COL for groupby_sliding_window_size, groupby_custom_column.name for groupby_custom_column.

Return type

plotly.graph_objs.Figure

get_quantiles_and_overlays(groupby_time_feature=None, groupby_sliding_window_size=None, groupby_custom_column=None, show_mean=False, show_quantiles=False, show_overlays=False, overlay_label_time_feature=None, overlay_label_sliding_window_size=None, overlay_label_custom_column=None, center_values=False, value_col='y', mean_col_name='mean', quantile_col_prefix='Q', **overlay_pivot_table_kwargs)[source]

Computes mean, quantiles, and overlays by the requested grouping dimension.

Overlays are best explained in the plotting context. The grouping dimension goes on the x-axis, and one line is shown for each level of the overlay dimension. This function returns a column for each line to plot (e.g. mean, each quantile, each overlay value).

Exactly one of: groupby_time_feature, groupby_sliding_window_size, groupby_custom_column must be provided as the grouping dimension.

If show_overlays is True, exactly one of: overlay_label_time_feature, overlay_label_sliding_window_size, overlay_label_custom_column can be provided to specify the label_col (overlay dimension). Internally, the function calls pandas.DataFrame.pivot_table with index=groupby_col, columns=label_col, values=value_col to get the overlay values for plotting. You can pass additional parameters to pandas.DataFrame.pivot_table via overlay_pivot_table_kwargs, e.g. to change the aggregation method. If an explicit label is not provided, the records are labeled by their position within the group.

For example, to show yearly seasonality mean, quantiles, and overlay plots for each individual year, use:

self.get_quantiles_and_overlays(
    groupby_time_feature="doy",         # Rows: a row for each day of year (1, 2, ..., 366)
    show_mean=True,                     # mean value on that day
    show_quantiles=[0.1, 0.9],          # quantiles of the observed distribution on that day
    show_overlays=True,                 # Include overlays defined by ``overlay_label_time_feature``
    overlay_label_time_feature="year")  # One column for each observed "year" (2016, 2017, 2018, ...)
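The pivot described above (index=groupby_col, columns=label_col, values=value_col) can be sketched with plain pandas on made-up data; this is an illustration of the mechanics, not the library's code:

```python
import pandas as pd

# Sketch of the internal pivot_table call that builds overlays:
# index=groupby_col, columns=label_col, values=value_col (made-up data).
df = pd.DataFrame({
    "dow":  [1, 2, 1, 2, 1, 2],
    "year": [2019, 2019, 2019, 2019, 2020, 2020],
    "y":    [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
})

# One column per label value ("year"); cells are aggregated by aggfunc.
overlays = df.pivot_table(index="dow", columns="year", values="y", aggfunc="mean")
```

Extra keyword arguments such as aggfunc are exactly what overlay_pivot_table_kwargs forwards to pandas.DataFrame.pivot_table.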

To show weekly seasonality over time, use:

self.get_quantiles_and_overlays(
    groupby_time_feature="dow",            # Rows: a row for each day of week (1, 2, ..., 7)
    show_mean=True,                        # mean value on that day
    show_quantiles=[0.1, 0.5, 0.9],        # quantiles of the observed distribution on that day
    show_overlays=True,                    # Include overlays defined by ``overlay_label_time_feature``
    overlay_label_sliding_window_size=90,  # One column for each 90 period sliding window in the dataset,
    aggfunc="median")                      # overlay value is the median value for the dow over the period (default="mean").

It may be difficult to assess the weekly seasonality from the previous result, because overlays shift up/down over time due to trend/yearly seasonality. Use center_values=True to adjust each overlay so its average value is centered at 0. Mean and quantiles are shifted by a single constant to center the mean at 0, while preserving their relative values:

self.get_quantiles_and_overlays(
    groupby_time_feature="dow",
    show_mean=True,
    show_quantiles=[0.1, 0.5, 0.9],
    show_overlays=True,
    overlay_label_sliding_window_size=90,
    aggfunc="median",
    center_values=True)  # Centers the output

Centering reduces the variability in the overlays to make it easier to isolate the effect by the groupby column. As a result, centered overlays have smaller variability than that reported by the quantiles, which operate on the original, uncentered data points. Similarly, if overlays are aggregates of individual values (i.e. aggfunc is needed in the call to pandas.DataFrame.pivot_table), the quantiles of overlays will be less extreme than those of the original data.

  • To assess variability conditioned on the groupby value, check the quantiles.

  • To assess variability conditioned on both the groupby and overlay value, after any necessary aggregation, check the variability of the overlay values. Compute quantiles of overlays from the return value if desired.
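The centering rule described above can be sketched with plain pandas (an illustrative approximation on made-up data, not Greykite's implementation): each overlay column is shifted by its own mean, while the mean and quantile columns share a single constant shift:

```python
import pandas as pd

# Illustrative sketch of center_values=True (made-up data).
overlays = pd.DataFrame({"2019": [10.0, 12.0, 11.0], "2020": [20.0, 22.0, 21.0]})
mean_col = pd.Series([15.0, 17.0, 16.0])
quantiles = pd.DataFrame({"Q0.1": [12.0, 14.0, 13.0], "Q0.9": [18.0, 20.0, 19.0]})

# Each overlay is centered at 0 by subtracting its own column mean.
centered_overlays = overlays - overlays.mean()

# Mean and quantiles are shifted by the same constant (the average of the
# mean column), preserving their relative spacing.
shift = mean_col.mean()
centered_mean = mean_col - shift
centered_quantiles = quantiles - shift
```

Because every overlay gets its own shift but the quantiles share one, centered overlays cluster more tightly than the quantile band suggests — which is the caveat noted above.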

Parameters
  • groupby_time_feature (str or None, default None) – If provided, groups by a column generated by build_time_features_df. See that function for valid values.

  • groupby_sliding_window_size (int or None, default None) – If provided, sequentially partitions data into groups of size groupby_sliding_window_size.

  • groupby_custom_column (pandas.Series or None, default None) – If provided, groups by this column value. Should be same length as the DataFrame.

  • show_mean (bool, default False) – Whether to return the mean value by the groupby column.

  • show_quantiles (bool or list [float] or numpy.array, default False) – Whether to return the quantiles of the value by the groupby column. If False, does not return quantiles. If True, returns default quantiles (0.1 and 0.9). If array-like, a list of quantiles to compute (e.g. (0.1, 0.25, 0.75, 0.9)).

  • show_overlays (bool or int or array-like [int or str] or None, default False) –

    Whether to return overlays of the value by the groupby column.

    If False, no overlays are shown.

    If True and label_col is defined, calls pandas.DataFrame.pivot_table with index=groupby_col, columns=label_col, values=value_col. label_col is defined by one of overlay_label_time_feature, overlay_label_sliding_window_size, or overlay_label_custom_column. Returns one column for each value of the label_col.

    If True and the label_col is not defined, returns the raw values within each group. Values across groups are put into columns by their position in the group (1st element in group, 2nd, 3rd, etc.). Positional order in a group is not guaranteed to correspond to anything meaningful, so the items within a column may not have anything in common. It is better to specify one of overlay_* to explicitly define the overlay labels.

    If an integer, the number of overlays to randomly sample. Behaves the same as True, then randomly samples up to that many columns. This is useful if there are too many values.

    If a list [int], a list of column indices (int type). Behaves the same as True, then selects the specified columns by index.

    If a list [str], a list of column names. Column names are matched by their string representation to the names in this list. Behaves the same as True, then selects the specified columns by name.

  • overlay_label_time_feature (str or None, default None) –

    If show_overlays is True, can be used to define label_col, i.e. which dimension to show separately as overlays.

    If provided, uses a column generated by build_time_features_df. See that function for valid values.

  • overlay_label_sliding_window_size (int or None, default None) –

    If show_overlays is True, can be used to define label_col, i.e. which dimension to show separately as overlays.

    If provided, uses a column that sequentially partitions data into groups of size overlay_label_sliding_window_size.

  • overlay_label_custom_column (pandas.Series or None, default None) –

    If show_overlays is True, can be used to define label_col, i.e. which dimension to show separately as overlays.

    If provided, uses this column value. Should be same length as the DataFrame.

  • value_col (str, default VALUE_COL) – The column name for the value column. By default, shows the univariate time series value, but it can be any other column in self.df.

  • mean_col_name (str, default “mean”) – The name to use for the mean column in the output. Applies if show_mean=True.

  • quantile_col_prefix (str, default “Q”) – The prefix to use for quantile column names in the output. Columns are named with this prefix followed by the quantile, rounded to 2 decimal places.

  • center_values (bool, default False) –

    Whether to center the return values. If True, shifts each overlay so its average value is centered at 0. Shifts mean and quantiles by a constant to center the mean at 0, while preserving their relative values.

    If False, values are not centered.

  • overlay_pivot_table_kwargs (additional parameters) – Additional keyword parameters to pass to pandas.DataFrame.pivot_table, used in generating the overlays. See above description for details.

Returns

grouped_df – Dataframe with mean, quantiles, and overlays by the grouping column. Overlays are defined by the grouping column and overlay dimension.

The column index is a multiindex whose first level is the “category”, a subset of [MEAN_COL_GROUP, QUANTILE_COL_GROUP, OVERLAY_COL_GROUP] depending on what is requested.

  • grouped_df[MEAN_COL_GROUP] = df with single column, named mean_col_name.

  • grouped_df[QUANTILE_COL_GROUP] = df with a column for each quantile, named f”{quantile_col_prefix}{round(q, 2)}”, where q is the quantile.

  • grouped_df[OVERLAY_COL_GROUP] = df with one column per overlay value, named by the overlay value.

For example, it might look like:

category    mean    quantile        overlay
name        mean    Q0.1    Q0.9    2007    2008    2009
doy
1               8.42    7.72    9.08    8.29    7.75    8.33
2               8.82    8.20    9.56    8.43    8.80    8.53
3               8.95    8.25    9.88    8.26    9.12    8.70
4               9.07    8.60    9.49    8.10    9.99    8.73
5               8.73    8.29    9.24    7.95    9.26    8.37
...         ...     ...     ...     ...     ...     ...

Return type

pandas.DataFrame
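The two-level column layout in the example above can be reproduced with plain pandas (a hypothetical sketch of the output format, using the sample numbers shown, not the library's code):

```python
import pandas as pd

# Sketch of the ("category", "name") two-level column index described above.
columns = pd.MultiIndex.from_tuples(
    [("mean", "mean"), ("quantile", "Q0.1"), ("quantile", "Q0.9"),
     ("overlay", "2007"), ("overlay", "2008")],
    names=["category", "name"])
grouped_df = pd.DataFrame(
    [[8.42, 7.72, 9.08, 8.29, 7.75],
     [8.82, 8.20, 9.56, 8.43, 8.80]],
    index=pd.Index([1, 2], name="doy"),
    columns=columns)

# Selecting by the first level returns the sub-frame for that category,
# e.g. grouped_df["quantile"] has one column per quantile.
quantile_df = grouped_df["quantile"]
```

Selecting by the first level (“category”) is how you separate the mean, quantile, and overlay columns from the combined return value.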

plot_quantiles_and_overlays(groupby_time_feature=None, groupby_sliding_window_size=None, groupby_custom_column=None, show_mean=False, show_quantiles=False, show_overlays=False, overlay_label_time_feature=None, overlay_label_sliding_window_size=None, overlay_label_custom_column=None, center_values=False, value_col='y', mean_col_name='mean', quantile_col_prefix='Q', mean_style=None, quantile_style=None, overlay_style=None, xlabel=None, ylabel=None, title=None, showlegend=True, **overlay_pivot_table_kwargs)[source]

Plots mean, quantiles, and overlays by the requested grouping dimension.

The grouping dimension goes on the x-axis, and one line is shown for the mean, each quantile, and each level of the overlay dimension, as requested. By default, shading is applied between the quantiles.

Exactly one of: groupby_time_feature, groupby_sliding_window_size, groupby_custom_column must be provided as the grouping dimension.

If show_overlays is True, exactly one of: overlay_label_time_feature, overlay_label_sliding_window_size, overlay_label_custom_column can be provided to specify the label_col (overlay dimension). Internally, the function calls pandas.DataFrame.pivot_table with index=groupby_col, columns=label_col, values=value_col to get the overlay values for plotting. You can pass additional parameters to pandas.DataFrame.pivot_table via overlay_pivot_table_kwargs, e.g. to change the aggregation method. If an explicit label is not provided, the records are labeled by their position within the group.

For example, to show yearly seasonality mean, quantiles, and overlay plots for each individual year, use:

self.plot_quantiles_and_overlays(
    groupby_time_feature="doy",         # Rows: a row for each day of year (1, 2, ..., 366)
    show_mean=True,                     # mean value on that day
    show_quantiles=[0.1, 0.9],          # quantiles of the observed distribution on that day
    show_overlays=True,                 # Include overlays defined by ``overlay_label_time_feature``
    overlay_label_time_feature="year")  # One column for each observed "year" (2016, 2017, 2018, ...)

To show weekly seasonality over time, use:

self.plot_quantiles_and_overlays(
    groupby_time_feature="dow",            # Rows: a row for each day of week (1, 2, ..., 7)
    show_mean=True,                        # mean value on that day
    show_quantiles=[0.1, 0.5, 0.9],        # quantiles of the observed distribution on that day
    show_overlays=True,                    # Include overlays defined by ``overlay_label_time_feature``
    overlay_label_sliding_window_size=90,  # One column for each 90 period sliding window in the dataset,
    aggfunc="median")                      # overlay value is the median value for the dow over the period (default="mean").

It may be difficult to assess the weekly seasonality from the previous result, because overlays shift up/down over time due to trend/yearly seasonality. Use center_values=True to adjust each overlay so its average value is centered at 0. Mean and quantiles are shifted by a single constant to center the mean at 0, while preserving their relative values:

self.plot_quantiles_and_overlays(
    groupby_time_feature="dow",
    show_mean=True,
    show_quantiles=[0.1, 0.5, 0.9],
    show_overlays=True,
    overlay_label_sliding_window_size=90,
    aggfunc="median",
    center_values=True)  # Centers the output

Centering reduces the variability in the overlays to make it easier to isolate the effect by the groupby column. As a result, centered overlays have smaller variability than that reported by the quantiles, which operate on the original, uncentered data points. Similarly, if overlays are aggregates of individual values (i.e. aggfunc is needed in the call to pandas.DataFrame.pivot_table), the quantiles of overlays will be less extreme than those of the original data.

  • To assess variability conditioned on the groupby value, check the quantiles.

  • To assess variability conditioned on both the groupby and overlay value, after any necessary aggregation, check the variability of the overlay values. Compute quantiles of overlays from the return value if desired.

Parameters
  • groupby_time_feature (str or None, default None) – If provided, groups by a column generated by build_time_features_df. See that function for valid values.

  • groupby_sliding_window_size (int or None, default None) – If provided, sequentially partitions data into groups of size groupby_sliding_window_size.

  • groupby_custom_column (pandas.Series or None, default None) – If provided, groups by this column value. Should be same length as the DataFrame.

  • show_mean (bool, default False) – Whether to return the mean value by the groupby column.

  • show_quantiles (bool or list [float] or numpy.array, default False) – Whether to return the quantiles of the value by the groupby column. If False, does not return quantiles. If True, returns default quantiles (0.1 and 0.9). If array-like, a list of quantiles to compute (e.g. (0.1, 0.25, 0.75, 0.9)).

  • show_overlays (bool or int or array-like [int or str], default False) –

    Whether to return overlays of the value by the groupby column.

    If False, no overlays are shown.

    If True and label_col is defined, calls pandas.DataFrame.pivot_table with index=groupby_col, columns=label_col, values=value_col. label_col is defined by one of overlay_label_time_feature, overlay_label_sliding_window_size, or overlay_label_custom_column. Returns one column for each value of the label_col.

    If True and the label_col is not defined, returns the raw values within each group. Values across groups are put into columns by their position in the group (1st element in group, 2nd, 3rd, etc.). Positional order in a group is not guaranteed to correspond to anything meaningful, so the items within a column may not have anything in common. It is better to specify one of overlay_* to explicitly define the overlay labels.

    If an integer, the number of overlays to randomly sample. Behaves the same as True, then randomly samples up to that many columns. This is useful if there are too many values.

    If a list [int], a list of column indices (int type). Behaves the same as True, then selects the specified columns by index.

    If a list [str], a list of column names. Column names are matched by their string representation to the names in this list. Behaves the same as True, then selects the specified columns by name.

  • overlay_label_time_feature (str or None, default None) –

    If show_overlays is True, can be used to define label_col, i.e. which dimension to show separately as overlays.

    If provided, uses a column generated by build_time_features_df. See that function for valid values.

  • overlay_label_sliding_window_size (int or None, default None) –

    If show_overlays is True, can be used to define label_col, i.e. which dimension to show separately as overlays.

    If provided, uses a column that sequentially partitions data into groups of size overlay_label_sliding_window_size.

  • overlay_label_custom_column (pandas.Series or None, default None) –

    If show_overlays is True, can be used to define label_col, i.e. which dimension to show separately as overlays.

    If provided, uses this column value. Should be same length as the DataFrame.

  • value_col (str, default VALUE_COL) – The column name for the value column. By default, shows the univariate time series value, but it can be any other column in self.df.

  • mean_col_name (str, default “mean”) – The name to use for the mean column in the output. Applies if show_mean=True.

  • quantile_col_prefix (str, default “Q”) – The prefix to use for quantile column names in the output. Columns are named with this prefix followed by the quantile, rounded to 2 decimal places.

  • center_values (bool, default False) –

    Whether to center the return values. If True, shifts each overlay so its average value is centered at 0. Shifts mean and quantiles by a constant to center the mean at 0, while preserving their relative values.

    If False, values are not centered.

  • mean_style (dict or None, default None) –

    How to style the mean line, passed as keyword arguments to plotly.graph_objs.Scatter. If None, the default is:

    mean_style = {
        "line": dict(
            width=2,
            color="#595959"),  # gray
        "legendgroup": MEAN_COL_GROUP}
    

  • quantile_style (dict or None, default None) –

    How to style the quantile lines, passed as keyword arguments to plotly.graph_objs.Scatter. If None, the default is:

    quantile_style = {
        "line": dict(
            width=2,
            color="#1F9AFF",  # blue
            dash="solid"),
        "legendgroup": QUANTILE_COL_GROUP,  # show/hide them together
        "fill": "tonexty"}
    

    Note that the fill style is removed from the first quantile line, so that fill is applied only between items in the same category.

  • overlay_style (dict or None, default None) –

    How to style the overlay lines, passed as keyword arguments to plotly.graph_objs.Scatter. If None, the default is:

    overlay_style = {
        "opacity": 0.5,  # makes it easier to see density
        "line": dict(
            width=1,
            color="#B3B3B3",  # light gray
            dash="solid"),
        "legendgroup": OVERLAY_COL_GROUP}
    

  • xlabel (str, optional, default None) – X-axis label of the plot.

  • ylabel (str, optional, default None) – Y-axis label of the plot. If None, uses value_col.

  • title (str or None, default None) – Plot title. If None, default is based on axis labels.

  • showlegend (bool, default True) – Whether to show the legend.

  • overlay_pivot_table_kwargs (additional parameters) – Additional keyword parameters to pass to pandas.DataFrame.pivot_table, used in generating the overlays. See get_quantiles_and_overlays description for details.

Returns

fig – plotly graph object showing the mean, quantiles, and overlays.

Return type

plotly.graph_objs.Figure

See also

get_quantiles_and_overlays

To get the mean, quantiles, and overlays as a pandas.DataFrame without plotting.

class greykite.framework.output.univariate_forecast.UnivariateForecast(df, time_col='ts', actual_col='actual', predicted_col='forecast', predicted_lower_col='forecast_lower', predicted_upper_col='forecast_upper', null_model_predicted_col='forecast_null', ylabel='y', train_end_date=None, test_start_date=None, forecast_horizon=None, coverage=0.95, r2_loss_function=<function mean_squared_error>, estimator=None, relative_error_tolerance=None)[source]

Stores predicted and actual values. Provides functionality to evaluate a forecast:

  • plots forecast against actuals with prediction bands.

  • evaluates model performance.

Input should be one of two kinds of forecast results:

  • model fit to train data, forecast on test set (actuals available).

  • model fit to all data, forecast on future dates (actuals not available).

The input df is a concatenation of fitted and forecasted values.

df

Timestamp, predicted, and actual values.

Type

pandas.DataFrame

time_col

Column in df with timestamp (default “ts”).

Type

str

actual_col

Column in df with actual values (default “actual”).

Type

str

predicted_col

Column in df with predicted values (default “forecast”).

Type

str

predicted_lower_col

Column in df with predicted lower bound (default “forecast_lower”, optional).

Type

str or None

predicted_upper_col

Column in df with predicted upper bound (default “forecast_upper”, optional).

Type

str or None

null_model_predicted_col

Column in df with predicted value of null model (default “forecast_null”, optional).

Type

str or None

ylabel

Unit of measurement (default “y”)

Type

str

train_end_date

End date for train period. If None, assumes all data were used for training.

Type

str or datetime or None, default None

test_start_date

Start date of test period. If None, set to the time_col value immediately after train_end_date. This assumes that all data not used in training were used for testing.

Type

str or datetime or None, default None

forecast_horizon

Number of periods forecasted into the future. Must be > 0.

Type

int or None, default None

coverage

Intended coverage of the prediction bands (0.0 to 1.0).

Type

float or None

r2_loss_function

Loss function to calculate cst.R2_null_model_score, with signature loss_func(y_true, y_pred) (default mean_squared_error)

Type

function

estimator

The fitted estimator, the last step in the forecast pipeline.

Type

An instance of an estimator that implements greykite.models.base_forecast_estimator.BaseForecastEstimator.

relative_error_tolerance

Threshold to compute the Outside Tolerance metric, defined as the fraction of forecasted values whose relative error is strictly greater than relative_error_tolerance. For example, 0.05 allows for 5% relative error. If None, the metric is not computed.

Type

float or None, default None

df_train

Subset of df where df[time_col] <= train_end_date.

Type

pandas.DataFrame

df_test

Subset of df where df[time_col] > train_end_date.

Type

pandas.DataFrame

train_evaluation

Evaluation metrics on training set.

Type

dict [str, float]

test_evaluation

Evaluation metrics on test set (if actual values provided after train_end_date).

Type

dict [str, float]

test_na_count

Count of NA values in test data.

Type

int

compute_evaluation_metrics_split()[source]

Computes __evaluation_metrics for train and test set separately.

Returns

dictionary with train and test evaluation metrics

plot(**kwargs)[source]

Plots predicted against actual.

Parameters

kwargs (additional parameters) – Additional parameters to pass to plot_forecast_vs_actual such as title, colors, and line styling.

Returns

fig – Plotly figure of forecast against actuals, with prediction intervals if available.

See plot_forecast_vs_actual return value for how to plot the figure and add customization.

Return type

plotly.graph_objs.Figure

get_grouping_evaluation(score_func=<function add_finite_filter_to_scorer.<locals>.score_func_finite>, score_func_name='MAPE', which='train', groupby_time_feature=None, groupby_sliding_window_size=None, groupby_custom_column=None)[source]

Group-wise computation of forecasting error. Can be used to evaluate error/ aggregated value by a time feature, over time, or by a user-provided column.

Exactly one of: groupby_time_feature, groupby_sliding_window_size, groupby_custom_column must be provided.

Parameters
  • score_func (callable, optional) – Function that maps two arrays to a number. Signature (y_true: array, y_pred: array) -> error: float

  • score_func_name (str or None, optional) – Name of the score function used to report results. If None, defaults to “metric”.

  • which (str) – “train” or “test”. Which dataset to evaluate.

  • groupby_time_feature (str or None, optional) – If provided, groups by a column generated by build_time_features_df. See that function for valid values.

  • groupby_sliding_window_size (int or None, optional) – If provided, sequentially partitions data into groups of size groupby_sliding_window_size.

  • groupby_custom_column (pandas.Series or None, optional) – If provided, groups by this column value. Should be same length as the DataFrame.

Returns

grouped_df

  1. score_func_name: the evaluation metric (forecasting error) for each group.

  2. group name: depends on the grouping method: groupby_time_feature for groupby_time_feature, cst.TIME_COL for groupby_sliding_window_size, groupby_custom_column.name for groupby_custom_column.

Return type

pandas.DataFrame with two columns:
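The group-wise error computation described above can be sketched with plain pandas (made-up data, with MAPE as a hypothetical score_func; not Greykite's implementation):

```python
import numpy as np
import pandas as pd

# Sketch: compute an error metric (MAPE) per group on made-up data.
df = pd.DataFrame({
    "dow":      [1, 1, 2, 2],
    "actual":   [10.0, 20.0, 10.0, 20.0],
    "forecast": [9.0, 22.0, 12.0, 24.0],
})

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Two columns, as described above: the group name and the metric value.
grouped_df = pd.DataFrame(
    [(name, mape(g["actual"].to_numpy(), g["forecast"].to_numpy()))
     for name, g in df.groupby("dow")],
    columns=["dow", "MAPE"])
```

Each row summarizes one group: the score function is applied to the actual/forecast arrays restricted to that group.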

plot_grouping_evaluation(score_func=<function add_finite_filter_to_scorer.<locals>.score_func_finite>, score_func_name='MAPE', which='train', groupby_time_feature=None, groupby_sliding_window_size=None, groupby_custom_column=None, xlabel=None, ylabel=None, title=None)[source]

Computes error by group and plots the result. Can be used to plot error by a time feature, over time, or by a user-provided column.

Exactly one of: groupby_time_feature, groupby_sliding_window_size, groupby_custom_column must be provided.

Parameters
  • score_func (callable, optional) – Function that maps two arrays to a number. Signature (y_true: array, y_pred: array) -> error: float

  • score_func_name (str or None, optional) – Name of the score function used to report results. If None, defaults to “metric”.

  • which (str, optional, default “train”) – Which dataset to evaluate, “train” or “test”.

  • groupby_time_feature (str or None, optional) – If provided, groups by a column generated by build_time_features_df. See that function for valid values.

  • groupby_sliding_window_size (int or None, optional) – If provided, sequentially partitions data into groups of size groupby_sliding_window_size.

  • groupby_custom_column (pandas.Series or None, optional) – If provided, groups by this column value. Should be same length as the DataFrame.

  • xlabel (str, optional, default None) – X-axis label of the plot.

  • ylabel (str, optional, default None) – Y-axis label of the plot.

  • title (str or None, optional) – Plot title. If None, this function creates a suitable title.

Returns

fig – plotly graph object showing forecasting error by group. The x-axis label depends on the grouping method: groupby_time_feature for groupby_time_feature, time_col for groupby_sliding_window_size, groupby_custom_column.name for groupby_custom_column.

Return type

plotly.graph_objs.Figure

autocomplete_map_func_dict(map_func_dict)[source]

Sweeps through map_func_dict, converting values that are ElementwiseEvaluationMetricEnum member names to their corresponding row-wise evaluation function with appropriate column names for this UnivariateForecast instance.

For example:

map_func_dict = {
    "squared_error": ElementwiseEvaluationMetricEnum.SquaredError.name,
    "coverage": ElementwiseEvaluationMetricEnum.Coverage.name,
    "custom_metric": custom_function
}

is converted to

map_func_dict = {
    "squared_error": lambda row: ElementwiseEvaluationMetricEnum.SquaredError.get_metric_func()(
                                row[self.actual_col],
                                row[self.predicted_col]),
    "coverage": lambda row: ElementwiseEvaluationMetricEnum.Coverage.get_metric_func()(
                                row[self.actual_col],
                                row[self.predicted_lower_col],
                                row[self.predicted_upper_col]),
    "custom_metric": custom_function
}
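The expansion shown above can be mimicked in plain Python (a hypothetical sketch of the name-to-callable conversion; METRICS and autocomplete are made-up names, not Greykite's API):

```python
# Hypothetical sketch: expand metric names into row-wise callables,
# leaving values that are already callables untouched.
METRICS = {
    "SquaredError": lambda actual, pred: (actual - pred) ** 2,
    "Residual": lambda actual, pred: actual - pred,
}

def autocomplete(map_func_dict, actual_col="actual", predicted_col="forecast"):
    """Replace string metric names with row-wise functions; keep callables."""
    out = {}
    for name, func in map_func_dict.items():
        if isinstance(func, str):
            metric = METRICS[func]
            # Bind the metric via a default argument so each lambda keeps
            # its own function, then look up the instance's column names.
            out[name] = lambda row, m=metric: m(row[actual_col], row[predicted_col])
        else:
            out[name] = func
    return out

expanded = autocomplete({"squared_error": "SquaredError"})
```

The resulting dict has the shape flexible_grouping_evaluation expects: every value is a row-wise callable.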
Parameters

map_func_dict (dict or None) – Same as flexible_grouping_evaluation, with one exception: values may be an ElementwiseEvaluationMetricEnum member name. These are converted to a callable for flexible_grouping_evaluation.

Returns

map_func_dict – Can be passed to flexible_grouping_evaluation.

Return type

dict

get_flexible_grouping_evaluation(which='train', groupby_time_feature=None, groupby_sliding_window_size=None, groupby_custom_column=None, map_func_dict=None, agg_kwargs=None, extend_col_names=False)[source]

Group-wise computation of evaluation metrics. Whereas self.get_grouping_evaluation computes one metric, this allows computation of any number of custom metrics.

For example:

  • Mean and quantiles of squared error by group.

  • Mean and quantiles of residuals by group.

  • Mean and quantiles of actual and forecast by group.

  • % of actuals outside prediction intervals by group

  • any combination of the above metrics by the same group

First adds a groupby column by passing groupby_ parameters to add_groupby_column. Then computes grouped evaluation metrics by passing map_func_dict, agg_kwargs and extend_col_names to flexible_grouping_evaluation.

Exactly one of: groupby_time_feature, groupby_sliding_window_size, groupby_custom_column must be provided.

which: str

“train” or “test”. Which dataset to evaluate.

groupby_time_feature : str or None, optional

If provided, groups by a column generated by build_time_features_df. See that function for valid values.

groupby_sliding_window_size : int or None, optional

If provided, sequentially partitions data into groups of size groupby_sliding_window_size.

groupby_custom_column : pandas.Series or None, optional

If provided, groups by this column value. Should be same length as the DataFrame.

map_func_dict : dict [str, callable] or None, default None

Row-wise transformation functions to create new columns. If None, no new columns are added.

  • key: new column name

  • value: row-wise function to apply to df to generate the column value.

    Signature (row: pandas.DataFrame) -> transformed value: float.

For example:

map_func_dict = {
    "residual": lambda row: row["actual"] - row["forecast"],
    "squared_error": lambda row: (row["actual"] - row["forecast"])**2
}

Some predefined functions are available in ElementwiseEvaluationMetricEnum. For example:

map_func_dict = {
    "residual": lambda row: ElementwiseEvaluationMetricEnum.Residual.get_metric_func()(
        row["actual"],
        row["forecast"]),
    "squared_error": lambda row: ElementwiseEvaluationMetricEnum.SquaredError.get_metric_func()(
        row["actual"],
        row["forecast"]),
    "q90_loss": lambda row: ElementwiseEvaluationMetricEnum.Quantile90.get_metric_func()(
        row["actual"],
        row["forecast"]),
    "abs_percent_error": lambda row: ElementwiseEvaluationMetricEnum.AbsolutePercentError.get_metric_func()(
        row["actual"],
        row["forecast"]),
    "coverage": lambda row: ElementwiseEvaluationMetricEnum.Coverage.get_metric_func()(
        row["actual"],
        row["forecast_lower"],
        row["forecast_upper"]),
}

As shorthand, it is sufficient to provide the enum member name. These are auto-expanded into the appropriate function. So the following is equivalent:

map_func_dict = {
    "residual": ElementwiseEvaluationMetricEnum.Residual.name,
    "squared_error": ElementwiseEvaluationMetricEnum.SquaredError.name,
    "q90_loss": ElementwiseEvaluationMetricEnum.Quantile90.name,
    "abs_percent_error": ElementwiseEvaluationMetricEnum.AbsolutePercentError.name,
    "coverage": ElementwiseEvaluationMetricEnum.Coverage.name,
}

agg_kwargs: dict or None, default None

Passed as keyword args to pandas.core.groupby.DataFrameGroupBy.aggregate after creating new columns and grouping by groupby_col.

See pandas.core.groupby.DataFrameGroupBy.aggregate or flexible_grouping_evaluation for details.

extend_col_names: bool or None, default False

How to flatten index after aggregation. In some cases, the column index after aggregation is a multi-index. This parameter controls how to flatten an index with 2 levels to 1 level.

  • If None, the index is not flattened.

  • If True, the column name is a composite: {index0}_{index1}. Use this option if index1 is not unique.

  • If False, the column name is simply {index1}.

Ignored if the column index after aggregation has only one level (e.g. if named aggregation is used in agg_kwargs).
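As an illustration of the naming rule (the column names below are hypothetical, not generated by greykite):

```python
# Hypothetical 2-level column index after aggregation, e.g. metric
# name on level 0 and aggregation function on level 1.
agg_columns = [("squared_error", "mean"), ("squared_error", "median")]

# extend_col_names=True: composite "{index0}_{index1}" names.
extended = [f"{c0}_{c1}" for c0, c1 in agg_columns]
# extend_col_names=False: keep only "{index1}".
plain = [c1 for _, c1 in agg_columns]

print(extended)  # ['squared_error_mean', 'squared_error_median']
print(plain)     # ['mean', 'median']
```

Note that with extend_col_names=False the two names above collide, which is why True is recommended when index1 is not unique.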

Returns

df_transformed – df after transformation and optional aggregation.

If groupby_col is None, returns df with additional columns as the keys in map_func_dict. Otherwise, df is grouped by groupby_col and this becomes the index. Columns are determined by agg_kwargs and extend_col_names.

Return type

pandas.DataFrame
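The transform-then-aggregate pattern this function implements can be sketched in plain Python (a simplified illustration with made-up data; the real function operates on a pandas.DataFrame):

```python
# Sketch of the pattern behind get_flexible_grouping_evaluation:
# first add row-wise metric columns, then aggregate them per group.
from collections import defaultdict

rows = [
    {"group": "Mon", "actual": 10.0, "forecast": 12.0},
    {"group": "Mon", "actual": 8.0, "forecast": 7.0},
    {"group": "Tue", "actual": 5.0, "forecast": 5.0},
]

# Step 1: row-wise transformation (analogous to map_func_dict).
for row in rows:
    row["squared_error"] = (row["actual"] - row["forecast"]) ** 2

# Step 2: group and aggregate (analogous to agg_kwargs).
grouped = defaultdict(list)
for row in rows:
    grouped[row["group"]].append(row["squared_error"])
mse_by_group = {g: sum(v) / len(v) for g, v in grouped.items()}

print(mse_by_group)  # {'Mon': 2.5, 'Tue': 0.0}
```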


plot_flexible_grouping_evaluation(which='train', groupby_time_feature=None, groupby_sliding_window_size=None, groupby_custom_column=None, map_func_dict=None, agg_kwargs=None, extend_col_names=False, y_col_style_dict='auto-fill', default_color='rgba(0, 145, 202, 1.0)', xlabel=None, ylabel=None, title=None, showlegend=True)[source]

Plots group-wise evaluation metrics. Whereas plot_grouping_evaluation shows one metric, this can show any number of custom metrics.

For example:

  • Mean and quantiles of squared error by group.

  • Mean and quantiles of residuals by group.

  • Mean and quantiles of actual and forecast by group.

  • % of actuals outside prediction intervals by group.

  • Any combination of the above metrics by the same group.

See get_flexible_grouping_evaluation for details.

which: str

“train” or “test”. Which dataset to evaluate.

groupby_time_feature: str or None, optional

If provided, groups by a column generated by build_time_features_df. See that function for valid values.

groupby_sliding_window_size: int or None, optional

If provided, sequentially partitions data into groups of size groupby_sliding_window_size.

groupby_custom_column: pandas.Series or None, optional

If provided, groups by this column value. Should be the same length as the DataFrame.

map_func_dict: dict [str, callable] or None, default None

Grouping evaluation metric specification, along with agg_kwargs. See get_flexible_grouping_evaluation.

agg_kwargs: dict or None, default None

Grouping evaluation metric specification, along with map_func_dict. See get_flexible_grouping_evaluation.

extend_col_names: bool or None, default False

How to name the grouping metrics. See get_flexible_grouping_evaluation.

y_col_style_dict: dict [str, dict or None] or “plotly” or “auto” or “auto-fill”, default “auto-fill”

The column(s) to plot on the y-axis, and how to style them. The names should match those generated by agg_kwargs and extend_col_names. The function get_flexible_grouping_evaluation can be used to check the column names.

For convenience, start with “auto-fill” or “plotly”, then adjust styling as needed.

See plot_multivariate for details.

default_color: str, default “rgba(0, 145, 202, 1.0)” (blue)

Default line color when y_col_style_dict is one of “auto”, “auto-fill”.

xlabel: str or None, default None

x-axis label. If None, the default is x_col.

ylabel: str or None, default None

y-axis label. If None, the y-axis is not labeled.

title: str or None, default None

Plot title. If None and ylabel is provided, a default title is used.

showlegend: bool, default True

Whether to show the legend.

Whether to show the legend.

Returns

fig – Interactive plotly graph showing the evaluation metrics.

See plot_forecast_vs_actual return value for how to plot the figure and add customization.

Return type

plotly.graph_objs.Figure


make_univariate_time_series()[source]

Converts the prediction into a UnivariateTimeSeries. Useful for converting a forecast into the input regressor for a subsequent forecast.

Returns

UnivariateTimeSeries

plot_components(**kwargs)[source]

Class method to plot the components of a UnivariateForecast object.

Silverkite calculates component plots based on the fit dataset. Prophet calculates component plots based on the predict dataset.

For estimator specific component plots with advanced plotting options call self.estimator.plot_components().

Returns

fig – Figure plotting components against the appropriate time scale.

Return type

plotly.graph_objs.Figure for Silverkite; matplotlib.figure.Figure for Prophet

class greykite.algo.common.model_summary.ModelSummary(x, y, pred_cols, pred_category, fit_algorithm, ml_model, max_colwidth=20)[source]

A class to store regression model summary statistics.

The class can be printed to get a well formatted model summary.

x

The design matrix.

Type

numpy.array

beta

The estimated coefficients.

Type

numpy.array

y

The response.

Type

numpy.array

pred_cols

List of predictor names.

Type

list [ str ]

pred_category

Predictor category, returned by create_pred_category.

Type

dict

fit_algorithm

The name of the algorithm used to fit the regression.

Type

str

ml_model

The trained machine learning model class.

Type

class

max_colwidth

The maximum length for predictors to be shown with their original names. If the maximum length of predictor names exceeds this parameter, all predictor names are suppressed and only indices are shown.

Type

int

info_dict

The model summary dictionary, output of _get_summary.

Type

dict

__str__()[source]

Print method.

__repr__()[source]

Print method.

_get_summary()[source]

Gets the model summary from input. This function is called during initialization.

Returns

info_dict – Includes direct and derived metrics about the trained model. For detailed keys, refer to get_info_dict_lm or get_info_dict_tree.

Return type

dict

get_coef_summary(is_intercept=None, is_time_feature=None, is_event=None, is_trend=None, is_seasonality=None, is_lag=None, is_regressor=None, is_interaction=None, return_df=False)[source]

Gets the coefficient summary filtered by conditions.

Parameters
  • is_intercept (bool or None, default None) – Intercept or not.

  • is_time_feature (bool or None, default None) – Time features or not. Time features belong to TIME_FEATURES.

  • is_event (bool or None, default None) – Event features or not. Event features have EVENT_PREFIX.

  • is_trend (bool or None, default None) – Trend features or not. Trend features have CHANGEPOINT_COL_PREFIX or “cpd”.

  • is_seasonality (bool or None, default None) – Seasonality feature or not. Seasonality features have SEASONALITY_REGEX.

  • is_lag (bool or None, default None) – Lagged features or not. Lagged features have “lag”.

  • is_regressor (bool or None, default None) – Regressor features or not. These are extra features provided by users through extra_pred_cols in the fit function.

  • is_interaction (bool or None, default None) – Interaction feature or not. Interaction features have “:”.

  • return_df (bool, default False) –

    If True, the filtered coefficient summary df is also returned.

    Otherwise, the filtered coefficient summary df is printed only.

Returns

filtered_coef_summary – If return_df is True, returns the coefficient summary df filtered by the given conditions.

Return type

pandas.DataFrame or None

Constants

class greykite.common.evaluation.EvaluationMetricEnum(value)[source]

Valid evaluation metrics. The value tuple is (score_func: callable, greater_is_better: bool, short_name: str).

add_finite_filter_to_scorer is added to the metrics that are directly imported from sklearn.metrics (e.g. mean_squared_error) to ensure that the metric gets calculated even when inputs have missing values.

Correlation = (<function add_finite_filter_to_scorer.<locals>.score_func_finite>, True, 'CORR')

Pearson correlation coefficient between forecast and actuals. Higher is better.

CoefficientOfDetermination = (<function add_finite_filter_to_scorer.<locals>.score_func_finite>, True, 'R2')

Coefficient of determination. See sklearn.metrics.r2_score. Higher is better. Equals 1.0 - mean_squared_error / variance(actuals).

MeanSquaredError = (<function add_finite_filter_to_scorer.<locals>.score_func_finite>, False, 'MSE')

Mean squared error, the average of squared differences, see sklearn.metrics.mean_squared_error.

RootMeanSquaredError = (<function add_finite_filter_to_scorer.<locals>.score_func_finite>, False, 'RMSE')

Root mean squared error, the square root of sklearn.metrics.mean_squared_error.

MeanAbsoluteError = (<function add_finite_filter_to_scorer.<locals>.score_func_finite>, False, 'MAE')

Mean absolute error, average of absolute differences, see sklearn.metrics.mean_absolute_error.

MedianAbsoluteError = (<function add_finite_filter_to_scorer.<locals>.score_func_finite>, False, 'MedAE')

Median absolute error, median of absolute differences, see sklearn.metrics.median_absolute_error.

MeanAbsolutePercentError = (<function add_finite_filter_to_scorer.<locals>.score_func_finite>, False, 'MAPE')

Mean absolute percent error, error relative to actuals expressed as a %, see wikipedia MAPE.

MedianAbsolutePercentError = (<function add_finite_filter_to_scorer.<locals>.score_func_finite>, False, 'MedAPE')

Median absolute percent error, median of error relative to actuals expressed as a %, a median version of the MeanAbsolutePercentError, less affected by extreme values.

SymmetricMeanAbsolutePercentError = (<function add_finite_filter_to_scorer.<locals>.score_func_finite>, False, 'sMAPE')

Symmetric mean absolute percent error, error relative to (actuals+forecast) expressed as a %. Note that we do not include a factor of 2 in the denominator, so the range is 0% to 100%, see wikipedia sMAPE.
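A plain-Python sketch of this sMAPE variant (no factor of 2 in the denominator, so values fall in 0% to 100%); edge cases such as actual == forecast == 0 are ignored here, and the library implementation may differ:

```python
# sMAPE with denominator (|actual| + |forecast|), no factor of 2,
# expressed as a percent in the range 0% to 100%.
def smape(y_true, y_pred):
    terms = [
        abs(f - a) / (abs(a) + abs(f))
        for a, f in zip(y_true, y_pred)
    ]
    return 100.0 * sum(terms) / len(terms)

print(smape([100.0, 100.0], [100.0, 100.0]))  # 0.0 (perfect forecast)
print(smape([100.0], [0.0]))                  # 100.0 (worst case)
```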

Quantile80 = (<function quantile_loss_q.<locals>.quantile_loss_wrapper>, False, 'Q80')

Quantile loss with q=0.80:

np.where(y_true < y_pred, (1 - q) * (y_pred - y_true), q * (y_true - y_pred)).mean()

Quantile95 = (<function quantile_loss_q.<locals>.quantile_loss_wrapper>, False, 'Q95')

Quantile loss with q=0.95:

np.where(y_true < y_pred, (1 - q) * (y_pred - y_true), q * (y_true - y_pred)).mean()

Quantile99 = (<function quantile_loss_q.<locals>.quantile_loss_wrapper>, False, 'Q99')

Quantile loss with q=0.99:

np.where(y_true < y_pred, (1 - q) * (y_pred - y_true), q * (y_true - y_pred)).mean()
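The quantile loss formula above can be written in plain Python (a sketch mirroring the np.where expression, not the library implementation):

```python
# Quantile loss: over-prediction is penalized by (1 - q) and
# under-prediction by q, so a high q such as 0.95 punishes
# under-prediction far more heavily.
def quantile_loss(y_true, y_pred, q):
    losses = [
        (1 - q) * (f - a) if a < f else q * (a - f)
        for a, f in zip(y_true, y_pred)
    ]
    return sum(losses) / len(losses)

print(round(quantile_loss([10.0], [12.0], 0.95), 6))  # 0.1 (over-predict by 2)
print(round(quantile_loss([10.0], [8.0], 0.95), 6))   # 1.9 (under-predict by 2)
```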

FractionOutsideTolerance1 = (functools.partial(<function add_finite_filter_to_scorer.<locals>.score_func_finite>, rtol=0.01), False, 'OutsideTolerance1p')

Fraction of forecasted values that deviate more than 1% from the actual

FractionOutsideTolerance2 = (functools.partial(<function add_finite_filter_to_scorer.<locals>.score_func_finite>, rtol=0.02), False, 'OutsideTolerance2p')

Fraction of forecasted values that deviate more than 2% from the actual

FractionOutsideTolerance3 = (functools.partial(<function add_finite_filter_to_scorer.<locals>.score_func_finite>, rtol=0.03), False, 'OutsideTolerance3p')

Fraction of forecasted values that deviate more than 3% from the actual

FractionOutsideTolerance4 = (functools.partial(<function add_finite_filter_to_scorer.<locals>.score_func_finite>, rtol=0.04), False, 'OutsideTolerance4p')

Fraction of forecasted values that deviate more than 4% from the actual

FractionOutsideTolerance5 = (functools.partial(<function add_finite_filter_to_scorer.<locals>.score_func_finite>, rtol=0.05), False, 'OutsideTolerance5p')

Fraction of forecasted values that deviate more than 5% from the actual

get_metric_func()[source]

Returns the metric function

get_metric_greater_is_better()[source]

Returns the greater_is_better boolean

get_metric_name()[source]

Returns the short name

Constants used by code in common or in multiple places: algo, sklearn, and/or framework.

greykite.common.constants.TIME_COL = 'ts'

The default name for the column with the timestamps of the time series

greykite.common.constants.VALUE_COL = 'y'

The default name for the column with the values of the time series

greykite.common.constants.ACTUAL_COL = 'actual'

The column name representing actual (observed) values

greykite.common.constants.PREDICTED_COL = 'forecast'

The column name representing the predicted values

greykite.common.constants.PREDICTED_LOWER_COL = 'forecast_lower'

The column name representing lower bounds of prediction interval

greykite.common.constants.PREDICTED_UPPER_COL = 'forecast_upper'

The column name representing upper bounds of prediction interval

greykite.common.constants.NULL_PREDICTED_COL = 'forecast_null'

The column name representing predicted values from null model

greykite.common.constants.ERR_STD_COL = 'err_std'

The column name representing the error standard deviation from models

greykite.common.constants.R2_null_model_score = 'R2_null_model_score'

Evaluation metric. Improvement in the specified loss function compared to the predictions of a null model.

greykite.common.constants.FRACTION_OUTSIDE_TOLERANCE = 'Outside Tolerance (fraction)'

Evaluation metric. The fraction of predictions outside the specified tolerance level

greykite.common.constants.PREDICTION_BAND_WIDTH = 'Prediction Band Width (%)'

Evaluation metric. Relative size of prediction bands vs actual, as a percent

greykite.common.constants.PREDICTION_BAND_COVERAGE = 'Prediction Band Coverage (fraction)'

Evaluation metric. Fraction of observations within the bands

greykite.common.constants.LOWER_BAND_COVERAGE = 'Coverage: Lower Band'

Evaluation metric. Fraction of observations within the lower band

greykite.common.constants.UPPER_BAND_COVERAGE = 'Coverage: Upper Band'

Evaluation metric. Fraction of observations within the upper band

greykite.common.constants.COVERAGE_VS_INTENDED_DIFF = 'Coverage Diff: Actual_Coverage - Intended_Coverage'

Evaluation metric. Difference between actual and intended coverage

greykite.common.constants.EVENT_DF_DATE_COL = 'date'

Name of date column for the DataFrames passed to silverkite custom_daily_event_df_dict

greykite.common.constants.EVENT_DF_LABEL_COL = 'event_name'

Name of event column for the DataFrames passed to silverkite custom_daily_event_df_dict

greykite.common.constants.EVENT_PREFIX = 'events'

Prefix for naming event features.

greykite.common.constants.EVENT_DEFAULT = ''

Label used for days without an event.

greykite.common.constants.EVENT_INDICATOR = 'event'

Binary indicator for an event

greykite.common.constants.CHANGEPOINT_COL_PREFIX = 'changepoint'

Prefix for naming changepoint features.

greykite.common.constants.CHANGEPOINT_COL_PREFIX_SHORT = 'cp'

Short prefix for naming changepoint features.

greykite.common.constants.START_DATE_COL = 'start_date'

Start timestamp column name

greykite.common.constants.END_DATE_COL = 'end_date'

End timestamp column name

greykite.common.constants.ADJUSTMENT_DELTA_COL = 'adjustment_delta'

Adjustment column

greykite.common.constants.METRIC_COL = 'metric'

Column to denote metric of interest

greykite.common.constants.DIMENSION_COL = 'dimension'

Dimension column

greykite.common.constants.GROWTH_COL_ALIAS = {'cuberoot': 'ct_root3', 'cubic': 'ct3', 'linear': 'ct1', 'quadratic': 'ct2', 'sqrt': 'ct_sqrt'}

Human-readable names for the growth columns generated by build_time_features_df

greykite.common.constants.TIME_FEATURES = ['datetime', 'date', 'year', 'year_length', 'quarter', 'quarter_start', 'quarter_length', 'month', 'month_length', 'woy', 'doy', 'doq', 'dom', 'dow', 'str_dow', 'str_doy', 'hour', 'minute', 'second', 'year_month', 'year_woy', 'month_dom', 'year_woy_dow', 'woy_dow', 'dow_hr', 'dow_hr_min', 'tod', 'tow', 'tom', 'toq', 'toy', 'conti_year', 'is_weekend', 'dow_grouped', 'ct1', 'ct2', 'ct3', 'ct_sqrt', 'ct_root3']

Time features generated by build_time_features_df

greykite.common.constants.LAG_INFIX = '_lag'

Infix for lagged feature names

greykite.common.constants.AGG_LAG_INFIX = 'avglag'

Infix for aggregated lag feature names

greykite.common.constants.TREND_REGEX = 'changepoint\\d|ct\\d|ct_|cp\\d'

Growth terms, including changepoints.

greykite.common.constants.SEASONALITY_REGEX = 'sin\\d|cos\\d'

Seasonality terms modeled by fourier series.

greykite.common.constants.EVENT_REGEX = 'events_'

Event terms.

greykite.common.constants.LAG_REGEX = '_lag\\d|_avglag_\\d'

Lag terms.
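A sketch of how these regex constants can classify feature names (the classify_feature helper and its ordering of checks are illustrative, not part of greykite; the patterns are copied from the constants above):

```python
import re

# Patterns from the constants above.
TREND_REGEX = r"changepoint\d|ct\d|ct_|cp\d"
SEASONALITY_REGEX = r"sin\d|cos\d"
EVENT_REGEX = r"events_"
LAG_REGEX = r"_lag\d|_avglag_\d"

def classify_feature(name):
    """Hypothetical helper: return the first matching term category."""
    if re.search(TREND_REGEX, name):
        return "trend"
    if re.search(SEASONALITY_REGEX, name):
        return "seasonality"
    if re.search(EVENT_REGEX, name):
        return "event"
    if re.search(LAG_REGEX, name):
        return "lag"
    return "other"

print(classify_feature("ct1"))                   # trend
print(classify_feature("sin2_tow_weekly"))       # seasonality
print(classify_feature("events_Christmas Day"))  # event
print(classify_feature("y_lag7"))                # lag
```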

greykite.common.constants.LOGGER_NAME = 'Greykite'

Name used by the logger.

Constants used by greykite.framework.

greykite.framework.constants.EVALUATION_PERIOD_CV_MAX_SPLITS = 3

Default value for EvaluationPeriodParam().cv_max_splits

greykite.framework.constants.COMPUTATION_N_JOBS = 1

Default value for ComputationParam.n_jobs

greykite.framework.constants.COMPUTATION_VERBOSE = 1

Default value for ComputationParam.verbose

greykite.framework.constants.CV_REPORT_METRICS_ALL = 'ALL'

Set cv_report_metrics to this value to compute all metrics during CV

greykite.framework.constants.FRACTION_OUTSIDE_TOLERANCE_NAME = 'OutsideTolerance'

Short name used to report the result of FRACTION_OUTSIDE_TOLERANCE in CV

greykite.framework.constants.CUSTOM_SCORE_FUNC_NAME = 'score'

Short name used to report the result of custom score_func in CV

greykite.framework.constants.MEAN_COL_GROUP = 'mean'

Columns with mean.

greykite.framework.constants.QUANTILE_COL_GROUP = 'quantile'

Columns with quantile.

greykite.framework.constants.OVERLAY_COL_GROUP = 'overlay'

Columns with overlay.

greykite.framework.constants.FORECAST_STEP_COL = 'forecast_step'

The column name for forecast step in benchmarking

class greykite.algo.forecast.silverkite.constants.silverkite_constant.SilverkiteConstant[source]

Uses the appropriate constant mixins to provide all the constants that will be used by Silverkite.

get_silverkite_column() → Type[greykite.algo.forecast.silverkite.constants.silverkite_column.SilverkiteColumn]

Return the SilverkiteColumn constants

get_silverkite_components_enum() → Type[greykite.algo.forecast.silverkite.constants.silverkite_component.SilverkiteComponentsEnum]

Return the SilverkiteComponentsEnum constants

get_silverkite_holiday() → Type[greykite.algo.forecast.silverkite.constants.silverkite_holiday.SilverkiteHoliday]

Return the SilverkiteHoliday constants

get_silverkite_seasonality_enum() → Type[greykite.algo.forecast.silverkite.constants.silverkite_seasonality.SilverkiteSeasonalityEnum]

Return the SilverkiteSeasonalityEnum constants

get_silverkite_time_frequency_enum() → Type[greykite.algo.forecast.silverkite.constants.silverkite_time_frequency.SilverkiteTimeFrequencyEnum]

Return the SilverkiteTimeFrequencyEnum constants

class greykite.algo.forecast.silverkite.constants.silverkite_column.SilverkiteColumn[source]

Silverkite feature sets for sub-daily data.

COLS_HOUR_OF_WEEK: str = 'hour_of_week'

Silverkite feature_sets_enabled key. constant hour of week effect

COLS_WEEKEND_SEAS: str = 'is_weekend:daily_seas'

Silverkite feature_sets_enabled key. daily seasonality interaction with is_weekend

COLS_DAY_OF_WEEK_SEAS: str = 'day_of_week:daily_seas'

Silverkite feature_sets_enabled key. daily seasonality interaction with day of week

COLS_TREND_DAILY_SEAS: str = 'trend:is_weekend:daily_seas'

Silverkite feature_sets_enabled key. allow daily seasonality to change over time, depending on is_weekend

COLS_EVENT_SEAS: str = 'event:daily_seas'

Silverkite feature_sets_enabled key. allow sub-daily event effects

COLS_EVENT_WEEKEND_SEAS: str = 'event:is_weekend:daily_seas'

Silverkite feature_sets_enabled key. allow sub-daily event effect to interact with is_weekend

COLS_DAY_OF_WEEK: str = 'day_of_week'

Silverkite feature_sets_enabled key. constant day of week effect

COLS_TREND_WEEKEND: str = 'trend:is_weekend'

Silverkite feature_sets_enabled key. allow trend (growth, changepoints) to interact with is_weekend

COLS_TREND_DAY_OF_WEEK: str = 'trend:day_of_week'

Silverkite feature_sets_enabled key. allow trend to interact with day of week

COLS_TREND_WEEKLY_SEAS: str = 'trend:weekly_seas'

Silverkite feature_sets_enabled key. allow weekly seasonality to change over time

class greykite.algo.forecast.silverkite.constants.silverkite_component.SilverkiteComponentsEnum(value)[source]

Defines groupby time feature, xlabel and ylabel for Silverkite Component Plots.

class greykite.algo.forecast.silverkite.constants.silverkite_holiday.SilverkiteHoliday[source]

Holiday constants to be used by Silverkite

HOLIDAY_LOOKUP_COUNTRIES_AUTO = ('UnitedStates', 'UnitedKingdom', 'India', 'France', 'China')

Auto setting for the countries that contain the holidays to include in the model

HOLIDAYS_TO_MODEL_SEPARATELY_AUTO = ("New Year's Day", 'Chinese New Year', 'Christmas Day', 'Independence Day', 'Thanksgiving', 'Labor Day', 'Good Friday', 'Easter Monday [England, Wales, Northern Ireland]', 'Memorial Day', 'Veterans Day')

Auto setting for the holidays to include in the model

ALL_HOLIDAYS_IN_COUNTRIES = 'ALL_HOLIDAYS_IN_COUNTRIES'

Value for holidays_to_model_separately to request all holidays in the lookup countries

HOLIDAYS_TO_INTERACT = ('Christmas Day', 'Christmas Day_minus_1', 'Christmas Day_minus_2', 'Christmas Day_plus_1', 'Christmas Day_plus_2', 'New Years Day', 'New Years Day_minus_1', 'New Years Day_minus_2', 'New Years Day_plus_1', 'New Years Day_plus_2', 'Thanksgiving', 'Thanksgiving_plus_1', 'Independence Day')

Significant holidays that may have a different daily seasonality pattern

class greykite.algo.forecast.silverkite.constants.silverkite_seasonality.SilverkiteSeasonalityEnum(value)[source]

Defines default seasonalities for Silverkite estimator. Names should match those in SeasonalityEnum. The default order for various seasonalities is stored in this enum.

DAILY_SEASONALITY: greykite.algo.forecast.silverkite.constants.silverkite_seasonality.SilverkiteSeasonality = SilverkiteSeasonality(name='tod', period=24.0, order=12, seas_names='daily', default_min_days=2)

tod is 0-24 time of day (tod granularity based on input data, up to second level). Requires at least two full cycles to add the seasonal term (default_min_days=2).

WEEKLY_SEASONALITY: greykite.algo.forecast.silverkite.constants.silverkite_seasonality.SilverkiteSeasonality = SilverkiteSeasonality(name='tow', period=7.0, order=4, seas_names='weekly', default_min_days=14)

tow is 0-7 time of week (tow granularity based on input data, up to second level). order=4 for full flexibility to model daily input.

MONTHLY_SEASONALITY: greykite.algo.forecast.silverkite.constants.silverkite_seasonality.SilverkiteSeasonality = SilverkiteSeasonality(name='tom', period=1.0, order=2, seas_names='monthly', default_min_days=60)

tom is 0-1 time of month (tom granularity based on input data, up to daily level).

QUARTERLY_SEASONALITY: greykite.algo.forecast.silverkite.constants.silverkite_seasonality.SilverkiteSeasonality = SilverkiteSeasonality(name='toq', period=1.0, order=5, seas_names='quarterly', default_min_days=180)

toq (continuous time of quarter) with natural period. Each day is mapped to a value in [0.0, 1.0) based on its position in the calendar quarter: (Jan1-Mar31, Apr1-Jun30, Jul1-Sep30, Oct1-Dec31). The start of each quarter is 0.0.

YEARLY_SEASONALITY: greykite.algo.forecast.silverkite.constants.silverkite_seasonality.SilverkiteSeasonality = SilverkiteSeasonality(name='ct1', period=1.0, order=15, seas_names='yearly', default_min_days=548)

ct1 (continuous year) with natural period.
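The "time of quarter" mapping described above can be sketched with the standard library (illustrative only; greykite computes these features internally, potentially at finer than daily granularity):

```python
from datetime import date

def time_of_quarter(d):
    """Map a date to [0.0, 1.0) by its position in its calendar quarter."""
    q_start_month = 3 * ((d.month - 1) // 3) + 1  # 1, 4, 7, or 10
    q_start = date(d.year, q_start_month, 1)
    if q_start_month + 3 > 12:
        q_end = date(d.year + 1, 1, 1)
    else:
        q_end = date(d.year, q_start_month + 3, 1)
    return (d - q_start).days / (q_end - q_start).days

print(time_of_quarter(date(2021, 1, 1)))   # 0.0 (start of Q1)
print(time_of_quarter(date(2021, 2, 15)))  # 0.5 (midpoint of Q1)
print(time_of_quarter(date(2021, 4, 1)))   # 0.0 (start of Q2)
```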

class greykite.algo.forecast.silverkite.constants.silverkite_time_frequency.SilverkiteTimeFrequencyEnum(value)[source]

Provides properties for modeling various time frequencies in Silverkite. The enum name is the time frequency, corresponding to the simple time frequencies in SimpleTimeFrequencyEnum.

Provides templates for SimpleSilverkiteEstimator that are pre-tuned to fit specific use cases.

A subset of these templates are recognized by ModelTemplateEnum.

simple_silverkite_template also accepts any model_template name that follows the naming convention in this file. For details, see the model_template parameter in SimpleSilverkiteTemplate.

class greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_FREQ(value)[source]

Valid values for simple silverkite template string name frequency.

greykite.framework.templates.simple_silverkite_template_config.VALID_FREQ = ['HOURLY', 'DAILY', 'WEEKLY']

Valid non-default values for simple silverkite template string name frequency. These are the non-default frequencies recognized by SimpleSilverkiteTemplateOptions.

class greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_SEAS(value)[source]

Valid values for simple silverkite template string name seasonality.

class greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_GR(value)[source]

Valid values for simple silverkite template string name growth_term.

class greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_CP(value)[source]

Valid values for simple silverkite template string name changepoints_dict.

class greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_HOL(value)[source]

Valid values for simple silverkite template string name events.

class greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_FEASET(value)[source]

Valid values for simple silverkite template string name feature_sets_enabled.

class greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_ALGO(value)[source]

Valid values for simple silverkite template string name fit_algorithm.

class greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_AR(value)[source]

Valid values for simple silverkite template string name autoregression.

class greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_DSI(value)[source]

Valid values for simple silverkite template string name daily seasonality max interaction order.

class greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_WSI(value)[source]

Valid values for simple silverkite template string name weekly seasonality max interaction order.

class greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_COMPONENT_KEYWORDS(value)[source]

Valid values for simple silverkite template string name keywords. The names are the keywords and the values are the corresponding value enum. Can be used to create an instance of SimpleSilverkiteTemplateOptions.

class greykite.framework.templates.simple_silverkite_template_config.SimpleSilverkiteTemplateOptions(freq: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_FREQ = <SILVERKITE_FREQ.DAILY: 'DAILY'>, seas: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_SEAS = <SILVERKITE_SEAS.LT: 'LT'>, gr: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_GR = <SILVERKITE_GR.LINEAR: 'LINEAR'>, cp: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_CP = <SILVERKITE_CP.NONE: 'NONE'>, hol: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_HOL = <SILVERKITE_HOL.NONE: 'NONE'>, feaset: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_FEASET = <SILVERKITE_FEASET.OFF: 'OFF'>, algo: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_ALGO = <SILVERKITE_ALGO.LINEAR: 'LINEAR'>, ar: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_AR = <SILVERKITE_AR.OFF: 'OFF'>, dsi: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_DSI = <SILVERKITE_DSI.AUTO: 'AUTO'>, wsi: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_WSI = <SILVERKITE_WSI.AUTO: 'AUTO'>)[source]

Defines generic simple silverkite template options.

Attributes can be set to different values using SILVERKITE_COMPONENT_KEYWORDS for high level tuning.

freq represents data frequency.

The other attributes stand for seasonality, growth, changepoints_dict, events, feature_sets_enabled, fit_algorithm and autoregression in ModelComponentsParam, which are used in SimpleSilverkiteTemplate.

freq: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_FREQ = 'DAILY'

Valid values for simple silverkite template string name frequency. See SILVERKITE_FREQ.

seas: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_SEAS = 'LT'

Valid values for simple silverkite template string name seasonality. See SILVERKITE_SEAS.

gr: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_GR = 'LINEAR'

Valid values for simple silverkite template string name growth. See SILVERKITE_GR.

cp: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_CP = 'NONE'

Valid values for simple silverkite template string name changepoints. See SILVERKITE_CP.

hol: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_HOL = 'NONE'

Valid values for simple silverkite template string name holiday. See SILVERKITE_HOL.

feaset: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_FEASET = 'OFF'

Valid values for simple silverkite template string name feature sets enabled. See SILVERKITE_FEASET.

algo: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_ALGO = 'LINEAR'

Valid values for simple silverkite template string name fit algorithm. See SILVERKITE_ALGO.

ar: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_AR = 'OFF'

Valid values for simple silverkite template string name autoregression. See SILVERKITE_AR.

dsi: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_DSI = 'AUTO'

Valid values for simple silverkite template string name max daily seasonality interaction order. See SILVERKITE_DSI.

wsi: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_WSI = 'AUTO'

Valid values for simple silverkite template string name max weekly seasonality interaction order. See SILVERKITE_WSI.

greykite.framework.templates.simple_silverkite_template_config.COMMON_MODELCOMPONENTPARAM_PARAMETERS = {'ALGO': {'LASSO': {'fit_algorithm': 'lasso', 'fit_algorithm_params': None}, 'LINEAR': {'fit_algorithm': 'linear', 'fit_algorithm_params': None}, 'RIDGE': {'fit_algorithm': 'ridge', 'fit_algorithm_params': None}, 'SGD': {'fit_algorithm': 'sgd', 'fit_algorithm_params': None}}, 'AR': {'AUTO': {'autoreg_dict': 'auto'}, 'OFF': {'autoreg_dict': None}}, 'CP': {'DAILY': {'HV': {'method': 'auto', 'no_changepoint_distance_from_end': '180D', 'potential_changepoint_distance': '15D', 'regularization_strength': 0.3, 'resample_freq': '7D', 'yearly_seasonality_change_freq': '365D', 'yearly_seasonality_order': 15}, 'LT': {'method': 'auto', 'no_changepoint_distance_from_end': '90D', 'potential_changepoint_distance': '15D', 'regularization_strength': 0.6, 'resample_freq': '7D', 'yearly_seasonality_change_freq': None, 'yearly_seasonality_order': 15}, 'NM': {'method': 'auto', 'no_changepoint_distance_from_end': '180D', 'potential_changepoint_distance': '15D', 'regularization_strength': 0.5, 'resample_freq': '7D', 'yearly_seasonality_change_freq': '365D', 'yearly_seasonality_order': 15}, 'NONE': None}, 'HOURLY': {'HV': {'method': 'auto', 'no_changepoint_distance_from_end': '30D', 'potential_changepoint_distance': '15D', 'regularization_strength': 0.3, 'resample_freq': 'D', 'yearly_seasonality_change_freq': '365D', 'yearly_seasonality_order': 15}, 'LT': {'method': 'auto', 'no_changepoint_distance_from_end': '30D', 'potential_changepoint_distance': '7D', 'regularization_strength': 0.6, 'resample_freq': 'D', 'yearly_seasonality_change_freq': None, 'yearly_seasonality_order': 15}, 'NM': {'method': 'auto', 'no_changepoint_distance_from_end': '30D', 'potential_changepoint_distance': '15D', 'regularization_strength': 0.5, 'resample_freq': 'D', 'yearly_seasonality_change_freq': '365D', 'yearly_seasonality_order': 15}, 'NONE': None}, 'WEEKLY': {'HV': {'method': 'auto', 
'no_changepoint_distance_from_end': '180D', 'potential_changepoint_distance': '14D', 'regularization_strength': 0.3, 'resample_freq': '7D', 'yearly_seasonality_change_freq': '365D', 'yearly_seasonality_order': 15}, 'LT': {'method': 'auto', 'no_changepoint_distance_from_end': '180D', 'potential_changepoint_distance': '14D', 'regularization_strength': 0.6, 'resample_freq': '7D', 'yearly_seasonality_change_freq': None, 'yearly_seasonality_order': 15}, 'NM': {'method': 'auto', 'no_changepoint_distance_from_end': '180D', 'potential_changepoint_distance': '14D', 'regularization_strength': 0.5, 'resample_freq': '7D', 'yearly_seasonality_change_freq': '365D', 'yearly_seasonality_order': 15}, 'NONE': None}}, 'DSI': {'DAILY': {'AUTO': 0, 'OFF': 0}, 'HOURLY': {'AUTO': 5, 'OFF': 0}, 'WEEKLY': {'AUTO': 0, 'OFF': 0}}, 'FEASET': {'AUTO': 'auto', 'OFF': False, 'ON': True}, 'GR': {'LINEAR': {'growth_term': 'linear'}, 'NONE': {'growth_term': None}}, 'HOL': {'NONE': {'daily_event_df_dict': None, 'holiday_lookup_countries': [], 'holiday_post_num_days': 0, 'holiday_pre_num_days': 0, 'holiday_pre_post_num_dict': None, 'holidays_to_model_separately': []}, 'SP1': {'daily_event_df_dict': None, 'holiday_lookup_countries': 'auto', 'holiday_post_num_days': 1, 'holiday_pre_num_days': 1, 'holiday_pre_post_num_dict': None, 'holidays_to_model_separately': 'auto'}, 'SP2': {'daily_event_df_dict': None, 'holiday_lookup_countries': 'auto', 'holiday_post_num_days': 2, 'holiday_pre_num_days': 2, 'holiday_pre_post_num_dict': None, 'holidays_to_model_separately': 'auto'}, 'SP4': {'daily_event_df_dict': None, 'holiday_lookup_countries': 'auto', 'holiday_post_num_days': 4, 'holiday_pre_num_days': 4, 'holiday_pre_post_num_dict': None, 'holidays_to_model_separately': 'auto'}, 'TG': {'daily_event_df_dict': None, 'holiday_lookup_countries': 'auto', 'holiday_post_num_days': 3, 'holiday_pre_num_days': 3, 'holiday_pre_post_num_dict': None, 'holidays_to_model_separately': []}}, 'SEAS': {'DAILY': {'HV': 
{'daily_seasonality': 0, 'monthly_seasonality': 0, 'quarterly_seasonality': 0, 'weekly_seasonality': 4, 'yearly_seasonality': 25}, 'HVQM': {'daily_seasonality': 0, 'monthly_seasonality': 4, 'quarterly_seasonality': 6, 'weekly_seasonality': 4, 'yearly_seasonality': 25}, 'LT': {'daily_seasonality': 0, 'monthly_seasonality': 0, 'quarterly_seasonality': 0, 'weekly_seasonality': 3, 'yearly_seasonality': 8}, 'LTQM': {'daily_seasonality': 0, 'monthly_seasonality': 2, 'quarterly_seasonality': 3, 'weekly_seasonality': 3, 'yearly_seasonality': 8}, 'NM': {'daily_seasonality': 0, 'monthly_seasonality': 0, 'quarterly_seasonality': 0, 'weekly_seasonality': 3, 'yearly_seasonality': 15}, 'NMQM': {'daily_seasonality': 0, 'monthly_seasonality': 4, 'quarterly_seasonality': 4, 'weekly_seasonality': 3, 'yearly_seasonality': 15}, 'NONE': {'daily_seasonality': 0, 'monthly_seasonality': 0, 'quarterly_seasonality': 0, 'weekly_seasonality': 0, 'yearly_seasonality': 0}}, 'HOURLY': {'HV': {'daily_seasonality': 12, 'monthly_seasonality': 0, 'quarterly_seasonality': 0, 'weekly_seasonality': 6, 'yearly_seasonality': 25}, 'HVQM': {'daily_seasonality': 12, 'monthly_seasonality': 4, 'quarterly_seasonality': 4, 'weekly_seasonality': 6, 'yearly_seasonality': 25}, 'LT': {'daily_seasonality': 5, 'monthly_seasonality': 0, 'quarterly_seasonality': 0, 'weekly_seasonality': 3, 'yearly_seasonality': 8}, 'LTQM': {'daily_seasonality': 5, 'monthly_seasonality': 2, 'quarterly_seasonality': 2, 'weekly_seasonality': 3, 'yearly_seasonality': 8}, 'NM': {'daily_seasonality': 8, 'monthly_seasonality': 0, 'quarterly_seasonality': 0, 'weekly_seasonality': 4, 'yearly_seasonality': 15}, 'NMQM': {'daily_seasonality': 8, 'monthly_seasonality': 3, 'quarterly_seasonality': 3, 'weekly_seasonality': 4, 'yearly_seasonality': 15}, 'NONE': {'daily_seasonality': 0, 'monthly_seasonality': 0, 'quarterly_seasonality': 0, 'weekly_seasonality': 0, 'yearly_seasonality': 0}}, 'WEEKLY': {'HV': {'daily_seasonality': 0, 
'monthly_seasonality': 0, 'quarterly_seasonality': 0, 'weekly_seasonality': 0, 'yearly_seasonality': 25}, 'HVQM': {'daily_seasonality': 0, 'monthly_seasonality': 4, 'quarterly_seasonality': 4, 'weekly_seasonality': 0, 'yearly_seasonality': 25}, 'LT': {'daily_seasonality': 0, 'monthly_seasonality': 0, 'quarterly_seasonality': 0, 'weekly_seasonality': 0, 'yearly_seasonality': 8}, 'LTQM': {'daily_seasonality': 0, 'monthly_seasonality': 2, 'quarterly_seasonality': 2, 'weekly_seasonality': 0, 'yearly_seasonality': 8}, 'NM': {'daily_seasonality': 0, 'monthly_seasonality': 0, 'quarterly_seasonality': 0, 'weekly_seasonality': 0, 'yearly_seasonality': 15}, 'NMQM': {'daily_seasonality': 0, 'monthly_seasonality': 3, 'quarterly_seasonality': 3, 'weekly_seasonality': 0, 'yearly_seasonality': 15}, 'NONE': {'daily_seasonality': 0, 'monthly_seasonality': 0, 'quarterly_seasonality': 0, 'weekly_seasonality': 0, 'yearly_seasonality': 0}}}, 'WSI': {'DAILY': {'AUTO': 2, 'OFF': 0}, 'HOURLY': {'AUTO': 2, 'OFF': 0}, 'WEEKLY': {'AUTO': 0, 'OFF': 0}}}

Defines the default component values for SimpleSilverkiteTemplate. The components include seasonality, growth, holiday, trend changepoints, feature sets, autoregression, fit algorithm, etc. These are used when config.model_template provides the SimpleSilverkiteTemplateOptions.

greykite.framework.templates.simple_silverkite_template_config.SILVERKITE = ModelComponentsParam(autoregression={'autoreg_dict': None}, changepoints={'changepoints_dict': None, 'seasonality_changepoints_dict': None}, custom={'fit_algorithm_dict': {'fit_algorithm': 'ridge', 'fit_algorithm_params': None}, 'feature_sets_enabled': 'auto', 'max_daily_seas_interaction_order': 5, 'max_weekly_seas_interaction_order': 2, 'extra_pred_cols': [], 'min_admissible_value': None, 'max_admissible_value': None}, events={'holidays_to_model_separately': 'auto', 'holiday_lookup_countries': 'auto', 'holiday_pre_num_days': 2, 'holiday_post_num_days': 2, 'holiday_pre_post_num_dict': None, 'daily_event_df_dict': None}, growth={'growth_term': 'linear'}, hyperparameter_override=None, regressors={'regressor_cols': []}, seasonality={'yearly_seasonality': 'auto', 'quarterly_seasonality': 'auto', 'monthly_seasonality': 'auto', 'weekly_seasonality': 'auto', 'daily_seasonality': 'auto'}, uncertainty={'uncertainty_dict': None})

Defines the SILVERKITE template. Contains automatic growth, seasonality, holidays, and interactions. Does not include autoregression. Best for hourly and daily frequencies. Uses SimpleSilverkiteEstimator.

greykite.framework.templates.simple_silverkite_template_config.MULTI_TEMPLATES = {'SILVERKITE_DAILY_90': ['DAILY_SEAS_LTQM_GR_LINEAR_CP_LT_HOL_SP2_FEASET_AUTO_ALGO_LINEAR_AR_OFF_DSI_AUTO_WSI_AUTO', 'DAILY_SEAS_LTQM_GR_LINEAR_CP_NONE_HOL_SP2_FEASET_AUTO_ALGO_LINEAR_AR_OFF_DSI_AUTO_WSI_AUTO', 'DAILY_SEAS_LTQM_GR_LINEAR_CP_LT_HOL_SP2_FEASET_AUTO_ALGO_RIDGE_AR_OFF_DSI_AUTO_WSI_AUTO', 'DAILY_SEAS_NM_GR_LINEAR_CP_LT_HOL_SP4_FEASET_AUTO_ALGO_RIDGE_AR_OFF_DSI_AUTO_WSI_AUTO'], 'SILVERKITE_HOURLY_1': ['HOURLY_SEAS_LT_GR_LINEAR_CP_NONE_HOL_TG_FEASET_AUTO_ALGO_LINEAR_AR_AUTO', 'HOURLY_SEAS_NM_GR_LINEAR_CP_LT_HOL_SP4_FEASET_AUTO_ALGO_LINEAR_AR_AUTO', 'HOURLY_SEAS_LT_GR_LINEAR_CP_NM_HOL_SP4_FEASET_OFF_ALGO_RIDGE_AR_AUTO', 'HOURLY_SEAS_NM_GR_LINEAR_CP_NM_HOL_SP1_FEASET_AUTO_ALGO_RIDGE_AR_AUTO'], 'SILVERKITE_HOURLY_168': ['HOURLY_SEAS_LT_GR_LINEAR_CP_LT_HOL_SP4_FEASET_AUTO_ALGO_RIDGE_AR_OFF', 'HOURLY_SEAS_LT_GR_LINEAR_CP_LT_HOL_SP2_FEASET_AUTO_ALGO_RIDGE_AR_OFF', 'HOURLY_SEAS_NM_GR_LINEAR_CP_NONE_HOL_SP4_FEASET_OFF_ALGO_LINEAR_AR_AUTO', 'HOURLY_SEAS_NM_GR_LINEAR_CP_NM_HOL_SP1_FEASET_AUTO_ALGO_RIDGE_AR_OFF'], 'SILVERKITE_HOURLY_24': ['HOURLY_SEAS_LT_GR_LINEAR_CP_NM_HOL_SP4_FEASET_AUTO_ALGO_RIDGE_AR_AUTO', 'HOURLY_SEAS_LT_GR_LINEAR_CP_NONE_HOL_SP4_FEASET_AUTO_ALGO_RIDGE_AR_AUTO', 'HOURLY_SEAS_NM_GR_LINEAR_CP_LT_HOL_SP1_FEASET_OFF_ALGO_LINEAR_AR_AUTO', 'HOURLY_SEAS_NM_GR_LINEAR_CP_NM_HOL_SP4_FEASET_AUTO_ALGO_RIDGE_AR_AUTO'], 'SILVERKITE_HOURLY_336': ['HOURLY_SEAS_LT_GR_LINEAR_CP_LT_HOL_SP2_FEASET_AUTO_ALGO_RIDGE_AR_OFF', 'HOURLY_SEAS_LT_GR_LINEAR_CP_LT_HOL_SP4_FEASET_AUTO_ALGO_RIDGE_AR_OFF', 'HOURLY_SEAS_NM_GR_LINEAR_CP_LT_HOL_SP1_FEASET_AUTO_ALGO_LINEAR_AR_OFF', 'HOURLY_SEAS_NM_GR_LINEAR_CP_NM_HOL_SP1_FEASET_AUTO_ALGO_LINEAR_AR_AUTO'], 'SILVERKITE_WEEKLY': ['WEEKLY_SEAS_NM_GR_LINEAR_CP_NONE_HOL_NONE_FEASET_OFF_ALGO_LINEAR_AR_OFF_DSI_AUTO_WSI_AUTO', 'WEEKLY_SEAS_NM_GR_LINEAR_CP_LT_HOL_NONE_FEASET_OFF_ALGO_LINEAR_AR_OFF_DSI_AUTO_WSI_AUTO', 
'WEEKLY_SEAS_HV_GR_LINEAR_CP_NM_HOL_NONE_FEASET_OFF_ALGO_RIDGE_AR_OFF_DSI_AUTO_WSI_AUTO', 'WEEKLY_SEAS_HV_GR_LINEAR_CP_LT_HOL_NONE_FEASET_OFF_ALGO_RIDGE_AR_OFF_DSI_AUTO_WSI_AUTO']}

A dictionary of multi templates.

  • Keys are the available multi template names (valid strings for config.model_template).

  • Values are lists of single model template names; each name resolves to a ModelComponentsParam.

greykite.framework.templates.simple_silverkite_template_config.SINGLE_MODEL_TEMPLATE_TYPE

Types accepted by SimpleSilverkiteTemplate for config.model_template for a single template.

alias of Union[str, greykite.framework.templates.autogen.forecast_config.ModelComponentsParam, greykite.framework.templates.simple_silverkite_template_config.SimpleSilverkiteTemplateOptions]

class greykite.framework.templates.simple_silverkite_template_config.SimpleSilverkiteTemplateConstants(COMMON_MODELCOMPONENTPARAM_PARAMETERS: Dict = <factory>, MULTI_TEMPLATES: Dict = <factory>, SILVERKITE: Union[str, greykite.framework.templates.autogen.forecast_config.ModelComponentsParam, greykite.framework.templates.simple_silverkite_template_config.SimpleSilverkiteTemplateOptions] = ModelComponentsParam(autoregression={'autoreg_dict': None}, changepoints={'changepoints_dict': None, 'seasonality_changepoints_dict': None}, custom={'fit_algorithm_dict': {'fit_algorithm': 'ridge', 'fit_algorithm_params': None}, 'feature_sets_enabled': 'auto', 'max_daily_seas_interaction_order': 5, 'max_weekly_seas_interaction_order': 2, 'extra_pred_cols': [], 'min_admissible_value': None, 'max_admissible_value': None}, events={'holidays_to_model_separately': 'auto', 'holiday_lookup_countries': 'auto', 'holiday_pre_num_days': 2, 'holiday_post_num_days': 2, 'holiday_pre_post_num_dict': None, 'daily_event_df_dict': None}, growth={'growth_term': 'linear'}, hyperparameter_override=None, regressors={'regressor_cols': []}, seasonality={'yearly_seasonality': 'auto', 'quarterly_seasonality': 'auto', 'monthly_seasonality': 'auto', 'weekly_seasonality': 'auto', 'daily_seasonality': 'auto'}, uncertainty={'uncertainty_dict': None}), SILVERKITE_COMPONENT_KEYWORDS: Type[enum.Enum] = <enum 'SILVERKITE_COMPONENT_KEYWORDS'>, SILVERKITE_EMPTY: Union[str, greykite.framework.templates.autogen.forecast_config.ModelComponentsParam, greykite.framework.templates.simple_silverkite_template_config.SimpleSilverkiteTemplateOptions] = 'DAILY_SEAS_NONE_GR_NONE_CP_NONE_HOL_NONE_FEASET_OFF_ALGO_LINEAR_AR_OFF_DSI_OFF_WSI_OFF', VALID_FREQ: List = <factory>, SimpleSilverkiteTemplateOptions: dataclasses.dataclass = <class 'greykite.framework.templates.simple_silverkite_template_config.SimpleSilverkiteTemplateOptions'>)[source]

Constants used by SimpleSilverkiteTemplate. Includes the model templates and their default values.

mutable_field is used when the default value is a mutable type like dict and list. Dataclass requires mutable default values to be wrapped in ‘default_factory’, so that instances of this dataclass cannot accidentally modify the default value. mutable_field wraps the constant accordingly.
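The pattern can be sketched with the standard dataclasses module (the helper body here is illustrative, not greykite's exact implementation):

```python
from dataclasses import dataclass, field
from typing import Dict


def mutable_field(default):
    """Wrap a mutable default in default_factory so each instance gets its own copy."""
    return field(default_factory=lambda: default.copy())


@dataclass
class Constants:
    # Without default_factory, dataclasses reject a raw dict default.
    params: Dict = mutable_field({"fit_algorithm": "ridge"})


a, b = Constants(), Constants()
a.params["fit_algorithm"] = "lasso"   # mutate one instance's copy
print(b.params["fit_algorithm"])      # ridge: b's dict is independent
```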

COMMON_MODELCOMPONENTPARAM_PARAMETERS

Defines the default component values for SimpleSilverkiteTemplate. The components include seasonality, growth, holiday, trend changepoints, feature sets, autoregression, fit algorithm, etc. These are used when config.model_template provides the SimpleSilverkiteTemplateOptions.

MULTI_TEMPLATES

A dictionary of multi templates.

  • Keys are the available multi template names (valid strings for config.model_template).

  • Values are lists of single model template names; each name resolves to a ModelComponentsParam.

SILVERKITE

Defines the "SILVERKITE" template. Contains automatic growth, seasonality, holidays, and interactions. Does not include autoregression. Best for hourly and daily frequencies. Uses SimpleSilverkiteEstimator.

class SILVERKITE_COMPONENT_KEYWORDS(value)

Valid values for simple silverkite template string name keywords. The names are the keywords, and the values are the corresponding value enums. Can be used to create an instance of SimpleSilverkiteTemplateOptions.

SILVERKITE_EMPTY

Defines the "SILVERKITE_EMPTY" template. Everything here is None or off.

VALID_FREQ

Valid non-default values for the simple silverkite template string name frequency. See SimpleSilverkiteTemplateOptions.

class SimpleSilverkiteTemplateOptions(freq: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_FREQ = <SILVERKITE_FREQ.DAILY: 'DAILY'>, seas: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_SEAS = <SILVERKITE_SEAS.LT: 'LT'>, gr: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_GR = <SILVERKITE_GR.LINEAR: 'LINEAR'>, cp: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_CP = <SILVERKITE_CP.NONE: 'NONE'>, hol: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_HOL = <SILVERKITE_HOL.NONE: 'NONE'>, feaset: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_FEASET = <SILVERKITE_FEASET.OFF: 'OFF'>, algo: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_ALGO = <SILVERKITE_ALGO.LINEAR: 'LINEAR'>, ar: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_AR = <SILVERKITE_AR.OFF: 'OFF'>, dsi: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_DSI = <SILVERKITE_DSI.AUTO: 'AUTO'>, wsi: greykite.framework.templates.simple_silverkite_template_config.SILVERKITE_WSI = <SILVERKITE_WSI.AUTO: 'AUTO'>)

Defines generic simple silverkite template options. Attributes can be set to different values using SILVERKITE_COMPONENT_KEYWORDS for high level tuning.
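The string names accepted for single templates follow a frequency-then-KEYWORD_VALUE grammar (e.g. the names listed under MULTI_TEMPLATES above). A hypothetical helper, not part of greykite, that splits such a name into its components:

```python
def parse_template_name(name: str) -> dict:
    """Split a simple silverkite template name like
    'DAILY_SEAS_LTQM_GR_LINEAR_CP_LT_HOL_SP2_FEASET_AUTO_ALGO_LINEAR_AR_OFF_DSI_AUTO_WSI_AUTO'
    into {keyword: value}. Assumes the frequency comes first, followed by
    alternating KEYWORD, VALUE tokens (as in the names shown in this config)."""
    tokens = name.split("_")
    parsed = {"FREQ": tokens[0]}
    for keyword, value in zip(tokens[1::2], tokens[2::2]):
        parsed[keyword] = value
    return parsed


parsed = parse_template_name(
    "DAILY_SEAS_LTQM_GR_LINEAR_CP_LT_HOL_SP2_FEASET_AUTO_ALGO_LINEAR_AR_OFF_DSI_AUTO_WSI_AUTO"
)
print(parsed["SEAS"], parsed["ALGO"])  # LTQM LINEAR
```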

Changepoint Detection

class greykite.algo.changepoint.adalasso.changepoint_detector.ChangepointDetector[source]

A class to implement change point detection.

Currently supports long-term change point detection only. Input is a dataframe with time_col indicating the column of time info (the format should be parsable by pd.to_datetime), and value_col indicating the column of observed time series values.

original_df

The original data df, used to retrieve original observations, if aggregation is used in fitting change points.

Type

pandas.DataFrame

time_col

The column name for time column.

Type

str

value_col

The column name for value column.

Type

str

trend_potential_changepoint_n

The number of change points that are evenly distributed over the time period.

Type

int

yearly_seasonality_order

The yearly seasonality order used when fitting trend.

Type

int

y

The observations after aggregation.

Type

pandas.Series

trend_df

The augmented df of the original_df, including regressors of trend change points and Fourier series for yearly seasonality.

Type

pandas.DataFrame

trend_model

The fitted trend model.

Type

sklearn.base.RegressorMixin

trend_coef

The estimated trend coefficients.

Type

numpy.ndarray

trend_intercept

The estimated trend intercept.

Type

float

adaptive_lasso_coef

A list of length two: the first element is the estimated trend coefficients and the second is the intercept, both estimated by adaptive lasso.

Type

list

trend_changepoints

The list of detected trend change points, parsable by pd.to_datetime

Type

list

trend_estimation

The estimated trend with detected trend change points.

Type

pandas.Series

seasonality_df

The augmented df of original_df, including regressors of seasonality change points with different Fourier series frequencies.

Type

pandas.DataFrame

seasonality_changepoints

The dictionary of detected seasonality change points for each component. Keys are component names, and values are list of change points.

Type

dict

seasonality_estimation

The estimated seasonality with detected seasonality change points. The series has the same length as original_df. Index is timestamp, and values are the estimated seasonality at each timestamp. The seasonality estimation is the estimate of the seasonality effect after removing the trend estimated by estimate_trend_with_detected_changepoints.

Type

pandas.Series

find_trend_changepoints : callable

Finds the potential trend change points for a given time series df.

plot : callable

Plot the results after implementing find_trend_changepoints.

find_trend_changepoints(df, time_col, value_col, yearly_seasonality_order=8, yearly_seasonality_change_freq=None, resample_freq='D', trend_estimator='ridge', adaptive_lasso_initial_estimator='ridge', regularization_strength=None, actual_changepoint_min_distance='30D', potential_changepoint_distance=None, potential_changepoint_n=100, no_changepoint_distance_from_begin=None, no_changepoint_proportion_from_begin=0.0, no_changepoint_distance_from_end=None, no_changepoint_proportion_from_end=0.0)[source]

Finds trend change points automatically by adaptive lasso.

The algorithm does an aggregation with a user-defined frequency, defaulting to daily.

If potential_changepoint_distance is not given, potential_changepoint_n potential change points are evenly distributed over the time period; otherwise, potential_changepoint_n is overridden by:

total_time_length / potential_changepoint_distance
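For example, the override can be computed with pandas timedelta arithmetic (the dates are illustrative):

```python
import pandas as pd

# Two years of training data.
total_time_length = pd.Timestamp("2021-01-01") - pd.Timestamp("2019-01-01")  # 731 days
potential_changepoint_distance = pd.Timedelta("15D")

# The override: one potential change point every 15 days.
n_potential = int(total_time_length / potential_changepoint_distance)
print(n_potential)  # 48
```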

Users can specify either no_changepoint_proportion_from_end to specify what proportion from the end of data they do not want changepoints, or no_changepoint_distance_from_end (overrides no_changepoint_proportion_from_end) to specify how long from the end they do not want change points.

Then all potential change points will be selected by adaptive lasso, with the initial estimator specified by adaptive_lasso_initial_estimator. If the user specifies regularization_strength, the adaptive lasso is run with a single tuning parameter calculated from the user-provided prior; otherwise, cross-validation is run to automatically select the tuning parameter.

A yearly seasonality is also fitted at the same time, preventing the trend from absorbing yearly periodic changes.

A rule-based guard function is applied at the end to ensure change points are not too close, as specified by actual_changepoint_min_distance.

Parameters
  • df (pandas.DataFrame) – The data df

  • time_col (str) – Time column name in df

  • value_col (str) – Value column name in df

  • yearly_seasonality_order (int, default 8) – Fourier series order to capture yearly seasonality.

  • yearly_seasonality_change_freq (DateOffset, Timedelta or str or None, default None) –

    How often to change the yearly seasonality model. Set to None to disable this feature.

    This is useful if you have more than 2.5 years of data and the detected trend without this feature is inaccurate because yearly seasonality changes over the training period. Modeling yearly seasonality separately over each period can prevent trend changepoints from fitting changes in yearly seasonality. For example, if you have 2.5 years of data and yearly seasonality increases in magnitude after the first year, setting this parameter to "365D" will model each year's yearly seasonality differently and capture both shapes. However, without this feature, both years will have the same yearly seasonality, roughly the average effect across the training set.

    Note that if you use str as input, the maximal supported unit is day, i.e., you might use “200D” but not “12M” or “1Y”.

  • resample_freq (DateOffset, Timedelta or str, default “D”.) – The frequency to aggregate data. Coarser aggregation leads to fitting longer term trends.

  • trend_estimator (str in [“ridge”, “lasso” or “ols”], default “ridge”.) – The estimator used to estimate the trend. The estimated trend is only for plotting purposes. “ols” is not recommended when yearly_seasonality_order is set to a value other than 0, because significant over-fitting will occur. In this case, the given value is overridden by “ridge”.

  • adaptive_lasso_initial_estimator (str in [“ridge”, “lasso” or “ols”], default “ridge”.) – The initial estimator to compute adaptive lasso weights

  • regularization_strength (float in [0, 1] or None) – The regularization for change points. Greater values imply fewer change points; 0 indicates all change points, and 1 indicates no change point. If None, the tuning parameter will be selected by cross-validation. If a value is given, it will be used as the tuning parameter.

  • actual_changepoint_min_distance (DateOffset, Timedelta or str, default “30D”) – The minimal distance allowed between detected change points. If consecutive change points are within this minimal distance, the one with smaller absolute change coefficient will be dropped. Note: maximal unit is ‘D’, i.e., you may use units no more than ‘D’ such as ‘10D’, ‘5H’, ‘100T’, ‘200S’. The reason is that ‘W’, ‘M’ or higher has either cycles or indefinite number of days, thus is not parsable by pandas as timedelta.

  • potential_changepoint_distance (DateOffset, Timedelta, str or None, default None) – The distance between potential change points. If provided, will override the parameter potential_changepoint_n. Note: maximal unit is ‘D’, i.e., you may only use units no more than ‘D’ such as ‘10D’, ‘5H’, ‘100T’, ‘200S’. The reason is that ‘W’, ‘M’ or higher has either cycles or indefinite number of days, thus is not parsable by pandas as timedelta.

  • potential_changepoint_n (int, default 100) – Number of change points to be evenly distributed, recommended 1-2 per month, based on the training data length.

  • no_changepoint_distance_from_begin (DateOffset, Timedelta, str or None, default None) – The length of time from the beginning of training data, within which no change point will be placed. If provided, will override the parameter no_changepoint_proportion_from_begin. Note: maximal unit is ‘D’, i.e., you may only use units no more than ‘D’ such as ‘10D’, ‘5H’, ‘100T’, ‘200S’. The reason is that ‘W’, ‘M’ or higher has either cycles or indefinite number of days, thus is not parsable by pandas as timedelta.

  • no_changepoint_proportion_from_begin (float in [0, 1], default 0.0.) – potential_changepoint_n change points will be placed evenly over the whole training period, however, change points that are located within the first no_changepoint_proportion_from_begin proportion of training period will not be used for change point detection.

  • no_changepoint_distance_from_end (DateOffset, Timedelta, str or None, default None) – The length of time from the end of training data, within which no change point will be placed. If provided, will override the parameter no_changepoint_proportion_from_end. Note: maximal unit is ‘D’, i.e., you may only use units no more than ‘D’ such as ‘10D’, ‘5H’, ‘100T’, ‘200S’. The reason is that ‘W’, ‘M’ or higher has either cycles or indefinite number of days, thus is not parsable by pandas as timedelta.

  • no_changepoint_proportion_from_end (float in [0, 1], default 0.0.) – potential_changepoint_n change points will be placed evenly over the whole training period, however, change points that are located within the last no_changepoint_proportion_from_end proportion of training period will not be used for change point detection.

Returns

result – result dictionary with keys:

"trend_feature_df"pandas.DataFrame

The augmented df for change detection, in other words, the design matrix for the regression model. Columns:

  • "changepoint0": regressor for change point 0, equals the continuous time of the observation minus the continuous time of the time origin.

  • "changepoint{potential_changepoint_n}": regressor for change point {potential_changepoint_n}, equals the continuous time of the observation minus the continuous time of the {potential_changepoint_n}th change point.

  • "cos1_conti_year_yearly": cosine yearly seasonality regressor of first order.

  • "sin1_conti_year_yearly": sine yearly seasonality regressor of first order.

  • "cos{yearly_seasonality_order}_conti_year_yearly": cosine yearly seasonality regressor of {yearly_seasonality_order}th order.

  • "sin{yearly_seasonality_order}_conti_year_yearly": sine yearly seasonality regressor of {yearly_seasonality_order}th order.

"trend_changepoints"list

The list of detected change points.

"changepoints_dict"dict

The change point dictionary that is compatible as an input with forecast

"trend_estimation"pandas.Series

The estimated trend with detected trend change points.

Return type

dict

find_seasonality_changepoints(df, time_col, value_col, seasonality_components_df=<default DataFrame with columns "name", "period", "order", "seas_names" and rows ("tod", 24.0, 3, "daily"), ("tow", 7.0, 3, "weekly"), ("conti_year", 1.0, 5, "yearly")>, resample_freq='H', regularization_strength=0.6, actual_changepoint_min_distance='30D', potential_changepoint_distance=None, potential_changepoint_n=50, no_changepoint_distance_from_end=None, no_changepoint_proportion_from_end=0.0, trend_changepoints=None)[source]

Finds the seasonality change points (defined as the time points where seasonality magnitude changes, i.e., the time series becomes “fatter” or “thinner”.)

Subtracts the estimated trend from the original time series first, then uses regression-based regularization methods to select important seasonality change points. Regressors are built from truncated Fourier series.

If you have run find_trend_changepoints before running find_seasonality_changepoints with the same df, the estimated trend will be automatically used for removing trend in find_seasonality_changepoints. Otherwise, find_trend_changepoints will be run automatically with the same parameters as you passed to find_seasonality_changepoints. If you do not want to use the same parameters, run find_trend_changepoints with your desired parameter before calling find_seasonality_changepoints.

The algorithm does an aggregation with a user-defined frequency, defaulting to hourly.

The regression features consist of potential_changepoint_n + 1 blocks of predictors. The first block consists of Fourier series according to seasonality_components_df, and the other blocks are copies of the first block truncated at the corresponding potential change points.

If potential_changepoint_distance is not given, potential_changepoint_n potential change points are evenly distributed over the time period; otherwise, potential_changepoint_n is overridden by:

total_time_length / potential_changepoint_distance

Users can specify either no_changepoint_proportion_from_end to specify what proportion from the end of data they do not want changepoints, or no_changepoint_distance_from_end (overrides no_changepoint_proportion_from_end) to specify how long from the end they do not want change points.

Then all potential change points will be selected by adaptive lasso, with the initial estimator specified by adaptive_lasso_initial_estimator. The regularization strength is specified by regularization_strength, which lies between 0 and 1.

A rule-based guard function is applied at the end to ensure change points are not too close, as specified by actual_changepoint_min_distance.
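The default seasonality_components_df shown in the signature can be reproduced with pandas, and a custom one follows the same schema (the column comments reflect my reading of the Silverkite convention):

```python
import pandas as pd

# One row per seasonality component.
seasonality_components_df = pd.DataFrame({
    "name": ["tod", "tow", "conti_year"],        # time of day, time of week, continuous year
    "period": [24.0, 7.0, 1.0],                  # cycle length in the component's time unit
    "order": [3, 3, 5],                          # Fourier series order per component
    "seas_names": ["daily", "weekly", "yearly"], # labels used in regressor column names
})
print(seasonality_components_df.shape)  # (3, 4)
```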

Parameters
  • df (pandas.DataFrame) – The data df

  • time_col (str) – Time column name in df

  • value_col (str) – Value column name in df

  • seasonality_components_df (pandas.DataFrame) – The df to generate seasonality design matrix, which is compatible with seasonality_components_df in find_seasonality_changepoints

  • resample_freq (DateOffset, Timedelta or str, default “H”.) – The frequency to aggregate data. Coarser aggregation leads to fitting longer term trends.

  • regularization_strength (float in [0, 1] or None, default 0.6.) – The regularization for change points. Greater values imply fewer change points; 0 indicates all change points, and 1 indicates no change point. If None, the tuning parameter will be selected by cross-validation. If a value is given, it will be used as the tuning parameter. Here None is not recommended, because seasonality change has different levels, and automatic selection by cross-validation may produce more change points than desired. Practically, 0.6 is a good choice for most cases. Tuning around 0.6 is recommended.

  • actual_changepoint_min_distance (DateOffset, Timedelta or str, default “30D”) – The minimal distance allowed between detected change points. If consecutive change points are within this minimal distance, the one with smaller absolute change coefficient will be dropped. Note: maximal unit is ‘D’, i.e., you may use units no more than ‘D’ such as ‘10D’, ‘5H’, ‘100T’, ‘200S’. The reason is that ‘W’, ‘M’ or higher has either cycles or indefinite number of days, thus is not parsable by pandas as timedelta.

  • potential_changepoint_distance (DateOffset, Timedelta, str or None, default None) – The distance between potential change points. If provided, will override the parameter potential_changepoint_n. Note: maximal unit is ‘D’, i.e., you may only use units no more than ‘D’ such as ‘10D’, ‘5H’, ‘100T’, ‘200S’. The reason is that ‘W’, ‘M’ or higher has either cycles or indefinite number of days, thus is not parsable by pandas as timedelta.

  • potential_changepoint_n (int, default 50) – Number of change points to be evenly distributed, recommended 1 per month, based on the training data length.

  • no_changepoint_distance_from_end (DateOffset, Timedelta, str or None, default None) – The length of time from the end of training data, within which no change point will be placed. If provided, will override the parameter no_changepoint_proportion_from_end. Note: maximal unit is ‘D’, i.e., you may only use units no more than ‘D’ such as ‘10D’, ‘5H’, ‘100T’, ‘200S’. The reason is that ‘W’, ‘M’ or higher has either cycles or indefinite number of days, thus is not parsable by pandas as timedelta.

  • no_changepoint_proportion_from_end (float in [0, 1], default 0.0.) – potential_changepoint_n change points will be placed evenly over the whole training period, however, only change points that are not located within the last no_changepoint_proportion_from_end proportion of training period will be used for change point detection.

  • trend_changepoints (list or None) – A list of user-specified trend change points, used to estimate the trend to be removed from the time series before detecting seasonality change points. If provided, the algorithm will not check for detected trend change points or run find_trend_changepoints, but will use these change points directly for trend estimation.

Returns

result – result dictionary with keys:

"seasonality_feature_df"pandas.DataFrame

The augmented df for seasonality changepoint detection, in other words, the design matrix for the regression model. Columns:

  • "cos1_tod_daily": cosine daily seasonality regressor of first order at change point 0.

  • "sin1_tod_daily": sine daily seasonality regressor of first order at change point 0.

  • "cos1_conti_year_yearly": cosine yearly seasonality regressor of first order at change point 0.

  • "sin1_conti_year_yearly": sine yearly seasonality regressor of first order at change point 0.

  • "cos{daily_seasonality_order}_tod_daily_cp{potential_changepoint_n}": cosine daily seasonality regressor of {daily_seasonality_order}th order at change point {potential_changepoint_n}.

  • "sin{daily_seasonality_order}_tod_daily_cp{potential_changepoint_n}": sine daily seasonality regressor of {daily_seasonality_order}th order at change point {potential_changepoint_n}.

  • "cos{yearly_seasonality_order}_conti_year_yearly_cp{potential_changepoint_n}": cosine yearly seasonality regressor of {yearly_seasonality_order}th order at change point {potential_changepoint_n}.

  • "sin{yearly_seasonality_order}_conti_year_yearly_cp{potential_changepoint_n}": sine yearly seasonality regressor of {yearly_seasonality_order}th order at change point {potential_changepoint_n}.

"seasonality_changepoints" : dict [str, list [datetime]]

The dictionary of detected seasonality change points for each component. Keys are component names, and values are list of change points.

"seasonality_estimation" : pandas.Series

The estimated seasonality with detected seasonality change points. The series has the same length as original_df. Index is timestamp, and values are the estimated seasonality at each timestamp. The seasonality estimation is the estimate of the seasonality effect, with the trend estimated by estimate_trend_with_detected_changepoints removed.

"seasonality_components_df" : pandas.DataFrame

The processed seasonality_components_df. Daily component row is removed if inferred frequency or aggregation frequency is at least one day.

Return type

dict

plot(observation=True, observation_original=True, trend_estimate=True, trend_change=True, yearly_seasonality_estimate=False, adaptive_lasso_estimate=False, seasonality_change=False, seasonality_change_by_component=True, seasonality_estimate=False, plot=True)[source]

Makes a plot to show the observations/estimations/change points.

In this function, the component parameters specify whether each component is included in the plot. These are bool variables. For components set to True, their values are replaced by the corresponding data; other component values are set to None. These variables are then fed into plot_change.

Parameters
  • observation (bool) – Whether to include observations.

  • observation_original (bool) – Set True to plot original observations, and False to plot aggregated observations. No effect if observation is False.

  • trend_estimate (bool) – Set True to add trend estimation.

  • trend_change (bool) – Set True to add change points.

  • yearly_seasonality_estimate (bool) – Set True to add estimated yearly seasonality.

  • adaptive_lasso_estimate (bool) – Set True to add adaptive lasso estimated trend.

  • seasonality_change (bool) – Set True to add seasonality change points.

  • seasonality_change_by_component (bool) – If True, seasonality changes are plotted separately for different components; otherwise all use the same symbol. No effect if seasonality_change is False.

  • seasonality_estimate (bool) – Set True to add estimated seasonality. The seasonality is plotted around the trend, so the actual curve shown is trend estimation + seasonality estimation.

  • plot (bool, default True) – Set to True to display the plot, and set to False to return the plotly figure object.

Returns

  • None (if plot == True) – The function shows a plot.

  • fig (plotly.graph_objs.Figure) – The plot object.

Benchmarking

class greykite.framework.benchmark.benchmark_class.BenchmarkForecastConfig(df: pandas.core.frame.DataFrame, configs: Dict[str, greykite.framework.templates.autogen.forecast_config.ForecastConfig], tscv: greykite.sklearn.cross_validation.RollingTimeSeriesSplit, forecaster: greykite.framework.templates.forecaster.Forecaster = <greykite.framework.templates.forecaster.Forecaster object>)[source]

Class for benchmarking multiple ForecastConfig on a rolling window basis.

df

Timeseries data to forecast. Contains columns [time_col, value_col], and optional regressor columns. Regressor columns should include future values for prediction.

Type

pandas.DataFrame

configs

Dictionary of model configurations. A model configuration is a ForecastConfig. See ForecastConfig for details on valid ForecastConfig. Validity of the configs for benchmarking is checked via the validate method.

Type

Dict [str, ForecastConfig]

tscv

Cross-validation object that determines the rolling window evaluation. See RollingTimeSeriesSplit for details. The forecast_horizon and periods_between_train_test parameters of configs are matched against that of tscv. A ValueError is raised if there is a mismatch.

Type

RollingTimeSeriesSplit

forecaster

Forecaster used to create the forecasts.

Type

Forecaster

is_run

Indicator of whether the run method has been executed. After executing run, this indicator is set to True. Some class methods, like get_forecast, require is_run to be True.

Type

bool, default False

result

Stores the benchmarking results. Has the same keys as configs.

Type

dict

forecasts

Merged DataFrame of forecasts, upper and lower confidence interval for all input configs. Also stores train end date and forecast step for each prediction.

Type

pandas.DataFrame, default None

validate()[source]

Validates the inputs to the class for the method run.

Raises a ValueError if there is a mismatch between the following parameters of configs and tscv:

  • forecast_horizon

  • periods_between_train_test

Raises a ValueError if the configs do not all have the same coverage parameter.

run()[source]

Runs every config and stores the output of the forecast_pipeline. This function runs only if the configs and tscv are jointly valid.

Returns

self – Stores the pipeline output of every config in self.result.

extract_forecasts()[source]

Extracts forecasts, upper and lower confidence interval for each individual config. This is saved as a pandas.DataFrame with the name rolling_forecast_df within the corresponding config of self.result. e.g. if config key is “silverkite”, then the forecasts are stored in self.result["silverkite"]["rolling_forecast_df"].

This method also constructs a merged DataFrame of forecasts, upper and lower confidence interval for all input configs.

plot_forecasts_by_step(forecast_step: int, config_names: List = None, xlabel: str = 'ts', ylabel: str = 'y', title: str = None, showlegend: bool = True)[source]

Returns a forecast_step ahead rolling forecast plot. The plot consists of one line for each valid config_names. If available, the corresponding actual values are also plotted.

For a more customizable plot, see plot_multivariate.

Parameters
  • forecast_step (int) – Which forecast step to plot. A forecast step is an integer between 1 and the forecast horizon, inclusive, indicating the number of periods from train end date to the prediction date (# steps ahead).

  • config_names (list [str], default None) – Which config results to plot. A list of config names. If None, uses all the available config keys.

  • xlabel (str or None, default TIME_COL) – x-axis label.

  • ylabel (str or None, default VALUE_COL) – y-axis label.

  • title (str or None, default None) – Plot title. If None, default is based on forecast_step.

  • showlegend (bool, default True) – Whether to show the legend.

Returns

fig – Interactive plotly graph. Plots multiple column(s) in self.forecasts against TIME_COL.

See plot_forecast_vs_actual return value for how to plot the figure and add customization.

Return type

plotly.graph_objs.Figure

plot_forecasts_by_config(config_name: str, colors: List = ['rgb(31, 119, 180)', 'rgb(255, 127, 14)', 'rgb(44, 160, 44)', 'rgb(214, 39, 40)', 'rgb(148, 103, 189)', 'rgb(140, 86, 75)', 'rgb(227, 119, 194)', 'rgb(127, 127, 127)', 'rgb(188, 189, 34)', 'rgb(23, 190, 207)'], xlabel: str = 'ts', ylabel: str = 'y', title: str = None, showlegend: bool = True)[source]

Returns a rolling plot of the forecasts by config_name against TIME_COL. The plot consists of one line for each available split. Some lines may overlap if the test periods of the corresponding splits intersect, hence every line is given a different color. If available, the corresponding actual values are also plotted.

For a more customizable plot, see plot_multivariate_grouped.

Parameters
  • config_name (str) – Which config result to plot. The name must match the name of one of the input configs.

  • colors ([str, List [str]], default DEFAULT_PLOTLY_COLORS) – Which colors to use to build the color palette. This can be a list of RGB colors or a str from PLOTLY_SCALES. To use a single color for all lines, pass a List with a single color.

  • xlabel (str or None, default TIME_COL) – x-axis label.

  • ylabel (str or None, default VALUE_COL) – y-axis label.

  • title (str or None, default None) – Plot title. If None, default is based on config_name.

  • showlegend (bool, default True) – Whether to show the legend.

Returns

fig – Interactive plotly graph. Plots multiple column(s) in self.forecasts against TIME_COL.

Return type

plotly.graph_objs.Figure

get_evaluation_metrics(metric_dict: Dict, config_names: List = None)[source]

Returns rolling train and test evaluation metric values.

Parameters
  • metric_dict (dict [str, callable]) –

    Evaluation metrics to compute.

    • key: evaluation metric name, used to create column name in output.

    • value: metric function to apply to forecast df in each split to generate the column value.

      Signature (y_true: str, y_pred: str) -> transformed value: float.

    For example:

    metric_dict = {
        "median_residual": lambda y_true, y_pred: np.median(y_true - y_pred),
        "mean_squared_error": lambda y_true, y_pred: np.mean((y_true - y_pred)**2)
    }
    

    Some predefined functions are available in evaluation. For example:

    metric_dict = {
        "correlation": lambda y_true, y_pred: correlation(y_true, y_pred),
        "RMSE": lambda y_true, y_pred: root_mean_squared_error(y_true, y_pred),
    "Q_95": lambda y_true, y_pred: quantile_loss(y_true, y_pred, q=0.95)
    }
    

    As shorthand, it is sufficient to provide the corresponding EvaluationMetricEnum member. These are auto-expanded into the appropriate function. So the following is equivalent:

    metric_dict = {
        "correlation": EvaluationMetricEnum.Correlation,
        "RMSE": EvaluationMetricEnum.RootMeanSquaredError,
        "Q_95": EvaluationMetricEnum.Quantile95
    }
    

  • config_names (list [str], default None) – Which config results to plot. A list of config names. If None, uses all the available config keys.

Returns

evaluation_metrics_df – A DataFrame containing splitwise train and test evaluation metrics for metric_dict and config_names.

For example. Let’s assume:

metric_dict = {
    "RMSE": EvaluationMetricEnum.RootMeanSquaredError,
    "Q_95": EvaluationMetricEnum.Quantile95
}

config_names = ["default_prophet", "custom_silverkite"]

These are valid config_names, with 2 splits each.

Then evaluation_metrics_df =

config_name     split_num   train_RMSE  test_RMSE   train_Q_95  test_Q_95
default_prophet      0          *           *           *           *
default_prophet      1          *           *           *           *
custom_silverkite    0          *           *           *           *
custom_silverkite    1          *           *           *           *

where * represents computed values.

Return type

pd.DataFrame
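The shape of evaluation_metrics_df above can be reproduced with a short sketch. The per-split data, config name, and RMSE helper below are made up for illustration; in practice the values come from the rolling-window results stored in self.result:

```python
import numpy as np
import pandas as pd

def rmse(y_true, y_pred):
    # Root mean squared error over one split.
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Hypothetical (y_true, y_pred) pairs for the train and test sets of each split.
splits = [
    {"train": ([1, 2], [1, 2]), "test": ([3, 4], [3, 5])},
    {"train": ([1, 3], [2, 3]), "test": ([4, 5], [4, 4])},
]
metric_dict = {"RMSE": rmse}

# One row per (config, split); one train_* and test_* column per metric.
rows = []
for split_num, split in enumerate(splits):
    row = {"config_name": "custom_silverkite", "split_num": split_num}
    for name, func in metric_dict.items():
        row[f"train_{name}"] = func(*split["train"])
        row[f"test_{name}"] = func(*split["test"])
    rows.append(row)
evaluation_metrics_df = pd.DataFrame(rows)
print(evaluation_metrics_df.columns.tolist())
# ['config_name', 'split_num', 'train_RMSE', 'test_RMSE']
```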

plot_evaluation_metrics(metric_dict: Dict, config_names: List = None, xlabel: str = None, ylabel: str = 'Metric value', title: str = None, showlegend: bool = True)[source]

Returns a barplot of the train and test values of metric_dict of config_names. Values of a metric for all config_names are plotted as a grouped bar. Train and test values of a metric are plotted side-by-side for easy comparison.

Parameters
  • metric_dict (dict [str, callable]) – Evaluation metrics to compute. Same as get_evaluation_metrics. To get the best visualization, keep number of metrics <= 2.

  • config_names (list [str], default None) – Which config results to plot. A list of config names. If None, uses all the available config keys.

  • xlabel (str or None, default None) – x-axis label.

  • ylabel (str or None, default “Metric value”) – y-axis label.

  • title (str or None, default None) – Plot title.

  • showlegend (bool, default True) – Whether to show the legend.

Returns

fig – Interactive plotly bar plot.

Return type

plotly.graph_objs.Figure

get_grouping_evaluation_metrics(metric_dict: Dict, config_names: List = None, which: str = 'train', groupby_time_feature: str = None, groupby_sliding_window_size: int = None, groupby_custom_column: pandas.core.series.Series = None)[source]
Returns splitwise rolling evaluation metric values.

These values are grouped by the grouping method chosen by groupby_time_feature, groupby_sliding_window_size and groupby_custom_column.

See get_grouping_evaluation for details on grouping method.

Parameters

metric_dict : dict [str, callable]

Evaluation metrics to compute. Same as get_evaluation_metrics.

config_names : list [str], default None

Which config results to plot. A list of config names. If None, uses all the available config keys.

which : str

“train” or “test”. Which dataset to evaluate.

groupby_time_feature : str or None, default None

If provided, groups by a column generated by build_time_features_df. See that function for valid values.

groupby_sliding_window_size : int or None, default None

If provided, sequentially partitions data into groups of size groupby_sliding_window_size.

groupby_custom_column : pandas.Series or None, default None

If provided, groups by this column value. Should be same length as the DataFrame.

Returns

grouped_evaluation_df – A DataFrame containing splitwise train and test evaluation metrics for metric_dict and config_names. The evaluation metrics are grouped by the grouping method.

Return type

pandas.DataFrame

plot_grouping_evaluation_metrics(metric_dict: Dict, config_names: List = None, which: str = 'train', groupby_time_feature: str = None, groupby_sliding_window_size: int = None, groupby_custom_column: pandas.core.series.Series = None, xlabel=None, ylabel='Metric value', title=None, showlegend=True)[source]

Returns a line plot of the grouped evaluation values of metric_dict of config_names. These values are grouped by the grouping method chosen by groupby_time_feature, groupby_sliding_window_size and groupby_custom_column.

See get_grouping_evaluation for details on grouping method.

Parameters

metric_dict : dict [str, callable]

Evaluation metrics to compute. Same as get_evaluation_metrics. To get the best visualization, keep the number of metrics <= 2.

config_names : list [str], default None

Which config results to plot. A list of config names. If None, uses all the available config keys.

which : str

“train” or “test”. Which dataset to evaluate.

groupby_time_feature : str or None, optional

If provided, groups by a column generated by build_time_features_df. See that function for valid values.

groupby_sliding_window_size : int or None, optional

If provided, sequentially partitions data into groups of size groupby_sliding_window_size.

groupby_custom_column : pandas.Series or None, optional

If provided, groups by this column value. Should be same length as the DataFrame.

xlabel : str or None, default None

x-axis label. If None, label is determined by the groupby column name.

ylabel : str or None, default “Metric value”

y-axis label.

title : str or None, default None

Plot title. If None, default is based on config_name.

showlegend : bool, default True

Whether to show the legend.

Returns

fig – Interactive plotly graph.

Return type

plotly.graph_objs.Figure

get_runtimes(config_names: List = None)[source]

Returns rolling average runtime in seconds for config_names.

Parameters

config_names (list [str], default None) – Which config results to plot. A list of config names. If None, uses all the available config keys.

Returns

runtimes_df – A DataFrame containing splitwise runtime in seconds for config_names.

For example. Let’s assume:

config_names = ["default_prophet", "custom_silverkite"]
These are valid config_names, with 2 splits each.

Then runtimes_df =

config_name     split_num   runtime_sec
default_prophet      0          *
default_prophet      1          *
custom_silverkite    0          *
custom_silverkite    1          *

where * represents computed values.

Return type

pd.DataFrame

plot_runtimes(config_names: List = None, xlabel: str = None, ylabel: str = 'Mean runtime in seconds', title: str = 'Average runtime across rolling windows', showlegend: bool = True)[source]

Returns a barplot of the runtimes of config_names.

Parameters
  • config_names (list [str], default None) – Which config results to plot. A list of config names. If None, uses all the available config keys.

  • xlabel (str or None, default None) – x-axis label.

  • ylabel (str or None, default “Mean runtime in seconds”) – y-axis label.

  • title (str or None, default “Average runtime across rolling windows”) – Plot title.

  • showlegend (bool, default True) – Whether to show the legend.

Returns

fig – Interactive plotly bar plot.

Return type

plotly.graph_objs.Figure

get_valid_config_names(config_names: List = None)[source]

Validate config_names against keys of configs. Raises a ValueError in case of a mismatch.

Parameters

config_names (list [str], default None) – Which config results to plot. A list of config names. If None, uses all the available config keys.

Returns

config_names – List of valid config names.

Return type

list

static autocomplete_metric_dict(metric_dict, enum_class)[source]

Sweeps through metric_dict, converting members of enum_class to their corresponding evaluation function.

For example:

metric_dict = {
    "correlation": EvaluationMetricEnum.Correlation,
    "RMSE": EvaluationMetricEnum.RootMeanSquaredError,
    "Q_95": EvaluationMetricEnum.Quantile95,
    "custom_metric": custom_function
}

is converted to

metric_dict = {
    "correlation": correlation(y_true, y_pred),
    "RMSE": root_mean_squared_error(y_true, y_pred),
    "Q_95": quantile_loss_q(y_true, y_pred, q=0.95),
    "custom_metric": custom_function
}
Parameters
  • metric_dict (dict [str, callable]) – Evaluation metrics to compute. Same as get_evaluation_metrics.

  • enum_class (Enum) – The enum class metric_dict elements might be member of. It must have a method get_metric_func.

Returns

updated_metric_dict – Autocompleted metric dict.

Return type

dict
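The expansion logic can be sketched in a few lines. The MetricEnum class and rmse function below are illustrative stand-ins (the real enum is EvaluationMetricEnum, whose members expose get_metric_func as noted above); only the dict sweep mirrors the documented behavior:

```python
from enum import Enum

def rmse(y_true, y_pred):
    # Simple root mean squared error, pure Python.
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

class MetricEnum(Enum):  # stand-in for EvaluationMetricEnum
    RootMeanSquaredError = "RMSE"

    def get_metric_func(self):
        return rmse

def autocomplete_metric_dict(metric_dict, enum_class):
    # Members of enum_class are expanded to their metric function;
    # plain callables pass through unchanged.
    return {
        name: (metric.get_metric_func() if isinstance(metric, enum_class) else metric)
        for name, metric in metric_dict.items()
    }

metric_dict = {
    "RMSE": MetricEnum.RootMeanSquaredError,
    "mean_error": lambda y_true, y_pred: sum(t - p for t, p in zip(y_true, y_pred)) / len(y_true),
}
expanded = autocomplete_metric_dict(metric_dict, MetricEnum)
print(expanded["RMSE"]([1.0, 2.0], [1.0, 2.0]))  # 0.0
```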

Cross Validation

class greykite.sklearn.cross_validation.RollingTimeSeriesSplit(forecast_horizon, min_train_periods=None, expanding_window=False, use_most_recent_splits=False, periods_between_splits=None, periods_between_train_test=0, max_splits=3)[source]

Flexible splitter for time-series cross validation and rolling window evaluation. Suitable for use in GridSearchCV.

min_splits

Guaranteed minimum number of splits. This is always set to 1. If the provided configuration results in 0 splits, the cross-validator yields a default split.

Type

int

__starting_test_index

Test end index of the first CV split. Actual offset = __starting_test_index + _get_offset(X), for a particular dataset X. Cross validator ensures the last test split contains the last observation in X.

Type

int

Examples

>>> from greykite.sklearn.cross_validation import RollingTimeSeriesSplit
>>> X = np.random.rand(20, 4)
>>> tscv = RollingTimeSeriesSplit(forecast_horizon=3, max_splits=4)
>>> tscv.get_n_splits(X=X)
4
>>> for train, test in tscv.split(X=X):
...     print(train, test)
[2 3 4 5 6 7] [ 8  9 10]
[ 5  6  7  8  9 10] [11 12 13]
[ 8  9 10 11 12 13] [14 15 16]
[11 12 13 14 15 16] [17 18 19]
>>> X = np.random.rand(20, 4)
>>> tscv = RollingTimeSeriesSplit(forecast_horizon=2,
...                               min_train_periods=4,
...                               expanding_window=True,
...                               periods_between_splits=4,
...                               periods_between_train_test=2,
...                               max_splits=None)
>>> tscv.get_n_splits(X=X)
4
>>> for train, test in tscv.split(X=X):
...     print(train, test)
[0 1 2 3] [6 7]
[0 1 2 3 4 5 6 7] [10 11]
[ 0  1  2  3  4  5  6  7  8  9 10 11] [14 15]
[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15] [18 19]
>>> X = np.random.rand(5, 4)  # default split if there is not enough data
>>> for train, test in tscv.split(X=X):
...     print(train, test)
[0 1 2 3] [4]
split(X, y=None, groups=None)[source]
Generates indices to split data into training and test CV folds according to rolling window time series cross validation.

Parameters
  • X (array-like, shape (n_samples, n_features)) – Training data, where n_samples is the number of samples and n_features is the number of features. Must have shape method.

  • y (array-like, shape (n_samples,), optional) – The target variable for supervised learning problems. Always ignored, exists for compatibility.

  • groups (array-like, with shape (n_samples,), optional) – Group labels for the samples used while splitting the dataset into train/test set. Always ignored, exists for compatibility.

Yields
  • train (numpy.array) – The training set indices for that split.

  • test (numpy.array) – The testing set indices for that split.

get_n_splits(X=None, y=None, groups=None)[source]

Returns the number of splitting iterations yielded by the cross-validator.

Parameters
  • X (array-like, shape (n_samples, n_features)) – Input data to split

  • y (object) – Always ignored, exists for compatibility.

  • groups (object) – Always ignored, exists for compatibility.

Returns

n_splits – The number of splitting iterations yielded by the cross-validator.

Return type

int

get_n_splits_without_capping(X=None)[source]
Returns the number of splitting iterations in the cross-validator as configured, ignoring self.max_splits and self.min_splits.

Parameters

X (array-like, shape (n_samples, n_features)) – Input data to split

Returns

n_splits – The number of splitting iterations in the cross-validator as configured, ignoring self.max_splits and self.min_splits

Return type

int

_get_offset(X=None)[source]

Returns an offset to add to test set indices when creating CV splits. CV splits are shifted so that the last test observation is the last point in X. This shift does not affect the total number of splits.

Parameters

X (array-like, shape (n_samples, n_features)) – Input data to split

Returns

offset – The number of observations to ignore at the beginning of X when creating CV splits

Return type

int

_sample_splits(num_splits, seed=48912)[source]

Samples up to max_splits items from list(range(num_splits)).

If use_most_recent_splits is True, highest split indices up to max_splits are retained. Otherwise, the following sampling scheme is implemented:

  • takes the last 2 splits

  • samples from the rest uniformly at random

Parameters
  • num_splits (int) – Number of splits before sampling.

  • seed (int) – Seed for random sampling.

Returns

n_splits – Indices of splits to keep (subset of list(range(num_splits))).

Return type

list
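The sampling scheme above can be sketched as follows. This is a re-implementation for intuition, not the library's code, and it assumes max_splits >= 2 when sampling is needed:

```python
import random

def sample_splits(num_splits, max_splits, use_most_recent_splits=False, seed=48912):
    """Illustrative sketch of the split-sampling scheme (assumes max_splits >= 2)."""
    indices = list(range(num_splits))
    if max_splits is None or num_splits <= max_splits:
        return indices  # nothing to drop
    if use_most_recent_splits:
        return indices[-max_splits:]  # keep only the most recent splits
    # Otherwise: always keep the last 2 splits, sample the rest uniformly at random.
    keep = indices[-2:]
    rng = random.Random(seed)
    sampled = rng.sample(indices[:-2], max_splits - 2)
    return sorted(sampled + keep)

print(sample_splits(10, 4, use_most_recent_splits=True))  # [6, 7, 8, 9]
out = sample_splits(10, 4)
print(out)  # four indices; the last two splits (8 and 9) are always kept
```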

_iter_test_indices(X=None, y=None, groups=None)[source]

The class directly implements split instead of providing this function.

_iter_test_masks(X=None, y=None, groups=None)

Generates boolean masks corresponding to test sets.

By default, delegates to _iter_test_indices(X, y, groups).

Transformers

class greykite.sklearn.transform.zscore_outlier_transformer.ZscoreOutlierTransformer(z_cutoff=None, use_fit_baseline=False)[source]

Replaces outliers in data with NaN. Outliers are determined by z-score cutoff. Columns are handled independently.

Parameters
  • z_cutoff (float or None, default None) – z-score cutoff to define outliers. If None, this transformer is a no-op.

  • use_fit_baseline (bool, default False) –

    If True, the z-scores are calculated using the mean and standard deviation of the dataset passed to fit.

    If False, the transformer is stateless. z-scores are calculated for the dataset passed to transform, regardless of fit.

mean

Mean of each column. NaNs are ignored.

Type

pandas.Series

std

Standard deviation of each column. NaNs are ignored.

Type

pandas.Series

_is_fitted

Whether the transformer is fitted.

Type

bool

fit(X, y=None)[source]

Computes the column mean and standard deviation, stored as mean and std attributes.

Parameters
  • X (pandas.DataFrame) – Training input data. e.g. each column is a timeseries. Columns are expected to be numeric.

  • y (None) – There is no need of a target in a transformer, yet the pipeline API requires this parameter.

Returns

self – Returns self.

Return type

object

transform(X)[source]

Replaces outliers with NaN.

Parameters

X (pandas.DataFrame) – Data to transform. e.g. each column is a timeseries. Columns are expected to be numeric.

Returns

X_outlier – A copy of the data frame with original values and outliers replaced with NaN.

Return type

pandas.DataFrame
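The stateless path of this transform (use_fit_baseline=False) can be sketched with plain pandas. This is an illustrative re-implementation, not the library's code; the data and cutoff are made up:

```python
import numpy as np
import pandas as pd

def zscore_outlier_transform(df, z_cutoff):
    """Sketch: z-scores computed from the data itself, outliers replaced with NaN."""
    if z_cutoff is None:
        return df  # no-op, mirroring the transformer's behavior
    mean = df.mean(skipna=True)
    std = df.std(skipna=True)
    z = (df - mean) / std
    # mask() replaces values where the condition is True with NaN.
    return df.mask(z.abs() > z_cutoff)

df = pd.DataFrame({"y": [1.0, 2.0, 1.0, 2.0, 100.0]})
out = zscore_outlier_transform(df, z_cutoff=1.5)
# The spike at 100.0 is replaced with NaN; other values are unchanged.
```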

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
  • X (array-like of shape (n_samples, n_features)) – Input samples.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs), default=None) – Target values (None for unsupervised transformations).

  • **fit_params (dict) – Additional fit parameters.

Returns

X_new – Transformed array.

Return type

ndarray array of shape (n_samples, n_features_new)

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

dict

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

estimator instance

class greykite.sklearn.transform.normalize_transformer.NormalizeTransformer(normalize_algorithm=None, normalize_params=None)[source]

Normalizes time series data.

scaler

sklearn class used for normalization.

Type

class

_is_fitted

Whether the transformer is fitted.

Type

bool

fit(X, y=None)[source]

Fits the normalization transform.

Parameters
  • X (pandas.DataFrame) – Training input data. e.g. each column is a timeseries. Columns are expected to be numeric.

  • y (None) – There is no need of a target in a transformer, yet the pipeline API requires this parameter.

Returns

self – Returns self.

Return type

object

transform(X)[source]

Normalizes data using the specified scaling method.

Parameters

X (pandas.DataFrame) – Data to transform. e.g. each column is a timeseries. Columns are expected to be numeric.

Returns

X_normalized – A normalized copy of the data frame.

Return type

pandas.DataFrame

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
  • X (array-like of shape (n_samples, n_features)) – Input samples.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs), default=None) – Target values (None for unsupervised transformations).

  • **fit_params (dict) – Additional fit parameters.

Returns

X_new – Transformed array.

Return type

ndarray array of shape (n_samples, n_features_new)

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

dict

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

estimator instance

class greykite.sklearn.transform.null_transformer.NullTransformer(max_frac=0.1, impute_algorithm=None, impute_params=None, impute_all=True)[source]

Imputes nulls in time series data.

This transform is stateless in the sense that transform output does not depend on the data passed to fit. The dataset passed to transform is used to impute itself.

Parameters
  • max_frac (float, default 0.10) – issues warning if fraction of nulls is above this value

  • impute_algorithm (str or None, default “interpolate”) –

    Which imputation algorithm to use. Valid options are ”interpolate” (uses pandas.DataFrame.interpolate) and ”ts_interpolate” (uses impute_with_lags_multi).

    If None, this transformer is a no-op. No null imputation is done.

  • impute_params (dict or None, default None) –

    Params to pass to the imputation algorithm. See pandas.DataFrame.interpolate and impute_with_lags_multi for their respective options.

    For pandas “interpolate”, the “ffill”, “pad”, “bfill”, “backfill” methods are not allowed to avoid confusion with the fill axis parameter. Use “linear” with axis=0 instead, with direction controlled by limit_direction.

    If None, uses the defaults provided in this class.

  • impute_all (bool, default True) –

    Whether to impute all values. If True, NaNs are not allowed in the transformed result. Ignored if impute_algorithm is None.

    The transform specified by impute_algorithm and impute_params may leave NaNs in the dataset. For example, if it fills in the forward direction but the first value in a column is NaN.

    A first pass is taken with the impute algorithm specified. A second pass is taken with the “interpolate” algorithm (method=”linear”, limit_direction=”both”) to fill in remaining NaNs.

null_frac

The fraction of data points that are null.

Type

float

_is_fitted

Whether the transformer is fitted.

Type

bool

missing_info

Information about the missing data. Set by transform if impute_algorithm = "ts_interpolate".

Type

dict

fit(X, y=None)[source]

Updates self.impute_params.

Parameters
  • X (pandas.DataFrame) – Training input data. e.g. each column is a timeseries. Columns are expected to be numeric.

  • y (None) – There is no need of a target in a transformer, yet the pipeline API requires this parameter.

Returns

self – Returns self.

Return type

object

transform(X)[source]

Imputes missing values in input time series.

Checks the fraction of data points that are null, and issues a warning if it exceeds self.max_frac.

Parameters

X (pandas.DataFrame) – Data to transform. e.g. each column is a timeseries. Columns are expected to be numeric.

Returns

X_imputed – A copy of the data frame with original values and missing values imputed

Return type

pandas.DataFrame
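The two-pass behavior described for impute_all can be sketched with plain pandas. This is an illustrative re-implementation of the "interpolate" path, not the library's code; the default parameters shown are assumptions:

```python
import warnings

import numpy as np
import pandas as pd

def impute_nulls(df, max_frac=0.1, impute_params=None, impute_all=True):
    """Sketch: warn on too many nulls, interpolate, then a second pass to fill the rest."""
    params = impute_params or {"method": "linear", "limit_direction": "forward", "axis": 0}
    null_frac = df.isna().mean().mean()
    if null_frac > max_frac:
        warnings.warn(f"{null_frac:.0%} of values are null, above max_frac={max_frac}")
    imputed = df.interpolate(**params)
    if impute_all:
        # Second pass catches NaNs the first pass could not fill
        # (e.g. a leading NaN when filling in the forward direction).
        imputed = imputed.interpolate(method="linear", limit_direction="both", axis=0)
    return imputed

df = pd.DataFrame({"y": [np.nan, 1.0, np.nan, 3.0]})
with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # this toy series is 50% null
    out = impute_nulls(df)
```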

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
  • X (array-like of shape (n_samples, n_features)) – Input samples.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs), default=None) – Target values (None for unsupervised transformations).

  • **fit_params (dict) – Additional fit parameters.

Returns

X_new – Transformed array.

Return type

ndarray array of shape (n_samples, n_features_new)

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

dict

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

estimator instance

class greykite.sklearn.transform.drop_degenerate_transformer.DropDegenerateTransformer(drop_degenerate=False)[source]

Removes degenerate (constant) columns.

Parameters

drop_degenerate (bool, default False) – Whether to drop degenerate columns.

drop_cols

Degenerate columns to drop

Type

list [str] or None

keep_cols

Columns to keep

Type

list [str] or None

fit(X, y=None)[source]

Identifies the degenerate columns, and sets self.keep_cols and self.drop_cols.

Parameters
  • X (pandas.DataFrame) – Training input data. e.g. each column is a timeseries. Columns are expected to be numeric.

  • y (None) – There is no need of a target in a transformer, yet the pipeline API requires this parameter.

Returns

self – Returns self.

Return type

object

transform(X)[source]

Drops the degenerate columns identified during fit.

Parameters

X (pandas.DataFrame) – Data to transform. e.g. each column is a timeseries. Columns are expected to be numeric.

Returns

X_subset – Selected columns of X. Keeps columns that were not degenerate on the training data.

Return type

pandas.DataFrame
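The effect of fit followed by transform can be sketched with plain pandas. `drop_degenerate` below is a hypothetical stand-in for the transformer with `drop_degenerate=True`, not the library implementation:

```python
import pandas as pd

def drop_degenerate(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only columns with more than one unique value,
    mirroring what DropDegenerateTransformer does when
    drop_degenerate=True."""
    keep_cols = [col for col in df.columns if df[col].nunique(dropna=False) > 1]
    return df[keep_cols].copy()

df = pd.DataFrame({
    "ts": pd.date_range("2024-01-01", periods=4, freq="D"),
    "y": [1.0, 2.0, 3.0, 4.0],
    "const": [7.0, 7.0, 7.0, 7.0],  # degenerate: constant column
})
result = drop_degenerate(df)
# "const" is dropped; "ts" and "y" are kept
```

In the real transformer, the degenerate columns are identified on the training data in fit and the same columns are dropped at transform time.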

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
  • X (array-like of shape (n_samples, n_features)) – Input samples.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs), default=None) – Target values (None for unsupervised transformations).

  • **fit_params (dict) – Additional fit parameters.

Returns

X_new – Transformed array.

Return type

ndarray array of shape (n_samples, n_features_new)

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

dict

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

estimator instance

Utility Functions

greykite.common.features.timeseries_features.get_available_holiday_lookup_countries(countries=None)[source]

Returns the list of available countries for modeling holidays.

Parameters

countries (list [str] or None) – If provided, only looks for available countries within this set.

Returns

list [str] – Available countries for modeling holidays.

greykite.common.features.timeseries_features.get_available_holidays_across_countries(countries, year_start, year_end)[source]

Returns a list of holidays that occur in any of the countries between the years specified.

Parameters
  • countries (list [str]) – Countries for which holidays are needed.

  • year_start (int) – First year of interest.

  • year_end (int) – Last year of interest.

Returns

list [str] – Names of holidays in any of the countries between [year_start, year_end].

greykite.common.features.timeseries_features.build_time_features_df(dt, conti_year_origin)[source]

This function gets a datetime-like vector and creates new columns containing temporal features useful for time series analysis and forecasting e.g. year, week of year, etc.

Parameters
  • dt (array-like (1-dimensional)) – A vector of datetime-like values

  • conti_year_origin (float) – The origin used for creating continuous time.

Returns

time_features_df

Dataframe with the following time features.

  • ”datetime”: datetime.datetime object, a combination of date and a time

  • ”date”: datetime.date object, date with the format (year, month, day)

  • ”year”: integer, year of the date e.g. 2018

  • ”year_length”: integer, number of days in the year e.g. 365 or 366

  • ”quarter”: integer, quarter of the date, 1, 2, 3, 4

  • ”quarter_start”: pandas.DatetimeIndex, date of beginning of the current quarter

  • ”quarter_length”: integer, number of days in the quarter, 90/91 for Q1, 91 for Q2, 92 for Q3 and Q4

  • ”month”: integer, month of the year, January=1, February=2, …, December=12

  • ”month_length”: integer, number of days in the month, 28/ 29/ 30/ 31

  • ”woy”: integer, ISO 8601 week of the year where a week starts from Monday, 1, 2, …, 53

  • ”doy”: integer, ordinal day of the year, 1, 2, …, year_length

  • ”doq”: integer, ordinal day of the quarter, 1, 2, …, quarter_length

  • ”dom”: integer, ordinal day of the month, 1, 2, …, month_length

  • ”dow”: integer, day of the week, Monday=1, Tuesday=2, …, Sunday=7

  • ”str_dow”: string, day of the week as a string e.g. “1-Mon”, “2-Tue”, …, “7-Sun”

  • ”str_doy”: string, day of the year e.g. “2020-03-20” for March 20, 2020

  • ”hour”: integer, discrete hours of the datetime, 0, 1, …, 23

  • ”minute”: integer, minutes of the datetime, 0, 1, …, 59

  • ”second”: integer, seconds of the datetime, 0, 1, …, 59

  • ”year_month”: string, (year, month) e.g. “2020-03” for March 2020

  • ”year_woy”: string, (year, week of year) e.g. “2020_42” for 42nd week of 2020

  • ”month_dom”: string, (month, day of month) e.g. “02/20” for February 20th

  • ”year_woy_dow”: string, (year, week of year, day of week) e.g. “2020_03_6” for Saturday of 3rd week in 2020

  • ”woy_dow”: string, (week of year, day of week) e.g. “03_6” for Saturday of 3rd week

  • ”dow_hr”: string, (day of week, hour) e.g. “4_09” for 9am on Thursday

  • ”dow_hr_min”: string, (day of week, hour, minute) e.g. “4_09_10” for 9:10am on Thursday

  • ”tod”: float, time of day, continuous, 0.0 to 24.0

  • ”tow”: float, time of week, continuous, 0.0 to 7.0

  • ”tom”: float, standardized time of month, continuous, 0.0 to 1.0

  • ”toq”: float, time of quarter, continuous, 0.0 to 1.0

  • ”toy”: float, standardized time of year, continuous, 0.0 to 1.0

  • ”conti_year”: float, year in continuous time, e.g. 2018.5 means the middle of the year 2018

  • ”is_weekend”: boolean, weekend indicator, True for weekend, else False

  • ”dow_grouped”: string, Monday-Thursday=1234-MTuWTh, Friday=5-Fri, Saturday=6-Sat, Sunday=7-Sun

  • ”ct1”: float, linear growth based on conti_year_origin, -infinity to infinity

  • ”ct2”: float, signed quadratic growth, -infinity to infinity

  • ”ct3”: float, signed cubic growth, -infinity to infinity

  • ”ct_sqrt”: float, signed square root growth, -infinity to infinity

  • ”ct_root3”: float, signed cubic root growth, -infinity to infinity

Return type

pandas.DataFrame
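A few of the features above can be reproduced with plain pandas, assuming the same conventions (ISO 8601 week of year, Monday=1 for day of week). This is an illustrative sketch, not the library implementation:

```python
import pandas as pd

dt = pd.to_datetime(pd.Series(["2020-03-20 09:10:00", "2020-07-04 23:00:00"]))
iso = dt.dt.isocalendar()
time_features = pd.DataFrame({
    "year": dt.dt.year,
    "quarter": dt.dt.quarter,
    "month": dt.dt.month,
    "woy": iso["week"].astype(int),            # ISO 8601 week of year
    "doy": dt.dt.dayofyear,
    "dow": dt.dt.dayofweek + 1,                # Monday=1 ... Sunday=7
    "hour": dt.dt.hour,
    "tod": dt.dt.hour + dt.dt.minute / 60.0,   # continuous time of day
    "is_weekend": dt.dt.dayofweek >= 5,        # Saturday or Sunday
})
# 2020-03-20 is a Friday (dow=5) and day 80 of the leap year 2020
```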

greykite.common.features.timeseries_features.get_holidays(countries, year_start, year_end)[source]

This function extracts a holiday data frame for the period of interest [year_start, year_end] for the given countries, using the holiday calendars from pypi:fbprophet and pypi:holidays. The implementation resembles that of make_holidays_df.

Parameters
  • countries (list [str]) – countries for which we need holidays

  • year_start (int) – first year of interest, inclusive

  • year_end (int) – last year of interest, inclusive

Returns

holiday_df_dict

  • key: country name

  • value: data frame with holidays for that country Each data frame has two columns: EVENT_DF_DATE_COL, EVENT_DF_LABEL_COL

Return type

dict [str, pandas.DataFrame]
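The shape of the returned dictionary can be sketched as follows; the column names "date" and "event_name" are illustrative stand-ins for the EVENT_DF_DATE_COL and EVENT_DF_LABEL_COL constants:

```python
import pandas as pd

# Illustrative shape of holiday_df_dict: country name -> two-column frame.
holiday_df_dict = {
    "US": pd.DataFrame({
        "date": pd.to_datetime(["2019-12-25", "2020-01-01"]),
        "event_name": ["Christmas Day", "New Year's Day"],
    }),
}
```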

greykite.common.features.timeseries_features.add_event_window_multi(event_df_dict, time_col, label_col, time_delta='1D', pre_num=1, post_num=1, pre_post_num_dict=None)[source]

For a given dictionary of event data frames with a time_col and label_col, adds shifted events before and after the given events. For example, if an event data frame includes the row ‘2019-12-25, Christmas’, the function will produce data frames with the events ‘2019-12-24, Christmas’ and ‘2019-12-26, Christmas’ when pre_num and post_num are 1 or more.

Parameters
  • event_df_dict (dict [str, pandas.DataFrame]) – A dictionary of events data frames with each having two columns: time_col and label_col.

  • time_col (str) – The column with the timestamp of the events. This can be daily but does not have to be.

  • label_col (str) – The column with labels for the events.

  • time_delta (str, default “1D”) – The amount of the shift for each unit specified by a string e.g. ‘1D’ stands for one day delta

  • pre_num (int, default 1) – The number of events to be added prior to the given event for each event in df.

  • post_num (int, default 1) – The number of events to be added after to the given event for each event in df.

  • pre_post_num_dict (dict [str, (int, int)] or None, default None) – Optionally override pre_num and post_num for each key in event_df_dict. For example, if event_df_dict has keys “US” and “India”, this parameter can be set to pre_post_num_dict = {"US": [1, 3], "India": [1, 2]}, denoting that the “US” pre_num is 1 and post_num is 3, and “India” pre_num is 1 and post_num is 2. Keys not specified by pre_post_num_dict use the default given by pre_num and post_num.

Returns

df – A dictionary of data frames, one for each needed shift. For example, if pre_num=2 and post_num=3, then 2 + 3 = 5 data frames are stored in the returned dictionary.

Return type

dict [str, pandas.DataFrame]
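The shifting logic can be sketched for a single events frame. `add_event_window` below is a hypothetical helper (the keys of the returned dictionary are illustrative), not the library function:

```python
import pandas as pd

def add_event_window(event_df, time_col, label_col, time_delta="1D",
                     pre_num=1, post_num=1):
    """Return one shifted copy of event_df per offset, keyed by
    an illustrative name for the shift."""
    delta = pd.Timedelta(time_delta)
    shifted = {}
    for i in range(1, pre_num + 1):
        frame = event_df.copy()
        frame[time_col] = frame[time_col] - i * delta
        shifted[f"{label_col}_minus_{i}"] = frame
    for i in range(1, post_num + 1):
        frame = event_df.copy()
        frame[time_col] = frame[time_col] + i * delta
        shifted[f"{label_col}_plus_{i}"] = frame
    return shifted

events = pd.DataFrame({
    "date": pd.to_datetime(["2019-12-25"]),
    "event": ["Christmas"],
})
out = add_event_window(events, "date", "event", pre_num=1, post_num=1)
# Two frames: Christmas shifted to 2019-12-24 and 2019-12-26
```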

greykite.common.features.timeseries_features.add_daily_events(df, event_df_dict, date_col='date', regular_day_label='')[source]
For each key of event_df_dict, adds a new column to a data frame (df) with a date column (date_col). Each new column represents the events given for that key.

Notes

As a side effect, the columns in event_df_dict are renamed.

Parameters
  • df (pandas.DataFrame) – The data frame which has a date column.

  • event_df_dict (dict [str, pandas.DataFrame]) –

    A dictionary of data frames, each representing events data for the corresponding key. Values are DataFrames with two columns:

    • The first column contains the date. Must be at the same frequency as df[date_col] for proper join. Must be in a format recognized by pandas.to_datetime.

    • The second column contains the event label for each date

  • date_col (str) – Column name in df that contains the dates for joining against the events in event_df_dict.

  • regular_day_label (str) – The label used for regular days which are not “events”.

Returns

df_daily_events – An augmented data frame version of df with new label columns – one for each key of event_df_dict.

Return type

pandas.DataFrame
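The join performed per key can be sketched with a pandas merge, assuming daily frequency. This mirrors the described behavior, not the exact implementation:

```python
import pandas as pd

# Left-join an events frame onto df by date, then fill non-event
# days with regular_day_label (as add_daily_events does per key).
df = pd.DataFrame({"date": pd.date_range("2019-12-24", periods=3, freq="D")})
events = pd.DataFrame({
    "date": pd.to_datetime(["2019-12-25"]),
    "event": ["Christmas"],
})
regular_day_label = ""
augmented = df.merge(events, on="date", how="left")
augmented["event"] = augmented["event"].fillna(regular_day_label)
# augmented["event"] -> ["", "Christmas", ""]
```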

greykite.common.features.timeseries_features.convert_date_to_continuous_time(dt)[source]

Converts date to continuous time. Each year is one unit.

Parameters

dt (datetime object) – the date to convert

Returns

conti_date – the date represented in years

Return type

float
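The year-as-unit convention can be sketched as the year number plus the fraction of the year already elapsed; the exact formula used by the library may differ in details (e.g. sub-daily resolution):

```python
from datetime import datetime
import calendar

def to_continuous_time(dt: datetime) -> float:
    """Sketch: year number plus the elapsed fraction of the year."""
    year_length = 366 if calendar.isleap(dt.year) else 365
    elapsed_days = dt.timetuple().tm_yday - 1 + dt.hour / 24.0
    return dt.year + elapsed_days / year_length

# Noon on July 2, 2018 (the middle of a non-leap year) maps to 2018.5
t = to_continuous_time(datetime(2018, 7, 2, 12))
```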

greykite.algo.forecast.silverkite.forecast_simple_silverkite_helper.get_event_pred_cols(daily_event_df_dict)[source]

Generates the names of internal predictor columns from the event dictionary passed to forecast. These can be passed via the extra_pred_cols parameter to model event effects.

Note

The returned strings are patsy model formula terms. Each provides full set of levels so that prediction works even if a level is not found in the training set.

If a level does not appear in the training set, its coefficient may be unbounded in the “linear” fit_algorithm. A method with regularization avoids this issue (e.g. “ridge”, “elastic_net”).

Parameters

daily_event_df_dict (dict or None, optional, default None) – A dictionary of data frames, each representing events data for the corresponding key. See forecast.

Returns

event_pred_cols – List of patsy model formula terms, one for each key of daily_event_df_dict.

Return type

list [str]

greykite.framework.pipeline.utils.get_basic_pipeline(estimator=SimpleSilverkiteEstimator(), score_func='MeanAbsolutePercentError', score_func_greater_is_better=False, agg_periods=None, agg_func=None, relative_error_tolerance=None, coverage=0.95, null_model_params=None, regressor_cols=None)[source]

Returns a basic pipeline for univariate forecasting. Allows for outlier detection, normalization, null imputation, degenerate column removal, and forecast model fitting. By default, only null imputation is enabled. See source code for the pipeline steps.

Notes

While score_func is used to define the estimator’s score function, the scoring parameter of RandomizedSearchCV should be provided when using this pipeline in grid search. Otherwise, grid search assumes higher values are better for score_func.

Parameters
  • estimator (instance of an estimator that implements BaseForecastEstimator, default SimpleSilverkiteEstimator()) – Estimator to use as the final step in the pipeline.

  • score_func (str or callable, default EvaluationMetricEnum.MeanAbsolutePercentError.name) – Score function used to select optimal model in CV. If a callable, takes arrays y_true, y_pred and returns a float. If a string, must be either a EvaluationMetricEnum member name or FRACTION_OUTSIDE_TOLERANCE.

  • score_func_greater_is_better (bool, default False) – True if score_func is a score function, meaning higher is better, and False if it is a loss function, meaning lower is better. Must be provided if score_func is a callable (custom function). Ignored if score_func is a string, because the direction is known.

  • agg_periods (int or None, default None) – Number of periods to aggregate before evaluation. The model is fit at the original frequency, and the forecast is aggregated according to agg_periods, e.g. fit the model on hourly data and evaluate performance at the daily level. If None, no aggregation is applied.

  • agg_func (callable or None, default None) – Takes an array and returns a number, e.g. np.max, np.sum. Used to aggregate data prior to evaluation (applied to both actual and predicted values). Ignored if agg_periods is None.

  • relative_error_tolerance (float or None, default None) – Threshold to compute the FRACTION_OUTSIDE_TOLERANCE metric, defined as the fraction of forecasted values whose relative error is strictly greater than relative_error_tolerance. For example, 0.05 allows for 5% relative error. Required if score_func is FRACTION_OUTSIDE_TOLERANCE.

  • coverage (float or None, optional, default 0.95) – Intended coverage of the prediction bands (0.0 to 1.0). If None, the upper/lower predictions are not returned.

  • null_model_params (dict or None, default None) –

    Defines baseline model to compute R2_null_model_score evaluation metric. R2_null_model_score is the improvement in the loss function relative to a null model. It can be used to evaluate model quality with respect to a simple baseline. For details, see r2_null_model_score.

    The null model is a DummyRegressor, which returns constant predictions.

    Valid keys are “strategy”, “constant”, “quantile”. See DummyRegressor. For example:

    null_model_params = {
        "strategy": "mean",
    }
    null_model_params = {
        "strategy": "median",
    }
    null_model_params = {
        "strategy": "quantile",
        "quantile": 0.8,
    }
    null_model_params = {
        "strategy": "constant",
        "constant": 2.0,
    }
    

    If None, R2_null_model_score is not calculated.

    Note: CV model selection always optimizes score_func, not R2_null_model_score.

  • regressor_cols (list [str] or None, default None) – A list of regressor columns used in the training and prediction DataFrames. It should contain only the regressors that are being used in the grid search. If None, no regressor columns are used. Regressor columns that are unavailable in df are dropped.

Returns

pipeline – sklearn Pipeline for univariate forecasting.

Return type

sklearn.pipeline.Pipeline
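The shape of the returned pipeline can be sketched with stock scikit-learn components standing in for Greykite’s transformers and estimator; this illustrates only the null-imputation-plus-estimator structure, not the actual steps:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression

# The real pipeline uses Greykite transformers (null imputation,
# outlier detection, normalization, degenerate-column removal) and a
# BaseForecastEstimator; stock components stand in for them here.
pipeline = Pipeline([
    ("input", SimpleImputer(strategy="mean")),   # null imputation step
    ("estimator", LinearRegression()),           # forecast model step
])
X = np.array([[1.0], [2.0], [np.nan], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])
pipeline.fit(X, y)  # the NaN is imputed before the model is fit
pred = pipeline.predict(np.array([[5.0]]))
```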

greykite.framework.utils.result_summary.summarize_grid_search_results(grid_search, only_changing_params=True, combine_splits=True, decimals=None, score_func='MeanAbsolutePercentError', score_func_greater_is_better=False, cv_report_metrics='ALL', column_order=None)[source]

Summarizes CV results for each grid search parameter combination.

While grid_search.cv_results_ could be imported into a pandas.DataFrame without this function, the following conveniences are provided:

  • returns the correct ranks based on each metric’s greater_is_better direction.

  • summarizes the hyperparameter space, only showing the parameters that change

  • combines split scores into a tuple to save table width

  • rounds the values to specified decimals

  • orders columns by type (test score, train score, metric, etc.)

Parameters
  • grid_search (RandomizedSearchCV) – Grid search output (fitted RandomizedSearchCV object).

  • only_changing_params (bool, default True) – If True, only show parameters with multiple values in the hyperparameter_grid.

  • combine_splits (bool, default True) –

    Whether to report split scores as a tuple in a single column.

    • If True, adds a column for the test splits scores for each requested metric. Adds a column with train split scores if those are available.

      For example, “split_train_score” would contain the values (split1_train_score, split2_train_score, split3_train_score) as a tuple.

    • If False, this summary column is not added.

    The original split columns are available either way.

  • decimals (int or None, default None) – Number of decimal places to round to. If decimals is negative, it specifies the number of positions to the left of the decimal point. If None, does not round.

  • score_func (str or callable, default EvaluationMetricEnum.MeanAbsolutePercentError.name) –

    Score function used to select optimal model in CV. If a callable, takes arrays y_true, y_pred and returns a float. If a string, must be either a EvaluationMetricEnum member name or FRACTION_OUTSIDE_TOLERANCE.

    Used in this function to fix the "rank_test_score" column if score_func_greater_is_better=False.

    Should be the same as what was passed to run_forecast_config, or forecast_pipeline, or get_hyperparameter_searcher.

  • score_func_greater_is_better (bool, default False) –

    True if score_func is a score function, meaning higher is better, and False if it is a loss function, meaning lower is better. Must be provided if score_func is a callable (custom function). Ignored if score_func is a string, because the direction is known.

    Used in this function to fix the "rank_test_score" column if score_func_greater_is_better=False.

    Should be the same as what was passed to run_forecast_config, or forecast_pipeline, or get_hyperparameter_searcher.

  • cv_report_metrics (CV_REPORT_METRICS_ALL, or list [str], or None, default CV_REPORT_METRICS_ALL) –

    Additional metrics to show in the summary, besides the one specified by score_func.

    If a metric is specified but not available, a warning will be given.

    Should be the same as what was passed to run_forecast_config, or forecast_pipeline, or get_hyperparameter_searcher, or a subset of computed metric to show.

    If a list of strings, valid strings are greykite.common.evaluation.EvaluationMetricEnum member names and FRACTION_OUTSIDE_TOLERANCE.

  • column_order (list [str] or None, default None) –

    How to order the columns. A list of regex to order column names, in greedy fashion. Column names matching the first item are placed first. Among remaining items, those matching the second items are placed next, etc. Use “*” as the last element to select all available columns, if desired. If None, uses default ordering:

    column_order = ["rank_test", "mean_test", "split_test", "mean_train",
                    "params", "param", "split_train", "time", ".*"]
    

Notes

Metrics are named in grid_search.cv_results_ according to the scoring parameter passed to RandomizedSearchCV.

"score" is the default used by sklearn for single metric evaluation.

If a dictionary is provided to scoring, as is the case through templates, then the metrics are named by its keys, and the metric used for selection is defined by refit. The keys are derived from score_func and cv_report_metrics in get_scoring_and_refit.

  • The key for score_func if it is a callable is CUSTOM_SCORE_FUNC_NAME.

  • The key for EvaluationMetricEnum member name is the short name from .get_metric_name().

  • The key for FRACTION_OUTSIDE_TOLERANCE is FRACTION_OUTSIDE_TOLERANCE_NAME.

Returns

cv_results – A summary of cross-validation results in tabular format. Each row corresponds to a set of parameters used in the grid search.

The columns have the following format, where name is the canonical short name for the metric.

"rank_test_{name}"int

The params ranked by mean_test_score (1 is best).

"mean_test_{name}"float

Average test score.

"split_test_{name}"list [float]

Test score on each split. [split 0, split 1, …]

"std_test_{name}"float

Standard deviation of test scores.

"mean_train_{name}"float

Average train score.

"split_train_{name}"list [float]

Train score on each split. [split 0, split 1, …]

"std_train_{name}"float

Standard deviation of train scores.

"mean_fit_time"float

Average time to fit each CV split (in seconds)

"std_fit_time"float

Std of time to fit each CV split (in seconds)

"mean_score_time"float

Average time to score each CV split (in seconds)

"std_score_time"float

Std of time to score each CV split (in seconds)

"params"dict

The parameters used. If only_changing==True, only shows the parameters which are not identical across all CV splits.

"param_{pipeline__param__name}"Any

The value of pipeline parameter pipeline__param__name for each row.

Return type

pandas.DataFrame
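Two of the conveniences above (combining split scores into a tuple column, and ranking a lower-is-better metric so that 1 is best) can be sketched on a toy cv_results_-style dictionary; the "MAPE" metric name here is illustrative:

```python
import pandas as pd

# Mimics the layout of grid_search.cv_results_ for two parameter settings.
cv_results = {
    "params": [{"alpha": 0.1}, {"alpha": 1.0}],
    "split0_test_MAPE": [10.0, 8.0],
    "split1_test_MAPE": [12.0, 9.0],
    "mean_test_MAPE": [11.0, 8.5],
}
df = pd.DataFrame(cv_results)

# Combine the per-split columns into a single tuple column.
split_cols = ["split0_test_MAPE", "split1_test_MAPE"]
df["split_test_MAPE"] = df[split_cols].apply(tuple, axis=1)

# MAPE is a loss (lower is better), so rank ascending: 1 = best.
df["rank_test_MAPE"] = df["mean_test_MAPE"].rank(method="min").astype(int)
```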

greykite.framework.utils.result_summary.get_ranks_and_splits(grid_search, score_func='MeanAbsolutePercentError', greater_is_better=False, combine_splits=True, decimals=None, warn_metric=True)[source]

Extracts CV results from grid_search for the specified score function. Returns the correct ranks on the test set and a tuple of the scores across splits, for both test set and train set (if available).

Notes

While cv_results contains keys with the ranks, these ranks are inverted if lower values are better and the scoring function was initialized with greater_is_better=True to report metrics with their original sign.

This function always returns the correct ranks, accounting for metric direction.

Parameters
  • grid_search (RandomizedSearchCV) – Grid search output (fitted RandomizedSearchCV object).

  • score_func (str or callable, default EvaluationMetricEnum.MeanAbsolutePercentError.name) –

    Score function to get the ranks for. If a callable, takes arrays y_true, y_pred and returns a float. If a string, must be either a EvaluationMetricEnum member name or FRACTION_OUTSIDE_TOLERANCE.

    Should be the same as what was passed to run_forecast_config, or forecast_pipeline, or get_hyperparameter_searcher.

  • greater_is_better (bool or None, default False) –

    True if score_func is a score function, meaning higher is better, and False if it is a loss function, meaning lower is better. Must be provided if score_func is a callable (custom function). Ignored if score_func is a string, because the direction is known.

    Used in this function to rank values in the proper direction.

    Should be the same as what was passed to run_forecast_config, or forecast_pipeline, or get_hyperparameter_searcher.

  • combine_splits (bool, default True) – Whether to report split scores as a tuple in a single column. If True, a single column is returned for all the splits of a given metric and train/test set. For example, “split_train_score” would contain the values (split1_train_score, split2_train_score, split3_train_score) as a tuple. If False, they are reported in their original columns.

  • decimals (int or None, default None) – Number of decimal places to round to. If decimals is negative, it specifies the number of positions to the left of the decimal point. If None, does not round.

  • warn_metric (bool, default True) – Whether to issue a warning if the requested metric is not found in the CV results.

Returns

ranks_and_splits – Ranks and split scores. Dictionary with the following keys:

"short_name"int

Canonical short name for the score_func.

"ranks"numpy.array

Ranks of the test scores for the score_func, where 1 is the best.

"split_train"list [list [float]]

Train split scores. Outer list corresponds to the parameter setting; inner list contains the scores for that parameter setting across all splits.

"split_test"list [list [float]]

Test split scores. Outer list corresponds to the parameter setting; inner list contains the scores for that parameter setting across all splits.

Return type

dict
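The direction-aware ranking can be sketched as follows; `rank_scores` is a hypothetical helper (it ignores ties, unlike a production implementation):

```python
import numpy as np

def rank_scores(mean_test_scores, greater_is_better):
    """Rank parameter settings so that 1 is always best,
    regardless of the metric's direction."""
    scores = np.asarray(mean_test_scores, dtype=float)
    if greater_is_better:
        scores = -scores  # flip so that smaller is better in both cases
    order = scores.argsort().argsort()  # 0-based rank of each element
    return order + 1

# For a loss (lower is better), the smallest score gets rank 1:
ranks = rank_scores([11.0, 8.5, 9.0], greater_is_better=False)
# -> [3, 1, 2]
```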

greykite.common.viz.timeseries_plotting.plot_multivariate(df, x_col, y_col_style_dict='plotly', default_color='rgba(0, 145, 202, 1.0)', xlabel=None, ylabel='y', title=None, showlegend=True)[source]

Plots one or more lines against the same x-axis values.

Parameters
  • df (pandas.DataFrame) – Data frame with x_col and columns named by the keys in y_col_style_dict.

  • x_col (str) – Which column to plot on the x-axis.

  • y_col_style_dict (dict [str, dict or None] or “plotly” or “auto” or “auto-fill”, default “plotly”) –

    The column(s) to plot on the y-axis, and how to style them.

    If a dictionary:

    • keystr

      column name in df

    • valuedict or None

      Optional styling options, passed as kwargs to go.Scatter. If None, uses the default: line labeled by the column name. See reference page for plotly.graph_objs.Scatter for options (e.g. color, mode, width/size, opacity). https://plotly.com/python/reference/#scatter.

    If a string, plots all columns in df besides x_col against x_col:

    • ”plotly”: plot lines with default plotly styling

    • ”auto”: plot lines with color default_color, sorted by value (ascending)

    • ”auto-fill”: plot lines with color default_color, sorted by value (ascending), and fills between lines

  • default_color (str, default “rgba(0, 145, 202, 1.0)” (blue)) – Default line color when y_col_style_dict is one of “auto”, “auto-fill”.

  • xlabel (str or None, default None) – x-axis label. If None, default is x_col.

  • ylabel (str or None, default VALUE_COL) – y-axis label

  • title (str or None, default None) – Plot title. If None, default is based on axis labels.

  • showlegend (bool, default True) – Whether to show the legend.

Returns

fig – Interactive plotly graph of one or more columns in df against x_col.

See plot_forecast_vs_actual return value for how to plot the figure and add customization.

Return type

plotly.graph_objs.Figure

greykite.common.viz.timeseries_plotting.plot_univariate(df, x_col, y_col, xlabel=None, ylabel=None, title=None, color='rgb(32, 149, 212)', showlegend=True)[source]

Simple plot of univariate timeseries.

Parameters
  • df (pandas.DataFrame) – Data frame with x_col and y_col

  • x_col (str) – x-axis column name, usually the time column

  • y_col (str) – y-axis column name, the value to plot

  • xlabel (str or None, default None) – x-axis label

  • ylabel (str or None, default None) – y-axis label

  • title (str or None, default None) – Plot title. If None, default is based on axis labels.

  • color (str, default “rgb(32, 149, 212)” (light blue)) – Line color

  • showlegend (bool, default True) – Whether to show the legend

Returns

fig – Interactive plotly graph of the value against time.

See plot_forecast_vs_actual return value for how to plot the figure and add customization.

Return type

plotly.graph_objs.Figure

See also

plot_multivariate

Provides more styling options. Also consider using plotly’s go.Scatter and go.Layout directly.

greykite.common.viz.timeseries_plotting.plot_forecast_vs_actual(df, time_col='ts', actual_col='actual', predicted_col='forecast', predicted_lower_col='forecast_lower', predicted_upper_col='forecast_upper', xlabel='ts', ylabel='y', train_end_date=None, title=None, showlegend=True, actual_mode='lines+markers', actual_points_color='rgba(250, 43, 20, 0.7)', actual_points_size=2.0, actual_color_opacity=1.0, forecast_curve_color='rgba(0, 90, 181, 0.7)', forecast_curve_dash='solid', ci_band_color='rgba(0, 90, 181, 0.15)', ci_boundary_curve_color='rgba(0, 90, 181, 0.5)', ci_boundary_curve_width=0.0, vertical_line_color='rgba(100, 100, 100, 0.9)', vertical_line_width=1.0)[source]

Plots the forecast against actuals, with prediction intervals. Adapted from the plotly user guide: https://plot.ly/python/v3/continuous-error-bars/#basic-continuous-error-bars

Parameters
  • df (pandas.DataFrame) – Timestamp, predicted, and actual values

  • time_col (str, default TIME_COL) – Column in df with timestamp (x-axis)

  • actual_col (str, default ACTUAL_COL) – Column in df with actual values

  • predicted_col (str, default PREDICTED_COL) – Column in df with predicted values

  • predicted_lower_col (str or None, default PREDICTED_LOWER_COL) – Column in df with predicted lower bound

  • predicted_upper_col (str or None, default PREDICTED_UPPER_COL) – Column in df with predicted upper bound

  • xlabel (str, default TIME_COL) – x-axis label.

  • ylabel (str, default VALUE_COL) – y-axis label.

  • train_end_date (datetime.datetime or None, default None) – Train end date. Must be a value in df[time_col].

  • title (str or None, default None) – Plot title.

  • showlegend (bool, default True) – Whether to show a plot legend.

  • actual_mode (str, default “lines+markers”) – How to show the actuals. Options: markers, lines, lines+markers

  • actual_points_color (str, default “rgba(250, 43, 20, 0.7)”) – Color of the actual line/markers.

  • actual_points_size (float, default 2.0) – Size of actual markers. Only used if “markers” is in actual_mode.

  • actual_color_opacity (float or None, default 1.0) – Opacity of actual values points.

  • forecast_curve_color (str, default “rgba(0, 90, 181, 0.7)”) – Color of the forecasted values.

  • forecast_curve_dash (str, default “solid”) – ‘dash’ property of forecast scatter.line. One of: ['solid', 'dot', 'dash', 'longdash', 'dashdot', 'longdashdot'] or a string containing a dash length list in pixels or percentages (e.g. '5px 10px 2px 2px', '5, 10, 2, 2', '10% 20% 40%')

  • ci_band_color (str, default “rgba(0, 90, 181, 0.15)”) – Fill color of the prediction bands.

  • ci_boundary_curve_color (str, default “rgba(0, 90, 181, 0.5)”) – Color of the prediction upper/lower lines.

  • ci_boundary_curve_width (float, default 0.0) – Width of the prediction upper/lower lines. The default 0.0 hides them.

  • vertical_line_color (str, default “rgba(100, 100, 100, 0.9)”) – Color of the vertical line indicating the train end date. The default is gray with an opacity of 0.9.

  • vertical_line_width (float, default 1.0) – Width of the vertical line indicating the train end date.

Returns

fig – Plotly figure of forecast against actuals, with prediction intervals if available.

Can show, convert to HTML, update:

# show figure (saved to HTML)
from plotly.offline import plot
plot(fig)
# show figure in notebook
from plotly.offline import init_notebook_mode, iplot
init_notebook_mode(connected=True)
iplot(fig)

# get HTML string, write to file
fig.to_html(include_plotlyjs=False, full_html=True)
fig.write_html("figure.html", include_plotlyjs=False, full_html=True)

# customize layout (https://plot.ly/python/v3/user-guide/)
update_layout = dict(
    yaxis=dict(title="new ylabel"),
    title_text="new title",
    title_font_size=30)
fig.update(layout=update_layout)

Return type

plotly.graph_objs.Figure

greykite.common.features.timeseries_impute.impute_with_lags(df, value_col, orders, agg_func=<function mean>, iter_num=1)[source]

Imputes the timeseries values in df[value_col] using chosen lagged values, or an aggregate of those. For example, for daily data one could use the 7th lag to impute with the value of the same day of the previous week, as opposed to the closest available value, which can be inferior for business-related timeseries.

The imputation can be applied multiple times by specifying iter_num to decrease the number of missing values in some cases. By design, this method does not guarantee that all missing values are imputed; however, the original and final numbers of missing values are returned along with the imputed dataframe.

Parameters
  • df (pandas.DataFrame) – Input dataframe which must include value_col as a column.

  • value_col (str) – The column name in df representing the values of the timeseries.

  • orders (list of int) – The lag orders to be used for aggregation.

  • agg_func (callable, default np.mean) – pandas.Series -> float An aggregation function to aggregate the chosen lags.

  • iter_num (int, default 1) – Maximum number of iterations to impute the series. Each iteration imputes the series using the provided lag orders (orders) and returns an imputed dataframe. Some values may remain missing after one iteration; additional iterations can impute more of them.

Returns

impute_info – A dictionary with the following items:

"df" : pandas.DataFrame

A dataframe with the imputed values.

"initial_missing_num" : int

Initial number of missing values.

"final_missing_num" : int

Final number of missing values after imputation.

Return type

dict
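For illustration, the documented semantics can be sketched in plain pandas. The function below is a hypothetical re-implementation for explanation only, not Greykite's actual code:

```python
import numpy as np
import pandas as pd

def impute_with_lags_sketch(df, value_col, orders, agg_func=np.mean, iter_num=1):
    # Illustrative sketch of the documented behavior.
    df = df.copy()
    initial_missing_num = int(df[value_col].isna().sum())
    for _ in range(iter_num):
        # Collect the chosen lags for every row, aggregate them,
        # then fill only the rows that are still missing.
        lags = pd.concat([df[value_col].shift(order) for order in orders], axis=1)
        agg = lags.apply(lambda row: agg_func(row.to_numpy()), axis=1)
        df[value_col] = df[value_col].fillna(agg)
    return {
        "df": df,
        "initial_missing_num": initial_missing_num,
        "final_missing_num": int(df[value_col].isna().sum()),
    }

df = pd.DataFrame({"y": [1.0, 2.0, np.nan, 4.0, np.nan, np.nan, 7.0]})
result = impute_with_lags_sketch(df, "y", orders=[1, 2], iter_num=1)
# Not all values are guaranteed to be imputed in one pass:
# here one value is filled and two remain missing.
```

A second iteration would fill more values, since newly imputed values become available as lags.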

greykite.common.features.timeseries_impute.impute_with_lags_multi(df, orders, agg_func=<function mean>, iter_num=1, cols=None)[source]

Imputes every column of df using impute_with_lags.

Parameters
  • df (pandas.DataFrame) – Input dataframe containing the columns to be imputed.

  • orders (list of int) – The lag orders to be used for aggregation.

  • agg_func (callable, default np.mean) – pandas.Series -> float An aggregation function to aggregate the chosen lags.

  • iter_num (int, default 1) – Maximum number of iterations to impute the series. Each iteration imputes the series using the provided lag orders (orders) and returns an imputed dataframe. Some values may remain missing after one iteration; additional iterations can impute more of them.

  • cols (list [str] or None, default None) – Which columns to impute. If None, imputes all columns.

Returns

impute_info – A dictionary with the following items:

"df" : pandas.DataFrame

A dataframe with the imputed values.

"missing_info" : dict

Information about the missing values in each column.

Key = name of a column in df. Value = dictionary containing:

"initial_missing_num" : int

Initial number of missing values.

"final_missing_num" : int

Final number of missing values after imputation.

Return type

dict

greykite.common.features.adjust_anomalous_data.adjust_anomalous_data(df, time_col, value_col, anomaly_df, start_date_col='start_date', end_date_col='end_date', adjustment_delta_col=None, filter_by_dict=None, filter_by_value_col=None, adjustment_method='add')[source]

This function takes:

  • a time series, in the form of a dataframe: df

  • the anomaly information, in the form of a dataframe: anomaly_df.

It then adjusts the values of the time series based on the perceived impact of the anomalies given in the column adjustment_delta_col and assigns np.nan if the impact is not given.

Note that anomaly_df can contain the anomaly information for many different timeseries. This is enabled by allowing multiple metrics and dimensions to be listed in the same anomaly dataframe. Columns can indicate the metric name and dimension value.

This function first subsets the anomaly_df to the relevant rows for the value_col as specified by filter_by_dict, then makes the specified adjustments to df.

Parameters
  • df (pandas.DataFrame) – A data frame which includes the timestamp column as well as the value column.

  • time_col (str) – The column name in df representing time for the time series data. The time column can be anything that can be parsed by pandas.DatetimeIndex.

  • value_col (str) – The column name which has the value of interest to be forecasted.

  • anomaly_df (pandas.DataFrame) –

    A dataframe which includes the anomaly information for the input series (df) but potentially for multiple series and dimensions.

    This dataframe must include these two columns:

    • start_date_col

    • end_date_col

    and include

    • adjustment_delta_col if it is not None in the function call.

    Moreover, if dimensions are requested by passing the filter_by_dict argument (not None), all of this dictionary's keys must also appear in anomaly_df.

    Here is an example:

    anomaly_df = pd.DataFrame({
        "start_date": ["1/1/2018", "1/4/2018", "1/8/2018", "1/10/2018"],
        "end_date": ["1/2/2018", "1/6/2018", "1/9/2018", "1/10/2018"],
        "adjustment_delta": [np.nan, 3, -5, np.nan],
        # extra columns for filtering
        "metric": ["y", "y", "z", "z"],
        "platform": ["MOBILE", "MOBILE", "DESKTOP", "DESKTOP"],
        "vertical": ["ads", "sales", "ads", "ads"],
    })
    

    In the above example,

    • ”start_date” is the start date of the anomaly, which is provided using the argument start_date_col.

    • ”end_date” is the end date of the anomaly, which is provided using the argument end_date_col.

    • ”adjustment_delta” is the column which includes the delta if it is known. The name of this column is provided using the argument adjustment_delta_col. Use numpy.nan if the adjustment size is not known, and the adjusted value will be set to numpy.nan.

    • ”metric”, “platform”, and “vertical” are example columns for filtering. They contain the metric name and dimensions for which the anomaly is applicable. filter_by_dict is used to filter on these columns to get the relevant anomalies for the time series represented by df[value_col].

  • start_date_col (str, default START_DATE_COL) – The column name in anomaly_df representing the start timestamp of the anomalous period, inclusive. The format can be anything that can be parsed by pandas DatetimeIndex.

  • end_date_col (str, default END_DATE_COL) – The column name in anomaly_df representing the end timestamp of the anomalous period, inclusive. The format can be anything that can be parsed by pandas DatetimeIndex.

  • adjustment_delta_col (str or None, default None) –

    The column name in anomaly_df for the impact delta of the anomalies on the values of the series.

    If the value is available, it will be used to adjust the timeseries values in the given period by adding or subtracting this value to the raw series values in that period. Whether to add or subtract is specified by adjustment_method. If the value for a row is “” or np.nan, the adjusted value is set to np.nan.

    If adjustment_delta_col is None, all adjusted values are set to np.nan.

  • filter_by_dict (dict [str, any] or None, default None) –

    A dictionary whose keys are column names of anomaly_df, and values are the desired value for that column (e.g. a string or int). If the value is an iterable (list, tuple, set), then it enumerates all allowed values for that column.

    This dictionary is used to filter anomaly_df to the matching anomalies. This helps when the anomaly_df includes the anomalies for various metrics and dimensions, so matching is needed to get the relevant anomalies for df.

    Columns in anomaly_df can contain information on metric name, metric dimension (e.g. mobile/desktop), issue severity, etc. for filtering.

  • filter_by_value_col (str or None, default None) –

    If provided, {filter_by_value_col: value_col} is added to filter_by_dict for filtering. This filters anomaly_df to rows where anomaly_df[filter_by_value_col] == value_col.

    If value_col is the metric name, this is a convenient way to find anomalies matching the metric name.

  • adjustment_method (str (“add” or “subtract”), default “add”) –

    How the adjustment in anomaly_df should be used to adjust the value in df.

    • If “add”, the value in adjustment_delta_col is added to the original value.

    • If “subtract”, it is subtracted from the original value.

Returns

Result – A dictionary with the following items (specified by key):

  • ”adjusted_df”: pandas.DataFrame

    A dataframe identical to the input dataframe df, but with value_col updated to the adjusted values.

  • ”augmented_df”: pandas.DataFrame

    A dataframe identical to the input dataframe df, with one extra column for the adjusted values: f"adjusted_{value_col}". value_col retains the original values. This is useful to inspect which values have changed.

Return type

dict
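The filter-then-adjust logic can be illustrated with a standalone pandas sketch. The function below is hypothetical (it hardcodes the default "start_date"/"end_date" column names and supports only the parameters shown), not Greykite's implementation:

```python
import numpy as np
import pandas as pd

def adjust_anomalous_sketch(df, time_col, value_col, anomaly_df,
                            adjustment_delta_col=None, filter_by_dict=None,
                            adjustment_method="add"):
    # Illustrative sketch of the documented adjustment semantics.
    anomalies = anomaly_df
    if filter_by_dict is not None:
        # Keep only the anomaly rows matching this time series.
        for col, allowed in filter_by_dict.items():
            if not isinstance(allowed, (list, tuple, set)):
                allowed = [allowed]
            anomalies = anomalies[anomalies[col].isin(list(allowed))]
    adjusted = df[value_col].astype(float).copy()
    times = pd.to_datetime(df[time_col])
    sign = 1.0 if adjustment_method == "add" else -1.0
    for _, row in anomalies.iterrows():
        mask = ((times >= pd.to_datetime(row["start_date"]))
                & (times <= pd.to_datetime(row["end_date"])))
        delta = row[adjustment_delta_col] if adjustment_delta_col is not None else np.nan
        # Unknown impact -> set the anomalous period to NaN.
        adjusted[mask] = np.nan if pd.isna(delta) else adjusted[mask] + sign * delta
    return {
        "adjusted_df": df.assign(**{value_col: adjusted}),
        "augmented_df": df.assign(**{f"adjusted_{value_col}": adjusted}),
    }

df = pd.DataFrame({
    "ts": pd.date_range("2018-01-01", periods=10, freq="D"),
    "y": np.arange(10.0),
})
anomaly_df = pd.DataFrame({
    "start_date": ["1/1/2018", "1/4/2018", "1/8/2018"],
    "end_date": ["1/2/2018", "1/6/2018", "1/9/2018"],
    "adjustment_delta": [np.nan, 3, -5],
    "metric": ["y", "y", "z"],
})
result = adjust_anomalous_sketch(
    df, "ts", "y", anomaly_df,
    adjustment_delta_col="adjustment_delta",
    filter_by_dict={"metric": "y"})
```

In this example the first anomaly (unknown delta) sets Jan 1–2 to NaN, the second adds 3 to Jan 4–6, and the third is filtered out because its metric is "z".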

greykite.common.evaluation.r2_null_model_score(y_true, y_pred, y_pred_null=None, y_train=None, loss_func=<function mean_squared_error>)[source]

Calculates improvement in the loss function compared to the predictions of a null model. Can be used to evaluate model quality with respect to a simple baseline model.

The score is defined as:

R2_null_model_score = 1.0 - loss_func(y_true, y_pred) / loss_func(y_true, y_pred_null)
Parameters
  • y_true (list [float] or numpy.array) – Observed response (usually on a test set).

  • y_pred (list [float] or numpy.array) – Model predictions (usually on a test set).

  • y_pred_null (list [float] or numpy.array or None) – A baseline prediction model to compare against. If None, derived from y_train or y_true.

  • y_train (list [float] or numpy.array or None) – Response values in the training data. If y_pred_null is None, then y_pred_null is set to the mean of y_train. If y_train is also None, then y_pred_null is set to the mean of y_true.

  • loss_func (callable, default sklearn.metrics.mean_squared_error) – The error loss function with signature (true_values, predicted_values).

Returns

r2_null_model – A value within (-infty, 1.0]. Higher scores are better. Can be interpreted as the improvement in the loss function compared to the predictions of the null model. For example, a score of 0.74 means the loss is 74% lower than for the null model.

Return type

float

Notes

There is a connection between R2_null_model_score and R2. R2_null_model_score can be interpreted as the additional improvement in the coefficient of determination (i.e. R2, see sklearn.metrics.r2_score) with respect to a null model.

Under the default settings of this function, where loss_func is mean squared error and y_pred_null is the average of y_true, the scores are equivalent:

# simplified definition of R2_score, where SSE is sum of squared error
y_true_avg = np.repeat(np.average(y_true), y_true.shape[0])
R2_score := 1.0 - SSE(y_true, y_pred) / SSE(y_true, y_true_avg)
R2_score := 1.0 - MSE(y_true, y_pred) / VAR(y_true)  # equivalent definition

r2_null_model_score(y_true, y_pred) == r2_score(y_true, y_pred)

r2_score is 0 if simply predicting the mean (y_pred = y_true_avg).

If y_pred_null is passed, and if loss_func is mean squared error and y_true has nonzero variance, this function measures how much the r2_score of the predictions (y_pred) closes the gap between the r2_score of the null model (y_pred_null) and the r2_score of the best possible model (y_true), which is 1.0:

R2_pred = r2_score(y_true, y_pred)       # R2 of predictions
R2_null = r2_score(y_true, y_pred_null)  # R2 of null model
r2_null_model_score(y_true, y_pred, y_pred_null) == (R2_pred - R2_null) / (1.0 - R2_null)

When y_pred_null=y_true_avg, R2_null is 0 and this reduces to the formula above.

Summary (for loss_func=mean_squared_error):

  • If R2_null>0 (good null model), then R2_null_model_score < R2_score

  • If R2_null=0 (uninformative null model), then R2_null_model_score = R2_score

  • If R2_null<0 (poor null model), then R2_null_model_score > R2_score

For other loss functions, r2_null_model_score has the same connection to pseudo R2.
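Under the default settings, the score can be computed directly from its formula. The snippet below is a standalone illustration using sklearn, not a call into Greykite:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.0, 2.0, 3.0, 5.0])  # one prediction is off by 1

# Default null model: predict the mean of y_true.
y_pred_null = np.repeat(y_true.mean(), len(y_true))

# R2_null_model_score = 1 - loss(y_true, y_pred) / loss(y_true, y_pred_null)
score = 1.0 - (mean_squared_error(y_true, y_pred)
               / mean_squared_error(y_true, y_pred_null))
# Under these defaults, the score coincides with r2_score(y_true, y_pred).
```

Here the model's MSE is 0.25 and the null model's MSE is 1.25, so the score is 0.8: the loss is 80% lower than for the null model.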

greykite.framework.pipeline.utils.get_score_func_with_aggregation(score_func, greater_is_better=None, agg_periods=None, agg_func=None, relative_error_tolerance=None)[source]

Returns a score function that pre-aggregates inputs according to agg_func, and filters out invalid true values before evaluation. This allows fitting the model at a granular level, yet evaluating at a coarser level.

Also returns the proper direction and short name for the score function.

Parameters
  • score_func (str or callable) – If callable, a function that maps two arrays to a number: (true, predicted) -> score.

  • greater_is_better (bool, default False) – True if score_func is a score function, meaning higher is better, and False if it is a loss function, meaning lower is better. Must be provided if score_func is a callable (custom function). Ignored if score_func is a string, because the direction is known.

  • agg_periods (int or None, default None) – Number of periods to aggregate before evaluation. The model is fit at the original frequency, and the forecast is aggregated according to agg_periods. E.g. fit the model on hourly data, and evaluate performance at the daily level. If None, no aggregation is applied.

  • agg_func (callable or None, default None) – Takes an array and returns a number, e.g. np.max, np.sum. Used to aggregate data prior to evaluation (applied to both actual and predicted values). Ignored if agg_periods is None.

  • relative_error_tolerance (float or None, default None) – Threshold to compute the FRACTION_OUTSIDE_TOLERANCE metric, defined as the fraction of forecasted values whose relative error is strictly greater than relative_error_tolerance. For example, 0.05 allows for 5% relative error. Required if score_func is FRACTION_OUTSIDE_TOLERANCE.

Returns

  • score_func (callable) – scorer with pre-aggregation function and filter,

  • greater_is_better (bool) – Whether greater_is_better for the scorer. Uses the provided greater_is_better if the provided score_func is a callable. Otherwise, looks up the direction.

  • short_name (str) – Canonical short name for the score_func.
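The pre-aggregation idea can be sketched as a simple wrapper. The names below (with_aggregation, mae, daily_mae) are illustrative, not Greykite's internals; real usage would pass the returned callable as a scorer:

```python
import numpy as np

def with_aggregation(score_func, agg_periods=None, agg_func=np.sum):
    # Sketch: wrap a score function so inputs are aggregated into
    # blocks of agg_periods before scoring.
    def scorer(y_true, y_pred):
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        if agg_periods is not None:
            n = (len(y_true) // agg_periods) * agg_periods  # drop incomplete tail
            y_true = agg_func(y_true[:n].reshape(-1, agg_periods), axis=1)
            y_pred = agg_func(y_pred[:n].reshape(-1, agg_periods), axis=1)
        return score_func(y_true, y_pred)
    return scorer

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

# Fit at hourly granularity, evaluate daily totals (blocks of 24).
daily_mae = with_aggregation(mae, agg_periods=24, agg_func=np.sum)
```

For two days of hourly data where every prediction is 0.5 too high, the hourly MAE is 0.5 but the daily MAE of the summed totals is 12.0, which is what coarser-level evaluation measures.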

greykite.framework.pipeline.utils.get_hyperparameter_searcher(hyperparameter_grid, model, cv=None, hyperparameter_budget=None, n_jobs=1, verbose=1, **kwargs) → sklearn.model_selection._search.RandomizedSearchCV[source]

Returns a RandomizedSearchCV object for hyperparameter tuning via cross-validation.

sklearn.model_selection.RandomizedSearchCV runs a full grid search if hyperparameter_budget is sufficient to exhaust the full hyperparameter_grid; otherwise, it samples uniformly at random from the space.

Parameters
  • hyperparameter_grid (dict or list [dict]) –

    Dictionary with parameter names (string) as keys and distributions or lists of parameters to try. Distributions must provide an rvs method for sampling (such as those from scipy.stats.distributions). Lists of parameters are sampled uniformly.

    May also be a list of such dictionaries to avoid undesired combinations of parameters. Passed as param_distributions to sklearn.model_selection.RandomizedSearchCV, see docs for more info.

  • model (estimator object) – An object of this type is instantiated for each grid point. It is assumed to implement the scikit-learn estimator interface.

  • cv (int, cross-validation generator, iterable, or None, default None) – Determines the cross-validation splitting strategy. See sklearn.model_selection.RandomizedSearchCV.

  • hyperparameter_budget (int or None, default None) –

    Maximum number of hyperparameter sets to try within the hyperparameter_grid search space. If None, uses defaults:

    • exhaustive grid search if all values are constant

    • 10 if any value is a distribution to sample from

  • n_jobs (int or None, default 1) – Number of jobs to run in parallel (the maximum number of concurrently running workers). -1 uses all CPUs. -2 uses all CPUs but one. None is treated as 1 unless in a joblib.Parallel backend context that specifies otherwise.

  • verbose (int, default 1) –

    Verbosity level during CV.

    • if > 0, prints number of fits

    • if > 1, prints fit parameters, total score + fit time

    • if > 2, prints train/test scores

  • kwargs (additional parameters) –

    Keyword arguments to pass to get_scoring_and_refit. Accepts the following parameters:

    • "score_func"

    • "score_func_greater_is_better"

    • "cv_report_metrics"

    • "agg_periods"

    • "agg_func"

    • "relative_error_tolerance"

Returns

grid_search – Object that can run randomized search on hyperparameters.

Return type

sklearn.model_selection.RandomizedSearchCV
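The underlying search behavior can be illustrated directly with sklearn (no Greykite call; the estimator, grid, and data below are hypothetical). When every grid value is a constant list and the budget covers the whole space, the search is exhaustive:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import RandomizedSearchCV

# A grid of constant lists: n_iter equal to the grid size -> full grid search.
hyperparameter_grid = {"alpha": [0.1, 1.0, 10.0]}
search = RandomizedSearchCV(
    estimator=Ridge(),
    param_distributions=hyperparameter_grid,
    n_iter=3,       # plays the role of hyperparameter_budget
    cv=3,
    random_state=0)

X = np.arange(30, dtype=float).reshape(-1, 1)
y = 2.0 * X.ravel() + 1.0
search.fit(X, y)
```

If any grid value were a scipy distribution instead of a list, the search would sample n_iter points at random from the space.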

greykite.framework.pipeline.utils.get_scoring_and_refit(score_func='MeanAbsolutePercentError', score_func_greater_is_better=False, cv_report_metrics=None, agg_periods=None, agg_func=None, relative_error_tolerance=None)[source]

Provides scoring and refit parameters for RandomizedSearchCV.

Together, scoring and refit specify which metrics to evaluate and how to evaluate the predictions on the test set to identify the optimal model.

Notes

Sets greater_is_better=True in scoring for all metrics to report them with their original sign, and properly accounts for this in refit to extract the best index.

Pass both scoring and refit to RandomizedSearchCV.

Parameters
  • score_func (str or callable, default EvaluationMetricEnum.MeanAbsolutePercentError.name) – Score function used to select optimal model in CV. If a callable, takes arrays y_true, y_pred and returns a float. If a string, must be either a EvaluationMetricEnum member name or FRACTION_OUTSIDE_TOLERANCE.

  • score_func_greater_is_better (bool, default False) – True if score_func is a score function, meaning higher is better, and False if it is a loss function, meaning lower is better. Must be provided if score_func is a callable (custom function). Ignored if score_func is a string, because the direction is known.

  • cv_report_metrics (CV_REPORT_METRICS_ALL, or list [str], or None, default None # noqa: E501) –

    Additional metrics to compute during CV, besides the one specified by score_func.

    • If the string constant greykite.common.constants.CV_REPORT_METRICS_ALL, computes all metrics in EvaluationMetricEnum. Also computes FRACTION_OUTSIDE_TOLERANCE if relative_error_tolerance is not None. The results are reported by the short name (.get_metric_name()) for EvaluationMetricEnum members and FRACTION_OUTSIDE_TOLERANCE_NAME for FRACTION_OUTSIDE_TOLERANCE.

    • If a list of strings, each of the listed metrics is computed. Valid strings are greykite.common.evaluation.EvaluationMetricEnum member names and FRACTION_OUTSIDE_TOLERANCE.

      For example:

      ["MeanSquaredError", "MeanAbsoluteError", "MeanAbsolutePercentError", "MedianAbsolutePercentError", "FractionOutsideTolerance2"]
      
    • If None, no additional metrics are computed.

  • agg_periods (int or None, default None) – Number of periods to aggregate before evaluation. The model is fit at the original frequency, and the forecast is aggregated according to agg_periods. E.g. fit the model on hourly data, and evaluate performance at the daily level. If None, no aggregation is applied.

  • agg_func (callable or None, default None) – Takes an array and returns a number, e.g. np.max, np.sum. Used to aggregate data prior to evaluation (applied to both actual and predicted values). Ignored if agg_periods is None.

  • relative_error_tolerance (float or None, default None) – Threshold to compute the FRACTION_OUTSIDE_TOLERANCE metric, defined as the fraction of forecasted values whose relative error is strictly greater than relative_error_tolerance. For example, 0.05 allows for 5% relative error. If None, the metric is not computed.

Returns

  • scoring (dict) – A dictionary of metrics to evaluate for each CV split. The key is the metric name, the value is an instance of evaluation_PredictScorerDF generated by make_scorer_df.

    The value has a score method that takes actual and predicted values and returns a single number.

    There is one item in the dictionary for score_func and an additional item for each additional element in cv_report_metrics.

    • The key for score_func if it is a callable is CUSTOM_SCORE_FUNC_NAME.

    • The key for EvaluationMetricEnum member name is the short name from .get_metric_name().

    • The key for FRACTION_OUTSIDE_TOLERANCE is FRACTION_OUTSIDE_TOLERANCE_NAME.

    See RandomizedSearchCV.

  • refit (callable) – Callable that takes cv_results_ from grid search and returns the best index.

    See RandomizedSearchCV.
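The sign convention described in the Notes can be illustrated in plain sklearn (a sketch under assumed metric names, not Greykite's scoring dictionary): each scorer keeps the metric's original sign via greater_is_better=True, and a refit callable accounts for the metric's true direction when picking the best index.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import make_scorer, mean_absolute_error, mean_squared_error
from sklearn.model_selection import RandomizedSearchCV

# Report both metrics with their original sign (greater_is_better=True);
# the refit callable knows lower MSE is better and takes the argmin.
scoring = {
    "MSE": make_scorer(mean_squared_error, greater_is_better=True),
    "MAE": make_scorer(mean_absolute_error, greater_is_better=True),
}

def refit(cv_results):
    return int(np.argmin(cv_results["mean_test_MSE"]))

search = RandomizedSearchCV(
    Ridge(), {"alpha": [0.1, 10.0]}, n_iter=2,
    scoring=scoring, refit=refit, cv=3, random_state=0)
X = np.arange(30, dtype=float).reshape(-1, 1)
y = 3.0 * X.ravel()
search.fit(X, y)
```

Reporting metrics with their original sign keeps cv_results_ readable (errors stay positive) while the refit callable still selects the correct best model.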

greykite.framework.pipeline.utils.get_best_index(results, metric='score', greater_is_better=False)[source]

Suitable for use as the refit parameter to RandomizedSearchCV, after wrapping with functools.partial.

Callable that takes cv_results_ from grid search and returns the best index.

Parameters
  • results (dict [str, numpy.array]) – Results from CV grid search. See RandomizedSearchCV cv_results_ attribute for the format.

  • metric (str, default “score”) – Which metric to use to select the best parameters. In single metric evaluation, the metric name should be “score”. For multi-metric evaluation, the scoring parameter to RandomizedSearchCV is a dictionary, and metric must be a key of scoring.

  • greater_is_better (bool, default False) – If True, selects the parameters with highest test values for metric. Otherwise, selects those with the lowest test values for metric.

Returns

best_index – Best index to use for refitting the model.

Return type

int

Examples

>>> from functools import partial
>>> from sklearn.model_selection import RandomizedSearchCV
>>> refit = partial(get_best_index, metric="score", greater_is_better=False)
>>> # RandomizedSearchCV(..., refit=refit)
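The selection logic itself is simple enough to sketch. The function below is an illustrative stand-in (it reads the f"mean_test_{metric}" column of a cv_results_-style dict; the real implementation may use other columns):

```python
import numpy as np

def get_best_index_sketch(results, metric="score", greater_is_better=False):
    # Pick the index of the best mean test score for the given metric.
    scores = np.asarray(results[f"mean_test_{metric}"])
    return int(np.argmax(scores) if greater_is_better else np.argmin(scores))

# A toy cv_results_-like dict with three candidate parameter sets.
results = {"mean_test_score": [0.30, 0.12, 0.45]}
```

With greater_is_better=False (a loss), index 1 wins; with greater_is_better=True (a score), index 2 wins.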
greykite.framework.pipeline.utils.get_forecast(df, trained_model: sklearn.pipeline.Pipeline, train_end_date=None, test_start_date=None, forecast_horizon=None, xlabel='ts', ylabel='y', relative_error_tolerance=None) → greykite.framework.output.univariate_forecast.UnivariateForecast[source]

Runs model predictions on df and creates a UnivariateForecast object.

Parameters
  • df (pandas.DataFrame) – Has columns cst.TIME_COL, cst.VALUE_COL, to forecast.

  • trained_model (sklearn.pipeline) – A fitted Pipeline with estimator step and predict function.

  • train_end_date (datetime.datetime, default None) – Train end date. Passed to UnivariateForecast.

  • test_start_date (datetime.datetime, default None) – Test start date. Passed to UnivariateForecast.

  • forecast_horizon (int or None, default None) – Number of periods forecasted into the future. Must be > 0. Passed to UnivariateForecast.

  • xlabel (str) – Time column to use in representing forecast (e.g. x-axis in plots).

  • ylabel (str) – Value column to use in representing the forecast (e.g. y-axis in plots).

  • relative_error_tolerance (float or None, default None) – Threshold to compute the Outside Tolerance metric, defined as the fraction of forecasted values whose relative error is strictly greater than relative_error_tolerance. For example, 0.05 allows for 5% relative error. If None, the metric is not computed.

Returns

univariate_forecast – Forecasts represented as a UnivariateForecast object.

Return type

UnivariateForecast

class greykite.common.data_loader.DataLoader[source]

Returns datasets included in the library in pandas.DataFrame format.

available_datasets

The names of the available datasets.

Type

list [str]

static get_data_home(data_dir=None, data_sub_dir=None)[source]

Returns the folder path data_dir/data_sub_dir. If data_dir is None, returns the internal data directory. By default, the Greykite data directory is a folder named ‘data’ in the project source code. Alternatively, it can be set programmatically by giving an explicit folder path.

Parameters
  • data_dir (str or None, default None) – The path to the input data directory. If None, it is set to ‘{project_trunk}/data’.

  • data_sub_dir (str or None, default None) – The name of the input data sub directory. Updates path by appending to the data_dir at the end. If None, data_dir path is unchanged.

Returns

data_home – Path to the data folder.

Return type

str

static get_data_names(data_path)[source]

Returns the names of the .csv and .csv.xz files in data_path.

Parameters

data_path (str) – Path to the data folder.

Returns

file_names – The names of the .csv and .csv.xz files in data_path.

Return type

list [str]
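The listing behavior can be sketched with the standard library. This is an illustrative stand-alone function (the real method's exact return format may differ), demonstrated on a throwaway folder:

```python
import os
import tempfile

def get_data_names_sketch(data_path):
    # List the .csv and .csv.xz files in a folder.
    return sorted(
        name for name in os.listdir(data_path)
        if name.endswith(".csv") or name.endswith(".csv.xz"))

# Demo on a temporary directory with two data files and one non-data file.
with tempfile.TemporaryDirectory() as folder:
    for name in ("a.csv", "b.csv.xz", "notes.txt"):
        open(os.path.join(folder, name), "w").close()
    names = get_data_names_sketch(folder)
```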

get_data_inventory()[source]

Returns the names of the available internal datasets.

Returns

file_names – The names of the available internal datasets.

Return type

list [str]

get_df(data_path, data_name)[source]

Returns a pandas.DataFrame containing the dataset from data_path/data_name. The input data must be in .csv or .csv.xz format. Raises a ValueError if the specified input file is not found.

Parameters
  • data_path (str) – Path to the data folder.

  • data_name (str) – Name of the csv file to be loaded from. For example ‘peyton_manning’.

Returns

df – Input dataset.

Return type

pandas.DataFrame

load_peyton_manning()[source]

Loads the Daily Peyton Manning dataset.

This dataset contains log daily page views for the Wikipedia page for Peyton Manning. It is one of the primary datasets used in demonstrations of the Facebook Prophet algorithm. Source: https://github.com/facebook/prophet/blob/master/examples/example_wp_log_peyton_manning.csv

Below is the dataset attribute information:

"ts" : date of the page view
"y" : log of the number of page views

Returns

df

Has the following columns:

"ts" : date of the page view.
"y" : log of the number of page views.

Return type

pandas.DataFrame object with Peyton Manning data.

load_parking(system_code_number=None)[source]

Loads the Hourly Parking dataset. This dataset contains occupancy rates (8:00 to 16:30) from 2016/10/04 to 2016/12/19 for car parks in Birmingham that are operated by NCP from Birmingham City Council. Source: https://archive.ics.uci.edu/ml/datasets/Parking+Birmingham (UK Open Government Licence (OGL)).

Below is the dataset attribute information:

"SystemCodeNumber" : car park ID
"Capacity" : car park capacity
"Occupancy" : car park occupancy rate
"LastUpdated" : date and time of the measure

Parameters

system_code_number (str or None, default None) – If None, occupancy rate is averaged across all the SystemCodeNumber. Else only the occupancy rate of the given system_code_number is returned.

Returns

df

Has the following columns:

"LastUpdated" : time, rounded to the nearest half hour.
"Capacity" : car park capacity.
"Occupancy" : car park occupancy rate.
"OccupancyRatio" : Occupancy divided by Capacity.

Return type

pandas.DataFrame object with Parking data.

load_bikesharing()[source]

Loads the Hourly Bike Sharing Count dataset.

This dataset contains the aggregated hourly count of rented bikes. It also includes weather data: maximum daily temperature (tmax), minimum daily temperature (tmin), and precipitation (pn). The raw bike-sharing data is provided by Capital Bikeshare. Source: https://www.capitalbikeshare.com/system-data. The raw weather data is from Baltimore-Washington INTL Airport: https://www.ncdc.noaa.gov/data-access/land-based-station-data

Below is the dataset attribute information:

"ts" : hour and date
"count" : number of shared bikes
"tmin" : minimum daily temperature
"tmax" : maximum daily temperature
"pn" : precipitation

Returns

df

Has the following columns:

"date" : day of year
"ts" : hourly timestamp
"count" : number of rented bikes across Washington DC.
"tmin" : minimum daily temperature
"tmax" : maximum daily temperature
"pn" : precipitation

Return type

pandas.DataFrame with bikesharing data.

load_beijing_pm()[source]

Loads the Beijing Particulate Matter (PM2.5) dataset. https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data

This hourly dataset contains PM2.5 readings from the US Embassy in Beijing, along with meteorological data from Beijing Capital International Airport.

The dataset covers Jan 1st, 2010 to Dec 31st, 2014. Missing data are denoted as NA.

Below is the dataset attribute information:

"No" : row number
"year" : year of data in this row
"month" : month of data in this row
"day" : day of data in this row
"hour" : hour of data in this row
"pm2.5" : PM2.5 concentration (ug/m^3)
"DEWP" : dew point (celsius)
"TEMP" : temperature (celsius)
"PRES" : pressure (hPa)
"cbwd" : combined wind direction
"Iws" : cumulated wind speed (m/s)
"Is" : cumulated hours of snow
"Ir" : cumulated hours of rain

Returns

df

Has the following columns:

TIME_COL : hourly timestamp
"year" : year of data in this row
"month" : month of data in this row
"day" : day of data in this row
"hour" : hour of data in this row
"pm" : PM2.5 concentration (ug/m^3)
"dewp" : dew point (celsius)
"temp" : temperature (celsius)
"pres" : pressure (hPa)
"cbwd" : combined wind direction
"iws" : cumulated wind speed (m/s)
"is" : cumulated hours of snow
"ir" : cumulated hours of rain

Return type

pandas.DataFrame with Beijing PM2.5 data.

load_data(data_name, **kwargs)[source]

Loads dataset by name from the internal data library.

Parameters

data_name (str) – Dataset to load from the internal data library.

Returns

df

Return type

pandas.DataFrame object with the data_name dataset.

class greykite.framework.benchmark.data_loader_ts.DataLoaderTS[source]

Returns datasets included in the library in pandas.DataFrame or UnivariateTimeSeries format.

Extends DataLoader

load_peyton_manning_ts()[source]

Loads the Daily Peyton Manning dataset.

This dataset contains log daily page views for the Wikipedia page for Peyton Manning. It is one of the primary datasets used in demonstrations of the Facebook Prophet algorithm. Source: https://github.com/facebook/prophet/blob/master/examples/example_wp_log_peyton_manning.csv

Below is the dataset attribute information:

"ts" : date of the page view
"y" : log of the number of page views

Returns

ts

Peyton Manning page views data. Time and value column:

time_col : "ts"

Date of the page view.

value_col : "y"

Log of the number of page views.

Return type

UnivariateTimeSeries

load_parking_ts(system_code_number=None)[source]

Loads the Hourly Parking dataset.

This dataset contains occupancy rates (8:00 to 16:30) from 2016/10/04 to 2016/12/19 for car parks in Birmingham that are operated by NCP from Birmingham City Council. Source: https://archive.ics.uci.edu/ml/datasets/Parking+Birmingham (UK Open Government Licence (OGL)).

Below is the dataset attribute information:

"SystemCodeNumber" : car park ID
"Capacity" : car park capacity
"Occupancy" : car park occupancy rate
"LastUpdated" : date and time of the measure

Parameters

system_code_number (str or None, default None) – If None, occupancy rate is averaged across all the SystemCodeNumber. Else only the occupancy rate of the given system_code_number is returned.

Returns

ts

Parking data. Time and value column:

time_col : "LastUpdated"

Date and time of the occupancy rate, rounded to the nearest half hour.

value_col : "OccupancyRatio"

Occupancy divided by Capacity.

Return type

UnivariateTimeSeries

load_bikesharing_ts()[source]

Loads the Hourly Bike Sharing Count dataset.

This dataset contains the aggregated hourly count of rented bikes. It also includes weather data: maximum daily temperature (tmax), minimum daily temperature (tmin), and precipitation (pn). The raw bike-sharing data is provided by Capital Bikeshare. Source: https://www.capitalbikeshare.com/system-data. The raw weather data is from Baltimore-Washington INTL Airport: https://www.ncdc.noaa.gov/data-access/land-based-station-data

Below is the dataset attribute information:

"ts" : hour and date
"count" : number of shared bikes
"tmin" : minimum daily temperature
"tmax" : maximum daily temperature
"pn" : precipitation

Returns

ts

Bike Sharing Count data. Time and value column:

time_col : "ts"

Hour and date.

value_col : "y"

Number of rented bikes across Washington DC.

Additional regressors:

"tmin" : minimum daily temperature
"tmax" : maximum daily temperature
"pn" : precipitation

Return type

UnivariateTimeSeries

load_beijing_pm_ts()[source]

Loads the Beijing Particulate Matter (PM2.5) dataset. https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data

This hourly dataset contains PM2.5 readings from the US Embassy in Beijing, along with meteorological data from Beijing Capital International Airport.

The dataset covers Jan 1st, 2010 to Dec 31st, 2014. Missing data are denoted as NA.

Below is the dataset attribute information:

"No" : row number
"year" : year of data in this row
"month" : month of data in this row
"day" : day of data in this row
"hour" : hour of data in this row
"pm2.5" : PM2.5 concentration (ug/m^3)
"DEWP" : dew point (celsius)
"TEMP" : temperature (celsius)
"PRES" : pressure (hPa)
"cbwd" : combined wind direction
"Iws" : cumulated wind speed (m/s)
"Is" : cumulated hours of snow
"Ir" : cumulated hours of rain

Returns

ts

Beijing PM2.5 data. Time and value column:

time_col : TIME_COL

hourly timestamp

value_col : "pm"

PM2.5 concentration (ug/m^3)

Additional regressors:

"dewp" : dew point (celsius)
"temp" : temperature (celsius)
"pres" : pressure (hPa)
"cbwd" : combined wind direction
"iws" : cumulated wind speed (m/s)
"is" : cumulated hours of snow
"ir" : cumulated hours of rain

Return type

UnivariateTimeSeries

load_data_ts(data_name, **kwargs)[source]

Loads dataset by name from the internal data library.

Parameters

data_name (str) – Dataset to load from the internal data library.

Returns

ts – Has the requested data_name.

Return type

UnivariateTimeSeries

static get_data_home(data_dir=None, data_sub_dir=None)

Returns the folder path data_dir/data_sub_dir. If data_dir is None, returns the internal data directory. By default the Greykite data directory is a folder named ‘data’ in the project source code. Alternatively, it can be set programmatically by passing an explicit folder path.

Parameters
  • data_dir (str or None, default None) – The path to the input data directory. If None, it is set to ‘{project_trunk}/data’.

  • data_sub_dir (str or None, default None) – The name of the input data sub-directory. If provided, it is appended to data_dir. If None, the data_dir path is unchanged.

Returns

data_home – Path to the data folder.

Return type

str
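The path resolution described above can be sketched with a small stand-in (a hypothetical re-implementation, not Greykite’s actual code; the default directory name is an assumption):

```python
import os

def get_data_home(data_dir=None, data_sub_dir=None, default_dir="data"):
    """Returns data_dir/data_sub_dir, falling back to a default
    directory when data_dir is None, as described above."""
    data_home = data_dir if data_dir is not None else default_dir
    if data_sub_dir is not None:
        # Append the sub-directory to the resolved base path.
        data_home = os.path.join(data_home, data_sub_dir)
    return data_home
```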

get_data_inventory()

Returns the names of the available internal datasets.

Returns

file_names – The names of the available internal datasets.

Return type

list [str]

static get_data_names(data_path)

Returns the names of the .csv and .csv.xz files in data_path.

Parameters

data_path (str) – Path to the data folder.

Returns

file_names – The names of the .csv and .csv.xz files in data_path.

Return type

list [str]
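The filtering behavior can be sketched as follows (a hypothetical re-implementation; the real method may strip extensions or order results differently):

```python
import os

def get_data_names(data_path):
    """Returns the names of the .csv and .csv.xz files in data_path."""
    return sorted(
        file_name for file_name in os.listdir(data_path)
        if file_name.endswith(".csv") or file_name.endswith(".csv.xz"))
```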

get_df(data_path, data_name)

Returns a pandas.DataFrame containing the dataset from data_path/data_name. The input data must be in .csv or .csv.xz format. Raises a ValueError if the specified input file is not found.

Parameters
  • data_path (str) – Path to the data folder.

  • data_name (str) – Name of the csv file to load, e.g. ‘peyton_manning’.

Returns

df – Input dataset.

Return type

pandas.DataFrame
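The loading behavior could look like the following sketch (a hypothetical stand-in using pandas, which decompresses .csv.xz transparently based on the file extension):

```python
import os
import pandas as pd

def get_df(data_path, data_name):
    """Loads data_path/data_name(.csv or .csv.xz) into a DataFrame;
    raises ValueError if no matching file exists."""
    for ext in (".csv", ".csv.xz"):
        file_path = os.path.join(data_path, data_name + ext)
        if os.path.exists(file_path):
            return pd.read_csv(file_path)
    raise ValueError(f"Input file not found: {data_name}")
```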

load_beijing_pm()

Loads the Beijing Particulate Matter (PM2.5) dataset. https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data

This hourly data set contains the PM2.5 data of US Embassy in Beijing. Meanwhile, meteorological data from Beijing Capital International Airport are also included.

The dataset’s time period is from Jan 1st, 2010 to Dec 31st, 2014. Missing data are denoted as NA.

Below is the dataset attribute information:

No : row number
year : year of data in this row
month : month of data in this row
day : day of data in this row
hour : hour of data in this row
pm2.5 : PM2.5 concentration (ug/m^3)
DEWP : dew point (celsius)
TEMP : temperature (celsius)
PRES : pressure (hPa)
cbwd : combined wind direction
Iws : cumulated wind speed (m/s)
Is : cumulated hours of snow
Ir : cumulated hours of rain

Returns

df

Has the following columns:

TIME_COL : hourly timestamp
"year" : year of data in this row
"month" : month of data in this row
"day" : day of data in this row
"hour" : hour of data in this row
"pm" : PM2.5 concentration (ug/m^3)
"dewp" : dew point (celsius)
"temp" : temperature (celsius)
"pres" : pressure (hPa)
"cbwd" : combined wind direction
"iws" : cumulated wind speed (m/s)
"is" : cumulated hours of snow
"ir" : cumulated hours of rain

Return type

pandas.DataFrame with Beijing PM2.5 data.

load_bikesharing()

Loads the Hourly Bike Sharing Count dataset.

This dataset contains the aggregated hourly count of rented bikes. It also includes weather data: maximum daily temperature (tmax), minimum daily temperature (tmin), and precipitation (pn). The raw bike-sharing data is provided by Capital Bikeshare. Source: https://www.capitalbikeshare.com/system-data. The raw weather data is from Baltimore-Washington INTL Airport: https://www.ncdc.noaa.gov/data-access/land-based-station-data

Below is the dataset attribute information:

ts : hour and date
count : number of shared bikes
tmin : minimum daily temperature
tmax : maximum daily temperature
pn : precipitation

Returns

df

Has the following columns:

"date" : day of year
"ts" : hourly timestamp
"count" : number of rented bikes across Washington DC
"tmin" : minimum daily temperature
"tmax" : maximum daily temperature
"pn" : precipitation

Return type

pandas.DataFrame with bikesharing data.

load_data(data_name, **kwargs)

Loads dataset by name from the internal data library.

Parameters

data_name (str) – Dataset to load from the internal data library.

Returns

df

Return type

pandas.DataFrame object with the requested data_name.

load_parking(system_code_number=None)

Loads the Hourly Parking dataset. This dataset contains occupancy rates (8:00 to 16:30) from 2016/10/04 to 2016/12/19 for car parks in Birmingham operated by NCP, provided by Birmingham City Council. Source: https://archive.ics.uci.edu/ml/datasets/Parking+Birmingham. Licensed under the UK Open Government Licence (OGL).

Below is the dataset attribute information:

SystemCodeNumber : car park ID
Capacity : car park capacity
Occupancy : car park occupancy rate
LastUpdated : date and time of the measure

Parameters

system_code_number (str or None, default None) – If None, occupancy rate is averaged across all the SystemCodeNumber. Else only the occupancy rate of the given system_code_number is returned.

Returns

df

Has the following columns:

"LastUpdated" : time, rounded to the nearest half hour
"Capacity" : car park capacity
"Occupancy" : car park occupancy rate
"OccupancyRatio" : Occupancy divided by Capacity

Return type

pandas.DataFrame object with Parking data.
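The OccupancyRatio column and the system_code_number=None averaging can be illustrated with toy data (a sketch with made-up numbers, using the column names above):

```python
import pandas as pd

# Toy parking records for two car parks ("A", "B") at the same timestamps.
df = pd.DataFrame({
    "SystemCodeNumber": ["A", "B", "A", "B"],
    "LastUpdated": ["2016-10-04 08:00", "2016-10-04 08:00",
                    "2016-10-04 08:30", "2016-10-04 08:30"],
    "Capacity": [100, 200, 100, 200],
    "Occupancy": [50, 100, 60, 150],
})
df["OccupancyRatio"] = df["Occupancy"] / df["Capacity"]

# system_code_number=None: average the ratio across all car parks per timestamp.
avg = df.groupby("LastUpdated", as_index=False)["OccupancyRatio"].mean()
```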

load_peyton_manning()

Loads the Daily Peyton Manning dataset.

This dataset contains log daily page views for the Wikipedia page for Peyton Manning. It is one of the primary datasets used for demonstrations of the Facebook Prophet algorithm. Source: https://github.com/facebook/prophet/blob/master/examples/example_wp_log_peyton_manning.csv

Below is the dataset attribute information:

ts : date of the page view
y : log of the number of page views

Returns

df

Has the following columns:

"ts" : date of the page view
"y" : log of the number of page views

Return type

pandas.DataFrame object with Peyton Manning data.

Internal Functions

class greykite.algo.forecast.silverkite.constants.silverkite_seasonality.SilverkiteSeasonalityEnum(value)[source]

Defines default seasonalities for the Silverkite estimator. Names should match those in SeasonalityEnum. The default order for each seasonality is stored in this enum.

DAILY_SEASONALITY: greykite.algo.forecast.silverkite.constants.silverkite_seasonality.SilverkiteSeasonality = SilverkiteSeasonality(name='tod', period=24.0, order=12, seas_names='daily', default_min_days=2)

tod is 0-24 time of day (tod granularity based on input data, up to second level). Requires at least two full cycles to add the seasonal term (default_min_days=2).

WEEKLY_SEASONALITY: greykite.algo.forecast.silverkite.constants.silverkite_seasonality.SilverkiteSeasonality = SilverkiteSeasonality(name='tow', period=7.0, order=4, seas_names='weekly', default_min_days=14)

tow is 0-7 time of week (tow granularity based on input data, up to second level). order=4 for full flexibility to model daily input.

MONTHLY_SEASONALITY: greykite.algo.forecast.silverkite.constants.silverkite_seasonality.SilverkiteSeasonality = SilverkiteSeasonality(name='tom', period=1.0, order=2, seas_names='monthly', default_min_days=60)

tom is 0-1 time of month (tom granularity based on input data, up to daily level).

QUARTERLY_SEASONALITY: greykite.algo.forecast.silverkite.constants.silverkite_seasonality.SilverkiteSeasonality = SilverkiteSeasonality(name='toq', period=1.0, order=5, seas_names='quarterly', default_min_days=180)

toq (continuous time of quarter) with natural period. Each day is mapped to a value in [0.0, 1.0) based on its position in the calendar quarter: (Jan1-Mar31, Apr1-Jun30, Jul1-Sep30, Oct1-Dec31). The start of each quarter is 0.0.

YEARLY_SEASONALITY: greykite.algo.forecast.silverkite.constants.silverkite_seasonality.SilverkiteSeasonality = SilverkiteSeasonality(name='ct1', period=1.0, order=15, seas_names='yearly', default_min_days=548)

ct1 (continuous year) with natural period.
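Each entry above expands its time-of-cycle column into Fourier series terms up to the stated order. A minimal sketch of such an expansion (generic sin/cos features for illustration, not Greykite’s internal implementation):

```python
import math

def fourier_terms(t, period, order):
    """Sin/cos features for a time-of-cycle value t with the given
    period, up to the stated Fourier order (2 * order features)."""
    feats = []
    for k in range(1, order + 1):
        feats.append(math.sin(2 * math.pi * k * t / period))
        feats.append(math.cos(2 * math.pi * k * t / period))
    return feats

# Weekly seasonality ("tow" in [0, 7), period=7.0, order=4) -> 8 features.
weekly = fourier_terms(3.5, period=7.0, order=4)
```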

greykite.algo.common.ml_models.fit_ml_model(df, model_formula_str=None, fit_algorithm='linear', fit_algorithm_params=None, y_col=None, pred_cols=None, min_admissible_value=None, max_admissible_value=None, uncertainty_dict=None, normalize_method='min_max', regression_weight_col=None)[source]

Fits a predictive ML (machine learning) model to the continuous response vector (given in y_col) and returns the fitted model.

Parameters
  • df (pd.DataFrame) – A data frame with the response vector (y) and the feature columns (x_mat).

  • model_formula_str (str) – The prediction model formula string e.g. “y~x1+x2+x3*x4”. This is similar to R formulas. See https://patsy.readthedocs.io/en/latest/formulas.html#how-formulas-work.

  • fit_algorithm (str, optional, default “linear”) –

    The type of predictive model used in fitting.

    See fit_model_via_design_matrix for available options and their parameters.

  • fit_algorithm_params (dict or None, optional, default None) – Parameters passed to the requested fit_algorithm. If None, uses the defaults in fit_model_via_design_matrix.

  • y_col (str) – The column name which has the value of interest to be forecasted. If model_formula_str is not passed, y_col (e.g. "y") is used as the response vector column.

  • pred_cols (List[str]) – The names of the feature columns. If model_formula_str is not passed, pred_cols (e.g. ["x1", "x2", "x3"]) is used as the design matrix columns.

  • min_admissible_value (int or float or None) – The minimum admissible value for the predict function to return.

  • max_admissible_value (int or float or None) – The maximum admissible value for the predict function to return.

  • uncertainty_dict (dict or None) –

    If passed as a dictionary an uncertainty model will be fit. The items in the dictionary are:

    "uncertainty_method" : str

    The title of the method. As of now, only “simple_conditional_residuals” is implemented, which calculates CIs using residuals.

    "params" : dict

    A dictionary of parameters needed for the requested uncertainty_method.

  • normalize_method (str or None, default “min_max”) – If a string is provided, it will be used as the normalization method in normalize_df, passed via the argument method. Available options are: “min_max”, “statistical”. If None, no normalization will be performed. See that function for more details.

  • regression_weight_col (str or None, default None) – The column name for the weights to be used in weighted regression version of applicable machine-learning models.

Returns

trained_model

Trained model dictionary with keys:

"ml_model" : A trained model with predict method.
"uncertainty_model" : dict

The returned uncertainty_model dict from conf_interval.

Return type

dict
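For intuition, the fit_algorithm="linear" case with admissible-value clipping might look like the following sketch (a bare-bones stand-in using numpy least squares; the real function also handles formulas, normalization, weights, and uncertainty):

```python
import numpy as np

def fit_linear_model(x_mat, y, min_admissible_value=None,
                     max_admissible_value=None):
    """Ordinary least squares with an intercept; predictions are
    clipped to the admissible range when bounds are given."""
    design = np.column_stack([np.ones(len(x_mat)), x_mat])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)

    def predict(x_new):
        pred = np.column_stack([np.ones(len(x_new)), x_new]) @ coef
        lo = -np.inf if min_admissible_value is None else min_admissible_value
        hi = np.inf if max_admissible_value is None else max_admissible_value
        return np.clip(pred, lo, hi)

    return {"ml_model": coef, "predict": predict}

# Fit y = 2*x + 1 on exact data; clip predictions below at 0.
model = fit_linear_model(np.array([[0.0], [1.0], [2.0]]),
                         np.array([1.0, 3.0, 5.0]),
                         min_admissible_value=0.0)
```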

greykite.algo.common.ml_models.fit_ml_model_with_evaluation(df, model_formula_str=None, y_col=None, pred_cols=None, fit_algorithm='linear', fit_algorithm_params=None, ind_train=None, ind_test=None, training_fraction=0.9, randomize_training=False, min_admissible_value=None, max_admissible_value=None, uncertainty_dict=None, normalize_method='min_max', regression_weight_col=None)[source]

Fits prediction models to a continuous response vector (y) and reports results.

Parameters
  • df (pandas.DataFrame) – A data frame with the response vector (y) and the feature columns (x_mat)

  • model_formula_str (str) – The prediction model formula e.g. “y~x1+x2+x3*x4”. This is similar to R language (https://www.r-project.org/) formulas. See https://patsy.readthedocs.io/en/latest/formulas.html#how-formulas-work.

  • y_col (str) – The column name which has the value of interest to be forecasted. If model_formula_str is not passed, y_col (e.g. "y") is used as the response vector column.

  • pred_cols (list [str]) – The names of the feature columns. If model_formula_str is not passed, pred_cols (e.g. ["x1", "x2", "x3"]) is used as the design matrix columns.

  • fit_algorithm (str, optional, default “linear”) –

    The type of predictive model used in fitting.

    See fit_model_via_design_matrix for available options and their parameters.

  • fit_algorithm_params (dict or None, optional, default None) – Parameters passed to the requested fit_algorithm. If None, uses the defaults in fit_model_via_design_matrix.

  • ind_train (list [int]) – The index (row number) of the training set

  • ind_test (list [int]) – The index (row number) of the test set

  • training_fraction (float, between 0.0 and 1.0) – The fraction of data used for training. This is used if ind_train and ind_test are not passed. If this is also None or 1.0, then we skip testing and train on the entire dataset.

  • randomize_training (bool) – If True, then the training and the test sets will be randomized rather than in chronological order

  • min_admissible_value (int or float or None) – The minimum admissible value for the predict function to return.

  • max_admissible_value (int or float or None) – The maximum admissible value for the predict function to return.

  • uncertainty_dict (dict or None) –

    If passed as a dictionary an uncertainty model will be fit. The items in the dictionary are:

    "uncertainty_method" : str

    The title of the method. As of now, only “simple_conditional_residuals” is implemented, which calculates CIs using residuals.

    "params" : dict

    A dictionary of parameters needed for the requested uncertainty_method.

  • normalize_method (str or None, default “min_max”) – If a string is provided, it will be used as the normalization method in normalize_df, passed via the argument method. Available options are: “min_max”, “statistical”. If None, no normalization will be performed. See that function for more details.

  • regression_weight_col (str or None, default None) – The column name for the weights to be used in weighted regression version of applicable machine-learning models.

Returns

trained_model

Trained model dictionary with the following keys.

"ml_model" : A trained model object.
"summary" : Summary of the final model trained on all data.
"x_mat" : Feature matrix used to train on the full data (rows of df with NA are dropped).
"y" : Response vector for training and testing (rows of df with NA are dropped). The index corresponds to selected rows in the input df.
"y_train" : Response vector used for training.
"y_train_pred" : Predicted values of y_train.
"training_evaluation" : Score function value of y_train and y_train_pred.
"y_test" : Response vector used for testing.
"y_test_pred" : Predicted values of y_test.
"test_evaluation" : Score function value of y_test and y_test_pred.
"uncertainty_model" : dict. The returned uncertainty_model dict from conf_interval.
"plt_compare_test" : Plot function to compare y_test and y_test_pred.
"plt_pred" : Plot function to compare y_train, y_train_pred, y_test and y_test_pred.

Return type

dict
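The ind_train/ind_test logic driven by training_fraction and randomize_training can be sketched as follows (a hypothetical helper; the seed parameter is added here for reproducibility and is not part of the documented API):

```python
import random

def split_indices(n, training_fraction=0.9, randomize_training=False, seed=0):
    """Train/test row indices: chronological by default, shuffled
    when randomize_training is True."""
    ind = list(range(n))
    if randomize_training:
        random.Random(seed).shuffle(ind)
    n_train = int(n * training_fraction)
    return ind[:n_train], ind[n_train:]

# 90/10 chronological split of 10 rows.
ind_train, ind_test = split_indices(10, training_fraction=0.9)
```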

greykite.algo.forecast.silverkite.forecast_silverkite_helper.get_silverkite_uncertainty_dict(uncertainty, simple_freq='DAY', coverage=None)[source]

Returns an uncertainty_dict suitable for the forecast input parameter uncertainty_dict.

The logic is as follows:

  • If uncertainty is passed as dict:
    • If quantiles are not passed through uncertainty we fill them using coverage.

    • If coverage is also missing, or the quantiles calculated in the two ways (via uncertainty["params"]["quantiles"] and via coverage) do not match, an exception is raised.

  • If uncertainty=="auto":
    • We provide defaults based on time frequency of data.

    • Specifies uncertainty["params"]["quantiles"] based on coverage if provided; otherwise the default coverage is 0.95.

Parameters
  • uncertainty (str or dict or None) –

    It specifies what method should be used for uncertainty. If a dict is passed then it is directly returned to be passed to forecast as uncertainty_dict.

    If “auto”, it builds a generic dict depending on frequency.
    • For frequencies less than or equal to one day it sets conditional_cols to be [“dow_hr”].

    • Otherwise it sets the conditional_cols to be None

    If None and coverage is None, the upper/lower predictions are not returned

  • simple_freq (str, optional) – SimpleTimeFrequencyEnum member that best matches the input data frequency according to get_simple_time_frequency_from_period

  • coverage (float or None, optional) – Intended coverage of the prediction bands (0.0 to 1.0). If None and uncertainty is None, the upper/lower predictions are not returned.

Returns

uncertainty – An uncertainty dict to be used as input to forecast. See that function’s docstring for more details.

Return type

dict or None
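Filling quantiles from coverage, as described in the logic above, amounts to a symmetric split of the uncovered probability mass (a sketch; the helper name is made up):

```python
def quantiles_from_coverage(coverage):
    """Symmetric lower/upper quantiles implied by the intended
    coverage, e.g. coverage=0.95 -> roughly (0.025, 0.975)."""
    alpha = (1.0 - coverage) / 2.0
    return (alpha, 1.0 - alpha)

lower, upper = quantiles_from_coverage(0.95)
```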

class greykite.algo.forecast.silverkite.forecast_simple_silverkite.SimpleSilverkiteForecast(constants: greykite.algo.forecast.silverkite.constants.silverkite_constant.SilverkiteConstant = <greykite.algo.forecast.silverkite.constants.silverkite_constant.SilverkiteConstant object>)[source]

A derived class of SilverkiteForecast. Provides an alternative interface with simplified configuration parameters. Produces the same trained model output and uses the same predict functions.

convert_params(df: pandas.core.frame.DataFrame, time_col: str, value_col: str, time_properties: Optional[Dict] = None, freq: Optional[str] = None, forecast_horizon: Optional[int] = None, origin_for_time_vars: Optional[float] = None, train_test_thresh: Optional[datetime.datetime] = None, training_fraction: Optional[float] = 0.9, fit_algorithm: str = 'ridge', fit_algorithm_params: Optional[Dict] = None, holidays_to_model_separately: Optional[Union[str, List[str]]] = 'auto', holiday_lookup_countries: Optional[Union[str, List[str]]] = 'auto', holiday_pre_num_days: int = 2, holiday_post_num_days: int = 2, holiday_pre_post_num_dict: Optional[Dict] = None, daily_event_df_dict: Optional[Dict] = None, changepoints_dict: Optional[Dict] = None, yearly_seasonality: Union[bool, str, int] = 'auto', quarterly_seasonality: Union[bool, str, int] = 'auto', monthly_seasonality: Union[bool, str, int] = 'auto', weekly_seasonality: Union[bool, str, int] = 'auto', daily_seasonality: Union[bool, str, int] = 'auto', max_daily_seas_interaction_order: Optional[int] = None, max_weekly_seas_interaction_order: Optional[int] = None, autoreg_dict: Optional[Dict] = None, seasonality_changepoints_dict: Optional[Dict] = None, min_admissible_value: Optional[float] = None, max_admissible_value: Optional[float] = None, uncertainty_dict: Optional[Dict] = None, growth_term: Optional[str] = 'linear', regressor_cols: Optional[List[str]] = None, feature_sets_enabled: Optional[Union[bool, str, Dict[str, Optional[Union[bool, str]]]]] = 'auto', extra_pred_cols: Optional[List[str]] = None, regression_weight_col: Optional[str] = None, simulation_based: Optional[bool] = False)[source]

Converts parameters of forecast_simple_silverkite into those of SilverkiteForecast::forecast.

Makes it easier to set parameters to SilverkiteForecast::forecast suitable for most forecasting problems. Provides data-aware defaults for seasonality and interaction terms. Provides a simple configuration of holidays from an internal holiday database, and user-friendly configuration for growth and regressors.

These parameters can be set from a plain-text config (e.g. no pandas dataframes). The parameter list is intentionally flat to facilitate hyperparameter grid search. Every parameter is either a parameter of SilverkiteForecast::forecast or a tuning parameter.

Notes

The basic parameters are identical to SilverkiteForecast::forecast. The more complex parameters are specified via config parameters:

  • daily_event_df_dict (via holiday*)

  • fs_components_df (via *_seasonality)

  • extra_pred_cols (via holiday*, *seas*, growth_term, regressor_cols, feature_sets_enabled, extra_pred_cols)

Parameters
  • df (pandas.DataFrame) – A data frame which includes the timestamp column as well as the value column. This is the df for training the model, not for future prediction.

  • time_col (str) – The column name in df representing time for the time series data. The time column can be anything that can be parsed by pandas DatetimeIndex.

  • value_col (str) – The column name which has the value of interest to be forecasted

  • time_properties (dict [str, any] or None, optional) –

    Time properties dictionary (likely produced by get_forecast_time_properties) with keys:

    "ts" : UnivariateTimeSeries or None

    df converted to a UnivariateTimeSeries.

    "period" : int

    Period of each observation (i.e. minimum time between observations, in seconds).

    "simple_freq" : SimpleTimeFrequencyEnum

    SimpleTimeFrequencyEnum member corresponding to data frequency.

    "num_training_points" : int

    Number of observations for training.

    "num_training_days" : int

    Number of days for training.

    "start_year" : int

    Start year of the training period.

    "end_year" : int

    End year of the forecast period.

    "origin_for_time_vars" : float

    Continuous time representation of the first date in df.

    In this function,

    • start_year and end_year are used to define daily_event_df_dict.

    • simple_freq and num_training_days are used to define fs_components_df.

    • simple_freq and num_training_days are used to set default feature_sets_enabled.

    • origin_for_time_vars is used to set default origin_for_time_vars.

    • the other parameters are ignored

    It is okay if num_training_points, num_training_days, start_year, end_year are computed for a superset of df. This allows CV splits and backtest, which train on partial data, to use the same data-aware model parameters as the forecast on all training data.

    If None, the values are computed for df. This corresponds to using the same modeling approach on the CV splits and backtest from forecast_pipeline, without requiring the same parameters. In this case, make sure forecast_horizon is at least as large as the test period for the split, to ensure all holidays are captured.

  • freq (str or None, optional, default None) – Frequency of input data. Used to compute time_properties only if time_properties is None. Frequency strings can have multiples, e.g. ‘5H’. See https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases for a list of frequency aliases. If None, inferred by pandas.infer_freq. Provide this parameter if df has missing timepoints.

  • forecast_horizon (int or None, optional, default None) – Number of periods to forecast into the future. Must be > 0. Used to compute time_properties only if time_properties is None. If None, default is determined by input data frequency. Used to determine forecast end date, to pull the appropriate holiday data. Should be at least as large as the prediction period (if this function is called from forecast_pipeline, the prediction period for different splits is set via cv_horizon, test_horizon, forecast_horizon).

  • origin_for_time_vars (float or None, optional, default None) – The time origin used to create continuous variables for time. If None, uses the value from time_properties.

  • train_test_thresh (datetime.datetime or None, optional, default None) – e.g. datetime.datetime(2019, 6, 30) The threshold for training and testing split. Note that the final returned model is trained using all data. If None, training split is based on training_fraction.

  • training_fraction (float or None, optional, default 0.9) – The fraction of data used for training (0.0 to 1.0) Used only if train_test_thresh is None. If this is also None or 1.0, then we skip testing and train on the entire dataset.

  • fit_algorithm (str, optional, default “ridge”) –

    The type of predictive model used in fitting.

    See fit_model_via_design_matrix for available options and their parameters.

  • fit_algorithm_params (dict or None, optional, default None) – Parameters passed to the requested fit_algorithm. If None, uses the defaults in fit_model_via_design_matrix.

  • holiday_lookup_countries (list [str] or “auto” or None, optional, default “auto”) –

    The countries that contain the holidays you intend to model (holidays_to_model_separately).

    • If “auto”, uses a default list of countries that contain the default holidays_to_model_separately. See HOLIDAY_LOOKUP_COUNTRIES_AUTO.

    • If a list, must be a list of country names.

    • If None or an empty list, no holidays are modeled.

  • holidays_to_model_separately (list [str] or “auto” or ALL_HOLIDAYS_IN_COUNTRIES or None, optional, default “auto” # noqa: E501) –

    Which holidays to include in the model. The model creates a separate key, value for each item in holidays_to_model_separately. The other holidays in the countries are grouped together as a single effect.

    • If “auto”, uses a default list of important holidays. See HOLIDAYS_TO_MODEL_SEPARATELY_AUTO.

    • If ALL_HOLIDAYS_IN_COUNTRIES, uses all available holidays in holiday_lookup_countries. This can often create a model that has too many parameters, and should typically be avoided.

    • If a list, must be a list of holiday names.

    • If None or an empty list, all holidays in holiday_lookup_countries are grouped together as a single effect.

    Use holiday_lookup_countries to provide a list of countries where these holidays occur.

  • holiday_pre_num_days (int, default 2) – Model holiday effects for holiday_pre_num_days days before the holiday.

  • holiday_post_num_days (int, default 2) – Model holiday effects for holiday_post_num_days days after the holiday.

  • holiday_pre_post_num_dict (dict [str, (int, int)] or None, default None) – Overrides pre_num and post_num for each holiday in holidays_to_model_separately. For example, if holidays_to_model_separately contains “Thanksgiving” and “Labor Day”, this parameter can be set to {"Thanksgiving": [1, 3], "Labor Day": [1, 2]}, denoting that the “Thanksgiving” pre_num is 1 and post_num is 3, and “Labor Day” pre_num is 1 and post_num is 2. Holidays not specified use the default given by pre_num and post_num.
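The pre/post window expansion with per-holiday overrides can be sketched as follows (a hypothetical helper; in the actual model each offset day is encoded as a separate effect):

```python
from datetime import date, timedelta

def holiday_effect_days(holiday_date, holiday_name, pre_num=2, post_num=2,
                        pre_post_num_dict=None):
    """Dates modeled for a holiday: pre_num days before through
    post_num days after, with optional per-holiday (pre, post) overrides."""
    if pre_post_num_dict and holiday_name in pre_post_num_dict:
        pre_num, post_num = pre_post_num_dict[holiday_name]
    return [holiday_date + timedelta(days=offset)
            for offset in range(-pre_num, post_num + 1)]

# Thanksgiving overridden to pre_num=1, post_num=3, as in the example above.
days = holiday_effect_days(date(2021, 11, 25), "Thanksgiving",
                           pre_post_num_dict={"Thanksgiving": (1, 3)})
```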

  • daily_event_df_dict (dict [str, pandas.DataFrame] or None, default None) –

    A dictionary of data frames, each representing events data for the corresponding key. Specifies additional events to include besides the holidays specified above. The format is the same as in forecast. The DataFrame has two columns:

    • The first column contains event dates. Must be in a format recognized by pandas.to_datetime. Must be at daily frequency for proper join. It is joined against the time in df, converted to a day: pd.to_datetime(pd.DatetimeIndex(df[time_col]).date).

    • The second column contains the event label for each date.

    The column order is important; column names are ignored. The event dates must span their occurrences in both the training and future prediction period.

    During modeling, each key in the dictionary is mapped to a categorical variable named f"{EVENT_PREFIX}_{key}", whose value at each timestamp is specified by the corresponding DataFrame.

    For example, to manually specify a yearly event on September 1 during a training/forecast period that spans 2020-2022:

    daily_event_df_dict = {
        "custom_event": pd.DataFrame({
            "date": ["2020-09-01", "2021-09-01", "2022-09-01"],
            "label": ["is_event", "is_event", "is_event"]
        })
    }
    

    It’s possible to specify multiple events in the same df. Two events, "sep" and "oct" are specified below for 2020-2021:

    daily_event_df_dict = {
        "custom_event": pd.DataFrame({
            "date": ["2020-09-01", "2020-10-01", "2021-09-01", "2021-10-01"],
            "event_name": ["sep", "oct", "sep", "oct"]
        })
    }
    

    Use multiple keys if two events may fall on the same date. These events must be in separate DataFrames:

    daily_event_df_dict = {
        "fixed_event": pd.DataFrame({
            "date": ["2020-09-01", "2021-09-01", "2022-09-01"],
            "event_name": "fixed_event"
        }),
        "moving_event": pd.DataFrame({
            "date": ["2020-09-01", "2021-08-28", "2022-09-03"],
            "event_name": "moving_event"
        }),
    }
    

    The multiple event specification can be used even if events never overlap. An equivalent specification to the second example:

    daily_event_df_dict = {
        "sep": pd.DataFrame({
            "date": ["2020-09-01", "2021-09-01"],
            "event_name": "is_event"
        }),
        "oct": pd.DataFrame({
            "date": ["2020-10-01", "2021-10-01"],
            "event_name": "is_event"
        }),
    }
    

    Note: All these events are automatically added to the model. There is no need to specify them in extra_pred_cols as you would for forecast.

    Note: Do not use EVENT_DEFAULT in the second column. This is reserved to indicate dates that do not correspond to an event.
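    The daily join described above (timestamps converted to days, then matched against event dates) can be sketched in plain pandas. The "NONE" placeholder below is illustrative, standing in for the reserved default label:

```python
import pandas as pd

# A daily event DataFrame: first column dates, second column labels.
event_df = pd.DataFrame({
    "date": pd.to_datetime(["2020-09-01", "2021-09-01"]),
    "label": ["is_event", "is_event"],
})

# Hourly training data; timestamps are converted to days before the join.
df = pd.DataFrame({"ts": ["2020-09-01 13:00", "2020-09-02 13:00"]})
df["event_day"] = pd.to_datetime(pd.DatetimeIndex(df["ts"]).date)
df = df.merge(event_df, left_on="event_day", right_on="date", how="left")
df["label"] = df["label"].fillna("NONE")  # placeholder for non-event days
```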

  • changepoints_dict (dict or None, optional, default None) –

    Specifies the changepoint configuration.

    "method": str

    The method to locate changepoints. Valid options:

    • “uniform”: Places n_changepoints evenly spaced changepoints to allow growth to change.

    • “custom”: Places changepoints at the specified dates.

    • “auto”: Automatically detects change points. For configuration, see find_trend_changepoints.

    Additional keys to provide parameters for each particular method are described below.

    "continuous_time_col": str, optional

    Column to apply growth_func to, to generate changepoint features. Typically, this should match the growth term in the model.

    "growth_func": callable or None, optional

    Growth function (scalar -> scalar). Changepoint features are created by applying growth_func to continuous_time_col with offsets. If None, uses the identity function so that continuous_time_col is used directly as the growth term.

    If changepoints_dict[“method”] == “uniform”, this additional key is required:

    "n_changepoints": int

    number of changepoints to evenly space across training period

    If changepoints_dict[“method”] == “custom”, this other key is required:

    "dates": Iterable[Union[int, float, str, datetime]]

    Changepoint dates. Must be parsable by pd.to_datetime. Changepoints are set at the closest time on or after these dates in the dataset.

    If changepoints_dict[“method”] == “auto”, the keys that match the parameters in find_trend_changepoints (except df, time_col and value_col) are optional. Extra keys also include “dates”, “combine_changepoint_min_distance” and “keep_detected” to specify additional custom trend changepoints. These three parameters correspond to the parameters “custom_changepoint_dates”, “min_distance” and “keep_detected” in combine_detected_and_custom_trend_changepoints.
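    For intuition, the “uniform” method’s placement can be sketched as follows (a simplified stand-in operating on row positions; the real implementation works on timestamps):

```python
def uniform_changepoints(train_points, n_changepoints):
    """Evenly spaced changepoint locations across the training
    period, excluding the endpoints."""
    n = len(train_points)
    step = n / (n_changepoints + 1)
    return [train_points[int(round(step * (k + 1)))]
            for k in range(n_changepoints)]

# 100 training rows with 3 changepoints -> rows 25, 50 and 75.
cps = uniform_changepoints(list(range(100)), n_changepoints=3)
```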

  • yearly_seasonality (str or bool or int) – Determines the yearly seasonality. ‘auto’, True, False, or a number for the Fourier order

  • quarterly_seasonality (str or bool or int) – Determines the quarterly seasonality. ‘auto’, True, False, or a number for the Fourier order

  • monthly_seasonality (str or bool or int) – Determines the monthly seasonality. ‘auto’, True, False, or a number for the Fourier order

  • weekly_seasonality (str or bool or int) – Determines the weekly seasonality. ‘auto’, True, False, or a number for the Fourier order

  • daily_seasonality (str or bool or int) – Determines the daily seasonality. ‘auto’, True, False, or a number for the Fourier order

  • max_daily_seas_interaction_order (int or None, optional, default None) – Max Fourier order for interaction terms with daily seasonality. If None, uses all available terms.

  • max_weekly_seas_interaction_order (int or None, optional, default None) – Max Fourier order for interaction terms with weekly seasonality. If None, uses all available terms.

  • autoreg_dict (dict or None, optional, default None) –

    A dictionary with arguments for build_autoreg_df. That function’s parameter value_col is inferred from the input of current function SilverkiteForecast::forecast. Other keys are:

    "lag_dict": dict or None
    "agg_lag_dict": dict or None
    "series_na_fill_func": callable

    See more details for above parameters in build_autoreg_df.

  • seasonality_changepoints_dict (dict or None, optional, default None) – The parameter dictionary for seasonality change point detection. Parameters are in find_seasonality_changepoints. Note df, time_col, value_col and trend_changepoints are auto populated, and do not need to be provided.

  • min_admissible_value (float or None, optional, default None) – The minimum admissible value to return during prediction. If None, no limit is applied.

  • max_admissible_value (float or None, optional, default None) – The maximum admissible value to return during prediction. If None, no limit is applied.

  • uncertainty_dict (dict or None, optional, default None) –

    How to fit the uncertainty model. A dictionary with keys:
    "uncertainty_method": str

    The name of the method. Only "simple_conditional_residuals" is implemented in fit_prediction_model, which calculates CIs using residuals.

    "params": dict

    A dictionary of parameters needed for the requested uncertainty_method. For example, for uncertainty_method="simple_conditional_residuals", see parameters of conf_interval, listed briefly here:

    "conditional_cols"
    "quantiles"
    "quantile_estimation_method"
    "sample_size_thresh"
    "small_sample_size_method"
    "small_sample_size_quantile"

    If None, no uncertainty intervals are calculated.
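A minimal sketch of such a dictionary (the parameter values are illustrative; see conf_interval for the authoritative list and defaults):

```python
# 95% prediction intervals from residuals grouped by day of week.
uncertainty_dict = {
    "uncertainty_method": "simple_conditional_residuals",
    "params": {
        "conditional_cols": ["dow"],        # condition residuals on day of week
        "quantiles": [0.025, 0.975],        # lower/upper interval bounds
        "quantile_estimation_method": "normal_fit",
        "sample_size_thresh": 5,
        "small_sample_size_method": "std_quantiles",
        "small_sample_size_quantile": 0.98,
    },
}
```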

  • growth_term (str or None, optional, default "ct1") – How to model the growth. Valid options are {"linear", "quadratic", "sqrt", "cuberoot"}.

  • regressor_cols (list [str] or None, optional, default None) – The columns in df to use as regressors. These must be provided during prediction as well.

  • feature_sets_enabled (dict [str, bool or “auto” or None] or bool or “auto” or None, default “auto”) –

    Whether to include interaction terms and categorical variables to increase model flexibility.

    If a dict, boolean values indicate whether to include the various sets of features in the model. The following keys are recognized (from SilverkiteColumn):

    "COLS_HOUR_OF_WEEK": str

    Constant hour of week effect

    "COLS_WEEKEND_SEAS": str

    Daily seasonality interaction with is_weekend

    "COLS_DAY_OF_WEEK_SEAS": str

    Daily seasonality interaction with day of week

    "COLS_TREND_DAILY_SEAS": str

    Allow daily seasonality to change over time by is_weekend

    "COLS_EVENT_SEAS": str

    Allow sub-daily event effects

    "COLS_EVENT_WEEKEND_SEAS": str

    Allow sub-daily event effects to interact with is_weekend

    "COLS_DAY_OF_WEEK": str

    Constant day of week effect

    "COLS_TREND_WEEKEND": str

    Allow trend (growth, changepoints) to interact with is_weekend

    "COLS_TREND_DAY_OF_WEEK": str

    Allow trend to interact with day of week

    "COLS_TREND_WEEKLY_SEAS": str

    Allow weekly seasonality to change over time

    The following dictionary values are recognized:

    • True: include the feature set in the model

    • False: do not include the feature set in the model

    • None: do not include the feature set in the model

    • "auto" or not provided: use the default setting based on data frequency and size

    If not a dict:

    • if a boolean, equivalent to a dictionary with all values set to the boolean.

    • if None, equivalent to a dictionary with all values set to False.

    • if “auto”, equivalent to a dictionary with all values set to “auto”.
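For example, the dict and scalar forms could look like this (keys per SilverkiteColumn as listed above; the choices shown are arbitrary):

```python
# Dict form: pick individual feature sets. Keys not listed default to "auto".
feature_sets_enabled = {
    "COLS_HOUR_OF_WEEK": True,        # include constant hour-of-week effect
    "COLS_WEEKEND_SEAS": False,       # exclude weekend/daily-seasonality interaction
    "COLS_TREND_DAY_OF_WEEK": None,   # None behaves like False
    "COLS_TREND_WEEKLY_SEAS": "auto", # decide from data frequency and size
}

# Scalar forms: equivalent to a dict with every value set to the scalar
# (None maps to all-False).
enable_all = True
enable_auto = "auto"
```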

  • extra_pred_cols (list [str] or None, optional, default None) – Columns to include in extra_pred_cols for SilverkiteForecast::forecast. Other columns are added to extra_pred_cols by the other parameters of this function (i.e. holidays_*, growth_term, regressors, feature_sets_enabled). If None, treated the same as [].

  • regression_weight_col (str or None, default None) – The column name for the weights to be used in weighted regression version of applicable machine-learning models.

  • simulation_based (bool, default False) – Whether the future predictions are made using simulations. Note that this is only used to decide which parameters should be used for certain components, e.g. autoregression, if automatic methods are requested. The auto-settings and the prediction settings regarding simulations should match.

Returns

parameters – Parameters to call forecast.

Return type

dict

forecast_simple(*args, **kwargs)[source]

A wrapper around SilverkiteForecast::forecast that simplifies some of the input parameters.

Parameters
  • args (positional args) – Positional args to pass to convert_simple_silverkite_params. See that function for details.

  • kwargs (keyword args) – Keyword args to pass to convert_simple_silverkite_params. See that function for details.

Returns

trained_model – The return value of forecast: a dictionary that includes the fitted model from the function fit_ml_model_with_evaluation.

Return type

dict

forecast(df, time_col, value_col, freq=None, origin_for_time_vars=None, extra_pred_cols=None, train_test_thresh=None, training_fraction=0.9, fit_algorithm='linear', fit_algorithm_params=None, daily_event_df_dict=None, fs_components_df=<default DataFrame: tod, period 24.0, order 3, daily; tow, period 7.0, order 3, weekly; toy, period 1.0, order 5, yearly>, autoreg_dict=None, changepoints_dict=None, seasonality_changepoints_dict=None, changepoint_detector=None, min_admissible_value=None, max_admissible_value=None, uncertainty_dict=None, normalize_method=None, adjust_anomalous_dict=None, impute_dict=None, regression_weight_col=None, forecast_horizon=None, simulation_based=False)

A function for forecasting. It captures growth, seasonality, holidays and other patterns. See “Capturing the time-dependence in the precipitation process for weather risk assessment” as a reference: https://link.springer.com/article/10.1007/s00477-016-1285-8

Parameters
  • df (pandas.DataFrame) – A data frame which includes the timestamp column as well as the value column.

  • time_col (str) – The column name in df representing time for the time series data. The time column can be anything that can be parsed by pandas DatetimeIndex.

  • value_col (str) – The column name which has the value of interest to be forecasted.

  • freq (str, optional, default None) – The intended timeseries frequency, DateOffset alias. See https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases. If None automatically inferred. This frequency will be passed through this function as a part of the trained model and used at predict time if needed. If data include missing timestamps, and frequency is monthly/annual, user should pass this parameter, as it cannot be inferred.

  • origin_for_time_vars (float, optional, default None) – The time origin used to create continuous variables for time. If None, uses the first record in df.

  • extra_pred_cols (list of str, default None) –

    Names of the extra predictor columns.

    If None, uses [“ct1”], a simple linear growth term.

    It can leverage regressors included in df and those generated by the other parameters. The following effects will not be modeled unless specified in extra_pred_cols:

    • included in df: e.g. macro-economic factors, related timeseries

    • from build_time_features_df: e.g. ct1, ct_sqrt, dow, …

    • from daily_event_df_dict: e.g. “events_India”, …

    The columns corresponding to the following parameters are included in the model without specification in extra_pred_cols. extra_pred_cols can be used to add interactions with these terms.

    • changepoints_dict: e.g. changepoint0, changepoint1, …
    • fs_components_df: e.g. sin2_dow, cos4_dow_weekly
    • autoreg_dict: e.g. x_lag1, x_avglag_2_3_4, y_avglag_1_to_5

    If a regressor is passed in df, it needs to be provided to the associated predict function:

    • predict_silverkite: via fut_df or new_external_regressor_df
    • silverkite.predict_n (or predict_n_no_sim): via new_external_regressor_df
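A hypothetical extra_pred_cols list combining these sources (the column names are illustrative; each must exist in the generated design matrix):

```python
extra_pred_cols = [
    "ct1",             # linear growth term from build_time_features_df
    "macro_index",     # a regressor column passed in df (hypothetical name)
    "events_India",    # an event column generated from daily_event_df_dict
    "ct1:is_weekend",  # interaction between growth and the weekend indicator
]
```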

  • train_test_thresh (datetime.datetime, optional) – The threshold for the training/testing split, e.g. datetime.datetime(2019, 6, 30). Note that the final returned model is trained using all data. If None, the training split is based on training_fraction.

  • training_fraction (float, optional) – The fraction of data used for training (0.0 to 1.0). Used only if train_test_thresh is None. If this is also None or 1.0, testing is skipped and the model is trained on the entire dataset.

  • fit_algorithm (str, optional, default “linear”) –

    The type of predictive model used in fitting.

    See fit_model_via_design_matrix for available options and their parameters.

  • fit_algorithm_params (dict or None, optional, default None) – Parameters passed to the requested fit_algorithm. If None, uses the defaults in fit_model_via_design_matrix.

  • daily_event_df_dict (dict or None, optional, default None) –

    A dictionary of data frames, each representing events data for the corresponding key. The DataFrame has two columns:

    • The first column contains event dates. Must be in a format recognized by pandas.to_datetime. Must be at daily frequency for proper join. It is joined against the time in df, converted to a day: pd.to_datetime(pd.DatetimeIndex(df[time_col]).date).

    • The second column contains the event label for each date.

    The column order is important; column names are ignored. The event dates must span their occurrences in both the training and future prediction period.

    During modeling, each key in the dictionary is mapped to a categorical variable named f"{EVENT_PREFIX}_{key}", whose value at each timestamp is specified by the corresponding DataFrame.

    For example, to manually specify a yearly event on September 1 during a training/forecast period that spans 2020-2022:

    daily_event_df_dict = {
        "custom_event": pd.DataFrame({
            "date": ["2020-09-01", "2021-09-01", "2022-09-01"],
            "label": ["is_event", "is_event", "is_event"]
        })
    }
    

    It’s possible to specify multiple events in the same df. Two events, "sep" and "oct", are specified below for 2020-2021:

    daily_event_df_dict = {
        "custom_event": pd.DataFrame({
            "date": ["2020-09-01", "2020-10-01", "2021-09-01", "2021-10-01"],
            "event_name": ["sep", "oct", "sep", "oct"]
        })
    }
    

    Use multiple keys if two events may fall on the same date. These events must be in separate DataFrames:

    daily_event_df_dict = {
        "fixed_event": pd.DataFrame({
            "date": ["2020-09-01", "2021-09-01", "2022-09-01"],
            "event_name": "fixed_event"
        }),
        "moving_event": pd.DataFrame({
            "date": ["2020-09-01", "2021-08-28", "2022-09-03"],
            "event_name": "moving_event"
        }),
    }
    

    The multiple event specification can be used even if events never overlap. An equivalent specification to the second example:

    daily_event_df_dict = {
        "sep": pd.DataFrame({
            "date": ["2020-09-01", "2021-09-01"],
            "event_name": "is_event"
        }),
        "oct": pd.DataFrame({
            "date": ["2020-10-01", "2021-10-01"],
            "event_name": "is_event"
        }),
    }
    

    Note

    The events you want to use must be specified in extra_pred_cols. These take the form: f"{EVENT_PREFIX}_{key}", where EVENT_PREFIX is the constant.

    Do not use EVENT_DEFAULT in the second column. This is reserved to indicate dates that do not correspond to an event.

  • fs_components_df (pandas.DataFrame or None, optional) –

    A dataframe with information about Fourier series generation. Must contain columns with the following names:

    • "name": name of the timeseries feature, e.g. "tod", "tow"
    • "period": period of the Fourier series, optional, default 1.0
    • "order": order of the Fourier series, optional, default 1.0
    • "seas_names": season names corresponding to the name (e.g. "daily", "weekly"), optional

    The default includes daily, weekly, and yearly seasonality.
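The default shown in the function signature can be reproduced as follows (values match the documented defaults):

```python
import pandas as pd

# Default Fourier-series configuration: daily, weekly and yearly seasonality.
fs_components_df = pd.DataFrame({
    "name": ["tod", "tow", "toy"],       # time of day / week / year
    "period": [24.0, 7.0, 1.0],          # period in each feature's units
    "order": [3, 3, 5],                  # Fourier order per component
    "seas_names": ["daily", "weekly", "yearly"],
})
```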

  • autoreg_dict (dict or str or None, optional, default None) –

    If a dict: A dictionary with arguments for build_autoreg_df. That function's parameter value_col is inferred from the input of the current function self.forecast. Other keys are:

    "lag_dict": dict or None
    "agg_lag_dict": dict or None
    "series_na_fill_func": callable

    If a str: the string represents a method, and a dictionary is constructed from it. Currently the only implemented method is "auto", which uses __get_default_autoreg_dict to create the dictionary. See build_autoreg_df for more details on the above parameters.
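As a sketch, an explicit autoreg_dict might look like this (key names follow build_autoreg_df; the lag orders are illustrative):

```python
autoreg_dict = {
    # Individual lags: y_{t-1}, y_{t-2}, y_{t-3}.
    "lag_dict": {"orders": [1, 2, 3]},
    # Aggregated lags: averages over groups of lags.
    "agg_lag_dict": {
        "orders_list": [[7, 14, 21]],   # average of the weekly lags
        "interval_list": [(1, 7)],      # average over lags 1 through 7
    },
    # How to fill missing values in the lag series.
    "series_na_fill_func": lambda s: s.bfill().ffill(),
}
```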

  • changepoints_dict (dict or None, optional, default None) –

    Specifies the changepoint configuration.

    "method": str

    The method to locate changepoints. Valid options:

    • "uniform". Places n_changepoints evenly spaced changepoints to allow growth to change.

    • "custom". Places changepoints at the specified dates.

    • "auto". Automatically detects changepoints. For configuration, see find_trend_changepoints.

    Additional keys to provide parameters for each particular method are described below.

    "continuous_time_col": str, optional

    Column to apply growth_func to, to generate changepoint features. Typically this should match the growth term in the model.

    "growth_func": callable or None, optional

    Growth function (scalar -> scalar). Changepoint features are created by applying growth_func to continuous_time_col with offsets. If None, uses the identity function, so continuous_time_col is used directly as the growth term.

    If changepoints_dict["method"] == "uniform", this additional key is required:

    "n_changepoints": int

    Number of changepoints to space evenly across the training period.

    If changepoints_dict[“method”] == “custom”, this other key is required:

    "dates": Iterable[Union[int, float, str, datetime]]

    Changepoint dates. Must be parsable by pd.to_datetime. Changepoints are set at the closest time on or after these dates in the dataset.

    If changepoints_dict["method"] == "auto", the keys that match the parameters of find_trend_changepoints (except df, time_col and value_col) are optional. Extra keys "dates", "combine_changepoint_min_distance" and "keep_detected" may also be provided to specify additional custom trend changepoints. These three parameters correspond to the parameters "custom_changepoint_dates", "min_distance" and "keep_detected" in combine_detected_and_custom_trend_changepoints.

  • seasonality_changepoints_dict (dict or None, default None) – The parameter dictionary for seasonality change point detection. Parameters are in find_seasonality_changepoints. Note df, time_col, value_col and trend_changepoints are auto populated, and do not need to be provided.

  • changepoint_detector (ChangepointDetector or None, default None) – The ChangepointDetector class ChangepointDetector. This is specifically for forecast_simple_silverkite to pass the ChangepointDetector class for plotting purposes, in case that users use forecast_simple_silverkite with changepoints_dict["method"] == "auto". The trend change point detection has to be run there to include possible interaction terms, so we need to pass the detection result from there to include in the output.

  • min_admissible_value (float or None, optional, default None) – The minimum admissible value to return during prediction. If None, no limit is applied.

  • max_admissible_value (float or None, optional, default None) – The maximum admissible value to return during prediction. If None, no limit is applied.

  • uncertainty_dict (dict or None, optional, default None) –

    How to fit the uncertainty model. A dictionary with keys:
    "uncertainty_method": str

    The name of the method. Only "simple_conditional_residuals" is implemented in fit_ml_model, which calculates CIs using residuals.

    "params": dict

    A dictionary of parameters needed for the requested uncertainty_method. For example, for uncertainty_method="simple_conditional_residuals", see parameters of conf_interval:

    "conditional_cols"
    "quantiles"
    "quantile_estimation_method"
    "sample_size_thresh"
    "small_sample_size_method"
    "small_sample_size_quantile"

    If None, no uncertainty intervals are calculated.

  • normalize_method (str or None, default None) – If a string is provided, it will be used as the normalization method in normalize_df, passed via the argument method. Available options are: “min_max”, “statistical”. If None, no normalization will be performed. See that function for more details.

  • adjust_anomalous_dict (dict or None, default None) –

    If not None, a dictionary with following items:

    • "func": callable

      A function to perform adjustment of anomalous data with following signature:

      adjust_anomalous_dict["func"](
          df=df,
          time_col=time_col,
          value_col=value_col,
          **params) ->
      {"adjusted_df": adjusted_df, ...}
      
    • "params": dict

      The extra parameters to be passed to the function above.

  • impute_dict (dict or None, default None) –

    If not None, a dictionary with following items:

    • "func": callable

      A function to perform imputations with following signature:

      impute_dict["func"](
          df=df,
          value_col=value_col,
          **impute_dict["params"]) ->
      {"df": imputed_df, ...}
      
    • "params": dict

      The extra parameters to be passed to the function above.
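A minimal sketch of a conforming impute_dict (simple_impute is a hypothetical helper; any function with the documented signature works):

```python
import pandas as pd

def simple_impute(df, value_col, **params):
    """Hypothetical imputation: interpolate the value column linearly."""
    out = df.copy()
    out[value_col] = out[value_col].interpolate(**params)
    return {"df": out}

impute_dict = {
    "func": simple_impute,
    "params": {"limit_direction": "both"},  # also fill leading NaNs
}

# Usage on a toy series with gaps:
df = pd.DataFrame({"y": [None, 2.0, None, 4.0]})
imputed = impute_dict["func"](df=df, value_col="y", **impute_dict["params"])
```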

  • regression_weight_col (str or None, default None) – The column name for the weights to be used in weighted regression version of applicable machine-learning models.

  • forecast_horizon (int or None, default None) – The number of periods for which the forecast is needed. Note that this is only used to decide which parameters should be used for certain components, e.g. autoregression, if automatic methods are requested. While the forecast horizon at prediction time could differ from this value, ideally they should be the same.

  • simulation_based (bool, default False) – Whether the future predictions are made using simulations. Note that this is only used to decide which parameters should be used for certain components, e.g. autoregression, if automatic methods are requested. The auto-settings and the prediction settings regarding simulations should match.

Returns

trained_model – A dictionary that includes the fitted model from the function fit_ml_model_with_evaluation. The keys are:

df_dropna: pandas.DataFrame

The df with NAs dropped.

df: pandas.DataFrame

The original df.

num_training_points: int

The number of training points.

features_df: pandas.DataFrame

The df with augmented time features.

min_timestamp: pandas.Timestamp

The minimum timestamp in data.

max_timestamp: pandas.Timestamp

The maximum timestamp in data.

freq: str

The data frequency.

inferred_freq: str

The data frequency inferred from data.

inferred_freq_in_secs: float

The data frequency inferred from data in seconds.

inferred_freq_in_days: float

The data frequency inferred from data in days.

time_col: str

The time column name.

value_col: str

The value column name.

origin_for_time_vars: float

The first time stamp converted to a float number.

fs_components_df: pandas.DataFrame

The dataframe that specifies the seasonality Fourier configuration.

autoreg_dict: dict

The dictionary that specifies the autoregression configuration.

normalize_method: str

The normalization method. See the function input parameter normalize_method.

daily_event_df_dict: dict

The dictionary that specifies daily events configuration.

changepoints_dict: dict

The dictionary that specifies changepoints configuration.

changepoint_values: list [float]

The list of changepoints in continuous time values.

normalized_changepoint_values: list [float]

The list of changepoints in continuous time values normalized to 0 to 1.

continuous_time_col: str

The continuous time column name in features_df.

growth_func: func

The growth function used in changepoints. None means the identity function is used (linear growth).

fs_func: func

The function used to generate Fourier series for seasonality.

has_autoreg_structure: bool

Whether the model has autoregression structure.

autoreg_func: func

The function to generate autoregression columns.

min_lag_order: int

Minimal lag order in autoregression.

max_lag_order: int

Maximal lag order in autoregression.

uncertainty_dict: dict

The dictionary that specifies uncertainty model configuration.

pred_cols: list [str]

List of predictor names.

last_date_for_fit: str or pandas.Timestamp

The last timestamp used for fitting.

trend_changepoint_dates: list [pandas.Timestamp]

List of trend changepoints.

changepoint_detector: class

The ChangepointDetector class used to detect trend changepoints.

seasonality_changepoint_dates: list [pandas.Timestamp]

List of seasonality changepoints.

seasonality_changepoint_result: dict

The seasonality changepoint detection results.

fit_algorithm: str

The algorithm used to fit the model.

fit_algorithm_params: dict

The dictionary of parameters for fit_algorithm.

adjust_anomalous_info: dict

A dictionary that has anomaly adjustment results.

impute_info: dict

A dictionary that has the imputation results.

forecast_horizon: int

The forecast horizon in steps.

forecast_horizon_in_days: float

The forecast horizon in days.

forecast_horizon_in_timedelta: datetime.timedelta

The forecast horizon in timedelta.

simulation_based: bool

Whether to use simulation in prediction with autoregression terms.

Return type

dict
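Putting the parameters together, a minimal call might look like the following sketch. The greykite lines are commented out so the snippet stands alone; the import path reflects the package layout and may vary by version:

```python
import pandas as pd
# from greykite.algo.forecast.silverkite.forecast_silverkite import SilverkiteForecast

# A toy daily series with a linear trend.
df = pd.DataFrame({
    "ts": pd.date_range("2020-01-01", periods=365, freq="D"),
    "y": [0.1 * i for i in range(365)],
})

# silverkite = SilverkiteForecast()
# trained_model = silverkite.forecast(df=df, time_col="ts", value_col="y", freq="D")
# trained_model["pred_cols"]            # predictor names used in the fit
# trained_model["num_training_points"]  # number of training points
```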

partition_fut_df(fut_df, trained_model, freq, na_fill_func=<function SilverkiteForecast.<lambda>>)

This function takes a dataframe fut_df, which includes the timestamps to forecast, and a trained_model returned by forecast, and decomposes fut_df into dataframes reflecting whether the timestamps fall before, during or after the training period. It also determines whether the future timestamps after the training period immediately follow the last training timestamp or whether there is a gap; in the latter case, this function creates an expanded dataframe which includes the missing timestamps as well. If fut_df includes extra columns (e.g. regressor columns), this function interpolates them.

Parameters
  • fut_df (pandas.DataFrame) – The data frame which includes the timestamps for prediction and possibly regressors. Note that the timestamp column in fut_df must be the same as trained_model["time_col"]. We assume fut_df[time_col] is pandas.datetime64 type.

  • trained_model (dict) – A fitted silverkite model which is the output of forecast

  • freq (str) – Timeseries frequency, DateOffset alias. See https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases for the allowed frequencies.

  • na_fill_func (callable (pd.DataFrame -> pd.DataFrame)) –

    default:

    lambda df: df.interpolate().bfill().ffill()
    

    A function which interpolates missing values in a dataframe. It is mainly invoked when there is a gap between the timestamps; to fill the gaps, the regressors need to be interpolated/filled. The default works by first interpolating the continuous variables, then back-filling and forward-filling the categorical variables.
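The default's behavior on a small numeric column, for illustration:

```python
import pandas as pd

# The documented default fill function.
na_fill_func = lambda df: df.interpolate().bfill().ffill()

fut_df = pd.DataFrame({"x": [None, 2.0, None, 6.0, None]})
filled = na_fill_func(fut_df)
# interpolate fills the interior and trailing gaps; bfill fills the leading one.
```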

Returns

result – A dictionary with following items:

  • "fut_freq_in_secs": float

    The inferred frequency in fut_df

  • "training_freq_in_secs": float

    The inferred frequency in training data

  • "index_before_training": list [bool]

    A boolean list to determine which rows of fut_df include a time which is before the training start.

  • "index_within_training": list [bool]

    A boolean list to determine which rows of fut_df include a time which is during the training period.

  • "index_after_training": list [bool]

    A boolean list to determine which rows of fut_df include a time which is after the training end date.

  • "fut_df_before_training": pandas.DataFrame

    A partition of fut_df with timestamps before the training start date

  • "fut_df_within_training": pandas.DataFrame

    A partition of fut_df with timestamps during the training period

  • "fut_df_after_training": pandas.DataFrame

    A partition of fut_df with timestamps after the training end date

  • "fut_df_gap": pandas.DataFrame or None

    If there is a gap between training end date and the first timestamp after the training end date in fut_df, this dataframe can fill the gap between the two. In case fut_df includes extra columns as well, the values for those columns will be filled using na_fill_func.

  • "fut_df_after_training_expanded": pandas.DataFrame

    If there is a gap between training end date and the first timestamp after the training end date in fut_df, this dataframe will include the data for the gaps (fut_df_gap) as well as fut_df_after_training.

  • "index_after_training_original": list [bool]

    A boolean list to determine which rows of fut_df_after_training_expanded correspond to raw data passed by user which are after training end date, appearing in fut_df. Note that this partition corresponds to fut_df_after_training which is the subset of data in fut_df provided by user and also returned by this function.

  • "missing_periods_num": int

    Number of missing timestamps between the last date of training and first date in fut_df appearing after the training end date

  • "inferred_forecast_horizon": int

    This is the forecast horizon inferred from fut_df, defined as the distance between the training end date and the last date appearing in fut_df. Note that this value can be smaller or larger than the number of rows of fut_df. It is calculated by adding the number of potentially missing timestamps and the number of time periods appearing after the training end point. If there are no timestamps after the training end point in fut_df, this value is zero.

  • "forecast_partition_summary": dict

    A dictionary which includes the size of various partitions of fut_df as well as the missing timestamps if needed. The dictionary keys are as follows:

    • "len_before_training": the number of time periods before training start

    • "len_within_training": the number of time periods within training

    • "len_after_training": the number of time periods after training

    • "len_gap": the number of missing time periods between training data and future time stamps in fut_df

Return type

dict

predict(fut_df, trained_model, freq=None, past_df=None, new_external_regressor_df=None, sim_num=10, include_err=None, force_no_sim=False, na_fill_func=<function SilverkiteForecast.<lambda>>)

Performs predictions using the silverkite model. It determines whether the prediction should be simulation-based and then predicts using that setting. Here is the logic for determining if simulations are needed:

  • If the model is not autoregressive, then clearly no simulations are needed

  • If the model is autoregressive but the minimum lag appearing in the model is at least as large as the forecast horizon, simulations are not needed, because the lags can be calculated fully without predicting the future.

Users can override the above behavior and force no simulations via the force_no_sim argument, in which case some lags will be imputed. This option should not be used by most users. Scenarios where an advanced user might want it: (a) when min_lag_order >= forecast_horizon does not hold strictly but is close to holding; (b) when the user wants fast predictions and accepts imputed autoregression lags. In that case the returned predictions could correspond to an approximation of a model without autoregression.
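The decision rules above can be sketched as a hypothetical helper (needs_simulation is not part of the library; the trained_model keys follow forecast's return value):

```python
def needs_simulation(trained_model, forecast_horizon, force_no_sim=False):
    """Decide whether prediction must be simulation-based.

    No autoregression -> no simulation. With autoregression, simulate
    only when the minimum lag order is smaller than the forecast
    horizon (otherwise all lags are known at prediction time).
    force_no_sim overrides everything; missing lags get imputed.
    """
    if force_no_sim:
        return False
    if not trained_model.get("has_autoreg_structure"):
        return False
    return trained_model["min_lag_order"] < forecast_horizon

# Autoregressive model with lag 1, 7-step horizon -> simulate.
needs_simulation({"has_autoreg_structure": True, "min_lag_order": 1}, 7)
# Minimum lag covers the horizon -> no simulation needed.
needs_simulation({"has_autoreg_structure": True, "min_lag_order": 7}, 7)
```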

Parameters
  • fut_df (pandas.DataFrame) – The data frame which includes the timestamps for prediction and possibly regressors.

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast

  • freq (str, optional, default None) – Timeseries frequency, DateOffset alias. See https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases for the allowed strings. If None, it is extracted from trained_model input.

  • past_df (pandas.DataFrame or None, default None) – A data frame with past values if autoregressive methods are called via the autoreg_dict parameter of greykite.algo.forecast.silverkite.SilverkiteForecast

  • new_external_regressor_df (pandas.DataFrame or None, default None) – Contains the regressors not already included in fut_df.

  • sim_num (int, default 10) – The number of simulated series to be used in prediction.

  • include_err (bool, optional, default None) – Boolean to determine if errors are to be incorporated in the simulations. If None, it will be set to True if uncertainty is passed to the model and otherwise will be set to False

  • force_no_sim (bool, default False) – If True, prediction with no simulations is forced. This can be useful when speed is of concern or for validation purposes. In this case, the potentially non-available lags will be imputed. Most users should not set this to True as the consequences could be hard to quantify.

  • na_fill_func (callable (pd.DataFrame -> pd.DataFrame)) –

    default:

    lambda df: df.interpolate().bfill().ffill()
    

    A function which interpolates missing values in a dataframe. The main usage is invoked when there is a gap between the timestamps in fut_df. The main use case is when the user wants to predict a period which is not an immediate period after training. In that case to fill in the gaps, the regressors need to be interpolated/filled. The default works by first interpolating the continuous variables. Then it uses back-filling and then forward-filling for categorical variables.

Returns

fut_df

A dataframe with the forecasts with following (potential) columns:

  1. A time column with the column name being trained_model["time_col"]

  2. The predicted response in value_col column.

  3. Quantile summary response in the f"{value_col}_quantile_summary" column. This column only appears if the model includes uncertainty.

  4. Error std in ERR_STD_COL column. This column only appears if the model includes uncertainty.

If value_col already appears in fut_df, it will be over-written.

Return type

pandas.DataFrame

predict_n(fut_time_num, trained_model, freq=None, past_df=None, new_external_regressor_df=None, sim_num=10, include_err=None, force_no_sim=False, na_fill_func=<function SilverkiteForecast.<lambda>>)

This is the forecast function used to forecast a number of periods into the future. It determines whether the prediction should be simulation-based and then predicts using that setting. Currently, if the silverkite model uses autoregression, simulation-based predictions/CIs are used.

Parameters
  • fut_time_num (int) – number of needed future values

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast

  • freq (str, optional, default None) – Timeseries frequency, DateOffset alias. See https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases for the allowed frequencies. If None, it is extracted from trained_model input.

  • new_external_regressor_df (pandas.DataFrame or None) – Contains the extra regressors if specified.

  • sim_num (int, optional, default 10) – The number of simulated series to be used in prediction.

  • include_err (bool or None, default None) – Boolean to determine if errors are to be incorporated in the simulations. If None, it will be set to True if uncertainty is passed to the model and otherwise will be set to False

  • force_no_sim (bool, default False) – If True, prediction with no simulations is forced. This can be useful when speed is of concern or for validation purposes.

  • na_fill_func (callable (pd.DataFrame -> pd.DataFrame)) –

    default:

    lambda df: df.interpolate().bfill().ffill()
    

    A function that interpolates missing values in a dataframe. It is invoked when there is a gap between the timestamps. In that case the regressors in the gap need to be interpolated/filled. The default first interpolates the continuous variables, then back-fills and finally forward-fills the categorical variables.

Returns

fut_df – A dataframe with predictions given in value_col

Return type

pandas.DataFrame
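A small sketch of how fut_time_num and freq imply the prediction timestamps: the future index starts one period after the training end. The training end date here is illustrative; predict_n derives it and the frequency from trained_model.

```python
import pandas as pd

last_train_time = pd.Timestamp("2022-01-31")  # illustrative training end
fut_time_num = 3
freq = "D"

# The predicted timestamps are the fut_time_num periods after training.
future_index = pd.date_range(
    start=last_train_time, periods=fut_time_num + 1, freq=freq
)[1:]
```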

predict_n_no_sim(fut_time_num, trained_model, freq, new_external_regressor_df=None, time_features_ready=False, regressors_ready=False)

This is the forecast function which can be used to forecast a number of periods into the future without simulations. It accepts extra regressors (extra_pred_cols) originally in df via new_external_regressor_df.

Parameters
  • fut_time_num (int) – number of needed future values

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast

  • freq (str) – Frequency of future predictions. Accepts any valid frequency for pd.date_range.

  • new_external_regressor_df (pandas.DataFrame or None) – Contains the extra regressors if specified.

  • time_features_ready (bool) – Boolean to denote if time features are already given in df or not.

  • regressors_ready (bool) – Boolean to denote if regressors are already added to data (fut_df).

Returns

fut_df – A dataframe with predictions given in value_col

Return type

pandas.DataFrame

predict_n_via_sim(fut_time_num, trained_model, freq, new_external_regressor_df=None, sim_num=10, include_err=None)

This is the forecast function whose predictions are constructed using multiple simulations from the fitted series. The past_df used in predict_via_sim is set to the training data, which is available in trained_model. It accepts extra regressors (extra_pred_cols) originally in df via new_external_regressor_df.

Parameters
  • fut_time_num (int) – number of needed future values

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast

  • freq (str) – Frequency of future predictions. Accepts any valid frequency for pd.date_range.

  • new_external_regressor_df (pandas.DataFrame or None) – Contains the extra regressors if specified.

  • sim_num (int, optional, default 10) – The number of simulated series to be used in prediction.

  • include_err (bool, optional, default None) – Boolean to determine if errors are to be incorporated in the simulations. If None, it will be set to True if uncertainty is passed to the model and otherwise will be set to False

Returns

fut_df – A dataframe with predictions given in value_col

Return type

pandas.DataFrame

predict_no_sim(fut_df, trained_model, past_df=None, new_external_regressor_df=None, time_features_ready=False, regressors_ready=False)

Performs predictions for the dates in fut_df. If extra_pred_cols refers to a column in df, either fut_df or new_external_regressor_df must contain the regressors.

Parameters
  • fut_df (pandas.DataFrame) – The data frame which includes the timestamps for prediction and any regressors.

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast.

  • past_df (pandas.DataFrame, optional) – A data frame with past values if autoregressive methods are called via autoreg_dict parameter of greykite.algo.forecast.silverkite.SilverkiteForecast.py.

  • new_external_regressor_df (pandas.DataFrame, optional) – Contains the regressors not already included in fut_df.

  • time_features_ready (bool) – Boolean to denote if time features are already given in df or not.

  • regressors_ready (bool) – Boolean to denote if regressors are already added to data (fut_df).

Returns

The same as input dataframe with an added column for the response. If value_col already appears in fut_df, it will be over-written. If uncertainty_dict is provided as input, it will also contain a {value_col}_quantile_summary column.

Return type

pandas.DataFrame

predict_via_sim(fut_df, trained_model, past_df=None, new_external_regressor_df=None, sim_num=10, include_err=None)

Performs predictions and calculates uncertainty using multiple simulations.

Parameters
  • fut_df (pandas.DataFrame) – The data frame which includes the timestamps for prediction and possibly regressors.

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast

  • past_df (pandas.DataFrame, optional) – A data frame with past values if autoregressive methods are called via autoreg_dict parameter of greykite.algo.forecast.silverkite.SilverkiteForecast.py

  • new_external_regressor_df (pandas.DataFrame, optional) – Contains the regressors not already included in fut_df.

  • sim_num (int, optional, default 10) – The number of simulated series to be used in prediction.

  • include_err (bool, optional, default None) – Boolean to determine if errors are to be incorporated in the simulations. If None, it will be set to True if uncertainty is passed to the model and otherwise will be set to False

Returns

The same as the input dataframe with added columns for:

  1. The predicted response in value_col column.

  2. Quantile summary response in the f"{value_col}_quantile_summary" column.

  3. Error std in ERR_STD_COL column.

If value_col already appears in fut_df, it will be over-written.

Return type

pandas.DataFrame

simulate(fut_df, trained_model, past_df=None, new_external_regressor_df=None, include_err=True, time_features_ready=False, regressors_ready=False)

A function to simulate future series. If the fitted model supports uncertainty e.g. via uncertainty_dict, errors are incorporated into the simulations.

Parameters
  • fut_df (pandas.DataFrame) – The data frame which includes the timestamps for prediction and any regressors.

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast.

  • past_df (pandas.DataFrame, optional) – A data frame with past values if autoregressive methods are called via autoreg_dict parameter of greykite.algo.forecast.silverkite.SilverkiteForecast.py

  • new_external_regressor_df (pandas.DataFrame, optional) – Contains the regressors not already included in fut_df.

  • include_err (bool) – Boolean to determine if errors are to be incorporated in the simulations.

  • time_features_ready (bool) – Boolean to denote if time features are already given in df or not.

  • regressors_ready (bool) – Boolean to denote if regressors are already added to data (fut_df).

Returns

fut_df_sim – The same as input dataframe with an added column for the response. If value_col already appears in fut_df, it will be over-written.

Return type

pandas.DataFrame

simulate_multi(fut_df, trained_model, sim_num=10, past_df=None, new_external_regressor_df=None, include_err=None)

A function to simulate future series. If the fitted model supports uncertainty e.g. via uncertainty_dict, errors are incorporated into the simulations.

Parameters
  • fut_df (pandas.DataFrame) – The data frame which includes the timestamps for prediction and any regressors.

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast.

  • sim_num (int) – The number of simulated series (each of which has the same number of rows as fut_df) to be stacked up row-wise.

  • past_df (pandas.DataFrame, optional) – A data frame with past values if autoregressive methods are called via autoreg_dict parameter of greykite.algo.forecast.silverkite.SilverkiteForecast.py.

  • new_external_regressor_df (pandas.DataFrame, optional) – Contains the regressors not already included in fut_df.

  • include_err (bool, optional, default None) – Boolean to determine if errors are to be incorporated in the simulations. If None, it will be set to True if uncertainty is passed to the model and otherwise will be set to False.

Returns

fut_df_sim – Row-wise concatenation of sim_num dataframes, each the same as the input dataframe (fut_df) with an added column for the response and a new “sim_label” column to differentiate the simulations. The returned dataframe therefore has sim_num times the number of rows of fut_df.

If value_col already appears in fut_df, it will be over-written.

Return type

pandas.DataFrame
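The shape of simulate_multi's output can be sketched with plain pandas. This is structural only: the real function also fills in simulated response values for each copy.

```python
import pandas as pd

fut_df = pd.DataFrame({"ts": pd.date_range("2022-01-01", periods=2, freq="D")})
sim_num = 3

# sim_num copies of fut_df stacked row-wise, each tagged by "sim_label".
fut_df_sim = pd.concat(
    [fut_df.assign(sim_label=i) for i in range(sim_num)],
    ignore_index=True,
)
```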

greykite.algo.uncertainty.conditional.conf_interval.conf_interval(df, value_col, residual_col=None, conditional_cols=None, quantiles=[0.005, 0.025, 0.975, 0.995], quantile_estimation_method='normal_fit', sample_size_thresh=5, small_sample_size_method='std_quantiles', small_sample_size_quantile=0.95, min_admissible_value=None, max_admissible_value=None)[source]

A function to calculate confidence intervals (CIs) for the values given in value_col for each slice of data (given in conditional_cols), using approximate distributions estimated via estimate_empirical_distribution. The variability of the CIs comes either from value_col itself or, if provided, from residual_col.

We allow for calculating as many quantiles as needed (specified by quantiles), as opposed to only the two quantiles representing a typical CI.

Two options are available for calculating the quantiles of each slice:

  • empirical quantiles

  • a normal distribution fit

There are two main possibilities:

  • residual_col is not provided: the values in value_col are used directly to calculate quantiles, using the distribution of the values in each slice

  • residual_col is provided: we calculate quantiles for the residual distribution of each slice and then offset the quantiles by the value given in value_col. In that case we use a fixed mean of zero when constructing quantiles for the residuals. This is done so that the predicted values given in value_col are not perturbed, as they might come from a much more complex fitting model that takes into account many more variables than conditional_cols

Parameters
  • df (pandas.Dataframe) –

    The dataframe with the needed columns:

    • value_col,

    • conditional_cols,

    • residual_col (optional column)

  • value_col (str) – The column containing the values for the variable for which confidence interval is needed

  • residual_col (str) – If a residual column is given, quantiles will be built for the residual values and the interval is then offset using the value given in value_col itself

  • conditional_cols (list [str]) – These columns are used to slice the data first then calculate quantiles for each slice

  • quantiles (list [float]) – The quantiles calculated for each slice. These quantiles can then be used to construct the desired CIs. The default values [0.005, 0.025, 0.975, 0.995] can be used to construct 99 and 95 percent CIs.

  • quantile_estimation_method (str) –

    There are two options implemented for the quantile estimation method (conditional on slice):

    • ”normal_fit”: it uses the standard deviation of the values in each slice to compute normal distribution quantiles

    • ”ecdf”: it uses the values directly to calculate sample quantiles

  • sample_size_thresh (int) – The minimum sample size for each slice where we allow for using the conditional distribution (conditioned on the “conditional_cols” argument). If sample size for that slice is smaller than this, we fall back to a fallback method

  • small_sample_size_method (str) –

    The method to use for slices with small sample size

    • ”std_quantiles” method is implemented; it looks at the response std for each slice with sample size >= “sample_size_thresh” and takes the row whose std is closest to the “small_sample_size_quantile” quantile. It assigns that row to act as the fall-back for calculating confidence intervals.

  • min_admissible_value (float or int) – The lowest admissible value for the obtained CI limits; any value below this will be mapped back to this value.

  • max_admissible_value (float or int) – The highest admissible value for the obtained CI limits; any higher value will be mapped back to this value.

Returns

uncertainty_model

Dict with following items (main component is the predict function).

  • ”ecdf_df”: pandas.DataFrame

    ecdf_df generated by “estimate_empirical_distribution”

  • ”ecdf_df_overall”: pandas.DataFrame

    ecdf_df_overall generated by “estimate_empirical_distribution”

  • ”ecdf_df_fallback”: pandas.DataFrame

    ecdf_df_fallback, fallback data used to get the CI quantiles when the sample size for a slice is small or the slice is unobserved.

    • if small_sample_size_method = “std_quantiles”, we use std quantiles to pick a slice which has a std close to that quantile and fall-back to that slice.

    • otherwise we fallback to “ecdf_overall”

  • ”predict”: callable

    Can be applied to new_df to add quantiles. The returned dataframe includes an extra column named “{value_col}_quantile_summary”, as well as the input slice columns given in “conditional_cols”

Return type

dict
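The “normal_fit” option with a fixed mean of zero can be sketched with the standard library; the residual values and point forecast below are illustrative.

```python
from statistics import NormalDist, stdev

# Residuals for one slice (illustrative numbers).
residuals = [-1.2, 0.4, -0.3, 0.9, -0.5, 0.7]
sigma = stdev(residuals)  # std of the slice's residuals

# Normal quantiles with a fixed mean of zero, so the point
# forecast itself is not shifted, only offset by the quantiles.
quantiles = [0.025, 0.975]
offsets = [NormalDist(mu=0.0, sigma=sigma).inv_cdf(q) for q in quantiles]

point_forecast = 10.0  # illustrative value from value_col
ci = [point_forecast + o for o in offsets]
```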

greykite.algo.changepoint.adalasso.changepoints_utils.combine_detected_and_custom_trend_changepoints(detected_changepoint_dates, custom_changepoint_dates, min_distance=None, keep_detected=False)[source]

Adds custom trend changepoints to detected trend changepoints.

Compares the distance between custom changepoints and detected changepoints, and drops a detected changepoint or a custom changepoint depending on keep_detected if their distance is less than min_distance.

Parameters
  • detected_changepoint_dates (list) – A list of detected trend changepoints, parsable by pandas.to_datetime.

  • custom_changepoint_dates (list) – A list of additional custom trend changepoints, parsable by pandas.to_datetime.

  • min_distance (DateOffset, Timedelta, str or None, default None) – The minimum distance between detected changepoints and custom changepoints. If a detected changepoint and a custom changepoint have distance less than min_distance, either the detected changepoint or the custom changepoint will be dropped according to keep_detected. Does not compare the distance within detected changepoints or custom changepoints. Note: maximal unit is ‘D’, i.e., you may only use units no more than ‘D’ such as ‘10D’, ‘5H’, ‘100T’, ‘200S’. The reason is that ‘W’, ‘M’ or higher has either cycles or indefinite number of days, thus is not parsable by pandas as timedelta. For example, see pandas.tseries.frequencies.to_offset.

  • keep_detected (bool, default False) – When the distance of a detected changepoint and a custom changepoint is less than min_distance, whether to keep the detected changepoint or the custom changepoint.

Returns

combined_changepoint_dates – A list of combined changepoints in ascending order.

Return type

list
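The dropping rule above can be sketched with plain pandas; dates and min_distance are illustrative, and only the keep_detected=False branch is shown.

```python
import pandas as pd

detected = list(pd.to_datetime(["2020-01-10", "2020-03-01"]))
custom = list(pd.to_datetime(["2020-01-12"]))
min_distance = pd.to_timedelta("5D")

# keep_detected=False: drop any detected changepoint that lies within
# min_distance of some custom changepoint.
kept_detected = [
    d for d in detected
    if all(abs(d - c) >= min_distance for c in custom)
]
combined_changepoint_dates = sorted(kept_detected + custom)
```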

greykite.common.features.timeseries_lags.build_autoreg_df(value_col, lag_dict=None, agg_lag_dict=None, series_na_fill_func=<function <lambda>>)[source]
This function generates a function (“build_lags_func” in the returned dict) which, when called, builds a lag data frame and an aggregated lag data frame using the “build_lag_df” and “build_agg_lag_df” functions. Note: in the case of training, validation and testing (e.g. cross-validation) for forecasting, this function needs to be applied after the data split is done. This is especially important if “series_na_fill_func” uses future values in interpolation, as the default lambda s: s.bfill().ffill() does.

Parameters
  • value_col – str the column name for the values of interest

  • lag_dict

    Optional[dict]

    A dictionary which encapsulates the needed params to be passed to the function “build_lag_df” Expected items are:

    • ”max_order”: Optional[int]

      the max_order for creating lags

    • ”orders”: Optional[List[int]]

      the orders for which lag is needed

  • agg_lag_dict (Optional[dict]) –

    A dictionary which encapsulates the needed params to be passed to the function “build_agg_lag_df”. Expected items are:

    • ”orders_list”: List[List[int]]

      A list of lists of integers. Each int list is to be used as the orders of lags to be aggregated. See build_lag_df arguments for more details

    • ”interval_list”: List[tuple]

      A list of tuples, each of length 2. Each tuple is used to construct an aggregated lag using all orders within that range. See build_agg_lag_df arguments for more details

    • ”agg_func”: func (pd.DataFrame -> pd.DataFrame)

      The function used for aggregation in “build_agg_lag_df”. If this key is not passed, the default of “build_agg_lag_df” will be used

  • series_na_fill_func ((pd.Series -> pd.Series), default: lambda s: s.bfill().ffill()) – This function is used to fill in the missing data. The default works by first back-filling and then forward-filling. This function should not be applied to data before the CV split is done.

Returns

A dictionary with the following items:

  • ”build_lags_func”: func

    pd.DataFrame -> dict(lag_df=pd.DataFrame, agg_lag_df=pd.DataFrame). A function which takes a df (which must contain value_col) as input, calculates lag_df and agg_lag_df, and returns them

  • ”lag_col_names”: Optional[List[str]]

    The list of generated column names for the returned lag_df when “build_lags_func” is applied

  • ”agg_lag_col_names”: Optional[List[str]]

    The list of generated column names for the returned agg_lag_df when “build_lags_func” is applied

  • ”max_order”: int

    The maximum lag order needed in the calculation of “build_lags_func”

  • ”min_order”: int

    The minimum lag order needed in the calculation of “build_lags_func”

Return type

dict
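A sketch of what the returned “build_lags_func” produces for lag orders [1, 2], using the f"{value_col}_lag{order}" naming seen elsewhere in these docs (e.g. x_lag1). Structural only: the real function also applies series_na_fill_func and builds agg_lag_df.

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0], name="y")

# One shifted column per requested lag order.
lag_df = pd.DataFrame({f"y_lag{k}": s.shift(k) for k in (1, 2)})
```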

greykite.common.features.timeseries_lags.build_agg_lag_df(value_col, df=None, orders_list=[], interval_list=[], agg_func=<function mean>, agg_name='avglag', max_order=None)[source]

A function which returns a dataframe including aggregated (e.g. averaged) time series lags in the form of dataframe columns. By “aggregated lags”, we mean an aggregate of several lags using an aggregation function given in “agg_func”. The advantage of “aggregated lags” over regular lags is that we can aggregate (e.g. average) many lags in the past instead of using a large number of individual lags. This is useful in many applications and avoids over-fitting.

For a time series mathematically denoted by Y(t), one could consider the average lag processes as follows:

the average of last 3 values:

“avg(t) = (Y(t-1) + Y(t-2) + Y(t-3)) / 3”

the average of 7th, 14th and 21st lags:

“avg(t) = (Y(t-7) + Y(t-14) + Y(t-21)) / 3”

See following references:

Reza Hosseini et al. (2014) Non-linear time-varying stochastic models for agroclimate risk assessment, Environmental and Ecological Statistics https://link.springer.com/article/10.1007/s10651-014-0295-2

Reza Hosseini et al. (2017) Capturing the time-dependence in the precipitation process for weather risk assessment, Stochastic Environmental Research and Risk Assessment https://link.springer.com/article/10.1007/s00477-016-1285-8

Parameters
  • value_col – str the column name for the values of interest

  • df – Optional[pd.DataFrame] the data frame which includes the time series of interest

  • orders_list

    List[List[int]] a list of lists, each giving the lag orders to aggregate. For example if agg_func = np.mean and orders_list = [[1, 2, 3], [7, 14, 21]] then we construct two averaged lags:

    avg(t) = (Y(t-1) + Y(t-2) + Y(t-3)) / 3 and avg(t) = (Y(t-7) + Y(t-14) + Y(t-21)) / 3

  • interval_list

    List[tuple[int]] a list of (lag) intervals where interval is a tuple of length 2 with

    • first element denoting the lower bound and

    • second is the upper

    For example if interval_list = [(1, 3), (8, 11)] then we construct two “average lagged” variables:

    avg(t) = (Y(t-1) + Y(t-2) + Y(t-3)) / 3 and avg(t) = (Y(t-8) + Y(t-9) + Y(t-10) + Y(t-11)) / 4

  • agg_func – callable, default: np.mean the function used to aggregate the lag orders for each element of orders_list or interval_list. Typically this is an averaging function such as np.mean or np.median, but more sophisticated functions are allowed.

  • agg_name

    str, default: “avglag” the aggregate function name used in constructing the column names for the output data frame. For example if

    • value_col = “y”

    • orders = [7 , 14, 21]

    • agg_name = “avglag”

    then the column name appearing in the output data frame will be “y_avglag_7_14_21”.

  • max_order

    Optional[int] the maximum order of lags needed in the calculation of lag aggregates. This is usually calculated/inferred from orders_list and interval_list, unless max_order has already been pre-calculated before calling this function. Hence this argument is optional and is only included for computational efficiency gains.

Returns

dict dictionary with following items:

  • ”col_names”: List[str]

    the generated column names

  • ”agg_lag_df”: Optional[pd.DataFrame]

    a data frame with the average lag columns. The column names are constructed in a way that reflects what lags are averaged. For example if

    • value_col = “y”

    • agg_name = “avglag”

    • orders_list = [[1, 2, 3], [7, 14, 21]]

    Then the column names are “y_avglag_1_2_3”, “y_avglag_7_14_21” and if

    • interval_list = [(1, 3), (8, 11)]

    Then the column names are “y_avglag_1_to_3”, “y_avglag_8_to_11”
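The aggregated-lag columns can be computed by hand with pandas; a sketch of “y_avglag_7_14_21” (the real function also handles column naming and max_order bookkeeping).

```python
import numpy as np
import pandas as pd

y = pd.Series(np.arange(1.0, 31.0))  # y[t] = t + 1, for 30 periods

# Mean of lags 7, 14 and 21, i.e. (Y(t-7) + Y(t-14) + Y(t-21)) / 3.
y_avglag_7_14_21 = pd.concat(
    [y.shift(k) for k in (7, 14, 21)], axis=1
).mean(axis=1)
```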

class greykite.algo.forecast.silverkite.forecast_silverkite.SilverkiteForecast(constants: greykite.algo.forecast.silverkite.constants.silverkite_seasonality.SilverkiteSeasonalityEnumMixin = <greykite.algo.forecast.silverkite.constants.silverkite_constant.SilverkiteConstant object>)[source]
forecast(df, time_col, value_col, freq=None, origin_for_time_vars=None, extra_pred_cols=None, train_test_thresh=None, training_fraction=0.9, fit_algorithm='linear', fit_algorithm_params=None, daily_event_df_dict=None, fs_components_df=pd.DataFrame({'name': ['tod', 'tow', 'toy'], 'period': [24.0, 7.0, 1.0], 'order': [3, 3, 5], 'seas_names': ['daily', 'weekly', 'yearly']}), autoreg_dict=None, changepoints_dict=None, seasonality_changepoints_dict=None, changepoint_detector=None, min_admissible_value=None, max_admissible_value=None, uncertainty_dict=None, normalize_method=None, adjust_anomalous_dict=None, impute_dict=None, regression_weight_col=None, forecast_horizon=None, simulation_based=False)[source]

A function for forecasting. It captures growth, seasonality, holidays and other patterns. See “Capturing the time-dependence in the precipitation process for weather risk assessment” as a reference: https://link.springer.com/article/10.1007/s00477-016-1285-8

Parameters
  • df (pandas.DataFrame) – A data frame which includes the timestamp column as well as the value column.

  • time_col (str) – The column name in df representing time for the time series data. The time column can be anything that can be parsed by pandas DatetimeIndex.

  • value_col (str) – The column name which has the value of interest to be forecasted.

  • freq (str, optional, default None) – The intended timeseries frequency, DateOffset alias. See https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases. If None automatically inferred. This frequency will be passed through this function as a part of the trained model and used at predict time if needed. If data include missing timestamps, and frequency is monthly/annual, user should pass this parameter, as it cannot be inferred.

  • origin_for_time_vars (float, optional, default None) – The time origin used to create continuous variables for time. If None, uses the first record in df.

  • extra_pred_cols (list of str, default None) –

    Names of the extra predictor columns.

    If None, uses [“ct1”], a simple linear growth term.

    It can leverage regressors included in df and those generated by the other parameters. The following effects will not be modeled unless specified in extra_pred_cols:

    • included in df: e.g. macro-economic factors, related timeseries

    • from build_time_features_df: e.g. ct1, ct_sqrt, dow, …

    • from daily_event_df_dict: e.g. “events_India”, …

    The columns corresponding to the following parameters are included in the model without specification in extra_pred_cols. extra_pred_cols can be used to add interactions with these terms.

    • changepoints_dict: e.g. changepoint0, changepoint1, …

    • fs_components_df: e.g. sin2_dow, cos4_dow_weekly

    • autoreg_dict: e.g. x_lag1, x_avglag_2_3_4, y_avglag_1_to_5

    If a regressor is passed in df, it needs to be provided to the associated predict function:

    • predict_silverkite: via fut_df or new_external_regressor_df

    • silverkite.predict_n(_no_sim): via new_external_regressor_df

  • train_test_thresh (datetime.datetime, optional) – The threshold for the training/testing split, e.g. datetime.datetime(2019, 6, 30). Note that the final returned model is trained using all data. If None, the training split is based on training_fraction.

  • training_fraction (float, optional) – The fraction of data used for training (0.0 to 1.0) Used only if train_test_thresh is None. If this is also None or 1.0, then we skip testing and train on the entire dataset.

  • fit_algorithm (str, optional, default “linear”) –

    The type of predictive model used in fitting.

    See fit_model_via_design_matrix for available options and their parameters.

  • fit_algorithm_params (dict or None, optional, default None) – Parameters passed to the requested fit_algorithm. If None, uses the defaults in fit_model_via_design_matrix.

  • daily_event_df_dict (dict or None, optional, default None) –

    A dictionary of data frames, each representing events data for the corresponding key. The DataFrame has two columns:

    • The first column contains event dates. Must be in a format recognized by pandas.to_datetime. Must be at daily frequency for proper join. It is joined against the time in df, converted to a day: pd.to_datetime(pd.DatetimeIndex(df[time_col]).date).

    • the second column contains the event label for each date

    The column order is important; column names are ignored. The event dates must span their occurrences in both the training and future prediction period.

    During modeling, each key in the dictionary is mapped to a categorical variable named f"{EVENT_PREFIX}_{key}", whose value at each timestamp is specified by the corresponding DataFrame.

    For example, to manually specify a yearly event on September 1 during a training/forecast period that spans 2020-2022:

    daily_event_df_dict = {
        "custom_event": pd.DataFrame({
            "date": ["2020-09-01", "2021-09-01", "2022-09-01"],
            "label": ["is_event", "is_event", "is_event"]
        })
    }
    

    It’s possible to specify multiple events in the same df. Two events, "sep" and "oct" are specified below for 2020-2021:

    daily_event_df_dict = {
        "custom_event": pd.DataFrame({
            "date": ["2020-09-01", "2020-10-01", "2021-09-01", "2021-10-01"],
            "event_name": ["sep", "oct", "sep", "oct"]
        })
    }
    

    Use multiple keys if two events may fall on the same date. These events must be in separate DataFrames:

    daily_event_df_dict = {
        "fixed_event": pd.DataFrame({
            "date": ["2020-09-01", "2021-09-01", "2022-09-01"],
            "event_name": "fixed_event"
        }),
        "moving_event": pd.DataFrame({
            "date": ["2020-09-01", "2021-08-28", "2022-09-03"],
            "event_name": "moving_event"
        }),
    }
    

    The multiple event specification can be used even if events never overlap. An equivalent specification to the second example:

    daily_event_df_dict = {
        "sep": pd.DataFrame({
            "date": ["2020-09-01", "2021-09-01"],
            "event_name": "is_event"
        }),
        "oct": pd.DataFrame({
            "date": ["2020-10-01", "2021-10-01"],
            "event_name": "is_event"
        }),
    }
    

    Note

    The events you want to use must be specified in extra_pred_cols. These take the form: f"{EVENT_PREFIX}_{key}", where EVENT_PREFIX is the constant.

    Do not use EVENT_DEFAULT in the second column. This is reserved to indicate dates that do not correspond to an event.

  • fs_components_df (pandas.DataFrame or None, optional) –

    A dataframe with information about fourier series generation. Must contain columns with following names:

    • ”name”: name of the timeseries feature, e.g. “tod”, “tow” etc.

    • ”period”: period of the fourier series, optional, default 1.0

    • ”order”: order of the fourier series, optional, default 1.0

    • ”seas_names”: season names corresponding to the name (e.g. “daily”, “weekly” etc.), optional.

    Default includes daily, weekly and yearly seasonality.

  • autoreg_dict (dict or str or None, optional, default None) –

    If a dict: A dictionary with arguments for build_autoreg_df. That function’s parameter value_col is inferred from the input of current function self.forecast. Other keys are:

    "lag_dict" : dict or None "agg_lag_dict" : dict or None "series_na_fill_func" : callable

    If a str: The string will represent a method and a dictionary will be constructed using that str. Currently only implemented method is “auto” which uses __get_default_autoreg_dict to create a dictionary. See more details for above parameters in build_autoreg_df.
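    For illustration, a hand-written autoreg_dict with the keys listed above (the orders, intervals and fill function are made up):

    ```python
    # Illustrative configuration; see build_autoreg_df for accepted keys.
    autoreg_dict = {
        "lag_dict": {"orders": [1, 2, 3]},
        "agg_lag_dict": {
            "orders_list": [[7, 14, 21]],
            "interval_list": [(1, 7)],
        },
        "series_na_fill_func": lambda s: s.bfill().ffill(),
    }
    ```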

  • changepoints_dict (dict or None, optional, default None) –

    Specifies the changepoint configuration.

    ”method”: str

    The method to locate changepoints. Valid options:

    • ”uniform”. Places n_changepoints evenly spaced changepoints to allow growth to change.

    • ”custom”. Places changepoints at the specified dates.

    • ”auto”. Automatically detects change points. For configuration, see find_trend_changepoints

    Additional keys to provide parameters for each particular method are described below.

    ”continuous_time_col”: str, optional

    Column to apply growth_func to in order to generate changepoint features. Typically, this should match the growth term in the model.

    ”growth_func”: Optional[func]

    Growth function (scalar -> scalar). Changepoint features are created by applying growth_func to continuous_time_col with offsets. If None, uses the identity function, i.e. continuous_time_col is used directly as the growth term.

    If changepoints_dict[“method”] == “uniform”, this additional key is required:

    "n_changepoints": int

    number of changepoints to evenly space across training period

    If changepoints_dict[“method”] == “custom”, this other key is required:

    "dates": Iterable[Union[int, float, str, datetime]]

    Changepoint dates. Must be parsable by pd.to_datetime. Changepoints are set at the closest time on or after these dates in the dataset.

    If changepoints_dict[“method”] == “auto”, the keys that match the parameters in find_trend_changepoints (except df, time_col and value_col) are optional. Extra keys also include “dates”, “combine_changepoint_min_distance” and “keep_detected” to specify additional custom trend changepoints. These three parameters correspond to the parameters “custom_changepoint_dates”, “min_distance” and “keep_detected” in combine_detected_and_custom_trend_changepoints.
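    For illustration, one configuration per method described above (the dates and counts are made up):

    ```python
    # "uniform": evenly spaced changepoints across the training period.
    uniform_cfg = {"method": "uniform", "n_changepoints": 20}

    # "custom": changepoints at user-specified dates.
    custom_cfg = {"method": "custom", "dates": ["2020-01-01", "2020-07-01"]}

    # "auto": automatic detection with default parameters.
    auto_cfg = {"method": "auto"}
    ```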

  • seasonality_changepoints_dict (dict or None, default None) – The parameter dictionary for seasonality change point detection. Parameters are in find_seasonality_changepoints. Note df, time_col, value_col and trend_changepoints are auto populated, and do not need to be provided.

  • changepoint_detector (ChangepointDetector or None, default None) – The ChangepointDetector class ChangepointDetector. This is specifically for forecast_simple_silverkite to pass the ChangepointDetector class for plotting purposes, in case that users use forecast_simple_silverkite with changepoints_dict["method"] == "auto". The trend change point detection has to be run there to include possible interaction terms, so we need to pass the detection result from there to include in the output.

  • min_admissible_value (float or None, optional, default None) – The minimum admissible value to return during prediction. If None, no limit is applied.

  • max_admissible_value (float or None, optional, default None) – The maximum admissible value to return during prediction. If None, no limit is applied.

  • uncertainty_dict (dict or None, optional, default None) –

    How to fit the uncertainty model. A dictionary with keys:
    "uncertainty_method"str

    The title of the method. Only “simple_conditional_residuals” is implemented in fit_ml_model which calculates CIs using residuals

    "params"dict

    A dictionary of parameters needed for the requested uncertainty_method. For example, for uncertainty_method="simple_conditional_residuals", see the parameters of conf_interval:

    • "conditional_cols"

    • "quantiles"

    • "quantile_estimation_method"

    • "sample_size_thresh"

    • "small_sample_size_method"

    • "small_sample_size_quantile"

    If None, no uncertainty intervals are calculated.
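
A minimal sketch of such a configuration follows; the parameter values are illustrative assumptions, not prescribed defaults:

```python
# Illustrative uncertainty configuration for "simple_conditional_residuals".
# All parameter values are example choices for demonstration.
uncertainty_dict = {
    "uncertainty_method": "simple_conditional_residuals",
    "params": {
        "conditional_cols": ["dow"],      # condition residuals on day of week
        "quantiles": [0.025, 0.975],      # 95% prediction interval
        "quantile_estimation_method": "normal_fit",
        "sample_size_thresh": 5,
        "small_sample_size_method": "std_quantiles",
        "small_sample_size_quantile": 0.98,
    },
}
```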

  • normalize_method (str or None, default None) – If a string is provided, it will be used as the normalization method in normalize_df, passed via the argument method. Available options are: "min_max", "statistical". If None, no normalization will be performed. See that function for more details.

  • adjust_anomalous_dict (dict or None, default None) –

    If not None, a dictionary with the following items:

    • "func": callable

      A function to perform adjustment of anomalous data, with the following signature:

      adjust_anomalous_dict["func"](
          df=df,
          time_col=time_col,
          value_col=value_col,
          **adjust_anomalous_dict["params"]
      ) -> {"adjusted_df": adjusted_df, ...}
      
      
    • "params": dict

      The extra parameters to be passed to the function above.

  • impute_dict (dict or None, default None) –

    If not None, a dictionary with the following items:

    • "func": callable

      A function to perform imputations, with the following signature:

      impute_dict["func"](
          df=df,
          value_col=value_col,
          **impute_dict["params"]
      ) -> {"df": imputed_df, ...}
      
    • "params": dict

      The extra parameters to be passed to the function above.
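
For instance, a forward-fill imputer matching the documented signature might look like this. The helper function and its limit parameter are hypothetical, written only to illustrate the expected shape:

```python
import pandas as pd

def ffill_impute(df, value_col, limit=None):
    """Hypothetical imputer: forward-fills value_col and returns the
    imputed dataframe under the documented "df" key."""
    imputed = df.copy()
    imputed[value_col] = imputed[value_col].ffill(limit=limit)
    return {"df": imputed}

impute_dict = {
    "func": ffill_impute,
    "params": {"limit": 2},  # fill at most 2 consecutive missing values
}

df = pd.DataFrame({"ts": pd.date_range("2020-01-01", periods=5),
                   "y": [1.0, None, None, None, 5.0]})
result = impute_dict["func"](df=df, value_col="y", **impute_dict["params"])
# With limit=2, the third consecutive gap remains missing.
```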

  • regression_weight_col (str or None, default None) – The column name for the weights to be used in weighted regression version of applicable machine-learning models.

  • forecast_horizon (int or None, default None) – The number of periods for which a forecast is needed. Note that this is only used in deciding what parameters should be used for certain components, e.g. autoregression, if automatic methods are requested. While the forecast horizon at prediction time could differ from this value, ideally they should be the same.

  • simulation_based (bool, default False) – Whether future predictions are to be made via simulations. Note that this is only used in deciding what parameters should be used for certain components, e.g. autoregression, if automatic methods are requested. However, the auto-settings and the prediction settings regarding the use of simulations should match.

Returns

trained_model – A dictionary that includes the fitted model from the function fit_ml_model_with_evaluation. The keys are:

df_dropna: pandas.DataFrame

The df with NAs dropped.

df: pandas.DataFrame

The original df.

num_training_points: int

The number of training points.

features_df: pandas.DataFrame

The df with augmented time features.

min_timestamp: pandas.Timestamp

The minimum timestamp in data.

max_timestamp: pandas.Timestamp

The maximum timestamp in data.

freq: str

The data frequency.

inferred_freq: str

The data frequency inferred from data.

inferred_freq_in_secs: float

The data frequency inferred from data in seconds.

inferred_freq_in_days: float

The data frequency inferred from data in days.

time_col: str

The time column name.

value_col: str

The value column name.

origin_for_time_vars: float

The first time stamp converted to a float number.

fs_components_df: pandas.DataFrame

The dataframe that specifies the seasonality Fourier configuration.

autoreg_dict: dict

The dictionary that specifies the autoregression configuration.

normalize_method: str

The normalization method. See the function input parameter normalize_method.

daily_event_df_dict: dict

The dictionary that specifies daily events configuration.

changepoints_dict: dict

The dictionary that specifies changepoints configuration.

changepoint_values: list [float]

The list of changepoints in continuous time values.

normalized_changepoint_values: list [float]

The list of changepoints in continuous time values normalized to 0 to 1.

continuous_time_col: str

The continuous time column name in features_df.

growth_func: func

The growth function used in changepoints; None means linear growth (identity function).

fs_func: func

The function used to generate Fourier series for seasonality.

has_autoreg_structure: bool

Whether the model has autoregression structure.

autoreg_func: func

The function to generate autoregression columns.

min_lag_order: int

Minimal lag order in autoregression.

max_lag_order: int

Maximal lag order in autoregression.

uncertainty_dict: dict

The dictionary that specifies uncertainty model configuration.

pred_cols: list [str]

List of predictor names.

last_date_for_fit: str or pandas.Timestamp

The last timestamp used for fitting.

trend_changepoint_dates: list [pandas.Timestamp]

List of trend changepoints.

changepoint_detector: class

The ChangepointDetector class used to detect trend changepoints.

seasonality_changepoint_dates: list [pandas.Timestamp]

List of seasonality changepoints.

seasonality_changepoint_result: dict

The seasonality changepoint detection results.

fit_algorithm: str

The algorithm used to fit the model.

fit_algorithm_params: dict

The dictionary of parameters for fit_algorithm.

adjust_anomalous_info: dict

A dictionary that has anomaly adjustment results.

impute_info: dict

A dictionary that has the imputation results.

forecast_horizon: int

The forecast horizon in steps.

forecast_horizon_in_days: float

The forecast horizon in days.

forecast_horizon_in_timedelta: datetime.timedelta

The forecast horizon in timedelta.

simulation_based: bool

Whether to use simulation in prediction with autoregression terms.

Return type

dict

predict_no_sim(fut_df, trained_model, past_df=None, new_external_regressor_df=None, time_features_ready=False, regressors_ready=False)[source]

Performs predictions for the dates in fut_df. If extra_pred_cols refers to a column in df, either fut_df or new_external_regressor_df must contain the regressors.

Parameters
  • fut_df (pandas.DataFrame) – The data frame which includes the timestamps for prediction and any regressors.

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast.

  • past_df (pandas.DataFrame, optional) – A data frame with past values if autoregressive methods are called via autoreg_dict parameter of greykite.algo.forecast.silverkite.SilverkiteForecast.py.

  • new_external_regressor_df (pandas.DataFrame, optional) – Contains the regressors not already included in fut_df.

  • time_features_ready (bool) – Boolean to denote if time features are already given in df or not.

  • regressors_ready (bool) – Boolean to denote if regressors are already added to data (fut_df).

Returns

The same as input dataframe with an added column for the response. If value_col already appears in fut_df, it will be over-written. If uncertainty_dict is provided as input, it will also contain a {value_col}_quantile_summary column.

Return type

pandas.DataFrame

predict_n_no_sim(fut_time_num, trained_model, freq, new_external_regressor_df=None, time_features_ready=False, regressors_ready=False)[source]

Forecasts a given number of future periods without simulation. It accepts extra regressors (extra_pred_cols) originally in df via new_external_regressor_df.

Parameters
  • fut_time_num (int) – number of needed future values

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast

  • freq (str) – Frequency of future predictions. Accepts any valid frequency for pd.date_range.

  • new_external_regressor_df (pandas.DataFrame or None) – Contains the extra regressors if specified.

  • time_features_ready (bool) – Boolean to denote if time features are already given in df or not.

  • regressors_ready (bool) – Boolean to denote if regressors are already added to data (fut_df).

Returns

fut_df – The same as the return of predict_no_sim: a dataframe with forecasts for fut_time_num future periods.

Return type

pandas.DataFrame

simulate(fut_df, trained_model, past_df=None, new_external_regressor_df=None, include_err=True, time_features_ready=False, regressors_ready=False)[source]

A function to simulate future series. If the fitted model supports uncertainty e.g. via uncertainty_dict, errors are incorporated into the simulations.

Parameters
  • fut_df (pandas.DataFrame) – The data frame which includes the timestamps for prediction and any regressors.

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast.

  • past_df (pandas.DataFrame, optional) – A data frame with past values if autoregressive methods are called via autoreg_dict parameter of greykite.algo.forecast.silverkite.SilverkiteForecast.py

  • new_external_regressor_df (pandas.DataFrame, optional) – Contains the regressors not already included in fut_df.

  • include_err (bool) – Boolean to determine if errors are to be incorporated in the simulations.

  • time_features_ready (bool) – Boolean to denote if time features are already given in df or not.

  • regressors_ready (bool) – Boolean to denote if regressors are already added to data (fut_df).

Returns

fut_df_sim – The same as input dataframe with an added column for the response. If value_col already appears in fut_df, it will be over-written.

Return type

pandas.DataFrame

simulate_multi(fut_df, trained_model, sim_num=10, past_df=None, new_external_regressor_df=None, include_err=None)[source]

A function to simulate future series. If the fitted model supports uncertainty e.g. via uncertainty_dict, errors are incorporated into the simulations.

Parameters
  • fut_df (pandas.DataFrame) – The data frame which includes the timestamps for prediction and any regressors.

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast.

  • sim_num (int) – The number of simulated series (each of which has the same number of rows as fut_df) to be stacked up row-wise.

  • past_df (pandas.DataFrame, optional) – A data frame with past values if autoregressive methods are called via autoreg_dict parameter of greykite.algo.forecast.silverkite.SilverkiteForecast.py.

  • new_external_regressor_df (pandas.DataFrame, optional) – Contains the regressors not already included in fut_df.

  • include_err (bool, optional, default None) – Boolean to determine if errors are to be incorporated in the simulations. If None, it will be set to True if uncertainty is passed to the model and otherwise will be set to False.

Returns

fut_df_sim – Row-wise concatenation of dataframes, each the same as the input dataframe (fut_df) with an added column for the response and a new column "sim_label" to differentiate the simulations. The number of rows of the returned dataframe is sim_num times the number of rows of fut_df.

If value_col already appears in fut_df, it will be over-written.

Return type

pandas.DataFrame

predict_via_sim(fut_df, trained_model, past_df=None, new_external_regressor_df=None, sim_num=10, include_err=None)[source]

Performs predictions and calculates uncertainty using multiple simulations.

Parameters
  • fut_df (pandas.DataFrame) – The data frame which includes the timestamps for prediction and possibly regressors.

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast

  • past_df (pandas.DataFrame, optional) – A data frame with past values if autoregressive methods are called via autoreg_dict parameter of greykite.algo.forecast.silverkite.SilverkiteForecast.py

  • new_external_regressor_df (pandas.DataFrame, optional) – Contains the regressors not already included in fut_df.

  • sim_num (int, optional, default 10) – The number of simulated series to be used in prediction.

  • include_err (bool, optional, default None) – Boolean to determine if errors are to be incorporated in the simulations. If None, it will be set to True if uncertainty is passed to the model and otherwise will be set to False

Returns

The same as the input dataframe with added columns for

  1. The predicted response in the value_col column.

  2. Quantile summary response in the f"{value_col}_quantile_summary" column.

  3. Error std in ERR_STD_COL column.

If value_col already appears in fut_df, it will be over-written.

Return type

pandas.DataFrame

predict_n_via_sim(fut_time_num, trained_model, freq, new_external_regressor_df=None, sim_num=10, include_err=None)[source]

This forecast function constructs its predictions using multiple simulations from the fitted series. The past_df used in predict_silverkite_via_sim is set to the training data, which is available in trained_model. It accepts extra regressors (extra_pred_cols) originally in df via new_external_regressor_df.

Parameters
  • fut_time_num (int) – number of needed future values

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast

  • freq (str) – Frequency of future predictions. Accepts any valid frequency for pd.date_range.

  • new_external_regressor_df (pandas.DataFrame or None) – Contains the extra regressors if specified.

  • sim_num (int, optional, default 10) – The number of simulated series to be used in prediction.

  • include_err (bool, optional, default None) – Boolean to determine if errors are to be incorporated in the simulations. If None, it will be set to True if uncertainty is passed to the model and otherwise will be set to False

Returns

fut_df – A dataframe with predictions given in value_col

Return type

pandas.DataFrame

predict(fut_df, trained_model, freq=None, past_df=None, new_external_regressor_df=None, sim_num=10, include_err=None, force_no_sim=False, na_fill_func=<function SilverkiteForecast.<lambda>>)[source]

Performs predictions using the silverkite model. It determines whether the prediction should be simulation-based and then predicts using that setting. Here is the logic for determining if simulations are needed:

  • If the model is not autoregressive, then clearly no simulations are needed.

  • If the model is autoregressive but the minimum lag appearing in the model is larger than the forecast horizon, then simulations are not needed. This is because the lags can be calculated fully without predicting the future.

The user can override the above behavior and force no simulations using the force_no_sim argument, in which case some lags will be imputed. This option should not be used by most users. Scenarios where an advanced user might want it are (a) when min_lag_order >= forecast_horizon does not hold strictly but nearly holds, or (b) when fast predictions are needed. In that case the returned predictions could correspond to an approximation of a model without autoregression.
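
The decision above can be sketched as follows. This is a simplified illustration of the stated logic, not the library's actual implementation:

```python
def needs_simulation(has_autoreg_structure, min_lag_order, forecast_horizon,
                     force_no_sim=False):
    """Simplified sketch of when simulation-based prediction is needed."""
    if force_no_sim:
        return False   # user override: non-available lags will be imputed
    if not has_autoreg_structure:
        return False   # no autoregression, nothing to simulate
    if min_lag_order >= forecast_horizon:
        return False   # all lags fall within observed data
    return True
```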

Parameters
  • fut_df (pandas.DataFrame) – The data frame which includes the timestamps for prediction and possibly regressors.

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast

  • freq (str, optional, default None) – Timeseries frequency, DateOffset alias. See https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases for the allowed strings. If None, it is extracted from trained_model input.

  • past_df (pandas.DataFrame or None, default None) – A data frame with past values if autoregressive methods are called via autoreg_dict parameter of greykite.algo.forecast.silverkite.SilverkiteForecast.py

  • new_external_regressor_df (pandas.DataFrame or None, default None) – Contains the regressors not already included in fut_df.

  • sim_num (int, default 10) – The number of simulated series to be used in prediction.

  • include_err (bool, optional, default None) – Boolean to determine if errors are to be incorporated in the simulations. If None, it will be set to True if uncertainty is passed to the model and otherwise will be set to False

  • force_no_sim (bool, default False) – If True, prediction with no simulations is forced. This can be useful when speed is of concern or for validation purposes. In this case, the potential non-available lags will be imputed. Most users should not set this to True as the consequences could be hard to quantify.

  • na_fill_func (callable (pd.DataFrame -> pd.DataFrame)) –

    default:

    lambda df: df.interpolate().bfill().ffill()
    

    A function which interpolates missing values in a dataframe. It is mainly invoked when there is a gap between the timestamps in fut_df, i.e. when the user wants to predict a period that does not immediately follow the training period. To fill the gaps, the regressors need to be interpolated/filled. The default works by first interpolating the continuous variables, then back-filling and forward-filling the categorical variables.
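
The documented default fill can be written out as a named function to see its effect on a small numeric example (the dataframe below is illustrative):

```python
import pandas as pd

def default_na_fill(df):
    # Interpolate continuous values, then back-fill and forward-fill
    # whatever remains (e.g. leading/trailing gaps).
    return df.interpolate().bfill().ffill()

df = pd.DataFrame({"x": [None, 2.0, None, 4.0]})
filled = default_na_fill(df)
# Leading gap is back-filled; interior gap is linearly interpolated.
```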

Returns

fut_df – A dataframe with the forecasts with the following (potential) columns:

  1. A time column with the column name being trained_model["time_col"]

  2. The predicted response in value_col column.

  3. Quantile summary response in the f"{value_col}_quantile_summary" column. This column only appears if the model includes uncertainty.

  4. Error std in ERR_STD_COL column. This column only appears if the model includes uncertainty.

If value_col already appears in fut_df, it will be over-written.

Return type

pandas.DataFrame

predict_n(fut_time_num, trained_model, freq=None, past_df=None, new_external_regressor_df=None, sim_num=10, include_err=None, force_no_sim=False, na_fill_func=<function SilverkiteForecast.<lambda>>)[source]

This forecast function forecasts a number of periods into the future. It determines whether the prediction should be simulation-based and then predicts using that setting. Currently, if the silverkite model uses autoregression, simulation-based predictions/CIs are used.

Parameters
  • fut_time_num (int) – number of needed future values

  • trained_model (dict) – A fitted silverkite model which is the output of self.forecast

  • freq (str, optional, default None) – Timeseries frequency, DateOffset alias. See https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases for the allowed frequencies. If None, it is extracted from trained_model input.

  • new_external_regressor_df (pandas.DataFrame or None) – Contains the extra regressors if specified.

  • sim_num (int, optional, default 10) – The number of simulated series to be used in prediction.

  • include_err (bool or None, default None) – Boolean to determine if errors are to be incorporated in the simulations. If None, it will be set to True if uncertainty is passed to the model and otherwise will be set to False

  • force_no_sim (bool, default False) – If True, prediction with no simulations is forced. This can be useful when speed is of concern or for validation purposes.

  • na_fill_func (callable (pd.DataFrame -> pd.DataFrame)) –

    default:

    lambda df: df.interpolate().bfill().ffill()
    

    A function which interpolates missing values in a dataframe. It is mainly invoked when there is a gap between the timestamps. To fill the gaps, the regressors need to be interpolated/filled. The default works by first interpolating the continuous variables, then back-filling and forward-filling the categorical variables.

Returns

fut_df – A dataframe with predictions given in value_col

Return type

pandas.DataFrame

partition_fut_df(fut_df, trained_model, freq, na_fill_func=<function SilverkiteForecast.<lambda>>)[source]

This function takes a dataframe fut_df, which includes the timestamps to forecast, and a trained_model returned by forecast, and decomposes fut_df into dataframes reflecting whether the timestamps fall before, during, or after the training period. It also determines whether the future timestamps after the training period immediately follow the last training timestamp or whether there is a gap; in the latter case, it creates an expanded dataframe which includes the missing timestamps as well. If fut_df includes extra columns (e.g. regressor columns), this function interpolates those columns.

Parameters
  • fut_df (pandas.DataFrame) – The data frame which includes the timestamps for prediction and possibly regressors. Note that the timestamp column in fut_df must be the same as trained_model["time_col"]. We assume fut_df[time_col] is pandas.datetime64 type.

  • trained_model (dict) – A fitted silverkite model which is the output of forecast

  • freq (str) – Timeseries frequency, DateOffset alias. See https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases for the allowed frequencies.

  • na_fill_func (callable (pd.DataFrame -> pd.DataFrame)) –

    default:

    lambda df: df.interpolate().bfill().ffill()
    

    A function which interpolates missing values in a dataframe. It is mainly invoked when there is a gap between the timestamps. To fill the gaps, the regressors need to be interpolated/filled. The default works by first interpolating the continuous variables, then back-filling and forward-filling the categorical variables.

Returns

result – A dictionary with the following items:

  • "fut_freq_in_secs": float

    The inferred frequency in fut_df

  • "training_freq_in_secs": float

    The inferred frequency in training data

  • "index_before_training": list [bool]

    A boolean list to determine which rows of fut_df include a time which is before the training start.

  • "index_within_training": list [bool]

    A boolean list to determine which rows of fut_df include a time which is during the training period.

  • "index_after_training": list [bool]

    A boolean list to determine which rows of fut_df include a time which is after the training end date.

  • "fut_df_before_training": pandas.DataFrame

    A partition of fut_df with timestamps before the training start date

  • "fut_df_within_training": pandas.DataFrame

    A partition of fut_df with timestamps during the training period

  • "fut_df_after_training": pandas.DataFrame

    A partition of fut_df with timestamps after the training end date

  • "fut_df_gap": pandas.DataFrame or None

    If there is a gap between training end date and the first timestamp after the training end date in fut_df, this dataframe can fill the gap between the two. In case fut_df includes extra columns as well, the values for those columns will be filled using na_fill_func.

  • "fut_df_after_training_expanded": pandas.DataFrame

    If there is a gap between training end date and the first timestamp after the training end date in fut_df, this dataframe will include the data for the gaps (fut_df_gap) as well as fut_df_after_training.

  • "index_after_training_original": list [bool]

    A boolean list to determine which rows of fut_df_after_training_expanded correspond to raw data passed by user which are after training end date, appearing in fut_df. Note that this partition corresponds to fut_df_after_training which is the subset of data in fut_df provided by user and also returned by this function.

  • "missing_periods_num": int

    Number of missing timestamps between the last date of training and first date in fut_df appearing after the training end date

  • "inferred_forecast_horizon": int

    This is the inferred forecast horizon from fut_df, defined as the distance between the training end date and the last date appearing in fut_df. Note that this value can be smaller or larger than the number of rows of fut_df. It is calculated by adding the number of potentially missing timestamps and the number of time periods appearing after the training end point. Also note that if there are no timestamps after the training end point in fut_df, this value will be zero.

  • "forecast_partition_summary": dict

    A dictionary which includes the size of various partitions of fut_df as well as the missing timestamps if needed. The dictionary keys are as follows:

    • "len_before_training": the number of time periods before training start

    • "len_within_training": the number of time periods within training

    • "len_after_training": the number of time periods after training

    • "len_gap": the number of missing time periods between training data and future time stamps in fut_df

Return type

dict
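
The three index masks in the returned dictionary can be illustrated with plain timestamps. This is a simplified sketch of the partitioning idea only; partition_fut_df itself operates on the trained model and pandas objects, and the training window below is an assumed example:

```python
from datetime import datetime

# Assumed training window for this illustration.
train_start = datetime(2020, 1, 1)
train_end = datetime(2020, 1, 10)

fut_times = [datetime(2019, 12, 31),  # before training start
             datetime(2020, 1, 5),    # within training period
             datetime(2020, 1, 12)]   # after training end

index_before_training = [t < train_start for t in fut_times]
index_within_training = [train_start <= t <= train_end for t in fut_times]
index_after_training = [t > train_end for t in fut_times]
```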