Model Catalog Overview#
Source Files
- `twiga/models/baseline/` - no-training baselines
- `twiga/models/ml/` - tree-based and linear models
- `twiga/models/nn/` - neural network models
- `twiga/models/nn/rnn_model.py` - `RNNModel` / `RNNConfig`
- `twiga/models/nn/prob/rnn_parametric.py` - RNN parametric distribution wrappers
- `twiga/models/nn/prob/rnn_qr.py` - `RNNQR` / `RNNFPQR`
- `twiga/forecaster/registry.py` - model registry
Twiga provides forecasting models across three domains: baseline (parameter-free reference models), ml (gradient-boosted trees, random forests, linear models, and NGBoost), and nn (neural networks). All models share a common interface and are loaded dynamically through the model registry.
A single `TwigaForecaster` trains, tunes, evaluates, and backtests any model in the catalog through an identical API. Switching from a gradient-boosted tree to a neural network means changing one config object, not rewriting training code. The entire workflow (feature engineering, training, hyperparameter optimisation, backtesting, and calibration) is unified in one interface with no library boundaries to cross.
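The config-driven design can be illustrated with a hypothetical miniature (the class and function names here are illustrative, not the real twiga API): each model is constructed from a config object, exposes the same `fit`/`forecast` surface, and a registry resolves the config type to the model class, so swapping models means swapping only the config.

```python
class NaiveConfig:
    """Illustrative config: forecast by repeating the last value."""

class MeanConfig:
    """Illustrative config: forecast the training mean."""

class NaiveModel:
    def __init__(self, config):
        self.config = config
    def fit(self, y):
        self.last_ = y[-1]
        return self
    def forecast(self, horizon):
        return [self.last_] * horizon

class MeanModel:
    def __init__(self, config):
        self.config = config
    def fit(self, y):
        self.mean_ = sum(y) / len(y)
        return self
    def forecast(self, horizon):
        return [self.mean_] * horizon

# The registry maps config types to model classes, so driver code
# never branches on the concrete model.
REGISTRY = {NaiveConfig: NaiveModel, MeanConfig: MeanModel}

def run(config, y, horizon):
    model = REGISTRY[type(config)](config).fit(y)
    return model.forecast(horizon)
```

Swapping `NaiveConfig()` for `MeanConfig()` in the `run` call is the entire model change; the training and forecasting code is untouched.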
The tables below list the base model configurations. In addition, every neural architecture (MLPF, MLPGAM, MLPGAF, N-HiTS, RNN) has dedicated probabilistic config classes per distribution family (Normal, Laplace, LogNormal, Gamma, Beta, StudentT, QR, FPQR, CRC), with two exceptions: MLPF has no Student-t variant and RNN has no CRC variant. See Neural Network Models for the full matrix.
Available Models#
Baseline Domain#

| Model | Name | Domain | Type | Config Class | Description |
|---|---|---|---|---|---|
| Naive | `naive` | baseline | Point | `NAIVEConfig` | Per-window last value, training mean, or zero |
| Seasonal Naive | `seasonalnaive` | baseline | Point | `SEASONALNAIVEConfig` | Repeats value from m steps ago (daily/weekly) |
| Window Average | `windowaverage` | baseline | Point | `WINDOWAVERAGEConfig` | Local mean over the last w steps |
| Drift | `drift` | baseline | Point | `DRIFTConfig` | Linear extrapolation of per-window trend |
| Context Parrot | `contextparrot` | baseline | Point | `CONTEXTPARROTConfig` | Repeats the last observed context window as the forecast |
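The baseline rules are simple enough to state directly. A minimal sketch of three of them in plain Python (illustrative definitions matching the descriptions above, not the twiga implementations):

```python
def naive(context, horizon):
    # Naive: repeat the last observed value across the horizon.
    return [context[-1]] * horizon

def seasonal_naive(context, horizon, m):
    # Seasonal Naive: repeat the value observed m steps earlier.
    return [context[-m + (h % m)] for h in range(horizon)]

def drift(context, horizon):
    # Drift: extrapolate the straight line through the first and
    # last observations of the context window.
    slope = (context[-1] - context[0]) / (len(context) - 1)
    return [context[-1] + (h + 1) * slope for h in range(horizon)]
```

Despite their simplicity, these references are what the skill scores of every ML/NN model are measured against.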
ML Domain#

| Model | Name | Domain | Type | Config Class | Description |
|---|---|---|---|---|---|
| CatBoost | `catboost` | ml | Point | `CATBOOSTConfig` | Gradient boosting with ordered boosting |
| XGBoost | `xgboost` | ml | Point | `XGBOOSTConfig` | Gradient boosting with regularization |
| LightGBM | `lightgbm` | ml | Point | `LIGHTGBMConfig` | Histogram-based gradient boosting |
| Random Forest | `randomforest` | ml | Point | `RANDOMFORESTConfig` | Bagged decision trees, native multi-output |
| Linear Regression | `lineareg` | ml | Point | `LINEAREGConfig` | OLS linear model |
| Gaussian CatBoost | `gausscatboost` | ml | Probabilistic | `GAUSSCATBOOSTConfig` | CatBoost with Gaussian output (mean + sigma) |
| NGBoost Normal | `ngboostnormal` | ml | Probabilistic | `NGBOOSTNORMALConfig` | Natural gradient boosting, Gaussian distribution |
| NGBoost LogNormal | `ngboostlognormal` | ml | Probabilistic | `NGBOOSTLOGNORMALConfig` | Natural gradient boosting, LogNormal (positive targets) |
| NGBoost Exponential | `ngboostexponential` | ml | Probabilistic | `NGBOOSTEXPONENTIALConfig` | Natural gradient boosting, Exponential distribution |
| QR CatBoost | `qrcatboost` | ml | Quantile | `QRCATBOOSTConfig` | CatBoost with multi-quantile loss |
| QR XGBoost | `qrxgboost` | ml | Quantile | `QRXGBOOSTConfig` | XGBoost with quantile regression |
| QR LightGBM | `qrlightgbm` | ml | Quantile | `QRLIGHTGBMConfig` | LightGBM with quantile regression |
| QR Random Forest | `qrrandomforest` | ml | Quantile | `QRRANDOMFORESTConfig` | Single-forest quantile regression, any quantile at inference |
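The quantile models are trained on the pinball (quantile) loss, the standard objective whose minimiser is the conditional quantile. As a reference, for a single observation and quantile level q:

```python
def pinball_loss(y_true: float, y_pred: float, q: float) -> float:
    # Under-prediction is penalised by q, over-prediction by (1 - q),
    # so minimising the expected loss yields the q-th conditional quantile.
    diff = y_true - y_pred
    return max(q * diff, (q - 1) * diff)
```

For example, at q = 0.9 an under-prediction of 2 costs 1.8, while an over-prediction of 2 costs only 0.2, pushing the fitted curve toward the 90th percentile.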
NN Domain#
Each architecture ships as a point base model plus a full suite of probabilistic variants. All share the `nn` domain and are loaded via the same registry.
Base Architectures#

| Model | Name | Config Class | Description |
|---|---|---|---|
| MLPF | `mlpf` | `MLPFConfig` | MLP forecaster with attention-based feature combination |
| MLPGAM | `mlpgam` | `MLPGAMConfig` | MLP with Generalized Additive Model structure |
| MLPGAF | `mlpgaf` | `MLPGAFConfig` | MLP with Gated Additive Features |
| N-HiTS | `nhits` | `NHITSConfig` | Neural Hierarchical Interpolation for Time Series |
| RNN | `rnn` | `RNNConfig` | GRU/LSTM recurrent network with bidirectional support |
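The parametric variants below attach a distribution head to these base architectures: the network emits unconstrained raw outputs, and a link function maps them into each distribution parameter's valid domain. A minimal sketch of this standard trick for a Normal head (illustrative only; the twiga wrappers may use different links):

```python
import math

def normal_head(raw_mean: float, raw_scale: float) -> tuple[float, float]:
    # softplus keeps the predicted standard deviation strictly positive;
    # the mean needs no constraint for a Normal distribution.
    sigma = math.log1p(math.exp(raw_scale))  # softplus(raw_scale)
    return raw_mean, sigma
```

The same pattern generalises: positive parameters (LogNormal/Gamma scale and shape) get a softplus-style link, while Beta supports on (0, 1) typically use a sigmoid.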
MLPF Probabilistic Variants#

| Name | Type | Config Class |
|---|---|---|
| `mlpfnormal` | Parametric — Normal | `MLPFNORMALConfig` |
| `mlpflaplace` | Parametric — Laplace | `MLPFLAPLACEConfig` |
| `mlpflognormal` | Parametric — LogNormal | `MLPFLOGNORMALConfig` |
| `mlpfgamma` | Parametric — Gamma | `MLPFGAMMAConfig` |
| `mlpfbeta` | Parametric — Beta | `MLPFBETAConfig` |
| `mlpfqr` | Quantile Regression | `MLPFQRConfig` |
| `mlpffpqr` | Full Parameterised QR | `MLPFFPQRConfig` |
| `mlpfcrc` | Conformal Residual Coverage | `MLPFCRCConfig` |
MLPGAM Probabilistic Variants#

| Name | Type | Config Class |
|---|---|---|
| `mlpgamnormal` | Parametric — Normal | `MLPGAMNORMALConfig` |
| `mlpgamlaplace` | Parametric — Laplace | `MLPGAMLAPLACEConfig` |
| `mlpgamlognormal` | Parametric — LogNormal | `MLPGAMLOGNORMALConfig` |
| `mlpgamgamma` | Parametric — Gamma | `MLPGAMGAMMAConfig` |
| `mlpgambeta` | Parametric — Beta | `MLPGAMBETAConfig` |
| `mlpgamstudentt` | Parametric — Student-t | `MLPGAMSTUDENTTConfig` |
| `mlpgamqr` | Quantile Regression | `MLPGAMQRConfig` |
| `mlpgamfpqr` | Full Parameterised QR | `MLPGAMFPQRConfig` |
| `mlpgamcrc` | Conformal Residual Coverage | `MLPGAMCRCConfig` |
MLPGAF Probabilistic Variants#

| Name | Type | Config Class |
|---|---|---|
| `mlpgafnormal` | Parametric — Normal | `MLPGAFNORMALConfig` |
| `mlpgaflaplace` | Parametric — Laplace | `MLPGAFLAPLACEConfig` |
| `mlpgaflognormal` | Parametric — LogNormal | `MLPGAFLOGNORMALConfig` |
| `mlpgafgamma` | Parametric — Gamma | `MLPGAFGAMMAConfig` |
| `mlpgafbeta` | Parametric — Beta | `MLPGAFBETAConfig` |
| `mlpgafstudentt` | Parametric — Student-t | `MLPGAFSTUDENTTConfig` |
| `mlpgafqr` | Quantile Regression | `MLPGAFQRConfig` |
| `mlpgaffpqr` | Full Parameterised QR | `MLPGAFFPQRConfig` |
| `mlpgafcrc` | Conformal Residual Coverage | `MLPGAFCRCConfig` |
N-HiTS Probabilistic Variants#

| Name | Type | Config Class |
|---|---|---|
| `nhitsnormal` | Parametric — Normal | `NHITSNORMALConfig` |
| `nhitslaplace` | Parametric — Laplace | `NHITSLAPLACEConfig` |
| `nhitslognormal` | Parametric — LogNormal | `NHITSLOGNORMALConfig` |
| `nhitsgamma` | Parametric — Gamma | `NHITSGAMMAConfig` |
| `nhitsbeta` | Parametric — Beta | `NHITSBETAConfig` |
| `nhitsstudentt` | Parametric — Student-t | `NHITSSTUDENTTConfig` |
| `nhitsqr` | Quantile Regression | `NHITSQRConfig` |
| `nhitsfpqr` | Full Parameterised QR | `NHITSFPQRConfig` |
| `nhitscrc` | Conformal Residual Coverage | `NHITSCRCConfig` |
RNN Probabilistic Variants#

| Name | Type | Config Class |
|---|---|---|
| `rnnnormal` | Parametric — Normal | `RNNNORMALConfig` |
| `rnnlaplace` | Parametric — Laplace | `RNNLAPLACEConfig` |
| `rnnlognormal` | Parametric — LogNormal | `RNNLOGNORMALConfig` |
| `rnngamma` | Parametric — Gamma | `RNNGAMMAConfig` |
| `rnnbeta` | Parametric — Beta | `RNNBETAConfig` |
| `rnnstudentt` | Parametric — Student-t | `RNNSTUDENTTConfig` |
| `rnnqr` | Quantile Regression | `RNNQRConfig` |
| `rnnfpqr` | Full Parameterised QR | `RNNFPQRConfig` |
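The CRC variants wrap a trained point network with a conformal layer. A minimal split-conformal sketch of the underlying idea, assuming symmetric intervals built from absolute held-out residuals (the actual twiga procedure may differ in detail):

```python
import math

def conformal_interval(point_forecast, calibration_residuals, alpha=0.1):
    # Interval half-width = the ceil((n + 1)(1 - alpha))-th smallest
    # absolute residual on a held-out calibration set. Under
    # exchangeability this gives >= (1 - alpha) marginal coverage.
    r = sorted(abs(e) for e in calibration_residuals)
    n = len(r)
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    q = r[k]
    return point_forecast - q, point_forecast + q
```

Because the wrapper only needs point forecasts and residuals, it can sit on top of any of the base architectures, which is what the "wraps any NN" row in the model-selection table below refers to.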
Model Class Hierarchy#
```mermaid
classDiagram
    class BaseRegressor {
        <<ML / Baseline>>
        +fit(X, y)
        +predict(x)
        +forecast(x)
    }
    class BaseQuantileRegressor {
        <<ML Quantile>>
        +quantiles: list
        +fit(X, y)
        +predict(X, sigma)
    }
    class BaseNGBoostRegressor {
        <<ML Probabilistic>>
        +fit(X, y)
        +predict(X)
    }
    class BaseNeuralForecast {
        <<NN Base>>
        +fit(X_train, y_train, X_val, y_val)
        +forecast(features)
        +update(trial)
    }
    class MLPFVariants["MLPF Variants (×8)"] {
        mlpfnormal · mlpflaplace · mlpflognormal
        mlpfgamma · mlpfbeta
        mlpfqr · mlpffpqr · mlpfcrc
    }
    class MLPGAMVariants["MLPGAM Variants (×9)"] {
        mlpgamnormal · mlpgamlaplace · mlpgamlognormal
        mlpgamgamma · mlpgambeta · mlpgamstudentt
        mlpgamqr · mlpgamfpqr · mlpgamcrc
    }
    class MLPGAFVariants["MLPGAF Variants (×9)"] {
        mlpgafnormal · mlpgaflaplace · mlpgaflognormal
        mlpgafgamma · mlpgafbeta · mlpgafstudentt
        mlpgafqr · mlpgaffpqr · mlpgafcrc
    }
    class NHITSVariants["N-HiTS Variants (×9)"] {
        nhitsnormal · nhitslaplace · nhitslognormal
        nhitsgamma · nhitsbeta · nhitsstudentt
        nhitsqr · nhitsfpqr · nhitscrc
    }
    class RNNVariants["RNN Variants (×8)"] {
        rnnnormal · rnnlaplace · rnnlognormal
        rnngamma · rnnbeta · rnnstudentt
        rnnqr · rnnfpqr
    }
    BaseRegressor <|-- NAIVEModel
    BaseRegressor <|-- SEASONALNAIVEModel
    BaseRegressor <|-- WINDOWAVERAGEModel
    BaseRegressor <|-- DRIFTModel
    BaseRegressor <|-- CONTEXTPARROTModel
    BaseRegressor <|-- CATBOOSTModel
    BaseRegressor <|-- XGBOOSTModel
    BaseRegressor <|-- LIGHTGBMModel
    BaseRegressor <|-- RANDOMFORESTModel
    BaseRegressor <|-- LINEAREGModel
    BaseRegressor <|-- GAUSSCATBOOSTModel
    BaseRegressor <|-- QRRANDOMFORESTModel
    BaseRegressor <|-- BaseQuantileRegressor
    BaseQuantileRegressor <|-- QRCATBOOSTModel
    BaseQuantileRegressor <|-- QRXGBOOSTModel
    BaseQuantileRegressor <|-- QRLIGHTGBMModel
    BaseNGBoostRegressor <|-- NGBOOSTNORMALModel
    BaseNGBoostRegressor <|-- NGBOOSTLOGNORMALModel
    BaseNGBoostRegressor <|-- NGBOOSTEXPONENTIALModel
    BaseNeuralForecast <|-- MLPFModel
    BaseNeuralForecast <|-- MLPGAMModel
    BaseNeuralForecast <|-- MLPGAFModel
    BaseNeuralForecast <|-- NHITSModel
    BaseNeuralForecast <|-- RNNModel
    MLPFModel <|-- MLPFVariants
    MLPGAMModel <|-- MLPGAMVariants
    MLPGAFModel <|-- MLPGAFVariants
    NHITSModel <|-- NHITSVariants
    RNNModel <|-- RNNVariants
```
Choosing a Model#

| Use Case | Recommended Models | Why |
|---|---|---|
| Establish performance floor | | Zero-training reference; compute skill scores against ML/NN |
| Quick ML baseline | | Fast training, easy to interpret |
| Tabular data, many features | | Strong with heterogeneous features |
| GPU acceleration needed | | Native GPU support |
| Parametric uncertainty — symmetric | | Gaussian output with mean + sigma |
| Parametric uncertainty — heavy tails | | Student-t handles heavier-tailed residuals |
| Parametric uncertainty — positive targets | | LogNormal / Gamma for count or power data |
| Distribution-free quantiles — ML | | Pinball loss, any quantile at inference |
| Distribution-free quantiles — NN | | Simultaneous multi-quantile output |
| Learned quantile levels | | FPQR adapts quantile levels during training |
| Post-hoc coverage guarantee | | Conformal residual coverage wraps any NN |
| Sequential / recurrent patterns | | GRU/LSTM captures step-by-step temporal dependencies |
| Complex temporal patterns | | Deep learning captures nonlinearities |
| Interpretable forecasts | | Additive structure provides feature insights |
| Small datasets | | Less prone to overfitting |
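The "performance floor" row relies on skill scores against a baseline. A common definition (an assumption here; twiga's exact metric may differ) compares a model's error to the baseline's error on the same data:

```python
def skill_score(model_error: float, baseline_error: float) -> float:
    # 1.0 = perfect forecast, 0.0 = no better than the baseline,
    # negative = worse than the baseline. Errors are any nonnegative
    # metric (e.g. MAE or RMSE) computed on the same evaluation set.
    return 1.0 - model_error / baseline_error
```

A trained model that cannot beat `naive` or `seasonalnaive` (skill score at or below zero) is usually not worth its training cost.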
Model Registry#
Models are loaded dynamically via the registry in `twiga/forecaster/registry.py`. The naming convention is:
- Module: `twiga.models.{domain}.{name}_model`
- Model class: `{NAME}Model` (uppercase)
- Config class: `{NAME}Config` (uppercase)
For example, `get_model("catboost", "ml")` loads from `twiga.models.ml.catboost_model` and returns `(CATBOOSTModel, CATBOOSTConfig)`.
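The lookup itself reduces to string composition plus a dynamic import. A sketch of the convention (the real registry may add validation, caching, or aliasing):

```python
import importlib

def registry_names(name: str, domain: str) -> tuple[str, str, str]:
    # Compose the module path and class names per the documented convention.
    module_path = f"twiga.models.{domain}.{name}_model"
    return module_path, f"{name.upper()}Model", f"{name.upper()}Config"

def get_model(name: str, domain: str):
    # Import the module and pull out the (Model, Config) pair.
    module_path, model_cls, config_cls = registry_names(name, domain)
    mod = importlib.import_module(module_path)
    return getattr(mod, model_cls), getattr(mod, config_cls)
```

Note that the probabilistic RNN wrappers live under `twiga/models/nn/prob/` (see Source Files), so the actual registry presumably maps those names to the nested modules rather than applying the flat convention verbatim.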
See Creating Custom Models for how to add new models to the registry.