Quantile Regression#

Source Files
  • twiga/models/ml/prob/base_quantile.py

  • twiga/models/ml/qrcatboost_model.py

  • twiga/models/ml/qrxgboost_model.py

  • twiga/models/ml/qrlightgbm_model.py

  • twiga/models/ml/qrrandomforest_model.py

  • twiga/models/nn/mlpfqr_model.py

  • twiga/models/nn/mlpgamqr_model.py

  • twiga/models/nn/mlpgafqr_model.py

  • twiga/models/nn/nhitsqr_model.py

  • twiga/models/nn/rnnqr_model.py

  • twiga/models/nn/mlpffpqr_model.py

  • twiga/models/nn/mlpgamfpqr_model.py

  • twiga/models/nn/mlpgaffpqr_model.py

  • twiga/models/nn/nhitsfpqr_model.py

  • twiga/models/nn/mlpfcrc_model.py

  • twiga/models/nn/mlpgamcrc_model.py

  • twiga/models/nn/mlpgafcrc_model.py

  • twiga/models/nn/nhitscrc_model.py

  • twiga/distributions/nn/quantile.py

  • twiga/distributions/nn/fpquantile.py

  • twiga/distributions/ml/utils.py

Quantile regression estimates conditional quantiles of the target distribution rather than the mean, enabling prediction intervals without distributional assumptions.

ML Quantile Models#

BaseQuantileRegressor#

All ML quantile models extend BaseQuantileRegressor (twiga/models/ml/prob/base_quantile.py), which itself extends BaseRegressor.

Constructor#

BaseQuantileRegressor(
    data_pipeline: Any | None = None,
    model_instance: Any = None,
    model_config: dict | None = None,
    quantiles: list[float] | None = None,
    conf_level: float = 0.05,
)

| Attribute | Type | Description |
| --- | --- | --- |
| quantiles | list[float] | Sorted quantile fractions, automatically extended with conf_level / 2 and 1 - conf_level / 2. |
| conf_level | float | Significance level; 0.1 produces a 90% prediction interval. |
| supports_multi_output | bool | Whether the underlying engine natively handles multiple outputs. |
| models | dict[float, Any] | One trained model per quantile (LightGBM), or a single MultiOutputRegressor (CatBoost, XGBoost). |

Key Methods#

| Method | Signature | Description |
| --- | --- | --- |
| fit | fit(X, y, verbose=False) | Trains all quantile models. Iterates per-quantile for LightGBM; fits a single model for CatBoost and XGBoost. |
| predict | predict(X, sigma=False) | Returns (median, quantile_array). When sigma=True, returns (median, sigma), where sigma is the inter-quantile range. Quantile array shape: (batch, num_quantiles, horizon, num_targets). |
| forecast | forecast(x, sigma=False) | Returns {"loc": median, "scale": sigma} when sigma=True; otherwise (median, quantile_predictions). |

Quantile merging

User-specified quantiles are merged with the confidence bounds derived from conf_level. For example, with quantiles=[0.25, 0.5, 0.75] and conf_level=0.1, the final sorted quantile list is [0.05, 0.25, 0.5, 0.75, 0.95].
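The merge is simple enough to sketch directly. This is a minimal stand-in for the logic in base_quantile.py, not the actual implementation; the rounding guard here is purely an illustration detail to suppress floating-point noise:

```python
def merge_quantiles(quantiles: list[float], conf_level: float) -> list[float]:
    """Merge user quantiles with the interval bounds derived from conf_level."""
    bounds = {conf_level / 2, 1 - conf_level / 2}
    # Round before deduplicating so e.g. 0.1 / 2 and a literal 0.05 collapse.
    return sorted({round(q, 10) for q in set(quantiles) | bounds})

print(merge_quantiles([0.25, 0.5, 0.75], conf_level=0.1))
# [0.05, 0.25, 0.5, 0.75, 0.95]
```

Bounds already present in the user list are deduplicated rather than repeated.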

Per-Engine Handling#

Each engine uses a different internal strategy for quantile regression:

| Engine | Strategy | Loss / Parameter |
| --- | --- | --- |
| CatBoost | Multi-quantile loss: a single model predicts all quantiles | loss_function="MultiQuantile:alpha=q1,q2,..." |
| XGBoost | Single model with a quantile_alpha array | objective="reg:quantileerror" |
| LightGBM | Per-quantile models: one LGBMRegressor per quantile | objective="quantile", alpha=qi |

LightGBM training time

LightGBM trains a separate model per quantile, so training time scales linearly with the number of quantiles. With the default 5 quantiles plus 2 confidence bounds (7 total), expect roughly 7× the training time of a single point model. CatBoost and XGBoost predict all quantiles in a single model, making them more efficient for large quantile sets.
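The per-quantile strategy can be emulated in a few lines. Here scikit-learn's GradientBoostingRegressor stands in for LGBMRegressor (an assumption for portability; LightGBM takes the analogous objective="quantile", alpha=q pair):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200)

quantiles = [0.05, 0.5, 0.95]
# One fitted model per quantile level: training cost grows linearly with len(quantiles).
models = {
    q: GradientBoostingRegressor(
        loss="quantile", alpha=q, n_estimators=50, random_state=0
    ).fit(X, y)
    for q in quantiles
}
# Stack per-quantile predictions into (batch, num_quantiles).
preds = np.stack([models[q].predict(X) for q in quantiles], axis=1)
```

Each entry in `models` is an independent estimator, which is exactly why the 7-quantile case costs roughly 7 single-model fits.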

Available Models#

QRCATBOOSTModel#

Quantile regression CatBoost using MultiQuantile loss — all quantiles predicted in a single forward pass.

QRCATBOOSTConfig extends CATBOOSTConfig:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| name | Literal["qrcatboost"] | "qrcatboost" | Model identifier. |
| quantiles | list \| None | [0.05, 0.25, 0.5, 0.75, 0.95] | Quantile fractions for fitting. |
| conf_level | float \| None | 0.1 | Significance level for prediction interval bounds. |

from twiga.models.ml.qrcatboost_model import QRCATBOOSTConfig

config = QRCATBOOSTConfig(
    quantiles=[0.05, 0.25, 0.5, 0.75, 0.95],
    conf_level=0.1,
    task_type="CPU",
)

QRXGBOOSTModel#

Quantile regression XGBoost using reg:quantileerror objective with quantile_alpha.

QRXGBOOSTConfig extends XGBOOSTConfig:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| name | Literal["qrxgboost"] | "qrxgboost" | Model identifier. |
| quantiles | list \| None | [0.05, 0.25, 0.5, 0.75, 0.95] | Quantile fractions for fitting. |
| conf_level | float \| None | 0.1 | Significance level for prediction interval bounds. |
| objective | Literal["reg:quantileerror"] | "reg:quantileerror" | XGBoost quantile error objective. |

from twiga.models.ml.qrxgboost_model import QRXGBOOSTConfig

config = QRXGBOOSTConfig(
    quantiles=[0.05, 0.25, 0.5, 0.75, 0.95],
    conf_level=0.1,
    device="cpu",
)

QRLIGHTGBMModel#

Quantile regression LightGBM using per-quantile LGBMRegressor models with objective="quantile".

QRLIGHTGBMConfig extends LIGHTGBMConfig:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| name | Literal["qrlightgbm"] | "qrlightgbm" | Model identifier. |
| quantiles | list \| None | [0.05, 0.25, 0.5, 0.75, 0.95] | Quantile fractions for fitting. |
| conf_level | float \| None | 0.1 | Significance level for prediction interval bounds. |
| objective | Literal["quantile"] | "quantile" | LightGBM quantile objective. |

from twiga.models.ml.qrlightgbm_model import QRLIGHTGBMConfig

config = QRLIGHTGBMConfig(
    quantiles=[0.05, 0.25, 0.5, 0.75, 0.95],
    conf_level=0.1,
)

QRRANDOMFORESTModel#

Quantile regression using quantile_forest.RandomForestQuantileRegressor. A single forest is trained once and any quantile can be extracted at inference time without retraining. Monotonicity across quantiles is enforced post-hoc via isotonic regression.

QRRANDOMFORESTConfig extends RANDOMFORESTConfig:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| name | Literal["qrrandomforest"] | "qrrandomforest" | Model identifier. |
| quantiles | list[float] \| None | [0.05, 0.25, 0.5, 0.75, 0.95] | Quantile fractions for prediction. |
| conf_level | float \| None | 0.1 | Significance level for prediction interval bounds. |

from twiga.models.ml.qrrandomforest_model import QRRANDOMFORESTConfig

config = QRRANDOMFORESTConfig(
    quantiles=[0.05, 0.25, 0.5, 0.75, 0.95],
    conf_level=0.1,
    n_estimators=200,
    random_state=42,
)

Single-model inference

Because RandomForestQuantileRegressor stores the full leaf-node empirical distribution during training, any quantile can be queried at prediction time with no retraining. This makes QRRANDOMFORESTModel more efficient than QRLIGHTGBMModel when many quantile levels are needed.
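Both properties can be illustrated in isolation with plain NumPy: any quantile of a stored empirical distribution is a cheap np.quantile call, and a crossing-free quantile vector can be recovered post hoc. A cumulative maximum is used below as a simplified stand-in for the isotonic regression mentioned above:

```python
import numpy as np

# Empirical distribution stored at training time (e.g. leaf-node targets).
leaf_samples = np.array([2.0, 5.0, 3.0, 8.0, 4.0])

# Any quantile level is available at inference time with no retraining.
q10, q50, q90 = np.quantile(leaf_samples, [0.1, 0.5, 0.9])

# Enforce monotonicity across quantile levels post hoc.
raw = np.array([1.9, 1.7, 3.0, 2.8, 4.1])   # hypothetical crossing predictions
mono = np.maximum.accumulate(raw)            # non-decreasing in the quantile axis
```

The real isotonic-regression fix minimises squared adjustment instead of clamping each crossing upward, but the invariant it restores is the same.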

NN Quantile Models#

All five NN architectures (MLPF, MLPGAM, MLPGAF, NHiTS, RNN) support QR and FPQR heads; MLPF, MLPGAM, MLPGAF, and NHiTS additionally support CRC heads. All are wired via the backbone/head architecture.

QR Models#

QR models pair a backbone with QRDistribution and predict at a fixed quantile grid set before training.

| Model | Name | Config Class | Source Module |
| --- | --- | --- | --- |
| MLPF-QR | mlpfqr | MLPFQRConfig | twiga.models.nn.mlpfqr_model |
| MLPGAM-QR | mlpgamqr | MLPGAMQRConfig | twiga.models.nn.mlpgamqr_model |
| MLPGAF-QR | mlpgafqr | MLPGAFQRConfig | twiga.models.nn.mlpgafqr_model |
| N-HiTS-QR | nhitsqr | NHITSQRConfig | twiga.models.nn.nhitsqr_model |
| RNN-QR | rnnqr | RNNQRConfig | twiga.models.nn.rnnqr_model |

All share the same config parameters:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| quantiles | list[float] \| None | None | Fixed quantile levels to predict |
| conf_level | float | 0.05 | Confidence level for interval bounds |
| loss_fn | Literal["pinball", "huber-pinball"] | "pinball" | Loss function |
| kappa | float | 0.25 | Huber transition parameter |
| eps | float | 1e-6 | Numerical stability constant |

from twiga.models.nn.mlpfqr_model import MLPFQRConfig   # same pattern for all QR variants

config = MLPFQRConfig.from_data_config(data_config)
config.quantiles = [0.05, 0.25, 0.5, 0.75, 0.95]
config.conf_level = 0.05
config.loss_fn = "pinball"
config.max_epochs = 20

Loss Functions#

  • Pinball loss — standard asymmetric loss penalising over- and under-prediction differently at each quantile level \(\tau\)

  • Huber-pinball loss — smoothed near zero, controlled by kappa; reduces sensitivity to outliers
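Both losses reduce to a few lines of NumPy. The sketch below follows the standard textbook definitions; the library's own implementations live in the loss modules and may differ in reduction and broadcasting details:

```python
import numpy as np

def pinball(y, q_pred, tau):
    """Pinball loss: tau * max(err, 0) + (1 - tau) * max(-err, 0), err = y - q."""
    err = y - q_pred
    return np.mean(np.maximum(tau * err, (tau - 1) * err))

def huber_pinball(y, q_pred, tau, kappa=0.25):
    """Pinball loss with a Huber-smoothed |err| near zero, controlled by kappa."""
    err = y - q_pred
    abs_err = np.abs(err)
    # Quadratic inside [-kappa, kappa], linear outside; continuous at |err| = kappa.
    huber = np.where(abs_err <= kappa,
                     0.5 * err ** 2 / kappa,
                     abs_err - 0.5 * kappa)
    # Same asymmetric tau weighting as the plain pinball loss.
    return np.mean(np.where(err >= 0, tau, 1 - tau) * huber)
```

At tau = 0.9, under-prediction (y above the quantile) is penalised 9x more than over-prediction, which is what pushes the fitted value toward the 90th percentile.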

FPQR Models (Full Parameterised Quantile Regression)#

FPQR models pair each backbone with FPQRDistribution. The quantile levels are proposed dynamically per sample by a learned QuantileProposal network rather than fixed at config time.

| Model | Name | Config Class | Source Module |
| --- | --- | --- | --- |
| MLPF-FPQR | mlpffpqr | MLPFFPQRConfig | twiga.models.nn.mlpffpqr_model |
| MLPGAM-FPQR | mlpgamfpqr | MLPGAMFPQRConfig | twiga.models.nn.mlpgamfpqr_model |
| MLPGAF-FPQR | mlpgaffpqr | MLPGAFFPQRConfig | twiga.models.nn.mlpgaffpqr_model |
| N-HiTS-FPQR | nhitsfpqr | NHITSFPQRConfig | twiga.models.nn.nhitsfpqr_model |
| RNN-FPQR | rnnfpqr | RNNFPQRConfig | twiga.models.nn.rnnqr_model |

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| n_quantiles | int \| None | 9 | Number of adaptive quantile levels (None → 9) |
| conf_level | float | 0.05 | Clamps proposals to [α/2, 1 − α/2] |
| loss_fn | Literal["pinball", "huber-pinball"] | "pinball" | Loss function |
| kappa | float | 0.25 | Huber transition parameter |
| num_cosines | int | 32 | Cosine basis functions for tau embedding |

from twiga.models.nn.mlpffpqr_model import MLPFFPQRConfig   # same pattern for all FPQR variants

config = MLPFFPQRConfig.from_data_config(data_config)
config.n_quantiles = 9
config.conf_level = 0.05
config.loss_fn = "pinball"
config.num_cosines = 32
config.max_epochs = 20

See FPQR Distribution for the full architecture description including QuantileProposal and CosinetauEmbedding.

CRC Models (Conditional Residual Calibration)#

CRC models pair a backbone with a learned scale head that approximates the absolute residual \(|y - \mu|\), giving calibrated prediction intervals without a separate conformal calibration step. The joint training objective is:

\[\mathcal{L} = \underbrace{\alpha \cdot \text{MSE}(\mu, y) + (1-\alpha) \cdot \text{MAE}(\mu, y)}_{\mathcal{L}_\mu} + \mathcal{L}_\sigma(\sigma,\, |y-\mu|)\]

Two head variants are used depending on the backbone:

  • CRCDistribution (MLPF, NHiTS): adds a linear projection from the backbone’s latent vector to \(\mu\).

  • AdditiveCRCDistribution (MLPGAM, MLPGAF): uses the backbone’s additive mean directly as \(\mu\), preserving the GAM decomposition.
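The joint objective can be read directly off the formula. The sketch below takes the "hybrid" σ-loss to be the same α-weighted MSE/MAE mix applied to |y − μ| (consistent with the config comment further down), which is an assumption about the exact form, not a copy of the library code:

```python
import numpy as np

def crc_loss(y, mu, sigma, alpha=0.1):
    """Joint CRC objective (hybrid variant):

    L_mu    = alpha * MSE(mu, y)    + (1 - alpha) * MAE(mu, y)
    L_sigma = alpha * MSE(sigma, r) + (1 - alpha) * MAE(sigma, r),  r = |y - mu|
    """
    def hybrid(pred, target):
        return (alpha * np.mean((pred - target) ** 2)
                + (1 - alpha) * np.mean(np.abs(pred - target)))

    residual = np.abs(y - mu)           # the scale head's regression target
    return hybrid(mu, y) + hybrid(sigma, residual)
```

A perfect mean fit with a zero scale head drives the loss to exactly zero; any residual signal left in |y − μ| is absorbed by σ rather than a separate conformal calibration pass.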

| Model | Name | Config Class | Source Module |
| --- | --- | --- | --- |
| MLPF-CRC | mlpfcrc | MLPFCRCConfig | twiga.models.nn.mlpfcrc_model |
| MLPGAM-CRC | mlpgamcrc | MLPGAMCRCConfig | twiga.models.nn.mlpgamcrc_model |
| MLPGAF-CRC | mlpgafcrc | MLPGAFCRCConfig | twiga.models.nn.mlpgafcrc_model |
| N-HiTS-CRC | nhitscrc | NHITSCRCConfig | twiga.models.nn.nhitscrc_model |

All CRC configs share these additional fields (on top of their backbone config):

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| sigma_loss_fn | Literal["hybrid", "gaussian", "laplace", "mse"] | "hybrid" | Calibration objective for σ |
| alpha | float | 0.1 | MSE weight in the mu and hybrid sigma losses |
| two_stage | bool | True | Two-stage training (stage 1: backbone; stage 2: sigma MLP only) |

from twiga.models.nn.mlpgamcrc_model import MLPGAMCRCConfig

config = MLPGAMCRCConfig.from_data_config(data_config)
config.sigma_loss_fn = "hybrid"   # default — alpha*MSE + (1-alpha)*MAE on |y - mu|
config.alpha = 0.1
config.two_stage = True           # freeze backbone for sigma-only stage 2
config.max_epochs = 30

CRC vs post-hoc conformal prediction

CRC is trained jointly — no held-out calibration split required. Post-hoc conformal prediction wraps any already-trained model and provides finite-sample distribution-free coverage guarantees. CRC typically produces tighter intervals when the residual signal is strong; post-hoc conformal is always valid regardless of model quality.

See CRC Distribution for the full head architecture and loss equations.

ML Distribution Utilities#

Helper functions in twiga/distributions/ml/utils.py for post-processing quantile predictions:

| Function | Input Shape | Output Shape | Description |
| --- | --- | --- | --- |
| interpolate_quantile(predictions, quantiles, target_quantile) | (B, Q, C) | (B, C) | Linear interpolation for any target quantile |
| get_median_prediction(predictions, quantiles) | (B, Q, C) | (B, C) | Extract or interpolate the median (0.5 quantile) |
| get_sigma_prediction(predictions, quantiles) | (B, Q, C) | (B, C) | Estimate sigma as (Q75 - Q25) / (2 * 0.6745) |

from twiga.distributions.ml.utils import get_median_prediction, get_sigma_prediction

# predictions shape: (batch, num_quantiles, num_targets)
median = get_median_prediction(predictions, quantiles=[0.05, 0.25, 0.5, 0.75, 0.95])
sigma = get_sigma_prediction(predictions, quantiles=[0.05, 0.25, 0.5, 0.75, 0.95])
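The 0.6745 constant is the 75th percentile of the standard normal (Φ⁻¹(0.75) ≈ 0.6745), so for Gaussian data the inter-quartile estimate recovers the true σ. A quick sanity check of the same formula on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(42)
samples = rng.normal(loc=0.0, scale=2.0, size=100_000)

q25, q75 = np.quantile(samples, [0.25, 0.75])
sigma_hat = (q75 - q25) / (2 * 0.6745)   # same formula as get_sigma_prediction
# sigma_hat lands close to the true scale of 2.0
```

Because it only uses the inter-quartile range, the estimate is robust to heavy tails, at the cost of assuming roughly Gaussian shape in the central 50% of the distribution.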

Evaluation#

Quantile predictions are evaluated using probabilistic metrics:

from twiga.core.metrics.prob import pinball_loss, pinball_score

# Per-quantile loss
loss = pinball_loss(y=true_values, q=quantile_predictions, tau=0.5)

# Aggregated pinball score
score = pinball_score(
    true=true_values,
    q_values=[q10, q50, q90],
    taus=[0.1, 0.5, 0.9],
)

Quantile predictions can also be used with conformal quantile regression for calibrated prediction intervals.

ML Usage Example#

import pandas as pd
from sklearn.preprocessing import StandardScaler

from twiga.core.config import DataPipelineConfig, ForecasterConfig
from twiga.forecaster.core import TwigaForecaster
from twiga.models.ml.qrcatboost_model import QRCATBOOSTConfig

data_config = DataPipelineConfig(
    target_feature="load_mw",
    period="1h",
    lookback_window_size=168,
    forecast_horizon=24,
    lags=[1, 24, 168],
    input_scaler=StandardScaler(),
)

train_config = ForecasterConfig(
    split_freq="months",
    train_size=6,
    test_size=1,
    window="expanding",
    project_name="QRForecast",
)

qr_config = QRCATBOOSTConfig(
    task_type="CPU",
    quantiles=[0.1, 0.25, 0.5, 0.75, 0.9],
    conf_level=0.1,
)

forecaster = TwigaForecaster(
    data_params=data_config,
    model_params=[qr_config],
    train_params=train_config,
)

forecaster.fit(train_df=train_df)

interval_dict, _ = forecaster.predict_interval(test_df=test_df)
for model_name, (lower, point, upper) in interval_dict.items():
    print(f"{model_name}: lower={lower.shape}, point={point.shape}, upper={upper.shape}")

API Reference#

class twiga.models.ml.prob.base_quantile.BaseQuantileRegressor(data_pipeline=None, model_instance=None, model_config=None, quantiles=None, conf_level=0.05)#

Bases: BaseRegressor

Base class for quantile regression models compatible with scikit-learn pipelines.

Supports multiple quantile models for probabilistic forecasting using any regression model (e.g., LightGBM, XGBoost, CatBoost). Automatically handles multi-output regression depending on model capabilities.

Variables:
  • data_pipeline (Any | None) – Optional preprocessing pipeline.

  • model_instance (Any) – Regression model class.

  • model_config (dict) – Configuration dictionary for the model.

  • quantiles (list[float]) – List of quantiles including confidence interval bounds.

  • conf_level (float) – Confidence level for interval predictions.

  • supports_multi_output (bool) – Whether the model supports multi-output regression.

  • models (dict[float, Any]) – Mapping from quantile to its corresponding trained model.

  • num_targets (int | None) – Number of output targets for multi-output cases.

  • horizon (int) – Prediction horizon length.

__init__(data_pipeline=None, model_instance=None, model_config=None, quantiles=None, conf_level=0.05)#

Initializes the BaseQuantileRegressor.

Parameters:
  • data_pipeline (Any | None) – Optional feature preprocessing pipeline.

  • model_instance (Any) – Regression model class (e.g., LGBMRegressor).

  • model_config (dict | None) – Parameters to initialize the model.

  • quantiles (list[float] | None) – List of quantiles to model (e.g., [0.25, 0.5, 0.75]). Defaults to [0.05, 0.25, 0.5, 0.75, 0.95] plus confidence bounds.

  • conf_level (float) – Confidence level for interval bounds (e.g., 0.05 for 95% CI).

fit(X, y, eval_set=None, verbose=False)#

Fit the regression model to the training data.

Parameters:
  • X (ndarray) – Training input features.

  • y (ndarray) – Target values corresponding to the training inputs.

  • eval_set (tuple[ndarray, ndarray] | None) – Optional (X_val, y_val) for early stopping (reserved for future per-backend support; currently accepted but not forwarded).

  • verbose (bool) – Flag to control verbosity (default is False).

Return type:

BaseQuantileRegressor

Returns:

BaseQuantileRegressor – The instance itself.

Raises:

ValueError – If no model has been set prior to calling fit.

Example

>>> X = np.random.rand(10, 5, 3)
>>> y = np.random.rand(10, 5, 2)
>>> reg = BaseQuantileRegressor(model_instance=LinearRegression)
>>> reg.fit(X, y, verbose=True)

forecast(x, sigma=False)#

Forecast output using the fitted regression model.

Parameters:
  • x (ndarray) – Input features for forecasting.

  • sigma (bool) – Whether to return sigma predictions (default is False).

Return type:

dict

Returns:

dict – A dictionary containing the predicted values with key “loc”.

Example

>>> x = np.random.rand(10, 5, 3)
>>> reg = BaseRegressor()
>>> reg.model = SomeModel()  # assign a model implementing predict
>>> forecast_output = reg.forecast(x)
>>> "loc" in forecast_output
True

get_median_quantile(predictions)#

Get median quantile predictions for the input data.

Parameters:

predictions (ndarray) – Input features of shape (n_samples, n_features).

Return type:

ndarray

Returns:

np.ndarray – Median quantile predictions of shape (n_samples, n_targets).

get_sigma_quantile(predictions)#

Get sigma quantile predictions for the input data.

Parameters:

predictions (ndarray) – Input features of shape (n_samples, n_features).

Return type:

ndarray

Returns:

np.ndarray – Sigma quantile predictions of shape (n_samples, n_targets).

predict(X, sigma=False)#

Predict quantile values and optionally sigma for the input data.

Parameters:
  • X (ndarray) – Input features of shape (n_samples, seq_len, n_features).

  • sigma (bool) – Whether to return sigma predictions (default is False).

Returns:

tuple – Median predictions and either sigma predictions or full quantile predictions.

set_fit_request(*, eval_set='$UNCHANGED$', verbose='$UNCHANGED$')#

Configure whether metadata should be requested to be passed to the fit method.

Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Parameters#

eval_set : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for eval_set parameter in fit.

verbose : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for verbose parameter in fit.

Returns#

self : object

The updated object.

set_predict_request(*, sigma='$UNCHANGED$')#

Configure whether metadata should be requested to be passed to the predict method.

Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Parameters#

sigma : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sigma parameter in predict.

Returns#

self : object

The updated object.

set_score_request(*, sample_weight='$UNCHANGED$')#

Configure whether metadata should be requested to be passed to the score method.

Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Parameters#

sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sample_weight parameter in score.

Returns#

self : object

The updated object.

update(trial)#

Update the quantile regression model using hyperparameter suggestions.

The model is re-instantiated with new parameters obtained from the configuration’s search space.

Parameters:

trial – An Optuna trial object used to sample hyperparameters.

class twiga.models.ml.qrcatboost_model.QRCATBOOSTConfig(**data)#

Bases: CATBOOSTConfig

Configuration model for QRCatBoost algorithms.

This class extends CATBOOSTConfig with parameters specific to CatBoost, enabling hardware acceleration, reproducibility, verbosity control, and hyperparameter tuning. The configuration includes fixed parameters as well as a hyperparameter search space that defines ranges for parameters used during optimization.

Variables:
  • name (Literal["qrcatboost"]) – Identifier for the model type, fixed to “qrcatboost”. This field is excluded from parameter tuning. Input can be provided using the alias “name”.

  • task_type (Literal["GPU", "CPU"]) – Hardware acceleration type. Use “GPU” for GPU acceleration or “CPU” for standard processing.

  • random_state (int) – A positive integer seed used for random number generation to ensure reproducibility.

  • verbose (Literal[0, 1, 2]) –

    Verbosity level for model output. Acceptable values are:

    0 - Silent, 1 - Minimal, 2 - Detailed.

  • allow_writing_files (bool) – Flag indicating whether the model is allowed to write files to disk during training.

  • search_space (BaseSearchSpace) – Hyperparameter search space defining ranges for tuning parameters such as learning_rate, depth, iterations, and min_data_in_leaf.

Examples

How to use:
>>> # Instantiate the configuration with a positive seed value.
>>> config = QRCATBOOSTConfig(random_state=42)
>>> print(config.task_type)
CPU
>>>
>>> # Integrate with an Optuna study for hyperparameter tuning:
>>> import optuna
>>> def objective(trial):
...     params = config.get_optuna_params(trial)
...     # Train your model with the suggested parameters and return an evaluation metric.
...     return evaluate_model(params)
>>> study = optuna.create_study(direction="minimize")
>>> study.optimize(objective, n_trials=10)
conf_level: float | None#
model_config: ClassVar[ConfigDict] = {'extra': 'allow'}#

Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict.

name: Literal['qrcatboost']#
quantiles: list | None#

class twiga.models.ml.qrcatboost_model.QRCATBOOSTModel(model_config=None)#

Bases: BaseQuantileRegressor

A quantile regression model class using CatBoost as the underlying model.

Inherits from:

BaseQuantileRegressor: The base regressor class providing core methods.

Parameters:

model_config (QRCATBOOSTConfig | None) – Configuration for CatBoost. Defaults to None, in which case the default configuration is used.

Variables:
  • name (str) – Name of the model (“qrcatboost”).

  • model_params (dict) – Parameters for the CatBoostRegressor.

  • model (MultiOutputRegressor) – A CatBoostRegressor wrapped for multi-output regression.

set_fit_request(*, eval_set='$UNCHANGED$', verbose='$UNCHANGED$')#

set_predict_request(*, sigma='$UNCHANGED$')#

set_score_request(*, sample_weight='$UNCHANGED$')#

Inherited from BaseQuantileRegressor; see the corresponding entries above for the metadata-routing options.

class twiga.models.ml.qrxgboost_model.QRXGBOOSTConfig(**data)#

Bases: XGBOOSTConfig

Configuration model for QRXGBoost algorithm.

This class extends XGBOOSTConfig with parameters specific to QRXGBoost, enabling reproducibility, verbosity control, and hyperparameter tuning. It includes fixed parameters as well as a hyperparameter search space defining ranges for tuning key parameters.

Variables:
  • name (Literal["qrxgboost"]) – Identifier for the model type, fixed to “qrxgboost”. Excluded from parameter tuning. Input can be provided using the alias “model_name”.

  • quantiles (list|None) – List of quantile fractions for fitting the quantile regression model. Defaults to sorted list of common quantiles.

  • conf_level (float|None) – Confidence level for the quantile regression model. Defaults to 0.1, which corresponds to a 90% confidence interval.

  • objective (Literal["reg:quantileerror"]) – Objective function for XGBoost quantile regression.

conf_level: float | None#
model_config: ClassVar[ConfigDict] = {'extra': 'allow'}#

Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict.

name: Literal['qrxgboost']#
objective: Literal['reg:quantileerror']#
quantiles: list | None#

class twiga.models.ml.qrxgboost_model.QRXGBOOSTModel(model_config=None)#

Bases: BaseQuantileRegressor

Quantile regression model using XGBoost as the underlying estimator.

This class provides an interface for initializing and updating an XGBoost model for quantile regression tasks. It utilizes a configuration model (QRXGBOOSTConfig) to manage hyperparameters and settings.

Parameters:

model_config (XGBOOSTConfig | None) – Configuration for XGBoost. Defaults to None, in which case the default configuration is used.

Variables:
  • model_config (XGBOOSTConfig) – The configuration object for XGBoost.

  • model (XGBRegressor) – The instantiated XGBRegressor model.

set_fit_request(*, eval_set='$UNCHANGED$', verbose='$UNCHANGED$')#

Configure whether metadata should be requested to be passed to the fit method.

Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to fit.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Parameters#

eval_setstr, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for eval_set parameter in fit.

verbose : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for verbose parameter in fit.

Returns#

self : object

The updated object.

set_predict_request(*, sigma='$UNCHANGED$')#

Configure whether metadata should be requested to be passed to the predict method.

Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to predict.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Parameters#

sigma : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sigma parameter in predict.

Returns#

self : object

The updated object.

set_score_request(*, sample_weight='$UNCHANGED$')#

Configure whether metadata should be requested to be passed to the score method.

Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Parameters#

sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for sample_weight parameter in score.

Returns#

self : object

The updated object.

class twiga.models.ml.qrlightgbm_model.QRLIGHTGBMConfig(**data)#

Bases: LIGHTGBMConfig

Configuration model for QRLightGBM algorithms.

This class extends LIGHTGBMConfig with parameters specific to QRLightGBM, enabling hardware acceleration and hyperparameter tuning. It defines fixed parameters and a search space for the key tunable hyperparameters.

Variables:
  • name (Literal["qrlightgbm"]) – Model identifier fixed to “qrlightgbm”. Input can be provided using the alias “model_name”.

  • objective (Literal["regression"]) – Regression objective.

conf_level: float | None#
model_config: ClassVar[ConfigDict] = {'extra': 'allow'}#

Configuration for the model; should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

name: Literal['qrlightgbm']#
objective: Literal['quantile']#
quantiles: list | None#
class twiga.models.ml.qrlightgbm_model.QRLIGHTGBMModel(model_config=None)#

Bases: BaseQuantileRegressor

QRLightGBM regression model with multi-output support.

This class provides an interface for initializing and updating a LightGBM regressor for quantile regression tasks. It uses a configuration model (QRLIGHTGBMConfig) to manage hyperparameters and settings.

Parameters:

model_config (QRLIGHTGBMConfig | None) – Configuration for LightGBM. If None, the default configuration is used.

Variables:
  • model_config (QRLIGHTGBMConfig) – The configuration object for LightGBM.

  • model (MultiOutputRegressor) – The instantiated LightGBM regressor wrapped for multi-output regression.
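The per-quantile training pattern used by the ML quantile models (one independent estimator per quantile level) can be sketched with scikit-learn's GradientBoostingRegressor standing in for LightGBM, since both expose a "quantile" objective parameterized by a level alpha; this is a generic sketch, not twiga's actual training loop:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] + rng.normal(scale=0.5, size=200)

quantiles = [0.05, 0.5, 0.95]
# One independent model per quantile level, keyed by its fraction,
# mirroring the models dict on BaseQuantileRegressor.
models = {
    tau: GradientBoostingRegressor(
        loss="quantile", alpha=tau, n_estimators=50
    ).fit(X, y)
    for tau in quantiles
}

preds = np.column_stack([models[tau].predict(X) for tau in quantiles])
print(preds.shape)  # one column per quantile level
```

Because each quantile is fit independently, nothing prevents the predicted quantiles from crossing; that is repaired post-hoc (see the monotonicity note under QRRANDOMFORESTModel below, which applies to all QR variants).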

set_fit_request(*, eval_set='$UNCHANGED$', verbose='$UNCHANGED$')#

set_predict_request(*, sigma='$UNCHANGED$')#

set_score_request(*, sample_weight='$UNCHANGED$')#

These scikit-learn metadata-routing methods are documented in full under QRXGBOOSTModel above and behave identically here: each configures whether the named metadata (eval_set and verbose for fit, sigma for predict, sample_weight for score) is requested when this estimator is used as a sub-estimator with enable_metadata_routing=True, and each returns the updated estimator.

class twiga.models.ml.qrrandomforest_model.QRRANDOMFORESTConfig(**data)#

Bases: RANDOMFORESTConfig

Configuration for the Quantile Random Forest probabilistic model.

Extends RANDOMFORESTConfig, inheriting random_state, n_jobs, and the full hyperparameter search space.

Variables:
  • name – Fixed to "qrrandomforest".

  • quantiles – Quantile levels to predict at inference time. Combined with confidence-interval bounds derived from conf_level.

  • conf_level – Significance level of the symmetric prediction interval; e.g. 0.1 yields 5th- and 95th-percentile bounds (a 90 % interval).
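The interplay of quantiles and conf_level described above (interval bounds merged into the requested quantile list) can be sketched as follows; expanded_quantiles is an illustrative helper, not a twiga API:

```python
def expanded_quantiles(quantiles: list[float], conf_level: float) -> list[float]:
    """Sorted quantile levels extended with the conf_level interval bounds."""
    bounds = {conf_level / 2, 1 - conf_level / 2}
    return sorted(set(quantiles) | bounds)

print(expanded_quantiles([0.5], 0.1))  # [0.05, 0.5, 0.95]
```

Deduplicating via a set means that requesting a quantile which coincides with an interval bound adds no extra model.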

conf_level: float | None#
model_config: ClassVar[ConfigDict] = {'extra': 'allow'}#

Configuration for the model; should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

name: Literal['qrrandomforest']#
quantiles: list[float] | None#
class twiga.models.ml.qrrandomforest_model.QRRANDOMFORESTModel(model_config=None)#

Bases: BaseQuantileRegressor

Quantile Random Forest for probabilistic multi-horizon forecasting.

Wraps RandomForestQuantileRegressor which trains a single forest and delivers any quantile at inference time without retraining. Monotonicity across quantiles is enforced post-hoc via isotonic regression, consistent with all other QR variants in this library.

The eval_set argument is accepted in fit() for API compatibility but silently ignored, since quantile random forests have no early stopping.

Parameters:

model_config (QRRANDOMFORESTConfig | None) – Configuration object. Defaults to QRRANDOMFORESTConfig.

Example:

model = QRRANDOMFORESTModel()
model.fit(X_train, y_train)
loc, quantiles = model.predict(X_test)
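The non-crossing repair mentioned above (isotonic regression across quantile levels) can be sketched per sample with scikit-learn's IsotonicRegression; this is a generic sketch of the technique, not twiga's actual implementation:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

levels = np.array([0.05, 0.5, 0.95])
# One row per sample; the 0.05 and 0.5 predictions cross in the first row.
raw = np.array([
    [1.2, 1.0, 1.5],
    [0.8, 1.1, 1.4],
])

# Project each row onto the nearest non-decreasing sequence over the levels.
iso = IsotonicRegression(increasing=True)
fixed = np.vstack([iso.fit_transform(levels, row) for row in raw])
print(fixed)
```

Isotonic regression pools adjacent violators, so the crossing pair (1.2, 1.0) is averaged to (1.1, 1.1) while the already-monotone second row is left unchanged.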
set_fit_request(*, eval_set='$UNCHANGED$', verbose='$UNCHANGED$')#

set_predict_request(*, sigma='$UNCHANGED$')#

set_score_request(*, sample_weight='$UNCHANGED$')#

These scikit-learn metadata-routing methods are documented in full under QRXGBOOSTModel above and behave identically here: each configures whether the named metadata (eval_set and verbose for fit, sigma for predict, sample_weight for score) is requested when this estimator is used as a sub-estimator with enable_metadata_routing=True, and each returns the updated estimator.