### Configuration tunables
-| Path | Default | Type / Range | Description |
-| -------------------------------------------------------------- | ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| _Protections_ | | | |
-| custom_protections.trade_duration_candles | 72 | int >= 1 | Estimated trade duration in candles. Scales protections stop duration candles and trade limit. |
-| custom_protections.lookback_period_fraction | 0.5 | float (0,1] | Fraction of `fit_live_predictions_candles` used to calculate `lookback_period_candles` for _MaxDrawdown_ and _StoplossGuard_ protections. |
-| custom_protections.cooldown.enabled | true | bool | Enable/disable _CooldownPeriod_ protection. |
-| custom_protections.cooldown.stop_duration_candles | 4 | int >= 1 | Number of candles to wait before allowing new trades after a trade is closed. |
-| custom_protections.drawdown.enabled | true | bool | Enable/disable _MaxDrawdown_ protection. |
-| custom_protections.drawdown.max_allowed_drawdown | 0.2 | float (0,1) | Maximum allowed drawdown. |
-| custom_protections.stoploss.enabled | true | bool | Enable/disable _StoplossGuard_ protection. |
-| _Leverage_ | | | |
-| leverage | `proposed_leverage` | float [1.0, max_leverage] | Leverage. Fallback to `proposed_leverage` for the pair. |
-| _Exit pricing_ | | | |
-| exit_pricing.trade_price_target_method | `moving_average` | enum {`moving_average`,`quantile_interpolation`,`weighted_average`} | Trade NATR computation method. (Deprecated alias: `exit_pricing.trade_price_target`) |
-| exit_pricing.thresholds_calibration.decline_quantile | 0.75 | float (0,1) | PnL decline quantile threshold. |
-| _Reversal confirmation_ | | | |
-| reversal_confirmation.lookback_period_candles | 0 | int >= 0 | Prior confirming candles; 0 = none. (Deprecated alias: `reversal_confirmation.lookback_period`) |
-| reversal_confirmation.decay_fraction | 0.5 | float (0,1] | Geometric per-candle volatility adjusted reversal threshold relaxation factor. (Deprecated alias: `reversal_confirmation.decay_ratio`) |
-| reversal_confirmation.min_natr_multiplier_fraction | 0.0095 | float [0,1] | Lower bound fraction for volatility adjusted reversal threshold. (Deprecated alias: `reversal_confirmation.min_natr_ratio_percent`) |
-| reversal_confirmation.max_natr_multiplier_fraction | 0.075 | float [0,1] | Upper bound fraction (>= lower bound) for volatility adjusted reversal threshold. (Deprecated alias: `reversal_confirmation.max_natr_ratio_percent`) |
-| _Regressor model_ | | | |
-| freqai.regressor | `xgboost` | enum {`xgboost`,`lightgbm`,`histgradientboostingregressor`} | Machine learning regressor algorithm. |
-| _Extrema smoothing_ | | | |
-| freqai.extrema_smoothing.method | `gaussian` | enum {`gaussian`,`kaiser`,`triang`,`smm`,`sma`,`savgol`,`gaussian_filter1d`} | Extrema smoothing method (`smm`=median, `sma`=mean, `savgol`=Savitzky–Golay). |
-| freqai.extrema_smoothing.window_candles | 5 | int >= 3 | Smoothing window length (candles). (Deprecated alias: `freqai.extrema_smoothing.window`) |
-| freqai.extrema_smoothing.beta | 8.0 | float > 0 | Shape parameter for `kaiser` kernel. |
-| freqai.extrema_smoothing.polyorder | 3 | int >= 1 | Polynomial order for `savgol` smoothing. |
-| freqai.extrema_smoothing.mode | `mirror` | enum {`mirror`,`constant`,`nearest`,`wrap`,`interp`} | Boundary mode for `savgol` and `gaussian_filter1d`. |
-| freqai.extrema_smoothing.sigma | 1.0 | float > 0 | Gaussian `sigma` for `gaussian_filter1d` smoothing. |
-| _Extrema weighting_ | | | |
-| freqai.extrema_weighting.strategy | `none` | enum {`none`,`amplitude`,`amplitude_threshold_ratio`,`volume_rate`,`speed`,`efficiency_ratio`,`volume_weighted_efficiency_ratio`} | Extrema weighting source: unweighted (`none`), swing amplitude (`amplitude`), swing amplitude / median volatility-threshold ratio (`amplitude_threshold_ratio`), swing volume per candle (`volume_rate`), swing speed (`speed`), swing efficiency ratio (`efficiency_ratio`), or swing volume-weighted efficiency ratio (`volume_weighted_efficiency_ratio`). |
-| freqai.extrema_weighting.standardization | `none` | enum {`none`,`zscore`,`robust`,`mmad`,`power_yj`} | Standardization method applied to smoothed weighted extrema before normalization. `none`=w, `zscore`=(w-μ)/σ, `robust`=(w-median)/IQR, `mmad`=(w-median)/(MAD·k), `power_yj`=YJ(w). |
-| freqai.extrema_weighting.robust_quantiles | [0.25, 0.75] | list[float] where 0 <= Q1 < Q3 <= 1 | Quantile range for robust standardization, Q1 and Q3. |
-| freqai.extrema_weighting.mmad_scaling_factor | 1.4826 | float > 0 | Scaling factor for MMAD standardization. |
-| freqai.extrema_weighting.normalization | `maxabs` | enum {`maxabs`,`minmax`,`sigmoid`,`none`} | Normalization method applied to smoothed weighted extrema. `maxabs`=w/max(\|w\|), `minmax`=low+(w-min)/(max-min)·(high-low), `sigmoid`=2·σ(scale·w)-1, `none`=w. |
-| freqai.extrema_weighting.minmax_range | [-1.0, 1.0] | list[float] | Target range for `minmax` normalization, min and max. |
-| freqai.extrema_weighting.sigmoid_scale | 1.0 | float > 0 | Scale parameter for `sigmoid` normalization, controls steepness. |
-| freqai.extrema_weighting.gamma | 1.0 | float (0,10] | Contrast exponent applied to smoothed weighted extrema after normalization: >1 emphasizes extrema, values between 0 and 1 soften. |
-| _Feature parameters_ | | | |
-| freqai.feature_parameters.label_period_candles | min/max midpoint | int >= 1 | Zigzag labeling NATR horizon. |
-| freqai.feature_parameters.min_label_period_candles | 12 | int >= 1 | Minimum labeling NATR horizon used for reversals labeling HPO. |
-| freqai.feature_parameters.max_label_period_candles | 24 | int >= 1 | Maximum labeling NATR horizon used for reversals labeling HPO. |
-| freqai.feature_parameters.label_natr_multiplier | min/max midpoint | float > 0 | Zigzag labeling NATR multiplier. (Deprecated alias: `freqai.feature_parameters.label_natr_ratio`) |
-| freqai.feature_parameters.min_label_natr_multiplier | 9.0 | float > 0 | Minimum labeling NATR multiplier used for reversals labeling HPO. (Deprecated alias: `freqai.feature_parameters.min_label_natr_ratio`) |
-| freqai.feature_parameters.max_label_natr_multiplier | 12.0 | float > 0 | Maximum labeling NATR multiplier used for reversals labeling HPO. (Deprecated alias: `freqai.feature_parameters.max_label_natr_ratio`) |
-| freqai.feature_parameters.label_frequency_candles | `auto` | int >= 2 \| `auto` | Reversals labeling frequency. `auto` = max(2, 2 \* number of whitelisted pairs). |
-| freqai.feature_parameters.label_weights | [1/7,1/7,1/7,1/7,1/7,1/7,1/7] | list[float] | Per-objective weights used in distance calculations to ideal point. Objectives: (1) number of detected reversals, (2) median swing amplitude, (3) median (swing amplitude / median volatility-threshold ratio), (4) median swing volume per candle, (5) median swing speed, (6) median swing efficiency ratio, (7) median swing volume-weighted efficiency ratio. |
-| freqai.feature_parameters.label_p_order | `None` | float \| None | p-order parameter for distance metrics. Used by minkowski (default 2.0) and power_mean (default 1.0). Ignored by other metrics. |
-| freqai.feature_parameters.label_method | `compromise_programming` | enum {`compromise_programming`,`topsis`,`kmeans`,`kmeans2`,`kmedoids`,`knn`,`medoid`} | HPO `label` Pareto front trial selection method. |
-| freqai.feature_parameters.label_distance_metric | `euclidean` | string | Distance metric for `compromise_programming` and `topsis` methods. |
-| freqai.feature_parameters.label_cluster_metric | `euclidean` | string | Distance metric for `kmeans`, `kmeans2`, and `kmedoids` methods. |
-| freqai.feature_parameters.label_cluster_selection_method | `topsis` | enum {`compromise_programming`,`topsis`} | Cluster selection method for clustering-based label methods. |
-| freqai.feature_parameters.label_cluster_trial_selection_method | `topsis` | enum {`compromise_programming`,`topsis`} | Best cluster trial selection method for clustering-based label methods. |
-| freqai.feature_parameters.label_density_metric | method-dependent | string | Distance metric for `knn` and `medoid` methods. |
-| freqai.feature_parameters.label_density_aggregation | `power_mean` | enum {`power_mean`,`quantile`,`min`,`max`} | Aggregation method for KNN neighbor distances. |
-| freqai.feature_parameters.label_density_n_neighbors | 5 | int >= 1 | Number of neighbors for KNN. |
-| freqai.feature_parameters.label_density_aggregation_param | aggregation-dependent | float \| None | Tunable for KNN neighbor distance aggregation: p-order (`power_mean`) or quantile value (`quantile`). |
-| _Predictions extrema_ | | | |
-| freqai.predictions_extrema.selection_method | `rank_extrema` | enum {`rank_extrema`,`rank_peaks`,`partition`} | Extrema selection method. `rank_extrema` ranks extrema values, `rank_peaks` ranks detected peak values, `partition` uses sign-based partitioning. |
-| freqai.predictions_extrema.threshold_smoothing_method | `mean` | enum {`mean`,`isodata`,`li`,`minimum`,`otsu`,`triangle`,`yen`,`median`,`soft_extremum`} | Thresholding method for prediction thresholds smoothing. (Deprecated alias: `freqai.predictions_extrema.thresholds_smoothing`) |
-| freqai.predictions_extrema.soft_extremum_alpha | 12.0 | float >= 0 | Alpha for `soft_extremum` thresholds smoothing. (Deprecated alias: `freqai.predictions_extrema.thresholds_alpha`) |
-| freqai.predictions_extrema.outlier_threshold_quantile | 0.999 | float (0,1) | Quantile threshold for predictions outlier filtering. (Deprecated alias: `freqai.predictions_extrema.threshold_outlier`) |
-| freqai.predictions_extrema.keep_extrema_fraction | 1.0 | float (0,1] | Fraction of extrema used for thresholds. `1.0` uses all, lower values keep only most significant. Applies to `rank_extrema` and `rank_peaks`; ignored for `partition`. (Deprecated alias: `freqai.predictions_extrema.extrema_fraction`) |
-| _Optuna / HPO_ | | | |
-| freqai.optuna_hyperopt.enabled | false | bool | Enables HPO. |
-| freqai.optuna_hyperopt.sampler | `tpe` | enum {`tpe`,`auto`} | HPO sampler algorithm for `hp` namespace. `tpe` uses [TPESampler](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.TPESampler.html) with multivariate and group, `auto` uses [AutoSampler](https://hub.optuna.org/samplers/auto_sampler). |
-| freqai.optuna_hyperopt.label_sampler | `auto` | enum {`auto`,`tpe`,`nsgaii`,`nsgaiii`} | HPO sampler algorithm for multi-objective `label` namespace. `nsgaii` uses [NSGAIISampler](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.NSGAIISampler.html), `nsgaiii` uses [NSGAIIISampler](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.NSGAIIISampler.html). |
-| freqai.optuna_hyperopt.storage | `file` | enum {`file`,`sqlite`} | HPO storage backend. |
-| freqai.optuna_hyperopt.continuous | true | bool | Continuous HPO. |
-| freqai.optuna_hyperopt.warm_start | true | bool | Warm start HPO with previous best value(s). |
-| freqai.optuna_hyperopt.n_startup_trials | 15 | int >= 0 | HPO startup trials. |
-| freqai.optuna_hyperopt.n_trials | 50 | int >= 1 | Maximum HPO trials. |
-| freqai.optuna_hyperopt.n_jobs | CPU threads / 4 | int >= 1 | Parallel HPO workers. |
-| freqai.optuna_hyperopt.timeout | 7200 | int >= 0 | HPO wall-clock timeout in seconds. |
-| freqai.optuna_hyperopt.label_candles_step | 1 | int >= 1 | Step for Zigzag NATR horizon `label` search space. |
-| freqai.optuna_hyperopt.space_reduction | false | bool | Enable/disable `hp` search space reduction based on previous best parameters. |
-| freqai.optuna_hyperopt.space_fraction | 0.4 | float [0,1] | Fraction of the `hp` search space to use with `space_reduction`. Lower values create narrower search ranges around the best parameters. (Deprecated alias: `freqai.optuna_hyperopt.expansion_ratio`) |
-| freqai.optuna_hyperopt.min_resource | 3 | int >= 1 | Minimum resource per [HyperbandPruner](https://optuna.readthedocs.io/en/stable/reference/generated/optuna.pruners.HyperbandPruner.html) rung. |
-| freqai.optuna_hyperopt.seed | 1 | int >= 0 | HPO RNG seed. |
+| Path | Default | Type / Range | Description |
+| -------------------------------------------------------------- | ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| _Protections_ | | | |
+| custom_protections.trade_duration_candles                      | 72                            | int >= 1                                                                                                                                     | Estimated trade duration in candles. Scales the protections' stop duration (in candles) and trade limit.                                                                                                                                                                                                                                                                                                   |
+| custom_protections.lookback_period_fraction | 0.5 | float (0,1] | Fraction of `fit_live_predictions_candles` used to calculate `lookback_period_candles` for _MaxDrawdown_ and _StoplossGuard_ protections. |
+| custom_protections.cooldown.enabled | true | bool | Enable/disable _CooldownPeriod_ protection. |
+| custom_protections.cooldown.stop_duration_candles | 4 | int >= 1 | Number of candles to wait before allowing new trades after a trade is closed. |
+| custom_protections.drawdown.enabled | true | bool | Enable/disable _MaxDrawdown_ protection. |
+| custom_protections.drawdown.max_allowed_drawdown | 0.2 | float (0,1) | Maximum allowed drawdown. |
+| custom_protections.stoploss.enabled | true | bool | Enable/disable _StoplossGuard_ protection. |
+| _Leverage_ | | | |
+| leverage | `proposed_leverage` | float [1.0, max_leverage] | Leverage. Fallback to `proposed_leverage` for the pair. |
+| _Exit pricing_ | | | |
+| exit_pricing.trade_price_target_method | `moving_average` | enum {`moving_average`,`quantile_interpolation`,`weighted_average`} | Trade NATR computation method. (Deprecated alias: `exit_pricing.trade_price_target`) |
+| exit_pricing.thresholds_calibration.decline_quantile | 0.75 | float (0,1) | PnL decline quantile threshold. |
+| _Reversal confirmation_ | | | |
+| reversal_confirmation.lookback_period_candles | 0 | int >= 0 | Prior confirming candles; 0 = none. (Deprecated alias: `reversal_confirmation.lookback_period`) |
+| reversal_confirmation.decay_fraction | 0.5 | float (0,1] | Geometric per-candle volatility adjusted reversal threshold relaxation factor. (Deprecated alias: `reversal_confirmation.decay_ratio`) |
+| reversal_confirmation.min_natr_multiplier_fraction | 0.0095 | float [0,1] | Lower bound fraction for volatility adjusted reversal threshold. (Deprecated alias: `reversal_confirmation.min_natr_ratio_percent`) |
+| reversal_confirmation.max_natr_multiplier_fraction | 0.075 | float [0,1] | Upper bound fraction (>= lower bound) for volatility adjusted reversal threshold. (Deprecated alias: `reversal_confirmation.max_natr_ratio_percent`) |
+| _Regressor model_ | | | |
+| freqai.regressor | `xgboost` | enum {`xgboost`,`lightgbm`,`histgradientboostingregressor`} | Machine learning regressor algorithm. |
+| _Extrema smoothing_ | | | |
+| freqai.extrema_smoothing.method | `gaussian` | enum {`gaussian`,`kaiser`,`triang`,`smm`,`sma`,`savgol`,`gaussian_filter1d`} | Extrema smoothing method (`smm`=median, `sma`=mean, `savgol`=Savitzky–Golay). |
+| freqai.extrema_smoothing.window_candles | 5 | int >= 3 | Smoothing window length (candles). (Deprecated alias: `freqai.extrema_smoothing.window`) |
+| freqai.extrema_smoothing.beta | 8.0 | float > 0 | Shape parameter for `kaiser` kernel. |
+| freqai.extrema_smoothing.polyorder | 3 | int >= 1 | Polynomial order for `savgol` smoothing. |
+| freqai.extrema_smoothing.mode | `mirror` | enum {`mirror`,`constant`,`nearest`,`wrap`,`interp`} | Boundary mode for `savgol` and `gaussian_filter1d`. |
+| freqai.extrema_smoothing.sigma | 1.0 | float > 0 | Gaussian `sigma` for `gaussian_filter1d` smoothing. |
+| _Extrema weighting_ | | | |
+| freqai.extrema_weighting.strategy                              | `none`                        | enum {`none`,`amplitude`,`amplitude_threshold_ratio`,`volume_rate`,`speed`,`efficiency_ratio`,`volume_weighted_efficiency_ratio`,`combined`} | Extrema weighting source: unweighted (`none`), swing amplitude (`amplitude`), swing amplitude / median volatility-threshold ratio (`amplitude_threshold_ratio`), swing volume per candle (`volume_rate`), swing speed (`speed`), swing efficiency ratio (`efficiency_ratio`), swing volume-weighted efficiency ratio (`volume_weighted_efficiency_ratio`), or an aggregation of the preceding metrics (`combined`). |
+| freqai.extrema_weighting.metric_coefficients                   | {}                            | dict[str, float]                                                                                                                             | Per-metric coefficients for the `combined` strategy. Keys: `amplitude`, `amplitude_threshold_ratio`, `volume_rate`, `speed`, `efficiency_ratio`, `volume_weighted_efficiency_ratio`. Metrics without a finite, strictly positive coefficient are skipped; an empty mapping weights all metrics equally.                                                                                                      |
+| freqai.extrema_weighting.aggregation | `weighted_average` | enum {`weighted_average`,`geometric_mean`} | Metric aggregation method for `combined` strategy. `weighted_average`=Σ(coef·metric)/Σ(coef), `geometric_mean`=∏(metric^coef)^(1/Σcoef). |
+| freqai.extrema_weighting.standardization | `none` | enum {`none`,`zscore`,`robust`,`mmad`,`power_yj`} | Standardization method applied to smoothed weighted extrema before normalization. `none`=w, `zscore`=(w-μ)/σ, `robust`=(w-median)/IQR, `mmad`=(w-median)/(MAD·k), `power_yj`=YJ(w). |
+| freqai.extrema_weighting.robust_quantiles | [0.25, 0.75] | list[float] where 0 <= Q1 < Q3 <= 1 | Quantile range for robust standardization, Q1 and Q3. |
+| freqai.extrema_weighting.mmad_scaling_factor | 1.4826 | float > 0 | Scaling factor for MMAD standardization. |
+| freqai.extrema_weighting.normalization | `maxabs` | enum {`maxabs`,`minmax`,`sigmoid`,`none`} | Normalization method applied to smoothed weighted extrema. `maxabs`=w/max(\|w\|), `minmax`=low+(w-min)/(max-min)·(high-low), `sigmoid`=2·σ(scale·w)-1, `none`=w. |
+| freqai.extrema_weighting.minmax_range | [-1.0, 1.0] | list[float] | Target range for `minmax` normalization, min and max. |
+| freqai.extrema_weighting.sigmoid_scale | 1.0 | float > 0 | Scale parameter for `sigmoid` normalization, controls steepness. |
+| freqai.extrema_weighting.gamma | 1.0 | float (0,10] | Contrast exponent applied to smoothed weighted extrema after normalization: >1 emphasizes extrema, values between 0 and 1 soften. |
+| _Feature parameters_ | | | |
+| freqai.feature_parameters.label_period_candles | min/max midpoint | int >= 1 | Zigzag labeling NATR horizon. |
+| freqai.feature_parameters.min_label_period_candles | 12 | int >= 1 | Minimum labeling NATR horizon used for reversals labeling HPO. |
+| freqai.feature_parameters.max_label_period_candles | 24 | int >= 1 | Maximum labeling NATR horizon used for reversals labeling HPO. |
+| freqai.feature_parameters.label_natr_multiplier | min/max midpoint | float > 0 | Zigzag labeling NATR multiplier. (Deprecated alias: `freqai.feature_parameters.label_natr_ratio`) |
+| freqai.feature_parameters.min_label_natr_multiplier | 9.0 | float > 0 | Minimum labeling NATR multiplier used for reversals labeling HPO. (Deprecated alias: `freqai.feature_parameters.min_label_natr_ratio`) |
+| freqai.feature_parameters.max_label_natr_multiplier | 12.0 | float > 0 | Maximum labeling NATR multiplier used for reversals labeling HPO. (Deprecated alias: `freqai.feature_parameters.max_label_natr_ratio`) |
+| freqai.feature_parameters.label_frequency_candles | `auto` | int >= 2 \| `auto` | Reversals labeling frequency. `auto` = max(2, 2 \* number of whitelisted pairs). |
+| freqai.feature_parameters.label_weights                        | [1/7,1/7,1/7,1/7,1/7,1/7,1/7] | list[float]                                                                                                                                  | Per-objective weights used in distance calculations to the ideal point. Objectives: (1) number of detected reversals, (2) median swing amplitude, (3) median (swing amplitude / median volatility-threshold ratio), (4) median swing volume per candle, (5) median swing speed, (6) median swing efficiency ratio, (7) median swing volume-weighted efficiency ratio.                                         |
+| freqai.feature_parameters.label_p_order | `None` | float \| None | p-order parameter for distance metrics. Used by minkowski (default 2.0) and power_mean (default 1.0). Ignored by other metrics. |
+| freqai.feature_parameters.label_method | `compromise_programming` | enum {`compromise_programming`,`topsis`,`kmeans`,`kmeans2`,`kmedoids`,`knn`,`medoid`} | HPO `label` Pareto front trial selection method. |
+| freqai.feature_parameters.label_distance_metric | `euclidean` | string | Distance metric for `compromise_programming` and `topsis` methods. |
+| freqai.feature_parameters.label_cluster_metric | `euclidean` | string | Distance metric for `kmeans`, `kmeans2`, and `kmedoids` methods. |
+| freqai.feature_parameters.label_cluster_selection_method | `topsis` | enum {`compromise_programming`,`topsis`} | Cluster selection method for clustering-based label methods. |
+| freqai.feature_parameters.label_cluster_trial_selection_method | `topsis` | enum {`compromise_programming`,`topsis`} | Best cluster trial selection method for clustering-based label methods. |
+| freqai.feature_parameters.label_density_metric | method-dependent | string | Distance metric for `knn` and `medoid` methods. |
+| freqai.feature_parameters.label_density_aggregation | `power_mean` | enum {`power_mean`,`quantile`,`min`,`max`} | Aggregation method for KNN neighbor distances. |
+| freqai.feature_parameters.label_density_n_neighbors | 5 | int >= 1 | Number of neighbors for KNN. |
+| freqai.feature_parameters.label_density_aggregation_param | aggregation-dependent | float \| None | Tunable for KNN neighbor distance aggregation: p-order (`power_mean`) or quantile value (`quantile`). |
+| _Predictions extrema_ | | | |
+| freqai.predictions_extrema.selection_method | `rank_extrema` | enum {`rank_extrema`,`rank_peaks`,`partition`} | Extrema selection method. `rank_extrema` ranks extrema values, `rank_peaks` ranks detected peak values, `partition` uses sign-based partitioning. |
+| freqai.predictions_extrema.threshold_smoothing_method | `mean` | enum {`mean`,`isodata`,`li`,`minimum`,`otsu`,`triangle`,`yen`,`median`,`soft_extremum`} | Thresholding method for prediction thresholds smoothing. (Deprecated alias: `freqai.predictions_extrema.thresholds_smoothing`) |
+| freqai.predictions_extrema.soft_extremum_alpha | 12.0 | float >= 0 | Alpha for `soft_extremum` thresholds smoothing. (Deprecated alias: `freqai.predictions_extrema.thresholds_alpha`) |
+| freqai.predictions_extrema.outlier_threshold_quantile | 0.999 | float (0,1) | Quantile threshold for predictions outlier filtering. (Deprecated alias: `freqai.predictions_extrema.threshold_outlier`) |
+| freqai.predictions_extrema.keep_extrema_fraction | 1.0 | float (0,1] | Fraction of extrema used for thresholds. `1.0` uses all, lower values keep only most significant. Applies to `rank_extrema` and `rank_peaks`; ignored for `partition`. (Deprecated alias: `freqai.predictions_extrema.extrema_fraction`) |
+| _Optuna / HPO_ | | | |
+| freqai.optuna_hyperopt.enabled                                 | false                         | bool                                                                                                                                         | Enable/disable HPO.                                                                                                                                                                                                                                                                                                                                                                                        |
+| freqai.optuna_hyperopt.sampler | `tpe` | enum {`tpe`,`auto`} | HPO sampler algorithm for `hp` namespace. `tpe` uses [TPESampler](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.TPESampler.html) with multivariate and group, `auto` uses [AutoSampler](https://hub.optuna.org/samplers/auto_sampler). |
+| freqai.optuna_hyperopt.label_sampler | `auto` | enum {`auto`,`tpe`,`nsgaii`,`nsgaiii`} | HPO sampler algorithm for multi-objective `label` namespace. `nsgaii` uses [NSGAIISampler](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.NSGAIISampler.html), `nsgaiii` uses [NSGAIIISampler](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.NSGAIIISampler.html). |
+| freqai.optuna_hyperopt.storage | `file` | enum {`file`,`sqlite`} | HPO storage backend. |
+| freqai.optuna_hyperopt.continuous                              | true                          | bool                                                                                                                                         | Enable/disable continuous HPO.                                                                                                                                                                                                                                                                                                                                                                             |
+| freqai.optuna_hyperopt.warm_start | true | bool | Warm start HPO with previous best value(s). |
+| freqai.optuna_hyperopt.n_startup_trials | 15 | int >= 0 | HPO startup trials. |
+| freqai.optuna_hyperopt.n_trials | 50 | int >= 1 | Maximum HPO trials. |
+| freqai.optuna_hyperopt.n_jobs | CPU threads / 4 | int >= 1 | Parallel HPO workers. |
+| freqai.optuna_hyperopt.timeout | 7200 | int >= 0 | HPO wall-clock timeout in seconds. |
+| freqai.optuna_hyperopt.label_candles_step                      | 1                             | int >= 1                                                                                                                                     | Step size for the Zigzag NATR horizon `label` search space.                                                                                                                                                                                                                                                                                                                                                |
+| freqai.optuna_hyperopt.space_reduction | false | bool | Enable/disable `hp` search space reduction based on previous best parameters. |
+| freqai.optuna_hyperopt.space_fraction | 0.4 | float [0,1] | Fraction of the `hp` search space to use with `space_reduction`. Lower values create narrower search ranges around the best parameters. (Deprecated alias: `freqai.optuna_hyperopt.expansion_ratio`) |
+| freqai.optuna_hyperopt.min_resource | 3 | int >= 1 | Minimum resource per [HyperbandPruner](https://optuna.readthedocs.io/en/stable/reference/generated/optuna.pruners.HyperbandPruner.html) rung. |
+| freqai.optuna_hyperopt.seed | 1 | int >= 0 | HPO RNG seed. |
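As a minimal sketch of the two `combined` aggregation formulas above (the config fragment and all numeric values are hypothetical; only `np.average` and `scipy.stats.gmean` are assumed from NumPy/SciPy):

```python
import numpy as np
from scipy.stats import gmean

# Hypothetical `extrema_weighting` fragment enabling the `combined` strategy.
extrema_weighting = {
    "strategy": "combined",
    "metric_coefficients": {"amplitude": 2.0, "speed": 1.0, "efficiency_ratio": 1.0},
    "aggregation": "weighted_average",
}

# Illustrative per-extremum metric values, in the same order as the coefficients.
metrics = np.array([0.8, 1.2, 0.5])
coefficients = np.array([2.0, 1.0, 1.0])

# weighted_average = sum(coef * metric) / sum(coef)
weighted_average = np.average(metrics, weights=coefficients)  # 0.825

# geometric_mean = prod(metric ** coef) ** (1 / sum(coef))
geometric_mean = gmean(metrics, weights=coefficients)  # (0.8**2 * 1.2 * 0.5) ** 0.25
```

The geometric mean penalizes extrema that score near zero on any weighted metric, whereas the weighted average lets a strong metric compensate for a weak one.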
## ReforceXY
import scipy as sp
import talib.abstract as ta
from ExtremaWeightingTransformer import (
+    COMBINED_AGGREGATIONS,
+    COMBINED_METRICS,
    DEFAULTS_EXTREMA_WEIGHTING,
    NORMALIZATION_TYPES,
    STANDARDIZATION_TYPES,
    WEIGHT_STRATEGIES,
-    WeightStrategy,
+    CombinedAggregation,
+    CombinedMetric,
)
from numpy.typing import NDArray
from scipy.ndimage import gaussian_filter1d
    "kaiser",
    "triang",
)
+
SmoothingMethod = Union[
    SmoothingKernel, Literal["smm", "sma", "savgol", "gaussian_filter1d"]
]
            f"Invalid extrema_weighting strategy {strategy!r}, supported: {', '.join(WEIGHT_STRATEGIES)}, using default {WEIGHT_STRATEGIES[0]!r}"
        )
        strategy = WEIGHT_STRATEGIES[0]
+    metric_coefficients = extrema_weighting.get(
+        "metric_coefficients", DEFAULTS_EXTREMA_WEIGHTING["metric_coefficients"]
+    )
+    if not isinstance(metric_coefficients, dict):
+        logger.warning(
+            f"Invalid extrema_weighting metric_coefficients {metric_coefficients!r}: must be a mapping, using default {DEFAULTS_EXTREMA_WEIGHTING['metric_coefficients']!r}"
+        )
+        metric_coefficients = DEFAULTS_EXTREMA_WEIGHTING["metric_coefficients"]
+    elif invalid_keys := set(metric_coefficients.keys()) - set(COMBINED_METRICS):
+        logger.warning(
+            f"Invalid extrema_weighting metric_coefficients keys {sorted(invalid_keys)!r}, valid keys: {', '.join(COMBINED_METRICS)}"
+        )
+        # Drop the invalid keys directly instead of rebuilding the valid set per item.
+        metric_coefficients = {
+            k: v for k, v in metric_coefficients.items() if k not in invalid_keys
+        }
+
+    aggregation: CombinedAggregation = extrema_weighting.get(
+        "aggregation", DEFAULTS_EXTREMA_WEIGHTING["aggregation"]
+    )
+    if aggregation not in COMBINED_AGGREGATIONS:
+        logger.warning(
+            f"Invalid extrema_weighting aggregation {aggregation!r}, supported: {', '.join(COMBINED_AGGREGATIONS)}, using default {DEFAULTS_EXTREMA_WEIGHTING['aggregation']!r}"
+        )
+        aggregation = DEFAULTS_EXTREMA_WEIGHTING["aggregation"]
    # Phase 1: Standardization
    standardization = extrema_weighting.get(
    return {
        "strategy": strategy,
+        "metric_coefficients": metric_coefficients,
+        "aggregation": aggregation,
        # Phase 1: Standardization
        "standardization": standardization,
        "robust_quantiles": robust_quantiles,
    return weights_array
+def _parse_metric_coefficients(
+    metric_coefficients: dict[str, Any],
+) -> dict[CombinedMetric, float]:
+    out: dict[CombinedMetric, float] = {}
+    for metric in COMBINED_METRICS:
+        value = metric_coefficients.get(metric)
+        # Keep only finite, strictly positive numeric coefficients.
+        if not isinstance(value, (int, float)):
+            continue
+        if not np.isfinite(value) or value <= 0:
+            continue
+        out[metric] = float(value)
+
+    return out
+
+
+def _aggregate_metrics(
+    stacked_metrics: NDArray[np.floating],
+    coefficients: NDArray[np.floating],
+    aggregation: CombinedAggregation,
+) -> NDArray[np.floating]:
+    if aggregation == COMBINED_AGGREGATIONS[0]:  # "weighted_average"
+        return np.average(stacked_metrics, axis=0, weights=coefficients)
+    elif aggregation == COMBINED_AGGREGATIONS[1]:  # "geometric_mean"
+        return np.asarray(
+            sp.stats.gmean(stacked_metrics.T, weights=coefficients, axis=1),
+            dtype=float,
+        )
+    else:
+        raise ValueError(
+            f"Invalid aggregation {aggregation!r}. Supported: {', '.join(COMBINED_AGGREGATIONS)}"
+        )
+
+
+def _compute_combined_weights(
+    indices: list[int],
+    amplitudes: list[float],
+    amplitude_threshold_ratios: list[float],
+    volume_rates: list[float],
+    speeds: list[float],
+    efficiency_ratios: list[float],
+    volume_weighted_efficiency_ratios: list[float],
+    metric_coefficients: dict[str, Any],
+    aggregation: CombinedAggregation,
+) -> NDArray[np.floating]:
+    if len(indices) == 0:
+        return np.asarray([], dtype=float)
+
+    coefficients = _parse_metric_coefficients(metric_coefficients)
+    if len(coefficients) == 0:
+        # Fall back to equal weighting of all metrics when no valid coefficient is given.
+        coefficients = dict.fromkeys(COMBINED_METRICS, DEFAULT_EXTREMA_WEIGHT)
+
+    metrics: dict[CombinedMetric, NDArray[np.floating]] = {
+        "amplitude": np.asarray(amplitudes, dtype=float),
+        "amplitude_threshold_ratio": np.asarray(
+            amplitude_threshold_ratios, dtype=float
+        ),
+        "volume_rate": np.asarray(volume_rates, dtype=float),
+        "speed": np.asarray(speeds, dtype=float),
+        "efficiency_ratio": np.asarray(efficiency_ratios, dtype=float),
+        "volume_weighted_efficiency_ratio": np.asarray(
+            volume_weighted_efficiency_ratios, dtype=float
+        ),
+    }
+
+    imputed_metrics: list[NDArray[np.floating]] = []
+    coefficients_list: list[float] = []
+
+    for metric_name in COMBINED_METRICS:
+        if metric_name not in coefficients:
+            continue
+        coefficient = coefficients[metric_name]
+        metric_values = metrics[metric_name]
+        if metric_values.size == 0:
+            continue
+        imputed_metrics.append(_impute_weights(weights=metric_values))
+        coefficients_list.append(float(coefficient))
+
+    if len(imputed_metrics) == 0:
+        return np.asarray([], dtype=float)
+
+    stacked_metrics = np.vstack(imputed_metrics)
+    coefficients_array = np.asarray(coefficients_list, dtype=float)
+
+    return _aggregate_metrics(stacked_metrics, coefficients_array, aggregation)
+
+
def compute_extrema_weights(
    n_extrema: int,
    indices: list[int],
    speeds: list[float],
    efficiency_ratios: list[float],
    volume_weighted_efficiency_ratios: list[float],
-    strategy: WeightStrategy = DEFAULTS_EXTREMA_WEIGHTING["strategy"],
+    extrema_weighting: dict[str, Any],
) -> NDArray[np.floating]:
+    extrema_weighting = {**DEFAULTS_EXTREMA_WEIGHTING, **extrema_weighting}
+    strategy = extrema_weighting["strategy"]
+
    if len(indices) == 0 or strategy == WEIGHT_STRATEGIES[0]:  # "none"
        return np.full(n_extrema, DEFAULT_EXTREMA_WEIGHT, dtype=float)
    weights: Optional[NDArray[np.floating]] = None
-    if (
-        strategy
-        in {
-            WEIGHT_STRATEGIES[1],
-            WEIGHT_STRATEGIES[2],
-            WEIGHT_STRATEGIES[3],
-            WEIGHT_STRATEGIES[4],
-            WEIGHT_STRATEGIES[5],
-            WEIGHT_STRATEGIES[6],
-        }
-    ):  # "amplitude" / "amplitude_threshold_ratio" / "volume_rate" / "speed" / "efficiency_ratio" / "volume_weighted_efficiency_ratio"
-        if strategy == WEIGHT_STRATEGIES[1]:  # "amplitude"
-            weights = np.asarray(amplitudes, dtype=float)
-        elif strategy == WEIGHT_STRATEGIES[2]:  # "amplitude_threshold_ratio"
-            weights = np.asarray(amplitude_threshold_ratios, dtype=float)
-        elif strategy == WEIGHT_STRATEGIES[3]:  # "volume_rate"
-            weights = np.asarray(volume_rates, dtype=float)
-        elif strategy == WEIGHT_STRATEGIES[4]:  # "speed"
-            weights = np.asarray(speeds, dtype=float)
-        elif strategy == WEIGHT_STRATEGIES[5]:  # "efficiency_ratio"
-            weights = np.asarray(efficiency_ratios, dtype=float)
-        elif strategy == WEIGHT_STRATEGIES[6]:  # "volume_weighted_efficiency_ratio"
-            weights = np.asarray(volume_weighted_efficiency_ratios, dtype=float)
-        else:
-            weights = np.asarray([], dtype=float)
-
-        if weights.size == 0:
-            return np.full(n_extrema, DEFAULT_EXTREMA_WEIGHT, dtype=float)
-
-        weights = _impute_weights(
-            weights=weights,
+ if strategy == WEIGHT_STRATEGIES[1]: # "amplitude"
+ weights = np.asarray(amplitudes, dtype=float)
+ elif strategy == WEIGHT_STRATEGIES[2]: # "amplitude_threshold_ratio"
+ weights = np.asarray(amplitude_threshold_ratios, dtype=float)
+ elif strategy == WEIGHT_STRATEGIES[3]: # "volume_rate"
+ weights = np.asarray(volume_rates, dtype=float)
+ elif strategy == WEIGHT_STRATEGIES[4]: # "speed"
+ weights = np.asarray(speeds, dtype=float)
+ elif strategy == WEIGHT_STRATEGIES[5]: # "efficiency_ratio"
+ weights = np.asarray(efficiency_ratios, dtype=float)
+ elif strategy == WEIGHT_STRATEGIES[6]: # "volume_weighted_efficiency_ratio"
+ weights = np.asarray(volume_weighted_efficiency_ratios, dtype=float)
+ elif strategy == WEIGHT_STRATEGIES[7]: # "combined"
+ weights = _compute_combined_weights(
+ indices=indices,
+ amplitudes=amplitudes,
+ amplitude_threshold_ratios=amplitude_threshold_ratios,
+ volume_rates=volume_rates,
+ speeds=speeds,
+ efficiency_ratios=efficiency_ratios,
+ volume_weighted_efficiency_ratios=volume_weighted_efficiency_ratios,
+ metric_coefficients=extrema_weighting["metric_coefficients"],
+ aggregation=extrema_weighting["aggregation"],
)
- if weights is not None:
- if weights.size == 0:
- return np.full(n_extrema, DEFAULT_EXTREMA_WEIGHT, dtype=float)
-
- return _build_weights_array(
- n_extrema=n_extrema,
- indices=indices,
- weights=weights,
- default_weight=np.nanmedian(weights),
+ else:
+ raise ValueError(
+ f"Invalid extrema weighting strategy {strategy!r}. "
+ f"Supported: {', '.join(WEIGHT_STRATEGIES)}"
)
- raise ValueError(
- f"Invalid extrema weighting strategy {strategy!r}. "
- f"Supported: {', '.join(WEIGHT_STRATEGIES)}"
+    if weights.size == 0:
+        return np.full(n_extrema, DEFAULT_EXTREMA_WEIGHT, dtype=float)
+
+    weights = _impute_weights(
+        weights=weights,
+    )
+
+ return _build_weights_array(
+ n_extrema=n_extrema,
+ indices=indices,
+ weights=weights,
+ default_weight=float(np.nanmedian(weights)),
)
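A side note on the new `extrema_weighting` dict parameter: the merge on the first line of the function relies on later mappings winning in a dict literal, so caller-supplied keys override the defaults while missing keys fall back. A minimal standalone sketch, with hypothetical default values (the real `DEFAULTS_EXTREMA_WEIGHTING` keys and values are not shown in this diff):

```python
from typing import Any

# Hypothetical stand-in for DEFAULTS_EXTREMA_WEIGHTING; real keys/values may differ.
DEFAULTS_EXTREMA_WEIGHTING: dict[str, Any] = {
    "strategy": "none",
    "aggregation": "mean",
    "metric_coefficients": {},
}


def resolve_extrema_weighting(overrides: dict[str, Any]) -> dict[str, Any]:
    # In {**a, **b}, keys from b win on conflict: overrides take
    # precedence, and any key absent from overrides keeps its default.
    return {**DEFAULTS_EXTREMA_WEIGHTING, **overrides}
```

Callers can then pass a partial config such as `{"strategy": "combined"}` and still inherit `aggregation` and `metric_coefficients` from the defaults.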
speeds: list[float],
efficiency_ratios: list[float],
volume_weighted_efficiency_ratios: list[float],
- strategy: WeightStrategy = DEFAULTS_EXTREMA_WEIGHTING["strategy"],
+ extrema_weighting: dict[str, Any],
) -> tuple[pd.Series, pd.Series]:
extrema_values = extrema.to_numpy(dtype=float)
extrema_index = extrema.index
speeds=speeds,
efficiency_ratios=efficiency_ratios,
volume_weighted_efficiency_ratios=volume_weighted_efficiency_ratios,
- strategy=strategy,
+ extrema_weighting=extrema_weighting,
)
return pd.Series(
return np.nan, np.nan
amplitude = abs(current_value - previous_value) / abs(previous_value)
+ if not (np.isfinite(amplitude) and amplitude >= 0):
+ return np.nan, np.nan
start_pos = min(previous_pos, current_pos)
end_pos = max(previous_pos, current_pos) + 1
median_threshold = np.nanmedian(thresholds[start_pos:end_pos])
- if (
- np.isfinite(median_threshold)
- and median_threshold > 0
- and np.isfinite(amplitude)
- ):
- amplitude_threshold_ratio = amplitude / median_threshold
- else:
- amplitude_threshold_ratio = np.nan
+ amplitude_threshold_ratio = (
+ amplitude / (amplitude + median_threshold)
+ if np.isfinite(median_threshold) and median_threshold > 0
+ else np.nan
+ )
- return amplitude, amplitude_threshold_ratio
+ return amplitude / (1.0 + amplitude), amplitude_threshold_ratio
def calculate_pivot_duration(
*,
previous_pos=previous_pos,
current_pos=current_pos,
)
-
if not np.isfinite(duration) or duration == 0:
return np.nan
start_pos = min(previous_pos, current_pos)
end_pos = max(previous_pos, current_pos) + 1
- total_volume = np.nansum(volumes[start_pos:end_pos])
- return total_volume / duration
+ avg_volume_per_candle = np.nansum(volumes[start_pos:end_pos]) / duration
+ median_volume = np.nanmedian(volumes[start_pos:end_pos])
+ if (
+ np.isfinite(avg_volume_per_candle)
+ and avg_volume_per_candle >= 0
+ and np.isfinite(median_volume)
+ and median_volume > 0
+ ):
+ return avg_volume_per_candle / (avg_volume_per_candle + median_volume)
+ return np.nan
def calculate_pivot_speed(
*,
previous_pos: int,
+ previous_value: float,
current_pos: int,
- amplitude: float,
+ current_value: float,
) -> float:
if previous_pos < 0 or current_pos < 0:
return np.nan
if previous_pos >= n or current_pos >= n:
return np.nan
- if not np.isfinite(amplitude):
+
+ if np.isclose(previous_value, 0.0):
return np.nan
duration = calculate_pivot_duration(
previous_pos=previous_pos,
current_pos=current_pos,
)
-
if not np.isfinite(duration) or duration == 0:
return np.nan
- return amplitude / duration
+ amplitude = abs(current_value - previous_value) / abs(previous_value)
+ if not (np.isfinite(amplitude) and amplitude >= 0):
+ return np.nan
+
+ speed = amplitude / duration
+ return speed / (1.0 + speed) if np.isfinite(speed) and speed >= 0 else np.nan
def calculate_pivot_efficiency_ratio(
*,
)
speed = calculate_pivot_speed(
previous_pos=last_pivot_pos,
+ previous_value=pivots_values[-1],
current_pos=pos,
- amplitude=amplitude,
+ current_value=value,
)
efficiency_ratio = calculate_pivot_efficiency_ratio(
previous_pos=last_pivot_pos,
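The amplitude, amplitude-threshold-ratio, volume-rate, and speed changes in this diff share one idea: replace unbounded raw metrics with ratios bounded in [0, 1). A minimal standalone sketch of the two normalization forms used (function names are illustrative, not from the codebase):

```python
import math


def saturating_ratio(x: float) -> float:
    # x / (1 + x): maps [0, inf) into [0, 1); 0 -> 0, 1 -> 0.5, large x -> ~1.
    # Used for the normalized amplitude and speed returns.
    if math.isfinite(x) and x >= 0:
        return x / (1.0 + x)
    return float("nan")


def relative_share(a: float, b: float) -> float:
    # a / (a + b): a's share of the combined magnitude, in [0, 1) for
    # a >= 0, b > 0. Used for amplitude vs. median threshold and for
    # average volume per candle vs. median volume.
    if math.isfinite(a) and a >= 0 and math.isfinite(b) and b > 0:
        return a / (a + b)
    return float("nan")
```

Bounding the metrics this way keeps heterogeneous inputs on a comparable scale before they are imputed and combined downstream.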