### Configuration tunables
-| Path | Default | Type / Range | Description |
-| -------------------------------------------------------------- | ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| _Protections_ | | | |
-| custom_protections.trade_duration_candles | 72 | int >= 1 | Estimated trade duration in candles. Scales protections stop duration candles and trade limit. |
-| custom_protections.lookback_period_fraction | 0.5 | float (0,1] | Fraction of `fit_live_predictions_candles` used to calculate `lookback_period_candles` for _MaxDrawdown_ and _StoplossGuard_ protections. |
-| custom_protections.cooldown.enabled | true | bool | Enable/disable _CooldownPeriod_ protection. |
-| custom_protections.cooldown.stop_duration_candles | 4 | int >= 1 | Number of candles to wait before allowing new trades after a trade is closed. |
-| custom_protections.drawdown.enabled | true | bool | Enable/disable _MaxDrawdown_ protection. |
-| custom_protections.drawdown.max_allowed_drawdown | 0.2 | float (0,1) | Maximum allowed drawdown. |
-| custom_protections.stoploss.enabled | true | bool | Enable/disable _StoplossGuard_ protection. |
-| _Leverage_ | | | |
-| leverage | `proposed_leverage` | float [1.0, max_leverage] | Leverage. Fallback to `proposed_leverage` for the pair. |
-| _Exit pricing_ | | | |
-| exit_pricing.trade_price_target_method | `moving_average` | enum {`moving_average`,`quantile_interpolation`,`weighted_average`} | Trade NATR computation method. (Deprecated alias: `exit_pricing.trade_price_target`) |
-| exit_pricing.thresholds_calibration.decline_quantile | 0.75 | float (0,1) | PnL decline quantile threshold. |
-| _Reversal confirmation_ | | | |
-| reversal_confirmation.lookback_period_candles | 0 | int >= 0 | Prior confirming candles; 0 = none. (Deprecated alias: `reversal_confirmation.lookback_period`) |
-| reversal_confirmation.decay_fraction | 0.5 | float (0,1] | Geometric per-candle volatility adjusted reversal threshold relaxation factor. (Deprecated alias: `reversal_confirmation.decay_ratio`) |
-| reversal_confirmation.min_natr_multiplier_fraction | 0.0095 | float [0,1] | Lower bound fraction for volatility adjusted reversal threshold. (Deprecated alias: `reversal_confirmation.min_natr_ratio_percent`) |
-| reversal_confirmation.max_natr_multiplier_fraction | 0.075 | float [0,1] | Upper bound fraction (>= lower bound) for volatility adjusted reversal threshold. (Deprecated alias: `reversal_confirmation.max_natr_ratio_percent`) |
-| _Regressor model_ | | | |
-| freqai.regressor | `xgboost` | enum {`xgboost`,`lightgbm`,`histgradientboostingregressor`} | Machine learning regressor algorithm. |
-| _Extrema smoothing_ | | | |
-| freqai.extrema_smoothing.method | `gaussian` | enum {`gaussian`,`kaiser`,`triang`,`smm`,`sma`,`savgol`,`gaussian_filter1d`} | Extrema smoothing method (`smm`=median, `sma`=mean, `savgol`=Savitzky–Golay). |
-| freqai.extrema_smoothing.window_candles | 5 | int >= 3 | Smoothing window length (candles). (Deprecated alias: `freqai.extrema_smoothing.window`) |
-| freqai.extrema_smoothing.beta | 8.0 | float > 0 | Shape parameter for `kaiser` kernel. |
-| freqai.extrema_smoothing.polyorder | 3 | int >= 1 | Polynomial order for `savgol` smoothing. |
-| freqai.extrema_smoothing.mode | `mirror` | enum {`mirror`,`constant`,`nearest`,`wrap`,`interp`} | Boundary mode for `savgol` and `gaussian_filter1d`. |
-| freqai.extrema_smoothing.sigma | 1.0 | float > 0 | Gaussian `sigma` for `gaussian_filter1d` smoothing. |
-| _Extrema weighting_ | | | |
-| freqai.extrema_weighting.strategy | `none` | enum {`none`,`amplitude`,`amplitude_threshold_ratio`,`volume_rate`,`speed`,`efficiency_ratio`,`volume_weighted_efficiency_ratio`,`hybrid`} | Extrema weighting source: unweighted (`none`), swing amplitude (`amplitude`), swing amplitude / median volatility-threshold ratio (`amplitude_threshold_ratio`), swing volume per candle (`volume_rate`), swing speed (`speed`), swing efficiency ratio (`efficiency_ratio`), swing volume-weighted efficiency ratio (`volume_weighted_efficiency_ratio`), or `hybrid`. |
-| freqai.extrema_weighting.source_weights | `{}` | dict[str, float] | Weights on extrema weighting sources for `hybrid`. |
-| freqai.extrema_weighting.aggregation | `weighted_sum` | enum {`weighted_sum`,`geometric_mean`} | Aggregation method applied to weighted extrema weighting sources for `hybrid`. |
-| freqai.extrema_weighting.aggregation_normalization | `none` | enum {`minmax`,`sigmoid`,`softmax`,`l1`,`l2`,`rank`,`none`} | Normalization method applied to the aggregated extrema weighting source for `hybrid`. |
-| freqai.extrema_weighting.standardization | `none` | enum {`none`,`zscore`,`robust`,`mmad`} | Standardization method applied to weights before normalization. `none`=no standardization, `zscore`=(w-μ)/σ, `robust`=(w-median)/IQR, `mmad`=(w-median)/MAD. |
-| freqai.extrema_weighting.robust_quantiles | [0.25, 0.75] | list[float] where 0 <= Q1 < Q3 <= 1 | Quantile range for robust standardization, Q1 and Q3. |
-| freqai.extrema_weighting.mmad_scaling_factor | 1.4826 | float > 0 | Scaling factor for MMAD standardization. |
-| freqai.extrema_weighting.normalization | `minmax` | enum {`minmax`,`sigmoid`,`softmax`,`l1`,`l2`,`rank`,`none`} | Normalization method applied to weights. |
-| freqai.extrema_weighting.minmax_range | [0.0, 1.0] | list[float] | Target range for `minmax` normalization, min and max. |
-| freqai.extrema_weighting.sigmoid_scale | 1.0 | float > 0 | Scale parameter for `sigmoid` normalization, controls steepness. |
-| freqai.extrema_weighting.softmax_temperature | 1.0 | float > 0 | Temperature parameter for `softmax` normalization: lower values sharpen distribution, higher values flatten it. |
-| freqai.extrema_weighting.rank_method | `average` | enum {`average`,`min`,`max`,`dense`,`ordinal`} | Ranking method for `rank` normalization. |
-| freqai.extrema_weighting.gamma | 1.0 | float (0,10] | Contrast exponent applied after normalization: >1 emphasizes extrema, values between 0 and 1 soften. |
-| _Feature parameters_ | | | |
-| freqai.feature_parameters.label_period_candles | min/max midpoint | int >= 1 | Zigzag labeling NATR horizon. |
-| freqai.feature_parameters.min_label_period_candles | 12 | int >= 1 | Minimum labeling NATR horizon used for reversals labeling HPO. |
-| freqai.feature_parameters.max_label_period_candles | 24 | int >= 1 | Maximum labeling NATR horizon used for reversals labeling HPO. |
-| freqai.feature_parameters.label_natr_multiplier | min/max midpoint | float > 0 | Zigzag labeling NATR multiplier. (Deprecated alias: `freqai.feature_parameters.label_natr_ratio`) |
-| freqai.feature_parameters.min_label_natr_multiplier | 9.0 | float > 0 | Minimum labeling NATR multiplier used for reversals labeling HPO. (Deprecated alias: `freqai.feature_parameters.min_label_natr_ratio`) |
-| freqai.feature_parameters.max_label_natr_multiplier | 12.0 | float > 0 | Maximum labeling NATR multiplier used for reversals labeling HPO. (Deprecated alias: `freqai.feature_parameters.max_label_natr_ratio`) |
-| freqai.feature_parameters.label_frequency_candles | `auto` | int >= 2 \| `auto` | Reversals labeling frequency. `auto` = max(2, 2 \* number of whitelisted pairs). |
-| freqai.feature_parameters.label_weights | [1/7,1/7,1/7,1/7,1/7,1/7,1/7] | list[float] | Per-objective weights used in distance calculations to ideal point. Objectives: (1) number of detected reversals, (2) median swing amplitude, (3) median (swing amplitude / median volatility-threshold ratio), (4) median swing volume per candle, (5) median swing speed, (6) median swing efficiency ratio, (7) median swing volume-weighted efficiency ratio. |
-| freqai.feature_parameters.label_p_order | `None` | float \| None | p-order parameter for distance metrics. Used by minkowski (default 2.0) and power_mean (default 1.0). Ignored by other metrics. |
-| freqai.feature_parameters.label_method | `compromise_programming` | enum {`compromise_programming`,`topsis`,`kmeans`,`kmeans2`,`kmedoids`,`knn`,`medoid`} | HPO `label` Pareto front trial selection method. |
-| freqai.feature_parameters.label_distance_metric | `euclidean` | string | Distance metric for `compromise_programming` and `topsis` methods. |
-| freqai.feature_parameters.label_cluster_metric | `euclidean` | string | Distance metric for `kmeans`, `kmeans2`, and `kmedoids` methods. |
-| freqai.feature_parameters.label_cluster_selection_method | `topsis` | enum {`compromise_programming`,`topsis`} | Cluster selection method for clustering-based label methods. |
-| freqai.feature_parameters.label_cluster_trial_selection_method | `topsis` | enum {`compromise_programming`,`topsis`} | Best cluster trial selection method for clustering-based label methods. |
-| freqai.feature_parameters.label_density_metric | method-dependent | string | Distance metric for `knn` and `medoid` methods. |
-| freqai.feature_parameters.label_density_aggregation | `power_mean` | enum {`power_mean`,`quantile`,`min`,`max`} | Aggregation method for KNN neighbor distances. |
-| freqai.feature_parameters.label_density_n_neighbors | 5 | int >= 1 | Number of neighbors for KNN. |
-| freqai.feature_parameters.label_density_aggregation_param | aggregation-dependent | float \| None | Tunable for KNN neighbor distance aggregation: p-order (`power_mean`) or quantile value (`quantile`). |
-| _Predictions extrema_ | | | |
-| freqai.predictions_extrema.selection_method | `rank_extrema` | enum {`rank_extrema`,`rank_peaks`,`partition`} | Extrema selection method. `rank_extrema` ranks extrema values, `rank_peaks` ranks detected peak values, `partition` uses sign-based partitioning. |
-| freqai.predictions_extrema.threshold_smoothing_method | `mean` | enum {`mean`,`isodata`,`li`,`minimum`,`otsu`,`triangle`,`yen`,`median`,`soft_extremum`} | Thresholding method for prediction thresholds smoothing. (Deprecated alias: `freqai.predictions_extrema.thresholds_smoothing`) |
-| freqai.predictions_extrema.soft_extremum_alpha | 12.0 | float >= 0 | Alpha for `soft_extremum` thresholds smoothing. (Deprecated alias: `freqai.predictions_extrema.thresholds_alpha`) |
-| freqai.predictions_extrema.outlier_threshold_quantile | 0.999 | float (0,1) | Quantile threshold for predictions outlier filtering. (Deprecated alias: `freqai.predictions_extrema.threshold_outlier`) |
-| freqai.predictions_extrema.keep_extrema_fraction | 1.0 | float (0,1] | Fraction of extrema used for thresholds. `1.0` uses all, lower values keep only most significant. Applies to `rank_extrema` and `rank_peaks`; ignored for `partition`. (Deprecated alias: `freqai.predictions_extrema.extrema_fraction`) |
-| _Optuna / HPO_ | | | |
-| freqai.optuna_hyperopt.enabled | false | bool | Enables HPO. |
-| freqai.optuna_hyperopt.sampler | `tpe` | enum {`tpe`,`auto`} | HPO sampler algorithm for `hp` namespace. `tpe` uses [TPESampler](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.TPESampler.html) with multivariate and group, `auto` uses [AutoSampler](https://hub.optuna.org/samplers/auto_sampler). |
-| freqai.optuna_hyperopt.label_sampler | `auto` | enum {`auto`,`tpe`,`nsgaii`,`nsgaiii`} | HPO sampler algorithm for multi-objective `label` namespace. `nsgaii` uses [NSGAIISampler](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.NSGAIISampler.html), `nsgaiii` uses [NSGAIIISampler](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.NSGAIIISampler.html). |
-| freqai.optuna_hyperopt.storage | `file` | enum {`file`,`sqlite`} | HPO storage backend. |
-| freqai.optuna_hyperopt.continuous | true | bool | Continuous HPO. |
-| freqai.optuna_hyperopt.warm_start | true | bool | Warm start HPO with previous best value(s). |
-| freqai.optuna_hyperopt.n_startup_trials | 15 | int >= 0 | HPO startup trials. |
-| freqai.optuna_hyperopt.n_trials | 50 | int >= 1 | Maximum HPO trials. |
-| freqai.optuna_hyperopt.n_jobs | CPU threads / 4 | int >= 1 | Parallel HPO workers. |
-| freqai.optuna_hyperopt.timeout | 7200 | int >= 0 | HPO wall-clock timeout in seconds. |
-| freqai.optuna_hyperopt.label_candles_step | 1 | int >= 1 | Step for Zigzag NATR horizon `label` search space. |
-| freqai.optuna_hyperopt.space_reduction | false | bool | Enable/disable `hp` search space reduction based on previous best parameters. |
-| freqai.optuna_hyperopt.space_fraction | 0.4 | float [0,1] | Fraction of the `hp` search space to use with `space_reduction`. Lower values create narrower search ranges around the best parameters. (Deprecated alias: `freqai.optuna_hyperopt.expansion_ratio`) |
-| freqai.optuna_hyperopt.min_resource | 3 | int >= 1 | Minimum resource per [HyperbandPruner](https://optuna.readthedocs.io/en/stable/reference/generated/optuna.pruners.HyperbandPruner.html) rung. |
-| freqai.optuna_hyperopt.seed | 1 | int >= 0 | HPO RNG seed. |
+| Path | Default | Type / Range | Description |
+| -------------------------------------------------------------- | ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| _Protections_ | | | |
+| custom_protections.trade_duration_candles                      | 72                            | int >= 1                                                                                                                            | Estimated trade duration in candles. Scales the protections' stop duration (in candles) and trade limit.                                                                                                                                                                                                                                                            |
+| custom_protections.lookback_period_fraction | 0.5 | float (0,1] | Fraction of `fit_live_predictions_candles` used to calculate `lookback_period_candles` for _MaxDrawdown_ and _StoplossGuard_ protections. |
+| custom_protections.cooldown.enabled | true | bool | Enable/disable _CooldownPeriod_ protection. |
+| custom_protections.cooldown.stop_duration_candles | 4 | int >= 1 | Number of candles to wait before allowing new trades after a trade is closed. |
+| custom_protections.drawdown.enabled | true | bool | Enable/disable _MaxDrawdown_ protection. |
+| custom_protections.drawdown.max_allowed_drawdown | 0.2 | float (0,1) | Maximum allowed drawdown. |
+| custom_protections.stoploss.enabled | true | bool | Enable/disable _StoplossGuard_ protection. |
+| _Leverage_ | | | |
+| leverage                                                        | `proposed_leverage`           | float [1.0, max_leverage]                                                                                                           | Leverage. Falls back to `proposed_leverage` for the pair.                                                                                                                                                                                                                                                                                                            |
+| _Exit pricing_ | | | |
+| exit_pricing.trade_price_target_method | `moving_average` | enum {`moving_average`,`quantile_interpolation`,`weighted_average`} | Trade NATR computation method. (Deprecated alias: `exit_pricing.trade_price_target`) |
+| exit_pricing.thresholds_calibration.decline_quantile | 0.75 | float (0,1) | PnL decline quantile threshold. |
+| _Reversal confirmation_ | | | |
+| reversal_confirmation.lookback_period_candles | 0 | int >= 0 | Prior confirming candles; 0 = none. (Deprecated alias: `reversal_confirmation.lookback_period`) |
+| reversal_confirmation.decay_fraction                            | 0.5                           | float (0,1]                                                                                                                         | Geometric per-candle relaxation factor for the volatility-adjusted reversal threshold. (Deprecated alias: `reversal_confirmation.decay_ratio`)                                                                                                                                                                                                                       |
+| reversal_confirmation.min_natr_multiplier_fraction              | 0.0095                        | float [0,1]                                                                                                                         | Lower bound fraction for the volatility-adjusted reversal threshold. (Deprecated alias: `reversal_confirmation.min_natr_ratio_percent`)                                                                                                                                                                                                                              |
+| reversal_confirmation.max_natr_multiplier_fraction              | 0.075                         | float [0,1]                                                                                                                         | Upper bound fraction (>= lower bound) for the volatility-adjusted reversal threshold. (Deprecated alias: `reversal_confirmation.max_natr_ratio_percent`)                                                                                                                                                                                                             |
+| _Regressor model_ | | | |
+| freqai.regressor | `xgboost` | enum {`xgboost`,`lightgbm`,`histgradientboostingregressor`} | Machine learning regressor algorithm. |
+| _Extrema smoothing_ | | | |
+| freqai.extrema_smoothing.method | `gaussian` | enum {`gaussian`,`kaiser`,`triang`,`smm`,`sma`,`savgol`,`gaussian_filter1d`} | Extrema smoothing method (`smm`=median, `sma`=mean, `savgol`=Savitzky–Golay). |
+| freqai.extrema_smoothing.window_candles | 5 | int >= 3 | Smoothing window length (candles). (Deprecated alias: `freqai.extrema_smoothing.window`) |
+| freqai.extrema_smoothing.beta | 8.0 | float > 0 | Shape parameter for `kaiser` kernel. |
+| freqai.extrema_smoothing.polyorder | 3 | int >= 1 | Polynomial order for `savgol` smoothing. |
+| freqai.extrema_smoothing.mode | `mirror` | enum {`mirror`,`constant`,`nearest`,`wrap`,`interp`} | Boundary mode for `savgol` and `gaussian_filter1d`. |
+| freqai.extrema_smoothing.sigma | 1.0 | float > 0 | Gaussian `sigma` for `gaussian_filter1d` smoothing. |
+| _Extrema weighting_ | | | |
+| freqai.extrema_weighting.strategy | `none` | enum {`none`,`amplitude`,`amplitude_threshold_ratio`,`volume_rate`,`speed`,`efficiency_ratio`,`volume_weighted_efficiency_ratio`} | Extrema weighting source: unweighted (`none`), swing amplitude (`amplitude`), swing amplitude / median volatility-threshold ratio (`amplitude_threshold_ratio`), swing volume per candle (`volume_rate`), swing speed (`speed`), swing efficiency ratio (`efficiency_ratio`), or swing volume-weighted efficiency ratio (`volume_weighted_efficiency_ratio`). |
+| freqai.extrema_weighting.standardization | `none` | enum {`none`,`zscore`,`robust`,`mmad`} | Standardization method applied to smoothed weighted extrema before normalization. `none`=no standardization, `zscore`=(w-μ)/σ, `robust`=(w-median)/IQR, `mmad`=(w-median)/MAD. |
+| freqai.extrema_weighting.robust_quantiles | [0.25, 0.75] | list[float] where 0 <= Q1 < Q3 <= 1 | Quantile range for robust standardization, Q1 and Q3. |
+| freqai.extrema_weighting.mmad_scaling_factor | 1.4826 | float > 0 | Scaling factor for MMAD standardization. |
+| freqai.extrema_weighting.normalization | `minmax` | enum {`minmax`,`sigmoid`,`none`} | Normalization method applied to smoothed weighted extrema. |
+| freqai.extrema_weighting.minmax_range | [-1.0, 1.0] | list[float] | Target range for `minmax` normalization, min and max. |
+| freqai.extrema_weighting.sigmoid_scale | 1.0 | float > 0 | Scale parameter for `sigmoid` normalization, controls steepness. |
+| freqai.extrema_weighting.gamma                                  | 1.0                           | float (0,10]                                                                                                                        | Contrast exponent applied to smoothed weighted extrema after normalization: >1 emphasizes extrema, values between 0 and 1 soften them.                                                                                                                                                                                                                               |
+| _Feature parameters_ | | | |
+| freqai.feature_parameters.label_period_candles | min/max midpoint | int >= 1 | Zigzag labeling NATR horizon. |
+| freqai.feature_parameters.min_label_period_candles | 12 | int >= 1 | Minimum labeling NATR horizon used for reversals labeling HPO. |
+| freqai.feature_parameters.max_label_period_candles | 24 | int >= 1 | Maximum labeling NATR horizon used for reversals labeling HPO. |
+| freqai.feature_parameters.label_natr_multiplier | min/max midpoint | float > 0 | Zigzag labeling NATR multiplier. (Deprecated alias: `freqai.feature_parameters.label_natr_ratio`) |
+| freqai.feature_parameters.min_label_natr_multiplier | 9.0 | float > 0 | Minimum labeling NATR multiplier used for reversals labeling HPO. (Deprecated alias: `freqai.feature_parameters.min_label_natr_ratio`) |
+| freqai.feature_parameters.max_label_natr_multiplier | 12.0 | float > 0 | Maximum labeling NATR multiplier used for reversals labeling HPO. (Deprecated alias: `freqai.feature_parameters.max_label_natr_ratio`) |
+| freqai.feature_parameters.label_frequency_candles | `auto` | int >= 2 \| `auto` | Reversals labeling frequency. `auto` = max(2, 2 \* number of whitelisted pairs). |
+| freqai.feature_parameters.label_weights | [1/7,1/7,1/7,1/7,1/7,1/7,1/7] | list[float] | Per-objective weights used in distance calculations to ideal point. Objectives: (1) number of detected reversals, (2) median swing amplitude, (3) median (swing amplitude / median volatility-threshold ratio), (4) median swing volume per candle, (5) median swing speed, (6) median swing efficiency ratio, (7) median swing volume-weighted efficiency ratio. |
+| freqai.feature_parameters.label_p_order | `None` | float \| None | p-order parameter for distance metrics. Used by minkowski (default 2.0) and power_mean (default 1.0). Ignored by other metrics. |
+| freqai.feature_parameters.label_method | `compromise_programming` | enum {`compromise_programming`,`topsis`,`kmeans`,`kmeans2`,`kmedoids`,`knn`,`medoid`} | HPO `label` Pareto front trial selection method. |
+| freqai.feature_parameters.label_distance_metric | `euclidean` | string | Distance metric for `compromise_programming` and `topsis` methods. |
+| freqai.feature_parameters.label_cluster_metric | `euclidean` | string | Distance metric for `kmeans`, `kmeans2`, and `kmedoids` methods. |
+| freqai.feature_parameters.label_cluster_selection_method | `topsis` | enum {`compromise_programming`,`topsis`} | Cluster selection method for clustering-based label methods. |
+| freqai.feature_parameters.label_cluster_trial_selection_method | `topsis` | enum {`compromise_programming`,`topsis`} | Best cluster trial selection method for clustering-based label methods. |
+| freqai.feature_parameters.label_density_metric | method-dependent | string | Distance metric for `knn` and `medoid` methods. |
+| freqai.feature_parameters.label_density_aggregation | `power_mean` | enum {`power_mean`,`quantile`,`min`,`max`} | Aggregation method for KNN neighbor distances. |
+| freqai.feature_parameters.label_density_n_neighbors | 5 | int >= 1 | Number of neighbors for KNN. |
+| freqai.feature_parameters.label_density_aggregation_param | aggregation-dependent | float \| None | Tunable for KNN neighbor distance aggregation: p-order (`power_mean`) or quantile value (`quantile`). |
+| _Predictions extrema_ | | | |
+| freqai.predictions_extrema.selection_method | `rank_extrema` | enum {`rank_extrema`,`rank_peaks`,`partition`} | Extrema selection method. `rank_extrema` ranks extrema values, `rank_peaks` ranks detected peak values, `partition` uses sign-based partitioning. |
+| freqai.predictions_extrema.threshold_smoothing_method           | `mean`                        | enum {`mean`,`isodata`,`li`,`minimum`,`otsu`,`triangle`,`yen`,`median`,`soft_extremum`}                                             | Smoothing method applied to prediction thresholds. (Deprecated alias: `freqai.predictions_extrema.thresholds_smoothing`)                                                                                                                                                                                                                                             |
+| freqai.predictions_extrema.soft_extremum_alpha | 12.0 | float >= 0 | Alpha for `soft_extremum` thresholds smoothing. (Deprecated alias: `freqai.predictions_extrema.thresholds_alpha`) |
+| freqai.predictions_extrema.outlier_threshold_quantile | 0.999 | float (0,1) | Quantile threshold for predictions outlier filtering. (Deprecated alias: `freqai.predictions_extrema.threshold_outlier`) |
+| freqai.predictions_extrema.keep_extrema_fraction | 1.0 | float (0,1] | Fraction of extrema used for thresholds. `1.0` uses all, lower values keep only most significant. Applies to `rank_extrema` and `rank_peaks`; ignored for `partition`. (Deprecated alias: `freqai.predictions_extrema.extrema_fraction`) |
+| _Optuna / HPO_ | | | |
+| freqai.optuna_hyperopt.enabled                                  | false                         | bool                                                                                                                                | Enable/disable HPO.                                                                                                                                                                                                                                                                                                                                                  |
+| freqai.optuna_hyperopt.sampler | `tpe` | enum {`tpe`,`auto`} | HPO sampler algorithm for `hp` namespace. `tpe` uses [TPESampler](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.TPESampler.html) with multivariate and group, `auto` uses [AutoSampler](https://hub.optuna.org/samplers/auto_sampler). |
+| freqai.optuna_hyperopt.label_sampler | `auto` | enum {`auto`,`tpe`,`nsgaii`,`nsgaiii`} | HPO sampler algorithm for multi-objective `label` namespace. `nsgaii` uses [NSGAIISampler](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.NSGAIISampler.html), `nsgaiii` uses [NSGAIIISampler](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.NSGAIIISampler.html). |
+| freqai.optuna_hyperopt.storage | `file` | enum {`file`,`sqlite`} | HPO storage backend. |
+| freqai.optuna_hyperopt.continuous                               | true                          | bool                                                                                                                                | Enable/disable continuous HPO.                                                                                                                                                                                                                                                                                                                                       |
+| freqai.optuna_hyperopt.warm_start | true | bool | Warm start HPO with previous best value(s). |
+| freqai.optuna_hyperopt.n_startup_trials | 15 | int >= 0 | HPO startup trials. |
+| freqai.optuna_hyperopt.n_trials | 50 | int >= 1 | Maximum HPO trials. |
+| freqai.optuna_hyperopt.n_jobs | CPU threads / 4 | int >= 1 | Parallel HPO workers. |
+| freqai.optuna_hyperopt.timeout | 7200 | int >= 0 | HPO wall-clock timeout in seconds. |
+| freqai.optuna_hyperopt.label_candles_step | 1 | int >= 1 | Step for Zigzag NATR horizon `label` search space. |
+| freqai.optuna_hyperopt.space_reduction | false | bool | Enable/disable `hp` search space reduction based on previous best parameters. |
+| freqai.optuna_hyperopt.space_fraction | 0.4 | float [0,1] | Fraction of the `hp` search space to use with `space_reduction`. Lower values create narrower search ranges around the best parameters. (Deprecated alias: `freqai.optuna_hyperopt.expansion_ratio`) |
+| freqai.optuna_hyperopt.min_resource | 3 | int >= 1 | Minimum resource per [HyperbandPruner](https://optuna.readthedocs.io/en/stable/reference/generated/optuna.pruners.HyperbandPruner.html) rung. |
+| freqai.optuna_hyperopt.seed | 1 | int >= 0 | HPO RNG seed. |
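The three extrema-weighting phases documented above (standardization, then normalization, then the `gamma` contrast exponent) compose in order. A minimal sketch of that math, assuming z-score standardization, min-max normalization into `[-1.0, 1.0]`, and `gamma` set to 1.5 — the helper name and the sample weights are illustrative only, not part of the codebase:

```python
import numpy as np


def transform_weights(
    w: np.ndarray,
    gamma: float = 1.5,
    minmax_range: tuple[float, float] = (-1.0, 1.0),
) -> np.ndarray:
    # Phase 1: z-score standardization, (w - mean) / std
    z = (w - w.mean()) / w.std()
    # Phase 2: min-max normalization into the target range
    low, high = minmax_range
    z = low + (z - z.min()) / (z.max() - z.min()) * (high - low)
    # Phase 3: signed gamma contrast, sign(z) * |z|**gamma
    return np.sign(z) * np.abs(z) ** gamma


weights = np.array([0.2, 0.5, 0.9, 1.4])
out = transform_weights(weights)
```

With `gamma > 1` the mid-range weights are pulled toward 0 while the range endpoints (±1 here) are preserved, which is the "emphasizes extrema" behavior the table describes; `0 < gamma < 1` does the opposite.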
## ReforceXY
"extrema_weighting": {
"strategy": "none"
},
+ // "extrema_weighting": {
+ // "strategy": "amplitude",
+ // "gamma": 1.5
+ // },
"extrema_smoothing": {
"method": "kaiser",
"window_candles": 5,
import scipy as sp
import skimage
import sklearn
+from datasieve.pipeline import Pipeline
from freqtrade.freqai.base_models.BaseRegressionModel import BaseRegressionModel
from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
from numpy.typing import NDArray
from optuna.study.study import ObjectiveFuncType
from sklearn_extra.cluster import KMedoids
+from ExtremaWeightingTransformer import (
+ ExtremaWeightingTransformer,
+)
from Utils import (
DEFAULT_FIT_LIVE_PREDICTIONS_CANDLES,
EXTREMA_COLUMN,
MAXIMA_THRESHOLD_COLUMN,
MINIMA_THRESHOLD_COLUMN,
REGRESSORS,
+ WEIGHT_STRATEGIES,
Regressor,
eval_set_and_weights,
fit_regressor,
format_number,
+ get_extrema_weighting_config,
get_label_defaults,
get_min_max_label_period_candles,
get_optuna_study_model_parameters,
https://github.com/sponsors/robcaulk
"""
- version = "3.9.2"
+ version = "3.10.0"
_TEST_SIZE: Final[float] = 0.1
)
self._optuna_label_shuffle_rng.shuffle(self._optuna_label_candle_pool)
+ def define_label_pipeline(self, threads: int = -1) -> Pipeline:
+ extrema_weighting = self.freqai_info.get("extrema_weighting", {})
+ if not isinstance(extrema_weighting, dict):
+ extrema_weighting = {}
+ extrema_weighting_config = get_extrema_weighting_config(
+ extrema_weighting, logger
+ )
+
+ if extrema_weighting_config["strategy"] == WEIGHT_STRATEGIES[0]: # "none"
+ return super().define_label_pipeline(threads)
+
+ return Pipeline(
+ [
+ (
+ "extrema_weighting",
+ ExtremaWeightingTransformer(
+ extrema_weighting=extrema_weighting_config
+ ),
+ ),
+ ]
+ )
+
def fit(
self, data_dictionary: dict[str, Any], dk: FreqaiDataKitchen, **kwargs
) -> Any:
--- /dev/null
+from typing import Any, Final, Literal
+
+import numpy as np
+import scipy as sp
+from datasieve.transforms.base_transform import (
+ ArrayOrNone,
+ BaseTransform,
+ ListOrNone,
+)
+from numpy.typing import ArrayLike, NDArray
+
+WeightStrategy = Literal[
+ "none",
+ "amplitude",
+ "amplitude_threshold_ratio",
+ "volume_rate",
+ "speed",
+ "efficiency_ratio",
+ "volume_weighted_efficiency_ratio",
+]
+WEIGHT_STRATEGIES: Final[tuple[WeightStrategy, ...]] = (
+ "none",
+ "amplitude",
+ "amplitude_threshold_ratio",
+ "volume_rate",
+ "speed",
+ "efficiency_ratio",
+ "volume_weighted_efficiency_ratio",
+)
+
+StandardizationType = Literal["none", "zscore", "robust", "mmad"]
+STANDARDIZATION_TYPES: Final[tuple[StandardizationType, ...]] = (
+ "none", # 0 - w (identity)
+ "zscore", # 1 - (w - μ) / σ
+ "robust", # 2 - (w - median) / IQR
+ "mmad", # 3 - (w - median) / MAD
+)
+
+NormalizationType = Literal["minmax", "sigmoid", "none"]
+NORMALIZATION_TYPES: Final[tuple[NormalizationType, ...]] = (
+ "minmax", # 0 - (w - min) / (max - min)
+ "sigmoid", # 1 - 1 / (1 + exp(-scale × w))
+ "none", # 2 - w (identity)
+)
+
+DEFAULTS_EXTREMA_WEIGHTING: Final[dict[str, Any]] = {
+ "strategy": WEIGHT_STRATEGIES[0], # "none"
+ # Phase 1: Standardization
+ "standardization": STANDARDIZATION_TYPES[0], # "none"
+ "robust_quantiles": (0.25, 0.75),
+ "mmad_scaling_factor": 1.4826,
+ # Phase 2: Normalization
+ "normalization": NORMALIZATION_TYPES[0], # "minmax"
+ "minmax_range": (-1.0, 1.0),
+ "sigmoid_scale": 1.0,
+ # Phase 3: Post-processing
+ "gamma": 1.0,
+}
+
+
+class ExtremaWeightingTransformer(BaseTransform):
+ def __init__(self, *, extrema_weighting: dict[str, Any]) -> None:
+ super().__init__(name="ExtremaWeightingTransformer")
+ self.extrema_weighting = {**DEFAULTS_EXTREMA_WEIGHTING, **extrema_weighting}
+ self._fitted = False
+ self._mean = 0.0
+ self._std = 1.0
+ self._min = 0.0
+ self._max = 1.0
+ self._median = 0.0
+ self._iqr = 1.0
+ self._mad = 1.0
+
+ def _standardize(
+ self,
+ values: NDArray[np.floating],
+ mask: NDArray[np.bool_],
+ ) -> NDArray[np.floating]:
+ method = self.extrema_weighting["standardization"]
+ if method == STANDARDIZATION_TYPES[0]: # "none"
+ return values
+ out = values.copy()
+ if method == STANDARDIZATION_TYPES[1]: # "zscore"
+ out[mask] = (values[mask] - self._mean) / self._std
+ elif method == STANDARDIZATION_TYPES[2]: # "robust"
+ out[mask] = (values[mask] - self._median) / self._iqr
+ elif method == STANDARDIZATION_TYPES[3]: # "mmad"
+ mmad_scaling_factor = self.extrema_weighting["mmad_scaling_factor"]
+ out[mask] = (values[mask] - self._median) / (
+ self._mad * mmad_scaling_factor
+ )
+ else:
+ raise ValueError(
+ f"Invalid standardization {method!r}. "
+ f"Supported: {', '.join(STANDARDIZATION_TYPES)}"
+ )
+ return out
+
+ def _normalize(
+ self,
+ values: NDArray[np.floating],
+ mask: NDArray[np.bool_],
+ ) -> NDArray[np.floating]:
+ method = self.extrema_weighting["normalization"]
+ if method == NORMALIZATION_TYPES[2]: # "none"
+ return values
+ out = values.copy()
+ if method == NORMALIZATION_TYPES[0]: # "minmax"
+ minmax_range = self.extrema_weighting["minmax_range"]
+ value_range = self._max - self._min
+ low, high = minmax_range
+ scale_range = high - low
+
+ if (
+ not np.isfinite(value_range)
+ or np.isclose(value_range, 0.0)
+ or not np.isfinite(scale_range)
+ or np.isclose(scale_range, 0.0)
+ ):
+ return values
+
+ out[mask] = low + (values[mask] - self._min) / value_range * scale_range
+ elif method == NORMALIZATION_TYPES[1]: # "sigmoid"
+ sigmoid_scale = self.extrema_weighting["sigmoid_scale"]
+ out[mask] = sp.special.expit(sigmoid_scale * values[mask])
+ else:
+ raise ValueError(
+ f"Invalid normalization {method!r}. "
+ f"Supported: {', '.join(NORMALIZATION_TYPES)}"
+ )
+ return out
+
+ def _apply_gamma(
+ self,
+ values: NDArray[np.floating],
+ mask: NDArray[np.bool_],
+ ) -> NDArray[np.floating]:
+ gamma = self.extrema_weighting["gamma"]
+ if np.isclose(gamma, 1.0) or not np.isfinite(gamma) or gamma <= 0:
+ return values
+ out = values.copy()
+ out[mask] = np.sign(values[mask]) * np.power(np.abs(values[mask]), gamma)
+ return out
+
+ def _inverse_standardize(
+ self,
+ values: NDArray[np.floating],
+ mask: NDArray[np.bool_],
+ ) -> NDArray[np.floating]:
+ method = self.extrema_weighting["standardization"]
+ if method == STANDARDIZATION_TYPES[0]: # "none"
+ return values
+ out = values.copy()
+ if method == STANDARDIZATION_TYPES[1]: # "zscore"
+ out[mask] = values[mask] * self._std + self._mean
+ elif method == STANDARDIZATION_TYPES[2]: # "robust"
+ out[mask] = values[mask] * self._iqr + self._median
+ elif method == STANDARDIZATION_TYPES[3]: # "mmad"
+ mmad_scaling_factor = self.extrema_weighting["mmad_scaling_factor"]
+ out[mask] = values[mask] * (self._mad * mmad_scaling_factor) + self._median
+ else:
+ raise ValueError(
+ f"Invalid standardization {method!r}. "
+ f"Supported: {', '.join(STANDARDIZATION_TYPES)}"
+ )
+ return out
+
+ def _inverse_normalize(
+ self,
+ values: NDArray[np.floating],
+ mask: NDArray[np.bool_],
+ ) -> NDArray[np.floating]:
+ method = self.extrema_weighting["normalization"]
+ if method == NORMALIZATION_TYPES[2]: # "none"
+ return values
+ out = values.copy()
+ if method == NORMALIZATION_TYPES[0]: # "minmax"
+ minmax_range = self.extrema_weighting["minmax_range"]
+ low, high = minmax_range
+ value_range = self._max - self._min
+ scale_range = high - low
+
+ if (
+ not np.isfinite(value_range)
+ or np.isclose(value_range, 0.0)
+ or not np.isfinite(scale_range)
+ or np.isclose(scale_range, 0.0)
+ ):
+ return values
+
+ out[mask] = self._min + (values[mask] - low) / scale_range * value_range
+ elif method == NORMALIZATION_TYPES[1]: # "sigmoid"
+ sigmoid_scale = self.extrema_weighting["sigmoid_scale"]
+ out[mask] = sp.special.logit(values[mask]) / sigmoid_scale
+ else:
+ raise ValueError(
+ f"Invalid normalization {method!r}. "
+ f"Supported: {', '.join(NORMALIZATION_TYPES)}"
+ )
+ return out
+
+ def _inverse_gamma(
+ self,
+ values: NDArray[np.floating],
+ mask: NDArray[np.bool_],
+ ) -> NDArray[np.floating]:
+ gamma = self.extrema_weighting["gamma"]
+ if np.isclose(gamma, 1.0) or not np.isfinite(gamma) or gamma <= 0:
+ return values
+ out = values.copy()
+ out[mask] = np.power(np.abs(values[mask]), 1.0 / gamma) * np.sign(values[mask])
+ return out
+
+ def fit(
+ self,
+ X: ArrayLike,
+ y: ArrayOrNone = None,
+ sample_weight: ArrayOrNone = None,
+ feature_list: ListOrNone = None,
+ **kwargs,
+ ) -> tuple[ArrayLike, ArrayOrNone, ArrayOrNone, ListOrNone]:
+        """Fit location/scale statistics on the finite, non-zero values of X."""
+ values = np.asarray(X, dtype=float)
+ non_zero_finite_values = values[np.isfinite(values) & ~np.isclose(values, 0.0)]
+
+ if non_zero_finite_values.size == 0:
+ self._mean = 0.0
+ self._std = 1.0
+ self._min = 0.0
+ self._max = 1.0
+ self._median = 0.0
+ self._iqr = 1.0
+ self._mad = 1.0
+ self._fitted = True
+ return X, y, sample_weight, feature_list
+
+ robust_quantiles = self.extrema_weighting["robust_quantiles"]
+
+ self._mean = np.mean(non_zero_finite_values)
+        std = (
+            np.std(non_zero_finite_values, ddof=1)
+            if non_zero_finite_values.size > 1
+            else np.nan
+        )
+        self._std = std if np.isfinite(std) and not np.isclose(std, 0.0) else 1.0
+ self._min = np.min(non_zero_finite_values)
+ self._max = np.max(non_zero_finite_values)
+ if np.isclose(self._max, self._min):
+ self._max = self._min + 1.0
+ self._median = np.median(non_zero_finite_values)
+        q1, q3 = np.quantile(non_zero_finite_values, robust_quantiles)
+ iqr = q3 - q1
+ self._iqr = iqr if np.isfinite(iqr) and not np.isclose(iqr, 0.0) else 1.0
+ mad = np.median(np.abs(non_zero_finite_values - self._median))
+ self._mad = mad if np.isfinite(mad) and not np.isclose(mad, 0.0) else 1.0
+
+ self._fitted = True
+ return X, y, sample_weight, feature_list
+
+ def transform(
+ self,
+ X: ArrayLike,
+ y: ArrayOrNone = None,
+ sample_weight: ArrayOrNone = None,
+ feature_list: ListOrNone = None,
+ outlier_check: bool = False,
+ **kwargs,
+ ) -> tuple[ArrayLike, ArrayOrNone, ArrayOrNone, ListOrNone]:
+        """Standardize, normalize and gamma-correct X using the fitted statistics."""
+ if not self._fitted:
+ raise RuntimeError(
+ "ExtremaWeightingTransformer must be fitted before transform"
+ )
+
+ arr = np.asarray(X, dtype=float)
+ mask = np.isfinite(arr) & ~np.isclose(arr, 0.0)
+
+ standardized = self._standardize(arr, mask)
+ normalized = self._normalize(standardized, mask)
+ gammaized = self._apply_gamma(normalized, mask)
+
+ return gammaized, y, sample_weight, feature_list
+
+ def fit_transform(
+ self,
+ X: ArrayLike,
+ y: ArrayOrNone = None,
+ sample_weight: ArrayOrNone = None,
+ feature_list: ListOrNone = None,
+ **kwargs,
+ ) -> tuple[ArrayLike, ArrayOrNone, ArrayOrNone, ListOrNone]:
+ self.fit(X, y, sample_weight, feature_list, **kwargs)
+ return self.transform(X, y, sample_weight, feature_list, **kwargs)
+
+ def inverse_transform(
+ self,
+ X: ArrayLike,
+ y: ArrayOrNone = None,
+ sample_weight: ArrayOrNone = None,
+ feature_list: ListOrNone = None,
+ **kwargs,
+ ) -> tuple[ArrayLike, ArrayOrNone, ArrayOrNone, ListOrNone]:
+        """Invert the gamma, normalization and standardization steps on X."""
+ if not self._fitted:
+ raise RuntimeError(
+ "ExtremaWeightingTransformer must be fitted before inverse_transform"
+ )
+
+ arr = np.asarray(X, dtype=float)
+ mask = np.isfinite(arr) & ~np.isclose(arr, 0.0)
+
+ degammaized = self._inverse_gamma(arr, mask)
+ denormalized = self._inverse_normalize(degammaized, mask)
+ destandardized = self._inverse_standardize(denormalized, mask)
+
+ return destandardized, y, sample_weight, feature_list
from Utils import (
DEFAULT_FIT_LIVE_PREDICTIONS_CANDLES,
DEFAULTS_EXTREMA_SMOOTHING,
- DEFAULTS_EXTREMA_WEIGHTING,
EXTREMA_COLUMN,
MAXIMA_THRESHOLD_COLUMN,
MINIMA_THRESHOLD_COLUMN,
- NORMALIZATION_TYPES,
- RANK_METHODS,
SMOOTHING_METHODS,
SMOOTHING_MODES,
- STANDARDIZATION_TYPES,
TRADE_PRICE_TARGETS,
- WEIGHT_AGGREGATIONS,
- WEIGHT_SOURCES,
- WEIGHT_STRATEGIES,
alligator,
bottom_change_percent,
calculate_quantile,
format_number,
get_callable_sha256,
get_distance,
+ get_extrema_weighting_config,
get_label_defaults,
get_weighted_extrema,
get_zl_ma_fn,
_ORDER_TYPES: Final[tuple[OrderType, ...]] = ("entry", "exit")
_TRADING_MODES: Final[tuple[TradingMode, ...]] = ("spot", "margin", "futures")
+ _CUSTOM_STOPLOSS_NATR_MULTIPLIER_FRACTION: Final[float] = 0.7860
+
+ _ANNOTATION_LINE_OFFSET_CANDLES: Final[int] = 10
+
+ _PLOT_EXTREMA_MIN_EPS: Final[float] = 0.01
+
def version(self) -> str:
- return "3.9.2"
+ return "3.10.0"
timeframe = "5m"
timeframe_minutes = timeframe_to_minutes(timeframe)
# (natr_multiplier_fraction, stake_percent, color)
_FINAL_EXIT_STAGE: Final[tuple[float, float, str]] = (1.0, 1.0, "deepskyblue")
- _CUSTOM_STOPLOSS_NATR_MULTIPLIER_FRACTION: Final[float] = 0.7860
-
- _ANNOTATION_LINE_OFFSET_CANDLES: Final[int] = 10
-
- _PLOT_EXTREMA_MIN_EPS: Final[float] = 0.01
-
minimal_roi = {str(timeframe_minutes * 864): -1}
# FreqAI is crashing if minimal_roi is a property
extrema_weighting = self.freqai_info.get("extrema_weighting", {})
if not isinstance(extrema_weighting, dict):
extrema_weighting = {}
- return QuickAdapterV3._get_extrema_weighting_params(extrema_weighting)
+ return get_extrema_weighting_config(extrema_weighting, logger)
@property
def extrema_smoothing(self) -> dict[str, Any]:
extrema_smoothing = self.freqai_info.get("extrema_smoothing", {})
if not isinstance(extrema_smoothing, dict):
extrema_smoothing = {}
- return QuickAdapterV3._get_extrema_smoothing_params(extrema_smoothing)
+        method = str(
+            extrema_smoothing.get("method", DEFAULTS_EXTREMA_SMOOTHING["method"])
+        )
+ if method not in set(SMOOTHING_METHODS):
+ logger.warning(
+ f"Invalid extrema_smoothing method {method!r}, supported: {', '.join(SMOOTHING_METHODS)}, using default {SMOOTHING_METHODS[0]!r}"
+ )
+ method = SMOOTHING_METHODS[0]
+
+ window_candles = update_config_value(
+ extrema_smoothing,
+ new_key="window_candles",
+ old_key="window",
+ default=DEFAULTS_EXTREMA_SMOOTHING["window_candles"],
+ logger=logger,
+ new_path="freqai.extrema_smoothing.window_candles",
+ old_path="freqai.extrema_smoothing.window",
+ )
+ if not isinstance(window_candles, int) or window_candles < 3:
+ logger.warning(
+ f"Invalid extrema_smoothing window_candles {window_candles!r}: must be an integer >= 3, using default {DEFAULTS_EXTREMA_SMOOTHING['window_candles']!r}"
+ )
+ window_candles = int(DEFAULTS_EXTREMA_SMOOTHING["window_candles"])
+
+ beta = extrema_smoothing.get("beta", DEFAULTS_EXTREMA_SMOOTHING["beta"])
+ if not isinstance(beta, (int, float)) or not np.isfinite(beta) or beta <= 0:
+ logger.warning(
+ f"Invalid extrema_smoothing beta {beta!r}: must be a finite number > 0, using default {DEFAULTS_EXTREMA_SMOOTHING['beta']!r}"
+ )
+ beta = DEFAULTS_EXTREMA_SMOOTHING["beta"]
+
+ polyorder = extrema_smoothing.get(
+ "polyorder", DEFAULTS_EXTREMA_SMOOTHING["polyorder"]
+ )
+ if not isinstance(polyorder, int) or polyorder < 1:
+ logger.warning(
+ f"Invalid extrema_smoothing polyorder {polyorder!r}: must be an integer >= 1, using default {DEFAULTS_EXTREMA_SMOOTHING['polyorder']!r}"
+ )
+ polyorder = DEFAULTS_EXTREMA_SMOOTHING["polyorder"]
+
+ mode = str(extrema_smoothing.get("mode", DEFAULTS_EXTREMA_SMOOTHING["mode"]))
+ if mode not in set(SMOOTHING_MODES):
+ logger.warning(
+ f"Invalid extrema_smoothing mode {mode!r}, supported: {', '.join(SMOOTHING_MODES)}, using default {SMOOTHING_MODES[0]!r}"
+ )
+ mode = SMOOTHING_MODES[0]
+
+ sigma = extrema_smoothing.get("sigma", DEFAULTS_EXTREMA_SMOOTHING["sigma"])
+        if not isinstance(sigma, (int, float)) or not np.isfinite(sigma) or sigma <= 0:
+ logger.warning(
+ f"Invalid extrema_smoothing sigma {sigma!r}: must be a finite number > 0, using default {DEFAULTS_EXTREMA_SMOOTHING['sigma']!r}"
+ )
+ sigma = DEFAULTS_EXTREMA_SMOOTHING["sigma"]
+
+ return {
+ "method": method,
+ "window_candles": window_candles,
+ "beta": beta,
+ "polyorder": polyorder,
+ "mode": mode,
+ "sigma": sigma,
+ }
@property
def trade_price_target_method(self) -> str:
logger.info("Extrema Weighting:")
logger.info(f" strategy: {self.extrema_weighting['strategy']}")
- formatted_source_weights = {
- k: format_number(v)
- for k, v in self.extrema_weighting["source_weights"].items()
- }
- logger.info(f" source_weights: {formatted_source_weights}")
- logger.info(f" aggregation: {self.extrema_weighting['aggregation']}")
- logger.info(
- f" aggregation_normalization: {self.extrema_weighting['aggregation_normalization']}"
- )
logger.info(f" standardization: {self.extrema_weighting['standardization']}")
logger.info(
f" robust_quantiles: ({format_number(self.extrema_weighting['robust_quantiles'][0])}, {format_number(self.extrema_weighting['robust_quantiles'][1])})"
logger.info(
f" sigmoid_scale: {format_number(self.extrema_weighting['sigmoid_scale'])}"
)
- logger.info(
- f" softmax_temperature: {format_number(self.extrema_weighting['softmax_temperature'])}"
- )
- logger.info(f" rank_method: {self.extrema_weighting['rank_method']}")
logger.info(f" gamma: {format_number(self.extrema_weighting['gamma'])}")
logger.info("Extrema Smoothing:")
)
return self.get_label_natr_multiplier(pair) * fraction
- @staticmethod
- def _get_extrema_weighting_params(
- extrema_weighting: dict[str, Any],
- ) -> dict[str, Any]:
- # Strategy
- strategy = str(
- extrema_weighting.get("strategy", DEFAULTS_EXTREMA_WEIGHTING["strategy"])
- )
- if strategy not in set(WEIGHT_STRATEGIES):
- logger.warning(
- f"Invalid extrema_weighting strategy {strategy!r}, supported: {', '.join(WEIGHT_STRATEGIES)}, using default {WEIGHT_STRATEGIES[0]!r}"
- )
- strategy = WEIGHT_STRATEGIES[0]
-
- # Phase 1: Standardization
- standardization = str(
- extrema_weighting.get(
- "standardization", DEFAULTS_EXTREMA_WEIGHTING["standardization"]
- )
- )
- if standardization not in set(STANDARDIZATION_TYPES):
- logger.warning(
- f"Invalid extrema_weighting standardization {standardization!r}, supported: {', '.join(STANDARDIZATION_TYPES)}, using default {STANDARDIZATION_TYPES[0]!r}"
- )
- standardization = STANDARDIZATION_TYPES[0]
-
- robust_quantiles = extrema_weighting.get(
- "robust_quantiles", DEFAULTS_EXTREMA_WEIGHTING["robust_quantiles"]
- )
- if (
- not isinstance(robust_quantiles, (list, tuple))
- or len(robust_quantiles) != 2
- or not all(
- isinstance(q, (int, float)) and np.isfinite(q) and 0 <= q <= 1
- for q in robust_quantiles
- )
- or robust_quantiles[0] >= robust_quantiles[1]
- ):
- logger.warning(
- f"Invalid extrema_weighting robust_quantiles {robust_quantiles!r}: must be (q1, q3) with 0 <= q1 < q3 <= 1, using default {DEFAULTS_EXTREMA_WEIGHTING['robust_quantiles']!r}"
- )
- robust_quantiles = DEFAULTS_EXTREMA_WEIGHTING["robust_quantiles"]
- else:
- robust_quantiles = (
- float(robust_quantiles[0]),
- float(robust_quantiles[1]),
- )
-
- mmad_scaling_factor = extrema_weighting.get(
- "mmad_scaling_factor", DEFAULTS_EXTREMA_WEIGHTING["mmad_scaling_factor"]
- )
- if (
- not isinstance(mmad_scaling_factor, (int, float))
- or not np.isfinite(mmad_scaling_factor)
- or mmad_scaling_factor <= 0
- ):
- logger.warning(
- f"Invalid extrema_weighting mmad_scaling_factor {mmad_scaling_factor!r}: must be a finite number > 0, using default {DEFAULTS_EXTREMA_WEIGHTING['mmad_scaling_factor']!r}"
- )
- mmad_scaling_factor = DEFAULTS_EXTREMA_WEIGHTING["mmad_scaling_factor"]
-
- # Phase 2: Normalization
- normalization = str(
- extrema_weighting.get(
- "normalization", DEFAULTS_EXTREMA_WEIGHTING["normalization"]
- )
- )
- if normalization not in set(NORMALIZATION_TYPES):
- logger.warning(
- f"Invalid extrema_weighting normalization {normalization!r}, supported: {', '.join(NORMALIZATION_TYPES)}, using default {NORMALIZATION_TYPES[0]!r}"
- )
- normalization = NORMALIZATION_TYPES[0]
-
- if (
- strategy != WEIGHT_STRATEGIES[0] # "none"
- and standardization != STANDARDIZATION_TYPES[0] # "none"
- and normalization
- in {
- NORMALIZATION_TYPES[3], # "l1"
- NORMALIZATION_TYPES[4], # "l2"
- NORMALIZATION_TYPES[6], # "none"
- }
- ):
- raise ValueError(
- f"Invalid extrema_weighting configuration: "
- f"standardization={standardization!r} with normalization={normalization!r} "
- "can produce negative weights and flip ternary extrema labels. "
- f"Use normalization in {{{NORMALIZATION_TYPES[0]!r},{NORMALIZATION_TYPES[1]!r},{NORMALIZATION_TYPES[2]!r},{NORMALIZATION_TYPES[5]!r}}} "
- f"or set standardization={STANDARDIZATION_TYPES[0]!r}"
- )
-
- minmax_range = extrema_weighting.get(
- "minmax_range", DEFAULTS_EXTREMA_WEIGHTING["minmax_range"]
- )
- if (
- not isinstance(minmax_range, (list, tuple))
- or len(minmax_range) != 2
- or not all(
- isinstance(x, (int, float)) and np.isfinite(x) for x in minmax_range
- )
- or minmax_range[0] >= minmax_range[1]
- ):
- logger.warning(
- f"Invalid extrema_weighting minmax_range {minmax_range!r}: must be (min, max) with min < max, using default {DEFAULTS_EXTREMA_WEIGHTING['minmax_range']!r}"
- )
- minmax_range = DEFAULTS_EXTREMA_WEIGHTING["minmax_range"]
- else:
- minmax_range = (
- float(minmax_range[0]),
- float(minmax_range[1]),
- )
-
- sigmoid_scale = extrema_weighting.get(
- "sigmoid_scale", DEFAULTS_EXTREMA_WEIGHTING["sigmoid_scale"]
- )
- if (
- not isinstance(sigmoid_scale, (int, float))
- or not np.isfinite(sigmoid_scale)
- or sigmoid_scale <= 0
- ):
- logger.warning(
- f"Invalid extrema_weighting sigmoid_scale {sigmoid_scale!r}: must be a finite number > 0, using default {DEFAULTS_EXTREMA_WEIGHTING['sigmoid_scale']!r}"
- )
- sigmoid_scale = DEFAULTS_EXTREMA_WEIGHTING["sigmoid_scale"]
-
- softmax_temperature = extrema_weighting.get(
- "softmax_temperature", DEFAULTS_EXTREMA_WEIGHTING["softmax_temperature"]
- )
- if (
- not isinstance(softmax_temperature, (int, float))
- or not np.isfinite(softmax_temperature)
- or softmax_temperature <= 0
- ):
- logger.warning(
- f"Invalid extrema_weighting softmax_temperature {softmax_temperature!r}: must be a finite number > 0, using default {DEFAULTS_EXTREMA_WEIGHTING['softmax_temperature']!r}"
- )
- softmax_temperature = DEFAULTS_EXTREMA_WEIGHTING["softmax_temperature"]
-
- rank_method = str(
- extrema_weighting.get(
- "rank_method", DEFAULTS_EXTREMA_WEIGHTING["rank_method"]
- )
- )
- if rank_method not in set(RANK_METHODS):
- logger.warning(
- f"Invalid extrema_weighting rank_method {rank_method!r}, supported: {', '.join(RANK_METHODS)}, using default {RANK_METHODS[0]!r}"
- )
- rank_method = RANK_METHODS[0]
-
- # Phase 3: Post-processing
- gamma = extrema_weighting.get("gamma", DEFAULTS_EXTREMA_WEIGHTING["gamma"])
- if (
- not isinstance(gamma, (int, float))
- or not np.isfinite(gamma)
- or not (0 < gamma <= 10.0)
- ):
- logger.warning(
- f"Invalid extrema_weighting gamma {gamma!r}: must be in range (0, 10], using default {DEFAULTS_EXTREMA_WEIGHTING['gamma']!r}"
- )
- gamma = DEFAULTS_EXTREMA_WEIGHTING["gamma"]
-
- source_weights = extrema_weighting.get(
- "source_weights", DEFAULTS_EXTREMA_WEIGHTING["source_weights"]
- )
- if not isinstance(source_weights, dict):
- logger.warning(
- f"Invalid extrema_weighting source_weights {source_weights!r}: must be a dict of source name to weight, using default {DEFAULTS_EXTREMA_WEIGHTING['source_weights']!r}"
- )
- source_weights = DEFAULTS_EXTREMA_WEIGHTING["source_weights"]
- else:
- sanitized_source_weights: dict[str, float] = {}
- for source, weight in source_weights.items():
- if source not in set(WEIGHT_SOURCES):
- continue
- if (
- not isinstance(weight, (int, float))
- or not np.isfinite(weight)
- or weight < 0
- ):
- continue
- sanitized_source_weights[str(source)] = float(weight)
- if not sanitized_source_weights:
- logger.warning(
- f"Invalid extrema_weighting source_weights {source_weights!r}: empty after sanitization, using default {DEFAULTS_EXTREMA_WEIGHTING['source_weights']!r}"
- )
- source_weights = DEFAULTS_EXTREMA_WEIGHTING["source_weights"]
- else:
- source_weights = sanitized_source_weights
- aggregation = str(
- extrema_weighting.get(
- "aggregation",
- DEFAULTS_EXTREMA_WEIGHTING["aggregation"],
- )
- )
- if aggregation not in set(WEIGHT_AGGREGATIONS):
- logger.warning(
- f"Invalid extrema_weighting aggregation {aggregation!r}, supported: {', '.join(WEIGHT_AGGREGATIONS)}, using default {WEIGHT_AGGREGATIONS[0]!r}"
- )
- aggregation = DEFAULTS_EXTREMA_WEIGHTING["aggregation"]
- aggregation_normalization = str(
- extrema_weighting.get(
- "aggregation_normalization",
- DEFAULTS_EXTREMA_WEIGHTING["aggregation_normalization"],
- )
- )
- if aggregation_normalization not in set(NORMALIZATION_TYPES):
- logger.warning(
- f"Invalid extrema_weighting aggregation_normalization {aggregation_normalization!r}, supported: {', '.join(NORMALIZATION_TYPES)}, using default {NORMALIZATION_TYPES[6]!r}"
- )
- aggregation_normalization = DEFAULTS_EXTREMA_WEIGHTING[
- "aggregation_normalization"
- ]
-
- if aggregation == WEIGHT_AGGREGATIONS[1] and normalization in {
- NORMALIZATION_TYPES[0], # "minmax"
- NORMALIZATION_TYPES[5], # "rank"
- }:
- logger.warning(
- f"extrema_weighting aggregation='{aggregation}' with normalization='{normalization}' "
- "can produce zero weights (gmean collapses to 0 when any source has min value). "
- f"Consider using normalization='{NORMALIZATION_TYPES[1]}' (sigmoid) or aggregation='{WEIGHT_AGGREGATIONS[0]}' (weighted_sum)."
- )
-
- return {
- "strategy": strategy,
- "source_weights": source_weights,
- "aggregation": aggregation,
- "aggregation_normalization": aggregation_normalization,
- # Phase 1: Standardization
- "standardization": standardization,
- "robust_quantiles": robust_quantiles,
- "mmad_scaling_factor": mmad_scaling_factor,
- # Phase 2: Normalization
- "normalization": normalization,
- "minmax_range": minmax_range,
- "sigmoid_scale": sigmoid_scale,
- "softmax_temperature": softmax_temperature,
- "rank_method": rank_method,
- # Phase 3: Post-processing
- "gamma": gamma,
- }
-
- @staticmethod
- def _get_extrema_smoothing_params(
- extrema_smoothing: dict[str, Any],
- ) -> dict[str, Any]:
- smoothing_method = str(
- extrema_smoothing.get("method", DEFAULTS_EXTREMA_SMOOTHING["method"])
- )
- if smoothing_method not in set(SMOOTHING_METHODS):
- logger.warning(
- f"Invalid extrema_smoothing method {smoothing_method!r}, supported: {', '.join(SMOOTHING_METHODS)}, using default {SMOOTHING_METHODS[0]!r}"
- )
- smoothing_method = SMOOTHING_METHODS[0]
-
- smoothing_window_candles = update_config_value(
- extrema_smoothing,
- new_key="window_candles",
- old_key="window",
- default=DEFAULTS_EXTREMA_SMOOTHING["window_candles"],
- logger=logger,
- new_path="freqai.extrema_smoothing.window_candles",
- old_path="freqai.extrema_smoothing.window",
- )
- if (
- not isinstance(smoothing_window_candles, int)
- or smoothing_window_candles < 3
- ):
- logger.warning(
- f"Invalid extrema_smoothing window_candles {smoothing_window_candles!r}: must be an integer >= 3, using default {DEFAULTS_EXTREMA_SMOOTHING['window_candles']!r}"
- )
- smoothing_window_candles = int(DEFAULTS_EXTREMA_SMOOTHING["window_candles"])
-
- smoothing_beta = extrema_smoothing.get(
- "beta", DEFAULTS_EXTREMA_SMOOTHING["beta"]
- )
- if (
- not isinstance(smoothing_beta, (int, float))
- or not np.isfinite(smoothing_beta)
- or smoothing_beta <= 0
- ):
- logger.warning(
- f"Invalid extrema_smoothing beta {smoothing_beta!r}: must be a finite number > 0, using default {DEFAULTS_EXTREMA_SMOOTHING['beta']!r}"
- )
- smoothing_beta = DEFAULTS_EXTREMA_SMOOTHING["beta"]
-
- smoothing_polyorder = extrema_smoothing.get(
- "polyorder", DEFAULTS_EXTREMA_SMOOTHING["polyorder"]
- )
- if not isinstance(smoothing_polyorder, int) or smoothing_polyorder < 1:
- logger.warning(
- f"Invalid extrema_smoothing polyorder {smoothing_polyorder!r}: must be an integer >= 1, using default {DEFAULTS_EXTREMA_SMOOTHING['polyorder']!r}"
- )
- smoothing_polyorder = DEFAULTS_EXTREMA_SMOOTHING["polyorder"]
-
- smoothing_mode = str(
- extrema_smoothing.get("mode", DEFAULTS_EXTREMA_SMOOTHING["mode"])
- )
- if smoothing_mode not in set(SMOOTHING_MODES):
- logger.warning(
- f"Invalid extrema_smoothing mode {smoothing_mode!r}, supported: {', '.join(SMOOTHING_MODES)}, using default {SMOOTHING_MODES[0]!r}"
- )
- smoothing_mode = SMOOTHING_MODES[0]
-
- smoothing_sigma = extrema_smoothing.get(
- "sigma", DEFAULTS_EXTREMA_SMOOTHING["sigma"]
- )
- if (
- not isinstance(smoothing_sigma, (int, float))
- or smoothing_sigma <= 0
- or not np.isfinite(smoothing_sigma)
- ):
- logger.warning(
- f"Invalid extrema_smoothing sigma {smoothing_sigma!r}: must be a finite number > 0, using default {DEFAULTS_EXTREMA_SMOOTHING['sigma']!r}"
- )
- smoothing_sigma = DEFAULTS_EXTREMA_SMOOTHING["sigma"]
-
- return {
- "method": smoothing_method,
- "window_candles": int(smoothing_window_candles),
- "beta": smoothing_beta,
- "polyorder": int(smoothing_polyorder),
- "mode": smoothing_mode,
- "sigma": float(smoothing_sigma),
- }
-
@staticmethod
@lru_cache(maxsize=128)
def _td_format(
speeds=pivots_speeds,
efficiency_ratios=pivots_efficiency_ratios,
volume_weighted_efficiency_ratios=pivots_volume_weighted_efficiency_ratios,
- source_weights=self.extrema_weighting["source_weights"],
strategy=self.extrema_weighting["strategy"],
- aggregation=self.extrema_weighting["aggregation"],
- aggregation_normalization=self.extrema_weighting[
- "aggregation_normalization"
- ],
- standardization=self.extrema_weighting["standardization"],
- robust_quantiles=self.extrema_weighting["robust_quantiles"],
- mmad_scaling_factor=self.extrema_weighting["mmad_scaling_factor"],
- normalization=self.extrema_weighting["normalization"],
- minmax_range=self.extrema_weighting["minmax_range"],
- sigmoid_scale=self.extrema_weighting["sigmoid_scale"],
- softmax_temperature=self.extrema_weighting["softmax_temperature"],
- rank_method=self.extrema_weighting["rank_method"],
- gamma=self.extrema_weighting["gamma"],
)
plot_eps = weighted_extrema.abs().where(weighted_extrema.ne(0.0)).min()
import pandas as pd
import scipy as sp
import talib.abstract as ta
+from ExtremaWeightingTransformer import (
+ DEFAULTS_EXTREMA_WEIGHTING,
+ NORMALIZATION_TYPES,
+ STANDARDIZATION_TYPES,
+ WEIGHT_STRATEGIES,
+ WeightStrategy,
+)
from numpy.typing import NDArray
from scipy.ndimage import gaussian_filter1d
-from scipy.stats import gmean, percentileofscore
+from scipy.stats import percentileofscore
from technical import qtpylib
if TYPE_CHECKING:
T = TypeVar("T", pd.Series, float)
-WeightStrategy = Literal[
- "none",
- "amplitude",
- "amplitude_threshold_ratio",
- "volume_rate",
- "speed",
- "efficiency_ratio",
- "volume_weighted_efficiency_ratio",
- "hybrid",
-]
-WEIGHT_STRATEGIES: Final[tuple[WeightStrategy, ...]] = (
- "none",
- "amplitude",
- "amplitude_threshold_ratio",
- "volume_rate",
- "speed",
- "efficiency_ratio",
- "volume_weighted_efficiency_ratio",
- "hybrid",
-)
-
-WeightSource = Literal[
- "amplitude",
- "amplitude_threshold_ratio",
- "volume_rate",
- "speed",
- "efficiency_ratio",
- "volume_weighted_efficiency_ratio",
-]
-WEIGHT_SOURCES: Final[tuple[WeightSource, ...]] = (
- "amplitude",
- "amplitude_threshold_ratio",
- "volume_rate",
- "speed",
- "efficiency_ratio",
- "volume_weighted_efficiency_ratio",
-)
-
-WeightAggregation = Literal["weighted_sum", "geometric_mean"]
-WEIGHT_AGGREGATIONS: Final[tuple[WeightAggregation, ...]] = (
- "weighted_sum",
- "geometric_mean",
-)
-
EXTREMA_COLUMN: Final = "&s-extrema"
MAXIMA_THRESHOLD_COLUMN: Final = "&s-maxima_threshold"
MINIMA_THRESHOLD_COLUMN: Final = "&s-minima_threshold"
-StandardizationType = Literal["none", "zscore", "robust", "mmad"]
-STANDARDIZATION_TYPES: Final[tuple[StandardizationType, ...]] = (
- "none", # 0 - No standardization
- "zscore", # 1 - (w - μ) / σ
- "robust", # 2 - (w - median) / IQR
- "mmad", # 3 - (w - median) / MAD
-)
-
-NormalizationType = Literal["minmax", "sigmoid", "softmax", "l1", "l2", "rank", "none"]
-NORMALIZATION_TYPES: Final[tuple[NormalizationType, ...]] = (
- "minmax", # 0 - (w - min) / (max - min)
- "sigmoid", # 1 - 1 / (1 + exp(-scale × w))
- "softmax", # 2 - exp(w/T) / Σexp(w/T)
- "l1", # 3 - w / Σ|w|
- "l2", # 4 - w / ||w||₂
- "rank", # 5 - (rank(w) - 1) / (n - 1)
- "none", # 6 - w (identity)
-)
-
-RankMethod = Literal["average", "min", "max", "dense", "ordinal"]
-RANK_METHODS: Final[tuple[RankMethod, ...]] = (
- "average",
- "min",
- "max",
- "dense",
- "ordinal",
-)
-
SmoothingKernel = Literal["gaussian", "kaiser", "triang"]
SMOOTHING_KERNELS: Final[tuple[SmoothingKernel, ...]] = (
"gaussian",
"sigma": 1.0,
}
-DEFAULTS_EXTREMA_WEIGHTING: Final[dict[str, Any]] = {
- "strategy": WEIGHT_STRATEGIES[0], # "none"
- "source_weights": {s: 1.0 for s in WEIGHT_SOURCES},
- "aggregation": WEIGHT_AGGREGATIONS[0], # "weighted_sum"
- "aggregation_normalization": NORMALIZATION_TYPES[6], # "none"
+DEFAULT_EXTREMA_WEIGHT: Final[float] = 1.0
+
+DEFAULT_FIT_LIVE_PREDICTIONS_CANDLES: Final[int] = 100
+
+
+def get_extrema_weighting_config(
+ extrema_weighting: dict[str, Any],
+ logger: Logger,
+) -> dict[str, Any]:
+    """
+    Validate the extrema_weighting configuration, falling back to defaults
+    (with a warning) on invalid values.
+    """
+    strategy = str(
+        extrema_weighting.get("strategy", DEFAULTS_EXTREMA_WEIGHTING["strategy"])
+    )
+ if strategy not in set(WEIGHT_STRATEGIES):
+ logger.warning(
+ f"Invalid extrema_weighting strategy {strategy!r}, supported: {', '.join(WEIGHT_STRATEGIES)}, using default {WEIGHT_STRATEGIES[0]!r}"
+ )
+ strategy = WEIGHT_STRATEGIES[0]
+
# Phase 1: Standardization
- "standardization": STANDARDIZATION_TYPES[0], # "none"
- "robust_quantiles": (0.25, 0.75),
- "mmad_scaling_factor": 1.4826,
+    standardization = str(
+        extrema_weighting.get(
+            "standardization", DEFAULTS_EXTREMA_WEIGHTING["standardization"]
+        )
+    )
+ if standardization not in set(STANDARDIZATION_TYPES):
+ logger.warning(
+ f"Invalid extrema_weighting standardization {standardization!r}, supported: {', '.join(STANDARDIZATION_TYPES)}, using default {STANDARDIZATION_TYPES[0]!r}"
+ )
+ standardization = STANDARDIZATION_TYPES[0]
+
+ robust_quantiles = extrema_weighting.get(
+ "robust_quantiles", DEFAULTS_EXTREMA_WEIGHTING["robust_quantiles"]
+ )
+ if (
+ not isinstance(robust_quantiles, (list, tuple))
+ or len(robust_quantiles) != 2
+ or not all(
+ isinstance(q, (int, float)) and np.isfinite(q) and 0 <= q <= 1
+ for q in robust_quantiles
+ )
+ or robust_quantiles[0] >= robust_quantiles[1]
+ ):
+ logger.warning(
+ f"Invalid extrema_weighting robust_quantiles {robust_quantiles!r}: must be (q1, q3) with 0 <= q1 < q3 <= 1, using default {DEFAULTS_EXTREMA_WEIGHTING['robust_quantiles']!r}"
+ )
+ robust_quantiles = DEFAULTS_EXTREMA_WEIGHTING["robust_quantiles"]
+ else:
+        robust_quantiles = (
+            float(robust_quantiles[0]),
+            float(robust_quantiles[1]),
+        )
+
+ mmad_scaling_factor = extrema_weighting.get(
+ "mmad_scaling_factor", DEFAULTS_EXTREMA_WEIGHTING["mmad_scaling_factor"]
+ )
+ if (
+ not isinstance(mmad_scaling_factor, (int, float))
+ or not np.isfinite(mmad_scaling_factor)
+ or mmad_scaling_factor <= 0
+ ):
+ logger.warning(
+ f"Invalid extrema_weighting mmad_scaling_factor {mmad_scaling_factor!r}: must be a finite number > 0, using default {DEFAULTS_EXTREMA_WEIGHTING['mmad_scaling_factor']!r}"
+ )
+ mmad_scaling_factor = DEFAULTS_EXTREMA_WEIGHTING["mmad_scaling_factor"]
+
# Phase 2: Normalization
- "normalization": NORMALIZATION_TYPES[0], # "minmax"
- "minmax_range": (0.0, 1.0),
- "sigmoid_scale": 1.0,
- "softmax_temperature": 1.0,
- "rank_method": RANK_METHODS[0], # "average"
- # Phase 3: Post-processing
- "gamma": 1.0,
-}
+    normalization = str(
+        extrema_weighting.get(
+            "normalization", DEFAULTS_EXTREMA_WEIGHTING["normalization"]
+        )
+    )
+ if normalization not in set(NORMALIZATION_TYPES):
+ logger.warning(
+ f"Invalid extrema_weighting normalization {normalization!r}, supported: {', '.join(NORMALIZATION_TYPES)}, using default {NORMALIZATION_TYPES[0]!r}"
+ )
+ normalization = NORMALIZATION_TYPES[0]
-DEFAULT_EXTREMA_WEIGHT: Final[float] = 1.0
+ if (
+ strategy != WEIGHT_STRATEGIES[0] # "none"
+ and standardization != STANDARDIZATION_TYPES[0] # "none"
+ and normalization == NORMALIZATION_TYPES[2] # "none"
+ ):
+ logger.warning(
+ f"extrema_weighting standardization={standardization!r} with normalization={normalization!r} "
+ "can produce negative weights and flip ternary extrema labels. "
+ f"Consider using normalization in {{{NORMALIZATION_TYPES[0]!r},{NORMALIZATION_TYPES[1]!r}}} "
+ f"or set standardization={STANDARDIZATION_TYPES[0]!r}"
+ )
-DEFAULT_FIT_LIVE_PREDICTIONS_CANDLES: Final[int] = 100
+ minmax_range = extrema_weighting.get(
+ "minmax_range", DEFAULTS_EXTREMA_WEIGHTING["minmax_range"]
+ )
+ if (
+ not isinstance(minmax_range, (list, tuple))
+ or len(minmax_range) != 2
+ or not all(isinstance(x, (int, float)) and np.isfinite(x) for x in minmax_range)
+ or minmax_range[0] >= minmax_range[1]
+ ):
+ logger.warning(
+ f"Invalid extrema_weighting minmax_range {minmax_range!r}: must be (min, max) with min < max, using default {DEFAULTS_EXTREMA_WEIGHTING['minmax_range']!r}"
+ )
+ minmax_range = DEFAULTS_EXTREMA_WEIGHTING["minmax_range"]
+ else:
+        minmax_range = (
+            float(minmax_range[0]),
+            float(minmax_range[1]),
+        )
+
+ sigmoid_scale = extrema_weighting.get(
+ "sigmoid_scale", DEFAULTS_EXTREMA_WEIGHTING["sigmoid_scale"]
+ )
+ if (
+ not isinstance(sigmoid_scale, (int, float))
+ or not np.isfinite(sigmoid_scale)
+ or sigmoid_scale <= 0
+ ):
+ logger.warning(
+ f"Invalid extrema_weighting sigmoid_scale {sigmoid_scale!r}: must be a finite number > 0, using default {DEFAULTS_EXTREMA_WEIGHTING['sigmoid_scale']!r}"
+ )
+ sigmoid_scale = DEFAULTS_EXTREMA_WEIGHTING["sigmoid_scale"]
+
+ # Phase 3: Post-processing
+ gamma = extrema_weighting.get("gamma", DEFAULTS_EXTREMA_WEIGHTING["gamma"])
+ if (
+ not isinstance(gamma, (int, float))
+ or not np.isfinite(gamma)
+ or not (0 < gamma <= 10.0)
+ ):
+ logger.warning(
+ f"Invalid extrema_weighting gamma {gamma!r}: must be in range (0, 10], using default {DEFAULTS_EXTREMA_WEIGHTING['gamma']!r}"
+ )
+ gamma = DEFAULTS_EXTREMA_WEIGHTING["gamma"]
+
+ return {
+ "strategy": strategy,
+ # Phase 1: Standardization
+ "standardization": standardization,
+ "robust_quantiles": robust_quantiles,
+ "mmad_scaling_factor": mmad_scaling_factor,
+ # Phase 2: Normalization
+ "normalization": normalization,
+ "minmax_range": minmax_range,
+ "sigmoid_scale": sigmoid_scale,
+ # Phase 3: Post-processing
+ "gamma": gamma,
+ }
def get_distance(p1: T, p2: T) -> T:
)
-def _standardize_zscore(weights: NDArray[np.floating]) -> NDArray[np.floating]:
- """
- Z-score standardization: (w - μ) / σ
- Returns: mean≈0, std≈1
- """
- if weights.size == 0:
- return weights
-
- weights = weights.astype(float, copy=False)
-
- if np.isnan(weights).any():
- return np.zeros_like(weights, dtype=float)
-
- if weights.size == 1 or np.allclose(weights, weights[0]):
- return np.zeros_like(weights, dtype=float)
-
- try:
- z_scores = sp.stats.zscore(weights, ddof=1, nan_policy="raise")
- except Exception:
- return np.zeros_like(weights, dtype=float)
-
- if np.isnan(z_scores).any() or not np.isfinite(z_scores).all():
- return np.zeros_like(weights, dtype=float)
-
- return z_scores
-
-
-def _standardize_robust(
- weights: NDArray[np.floating],
- quantiles: tuple[float, float] = DEFAULTS_EXTREMA_WEIGHTING["robust_quantiles"],
-) -> NDArray[np.floating]:
- """
- Robust standardization: (w - median) / IQR
- Returns: median≈0, IQR≈1 (outlier-resistant)
- """
- weights = weights.astype(float, copy=False)
- if np.isnan(weights).any():
- return np.zeros_like(weights, dtype=float)
-
- median = np.nanmedian(weights)
- q1, q3 = np.nanquantile(weights, quantiles)
- iqr = q3 - q1
-
- if np.isclose(iqr, 0.0):
- return np.zeros_like(weights, dtype=float)
-
- return (weights - median) / iqr
-
-
-def _standardize_mmad(
- weights: NDArray[np.floating],
- scaling_factor: float = DEFAULTS_EXTREMA_WEIGHTING["mmad_scaling_factor"],
-) -> NDArray[np.floating]:
- """
- MMAD standardization: (w - median) / MAD
- Returns: median≈0, MAD≈1 (outlier-resistant)
- """
- weights = weights.astype(float, copy=False)
- if np.isnan(weights).any():
- return np.zeros_like(weights, dtype=float)
-
- median = np.nanmedian(weights)
- mad = np.nanmedian(np.abs(weights - median))
-
- if np.isclose(mad, 0.0):
- return np.zeros_like(weights, dtype=float)
-
- return (weights - median) / (scaling_factor * mad)
-
-
-def standardize_weights(
- weights: NDArray[np.floating],
- method: StandardizationType = STANDARDIZATION_TYPES[0],
- robust_quantiles: tuple[float, float] = DEFAULTS_EXTREMA_WEIGHTING[
- "robust_quantiles"
- ],
- mmad_scaling_factor: float = DEFAULTS_EXTREMA_WEIGHTING["mmad_scaling_factor"],
-) -> NDArray[np.floating]:
- """
- Phase 1: Standardize weights (centering/scaling, not [0,1] mapping).
- Methods: "none", "zscore", "robust", "mmad"
- """
- if weights.size == 0:
- return weights
-
- if method == STANDARDIZATION_TYPES[0]: # "none"
- return weights
-
- elif method == STANDARDIZATION_TYPES[1]: # "zscore"
- return _standardize_zscore(weights)
-
- elif method == STANDARDIZATION_TYPES[2]: # "robust"
- return _standardize_robust(weights, quantiles=robust_quantiles)
-
- elif method == STANDARDIZATION_TYPES[3]: # "mmad"
- return _standardize_mmad(weights, scaling_factor=mmad_scaling_factor)
-
- else:
- raise ValueError(
- f"Invalid standardization method {method!r}. "
- f"Supported: {', '.join(STANDARDIZATION_TYPES)}"
- )
-
-
-def _normalize_sigmoid(
- weights: NDArray[np.floating],
- scale: float = DEFAULTS_EXTREMA_WEIGHTING["sigmoid_scale"],
-) -> NDArray[np.floating]:
- """
- Sigmoid normalization: 1 / (1 + exp(-scale × w))
- Returns: [0, 1] with soft compression
- """
- weights = weights.astype(float, copy=False)
- if np.isnan(weights).any():
- return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
-
- if scale <= 0 or not np.isfinite(scale):
- scale = 1.0
-
- return sp.special.expit(scale * weights)
-
-
-def _normalize_minmax(
- weights: NDArray[np.floating],
- range: tuple[float, float] = (0.0, 1.0),
-) -> NDArray[np.floating]:
- """
- MinMax normalization: range_min + [(w - min) / (max - min)] × (range_max - range_min)
- Returns: [range_min, range_max]
- """
- weights = weights.astype(float, copy=False)
- if np.isnan(weights).any():
- return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
-
- w_min = np.min(weights)
- w_max = np.max(weights)
-
- if not (np.isfinite(w_min) and np.isfinite(w_max)):
- return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
-
- w_range = w_max - w_min
- if np.isclose(w_range, 0.0):
- return np.full_like(weights, midpoint(range[0], range[1]), dtype=float)
-
- return range[0] + ((weights - w_min) / w_range) * (range[1] - range[0])
-
-
-def _normalize_l1(weights: NDArray[np.floating]) -> NDArray[np.floating]:
- """L1 normalization: w / Σ|w| → Σ|w| = 1"""
- weights_sum = np.nansum(np.abs(weights))
- if weights_sum <= 0 or not np.isfinite(weights_sum):
- return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
- return weights / weights_sum
-
-
-def _normalize_l2(weights: NDArray[np.floating]) -> NDArray[np.floating]:
- """L2 normalization: w / ||w||₂ → ||w||₂ = 1"""
- weights = weights.astype(float, copy=False)
- if np.isnan(weights).any():
- return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
-
- l2_norm = np.linalg.norm(weights, ord=2)
-
- if l2_norm <= 0 or not np.isfinite(l2_norm):
- return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
-
- return weights / l2_norm
-
-
-def _normalize_softmax(
- weights: NDArray[np.floating],
- temperature: float = DEFAULTS_EXTREMA_WEIGHTING["softmax_temperature"],
-) -> NDArray[np.floating]:
- """Softmax normalization: exp(w/T) / Σexp(w/T) → Σw = 1, range [0,1]"""
- weights = weights.astype(float, copy=False)
- if np.isnan(weights).any():
- return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
- if not np.isclose(temperature, 1.0) and temperature > 0:
- weights = weights / temperature
- return sp.special.softmax(weights)
-
-
-def _normalize_rank(
- weights: NDArray[np.floating],
- method: RankMethod = DEFAULTS_EXTREMA_WEIGHTING["rank_method"],
-) -> NDArray[np.floating]:
- """Rank normalization: [rank(w) - 1] / (n - 1) → [0, 1] uniformly distributed"""
- weights = weights.astype(float, copy=False)
- if np.isnan(weights).any():
- return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
-
- ranks = sp.stats.rankdata(weights, method=method)
- n = len(weights)
- if n <= 1:
- return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
-
- return (ranks - 1) / (n - 1)
-
-
def _impute_weights(
weights: NDArray[np.floating],
    *,
    default_weight: float = DEFAULT_EXTREMA_WEIGHT,
) -> NDArray[np.floating]:
weights = weights.astype(float, copy=True)
+ if weights.size == 0:
+ return np.full_like(weights, default_weight, dtype=float)
+
# Weights computed by `zigzag` can be NaN on boundary pivots
- if len(weights) > 0:
- if not np.isfinite(weights[0]):
- weights[0] = 0.0
- if not np.isfinite(weights[-1]):
- weights[-1] = 0.0
+ if not np.isfinite(weights[0]):
+ weights[0] = 0.0
+ if not np.isfinite(weights[-1]):
+ weights[-1] = 0.0
finite_mask = np.isfinite(weights)
if not finite_mask.any():
return weights
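The boundary handling introduced above can be illustrated in isolation: non-finite first/last pivot weights (which `zigzag` can produce) are zeroed before the interior imputation runs. This is a standalone sketch under that assumption, not the full `_impute_weights`:

```python
import numpy as np


def zero_nonfinite_boundaries(weights: np.ndarray) -> np.ndarray:
    """Copy weights, replacing a non-finite first/last entry with 0.0."""
    weights = weights.astype(float, copy=True)
    if weights.size == 0:
        return weights
    if not np.isfinite(weights[0]):
        weights[0] = 0.0
    if not np.isfinite(weights[-1]):
        weights[-1] = 0.0
    return weights
```

Interior NaNs are deliberately left untouched here; in the function above they are handled separately via the `finite_mask` step.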
-def normalize_weights(
- weights: NDArray[np.floating],
- # Phase 1: Standardization
- standardization: StandardizationType = DEFAULTS_EXTREMA_WEIGHTING[
- "standardization"
- ],
- robust_quantiles: tuple[float, float] = DEFAULTS_EXTREMA_WEIGHTING[
- "robust_quantiles"
- ],
- mmad_scaling_factor: float = DEFAULTS_EXTREMA_WEIGHTING["mmad_scaling_factor"],
- # Phase 2: Normalization
- normalization: NormalizationType = DEFAULTS_EXTREMA_WEIGHTING["normalization"],
- minmax_range: tuple[float, float] = DEFAULTS_EXTREMA_WEIGHTING["minmax_range"],
- sigmoid_scale: float = DEFAULTS_EXTREMA_WEIGHTING["sigmoid_scale"],
- softmax_temperature: float = DEFAULTS_EXTREMA_WEIGHTING["softmax_temperature"],
- rank_method: RankMethod = DEFAULTS_EXTREMA_WEIGHTING["rank_method"],
- # Phase 3: Post-processing
- gamma: float = DEFAULTS_EXTREMA_WEIGHTING["gamma"],
-) -> NDArray[np.floating]:
- """
- 3-phase weights normalization:
- 1. Standardization: zscore (w-μ)/σ | robust (w-median)/IQR | mmad (w-median)/MAD | none
- 2. Normalization: minmax, sigmoid, softmax, l1, l2, rank, none
- 3. Post-processing: gamma correction w^γ
- """
- if weights.size == 0:
- return weights
-
- weights = _impute_weights(
- weights,
- default_weight=DEFAULT_EXTREMA_WEIGHT,
- )
-
- # Phase 1: Standardization
- standardized_weights = standardize_weights(
- weights,
- method=standardization,
- robust_quantiles=robust_quantiles,
- mmad_scaling_factor=mmad_scaling_factor,
- )
-
- # Phase 2: Normalization
- if normalization == NORMALIZATION_TYPES[6]: # "none"
- normalized_weights = standardized_weights
- elif normalization == NORMALIZATION_TYPES[0]: # "minmax"
- normalized_weights = _normalize_minmax(standardized_weights, range=minmax_range)
- elif normalization == NORMALIZATION_TYPES[1]: # "sigmoid"
- normalized_weights = _normalize_sigmoid(
- standardized_weights, scale=sigmoid_scale
- )
- elif normalization == NORMALIZATION_TYPES[2]: # "softmax"
- normalized_weights = _normalize_softmax(
- standardized_weights, temperature=softmax_temperature
- )
- elif normalization == NORMALIZATION_TYPES[3]: # "l1"
- normalized_weights = _normalize_l1(standardized_weights)
- elif normalization == NORMALIZATION_TYPES[4]: # "l2"
- normalized_weights = _normalize_l2(standardized_weights)
- elif normalization == NORMALIZATION_TYPES[5]: # "rank"
- normalized_weights = _normalize_rank(standardized_weights, method=rank_method)
- else:
- raise ValueError(
- f"Invalid normalization method {normalization!r}. "
- f"Supported: {', '.join(NORMALIZATION_TYPES)}"
- )
-
- # Phase 3: Post-processing
- if not np.isclose(gamma, 1.0) and np.isfinite(gamma) and gamma > 0:
- normalized_weights = np.power(np.abs(normalized_weights), gamma) * np.sign(
- normalized_weights
- )
-
- if not np.isfinite(normalized_weights).all():
- return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
-
- return normalized_weights
-
-
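For reference, the removed `normalize_weights` chained three phases: standardization, normalization, then gamma correction. A toy sketch of one configuration (zscore → minmax to [0, 1] → `w**gamma`), purely illustrative and not a drop-in replacement:

```python
import numpy as np


def toy_normalize(w: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """Zscore-standardize, minmax-scale to [0, 1], then apply gamma correction."""
    # Phase 1: zscore (w - mean) / std, degenerate inputs collapse to zeros
    std = w.std(ddof=1) if w.size > 1 else 0.0
    z = (w - w.mean()) / std if std > 0 else np.zeros_like(w, dtype=float)
    # Phase 2: minmax to [0, 1], constant inputs map to the midpoint
    span = z.max() - z.min()
    m = (z - z.min()) / span if span > 0 else np.full_like(z, 0.5)
    # Phase 3: gamma correction
    return np.power(m, gamma)
```

With `gamma > 1` small weights are suppressed relative to large ones; with `gamma < 1` they are boosted, which is the intent of the post-processing phase.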
def _build_weights_array(
n_extrema: int,
indices: list[int],
return weights_array
-def calculate_hybrid_extrema_weights(
- indices: list[int],
- amplitudes: list[float],
- amplitude_threshold_ratios: list[float],
- volume_rates: list[float],
- speeds: list[float],
- efficiency_ratios: list[float],
- volume_weighted_efficiency_ratios: list[float],
- source_weights: dict[str, float],
- aggregation: WeightAggregation = DEFAULTS_EXTREMA_WEIGHTING["aggregation"],
- aggregation_normalization: NormalizationType = DEFAULTS_EXTREMA_WEIGHTING[
- "aggregation_normalization"
- ],
- # Phase 1: Standardization
- standardization: StandardizationType = DEFAULTS_EXTREMA_WEIGHTING[
- "standardization"
- ],
- robust_quantiles: tuple[float, float] = DEFAULTS_EXTREMA_WEIGHTING[
- "robust_quantiles"
- ],
- mmad_scaling_factor: float = DEFAULTS_EXTREMA_WEIGHTING["mmad_scaling_factor"],
- # Phase 2: Normalization
- normalization: NormalizationType = DEFAULTS_EXTREMA_WEIGHTING["normalization"],
- minmax_range: tuple[float, float] = DEFAULTS_EXTREMA_WEIGHTING["minmax_range"],
- sigmoid_scale: float = DEFAULTS_EXTREMA_WEIGHTING["sigmoid_scale"],
- softmax_temperature: float = DEFAULTS_EXTREMA_WEIGHTING["softmax_temperature"],
- rank_method: RankMethod = DEFAULTS_EXTREMA_WEIGHTING["rank_method"],
- # Phase 3: Post-processing
- gamma: float = DEFAULTS_EXTREMA_WEIGHTING["gamma"],
-) -> NDArray[np.floating]:
- n = len(indices)
- if n == 0:
- return np.array([], dtype=float)
-
- if not isinstance(source_weights, dict):
- source_weights = {}
-
- weights_array_by_source: dict[WeightSource, NDArray[np.floating]] = {
- "amplitude": np.asarray(amplitudes, dtype=float),
- "amplitude_threshold_ratio": np.asarray(
- amplitude_threshold_ratios, dtype=float
- ),
- "volume_rate": np.asarray(volume_rates, dtype=float),
- "speed": np.asarray(speeds, dtype=float),
- "efficiency_ratio": np.asarray(efficiency_ratios, dtype=float),
- "volume_weighted_efficiency_ratio": np.asarray(
- volume_weighted_efficiency_ratios, dtype=float
- ),
- }
-
- enabled_sources: list[WeightSource] = []
- source_weights_list: list[float] = []
- for source in WEIGHT_SOURCES:
- source_weight = source_weights.get(source)
- if source_weight is None:
- continue
- if (
- not isinstance(source_weight, (int, float))
- or not np.isfinite(source_weight)
- or source_weight <= 0
- ):
- continue
- enabled_sources.append(source)
- source_weights_list.append(float(source_weight))
-
- if len(enabled_sources) == 0:
- enabled_sources = list(WEIGHT_SOURCES)
- source_weights_list = [1.0 for _ in enabled_sources]
-
- if any(weights_array_by_source[s].size != n for s in enabled_sources):
- raise ValueError(
- f"Invalid hybrid weights: length mismatch, got {n} indices but inconsistent weights lengths"
- )
-
- source_weights_array: NDArray[np.floating] = np.asarray(
- source_weights_list, dtype=float
- )
- source_weights_array_sum = np.nansum(np.abs(source_weights_array))
- if not np.isfinite(source_weights_array_sum) or source_weights_array_sum <= 0:
- return np.array([], dtype=float)
- source_weights_array = source_weights_array / source_weights_array_sum
-
- normalized_source_weights_array: list[NDArray[np.floating]] = []
- for source in enabled_sources:
- source_weights_arr = weights_array_by_source[source]
- normalized_source_weights = normalize_weights(
- source_weights_arr,
- standardization=standardization,
- robust_quantiles=robust_quantiles,
- mmad_scaling_factor=mmad_scaling_factor,
- normalization=normalization,
- minmax_range=minmax_range,
- sigmoid_scale=sigmoid_scale,
- softmax_temperature=softmax_temperature,
- rank_method=rank_method,
- gamma=gamma,
- )
- normalized_source_weights_array.append(normalized_source_weights)
-
- if aggregation == WEIGHT_AGGREGATIONS[0]: # "weighted_sum"
- combined_source_weights_array: NDArray[np.floating] = np.average(
- np.vstack(normalized_source_weights_array),
- axis=0,
- weights=source_weights_array,
- )
- elif aggregation == WEIGHT_AGGREGATIONS[1]: # "geometric_mean"
- combined_source_weights_array: NDArray[np.floating] = gmean(
- np.vstack([np.abs(values) for values in normalized_source_weights_array]),
- axis=0,
- weights=source_weights_array[:, np.newaxis],
- )
- else:
- raise ValueError(
- f"Invalid hybrid aggregation method {aggregation!r}. "
- f"Supported: {', '.join(WEIGHT_AGGREGATIONS)}"
- )
-
- if aggregation_normalization != NORMALIZATION_TYPES[6]: # "none"
- combined_source_weights_array = normalize_weights(
- combined_source_weights_array,
- standardization=STANDARDIZATION_TYPES[0],
- robust_quantiles=robust_quantiles,
- mmad_scaling_factor=mmad_scaling_factor,
- normalization=aggregation_normalization,
- minmax_range=minmax_range,
- sigmoid_scale=sigmoid_scale,
- softmax_temperature=softmax_temperature,
- rank_method=rank_method,
- gamma=1.0,
- )
-
- if (
- combined_source_weights_array.size == 0
- or not np.isfinite(combined_source_weights_array).all()
- ):
- return np.array([], dtype=float)
-
- return combined_source_weights_array
-
-
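The removed hybrid path combined per-source weight arrays either as a weighted arithmetic mean (`np.average`) or a weighted geometric mean (`scipy.stats.gmean` with per-row weights). A minimal numpy-only sketch of the two aggregations, with illustrative data:

```python
import numpy as np

# Two hypothetical per-source weight arrays for three extrema.
sources = np.array([
    [0.2, 0.8, 0.5],  # e.g. amplitude-derived weights
    [0.4, 0.6, 0.5],  # e.g. volume-derived weights
])
src_w = np.array([0.75, 0.25])  # normalized per-source importance, sums to 1

# "weighted_sum": weighted arithmetic mean across sources
arith = np.average(sources, axis=0, weights=src_w)

# "geometric_mean": weighted geometric mean of absolute values,
# equivalent to exp of the weighted mean of logs
geo = np.exp(np.average(np.log(np.abs(sources)), axis=0, weights=src_w))
```

The arithmetic mean is linear in each source, while the geometric mean penalizes extrema that any single source rates near zero, which is why the absolute values are taken first.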
-def calculate_extrema_weights(
- indices: list[int],
- weights: NDArray[np.floating],
- # Phase 1: Standardization
- standardization: StandardizationType = DEFAULTS_EXTREMA_WEIGHTING[
- "standardization"
- ],
- robust_quantiles: tuple[float, float] = DEFAULTS_EXTREMA_WEIGHTING[
- "robust_quantiles"
- ],
- mmad_scaling_factor: float = DEFAULTS_EXTREMA_WEIGHTING["mmad_scaling_factor"],
- # Phase 2: Normalization
- normalization: NormalizationType = DEFAULTS_EXTREMA_WEIGHTING["normalization"],
- minmax_range: tuple[float, float] = DEFAULTS_EXTREMA_WEIGHTING["minmax_range"],
- sigmoid_scale: float = DEFAULTS_EXTREMA_WEIGHTING["sigmoid_scale"],
- softmax_temperature: float = DEFAULTS_EXTREMA_WEIGHTING["softmax_temperature"],
- rank_method: RankMethod = DEFAULTS_EXTREMA_WEIGHTING["rank_method"],
- # Phase 3: Post-processing
- gamma: float = DEFAULTS_EXTREMA_WEIGHTING["gamma"],
-) -> NDArray[np.floating]:
- if len(indices) == 0 or len(weights) == 0:
- return np.array([], dtype=float)
-
- normalized_weights = normalize_weights(
- weights,
- standardization=standardization,
- robust_quantiles=robust_quantiles,
- mmad_scaling_factor=mmad_scaling_factor,
- normalization=normalization,
- minmax_range=minmax_range,
- sigmoid_scale=sigmoid_scale,
- softmax_temperature=softmax_temperature,
- rank_method=rank_method,
- gamma=gamma,
- )
-
- return normalized_weights
-
-
def compute_extrema_weights(
n_extrema: int,
indices: list[int],
speeds: list[float],
efficiency_ratios: list[float],
volume_weighted_efficiency_ratios: list[float],
- source_weights: dict[str, float],
strategy: WeightStrategy = DEFAULTS_EXTREMA_WEIGHTING["strategy"],
- aggregation: WeightAggregation = DEFAULTS_EXTREMA_WEIGHTING["aggregation"],
- aggregation_normalization: NormalizationType = DEFAULTS_EXTREMA_WEIGHTING[
- "aggregation_normalization"
- ],
- # Phase 1: Standardization
- standardization: StandardizationType = DEFAULTS_EXTREMA_WEIGHTING[
- "standardization"
- ],
- robust_quantiles: tuple[float, float] = DEFAULTS_EXTREMA_WEIGHTING[
- "robust_quantiles"
- ],
- mmad_scaling_factor: float = DEFAULTS_EXTREMA_WEIGHTING["mmad_scaling_factor"],
- # Phase 2: Normalization
- normalization: NormalizationType = DEFAULTS_EXTREMA_WEIGHTING["normalization"],
- minmax_range: tuple[float, float] = DEFAULTS_EXTREMA_WEIGHTING["minmax_range"],
- sigmoid_scale: float = DEFAULTS_EXTREMA_WEIGHTING["sigmoid_scale"],
- softmax_temperature: float = DEFAULTS_EXTREMA_WEIGHTING["softmax_temperature"],
- rank_method: RankMethod = DEFAULTS_EXTREMA_WEIGHTING["rank_method"],
- # Phase 3: Post-processing
- gamma: float = DEFAULTS_EXTREMA_WEIGHTING["gamma"],
) -> NDArray[np.floating]:
if len(indices) == 0 or strategy == WEIGHT_STRATEGIES[0]: # "none"
return np.full(n_extrema, DEFAULT_EXTREMA_WEIGHT, dtype=float)
- normalized_weights: Optional[NDArray[np.floating]] = None
+ weights: Optional[NDArray[np.floating]] = None
if (
strategy
if weights.size == 0:
return np.full(n_extrema, DEFAULT_EXTREMA_WEIGHT, dtype=float)
- normalized_weights = calculate_extrema_weights(
- indices=indices,
+ weights = _impute_weights(
weights=weights,
- standardization=standardization,
- robust_quantiles=robust_quantiles,
- mmad_scaling_factor=mmad_scaling_factor,
- normalization=normalization,
- minmax_range=minmax_range,
- sigmoid_scale=sigmoid_scale,
- softmax_temperature=softmax_temperature,
- rank_method=rank_method,
- gamma=gamma,
)
- if strategy == WEIGHT_STRATEGIES[7]: # "hybrid"
- normalized_weights = calculate_hybrid_extrema_weights(
- indices=indices,
- amplitudes=amplitudes,
- amplitude_threshold_ratios=amplitude_threshold_ratios,
- volume_rates=volume_rates,
- speeds=speeds,
- efficiency_ratios=efficiency_ratios,
- volume_weighted_efficiency_ratios=volume_weighted_efficiency_ratios,
- source_weights=source_weights,
- aggregation=aggregation,
- aggregation_normalization=aggregation_normalization,
- standardization=standardization,
- robust_quantiles=robust_quantiles,
- mmad_scaling_factor=mmad_scaling_factor,
- normalization=normalization,
- minmax_range=minmax_range,
- sigmoid_scale=sigmoid_scale,
- softmax_temperature=softmax_temperature,
- rank_method=rank_method,
- gamma=gamma,
- )
-
- if normalized_weights is not None:
- if normalized_weights.size == 0:
+ if weights is not None:
+ if weights.size == 0:
return np.full(n_extrema, DEFAULT_EXTREMA_WEIGHT, dtype=float)
return _build_weights_array(
n_extrema=n_extrema,
indices=indices,
- weights=normalized_weights,
- default_weight=np.nanmedian(normalized_weights),
+ weights=weights,
+ default_weight=np.nanmedian(weights),
)
raise ValueError(
speeds: list[float],
efficiency_ratios: list[float],
volume_weighted_efficiency_ratios: list[float],
- source_weights: dict[str, float],
strategy: WeightStrategy = DEFAULTS_EXTREMA_WEIGHTING["strategy"],
- aggregation: WeightAggregation = DEFAULTS_EXTREMA_WEIGHTING["aggregation"],
- aggregation_normalization: NormalizationType = DEFAULTS_EXTREMA_WEIGHTING[
- "aggregation_normalization"
- ],
- # Phase 1: Standardization
- standardization: StandardizationType = DEFAULTS_EXTREMA_WEIGHTING[
- "standardization"
- ],
- robust_quantiles: tuple[float, float] = DEFAULTS_EXTREMA_WEIGHTING[
- "robust_quantiles"
- ],
- mmad_scaling_factor: float = DEFAULTS_EXTREMA_WEIGHTING["mmad_scaling_factor"],
- # Phase 2: Normalization
- normalization: NormalizationType = DEFAULTS_EXTREMA_WEIGHTING["normalization"],
- minmax_range: tuple[float, float] = DEFAULTS_EXTREMA_WEIGHTING["minmax_range"],
- sigmoid_scale: float = DEFAULTS_EXTREMA_WEIGHTING["sigmoid_scale"],
- softmax_temperature: float = DEFAULTS_EXTREMA_WEIGHTING["softmax_temperature"],
- rank_method: RankMethod = DEFAULTS_EXTREMA_WEIGHTING["rank_method"],
- # Phase 3: Post-processing
- gamma: float = DEFAULTS_EXTREMA_WEIGHTING["gamma"],
) -> tuple[pd.Series, pd.Series]:
extrema_values = extrema.to_numpy(dtype=float)
extrema_index = extrema.index
speeds=speeds,
efficiency_ratios=efficiency_ratios,
volume_weighted_efficiency_ratios=volume_weighted_efficiency_ratios,
- source_weights=source_weights,
strategy=strategy,
- aggregation=aggregation,
- aggregation_normalization=aggregation_normalization,
- standardization=standardization,
- robust_quantiles=robust_quantiles,
- mmad_scaling_factor=mmad_scaling_factor,
- normalization=normalization,
- minmax_range=minmax_range,
- sigmoid_scale=sigmoid_scale,
- softmax_temperature=softmax_temperature,
- rank_method=rank_method,
- gamma=gamma,
)
return pd.Series(