### Configuration tunables
-| Path | Default | Type / Range | Description |
-| ---------------------------------------------------- | ----------------- | -------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| _Protections_ | | | |
-| custom_protections.trade_duration_candles | 72 | int >= 1 | Estimated trade duration in candles. Scales protections stop duration candles and trade limit. |
-| custom_protections.lookback_period_fraction | 0.5 | float (0,1] | Fraction of fit_live_predictions_candles used to calculate lookback_period_candles for MaxDrawdown and StoplossGuard protections. |
-| custom_protections.cooldown.enabled | true | bool | Enable/disable CooldownPeriod protection. |
-| custom_protections.cooldown.stop_duration_candles | 4 | int >= 1 | Number of candles to wait before allowing new trades after a trade is closed. |
-| custom_protections.drawdown.enabled | true | bool | Enable/disable MaxDrawdown protection. |
-| custom_protections.drawdown.max_allowed_drawdown | 0.2 | float (0,1) | Maximum allowed drawdown. |
-| custom_protections.stoploss.enabled | true | bool | Enable/disable StoplossGuard protection. |
-| _Leverage_ | | | |
-| leverage | proposed_leverage | float [1.0, max_leverage] | Leverage. Fallback to proposed_leverage for the pair. |
-| _Exit pricing_ | | | |
-| exit_pricing.trade_price_target | `moving_average` | enum {`moving_average`,`interpolation`,`weighted_interpolation`} | Trade NATR computation method. |
-| exit_pricing.thresholds_calibration.decline_quantile | 0.90 | float (0,1) | PnL decline quantile threshold. |
-| _Reversal confirmation_ | | | |
-| reversal_confirmation.lookback_period | 0 | int >= 0 | Prior confirming candles; 0 = none. |
-| reversal_confirmation.decay_ratio | 0.5 | float (0,1] | Geometric per-candle volatility adjusted reversal threshold relaxation factor. |
-| reversal_confirmation.min_natr_ratio_percent | 0.0095 | float [0,1] | Lower bound fraction for volatility adjusted reversal threshold. |
-| reversal_confirmation.max_natr_ratio_percent | 0.075 | float [0,1] | Upper bound fraction (>= lower bound) for volatility adjusted reversal threshold. |
-| _Regressor model_ | | | |
-| freqai.regressor | `xgboost` | enum {`xgboost`,`lightgbm`} | Machine learning regressor algorithm. |
-| _Extrema smoothing_ | | | |
-| freqai.extrema_smoothing.method | `gaussian` | enum {`gaussian`,`kaiser`,`triang`,`smm`,`sma`,`savgol`,`nadaraya_watson`} | Extrema smoothing method (`smm`=median, `sma`=mean, `savgol`=Savitzky–Golay, `nadaraya_watson`=Gaussian kernel regression). |
-| freqai.extrema_smoothing.window | 5 | int >= 3 | Smoothing window length (candles). |
-| freqai.extrema_smoothing.beta | 8.0 | float > 0 | Shape parameter for `kaiser` kernel. |
-| freqai.extrema_smoothing.polyorder | 3 | int >= 1 | Polynomial order for `savgol` smoothing. |
-| freqai.extrema_smoothing.mode | `mirror` | enum {`mirror`,`constant`,`nearest`,`wrap`,`interp`} | Boundary mode for `savgol` and `nadaraya_watson`. |
-| freqai.extrema_smoothing.bandwidth | 1.0 | float > 0 | Gaussian bandwidth for `nadaraya_watson`. |
-| _Extrema weighting_ | | | |
-| freqai.extrema_weighting.strategy | `none` | enum {`none`,`amplitude`,`amplitude_threshold_ratio`,`volume_weighted_amplitude`} | Extrema weighting source: unweighted (`none`), swing amplitude (`amplitude`), volatility-threshold / swing amplitude ratio (`amplitude_threshold_ratio`), or volume-weighted swing amplitude (`volume_weighted_amplitude`). |
-| freqai.extrema_weighting.standardization | `none` | enum {`none`,`zscore`,`robust`,`mmad`} | Standardization method applied before normalization. `none`=no standardization, `zscore`=(w-μ)/σ, `robust`=(w-median)/IQR, `mmad`=(w-median)/MAD. |
-| freqai.extrema_weighting.robust_quantiles | [0.25, 0.75] | list[float] where 0 <= Q1 < Q3 <= 1 | Quantile range for robust standardization, Q1 and Q3. |
-| freqai.extrema_weighting.mmad_scaling_factor | 1.4826 | float > 0 | Scaling factor for MMAD standardization. |
-| freqai.extrema_weighting.normalization | `minmax` | enum {`minmax`,`sigmoid`,`softmax`,`l1`,`l2`,`rank`,`none`} | Normalization method for weights. |
-| freqai.extrema_weighting.minmax_range | [0.0, 1.0] | list[float] | Target range for `minmax` normalization, min and max. |
-| freqai.extrema_weighting.sigmoid_scale | 1.0 | float > 0 | Scale parameter for `sigmoid` normalization, controls steepness. |
-| freqai.extrema_weighting.softmax_temperature | 1.0 | float > 0 | Temperature parameter for `softmax` normalization: lower values sharpen distribution, higher values flatten it. |
-| freqai.extrema_weighting.rank_method | `average` | enum {`average`,`min`,`max`,`dense`,`ordinal`} | Ranking method for `rank` normalization. |
-| freqai.extrema_weighting.gamma | 1.0 | float (0,10] | Contrast exponent applied after normalization: >1 emphasizes extrema, values between 0 and 1 soften. |
-| _Feature parameters_ | | | |
-| freqai.feature_parameters.label_period_candles | min/max midpoint | int >= 1 | Zigzag labeling NATR horizon. |
-| freqai.feature_parameters.min_label_period_candles | 12 | int >= 1 | Minimum labeling NATR horizon used for reversals labeling HPO. |
-| freqai.feature_parameters.max_label_period_candles | 24 | int >= 1 | Maximum labeling NATR horizon used for reversals labeling HPO. |
-| freqai.feature_parameters.label_natr_ratio | min/max midpoint | float > 0 | Zigzag labeling NATR ratio. |
-| freqai.feature_parameters.min_label_natr_ratio | 9.0 | float > 0 | Minimum labeling NATR ratio used for reversals labeling HPO. |
-| freqai.feature_parameters.max_label_natr_ratio | 12.0 | float > 0 | Maximum labeling NATR ratio used for reversals labeling HPO. |
-| freqai.feature_parameters.label_frequency_candles | `auto` | int >= 2 \| `auto` | Reversals labeling frequency. `auto` = max(2, 2 \* number of whitelisted pairs). |
-| freqai.feature_parameters.label_metric | `euclidean` | string (supported: `euclidean`,`minkowski`,`cityblock`,`chebyshev`,`mahalanobis`,`seuclidean`,`jensenshannon`,`sqeuclidean`,...) | Metric used in distance calculations to ideal point. |
-| freqai.feature_parameters.label_weights | [1/3,1/3,1/3] | list[float] | Per-objective weights used in distance calculations to ideal point. First objective is the number of detected reversals. Second objective is the median volume-weighted swing amplitude of Zigzag reversals (reversals quality). Third objective is the median volatility-threshold / swing amplitude ratio. |
-| freqai.feature_parameters.label_p_order | `None` | float \| None | p-order used by `minkowski` / `power_mean` (optional). |
-| freqai.feature_parameters.label_medoid_metric | `euclidean` | string | Metric used with `medoid`. |
-| freqai.feature_parameters.label_kmeans_metric | `euclidean` | string | Metric used for k-means clustering. |
-| freqai.feature_parameters.label_kmeans_selection | `min` | enum {`min`,`medoid`} | Strategy to select trial in the best kmeans cluster. |
-| freqai.feature_parameters.label_kmedoids_metric | `euclidean` | string | Metric used for k-medoids clustering. |
-| freqai.feature_parameters.label_kmedoids_selection | `min` | enum {`min`,`medoid`} | Strategy to select trial in the best k-medoids cluster. |
-| freqai.feature_parameters.label_knn_metric | `minkowski` | string | Distance metric for KNN. |
-| freqai.feature_parameters.label_knn_p_order | `None` | float \| None | Tunable for KNN neighbor distances aggregation methods: p-order (`knn_power_mean`, default: 1.0) or quantile (`knn_quantile`, default: 0.5). (optional) |
-| freqai.feature_parameters.label_knn_n_neighbors | 5 | int >= 1 | Number of neighbors for KNN. |
-| _Predictions extrema_ | | | |
-| freqai.predictions_extrema.selection_method | `rank` | enum {`rank`,`values`,`partition`} | Extrema selection method. `values` uses reversal values, `rank` uses ranked extrema values, `partition` uses sign-based partitioning. |
-| freqai.predictions_extrema.thresholds_smoothing | `mean` | enum {`mean`,`isodata`,`li`,`minimum`,`otsu`,`triangle`,`yen`,`median`,`soft_extremum`} | Thresholding method for prediction thresholds smoothing. |
-| freqai.predictions_extrema.thresholds_alpha | 12.0 | float > 0 | Alpha for `soft_extremum`. |
-| freqai.predictions_extrema.threshold_outlier | 0.999 | float (0,1) | Quantile threshold for predictions outlier filtering. |
-| freqai.predictions_extrema.extrema_fraction | 1.0 | float (0,1] | Fraction of extrema used for thresholds. `1.0` uses all, lower values keep only most significant. Applies to `rank` and `values`; ignored for `partition`. |
-| _Optuna / HPO_ | | | |
-| freqai.optuna_hyperopt.enabled | true | bool | Enables HPO. |
-| freqai.optuna_hyperopt.sampler | `tpe` | enum {`tpe`,`auto`} | HPO sampler algorithm. `tpe` uses [TPESampler](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.TPESampler.html) with multivariate and group, `auto` uses [AutoSampler](https://hub.optuna.org/samplers/auto_sampler). |
-| freqai.optuna_hyperopt.storage | `file` | enum {`file`,`sqlite`} | HPO storage backend. |
-| freqai.optuna_hyperopt.continuous | true | bool | Continuous HPO. |
-| freqai.optuna_hyperopt.warm_start | true | bool | Warm start HPO with previous best value(s). |
-| freqai.optuna_hyperopt.n_startup_trials | 15 | int >= 0 | HPO startup trials. |
-| freqai.optuna_hyperopt.n_trials | 50 | int >= 1 | Maximum HPO trials. |
-| freqai.optuna_hyperopt.n_jobs | CPU threads / 4 | int >= 1 | Parallel HPO workers. |
-| freqai.optuna_hyperopt.timeout | 7200 | int >= 0 | HPO wall-clock timeout in seconds. |
-| freqai.optuna_hyperopt.label_candles_step | 1 | int >= 1 | Step for Zigzag NATR horizon search space. |
-| freqai.optuna_hyperopt.train_candles_step | 10 | int >= 1 | Step for training sets size search space. |
-| freqai.optuna_hyperopt.space_reduction | false | bool | Enable/disable HPO search space reduction based on previous best parameters. |
-| freqai.optuna_hyperopt.expansion_ratio | 0.4 | float [0,1] | HPO search space expansion ratio. |
-| freqai.optuna_hyperopt.min_resource | 3 | int >= 1 | Minimum resource per Hyperband pruner rung. |
-| freqai.optuna_hyperopt.seed | 1 | int >= 0 | HPO RNG seed. |
+| Path | Default | Type / Range | Description |
+| ---------------------------------------------------- | ----------------- | -------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| _Protections_ | | | |
+| custom_protections.trade_duration_candles | 72 | int >= 1 | Estimated trade duration in candles. Scales protections stop duration candles and trade limit. |
+| custom_protections.lookback_period_fraction | 0.5 | float (0,1] | Fraction of fit_live_predictions_candles used to calculate lookback_period_candles for MaxDrawdown and StoplossGuard protections. |
+| custom_protections.cooldown.enabled | true | bool | Enable/disable CooldownPeriod protection. |
+| custom_protections.cooldown.stop_duration_candles | 4 | int >= 1 | Number of candles to wait before allowing new trades after a trade is closed. |
+| custom_protections.drawdown.enabled | true | bool | Enable/disable MaxDrawdown protection. |
+| custom_protections.drawdown.max_allowed_drawdown | 0.2 | float (0,1) | Maximum allowed drawdown. |
+| custom_protections.stoploss.enabled | true | bool | Enable/disable StoplossGuard protection. |
+| _Leverage_ | | | |
+| leverage                                             | proposed_leverage | float [1.0, max_leverage]                                                                                                          | Leverage. Falls back to proposed_leverage for the pair.                                                                                                                                                                                                                                            |
+| _Exit pricing_ | | | |
+| exit_pricing.trade_price_target | `moving_average` | enum {`moving_average`,`interpolation`,`weighted_interpolation`} | Trade NATR computation method. |
+| exit_pricing.thresholds_calibration.decline_quantile | 0.90 | float (0,1) | PnL decline quantile threshold. |
+| _Reversal confirmation_ | | | |
+| reversal_confirmation.lookback_period | 0 | int >= 0 | Prior confirming candles; 0 = none. |
+| reversal_confirmation.decay_ratio                    | 0.5               | float (0,1]                                                                                                                        | Geometric per-candle relaxation factor for the volatility-adjusted reversal threshold.                                                                                                                                                                                                             |
+| reversal_confirmation.min_natr_ratio_percent         | 0.0095            | float [0,1]                                                                                                                        | Lower bound fraction for the volatility-adjusted reversal threshold.                                                                                                                                                                                                                               |
+| reversal_confirmation.max_natr_ratio_percent         | 0.075             | float [0,1]                                                                                                                        | Upper bound fraction (>= lower bound) for the volatility-adjusted reversal threshold.                                                                                                                                                                                                              |
+| _Regressor model_ | | | |
+| freqai.regressor | `xgboost` | enum {`xgboost`,`lightgbm`} | Machine learning regressor algorithm. |
+| _Extrema smoothing_ | | | |
+| freqai.extrema_smoothing.method | `gaussian` | enum {`gaussian`,`kaiser`,`triang`,`smm`,`sma`,`savgol`,`nadaraya_watson`} | Extrema smoothing method (`smm`=median, `sma`=mean, `savgol`=Savitzky–Golay, `nadaraya_watson`=Gaussian kernel regression). |
+| freqai.extrema_smoothing.window | 5 | int >= 3 | Smoothing window length (candles). |
+| freqai.extrema_smoothing.beta | 8.0 | float > 0 | Shape parameter for `kaiser` kernel. |
+| freqai.extrema_smoothing.polyorder | 3 | int >= 1 | Polynomial order for `savgol` smoothing. |
+| freqai.extrema_smoothing.mode | `mirror` | enum {`mirror`,`constant`,`nearest`,`wrap`,`interp`} | Boundary mode for `savgol` and `nadaraya_watson`. |
+| freqai.extrema_smoothing.bandwidth | 1.0 | float > 0 | Gaussian bandwidth for `nadaraya_watson`. |
+| _Extrema weighting_ | | | |
+| freqai.extrema_weighting.strategy | `none` | enum {`none`,`amplitude`,`amplitude_threshold_ratio`} | Extrema weighting source: unweighted (`none`), swing amplitude (`amplitude`), or swing amplitude / median volatility-threshold ratio (`amplitude_threshold_ratio`). |
+| freqai.extrema_weighting.standardization | `none` | enum {`none`,`zscore`,`robust`,`mmad`} | Standardization method applied before normalization. `none`=no standardization, `zscore`=(w-μ)/σ, `robust`=(w-median)/IQR, `mmad`=(w-median)/MAD. |
+| freqai.extrema_weighting.robust_quantiles | [0.25, 0.75] | list[float] where 0 <= Q1 < Q3 <= 1 | Quantile range for robust standardization, Q1 and Q3. |
+| freqai.extrema_weighting.mmad_scaling_factor | 1.4826 | float > 0 | Scaling factor for MMAD standardization. |
+| freqai.extrema_weighting.normalization | `minmax` | enum {`minmax`,`sigmoid`,`softmax`,`l1`,`l2`,`rank`,`none`} | Normalization method for weights. |
+| freqai.extrema_weighting.minmax_range | [0.0, 1.0] | list[float] | Target range for `minmax` normalization, min and max. |
+| freqai.extrema_weighting.sigmoid_scale | 1.0 | float > 0 | Scale parameter for `sigmoid` normalization, controls steepness. |
+| freqai.extrema_weighting.softmax_temperature | 1.0 | float > 0 | Temperature parameter for `softmax` normalization: lower values sharpen distribution, higher values flatten it. |
+| freqai.extrema_weighting.rank_method | `average` | enum {`average`,`min`,`max`,`dense`,`ordinal`} | Ranking method for `rank` normalization. |
+| freqai.extrema_weighting.gamma | 1.0 | float (0,10] | Contrast exponent applied after normalization: >1 emphasizes extrema, values between 0 and 1 soften. |
+| _Feature parameters_ | | | |
+| freqai.feature_parameters.label_period_candles | min/max midpoint | int >= 1 | Zigzag labeling NATR horizon. |
+| freqai.feature_parameters.min_label_period_candles | 12 | int >= 1 | Minimum labeling NATR horizon used for reversals labeling HPO. |
+| freqai.feature_parameters.max_label_period_candles | 24 | int >= 1 | Maximum labeling NATR horizon used for reversals labeling HPO. |
+| freqai.feature_parameters.label_natr_ratio | min/max midpoint | float > 0 | Zigzag labeling NATR ratio. |
+| freqai.feature_parameters.min_label_natr_ratio | 9.0 | float > 0 | Minimum labeling NATR ratio used for reversals labeling HPO. |
+| freqai.feature_parameters.max_label_natr_ratio | 12.0 | float > 0 | Maximum labeling NATR ratio used for reversals labeling HPO. |
+| freqai.feature_parameters.label_frequency_candles | `auto` | int >= 2 \| `auto` | Reversals labeling frequency. `auto` = max(2, 2 \* number of whitelisted pairs). |
+| freqai.feature_parameters.label_metric | `euclidean` | string (supported: `euclidean`,`minkowski`,`cityblock`,`chebyshev`,`mahalanobis`,`seuclidean`,`jensenshannon`,`sqeuclidean`,...) | Metric used in distance calculations to ideal point. |
+| freqai.feature_parameters.label_weights | [1/3,1/3,1/3] | list[float] | Per-objective weights used in distance calculations to ideal point. First objective is the number of detected reversals. Second objective is the median swing amplitude of Zigzag reversals (reversals quality). Third objective is the median swing amplitude / median volatility-threshold ratio. |
+| freqai.feature_parameters.label_p_order | `None` | float \| None | p-order used by `minkowski` / `power_mean` (optional). |
+| freqai.feature_parameters.label_medoid_metric | `euclidean` | string | Metric used with `medoid`. |
+| freqai.feature_parameters.label_kmeans_metric | `euclidean` | string | Metric used for k-means clustering. |
+| freqai.feature_parameters.label_kmeans_selection     | `min`             | enum {`min`,`medoid`}                                                                                                              | Strategy to select the trial in the best k-means cluster.                                                                                                                                                                                                                                          |
+| freqai.feature_parameters.label_kmedoids_metric | `euclidean` | string | Metric used for k-medoids clustering. |
+| freqai.feature_parameters.label_kmedoids_selection   | `min`             | enum {`min`,`medoid`}                                                                                                              | Strategy to select the trial in the best k-medoids cluster.                                                                                                                                                                                                                                        |
+| freqai.feature_parameters.label_knn_metric | `minkowski` | string | Distance metric for KNN. |
+| freqai.feature_parameters.label_knn_p_order | `None` | float \| None | Tunable for KNN neighbor distances aggregation methods: p-order (`knn_power_mean`, default: 1.0) or quantile (`knn_quantile`, default: 0.5). (optional) |
+| freqai.feature_parameters.label_knn_n_neighbors | 5 | int >= 1 | Number of neighbors for KNN. |
+| _Predictions extrema_ | | | |
+| freqai.predictions_extrema.selection_method | `rank` | enum {`rank`,`values`,`partition`} | Extrema selection method. `values` uses reversal values, `rank` uses ranked extrema values, `partition` uses sign-based partitioning. |
+| freqai.predictions_extrema.thresholds_smoothing | `mean` | enum {`mean`,`isodata`,`li`,`minimum`,`otsu`,`triangle`,`yen`,`median`,`soft_extremum`} | Thresholding method for prediction thresholds smoothing. |
+| freqai.predictions_extrema.thresholds_alpha | 12.0 | float > 0 | Alpha for `soft_extremum`. |
+| freqai.predictions_extrema.threshold_outlier | 0.999 | float (0,1) | Quantile threshold for predictions outlier filtering. |
+| freqai.predictions_extrema.extrema_fraction | 1.0 | float (0,1] | Fraction of extrema used for thresholds. `1.0` uses all, lower values keep only most significant. Applies to `rank` and `values`; ignored for `partition`. |
+| _Optuna / HPO_ | | | |
+| freqai.optuna_hyperopt.enabled                       | true              | bool                                                                                                                               | Enable/disable HPO.                                                                                                                                                                                                                                                                                |
+| freqai.optuna_hyperopt.sampler | `tpe` | enum {`tpe`,`auto`} | HPO sampler algorithm. `tpe` uses [TPESampler](https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.TPESampler.html) with multivariate and group, `auto` uses [AutoSampler](https://hub.optuna.org/samplers/auto_sampler). |
+| freqai.optuna_hyperopt.storage | `file` | enum {`file`,`sqlite`} | HPO storage backend. |
+| freqai.optuna_hyperopt.continuous                    | true              | bool                                                                                                                               | Enable/disable continuous HPO.                                                                                                                                                                                                                                                                     |
+| freqai.optuna_hyperopt.warm_start | true | bool | Warm start HPO with previous best value(s). |
+| freqai.optuna_hyperopt.n_startup_trials | 15 | int >= 0 | HPO startup trials. |
+| freqai.optuna_hyperopt.n_trials | 50 | int >= 1 | Maximum HPO trials. |
+| freqai.optuna_hyperopt.n_jobs | CPU threads / 4 | int >= 1 | Parallel HPO workers. |
+| freqai.optuna_hyperopt.timeout | 7200 | int >= 0 | HPO wall-clock timeout in seconds. |
+| freqai.optuna_hyperopt.label_candles_step | 1 | int >= 1 | Step for Zigzag NATR horizon search space. |
+| freqai.optuna_hyperopt.train_candles_step | 10 | int >= 1 | Step for training sets size search space. |
+| freqai.optuna_hyperopt.space_reduction | false | bool | Enable/disable HPO search space reduction based on previous best parameters. |
+| freqai.optuna_hyperopt.expansion_ratio | 0.4 | float [0,1] | HPO search space expansion ratio. |
+| freqai.optuna_hyperopt.min_resource | 3 | int >= 1 | Minimum resource per Hyperband pruner rung. |
+| freqai.optuna_hyperopt.seed | 1 | int >= 0 | HPO RNG seed. |
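The extrema weighting rows above describe a three-stage pipeline: standardize the raw weights, normalize them, then apply the `gamma` contrast exponent. A minimal sketch of one configuration (`zscore` standardization, `minmax` normalization) — the helper name is hypothetical and none of the strategy's edge-case handling is reproduced:

```python
import numpy as np

def weight_pipeline(weights, gamma=1.0):
    """Sketch of the extrema weighting pipeline: zscore -> minmax -> gamma."""
    w = np.asarray(weights, dtype=float)
    # zscore standardization: (w - mean) / std
    std = np.std(w)
    if std > 0:
        w = (w - np.mean(w)) / std
    # minmax normalization to [0, 1]
    w_min, w_max = np.min(w), np.max(w)
    if w_max > w_min:
        w = (w - w_min) / (w_max - w_min)
    # gamma > 1 emphasizes the largest weights, gamma in (0, 1) softens them
    return w**gamma

print(weight_pipeline([1.0, 2.0, 4.0], gamma=2.0))  # [0. 0.11111111 1.]
```

Because minmax is affine-invariant, the zscore stage does not change the minmax output here; it matters once robust/MMAD standardization or other normalizations are combined.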
## ReforceXY
"data_kitchen_thread_count": 6, // set to number of CPU threads / 4
"track_performance": false,
"extrema_weighting": {
- "strategy": "volume_weighted_amplitude",
+ "strategy": "amplitude",
"gamma": 1.75
},
"extrema_smoothing": {
https://github.com/sponsors/robcaulk
"""
- version = "3.7.126"
+ version = "3.7.127"
_SQRT_2: Final[float] = np.sqrt(2.0)
_,
pivots_values,
_,
- _,
+ pivots_amplitudes,
pivots_amplitude_threshold_ratios,
- _,
- _,
- pivots_volume_weighted_amplitudes,
) = zigzag(
df,
natr_period=label_period_candles,
natr_ratio=label_natr_ratio,
)
- median_volume_weighted_amplitude = np.nanmedian(
- np.asarray(pivots_volume_weighted_amplitudes, dtype=float)
- )
- if not np.isfinite(median_volume_weighted_amplitude):
- median_volume_weighted_amplitude = 0.0
+ median_amplitude = np.nanmedian(np.asarray(pivots_amplitudes, dtype=float))
+ if not np.isfinite(median_amplitude):
+ median_amplitude = 0.0
median_amplitude_threshold_ratio = np.nanmedian(
np.asarray(pivots_amplitude_threshold_ratios, dtype=float)
)
return (
len(pivots_values),
- median_volume_weighted_amplitude,
+ median_amplitude,
median_amplitude_threshold_ratio,
)
_TRADING_MODES: Final[tuple[TradingMode, ...]] = ("spot", "margin", "futures")
def version(self) -> str:
- return "3.3.176"
+ return "3.3.177"
timeframe = "5m"
)
weighting_normalization = NORMALIZATION_TYPES[0]
+ if (
+ weighting_strategy != WEIGHT_STRATEGIES[0] # "none"
+ and weighting_standardization != STANDARDIZATION_TYPES[0] # "none"
+ and weighting_normalization
+ in {
+ NORMALIZATION_TYPES[3], # "l1"
+ NORMALIZATION_TYPES[4], # "l2"
+ NORMALIZATION_TYPES[6], # "none"
+ }
+ ):
+ raise ValueError(
+ f"{pair}: invalid extrema_weighting configuration: "
+ f"standardization='{weighting_standardization}' with normalization='{weighting_normalization}' "
+ "can produce negative weights and flip ternary extrema labels. "
+ f"Use normalization in {{'{NORMALIZATION_TYPES[0]}','{NORMALIZATION_TYPES[1]}','{NORMALIZATION_TYPES[2]}','{NORMALIZATION_TYPES[5]}'}} "
+ f"or set standardization='{STANDARDIZATION_TYPES[0]}'."
+ )
+
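The guard above exists because standardization centers weights around zero while `l1`, `l2`, and `none` normalizations preserve sign. A small standalone illustration (values chosen for readability, not taken from the strategy):

```python
import numpy as np

# zscore standardization centers weights at zero, so some become negative;
# l1 normalization (w / sum(|w|)) preserves those signs.
weights = np.array([1.0, 2.0, 4.0])
zscored = (weights - weights.mean()) / weights.std()
l1 = zscored / np.abs(zscored).sum()
print(l1)  # [-0.4 -0.1  0.5]

# A negative weight flips the sign of a ternary extrema label when multiplied:
extrema = np.array([-1.0, 1.0, -1.0])
print(extrema * l1)  # [ 0.4 -0.1 -0.5] -- the first minimum now reads positive
```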
weighting_minmax_range = extrema_weighting.get(
"minmax_range", DEFAULTS_EXTREMA_WEIGHTING["minmax_range"]
)
def _get_weights(
strategy: WeightStrategy,
amplitudes: list[float],
- volume_weighted_amplitudes: list[float],
amplitude_threshold_ratios: list[float],
) -> NDArray[np.floating]:
if strategy == WEIGHT_STRATEGIES[1]: # "amplitude"
if len(amplitude_threshold_ratios) == len(amplitudes)
else np.array(amplitudes)
)
- if strategy == WEIGHT_STRATEGIES[3]: # "volume_weighted_amplitude"
- return (
- np.array(volume_weighted_amplitudes)
- if len(volume_weighted_amplitudes) == len(amplitudes)
- else np.array(amplitudes)
- )
return np.array([])
def set_freqai_targets(
pivots_directions,
pivots_amplitudes,
pivots_amplitude_threshold_ratios,
- _,
- _,
- pivots_volume_weighted_amplitude,
) = zigzag(
dataframe,
natr_period=label_period_candles,
pivot_weights = QuickAdapterV3._get_weights(
self.extrema_weighting["strategy"],
pivots_amplitudes,
- pivots_volume_weighted_amplitude,
pivots_amplitude_threshold_ratios,
)
weighted_extrema, _ = get_weighted_extrema(
"none",
"amplitude",
"amplitude_threshold_ratio",
- "volume_weighted_amplitude",
]
WEIGHT_STRATEGIES: Final[tuple[WeightStrategy, ...]] = (
"none",
"amplitude",
"amplitude_threshold_ratio",
- "volume_weighted_amplitude",
)
EXTREMA_COLUMN: Final = "&s-extrema"
"""
weights = weights.astype(float, copy=False)
if np.isnan(weights).any():
- return np.full_like(weights, float(DEFAULT_EXTREMA_WEIGHT), dtype=float)
+ return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
if scale <= 0 or not np.isfinite(scale):
scale = 1.0
"""
weights = weights.astype(float, copy=False)
if np.isnan(weights).any():
- return np.full_like(weights, float(DEFAULT_EXTREMA_WEIGHT), dtype=float)
+ return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
w_min = np.min(weights)
w_max = np.max(weights)
if not (np.isfinite(w_min) and np.isfinite(w_max)):
- return np.full_like(weights, float(DEFAULT_EXTREMA_WEIGHT), dtype=float)
+ return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
w_range = w_max - w_min
if np.isclose(w_range, 0.0):
"""L1 normalization: w / Σ|w| → Σ|w| = 1"""
weights_sum = np.sum(np.abs(weights))
if weights_sum <= 0 or not np.isfinite(weights_sum):
- return np.full_like(weights, float(DEFAULT_EXTREMA_WEIGHT), dtype=float)
+ return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
return weights / weights_sum
"""L2 normalization: w / ||w||₂ → ||w||₂ = 1"""
weights = weights.astype(float, copy=False)
if np.isnan(weights).any():
- return np.full_like(weights, float(DEFAULT_EXTREMA_WEIGHT), dtype=float)
+ return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
l2_norm = np.linalg.norm(weights, ord=2)
if l2_norm <= 0 or not np.isfinite(l2_norm):
- return np.full_like(weights, float(DEFAULT_EXTREMA_WEIGHT), dtype=float)
+ return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
return weights / l2_norm
"""Softmax normalization: exp(w/T) / Σexp(w/T) → Σw = 1, range [0,1]"""
weights = weights.astype(float, copy=False)
if np.isnan(weights).any():
- return np.full_like(weights, float(DEFAULT_EXTREMA_WEIGHT), dtype=float)
+ return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
if not np.isclose(temperature, 1.0) and temperature > 0:
weights = weights / temperature
return sp.special.softmax(weights)
"""Rank normalization: [rank(w) - 1] / (n - 1) → [0, 1] uniformly distributed"""
weights = weights.astype(float, copy=False)
if np.isnan(weights).any():
- return np.full_like(weights, float(DEFAULT_EXTREMA_WEIGHT), dtype=float)
+ return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
ranks = sp.stats.rankdata(weights, method=method)
n = len(weights)
if n <= 1:
- return np.full_like(weights, float(DEFAULT_EXTREMA_WEIGHT), dtype=float)
+ return np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
return (ranks - 1) / (n - 1)
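A standalone illustration of the rank formula in the docstring above, using a plain `argsort`-based ordinal ranking instead of `scipy.stats.rankdata` (so ties behave differently than the `method` options):

```python
import numpy as np

# [rank(w) - 1] / (n - 1) maps any weights to a uniform grid on [0, 1],
# so a single huge weight no longer dominates the distribution.
weights = np.array([0.3, 10.0, 0.7])
ranks = np.empty(len(weights))
ranks[np.argsort(weights)] = np.arange(1, len(weights) + 1)
normalized = (ranks - 1) / (len(weights) - 1)
print(normalized)  # [0.  1.  0.5]
```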
if weights.size == 0:
return weights
+ weights_out = np.full_like(weights, DEFAULT_EXTREMA_WEIGHT, dtype=float)
+
+ weights_finite_mask = np.isfinite(weights)
+ if not weights_finite_mask.any():
+ return weights_out
+
# Phase 1: Standardization
standardized_weights = standardize_weights(
- weights,
+ weights[weights_finite_mask],
method=standardization,
robust_quantiles=robust_quantiles,
mmad_scaling_factor=mmad_scaling_factor,
# Phase 2: Normalization
if normalization == NORMALIZATION_TYPES[6]: # "none"
normalized_weights = standardized_weights
-
elif normalization == NORMALIZATION_TYPES[0]: # "minmax"
normalized_weights = _normalize_minmax(standardized_weights, range=minmax_range)
-
elif normalization == NORMALIZATION_TYPES[1]: # "sigmoid"
normalized_weights = _normalize_sigmoid(
standardized_weights, scale=sigmoid_scale
)
-
elif normalization == NORMALIZATION_TYPES[2]: # "softmax"
normalized_weights = _normalize_softmax(
standardized_weights, temperature=softmax_temperature
)
-
elif normalization == NORMALIZATION_TYPES[3]: # "l1"
normalized_weights = _normalize_l1(standardized_weights)
-
elif normalization == NORMALIZATION_TYPES[4]: # "l2"
normalized_weights = _normalize_l2(standardized_weights)
-
elif normalization == NORMALIZATION_TYPES[5]: # "rank"
normalized_weights = _normalize_rank(standardized_weights, method=rank_method)
-
else:
raise ValueError(f"Unknown normalization method: {normalization}")
normalized_weights
)
- if np.isnan(normalized_weights).any():
- return np.full_like(weights, float(DEFAULT_EXTREMA_WEIGHT), dtype=float)
-
- return normalized_weights
+ weights_out[weights_finite_mask] = normalized_weights
+ weights_out[~np.isfinite(weights_out)] = DEFAULT_EXTREMA_WEIGHT
+ return weights_out
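The finite-mask rewrite above can be pictured in isolation: non-finite entries receive the default weight and only finite entries flow through normalization (`minmax` here; `DEFAULT_WEIGHT` stands in for `DEFAULT_EXTREMA_WEIGHT`, whose value is not shown in this diff):

```python
import numpy as np

DEFAULT_WEIGHT = 1.0  # stand-in for DEFAULT_EXTREMA_WEIGHT (assumed value)

# NaN/inf entries get the default; finite entries are minmax-normalized.
weights = np.array([1.0, np.nan, 3.0, np.inf, 2.0])
out = np.full_like(weights, DEFAULT_WEIGHT)
mask = np.isfinite(weights)
finite = weights[mask]
w_min, w_max = finite.min(), finite.max()
if w_max > w_min:
    out[mask] = (finite - w_min) / (w_max - w_min)
print(out)  # [0.  1.  1.  1.  0.5]
```

Compared with the removed behavior, a single NaN no longer forces every weight back to the default.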
def calculate_extrema_weights(
Returns: Series with weights at extrema indices (rest filled with default).
"""
if len(indices) == 0 or len(weights) == 0:
- return pd.Series(float(DEFAULT_EXTREMA_WEIGHT), index=series.index)
+ return pd.Series(DEFAULT_EXTREMA_WEIGHT, index=series.index)
if len(indices) != len(weights):
raise ValueError(
if normalized_weights.size == 0 or np.allclose(
normalized_weights, normalized_weights[0]
):
- normalized_weights = np.full_like(
- normalized_weights, float(DEFAULT_EXTREMA_WEIGHT)
- )
+ normalized_weights = np.full_like(normalized_weights, DEFAULT_EXTREMA_WEIGHT)
- weights_series = pd.Series(float(DEFAULT_EXTREMA_WEIGHT), index=series.index)
+ weights_series = pd.Series(DEFAULT_EXTREMA_WEIGHT, index=series.index)
mask = pd.Index(indices).isin(series.index)
normalized_weights = normalized_weights[mask]
valid_indices = [idx for idx, is_valid in zip(indices, mask) if is_valid]
Returns:
Tuple of (weighted_extrema, extrema_weights)
"""
- default_weights = pd.Series(float(DEFAULT_EXTREMA_WEIGHT), index=extrema.index)
+ default_weights = pd.Series(DEFAULT_EXTREMA_WEIGHT, index=extrema.index)
if (
len(indices) == 0 or len(weights) == 0 or strategy == WEIGHT_STRATEGIES[0]
): # "none"
if strategy in {
WEIGHT_STRATEGIES[1],
WEIGHT_STRATEGIES[2],
- WEIGHT_STRATEGIES[3],
- }: # "amplitude" or "amplitude_threshold_ratio" or "volume_weighted_amplitude"
+ }: # "amplitude" or "amplitude_threshold_ratio"
extrema_weights = calculate_extrema_weights(
series=extrema,
indices=indices,
list[TrendDirection],
list[float],
list[float],
- list[float],
- list[float],
- list[float],
]:
n = len(df)
if df.empty or n < natr_period:
[],
[],
[],
- [],
- [],
- [],
)
natr_values = (ta.NATR(df, timeperiod=natr_period).bfill() / 100.0).to_numpy()
log_closes = np.log(closes)
highs = df.get("high").to_numpy()
lows = df.get("low").to_numpy()
- volumes = df.get("volume").to_numpy()
state: TrendDirection = TrendDirection.NEUTRAL
pivots_directions: list[TrendDirection] = []
pivots_amplitudes: list[float] = []
pivots_amplitude_threshold_ratios: list[float] = []
- pivots_volume_spike_ratios: list[float] = []
- pivots_volume_quantiles: list[float] = []
- pivots_volume_weighted_amplitudes: list[float] = []
last_pivot_pos: int = -1
candidate_pivot_pos: int = -1
return volatility_quantile_cache[pos]
- volume_quantile_cache: dict[int, float] = {}
-
- def calculate_volume_quantile(pos: int) -> float:
- if pos not in volume_quantile_cache:
- pos_plus_1 = pos + 1
- start_pos = max(0, pos_plus_1 - natr_period)
- end_pos = min(pos_plus_1, n)
- if start_pos >= end_pos:
- volume_quantile_cache[pos] = np.nan
- else:
- volume_quantile_cache[pos] = calculate_quantile(
- volumes[start_pos:end_pos], volumes[pos]
- )
-
- return volume_quantile_cache[pos]
-
def calculate_slopes_ok_threshold(
pos: int,
min_threshold: float = 0.75,
candidate_pivot_pos = -1
candidate_pivot_value = np.nan
- def calculate_pivot_amplitude(current_value: float, previous_value: float) -> float:
- if np.isclose(previous_value, 0.0):
- return np.nan
- return abs(current_value - previous_value) / abs(previous_value)
-
- def calculate_pivot_amplitude_threshold_ratio(
- amplitude: float, threshold: float
- ) -> float:
- if np.isfinite(threshold) and threshold > 0 and np.isfinite(amplitude):
- return amplitude / threshold
- return np.nan
-
- def apply_weight_transform(weight: float, transform_type: str = "log1p") -> float:
- if not np.isfinite(weight):
- return np.nan
-
- if transform_type == "log1p":
- if weight < 0:
- return np.nan
- return np.log1p(weight)
-
- elif transform_type == "sqrt":
- if weight < 0:
- return np.nan
- return np.sqrt(weight)
-
- elif transform_type == "identity":
- return weight
-
- elif transform_type == "rational":
- return weight / (1 + weight)
-
- elif transform_type == "log10p":
- if weight < 0:
- return np.nan
- return np.log10(1 + weight)
-
- else:
- return weight
-
- def calculate_pivot_volume_metrics(
- pos: int, amplitude: float
- ) -> tuple[float, float, float]:
- if pos < 0 or pos >= n:
- return np.nan, np.nan, np.nan
+ def calculate_pivot_amplitude_and_threshold_ratio(
+ *,
+ previous_pos: int,
+ previous_value: float,
+ current_pos: int,
+ current_value: float,
+ ) -> tuple[float, float]:
+ if previous_pos < 0 or current_pos < 0:
+ return np.nan, np.nan
+ if previous_pos >= n or current_pos >= n:
+ return np.nan, np.nan
- pivot_volume = volumes[pos]
+ if np.isclose(previous_value, 0.0):
+ return np.nan, np.nan
- start_pos = max(0, pos - natr_period)
- if start_pos >= pos:
- volume_spike_ratio = np.nan
- else:
- volumes_slice = volumes[start_pos:pos]
- if volumes_slice.size == 0 or np.all(np.isnan(volumes_slice)):
- volume_spike_ratio = np.nan
- else:
- mean_volume = np.nanmean(volumes_slice)
- if mean_volume > 0 and np.isfinite(mean_volume):
- volume_spike_ratio = pivot_volume / mean_volume
- else:
- volume_spike_ratio = np.nan
+ amplitude = abs(current_value - previous_value) / abs(previous_value)
- volume_quantile = calculate_volume_quantile(pos)
+ start_pos = min(previous_pos, current_pos)
+ end_pos = max(previous_pos, current_pos) + 1
+ median_threshold = np.nanmedian(thresholds[start_pos:end_pos])
- transformed_volume_spike_ratio = apply_weight_transform(
- volume_spike_ratio, "log1p"
- )
- if np.isfinite(transformed_volume_spike_ratio) and np.isfinite(amplitude):
- volume_weighted_amplitude = amplitude * transformed_volume_spike_ratio
+ if (
+ np.isfinite(median_threshold)
+ and median_threshold > 0
+ and np.isfinite(amplitude)
+ ):
+ amplitude_threshold_ratio = amplitude / median_threshold
else:
- volume_weighted_amplitude = np.nan
+ amplitude_threshold_ratio = np.nan
- return volume_spike_ratio, volume_quantile, volume_weighted_amplitude
+ return amplitude, amplitude_threshold_ratio
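A worked example of the new helper's two outputs, restated outside the closure (the real function reads `thresholds` and `n` from enclosing scope; this sketch passes the threshold slice explicitly):

```python
import numpy as np

def amplitude_and_ratio(prev_value, cur_value, thresholds_slice):
    """Relative swing amplitude, and that amplitude over the median
    volatility threshold across the candles spanned by the swing."""
    if np.isclose(prev_value, 0.0):
        return np.nan, np.nan
    amplitude = abs(cur_value - prev_value) / abs(prev_value)
    median_threshold = np.nanmedian(thresholds_slice)
    if np.isfinite(median_threshold) and median_threshold > 0:
        return amplitude, amplitude / median_threshold
    return amplitude, np.nan

# 100 -> 110 swing: 10% amplitude; median threshold 2.5% -> ratio 4.0
print(amplitude_and_ratio(100.0, 110.0, np.array([0.02, 0.025, 0.03])))
```

Using the median threshold over the whole swing (rather than the threshold at the pivot candle alone, as before) makes the ratio less sensitive to a single-candle NATR spike at the pivot.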
def add_pivot(pos: int, value: float, direction: TrendDirection):
nonlocal last_pivot_pos
pivots_values.append(value)
pivots_directions.append(direction)
- if len(pivots_values) > 1:
- prev_pivot_value = pivots_values[-2]
- amplitude = calculate_pivot_amplitude(value, prev_pivot_value)
- amplitude_threshold_ratio = calculate_pivot_amplitude_threshold_ratio(
- amplitude, thresholds[pos]
+ if len(pivots_values) > 1 and last_pivot_pos >= 0:
+ amplitude, amplitude_threshold_ratio = (
+ calculate_pivot_amplitude_and_threshold_ratio(
+ previous_pos=last_pivot_pos,
+ previous_value=pivots_values[-2],
+ current_pos=pos,
+ current_value=value,
+ )
)
else:
amplitude = np.nan
amplitude_threshold_ratio = np.nan
- volume_spike_ratio, volume_quantile, volume_weighted_amplitude = (
- calculate_pivot_volume_metrics(pos, amplitude)
- )
-
pivots_amplitudes.append(amplitude)
pivots_amplitude_threshold_ratios.append(amplitude_threshold_ratio)
- pivots_volume_spike_ratios.append(volume_spike_ratio)
- pivots_volume_quantiles.append(volume_quantile)
- pivots_volume_weighted_amplitudes.append(volume_weighted_amplitude)
last_pivot_pos = pos
reset_candidate_pivot()
[],
[],
[],
- [],
- [],
- [],
)
for i in range(last_pivot_pos + 1, n):
pivots_directions,
pivots_amplitudes,
pivots_amplitude_threshold_ratios,
- pivots_volume_spike_ratios,
- pivots_volume_quantiles,
- pivots_volume_weighted_amplitudes,
)