Piment Noir Git Repositories - freqai-strategies.git/log
3 weeks ago  build(quickadapter): add missing ngboost and catboost version args to docker-compose
Jérôme Benoit [Fri, 9 Jan 2026 13:22:26 +0000 (14:22 +0100)] 
build(quickadapter): add missing ngboost and catboost version args to docker-compose

Aligns docker-compose.yml build args with all ARG declarations in Dockerfile.
This allows version overrides for ngboost (0.5.8) and catboost (1.2.8).

3 weeks ago  fix(quickadapter): preserve rsm parameter for CatBoost GPU pairwise modes (#37)
Jérôme Benoit [Fri, 9 Jan 2026 12:48:49 +0000 (13:48 +0100)] 
fix(quickadapter): preserve rsm parameter for CatBoost GPU pairwise modes (#37)

* fix(quickadapter): preserve rsm parameter for CatBoost GPU pairwise modes

The previous fix unconditionally removed the rsm parameter when using GPU,
but according to CatBoost documentation, rsm IS supported on GPU for
pairwise loss functions (PairLogit and PairLogitPairwise).

This commit refines the logic to only remove rsm for non-pairwise modes
on GPU, allowing users to benefit from rsm optimization when using
pairwise ranking loss functions.

Reference: https://github.com/catboost/catboost/issues/983
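
A minimal sketch of the refined handling, reusing the constant introduced below; the helper name and surrounding plumbing are illustrative assumptions, not the repository's actual code:

    # Loss functions for which CatBoost supports rsm on GPU (catboost#983).
    _CATBOOST_GPU_RSM_LOSS_FUNCTIONS = frozenset({"PairLogit", "PairLogitPairwise"})

    def sanitize_rsm(params: dict) -> dict:
        # Hypothetical helper: drop rsm on GPU only for non-pairwise modes.
        if (
            params.get("task_type") == "GPU"
            and params.get("loss_function") not in _CATBOOST_GPU_RSM_LOSS_FUNCTIONS
        ):
            params.pop("rsm", None)
        return params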

* refactor: rename constant and remove comments

* refactor(quickadapter): define _CATBOOST_GPU_RSM_LOSS_FUNCTIONS as global constant

- Define _CATBOOST_GPU_RSM_LOSS_FUNCTIONS as a reusable global constant
- Remove duplicate definitions in fit_regressor() and get_optuna_study_model_parameters()
- Improves maintainability: single source of truth for GPU rsm compatibility
- Ensures consistency between runtime logic and Optuna hyperparameter search

* chore: bump version to 3.10.8

Includes:
- CatBoost GPU rsm parameter fix for pairwise loss functions
- Optuna hyperparameter search optimization for rsm parameter
- Global constant for GPU rsm compatibility

3 weeks ago  CatBoost rsm support and pruning callback mod (#36)
jokedoke [Fri, 9 Jan 2026 12:17:08 +0000 (15:17 +0300)] 
CatBoost rsm support and pruning callback mod (#36)

3 weeks ago  fix(quickadapter): use trial pruning for CatBoost grow_policy validation
Jérôme Benoit [Thu, 8 Jan 2026 21:28:41 +0000 (22:28 +0100)] 
fix(quickadapter): use trial pruning for CatBoost grow_policy validation

Replace dynamic grow_policy options with trial pruning to avoid Optuna's
"CategoricalDistribution does not support dynamic value space" error.

Always suggest all grow_policy options, but prune trials with incompatible
Ordered boosting + nonsymmetric trees combinations.
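
A sketch of the pattern, assuming Optuna's standard trial API; the function name and exact option lists are illustrative:

    import optuna

    def suggest_catboost_tree_params(trial: optuna.Trial) -> dict:
        # Suggest from a static value space so the CategoricalDistribution
        # stays identical across trials, then prune incompatible combinations.
        boosting_type = trial.suggest_categorical("boosting_type", ["Ordered", "Plain"])
        grow_policy = trial.suggest_categorical(
            "grow_policy", ["SymmetricTree", "Depthwise", "Lossguide"]
        )
        if boosting_type == "Ordered" and grow_policy != "SymmetricTree":
            raise optuna.TrialPruned()
        return {"boosting_type": boosting_type, "grow_policy": grow_policy}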

3 weeks ago  fix(quickadapter): restrict CatBoost grow_policy for Ordered boosting
Jérôme Benoit [Thu, 8 Jan 2026 21:10:38 +0000 (22:10 +0100)] 
fix(quickadapter): restrict CatBoost grow_policy for Ordered boosting

CatBoost's Ordered boosting mode only supports SymmetricTree grow policy.
When Optuna suggested Ordered boosting with Depthwise or Lossguide grow
policies, CatBoost raised: "Ordered boosting is not supported for
nonsymmetric trees."

This fix conditionally restricts grow_policy options to SymmetricTree when
boosting_type is Ordered, preventing invalid parameter combinations during
hyperparameter optimization.

Reference: catboost/catboost source (catboost_options.cpp:~758)

3 weeks ago  refactor(quickadapter): use has_eval_set flag
Jérôme Benoit [Thu, 8 Jan 2026 20:20:28 +0000 (21:20 +0100)] 
refactor(quickadapter): use has_eval_set flag

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
3 weeks ago  chore(quickadapter): bump version to 3.10.7
Jérôme Benoit [Thu, 8 Jan 2026 16:25:34 +0000 (17:25 +0100)] 
chore(quickadapter): bump version to 3.10.7

3 weeks ago  feat(quickadapter): add tol parameter and tune HPO ranges for NGBoost
Jérôme Benoit [Thu, 8 Jan 2026 16:11:29 +0000 (17:11 +0100)] 
feat(quickadapter): add tol parameter and tune HPO ranges for NGBoost

3 weeks ago  feat(quickadapter): add DART booster support to XGBoost and enhance LightGBM DART HPO
Jérôme Benoit [Thu, 8 Jan 2026 16:02:01 +0000 (17:02 +0100)] 
feat(quickadapter): add DART booster support to XGBoost and enhance LightGBM DART HPO

- Add booster parameter with gbtree/dart options for XGBoost
- Add DART-specific params: sample_type, normalize_type, rate_drop, skip_drop, one_drop
- Enhance LightGBM DART with max_drop, xgboost_dart_mode, uniform_drop
- Adjust colsample_* ranges from 0.3-1.0 to 0.5-1.0 per best practices

3 weeks ago  feat(quickadapter): add boosting_type with DART support to LightGBM HPO
Jérôme Benoit [Thu, 8 Jan 2026 15:37:22 +0000 (16:37 +0100)] 
feat(quickadapter): add boosting_type with DART support to LightGBM HPO

Add boosting_type parameter allowing selection between gbdt and dart
boosting methods. When dart is selected, conditional parameters drop_rate
and skip_drop are tuned. Also widen num_leaves and min_child_samples
ranges for better exploration of the hyperparameter space.
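
A sketch of the conditional tuning, assuming Optuna's trial API (ranges illustrative, not the tuned ones):

    def suggest_lightgbm_boosting_params(trial) -> dict:
        params = {
            "boosting_type": trial.suggest_categorical("boosting_type", ["gbdt", "dart"])
        }
        if params["boosting_type"] == "dart":
            # DART-only knobs: gbdt trials never see these parameters.
            params["drop_rate"] = trial.suggest_float("drop_rate", 0.05, 0.5)
            params["skip_drop"] = trial.suggest_float("skip_drop", 0.25, 0.75)
        return params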

3 weeks ago  feat(quickadapter): add boosting_type and leaf_estimation_method to CatBoost HPO
Jérôme Benoit [Thu, 8 Jan 2026 15:16:04 +0000 (16:16 +0100)] 
feat(quickadapter): add boosting_type and leaf_estimation_method to CatBoost HPO

3 weeks ago  chore(quickadapter): set verbosity to 0 in config-template.json
Jérôme Benoit [Thu, 8 Jan 2026 13:56:30 +0000 (14:56 +0100)] 
chore(quickadapter): set verbosity to 0 in config-template.json

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
3 weeks ago  chore: add catboost_info to .gitignore
Jérôme Benoit [Thu, 8 Jan 2026 13:54:15 +0000 (14:54 +0100)] 
chore: add catboost_info to .gitignore

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
3 weeks ago  perf(quickadapter): default keep_extrema_fraction to 0.5 in config-template.json
Jérôme Benoit [Thu, 8 Jan 2026 12:56:03 +0000 (13:56 +0100)] 
perf(quickadapter): default keep_extrema_fraction to 0.5 in config-template.json

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
3 weeks ago  perf(quickadapter): optimize Optuna log scale for LightGBM and CatBoost hyperparameters
Jérôme Benoit [Thu, 8 Jan 2026 11:49:39 +0000 (12:49 +0100)] 
perf(quickadapter): optimize Optuna log scale for LightGBM and CatBoost hyperparameters

Apply logarithmic sampling scale to regularization and tree complexity parameters for improved hyperparameter search efficiency:

- LightGBM: Add num_leaves to log scale (exponential tree growth)
- CatBoost: Add l2_leaf_reg and random_strength to log scale (multiplicative effects)
- Revert bagging_temperature to linear scale (0 has special meaning: disables Bayesian bootstrap)

Log scale provides better exploration in low-value regions where these parameters have the most impact, consistent with Optuna best practices and industry standards (FLAML, XGBoost patterns).
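
For example, with Optuna's suggest API (ranges illustrative):

    def suggest_log_scaled_params(trial) -> dict:
        # log=True concentrates samples in the low-value region, where small
        # changes to regularization and tree complexity matter most.
        return {
            "num_leaves": trial.suggest_int("num_leaves", 8, 256, log=True),
            "l2_leaf_reg": trial.suggest_float("l2_leaf_reg", 1e-2, 10.0, log=True),
            "random_strength": trial.suggest_float("random_strength", 1e-2, 20.0, log=True),
            # Linear on purpose: 0 disables the Bayesian bootstrap, and a log
            # scale can never reach 0.
            "bagging_temperature": trial.suggest_float("bagging_temperature", 0.0, 10.0),
        }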

Bump version to 3.10.6

3 weeks ago  feat(quickadapter): add NGBoost regressor support with Optuna optimization (#33)
Jérôme Benoit [Thu, 8 Jan 2026 11:28:55 +0000 (12:28 +0100)] 
feat(quickadapter): add NGBoost regressor support with Optuna optimization (#33)

* feat: add NGBoost regressor support with Optuna optimization

- Add NGBoost to supported regressors (xgboost, lightgbm, histgradientboosting, ngboost)
- Install ngboost==0.5.8 in Docker image
- Implement fit_regressor branch for NGBoost with:
  - Dynamic distribution selection via get_ngboost_dist() helper
  - Support for 5 distributions: normal, lognormal, exponential, laplace, t
  - Early stopping support with validation set (X_val/Y_val API)
  - Sample weights support for training and validation
  - Optuna trial handling with random_state adjustment
  - Verbosity parameter conversion (verbosity -> verbose)
- Add Optuna hyperparameter optimization support:
  - n_estimators: [100, 1000] (log-scaled)
  - learning_rate: [0.001, 0.3] (log-scaled)
  - minibatch_frac: [0.5, 1.0] (linear)
  - col_sample: [0.3, 1.0] (linear)
  - dist: categorical [normal, lognormal]
  - Space reduction support for refined optimization
- Create get_ngboost_dist() helper function for distribution class mapping
- Default distribution: lognormal (optimal for crypto prices)
- Compatible with RMSE optimization objective (LogScore ≈ RMSE)
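
A minimal sketch of the distribution mapping helper named above, assuming these classes are exported by ngboost.distns:

    from ngboost.distns import Exponential, Laplace, LogNormal, Normal, T

    _NGBOOST_DISTS = {
        "normal": Normal,
        "lognormal": LogNormal,
        "exponential": Exponential,
        "laplace": Laplace,
        "t": T,
    }

    def get_ngboost_dist(name: str):
        # Map a configured distribution name to its ngboost class.
        dist = _NGBOOST_DISTS.get(name)
        if dist is None:
            raise ValueError(f"Invalid dist value {name}")
        return dist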

* docs: add ngboost to regressor enum in README

* fix: correct NGBoost parameter comment to reflect actual tuned parameters

Removed 'tree structure' from the parameter order comment since NGBoost
implementation doesn't tune tree structure parameters (only boosting,
sampling, and distribution parameters are optimized via Optuna).

* feat(ngboost): add tree structure parameter tuning

Add DecisionTreeRegressor base learner parameters for NGBoost:
- max_depth: (3, 8) based on literature and XGBoost patterns
- min_samples_split: (2, 20) following sklearn best practices
- min_samples_leaf: (1, 10) conservative range for crypto data

These parameters are passed via the Base argument to control
the underlying decision tree learners in the NGBoost ensemble.
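
A sketch of how these values reach NGBoost through the Base argument (numbers are illustrative mid-range picks, not tuned results):

    from ngboost import NGBRegressor
    from sklearn.tree import DecisionTreeRegressor

    # NGBRegressor has no tree-structure arguments of its own; the tuned
    # values are packed into the base learner instead.
    base_learner = DecisionTreeRegressor(
        max_depth=5, min_samples_split=10, min_samples_leaf=4
    )
    model = NGBRegressor(Base=base_learner)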

* refine(ngboost): narrow sampling and leaf hyperparameter ranges

Refined Optuna search space based on gradient boosting research:
- min_samples_leaf: 1-8 (was 1-10)
- minibatch_frac: 0.6-1.0 (was 0.5-1.0)
- col_sample: 0.4-1.0 (was 0.3-1.0)

Ranges focused on empirically proven optimal zones for ensemble
gradient boosting methods on financial/crypto time series data.

* refactor(ngboost): move DecisionTreeRegressor import to branch start

Move sklearn.tree.DecisionTreeRegressor import to the beginning of
the NGBoost branch (after NGBRegressor import) for better code
organization and consistency with import conventions.

3 weeks ago  feat(quickadapter): add CatBoost regressor with RMSE loss function (#34)
Jérôme Benoit [Thu, 8 Jan 2026 11:13:16 +0000 (12:13 +0100)] 
feat(quickadapter): add CatBoost regressor with RMSE loss function (#34)

* feat: add CatBoost regressor with RMSE loss function

Add CatBoost as the 5th regressor option using standard RMSE loss function.

Changes:
- Add catboost==1.2.8 to Dockerfile dependencies
- Update Regressor type literal to include 'catboost'
- Implement fit_regressor branch for CatBoost with:
  - RMSE loss function (default)
  - Early stopping and validation set handling
  - Verbosity parameter mapping
  - Sample weights support
  - Optuna CatBoostPruningCallback for trial pruning
- Add Optuna hyperparameter optimization with 6 parameters:
  - iterations: [100, 2000] (log-scaled)
  - learning_rate: [0.001, 0.3] (log-scaled)
  - depth: [4, 10] (tree depth)
  - l2_leaf_reg: [1, 10] (L2 regularization)
  - bagging_temperature: [0, 10] (Bayesian bootstrap)
  - random_strength: [1, 20] (split randomness)
- Update README.md regressor enum documentation

CatBoost advantages:
- Better accuracy than XGBoost/LightGBM (2024 benchmarks)
- GPU support for faster training
- Better categorical feature handling
- Strong overfitting resistance (ordered boosting)
- Production-ready at scale
- Optuna pruning callback for efficient hyperparameter search

* feat(catboost): add GPU/CPU differentiation for training and Optuna hyperparameters

- Add task_type-aware parameter handling in fit_regressor
- GPU mode: set devices, max_ctr_complexity=4, remove n_jobs
- CPU mode: propagate n_jobs to thread_count, max_ctr_complexity=2
- Trust CatBoost defaults for border_count (CPU=254, GPU=128)
- Differentiate Optuna hyperparameter search spaces by task_type
- GPU: depth=(4,12), border_count=(32,254), bootstrap=[Bayesian,Bernoulli]
- CPU: depth=(4,10), bootstrap=[Bayesian,Bernoulli,MVS]
- Add GPU-specific parameters: border_count, max_ctr_complexity
- Expand search space: min_data_in_leaf, grow_policy, model_size_reg, rsm, subsample
- Use CatBoost Pool for training with proper eval_set handling

* refactor(catboost): remove devices default to allow GPU auto-discovery

Trust CatBoost's automatic GPU device detection (default: all available GPUs).
Users can still explicitly set devices='0' or devices='0:1' in config if needed.

* fix(quickadapter): avoid forcing CatBoost thread_count when n_jobs unset

Remove nonstandard thread_count=-1 default in CPU CatBoost path.
Let CatBoost select threads automatically unless n_jobs is provided.
Improves consistency and avoids potential performance misinterpretation.

* Apply suggestion from @Copilot

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* fix(catboost): properly omit incompatible bootstrap parameters

CatBoost strictly validates bootstrap parameters and rejects:
- subsample with Bayesian bootstrap
- bagging_temperature with non-Bayesian bootstrap

Even passing 'neutral' values (0 or 1.0) causes runtime errors.

Changed from ternary expressions (which always pass params) to
conditional dict building (which omits incompatible params entirely).

Also fixed: border_count min_val 16→1 per CatBoost documentation.
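
A sketch of the conditional dict building, assuming Optuna's trial API (the subsample range is illustrative):

    def suggest_bootstrap_params(trial, bootstrap_type: str) -> dict:
        # Omit incompatible keys entirely: CatBoost rejects subsample with
        # Bayesian bootstrap and bagging_temperature with any other type,
        # even when given "neutral" values.
        params = {"bootstrap_type": bootstrap_type}
        if bootstrap_type == "Bayesian":
            params["bagging_temperature"] = trial.suggest_float(
                "bagging_temperature", 0.0, 10.0
            )
        else:
            params["subsample"] = trial.suggest_float("subsample", 0.5, 1.0)
        return params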

* refactor(quickadapter): replace plotting column names with constants

Replace string literals "minima", "maxima", and "smoothed-extrema" with MINIMA_COLUMN, MAXIMA_COLUMN, and SMOOTHED_EXTREMA_COLUMN constants following the existing *_COLUMN naming convention.

This improves maintainability and prevents typos when referencing these DataFrame column names throughout the codebase.

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
3 weeks ago  refactor(quickadapter): improve arguments naming
Jérôme Benoit [Wed, 7 Jan 2026 12:50:37 +0000 (13:50 +0100)] 
refactor(quickadapter): improve arguments naming

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
3 weeks ago  refactor(quickadapter): consolidate pivot metrics and extrema ranking; bump version to 3.10.5
Jérôme Benoit [Wed, 7 Jan 2026 12:46:29 +0000 (13:46 +0100)] 
refactor(quickadapter): consolidate pivot metrics and extrema ranking; bump version to 3.10.5

- Utils.py: unify amplitude/threshold/speed in calculate_pivot_metrics, remove calculate_pivot_speed, update add_pivot to consume normalized speed; preserves edge-case guards (NaN/inf, zero duration).
- QuickAdapterRegressorV3: add _calculate_n_kept_extrema and use in ranking; mark scaler fallback path; bump version to 3.10.5.
- QuickAdapterV3: bump version() to 3.10.5; adjust docstring for t-distribution helper.

4 weeks ago  refactor(quickadapter): sort imports
Jérôme Benoit [Wed, 7 Jan 2026 01:10:20 +0000 (02:10 +0100)] 
refactor(quickadapter): sort imports

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
4 weeks ago  feat(quickadapter): add logging for invalid fit data in ExtremaWeightingTransformer
Jérôme Benoit [Wed, 7 Jan 2026 01:08:17 +0000 (02:08 +0100)] 
feat(quickadapter): add logging for invalid fit data in ExtremaWeightingTransformer

Add warning when fit() receives data with no finite values, improving
observability of data quality issues. Uses fallback [0.0, 1.0] to prevent
pipeline crashes while alerting users to upstream preprocessing problems.

4 weeks ago  refactor(quickadapter): simplify early stopping condition checks
Jérôme Benoit [Wed, 7 Jan 2026 00:14:46 +0000 (01:14 +0100)] 
refactor(quickadapter): simplify early stopping condition checks

Remove redundant has_eval_set verification in early stopping callbacks for XGBoost and LightGBM. The check is unnecessary because early_stopping_rounds is only assigned a non-None value when has_eval_set is True, making the condition implicitly guaranteed.

4 weeks ago  feat(opencode): add commit command for Conventional Commits workflow
Jérôme Benoit [Wed, 7 Jan 2026 00:04:57 +0000 (01:04 +0100)] 
feat(opencode): add commit command for Conventional Commits workflow

4 weeks ago  refactor(xgboost): migrate to callback-based early stopping for API 3.x compatibility
Jérôme Benoit [Tue, 6 Jan 2026 23:30:18 +0000 (00:30 +0100)] 
refactor(xgboost): migrate to callback-based early stopping for API 3.x compatibility

- Replace deprecated early_stopping_rounds parameter with EarlyStopping callback
- Extract early_stopping_rounds from model parameters using pop() before instantiation
- Configure callback with metric_name='rmse', data_name='validation_0', save_best=True
- Reorganize LightGBM callback initialization for improved code readability
- Maintains backward compatibility with eval_set validation approach
- Ensures compatibility with XGBoost 3.1.2+ API requirements
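
A sketch of the callback-based pattern, assuming XGBoost's sklearn wrapper and callback API (model_params is a stand-in for the tuned parameter dict):

    from xgboost import XGBRegressor
    from xgboost.callback import EarlyStopping

    model_params = {"n_estimators": 1000, "early_stopping_rounds": 50}  # illustrative
    early_stopping_rounds = model_params.pop("early_stopping_rounds", None)
    callbacks = []
    if early_stopping_rounds is not None:
        callbacks.append(
            EarlyStopping(
                rounds=early_stopping_rounds,
                metric_name="rmse",
                data_name="validation_0",  # first eval_set entry
                save_best=True,
            )
        )
    model = XGBRegressor(callbacks=callbacks, **model_params)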

4 weeks ago  refactor(quickadapter): return cached sets directly in optuna_samplers_by_namespace
Jérôme Benoit [Tue, 6 Jan 2026 18:12:36 +0000 (19:12 +0100)] 
refactor(quickadapter): return cached sets directly in optuna_samplers_by_namespace

- Add _optuna_hpo_samplers_set() and _optuna_label_samplers_set() cached methods
- Change return type from tuple[tuple[OptunaSampler, ...], OptunaSampler] to tuple[set[OptunaSampler], OptunaSampler]
- Remove redundant set() conversion in sampler validation
- Align with existing pattern used by other constant set methods (_scaler_types_set, _threshold_methods_set, etc.)

4 weeks ago  docs(quickadapter): add constant_liar handling to TPE sampler description
Jérôme Benoit [Tue, 6 Jan 2026 12:38:54 +0000 (13:38 +0100)] 
docs(quickadapter): add constant_liar handling to TPE sampler description

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
4 weeks ago  perf(quickadapter): enable constant_liar on TPESampler when using multiple workers
Jérôme Benoit [Tue, 6 Jan 2026 00:02:49 +0000 (01:02 +0100)] 
perf(quickadapter): enable constant_liar on TPESampler when using multiple workers

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
4 weeks ago  feat(plot): add smoothed extrema line to min_max subplot with weighted smoothing
Jérôme Benoit [Mon, 5 Jan 2026 22:12:20 +0000 (23:12 +0100)] 
feat(plot): add smoothed extrema line to min_max subplot with weighted smoothing

- Add 'smoothed-extrema' column displaying weighted extrema after smoothing
- Position smoothed extrema line below maxima/minima bars in plot z-order
- Use 'wheat' color for better visual distinction from red/green bars
- Store smoothed result in variable before assigning to both EXTREMA_COLUMN and smoothed-extrema

4 weeks ago  feat(quickadapter): add multiple aggregation methods for combined extrema weighting (v3.10.4)
Jérôme Benoit [Mon, 5 Jan 2026 20:41:43 +0000 (21:41 +0100)] 
feat(quickadapter): add multiple aggregation methods for combined extrema weighting (v3.10.4)

- Add 5 new aggregation methods: arithmetic_mean, harmonic_mean, quadratic_mean, weighted_median, softmax
- Replace weighted_average (deprecated) with arithmetic_mean as new default
- Add softmax_temperature parameter (default: 1.0) for softmax aggregation
- Implement all methods using scipy.stats.pmean for power means (p=1,-1,2) and numpy for weighted_median
- Add softmax aggregation with temperature scaling and coefficient weighting
- Add validation and logging for softmax_temperature parameter
- Update README with precise mathematical formulas for all aggregation methods
- Bump version to 3.10.4 in strategy and model
- Add conditional logging for softmax_temperature when aggregation is softmax
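
A sketch of the power-mean aggregation, assuming scipy.stats.pmean (SciPy >= 1.9) with its weights keyword; the softmax combination is an illustrative reading of the description above:

    import numpy as np
    from scipy.stats import pmean

    values = np.array([0.2, 0.5, 0.8])        # illustrative metric values
    coefficients = np.array([1.0, 2.0, 1.0])  # per-metric coefficients

    arithmetic = pmean(values, 1, weights=coefficients)   # p = 1
    harmonic = pmean(values, -1, weights=coefficients)    # p = -1
    quadratic = pmean(values, 2, weights=coefficients)    # p = 2

    # Softmax aggregation with temperature scaling (1.0 = neutral).
    temperature = 1.0
    z = np.exp(values / temperature) * coefficients
    softmax = float((values * z / z.sum()).sum())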

4 weeks ago  feat(quickadapter): add logging for scaler and range parameters
Jérôme Benoit [Mon, 5 Jan 2026 15:38:28 +0000 (16:38 +0100)] 
feat(quickadapter): add logging for scaler and range parameters

4 weeks ago  feat(quickadapter): Add configurable feature normalization to QuickAdapterRegressorV3 (#31)
Jérôme Benoit [Mon, 5 Jan 2026 15:32:12 +0000 (16:32 +0100)] 
feat(quickadapter): Add configurable feature normalization to QuickAdapterRegressorV3 (#31)

* feat(quickadapter): add configurable feature normalization to data pipeline

Add support for configurable feature scaling/normalization in QuickAdapterRegressorV3
via define_data_pipeline() override. Users can now select different sklearn scalers
through feature_parameters configuration.

Supported normalization methods:
- minmax: MinMaxScaler with configurable range (default: -1 to 1)
- maxabs: MaxAbsScaler (scales by max absolute value)
- standard: StandardScaler (zero mean, unit variance)
- robust: RobustScaler (uses median and IQR, robust to outliers)

Configuration example:
{
  "freqai": {
    "feature_parameters": {
      "normalization": "minmax",
      "normalization_range": [-1, 1]
    }
  }
}

Implementation details:
- Overrides define_data_pipeline() to replace scalers in pipeline
- Optimizes default case (minmax with -1,1 range) by using parent pipeline
- Replaces both 'scaler' and 'post-pca-scaler' steps with selected scaler
- normalization_range parameter only applies to minmax scaler
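
A minimal sketch of the scaler selection; get_scaler() is named in this PR's review notes, but its signature here is an assumption:

    from sklearn.preprocessing import (
        MaxAbsScaler,
        MinMaxScaler,
        RobustScaler,
        StandardScaler,
    )

    def get_scaler(name: str, feature_range: tuple = (-1, 1)):
        # Map the configured normalization name to an sklearn scaler.
        if name == "minmax":
            return MinMaxScaler(feature_range=feature_range)
        if name == "maxabs":
            return MaxAbsScaler()
        if name == "standard":
            return StandardScaler()
        if name == "robust":
            return RobustScaler()
        raise ValueError(f"Invalid normalization value {name}")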

Note: Changing normalization config requires deleting existing models
(rm -rf user_data/models/*) due to pipeline serialization.

* fix(quickadapter): address PR review comments for feature normalization

- Remove unused 'datasieve as ds' import
- Add validation for normalization parameter using _validate_enum_value
- Add comprehensive validation for normalization_range (type, length, values, min < max)
- Fix tuple/list comparison by using tuple() conversion
- Store normalization_range in variable to avoid fetching twice
- Optimize scaler creation by creating once instead of calling get_scaler() multiple times

* refactor(quickadapter): harmonize validation error messages with codebase style

- Use consistent 'Invalid {param} {type}:' format matching existing patterns
- Remove unnecessary try-except block around float conversion
- Simplify error messages to be more concise
- Let float() raise its own errors for non-numeric values

* refactor(quickadapter): rename data pipeline parameters for clarity

- Rename ft_params.normalization → ft_params.scaler
- Rename ft_params.normalization_range → ft_params.range
- Add ScalerType Literal and _SCALER_TYPES constant
- Document new parameters in README

More intuitive naming that better reflects sklearn terminology.

* docs(README.md): format

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor(quickadapter): rename data pipeline parameters for clarity

- Rename ft_params.normalization → ft_params.scaler
- Rename ft_params.normalization_range → ft_params.range
- Add ScalerType Literal and _SCALER_TYPES constant
- Document new parameters in README under feature_parameters section

More intuitive naming that better reflects sklearn terminology.
Users configure these via freqai.feature_parameters.* in config.json.

* fix(quickadapter): address PR review comments for feature normalization

- Extract hardcoded defaults to class constants (SCALER_DEFAULT, RANGE_DEFAULT)
- Remove redundant tuple() call in feature_range comparison
- Follow codebase pattern for default values similar to other constants

* Apply suggestion from @Copilot

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* docs(README): add note about model retraining for scaler changes

* docs(README): clarify extrema weighting strategy requires model retraining

Only switching between 'none' and other strategies changes the label pipeline.
Other parameter changes within the same strategy do not require retraining.

* docs(README): format

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* chore: bump model and strategy version to 3.10.3

---------

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
4 weeks ago  refactor(ReforceXY): harmonize logging with DEBUG instrumentation and corrected levels
Jérôme Benoit [Mon, 5 Jan 2026 13:03:30 +0000 (14:03 +0100)] 
refactor(ReforceXY): harmonize logging with DEBUG instrumentation and corrected levels

- Add 11 strategic logger.debug() calls for cache, training, LSTM, predictions, and PBRS
- Correct 6 log levels: 2 INFO→WARNING (config overrides), 4 ERROR→WARNING (recoverable conditions)
- Maintain 100% %-formatting consistency across all 103 logging calls (performance + best practice)
- Total: 11 DEBUG, 31 INFO, 53 WARNING, 8 ERROR

4 weeks ago  chore(deps): lock file maintenance (#32)
renovate[bot] [Mon, 5 Jan 2026 12:22:02 +0000 (13:22 +0100)] 
chore(deps): lock file maintenance (#32)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
4 weeks ago  refactor(quickadapter): harmonize log and error messages across codebase
Jérôme Benoit [Mon, 5 Jan 2026 12:13:14 +0000 (13:13 +0100)] 
refactor(quickadapter): harmonize log and error messages across codebase

Standardize error and log messages for consistency and clarity:

- Standardize 29 ValueError messages with 'Invalid {param} value {value}' format
- Harmonize 35 warning messages with fallback defaults ('using default'/'using uniform')
- Replace {trade.pair} with {pair} in 33 log messages for consistent context
- Ensure all 7 exception handlers use exc_info=True for complete stack traces
- Normalize punctuation and capitalization in validation messages

This improves debugging experience and maintains uniform message patterns
throughout the QuickAdapter, Utils, and ExtremaWeightingTransformer modules.

4 weeks ago  docs(quickadapter): refine extrema weighting description
Jérôme Benoit [Sun, 4 Jan 2026 23:15:18 +0000 (00:15 +0100)] 
docs(quickadapter): refine extrema weighting description

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
4 weeks ago  feat(quickadapter): add combined extrema weighting strategy with multi-metric aggregation
Jérôme Benoit [Sun, 4 Jan 2026 23:02:21 +0000 (00:02 +0100)] 
feat(quickadapter): add combined extrema weighting strategy with multi-metric aggregation

Add new 'combined' strategy to extrema weighting that aggregates multiple
metrics (amplitude, amplitude_threshold_ratio, volume_rate, speed,
efficiency_ratio, volume_weighted_efficiency_ratio) using configurable
coefficients and aggregation methods.

Features:
- New strategy type 'combined' with per-metric coefficient weighting
- Support for weighted_average and geometric_mean aggregation methods
- Normalize all metrics to [0,1] range for consistent aggregation (see the sketch after this list):
  * amplitude: x/(1+x)
  * amplitude_threshold_ratio: x/(x+median)
  * volume_rate: x/(x+median)
  * speed: x/(1+x)
- Deterministic metric iteration order via COMBINED_METRICS constant
- Centralized validation in get_extrema_weighting_config()
- Comprehensive logging of new parameters
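
A sketch of the bounded transforms listed above (edge-case guards omitted; the helper name is illustrative):

    import numpy as np

    def normalize_metric(values: np.ndarray, kind: str) -> np.ndarray:
        # Map a non-negative metric into [0, 1) so heterogeneous metrics can
        # be aggregated on a common scale.
        if kind in ("amplitude", "speed"):
            return values / (1.0 + values)    # x / (1 + x)
        median = np.nanmedian(values)
        return values / (values + median)     # x / (x + median)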

Configuration:
- metric_coefficients: dict mapping metric names to positive weights
- aggregation: 'weighted_average' (default) or 'geometric_mean'
- Empty coefficients dict defaults to equal weights (1.0) for all metrics

Documentation:
- README updated with new strategy and parameters
- Mathematical formulas for aggregation methods
- Style aligned with existing documentation conventions

Bump version: 3.10.1 -> 3.10.2

4 weeks ago  fix(quickadapter): handle missing scaler attributes in ExtremaWeightingTransformer
Jérôme Benoit [Sun, 4 Jan 2026 17:06:10 +0000 (18:06 +0100)] 
fix(quickadapter): handle missing scaler attributes in ExtremaWeightingTransformer

Use getattr with default None value to properly handle cases where scaler
attributes don't exist (e.g., when loading pre-existing models), allowing
the RuntimeError to be raised with a clear message instead of AttributeError.

4 weeks ago  refactor(quickadapter): improve ExtremaWeightingTransformer and bump version to 3.10.1
Jérôme Benoit [Sun, 4 Jan 2026 16:57:52 +0000 (17:57 +0100)] 
refactor(quickadapter): improve ExtremaWeightingTransformer and bump version to 3.10.1

- Refactor ExtremaWeightingTransformer for better maintainability
  * Add scaler mapping dictionaries (_STANDARDIZATION_SCALERS, _NORMALIZATION_SCALERS)
  * Extract common scaler logic into _apply_scaler() helper method
  * Extract MMAD logic into _apply_mmad() helper method
  * Extract sigmoid logic into _apply_sigmoid() helper method
  * Extract fitting logic into _fit_standardization() and _fit_normalization()
  * Reduce code duplication in transform/inverse_transform methods
  * Simplify fit() method by consolidating edge case handling

- Bump version from 3.10.0 to 3.10.1
  * QuickAdapterRegressorV3: model version
  * QuickAdapterV3: strategy version

4 weeks ago  docs(quickadapter): use markdown escape
Jérôme Benoit [Sun, 4 Jan 2026 15:55:57 +0000 (16:55 +0100)] 
docs(quickadapter): use markdown escape

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
4 weeks ago  refactor: enhance extrema weighting with sklearn scalers and new methods
Jérôme Benoit [Sun, 4 Jan 2026 15:54:26 +0000 (16:54 +0100)] 
refactor: enhance extrema weighting with sklearn scalers and new methods

Replace manual standardization/normalization calculations with sklearn scalers
for better maintainability and correctness.

Standardization changes:
- Add power_yj (Yeo-Johnson) standardization method
- Replace manual zscore with StandardScaler
- Replace manual robust with RobustScaler
- Add mask size checks for all methods including MMAD
- Store fitted scaler objects instead of manual stats

Normalization changes:
- Add maxabs normalization (new default)
- Replace manual minmax with MinMaxScaler
- Fix sigmoid to output [-1, 1] range (was [0, 1])
- Replace manual calculations with MaxAbsScaler and MinMaxScaler

Other improvements:
- Remove zero-exclusion from mask (zeros are valid values)
- Fit normalization on standardized data (proper pipeline order)
- Add proper RuntimeError for unfitted scalers

Docs:
- Update README to reflect maxabs as new normalization default
- Document power_yj standardization type
- Harmonize mathematical formulas with code notation

4 weeks ago  refactor: document ExtremaWeightingTransformer zero handling
Jérôme Benoit [Sun, 4 Jan 2026 12:11:41 +0000 (13:11 +0100)] 
refactor: document ExtremaWeightingTransformer zero handling

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
4 weeks ago  refactor(quickadapter): code cleanups and optimizations
Jérôme Benoit [Sat, 3 Jan 2026 23:01:16 +0000 (00:01 +0100)] 
refactor(quickadapter): code cleanups and optimizations

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
4 weeks ago  refactor: improve string literal replacements and use specific tuple constants
Jérôme Benoit [Sat, 3 Jan 2026 22:36:37 +0000 (23:36 +0100)] 
refactor: improve string literal replacements and use specific tuple constants

- Replace hardcoded method names in error messages with tuple constants
  (lines 2003, 2167: use _DISTANCE_METHODS[0] and [1] instead of literal strings)
- Use _CLUSTER_METHODS instead of _SELECTION_METHODS indices for better
  code maintainability (e.g., _CLUSTER_METHODS[0] vs _SELECTION_METHODS[2])
- Fix trial_selection_method comparison order to match tuple constant order
  (compromise_programming [0] before topsis [1])
- Remove redundant power_mean None check (already validated by _validate_power_mean)
- Add clarifying comments to tuple constant usages (minkowski, rank_extrema)

4 weeks ago  refactor: replace hardcoded string literals with tuple constant references
Jérôme Benoit [Sat, 3 Jan 2026 21:55:40 +0000 (22:55 +0100)] 
refactor: replace hardcoded string literals with tuple constant references

Replace 8 hardcoded string literal comparisons with proper tuple constant
references using CONSTANT[index] pattern:

- _DENSITY_AGGREGATIONS: 'quantile' and 'power_mean' comparisons (2 fixes)
- _DISTANCE_METHODS: 'topsis' and 'compromise_programming' comparisons (6 fixes)

This improves type safety, maintainability, and ensures consistency with
the codebase's established constant usage patterns.

4 weeks ago  fix: eliminate data leakage in extrema weighting normalization (#30)
Jérôme Benoit [Sat, 3 Jan 2026 20:48:51 +0000 (21:48 +0100)] 
fix: eliminate data leakage in extrema weighting normalization (#30)

* fix: eliminate data leakage in extrema weighting normalization

Move dataset-dependent scaling from strategy (pre-split) to model label
pipeline (post-split) to prevent train/test data leakage.

Changes:
- Add ExtremaWeightingTransformer (datasieve BaseTransform) in Utils.py
  that fits standardization/normalization stats on training data only
- Add define_label_pipeline() in QuickAdapterRegressorV3 that replaces
  FreqAI's default MinMaxScaler with our configurable transformer
- Simplify strategy's set_freqai_targets() to pass raw weighted extrema
  without any normalization (normalization now happens post-split)
- Remove pre-split normalization functions from Utils.py (~107 lines)

The transformer supports:
- Standardization: zscore, robust, mmad, none
- Normalization: minmax, sigmoid, none (all mathematically invertible)
- Configurable minmax_range (default [-1, 1] per FreqAI convention)
- Correct inverse_transform for prediction recovery

BREAKING CHANGES:
- softmax normalization removed
- l1, l2, rank normalization removed (not mathematically invertible)
- rank_method config option removed
- extrema_weighting config now processed in model pipeline instead of strategy

* chore: remove dead rank_method log line

* chore: cleanup unused imports and comments

* refactor(quickadapter): cleanup extrema weighting implementation

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* fix: use robust_quantiles config in transformer fit()

* style: align with codebase conventions (error messages, near-zero detection)

* refactor: remove redundant checks

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* fix: update config validation for transformer pipeline

- Remove obsolete aggregation+normalization warning (no longer applies post-refactor)
- Change standardization+normalization=none from error to warning

* refactor: cleanup ExtremaWeightingTransformer implementation

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: cleanup ExtremaWeightingTransformer implementation

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: Remove redundant configuration extraction in ExtremaWeightingTransformer

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: align ExtremaWeightingTransformer with BaseTransform API

- Call super().__init__() with name parameter
- Match method signatures exactly (npt.ArrayLike, ArrayOrNone, ListOrNone)
- Return tuple from fit() instead of self
- Import types from same namespaces as BaseTransform

* refactor: cleanup type hints

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: remove unnecessary type casts and annotations

Let numpy types flow naturally without explicit float()/int() casts.

* refactor: avoid range python shadowing

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: cleanup extrema weighting transformer implementation

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: cleanup extrema weighting and smoothing config handling

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: cleanup extrema weighting transformer

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: filter non-finite values in ExtremaWeightingTransformer

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: use scipy.special.logit for inverse sigmoid transformation

Replace manual inverse sigmoid calculation (-np.log(1.0 / values - 1.0))
with scipy.special.logit() for better code clarity and consistency.

- Uses official scipy function that is the documented inverse of expit
- Mathematically equivalent to the previous implementation
- Improves code readability and maintainability
- Maintains symmetry: sp.special.expit() <-> sp.special.logit()

Also improve comment clarity for standardization identity function.
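
For reference, the expit/logit round trip this relies on:

    import numpy as np
    from scipy.special import expit, logit

    x = np.array([-2.0, 0.0, 2.0])
    y = expit(x)                      # sigmoid: 1 / (1 + exp(-x))
    assert np.allclose(logit(y), x)   # logit is the documented inverse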

* docs: update README.md

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: remove unused _n_train attribute from ExtremaWeightingTransformer

The _n_train attribute was being set during fit() but never used
elsewhere in the class or by the BaseTransform interface. Removing
it to reduce code clutter and improve maintainability.

* fix: import paths correction

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* fix: add Bessel correction and ValueError consistency in ExtremaWeightingTransformer

- Use ddof=1 for std computation (sample std instead of population std)
- Add ValueError in _inverse_standardize for unknown methods
- Add ValueError in _inverse_normalize for unknown methods
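
For reference, the ddof distinction in numpy (values illustrative):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    population_std = np.std(x)       # ddof=0, numpy's default
    sample_std = np.std(x, ddof=1)   # Bessel-corrected sample std, as in fit()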

* chore: refine config-template.json for extrema weighting options

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* chore: refine extrema weighting configuration in config-template.json

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* chore: remove hybrid extrema weighting source weights

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* fix: remove unreachable dead code in compute_extrema_weights

* docs: refine README extrema weighting descriptions

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
---------

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
4 weeks ago  chore: bump version to 3.9.2
Jérôme Benoit [Fri, 2 Jan 2026 23:14:14 +0000 (00:14 +0100)] 
chore: bump version to 3.9.2

Increment patch level version for both model and strategy to reflect
the comment cleanup changes.

4 weeks ago  refactor: remove redundant 'Only' prefix from namespace comments
Jérôme Benoit [Fri, 2 Jan 2026 23:10:44 +0000 (00:10 +0100)] 
refactor: remove redundant 'Only' prefix from namespace comments

Simplify code comments by removing the word 'Only' from namespace
identifier comments. The context already makes it clear that these
are the supported namespaces.

4 weeks ago  Remove Optuna "train" namespace as preliminary step to eliminate data leakage
Jérôme Benoit [Fri, 2 Jan 2026 23:05:31 +0000 (00:05 +0100)] 
Remove Optuna "train" namespace as preliminary step to eliminate data leakage

Remove the "train" namespace from Optuna hyperparameter optimization to
address data leakage issues in extrema weighting normalization. This is a
preliminary step before implementing a proper data preparation pipeline
that prevents train/test contamination.

Problem:
Current architecture applies extrema weighting normalization (minmax, softmax,
zscore, etc.) on the full dataset BEFORE train/test split. This causes data
leakage: train set labels are normalized using statistics (min/max, mean/std,
median/IQR) computed from the entire dataset including test set. The "train"
namespace hyperopt optimization exacerbates this by optimizing dataset
truncation with contaminated statistics.

Solution approach:
1. Remove "train" namespace optimization (this commit)
2. Switch to binary extrema labels (strategy: "none") to avoid leakage
3. Future: implement proper data preparation that computes normalization
   statistics on train set only and applies them to both train/test sets

This naive train/test splitting hyperopt is incompatible with a correct
data preparation pipeline where normalization must be fit on train and
transformed on test separately.
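
A minimal sketch of the target pipeline order, using a plain sklearn scaler to illustrate the principle (not the repository's transformer):

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    labels = np.random.default_rng(0).normal(size=(100, 1))
    train, test = labels[:80], labels[80:]

    scaler = MinMaxScaler(feature_range=(-1, 1)).fit(train)  # train stats only
    train_scaled = scaler.transform(train)
    test_scaled = scaler.transform(test)  # reuses train statistics: no leakage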

Changes:
- Remove "train" namespace from OptunaNamespace (3→2 namespaces: hp, label)
- Remove train_objective function and all train optimization logic
- Remove dataset truncation based on optimized train/test periods
- Update namespace indices: label from [2] to [1] throughout codebase
- Remove train_candles_step config parameter and train_rmse metric tracking
- Set extrema_weighting.strategy to "none" (binary labels: -1/0/+1)
- Update documentation to reflect 2-namespace architecture

Files modified:
- QuickAdapterRegressorV3.py: -204 lines (train namespace removal)
- QuickAdapterV3.py: remove train_rmse from plot config
- config-template.json: remove train params, set extrema_weighting to none
- README.md: update documentation (remove train_candles_step reference)

4 weeks ago  refactor(quickadapter): cleanup timeframe_minutes usage
Jérôme Benoit [Fri, 2 Jan 2026 17:29:04 +0000 (18:29 +0100)] 
refactor(quickadapter): cleanup timeframe_minutes usage

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
4 weeks ago  refactor(quickadapter): consolidate custom distance metrics in Pareto front selection
Jérôme Benoit [Fri, 2 Jan 2026 15:16:18 +0000 (16:16 +0100)] 
refactor(quickadapter): consolidate custom distance metrics in Pareto front selection

Extract shared distance metric logic from _compromise_programming_scores and
_topsis_scores into reusable static methods:

- Add _POWER_MEAN_MAP class constant as single source of truth for power values
- Add _power_mean_metrics_set() cached method for metric name lookup
- Add _hellinger_distance() for Hellinger/Shellinger computation
- Add _power_mean_distance() for generalized mean computation with validation
- Add _weighted_sum_distance() for weighted sum computation

Harmonize with existing validation API using ValidationMode and proper contexts.

4 weeks ago  refactor(quickadapter): unify validation helpers with ValidationMode support
Jérôme Benoit [Fri, 2 Jan 2026 14:21:16 +0000 (15:21 +0100)] 
refactor(quickadapter): unify validation helpers with ValidationMode support

- Add ValidationMode type for "warn", "raise", "none" behavior
- Rename _UNSUPPORTED_CLUSTER_METRICS to _UNSUPPORTED_WEIGHTS_METRICS
- Refactor _validate_minkowski_p, _validate_quantile_q with mode support
- Add _validate_power_mean_p, _validate_metric_weights_support, _validate_label_weights
- Add _validate_enum_value for generic enum validation
- Add _prepare_knn_kwargs for sklearn-specific weight handling
- Remove deprecated _normalize_weights function
- Update all call sites to use new unified API

4 weeks ago  docs(quickadapter): refine comment on HPO parameters ordering
Jérôme Benoit [Fri, 2 Jan 2026 11:44:11 +0000 (12:44 +0100)] 
docs(quickadapter): refine comment on HPO parameters ordering

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
4 weeks ago  feat: add debug logs for extrema detection and filtering on predictions
Jérôme Benoit [Fri, 2 Jan 2026 00:56:51 +0000 (01:56 +0100)] 
feat: add debug logs for extrema detection and filtering on predictions

- Add debug logging in _get_extrema_indices to track find_peaks detection counts
- Add debug logging in _get_ranked_peaks to track filtering results
- Add debug logging in _get_ranked_extrema to track filtering results
- Harmonize variable naming: use n_kept_minima/n_kept_maxima consistently
- Use consistent log format: 'Context | Action: details' pattern

4 weeks ago  refactor: properly format hyperopt metric log message
Jérôme Benoit [Thu, 1 Jan 2026 23:31:48 +0000 (00:31 +0100)] 
refactor: properly format hyperopt metric log message

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
4 weeks ago  refactor!: reorganize label selection with distance, cluster and density methods (#29)
Jérôme Benoit [Thu, 1 Jan 2026 23:01:19 +0000 (00:01 +0100)] 
refactor!: reorganize label selection with distance, cluster and density methods (#29)

* refactor: reorganize label selection with distance, cluster and density methods

* refactor: import lru_cache directly instead of functools module

* chore: remove unused imports in Utils.py

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: cleanup n_neighbors adjustment in QuickAdapterRegressorV3

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* fix: use unbounded cache for constant-returning helper methods

Replace @lru_cache(maxsize=1) with @lru_cache(maxsize=None) for all
static methods that return constant sets. Using maxsize=None is more
idiomatic and efficient for parameterless functions that always return
the same value.
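
For example (set contents illustrative):

    from functools import lru_cache

    class QuickAdapterRegressorV3:
        @staticmethod
        @lru_cache(maxsize=None)
        def _scaler_types_set() -> frozenset:
            # Parameterless and constant-returning: the unbounded cache holds
            # the single computed value with no eviction bookkeeping.
            return frozenset({"minmax", "maxabs", "standard", "robust"})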

* refactor: add _prepare_distance_kwargs to centralize distance kwargs preparation

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: cleanup extrema weighting API

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: cleanup extrema smoothing API

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: align namespace

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: add more tunables validations

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: simplify cluster-based label selection

- Remove ClusterSelectionMethod type and related constants
- Unify selection methods to use DistanceMethod for both cluster and trial selection
- Add separate trial_selection_method parameter for within-cluster selection
- Change power_mean default from 2.0 to 1.0 for internal consistency
- Add validation for selection_method and trial_selection_method parameters

* fix: add missing validations for label_distance_metric and label_density_aggregation_param

- Add validation for label_distance_metric parameter at configuration time
- Add early validation for label_density_aggregation_param (quantile and power_mean)
- Ensures invalid configuration values fail fast with clear error messages
- Harmonizes error messages with existing validation patterns in the codebase

* fix: add validation for label_cluster_metric and custom metrics support in topsis

- Add validation that label_cluster_metric is in _distance_metrics_set()
- Implement custom metrics support in _topsis_scores (hellinger, shellinger,
  harmonic/geometric/arithmetic/quadratic/cubic/power_mean, weighted_sum)
  matching _compromise_programming_scores implementation

* docs: update README.md with refactored label selection methods

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* docs: fix config parameter and bump to v3.9.0

- Fix config-template.json: label_metric -> label_method
- Bump version from 3.8.5 to 3.9.0 in model and strategy

Parameter names now match QuickAdapterRegressorV3.py implementation.

* docs: refine README label selection methods descriptions

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: refine error message

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
---------

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
4 weeks ago  feat(quickadapter): add TOPSIS selection method for clustering optuna pareto front trial selection
Jérôme Benoit [Wed, 31 Dec 2025 15:29:28 +0000 (16:29 +0100)] 
feat(quickadapter): add TOPSIS selection method for clustering optuna pareto front trial selection

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
4 weeks ago  refactor(quickadapter): factor out topsis and distance metric logic
Jérôme Benoit [Wed, 31 Dec 2025 13:01:44 +0000 (14:01 +0100)] 
refactor(quickadapter): factor out topsis and distance metric logic

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  feat(quickadapter): add TOPSIS metric for multi-objective HPO trial selection
Jérôme Benoit [Wed, 31 Dec 2025 00:01:40 +0000 (01:01 +0100)] 
feat(quickadapter): add TOPSIS metric for multi-objective HPO trial selection

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  fix(ReforceXY): Ensure LSTM state persistence for live RecurrentPPO inference
Jérôme Benoit [Tue, 30 Dec 2025 20:25:52 +0000 (21:25 +0100)] 
fix(ReforceXY): Ensure LSTM state persistence for live RecurrentPPO inference

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  docs(tests): synchronize README line numbers with current code
Jérôme Benoit [Tue, 30 Dec 2025 18:46:00 +0000 (19:46 +0100)] 
docs(tests): synchronize README line numbers with current code

- Update Coverage Mapping table line references for invariants 102-106, 110-113, 115-116, 118-121
- Update Non-Owning Smoke table line references for integration, components, pbrs, statistics
- Add missing constant groups to documentation (EFFICIENCY, PBRS, SCENARIOS, STAT_TOL)

5 weeks ago  fix(ReforceXY): reduce PBRS defaults to prevent reward exploitation
Jérôme Benoit [Tue, 30 Dec 2025 18:20:32 +0000 (19:20 +0100)] 
fix(ReforceXY): reduce PBRS defaults to prevent reward exploitation

Disable hold potential by default and reduce additive ratios to prevent
the agent from exploiting shaping rewards with many short losing trades.

Changes:
- hold_potential_enabled: true -> false (disabled by default)
- hold_potential_ratio: 0.03125 -> 0.001 (reduced when enabled)
- entry_additive_ratio: 0.125 -> 0.0625 (halved)
- exit_additive_ratio: 0.125 -> 0.0625 (halved)

These conservative defaults encourage the agent to focus on actual PnL
rather than gaming intermediate shaping rewards.

5 weeks ago  docs(ReforceXY): more aligned mathematical notation in README and code comments
Jérôme Benoit [Tue, 30 Dec 2025 16:50:34 +0000 (17:50 +0100)] 
docs(ReforceXY): more aligned mathematical notation in README and code comments

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  docs(ReforceXY): align mathematical notation with standard conventions
Jérôme Benoit [Tue, 30 Dec 2025 15:36:49 +0000 (16:36 +0100)] 
docs(ReforceXY): align mathematical notation with standard conventions

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  docs(ReforceXY): clarify PBRS formulas in reward calculation and analysis
Jérôme Benoit [Tue, 30 Dec 2025 00:30:11 +0000 (01:30 +0100)] 
docs(ReforceXY): clarify PBRS formulas in reward calculation and analysis

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  fix(quickadapter): optuna results key/value formatting
Jérôme Benoit [Mon, 29 Dec 2025 23:23:22 +0000 (00:23 +0100)] 
fix(quickadapter): optuna results key/value formatting

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  docs(ReforceXY): format README
Jérôme Benoit [Mon, 29 Dec 2025 15:29:13 +0000 (16:29 +0100)] 
docs(ReforceXY): format README

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  perf(ReforceXY): tune default hold potential ratio to 0.03125
Jérôme Benoit [Mon, 29 Dec 2025 15:27:29 +0000 (16:27 +0100)] 
perf(ReforceXY): tune default hold potential ratio to 0.03125

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  chore(deps): lock file maintenance (#27)
renovate[bot] [Mon, 29 Dec 2025 14:48:55 +0000 (15:48 +0100)] 
chore(deps): lock file maintenance (#27)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
5 weeks ago  feat(quickadapter): add label_sampler option for optuna multi-objective HPO
Jérôme Benoit [Mon, 29 Dec 2025 14:47:29 +0000 (15:47 +0100)] 
feat(quickadapter): add label_sampler option for optuna multi-objective HPO

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  fix(quickadapter): handle config reload
Jérôme Benoit [Mon, 29 Dec 2025 12:44:58 +0000 (13:44 +0100)] 
fix(quickadapter): handle config reload

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  fix(quickadapter): revert HistGradientBoostingRegressor optuna trial pruning
Jérôme Benoit [Mon, 29 Dec 2025 02:07:05 +0000 (03:07 +0100)] 
fix(quickadapter): revert HistGradientBoostingRegressor optuna trial pruning

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(quickadapter): silence warning at HPO with histgb
Jérôme Benoit [Mon, 29 Dec 2025 00:30:06 +0000 (01:30 +0100)] 
refactor(quickadapter): silence warning at HPO with histgb

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  feat(quickadapter): add early stopping support to all models and pruning for HistGradientBoostingRegressor
Jérôme Benoit [Mon, 29 Dec 2025 00:04:18 +0000 (01:04 +0100)] 
feat(quickadapter): add early stopping support to all models and pruning for HistGradientBoostingRegressor

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  perf(quickadapter): refine optuna hyperparameters search space
Jérôme Benoit [Sun, 28 Dec 2025 19:42:58 +0000 (20:42 +0100)] 
perf(quickadapter): refine optuna hyperparameters search space

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(quickadapter)!: normalize tunables namespace for semantic consistency (#26)
Jérôme Benoit [Sun, 28 Dec 2025 18:51:56 +0000 (19:51 +0100)] 
refactor(quickadapter)!: normalize tunables namespace for semantic consistency (#26)

* refactor(quickadapter): normalize tunables namespace for semantic consistency

Rename config keys and internal variables to follow consistent naming conventions:
- `_candles` suffix for time periods in candle units
- `_fraction` suffix for values in [0,1] range
- `_multiplier` suffix for scaling factors
- `_method` suffix for algorithm selectors

Config key renames (with backward-compatible deprecated aliases):
- lookback_period → lookback_period_candles
- decay_ratio → decay_fraction
- min/max_natr_ratio_percent → min/max_natr_ratio_fraction
- window → window_candles
- label_natr_ratio → label_natr_multiplier
- threshold_outlier → outlier_threshold_fraction
- thresholds_smoothing → threshold_smoothing_method
- thresholds_alpha → soft_extremum_alpha
- extrema_fraction → keep_extrema_fraction
- expansion_ratio → space_fraction
- trade_price_target → trade_price_target_method

Internal variable renames for code consistency:
- threshold_outlier → outlier_threshold_fraction
- thresholds_alpha → soft_extremum_alpha
- extrema_fraction → keep_extrema_fraction (local vars and function params)
- _reversal_lookback_period → _reversal_lookback_period_candles
- natr_ratio → natr_multiplier (zigzag function param)

All deprecated aliases emit warnings and remain functional for backward compatibility.
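
A sketch of the alias handling (simplified; per a later commit in this PR, the repository's get_config_value also migrates the value in place):

    import logging

    logger = logging.getLogger(__name__)

    def get_config_value(config: dict, new_key: str, old_key: str, default=None):
        # Prefer the new key, fall back to the deprecated alias with a
        # warning so existing configs keep working during the transition.
        if new_key in config:
            return config[new_key]
        if old_key in config:
            logger.warning(
                "Config key %s is deprecated, use %s instead", old_key, new_key
            )
            return config[old_key]
        return default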

* chore(quickadapter): remove temporary audit file from codebase

* refactor(quickadapter): align constant names with normalized tunables

Rename class constants to match the normalized config key names:
- PREDICTIONS_EXTREMA_THRESHOLD_OUTLIER_DEFAULT → PREDICTIONS_EXTREMA_OUTLIER_THRESHOLD_FRACTION_DEFAULT
- PREDICTIONS_EXTREMA_THRESHOLDS_ALPHA_DEFAULT → PREDICTIONS_EXTREMA_SOFT_EXTREMUM_ALPHA_DEFAULT
- PREDICTIONS_EXTREMA_EXTREMA_FRACTION_DEFAULT → PREDICTIONS_EXTREMA_KEEP_EXTREMA_FRACTION_DEFAULT

* fix(quickadapter): rename outlier_threshold_fraction to outlier_threshold_quantile

The value (e.g., 0.999) represents the 99.9th percentile, which is
mathematically a quantile, not a fraction. This aligns the naming with
the semantic meaning of the parameter.

* fix(quickadapter): add missing deprecated alias support

Add backward-compatible deprecated alias handling for:
- freqai.optuna_hyperopt.expansion_ratio → space_fraction
- freqai.feature_parameters.min_label_natr_ratio → min_label_natr_multiplier
- freqai.feature_parameters.max_label_natr_ratio → max_label_natr_multiplier

Also add missing deprecated alias documentation in README for:
- reversal_confirmation.min_natr_ratio_percent → min_natr_ratio_fraction
- reversal_confirmation.max_natr_ratio_percent → max_natr_ratio_fraction

This ensures all deprecated aliases mentioned in the commit message of the
namespace normalization refactor are properly implemented.

* style(readme): normalize trailing whitespace

* fix(quickadapter): address PR review feedback

- Fix error message referencing 'window' instead of 'window_candles'
- Clarify soft_extremum_alpha error message (alpha=0 uses mean)
- Improve space_fraction documentation in README
- Simplify midpoint docstring

* refactor(quickadapter): rename safe configuration value retrieval helper

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor(quickadapter): rename natr_ratio_fraction to natr_multiplier_fraction

- Align naming with label_natr_multiplier for consistency
- Rename get_config_value_with_deprecated_alias to get_config_value

* refactor(quickadapter): centralize label_natr_multiplier migration in get_label_defaults

- Move label_natr_ratio -> label_natr_multiplier migration to get_label_defaults()
- Update get_config_value to migrate in-place (pop old key, store new key)
- Remove redundant get_config_value calls in Strategy and Model __init__
- Simplify cached properties to use .get() since migration is done at init
- Rename _CUSTOM_STOPLOSS_NATR_RATIO_FRACTION to _CUSTOM_STOPLOSS_NATR_MULTIPLIER_FRACTION

* fix(quickadapter): check that df columns exist before using them

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* docs(README.md): update QuickAdapter strategy documentation

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* chore(quickadapter): bump version to 3.8.0

* refactor(quickadapter): remove unnecessary intermediate variable

* refactor(quickadapter): add cached properties for label_period_candles bounds

* chore(quickadapter): cleanup docstrings and comments

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
---------

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(ReforceXY): normalize tunables namespace
Jérôme Benoit [Sun, 28 Dec 2025 15:09:34 +0000 (16:09 +0100)] 
refactor(ReforceXY): normalize tunables namespace

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  chore(ReforceXY): update ReforceXY config-template.json
Jérôme Benoit [Sun, 28 Dec 2025 14:40:36 +0000 (15:40 +0100)] 
chore(ReforceXY): update ReforceXY config-template.json

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(quickadapter): ensure 1D target with HistGradientBoostingRegressor
Jérôme Benoit [Sun, 28 Dec 2025 01:34:12 +0000 (02:34 +0100)] 
refactor(quickadapter): ensure 1D target with HistGradientBoostingRegressor

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  feat(quickadapter): add HistGradientBoostingRegressor support (#25)
Jérôme Benoit [Sun, 28 Dec 2025 01:16:02 +0000 (02:16 +0100)] 
feat(quickadapter): add HistGradientBoostingRegressor support (#25)

* feat(quickadapter): add HistGradientBoostingRegressor support

Add sklearn's HistGradientBoostingRegressor as a third regressor option.

- Add 'histgradientboostingregressor' to Regressor type and REGRESSORS
- Implement fit_regressor() with X_val/y_val support and early stopping
- Add native sklearn hyperparameters to get_optuna_study_model_parameters()
- Return empty callbacks list (no Optuna pruning callback support)
- Log warning when init_model is provided (not supported)
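
A sketch using sklearn's native early-stopping mechanism (the repository additionally wires an explicit X_val/y_val path; data variables here are stand-ins):

    import numpy as np
    from sklearn.ensemble import HistGradientBoostingRegressor

    rng = np.random.default_rng(0)
    X_train, y_train = rng.normal(size=(500, 8)), rng.normal(size=500)

    # sklearn holds out validation_fraction internally for early stopping.
    model = HistGradientBoostingRegressor(
        max_iter=1000,
        early_stopping=True,
        validation_fraction=0.1,
        n_iter_no_change=50,
    ).fit(X_train, y_train)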

* fix(quickadapter): address PR review comments for HistGradientBoostingRegressor

* refactor(quickadapter): cleanup HistGradientBoostingRegressor integration

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* perf(quickadapter): fine tune optuna search space for HistGradientBoostingRegressor

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* perf(quickadapter): fine tune model hyperparameters search space by model

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* chore(quickadapter): bump versions

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* docs(README.md): add histgradientboostingregressor to supported regressors list

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
---------

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(ReforceXY): improve exception logging
Jérôme Benoit [Sat, 27 Dec 2025 18:50:51 +0000 (19:50 +0100)] 
refactor(ReforceXY): improve exception logging

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(ReforceXY): harmonize log messages
Jérôme Benoit [Sat, 27 Dec 2025 18:04:41 +0000 (19:04 +0100)] 
refactor(ReforceXY): harmonize log messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(ReforceXY): harmonize log messages
Jérôme Benoit [Sat, 27 Dec 2025 17:36:07 +0000 (18:36 +0100)] 
refactor(ReforceXY): harmonize log messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(quickadapter): improve error messages
Jérôme Benoit [Sat, 27 Dec 2025 17:01:15 +0000 (18:01 +0100)] 
refactor(quickadapter): improve error messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(quickadapter): harden error messages
Jérôme Benoit [Sat, 27 Dec 2025 15:55:59 +0000 (16:55 +0100)] 
refactor(quickadapter): harden error messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(quickadapter): harmonize error messages
Jérôme Benoit [Sat, 27 Dec 2025 15:42:34 +0000 (16:42 +0100)] 
refactor(quickadapter): harmonize error messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor: harmonize log messages
Jérôme Benoit [Sat, 27 Dec 2025 15:12:06 +0000 (16:12 +0100)] 
refactor: harmonize log messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(quickadapter): harmonize error messages
Jérôme Benoit [Sat, 27 Dec 2025 14:41:47 +0000 (15:41 +0100)] 
refactor(quickadapter): harmonize error messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(quickadapter): improve log message consistency
Jérôme Benoit [Sat, 27 Dec 2025 13:38:42 +0000 (14:38 +0100)] 
refactor(quickadapter): improve log message consistency

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor: harmonize logging messages
Jérôme Benoit [Sat, 27 Dec 2025 13:32:57 +0000 (14:32 +0100)] 
refactor: harmonize logging messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(ReforceXY): harmonize logging messages
Jérôme Benoit [Sat, 27 Dec 2025 13:14:59 +0000 (14:14 +0100)] 
refactor(ReforceXY): harmonize logging messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor: remove now unneeded debug code and improve logging messages
Jérôme Benoit [Sat, 27 Dec 2025 12:57:34 +0000 (13:57 +0100)] 
refactor: remove now unneeded debug code and improve logging messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(ReforceXY): harmonize and clarify logging messages
Jérôme Benoit [Sat, 27 Dec 2025 12:33:10 +0000 (13:33 +0100)] 
refactor(ReforceXY): harmonize and clarify logging messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(quickadapter): harmonize log messages
Jérôme Benoit [Sat, 27 Dec 2025 12:11:28 +0000 (13:11 +0100)] 
refactor(quickadapter): harmonize log messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(qav3): harmonize logging messages
Jérôme Benoit [Sat, 27 Dec 2025 12:00:40 +0000 (13:00 +0100)] 
refactor(qav3): harmonize logging messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(ReforceXY): harmonize logging messages
Jérôme Benoit [Sat, 27 Dec 2025 11:24:00 +0000 (12:24 +0100)] 
refactor(ReforceXY): harmonize logging messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(ReforceXY): improve code readability and maintainability
Jérôme Benoit [Sat, 27 Dec 2025 01:12:16 +0000 (02:12 +0100)] 
refactor(ReforceXY): improve code readability and maintainability

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  chore(quickadapter): bump model and strategy versions
Jérôme Benoit [Fri, 26 Dec 2025 21:39:53 +0000 (22:39 +0100)] 
chore(quickadapter): bump model and strategy versions

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  fix(ReforceXY): remove PBRS reward duration ratio clamping
Jérôme Benoit [Fri, 26 Dec 2025 20:37:58 +0000 (21:37 +0100)] 
fix(ReforceXY): remove PBRS reward duration ratio clamping

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor: harmonize error and warning messages
Jérôme Benoit [Fri, 26 Dec 2025 19:16:22 +0000 (20:16 +0100)] 
refactor: harmonize error and warning messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
5 weeks ago  refactor(ReforceXY): harmonize logging messages in reward space analysis
Jérôme Benoit [Fri, 26 Dec 2025 17:46:19 +0000 (18:46 +0100)] 
refactor(ReforceXY): harmonize logging messages in reward space analysis

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>