Piment Noir Git Repositories - freqai-strategies.git/log
Jérôme Benoit [Sun, 4 Jan 2026 16:57:52 +0000 (17:57 +0100)] 
refactor(quickadapter): improve ExtremaWeightingTransformer and bump version to 3.10.1

- Refactor ExtremaWeightingTransformer for better maintainability
  * Add scaler mapping dictionaries (_STANDARDIZATION_SCALERS, _NORMALIZATION_SCALERS)
  * Extract common scaler logic into _apply_scaler() helper method
  * Extract MMAD logic into _apply_mmad() helper method
  * Extract sigmoid logic into _apply_sigmoid() helper method
  * Extract fitting logic into _fit_standardization() and _fit_normalization()
  * Reduce code duplication in transform/inverse_transform methods
  * Simplify fit() method by consolidating edge case handling

- Bump version from 3.10.0 to 3.10.1
  * QuickAdapterRegressorV3: model version
  * QuickAdapterV3: strategy version

Jérôme Benoit [Sun, 4 Jan 2026 15:55:57 +0000 (16:55 +0100)] 
docs(quickadapter): use markdown escape

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sun, 4 Jan 2026 15:54:26 +0000 (16:54 +0100)] 
refactor: enhance extrema weighting with sklearn scalers and new methods

Replace manual standardization/normalization calculations with sklearn scalers
for better maintainability and correctness.

Standardization changes:
- Add power_yj (Yeo-Johnson) standardization method
- Replace manual zscore with StandardScaler
- Replace manual robust with RobustScaler
- Add mask size checks for all methods including MMAD
- Store fitted scaler objects instead of manual stats

Normalization changes:
- Add maxabs normalization (new default)
- Replace manual minmax with MinMaxScaler
- Fix sigmoid to output [-1, 1] range (was [0, 1])
- Replace manual calculations with MaxAbsScaler and MinMaxScaler

Other improvements:
- Remove zero-exclusion from mask (zeros are valid values)
- Fit normalization on standardized data (proper pipeline order)
- Add proper RuntimeError for unfitted scalers

Docs:
- Update README to reflect maxabs as new normalization default
- Document power_yj standardization type
- Harmonize mathematical formulas with code notation
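Illustrative sketch of the scaler swap above (not taken from Utils.py; the toy data and the exact wiring are assumed):

    import numpy as np
    from scipy.special import expit
    from sklearn.preprocessing import MaxAbsScaler, PowerTransformer, RobustScaler, StandardScaler

    values = np.array([0.0, 0.5, -1.2, 3.4, 0.8]).reshape(-1, 1)  # toy weighted extrema

    # Standardization: fitted scaler objects replace hand-rolled mean/std or median/IQR stats.
    standardizer = StandardScaler()  # alternatives: RobustScaler(), PowerTransformer(method="yeo-johnson")
    standardized = standardizer.fit_transform(values)

    # Normalization: maxabs (the new default) maps into [-1, 1] while preserving zeros and sign.
    normalized = MaxAbsScaler().fit_transform(standardized)

    # Sigmoid variant rescaled to [-1, 1] instead of [0, 1].
    sigmoid_normalized = 2.0 * expit(standardized) - 1.0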

Jérôme Benoit [Sun, 4 Jan 2026 12:11:41 +0000 (13:11 +0100)] 
refactor: document ExtremaWeightingTransformer zero handling

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 3 Jan 2026 23:01:16 +0000 (00:01 +0100)] 
refactor(quickadapter): code cleanups and optimizations

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 3 Jan 2026 22:36:37 +0000 (23:36 +0100)] 
refactor: improve string literal replacements and use specific tuple constants

- Replace hardcoded method names in error messages with tuple constants
  (lines 2003, 2167: use _DISTANCE_METHODS[0] and [1] instead of literal strings)
- Use _CLUSTER_METHODS instead of _SELECTION_METHODS indices for better
  code maintainability (e.g., _CLUSTER_METHODS[0] vs _SELECTION_METHODS[2])
- Fix trial_selection_method comparison order to match tuple constant order
  (compromise_programming [0] before topsis [1])
- Remove redundant power_mean None check (already validated by _validate_power_mean)
- Add clarifying comments to tuple constant usages (minkowski, rank_extrema)
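A minimal sketch of the tuple-constant pattern (the tuple members and the helper are hypothetical; only the CONSTANT[index] idiom is the point):

    _DISTANCE_METHODS = ("compromise_programming", "topsis")

    def score_trials(method: str) -> str:
        if method == _DISTANCE_METHODS[0]:  # "compromise_programming"
            return "cp"
        if method == _DISTANCE_METHODS[1]:  # "topsis"
            return "topsis"
        raise ValueError(f"Unsupported method {method!r}, expected one of {_DISTANCE_METHODS}")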

Jérôme Benoit [Sat, 3 Jan 2026 21:55:40 +0000 (22:55 +0100)] 
refactor: replace hardcoded string literals with tuple constant references

Replace 8 hardcoded string literal comparisons with proper tuple constant
references using CONSTANT[index] pattern:

- _DENSITY_AGGREGATIONS: 'quantile' and 'power_mean' comparisons (2 fixes)
- _DISTANCE_METHODS: 'topsis' and 'compromise_programming' comparisons (6 fixes)

This improves type safety, maintainability, and ensures consistency with
the codebase's established constant usage patterns.

Jérôme Benoit [Sat, 3 Jan 2026 20:48:51 +0000 (21:48 +0100)] 
fix: eliminate data leakage in extrema weighting normalization (#30)

* fix: eliminate data leakage in extrema weighting normalization

Move dataset-dependent scaling from strategy (pre-split) to model label
pipeline (post-split) to prevent train/test data leakage.

Changes:
- Add ExtremaWeightingTransformer (datasieve BaseTransform) in Utils.py
  that fits standardization/normalization stats on training data only
- Add define_label_pipeline() in QuickAdapterRegressorV3 that replaces
  FreqAI's default MinMaxScaler with our configurable transformer
- Simplify strategy's set_freqai_targets() to pass raw weighted extrema
  without any normalization (normalization now happens post-split)
- Remove pre-split normalization functions from Utils.py (~107 lines)

The transformer supports:
- Standardization: zscore, robust, mmad, none
- Normalization: minmax, sigmoid, none (all mathematically invertible)
- Configurable minmax_range (default [-1, 1] per FreqAI convention)
- Correct inverse_transform for prediction recovery

BREAKING CHANGES:
- softmax normalization removed
- l1, l2, rank normalization removed (not mathematically invertible)
- rank_method config option removed
- extrema_weighting config now processed in model pipeline instead of strategy
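A toy illustration of the post-split principle behind the transformer (plain sklearn, assumed data; the real implementation is the datasieve-based ExtremaWeightingTransformer in Utils.py):

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    rng = np.random.default_rng(0)
    labels = rng.normal(size=1000).reshape(-1, 1)   # stand-in for weighted extrema labels
    train, test = labels[:800], labels[800:]

    scaler = MinMaxScaler(feature_range=(-1, 1))    # [-1, 1] per FreqAI convention
    scaler.fit(train)                               # statistics from training data only
    train_t, test_t = scaler.transform(train), scaler.transform(test)
    recovered = scaler.inverse_transform(test_t)    # predictions can be mapped back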

* chore: remove dead rank_method log line

* chore: cleanup unused imports and comments

* refactor(quickadapter): cleanup extrema weighting implementation

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* fix: use robust_quantiles config in transformer fit()

* style: align with codebase conventions (error messages, near-zero detection)

* refactor: remove redundant checks

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* fix: update config validation for transformer pipeline

- Remove obsolete aggregation+normalization warning (no longer applies post-refactor)
- Change standardization+normalization=none from error to warning

* refactor: cleanup ExtremaWeightingTransformer implementation

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: cleanup ExtremaWeightingTransformer implementation

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: Remove redundant configuration extraction in ExtremaWeightingTransformer

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: align ExtremaWeightingTransformer with BaseTransform API

- Call super().__init__() with name parameter
- Match method signatures exactly (npt.ArrayLike, ArrayOrNone, ListOrNone)
- Return tuple from fit() instead of self
- Import types from same namespaces as BaseTransform
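A standalone skeleton of that shape (the class name suffix, the maxabs logic, and the default name argument are assumptions; the real class subclasses datasieve's BaseTransform, whose signatures are authoritative):

    import numpy as np
    import numpy.typing as npt

    class ExtremaWeightingTransformerSketch:
        def __init__(self, name: str = "ExtremaWeightingTransformerSketch", **kwargs) -> None:
            self._name = name  # the base class reportedly takes a name parameter
            self._max_abs = None

        def fit(self, X: npt.ArrayLike, y=None, sample_weight=None, feature_list=None):
            values = np.asarray(X, dtype=float)
            finite = values[np.isfinite(values)]
            self._max_abs = float(np.max(np.abs(finite))) if finite.size else 1.0
            return X, y, sample_weight, feature_list  # return the tuple, not self

        def transform(self, X: npt.ArrayLike, y=None, sample_weight=None, feature_list=None):
            if self._max_abs is None:
                raise RuntimeError(f"{self._name} must be fitted before transform()")
            scaled = np.asarray(X, dtype=float) / (self._max_abs or 1.0)
            return scaled, y, sample_weight, feature_list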

* refactor: cleanup type hints

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: remove unnecessary type casts and annotations

Let numpy types flow naturally without explicit float()/int() casts.

* refactor: avoid range python shadowing

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: cleanup extrema weighting transformer implementation

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: cleanup extrema weighting and smoothing config handling

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: cleanup extrema weighting transformer

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: filter non-finite values in ExtremaWeightingTransformer

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: use scipy.special.logit for inverse sigmoid transformation

Replace manual inverse sigmoid calculation (-np.log(1.0 / values - 1.0))
with scipy.special.logit() for better code clarity and consistency.

- Uses official scipy function that is the documented inverse of expit
- Mathematically equivalent to the previous implementation
- Improves code readability and maintainability
- Maintains symmetry: sp.special.expit() <-> sp.special.logit()

Also improve comment clarity for standardization identity function.
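A quick check of the equivalence claimed above:

    import numpy as np
    from scipy.special import expit, logit

    x = np.array([-2.0, 0.0, 3.5])
    s = expit(x)                                           # sigmoid
    np.testing.assert_allclose(logit(s), x)                # logit is the inverse of expit
    np.testing.assert_allclose(-np.log(1.0 / s - 1.0), x)  # same as the previous manual form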

* docs: update README.md

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: remove unused _n_train attribute from ExtremaWeightingTransformer

The _n_train attribute was being set during fit() but never used
elsewhere in the class or by the BaseTransform interface. Removing
it to reduce code clutter and improve maintainability.

* fix: import paths correction

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* fix: add Bessel correction and ValueError consistency in ExtremaWeightingTransformer

- Use ddof=1 for std computation (sample std instead of population std)
- Add ValueError in _inverse_standardize for unknown methods
- Add ValueError in _inverse_normalize for unknown methods
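For reference, the Bessel correction in numpy terms:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    print(np.std(x))          # population std (ddof=0): ~1.118
    print(np.std(x, ddof=1))  # sample std with Bessel correction: ~1.291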

* chore: refine config-template.json for extrema weighting options

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* chore: refine extrema weighting configuration in config-template.json

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* chore: remove hybrid extrema weighting source weights

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* fix: remove unreachable dead code in compute_extrema_weights

* docs: refine README extrema weighting descriptions

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
---------

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Fri, 2 Jan 2026 23:14:14 +0000 (00:14 +0100)] 
chore: bump version to 3.9.2

Increment patch level version for both model and strategy to reflect
the comment cleanup changes.

Jérôme Benoit [Fri, 2 Jan 2026 23:10:44 +0000 (00:10 +0100)] 
refactor: remove redundant 'Only' prefix from namespace comments

Simplify code comments by removing the word 'Only' from namespace
identifier comments. The context already makes it clear that these
are the supported namespaces.

Jérôme Benoit [Fri, 2 Jan 2026 23:05:31 +0000 (00:05 +0100)] 
Remove Optuna "train" namespace as preliminary step to eliminate data leakage

Remove the "train" namespace from Optuna hyperparameter optimization to
address data leakage issues in extrema weighting normalization. This is a
preliminary step before implementing a proper data preparation pipeline
that prevents train/test contamination.

Problem:
Current architecture applies extrema weighting normalization (minmax, softmax,
zscore, etc.) on the full dataset BEFORE train/test split. This causes data
leakage: train set labels are normalized using statistics (min/max, mean/std,
median/IQR) computed from the entire dataset including test set. The "train"
namespace hyperopt optimization exacerbates this by optimizing dataset
truncation with contaminated statistics.

Solution approach:
1. Remove "train" namespace optimization (this commit)
2. Switch to binary extrema labels (strategy: "none") to avoid leakage
3. Future: implement proper data preparation that computes normalization
   statistics on train set only and applies them to both train/test sets

This naive train/test splitting hyperopt is incompatible with a correct
data preparation pipeline where normalization must be fit on train and
transformed on test separately.
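A toy numerical contrast of the two regimes (assumed data, plain min/max scaling):

    import numpy as np

    rng = np.random.default_rng(1)
    labels = np.concatenate([rng.normal(0, 1, 800), rng.normal(0, 5, 200)])  # test regime differs
    train, test = labels[:800], labels[800:]

    # Leaky: statistics computed on the full dataset, so train labels "see" the test regime.
    leaky_train = (train - labels.min()) / (labels.max() - labels.min())

    # Correct: statistics from the train set only, reused unchanged on the test set.
    lo, hi = train.min(), train.max()
    clean_train = (train - lo) / (hi - lo)
    clean_test = (test - lo) / (hi - lo)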

Changes:
- Remove "train" namespace from OptunaNamespace (3→2 namespaces: hp, label)
- Remove train_objective function and all train optimization logic
- Remove dataset truncation based on optimized train/test periods
- Update namespace indices: label from [2] to [1] throughout codebase
- Remove train_candles_step config parameter and train_rmse metric tracking
- Set extrema_weighting.strategy to "none" (binary labels: -1/0/+1)
- Update documentation to reflect 2-namespace architecture

Files modified:
- QuickAdapterRegressorV3.py: -204 lines (train namespace removal)
- QuickAdapterV3.py: remove train_rmse from plot config
- config-template.json: remove train params, set extrema_weighting to none
- README.md: update documentation (remove train_candles_step reference)

Jérôme Benoit [Fri, 2 Jan 2026 17:29:04 +0000 (18:29 +0100)] 
refactor(quickadapter): cleanup timeframe_minutes usage

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Fri, 2 Jan 2026 15:16:18 +0000 (16:16 +0100)] 
refactor(quickadapter): consolidate custom distance metrics in Pareto front selection

Extract shared distance metric logic from _compromise_programming_scores and
_topsis_scores into reusable static methods:

- Add _POWER_MEAN_MAP class constant as single source of truth for power values
- Add _power_mean_metrics_set() cached method for metric name lookup
- Add _hellinger_distance() for Hellinger/Shellinger computation
- Add _power_mean_distance() for generalized mean computation with validation
- Add _weighted_sum_distance() for weighted sum computation

Harmonize with existing validation API using ValidationMode and proper contexts.
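One plausible reading of those helpers (free functions here; the real static methods, their weighting conventions, and edge-case handling may differ):

    import numpy as np

    def hellinger_distance(p: np.ndarray, q: np.ndarray) -> float:
        # Hellinger distance between two non-negative, normalized vectors.
        return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

    def power_mean_distance(values: np.ndarray, weights: np.ndarray, p: float) -> float:
        # Weighted power mean of per-objective distances: p=1 arithmetic, p=2 quadratic, p=-1 harmonic.
        if p == 0.0:  # geometric mean as the p -> 0 limit
            return float(np.exp(np.sum(weights * np.log(values)) / np.sum(weights)))
        return float((np.sum(weights * values ** p) / np.sum(weights)) ** (1.0 / p))

    def weighted_sum_distance(values: np.ndarray, weights: np.ndarray) -> float:
        return float(np.sum(weights * values))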

Jérôme Benoit [Fri, 2 Jan 2026 14:21:16 +0000 (15:21 +0100)] 
refactor(quickadapter): unify validation helpers with ValidationMode support

- Add ValidationMode type for "warn", "raise", "none" behavior
- Rename _UNSUPPORTED_CLUSTER_METRICS to _UNSUPPORTED_WEIGHTS_METRICS
- Refactor _validate_minkowski_p, _validate_quantile_q with mode support
- Add _validate_power_mean_p, _validate_metric_weights_support, _validate_label_weights
- Add _validate_enum_value for generic enum validation
- Add _prepare_knn_kwargs for sklearn-specific weight handling
- Remove deprecated _normalize_weights function
- Update all call sites to use new unified API
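The general shape of such a mode-aware helper (illustrative only; the real names and messages live in QuickAdapterRegressorV3):

    import logging
    from typing import Literal

    logger = logging.getLogger(__name__)
    ValidationMode = Literal["warn", "raise", "none"]

    def validate_minkowski_p(p: float, mode: ValidationMode = "raise") -> bool:
        if p >= 1.0:
            return True
        message = f"Invalid Minkowski p={p}, expected p >= 1"
        if mode == "raise":
            raise ValueError(message)
        if mode == "warn":
            logger.warning(message)
        return False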

Jérôme Benoit [Fri, 2 Jan 2026 11:44:11 +0000 (12:44 +0100)] 
docs(quickadapter): refine comment on HPO parameters ordering

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Fri, 2 Jan 2026 00:56:51 +0000 (01:56 +0100)] 
feat: add debug logs for extrema detection and filtering on predictions

- Add debug logging in _get_extrema_indices to track find_peaks detection counts
- Add debug logging in _get_ranked_peaks to track filtering results
- Add debug logging in _get_ranked_extrema to track filtering results
- Harmonize variable naming: use n_kept_minima/n_kept_maxima consistently
- Use consistent log format: 'Context | Action: details' pattern

Jérôme Benoit [Thu, 1 Jan 2026 23:31:48 +0000 (00:31 +0100)] 
refactor: properly format hyperopt metric log message

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 1 Jan 2026 23:01:19 +0000 (00:01 +0100)] 
refactor!: reorganize label selection with distance, cluster and density methods (#29)

* refactor: reorganize label selection with distance, cluster and density methods

* refactor: import lru_cache directly instead of functools module

* chore: remove unused imports in Utils.py

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: cleanup n_neighbors adjustment in QuickAdapterRegressorV3

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* fix: use unbounded cache for constant-returning helper methods

Replace @lru_cache(maxsize=1) with @lru_cache(maxsize=None) for all
static methods that return constant sets. Using maxsize=None is more
idiomatic and efficient for parameterless functions that always return
the same value.
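For example (hypothetical constant set):

    from functools import lru_cache

    @lru_cache(maxsize=None)  # unbounded cache: computed once, then reused
    def _distance_metrics_set() -> frozenset:
        return frozenset({"euclidean", "minkowski", "hellinger"})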

* refactor: add _prepare_distance_kwargs to centralize distance kwargs preparation

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: cleanup extrema weighting API

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: cleanup extrema smoothing API

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: align namespace

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: add more tunables validations

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: simplify cluster-based label selection

- Remove ClusterSelectionMethod type and related constants
- Unify selection methods to use DistanceMethod for both cluster and trial selection
- Add separate trial_selection_method parameter for within-cluster selection
- Change power_mean default from 2.0 to 1.0 for internal consistency
- Add validation for selection_method and trial_selection_method parameters

* fix: add missing validations for label_distance_metric and label_density_aggregation_param

- Add validation for label_distance_metric parameter at configuration time
- Add early validation for label_density_aggregation_param (quantile and power_mean)
- Ensures invalid configuration values fail fast with clear error messages
- Harmonizes error messages with existing validation patterns in the codebase

* fix: add validation for label_cluster_metric and custom metrics support in topsis

- Add validation that label_cluster_metric is in _distance_metrics_set()
- Implement custom metrics support in _topsis_scores (hellinger, shellinger,
  harmonic/geometric/arithmetic/quadratic/cubic/power_mean, weighted_sum)
  matching _compromise_programming_scores implementation

* docs: update README.md with refactored label selection methods

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* docs: fix config parameter and bump to v3.9.0

- Fix config-template.json: label_metric -> label_method
- Bump version from 3.8.5 to 3.9.0 in model and strategy

Parameter names now match QuickAdapterRegressorV3.py implementation.

* docs: refine README label selection methods descriptions

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor: refine error message

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
---------

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Wed, 31 Dec 2025 15:29:28 +0000 (16:29 +0100)] 
feat(quickadapter): add TOPSIS selection method for clustering optuna pareto front trial selection

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Wed, 31 Dec 2025 13:01:44 +0000 (14:01 +0100)] 
refactor(quickadapter): factor out topsis and distance metric logic

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Wed, 31 Dec 2025 00:01:40 +0000 (01:01 +0100)] 
feat(quickadapter): add TOPSIS metric for multi-objective HPO trial selection

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Tue, 30 Dec 2025 20:25:52 +0000 (21:25 +0100)] 
fix(ReforceXY): Ensure LSTM state persistence for live RecurrentPPO inference

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Tue, 30 Dec 2025 18:46:00 +0000 (19:46 +0100)] 
docs(tests): synchronize README line numbers with current code

- Update Coverage Mapping table line references for invariants 102-106, 110-113, 115-116, 118-121
- Update Non-Owning Smoke table line references for integration, components, pbrs, statistics
- Add missing constant groups to documentation (EFFICIENCY, PBRS, SCENARIOS, STAT_TOL)

Jérôme Benoit [Tue, 30 Dec 2025 18:20:32 +0000 (19:20 +0100)] 
fix(ReforceXY): reduce PBRS defaults to prevent reward exploitation

Disable hold potential by default and reduce additive ratios to prevent
the agent from exploiting shaping rewards with many short losing trades.

Changes:
- hold_potential_enabled: true -> false (disabled by default)
- hold_potential_ratio: 0.03125 -> 0.001 (reduced when enabled)
- entry_additive_ratio: 0.125 -> 0.0625 (halved)
- exit_additive_ratio: 0.125 -> 0.0625 (halved)

These conservative defaults encourage the agent to focus on actual PnL
rather than gaming intermediate shaping rewards.

Jérôme Benoit [Tue, 30 Dec 2025 16:50:34 +0000 (17:50 +0100)] 
docs(ReforceXY): more aligned mathematical notation in README and code comments

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Tue, 30 Dec 2025 15:36:49 +0000 (16:36 +0100)] 
docs(ReforceXY): align mathematical notation with standard conventions

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Tue, 30 Dec 2025 00:30:11 +0000 (01:30 +0100)] 
docs(ReforceXY): clarify PBRS formulas in reward calculation and analysis

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Mon, 29 Dec 2025 23:23:22 +0000 (00:23 +0100)] 
fix(quickadapter): optuna results key/value formatting

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Mon, 29 Dec 2025 15:29:13 +0000 (16:29 +0100)] 
docs(ReforceXY): format README

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Mon, 29 Dec 2025 15:27:29 +0000 (16:27 +0100)] 
perf(ReforceXY): tune default hold potential ratio to 0.03125

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
renovate[bot] [Mon, 29 Dec 2025 14:48:55 +0000 (15:48 +0100)] 
chore(deps): lock file maintenance (#27)

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Jérôme Benoit [Mon, 29 Dec 2025 14:47:29 +0000 (15:47 +0100)] 
feat(quickadapter): add label_sampler option for optuna multi-objective HPO

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Mon, 29 Dec 2025 12:44:58 +0000 (13:44 +0100)] 
fix(quickadapter): handle config reload

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Mon, 29 Dec 2025 02:07:05 +0000 (03:07 +0100)] 
fix(quickadapter): revert HistGradientBoostingRegressor optuna trial pruning

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Mon, 29 Dec 2025 00:30:06 +0000 (01:30 +0100)] 
refactor(quickadapter): silence warning at HPO with histgb

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Mon, 29 Dec 2025 00:04:18 +0000 (01:04 +0100)] 
feat(quickadapter): add early stopping support to all models and pruning for HistGradientBoostingRegressor

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sun, 28 Dec 2025 19:42:58 +0000 (20:42 +0100)] 
perf(quickadapter): refine optuna hyperparameters search space

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sun, 28 Dec 2025 18:51:56 +0000 (19:51 +0100)] 
refactor(quickadapter)!: normalize tunables namespace for semantic consistency (#26)

* refactor(quickadapter): normalize tunables namespace for semantic consistency

Rename config keys and internal variables to follow consistent naming conventions:
- `_candles` suffix for time periods in candle units
- `_fraction` suffix for values in [0,1] range
- `_multiplier` suffix for scaling factors
- `_method` suffix for algorithm selectors

Config key renames (with backward-compatible deprecated aliases):
- lookback_period → lookback_period_candles
- decay_ratio → decay_fraction
- min/max_natr_ratio_percent → min/max_natr_ratio_fraction
- window → window_candles
- label_natr_ratio → label_natr_multiplier
- threshold_outlier → outlier_threshold_fraction
- thresholds_smoothing → threshold_smoothing_method
- thresholds_alpha → soft_extremum_alpha
- extrema_fraction → keep_extrema_fraction
- expansion_ratio → space_fraction
- trade_price_target → trade_price_target_method

Internal variable renames for code consistency:
- threshold_outlier → outlier_threshold_fraction
- thresholds_alpha → soft_extremum_alpha
- extrema_fraction → keep_extrema_fraction (local vars and function params)
- _reversal_lookback_period → _reversal_lookback_period_candles
- natr_ratio → natr_multiplier (zigzag function param)

All deprecated aliases emit warnings and remain functional for backward compatibility.

* chore(quickadapter): remove temporary audit file from codebase

* refactor(quickadapter): align constant names with normalized tunables

Rename class constants to match the normalized config key names:
- PREDICTIONS_EXTREMA_THRESHOLD_OUTLIER_DEFAULT → PREDICTIONS_EXTREMA_OUTLIER_THRESHOLD_FRACTION_DEFAULT
- PREDICTIONS_EXTREMA_THRESHOLDS_ALPHA_DEFAULT → PREDICTIONS_EXTREMA_SOFT_EXTREMUM_ALPHA_DEFAULT
- PREDICTIONS_EXTREMA_EXTREMA_FRACTION_DEFAULT → PREDICTIONS_EXTREMA_KEEP_EXTREMA_FRACTION_DEFAULT

* fix(quickadapter): rename outlier_threshold_fraction to outlier_threshold_quantile

The value (e.g., 0.999) represents the 99.9th percentile, which is
mathematically a quantile, not a fraction. This aligns the naming with
the semantic meaning of the parameter.

* fix(quickadapter): add missing deprecated alias support

Add backward-compatible deprecated alias handling for:
- freqai.optuna_hyperopt.expansion_ratio → space_fraction
- freqai.feature_parameters.min_label_natr_ratio → min_label_natr_multiplier
- freqai.feature_parameters.max_label_natr_ratio → max_label_natr_multiplier

Also add missing deprecated alias documentation in README for:
- reversal_confirmation.min_natr_ratio_percent → min_natr_ratio_fraction
- reversal_confirmation.max_natr_ratio_percent → max_natr_ratio_fraction

This ensures all deprecated aliases mentioned in the commit message of the
namespace normalization refactor are properly implemented.

* style(readme): normalize trailing whitespace

* fix(quickadapter): address PR review feedback

- Fix error message referencing 'window' instead of 'window_candles'
- Clarify soft_extremum_alpha error message (alpha=0 uses mean)
- Improve space_fraction documentation in README
- Simplify midpoint docstring

* refactor(quickadapter): rename safe configuration value retrieval helper

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* refactor(quickadapter): rename natr_ratio_fraction to natr_multiplier_fraction

- Align naming with label_natr_multiplier for consistency
- Rename get_config_value_with_deprecated_alias to get_config_value

* refactor(quickadapter): centralize label_natr_multiplier migration in get_label_defaults

- Move label_natr_ratio -> label_natr_multiplier migration to get_label_defaults()
- Update get_config_value to migrate in-place (pop old key, store new key)
- Remove redundant get_config_value calls in Strategy and Model __init__
- Simplify cached properties to use .get() since migration is done at init
- Rename _CUSTOM_STOPLOSS_NATR_RATIO_FRACTION to _CUSTOM_STOPLOSS_NATR_MULTIPLIER_FRACTION
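A hedged sketch of that in-place migration (the helper name matches the commit, but the exact signature and logging are assumed):

    import logging

    logger = logging.getLogger(__name__)

    def get_config_value(section: dict, new_key: str, old_key: str, default=None):
        if old_key in section:
            logger.warning("'%s' is deprecated, use '%s' instead", old_key, new_key)
            section.setdefault(new_key, section.pop(old_key))  # pop old key, store new key
        return section.get(new_key, default)

    feature_parameters = {"label_natr_ratio": 9.0}
    value = get_config_value(feature_parameters, "label_natr_multiplier", "label_natr_ratio", 6.0)
    # value == 9.0; feature_parameters now only holds the new key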

* fix(quickadapter): check that df columns exist before using them

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* docs(README.md): update QuickAdapter strategy documentation

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* chore(quickadapter): bump version to 3.8.0

* refactor(quickadapter): remove unnecessary intermediate variable

* refactor(quickadapter): add cached properties for label_period_candles bounds

* chore(quickadapter): cleanup docstrings and comments

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
---------

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sun, 28 Dec 2025 15:09:34 +0000 (16:09 +0100)] 
refactor(ReforceXY): normalize tunables namespace

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sun, 28 Dec 2025 14:40:36 +0000 (15:40 +0100)] 
chore(ReforceXY): update ReforceXY config-template.json

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sun, 28 Dec 2025 01:34:12 +0000 (02:34 +0100)] 
refactor(quickadapter): ensure 1D target with HistGradientBoostingRegressor

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sun, 28 Dec 2025 01:16:02 +0000 (02:16 +0100)] 
feat(quickadapter): add HistGradientBoostingRegressor support (#25)

* feat(quickadapter): add HistGradientBoostingRegressor support

Add sklearn's HistGradientBoostingRegressor as a third regressor option.

- Add 'histgradientboostingregressor' to Regressor type and REGRESSORS
- Implement fit_regressor() with X_val/y_val support and early stopping
- Add native sklearn hyperparameters to get_optuna_study_model_parameters()
- Return empty callbacks list (no Optuna pruning callback support)
- Log warning when init_model is provided (not supported)
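The native sklearn early-stopping controls look like this (toy data; the fit_regressor() wiring of X_val/y_val in the model class is not shown):

    import numpy as np
    from sklearn.ensemble import HistGradientBoostingRegressor

    rng = np.random.default_rng(42)
    X, y = rng.normal(size=(500, 8)), rng.normal(size=500)

    model = HistGradientBoostingRegressor(
        max_iter=500,
        early_stopping=True,      # stop when the validation score stops improving
        validation_fraction=0.1,  # sklearn holds out its own validation split
        n_iter_no_change=20,
    )
    model.fit(X, y)
    preds = model.predict(X[:5])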

* fix(quickadapter): address PR review comments for HistGradientBoostingRegressor

* refactor(quickadapter): cleanup HistGradientBoostingRegressor integration

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* perf(quickadapter): fine tune optuna search space for HistGradientBoostingRegressor

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* perf(quickadapter): fine tune model hyperparameters search space by model

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* chore(quickadapter): bump versions

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
* docs(README.md): add histgradientboostingregressor to supported regressors list

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
---------

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 18:50:51 +0000 (19:50 +0100)] 
refactor(ReforceXY): improve exception logging

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 18:04:41 +0000 (19:04 +0100)] 
refactor(ReforceXY): harmonize log messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 17:36:07 +0000 (18:36 +0100)] 
refactor(ReforceXY): harmonize log messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 17:01:15 +0000 (18:01 +0100)] 
refactor(quickadapter): improve error messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 15:55:59 +0000 (16:55 +0100)] 
refactor(quickadapter): harden error messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 15:42:34 +0000 (16:42 +0100)] 
refactor(quickadapter): harmonize error messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 15:12:06 +0000 (16:12 +0100)] 
refactor: harmonize log messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 14:41:47 +0000 (15:41 +0100)] 
refactor(quickadapter): harmonize error messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 13:38:42 +0000 (14:38 +0100)] 
refactor(quickadapter): improve log messages consistency

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 13:32:57 +0000 (14:32 +0100)] 
refactor: harmonize logging messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 13:14:59 +0000 (14:14 +0100)] 
refactor(ReforceXY): harmonize logging messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 12:57:34 +0000 (13:57 +0100)] 
refactor: remove now unneeded debug code and improve logging messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 12:33:10 +0000 (13:33 +0100)] 
refactor(ReforceXY): harmonize and clarify logging messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 12:11:28 +0000 (13:11 +0100)] 
refactor(quickadapter): harmonize log messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 12:00:40 +0000 (13:00 +0100)] 
refactor(qav3): harmonize logging messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 11:24:00 +0000 (12:24 +0100)] 
refactor(ReforceXY): harmonize logging messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Sat, 27 Dec 2025 01:12:16 +0000 (02:12 +0100)] 
refactor(ReforceXY): improve code readability and maintainability

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Fri, 26 Dec 2025 21:39:53 +0000 (22:39 +0100)] 
chore(quickadapter): bump model and strategy versions

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Fri, 26 Dec 2025 20:37:58 +0000 (21:37 +0100)] 
fix(ReforceXY): remove PBRS reward duration ratio clamping

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Fri, 26 Dec 2025 19:16:22 +0000 (20:16 +0100)] 
refactor: harmonize errors and warnings messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Fri, 26 Dec 2025 17:46:19 +0000 (18:46 +0100)] 
refactor(ReforceXY): harmonize logging messages in reward space analysis

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Fri, 26 Dec 2025 16:05:32 +0000 (17:05 +0100)] 
refactor(quickadapter): add price target tunables constant.

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Fri, 26 Dec 2025 15:39:31 +0000 (16:39 +0100)] 
refactor: harmonize logging messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Fri, 26 Dec 2025 15:05:43 +0000 (16:05 +0100)] 
refactor: harmonize Optuna log messages

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Fri, 26 Dec 2025 14:19:55 +0000 (15:19 +0100)] 
refactor(quickadapter): cleanup PnL momentum declining trade exit logic

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Fri, 26 Dec 2025 13:03:41 +0000 (14:03 +0100)] 
fix(quickadapter): unbiased quantile calculation with percentileofscore

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 25 Dec 2025 23:11:19 +0000 (00:11 +0100)] 
refactor(ReforceXY): sensible defaults for risk/reward ratio and hold potential

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 25 Dec 2025 20:14:42 +0000 (21:14 +0100)] 
refactor(quickadapter): use float

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 25 Dec 2025 19:39:03 +0000 (20:39 +0100)] 
refactor(quickadapter): improve outlier detection fitting

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 25 Dec 2025 18:51:46 +0000 (19:51 +0100)] 
test(ReforceXY): use proper constants

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 25 Dec 2025 18:20:21 +0000 (19:20 +0100)] 
refactor(quickadapter): cleanup optuna trials validation

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 25 Dec 2025 18:04:33 +0000 (19:04 +0100)] 
refactor(quickadapter): improve optuna integration

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 25 Dec 2025 16:22:13 +0000 (17:22 +0100)] 
refactor(quickadapter)!: rename nadaraya_watson to gaussian_filter1d

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 25 Dec 2025 15:02:10 +0000 (16:02 +0100)] 
refactor(quickadapter): cleanup redundant checks in _impute_weights()

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 25 Dec 2025 14:58:23 +0000 (15:58 +0100)] 
refactor(ReforceXY): cleanup overzealous reward params checks

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 25 Dec 2025 13:31:46 +0000 (14:31 +0100)] 
chore: bump model and strategy versions

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 25 Dec 2025 13:27:22 +0000 (14:27 +0100)] 
fix(quickadapter): ensure extrema weighting sources express future price movements

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 25 Dec 2025 12:21:57 +0000 (13:21 +0100)] 
test(ReforceXY): factor out common code

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 25 Dec 2025 11:05:37 +0000 (12:05 +0100)] 
test(ReforceXY): use standardized test parameters and tolerances

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Thu, 25 Dec 2025 01:09:50 +0000 (02:09 +0100)] 
refactor(ReforceXY): simplify param helpers with auto-lookup defaults

Jérôme Benoit [Wed, 24 Dec 2025 22:53:18 +0000 (23:53 +0100)] 
docs(ReforceXY): update tests documentation

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Wed, 24 Dec 2025 22:41:31 +0000 (23:41 +0100)] 
test(ReforceXY): cleanup tests namespace

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Wed, 24 Dec 2025 22:16:34 +0000 (23:16 +0100)] 
refactor(ReforceXY): consolidate default params in test helpers

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Wed, 24 Dec 2025 21:15:55 +0000 (22:15 +0100)] 
refactor(ReforceXY): align methods signature

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Wed, 24 Dec 2025 20:02:50 +0000 (21:02 +0100)] 
chore(ReforceXY)!: rename idle/hold penalty scale to ratio

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Wed, 24 Dec 2025 18:27:31 +0000 (19:27 +0100)] 
fix(ReforceXY): enforce coherent scale for reward components

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Wed, 24 Dec 2025 13:54:55 +0000 (14:54 +0100)] 
docs: align k-means wording

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Wed, 24 Dec 2025 13:26:45 +0000 (14:26 +0100)] 
refactor(quickadapter): dynamically adjust extrema plot epsilon for zero weighted extrema values

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Wed, 24 Dec 2025 12:36:53 +0000 (13:36 +0100)] 
fix(quickadapter): render zero-weight extrema bars

Jérôme Benoit [Wed, 24 Dec 2025 00:06:15 +0000 (01:06 +0100)] 
test(ReforceXY): finish factor decoupling properly

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Tue, 23 Dec 2025 23:13:15 +0000 (00:13 +0100)] 
feat(ReforceXY): make PBRS position holding risk reward aware

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Tue, 23 Dec 2025 19:13:18 +0000 (20:13 +0100)] 
refactor(ReforceXY): PBRS refactoring, bug fix, and documentation harmonization

This commit includes three major improvements to the PBRS implementation:

1. Bug Fix: idle_factor calculation
   - Fixed incorrect variable reference in reward_space_analysis.py:625
   - Changed 'factor' to 'base_factor' in idle_factor formula
   - Formula: idle_factor = base_factor * (profit_aim / risk_reward_ratio) / 4.0
   - Also fixed in test_reward_components.py and ReforceXY.py

2. Refactoring: Separation of concerns in PBRS calculation
   - Renamed apply_potential_shaping() → compute_pbrs_components()
   - Removed base_reward parameter from PBRS functions
   - PBRS functions now return only shaping components
   - Caller responsible for: total = base_reward + shaping + entry + exit
   - Kept deprecated wrapper for backward compatibility
   - Updated ReforceXY.py with parallel changes
   - Adapted tests to new function signatures

3. Documentation: Complete mathematical notation harmonization
   - Achieved 100% consistent notation across both implementations
   - Standardized on Greek symbols: Φ(s), γ, Δ(s,a,s')
   - Eliminated mixing of word forms (Phi/gamma/Delta) with symbols
   - Harmonized docstrings to 156-169 lines with identical theory sections
   - Added cross-references between implementations
   - Fixed all instances of Δ(s,s') → Δ(s,a,s') to include action parameter

Files modified:
- reward_space_analysis/reward_space_analysis.py: Core refactoring + docs
- user_data/freqaimodels/ReforceXY.py: Parallel refactoring + docs
- tests/components/test_additives.py: Adapted to new signature
- tests/components/test_reward_components.py: Bug fix
- tests/api/test_api_helpers.py: Adapted to new signature

All 50 tests pass. Behavior preserved except for intentional bug fix.
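A minimal sketch of the caller-side composition from points 1 and 2 above (names and numbers are illustrative, not the actual compute_pbrs_components() signature):

    def pbrs_shaping(prev_potential: float, next_potential: float, gamma: float) -> float:
        # Potential-based shaping term: Delta(s, a, s') = gamma * Phi(s') - Phi(s)
        return gamma * next_potential - prev_potential

    base_reward = 1.5
    shaping = pbrs_shaping(prev_potential=0.2, next_potential=0.3, gamma=0.99)
    entry_additive, exit_additive = 0.05, 0.0
    total_reward = base_reward + shaping + entry_additive + exit_additive  # caller composes the total

    # Bug fix from point 1: idle_factor uses base_factor, not the composed factor.
    base_factor, profit_aim, risk_reward_ratio = 100.0, 0.025, 2.0
    idle_factor = base_factor * (profit_aim / risk_reward_ratio) / 4.0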

Jérôme Benoit [Tue, 23 Dec 2025 17:02:49 +0000 (18:02 +0100)] 
fix(ReforceXY): make the data generation duration aware

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Tue, 23 Dec 2025 16:53:41 +0000 (17:53 +0100)] 
refactor(ReforceXY): cleanup reward space analysis

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Mon, 22 Dec 2025 22:56:34 +0000 (23:56 +0100)] 
chore: refresh openspec artifacts

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Mon, 22 Dec 2025 22:16:46 +0000 (23:16 +0100)] 
feat(ReforceXY): add purge_period to optuna config to periodically purge optuna studies

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Mon, 22 Dec 2025 19:02:33 +0000 (20:02 +0100)] 
fix(ReforceXY): reset last reward shaping on neutral self loop

When the environment is reset, ensure that the last reward shaping value is reset as well.

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>
Jérôme Benoit [Mon, 22 Dec 2025 18:54:52 +0000 (19:54 +0100)] 
refactor(ReforceXY): cleanup variables namespace

Signed-off-by: Jérôme Benoit <jerome.benoit@piment-noir.org>