--- /dev/null
+---
+agent: build
+description: Implement an approved OpenSpec change and keep tasks in sync.
+---
+
+<!-- OPENSPEC:START -->
+
+**Guardrails**
+
+- Favor straightforward, minimal implementations first and add complexity only when it is requested or clearly required.
+- Keep changes tightly scoped to the requested outcome.
+- Refer to `openspec/AGENTS.md` (located inside the `openspec/` directory—run `ls openspec` or `openspec update` if you don't see it) if you need additional OpenSpec conventions or clarifications.
+
+**Steps**
+
+Track these steps as TODOs and complete them one by one.
+
+1. Read `changes/<id>/proposal.md`, `design.md` (if present), and `tasks.md` to confirm scope and acceptance criteria.
+2. Work through tasks sequentially, keeping edits minimal and focused on the requested change.
+3. Confirm completion before updating statuses—make sure every item in `tasks.md` is finished.
+4. Update the checklist after all work is done so each task is marked `- [x]` and reflects reality.
+5. Reference `openspec list` or `openspec show <item>` when additional context is required.
+
+**Reference**
+
+- Use `openspec show <id> --json --deltas-only` if you need additional context from the proposal while implementing.
+<!-- OPENSPEC:END -->
--- /dev/null
+---
+agent: build
+description: Archive a deployed OpenSpec change and update specs.
+---
+
+<ChangeId>
+ $ARGUMENTS
+</ChangeId>
+<!-- OPENSPEC:START -->
+**Guardrails**
+- Favor straightforward, minimal implementations first and add complexity only when it is requested or clearly required.
+- Keep changes tightly scoped to the requested outcome.
+- Refer to `openspec/AGENTS.md` (located inside the `openspec/` directory—run `ls openspec` or `openspec update` if you don't see it) if you need additional OpenSpec conventions or clarifications.
+
+**Steps**
+
+1. Determine the change ID to archive:
+ - If this prompt already includes a specific change ID (for example inside a `<ChangeId>` block populated by slash-command arguments), use that value after trimming whitespace.
+ - If the conversation references a change loosely (for example by title or summary), run `openspec list` to surface likely IDs, share the relevant candidates, and confirm which one the user intends.
+ - Otherwise, review the conversation, run `openspec list`, and ask the user which change to archive; wait for a confirmed change ID before proceeding.
+ - If you still cannot identify a single change ID, stop and tell the user you cannot archive anything yet.
+2. Validate the change ID by running `openspec list` (or `openspec show <id>`) and stop if the change is missing, already archived, or otherwise not ready to archive.
+3. Run `openspec archive <id> --yes` so the CLI moves the change and applies spec updates without prompts (use `--skip-specs` only for tooling-only work).
+4. Review the command output to confirm the target specs were updated and the change landed in `changes/archive/`.
+5. Validate with `openspec validate --strict` and inspect with `openspec show <id>` if anything looks off.
+
+**Reference**
+
+- Use `openspec list` to confirm change IDs before archiving.
+- Inspect refreshed specs with `openspec list --specs` and address any validation issues before handing off.
+<!-- OPENSPEC:END -->
--- /dev/null
+---
+agent: build
+description: Scaffold a new OpenSpec change and validate strictly.
+---
+
+The user has requested the following change proposal. Use the openspec instructions to create their change proposal.
+<UserRequest>
+$ARGUMENTS
+</UserRequest>
+
+<!-- OPENSPEC:START -->
+
+**Guardrails**
+
+- Favor straightforward, minimal implementations first and add complexity only when it is requested or clearly required.
+- Keep changes tightly scoped to the requested outcome.
+- Refer to `openspec/AGENTS.md` (located inside the `openspec/` directory—run `ls openspec` or `openspec update` if you don't see it) if you need additional OpenSpec conventions or clarifications.
+- Identify any vague or ambiguous details and ask the necessary follow-up questions before editing files.
+
+**Steps**
+
+1. Review `openspec/project.md`, run `openspec list` and `openspec list --specs`, and inspect related code or docs (e.g., via `rg`/`ls`) to ground the proposal in current behaviour; note any gaps that require clarification.
+2. Choose a unique verb-led `change-id` and scaffold `proposal.md`, `tasks.md`, and `design.md` (when needed) under `openspec/changes/<id>/`.
+3. Map the change into concrete capabilities or requirements, breaking multi-scope efforts into distinct spec deltas with clear relationships and sequencing.
+4. Capture architectural reasoning in `design.md` when the solution spans multiple systems, introduces new patterns, or demands trade-off discussion before committing to specs.
+5. Draft spec deltas in `changes/<id>/specs/<capability>/spec.md` (one folder per capability) using `## ADDED|MODIFIED|REMOVED Requirements` with at least one `#### Scenario:` per requirement and cross-reference related capabilities when relevant.
+6. Draft `tasks.md` as an ordered list of small, verifiable work items that deliver user-visible progress, include validation (tests, tooling), and highlight dependencies or parallelizable work.
+7. Validate with `openspec validate <id> --strict` and resolve every issue before sharing the proposal.
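Step 5's delta format can be sketched as follows; the requirement heading and scenario body below are illustrative placeholders, not a prescribed template:

```markdown
## ADDED Requirements

### Requirement: <short requirement title>

<requirement statement>

#### Scenario: <scenario name>

- <step or assertion describing observable behaviour>
```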
+
+**Reference**
+
+- Use `openspec show <id> --json --deltas-only` or `openspec show <spec> --type spec` to inspect details when validation fails.
+- Search existing requirements with `rg -n "Requirement:|Scenario:" openspec/specs` before writing new ones.
+- Explore the codebase with `rg <keyword>`, `ls`, or direct file reads so proposals align with current implementation realities.
+<!-- OPENSPEC:END -->
# initial prompt for the project. It will always be given to the LLM upon activating the project
# (contrary to the memories, which are loaded on demand).
-initial_prompt: "You are working on a free and open source software project that implements FreqAI trading strategies. Refer to the memories for more details and `.github/copilot-instructions.md` for development guidelines."
+initial_prompt: "You are working on a free and open source software project that implements FreqAI trading strategies. Refer to the memories for more details and `.github/copilot-instructions.md` or `AGENTS.md` for development guidelines."
project_name: "freqai-strategies"
--- /dev/null
+<!-- OPENSPEC:START -->
+
+# OpenSpec Instructions
+
+These instructions are for AI assistants working in this project.
+
+Always open `@/openspec/AGENTS.md` when the request:
+
+- Mentions planning or proposals (words like proposal, spec, change, plan)
+- Introduces new capabilities, breaking changes, architecture shifts, or big performance/security work
+- Sounds ambiguous and you need the authoritative spec before coding
+
+Use `@/openspec/AGENTS.md` to learn:
+
+- How to create and apply change proposals
+- Spec format and conventions
+- Project structure and guidelines
+
+Keep this managed block so 'openspec update' can refresh the instructions.
+
+<!-- OPENSPEC:END -->
+
+Open `@/.github/copilot-instructions.md`, read it, and strictly follow its instructions.
- Real vs synthetic shift metrics
- Manifest + parameter hash
-## Table of contents
+## Quick Start
+
+```shell
+# Install
+cd ReforceXY/reward_space_analysis
+uv sync --all-groups
+
+# Run a default analysis
+uv run python reward_space_analysis.py --num_samples 20000 --out_dir out
+
+# Run test suite (coverage ≥85% enforced)
+uv run pytest
+```
+
+Minimal selective test example:
+
+```shell
+uv run pytest -m pbrs -q
+```
+
+Full test documentation: `tests/README.md`.
+
+## Table of Contents
- [Key Capabilities](#key-capabilities)
+- [Quick Start](#quick-start)
- [Prerequisites](#prerequisites)
- [Common Use Cases](#common-use-cases)
- - [1. Validate Reward Logic](#1-validate-reward-logic)
- - [2. Parameter Sensitivity](#2-parameter-sensitivity)
- - [3. Debug Anomalies](#3-debug-anomalies)
- - [4. Real vs Synthetic](#4-real-vs-synthetic)
- [CLI Parameters](#cli-parameters)
- - [Required Parameters](#required-parameters)
- - [Core Simulation](#core-simulation)
- - [Reward Configuration](#reward-configuration)
- - [PnL / Volatility](#pnl--volatility)
- - [Trading Environment](#trading-environment)
- - [Output & Overrides](#output--overrides)
- - [Parameter Cheat Sheet](#parameter-cheat-sheet)
- - [Exit Attenuation Kernels](#exit-attenuation-kernels)
- - [Transform Functions](#transform-functions)
- - [Skipping Feature Analysis](#skipping-feature-analysis)
- - [Reproducibility](#reproducibility)
- - [Overrides vs `--params`](#overrides-vs---params)
+ - [Simulation & Environment](#simulation--environment)
+ - [Reward & Shaping](#reward--shaping)
+ - [Diagnostics & Validation](#diagnostics--validation)
+ - [Overrides](#overrides)
+ - [Reward Parameter Cheat Sheet](#reward-parameter-cheat-sheet)
+ - [Exit Attenuation Kernels](#exit-attenuation-kernels)
+ - [Transform Functions](#transform-functions)
+ - [Skipping Feature Analysis](#skipping-feature-analysis)
+ - [Reproducibility](#reproducibility)
+ - [Overrides vs --params](#overrides-vs--params)
- [Examples](#examples)
- [Outputs](#outputs)
- - [Main Report](#main-report-statistical_analysismd)
- - [Data Exports](#data-exports)
- - [Manifest](#manifest-manifestjson)
- - [Distribution Shift Metrics](#distribution-shift-metrics)
- [Advanced Usage](#advanced-usage)
- - [Custom Parameter Testing](#custom-parameter-testing)
- - [Real Data Comparison](#real-data-comparison)
- - [Batch Analysis](#batch-analysis)
+ - [Parameter Sweeps](#parameter-sweeps)
+ - [PBRS Rationale](#pbrs-rationale)
+ - [Real Data Comparison](#real-data-comparison)
+ - [Batch Analysis](#batch-analysis)
- [Testing](#testing)
- - [Run Tests](#run-tests)
- - [Coverage](#coverage)
- - [When to Run Tests](#when-to-run-tests)
- - [Focused Test Sets](#focused-test-sets)
- [Troubleshooting](#troubleshooting)
- - [No Output Files](#no-output-files)
- - [Unexpected Reward Values](#unexpected-reward-values)
- - [Slow Execution](#slow-execution)
- - [Memory Errors](#memory-errors)
## Prerequisites
-Requirements:
-- [Python 3.9+](https://www.python.org/downloads/)
+Requirements:
+
+- [Python 3.11+](https://www.python.org/downloads/)
- ≥4GB RAM
- [uv](https://docs.astral.sh/uv/getting-started/installation/) project manager
```
Run:
+
```shell
uv run python reward_space_analysis.py --num_samples 20000 --out_dir out
```
### 2. Parameter Sensitivity
-```shell
-# Test different win reward factors
-uv run python reward_space_analysis.py \
- --num_samples 30000 \
- --params win_reward_factor=2.0 \
- --out_dir conservative_rewards
+Single-run example:
+```shell
uv run python reward_space_analysis.py \
- --num_samples 30000 \
- --params win_reward_factor=4.0 \
- --out_dir aggressive_rewards
-
-# Test PBRS potential shaping
-uv run python reward_space_analysis.py \
- --num_samples 30000 \
- --params hold_potential_enabled=true potential_gamma=0.9 exit_potential_mode=progressive_release \
- --out_dir pbrs_analysis
+ --num_samples 30000 \
+ --params win_reward_factor=4.0 idle_penalty_scale=1.5 \
+ --out_dir sensitivity_test
```
Compare reward distribution & component share deltas across runs.
### 3. Debug Anomalies
```shell
-# Generate detailed analysis
uv run python reward_space_analysis.py \
- --num_samples 50000 \
- --out_dir debug_analysis
+ --num_samples 50000 \
+ --out_dir debug_analysis
```
Focus: feature importance, shaping activation, invariance drift, extremes.
### 4. Real vs Synthetic
```shell
-# First, collect real episodes
-# Then compare:
uv run python reward_space_analysis.py \
- --num_samples 100000 \
- --real_episodes path/to/episode_rewards.pkl \
- --out_dir real_vs_synthetic
+ --num_samples 100000 \
+ --real_episodes path/to/episode_rewards.pkl \
+ --out_dir real_vs_synthetic
```
+Generates shift metrics for comparison (see Outputs section).
+
---
## CLI Parameters
-### Required Parameters
-
-None (all have defaults).
-
-### Core Simulation
-
-**`--num_samples`** (int, default: 20000) – Synthetic scenarios. More = better stats (slower). Recommended: 10k (quick), 50k (standard), 100k+ (deep).
-
-**`--seed`** (int, default: 42) – Master seed (reuse for identical runs).
-
-### Reward Configuration
-
-**`--base_factor`** (float, default: 100.0) – Base reward scale (match environment).
-
-**`--profit_target`** (float, default: 0.03) – Target profit (e.g. 0.03=3%) for exit reward.
-
-**`--risk_reward_ratio`** (float, default: 1.0) – Adjusts effective profit target.
-
-**`--max_duration_ratio`** (float, default: 2.5) – Upper multiple for sampled trade/idle durations (higher = more variety).
-
-### PnL / Volatility
-
-Controls synthetic PnL variance (heteroscedastic; grows with duration):
-
-**`--pnl_base_std`** (float, default: 0.02) – Volatility floor.
+### Simulation & Environment
-**`--pnl_duration_vol_scale`** (float, default: 0.5) – Extra volatility × (duration/max_trade_duration). Higher ⇒ stronger.
+**`--num_samples`** (int, default: 20000) – Synthetic scenarios. More = better stats (slower). Recommended: 10k (quick), 50k (standard), 100k+ (deep). (Simulation-only; not settable via `--params`.)
+
+**`--seed`** (int, default: 42) – Master seed (reuse for identical runs). (Simulation-only.)
+
+**`--trading_mode`** (spot|margin|futures, default: spot) – spot: no shorts; margin/futures: shorts enabled. (Simulation-only.)
+
+**`--action_masking`** (bool, default: true) – Simulate environment action masking; invalid actions receive penalties only if masking is disabled. (Simulation-only; not settable via `--params`.)
+
+**`--max_duration_ratio`** (float, default: 2.5) – Upper multiple for sampled trade durations (idle derived). (Simulation-only; not settable via `--params`.)
+
+**`--pnl_base_std`** (float, default: 0.02) – Base standard deviation for synthetic PnL generation (pre-scaling). (Simulation-only.)
+
+**`--pnl_duration_vol_scale`** (float, default: 0.5) – Additional PnL volatility scale proportional to trade duration ratio. (Simulation-only.)
+
+**`--real_episodes`** (path, optional) – Episodes pickle for real vs synthetic distribution shift metrics. (Simulation-only; triggers additional outputs when provided.)
+
+**`--unrealized_pnl`** (flag, default: false) – Simulate unrealized PnL accrual during holds for potential Φ. (Simulation-only; affects PBRS components.)
-### Trading Environment
+### Reward & Shaping
-**`--trading_mode`** (spot|margin|futures, default: spot) – spot: no shorts; margin/futures: shorts enabled.
+**`--base_factor`** (float, default: 100.0) – Base reward scale.
+
+**`--profit_target`** (float, default: 0.03) – Target profit (e.g. 0.03 = 3%). (May be overridden via `--params` even though it is not stored in the `reward_params` object.)
+
+**`--risk_reward_ratio`** (float, default: 1.0) – Adjusts the effective profit target (`profit_target * risk_reward_ratio`). (May be overridden via `--params`.)
+
+**`--win_reward_factor`** (float, default: 2.0) – Profit overshoot multiplier.
+
+**Duration penalties** – idle and hold penalty scales and powers shape the time cost of inaction.
+
+**Exit attenuation** – kernel factors applied to the exit duration ratio.
+
+**Efficiency weighting** – scales the efficiency contribution to the exit factor.
-**`--action_masking`** (bool, default: true) – Simulate action masking (match environment).
+### Diagnostics & Validation
-### Output & Overrides
+**`--check_invariants`** (bool, default: true) – Enable runtime invariant checks (diagnostics become advisory if disabled). Toggle rarely; disabling may hide reward drift or invariance violations.
+
+**`--strict_validation`** (flag, default: true) – Enforce parameter bounds and finite checks; raises instead of silently clamping or discarding when enabled.
+
+**`--strict_diagnostics`** (flag, default: false) – Fail fast on degenerate statistical diagnostics (zero-width CIs, undefined distribution metrics) instead of falling back gracefully.
+
+**`--exit_factor_threshold`** (float, default: 10000.0) – Warn if the exit factor exceeds this threshold.
+
+**`--pvalue_adjust`** (none|benjamini_hochberg, default: none) – Multiple-testing p-value adjustment method.
+
+**`--bootstrap_resamples`** (int, default: 10000) – Bootstrap iterations for confidence intervals; lower (e.g. 500) for speed during smoke tests.
+
+**`--skip_feature_analysis`** / **`--skip_partial_dependence`** – Skip feature importance or partial dependence grids (see Skipping Feature Analysis); affect runtime only.
+
+**`--rf_n_jobs`** / **`--perm_n_jobs`** (int, default: -1) – Parallel worker counts for RandomForest and permutation importance (-1 = all cores).
-**`--out_dir`** (path, default: reward_space_outputs) – Output directory (auto-created).
+### Overrides
-**`--params`** (k=v ...) – Override reward params. Example: `--params win_reward_factor=3.0 idle_penalty_scale=2.0`.
+**`--out_dir`** (path, default: reward_space_outputs) – Output directory (auto-created). (Simulation-only.)
+
+**`--params`** (k=v ...) – Bulk-override reward params and selected hybrid scalars (`profit_target`, `risk_reward_ratio`). On conflict between individual flags and `--params`, `--params` wins.
### Reward Parameter Cheat Sheet
-| Parameter | Default | Description |
-|-----------|---------|-------------|
-| **Core Parameters** |||
-| `base_factor` | 100.0 | Base reward scale |
-| `invalid_action` | -2.0 | Penalty for invalid actions |
-| `win_reward_factor` | 2.0 | Profit overshoot multiplier |
-| `pnl_factor_beta` | 0.5 | PnL amplification beta |
-| **Duration Penalties** |||
-| `max_trade_duration_candles` | 128 | Trade duration cap |
-| `max_idle_duration_candles` | None | Idle duration cap; fallback 4× max trade duration |
-| `idle_penalty_scale` | 0.5 | Idle penalty scale |
-| `idle_penalty_power` | 1.025 | Idle penalty exponent |
-| `hold_penalty_scale` | 0.25 | Hold penalty scale |
-| `hold_penalty_power` | 1.025 | Hold penalty exponent |
-| **Exit Attenuation** |||
-| `exit_attenuation_mode` | linear | Exit attenuation kernel |
-| `exit_plateau` | true | Flat region before attenuation starts |
-| `exit_plateau_grace` | 1.0 | Plateau duration ratio grace |
-| `exit_linear_slope` | 1.0 | Linear kernel slope |
-| `exit_power_tau` | 0.5 | Tau controlling `power` kernel decay (0,1] |
-| `exit_half_life` | 0.5 | Half-life for `half_life` kernel |
-| **Efficiency** |||
-| `efficiency_weight` | 1.0 | Efficiency contribution weight |
-| `efficiency_center` | 0.5 | Efficiency pivot in [0,1] |
-| **Validation** |||
-| `check_invariants` | true | Enable runtime invariant checks |
-| `exit_factor_threshold` | 10000.0 | Warn if exit factor exceeds threshold |
-| **PBRS** |||
-| `potential_gamma` | 0.95 | PBRS discount γ |
-| `exit_potential_mode` | canonical | Exit potential mode |
-| `exit_potential_decay` | 0.5 | Decay for `progressive_release` mode |
-| `hold_potential_enabled` | true | Enable hold potential Φ |
-| **Hold Potential** |||
-| `hold_potential_scale` | 1.0 | Hold potential scale |
-| `hold_potential_gain` | 1.0 | Hold potential gain |
-| `hold_potential_transform_pnl` | tanh | Hold PnL transform function |
-| `hold_potential_transform_duration` | tanh | Hold duration transform function |
-| **Entry Additive** |||
-| `entry_additive_enabled` | false | Enable entry additive |
-| `entry_additive_scale` | 1.0 | Entry additive scale |
-| `entry_additive_gain` | 1.0 | Entry additive gain |
-| `entry_additive_transform_pnl` | tanh | Entry PnL transform function |
-| `entry_additive_transform_duration` | tanh | Entry duration transform function |
-| **Exit Additive** |||
-| `exit_additive_enabled` | false | Enable exit additive |
-| `exit_additive_scale` | 1.0 | Exit additive scale |
-| `exit_additive_gain` | 1.0 | Exit additive gain |
-| `exit_additive_transform_pnl` | tanh | Exit PnL transform function |
-| `exit_additive_transform_duration` | tanh | Exit duration transform function |
+#### Core
+
+| Parameter | Default | Description |
+| ------------------- | ------- | --------------------------- |
+| `base_factor` | 100.0 | Base reward scale |
+| `invalid_action` | -2.0 | Penalty for invalid actions |
+| `win_reward_factor` | 2.0 | Profit overshoot multiplier |
+| `pnl_factor_beta` | 0.5 | PnL amplification beta |
+
+#### Duration Penalties
+
+| Parameter | Default | Description |
+| ---------------------------- | ------- | -------------------------- |
+| `max_trade_duration_candles` | 128 | Trade duration cap |
+| `max_idle_duration_candles` | None | Fallback 4× trade duration |
+| `idle_penalty_scale` | 0.5 | Idle penalty scale |
+| `idle_penalty_power` | 1.025 | Idle penalty exponent |
+| `hold_penalty_scale` | 0.25 | Hold penalty scale |
+| `hold_penalty_power` | 1.025 | Hold penalty exponent |
+
+#### Exit Attenuation
+
+| Parameter | Default | Description |
+| ----------------------- | ------- | ------------------------------ |
+| `exit_attenuation_mode` | linear | Kernel mode |
+| `exit_plateau` | true | Flat region before attenuation |
+| `exit_plateau_grace` | 1.0 | Plateau grace ratio |
+| `exit_linear_slope` | 1.0 | Linear slope |
+| `exit_power_tau` | 0.5 | Power kernel tau (0,1] |
+| `exit_half_life` | 0.5 | Half-life for half_life kernel |
+
+#### Efficiency
+
+| Parameter | Default | Description |
+| ------------------- | ------- | ------------------------------ |
+| `efficiency_weight` | 1.0 | Efficiency contribution weight |
+| `efficiency_center` | 0.5 | Efficiency pivot in [0,1] |
+
+Formula (unrealized profit normalization):
+Let `max_u = max_unrealized_profit`, `min_u = min_unrealized_profit`, `range = max_u - min_u`, `ratio = (pnl - min_u)/range`. Then:
+
+- If `pnl > 0`: `efficiency_factor = 1 + efficiency_weight * (ratio - efficiency_center)`
+- If `pnl < 0`: `efficiency_factor = 1 + efficiency_weight * (efficiency_center - ratio)`
+- Else: `efficiency_factor = 1`
+
+Final exit multiplier path: `exit_reward = pnl * exit_factor`, where `exit_factor = kernel(base_factor, duration_ratio_adjusted) * pnl_factor` and `pnl_factor` includes the `efficiency_factor` above.
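The formula above can be transcribed directly; a minimal sketch (the function name and the zero-range guard are illustrative assumptions, not the tool's API):

```python
def efficiency_factor(pnl, max_u, min_u, weight=1.0, center=0.5):
    """Efficiency factor per the documented formula.

    max_u/min_u are the max/min unrealized profit observed over the trade.
    """
    rng = max_u - min_u
    if rng <= 0:  # degenerate range guard (assumption, not documented behaviour)
        return 1.0
    ratio = (pnl - min_u) / rng
    if pnl > 0:
        return 1.0 + weight * (ratio - center)
    if pnl < 0:
        return 1.0 + weight * (center - ratio)
    return 1.0

# Exiting at the top of the observed unrealized range boosts the factor
print(efficiency_factor(0.04, max_u=0.04, min_u=-0.01))  # ≈ 1.5 (ratio = 1.0)
```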
+
+#### Validation
+
+| Parameter | Default | Description |
+| ----------------------- | ------- | --------------------------------- |
+| `check_invariants` | true | Invariant enforcement (see above) |
+| `exit_factor_threshold` | 10000.0 | Warn on excessive factor |
+
+#### PBRS (Potential-Based Reward Shaping)
+
+| Parameter | Default | Description |
+| ------------------------ | --------- | --------------------------------- |
+| `potential_gamma` | 0.95 | Discount factor γ for potential Φ |
+| `exit_potential_mode` | canonical | Potential release mode |
+| `exit_potential_decay` | 0.5 | Decay for progressive_release |
+| `hold_potential_enabled` | true | Enable hold potential Φ |
+
+PBRS invariance holds when: `exit_potential_mode=canonical` AND `entry_additive_enabled=false` AND `exit_additive_enabled=false`. Under this condition the algorithm enforces zero-sum shaping: if the summed shaping term deviates by more than 1e-6 (`PBRS_INVARIANCE_TOL`), a uniform drift correction subtracts the mean shaping offset across invariant samples.
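The policy-invariance guarantee rests on shaping terms being potential differences, which telescope over a trajectory. A minimal sketch, assuming Φ = 0 in the flat state before entry and after a canonical exit (the tool's internal accounting may differ):

```python
def shaping_terms(potentials, gamma=0.95):
    """Per-step PBRS shaping F_t = gamma * Phi(s_{t+1}) - Phi(s_t)."""
    return [gamma * potentials[t + 1] - potentials[t]
            for t in range(len(potentials) - 1)]

gamma = 0.95
phis = [0.0, 0.4, 0.7, 0.3, 0.0]  # illustrative hold potentials, flat at both ends
fs = shaping_terms(phis, gamma)

# Discounted sum telescopes to gamma^T * Phi_T - Phi_0 = 0 when both ends are 0
discounted = sum((gamma ** t) * f for t, f in enumerate(fs))
assert abs(discounted) < 1e-9
```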
+
+#### Hold Potential Transforms
+
+| Parameter | Default | Description |
+| ----------------------------------- | ------- | -------------------- |
+| `hold_potential_scale` | 1.0 | Hold potential scale |
+| `hold_potential_gain` | 1.0 | Gain multiplier |
+| `hold_potential_transform_pnl` | tanh | PnL transform |
+| `hold_potential_transform_duration` | tanh | Duration transform |
+
+#### Entry Additive (Optional)
+
+| Parameter | Default | Description |
+| ----------------------------------- | ------- | --------------------- |
+| `entry_additive_enabled` | false | Enable entry additive |
+| `entry_additive_scale` | 1.0 | Scale |
+| `entry_additive_gain` | 1.0 | Gain |
+| `entry_additive_transform_pnl` | tanh | PnL transform |
+| `entry_additive_transform_duration` | tanh | Duration transform |
+
+#### Exit Additive (Optional)
+
+| Parameter | Default | Description |
+| ---------------------------------- | ------- | -------------------- |
+| `exit_additive_enabled` | false | Enable exit additive |
+| `exit_additive_scale` | 1.0 | Scale |
+| `exit_additive_gain` | 1.0 | Gain |
+| `exit_additive_transform_pnl` | tanh | PnL transform |
+| `exit_additive_transform_duration` | tanh | Duration transform |
### Exit Attenuation Kernels
r* = r if not exit_plateau
```
-| Mode | Multiplier (applied to base_factor * pnl * pnl_factor * efficiency_factor) | Monotonic decreasing (Yes/No) | Notes |
-|------|---------------------------------------------------------------------|-------------------------------|-------|
-| legacy | step: ×1.5 if r* ≤ 1 else ×0.5 | No | Historical reference |
-| sqrt | 1 / sqrt(1 + r*) | Yes | Sub-linear decay |
-| linear | 1 / (1 + slope * r*) | Yes | slope = `exit_linear_slope` (≥0) |
-| power | (1 + r*)^(-alpha) | Yes | alpha = -ln(tau)/ln(2), tau = `exit_power_tau` ∈ (0,1]; tau=1 ⇒ alpha=0 (flat); invalid tau ⇒ alpha=1 (default) |
-| half_life | 2^(- r* / hl) | Yes | hl = `exit_half_life`; r* = hl ⇒ factor × 0.5 |
+| Mode | Multiplier applied to base_factor \* pnl \* pnl_factor \* efficiency_factor | Monotonic | Notes |
+| --------- | --------------------------------------------------------------------------- | --------- | ------------------------------------------- |
+| legacy | step: ×1.5 if r\* ≤ 1 else ×0.5 | No | Non-monotonic legacy mode (not recommended) |
+| sqrt | 1 / sqrt(1 + r\*) | Yes | Sub-linear decay |
+| linear | 1 / (1 + slope \* r\*) | Yes | slope = `exit_linear_slope` |
+| power | (1 + r\*)^(-alpha) | Yes | alpha = -ln(tau)/ln(2); tau=1 ⇒ alpha=0 |
+| half_life | 2^(- r\* / hl) | Yes | hl = `exit_half_life`; r\*=hl ⇒ factor ×0.5 |
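The kernel multipliers can be sketched directly from the table (function and argument names are illustrative, not the tool's API):

```python
import math

def attenuation(mode, r, slope=1.0, tau=0.5, half_life=0.5):
    """Kernel multiplier for adjusted duration ratio r (r* in the table)."""
    if mode == "legacy":
        return 1.5 if r <= 1 else 0.5
    if mode == "sqrt":
        return 1.0 / math.sqrt(1.0 + r)
    if mode == "linear":
        return 1.0 / (1.0 + slope * r)
    if mode == "power":
        # alpha = -ln(tau)/ln(2); invalid tau falls back to alpha = 1
        alpha = -math.log(tau) / math.log(2) if 0 < tau <= 1 else 1.0
        return (1.0 + r) ** (-alpha)
    if mode == "half_life":
        return 2.0 ** (-r / half_life)
    raise ValueError(mode)

print(attenuation("half_life", 0.5))       # r* equal to half-life -> 0.5
print(attenuation("power", 2.0, tau=1.0))  # tau = 1 -> alpha = 0 -> flat 1.0
```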
### Transform Functions
-| Transform | Formula | Range | Characteristics | Use Case |
-|-----------|---------|-------|-----------------|----------|
-| `tanh` | tanh(x) | (-1, 1) | Smooth sigmoid, symmetric around 0 | Balanced PnL/duration transforms (default) |
-| `softsign` | x / (1 + \|x\|) | (-1, 1) | Smoother than tanh, linear near 0 | Less aggressive saturation |
-| `arctan` | (2/π) * arctan(x) | (-1, 1) | Slower saturation than tanh | Wide dynamic range |
-| `sigmoid` | 2σ(x) - 1, σ(x) = 1/(1 + e^(-x)) | (-1, 1) | Sigmoid mapped to (-1, 1) | Standard sigmoid activation |
-| `asinh` | x / sqrt(1 + x^2) | (-1, 1) | Normalized asinh-like transform | Extreme outlier robustness |
-| `clip` | clip(x, -1, 1) | [-1, 1] | Hard clipping at ±1 | Preserve linearity within bounds |
-
-Invariant toggle: disable only for performance experiments (diagnostics become advisory).
+| Transform | Formula | Range | Characteristics | Use Case |
+| ---------- | ------------------ | ------- | ----------------- | ----------------------------- |
+| `tanh` | tanh(x) | (-1, 1) | Smooth sigmoid | Balanced transforms (default) |
+| `softsign` | x / (1 + \|x\|) | (-1, 1) | Linear near 0 | Less aggressive saturation |
+| `arctan` | (2/π) \* arctan(x) | (-1, 1) | Slower saturation | Wide dynamic range |
+| `sigmoid` | 2σ(x) - 1 | (-1, 1) | Standard sigmoid | Generic shaping |
+| `asinh` | x / sqrt(1 + x^2) | (-1, 1) | Outlier robust | Extreme stability |
+| `clip` | clip(x, -1, 1) | [-1, 1] | Hard clipping | Preserve linearity |
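The transform table can be transcribed directly; a minimal sketch (the dictionary mirrors the documented formulas, not the tool's internal registry):

```python
import math

transforms = {
    "tanh": math.tanh,
    "softsign": lambda x: x / (1 + abs(x)),
    "arctan": lambda x: (2 / math.pi) * math.atan(x),
    "sigmoid": lambda x: 2 / (1 + math.exp(-x)) - 1,
    "asinh": lambda x: x / math.sqrt(1 + x * x),
    "clip": lambda x: max(-1.0, min(1.0, x)),
}

# All transforms map 0 -> 0 and stay within [-1, 1]
for name, f in transforms.items():
    assert f(0) == 0
    assert all(-1 <= f(x) <= 1 for x in (-50, -1, 0.5, 50))
```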
### Skipping Feature Analysis
-**`--skip_partial_dependence`**: skip PD curves (faster).
-
-**`--skip_feature_analysis`**: skip model, importance, PD.
-
-Hierarchy / precedence of skip flags:
-
+Skip-flag precedence:
+
| Scenario | `--skip_feature_analysis` | `--skip_partial_dependence` | Feature Importance | Partial Dependence | Report Section 4 |
|----------|---------------------------|-----------------------------|--------------------|-------------------|------------------|
-| Default (no flags) | ✗ | ✗ | Yes | Yes | Full (R², top features, exported data) |
-| PD only skipped | ✗ | ✓ | Yes | No | Full (PD line shows skipped note) |
-| Feature analysis skipped | ✓ | ✗ | No | No | Marked “(skipped)” with reason(s) |
-| Both flags | ✓ | ✓ | No | No | Marked “(skipped)” + note PD redundant |
-
+| Default | ✗ | ✗ | Yes | Yes | Full |
+| PD skipped | ✗ | ✓ | Yes | No | PD note |
+| Feature analysis skipped | ✓ | ✗ | No | No | Marked “(skipped)” |
+| Both skipped | ✓ | ✓ | No | No | Marked “(skipped)” |
+
Auto-skip if `num_samples < 4`.
### Reproducibility
-| Component | Controlled By | Notes |
-|-----------|---------------|-------|
-| Sample simulation | `--seed` | Drives action sampling, PnL noise generation. |
-| Statistical tests / bootstrap | `--stats_seed` (fallback `--seed`) | Local RNG; isolation prevents side-effects in user code. |
-| RandomForest & permutation importance | `--seed` | Ensures identical splits and tree construction. |
-| Partial dependence grids | Deterministic | Depends only on fitted model & data. |
+| Component | Controlled By | Notes |
+| ------------------------------------- | ---------------------------------- | ----------------------------------- |
+| Sample simulation | `--seed` | Drives action sampling & PnL noise |
+| Statistical tests / bootstrap | `--stats_seed` (fallback `--seed`) | Isolated RNG |
+| RandomForest & permutation importance | `--seed` | Identical splits and trees |
+| Partial dependence grids | Deterministic | Depends only on fitted model & data |
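The seed isolation in the table can be illustrated with stdlib RNGs (function names are illustrative, not the tool's API): the statistics stream gets its own `Random` instance, so varying `--stats_seed` never perturbs the simulated samples.

```python
import random

def simulate(seed):
    rng = random.Random(seed)        # sample simulation RNG (--seed)
    return [rng.gauss(0, 1) for _ in range(3)]

def bootstrap(stats_seed):
    rng = random.Random(stats_seed)  # isolated statistics RNG (--stats_seed)
    return [rng.random() for _ in range(3)]

# Same --seed => identical synthetic samples, regardless of the stats seed
assert simulate(123) == simulate(123)
# Different --stats_seed changes only the statistical resampling stream
assert bootstrap(9001) != bootstrap(9002)
```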
Patterns:
+
```shell
-# Same synthetic data, two different statistical re-analysis runs
uv run python reward_space_analysis.py --num_samples 50000 --seed 123 --stats_seed 9001 --out_dir run_stats1
uv run python reward_space_analysis.py --num_samples 50000 --seed 123 --stats_seed 9002 --out_dir run_stats2
-
-# Fully reproducible end-to-end (all aspects deterministic)
+# Fully deterministic
uv run python reward_space_analysis.py --num_samples 50000 --seed 777
```
-### Overrides vs `--params`
+### Overrides vs --params
-Reward parameters also have individual flags:
+Direct flags and `--params` produce identical results:
```shell
-# Direct flag style
uv run python reward_space_analysis.py --win_reward_factor 3.0 --idle_penalty_scale 2.0 --num_samples 15000
-
-# Equivalent using --params
uv run python reward_space_analysis.py --params win_reward_factor=3.0 idle_penalty_scale=2.0 --num_samples 15000
```
`--params` wins on conflicts.
+Simulation-only keys (rejected by `--params`): `num_samples`, `seed`, `trading_mode`, `action_masking`, `max_duration_ratio`, `out_dir`, `stats_seed`, `pnl_base_std`, `pnl_duration_vol_scale`, `real_episodes`, `unrealized_pnl`, `strict_diagnostics`, `strict_validation`, `bootstrap_resamples`, `skip_feature_analysis`, `skip_partial_dependence`, `rf_n_jobs`, `perm_n_jobs`, `pvalue_adjust`.
+
+Hybrid keys accepted by `--params`: `profit_target`, `risk_reward_ratio`.
+
+Reward parameter keys (tunable via either a direct flag or `--params`) are those listed in the cheat sheet: core, duration penalties, exit attenuation, efficiency, validation, PBRS, and the hold/entry/exit additive transforms.
## Examples
```shell
# Quick test with defaults
uv run python reward_space_analysis.py --num_samples 10000
-
# Full analysis with custom profit target
uv run python reward_space_analysis.py \
- --num_samples 50000 \
- --profit_target 0.05 \
- --trading_mode futures \
- --bootstrap_resamples 5000 \
- --out_dir custom_analysis
-
-# Parameter sensitivity testing
-uv run python reward_space_analysis.py \
- --num_samples 30000 \
- --params win_reward_factor=3.0 idle_penalty_scale=1.5 \
- --out_dir sensitivity_test
-
+ --num_samples 50000 \
+ --profit_target 0.05 \
+ --trading_mode futures \
+ --bootstrap_resamples 5000 \
+ --out_dir custom_analysis
# PBRS potential shaping analysis
uv run python reward_space_analysis.py \
- --num_samples 40000 \
- --params hold_potential_enabled=true exit_potential_mode=spike_cancel potential_gamma=0.95 \
- --out_dir pbrs_test
-
-# Real vs synthetic comparison
+ --num_samples 40000 \
+ --params hold_potential_enabled=true exit_potential_mode=spike_cancel potential_gamma=0.95 \
+ --out_dir pbrs_test
+# Real vs synthetic comparison (see Common Use Cases #4)
uv run python reward_space_analysis.py \
- --num_samples 100000 \
- --real_episodes path/to/episode_rewards.pkl \
- --out_dir validation
+ --num_samples 100000 \
+ --real_episodes path/to/episode_rewards.pkl \
+ --out_dir validation
```
---
| File | Description |
| -------------------------- | ---------------------------------------------------- |
-| `reward_samples.csv` | Raw synthetic samples for custom analysis |
-| `feature_importance.csv` | Feature importance rankings from random forest model |
-| `partial_dependence_*.csv` | Partial dependence data for key features |
+| `reward_samples.csv` | Raw synthetic samples |
+| `feature_importance.csv` | Feature importance rankings |
+| `partial_dependence_*.csv` | Partial dependence data |
| `manifest.json` | Runtime manifest (simulation + reward params + hash) |
### Manifest (`manifest.json`)
-| Field | Type | Description |
-|-------|------|-------------|
-| `generated_at` | string (ISO 8601) | Timestamp of generation (not part of hash). |
-| `num_samples` | int | Number of synthetic samples generated. |
-| `seed` | int | Master random seed driving simulation determinism. |
-| `profit_target_effective` | float | Profit target after risk/reward scaling. |
-| `pvalue_adjust_method` | string | Multiple testing correction mode (`none` or `benjamini_hochberg`). |
-| `parameter_adjustments` | object | Map of any automatic bound clamps (empty if none). |
-| `reward_params` | object | Full resolved reward parameter set (post-validation). |
-| `simulation_params` | object | All simulation inputs (num_samples, seed, volatility knobs, etc.). |
-| `params_hash` | string (sha256) | Hash over ALL `simulation_params` (excluding `out_dir`, `real_episodes`) + ALL `reward_params` (lexicographically ordered). |
+| Field | Type | Description |
+| ------------------------- | ----------------- | ------------------------------------- |
+| `generated_at` | string (ISO 8601) | Generation timestamp (not hashed) |
+| `num_samples` | int | Synthetic samples count |
+| `seed` | int | Master random seed |
+| `profit_target_effective` | float | Effective profit target after scaling |
+| `pvalue_adjust_method` | string | Multiple testing correction mode |
+| `parameter_adjustments` | object | Bound clamp adjustments (if any) |
+| `reward_params` | object | Final reward params |
+| `simulation_params` | object | All simulation inputs |
+| `params_hash` | string (sha256) | Deterministic run hash |
-Two runs match iff `params_hash` identical (defaults included in hash scope).
+Two runs match iff their `params_hash` values are identical.
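An illustrative sketch of such a hash (not the tool's actual implementation): sha256 over the lexicographically ordered union of simulation and reward parameters, with `out_dir` and `real_episodes` excluded per the manifest description above. The helper name is hypothetical:

```python
import hashlib
import json


def params_hash(simulation_params: dict, reward_params: dict) -> str:
    """Deterministic run hash: sha256 over all params, lexicographically ordered.

    Mirrors the manifest description: out_dir and real_episodes are excluded
    from the simulation side before hashing.
    """
    sim = {k: v for k, v in simulation_params.items() if k not in {"out_dir", "real_episodes"}}
    merged = {**sim, **reward_params}
    # json.dumps with sort_keys=True yields a canonical, order-independent encoding
    payload = json.dumps(merged, sort_keys=True, default=str)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

Two runs then hash identically exactly when their resolved parameter sets (defaults included) agree.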
### Distribution Shift Metrics
-| Metric | Definition | Notes |
-|--------|------------|-------|
-| `*_kl_divergence` | KL(synthetic‖real) = Σ p_synth log(p_synth / p_real) | Asymmetric; 0 iff identical histograms (after binning). |
-| `*_js_distance` | d_JS(p_synth, p_real) = √( 0.5 KL(p_synth‖m) + 0.5 KL(p_real‖m) ), m = 0.5 (p_synth + p_real) | Symmetric, bounded [0,1]; square-root of JS divergence; stable vs KL when supports differ. |
-| `*_wasserstein` | 1D Earth Mover's Distance | Non-negative; same units as feature. |
-| `*_ks_statistic` | KS two-sample statistic | ∈ [0,1]; higher = greater divergence. |
-| `*_ks_pvalue` | KS test p-value | ∈ [0,1]; small ⇒ reject equality (at α). |
+| Metric | Definition | Notes |
+| ----------------- | ------------------------------------- | ----------------------------- |
+| `*_kl_divergence` | KL(synth‖real) = Σ p_s log(p_s / p_r) | 0 ⇒ identical histograms |
+| `*_js_distance` | √(0.5 KL(p_s‖m) + 0.5 KL(p_r‖m)) | Symmetric, [0,1] |
+| `*_wasserstein` | 1D Earth Mover's Distance | Units of feature |
+| `*_ks_statistic` | KS two-sample statistic | [0,1]; higher ⇒ divergence |
+| `*_ks_pvalue` | KS test p-value | High ⇒ cannot reject equality |
-Implementation: 50-bin hist; add ε=1e-10 before normalizing; constants ⇒ zero divergence, KS p=1.0.
+Implementation: 50-bin histograms; ε=1e-10 added before normalizing; constant inputs ⇒ zero divergence and KS p=1.0.
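A minimal numpy sketch of the histogram-based metrics above, following the stated bin count and ε (function names are illustrative, and JS uses log base 2 so the distance is bounded in [0, 1]):

```python
import numpy as np


def _hist_probs(synth, real, bins=50, eps=1e-10):
    """Shared-support histograms, epsilon-smoothed and normalized."""
    lo, hi = min(synth.min(), real.min()), max(synth.max(), real.max())
    if lo == hi:  # constant inputs: degenerate support
        return None
    edges = np.linspace(lo, hi, bins + 1)
    p = np.histogram(synth, bins=edges)[0].astype(float) + eps
    q = np.histogram(real, bins=edges)[0].astype(float) + eps
    return p / p.sum(), q / q.sum()


def kl_divergence(synth, real, bins=50):
    """KL(synthetic‖real); 0.0 for constant inputs."""
    probs = _hist_probs(np.asarray(synth), np.asarray(real), bins)
    if probs is None:
        return 0.0
    p, q = probs
    return float(np.sum(p * np.log(p / q)))


def js_distance(synth, real, bins=50):
    """Square root of JS divergence (log base 2), symmetric and bounded in [0, 1]."""
    probs = _hist_probs(np.asarray(synth), np.asarray(real), bins)
    if probs is None:
        return 0.0
    p, q = probs
    m = 0.5 * (p + q)
    js = 0.5 * np.sum(p * np.log2(p / m)) + 0.5 * np.sum(q * np.log2(q / m))
    return float(np.sqrt(max(js, 0.0)))
```

Identical inputs give zero for both metrics, matching the table's definitions.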
---
## Advanced Usage
-### Custom Parameter Testing
+### Parameter Sweeps
-Test reward parameter configurations:
+Loop over multiple values:
```shell
-# Test power-based exit attenuation with custom tau
-uv run python reward_space_analysis.py \
- --num_samples 25000 \
- --params exit_attenuation_mode=power exit_power_tau=0.5 efficiency_weight=0.8 \
- --out_dir custom_test
-
-# Test aggressive hold penalties
-uv run python reward_space_analysis.py \
- --num_samples 25000 \
- --params hold_penalty_scale=0.5 \
- --out_dir aggressive_hold
+for factor in 1.5 2.0 2.5 3.0; do
+ uv run python reward_space_analysis.py \
+ --num_samples 20000 \
+ --params win_reward_factor=$factor \
+ --out_dir analysis_factor_$factor
+done
+```
-# Canonical PBRS (strict invariance, additives disabled)
-uv run python reward_space_analysis.py \
- --num_samples 25000 \
- --params hold_potential_enabled=true entry_additive_enabled=true exit_additive_enabled=false exit_potential_mode=canonical \
- --out_dir pbrs_canonical
+Combine with other overrides cautiously and use a distinct `out_dir` per configuration.
-# Non-canonical PBRS (allows additives with Φ(terminal)=0, breaks invariance)
-uv run python reward_space_analysis.py \
- --num_samples 25000 \
- --params hold_potential_enabled=true entry_additive_enabled=true exit_additive_enabled=true exit_potential_mode=non_canonical \
- --out_dir pbrs_non_canonical
+### PBRS Rationale
-uv run python reward_space_analysis.py \
- --num_samples 25000 \
- --params hold_potential_transform_pnl=sigmoid hold_potential_gain=2.0 \
- --out_dir pbrs_sigmoid_transforms
-```
+Canonical mode targets near zero-sum shaping (terminal Φ ≈ 0), preserving policy invariance: reward differences then reflect environment performance rather than potential leakage. Non-canonical modes and the entry/exit additives trade strict invariance for extra shaping signal. Progressive release and spike cancel adjust how Φ is released over time. Prefer canonical mode for theoretical alignment; switch to non-canonical modes or additives only when the empirical gain outweighs the lost invariance guarantees. Φ denotes the potential function; see the PBRS section for the invariance condition and drift-correction mechanics.
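The zero-sum property can be sketched numerically: with shaping term F(s, s') = γ·Φ(s') − Φ(s) and Φ(terminal) = 0, the cumulative shaping over an episode telescopes to −Φ(s₀) when γ = 1, so an episode that also starts at Φ = 0 nets exactly zero. The function names below are illustrative, not the tool's API:

```python
def shaping_term(phi_s: float, phi_next: float, gamma: float = 1.0) -> float:
    """PBRS shaping: F(s, s') = gamma * Phi(s') - Phi(s)."""
    return gamma * phi_next - phi_s


def cumulative_shaping(potentials: list[float], gamma: float = 1.0) -> float:
    """Sum of shaping terms along a trajectory of potentials Phi(s_0) .. Phi(s_T)."""
    return sum(
        shaping_term(potentials[t], potentials[t + 1], gamma)
        for t in range(len(potentials) - 1)
    )


# Canonical episode: terminal potential forced to 0, so the sum telescopes to -Phi(s_0)
trajectory = [0.4, 0.7, 0.55, 0.9, 0.0]
total = cumulative_shaping(trajectory, gamma=1.0)  # -> -0.4
```

Non-canonical exit modes leave a nonzero terminal Φ, which is exactly the leakage the canonical classification checks for.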
### Real Data Comparison
-Compare with real trading episodes:
-
```shell
uv run python reward_space_analysis.py \
- --num_samples 100000 \
- --real_episodes path/to/episode_rewards.pkl \
- --out_dir real_vs_synthetic
+ --num_samples 100000 \
+ --real_episodes path/to/episode_rewards.pkl \
+ --out_dir real_vs_synthetic
```
-Shift metrics: lower is better (except p-value: higher ⇒ cannot reject equality).
+Shift metrics: lower divergence preferred (except p-value: higher ⇒ cannot reject equality).
### Batch Analysis
+An alternate sweep variant using a here-document:
+
```shell
-# Test multiple parameter combinations
-for factor in 1.5 2.0 2.5 3.0; do
- uv run python reward_space_analysis.py \
- --num_samples 20000 \
- --params win_reward_factor=$factor \
- --out_dir analysis_factor_$factor
-done
+while read target; do
+ uv run python reward_space_analysis.py \
+ --num_samples 30000 \
+ --params profit_target=$target \
+ --out_dir pt_${target}
+done <<EOF
+0.02
+0.03
+0.05
+EOF
```
---
## Testing
-### Run Tests
+Quick validation:
```shell
-uv run pytest -q
+uv run pytest
```
-### Coverage
+Selective example:
```shell
-uv run pytest -q --cov=. --cov-report=term-missing
-uv run pytest -q --cov=. --cov-report=html # open htmlcov/index.html
+uv run pytest -m pbrs -q
```
-### When to Run Tests
-
-- After modifying reward logic
-- Before important analyses
-- When results seem unexpected
-- After updating dependencies or Python version
-- When contributing new features (aim for >80% coverage on new code)
-
-### Focused Test Sets
-
-```shell
-uv run pytest -q test_reward_space_analysis.py::TestIntegration
-uv run pytest -q test_reward_space_analysis.py::TestStatisticalCoherence
-uv run pytest -q test_reward_space_analysis.py::TestRewardAlignment
-```
+Coverage threshold enforced: 85% (`--cov-fail-under=85` in `pyproject.toml`). For the full coverage mapping, invariants, markers, smoke policy, and maintenance workflow, see `tests/README.md`.
---
{name = "Jerome Benoit", email = "jerome.benoit@piment-noir.org"}
]
readme = "README.md"
-requires-python = ">=3.9"
+requires-python = ">=3.11"
dependencies = [
"numpy",
"pandas",
python_functions = [
"test_*"
]
+markers = [
+ "components: component-level reward computations",
+ "transforms: transform logic within components",
+ "robustness: stress and edge-case behavior",
+ "api: public API surface and helpers",
+ "cli: command-line interface behaviors",
+ "statistics: statistical aggregation and metrics",
+ "pbrs: potential-based reward shaping specifics",
+ "integration: cross-module integration scenarios",
+ "slow: resource-intensive or long-running tests",
+ "smoke: non-owning smoke tests for early regression detection"
+]
# Logging configuration
log_cli = true
"--verbose",
"--tb=short",
"--strict-markers",
- "--disable-warnings",
- "--color=yes"
+ "--color=yes",
+ "--cov=reward_space_analysis",
+ "--cov-fail-under=85"
]
[tool.ruff]
line-length = 100
-target-version = "py39"
+target-version = "py311"
[tool.ruff.lint]
select = ["E", "F", "W", "I"]
def _compute_relationship_stats(df: pd.DataFrame) -> Dict[str, Any]:
- """Return binned stats dict for idle, trade duration and pnl (uniform bins)."""
+ """Return binned stats dict for idle, trade duration and pnl (uniform bins).
+
+ Defensive against missing optional columns (e.g., ``reward_invalid`` when the
+ synthetic test helper omits it). Only the numeric columns present are used for
+ correlation.
+ Constant columns are dropped and reported.
+ """
reward_params: RewardParams = (
dict(df.attrs.get("reward_params"))
if isinstance(df.attrs.get("reward_params"), dict)
hold_stats = hold_stats.round(6)
exit_stats = exit_stats.round(6)
- correlation_fields = [
+ requested_fields = [
"reward",
"reward_invalid",
"reward_idle",
"trade_duration",
"idle_duration",
]
- # Drop columns that are constant (std == 0) to avoid all-NaN correlation rows
- numeric_subset = df[correlation_fields]
- constant_cols = [c for c in numeric_subset.columns if numeric_subset[c].nunique() <= 1]
- if constant_cols:
- filtered = numeric_subset.drop(columns=constant_cols)
+ correlation_fields = [c for c in requested_fields if c in df.columns]
+ if not correlation_fields:
+ correlation = pd.DataFrame()
+ constant_cols: list[str] = []
else:
- filtered = numeric_subset
- correlation = filtered.corr().round(4)
+ numeric_subset = df[correlation_fields]
+ constant_cols = [c for c in numeric_subset.columns if numeric_subset[c].nunique() <= 1]
+ filtered = numeric_subset.drop(columns=constant_cols) if constant_cols else numeric_subset
+ correlation = filtered.corr().round(4)
return {
"idle_stats": idle_stats,
skip_partial_dependence: bool = False,
rf_n_jobs: int = 1,
perm_n_jobs: int = 1,
-) -> Tuple[pd.DataFrame, Dict[str, Any], Dict[str, pd.DataFrame], RandomForestRegressor]:
- """Run RandomForest-based feature analysis.
+) -> Tuple[pd.DataFrame, Dict[str, Any], Dict[str, pd.DataFrame], Optional[RandomForestRegressor]]:
+ """Run RandomForest-based feature analysis defensively.
+
+ Purpose
+ -------
+ Provide permutation feature importances and optional partial dependence plots for the
+ synthetic reward space while remaining robust to incomplete or degenerate data.
+
+ Inputs
+ ------
+ df : pd.DataFrame
+ Sample frame containing canonical reward + feature columns (subset acceptable).
+ seed : int
+ Random seed used for train/test split and model reproducibility.
+ skip_partial_dependence : bool, default False
+ If True, skip partial dependence computation entirely (faster runs).
+ rf_n_jobs : int, default 1
+ Parallel jobs for the RandomForestRegressor.
+ perm_n_jobs : int, default 1
+ Parallel jobs for permutation_importance.
+
+ Behavior & Guarantees
+ ---------------------
+ - Dynamically selects available features from canonical list.
+ - Gracefully handles: empty frame, missing reward column, <2 usable features, any NaNs.
+ - Casts integer duration columns to float only if present (avoiding unintended coercions).
+ - Drops wholly NaN or constant columns (reported via ``dropped_features``).
+ - All sklearn operations guarded by try/except; failures yield NaN importances.
+ - ``skip_partial_dependence`` returns an empty ``partial_deps`` dict without computing PD.
+ - Sets ``model_fitted`` flag (False for all stub paths and fitting failures).
+ - Raises ImportError early if scikit-learn components are unavailable (fast-fail semantics).
Returns
-------
importance_df : pd.DataFrame
- Permutation importance summary (mean/std per feature).
+ Columns: ``feature``, ``importance_mean``, ``importance_std`` (NaNs on failure paths).
analysis_stats : Dict[str, Any]
- Core diagnostics (R², sample counts, top feature & score).
+ Keys:
+ ``r2_score`` : float (NaN if model not fitted)
+ ``n_features`` : int (usable feature count after drops)
+ ``n_samples_train`` : int (0 on stub paths)
+ ``n_samples_test`` : int (0 on stub paths)
+ ``top_feature`` : Optional[str] (None or first usable feature)
+ ``top_importance`` : float (NaN on failure paths)
+ ``dropped_features`` : list[str] (NaN/constant removed columns)
+ ``model_fitted`` : bool (True only if RF fit succeeded)
partial_deps : Dict[str, pd.DataFrame]
- Partial dependence data frames keyed by feature.
- model : RandomForestRegressor
- Fitted model instance (for optional downstream inspection).
+ Mapping feature -> DataFrame with columns ``<feature>`` and ``partial_dependence``;
+ empty when skipped or failures occur.
+ model : Optional[RandomForestRegressor]
+ Fitted model instance when successful; ``None`` otherwise.
+
+ Failure Modes
+ -------------
+ Returns stub outputs (NaN importances, ``model_fitted`` False, empty ``partial_deps``) for:
+ - Missing ``reward`` column
+ - No rows
+ - Zero available canonical features
+ - <2 usable (post-drop) features
+ - Any NaNs after preprocessing
+ - Train/test split failures
+ - Model fitting failures
+ - Permutation importance failures (partial dependence may still be attempted)
+
+ Notes
+ -----
+ Optimized for interpretability and robustness rather than raw model performance. The
+ n_estimators choice balances stability of importance estimates with runtime.
"""
- # Ensure sklearn is available
if (
RandomForestRegressor is None
or train_test_split is None
or r2_score is None
):
raise ImportError("scikit-learn is not available; skipping feature analysis.")
- feature_cols = [
+
+ canonical_features = [
"pnl",
"trade_duration",
"idle_duration",
"action",
"is_invalid",
]
- X = df[feature_cols].copy()
+ available_features = [c for c in canonical_features if c in df.columns]
+
+ # Reward column must exist; if absent produce empty stub outputs
+ if "reward" not in df.columns or len(df) == 0 or len(available_features) == 0:
+ empty_importance = pd.DataFrame(columns=["feature", "importance_mean", "importance_std"])
+ return (
+ empty_importance,
+ {
+ "r2_score": np.nan,
+ "n_features": 0,
+ "n_samples_train": 0,
+ "n_samples_test": 0,
+ "top_feature": None,
+ "top_importance": np.nan,
+ "dropped_features": [],
+ "model_fitted": False,
+ },
+ {},
+ None,
+ )
+
+ X = df[available_features].copy()
for col in ("trade_duration", "idle_duration"):
if col in X.columns and pd.api.types.is_integer_dtype(X[col]):
X.loc[:, col] = X[col].astype(float)
- y = df["reward"]
- X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=seed)
+ y = df["reward"].copy()
+
+ # Drop wholly NaN or constant columns (provide no signal)
+ drop_cols: list[str] = []
+ for col in list(X.columns):
+ col_series = X[col]
+ if col_series.isna().all() or col_series.nunique(dropna=True) <= 1:
+ drop_cols.append(col)
+ if drop_cols:
+ X = X.drop(columns=drop_cols)
+ usable_features = list(X.columns)
+
+ if len(usable_features) < 2 or X.isna().any().any():
+ importance_df = pd.DataFrame(
+ {
+ "feature": usable_features,
+ "importance_mean": [np.nan] * len(usable_features),
+ "importance_std": [np.nan] * len(usable_features),
+ }
+ )
+ analysis_stats = {
+ "r2_score": np.nan,
+ "n_features": len(usable_features),
+ "n_samples_train": 0,
+ "n_samples_test": 0,
+ "top_feature": usable_features[0] if usable_features else None,
+ "top_importance": np.nan,
+ "dropped_features": drop_cols,
+ "model_fitted": False,
+ }
+ return importance_df, analysis_stats, {}, None
+
+ # Train/test split
+ try:
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=seed)
+ except Exception:
+ importance_df = pd.DataFrame(
+ {
+ "feature": usable_features,
+ "importance_mean": [np.nan] * len(usable_features),
+ "importance_std": [np.nan] * len(usable_features),
+ }
+ )
+ analysis_stats = {
+ "r2_score": np.nan,
+ "n_features": len(usable_features),
+ "n_samples_train": 0,
+ "n_samples_test": 0,
+ "top_feature": usable_features[0] if usable_features else None,
+ "top_importance": np.nan,
+ "dropped_features": drop_cols,
+ "model_fitted": False,
+ }
+ return importance_df, analysis_stats, {}, None
- # Canonical RandomForest configuration - single source of truth
- model = RandomForestRegressor(
+ model: Optional[RandomForestRegressor] = RandomForestRegressor(
n_estimators=400,
max_depth=None,
random_state=seed,
n_jobs=rf_n_jobs,
)
- model.fit(X_train, y_train)
- y_pred = model.predict(X_test)
- r2 = r2_score(y_test, y_pred)
-
- perm = permutation_importance(
- model,
- X_test,
- y_test,
- n_repeats=25,
- random_state=seed,
- n_jobs=perm_n_jobs,
- )
- importance_df = (
- pd.DataFrame(
+ r2_local: float = np.nan
+ try:
+ model.fit(X_train, y_train)
+ y_pred = model.predict(X_test)
+ r2_local = r2_score(y_test, y_pred)
+ model_fitted_flag = True
+ except Exception:
+ # Model failed to fit; drop to stub path
+ model = None
+ model_fitted_flag = False
+ importance_df = pd.DataFrame(
{
- "feature": feature_cols,
- "importance_mean": perm.importances_mean,
- "importance_std": perm.importances_std,
+ "feature": usable_features,
+ "importance_mean": [np.nan] * len(usable_features),
+ "importance_std": [np.nan] * len(usable_features),
}
)
- .sort_values("importance_mean", ascending=False)
- .reset_index(drop=True)
- )
+ analysis_stats = {
+ "r2_score": np.nan,
+ "n_features": len(usable_features),
+ "n_samples_train": len(X_train),
+ "n_samples_test": len(X_test),
+ "top_feature": usable_features[0] if usable_features else None,
+ "top_importance": np.nan,
+ "dropped_features": drop_cols,
+ "model_fitted": False,
+ }
+ return importance_df, analysis_stats, {}, None
- # Compute partial dependence for key features unless skipped
- partial_deps = {}
- if not skip_partial_dependence:
- for feature in ["trade_duration", "idle_duration", "pnl"]:
- pd_result = partial_dependence(
- model,
- X_test,
- [feature],
- grid_resolution=50,
- kind="average",
+ # Permutation importance
+ try:
+ perm = permutation_importance(
+ model,
+ X_test,
+ y_test,
+ n_repeats=25,
+ random_state=seed,
+ n_jobs=perm_n_jobs,
+ )
+ importance_df = (
+ pd.DataFrame(
+ {
+ "feature": usable_features,
+ "importance_mean": perm.importances_mean,
+ "importance_std": perm.importances_std,
+ }
)
- value_key = "values" if "values" in pd_result else "grid_values"
- values = pd_result[value_key][0]
- averaged = pd_result["average"][0]
- partial_deps[feature] = pd.DataFrame({feature: values, "partial_dependence": averaged})
+ .sort_values("importance_mean", ascending=False)
+ .reset_index(drop=True)
+ )
+ except Exception:
+ importance_df = pd.DataFrame(
+ {
+ "feature": usable_features,
+ "importance_mean": [np.nan] * len(usable_features),
+ "importance_std": [np.nan] * len(usable_features),
+ }
+ )
+
+ # Partial dependence (optional)
+ partial_deps: Dict[str, pd.DataFrame] = {}
+ if model is not None and not skip_partial_dependence:
+ for feature in [
+ f for f in ["trade_duration", "idle_duration", "pnl"] if f in X_test.columns
+ ]:
+ try:
+ pd_result = partial_dependence(
+ model,
+ X_test,
+ [feature],
+ grid_resolution=50,
+ kind="average",
+ )
+ value_key = "values" if "values" in pd_result else "grid_values"
+ values = pd_result[value_key][0]
+ averaged = pd_result["average"][0]
+ partial_deps[feature] = pd.DataFrame(
+ {feature: values, "partial_dependence": averaged}
+ )
+ except Exception:
+ continue
+
+ top_feature = (
+ importance_df.iloc[0]["feature"]
+ if not importance_df.empty and pd.notna(importance_df.iloc[0]["importance_mean"])
+ else (usable_features[0] if usable_features else None)
+ )
+ top_importance = importance_df.iloc[0]["importance_mean"] if not importance_df.empty else np.nan
analysis_stats = {
- "r2_score": r2,
- "n_features": len(feature_cols),
+ "r2_score": r2_local,
+ "n_features": len(usable_features),
"n_samples_train": len(X_train),
"n_samples_test": len(X_test),
- "top_feature": importance_df.iloc[0]["feature"],
- "top_importance": importance_df.iloc[0]["importance_mean"],
+ "top_feature": top_feature,
+ "top_importance": top_importance,
+ "dropped_features": drop_cols,
+ "model_fitted": model_fitted_flag,
}
return importance_df, analysis_stats, partial_deps, model
if len(data) < 10:
continue
+ # Mean point estimate
point_est = float(data.mean())
- bootstrap_means = []
+ # Constant distribution detection (all values identical / zero variance)
data_array = data.values # speed
+ if data_array.size == 0:
+ continue
+ if np.ptp(data_array) == 0: # zero range -> constant
+ if strict_diagnostics:
+ # In strict mode, skip constant metrics entirely to avoid raising on a degenerate CI.
+ continue
+ # Graceful mode: record degenerate CI; validator will widen.
+ results[metric] = (point_est, point_est, point_est)
+ continue
+
+ # Bootstrap resampling
+ bootstrap_means = []
n = len(data_array)
for _ in range(n_bootstrap):
indices = rng.integers(0, n, size=n)
partial_deps = {}
if skip_feature_analysis or len(df) < 4:
print("Skipping feature analysis: flag set or insufficient samples (<4).")
- # Create placeholder files to satisfy integration expectations
- (output_dir / "feature_importance.csv").write_text(
- "feature,importance_mean,importance_std\n", encoding="utf-8"
- )
- for feature in ["trade_duration", "idle_duration", "pnl"]:
- (output_dir / f"partial_dependence_{feature}.csv").write_text(
- f"{feature},partial_dependence\n", encoding="utf-8"
- )
+ # Do NOT create feature_importance.csv when skipped (tests expect absence)
+ # Create partial dependence placeholders only on the insufficient-samples path (both skip flags unset)
+ if not skip_feature_analysis and not skip_partial_dependence:
+ for feature in ["trade_duration", "idle_duration", "pnl"]:
+ (output_dir / f"partial_dependence_{feature}.csv").write_text(
+ f"{feature},partial_dependence\n", encoding="utf-8"
+ )
else:
try:
importance_df, analysis_stats, partial_deps, _model = _perform_feature_analysis(
reason.append("flag --skip_feature_analysis set")
if len(df) < 4:
reason.append("insufficient samples <4")
- reason_str = "; ".join(reason) if reason else "skipped"
- f.write(f"_Skipped ({reason_str})._\n\n")
+ f.write("Feature Importance - (skipped)\n\n")
if skip_partial_dependence:
f.write(
"_Note: --skip_partial_dependence is redundant when feature analysis is skipped._\n\n"
def _is_warning_header(line: str) -> bool:
- l = line.strip()
- if not l:
+ line_str = line.strip()
+ if not line_str:
return False
- if "warnings.warn" in l.lower():
+ if "warnings.warn" in line_str.lower():
return False
- return bool(_WARN_HEADER_RE.search(l))
+ return bool(_WARN_HEADER_RE.search(line_str))
def build_arg_matrix(
--- /dev/null
+# Tests: Reward Space Analysis
+
+Authoritative documentation for invariant ownership, taxonomy layout, smoke policies, maintenance workflows, and full coverage mapping.
+
+## Purpose
+
+The suite enforces:
+
+- Reward component mathematics & transform correctness
+- PBRS invariance mechanics (canonical drift correction, near-zero classification)
+- Robustness under extreme / invalid parameter settings
+- Statistical metrics integrity (bootstrap, constant distributions)
+- CLI parameter propagation & report formatting
+- Cross-component smoke scenarios
+
+Single ownership per invariant is tracked in the Coverage Mapping section of this README.
+
+## Taxonomy Directories
+
+| Directory | Marker | Scope |
+| -------------- | ----------- | ------------------------------------------- |
+| `components/` | components | Component math & transforms |
+| `transforms/` | transforms | Transform function behavior |
+| `robustness/` | robustness | Edge cases, stability, progression |
+| `api/` | api | Public API helpers & parsing |
+| `cli/` | cli | CLI parameter propagation & artifacts |
+| `pbrs/` | pbrs | Potential-based shaping invariance & modes |
+| `statistics/` | statistics | Statistical metrics, tests, bootstrap |
+| `integration/` | integration | Smoke scenarios & report formatting |
+| `helpers/` | (none) | Helper utilities (data loading, assertions) |
+
+Markers are declared in `pyproject.toml` and enforced with `--strict-markers`.
+
+## Running Tests
+
+Full suite (coverage ≥85% enforced):
+
+```shell
+uv run pytest
+```
+
+Selective markers:
+
+```shell
+uv run pytest -m pbrs -q
+uv run pytest -m robustness -q
+uv run pytest -m "components or robustness" -q
+uv run pytest -m "not slow" -q
+```
+
+Coverage reports:
+
+```shell
+uv run pytest --cov=reward_space_analysis --cov-report=term-missing
+uv run pytest --cov=reward_space_analysis --cov-report=html && open htmlcov/index.html
+```
+
+Slow statistical tests:
+
+```shell
+uv run pytest -m "statistics and slow" -q
+```
+
+## Coverage Mapping (Invariant Ownership)
+
+Columns:
+
+- ID: Stable identifier (`<category>-<shortname>-NNN`) or numeric-only legacy (statistics block).
+- Category: Taxonomy directory marker.
+- Description: Concise invariant statement.
+- Owning File: Path:line of primary declaration (prefer comment line `# Owns invariant:` when present; otherwise docstring line).
+- Notes: Clarifications (sub-modes, extensions, non-owning references elsewhere, line clusters for multi-path coverage).
+
+| ID | Category | Description | Owning File | Notes |
+| -------------------------------------------- | ----------- | ----------------------------------------------------------------------------------- | --------------------------------------- | -------------------------------------------------------------------------------------------- |
+| report-abs-shaping-line-091 | integration | Abs Σ Shaping Reward line present & formatted | integration/test_report_formatting.py:4 | PBRS report may render line; formatting owned here (core assertion lines 84–103) |
+| report-additives-deterministic-092 | components | Additives deterministic report section | components/test_additives.py:4 | Integration/PBRS may reference outcome non-owning |
+| robustness-decomposition-integrity-101 | robustness | Single active core component equals total reward under mutually exclusive scenarios | robustness/test_robustness.py:35 | Scenarios: idle, hold, exit, invalid; non-owning refs integration/test_reward_calculation.py |
+| robustness-exit-mode-fallback-102 | robustness | Unknown exit_attenuation_mode falls back to linear w/ warning | robustness/test_robustness.py:519 | Comment line (function at :520) |
+| robustness-negative-grace-clamp-103 | robustness | Negative exit_plateau_grace clamps to 0.0 w/ warning | robustness/test_robustness.py:549 | |
+| robustness-invalid-power-tau-104 | robustness | Invalid power tau falls back alpha=1.0 w/ warning | robustness/test_robustness.py:586 | Line updated (was 585) |
+| robustness-near-zero-half-life-105 | robustness | Near-zero half life yields no attenuation (factor≈base) | robustness/test_robustness.py:615 | Line updated (was 613) |
+| pbrs-canonical-drift-correction-106 | pbrs | Canonical drift correction enforces near zero-sum shaping | pbrs/test_pbrs.py:447 | Multi-path: extension fallback (475), comparison path (517) |
+| pbrs-canonical-near-zero-report-116 | pbrs | Canonical near-zero cumulative shaping classification | pbrs/test_pbrs.py:747 | Full report classification |
+| statistics-partial-deps-skip-107 | statistics | skip_partial_dependence => empty PD structures | statistics/test_statistics.py:28 | Docstring line |
+| helpers-duplicate-rows-drop-108 | helpers | Duplicate rows dropped w/ warning counting removals | helpers/test_utilities.py:26 | Docstring line |
+| helpers-missing-cols-fill-109 | helpers | Missing required columns filled with NaN + single warning | helpers/test_utilities.py:50 | Docstring line |
+| statistics-binned-stats-min-edges-110 | statistics | <2 bin edges raises ValueError | statistics/test_statistics.py:45 | Docstring line |
+| statistics-constant-cols-exclusion-111 | statistics | Constant columns excluded & listed | statistics/test_statistics.py:57 | Docstring line |
+| statistics-degenerate-distribution-shift-112 | statistics | Degenerate dist: zero shift metrics & KS p=1.0 | statistics/test_statistics.py:74 | Docstring line |
+| statistics-constant-dist-widened-ci-113a | statistics | Non-strict: widened CI with warning | statistics/test_statistics.py:529 | Test docstring labels "Invariant 113 (non-strict)" |
+| statistics-constant-dist-strict-omit-113b | statistics | Strict: omit metrics (no widened CI) | statistics/test_statistics.py:562 | Test docstring labels "Invariant 113 (strict)" |
+| statistics-fallback-diagnostics-115 | statistics | Fallback diagnostics constant distribution (qq_r2=1.0 etc.) | statistics/test_statistics.py:190 | Docstring line |
+| robustness-exit-pnl-only-117 | robustness | Only exit actions have non-zero PnL | robustness/test_robustness.py:125 | Newly assigned ID (previously unnumbered) |
+| pbrs-absence-shift-placeholder-118 | pbrs | Placeholder shift line present (absence displayed) | pbrs/test_pbrs.py:975 | Ensures placeholder appears when shaping shift absent |
+
+Note: `transforms/` directory has no owned invariants; future transform-specific invariants should follow the ID pattern and be added here before test implementation.
+
+### Non-Owning Smoke / Reference Checks
+
+Files that reference invariant outcomes (formatting, aggregation) without owning the invariant must include a leading comment:
+
+```python
+# Non-owning smoke; ownership: <owning file>
+```
+
+Table tracks approximate line ranges and source ownership:
+
+| File | Lines (approx) | References | Ownership Source |
+| -------------------------------------- | -------------- | -------------------------------------------------------- | ------------------------------------------------------------------- |
+| integration/test_reward_calculation.py | 15-22 | Decomposition identity (sum components) | robustness/test_robustness.py:35 |
+| components/test_reward_components.py | 212-242 | Exit factor finiteness & plateau behavior | robustness/test_robustness.py:156+ |
+| pbrs/test_pbrs.py | 591-630 | Canonical vs non-canonical classification formatting | robustness/test_robustness.py:35, robustness/test_robustness.py:125 |
+| pbrs/test_pbrs.py | 616,624,799 | Abs Σ Shaping Reward line formatting | integration/test_report_formatting.py:84-103 |
+| pbrs/test_pbrs.py | 742-806 | Canonical near-zero cumulative shaping classification | robustness/test_robustness.py:35 |
+| pbrs/test_pbrs.py | 807-860 | Canonical warning classification (Σ shaping > tolerance) | robustness/test_robustness.py:35 |
+| pbrs/test_pbrs.py | 861-915 | Non-canonical full report reason aggregation | robustness/test_robustness.py:35 |
+| pbrs/test_pbrs.py | 916-969 | Non-canonical mode-only reason (additives disabled) | robustness/test_robustness.py:35 |
+| statistics/test_statistics.py | 292 | Mean decomposition consistency | robustness/test_robustness.py:35 |
+
+### Deprecated / Reserved IDs
+
+| ID | Status | Rationale |
+| --- | ---------- | --------------------------------------------------------------------- |
+| 093 | deprecated | CLI invariance consolidated; no dedicated test yet |
+| 094 | deprecated | CLI encoding/data migration removed in refactor |
+| 095 | deprecated | Report CLI propagation assertions merged into test_cli_params_and_csv |
+| 114 | reserved | Gap retained for potential future statistics invariant |
+
+## Adding New Invariants
+
+1. Assign ID `<category>-<shortname>-NNN` (NNN numeric). Reserve gaps explicitly if needed (see deprecated/reserved table).
+2. Add a row in Coverage Mapping BEFORE writing the test.
+3. Implement test in correct taxonomy directory; add marker if outside default selection.
+4. Optionally declare inline ownership:
+ ```python
+ # Owns invariant: <id>
+ def test_<short_description>(...):
+ ...
+ ```
+5. Run duplication audit and coverage before committing.
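The ID format from step 1 can be enforced with a simple check (an illustrative sketch; the regex encodes the `<category>-<shortname>-NNN` scheme described above, allowing underscores inside words as in existing shortnames like `near_zero`):

```python
import re

# <category>-<shortname>-NNN: lowercase hyphen-separated words (underscores
# allowed within a word), ending in a three-digit numeric suffix.
INVARIANT_ID_RE = re.compile(r"^[a-z][a-z_]*(?:-[a-z][a-z_]*)+-\d{3}$")


def is_valid_invariant_id(invariant_id: str) -> bool:
    """True if the ID follows <category>-<shortname>-NNN (NNN = three digits)."""
    return INVARIANT_ID_RE.match(invariant_id) is not None
```
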
+
+## Duplication Audit
+
+Each invariant shortname must appear in exactly one taxonomy directory path:
+
+```shell
+cd ReforceXY/reward_space_analysis/tests
+grep -R "<shortname>" -n .
+```
+
+Expect a single directory path. Examples:
+
+```shell
+grep -R "drift_correction" -n .
+grep -R "near_zero" -n .
+```
+
+## Coverage Parity Notes
+
+Detailed assertions reside in targeted directories (components, robustness), while integration tests focus on report formatting. Ownership IDs (e.g. 091–095, 106) reflect the current scope; multi-path ownership is called out where it applies.
+
+## When to Run Tests
+
+Run after changes to: reward component logic, PBRS mechanics, CLI parsing/output, statistical routines, dependency or Python version upgrades, or before publishing analysis reliant on invariants.
+
+---
+
+This README is the single authoritative source for test coverage, invariant ownership, smoke policies, and maintenance guidelines.
-"""Test package for reward space analysis."""
+"""Test package for reward space analysis.
-from .test_api_helpers import TestAPIAndHelpers, TestPrivateFunctions
-from .test_integration import TestIntegration
-from .test_pbrs import TestPBRS
-from .test_reward_components import TestRewardComponents
-from .test_robustness import TestRewardRobustnessAndBoundaries
-from .test_statistics import TestStatistics
-from .test_utilities import (
- TestCsvAndSimulationOptions,
- TestLoadRealEpisodes,
- TestParamsPropagation,
- TestReportFormatting,
-)
+This file intentionally avoids importing test modules to prevent
+side-effect imports and early dependency loading. Pytest will
+discover all tests recursively without re-exporting them here.
+"""
-__all__ = [
- "TestIntegration",
- "TestStatistics",
- "TestRewardComponents",
- "TestPBRS",
- "TestAPIAndHelpers",
- "TestPrivateFunctions",
- "TestRewardRobustnessAndBoundaries",
- "TestLoadRealEpisodes",
- "TestReportFormatting",
- "TestCsvAndSimulationOptions",
- "TestParamsPropagation",
-]
+__all__: list[str] = []
import tempfile
import unittest
from pathlib import Path
+from typing import Any, cast
import numpy as np
import pandas as pd
+import pytest
from reward_space_analysis import (
Actions,
Positions,
+ RewardParams,
_get_bool_param,
_get_float_param,
_get_int_param,
write_complete_statistical_analysis,
)
-from .test_base import RewardSpaceTestBase
+from ..test_base import RewardSpaceTestBase
+
+pytestmark = pytest.mark.api # taxonomy classification
class TestAPIAndHelpers(RewardSpaceTestBase):
self.assertTrue(math.isnan(val_str))
self.assertEqual(_get_float_param(params, "missing", 3.14), 3.14)
+ def test_get_float_param_edge_cases(self):
+ """Robust coercion edge cases for _get_float_param.
+
+ Enumerates:
+ - None -> NaN
+ - bool True/False -> 1.0/0.0
+ - empty string -> NaN
+ - invalid string literal -> NaN
+ - numeric strings (integer, float, scientific, whitespace) -> parsed float
+ - non-finite float (inf, -inf) -> NaN
+ - np.nan -> NaN
+ - unsupported container type -> NaN
+ """
+ self.assertTrue(math.isnan(_get_float_param({"k": None}, "k", 0.0)))
+ self.assertEqual(_get_float_param({"k": True}, "k", 0.0), 1.0)
+ self.assertEqual(_get_float_param({"k": False}, "k", 1.0), 0.0)
+ self.assertTrue(math.isnan(_get_float_param({"k": ""}, "k", 0.0)))
+ self.assertTrue(math.isnan(_get_float_param({"k": "abc"}, "k", 0.0)))
+ self.assertEqual(_get_float_param({"k": "42"}, "k", 0.0), 42.0)
+ self.assertAlmostEqual(
+ _get_float_param({"k": " 17.5 "}, "k", 0.0),
+ 17.5,
+ places=6,
+ msg="Whitespace trimmed numeric string should parse",
+ )
+ self.assertEqual(_get_float_param({"k": "1e2"}, "k", 0.0), 100.0)
+ self.assertTrue(math.isnan(_get_float_param({"k": float("inf")}, "k", 0.0)))
+ self.assertTrue(math.isnan(_get_float_param({"k": float("-inf")}, "k", 0.0)))
+ self.assertTrue(math.isnan(_get_float_param({"k": np.nan}, "k", 0.0)))
+ self.assertTrue(
+ math.isnan(_get_float_param(cast(RewardParams, {"k": cast(Any, [1, 2, 3])}), "k", 0.0))
+ )
+
def test_get_str_param(self):
"""Test string parameter extraction."""
params = {"test_str": "hello", "test_int": 2}
self.assertFalse(_get_bool_param(params, "missing", False))
def test_get_int_param_coercions(self):
- """Robust coercion paths of _get_int_param (bool/int/float/str/None/unsupported)."""
+ """Robust coercion paths of _get_int_param (bool/int/float/str/None/unsupported).
+
+ This test intentionally enumerates edge coercion semantics:
+ - None returns default (numeric default or 0 if non-numeric fallback provided)
+ - bool maps via int(True)=1 / int(False)=0
+ - float truncates toward zero (positive and negative)
+ - NaN/inf treated as invalid -> default
+ - numeric-like strings parsed (including scientific notation, whitespace strip, float truncation)
+ - empty/invalid/NaN strings fall back to default
+ - unsupported container types fall back to default
+ - missing key with non-numeric default coerces to 0
+ Ensures downstream reward parameter normalization logic has consistent integer handling regardless of input source.
+ """
self.assertEqual(_get_int_param({"k": None}, "k", 5), 5)
self.assertEqual(_get_int_param({"k": None}, "k", "x"), 0)
self.assertEqual(_get_int_param({"k": True}, "k", 0), 1)
self.assertEqual(_get_int_param({"k": ""}, "k", 5), 5)
self.assertEqual(_get_int_param({"k": "abc"}, "k", 5), 5)
self.assertEqual(_get_int_param({"k": "NaN"}, "k", 5), 5)
- self.assertEqual(_get_int_param({"k": [1, 2, 3]}, "k", 3), 3)
+ self.assertEqual(_get_int_param(cast(RewardParams, {"k": cast(Any, [1, 2, 3])}), "k", 3), 3)
self.assertEqual(_get_int_param({}, "missing", "zzz"), 0)
def test_argument_parser_construction(self):
--- /dev/null
+#!/usr/bin/env python3
+"""CLI-level tests: CSV encoding and parameter propagation."""
+
+import json
+import subprocess
+import sys
+import unittest
+from pathlib import Path
+
+import pandas as pd
+import pytest
+
+from ..test_base import RewardSpaceTestBase
+
+# Pytest marker for taxonomy classification
+pytestmark = pytest.mark.cli
+
+SCRIPT_PATH = Path(__file__).parent.parent.parent / "reward_space_analysis.py"
+
+
+class TestCsvEncoding(RewardSpaceTestBase):
+ """Validate CSV output encoding invariants."""
+
+ def test_action_column_integer_in_csv(self):
+ """Ensure 'action' column in reward_samples.csv is encoded as integers."""
+ out_dir = self.output_path / "csv_int_check"
+ cmd = [
+ "uv",
+ "run",
+ sys.executable,
+ str(SCRIPT_PATH),
+ "--num_samples",
+ "200",
+ "--seed",
+ str(self.SEED),
+ "--out_dir",
+ str(out_dir),
+ ]
+ result = subprocess.run(
+ cmd, capture_output=True, text=True, cwd=Path(__file__).parent.parent
+ )
+ self.assertEqual(result.returncode, 0, f"CLI failed: {result.stderr}")
+ csv_path = out_dir / "reward_samples.csv"
+ self.assertTrue(csv_path.exists(), "Missing reward_samples.csv")
+ df = pd.read_csv(csv_path)
+ self.assertIn("action", df.columns)
+ values = df["action"].tolist()
+ self.assertTrue(
+ all(float(v).is_integer() for v in values),
+ "Non-integer values detected in 'action' column",
+ )
+ allowed = {0, 1, 2, 3, 4}
+ self.assertTrue({int(v) for v in values}.issubset(allowed))
+
+
+class TestParamsPropagation(RewardSpaceTestBase):
+ """Integration tests to validate max_trade_duration_candles propagation via CLI params and dynamic flag.
+
+ Extended with coverage for:
+ - skip_feature_analysis summary path
+ - strict_diagnostics fallback vs manifest generation
+ - params_hash generation when simulation params differ
+ - PBRS invariance summary section when reward_shaping present
+ """
+
+ def test_skip_feature_analysis_summary_branch(self):
+ """CLI run with --skip_feature_analysis should mark feature importance skipped in summary and omit feature_importance.csv."""
+ out_dir = self.output_path / "skip_feature_analysis"
+ cmd = [
+ "uv",
+ "run",
+ sys.executable,
+ str(SCRIPT_PATH),
+ "--num_samples",
+ "200",
+ "--seed",
+ str(self.SEED),
+ "--out_dir",
+ str(out_dir),
+ "--skip_feature_analysis",
+ ]
+ result = subprocess.run(
+ cmd, capture_output=True, text=True, cwd=Path(__file__).parent.parent
+ )
+ self.assertEqual(result.returncode, 0, f"CLI failed: {result.stderr}")
+ report_path = out_dir / "statistical_analysis.md"
+ self.assertTrue(report_path.exists(), "Missing statistical_analysis.md")
+ content = report_path.read_text(encoding="utf-8")
+ self.assertIn("Feature Importance - (skipped)", content)
+ fi_path = out_dir / "feature_importance.csv"
+ self.assertFalse(fi_path.exists(), "feature_importance.csv should be absent when skipped")
+
+ def test_manifest_params_hash_generation(self):
+ """Ensure params_hash appears when non-default simulation params differ (risk_reward_ratio altered)."""
+ out_dir = self.output_path / "manifest_hash"
+ cmd = [
+ "uv",
+ "run",
+ sys.executable,
+ str(SCRIPT_PATH),
+ "--num_samples",
+ "150",
+ "--seed",
+ str(self.SEED),
+ "--out_dir",
+ str(out_dir),
+ "--risk_reward_ratio",
+ "1.5",
+ ]
+ result = subprocess.run(
+ cmd, capture_output=True, text=True, cwd=Path(__file__).parent.parent
+ )
+ self.assertEqual(result.returncode, 0, f"CLI failed: {result.stderr}")
+ manifest_path = out_dir / "manifest.json"
+ self.assertTrue(manifest_path.exists(), "Missing manifest.json")
+ manifest = json.loads(manifest_path.read_text())
+ self.assertIn("params_hash", manifest, "params_hash should be present when params differ")
+ self.assertIn("simulation_params", manifest)
+ self.assertIn("risk_reward_ratio", manifest["simulation_params"])
+
+ def test_pbrs_invariance_section_present(self):
+ """When reward_shaping column exists, summary should include PBRS invariance section."""
+ out_dir = self.output_path / "pbrs_invariance"
+ # Use small sample for speed; rely on default shaping logic
+ cmd = [
+ "uv",
+ "run",
+ sys.executable,
+ str(SCRIPT_PATH),
+ "--num_samples",
+ "180",
+ "--seed",
+ str(self.SEED),
+ "--out_dir",
+ str(out_dir),
+ ]
+ result = subprocess.run(
+ cmd, capture_output=True, text=True, cwd=Path(__file__).parent.parent
+ )
+ self.assertEqual(result.returncode, 0, f"CLI failed: {result.stderr}")
+ report_path = out_dir / "statistical_analysis.md"
+ self.assertTrue(report_path.exists(), "Missing statistical_analysis.md")
+ content = report_path.read_text(encoding="utf-8")
+ # The report's numbered sections include a PBRS invariance entry (section 7)
+ self.assertIn("PBRS Invariance", content)
+
+ def test_strict_diagnostics_small_sample_passes(self):
+ """Run with --strict_diagnostics and a small num_samples (more likely to yield constant columns); expect a clean exit, with diagnostics asserted instead of silently replaced by fallbacks."""
+ out_dir = self.output_path / "strict_diagnostics"
+ cmd = [
+ "uv",
+ "run",
+ sys.executable,
+ str(SCRIPT_PATH),
+ "--num_samples",
+ "120",
+ "--seed",
+ str(self.SEED),
+ "--out_dir",
+ str(out_dir),
+ "--strict_diagnostics",
+ ]
+ result = subprocess.run(
+ cmd, capture_output=True, text=True, cwd=Path(__file__).parent.parent
+ )
+ # Strict mode must still exit cleanly here; constant distributions, if any, surface as assertion failures rather than graceful fallback replacements.
+ self.assertEqual(
+ result.returncode,
+ 0,
+ f"CLI failed (expected pass): {result.stderr}\nSTDOUT:\n{result.stdout[:500]}",
+ )
+ report_path = out_dir / "statistical_analysis.md"
+ self.assertTrue(report_path.exists(), "Missing statistical_analysis.md")
+
+
+ def test_max_trade_duration_candles_propagation_params(self):
+ """--params max_trade_duration_candles=X propagates to manifest and simulation params."""
+ out_dir = self.output_path / "mtd_params"
+ cmd = [
+ "uv",
+ "run",
+ sys.executable,
+ str(SCRIPT_PATH),
+ "--num_samples",
+ "120",
+ "--seed",
+ str(self.SEED),
+ "--out_dir",
+ str(out_dir),
+ "--params",
+ "max_trade_duration_candles=96",
+ ]
+ result = subprocess.run(
+ cmd, capture_output=True, text=True, cwd=Path(__file__).parent.parent
+ )
+ self.assertEqual(result.returncode, 0, f"CLI failed: {result.stderr}")
+ manifest_path = out_dir / "manifest.json"
+ self.assertTrue(manifest_path.exists(), "Missing manifest.json")
+ with open(manifest_path, "r") as f:
+ manifest = json.load(f)
+ self.assertIn("reward_params", manifest)
+ self.assertIn("simulation_params", manifest)
+ rp = manifest["reward_params"]
+ self.assertIn("max_trade_duration_candles", rp)
+ self.assertEqual(int(rp["max_trade_duration_candles"]), 96)
+
+ def test_max_trade_duration_candles_propagation_flag(self):
+ """Dynamic flag --max_trade_duration_candles X propagates identically."""
+ out_dir = self.output_path / "mtd_flag"
+ cmd = [
+ "uv",
+ "run",
+ sys.executable,
+ str(SCRIPT_PATH),
+ "--num_samples",
+ "120",
+ "--seed",
+ str(self.SEED),
+ "--out_dir",
+ str(out_dir),
+ "--max_trade_duration_candles",
+ "64",
+ ]
+ result = subprocess.run(
+ cmd, capture_output=True, text=True, cwd=Path(__file__).parent.parent
+ )
+ self.assertEqual(result.returncode, 0, f"CLI failed: {result.stderr}")
+ manifest_path = out_dir / "manifest.json"
+ self.assertTrue(manifest_path.exists(), "Missing manifest.json")
+ with open(manifest_path, "r") as f:
+ manifest = json.load(f)
+ self.assertIn("reward_params", manifest)
+ self.assertIn("simulation_params", manifest)
+ rp = manifest["reward_params"]
+ self.assertIn("max_trade_duration_candles", rp)
+ self.assertEqual(int(rp["max_trade_duration_candles"]), 64)
+
+
+if __name__ == "__main__":
+ unittest.main()
--- /dev/null
+#!/usr/bin/env python3
+"""Additive deterministic contribution tests moved from helpers/test_utilities.py.
+
+Owns invariant: report-additives-deterministic-092 (components category)
+"""
+
+import unittest
+
+import pytest
+
+from reward_space_analysis import apply_potential_shaping
+
+from ..test_base import RewardSpaceTestBase
+
+pytestmark = pytest.mark.components # selective execution marker
+
+
+class TestAdditivesDeterministicContribution(RewardSpaceTestBase):
+ """Additives enabled increase total reward; shaping impact limited."""
+
+ def test_additive_activation_deterministic_contribution(self):
+ base = self.base_params(
+ hold_potential_enabled=True,
+ entry_additive_enabled=False,
+ exit_additive_enabled=False,
+ exit_potential_mode="non_canonical",
+ )
+ with_add = base.copy()
+ with_add.update(
+ {
+ "entry_additive_enabled": True,
+ "exit_additive_enabled": True,
+ "entry_additive_scale": 0.4,
+ "exit_additive_scale": 0.4,
+ "entry_additive_gain": 1.0,
+ "exit_additive_gain": 1.0,
+ }
+ )
+ ctx = {
+ "base_reward": 0.05,
+ "current_pnl": 0.01,
+ "current_duration_ratio": 0.2,
+ "next_pnl": 0.012,
+ "next_duration_ratio": 0.25,
+ "is_entry": True,
+ "is_exit": False,
+ }
+ t0, s0, _n0 = apply_potential_shaping(last_potential=0.0, params=base, **ctx)
+ t1, s1, _n1 = apply_potential_shaping(last_potential=0.0, params=with_add, **ctx)
+ self.assertFinite(t1)
+ self.assertFinite(s1)
+ self.assertLess(abs(s1 - s0), 0.2)
+ self.assertGreater(t1 - t0, 0.0, "Total reward should increase with additives present")
+
+
+if __name__ == "__main__":
+ unittest.main()
--- /dev/null
+#!/usr/bin/env python3
+"""Tests for reward calculation components and algorithms."""
+
+import math
+import unittest
+
+import pytest
+
+from reward_space_analysis import (
+ Actions,
+ Positions,
+ _compute_hold_potential,
+ _get_exit_factor,
+ _get_float_param,
+ _get_pnl_factor,
+ calculate_reward,
+)
+
+from ..helpers import (
+ assert_component_sum_integrity,
+ assert_exit_factor_plateau_behavior,
+ assert_hold_penalty_threshold_behavior,
+ assert_progressive_scaling_behavior,
+ assert_reward_calculation_scenarios,
+ make_idle_penalty_test_contexts,
+)
+from ..test_base import RewardSpaceTestBase
+
+pytestmark = pytest.mark.components # selective execution marker
+
+
+class TestRewardComponents(RewardSpaceTestBase):
+ def test_hold_potential_computation_finite(self):
+ """Test hold potential computation returns finite values."""
+ params = {
+ "hold_potential_enabled": True,
+ "hold_potential_scale": 1.0,
+ "hold_potential_gain": 1.0,
+ "hold_potential_transform_pnl": "tanh",
+ "hold_potential_transform_duration": "tanh",
+ }
+ val = _compute_hold_potential(0.5, 0.3, params)
+ self.assertFinite(val, name="hold_potential")
+
+ def test_hold_penalty_basic_calculation(self):
+ """Test hold penalty calculation when trade_duration exceeds max_duration.
+
+ Tests:
+ - Hold penalty is negative when duration exceeds threshold
+ - Component sum integrity maintained
+
+ Expected behavior:
+ - trade_duration > max_duration → hold_penalty < 0
+ - Total reward equals sum of active components
+ """
+ context = self.make_ctx(
+ pnl=0.01,
+ trade_duration=150, # > default max_duration (128)
+ idle_duration=0,
+ max_unrealized_profit=0.02,
+ min_unrealized_profit=0.0,
+ position=Positions.Long,
+ action=Actions.Neutral,
+ )
+ breakdown = calculate_reward(
+ context,
+ self.DEFAULT_PARAMS,
+ base_factor=self.TEST_BASE_FACTOR,
+ profit_target=self.TEST_PROFIT_TARGET,
+ risk_reward_ratio=self.TEST_RR,
+ short_allowed=True,
+ action_masking=True,
+ )
+ self.assertLess(breakdown.hold_penalty, 0, "Hold penalty should be negative")
+ assert_component_sum_integrity(
+ self,
+ breakdown,
+ self.TOL_IDENTITY_RELAXED,
+ exclude_components=["idle_penalty", "exit_component", "invalid_penalty"],
+ component_description="hold + shaping/additives",
+ )
+
+ def test_hold_penalty_threshold_behavior(self):
+ """Test hold penalty activation at max_duration threshold.
+
+ Tests:
+ - No penalty before max_duration
+ - Penalty activation at and after max_duration
+
+ Expected behavior:
+ - duration < max_duration → hold_penalty = 0
+ - duration >= max_duration → hold_penalty <= 0
+ """
+ max_duration = 128
+ threshold_test_cases = [
+ (64, "before max_duration"),
+ (127, "just before max_duration"),
+ (128, "exactly at max_duration"),
+ (129, "just after max_duration"),
+ ]
+
+ def context_factory(trade_duration):
+ return self.make_ctx(
+ pnl=0.0,
+ trade_duration=trade_duration,
+ idle_duration=0,
+ position=Positions.Long,
+ action=Actions.Neutral,
+ )
+
+ assert_hold_penalty_threshold_behavior(
+ self,
+ threshold_test_cases,
+ max_duration,
+ context_factory,
+ self.DEFAULT_PARAMS,
+ self.TEST_BASE_FACTOR,
+ self.TEST_PROFIT_TARGET,
+ 1.0,
+ self.TOL_IDENTITY_RELAXED,
+ )
+
+ def test_hold_penalty_progressive_scaling(self):
+ """Test hold penalty scales progressively with increasing duration.
+
+ Tests:
+ - Penalty magnitude increases monotonically with duration
+ - Progressive scaling beyond max_duration threshold
+
+ Expected behavior:
+ - For d1 < d2 < d3: penalty(d1) >= penalty(d2) >= penalty(d3)
+ - Penalties become more negative with longer durations
+ """
+ params = self.base_params(max_trade_duration_candles=100)
+ durations = [150, 200, 300]
+ penalties = []
+ for duration in durations:
+ context = self.make_ctx(
+ pnl=0.0,
+ trade_duration=duration,
+ idle_duration=0,
+ position=Positions.Long,
+ action=Actions.Neutral,
+ )
+ breakdown = calculate_reward(
+ context,
+ params,
+ base_factor=self.TEST_BASE_FACTOR,
+ profit_target=self.TEST_PROFIT_TARGET,
+ risk_reward_ratio=self.TEST_RR,
+ short_allowed=True,
+ action_masking=True,
+ )
+ penalties.append(breakdown.hold_penalty)
+
+ assert_progressive_scaling_behavior(self, penalties, durations, "Hold penalty")
+
+ def test_idle_penalty_calculation(self):
+ """Test idle penalty calculation for neutral idle state.
+
+ Tests:
+ - Idle penalty is negative for idle duration > 0
+ - Component sum integrity maintained
+
+ Expected behavior:
+ - idle_duration > 0 → idle_penalty < 0
+ - Total reward equals sum of active components
+ """
+ context = self.make_ctx(
+ pnl=0.0,
+ trade_duration=0,
+ idle_duration=20,
+ max_unrealized_profit=0.0,
+ min_unrealized_profit=0.0,
+ position=Positions.Neutral,
+ action=Actions.Neutral,
+ )
+
+ def validate_idle_penalty(test_case, breakdown, description, tolerance):
+ test_case.assertLess(breakdown.idle_penalty, 0, "Idle penalty should be negative")
+ assert_component_sum_integrity(
+ test_case,
+ breakdown,
+ tolerance,
+ exclude_components=["hold_penalty", "exit_component", "invalid_penalty"],
+ component_description="idle + shaping/additives",
+ )
+
+ scenarios = [(context, self.DEFAULT_PARAMS, "idle_penalty_basic")]
+ assert_reward_calculation_scenarios(
+ self,
+ scenarios,
+ self.TEST_BASE_FACTOR,
+ self.TEST_PROFIT_TARGET,
+ 1.0,
+ validate_idle_penalty,
+ self.TOL_IDENTITY_RELAXED,
+ )
+
+ def test_efficiency_zero_policy(self):
+ """Test efficiency zero policy produces expected PnL factor.
+
+ Tests:
+ - PnL factor calculation with efficiency weight = 0
+ - Finite and positive factor values
+
+ Expected behavior:
+ - efficiency_weight = 0 → pnl_factor ≈ 1.0
+ - Factor is finite and well-defined
+ """
+ ctx = self.make_ctx(
+ pnl=0.0,
+ trade_duration=1,
+ max_unrealized_profit=0.0,
+ min_unrealized_profit=-0.02,
+ position=Positions.Long,
+ action=Actions.Long_exit,
+ )
+ params = self.base_params()
+ profit_target = self.TEST_PROFIT_TARGET * self.TEST_RR
+ pnl_factor = _get_pnl_factor(params, ctx, profit_target, self.TEST_RR)
+ self.assertFinite(pnl_factor, name="pnl_factor")
+ self.assertAlmostEqualFloat(pnl_factor, 1.0, tolerance=self.TOL_GENERIC_EQ)
+
+ def test_max_idle_duration_candles_logic(self):
+ """Test max idle duration candles parameter affects penalty magnitude.
+
+ Tests:
+ - Smaller max_idle_duration → larger penalty magnitude
+ - Larger max_idle_duration → smaller penalty magnitude
+ - Both penalties are negative
+
+ Expected behavior:
+ - penalty(max=50) < penalty(max=200) < 0
+ """
+ params_small = self.base_params(max_idle_duration_candles=50)
+ params_large = self.base_params(max_idle_duration_candles=200)
+ base_factor = self.TEST_BASE_FACTOR
+ context = self.make_ctx(
+ pnl=0.0,
+ trade_duration=0,
+ idle_duration=40,
+ position=Positions.Neutral,
+ action=Actions.Neutral,
+ )
+ small = calculate_reward(
+ context,
+ params_small,
+ base_factor,
+ profit_target=self.TEST_PROFIT_TARGET,
+ risk_reward_ratio=self.TEST_RR,
+ short_allowed=True,
+ action_masking=True,
+ )
+ large = calculate_reward(
+ context,
+ params_large,
+ base_factor=self.TEST_BASE_FACTOR,
+ profit_target=self.TEST_PROFIT_TARGET,
+ risk_reward_ratio=self.TEST_RR,
+ short_allowed=True,
+ action_masking=True,
+ )
+ self.assertLess(small.idle_penalty, 0.0)
+ self.assertLess(large.idle_penalty, 0.0)
+ self.assertGreater(large.idle_penalty, small.idle_penalty)
+
+ @pytest.mark.smoke
+ def test_exit_factor_calculation(self):
+ """Exit factor calculation smoke test across attenuation modes.
+
+ Non-owning smoke test; ownership: robustness/test_robustness.py:35
+
+ Tests:
+ - Exit factor finiteness for linear and power modes
+ - Plateau behavior with grace period
+
+ Expected behavior:
+ - All exit factors are finite and positive
+ - Plateau mode attenuates after grace period
+ """
+ modes_to_test = ["linear", "power"]
+ for mode in modes_to_test:
+ test_params = self.base_params(exit_attenuation_mode=mode)
+ factor = _get_exit_factor(
+ base_factor=1.0, pnl=0.02, pnl_factor=1.5, duration_ratio=0.3, params=test_params
+ )
+ self.assertFinite(factor, name=f"exit_factor[{mode}]")
+ self.assertGreater(factor, 0, f"Exit factor for {mode} should be positive")
+ plateau_params = self.base_params(
+ exit_attenuation_mode="linear",
+ exit_plateau=True,
+ exit_plateau_grace=0.5,
+ exit_linear_slope=1.0,
+ )
+ assert_exit_factor_plateau_behavior(
+ self,
+ _get_exit_factor,
+ base_factor=1.0,
+ pnl=0.02,
+ pnl_factor=1.5,
+ plateau_params=plateau_params,
+ grace=0.5,
+ tolerance_strict=self.TOL_IDENTITY_STRICT,
+ )
+
+ def test_idle_penalty_zero_when_profit_target_zero(self):
+ """Test idle penalty is zero when profit_target is zero.
+
+ Tests:
+ - profit_target = 0 → idle_penalty = 0
+ - Total reward is zero in this configuration
+
+ Expected behavior:
+ - profit_target = 0 → idle_factor = 0 → idle_penalty = 0
+ - No other components active for neutral idle state
+ """
+ context = self.make_ctx(
+ pnl=0.0,
+ trade_duration=0,
+ idle_duration=30,
+ position=Positions.Neutral,
+ action=Actions.Neutral,
+ )
+
+ def validate_zero_penalty(test_case, breakdown, description, tolerance_relaxed):
+ test_case.assertEqual(
+ breakdown.idle_penalty, 0.0, "Idle penalty should be zero when profit_target=0"
+ )
+ test_case.assertEqual(
+ breakdown.total, 0.0, "Total reward should be zero in this configuration"
+ )
+
+ scenarios = [(context, self.DEFAULT_PARAMS, "profit_target_zero")]
+ assert_reward_calculation_scenarios(
+ self,
+ scenarios,
+ self.TEST_BASE_FACTOR,
+ 0.0, # profit_target=0
+ self.TEST_RR,
+ validate_zero_penalty,
+ self.TOL_IDENTITY_RELAXED,
+ )
+
+ def test_win_reward_factor_saturation(self):
+ """Test PnL amplification factor saturates at asymptotic limit.
+
+ Tests:
+ - Amplification ratio increases monotonically with PnL
+ - Saturation approaches (1 + win_reward_factor)
+ - Mathematical formula validation
+
+ Expected behavior:
+ - As PnL → ∞: amplification → (1 + win_reward_factor)
+ - Monotonic increase: ratio(PnL1) <= ratio(PnL2) for PnL1 < PnL2
+ - Observed matches theoretical tanh-based formula
+ """
+ win_reward_factor = 3.0
+ beta = 0.5
+ profit_target = self.TEST_PROFIT_TARGET
+ params = self.base_params(
+ win_reward_factor=win_reward_factor,
+ pnl_factor_beta=beta,
+ efficiency_weight=0.0,
+ exit_attenuation_mode="linear",
+ exit_plateau=False,
+ exit_linear_slope=0.0,
+ )
+ params.pop("base_factor", None)
+ pnl_values = [profit_target * m for m in (1.05, self.TEST_RR_HIGH, 5.0, 10.0)]
+ ratios_observed: list[float] = []
+ for pnl in pnl_values:
+ context = self.make_ctx(
+ pnl=pnl,
+ trade_duration=0,
+ idle_duration=0,
+ max_unrealized_profit=pnl,
+ min_unrealized_profit=0.0,
+ position=Positions.Long,
+ action=Actions.Long_exit,
+ )
+ br = calculate_reward(
+ context,
+ params,
+ base_factor=1.0,
+ profit_target=profit_target,
+ risk_reward_ratio=1.0,
+ short_allowed=True,
+ action_masking=True,
+ )
+ ratio = br.exit_component / pnl if pnl != 0 else 0.0
+ ratios_observed.append(float(ratio))
+ self.assertMonotonic(
+ ratios_observed,
+ non_decreasing=True,
+ tolerance=self.TOL_IDENTITY_STRICT,
+ name="pnl_amplification_ratio",
+ )
+ asymptote = 1.0 + win_reward_factor
+ final_ratio = ratios_observed[-1]
+ self.assertFinite(final_ratio, name="final_ratio")
+ self.assertLess(
+ abs(final_ratio - asymptote),
+ 0.001,
+ f"Final amplification {final_ratio:.6f} not close to asymptote {asymptote:.6f}",
+ )
+ expected_ratios: list[float] = []
+ for pnl in pnl_values:
+ pnl_ratio = pnl / profit_target
+ expected = 1.0 + win_reward_factor * math.tanh(beta * (pnl_ratio - 1.0))
+ expected_ratios.append(expected)
+ for obs, exp in zip(ratios_observed, expected_ratios):
+ self.assertFinite(obs, name="observed_ratio")
+ self.assertFinite(exp, name="expected_ratio")
+ self.assertLess(
+ abs(obs - exp),
+ 5e-06,
+ f"Observed amplification {obs:.8f} deviates from expected {exp:.8f}",
+ )
+
+ def test_idle_penalty_fallback_and_proportionality(self):
+ """Test idle penalty fallback and proportional scaling behavior.
+
+ Tests:
+ - Fallback to max_trade_duration when max_idle_duration is None
+ - Proportional scaling with idle duration (2:1 ratio validation)
+ - Mathematical validation of penalty formula
+
+ Expected behavior:
+ - max_idle_duration = None → use max_trade_duration as fallback
+ - penalty(duration=40) ≈ 2 × penalty(duration=20)
+ - Formula: penalty ∝ (duration/max)^power × scale
+ """
+ params = self.base_params(max_idle_duration_candles=None, max_trade_duration_candles=100)
+ base_factor = 90.0
+ profit_target = self.TEST_PROFIT_TARGET
+ risk_reward_ratio = 1.0
+
+ # Generate test contexts using helper
+ base_context_kwargs = {
+ "pnl": 0.0,
+ "trade_duration": 0,
+ "position": Positions.Neutral,
+ "action": Actions.Neutral,
+ }
+ idle_scenarios = [20, 40, 120]
+ contexts_and_descriptions = make_idle_penalty_test_contexts(
+ self.make_ctx, idle_scenarios, base_context_kwargs
+ )
+
+ # Calculate all rewards
+ results = []
+ for context, description in contexts_and_descriptions:
+ breakdown = calculate_reward(
+ context,
+ params,
+ base_factor=base_factor,
+ profit_target=profit_target,
+ risk_reward_ratio=risk_reward_ratio,
+ short_allowed=True,
+ action_masking=True,
+ )
+ results.append((breakdown, context.idle_duration, description))
+
+ # Validate proportional scaling
+ br_a, br_b, br_mid = [r[0] for r in results]
+ self.assertLess(br_a.idle_penalty, 0.0)
+ self.assertLess(br_b.idle_penalty, 0.0)
+ self.assertLess(br_mid.idle_penalty, 0.0)
+
+ # Check 2:1 ratio between 40 and 20 idle duration
+ ratio = br_b.idle_penalty / br_a.idle_penalty if br_a.idle_penalty != 0 else None
+ self.assertIsNotNone(ratio)
+ if ratio is not None:
+ self.assertAlmostEqualFloat(abs(ratio), 2.0, tolerance=0.2)
+
+ # Mathematical validation for the longest idle duration (120 candles)
+ idle_penalty_scale = _get_float_param(params, "idle_penalty_scale", 0.5)
+ idle_penalty_power = _get_float_param(params, "idle_penalty_power", 1.025)
+ factor = _get_float_param(params, "base_factor", float(base_factor))
+ idle_factor = factor * (profit_target * risk_reward_ratio) / 4.0
+ observed_ratio = abs(br_mid.idle_penalty) / (idle_factor * idle_penalty_scale)
+ if observed_ratio > 0:
+ implied_D = 120 / observed_ratio ** (1 / idle_penalty_power)
+ self.assertAlmostEqualFloat(implied_D, 400.0, tolerance=20.0)
+
+
+if __name__ == "__main__":
+ unittest.main()
--- /dev/null
+"""Comprehensive transform function tests consolidating duplicate patterns.
+
+This module centralizes transform function tests that appear across multiple test files,
+reducing duplication while maintaining full functional coverage for mathematical transforms.
+"""
+
+import math
+
+import pytest
+
+from reward_space_analysis import apply_transform
+
+from ..test_base import RewardSpaceTestBase
+
+pytestmark = pytest.mark.transforms # taxonomy classification
+
+
+class TestTransforms(RewardSpaceTestBase):
+ """Comprehensive transform function tests with parameterized scenarios."""
+
+ # Transform function test data
+ SMOOTH_TRANSFORMS = ["tanh", "softsign", "arctan", "sigmoid", "asinh"]
+ ALL_TRANSFORMS = SMOOTH_TRANSFORMS + ["clip"]
+
+ def test_transform_exact_values(self):
+ """Test transform functions produce exact expected values for specific inputs."""
+ test_cases = [
+ # tanh transform: tanh(x) in (-1, 1)
+ ("tanh", [0.0, 1.0, -1.0], [0.0, math.tanh(1.0), math.tanh(-1.0)]),
+ # softsign transform: x / (1 + |x|) in (-1, 1)
+ ("softsign", [0.0, 1.0, -1.0], [0.0, 0.5, -0.5]),
+ # asinh transform: x / sqrt(1 + x^2) in (-1, 1)
+ ("asinh", [0.0], [0.0]), # More complex calculations tested separately
+ # arctan transform: (2/pi) * arctan(x) in (-1, 1)
+ ("arctan", [0.0, 1.0], [0.0, 2.0 / math.pi * math.atan(1.0)]),
+ # sigmoid transform: 2σ(x) - 1, σ(x) = 1/(1 + e^(-x)) in (-1, 1)
+ ("sigmoid", [0.0], [0.0]), # More complex calculations tested separately
+ # clip transform: clip(x, -1, 1) in [-1, 1]
+ ("clip", [0.0, 0.5, 2.0, -2.0], [0.0, 0.5, 1.0, -1.0]),
+ ]
+
+ for transform_name, test_values, expected_values in test_cases:
+ for test_val, expected_val in zip(test_values, expected_values):
+ with self.subTest(transform=transform_name, input=test_val, expected=expected_val):
+ result = apply_transform(transform_name, test_val)
+ self.assertAlmostEqualFloat(
+ result,
+ expected_val,
+ tolerance=1e-10,
+ msg=f"{transform_name}({test_val}) should equal {expected_val}",
+ )
+
+ def test_transform_bounds_smooth(self):
+ """Test that smooth transforms stay within [-1, 1] bounds for extreme values."""
+ extreme_values = [-1000000.0, -100.0, -10.0, 10.0, 100.0, 1000000.0]
+
+ for transform_name in self.SMOOTH_TRANSFORMS:
+ for extreme_val in extreme_values:
+ with self.subTest(transform=transform_name, input=extreme_val):
+ result = apply_transform(transform_name, extreme_val)
+ self.assertTrue(
+ -1.0 <= result <= 1.0,
+ f"{transform_name}({extreme_val}) = {result} should be in [-1, 1]",
+ )
+
+ def test_transform_bounds_clip(self):
+ """Test that clip transform stays within [-1, 1] bounds (inclusive)."""
+ extreme_values = [-1000.0, -100.0, -2.0, -1.0, 0.0, 1.0, 2.0, 100.0, 1000.0]
+
+ for extreme_val in extreme_values:
+ with self.subTest(input=extreme_val):
+ result = apply_transform("clip", extreme_val)
+ self.assertTrue(
+ -1.0 <= result <= 1.0, f"clip({extreme_val}) = {result} should be in [-1, 1]"
+ )
+
+ def test_transform_monotonicity_smooth(self):
+ """Test that smooth transforms are monotonically non-decreasing."""
+ test_sequence = [-5.0, -1.0, -0.5, 0.0, 0.5, 1.0, 5.0]
+
+ for transform_name in self.SMOOTH_TRANSFORMS:
+ transform_values = [apply_transform(transform_name, x) for x in test_sequence]
+
+ # Check monotonicity: each value should be <= next value
+ for i in range(len(transform_values) - 1):
+ with self.subTest(transform=transform_name, index=i):
+ current_val = transform_values[i]
+ next_val = transform_values[i + 1]
+ self.assertLessEqual(
+ current_val,
+ next_val + self.TOL_IDENTITY_STRICT,
+ f"{transform_name} not monotonic: values[{i}]={current_val:.6f} > values[{i + 1}]={next_val:.6f}",
+ )
+
+ def test_transform_clip_monotonicity(self):
+ """Test that clip transform is monotonically non-decreasing within bounds."""
+ test_sequence = [-10.0, -2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0, 10.0]
+ transform_values = [apply_transform("clip", x) for x in test_sequence]
+
+ # Expected: [-1.0, -1.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.0, 1.0]
+ for i in range(len(transform_values) - 1):
+ with self.subTest(index=i):
+ current_val = transform_values[i]
+ next_val = transform_values[i + 1]
+ self.assertLessEqual(
+ current_val,
+ next_val + self.TOL_IDENTITY_STRICT,
+ f"clip not monotonic: values[{i}]={current_val:.6f} > values[{i + 1}]={next_val:.6f}",
+ )
+
+ def test_transform_zero_input(self):
+ """Test that all transforms return 0.0 for zero input."""
+ for transform_name in self.ALL_TRANSFORMS:
+ with self.subTest(transform=transform_name):
+ result = apply_transform(transform_name, 0.0)
+ self.assertAlmostEqualFloat(
+ result,
+ 0.0,
+ tolerance=self.TOL_IDENTITY_STRICT,
+ msg=f"{transform_name}(0.0) should equal 0.0",
+ )
+
+ def test_transform_asinh_symmetry(self):
+ """Test asinh transform symmetry: asinh(x) = -asinh(-x)."""
+ test_values = [1.2345, 2.0, 5.0, 0.1]
+
+ for test_val in test_values:
+ with self.subTest(input=test_val):
+ pos_result = apply_transform("asinh", test_val)
+ neg_result = apply_transform("asinh", -test_val)
+ self.assertAlmostEqualFloat(
+ pos_result,
+ -neg_result,
+ tolerance=self.TOL_IDENTITY_STRICT,
+ msg=f"asinh({test_val}) should equal -asinh({-test_val})",
+ )
+
+ def test_transform_sigmoid_extreme_behavior(self):
+ """Test sigmoid transform behavior at extreme values."""
+ # High positive values should approach 1.0
+ high_positive = apply_transform("sigmoid", 100.0)
+ self.assertTrue(high_positive > 0.99, f"sigmoid(100.0) = {high_positive} should be > 0.99")
+
+ # High negative values should approach -1.0
+ high_negative = apply_transform("sigmoid", -100.0)
+ self.assertTrue(
+ high_negative < -0.99, f"sigmoid(-100.0) = {high_negative} should be < -0.99"
+ )
+
+ # Moderate values should be strictly within bounds
+ moderate_positive = apply_transform("sigmoid", 10.0)
+ moderate_negative = apply_transform("sigmoid", -10.0)
+
+ self.assertTrue(
+ -1 < moderate_positive < 1, f"sigmoid(10.0) = {moderate_positive} should be in (-1, 1)"
+ )
+ self.assertTrue(
+ -1 < moderate_negative < 1, f"sigmoid(-10.0) = {moderate_negative} should be in (-1, 1)"
+ )
+
+ def test_transform_finite_output(self):
+ """Test that all transforms produce finite outputs for reasonable inputs."""
+ test_inputs = [-100.0, -10.0, -1.0, -0.1, 0.0, 0.1, 1.0, 10.0, 100.0]
+
+ for transform_name in self.ALL_TRANSFORMS:
+ for test_input in test_inputs:
+ with self.subTest(transform=transform_name, input=test_input):
+ result = apply_transform(transform_name, test_input)
+ self.assertFinite(result, name=f"{transform_name}({test_input})")
+
+ def test_transform_invalid_fallback(self):
+ """Test that invalid transform names fall back to tanh."""
+ invalid_result = apply_transform("invalid_transform", 1.0)
+ expected_result = math.tanh(1.0)
+
+ self.assertAlmostEqualFloat(
+ invalid_result,
+ expected_result,
+ tolerance=self.TOL_IDENTITY_RELAXED,
+ msg="Invalid transform should fall back to tanh",
+ )
+
+ def test_transform_consistency_comprehensive(self):
+ """Test comprehensive consistency across different input ranges for all transforms."""
+ transform_descriptions = [
+ ("tanh", "Hyperbolic tangent"),
+ ("softsign", "Softsign activation"),
+ ("asinh", "Inverse hyperbolic sine normalized"),
+ ("arctan", "Scaled arctangent"),
+ ("sigmoid", "Scaled sigmoid"),
+ ("clip", "Hard clipping"),
+ ]
+
+ test_ranges = [
+ (-100.0, -10.0, 10), # Large negative range
+ (-2.0, -0.1, 10), # Medium negative range
+ (-0.1, 0.1, 10), # Near-zero range
+ (0.1, 2.0, 10), # Medium positive range
+ (10.0, 100.0, 10), # Large positive range
+ ]
+
+ for transform_name, description in transform_descriptions:
+ with self.subTest(transform=transform_name, desc=description):
+ for start, end, num_points in test_ranges:
+ # Generate test points in this range
+ step = (end - start) / (num_points - 1) if num_points > 1 else 0
+ test_points = [start + i * step for i in range(num_points)]
+
+ # Apply transform to all points
+ for point in test_points:
+ result = apply_transform(transform_name, point)
+
+ # Basic validity checks
+ self.assertFinite(result, name=f"{transform_name}({point})")
+
+                    # Bounds checking: every transform (smooth or clip) maps into [-1, 1]
+                    self.assertTrue(
+                        -1.0 <= result <= 1.0,
+                        f"{transform_name}({point}) = {result} should be in [-1, 1]",
+                    )
+
+ def test_transform_derivative_approximation_smoothness(self):
+ """Test smoothness of transforms using finite difference approximation."""
+ # Test points around zero where derivatives should be well-behaved
+ test_points = [-1.0, -0.5, -0.1, 0.0, 0.1, 0.5, 1.0]
+ h = 1e-6 # Small step for finite difference
+
+ for transform_name in self.SMOOTH_TRANSFORMS: # Skip clip as it's not smooth
+ with self.subTest(transform=transform_name):
+ for x in test_points:
+ # Compute finite difference approximation of derivative
+ f_plus = apply_transform(transform_name, x + h)
+ f_minus = apply_transform(transform_name, x - h)
+ approx_derivative = (f_plus - f_minus) / (2 * h)
+
+ # Derivative should be finite and non-negative (monotonicity)
+ self.assertFinite(approx_derivative, name=f"d/dx {transform_name}({x})")
+ self.assertGreaterEqual(
+ approx_derivative,
+ -self.TOL_IDENTITY_STRICT, # Allow small numerical errors
+ f"Derivative of {transform_name} at x={x} should be non-negative",
+ )
-"""Pytest configuration for reward space analysis tests."""
+"""Pytest configuration: fixtures and RNG setup.
+
+Helper assertion wrappers live in `reward_space_analysis.tests.helpers`.
+"""
import shutil
import tempfile
import numpy as np
import pytest
+from reward_space_analysis import DEFAULT_MODEL_REWARD_PARAMETERS
+
@pytest.fixture(scope="session")
def temp_output_dir():
@pytest.fixture
def base_reward_params():
"""Default reward parameters."""
- from reward_space_analysis import DEFAULT_MODEL_REWARD_PARAMETERS
-
return DEFAULT_MODEL_REWARD_PARAMETERS.copy()
--- /dev/null
+#!/usr/bin/env python3
+"""Test constants and configuration values.
+
+This module serves as the single source of truth for all test constants,
+following the DRY principle and repository conventions.
+
+All numeric tolerances, seeds, and test parameters are defined here with
+clear documentation of their purpose and usage context.
+"""
+
+from dataclasses import dataclass
+from typing import Final
+
+
+@dataclass(frozen=True)
+class ToleranceConfig:
+ """Numerical tolerance configuration for assertions.
+
+ These tolerances are used throughout the test suite for floating-point
+ comparisons, ensuring consistent precision requirements across all tests.
+
+ Attributes:
+ IDENTITY_STRICT: Machine-precision tolerance for identity checks (1e-12)
+ IDENTITY_RELAXED: Relaxed tolerance for approximate identity (1e-09)
+ GENERIC_EQ: Generic equality tolerance for float comparisons (1e-08)
+ NUMERIC_GUARD: Minimum threshold to prevent division by zero (1e-18)
+ NEGLIGIBLE: Threshold below which values are considered negligible (1e-15)
+ RELATIVE: Relative tolerance for ratio/percentage comparisons (1e-06)
+ DISTRIB_SHAPE: Tolerance for distribution shape metrics (skew, kurtosis) (0.15)
+ """
+
+ IDENTITY_STRICT: float = 1e-12
+ IDENTITY_RELAXED: float = 1e-09
+ GENERIC_EQ: float = 1e-08
+ NUMERIC_GUARD: float = 1e-18
+ NEGLIGIBLE: float = 1e-15
+ RELATIVE: float = 1e-06
+ DISTRIB_SHAPE: float = 0.15
+
+
+@dataclass(frozen=True)
+class ContinuityConfig:
+ """Continuity and smoothness testing configuration.
+
+ Epsilon values for testing continuity at boundaries, particularly for
+ plateau and attenuation functions.
+
+ Attributes:
+ EPS_SMALL: Small epsilon for tight continuity checks (1e-08)
+ EPS_LARGE: Larger epsilon for coarser continuity tests (5e-05)
+ """
+
+ EPS_SMALL: float = 1e-08
+ EPS_LARGE: float = 5e-05
+
+
+@dataclass(frozen=True)
+class ExitFactorConfig:
+ """Exit factor scaling and validation configuration.
+
+ Configuration for exit factor behavior validation, including scaling
+ ratio bounds and power mode constraints.
+
+ Attributes:
+ SCALING_RATIO_MIN: Minimum expected scaling ratio for continuity (1.5)
+ SCALING_RATIO_MAX: Maximum expected scaling ratio for continuity (3.5)
+ MIN_POWER_TAU: Minimum valid tau value for power mode (1e-15)
+ """
+
+ SCALING_RATIO_MIN: float = 1.5
+ SCALING_RATIO_MAX: float = 3.5
+ MIN_POWER_TAU: float = 1e-15
+
+
+@dataclass(frozen=True)
+class PBRSConfig:
+ """Potential-Based Reward Shaping (PBRS) configuration.
+
+ Thresholds and bounds for PBRS invariance validation and testing.
+
+ Attributes:
+ TERMINAL_TOL: Terminal potential must be within this tolerance of zero (1e-09)
+ MAX_ABS_SHAPING: Maximum absolute shaping value for bounded checks (10.0)
+ """
+
+ TERMINAL_TOL: float = 1e-09
+ MAX_ABS_SHAPING: float = 10.0
+
+
+@dataclass(frozen=True)
+class StatisticalConfig:
+ """Statistical testing configuration.
+
+ Configuration for statistical hypothesis testing, bootstrap methods,
+ and distribution comparisons.
+
+ Attributes:
+ BH_FP_RATE_THRESHOLD: Benjamini-Hochberg false positive rate threshold (0.15)
+ BOOTSTRAP_DEFAULT_ITERATIONS: Default bootstrap resampling count (100)
+ """
+
+ BH_FP_RATE_THRESHOLD: float = 0.15
+ BOOTSTRAP_DEFAULT_ITERATIONS: int = 100
+
+
+@dataclass(frozen=True)
+class TestSeeds:
+ """Random seed values for reproducible testing.
+
+ Each seed serves a specific purpose to ensure test reproducibility while
+ maintaining statistical independence across different test scenarios.
+
+ Seed Strategy:
+ - BASE: Default seed for general-purpose tests, ensuring stable baseline
+ - REPRODUCIBILITY: Used exclusively for reproducibility validation tests
+ - BOOTSTRAP: Prime number for bootstrap confidence interval tests to ensure
+ independence from other random sequences
+ - HETEROSCEDASTICITY: Dedicated seed for variance structure validation tests
+
+ Attributes:
+ BASE: Default seed for standard tests (42)
+ REPRODUCIBILITY: Seed for reproducibility validation (12345)
+ BOOTSTRAP: Seed for bootstrap CI tests (999)
+ HETEROSCEDASTICITY: Seed for heteroscedasticity tests (7890)
+ """
+
+ BASE: int = 42
+ REPRODUCIBILITY: int = 12345
+ BOOTSTRAP: int = 999
+ HETEROSCEDASTICITY: int = 7890
+
+
+@dataclass(frozen=True)
+class TestParameters:
+ """Standard test parameter values.
+
+ Default parameter values used consistently across the test suite for
+ reward calculation and simulation.
+
+ Attributes:
+ BASE_FACTOR: Default base factor for reward scaling (90.0)
+ PROFIT_TARGET: Target profit threshold (0.06)
+ RISK_REWARD_RATIO: Standard risk/reward ratio (1.0)
+ RISK_REWARD_RATIO_HIGH: High risk/reward ratio for stress tests (2.0)
+ PNL_STD: Standard deviation for PnL generation (0.02)
+ PNL_DUR_VOL_SCALE: Duration-based volatility scaling factor (0.001)
+ EPS_BASE: Base epsilon for near-zero checks (1e-10)
+ """
+
+ BASE_FACTOR: float = 90.0
+ PROFIT_TARGET: float = 0.06
+ RISK_REWARD_RATIO: float = 1.0
+ RISK_REWARD_RATIO_HIGH: float = 2.0
+ PNL_STD: float = 0.02
+ PNL_DUR_VOL_SCALE: float = 0.001
+ EPS_BASE: float = 1e-10
+
+
+# Global singleton instances for easy import
+TOLERANCE: Final[ToleranceConfig] = ToleranceConfig()
+CONTINUITY: Final[ContinuityConfig] = ContinuityConfig()
+EXIT_FACTOR: Final[ExitFactorConfig] = ExitFactorConfig()
+PBRS: Final[PBRSConfig] = PBRSConfig()
+STATISTICAL: Final[StatisticalConfig] = StatisticalConfig()
+SEEDS: Final[TestSeeds] = TestSeeds()
+PARAMS: Final[TestParameters] = TestParameters()
+
+
+__all__ = [
+ "ToleranceConfig",
+ "ContinuityConfig",
+ "ExitFactorConfig",
+ "PBRSConfig",
+ "StatisticalConfig",
+ "TestSeeds",
+ "TestParameters",
+ "TOLERANCE",
+ "CONTINUITY",
+ "EXIT_FACTOR",
+ "PBRS",
+ "STATISTICAL",
+ "SEEDS",
+ "PARAMS",
+]
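The frozen-dataclass-plus-`Final` singleton pattern used above makes these constants read-only at runtime. A minimal standalone sketch with a hypothetical `DemoTolerances` class (not part of this module) shows the mutation guard:

```python
from dataclasses import FrozenInstanceError, dataclass
from typing import Final


@dataclass(frozen=True)
class DemoTolerances:
    # Hypothetical stand-in for ToleranceConfig above.
    IDENTITY_STRICT: float = 1e-12


TOL: Final[DemoTolerances] = DemoTolerances()

try:
    TOL.IDENTITY_STRICT = 0.0  # frozen dataclasses reject attribute assignment
    mutated = True
except FrozenInstanceError:
    mutated = False

print(mutated)  # False
```

`Final` flags reassignment of the module-level name to type checkers, while `frozen=True` enforces immutability of the instance itself, so a test cannot silently drift the shared tolerances.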
--- /dev/null
+"""Helpers package for reward_space_analysis tests.
+
+Exposes shared assertion utilities, configuration dataclasses, and warning
+capture helpers, centralizing test infrastructure and reducing duplication.
+"""
+
+from .assertions import (
+ assert_adjustment_reason_contains,
+ # Core numeric/trend assertions
+ assert_almost_equal_list,
+ assert_component_sum_integrity,
+ assert_exit_factor_attenuation_modes,
+ # Exit factor invariance helpers
+ assert_exit_factor_invariant_suite,
+ assert_exit_factor_kernel_fallback,
+ assert_exit_factor_plateau_behavior,
+ assert_exit_mode_mathematical_validation,
+ assert_finite,
+ assert_hold_penalty_threshold_behavior,
+ assert_monotonic_nonincreasing,
+ assert_monotonic_nonnegative,
+ assert_multi_parameter_sensitivity,
+ assert_non_canonical_shaping_exceeds,
+ assert_parameter_sensitivity_behavior,
+ assert_pbrs_canonical_sum_within_tolerance,
+ # PBRS invariance/report helpers
+ assert_pbrs_invariance_report_classification,
+ assert_progressive_scaling_behavior,
+ # Relaxed validation aggregation
+ assert_relaxed_multi_reason_aggregation,
+ assert_reward_calculation_scenarios,
+ assert_single_active_component,
+ assert_single_active_component_with_additives,
+ assert_trend,
+ # Validation batch builders/executors
+ build_validation_case,
+ execute_validation_batch,
+ make_idle_penalty_test_contexts,
+ run_relaxed_validation_adjustment_cases,
+ run_strict_validation_failure_cases,
+ safe_float,
+)
+from .configs import (
+ ContextFactory,
+ ExitFactorConfig,
+ ProgressiveScalingConfig,
+ # Configuration dataclasses
+ RewardScenarioConfig,
+ SimulationConfig,
+ StatisticalTestConfig,
+ ThresholdTestConfig,
+ # Type aliases
+ ValidationCallback,
+ ValidationConfig,
+ WarningCaptureConfig,
+)
+from .warnings import (
+ assert_diagnostic_warning,
+ assert_no_warnings,
+ # Warning capture utilities
+ capture_warnings,
+ validate_warning_content,
+)
+
+__all__ = [
+ # Core numeric/trend assertions
+ "assert_monotonic_nonincreasing",
+ "assert_monotonic_nonnegative",
+ "assert_finite",
+ "assert_almost_equal_list",
+ "assert_trend",
+ "assert_component_sum_integrity",
+ "assert_progressive_scaling_behavior",
+ "assert_single_active_component",
+ "assert_single_active_component_with_additives",
+ "assert_reward_calculation_scenarios",
+ "assert_parameter_sensitivity_behavior",
+ "make_idle_penalty_test_contexts",
+ "assert_exit_factor_attenuation_modes",
+ "assert_exit_factor_plateau_behavior",
+ "assert_exit_mode_mathematical_validation",
+ "assert_multi_parameter_sensitivity",
+ "assert_hold_penalty_threshold_behavior",
+ "safe_float",
+ # Validation batch builders/executors
+ "build_validation_case",
+ "execute_validation_batch",
+ "assert_adjustment_reason_contains",
+ "run_strict_validation_failure_cases",
+ "run_relaxed_validation_adjustment_cases",
+ # Exit factor invariance helpers
+ "assert_exit_factor_invariant_suite",
+ "assert_exit_factor_kernel_fallback",
+ # Relaxed validation aggregation
+ "assert_relaxed_multi_reason_aggregation",
+ # PBRS invariance/report helpers
+ "assert_pbrs_invariance_report_classification",
+ "assert_pbrs_canonical_sum_within_tolerance",
+ "assert_non_canonical_shaping_exceeds",
+ # Configuration dataclasses
+ "RewardScenarioConfig",
+ "ValidationConfig",
+ "ThresholdTestConfig",
+ "ProgressiveScalingConfig",
+ "ExitFactorConfig",
+ "StatisticalTestConfig",
+ "SimulationConfig",
+ "WarningCaptureConfig",
+ "ValidationCallback",
+ "ContextFactory",
+ # Warning capture utilities
+ "capture_warnings",
+ "assert_diagnostic_warning",
+ "assert_no_warnings",
+ "validate_warning_content",
+]
--- /dev/null
+"""Shared assertion helpers for reward_space_analysis test suite.
+
+These functions centralize common numeric and behavioral checks to enforce
+single invariant ownership and reduce duplication across taxonomy modules.
+"""
+
+from typing import Any, Dict, List, Sequence, Tuple
+
+from reward_space_analysis import (
+ _get_exit_factor,
+ _get_pnl_factor,
+ calculate_reward,
+)
+
+
+def safe_float(value: Any, default: float = 0.0) -> float:
+    """Coerce a value to float safely for test parameter handling.
+
+    Rules:
+    - None and '' return the default
+    - Numeric types pass through unchanged
+    - Numeric string forms ('3', '3.5') are parsed; 'nan'/'inf' return the default
+    - Non-numeric strings return the default
+
+    This keeps float(...) exceptions from leaking into tests that target
+    relaxed validation behaviors.
+    """
+ try:
+ if value is None or value == "":
+ return default
+ coerced = float(value)
+        if coerced != coerced or coerced in (float("inf"), float("-inf")):  # NaN or ±inf
+ return default
+ return coerced
+ except (TypeError, ValueError):
+ return default
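The coercion rules above can equivalently be written with `math.isfinite`. This standalone sketch (a local re-implementation for illustration, not the helper itself) demonstrates the expected behavior:

```python
import math


def coerce_float(value, default=0.0):
    # Same contract as safe_float above: None/'' -> default; NaN/inf -> default;
    # non-numeric input -> default; otherwise the parsed float.
    try:
        if value is None or value == "":
            return default
        coerced = float(value)
        return coerced if math.isfinite(coerced) else default
    except (TypeError, ValueError):
        return default


print(coerce_float("3.5"))       # 3.5
print(coerce_float("nan"))       # 0.0
print(coerce_float("abc", 1.0))  # 1.0
```

`math.isfinite` rejects both NaN and ±inf in one call, which is the same effect as the explicit self-inequality and membership checks in `safe_float`.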
+
+
+def assert_monotonic_nonincreasing(
+ test_case,
+ values: Sequence[float],
+ tolerance: float = 0.0,
+ msg: str = "Values should be non-increasing",
+):
+ """Assert that a sequence is monotonically non-increasing.
+
+ Validates that each element in the sequence is less than or equal to the
+ previous element, with an optional tolerance for floating-point comparisons.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ values: Sequence of numeric values to validate
+ tolerance: Numerical tolerance for comparisons (default: 0.0)
+ msg: Custom error message for assertion failures
+
+ Example:
+ assert_monotonic_nonincreasing(self, [5.0, 4.0, 3.0, 3.0, 2.0])
+ # Validates: 4.0 <= 5.0, 3.0 <= 4.0, 3.0 <= 3.0, 2.0 <= 3.0
+ """
+ for i in range(1, len(values)):
+ test_case.assertLessEqual(values[i], values[i - 1] + tolerance, msg)
+
+
+def assert_monotonic_nonnegative(
+ test_case,
+ values: Sequence[float],
+ tolerance: float = 0.0,
+ msg: str = "Values should be non-negative",
+):
+    """Assert that all values in a sequence are non-negative.
+
+    Validates that each element is greater than or equal to zero, with an
+    optional tolerance for floating-point comparisons. Note: despite the
+    "monotonic" prefix in its name, this helper checks only the sign of each
+    element, not ordering.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ values: Sequence of numeric values to validate
+ tolerance: Numerical tolerance for comparisons (default: 0.0)
+ msg: Custom error message for assertion failures
+
+ Example:
+ assert_monotonic_nonnegative(self, [0.0, 1.5, 2.3, 0.1])
+ """
+ for v in values:
+ test_case.assertGreaterEqual(v + tolerance, 0.0, msg)
+
+
+def assert_finite(test_case, values: Sequence[float], msg: str = "Values must be finite"):
+ """Assert that all values are finite (not NaN or infinity).
+
+ Validates that no element in the sequence is NaN, positive infinity, or
+ negative infinity. Essential for numerical stability checks.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ values: Sequence of numeric values to validate
+ msg: Custom error message for assertion failures
+
+ Example:
+ assert_finite(self, [1.0, 2.5, -3.7, 0.0]) # Passes
+ assert_finite(self, [1.0, float('nan')]) # Fails
+ assert_finite(self, [1.0, float('inf')]) # Fails
+ """
+ for v in values:
+        # v == v is False only for NaN; the membership test rejects ±inf.
+        test_case.assertTrue((v == v) and (v not in (float("inf"), float("-inf"))), msg)
+
+
+def assert_almost_equal_list(
+ test_case,
+ values: Sequence[float],
+ target: float,
+ delta: float,
+ msg: str = "Values should be near target",
+):
+ """Assert that all values in a sequence are approximately equal to a target.
+
+ Validates that each element is within a specified tolerance (delta) of the
+ target value. Useful for checking plateau behavior or constant outputs.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ values: Sequence of numeric values to validate
+ target: Target value for comparison
+ delta: Maximum allowed deviation from target
+ msg: Custom error message for assertion failures
+
+ Example:
+ assert_almost_equal_list(self, [1.0, 1.01, 0.99], 1.0, delta=0.02)
+ """
+ for v in values:
+ test_case.assertAlmostEqual(v, target, delta=delta, msg=msg)
+
+
+def assert_trend(
+ test_case,
+ values: Sequence[float],
+ trend: str,
+ tolerance: float,
+ msg_prefix: str = "Trend validation failed",
+):
+ """Assert that a sequence follows a specific trend pattern.
+
+ Generic trend validation supporting increasing, decreasing, or constant
+ patterns. More flexible than specialized monotonic assertions.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ values: Sequence of numeric values to validate
+ trend: Expected trend: "increasing", "decreasing", or "constant"
+ tolerance: Numerical tolerance for comparisons
+ msg_prefix: Prefix for error messages
+
+ Raises:
+ ValueError: If trend parameter is not one of the supported values
+
+ Example:
+ assert_trend(self, [1.0, 2.0, 3.0], "increasing", 1e-09)
+ assert_trend(self, [5.0, 5.0, 5.0], "constant", 1e-09)
+ """
+ if trend not in {"increasing", "decreasing", "constant"}:
+ raise ValueError(f"Unsupported trend '{trend}'")
+ if trend == "increasing":
+ for i in range(1, len(values)):
+ test_case.assertGreaterEqual(
+ values[i], values[i - 1] - tolerance, f"{msg_prefix}: expected increasing"
+ )
+ elif trend == "decreasing":
+ for i in range(1, len(values)):
+ test_case.assertLessEqual(
+ values[i], values[i - 1] + tolerance, f"{msg_prefix}: expected decreasing"
+ )
+ else: # constant
+ base = values[0]
+ for v in values[1:]:
+ test_case.assertAlmostEqual(
+ v, base, delta=tolerance, msg=f"{msg_prefix}: expected constant"
+ )
+
+
+def assert_component_sum_integrity(
+ test_case,
+ breakdown,
+ tolerance_relaxed,
+ exclude_components=None,
+ component_description="components",
+):
+ """Assert that reward component sum matches total within tolerance.
+
+ Validates the mathematical integrity of reward component decomposition by
+ ensuring the sum of individual components equals the reported total.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ breakdown: Reward breakdown object with component attributes
+ tolerance_relaxed: Numerical tolerance for sum validation
+ exclude_components: List of component names to exclude from sum (default: None)
+ component_description: Human-readable description for error messages
+
+ Components checked (if not excluded):
+ - hold_penalty
+ - idle_penalty
+ - exit_component
+ - invalid_penalty
+ - reward_shaping
+ - entry_additive
+ - exit_additive
+
+ Example:
+ assert_component_sum_integrity(
+ self, breakdown, 1e-09,
+ exclude_components=["reward_shaping"],
+ component_description="core components"
+ )
+ """
+ if exclude_components is None:
+ exclude_components = []
+    component_names = (
+        "hold_penalty",
+        "idle_penalty",
+        "exit_component",
+        "invalid_penalty",
+        "reward_shaping",
+        "entry_additive",
+        "exit_additive",
+    )
+    component_sum = sum(
+        getattr(breakdown, name) for name in component_names if name not in exclude_components
+    )
+ test_case.assertAlmostEqual(
+ breakdown.total,
+ component_sum,
+ delta=tolerance_relaxed,
+ msg=f"Total should equal sum of {component_description}",
+ )
+
+
+def assert_progressive_scaling_behavior(
+ test_case,
+ penalties_list: Sequence[float],
+ durations: Sequence[int],
+ penalty_type: str = "penalty",
+):
+ """Validate that penalties scale progressively with increasing durations.
+
+ Ensures penalties become more severe (more negative) as duration increases,
+ which is a key invariant for hold and idle penalty calculations.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ penalties_list: Sequence of penalty values (typically negative)
+ durations: Corresponding sequence of duration values
+ penalty_type: Type of penalty for error messages (default: "penalty")
+
+ Example:
+ durations = [10, 50, 100, 200]
+ penalties = [-5.0, -10.0, -15.0, -20.0]
+ assert_progressive_scaling_behavior(self, penalties, durations, "hold_penalty")
+ """
+ for i in range(1, len(penalties_list)):
+        test_case.assertLessEqual(
+            penalties_list[i],
+            penalties_list[i - 1],
+            f"{penalty_type} should become more negative as duration grows: "
+            f"expected {penalties_list[i]} <= {penalties_list[i - 1]} "
+            f"(duration {durations[i]} vs {durations[i - 1]})",
+        )
+
+
+def assert_single_active_component(
+ test_case, breakdown, active_name: str, tolerance: float, inactive_core: Sequence[str]
+):
+ """Assert that exactly one reward component is active in a breakdown.
+
+ Validates reward component isolation by ensuring the active component equals
+ the total reward while all other components are negligible (near zero).
+
+ Args:
+ test_case: Test case instance with assertion methods
+ breakdown: Reward breakdown object with component attributes
+ active_name: Name of the component expected to be active
+ tolerance: Numerical tolerance for near-zero checks
+ inactive_core: List of core component names to check
+
+ Example:
+ assert_single_active_component(
+ self, breakdown, "exit_component", 1e-09,
+ ["hold_penalty", "idle_penalty", "invalid_penalty"]
+ )
+ """
+ for name in inactive_core:
+ if name == active_name:
+ test_case.assertAlmostEqual(
+ getattr(breakdown, name),
+ breakdown.total,
+ delta=tolerance,
+ msg=f"Active component {name} should equal total",
+ )
+ else:
+ test_case.assertAlmostEqual(
+ getattr(breakdown, name),
+ 0.0,
+ delta=tolerance,
+ msg=f"Inactive component {name} should be near zero",
+ )
+
+
+def assert_single_active_component_with_additives(
+ test_case,
+ breakdown,
+ active_name: str,
+ tolerance: float,
+ inactive_core: Sequence[str],
+ enforce_additives_zero: bool = True,
+):
+ """Assert single active core component with optional additive checks.
+
+ Extended version of assert_single_active_component that additionally validates
+ that additive components (reward_shaping, entry_additive, exit_additive) are
+ near zero when they should be inactive.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ breakdown: Reward breakdown object with component attributes
+ active_name: Name of the component expected to be active
+ tolerance: Numerical tolerance for near-zero checks
+ inactive_core: List of core component names to check
+ enforce_additives_zero: If True, also check additives are near zero
+
+ Example:
+ assert_single_active_component_with_additives(
+ self, breakdown, "exit_component", 1e-09,
+ ["hold_penalty", "idle_penalty"],
+ enforce_additives_zero=True
+ )
+ """
+ # Delegate core component assertions
+ assert_single_active_component(test_case, breakdown, active_name, tolerance, inactive_core)
+ if enforce_additives_zero:
+ for attr in ("reward_shaping", "entry_additive", "exit_additive"):
+ test_case.assertAlmostEqual(
+ getattr(breakdown, attr),
+ 0.0,
+ delta=tolerance,
+                msg=f"{attr} should be near zero in an inactive-component decomposition scenario",
+ )
+
+
+def assert_reward_calculation_scenarios(
+ test_case,
+ scenarios: List[Tuple[Any, Dict[str, Any], str]],
+ base_factor: float,
+ profit_target: float,
+ risk_reward_ratio: float,
+ validation_fn,
+ tolerance_relaxed: float,
+):
+ """Execute and validate multiple reward calculation scenarios.
+
+ Runs a batch of reward calculations with different contexts and parameters,
+ applying a custom validation function to each result. Reduces test boilerplate
+ for scenario-based testing.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ scenarios: List of (context, params, description) tuples defining test cases
+ base_factor: Base scaling factor for reward calculations
+ profit_target: Target profit threshold
+ risk_reward_ratio: Risk/reward ratio for position sizing
+ validation_fn: Callback function (test_case, breakdown, description, tolerance) -> None
+ tolerance_relaxed: Numerical tolerance passed to validation function
+
+ Example:
+ scenarios = [
+ (idle_context, {}, "idle scenario"),
+ (exit_context, {"exit_additive": 5.0}, "profitable exit"),
+ ]
+ assert_reward_calculation_scenarios(
+ self, scenarios, 90.0, 0.06, 1.0, my_validation_fn, 1e-09
+ )
+ """
+ for context, params, description in scenarios:
+ with test_case.subTest(scenario=description):
+ breakdown = calculate_reward(
+ context,
+ params,
+ base_factor=base_factor,
+ profit_target=profit_target,
+ risk_reward_ratio=risk_reward_ratio,
+ short_allowed=True,
+ action_masking=True,
+ )
+ validation_fn(test_case, breakdown, description, tolerance_relaxed)
+
+
+def assert_parameter_sensitivity_behavior(
+ test_case,
+ parameter_variations: List[Dict[str, Any]],
+ base_context,
+ base_params: Dict[str, Any],
+ base_factor: float,
+ profit_target: float,
+ risk_reward_ratio: float,
+ component_name: str,
+ expected_trend: str,
+ tolerance_relaxed: float,
+):
+ """Validate that a component responds predictably to parameter changes.
+
+ Tests component sensitivity by applying parameter variations and verifying
+ the component value follows the expected trend (increasing, decreasing, or constant).
+
+ Args:
+ test_case: Test case instance with assertion methods
+ parameter_variations: List of parameter dicts to merge with base_params
+ base_context: Context object for reward calculation
+ base_params: Base parameter dictionary
+ base_factor: Base scaling factor
+ profit_target: Target profit threshold
+ risk_reward_ratio: Risk/reward ratio
+ component_name: Name of component to track (e.g., "exit_component")
+ expected_trend: Expected trend: "increasing", "decreasing", or "constant"
+ tolerance_relaxed: Numerical tolerance for trend validation
+
+ Example:
+ variations = [
+ {"exit_additive": 0.0},
+ {"exit_additive": 5.0},
+ {"exit_additive": 10.0},
+ ]
+ assert_parameter_sensitivity_behavior(
+ self, variations, ctx, params, 90.0, 0.06, 1.0,
+ "exit_component", "increasing", 1e-09
+ )
+ """
+
+ results = []
+ for param_variation in parameter_variations:
+ params = base_params.copy()
+ params.update(param_variation)
+ breakdown = calculate_reward(
+ base_context,
+ params,
+ base_factor=base_factor,
+ profit_target=profit_target,
+ risk_reward_ratio=risk_reward_ratio,
+ short_allowed=True,
+ action_masking=True,
+ )
+ component_value = getattr(breakdown, component_name)
+ results.append(component_value)
+ if expected_trend == "increasing":
+ for i in range(1, len(results)):
+ test_case.assertGreaterEqual(
+ results[i],
+ results[i - 1] - tolerance_relaxed,
+                f"{component_name} should be non-decreasing across parameter variations",
+ )
+ elif expected_trend == "decreasing":
+ for i in range(1, len(results)):
+ test_case.assertLessEqual(
+ results[i],
+ results[i - 1] + tolerance_relaxed,
+                f"{component_name} should be non-increasing across parameter variations",
+ )
+ elif expected_trend == "constant":
+ baseline = results[0]
+ for result in results[1:]:
+ test_case.assertAlmostEqual(
+ result,
+ baseline,
+ delta=tolerance_relaxed,
+ msg=f"{component_name} should remain constant with parameter variations",
+ )
+
+
+def make_idle_penalty_test_contexts(
+ context_factory_fn,
+ idle_duration_scenarios: Sequence[int],
+ base_context_kwargs: Dict[str, Any] | None = None,
+):
+ """Generate contexts for idle penalty testing with varying durations.
+
+ Factory function that creates a list of (context, description) tuples for
+ idle penalty scenario testing, reducing boilerplate in test setup.
+
+ Args:
+ context_factory_fn: Factory function that creates context objects
+ idle_duration_scenarios: Sequence of idle duration values to test
+ base_context_kwargs: Base kwargs merged with idle_duration for each scenario
+
+ Returns:
+ List of (context, description) tuples
+
+ Example:
+ contexts = make_idle_penalty_test_contexts(
+ make_context, [0, 50, 100, 200],
+ base_context_kwargs={"context_type": "idle"}
+ )
+ for context, desc in contexts:
+ breakdown = calculate_reward(context, ...)
+ """
+ if base_context_kwargs is None:
+ base_context_kwargs = {}
+ contexts = []
+ for idle_duration in idle_duration_scenarios:
+ kwargs = base_context_kwargs.copy()
+ kwargs["idle_duration"] = idle_duration
+ context = context_factory_fn(**kwargs)
+ description = f"idle_duration={idle_duration}"
+ contexts.append((context, description))
+ return contexts
+
+
+def assert_exit_factor_attenuation_modes(
+ test_case,
+ base_factor: float,
+ pnl: float,
+ pnl_factor: float,
+ attenuation_modes: Sequence[str],
+ base_params_fn,
+ tolerance_relaxed: float,
+):
+ """Validate exit factor attenuation across multiple modes.
+
+ Tests that exit factor decreases monotonically (attenuates) over duration
+ for various attenuation modes: linear, power, half_life, sqrt, and plateau_linear.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ base_factor: Base scaling factor
+ pnl: Profit/loss value
+ pnl_factor: PnL amplification factor
+ attenuation_modes: List of mode names to test
+ base_params_fn: Factory function for creating parameter dicts
+ tolerance_relaxed: Numerical tolerance for monotonicity checks
+
+ Supported modes:
+ - "plateau_linear": Linear attenuation after grace period
+ - "linear": Linear attenuation with configurable slope
+ - "power": Power-law attenuation with tau parameter
+ - "half_life": Exponential decay with half-life parameter
+ - "sqrt": Square root attenuation (default fallback)
+
+ Example:
+ assert_exit_factor_attenuation_modes(
+ self, 90.0, 0.08, 1.5,
+ ["linear", "power", "half_life"],
+ make_params, 1e-09
+ )
+ """
+ import numpy as np
+
+ for mode in attenuation_modes:
+ with test_case.subTest(mode=mode):
+ if mode == "plateau_linear":
+ mode_params = base_params_fn(
+ exit_attenuation_mode="linear",
+ exit_plateau=True,
+ exit_plateau_grace=0.2,
+ exit_linear_slope=1.0,
+ )
+ elif mode == "linear":
+ mode_params = base_params_fn(exit_attenuation_mode="linear", exit_linear_slope=1.2)
+ elif mode == "power":
+ mode_params = base_params_fn(exit_attenuation_mode="power", exit_power_tau=0.5)
+ elif mode == "half_life":
+ mode_params = base_params_fn(exit_attenuation_mode="half_life", exit_half_life=0.7)
+ else:
+ mode_params = base_params_fn(exit_attenuation_mode="sqrt")
+ ratios = np.linspace(0, 2, 15)
+ values = [
+ _get_exit_factor(base_factor, pnl, pnl_factor, r, mode_params) for r in ratios
+ ]
+ if mode == "plateau_linear":
+ grace = float(mode_params["exit_plateau_grace"])
+ filtered = [
+ (r, v) for r, v in zip(ratios, values) if r >= grace - tolerance_relaxed
+ ]
+ values_to_check = [v for _, v in filtered]
+ else:
+ values_to_check = values
+ for earlier, later in zip(values_to_check, values_to_check[1:]):
+ test_case.assertLessEqual(
+ later, earlier + tolerance_relaxed, f"Non-monotonic attenuation in mode={mode}"
+ )
+
+
+def assert_exit_mode_mathematical_validation(
+ test_case,
+ context,
+ params: Dict[str, Any],
+ base_factor: float,
+ profit_target: float,
+ risk_reward_ratio: float,
+ tolerance_relaxed: float,
+):
+ """Validate mathematical correctness of exit factor calculation modes.
+
+ Performs deep mathematical validation of exit factor attenuation modes,
+ including verification of half-life exponential decay formula and
+ ensuring different modes produce distinct results.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ context: Context object with trade_duration and pnl attributes
+ params: Parameter dictionary (will be modified in-place for testing)
+ base_factor: Base scaling factor
+ profit_target: Target profit threshold
+ risk_reward_ratio: Risk/reward ratio
+ tolerance_relaxed: Numerical tolerance for formula validation
+
+ Tests performed:
+ 1. Power mode produces positive exit component
+ 2. Half-life mode matches theoretical exponential decay formula
+ 3. Linear mode produces positive exit component
+ 4. Different modes produce distinguishable results
+
+ Example:
+ assert_exit_mode_mathematical_validation(
+ self, context, params, 90.0, 0.06, 1.0, 1e-09
+ )
+ """
+    duration_ratio = context.trade_duration / 100  # normalized against an implicit max trade duration of 100
+ params["exit_attenuation_mode"] = "power"
+ params["exit_power_tau"] = 0.5
+ params["exit_plateau"] = False
+ reward_power = calculate_reward(
+ context,
+ params,
+ base_factor=base_factor,
+ profit_target=profit_target,
+ risk_reward_ratio=risk_reward_ratio,
+ short_allowed=True,
+ action_masking=True,
+ )
+ test_case.assertGreater(reward_power.exit_component, 0)
+ params["exit_attenuation_mode"] = "half_life"
+ params["exit_half_life"] = 0.5
+ reward_half_life = calculate_reward(
+ context,
+ params,
+ base_factor=base_factor,
+ profit_target=profit_target,
+ risk_reward_ratio=risk_reward_ratio,
+ short_allowed=True,
+ action_masking=True,
+ )
+ pnl_factor_hl = _get_pnl_factor(params, context, profit_target, risk_reward_ratio)
+ observed_exit_factor = _get_exit_factor(
+ base_factor, context.pnl, pnl_factor_hl, duration_ratio, params
+ )
+ eps_base = 1e-8
+ observed_half_life_factor = observed_exit_factor / (base_factor * max(pnl_factor_hl, eps_base))
+ expected_half_life_factor = 2 ** (-duration_ratio / params["exit_half_life"])
+ test_case.assertAlmostEqual(
+ observed_half_life_factor,
+ expected_half_life_factor,
+ delta=tolerance_relaxed,
+ msg="Half-life attenuation mismatch: observed vs expected",
+ )
+ params["exit_attenuation_mode"] = "linear"
+ params["exit_linear_slope"] = 1.0
+ reward_linear = calculate_reward(
+ context,
+ params,
+ base_factor=base_factor,
+ profit_target=profit_target,
+ risk_reward_ratio=risk_reward_ratio,
+ short_allowed=True,
+ action_masking=True,
+ )
+ rewards = [
+ reward_power.exit_component,
+ reward_half_life.exit_component,
+ reward_linear.exit_component,
+ ]
+    test_case.assertTrue(all(r > 0 for r in rewards))
+    unique_rewards = {f"{r:.6f}" for r in rewards}
+    test_case.assertGreater(len(unique_rewards), 1)
+
+
+def assert_multi_parameter_sensitivity(
+ test_case,
+ parameter_test_cases: List[Tuple[float, float, str]],
+ context_factory_fn,
+ base_params: Dict[str, Any],
+ base_factor: float,
+ tolerance_relaxed: float,
+):
+ """Validate reward behavior across multiple parameter combinations.
+
+ Tests reward calculation with various profit_target and risk_reward_ratio
+ combinations, ensuring consistent behavior including edge cases like
+ zero profit_target.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ parameter_test_cases: List of (profit_target, risk_reward_ratio, description) tuples
+ context_factory_fn: Factory function for creating context objects
+ base_params: Base parameter dictionary
+ base_factor: Base scaling factor
+ tolerance_relaxed: Numerical tolerance for assertions
+
+ Example:
+ test_cases = [
+ (0.0, 1.0, "zero profit target"),
+ (0.06, 1.0, "standard parameters"),
+ (0.06, 2.0, "high risk/reward ratio"),
+ ]
+ assert_multi_parameter_sensitivity(
+ self, test_cases, make_context, params, 90.0, 1e-09
+ )
+ """
+ for profit_target, risk_reward_ratio, description in parameter_test_cases:
+ with test_case.subTest(
+ profit_target=profit_target, risk_reward_ratio=risk_reward_ratio, desc=description
+ ):
+ idle_context = context_factory_fn(context_type="idle")
+ breakdown = calculate_reward(
+ idle_context,
+ base_params,
+ base_factor=base_factor,
+ profit_target=profit_target,
+ risk_reward_ratio=risk_reward_ratio,
+ short_allowed=True,
+ action_masking=True,
+ )
+ if profit_target == 0.0:
+ test_case.assertEqual(breakdown.idle_penalty, 0.0)
+ test_case.assertEqual(breakdown.total, 0.0)
+ else:
+ test_case.assertLess(breakdown.idle_penalty, 0.0)
+ if profit_target > 0:
+ exit_context = context_factory_fn(context_type="exit", profit_target=profit_target)
+ exit_breakdown = calculate_reward(
+ exit_context,
+ base_params,
+ base_factor=base_factor,
+ profit_target=profit_target,
+ risk_reward_ratio=risk_reward_ratio,
+ short_allowed=True,
+ action_masking=True,
+ )
+ test_case.assertNotEqual(exit_breakdown.exit_component, 0.0)
+
+
+def assert_hold_penalty_threshold_behavior(
+ test_case,
+ duration_test_cases: Sequence[Tuple[int, str]],
+ max_duration: int,
+ context_factory_fn,
+ params: Dict[str, Any],
+ base_factor: float,
+ profit_target: float,
+ risk_reward_ratio: float,
+ tolerance_relaxed: float,
+):
+ """Validate hold penalty activation at max_duration threshold.
+
+    Tests that the hold penalty is zero before max_duration, non-positive
+    exactly at the threshold, and strictly negative beyond it. Critical for
+    verifying threshold-based penalty logic.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ duration_test_cases: List of (trade_duration, description) tuples to test
+ max_duration: Maximum duration threshold for penalty activation
+ context_factory_fn: Factory function for creating context objects
+ params: Parameter dictionary
+ base_factor: Base scaling factor
+ profit_target: Target profit threshold
+ risk_reward_ratio: Risk/reward ratio
+ tolerance_relaxed: Numerical tolerance for assertions
+
+ Example:
+ test_cases = [
+ (50, "below threshold"),
+ (100, "at threshold"),
+ (150, "above threshold"),
+ ]
+ assert_hold_penalty_threshold_behavior(
+ self, test_cases, 100, make_context, params, 90.0, 0.06, 1.0, 1e-09
+ )
+ """
+ for trade_duration, description in duration_test_cases:
+ with test_case.subTest(duration=trade_duration, desc=description):
+ context = context_factory_fn(trade_duration=trade_duration)
+ breakdown = calculate_reward(
+ context,
+ params,
+ base_factor=base_factor,
+ profit_target=profit_target,
+ risk_reward_ratio=risk_reward_ratio,
+ short_allowed=True,
+ action_masking=True,
+ )
+ duration_ratio = trade_duration / max_duration
+ if duration_ratio < 1.0:
+ test_case.assertEqual(breakdown.hold_penalty, 0.0)
+ elif duration_ratio == 1.0:
+ test_case.assertLessEqual(breakdown.hold_penalty, 0.0)
+ else:
+ test_case.assertLess(breakdown.hold_penalty, 0.0)
+
+
+# ---------------- Validation & invariance helper cases ---------------- #
+
+
+def build_validation_case(
+ param_updates: Dict[str, Any],
+ strict: bool,
+ expect_error: bool = False,
+ expected_reason_substrings: Sequence[str] | None = None,
+) -> Dict[str, Any]:
+ """Build a structured validation test case descriptor.
+
+ Creates a standardized test case dictionary for parameter validation testing,
+ supporting both strict (raise on error) and relaxed (adjust and warn) modes.
+
+ Args:
+ param_updates: Dictionary of parameter updates to apply
+ strict: If True, validation should raise on invalid params
+ expect_error: If True, expect validation to raise an exception
+ expected_reason_substrings: Substrings expected in adjustment reasons (relaxed mode)
+
+ Returns:
+ Dictionary with keys: params, strict, expect_error, expected_reason_substrings
+
+ Example:
+ case = build_validation_case(
+ {"exit_plateau_grace": -0.5},
+ strict=False,
+ expected_reason_substrings=["clamped", "exit_plateau_grace"]
+ )
+ """
+ return {
+ "params": param_updates,
+ "strict": strict,
+ "expect_error": expect_error,
+ "expected_reason_substrings": list(expected_reason_substrings or []),
+ }
+
+
+def execute_validation_batch(test_case, cases: Sequence[Dict[str, Any]], validate_fn):
+ """Execute a batch of parameter validation test cases.
+
+ Runs multiple validation scenarios in batch, handling both strict (error-raising)
+ and relaxed (adjustment-collecting) modes. Validates that adjustment reasons
+ contain expected substrings in relaxed mode.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ cases: Sequence of validation case dictionaries from build_validation_case
+ validate_fn: Validation function to test (typically validate_reward_parameters)
+
+ Example:
+ cases = [
+ build_validation_case({"exit_power_tau": -1.0}, strict=True, expect_error=True),
+ build_validation_case({"exit_power_tau": -1.0}, strict=False,
+ expected_reason_substrings=["clamped"]),
+ ]
+ execute_validation_batch(self, cases, validate_reward_parameters)
+ """
+ for idx, case in enumerate(cases):
+ with test_case.subTest(
+ case_index=idx, strict=case["strict"], expect_error=case["expect_error"]
+ ):
+ params = case["params"].copy()
+ strict_flag = case["strict"]
+ if strict_flag and case["expect_error"]:
+                with test_case.assertRaises(Exception):
+                    validate_fn(params, strict=True)
+ continue
+ result = validate_fn(params, strict=strict_flag)
+ if isinstance(result, tuple) and len(result) == 2 and isinstance(result[0], dict):
+ sanitized, adjustments = result
+ else:
+ sanitized, adjustments = result, {}
+            # Relaxed mode: each expected substring must appear in at least one
+            # adjustment reason.
+            for substr in case.get("expected_reason_substrings", []):
+                found = any(substr in adj.get("reason", "") for adj in adjustments.values())
+ test_case.assertTrue(
+ found, f"Expected substring '{substr}' in some adjustment reason"
+ )
+ # basic sanity: sanitized returns a dict
+ test_case.assertIsInstance(sanitized, dict)
+
+
+def assert_adjustment_reason_contains(
+ test_case, adjustments: Dict[str, Dict[str, Any]], key: str, expected_substrings: Sequence[str]
+):
+ """Assert adjustment reason contains all expected substrings.
+
+ Validates that all expected substrings appear in the adjustment reason
+ message for a specific parameter key, regardless of order.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ adjustments: Dictionary of adjustment information from validation
+ key: Parameter key to check in adjustments dict
+ expected_substrings: List of substrings that must appear in reason
+
+ Example:
+ adjustments = {
+ "exit_plateau_grace": {
+ "reason": "clamped to valid range [0.0, 1.0]",
+ "validation_mode": "relaxed"
+ }
+ }
+ assert_adjustment_reason_contains(
+ self, adjustments, "exit_plateau_grace", ["clamped", "valid range"]
+ )
+ """
+ test_case.assertIn(key, adjustments, f"Adjustment key '{key}' missing")
+ reason = adjustments[key].get("reason", "")
+ for sub in expected_substrings:
+ test_case.assertIn(sub, reason, f"Missing substring '{sub}' in reason for key '{key}'")
+
+
+def run_strict_validation_failure_cases(
+ test_case, failure_params_list: Sequence[Dict[str, Any]], validate_fn
+):
+ """Batch test strict validation failures.
+
+ Runs multiple parameter dictionaries through validation in strict mode,
+ asserting that each raises a ValueError. Reduces boilerplate for testing
+ multiple invalid parameter combinations.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ failure_params_list: List of parameter dicts that should fail validation
+ validate_fn: Validation function to test
+
+ Example:
+ invalid_params = [
+ {"exit_power_tau": -1.0},
+ {"exit_plateau_grace": 1.5},
+ {"exit_half_life": 0.0},
+ ]
+ run_strict_validation_failure_cases(
+ self, invalid_params, validate_reward_parameters
+ )
+ """
+ for params in failure_params_list:
+ with test_case.subTest(params=params):
+            with test_case.assertRaises(ValueError):
+                validate_fn(params, strict=True)
+
+
+def run_relaxed_validation_adjustment_cases(
+ test_case,
+ relaxed_cases: Sequence[Tuple[Dict[str, Any], Sequence[str]]],
+ validate_fn,
+):
+ """Batch test relaxed validation adjustments.
+
+ Runs multiple parameter dictionaries through validation in relaxed mode,
+ asserting that adjustment reasons contain expected substrings. Validates
+ that the system properly adjusts and reports issues rather than raising.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ relaxed_cases: List of (params, expected_reason_substrings) tuples
+ validate_fn: Validation function to test
+
+ Example:
+ relaxed_cases = [
+ ({"exit_power_tau": -1.0}, ["clamped", "tau"]),
+ ({"exit_plateau_grace": 1.5}, ["clamped", "grace"]),
+ ]
+ run_relaxed_validation_adjustment_cases(
+ self, relaxed_cases, validate_reward_parameters
+ )
+ """
+ for params, substrings in relaxed_cases:
+ with test_case.subTest(params=params):
+ sanitized, adjustments = validate_fn(params, strict=False)
+ test_case.assertIsInstance(sanitized, dict)
+ test_case.assertIsInstance(adjustments, dict)
+ # aggregate reasons
+ all_reasons = ",".join(adj.get("reason", "") for adj in adjustments.values())
+ for s in substrings:
+ test_case.assertIn(
+ s, all_reasons, f"Expected '{s}' in aggregated adjustment reasons"
+ )
+
+
+def assert_exit_factor_invariant_suite(
+ test_case, suite_cases: Sequence[Dict[str, Any]], exit_factor_fn
+):
+ """Validate exit factor invariants across multiple scenarios.
+
+ Batch validation of exit factor behavior under various conditions,
+ checking different invariants like non-negativity, safe zero handling,
+ and clamping behavior.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ suite_cases: List of scenario dicts with keys:
+ - base_factor: Base scaling factor
+ - pnl: Profit/loss value
+ - pnl_factor: PnL amplification factor
+ - duration_ratio: Duration ratio (0-2)
+ - params: Parameter dictionary
+ - expectation: Expected invariant ("non_negative", "safe_zero", "clamped")
+ - tolerance: Optional numerical tolerance
+ exit_factor_fn: Exit factor calculation function to test
+
+ Example:
+ cases = [
+ {
+ "base_factor": 90.0, "pnl": 0.08, "pnl_factor": 1.5,
+ "duration_ratio": 0.5, "params": {...},
+ "expectation": "non_negative", "tolerance": 1e-09
+ },
+ {
+ "base_factor": 90.0, "pnl": 0.0, "pnl_factor": 0.0,
+ "duration_ratio": 0.5, "params": {...},
+ "expectation": "safe_zero"
+ },
+ ]
+ assert_exit_factor_invariant_suite(self, cases, _get_exit_factor)
+ """
+ for i, case in enumerate(suite_cases):
+ with test_case.subTest(exit_case=i, expectation=case.get("expectation")):
+ f_val = exit_factor_fn(
+ case["base_factor"],
+ case["pnl"],
+ case["pnl_factor"],
+ case["duration_ratio"],
+ case["params"],
+ )
+ exp = case.get("expectation")
+ if exp == "safe_zero":
+ test_case.assertEqual(f_val, 0.0)
+ elif exp == "non_negative":
+ test_case.assertGreaterEqual(f_val, -case.get("tolerance", 0.0))
+ elif exp == "clamped":
+ test_case.assertGreaterEqual(f_val, 0.0)
+ else:
+ test_case.fail(f"Unknown expectation '{exp}' in exit factor suite case")
+
+
+def assert_exit_factor_kernel_fallback(
+ test_case,
+ exit_factor_fn,
+ base_factor: float,
+ pnl: float,
+ pnl_factor: float,
+ duration_ratio: float,
+ bad_params: Dict[str, Any],
+ reference_params: Dict[str, Any],
+):
+ """Validate exit factor fallback behavior on kernel failure.
+
+ Tests that when an attenuation kernel fails (e.g., invalid parameters),
+ the system falls back to linear mode and produces numerically equivalent
+ results. Caller must monkeypatch the kernel to trigger failure before calling.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ exit_factor_fn: Exit factor calculation function
+ base_factor: Base scaling factor
+ pnl: Profit/loss value
+ pnl_factor: PnL amplification factor
+ duration_ratio: Duration ratio
+ bad_params: Parameters that trigger kernel failure
+ reference_params: Reference linear mode parameters for comparison
+
+ Validates:
+ 1. Fallback produces non-negative result
+ 2. Fallback result matches linear reference within tight tolerance (1e-12)
+
+ Note:
+ Warning emission should be validated separately with warning context managers.
+
+ Example:
+ # After monkeypatching kernel to fail:
+ assert_exit_factor_kernel_fallback(
+ self, _get_exit_factor, 90.0, 0.08, 1.5, 0.5,
+ bad_params={"exit_attenuation_mode": "power", "exit_power_tau": -1.0},
+ reference_params={"exit_attenuation_mode": "linear"}
+ )
+ """
+
+ f_bad = exit_factor_fn(base_factor, pnl, pnl_factor, duration_ratio, bad_params)
+ f_ref = exit_factor_fn(base_factor, pnl, pnl_factor, duration_ratio, reference_params)
+ test_case.assertAlmostEqual(f_bad, f_ref, delta=1e-12)
+ test_case.assertGreaterEqual(f_bad, 0.0)
+
+
+def assert_relaxed_multi_reason_aggregation(
+ test_case,
+ validate_fn,
+ params: Dict[str, Any],
+ key_expectations: Dict[str, Sequence[str]],
+):
+ """Validate relaxed validation produces expected adjustment reasons.
+
+ Tests that relaxed validation properly aggregates and reports multiple
+ adjustment reasons for specified parameter keys, ensuring transparency
+ in parameter sanitization.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ validate_fn: Validation function to test
+ params: Parameter dictionary to validate
+ key_expectations: Mapping of param_key -> expected reason substrings
+
+ Example:
+ key_expectations = {
+ "exit_power_tau": ["clamped", "minimum"],
+ "exit_plateau_grace": ["clamped", "range"],
+ }
+ assert_relaxed_multi_reason_aggregation(
+ self, validate_reward_parameters, params, key_expectations
+ )
+ """
+ sanitized, adjustments = validate_fn(params, strict=False)
+ test_case.assertIsInstance(sanitized, dict)
+ for k, subs in key_expectations.items():
+ test_case.assertIn(k, adjustments, f"Missing adjustment for key '{k}'")
+ reason = adjustments[k].get("reason", "")
+ for sub in subs:
+ test_case.assertIn(sub, reason, f"Expected substring '{sub}' in reason for key '{k}'")
+ test_case.assertEqual(adjustments[k].get("validation_mode"), "relaxed")
+
+
+def assert_pbrs_invariance_report_classification(
+ test_case, content: str, expected_status: str, expect_additives: bool
+):
+ """Validate PBRS invariance report classification and additive reporting.
+
+ Checks that the invariance report correctly classifies PBRS behavior
+ and appropriately reports additive component involvement.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ content: Report content string to validate
+ expected_status: Expected classification: "Canonical",
+ "Canonical (with warning)", or "Non-canonical"
+ expect_additives: Whether additive components should be mentioned
+
+ Example:
+ assert_pbrs_invariance_report_classification(
+ self, report_content, "Canonical", expect_additives=False
+ )
+ assert_pbrs_invariance_report_classification(
+ self, report_content, "Non-canonical", expect_additives=True
+ )
+ """
+ test_case.assertIn(
+ expected_status, content, f"Expected invariance status '{expected_status}' not found"
+ )
+ if expect_additives:
+ test_case.assertRegex(
+ content, r"additives=\['entry', 'exit'\]|additives=\['exit', 'entry'\]"
+ )
+ else:
+ test_case.assertNotRegex(content, r"additives=\[")
+
+
+def assert_pbrs_canonical_sum_within_tolerance(test_case, total_shaping: float, tolerance: float):
+ """Validate cumulative PBRS shaping satisfies canonical bound.
+
+ For canonical PBRS, the cumulative reward shaping across a trajectory
+ must be near zero (within tolerance). This is a core PBRS invariant.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ total_shaping: Total cumulative reward shaping value
+ tolerance: Maximum allowed absolute deviation from zero
+
+ Example:
+ assert_pbrs_canonical_sum_within_tolerance(self, 5e-10, 1e-09)
+ """
+ test_case.assertLess(abs(total_shaping), tolerance)
+
+
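+# The canonical bound above relies on the PBRS telescoping identity. A minimal
+# numeric sketch follows (illustrative only; whether the analyzed trajectories
+# apply this discounting convention is an assumption here, not confirmed by the
+# suite):
+def _example_canonical_pbrs_telescoping() -> None:
+    """Illustrative sketch (not exercised by the suite) of the canonical bound.
+
+    Discounted PBRS shaping telescopes exactly:
+        sum_t gamma**t * (gamma * Phi(s_{t+1}) - Phi(s_t))
+            = gamma**T * Phi(s_T) - Phi(s_0)
+    so with zero boundary potentials the cumulative shaping is ~0, which is the
+    invariant assert_pbrs_canonical_sum_within_tolerance checks.
+    """
+    gamma = 0.95
+    potentials = [0.0, 0.4, 0.7, 0.2, 0.0]  # Phi(s_0)..Phi(s_T), zero at both ends
+    total_shaping = sum(
+        gamma**t * (gamma * potentials[t + 1] - potentials[t])
+        for t in range(len(potentials) - 1)
+    )
+    # Telescoping leaves only gamma**T * Phi(s_T) - Phi(s_0) = 0 (up to float error).
+    assert abs(total_shaping) < 1e-12
+
+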
+def assert_non_canonical_shaping_exceeds(
+ test_case, total_shaping: float, tolerance_multiple: float
+):
+ """Validate non-canonical PBRS shaping exceeds threshold.
+
+ For non-canonical PBRS (e.g., with additives), the cumulative shaping
+ should exceed a scaled tolerance threshold, indicating violation of
+ the canonical PBRS invariant.
+
+ Args:
+ test_case: Test case instance with assertion methods
+ total_shaping: Total cumulative reward shaping value
+        tolerance_multiple: Absolute threshold the shaping magnitude must exceed
+            (typically a multiple of the base tolerance)
+
+ Example:
+ # Expect shaping to exceed 10x tolerance for non-canonical case
+ assert_non_canonical_shaping_exceeds(self, 0.05, 1e-08)
+ """
+ test_case.assertGreater(abs(total_shaping), tolerance_multiple)
+
+
+def assert_exit_factor_plateau_behavior(
+ test_case,
+ exit_factor_fn,
+ base_factor: float,
+ pnl: float,
+ pnl_factor: float,
+ plateau_params: dict,
+ grace: float,
+ tolerance_strict: float,
+):
+    """Validate plateau behavior of the exit factor.
+
+    Asserts that the factor sampled before the grace boundary is greater than
+    or equal to the factor sampled after it, i.e. attenuation only begins once
+    the grace period is exceeded.
+
+    Args:
+        test_case: Test case instance with assertion methods
+        exit_factor_fn: Exit factor calculation function (_get_exit_factor)
+        base_factor: Base factor for exit calculation
+        pnl: PnL value
+        pnl_factor: PnL factor multiplier
+        plateau_params: Parameters dict with plateau configuration
+        grace: Grace period threshold (exit_plateau_grace value)
+        tolerance_strict: Tolerance for numerical comparisons
+
+    Example:
+        assert_exit_factor_plateau_behavior(
+            self, _get_exit_factor, 90.0, 0.08, 1.5,
+            plateau_params, grace=0.2, tolerance_strict=1e-12
+        )
+    """
+ # Test points: one before grace, one after grace
+ duration_ratio_pre = grace - 0.1 if grace >= 0.1 else grace * 0.5
+ duration_ratio_post = grace + 0.3
+
+ plateau_factor_pre = exit_factor_fn(
+ base_factor=base_factor,
+ pnl=pnl,
+ pnl_factor=pnl_factor,
+ duration_ratio=duration_ratio_pre,
+ params=plateau_params,
+ )
+ plateau_factor_post = exit_factor_fn(
+ base_factor=base_factor,
+ pnl=pnl,
+ pnl_factor=pnl_factor,
+ duration_ratio=duration_ratio_post,
+ params=plateau_params,
+ )
+
+ # Both factors should be positive
+ test_case.assertGreater(plateau_factor_pre, 0, "Pre-grace factor should be positive")
+ test_case.assertGreater(plateau_factor_post, 0, "Post-grace factor should be positive")
+
+ # Pre-grace factor should be >= post-grace factor (attenuation begins after grace)
+ test_case.assertGreaterEqual(
+ plateau_factor_pre,
+ plateau_factor_post - tolerance_strict,
+ "Plateau pre-grace factor should be >= post-grace factor",
+ )
--- /dev/null
+#!/usr/bin/env python3
+"""Configuration dataclasses for test helpers.
+
+This module provides strongly-typed configuration objects to simplify
+function signatures in test helpers, following the DRY principle and
+reducing parameter proliferation.
+
+Usage:
+ from tests.helpers.configs import RewardScenarioConfig
+
+ config = RewardScenarioConfig(
+ base_factor=90.0,
+ profit_target=0.06,
+ risk_reward_ratio=1.0,
+ tolerance_relaxed=1e-09
+ )
+
+ assert_reward_calculation_scenarios(
+ test_case, scenarios, config, validation_fn
+ )
+"""
+
+from dataclasses import dataclass
+from typing import Callable, Optional
+
+
+@dataclass
+class RewardScenarioConfig:
+ """Configuration for reward calculation scenario testing.
+
+ Encapsulates all parameters needed for reward calculation validation,
+ reducing function signature complexity and improving maintainability.
+
+ Attributes:
+ base_factor: Base scaling factor for reward calculations
+ profit_target: Target profit threshold
+ risk_reward_ratio: Risk/reward ratio for position sizing
+ tolerance_relaxed: Numerical tolerance for assertions
+ short_allowed: Whether short positions are permitted
+ action_masking: Whether to apply action masking
+ """
+
+ base_factor: float
+ profit_target: float
+ risk_reward_ratio: float
+ tolerance_relaxed: float
+ short_allowed: bool = True
+ action_masking: bool = True
+
+
+@dataclass
+class ValidationConfig:
+ """Configuration for validation helper functions.
+
+ Parameters controlling validation behavior, including tolerance levels
+ and component exclusion policies.
+
+ Attributes:
+ tolerance_strict: Strict numerical tolerance (typically 1e-12)
+ tolerance_relaxed: Relaxed numerical tolerance (typically 1e-09)
+ exclude_components: List of component names to exclude from validation
+ component_description: Human-readable description of validated components
+ """
+
+ tolerance_strict: float
+ tolerance_relaxed: float
+ exclude_components: Optional[list[str]] = None
+ component_description: str = "reward components"
+
+
+@dataclass
+class ThresholdTestConfig:
+ """Configuration for threshold behavior testing.
+
+ Parameters for testing threshold-based behavior, such as hold penalty
+ activation at max_duration boundaries.
+
+ Attributes:
+ max_duration: Maximum duration threshold
+ test_cases: List of (duration, description) tuples for testing
+ tolerance: Numerical tolerance for assertions
+ """
+
+ max_duration: int
+ test_cases: list[tuple[int, str]]
+ tolerance: float
+
+
+@dataclass
+class ProgressiveScalingConfig:
+ """Configuration for progressive scaling validation.
+
+ Parameters for validating that penalties or rewards scale progressively
+ (monotonically) with increasing input values.
+
+ Attributes:
+ input_values: Sequence of input values to test (e.g., durations)
+ expected_direction: "increasing" or "decreasing"
+ tolerance: Numerical tolerance for monotonicity checks
+ description: Human-readable description of what's being scaled
+ """
+
+ input_values: list[float]
+ expected_direction: str # "increasing" or "decreasing"
+ tolerance: float
+ description: str
+
+
+@dataclass
+class ExitFactorConfig:
+ """Configuration for exit factor validation.
+
+ Parameters specific to exit factor calculations, including attenuation
+ mode and plateau behavior.
+
+ Attributes:
+ base_factor: Base scaling factor
+ pnl: Profit/loss value
+ pnl_factor: PnL amplification factor
+ duration_ratio: Ratio of current to maximum duration
+ attenuation_mode: Mode of attenuation ("linear", "power", etc.)
+ plateau_enabled: Whether plateau behavior is active
+ plateau_grace: Grace period before attenuation begins
+ tolerance: Numerical tolerance for assertions
+ """
+
+ base_factor: float
+ pnl: float
+ pnl_factor: float
+ duration_ratio: float
+ attenuation_mode: str
+ plateau_enabled: bool = False
+ plateau_grace: float = 0.0
+ tolerance: float = 1e-09
+
+
+@dataclass
+class StatisticalTestConfig:
+ """Configuration for statistical hypothesis testing.
+
+ Parameters for statistical validation, including bootstrap settings
+ and hypothesis test configuration.
+
+ Attributes:
+ n_bootstrap: Number of bootstrap resamples
+ confidence_level: Confidence level for intervals (0-1)
+ seed: Random seed for reproducibility
+ adjust_method: Multiple testing correction method
+ alpha: Significance level
+ """
+
+ n_bootstrap: int = 100
+ confidence_level: float = 0.95
+ seed: int = 42
+ adjust_method: Optional[str] = None
+ alpha: float = 0.05
+
+
+@dataclass
+class SimulationConfig:
+ """Configuration for reward simulation.
+
+ Parameters controlling simulation behavior for generating synthetic
+ test datasets.
+
+ Attributes:
+ num_samples: Number of samples to generate
+ seed: Random seed for reproducibility
+ max_duration_ratio: Maximum duration ratio for trades
+ trading_mode: Trading mode ("margin", "spot", etc.)
+ pnl_base_std: Base standard deviation for PnL generation
+ pnl_duration_vol_scale: Volatility scaling factor for duration
+ """
+
+ num_samples: int
+ seed: int
+ max_duration_ratio: float = 2.0
+ trading_mode: str = "margin"
+ pnl_base_std: float = 0.02
+ pnl_duration_vol_scale: float = 0.001
+
+
+@dataclass
+class WarningCaptureConfig:
+ """Configuration for warning capture helpers.
+
+ Parameters controlling warning capture behavior in tests.
+
+ Attributes:
+ warning_category: Expected warning category class
+ expected_substrings: List of substrings expected in warning messages
+ strict_mode: If True, all expected substrings must be present
+ """
+
+ warning_category: type
+ expected_substrings: list[str]
+ strict_mode: bool = True
+
+
+# Type aliases for common callback signatures
+ValidationCallback = Callable[[object, object, str, float], None]
+ContextFactory = Callable[..., object]
+
+
+__all__ = [
+ "RewardScenarioConfig",
+ "ValidationConfig",
+ "ThresholdTestConfig",
+ "ProgressiveScalingConfig",
+ "ExitFactorConfig",
+ "StatisticalTestConfig",
+ "SimulationConfig",
+ "WarningCaptureConfig",
+ "ValidationCallback",
+ "ContextFactory",
+]
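These plain dataclasses are typically specialized per-test rather than mutated. A minimal sketch of that pattern using `dataclasses.replace`/`asdict`; the `StatisticalTestConfig` below is a standalone copy re-declared for illustration (the real class lives in the module above):

```python
from dataclasses import asdict, dataclass, replace
from typing import Optional


# Standalone copy of StatisticalTestConfig, re-declared here only so the
# snippet runs on its own; mirrors the field defaults defined above.
@dataclass
class StatisticalTestConfig:
    n_bootstrap: int = 100
    confidence_level: float = 0.95
    seed: int = 42
    adjust_method: Optional[str] = None
    alpha: float = 0.05


base = StatisticalTestConfig()
# Derive a faster per-test variant without mutating the shared base config.
fast = replace(base, n_bootstrap=50, seed=7)
print(fast.n_bootstrap, base.n_bootstrap)  # 50 100
print(asdict(fast)["alpha"])  # 0.05
```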
--- /dev/null
+import math
+
+import numpy as np
+
+from reward_space_analysis import (
+ Actions,
+ Positions,
+ RewardContext,
+ _get_bool_param,
+ _get_float_param,
+ calculate_reward,
+)
+
+
+def test_get_bool_param_none_and_invalid_literal():
+ params_none = {"check_invariants": None}
+ # None should coerce to False (coverage for _to_bool None path)
+ assert _get_bool_param(params_none, "check_invariants", True) is False
+
+ params_invalid = {"check_invariants": "not_a_bool"}
+ # Invalid literal triggers ValueError in _to_bool; fallback returns default (True)
+ assert _get_bool_param(params_invalid, "check_invariants", True) is True
+
+
+def test_get_float_param_invalid_string_returns_nan():
+ params = {"idle_penalty_scale": "abc"}
+ val = _get_float_param(params, "idle_penalty_scale", 0.5)
+ assert math.isnan(val)
+
+
+def test_calculate_reward_unrealized_pnl_hold_path():
+ # Exercise unrealized_pnl branch during hold to cover next_pnl tanh path
+ context = RewardContext(
+ pnl=0.01,
+ trade_duration=5,
+ idle_duration=0,
+ max_unrealized_profit=0.02,
+ min_unrealized_profit=-0.01,
+ position=Positions.Long,
+ action=Actions.Neutral,
+ )
+ params = {
+ "hold_potential_enabled": True,
+ "unrealized_pnl": True,
+ "pnl_factor_beta": 0.5,
+ }
+ breakdown = calculate_reward(
+ context,
+ params,
+ base_factor=100.0,
+ profit_target=0.05,
+ risk_reward_ratio=1.0,
+ short_allowed=True,
+ action_masking=True,
+ previous_potential=np.nan,
+ )
+ assert math.isfinite(breakdown.prev_potential)
+ assert math.isfinite(breakdown.next_potential)
+ # At least one potential should be non-zero, confirming the shaping path executed
+ assert breakdown.prev_potential != 0.0 or breakdown.next_potential != 0.0
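The unrealized-pnl hold path exercised above runs pnl through a tanh squashing (per the test comment). A hedged sketch of the generic pattern; `bounded_pnl_signal` and its `beta` parameter are hypothetical names, not the library's API:

```python
import math


def bounded_pnl_signal(pnl: float, beta: float) -> float:
    """Hypothetical tanh squashing of raw pnl into (-1, 1), as tanh-based shaping paths do."""
    return math.tanh(beta * pnl)


# tanh keeps small pnl approximately linear and saturates large pnl,
# which bounds the potential regardless of outlier pnl values.
small = bounded_pnl_signal(0.01, beta=0.5)
large = bounded_pnl_signal(100.0, beta=0.5)
print(small, large)
```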
--- /dev/null
+#!/usr/bin/env python3
+"""Utility tests narrowed to data loading behaviors.
+
+Moved tests:
+- Report formatting invariants -> integration/test_report_formatting.py
+- Additives deterministic contribution -> components/test_additives.py
+- CLI CSV + params propagation -> cli/test_cli_params_and_csv.py
+"""
+
+import pickle
+import unittest
+import warnings
+from pathlib import Path
+
+import pandas as pd
+
+from reward_space_analysis import load_real_episodes
+
+from ..test_base import RewardSpaceTestBase
+
+
+class TestLoadRealEpisodes(RewardSpaceTestBase):
+ """Unit tests for load_real_episodes."""
+
+ def test_drop_exact_duplicates_warns(self):
+ """Invariant 108: duplicate rows dropped with warning showing count removed."""
+ df = pd.DataFrame(
+ {
+ "pnl": [0.01, 0.01, -0.02], # first two duplicate
+ "trade_duration": [10, 10, 20],
+ "idle_duration": [5, 5, 0],
+ "position": [1.0, 1.0, 0.0],
+ "action": [2.0, 2.0, 0.0],
+ "reward": [1.0, 1.0, -0.5],
+ }
+ )
+ p = Path(self.temp_dir) / "dupes.pkl"
+ self.write_pickle(df, p)
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter("always")
+ loaded = load_real_episodes(p)
+ self.assertEqual(len(loaded), 2, "Expected duplicate row removal to reduce length")
+ msgs = [str(warning.message) for warning in w]
+ dup_msgs = [m for m in msgs if "duplicate transition" in m]
+ self.assertTrue(
+ any("Removed" in m for m in dup_msgs), f"No duplicate removal warning found in: {msgs}"
+ )
+
+ def test_missing_multiple_required_columns_single_warning(self):
+ """Invariant 109: enforce_columns=False fills all missing required cols with NaN and single warning."""
+ transitions = [
+ {"pnl": 0.02, "trade_duration": 12}, # Missing idle_duration, position, action, reward
+ {"pnl": -0.01, "trade_duration": 3},
+ ]
+ p = Path(self.temp_dir) / "missing_multi.pkl"
+ self.write_pickle(transitions, p)
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter("always")
+ loaded = load_real_episodes(p, enforce_columns=False)
+ required = {"idle_duration", "position", "action", "reward"}
+ for col in required:
+ self.assertIn(col, loaded.columns)
+ self.assertTrue(loaded[col].isna().all(), f"Column {col} should be all NaN")
+ msgs = [str(warning.message) for warning in w]
+ miss_msgs = [m for m in msgs if "missing columns" in m]
+ self.assertEqual(
+ len(miss_msgs), 1, f"Expected single missing columns warning (got {miss_msgs})"
+ )
+
+ def write_pickle(self, obj, path: Path):
+ with path.open("wb") as f:
+ pickle.dump(obj, f)
+
+ def test_top_level_dict_transitions(self):
+ df = pd.DataFrame(
+ {
+ "pnl": [0.01],
+ "trade_duration": [10],
+ "idle_duration": [5],
+ "position": [1.0],
+ "action": [2.0],
+ "reward": [1.0],
+ }
+ )
+ p = Path(self.temp_dir) / "top.pkl"
+ self.write_pickle({"transitions": df}, p)
+ loaded = load_real_episodes(p)
+ self.assertIsInstance(loaded, pd.DataFrame)
+ self.assertEqual(list(loaded.columns).count("pnl"), 1)
+ self.assertEqual(len(loaded), 1)
+
+ def test_mixed_episode_list_warns_and_flattens(self):
+ ep1 = {"episode_id": 1}
+ ep2 = {
+ "episode_id": 2,
+ "transitions": [
+ {
+ "pnl": 0.02,
+ "trade_duration": 5,
+ "idle_duration": 0,
+ "position": 1.0,
+ "action": 2.0,
+ "reward": 2.0,
+ }
+ ],
+ }
+ p = Path(self.temp_dir) / "mixed.pkl"
+ self.write_pickle([ep1, ep2], p)
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter("always")
+ loaded = load_real_episodes(p)
+ _ = w  # warning emission is not asserted here; only the flattening behavior is
+ self.assertEqual(len(loaded), 1)
+ self.assertPlacesEqual(float(loaded.iloc[0]["pnl"]), 0.02, places=7)
+
+ def test_non_iterable_transitions_raises(self):
+ bad = {"transitions": 123}
+ p = Path(self.temp_dir) / "bad.pkl"
+ self.write_pickle(bad, p)
+ with self.assertRaises(ValueError):
+ load_real_episodes(p)
+
+ def test_enforce_columns_false_fills_na(self):
+ trans = [
+ {"pnl": 0.03, "trade_duration": 10, "idle_duration": 0, "position": 1.0, "action": 2.0}
+ ]
+ p = Path(self.temp_dir) / "fill.pkl"
+ self.write_pickle(trans, p)
+ loaded = load_real_episodes(p, enforce_columns=False)
+ self.assertIn("reward", loaded.columns)
+ self.assertTrue(loaded["reward"].isna().all())
+
+ def test_casting_numeric_strings(self):
+ trans = [
+ {
+ "pnl": "0.04",
+ "trade_duration": "20",
+ "idle_duration": "0",
+ "position": "1.0",
+ "action": "2.0",
+ "reward": "3.0",
+ }
+ ]
+ p = Path(self.temp_dir) / "strs.pkl"
+ self.write_pickle(trans, p)
+ loaded = load_real_episodes(p)
+ self.assertIn("pnl", loaded.columns)
+ self.assertIn(loaded["pnl"].dtype.kind, ("f", "i"))
+ self.assertPlacesEqual(float(loaded.iloc[0]["pnl"]), 0.04, places=7)
+
+ def test_pickled_dataframe_loads(self):
+ test_episodes = pd.DataFrame(
+ {
+ "pnl": [0.01, -0.02, 0.03],
+ "trade_duration": [10, 20, 15],
+ "idle_duration": [5, 0, 8],
+ "position": [1.0, 0.0, 1.0],
+ "action": [2.0, 0.0, 2.0],
+ "reward": [10.5, -5.2, 15.8],
+ }
+ )
+ p = Path(self.temp_dir) / "test_episodes.pkl"
+ self.write_pickle(test_episodes, p)
+ loaded_data = load_real_episodes(p)
+ self.assertIsInstance(loaded_data, pd.DataFrame)
+ self.assertEqual(len(loaded_data), 3)
+ self.assertIn("pnl", loaded_data.columns)
+
+
+if __name__ == "__main__":
+ unittest.main()
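The `write_pickle` fixture helper above is a thin pickle round-trip; any picklable payload (a DataFrame, a list of transition dicts, or a `{"transitions": ...}` wrapper) is written the same way. A stdlib-only sketch of that fixture pattern:

```python
import pickle
import tempfile
from pathlib import Path

# List-of-dicts payload, matching the transitions fixtures used in the tests above.
transitions = [
    {"pnl": 0.02, "trade_duration": 5, "reward": 2.0},
    {"pnl": -0.01, "trade_duration": 3, "reward": -0.5},
]
with tempfile.TemporaryDirectory() as tmp:
    p = Path(tmp) / "episodes.pkl"
    with p.open("wb") as f:
        pickle.dump(transitions, f)
    with p.open("rb") as f:
        loaded = pickle.load(f)
print(len(loaded), loaded[0]["pnl"])  # 2 0.02
```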
--- /dev/null
+#!/usr/bin/env python3
+"""Warning capture and assertion helpers.
+
+This module provides standardized context managers and utilities for
+capturing and validating warnings in tests, reducing boilerplate code
+and ensuring consistent warning handling patterns.
+
+Usage:
+ from tests.helpers.warnings import assert_diagnostic_warning
+
+ with assert_diagnostic_warning(["exit_factor", "threshold"]) as caught:
+ result = calculate_something_that_warns()
+
+ # Assertions are automatic; caught warnings available for inspection
+"""
+
+import warnings
+from contextlib import contextmanager
+from typing import Any, Optional
+
+try:
+ from reward_space_analysis import RewardDiagnosticsWarning
+except ImportError:
+ RewardDiagnosticsWarning = RuntimeWarning # type: ignore
+
+
+@contextmanager
+def capture_warnings(warning_category: type[Warning] = Warning, always_capture: bool = True):
+ """Context manager for capturing warnings during test execution.
+
+ Provides a standardized way to capture warnings with consistent
+ configuration across the test suite.
+
+ Args:
+ warning_category: Warning category to filter (default: Warning for all)
+ always_capture: If True, use simplefilter("always") to capture all warnings
+
+ Yields:
+ list: List of captured warning objects
+
+ Example:
+ with capture_warnings(RewardDiagnosticsWarning) as caught:
+ result = function_that_warns()
+ assert len(caught) > 0
+ """
+ with warnings.catch_warnings(record=True) as caught:
+ if always_capture:
+ warnings.simplefilter("always", warning_category)
+ else:
+ warnings.simplefilter("default", warning_category)
+ yield caught
+
+
+@contextmanager
+def assert_diagnostic_warning(
+ expected_substrings: list[str],
+ warning_category: Optional[type[Warning]] = None,
+ strict_mode: bool = True,
+):
+ """Context manager that captures warnings and asserts their presence.
+
+ Automatically validates that expected warning substrings are present
+ in captured warning messages. Reduces boilerplate in tests that need
+ to validate warning behavior.
+
+ Args:
+ expected_substrings: List of substrings expected in warning messages
+ warning_category: Warning category to filter (default: use module's default)
+ strict_mode: If True, all substrings must be present; if False, at least one
+
+ Yields:
+ list: List of captured warning objects for additional inspection
+
+ Raises:
+ AssertionError: If expected warnings are not found
+
+ Example:
+ with assert_diagnostic_warning(["invalid", "clamped"]) as caught:
+ result = function_with_invalid_param()
+ """
+ category = warning_category if warning_category is not None else RewardDiagnosticsWarning
+
+ with warnings.catch_warnings(record=True) as caught:
+ warnings.simplefilter("always", category)
+ yield caught
+
+ # Filter to only warnings of the expected category
+ filtered = [w for w in caught if issubclass(w.category, category)]
+
+ if not filtered:
+ raise AssertionError(
+ f"Expected {category.__name__} but no warnings of that category were captured. "
+ f"Total warnings: {len(caught)}"
+ )
+
+ # Check for expected substrings
+ all_messages = " ".join(str(w.message) for w in filtered)
+
+ if strict_mode:
+ # All substrings must be present
+ for substring in expected_substrings:
+ if substring not in all_messages:
+ raise AssertionError(
+ f"Expected substring '{substring}' not found in warning messages. "
+ f"Captured messages: {all_messages}"
+ )
+ else:
+ # At least one substring must be present
+ found = any(substring in all_messages for substring in expected_substrings)
+ if not found:
+ raise AssertionError(
+ f"None of the expected substrings {expected_substrings} found in warnings. "
+ f"Captured messages: {all_messages}"
+ )
+
+
+@contextmanager
+def assert_no_warnings(warning_category: type[Warning] = Warning):
+ """Context manager that asserts no warnings are raised.
+
+ Useful for validating that clean code paths don't emit unexpected warnings.
+
+ Args:
+ warning_category: Warning category to check (default: all warnings)
+
+ Yields:
+ None
+
+ Raises:
+ AssertionError: If any warnings of the specified category are captured
+
+ Example:
+ with assert_no_warnings(RewardDiagnosticsWarning):
+ result = function_that_should_not_warn()
+ """
+ with warnings.catch_warnings(record=True) as caught:
+ warnings.simplefilter("always", warning_category)
+ yield
+
+ filtered = [w for w in caught if issubclass(w.category, warning_category)]
+ if filtered:
+ messages = [str(w.message) for w in filtered]
+ raise AssertionError(
+ f"Expected no {warning_category.__name__} but {len(filtered)} were raised: {messages}"
+ )
+
+
+def validate_warning_content(
+ caught_warnings: list[Any],
+ warning_category: type[Warning],
+ expected_substrings: list[str],
+ strict_mode: bool = True,
+) -> None:
+ """Validate captured warnings contain expected content.
+
+ Helper function for manual validation of warning content when using
+ a standard catch_warnings context.
+
+ Args:
+ caught_warnings: List of captured warning objects from catch_warnings
+ warning_category: Expected warning category
+ expected_substrings: List of substrings that should appear in messages
+ strict_mode: If True, all substrings must be present; if False, at least one
+
+ Raises:
+ AssertionError: If validation fails
+ """
+ filtered = [w for w in caught_warnings if issubclass(w.category, warning_category)]
+
+ if not filtered:
+ raise AssertionError(
+ f"No warnings of type {warning_category.__name__} captured. "
+ f"Total warnings: {len(caught_warnings)}"
+ )
+
+ all_messages = " ".join(str(w.message) for w in filtered)
+
+ if strict_mode:
+ missing = [s for s in expected_substrings if s not in all_messages]
+ if missing:
+ raise AssertionError(
+ f"Missing expected substrings: {missing}. Captured messages: {all_messages}"
+ )
+ else:
+ found = any(s in all_messages for s in expected_substrings)
+ if not found:
+ raise AssertionError(
+ f"None of the expected substrings {expected_substrings} found. "
+ f"Captured messages: {all_messages}"
+ )
+
+
+__all__ = [
+ "capture_warnings",
+ "assert_diagnostic_warning",
+ "assert_no_warnings",
+ "validate_warning_content",
+]
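The helpers above all wrap the same stdlib capture-and-validate pattern: record warnings, filter by category, then check expected substrings against the joined messages. A self-contained sketch of that pattern using only `warnings` (with `RuntimeWarning` standing in for `RewardDiagnosticsWarning`):

```python
import warnings


def noisy():
    # Stand-in for a reward function that emits a diagnostic warning.
    warnings.warn("parameter clamped to valid range", RuntimeWarning)


# Record all RuntimeWarnings raised inside the block.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", RuntimeWarning)
    noisy()

# Filter by category, then validate substrings in strict mode
# (every expected substring must appear somewhere in the messages).
filtered = [w for w in caught if issubclass(w.category, RuntimeWarning)]
all_messages = " ".join(str(w.message) for w in filtered)
missing = [s for s in ("clamped", "valid range") if s not in all_messages]
print(len(filtered), missing)  # 1 []
```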
import unittest
from pathlib import Path
-from .test_base import RewardSpaceTestBase
+import pytest
+
+from ..test_base import RewardSpaceTestBase
+
+pytestmark = pytest.mark.integration
class TestIntegration(RewardSpaceTestBase):
def test_cli_execution_produces_expected_files(self):
"""CLI produces expected files."""
cmd = [
+ "uv",
+ "run",
sys.executable,
- "reward_space_analysis.py",
+ str(Path(__file__).parent.parent.parent / "reward_space_analysis.py"),
"--num_samples",
str(self.TEST_SAMPLES),
"--seed",
def test_manifest_structure_and_reproducibility(self):
"""Manifest structure + reproducibility."""
cmd1 = [
+ "uv",
+ "run",
sys.executable,
- "reward_space_analysis.py",
+ str(Path(__file__).parent.parent.parent / "reward_space_analysis.py"),
"--num_samples",
str(self.TEST_SAMPLES),
"--seed",
str(self.output_path / "run1"),
]
cmd2 = [
+ "uv",
+ "run",
sys.executable,
- "reward_space_analysis.py",
+ str(Path(__file__).parent.parent.parent / "reward_space_analysis.py"),
"--num_samples",
str(self.TEST_SAMPLES),
"--seed",
--- /dev/null
+#!/usr/bin/env python3
+"""Report formatting focused tests moved from helpers/test_utilities.py.
+
+Owns invariant: report-abs-shaping-line-091 (integration category)
+"""
+
+import re
+import unittest
+
+import numpy as np
+import pandas as pd
+
+from reward_space_analysis import PBRS_INVARIANCE_TOL, write_complete_statistical_analysis
+
+from ..test_base import RewardSpaceTestBase
+
+
+class TestReportFormatting(RewardSpaceTestBase):
+ def test_statistical_validation_section_absent_when_no_hypothesis_tests(self):
+ """Section 5 omitted entirely when no hypothesis tests qualify (idle<30, groups<2, pnl sign groups<30)."""
+ # Construct df with idle_duration always zero -> reward_idle all zeros so idle_mask.sum()==0
+ # Position has only one unique value -> groups<2
+ # pnl all zeros so no positive/negative groups with >=30 each
+ n = 40
+ df = pd.DataFrame(
+ {
+ "reward": np.zeros(n),
+ "reward_idle": np.zeros(n),
+ "reward_hold": np.zeros(n),
+ "reward_exit": np.zeros(n),
+ "pnl": np.zeros(n),
+ "trade_duration": np.ones(n),
+ "idle_duration": np.zeros(n),
+ "position": np.zeros(n),
+ }
+ )
+ content = self._write_report(df, real_df=None)
+ # Hypothesis section header should be absent
+ self.assertNotIn("## 5. Statistical Validation", content)
+ # Summary numbering still includes Statistical Validation line (always written)
+ self.assertIn("5. **Statistical Validation**", content)
+ # Distribution shift subsection appears only inside Section 5; since Section 5 omitted it should be absent.
+ self.assertNotIn("### 5.4 Distribution Shift Analysis", content)
+ self.assertNotIn("_Not performed (no real episodes provided)._", content)
+
+ def _write_report(
+ self, df: pd.DataFrame, *, real_df: pd.DataFrame | None = None, **kwargs
+ ) -> str:
+ """Helper: invoke write_complete_statistical_analysis into temp dir and return content."""
+ out_dir = self.output_path / "report_tmp"
+ # Ensure required columns present (action required for summary stats)
+ required_cols = [
+ "action",
+ "reward_invalid",
+ "reward_shaping",
+ "reward_entry_additive",
+ "reward_exit_additive",
+ "duration_ratio",
+ "idle_ratio",
+ ]
+ df = df.copy()
+ for col in required_cols:
+ if col not in df.columns:
+ df[col] = 0.0
+ write_complete_statistical_analysis(
+ df=df,
+ output_dir=out_dir,
+ profit_target=self.TEST_PROFIT_TARGET,
+ seed=self.SEED,
+ real_df=real_df,
+ adjust_method="none",
+ strict_diagnostics=False,
+ bootstrap_resamples=200, # keep test fast
+ skip_partial_dependence=kwargs.get("skip_partial_dependence", False),
+ skip_feature_analysis=kwargs.get("skip_feature_analysis", False),
+ )
+ report_path = out_dir / "statistical_analysis.md"
+ return report_path.read_text(encoding="utf-8")
+
+ """Tests for report formatting elements not covered elsewhere."""
+
+ def test_abs_shaping_line_present_and_constant(self):
+ """Abs Σ Shaping Reward line present, formatted, uses constant not literal."""
+ df = pd.DataFrame(
+ {
+ "reward_shaping": [self.TOL_IDENTITY_STRICT, -self.TOL_IDENTITY_STRICT],
+ "reward_entry_additive": [0.0, 0.0],
+ "reward_exit_additive": [0.0, 0.0],
+ }
+ )
+ total_shaping = df["reward_shaping"].sum()
+ self.assertLess(abs(total_shaping), PBRS_INVARIANCE_TOL)
+ lines = [f"| Abs Σ Shaping Reward | {abs(total_shaping):.6e} |"]
+ content = "\n".join(lines)
+ m = re.search("\\| Abs Σ Shaping Reward \\| ([0-9]+\\.[0-9]{6}e[+-][0-9]{2}) \\|", content)
+ self.assertIsNotNone(m, "Abs Σ Shaping Reward line missing or misformatted")
+ val = float(m.group(1)) if m else None
+ if val is not None:
+ self.assertLess(val, self.TOL_NEGLIGIBLE + self.TOL_IDENTITY_STRICT)
+ self.assertNotIn(
+ str(self.TOL_GENERIC_EQ),
+ content,
+ "Tolerance constant value should appear, not raw literal",
+ )
+
+ def test_distribution_shift_section_present_with_real_episodes(self):
+ """Distribution Shift section renders metrics table when real episodes provided."""
+ # Synthetic df (ensure >=10 non-NaN per feature)
+ synth_df = self.make_stats_df(n=60, seed=123)
+ # Real df: shift slightly (different mean) so metrics non-zero
+ real_df = synth_df.copy()
+ real_df["pnl"] = real_df["pnl"] + 0.001 # small mean shift
+ real_df["trade_duration"] = real_df["trade_duration"] * 1.01
+ real_df["idle_duration"] = real_df["idle_duration"] * 0.99
+ content = self._write_report(synth_df, real_df=real_df)
+ # Assert metrics header and at least one feature row
+ self.assertIn("### 5.4 Distribution Shift Analysis", content)
+ self.assertIn(
+ "| Feature | KL Div | JS Dist | Wasserstein | KS Stat | KS p-value |", content
+ )
+ # Ensure placeholder text absent
+ self.assertNotIn("_Not performed (no real episodes provided)._", content)
+ # Basic regex to find a feature row (pnl)
+ m = re.search(r"\| pnl \| ([0-9]+\.[0-9]{4}) \| ([0-9]+\.[0-9]{4}) \|", content)
+ self.assertIsNotNone(
+ m, "pnl feature row missing or misformatted in distribution shift table"
+ )
+
+ def test_partial_dependence_redundancy_note_emitted(self):
+ """Redundancy note appears when both feature analysis and partial dependence skipped."""
+ df = self.make_stats_df(
+ n=10, seed=321
+ ) # small but >=4 so skip_feature_analysis flag drives behavior
+ content = self._write_report(
+ df,
+ real_df=None,
+ skip_feature_analysis=True,
+ skip_partial_dependence=True,
+ )
+ self.assertIn(
+ "_Note: --skip_partial_dependence is redundant when feature analysis is skipped._",
+ content,
+ )
+ # Ensure feature importance section shows skipped label
+ self.assertIn("Feature Importance - (skipped)", content)
+ # Ensure no partial dependence plots line for success path appears
+ self.assertNotIn("partial_dependence_*.csv", content)
+
+
+if __name__ == "__main__":
+ unittest.main()
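The `Abs Σ Shaping Reward` assertion above pins the report's 6-decimal scientific-notation format. A small sketch of the same regex applied to a synthetic table line (the line itself is constructed here, not read from a real report):

```python
import re

# Build a table row the way the report formats it (6-decimal scientific notation),
# then match it with the same pattern the test above uses.
line = f"| Abs Σ Shaping Reward | {abs(-3.2e-10):.6e} |"
m = re.search(r"\| Abs Σ Shaping Reward \| ([0-9]+\.[0-9]{6}e[+-][0-9]{2}) \|", line)
print(line, bool(m))
```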
--- /dev/null
+"""Integration smoke tests: component activation and long/short symmetry."""
+
+import pytest
+
+from reward_space_analysis import (
+ Actions,
+ Positions,
+ calculate_reward,
+)
+
+from ..test_base import RewardSpaceTestBase
+
+
+class TestRewardCalculation(RewardSpaceTestBase):
+ """High-level integration smoke tests for reward calculation."""
+
+ @pytest.mark.smoke
+ def test_reward_component_activation_smoke(
+ self,
+ ):
+ """Smoke: each primary component activates in a representative scenario.
+
+ Non-owning smoke; ownership: robustness/test_robustness.py:35 (robustness-decomposition-integrity-101).
+ Detailed progressive / boundary / proportional invariants are NOT asserted here.
+ We only check sign / non-zero activation plus total decomposition identity.
+ """
+ scenarios = [
+ (
+ "hold_penalty_active",
+ dict(
+ pnl=0.0,
+ trade_duration=160, # > default threshold
+ idle_duration=0,
+ max_unrealized_profit=0.02,
+ min_unrealized_profit=-0.01,
+ position=Positions.Long,
+ action=Actions.Neutral,
+ ),
+ "hold_penalty",
+ ),
+ (
+ "idle_penalty_active",
+ dict(
+ pnl=0.0,
+ trade_duration=0,
+ idle_duration=25,
+ max_unrealized_profit=0.0,
+ min_unrealized_profit=0.0,
+ position=Positions.Neutral,
+ action=Actions.Neutral,
+ ),
+ "idle_penalty",
+ ),
+ (
+ "profitable_exit_long",
+ dict(
+ pnl=0.04,
+ trade_duration=40,
+ idle_duration=0,
+ max_unrealized_profit=0.05,
+ min_unrealized_profit=0.0,
+ position=Positions.Long,
+ action=Actions.Long_exit,
+ ),
+ "exit_component",
+ ),
+ (
+ "invalid_action_penalty",
+ dict(
+ pnl=0.01,
+ trade_duration=10,
+ idle_duration=0,
+ max_unrealized_profit=0.02,
+ min_unrealized_profit=0.0,
+ position=Positions.Short,
+ action=Actions.Long_exit, # invalid pairing
+ ),
+ "invalid_penalty",
+ ),
+ ]
+
+ for name, ctx_kwargs, expected_component in scenarios:
+ with self.subTest(scenario=name):
+ ctx = self.make_ctx(**ctx_kwargs)
+ breakdown = calculate_reward(
+ ctx,
+ self.DEFAULT_PARAMS,
+ base_factor=self.TEST_BASE_FACTOR,
+ profit_target=self.TEST_PROFIT_TARGET,
+ risk_reward_ratio=self.TEST_RR,
+ short_allowed=True,
+ action_masking=expected_component != "invalid_penalty",
+ )
+
+ value = getattr(breakdown, expected_component)
+ # Sign / activation expectations
+ if expected_component in {"hold_penalty", "idle_penalty", "invalid_penalty"}:
+ self.assertLess(value, 0.0, f"{expected_component} should be negative: {name}")
+ elif expected_component == "exit_component":
+ self.assertGreater(value, 0.0, f"exit_component should be positive: {name}")
+
+ # Decomposition identity (relaxed tolerance)
+ comp_sum = (
+ breakdown.exit_component
+ + breakdown.idle_penalty
+ + breakdown.hold_penalty
+ + breakdown.invalid_penalty
+ + breakdown.reward_shaping
+ + breakdown.entry_additive
+ + breakdown.exit_additive
+ )
+ self.assertAlmostEqualFloat(
+ breakdown.total,
+ comp_sum,
+ tolerance=self.TOL_IDENTITY_RELAXED,
+ msg=f"Total != sum components in {name}",
+ )
+
+ def test_long_short_symmetry_smoke(self):
+ """Smoke: exit component sign & approximate magnitude symmetry for long vs short.
+
+ Strict magnitude precision is tested in robustness suite; here we assert coarse symmetry.
+ """
+ params = self.base_params()
+ params.pop("base_factor", None)
+ base_factor = 100.0
+ profit_target = 0.04
+ rr = self.TEST_RR
+
+ for pnl, label in [(0.02, "profit"), (-0.02, "loss")]:
+ with self.subTest(pnl=pnl, label=label):
+ ctx_long = self.make_ctx(
+ pnl=pnl,
+ trade_duration=50,
+ idle_duration=0,
+ max_unrealized_profit=abs(pnl) + 0.005,
+ min_unrealized_profit=0.0 if pnl > 0 else pnl,
+ position=Positions.Long,
+ action=Actions.Long_exit,
+ )
+ ctx_short = self.make_ctx(
+ pnl=pnl,
+ trade_duration=50,
+ idle_duration=0,
+ max_unrealized_profit=abs(pnl) + 0.005 if pnl > 0 else 0.01,
+ min_unrealized_profit=0.0 if pnl > 0 else pnl,
+ position=Positions.Short,
+ action=Actions.Short_exit,
+ )
+
+ br_long = calculate_reward(
+ ctx_long,
+ params,
+ base_factor=base_factor,
+ profit_target=profit_target,
+ risk_reward_ratio=rr,
+ short_allowed=True,
+ action_masking=True,
+ )
+ br_short = calculate_reward(
+ ctx_short,
+ params,
+ base_factor=base_factor,
+ profit_target=profit_target,
+ risk_reward_ratio=rr,
+ short_allowed=True,
+ action_masking=True,
+ )
+
+ if pnl > 0:
+ self.assertGreater(br_long.exit_component, 0.0)
+ self.assertGreater(br_short.exit_component, 0.0)
+ else:
+ self.assertLess(br_long.exit_component, 0.0)
+ self.assertLess(br_short.exit_component, 0.0)
+
+ # Coarse symmetry: relative diff below relaxed tolerance
+ rel_diff = abs(abs(br_long.exit_component) - abs(br_short.exit_component)) / max(
+ 1e-12, abs(br_long.exit_component)
+ )
+ self.assertLess(rel_diff, 0.25, f"Excessive asymmetry ({rel_diff:.3f}) for {label}")
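The coarse symmetry gate above compares exit-component magnitudes by relative difference, with a tiny floor guarding against division by zero. Isolated as a sketch:

```python
def rel_diff(a: float, b: float) -> float:
    """Relative difference of magnitudes, floored to avoid division by zero."""
    return abs(abs(a) - abs(b)) / max(1e-12, abs(a))


# 5% asymmetry passes the 0.25 gate; identical magnitudes give exactly 0.
print(rel_diff(1.00, -0.95), rel_diff(1.00, 1.00))
```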
--- /dev/null
+#!/usr/bin/env python3
+"""Tests for Potential-Based Reward Shaping (PBRS) mechanics."""
+
+import unittest
+
+import numpy as np
+import pytest
+
+from reward_space_analysis import (
+ DEFAULT_IDLE_DURATION_MULTIPLIER,
+ DEFAULT_MODEL_REWARD_PARAMETERS,
+ PBRS_INVARIANCE_TOL,
+ _compute_entry_additive,
+ _compute_exit_additive,
+ _compute_exit_potential,
+ _compute_hold_potential,
+ _get_float_param,
+ apply_potential_shaping,
+ get_max_idle_duration_candles,
+ simulate_samples,
+ validate_reward_parameters,
+ write_complete_statistical_analysis,
+)
+
+from ..helpers import (
+ assert_non_canonical_shaping_exceeds,
+ assert_pbrs_canonical_sum_within_tolerance,
+ assert_pbrs_invariance_report_classification,
+ assert_relaxed_multi_reason_aggregation,
+ build_validation_case,
+ execute_validation_batch,
+)
+from ..test_base import RewardSpaceTestBase
+
+pytestmark = pytest.mark.pbrs
+
+
+class TestPBRS(RewardSpaceTestBase):
+ """PBRS mechanics tests (transforms, parameters, potentials, invariance)."""
+
+ # ---------------- Potential transform mechanics ---------------- #
+
+ def test_pbrs_progressive_release_decay_clamped(self):
+ """progressive_release decay>1 clamps -> Φ'=0 & Δ=-Φ_prev."""
+ params = self.DEFAULT_PARAMS.copy()
+ params.update(
+ {
+ "potential_gamma": DEFAULT_MODEL_REWARD_PARAMETERS["potential_gamma"],
+ "exit_potential_mode": "progressive_release",
+ "exit_potential_decay": 5.0,
+ "hold_potential_enabled": True,
+ "entry_additive_enabled": False,
+ "exit_additive_enabled": False,
+ }
+ )
+ current_pnl = 0.02
+ current_dur = 0.5
+ prev_potential = _compute_hold_potential(current_pnl, current_dur, params)
+ _total_reward, reward_shaping, next_potential = apply_potential_shaping(
+ base_reward=0.0,
+ current_pnl=current_pnl,
+ current_duration_ratio=current_dur,
+ next_pnl=0.0,
+ next_duration_ratio=0.0,
+ is_exit=True,
+ is_entry=False,
+ last_potential=0.789,
+ params=params,
+ )
+ self.assertAlmostEqualFloat(next_potential, 0.0, tolerance=self.TOL_IDENTITY_RELAXED)
+ self.assertAlmostEqualFloat(
+ reward_shaping, -prev_potential, tolerance=self.TOL_IDENTITY_RELAXED
+ )
+
+ def test_pbrs_spike_cancel_invariance(self):
+ """spike_cancel terminal shaping ≈0 (Φ' inversion yields cancellation)."""
+ params = self.DEFAULT_PARAMS.copy()
+ params.update(
+ {
+ "potential_gamma": 0.9,
+ "exit_potential_mode": "spike_cancel",
+ "hold_potential_enabled": True,
+ "entry_additive_enabled": False,
+ "exit_additive_enabled": False,
+ }
+ )
+ current_pnl = 0.015
+ current_dur = 0.4
+ prev_potential = _compute_hold_potential(current_pnl, current_dur, params)
+ gamma = _get_float_param(
+ params, "potential_gamma", DEFAULT_MODEL_REWARD_PARAMETERS.get("potential_gamma", 0.95)
+ )
+ expected_next_potential = prev_potential / gamma if gamma != 0.0 else prev_potential
+ _total_reward, reward_shaping, next_potential = apply_potential_shaping(
+ base_reward=0.0,
+ current_pnl=current_pnl,
+ current_duration_ratio=current_dur,
+ next_pnl=0.0,
+ next_duration_ratio=0.0,
+ is_exit=True,
+ is_entry=False,
+ last_potential=prev_potential,
+ params=params,
+ )
+ self.assertAlmostEqualFloat(
+ next_potential, expected_next_potential, tolerance=self.TOL_IDENTITY_RELAXED
+ )
+ self.assertNearZero(reward_shaping, atol=self.TOL_IDENTITY_RELAXED)
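A hedged sketch of why canonical PBRS sums to roughly zero in the invariance tests below: with shaping F = γ·Φ(s') − Φ(s) and γ = 1 (an assumption made only for this illustration), the per-step terms telescope to Φ(terminal) − Φ(initial), which vanishes when both endpoint potentials are zero:

```python
# Φ at each step of a synthetic trajectory; canonical exit resets the terminal
# potential to 0, and the initial potential is 0 as well.
gamma = 1.0
potentials = [0.0, 0.3, 0.5, 0.2, 0.0]
# Per-step shaping F_t = γ·Φ(s_{t+1}) − Φ(s_t); with γ = 1 the sum telescopes.
shaping = [gamma * potentials[t + 1] - potentials[t] for t in range(len(potentials) - 1)]
total = sum(shaping)
print(total)
```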
+
+ # ---------------- Invariance sum checks (simulate_samples) ---------------- #
+
+ def test_canonical_invariance_flag_and_sum(self):
+ """Canonical mode + no additives -> invariant flags True and Σ shaping ≈ 0."""
+ params = self.base_params(
+ exit_potential_mode="canonical",
+ entry_additive_enabled=False,
+ exit_additive_enabled=False,
+ hold_potential_enabled=True,
+ )
+ df = simulate_samples(
+ params={**params, "max_trade_duration_candles": 100},
+ num_samples=400,
+ seed=self.SEED,
+ base_factor=self.TEST_BASE_FACTOR,
+ profit_target=self.TEST_PROFIT_TARGET,
+ risk_reward_ratio=self.TEST_RR,
+ max_duration_ratio=2.0,
+ trading_mode="margin",
+ pnl_base_std=self.TEST_PNL_STD,
+ pnl_duration_vol_scale=self.TEST_PNL_DUR_VOL_SCALE,
+ )
+ unique_flags = set(df["pbrs_invariant"].unique().tolist())
+ self.assertEqual(unique_flags, {True}, f"Unexpected invariant flags: {unique_flags}")
+ total_shaping = float(df["reward_shaping"].sum())
+ assert_pbrs_canonical_sum_within_tolerance(self, total_shaping, PBRS_INVARIANCE_TOL)
+
+ def test_non_canonical_flag_false_and_sum_nonzero(self):
+ """Non-canonical mode -> invariant flags False and Σ shaping significantly non-zero."""
+ params = self.base_params(
+ exit_potential_mode="progressive_release",
+ exit_potential_decay=0.25,
+ entry_additive_enabled=False,
+ exit_additive_enabled=False,
+ hold_potential_enabled=True,
+ )
+ df = simulate_samples(
+ params={**params, "max_trade_duration_candles": 100},
+ num_samples=400,
+ seed=self.SEED,
+ base_factor=self.TEST_BASE_FACTOR,
+ profit_target=self.TEST_PROFIT_TARGET,
+ risk_reward_ratio=self.TEST_RR,
+ max_duration_ratio=2.0,
+ trading_mode="margin",
+ pnl_base_std=self.TEST_PNL_STD,
+ pnl_duration_vol_scale=self.TEST_PNL_DUR_VOL_SCALE,
+ )
+ unique_flags = set(df["pbrs_invariant"].unique().tolist())
+ self.assertEqual(unique_flags, {False}, f"Unexpected invariant flags: {unique_flags}")
+ total_shaping = float(df["reward_shaping"].sum())
+ assert_non_canonical_shaping_exceeds(self, total_shaping, PBRS_INVARIANCE_TOL * 10)
+
+ # ---------------- Additives and canonical path mechanics ---------------- #
+
+ def test_additive_components_disabled_return_zero(self):
+ """Entry/exit additives return zero when disabled."""
+ params_entry = {"entry_additive_enabled": False, "entry_additive_scale": 1.0}
+ val_entry = _compute_entry_additive(0.5, 0.3, params_entry)
+ self.assertEqual(float(val_entry), 0.0)
+ params_exit = {"exit_additive_enabled": False, "exit_additive_scale": 1.0}
+ val_exit = _compute_exit_additive(0.5, 0.3, params_exit)
+ self.assertEqual(float(val_exit), 0.0)
+
+ def test_exit_potential_canonical(self):
+ """Canonical exit resets potential; additives auto-disabled."""
+ params = self.base_params(
+ exit_potential_mode="canonical",
+ hold_potential_enabled=True,
+ entry_additive_enabled=True,
+ exit_additive_enabled=True,
+ )
+ base_reward = 0.25
+ current_pnl = 0.05
+ current_duration_ratio = 0.4
+ next_pnl = 0.0
+ next_duration_ratio = 0.0
+ total, shaping, next_potential = apply_potential_shaping(
+ base_reward=base_reward,
+ current_pnl=current_pnl,
+ current_duration_ratio=current_duration_ratio,
+ next_pnl=next_pnl,
+ next_duration_ratio=next_duration_ratio,
+ is_exit=True,
+ is_entry=False,
+ last_potential=0.789,
+ params=params,
+ )
+ self.assertIn("_pbrs_invariance_applied", params)
+ self.assertFalse(
+ params["entry_additive_enabled"],
+ "Entry additive should be auto-disabled in canonical mode",
+ )
+ self.assertFalse(
+ params["exit_additive_enabled"],
+ "Exit additive should be auto-disabled in canonical mode",
+ )
+ self.assertPlacesEqual(next_potential, 0.0, places=12)
+ current_potential = _compute_hold_potential(
+ current_pnl,
+ current_duration_ratio,
+ {"hold_potential_enabled": True, "hold_potential_scale": 1.0},
+ )
+ self.assertAlmostEqual(shaping, -current_potential, delta=self.TOL_IDENTITY_RELAXED)
+ residual = total - base_reward - shaping
+ self.assertAlmostEqual(residual, 0.0, delta=self.TOL_IDENTITY_RELAXED)
+ self.assertTrue(np.isfinite(total))
+
+ def test_pbrs_invariance_internal_flag_set(self):
+ """Canonical path sets _pbrs_invariance_applied once; second call idempotent."""
+ params = self.base_params(
+ exit_potential_mode="canonical",
+ hold_potential_enabled=True,
+ entry_additive_enabled=True,
+ exit_additive_enabled=True,
+ )
+ terminal_next_potentials, shaping_values = self._canonical_sweep(params)
+ _t1, _s1, _n1 = apply_potential_shaping(
+ base_reward=0.0,
+ current_pnl=0.05,
+ current_duration_ratio=0.3,
+ next_pnl=0.0,
+ next_duration_ratio=0.0,
+ is_exit=True,
+ is_entry=False,
+ last_potential=0.4,
+ params=params,
+ )
+ self.assertIn("_pbrs_invariance_applied", params)
+ self.assertFalse(params["entry_additive_enabled"])
+ self.assertFalse(params["exit_additive_enabled"])
+ if terminal_next_potentials:
+ self.assertTrue(
+ all(abs(p) < self.PBRS_TERMINAL_TOL for p in terminal_next_potentials)
+ )
+ max_abs = max(abs(v) for v in shaping_values) if shaping_values else 0.0
+ self.assertLessEqual(max_abs, self.PBRS_MAX_ABS_SHAPING)
+ state_after = (params["entry_additive_enabled"], params["exit_additive_enabled"])
+ _t2, _s2, _n2 = apply_potential_shaping(
+ base_reward=0.0,
+ current_pnl=0.02,
+ current_duration_ratio=0.1,
+ next_pnl=0.0,
+ next_duration_ratio=0.0,
+ is_exit=True,
+ is_entry=False,
+ last_potential=0.1,
+ params=params,
+ )
+ self.assertEqual(
+ state_after, (params["entry_additive_enabled"], params["exit_additive_enabled"])
+ )
+
+ def test_progressive_release_negative_decay_clamped(self):
+ """Negative decay clamps: next potential equals last potential (no release)."""
+ params = self.base_params(
+ exit_potential_mode="progressive_release",
+ exit_potential_decay=-0.75,
+ hold_potential_enabled=True,
+ )
+ last_potential = 0.42
+ total, shaping, next_potential = apply_potential_shaping(
+ base_reward=0.0,
+ current_pnl=0.0,
+ current_duration_ratio=0.0,
+ next_pnl=0.0,
+ next_duration_ratio=0.0,
+ is_exit=True,
+ last_potential=last_potential,
+ params=params,
+ )
+ self.assertPlacesEqual(next_potential, last_potential, places=12)
+ # Fall back to 0.95 when the default gamma is missing, None, or non-numeric.
+ gamma_raw = DEFAULT_MODEL_REWARD_PARAMETERS.get("potential_gamma", 0.95)
+ gamma_fallback = 0.95 if gamma_raw is None else gamma_raw
+ try:
+ gamma = float(gamma_fallback)
+ except (TypeError, ValueError):
+ gamma = 0.95
+ self.assertLessEqual(abs(shaping - gamma * last_potential), self.TOL_GENERIC_EQ)
+ self.assertPlacesEqual(total, shaping, places=12)
+
+ def test_potential_gamma_nan_fallback(self):
+ """potential_gamma=NaN falls back to default value (indirect comparison)."""
+ base_params_dict = self.base_params()
+ default_gamma = base_params_dict.get("potential_gamma", 0.95)
+ params_nan = self.base_params(potential_gamma=np.nan, hold_potential_enabled=True)
+ res_nan = apply_potential_shaping(
+ base_reward=0.1,
+ current_pnl=0.03,
+ current_duration_ratio=0.2,
+ next_pnl=0.035,
+ next_duration_ratio=0.25,
+ is_exit=False,
+ last_potential=0.0,
+ params=params_nan,
+ )
+ params_ref = self.base_params(potential_gamma=default_gamma, hold_potential_enabled=True)
+ res_ref = apply_potential_shaping(
+ base_reward=0.1,
+ current_pnl=0.03,
+ current_duration_ratio=0.2,
+ next_pnl=0.035,
+ next_duration_ratio=0.25,
+ is_exit=False,
+ last_potential=0.0,
+ params=params_ref,
+ )
+ self.assertLess(
+ abs(res_nan[1] - res_ref[1]),
+ self.TOL_IDENTITY_RELAXED,
+ "Unexpected shaping difference under gamma NaN fallback",
+ )
+ self.assertLess(
+ abs(res_nan[0] - res_ref[0]),
+ self.TOL_IDENTITY_RELAXED,
+ "Unexpected total difference under gamma NaN fallback",
+ )
+
+ # ---------------- Validation parameter batch & relaxed aggregation ---------------- #
+
+ def test_validate_reward_parameters_batch_and_relaxed_aggregation(self):
+ """Batch validate strict failures + relaxed multi-reason aggregation via helpers."""
+ # Build strict failure cases
+ strict_failures = [
+ build_validation_case({"potential_gamma": -0.2}, strict=True, expect_error=True),
+ build_validation_case({"hold_potential_scale": -5.0}, strict=True, expect_error=True),
+ ]
+ # Success default (strict) case
+ success_case = build_validation_case({}, strict=True, expect_error=False)
+ # Relaxed multi-reason aggregation case
+ relaxed_case = build_validation_case(
+ {
+ "potential_gamma": "not-a-number",
+ "hold_potential_scale": "-5.0",
+ "max_idle_duration_candles": "nan",
+ },
+ strict=False,
+ expect_error=False,
+ expected_reason_substrings=[
+ "non_numeric_reset",
+ "numeric_coerce",
+ "min=",
+ "derived_default",
+ ],
+ )
+ # Execute batch (strict successes + failures + relaxed case)
+ execute_validation_batch(
+ self,
+ [success_case] + strict_failures + [relaxed_case],
+ validate_reward_parameters,
+ )
+ # Explicit aggregation assertions for relaxed case using helper
+ params_relaxed = DEFAULT_MODEL_REWARD_PARAMETERS.copy()
+ params_relaxed.update(
+ {
+ "potential_gamma": "not-a-number",
+ "hold_potential_scale": "-5.0",
+ "max_idle_duration_candles": "nan",
+ }
+ )
+ assert_relaxed_multi_reason_aggregation(
+ self,
+ validate_reward_parameters,
+ params_relaxed,
+ {
+ "potential_gamma": ["non_numeric_reset"],
+ "hold_potential_scale": ["numeric_coerce", "min="],
+ "max_idle_duration_candles": ["derived_default"],
+ },
+ )
+
+ # ---------------- Exit potential mode comparisons ---------------- #
+
+ def test_compute_exit_potential_mode_differences(self):
+ """Exit potential modes: canonical vs spike_cancel shaping magnitude differences."""
+ gamma = 0.93
+ base_common = dict(
+ hold_potential_enabled=True,
+ potential_gamma=gamma,
+ entry_additive_enabled=False,
+ exit_additive_enabled=False,
+ hold_potential_scale=1.0,
+ )
+ ctx_pnl = 0.012
+ ctx_dur_ratio = 0.3
+ params_can = self.base_params(exit_potential_mode="canonical", **base_common)
+ prev_phi = _compute_hold_potential(ctx_pnl, ctx_dur_ratio, params_can)
+ self.assertFinite(prev_phi, name="prev_phi")
+ next_phi_can = _compute_exit_potential(prev_phi, params_can)
+ self.assertAlmostEqualFloat(
+ next_phi_can,
+ 0.0,
+ tolerance=self.TOL_IDENTITY_STRICT,
+ msg="Canonical exit must zero potential",
+ )
+ # Canonical exit zeroes the next potential, so the shaping delta collapses
+ # to gamma * 0 - prev_phi = -prev_phi.
+ canonical_delta = gamma * next_phi_can - prev_phi
+ self.assertAlmostEqualFloat(
+ canonical_delta,
+ -prev_phi,
+ tolerance=self.TOL_IDENTITY_RELAXED,
+ msg="Canonical delta mismatch",
+ )
+ params_spike = self.base_params(exit_potential_mode="spike_cancel", **base_common)
+ next_phi_spike = _compute_exit_potential(prev_phi, params_spike)
+ shaping_spike = gamma * next_phi_spike - prev_phi
+ self.assertNearZero(
+ shaping_spike,
+ atol=self.TOL_IDENTITY_RELAXED,
+ msg="Spike cancel should nullify shaping delta",
+ )
+ self.assertGreaterEqual(
+ abs(canonical_delta) + self.TOL_IDENTITY_STRICT,
+ abs(shaping_spike),
+ "Canonical shaping magnitude should exceed spike_cancel",
+ )
+
+ def test_pbrs_retain_previous_cumulative_drift(self):
+ """retain_previous mode accumulates negative shaping drift (non-invariant)."""
+ params = self.base_params(
+ exit_potential_mode="retain_previous",
+ hold_potential_enabled=True,
+ entry_additive_enabled=False,
+ exit_additive_enabled=False,
+ potential_gamma=0.9,
+ )
+ gamma = _get_float_param(
+ params, "potential_gamma", DEFAULT_MODEL_REWARD_PARAMETERS.get("potential_gamma", 0.95)
+ )
+ rng = np.random.default_rng(555)
+ potentials = rng.uniform(0.05, 0.85, size=220)
+ deltas = [gamma * p - p for p in potentials]
+ cumulative = float(np.sum(deltas))
+ self.assertLess(cumulative, -self.TOL_NEGLIGIBLE)
+ self.assertGreater(abs(cumulative), 10 * self.TOL_IDENTITY_RELAXED)
+
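+ # Illustrative sketch (pure NumPy, no project helpers): each retain_previous
+ # exit contributes gamma * p - p, so the cumulative drift above has the
+ # closed form (gamma - 1) * sum(potentials). This hypothetical test makes
+ # that relationship explicit under the same seed and sampling ranges.
+ def test_pbrs_retain_previous_drift_closed_form_sketch(self):
+ """Sketch: per-exit drift gamma * p - p sums to (gamma - 1) * sum(p)."""
+ gamma = 0.9
+ rng = np.random.default_rng(555)
+ potentials = rng.uniform(0.05, 0.85, size=220)
+ looped = float(sum(gamma * p - p for p in potentials))
+ closed_form = (gamma - 1.0) * float(np.sum(potentials))
+ self.assertAlmostEqual(looped, closed_form, places=10)
+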
+ # ---------------- Drift correction invariants (simulate_samples) ---------------- #
+
+ # Owns invariant: pbrs-canonical-drift-correction-106
+ def test_pbrs_106_canonical_drift_correction_zero_sum(self):
+ """Invariant 106: canonical mode enforces near zero-sum shaping (drift correction)."""
+ params = self.base_params(
+ exit_potential_mode="canonical",
+ hold_potential_enabled=True,
+ entry_additive_enabled=False,
+ exit_additive_enabled=False,
+ potential_gamma=0.94,
+ )
+ df = simulate_samples(
+ params={**params, "max_trade_duration_candles": 140},
+ num_samples=500,
+ seed=913,
+ base_factor=self.TEST_BASE_FACTOR,
+ profit_target=self.TEST_PROFIT_TARGET,
+ risk_reward_ratio=self.TEST_RR,
+ max_duration_ratio=2.0,
+ trading_mode="margin",
+ pnl_base_std=self.TEST_PNL_STD,
+ pnl_duration_vol_scale=self.TEST_PNL_DUR_VOL_SCALE,
+ )
+ total_shaping = float(df["reward_shaping"].sum())
+ assert_pbrs_canonical_sum_within_tolerance(self, total_shaping, PBRS_INVARIANCE_TOL)
+ flags = set(df["pbrs_invariant"].unique().tolist())
+ self.assertEqual(flags, {True}, f"Unexpected invariance flags canonical: {flags}")
+
+ # Owns invariant (extension path): pbrs-canonical-drift-correction-106
+ def test_pbrs_106_canonical_drift_correction_exception_fallback(self):
+ """Invariant 106 (extension): exception path graceful fallback."""
+ params = self.base_params(
+ exit_potential_mode="canonical",
+ hold_potential_enabled=True,
+ entry_additive_enabled=False,
+ exit_additive_enabled=False,
+ potential_gamma=0.91,
+ )
+ import pandas as pd
+
+ original_sum = pd.DataFrame.sum
+
+ # Patched DataFrame.sum: fail only for the frame carrying reward_shaping.
+ # The first parameter is named `frame` to avoid shadowing the test's `self`.
+ def boom(frame, *args, **kwargs):
+ if isinstance(frame, pd.DataFrame) and "reward_shaping" in frame.columns:
+ raise RuntimeError("forced drift correction failure")
+ return original_sum(frame, *args, **kwargs)
+
+ pd.DataFrame.sum = boom
+ try:
+ df_exc = simulate_samples(
+ params={**params, "max_trade_duration_candles": 120},
+ num_samples=250,
+ seed=515,
+ base_factor=self.TEST_BASE_FACTOR,
+ profit_target=self.TEST_PROFIT_TARGET,
+ risk_reward_ratio=self.TEST_RR,
+ max_duration_ratio=2.0,
+ trading_mode="margin",
+ pnl_base_std=self.TEST_PNL_STD,
+ pnl_duration_vol_scale=self.TEST_PNL_DUR_VOL_SCALE,
+ )
+ finally:
+ pd.DataFrame.sum = original_sum
+ flags_exc = set(df_exc["pbrs_invariant"].unique().tolist())
+ self.assertEqual(flags_exc, {True})
+ # Column presence and successful completion are primary guarantees under fallback.
+ self.assertIn("reward_shaping", df_exc.columns)
+
+ # Owns invariant (comparison path): pbrs-canonical-drift-correction-106
+ def test_pbrs_106_canonical_drift_correction_uniform_offset(self):
+ """Canonical drift correction reduces Σ shaping below tolerance vs non-canonical."""
+ params_can = self.base_params(
+ exit_potential_mode="canonical",
+ hold_potential_enabled=True,
+ entry_additive_enabled=False,
+ exit_additive_enabled=False,
+ potential_gamma=0.92,
+ )
+ df_can = simulate_samples(
+ params={**params_can, "max_trade_duration_candles": 120},
+ num_samples=400,
+ seed=777,
+ base_factor=self.TEST_BASE_FACTOR,
+ profit_target=self.TEST_PROFIT_TARGET,
+ risk_reward_ratio=self.TEST_RR,
+ max_duration_ratio=2.0,
+ trading_mode="margin",
+ pnl_base_std=self.TEST_PNL_STD,
+ pnl_duration_vol_scale=self.TEST_PNL_DUR_VOL_SCALE,
+ )
+ params_non = self.base_params(
+ exit_potential_mode="retain_previous",
+ hold_potential_enabled=True,
+ entry_additive_enabled=False,
+ exit_additive_enabled=False,
+ potential_gamma=0.92,
+ )
+ df_non = simulate_samples(
+ params={**params_non, "max_trade_duration_candles": 120},
+ num_samples=400,
+ seed=777,
+ base_factor=self.TEST_BASE_FACTOR,
+ profit_target=self.TEST_PROFIT_TARGET,
+ risk_reward_ratio=self.TEST_RR,
+ max_duration_ratio=2.0,
+ trading_mode="margin",
+ pnl_base_std=self.TEST_PNL_STD,
+ pnl_duration_vol_scale=self.TEST_PNL_DUR_VOL_SCALE,
+ )
+ total_can = float(df_can["reward_shaping"].sum())
+ total_non = float(df_non["reward_shaping"].sum())
+ self.assertLess(abs(total_can), abs(total_non) + self.TOL_IDENTITY_RELAXED)
+ assert_pbrs_canonical_sum_within_tolerance(self, total_can, PBRS_INVARIANCE_TOL)
+ invariant_mask = df_can["pbrs_invariant"]
+ if bool(invariant_mask.any()):
+ corrected_values = df_can.loc[invariant_mask, "reward_shaping"].to_numpy()
+ mean_corrected = float(np.mean(corrected_values))
+ self.assertLess(abs(mean_corrected), self.TOL_IDENTITY_RELAXED)
+ spread = float(np.max(corrected_values) - np.min(corrected_values))
+ self.assertLess(spread, self.PBRS_MAX_ABS_SHAPING)
+
+ # ---------------- Statistical shape invariance ---------------- #
+
+ def test_normality_invariance_under_scaling(self):
+ """Skewness & excess kurtosis invariant under positive scaling of normal sample."""
+ rng = np.random.default_rng(808)
+ base = rng.normal(0.0, 1.0, size=7000)
+ scaled = 5.0 * base
+
+ def _skew_kurt(x: np.ndarray) -> tuple[float, float]:
+ m = np.mean(x)
+ c = x - m
+ m2 = np.mean(c**2)
+ m3 = np.mean(c**3)
+ m4 = np.mean(c**4)
+ skew = m3 / (m2**1.5 + 1e-18)
+ kurt = m4 / (m2**2 + 1e-18) - 3.0
+ return (float(skew), float(kurt))
+
+ s_base, k_base = _skew_kurt(base)
+ s_scaled, k_scaled = _skew_kurt(scaled)
+ self.assertAlmostEqualFloat(s_base, s_scaled, tolerance=self.TOL_DISTRIB_SHAPE)
+ self.assertAlmostEqualFloat(k_base, k_scaled, tolerance=self.TOL_DISTRIB_SHAPE)
+
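+ # Illustrative sketch: the invariance above is not specific to the normal
+ # distribution. Skewness m3 / m2**1.5 and excess kurtosis m4 / m2**2 - 3 are
+ # ratios of central moments whose scale factors cancel under x -> a * x for
+ # a > 0, so a skewed (exponential) sample exhibits the same property.
+ def test_moment_scale_invariance_generalizes_sketch(self):
+ """Sketch: standardized moments unchanged under positive scaling (exponential)."""
+ rng = np.random.default_rng(909)
+ x = rng.exponential(1.0, size=5000)
+
+ def _standardized_moments(sample: np.ndarray) -> tuple[float, float]:
+ c = sample - np.mean(sample)
+ m2 = float(np.mean(c**2))
+ return float(np.mean(c**3) / m2**1.5), float(np.mean(c**4) / m2**2 - 3.0)
+
+ s1, k1 = _standardized_moments(x)
+ s2, k2 = _standardized_moments(3.5 * x)
+ self.assertAlmostEqual(s1, s2, places=8)
+ self.assertAlmostEqual(k1, k2, places=8)
+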
+ # ---------------- Report classification / formatting ---------------- #
+
+ # Non-owning smoke; ownership: robustness/test_robustness.py:35 (robustness-decomposition-integrity-101), robustness/test_robustness.py:125 (robustness-exit-pnl-only-117)
+ @pytest.mark.smoke
+ def test_pbrs_non_canonical_report_generation(self):
+ """Synthetic invariance section: Non-canonical classification formatting."""
+ import re
+
+ import pandas as pd
+
+ from reward_space_analysis import PBRS_INVARIANCE_TOL
+
+ df = pd.DataFrame(
+ {
+ "reward_shaping": [0.01, -0.002],
+ "reward_entry_additive": [0.0, 0.0],
+ "reward_exit_additive": [0.001, 0.0],
+ }
+ )
+ total_shaping = df["reward_shaping"].sum()
+ self.assertGreater(abs(total_shaping), PBRS_INVARIANCE_TOL)
+ invariance_status = "❌ Non-canonical"
+ section = []
+ section.append("**PBRS Invariance Summary:**\n")
+ section.append("| Field | Value |\n")
+ section.append("|-------|-------|\n")
+ section.append(f"| Invariance | {invariance_status} |\n")
+ section.append(f"| Note | Total shaping = {total_shaping:.6f} (non-zero) |\n")
+ section.append(f"| Σ Shaping Reward | {total_shaping:.6f} |\n")
+ section.append(f"| Abs Σ Shaping Reward | {abs(total_shaping):.6e} |\n")
+ section.append(f"| Σ Entry Additive | {df['reward_entry_additive'].sum():.6f} |\n")
+ section.append(f"| Σ Exit Additive | {df['reward_exit_additive'].sum():.6f} |\n")
+ content = "".join(section)
+ assert_pbrs_invariance_report_classification(
+ self, content, "Non-canonical", expect_additives=False
+ )
+ self.assertRegex(content, r"Σ Shaping Reward \| 0\.008000 \|")
+ m_abs = re.search(r"Abs Σ Shaping Reward \| ([0-9.]+e[+-][0-9]{2}) \|", content)
+ self.assertIsNotNone(m_abs)
+ if m_abs:
+ val = float(m_abs.group(1))
+ self.assertAlmostEqual(abs(total_shaping), val, places=12)
+
+ def test_potential_gamma_boundary_values_stability(self):
+ """Potential gamma boundary values (0 and ≈1) produce bounded shaping."""
+ for gamma in [0.0, 0.999999]:
+ params = self.base_params(
+ hold_potential_enabled=True,
+ entry_additive_enabled=False,
+ exit_additive_enabled=False,
+ exit_potential_mode="canonical",
+ potential_gamma=gamma,
+ )
+ _tot, shap, next_pot = apply_potential_shaping(
+ base_reward=0.0,
+ current_pnl=0.02,
+ current_duration_ratio=0.3,
+ next_pnl=0.025,
+ next_duration_ratio=0.35,
+ is_exit=False,
+ last_potential=0.0,
+ params=params,
+ )
+ self.assertTrue(np.isfinite(shap))
+ self.assertTrue(np.isfinite(next_pot))
+ self.assertLessEqual(abs(shap), self.PBRS_MAX_ABS_SHAPING)
+
+ def test_report_cumulative_invariance_aggregation(self):
+ """Canonical telescoping term: small per-step mean drift, bounded increments."""
+ params = self.base_params(
+ hold_potential_enabled=True,
+ entry_additive_enabled=False,
+ exit_additive_enabled=False,
+ exit_potential_mode="canonical",
+ )
+ gamma = _get_float_param(
+ params, "potential_gamma", DEFAULT_MODEL_REWARD_PARAMETERS.get("potential_gamma", 0.95)
+ )
+ rng = np.random.default_rng(321)
+ last_potential = 0.0
+ telescoping_sum = 0.0
+ max_abs_step = 0.0
+ steps = 0
+ for _ in range(500):
+ is_exit = rng.uniform() < 0.1
+ current_pnl = float(rng.normal(0, 0.05))
+ current_dur = float(rng.uniform(0, 1))
+ next_pnl = 0.0 if is_exit else float(rng.normal(0, 0.05))
+ next_dur = 0.0 if is_exit else float(rng.uniform(0, 1))
+ _tot, _shap, next_potential = apply_potential_shaping(
+ base_reward=0.0,
+ current_pnl=current_pnl,
+ current_duration_ratio=current_dur,
+ next_pnl=next_pnl,
+ next_duration_ratio=next_dur,
+ is_exit=is_exit,
+ last_potential=last_potential,
+ params=params,
+ )
+ inc = gamma * next_potential - last_potential
+ telescoping_sum += inc
+ if abs(inc) > max_abs_step:
+ max_abs_step = abs(inc)
+ steps += 1
+ if is_exit:
+ last_potential = 0.0
+ else:
+ last_potential = next_potential
+ mean_drift = telescoping_sum / max(1, steps)
+ self.assertLess(
+ abs(mean_drift),
+ 0.02,
+ f"Per-step telescoping drift too large (mean={mean_drift}, steps={steps})",
+ )
+ self.assertLessEqual(
+ max_abs_step,
+ self.PBRS_MAX_ABS_SHAPING,
+ f"Unexpected large telescoping increment (max={max_abs_step})",
+ )
+
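+ # Illustrative sketch (pure NumPy, no project helpers): for potentials
+ # Phi_0..Phi_T the shaping sum S = sum_t (gamma * Phi_{t+1} - Phi_t) obeys
+ # S = gamma * Phi_T - Phi_0 + (gamma - 1) * sum_{t=1}^{T-1} Phi_t, so with a
+ # terminal reset (Phi_T = 0) only -Phi_0 and the (gamma - 1) leakage term
+ # remain; exact telescoping cancellation holds only at gamma = 1.
+ def test_pbrs_telescoping_identity_sketch(self):
+ """Sketch: algebraic identity behind the bounded telescoping drift."""
+ gamma = 0.95
+ rng = np.random.default_rng(321)
+ phi = rng.uniform(0.0, 1.0, size=50)
+ phi[-1] = 0.0 # terminal reset
+ shaping_sum = float(sum(gamma * phi[t + 1] - phi[t] for t in range(len(phi) - 1)))
+ identity = gamma * phi[-1] - phi[0] + (gamma - 1.0) * float(np.sum(phi[1:-1]))
+ self.assertAlmostEqual(shaping_sum, identity, places=10)
+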
+ def test_report_explicit_non_invariance_progressive_release(self):
+ """progressive_release cumulative shaping non-zero (release leak)."""
+ params = self.base_params(
+ hold_potential_enabled=True,
+ entry_additive_enabled=False,
+ exit_additive_enabled=False,
+ exit_potential_mode="progressive_release",
+ exit_potential_decay=0.25,
+ )
+ rng = np.random.default_rng(321)
+ last_potential = 0.0
+ shaping_sum = 0.0
+ for _ in range(160):
+ is_exit = rng.uniform() < 0.15
+ next_pnl = 0.0 if is_exit else float(rng.normal(0, 0.07))
+ next_dur = 0.0 if is_exit else float(rng.uniform(0, 1))
+ _tot, shap, next_pot = apply_potential_shaping(
+ base_reward=0.0,
+ current_pnl=float(rng.normal(0, 0.07)),
+ current_duration_ratio=float(rng.uniform(0, 1)),
+ next_pnl=next_pnl,
+ next_duration_ratio=next_dur,
+ is_exit=is_exit,
+ last_potential=last_potential,
+ params=params,
+ )
+ shaping_sum += shap
+ last_potential = 0.0 if is_exit else next_pot
+ self.assertGreater(
+ abs(shaping_sum),
+ PBRS_INVARIANCE_TOL * 50,
+ f"Expected non-zero Σ shaping (got {shaping_sum})",
+ )
+
+ # Non-owning smoke; ownership: robustness/test_robustness.py:35 (robustness-decomposition-integrity-101)
+ # Owns invariant: pbrs-canonical-near-zero-report-116
+ @pytest.mark.smoke
+ def test_pbrs_canonical_near_zero_report(self):
+ """Invariant 116: canonical near-zero cumulative shaping classified in full report."""
+ import re
+
+ import numpy as np
+ import pandas as pd
+
+ from reward_space_analysis import PBRS_INVARIANCE_TOL
+
+ small_vals = [1.0e-7, -2.0e-7, 3.0e-7] # sum = 2.0e-7 < tolerance
+ total_shaping = float(sum(small_vals))
+ self.assertLess(
+ abs(total_shaping),
+ PBRS_INVARIANCE_TOL,
+ f"Total shaping {total_shaping} exceeds invariance tolerance",
+ )
+ n = len(small_vals)
+ df = pd.DataFrame(
+ {
+ "reward": np.random.normal(0, 1, n),
+ "reward_idle": np.zeros(n),
+ "reward_hold": np.random.normal(-0.2, 0.05, n),
+ "reward_exit": np.random.normal(0.4, 0.15, n),
+ "pnl": np.random.normal(0.01, 0.02, n),
+ "trade_duration": np.random.uniform(5, 30, n),
+ "idle_duration": np.zeros(n),
+ "position": np.random.choice([0.0, 0.5, 1.0], n),
+ "action": np.random.randint(0, 3, n),
+ "reward_shaping": small_vals,
+ "reward_entry_additive": [0.0] * n,
+ "reward_exit_additive": [0.0] * n,
+ "reward_invalid": np.zeros(n),
+ "duration_ratio": np.random.uniform(0.2, 1.0, n),
+ "idle_ratio": np.zeros(n),
+ }
+ )
+ df.attrs["reward_params"] = {
+ "exit_potential_mode": "canonical",
+ "entry_additive_enabled": False,
+ "exit_additive_enabled": False,
+ }
+ out_dir = self.output_path / "canonical_near_zero_report"
+ write_complete_statistical_analysis(
+ df,
+ output_dir=out_dir,
+ profit_target=self.TEST_PROFIT_TARGET,
+ seed=self.SEED,
+ skip_feature_analysis=True,
+ skip_partial_dependence=True,
+ bootstrap_resamples=25,
+ )
+ report_path = out_dir / "statistical_analysis.md"
+ self.assertTrue(report_path.exists(), "Report file missing for canonical near-zero test")
+ content = report_path.read_text(encoding="utf-8")
+ assert_pbrs_invariance_report_classification(
+ self, content, "Canonical", expect_additives=False
+ )
+ self.assertRegex(content, r"\| Σ Shaping Reward \| 0\.000000 \|")
+ m_abs = re.search(r"\| Abs Σ Shaping Reward \| ([0-9.]+e[+-][0-9]{2}) \|", content)
+ self.assertIsNotNone(m_abs)
+ if m_abs:
+ val_abs = float(m_abs.group(1))
+ self.assertAlmostEqual(abs(total_shaping), val_abs, places=12)
+
+ # Non-owning smoke; ownership: robustness/test_robustness.py:35 (robustness-decomposition-integrity-101)
+ @pytest.mark.smoke
+ def test_pbrs_canonical_warning_report(self):
+ """Canonical mode + no additives but |Σ shaping| > tolerance -> warning classification."""
+ import pandas as pd
+
+ from reward_space_analysis import PBRS_INVARIANCE_TOL
+
+ shaping_vals = [1.2e-4, 1.3e-4, 8.0e-5, -2.0e-5, 1.4e-4] # sum = 4.5e-4 (> tol)
+ total_shaping = sum(shaping_vals)
+ self.assertGreater(abs(total_shaping), PBRS_INVARIANCE_TOL)
+ n = len(shaping_vals)
+ df = pd.DataFrame(
+ {
+ "reward": np.random.normal(0, 1, n),
+ "reward_idle": np.zeros(n),
+ "reward_hold": np.random.normal(-0.2, 0.1, n),
+ "reward_exit": np.random.normal(0.5, 0.2, n),
+ "pnl": np.random.normal(0.01, 0.02, n),
+ "trade_duration": np.random.uniform(5, 50, n),
+ "idle_duration": np.zeros(n),
+ "position": np.random.choice([0.0, 0.5, 1.0], n),
+ "action": np.random.randint(0, 3, n),
+ "reward_shaping": shaping_vals,
+ "reward_entry_additive": [0.0] * n,
+ "reward_exit_additive": [0.0] * n,
+ "reward_invalid": np.zeros(n),
+ "duration_ratio": np.random.uniform(0.2, 1.2, n),
+ "idle_ratio": np.zeros(n),
+ }
+ )
+ df.attrs["reward_params"] = {
+ "exit_potential_mode": "canonical",
+ "entry_additive_enabled": False,
+ "exit_additive_enabled": False,
+ }
+ out_dir = self.output_path / "canonical_warning"
+ write_complete_statistical_analysis(
+ df,
+ output_dir=out_dir,
+ profit_target=self.TEST_PROFIT_TARGET,
+ seed=self.SEED,
+ skip_feature_analysis=True,
+ skip_partial_dependence=True,
+ bootstrap_resamples=50,
+ )
+ report_path = out_dir / "statistical_analysis.md"
+ self.assertTrue(report_path.exists(), "Report file missing for canonical warning test")
+ content = report_path.read_text(encoding="utf-8")
+ assert_pbrs_invariance_report_classification(
+ self, content, "Canonical (with warning)", expect_additives=False
+ )
+ expected_sum_fragment = f"{total_shaping:.6f}"
+ self.assertIn(expected_sum_fragment, content)
+
+ # Non-owning smoke; ownership: robustness/test_robustness.py:35 (robustness-decomposition-integrity-101)
+ @pytest.mark.smoke
+ def test_pbrs_non_canonical_full_report_reason_aggregation(self):
+ """Full report: Non-canonical classification aggregates mode + additives reasons."""
+ import pandas as pd
+
+ shaping_vals = [0.02, -0.005, 0.007]
+ entry_add_vals = [0.003, 0.0, 0.004]
+ exit_add_vals = [0.001, 0.002, 0.0]
+ n = len(shaping_vals)
+ df = pd.DataFrame(
+ {
+ "reward": np.random.normal(0, 1, n),
+ "reward_idle": np.zeros(n),
+ "reward_hold": np.random.normal(-0.1, 0.05, n),
+ "reward_exit": np.random.normal(0.4, 0.15, n),
+ "pnl": np.random.normal(0.01, 0.02, n),
+ "trade_duration": np.random.uniform(5, 25, n),
+ "idle_duration": np.zeros(n),
+ "position": np.random.choice([0.0, 0.5, 1.0], n),
+ "action": np.random.randint(0, 5, n),
+ "reward_shaping": shaping_vals,
+ "reward_entry_additive": entry_add_vals,
+ "reward_exit_additive": exit_add_vals,
+ "reward_invalid": np.zeros(n),
+ "duration_ratio": np.random.uniform(0.1, 1.0, n),
+ "idle_ratio": np.zeros(n),
+ }
+ )
+ df.attrs["reward_params"] = {
+ "exit_potential_mode": "progressive_release",
+ "entry_additive_enabled": True,
+ "exit_additive_enabled": True,
+ }
+ out_dir = self.output_path / "non_canonical_full_report"
+ write_complete_statistical_analysis(
+ df,
+ output_dir=out_dir,
+ profit_target=self.TEST_PROFIT_TARGET,
+ seed=self.SEED,
+ skip_feature_analysis=True,
+ skip_partial_dependence=True,
+ bootstrap_resamples=25,
+ )
+ report_path = out_dir / "statistical_analysis.md"
+ self.assertTrue(
+ report_path.exists(), "Report file missing for non-canonical full report test"
+ )
+ content = report_path.read_text(encoding="utf-8")
+ assert_pbrs_invariance_report_classification(
+ self, content, "Non-canonical", expect_additives=True
+ )
+ self.assertIn("exit_potential_mode='progressive_release'", content)
+
+ # Non-owning smoke; ownership: robustness/test_robustness.py:35 (robustness-decomposition-integrity-101)
+ @pytest.mark.smoke
+ def test_pbrs_non_canonical_mode_only_reason(self):
+ """Non-canonical exit mode with additives disabled -> reason excludes additive list."""
+ import pandas as pd
+
+ from reward_space_analysis import PBRS_INVARIANCE_TOL
+
+ shaping_vals = [0.002, -0.0005, 0.0012]
+ total_shaping = sum(shaping_vals)
+ self.assertGreater(abs(total_shaping), PBRS_INVARIANCE_TOL)
+ n = len(shaping_vals)
+ df = pd.DataFrame(
+ {
+ "reward": np.random.normal(0, 1, n),
+ "reward_idle": np.zeros(n),
+ "reward_hold": np.random.normal(-0.15, 0.05, n),
+ "reward_exit": np.random.normal(0.3, 0.1, n),
+ "pnl": np.random.normal(0.01, 0.02, n),
+ "trade_duration": np.random.uniform(5, 40, n),
+ "idle_duration": np.zeros(n),
+ "position": np.random.choice([0.0, 0.5, 1.0], n),
+ "action": np.random.randint(0, 5, n),
+ "reward_shaping": shaping_vals,
+ "reward_entry_additive": [0.0] * n,
+ "reward_exit_additive": [0.0] * n,
+ "reward_invalid": np.zeros(n),
+ "duration_ratio": np.random.uniform(0.2, 1.2, n),
+ "idle_ratio": np.zeros(n),
+ }
+ )
+ df.attrs["reward_params"] = {
+ "exit_potential_mode": "retain_previous",
+ "entry_additive_enabled": False,
+ "exit_additive_enabled": False,
+ }
+ out_dir = self.output_path / "non_canonical_mode_only"
+ write_complete_statistical_analysis(
+ df,
+ output_dir=out_dir,
+ profit_target=self.TEST_PROFIT_TARGET,
+ seed=self.SEED,
+ skip_feature_analysis=True,
+ skip_partial_dependence=True,
+ bootstrap_resamples=25,
+ )
+ report_path = out_dir / "statistical_analysis.md"
+ self.assertTrue(
+ report_path.exists(), "Report file missing for non-canonical mode-only reason test"
+ )
+ content = report_path.read_text(encoding="utf-8")
+ assert_pbrs_invariance_report_classification(
+ self, content, "Non-canonical", expect_additives=False
+ )
+ self.assertIn("exit_potential_mode='retain_previous'", content)
+
+ # Owns invariant: pbrs-absence-shift-placeholder-118
+ def test_pbrs_absence_and_distribution_shift_placeholder(self):
+ """Report generation without PBRS columns triggers absence + shift placeholder."""
+ import pandas as pd
+
+ n = 90
+ rng = np.random.default_rng(123)
+ df = pd.DataFrame(
+ {
+ "reward": rng.normal(0.05, 0.02, n),
+ "reward_idle": np.concatenate(
+ [
+ rng.normal(-0.01, 0.003, n // 2),
+ np.zeros(n - n // 2),
+ ]
+ ),
+ "reward_hold": rng.normal(0.0, 0.01, n),
+ "reward_exit": rng.normal(0.04, 0.015, n),
+ "pnl": rng.normal(0.0, 0.05, n),
+ "trade_duration": rng.uniform(5, 25, n),
+ "idle_duration": rng.uniform(1, 20, n),
+ "position": rng.choice([0.0, 0.5, 1.0], n),
+ "action": rng.integers(0, 3, n),
+ "reward_invalid": np.zeros(n),
+ "duration_ratio": rng.uniform(0.2, 1.0, n),
+ "idle_ratio": rng.uniform(0.0, 0.8, n),
+ }
+ )
+ out_dir = self.output_path / "pbrs_absence_and_shift_placeholder"
+ import reward_space_analysis as rsa
+
+ original_compute_summary_stats = rsa._compute_summary_stats
+
+ def _minimal_summary_stats(_df):
+ import pandas as _pd
+
+ comp_share = _pd.Series([], dtype=float)
+ action_summary = _pd.DataFrame(
+ columns=["count", "mean", "std", "min", "max"],
+ index=_pd.Index([], name="action"),
+ )
+ component_bounds = _pd.DataFrame(
+ columns=["component_min", "component_mean", "component_max"],
+ index=_pd.Index([], name="component"),
+ )
+ global_stats = _pd.Series([], dtype=float)
+ return {
+ "global_stats": global_stats,
+ "action_summary": action_summary,
+ "component_share": comp_share,
+ "component_bounds": component_bounds,
+ }
+
+ rsa._compute_summary_stats = _minimal_summary_stats
+ try:
+ write_complete_statistical_analysis(
+ df,
+ output_dir=out_dir,
+ profit_target=self.TEST_PROFIT_TARGET,
+ seed=self.SEED,
+ skip_feature_analysis=True,
+ skip_partial_dependence=True,
+ bootstrap_resamples=10,
+ )
+ finally:
+ rsa._compute_summary_stats = original_compute_summary_stats
+ report_path = out_dir / "statistical_analysis.md"
+ self.assertTrue(report_path.exists(), "Report file missing for PBRS absence test")
+ content = report_path.read_text(encoding="utf-8")
+ self.assertIn("_PBRS components not present in this analysis._", content)
+ self.assertIn("_Not performed (no real episodes provided)._", content)
+
+ def test_get_max_idle_duration_candles_negative_or_zero_fallback(self):
+ """Explicit mid<=0 fallback path returns derived default multiplier."""
+ from reward_space_analysis import (
+ DEFAULT_IDLE_DURATION_MULTIPLIER,
+ DEFAULT_MODEL_REWARD_PARAMETERS,
+ )
+
+ base = DEFAULT_MODEL_REWARD_PARAMETERS.copy()
+ base["max_trade_duration_candles"] = 64
+ base["max_idle_duration_candles"] = 0
+ result = get_max_idle_duration_candles(base)
+ expected = DEFAULT_IDLE_DURATION_MULTIPLIER * 64
+ self.assertEqual(
+ result, expected, f"Expected fallback {expected} for mid<=0 (got {result})"
+ )
+
+
+if __name__ == "__main__":
+ unittest.main()
--- /dev/null
+import unittest
+
+import pytest
+
+from reward_space_analysis import (
+ Actions,
+ Positions,
+ RewardContext,
+ RewardDiagnosticsWarning,
+ _get_exit_factor,
+ _hold_penalty,
+ _normalize_and_validate_mode,
+ validate_reward_parameters,
+)
+
+from ..helpers import run_strict_validation_failure_cases
+
+
+class _PyTestAdapter(unittest.TestCase):
+ """Adapter leveraging unittest.TestCase for assertion + subTest support.
+
+ Subclassing TestCase provides all assertion helpers and the subTest context manager
+ required by shared helpers in tests.helpers.
+ """
+
+ def runTest(self):
+ # Default test-method name expected by TestCase's no-arg constructor; no-op here.
+ pass
+
+
+@pytest.mark.robustness
+def test_validate_reward_parameters_strict_failure_batch():
+ """Batch strict validation failure scenarios using shared helper."""
+ adapter = _PyTestAdapter()
+ failure_params = [
+ {"exit_linear_slope": "not_a_number"},
+ {"exit_power_tau": 0.0},
+ {"exit_power_tau": 1.5},
+ {"exit_half_life": 0.0},
+ {"exit_half_life": float("nan")},
+ ]
+ run_strict_validation_failure_cases(adapter, failure_params, validate_reward_parameters)
+
+
+@pytest.mark.robustness
+def test_validate_reward_parameters_relaxed_adjustment_batch():
+ """Batch relaxed validation adjustment scenarios using shared helper."""
+ from ..helpers import run_relaxed_validation_adjustment_cases
+
+ relaxed_cases = [
+ ({"exit_linear_slope": "not_a_number", "strict_validation": False}, ["non_numeric_reset"]),
+ ({"exit_power_tau": float("inf"), "strict_validation": False}, ["non_numeric_reset"]),
+ ({"max_idle_duration_candles": "bad", "strict_validation": False}, ["derived_default"]),
+ ]
+ run_relaxed_validation_adjustment_cases(
+ _PyTestAdapter(), relaxed_cases, validate_reward_parameters
+ )
+
+
+@pytest.mark.robustness
+def test_normalize_and_validate_mode_fallback():
+ params = {"exit_attenuation_mode": "invalid_mode"}
+ _normalize_and_validate_mode(params)
+ assert params["exit_attenuation_mode"] == "linear"
+
+
+@pytest.mark.robustness
+def test_get_exit_factor_negative_plateau_grace_warning():
+ params = {"exit_attenuation_mode": "linear", "exit_plateau": True, "exit_plateau_grace": -1.0}
+ with pytest.warns(RewardDiagnosticsWarning):
+ factor = _get_exit_factor(
+ base_factor=10.0,
+ pnl=0.01,
+ pnl_factor=1.0,
+ duration_ratio=0.5,
+ params=params,
+ )
+ assert factor >= 0.0
+
+
+@pytest.mark.robustness
+def test_get_exit_factor_negative_linear_slope_warning():
+ params = {"exit_attenuation_mode": "linear", "exit_linear_slope": -5.0}
+ with pytest.warns(RewardDiagnosticsWarning):
+ factor = _get_exit_factor(
+ base_factor=10.0,
+ pnl=0.01,
+ pnl_factor=1.0,
+ duration_ratio=2.0,
+ params=params,
+ )
+ assert factor >= 0.0
+
+
+@pytest.mark.robustness
+def test_get_exit_factor_invalid_power_tau_relaxed():
+ params = {"exit_attenuation_mode": "power", "exit_power_tau": 0.0, "strict_validation": False}
+ with pytest.warns(RewardDiagnosticsWarning):
+ factor = _get_exit_factor(
+ base_factor=5.0,
+ pnl=0.02,
+ pnl_factor=1.0,
+ duration_ratio=1.5,
+ params=params,
+ )
+ assert factor > 0.0
+
+
+@pytest.mark.robustness
+def test_get_exit_factor_half_life_near_zero_relaxed():
+ params = {
+ "exit_attenuation_mode": "half_life",
+ "exit_half_life": 1e-12,
+ "strict_validation": False,
+ }
+ with pytest.warns(RewardDiagnosticsWarning):
+ factor = _get_exit_factor(
+ base_factor=5.0,
+ pnl=0.02,
+ pnl_factor=1.0,
+ duration_ratio=2.0,
+ params=params,
+ )
+ assert factor != 0.0
+
+
+@pytest.mark.robustness
+def test_hold_penalty_short_duration_returns_zero():
+ context = RewardContext(
+ pnl=0.0,
+ trade_duration=1, # shorter than default max trade duration (128)
+ idle_duration=0,
+ max_unrealized_profit=0.0,
+ min_unrealized_profit=0.0,
+ position=Positions.Long,
+ action=Actions.Neutral,
+ )
+ params = {"max_trade_duration_candles": 128}
+ penalty = _hold_penalty(context, hold_factor=1.0, params=params)
+ assert penalty == 0.0
+
+
+from ..helpers import assert_exit_factor_invariant_suite
+
+
+@pytest.mark.robustness
+def test_exit_factor_invariant_suite_grouped():
+ """Grouped exit factor invariant scenarios using shared helper."""
+ suite = [
+ {
+ "base_factor": 15.0,
+ "pnl": 0.02,
+ "pnl_factor": 1.0,
+ "duration_ratio": -5.0,
+ "params": {
+ "exit_attenuation_mode": "linear",
+ "exit_linear_slope": 1.2,
+ "exit_plateau": False,
+ },
+ "expectation": "non_negative",
+ },
+ {
+ "base_factor": 15.0,
+ "pnl": 0.02,
+ "pnl_factor": 1.0,
+ "duration_ratio": 0.0,
+ "params": {
+ "exit_attenuation_mode": "linear",
+ "exit_linear_slope": 1.2,
+ "exit_plateau": False,
+ },
+ "expectation": "non_negative",
+ },
+ {
+ "base_factor": float("nan"),
+ "pnl": 0.01,
+ "pnl_factor": 1.0,
+ "duration_ratio": 0.2,
+ "params": {"exit_attenuation_mode": "linear", "exit_linear_slope": 0.5},
+ "expectation": "safe_zero",
+ },
+ {
+ "base_factor": 10.0,
+ "pnl": float("nan"),
+ "pnl_factor": 1.0,
+ "duration_ratio": 0.2,
+ "params": {"exit_attenuation_mode": "linear", "exit_linear_slope": 0.5},
+ "expectation": "safe_zero",
+ },
+ {
+ "base_factor": 10.0,
+ "pnl": 0.01,
+ "pnl_factor": 1.0,
+ "duration_ratio": float("nan"),
+ "params": {"exit_attenuation_mode": "linear", "exit_linear_slope": 0.5},
+ "expectation": "safe_zero",
+ },
+ {
+ "base_factor": 10.0,
+ "pnl": 0.02,
+ "pnl_factor": float("inf"),
+ "duration_ratio": 0.5,
+ "params": {
+ "exit_attenuation_mode": "linear",
+ "exit_linear_slope": 1.0,
+ "check_invariants": True,
+ },
+ "expectation": "safe_zero",
+ },
+ {
+ "base_factor": 10.0,
+ "pnl": 0.015,
+ "pnl_factor": -2.5,
+ "duration_ratio": 2.0,
+ "params": {
+ "exit_attenuation_mode": "legacy",
+ "exit_plateau": False,
+ "check_invariants": True,
+ },
+ "expectation": "clamped",
+ },
+ ]
+ assert_exit_factor_invariant_suite(_PyTestAdapter(), suite, _get_exit_factor)
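The `safe_zero` and `non_negative` expectations in the suite above reflect a standard non-finite guard; a hypothetical sketch of such a kernel wrapper (assumed behavior, not the project's `_get_exit_factor`):

```python
import math


def safe_exit_factor(base_factor, pnl, pnl_factor, duration_ratio):
    """Return 0.0 when any input is non-finite; otherwise a linear-attenuated factor.

    In this sketch pnl is only checked for finiteness; it feeds pnl_factor upstream.
    """
    inputs = (base_factor, pnl, pnl_factor, duration_ratio)
    if not all(math.isfinite(x) for x in inputs):
        return 0.0  # safe_zero: NaN/inf inputs never propagate
    # Negative duration ratios clamp to 0, so the factor stays non-negative
    return max(base_factor * pnl_factor / (1.0 + max(duration_ratio, 0.0)), 0.0)
```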
import warnings
import numpy as np
+import pytest
from reward_space_analysis import (
ATTENUATION_MODES,
Actions,
Positions,
RewardContext,
+ RewardDiagnosticsWarning,
_get_exit_factor,
- _get_pnl_factor,
calculate_reward,
simulate_samples,
)
-from .test_base import RewardSpaceTestBase
+from ..helpers import (
+ assert_exit_factor_attenuation_modes,
+ assert_exit_mode_mathematical_validation,
+ assert_single_active_component_with_additives,
+)
+from ..test_base import RewardSpaceTestBase
+
+pytestmark = pytest.mark.robustness
class TestRewardRobustnessAndBoundaries(RewardSpaceTestBase):
- """Robustness & boundary assertions: invariants, attenuation maths, parameter edges, scaling, warnings."""
+ """Robustness invariants, attenuation maths, parameter edges, scaling, warnings."""
+ # Owns invariant: robustness-decomposition-integrity-101 (robustness category)
def test_decomposition_integrity(self):
"""reward must equal the single active core component under mutually exclusive scenarios (idle/hold/exit/invalid)."""
scenarios = [
),
]
for sc in scenarios:
- ctx_obj: RewardContext = sc["ctx"]
- active_label: str = sc["active"]
+ ctx_obj = sc["ctx"]
+ active_label = sc["active"]
+ assert isinstance(ctx_obj, RewardContext), (
+ f"Expected RewardContext, got {type(ctx_obj)}"
+ )
+ assert isinstance(active_label, str), f"Expected str, got {type(active_label)}"
with self.subTest(active=active_label):
params = self.base_params(
entry_additive_enabled=False,
short_allowed=True,
action_masking=True,
)
- core_components = {
- "exit_component": br.exit_component,
- "idle_penalty": br.idle_penalty,
- "hold_penalty": br.hold_penalty,
- "invalid_penalty": br.invalid_penalty,
- }
- for name, value in core_components.items():
- if name == active_label:
- self.assertAlmostEqualFloat(
- value,
- br.total,
- tolerance=self.TOL_IDENTITY_RELAXED,
- msg=f"Active component {name} != total",
- )
- else:
- self.assertNearZero(
- value,
- atol=self.TOL_IDENTITY_RELAXED,
- msg=f"Inactive component {name} not near zero (val={value})",
- )
- self.assertAlmostEqualFloat(
- br.reward_shaping, 0.0, tolerance=self.TOL_IDENTITY_RELAXED
- )
- self.assertAlmostEqualFloat(
- br.entry_additive, 0.0, tolerance=self.TOL_IDENTITY_RELAXED
- )
- self.assertAlmostEqualFloat(
- br.exit_additive, 0.0, tolerance=self.TOL_IDENTITY_RELAXED
+ assert_single_active_component_with_additives(
+ self,
+ br,
+ active_label,
+ self.TOL_IDENTITY_RELAXED,
+ inactive_core=[
+ "exit_component",
+ "idle_penalty",
+ "hold_penalty",
+ "invalid_penalty",
+ ],
)
+ # Owns invariant: robustness-exit-pnl-only-117 (robustness category)
def test_pnl_invariant_exit_only(self):
"""Invariant: only exit actions have non-zero PnL (robustness category)."""
df = simulate_samples(
places=10,
msg="PnL invariant violation: total PnL != sum of exit PnL",
)
- non_zero_pnl_actions = set(df[df["pnl"].abs() > self.EPS_BASE]["action"].unique())
+ non_zero_pnl_actions = set(np.unique(df[df["pnl"].abs() > self.EPS_BASE]["action"]))
expected_exit_actions = {2.0, 4.0}
self.assertTrue(
non_zero_pnl_actions.issubset(expected_exit_actions),
action=Actions.Long_exit,
)
params = self.DEFAULT_PARAMS.copy()
- duration_ratio = 50 / 100
-
- # Test power mode
- params["exit_attenuation_mode"] = "power"
- params["exit_power_tau"] = 0.5
- params["exit_plateau"] = False
- reward_power = calculate_reward(
- context,
- params,
- self.TEST_BASE_FACTOR,
- self.TEST_PROFIT_TARGET,
- self.TEST_RR,
- short_allowed=True,
- action_masking=True,
- )
- self.assertGreater(reward_power.exit_component, 0)
-
- # Test half_life mode with mathematical validation
- params["exit_attenuation_mode"] = "half_life"
- params["exit_half_life"] = 0.5
- reward_half_life = calculate_reward(
- context,
- params,
- self.TEST_BASE_FACTOR,
- self.TEST_PROFIT_TARGET,
- self.TEST_RR,
- short_allowed=True,
- action_masking=True,
- )
- pnl_factor_hl = _get_pnl_factor(params, context, self.TEST_PROFIT_TARGET, self.TEST_RR)
- observed_exit_factor = _get_exit_factor(
- self.TEST_BASE_FACTOR, context.pnl, pnl_factor_hl, duration_ratio, params
- )
- observed_half_life_factor = observed_exit_factor / (
- self.TEST_BASE_FACTOR * max(pnl_factor_hl, self.EPS_BASE)
- )
- expected_half_life_factor = 2 ** (-duration_ratio / params["exit_half_life"])
- self.assertAlmostEqualFloat(
- observed_half_life_factor,
- expected_half_life_factor,
- tolerance=self.TOL_IDENTITY_RELAXED,
- msg="Half-life attenuation mismatch: observed vs expected",
- )
- # Test linear mode
- params["exit_attenuation_mode"] = "linear"
- params["exit_linear_slope"] = 1.0
- reward_linear = calculate_reward(
+ assert_exit_mode_mathematical_validation(
+ self,
context,
params,
self.TEST_BASE_FACTOR,
self.TEST_PROFIT_TARGET,
self.TEST_RR,
- short_allowed=True,
- action_masking=True,
+ self.TOL_IDENTITY_RELAXED,
)
- rewards = [
- reward_power.exit_component,
- reward_half_life.exit_component,
- reward_linear.exit_component,
- ]
- self.assertTrue(all((r > 0 for r in rewards)))
- unique_rewards = set((f"{r:.6f}" for r in rewards))
- self.assertGreater(len(unique_rewards), 1)
# Part 2: Monotonic attenuation validation
modes = list(ATTENUATION_MODES) + ["plateau_linear"]
- base_factor = self.TEST_BASE_FACTOR
- pnl = 0.05
- pnl_factor = 1.0
- for mode in modes:
- with self.subTest(mode=mode):
- if mode == "plateau_linear":
- mode_params = self.base_params(
- exit_attenuation_mode="linear",
- exit_plateau=True,
- exit_plateau_grace=0.2,
- exit_linear_slope=1.0,
- )
- elif mode == "linear":
- mode_params = self.base_params(
- exit_attenuation_mode="linear", exit_linear_slope=1.2
- )
- elif mode == "power":
- mode_params = self.base_params(
- exit_attenuation_mode="power", exit_power_tau=0.5
- )
- elif mode == "half_life":
- mode_params = self.base_params(
- exit_attenuation_mode="half_life", exit_half_life=0.7
- )
- else:
- mode_params = self.base_params(exit_attenuation_mode="sqrt")
-
- ratios = np.linspace(0, 2, 15)
- values = [
- _get_exit_factor(base_factor, pnl, pnl_factor, r, mode_params) for r in ratios
- ]
-
- if mode == "plateau_linear":
- grace = float(mode_params["exit_plateau_grace"])
- filtered = [
- (r, v)
- for r, v in zip(ratios, values)
- if r >= grace - self.TOL_IDENTITY_RELAXED
- ]
- values_to_check = [v for _, v in filtered]
- else:
- values_to_check = values
-
- for earlier, later in zip(values_to_check, values_to_check[1:]):
- self.assertLessEqual(
- later,
- earlier + self.TOL_IDENTITY_RELAXED,
- f"Non-monotonic attenuation in mode={mode}",
- )
+ assert_exit_factor_attenuation_modes(
+ self,
+ base_factor=self.TEST_BASE_FACTOR,
+ pnl=0.05,
+ pnl_factor=1.0,
+ attenuation_modes=modes,
+ base_params_fn=self.base_params,
+ tolerance_relaxed=self.TOL_IDENTITY_RELAXED,
+ )
def test_exit_factor_threshold_warning_and_non_capping(self):
"""Warning emission without capping when exit_factor_threshold exceeded."""
diff1 = f_boundary - f1
diff2 = f_boundary - f2
ratio = diff1 / max(diff2, self.TOL_NUMERIC_GUARD)
- self.assertGreater(ratio, 5.0, f"Scaling ratio too small (ratio={ratio:.2f})")
- self.assertLess(ratio, 15.0, f"Scaling ratio too large (ratio={ratio:.2f})")
+ self.assertGreater(
+ ratio,
+ self.EXIT_FACTOR_SCALING_RATIO_MIN,
+ f"Scaling ratio too small (ratio={ratio:.2f})",
+ )
+ self.assertLess(
+ ratio,
+ self.EXIT_FACTOR_SCALING_RATIO_MAX,
+ f"Scaling ratio too large (ratio={ratio:.2f})",
+ )
+
+ # === Robustness invariants 102–105 ===
+ # Owns invariant: robustness-exit-mode-fallback-102
+ def test_robustness_102_unknown_exit_mode_fallback_linear(self):
+ """Invariant 102: Unknown exit_attenuation_mode gracefully warns and falls back to linear kernel."""
+ params = self.base_params(
+ exit_attenuation_mode="nonexistent_kernel_xyz", exit_plateau=False
+ )
+ base_factor = 75.0
+ pnl = 0.05
+ pnl_factor = 1.0
+ duration_ratio = 0.8
+ with warnings.catch_warnings(record=True) as caught:
+ warnings.simplefilter("always", RewardDiagnosticsWarning)
+ f_unknown = _get_exit_factor(base_factor, pnl, pnl_factor, duration_ratio, params)
+ linear_params = self.base_params(exit_attenuation_mode="linear", exit_plateau=False)
+ f_linear = _get_exit_factor(base_factor, pnl, pnl_factor, duration_ratio, linear_params)
+ self.assertAlmostEqualFloat(
+ f_unknown,
+ f_linear,
+ tolerance=self.TOL_IDENTITY_RELAXED,
+ msg=f"Fallback linear mismatch unknown={f_unknown} linear={f_linear}",
+ )
+ diag_warnings = [w for w in caught if issubclass(w.category, RewardDiagnosticsWarning)]
+ self.assertTrue(
+ diag_warnings, "No RewardDiagnosticsWarning emitted for unknown mode fallback"
+ )
+ self.assertTrue(
+ any("Unknown exit_attenuation_mode" in str(w.message) for w in diag_warnings),
+ "Fallback warning message content mismatch",
+ )
+
+ # Owns invariant: robustness-negative-grace-clamp-103
+ def test_robustness_103_negative_plateau_grace_clamped(self):
+ """Invariant 103: Negative exit_plateau_grace emits warning and clamps to 0.0 (no plateau extension)."""
+ params = self.base_params(
+ exit_attenuation_mode="linear",
+ exit_plateau=True,
+ exit_plateau_grace=-2.0,
+ exit_linear_slope=1.2,
+ )
+ base_factor = 90.0
+ pnl = 0.03
+ pnl_factor = 1.0
+ duration_ratio = 0.5
+ with warnings.catch_warnings(record=True) as caught:
+ warnings.simplefilter("always", RewardDiagnosticsWarning)
+ f_neg = _get_exit_factor(base_factor, pnl, pnl_factor, duration_ratio, params)
+ # Reference with grace=0.0 (since negative should clamp)
+ ref_params = self.base_params(
+ exit_attenuation_mode="linear",
+ exit_plateau=True,
+ exit_plateau_grace=0.0,
+ exit_linear_slope=1.2,
+ )
+ f_ref = _get_exit_factor(base_factor, pnl, pnl_factor, duration_ratio, ref_params)
+ self.assertAlmostEqualFloat(
+ f_neg,
+ f_ref,
+ tolerance=self.TOL_IDENTITY_RELAXED,
+ msg=f"Negative grace clamp mismatch f_neg={f_neg} f_ref={f_ref}",
+ )
+ diag_warnings = [w for w in caught if issubclass(w.category, RewardDiagnosticsWarning)]
+ self.assertTrue(diag_warnings, "No RewardDiagnosticsWarning for negative grace")
+ self.assertTrue(
+ any("exit_plateau_grace < 0" in str(w.message) for w in diag_warnings),
+ "Warning content missing for negative grace clamp",
+ )
+
+ # Owns invariant: robustness-invalid-power-tau-104
+ def test_robustness_104_invalid_power_tau_fallback_alpha_one(self):
+ """Invariant 104: Invalid exit_power_tau (<=0 or >1 or NaN) warns and falls back alpha=1.0."""
+ invalid_taus = [0.0, -0.5, 2.0, float("nan")]
+ base_factor = 120.0
+ pnl = 0.04
+ pnl_factor = 1.0
+ duration_ratio = 1.0
+ # Explicit alpha=1 expected ratio: f(dr)/f(0)=1/(1+dr)^1 with plateau disabled to observe attenuation.
+ expected_ratio_alpha1 = 1.0 / (1.0 + duration_ratio)
+ for tau in invalid_taus:
+ params = self.base_params(
+ exit_attenuation_mode="power", exit_power_tau=tau, exit_plateau=False
+ )
+ with warnings.catch_warnings(record=True) as caught:
+ warnings.simplefilter("always", RewardDiagnosticsWarning)
+ f0 = _get_exit_factor(base_factor, pnl, pnl_factor, 0.0, params)
+ f1 = _get_exit_factor(base_factor, pnl, pnl_factor, duration_ratio, params)
+ diag_warnings = [w for w in caught if issubclass(w.category, RewardDiagnosticsWarning)]
+ self.assertTrue(diag_warnings, f"No RewardDiagnosticsWarning for invalid tau={tau}")
+ self.assertTrue(any("exit_power_tau" in str(w.message) for w in diag_warnings))
+ ratio = f1 / max(f0, self.TOL_NUMERIC_GUARD)
+ self.assertAlmostEqual(
+ ratio,
+ expected_ratio_alpha1,
+ places=9,
+ msg=f"Alpha=1 fallback ratio mismatch tau={tau} ratio={ratio} expected={expected_ratio_alpha1}",
+ )
+
+ # Owns invariant: robustness-near-zero-half-life-105
+ def test_robustness_105_half_life_near_zero_fallback(self):
+ """Invariant 105: Near-zero exit_half_life warns and returns factor≈base_factor (no attenuation)."""
+ base_factor = 60.0
+ pnl = 0.02
+ pnl_factor = 1.0
+ duration_ratio = 0.7
+ near_zero_values = [1e-15, 1e-12, 5e-14]
+ for hl in near_zero_values:
+ params = self.base_params(exit_attenuation_mode="half_life", exit_half_life=hl)
+ with warnings.catch_warnings(record=True) as caught:
+ warnings.simplefilter("always", RewardDiagnosticsWarning)
+ _ = _get_exit_factor(base_factor, pnl, pnl_factor, 0.0, params)
+ fdr = _get_exit_factor(base_factor, pnl, pnl_factor, duration_ratio, params)
+ diag_warnings = [w for w in caught if issubclass(w.category, RewardDiagnosticsWarning)]
+ self.assertTrue(
+ diag_warnings, f"No RewardDiagnosticsWarning for near-zero half-life hl={hl}"
+ )
+ self.assertTrue(
+ any(
+ "exit_half_life" in str(w.message) and "close to 0" in str(w.message)
+ for w in diag_warnings
+ )
+ )
+ self.assertAlmostEqualFloat(
+ fdr,
+ 1.0 * pnl_factor, # Kernel returns 1.0 then * pnl_factor
+ tolerance=self.TOL_IDENTITY_RELAXED,
+ msg=f"Near-zero half-life attenuation mismatch hl={hl} fdr={fdr}",
+ )
if __name__ == "__main__":
--- /dev/null
+#!/usr/bin/env python3
+"""Targeted tests for _perform_feature_analysis failure and edge paths.
+
+Covers early stub returns and guarded exception branches to raise coverage:
+- Missing reward column
+- Empty frame
+- Single usable feature (<2 features path)
+- NaNs present after preprocessing (>=2 features path)
+- Model fitting failure (monkeypatched fit)
+- Permutation importance failure (monkeypatched permutation_importance) while partial dependence still computed
+- Successful partial dependence computation path (not skipped)
+- scikit-learn import fallback (RandomForestRegressor/train_test_split/permutation_importance/r2_score unavailable)
+"""
+
+import numpy as np
+import pandas as pd
+import pytest
+
+from reward_space_analysis import _perform_feature_analysis # type: ignore
+
+pytestmark = pytest.mark.statistics
+
+
+def _minimal_df(n: int = 30) -> pd.DataFrame:
+ rng = np.random.default_rng(42)
+ return pd.DataFrame(
+ {
+ "pnl": rng.normal(0, 1, n),
+ "trade_duration": rng.integers(1, 10, n),
+ "idle_duration": rng.integers(1, 5, n),
+ "position": rng.choice([0.0, 1.0], n),
+ "action": rng.integers(0, 3, n),
+ "is_invalid": rng.choice([0, 1], n),
+ "duration_ratio": rng.random(n),
+ "idle_ratio": rng.random(n),
+ "reward": rng.normal(0, 1, n),
+ }
+ )
+
+
+def test_feature_analysis_missing_reward_column():
+ df = _minimal_df().drop(columns=["reward"]) # remove reward
+ importance_df, stats, partial_deps, model = _perform_feature_analysis(
+ df, seed=7, skip_partial_dependence=True
+ )
+ assert importance_df.empty
+ assert stats["model_fitted"] is False
+ assert stats["n_features"] == 0
+ assert partial_deps == {}
+ assert model is None
+
+
+def test_feature_analysis_empty_frame():
+ df = _minimal_df(0) # empty
+ importance_df, stats, partial_deps, model = _perform_feature_analysis(
+ df, seed=7, skip_partial_dependence=True
+ )
+ assert importance_df.empty
+ assert stats["n_features"] == 0
+ assert model is None
+
+
+def test_feature_analysis_single_feature_path():
+ df = pd.DataFrame({"pnl": np.random.normal(0, 1, 25), "reward": np.random.normal(0, 1, 25)})
+ importance_df, stats, partial_deps, model = _perform_feature_analysis(
+ df, seed=11, skip_partial_dependence=True
+ )
+ assert stats["n_features"] == 1
+ # Importance stub path returns NaNs
+ assert importance_df["importance_mean"].isna().all()
+ assert model is None
+
+
+def test_feature_analysis_nans_present_path():
+ rng = np.random.default_rng(9)
+ df = pd.DataFrame(
+ {
+ "pnl": rng.normal(0, 1, 40),
+ "trade_duration": [1.0, np.nan] * 20, # introduces NaNs but not wholly NaN column
+ "reward": rng.normal(0, 1, 40),
+ }
+ )
+ importance_df, stats, partial_deps, model = _perform_feature_analysis(
+ df, seed=13, skip_partial_dependence=True
+ )
+ # Should hit NaN stub path (model_fitted False)
+ assert stats["model_fitted"] is False
+ assert importance_df["importance_mean"].isna().all()
+ assert model is None
+
+
+def test_feature_analysis_model_fitting_failure(monkeypatch):
+ # Monkeypatch model fit to raise
+ from reward_space_analysis import RandomForestRegressor # type: ignore
+
+ def boom(self, *a, **kw): # noqa: D401
+ raise RuntimeError("forced fit failure")
+
+ monkeypatch.setattr(RandomForestRegressor, "fit", boom)
+ df = _minimal_df(50)
+ importance_df, stats, partial_deps, model = _perform_feature_analysis(
+ df, seed=21, skip_partial_dependence=True
+ )
+ assert stats["model_fitted"] is False
+ assert model is None
+ assert importance_df["importance_mean"].isna().all()
+ # No manual restore needed: pytest's monkeypatch fixture reverts the patch at teardown
+
+
+def test_feature_analysis_permutation_failure_partial_dependence(monkeypatch):
+ # Monkeypatch permutation_importance to raise while allowing partial dependence
+ def perm_boom(*a, **kw): # noqa: D401
+ raise RuntimeError("forced permutation failure")
+
+ monkeypatch.setattr("reward_space_analysis.permutation_importance", perm_boom)
+ df = _minimal_df(60)
+ importance_df, stats, partial_deps, model = _perform_feature_analysis(
+ df, seed=33, skip_partial_dependence=False
+ )
+ assert stats["model_fitted"] is True
+ # Importance should be NaNs due to failure
+ assert importance_df["importance_mean"].isna().all()
+ # Partial dependence should still be attempted, producing entries for the features the function analyzes
+ assert len(partial_deps) >= 1 # at least one PD computed
+ assert model is not None
+
+
+def test_feature_analysis_success_partial_dependence():
+ df = _minimal_df(70)
+ importance_df, stats, partial_deps, model = _perform_feature_analysis(
+ df, seed=47, skip_partial_dependence=False
+ )
+ # Expect at least one non-NaN importance (model fitted path)
+ assert importance_df["importance_mean"].notna().any()
+ assert stats["model_fitted"] is True
+ assert len(partial_deps) >= 1
+ assert model is not None
+
+
+def test_feature_analysis_import_fallback(monkeypatch):
+ """Simulate scikit-learn components unavailable to hit ImportError early raise."""
+ # Set any one (or all) of the guarded sklearn symbols to None; function should fast-fail.
+ monkeypatch.setattr("reward_space_analysis.RandomForestRegressor", None)
+ monkeypatch.setattr("reward_space_analysis.train_test_split", None)
+ monkeypatch.setattr("reward_space_analysis.permutation_importance", None)
+ monkeypatch.setattr("reward_space_analysis.r2_score", None)
+ df = _minimal_df(10)
+ with pytest.raises(ImportError):
+ _perform_feature_analysis(df, seed=5, skip_partial_dependence=True)
+
+
+def test_module_level_sklearn_import_failure_reload():
+ """Force module-level sklearn import failure to execute fallback block (lines 32–42).
+
+ Strategy:
+ - Temporarily monkeypatch builtins.__import__ to raise on any 'sklearn' import.
+ - Remove 'reward_space_analysis' from sys.modules and re-import to trigger try/except.
+ - Assert guarded sklearn symbols are None (fallback assigned) in newly loaded module.
+ - Call its _perform_feature_analysis to confirm ImportError path surfaces.
+ - Restore original importer and original module to avoid side-effects on other tests.
+ """
+ import builtins
+ import importlib
+ import sys
+
+ orig_mod = sys.modules.get("reward_space_analysis")
+ orig_import = builtins.__import__
+
+ def fake_import(name, *args, **kwargs): # noqa: D401
+ if name.startswith("sklearn"):
+ raise RuntimeError("forced sklearn import failure")
+ return orig_import(name, *args, **kwargs)
+
+ builtins.__import__ = fake_import
+ try:
+ # Drop existing module to force fresh execution of top-level imports
+ if "reward_space_analysis" in sys.modules:
+ del sys.modules["reward_space_analysis"]
+ import reward_space_analysis as rsa_fallback # noqa: F401
+
+ # Fallback assigns sklearn symbols to None
+ assert getattr(rsa_fallback, "RandomForestRegressor") is None
+ assert getattr(rsa_fallback, "train_test_split") is None
+ assert getattr(rsa_fallback, "permutation_importance") is None
+ assert getattr(rsa_fallback, "r2_score") is None
+ # Perform feature analysis should raise ImportError under missing components
+ df = _minimal_df(15)
+ with pytest.raises(ImportError):
+ rsa_fallback._perform_feature_analysis(df, seed=3, skip_partial_dependence=True) # type: ignore[attr-defined]
+ finally:
+ # Restore importer
+ builtins.__import__ = orig_import
+ # Restore original module state if it existed
+ if orig_mod is not None:
+ sys.modules["reward_space_analysis"] = orig_mod
+ else:
+ if "reward_space_analysis" in sys.modules:
+ del sys.modules["reward_space_analysis"]
+ importlib.import_module("reward_space_analysis")
"""Statistical tests, distribution metrics, and bootstrap validation."""
import unittest
+import warnings
import numpy as np
import pandas as pd
+import pytest
from reward_space_analysis import (
+ RewardDiagnosticsWarning,
+ _binned_stats,
+ _compute_relationship_stats,
bootstrap_confidence_intervals,
compute_distribution_shift_metrics,
distribution_diagnostics,
statistical_hypothesis_tests,
)
-from .test_base import RewardSpaceTestBase
+from ..test_base import RewardSpaceTestBase
+
+pytestmark = pytest.mark.statistics
class TestStatistics(RewardSpaceTestBase):
"""Statistical tests: metrics, diagnostics, bootstrap, correlations."""
+ def test_statistics_feature_analysis_skip_partial_dependence(self):
+ """Invariant 107: skip_partial_dependence=True yields empty partial_deps."""
+ try:
+ from reward_space_analysis import _perform_feature_analysis # type: ignore
+ except ImportError:
+ self.skipTest("sklearn not available; skipping feature analysis invariance test")
+ # Use existing helper to get synthetic stats df (small for speed)
+ df = self.make_stats_df(n=120, seed=self.SEED, idle_pattern="mixed")
+ importance_df, analysis_stats, partial_deps, model = _perform_feature_analysis(
+ df, seed=self.SEED, skip_partial_dependence=True, rf_n_jobs=1, perm_n_jobs=1
+ )
+ self.assertIsInstance(importance_df, pd.DataFrame)
+ self.assertIsInstance(analysis_stats, dict)
+ self.assertEqual(
+ partial_deps, {}, "partial_deps must be empty when skip_partial_dependence=True"
+ )
+
+ def test_statistics_binned_stats_invalid_bins_raises(self):
+ """Invariant 110: _binned_stats must raise ValueError for <2 bin edges."""
+
+ df = self.make_stats_df(n=50, seed=self.SEED)
+ with self.assertRaises(ValueError):
+ _binned_stats(df, "idle_duration", "reward_idle", [0.0]) # single edge invalid
+ # Control: valid case should not raise and produce frame
+ result = _binned_stats(df, "idle_duration", "reward_idle", [0.0, 10.0, 20.0])
+ self.assertIsInstance(result, pd.DataFrame)
+ self.assertGreaterEqual(len(result), 1)
+
+ def test_statistics_correlation_dropped_constant_columns(self):
+ """Invariant 111: constant columns are listed in correlation_dropped and excluded."""
+
+ df = self.make_stats_df(n=90, seed=self.SEED)
+ # Force some columns constant
+ df.loc[:, "reward_hold"] = 0.0
+ df.loc[:, "idle_duration"] = 5.0
+ stats_rel = _compute_relationship_stats(df)
+ dropped = stats_rel["correlation_dropped"]
+ self.assertIn("reward_hold", dropped)
+ self.assertIn("idle_duration", dropped)
+ corr = stats_rel["correlation"]
+ self.assertIsInstance(corr, pd.DataFrame)
+ self.assertNotIn("reward_hold", corr.columns)
+ self.assertNotIn("idle_duration", corr.columns)
+
+ def test_statistics_distribution_shift_metrics_degenerate_zero(self):
+ """Invariant 112: degenerate distributions yield zero shift metrics and KS p=1.0."""
+ # Build two identical constant distributions (length >=10)
+ n = 40
+ df_const = pd.DataFrame(
+ {
+ "pnl": np.zeros(n),
+ "trade_duration": np.ones(n) * 7.0,
+ "idle_duration": np.ones(n) * 3.0,
+ }
+ )
+ metrics = compute_distribution_shift_metrics(df_const, df_const.copy())
+ # Each feature should have zero metrics and ks_pvalue=1.0
+ for feature in ["pnl", "trade_duration", "idle_duration"]:
+ for suffix in ["kl_divergence", "js_distance", "wasserstein", "ks_statistic"]:
+ key = f"{feature}_{suffix}"
+ if key in metrics:
+ self.assertPlacesEqual(
+ float(metrics[key]), 0.0, places=12, msg=f"Expected 0 for {key}"
+ )
+ p_key = f"{feature}_ks_pvalue"
+ if p_key in metrics:
+ self.assertPlacesEqual(
+ float(metrics[p_key]), 1.0, places=12, msg=f"Expected 1.0 for {p_key}"
+ )
+
def _make_idle_variance_df(self, n: int = 100) -> pd.DataFrame:
"""Synthetic dataframe focusing on idle_duration ↔ reward_idle correlation."""
self.seed_all(self.SEED)
}
)
- def test_stats_distribution_shift_metrics(self):
+ def test_statistics_distribution_shift_metrics(self):
"""KL/JS/Wasserstein metrics."""
df1 = self._make_idle_variance_df(100)
df2 = self._make_idle_variance_df(100)
else:
self.assertFinite(value, name=metric_name)
- def test_stats_distribution_shift_identity_null_metrics(self):
+ def test_statistics_distribution_shift_identity_null_metrics(self):
"""Identity distributions -> near-zero shift metrics."""
df = self._make_idle_variance_df(180)
metrics_id = compute_distribution_shift_metrics(df, df.copy())
f"KS statistic should be near 0 on identical distributions (got {val})",
)
- def test_stats_hypothesis_testing(self):
+ def test_statistics_hypothesis_testing(self):
"""Light correlation sanity check."""
df = self._make_idle_variance_df(200)
if len(df) > 30:
negative_ratio, 0.5, "Most idle rewards should be negative (penalties)"
)
+ def test_statistics_distribution_constant_fallback_diagnostics(self):
+ """Invariant 115: constant distribution triggers fallback diagnostics (zero moments, qq_r2=1.0)."""
+ # Build constant reward/pnl columns to force degenerate stats
+ n = 60
+ df_const = pd.DataFrame(
+ {
+ "reward": np.zeros(n),
+ "reward_idle": np.zeros(n),
+ "reward_hold": np.zeros(n),
+ "pnl": np.zeros(n),
+ "pnl_raw": np.zeros(n),
+ }
+ )
+ diagnostics = distribution_diagnostics(df_const)
+ # Mean and std for constant arrays
+ for key in ["reward_mean", "reward_std", "pnl_mean", "pnl_std"]:
+ if key in diagnostics:
+ self.assertAlmostEqualFloat(
+ float(diagnostics[key]), 0.0, tolerance=self.TOL_IDENTITY_RELAXED
+ )
+ # Skewness & kurtosis fallback to INTERNAL_GUARDS['distribution_constant_fallback_moment'] (0.0)
+ for key in ["reward_skewness", "reward_kurtosis", "pnl_skewness", "pnl_kurtosis"]:
+ if key in diagnostics:
+ self.assertAlmostEqualFloat(
+ float(diagnostics[key]), 0.0, tolerance=self.TOL_IDENTITY_RELAXED
+ )
+ # Q-Q plot r2 fallback value
+ qq_key = next((k for k in diagnostics if k.endswith("_qq_r2")), None)
+ if qq_key is not None:
+ self.assertAlmostEqualFloat(
+ float(diagnostics[qq_key]), 1.0, tolerance=self.TOL_IDENTITY_RELAXED
+ )
+ # All diagnostic values finite
+ for k, v in diagnostics.items():
+ self.assertFinite(v, name=k)
+
def test_stats_distribution_diagnostics(self):
"""Distribution diagnostics."""
df = self._make_idle_variance_df(100)
f"Expected near-zero divergence after equal scaling (k={k}, v={v})",
)
+ # Non-owning smoke; ownership: robustness/test_robustness.py:35 (robustness-decomposition-integrity-101)
+ @pytest.mark.smoke
def test_stats_mean_decomposition_consistency(self):
"""Batch mean additivity."""
df_a = self._shift_scale_df(120)
flags.append(bool(v["significant"]))
if flags:
rate = sum(flags) / len(flags)
- self.assertLess(rate, 0.15, f"BH null FP rate too high under null: {rate:.3f}")
+ self.assertLess(
+ rate, self.BH_FP_RATE_THRESHOLD, f"BH null FP rate too high under null: {rate:.3f}"
+ )
def test_stats_half_life_monotonic_series(self):
"""Smoothed exponential decay monotonic."""
self.assertEqual(v1, v2, f"Mismatch for {k}:{field}")
metrics = ["reward", "pnl"]
ci_a = bootstrap_confidence_intervals(
- df, metrics, n_bootstrap=150, seed=self.SEED_BOOTSTRAP
+ df, metrics, n_bootstrap=self.BOOTSTRAP_DEFAULT_ITERATIONS, seed=self.SEED_BOOTSTRAP
)
ci_b = bootstrap_confidence_intervals(
- df, metrics, n_bootstrap=150, seed=self.SEED_BOOTSTRAP
+ df, metrics, n_bootstrap=self.BOOTSTRAP_DEFAULT_ITERATIONS, seed=self.SEED_BOOTSTRAP
)
for metric in metrics:
m_a, lo_a, hi_a = ci_a[metric]
self.assertFinite(hw_large, name="hw_large")
self.assertLess(hw_large, hw_small * 0.55)
- def test_stats_bootstrap_constant_distribution_and_diagnostics(self):
- """Bootstrap on degenerate columns produce (mean≈lo≈hi) zero-width intervals."""
+ # Owns invariant: statistics-constant-dist-widened-ci-113a
+ def test_stats_bootstrap_constant_distribution_widening(self):
+ """Invariant 113 (non-strict): a constant distribution's CI is widened, with a warning (positive epsilon width)."""
df = self._const_df(80)
- res = bootstrap_confidence_intervals(
- df, ["reward", "pnl"], n_bootstrap=200, confidence_level=0.95
+ with warnings.catch_warnings(record=True) as caught:
+ warnings.simplefilter("always", RewardDiagnosticsWarning)
+ res = bootstrap_confidence_intervals(
+ df,
+ ["reward", "pnl"],
+ n_bootstrap=self.BOOTSTRAP_DEFAULT_ITERATIONS,
+ confidence_level=0.95,
+ strict_diagnostics=False,
+ )
+ diag_warnings = [w for w in caught if issubclass(w.category, RewardDiagnosticsWarning)]
+ self.assertTrue(
+ diag_warnings,
+ "Expected RewardDiagnosticsWarning for degenerate bootstrap CI widening",
)
for _metric, (mean, lo, hi) in res.items():
- self.assertAlmostEqualFloat(mean, lo, tolerance=2e-09)
- self.assertAlmostEqualFloat(mean, hi, tolerance=2e-09)
- self.assertLessEqual(hi - lo, 2e-09)
- if "effect_size_rank_biserial" in res:
- rb = res["effect_size_rank_biserial"]
- self.assertFinite(rb)
- self.assertWithin(rb, -1, 1, name="rank_biserial")
+ self.assertLess(
+ lo,
+ hi,
+ "Degenerate CI should be widened (lo < hi) under non-strict diagnostics",
+ )
+ width = hi - lo
+ self.assertGreater(width, 0.0)
+ self.assertLessEqual(width, 3e-09, "Width should be a small epsilon (<= 3e-9)")
+ # Mean should lie within the widened bounds
+ self.assertGreaterEqual(mean, lo)
+ self.assertLessEqual(mean, hi)
+
+ # Owns invariant: statistics-constant-dist-strict-omit-113b
+ def test_stats_bootstrap_constant_distribution_strict_diagnostics(self):
+ """Invariant 113 (strict): constant distribution metrics are omitted (no widened CI returned)."""
+ df = self._const_df(60)
+ res = bootstrap_confidence_intervals(
+ df, ["reward", "pnl"], n_bootstrap=150, confidence_level=0.95, strict_diagnostics=True
+ )
+ # Strict mode should omit constant metrics entirely
+ self.assertTrue(
+ all(m not in res for m in ["reward", "pnl"]),
+ f"Strict diagnostics should omit constant metrics; got keys: {list(res.keys())}",
+ )
if __name__ == "__main__":
cls.TEST_RR_HIGH = 2.0
cls.TEST_PNL_STD = 0.02
cls.TEST_PNL_DUR_VOL_SCALE = 0.5
- # Specialized seeds for different test contexts
+ # Seeds for different test contexts
cls.SEED_SMOKE_TEST = 7
cls.SEED_REPRODUCIBILITY = 777
cls.SEED_BOOTSTRAP = 2024
cls.SEED_HETEROSCEDASTICITY = 123
+ # Statistical test thresholds
+ cls.BOOTSTRAP_DEFAULT_ITERATIONS = 200
+ cls.BH_FP_RATE_THRESHOLD = 0.15
+ cls.EXIT_FACTOR_SCALING_RATIO_MIN = 5.0
+ cls.EXIT_FACTOR_SCALING_RATIO_MAX = 15.0
def setUp(self):
"""Set up test fixtures with reproducible random seed."""
+++ /dev/null
-#!/usr/bin/env python3
-"""Tests for Potential-Based Reward Shaping (PBRS) mechanics."""
-
-import math
-import unittest
-
-import numpy as np
-
-from reward_space_analysis import (
- DEFAULT_MODEL_REWARD_PARAMETERS,
- PBRS_INVARIANCE_TOL,
- _compute_entry_additive,
- _compute_exit_additive,
- _compute_exit_potential,
- _compute_hold_potential,
- _get_float_param,
- apply_potential_shaping,
- apply_transform,
- simulate_samples,
- validate_reward_parameters,
-)
-
-from .test_base import RewardSpaceTestBase
-
-
-class TestPBRS(RewardSpaceTestBase):
- """PBRS mechanics tests (transforms, parameters, potentials, invariance)."""
-
- def test_pbrs_progressive_release_decay_clamped(self):
- """progressive_release decay>1 clamps -> Φ'=0 & Δ=-Φ_prev."""
- params = self.DEFAULT_PARAMS.copy()
- params.update(
- {
- "potential_gamma": DEFAULT_MODEL_REWARD_PARAMETERS["potential_gamma"],
- "exit_potential_mode": "progressive_release",
- "exit_potential_decay": 5.0,
- "hold_potential_enabled": True,
- "entry_additive_enabled": False,
- "exit_additive_enabled": False,
- }
- )
- current_pnl = 0.02
- current_dur = 0.5
- prev_potential = _compute_hold_potential(current_pnl, current_dur, params)
- _total_reward, reward_shaping, next_potential = apply_potential_shaping(
- base_reward=0.0,
- current_pnl=current_pnl,
- current_duration_ratio=current_dur,
- next_pnl=0.0,
- next_duration_ratio=0.0,
- is_exit=True,
- is_entry=False,
- last_potential=0.789,
- params=params,
- )
- self.assertAlmostEqualFloat(next_potential, 0.0, tolerance=self.TOL_IDENTITY_RELAXED)
- self.assertAlmostEqualFloat(
- reward_shaping, -prev_potential, tolerance=self.TOL_IDENTITY_RELAXED
- )
-
- def test_pbrs_spike_cancel_invariance(self):
- """spike_cancel terminal shaping ≈0 (Φ' inversion yields cancellation)."""
- params = self.DEFAULT_PARAMS.copy()
- params.update(
- {
- "potential_gamma": 0.9,
- "exit_potential_mode": "spike_cancel",
- "hold_potential_enabled": True,
- "entry_additive_enabled": False,
- "exit_additive_enabled": False,
- }
- )
- current_pnl = 0.015
- current_dur = 0.4
- prev_potential = _compute_hold_potential(current_pnl, current_dur, params)
- gamma = _get_float_param(
- params, "potential_gamma", DEFAULT_MODEL_REWARD_PARAMETERS.get("potential_gamma", 0.95)
- )
- expected_next_potential = (
- prev_potential / gamma if gamma not in (0.0, None) else prev_potential
- )
- _total_reward, reward_shaping, next_potential = apply_potential_shaping(
- base_reward=0.0,
- current_pnl=current_pnl,
- current_duration_ratio=current_dur,
- next_pnl=0.0,
- next_duration_ratio=0.0,
- is_exit=True,
- is_entry=False,
- last_potential=prev_potential,
- params=params,
- )
- self.assertAlmostEqualFloat(
- next_potential, expected_next_potential, tolerance=self.TOL_IDENTITY_RELAXED
- )
- self.assertNearZero(reward_shaping, atol=self.TOL_IDENTITY_RELAXED)
-
- def test_tanh_transform(self):
- """tanh transform: tanh(x) in (-1, 1)."""
- self.assertAlmostEqualFloat(apply_transform("tanh", 0.0), 0.0)
- self.assertAlmostEqualFloat(apply_transform("tanh", 1.0), math.tanh(1.0))
- self.assertAlmostEqualFloat(apply_transform("tanh", -1.0), math.tanh(-1.0))
- self.assertTrue(abs(apply_transform("tanh", 100.0)) <= 1.0)
- self.assertTrue(abs(apply_transform("tanh", -100.0)) <= 1.0)
-
- def test_softsign_transform(self):
- """softsign transform: x / (1 + |x|) in (-1, 1)."""
- self.assertAlmostEqualFloat(apply_transform("softsign", 0.0), 0.0)
- self.assertAlmostEqualFloat(apply_transform("softsign", 1.0), 0.5)
- self.assertAlmostEqualFloat(apply_transform("softsign", -1.0), -0.5)
- self.assertTrue(abs(apply_transform("softsign", 100.0)) < 1.0)
- self.assertTrue(abs(apply_transform("softsign", -100.0)) < 1.0)
-
- def test_canonical_invariance_flag_and_sum(self):
- """Canonical mode + no additives -> pbrs_invariant True and Σ shaping ≈ 0."""
- params = self.base_params(
- exit_potential_mode="canonical",
- entry_additive_enabled=False,
- exit_additive_enabled=False,
- hold_potential_enabled=True,
- )
- df = simulate_samples(
- params={**params, "max_trade_duration_candles": 100},
- num_samples=400,
- seed=self.SEED,
- base_factor=self.TEST_BASE_FACTOR,
- profit_target=self.TEST_PROFIT_TARGET,
- risk_reward_ratio=self.TEST_RR,
- max_duration_ratio=2.0,
- trading_mode="margin",
- pnl_base_std=self.TEST_PNL_STD,
- pnl_duration_vol_scale=self.TEST_PNL_DUR_VOL_SCALE,
- )
- unique_flags = set(df["pbrs_invariant"].unique().tolist())
- self.assertEqual(unique_flags, {True}, f"Unexpected invariant flags: {unique_flags}")
- total_shaping = float(df["reward_shaping"].sum())
- self.assertLess(
- abs(total_shaping),
- PBRS_INVARIANCE_TOL,
- f"Canonical invariance violated: Σ shaping = {total_shaping}",
- )
-
- def test_non_canonical_flag_false_and_sum_nonzero(self):
- """Non-canonical exit potential (progressive_release) -> pbrs_invariant False and Σ shaping != 0."""
- params = self.base_params(
- exit_potential_mode="progressive_release",
- exit_potential_decay=0.25,
- entry_additive_enabled=False,
- exit_additive_enabled=False,
- hold_potential_enabled=True,
- )
- df = simulate_samples(
- params={**params, "max_trade_duration_candles": 100},
- num_samples=400,
- seed=self.SEED,
- base_factor=self.TEST_BASE_FACTOR,
- profit_target=self.TEST_PROFIT_TARGET,
- risk_reward_ratio=self.TEST_RR,
- max_duration_ratio=2.0,
- trading_mode="margin",
- pnl_base_std=self.TEST_PNL_STD,
- pnl_duration_vol_scale=self.TEST_PNL_DUR_VOL_SCALE,
- )
- unique_flags = set(df["pbrs_invariant"].unique().tolist())
- self.assertEqual(unique_flags, {False}, f"Unexpected invariant flags: {unique_flags}")
- total_shaping = float(df["reward_shaping"].sum())
- self.assertGreater(
- abs(total_shaping),
- PBRS_INVARIANCE_TOL * 10,
- f"Expected non-zero Σ shaping in non-canonical mode (got {total_shaping})",
- )
-
- def test_asinh_transform(self):
- """asinh transform: x / sqrt(1 + x^2) in (-1, 1)."""
- self.assertAlmostEqualFloat(apply_transform("asinh", 0.0), 0.0)
- self.assertAlmostEqualFloat(
- apply_transform("asinh", 1.2345),
- -apply_transform("asinh", -1.2345),
- tolerance=self.TOL_IDENTITY_STRICT,
- )
- vals = [apply_transform("asinh", x) for x in [-5.0, -1.0, 0.0, 1.0, 5.0]]
- self.assertTrue(all((vals[i] < vals[i + 1] for i in range(len(vals) - 1))))
- self.assertTrue(abs(apply_transform("asinh", 1000000.0)) < 1.0)
- self.assertTrue(abs(apply_transform("asinh", -1000000.0)) < 1.0)
-
- def test_arctan_transform(self):
- """arctan transform: (2/pi) * arctan(x) in (-1, 1)."""
- self.assertAlmostEqualFloat(apply_transform("arctan", 0.0), 0.0)
- self.assertAlmostEqualFloat(
- apply_transform("arctan", 1.0), 2.0 / math.pi * math.atan(1.0), tolerance=1e-10
- )
- self.assertTrue(abs(apply_transform("arctan", 100.0)) <= 1.0)
- self.assertTrue(abs(apply_transform("arctan", -100.0)) <= 1.0)
-
- def test_sigmoid_transform(self):
- """sigmoid transform: 2σ(x) - 1, σ(x) = 1/(1 + e^(-x)) in (-1, 1)."""
- self.assertAlmostEqualFloat(apply_transform("sigmoid", 0.0), 0.0)
- self.assertTrue(apply_transform("sigmoid", 100.0) > 0.99)
- self.assertTrue(apply_transform("sigmoid", -100.0) < -0.99)
- self.assertTrue(-1 < apply_transform("sigmoid", 10.0) < 1)
- self.assertTrue(-1 < apply_transform("sigmoid", -10.0) < 1)
-
- def test_clip_transform(self):
- """clip transform: clip(x, -1, 1) in [-1, 1]."""
- self.assertAlmostEqualFloat(apply_transform("clip", 0.0), 0.0)
- self.assertAlmostEqualFloat(apply_transform("clip", 0.5), 0.5)
- self.assertAlmostEqualFloat(apply_transform("clip", 2.0), 1.0)
- self.assertAlmostEqualFloat(apply_transform("clip", -2.0), -1.0)
-
- def test_invalid_transform(self):
- """Test error handling for invalid transforms."""
- self.assertAlmostEqualFloat(
- apply_transform("invalid_transform", 1.0),
- math.tanh(1.0),
- tolerance=self.TOL_IDENTITY_RELAXED,
- )
-
- def test_additive_components_disabled_return_zero(self):
- """Test entry and exit additives return zero when disabled."""
- # Test entry additive disabled
- params_entry = {"entry_additive_enabled": False}
- val_entry = _compute_entry_additive(0.5, 0.3, params_entry)
- self.assertEqual(val_entry, 0.0)
-
- # Test exit additive disabled
- params_exit = {"exit_additive_enabled": False}
- val_exit = _compute_exit_additive(0.5, 0.3, params_exit)
- self.assertEqual(val_exit, 0.0)
-
- def test_exit_potential_canonical(self):
- """Test exit potential canonical."""
- params = self.base_params(
- exit_potential_mode="canonical",
- hold_potential_enabled=True,
- entry_additive_enabled=True,
- exit_additive_enabled=True,
- )
- base_reward = 0.25
- current_pnl = 0.05
- current_duration_ratio = 0.4
- next_pnl = 0.0
- next_duration_ratio = 0.0
- total, shaping, next_potential = apply_potential_shaping(
- base_reward=base_reward,
- current_pnl=current_pnl,
- current_duration_ratio=current_duration_ratio,
- next_pnl=next_pnl,
- next_duration_ratio=next_duration_ratio,
- is_exit=True,
- is_entry=False,
- last_potential=0.789,
- params=params,
- )
- self.assertIn("_pbrs_invariance_applied", params)
- self.assertFalse(
- params["entry_additive_enabled"],
- "Entry additive should be auto-disabled in canonical mode",
- )
- self.assertFalse(
- params["exit_additive_enabled"],
- "Exit additive should be auto-disabled in canonical mode",
- )
- self.assertPlacesEqual(next_potential, 0.0, places=12)
- current_potential = _compute_hold_potential(
- current_pnl,
- current_duration_ratio,
- {"hold_potential_enabled": True, "hold_potential_scale": 1.0},
- )
- self.assertAlmostEqual(shaping, -current_potential, delta=self.TOL_IDENTITY_RELAXED)
- residual = total - base_reward - shaping
- self.assertAlmostEqual(residual, 0.0, delta=self.TOL_IDENTITY_RELAXED)
- self.assertTrue(np.isfinite(total))
-
- def test_pbrs_invariance_internal_flag_set(self):
- """Canonical path sets _pbrs_invariance_applied once; second call idempotent."""
- params = self.base_params(
- exit_potential_mode="canonical",
- hold_potential_enabled=True,
- entry_additive_enabled=True,
- exit_additive_enabled=True,
- )
- terminal_next_potentials, shaping_values = self._canonical_sweep(params)
- _t1, _s1, _n1 = apply_potential_shaping(
- base_reward=0.0,
- current_pnl=0.05,
- current_duration_ratio=0.3,
- next_pnl=0.0,
- next_duration_ratio=0.0,
- is_exit=True,
- is_entry=False,
- last_potential=0.4,
- params=params,
- )
- self.assertIn("_pbrs_invariance_applied", params)
- self.assertFalse(params["entry_additive_enabled"])
- self.assertFalse(params["exit_additive_enabled"])
- if terminal_next_potentials:
- self.assertTrue(
- all((abs(p) < self.PBRS_TERMINAL_TOL for p in terminal_next_potentials))
- )
- max_abs = max((abs(v) for v in shaping_values)) if shaping_values else 0.0
- self.assertLessEqual(max_abs, self.PBRS_MAX_ABS_SHAPING)
- state_after = (params["entry_additive_enabled"], params["exit_additive_enabled"])
- _t2, _s2, _n2 = apply_potential_shaping(
- base_reward=0.0,
- current_pnl=0.02,
- current_duration_ratio=0.1,
- next_pnl=0.0,
- next_duration_ratio=0.0,
- is_exit=True,
- is_entry=False,
- last_potential=0.1,
- params=params,
- )
- self.assertEqual(
- state_after, (params["entry_additive_enabled"], params["exit_additive_enabled"])
- )
-
- def test_progressive_release_negative_decay_clamped(self):
- """Negative decay must clamp to 0 => next potential equals last potential (no release)."""
- params = self.base_params(
- exit_potential_mode="progressive_release",
- exit_potential_decay=-0.75,
- hold_potential_enabled=True,
- )
- last_potential = 0.42
- total, shaping, next_potential = apply_potential_shaping(
- base_reward=0.0,
- current_pnl=0.0,
- current_duration_ratio=0.0,
- next_pnl=0.0,
- next_duration_ratio=0.0,
- is_exit=True,
- last_potential=last_potential,
- params=params,
- )
- self.assertPlacesEqual(next_potential, last_potential, places=12)
- gamma_raw = DEFAULT_MODEL_REWARD_PARAMETERS.get("potential_gamma", 0.95)
- try:
- gamma = float(gamma_raw)
- except Exception:
- gamma = 0.95
- self.assertLessEqual(abs(shaping - gamma * last_potential), self.TOL_GENERIC_EQ)
- self.assertPlacesEqual(total, shaping, places=12)
-
- def test_potential_gamma_nan_fallback(self):
- """potential_gamma=NaN should fall back to default value (indirect comparison)."""
- base_params_dict = self.base_params()
- default_gamma = base_params_dict.get("potential_gamma", 0.95)
- params_nan = self.base_params(potential_gamma=np.nan, hold_potential_enabled=True)
- res_nan = apply_potential_shaping(
- base_reward=0.1,
- current_pnl=0.03,
- current_duration_ratio=0.2,
- next_pnl=0.035,
- next_duration_ratio=0.25,
- is_exit=False,
- last_potential=0.0,
- params=params_nan,
- )
- params_ref = self.base_params(potential_gamma=default_gamma, hold_potential_enabled=True)
- res_ref = apply_potential_shaping(
- base_reward=0.1,
- current_pnl=0.03,
- current_duration_ratio=0.2,
- next_pnl=0.035,
- next_duration_ratio=0.25,
- is_exit=False,
- last_potential=0.0,
- params=params_ref,
- )
- self.assertLess(
- abs(res_nan[1] - res_ref[1]),
- self.TOL_IDENTITY_RELAXED,
- "Unexpected shaping difference under gamma NaN fallback",
- )
- self.assertLess(
- abs(res_nan[0] - res_ref[0]),
- self.TOL_IDENTITY_RELAXED,
- "Unexpected total difference under gamma NaN fallback",
- )
-
- def test_validate_reward_parameters_success_and_failure(self):
- """validate_reward_parameters: success on defaults and failure on invalid ranges."""
- params_ok = DEFAULT_MODEL_REWARD_PARAMETERS.copy()
- try:
- validated = validate_reward_parameters(params_ok)
- except Exception as e:
- self.fail(f"validate_reward_parameters raised unexpectedly: {e}")
- if isinstance(validated, tuple) and len(validated) >= 1 and isinstance(validated[0], dict):
- validated_params = validated[0]
- else:
- validated_params = validated
- for k in ("potential_gamma", "hold_potential_enabled", "exit_potential_mode"):
- self.assertIn(k, validated_params, f"Missing key '{k}' in validated params")
- params_bad = params_ok.copy()
- params_bad["potential_gamma"] = -0.2
- params_bad["hold_potential_scale"] = -5.0
- with self.assertRaises((ValueError, AssertionError)):
- vr = validate_reward_parameters(params_bad)
- if not isinstance(vr, Exception):
- self.fail("validate_reward_parameters should raise on invalid params")
-
- def test_compute_exit_potential_mode_differences(self):
- """_compute_exit_potential modes: canonical resets Φ; spike_cancel approx preserves γΦ' ≈ Φ_prev (delta≈0)."""
- gamma = 0.93
- base_common = dict(
- hold_potential_enabled=True,
- potential_gamma=gamma,
- entry_additive_enabled=False,
- exit_additive_enabled=False,
- hold_potential_scale=1.0,
- )
- ctx_pnl = 0.012
- ctx_dur_ratio = 0.3
- params_can = self.base_params(exit_potential_mode="canonical", **base_common)
- prev_phi = _compute_hold_potential(ctx_pnl, ctx_dur_ratio, params_can)
- self.assertFinite(prev_phi, name="prev_phi")
- next_phi_can = _compute_exit_potential(prev_phi, params_can)
- self.assertAlmostEqualFloat(
- next_phi_can,
- 0.0,
- tolerance=self.TOL_IDENTITY_STRICT,
- msg="Canonical exit must zero potential",
- )
- canonical_delta = -prev_phi
- self.assertAlmostEqualFloat(
- canonical_delta,
- -prev_phi,
- tolerance=self.TOL_IDENTITY_RELAXED,
- msg="Canonical delta mismatch",
- )
- params_spike = self.base_params(exit_potential_mode="spike_cancel", **base_common)
- next_phi_spike = _compute_exit_potential(prev_phi, params_spike)
- shaping_spike = gamma * next_phi_spike - prev_phi
- self.assertNearZero(
- shaping_spike,
- atol=self.TOL_IDENTITY_RELAXED,
- msg="Spike cancel should nullify shaping delta",
- )
- self.assertGreaterEqual(
- abs(canonical_delta) + self.TOL_IDENTITY_STRICT,
- abs(shaping_spike),
- "Canonical shaping magnitude should exceed spike_cancel",
- )
-
- def test_transform_bulk_monotonicity_and_bounds(self):
- """Non-decreasing monotonicity & (-1,1) bounds for smooth transforms (excluding clip)."""
- transforms = ["tanh", "softsign", "arctan", "sigmoid", "asinh"]
- xs = [-5.0, -1.0, -0.5, 0.0, 0.5, 1.0, 5.0]
- for name in transforms:
- with self.subTest(transform=name):
- vals = [apply_transform(name, x) for x in xs]
- self.assertTrue(all((-1.0 < v < 1.0 for v in vals)), f"{name} out of bounds")
- for a, b in zip(vals, vals[1:]):
- self.assertLessEqual(
- a, b + self.TOL_IDENTITY_STRICT, f"{name} not monotonic between {a} and {b}"
- )
-
- def test_pbrs_retain_previous_cumulative_drift(self):
- """retain_previous mode accumulates negative shaping drift (non-invariant)."""
- params = self.base_params(
- exit_potential_mode="retain_previous",
- hold_potential_enabled=True,
- entry_additive_enabled=False,
- exit_additive_enabled=False,
- potential_gamma=0.9,
- )
- gamma = _get_float_param(
- params, "potential_gamma", DEFAULT_MODEL_REWARD_PARAMETERS.get("potential_gamma", 0.95)
- )
- rng = np.random.default_rng(555)
- potentials = rng.uniform(0.05, 0.85, size=220)
- deltas = [gamma * p - p for p in potentials]
- cumulative = float(np.sum(deltas))
- self.assertLess(cumulative, -self.TOL_NEGLIGIBLE)
- self.assertGreater(abs(cumulative), 10 * self.TOL_IDENTITY_RELAXED)
-
- def test_normality_invariance_under_scaling(self):
- """Skewness & excess kurtosis invariant under positive scaling of normal sample."""
- rng = np.random.default_rng(808)
- base = rng.normal(0.0, 1.0, size=7000)
- scaled = 5.0 * base
-
- def _skew_kurt(x: np.ndarray) -> tuple[float, float]:
- m = np.mean(x)
- c = x - m
- m2 = np.mean(c**2)
- m3 = np.mean(c**3)
- m4 = np.mean(c**4)
- skew = m3 / (m2**1.5 + 1e-18)
- kurt = m4 / (m2**2 + 1e-18) - 3.0
- return (skew, kurt)
-
- s_base, k_base = _skew_kurt(base)
- s_scaled, k_scaled = _skew_kurt(scaled)
- self.assertAlmostEqualFloat(s_base, s_scaled, tolerance=self.TOL_DISTRIB_SHAPE)
- self.assertAlmostEqualFloat(k_base, k_scaled, tolerance=self.TOL_DISTRIB_SHAPE)
-
- def test_pbrs_non_canonical_report_generation(self):
- """Generate synthetic invariance section with non-zero shaping to assert Non-canonical classification."""
- import re
-
- import pandas as pd
-
- from reward_space_analysis import PBRS_INVARIANCE_TOL
-
- df = pd.DataFrame(
- {
- "reward_shaping": [0.01, -0.002],
- "reward_entry_additive": [0.0, 0.0],
- "reward_exit_additive": [0.001, 0.0],
- }
- )
- total_shaping = df["reward_shaping"].sum()
- self.assertGreater(abs(total_shaping), PBRS_INVARIANCE_TOL)
- invariance_status = "❌ Non-canonical"
- section = []
- section.append("**PBRS Invariance Summary:**\n")
- section.append("| Field | Value |\n")
- section.append("|-------|-------|\n")
- section.append(f"| Invariance | {invariance_status} |\n")
- section.append(f"| Note | Total shaping = {total_shaping:.6f} (non-zero) |\n")
- section.append(f"| Σ Shaping Reward | {total_shaping:.6f} |\n")
- section.append(f"| Abs Σ Shaping Reward | {abs(total_shaping):.6e} |\n")
- section.append(f"| Σ Entry Additive | {df['reward_entry_additive'].sum():.6f} |\n")
- section.append(f"| Σ Exit Additive | {df['reward_exit_additive'].sum():.6f} |\n")
- content = "".join(section)
- self.assertIn("❌ Non-canonical", content)
- self.assertRegex(content, "Σ Shaping Reward \\| 0\\.008000 \\|")
- m_abs = re.search("Abs Σ Shaping Reward \\| ([0-9.]+e[+-][0-9]{2}) \\|", content)
- self.assertIsNotNone(m_abs)
- if m_abs:
- val = float(m_abs.group(1))
- self.assertAlmostEqual(abs(total_shaping), val, places=12)
-
- def test_potential_gamma_boundary_values_stability(self):
- """Test potential gamma boundary values (0 and ≈1) produce bounded shaping."""
- for gamma in [0.0, 0.999999]:
- params = self.base_params(
- hold_potential_enabled=True,
- entry_additive_enabled=False,
- exit_additive_enabled=False,
- exit_potential_mode="canonical",
- potential_gamma=gamma,
- )
- _tot, shap, next_pot = apply_potential_shaping(
- base_reward=0.0,
- current_pnl=0.02,
- current_duration_ratio=0.3,
- next_pnl=0.025,
- next_duration_ratio=0.35,
- is_exit=False,
- last_potential=0.0,
- params=params,
- )
- self.assertTrue(np.isfinite(shap))
- self.assertTrue(np.isfinite(next_pot))
- self.assertLessEqual(abs(shap), self.PBRS_MAX_ABS_SHAPING)
-
- def test_report_cumulative_invariance_aggregation(self):
- """Canonical telescoping term: small per-step mean drift, bounded increments."""
- params = self.base_params(
- hold_potential_enabled=True,
- entry_additive_enabled=False,
- exit_additive_enabled=False,
- exit_potential_mode="canonical",
- )
- gamma = _get_float_param(
- params, "potential_gamma", DEFAULT_MODEL_REWARD_PARAMETERS.get("potential_gamma", 0.95)
- )
- rng = np.random.default_rng(321)
- last_potential = 0.0
- telescoping_sum = 0.0
- max_abs_step = 0.0
- steps = 0
- for _ in range(500):
- is_exit = rng.uniform() < 0.1
- current_pnl = float(rng.normal(0, 0.05))
- current_dur = float(rng.uniform(0, 1))
- next_pnl = 0.0 if is_exit else float(rng.normal(0, 0.05))
- next_dur = 0.0 if is_exit else float(rng.uniform(0, 1))
- _tot, _shap, next_potential = apply_potential_shaping(
- base_reward=0.0,
- current_pnl=current_pnl,
- current_duration_ratio=current_dur,
- next_pnl=next_pnl,
- next_duration_ratio=next_dur,
- is_exit=is_exit,
- last_potential=last_potential,
- params=params,
- )
- inc = gamma * next_potential - last_potential
- telescoping_sum += inc
- if abs(inc) > max_abs_step:
- max_abs_step = abs(inc)
- steps += 1
- if is_exit:
- last_potential = 0.0
- else:
- last_potential = next_potential
- mean_drift = telescoping_sum / max(1, steps)
- self.assertLess(
- abs(mean_drift),
- 0.02,
- f"Per-step telescoping drift too large (mean={mean_drift}, steps={steps})",
- )
- self.assertLessEqual(
- max_abs_step,
- self.PBRS_MAX_ABS_SHAPING,
- f"Unexpected large telescoping increment (max={max_abs_step})",
- )
-
- def test_report_explicit_non_invariance_progressive_release(self):
- """progressive_release should generally yield non-zero cumulative shaping (release leak)."""
- params = self.base_params(
- hold_potential_enabled=True,
- entry_additive_enabled=False,
- exit_additive_enabled=False,
- exit_potential_mode="progressive_release",
- exit_potential_decay=0.25,
- )
- rng = np.random.default_rng(321)
- last_potential = 0.0
- shaping_sum = 0.0
- for _ in range(160):
- is_exit = rng.uniform() < 0.15
- next_pnl = 0.0 if is_exit else float(rng.normal(0, 0.07))
- next_dur = 0.0 if is_exit else float(rng.uniform(0, 1))
- _tot, shap, next_pot = apply_potential_shaping(
- base_reward=0.0,
- current_pnl=float(rng.normal(0, 0.07)),
- current_duration_ratio=float(rng.uniform(0, 1)),
- next_pnl=next_pnl,
- next_duration_ratio=next_dur,
- is_exit=is_exit,
- last_potential=last_potential,
- params=params,
- )
- shaping_sum += shap
- last_potential = 0.0 if is_exit else next_pot
- self.assertGreater(
- abs(shaping_sum),
- PBRS_INVARIANCE_TOL * 50,
- f"Expected non-zero Σ shaping (got {shaping_sum})",
- )
-
-
-if __name__ == "__main__":
- unittest.main()
+++ /dev/null
-#!/usr/bin/env python3
-"""Tests for reward calculation components and algorithms."""
-
-import dataclasses
-import math
-import unittest
-
-from reward_space_analysis import (
- Actions,
- Positions,
- RewardContext,
- _compute_hold_potential,
- _get_exit_factor,
- _get_float_param,
- _get_pnl_factor,
- calculate_reward,
-)
-
-from .test_base import RewardSpaceTestBase
-
-
-class TestRewardComponents(RewardSpaceTestBase):
- def test_hold_potential_computation_finite(self):
- """Test hold potential computation returns finite values."""
- params = {
- "hold_potential_enabled": True,
- "hold_potential_scale": 1.0,
- "hold_potential_gain": 1.0,
- "hold_potential_transform_pnl": "tanh",
- "hold_potential_transform_duration": "tanh",
- }
- val = _compute_hold_potential(0.5, 0.3, params)
- self.assertFinite(val, name="hold_potential")
-
- def test_hold_penalty_comprehensive(self):
- """Comprehensive hold penalty test: calculation, thresholds, and progressive scaling."""
- # Test 1: Basic hold penalty calculation via reward calculation (trade_duration > max_duration)
- context = self.make_ctx(
- pnl=0.01,
- trade_duration=150, # > default max_duration (128)
- idle_duration=0,
- max_unrealized_profit=0.02,
- min_unrealized_profit=0.0,
- position=Positions.Long,
- action=Actions.Neutral,
- )
- breakdown = calculate_reward(
- context,
- self.DEFAULT_PARAMS,
- base_factor=self.TEST_BASE_FACTOR,
- profit_target=self.TEST_PROFIT_TARGET,
- risk_reward_ratio=self.TEST_RR,
- short_allowed=True,
- action_masking=True,
- )
- self.assertLess(breakdown.hold_penalty, 0, "Hold penalty should be negative")
- self.assertAlmostEqualFloat(
- breakdown.total,
- breakdown.hold_penalty
- + breakdown.reward_shaping
- + breakdown.entry_additive
- + breakdown.exit_additive,
- tolerance=self.TOL_IDENTITY_RELAXED,
- msg="Total should equal sum of components (hold + shaping/additives)",
- )
-
- # Test 2: Zero penalty before max_duration threshold
- max_duration = 128
- test_cases = [
- (64, "before max_duration"),
- (127, "just before max_duration"),
- (128, "exactly at max_duration"),
- (129, "just after max_duration"),
- ]
- for trade_duration, description in test_cases:
- with self.subTest(duration=trade_duration, desc=description):
- context = self.make_ctx(
- pnl=0.0,
- trade_duration=trade_duration,
- idle_duration=0,
- position=Positions.Long,
- action=Actions.Neutral,
- )
- breakdown = calculate_reward(
- context,
- self.DEFAULT_PARAMS,
- base_factor=self.TEST_BASE_FACTOR,
- profit_target=self.TEST_PROFIT_TARGET,
- risk_reward_ratio=1.0,
- short_allowed=True,
- action_masking=True,
- )
- duration_ratio = trade_duration / max_duration
- if duration_ratio < 1.0:
- self.assertEqual(
- breakdown.hold_penalty,
- 0.0,
- f"Hold penalty should be 0.0 {description} (ratio={duration_ratio:.2f})",
- )
- elif duration_ratio == 1.0:
- # At exact max duration, penalty can be 0.0 or slightly negative (implementation dependent)
- self.assertLessEqual(
- breakdown.hold_penalty,
- 0.0,
- f"Hold penalty should be <= 0.0 {description} (ratio={duration_ratio:.2f})",
- )
- else:
- # Beyond max duration, penalty should be strictly negative
- self.assertLess(
- breakdown.hold_penalty,
- 0.0,
- f"Hold penalty should be negative {description} (ratio={duration_ratio:.2f})",
- )
-
- # Test 3: Progressive scaling after max_duration
- params = self.base_params(max_trade_duration_candles=100)
- durations = [150, 200, 300]
- penalties: list[float] = []
- for duration in durations:
- context = self.make_ctx(
- pnl=0.0,
- trade_duration=duration,
- idle_duration=0,
- position=Positions.Long,
- action=Actions.Neutral,
- )
- breakdown = calculate_reward(
- context,
- params,
- base_factor=self.TEST_BASE_FACTOR,
- profit_target=self.TEST_PROFIT_TARGET,
- risk_reward_ratio=self.TEST_RR,
- short_allowed=True,
- action_masking=True,
- )
- penalties.append(breakdown.hold_penalty)
- for i in range(1, len(penalties)):
- self.assertLessEqual(
- penalties[i],
- penalties[i - 1],
- f"Penalty should increase (more negative) with duration: {penalties[i]} <= {penalties[i - 1]}",
- )
-
- def test_idle_penalty_via_rewards(self):
- """Test idle penalty calculation via reward calculation."""
- context = self.make_ctx(
- pnl=0.0,
- trade_duration=0,
- idle_duration=20,
- max_unrealized_profit=0.0,
- min_unrealized_profit=0.0,
- position=Positions.Neutral,
- action=Actions.Neutral,
- )
- breakdown = calculate_reward(
- context,
- self.DEFAULT_PARAMS,
- base_factor=self.TEST_BASE_FACTOR,
- profit_target=self.TEST_PROFIT_TARGET,
- risk_reward_ratio=1.0,
- short_allowed=True,
- action_masking=True,
- )
- self.assertLess(breakdown.idle_penalty, 0, "Idle penalty should be negative")
- self.assertAlmostEqualFloat(
- breakdown.total,
- breakdown.idle_penalty
- + breakdown.reward_shaping
- + breakdown.entry_additive
- + breakdown.exit_additive,
- tolerance=self.TOL_IDENTITY_RELAXED,
- msg="Total should equal sum of components (idle + shaping/additives)",
- )
-
- """Core reward component tests."""
-
- def test_reward_calculation_component_activation(self):
- """Test reward component activation: idle_penalty and exit_component trigger correctly."""
- test_cases = [
- (Positions.Neutral, Actions.Neutral, "idle_penalty"),
- (Positions.Long, Actions.Long_exit, "exit_component"),
- (Positions.Short, Actions.Short_exit, "exit_component"),
- ]
- for position, action, expected_type in test_cases:
- with self.subTest(position=position, action=action):
- context = self.make_ctx(
- pnl=0.02 if expected_type == "exit_component" else 0.0,
- trade_duration=50 if position != Positions.Neutral else 0,
- idle_duration=10 if position == Positions.Neutral else 0,
- max_unrealized_profit=0.03,
- min_unrealized_profit=-0.01,
- position=position,
- action=action,
- )
- breakdown = calculate_reward(
- context,
- self.DEFAULT_PARAMS,
- base_factor=self.TEST_BASE_FACTOR,
- profit_target=self.TEST_PROFIT_TARGET,
- risk_reward_ratio=1.0,
- short_allowed=True,
- action_masking=True,
- )
- if expected_type == "idle_penalty":
- self.assertNotEqual(breakdown.idle_penalty, 0.0)
- elif expected_type == "exit_component":
- self.assertNotEqual(breakdown.exit_component, 0.0)
- self.assertFinite(breakdown.total, name="breakdown.total")
-
- def test_efficiency_zero_policy(self):
- """Test efficiency zero policy."""
- ctx = self.make_ctx(
- pnl=0.0,
- trade_duration=1,
- max_unrealized_profit=0.0,
- min_unrealized_profit=-0.02,
- position=Positions.Long,
- action=Actions.Long_exit,
- )
- params = self.base_params()
- profit_target = self.TEST_PROFIT_TARGET * self.TEST_RR
- pnl_factor = _get_pnl_factor(params, ctx, profit_target, self.TEST_RR)
- self.assertFinite(pnl_factor, name="pnl_factor")
- self.assertAlmostEqualFloat(pnl_factor, 1.0, tolerance=self.TOL_GENERIC_EQ)
-
- def test_max_idle_duration_candles_logic(self):
- """Test max idle duration candles logic."""
- params_small = self.base_params(max_idle_duration_candles=50)
- params_large = self.base_params(max_idle_duration_candles=200)
- base_factor = self.TEST_BASE_FACTOR
- context = self.make_ctx(
- pnl=0.0,
- trade_duration=0,
- idle_duration=40,
- position=Positions.Neutral,
- action=Actions.Neutral,
- )
- small = calculate_reward(
- context,
- params_small,
- base_factor,
- profit_target=self.TEST_PROFIT_TARGET,
- risk_reward_ratio=self.TEST_RR,
- short_allowed=True,
- action_masking=True,
- )
- large = calculate_reward(
- context,
- params_large,
- base_factor,
- profit_target=self.TEST_PROFIT_TARGET,
- risk_reward_ratio=self.TEST_RR,
- short_allowed=True,
- action_masking=True,
- )
- self.assertLess(small.idle_penalty, 0.0)
- self.assertLess(large.idle_penalty, 0.0)
- self.assertGreater(large.idle_penalty, small.idle_penalty)
-
- def test_exit_factor_calculation(self):
- """Exit factor calculation across core modes + plateau variant (plateau via exit_plateau=True)."""
- modes_to_test = ["linear", "power"]
- for mode in modes_to_test:
- test_params = self.base_params(exit_attenuation_mode=mode)
- factor = _get_exit_factor(
- base_factor=1.0, pnl=0.02, pnl_factor=1.5, duration_ratio=0.3, params=test_params
- )
- self.assertFinite(factor, name=f"exit_factor[{mode}]")
- self.assertGreater(factor, 0, f"Exit factor for {mode} should be positive")
- plateau_params = self.base_params(
- exit_attenuation_mode="linear",
- exit_plateau=True,
- exit_plateau_grace=0.5,
- exit_linear_slope=1.0,
- )
- plateau_factor_pre = _get_exit_factor(
- base_factor=1.0, pnl=0.02, pnl_factor=1.5, duration_ratio=0.4, params=plateau_params
- )
- plateau_factor_post = _get_exit_factor(
- base_factor=1.0, pnl=0.02, pnl_factor=1.5, duration_ratio=0.8, params=plateau_params
- )
- self.assertGreater(plateau_factor_pre, 0)
- self.assertGreater(plateau_factor_post, 0)
- self.assertGreaterEqual(
- plateau_factor_pre,
- plateau_factor_post - self.TOL_IDENTITY_STRICT,
- "Plateau pre-grace factor should be >= post-grace factor",
- )
-
- def test_idle_penalty_zero_when_profit_target_zero(self):
- """If profit_target=0 → idle_factor=0 → idle penalty must be exactly 0 for neutral idle state."""
- context = self.make_ctx(
- pnl=0.0,
- trade_duration=0,
- idle_duration=30,
- position=Positions.Neutral,
- action=Actions.Neutral,
- )
- br = calculate_reward(
- context,
- self.DEFAULT_PARAMS,
- base_factor=self.TEST_BASE_FACTOR,
- profit_target=0.0,
- risk_reward_ratio=self.TEST_RR,
- short_allowed=True,
- action_masking=True,
- )
- self.assertEqual(br.idle_penalty, 0.0, "Idle penalty should be zero when profit_target=0")
- self.assertEqual(br.total, 0.0, "Total reward should be zero in this configuration")
-
- def test_win_reward_factor_saturation(self):
- """Saturation test: pnl amplification factor should monotonically approach (1 + win_reward_factor)."""
- win_reward_factor = 3.0
- beta = 0.5
- profit_target = self.TEST_PROFIT_TARGET
- params = self.base_params(
- win_reward_factor=win_reward_factor,
- pnl_factor_beta=beta,
- efficiency_weight=0.0,
- exit_attenuation_mode="linear",
- exit_plateau=False,
- exit_linear_slope=0.0,
- )
- params.pop("base_factor", None)
- pnl_values = [profit_target * m for m in (1.05, self.TEST_RR_HIGH, 5.0, 10.0)]
- ratios_observed: list[float] = []
- for pnl in pnl_values:
- context = self.make_ctx(
- pnl=pnl,
- trade_duration=0,
- idle_duration=0,
- max_unrealized_profit=pnl,
- min_unrealized_profit=0.0,
- position=Positions.Long,
- action=Actions.Long_exit,
- )
- br = calculate_reward(
- context,
- params,
- base_factor=1.0,
- profit_target=profit_target,
- risk_reward_ratio=1.0,
- short_allowed=True,
- action_masking=True,
- )
- ratio = br.exit_component / pnl if pnl != 0 else 0.0
- ratios_observed.append(float(ratio))
- self.assertMonotonic(
- ratios_observed,
- non_decreasing=True,
- tolerance=self.TOL_IDENTITY_STRICT,
- name="pnl_amplification_ratio",
- )
- asymptote = 1.0 + win_reward_factor
- final_ratio = ratios_observed[-1]
- self.assertFinite(final_ratio, name="final_ratio")
- self.assertLess(
- abs(final_ratio - asymptote),
- 0.001,
- f"Final amplification {final_ratio:.6f} not close to asymptote {asymptote:.6f}",
- )
- expected_ratios: list[float] = []
- for pnl in pnl_values:
- pnl_ratio = pnl / profit_target
- expected = 1.0 + win_reward_factor * math.tanh(beta * (pnl_ratio - 1.0))
- expected_ratios.append(expected)
- for obs, exp in zip(ratios_observed, expected_ratios):
- self.assertFinite(obs, name="observed_ratio")
- self.assertFinite(exp, name="expected_ratio")
- self.assertLess(
- abs(obs - exp),
- 5e-06,
- f"Observed amplification {obs:.8f} deviates from expected {exp:.8f}",
- )
-
- def test_scale_invariance_and_decomposition(self):
- """Components scale ~ linearly with base_factor; total equals sum(core + shaping + additives)."""
- params = self.base_params()
- params.pop("base_factor", None)
- base_factor = 80.0
- k = 7.5
- profit_target = self.TEST_PROFIT_TARGET
- rr = 1.5
- contexts: list[RewardContext] = [
- self.make_ctx(
- pnl=0.025,
- trade_duration=40,
- idle_duration=0,
- max_unrealized_profit=0.03,
- min_unrealized_profit=0.0,
- position=Positions.Long,
- action=Actions.Long_exit,
- ),
- self.make_ctx(
- pnl=-self.TEST_PNL_STD,
- trade_duration=60,
- idle_duration=0,
- max_unrealized_profit=0.01,
- min_unrealized_profit=-0.04,
- position=Positions.Long,
- action=Actions.Long_exit,
- ),
- self.make_ctx(
- pnl=0.0,
- trade_duration=0,
- idle_duration=35,
- max_unrealized_profit=0.0,
- min_unrealized_profit=0.0,
- position=Positions.Neutral,
- action=Actions.Neutral,
- ),
- self.make_ctx(
- pnl=0.0,
- trade_duration=80,
- idle_duration=0,
- max_unrealized_profit=0.04,
- min_unrealized_profit=-0.01,
- position=Positions.Long,
- action=Actions.Neutral,
- ),
- ]
- tol_scale = self.TOL_RELATIVE
- for ctx in contexts:
- br1 = calculate_reward(
- ctx,
- params,
- base_factor=base_factor,
- profit_target=profit_target,
- risk_reward_ratio=rr,
- short_allowed=True,
- action_masking=True,
- )
- br2 = calculate_reward(
- ctx,
- params,
- base_factor=base_factor * k,
- profit_target=profit_target,
- risk_reward_ratio=rr,
- short_allowed=True,
- action_masking=True,
- )
- for br in (br1, br2):
- comp_sum = (
- br.exit_component
- + br.idle_penalty
- + br.hold_penalty
- + br.invalid_penalty
- + br.reward_shaping
- + br.entry_additive
- + br.exit_additive
- )
- self.assertAlmostEqual(
- br.total,
- comp_sum,
- places=12,
- msg=f"Decomposition mismatch (ctx={ctx}, total={br.total}, sum={comp_sum})",
- )
- components1 = {
- "exit_component": br1.exit_component,
- "idle_penalty": br1.idle_penalty,
- "hold_penalty": br1.hold_penalty,
- "invalid_penalty": br1.invalid_penalty,
- "total": br1.exit_component
- + br1.idle_penalty
- + br1.hold_penalty
- + br1.invalid_penalty,
- }
- components2 = {
- "exit_component": br2.exit_component,
- "idle_penalty": br2.idle_penalty,
- "hold_penalty": br2.hold_penalty,
- "invalid_penalty": br2.invalid_penalty,
- "total": br2.exit_component
- + br2.idle_penalty
- + br2.hold_penalty
- + br2.invalid_penalty,
- }
- for key, v1 in components1.items():
- v2 = components2[key]
- if abs(v1) < 1e-15 and abs(v2) < 1e-15:
- continue
- self.assertLess(
- abs(v2 - k * v1),
- tol_scale * max(1.0, abs(k * v1)),
- f"Scale invariance failed for {key}: v1={v1}, v2={v2}, k={k}",
- )
-
- def test_long_short_symmetry(self):
- """Long vs Short exit reward magnitudes should match in absolute value for identical PnL (no directional bias)."""
- params = self.base_params()
- params.pop("base_factor", None)
- base_factor = 120.0
- profit_target = 0.04
- rr = self.TEST_RR_HIGH
- pnls = [0.018, -0.022]
- for pnl in pnls:
- ctx_long = self.make_ctx(
- pnl=pnl,
- trade_duration=55,
- idle_duration=0,
- max_unrealized_profit=pnl if pnl > 0 else 0.01,
- min_unrealized_profit=pnl if pnl < 0 else -0.01,
- position=Positions.Long,
- action=Actions.Long_exit,
- )
- ctx_short = self.make_ctx(
- pnl=pnl,
- trade_duration=55,
- idle_duration=0,
- max_unrealized_profit=pnl if pnl > 0 else 0.01,
- min_unrealized_profit=pnl if pnl < 0 else -0.01,
- position=Positions.Short,
- action=Actions.Short_exit,
- )
- br_long = calculate_reward(
- ctx_long,
- params,
- base_factor=base_factor,
- profit_target=profit_target,
- risk_reward_ratio=rr,
- short_allowed=True,
- action_masking=True,
- )
- br_short = calculate_reward(
- ctx_short,
- params,
- base_factor=base_factor,
- profit_target=profit_target,
- risk_reward_ratio=rr,
- short_allowed=True,
- action_masking=True,
- )
- if pnl > 0:
- self.assertGreater(br_long.exit_component, 0)
- self.assertGreater(br_short.exit_component, 0)
- else:
- self.assertLess(br_long.exit_component, 0)
- self.assertLess(br_short.exit_component, 0)
- self.assertLess(
- abs(abs(br_long.exit_component) - abs(br_short.exit_component)),
- self.TOL_RELATIVE * max(1.0, abs(br_long.exit_component)),
- f"Long/Short asymmetry pnl={pnl}: long={br_long.exit_component}, short={br_short.exit_component}",
- )
-
- def test_idle_penalty_fallback_and_proportionality(self):
- """Idle penalty fallback denominator & proportional scaling."""
- params = self.base_params(max_idle_duration_candles=None, max_trade_duration_candles=100)
- base_factor = 90.0
- profit_target = self.TEST_PROFIT_TARGET
- risk_reward_ratio = 1.0
- ctx_a = self.make_ctx(
- pnl=0.0,
- trade_duration=0,
- idle_duration=20,
- position=Positions.Neutral,
- action=Actions.Neutral,
- )
- ctx_b = dataclasses.replace(ctx_a, idle_duration=40)
- br_a = calculate_reward(
- ctx_a,
- params,
- base_factor=base_factor,
- profit_target=profit_target,
- risk_reward_ratio=risk_reward_ratio,
- short_allowed=True,
- action_masking=True,
- )
- br_b = calculate_reward(
- ctx_b,
- params,
- base_factor=base_factor,
- profit_target=profit_target,
- risk_reward_ratio=risk_reward_ratio,
- short_allowed=True,
- action_masking=True,
- )
- self.assertLess(br_a.idle_penalty, 0.0)
- self.assertLess(br_b.idle_penalty, 0.0)
- ratio = br_b.idle_penalty / br_a.idle_penalty if br_a.idle_penalty != 0 else None
- self.assertIsNotNone(ratio)
- if ratio is not None:
- self.assertAlmostEqualFloat(abs(ratio), 2.0, tolerance=0.2)
- ctx_mid = dataclasses.replace(ctx_a, idle_duration=120)
- br_mid = calculate_reward(
- ctx_mid,
- params,
- base_factor=base_factor,
- profit_target=profit_target,
- risk_reward_ratio=risk_reward_ratio,
- short_allowed=True,
- action_masking=True,
- )
- self.assertLess(br_mid.idle_penalty, 0.0)
- idle_penalty_scale = _get_float_param(params, "idle_penalty_scale", 0.5)
- idle_penalty_power = _get_float_param(params, "idle_penalty_power", 1.025)
- factor = _get_float_param(params, "base_factor", float(base_factor))
- idle_factor = factor * (profit_target * risk_reward_ratio) / 4.0
- observed_ratio = abs(br_mid.idle_penalty) / (idle_factor * idle_penalty_scale)
- if observed_ratio > 0:
- implied_D = 120 / observed_ratio ** (1 / idle_penalty_power)
- self.assertAlmostEqualFloat(implied_D, 400.0, tolerance=20.0)
-
-
-if __name__ == "__main__":
- unittest.main()
+++ /dev/null
-#!/usr/bin/env python3
-"""Utility tests for data loading, formatting, and parameter propagation."""
-
-import json
-import pickle
-import re
-import subprocess
-import sys
-import unittest
-import warnings
-from pathlib import Path
-
-import pandas as pd
-
-from reward_space_analysis import (
- PBRS_INVARIANCE_TOL,
- apply_potential_shaping,
- load_real_episodes,
-)
-
-from .test_base import RewardSpaceTestBase
-
-
-class TestLoadRealEpisodes(RewardSpaceTestBase):
- """Unit tests for load_real_episodes."""
-
- def write_pickle(self, obj, path: Path):
- with path.open("wb") as f:
- pickle.dump(obj, f)
-
- def test_top_level_dict_transitions(self):
- """Test top level dict transitions."""
- df = pd.DataFrame(
- {
- "pnl": [0.01],
- "trade_duration": [10],
- "idle_duration": [5],
- "position": [1.0],
- "action": [2.0],
- "reward": [1.0],
- }
- )
- p = Path(self.temp_dir) / "top.pkl"
- self.write_pickle({"transitions": df}, p)
- loaded = load_real_episodes(p)
- self.assertIsInstance(loaded, pd.DataFrame)
- self.assertEqual(list(loaded.columns).count("pnl"), 1)
- self.assertEqual(len(loaded), 1)
-
- def test_mixed_episode_list_warns_and_flattens(self):
- """Test mixed episode list warns and flattens."""
- ep1 = {"episode_id": 1}
- ep2 = {
- "episode_id": 2,
- "transitions": [
- {
- "pnl": 0.02,
- "trade_duration": 5,
- "idle_duration": 0,
- "position": 1.0,
- "action": 2.0,
- "reward": 2.0,
- }
- ],
- }
- p = Path(self.temp_dir) / "mixed.pkl"
- self.write_pickle([ep1, ep2], p)
- with warnings.catch_warnings(record=True) as w:
- warnings.simplefilter("always")
- loaded = load_real_episodes(p)
- self.assertGreaterEqual(len(w), 1, "Expected a warning for mixed episode list")
- self.assertEqual(len(loaded), 1)
- self.assertPlacesEqual(float(loaded.iloc[0]["pnl"]), 0.02, places=7)
-
- def test_non_iterable_transitions_raises(self):
- """Test non iterable transitions raises."""
- bad = {"transitions": 123}
- p = Path(self.temp_dir) / "bad.pkl"
- self.write_pickle(bad, p)
- with self.assertRaises(ValueError):
- load_real_episodes(p)
-
- def test_enforce_columns_false_fills_na(self):
- """Test enforce columns false fills na."""
- trans = [
- {"pnl": 0.03, "trade_duration": 10, "idle_duration": 0, "position": 1.0, "action": 2.0}
- ]
- p = Path(self.temp_dir) / "fill.pkl"
- self.write_pickle(trans, p)
- loaded = load_real_episodes(p, enforce_columns=False)
- self.assertIn("reward", loaded.columns)
- self.assertTrue(loaded["reward"].isna().all())
-
- def test_casting_numeric_strings(self):
- """Test casting numeric strings."""
- trans = [
- {
- "pnl": "0.04",
- "trade_duration": "20",
- "idle_duration": "0",
- "position": "1.0",
- "action": "2.0",
- "reward": "3.0",
- }
- ]
- p = Path(self.temp_dir) / "strs.pkl"
- self.write_pickle(trans, p)
- loaded = load_real_episodes(p)
- self.assertIn("pnl", loaded.columns)
- self.assertIn(loaded["pnl"].dtype.kind, ("f", "i"))
- self.assertPlacesEqual(float(loaded.iloc[0]["pnl"]), 0.04, places=7)
-
- def test_pickled_dataframe_loads(self):
- """Ensure a directly pickled DataFrame loads correctly."""
- test_episodes = pd.DataFrame(
- {
- "pnl": [0.01, -0.02, 0.03],
- "trade_duration": [10, 20, 15],
- "idle_duration": [5, 0, 8],
- "position": [1.0, 0.0, 1.0],
- "action": [2.0, 0.0, 2.0],
- "reward": [10.5, -5.2, 15.8],
- }
- )
- p = Path(self.temp_dir) / "test_episodes.pkl"
- self.write_pickle(test_episodes, p)
- loaded_data = load_real_episodes(p)
- self.assertIsInstance(loaded_data, pd.DataFrame)
- self.assertEqual(len(loaded_data), 3)
- self.assertIn("pnl", loaded_data.columns)
-
-
-class TestReportFormatting(RewardSpaceTestBase):
- """Tests for report formatting elements not covered elsewhere."""
-
- def test_abs_shaping_line_present_and_constant(self):
- """Abs Σ Shaping Reward line present, formatted, uses constant not literal."""
- df = pd.DataFrame(
- {
- "reward_shaping": [self.TOL_IDENTITY_STRICT, -self.TOL_IDENTITY_STRICT],
- "reward_entry_additive": [0.0, 0.0],
- "reward_exit_additive": [0.0, 0.0],
- }
- )
- total_shaping = df["reward_shaping"].sum()
- self.assertTrue(abs(total_shaping) < PBRS_INVARIANCE_TOL)
- lines = [f"| Abs Σ Shaping Reward | {abs(total_shaping):.6e} |"]
- content = "\n".join(lines)
- m = re.search("\\| Abs Σ Shaping Reward \\| ([0-9]+\\.[0-9]{6}e[+-][0-9]{2}) \\|", content)
- self.assertIsNotNone(m, "Abs Σ Shaping Reward line missing or misformatted")
- val = float(m.group(1)) if m else None
- if val is not None:
- self.assertLess(val, self.TOL_NEGLIGIBLE + self.TOL_IDENTITY_STRICT)
- self.assertNotIn(
- str(self.TOL_GENERIC_EQ),
- content,
- "Tolerance constant value should appear, not raw literal",
- )
-
- def test_additive_activation_deterministic_contribution(self):
- """Additives enabled increase total reward; shaping impact limited."""
- base = self.base_params(
- hold_potential_enabled=True,
- entry_additive_enabled=False,
- exit_additive_enabled=False,
- exit_potential_mode="non_canonical",
- )
- with_add = base.copy()
- with_add.update(
- {
- "entry_additive_enabled": True,
- "exit_additive_enabled": True,
- "entry_additive_scale": 0.4,
- "exit_additive_scale": 0.4,
- "entry_additive_gain": 1.0,
- "exit_additive_gain": 1.0,
- }
- )
- ctx = {
- "base_reward": 0.05,
- "current_pnl": 0.01,
- "current_duration_ratio": 0.2,
- "next_pnl": 0.012,
- "next_duration_ratio": 0.25,
- "is_entry": True,
- "is_exit": False,
- }
- t0, s0, _n0 = apply_potential_shaping(last_potential=0.0, params=base, **ctx)
- t1, s1, _n1 = apply_potential_shaping(last_potential=0.0, params=with_add, **ctx)
- self.assertFinite(t1)
- self.assertFinite(s1)
- self.assertLess(abs(s1 - s0), 0.2)
- self.assertGreater(t1 - t0, 0.0, "Total reward should increase with additives present")
-
-
-class TestCsvAndSimulationOptions(RewardSpaceTestBase):
- """CLI-level tests: CSV encoding and simulate_unrealized_pnl option effects."""
-
- def test_action_column_integer_in_csv(self):
- """Ensure 'action' column in reward_samples.csv is encoded as integers."""
- out_dir = self.output_path / "csv_int_check"
- cmd = [
- sys.executable,
- "reward_space_analysis.py",
- "--num_samples",
- "200",
- "--seed",
- str(self.SEED),
- "--out_dir",
- str(out_dir),
- ]
- result = subprocess.run(
- cmd, capture_output=True, text=True, cwd=Path(__file__).parent.parent
- )
- self.assertEqual(result.returncode, 0, f"CLI failed: {result.stderr}")
- csv_path = out_dir / "reward_samples.csv"
- self.assertTrue(csv_path.exists(), "Missing reward_samples.csv")
- df = pd.read_csv(csv_path)
- self.assertIn("action", df.columns)
- values = df["action"].tolist()
- self.assertTrue(
- all(float(v).is_integer() for v in values),
- "Non-integer values detected in 'action' column",
- )
- allowed = {0, 1, 2, 3, 4}
- self.assertTrue({int(v) for v in values}.issubset(allowed))
-
-
-class TestParamsPropagation(RewardSpaceTestBase):
- """Integration tests to validate max_trade_duration_candles propagation via CLI params and dynamic flag."""
-
- def test_max_trade_duration_candles_propagation_params(self):
- """--params max_trade_duration_candles=X propagates to manifest and simulation params."""
- out_dir = self.output_path / "mtd_params"
- cmd = [
- sys.executable,
- "reward_space_analysis.py",
- "--num_samples",
- "120",
- "--seed",
- str(self.SEED),
- "--out_dir",
- str(out_dir),
- "--params",
- "max_trade_duration_candles=96",
- ]
- result = subprocess.run(
- cmd, capture_output=True, text=True, cwd=Path(__file__).parent.parent
- )
- self.assertEqual(result.returncode, 0, f"CLI failed: {result.stderr}")
- manifest_path = out_dir / "manifest.json"
- self.assertTrue(manifest_path.exists(), "Missing manifest.json")
- with open(manifest_path, "r") as f:
- manifest = json.load(f)
- self.assertIn("reward_params", manifest)
- self.assertIn("simulation_params", manifest)
- rp = manifest["reward_params"]
- self.assertIn("max_trade_duration_candles", rp)
- self.assertEqual(int(rp["max_trade_duration_candles"]), 96)
-
- def test_max_trade_duration_candles_propagation_flag(self):
- """Dynamic flag --max_trade_duration_candles X propagates identically."""
- out_dir = self.output_path / "mtd_flag"
- cmd = [
- sys.executable,
- "reward_space_analysis.py",
- "--num_samples",
- "120",
- "--seed",
- str(self.SEED),
- "--out_dir",
- str(out_dir),
- "--max_trade_duration_candles",
- "64",
- ]
- result = subprocess.run(
- cmd, capture_output=True, text=True, cwd=Path(__file__).parent.parent
- )
- self.assertEqual(result.returncode, 0, f"CLI failed: {result.stderr}")
- manifest_path = out_dir / "manifest.json"
- self.assertTrue(manifest_path.exists(), "Missing manifest.json")
- with open(manifest_path, "r") as f:
- manifest = json.load(f)
- self.assertIn("reward_params", manifest)
- self.assertIn("simulation_params", manifest)
- rp = manifest["reward_params"]
- self.assertIn("max_trade_duration_candles", rp)
- self.assertEqual(int(rp["max_trade_duration_candles"]), 64)
-
-
-if __name__ == "__main__":
- unittest.main()
version = 1
revision = 3
-requires-python = ">=3.9"
+requires-python = ">=3.11"
resolution-markers = [
"python_full_version >= '3.12'",
- "python_full_version == '3.11.*'",
- "python_full_version == '3.10.*'",
- "python_full_version < '3.10'",
+ "python_full_version < '3.12'",
]
[[package]]
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" },
]
-[[package]]
-name = "coverage"
-version = "7.10.7"
-source = { registry = "https://pypi.org/simple" }
-resolution-markers = [
- "python_full_version < '3.10'",
-]
-sdist = { url = "https://files.pythonhosted.org/packages/51/26/d22c300112504f5f9a9fd2297ce33c35f3d353e4aeb987c8419453b2a7c2/coverage-7.10.7.tar.gz", hash = "sha256:f4ab143ab113be368a3e9b795f9cd7906c5ef407d6173fe9675a902e1fffc239", size = 827704, upload-time = "2025-09-21T20:03:56.815Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/e5/6c/3a3f7a46888e69d18abe3ccc6fe4cb16cccb1e6a2f99698931dafca489e6/coverage-7.10.7-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:fc04cc7a3db33664e0c2d10eb8990ff6b3536f6842c9590ae8da4c614b9ed05a", size = 217987, upload-time = "2025-09-21T20:00:57.218Z" },
- { url = "https://files.pythonhosted.org/packages/03/94/952d30f180b1a916c11a56f5c22d3535e943aa22430e9e3322447e520e1c/coverage-7.10.7-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e201e015644e207139f7e2351980feb7040e6f4b2c2978892f3e3789d1c125e5", size = 218388, upload-time = "2025-09-21T20:01:00.081Z" },
- { url = "https://files.pythonhosted.org/packages/50/2b/9e0cf8ded1e114bcd8b2fd42792b57f1c4e9e4ea1824cde2af93a67305be/coverage-7.10.7-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:240af60539987ced2c399809bd34f7c78e8abe0736af91c3d7d0e795df633d17", size = 245148, upload-time = "2025-09-21T20:01:01.768Z" },
- { url = "https://files.pythonhosted.org/packages/19/20/d0384ac06a6f908783d9b6aa6135e41b093971499ec488e47279f5b846e6/coverage-7.10.7-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8421e088bc051361b01c4b3a50fd39a4b9133079a2229978d9d30511fd05231b", size = 246958, upload-time = "2025-09-21T20:01:03.355Z" },
- { url = "https://files.pythonhosted.org/packages/60/83/5c283cff3d41285f8eab897651585db908a909c572bdc014bcfaf8a8b6ae/coverage-7.10.7-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6be8ed3039ae7f7ac5ce058c308484787c86e8437e72b30bf5e88b8ea10f3c87", size = 248819, upload-time = "2025-09-21T20:01:04.968Z" },
- { url = "https://files.pythonhosted.org/packages/60/22/02eb98fdc5ff79f423e990d877693e5310ae1eab6cb20ae0b0b9ac45b23b/coverage-7.10.7-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e28299d9f2e889e6d51b1f043f58d5f997c373cc12e6403b90df95b8b047c13e", size = 245754, upload-time = "2025-09-21T20:01:06.321Z" },
- { url = "https://files.pythonhosted.org/packages/b4/bc/25c83bcf3ad141b32cd7dc45485ef3c01a776ca3aa8ef0a93e77e8b5bc43/coverage-7.10.7-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:c4e16bd7761c5e454f4efd36f345286d6f7c5fa111623c355691e2755cae3b9e", size = 246860, upload-time = "2025-09-21T20:01:07.605Z" },
- { url = "https://files.pythonhosted.org/packages/3c/b7/95574702888b58c0928a6e982038c596f9c34d52c5e5107f1eef729399b5/coverage-7.10.7-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:b1c81d0e5e160651879755c9c675b974276f135558cf4ba79fee7b8413a515df", size = 244877, upload-time = "2025-09-21T20:01:08.829Z" },
- { url = "https://files.pythonhosted.org/packages/47/b6/40095c185f235e085df0e0b158f6bd68cc6e1d80ba6c7721dc81d97ec318/coverage-7.10.7-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:606cc265adc9aaedcc84f1f064f0e8736bc45814f15a357e30fca7ecc01504e0", size = 245108, upload-time = "2025-09-21T20:01:10.527Z" },
- { url = "https://files.pythonhosted.org/packages/c8/50/4aea0556da7a4b93ec9168420d170b55e2eb50ae21b25062513d020c6861/coverage-7.10.7-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:10b24412692df990dbc34f8fb1b6b13d236ace9dfdd68df5b28c2e39cafbba13", size = 245752, upload-time = "2025-09-21T20:01:11.857Z" },
- { url = "https://files.pythonhosted.org/packages/6a/28/ea1a84a60828177ae3b100cb6723838523369a44ec5742313ed7db3da160/coverage-7.10.7-cp310-cp310-win32.whl", hash = "sha256:b51dcd060f18c19290d9b8a9dd1e0181538df2ce0717f562fff6cf74d9fc0b5b", size = 220497, upload-time = "2025-09-21T20:01:13.459Z" },
- { url = "https://files.pythonhosted.org/packages/fc/1a/a81d46bbeb3c3fd97b9602ebaa411e076219a150489bcc2c025f151bd52d/coverage-7.10.7-cp310-cp310-win_amd64.whl", hash = "sha256:3a622ac801b17198020f09af3eaf45666b344a0d69fc2a6ffe2ea83aeef1d807", size = 221392, upload-time = "2025-09-21T20:01:14.722Z" },
- { url = "https://files.pythonhosted.org/packages/d2/5d/c1a17867b0456f2e9ce2d8d4708a4c3a089947d0bec9c66cdf60c9e7739f/coverage-7.10.7-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a609f9c93113be646f44c2a0256d6ea375ad047005d7f57a5c15f614dc1b2f59", size = 218102, upload-time = "2025-09-21T20:01:16.089Z" },
- { url = "https://files.pythonhosted.org/packages/54/f0/514dcf4b4e3698b9a9077f084429681bf3aad2b4a72578f89d7f643eb506/coverage-7.10.7-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:65646bb0359386e07639c367a22cf9b5bf6304e8630b565d0626e2bdf329227a", size = 218505, upload-time = "2025-09-21T20:01:17.788Z" },
- { url = "https://files.pythonhosted.org/packages/20/f6/9626b81d17e2a4b25c63ac1b425ff307ecdeef03d67c9a147673ae40dc36/coverage-7.10.7-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:5f33166f0dfcce728191f520bd2692914ec70fac2713f6bf3ce59c3deacb4699", size = 248898, upload-time = "2025-09-21T20:01:19.488Z" },
- { url = "https://files.pythonhosted.org/packages/b0/ef/bd8e719c2f7417ba03239052e099b76ea1130ac0cbb183ee1fcaa58aaff3/coverage-7.10.7-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:35f5e3f9e455bb17831876048355dca0f758b6df22f49258cb5a91da23ef437d", size = 250831, upload-time = "2025-09-21T20:01:20.817Z" },
- { url = "https://files.pythonhosted.org/packages/a5/b6/bf054de41ec948b151ae2b79a55c107f5760979538f5fb80c195f2517718/coverage-7.10.7-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4da86b6d62a496e908ac2898243920c7992499c1712ff7c2b6d837cc69d9467e", size = 252937, upload-time = "2025-09-21T20:01:22.171Z" },
- { url = "https://files.pythonhosted.org/packages/0f/e5/3860756aa6f9318227443c6ce4ed7bf9e70bb7f1447a0353f45ac5c7974b/coverage-7.10.7-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:6b8b09c1fad947c84bbbc95eca841350fad9cbfa5a2d7ca88ac9f8d836c92e23", size = 249021, upload-time = "2025-09-21T20:01:23.907Z" },
- { url = "https://files.pythonhosted.org/packages/26/0f/bd08bd042854f7fd07b45808927ebcce99a7ed0f2f412d11629883517ac2/coverage-7.10.7-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:4376538f36b533b46f8971d3a3e63464f2c7905c9800db97361c43a2b14792ab", size = 250626, upload-time = "2025-09-21T20:01:25.721Z" },
- { url = "https://files.pythonhosted.org/packages/8e/a7/4777b14de4abcc2e80c6b1d430f5d51eb18ed1d75fca56cbce5f2db9b36e/coverage-7.10.7-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:121da30abb574f6ce6ae09840dae322bef734480ceafe410117627aa54f76d82", size = 248682, upload-time = "2025-09-21T20:01:27.105Z" },
- { url = "https://files.pythonhosted.org/packages/34/72/17d082b00b53cd45679bad682fac058b87f011fd8b9fe31d77f5f8d3a4e4/coverage-7.10.7-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:88127d40df529336a9836870436fc2751c339fbaed3a836d42c93f3e4bd1d0a2", size = 248402, upload-time = "2025-09-21T20:01:28.629Z" },
- { url = "https://files.pythonhosted.org/packages/81/7a/92367572eb5bdd6a84bfa278cc7e97db192f9f45b28c94a9ca1a921c3577/coverage-7.10.7-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ba58bbcd1b72f136080c0bccc2400d66cc6115f3f906c499013d065ac33a4b61", size = 249320, upload-time = "2025-09-21T20:01:30.004Z" },
- { url = "https://files.pythonhosted.org/packages/2f/88/a23cc185f6a805dfc4fdf14a94016835eeb85e22ac3a0e66d5e89acd6462/coverage-7.10.7-cp311-cp311-win32.whl", hash = "sha256:972b9e3a4094b053a4e46832b4bc829fc8a8d347160eb39d03f1690316a99c14", size = 220536, upload-time = "2025-09-21T20:01:32.184Z" },
- { url = "https://files.pythonhosted.org/packages/fe/ef/0b510a399dfca17cec7bc2f05ad8bd78cf55f15c8bc9a73ab20c5c913c2e/coverage-7.10.7-cp311-cp311-win_amd64.whl", hash = "sha256:a7b55a944a7f43892e28ad4bc0561dfd5f0d73e605d1aa5c3c976b52aea121d2", size = 221425, upload-time = "2025-09-21T20:01:33.557Z" },
- { url = "https://files.pythonhosted.org/packages/51/7f/023657f301a276e4ba1850f82749bc136f5a7e8768060c2e5d9744a22951/coverage-7.10.7-cp311-cp311-win_arm64.whl", hash = "sha256:736f227fb490f03c6488f9b6d45855f8e0fd749c007f9303ad30efab0e73c05a", size = 220103, upload-time = "2025-09-21T20:01:34.929Z" },
- { url = "https://files.pythonhosted.org/packages/13/e4/eb12450f71b542a53972d19117ea5a5cea1cab3ac9e31b0b5d498df1bd5a/coverage-7.10.7-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:7bb3b9ddb87ef7725056572368040c32775036472d5a033679d1fa6c8dc08417", size = 218290, upload-time = "2025-09-21T20:01:36.455Z" },
- { url = "https://files.pythonhosted.org/packages/37/66/593f9be12fc19fb36711f19a5371af79a718537204d16ea1d36f16bd78d2/coverage-7.10.7-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:18afb24843cbc175687225cab1138c95d262337f5473512010e46831aa0c2973", size = 218515, upload-time = "2025-09-21T20:01:37.982Z" },
- { url = "https://files.pythonhosted.org/packages/66/80/4c49f7ae09cafdacc73fbc30949ffe77359635c168f4e9ff33c9ebb07838/coverage-7.10.7-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:399a0b6347bcd3822be369392932884b8216d0944049ae22925631a9b3d4ba4c", size = 250020, upload-time = "2025-09-21T20:01:39.617Z" },
- { url = "https://files.pythonhosted.org/packages/a6/90/a64aaacab3b37a17aaedd83e8000142561a29eb262cede42d94a67f7556b/coverage-7.10.7-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:314f2c326ded3f4b09be11bc282eb2fc861184bc95748ae67b360ac962770be7", size = 252769, upload-time = "2025-09-21T20:01:41.341Z" },
- { url = "https://files.pythonhosted.org/packages/98/2e/2dda59afd6103b342e096f246ebc5f87a3363b5412609946c120f4e7750d/coverage-7.10.7-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c41e71c9cfb854789dee6fc51e46743a6d138b1803fab6cb860af43265b42ea6", size = 253901, upload-time = "2025-09-21T20:01:43.042Z" },
- { url = "https://files.pythonhosted.org/packages/53/dc/8d8119c9051d50f3119bb4a75f29f1e4a6ab9415cd1fa8bf22fcc3fb3b5f/coverage-7.10.7-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:bc01f57ca26269c2c706e838f6422e2a8788e41b3e3c65e2f41148212e57cd59", size = 250413, upload-time = "2025-09-21T20:01:44.469Z" },
- { url = "https://files.pythonhosted.org/packages/98/b3/edaff9c5d79ee4d4b6d3fe046f2b1d799850425695b789d491a64225d493/coverage-7.10.7-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a6442c59a8ac8b85812ce33bc4d05bde3fb22321fa8294e2a5b487c3505f611b", size = 251820, upload-time = "2025-09-21T20:01:45.915Z" },
- { url = "https://files.pythonhosted.org/packages/11/25/9a0728564bb05863f7e513e5a594fe5ffef091b325437f5430e8cfb0d530/coverage-7.10.7-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:78a384e49f46b80fb4c901d52d92abe098e78768ed829c673fbb53c498bef73a", size = 249941, upload-time = "2025-09-21T20:01:47.296Z" },
- { url = "https://files.pythonhosted.org/packages/e0/fd/ca2650443bfbef5b0e74373aac4df67b08180d2f184b482c41499668e258/coverage-7.10.7-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:5e1e9802121405ede4b0133aa4340ad8186a1d2526de5b7c3eca519db7bb89fb", size = 249519, upload-time = "2025-09-21T20:01:48.73Z" },
- { url = "https://files.pythonhosted.org/packages/24/79/f692f125fb4299b6f963b0745124998ebb8e73ecdfce4ceceb06a8c6bec5/coverage-7.10.7-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:d41213ea25a86f69efd1575073d34ea11aabe075604ddf3d148ecfec9e1e96a1", size = 251375, upload-time = "2025-09-21T20:01:50.529Z" },
- { url = "https://files.pythonhosted.org/packages/5e/75/61b9bbd6c7d24d896bfeec57acba78e0f8deac68e6baf2d4804f7aae1f88/coverage-7.10.7-cp312-cp312-win32.whl", hash = "sha256:77eb4c747061a6af8d0f7bdb31f1e108d172762ef579166ec84542f711d90256", size = 220699, upload-time = "2025-09-21T20:01:51.941Z" },
- { url = "https://files.pythonhosted.org/packages/ca/f3/3bf7905288b45b075918d372498f1cf845b5b579b723c8fd17168018d5f5/coverage-7.10.7-cp312-cp312-win_amd64.whl", hash = "sha256:f51328ffe987aecf6d09f3cd9d979face89a617eacdaea43e7b3080777f647ba", size = 221512, upload-time = "2025-09-21T20:01:53.481Z" },
- { url = "https://files.pythonhosted.org/packages/5c/44/3e32dbe933979d05cf2dac5e697c8599cfe038aaf51223ab901e208d5a62/coverage-7.10.7-cp312-cp312-win_arm64.whl", hash = "sha256:bda5e34f8a75721c96085903c6f2197dc398c20ffd98df33f866a9c8fd95f4bf", size = 220147, upload-time = "2025-09-21T20:01:55.2Z" },
- { url = "https://files.pythonhosted.org/packages/9a/94/b765c1abcb613d103b64fcf10395f54d69b0ef8be6a0dd9c524384892cc7/coverage-7.10.7-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:981a651f543f2854abd3b5fcb3263aac581b18209be49863ba575de6edf4c14d", size = 218320, upload-time = "2025-09-21T20:01:56.629Z" },
- { url = "https://files.pythonhosted.org/packages/72/4f/732fff31c119bb73b35236dd333030f32c4bfe909f445b423e6c7594f9a2/coverage-7.10.7-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:73ab1601f84dc804f7812dc297e93cd99381162da39c47040a827d4e8dafe63b", size = 218575, upload-time = "2025-09-21T20:01:58.203Z" },
- { url = "https://files.pythonhosted.org/packages/87/02/ae7e0af4b674be47566707777db1aa375474f02a1d64b9323e5813a6cdd5/coverage-7.10.7-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:a8b6f03672aa6734e700bbcd65ff050fd19cddfec4b031cc8cf1c6967de5a68e", size = 249568, upload-time = "2025-09-21T20:01:59.748Z" },
- { url = "https://files.pythonhosted.org/packages/a2/77/8c6d22bf61921a59bce5471c2f1f7ac30cd4ac50aadde72b8c48d5727902/coverage-7.10.7-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:10b6ba00ab1132a0ce4428ff68cf50a25efd6840a42cdf4239c9b99aad83be8b", size = 252174, upload-time = "2025-09-21T20:02:01.192Z" },
- { url = "https://files.pythonhosted.org/packages/b1/20/b6ea4f69bbb52dac0aebd62157ba6a9dddbfe664f5af8122dac296c3ee15/coverage-7.10.7-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c79124f70465a150e89340de5963f936ee97097d2ef76c869708c4248c63ca49", size = 253447, upload-time = "2025-09-21T20:02:02.701Z" },
- { url = "https://files.pythonhosted.org/packages/f9/28/4831523ba483a7f90f7b259d2018fef02cb4d5b90bc7c1505d6e5a84883c/coverage-7.10.7-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:69212fbccdbd5b0e39eac4067e20a4a5256609e209547d86f740d68ad4f04911", size = 249779, upload-time = "2025-09-21T20:02:04.185Z" },
- { url = "https://files.pythonhosted.org/packages/a7/9f/4331142bc98c10ca6436d2d620c3e165f31e6c58d43479985afce6f3191c/coverage-7.10.7-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:7ea7c6c9d0d286d04ed3541747e6597cbe4971f22648b68248f7ddcd329207f0", size = 251604, upload-time = "2025-09-21T20:02:06.034Z" },
- { url = "https://files.pythonhosted.org/packages/ce/60/bda83b96602036b77ecf34e6393a3836365481b69f7ed7079ab85048202b/coverage-7.10.7-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:b9be91986841a75042b3e3243d0b3cb0b2434252b977baaf0cd56e960fe1e46f", size = 249497, upload-time = "2025-09-21T20:02:07.619Z" },
- { url = "https://files.pythonhosted.org/packages/5f/af/152633ff35b2af63977edd835d8e6430f0caef27d171edf2fc76c270ef31/coverage-7.10.7-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:b281d5eca50189325cfe1f365fafade89b14b4a78d9b40b05ddd1fc7d2a10a9c", size = 249350, upload-time = "2025-09-21T20:02:10.34Z" },
- { url = "https://files.pythonhosted.org/packages/9d/71/d92105d122bd21cebba877228990e1646d862e34a98bb3374d3fece5a794/coverage-7.10.7-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:99e4aa63097ab1118e75a848a28e40d68b08a5e19ce587891ab7fd04475e780f", size = 251111, upload-time = "2025-09-21T20:02:12.122Z" },
- { url = "https://files.pythonhosted.org/packages/a2/9e/9fdb08f4bf476c912f0c3ca292e019aab6712c93c9344a1653986c3fd305/coverage-7.10.7-cp313-cp313-win32.whl", hash = "sha256:dc7c389dce432500273eaf48f410b37886be9208b2dd5710aaf7c57fd442c698", size = 220746, upload-time = "2025-09-21T20:02:13.919Z" },
- { url = "https://files.pythonhosted.org/packages/b1/b1/a75fd25df44eab52d1931e89980d1ada46824c7a3210be0d3c88a44aaa99/coverage-7.10.7-cp313-cp313-win_amd64.whl", hash = "sha256:cac0fdca17b036af3881a9d2729a850b76553f3f716ccb0360ad4dbc06b3b843", size = 221541, upload-time = "2025-09-21T20:02:15.57Z" },
- { url = "https://files.pythonhosted.org/packages/14/3a/d720d7c989562a6e9a14b2c9f5f2876bdb38e9367126d118495b89c99c37/coverage-7.10.7-cp313-cp313-win_arm64.whl", hash = "sha256:4b6f236edf6e2f9ae8fcd1332da4e791c1b6ba0dc16a2dc94590ceccb482e546", size = 220170, upload-time = "2025-09-21T20:02:17.395Z" },
- { url = "https://files.pythonhosted.org/packages/bb/22/e04514bf2a735d8b0add31d2b4ab636fc02370730787c576bb995390d2d5/coverage-7.10.7-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:a0ec07fd264d0745ee396b666d47cef20875f4ff2375d7c4f58235886cc1ef0c", size = 219029, upload-time = "2025-09-21T20:02:18.936Z" },
- { url = "https://files.pythonhosted.org/packages/11/0b/91128e099035ece15da3445d9015e4b4153a6059403452d324cbb0a575fa/coverage-7.10.7-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:dd5e856ebb7bfb7672b0086846db5afb4567a7b9714b8a0ebafd211ec7ce6a15", size = 219259, upload-time = "2025-09-21T20:02:20.44Z" },
- { url = "https://files.pythonhosted.org/packages/8b/51/66420081e72801536a091a0c8f8c1f88a5c4bf7b9b1bdc6222c7afe6dc9b/coverage-7.10.7-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:f57b2a3c8353d3e04acf75b3fed57ba41f5c0646bbf1d10c7c282291c97936b4", size = 260592, upload-time = "2025-09-21T20:02:22.313Z" },
- { url = "https://files.pythonhosted.org/packages/5d/22/9b8d458c2881b22df3db5bb3e7369e63d527d986decb6c11a591ba2364f7/coverage-7.10.7-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:1ef2319dd15a0b009667301a3f84452a4dc6fddfd06b0c5c53ea472d3989fbf0", size = 262768, upload-time = "2025-09-21T20:02:24.287Z" },
- { url = "https://files.pythonhosted.org/packages/f7/08/16bee2c433e60913c610ea200b276e8eeef084b0d200bdcff69920bd5828/coverage-7.10.7-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:83082a57783239717ceb0ad584de3c69cf581b2a95ed6bf81ea66034f00401c0", size = 264995, upload-time = "2025-09-21T20:02:26.133Z" },
- { url = "https://files.pythonhosted.org/packages/20/9d/e53eb9771d154859b084b90201e5221bca7674ba449a17c101a5031d4054/coverage-7.10.7-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:50aa94fb1fb9a397eaa19c0d5ec15a5edd03a47bf1a3a6111a16b36e190cff65", size = 259546, upload-time = "2025-09-21T20:02:27.716Z" },
- { url = "https://files.pythonhosted.org/packages/ad/b0/69bc7050f8d4e56a89fb550a1577d5d0d1db2278106f6f626464067b3817/coverage-7.10.7-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:2120043f147bebb41c85b97ac45dd173595ff14f2a584f2963891cbcc3091541", size = 262544, upload-time = "2025-09-21T20:02:29.216Z" },
- { url = "https://files.pythonhosted.org/packages/ef/4b/2514b060dbd1bc0aaf23b852c14bb5818f244c664cb16517feff6bb3a5ab/coverage-7.10.7-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:2fafd773231dd0378fdba66d339f84904a8e57a262f583530f4f156ab83863e6", size = 260308, upload-time = "2025-09-21T20:02:31.226Z" },
- { url = "https://files.pythonhosted.org/packages/54/78/7ba2175007c246d75e496f64c06e94122bdb914790a1285d627a918bd271/coverage-7.10.7-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:0b944ee8459f515f28b851728ad224fa2d068f1513ef6b7ff1efafeb2185f999", size = 258920, upload-time = "2025-09-21T20:02:32.823Z" },
- { url = "https://files.pythonhosted.org/packages/c0/b3/fac9f7abbc841409b9a410309d73bfa6cfb2e51c3fada738cb607ce174f8/coverage-7.10.7-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:4b583b97ab2e3efe1b3e75248a9b333bd3f8b0b1b8e5b45578e05e5850dfb2c2", size = 261434, upload-time = "2025-09-21T20:02:34.86Z" },
- { url = "https://files.pythonhosted.org/packages/ee/51/a03bec00d37faaa891b3ff7387192cef20f01604e5283a5fabc95346befa/coverage-7.10.7-cp313-cp313t-win32.whl", hash = "sha256:2a78cd46550081a7909b3329e2266204d584866e8d97b898cd7fb5ac8d888b1a", size = 221403, upload-time = "2025-09-21T20:02:37.034Z" },
- { url = "https://files.pythonhosted.org/packages/53/22/3cf25d614e64bf6d8e59c7c669b20d6d940bb337bdee5900b9ca41c820bb/coverage-7.10.7-cp313-cp313t-win_amd64.whl", hash = "sha256:33a5e6396ab684cb43dc7befa386258acb2d7fae7f67330ebb85ba4ea27938eb", size = 222469, upload-time = "2025-09-21T20:02:39.011Z" },
- { url = "https://files.pythonhosted.org/packages/49/a1/00164f6d30d8a01c3c9c48418a7a5be394de5349b421b9ee019f380df2a0/coverage-7.10.7-cp313-cp313t-win_arm64.whl", hash = "sha256:86b0e7308289ddde73d863b7683f596d8d21c7d8664ce1dee061d0bcf3fbb4bb", size = 220731, upload-time = "2025-09-21T20:02:40.939Z" },
- { url = "https://files.pythonhosted.org/packages/23/9c/5844ab4ca6a4dd97a1850e030a15ec7d292b5c5cb93082979225126e35dd/coverage-7.10.7-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:b06f260b16ead11643a5a9f955bd4b5fd76c1a4c6796aeade8520095b75de520", size = 218302, upload-time = "2025-09-21T20:02:42.527Z" },
- { url = "https://files.pythonhosted.org/packages/f0/89/673f6514b0961d1f0e20ddc242e9342f6da21eaba3489901b565c0689f34/coverage-7.10.7-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:212f8f2e0612778f09c55dd4872cb1f64a1f2b074393d139278ce902064d5b32", size = 218578, upload-time = "2025-09-21T20:02:44.468Z" },
- { url = "https://files.pythonhosted.org/packages/05/e8/261cae479e85232828fb17ad536765c88dd818c8470aca690b0ac6feeaa3/coverage-7.10.7-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:3445258bcded7d4aa630ab8296dea4d3f15a255588dd535f980c193ab6b95f3f", size = 249629, upload-time = "2025-09-21T20:02:46.503Z" },
- { url = "https://files.pythonhosted.org/packages/82/62/14ed6546d0207e6eda876434e3e8475a3e9adbe32110ce896c9e0c06bb9a/coverage-7.10.7-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:bb45474711ba385c46a0bfe696c695a929ae69ac636cda8f532be9e8c93d720a", size = 252162, upload-time = "2025-09-21T20:02:48.689Z" },
- { url = "https://files.pythonhosted.org/packages/ff/49/07f00db9ac6478e4358165a08fb41b469a1b053212e8a00cb02f0d27a05f/coverage-7.10.7-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:813922f35bd800dca9994c5971883cbc0d291128a5de6b167c7aa697fcf59360", size = 253517, upload-time = "2025-09-21T20:02:50.31Z" },
- { url = "https://files.pythonhosted.org/packages/a2/59/c5201c62dbf165dfbc91460f6dbbaa85a8b82cfa6131ac45d6c1bfb52deb/coverage-7.10.7-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:93c1b03552081b2a4423091d6fb3787265b8f86af404cff98d1b5342713bdd69", size = 249632, upload-time = "2025-09-21T20:02:51.971Z" },
- { url = "https://files.pythonhosted.org/packages/07/ae/5920097195291a51fb00b3a70b9bbd2edbfe3c84876a1762bd1ef1565ebc/coverage-7.10.7-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:cc87dd1b6eaf0b848eebb1c86469b9f72a1891cb42ac7adcfbce75eadb13dd14", size = 251520, upload-time = "2025-09-21T20:02:53.858Z" },
- { url = "https://files.pythonhosted.org/packages/b9/3c/a815dde77a2981f5743a60b63df31cb322c944843e57dbd579326625a413/coverage-7.10.7-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:39508ffda4f343c35f3236fe8d1a6634a51f4581226a1262769d7f970e73bffe", size = 249455, upload-time = "2025-09-21T20:02:55.807Z" },
- { url = "https://files.pythonhosted.org/packages/aa/99/f5cdd8421ea656abefb6c0ce92556709db2265c41e8f9fc6c8ae0f7824c9/coverage-7.10.7-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:925a1edf3d810537c5a3abe78ec5530160c5f9a26b1f4270b40e62cc79304a1e", size = 249287, upload-time = "2025-09-21T20:02:57.784Z" },
- { url = "https://files.pythonhosted.org/packages/c3/7a/e9a2da6a1fc5d007dd51fca083a663ab930a8c4d149c087732a5dbaa0029/coverage-7.10.7-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:2c8b9a0636f94c43cd3576811e05b89aa9bc2d0a85137affc544ae5cb0e4bfbd", size = 250946, upload-time = "2025-09-21T20:02:59.431Z" },
- { url = "https://files.pythonhosted.org/packages/ef/5b/0b5799aa30380a949005a353715095d6d1da81927d6dbed5def2200a4e25/coverage-7.10.7-cp314-cp314-win32.whl", hash = "sha256:b7b8288eb7cdd268b0304632da8cb0bb93fadcfec2fe5712f7b9cc8f4d487be2", size = 221009, upload-time = "2025-09-21T20:03:01.324Z" },
- { url = "https://files.pythonhosted.org/packages/da/b0/e802fbb6eb746de006490abc9bb554b708918b6774b722bb3a0e6aa1b7de/coverage-7.10.7-cp314-cp314-win_amd64.whl", hash = "sha256:1ca6db7c8807fb9e755d0379ccc39017ce0a84dcd26d14b5a03b78563776f681", size = 221804, upload-time = "2025-09-21T20:03:03.4Z" },
- { url = "https://files.pythonhosted.org/packages/9e/e8/71d0c8e374e31f39e3389bb0bd19e527d46f00ea8571ec7ec8fd261d8b44/coverage-7.10.7-cp314-cp314-win_arm64.whl", hash = "sha256:097c1591f5af4496226d5783d036bf6fd6cd0cbc132e071b33861de756efb880", size = 220384, upload-time = "2025-09-21T20:03:05.111Z" },
- { url = "https://files.pythonhosted.org/packages/62/09/9a5608d319fa3eba7a2019addeacb8c746fb50872b57a724c9f79f146969/coverage-7.10.7-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:a62c6ef0d50e6de320c270ff91d9dd0a05e7250cac2a800b7784bae474506e63", size = 219047, upload-time = "2025-09-21T20:03:06.795Z" },
- { url = "https://files.pythonhosted.org/packages/f5/6f/f58d46f33db9f2e3647b2d0764704548c184e6f5e014bef528b7f979ef84/coverage-7.10.7-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:9fa6e4dd51fe15d8738708a973470f67a855ca50002294852e9571cdbd9433f2", size = 219266, upload-time = "2025-09-21T20:03:08.495Z" },
- { url = "https://files.pythonhosted.org/packages/74/5c/183ffc817ba68e0b443b8c934c8795553eb0c14573813415bd59941ee165/coverage-7.10.7-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:8fb190658865565c549b6b4706856d6a7b09302c797eb2cf8e7fe9dabb043f0d", size = 260767, upload-time = "2025-09-21T20:03:10.172Z" },
- { url = "https://files.pythonhosted.org/packages/0f/48/71a8abe9c1ad7e97548835e3cc1adbf361e743e9d60310c5f75c9e7bf847/coverage-7.10.7-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:affef7c76a9ef259187ef31599a9260330e0335a3011732c4b9effa01e1cd6e0", size = 262931, upload-time = "2025-09-21T20:03:11.861Z" },
- { url = "https://files.pythonhosted.org/packages/84/fd/193a8fb132acfc0a901f72020e54be5e48021e1575bb327d8ee1097a28fd/coverage-7.10.7-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6e16e07d85ca0cf8bafe5f5d23a0b850064e8e945d5677492b06bbe6f09cc699", size = 265186, upload-time = "2025-09-21T20:03:13.539Z" },
- { url = "https://files.pythonhosted.org/packages/b1/8f/74ecc30607dd95ad50e3034221113ccb1c6d4e8085cc761134782995daae/coverage-7.10.7-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:03ffc58aacdf65d2a82bbeb1ffe4d01ead4017a21bfd0454983b88ca73af94b9", size = 259470, upload-time = "2025-09-21T20:03:15.584Z" },
- { url = "https://files.pythonhosted.org/packages/0f/55/79ff53a769f20d71b07023ea115c9167c0bb56f281320520cf64c5298a96/coverage-7.10.7-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:1b4fd784344d4e52647fd7857b2af5b3fbe6c239b0b5fa63e94eb67320770e0f", size = 262626, upload-time = "2025-09-21T20:03:17.673Z" },
- { url = "https://files.pythonhosted.org/packages/88/e2/dac66c140009b61ac3fc13af673a574b00c16efdf04f9b5c740703e953c0/coverage-7.10.7-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:0ebbaddb2c19b71912c6f2518e791aa8b9f054985a0769bdb3a53ebbc765c6a1", size = 260386, upload-time = "2025-09-21T20:03:19.36Z" },
- { url = "https://files.pythonhosted.org/packages/a2/f1/f48f645e3f33bb9ca8a496bc4a9671b52f2f353146233ebd7c1df6160440/coverage-7.10.7-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:a2d9a3b260cc1d1dbdb1c582e63ddcf5363426a1a68faa0f5da28d8ee3c722a0", size = 258852, upload-time = "2025-09-21T20:03:21.007Z" },
- { url = "https://files.pythonhosted.org/packages/bb/3b/8442618972c51a7affeead957995cfa8323c0c9bcf8fa5a027421f720ff4/coverage-7.10.7-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:a3cc8638b2480865eaa3926d192e64ce6c51e3d29c849e09d5b4ad95efae5399", size = 261534, upload-time = "2025-09-21T20:03:23.12Z" },
- { url = "https://files.pythonhosted.org/packages/b2/dc/101f3fa3a45146db0cb03f5b4376e24c0aac818309da23e2de0c75295a91/coverage-7.10.7-cp314-cp314t-win32.whl", hash = "sha256:67f8c5cbcd3deb7a60b3345dffc89a961a484ed0af1f6f73de91705cc6e31235", size = 221784, upload-time = "2025-09-21T20:03:24.769Z" },
- { url = "https://files.pythonhosted.org/packages/4c/a1/74c51803fc70a8a40d7346660379e144be772bab4ac7bb6e6b905152345c/coverage-7.10.7-cp314-cp314t-win_amd64.whl", hash = "sha256:e1ed71194ef6dea7ed2d5cb5f7243d4bcd334bfb63e59878519be558078f848d", size = 222905, upload-time = "2025-09-21T20:03:26.93Z" },
- { url = "https://files.pythonhosted.org/packages/12/65/f116a6d2127df30bcafbceef0302d8a64ba87488bf6f73a6d8eebf060873/coverage-7.10.7-cp314-cp314t-win_arm64.whl", hash = "sha256:7fe650342addd8524ca63d77b2362b02345e5f1a093266787d210c70a50b471a", size = 220922, upload-time = "2025-09-21T20:03:28.672Z" },
- { url = "https://files.pythonhosted.org/packages/a3/ad/d1c25053764b4c42eb294aae92ab617d2e4f803397f9c7c8295caa77a260/coverage-7.10.7-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:fff7b9c3f19957020cac546c70025331113d2e61537f6e2441bc7657913de7d3", size = 217978, upload-time = "2025-09-21T20:03:30.362Z" },
- { url = "https://files.pythonhosted.org/packages/52/2f/b9f9daa39b80ece0b9548bbb723381e29bc664822d9a12c2135f8922c22b/coverage-7.10.7-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:bc91b314cef27742da486d6839b677b3f2793dfe52b51bbbb7cf736d5c29281c", size = 218370, upload-time = "2025-09-21T20:03:32.147Z" },
- { url = "https://files.pythonhosted.org/packages/dd/6e/30d006c3b469e58449650642383dddf1c8fb63d44fdf92994bfd46570695/coverage-7.10.7-cp39-cp39-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:567f5c155eda8df1d3d439d40a45a6a5f029b429b06648235f1e7e51b522b396", size = 244802, upload-time = "2025-09-21T20:03:33.919Z" },
- { url = "https://files.pythonhosted.org/packages/b0/49/8a070782ce7e6b94ff6a0b6d7c65ba6bc3091d92a92cef4cd4eb0767965c/coverage-7.10.7-cp39-cp39-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:2af88deffcc8a4d5974cf2d502251bc3b2db8461f0b66d80a449c33757aa9f40", size = 246625, upload-time = "2025-09-21T20:03:36.09Z" },
- { url = "https://files.pythonhosted.org/packages/6a/92/1c1c5a9e8677ce56d42b97bdaca337b2d4d9ebe703d8c174ede52dbabd5f/coverage-7.10.7-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c7315339eae3b24c2d2fa1ed7d7a38654cba34a13ef19fbcb9425da46d3dc594", size = 248399, upload-time = "2025-09-21T20:03:38.342Z" },
- { url = "https://files.pythonhosted.org/packages/c0/54/b140edee7257e815de7426d5d9846b58505dffc29795fff2dfb7f8a1c5a0/coverage-7.10.7-cp39-cp39-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:912e6ebc7a6e4adfdbb1aec371ad04c68854cd3bf3608b3514e7ff9062931d8a", size = 245142, upload-time = "2025-09-21T20:03:40.591Z" },
- { url = "https://files.pythonhosted.org/packages/e4/9e/6d6b8295940b118e8b7083b29226c71f6154f7ff41e9ca431f03de2eac0d/coverage-7.10.7-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:f49a05acd3dfe1ce9715b657e28d138578bc40126760efb962322c56e9ca344b", size = 246284, upload-time = "2025-09-21T20:03:42.355Z" },
- { url = "https://files.pythonhosted.org/packages/db/e5/5e957ca747d43dbe4d9714358375c7546cb3cb533007b6813fc20fce37ad/coverage-7.10.7-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:cce2109b6219f22ece99db7644b9622f54a4e915dad65660ec435e89a3ea7cc3", size = 244353, upload-time = "2025-09-21T20:03:44.218Z" },
- { url = "https://files.pythonhosted.org/packages/9a/45/540fc5cc92536a1b783b7ef99450bd55a4b3af234aae35a18a339973ce30/coverage-7.10.7-cp39-cp39-musllinux_1_2_riscv64.whl", hash = "sha256:f3c887f96407cea3916294046fc7dab611c2552beadbed4ea901cbc6a40cc7a0", size = 244430, upload-time = "2025-09-21T20:03:46.065Z" },
- { url = "https://files.pythonhosted.org/packages/75/0b/8287b2e5b38c8fe15d7e3398849bb58d382aedc0864ea0fa1820e8630491/coverage-7.10.7-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:635adb9a4507c9fd2ed65f39693fa31c9a3ee3a8e6dc64df033e8fdf52a7003f", size = 245311, upload-time = "2025-09-21T20:03:48.19Z" },
- { url = "https://files.pythonhosted.org/packages/0c/1d/29724999984740f0c86d03e6420b942439bf5bd7f54d4382cae386a9d1e9/coverage-7.10.7-cp39-cp39-win32.whl", hash = "sha256:5a02d5a850e2979b0a014c412573953995174743a3f7fa4ea5a6e9a3c5617431", size = 220500, upload-time = "2025-09-21T20:03:50.024Z" },
- { url = "https://files.pythonhosted.org/packages/43/11/4b1e6b129943f905ca54c339f343877b55b365ae2558806c1be4f7476ed5/coverage-7.10.7-cp39-cp39-win_amd64.whl", hash = "sha256:c134869d5ffe34547d14e174c866fd8fe2254918cc0a95e99052903bc1543e07", size = 221408, upload-time = "2025-09-21T20:03:51.803Z" },
- { url = "https://files.pythonhosted.org/packages/ec/16/114df1c291c22cac3b0c127a73e0af5c12ed7bbb6558d310429a0ae24023/coverage-7.10.7-py3-none-any.whl", hash = "sha256:f7941f6f2fe6dd6807a1208737b8a0cbcf1cc6d7b07d24998ad2d63590868260", size = 209952, upload-time = "2025-09-21T20:03:53.918Z" },
-]
-
-[package.optional-dependencies]
-toml = [
- { name = "tomli", marker = "python_full_version < '3.10'" },
-]
-
[[package]]
name = "coverage"
version = "7.11.0"
source = { registry = "https://pypi.org/simple" }
-resolution-markers = [
- "python_full_version >= '3.12'",
- "python_full_version == '3.11.*'",
- "python_full_version == '3.10.*'",
-]
sdist = { url = "https://files.pythonhosted.org/packages/1c/38/ee22495420457259d2f3390309505ea98f98a5eed40901cf62196abad006/coverage-7.11.0.tar.gz", hash = "sha256:167bd504ac1ca2af7ff3b81d245dfea0292c5032ebef9d66cc08a7d28c1b8050", size = 811905, upload-time = "2025-10-15T15:15:08.542Z" }
wheels = [
- { url = "https://files.pythonhosted.org/packages/12/95/c49df0aceb5507a80b9fe5172d3d39bf23f05be40c23c8d77d556df96cec/coverage-7.11.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:eb53f1e8adeeb2e78962bade0c08bfdc461853c7969706ed901821e009b35e31", size = 215800, upload-time = "2025-10-15T15:12:19.824Z" },
- { url = "https://files.pythonhosted.org/packages/dc/c6/7bb46ce01ed634fff1d7bb53a54049f539971862cc388b304ff3c51b4f66/coverage-7.11.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d9a03ec6cb9f40a5c360f138b88266fd8f58408d71e89f536b4f91d85721d075", size = 216198, upload-time = "2025-10-15T15:12:22.549Z" },
- { url = "https://files.pythonhosted.org/packages/94/b2/75d9d8fbf2900268aca5de29cd0a0fe671b0f69ef88be16767cc3c828b85/coverage-7.11.0-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:0d7f0616c557cbc3d1c2090334eddcbb70e1ae3a40b07222d62b3aa47f608fab", size = 242953, upload-time = "2025-10-15T15:12:24.139Z" },
- { url = "https://files.pythonhosted.org/packages/65/ac/acaa984c18f440170525a8743eb4b6c960ace2dbad80dc22056a437fc3c6/coverage-7.11.0-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:e44a86a47bbdf83b0a3ea4d7df5410d6b1a0de984fbd805fa5101f3624b9abe0", size = 244766, upload-time = "2025-10-15T15:12:25.974Z" },
- { url = "https://files.pythonhosted.org/packages/d8/0d/938d0bff76dfa4a6b228c3fc4b3e1c0e2ad4aa6200c141fcda2bd1170227/coverage-7.11.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:596763d2f9a0ee7eec6e643e29660def2eef297e1de0d334c78c08706f1cb785", size = 246625, upload-time = "2025-10-15T15:12:27.387Z" },
- { url = "https://files.pythonhosted.org/packages/38/54/8f5f5e84bfa268df98f46b2cb396b1009734cfb1e5d6adb663d284893b32/coverage-7.11.0-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:ef55537ff511b5e0a43edb4c50a7bf7ba1c3eea20b4f49b1490f1e8e0e42c591", size = 243568, upload-time = "2025-10-15T15:12:28.799Z" },
- { url = "https://files.pythonhosted.org/packages/68/30/8ba337c2877fe3f2e1af0ed7ff4be0c0c4aca44d6f4007040f3ca2255e99/coverage-7.11.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:9cbabd8f4d0d3dc571d77ae5bdbfa6afe5061e679a9d74b6797c48d143307088", size = 244665, upload-time = "2025-10-15T15:12:30.297Z" },
- { url = "https://files.pythonhosted.org/packages/cc/fb/c6f1d6d9a665536b7dde2333346f0cc41dc6a60bd1ffc10cd5c33e7eb000/coverage-7.11.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:e24045453384e0ae2a587d562df2a04d852672eb63051d16096d3f08aa4c7c2f", size = 242681, upload-time = "2025-10-15T15:12:32.326Z" },
- { url = "https://files.pythonhosted.org/packages/be/38/1b532319af5f991fa153c20373291dc65c2bf532af7dbcffdeef745c8f79/coverage-7.11.0-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:7161edd3426c8d19bdccde7d49e6f27f748f3c31cc350c5de7c633fea445d866", size = 242912, upload-time = "2025-10-15T15:12:34.079Z" },
- { url = "https://files.pythonhosted.org/packages/67/3d/f39331c60ef6050d2a861dc1b514fa78f85f792820b68e8c04196ad733d6/coverage-7.11.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:3d4ed4de17e692ba6415b0587bc7f12bc80915031fc9db46a23ce70fc88c9841", size = 243559, upload-time = "2025-10-15T15:12:35.809Z" },
- { url = "https://files.pythonhosted.org/packages/4b/55/cb7c9df9d0495036ce582a8a2958d50c23cd73f84a23284bc23bd4711a6f/coverage-7.11.0-cp310-cp310-win32.whl", hash = "sha256:765c0bc8fe46f48e341ef737c91c715bd2a53a12792592296a095f0c237e09cf", size = 218266, upload-time = "2025-10-15T15:12:37.429Z" },
- { url = "https://files.pythonhosted.org/packages/68/a8/b79cb275fa7bd0208767f89d57a1b5f6ba830813875738599741b97c2e04/coverage-7.11.0-cp310-cp310-win_amd64.whl", hash = "sha256:24d6f3128f1b2d20d84b24f4074475457faedc3d4613a7e66b5e769939c7d969", size = 219169, upload-time = "2025-10-15T15:12:39.25Z" },
{ url = "https://files.pythonhosted.org/packages/49/3a/ee1074c15c408ddddddb1db7dd904f6b81bc524e01f5a1c5920e13dbde23/coverage-7.11.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d58ecaa865c5b9fa56e35efc51d1014d4c0d22838815b9fce57a27dd9576847", size = 215912, upload-time = "2025-10-15T15:12:40.665Z" },
{ url = "https://files.pythonhosted.org/packages/70/c4/9f44bebe5cb15f31608597b037d78799cc5f450044465bcd1ae8cb222fe1/coverage-7.11.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b679e171f1c104a5668550ada700e3c4937110dbdd153b7ef9055c4f1a1ee3cc", size = 216310, upload-time = "2025-10-15T15:12:42.461Z" },
{ url = "https://files.pythonhosted.org/packages/42/01/5e06077cfef92d8af926bdd86b84fb28bf9bc6ad27343d68be9b501d89f2/coverage-7.11.0-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:ca61691ba8c5b6797deb221a0d09d7470364733ea9c69425a640f1f01b7c5bf0", size = 246706, upload-time = "2025-10-15T15:12:44.001Z" },
[package.optional-dependencies]
toml = [
- { name = "tomli", marker = "python_full_version >= '3.10' and python_full_version <= '3.11'" },
-]
-
-[[package]]
-name = "exceptiongroup"
-version = "1.3.0"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "typing-extensions", marker = "python_full_version < '3.11'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/0b/9f/a65090624ecf468cdca03533906e7c69ed7588582240cfe7cc9e770b50eb/exceptiongroup-1.3.0.tar.gz", hash = "sha256:b241f5885f560bc56a59ee63ca4c6a8bfa46ae4ad651af316d4e81817bb9fd88", size = 29749, upload-time = "2025-05-10T17:42:51.123Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/36/f4/c6e662dade71f56cd2f3735141b265c3c79293c109549c1e6933b0651ffc/exceptiongroup-1.3.0-py3-none-any.whl", hash = "sha256:4d111e6e0c13d0644cad6ddaa7ed0261a0b36971f6d23e7ec9b4b9097da78a10", size = 16674, upload-time = "2025-05-10T17:42:49.33Z" },
-]
-
-[[package]]
-name = "iniconfig"
-version = "2.1.0"
-source = { registry = "https://pypi.org/simple" }
-resolution-markers = [
- "python_full_version < '3.10'",
-]
-sdist = { url = "https://files.pythonhosted.org/packages/f2/97/ebf4da567aa6827c909642694d71c9fcf53e5b504f2d96afea02718862f3/iniconfig-2.1.0.tar.gz", hash = "sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7", size = 4793, upload-time = "2025-03-19T20:09:59.721Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/2c/e1/e6716421ea10d38022b952c159d5161ca1193197fb744506875fbb87ea7b/iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760", size = 6050, upload-time = "2025-03-19T20:10:01.071Z" },
+ { name = "tomli", marker = "python_full_version <= '3.11'" },
]
[[package]]
name = "iniconfig"
version = "2.3.0"
source = { registry = "https://pypi.org/simple" }
-resolution-markers = [
- "python_full_version >= '3.12'",
- "python_full_version == '3.11.*'",
- "python_full_version == '3.10.*'",
-]
sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
{ url = "https://files.pythonhosted.org/packages/1e/e8/685f47e0d754320684db4425a0967f7d3fa70126bffd76110b7009a0090f/joblib-1.5.2-py3-none-any.whl", hash = "sha256:4e1f0bdbb987e6d843c70cf43714cb276623def372df3c22fe5266b2670bc241", size = 308396, upload-time = "2025-08-27T12:15:45.188Z" },
]
-[[package]]
-name = "numpy"
-version = "2.0.2"
-source = { registry = "https://pypi.org/simple" }
-resolution-markers = [
- "python_full_version < '3.10'",
-]
-sdist = { url = "https://files.pythonhosted.org/packages/a9/75/10dd1f8116a8b796cb2c737b674e02d02e80454bda953fa7e65d8c12b016/numpy-2.0.2.tar.gz", hash = "sha256:883c987dee1880e2a864ab0dc9892292582510604156762362d9326444636e78", size = 18902015, upload-time = "2024-08-26T20:19:40.945Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/21/91/3495b3237510f79f5d81f2508f9f13fea78ebfdf07538fc7444badda173d/numpy-2.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:51129a29dbe56f9ca83438b706e2e69a39892b5eda6cedcb6b0c9fdc9b0d3ece", size = 21165245, upload-time = "2024-08-26T20:04:14.625Z" },
- { url = "https://files.pythonhosted.org/packages/05/33/26178c7d437a87082d11019292dce6d3fe6f0e9026b7b2309cbf3e489b1d/numpy-2.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f15975dfec0cf2239224d80e32c3170b1d168335eaedee69da84fbe9f1f9cd04", size = 13738540, upload-time = "2024-08-26T20:04:36.784Z" },
- { url = "https://files.pythonhosted.org/packages/ec/31/cc46e13bf07644efc7a4bf68df2df5fb2a1a88d0cd0da9ddc84dc0033e51/numpy-2.0.2-cp310-cp310-macosx_14_0_arm64.whl", hash = "sha256:8c5713284ce4e282544c68d1c3b2c7161d38c256d2eefc93c1d683cf47683e66", size = 5300623, upload-time = "2024-08-26T20:04:46.491Z" },
- { url = "https://files.pythonhosted.org/packages/6e/16/7bfcebf27bb4f9d7ec67332ffebee4d1bf085c84246552d52dbb548600e7/numpy-2.0.2-cp310-cp310-macosx_14_0_x86_64.whl", hash = "sha256:becfae3ddd30736fe1889a37f1f580e245ba79a5855bff5f2a29cb3ccc22dd7b", size = 6901774, upload-time = "2024-08-26T20:04:58.173Z" },
- { url = "https://files.pythonhosted.org/packages/f9/a3/561c531c0e8bf082c5bef509d00d56f82e0ea7e1e3e3a7fc8fa78742a6e5/numpy-2.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2da5960c3cf0df7eafefd806d4e612c5e19358de82cb3c343631188991566ccd", size = 13907081, upload-time = "2024-08-26T20:05:19.098Z" },
- { url = "https://files.pythonhosted.org/packages/fa/66/f7177ab331876200ac7563a580140643d1179c8b4b6a6b0fc9838de2a9b8/numpy-2.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:496f71341824ed9f3d2fd36cf3ac57ae2e0165c143b55c3a035ee219413f3318", size = 19523451, upload-time = "2024-08-26T20:05:47.479Z" },
- { url = "https://files.pythonhosted.org/packages/25/7f/0b209498009ad6453e4efc2c65bcdf0ae08a182b2b7877d7ab38a92dc542/numpy-2.0.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:a61ec659f68ae254e4d237816e33171497e978140353c0c2038d46e63282d0c8", size = 19927572, upload-time = "2024-08-26T20:06:17.137Z" },
- { url = "https://files.pythonhosted.org/packages/3e/df/2619393b1e1b565cd2d4c4403bdd979621e2c4dea1f8532754b2598ed63b/numpy-2.0.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:d731a1c6116ba289c1e9ee714b08a8ff882944d4ad631fd411106a30f083c326", size = 14400722, upload-time = "2024-08-26T20:06:39.16Z" },
- { url = "https://files.pythonhosted.org/packages/22/ad/77e921b9f256d5da36424ffb711ae79ca3f451ff8489eeca544d0701d74a/numpy-2.0.2-cp310-cp310-win32.whl", hash = "sha256:984d96121c9f9616cd33fbd0618b7f08e0cfc9600a7ee1d6fd9b239186d19d97", size = 6472170, upload-time = "2024-08-26T20:06:50.361Z" },
- { url = "https://files.pythonhosted.org/packages/10/05/3442317535028bc29cf0c0dd4c191a4481e8376e9f0db6bcf29703cadae6/numpy-2.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:c7b0be4ef08607dd04da4092faee0b86607f111d5ae68036f16cc787e250a131", size = 15905558, upload-time = "2024-08-26T20:07:13.881Z" },
- { url = "https://files.pythonhosted.org/packages/8b/cf/034500fb83041aa0286e0fb16e7c76e5c8b67c0711bb6e9e9737a717d5fe/numpy-2.0.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:49ca4decb342d66018b01932139c0961a8f9ddc7589611158cb3c27cbcf76448", size = 21169137, upload-time = "2024-08-26T20:07:45.345Z" },
- { url = "https://files.pythonhosted.org/packages/4a/d9/32de45561811a4b87fbdee23b5797394e3d1504b4a7cf40c10199848893e/numpy-2.0.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:11a76c372d1d37437857280aa142086476136a8c0f373b2e648ab2c8f18fb195", size = 13703552, upload-time = "2024-08-26T20:08:06.666Z" },
- { url = "https://files.pythonhosted.org/packages/c1/ca/2f384720020c7b244d22508cb7ab23d95f179fcfff33c31a6eeba8d6c512/numpy-2.0.2-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:807ec44583fd708a21d4a11d94aedf2f4f3c3719035c76a2bbe1fe8e217bdc57", size = 5298957, upload-time = "2024-08-26T20:08:15.83Z" },
- { url = "https://files.pythonhosted.org/packages/0e/78/a3e4f9fb6aa4e6fdca0c5428e8ba039408514388cf62d89651aade838269/numpy-2.0.2-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:8cafab480740e22f8d833acefed5cc87ce276f4ece12fdaa2e8903db2f82897a", size = 6905573, upload-time = "2024-08-26T20:08:27.185Z" },
- { url = "https://files.pythonhosted.org/packages/a0/72/cfc3a1beb2caf4efc9d0b38a15fe34025230da27e1c08cc2eb9bfb1c7231/numpy-2.0.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a15f476a45e6e5a3a79d8a14e62161d27ad897381fecfa4a09ed5322f2085669", size = 13914330, upload-time = "2024-08-26T20:08:48.058Z" },
- { url = "https://files.pythonhosted.org/packages/ba/a8/c17acf65a931ce551fee11b72e8de63bf7e8a6f0e21add4c937c83563538/numpy-2.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:13e689d772146140a252c3a28501da66dfecd77490b498b168b501835041f951", size = 19534895, upload-time = "2024-08-26T20:09:16.536Z" },
- { url = "https://files.pythonhosted.org/packages/ba/86/8767f3d54f6ae0165749f84648da9dcc8cd78ab65d415494962c86fac80f/numpy-2.0.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:9ea91dfb7c3d1c56a0e55657c0afb38cf1eeae4544c208dc465c3c9f3a7c09f9", size = 19937253, upload-time = "2024-08-26T20:09:46.263Z" },
- { url = "https://files.pythonhosted.org/packages/df/87/f76450e6e1c14e5bb1eae6836478b1028e096fd02e85c1c37674606ab752/numpy-2.0.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:c1c9307701fec8f3f7a1e6711f9089c06e6284b3afbbcd259f7791282d660a15", size = 14414074, upload-time = "2024-08-26T20:10:08.483Z" },
- { url = "https://files.pythonhosted.org/packages/5c/ca/0f0f328e1e59f73754f06e1adfb909de43726d4f24c6a3f8805f34f2b0fa/numpy-2.0.2-cp311-cp311-win32.whl", hash = "sha256:a392a68bd329eafac5817e5aefeb39038c48b671afd242710b451e76090e81f4", size = 6470640, upload-time = "2024-08-26T20:10:19.732Z" },
- { url = "https://files.pythonhosted.org/packages/eb/57/3a3f14d3a759dcf9bf6e9eda905794726b758819df4663f217d658a58695/numpy-2.0.2-cp311-cp311-win_amd64.whl", hash = "sha256:286cd40ce2b7d652a6f22efdfc6d1edf879440e53e76a75955bc0c826c7e64dc", size = 15910230, upload-time = "2024-08-26T20:10:43.413Z" },
- { url = "https://files.pythonhosted.org/packages/45/40/2e117be60ec50d98fa08c2f8c48e09b3edea93cfcabd5a9ff6925d54b1c2/numpy-2.0.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:df55d490dea7934f330006d0f81e8551ba6010a5bf035a249ef61a94f21c500b", size = 20895803, upload-time = "2024-08-26T20:11:13.916Z" },
- { url = "https://files.pythonhosted.org/packages/46/92/1b8b8dee833f53cef3e0a3f69b2374467789e0bb7399689582314df02651/numpy-2.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8df823f570d9adf0978347d1f926b2a867d5608f434a7cff7f7908c6570dcf5e", size = 13471835, upload-time = "2024-08-26T20:11:34.779Z" },
- { url = "https://files.pythonhosted.org/packages/7f/19/e2793bde475f1edaea6945be141aef6c8b4c669b90c90a300a8954d08f0a/numpy-2.0.2-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:9a92ae5c14811e390f3767053ff54eaee3bf84576d99a2456391401323f4ec2c", size = 5038499, upload-time = "2024-08-26T20:11:43.902Z" },
- { url = "https://files.pythonhosted.org/packages/e3/ff/ddf6dac2ff0dd50a7327bcdba45cb0264d0e96bb44d33324853f781a8f3c/numpy-2.0.2-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:a842d573724391493a97a62ebbb8e731f8a5dcc5d285dfc99141ca15a3302d0c", size = 6633497, upload-time = "2024-08-26T20:11:55.09Z" },
- { url = "https://files.pythonhosted.org/packages/72/21/67f36eac8e2d2cd652a2e69595a54128297cdcb1ff3931cfc87838874bd4/numpy-2.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c05e238064fc0610c840d1cf6a13bf63d7e391717d247f1bf0318172e759e692", size = 13621158, upload-time = "2024-08-26T20:12:14.95Z" },
- { url = "https://files.pythonhosted.org/packages/39/68/e9f1126d757653496dbc096cb429014347a36b228f5a991dae2c6b6cfd40/numpy-2.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0123ffdaa88fa4ab64835dcbde75dcdf89c453c922f18dced6e27c90d1d0ec5a", size = 19236173, upload-time = "2024-08-26T20:12:44.049Z" },
- { url = "https://files.pythonhosted.org/packages/d1/e9/1f5333281e4ebf483ba1c888b1d61ba7e78d7e910fdd8e6499667041cc35/numpy-2.0.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:96a55f64139912d61de9137f11bf39a55ec8faec288c75a54f93dfd39f7eb40c", size = 19634174, upload-time = "2024-08-26T20:13:13.634Z" },
- { url = "https://files.pythonhosted.org/packages/71/af/a469674070c8d8408384e3012e064299f7a2de540738a8e414dcfd639996/numpy-2.0.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:ec9852fb39354b5a45a80bdab5ac02dd02b15f44b3804e9f00c556bf24b4bded", size = 14099701, upload-time = "2024-08-26T20:13:34.851Z" },
- { url = "https://files.pythonhosted.org/packages/d0/3d/08ea9f239d0e0e939b6ca52ad403c84a2bce1bde301a8eb4888c1c1543f1/numpy-2.0.2-cp312-cp312-win32.whl", hash = "sha256:671bec6496f83202ed2d3c8fdc486a8fc86942f2e69ff0e986140339a63bcbe5", size = 6174313, upload-time = "2024-08-26T20:13:45.653Z" },
- { url = "https://files.pythonhosted.org/packages/b2/b5/4ac39baebf1fdb2e72585c8352c56d063b6126be9fc95bd2bb5ef5770c20/numpy-2.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:cfd41e13fdc257aa5778496b8caa5e856dc4896d4ccf01841daee1d96465467a", size = 15606179, upload-time = "2024-08-26T20:14:08.786Z" },
- { url = "https://files.pythonhosted.org/packages/43/c1/41c8f6df3162b0c6ffd4437d729115704bd43363de0090c7f913cfbc2d89/numpy-2.0.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9059e10581ce4093f735ed23f3b9d283b9d517ff46009ddd485f1747eb22653c", size = 21169942, upload-time = "2024-08-26T20:14:40.108Z" },
- { url = "https://files.pythonhosted.org/packages/39/bc/fd298f308dcd232b56a4031fd6ddf11c43f9917fbc937e53762f7b5a3bb1/numpy-2.0.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:423e89b23490805d2a5a96fe40ec507407b8ee786d66f7328be214f9679df6dd", size = 13711512, upload-time = "2024-08-26T20:15:00.985Z" },
- { url = "https://files.pythonhosted.org/packages/96/ff/06d1aa3eeb1c614eda245c1ba4fb88c483bee6520d361641331872ac4b82/numpy-2.0.2-cp39-cp39-macosx_14_0_arm64.whl", hash = "sha256:2b2955fa6f11907cf7a70dab0d0755159bca87755e831e47932367fc8f2f2d0b", size = 5306976, upload-time = "2024-08-26T20:15:10.876Z" },
- { url = "https://files.pythonhosted.org/packages/2d/98/121996dcfb10a6087a05e54453e28e58694a7db62c5a5a29cee14c6e047b/numpy-2.0.2-cp39-cp39-macosx_14_0_x86_64.whl", hash = "sha256:97032a27bd9d8988b9a97a8c4d2c9f2c15a81f61e2f21404d7e8ef00cb5be729", size = 6906494, upload-time = "2024-08-26T20:15:22.055Z" },
- { url = "https://files.pythonhosted.org/packages/15/31/9dffc70da6b9bbf7968f6551967fc21156207366272c2a40b4ed6008dc9b/numpy-2.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1e795a8be3ddbac43274f18588329c72939870a16cae810c2b73461c40718ab1", size = 13912596, upload-time = "2024-08-26T20:15:42.452Z" },
- { url = "https://files.pythonhosted.org/packages/b9/14/78635daab4b07c0930c919d451b8bf8c164774e6a3413aed04a6d95758ce/numpy-2.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f26b258c385842546006213344c50655ff1555a9338e2e5e02a0756dc3e803dd", size = 19526099, upload-time = "2024-08-26T20:16:11.048Z" },
- { url = "https://files.pythonhosted.org/packages/26/4c/0eeca4614003077f68bfe7aac8b7496f04221865b3a5e7cb230c9d055afd/numpy-2.0.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5fec9451a7789926bcf7c2b8d187292c9f93ea30284802a0ab3f5be8ab36865d", size = 19932823, upload-time = "2024-08-26T20:16:40.171Z" },
- { url = "https://files.pythonhosted.org/packages/f1/46/ea25b98b13dccaebddf1a803f8c748680d972e00507cd9bc6dcdb5aa2ac1/numpy-2.0.2-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:9189427407d88ff25ecf8f12469d4d39d35bee1db5d39fc5c168c6f088a6956d", size = 14404424, upload-time = "2024-08-26T20:17:02.604Z" },
- { url = "https://files.pythonhosted.org/packages/c8/a6/177dd88d95ecf07e722d21008b1b40e681a929eb9e329684d449c36586b2/numpy-2.0.2-cp39-cp39-win32.whl", hash = "sha256:905d16e0c60200656500c95b6b8dca5d109e23cb24abc701d41c02d74c6b3afa", size = 6476809, upload-time = "2024-08-26T20:17:13.553Z" },
- { url = "https://files.pythonhosted.org/packages/ea/2b/7fc9f4e7ae5b507c1a3a21f0f15ed03e794c1242ea8a242ac158beb56034/numpy-2.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:a3f4ab0caa7f053f6797fcd4e1e25caee367db3112ef2b6ef82d749530768c73", size = 15911314, upload-time = "2024-08-26T20:17:36.72Z" },
- { url = "https://files.pythonhosted.org/packages/8f/3b/df5a870ac6a3be3a86856ce195ef42eec7ae50d2a202be1f5a4b3b340e14/numpy-2.0.2-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:7f0a0c6f12e07fa94133c8a67404322845220c06a9e80e85999afe727f7438b8", size = 21025288, upload-time = "2024-08-26T20:18:07.732Z" },
- { url = "https://files.pythonhosted.org/packages/2c/97/51af92f18d6f6f2d9ad8b482a99fb74e142d71372da5d834b3a2747a446e/numpy-2.0.2-pp39-pypy39_pp73-macosx_14_0_x86_64.whl", hash = "sha256:312950fdd060354350ed123c0e25a71327d3711584beaef30cdaa93320c392d4", size = 6762793, upload-time = "2024-08-26T20:18:19.125Z" },
- { url = "https://files.pythonhosted.org/packages/12/46/de1fbd0c1b5ccaa7f9a005b66761533e2f6a3e560096682683a223631fe9/numpy-2.0.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:26df23238872200f63518dd2aa984cfca675d82469535dc7162dc2ee52d9dd5c", size = 19334885, upload-time = "2024-08-26T20:18:47.237Z" },
- { url = "https://files.pythonhosted.org/packages/cc/dc/d330a6faefd92b446ec0f0dfea4c3207bb1fef3c4771d19cf4543efd2c78/numpy-2.0.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:a46288ec55ebbd58947d31d72be2c63cbf839f0a63b49cb755022310792a3385", size = 15828784, upload-time = "2024-08-26T20:19:11.19Z" },
-]
-
-[[package]]
-name = "numpy"
-version = "2.2.6"
-source = { registry = "https://pypi.org/simple" }
-resolution-markers = [
- "python_full_version == '3.10.*'",
-]
-sdist = { url = "https://files.pythonhosted.org/packages/76/21/7d2a95e4bba9dc13d043ee156a356c0a8f0c6309dff6b21b4d71a073b8a8/numpy-2.2.6.tar.gz", hash = "sha256:e29554e2bef54a90aa5cc07da6ce955accb83f21ab5de01a62c8478897b264fd", size = 20276440, upload-time = "2025-05-17T22:38:04.611Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/9a/3e/ed6db5be21ce87955c0cbd3009f2803f59fa08df21b5df06862e2d8e2bdd/numpy-2.2.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b412caa66f72040e6d268491a59f2c43bf03eb6c96dd8f0307829feb7fa2b6fb", size = 21165245, upload-time = "2025-05-17T21:27:58.555Z" },
- { url = "https://files.pythonhosted.org/packages/22/c2/4b9221495b2a132cc9d2eb862e21d42a009f5a60e45fc44b00118c174bff/numpy-2.2.6-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8e41fd67c52b86603a91c1a505ebaef50b3314de0213461c7a6e99c9a3beff90", size = 14360048, upload-time = "2025-05-17T21:28:21.406Z" },
- { url = "https://files.pythonhosted.org/packages/fd/77/dc2fcfc66943c6410e2bf598062f5959372735ffda175b39906d54f02349/numpy-2.2.6-cp310-cp310-macosx_14_0_arm64.whl", hash = "sha256:37e990a01ae6ec7fe7fa1c26c55ecb672dd98b19c3d0e1d1f326fa13cb38d163", size = 5340542, upload-time = "2025-05-17T21:28:30.931Z" },
- { url = "https://files.pythonhosted.org/packages/7a/4f/1cb5fdc353a5f5cc7feb692db9b8ec2c3d6405453f982435efc52561df58/numpy-2.2.6-cp310-cp310-macosx_14_0_x86_64.whl", hash = "sha256:5a6429d4be8ca66d889b7cf70f536a397dc45ba6faeb5f8c5427935d9592e9cf", size = 6878301, upload-time = "2025-05-17T21:28:41.613Z" },
- { url = "https://files.pythonhosted.org/packages/eb/17/96a3acd228cec142fcb8723bd3cc39c2a474f7dcf0a5d16731980bcafa95/numpy-2.2.6-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:efd28d4e9cd7d7a8d39074a4d44c63eda73401580c5c76acda2ce969e0a38e83", size = 14297320, upload-time = "2025-05-17T21:29:02.78Z" },
- { url = "https://files.pythonhosted.org/packages/b4/63/3de6a34ad7ad6646ac7d2f55ebc6ad439dbbf9c4370017c50cf403fb19b5/numpy-2.2.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fc7b73d02efb0e18c000e9ad8b83480dfcd5dfd11065997ed4c6747470ae8915", size = 16801050, upload-time = "2025-05-17T21:29:27.675Z" },
- { url = "https://files.pythonhosted.org/packages/07/b6/89d837eddef52b3d0cec5c6ba0456c1bf1b9ef6a6672fc2b7873c3ec4e2e/numpy-2.2.6-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:74d4531beb257d2c3f4b261bfb0fc09e0f9ebb8842d82a7b4209415896adc680", size = 15807034, upload-time = "2025-05-17T21:29:51.102Z" },
- { url = "https://files.pythonhosted.org/packages/01/c8/dc6ae86e3c61cfec1f178e5c9f7858584049b6093f843bca541f94120920/numpy-2.2.6-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:8fc377d995680230e83241d8a96def29f204b5782f371c532579b4f20607a289", size = 18614185, upload-time = "2025-05-17T21:30:18.703Z" },
- { url = "https://files.pythonhosted.org/packages/5b/c5/0064b1b7e7c89137b471ccec1fd2282fceaae0ab3a9550f2568782d80357/numpy-2.2.6-cp310-cp310-win32.whl", hash = "sha256:b093dd74e50a8cba3e873868d9e93a85b78e0daf2e98c6797566ad8044e8363d", size = 6527149, upload-time = "2025-05-17T21:30:29.788Z" },
- { url = "https://files.pythonhosted.org/packages/a3/dd/4b822569d6b96c39d1215dbae0582fd99954dcbcf0c1a13c61783feaca3f/numpy-2.2.6-cp310-cp310-win_amd64.whl", hash = "sha256:f0fd6321b839904e15c46e0d257fdd101dd7f530fe03fd6359c1ea63738703f3", size = 12904620, upload-time = "2025-05-17T21:30:48.994Z" },
- { url = "https://files.pythonhosted.org/packages/da/a8/4f83e2aa666a9fbf56d6118faaaf5f1974d456b1823fda0a176eff722839/numpy-2.2.6-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f9f1adb22318e121c5c69a09142811a201ef17ab257a1e66ca3025065b7f53ae", size = 21176963, upload-time = "2025-05-17T21:31:19.36Z" },
- { url = "https://files.pythonhosted.org/packages/b3/2b/64e1affc7972decb74c9e29e5649fac940514910960ba25cd9af4488b66c/numpy-2.2.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c820a93b0255bc360f53eca31a0e676fd1101f673dda8da93454a12e23fc5f7a", size = 14406743, upload-time = "2025-05-17T21:31:41.087Z" },
- { url = "https://files.pythonhosted.org/packages/4a/9f/0121e375000b5e50ffdd8b25bf78d8e1a5aa4cca3f185d41265198c7b834/numpy-2.2.6-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:3d70692235e759f260c3d837193090014aebdf026dfd167834bcba43e30c2a42", size = 5352616, upload-time = "2025-05-17T21:31:50.072Z" },
- { url = "https://files.pythonhosted.org/packages/31/0d/b48c405c91693635fbe2dcd7bc84a33a602add5f63286e024d3b6741411c/numpy-2.2.6-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:481b49095335f8eed42e39e8041327c05b0f6f4780488f61286ed3c01368d491", size = 6889579, upload-time = "2025-05-17T21:32:01.712Z" },
- { url = "https://files.pythonhosted.org/packages/52/b8/7f0554d49b565d0171eab6e99001846882000883998e7b7d9f0d98b1f934/numpy-2.2.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b64d8d4d17135e00c8e346e0a738deb17e754230d7e0810ac5012750bbd85a5a", size = 14312005, upload-time = "2025-05-17T21:32:23.332Z" },
- { url = "https://files.pythonhosted.org/packages/b3/dd/2238b898e51bd6d389b7389ffb20d7f4c10066d80351187ec8e303a5a475/numpy-2.2.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba10f8411898fc418a521833e014a77d3ca01c15b0c6cdcce6a0d2897e6dbbdf", size = 16821570, upload-time = "2025-05-17T21:32:47.991Z" },
- { url = "https://files.pythonhosted.org/packages/83/6c/44d0325722cf644f191042bf47eedad61c1e6df2432ed65cbe28509d404e/numpy-2.2.6-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:bd48227a919f1bafbdda0583705e547892342c26fb127219d60a5c36882609d1", size = 15818548, upload-time = "2025-05-17T21:33:11.728Z" },
- { url = "https://files.pythonhosted.org/packages/ae/9d/81e8216030ce66be25279098789b665d49ff19eef08bfa8cb96d4957f422/numpy-2.2.6-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9551a499bf125c1d4f9e250377c1ee2eddd02e01eac6644c080162c0c51778ab", size = 18620521, upload-time = "2025-05-17T21:33:39.139Z" },
- { url = "https://files.pythonhosted.org/packages/6a/fd/e19617b9530b031db51b0926eed5345ce8ddc669bb3bc0044b23e275ebe8/numpy-2.2.6-cp311-cp311-win32.whl", hash = "sha256:0678000bb9ac1475cd454c6b8c799206af8107e310843532b04d49649c717a47", size = 6525866, upload-time = "2025-05-17T21:33:50.273Z" },
- { url = "https://files.pythonhosted.org/packages/31/0a/f354fb7176b81747d870f7991dc763e157a934c717b67b58456bc63da3df/numpy-2.2.6-cp311-cp311-win_amd64.whl", hash = "sha256:e8213002e427c69c45a52bbd94163084025f533a55a59d6f9c5b820774ef3303", size = 12907455, upload-time = "2025-05-17T21:34:09.135Z" },
- { url = "https://files.pythonhosted.org/packages/82/5d/c00588b6cf18e1da539b45d3598d3557084990dcc4331960c15ee776ee41/numpy-2.2.6-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:41c5a21f4a04fa86436124d388f6ed60a9343a6f767fced1a8a71c3fbca038ff", size = 20875348, upload-time = "2025-05-17T21:34:39.648Z" },
- { url = "https://files.pythonhosted.org/packages/66/ee/560deadcdde6c2f90200450d5938f63a34b37e27ebff162810f716f6a230/numpy-2.2.6-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:de749064336d37e340f640b05f24e9e3dd678c57318c7289d222a8a2f543e90c", size = 14119362, upload-time = "2025-05-17T21:35:01.241Z" },
- { url = "https://files.pythonhosted.org/packages/3c/65/4baa99f1c53b30adf0acd9a5519078871ddde8d2339dc5a7fde80d9d87da/numpy-2.2.6-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:894b3a42502226a1cac872f840030665f33326fc3dac8e57c607905773cdcde3", size = 5084103, upload-time = "2025-05-17T21:35:10.622Z" },
- { url = "https://files.pythonhosted.org/packages/cc/89/e5a34c071a0570cc40c9a54eb472d113eea6d002e9ae12bb3a8407fb912e/numpy-2.2.6-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:71594f7c51a18e728451bb50cc60a3ce4e6538822731b2933209a1f3614e9282", size = 6625382, upload-time = "2025-05-17T21:35:21.414Z" },
- { url = "https://files.pythonhosted.org/packages/f8/35/8c80729f1ff76b3921d5c9487c7ac3de9b2a103b1cd05e905b3090513510/numpy-2.2.6-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f2618db89be1b4e05f7a1a847a9c1c0abd63e63a1607d892dd54668dd92faf87", size = 14018462, upload-time = "2025-05-17T21:35:42.174Z" },
- { url = "https://files.pythonhosted.org/packages/8c/3d/1e1db36cfd41f895d266b103df00ca5b3cbe965184df824dec5c08c6b803/numpy-2.2.6-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd83c01228a688733f1ded5201c678f0c53ecc1006ffbc404db9f7a899ac6249", size = 16527618, upload-time = "2025-05-17T21:36:06.711Z" },
- { url = "https://files.pythonhosted.org/packages/61/c6/03ed30992602c85aa3cd95b9070a514f8b3c33e31124694438d88809ae36/numpy-2.2.6-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:37c0ca431f82cd5fa716eca9506aefcabc247fb27ba69c5062a6d3ade8cf8f49", size = 15505511, upload-time = "2025-05-17T21:36:29.965Z" },
- { url = "https://files.pythonhosted.org/packages/b7/25/5761d832a81df431e260719ec45de696414266613c9ee268394dd5ad8236/numpy-2.2.6-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:fe27749d33bb772c80dcd84ae7e8df2adc920ae8297400dabec45f0dedb3f6de", size = 18313783, upload-time = "2025-05-17T21:36:56.883Z" },
- { url = "https://files.pythonhosted.org/packages/57/0a/72d5a3527c5ebffcd47bde9162c39fae1f90138c961e5296491ce778e682/numpy-2.2.6-cp312-cp312-win32.whl", hash = "sha256:4eeaae00d789f66c7a25ac5f34b71a7035bb474e679f410e5e1a94deb24cf2d4", size = 6246506, upload-time = "2025-05-17T21:37:07.368Z" },
- { url = "https://files.pythonhosted.org/packages/36/fa/8c9210162ca1b88529ab76b41ba02d433fd54fecaf6feb70ef9f124683f1/numpy-2.2.6-cp312-cp312-win_amd64.whl", hash = "sha256:c1f9540be57940698ed329904db803cf7a402f3fc200bfe599334c9bd84a40b2", size = 12614190, upload-time = "2025-05-17T21:37:26.213Z" },
- { url = "https://files.pythonhosted.org/packages/f9/5c/6657823f4f594f72b5471f1db1ab12e26e890bb2e41897522d134d2a3e81/numpy-2.2.6-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:0811bb762109d9708cca4d0b13c4f67146e3c3b7cf8d34018c722adb2d957c84", size = 20867828, upload-time = "2025-05-17T21:37:56.699Z" },
- { url = "https://files.pythonhosted.org/packages/dc/9e/14520dc3dadf3c803473bd07e9b2bd1b69bc583cb2497b47000fed2fa92f/numpy-2.2.6-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:287cc3162b6f01463ccd86be154f284d0893d2b3ed7292439ea97eafa8170e0b", size = 14143006, upload-time = "2025-05-17T21:38:18.291Z" },
- { url = "https://files.pythonhosted.org/packages/4f/06/7e96c57d90bebdce9918412087fc22ca9851cceaf5567a45c1f404480e9e/numpy-2.2.6-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:f1372f041402e37e5e633e586f62aa53de2eac8d98cbfb822806ce4bbefcb74d", size = 5076765, upload-time = "2025-05-17T21:38:27.319Z" },
- { url = "https://files.pythonhosted.org/packages/73/ed/63d920c23b4289fdac96ddbdd6132e9427790977d5457cd132f18e76eae0/numpy-2.2.6-cp313-cp313-macosx_14_0_x86_64.whl", hash = "sha256:55a4d33fa519660d69614a9fad433be87e5252f4b03850642f88993f7b2ca566", size = 6617736, upload-time = "2025-05-17T21:38:38.141Z" },
- { url = "https://files.pythonhosted.org/packages/85/c5/e19c8f99d83fd377ec8c7e0cf627a8049746da54afc24ef0a0cb73d5dfb5/numpy-2.2.6-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f92729c95468a2f4f15e9bb94c432a9229d0d50de67304399627a943201baa2f", size = 14010719, upload-time = "2025-05-17T21:38:58.433Z" },
- { url = "https://files.pythonhosted.org/packages/19/49/4df9123aafa7b539317bf6d342cb6d227e49f7a35b99c287a6109b13dd93/numpy-2.2.6-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1bc23a79bfabc5d056d106f9befb8d50c31ced2fbc70eedb8155aec74a45798f", size = 16526072, upload-time = "2025-05-17T21:39:22.638Z" },
- { url = "https://files.pythonhosted.org/packages/b2/6c/04b5f47f4f32f7c2b0e7260442a8cbcf8168b0e1a41ff1495da42f42a14f/numpy-2.2.6-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:e3143e4451880bed956e706a3220b4e5cf6172ef05fcc397f6f36a550b1dd868", size = 15503213, upload-time = "2025-05-17T21:39:45.865Z" },
- { url = "https://files.pythonhosted.org/packages/17/0a/5cd92e352c1307640d5b6fec1b2ffb06cd0dabe7d7b8227f97933d378422/numpy-2.2.6-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:b4f13750ce79751586ae2eb824ba7e1e8dba64784086c98cdbbcc6a42112ce0d", size = 18316632, upload-time = "2025-05-17T21:40:13.331Z" },
- { url = "https://files.pythonhosted.org/packages/f0/3b/5cba2b1d88760ef86596ad0f3d484b1cbff7c115ae2429678465057c5155/numpy-2.2.6-cp313-cp313-win32.whl", hash = "sha256:5beb72339d9d4fa36522fc63802f469b13cdbe4fdab4a288f0c441b74272ebfd", size = 6244532, upload-time = "2025-05-17T21:43:46.099Z" },
- { url = "https://files.pythonhosted.org/packages/cb/3b/d58c12eafcb298d4e6d0d40216866ab15f59e55d148a5658bb3132311fcf/numpy-2.2.6-cp313-cp313-win_amd64.whl", hash = "sha256:b0544343a702fa80c95ad5d3d608ea3599dd54d4632df855e4c8d24eb6ecfa1c", size = 12610885, upload-time = "2025-05-17T21:44:05.145Z" },
- { url = "https://files.pythonhosted.org/packages/6b/9e/4bf918b818e516322db999ac25d00c75788ddfd2d2ade4fa66f1f38097e1/numpy-2.2.6-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:0bca768cd85ae743b2affdc762d617eddf3bcf8724435498a1e80132d04879e6", size = 20963467, upload-time = "2025-05-17T21:40:44Z" },
- { url = "https://files.pythonhosted.org/packages/61/66/d2de6b291507517ff2e438e13ff7b1e2cdbdb7cb40b3ed475377aece69f9/numpy-2.2.6-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:fc0c5673685c508a142ca65209b4e79ed6740a4ed6b2267dbba90f34b0b3cfda", size = 14225144, upload-time = "2025-05-17T21:41:05.695Z" },
- { url = "https://files.pythonhosted.org/packages/e4/25/480387655407ead912e28ba3a820bc69af9adf13bcbe40b299d454ec011f/numpy-2.2.6-cp313-cp313t-macosx_14_0_arm64.whl", hash = "sha256:5bd4fc3ac8926b3819797a7c0e2631eb889b4118a9898c84f585a54d475b7e40", size = 5200217, upload-time = "2025-05-17T21:41:15.903Z" },
- { url = "https://files.pythonhosted.org/packages/aa/4a/6e313b5108f53dcbf3aca0c0f3e9c92f4c10ce57a0a721851f9785872895/numpy-2.2.6-cp313-cp313t-macosx_14_0_x86_64.whl", hash = "sha256:fee4236c876c4e8369388054d02d0e9bb84821feb1a64dd59e137e6511a551f8", size = 6712014, upload-time = "2025-05-17T21:41:27.321Z" },
- { url = "https://files.pythonhosted.org/packages/b7/30/172c2d5c4be71fdf476e9de553443cf8e25feddbe185e0bd88b096915bcc/numpy-2.2.6-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e1dda9c7e08dc141e0247a5b8f49cf05984955246a327d4c48bda16821947b2f", size = 14077935, upload-time = "2025-05-17T21:41:49.738Z" },
- { url = "https://files.pythonhosted.org/packages/12/fb/9e743f8d4e4d3c710902cf87af3512082ae3d43b945d5d16563f26ec251d/numpy-2.2.6-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f447e6acb680fd307f40d3da4852208af94afdfab89cf850986c3ca00562f4fa", size = 16600122, upload-time = "2025-05-17T21:42:14.046Z" },
- { url = "https://files.pythonhosted.org/packages/12/75/ee20da0e58d3a66f204f38916757e01e33a9737d0b22373b3eb5a27358f9/numpy-2.2.6-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:389d771b1623ec92636b0786bc4ae56abafad4a4c513d36a55dce14bd9ce8571", size = 15586143, upload-time = "2025-05-17T21:42:37.464Z" },
- { url = "https://files.pythonhosted.org/packages/76/95/bef5b37f29fc5e739947e9ce5179ad402875633308504a52d188302319c8/numpy-2.2.6-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:8e9ace4a37db23421249ed236fdcdd457d671e25146786dfc96835cd951aa7c1", size = 18385260, upload-time = "2025-05-17T21:43:05.189Z" },
- { url = "https://files.pythonhosted.org/packages/09/04/f2f83279d287407cf36a7a8053a5abe7be3622a4363337338f2585e4afda/numpy-2.2.6-cp313-cp313t-win32.whl", hash = "sha256:038613e9fb8c72b0a41f025a7e4c3f0b7a1b5d768ece4796b674c8f3fe13efff", size = 6377225, upload-time = "2025-05-17T21:43:16.254Z" },
- { url = "https://files.pythonhosted.org/packages/67/0e/35082d13c09c02c011cf21570543d202ad929d961c02a147493cb0c2bdf5/numpy-2.2.6-cp313-cp313t-win_amd64.whl", hash = "sha256:6031dd6dfecc0cf9f668681a37648373bddd6421fff6c66ec1624eed0180ee06", size = 12771374, upload-time = "2025-05-17T21:43:35.479Z" },
- { url = "https://files.pythonhosted.org/packages/9e/3b/d94a75f4dbf1ef5d321523ecac21ef23a3cd2ac8b78ae2aac40873590229/numpy-2.2.6-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0b605b275d7bd0c640cad4e5d30fa701a8d59302e127e5f79138ad62762c3e3d", size = 21040391, upload-time = "2025-05-17T21:44:35.948Z" },
- { url = "https://files.pythonhosted.org/packages/17/f4/09b2fa1b58f0fb4f7c7963a1649c64c4d315752240377ed74d9cd878f7b5/numpy-2.2.6-pp310-pypy310_pp73-macosx_14_0_x86_64.whl", hash = "sha256:7befc596a7dc9da8a337f79802ee8adb30a552a94f792b9c9d18c840055907db", size = 6786754, upload-time = "2025-05-17T21:44:47.446Z" },
- { url = "https://files.pythonhosted.org/packages/af/30/feba75f143bdc868a1cc3f44ccfa6c4b9ec522b36458e738cd00f67b573f/numpy-2.2.6-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ce47521a4754c8f4593837384bd3424880629f718d87c5d44f8ed763edd63543", size = 16643476, upload-time = "2025-05-17T21:45:11.871Z" },
- { url = "https://files.pythonhosted.org/packages/37/48/ac2a9584402fb6c0cd5b5d1a91dcf176b15760130dd386bbafdbfe3640bf/numpy-2.2.6-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:d042d24c90c41b54fd506da306759e06e568864df8ec17ccc17e9e884634fd00", size = 12812666, upload-time = "2025-05-17T21:45:31.426Z" },
-]
-
[[package]]
name = "numpy"
version = "2.3.4"
source = { registry = "https://pypi.org/simple" }
-resolution-markers = [
- "python_full_version >= '3.12'",
- "python_full_version == '3.11.*'",
-]
sdist = { url = "https://files.pythonhosted.org/packages/b5/f4/098d2270d52b41f1bd7db9fc288aaa0400cb48c2a3e2af6fa365d9720947/numpy-2.3.4.tar.gz", hash = "sha256:a7d018bfedb375a8d979ac758b120ba846a7fe764911a64465fd87b8729f4a6a", size = 20582187, upload-time = "2025-10-15T16:18:11.77Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/60/e7/0e07379944aa8afb49a556a2b54587b828eb41dc9adc56fb7615b678ca53/numpy-2.3.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:e78aecd2800b32e8347ce49316d3eaf04aed849cd5b38e0af39f829a4e59f5eb", size = 21259519, upload-time = "2025-10-15T16:15:19.012Z" },
version = "2.3.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
- { name = "numpy", version = "2.0.2", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.10'" },
- { name = "numpy", version = "2.2.6", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version == '3.10.*'" },
- { name = "numpy", version = "2.3.4", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" },
+ { name = "numpy" },
{ name = "python-dateutil" },
{ name = "pytz" },
{ name = "tzdata" },
]
sdist = { url = "https://files.pythonhosted.org/packages/33/01/d40b85317f86cf08d853a4f495195c73815fdf205eef3993821720274518/pandas-2.3.3.tar.gz", hash = "sha256:e05e1af93b977f7eafa636d043f9f94c7ee3ac81af99c13508215942e64c993b", size = 4495223, upload-time = "2025-09-29T23:34:51.853Z" }
wheels = [
- { url = "https://files.pythonhosted.org/packages/3d/f7/f425a00df4fcc22b292c6895c6831c0c8ae1d9fac1e024d16f98a9ce8749/pandas-2.3.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:376c6446ae31770764215a6c937f72d917f214b43560603cd60da6408f183b6c", size = 11555763, upload-time = "2025-09-29T23:16:53.287Z" },
- { url = "https://files.pythonhosted.org/packages/13/4f/66d99628ff8ce7857aca52fed8f0066ce209f96be2fede6cef9f84e8d04f/pandas-2.3.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e19d192383eab2f4ceb30b412b22ea30690c9e618f78870357ae1d682912015a", size = 10801217, upload-time = "2025-09-29T23:17:04.522Z" },
- { url = "https://files.pythonhosted.org/packages/1d/03/3fc4a529a7710f890a239cc496fc6d50ad4a0995657dccc1d64695adb9f4/pandas-2.3.3-cp310-cp310-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5caf26f64126b6c7aec964f74266f435afef1c1b13da3b0636c7518a1fa3e2b1", size = 12148791, upload-time = "2025-09-29T23:17:18.444Z" },
- { url = "https://files.pythonhosted.org/packages/40/a8/4dac1f8f8235e5d25b9955d02ff6f29396191d4e665d71122c3722ca83c5/pandas-2.3.3-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:dd7478f1463441ae4ca7308a70e90b33470fa593429f9d4c578dd00d1fa78838", size = 12769373, upload-time = "2025-09-29T23:17:35.846Z" },
- { url = "https://files.pythonhosted.org/packages/df/91/82cc5169b6b25440a7fc0ef3a694582418d875c8e3ebf796a6d6470aa578/pandas-2.3.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:4793891684806ae50d1288c9bae9330293ab4e083ccd1c5e383c34549c6e4250", size = 13200444, upload-time = "2025-09-29T23:17:49.341Z" },
- { url = "https://files.pythonhosted.org/packages/10/ae/89b3283800ab58f7af2952704078555fa60c807fff764395bb57ea0b0dbd/pandas-2.3.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:28083c648d9a99a5dd035ec125d42439c6c1c525098c58af0fc38dd1a7a1b3d4", size = 13858459, upload-time = "2025-09-29T23:18:03.722Z" },
- { url = "https://files.pythonhosted.org/packages/85/72/530900610650f54a35a19476eca5104f38555afccda1aa11a92ee14cb21d/pandas-2.3.3-cp310-cp310-win_amd64.whl", hash = "sha256:503cf027cf9940d2ceaa1a93cfb5f8c8c7e6e90720a2850378f0b3f3b1e06826", size = 11346086, upload-time = "2025-09-29T23:18:18.505Z" },
{ url = "https://files.pythonhosted.org/packages/c1/fa/7ac648108144a095b4fb6aa3de1954689f7af60a14cf25583f4960ecb878/pandas-2.3.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:602b8615ebcc4a0c1751e71840428ddebeb142ec02c786e8ad6b1ce3c8dec523", size = 11578790, upload-time = "2025-09-29T23:18:30.065Z" },
{ url = "https://files.pythonhosted.org/packages/9b/35/74442388c6cf008882d4d4bdfc4109be87e9b8b7ccd097ad1e7f006e2e95/pandas-2.3.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:8fe25fc7b623b0ef6b5009149627e34d2a4657e880948ec3c840e9402e5c1b45", size = 10833831, upload-time = "2025-09-29T23:38:56.071Z" },
{ url = "https://files.pythonhosted.org/packages/fe/e4/de154cbfeee13383ad58d23017da99390b91d73f8c11856f2095e813201b/pandas-2.3.3-cp311-cp311-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b468d3dad6ff947df92dcb32ede5b7bd41a9b3cceef0a30ed925f6d01fb8fa66", size = 12199267, upload-time = "2025-09-29T23:18:41.627Z" },
{ url = "https://files.pythonhosted.org/packages/a4/1e/1bac1a839d12e6a82ec6cb40cda2edde64a2013a66963293696bbf31fbbb/pandas-2.3.3-cp314-cp314t-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2e3ebdb170b5ef78f19bfb71b0dc5dc58775032361fa188e814959b74d726dd5", size = 12121582, upload-time = "2025-09-29T23:30:43.391Z" },
{ url = "https://files.pythonhosted.org/packages/44/91/483de934193e12a3b1d6ae7c8645d083ff88dec75f46e827562f1e4b4da6/pandas-2.3.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:d051c0e065b94b7a3cea50eb1ec32e912cd96dba41647eb24104b6c6c14c5788", size = 12699963, upload-time = "2025-09-29T23:31:10.009Z" },
{ url = "https://files.pythonhosted.org/packages/70/44/5191d2e4026f86a2a109053e194d3ba7a31a2d10a9c2348368c63ed4e85a/pandas-2.3.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:3869faf4bd07b3b66a9f462417d0ca3a9df29a9f6abd5d0d0dbab15dac7abe87", size = 13202175, upload-time = "2025-09-29T23:31:59.173Z" },
- { url = "https://files.pythonhosted.org/packages/56/b4/52eeb530a99e2a4c55ffcd352772b599ed4473a0f892d127f4147cf0f88e/pandas-2.3.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c503ba5216814e295f40711470446bc3fd00f0faea8a086cbc688808e26f92a2", size = 11567720, upload-time = "2025-09-29T23:33:06.209Z" },
- { url = "https://files.pythonhosted.org/packages/48/4a/2d8b67632a021bced649ba940455ed441ca854e57d6e7658a6024587b083/pandas-2.3.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a637c5cdfa04b6d6e2ecedcb81fc52ffb0fd78ce2ebccc9ea964df9f658de8c8", size = 10810302, upload-time = "2025-09-29T23:33:35.846Z" },
- { url = "https://files.pythonhosted.org/packages/13/e6/d2465010ee0569a245c975dc6967b801887068bc893e908239b1f4b6c1ac/pandas-2.3.3-cp39-cp39-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:854d00d556406bffe66a4c0802f334c9ad5a96b4f1f868adf036a21b11ef13ff", size = 12154874, upload-time = "2025-09-29T23:33:49.939Z" },
- { url = "https://files.pythonhosted.org/packages/1f/18/aae8c0aa69a386a3255940e9317f793808ea79d0a525a97a903366bb2569/pandas-2.3.3-cp39-cp39-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:bf1f8a81d04ca90e32a0aceb819d34dbd378a98bf923b6398b9a3ec0bf44de29", size = 12790141, upload-time = "2025-09-29T23:34:05.655Z" },
- { url = "https://files.pythonhosted.org/packages/f7/26/617f98de789de00c2a444fbe6301bb19e66556ac78cff933d2c98f62f2b4/pandas-2.3.3-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:23ebd657a4d38268c7dfbdf089fbc31ea709d82e4923c5ffd4fbd5747133ce73", size = 13208697, upload-time = "2025-09-29T23:34:21.835Z" },
- { url = "https://files.pythonhosted.org/packages/b9/fb/25709afa4552042bd0e15717c75e9b4a2294c3dc4f7e6ea50f03c5136600/pandas-2.3.3-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:5554c929ccc317d41a5e3d1234f3be588248e61f08a74dd17c9eabb535777dc9", size = 13879233, upload-time = "2025-09-29T23:34:35.079Z" },
- { url = "https://files.pythonhosted.org/packages/98/af/7be05277859a7bc399da8ba68b88c96b27b48740b6cf49688899c6eb4176/pandas-2.3.3-cp39-cp39-win_amd64.whl", hash = "sha256:d3e28b3e83862ccf4d85ff19cf8c20b2ae7e503881711ff2d534dc8f761131aa", size = 11359119, upload-time = "2025-09-29T23:34:46.339Z" },
]
[[package]]
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
- { name = "exceptiongroup", marker = "python_full_version < '3.11'" },
- { name = "iniconfig", version = "2.1.0", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.10'" },
- { name = "iniconfig", version = "2.3.0", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.10'" },
+ { name = "iniconfig" },
{ name = "packaging" },
{ name = "pluggy" },
{ name = "pygments" },
- { name = "tomli", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a3/5c/00a0e072241553e1a7496d638deababa67c5058571567b92a7eaa258397c/pytest-8.4.2.tar.gz", hash = "sha256:86c0d0b93306b961d58d62a4db4879f27fe25513d4b969df351abdddb3c30e01", size = 1519618, upload-time = "2025-09-04T14:34:22.711Z" }
wheels = [
version = "7.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
- { name = "coverage", version = "7.10.7", source = { registry = "https://pypi.org/simple" }, extra = ["toml"], marker = "python_full_version < '3.10'" },
- { name = "coverage", version = "7.11.0", source = { registry = "https://pypi.org/simple" }, extra = ["toml"], marker = "python_full_version >= '3.10'" },
+ { name = "coverage", extra = ["toml"] },
{ name = "pluggy" },
{ name = "pytest" },
]
version = "0.1.0"
source = { editable = "." }
dependencies = [
- { name = "numpy", version = "2.0.2", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.10'" },
- { name = "numpy", version = "2.2.6", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version == '3.10.*'" },
- { name = "numpy", version = "2.3.4", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" },
+ { name = "numpy" },
{ name = "pandas" },
{ name = "pytest" },
- { name = "scikit-learn", version = "1.6.1", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.10'" },
- { name = "scikit-learn", version = "1.7.2", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.10'" },
- { name = "scipy", version = "1.13.1", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.10'" },
- { name = "scipy", version = "1.15.3", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version == '3.10.*'" },
- { name = "scipy", version = "1.16.2", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" },
+ { name = "scikit-learn" },
+ { name = "scipy" },
]
[package.dev-dependencies]
{ url = "https://files.pythonhosted.org/packages/b8/81/4b6387be7014858d924b843530e1b2a8e531846807516e9bea2ee0936bf7/ruff-0.14.1-py3-none-win_arm64.whl", hash = "sha256:e3b443c4c9f16ae850906b8d0a707b2a4c16f8d2f0a7fe65c475c5886665ce44", size = 12436636, upload-time = "2025-10-16T18:05:38.995Z" },
]
-[[package]]
-name = "scikit-learn"
-version = "1.6.1"
-source = { registry = "https://pypi.org/simple" }
-resolution-markers = [
- "python_full_version < '3.10'",
-]
-dependencies = [
- { name = "joblib", marker = "python_full_version < '3.10'" },
- { name = "numpy", version = "2.0.2", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.10'" },
- { name = "scipy", version = "1.13.1", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.10'" },
- { name = "threadpoolctl", marker = "python_full_version < '3.10'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/9e/a5/4ae3b3a0755f7b35a280ac90b28817d1f380318973cff14075ab41ef50d9/scikit_learn-1.6.1.tar.gz", hash = "sha256:b4fc2525eca2c69a59260f583c56a7557c6ccdf8deafdba6e060f94c1c59738e", size = 7068312, upload-time = "2025-01-10T08:07:55.348Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/2e/3a/f4597eb41049110b21ebcbb0bcb43e4035017545daa5eedcfeb45c08b9c5/scikit_learn-1.6.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d056391530ccd1e501056160e3c9673b4da4805eb67eb2bdf4e983e1f9c9204e", size = 12067702, upload-time = "2025-01-10T08:05:56.515Z" },
- { url = "https://files.pythonhosted.org/packages/37/19/0423e5e1fd1c6ec5be2352ba05a537a473c1677f8188b9306097d684b327/scikit_learn-1.6.1-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:0c8d036eb937dbb568c6242fa598d551d88fb4399c0344d95c001980ec1c7d36", size = 11112765, upload-time = "2025-01-10T08:06:00.272Z" },
- { url = "https://files.pythonhosted.org/packages/70/95/d5cb2297a835b0f5fc9a77042b0a2d029866379091ab8b3f52cc62277808/scikit_learn-1.6.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8634c4bd21a2a813e0a7e3900464e6d593162a29dd35d25bdf0103b3fce60ed5", size = 12643991, upload-time = "2025-01-10T08:06:04.813Z" },
- { url = "https://files.pythonhosted.org/packages/b7/91/ab3c697188f224d658969f678be86b0968ccc52774c8ab4a86a07be13c25/scikit_learn-1.6.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:775da975a471c4f6f467725dff0ced5c7ac7bda5e9316b260225b48475279a1b", size = 13497182, upload-time = "2025-01-10T08:06:08.42Z" },
- { url = "https://files.pythonhosted.org/packages/17/04/d5d556b6c88886c092cc989433b2bab62488e0f0dafe616a1d5c9cb0efb1/scikit_learn-1.6.1-cp310-cp310-win_amd64.whl", hash = "sha256:8a600c31592bd7dab31e1c61b9bbd6dea1b3433e67d264d17ce1017dbdce8002", size = 11125517, upload-time = "2025-01-10T08:06:12.783Z" },
- { url = "https://files.pythonhosted.org/packages/6c/2a/e291c29670795406a824567d1dfc91db7b699799a002fdaa452bceea8f6e/scikit_learn-1.6.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:72abc587c75234935e97d09aa4913a82f7b03ee0b74111dcc2881cba3c5a7b33", size = 12102620, upload-time = "2025-01-10T08:06:16.675Z" },
- { url = "https://files.pythonhosted.org/packages/25/92/ee1d7a00bb6b8c55755d4984fd82608603a3cc59959245068ce32e7fb808/scikit_learn-1.6.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:b3b00cdc8f1317b5f33191df1386c0befd16625f49d979fe77a8d44cae82410d", size = 11116234, upload-time = "2025-01-10T08:06:21.83Z" },
- { url = "https://files.pythonhosted.org/packages/30/cd/ed4399485ef364bb25f388ab438e3724e60dc218c547a407b6e90ccccaef/scikit_learn-1.6.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc4765af3386811c3ca21638f63b9cf5ecf66261cc4815c1db3f1e7dc7b79db2", size = 12592155, upload-time = "2025-01-10T08:06:27.309Z" },
- { url = "https://files.pythonhosted.org/packages/a8/f3/62fc9a5a659bb58a03cdd7e258956a5824bdc9b4bb3c5d932f55880be569/scikit_learn-1.6.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:25fc636bdaf1cc2f4a124a116312d837148b5e10872147bdaf4887926b8c03d8", size = 13497069, upload-time = "2025-01-10T08:06:32.515Z" },
- { url = "https://files.pythonhosted.org/packages/a1/a6/c5b78606743a1f28eae8f11973de6613a5ee87366796583fb74c67d54939/scikit_learn-1.6.1-cp311-cp311-win_amd64.whl", hash = "sha256:fa909b1a36e000a03c382aade0bd2063fd5680ff8b8e501660c0f59f021a6415", size = 11139809, upload-time = "2025-01-10T08:06:35.514Z" },
- { url = "https://files.pythonhosted.org/packages/0a/18/c797c9b8c10380d05616db3bfb48e2a3358c767affd0857d56c2eb501caa/scikit_learn-1.6.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:926f207c804104677af4857b2c609940b743d04c4c35ce0ddc8ff4f053cddc1b", size = 12104516, upload-time = "2025-01-10T08:06:40.009Z" },
- { url = "https://files.pythonhosted.org/packages/c4/b7/2e35f8e289ab70108f8cbb2e7a2208f0575dc704749721286519dcf35f6f/scikit_learn-1.6.1-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:2c2cae262064e6a9b77eee1c8e768fc46aa0b8338c6a8297b9b6759720ec0ff2", size = 11167837, upload-time = "2025-01-10T08:06:43.305Z" },
- { url = "https://files.pythonhosted.org/packages/a4/f6/ff7beaeb644bcad72bcfd5a03ff36d32ee4e53a8b29a639f11bcb65d06cd/scikit_learn-1.6.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1061b7c028a8663fb9a1a1baf9317b64a257fcb036dae5c8752b2abef31d136f", size = 12253728, upload-time = "2025-01-10T08:06:47.618Z" },
- { url = "https://files.pythonhosted.org/packages/29/7a/8bce8968883e9465de20be15542f4c7e221952441727c4dad24d534c6d99/scikit_learn-1.6.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2e69fab4ebfc9c9b580a7a80111b43d214ab06250f8a7ef590a4edf72464dd86", size = 13147700, upload-time = "2025-01-10T08:06:50.888Z" },
- { url = "https://files.pythonhosted.org/packages/62/27/585859e72e117fe861c2079bcba35591a84f801e21bc1ab85bce6ce60305/scikit_learn-1.6.1-cp312-cp312-win_amd64.whl", hash = "sha256:70b1d7e85b1c96383f872a519b3375f92f14731e279a7b4c6cfd650cf5dffc52", size = 11110613, upload-time = "2025-01-10T08:06:54.115Z" },
- { url = "https://files.pythonhosted.org/packages/2e/59/8eb1872ca87009bdcdb7f3cdc679ad557b992c12f4b61f9250659e592c63/scikit_learn-1.6.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:2ffa1e9e25b3d93990e74a4be2c2fc61ee5af85811562f1288d5d055880c4322", size = 12010001, upload-time = "2025-01-10T08:06:58.613Z" },
- { url = "https://files.pythonhosted.org/packages/9d/05/f2fc4effc5b32e525408524c982c468c29d22f828834f0625c5ef3d601be/scikit_learn-1.6.1-cp313-cp313-macosx_12_0_arm64.whl", hash = "sha256:dc5cf3d68c5a20ad6d571584c0750ec641cc46aeef1c1507be51300e6003a7e1", size = 11096360, upload-time = "2025-01-10T08:07:01.556Z" },
- { url = "https://files.pythonhosted.org/packages/c8/e4/4195d52cf4f113573fb8ebc44ed5a81bd511a92c0228889125fac2f4c3d1/scikit_learn-1.6.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c06beb2e839ecc641366000ca84f3cf6fa9faa1777e29cf0c04be6e4d096a348", size = 12209004, upload-time = "2025-01-10T08:07:06.931Z" },
- { url = "https://files.pythonhosted.org/packages/94/be/47e16cdd1e7fcf97d95b3cb08bde1abb13e627861af427a3651fcb80b517/scikit_learn-1.6.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e8ca8cb270fee8f1f76fa9bfd5c3507d60c6438bbee5687f81042e2bb98e5a97", size = 13171776, upload-time = "2025-01-10T08:07:11.715Z" },
- { url = "https://files.pythonhosted.org/packages/34/b0/ca92b90859070a1487827dbc672f998da95ce83edce1270fc23f96f1f61a/scikit_learn-1.6.1-cp313-cp313-win_amd64.whl", hash = "sha256:7a1c43c8ec9fde528d664d947dc4c0789be4077a3647f232869f41d9bf50e0fb", size = 11071865, upload-time = "2025-01-10T08:07:16.088Z" },
- { url = "https://files.pythonhosted.org/packages/12/ae/993b0fb24a356e71e9a894e42b8a9eec528d4c70217353a1cd7a48bc25d4/scikit_learn-1.6.1-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:a17c1dea1d56dcda2fac315712f3651a1fea86565b64b48fa1bc090249cbf236", size = 11955804, upload-time = "2025-01-10T08:07:20.385Z" },
- { url = "https://files.pythonhosted.org/packages/d6/54/32fa2ee591af44507eac86406fa6bba968d1eb22831494470d0a2e4a1eb1/scikit_learn-1.6.1-cp313-cp313t-macosx_12_0_arm64.whl", hash = "sha256:6a7aa5f9908f0f28f4edaa6963c0a6183f1911e63a69aa03782f0d924c830a35", size = 11100530, upload-time = "2025-01-10T08:07:23.675Z" },
- { url = "https://files.pythonhosted.org/packages/3f/58/55856da1adec655bdce77b502e94a267bf40a8c0b89f8622837f89503b5a/scikit_learn-1.6.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0650e730afb87402baa88afbf31c07b84c98272622aaba002559b614600ca691", size = 12433852, upload-time = "2025-01-10T08:07:26.817Z" },
- { url = "https://files.pythonhosted.org/packages/ff/4f/c83853af13901a574f8f13b645467285a48940f185b690936bb700a50863/scikit_learn-1.6.1-cp313-cp313t-win_amd64.whl", hash = "sha256:3f59fe08dc03ea158605170eb52b22a105f238a5d512c4470ddeca71feae8e5f", size = 11337256, upload-time = "2025-01-10T08:07:31.084Z" },
- { url = "https://files.pythonhosted.org/packages/d2/37/b305b759cc65829fe1b8853ff3e308b12cdd9d8884aa27840835560f2b42/scikit_learn-1.6.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6849dd3234e87f55dce1db34c89a810b489ead832aaf4d4550b7ea85628be6c1", size = 12101868, upload-time = "2025-01-10T08:07:34.189Z" },
- { url = "https://files.pythonhosted.org/packages/83/74/f64379a4ed5879d9db744fe37cfe1978c07c66684d2439c3060d19a536d8/scikit_learn-1.6.1-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:e7be3fa5d2eb9be7d77c3734ff1d599151bb523674be9b834e8da6abe132f44e", size = 11144062, upload-time = "2025-01-10T08:07:37.67Z" },
- { url = "https://files.pythonhosted.org/packages/fd/dc/d5457e03dc9c971ce2b0d750e33148dd060fefb8b7dc71acd6054e4bb51b/scikit_learn-1.6.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:44a17798172df1d3c1065e8fcf9019183f06c87609b49a124ebdf57ae6cb0107", size = 12693173, upload-time = "2025-01-10T08:07:42.713Z" },
- { url = "https://files.pythonhosted.org/packages/79/35/b1d2188967c3204c78fa79c9263668cf1b98060e8e58d1a730fe5b2317bb/scikit_learn-1.6.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b8b7a3b86e411e4bce21186e1c180d792f3d99223dcfa3b4f597ecc92fa1a422", size = 13518605, upload-time = "2025-01-10T08:07:46.551Z" },
- { url = "https://files.pythonhosted.org/packages/fb/d8/8d603bdd26601f4b07e2363032b8565ab82eb857f93d86d0f7956fcf4523/scikit_learn-1.6.1-cp39-cp39-win_amd64.whl", hash = "sha256:7a73d457070e3318e32bdb3aa79a8d990474f19035464dfd8bede2883ab5dc3b", size = 11155078, upload-time = "2025-01-10T08:07:51.376Z" },
-]
-
[[package]]
name = "scikit-learn"
version = "1.7.2"
source = { registry = "https://pypi.org/simple" }
-resolution-markers = [
- "python_full_version >= '3.12'",
- "python_full_version == '3.11.*'",
- "python_full_version == '3.10.*'",
-]
dependencies = [
- { name = "joblib", marker = "python_full_version >= '3.10'" },
- { name = "numpy", version = "2.2.6", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version == '3.10.*'" },
- { name = "numpy", version = "2.3.4", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" },
- { name = "scipy", version = "1.15.3", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version == '3.10.*'" },
- { name = "scipy", version = "1.16.2", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" },
- { name = "threadpoolctl", marker = "python_full_version >= '3.10'" },
+ { name = "joblib" },
+ { name = "numpy" },
+ { name = "scipy" },
+ { name = "threadpoolctl" },
]
sdist = { url = "https://files.pythonhosted.org/packages/98/c2/a7855e41c9d285dfe86dc50b250978105dce513d6e459ea66a6aeb0e1e0c/scikit_learn-1.7.2.tar.gz", hash = "sha256:20e9e49ecd130598f1ca38a1d85090e1a600147b9c02fa6f15d69cb53d968fda", size = 7193136, upload-time = "2025-09-09T08:21:29.075Z" }
wheels = [
- { url = "https://files.pythonhosted.org/packages/ba/3e/daed796fd69cce768b8788401cc464ea90b306fb196ae1ffed0b98182859/scikit_learn-1.7.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:6b33579c10a3081d076ab403df4a4190da4f4432d443521674637677dc91e61f", size = 9336221, upload-time = "2025-09-09T08:20:19.328Z" },
- { url = "https://files.pythonhosted.org/packages/1c/ce/af9d99533b24c55ff4e18d9b7b4d9919bbc6cd8f22fe7a7be01519a347d5/scikit_learn-1.7.2-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:36749fb62b3d961b1ce4fedf08fa57a1986cd409eff2d783bca5d4b9b5fce51c", size = 8653834, upload-time = "2025-09-09T08:20:22.073Z" },
- { url = "https://files.pythonhosted.org/packages/58/0e/8c2a03d518fb6bd0b6b0d4b114c63d5f1db01ff0f9925d8eb10960d01c01/scikit_learn-1.7.2-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:7a58814265dfc52b3295b1900cfb5701589d30a8bb026c7540f1e9d3499d5ec8", size = 9660938, upload-time = "2025-09-09T08:20:24.327Z" },
- { url = "https://files.pythonhosted.org/packages/2b/75/4311605069b5d220e7cf5adabb38535bd96f0079313cdbb04b291479b22a/scikit_learn-1.7.2-cp310-cp310-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4a847fea807e278f821a0406ca01e387f97653e284ecbd9750e3ee7c90347f18", size = 9477818, upload-time = "2025-09-09T08:20:26.845Z" },
- { url = "https://files.pythonhosted.org/packages/7f/9b/87961813c34adbca21a6b3f6b2bea344c43b30217a6d24cc437c6147f3e8/scikit_learn-1.7.2-cp310-cp310-win_amd64.whl", hash = "sha256:ca250e6836d10e6f402436d6463d6c0e4d8e0234cfb6a9a47835bd392b852ce5", size = 8886969, upload-time = "2025-09-09T08:20:29.329Z" },
{ url = "https://files.pythonhosted.org/packages/43/83/564e141eef908a5863a54da8ca342a137f45a0bfb71d1d79704c9894c9d1/scikit_learn-1.7.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c7509693451651cd7361d30ce4e86a1347493554f172b1c72a39300fa2aea79e", size = 9331967, upload-time = "2025-09-09T08:20:32.421Z" },
{ url = "https://files.pythonhosted.org/packages/18/d6/ba863a4171ac9d7314c4d3fc251f015704a2caeee41ced89f321c049ed83/scikit_learn-1.7.2-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:0486c8f827c2e7b64837c731c8feff72c0bd2b998067a8a9cbc10643c31f0fe1", size = 8648645, upload-time = "2025-09-09T08:20:34.436Z" },
{ url = "https://files.pythonhosted.org/packages/ef/0e/97dbca66347b8cf0ea8b529e6bb9367e337ba2e8be0ef5c1a545232abfde/scikit_learn-1.7.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:89877e19a80c7b11a2891a27c21c4894fb18e2c2e077815bcade10d34287b20d", size = 9715424, upload-time = "2025-09-09T08:20:36.776Z" },
{ url = "https://files.pythonhosted.org/packages/8e/87/24f541b6d62b1794939ae6422f8023703bbf6900378b2b34e0b4384dfefd/scikit_learn-1.7.2-cp314-cp314-win_amd64.whl", hash = "sha256:bb24510ed3f9f61476181e4db51ce801e2ba37541def12dc9333b946fc7a9cf8", size = 8820007, upload-time = "2025-09-09T08:21:26.713Z" },
]
-[[package]]
-name = "scipy"
-version = "1.13.1"
-source = { registry = "https://pypi.org/simple" }
-resolution-markers = [
- "python_full_version < '3.10'",
-]
-dependencies = [
- { name = "numpy", version = "2.0.2", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.10'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/ae/00/48c2f661e2816ccf2ecd77982f6605b2950afe60f60a52b4cbbc2504aa8f/scipy-1.13.1.tar.gz", hash = "sha256:095a87a0312b08dfd6a6155cbbd310a8c51800fc931b8c0b84003014b874ed3c", size = 57210720, upload-time = "2024-05-23T03:29:26.079Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/33/59/41b2529908c002ade869623b87eecff3e11e3ce62e996d0bdcb536984187/scipy-1.13.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:20335853b85e9a49ff7572ab453794298bcf0354d8068c5f6775a0eabf350aca", size = 39328076, upload-time = "2024-05-23T03:19:01.687Z" },
- { url = "https://files.pythonhosted.org/packages/d5/33/f1307601f492f764062ce7dd471a14750f3360e33cd0f8c614dae208492c/scipy-1.13.1-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:d605e9c23906d1994f55ace80e0125c587f96c020037ea6aa98d01b4bd2e222f", size = 30306232, upload-time = "2024-05-23T03:19:09.089Z" },
- { url = "https://files.pythonhosted.org/packages/c0/66/9cd4f501dd5ea03e4a4572ecd874936d0da296bd04d1c45ae1a4a75d9c3a/scipy-1.13.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cfa31f1def5c819b19ecc3a8b52d28ffdcc7ed52bb20c9a7589669dd3c250989", size = 33743202, upload-time = "2024-05-23T03:19:15.138Z" },
- { url = "https://files.pythonhosted.org/packages/a3/ba/7255e5dc82a65adbe83771c72f384d99c43063648456796436c9a5585ec3/scipy-1.13.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f26264b282b9da0952a024ae34710c2aff7d27480ee91a2e82b7b7073c24722f", size = 38577335, upload-time = "2024-05-23T03:19:21.984Z" },
- { url = "https://files.pythonhosted.org/packages/49/a5/bb9ded8326e9f0cdfdc412eeda1054b914dfea952bda2097d174f8832cc0/scipy-1.13.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:eccfa1906eacc02de42d70ef4aecea45415f5be17e72b61bafcfd329bdc52e94", size = 38820728, upload-time = "2024-05-23T03:19:28.225Z" },
- { url = "https://files.pythonhosted.org/packages/12/30/df7a8fcc08f9b4a83f5f27cfaaa7d43f9a2d2ad0b6562cced433e5b04e31/scipy-1.13.1-cp310-cp310-win_amd64.whl", hash = "sha256:2831f0dc9c5ea9edd6e51e6e769b655f08ec6db6e2e10f86ef39bd32eb11da54", size = 46210588, upload-time = "2024-05-23T03:19:35.661Z" },
- { url = "https://files.pythonhosted.org/packages/b4/15/4a4bb1b15bbd2cd2786c4f46e76b871b28799b67891f23f455323a0cdcfb/scipy-1.13.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:27e52b09c0d3a1d5b63e1105f24177e544a222b43611aaf5bc44d4a0979e32f9", size = 39333805, upload-time = "2024-05-23T03:19:43.081Z" },
- { url = "https://files.pythonhosted.org/packages/ba/92/42476de1af309c27710004f5cdebc27bec62c204db42e05b23a302cb0c9a/scipy-1.13.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:54f430b00f0133e2224c3ba42b805bfd0086fe488835effa33fa291561932326", size = 30317687, upload-time = "2024-05-23T03:19:48.799Z" },
- { url = "https://files.pythonhosted.org/packages/80/ba/8be64fe225360a4beb6840f3cbee494c107c0887f33350d0a47d55400b01/scipy-1.13.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e89369d27f9e7b0884ae559a3a956e77c02114cc60a6058b4e5011572eea9299", size = 33694638, upload-time = "2024-05-23T03:19:55.104Z" },
- { url = "https://files.pythonhosted.org/packages/36/07/035d22ff9795129c5a847c64cb43c1fa9188826b59344fee28a3ab02e283/scipy-1.13.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a78b4b3345f1b6f68a763c6e25c0c9a23a9fd0f39f5f3d200efe8feda560a5fa", size = 38569931, upload-time = "2024-05-23T03:20:01.82Z" },
- { url = "https://files.pythonhosted.org/packages/d9/10/f9b43de37e5ed91facc0cfff31d45ed0104f359e4f9a68416cbf4e790241/scipy-1.13.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:45484bee6d65633752c490404513b9ef02475b4284c4cfab0ef946def50b3f59", size = 38838145, upload-time = "2024-05-23T03:20:09.173Z" },
- { url = "https://files.pythonhosted.org/packages/4a/48/4513a1a5623a23e95f94abd675ed91cfb19989c58e9f6f7d03990f6caf3d/scipy-1.13.1-cp311-cp311-win_amd64.whl", hash = "sha256:5713f62f781eebd8d597eb3f88b8bf9274e79eeabf63afb4a737abc6c84ad37b", size = 46196227, upload-time = "2024-05-23T03:20:16.433Z" },
- { url = "https://files.pythonhosted.org/packages/f2/7b/fb6b46fbee30fc7051913068758414f2721003a89dd9a707ad49174e3843/scipy-1.13.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:5d72782f39716b2b3509cd7c33cdc08c96f2f4d2b06d51e52fb45a19ca0c86a1", size = 39357301, upload-time = "2024-05-23T03:20:23.538Z" },
- { url = "https://files.pythonhosted.org/packages/dc/5a/2043a3bde1443d94014aaa41e0b50c39d046dda8360abd3b2a1d3f79907d/scipy-1.13.1-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:017367484ce5498445aade74b1d5ab377acdc65e27095155e448c88497755a5d", size = 30363348, upload-time = "2024-05-23T03:20:29.885Z" },
- { url = "https://files.pythonhosted.org/packages/e7/cb/26e4a47364bbfdb3b7fb3363be6d8a1c543bcd70a7753ab397350f5f189a/scipy-1.13.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:949ae67db5fa78a86e8fa644b9a6b07252f449dcf74247108c50e1d20d2b4627", size = 33406062, upload-time = "2024-05-23T03:20:36.012Z" },
- { url = "https://files.pythonhosted.org/packages/88/ab/6ecdc526d509d33814835447bbbeedbebdec7cca46ef495a61b00a35b4bf/scipy-1.13.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de3ade0e53bc1f21358aa74ff4830235d716211d7d077e340c7349bc3542e884", size = 38218311, upload-time = "2024-05-23T03:20:42.086Z" },
- { url = "https://files.pythonhosted.org/packages/0b/00/9f54554f0f8318100a71515122d8f4f503b1a2c4b4cfab3b4b68c0eb08fa/scipy-1.13.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:2ac65fb503dad64218c228e2dc2d0a0193f7904747db43014645ae139c8fad16", size = 38442493, upload-time = "2024-05-23T03:20:48.292Z" },
- { url = "https://files.pythonhosted.org/packages/3e/df/963384e90733e08eac978cd103c34df181d1fec424de383cdc443f418dd4/scipy-1.13.1-cp312-cp312-win_amd64.whl", hash = "sha256:cdd7dacfb95fea358916410ec61bbc20440f7860333aee6d882bb8046264e949", size = 45910955, upload-time = "2024-05-23T03:20:55.091Z" },
- { url = "https://files.pythonhosted.org/packages/7f/29/c2ea58c9731b9ecb30b6738113a95d147e83922986b34c685b8f6eefde21/scipy-1.13.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:436bbb42a94a8aeef855d755ce5a465479c721e9d684de76bf61a62e7c2b81d5", size = 39352927, upload-time = "2024-05-23T03:21:01.95Z" },
- { url = "https://files.pythonhosted.org/packages/5c/c0/e71b94b20ccf9effb38d7147c0064c08c622309fd487b1b677771a97d18c/scipy-1.13.1-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:8335549ebbca860c52bf3d02f80784e91a004b71b059e3eea9678ba994796a24", size = 30324538, upload-time = "2024-05-23T03:21:07.634Z" },
- { url = "https://files.pythonhosted.org/packages/6d/0f/aaa55b06d474817cea311e7b10aab2ea1fd5d43bc6a2861ccc9caec9f418/scipy-1.13.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d533654b7d221a6a97304ab63c41c96473ff04459e404b83275b60aa8f4b7004", size = 33732190, upload-time = "2024-05-23T03:21:14.41Z" },
- { url = "https://files.pythonhosted.org/packages/35/f5/d0ad1a96f80962ba65e2ce1de6a1e59edecd1f0a7b55990ed208848012e0/scipy-1.13.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:637e98dcf185ba7f8e663e122ebf908c4702420477ae52a04f9908707456ba4d", size = 38612244, upload-time = "2024-05-23T03:21:21.827Z" },
- { url = "https://files.pythonhosted.org/packages/8d/02/1165905f14962174e6569076bcc3315809ae1291ed14de6448cc151eedfd/scipy-1.13.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a014c2b3697bde71724244f63de2476925596c24285c7a637364761f8710891c", size = 38845637, upload-time = "2024-05-23T03:21:28.729Z" },
- { url = "https://files.pythonhosted.org/packages/3e/77/dab54fe647a08ee4253963bcd8f9cf17509c8ca64d6335141422fe2e2114/scipy-1.13.1-cp39-cp39-win_amd64.whl", hash = "sha256:392e4ec766654852c25ebad4f64e4e584cf19820b980bc04960bca0b0cd6eaa2", size = 46227440, upload-time = "2024-05-23T03:21:35.888Z" },
-]
-
-[[package]]
-name = "scipy"
-version = "1.15.3"
-source = { registry = "https://pypi.org/simple" }
-resolution-markers = [
- "python_full_version == '3.10.*'",
-]
-dependencies = [
- { name = "numpy", version = "2.2.6", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version == '3.10.*'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/0f/37/6964b830433e654ec7485e45a00fc9a27cf868d622838f6b6d9c5ec0d532/scipy-1.15.3.tar.gz", hash = "sha256:eae3cf522bc7df64b42cad3925c876e1b0b6c35c1337c93e12c0f366f55b0eaf", size = 59419214, upload-time = "2025-05-08T16:13:05.955Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/78/2f/4966032c5f8cc7e6a60f1b2e0ad686293b9474b65246b0c642e3ef3badd0/scipy-1.15.3-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:a345928c86d535060c9c2b25e71e87c39ab2f22fc96e9636bd74d1dbf9de448c", size = 38702770, upload-time = "2025-05-08T16:04:20.849Z" },
- { url = "https://files.pythonhosted.org/packages/a0/6e/0c3bf90fae0e910c274db43304ebe25a6b391327f3f10b5dcc638c090795/scipy-1.15.3-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:ad3432cb0f9ed87477a8d97f03b763fd1d57709f1bbde3c9369b1dff5503b253", size = 30094511, upload-time = "2025-05-08T16:04:27.103Z" },
- { url = "https://files.pythonhosted.org/packages/ea/b1/4deb37252311c1acff7f101f6453f0440794f51b6eacb1aad4459a134081/scipy-1.15.3-cp310-cp310-macosx_14_0_arm64.whl", hash = "sha256:aef683a9ae6eb00728a542b796f52a5477b78252edede72b8327a886ab63293f", size = 22368151, upload-time = "2025-05-08T16:04:31.731Z" },
- { url = "https://files.pythonhosted.org/packages/38/7d/f457626e3cd3c29b3a49ca115a304cebb8cc6f31b04678f03b216899d3c6/scipy-1.15.3-cp310-cp310-macosx_14_0_x86_64.whl", hash = "sha256:1c832e1bd78dea67d5c16f786681b28dd695a8cb1fb90af2e27580d3d0967e92", size = 25121732, upload-time = "2025-05-08T16:04:36.596Z" },
- { url = "https://files.pythonhosted.org/packages/db/0a/92b1de4a7adc7a15dcf5bddc6e191f6f29ee663b30511ce20467ef9b82e4/scipy-1.15.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:263961f658ce2165bbd7b99fa5135195c3a12d9bef045345016b8b50c315cb82", size = 35547617, upload-time = "2025-05-08T16:04:43.546Z" },
- { url = "https://files.pythonhosted.org/packages/8e/6d/41991e503e51fc1134502694c5fa7a1671501a17ffa12716a4a9151af3df/scipy-1.15.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9e2abc762b0811e09a0d3258abee2d98e0c703eee49464ce0069590846f31d40", size = 37662964, upload-time = "2025-05-08T16:04:49.431Z" },
- { url = "https://files.pythonhosted.org/packages/25/e1/3df8f83cb15f3500478c889be8fb18700813b95e9e087328230b98d547ff/scipy-1.15.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:ed7284b21a7a0c8f1b6e5977ac05396c0d008b89e05498c8b7e8f4a1423bba0e", size = 37238749, upload-time = "2025-05-08T16:04:55.215Z" },
- { url = "https://files.pythonhosted.org/packages/93/3e/b3257cf446f2a3533ed7809757039016b74cd6f38271de91682aa844cfc5/scipy-1.15.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:5380741e53df2c566f4d234b100a484b420af85deb39ea35a1cc1be84ff53a5c", size = 40022383, upload-time = "2025-05-08T16:05:01.914Z" },
- { url = "https://files.pythonhosted.org/packages/d1/84/55bc4881973d3f79b479a5a2e2df61c8c9a04fcb986a213ac9c02cfb659b/scipy-1.15.3-cp310-cp310-win_amd64.whl", hash = "sha256:9d61e97b186a57350f6d6fd72640f9e99d5a4a2b8fbf4b9ee9a841eab327dc13", size = 41259201, upload-time = "2025-05-08T16:05:08.166Z" },
- { url = "https://files.pythonhosted.org/packages/96/ab/5cc9f80f28f6a7dff646c5756e559823614a42b1939d86dd0ed550470210/scipy-1.15.3-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:993439ce220d25e3696d1b23b233dd010169b62f6456488567e830654ee37a6b", size = 38714255, upload-time = "2025-05-08T16:05:14.596Z" },
- { url = "https://files.pythonhosted.org/packages/4a/4a/66ba30abe5ad1a3ad15bfb0b59d22174012e8056ff448cb1644deccbfed2/scipy-1.15.3-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:34716e281f181a02341ddeaad584205bd2fd3c242063bd3423d61ac259ca7eba", size = 30111035, upload-time = "2025-05-08T16:05:20.152Z" },
- { url = "https://files.pythonhosted.org/packages/4b/fa/a7e5b95afd80d24313307f03624acc65801846fa75599034f8ceb9e2cbf6/scipy-1.15.3-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:3b0334816afb8b91dab859281b1b9786934392aa3d527cd847e41bb6f45bee65", size = 22384499, upload-time = "2025-05-08T16:05:24.494Z" },
- { url = "https://files.pythonhosted.org/packages/17/99/f3aaddccf3588bb4aea70ba35328c204cadd89517a1612ecfda5b2dd9d7a/scipy-1.15.3-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:6db907c7368e3092e24919b5e31c76998b0ce1684d51a90943cb0ed1b4ffd6c1", size = 25152602, upload-time = "2025-05-08T16:05:29.313Z" },
- { url = "https://files.pythonhosted.org/packages/56/c5/1032cdb565f146109212153339f9cb8b993701e9fe56b1c97699eee12586/scipy-1.15.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:721d6b4ef5dc82ca8968c25b111e307083d7ca9091bc38163fb89243e85e3889", size = 35503415, upload-time = "2025-05-08T16:05:34.699Z" },
- { url = "https://files.pythonhosted.org/packages/bd/37/89f19c8c05505d0601ed5650156e50eb881ae3918786c8fd7262b4ee66d3/scipy-1.15.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:39cb9c62e471b1bb3750066ecc3a3f3052b37751c7c3dfd0fd7e48900ed52982", size = 37652622, upload-time = "2025-05-08T16:05:40.762Z" },
- { url = "https://files.pythonhosted.org/packages/7e/31/be59513aa9695519b18e1851bb9e487de66f2d31f835201f1b42f5d4d475/scipy-1.15.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:795c46999bae845966368a3c013e0e00947932d68e235702b5c3f6ea799aa8c9", size = 37244796, upload-time = "2025-05-08T16:05:48.119Z" },
- { url = "https://files.pythonhosted.org/packages/10/c0/4f5f3eeccc235632aab79b27a74a9130c6c35df358129f7ac8b29f562ac7/scipy-1.15.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:18aaacb735ab38b38db42cb01f6b92a2d0d4b6aabefeb07f02849e47f8fb3594", size = 40047684, upload-time = "2025-05-08T16:05:54.22Z" },
- { url = "https://files.pythonhosted.org/packages/ab/a7/0ddaf514ce8a8714f6ed243a2b391b41dbb65251affe21ee3077ec45ea9a/scipy-1.15.3-cp311-cp311-win_amd64.whl", hash = "sha256:ae48a786a28412d744c62fd7816a4118ef97e5be0bee968ce8f0a2fba7acf3bb", size = 41246504, upload-time = "2025-05-08T16:06:00.437Z" },
- { url = "https://files.pythonhosted.org/packages/37/4b/683aa044c4162e10ed7a7ea30527f2cbd92e6999c10a8ed8edb253836e9c/scipy-1.15.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:6ac6310fdbfb7aa6612408bd2f07295bcbd3fda00d2d702178434751fe48e019", size = 38766735, upload-time = "2025-05-08T16:06:06.471Z" },
- { url = "https://files.pythonhosted.org/packages/7b/7e/f30be3d03de07f25dc0ec926d1681fed5c732d759ac8f51079708c79e680/scipy-1.15.3-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:185cd3d6d05ca4b44a8f1595af87f9c372bb6acf9c808e99aa3e9aa03bd98cf6", size = 30173284, upload-time = "2025-05-08T16:06:11.686Z" },
- { url = "https://files.pythonhosted.org/packages/07/9c/0ddb0d0abdabe0d181c1793db51f02cd59e4901da6f9f7848e1f96759f0d/scipy-1.15.3-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:05dc6abcd105e1a29f95eada46d4a3f251743cfd7d3ae8ddb4088047f24ea477", size = 22446958, upload-time = "2025-05-08T16:06:15.97Z" },
- { url = "https://files.pythonhosted.org/packages/af/43/0bce905a965f36c58ff80d8bea33f1f9351b05fad4beaad4eae34699b7a1/scipy-1.15.3-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:06efcba926324df1696931a57a176c80848ccd67ce6ad020c810736bfd58eb1c", size = 25242454, upload-time = "2025-05-08T16:06:20.394Z" },
- { url = "https://files.pythonhosted.org/packages/56/30/a6f08f84ee5b7b28b4c597aca4cbe545535c39fe911845a96414700b64ba/scipy-1.15.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c05045d8b9bfd807ee1b9f38761993297b10b245f012b11b13b91ba8945f7e45", size = 35210199, upload-time = "2025-05-08T16:06:26.159Z" },
- { url = "https://files.pythonhosted.org/packages/0b/1f/03f52c282437a168ee2c7c14a1a0d0781a9a4a8962d84ac05c06b4c5b555/scipy-1.15.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:271e3713e645149ea5ea3e97b57fdab61ce61333f97cfae392c28ba786f9bb49", size = 37309455, upload-time = "2025-05-08T16:06:32.778Z" },
- { url = "https://files.pythonhosted.org/packages/89/b1/fbb53137f42c4bf630b1ffdfc2151a62d1d1b903b249f030d2b1c0280af8/scipy-1.15.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:6cfd56fc1a8e53f6e89ba3a7a7251f7396412d655bca2aa5611c8ec9a6784a1e", size = 36885140, upload-time = "2025-05-08T16:06:39.249Z" },
- { url = "https://files.pythonhosted.org/packages/2e/2e/025e39e339f5090df1ff266d021892694dbb7e63568edcfe43f892fa381d/scipy-1.15.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:0ff17c0bb1cb32952c09217d8d1eed9b53d1463e5f1dd6052c7857f83127d539", size = 39710549, upload-time = "2025-05-08T16:06:45.729Z" },
- { url = "https://files.pythonhosted.org/packages/e6/eb/3bf6ea8ab7f1503dca3a10df2e4b9c3f6b3316df07f6c0ded94b281c7101/scipy-1.15.3-cp312-cp312-win_amd64.whl", hash = "sha256:52092bc0472cfd17df49ff17e70624345efece4e1a12b23783a1ac59a1b728ed", size = 40966184, upload-time = "2025-05-08T16:06:52.623Z" },
- { url = "https://files.pythonhosted.org/packages/73/18/ec27848c9baae6e0d6573eda6e01a602e5649ee72c27c3a8aad673ebecfd/scipy-1.15.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:2c620736bcc334782e24d173c0fdbb7590a0a436d2fdf39310a8902505008759", size = 38728256, upload-time = "2025-05-08T16:06:58.696Z" },
- { url = "https://files.pythonhosted.org/packages/74/cd/1aef2184948728b4b6e21267d53b3339762c285a46a274ebb7863c9e4742/scipy-1.15.3-cp313-cp313-macosx_12_0_arm64.whl", hash = "sha256:7e11270a000969409d37ed399585ee530b9ef6aa99d50c019de4cb01e8e54e62", size = 30109540, upload-time = "2025-05-08T16:07:04.209Z" },
- { url = "https://files.pythonhosted.org/packages/5b/d8/59e452c0a255ec352bd0a833537a3bc1bfb679944c4938ab375b0a6b3a3e/scipy-1.15.3-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:8c9ed3ba2c8a2ce098163a9bdb26f891746d02136995df25227a20e71c396ebb", size = 22383115, upload-time = "2025-05-08T16:07:08.998Z" },
- { url = "https://files.pythonhosted.org/packages/08/f5/456f56bbbfccf696263b47095291040655e3cbaf05d063bdc7c7517f32ac/scipy-1.15.3-cp313-cp313-macosx_14_0_x86_64.whl", hash = "sha256:0bdd905264c0c9cfa74a4772cdb2070171790381a5c4d312c973382fc6eaf730", size = 25163884, upload-time = "2025-05-08T16:07:14.091Z" },
- { url = "https://files.pythonhosted.org/packages/a2/66/a9618b6a435a0f0c0b8a6d0a2efb32d4ec5a85f023c2b79d39512040355b/scipy-1.15.3-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:79167bba085c31f38603e11a267d862957cbb3ce018d8b38f79ac043bc92d825", size = 35174018, upload-time = "2025-05-08T16:07:19.427Z" },
- { url = "https://files.pythonhosted.org/packages/b5/09/c5b6734a50ad4882432b6bb7c02baf757f5b2f256041da5df242e2d7e6b6/scipy-1.15.3-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9deabd6d547aee2c9a81dee6cc96c6d7e9a9b1953f74850c179f91fdc729cb7", size = 37269716, upload-time = "2025-05-08T16:07:25.712Z" },
- { url = "https://files.pythonhosted.org/packages/77/0a/eac00ff741f23bcabd352731ed9b8995a0a60ef57f5fd788d611d43d69a1/scipy-1.15.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:dde4fc32993071ac0c7dd2d82569e544f0bdaff66269cb475e0f369adad13f11", size = 36872342, upload-time = "2025-05-08T16:07:31.468Z" },
- { url = "https://files.pythonhosted.org/packages/fe/54/4379be86dd74b6ad81551689107360d9a3e18f24d20767a2d5b9253a3f0a/scipy-1.15.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:f77f853d584e72e874d87357ad70f44b437331507d1c311457bed8ed2b956126", size = 39670869, upload-time = "2025-05-08T16:07:38.002Z" },
- { url = "https://files.pythonhosted.org/packages/87/2e/892ad2862ba54f084ffe8cc4a22667eaf9c2bcec6d2bff1d15713c6c0703/scipy-1.15.3-cp313-cp313-win_amd64.whl", hash = "sha256:b90ab29d0c37ec9bf55424c064312930ca5f4bde15ee8619ee44e69319aab163", size = 40988851, upload-time = "2025-05-08T16:08:33.671Z" },
- { url = "https://files.pythonhosted.org/packages/1b/e9/7a879c137f7e55b30d75d90ce3eb468197646bc7b443ac036ae3fe109055/scipy-1.15.3-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:3ac07623267feb3ae308487c260ac684b32ea35fd81e12845039952f558047b8", size = 38863011, upload-time = "2025-05-08T16:07:44.039Z" },
- { url = "https://files.pythonhosted.org/packages/51/d1/226a806bbd69f62ce5ef5f3ffadc35286e9fbc802f606a07eb83bf2359de/scipy-1.15.3-cp313-cp313t-macosx_12_0_arm64.whl", hash = "sha256:6487aa99c2a3d509a5227d9a5e889ff05830a06b2ce08ec30df6d79db5fcd5c5", size = 30266407, upload-time = "2025-05-08T16:07:49.891Z" },
- { url = "https://files.pythonhosted.org/packages/e5/9b/f32d1d6093ab9eeabbd839b0f7619c62e46cc4b7b6dbf05b6e615bbd4400/scipy-1.15.3-cp313-cp313t-macosx_14_0_arm64.whl", hash = "sha256:50f9e62461c95d933d5c5ef4a1f2ebf9a2b4e83b0db374cb3f1de104d935922e", size = 22540030, upload-time = "2025-05-08T16:07:54.121Z" },
- { url = "https://files.pythonhosted.org/packages/e7/29/c278f699b095c1a884f29fda126340fcc201461ee8bfea5c8bdb1c7c958b/scipy-1.15.3-cp313-cp313t-macosx_14_0_x86_64.whl", hash = "sha256:14ed70039d182f411ffc74789a16df3835e05dc469b898233a245cdfd7f162cb", size = 25218709, upload-time = "2025-05-08T16:07:58.506Z" },
- { url = "https://files.pythonhosted.org/packages/24/18/9e5374b617aba742a990581373cd6b68a2945d65cc588482749ef2e64467/scipy-1.15.3-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a769105537aa07a69468a0eefcd121be52006db61cdd8cac8a0e68980bbb723", size = 34809045, upload-time = "2025-05-08T16:08:03.929Z" },
- { url = "https://files.pythonhosted.org/packages/e1/fe/9c4361e7ba2927074360856db6135ef4904d505e9b3afbbcb073c4008328/scipy-1.15.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9db984639887e3dffb3928d118145ffe40eff2fa40cb241a306ec57c219ebbbb", size = 36703062, upload-time = "2025-05-08T16:08:09.558Z" },
- { url = "https://files.pythonhosted.org/packages/b7/8e/038ccfe29d272b30086b25a4960f757f97122cb2ec42e62b460d02fe98e9/scipy-1.15.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:40e54d5c7e7ebf1aa596c374c49fa3135f04648a0caabcb66c52884b943f02b4", size = 36393132, upload-time = "2025-05-08T16:08:15.34Z" },
- { url = "https://files.pythonhosted.org/packages/10/7e/5c12285452970be5bdbe8352c619250b97ebf7917d7a9a9e96b8a8140f17/scipy-1.15.3-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:5e721fed53187e71d0ccf382b6bf977644c533e506c4d33c3fb24de89f5c3ed5", size = 38979503, upload-time = "2025-05-08T16:08:21.513Z" },
- { url = "https://files.pythonhosted.org/packages/81/06/0a5e5349474e1cbc5757975b21bd4fad0e72ebf138c5592f191646154e06/scipy-1.15.3-cp313-cp313t-win_amd64.whl", hash = "sha256:76ad1fb5f8752eabf0fa02e4cc0336b4e8f021e2d5f061ed37d6d264db35e3ca", size = 40308097, upload-time = "2025-05-08T16:08:27.627Z" },
-]
-
[[package]]
name = "scipy"
version = "1.16.2"
source = { registry = "https://pypi.org/simple" }
-resolution-markers = [
- "python_full_version >= '3.12'",
- "python_full_version == '3.11.*'",
-]
dependencies = [
- { name = "numpy", version = "2.3.4", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" },
+ { name = "numpy" },
]
sdist = { url = "https://files.pythonhosted.org/packages/4c/3b/546a6f0bfe791bbb7f8d591613454d15097e53f906308ec6f7c1ce588e8e/scipy-1.16.2.tar.gz", hash = "sha256:af029b153d243a80afb6eabe40b0a07f8e35c9adc269c019f364ad747f826a6b", size = 30580599, upload-time = "2025-09-11T17:48:08.271Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/77/b8/0135fadc89e73be292b473cb820b4f5a08197779206b33191e801feeae40/tomli-2.3.0-py3-none-any.whl", hash = "sha256:e95b1af3c5b07d9e643909b5abbec77cd9f1217e6d0bca72b0234736b9fb1f1b", size = 14408, upload-time = "2025-10-08T22:01:46.04Z" },
]
-[[package]]
-name = "typing-extensions"
-version = "4.15.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/72/94/1a15dd82efb362ac84269196e94cf00f187f7ed21c242792a923cdb1c61f/typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466", size = 109391, upload-time = "2025-08-25T13:49:26.313Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" },
-]
-
[[package]]
name = "tzdata"
version = "2025.2"
--- /dev/null
+# OpenSpec Instructions
+
+Instructions for AI coding assistants using OpenSpec for spec-driven development.
+
+## TL;DR Quick Checklist
+
+- Search existing work: `openspec spec list --long`, `openspec list` (use `rg` only for full-text search)
+- Decide scope: new capability vs modify existing capability
+- Pick a unique `change-id`: kebab-case, verb-led (`add-`, `update-`, `remove-`, `refactor-`)
+- Scaffold: `proposal.md`, `tasks.md`, `design.md` (only if needed), and delta specs per affected capability
+- Write deltas: use `## ADDED|MODIFIED|REMOVED|RENAMED Requirements`; include at least one `#### Scenario:` per requirement
+- Validate: `openspec validate [change-id] --strict` and fix issues
+- Request approval: Do not start implementation until proposal is approved
+
+## Three-Stage Workflow
+
+### Stage 1: Creating Changes
+
+Create proposal when you need to:
+
+- Add features or functionality
+- Make breaking changes (API, schema)
+- Change architecture or patterns
+- Optimize performance (changes behavior)
+- Update security patterns
+
+Triggers (examples):
+
+- "Help me create a change proposal"
+- "Help me plan a change"
+- "Help me create a proposal"
+- "I want to create a spec proposal"
+- "I want to create a spec"
+
+Loose matching guidance:
+
+- Contains one of: `proposal`, `change`, `spec`
+- With one of: `create`, `plan`, `make`, `start`, `help`
+
+Skip proposal for:
+
+- Bug fixes (restore intended behavior)
+- Typos, formatting, comments
+- Dependency updates (non-breaking)
+- Configuration changes
+- Tests for existing behavior
+
+**Workflow**
+
+1. Review `openspec/project.md`, `openspec list`, and `openspec list --specs` to understand current context.
+2. Choose a unique verb-led `change-id` and scaffold `proposal.md`, `tasks.md`, optional `design.md`, and spec deltas under `openspec/changes/<id>/`.
+3. Draft spec deltas using `## ADDED|MODIFIED|REMOVED Requirements` with at least one `#### Scenario:` per requirement.
+4. Run `openspec validate <id> --strict` and resolve any issues before sharing the proposal.
+
+### Stage 2: Implementing Changes
+
+Track these steps as TODOs and complete them one by one.
+
+1. **Approval gate** - Do not start implementation until the proposal is reviewed and approved
+2. **Read proposal.md** - Understand what's being built
+3. **Read design.md** (if exists) - Review technical decisions
+4. **Read tasks.md** - Get implementation checklist
+5. **Implement tasks sequentially** - Complete in order
+6. **Confirm completion** - Ensure every item in `tasks.md` is finished before updating statuses
+7. **Update checklist** - After all work is done, set every task to `- [x]` so the list reflects reality
+
+### Stage 3: Archiving Changes
+
+After deployment, create separate PR to:
+
+- Move `changes/[name]/` → `changes/archive/YYYY-MM-DD-[name]/`
+- Update `specs/` if capabilities changed
+- Use `openspec archive <change-id> --skip-specs --yes` for tooling-only changes (always pass the change ID explicitly)
+- Run `openspec validate --strict` to confirm the archived change passes checks
+
+## Before Any Task
+
+**Context Checklist:**
+
+- [ ] Read relevant specs in `specs/[capability]/spec.md`
+- [ ] Check pending changes in `changes/` for conflicts
+- [ ] Read `openspec/project.md` for conventions
+- [ ] Run `openspec list` to see active changes
+- [ ] Run `openspec list --specs` to see existing capabilities
+
+**Before Creating Specs:**
+
+- Always check if capability already exists
+- Prefer modifying existing specs over creating duplicates
+- Use `openspec show [spec]` to review current state
+- If request is ambiguous, ask 1–2 clarifying questions before scaffolding
+
+### Search Guidance
+
+- Enumerate specs: `openspec spec list --long` (or `--json` for scripts)
+- Enumerate changes: `openspec list` (or `openspec change list --json` - deprecated but available)
+- Show details:
+ - Spec: `openspec show <spec-id> --type spec` (use `--json` for filters)
+ - Change: `openspec show <change-id> --json --deltas-only`
+- Full-text search (use ripgrep): `rg -n "Requirement:|Scenario:" openspec/specs`
+
+## Quick Start
+
+### CLI Commands
+
+```bash
+# Essential commands
+openspec list # List active changes
+openspec list --specs # List specifications
+openspec show [item] # Display change or spec
+openspec validate [item] # Validate changes or specs
+openspec archive <change-id> [--yes|-y] # Archive after deployment (add --yes for non-interactive runs)
+
+# Project management
+openspec init [path] # Initialize OpenSpec
+openspec update [path] # Update instruction files
+
+# Interactive mode
+openspec show # Prompts for selection
+openspec validate # Bulk validation mode
+
+# Debugging
+openspec show [change] --json --deltas-only
+openspec validate [change] --strict
+```
+
+### Command Flags
+
+- `--json` - Machine-readable output
+- `--type change|spec` - Disambiguate items
+- `--strict` - Comprehensive validation
+- `--no-interactive` - Disable prompts
+- `--skip-specs` - Archive without spec updates
+- `--yes`/`-y` - Skip confirmation prompts (non-interactive archive)
+
+## Directory Structure
+
+```
+openspec/
+├── project.md # Project conventions
+├── specs/ # Current truth - what IS built
+│ └── [capability]/ # Single focused capability
+│ ├── spec.md # Requirements and scenarios
+│ └── design.md # Technical patterns
+├── changes/ # Proposals - what SHOULD change
+│ ├── [change-name]/
+│ │ ├── proposal.md # Why, what, impact
+│ │ ├── tasks.md # Implementation checklist
+│ │ ├── design.md # Technical decisions (optional; see criteria)
+│ │ └── specs/ # Delta changes
+│ │ └── [capability]/
+│ │ └── spec.md # ADDED/MODIFIED/REMOVED
+│ └── archive/ # Completed changes
+```
+
+## Creating Change Proposals
+
+### Decision Tree
+
+```
+New request?
+├─ Bug fix restoring spec behavior? → Fix directly
+├─ Typo/format/comment? → Fix directly
+├─ New feature/capability? → Create proposal
+├─ Breaking change? → Create proposal
+├─ Architecture change? → Create proposal
+└─ Unclear? → Create proposal (safer)
+```
+
+### Proposal Structure
+
+1. **Create directory:** `changes/[change-id]/` (kebab-case, verb-led, unique)
+
+2. **Write proposal.md:**
+
+```markdown
+# Change: [Brief description of change]
+
+## Why
+
+[1-2 sentences on problem/opportunity]
+
+## What Changes
+
+- [Bullet list of changes]
+- [Mark breaking changes with **BREAKING**]
+
+## Impact
+
+- Affected specs: [list capabilities]
+- Affected code: [key files/systems]
+```
+
+3. **Create spec deltas:** `specs/[capability]/spec.md`
+
+```markdown
+## ADDED Requirements
+
+### Requirement: New Feature
+
+The system SHALL provide...
+
+#### Scenario: Success case
+
+- **WHEN** user performs action
+- **THEN** expected result
+
+## MODIFIED Requirements
+
+### Requirement: Existing Feature
+
+[Complete modified requirement]
+
+## REMOVED Requirements
+
+### Requirement: Old Feature
+
+**Reason**: [Why removing]
+**Migration**: [How to handle]
+```
+
+If multiple capabilities are affected, create multiple delta files under `changes/[change-id]/specs/<capability>/spec.md`—one per capability.
+
+4. **Create tasks.md:**
+
+```markdown
+## 1. Implementation
+
+- [ ] 1.1 Create database schema
+- [ ] 1.2 Implement API endpoint
+- [ ] 1.3 Add frontend component
+- [ ] 1.4 Write tests
+```
+
+5. **Create design.md when needed:**
+ Create `design.md` if any of the following apply; otherwise omit it:
+
+- Cross-cutting change (multiple services/modules) or a new architectural pattern
+- New external dependency or significant data model changes
+- Security, performance, or migration complexity
+- Ambiguity that benefits from technical decisions before coding
+
+Minimal `design.md` skeleton:
+
+```markdown
+## Context
+
+[Background, constraints, stakeholders]
+
+## Goals / Non-Goals
+
+- Goals: [...]
+- Non-Goals: [...]
+
+## Decisions
+
+- Decision: [What and why]
+- Alternatives considered: [Options + rationale]
+
+## Risks / Trade-offs
+
+- [Risk] → Mitigation
+
+## Migration Plan
+
+[Steps, rollback]
+
+## Open Questions
+
+- [...]
+```
+
+## Spec File Format
+
+### Critical: Scenario Formatting
+
+**CORRECT** (use #### headers):
+
+```markdown
+#### Scenario: User login success
+
+- **WHEN** valid credentials provided
+- **THEN** return JWT token
+```
+
+**WRONG** (don't use bullets or bold):
+
+```markdown
+- **Scenario: User login** ❌
+ **Scenario**: User login ❌
+
+### Scenario: User login ❌
+```
+
+Every requirement MUST have at least one scenario.
+
+### Requirement Wording
+
+- Use SHALL/MUST for normative requirements (avoid should/may unless intentionally non-normative)
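For instance (a hypothetical requirement), normative wording pairs SHALL/MUST with a concrete, testable criterion:

```markdown
### Requirement: Request Rate Limiting

The API SHALL reject more than 100 requests per minute per client.

#### Scenario: Limit exceeded

- **WHEN** a client sends more than 100 requests within one minute
- **THEN** subsequent requests are rejected
```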
+
+### Delta Operations
+
+- `## ADDED Requirements` - New capabilities
+- `## MODIFIED Requirements` - Changed behavior
+- `## REMOVED Requirements` - Deprecated features
+- `## RENAMED Requirements` - Name changes
+
+Headers are matched with `trim(header)`, so surrounding whitespace is ignored.
+
+#### When to use ADDED vs MODIFIED
+
+- ADDED: Introduces a new capability or sub-capability that can stand alone as a requirement. Prefer ADDED when the change is orthogonal (e.g., adding "Slash Command Configuration") rather than altering the semantics of an existing requirement.
+- MODIFIED: Changes the behavior, scope, or acceptance criteria of an existing requirement. Always paste the full, updated requirement content (header + all scenarios). The archiver will replace the entire requirement with what you provide here; partial deltas will drop previous details.
+- RENAMED: Use when only the name changes. If you also change behavior, use RENAMED (name) plus MODIFIED (content) referencing the new name.
+
+Common pitfall: Using MODIFIED to add a new concern without including the previous text. This causes loss of detail at archive time. If you aren’t explicitly changing the existing requirement, add a new requirement under ADDED instead.
+
+Authoring a MODIFIED requirement correctly:
+
+1. Locate the existing requirement in `openspec/specs/<capability>/spec.md`.
+2. Copy the entire requirement block (from `### Requirement: ...` through its scenarios).
+3. Paste it under `## MODIFIED Requirements` and edit to reflect the new behavior.
+4. Ensure the header text matches exactly (whitespace-insensitive) and keep at least one `#### Scenario:`.
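A minimal illustration of a correctly authored MODIFIED entry (the requirement and scenarios are hypothetical):

```markdown
## MODIFIED Requirements

### Requirement: User Authentication

Users MUST authenticate with a password and a second factor.

#### Scenario: Password accepted

- **WHEN** valid credentials are provided
- **THEN** an OTP challenge is issued

#### Scenario: OTP verified

- **WHEN** the correct OTP is entered
- **THEN** a session is created
```

Note that the full requirement is pasted, including any scenarios that did not change.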
+
+Example for RENAMED:
+
+```markdown
+## RENAMED Requirements
+
+- FROM: `### Requirement: Login`
+- TO: `### Requirement: User Authentication`
+```
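If both the name and the behavior change, pair RENAMED with a MODIFIED entry under the new name (requirement names are illustrative):

```markdown
## RENAMED Requirements

- FROM: `### Requirement: Login`
- TO: `### Requirement: User Authentication`

## MODIFIED Requirements

### Requirement: User Authentication

[Full updated requirement content, including all scenarios]
```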
+
+## Troubleshooting
+
+### Common Errors
+
+**"Change must have at least one delta"**
+
+- Check `changes/[name]/specs/` exists with .md files
+- Verify files have operation prefixes (## ADDED Requirements)
+
+**"Requirement must have at least one scenario"**
+
+- Check scenarios use `#### Scenario:` format (4 hashtags)
+- Don't use bullet points or bold for scenario headers
+
+**Silent scenario parsing failures**
+
+- Exact format required: `#### Scenario: Name`
+- Debug with: `openspec show [change] --json --deltas-only`
+
+### Validation Tips
+
+```bash
+# Always use strict mode for comprehensive checks
+openspec validate [change] --strict
+
+# Debug delta parsing
+openspec show [change] --json | jq '.deltas'
+
+# Check specific requirement
+openspec show [spec] --json -r 1
+```
+
+## Happy Path Script
+
+```bash
+# 1) Explore current state
+openspec spec list --long
+openspec list
+# Optional full-text search:
+# rg -n "Requirement:|Scenario:" openspec/specs
+# rg -n "^#|Requirement:" openspec/changes
+
+# 2) Choose change id and scaffold
+CHANGE=add-two-factor-auth
+mkdir -p openspec/changes/$CHANGE/specs/auth
+printf "## Why\n...\n\n## What Changes\n- ...\n\n## Impact\n- ...\n" > openspec/changes/$CHANGE/proposal.md
+printf "## 1. Implementation\n- [ ] 1.1 ...\n" > openspec/changes/$CHANGE/tasks.md
+
+# 3) Add deltas (example)
+cat > openspec/changes/$CHANGE/specs/auth/spec.md << 'EOF'
+## ADDED Requirements
+### Requirement: Two-Factor Authentication
+Users MUST provide a second factor during login.
+
+#### Scenario: OTP required
+- **WHEN** valid credentials are provided
+- **THEN** an OTP challenge is required
+EOF
+
+# 4) Validate
+openspec validate $CHANGE --strict
+```
+
+## Multi-Capability Example
+
+```
+openspec/changes/add-2fa-notify/
+├── proposal.md
+├── tasks.md
+└── specs/
+ ├── auth/
+ │ └── spec.md # ADDED: Two-Factor Authentication
+ └── notifications/
+ └── spec.md # ADDED: OTP email notification
+```
+
+auth/spec.md
+
+```markdown
+## ADDED Requirements
+
+### Requirement: Two-Factor Authentication
+
+...
+```
+
+notifications/spec.md
+
+```markdown
+## ADDED Requirements
+
+### Requirement: OTP Email Notification
+
+...
+```
+
+## Best Practices
+
+### Simplicity First
+
+- Default to <100 lines of new code
+- Single-file implementations until proven insufficient
+- Avoid frameworks without clear justification
+- Choose boring, proven patterns
+
+### Complexity Triggers
+
+Only add complexity with:
+
+- Performance data showing current solution too slow
+- Concrete scale requirements (>1000 users, >100MB data)
+- Multiple proven use cases requiring abstraction
+
+### Clear References
+
+- Use `file.ts:42` format for code locations
+- Reference specs as `specs/auth/spec.md`
+- Link related changes and PRs
+
+### Capability Naming
+
+- Use verb-noun: `user-auth`, `payment-capture`
+- Single purpose per capability
+- 10-minute understandability rule
+- Split if description needs "AND"
+
+### Change ID Naming
+
+- Use kebab-case, short and descriptive: `add-two-factor-auth`
+- Prefer verb-led prefixes: `add-`, `update-`, `remove-`, `refactor-`
+- Ensure uniqueness; if taken, append `-2`, `-3`, etc.
+
+## Tool Selection Guide
+
+| Task | Tool | Why |
+| --------------------- | ---- | ------------------------ |
+| Find files by pattern | Glob | Fast pattern matching |
+| Search code content | Grep | Optimized regex search |
+| Read specific files | Read | Direct file access |
+| Explore unknown scope | Task | Multi-step investigation |
+
+## Error Recovery
+
+### Change Conflicts
+
+1. Run `openspec list` to see active changes
+2. Check for overlapping specs
+3. Coordinate with change owners
+4. Consider combining proposals
+
+### Validation Failures
+
+1. Run with `--strict` flag
+2. Check JSON output for details
+3. Verify spec file format
+4. Ensure scenarios properly formatted
+
+### Missing Context
+
+1. Read project.md first
+2. Check related specs
+3. Review recent archives
+4. Ask for clarification
+
+## Quick Reference
+
+### Stage Indicators
+
+- `changes/` - Proposed, not yet built
+- `specs/` - Built and deployed
+- `archive/` - Completed changes
+
+### File Purposes
+
+- `proposal.md` - Why and what
+- `tasks.md` - Implementation steps
+- `design.md` - Technical decisions
+- `spec.md` - Requirements and behavior
+
+### CLI Essentials
+
+```bash
+openspec list # What's in progress?
+openspec show [item] # View details
+openspec validate --strict # Is it correct?
+openspec archive <change-id> [--yes|-y] # Mark complete (add --yes for automation)
+```
+
+Remember: Specs are truth. Changes are proposals. Keep them in sync.
--- /dev/null
+# Project Context
+
+## Purpose
+
+Research, prototype, and refine advanced ML‑driven and RL‑driven trading strategies for the FreqAI / Freqtrade ecosystem. Two strategies:
+
+- QuickAdapter: ML strategy with adaptive execution heuristics for partial exits and volatility‑aware stop/take-profit logic.
+- ReforceXY: RL strategy and reward space analysis. Reward space analysis goals:
+  - Provide deterministic synthetic sampling and statistical diagnostics to validate reward components and potential‑based shaping behavior, and to reason about reward parameterization before RL training.
+  - Maintain deterministic runs and rich diagnostics to accelerate iteration and anomaly debugging.
+
+## Tech Stack
+
+- Python 3.11+.
+- Freqtrade + FreqAI (strategy framework & ML integration).
+- TA libraries: TA-Lib, pandas_ta, custom technical helpers.
+- ReforceXY reward space analysis:
+ - Project management: uv.
+ - Scientific stack: numpy, pandas, scipy, scikit‑learn.
+ - Linting: Ruff.
+ - Testing: PyTest + pytest-cov.
+- Docker + docker-compose (containerized runs / reproducibility).
+
+## Project Conventions
+
+### Code Style
+
+- Base formatting is guided by `.editorconfig`: UTF-8, LF line endings, final newline, trailing whitespace trimmed; Python indent = 4 spaces (global indent_size = 2 for non‑Python files where appropriate); Python line length target 100; Markdown max line length disabled.
+- Naming:
+ - Functions & methods: `snake_case`.
+ - Constants: `UPPER_SNAKE_CASE`.
+ - Internal strategy transient labels/features use prefixes: `"%-"` for engineered feature columns; special markers like `"&s-"` / `"&-"` for internal prediction target(s).
+ - Private helpers or internal state use leading underscore (`_exit_thresholds_calibration`).
+- Avoid one‑letter variable names; prefer descriptive ones (e.g. `trade_duration_candles`, `natr_ratio_percent`).
+- Prefer explicit type hints (Python 3.11+ built‑in generics: `list[str]`, `dict[str, float]`).
+- Logging: use module logger (`logger = logging.getLogger(__name__)`), info for decision denials, warning for anomalies, error for exceptions.
+- No non-English terms in code, docs, comments, logs.
+
+### Architecture Patterns
+
+- Strategy classes subclass `IStrategy` and model classes subclass `IFreqaiModel`; each strategy lives as a separate standalone project under the repository root.
+- Reward Space Analysis: standalone CLI module (`reward_space_analysis.py`) + tests focusing on deterministic synthetic scenario generation, decomposition, statistical validation, potential‑based reward shaping (PBRS) variants.
+- Separation of concerns: reward analysis tooling does not depend on strategy runtime state; it is consumed offline for pre‑training validation.
+
+### Reward Space Analysis Testing Strategy
+
+- PyTest test modules in `reward_space_analysis/tests/<focus>`.
+- Focus: correctness of reward calculations, statistical invariants, PBRS modes, transforms, robustness, integration end‑to‑end.
+- Logging configured for concise, colored INFO output; warnings are disabled by default in test runs.
+- Coverage goal: ≥85% on new analytical logic; critical reward shaping paths must be exercised (component bounds, invariance checks, exit attenuation kernels, transform functions, distribution metrics).
+- Focused test invocation examples (integration, statistical coherence, reward alignment) documented in README.
+- Run tests after modifying reward logic, after dependency or Python version changes, before major analyses, and when investigating unexpected anomalies.
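+
+A component-bound/invariant test in the style described above might look like the following sketch; `linear_attenuation` is a hypothetical stand-in, not the project's actual exit attenuation kernel:

```python
import math


def linear_attenuation(trade_duration: int, max_duration: int) -> float:
    """Hypothetical linear exit-attenuation kernel: 1.0 at entry, 0.0 at/after max duration."""
    if max_duration <= 0:
        return 0.0
    return max(0.0, 1.0 - trade_duration / max_duration)


def test_attenuation_bounds() -> None:
    # Component-bound invariant: kernel output stays within [0, 1] and is never NaN.
    for duration in range(0, 200, 7):
        value = linear_attenuation(duration, max_duration=100)
        assert 0.0 <= value <= 1.0
        assert not math.isnan(value)


def test_attenuation_monotonic() -> None:
    # Longer holds are never attenuated less than shorter ones.
    values = [linear_attenuation(d, max_duration=100) for d in range(120)]
    assert all(a >= b for a, b in zip(values, values[1:]))
```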
+
+### Git Workflow
+
+- Primary branch: `main`. Feature/experiment branches are named `feat/<concise-topic>` or `exp/<strategy-or-reward-param>`; fix branches are named `fix/<bug>`.
+- Commit messages: imperative, follow Conventional Commits. Emphasize WHY over raw WHAT when non‑obvious.
+- Avoid large mixed commits; isolate analytical tooling changes from strategy behavior changes.
+- Keep manifests and generated outputs out of version control (only code + templates); user data directories contain `.gitkeep` placeholders.
+
+## Domain Context
+
+- Strategies operate on sequential market OHLCV data.
+- QuickAdapterV3 engineered features include volatility metrics (NATR/ATR), momentum (MACD, EWO), market structure shifts (extrema labeling via zigzag), band widths (BB, KC, VWAP), and price distance measures.
+- QuickAdapterV3 integrates dynamic volatility interpolation (weighted/moving average/interpolation modes) to derive an adaptive NATR for stoploss/take‑profit calculations; partial exits are based on staged NATR ratio percentages.
+- ReforceXY reward shaping emphasizes potential‑based reward shaping (PBRS) invariance: canonical vs. non‑canonical modes, hold/entry/exit additive toggles, duration penalties, and exit attenuation kernels (linear/power/half‑life/etc.).
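+
+Canonical PBRS rests on shaping terms of the form F(s, s') = γΦ(s') − Φ(s), which telescope over an episode and leave optimal policies unchanged. A minimal sketch (the function names and list-based episode layout are illustrative, not the ReforceXY implementation):

```python
def pbrs_shaping(potential_prev: float, potential_next: float, gamma: float) -> float:
    """Canonical PBRS term: F(s, s') = gamma * Phi(s') - Phi(s)."""
    return gamma * potential_next - potential_prev


def shaped_episode_return(base_rewards: list[float], potentials: list[float], gamma: float) -> float:
    """Sum of base rewards plus shaping terms over one episode.

    `potentials` holds one entry per visited state, i.e. len(base_rewards) + 1 values.
    """
    total = 0.0
    for step, reward in enumerate(base_rewards):
        total += reward + pbrs_shaping(potentials[step], potentials[step + 1], gamma)
    return total
```

+With γ = 1 the shaping terms telescope to Φ(s_T) − Φ(s_0), so forcing the terminal potential to zero makes the net shaping contribution depend only on the start state — the kind of invariance the analysis tooling is meant to check.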
+
+## Important Constraints
+
+- Python version ≥3.11 (target for type hints).
+- Trading mode affects short availability (spot disallows shorts); logic must gate short entries accordingly.
+- Computations must handle missing/NaN gracefully.
+- Regulatory / business: none explicit; treat strategies as experimental research (no performance guarantees) and avoid embedding sensitive credentials.
+
+## External Dependencies
+
+- Freqtrade / FreqAI framework APIs.
+- Docker images defined per strategy project (`Dockerfile.quickadapter`, `Dockerfile.reforcexy`) for containerized execution.