# hyperparameter_tuning
## Subject

Sampling, Validation and ML Diagnostics
## Why This Module Exists

AFML Chapter 9 recommends tuning under PurgedKFold, using randomized search for large spaces, and scoring with metrics aligned to trading objectives.
## Mathematical Foundations

### Purged CV Objective
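The formula for this section appears to have been lost in extraction. A plausible reconstruction, following AFML Chapter 9's purged k-fold scheme (notation assumed, not taken from this page): with $K$ purged folds, the tuner selects the hyperparameters $\theta$ that maximize the mean out-of-fold score

$$
\bar{S}(\theta) \;=\; \frac{1}{K}\sum_{k=1}^{K} S\!\left(f_{\theta}^{(-k)},\, D_k\right),
$$

where $D_k$ is the $k$-th test fold and $f_{\theta}^{(-k)}$ is fit only on samples whose information spans $[t_{i,0},\, t_{i,1}]$ (from `samples_info_sets`) do not overlap the span of $D_k$, with an additional embargo period dropped after the fold to limit leakage from serial correlation.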
### Log-Uniform Draw
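The formula here was likely dropped in extraction; the standard statement (notation assumed) is that a log-uniform draw on $[a, b]$, $0 < a < b$, exponentiates a uniform draw on the log scale:

$$
x = e^{u}, \quad u \sim \mathcal{U}(\ln a,\, \ln b), \qquad f(x) = \frac{1}{x \ln(b/a)} \;\text{ for } x \in [a, b],
$$

so each decade of the range is sampled with equal probability, which suits scale parameters such as `C` and `gamma`.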
### Weighted Neg Log Loss
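The missing formula is presumably the weight-normalized log loss, negated so that higher is better (notation assumed):

$$
\text{NegLogLoss} \;=\; \frac{\sum_{i} w_i \log p_{i,\,y_i}}{\sum_{i} w_i},
$$

where $p_{i, y_i}$ is the predicted probability of sample $i$'s true label $y_i$ and $w_i$ its sample weight.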
## Usage Examples

### Randomized search with PurgedKFold semantics

```rust
use std::collections::BTreeMap;

use openquant::hyperparameter_tuning::{
    randomized_search, RandomParamDistribution, SearchData, SearchScoring,
};

let mut space = BTreeMap::new();
space.insert("C".to_string(), RandomParamDistribution::LogUniform { low: 1e-2, high: 1e2 });
space.insert("gamma".to_string(), RandomParamDistribution::LogUniform { low: 1e-3, high: 1e1 });

let result = randomized_search(
    build_model,
    &space,
    25,
    42,
    SearchData { x: &x, y: &y, sample_weight: Some(&w), samples_info_sets: &info_sets },
    5,
    0.01,
    SearchScoring::NegLogLoss,
)?;
println!("best score = {}", result.best_score);
```

## API Reference
### Rust API

- `grid_search`
- `randomized_search`
- `expand_param_grid`
- `sample_log_uniform`
- `classification_score`
- `SearchScoring`
- `RandomParamDistribution`
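As a minimal sketch of the log-uniform draw behind `sample_log_uniform` and `RandomParamDistribution::LogUniform` — the crate's real function likely takes an RNG rather than a pre-drawn uniform variate `u`, which is passed in here only for determinism:

```rust
/// Hedged sketch of a log-uniform draw on [low, high]: exponentiate a
/// uniform draw on [ln(low), ln(high)]. Not the crate's exact signature.
fn sample_log_uniform(low: f64, high: f64, u: f64) -> f64 {
    assert!(low > 0.0 && high > low, "bounds must satisfy 0 < low < high");
    assert!((0.0..=1.0).contains(&u), "u must lie in [0, 1]");
    (low.ln() + u * (high.ln() - low.ln())).exp()
}

fn main() {
    // u = 0 and u = 1 hit the bounds; u = 0.5 is the geometric mean.
    println!("{}", sample_log_uniform(1e-2, 1e2, 0.0)); // ~0.01
    println!("{}", sample_log_uniform(1e-2, 1e2, 0.5)); // ~1.0
    println!("{}", sample_log_uniform(1e-2, 1e2, 1.0)); // ~100.0
}
```

The draw is uniform in log-space, so the search spends as much effort on `[0.01, 0.1]` as on `[10, 100]` — the behavior the `C` and `gamma` ranges in the usage example rely on.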
## Implementation Notes

- Use Accuracy only when each prediction has similar economic value (equal bet sizing).
- Prefer weighted NegLogLoss when probabilities drive position sizing or outcomes have different economic magnitude.
- BalancedAccuracy is useful for severe class imbalance, especially in meta-labeling where recall of positives matters.
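The weighted scoring these notes recommend can be sketched as follows — a minimal illustration, not the crate's internal implementation; `probs[i][labels[i]]` is the predicted probability of sample `i`'s true class:

```rust
/// Weighted negative log loss as a score (higher is better, max 0): the
/// weight-normalized sum of log-probabilities of the true classes.
/// Minimal sketch, not the crate's internal implementation.
fn weighted_neg_log_loss(probs: &[Vec<f64>], labels: &[usize], weights: &[f64]) -> f64 {
    let eps = 1e-15; // clip probabilities to avoid ln(0)
    let mut num = 0.0;
    let mut den = 0.0;
    for ((p, &y), &w) in probs.iter().zip(labels).zip(weights) {
        num += w * p[y].max(eps).ln();
        den += w;
    }
    num / den
}

fn main() {
    let probs = vec![vec![0.9, 0.1], vec![0.4, 0.6]];
    let labels = [0, 1];
    // Putting more weight on the second, less confident sample drags the
    // score further below zero than equal weighting does, which is how
    // economically larger outcomes get more say in model selection.
    println!("{}", weighted_neg_log_loss(&probs, &labels, &[1.0, 1.0]));
    println!("{}", weighted_neg_log_loss(&probs, &labels, &[1.0, 3.0]));
}
```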