optuna.importance.PedAnovaImportanceEvaluator
- class optuna.importance.PedAnovaImportanceEvaluator(*, baseline_quantile=0.1, evaluate_on_local=True)[source]
PED-ANOVA importance evaluator.
Implements the PED-ANOVA hyperparameter importance evaluation algorithm.
PED-ANOVA fits Parzen estimators of COMPLETE trials better than a user-specified baseline. Users can specify the baseline by a quantile. The importance can be interpreted as how important each hyperparameter is to get the performance better than the baseline.
For further information about the PED-ANOVA algorithm, please refer to the following paper:
- PED-ANOVA: Efficiently Quantifying Hyperparameter Importance in Arbitrary Subspaces
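As a rough illustration (not Optuna's internal implementation), the sketch below shows how a quantile-based baseline splits completed trials into those that reach the baseline and the rest; the objective values and the quantile are made up for the example.

# Minimal sketch of a quantile-based baseline (assumed values, minimization).
import numpy as np

values = np.array([0.9, 0.2, 0.5, 0.1, 0.7, 0.3])  # objective values of COMPLETE trials
baseline_quantile = 0.5
baseline = np.quantile(values, baseline_quantile)  # baseline objective value, approximately 0.4

# Trials at or below the baseline count as "better than baseline" for minimization.
better_than_baseline = values <= baseline
print(baseline)
print(better_than_baseline)  # [False  True False  True False  True]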
Note
The performance of PED-ANOVA depends on how many trials are better than the baseline. To stabilize the analysis, it is preferable to include at least 5 trials above the baseline.
Note
Please refer to the original work.
- Parameters:
  - baseline_quantile (float) – Compute the importance of achieving the top-baseline_quantile quantile objective value. For example, baseline_quantile=0.1 means that the importances give the information of which parameters were important to achieve the top-10% performance during optimization.
  - evaluate_on_local (bool) – Whether we measure the importance in the local or global space. If True, the importances imply how important each parameter is during optimization. Meanwhile, evaluate_on_local=False gives the importances in the specified search_space. evaluate_on_local=True is especially useful when users modify the search space during optimization.
Example
An example of using PED-ANOVA is as follows:
import optuna
from optuna.importance import PedAnovaImportanceEvaluator


def objective(trial):
    x1 = trial.suggest_float("x1", -10, 10)
    x2 = trial.suggest_float("x2", -10, 10)
    return x1 + x2 / 1000


study = optuna.create_study()
study.optimize(objective, n_trials=100)
evaluator = PedAnovaImportanceEvaluator()
importance = optuna.importance.get_param_importances(study, evaluator=evaluator)
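As an assumed variation on the example above (the specific values are arbitrary), the constructor arguments described under Parameters can be set explicitly, for example a looser baseline evaluated in the global space:

evaluator = PedAnovaImportanceEvaluator(baseline_quantile=0.25, evaluate_on_local=False)
importance = optuna.importance.get_param_importances(study, evaluator=evaluator)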
Note
Added in v3.6.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v3.6.0.
Methods
- evaluate(study[, params, target]) – Evaluate parameter importances based on completed trials in the given study.
- evaluate(study, params=None, *, target=None)[source]
Evaluate parameter importances based on completed trials in the given study.
Note
This method is not meant to be called by library users.
See also
Please refer to get_param_importances() for how a concrete evaluator should implement this method.
- Parameters:
  - study (Study) – An optimized study.
  - params (list[str] | None) – A list of names of parameters to assess. If None, all parameters that are present in all of the completed trials are assessed.
  - target (Callable[[FrozenTrial], float] | None) – A function to specify the value to evaluate importances. If it is None and study is being used for single-objective optimization, the objective values are used. Can also be used for other trial attributes, such as the duration, like target=lambda t: t.duration.total_seconds().
    Note
    Specify this argument if study is being used for multi-objective optimization. For example, to get the hyperparameter importance of the first objective, use target=lambda t: t.values[0] for the target parameter.
- Returns:
  A dict where the keys are parameter names and the values are assessed importances.
- Return type:
  dict[str, float]
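The note on the target argument can be made concrete with a hedged sketch: the two-objective study below is purely illustrative, and target=lambda t: t.values[0] restricts the importance evaluation to the first objective.

import optuna
from optuna.importance import PedAnovaImportanceEvaluator


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    y = trial.suggest_float("y", -10, 10)
    # Two illustrative objectives; any multi-objective study would do.
    return x**2 + y, (x - 2) ** 2 + y


study = optuna.create_study(directions=["minimize", "minimize"])
study.optimize(objective, n_trials=50)

importance = optuna.importance.get_param_importances(
    study,
    evaluator=PedAnovaImportanceEvaluator(),
    target=lambda t: t.values[0],  # importance with respect to the first objective only
)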