# Efficient Optimization Algorithms

## Sampling Algorithms

Optuna provides several sampling algorithms, including `TPESampler` (Tree-structured Parzen Estimator, the default), `RandomSampler` (random search), and `CmaEsSampler` (CMA-ES).

## Switching Samplers

```
import optuna
```

```
study = optuna.create_study()
print(f"Sampler is {study.sampler.__class__.__name__}")
```

Out:

```
Sampler is TPESampler
```

```
study = optuna.create_study(sampler=optuna.samplers.RandomSampler())
print(f"Sampler is {study.sampler.__class__.__name__}")

study = optuna.create_study(sampler=optuna.samplers.CmaEsSampler())
print(f"Sampler is {study.sampler.__class__.__name__}")
```

Out:

```
Sampler is RandomSampler
Sampler is CmaEsSampler
```
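To make the strategy behind `RandomSampler` concrete, here is a minimal pure-Python sketch of random search (illustrative only, not Optuna's internals; `random_search` is a name made up for this sketch):

```python
import random

def random_search(objective, low, high, n_trials, seed=0):
    """Minimal random search: sample uniformly, keep the best value found."""
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_trials):
        x = rng.uniform(low, high)
        val = objective(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Minimize (x - 2)^2 over [-10, 10]; the best sample lands near x = 2.
x, val = random_search(lambda x: (x - 2) ** 2, -10, 10, 200)
```

More sophisticated samplers such as TPE reuse the history of past trials to decide where to sample next, instead of drawing each point independently.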

## Pruning Algorithms

`Pruners` automatically terminate unpromising trials in the early stages of training (i.e., automated early stopping).

Optuna provides several pruning algorithms, including `MedianPruner` (used below), `SuccessiveHalvingPruner`, and `HyperbandPruner`.

## Activating Pruners

```
import logging
import sys

import optuna
import sklearn.datasets
import sklearn.linear_model
import sklearn.model_selection


def objective(trial):
    iris = sklearn.datasets.load_iris()
    classes = list(set(iris.target))
    train_x, valid_x, train_y, valid_y = sklearn.model_selection.train_test_split(
        iris.data, iris.target, test_size=0.25, random_state=0
    )

    alpha = trial.suggest_float("alpha", 1e-5, 1e-1, log=True)
    clf = sklearn.linear_model.SGDClassifier(alpha=alpha)

    for step in range(100):
        clf.partial_fit(train_x, train_y, classes=classes)

        # Report intermediate objective value.
        intermediate_value = 1.0 - clf.score(valid_x, valid_y)
        trial.report(intermediate_value, step)

        # Handle pruning based on the intermediate value.
        if trial.should_prune():
            raise optuna.TrialPruned()

    return 1.0 - clf.score(valid_x, valid_y)
```
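The `log=True` flag in `suggest_float` above makes `alpha` log-uniformly distributed, the usual choice for scale-like hyperparameters that span several orders of magnitude. A pure-Python sketch of log-uniform sampling (illustrative; not how Optuna implements it internally):

```python
import math
import random

def sample_loguniform(low, high, rng):
    # Uniform in log space, then exponentiate: every decade
    # between low and high is equally likely.
    return math.exp(rng.uniform(math.log(low), math.log(high)))

rng = random.Random(0)
samples = [sample_loguniform(1e-5, 1e-1, rng) for _ in range(10_000)]

# 1e-3 is the midpoint of [1e-5, 1e-1] in log space, so about half
# of the samples fall below it.
frac_below = sum(s < 1e-3 for s in samples) / len(samples)
```

With a plain uniform distribution, almost all samples would land near the top of the range; sampling in log space spreads them evenly across magnitudes.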

```
# Add stream handler of stdout to show the messages
optuna.logging.get_logger("optuna").addHandler(logging.StreamHandler(sys.stdout))
study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)
```

Out:

```
A new study created in memory with name: no-name-4152b178-727c-45fd-bad0-df1423fb21f4
Trial 0 finished with value: 0.052631578947368474 and parameters: {'alpha': 1.2811941669507692e-05}. Best is trial 0 with value: 0.052631578947368474.
Trial 1 finished with value: 0.2894736842105263 and parameters: {'alpha': 8.861312283015687e-05}. Best is trial 0 with value: 0.052631578947368474.
Trial 2 finished with value: 0.42105263157894735 and parameters: {'alpha': 2.1259265425531073e-05}. Best is trial 0 with value: 0.052631578947368474.
Trial 3 finished with value: 0.2894736842105263 and parameters: {'alpha': 0.06819642889595902}. Best is trial 0 with value: 0.052631578947368474.
Trial 4 finished with value: 0.368421052631579 and parameters: {'alpha': 0.0014263093085762285}. Best is trial 0 with value: 0.052631578947368474.
Trial 5 finished with value: 0.02631578947368418 and parameters: {'alpha': 0.0009641035355747206}. Best is trial 5 with value: 0.02631578947368418.
Trial 6 finished with value: 0.3157894736842105 and parameters: {'alpha': 0.027053233617346455}. Best is trial 5 with value: 0.02631578947368418.
Trial 7 finished with value: 0.10526315789473684 and parameters: {'alpha': 0.0005326175352541014}. Best is trial 5 with value: 0.02631578947368418.
Trial 8 pruned.
Trial 9 finished with value: 0.10526315789473684 and parameters: {'alpha': 1.979260962104906e-05}. Best is trial 5 with value: 0.02631578947368418.
Trial 10 pruned.
Trial 11 finished with value: 0.368421052631579 and parameters: {'alpha': 0.00015434674555602333}. Best is trial 5 with value: 0.02631578947368418.
Trial 12 pruned.
Trial 13 finished with value: 0.07894736842105265 and parameters: {'alpha': 3.800921473311539e-05}. Best is trial 5 with value: 0.02631578947368418.
Trial 14 pruned.
Trial 15 pruned.
Trial 16 pruned.
Trial 17 pruned.
Trial 18 pruned.
Trial 19 pruned.
```
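The rule behind `MedianPruner` can be sketched in pure Python (a simplification for illustration, not Optuna's actual implementation, which also supports warm-up steps and other options): a trial becomes a candidate for pruning at a given step when its intermediate value is worse than the median of the values earlier trials reported at the same step.

```python
import statistics

def median_rule(step, value, previous_histories):
    """Prune when `value` (an error we minimize) is worse than the
    median of what previous trials reported at the same step."""
    at_step = [h[step] for h in previous_histories if step in h]
    if not at_step:
        return False  # nothing to compare against yet
    return value > statistics.median(at_step)

# step -> intermediate error reported by two earlier trials.
histories = [{0: 0.50, 1: 0.30}, {0: 0.40, 1: 0.20}]
print(median_rule(1, 0.35, histories))  # worse than the median 0.25 -> True
print(median_rule(1, 0.10, histories))  # better than the median -> False
```

This is why roughly half of the later trials in the log above end in `pruned.`: once enough trials have completed, any trial tracking below the median at some step is stopped early.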

## Which pruner should be used?

| Parallel Compute Resource | Categorical/Conditional Hyperparameters | Recommended Algorithms |
| --- | --- | --- |
| Limited | No | TPE. GP-EI if search space is low-dimensional and continuous. |
| Limited | Yes | TPE. GP-EI if search space is low-dimensional and continuous. |
| Sufficient | No | CMA-ES, Random Search |
| Sufficient | Yes | Random Search or Genetic Algorithm |

## Integration Modules for Pruning

For example, `XGBoostPruningCallback` introduces pruning without directly changing the logic of the training iteration. (See also the example for the entire script.)

```
pruning_callback = optuna.integration.XGBoostPruningCallback(trial, 'validation-error')
bst = xgb.train(param, dtrain, evals=[(dvalid, 'validation')], callbacks=[pruning_callback])
```

Total running time of the script: (0 minutes 2.191 seconds)
