Integration
class optuna.integration.ChainerPruningExtension(trial, observation_key, pruner_trigger)[source]

Chainer extension to prune unpromising trials.

Example

Add a pruning extension which observes validation losses to a Chainer Trainer.

trainer.extend(
    ChainerPruningExtension(trial, 'validation/main/loss', (1, 'epoch')))

Parameters:
- trial – A Trial corresponding to the current evaluation of the objective function.
- observation_key – An evaluation metric for pruning, e.g., main/loss and validation/main/accuracy. Please refer to the chainer.Reporter reference for further details.
- pruner_trigger – A trigger to execute pruning. pruner_trigger is an instance of IntervalTrigger or ManualScheduleTrigger. IntervalTrigger can be specified by a tuple of the interval length and its unit, like (1, 'epoch'); see the sketch below for both forms.
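A minimal sketch (not part of the original reference) of the two ways to specify pruner_trigger; it assumes that trial and trainer are already defined inside the objective function, as in the example above.

from chainer.training import triggers
from optuna.integration import ChainerPruningExtension

# A plain (interval, unit) tuple is interpreted as an IntervalTrigger ...
trainer.extend(
    ChainerPruningExtension(trial, 'validation/main/loss', (1, 'epoch')))

# ... or pass an explicit trigger, e.g. to check for pruning only at chosen epochs.
schedule = triggers.ManualScheduleTrigger([5, 10, 20], 'epoch')
trainer.extend(
    ChainerPruningExtension(trial, 'validation/main/loss', schedule))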
class optuna.integration.ChainerMNStudy(study, comm)[source]

A wrapper of Study to incorporate Optuna with ChainerMN.

See also
ChainerMNStudy provides the same interface as Study. Please refer to optuna.study.Study for further details.

Example

Optimize an objective function that trains a neural network written with ChainerMN.

comm = chainermn.create_communicator('naive')
study = optuna.load_study(study_name, storage_url)
chainermn_study = optuna.integration.ChainerMNStudy(study, comm)
chainermn_study.optimize(objective, n_trials=25)

Parameters:
- study – A Study object.
- comm – A ChainerMN communicator.

optimize(func, n_trials=None, timeout=None, catch=())[source]

Optimize an objective function.

This method provides the same interface as optuna.study.Study.optimize() except for the absence of the n_jobs argument. A skeletal example of the expected objective follows.
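The sketch below is my own illustration (following Optuna's ChainerMN example, not taken from this reference) of an objective passed to optimize(): it receives both the trial and the ChainerMN communicator. The model and data helpers are hypothetical placeholders, so this is not runnable as-is.

def objective(trial, comm):
    # Every worker receives the same suggested values, so the model built
    # on each process is identical.
    n_units = trial.suggest_int('n_units', 32, 256)
    lr = trial.suggest_loguniform('lr', 1e-5, 1e-1)

    model = build_model(n_units)          # hypothetical helper
    train, valid = scatter_dataset(comm)  # hypothetical helper
    return train_and_evaluate(model, train, valid, lr, comm)  # hypothetical helper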
class optuna.integration.CmaEsSampler(x0=None, sigma0=None, cma_stds=None, seed=None, cma_opts=None, n_startup_trials=1, independent_sampler=None, warn_independent_sampling=True)[source]

A sampler using the cma library as the backend.

Example

Optimize a simple quadratic function by using CmaEsSampler.

import optuna

def objective(trial):
    x = trial.suggest_uniform('x', -1, 1)
    y = trial.suggest_int('y', -1, 1)
    return x**2 + y

sampler = optuna.integration.CmaEsSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=20)

Note that parallel execution of trials may affect the optimization performance of CMA-ES, especially if the number of trials running in parallel exceeds the population size.

Parameters:
- x0 – A dictionary of initial parameter values for CMA-ES. By default, the mean of low and high for each distribution is used. Please refer to cma.CMAEvolutionStrategy for further details of x0. A usage sketch follows this list.
- sigma0 – Initial standard deviation of CMA-ES. By default, sigma0 is set to min_range / 6, where min_range denotes the minimum range of the distributions in the search space. If the distribution is categorical, min_range is len(choices) - 1. Please refer to cma.CMAEvolutionStrategy for further details of sigma0.
- cma_stds – A dictionary of multipliers of sigma0 for each parameter. The default value is 1.0. Please refer to cma.CMAEvolutionStrategy for further details of cma_stds.
- seed – A random seed for CMA-ES.
- cma_opts – Options passed to the constructor of the cma.CMAEvolutionStrategy class. Note that the BoundaryHandler, bounds, CMA_stds, and seed arguments in cma_opts will be ignored because they are added by CmaEsSampler automatically.
- n_startup_trials – The independent sampling is used instead of the CMA-ES algorithm until the given number of trials finish in the same study.
- independent_sampler – A BaseSampler instance that is used for independent sampling. The parameters not contained in the relative search space are sampled by this sampler. The search space for CmaEsSampler is determined by intersection_search_space(). If None is specified, RandomSampler is used as the default. See also: the optuna.samplers module provides built-in independent samplers such as RandomSampler and TPESampler.
- warn_independent_sampling – If this is True, a warning message is emitted when the value of a parameter is sampled by using an independent sampler. Note that the parameters of the first trial in a study are always sampled via an independent sampler, so no warning messages are emitted in this case.
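A minimal sketch (my own example, not from the reference) of supplying x0 and sigma0 explicitly; the keys of x0 must match the parameter names suggested in the objective.

import optuna

def objective(trial):
    x = trial.suggest_uniform('x', -1, 1)
    y = trial.suggest_int('y', -1, 1)
    return x**2 + y

# Start the CMA-ES search from an explicit initial point and step size.
sampler = optuna.integration.CmaEsSampler(
    x0={'x': 0.0, 'y': 0},
    sigma0=0.1,
    seed=42)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=20)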
class optuna.integration.FastAIPruningCallback(learn, trial, monitor)[source]

FastAI callback to prune unpromising trials for fastai.

Note
This callback is for fastai<2.0, not the upcoming version being developed in fastai/fastai_dev.

Example

Add a pruning callback which monitors validation loss directly to a Learner.

# If registering this callback in construction
from functools import partial

learn = Learner(
    data, model,
    callback_fns=[partial(FastAIPruningCallback, trial=trial, monitor='valid_loss')])

Example

Register a pruning callback to learn.fit and learn.fit_one_cycle.

learn.fit(n_epochs, callbacks=[FastAIPruningCallback(learn, trial, 'valid_loss')])
learn.fit_one_cycle(
    n_epochs, cyc_len, max_lr,
    callbacks=[FastAIPruningCallback(learn, trial, 'valid_loss')])

Parameters:
- learn – fastai.basic_train.Learner.
- trial – A Trial corresponding to the current evaluation of the objective function.
- monitor – An evaluation metric for pruning, e.g. valid_loss and Accuracy. Please refer to the fastai.Callback reference for further details.
class optuna.integration.PyTorchIgnitePruningHandler(trial, metric, trainer)[source]

PyTorch Ignite handler to prune unpromising trials.

Example

Add a pruning handler which observes validation accuracy.

evaluator = create_supervised_evaluator(
    model, metrics={'accuracy': Accuracy()}, device=device)
handler = PyTorchIgnitePruningHandler(trial, 'accuracy', trainer)
evaluator.add_event_handler(Events.COMPLETED, handler)

@trainer.on(Events.EPOCH_COMPLETED)
def log_validation_results(engine):
    evaluator.run(val_loader)

Parameters:
- trial – A Trial corresponding to the current evaluation of the objective function.
- metric – A name of metric for pruning, e.g., accuracy and loss.
- trainer – A trainer engine of PyTorch Ignite. Please refer to the ignite.engine.Engine reference for further details.
class optuna.integration.KerasPruningCallback(trial, monitor)[source]

Keras callback to prune unpromising trials.

Example

Add a pruning callback which observes validation losses.

model.fit(X, y, callbacks=[KerasPruningCallback(trial, 'val_loss')])

Parameters:
- trial – A Trial corresponding to the current evaluation of the objective function.
- monitor – An evaluation metric for pruning, e.g., val_loss and val_acc. Please refer to the keras.Callback reference for further details. A fuller objective sketch follows this list.
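A self-contained sketch (my own example with a toy model and random data, not from the reference) of using the callback inside an objective function so that unpromising trials stop training early.

import numpy as np
import optuna
from keras.layers import Dense
from keras.models import Sequential
from optuna.integration import KerasPruningCallback

def objective(trial):
    units = trial.suggest_int('units', 8, 64)
    model = Sequential([
        Dense(units, activation='relu', input_dim=20),
        Dense(1, activation='sigmoid')])
    model.compile(optimizer='adam', loss='binary_crossentropy')

    # Toy data just to make the sketch runnable.
    X = np.random.rand(200, 20)
    y = np.random.randint(0, 2, 200)

    history = model.fit(
        X, y, validation_split=0.2, epochs=10, verbose=0,
        callbacks=[KerasPruningCallback(trial, 'val_loss')])
    return history.history['val_loss'][-1]

study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=10)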
class optuna.integration.LightGBMPruningCallback(trial, metric, valid_name='valid_0')[source]

Callback for LightGBM to prune unpromising trials.

Example

Add a pruning callback which observes validation scores to training of a LightGBM model.

param = {'objective': 'binary', 'metric': 'binary_error'}
pruning_callback = LightGBMPruningCallback(trial, 'binary_error')
gbm = lgb.train(param, dtrain, valid_sets=[dtest], callbacks=[pruning_callback])

Parameters:
- trial – A Trial corresponding to the current evaluation of the objective function.
- metric – An evaluation metric for pruning, e.g., binary_error and multi_error. Please refer to the LightGBM reference for further details.
- valid_name – The name of the target validation. Validation names are specified by the valid_names option of the train method. If omitted, valid_0 is used, which is the default name of the first validation. Note that this argument will be ignored if you are calling the cv method instead of the train method. A sketch with explicit valid_names follows this list.
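A short sketch (assuming param, dtrain, dtest, and trial are defined as in the example above) of monitoring one named validation set when several are passed to lgb.train.

# With multiple validation sets, valid_name selects the one to monitor;
# it must match an entry in valid_names.
pruning_callback = LightGBMPruningCallback(trial, 'binary_error', valid_name='eval')
gbm = lgb.train(
    param, dtrain,
    valid_sets=[dtrain, dtest],
    valid_names=['train', 'eval'],
    callbacks=[pruning_callback])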
optuna.integration.lightgbm.train(*args, **kwargs) → Any[source]

Wrapper of the LightGBM Training API to tune hyperparameters.

It tunes important hyperparameters (e.g., min_child_samples and feature_fraction) in a stepwise manner. Arguments and keyword arguments for lightgbm.train() can be passed.

Note
Added in v0.18.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v0.18.0.
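A minimal usage sketch (my own example, not from the reference; it assumes the integration module re-exports the LightGBM Dataset, as in the official examples).

import optuna.integration.lightgbm as lgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.25)
dtrain = lgb.Dataset(X_train, label=y_train)
dvalid = lgb.Dataset(X_valid, label=y_valid)

params = {'objective': 'binary', 'metric': 'binary_error', 'verbosity': -1}

# Same call signature as lightgbm.train(); the wrapper tunes the remaining
# hyperparameters (feature_fraction, num_leaves, ...) step by step.
booster = lgb.train(params, dtrain, valid_sets=[dvalid],
                    early_stopping_rounds=25, verbose_eval=False)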
class optuna.integration.MXNetPruningCallback(trial, eval_metric)[source]

MXNet callback to prune unpromising trials.

Example

Add a pruning callback which observes validation accuracy.

model.fit(train_data=X, eval_data=Y,
          eval_end_callback=MXNetPruningCallback(trial, eval_metric='accuracy'))

Parameters:
- trial – A Trial corresponding to the current evaluation of the objective function.
- eval_metric – An evaluation metric name for pruning, e.g., cross-entropy and accuracy. If using default metrics like mxnet.metrics.Accuracy, use its default metric name. For custom metrics, use the metric_name provided to the constructor. Please refer to the mxnet.metrics reference for further details.
class optuna.integration.PyTorchLightningPruningCallback(trial, monitor)[source]

PyTorch Lightning callback to prune unpromising trials.

Example

Add a pruning callback which observes validation accuracy.

trainer = pytorch_lightning.Trainer(
    early_stop_callback=PyTorchLightningPruningCallback(trial, monitor='avg_val_acc'))

Parameters:
- trial – A Trial corresponding to the current evaluation of the objective function.
- monitor – An evaluation metric for pruning, e.g., val_loss or val_acc. The metrics are obtained from the dictionaries returned by, e.g., pytorch_lightning.LightningModule.training_step or pytorch_lightning.LightningModule.validation_end, so the names depend on how those dictionaries are formatted.
class optuna.integration.SkoptSampler(independent_sampler=None, warn_independent_sampling=True, skopt_kwargs=None, n_startup_trials=1)[source]

Sampler using Scikit-Optimize as the backend.

Example

Optimize a simple quadratic function by using SkoptSampler.

import optuna

def objective(trial):
    x = trial.suggest_uniform('x', -10, 10)
    y = trial.suggest_int('y', 0, 10)
    return x**2 + y

sampler = optuna.integration.SkoptSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)

Parameters:
- independent_sampler – A BaseSampler instance that is used for independent sampling. The parameters not contained in the relative search space are sampled by this sampler. The search space for SkoptSampler is determined by intersection_search_space(). If None is specified, RandomSampler is used as the default. See also: the optuna.samplers module provides built-in independent samplers such as RandomSampler and TPESampler.
- warn_independent_sampling – If this is True, a warning message is emitted when the value of a parameter is sampled by using an independent sampler. Note that the parameters of the first trial in a study are always sampled via an independent sampler, so no warning messages are emitted in this case.
- skopt_kwargs – Keyword arguments passed to the constructor of the skopt.Optimizer class. Note that the dimensions argument in skopt_kwargs will be ignored because it is added by SkoptSampler automatically. A usage sketch follows this list.
- n_startup_trials – The independent sampling is used until the given number of trials finish in the same study.
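A short sketch (my own example, not from the reference) of forwarding options to skopt.Optimizer through skopt_kwargs, reusing the objective from the example above.

import optuna

def objective(trial):
    x = trial.suggest_uniform('x', -10, 10)
    y = trial.suggest_int('y', 0, 10)
    return x**2 + y

# Use a Gaussian-process surrogate with expected improvement, and fall back
# to independent (random) sampling for the first five trials.
sampler = optuna.integration.SkoptSampler(
    skopt_kwargs={'base_estimator': 'GP', 'acq_func': 'EI'},
    n_startup_trials=5)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=30)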
class optuna.integration.TensorFlowPruningHook(trial, estimator, metric, run_every_steps, is_higher_better=None)[source]

TensorFlow SessionRunHook to prune unpromising trials.

Example

See the example if you want to add a pruning SessionRunHook for TensorFlow's Estimator. A schematic sketch also follows the parameter list below.

Parameters:
- trial – A Trial corresponding to the current evaluation of the objective function.
- estimator – An estimator which you will use.
- metric – An evaluation metric for pruning, e.g., accuracy and loss.
- run_every_steps – An interval to watch the summary file.
- is_higher_better – Please do not use this argument because this class refers to StudyDirection to check whether the current study is minimize or maximize.
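A hypothetical sketch (the trial, estimator, and train_input_fn names are placeholders of my own, not from the reference) of attaching the hook to a tf.estimator.Estimator training call.

# `trial` comes from the objective function; `estimator` is a tf.estimator.Estimator.
pruning_hook = optuna.integration.TensorFlowPruningHook(
    trial=trial,
    estimator=estimator,
    metric='accuracy',
    run_every_steps=100)
estimator.train(input_fn=train_input_fn, max_steps=10000, hooks=[pruning_hook])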
class optuna.integration.TFKerasPruningCallback(trial, monitor)[source]

tf.keras callback to prune unpromising trials.

This callback is intended to be compatible with TensorFlow v1 and v2, but it is only tested with TensorFlow v1.

Example

Add a pruning callback which observes validation losses.

model.fit(x, y, callbacks=[TFKerasPruningCallback(trial, 'val_loss')])

Parameters:
- trial – A Trial corresponding to the current evaluation of the objective function.
- monitor – An evaluation metric for pruning, e.g., val_loss or val_acc.
class optuna.integration.XGBoostPruningCallback(trial, observation_key)[source]

Callback for XGBoost to prune unpromising trials.

Example

Add a pruning callback which observes validation errors to training of an XGBoost model.

pruning_callback = XGBoostPruningCallback(trial, 'validation-error')
bst = xgb.train(param, dtrain, evals=[(dtest, 'validation')], callbacks=[pruning_callback])

Parameters:
- trial – A Trial corresponding to the current evaluation of the objective function.
- observation_key – An evaluation metric for pruning, e.g., validation-error and validation-merror. Please refer to eval_metric in the XGBoost reference for further details. A fuller objective sketch follows this list.
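A self-contained sketch (my own example, not from the reference) of an objective that wires the callback into xgb.train; the observation key 'validation-error' matches the 'validation' name given in evals and the explicit eval_metric.

import optuna
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

def objective(trial):
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.25)
    dtrain = xgb.DMatrix(X_train, label=y_train)
    dvalid = xgb.DMatrix(X_valid, label=y_valid)

    param = {
        'objective': 'binary:logistic',
        'eval_metric': 'error',
        'max_depth': trial.suggest_int('max_depth', 2, 10),
        'eta': trial.suggest_loguniform('eta', 1e-3, 1.0),
    }
    pruning_callback = optuna.integration.XGBoostPruningCallback(trial, 'validation-error')
    bst = xgb.train(param, dtrain, num_boost_round=100,
                    evals=[(dvalid, 'validation')],
                    callbacks=[pruning_callback])

    preds = bst.predict(dvalid)
    return float(((preds > 0.5).astype(int) != y_valid).mean())

study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)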
class optuna.integration.OptunaSearchCV(estimator, param_distributions, cv=5, enable_pruning=False, error_score=nan, max_iter=1000, n_jobs=1, n_trials=10, random_state=None, refit=True, return_train_score=False, scoring=None, study=None, subsample=1.0, timeout=None, verbose=0)[source]

Hyperparameter search with cross-validation.

Warning
This feature is experimental. The interface may be changed in the future.

Parameters:
- estimator – Object to use to fit the data. This is assumed to implement the scikit-learn estimator interface. Either this needs to provide score, or scoring must be passed.
- param_distributions – Dictionary where keys are parameters and values are distributions. Distributions are assumed to implement the optuna distribution interface.
- cv – Cross-validation strategy. Possible inputs for cv are:
  - an integer to specify the number of folds in a CV splitter,
  - a CV splitter,
  - an iterable yielding (train, test) splits as arrays of indices.
  For an integer, if estimator is a classifier and y is either binary or multiclass, sklearn.model_selection.StratifiedKFold is used; otherwise, sklearn.model_selection.KFold is used.
- enable_pruning – If True, pruning is performed in the case where the underlying estimator supports partial_fit.
- error_score – Value to assign to the score if an error occurs in fitting. If 'raise', the error is raised. If numeric, sklearn.exceptions.FitFailedWarning is raised. This does not affect the refit step, which will always raise the error.
- max_iter – Maximum number of epochs. This is only used if the underlying estimator supports partial_fit.
- n_jobs – Number of parallel jobs. -1 means using all processors.
- n_trials – Number of trials. If None, there is no limitation on the number of trials. If timeout is also set to None, the study continues to create trials until it receives a termination signal such as Ctrl+C or SIGTERM. This trades off runtime vs quality of the solution.
- random_state – Seed of the pseudo random number generator. If int, this is the seed used by the random number generator. If a numpy.random.RandomState object, this is the random number generator. If None, the global random state from numpy.random is used.
- refit – If True, refit the estimator with the best found hyperparameters. The refitted estimator is made available at the best_estimator_ attribute and permits using predict directly.
- return_train_score – If True, training scores will be included. Computing training scores is used to get insights on how different hyperparameter settings impact the overfitting/underfitting trade-off. However, computing training scores can be computationally expensive and is not strictly required to select the hyperparameters that yield the best generalization performance.
- scoring – String or callable to evaluate the predictions on the test data. If None, score on the estimator is used.
- study – Study corresponding to the optimization task. If None, a new study is created.
- subsample – Proportion of samples that are used during hyperparameter search.
  - If int, then draw subsample samples.
  - If float, then draw subsample * X.shape[0] samples.
- timeout – Time limit in seconds for the search of appropriate models. If None, the study is executed without time limitation. If n_trials is also set to None, the study continues to create trials until it receives a termination signal such as Ctrl+C or SIGTERM. This trades off runtime vs quality of the solution.
- verbose – Verbosity level. The higher, the more messages.
best_estimator_
Estimator that was chosen by the search. This is present only if refit is set to True.

n_splits_
Number of cross-validation splits.

sample_indices_
Indices of samples that are used during hyperparameter search.

scorer_
Scorer function.

study_
Actual study.
Examples

>>> import optuna
>>> from sklearn.datasets import load_iris
>>> from sklearn.svm import SVC
>>> clf = SVC(gamma='auto')
>>> param_distributions = {
...     'C': optuna.distributions.LogUniformDistribution(1e-10, 1e+10)
... }
>>> optuna_search = optuna.integration.OptunaSearchCV(
...     clf,
...     param_distributions
... )
>>> X, y = load_iris(return_X_y=True)
>>> optuna_search.fit(X, y)  # doctest: +ELLIPSIS
OptunaSearchCV(...)
>>> y_pred = optuna_search.predict(X)
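A second sketch (my own example, not from the reference) showing enable_pruning and subsample; it assumes an estimator that implements partial_fit, such as sklearn.linear_model.SGDClassifier, as the enable_pruning description above requires.

import optuna
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier

X, y = load_digits(return_X_y=True)
clf = SGDClassifier(tol=1e-3)
param_distributions = {
    'alpha': optuna.distributions.LogUniformDistribution(1e-6, 1e-1),
}

# enable_pruning requires partial_fit on the estimator; subsample=0.5 searches
# on half of the samples, and max_iter caps the number of epochs per trial.
optuna_search = optuna.integration.OptunaSearchCV(
    clf, param_distributions,
    n_trials=20, enable_pruning=True, subsample=0.5,
    max_iter=50, random_state=0)
optuna_search.fit(X, y)
print(optuna_search.best_score_)
print(optuna_search.study_.best_params)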
best_index_
Index which corresponds to the best candidate parameter setting.

best_score_
Mean cross-validated score of the best estimator.

classes_
Class labels.

decision_function
Call decision_function on the best estimator. This is available only if the underlying estimator supports decision_function and refit is set to True.
fit(X, y=None, groups=None, **fit_params)[source]
Run fit with all sets of parameters.

Parameters:
- X – Training data.
- y – Target variable.
- groups – Group labels for the samples used while splitting the dataset into train/test set.
- **fit_params – Parameters passed to fit on the estimator.

Returns: self.
Return type: self
inverse_transform
Call inverse_transform on the best estimator. This is available only if the underlying estimator supports inverse_transform and refit is set to True.

n_trials_
Actual number of trials.

predict
Call predict on the best estimator. This is available only if the underlying estimator supports predict and refit is set to True.

predict_log_proba
Call predict_log_proba on the best estimator. This is available only if the underlying estimator supports predict_log_proba and refit is set to True.

predict_proba
Call predict_proba on the best estimator. This is available only if the underlying estimator supports predict_proba and refit is set to True.

score(X, y=None)[source]
Return the score on the given data.

Parameters:
- X – Data.
- y – Target variable.

Returns: Scalar score.
Return type: score

score_samples
Call score_samples on the best estimator. This is available only if the underlying estimator supports score_samples and refit is set to True.