Integration

class optuna.integration.ChainerPruningExtension(trial, observation_key, pruner_trigger)[source]

Chainer extension to prune unpromising trials.

Example

Add a pruning extension that observes validation losses to a Chainer Trainer.

trainer.extend(
    ChainerPruningExtension(trial, 'validation/main/loss', (1, 'epoch')))
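
A fuller, self-contained sketch of how the extension sits inside an objective function; the toy data, network, and 10-epoch budget are illustrative only (chainer.Sequential requires Chainer v5 or later).

import chainer
import chainer.functions as F
import chainer.links as L
import numpy as np
from chainer import training
from chainer.training import extensions
from optuna.integration import ChainerPruningExtension

def objective(trial):
    # Toy two-class data; real code would use its own dataset.
    X = np.random.rand(200, 8).astype(np.float32)
    y = (X.sum(axis=1) > 4).astype(np.int32)
    train_iter = chainer.iterators.SerialIterator(
        chainer.datasets.TupleDataset(X[:160], y[:160]), batch_size=16)
    test_iter = chainer.iterators.SerialIterator(
        chainer.datasets.TupleDataset(X[160:], y[160:]),
        batch_size=16, repeat=False, shuffle=False)

    model = L.Classifier(chainer.Sequential(
        L.Linear(None, trial.suggest_int('n_units', 4, 32)),
        F.relu, L.Linear(None, 2)))
    optimizer = chainer.optimizers.Adam()
    optimizer.setup(model)

    trainer = training.Trainer(
        training.StandardUpdater(train_iter, optimizer), (10, 'epoch'))
    trainer.extend(extensions.Evaluator(test_iter, model))
    # Watch 'validation/main/loss' once per epoch; the extension raises a
    # pruning exception to stop the trial early when it looks unpromising.
    trainer.extend(ChainerPruningExtension(
        trial, 'validation/main/loss', (1, 'epoch')))
    trainer.run()
    # Evaluate once more to obtain the value reported back to Optuna.
    return float(extensions.Evaluator(test_iter, model)()['main/loss'])
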
Parameters:
  • trial – A Trial corresponding to the current evaluation of the objective function.
  • observation_key – An evaluation metric for pruning, e.g., main/loss and validation/main/accuracy. Please refer to the chainer.Reporter reference for further details.
  • pruner_trigger

    A trigger to execute pruning. pruner_trigger is an instance of IntervalTrigger or ManualScheduleTrigger. An IntervalTrigger can be specified by a tuple of the interval length and its unit, e.g., (1, 'epoch').

class optuna.integration.ChainerMNStudy(study, comm)[source]

A wrapper of Study to incorporate Optuna with ChainerMN.

See also

ChainerMNStudy provides the same interface as Study. Please refer to optuna.study.Study for further details.

Example

Optimize an objective function that trains a neural network written with ChainerMN.

comm = chainermn.create_communicator('naive')
study = optuna.load_study(study_name, storage_url)
chainermn_study = optuna.integration.ChainerMNStudy(study, comm)
chainermn_study.optimize(objective, n_trials=25)
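
Note that, unlike optuna.study.Study.optimize(), the objective function here takes the communicator as its second argument; a minimal sketch (the quadratic is illustrative):

def objective(trial, comm):
    # Suggested values are shared over the communicator, so every worker
    # evaluates the same hyperparameters.
    x = trial.suggest_uniform('x', -10, 10)
    return (x - 2) ** 2
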
Parameters:
  • study – A Study object.
  • comm – A ChainerMN communicator.

optimize(func, n_trials=None, timeout=None, catch=(Exception, ))[source]

Optimize an objective function.

This method provides the same interface as optuna.study.Study.optimize() except for the absence of the n_jobs argument.

class optuna.integration.CmaEsSampler(x0=None, sigma0=None, cma_stds=None, seed=None, cma_opts=None, n_startup_trials=1, independent_sampler=None, warn_independent_sampling=True)[source]

A sampler using the cma library as the backend.

Example

Optimize a simple quadratic function by using CmaEsSampler.

import optuna

def objective(trial):
    x = trial.suggest_uniform('x', -1, 1)
    y = trial.suggest_int('y', -1, 1)
    return x**2 + y

sampler = optuna.integration.CmaEsSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)
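
The starting point and initial step size can also be given explicitly through x0 and sigma0 (described below); a short sketch for the search space above, with illustrative values:

sampler = optuna.integration.CmaEsSampler(
    x0={'x': 0.0, 'y': 0},  # start the search at the center of the space
    sigma0=0.3,             # initial standard deviation of CMA-ES
)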

Note that parallel execution of trials may affect the optimization performance of CMA-ES, especially if the number of trials running in parallel exceeds the population size.

Parameters:
  • x0 – A dictionary of initial parameter values for CMA-ES. By default, the mean of low and high for each distribution is used. Please refer to cma.CMAEvolutionStrategy for further details of x0.
  • sigma0

    Initial standard deviation of CMA-ES. By default, sigma0 is set to min_range / 6, where min_range denotes the minimum range of the distributions in the search space. If the distribution is categorical, min_range is len(choices) - 1. Please refer to cma.CMAEvolutionStrategy for further details of sigma0.

  • cma_stds

    A dictionary of multipliers of sigma0 for each parameter. The default value is 1.0. Please refer to cma.CMAEvolutionStrategy for further details of cma_stds.

  • seed – A random seed for CMA-ES.
  • cma_opts

    Options passed to the constructor of the cma.CMAEvolutionStrategy class.

    Note that the BoundaryHandler, bounds, CMA_stds and seed arguments in cma_opts will be ignored because they are added by CmaEsSampler automatically.

  • n_startup_trials – Independent sampling is used instead of the CMA-ES algorithm until the given number of trials have finished in the same study.
  • independent_sampler

    A BaseSampler instance that is used for independent sampling. The parameters not contained in the relative search space are sampled by this sampler. The search space for CmaEsSampler is determined by product_search_space().

    If None is specified, RandomSampler is used as the default.

    See also

    optuna.samplers module provides built-in independent samplers such as RandomSampler and TPESampler.

  • warn_independent_sampling

    If this is True, a warning message is emitted when the value of a parameter is sampled by using an independent sampler.

    Note that the parameters of the first trial in a study are always sampled via an independent sampler, so no warning messages are emitted in this case.

class optuna.integration.KerasPruningCallback(trial, monitor)[source]

Keras callback to prune unpromising trials.

Example

Add a pruning callback which observes validation losses.

model.fit(X, y, callbacks=[KerasPruningCallback(trial, 'val_loss')])
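
A fuller, self-contained sketch; the toy data, architecture, and 20-epoch budget are illustrative only (validation_split makes Keras report val_loss):

import numpy as np
from keras.layers import Dense
from keras.models import Sequential
from optuna.integration import KerasPruningCallback

def objective(trial):
    # Toy binary-classification data; real code would use its own dataset.
    X = np.random.rand(200, 8)
    y = (X.sum(axis=1) > 4).astype(int)

    model = Sequential([
        Dense(trial.suggest_int('units', 4, 64),
              activation='relu', input_dim=8),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')

    # The callback reads 'val_loss' from the logs at the end of every epoch
    # and raises a pruning exception when the trial should stop early.
    history = model.fit(X, y, validation_split=0.2, epochs=20, verbose=0,
                        callbacks=[KerasPruningCallback(trial, 'val_loss')])
    return history.history['val_loss'][-1]
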
Parameters:
  • trial – A Trial corresponding to the current evaluation of the objective function.
  • monitor – An evaluation metric for pruning, e.g., val_loss and val_acc. Please refer to the keras.Callback reference for further details.

class optuna.integration.LightGBMPruningCallback(trial, metric, valid_name='valid_0')[source]

Callback for LightGBM to prune unpromising trials.

Example

Add a pruning callback which observes validation scores to training of a LightGBM model.

param = {'objective': 'binary', 'metric': 'binary_error'}
pruning_callback = LightGBMPruningCallback(trial, 'binary_error')
gbm = lgb.train(param, dtrain, valid_sets=[dtest], callbacks=[pruning_callback])
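
A fuller, self-contained sketch; the synthetic data, num_leaves search space, and 25-round budget are illustrative only.

import lightgbm as lgb
import numpy as np
from optuna.integration import LightGBMPruningCallback

def objective(trial):
    # Synthetic binary labels; real code would load its own data.
    X = np.random.rand(400, 10)
    y = (X[:, 0] > 0.5).astype(int)
    dtrain = lgb.Dataset(X[:300], label=y[:300])
    dtest = lgb.Dataset(X[300:], label=y[300:], reference=dtrain)

    param = {
        'objective': 'binary',
        'metric': 'binary_error',
        'num_leaves': trial.suggest_int('num_leaves', 2, 64),
    }
    # The callback checks 'binary_error' on the first validation set
    # ('valid_0') after each boosting round and prunes hopeless trials.
    pruning_callback = LightGBMPruningCallback(trial, 'binary_error')
    gbm = lgb.train(param, dtrain, num_boost_round=25,
                    valid_sets=[dtest], callbacks=[pruning_callback])
    preds = (gbm.predict(X[300:]) > 0.5).astype(int)
    return float(np.mean(preds != y[300:]))
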
Parameters:
  • trial – A Trial corresponding to the current evaluation of the objective function.
  • metric – An evaluation metric for pruning, e.g., binary_error and multi_error. Please refer to the LightGBM reference for further details.
  • valid_name – The name of the target validation. Validation names are specified by the valid_names option of the train method. If omitted, valid_0, the default name of the first validation, is used. Note that this argument will be ignored if you are calling the cv method instead of the train method.

class optuna.integration.MXNetPruningCallback(trial, eval_metric)[source]

MXNet callback to prune unpromising trials.

Example

Add a pruning callback which observes validation accuracy.

model.fit(train_data=X, eval_data=Y,
          eval_end_callback=MXNetPruningCallback(trial, eval_metric='accuracy'))
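
A fuller, self-contained sketch using the Module API; the toy data, network, and 10-epoch budget are illustrative only.

import mxnet as mx
import numpy as np
from optuna.integration import MXNetPruningCallback

def objective(trial):
    # Toy two-class data; real code would use its own iterators.
    X = np.random.rand(200, 8).astype('float32')
    y = (X.sum(axis=1) > 4).astype('float32')
    train_iter = mx.io.NDArrayIter(X[:160], y[:160], batch_size=16)
    val_iter = mx.io.NDArrayIter(X[160:], y[160:], batch_size=16)

    data = mx.sym.Variable('data')
    fc1 = mx.sym.FullyConnected(
        data, num_hidden=trial.suggest_int('n_hidden', 2, 32))
    act = mx.sym.Activation(fc1, act_type='relu')
    net = mx.sym.SoftmaxOutput(
        mx.sym.FullyConnected(act, num_hidden=2), name='softmax')

    model = mx.mod.Module(symbol=net)
    # The callback reads the named metric after each evaluation pass and
    # prunes the trial when it looks unpromising.
    model.fit(train_iter, eval_data=val_iter, eval_metric='acc',
              eval_end_callback=MXNetPruningCallback(
                  trial, eval_metric='accuracy'),
              num_epoch=10)
    # Return the final validation error rate.
    return 1.0 - model.score(val_iter, 'acc')[0][1]
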
Parameters:
  • trial – A Trial corresponding to the current evaluation of the objective function.
  • eval_metric – An evaluation metric name for pruning, e.g., cross-entropy and accuracy. If using default metrics like mxnet.metrics.Accuracy, use its default metric name. For custom metrics, use the metric_name provided to the constructor. Please refer to the mxnet.metrics reference for further details.

class optuna.integration.SkoptSampler(independent_sampler=None, warn_independent_sampling=True, skopt_kwargs=None)[source]

A sampler using Scikit-Optimize as the backend.

Example

Optimize a simple quadratic function by using SkoptSampler.

import optuna

def objective(trial):
    x = trial.suggest_uniform('x', -10, 10)
    y = trial.suggest_int('y', 0, 10)
    return x**2 + y

sampler = optuna.integration.SkoptSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)
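
The behavior of the backend optimizer can be adjusted through skopt_kwargs (described below); for example, to use a random-forest surrogate instead of the default Gaussian process (the keyword values here are illustrative):

sampler = optuna.integration.SkoptSampler(
    skopt_kwargs={'base_estimator': 'RF',   # random-forest surrogate model
                  'n_initial_points': 5})   # random points before modeling
study = optuna.create_study(sampler=sampler)
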
Parameters:
  • independent_sampler

    A BaseSampler instance that is used for independent sampling. The parameters not contained in the relative search space are sampled by this sampler. The search space for SkoptSampler is determined by intersection_search_space().

    If None is specified, RandomSampler is used as the default.

    See also

    optuna.samplers module provides built-in independent samplers such as RandomSampler and TPESampler.

  • warn_independent_sampling

    If this is True, a warning message is emitted when the value of a parameter is sampled by using an independent sampler.

    Note that the parameters of the first trial in a study are always sampled via an independent sampler, so no warning messages are emitted in this case.

  • skopt_kwargs

    Keyword arguments passed to the constructor of skopt.Optimizer class.

    Note that the dimensions argument in skopt_kwargs will be ignored because it is added by SkoptSampler automatically.

class optuna.integration.TensorFlowPruningHook(trial, estimator, metric, run_every_steps, is_higher_better=None)[source]

TensorFlow SessionRunHook to prune unpromising trials.

Example

Add a pruning SessionRunHook to a TensorFlow Estimator.

pruning_hook = TensorFlowPruningHook(
    trial=trial,
    estimator=clf,
    metric="accuracy",
    is_higher_better=True,
    run_every_steps=10,
)
hooks = [pruning_hook]
tf.estimator.train_and_evaluate(
    clf,
    tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=500, hooks=hooks),
    eval_spec
)
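
The snippet above leaves clf, train_input_fn, and eval_spec undefined. For the hook to observe fresh metrics, the evaluation should run frequently; a sketch of a matching eval_spec (eval_input_fn is hypothetical and the values are illustrative):

eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,  # hypothetical; built like train_input_fn
    steps=10,                # number of evaluation batches per run
    start_delay_secs=0,
    throttle_secs=0,         # re-evaluate as soon as a new checkpoint appears
)
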
Parameters:
  • trial – A Trial corresponding to the current evaluation of the objective function.
  • estimator – The estimator that is being trained and evaluated.
  • metric – An evaluation metric for pruning, e.g., accuracy and loss.
  • run_every_steps – The interval, in steps, at which the summary file is checked.
  • is_higher_better – Please do not use this argument; it is ignored because this class refers to the StudyDirection of the study to check whether the objective should be minimized or maximized.

class optuna.integration.XGBoostPruningCallback(trial, observation_key)[source]

Callback for XGBoost to prune unpromising trials.

Example

Add a pruning callback which observes validation errors to training of an XGBoost model.

pruning_callback = XGBoostPruningCallback(trial, 'validation-error')
bst = xgb.train(param, dtrain, evals=[(dtest, 'validation')],
                callbacks=[pruning_callback])
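
A fuller, self-contained sketch; the synthetic data, 25-round budget, and max_depth search space are illustrative only. The observation key is formed as '<name given in evals>-<eval_metric>'.

import numpy as np
import xgboost as xgb
from optuna.integration import XGBoostPruningCallback

def objective(trial):
    # Synthetic binary labels; real code would load its own data.
    X = np.random.rand(400, 10)
    y = (X[:, 0] > 0.5).astype(int)
    dtrain = xgb.DMatrix(X[:300], label=y[:300])
    dtest = xgb.DMatrix(X[300:], label=y[300:])

    param = {
        'objective': 'binary:logistic',
        'eval_metric': 'error',
        'max_depth': trial.suggest_int('max_depth', 1, 9),
    }
    # 'validation-error' matches the 'validation' eval name and the 'error'
    # metric; the callback checks it after every boosting round.
    pruning_callback = XGBoostPruningCallback(trial, 'validation-error')
    bst = xgb.train(param, dtrain, num_boost_round=25,
                    evals=[(dtest, 'validation')],
                    callbacks=[pruning_callback])
    preds = (bst.predict(dtest) > 0.5).astype(int)
    return float(np.mean(preds != y[300:]))
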
Parameters:
  • trial – A Trial corresponding to the current evaluation of the objective function.
  • observation_key – An evaluation metric for pruning, e.g., validation-error and validation-merror. Please refer to eval_metric in the XGBoost reference for further details.