OPTUNA

Optuna: A hyperparameter optimization framework

Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning. It features an imperative, define-by-run style user API. Thanks to this define-by-run API, code written with Optuna enjoys high modularity, and users can dynamically construct the search spaces for the hyperparameters.

Key Features

Optuna has modern functionalities as follows:

  • Lightweight, versatile, and platform-agnostic architecture

  • Pythonic search spaces

  • Efficient optimization algorithms

  • Easy parallelization

  • Quick visualization

Basic Concepts

We use the terms study and trial as follows:

  • Study: optimization based on an objective function

  • Trial: a single execution of the objective function

Please refer to the sample code below. The goal of a study is to find the optimal set of hyperparameter values (e.g., classifier and svm_c) through multiple trials (e.g., n_trials=100). Optuna is a framework designed to automate and accelerate such optimization studies.


import optuna
import sklearn.datasets
import sklearn.ensemble
import sklearn.metrics
import sklearn.model_selection
import sklearn.svm

# Define an objective function to be minimized.
def objective(trial):

    # Invoke suggest methods of a Trial object to generate hyperparameters.
    regressor_name = trial.suggest_categorical('classifier', ['SVR', 'RandomForest'])
    if regressor_name == 'SVR':
        svr_c = trial.suggest_float('svr_c', 1e-10, 1e10, log=True)
        regressor_obj = sklearn.svm.SVR(C=svr_c)
    else:
        rf_max_depth = trial.suggest_int('rf_max_depth', 2, 32)
        regressor_obj = sklearn.ensemble.RandomForestRegressor(max_depth=rf_max_depth)

    X, y = sklearn.datasets.load_diabetes(return_X_y=True)
    X_train, X_val, y_train, y_val = sklearn.model_selection.train_test_split(X, y, random_state=0)

    regressor_obj.fit(X_train, y_train)
    y_pred = regressor_obj.predict(X_val)

    error = sklearn.metrics.mean_squared_error(y_val, y_pred)

    return error  # An objective value linked with the Trial object.

study = optuna.create_study()  # Create a new study.
study.optimize(objective, n_trials=100)  # Invoke optimization of the objective function.

Communication

Contribution

Any contributions to Optuna are welcome! When you send a pull request, please follow the contribution guide.

License

MIT License (see LICENSE).

Reference

Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A Next-generation Hyperparameter Optimization Framework. In KDD (arXiv).

Installation

Optuna supports Python 3.6 or newer.

We recommend installing Optuna via pip:

$ pip install optuna

You can also install the development version of Optuna from the master branch of the Git repository:

$ pip install git+https://github.com/optuna/optuna.git

You can also install Optuna via conda:

$ conda install -c conda-forge optuna

Tutorial

The tutorials below cover the basic concepts and usage of Optuna and are meant to be read in order.

Other Resources:

  • Examples: More examples including how to use Optuna with popular libraries for machine learning and deep learning.

First Optimization

Quadratic Function Example

Usually, Optuna is used to optimize hyperparameters, but as an example, let us directly optimize a quadratic function in an IPython shell.

import optuna

The objective function is what will be optimized.

def objective(trial):
    x = trial.suggest_float('x', -10, 10)
    return (x - 2) ** 2

This function returns the value of \((x - 2)^2\). Our goal is to find the value of x that minimizes the output of the objective function. This is the “optimization.” During the optimization, Optuna repeatedly calls and evaluates the objective function with different values of x.

A Trial object corresponds to a single execution of the objective function and is internally instantiated upon each invocation of the function.

The suggest APIs (for example, suggest_float()) are called inside the objective function to obtain the parameters for a trial. suggest_float() selects a value uniformly within the provided range, in our example from \(-10\) to \(10\).

To start the optimization, we create a study object and pass the objective function to its optimize() method as follows.

study = optuna.create_study()
study.optimize(objective, n_trials=100)

You can get the best parameter as follows.

print(study.best_params)

Out:

{'x': 1.9768548120705323}

We can see that the x value found by Optuna is close to the optimal value of 2.

Note

When Optuna is used to search for hyperparameters in machine learning, the objective function typically returns the loss or the accuracy of the model.

Study Object

Let us clarify the terminology in Optuna as follows:

  • Trial: A single call of the objective function

  • Study: An optimization session, which is a set of trials

  • Parameter: A variable whose value is to be optimized, such as x in the above example

In Optuna, we use a study object to manage optimization. The create_study() method returns a study object, which has useful properties for analyzing the optimization outcome.

To get the best parameter:

study.best_params

Out:

{'x': 1.9768548120705323}

To get the best value:

study.best_value

Out:

0.000535699724290379

To get the best trial:

study.best_trial

Out:

FrozenTrial(number=12, value=0.000535699724290379, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 22, 739335), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 22, 743264), params={'x': 1.9768548120705323}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=12, state=TrialState.COMPLETE)

To get all trials:

study.trials

Out:

[FrozenTrial(number=0, value=26.108223413568215, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 22, 726927), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 22, 727134), params={'x': 7.109620672179904}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=0, state=TrialState.COMPLETE), FrozenTrial(number=1, value=18.72372699415082, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 22, 727496), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 22, 727716), params={'x': 6.327092210035605}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=1, state=TrialState.COMPLETE), ...]

(output truncated; the list contains one FrozenTrial per trial)
state=TrialState.COMPLETE), FrozenTrial(number=85, value=0.3840790907615754, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 7518), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 11081), params={'x': 2.6197411481913844}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=85, state=TrialState.COMPLETE), FrozenTrial(number=86, value=0.4819899637306362, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 11403), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 14855), params={'x': 1.3057450297400557}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=86, state=TrialState.COMPLETE), FrozenTrial(number=87, value=3.53356400346849, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 15197), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 18654), params={'x': 3.879777647347816}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=87, state=TrialState.COMPLETE), FrozenTrial(number=88, value=1.6345958707059454, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 18981), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 22458), params={'x': 3.2785131484290435}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=88, state=TrialState.COMPLETE), FrozenTrial(number=89, value=0.24341421286390458, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 22761), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 26207), params={'x': 2.493370259403528}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=89, state=TrialState.COMPLETE), FrozenTrial(number=90, value=1.5877008040572769, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 
26539), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 30093), params={'x': 0.739959999024921}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=90, state=TrialState.COMPLETE), FrozenTrial(number=91, value=0.0007840407526423287, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 30420), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 33829), params={'x': 1.9719992722837008}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=91, state=TrialState.COMPLETE), FrozenTrial(number=92, value=0.004876335302296941, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 34157), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 37731), params={'x': 1.9301692381375017}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=92, state=TrialState.COMPLETE), FrozenTrial(number=93, value=3.4096460779709172, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 38061), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 41612), params={'x': 0.15347730098682044}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=93, state=TrialState.COMPLETE), FrozenTrial(number=94, value=0.008049850308438236, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 41944), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 45452), params={'x': 1.910279041977706}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=94, state=TrialState.COMPLETE), FrozenTrial(number=95, value=0.28787193144168016, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 45792), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 49249), params={'x': 1.4634630195022154}, 
distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=95, state=TrialState.COMPLETE), FrozenTrial(number=96, value=0.8185657919358443, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 49578), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 53060), params={'x': 2.9047462583154706}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=96, state=TrialState.COMPLETE), FrozenTrial(number=97, value=1.164243350865885, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 53369), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 56828), params={'x': 0.9209989106280361}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=97, state=TrialState.COMPLETE), FrozenTrial(number=98, value=8.886738769324621, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 57158), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 60726), params={'x': 4.981063362178775}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=98, state=TrialState.COMPLETE), FrozenTrial(number=99, value=5.665518147496434, datetime_start=datetime.datetime(2020, 11, 4, 4, 32, 23, 61057), datetime_complete=datetime.datetime(2020, 11, 4, 4, 32, 23, 64617), params={'x': -0.3802348933448634}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=99, state=TrialState.COMPLETE)]

To get the number of trials:

len(study.trials)

Out:

100

By executing optimize() again, we can continue the optimization.

study.optimize(objective, n_trials=100)

To get the updated number of trials:

len(study.trials)

Out:

200

Total running time of the script: ( 0 minutes 0.763 seconds)

Gallery generated by Sphinx-Gallery

Advanced Configurations

Defining Parameter Spaces

Optuna supports five kinds of parameters.

def objective(trial):
    # Categorical parameter
    optimizer = trial.suggest_categorical('optimizer', ['MomentumSGD', 'Adam'])

    # Int parameter
    num_layers = trial.suggest_int('num_layers', 1, 3)

    # Uniform parameter
    dropout_rate = trial.suggest_uniform('dropout_rate', 0.0, 1.0)

    # Loguniform parameter
    learning_rate = trial.suggest_loguniform('learning_rate', 1e-5, 1e-2)

    # Discrete-uniform parameter
    drop_path_rate = trial.suggest_discrete_uniform('drop_path_rate', 0.0, 1.0, 0.1)

    ...
Branches and Loops

You can use branches or loops depending on the parameter values.

def objective(trial):
    classifier_name = trial.suggest_categorical('classifier', ['SVC', 'RandomForest'])
    if classifier_name == 'SVC':
        svc_c = trial.suggest_loguniform('svc_c', 1e-10, 1e10)
        classifier_obj = sklearn.svm.SVC(C=svc_c)
    else:
        rf_max_depth = int(trial.suggest_loguniform('rf_max_depth', 2, 32))
        classifier_obj = sklearn.ensemble.RandomForestClassifier(max_depth=rf_max_depth)

    ...
def create_model(trial):
    n_layers = trial.suggest_int('n_layers', 1, 3)

    layers = []
    for i in range(n_layers):
        n_units = int(trial.suggest_loguniform('n_units_l{}'.format(i), 4, 128))
        layers.append(L.Linear(None, n_units))
        layers.append(F.relu)
    layers.append(L.Linear(None, 10))

    return chainer.Sequential(*layers)

Please also refer to examples.

Note on the Number of Parameters

The difficulty of optimization increases roughly exponentially with the number of parameters. That is, the number of necessary trials increases exponentially as you add parameters, so it is recommended not to add unimportant parameters.
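As a rough, non-Optuna illustration of this growth: if each parameter were discretized into k candidate values, exhaustively covering d parameters would require k**d combinations:

```python
# Rough intuition only: with k candidate values per parameter, an
# exhaustive grid over d parameters contains k**d points.
def grid_size(k, d):
    return k ** d

print(grid_size(10, 1))  # 10
print(grid_size(10, 4))  # 10000
print(grid_size(10, 8))  # 100000000
```

Samplers do far better than exhaustive search, but the number of trials needed still grows quickly with the dimensionality of the search space.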

Arguments for Study.optimize

The method optimize() (and the optuna study optimize CLI command as well) has several useful options, such as timeout. For details, please refer to the API reference for optimize().

FYI: If you give neither the n_trials nor the timeout option, the optimization continues until it receives a termination signal such as Ctrl+C or SIGTERM. This is useful for cases in which it is hard to estimate the computational cost of optimizing your objective function.


Saving/Resuming Study with RDB Backend

An RDB backend enables persistent experiments (i.e., saving and resuming a study) as well as access to the history of studies. In addition, we can run multi-node optimization tasks with this feature, as described in Distributed Optimization.

In this section, let's try simple examples running in a local environment with an SQLite DB.

Note

You can also utilize other RDB backends, e.g., PostgreSQL or MySQL, by setting the storage argument to the DB's URL. Please refer to SQLAlchemy's documentation for how to set up the URL.
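For example (the hostnames and credentials below are placeholders), storage URLs follow SQLAlchemy's dialect://username:password@host:port/database convention:

```python
# Placeholder credentials and hosts; adapt them to your environment.
sqlite_url = 'sqlite:///example.db'
postgresql_url = 'postgresql://user:password@localhost:5432/optuna_db'
mysql_url = 'mysql://user:password@localhost:3306/optuna_db'

# Any of these strings can be passed as the storage argument, e.g.:
# optuna.create_study(study_name='example-study', storage=postgresql_url)
```

Note that the corresponding DB driver (e.g., psycopg2 for PostgreSQL) must be installed for SQLAlchemy to connect.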

New Study

We can create a persistent study by calling the create_study() function as follows. An SQLite file example.db is automatically initialized with a new study record.

import optuna
study_name = 'example-study'  # Unique identifier of the study.
study = optuna.create_study(study_name=study_name, storage='sqlite:///example.db')

To run a study, call the optimize() method, passing an objective function.

def objective(trial):
    x = trial.suggest_uniform('x', -10, 10)
    return (x - 2) ** 2

study.optimize(objective, n_trials=3)
Resume Study

To resume a study, instantiate a Study object, passing the study name example-study and the DB URL sqlite:///example.db.

study = optuna.create_study(study_name='example-study', storage='sqlite:///example.db', load_if_exists=True)
study.optimize(objective, n_trials=3)
Experimental History

We can access the histories of studies and trials via the Study class. For example, we can get all trials of example-study as follows:

import optuna
study = optuna.create_study(study_name='example-study', storage='sqlite:///example.db', load_if_exists=True)
df = study.trials_dataframe(attrs=('number', 'value', 'params', 'state'))

The method trials_dataframe() returns a pandas dataframe like:

print(df)

Out:

   number       value  params_x     state
0       0   25.301959 -3.030105  COMPLETE
1       1    1.406223  0.814157  COMPLETE
2       2   44.010366 -4.634031  COMPLETE
3       3   55.872181  9.474770  COMPLETE
4       4  113.039223 -8.631991  COMPLETE
5       5   57.319570  9.570969  COMPLETE

A Study object also provides properties such as trials, best_value, and best_params (see also First Optimization).

study.best_params  # Get best parameters for the objective function.
study.best_value  # Get best objective value.
study.best_trial  # Get best trial's information.
study.trials  # Get all trials' information.


Distributed Optimization

Distributed optimization requires no complicated setup; nodes/processes simply share the same study name and storage.

First, create a shared study using the optuna create-study command (or using optuna.create_study() in a Python script).

$ optuna create-study --study-name "distributed-example" --storage "sqlite:///example.db"
[I 2020-07-21 13:43:39,642] A new study created with name: distributed-example

Then, write an optimization script. Let’s assume that foo.py contains the following code.

import optuna

def objective(trial):
    x = trial.suggest_uniform('x', -10, 10)
    return (x - 2) ** 2

if __name__ == '__main__':
    study = optuna.load_study(study_name='distributed-example', storage='sqlite:///example.db')
    study.optimize(objective, n_trials=100)

Finally, run the shared study from multiple processes. For example, run Process 1 in one terminal and Process 2 in another. They get parameter suggestions based on the shared trials' history.

Process 1:

$ python foo.py
[I 2020-07-21 13:45:02,973] Trial 0 finished with value: 45.35553104173011 and parameters: {'x': 8.73465151598285}. Best is trial 0 with value: 45.35553104173011.
[I 2020-07-21 13:45:04,013] Trial 2 finished with value: 4.6002397305938905 and parameters: {'x': 4.144816945707463}. Best is trial 1 with value: 0.028194513284051464.
...

Process 2 (the same command as process 1):

$ python foo.py
[I 2020-07-21 13:45:03,748] Trial 1 finished with value: 0.028194513284051464 and parameters: {'x': 1.8320877810162361}. Best is trial 1 with value: 0.028194513284051464.
[I 2020-07-21 13:45:05,783] Trial 3 finished with value: 24.45966755098074 and parameters: {'x': 6.945671597566982}. Best is trial 1 with value: 0.028194513284051464.
...

Note

We do not recommend SQLite for large-scale distributed optimization because it may cause serious performance issues. Please consider using another database engine such as PostgreSQL or MySQL.

Note

Please avoid putting the SQLite database on NFS when running distributed optimizations. See also: https://www.sqlite.org/faq.html#q5


Command-Line Interface

Command

Description

create-study

Create a new study.

delete-study

Delete a specified study.

dashboard

Launch web dashboard (beta).

storage upgrade

Upgrade the schema of a storage.

studies

Show a list of studies.

study optimize

Start optimization of a study.

study set-user-attr

Set a user attribute to a study.

Optuna provides a command-line interface, as shown in the table above.

Let us assume you are not in an IPython shell and are writing Python script files instead. It is totally fine to write scripts like the following:

import optuna


def objective(trial):
    x = trial.suggest_uniform('x', -10, 10)
    return (x - 2) ** 2


if __name__ == '__main__':
    study = optuna.create_study()
    study.optimize(objective, n_trials=100)
    print('Best value: {} (params: {})\n'.format(study.best_value, study.best_params))

Out:

Best value: 7.533232621133377e-06 (params: {'x': 1.9972553265000854})

However, we can reduce boilerplate code by using our optuna command. Let us assume that foo.py contains only the following code.

def objective(trial):
    x = trial.suggest_uniform('x', -10, 10)
    return (x - 2) ** 2

Even so, we can invoke the optimization as follows. (Don't worry about --storage sqlite:///example.db for now; it is described in Saving/Resuming Study with RDB Backend.)

$ cat foo.py
def objective(trial):
    x = trial.suggest_uniform('x', -10, 10)
    return (x - 2) ** 2

$ STUDY_NAME=`optuna create-study --storage sqlite:///example.db`
$ optuna study optimize foo.py objective --n-trials=100 --storage sqlite:///example.db --study-name $STUDY_NAME
[I 2018-05-09 10:40:25,196] Finished a trial resulted in value: 54.353767789264026. Current best value is 54.353767789264026 with parameters: {'x': -5.372500782588228}.
[I 2018-05-09 10:40:25,197] Finished a trial resulted in value: 15.784266965526376. Current best value is 15.784266965526376 with parameters: {'x': 5.972941852774387}.
...
[I 2018-05-09 10:40:26,204] Finished a trial resulted in value: 14.704254135013741. Current best value is 2.280758099793617e-06 with parameters: {'x': 1.9984897821018828}.

Please note that foo.py contains only the definition of the objective function. By giving the script file name and the name of the objective function to the optuna study optimize command, we can invoke the optimization.


User Attributes

This feature lets you annotate experiments with user-defined attributes.

Adding User Attributes to Studies

A Study object provides a set_user_attr() method to register a pair of a key and a value as a user-defined attribute. A key is supposed to be a str, and a value can be any object serializable with json.dumps.

import sklearn.datasets
import sklearn.model_selection
import sklearn.svm

import optuna


study = optuna.create_study(storage='sqlite:///example.db')
study.set_user_attr('contributors', ['Akiba', 'Sano'])
study.set_user_attr('dataset', 'MNIST')

We can access annotated attributes with the user_attrs property.

study.user_attrs  # {'contributors': ['Akiba', 'Sano'], 'dataset': 'MNIST'}

Out:

{'contributors': ['Akiba', 'Sano'], 'dataset': 'MNIST'}

A StudySummary object, which can be retrieved by get_all_study_summaries(), also contains user-defined attributes.

study_summaries = optuna.get_all_study_summaries('sqlite:///example.db')
study_summaries[0].user_attrs  # {'contributors': ['Akiba', 'Sano'], 'dataset': 'MNIST'}

Out:

{'contributors': ['Akiba', 'Sano'], 'dataset': 'MNIST'}

See also

optuna study set-user-attr command, which sets an attribute via command line interface.

Adding User Attributes to Trials

As with Study, a Trial object provides a set_user_attr() method. Attributes are set inside an objective function.

def objective(trial):
    iris = sklearn.datasets.load_iris()
    x, y = iris.data, iris.target

    svc_c = trial.suggest_loguniform('svc_c', 1e-10, 1e10)
    clf = sklearn.svm.SVC(C=svc_c)
    accuracy = sklearn.model_selection.cross_val_score(clf, x, y).mean()

    trial.set_user_attr('accuracy', accuracy)

    return 1.0 - accuracy  # return error for minimization


study.optimize(objective, n_trials=1)

We can access annotated attributes as:

study.trials[0].user_attrs

Out:

{'accuracy': 0.9266666666666667}

Note that, in this example, the attribute is attached not to the Study but to a single Trial.


Pruning Unpromising Trials

This feature automatically stops unpromising trials at the early stages of training (a.k.a. automated early stopping). Optuna provides interfaces to concisely implement the pruning mechanism in iterative training algorithms.

Activating Pruners

To turn on the pruning feature, you need to call report() and should_prune() after each step of the iterative training. report() records intermediate objective values, and should_prune() decides whether to terminate a trial that does not meet a predefined condition.

import sklearn.datasets
import sklearn.linear_model
import sklearn.model_selection

import optuna


def objective(trial):
    iris = sklearn.datasets.load_iris()
    classes = list(set(iris.target))
    train_x, valid_x, train_y, valid_y = \
        sklearn.model_selection.train_test_split(iris.data, iris.target, test_size=0.25, random_state=0)

    alpha = trial.suggest_loguniform('alpha', 1e-5, 1e-1)
    clf = sklearn.linear_model.SGDClassifier(alpha=alpha)

    for step in range(100):
        clf.partial_fit(train_x, train_y, classes=classes)

        # Report intermediate objective value.
        intermediate_value = 1.0 - clf.score(valid_x, valid_y)
        trial.report(intermediate_value, step)

        # Handle pruning based on the intermediate value.
        if trial.should_prune():
            raise optuna.TrialPruned()

    return 1.0 - clf.score(valid_x, valid_y)

Set up the median stopping rule as the pruning condition.

study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)

Executing the script above:

$ python prune.py
[I 2020-06-12 16:54:23,876] Trial 0 finished with value: 0.3157894736842105 and parameters: {'alpha': 0.00181467547181131}. Best is trial 0 with value: 0.3157894736842105.
[I 2020-06-12 16:54:23,981] Trial 1 finished with value: 0.07894736842105265 and parameters: {'alpha': 0.015378744419287613}. Best is trial 1 with value: 0.07894736842105265.
[I 2020-06-12 16:54:24,083] Trial 2 finished with value: 0.21052631578947367 and parameters: {'alpha': 0.04089428832878595}. Best is trial 1 with value: 0.07894736842105265.
[I 2020-06-12 16:54:24,185] Trial 3 finished with value: 0.052631578947368474 and parameters: {'alpha': 0.004018735937374473}. Best is trial 3 with value: 0.052631578947368474.
[I 2020-06-12 16:54:24,303] Trial 4 finished with value: 0.07894736842105265 and parameters: {'alpha': 2.805688697062864e-05}. Best is trial 3 with value: 0.052631578947368474.
[I 2020-06-12 16:54:24,315] Trial 5 pruned.
[I 2020-06-12 16:54:24,355] Trial 6 pruned.
[I 2020-06-12 16:54:24,511] Trial 7 finished with value: 0.052631578947368474 and parameters: {'alpha': 2.243775785299103e-05}. Best is trial 3 with value: 0.052631578947368474.
[I 2020-06-12 16:54:24,625] Trial 8 finished with value: 0.1842105263157895 and parameters: {'alpha': 0.007021209286214553}. Best is trial 3 with value: 0.052631578947368474.
[I 2020-06-12 16:54:24,629] Trial 9 pruned.
...

Messages such as Trial 5 pruned. in the log mean that several trials were stopped before they finished all of their iterations.
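Conceptually, the median stopping rule used by MedianPruner can be sketched as follows (pure Python, illustrative only; the actual pruner also handles startup trials, warm-up steps, and maximization):

```python
import statistics

def should_prune_median(intermediate_value, other_values_at_step):
    # Prune when the current trial's intermediate value is worse (here:
    # higher, since we minimize) than the median of the values that
    # earlier trials reported at the same step.
    if not other_values_at_step:
        return False
    return intermediate_value > statistics.median(other_values_at_step)

# Earlier trials reported errors [0.2, 0.4, 0.6] at this step (median 0.4).
print(should_prune_median(0.5, [0.2, 0.4, 0.6]))  # True
print(should_prune_median(0.3, [0.2, 0.4, 0.6]))  # False
```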

Integration Modules for Pruning

To implement the pruning mechanism in a much simpler form, Optuna provides integration modules for the following libraries.

For the complete list of Optuna’s integration modules, see integration.

For example, XGBoostPruningCallback introduces pruning without directly changing the logic of the training iteration. (See also the example for the entire script.)

pruning_callback = optuna.integration.XGBoostPruningCallback(trial, 'validation-error')
bst = xgb.train(param, dtrain, evals=[(dvalid, 'validation')], callbacks=[pruning_callback])


User-Defined Sampler

Thanks to user-defined samplers, you can:

  • experiment with your own sampling algorithms,

  • implement task-specific algorithms to refine the optimization performance, or

  • wrap other optimization libraries to integrate them into Optuna pipelines (e.g., SkoptSampler).

This section describes the internal behavior of sampler classes and shows an example of implementing a user-defined sampler.

Overview of Sampler

A sampler has the responsibility to determine the parameter values to be evaluated in a trial. When a suggest API (e.g., suggest_uniform()) is called inside an objective function, the corresponding distribution object (e.g., UniformDistribution) is created internally. A sampler samples a parameter value from the distribution. The sampled value is returned to the caller of the suggest API and evaluated in the objective function.

To create a new sampler, you need to define a class that inherits from BaseSampler. The base class has three abstract methods: infer_relative_search_space(), sample_relative(), and sample_independent().

As the method names imply, Optuna supports two types of sampling: one is relative sampling that can consider the correlation of the parameters in a trial, and the other is independent sampling that samples each parameter independently.

At the beginning of a trial, infer_relative_search_space() is called to provide the relative search space for the trial. Then, sample_relative() is invoked to sample relative parameters from the search space. During the execution of the objective function, sample_independent() is used to sample parameters that don’t belong to the relative search space.
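This flow can be mimicked with a small toy sketch (not Optuna code; the method names mirror BaseSampler's, and the intersection of parameters seen in all past trials plays the role of the relative search space):

```python
import random

# Toy (non-Optuna) sketch of the two sampling phases; illustrative only.
class ToySampler:
    def infer_relative_search_space(self, history):
        # Parameters that appear in every past trial form the relative
        # search space (cf. intersection_search_space()).
        if not history:
            return set()
        common = set(history[0])
        for params in history[1:]:
            common &= set(params)
        return common

    def sample_relative(self, search_space):
        # Sample all relative parameters at once, at the start of a trial.
        return {name: random.uniform(-10, 10) for name in search_space}

    def sample_independent(self, name):
        # Fallback for parameters outside the relative search space.
        return random.uniform(-10, 10)

# Two past trials: only 'x' was suggested in both.
history = [{'x': 1.0, 'y': 2.0}, {'x': 0.5}]
sampler = ToySampler()
search_space = sampler.infer_relative_search_space(history)
print(search_space)  # {'x'}
params = sampler.sample_relative(search_space)
```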

Note

Please refer to the document of BaseSampler for further details.

An Example: Implementing SimulatedAnnealingSampler

For example, the following code defines a sampler based on Simulated Annealing (SA):

import numpy as np
import optuna


class SimulatedAnnealingSampler(optuna.samplers.BaseSampler):
    def __init__(self, temperature=100):
        self._rng = np.random.RandomState()
        self._temperature = temperature  # Current temperature.
        self._current_trial = None  # Current state.

    def sample_relative(self, study, trial, search_space):
        if search_space == {}:
            return {}

        #
        # An implementation of SA algorithm.
        #

        # Calculate transition probability.
        prev_trial = study.trials[-2]
        if self._current_trial is None or prev_trial.value <= self._current_trial.value:
            probability = 1.0
        else:
            probability = np.exp((self._current_trial.value - prev_trial.value) / self._temperature)
        self._temperature *= 0.9  # Decrease temperature.

        # Transit the current state if the previous result is accepted.
        if self._rng.uniform(0, 1) < probability:
            self._current_trial = prev_trial

        # Sample parameters from the neighborhood of the current point.
        #
        # The sampled parameters will be used during the next execution of
        # the objective function passed to the study.
        params = {}
        for param_name, param_distribution in search_space.items():
            if not isinstance(param_distribution, optuna.distributions.UniformDistribution):
                raise NotImplementedError('Only suggest_uniform() is supported')

            current_value = self._current_trial.params[param_name]
            width = (param_distribution.high - param_distribution.low) * 0.1
            neighbor_low = max(current_value - width, param_distribution.low)
            neighbor_high = min(current_value + width, param_distribution.high)
            params[param_name] = self._rng.uniform(neighbor_low, neighbor_high)

        return params

    #
    # The rest is boilerplate code and unrelated to SA algorithm.
    #
    def infer_relative_search_space(self, study, trial):
        return optuna.samplers.intersection_search_space(study)

    def sample_independent(self, study, trial, param_name, param_distribution):
        independent_sampler = optuna.samplers.RandomSampler()
        return independent_sampler.sample_independent(study, trial, param_name, param_distribution)

Note

For the sake of code simplicity, the above implementation doesn't support some features (e.g., maximization). If you're interested in how to support those features, please see examples/samplers/simulated_annealing.py.

You can use SimulatedAnnealingSampler in the same way as built-in samplers as follows:

def objective(trial):
    x = trial.suggest_uniform('x', -10, 10)
    y = trial.suggest_uniform('y', -5, 5)
    return x**2 + y

sampler = SimulatedAnnealingSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)

In this optimization, the values of the x and y parameters are sampled using the SimulatedAnnealingSampler.sample_relative method.

Note

Strictly speaking, in the first trial, the SimulatedAnnealingSampler.sample_independent method is used to sample parameter values, because intersection_search_space(), which is used in SimulatedAnnealingSampler.infer_relative_search_space, cannot infer the search space when there are no completed trials.


API Reference

optuna

The optuna module is primarily used as an alias for basic Optuna functionality coded in other modules. Currently, two modules are aliased: (1) from optuna.study, functions regarding the Study lifecycle, and (2) from optuna.exceptions, the TrialPruned Exception raised when a trial is pruned.

optuna.create_study

Create a new Study.

optuna.load_study

Load the existing Study that has the specified name.

optuna.delete_study

Delete a Study object.

optuna.get_all_study_summaries

Get all history of studies stored in a specified storage.

optuna.TrialPruned

Exception for pruned trials.

optuna.cli

The cli module implements Optuna’s command-line functionality using the cliff framework.

optuna
    [--version]
    [-v | -q]
    [--log-file LOG_FILE]
    [--debug]
    [--storage STORAGE]
--version

show program’s version number and exit

-v, --verbose

Increase verbosity of output. Can be repeated.

-q, --quiet

Suppress output except warnings and errors.

--log-file <LOG_FILE>

Specify a file to log output. Disabled by default.

--debug

Show tracebacks on errors.

--storage <STORAGE>

DB URL. (e.g. sqlite:///example.db)

create-study

Create a new study.

optuna create-study
    [--study-name STUDY_NAME]
    [--direction {minimize,maximize}]
    [--skip-if-exists]
--study-name <STUDY_NAME>

A human-readable name of a study to distinguish it from others.

--direction <DIRECTION>

Set direction of optimization to a new study. Set ‘minimize’ for minimization and ‘maximize’ for maximization.

--skip-if-exists

If specified, the creation of the study is skipped without any error when the study name is duplicated.

This command is provided by the optuna plugin.

dashboard

Launch web dashboard (beta).

optuna dashboard
    [--study STUDY]
    [--study-name STUDY_NAME]
    [--out OUT]
    [--allow-websocket-origin BOKEH_ALLOW_WEBSOCKET_ORIGINS]
--study <STUDY>

This argument is deprecated. Use --study-name instead.

--study-name <STUDY_NAME>

The name of the study to show on the dashboard.

--out <OUT>, -o <OUT>

Output HTML file path. If it is not given, an HTTP server starts and the dashboard is served.

--allow-websocket-origin <BOKEH_ALLOW_WEBSOCKET_ORIGINS>

Allow websocket access from the specified host(s). Internally, it is used as the value of bokeh's --allow-websocket-origin option. Please refer to https://bokeh.pydata.org/en/latest/docs/reference/command/subcommands/serve.html for more details.

This command is provided by the optuna plugin.

delete-study

Delete a specified study.

optuna delete-study [--study-name STUDY_NAME]
--study-name <STUDY_NAME>

The name of the study to delete.

This command is provided by the optuna plugin.

storage upgrade

Upgrade the schema of a storage.

optuna storage upgrade

This command is provided by the optuna plugin.

studies

Show a list of studies.

optuna studies
    [-f {csv,json,table,value,yaml}]
    [-c COLUMN]
    [--quote {all,minimal,none,nonnumeric}]
    [--noindent]
    [--max-width <integer>]
    [--fit-width]
    [--print-empty]
    [--sort-column SORT_COLUMN]
-f <FORMATTER>, --format <FORMATTER>

The output format; defaults to table.

-c COLUMN, --column COLUMN

Specify the column(s) to include; can be repeated to show multiple columns.

--quote <QUOTE_MODE>

When to include quotes; defaults to nonnumeric.

--noindent

Whether to disable indenting the JSON output.

--max-width <integer>

Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence.

--fit-width

Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable.

--print-empty

Print empty table if there is no data to show.

--sort-column SORT_COLUMN

Specify the column(s) to sort the data by (columns specified first have priority; non-existing columns are ignored); can be repeated.

This command is provided by the optuna plugin.

study optimize

Start optimization of a study. Deprecated since version 2.0.0.

optuna study optimize
    [--n-trials N_TRIALS]
    [--timeout TIMEOUT]
    [--n-jobs N_JOBS]
    [--study STUDY]
    [--study-name STUDY_NAME]
    file
    method
--n-trials <N_TRIALS>

The number of trials. If this argument is not given, as many trials as possible are run.

--timeout <TIMEOUT>

Stop the study after the given number of seconds. If this argument is not given, as many trials as possible are run.

--n-jobs <N_JOBS>

The number of parallel jobs. If this argument is set to -1, the number is set to the CPU count.

--study <STUDY>

This argument is deprecated. Use --study-name instead.

--study-name <STUDY_NAME>

The name of the study to start optimization on.

file

Python script file where the objective function resides.

method

The method name of the objective function.

This command is provided by the optuna plugin.

study set-user-attr

Set a user attribute to a study.

optuna study set-user-attr
    [--study STUDY]
    [--study-name STUDY_NAME]
    --key KEY
    --value VALUE
--study <STUDY>

This argument is deprecated. Use --study-name instead.

--study-name <STUDY_NAME>

The name of the study to set the user attribute to.

--key <KEY>, -k <KEY>

Key of the user attribute.

--value <VALUE>, -v <VALUE>

Value to be set.

This command is provided by the optuna plugin.

optuna.distributions

The distributions module defines various classes representing probability distributions, mainly used to suggest initial hyperparameter values for an optimization trial. Distribution classes inherit from a library-internal BaseDistribution and are initialized with specific parameters, such as the low and high endpoints for a UniformDistribution.

Optuna users should not use distribution classes directly, but instead use utility functions provided by Trial such as suggest_int().

optuna.distributions.UniformDistribution

A uniform distribution in the linear domain.

optuna.distributions.LogUniformDistribution

A uniform distribution in the log domain.

optuna.distributions.DiscreteUniformDistribution

A discretized uniform distribution in the linear domain.

optuna.distributions.IntUniformDistribution

A uniform distribution on integers.

optuna.distributions.IntLogUniformDistribution

A uniform distribution on integers in the log domain.

optuna.distributions.CategoricalDistribution

A categorical distribution.

optuna.distributions.distribution_to_json

Serialize a distribution to JSON format.

optuna.distributions.json_to_distribution

Deserialize a distribution in JSON format.

optuna.distributions.check_distribution_compatibility

A function to check compatibility of two distributions.

optuna.exceptions

The exceptions module defines Optuna-specific exceptions deriving from a base OptunaError class. Of special importance for library users is the TrialPruned exception to be raised if optuna.trial.Trial.should_prune() returns True for a trial that should be pruned.

optuna.exceptions.OptunaError

Base class for Optuna specific errors.

optuna.exceptions.TrialPruned

Exception for pruned trials.

optuna.exceptions.CLIUsageError

Exception for CLI.

optuna.exceptions.StorageInternalError

Exception for storage operation.

optuna.exceptions.DuplicatedStudyError

Exception for a duplicated study name.

optuna.importance

The importance module provides functionality for evaluating hyperparameter importances based on completed trials in a given study. The utility function get_param_importances() takes a Study and optional evaluator as two of its inputs. The evaluator must derive from BaseImportanceEvaluator, and is initialized as a FanovaImportanceEvaluator by default when not passed in. Users implementing custom evaluators should refer to either FanovaImportanceEvaluator or MeanDecreaseImpurityImportanceEvaluator as a guide, paying close attention to the format of the return value from the Evaluator’s evaluate() function.

optuna.importance.get_param_importances

Evaluate parameter importances based on completed trials in the given study.

optuna.importance.FanovaImportanceEvaluator

fANOVA importance evaluator.

optuna.importance.MeanDecreaseImpurityImportanceEvaluator

Mean Decrease Impurity (MDI) parameter importance evaluator.

optuna.integration

The integration module contains classes used to integrate Optuna with external machine learning frameworks.

For most of the ML frameworks supported by Optuna, the corresponding Optuna integration class serves only to implement a callback object and functions, compliant with the framework’s specific callback API, to be called with each intermediate step in the model training. The functionality implemented in these callbacks across the different ML frameworks includes:

  1. Reporting intermediate model scores back to the Optuna trial using optuna.trial.report(),

  2. According to the results of optuna.trial.Trial.should_prune(), pruning the current model by raising optuna.TrialPruned(), and

  3. Reporting intermediate Optuna data such as the current trial number back to the framework, as done in MLflowCallback.

For scikit-learn, an integrated OptunaSearchCV estimator is available that combines scikit-learn BaseEstimator functionality with access to a class-level Study object.

AllenNLP

optuna.integration.AllenNLPExecutor

AllenNLP extension to use optuna with Jsonnet config file.

optuna.integration.allennlp.dump_best_config

Save JSON config file after updating with parameters from the best trial in the study.

optuna.integration.AllenNLPPruningCallback

AllenNLP callback to prune unpromising trials.

Catalyst

optuna.integration.CatalystPruningCallback

Catalyst callback to prune unpromising trials.

Chainer

optuna.integration.ChainerPruningExtension

Chainer extension to prune unpromising trials.

optuna.integration.ChainerMNStudy

A wrapper of Study to incorporate Optuna with ChainerMN.

fast.ai

optuna.integration.FastAIPruningCallback

FastAI callback to prune unpromising trials for fastai.

Keras

optuna.integration.KerasPruningCallback

Keras callback to prune unpromising trials.

LightGBM

optuna.integration.LightGBMPruningCallback

Callback for LightGBM to prune unpromising trials.

optuna.integration.lightgbm.train

Wrapper of LightGBM Training API to tune hyperparameters.

optuna.integration.lightgbm.LightGBMTuner

Hyperparameter tuner for LightGBM.

optuna.integration.lightgbm.LightGBMTunerCV

Hyperparameter tuner for LightGBM with cross-validation.

MLflow

optuna.integration.MLflowCallback

Callback to track Optuna trials with MLflow.

MXNet

optuna.integration.MXNetPruningCallback

MXNet callback to prune unpromising trials.

pycma

optuna.integration.PyCmaSampler

A Sampler using cma library as the backend.

optuna.integration.CmaEsSampler

Wrapper class of PyCmaSampler for backward compatibility.

PyTorch

optuna.integration.PyTorchIgnitePruningHandler

PyTorch Ignite handler to prune unpromising trials.

optuna.integration.PyTorchLightningPruningCallback

PyTorch Lightning callback to prune unpromising trials.

scikit-learn

optuna.integration.OptunaSearchCV

Hyperparameter search with cross-validation.

scikit-optimize

optuna.integration.SkoptSampler

Sampler using Scikit-Optimize as the backend.

skorch

optuna.integration.SkorchPruningCallback

Skorch callback to prune unpromising trials.

TensorFlow

optuna.integration.TensorBoardCallback

Callback to track Optuna trials with TensorBoard.

optuna.integration.TensorFlowPruningHook

TensorFlow SessionRunHook to prune unpromising trials.

optuna.integration.TFKerasPruningCallback

tf.keras callback to prune unpromising trials.

XGBoost

optuna.integration.XGBoostPruningCallback

Callback for XGBoost to prune unpromising trials.

optuna.logging

The logging module implements logging using the Python logging package. Library users may be especially interested in setting verbosity levels using set_verbosity() to one of optuna.logging.CRITICAL (aka optuna.logging.FATAL), optuna.logging.ERROR, optuna.logging.WARNING (aka optuna.logging.WARN), optuna.logging.INFO, or optuna.logging.DEBUG.

optuna.logging.get_verbosity

Return the current level for the Optuna’s root logger.

optuna.logging.set_verbosity

Set the level for the Optuna’s root logger.

optuna.logging.disable_default_handler

Disable the default handler of the Optuna’s root logger.

optuna.logging.enable_default_handler

Enable the default handler of the Optuna’s root logger.

optuna.logging.disable_propagation

Disable propagation of the library log outputs.

optuna.logging.enable_propagation

Enable propagation of the library log outputs.

optuna.multi_objective

optuna.multi_objective.samplers

optuna.multi_objective.samplers.BaseMultiObjectiveSampler

Base class for multi-objective samplers.

optuna.multi_objective.samplers.NSGAIIMultiObjectiveSampler

Multi-objective sampler using the NSGA-II algorithm.

optuna.multi_objective.samplers.RandomMultiObjectiveSampler

Multi-objective sampler using random sampling.

optuna.multi_objective.samplers.MOTPEMultiObjectiveSampler

Multi-objective sampler using the MOTPE algorithm.

optuna.multi_objective.study

optuna.multi_objective.study.MultiObjectiveStudy

A study corresponds to a multi-objective optimization task, i.e., a set of trials.

optuna.multi_objective.study.create_study

Create a new MultiObjectiveStudy.

optuna.multi_objective.study.load_study

Load the existing MultiObjectiveStudy that has the specified name.

optuna.multi_objective.trial

optuna.multi_objective.trial.MultiObjectiveTrial

A trial is a process of evaluating an objective function.

optuna.multi_objective.trial.FrozenMultiObjectiveTrial

Status and results of a MultiObjectiveTrial.

optuna.multi_objective.visualization

Note

optuna.multi_objective.visualization module uses plotly to create figures, but JupyterLab cannot render them by default. Please follow this installation guide to show figures in JupyterLab.

optuna.multi_objective.visualization.plot_pareto_front

Plot the pareto front of a study.

optuna.pruners

The pruners module defines a BasePruner class characterized by an abstract prune() method, which, for a given trial and its associated study, returns a boolean value representing whether the trial should be pruned. This determination is made based on stored intermediate values of the objective function, as previously reported for the trial using optuna.trial.Trial.report(). The remaining classes in this module represent child classes, inheriting from BasePruner, which implement different pruning strategies.

optuna.pruners.BasePruner

Base class for pruners.

optuna.pruners.MedianPruner

Pruner using the median stopping rule.

optuna.pruners.NopPruner

Pruner which never prunes trials.

optuna.pruners.PercentilePruner

Pruner to keep the specified percentile of the trials.

optuna.pruners.SuccessiveHalvingPruner

Pruner using Asynchronous Successive Halving Algorithm.

optuna.pruners.HyperbandPruner

Pruner using Hyperband.

optuna.pruners.ThresholdPruner

Pruner to detect outlying metrics of the trials.

optuna.samplers

The samplers module defines a base class for parameter sampling as described extensively in BaseSampler. The remaining classes in this module represent child classes, deriving from BaseSampler, which implement different sampling strategies.

optuna.samplers.BaseSampler

Base class for samplers.

optuna.samplers.GridSampler

Sampler using grid search.

optuna.samplers.RandomSampler

Sampler using random sampling.

optuna.samplers.TPESampler

Sampler using TPE (Tree-structured Parzen Estimator) algorithm.

optuna.samplers.CmaEsSampler

A Sampler using CMA-ES algorithm.

optuna.samplers.IntersectionSearchSpace

A class to calculate the intersection search space of a BaseStudy.

optuna.samplers.intersection_search_space

Return the intersection search space of the BaseStudy.

optuna.storages

The storages module defines a BaseStorage class which abstracts a backend database and provides library-internal interfaces to read/write histories of studies and trials. Library users who wish to use storage solutions other than the default in-memory storage should use one of the child classes of BaseStorage documented below.

optuna.storages.RDBStorage

Storage class for RDB backend.

optuna.storages.RedisStorage

Storage class for Redis backend.

optuna.structs

This module is deprecated, with former functionality moved to optuna.trial and optuna.study.

class optuna.structs.TrialState[source]

State of a Trial.

RUNNING

The Trial is running.

COMPLETE

The Trial has been finished without any error.

PRUNED

The Trial has been pruned with TrialPruned.

FAIL

The Trial has failed due to an uncaught error.

Deprecated since version 1.4.0: This class is deprecated. Please use TrialState instead.

class optuna.structs.StudyDirection[source]

Direction of a Study.

NOT_SET

Direction has not been set.

MINIMIZE

Study minimizes the objective function.

MAXIMIZE

Study maximizes the objective function.

Deprecated since version 1.4.0: This class is deprecated. Please use StudyDirection instead.

class optuna.structs.FrozenTrial(number: int, state: optuna.trial._state.TrialState, value: Optional[float], datetime_start: Optional[datetime.datetime], datetime_complete: Optional[datetime.datetime], params: Dict[str, Any], distributions: Dict[str, optuna.distributions.BaseDistribution], user_attrs: Dict[str, Any], system_attrs: Dict[str, Any], intermediate_values: Dict[int, float], trial_id: int)[source]

Warning

Deprecated in v1.4.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v3.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v1.4.0.

This class was moved to trial. Please use FrozenTrial instead.

property distributions

Dictionary that contains the distributions of params.

property duration

Return the elapsed time taken to complete the trial.

Returns

The duration.

property last_step

Return the maximum step of intermediate_values in the trial.

Returns

The maximum step of intermediates.

report(value: float, step: int) → None[source]

Interface of report function.

Since FrozenTrial is not pruned, this report function does nothing.

See also

Please refer to should_prune().

Parameters
  • value – A value returned from the objective function.

  • step – Step of the trial (e.g., Epoch of neural network training). Note that pruners assume that step starts at zero. For example, MedianPruner simply checks if step is less than n_warmup_steps as the warmup mechanism.

should_prune() → bool[source]

Suggest whether the trial should be pruned or not.

The suggestion is always False regardless of a pruning algorithm.

Note

FrozenTrial only samples one combination of parameters.

Returns

False.

class optuna.structs.StudySummary(study_name: str, direction: optuna._study_direction.StudyDirection, best_trial: Optional[optuna.trial._frozen.FrozenTrial], user_attrs: Dict[str, Any], system_attrs: Dict[str, Any], n_trials: int, datetime_start: Optional[datetime.datetime], study_id: int)[source]

Basic attributes and aggregated results of a Study.

See also optuna.study.get_all_study_summaries().

study_name

Name of the Study.

direction

StudyDirection of the Study.

best_trial

FrozenTrial with best objective value in the Study.

user_attrs

Dictionary that contains the attributes of the Study set with optuna.study.Study.set_user_attr().

system_attrs

Dictionary that contains the attributes of the Study internally set by Optuna.

n_trials

The number of trials run in the Study.

datetime_start

Datetime when the Study started.

Warning

Deprecated in v1.4.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v3.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v1.4.0.

This class was moved to study. Please use StudySummary instead.

optuna.study

The study module implements the Study object and related functions. A public constructor is available for the Study class, but direct use of this constructor is not recommended. Instead, library users should create and load a Study using create_study() and load_study() respectively.

optuna.study.Study

A study corresponds to an optimization task, i.e., a set of trials.

optuna.study.create_study

Create a new Study.

optuna.study.load_study

Load the existing Study that has the specified name.

optuna.study.delete_study

Delete a Study object.

optuna.study.get_all_study_summaries

Get all history of studies stored in a specified storage.

optuna.study.StudyDirection

Direction of a Study.

optuna.study.StudySummary

Basic attributes and aggregated results of a Study.

optuna.trial

The trial module contains Trial related classes and functions.

A Trial instance represents a process of evaluating an objective function. This instance is passed to an objective function and provides interfaces to get parameter suggestion, manage the trial’s state, and set/get user-defined attributes of the trial, so that Optuna users can define a custom objective function through the interfaces. Basically, Optuna users only use it in their custom objective functions.

optuna.trial.Trial

A trial is a process of evaluating an objective function.

optuna.trial.FixedTrial

A trial class which suggests a fixed value for each parameter.

optuna.trial.FrozenTrial

Status and results of a Trial.

optuna.trial.TrialState

State of a Trial.

optuna.trial.create_trial

Create a new FrozenTrial.

optuna.visualization

The visualization module provides utility functions for plotting the optimization process using plotly and matplotlib. Plotting functions generally take a Study object and optional parameters passed as a list to a params argument.

Note

In the optuna.visualization module, the following functions use plotly to create figures, but JupyterLab cannot render them by default. Please follow this installation guide to show figures in JupyterLab.

optuna.visualization.plot_contour

Plot the parameter relationship as contour plot in a study.

optuna.visualization.plot_edf

Plot the objective value EDF (empirical distribution function) of a study.

optuna.visualization.plot_intermediate_values

Plot intermediate values of all trials in a study.

optuna.visualization.plot_optimization_history

Plot optimization history of all trials in a study.

optuna.visualization.plot_parallel_coordinate

Plot the high-dimensional parameter relationships in a study.

optuna.visualization.plot_param_importances

Plot hyperparameter importances.

optuna.visualization.plot_slice

Plot the parameter relationship as slice plot in a study.

optuna.visualization.is_available

Returns whether visualization with plotly is available or not.

Note

The following optuna.visualization.matplotlib module uses Matplotlib as a backend.

optuna.visualization.matplotlib

Note

The following functions use Matplotlib as a backend.

optuna.visualization.matplotlib.plot_edf

Plot the objective value EDF (empirical distribution function) of a study with Matplotlib.

optuna.visualization.matplotlib.plot_intermediate_values

Plot intermediate values of all trials in a study with Matplotlib.

optuna.visualization.matplotlib.plot_optimization_history

Plot optimization history of all trials in a study with Matplotlib.

optuna.visualization.matplotlib.plot_parallel_coordinate

Plot the high-dimensional parameter relationships in a study with Matplotlib.

optuna.visualization.matplotlib.is_available

Returns whether visualization with Matplotlib is available or not.

FAQ

Can I use Optuna with X? (where X is your favorite ML library)

Optuna is compatible with most ML libraries, and it's easy to use Optuna with them. Please refer to examples.

How to define objective functions that have their own arguments?

There are two ways to achieve this.

First, callable classes can be used for that purpose as follows:

import optuna

class Objective(object):
    def __init__(self, min_x, max_x):
        # Hold this implementation specific arguments as the fields of the class.
        self.min_x = min_x
        self.max_x = max_x

    def __call__(self, trial):
        # Calculate an objective value by using the extra arguments.
        x = trial.suggest_uniform('x', self.min_x, self.max_x)
        return (x - 2) ** 2

# Execute an optimization by using an `Objective` instance.
study = optuna.create_study()
study.optimize(Objective(-100, 100), n_trials=100)

Second, you can use lambda or functools.partial for creating functions (closures) that hold extra arguments. Below is an example that uses lambda:

import optuna

# Objective function that takes three arguments.
def objective(trial, min_x, max_x):
    x = trial.suggest_uniform('x', min_x, max_x)
    return (x - 2) ** 2

# Extra arguments.
min_x = -100
max_x = 100

# Execute an optimization by using the above objective function wrapped by `lambda`.
study = optuna.create_study()
study.optimize(lambda trial: objective(trial, min_x, max_x), n_trials=100)

Please also refer to the sklearn_additional_args.py example, which reuses the dataset instead of loading it in each trial execution.

Can I use Optuna without remote RDB servers?

Yes, it’s possible.

In the simplest form, Optuna works with in-memory storage:

study = optuna.create_study()
study.optimize(objective)

If you want to save and resume studies, it’s handy to use SQLite as the local storage:

study = optuna.create_study(study_name='foo_study', storage='sqlite:///example.db')
study.optimize(objective)  # The state of `study` will be persisted to the local SQLite file.

Please see Saving/Resuming Study with RDB Backend for more details.

How can I save and resume studies?

There are two ways of persisting studies, depending on whether you are using in-memory storage (default) or remote databases (RDB). In-memory studies can be saved and loaded like usual Python objects using pickle or joblib. For example, using joblib:

import joblib
import optuna

study = optuna.create_study()
joblib.dump(study, 'study.pkl')

And to resume the study:

study = joblib.load('study.pkl')
print('Best trial until now:')
print(' Value: ', study.best_trial.value)
print(' Params: ')
for key, value in study.best_trial.params.items():
    print(f'    {key}: {value}')

If you are using RDBs, see Saving/Resuming Study with RDB Backend for more details.

How to suppress log messages of Optuna?

By default, Optuna shows log messages at the optuna.logging.INFO level. You can change logging levels by using optuna.logging.set_verbosity().

For instance, you can stop showing each trial result as follows:

optuna.logging.set_verbosity(optuna.logging.WARNING)

study = optuna.create_study()
study.optimize(objective)
# Logs like '[I 2020-07-21 13:41:45,627] Trial 0 finished with value:...' are disabled.

Please refer to optuna.logging for further details.

How to save machine learning models trained in objective functions?

Optuna saves hyperparameter values with their corresponding objective value to storage, but it discards intermediate objects such as machine learning models and neural network weights. To save models or weights, please use the features of the machine learning library you used.

We recommend saving optuna.trial.Trial.number with a model in order to identify its corresponding trial. For example, you can save SVM models trained in the objective function as follows:

def objective(trial):
    svc_c = trial.suggest_loguniform('svc_c', 1e-10, 1e10)
    clf = sklearn.svm.SVC(C=svc_c)
    clf.fit(X_train, y_train)

    # Save a trained model to a file.
    with open('{}.pickle'.format(trial.number), 'wb') as fout:
        pickle.dump(clf, fout)
    return 1.0 - accuracy_score(y_valid, clf.predict(X_valid))


study = optuna.create_study()
study.optimize(objective, n_trials=100)

# Load the best model.
with open('{}.pickle'.format(study.best_trial.number), 'rb') as fin:
    best_clf = pickle.load(fin)
print(accuracy_score(y_valid, best_clf.predict(X_valid)))

How can I obtain reproducible optimization results?

To make the parameters suggested by Optuna reproducible, you can specify a fixed random seed via the seed argument of RandomSampler or TPESampler as follows:

sampler = TPESampler(seed=10)  # Make the sampler behave in a deterministic way.
study = optuna.create_study(sampler=sampler)
study.optimize(objective)

However, there are two caveats.

First, when optimizing a study in distributed or parallel mode, there is inherent non-determinism, so it is very difficult to reproduce the same results under such conditions. We recommend executing optimization of a study sequentially if you would like to reproduce the result.

Second, if your objective function behaves in a non-deterministic way (i.e., it does not return the same value even if the same parameters are suggested), you cannot reproduce the optimization. To deal with this problem, please set an option (e.g., a random seed) to make the behavior deterministic if your optimization target (e.g., an ML library) provides one.

How are exceptions from trials handled?

Trials that raise exceptions without catching them will be treated as failures, i.e. with the FAIL status.

By default, all exceptions except TrialPruned raised in objective functions are propagated to the caller of optimize(). In other words, studies are aborted when such exceptions are raised. It might be desirable to continue a study with the remaining trials. To do so, you can specify in optimize() which exception types to catch using the catch argument. Exceptions of these types are caught inside the study and will not propagate further.

You can find the failed trials in log messages.

[W 2018-12-07 16:38:36,889] Setting status of trial#0 as TrialState.FAIL because of \
the following error: ValueError('A sample error in objective.')

You can also find the failed trials by checking the trial states as follows:

study.trials_dataframe()

number  state                value  params  system_attrs
------  -------------------  -----  ------  ------------
0       TrialState.FAIL             0       Setting status of trial#0 as TrialState.FAIL because of the following error: ValueError('A test error in objective.')
1       TrialState.COMPLETE  1269   1

See also

The catch argument in optimize().

How are NaNs returned by trials handled?

Trials that return NaN (float('nan')) are treated as failures, but they will not abort studies.

Trials which return NaN are shown as follows:

[W 2018-12-07 16:41:59,000] Setting status of trial#2 as TrialState.FAIL because the \
objective function returned nan.

What happens when I dynamically alter a search space?

Since parameter search spaces are specified in each call to the suggestion API, e.g. suggest_uniform() and suggest_int(), it is possible, within a single study, to alter the range by sampling parameters from different search spaces in different trials. The behavior when the search space is altered is defined by each sampler individually.

Note

See https://github.com/optuna/optuna/issues/822 for a discussion about the TPE sampler.

How can I use two GPUs for evaluating two trials simultaneously?

If your optimization target supports GPU (CUDA) acceleration and you want to specify which GPU is used, the easiest way is to set the CUDA_VISIBLE_DEVICES environment variable:

# On a terminal.
#
# Specify to use the first GPU, and run an optimization.
$ export CUDA_VISIBLE_DEVICES=0
$ optuna study optimize foo.py objective --study-name foo --storage sqlite:///example.db

# On another terminal.
#
# Specify to use the second GPU, and run another optimization.
$ export CUDA_VISIBLE_DEVICES=1
$ optuna study optimize bar.py objective --study-name bar --storage sqlite:///example.db

Please refer to CUDA C Programming Guide for further details.

How can I test my objective functions?

When you test objective functions, you may prefer fixed parameter values to sampled ones. In that case, you can use FixedTrial, which suggests fixed parameter values based on a given dictionary of parameters. For instance, you can input arbitrary values of \(x\) and \(y\) to the objective function \(x + y\) as follows:

def objective(trial):
    x = trial.suggest_uniform('x', -1.0, 1.0)
    y = trial.suggest_int('y', -5, 5)
    return x + y

objective(FixedTrial({'x': 1.0, 'y': -1}))  # 0.0
objective(FixedTrial({'x': -1.0, 'y': -4}))  # -5.0

Using FixedTrial, you can write unit tests as follows:

# A test function of pytest
def test_objective():
    assert 1.0 == objective(FixedTrial({'x': 1.0, 'y': 0}))
    assert -1.0 == objective(FixedTrial({'x': 0.0, 'y': -1}))
    assert 0.0 == objective(FixedTrial({'x': -1.0, 'y': 1}))

How do I avoid running out of memory (OOM) when optimizing studies?

If the memory footprint increases as you run more trials, try periodically running the garbage collector. Set gc_after_trial to True when calling optimize(), or call gc.collect() inside a callback.

def objective(trial):
    x = trial.suggest_uniform('x', -1.0, 1.0)
    y = trial.suggest_int('y', -5, 5)
    return x + y

study = optuna.create_study()
study.optimize(objective, n_trials=10, gc_after_trial=True)

# `gc_after_trial=True` is more or less identical to the following.
study.optimize(objective, n_trials=10, callbacks=[lambda study, trial: gc.collect()])

There is a performance trade-off for running the garbage collector, which could be non-negligible depending on how fast your objective function otherwise is. Therefore, gc_after_trial is False by default. Note that the above examples are similar to running the garbage collector inside the objective function, except for the fact that gc.collect() is called even when errors, including TrialPruned, are raised.

Note

ChainerMNStudy does not currently provide gc_after_trial nor the callbacks argument for optimize(). When using this class, you will have to call the garbage collector inside the objective function.
