Optuna: A hyperparameter optimization framework¶
Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning. It features an imperative, define-by-run style user API. Thanks to our define-by-run API, the code written with Optuna enjoys high modularity, and the user of Optuna can dynamically construct the search spaces for the hyperparameters.
Key Features¶
Optuna has modern functionalities as follows:
Lightweight, versatile, and platform agnostic architecture
Handle a wide variety of tasks with a simple installation that has few requirements.
Pythonic search spaces
Define search spaces using familiar Python syntax, including conditionals and loops.
Efficient optimization algorithms
Adopt state-of-the-art algorithms for sampling hyperparameters and efficiently pruning unpromising trials.
Easy parallelization
Scale studies to tens or hundreds of workers with little or no change to the code.
Quick visualization
Inspect optimization histories with a variety of plotting functions.
Basic Concepts¶
We use the terms study and trial as follows:
Study: optimization based on an objective function
Trial: a single execution of the objective function
Please refer to the sample code below. The goal of a study is to find the optimal set of hyperparameter values (e.g., classifier and svm_c) through multiple trials (e.g., n_trials=100). Optuna is a framework designed for the automation and the acceleration of optimization studies.
import optuna
import sklearn.datasets
import sklearn.ensemble
import sklearn.metrics
import sklearn.model_selection
import sklearn.svm

# Define an objective function to be minimized.
def objective(trial):
    # Invoke suggest methods of a Trial object to generate hyperparameters.
    regressor_name = trial.suggest_categorical('classifier', ['SVR', 'RandomForest'])
    if regressor_name == 'SVR':
        svr_c = trial.suggest_loguniform('svr_c', 1e-10, 1e10)
        regressor_obj = sklearn.svm.SVR(C=svr_c)
    else:
        rf_max_depth = trial.suggest_int('rf_max_depth', 2, 32)
        regressor_obj = sklearn.ensemble.RandomForestRegressor(max_depth=rf_max_depth)

    X, y = sklearn.datasets.load_boston(return_X_y=True)
    X_train, X_val, y_train, y_val = sklearn.model_selection.train_test_split(X, y, random_state=0)

    regressor_obj.fit(X_train, y_train)
    y_pred = regressor_obj.predict(X_val)

    error = sklearn.metrics.mean_squared_error(y_val, y_pred)

    return error  # An objective value linked with the Trial object.

study = optuna.create_study()  # Create a new study.
study.optimize(objective, n_trials=100)  # Invoke optimization of the objective function.
Communication¶
GitHub Issues for bug reports, feature requests and questions.
Gitter for interactive chat with developers.
Stack Overflow for questions.
Contribution¶
Any contributions to Optuna are welcome! When you send a pull request, please follow the contribution guide.
Reference¶
Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A Next-generation Hyperparameter Optimization Framework. In KDD (arXiv).
Installation¶
Optuna supports Python 3.6 or newer.
We recommend installing Optuna via pip:
$ pip install optuna
You can also install the development version of Optuna from the master branch of the Git repository:
$ pip install git+https://github.com/optuna/optuna.git
You can also install Optuna via conda:
$ conda install -c conda-forge optuna
Tutorial¶
If you are new to Optuna or want a general introduction, we highly recommend the video below.
Key Features¶
Showcases Optuna’s Key Features.
Lightweight, versatile, and platform agnostic architecture¶
Optuna is entirely written in Python and has few dependencies. This means that you can quickly move on to real examples once you become interested in Optuna.
Quadratic Function Example¶
Usually, Optuna is used to optimize hyperparameters, but as an example, let’s optimize a simple quadratic function: \((x - 2)^2\).
First of all, import optuna.
import optuna
In Optuna, functions to be optimized are conventionally named objective.
def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2
This function returns the value of \((x - 2)^2\). Our goal is to find the value of x that minimizes the output of the objective function. This is the “optimization.” During the optimization, Optuna repeatedly calls and evaluates the objective function with different values of x.
A Trial object corresponds to a single execution of the objective function and is internally instantiated upon each invocation of the function. The suggest APIs (for example, suggest_float()) are called inside the objective function to obtain parameters for a trial. suggest_float() selects parameters uniformly within the range provided. In our example, from \(-10\) to \(10\).
To start the optimization, we create a study object and pass the objective function to the optimize() method as follows.
study = optuna.create_study()
study.optimize(objective, n_trials=100)
You can get the best parameter as follows.
best_params = study.best_params
found_x = best_params["x"]
print("Found x: {}, (x - 2)^2: {}".format(found_x, (found_x - 2) ** 2))
Out:
Found x: 1.9711403201212547, (x - 2)^2: 0.0008328811227036546
We can see that the x value found by Optuna is close to the optimal value of 2.
Note
When used to search for hyperparameters in machine learning, usually the objective function would return the loss or accuracy of the model.
Study Object¶
Let us clarify the terminology in Optuna as follows:
Trial: A single call of the objective function
Study: An optimization session, which is a set of trials
Parameter: A variable whose value is to be optimized, such as x in the above example
In Optuna, we use the study object to manage optimization.
The create_study() method returns a study object.
A study object has useful properties for analyzing the optimization outcome.
To get the dictionary of parameter names and values:
study.best_params
Out:
{'x': 1.9711403201212547}
To get the best observed value of the objective function:
study.best_value
Out:
0.0008328811227036546
To get the best trial:
study.best_trial
Out:
FrozenTrial(number=84, values=[0.0008328811227036546], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 958648), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 962596), params={'x': 1.9711403201212547}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=84, state=TrialState.COMPLETE, value=None)
To get all trials:
study.trials
Out:
[FrozenTrial(number=0, values=[62.236137611765024], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 657372), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 657658), params={'x': 9.888988376957151}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=0, state=TrialState.COMPLETE, value=None), FrozenTrial(number=1, ...), ...]
datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 930374), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 934029), params={'x': 2.060657875607872}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=77, state=TrialState.COMPLETE, value=None), FrozenTrial(number=78, values=[2.070978779178735], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 934374), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 938111), params={'x': 0.5609104339275004}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=78, state=TrialState.COMPLETE, value=None), FrozenTrial(number=79, values=[0.04896347770252587], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 938474), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 942084), params={'x': 1.7787230746270053}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=79, state=TrialState.COMPLETE, value=None), FrozenTrial(number=80, values=[0.44333130508058255], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 942447), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 946032), params={'x': 1.3341687112484255}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=80, state=TrialState.COMPLETE, value=None), FrozenTrial(number=81, values=[0.00539640437548078], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 946415), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 950128), params={'x': 2.0734602230835217}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=81, state=TrialState.COMPLETE, value=None), FrozenTrial(number=82, values=[0.004493839820215944], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 
950484), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 954213), params={'x': 2.0670361083313757}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=82, state=TrialState.COMPLETE, value=None), FrozenTrial(number=83, values=[1.015648083385248], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 954569), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 958292), params={'x': 3.007793671038496}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=83, state=TrialState.COMPLETE, value=None), FrozenTrial(number=84, values=[0.0008328811227036546], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 958648), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 962596), params={'x': 1.9711403201212547}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=84, state=TrialState.COMPLETE, value=None), FrozenTrial(number=85, values=[1.8241655417207785], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 963079), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 966947), params={'x': 0.6493832735669314}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=85, state=TrialState.COMPLETE, value=None), FrozenTrial(number=86, values=[2.467875693859409], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 967328), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 971349), params={'x': 3.570947387361973}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=86, state=TrialState.COMPLETE, value=None), FrozenTrial(number=87, values=[0.007149796217025023], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 971683), datetime_complete=datetime.datetime(2021, 2, 1, 
6, 20, 12, 975460), params={'x': 1.9154435323761392}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=87, state=TrialState.COMPLETE, value=None), FrozenTrial(number=88, values=[4.50260601854136], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 976750), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 980485), params={'x': -0.12193449911663357}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=88, state=TrialState.COMPLETE, value=None), FrozenTrial(number=89, values=[0.6248944555939134], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 980845), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 984559), params={'x': 2.790502660080226}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=89, state=TrialState.COMPLETE, value=None), FrozenTrial(number=90, values=[4.55701554045824], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 984956), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 988697), params={'x': 4.134716735414383}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=90, state=TrialState.COMPLETE, value=None), FrozenTrial(number=91, values=[0.013659809746195215], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 989060), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 992756), params={'x': 1.883124811246376}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=91, state=TrialState.COMPLETE, value=None), FrozenTrial(number=92, values=[1.1115823065731307], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 993119), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 12, 996830), params={'x': 0.9456839626691005}, 
distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=92, state=TrialState.COMPLETE, value=None), FrozenTrial(number=93, values=[0.6524191955651099], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 12, 997191), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 13, 835), params={'x': 1.1922752971679584}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=93, state=TrialState.COMPLETE, value=None), FrozenTrial(number=94, values=[0.003991262446918265], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 13, 1194), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 13, 4866), params={'x': 2.063176439017392}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=94, state=TrialState.COMPLETE, value=None), FrozenTrial(number=95, values=[0.4649000768947512], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 13, 5230), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 13, 8992), params={'x': 2.6818358137372598}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=95, state=TrialState.COMPLETE, value=None), FrozenTrial(number=96, values=[1.6925433669085481], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 13, 9351), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 13, 13086), params={'x': 3.3009778502759177}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=96, state=TrialState.COMPLETE, value=None), FrozenTrial(number=97, values=[0.01347450783780069], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 13, 13448), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 13, 17201), params={'x': 2.116079747750418}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, 
system_attrs={}, intermediate_values={}, trial_id=97, state=TrialState.COMPLETE, value=None), FrozenTrial(number=98, values=[8.854838840886162], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 13, 17562), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 13, 21313), params={'x': -0.9757081242766672}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=98, state=TrialState.COMPLETE, value=None), FrozenTrial(number=99, values=[0.3735066125995011], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 13, 21675), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 13, 25424), params={'x': 1.3888481264043273}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=99, state=TrialState.COMPLETE, value=None)]
To get the number of trials:
len(study.trials)
Out:
100
By executing optimize() again, we can continue the optimization.
study.optimize(objective, n_trials=100)
To get the updated number of trials:
len(study.trials)
Out:
200
The objective function is so simple that the last 100 trials don't improve the result much. However, we can check the result again:
best_params = study.best_params
found_x = best_params["x"]
print("Found x: {}, (x - 2)^2: {}".format(found_x, (found_x - 2) ** 2))
Out:
Found x: 2.0024482156595917, (x - 2)^2: 5.993759915870247e-06
Total running time of the script: ( 0 minutes 0.834 seconds)
Note
Click here to download the full example code
Pythonic Search Space¶
For hyperparameter sampling, Optuna provides the following features:
optuna.trial.Trial.suggest_categorical() for categorical parameters
optuna.trial.Trial.suggest_int() for integer parameters
optuna.trial.Trial.suggest_float() for floating point parameters
With the optional arguments step and log, we can discretize or take the logarithm of integer and floating point parameters.
import optuna


def objective(trial):
    # Categorical parameter
    optimizer = trial.suggest_categorical("optimizer", ["MomentumSGD", "Adam"])

    # Integer parameter
    num_layers = trial.suggest_int("num_layers", 1, 3)

    # Integer parameter (log)
    num_channels = trial.suggest_int("num_channels", 32, 512, log=True)

    # Integer parameter (discretized)
    num_units = trial.suggest_int("num_units", 10, 100, step=5)

    # Floating point parameter
    dropout_rate = trial.suggest_float("dropout_rate", 0.0, 1.0)

    # Floating point parameter (log)
    learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)

    # Floating point parameter (discretized)
    drop_path_rate = trial.suggest_float("drop_path_rate", 0.0, 1.0, step=0.1)
Defining Parameter Spaces¶
In Optuna, we define search spaces using familiar Python syntax including conditionals and loops.
You can also use branches and loops depending on the parameter values.
For more varied use cases, see the examples.
Branches:
import sklearn.ensemble
import sklearn.svm
def objective(trial):
    classifier_name = trial.suggest_categorical("classifier", ["SVC", "RandomForest"])
    if classifier_name == "SVC":
        svc_c = trial.suggest_float("svc_c", 1e-10, 1e10, log=True)
        classifier_obj = sklearn.svm.SVC(C=svc_c)
    else:
        rf_max_depth = trial.suggest_int("rf_max_depth", 2, 32, log=True)
        classifier_obj = sklearn.ensemble.RandomForestClassifier(max_depth=rf_max_depth)
Loops:
import torch
import torch.nn as nn
def create_model(trial, in_size):
    n_layers = trial.suggest_int("n_layers", 1, 3)

    layers = []
    for i in range(n_layers):
        n_units = trial.suggest_int("n_units_l{}".format(i), 4, 128, log=True)
        layers.append(nn.Linear(in_size, n_units))
        layers.append(nn.ReLU())
        in_size = n_units
    layers.append(nn.Linear(in_size, 10))

    return nn.Sequential(*layers)
The difficulty of optimization increases roughly exponentially with the number of parameters. That is, the number of necessary trials increases exponentially when you increase the number of parameters, so it is recommended not to add unimportant parameters.
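To make the exponential growth concrete, here is a small sketch (plain Python; the grid sizes are hypothetical) counting how many distinct configurations a discretized search space contains as parameters are added:

```python
import math

# Hypothetical discretized grids: each entry is the number of candidate
# values for one hyperparameter.
grid_sizes = [10, 10, 10, 10]


def n_configurations(sizes):
    # The number of distinct configurations is the product of the grid
    # sizes, so each added parameter multiplies the search space size.
    return math.prod(sizes)


print(n_configurations(grid_sizes[:2]))  # 100
print(n_configurations(grid_sizes))      # 10000
```

Each extra 10-valued parameter multiplies the number of configurations by 10, which is why dropping unimportant parameters pays off quickly.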
Total running time of the script: ( 0 minutes 0.002 seconds)
Efficient Optimization Algorithms¶
Optuna enables efficient hyperparameter optimization by adopting state-of-the-art algorithms for sampling hyperparameters and efficiently pruning unpromising trials.
Sampling Algorithms¶
Samplers continually narrow down the search space using the records of suggested parameter values and evaluated objective values, leading to an optimal search space that yields parameters with better objective values.
A more detailed explanation of how samplers suggest parameters can be found in optuna.samplers.BaseSampler.
Optuna provides the following sampling algorithms:
Tree-structured Parzen Estimator algorithm implemented in optuna.samplers.TPESampler
CMA-ES based algorithm implemented in optuna.samplers.CmaEsSampler
Grid Search implemented in optuna.samplers.GridSampler
Random Search implemented in optuna.samplers.RandomSampler
The default sampler is optuna.samplers.TPESampler.
Switching Samplers¶
import optuna
By default, Optuna uses TPESampler as follows.
study = optuna.create_study()
print(f"Sampler is {study.sampler.__class__.__name__}")
Out:
Sampler is TPESampler
If you want to use different samplers, for example RandomSampler or CmaEsSampler:
study = optuna.create_study(sampler=optuna.samplers.RandomSampler())
print(f"Sampler is {study.sampler.__class__.__name__}")
study = optuna.create_study(sampler=optuna.samplers.CmaEsSampler())
print(f"Sampler is {study.sampler.__class__.__name__}")
Out:
Sampler is RandomSampler
Sampler is CmaEsSampler
Pruning Algorithms¶
Pruners automatically stop unpromising trials at the early stages of the training (a.k.a., automated early-stopping).
Optuna provides the following pruning algorithms:
Asynchronous Successive Halving algorithm implemented in optuna.pruners.SuccessiveHalvingPruner
Hyperband algorithm implemented in optuna.pruners.HyperbandPruner
Median pruning algorithm implemented in optuna.pruners.MedianPruner
Threshold pruning algorithm implemented in optuna.pruners.ThresholdPruner
We use optuna.pruners.MedianPruner in most examples, though it is basically outperformed by optuna.pruners.SuccessiveHalvingPruner and optuna.pruners.HyperbandPruner, as shown in this benchmark result.
Activating Pruners¶
To turn on the pruning feature, you need to call report() and should_prune() after each step of the iterative training. report() periodically reports intermediate objective values, and should_prune() decides whether to terminate a trial that does not meet a predefined condition.
We recommend using the integration modules for major machine learning frameworks. An exhaustive list is available in optuna.integration, and use cases are available in optuna/examples.
import logging
import sys
import sklearn.datasets
import sklearn.linear_model
import sklearn.model_selection
def objective(trial):
    iris = sklearn.datasets.load_iris()
    classes = list(set(iris.target))
    train_x, valid_x, train_y, valid_y = sklearn.model_selection.train_test_split(
        iris.data, iris.target, test_size=0.25, random_state=0
    )

    alpha = trial.suggest_loguniform("alpha", 1e-5, 1e-1)
    clf = sklearn.linear_model.SGDClassifier(alpha=alpha)

    for step in range(100):
        clf.partial_fit(train_x, train_y, classes=classes)

        # Report intermediate objective value.
        intermediate_value = 1.0 - clf.score(valid_x, valid_y)
        trial.report(intermediate_value, step)

        # Handle pruning based on the intermediate value.
        if trial.should_prune():
            raise optuna.TrialPruned()

    return 1.0 - clf.score(valid_x, valid_y)
Set up the median stopping rule as the pruning condition.
# Add stream handler of stdout to show the messages
optuna.logging.get_logger("optuna").addHandler(logging.StreamHandler(sys.stdout))
study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)
Out:
A new study created in memory with name: no-name-0625df58-cd5d-448a-af1c-b0fe60458533
Trial 0 finished with value: 0.10526315789473684 and parameters: {'alpha': 0.004111637321121054}. Best is trial 0 with value: 0.10526315789473684.
Trial 1 finished with value: 0.2894736842105263 and parameters: {'alpha': 0.07333314605635764}. Best is trial 0 with value: 0.10526315789473684.
Trial 2 finished with value: 0.02631578947368418 and parameters: {'alpha': 0.0006664053357678837}. Best is trial 2 with value: 0.02631578947368418.
Trial 3 finished with value: 0.02631578947368418 and parameters: {'alpha': 0.0026754617661103428}. Best is trial 2 with value: 0.02631578947368418.
Trial 4 finished with value: 0.07894736842105265 and parameters: {'alpha': 0.011587428640307693}. Best is trial 2 with value: 0.02631578947368418.
Trial 5 pruned.
Trial 6 pruned.
Trial 7 pruned.
Trial 8 pruned.
Trial 9 pruned.
Trial 10 pruned.
Trial 11 pruned.
Trial 12 pruned.
Trial 13 pruned.
Trial 14 finished with value: 0.21052631578947367 and parameters: {'alpha': 0.0014725328266443408}. Best is trial 2 with value: 0.02631578947368418.
Trial 15 pruned.
Trial 16 pruned.
Trial 17 pruned.
Trial 18 pruned.
Trial 19 pruned.
As you can see, several trials were pruned (stopped) before they finished all of the iterations.
The format of the message is "Trial <Trial Number> pruned.".
Which Sampler and Pruner Should be Used?¶
From the benchmark results, which are available at optuna/optuna - wiki "Benchmarks with Kurobako", at least for non-deep-learning tasks, we would say that:
For optuna.samplers.RandomSampler, optuna.pruners.MedianPruner is the best.
For optuna.samplers.TPESampler, optuna.pruners.HyperbandPruner is the best.
However, note that the benchmark does not cover deep learning tasks. For deep learning tasks, consult the table below from Ozaki et al., Hyperparameter Optimization Methods: Overview and Characteristics, in IEICE Trans, Vol.J103-D No.9 pp.615-631, 2020:
| Parallel Compute Resource | Categorical/Conditional Hyperparameters | Recommended Algorithms |
|---|---|---|
| Limited | No | TPE. GP-EI if search space is low-dimensional and continuous. |
| Limited | Yes | TPE. GP-EI if search space is low-dimensional and continuous. |
| Sufficient | No | CMA-ES, Random Search |
| Sufficient | Yes | Random Search or Genetic Algorithm |
Integration Modules for Pruning¶
To implement the pruning mechanism in a much simpler form, Optuna provides integration modules for the following libraries.
For the complete list of Optuna’s integration modules, see optuna.integration
.
For example, XGBoostPruningCallback introduces pruning without directly changing the logic of the training iteration. (See also the example for the entire script.)
pruning_callback = optuna.integration.XGBoostPruningCallback(trial, 'validation-error')
bst = xgb.train(param, dtrain, evals=[(dvalid, 'validation')], callbacks=[pruning_callback])
Total running time of the script: ( 0 minutes 1.891 seconds)
Easy Parallelization¶
It's straightforward to parallelize optuna.study.Study.optimize(). If you want to manually execute Optuna optimization:
start an RDB server (this example uses MySQL)
create a study with the --storage argument
share the study among multiple nodes and processes
Of course, you can use Kubernetes as in the kubernetes examples.
To see how parallel optimization works in Optuna, check the video below.
Create a Study¶
You can create a study using the optuna create-study command.
Alternatively, in a Python script you can use optuna.create_study().
$ mysql -u root -e "CREATE DATABASE IF NOT EXISTS example"
$ optuna create-study --study-name "distributed-example" --storage "mysql://root@localhost/example"
[I 2020-07-21 13:43:39,642] A new study created with name: distributed-example
Then, write an optimization script. Let's assume that foo.py contains the following code.
import optuna


def objective(trial):
    x = trial.suggest_uniform("x", -10, 10)
    return (x - 2) ** 2


if __name__ == "__main__":
    study = optuna.load_study(
        study_name="distributed-example", storage="mysql://root@localhost/example"
    )
    study.optimize(objective, n_trials=100)
Quick Visualization for Hyperparameter Optimization Analysis¶
Optuna provides various visualization features in optuna.visualization to analyze optimization results visually.
This tutorial walks you through this module by visualizing the optimization history of a LightGBM model for the breast cancer dataset.
import lightgbm as lgb
import numpy as np
import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split
import optuna
from optuna.visualization import plot_contour
from optuna.visualization import plot_edf
from optuna.visualization import plot_intermediate_values
from optuna.visualization import plot_optimization_history
from optuna.visualization import plot_parallel_coordinate
from optuna.visualization import plot_param_importances
from optuna.visualization import plot_slice
SEED = 42
np.random.seed(SEED)
Define the objective function.
def objective(trial):
    data, target = sklearn.datasets.load_breast_cancer(return_X_y=True)
    train_x, valid_x, train_y, valid_y = train_test_split(data, target, test_size=0.25)
    dtrain = lgb.Dataset(train_x, label=train_y)
    dvalid = lgb.Dataset(valid_x, label=valid_y)

    param = {
        "objective": "binary",
        "metric": "auc",
        "verbosity": -1,
        "boosting_type": "gbdt",
        "bagging_fraction": trial.suggest_float("bagging_fraction", 0.4, 1.0),
        "bagging_freq": trial.suggest_int("bagging_freq", 1, 7),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
    }

    # Add a callback for pruning.
    pruning_callback = optuna.integration.LightGBMPruningCallback(trial, "auc")
    gbm = lgb.train(
        param, dtrain, valid_sets=[dvalid], verbose_eval=False, callbacks=[pruning_callback]
    )

    preds = gbm.predict(valid_x)
    pred_labels = np.rint(preds)
    accuracy = sklearn.metrics.accuracy_score(valid_y, pred_labels)
    return accuracy
study = optuna.create_study(
    direction="maximize",
    sampler=optuna.samplers.TPESampler(seed=SEED),
    pruner=optuna.pruners.MedianPruner(n_warmup_steps=10),
)
study.optimize(objective, n_trials=100, timeout=600)
Plot functions¶
Visualize the optimization history. See plot_optimization_history() for the details.
plot_optimization_history(study)
Visualize the learning curves of the trials. See plot_intermediate_values() for the details.
plot_intermediate_values(study)
Visualize high-dimensional parameter relationships. See plot_parallel_coordinate() for the details.
plot_parallel_coordinate(study)
Select parameters to visualize.
plot_parallel_coordinate(study, params=["bagging_freq", "bagging_fraction"])
Visualize hyperparameter relationships. See plot_contour() for the details.
plot_contour(study)
Select parameters to visualize.
plot_contour(study, params=["bagging_freq", "bagging_fraction"])
Visualize individual hyperparameters as a slice plot. See plot_slice() for the details.
plot_slice(study)
Select parameters to visualize.
plot_slice(study, params=["bagging_freq", "bagging_fraction"])
Visualize parameter importances. See plot_param_importances() for the details.
plot_param_importances(study)
Visualize the empirical distribution function. See plot_edf() for the details.
plot_edf(study)
Total running time of the script: ( 0 minutes 5.846 seconds)
Recipes¶
Showcases recipes that might help you use Optuna comfortably.
Saving/Resuming Study with RDB Backend¶
An RDB backend enables persistent experiments (i.e., to save and resume a study) as well as access to history of studies. In addition, we can run multi-node optimization tasks with this feature, which is described in Easy Parallelization.
In this section, let’s try simple examples running on a local environment with SQLite DB.
Note
You can also utilize other RDB backends, e.g., PostgreSQL or MySQL, by setting the storage argument to the DB's URL. Please refer to SQLAlchemy's documentation for how to set up the URL.
New Study¶
We can create a persistent study by calling the create_study() function as follows.
An SQLite file example.db is automatically initialized with a new study record.
import logging
import sys
import optuna
# Add stream handler of stdout to show the messages
optuna.logging.get_logger("optuna").addHandler(logging.StreamHandler(sys.stdout))
study_name = "example-study" # Unique identifier of the study.
storage_name = "sqlite:///{}.db".format(study_name)
study = optuna.create_study(study_name=study_name, storage=storage_name)
Out:
A new study created in RDB with name: example-study
To run a study, call the optimize() method, passing an objective function.
def objective(trial):
    x = trial.suggest_uniform("x", -10, 10)
    return (x - 2) ** 2


study.optimize(objective, n_trials=3)
Out:
Trial 0 finished with value: 53.094297127034814 and parameters: {'x': -5.286583364446935}. Best is trial 0 with value: 53.094297127034814.
Trial 1 finished with value: 7.829025011539024 and parameters: {'x': 4.798039494277917}. Best is trial 1 with value: 7.829025011539024.
Trial 2 finished with value: 104.03356987292109 and parameters: {'x': -8.19968479282184}. Best is trial 1 with value: 7.829025011539024.
Resume Study¶
To resume a study, instantiate a Study object, passing the study name example-study and the DB URL sqlite:///example-study.db.
study = optuna.create_study(study_name=study_name, storage=storage_name, load_if_exists=True)
study.optimize(objective, n_trials=3)
Out:
Using an existing study with name 'example-study' instead of creating a new one.
Trial 3 finished with value: 69.71829289844983 and parameters: {'x': -6.3497480739510825}. Best is trial 1 with value: 7.829025011539024.
Trial 4 finished with value: 1.8338309737715102 and parameters: {'x': 3.354190154214507}. Best is trial 4 with value: 1.8338309737715102.
Trial 5 finished with value: 44.9958846684698 and parameters: {'x': -4.70789718678438}. Best is trial 4 with value: 1.8338309737715102.
Experimental History¶
We can access the histories of studies and trials via the Study class.
For example, we can get all trials of example-study as:
study = optuna.create_study(study_name=study_name, storage=storage_name, load_if_exists=True)
df = study.trials_dataframe(attrs=("number", "value", "params", "state"))
Out:
Using an existing study with name 'example-study' instead of creating a new one.
The method trials_dataframe() returns a pandas dataframe like:
print(df)
Out:
number value params_x state
0 0 53.094297 -5.286583 COMPLETE
1 1 7.829025 4.798039 COMPLETE
2 2 104.033570 -8.199685 COMPLETE
3 3 69.718293 -6.349748 COMPLETE
4 4 1.833831 3.354190 COMPLETE
5 5 44.995885 -4.707897 COMPLETE
A Study object also provides properties such as trials, best_value, and best_params (see also Lightweight, versatile, and platform agnostic architecture).
print("Best params: ", study.best_params)
print("Best value: ", study.best_value)
print("Best Trial: ", study.best_trial)
print("Trials: ", study.trials)
Out:
Best params: {'x': 3.354190154214507}
Best value: 1.8338309737715102
Best Trial: FrozenTrial(number=4, values=[1.8338309737715102], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 26, 453604), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 26, 520114), params={'x': 3.354190154214507}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=5, state=TrialState.COMPLETE, value=None)
Trials: [FrozenTrial(number=0, values=[53.094297127034814], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 25, 968451), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 26, 39041), params={'x': -5.286583364446935}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=1, state=TrialState.COMPLETE, value=None), FrozenTrial(number=1, values=[7.829025011539024], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 26, 102002), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 26, 158621), params={'x': 4.798039494277917}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=2, state=TrialState.COMPLETE, value=None), FrozenTrial(number=2, values=[104.03356987292109], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 26, 204254), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 26, 257518), params={'x': -8.19968479282184}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=3, state=TrialState.COMPLETE, value=None), FrozenTrial(number=3, values=[69.71829289844983], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 26, 346509), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 26, 400564), params={'x': -6.3497480739510825}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=4, state=TrialState.COMPLETE, value=None), FrozenTrial(number=4, values=[1.8338309737715102], datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 26, 453604), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 26, 520114), params={'x': 3.354190154214507}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=5, state=TrialState.COMPLETE, value=None), FrozenTrial(number=5, values=[44.9958846684698], 
datetime_start=datetime.datetime(2021, 2, 1, 6, 20, 26, 588064), datetime_complete=datetime.datetime(2021, 2, 1, 6, 20, 26, 647586), params={'x': -4.70789718678438}, distributions={'x': UniformDistribution(high=10, low=-10)}, user_attrs={}, system_attrs={}, intermediate_values={}, trial_id=6, state=TrialState.COMPLETE, value=None)]
Total running time of the script: ( 0 minutes 4.997 seconds)
User Attributes¶
This feature lets you annotate experiments with user-defined attributes.
Adding User Attributes to Studies¶
A Study object provides a set_user_attr() method to register a key-value pair as a user-defined attribute. A key is supposed to be a str, and a value can be any object serializable with json.dumps.
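A quick way to check whether a value qualifies is to round-trip it through the json module (a standalone sketch, independent of Optuna):

```python
import json

# User attribute values must survive json.dumps; lists, dicts,
# strings, and numbers all qualify.
attrs = {"contributors": ["Akiba", "Sano"], "dataset": "MNIST"}
encoded = json.dumps(attrs)
assert json.loads(encoded) == attrs
```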
import sklearn.datasets
import sklearn.model_selection
import sklearn.svm
import optuna
study = optuna.create_study(storage="sqlite:///example.db")
study.set_user_attr("contributors", ["Akiba", "Sano"])
study.set_user_attr("dataset", "MNIST")
We can access annotated attributes with the user_attrs property.
study.user_attrs # {'contributors': ['Akiba', 'Sano'], 'dataset': 'MNIST'}
Out:
{'contributors': ['Akiba', 'Sano'], 'dataset': 'MNIST'}
A StudySummary object, which can be retrieved by get_all_study_summaries(), also contains user-defined attributes.
study_summaries = optuna.get_all_study_summaries("sqlite:///example.db")
study_summaries[0].user_attrs # {"contributors": ["Akiba", "Sano"], "dataset": "MNIST"}
Out:
{'contributors': ['Akiba', 'Sano'], 'dataset': 'MNIST'}
See also the optuna study set-user-attr command, which sets an attribute via the command-line interface.
Adding User Attributes to Trials¶
As with Study, a Trial object provides a set_user_attr() method. Attributes are set inside an objective function.
def objective(trial):
    iris = sklearn.datasets.load_iris()
    x, y = iris.data, iris.target

    svc_c = trial.suggest_loguniform("svc_c", 1e-10, 1e10)
    clf = sklearn.svm.SVC(C=svc_c)

    accuracy = sklearn.model_selection.cross_val_score(clf, x, y).mean()
    trial.set_user_attr("accuracy", accuracy)

    return 1.0 - accuracy  # return error for minimization


study.optimize(objective, n_trials=1)
We can access annotated attributes as:
study.trials[0].user_attrs
Out:
{'accuracy': 0.9266666666666667}
Note that, in this example, the attribute is annotated not to the Study but to a single Trial.
Total running time of the script: ( 0 minutes 0.977 seconds)
Command-Line Interface¶
Command | Description
---|---
create-study | Create a new study.
delete-study | Delete a specified study.
dashboard | Launch web dashboard (beta).
storage upgrade | Upgrade the schema of a storage.
studies | Show a list of studies.
study optimize | Start optimization of a study.
study set-user-attr | Set a user attribute to a study.
Optuna provides a command-line interface as shown in the above table.
Let us assume you are not in an IPython shell and are writing Python script files instead. It is totally fine to write scripts like the following:
import optuna


def objective(trial):
    x = trial.suggest_uniform("x", -10, 10)
    return (x - 2) ** 2


if __name__ == "__main__":
    study = optuna.create_study()
    study.optimize(objective, n_trials=100)
    print("Best value: {} (params: {})\n".format(study.best_value, study.best_params))
Out:
Best value: 9.604928065910521e-05 (params: {'x': 1.990199526508423})
However, we can reduce boilerplate code by using our optuna command.
Let us assume that foo.py contains only the following code.
def objective(trial):
    x = trial.suggest_uniform("x", -10, 10)
    return (x - 2) ** 2
Even so, we can invoke the optimization as follows.
(Don't worry about --storage sqlite:///example.db for now; it is described in Saving/Resuming Study with RDB Backend.)
$ cat foo.py
def objective(trial):
    x = trial.suggest_uniform('x', -10, 10)
    return (x - 2) ** 2
$ STUDY_NAME=`optuna create-study --storage sqlite:///example.db`
$ optuna study optimize foo.py objective --n-trials=100 --storage sqlite:///example.db --study-name $STUDY_NAME
[I 2018-05-09 10:40:25,196] Finished a trial resulted in value: 54.353767789264026. Current best value is 54.353767789264026 with parameters: {'x': -5.372500782588228}.
[I 2018-05-09 10:40:25,197] Finished a trial resulted in value: 15.784266965526376. Current best value is 15.784266965526376 with parameters: {'x': 5.972941852774387}.
...
[I 2018-05-09 10:40:26,204] Finished a trial resulted in value: 14.704254135013741. Current best value is 2.280758099793617e-06 with parameters: {'x': 1.9984897821018828}.
Please note that foo.py contains only the definition of the objective function. By giving the script file name and the name of the objective function to the optuna study optimize command, we can invoke the optimization.
Total running time of the script: ( 0 minutes 0.379 seconds)
User-Defined Sampler¶
Thanks to user-defined samplers, you can:
experiment with your own sampling algorithms,
implement task-specific algorithms to refine the optimization performance, or
wrap other optimization libraries to integrate them into Optuna pipelines (e.g., SkoptSampler).
This section describes the internal behavior of sampler classes and shows an example of implementing a user-defined sampler.
Overview of Sampler¶
A sampler is responsible for determining the parameter values to be evaluated in a trial.
When a suggest API (e.g., suggest_uniform()) is called inside an objective function, the corresponding distribution object (e.g., UniformDistribution) is created internally. A sampler then samples a parameter value from the distribution. The sampled value is returned to the caller of the suggest API and evaluated in the objective function.
To create a new sampler, you need to define a class that inherits from BaseSampler. The base class has three abstract methods: infer_relative_search_space(), sample_relative(), and sample_independent().
As the method names imply, Optuna supports two types of sampling: one is relative sampling that can consider the correlation of the parameters in a trial, and the other is independent sampling that samples each parameter independently.
At the beginning of a trial, infer_relative_search_space()
is called to provide the relative search space for the trial. Then, sample_relative()
is invoked to sample relative parameters from the search space. During the execution of the objective function, sample_independent()
is used to sample parameters that don’t belong to the relative search space.
Note
Please refer to the documentation of BaseSampler for further details.
An Example: Implementing SimulatedAnnealingSampler¶
For example, the following code defines a sampler based on Simulated Annealing (SA):
import numpy as np

import optuna


class SimulatedAnnealingSampler(optuna.samplers.BaseSampler):
    def __init__(self, temperature=100):
        self._rng = np.random.RandomState()
        self._temperature = temperature  # Current temperature.
        self._current_trial = None  # Current state.

    def sample_relative(self, study, trial, search_space):
        if search_space == {}:
            return {}

        # Simulated Annealing algorithm.
        # 1. Calculate transition probability.
        prev_trial = study.trials[-2]
        if self._current_trial is None or prev_trial.value <= self._current_trial.value:
            probability = 1.0
        else:
            probability = np.exp(
                (self._current_trial.value - prev_trial.value) / self._temperature
            )
        self._temperature *= 0.9  # Decrease temperature.

        # 2. Transit the current state if the previous result is accepted.
        if self._rng.uniform(0, 1) < probability:
            self._current_trial = prev_trial

        # 3. Sample parameters from the neighborhood of the current point.
        # The sampled parameters will be used during the next execution of
        # the objective function passed to the study.
        params = {}
        for param_name, param_distribution in search_space.items():
            if not isinstance(param_distribution, optuna.distributions.UniformDistribution):
                raise NotImplementedError("Only suggest_uniform() is supported")

            current_value = self._current_trial.params[param_name]
            width = (param_distribution.high - param_distribution.low) * 0.1
            neighbor_low = max(current_value - width, param_distribution.low)
            neighbor_high = min(current_value + width, param_distribution.high)
            params[param_name] = self._rng.uniform(neighbor_low, neighbor_high)

        return params

    # The rest are unrelated to the SA algorithm: boilerplate.
    def infer_relative_search_space(self, study, trial):
        return optuna.samplers.intersection_search_space(study)

    def sample_independent(self, study, trial, param_name, param_distribution):
        independent_sampler = optuna.samplers.RandomSampler()
        return independent_sampler.sample_independent(
            study, trial, param_name, param_distribution
        )
Note
For the sake of code simplicity, the above implementation doesn't support some features (e.g., maximization). If you're interested in how to support those features, please see examples/samplers/simulated_annealing.py.
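The two core pieces of the sampler above, the acceptance rule and the neighborhood sampling, can be isolated as small pure-Python helpers (a sketch for a minimization objective; the function names are ours, not part of Optuna):

```python
import math
import random


def acceptance_probability(prev_value, current_value, temperature):
    # Accept unconditionally when the previous trial improved on the current
    # state; otherwise accept with probability exp((current - prev) / T),
    # which shrinks as the temperature decreases.
    if prev_value <= current_value:
        return 1.0
    return math.exp((current_value - prev_value) / temperature)


def sample_neighbor(current_value, low, high, rng=random):
    # Sample uniformly within 10% of the range around the current value,
    # clipped to the distribution's bounds.
    width = (high - low) * 0.1
    neighbor_low = max(current_value - width, low)
    neighbor_high = min(current_value + width, high)
    return rng.uniform(neighbor_low, neighbor_high)


assert acceptance_probability(1.0, 2.0, temperature=100) == 1.0
assert 0.0 < acceptance_probability(5.0, 2.0, temperature=100) < 1.0
assert 8.0 <= sample_neighbor(10.0, -10.0, 10.0) <= 10.0
```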
You can use SimulatedAnnealingSampler in the same way as built-in samplers as follows:
def objective(trial):
    x = trial.suggest_uniform("x", -10, 10)
    y = trial.suggest_uniform("y", -5, 5)
    return x ** 2 + y


sampler = SimulatedAnnealingSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)

best_trial = study.best_trial
print("Best value: ", best_trial.value)
print("Parameters that achieve the best value: ", best_trial.params)
Out:
Best value: -3.3293862626297486
Parameters that achieve the best value: {'x': 0.3393530900616444, 'y': -3.444546782364135}
In this optimization, the values of the x and y parameters are sampled using the SimulatedAnnealingSampler.sample_relative method.
Note
Strictly speaking, in the first trial, the SimulatedAnnealingSampler.sample_independent method is used to sample parameter values, because intersection_search_space() used in SimulatedAnnealingSampler.infer_relative_search_space cannot infer the search space if there are no complete trials.
Total running time of the script: ( 0 minutes 0.400 seconds)
Callback for Study.optimize¶
This tutorial showcases how to use and implement an Optuna Callback for optimize(). A Callback is called after every evaluation of objective; it takes a Study and a FrozenTrial as arguments and does some work. MLflowCallback is a great example.
Stop optimization after some trials are pruned in a row¶
This example implements a stateful callback which stops the optimization
if a certain number of trials are pruned in a row.
The number of trials pruned in a row is specified by threshold
.
import optuna


class StopWhenTrialKeepBeingPrunedCallback:
    def __init__(self, threshold: int):
        self.threshold = threshold
        self._consecutive_pruned_count = 0

    def __call__(self, study: optuna.study.Study, trial: optuna.trial.FrozenTrial) -> None:
        if trial.state == optuna.trial.TrialState.PRUNED:
            self._consecutive_pruned_count += 1
        else:
            self._consecutive_pruned_count = 0

        if self._consecutive_pruned_count >= self.threshold:
            study.stop()
This objective prunes all the trials except for the first 5 (trial.number starts from 0).
def objective(trial):
    if trial.number > 4:
        raise optuna.TrialPruned

    return trial.suggest_float("x", 0, 1)
Here, we set the threshold to 2: optimization finishes once two trials are pruned in a row. So, we expect this study to stop after 7 trials.
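To see where 7 comes from, the counter logic can be simulated without Optuna (a sketch; the helper name is ours):

```python
def trials_until_stop(n_complete, threshold):
    # The first `n_complete` trials finish normally (the counter resets),
    # every later trial is pruned; the study stops once `threshold`
    # consecutive pruned trials have been observed.
    consecutive = 0
    trials_run = 0
    while True:
        trials_run += 1
        pruned = trials_run - 1 >= n_complete  # trial.number starts at 0
        consecutive = consecutive + 1 if pruned else 0
        if consecutive >= threshold:
            return trials_run


# 5 completed trials, then pruned trials, threshold 2 -> stops after trial 7.
assert trials_until_stop(n_complete=5, threshold=2) == 7
```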
import logging
import sys
# Add stream handler of stdout to show the messages
optuna.logging.get_logger("optuna").addHandler(logging.StreamHandler(sys.stdout))
study_stop_cb = StopWhenTrialKeepBeingPrunedCallback(2)
study = optuna.create_study()
study.optimize(objective, n_trials=10, callbacks=[study_stop_cb])
Out:
A new study created in memory with name: no-name-e5ca9527-f660-4b9a-b979-db8c4efd1318
Trial 0 finished with value: 0.9079049643616132 and parameters: {'x': 0.9079049643616132}. Best is trial 0 with value: 0.9079049643616132.
Trial 1 finished with value: 0.10537646136485146 and parameters: {'x': 0.10537646136485146}. Best is trial 1 with value: 0.10537646136485146.
Trial 2 finished with value: 0.1599227028214033 and parameters: {'x': 0.1599227028214033}. Best is trial 1 with value: 0.10537646136485146.
Trial 3 finished with value: 0.43359562966154064 and parameters: {'x': 0.43359562966154064}. Best is trial 1 with value: 0.10537646136485146.
Trial 4 finished with value: 0.9227999289778658 and parameters: {'x': 0.9227999289778658}. Best is trial 1 with value: 0.10537646136485146.
Trial 5 pruned.
Trial 6 pruned.
As you can see in the log above, the study stopped after 7 trials as expected.
Total running time of the script: ( 0 minutes 0.008 seconds)
Specify Hyperparameters Manually¶
It's natural that you have some specific sets of hyperparameters to try first, such as initial learning rate values and the number of leaves. It's also possible that you've already tried those sets before having Optuna find better sets of hyperparameters.
Optuna provides two APIs to support such cases:
Passing those sets of hyperparameters and letting Optuna evaluate them - enqueue_trial()
Adding the results of those sets as completed Trials - add_trial()
First Scenario: Have Optuna evaluate your hyperparameters¶
In this scenario, let's assume you have some out-of-the-box sets of hyperparameters but have not evaluated them yet, and you have decided to use Optuna to find better sets of hyperparameters.
Optuna has optuna.study.Study.enqueue_trial() which lets you pass those sets of hyperparameters to Optuna, and Optuna will evaluate them.
This section walks you through how to use this API with LightGBM.
import lightgbm as lgb
import numpy as np
import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split
import optuna
Define the objective function.
def objective(trial):
    data, target = sklearn.datasets.load_breast_cancer(return_X_y=True)
    train_x, valid_x, train_y, valid_y = train_test_split(data, target, test_size=0.25)
    dtrain = lgb.Dataset(train_x, label=train_y)
    dvalid = lgb.Dataset(valid_x, label=valid_y)

    param = {
        "objective": "binary",
        "metric": "auc",
        "verbosity": -1,
        "boosting_type": "gbdt",
        "bagging_fraction": min(trial.suggest_float("bagging_fraction", 0.4, 1.0 + 1e-12), 1),
        "bagging_freq": trial.suggest_int("bagging_freq", 0, 7),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
    }

    # Add a callback for pruning.
    pruning_callback = optuna.integration.LightGBMPruningCallback(trial, "auc")
    gbm = lgb.train(
        param, dtrain, valid_sets=[dvalid], verbose_eval=False, callbacks=[pruning_callback]
    )

    preds = gbm.predict(valid_x)
    pred_labels = np.rint(preds)
    accuracy = sklearn.metrics.accuracy_score(valid_y, pred_labels)
    return accuracy
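Note the min(..., 1) guard on "bagging_fraction": the upper bound of the search range is deliberately 1.0 + 1e-12 so that 1.0 itself is attainable, and the clamp keeps the value LightGBM actually receives at or below 1.0. The pattern in isolation (the helper name is ours):

```python
def clamp_fraction(suggested, upper=1.0):
    # The suggested value may be infinitesimally above 1.0 because the
    # search range is (0.4, 1.0 + 1e-12); LightGBM requires <= 1.0.
    return min(suggested, upper)


assert clamp_fraction(1.0 + 1e-12) == 1.0
assert clamp_fraction(0.75) == 0.75
```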
Then, construct a Study for hyperparameter optimization.
study = optuna.create_study(direction="maximize", pruner=optuna.pruners.MedianPruner())
Here, we have Optuna evaluate some sets with a larger "bagging_fraction" value as well as the default values.
study.enqueue_trial(
    {
        "bagging_fraction": 1.0,
        "bagging_freq": 0,
        "min_child_samples": 20,
    }
)

study.enqueue_trial(
    {
        "bagging_fraction": 0.75,
        "bagging_freq": 5,
        "min_child_samples": 20,
    }
)
import logging
import sys
# Add a stream handler of stdout to show the messages and check that Optuna works as expected.
optuna.logging.get_logger("optuna").addHandler(logging.StreamHandler(sys.stdout))
study.optimize(objective, n_trials=100, timeout=600)
Out:
/home/docs/checkouts/readthedocs.org/user_builds/optuna/checkouts/v2.5.0/tutorial/20_recipes/008_specify_params.py:77: ExperimentalWarning:
enqueue_trial is experimental (supported from v1.2.0). The interface can change in the future.
/home/docs/checkouts/readthedocs.org/user_builds/optuna/envs/v2.5.0/lib/python3.8/site-packages/optuna/study.py:783: ExperimentalWarning:
create_trial is experimental (supported from v2.0.0). The interface can change in the future.
/home/docs/checkouts/readthedocs.org/user_builds/optuna/envs/v2.5.0/lib/python3.8/site-packages/optuna/study.py:782: ExperimentalWarning:
add_trial is experimental (supported from v2.0.0). The interface can change in the future.
/home/docs/checkouts/readthedocs.org/user_builds/optuna/checkouts/v2.5.0/tutorial/20_recipes/008_specify_params.py:85: ExperimentalWarning:
enqueue_trial is experimental (supported from v1.2.0). The interface can change in the future.
Trial 0 finished with value: 0.972027972027972 and parameters: {'bagging_fraction': 1.0, 'bagging_freq': 0, 'min_child_samples': 20}. Best is trial 0 with value: 0.972027972027972.
Trial 1 finished with value: 0.965034965034965 and parameters: {'bagging_fraction': 0.75, 'bagging_freq': 5, 'min_child_samples': 20}. Best is trial 0 with value: 0.972027972027972.
Trial 2 finished with value: 0.986013986013986 and parameters: {'bagging_fraction': 0.5087200690157058, 'bagging_freq': 1, 'min_child_samples': 23}. Best is trial 2 with value: 0.986013986013986.
Trial 3 finished with value: 0.972027972027972 and parameters: {'bagging_fraction': 0.4839727033430913, 'bagging_freq': 7, 'min_child_samples': 24}. Best is trial 2 with value: 0.986013986013986.
Trial 4 finished with value: 0.951048951048951 and parameters: {'bagging_fraction': 0.8964736748493063, 'bagging_freq': 2, 'min_child_samples': 57}. Best is trial 2 with value: 0.986013986013986.
Trial 5 pruned. Trial was pruned at iteration 0.
Trial 6 finished with value: 0.993006993006993 and parameters: {'bagging_fraction': 0.7717291040842011, 'bagging_freq': 1, 'min_child_samples': 33}. Best is trial 6 with value: 0.993006993006993.
Trial 7 pruned. Trial was pruned at iteration 1.
Trial 8 pruned. Trial was pruned at iteration 0.
Trial 9 pruned. Trial was pruned at iteration 0.
Trial 10 pruned. Trial was pruned at iteration 0.
Trial 11 finished with value: 0.9790209790209791 and parameters: {'bagging_fraction': 0.6058418252550434, 'bagging_freq': 0, 'min_child_samples': 45}. Best is trial 6 with value: 0.993006993006993.
Trial 12 pruned. Trial was pruned at iteration 0.
Trial 13 finished with value: 0.972027972027972 and parameters: {'bagging_fraction': 0.8266888185192081, 'bagging_freq': 2, 'min_child_samples': 5}. Best is trial 6 with value: 0.993006993006993.
Trial 14 pruned. Trial was pruned at iteration 0.
Trial 15 pruned. Trial was pruned at iteration 43.
Trial 16 pruned. Trial was pruned at iteration 0.
Trial 17 pruned. Trial was pruned at iteration 0.
Trial 18 pruned. Trial was pruned at iteration 0.
Trial 19 pruned. Trial was pruned at iteration 0.
Trial 20 pruned. Trial was pruned at iteration 2.
Trial 21 pruned. Trial was pruned at iteration 2.
Trial 22 pruned. Trial was pruned at iteration 1.
Trial 23 pruned. Trial was pruned at iteration 0.
Trial 24 pruned. Trial was pruned at iteration 0.
Trial 25 pruned. Trial was pruned at iteration 0.
Trial 26 pruned. Trial was pruned at iteration 0.
Trial 27 pruned. Trial was pruned at iteration 0.
Trial 28 pruned. Trial was pruned at iteration 0.
Trial 29 pruned. Trial was pruned at iteration 0.
Trial 30 pruned. Trial was pruned at iteration 46.
Trial 31 pruned. Trial was pruned at iteration 0.
Trial 32 pruned. Trial was pruned at iteration 0.
Trial 33 pruned. Trial was pruned at iteration 0.
Trial 34 pruned. Trial was pruned at iteration 0.
Trial 35 pruned. Trial was pruned at iteration 0.
Trial 36 pruned. Trial was pruned at iteration 0.
Trial 37 pruned. Trial was pruned at iteration 2.
Trial 38 pruned. Trial was pruned at iteration 0.
Trial 39 pruned. Trial was pruned at iteration 0.
Trial 40 pruned. Trial was pruned at iteration 0.
Trial 41 pruned. Trial was pruned at iteration 0.
Trial 42 pruned. Trial was pruned at iteration 0.
Trial 43 pruned. Trial was pruned at iteration 2.
Trial 44 pruned. Trial was pruned at iteration 0.
Trial 45 pruned. Trial was pruned at iteration 0.
Trial 46 pruned. Trial was pruned at iteration 0.
Trial 47 finished with value: 0.972027972027972 and parameters: {'bagging_fraction': 0.49438929606003434, 'bagging_freq': 0, 'min_child_samples': 30}. Best is trial 6 with value: 0.993006993006993.
Trial 48 pruned. Trial was pruned at iteration 0.
Trial 49 pruned. Trial was pruned at iteration 0.
Trial 50 pruned. Trial was pruned at iteration 0.
Trial 51 pruned. Trial was pruned at iteration 0.
Trial 52 pruned. Trial was pruned at iteration 2.
Trial 53 pruned. Trial was pruned at iteration 0.
Trial 54 pruned. Trial was pruned at iteration 0.
Trial 55 finished with value: 0.9790209790209791 and parameters: {'bagging_fraction': 0.7114403985474806, 'bagging_freq': 1, 'min_child_samples': 12}. Best is trial 6 with value: 0.993006993006993.
Trial 56 pruned. Trial was pruned at iteration 0.
Trial 57 pruned. Trial was pruned at iteration 0.
Trial 58 pruned. Trial was pruned at iteration 0.
Trial 59 pruned. Trial was pruned at iteration 0.
Trial 60 pruned. Trial was pruned at iteration 0.
Trial 61 pruned. Trial was pruned at iteration 0.
Trial 62 pruned. Trial was pruned at iteration 1.
Trial 63 finished with value: 0.986013986013986 and parameters: {'bagging_fraction': 0.7196656788109096, 'bagging_freq': 1, 'min_child_samples': 14}. Best is trial 6 with value: 0.993006993006993.
Trial 64 finished with value: 0.972027972027972 and parameters: {'bagging_fraction': 0.7200608614898812, 'bagging_freq': 2, 'min_child_samples': 12}. Best is trial 6 with value: 0.993006993006993.
Trial 65 pruned. Trial was pruned at iteration 0.
Trial 66 pruned. Trial was pruned at iteration 0.
Trial 67 pruned. Trial was pruned at iteration 0.
Trial 68 pruned. Trial was pruned at iteration 0.
Trial 69 pruned. Trial was pruned at iteration 0.
Trial 70 pruned. Trial was pruned at iteration 0.
Trial 71 finished with value: 0.972027972027972 and parameters: {'bagging_fraction': 0.7486529342560135, 'bagging_freq': 0, 'min_child_samples': 14}. Best is trial 6 with value: 0.993006993006993.
Trial 72 pruned. Trial was pruned at iteration 0.
Trial 73 pruned. Trial was pruned at iteration 0.
Trial 74 pruned. Trial was pruned at iteration 0.
Trial 75 pruned. Trial was pruned at iteration 0.
Trial 76 pruned. Trial was pruned at iteration 0.
Trial 77 pruned. Trial was pruned at iteration 0.
Trial 78 pruned. Trial was pruned at iteration 0.
Trial 79 pruned. Trial was pruned at iteration 0.
Trial 80 pruned. Trial was pruned at iteration 0.
Trial 81 pruned. Trial was pruned at iteration 0.
Trial 82 pruned. Trial was pruned at iteration 0.
Trial 83 pruned. Trial was pruned at iteration 0.
Trial 84 pruned. Trial was pruned at iteration 0.
Trial 85 pruned. Trial was pruned at iteration 0.
Trial 86 pruned. Trial was pruned at iteration 0.
Trial 87 pruned. Trial was pruned at iteration 0.
Trial 88 pruned. Trial was pruned at iteration 0.
Trial 89 pruned. Trial was pruned at iteration 0.
Trial 90 pruned. Trial was pruned at iteration 0.
Trial 91 pruned. Trial was pruned at iteration 0.
Trial 92 pruned. Trial was pruned at iteration 0.
Trial 93 pruned. Trial was pruned at iteration 0.
Trial 94 pruned. Trial was pruned at iteration 0.
Trial 95 pruned. Trial was pruned at iteration 0.
Trial 96 pruned. Trial was pruned at iteration 0.
Trial 97 pruned. Trial was pruned at iteration 0.
Trial 98 pruned. Trial was pruned at iteration 1.
Trial 99 pruned. Trial was pruned at iteration 0.
Second Scenario: Have Optuna utilize already evaluated hyperparameters¶
In this scenario, let's assume you have some out-of-the-box sets of hyperparameters and you have already evaluated them, but the results are not desirable, so you are thinking of using Optuna.
Optuna has optuna.study.Study.add_trial() which lets you register those results with Optuna, and Optuna will then sample hyperparameters taking them into account.
In this section, the objective is the same as in the first scenario.
study = optuna.create_study(direction="maximize", pruner=optuna.pruners.MedianPruner())
study.add_trial(
    optuna.trial.create_trial(
        params={
            "bagging_fraction": 1.0,
            "bagging_freq": 0,
            "min_child_samples": 20,
        },
        distributions={
            "bagging_fraction": optuna.distributions.UniformDistribution(0.4, 1.0 + 1e-12),
            "bagging_freq": optuna.distributions.IntUniformDistribution(0, 7),
            "min_child_samples": optuna.distributions.IntUniformDistribution(5, 100),
        },
        value=0.94,
    )
)

study.add_trial(
    optuna.trial.create_trial(
        params={
            "bagging_fraction": 0.75,
            "bagging_freq": 5,
            "min_child_samples": 20,
        },
        distributions={
            "bagging_fraction": optuna.distributions.UniformDistribution(0.4, 1.0 + 1e-12),
            "bagging_freq": optuna.distributions.IntUniformDistribution(0, 7),
            "min_child_samples": optuna.distributions.IntUniformDistribution(5, 100),
        },
        value=0.95,
    )
)
study.optimize(objective, n_trials=100, timeout=600)
Out:
A new study created in memory with name: no-name-62b7821a-cde0-4ca4-ac9b-a795a5129cf2
/home/docs/checkouts/readthedocs.org/user_builds/optuna/checkouts/v2.5.0/tutorial/20_recipes/008_specify_params.py:115: ExperimentalWarning:
create_trial is experimental (supported from v2.0.0). The interface can change in the future.
/home/docs/checkouts/readthedocs.org/user_builds/optuna/checkouts/v2.5.0/tutorial/20_recipes/008_specify_params.py:114: ExperimentalWarning:
add_trial is experimental (supported from v2.0.0). The interface can change in the future.
/home/docs/checkouts/readthedocs.org/user_builds/optuna/checkouts/v2.5.0/tutorial/20_recipes/008_specify_params.py:130: ExperimentalWarning:
create_trial is experimental (supported from v2.0.0). The interface can change in the future.
/home/docs/checkouts/readthedocs.org/user_builds/optuna/checkouts/v2.5.0/tutorial/20_recipes/008_specify_params.py:129: ExperimentalWarning:
add_trial is experimental (supported from v2.0.0). The interface can change in the future.
Trial 2 finished with value: 0.9440559440559441 and parameters: {'bagging_fraction': 0.7761900098321514, 'bagging_freq': 1, 'min_child_samples': 67}. Best is trial 1 with value: 0.95.
Trial 3 finished with value: 1.0 and parameters: {'bagging_fraction': 0.7567994812656097, 'bagging_freq': 7, 'min_child_samples': 43}. Best is trial 3 with value: 1.0.
Trial 4 finished with value: 0.965034965034965 and parameters: {'bagging_fraction': 0.44108711621926133, 'bagging_freq': 2, 'min_child_samples': 68}. Best is trial 3 with value: 1.0.
Trial 5 pruned. Trial was pruned at iteration 0.
Trial 6 pruned. Trial was pruned at iteration 2.
Trial 7 pruned. Trial was pruned at iteration 1.
Trial 8 pruned. Trial was pruned at iteration 0.
Trial 9 pruned. Trial was pruned at iteration 1.
Trial 10 pruned. Trial was pruned at iteration 0.
Trial 11 pruned. Trial was pruned at iteration 0.
Trial 12 pruned. Trial was pruned at iteration 0.
Trial 13 pruned. Trial was pruned at iteration 5.
Trial 14 pruned. Trial was pruned at iteration 0.
Trial 15 pruned. Trial was pruned at iteration 0.
Trial 16 pruned. Trial was pruned at iteration 0.
Trial 17 pruned. Trial was pruned at iteration 0.
Trial 18 pruned. Trial was pruned at iteration 0.
Trial 19 pruned. Trial was pruned at iteration 0.
Trial 20 pruned. Trial was pruned at iteration 0.
Trial 21 finished with value: 0.972027972027972 and parameters: {'bagging_fraction': 0.8518727410930631, 'bagging_freq': 5, 'min_child_samples': 24}. Best is trial 3 with value: 1.0.
Trial 22 pruned. Trial was pruned at iteration 0.
Trial 23 pruned. Trial was pruned at iteration 0.
Trial 24 pruned. Trial was pruned at iteration 0.
Trial 25 pruned. Trial was pruned at iteration 5.
Trial 26 pruned. Trial was pruned at iteration 0.
Trial 27 pruned. Trial was pruned at iteration 1.
Trial 28 pruned. Trial was pruned at iteration 0.
Trial 29 pruned. Trial was pruned at iteration 0.
Trial 30 pruned. Trial was pruned at iteration 1.
Trial 31 pruned. Trial was pruned at iteration 0.
Trial 32 pruned. Trial was pruned at iteration 1.
Trial 33 pruned. Trial was pruned at iteration 0.
Trial 34 pruned. Trial was pruned at iteration 0.
Trial 35 pruned. Trial was pruned at iteration 5.
Trial 36 pruned. Trial was pruned at iteration 0.
Trial 37 pruned. Trial was pruned at iteration 0.
Trial 38 pruned. Trial was pruned at iteration 0.
Trial 39 pruned. Trial was pruned at iteration 0.
Trial 40 pruned. Trial was pruned at iteration 0.
Trial 41 pruned. Trial was pruned at iteration 0.
Trial 42 pruned. Trial was pruned at iteration 0.
Trial 43 pruned. Trial was pruned at iteration 0.
Trial 44 pruned. Trial was pruned at iteration 0.
Trial 45 pruned. Trial was pruned at iteration 0.
Trial 46 pruned. Trial was pruned at iteration 0.
Trial 47 pruned. Trial was pruned at iteration 0.
Trial 48 pruned. Trial was pruned at iteration 0.
Trial 49 pruned. Trial was pruned at iteration 0.
Trial 50 pruned. Trial was pruned at iteration 0.
Trial 51 pruned. Trial was pruned at iteration 0.
Trial 52 pruned. Trial was pruned at iteration 0.
Trial 53 pruned. Trial was pruned at iteration 5.
Trial 54 pruned. Trial was pruned at iteration 0.
Trial 55 pruned. Trial was pruned at iteration 1.
Trial 56 pruned. Trial was pruned at iteration 0.
Trial 57 pruned. Trial was pruned at iteration 0.
Trial 58 finished with value: 0.972027972027972 and parameters: {'bagging_fraction': 0.6790799964643264, 'bagging_freq': 4, 'min_child_samples': 28}. Best is trial 3 with value: 1.0.
Trial 59 pruned. Trial was pruned at iteration 0.
Trial 60 pruned. Trial was pruned at iteration 0.
Trial 61 pruned. Trial was pruned at iteration 1.
Trial 62 pruned. Trial was pruned at iteration 9.
Trial 63 pruned. Trial was pruned at iteration 0.
Trial 64 pruned. Trial was pruned at iteration 1.
Trial 65 pruned. Trial was pruned at iteration 0.
Trial 66 pruned. Trial was pruned at iteration 1.
Trial 67 pruned. Trial was pruned at iteration 0.
Trial 68 pruned. Trial was pruned at iteration 0.
Trial 69 pruned. Trial was pruned at iteration 0.
Trial 70 pruned. Trial was pruned at iteration 0.
Trial 71 pruned. Trial was pruned at iteration 3.
Trial 72 pruned. Trial was pruned at iteration 1.
Trial 73 pruned. Trial was pruned at iteration 0.
Trial 74 finished with value: 1.0 and parameters: {'bagging_fraction': 0.7781939786844342, 'bagging_freq': 4, 'min_child_samples': 22}. Best is trial 3 with value: 1.0.
Trial 75 pruned. Trial was pruned at iteration 0.
Trial 76 pruned. Trial was pruned at iteration 0.
Trial 77 pruned. Trial was pruned at iteration 0.
Trial 78 pruned. Trial was pruned at iteration 0.
Trial 79 pruned. Trial was pruned at iteration 0.
Trial 80 pruned. Trial was pruned at iteration 0.
Trial 81 pruned. Trial was pruned at iteration 0.
Trial 82 pruned. Trial was pruned at iteration 0.
Trial 83 pruned. Trial was pruned at iteration 1.
Trial 84 pruned. Trial was pruned at iteration 0.
Trial 85 pruned. Trial was pruned at iteration 0.
Trial 86 pruned. Trial was pruned at iteration 0.
Trial 87 pruned. Trial was pruned at iteration 0.
Trial 88 pruned. Trial was pruned at iteration 0.
Trial 89 pruned. Trial was pruned at iteration 0.
Trial 90 pruned. Trial was pruned at iteration 0.
Trial 91 pruned. Trial was pruned at iteration 0.
Trial 92 pruned. Trial was pruned at iteration 0.
Trial 93 pruned. Trial was pruned at iteration 1.
Trial 94 pruned. Trial was pruned at iteration 0.
Trial 95 pruned. Trial was pruned at iteration 0.
Trial 96 pruned. Trial was pruned at iteration 0.
Trial 97 pruned. Trial was pruned at iteration 0.
Trial 98 pruned. Trial was pruned at iteration 1.
Trial 99 pruned. Trial was pruned at iteration 0.
Trial 100 pruned. Trial was pruned at iteration 1.
Trial 101 pruned. Trial was pruned at iteration 1.
Total running time of the script: ( 0 minutes 7.691 seconds)
API Reference¶
optuna¶
The optuna module is primarily used as an alias for basic Optuna functionality coded in other modules. Currently, two modules are aliased: (1) from optuna.study, functions regarding the Study lifecycle, and (2) from optuna.exceptions, the TrialPruned exception raised when a trial is pruned.
Function | Description
---|---
create_study | Create a new Study.
load_study | Load the existing Study.
delete_study | Delete a Study.
get_all_study_summaries | Get all history of studies stored in a specified storage.
TrialPruned | Exception for pruned trials.
optuna.cli¶
The cli module implements Optuna's command-line functionality using the cliff framework.
optuna [--version] [-v | -q] [--log-file LOG_FILE] [--debug] [--storage STORAGE]

--version: show program's version number and exit
-v, --verbose: Increase verbosity of output. Can be repeated.
-q, --quiet: Suppress output except warnings and errors.
--log-file <LOG_FILE>: Specify a file to log output. Disabled by default.
--debug: Show tracebacks on errors.
--storage <STORAGE>: DB URL. (e.g. sqlite:///example.db)
create-study¶
Create a new study.
optuna create-study [--study-name STUDY_NAME] [--direction {minimize,maximize}] [--skip-if-exists]

--study-name <STUDY_NAME>: A human-readable name of a study to distinguish it from others.
--direction <DIRECTION>: Set direction of optimization to a new study. Set 'minimize' for minimization and 'maximize' for maximization.
--skip-if-exists: If specified, the creation of the study is skipped without any error when the study name is duplicated.
This command is provided by the optuna plugin.
dashboard¶
Launch web dashboard (beta).
optuna dashboard [--study STUDY] [--study-name STUDY_NAME] [--out OUT] [--allow-websocket-origin BOKEH_ALLOW_WEBSOCKET_ORIGINS]

--study <STUDY>: This argument is deprecated. Use --study-name instead.
--study-name <STUDY_NAME>: The name of the study to show on the dashboard.
--out <OUT>, -o <OUT>: Output HTML file path. If it is not given, an HTTP server starts and the dashboard is served.
--allow-websocket-origin <BOKEH_ALLOW_WEBSOCKET_ORIGINS>: Allow websocket access from the specified host(s). Internally, it is used as the value of bokeh's --allow-websocket-origin option. Please refer to https://bokeh.pydata.org/en/latest/docs/reference/command/subcommands/serve.html for more details.
This command is provided by the optuna plugin.
delete-study¶
Delete a specified study.
optuna delete-study [--study-name STUDY_NAME]

--study-name <STUDY_NAME>: The name of the study to delete.
This command is provided by the optuna plugin.
storage upgrade¶
Upgrade the schema of a storage.
optuna storage upgrade
This command is provided by the optuna plugin.
studies¶
Show a list of studies.
optuna studies [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]

-f <FORMATTER>, --format <FORMATTER>: the output format, defaults to table
-c COLUMN, --column COLUMN: specify the column(s) to include, can be repeated to show multiple columns
--quote <QUOTE_MODE>: when to include quotes, defaults to nonnumeric
--noindent: whether to disable indenting the JSON
--max-width <integer>: Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence.
--fit-width: Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable.
--print-empty: Print empty table if there is no data to show.
--sort-column SORT_COLUMN: specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated
This command is provided by the optuna plugin.
study optimize¶
Start optimization of a study.

Deprecated since version 2.0.0.

optuna study optimize [--n-trials N_TRIALS] [--timeout TIMEOUT] [--n-jobs N_JOBS] [--study STUDY] [--study-name STUDY_NAME] file method

--n-trials <N_TRIALS>: The number of trials. If this argument is not given, as many trials run as possible.
--timeout <TIMEOUT>: Stop study after the given number of second(s). If this argument is not given, as many trials run as possible.
--n-jobs <N_JOBS>: The number of parallel jobs. If this argument is set to -1, the number is set to CPU counts.
--study <STUDY>: This argument is deprecated. Use --study-name instead.
--study-name <STUDY_NAME>: The name of the study to start optimization on.
file: Python script file where the objective function resides.
method: The method name of the objective function.
This command is provided by the optuna plugin.
study set-user-attr¶
Set a user attribute to a study.
optuna study set-user-attr [--study STUDY] [--study-name STUDY_NAME] --key KEY --value VALUE

--study <STUDY>: This argument is deprecated. Use --study-name instead.
--study-name <STUDY_NAME>: The name of the study to set the user attribute to.
--key <KEY>, -k <KEY>: Key of the user attribute.
--value <VALUE>, -v <VALUE>: Value to be set.
This command is provided by the optuna plugin.
optuna.distributions¶
The distributions
module defines various classes representing probability distributions, mainly used to suggest initial hyperparameter values for an optimization trial. Distribution classes inherit from a library-internal BaseDistribution, and are initialized with specific parameters, such as the low and high endpoints for a UniformDistribution.
Optuna users should not use distribution classes directly, but instead use utility functions provided by Trial
such as suggest_int()
.
- UniformDistribution: A uniform distribution in the linear domain.
- LogUniformDistribution: A uniform distribution in the log domain.
- DiscreteUniformDistribution: A discretized uniform distribution in the linear domain.
- IntUniformDistribution: A uniform distribution on integers.
- IntLogUniformDistribution: A uniform distribution on integers in the log domain.
- CategoricalDistribution: A categorical distribution.
- distribution_to_json: Serialize a distribution to JSON format.
- json_to_distribution: Deserialize a distribution in JSON format.
- check_distribution_compatibility: A function to check compatibility of two distributions.
optuna.exceptions¶
The exceptions
module defines Optuna-specific exceptions deriving from a base OptunaError
class. Of special importance for library users is the TrialPruned
exception to be raised if optuna.trial.Trial.should_prune()
returns True
for a trial that should be pruned.
- OptunaError: Base class for Optuna specific errors.
- TrialPruned: Exception for pruned trials.
- CLIUsageError: Exception for CLI.
- StorageInternalError: Exception for storage operation.
- DuplicatedStudyError: Exception for a duplicated study name.
optuna.importance¶
The importance
module provides functionality for evaluating hyperparameter importances based on completed trials in a given study. The utility function get_param_importances()
takes a Study
and optional evaluator as two of its inputs. The evaluator must derive from BaseImportanceEvaluator
, and is initialized as a FanovaImportanceEvaluator
by default when not passed in. Users implementing custom evaluators should refer to either FanovaImportanceEvaluator
or MeanDecreaseImpurityImportanceEvaluator
as a guide, paying close attention to the format of the return value from the Evaluator’s evaluate()
function.
- get_param_importances: Evaluate parameter importances based on completed trials in the given study.
- FanovaImportanceEvaluator: fANOVA importance evaluator.
- MeanDecreaseImpurityImportanceEvaluator: Mean Decrease Impurity (MDI) parameter importance evaluator.
optuna.integration¶
The integration
module contains classes used to integrate Optuna with external machine learning frameworks.
For most of the ML frameworks supported by Optuna, the corresponding Optuna integration class serves only to implement a callback object and functions, compliant with the framework’s specific callback API, to be called with each intermediate step in the model training. The functionality implemented in these callbacks across the different ML frameworks includes:
- Reporting intermediate model scores back to the Optuna trial using optuna.trial.Trial.report(),
- According to the results of optuna.trial.Trial.should_prune(), pruning the current model by raising optuna.TrialPruned(), and
- Reporting intermediate Optuna data such as the current trial number back to the framework, as done in MLflowCallback.
For scikit-learn, an integrated OptunaSearchCV
estimator is available that combines scikit-learn BaseEstimator functionality with access to a class-level Study
object.
AllenNLP¶
- AllenNLPExecutor: AllenNLP extension to use optuna with Jsonnet config file.
- dump_best_config: Save JSON config file after updating with parameters from the best trial in the study.
- AllenNLPPruningCallback: AllenNLP callback to prune unpromising trials.
BoTorch¶
- BoTorchSampler: A sampler that uses BoTorch, a Bayesian optimization library built on top of PyTorch.
- qei_candidates_func: Quasi MC-based batch Expected Improvement (qEI).
- qehvi_candidates_func: Quasi MC-based batch Expected Hypervolume Improvement (qEHVI).
- qparego_candidates_func: Quasi MC-based extended ParEGO (qParEGO) for constrained multi-objective optimization.
Catalyst¶
- CatalystPruningCallback: Catalyst callback to prune unpromising trials.
Chainer¶
- ChainerPruningExtension: Chainer extension to prune unpromising trials.
- ChainerMNStudy: A wrapper of Study to incorporate Optuna with ChainerMN.
fast.ai¶
- FastAIV1PruningCallback: FastAI callback to prune unpromising trials for fastai.
- FastAIV2PruningCallback: FastAI callback to prune unpromising trials for fastai.
- FastAIPruningCallback: alias of FastAIV2PruningCallback.
Keras¶
- KerasPruningCallback: Keras callback to prune unpromising trials.
LightGBM¶
- LightGBMPruningCallback: Callback for LightGBM to prune unpromising trials.
- train: Wrapper of LightGBM Training API to tune hyperparameters.
- LightGBMTuner: Hyperparameter tuner for LightGBM.
- LightGBMTunerCV: Hyperparameter tuner for LightGBM with cross-validation.
MLflow¶
- MLflowCallback: Callback to track Optuna trials with MLflow.
MXNet¶
- MXNetPruningCallback: MXNet callback to prune unpromising trials.
pycma¶
- PyCmaSampler: A Sampler using cma library as the backend.
- CmaEsSampler: Wrapper class of PyCmaSampler for backward compatibility.
PyTorch¶
- PyTorchIgnitePruningHandler: PyTorch Ignite handler to prune unpromising trials.
- PyTorchLightningPruningCallback: PyTorch Lightning callback to prune unpromising trials.
scikit-learn¶
- OptunaSearchCV: Hyperparameter search with cross-validation.
scikit-optimize¶
- SkoptSampler: Sampler using Scikit-Optimize as the backend.
skorch¶
- SkorchPruningCallback: Skorch callback to prune unpromising trials.
TensorFlow¶
- TensorBoardCallback: Callback to track Optuna trials with TensorBoard.
- TensorFlowPruningHook: TensorFlow SessionRunHook to prune unpromising trials.
- TFKerasPruningCallback: tf.keras callback to prune unpromising trials.
XGBoost¶
- XGBoostPruningCallback: Callback for XGBoost to prune unpromising trials.
optuna.logging¶
The logging
module implements logging using the Python logging
package. Library users may be especially interested in setting verbosity levels using set_verbosity()
to one of optuna.logging.CRITICAL
(aka optuna.logging.FATAL
), optuna.logging.ERROR
, optuna.logging.WARNING
(aka optuna.logging.WARN
), optuna.logging.INFO
, or optuna.logging.DEBUG
.
- get_verbosity: Return the current level for the Optuna's root logger.
- set_verbosity: Set the level for the Optuna's root logger.
- disable_default_handler: Disable the default handler of the Optuna's root logger.
- enable_default_handler: Enable the default handler of the Optuna's root logger.
- disable_propagation: Disable propagation of the library log outputs.
- enable_propagation: Enable propagation of the library log outputs.
optuna.multi_objective¶
This module is deprecated, with former functionality moved to optuna.samplers
, optuna.study
, optuna.trial
and optuna.visualization
.
optuna.multi_objective.samplers¶
- BaseMultiObjectiveSampler: Base class for multi-objective samplers.
- NSGAIIMultiObjectiveSampler: Multi-objective sampler using the NSGA-II algorithm.
- RandomMultiObjectiveSampler: Multi-objective sampler using random sampling.
- MOTPEMultiObjectiveSampler: Multi-objective sampler using the MOTPE algorithm.
optuna.multi_objective.study¶
- MultiObjectiveStudy: A study corresponds to a multi-objective optimization task, i.e., a set of trials.
- create_study: Create a new MultiObjectiveStudy.
- load_study: Load the existing MultiObjectiveStudy.
optuna.multi_objective.trial¶
- MultiObjectiveTrial: A trial is a process of evaluating an objective function.
- FrozenMultiObjectiveTrial: Status and results of a MultiObjectiveTrial.
optuna.multi_objective.visualization¶
Note

The optuna.multi_objective.visualization module uses plotly to create figures, but JupyterLab cannot render them by default. Please follow this installation guide to show figures in JupyterLab.

- plot_pareto_front: Plot the Pareto front of a study.
optuna.pruners¶
The pruners
module defines a BasePruner
class characterized by an abstract prune()
method, which, for a given trial and its associated study, returns a boolean value representing whether the trial should be pruned. This determination is made based on stored intermediate values of the objective function, as previously reported for the trial using optuna.trial.Trial.report()
. The remaining classes in this module represent child classes, inheriting from BasePruner
, which implement different pruning strategies.
- BasePruner: Base class for pruners.
- MedianPruner: Pruner using the median stopping rule.
- NopPruner: Pruner which never prunes trials.
- PercentilePruner: Pruner to keep the specified percentile of the trials.
- SuccessiveHalvingPruner: Pruner using Asynchronous Successive Halving Algorithm.
- HyperbandPruner: Pruner using Hyperband.
- ThresholdPruner: Pruner to detect outlying metrics of the trials.
optuna.samplers¶
The samplers
module defines a base class for parameter sampling as described extensively in BaseSampler
. The remaining classes in this module represent child classes, deriving from BaseSampler
, which implement different sampling strategies.
- BaseSampler: Base class for samplers.
- GridSampler: Sampler using grid search.
- RandomSampler: Sampler using random sampling.
- TPESampler: Sampler using TPE (Tree-structured Parzen Estimator) algorithm.
- CmaEsSampler: A sampler using CMA-ES algorithm.
- PartialFixedSampler: Sampler with partially fixed parameters.
- NSGAIISampler: Multi-objective sampler using the NSGA-II algorithm.
- MOTPESampler: Multi-objective sampler using the MOTPE algorithm.
- IntersectionSearchSpace: A class to calculate the intersection search space of a Study.
- intersection_search_space: Return the intersection search space of the Study.
optuna.storages¶
The storages
module defines a BaseStorage
class which abstracts a backend database and provides library-internal interfaces to read/write histories of studies and trials. Library users who wish to use storage solutions other than the default in-memory storage should use one of the child classes of BaseStorage
documented below.
- RDBStorage: Storage class for RDB backend.
- RedisStorage: Storage class for Redis backend.
optuna.structs¶
This module is deprecated, with former functionality moved to optuna.trial
and optuna.study
.
class optuna.structs.TrialState(value)

State of a Trial.

PRUNED: The Trial has been pruned with TrialPruned.

Deprecated since version 1.4.0: This class is deprecated. Please use optuna.trial.TrialState instead.
class optuna.structs.StudyDirection(value)

Direction of a Study.

NOT_SET: Direction has not been set.

Deprecated since version 1.4.0: This class is deprecated. Please use optuna.study.StudyDirection instead.
class optuna.structs.FrozenTrial(number, state, value, datetime_start, datetime_complete, params, distributions, user_attrs, system_attrs, intermediate_values, trial_id, *, values=None)

Warning: Deprecated in v1.4.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v3.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v1.4.0. This class was moved to optuna.trial. Please use optuna.trial.FrozenTrial instead.

property distributions: Dictionary that contains the distributions of params.

property duration: Return the elapsed time taken to complete the trial.
Returns: The duration.

property last_step: Return the maximum step of intermediate_values in the trial.
Returns: The maximum step of intermediates.

report(value, step): Interface of report function. Since FrozenTrial is not pruned, this report function does nothing. See also should_prune().
Parameters:
value (float): A value returned from the objective function.
step (int): Step of the trial (e.g., epoch of neural network training). Note that pruners assume that step starts at zero. For example, MedianPruner simply checks if step is less than n_warmup_steps as the warmup mechanism.
Return type: None

class optuna.structs.StudySummary(study_name, direction, best_trial, user_attrs, system_attrs, n_trials, datetime_start, study_id, *, directions=None)

Warning: Deprecated in v1.4.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v3.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v1.4.0. This class was moved to optuna.study. Please use optuna.study.StudySummary instead.
optuna.study¶
The study
module implements the Study
object and related functions. A public constructor is available for the Study
class, but direct use of this constructor is not recommended. Instead, library users should create and load a Study
using create_study()
and load_study()
respectively.
- Study: A study corresponds to an optimization task, i.e., a set of trials.
- create_study: Create a new Study.
- load_study: Load the existing Study.
- delete_study: Delete a Study.
- get_all_study_summaries: Get all history of studies stored in a specified storage.
- StudyDirection: Direction of a Study.
- StudySummary: Basic attributes and aggregated results of a Study.
optuna.trial¶
The trial
module contains Trial
related classes and functions.
A Trial instance represents a process of evaluating an objective function. It is passed to an objective function and provides interfaces to get parameter suggestions, manage the trial's state, and set/get user-defined attributes of the trial, so that Optuna users can define a custom objective function through these interfaces. Basically, Optuna users only use it in their custom objective functions.
- Trial: A trial is a process of evaluating an objective function.
- FixedTrial: A trial class which suggests a fixed value for each parameter.
- FrozenTrial: Status and results of a Trial.
- TrialState: State of a Trial.
- create_trial: Create a new FrozenTrial.
optuna.visualization¶
The visualization
module provides utility functions for plotting the optimization process using plotly and matplotlib. Plotting functions generally take a Study object and optional parameters passed as a list to a params argument.
Note
In the optuna.visualization
module, the following functions use plotly to create figures, but JupyterLab cannot
render them by default. Please follow this installation guide to show figures in
JupyterLab.
- plot_contour: Plot the parameter relationship as contour plot in a study.
- plot_edf: Plot the objective value EDF (empirical distribution function) of a study.
- plot_intermediate_values: Plot intermediate values of all trials in a study.
- plot_optimization_history: Plot optimization history of all trials in a study.
- plot_parallel_coordinate: Plot the high-dimensional parameter relationships in a study.
- plot_param_importances: Plot hyperparameter importances.
- plot_pareto_front: Plot the Pareto front of a study.
- plot_slice: Plot the parameter relationship as slice plot in a study.
- is_available: Returns whether visualization with plotly is available or not.
Note
The following optuna.visualization.matplotlib
module uses Matplotlib as a backend.
optuna.visualization.matplotlib¶
Note
The following functions use Matplotlib as a backend.
- plot_contour: Plot the parameter relationship as contour plot in a study with Matplotlib.
- plot_edf: Plot the objective value EDF (empirical distribution function) of a study with Matplotlib.
- plot_intermediate_values: Plot intermediate values of all trials in a study with Matplotlib.
- plot_optimization_history: Plot optimization history of all trials in a study with Matplotlib.
- plot_parallel_coordinate: Plot the high-dimensional parameter relationships in a study with Matplotlib.
- plot_param_importances: Plot hyperparameter importances with Matplotlib.
- plot_slice: Plot the parameter relationship as slice plot in a study with Matplotlib.
- is_available: Returns whether visualization with Matplotlib is available or not.
FAQ¶
Can I use Optuna with X? (where X is your favorite ML library)¶
Optuna is compatible with most ML libraries, and it is easy to use Optuna with them. Please refer to examples.
How to define objective functions that have own arguments?¶
There are two ways to realize it.
First, callable classes can be used for that purpose as follows:
import optuna
class Objective(object):
def __init__(self, min_x, max_x):
# Hold this implementation specific arguments as the fields of the class.
self.min_x = min_x
self.max_x = max_x
def __call__(self, trial):
# Calculate an objective value by using the extra arguments.
x = trial.suggest_uniform("x", self.min_x, self.max_x)
return (x - 2) ** 2
# Execute an optimization by using an `Objective` instance.
study = optuna.create_study()
study.optimize(Objective(-100, 100), n_trials=100)
Second, you can use lambda
or functools.partial
for creating functions (closures) that hold extra arguments.
Below is an example that uses lambda
:
import optuna
# Objective function that takes three arguments.
def objective(trial, min_x, max_x):
x = trial.suggest_uniform("x", min_x, max_x)
return (x - 2) ** 2
# Extra arguments.
min_x = -100
max_x = 100
# Execute an optimization by using the above objective function wrapped by `lambda`.
study = optuna.create_study()
study.optimize(lambda trial: objective(trial, min_x, max_x), n_trials=100)
Please also refer to the sklearn_additional_args.py example, which reuses the dataset instead of loading it in each trial execution.
Can I use Optuna without remote RDB servers?¶
Yes, it’s possible.
In the simplest form, Optuna works with in-memory storage:
study = optuna.create_study()
study.optimize(objective)
If you want to save and resume studies, it’s handy to use SQLite as the local storage:
study = optuna.create_study(study_name="foo_study", storage="sqlite:///example.db")
study.optimize(objective) # The state of `study` will be persisted to the local SQLite file.
Please see Saving/Resuming Study with RDB Backend for more details.
How can I save and resume studies?¶
There are two ways of persisting studies, depending on whether you are using in-memory storage (default) or remote databases (RDB). In-memory studies can be saved and loaded like usual Python objects using pickle or joblib. For example, using joblib:
study = optuna.create_study()
joblib.dump(study, "study.pkl")
And to resume the study:
study = joblib.load("study.pkl")
print("Best trial until now:")
print(" Value: ", study.best_trial.value)
print(" Params: ")
for key, value in study.best_trial.params.items():
print(f" {key}: {value}")
If you are using RDBs, see Saving/Resuming Study with RDB Backend for more details.
How to suppress log messages of Optuna?¶
By default, Optuna shows log messages at the optuna.logging.INFO
level.
You can change logging levels by using optuna.logging.set_verbosity()
.
For instance, you can stop showing each trial result as follows:
optuna.logging.set_verbosity(optuna.logging.WARNING)
study = optuna.create_study()
study.optimize(objective)
# Logs like '[I 2020-07-21 13:41:45,627] Trial 0 finished with value:...' are disabled.
Please refer to optuna.logging
for further details.
How to save machine learning models trained in objective functions?¶
Optuna saves hyperparameter values with their corresponding objective value to storage, but it discards intermediate objects such as machine learning models and neural network weights. To save models or weights, please use the features of the machine learning library you used.
We recommend saving optuna.trial.Trial.number
with a model in order to identify its corresponding trial.
For example, you can save SVM models trained in the objective function as follows:
def objective(trial):
svc_c = trial.suggest_loguniform("svc_c", 1e-10, 1e10)
clf = sklearn.svm.SVC(C=svc_c)
clf.fit(X_train, y_train)
# Save a trained model to a file.
with open("{}.pickle".format(trial.number), "wb") as fout:
pickle.dump(clf, fout)
return 1.0 - accuracy_score(y_valid, clf.predict(X_valid))
study = optuna.create_study()
study.optimize(objective, n_trials=100)
# Load the best model.
with open("{}.pickle".format(study.best_trial.number), "rb") as fin:
best_clf = pickle.load(fin)
print(accuracy_score(y_valid, best_clf.predict(X_valid)))
How can I obtain reproducible optimization results?¶
To make the parameters suggested by Optuna reproducible, you can specify a fixed random seed via the seed argument of RandomSampler or TPESampler as follows:
sampler = TPESampler(seed=10) # Make the sampler behave in a deterministic way.
study = optuna.create_study(sampler=sampler)
study.optimize(objective)
However, there are two caveats.
First, when optimizing a study in distributed or parallel mode, there is inherent non-determinism. Thus it is very difficult to reproduce the same results in such a condition. We recommend executing optimization of a study sequentially if you would like to reproduce the result.
Second, if your objective function behaves in a non-deterministic way (i.e., it does not return the same value even if the same parameters were suggested), you cannot reproduce an optimization. To deal with this problem, please set an option (e.g., random seed) to make the behavior deterministic if your optimization target (e.g., an ML library) provides it.
How are exceptions from trials handled?¶
Trials that raise exceptions without catching them will be treated as failures, i.e. with the FAIL
status.
By default, all exceptions except TrialPruned
raised in objective functions are propagated to the caller of optimize()
.
In other words, studies are aborted when such exceptions are raised.
It might be desirable to continue a study with the remaining trials.
To do so, you can specify in optimize()
which exception types to catch using the catch
argument.
Exceptions of these types are caught inside the study and will not propagate further.
You can find the failed trials in log messages.
[W 2018-12-07 16:38:36,889] Setting status of trial#0 as TrialState.FAIL because of \
the following error: ValueError('A sample error in objective.')
You can also find the failed trials by checking the trial states as follows:
study.trials_dataframe()

number  state                value  ...  params  system_attrs
0       TrialState.FAIL             ...  0       Setting status of trial#0 as TrialState.FAIL because of the following error: ValueError('A test error in objective.')
1       TrialState.COMPLETE  1269   ...  1
See also
The catch
argument in optimize()
.
How are NaNs returned by trials handled?¶
Trials that return NaN
(float('nan')
) are treated as failures, but they will not abort studies.
Trials which return NaN
are shown as follows:
[W 2018-12-07 16:41:59,000] Setting status of trial#2 as TrialState.FAIL because the \
objective function returned nan.
What happens when I dynamically alter a search space?¶
Since parameter search spaces are specified in each call to the suggestion API, e.g. suggest_uniform() and suggest_int(), it is possible, in a single study, to alter the range by sampling parameters from different search spaces in different trials. The behavior when the space is altered is defined by each sampler individually.
Note
Discussion about the TPE sampler. https://github.com/optuna/optuna/issues/822
How can I use two GPUs for evaluating two trials simultaneously?¶
If your optimization target supports GPU (CUDA) acceleration and you want to specify which GPU is used, the easiest way is to set CUDA_VISIBLE_DEVICES
environment variable:
# On a terminal.
#
# Specify to use the first GPU, and run an optimization.
$ export CUDA_VISIBLE_DEVICES=0
$ optuna study optimize foo.py objective --study-name foo --storage sqlite:///example.db
# On another terminal.
#
# Specify to use the second GPU, and run another optimization.
$ export CUDA_VISIBLE_DEVICES=1
$ optuna study optimize bar.py objective --study-name bar --storage sqlite:///example.db
Please refer to CUDA C Programming Guide for further details.
How can I test my objective functions?¶
When you test objective functions, you may prefer fixed parameter values to sampled ones.
In that case, you can use FixedTrial
, which suggests fixed parameter values based on a given dictionary of parameters.
For instance, you can input arbitrary values of \(x\) and \(y\) to the objective function \(x + y\) as follows:
def objective(trial):
x = trial.suggest_uniform("x", -1.0, 1.0)
y = trial.suggest_int("y", -5, 5)
return x + y
objective(FixedTrial({"x": 1.0, "y": -1})) # 0.0
objective(FixedTrial({"x": -1.0, "y": -4})) # -5.0
Using FixedTrial
, you can write unit tests as follows:
# A test function of pytest
def test_objective():
assert 1.0 == objective(FixedTrial({"x": 1.0, "y": 0}))
assert -1.0 == objective(FixedTrial({"x": 0.0, "y": -1}))
assert 0.0 == objective(FixedTrial({"x": -1.0, "y": 1}))
How do I avoid running out of memory (OOM) when optimizing studies?¶
If the memory footprint increases as you run more trials, try to periodically run the garbage collector.
Specify gc_after_trial
to True
when calling optimize()
or call gc.collect()
inside a callback.
def objective(trial):
x = trial.suggest_uniform("x", -1.0, 1.0)
y = trial.suggest_int("y", -5, 5)
return x + y
study = optuna.create_study()
study.optimize(objective, n_trials=10, gc_after_trial=True)
# `gc_after_trial=True` is more or less identical to the following.
study.optimize(objective, n_trials=10, callbacks=[lambda study, trial: gc.collect()])
There is a performance trade-off for running the garbage collector, which could be non-negligible depending on how fast your objective function otherwise is. Therefore, gc_after_trial
is False
by default.
Note that the above examples are similar to running the garbage collector inside the objective function, except that gc.collect() is called even when errors, including TrialPruned, are raised.
Note
ChainerMNStudy does not currently provide gc_after_trial nor callbacks for optimize(). When using this class, you will have to call the garbage collector inside the objective function.