Trial

class optuna.trial.Trial(study, trial_id)

A trial is a process of evaluating an objective function.

This object is passed to an objective function and provides interfaces to get parameter suggestions, manage the trial's state, and set/get user-defined attributes of the trial.

Note that direct use of this constructor is not recommended. This object is seamlessly instantiated and passed to the objective function behind the optuna.study.Study.optimize() method; hence library users do not need to care about instantiation of this object.

Parameters:
- study – A Study object.
- trial_id – A trial ID that is automatically generated.
distributions

Return distributions of parameters to be optimized.

Returns: A dictionary containing all distributions.
number

Return the trial's number, which is consecutive and unique within a study.

Returns: A trial number.
params

Return parameters to be optimized.

Returns: A dictionary containing all parameters.
report(value, step)

Report an objective function value for a given step.

The reported values are used by the pruners to determine whether this trial should be pruned.

See also: Please refer to BasePruner.

Note: The reported value is converted to float type by applying the float() function internally. Thus, it accepts all float-like types (e.g., numpy.float32). If the conversion fails, a TypeError is raised.

Example: Report intermediate scores of SGDClassifier training.

    import optuna
    from sklearn.linear_model import SGDClassifier

    def objective(trial):
        clf = SGDClassifier(random_state=0)
        for step in range(100):
            clf.partial_fit(X_train, y_train, np.unique(y))
            intermediate_value = clf.score(X_test, y_test)
            trial.report(intermediate_value, step=step)
            if trial.should_prune():
                raise optuna.TrialPruned()
        return clf.score(X_test, y_test)

    study = optuna.create_study()
    study.optimize(objective, n_trials=3)

Parameters:
- value – A value returned from the objective function.
- step – Step of the trial (e.g., epoch of neural network training).
set_user_attr(key, value)

Set user attributes to the trial.

The user attributes in the trial can be accessed via optuna.trial.Trial.user_attrs.

Example: Save fixed hyperparameters of neural network training.

    import optuna
    from sklearn.neural_network import MLPClassifier

    def objective(trial):
        trial.set_user_attr('BATCHSIZE', 128)
        momentum = trial.suggest_uniform('momentum', 0, 1.0)
        clf = MLPClassifier(hidden_layer_sizes=(100, 50),
                            batch_size=trial.user_attrs['BATCHSIZE'],
                            momentum=momentum, solver='sgd', random_state=0)
        clf.fit(X_train, y_train)
        return clf.score(X_test, y_test)

    study = optuna.create_study()
    study.optimize(objective, n_trials=3)

    assert 'BATCHSIZE' in study.best_trial.user_attrs.keys()
    assert study.best_trial.user_attrs['BATCHSIZE'] == 128

Parameters:
- key – A key string of the attribute.
- value – A value of the attribute. The value should be JSON serializable.
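As a rough illustration of the JSON-serializability requirement, a hypothetical attribute store (`set_user_attr` below is a sketch, not Optuna's actual storage layer) might validate values like this:

```python
import json

def set_user_attr(attrs, key, value):
    """Store a user attribute after checking it is JSON serializable.

    Hypothetical sketch: json.dumps raises TypeError for values that
    cannot be serialized (e.g., sets or arbitrary objects), which is
    the property the docs require of attribute values.
    """
    json.dumps(value)  # raises TypeError if not JSON serializable
    attrs[key] = value

attrs = {}
set_user_attr(attrs, 'BATCHSIZE', 128)   # ints are fine
set_user_attr(attrs, 'tags', ['sgd'])    # lists of strings are fine
```

A value such as a Python set would fail the `json.dumps` check, which is why non-JSON types should be converted (e.g., to a list) before being stored.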
should_prune(step=None)

Suggest whether the trial should be pruned or not.

The suggestion is made by a pruning algorithm associated with the trial and is based on previously reported values. The algorithm can be specified when constructing a Study.

Note: If no values have been reported, the algorithm cannot make meaningful suggestions. Similarly, if this method is called multiple times with the exact same set of reported values, the suggestions will be the same.

See also: Please refer to the example code in optuna.trial.Trial.report().

Parameters: step – Deprecated since 0.12.0: Step of the trial (e.g., epoch of neural network training). Deprecated in favor of always considering the most recent step.

Returns: A boolean value. If True, the trial should be pruned according to the configured pruning algorithm. Otherwise, the trial should continue.
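To make the mechanics concrete, here is a toy median-style pruning rule in plain Python. This is an illustrative assumption, not Optuna's MedianPruner implementation: prune the trial when its latest reported value falls below the median of what other trials reported at the same step (assuming maximization).

```python
import statistics

def should_prune(current_value, other_values_at_step):
    """Toy pruning rule: prune when the latest reported value is worse
    (lower, assuming maximization) than the median of the values other
    trials reported at the same step."""
    if not other_values_at_step:
        return False  # no history: cannot make a meaningful suggestion
    return current_value < statistics.median(other_values_at_step)

# A trial scoring 0.4 at a step where earlier trials scored
# 0.6, 0.7, and 0.8 would be pruned; one scoring 0.9 would continue.
```

Real pruners additionally track warm-up steps and per-trial histories, but the core decision is a comparison of the latest reported value against statistics of earlier trials, as above.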
study_id

Return the study ID.

Deprecated since version 0.20.0: Direct use of this attribute is deprecated; it is recommended that you use study instead.

Returns: The study ID.
suggest_categorical(name, choices)

Suggest a value for the categorical parameter.

The value is sampled from choices.

Example: Suggest a kernel function of SVC.

    import optuna
    from sklearn.svm import SVC

    def objective(trial):
        kernel = trial.suggest_categorical('kernel', ['linear', 'poly', 'rbf'])
        clf = SVC(kernel=kernel, gamma='scale', random_state=0)
        clf.fit(X_train, y_train)
        return clf.score(X_test, y_test)

    study = optuna.create_study()
    study.optimize(objective, n_trials=3)

Parameters:
- name – A parameter name.
- choices – Parameter value candidates.

Returns: A suggested value.
suggest_discrete_uniform(name, low, high, q)

Suggest a value for the discrete parameter.

The value is sampled from the range \([\mathsf{low}, \mathsf{high}]\), and the step of discretization is \(q\). More specifically, this method returns one of the values in the sequence \(\mathsf{low}, \mathsf{low} + q, \mathsf{low} + 2 q, \dots, \mathsf{low} + k q \le \mathsf{high}\), where \(k\) denotes an integer. Note that \(\mathsf{high}\) may be changed due to round-off errors if \(q\) is not an integer. Please check warning messages to find the changed values.

Example: Suggest a fraction of samples used for fitting the individual learners of GradientBoostingClassifier.

    import optuna
    from sklearn.ensemble import GradientBoostingClassifier

    def objective(trial):
        subsample = trial.suggest_discrete_uniform('subsample', 0.1, 1.0, 0.1)
        clf = GradientBoostingClassifier(subsample=subsample, random_state=0)
        clf.fit(X_train, y_train)
        return clf.score(X_test, y_test)

    study = optuna.create_study()
    study.optimize(objective, n_trials=3)

Parameters:
- name – A parameter name.
- low – Lower endpoint of the range of suggested values. low is included in the range.
- high – Upper endpoint of the range of suggested values. high is included in the range.
- q – A step of discretization.

Returns: A suggested float value.
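The candidate sequence \(\mathsf{low}, \mathsf{low} + q, \dots\) described above can be enumerated in plain Python. This is a sketch of the arithmetic only: `discrete_uniform_grid` is a hypothetical helper, and the round-off tolerance is an assumption of this sketch (Optuna's own round-off handling differs and may adjust \(\mathsf{high}\) with a warning).

```python
def discrete_uniform_grid(low, high, q):
    """Enumerate low, low + q, low + 2q, ... up to the largest
    value not exceeding high (with a small float tolerance)."""
    grid = []
    k = 0
    while low + k * q <= high + 1e-10:  # tolerance for float round-off
        grid.append(low + k * q)
        k += 1
    return grid

# discrete_uniform_grid(0.1, 1.0, 0.1) yields the ten candidates
# 0.1, 0.2, ..., 1.0 used by the subsample example above.
```

Without the tolerance, accumulated float error (e.g., 0.1 + 9 * 0.1 being slightly above 1.0) would silently drop the final candidate, which is exactly the round-off issue the documentation warns about.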
suggest_int(name, low, high)

Suggest a value for the integer parameter.

The value is sampled from the integers in \([\mathsf{low}, \mathsf{high}]\).

Example: Suggest the number of trees in RandomForestClassifier.

    import optuna
    from sklearn.ensemble import RandomForestClassifier

    def objective(trial):
        n_estimators = trial.suggest_int('n_estimators', 50, 400)
        clf = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
        clf.fit(X_train, y_train)
        return clf.score(X_test, y_test)

    study = optuna.create_study()
    study.optimize(objective, n_trials=3)

Parameters:
- name – A parameter name.
- low – Lower endpoint of the range of suggested values. low is included in the range.
- high – Upper endpoint of the range of suggested values. high is included in the range.

Returns: A suggested integer value.
suggest_loguniform(name, low, high)

Suggest a value for the continuous parameter.

The value is sampled from the range \([\mathsf{low}, \mathsf{high})\) in the log domain. When \(\mathsf{low} = \mathsf{high}\), the value of \(\mathsf{low}\) will be returned.

Example: Suggest the penalty parameter C of SVC.

    import optuna
    from sklearn.svm import SVC

    def objective(trial):
        c = trial.suggest_loguniform('c', 1e-5, 1e2)
        clf = SVC(C=c, gamma='scale', random_state=0)
        clf.fit(X_train, y_train)
        return clf.score(X_test, y_test)

    study = optuna.create_study()
    study.optimize(objective, n_trials=3)

Parameters:
- name – A parameter name.
- low – Lower endpoint of the range of suggested values. low is included in the range.
- high – Upper endpoint of the range of suggested values. high is excluded from the range.

Returns: A suggested float value.
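Log-domain sampling as described above can be sketched with the standard library. This illustrates only the distribution, not Optuna's actual sampler (`sample_loguniform` is a hypothetical helper; real sampling also depends on the study's configured sampler):

```python
import math
import random

def sample_loguniform(low, high, rng=random):
    """Draw a value uniformly in the log domain of [low, high)."""
    if low == high:
        return low  # degenerate range: return low, as documented
    return math.exp(rng.uniform(math.log(low), math.log(high)))

# Samples from sample_loguniform(1e-5, 1e2) are spread evenly across
# orders of magnitude, which suits scale-free parameters such as C.
```

Compared with suggest_uniform over the same range, almost all linear-domain samples would land near the upper decade; sampling in the log domain gives each order of magnitude roughly equal probability.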
suggest_uniform(name, low, high)

Suggest a value for the continuous parameter.

The value is sampled from the range \([\mathsf{low}, \mathsf{high})\) in the linear domain. When \(\mathsf{low} = \mathsf{high}\), the value of \(\mathsf{low}\) will be returned.

Example: Suggest a momentum for neural network training.

    import optuna
    from sklearn.neural_network import MLPClassifier

    def objective(trial):
        momentum = trial.suggest_uniform('momentum', 0.0, 1.0)
        clf = MLPClassifier(hidden_layer_sizes=(100, 50), momentum=momentum,
                            solver='sgd', random_state=0)
        clf.fit(X_train, y_train)
        return clf.score(X_test, y_test)

    study = optuna.create_study()
    study.optimize(objective, n_trials=3)

Parameters:
- name – A parameter name.
- low – Lower endpoint of the range of suggested values. low is included in the range.
- high – Upper endpoint of the range of suggested values. high is excluded from the range.

Returns: A suggested float value.
user_attrs

Return user attributes.

Returns: A dictionary containing all user attributes.
class optuna.trial.FixedTrial(params)

A trial class which suggests a fixed value for each parameter.

This object has the same methods as Trial, and it suggests pre-defined parameter values. The parameter values can be determined at the construction of the FixedTrial object. In contrast to Trial, FixedTrial does not depend on Study, and it is useful for deploying optimization results.

Example: Evaluate an objective function with parameter values given by a user.

    import optuna

    def objective(trial):
        x = trial.suggest_uniform('x', -100, 100)
        y = trial.suggest_categorical('y', [-1, 0, 1])
        return x ** 2 + y

    assert objective(optuna.trial.FixedTrial({'x': 1, 'y': 0})) == 1

Note: Please refer to Trial for details of methods and properties.

Parameters: params – A dictionary containing all parameters.
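The idea behind FixedTrial can be sketched as a small dict-backed stand-in. This is a simplified illustration only (`DictTrial` is hypothetical; the actual FixedTrial mirrors Trial's full interface, including validation of the search space arguments):

```python
class DictTrial:
    """Minimal FixedTrial-style stand-in: every suggest_* call returns
    the pre-defined value stored under the parameter name."""

    def __init__(self, params):
        self._params = params

    def _fixed(self, name):
        return self._params[name]

    # Each suggestion method ignores its search-space arguments and
    # simply replays the fixed value, mirroring FixedTrial's behavior.
    def suggest_uniform(self, name, low, high):
        return self._fixed(name)

    def suggest_categorical(self, name, choices):
        return self._fixed(name)

def objective(trial):
    x = trial.suggest_uniform('x', -100, 100)
    y = trial.suggest_categorical('y', [-1, 0, 1])
    return x ** 2 + y

assert objective(DictTrial({'x': 1, 'y': 0})) == 1
```

Because the objective only talks to the trial interface, the same function runs unchanged against a live Trial during optimization and against a fixed-value trial at deployment time; that substitutability is what makes FixedTrial useful for deploying results.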