optuna.samplers.TPESampler

class optuna.samplers.TPESampler(consider_prior=True, prior_weight=1.0, consider_magic_clip=True, consider_endpoints=False, n_startup_trials=10, n_ei_candidates=24, gamma=<function default_gamma>, weights=<function default_weights>, seed=None, *, multivariate=False, group=False, warn_independent_sampling=True, constant_liar=False, constraints_func=None, categorical_distance_func=None)[source]

Sampler using TPE (Tree-structured Parzen Estimator) algorithm.

On each trial, for each parameter, TPE fits one Gaussian Mixture Model (GMM) l(x) to the set of parameter values associated with the best objective values, and another GMM g(x) to the remaining parameter values. It chooses the parameter value x that maximizes the ratio l(x)/g(x).
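
As a rough illustration of this selection rule, here is a self-contained, stdlib-only sketch for a single continuous parameter. It uses plain fixed-bandwidth kernel density estimates in place of Optuna's actual Parzen estimator; the function names (gaussian_kde, suggest_tpe) and all constants are illustrative, not part of the Optuna API:

```python
import math
import random


def gaussian_kde(points, bandwidth=0.5):
    """A simple fixed-bandwidth Gaussian kernel density estimate."""
    norm = len(points) * bandwidth * math.sqrt(2 * math.pi)

    def density(x):
        return sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2) for p in points) / norm

    return density


def suggest_tpe(observations, gamma=0.25, n_candidates=24, seed=0):
    """observations: list of (param_value, objective_value); lower objective is better."""
    rng = random.Random(seed)
    ranked = sorted(observations, key=lambda t: t[1])
    n_good = max(1, int(gamma * len(ranked)))
    good = [x for x, _ in ranked[:n_good]]  # best trials -> l(x)
    bad = [x for x, _ in ranked[n_good:]]   # the rest    -> g(x)
    l, g = gaussian_kde(good), gaussian_kde(bad)
    # Draw candidates around the good observations and keep the one
    # that maximizes the density ratio l(x) / g(x).
    candidates = [rng.gauss(rng.choice(good), 0.5) for _ in range(n_candidates)]
    return max(candidates, key=lambda x: l(x) / (g(x) + 1e-12))


# Toy quadratic objective: parameter values near 0 are best.
obs = [(x, x * x) for x in [-8.0, -3.0, -1.0, 0.5, 2.0, 6.0, 9.0]]
print(suggest_tpe(obs))  # a candidate near the minimum at x = 0
```

Optuna's real implementation handles mixed distribution types, weighting, and priors; the sketch only shows the split-fit-and-rank idea.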

For further information about the TPE algorithm, please refer to the following papers:

  • Algorithms for Hyper-Parameter Optimization
  • Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures

For multi-objective TPE (MOTPE), please refer to the following papers:

  • Multiobjective Tree-Structured Parzen Estimator for Computationally Expensive Optimization Problems
  • Multiobjective Tree-Structured Parzen Estimator

Example

An example of a single-objective optimization is as follows:

import optuna
from optuna.samplers import TPESampler


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return x**2


study = optuna.create_study(sampler=TPESampler())
study.optimize(objective, n_trials=10)

Note

TPESampler, which became much faster in v4.0.0 (cf. our article), can handle multi-objective optimization with many trials as well. Note that NSGAIISampler is used by default for multi-objective optimization, so if users would like to use TPESampler for multi-objective optimization, the sampler must be specified explicitly when the study is created.

Parameters:
  • consider_prior (bool) – Enhance the stability of the Parzen estimator by imposing a Gaussian prior when True. The prior is effective only if the sampling distribution is either FloatDistribution or IntDistribution.

  • prior_weight (float) – The weight of the prior. This argument is used in FloatDistribution, IntDistribution, and CategoricalDistribution.

  • consider_magic_clip (bool) – Enable a heuristic to limit the smallest variances of Gaussians used in the Parzen estimator.

  • consider_endpoints (bool) – Take endpoints of domains into account when calculating variances of Gaussians in Parzen estimator. See the original paper for details on the heuristics to calculate the variances.

  • n_startup_trials (int) – Random sampling is used instead of the TPE algorithm until the given number of trials finish in the same study.

  • n_ei_candidates (int) – Number of candidate samples used to calculate the expected improvement.

  • gamma (Callable[[int], int]) – A function that takes the number of finished trials and returns the number of best trials used to form the density function l(x) for samples with low objective values. See the original paper for more details.
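
    For a concrete reference, here is a stdlib sketch of a gamma function matching the commonly cited default (the best ~10% of finished trials, capped at 25); the exact built-in default_gamma may differ across Optuna versions:

```python
import math


def default_gamma(n: int) -> int:
    # Roughly the commonly cited default: the best 10% of finished trials
    # (rounded up), capped at 25, form the "good" density l(x).
    return min(int(math.ceil(0.1 * n)), 25)


print(default_gamma(50), default_gamma(1000))  # 5 25
```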

  • weights (Callable[[int], np.ndarray]) –

    A function that takes the number of finished trials and returns weights for them. See Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures for more details.

    Note

    In the multi-objective case, this argument is only used to compute the weights of bad trials, i.e., the trials used to construct g(x) in the paper. The weights of good trials, i.e., the trials used to construct l(x), are computed by a rule based on the hypervolume contribution proposed in the MOTPE paper.
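
    For illustration, a stdlib sketch of one plausible weighting scheme in the spirit of the paper's linear-forgetting rule; this is an assumption for illustration, and the exact built-in default may differ:

```python
def linear_forgetting_weights(n: int, keep: int = 25) -> list[float]:
    # The `keep` most recent trials get full weight 1.0; older trials
    # get linearly decaying weights (a "linear forgetting" rule in the
    # spirit of Bergstra et al., 2013).
    if n <= keep:
        return [1.0] * n
    n_old = n - keep
    ramp = [(i + 1) / n_old for i in range(n_old)]
    return ramp + [1.0] * keep


w = linear_forgetting_weights(30)
print(len(w), w[0], w[-1])  # 30 0.2 1.0
```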

  • seed (int | None) – Seed for random number generator.

  • multivariate (bool) –

    If this is True, the multivariate TPE is used when suggesting parameters. The multivariate TPE is reported to outperform the independent TPE. See BOHB: Robust and Efficient Hyperparameter Optimization at Scale and our article for more details.

    Note

    Added in v2.2.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.2.0.

  • group (bool) –

    If this and multivariate are True, multivariate TPE with a group-decomposed search space is used when suggesting parameters. The sampling algorithm decomposes the search space based on past trials and samples from the joint distribution in each decomposed subspace. The decomposed subspaces form a partition of the whole search space: each subspace is a maximal subset of the whole search space such that, for every completed trial, the intersection of the subspace and the trial's search space is either the subspace itself or the empty set. Sampling from the joint distribution on each subspace is realized by multivariate TPE. If group is True, multivariate must also be True.

    Note

    Added in v2.8.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.8.0.

    Example:

    import optuna
    
    
    def objective(trial):
        x = trial.suggest_categorical("x", ["A", "B"])
        if x == "A":
            return trial.suggest_float("y", -10, 10)
        else:
            return trial.suggest_int("z", -10, 10)
    
    
    sampler = optuna.samplers.TPESampler(multivariate=True, group=True)
    study = optuna.create_study(sampler=sampler)
    study.optimize(objective, n_trials=10)
    

  • warn_independent_sampling (bool) – If this is True and multivariate=True, a warning message is emitted when the value of a parameter is sampled by using an independent sampler. If multivariate=False, this flag has no effect.

  • constant_liar (bool) –

    If True, penalize running trials to avoid suggesting parameter configurations nearby.

    Note

    Abnormally terminated trials often leave behind a record with a state of RUNNING in the storage. Such “zombie” trial parameters will be avoided by the constant liar algorithm during subsequent sampling. When using an RDBStorage, it is possible to enable heartbeat_interval so that the records of abnormally terminated trials are changed to FAIL.

    Note

    It is recommended to set this value to True during distributed optimization to avoid having multiple workers evaluate similar parameter configurations, especially when each objective function evaluation is costly, the durations of the running states are significant, and/or the number of workers is high.

    Note

    Added in v2.8.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.8.0.

  • constraints_func (Callable[[FrozenTrial], Sequence[float]] | None) –

    An optional function that computes the objective constraints. It must take a FrozenTrial and return the constraints. The return value must be a sequence of floats. A value strictly larger than 0 means that a constraint is violated; a value equal to or smaller than 0 is considered feasible. If constraints_func returns more than one value for a trial, that trial is considered feasible if and only if all values are equal to 0 or smaller.

    The constraints_func will be evaluated after each successful trial. It will not be called for trials that fail or are pruned, but this behavior is subject to change in future releases.

    Note

    Added in v3.0.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
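
    For illustration, a minimal constraints_func sketch. The parameter names "x" and "y" and the SimpleNamespace stand-in for a FrozenTrial are assumptions made so the snippet runs without Optuna:

```python
from types import SimpleNamespace


def constraints_func(trial):
    # Two constraints: x <= 5 and x + y <= 0.  Each returned value must
    # be <= 0 for the trial to be considered feasible.
    x, y = trial.params["x"], trial.params["y"]
    return (x - 5.0, x + y)


# SimpleNamespace stands in for a FrozenTrial so the snippet runs without Optuna.
trial = SimpleNamespace(params={"x": 7.0, "y": -1.0})
print(constraints_func(trial))  # (2.0, 6.0)
```

    Such a function would then be passed as TPESampler(constraints_func=constraints_func).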

  • categorical_distance_func (dict[str, Callable[[CategoricalChoiceType, CategoricalChoiceType], float]] | None) –

    A dictionary of distance functions for categorical parameters. The key is the name of the categorical parameter and the value is a distance function that takes two CategoricalChoiceType values and returns a float. The distance function must return a non-negative value.

    While categorical choices are handled equally by default, this option allows users to specify prior knowledge on the structure of categorical parameters. When specified, categorical choices closer to current best choices are more likely to be sampled.

    Note

    Added in v3.4.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v3.4.0.
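
    As an example of encoding such prior knowledge, consider a hypothetical ordinal categorical parameter named "size" (the parameter name and choices below are illustrative, not part of the Optuna API):

```python
sizes = ["small", "medium", "large", "xlarge"]


def size_distance(a, b):
    # Treat the choices as ordinal: the distance is the gap between their
    # positions in the ordered list (always non-negative, as required).
    return float(abs(sizes.index(a) - sizes.index(b)))


categorical_distance_func = {"size": size_distance}
print(size_distance("small", "large"))  # 2.0
```

    The dictionary would then be passed as TPESampler(categorical_distance_func=categorical_distance_func).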

Methods

after_trial(study, trial, state, values)

Trial post-processing.

before_trial(study, trial)

Trial pre-processing.

hyperopt_parameters()

Return the default parameters of hyperopt (v0.1.2).

infer_relative_search_space(study, trial)

Infer the search space that will be used by relative sampling in the target trial.

reseed_rng()

Reseed sampler's random number generator.

sample_independent(study, trial, param_name, ...)

Sample a parameter for a given distribution.

sample_relative(study, trial, search_space)

Sample parameters in a given search space.

after_trial(study, trial, state, values)[source]

Trial post-processing.

This method is called after the objective function returns and right before the trial is finished and its state is stored.

Note

Added in v2.4.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.

Parameters:
  • study (Study) – Target study object.

  • trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.

  • state (TrialState) – Resulting trial state.

  • values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None if trial succeeded.

Return type:

None

before_trial(study, trial)[source]

Trial pre-processing.

This method is called before the objective function is called and right after the trial is instantiated. More precisely, this method is called during trial initialization, just before the infer_relative_search_space() call. In other words, it is responsible for pre-processing that should be done before inferring the search space.

Note

Added in v3.3.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.

Parameters:
  • study (Study) – Target study object.

  • trial (FrozenTrial) – Target trial object.

Return type:

None

static hyperopt_parameters()[source]

Return the default parameters of hyperopt (v0.1.2).

TPESampler can be instantiated with the parameters returned by this method.

Example

Create a TPESampler instance with the default parameters of hyperopt.

import optuna
from optuna.samplers import TPESampler


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return x**2


sampler = TPESampler(**TPESampler.hyperopt_parameters())
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)

Returns:

A dictionary containing the default parameters of hyperopt.

Return type:

dict[str, Any]

infer_relative_search_space(study, trial)[source]

Infer the search space that will be used by relative sampling in the target trial.

This method is called right before the sample_relative() method, and the search space returned by this method is passed to it. Parameters not contained in the search space will be sampled by the sample_independent() method.

Parameters:
  • study (Study) – Target study object.

  • trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.

Returns:

A dictionary containing the parameter names and the parameters' distributions.

Return type:

dict[str, BaseDistribution]

See also

Please refer to intersection_search_space() as an implementation of infer_relative_search_space().
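
A stdlib sketch of the intersection idea behind intersection_search_space(); distributions are simplified to comparable tuples here, whereas the real implementation operates on BaseDistribution objects:

```python
def intersection_search_space(trial_search_spaces):
    """Keep only parameters that appear with an identical distribution in
    every trial's search space (distributions simplified to tuples here)."""
    if not trial_search_spaces:
        return {}
    common = dict(trial_search_spaces[0])
    for space in trial_search_spaces[1:]:
        common = {
            name: dist
            for name, dist in common.items()
            if space.get(name) == dist  # same name and same distribution
        }
    return common


spaces = [
    {"x": ("float", -10, 10), "y": ("int", 0, 5)},
    {"x": ("float", -10, 10), "z": ("float", 0, 1)},
]
print(intersection_search_space(spaces))  # {'x': ('float', -10, 10)}
```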

reseed_rng()[source]

Reseed sampler’s random number generator.

This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1. In that case, the sampler instance will be replicated including the state of the random number generator, and they may suggest the same values. To prevent this issue, this method assigns a different seed to each random number generator.

Return type:

None

sample_independent(study, trial, param_name, param_distribution)[source]

Sample a parameter for a given distribution.

This method is called only for parameters not contained in the search space returned by the sample_relative() method. It is suitable for sampling algorithms that do not use relationships between parameters, such as random sampling and TPE.

Note

Failed trials are ignored by all built-in samplers when they sample new parameters; that is, failed trials are regarded as deleted from the samplers' perspective.

Parameters:
  • study (Study) – Target study object.

  • trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.

  • param_name (str) – Name of the sampled parameter.

  • param_distribution (BaseDistribution) – Distribution object that specifies a prior and/or scale of the sampling algorithm.

Returns:

A parameter value.

Return type:

Any

sample_relative(study, trial, search_space)[source]

Sample parameters in a given search space.

This method is called once at the beginning of each trial, i.e., right before the evaluation of the objective function. It is suitable for sampling algorithms that use relationships between parameters, such as Gaussian Process and CMA-ES.

Note

Failed trials are ignored by all built-in samplers when they sample new parameters; that is, failed trials are regarded as deleted from the samplers' perspective.

Parameters:
  • study (Study) – Target study object.

  • trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.

  • search_space (dict[str, BaseDistribution]) – The search space returned by infer_relative_search_space().

Returns:

A dictionary containing the parameter names and the values.

Return type:

dict[str, Any]