optuna.integration.TorchDistributedTrial
class optuna.integration.TorchDistributedTrial(trial, device=None)

A wrapper of Trial to incorporate Optuna with PyTorch distributed.

See also

TorchDistributedTrial provides the same interface as Trial. Please refer to optuna.trial.Trial for further details.

See the example if you want to optimize an objective function that trains a neural network written with PyTorch distributed data parallel.
Parameters

- trial – A Trial object or None. Pass the Trial object on the rank-0 worker and None on every other worker.
- device – A torch.device used to communicate with the other workers. Set it to the CUDA device assigned to the current worker when the "nccl" backend is used.
Note

The methods of TorchDistributedTrial are expected to be called by all workers at once. They invoke synchronous data transmission to share processing results and synchronize timing.

Note

Added in v2.6.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.6.0.
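A minimal usage sketch, assuming the processes are launched with torchrun and use the "gloo" backend; the DDP training loop is replaced by a placeholder score, and workers other than rank 0 run the objective once per trial so that the collective calls line up:

import optuna
import torch.distributed as dist
from optuna.integration import TorchDistributedTrial


def objective(single_trial):
    # Rank 0 receives a real Trial; every other rank passes None.
    # The wrapper broadcasts values sampled on rank 0 to all workers.
    trial = TorchDistributedTrial(single_trial)

    # Every worker must make the same suggest calls in the same order.
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    n_layers = trial.suggest_int("n_layers", 1, 3)

    # Placeholder for a DDP training loop; return the validation score.
    return lr * n_layers


if __name__ == "__main__":
    dist.init_process_group("gloo")
    n_trials = 20
    if dist.get_rank() == 0:
        study = optuna.create_study(direction="maximize")
        study.optimize(objective, n_trials=n_trials)
    else:
        # Workers other than rank 0 join each trial without a study.
        for _ in range(n_trials):
            try:
                objective(None)
            except optuna.TrialPruned:
                pass
    dist.destroy_process_group()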
Methods

- report(value, step)
- set_system_attr(key, value)
- set_user_attr(key, value)
- should_prune()
- suggest_categorical(name, choices)
- suggest_discrete_uniform(name, low, high, q)
- suggest_float(name, low, high, *[, step, log])
- suggest_int(name, low, high[, step, log])
- suggest_loguniform(name, low, high)
- suggest_uniform(name, low, high)

Attributes
- datetime_start
- distributions
- number
- params
- system_attrs
- user_attrs
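Because report() and should_prune() are also collective calls, intermediate reporting and pruning follow the same all-workers-at-once rule. A minimal sketch, with the per-epoch training step replaced by a placeholder:

import optuna
from optuna.integration import TorchDistributedTrial


def objective(single_trial):
    trial = TorchDistributedTrial(single_trial)
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)

    accuracy = 0.0
    for epoch in range(10):
        # Placeholder for one epoch of DDP training; replace with real work.
        accuracy = min(1.0, accuracy + lr)

        # report() and should_prune() must be reached by every rank; the
        # pruning decision made on rank 0 is shared with all workers, so
        # raising TrialPruned here is consistent across ranks.
        trial.report(accuracy, epoch)
        if trial.should_prune():
            raise optuna.TrialPruned()
    return accuracy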