optuna.integration.TorchDistributedTrial
- class optuna.integration.TorchDistributedTrial(trial, device=None)[source]
A wrapper of Trial to incorporate Optuna with PyTorch distributed.

See also

TorchDistributedTrial provides the same interface as Trial. Please refer to optuna.trial.Trial for further details.

See the example if you want to optimize an objective function that trains a neural network written with PyTorch distributed data parallel; a minimal sketch also follows the notes below.
- Parameters
  - trial (Optional[optuna.trial.Trial]) – A Trial object or None. Please pass a trial object in the rank-0 node and None in the other rank nodes.
  - device (Optional[torch.device]) – A torch.device used to communicate with the other nodes. Please set the CUDA device assigned to the current node if you use "nccl" as the torch.distributed backend.
- Return type
  None
Note

The methods of TorchDistributedTrial are expected to be called by all workers at once. They invoke synchronous data transmission to share processing results and synchronize timing; the pruning sketch at the end of this page illustrates this.

Note

Added in v2.6.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.6.0.
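A minimal sketch of the intended usage, assuming the script is launched with a torch.distributed launcher (for example torchrun) and the "nccl" backend is available; train_and_evaluate, the "lr" search space, and N_TRIALS are illustrative placeholders rather than part of the API:

```python
import optuna
from optuna.integration import TorchDistributedTrial
import torch.distributed as dist

N_TRIALS = 20  # every rank must attempt the same number of trials


def train_and_evaluate(lr):
    # Placeholder: substitute your DistributedDataParallel training loop
    # and return the validation metric to optimize.
    return 1.0 - lr


def objective(single_node_trial):
    # Wrap the rank-0 Trial (or None on the other ranks) so that every
    # suggest_* call returns the same value on all workers.
    trial = TorchDistributedTrial(single_node_trial)
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    return train_and_evaluate(lr)


if __name__ == "__main__":
    dist.init_process_group("nccl")
    if dist.get_rank() == 0:
        study = optuna.create_study(direction="maximize")
        study.optimize(objective, n_trials=N_TRIALS)
    else:
        # Non-zero ranks pass None; parameters and pruning decisions are
        # received from rank 0 through the process group.
        for _ in range(N_TRIALS):
            try:
                objective(None)
            except optuna.TrialPruned:
                pass
```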
Methods

- report(value, step)
- set_system_attr(key, value)
- set_user_attr(key, value)
- should_prune()
- suggest_categorical(name, choices)
- suggest_discrete_uniform(name, low, high, q)
- suggest_float(name, low, high, *[, step, log])
- suggest_int(name, low, high[, step, log])
- suggest_loguniform(name, low, high)
- suggest_uniform(name, low, high)

Attributes

- datetime_start
- distributions
- number
- params
- system_attrs
- user_attrs
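To illustrate the synchronization requirement noted above, here is a hedged sketch of in-trial pruning; evaluate and the ten-epoch loop are illustrative placeholders. Because report() and should_prune() each trigger a broadcast from rank 0, every rank must reach them at the same step:

```python
import optuna
from optuna.integration import TorchDistributedTrial


def evaluate(momentum, epoch):
    # Placeholder: substitute your real distributed validation step.
    return momentum * (epoch + 1) / 10


def objective(single_node_trial):
    trial = TorchDistributedTrial(single_node_trial)
    momentum = trial.suggest_float("momentum", 0.0, 0.99)
    accuracy = 0.0
    for epoch in range(10):
        accuracy = evaluate(momentum, epoch)
        # All ranks call report() and should_prune() at the same epoch;
        # the decision is made on rank 0 and broadcast to every worker,
        # so all ranks raise TrialPruned together.
        trial.report(accuracy, epoch)
        if trial.should_prune():
            raise optuna.TrialPruned()
    return accuracy
```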