A wrapper of Trial to incorporate Optuna with PyTorch distributed.
See the example if you want to optimize an objective function that trains a neural network written with PyTorch distributed data parallel.
The methods of TorchDistributedTrial are expected to be called by all workers at once. They invoke synchronous data transmission to share processing results and synchronize timing.
Added in v2.6.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.6.0.
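A minimal sketch of this call pattern, assuming the Gloo backend and a placeholder objective (the search space, training step, and returned score are illustrative): rank 0 owns the study, while the other workers call the objective with None so that every suggestion runs on all workers:

    import optuna
    import torch.distributed as dist
    from optuna.integration import TorchDistributedTrial


    def objective(single_trial):
        # Every worker wraps the trial it receives; the wrapper broadcasts the
        # values suggested on rank 0 so all workers train with identical
        # hyperparameters.
        trial = TorchDistributedTrial(single_trial)
        lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)  # placeholder search space
        # ... build a DistributedDataParallel model here and train it with `lr` ...
        accuracy = 0.0  # placeholder: evaluate the trained model and return the score
        return accuracy


    if __name__ == "__main__":
        dist.init_process_group("gloo")  # assumption: CPU/Gloo backend for illustration
        n_trials = 20
        if dist.get_rank() == 0:
            # Only rank 0 owns the study and the real Trial objects.
            study = optuna.create_study(direction="maximize")
            study.optimize(objective, n_trials=n_trials)
        else:
            # The other ranks call the objective in lockstep so that the
            # synchronous methods of TorchDistributedTrial run on every worker.
            for _ in range(n_trials):
                try:
                    objective(None)
                except optuna.TrialPruned:
                    pass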
suggest_discrete_uniform(name, low, high, q)
suggest_float(name, low, high, *[, step, log])
suggest_int(name, low, high[, step, log])
suggest_loguniform(name, low, high)
suggest_uniform(name, low, high)
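suggest_discrete_uniform, suggest_loguniform, and suggest_uniform are legacy conveniences that can each be expressed with suggest_float. A small sketch of the equivalences, shown with an ordinary Trial for brevity (TorchDistributedTrial exposes the same signatures); the parameter names and ranges are arbitrary:

    import optuna


    def objective(trial):
        # Each legacy call is annotated with its suggest_float equivalent;
        # the names and ranges are arbitrary.
        x = trial.suggest_uniform("x", 0.0, 1.0)                # suggest_float("x", 0.0, 1.0)
        y = trial.suggest_loguniform("y", 1e-5, 1e-1)           # suggest_float("y", 1e-5, 1e-1, log=True)
        z = trial.suggest_discrete_uniform("z", 0.0, 1.0, 0.1)  # suggest_float("z", 0.0, 1.0, step=0.1)
        n = trial.suggest_int("n", 1, 10)                       # integer search space
        return x + y + z + n


    study = optuna.create_study()
    study.optimize(objective, n_trials=5)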