Multi-objective Optimization with Optuna

This tutorial showcases Optuna’s multi-objective optimization feature by simultaneously optimizing two objectives of a model implemented in PyTorch: its validation accuracy on the Fashion MNIST dataset and its FLOPS.

We use thop to measure FLOPS.

import thop
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

import optuna


DEVICE = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
DIR = ".."
BATCHSIZE = 128
N_TRAIN_EXAMPLES = BATCHSIZE * 30
N_VALID_EXAMPLES = BATCHSIZE * 10


def define_model(trial):
    n_layers = trial.suggest_int("n_layers", 1, 3)
    layers = []

    in_features = 28 * 28
    for i in range(n_layers):
        out_features = trial.suggest_int("n_units_l{}".format(i), 4, 128)
        layers.append(nn.Linear(in_features, out_features))
        layers.append(nn.ReLU())
        p = trial.suggest_float("dropout_{}".format(i), 0.2, 0.5)
        layers.append(nn.Dropout(p))

        in_features = out_features

    layers.append(nn.Linear(in_features, 10))
    layers.append(nn.LogSoftmax(dim=1))

    return nn.Sequential(*layers)
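Note that ``define_model`` builds a *dynamic* search space: the parameter names ``n_units_l{i}`` and ``dropout_{i}`` only exist for the layers implied by the sampled ``n_layers``. For intuition, here is a minimal, library-free sketch of that naming scheme, using a hypothetical stub in place of an Optuna trial (Optuna itself offers ``optuna.trial.FixedTrial`` for replaying fixed parameters):

```python
# Hypothetical stub mimicking the trial.suggest_int interface used above.
# It simply returns pre-chosen values instead of sampling.
class StubTrial:
    def __init__(self, params):
        self.params = params

    def suggest_int(self, name, low, high):
        return self.params[name]


def layer_dims(trial, in_features=28 * 28, n_classes=10):
    # Mirrors the loop in define_model, collecting only (in, out) shapes.
    dims = []
    n_layers = trial.suggest_int("n_layers", 1, 3)
    for i in range(n_layers):
        out_features = trial.suggest_int("n_units_l{}".format(i), 4, 128)
        dims.append((in_features, out_features))
        in_features = out_features
    dims.append((in_features, n_classes))  # final classification layer
    return dims


trial = StubTrial({"n_layers": 2, "n_units_l0": 64, "n_units_l1": 32})
print(layer_dims(trial))  # [(784, 64), (64, 32), (32, 10)]
```

A two-layer trial therefore touches only ``n_units_l0`` and ``n_units_l1``; a later three-layer trial would additionally suggest ``n_units_l2``.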


# Defines training and evaluation.
def train_model(model, optimizer, train_loader):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.view(-1, 28 * 28).to(DEVICE), target.to(DEVICE)
        optimizer.zero_grad()
        F.nll_loss(model(data), target).backward()
        optimizer.step()


def eval_model(model, valid_loader):
    model.eval()
    correct = 0
    with torch.no_grad():
        for batch_idx, (data, target) in enumerate(valid_loader):
            data, target = data.view(-1, 28 * 28).to(DEVICE), target.to(DEVICE)
            pred = model(data).argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()

    accuracy = correct / N_VALID_EXAMPLES

    flops, _ = thop.profile(model, inputs=(torch.randn(1, 28 * 28).to(DEVICE),), verbose=False)
    return flops, accuracy

Define the multi-objective objective function. The two objectives are FLOPS and accuracy.

def objective(trial):
    train_dataset = torchvision.datasets.FashionMNIST(
        DIR, train=True, download=True, transform=torchvision.transforms.ToTensor()
    )
    train_loader = torch.utils.data.DataLoader(
        torch.utils.data.Subset(train_dataset, list(range(N_TRAIN_EXAMPLES))),
        batch_size=BATCHSIZE,
        shuffle=True,
    )

    val_dataset = torchvision.datasets.FashionMNIST(
        DIR, train=False, transform=torchvision.transforms.ToTensor()
    )
    val_loader = torch.utils.data.DataLoader(
        torch.utils.data.Subset(val_dataset, list(range(N_VALID_EXAMPLES))),
        batch_size=BATCHSIZE,
        shuffle=True,
    )
    model = define_model(trial).to(DEVICE)

    optimizer = torch.optim.Adam(
        model.parameters(), trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    )

    for epoch in range(10):
        train_model(model, optimizer, train_loader)
    flops, accuracy = eval_model(model, val_loader)
    return flops, accuracy

Run multi-objective optimization

If your optimization problem is multi-objective, Optuna assumes that you will specify the optimization direction for each objective. Specifically, in this example, we want to minimize the FLOPS (we want a faster model) and maximize the accuracy. So we set directions to ["minimize", "maximize"].
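With two objectives there is usually no single best trial; instead, Optuna keeps the trials that are not *dominated* by any other trial (the Pareto front). For intuition, here is a minimal, pure-Python sketch of that dominance rule for our ``["minimize", "maximize"]`` directions (illustrative only, not Optuna's internal implementation):

```python
def dominates(a, b, directions=("minimize", "maximize")):
    """True if objective values `a` dominate `b`: at least as good in
    every objective and strictly better in at least one."""
    better = worse = False
    for av, bv, d in zip(a, b, directions):
        if d == "minimize":
            av, bv = -av, -bv  # flip sign so "larger is better" everywhere
        if av > bv:
            better = True
        elif av < bv:
            worse = True
    return better and not worse


def pareto_front(values, directions=("minimize", "maximize")):
    # A point is on the front iff no other point dominates it.
    return [v for v in values
            if not any(dominates(u, v, directions) for u in values if u != v)]


# (FLOPS, accuracy) pairs: lower FLOPS and higher accuracy are both preferred.
trials = [(1000, 0.80), (2000, 0.85), (1500, 0.70), (3000, 0.84)]
print(pareto_front(trials))  # [(1000, 0.8), (2000, 0.85)]
```

Here ``(1500, 0.70)`` and ``(3000, 0.84)`` are dominated (another trial is cheaper *and* more accurate), so only the remaining trade-off points survive.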

study = optuna.create_study(directions=["minimize", "maximize"])
study.optimize(objective, n_trials=30, timeout=300)

print("Number of finished trials: ", len(study.trials))

Out:

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to ../FashionMNIST/raw/train-images-idx3-ubyte.gz

Extracting ../FashionMNIST/raw/train-images-idx3-ubyte.gz to ../FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to ../FashionMNIST/raw/train-labels-idx1-ubyte.gz

Extracting ../FashionMNIST/raw/train-labels-idx1-ubyte.gz to ../FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to ../FashionMNIST/raw/t10k-images-idx3-ubyte.gz

Extracting ../FashionMNIST/raw/t10k-images-idx3-ubyte.gz to ../FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to ../FashionMNIST/raw/t10k-labels-idx1-ubyte.gz

Extracting ../FashionMNIST/raw/t10k-labels-idx1-ubyte.gz to ../FashionMNIST/raw
Processing...
/home/docs/checkouts/readthedocs.org/user_builds/optuna/envs/stable/lib/python3.8/site-packages/torchvision/datasets/mnist.py:479: UserWarning:

The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at  /pytorch/torch/csrc/utils/tensor_numpy.cpp:143.)

Done!
Number of finished trials:  30

Check the trials on the Pareto front visually.

optuna.visualization.plot_pareto_front(study, target_names=["FLOPS", "accuracy"])

Out:

/home/docs/checkouts/readthedocs.org/user_builds/optuna/checkouts/stable/tutorial/20_recipes/002_multi_objective.py:123: ExperimentalWarning:

plot_pareto_front is experimental (supported from v2.4.0). The interface can change in the future.


Learn which hyperparameters affect the FLOPS most with hyperparameter importance.

optuna.visualization.plot_param_importances(
    study, target=lambda t: t.values[0], target_name="flops"
)
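Because each completed trial stores a tuple of values (one per objective), importance evaluation needs a scalar target; the lambda above selects the first objective, the FLOPS. A tiny library-free sketch of that selection, with a mock stand-in for completed trials (not Optuna's ``FrozenTrial`` class):

```python
from collections import namedtuple

# Mock completed multi-objective trial: values = (flops, accuracy).
Trial = namedtuple("Trial", ["values"])

trials = [Trial(values=(1000.0, 0.80)), Trial(values=(2000.0, 0.85))]

# Same selector shape as passed to plot_param_importances above.
target = lambda t: t.values[0]
print([target(t) for t in trials])  # [1000.0, 2000.0]
```

Swapping in ``t.values[1]`` would instead rank hyperparameters by their influence on accuracy.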


Total running time of the script: (1 minute 28.958 seconds)

Gallery generated by Sphinx-Gallery