Quick Visualization for Hyperparameter Optimization Analysis

Optuna provides various visualization features in optuna.visualization for analyzing optimization results.

This tutorial walks you through this module by visualizing the optimization results of a PyTorch model trained on the FashionMNIST dataset.

For visualizing multi-objective optimization (i.e., the usage of optuna.visualization.plot_pareto_front()), please refer to the Multi-objective Optimization with Optuna tutorial.

Note

By using Optuna Dashboard, you can also check the optimization history, hyperparameter importances, hyperparameter relationships, etc. in graphs and tables. Please make your study persistent using an RDB backend and execute the following commands to run Optuna Dashboard.

$ pip install optuna-dashboard
$ optuna-dashboard sqlite:///example-study.db

Please check out the GitHub repository for more details.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


import optuna

# You can use Matplotlib instead of Plotly for visualization by simply replacing `optuna.visualization` with
# `optuna.visualization.matplotlib` in the following examples.
from optuna.visualization import plot_contour
from optuna.visualization import plot_edf
from optuna.visualization import plot_intermediate_values
from optuna.visualization import plot_optimization_history
from optuna.visualization import plot_parallel_coordinate
from optuna.visualization import plot_param_importances
from optuna.visualization import plot_rank
from optuna.visualization import plot_slice
from optuna.visualization import plot_timeline


SEED = 13
torch.manual_seed(SEED)

DEVICE = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
DIR = ".."
BATCHSIZE = 128
N_TRAIN_EXAMPLES = BATCHSIZE * 30
N_VALID_EXAMPLES = BATCHSIZE * 10


def define_model(trial):
    n_layers = trial.suggest_int("n_layers", 1, 2)
    layers = []

    in_features = 28 * 28
    for i in range(n_layers):
        out_features = trial.suggest_int("n_units_l{}".format(i), 64, 512)
        layers.append(nn.Linear(in_features, out_features))
        layers.append(nn.ReLU())

        in_features = out_features

    layers.append(nn.Linear(in_features, 10))
    layers.append(nn.LogSoftmax(dim=1))

    return nn.Sequential(*layers)


# Defines training and evaluation.
def train_model(model, optimizer, train_loader):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.view(-1, 28 * 28).to(DEVICE), target.to(DEVICE)
        optimizer.zero_grad()
        F.nll_loss(model(data), target).backward()
        optimizer.step()


def eval_model(model, valid_loader):
    model.eval()
    correct = 0
    with torch.no_grad():
        for batch_idx, (data, target) in enumerate(valid_loader):
            data, target = data.view(-1, 28 * 28).to(DEVICE), target.to(DEVICE)
            pred = model(data).argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()

    accuracy = correct / N_VALID_EXAMPLES

    return accuracy

Define the objective function.

def objective(trial):
    train_dataset = torchvision.datasets.FashionMNIST(
        DIR, train=True, download=True, transform=torchvision.transforms.ToTensor()
    )
    train_loader = torch.utils.data.DataLoader(
        torch.utils.data.Subset(train_dataset, list(range(N_TRAIN_EXAMPLES))),
        batch_size=BATCHSIZE,
        shuffle=True,
    )

    val_dataset = torchvision.datasets.FashionMNIST(
        DIR, train=False, transform=torchvision.transforms.ToTensor()
    )
    val_loader = torch.utils.data.DataLoader(
        torch.utils.data.Subset(val_dataset, list(range(N_VALID_EXAMPLES))),
        batch_size=BATCHSIZE,
        shuffle=True,
    )
    model = define_model(trial).to(DEVICE)

    optimizer = torch.optim.Adam(
        model.parameters(), trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    )

    for epoch in range(10):
        train_model(model, optimizer, train_loader)

        val_accuracy = eval_model(model, val_loader)
        trial.report(val_accuracy, epoch)

        if trial.should_prune():
            raise optuna.exceptions.TrialPruned()

    return val_accuracy


study = optuna.create_study(
    direction="maximize",
    sampler=optuna.samplers.TPESampler(seed=SEED),
    pruner=optuna.pruners.MedianPruner(),
)
study.optimize(objective, n_trials=30, timeout=300)
On the first run, torchvision downloads the four FashionMNIST archives (train/test images and labels) to ../FashionMNIST/raw and extracts them; the verbose download progress output is omitted here.

Plot functions

Visualize the optimization history. See plot_optimization_history() for the details.

plot_optimization_history(study)


Visualize the learning curves of the trials. See plot_intermediate_values() for the details.

plot_intermediate_values(study)


Visualize high-dimensional parameter relationships. See plot_parallel_coordinate() for the details.

plot_parallel_coordinate(study)


Select parameters to visualize.

plot_parallel_coordinate(study, params=["lr", "n_layers"])


Visualize hyperparameter relationships. See plot_contour() for the details.

plot_contour(study)


Select parameters to visualize.

plot_contour(study, params=["lr", "n_layers"])


Visualize individual hyperparameters as a slice plot. See plot_slice() for the details.

plot_slice(study)


Select parameters to visualize.

plot_slice(study, params=["lr", "n_layers"])


Visualize parameter importances. See plot_param_importances() for the details.

plot_param_importances(study)


Use hyperparameter importance to learn which hyperparameters affect the trial duration.

optuna.visualization.plot_param_importances(
    study, target=lambda t: t.duration.total_seconds(), target_name="duration"
)


Visualize the empirical distribution function of the objective values. See plot_edf() for the details.

plot_edf(study)


Visualize parameter relations with scatter plots colored by objective values. See plot_rank() for the details.

plot_rank(study)


Visualize the optimization timeline of performed trials. See plot_timeline() for the details.

plot_timeline(study)


Customize generated figures

In optuna.visualization and optuna.visualization.matplotlib, each plot function returns an editable figure object: a plotly.graph_objects.Figure or a matplotlib.axes.Axes, depending on the module. This allows users to adjust the generated figure to their needs using the API of the underlying visualization library. The following example manually replaces the title and axis labels of the figure drawn by the Plotly-based plot_intermediate_values().

fig = plot_intermediate_values(study)

fig.update_layout(
    title="Hyperparameter optimization for FashionMNIST classification",
    xaxis_title="Epoch",
    yaxis_title="Validation Accuracy",
)


Total running time of the script: 1 minute 56.415 seconds

Gallery generated by Sphinx-Gallery