Hyperparameter tuning with Ray Tune

Hyperparameter tuning can make the difference between an average model and a highly accurate one. Often simple things like choosing a different learning rate or changing a network layer size can have a dramatic impact on your model performance.

Fortunately, there are tools that help with finding the best combination of parameters. Ray Tune is an industry standard tool for distributed hyperparameter tuning. Ray Tune includes the latest hyperparameter search algorithms, integrates with various analysis libraries, and natively supports distributed training through Ray’s distributed machine learning engine.

In this tutorial, we will show you how to integrate Ray Tune into your PyTorch training workflow. We will extend the CIFAR10 image classifier tutorial from the PyTorch documentation.

As you will see, we only need to make some slight modifications. In particular, we need to

  1. wrap data loading and training in functions,

  2. make some network parameters configurable,

  3. add checkpointing (optional),

  4. and define the search space for the model tuning.


To run this tutorial, please make sure the following packages are installed:

  • ray[tune]: Distributed hyperparameter tuning library

  • torchvision: For the data transforms
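
If either package is missing, you can typically install both with pip install "ray[tune]" torchvision (the exact command may vary with your Python environment).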

Setup / Imports

Let’s start with the imports:

from functools import partial
import os
import tempfile
from pathlib import Path
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import random_split
import torchvision
import torchvision.transforms as transforms
from ray import tune
from ray import train
from ray.train import Checkpoint, get_checkpoint
from ray.tune.schedulers import ASHAScheduler
import ray.cloudpickle as pickle

Most of the imports are needed for building the PyTorch model; only the last five are for Ray Tune.

Data loaders

We wrap the data loaders in their own function and pass a global data directory. This way we can share a data directory between different trials.

def load_data(data_dir="./data"):
    transform = transforms.Compose(
        [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
    )

    trainset = torchvision.datasets.CIFAR10(
        root=data_dir, train=True, download=True, transform=transform
    )

    testset = torchvision.datasets.CIFAR10(
        root=data_dir, train=False, download=True, transform=transform
    )

    return trainset, testset

Configurable neural network

We can only tune those parameters that are configurable. In this example, we can specify the layer sizes of the fully connected layers:

class Net(nn.Module):
    def __init__(self, l1=120, l2=84):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, l1)
        self.fc2 = nn.Linear(l1, l2)
        self.fc3 = nn.Linear(l2, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

The train function

Now it gets interesting: we introduce some changes to the example from the PyTorch documentation.

We wrap the training script in a function train_cifar(config, data_dir=None). The config parameter receives the hyperparameters we would like to train with, and data_dir specifies the directory where we load and store the data, so that multiple runs can share the same data source. If a checkpoint is provided, we also restore the model and optimizer state at the start of the run (in the full training function below, the optimizer is created before its state is restored). Further down in this tutorial you will find information on how to save the checkpoint and what it is used for.

net = Net(config["l1"], config["l2"])

checkpoint = get_checkpoint()
if checkpoint:
    with checkpoint.as_directory() as checkpoint_dir:
        data_path = Path(checkpoint_dir) / "data.pkl"
        with open(data_path, "rb") as fp:
            checkpoint_state = pickle.load(fp)
        start_epoch = checkpoint_state["epoch"]
        net.load_state_dict(checkpoint_state["net_state_dict"])
        optimizer.load_state_dict(checkpoint_state["optimizer_state_dict"])
else:
    start_epoch = 0

The learning rate of the optimizer is made configurable, too:

optimizer = optim.SGD(net.parameters(), lr=config["lr"], momentum=0.9)

We also split the training data into training and validation subsets: we train on 80% of the data and calculate the validation loss on the remaining 20%. The batch sizes with which we iterate through the training and validation sets are configurable as well.
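
Here is the relevant excerpt, as it appears in the full training function below:

test_abs = int(len(trainset) * 0.8)
train_subset, val_subset = random_split(
    trainset, [test_abs, len(trainset) - test_abs]
)

trainloader = torch.utils.data.DataLoader(
    train_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
)
valloader = torch.utils.data.DataLoader(
    val_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
)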

Adding (multi) GPU support with DataParallel

Image classification benefits greatly from GPUs. Luckily, we can continue to use PyTorch’s abstractions in Ray Tune. Thus, we can wrap our model in nn.DataParallel to support data parallel training on multiple GPUs:

device = "cpu"
if torch.cuda.is_available():
    device = "cuda:0"
    if torch.cuda.device_count() > 1:
        net = nn.DataParallel(net)
net.to(device)

By using a device variable we make sure that training also works when we have no GPUs available. PyTorch requires us to send our data to the GPU memory explicitly, like this:

for i, data in enumerate(trainloader, 0):
    inputs, labels = data
    inputs, labels = inputs.to(device), labels.to(device)

The code now supports training on CPUs, on a single GPU, and on multiple GPUs. Notably, Ray also supports fractional GPUs, so we can share GPUs among trials as long as the model still fits in GPU memory. We’ll come back to that later.

Communicating with Ray Tune

The most interesting part is the communication with Ray Tune:

checkpoint_data = {
    "epoch": epoch,
    "net_state_dict": net.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}
with tempfile.TemporaryDirectory() as checkpoint_dir:
    data_path = Path(checkpoint_dir) / "data.pkl"
    with open(data_path, "wb") as fp:
        pickle.dump(checkpoint_data, fp)

    checkpoint = Checkpoint.from_directory(checkpoint_dir)
    train.report(
        {"loss": val_loss / val_steps, "accuracy": correct / total},
        checkpoint=checkpoint,
    )

Here we first save a checkpoint and then report some metrics back to Ray Tune. Specifically, we send the validation loss and accuracy back to Ray Tune. Ray Tune can then use these metrics to decide which hyperparameter configuration led to the best results. These metrics can also be used to stop poorly performing trials early, to avoid wasting resources on them.

Saving a checkpoint is optional; however, it is necessary if we want to use advanced schedulers like Population Based Training. Also, by saving checkpoints we can later load the trained models and validate them on a test set. Lastly, checkpoints are useful for fault tolerance: they allow us to interrupt training and continue later.

Full training function

The full code example looks like this:

def train_cifar(config, data_dir=None):
    net = Net(config["l1"], config["l2"])

    device = "cpu"
    if torch.cuda.is_available():
        device = "cuda:0"
        if torch.cuda.device_count() > 1:
            net = nn.DataParallel(net)
    net.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=config["lr"], momentum=0.9)

    checkpoint = get_checkpoint()
    if checkpoint:
        with checkpoint.as_directory() as checkpoint_dir:
            data_path = Path(checkpoint_dir) / "data.pkl"
            with open(data_path, "rb") as fp:
                checkpoint_state = pickle.load(fp)
            start_epoch = checkpoint_state["epoch"]
            net.load_state_dict(checkpoint_state["net_state_dict"])
            optimizer.load_state_dict(checkpoint_state["optimizer_state_dict"])
    else:
        start_epoch = 0

    trainset, testset = load_data(data_dir)

    test_abs = int(len(trainset) * 0.8)
    train_subset, val_subset = random_split(
        trainset, [test_abs, len(trainset) - test_abs]
    )

    trainloader = torch.utils.data.DataLoader(
        train_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
    )
    valloader = torch.utils.data.DataLoader(
        val_subset, batch_size=int(config["batch_size"]), shuffle=True, num_workers=8
    )

    for epoch in range(start_epoch, 10):  # loop over the dataset multiple times
        running_loss = 0.0
        epoch_steps = 0
        for i, data in enumerate(trainloader, 0):
            # get the inputs; data is a list of [inputs, labels]
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)

            # zero the parameter gradients
            optimizer.zero_grad()

            # forward + backward + optimize
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            # print statistics
            running_loss += loss.item()
            epoch_steps += 1
            if i % 2000 == 1999:  # print every 2000 mini-batches
                print(
                    "[%d, %5d] loss: %.3f"
                    % (epoch + 1, i + 1, running_loss / epoch_steps)
                )
                running_loss = 0.0

        # Validation loss
        val_loss = 0.0
        val_steps = 0
        total = 0
        correct = 0
        for i, data in enumerate(valloader, 0):
            with torch.no_grad():
                inputs, labels = data
                inputs, labels = inputs.to(device), labels.to(device)

                outputs = net(inputs)
                _, predicted = torch.max(outputs.data, 1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()

                loss = criterion(outputs, labels)
                val_loss += loss.cpu().numpy()
                val_steps += 1

        checkpoint_data = {
            "epoch": epoch,
            "net_state_dict": net.state_dict(),
            "optimizer_state_dict": optimizer.state_dict(),
        }
        with tempfile.TemporaryDirectory() as checkpoint_dir:
            data_path = Path(checkpoint_dir) / "data.pkl"
            with open(data_path, "wb") as fp:
                pickle.dump(checkpoint_data, fp)

            checkpoint = Checkpoint.from_directory(checkpoint_dir)
            train.report(
                {"loss": val_loss / val_steps, "accuracy": correct / total},
                checkpoint=checkpoint,
            )

    print("Finished Training")

As you can see, most of the code is adapted directly from the original example.

Test set accuracy

Commonly, the performance of a machine learning model is tested on a held-out test set with data that has not been used for training the model. We also wrap this in a function:

def test_accuracy(net, device="cpu"):
    trainset, testset = load_data()

    testloader = torch.utils.data.DataLoader(
        testset, batch_size=4, shuffle=False, num_workers=2
    )

    correct = 0
    total = 0
    with torch.no_grad():
        for data in testloader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = net(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

    return correct / total

The function also expects a device parameter, so we can do the test set validation on a GPU.
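
For example, after training you could evaluate a model like this (a minimal sketch; the untrained Net() here is only a hypothetical stand-in for a trained model):

device = "cuda:0" if torch.cuda.is_available() else "cpu"
net = Net()  # hypothetical stand-in; use the trained model in practice
net.to(device)
print("Test set accuracy:", test_accuracy(net, device))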

Configuring the search space

Lastly, we need to define Ray Tune’s search space. Here is an example:

config = {
    "l1": tune.choice([2 ** i for i in range(9)]),
    "l2": tune.choice([2 ** i for i in range(9)]),
    "lr": tune.loguniform(1e-4, 1e-1),
    "batch_size": tune.choice([2, 4, 8, 16])
}

The tune.choice() function accepts a list of values that are uniformly sampled from. In this example, the l1 and l2 parameters should be powers of 2 between 1 and 256, so 1, 2, 4, 8, 16, 32, 64, 128, or 256. The lr (learning rate) is sampled log-uniformly between 0.0001 and 0.1, meaning its logarithm is uniformly distributed, so each order of magnitude is explored with equal probability. Lastly, the batch size is a choice between 2, 4, 8, and 16.
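
To build intuition for loguniform sampling, here is a small sketch in plain Python (not Ray Tune’s internal implementation) that draws a learning rate the same way, uniformly in log space:

import math
import random

def sample_loguniform(low=1e-4, high=1e-1):
    # Sample uniformly in log space, then exponentiate:
    # each order of magnitude (1e-4..1e-3, 1e-3..1e-2, ...)
    # is drawn with equal probability.
    return math.exp(random.uniform(math.log(low), math.log(high)))

print(sample_loguniform())  # e.g. 0.0023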

For each trial, Ray Tune will now randomly sample a combination of parameters from these search spaces. It will then train a number of models in parallel and find the best performing one among them. We also use the ASHAScheduler, which will terminate poorly performing trials early.
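
The scheduler is constructed like this (the same configuration appears in the full main function below; max_num_epochs caps the number of reported training iterations per trial):

scheduler = ASHAScheduler(
    metric="loss",
    mode="min",
    max_t=max_num_epochs,  # maximum number of iterations per trial
    grace_period=1,        # let every trial run at least one iteration
    reduction_factor=2,    # keep roughly the top half of trials at each rung
)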

We wrap the train_cifar function with functools.partial to set the constant data_dir parameter. We can also tell Ray Tune what resources should be available for each trial:

gpus_per_trial = 2
# ...
result = tune.run(
    partial(train_cifar, data_dir=data_dir),
    resources_per_trial={"cpu": 8, "gpu": gpus_per_trial},
    config=config,
    num_samples=num_samples,
    scheduler=scheduler,
    checkpoint_at_end=True,
)

You can specify the number of CPUs, which are then available, e.g., to increase the num_workers of the PyTorch DataLoader instances. The selected number of GPUs is made visible to PyTorch in each trial. Trials do not have access to GPUs that haven’t been requested for them, so you don’t have to worry about two trials using the same set of resources.

Here we can also specify fractional GPUs, so something like gpus_per_trial=0.5 is completely valid. The trials will then share GPUs among each other; you just have to make sure that the models still fit in GPU memory.
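
For instance, a sketch that packs two trials onto each GPU (assuming each model fits in half of the GPU memory; the remaining arguments mirror the tune.run call above):

gpus_per_trial = 0.5
result = tune.run(
    partial(train_cifar, data_dir=data_dir),
    resources_per_trial={"cpu": 2, "gpu": gpus_per_trial},
    config=config,
    num_samples=num_samples,
    scheduler=scheduler,
)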

After training the models, we will find the best performing one and load the trained network from the checkpoint file. We then compute the test set accuracy and print the results.

The full main function looks like this:

def main(num_samples=10, max_num_epochs=10, gpus_per_trial=2):
    data_dir = os.path.abspath("./data")
    load_data(data_dir)
    config = {
        "l1": tune.choice([2**i for i in range(9)]),
        "l2": tune.choice([2**i for i in range(9)]),
        "lr": tune.loguniform(1e-4, 1e-1),
        "batch_size": tune.choice([2, 4, 8, 16]),
    }
    scheduler = ASHAScheduler(
        metric="loss",
        mode="min",
        max_t=max_num_epochs,
        grace_period=1,
        reduction_factor=2,
    )
    result = tune.run(
        partial(train_cifar, data_dir=data_dir),
        resources_per_trial={"cpu": 2, "gpu": gpus_per_trial},
        config=config,
        num_samples=num_samples,
        scheduler=scheduler,
    )

    best_trial = result.get_best_trial("loss", "min", "last")
    print(f"Best trial config: {best_trial.config}")
    print(f"Best trial final validation loss: {best_trial.last_result['loss']}")
    print(f"Best trial final validation accuracy: {best_trial.last_result['accuracy']}")

    best_trained_model = Net(best_trial.config["l1"], best_trial.config["l2"])
    device = "cpu"
    if torch.cuda.is_available():
        device = "cuda:0"
        if gpus_per_trial > 1:
            best_trained_model = nn.DataParallel(best_trained_model)
    best_trained_model.to(device)

    best_checkpoint = result.get_best_checkpoint(
        trial=best_trial, metric="accuracy", mode="max"
    )
    with best_checkpoint.as_directory() as checkpoint_dir:
        data_path = Path(checkpoint_dir) / "data.pkl"
        with open(data_path, "rb") as fp:
            best_checkpoint_data = pickle.load(fp)

        best_trained_model.load_state_dict(best_checkpoint_data["net_state_dict"])
        test_acc = test_accuracy(best_trained_model, device)
        print("Best trial test set accuracy: {}".format(test_acc))


if __name__ == "__main__":
    # You can change the number of GPUs per trial here:
    main(num_samples=10, max_num_epochs=10, gpus_per_trial=0)
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to /var/lib/workspace/beginner_source/data/cifar-10-python.tar.gz

100% 170M/170M [00:02<00:00, 61.2MB/s]
Extracting /var/lib/workspace/beginner_source/data/cifar-10-python.tar.gz to /var/lib/workspace/beginner_source/data
Files already downloaded and verified
2024-11-05 22:24:09,945 WARNING services.py:1889 -- WARNING: The object store is using /tmp instead of /dev/shm because /dev/shm has only 2147479552 bytes available. This will harm performance! You may be able to free up space by deleting files in /dev/shm. If you are inside a Docker container, you can increase /dev/shm size by passing '--shm-size=10.24gb' to 'docker run' (or add it to the run_options list in a Ray cluster config). Make sure to set this to more than 30% of available RAM.
2024-11-05 22:24:10,205 INFO worker.py:1642 -- Started a local Ray instance.
2024-11-05 22:24:11,416 INFO tune.py:228 -- Initializing Ray automatically. For cluster usage or custom Ray initialization, call `ray.init(...)` before `tune.run(...)`.
2024-11-05 22:24:11,418 INFO tune.py:654 -- [output] This will use the new output engine with verbosity 2. To disable the new output and use the legacy output engine, set the environment variable RAY_AIR_NEW_OUTPUT=0. For more information, please see https://github.com/ray-project/ray/issues/36949
+--------------------------------------------------------------------+
| Configuration for experiment     train_cifar_2024-11-05_22-24-11   |
+--------------------------------------------------------------------+
| Search algorithm                 BasicVariantGenerator             |
| Scheduler                        AsyncHyperBandScheduler           |
| Number of trials                 10                                |
+--------------------------------------------------------------------+

View detailed results here: /var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11
To visualize your results with TensorBoard, run: `tensorboard --logdir /var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11`

Trial status: 10 PENDING
Current time: 2024-11-05 22:24:11. Total running time: 0s
Logical resource usage: 0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+-------------------------------------------------------------------------------+
| Trial name                status       l1     l2            lr     batch_size |
+-------------------------------------------------------------------------------+
| train_cifar_ae725_00000   PENDING      16      1   0.00213327               2 |
| train_cifar_ae725_00001   PENDING       1      2   0.013416                 4 |
| train_cifar_ae725_00002   PENDING     256     64   0.0113784                2 |
| train_cifar_ae725_00003   PENDING      64    256   0.0274071                8 |
| train_cifar_ae725_00004   PENDING      16      2   0.056666                 4 |
| train_cifar_ae725_00005   PENDING       8     64   0.000353097              4 |
| train_cifar_ae725_00006   PENDING      16      4   0.000147684              8 |
| train_cifar_ae725_00007   PENDING     256    256   0.00477469               8 |
| train_cifar_ae725_00008   PENDING     128    256   0.0306227                8 |
| train_cifar_ae725_00009   PENDING       2     16   0.0286986                2 |
+-------------------------------------------------------------------------------+

Trial train_cifar_ae725_00007 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_ae725_00007 config             |
+--------------------------------------------------+
| batch_size                                     8 |
| l1                                           256 |
| l2                                           256 |
| lr                                       0.00477 |
+--------------------------------------------------+

Trial train_cifar_ae725_00003 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_ae725_00003 config             |
+--------------------------------------------------+
| batch_size                                     8 |
| l1                                            64 |
| l2                                           256 |
| lr                                       0.02741 |
+--------------------------------------------------+

Trial train_cifar_ae725_00004 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_ae725_00004 config             |
+--------------------------------------------------+
| batch_size                                     4 |
| l1                                            16 |
| l2                                             2 |
| lr                                       0.05667 |
+--------------------------------------------------+

Trial train_cifar_ae725_00000 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_ae725_00000 config             |
+--------------------------------------------------+
| batch_size                                     2 |
| l1                                            16 |
| l2                                             1 |
| lr                                       0.00213 |
+--------------------------------------------------+

Trial train_cifar_ae725_00001 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_ae725_00001 config             |
+--------------------------------------------------+
| batch_size                                     4 |
| l1                                             1 |
| l2                                             2 |
| lr                                       0.01342 |
+--------------------------------------------------+

Trial train_cifar_ae725_00005 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_ae725_00005 config             |
+--------------------------------------------------+
| batch_size                                     4 |
| l1                                             8 |
| l2                                            64 |
| lr                                       0.00035 |
+--------------------------------------------------+
(func pid=4883) Files already downloaded and verified

Trial train_cifar_ae725_00006 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_ae725_00006 config             |
+--------------------------------------------------+
| batch_size                                     8 |
| l1                                            16 |
| l2                                             4 |
| lr                                       0.00015 |
+--------------------------------------------------+

Trial train_cifar_ae725_00002 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_ae725_00002 config             |
+--------------------------------------------------+
| batch_size                                     2 |
| l1                                           256 |
| l2                                            64 |
| lr                                       0.01138 |
+--------------------------------------------------+
(func pid=4860) [1,  2000] loss: 2.339
(func pid=4866) Files already downloaded and verified [repeated 15x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/ray-logging.html#log-deduplication for more options.)

Trial status: 8 RUNNING | 2 PENDING
Current time: 2024-11-05 22:24:41. Total running time: 30s
Logical resource usage: 16.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+-------------------------------------------------------------------------------+
| Trial name                status       l1     l2            lr     batch_size |
+-------------------------------------------------------------------------------+
| train_cifar_ae725_00000   RUNNING      16      1   0.00213327               2 |
| train_cifar_ae725_00001   RUNNING       1      2   0.013416                 4 |
| train_cifar_ae725_00002   RUNNING     256     64   0.0113784                2 |
| train_cifar_ae725_00003   RUNNING      64    256   0.0274071                8 |
| train_cifar_ae725_00004   RUNNING      16      2   0.056666                 4 |
| train_cifar_ae725_00005   RUNNING       8     64   0.000353097              4 |
| train_cifar_ae725_00006   RUNNING      16      4   0.000147684              8 |
| train_cifar_ae725_00007   RUNNING     256    256   0.00477469               8 |
| train_cifar_ae725_00008   PENDING     128    256   0.0306227                8 |
| train_cifar_ae725_00009   PENDING       2     16   0.0286986                2 |
+-------------------------------------------------------------------------------+
(func pid=4860) [1,  4000] loss: 1.153 [repeated 8x across cluster]
(func pid=4860) [1,  6000] loss: 0.768 [repeated 8x across cluster]

Trial train_cifar_ae725_00007 finished iteration 1 at 2024-11-05 22:25:10. Total running time: 59s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00007 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000000 |
| time_this_iter_s                                  53.76891 |
| time_total_s                                      53.76891 |
| training_iteration                                       1 |
| accuracy                                            0.4795 |
| loss                                               1.43336 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00007 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000000
(func pid=4883) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000000)

Trial train_cifar_ae725_00006 finished iteration 1 at 2024-11-05 22:25:11. Total running time: 1min 0s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00006 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000000 |
| time_this_iter_s                                  53.79316 |
| time_total_s                                      53.79316 |
| training_iteration                                       1 |
| accuracy                                            0.1286 |
| loss                                               2.27684 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00006 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00006_6_batch_size=8,l1=16,l2=4,lr=0.0001_2024-11-05_22-24-11/checkpoint_000000

Trial train_cifar_ae725_00006 completed after 1 iterations at 2024-11-05 22:25:11. Total running time: 1min 0s

Trial status: 7 RUNNING | 1 TERMINATED | 2 PENDING
Current time: 2024-11-05 22:25:11. Total running time: 1min 0s
Logical resource usage: 16.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_ae725_00000   RUNNING        16      1   0.00213327               2                                                    |
| train_cifar_ae725_00001   RUNNING         1      2   0.013416                 4                                                    |
| train_cifar_ae725_00002   RUNNING       256     64   0.0113784                2                                                    |
| train_cifar_ae725_00003   RUNNING        64    256   0.0274071                8                                                    |
| train_cifar_ae725_00004   RUNNING        16      2   0.056666                 4                                                    |
| train_cifar_ae725_00005   RUNNING         8     64   0.000353097              4                                                    |
| train_cifar_ae725_00007   RUNNING       256    256   0.00477469               8        1            53.7689   1.43336       0.4795 |
| train_cifar_ae725_00006   TERMINATED     16      4   0.000147684              8        1            53.7932   2.27684       0.1286 |
| train_cifar_ae725_00008   PENDING       128    256   0.0306227                8                                                    |
| train_cifar_ae725_00009   PENDING         2     16   0.0286986                2                                                    |
+------------------------------------------------------------------------------------------------------------------------------------+

Trial train_cifar_ae725_00008 started with configuration:
+--------------------------------------------------+
| Trial train_cifar_ae725_00008 config             |
+--------------------------------------------------+
| batch_size                                     8 |
| l1                                           128 |
| l2                                           256 |
| lr                                       0.03062 |
+--------------------------------------------------+
(func pid=4882) Files already downloaded and verified
(func pid=4866) [1,  6000] loss: 0.772 [repeated 4x across cluster]

Trial train_cifar_ae725_00003 finished iteration 1 at 2024-11-05 22:25:12. Total running time: 1min 1s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00003 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000000 |
| time_this_iter_s                                  55.33864 |
| time_total_s                                      55.33864 |
| training_iteration                                       1 |
| accuracy                                             0.221 |
| loss                                               2.14447 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00003 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00003_3_batch_size=8,l1=64,l2=256,lr=0.0274_2024-11-05_22-24-11/checkpoint_000000

Trial train_cifar_ae725_00003 completed after 1 iterations at 2024-11-05 22:25:12. Total running time: 1min 1s

Trial train_cifar_ae725_00009 started with configuration:
+-------------------------------------------------+
| Trial train_cifar_ae725_00009 config            |
+-------------------------------------------------+
| batch_size                                    2 |
| l1                                            2 |
| l2                                           16 |
| lr                                       0.0287 |
+-------------------------------------------------+
(func pid=4867) Files already downloaded and verified [repeated 3x across cluster]
(func pid=4860) [1,  8000] loss: 0.576
(func pid=4861) [1,  8000] loss: 0.577
(func pid=4883) [2,  2000] loss: 1.388 [repeated 3x across cluster]
(func pid=4860) [1, 10000] loss: 0.461 [repeated 4x across cluster]

Trial status: 8 RUNNING | 2 TERMINATED
Current time: 2024-11-05 22:25:41. Total running time: 1min 30s
Logical resource usage: 16.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_ae725_00000   RUNNING        16      1   0.00213327               2                                                    |
| train_cifar_ae725_00001   RUNNING         1      2   0.013416                 4                                                    |
| train_cifar_ae725_00002   RUNNING       256     64   0.0113784                2                                                    |
| train_cifar_ae725_00004   RUNNING        16      2   0.056666                 4                                                    |
| train_cifar_ae725_00005   RUNNING         8     64   0.000353097              4                                                    |
| train_cifar_ae725_00007   RUNNING       256    256   0.00477469               8        1            53.7689   1.43336       0.4795 |
| train_cifar_ae725_00008   RUNNING       128    256   0.0306227                8                                                    |
| train_cifar_ae725_00009   RUNNING         2     16   0.0286986                2                                                    |
| train_cifar_ae725_00003   TERMINATED     64    256   0.0274071                8        1            55.3386   2.14447       0.221  |
| train_cifar_ae725_00006   TERMINATED     16      4   0.000147684              8        1            53.7932   2.27684       0.1286 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4867) [1,  4000] loss: 1.165 [repeated 4x across cluster]
(func pid=4882) [1,  4000] loss: 1.024 [repeated 3x across cluster]

Trial train_cifar_ae725_00001 finished iteration 1 at 2024-11-05 22:25:52. Total running time: 1min 40s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00001 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000000 |
| time_this_iter_s                                  94.89002 |
| time_total_s                                      94.89002 |
| training_iteration                                       1 |
| accuracy                                            0.0996 |
| loss                                               2.31571 |
+------------------------------------------------------------+
(func pid=4861) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00001_1_batch_size=4,l1=1,l2=2,lr=0.0134_2024-11-05_22-24-11/checkpoint_000000) [repeated 3x across cluster]
Trial train_cifar_ae725_00001 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00001_1_batch_size=4,l1=1,l2=2,lr=0.0134_2024-11-05_22-24-11/checkpoint_000000

Trial train_cifar_ae725_00001 completed after 1 iterations at 2024-11-05 22:25:52. Total running time: 1min 40s

Trial train_cifar_ae725_00005 finished iteration 1 at 2024-11-05 22:25:52. Total running time: 1min 41s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00005 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000000 |
| time_this_iter_s                                  94.83869 |
| time_total_s                                      94.83869 |
| training_iteration                                       1 |
| accuracy                                            0.3663 |
| loss                                               1.69022 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00005 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2024-11-05_22-24-11/checkpoint_000000

Trial train_cifar_ae725_00004 finished iteration 1 at 2024-11-05 22:25:53. Total running time: 1min 41s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00004 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000000 |
| time_this_iter_s                                  95.96432 |
| time_total_s                                      95.96432 |
| training_iteration                                       1 |
| accuracy                                            0.0977 |
| loss                                               2.33947 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00004 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00004_4_batch_size=4,l1=16,l2=2,lr=0.0567_2024-11-05_22-24-11/checkpoint_000000

Trial train_cifar_ae725_00004 completed after 1 iterations at 2024-11-05 22:25:53. Total running time: 1min 41s
(func pid=4867) [1,  6000] loss: 0.779 [repeated 2x across cluster]

Trial train_cifar_ae725_00007 finished iteration 2 at 2024-11-05 22:25:59. Total running time: 1min 48s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00007 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000001 |
| time_this_iter_s                                  48.72199 |
| time_total_s                                     102.49091 |
| training_iteration                                       2 |
| accuracy                                            0.5316 |
| loss                                               1.33046 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00007 saved a checkpoint for iteration 2 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000001
(func pid=4883) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000001) [repeated 3x across cluster]

Trial train_cifar_ae725_00008 finished iteration 1 at 2024-11-05 22:26:03. Total running time: 1min 52s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00008 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000000 |
| time_this_iter_s                                  51.96809 |
| time_total_s                                      51.96809 |
| training_iteration                                       1 |
| accuracy                                            0.2303 |
| loss                                               2.04184 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00008 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00008_8_batch_size=8,l1=128,l2=256,lr=0.0306_2024-11-05_22-24-11/checkpoint_000000
(func pid=4873) [2,  2000] loss: 1.689 [repeated 3x across cluster]

Trial status: 6 RUNNING | 4 TERMINATED
Current time: 2024-11-05 22:26:11. Total running time: 2min 0s
Logical resource usage: 12.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_ae725_00000   RUNNING        16      1   0.00213327               2                                                    |
| train_cifar_ae725_00002   RUNNING       256     64   0.0113784                2                                                    |
| train_cifar_ae725_00005   RUNNING         8     64   0.000353097              4        1            94.8387   1.69022       0.3663 |
| train_cifar_ae725_00007   RUNNING       256    256   0.00477469               8        2           102.491    1.33046       0.5316 |
| train_cifar_ae725_00008   RUNNING       128    256   0.0306227                8        1            51.9681   2.04184       0.2303 |
| train_cifar_ae725_00009   RUNNING         2     16   0.0286986                2                                                    |
| train_cifar_ae725_00001   TERMINATED      1      2   0.013416                 4        1            94.89     2.31571       0.0996 |
| train_cifar_ae725_00003   TERMINATED     64    256   0.0274071                8        1            55.3386   2.14447       0.221  |
| train_cifar_ae725_00004   TERMINATED     16      2   0.056666                 4        1            95.9643   2.33947       0.0977 |
| train_cifar_ae725_00006   TERMINATED     16      4   0.000147684              8        1            53.7932   2.27684       0.1286 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4860) [1, 16000] loss: 0.288 [repeated 2x across cluster]
(func pid=4882) [2,  2000] loss: 2.060 [repeated 4x across cluster]
(func pid=4860) [1, 18000] loss: 0.256 [repeated 2x across cluster]
(func pid=4866) [1, 16000] loss: 0.289 [repeated 3x across cluster]
(func pid=4860) [1, 20000] loss: 0.231 [repeated 3x across cluster]

Trial train_cifar_ae725_00007 finished iteration 3 at 2024-11-05 22:26:39. Total running time: 2min 27s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00007 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000002 |
| time_this_iter_s                                  39.61496 |
| time_total_s                                     142.10587 |
| training_iteration                                       3 |
| accuracy                                            0.5638 |
| loss                                               1.22363 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00007 saved a checkpoint for iteration 3 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000002
(func pid=4883) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000002) [repeated 2x across cluster]

Trial status: 6 RUNNING | 4 TERMINATED
Current time: 2024-11-05 22:26:42. Total running time: 2min 30s
Logical resource usage: 12.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_ae725_00000   RUNNING        16      1   0.00213327               2                                                    |
| train_cifar_ae725_00002   RUNNING       256     64   0.0113784                2                                                    |
| train_cifar_ae725_00005   RUNNING         8     64   0.000353097              4        1            94.8387   1.69022       0.3663 |
| train_cifar_ae725_00007   RUNNING       256    256   0.00477469               8        3           142.106    1.22363       0.5638 |
| train_cifar_ae725_00008   RUNNING       128    256   0.0306227                8        1            51.9681   2.04184       0.2303 |
| train_cifar_ae725_00009   RUNNING         2     16   0.0286986                2                                                    |
| train_cifar_ae725_00001   TERMINATED      1      2   0.013416                 4        1            94.89     2.31571       0.0996 |
| train_cifar_ae725_00003   TERMINATED     64    256   0.0274071                8        1            55.3386   2.14447       0.221  |
| train_cifar_ae725_00004   TERMINATED     16      2   0.056666                 4        1            95.9643   2.33947       0.0977 |
| train_cifar_ae725_00006   TERMINATED     16      4   0.000147684              8        1            53.7932   2.27684       0.1286 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4867) [1, 14000] loss: 0.333 [repeated 2x across cluster]

Trial train_cifar_ae725_00008 finished iteration 2 at 2024-11-05 22:26:46. Total running time: 2min 34s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00008 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000001 |
| time_this_iter_s                                  42.40846 |
| time_total_s                                      94.37656 |
| training_iteration                                       2 |
| accuracy                                            0.2119 |
| loss                                               2.05471 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00008 saved a checkpoint for iteration 2 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00008_8_batch_size=8,l1=128,l2=256,lr=0.0306_2024-11-05_22-24-11/checkpoint_000001

Trial train_cifar_ae725_00008 completed after 2 iterations at 2024-11-05 22:26:46. Total running time: 2min 34s
(func pid=4882) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00008_8_batch_size=8,l1=128,l2=256,lr=0.0306_2024-11-05_22-24-11/checkpoint_000001)
(func pid=4873) [2, 10000] loss: 0.302 [repeated 2x across cluster]

Trial train_cifar_ae725_00000 finished iteration 1 at 2024-11-05 22:26:52. Total running time: 2min 40s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00000 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000000 |
| time_this_iter_s                                  155.0891 |
| time_total_s                                      155.0891 |
| training_iteration                                       1 |
| accuracy                                            0.0963 |
| loss                                               2.30621 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00000 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00000_0_batch_size=2,l1=16,l2=1,lr=0.0021_2024-11-05_22-24-11/checkpoint_000000

Trial train_cifar_ae725_00000 completed after 1 iterations at 2024-11-05 22:26:52. Total running time: 2min 40s
(func pid=4860) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00000_0_batch_size=2,l1=16,l2=1,lr=0.0021_2024-11-05_22-24-11/checkpoint_000000)
(func pid=4866) [1, 20000] loss: 0.232 [repeated 3x across cluster]

Trial train_cifar_ae725_00005 finished iteration 2 at 2024-11-05 22:26:59. Total running time: 2min 47s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00005 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000001 |
| time_this_iter_s                                  66.25371 |
| time_total_s                                      161.0924 |
| training_iteration                                       2 |
| accuracy                                            0.4353 |
| loss                                               1.52007 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00005 saved a checkpoint for iteration 2 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2024-11-05_22-24-11/checkpoint_000001
(func pid=4873) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2024-11-05_22-24-11/checkpoint_000001)
(func pid=4867) [1, 18000] loss: 0.259
(func pid=4883) [4,  4000] loss: 0.582

Trial train_cifar_ae725_00002 finished iteration 1 at 2024-11-05 22:27:11. Total running time: 3min 0s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00002 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000000 |
| time_this_iter_s                                 174.03879 |
| time_total_s                                     174.03879 |
| training_iteration                                       1 |
| accuracy                                            0.0986 |
| loss                                               2.31324 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00002 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00002_2_batch_size=2,l1=256,l2=64,lr=0.0114_2024-11-05_22-24-11/checkpoint_000000

Trial train_cifar_ae725_00002 completed after 1 iterations at 2024-11-05 22:27:12. Total running time: 3min 0s
(func pid=4866) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00002_2_batch_size=2,l1=256,l2=64,lr=0.0114_2024-11-05_22-24-11/checkpoint_000000)

Trial status: 7 TERMINATED | 3 RUNNING
Current time: 2024-11-05 22:27:12. Total running time: 3min 0s
Logical resource usage: 6.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_ae725_00005   RUNNING         8     64   0.000353097              4        2           161.092    1.52007       0.4353 |
| train_cifar_ae725_00007   RUNNING       256    256   0.00477469               8        3           142.106    1.22363       0.5638 |
| train_cifar_ae725_00009   RUNNING         2     16   0.0286986                2                                                    |
| train_cifar_ae725_00000   TERMINATED     16      1   0.00213327               2        1           155.089    2.30621       0.0963 |
| train_cifar_ae725_00001   TERMINATED      1      2   0.013416                 4        1            94.89     2.31571       0.0996 |
| train_cifar_ae725_00002   TERMINATED    256     64   0.0113784                2        1           174.039    2.31324       0.0986 |
| train_cifar_ae725_00003   TERMINATED     64    256   0.0274071                8        1            55.3386   2.14447       0.221  |
| train_cifar_ae725_00004   TERMINATED     16      2   0.056666                 4        1            95.9643   2.33947       0.0977 |
| train_cifar_ae725_00006   TERMINATED     16      4   0.000147684              8        1            53.7932   2.27684       0.1286 |
| train_cifar_ae725_00008   TERMINATED    128    256   0.0306227                8        2            94.3766   2.05471       0.2119 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4867) [1, 20000] loss: 0.233 [repeated 2x across cluster]

Trial train_cifar_ae725_00007 finished iteration 4 at 2024-11-05 22:27:15. Total running time: 3min 3s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00007 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000003 |
| time_this_iter_s                                  36.12762 |
| time_total_s                                     178.23349 |
| training_iteration                                       4 |
| accuracy                                            0.5451 |
| loss                                               1.32266 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00007 saved a checkpoint for iteration 4 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000003
(func pid=4873) [3,  4000] loss: 0.728

Trial train_cifar_ae725_00009 finished iteration 1 at 2024-11-05 22:27:26. Total running time: 3min 15s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00009 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000000 |
| time_this_iter_s                                 134.23627 |
| time_total_s                                     134.23627 |
| training_iteration                                       1 |
| accuracy                                            0.0984 |
| loss                                               2.32277 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00009 saved a checkpoint for iteration 1 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00009_9_batch_size=2,l1=2,l2=16,lr=0.0287_2024-11-05_22-24-11/checkpoint_000000

Trial train_cifar_ae725_00009 completed after 1 iterations at 2024-11-05 22:27:26. Total running time: 3min 15s
(func pid=4867) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00009_9_batch_size=2,l1=2,l2=16,lr=0.0287_2024-11-05_22-24-11/checkpoint_000000) [repeated 2x across cluster]
(func pid=4883) [5,  2000] loss: 1.049
(func pid=4873) [3,  8000] loss: 0.349 [repeated 2x across cluster]

Trial status: 8 TERMINATED | 2 RUNNING
Current time: 2024-11-05 22:27:42. Total running time: 3min 30s
Logical resource usage: 4.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_ae725_00005   RUNNING         8     64   0.000353097              4        2           161.092    1.52007       0.4353 |
| train_cifar_ae725_00007   RUNNING       256    256   0.00477469               8        4           178.233    1.32266       0.5451 |
| train_cifar_ae725_00000   TERMINATED     16      1   0.00213327               2        1           155.089    2.30621       0.0963 |
| train_cifar_ae725_00001   TERMINATED      1      2   0.013416                 4        1            94.89     2.31571       0.0996 |
| train_cifar_ae725_00002   TERMINATED    256     64   0.0113784                2        1           174.039    2.31324       0.0986 |
| train_cifar_ae725_00003   TERMINATED     64    256   0.0274071                8        1            55.3386   2.14447       0.221  |
| train_cifar_ae725_00004   TERMINATED     16      2   0.056666                 4        1            95.9643   2.33947       0.0977 |
| train_cifar_ae725_00006   TERMINATED     16      4   0.000147684              8        1            53.7932   2.27684       0.1286 |
| train_cifar_ae725_00008   TERMINATED    128    256   0.0306227                8        2            94.3766   2.05471       0.2119 |
| train_cifar_ae725_00009   TERMINATED      2     16   0.0286986                2        1           134.236    2.32277       0.0984 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4873) [3, 10000] loss: 0.276 [repeated 2x across cluster]

Trial train_cifar_ae725_00007 finished iteration 5 at 2024-11-05 22:27:45. Total running time: 3min 33s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00007 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000004 |
| time_this_iter_s                                  29.83009 |
| time_total_s                                     208.06358 |
| training_iteration                                       5 |
| accuracy                                            0.5552 |
| loss                                               1.30174 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00007 saved a checkpoint for iteration 5 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000004
(func pid=4883) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000004)

Trial train_cifar_ae725_00005 finished iteration 3 at 2024-11-05 22:27:51. Total running time: 3min 39s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00005 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000002 |
| time_this_iter_s                                  52.32373 |
| time_total_s                                     213.41613 |
| training_iteration                                       3 |
| accuracy                                            0.4922 |
| loss                                               1.38935 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00005 saved a checkpoint for iteration 3 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2024-11-05_22-24-11/checkpoint_000002
(func pid=4873) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2024-11-05_22-24-11/checkpoint_000002)
(func pid=4883) [6,  2000] loss: 1.014
(func pid=4873) [4,  2000] loss: 1.354
(func pid=4883) [6,  4000] loss: 0.528
(func pid=4873) [4,  4000] loss: 0.669

Trial status: 8 TERMINATED | 2 RUNNING
Current time: 2024-11-05 22:28:12. Total running time: 4min 0s
Logical resource usage: 4.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_ae725_00005   RUNNING         8     64   0.000353097              4        3           213.416    1.38935       0.4922 |
| train_cifar_ae725_00007   RUNNING       256    256   0.00477469               8        5           208.064    1.30174       0.5552 |
| train_cifar_ae725_00000   TERMINATED     16      1   0.00213327               2        1           155.089    2.30621       0.0963 |
| train_cifar_ae725_00001   TERMINATED      1      2   0.013416                 4        1            94.89     2.31571       0.0996 |
| train_cifar_ae725_00002   TERMINATED    256     64   0.0113784                2        1           174.039    2.31324       0.0986 |
| train_cifar_ae725_00003   TERMINATED     64    256   0.0274071                8        1            55.3386   2.14447       0.221  |
| train_cifar_ae725_00004   TERMINATED     16      2   0.056666                 4        1            95.9643   2.33947       0.0977 |
| train_cifar_ae725_00006   TERMINATED     16      4   0.000147684              8        1            53.7932   2.27684       0.1286 |
| train_cifar_ae725_00008   TERMINATED    128    256   0.0306227                8        2            94.3766   2.05471       0.2119 |
| train_cifar_ae725_00009   TERMINATED      2     16   0.0286986                2        1           134.236    2.32277       0.0984 |
+------------------------------------------------------------------------------------------------------------------------------------+

Trial train_cifar_ae725_00007 finished iteration 6 at 2024-11-05 22:28:13. Total running time: 4min 2s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00007 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000005 |
| time_this_iter_s                                   28.2499 |
| time_total_s                                     236.31348 |
| training_iteration                                       6 |
| accuracy                                            0.5626 |
| loss                                               1.30345 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00007 saved a checkpoint for iteration 6 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000005
(func pid=4883) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000005)
(func pid=4873) [4,  6000] loss: 0.448
(func pid=4883) [7,  2000] loss: 0.968
(func pid=4883) [7,  4000] loss: 0.516 [repeated 2x across cluster]

Trial train_cifar_ae725_00005 finished iteration 4 at 2024-11-05 22:28:39. Total running time: 4min 28s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00005 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000003 |
| time_this_iter_s                                  48.41701 |
| time_total_s                                     261.83314 |
| training_iteration                                       4 |
| accuracy                                            0.5152 |
| loss                                               1.33675 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00005 saved a checkpoint for iteration 4 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2024-11-05_22-24-11/checkpoint_000003

Trial train_cifar_ae725_00005 completed after 4 iterations at 2024-11-05 22:28:39. Total running time: 4min 28s
(func pid=4873) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00005_5_batch_size=4,l1=8,l2=64,lr=0.0004_2024-11-05_22-24-11/checkpoint_000003)

Trial train_cifar_ae725_00007 finished iteration 7 at 2024-11-05 22:28:41. Total running time: 4min 30s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00007 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000006 |
| time_this_iter_s                                  28.18884 |
| time_total_s                                     264.50232 |
| training_iteration                                       7 |
| accuracy                                            0.5823 |
| loss                                               1.28516 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00007 saved a checkpoint for iteration 7 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000006

Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2024-11-05 22:28:42. Total running time: 4min 30s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_ae725_00007   RUNNING       256    256   0.00477469               8        7           264.502    1.28516       0.5823 |
| train_cifar_ae725_00000   TERMINATED     16      1   0.00213327               2        1           155.089    2.30621       0.0963 |
| train_cifar_ae725_00001   TERMINATED      1      2   0.013416                 4        1            94.89     2.31571       0.0996 |
| train_cifar_ae725_00002   TERMINATED    256     64   0.0113784                2        1           174.039    2.31324       0.0986 |
| train_cifar_ae725_00003   TERMINATED     64    256   0.0274071                8        1            55.3386   2.14447       0.221  |
| train_cifar_ae725_00004   TERMINATED     16      2   0.056666                 4        1            95.9643   2.33947       0.0977 |
| train_cifar_ae725_00005   TERMINATED      8     64   0.000353097              4        4           261.833    1.33675       0.5152 |
| train_cifar_ae725_00006   TERMINATED     16      4   0.000147684              8        1            53.7932   2.27684       0.1286 |
| train_cifar_ae725_00008   TERMINATED    128    256   0.0306227                8        2            94.3766   2.05471       0.2119 |
| train_cifar_ae725_00009   TERMINATED      2     16   0.0286986                2        1           134.236    2.32277       0.0984 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4883) [8,  2000] loss: 0.947 [repeated 2x across cluster]
(func pid=4883) [8,  4000] loss: 0.507

Trial train_cifar_ae725_00007 finished iteration 8 at 2024-11-05 22:29:07. Total running time: 4min 56s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00007 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000007 |
| time_this_iter_s                                  26.22838 |
| time_total_s                                      290.7307 |
| training_iteration                                       8 |
| accuracy                                            0.5814 |
| loss                                               1.31958 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00007 saved a checkpoint for iteration 8 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000007
(func pid=4883) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000007) [repeated 2x across cluster]

Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2024-11-05 22:29:12. Total running time: 5min 0s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_ae725_00007   RUNNING       256    256   0.00477469               8        8           290.731    1.31958       0.5814 |
| train_cifar_ae725_00000   TERMINATED     16      1   0.00213327               2        1           155.089    2.30621       0.0963 |
| train_cifar_ae725_00001   TERMINATED      1      2   0.013416                 4        1            94.89     2.31571       0.0996 |
| train_cifar_ae725_00002   TERMINATED    256     64   0.0113784                2        1           174.039    2.31324       0.0986 |
| train_cifar_ae725_00003   TERMINATED     64    256   0.0274071                8        1            55.3386   2.14447       0.221  |
| train_cifar_ae725_00004   TERMINATED     16      2   0.056666                 4        1            95.9643   2.33947       0.0977 |
| train_cifar_ae725_00005   TERMINATED      8     64   0.000353097              4        4           261.833    1.33675       0.5152 |
| train_cifar_ae725_00006   TERMINATED     16      4   0.000147684              8        1            53.7932   2.27684       0.1286 |
| train_cifar_ae725_00008   TERMINATED    128    256   0.0306227                8        2            94.3766   2.05471       0.2119 |
| train_cifar_ae725_00009   TERMINATED      2     16   0.0286986                2        1           134.236    2.32277       0.0984 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4883) [9,  2000] loss: 0.925
(func pid=4883) [9,  4000] loss: 0.490

Trial train_cifar_ae725_00007 finished iteration 9 at 2024-11-05 22:29:34. Total running time: 5min 22s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00007 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000008 |
| time_this_iter_s                                  26.15166 |
| time_total_s                                     316.88236 |
| training_iteration                                       9 |
| accuracy                                            0.5308 |
| loss                                               1.52309 |
+------------------------------------------------------------+
(func pid=4883) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000008)
Trial train_cifar_ae725_00007 saved a checkpoint for iteration 9 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000008

Trial status: 9 TERMINATED | 1 RUNNING
Current time: 2024-11-05 22:29:42. Total running time: 5min 30s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_ae725_00007   RUNNING       256    256   0.00477469               8        9           316.882    1.52309       0.5308 |
| train_cifar_ae725_00000   TERMINATED     16      1   0.00213327               2        1           155.089    2.30621       0.0963 |
| train_cifar_ae725_00001   TERMINATED      1      2   0.013416                 4        1            94.89     2.31571       0.0996 |
| train_cifar_ae725_00002   TERMINATED    256     64   0.0113784                2        1           174.039    2.31324       0.0986 |
| train_cifar_ae725_00003   TERMINATED     64    256   0.0274071                8        1            55.3386   2.14447       0.221  |
| train_cifar_ae725_00004   TERMINATED     16      2   0.056666                 4        1            95.9643   2.33947       0.0977 |
| train_cifar_ae725_00005   TERMINATED      8     64   0.000353097              4        4           261.833    1.33675       0.5152 |
| train_cifar_ae725_00006   TERMINATED     16      4   0.000147684              8        1            53.7932   2.27684       0.1286 |
| train_cifar_ae725_00008   TERMINATED    128    256   0.0306227                8        2            94.3766   2.05471       0.2119 |
| train_cifar_ae725_00009   TERMINATED      2     16   0.0286986                2        1           134.236    2.32277       0.0984 |
+------------------------------------------------------------------------------------------------------------------------------------+
(func pid=4883) [10,  2000] loss: 0.912
(func pid=4883) [10,  4000] loss: 0.481

Trial train_cifar_ae725_00007 finished iteration 10 at 2024-11-05 22:30:00. Total running time: 5min 48s
+------------------------------------------------------------+
| Trial train_cifar_ae725_00007 result                       |
+------------------------------------------------------------+
| checkpoint_dir_name                      checkpoint_000009 |
| time_this_iter_s                                  26.20712 |
| time_total_s                                     343.08949 |
| training_iteration                                      10 |
| accuracy                                             0.557 |
| loss                                               1.37514 |
+------------------------------------------------------------+
Trial train_cifar_ae725_00007 saved a checkpoint for iteration 10 at: (local)/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000009

Trial train_cifar_ae725_00007 completed after 10 iterations at 2024-11-05 22:30:00. Total running time: 5min 48s
(func pid=4883) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/var/lib/ci-user/ray_results/train_cifar_2024-11-05_22-24-11/train_cifar_ae725_00007_7_batch_size=8,l1=256,l2=256,lr=0.0048_2024-11-05_22-24-11/checkpoint_000009)

Trial status: 10 TERMINATED
Current time: 2024-11-05 22:30:00. Total running time: 5min 48s
Logical resource usage: 2.0/16 CPUs, 0/1 GPUs (0.0/1.0 accelerator_type:M60)
+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name                status         l1     l2            lr     batch_size     iter     total time (s)      loss     accuracy |
+------------------------------------------------------------------------------------------------------------------------------------+
| train_cifar_ae725_00000   TERMINATED     16      1   0.00213327               2        1           155.089    2.30621       0.0963 |
| train_cifar_ae725_00001   TERMINATED      1      2   0.013416                 4        1            94.89     2.31571       0.0996 |
| train_cifar_ae725_00002   TERMINATED    256     64   0.0113784                2        1           174.039    2.31324       0.0986 |
| train_cifar_ae725_00003   TERMINATED     64    256   0.0274071                8        1            55.3386   2.14447       0.221  |
| train_cifar_ae725_00004   TERMINATED     16      2   0.056666                 4        1            95.9643   2.33947       0.0977 |
| train_cifar_ae725_00005   TERMINATED      8     64   0.000353097              4        4           261.833    1.33675       0.5152 |
| train_cifar_ae725_00006   TERMINATED     16      4   0.000147684              8        1            53.7932   2.27684       0.1286 |
| train_cifar_ae725_00007   TERMINATED    256    256   0.00477469               8       10           343.089    1.37514       0.557  |
| train_cifar_ae725_00008   TERMINATED    128    256   0.0306227                8        2            94.3766   2.05471       0.2119 |
| train_cifar_ae725_00009   TERMINATED      2     16   0.0286986                2        1           134.236    2.32277       0.0984 |
+------------------------------------------------------------------------------------------------------------------------------------+

Best trial config: {'l1': 8, 'l2': 64, 'lr': 0.0003530972286268149, 'batch_size': 4}
Best trial final validation loss: 1.3367543444275856
Best trial final validation accuracy: 0.5152
Files already downloaded and verified
Files already downloaded and verified
Best trial test set accuracy: 0.5233
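The "Best trial" lines above are printed from the analysis object that tune.run returns. As a minimal sketch (assuming result is the object returned by tune.run earlier in this tutorial), querying the best trial and its final metrics could look like this:

# A minimal sketch, assuming `result` is the ExperimentAnalysis object
# returned by tune.run(...).
best_trial = result.get_best_trial("loss", "min", "last")  # best trial by final reported loss
print(f"Best trial config: {best_trial.config}")
print(f"Best trial final validation loss: {best_trial.last_result['loss']}")
print(f"Best trial final validation accuracy: {best_trial.last_result['accuracy']}")

The last_result dictionary holds the metrics the trial reported in its final iteration, which is why the printed loss and accuracy match the last row of the trial's result tables above.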

If you run the code yourself, the exact numbers will differ; an example output could look like this:

Number of trials: 10/10 (10 TERMINATED)
+-----+--------------+------+------+-------------+--------+---------+------------+
| ... |   batch_size |   l1 |   l2 |          lr |   iter |    loss |   accuracy |
|-----+--------------+------+------+-------------+--------+---------+------------|
| ... |            2 |    1 |  256 | 0.000668163 |      1 | 2.31479 |     0.0977 |
| ... |            4 |   64 |    8 | 0.0331514   |      1 | 2.31605 |     0.0983 |
| ... |            4 |    2 |    1 | 0.000150295 |      1 | 2.30755 |     0.1023 |
| ... |           16 |   32 |   32 | 0.0128248   |     10 | 1.66912 |     0.4391 |
| ... |            4 |    8 |  128 | 0.00464561  |      2 | 1.7316  |     0.3463 |
| ... |            8 |  256 |    8 | 0.00031556  |      1 | 2.19409 |     0.1736 |
| ... |            4 |   16 |  256 | 0.00574329  |      2 | 1.85679 |     0.3368 |
| ... |            8 |    2 |    2 | 0.00325652  |      1 | 2.30272 |     0.0984 |
| ... |            2 |    2 |    2 | 0.000342987 |      2 | 1.76044 |     0.292  |
| ... |            4 |   64 |   32 | 0.003734    |      8 | 1.53101 |     0.4761 |
+-----+--------------+------+------+-------------+--------+---------+------------+

Best trial config: {'l1': 64, 'l2': 32, 'lr': 0.0037339984519545164, 'batch_size': 4}
Best trial final validation loss: 1.5310075663924216
Best trial final validation accuracy: 0.4761
Best trial test set accuracy: 0.4737

Most trials were stopped early by the scheduler to avoid wasting resources on poorly performing configurations. The best-performing trial reached a validation accuracy of about 47%, which was confirmed on the test set.
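This early stopping is the job of the ASHAScheduler imported at the top of the tutorial. As a minimal sketch (the search-space bounds and the grace_period and reduction_factor values here are illustrative assumptions, and data_dir is assumed to be set up as earlier in this tutorial), wiring the scheduler into tune.run could look like this:

# A minimal sketch; parameter values are illustrative, not prescriptive.
config = {
    "l1": tune.choice([2 ** i for i in range(9)]),   # fc1 layer size: 1..256
    "l2": tune.choice([2 ** i for i in range(9)]),   # fc2 layer size: 1..256
    "lr": tune.loguniform(1e-4, 1e-1),               # learning rate, log-uniform
    "batch_size": tune.choice([2, 4, 8, 16]),
}

scheduler = ASHAScheduler(
    metric="loss",        # optimize the reported validation loss
    mode="min",           # lower loss is better
    max_t=10,             # a trial runs for at most 10 iterations (epochs)
    grace_period=1,       # every trial gets at least 1 iteration before stopping
    reduction_factor=2,   # at each rung, roughly the best half of trials continue
)

result = tune.run(
    partial(train_cifar, data_dir=data_dir),
    resources_per_trial={"cpu": 2, "gpu": 0},
    config=config,
    num_samples=10,       # 10 trials, matching the tables above
    scheduler=scheduler,
)

With this setup, trials that report poor loss after their first iterations are terminated early, which is why many trials in the tables above show only one or two iterations while the best trial ran for all ten.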

So that’s it! You can now tune the parameters of your PyTorch models.

Total running time of the script: (6 minutes 7.048 seconds)
