MixUp

class torchvision.transforms.v2.MixUp(*, alpha: float = 1.0, num_classes: int, labels_getter='default')[source]

[BETA] Apply MixUp to the provided batch of images and labels.

Note

The MixUp transform is in Beta stage, and while we do not expect disruptive breaking changes, some APIs may still change slightly based on user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753.

Paper: mixup: Beyond Empirical Risk Minimization.
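As a rough illustration of what the transform computes (a manual sketch of the mixup recipe, not torchvision's internal implementation; all shapes and the num_classes value are made up for the example): a mixing coefficient lam is drawn from Beta(alpha, alpha), and each image and its one-hot label are blended with those of a neighboring sample in the batch.

    import torch

    alpha = 1.0
    # Mixing coefficient sampled from Beta(alpha, alpha), a scalar in (0, 1).
    lam = torch.distributions.Beta(alpha, alpha).sample()

    imgs = torch.rand(4, 3, 32, 32)  # toy batch of 4 images
    labels = torch.tensor([0, 1, 2, 3])
    one_hot = torch.nn.functional.one_hot(labels, num_classes=10).float()

    # Blend each sample with a neighbor by rolling the batch dimension.
    mixed_imgs = lam * imgs + (1 - lam) * imgs.roll(1, dims=0)
    mixed_labels = lam * one_hot + (1 - lam) * one_hot.roll(1, dims=0)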

Note

This transform is meant to be used on batches of samples, not individual images. See How to use CutMix and MixUp for detailed usage examples. The sample pairing is deterministic: consecutive samples in the batch are matched, so the batch should be shuffled beforehand (the pairing scheme is an implementation detail, not a guaranteed convention).

In the input, the labels are expected to be a tensor of shape (batch_size,). They will be transformed into a tensor of shape (batch_size, num_classes).
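A minimal usage sketch follows (batch size, image size, and num_classes are illustrative):

    import torch
    from torchvision.transforms.v2 import MixUp

    mixup = MixUp(alpha=1.0, num_classes=10)

    imgs = torch.rand(4, 3, 224, 224)    # shuffled batch of images
    labels = torch.randint(0, 10, (4,))  # integer labels, shape (batch_size,)

    mixed_imgs, mixed_labels = mixup(imgs, labels)
    print(mixed_labels.shape)            # torch.Size([4, 10])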

Parameters:
  • alpha (float, optional) – hyperparameter of the Beta distribution used for mixup. Default is 1.0.

  • num_classes (int) – number of classes in the batch. Used for one-hot-encoding.

  • labels_getter (callable or "default", optional) – indicates how to identify the labels in the input. By default, this will pick the second parameter as the labels if it is a tensor. This covers the most common scenario where this transform is called as MixUp()(imgs_batch, labels_batch). It can also be a callable that takes the same input as the transform and returns the labels; see the sketch below.
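For instance, assuming a hypothetical batch laid out as a dict (the "images" and "targets" keys below are made up for this sketch), a callable labels_getter could look like:

    import torch
    from torchvision.transforms.v2 import MixUp

    # Hypothetical batch layout: a dict holding images and integer labels.
    batch = {
        "images": torch.rand(4, 3, 224, 224),
        "targets": torch.randint(0, 10, (4,)),
    }

    # Tell the transform where to find the labels inside the input structure.
    mixup = MixUp(num_classes=10, labels_getter=lambda inputs: inputs["targets"])
    out = mixup(batch)  # same structure back, with images and labels mixed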

Examples using MixUp:

How to use CutMix and MixUp
