MixUp

class torchvision.transforms.v2.MixUp(*, alpha: float = 1.0, num_classes: Optional[int] = None, labels_getter='default')[source]

Apply MixUp to the provided batch of images and labels.

Paper: mixup: Beyond Empirical Risk Minimization.

Note

This transform is meant to be used on batches of samples, not individual images. See How to use CutMix and MixUp for detailed usage examples. The sample pairing is deterministic and done by matching consecutive samples in the batch, so the batch needs to be shuffled (this is an implementation detail, not a guaranteed convention).
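
For instance, one common pattern is to apply the transform inside the DataLoader's collate function, after default collation has batched the samples. A minimal sketch, assuming a stand-in FakeData dataset; the class count, batch size, and preprocessing pipeline are placeholders:

import torch
from torch.utils.data import DataLoader, default_collate
from torchvision import datasets
from torchvision.transforms import v2

# Stand-in dataset: fake RGB images with integer labels in [0, 10).
dataset = datasets.FakeData(
    size=64,
    image_size=(3, 32, 32),
    num_classes=10,
    transform=v2.Compose([v2.PILToTensor(), v2.ToDtype(torch.float32, scale=True)]),
)

mixup = v2.MixUp(alpha=1.0, num_classes=10)

def collate_fn(batch):
    # Collate (image, label) pairs into batched tensors, then mix the whole batch.
    return mixup(*default_collate(batch))

# shuffle=True matters here: MixUp pairs consecutive samples within each batch.
loader = DataLoader(dataset, batch_size=8, shuffle=True, collate_fn=collate_fn)
images, labels = next(iter(loader))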

In the input, the labels are expected to be a tensor of shape (batch_size,). They will be transformed into a tensor of shape (batch_size, num_classes).
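
For example, a minimal sketch with random data, assuming 10 classes:

import torch
from torchvision.transforms import v2

images = torch.rand(4, 3, 32, 32)           # batch of 4 RGB images
labels = torch.randint(0, 10, size=(4,))    # integer labels, shape (4,)

mixup = v2.MixUp(alpha=1.0, num_classes=10)
mixed_images, mixed_labels = mixup(images, labels)

print(mixed_images.shape)  # torch.Size([4, 3, 32, 32])
print(mixed_labels.shape)  # torch.Size([4, 10]), i.e. soft one-hot labels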

Parameters:
  • alpha (float, optional) – hyperparameter of the Beta distribution used for mixup. Default is 1.

  • num_classes (int, optional) – number of classes in the batch. Used for one-hot-encoding. Can be None only if the labels are already one-hot-encoded.

  • labels_getter (callable or "default", optional) – indicates how to identify the labels in the input. By default, this will pick the second parameter as the labels if it’s a tensor. This covers the most common scenario where this transform is called as MixUp()(imgs_batch, labels_batch). It can also be a callable that takes the same input as the transform, and returns the labels.
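
For instance, if each batch is a dict rather than an (images, labels) tuple, a callable can point the transform at the labels entry. A hypothetical sketch; the "img" and "target" keys are stand-ins:

import torch
from torchvision.transforms import v2

# The lambda receives the same input that the transform is called with below.
mixup = v2.MixUp(num_classes=10, labels_getter=lambda batch: batch["target"])

batch = {"img": torch.rand(4, 3, 32, 32), "target": torch.randint(0, 10, size=(4,))}
out = mixup(batch)  # out["img"] is mixed; out["target"] now has shape (4, 10)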

Examples using MixUp:

How to use CutMix and MixUp

make_params(flat_inputs: List[Any]) → Dict[str, Any][source]

Method to override for custom transforms.

See How to write your own v2 transforms

transform(inpt: Any, params: Dict[str, Any]) → Any[source]

Method to override for custom transforms.

See How to write your own v2 transforms
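
For illustration, a custom v2 transform overrides these two hooks roughly as follows. This is a hypothetical sketch, unrelated to MixUp itself; the intensity-scaling logic is made up:

import torch
from torchvision.transforms import v2

class RandomScaleIntensity(v2.Transform):
    # Hypothetical transform: scale float tensors by one shared random factor.

    def make_params(self, flat_inputs):
        # Called once per call; the returned dict is passed to every transform() invocation.
        return {"factor": float(torch.empty(()).uniform_(0.8, 1.2))}

    def transform(self, inpt, params):
        # Called once per input in the sample, with the params from make_params().
        if isinstance(inpt, torch.Tensor) and inpt.is_floating_point():
            return inpt * params["factor"]
        return inpt  # leave non-float inputs (e.g. integer labels) unchanged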
