Attention

June 2024 Status Update: Removing DataPipes and DataLoader V2

We are re-focusing the torchdata repo to be an iterative enhancement of torch.utils.data.DataLoader. We do not plan on continuing development or maintaining the DataPipes and DataLoaderV2 solutions, and they will be removed from the torchdata repo. We’ll also be revisiting the DataPipes references in pytorch/pytorch. In release torchdata==0.8.0 (July 2024) they will be marked as deprecated, and in 0.9.0 (October 2024) they will be deleted. Existing users are advised to pin to torchdata==0.8.0 or an older version until they are able to migrate away. Subsequent releases will not include DataPipes or DataLoaderV2. Please reach out if you have suggestions or comments (please use this issue for feedback).

RandomSplitter

class torchdata.datapipes.iter.RandomSplitter(source_datapipe: IterDataPipe, weights: Dict[T, Union[int, float]], seed, total_length: Optional[int] = None, target: Optional[T] = None)

Randomly split samples from a source DataPipe into groups (functional name: random_split). Since there is no buffer, only ONE group of samples (i.e. one child DataPipe) can be iterated through at any time. Attempts to iterate through multiple of them simultaneously will fail.
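
A minimal sketch of that constraint (hedged: the exact error raised when mixing iterators may vary with the DataPipe runtime):

>>> from torchdata.datapipes.iter import IterableWrapper
>>> dp = IterableWrapper(range(10))
>>> train, valid = dp.random_split(total_length=10, weights={"train": 0.5, "valid": 0.5}, seed=0)
>>> it_train = iter(train)
>>> _ = next(it_train)      # consuming one child at a time is fine
>>> it_valid = iter(valid)  # starting the other child takes over the shared split...
>>> next(it_train)          # ...so resuming the first is expected to fail (e.g. RuntimeError)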

Note that by default, multiple iterations of this DataPipe will yield the same split, for consistency across epochs. You can invoke override_seed on the output(s) to update the seed whenever needed (for example, once per epoch to get a different split each epoch).
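
As a hedged sketch of that pattern (it assumes that calling override_seed on one output re-seeds the shared split, so both outputs change together):

>>> from torchdata.datapipes.iter import IterableWrapper
>>> dp = IterableWrapper(range(10))
>>> train, valid = dp.random_split(total_length=10, weights={"train": 0.5, "valid": 0.5}, seed=0)
>>> for epoch in range(3):
...     train.override_seed(epoch)  # new seed takes effect on the next iteration
...     for sample in train:        # a different split of the same 10 samples each epoch
...         pass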

Parameters:
  • source_datapipe – Iterable DataPipe being split

  • weights – Dict of weights; the number of entries in this dict determines how many output DataPipes there will be. It is recommended to provide integer weights that sum up to total_length, which allows the resulting DataPipes’ length values to be known in advance (see the sketch after this list).

  • seed – Random seed used to determine the randomness of the split

  • total_length – Length of the source_datapipe. This is optional, but providing an integer is highly encouraged, because not every IterDataPipe has a length, especially one that can be known in advance without iterating.

  • target – Optional key (that must exist in weights) to indicate the specific group to return. If set to the default None, returns List[IterDataPipe]. If target is specified, returns IterDataPipe.
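
A minimal sketch of the integer-weights recommendation above, assuming that weights summing to total_length fix each group’s size exactly, so that len() is available on the outputs without iterating:

>>> from torchdata.datapipes.iter import IterableWrapper
>>> dp = IterableWrapper(range(10))
>>> # Integer weights that sum to total_length pin the split sizes
>>> train, valid = dp.random_split(total_length=10, weights={"train": 7, "valid": 3}, seed=0)
>>> len(train), len(valid)
(7, 3)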

Example

>>> from torchdata.datapipes.iter import IterableWrapper
>>> dp = IterableWrapper(range(10))
>>> train, valid = dp.random_split(total_length=10, weights={"train": 0.5, "valid": 0.5}, seed=0)
>>> list(train)
[2, 3, 5, 7, 8]
>>> list(valid)
[0, 1, 4, 6, 9]
>>> # You can also specify a target key if you only need a specific group of samples
>>> train = dp.random_split(total_length=10, weights={"train": 0.5, "valid": 0.5}, seed=0, target='train')
>>> list(train)
[2, 3, 5, 7, 8]
>>> # Be careful to use the same seed as before when specifying `target` to get the correct split.
>>> valid = dp.random_split(total_length=10, weights={"train": 0.5, "valid": 0.5}, seed=0, target='valid')
>>> list(valid)
[0, 1, 4, 6, 9]
