Attention
June 2024 Status Update: Removing DataPipes and DataLoader V2
We are re-focusing the torchdata repo to be an iterative enhancement of torch.utils.data.DataLoader. We do not plan to continue developing or maintaining the [DataPipes] and [DataLoaderV2] solutions, and they will be removed from the torchdata repo. We'll also be revisiting the DataPipes references in pytorch/pytorch. In release torchdata==0.8.0 (July 2024) they will be marked as deprecated, and in 0.9.0 (Oct 2024) they will be deleted. Existing users are advised to pin to torchdata==0.8.0 or an older version until they are able to migrate away. Subsequent releases will not include DataPipes or DataLoaderV2. Please reach out if you have suggestions or comments (please use this issue for feedback).
SampleMultiplexer
- class torchdata.datapipes.iter.SampleMultiplexer(pipes_to_weights_dict: Dict[IterDataPipe[T_co], float], seed: Optional[int] = None)
Takes a Dict of (IterDataPipe, Weight), and yields items by sampling from these DataPipes with respect to their weights. When individual DataPipes are exhausted, it continues to sample from the remaining DataPipes according to their relative weights. If you wish to maintain the same ratio of weights indefinitely, you need to ensure that the inputs are never exhausted, for instance by applying cycle to them (see the sketch after the example below). Sampling is controlled by the provided random seed; if you don't provide one, the sampling will not be deterministic.
- Parameters:
  - pipes_to_weights_dict – a Dict of IterDataPipes and Weights. The total weight of unexhausted DataPipes will be normalized to 1 for the purpose of sampling.
  - seed – random seed to initialize the random number generator.
Example
>>> from torchdata.datapipes.iter import IterableWrapper, SampleMultiplexer
>>> source_dp1 = IterableWrapper([0] * 5)
>>> source_dp2 = IterableWrapper([1] * 5)
>>> d = {source_dp1: 99999999, source_dp2: 0.0000001}
>>> sample_mul_dp = SampleMultiplexer(pipes_to_weights_dict=d, seed=0)
>>> list(sample_mul_dp)
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
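To keep the ratio of weights fixed indefinitely, make the inputs inexhaustible before multiplexing them. A minimal sketch using the cycle datapipe mentioned above; the names dp_a/dp_b and the 0.9/0.1 weights are illustrative, not from the official docs:

>>> from itertools import islice
>>> from torchdata.datapipes.iter import IterableWrapper, SampleMultiplexer
>>> # Cycle both inputs so neither is ever exhausted and the ratio holds.
>>> dp_a = IterableWrapper(["a"]).cycle()
>>> dp_b = IterableWrapper(["b"]).cycle()
>>> mux = SampleMultiplexer(pipes_to_weights_dict={dp_a: 0.9, dp_b: 0.1}, seed=0)
>>> # The multiplexer is now an infinite stream, so take a finite slice of it.
>>> first_20 = list(islice(mux, 20))  # ~90% "a" in expectation

Without the cycle calls, the lighter-weighted pipe would eventually be the only unexhausted one, and every remaining sample would come from it.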