Attention
June 2024 Status Update: Removing DataPipes and DataLoader V2
We are re-focusing the torchdata repo to be an iterative enhancement of torch.utils.data.DataLoader. We do not plan on continuing development or maintaining the [DataPipes] and [DataLoaderV2] solutions, and they will be removed from the torchdata repo. We’ll also be revisiting the DataPipes references in pytorch/pytorch. In release torchdata==0.8.0 (July 2024) they will be marked as deprecated, and in 0.9.0 (Oct 2024) they will be deleted. Existing users are advised to pin to torchdata==0.8.0 or an older version until they are able to migrate away. Subsequent releases will not include DataPipes or DataLoaderV2. Please reach out if you have suggestions or comments (please use this issue for feedback).
DistributedReadingService
- class torchdata.dataloader2.DistributedReadingService(timeout: int = 1800)
DistributedReadingService handles distributed sharding on the graph of DataPipe and guarantees randomness by sharing the same seed across the distributed processes.
- Parameters:
  timeout – Timeout, in seconds, for operations executed against the process group. The default of 1800 equals 30 minutes.
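A minimal usage sketch, assuming each rank is launched with torchrun and the DataLoader2 API from torchdata<=0.8.0; the datapipe pipeline below is illustrative only:

```python
# Minimal sketch: DistributedReadingService plugged into DataLoader2.
# Assumes every rank is launched via torchrun so the default process
# group can be initialized; the datapipe itself is illustrative.
import torch.distributed as dist
from torchdata.datapipes.iter import IterableWrapper
from torchdata.dataloader2 import DataLoader2, DistributedReadingService

def main() -> None:
    dist.init_process_group("gloo")                # default group for this rank
    dp = IterableWrapper(range(1000)).shuffle().sharding_filter()
    rs = DistributedReadingService(timeout=1800)   # 30-minute collective timeout
    dl = DataLoader2(dp, reading_service=rs)
    for batch in dl:                               # each rank sees its own shard
        ...                                        # training step
    dl.shutdown()                                  # finalizes the reading service
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```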
- finalize() → None
Clean up the distributed process group.
- initialize(datapipe: Union[IterDataPipe, MapDataPipe]) → Union[IterDataPipe, MapDataPipe]
Launches the gloo-backend distributed process group. Carries out distributed sharding on the graph of DataPipe and returns the graph attached with a FullSyncIterDataPipe at the end.
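DataLoader2 normally drives these calls; the following is a hedged sketch of the lifecycle when invoking the reading service directly, assuming the default process group is already initialized on every rank:

```python
import torch.distributed as dist
from torchdata.datapipes.iter import IterableWrapper
from torchdata.dataloader2 import DistributedReadingService

dist.init_process_group("gloo")

dp = IterableWrapper(range(16)).sharding_filter()
rs = DistributedReadingService()
dp = rs.initialize(dp)  # shards the graph per rank, appends a fullsync datapipe at the end
# ... iterate dp on each rank ...
rs.finalize()           # cleans up the distributed process group
dist.destroy_process_group()
```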
- initialize_iteration(seed_generator: SeedGenerator, iter_reset_fn: Optional[Callable[[Union[IterDataPipe, MapDataPipe]], Union[IterDataPipe, MapDataPipe]]] = None) → Optional[Callable[[Union[IterDataPipe, MapDataPipe]], Union[IterDataPipe, MapDataPipe]]]
Shares the same seed from rank 0 to the other ranks across the distributed processes and applies the random seed to the DataPipe graph.
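Continuing the usage sketch above, per-epoch determinism can be controlled by seeding DataLoader2 before each epoch; the reading service then shares rank 0's derived seed with all ranks via initialize_iteration (dl and num_epochs are assumed from the earlier sketch):

```python
for epoch in range(num_epochs):
    dl.seed(epoch)        # same base seed on every rank; rank 0's derived seed is shared
    for batch in dl:
        ...               # shuffles stay consistent across ranks for this epoch
```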