
Map-style DataPipes

A Map-style DataPipe is one that implements the __getitem__() and __len__() protocols, and represents a map from (possibly non-integral) indices/keys to data samples. This is a close equivalent of Dataset from the PyTorch core library.

For example, such a DataPipe, when accessed with mapdatapipe[idx], could read the idx-th image and its corresponding label from a folder on the disk.

class torchdata.datapipes.map.MapDataPipe(*args, **kwds)

Map-style DataPipe.

All datasets that represent a map from keys to data samples should subclass this. Subclasses should overwrite __getitem__(), supporting fetching a data sample for a given, unique key. Subclasses can also optionally overwrite __len__(), which is expected to return the size of the dataset by many Sampler implementations and the default options of DataLoader.
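For instance, a subclass might look like the following minimal sketch (the class name SquaresDataPipe is hypothetical, not part of torchdata):

>>> # xdoctest: +SKIP
>>> from torchdata.datapipes.map import MapDataPipe
>>> class SquaresDataPipe(MapDataPipe):
...     """Hypothetical example: maps index i to i * i."""
...     def __init__(self, n):
...         self.n = n
...     def __getitem__(self, index):
...         if not 0 <= index < self.n:
...             raise IndexError(index)  # lets default iteration stop correctly
...         return index * index
...     def __len__(self):
...         return self.n
>>> dp = SquaresDataPipe(5)
>>> dp[3]
9
>>> len(dp)
5
>>> list(dp)  # iteration falls back to __getitem__ with 0, 1, 2, ...
[0, 1, 4, 9, 16]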

These DataPipes can be invoked in two ways: using their class constructors, or applying their functional form onto an existing MapDataPipe (recommended; available for most but not all DataPipes).

Note

DataLoader by default constructs an index sampler that yields integral indices. To make it work with a map-style DataPipe with non-integral indices/keys, a custom sampler must be provided.
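For example, the following sketch pairs a string-keyed MapDataPipe with a custom Sampler that yields those keys (both class names, DictDataPipe and KeySampler, are hypothetical):

>>> # xdoctest: +SKIP
>>> from torch.utils.data import DataLoader, Sampler
>>> from torchdata.datapipes.map import MapDataPipe
>>> class DictDataPipe(MapDataPipe):
...     """Hypothetical example: a MapDataPipe with string keys."""
...     def __init__(self, mapping):
...         self.mapping = mapping
...     def __getitem__(self, key):
...         return self.mapping[key]
...     def __len__(self):
...         return len(self.mapping)
>>> class KeySampler(Sampler):
...     """Hypothetical sampler yielding string keys instead of integers."""
...     def __init__(self, keys):
...         self.keys = list(keys)
...     def __iter__(self):
...         return iter(self.keys)
...     def __len__(self):
...         return len(self.keys)
>>> dp = DictDataPipe({"a": 1, "b": 2, "c": 3})
>>> loader = DataLoader(dp, sampler=KeySampler(["a", "b", "c"]), batch_size=2)
>>> list(loader)
[tensor([1, 2]), tensor([3])]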

Example

>>> # xdoctest: +SKIP
>>> from torchdata.datapipes.map import SequenceWrapper, Mapper
>>> dp = SequenceWrapper(range(10))
>>> map_dp_1 = dp.map(lambda x: x + 1)  # Using functional form (recommended)
>>> list(map_dp_1)
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> map_dp_2 = Mapper(dp, lambda x: x + 1)  # Using class constructor
>>> list(map_dp_2)
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> batch_dp = map_dp_1.batch(batch_size=2)
>>> list(batch_dp)
[[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]

By design, there are fewer MapDataPipes than IterDataPipes, to avoid duplicating the same functionality across both types. We encourage users to use the built-in IterDataPipes for various functionalities, and to convert them to MapDataPipes as needed using IterToMapConverter or .to_map_datapipe(). If you have any questions about usage or best practices while using MapDataPipe, feel free to ask on the PyTorch forum under the 'data' category.
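Such a conversion might look like the following sketch; the key_value_fn shown is an illustrative lambda, and the map is only materialized on first access:

>>> # xdoctest: +SKIP
>>> from torchdata.datapipes.iter import IterableWrapper
>>> source_dp = IterableWrapper(["apple", "banana", "cherry"])
>>> # key_value_fn must return a (key, value) pair for each element
>>> map_dp = source_dp.to_map_datapipe(key_value_fn=lambda s: (s[0], s))
>>> map_dp["b"]  # loading happens lazily, triggered by this first access
'banana'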

We are open to adding additional MapDataPipes where the operations can be lazily executed and __len__ can be known in advance. Feel free to make suggestions with a description of your use case in this GitHub issue. Feedback about our design choices is also welcome there.

Here is the list of available Map-style DataPipes; a short combined usage sketch follows the list:

List of MapDataPipes

Batcher

Create mini-batches of data (functional name: batch).

Concater

Concatenate multiple Map DataPipes (functional name: concat).

InMemoryCacheHolder

Stores elements from the source DataPipe in memory (functional name: in_memory_cache).

IterToMapConverter

Lazily load data from IterDataPipe to construct a MapDataPipe with the key-value pair generated by key_value_fn (functional name: to_map_datapipe).

Mapper

Apply the input function over each item from the source DataPipe (functional name: map).

SequenceWrapper

Wraps a sequence object into a MapDataPipe.

Shuffler

Shuffle the input MapDataPipe via its indices (functional name: shuffle).

UnZipper

Takes in a DataPipe of Sequences, unpacks each Sequence, and returns the elements in separate DataPipes based on their position in the Sequence (functional name: unzip).

Zipper

Aggregates elements into a tuple from each of the input DataPipes (functional name: zip).
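As an illustration, here is a short sketch exercising a few of the DataPipes listed above via their functional forms; the values are illustrative, and the order produced by shuffle depends on the random seed:

>>> # xdoctest: +SKIP
>>> from torchdata.datapipes.map import SequenceWrapper
>>> dp1 = SequenceWrapper(range(3))
>>> dp2 = SequenceWrapper(range(3, 6))
>>> list(dp1.concat(dp2))  # Concater: elements re-indexed as 0..5
[0, 1, 2, 3, 4, 5]
>>> list(dp1.zip(dp2))     # Zipper: tuples of elements sharing an index
[(0, 3), (1, 4), (2, 5)]
>>> sorted(dp1.shuffle())  # Shuffler: same elements, permuted order
[0, 1, 2]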
