Source code for torch.distributed.tensor.experimental
# mypy: allow-untyped-defs
# Copyright (c) Meta Platforms, Inc. and affiliates
from contextlib import contextmanager

from torch.distributed.tensor._api import DTensor
from torch.distributed.tensor.experimental._attention import context_parallel
from torch.distributed.tensor.experimental._func_map import local_map
from torch.distributed.tensor.experimental._register_sharding import register_sharding

__all__ = ["context_parallel", "implicit_replication", "local_map", "register_sharding"]


@contextmanager
def implicit_replication():
    """
    This context manager allows :class:`DTensor` to implicitly treat all
    non-DTensors (``torch.Tensor``) in the program as replicate
    :class:`DTensor` s during operator computation.

    .. warning:: This might possibly lead to incorrect results if the
        ``torch.Tensor`` s are not actually replicated in practice; please
        use it at your discretion.
    """
    try:
        DTensor._op_dispatcher._allow_implicit_replication = True
        yield
    finally:
        DTensor._op_dispatcher._allow_implicit_replication = False


# Set namespace for exposed private names
context_parallel.__module__ = "torch.distributed.tensor.experimental"
implicit_replication.__module__ = "torch.distributed.tensor.experimental"
local_map.__module__ = "torch.distributed.tensor.experimental"
register_sharding.__module__ = "torch.distributed.tensor.experimental"
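
As a usage sketch of ``implicit_replication`` (not part of this module's source): inside the context, a plain ``torch.Tensor`` mixed into a DTensor operation is treated as if it were replicated across the mesh. The mesh setup below is hypothetical and assumes ``torch.distributed`` has already been initialized with two ranks.

    import torch
    from torch.distributed.device_mesh import init_device_mesh
    from torch.distributed.tensor import distribute_tensor, Replicate
    from torch.distributed.tensor.experimental import implicit_replication

    # Hypothetical setup: assumes a process group with 2 ranks is initialized.
    mesh = init_device_mesh("cpu", (2,))
    dt = distribute_tensor(torch.ones(4, 4), mesh, [Replicate()])

    plain = torch.full((4, 4), 2.0)  # a regular torch.Tensor, not a DTensor

    with implicit_replication():
        # Inside the context, `plain` is implicitly treated as a replicated
        # DTensor, so the mixed DTensor / torch.Tensor op dispatches cleanly.
        out = dt + plain

Note that the flag is reset in the ``finally`` block, so implicit replication is scoped strictly to the ``with`` body even if the computation raises.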