"""torch.multiprocessing is a wrapper around the native :mod:`multiprocessing`module. It registers custom reducers, that use shared memory to provide sharedviews on the same data in different processes. Once the tensor/storage is movedto shared_memory (see :func:`~torch.Tensor.share_memory_`), it will be possibleto send it to other processes without making any copies.The API is 100% compatible with the original module - it's enough to change``import multiprocessing`` to ``import torch.multiprocessing`` to have all thetensors sent through the queues or shared via other mechanisms, moved to sharedmemory.Because of the similarity of APIs we do not document most of this packagecontents, and we recommend referring to very good docs of the original module."""importmultiprocessingimportsysimporttorchfrom.reductionsimportinit_reductions__all__=["set_sharing_strategy","get_sharing_strategy","get_all_sharing_strategies"]frommultiprocessingimport*# noqa: F403__all__+=multiprocessing.__all__# noqa: PLE0605 type: ignore[attr-defined]# This call adds a Linux specific prctl(2) wrapper function to this module.# See https://github.com/pytorch/pytorch/pull/14391 for more information.torch._C._multiprocessing_init()"""Add helper function to spawn N processes and wait for completion of any ofthem. This depends `mp.get_context` which was added in Python 3.4."""from.spawnimport(ProcessContext,ProcessExitedException,ProcessRaisedException,spawn,SpawnContext,start_processes,)ifsys.platform=="darwin"orsys.platform=="win32":_sharing_strategy="file_system"_all_sharing_strategies={"file_system"}else:_sharing_strategy="file_descriptor"_all_sharing_strategies={"file_descriptor","file_system"}
def set_sharing_strategy(new_strategy):
    """Sets the strategy for sharing CPU tensors.

    Args:
        new_strategy (str): Name of the selected strategy. Should be one of
            the values returned by :func:`get_all_sharing_strategies()`.
    """
    global _sharing_strategy
    assert new_strategy in _all_sharing_strategies
    _sharing_strategy = new_strategy
def get_sharing_strategy():
    """Returns the current strategy for sharing CPU tensors."""
    return _sharing_strategy
def get_all_sharing_strategies():
    """Returns a set of sharing strategies supported on the current system."""
    return _all_sharing_strategies
init_reductions()
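# The strategy API above is normally called once at process startup. The sketch
# below mimics that flow with local stand-ins (the _strategy bookkeeping and
# function names here are illustrative, not torch's module state), so it runs
# without torch installed:

```python
# Illustrative stand-ins for the module-level state defined above.
_all_strategies = {"file_descriptor", "file_system"}
_strategy = "file_descriptor"  # Linux default; macOS/Windows only get file_system


def set_strategy(new_strategy):
    """Mimics set_sharing_strategy: validate the name, then swap the default."""
    global _strategy
    assert new_strategy in _all_strategies, f"unknown strategy: {new_strategy}"
    _strategy = new_strategy


def get_strategy():
    """Mimics get_sharing_strategy: report the current default."""
    return _strategy


# Typical startup code: switch to file_system when a process might otherwise
# exhaust its open-file-descriptor limit by sharing many tensors.
set_strategy("file_system")
```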