Multiprocessing package - torch.multiprocessing
torch.multiprocessing is a wrapper around the native multiprocessing module. It registers custom reducers that use shared memory to provide shared views on the same data in different processes. Once a tensor/storage is moved to shared memory (see share_memory_()), it can be sent to other processes without making any copies.
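For example, moving a tensor's storage into shared memory is an in-place operation, and is_shared() reports whether it has happened (a minimal sketch, assuming a CPU tensor):

```python
import torch

t = torch.zeros(3)
print(t.is_shared())   # False: the storage lives in regular memory
t.share_memory_()      # moves the underlying storage into shared memory, in place
print(t.is_shared())   # True: the tensor can now cross process boundaries without copies
```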
The API is 100% compatible with the original module - it’s enough to change import multiprocessing to import torch.multiprocessing to have all the tensors sent through the queues, or shared via other mechanisms, moved to shared memory.

Because of the similarity of the APIs, we do not document most of this package’s contents, and we recommend referring to the very good docs of the original module.
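As a sketch of the drop-in replacement, the example below sends a shared tensor through a torch.multiprocessing queue; the child's in-place write is visible in the parent because both processes map the same shared memory (worker and the tensor shape here are illustrative):

```python
import torch
import torch.multiprocessing as mp

def worker(q):
    t = q.get()   # receives a view on the parent's shared-memory storage, not a copy
    t.fill_(1.0)  # this in-place write is visible in the parent process

if __name__ == "__main__":
    t = torch.zeros(3)
    t.share_memory_()          # move the storage to shared memory before sending
    q = mp.Queue()
    q.put(t)
    p = mp.Process(target=worker, args=(q,))
    p.start()
    p.join()
    print(t)                   # the parent observes the child's writes
```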
If the main process exits abruptly (e.g. because of an incoming signal), Python’s multiprocessing sometimes fails to clean up its children. This is a known caveat, so if you’re seeing any resource leaks after interrupting the interpreter, it probably means that this has just happened.
torch.multiprocessing.get_all_sharing_strategies()
Returns a set of sharing strategies supported on the current system.

torch.multiprocessing.get_sharing_strategy()
Returns the current strategy for sharing CPU tensors.

torch.multiprocessing.set_sharing_strategy(new_strategy)
Sets the strategy for sharing CPU tensors.

new_strategy (str) – Name of the selected strategy. Should be one of the values returned by get_all_sharing_strategies().
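For instance (a minimal sketch; the set of available strategies depends on the platform, and file_system is assumed to be among them here):

```python
import torch.multiprocessing as mp

print(mp.get_all_sharing_strategies())  # e.g. {'file_descriptor', 'file_system'} on Linux
print(mp.get_sharing_strategy())        # the strategy currently in effect

mp.set_sharing_strategy('file_system')  # must be one of the values reported above
print(mp.get_sharing_strategy())
```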
Spawning subprocesses

Available for Python >= 3.4. This depends on the spawn start method in Python’s multiprocessing package.
Spawning a number of subprocesses to perform some function can be done by creating Process instances and calling join to wait for their completion. This approach works fine when dealing with a single subprocess but presents potential issues when dealing with multiple processes.

Namely, joining processes sequentially implies they will terminate sequentially. If they don’t, and the first process does not terminate, the termination of the others will go unnoticed. Also, there are no native facilities for error propagation.
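The sequential pattern described above looks like this (a sketch; run is a stand-in for real work):

```python
import torch.multiprocessing as mp

def run(rank):
    print(f"process {rank} finished")

if __name__ == "__main__":
    processes = []
    for rank in range(4):
        p = mp.Process(target=run, args=(rank,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()  # joins happen in order: if processes[0] hangs, failures in
                  # later processes go unnoticed, and exceptions are not propagated
```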
The spawn function below addresses these concerns and takes care of error propagation and out-of-order termination, and will actively terminate processes upon detecting an error in one of them.
spawn(fn, args=(), nprocs=1, join=True, daemon=False, start_method='spawn')[source]
Spawns nprocs processes that run fn with args.

If one of the processes exits with a non-zero exit status, the remaining processes are killed and an exception is raised with the cause of termination. In the case an exception was caught in the child process, it is forwarded and its traceback is included in the exception raised in the parent process.
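A sketch of typical usage, passing a shared tensor through args so each worker can report back (mark and the tensor shape are illustrative):

```python
import torch
import torch.multiprocessing as mp

def mark(i, flags):
    # i is the process index supplied by spawn; flags arrives via args
    flags[i] = 1.0  # flags lives in shared memory, so the parent sees this write

if __name__ == "__main__":
    flags = torch.zeros(4).share_memory_()
    mp.spawn(mark, args=(flags,), nprocs=4)  # join=True by default: blocks until all exit
    print(flags)  # every worker has set its own entry
```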
fn (function) – Function called as the entrypoint of the spawned process. This function must be defined at the top level of a module so it can be pickled and spawned. This is a requirement imposed by multiprocessing. The function is called as fn(i, *args), where i is the process index and args is the passed-through tuple of arguments.
args (tuple) – Arguments passed to fn.
nprocs (int) – Number of processes to spawn.
join (bool) – Perform a blocking join on all processes.
daemon (bool) – The spawned processes’ daemon flag. If set to True, daemonic processes will be created.
start_method (str) – (deprecated) this method will always use spawn as the start method. To use a different start method, use start_processes().
Returned by spawn() when called with join=False.

join(timeout=None)
Tries to join one or more processes in this spawn context. If one of them exited with a non-zero exit status, this function kills the remaining processes and raises an exception with the cause of the first process exiting.
Returns True if all processes have been joined successfully, False if there are more processes that need to be joined.
timeout (float) – Wait this long before giving up on waiting.
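A sketch of the non-blocking form: with join=False, spawn returns the context immediately and the parent polls join until it reports True (work, the tensor, and the 1-second timeout are illustrative):

```python
import torch
import torch.multiprocessing as mp

def work(i, out):
    out[i] = i + 1.0  # write this worker's result into shared memory

if __name__ == "__main__":
    out = torch.zeros(2).share_memory_()
    ctx = mp.spawn(work, args=(out,), nprocs=2, join=False)  # returns immediately
    while not ctx.join(timeout=1.0):  # False while some processes are still alive
        pass                          # the parent is free to do other work here
    print(out)
```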