Template Function torch::nn::parallel::parallel_apply

Function Documentation

template<typename ModuleType>
std::vector<Tensor> torch::nn::parallel::parallel_apply(std::vector<ModuleType> &modules, const std::vector<Tensor> &inputs, const optional<std::vector<Device>> &devices = nullopt)

Applies the given inputs to the given modules in a parallel fashion.

Conceptually, a thread is spawned for each (module, input) pair, in which forward() is called on the module with its corresponding input. The outputs of the individual calls are stored in a vector and returned.

The first exception caught by any thread is stashed and rethrown after all threads have completed their operation.

Further remarks:

  1. The length of the module container must match the length of the inputs.

  2. If a list of devices is supplied, it must match the list of modules in length. Each device will then be set as the default device during the invocation of the respective module's forward(). This means any tensors allocated on the default device inside the module will be constructed on that device.
