torch.nn.parallel.data_parallel

torch.nn.parallel.data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None)[source]

Evaluates module(input) in parallel across the GPUs given in device_ids.

This is the functional version of the DataParallel module.

Parameters:
  • module (Module) – the module to evaluate in parallel

  • inputs (Tensor) – inputs to the module

  • device_ids (list of int or torch.device) – GPU ids on which to replicate module

  • output_device (int or torch.device) – GPU location of the output. Use -1 to indicate the CPU. (default: device_ids[0])

Returns:

a Tensor containing the result of module(input) located on output_device
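A minimal usage sketch, assuming at least two CUDA devices are available; the Linear module and the tensor shapes are illustrative only:

    import torch
    import torch.nn as nn
    from torch.nn.parallel import data_parallel

    # Any nn.Module works; a small linear layer keeps the example short.
    model = nn.Linear(128, 64).cuda(0)

    # A batch of inputs on the first GPU; data_parallel scatters it along dim 0.
    inputs = torch.randn(32, 128, device="cuda:0")

    # Replicate the module onto GPUs 0 and 1, run the chunks in parallel,
    # and gather the results back onto output_device (here device_ids[0]).
    outputs = data_parallel(model, inputs, device_ids=[0, 1], output_device=0)
    print(outputs.shape)  # torch.Size([32, 64])

Unlike the DataParallel module, this functional form performs the replicate/scatter/gather steps on every call rather than wrapping the module once.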
