
Error Propagation

Each host in a distributed PyTorch job runs a single TorchElastic agent and multiple workers (child processes of the TorchElastic agent). Since the workers are user-provided (your PyTorch script/job), TorchElastic provides a way to propagate errors on the trainers through the agent and up to the scheduler, which ultimately informs the end user about the job's state and applies any retry policies.

TorchElastic categorizes errors into 3 categories:

Category        Sub-Category    Description
--------------  --------------  --------------------------------------------------------------------------
User Error      Input Error     invalid inputs to TorchElastic APIs (e.g. min > max nodes)
User Error      Worker Failure  any failures on the worker child process
Platform Error  n/a             failures caused by the agent
Infra Error     n/a             failures outside the domain of the agent and workers (e.g. host failures)

All errors other than “Worker Failure” either are raised canonically from the agent process or implicitly or explicitly crash the agent process, so the standard exception-handling strategies provided by the language (Python) apply.

Worker Failures are special because the exception/failure originates on a process different from the agent, so the error needs to be propagated across processes (e.g. the agent cannot simply try-except an exception raised on the worker process).

TorchElastic agents use torch.distributed.elastic.multiprocessing.start_processes() to launch the workers, which has simple file-based inter-process error propagation built in.
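
As a rough sketch of what this looks like (not a definitive implementation: the exact start_processes() signature varies across PyTorch releases, in particular how log locations are passed, and all names and paths here are illustrative):

import torch.distributed.elastic.multiprocessing as mp

def trainer(msg):
    # user-provided worker function
    print(msg)

if __name__ == "__main__":  # guard required for the default "spawn" start method
    ctx = mp.start_processes(
        name="trainer",
        entrypoint=trainer,
        args={0: ("hello",), 1: ("world",)},  # local_rank -> args
        envs={0: {"LOCAL_RANK": "0"}, 1: {"LOCAL_RANK": "1"}},
        log_dir="/tmp/trainer_logs",  # newer releases take a logs_specs object instead
    )
    result = ctx.wait()  # block until all workers exit
    if result.failures:  # local_rank -> ProcessFailure, built from the error files
        first = min(result.failures.values(), key=lambda f: f.timestamp)
        print(first.message)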

Any function or binary entrypoint decorated with record() will write uncaught exceptions (with trace information) to a file specified by the environment variable TORCHELASTIC_ERROR_FILE. The parent process (e.g. the agent) sets this env var on each child it launches, then aggregates the error files of all children and propagates the one with the smallest timestamp (i.e. the first error).
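
Conceptually, the aggregation step looks like the sketch below. The agent does this internally, and the JSON payload layout shown is an internal detail of the error handler that may change between releases:

import json
import os

def first_failure(error_files):
    # error_files: local_rank -> path that was set in TORCHELASTIC_ERROR_FILE
    earliest = None
    for rank, path in error_files.items():
        if not os.path.isfile(path):
            continue  # this worker did not record an error
        with open(path) as fp:
            data = json.load(fp)
        # record() stores a timestamp alongside the exception trace
        ts = int(data["message"]["extraInfo"]["timestamp"])
        if earliest is None or ts < earliest[0]:
            earliest = (ts, rank, data)
    return earliest  # (timestamp, rank, payload) of the first error, or None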

Methods and Classes

torch.distributed.elastic.multiprocessing.errors.record(fn, error_handler=None)

Syntactic sugar to record errors/exceptions that happened in the decorated function using the provided error_handler.

Using this decorator is equivalent to:

from torch.distributed.elastic.multiprocessing.errors import (
    ChildFailedError,
    get_error_handler,
)

error_handler = get_error_handler()
error_handler.initialize()
try:
    foobar()  # the decorated function
except ChildFailedError as e:
    _, failure = e.get_first_failure()
    error_handler.dump_error_file(failure.error_file, failure.exitcode)
    raise
except Exception as e:
    error_handler.record_exception(e)
    raise

Important

Use this decorator once per process, on the top-level method; typically this is the main method.

Example

from torch.distributed.elastic.multiprocessing.errors import record

@record
def main():
    pass

if __name__ == "__main__":
    main()

Return type:

Callable[..., T]

class torch.distributed.elastic.multiprocessing.errors.ChildFailedError(name, failures)

Special exception type that can be raised from a function annotated with the @record decorator to have the child process's root exception propagate up the stack as-is (e.g. without being wrapped in the parent's traceback).

Useful in cases where the parent is a simple nanny process and the child (worker) processes are actually doing meaningful compute. Here, errors typically occur on the child process, since the parent is not doing anything non-trivial, and child errors should be propagated to the scheduler for accurate root-cause diagnostics.

Note

The propagation relies on error files rather than exception handling to support both function and binary launches.

Example:

# process tree on a host (container)
0: scheduler-init-process:
           |- 1: torchelastic_agent:
                    |- 2: trainer_0 (ok)
                    |- 3: trainer_1 (fail) -> error.json
                    |- ...
                    |- n+2: trainer_n (ok)
           |- n+3: other processes
           |- ...

In the example above, trainer 1's failure (written into error.json) is the root cause and should be reported to the scheduler's init process. The torchelastic agent raises a ChildFailedError("trainer", {1: "trainer_1/error.json"}) upon detecting trainer 1's failure, which propagates the contents of trainer 1's error file to the scheduler's init process.
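
In code, the agent-side raise looks roughly as follows. Note that in the actual ChildFailedError the failures dict maps the worker's rank to a ProcessFailure (see below), not to the raw file path; the pid and exitcode values here are made up:

from torch.distributed.elastic.multiprocessing.errors import (
    ChildFailedError,
    ProcessFailure,
)

# what the agent does upon detecting trainer 1's failure
failure = ProcessFailure(
    local_rank=1,
    pid=12345,
    exitcode=1,
    error_file="trainer_1/error.json",
)
raise ChildFailedError("trainer", {1: failure})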

class torch.distributed.elastic.multiprocessing.errors.ErrorHandler

Writes the provided exception object, along with some other metadata about the error, in a structured JSON format to the error file specified by the environment variable TORCHELASTIC_ERROR_FILE. If this environment variable is not set, it simply logs the contents of what would have been written to the error file.

This handler may be subclassed to customize the handling of the error. Subclasses should override initialize() and record_exception().
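
For example, a subclass could additionally forward errors to a custom sink. This is a hypothetical sketch; only the initialize() and record_exception() override points come from the documented contract:

from torch.distributed.elastic.multiprocessing.errors import record
from torch.distributed.elastic.multiprocessing.errors.error_handler import ErrorHandler

class ReportingErrorHandler(ErrorHandler):
    def initialize(self) -> None:
        super().initialize()
        # e.g. set up a connection to an internal error-reporting service

    def record_exception(self, e: BaseException) -> None:
        super().record_exception(e)  # still writes TORCHELASTIC_ERROR_FILE
        # e.g. also forward str(e) to the error-reporting service

def main():
    pass

# record(fn, error_handler=...) accepts the custom handler
main = record(main, error_handler=ReportingErrorHandler())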

class torch.distributed.elastic.multiprocessing.errors.ProcessFailure(local_rank, pid, exitcode, error_file)

Represents a failed process result. When the worker process fails, it may record the root cause of the failure into the error file. The failure timestamp is read from the provided error_file; if the error_file does not exist, the timestamp is the current time (seconds since epoch).

The message field is a concise explanation of the failure. If the error file exists, the message is obtained from it. Otherwise, one is generated based on the failure signature.

Note

It is assumed that the error_file is written by torch.distributed.elastic.multiprocessing.errors.error_handler.ErrorHandler. Otherwise the behavior is undefined.
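
A small illustration of the fallback behavior described above (all values are made up, and the error file path deliberately points at nothing):

from torch.distributed.elastic.multiprocessing.errors import ProcessFailure

failure = ProcessFailure(
    local_rank=0,
    pid=4242,
    exitcode=-11,  # negative exitcode: the worker was killed by a signal
    error_file="/nonexistent/error.json",  # no error file was written
)
print(failure.message)    # generated from the failure signature
print(failure.timestamp)  # current time, since the error file does not exist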
