This module is an early prototype and is subject to change.


torch._dynamo.allow_in_graph(fn)[source]

Customize which functions TorchDynamo will include in the generated graph. Similar to torch.fx.wrap().


def fn(a):
    x = torch.add(a, 1)
    x = my_custom_function(x)
    x = torch.add(x, 1)
    return x


Will capture a single graph containing my_custom_function().
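A minimal runnable sketch of the above, assuming the "eager" backend (which just runs the captured graph, so no compiler toolchain is needed); my_custom_function is a hypothetical toy function:

```python
import torch
import torch._dynamo

# Hypothetical user function that should appear as a single call
# in the captured graph rather than being traced into.
def my_custom_function(x):
    return x + 2

torch._dynamo.allow_in_graph(my_custom_function)

# "eager" backend: runs the captured graph unmodified.
@torch.compile(backend="eager")
def fn(a):
    x = torch.add(a, 1)
    x = my_custom_function(x)
    x = torch.add(x, 1)
    return x

out = fn(torch.ones(3))  # 1 + 1 + 2 + 1 per element
```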


torch._dynamo.disallow_in_graph(fn)[source]

Customize which functions TorchDynamo will exclude from the generated graph and force a graph break on.


torch._dynamo.disallow_in_graph(torch.sub)


def fn(a):
    x = torch.add(a, 1)
    x = torch.sub(x, 1)
    x = torch.add(x, 1)
    return x


Will break the graph on torch.sub, and give two graphs each with a single torch.add() op.
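The same scenario as an end-to-end sketch, again assuming the "eager" backend so the example runs without codegen:

```python
import torch
import torch._dynamo

# Force a graph break at every torch.sub call site.
torch._dynamo.disallow_in_graph(torch.sub)

@torch.compile(backend="eager")
def fn(a):
    x = torch.add(a, 1)
    x = torch.sub(x, 1)   # graph break: torch.sub runs outside the graph
    x = torch.add(x, 1)
    return x

out = fn(torch.ones(3))  # 1 + 1 - 1 + 1 per element
```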


torch._dynamo.graph_break()[source]

Force a graph break.
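A short sketch of an explicit break, assuming the "eager" backend; the function is split into one graph before the call and one after:

```python
import torch
import torch._dynamo

@torch.compile(backend="eager")
def fn(a):
    x = torch.add(a, 1)
    # Explicit break: Dynamo emits one graph for the code above
    # and a second graph for the code below.
    torch._dynamo.graph_break()
    return torch.add(x, 1)

out = fn(torch.ones(3))
```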

torch._dynamo.optimize(backend='inductor', *, nopython=False, guard_export_fn=None, guard_fail_fn=None, disable=False, dynamic=False)[source]

The main entrypoint of TorchDynamo. Do graph capture and call backend() to optimize extracted graphs.

  • backend – One of two things:
      - A function/callable taking a torch.fx.GraphModule and example_inputs and returning a python callable that runs the graph faster. One can also provide additional context for the backend, like torch.jit.fuser("fuser2"), by setting the backend_ctx_ctor attribute. See AOTAutogradMemoryEfficientFusionWithContext for the usage.
      - A string backend name in torch._dynamo.list_backends()

  • nopython – If True, graph breaks will be errors and there will be a single whole-program graph.

  • disable – If True, turn this decorator into a no-op

  • dynamic – If True, turn on dynamic shapes support

Example Usage:

@torch._dynamo.optimize()
def toy_example(a, b):
    ...


torch._dynamo.optimize_assert(backend, *, hooks=Hooks(guard_export_fn=None, guard_fail_fn=None), export=False, dynamic=False)[source]
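A sketch of the callable form of the backend parameter described above; my_backend is a hypothetical name, and it simply inspects the captured graph before running it unmodified:

```python
import torch
import torch._dynamo

# Minimal custom backend: receives the captured torch.fx.GraphModule
# and its example inputs, returns a callable that runs the graph.
def my_backend(gm: torch.fx.GraphModule, example_inputs):
    print(f"captured graph with {len(list(gm.graph.nodes))} nodes")
    return gm.forward  # run the graph unmodified

@torch._dynamo.optimize(my_backend)
def toy_example(a, b):
    return torch.add(a, b)

out = toy_example(torch.ones(3), torch.ones(3))
```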

The same as torch._dynamo.optimize(backend, nopython=True)

torch._dynamo.run(fn=None)[source]

Don't do any dynamic compiles; just use prior optimizations
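A sketch of reusing cached compilations, assuming run() can wrap an already-optimized callable; frames without a cached entry fall back to eager rather than triggering a new compile:

```python
import torch
import torch._dynamo

@torch._dynamo.optimize("eager")
def fn(x):
    return x + 1

fn(torch.ones(3))  # first call: Dynamo compiles the frame

# Reuse prior optimizations only; no new dynamic compiles happen here.
fast_fn = torch._dynamo.run(fn)
out = fast_fn(torch.ones(3))
```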


torch._dynamo.disable(fn=None)[source]

Decorator and context manager to disable TorchDynamo
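The decorator form in a small sketch, assuming the "eager" backend; noisy_helper is a hypothetical toy function that Dynamo will not trace into:

```python
import torch
import torch._dynamo

# Dynamo skips this function entirely; it always runs in eager mode.
@torch._dynamo.disable
def noisy_helper(x):
    print("running in eager mode")
    return x + 1

@torch.compile(backend="eager")
def fn(a):
    return noisy_helper(torch.add(a, 1))

out = fn(torch.ones(3))
```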


torch._dynamo.reset()[source]

Clear all compile caches and restore initial state
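A minimal sketch, assuming the "eager" backend; after reset() the next call retraces and recompiles from scratch:

```python
import torch
import torch._dynamo

@torch.compile(backend="eager")
def fn(x):
    return x * 2

fn(torch.ones(3))        # populates the compile cache

# Drop all cached graphs and guards; the next call recompiles.
torch._dynamo.reset()
out = fn(torch.ones(3))
```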


torch._dynamo.skip(fn=None)[source]

Skip frames associated with the function code, but still process recursively invoked frames

class torch._dynamo.OptimizedModule(mod, dynamo_ctx)[source]

Wraps the original nn.Module object and later patches its forward method with an optimized self.forward method.
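A sketch showing where this class appears in practice, assuming the "eager" backend: compiling an nn.Module returns an OptimizedModule wrapper around the original module.

```python
import torch
import torch._dynamo

mod = torch.nn.Linear(4, 2)
opt_mod = torch.compile(mod, backend="eager")

# torch.compile on an nn.Module returns the OptimizedModule wrapper;
# calling it runs the optimized forward.
print(isinstance(opt_mod, torch._dynamo.OptimizedModule))
out = opt_mod(torch.ones(1, 4))
```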

torch._dynamo.register_backend(compiler_fn=None, name=None, tags=())[source]

Decorator to add a given compiler to the registry to allow calling torch.compile with string shorthand. Note: for projects not imported by default, it might be easier to pass a function directly as a backend and not use a string.
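A sketch of registering a backend under its function name; identity_backend is a hypothetical backend that just runs the captured graph unmodified:

```python
import torch
from torch._dynamo import register_backend

# Registered under the function's name since name=None.
@register_backend
def identity_backend(gm, example_inputs):
    return gm.forward  # run the captured graph as-is

# The string shorthand now resolves to the registered backend.
@torch.compile(backend="identity_backend")
def fn(x):
    return x + 1

out = fn(torch.ones(3))
```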

torch._dynamo.list_backends(exclude_tags=('debug', 'experimental'))[source]

Return valid strings that can be passed to:

torch.compile(..., backend="name")
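A quick sketch of listing the usable names; backends tagged debug or experimental are hidden unless exclude_tags is overridden:

```python
import torch._dynamo

# Names accepted as torch.compile(..., backend="name").
names = torch._dynamo.list_backends()
print(names)
```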

