
python.builtin

dynamic_shape_round

Note

Tags: torch.dynamic-shape, python.builtin

Support Level: NOT_SUPPORTED_YET

Original source code:

import torch

from torch.export import Dim

x = torch.ones(3, 2)
dim0_x = Dim("dim0_x")

class DynamicShapeRound(torch.nn.Module):
    """
    Calling round on dynamic shapes is not supported.
    """

    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x[: round(x.shape[0] / 2)]
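
Presumably the module is exported with dim 0 of x marked as dynamic (that is what the module-level x and dim0_x are for); calling round() on the resulting symbolic size is what fails. A minimal sketch of the assumed export call, not part of the original listing:

ep = torch.export.export(
    DynamicShapeRound(),
    (x,),
    dynamic_shapes={"x": {0: dim0_x}},
)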

Result:

AssertionError:
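
Export fails because round() is applied to a symbolic size. One possible workaround (a hypothetical rewrite, not from the original page) is to slice with integer floor division, which export can trace; note that // rounds down, so for odd sizes it is not numerically identical to round():

class DynamicShapeRoundRewrite(torch.nn.Module):
    """
    Hypothetical rewrite: slice with floor division instead of round().
    """

    def forward(self, x):
        return x[: x.shape[0] // 2]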

tensor_setattr

Note

Tags: python.builtin

Support Level: SUPPORTED

Original source code:

import torch



class TensorSetattr(torch.nn.Module):
    """
    setattr() call onto tensors is not supported.
    """
    def forward(self, x, attr):
        setattr(x, attr, torch.randn(3, 2))
        return x + 4

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]", arg1_1):
                add: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 4);  arg0_1 = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=ConstantArgument(value='attr'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}
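
A minimal usage sketch (assumed, not part of the original listing): the attribute name is passed as a plain string, which appears in the graph signature as a ConstantArgument, while the setattr() itself is traced away and only the add remains in the graph.

ep = torch.export.export(TensorSetattr(), (torch.randn(3, 2), "attr"))
print(ep)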

type_reflection_method

Note

Tags: python.builtin

Support Level: SUPPORTED

Original source code:

import torch



class A:
    @classmethod
    def func(cls, x):
        return 1 + x


class TypeReflectionMethod(torch.nn.Module):
    """
    type() calls on custom objects followed by attribute accesses are not allowed
    due to its overly dynamic nature.
    """

    def __init__(self):
        super().__init__()

    def forward(self, x):
        a = A()
        return type(a).func(x)

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 4]"):
                add: "f32[3, 4]" = torch.ops.aten.add.Tensor(arg0_1, 1);  arg0_1 = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}

You can rewrite the example above to something like the following:

class TypeReflectionMethodRewrite(torch.nn.Module):
    """
    Custom object class methods will be inlined.
    """

    def __init__(self):
        super().__init__()

    def forward(self, x):
        return A.func(x)
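
Calling the classmethod directly on the class lets export inline it. A quick sanity check (assumed usage, matching the f32[3, 4] input shown above):

ep = torch.export.export(TypeReflectionMethodRewrite(), (torch.rand(3, 4),))
print(ep)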
