python.builtin

dynamic_shape_round

Note

Tags: python.builtin, torch.dynamic-shape

Support Level: NOT_SUPPORTED_YET

Original source code:

# mypy: allow-untyped-defs
import torch

from torch.export import Dim

x = torch.randn(3, 2)
dim0_x = Dim("dim0_x")

class DynamicShapeRound(torch.nn.Module):
    """
    Calling round on dynamic shapes is not supported.
    """

    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x[: round(x.shape[0] / 2)]

Result:

AssertionError: RoundToInt(IntTrueDiv(dim0_x, 2)) <= dim0_x

tensor_setattr

Note

Tags: python.builtin

Support Level: SUPPORTED

Original source code:

# mypy: allow-untyped-defs
import torch



class TensorSetattr(torch.nn.Module):
    """
    Calling setattr() on a tensor is not supported.
    """
    def forward(self, x, attr):
        setattr(x, attr, torch.randn(3, 2))
        return x + 4

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, x: "f32[3, 2]", attr):
             # File: /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_export/db/examples/tensor_setattr.py:18 in forward, code: return x + 4
            add: "f32[3, 2]" = torch.ops.aten.add.Tensor(x, 4);  x = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=ConstantArgument(name='attr', value='attr'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}

type_reflection_method

Note

Tags: python.builtin

Support Level: SUPPORTED

Original source code:

# mypy: allow-untyped-defs
import torch



class A:
    @classmethod
    def func(cls, x):
        return 1 + x


class TypeReflectionMethod(torch.nn.Module):
    """
    type() calls on custom objects followed by attribute accesses are not allowed
    due to their overly dynamic nature.
    """

    def __init__(self):
        super().__init__()

    def forward(self, x):
        a = A()
        return type(a).func(x)

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, x: "f32[3, 4]"):
             # File: /opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_export/db/examples/type_reflection_method.py:10 in func, code: return 1 + x
            add: "f32[3, 4]" = torch.ops.aten.add.Tensor(x, 1);  x = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}

You can rewrite the example above to something like the following:

class TypeReflectionMethodRewrite(torch.nn.Module):
    """
    Custom object class methods will be inlined.
    """

    def __init__(self):
        super().__init__()

    def forward(self, x):
        return A.func(x)
