# Nodes represent a definition of a value in our graph of operators.
from typing import TYPE_CHECKING, Union, Callable, Any, Tuple, List, Optional, Dict, Set
from ._compatibility import compatibility
from .immutable_collections import immutable_dict, immutable_list
import torch
import builtins
import types
import warnings
from torch.fx.operator_schemas import normalize_function, normalize_module, ArgsKwargsPair
from .._ops import ops as _ops

if TYPE_CHECKING:
    from .graph import Graph

__all__ = ['Node', 'map_arg', 'map_aggregate', "has_side_effect"]

BaseArgumentTypes = Union[str, int, float, bool, complex, torch.dtype,
                          torch.Tensor, torch.device, torch.memory_format,
                          torch.layout, torch._ops.OpOverload]
base_types = BaseArgumentTypes.__args__  # type: ignore[attr-defined]

Target = Union[Callable[..., Any], str]

Argument = Optional[Union[
    Tuple[Any, ...],  # actually Argument, but mypy can't represent recursive types
    List[Any],  # actually Argument
    Dict[str, Any],  # actually Argument
    slice,  # Slice[Argument, Argument, Argument], but slice is not a templated type in typing
    range,
    'Node',
    BaseArgumentTypes
]]

_side_effectful_functions: Set[Callable] = {
    torch._assert,
    torch._assert_async,
    _ops.aten._assert_async.msg,
    _ops.aten.copy_.default,
    _ops.aten.sym_constrain_range.default,
    _ops.aten.sym_constrain_range_for_size.default,
    _ops.profiler._record_function_enter,
    _ops.profiler._record_function_enter_new,
    _ops.profiler._record_function_exit}


@compatibility(is_backward_compatible=False)
def has_side_effect(fn: Callable) -> Callable:
    _side_effectful_functions.add(fn)
    return fn


# this is fixed on master, WAR for 1.5
def _find_module_of_method(orig_method: Callable[..., Any]) -> str:
    name = orig_method.__name__
    module = orig_method.__module__
    if module is not None:
        return module
    for guess in [torch, torch.nn.functional]:
        if getattr(guess, name, None) is orig_method:
            return guess.__name__
    raise RuntimeError(f'cannot find module for {orig_method}')


# Borrowed from CPython typing module
# https://github.com/python/cpython/blob/f90dc36c15d7fee0efaf6d39e97be0bdf2683e93/Lib/typing.py#L156
def _type_repr(obj):
    """Return the repr() of an object, special-casing types (internal helper).

    If obj is a type, we return a shorter version than the default
    type.__repr__, based on the module and qualified name, which is
    typically enough to uniquely identify a type.  For everything
    else, we fall back on repr(obj).
    """
    if isinstance(obj, type):
        if obj.__module__ == 'builtins':
            return obj.__qualname__
        return f'{obj.__module__}.{obj.__qualname__}'
    if obj is ...:
        return '...'
    if isinstance(obj, types.FunctionType):
        return obj.__name__
    return repr(obj)


def _get_qualified_name(func: Callable[..., Any]) -> str:
    # things like getattr just appear in builtins
    if getattr(builtins, func.__name__, None) is func:
        return func.__name__
    # torch.Tensor.{fn}
    if (isinstance(func, (types.MethodDescriptorType, types.WrapperDescriptorType))
            and func is getattr(torch.Tensor, func.__name__, None)):
        return f"torch.Tensor.{func.__name__}"
    name = func.__name__
    module = _find_module_of_method(func)
    module = module.replace('torch._ops', 'torch.ops')  # WAR for bug in how torch.ops assigns module
    # Fixup segment_reduce mismatch
    if module == "torch" and name == "segment_reduce":
        name = "_" + name
    return f'{module}.{name}'


def _format_arg(arg, max_list_len=float('inf')) -> str:
    if hasattr(arg, '_custom_fx_repr_fn'):
        return arg._custom_fx_repr_fn()
    elif isinstance(arg, list):
        items = ', '.join(_format_arg(a) for idx, a in enumerate(arg) if idx < max_list_len)
        maybe_len = '' if len(arg) < max_list_len + 1 else f', ...[total_len={len(arg)}]'
        return f'[{items}{maybe_len}]'
    elif isinstance(arg, tuple):
        items = ', '.join(_format_arg(a) for idx, a in enumerate(arg) if idx < max_list_len)
        maybe_len = '' if len(arg) < max_list_len + 1 else f', ...[total_len={len(arg)}]'
        maybe_comma = ',' if len(arg) == 1 else ''
        return f'({items}{maybe_comma}{maybe_len})'
    elif isinstance(arg, dict):
        items_str = ', '.join(f'{k}: {_format_arg(v)}' for k, v in arg.items())
        return f'{{{items_str}}}'

    if isinstance(arg, Node):
        return '%' + str(arg)
    else:
        return str(arg)
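The list/tuple truncation performed by `_format_arg` can be illustrated in isolation. The following is a minimal standalone sketch (no torch dependency); `fmt` is a hypothetical stand-in for the full helper, which additionally special-cases `Node`, dict, and `_custom_fx_repr_fn` arguments:

```python
def fmt(arg, max_list_len=float('inf')):
    # Lists/tuples longer than `max_list_len` are elided with a
    # `...[total_len=N]` suffix, mirroring `_format_arg` above.
    if isinstance(arg, (list, tuple)):
        items = ', '.join(fmt(a) for idx, a in enumerate(arg) if idx < max_list_len)
        maybe_len = '' if len(arg) < max_list_len + 1 else f', ...[total_len={len(arg)}]'
        if isinstance(arg, list):
            return f'[{items}{maybe_len}]'
        maybe_comma = ',' if len(arg) == 1 else ''  # single-element tuples keep their comma
        return f'({items}{maybe_comma}{maybe_len})'
    return str(arg)

print(fmt([1, 2, 3, 4, 5], max_list_len=3))  # [1, 2, 3, ...[total_len=5]]
print(fmt((7,)))                             # (7,)
```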
@compatibility(is_backward_compatible=True)
class Node:
    """
    ``Node`` is the data structure that represents individual operations within
    a ``Graph``. For the most part, Nodes represent callsites to various entities,
    such as operators, methods, and Modules (some exceptions include nodes that
    specify function inputs and outputs). Each ``Node`` has a function specified
    by its ``op`` property. The ``Node`` semantics for each value of ``op`` are as follows:

    - ``placeholder`` represents a function input. The ``name`` attribute specifies the name this value will take on.
      ``target`` is similarly the name of the argument. ``args`` holds either: 1) nothing, or 2) a single argument
      denoting the default parameter of the function input. ``kwargs`` is don't-care. Placeholders correspond to
      the function parameters (e.g. ``x``) in the graph printout.
    - ``get_attr`` retrieves a parameter from the module hierarchy. ``name`` is similarly the name the result of the
      fetch is assigned to. ``target`` is the fully-qualified name of the parameter's position in the module hierarchy.
      ``args`` and ``kwargs`` are don't-care.
    - ``call_function`` applies a free function to some values. ``name`` is similarly the name of the value to assign
      to. ``target`` is the function to be applied. ``args`` and ``kwargs`` represent the arguments to the function,
      following the Python calling convention.
    - ``call_module`` applies a module in the module hierarchy's ``forward()`` method to given arguments. ``name`` is
      as previous. ``target`` is the fully-qualified name of the module in the module hierarchy to call. ``args``
      and ``kwargs`` represent the arguments to invoke the module on, *excluding the self argument*.
    - ``call_method`` calls a method on a value. ``name`` is as previous. ``target`` is the string name of the method
      to apply to the ``self`` argument. ``args`` and ``kwargs`` represent the arguments to invoke the method on,
      *including the self argument*.
    - ``output`` contains the output of the traced function in its ``args[0]`` attribute. This corresponds to the
      "return" statement in the Graph printout.
    """

    @compatibility(is_backward_compatible=True)
    def __init__(self, graph: 'Graph', name: str, op: str, target: 'Target',
                 args: Tuple['Argument', ...], kwargs: Dict[str, 'Argument'],
                 return_type: Optional[Any] = None) -> None:
        """
        Instantiate an instance of ``Node``. Note: most often, you want to use the
        Graph APIs, i.e. ``Graph.call_module``, ``Graph.call_method``, etc. rather
        than instantiating a ``Node`` directly.

        Args:
            graph (Graph): The ``Graph`` to which this ``Node`` should belong.
            name (str): The name to which the output of this ``Node`` should be assigned.
            op (str): The opcode for this ``Node``. Can be one of 'placeholder',
                'call_method', 'call_module', 'call_function', 'get_attr', 'output'.
            target ('Target'): The target this op should call. See the broader
                ``Node`` docstring for more details.
            args (Tuple['Argument']): The args to be passed to ``target``.
            kwargs (Dict[str, 'Argument']): The kwargs to be passed to ``target``.
            return_type (Optional[Any]): The python type expression representing the
                type of the output of this node. This field can be used for
                annotation of values in the generated code or for other types
                of analyses.
        """
        self.graph = graph
        self.name = name  # unique name of value being created
        assert op in ['placeholder', 'call_method', 'call_module', 'call_function', 'get_attr', 'output', 'root']
        self.op = op  # the kind of operation = placeholder|call_method|call_module|call_function|get_attr
        if op == 'call_function':
            if not callable(target):
                raise ValueError(f'Node [graph = {graph}, name = \'{name}\'] target {target} has type {torch.typename(target)} '
                                 'but a Callable is expected')
        else:
            if not isinstance(target, str):
                raise ValueError(f'Node [graph = {graph}, name = \'{name}\'] target {target} has type {torch.typename(target)} '
                                 'but a str is expected')
        self.target = target  # for method/module/function, the name of the method/module/function/attr
        # being invoked, e.g add, layer1, or torch.add

        # All `Node`-valued inputs. Key is the Node, value is don't-care.
        # The public API for this is `all_input_nodes`, this private attribute
        # should not be accessed directly.
        self._input_nodes: Dict[Node, None] = {}
        self.__update_args_kwargs(map_arg(args, lambda x: x), map_arg(kwargs, lambda x: x))  # type: ignore[arg-type]

        # All of the nodes that use the value produced by this Node
        # Note one user may correspond to several uses, e.g. the node for ``x + x``
        # would appear once here, but represents two uses.
        #
        # Is a dict to act as an "ordered set". Keys are significant, value don't-care
        self.users: Dict[Node, None] = {}
        # Type expression representing the output value of this node.
        # This should contain the same class of Type objects that would appear
        # as type annotations for function inputs/outputs.
        #
        # For placeholder nodes, this value will be used to type-annotate the
        # generated function parameters.
        # For the return node, this value will be used to type-annotate the
        # generated function return type. (Note this is a special case. ``return``
        # does not produce a value, it's more of a notation. Thus, this value
        # describes the type of args[0] in the ``return`` node.)
        self.type: Optional[Any] = return_type
        self._prev = self
        self._next = self
        self._erased = False

        # If set, use this fn to print this node
        self._repr_fn: Optional[Callable[[Node], str]] = None

        # Dictionary to store metadata passes need to do their
        # transformations. This metadata is preserved across node copies
        self.meta: Dict[str, Any] = {}

    @property
    def next(self) -> 'Node':
        """
        Returns the next ``Node`` in the linked list of Nodes.

        Returns:
            The next ``Node`` in the linked list of Nodes.
        """
        return self._next

    @property
    def prev(self) -> 'Node':
        """
        Returns the previous ``Node`` in the linked list of Nodes.

        Returns:
            The previous ``Node`` in the linked list of Nodes.
        """
        return self._prev
    @compatibility(is_backward_compatible=True)
    def prepend(self, x: 'Node') -> None:
        """
        Insert x before this node in the list of nodes in the graph. Example::

            Before: p -> self
                    bx -> x -> ax
            After:  p -> x -> self
                    bx -> ax

        Args:
            x (Node): The node to put before this node. Must be a member of the same graph.
        """
        assert self.graph == x.graph, "Attempting to move a Node into a different Graph"
        if self == x:
            warnings.warn("Trying to prepend a node to itself. This behavior has no effect on the graph.")
            return
        x._remove_from_list()
        p = self._prev
        p._next, x._prev = x, p
        x._next, self._prev = self, x
    @compatibility(is_backward_compatible=True)
    def append(self, x: 'Node') -> None:
        """
        Insert ``x`` after this node in the list of nodes in the graph.
        Equivalent to ``self.next.prepend(x)``

        Args:
            x (Node): The node to put after this node. Must be a member of the same graph.
        """
        self._next.prepend(x)
    def _remove_from_list(self):
        p, n = self._prev, self._next
        p._next, n._prev = n, p

    @property
    def args(self) -> Tuple[Argument, ...]:
        """
        The tuple of arguments to this ``Node``. The interpretation of arguments
        depends on the node's opcode. See the :class:`Node` docstring for more
        information.

        Assignment to this property is allowed. All accounting of uses and users
        is updated automatically on assignment.
        """
        return self._args

    @args.setter
    def args(self, a: Tuple[Argument, ...]):
        """
        Set the tuple of arguments to this Node. The interpretation of arguments
        depends on the node's opcode. See the ``fx.Graph`` docstring for more
        information.
        """
        # DO NOT CALL `__update_args_kwargs` directly. The correct way to
        # set `args` is via direct assignment, i.e. `node.args = new_args`
        self.__update_args_kwargs(map_arg(a, lambda x: x), self._kwargs)  # type: ignore[arg-type]

    @property
    def kwargs(self) -> Dict[str, Argument]:
        """
        The dict of keyword arguments to this ``Node``. The interpretation of arguments
        depends on the node's opcode. See the :class:`Node` docstring for more
        information.

        Assignment to this property is allowed. All accounting of uses and users
        is updated automatically on assignment.
        """
        return self._kwargs

    @kwargs.setter
    def kwargs(self, k: Dict[str, Argument]):
        """
        Set the dict of kwargs to this Node. The interpretation of arguments
        depends on the node's opcode. See the ``fx.Graph`` docstring for more
        information.
        """
        # DO NOT CALL `__update_args_kwargs` directly. The correct way to
        # set `kwargs` is via direct assignment, i.e. `node.kwargs = new_kwargs`
        self.__update_args_kwargs(self._args, map_arg(k, lambda x: x))  # type: ignore[arg-type]

    @property
    def all_input_nodes(self) -> List['Node']:
        """
        Return all Nodes that are inputs to this Node. This is equivalent to
        iterating over ``args`` and ``kwargs`` and only collecting the values that
        are Nodes.

        Returns:
            List of ``Nodes`` that appear in the ``args`` and ``kwargs`` of this
            ``Node``, in that order.
        """
        return list(self._input_nodes.keys())
    @compatibility(is_backward_compatible=True)
    def update_arg(self, idx: int, arg: Argument) -> None:
        """
        Update an existing positional argument to contain the new value
        ``arg``. After calling, ``self.args[idx] == arg``.

        Args:
            idx (int): The index into ``self.args`` of the element to update
            arg (Argument): The new argument value to write into ``args``
        """
        args = list(self.args)
        args[idx] = arg
        self.args = tuple(args)
    @compatibility(is_backward_compatible=True)
    def update_kwarg(self, key: str, arg: Argument) -> None:
        """
        Update an existing keyword argument to contain the new value
        ``arg``. After calling, ``self.kwargs[key] == arg``.

        Args:
            key (str): The key in ``self.kwargs`` of the element to update
            arg (Argument): The new argument value to write into ``kwargs``
        """
        kwargs = dict(self.kwargs)
        kwargs[key] = arg
        self.kwargs = kwargs
    @property
    def stack_trace(self) -> Optional[str]:
        """
        Return the Python stack trace that was recorded during tracing, if any.
        When traced with fx.Tracer, this property is usually populated by
        `Tracer.create_proxy`. To record stack traces during tracing for debug purposes,
        set `record_stack_traces = True` on the `Tracer` instance.
        When traced with dynamo, this property will be populated by default by
        `OutputGraph.create_proxy`.

        stack_trace would have the innermost frame at the end of the string.
        """
        return self.meta.get("stack_trace", None)

    @stack_trace.setter
    def stack_trace(self, trace: Optional[str]):
        self.meta["stack_trace"] = trace

    def __update_args_kwargs(self, new_args: Tuple['Argument', ...], new_kwargs: Dict[str, 'Argument']):
        """
        This API is internal. Do *not* call it directly.
        """
        self._args = new_args
        self._kwargs = new_kwargs

        for old_use in self._input_nodes.keys():
            old_use.users.pop(self)

        self._input_nodes = {}
        map_arg(self._args, lambda n: self._input_nodes.setdefault(n))
        map_arg(self._kwargs, lambda n: self._input_nodes.setdefault(n))

        for new_use in self._input_nodes.keys():
            new_use.users.setdefault(self)

    def __repr__(self) -> str:
        if self._repr_fn:
            return self._repr_fn(self)
        return self.name

    def _pretty_print_target(self, target):
        """
        Make target printouts more user-friendly.
        1) builtins will be printed as `builtins.xyz`
        2) operators will be printed as `operator.xyz`
        3) other callables will be printed with qualified name, e.g. torch.add
        """
        if isinstance(target, str):
            return target
        if hasattr(target, '__module__'):
            if not hasattr(target, '__name__'):
                # Just to be defensive, if we don't have `__name__`, get the
                # qualname. Not sure if this happens for any members of `operator`
                # or `builtins`. This fallback path is not as good, since e.g.
                # things in `operator` have `_operator` as their __module__.
                return _get_qualified_name(target)
            if target.__module__ == 'builtins':
                return f'builtins.{target.__name__}'
            elif target.__module__ == '_operator':
                return f'operator.{target.__name__}'
        return _get_qualified_name(target)
    @compatibility(is_backward_compatible=True)
    def format_node(self,
                    placeholder_names: Optional[List[str]] = None,
                    maybe_return_typename: Optional[List[str]] = None) -> Optional[str]:
        """
        Return a descriptive string representation of ``self``.

        This method can be used with no arguments as a debugging
        utility.

        This function is also used internally in the ``__str__`` method
        of ``Graph``. Together, the strings in ``placeholder_names``
        and ``maybe_return_typename`` make up the signature of the
        autogenerated ``forward`` function in this Graph's surrounding
        GraphModule. ``placeholder_names`` and ``maybe_return_typename``
        should not be used otherwise.

        Args:
            placeholder_names: A list that will store formatted strings
                representing the placeholders in the generated ``forward``
                function. Internal use only.
            maybe_return_typename: A single-element list that will store
                a formatted string representing the output of the
                generated ``forward`` function. Internal use only.

        Returns:
            str: If 1) we're using ``format_node`` as an internal helper
                in the ``__str__`` method of ``Graph``, and 2) ``self``
                is a placeholder Node, return ``None``. Otherwise,
                return a descriptive string representation of the
                current Node.
        """
        if self.op == 'placeholder':
            assert isinstance(self.target, str)
            arg_str = self.target
            arg_str += f': {_type_repr(self.type)}' if self.type else ''
            if placeholder_names:
                placeholder_names.append(arg_str)
                return None
            maybe_typename = f'{_type_repr(self.type)} ' if self.type else ''
            default_val = '(default=' + str(self.args[0]) + ')' if self.args else ''
            return f'%{self.name} : {maybe_typename}[num_users={len(self.users)}] = {self.op}[target={self.target}]{default_val}'
        elif self.op == 'get_attr':
            maybe_typename = f'{_type_repr(self.type)} ' if self.type is not None else ''
            return f'%{self.name} : {maybe_typename}[num_users={len(self.users)}] = ' \
                   f'{self.op}[target={self._pretty_print_target(self.target)}]'
        elif self.op == 'output':
            if self.type and maybe_return_typename:
                maybe_return_typename[0] = f' -> {_type_repr(self.type)}'
            return f'return {self.args[0]}'
        else:
            maybe_typename = f'{_type_repr(self.type)} ' if self.type is not None else ''
            return f'%{self.name} : {maybe_typename}[num_users={len(self.users)}] = ' \
                   f'{self.op}[target={self._pretty_print_target(self.target)}](' \
                   f'args = {_format_arg(self.args)}, kwargs = {_format_arg(self.kwargs)})'
    @compatibility(is_backward_compatible=True)
    def replace_all_uses_with(self,
                              replace_with: 'Node',
                              delete_user_cb: Callable[['Node'], bool] = lambda user: True,
                              *,
                              propagate_meta=False) -> List['Node']:
        """
        Replace all uses of ``self`` in the Graph with the Node ``replace_with``.

        Args:
            replace_with (Node): The node to replace all uses of ``self`` with.
            delete_user_cb (Callable): Callback that is called to determine
                whether a given user of the self node should be removed.
            propagate_meta (bool): Whether or not to copy all properties
                on the .meta field of the original node onto the replacement node.
                For safety, this is only valid to do if the replacement node
                doesn't already have an existing .meta field.

        Returns:
            The list of Nodes on which this change was made.
        """
        if propagate_meta:
            assert len(replace_with.meta) == 0, \
                'Called node.replace_all_uses_with(replace_with, propagate_meta=True), ' \
                'but replace_with already has .meta keys'
            for k, v in self.meta.items():
                replace_with.meta[k] = v
        to_process = list(self.users)
        skipped = []
        for use_node in to_process:
            if not delete_user_cb(use_node):
                skipped.append(use_node)
                continue

            def maybe_replace_node(n: Node) -> Node:
                if n == self:
                    return replace_with
                else:
                    return n

            new_args = map_arg(use_node.args, maybe_replace_node)
            new_kwargs = map_arg(use_node.kwargs, maybe_replace_node)
            assert isinstance(new_args, tuple)
            assert isinstance(new_kwargs, dict)
            use_node.__update_args_kwargs(new_args, new_kwargs)

        assert len(self.users) - len(skipped) == 0
        return [n for n in to_process if n not in skipped]
    @compatibility(is_backward_compatible=False)
    def is_impure(self):
        """
        Returns whether this op is impure, i.e. if its op is a placeholder or
        output, or if a call_function or call_module which is impure.

        Returns:
            bool: If the op is impure or not.
        """
        if self.op in {"placeholder", "output"}:
            return True

        # Check if an impure function.
        if self.op == "call_function":
            return self.target in _side_effectful_functions

        # Check if an impure module.
        if self.op == "call_module":
            assert (
                self.graph.owning_module is not None
            ), "self.graph.owning_module not set for purity check"
            target_mod = self.graph.owning_module.get_submodule(self.target)
            assert (
                target_mod is not None
            ), f"Did not find expected submodule target {self.target}"
            return getattr(target_mod, "_is_impure", False)

        return False
    @compatibility(is_backward_compatible=False)
    def normalized_arguments(
            self, root: torch.nn.Module, arg_types: Optional[Tuple[Any]] = None,
            kwarg_types: Optional[Dict[str, Any]] = None,
            normalize_to_only_use_kwargs: bool = False) -> Optional[ArgsKwargsPair]:
        """
        Returns normalized arguments to Python targets. This means that
        `args/kwargs` will be matched up to the module/functional's
        signature and return exclusively kwargs in positional order
        if `normalize_to_only_use_kwargs` is true. Also populates default
        values. Does not support positional-only parameters or varargs
        parameters.

        Supports module calls.

        May require `arg_types` and `kwarg_types` in order to disambiguate overloads.

        Args:
            root (torch.nn.Module): Module upon which to resolve module targets.
            arg_types (Optional[Tuple[Any]]): Tuple of arg types for the args
            kwarg_types (Optional[Dict[str, Any]]): Dict of arg types for the kwargs
            normalize_to_only_use_kwargs (bool): Whether to normalize to only use kwargs.

        Returns:
            Returns NamedTuple ArgsKwargsPair, or `None` if not successful.
        """
        if self.op == 'call_function':
            assert callable(self.target)
            return normalize_function(self.target, self.args, self.kwargs, arg_types, kwarg_types)  # type: ignore[arg-type]
        elif self.op == 'call_module':
            assert isinstance(self.target, str)
            return normalize_module(root, self.target, self.args, self.kwargs)  # type: ignore[arg-type]

        return None
    @compatibility(is_backward_compatible=True)
    def replace_input_with(self, old_input: 'Node', new_input: 'Node'):
        """
        Loop through input nodes of ``self``, and replace all instances of
        ``old_input`` with ``new_input``.

        Args:
            old_input (Node): The old input node to be replaced.
            new_input (Node): The new input node to replace ``old_input``.
        """
        def maybe_replace_node(n: Node) -> Node:
            return new_input if n == old_input else n

        new_args = map_arg(self.args, maybe_replace_node)
        new_kwargs = map_arg(self.kwargs, maybe_replace_node)
        assert isinstance(new_args, tuple)
        assert isinstance(new_kwargs, dict)
        self.__update_args_kwargs(new_args, new_kwargs)
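The `_prev`/`_next` pointers above form a circular doubly-linked list rooted at a sentinel, and `prepend`/`append`/`_remove_from_list` are the only operations that rewire it. The following is a minimal standalone sketch of that bookkeeping (hypothetical class `N`, no torch, omitting the graph-membership and self-prepend checks):

```python
class N:
    def __init__(self, name):
        self.name = name
        # A fresh node is a one-element circular list, as in Node.__init__.
        self._prev = self
        self._next = self

    def _remove_from_list(self):
        p, n = self._prev, self._next
        p._next, n._prev = n, p

    def prepend(self, x):
        # Splice x out of wherever it is, then link it just before self.
        x._remove_from_list()
        p = self._prev
        p._next, x._prev = x, p
        x._next, self._prev = self, x

    def append(self, x):
        self._next.prepend(x)

# Build a list rooted at a sentinel (as Graph does with its node list).
root = N('root')
a, b, c = N('a'), N('b'), N('c')
root.append(a)   # root <-> a
a.append(b)      # root <-> a <-> b
b.append(c)      # root <-> a <-> b <-> c
b.prepend(c)     # moving an existing node: root <-> a <-> c <-> b

order = []
cur = root._next
while cur is not root:
    order.append(cur.name)
    cur = cur._next
print(order)  # ['a', 'c', 'b']
```

Because `prepend` first unlinks its argument, moving a node that is already in the list (as in the last call above) is safe and needs no special casing.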
@compatibility(is_backward_compatible=True)
def map_arg(a: Argument, fn: Callable[[Node], Argument]) -> Argument:
    """
    Apply fn to each Node appearing in arg. arg may be a list, tuple, slice, or dict with string keys.
    """
    assert callable(fn), "torch.fx.map_arg(a, fn): fn must be a callable"
    return map_aggregate(a, lambda x: fn(x) if isinstance(x, Node) else x)


@compatibility(is_backward_compatible=True)
def map_aggregate(a: Argument, fn: Callable[[Argument], Argument]) -> Argument:
    """
    Apply fn to each Node appearing in arg. arg may be a list, tuple, slice, or dict with string keys.
    """
    if isinstance(a, tuple):
        t = tuple(map_aggregate(elem, fn) for elem in a)
        # Support NamedTuple (if it has `_fields`) by repacking into original type.
        return t if not hasattr(a, '_fields') else type(a)(*t)
    elif isinstance(a, list):
        return immutable_list(map_aggregate(elem, fn) for elem in a)
    elif isinstance(a, dict):
        return immutable_dict((k, map_aggregate(v, fn)) for k, v in a.items())
    elif isinstance(a, slice):
        return slice(map_aggregate(a.start, fn), map_aggregate(a.stop, fn), map_aggregate(a.step, fn))
    else:
        return fn(a)
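The structural recursion in `map_aggregate`, and the way `map_arg` narrows it to Node leaves, can be demonstrated in isolation. This standalone sketch uses plain `list`/`dict` in place of the immutable variants and a hypothetical `FakeNode` class in place of `Node`:

```python
def map_agg(a, fn):
    # Recurse through tuples, lists, dicts, and slices; apply fn at the leaves.
    if isinstance(a, tuple):
        return tuple(map_agg(e, fn) for e in a)
    elif isinstance(a, list):
        return [map_agg(e, fn) for e in a]
    elif isinstance(a, dict):
        return {k: map_agg(v, fn) for k, v in a.items()}
    elif isinstance(a, slice):
        return slice(map_agg(a.start, fn), map_agg(a.stop, fn), map_agg(a.step, fn))
    else:
        return fn(a)

class FakeNode:
    def __init__(self, name):
        self.name = name

def map_node(a, fn):
    # Mirrors map_arg: apply fn only to FakeNode leaves, pass other leaves through.
    return map_agg(a, lambda x: fn(x) if isinstance(x, FakeNode) else x)

args = (FakeNode('x'), [1, FakeNode('y')], {'k': 2})
names = map_node(args, lambda n: n.name)
print(names)  # ('x', [1, 'y'], {'k': 2})
```

This leaf-only dispatch is why `Node.__update_args_kwargs` can use `map_arg(..., lambda n: self._input_nodes.setdefault(n))` to collect exactly the Node-valued inputs, however deeply they are nested.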