Struct Node¶
Defined in File function.h
Inheritance Relationships¶
Base Type¶
public std::enable_shared_from_this< Node >
Derived Types¶
public torch::autograd::CppNode< T > (Template Struct CppNode)
public torch::autograd::TraceableFunction (Struct TraceableFunction)
Struct Documentation¶
-
struct Node : public std::enable_shared_from_this<Node>¶
Subclassed by torch::autograd::CppNode< T >, torch::autograd::TraceableFunction
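As a hedged illustration of this interface, a minimal sketch of a hand-written Node subclass is shown below. ScaleBackward, its gradient formula, and the explicit sequence_nr argument are all illustrative; real operator backwards in PyTorch are generated from derivative definitions, so this only shows the shape of the API (override the protected apply() and, optionally, name()).

#include <torch/csrc/autograd/function.h>
#include <torch/csrc/autograd/variable.h>

using torch::autograd::Node;
using torch::autograd::edge_list;
using torch::autograd::variable_list;

// Hypothetical backward node for y = 2 * x: the gradient w.r.t. x is 2 * grad_y.
struct ScaleBackward : public Node {
  ScaleBackward(uint64_t sequence_nr, edge_list&& next_edges)
      : Node(sequence_nr, std::move(next_edges)) {}

  std::string name() const override {
    return "ScaleBackward";
  }

 protected:
  // Receives one gradient per input edge of this node and must return one
  // gradient per outgoing edge (i.e. per next_edge()).
  variable_list apply(variable_list&& grads) override {
    return {grads[0] * 2.0};
  }
};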
Public Functions
-
inline explicit Node(uint64_t sequence_nr, edge_list &&next_edges = edge_list())¶
Construct a new Node with the given next_edges.
-
virtual ~Node() = default¶
-
inline variable_list operator()(variable_list &&inputs)¶
Evaluates the function on the given inputs and returns the result of the function call.
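For example, assuming node is a std::shared_ptr<Node> (such as one returned by a tensor’s grad_fn()), a caller outside the engine could evaluate it directly; the incoming gradient here is illustrative:

// Given a std::shared_ptr<Node> node, e.g. from a tensor's grad_fn():
torch::autograd::variable_list grads = {torch::ones({2, 2})};
// Evaluates the node on the given gradients; the engine normally does this.
torch::autograd::variable_list next_grads = (*node)(std::move(grads));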
-
inline uint32_t add_input_metadata(const at::TensorOptions &options, c10::SymIntArrayRef shape, bool is_tensor_subclass, bool is_nested) noexcept¶
Adds the type and shape metadata for a new input.
Returns the index of the new input.
-
inline uint32_t add_input_metadata(undefined_input u) noexcept¶
Adds a placeholder for an input that will not be used.
-
inline uint32_t num_inputs() const noexcept¶
-
inline const InputMetadata &input_metadata(size_t index) const¶
-
inline InputMetadata &mutable_input_metadata(size_t index)¶
-
inline std::optional<c10::Stream> stream()¶
Note [ Function Streams ]
A function’s stream (for a given device type) is the stream of the first element of its input buffer on a device of that type.
If all elements are on the same device they MUST share a stream. If elements are on different devices (across multiple GPUs, for example) they may have different streams.
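A hedged sketch of honoring this stream from a caller’s side (c10::OptionalStreamGuard is assumed to be the appropriate RAII guard here; the autograd engine performs its own stream handling internally):

#include <c10/core/StreamGuard.h>

// Inside a function that already has a std::shared_ptr<Node> node:
// switch to the stream of this node's inputs, if it has one, before
// launching work that consumes them.
std::optional<c10::Stream> stream = node->stream();
c10::OptionalStreamGuard guard(stream);
// ... enqueue backward work here; it is now queued on the node's stream ...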
-
inline at::Device device()¶
-
inline void clear_input_metadata()¶
-
inline void update_topological_nr(const Edge &edge)¶
-
inline void set_next_edge(size_t index, Edge edge)¶
-
inline void add_next_edge(Edge edge)¶
-
inline const Edge &next_edge(size_t index) const noexcept¶
-
inline uint32_t num_outputs() const noexcept¶
-
inline uint64_t sequence_nr() const noexcept¶
NOTE [ Sequence Number].
The sequence_nr has two main usages in autograd:
1) It helps determine the node’s execution priority in the engine. All else being equal, nodes with higher priority numbers are executed first, so nodes corresponding to ops executed later are the first to be executed in the backward pass. One caveat is that we prioritize AccumulateGrad nodes by explicitly setting their sequence_nr to UINT64_MAX.
2) The sequence number of this Node is paired with the thread_id it was created in, and the pair is used by the profiler as a unique identifier to annotate recorded events. This helps users (and possibly programs) interpreting the profiler’s output correlate backward nodes with their forward ops. Both sequence_nr and thread_id are needed to identify a node because sequence_nr is thread-local, i.e., it starts counting up from zero in each new thread.
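A small sketch of observing these numbers from the C++ frontend; the printed values are illustrative and depend on what has already executed in the current thread:

#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::ones({2, 2}, torch::requires_grad());
  auto y = x * x;    // node created earlier -> lower sequence_nr
  auto z = y.sum();  // node created later -> higher sequence_nr, runs first in backward
  std::cout << y.grad_fn()->name() << " #" << y.grad_fn()->sequence_nr() << '\n'
            << z.grad_fn()->name() << " #" << z.grad_fn()->sequence_nr() << '\n';
}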
-
inline void set_sequence_nr(uint64_t sequence_nr)¶
-
inline uint64_t topological_nr() const noexcept¶
-
void assign_parent()¶
-
virtual std::string name() const¶
Returns the name of the dynamic type of the function, for debugging.
-
inline bool should_compute_output(size_t output_edge_index) const¶
The difference between the functions should_compute_output and task_should_compute_output:
should_compute_output should only be used during graph construction and takes into account only requires_grad information.
task_should_compute_output should only be called during the backward pass (unless called directly through grad_fn) and takes into account the current graph task. Specifically, the autograd engine trims unnecessary edges when inputs are specified, and during backward, untrimmed nodes left on the graph can/should check task_should_compute_output to see if any outgoing edges have been trimmed by the engine. If that is the case, gradient computation wrt those edges can be omitted.
Returns true if the particular output edge is active, and that particular output of this function should be computed.
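Inside a hypothetical apply() for a node with two outgoing edges, honoring trimmed edges might look like the sketch below; the gradient formulas are illustrative:

// Compute only the gradients the current graph task actually needs.
variable_list apply(variable_list&& grads) override {
  variable_list grad_inputs(num_outputs());  // one slot per outgoing edge
  if (task_should_compute_output(0)) {
    grad_inputs[0] = grads[0] * 2.0;  // gradient w.r.t. the first forward input
  }
  if (task_should_compute_output(1)) {
    grad_inputs[1] = grads[0] * 3.0;  // gradient w.r.t. the second forward input
  }
  return grad_inputs;  // entries left undefined signal "not computed"
}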
-
inline bool should_compute_output(std::initializer_list<IndexRange> idxs) const¶
Returns true if any of the output edges in any of the ranges are active.
-
inline bool task_should_compute_output(size_t output_edge_index) const¶
Same as the above should_compute_output function, but will also check whether this edge is needed within the current graph task.
-
inline bool task_should_compute_output(std::initializer_list<IndexRange> idxs) const¶
Returns true if any of the output edges in any of the ranges are active and should be computed in the current graph task.
-
inline PyObject *pyobj() const noexcept¶
Returns the PyObject stored for this Node (for Python interaction).
-
inline void set_pyobj(PyObject *pyobj) noexcept¶
Sets the PyObject stored for this Node (for Python interaction).
-
AnomalyMetadata *metadata() noexcept¶
Returns the anomaly metadata stored for this Node. If none exists, creates a new empty one.
-
inline uintptr_t add_post_hook(std::unique_ptr<FunctionPostHook> &&post_hook)¶
-
inline const std::vector<std::unique_ptr<FunctionPostHook>> &post_hooks() const noexcept¶
-
inline bool del_post_hook(const uintptr_t &key)¶
-
inline std::vector<std::unique_ptr<FunctionPostHook>> &post_hooks() noexcept¶
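A hedged sketch of a custom post hook; the operator() signature below (gradient outputs first, then inputs) is assumed from function_hook.h and should be checked against your PyTorch version. LoggingPostHook and attach_logging_hook are illustrative names.

#include <torch/csrc/autograd/function.h>
#include <torch/csrc/autograd/function_hook.h>
#include <cstdint>
#include <iostream>
#include <memory>

struct LoggingPostHook : torch::autograd::FunctionPostHook {
  // Assumed signature: called with the node's gradient outputs and inputs;
  // verify against function_hook.h in your PyTorch version.
  torch::autograd::variable_list operator()(
      const torch::autograd::variable_list& outputs,
      const torch::autograd::variable_list& inputs) override {
    std::cout << "node produced " << outputs.size() << " gradient(s)\n";
    return outputs;  // pass the gradients through unchanged
  }
};

void attach_logging_hook(const std::shared_ptr<torch::autograd::Node>& node) {
  // add_post_hook returns a key that can later be passed to del_post_hook.
  const uintptr_t key = node->add_post_hook(std::make_unique<LoggingPostHook>());
  // ... run backward ...
  node->del_post_hook(key);
}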
-
inline void add_pre_hook(std::unique_ptr<FunctionPreHook> &&pre_hook)¶
-
inline void add_tensor_pre_hook(std::unique_ptr<FunctionPreHook> &&pre_hook)¶
-
inline void add_retains_grad_hook(std::unique_ptr<FunctionPreHook> &&pre_hook, size_t output_idx)¶
-
inline std::unique_ptr<FunctionPreHook> pop_retains_grad_hook(size_t output_idx)¶
-
inline const std::vector<std::unique_ptr<FunctionPreHook>> &pre_hooks() const noexcept¶
-
inline std::vector<std::unique_ptr<FunctionPreHook>> &pre_hooks() noexcept¶
-
inline virtual std::vector<std::unique_ptr<FunctionPreHook>> &tensor_pre_hooks() noexcept¶
-
inline virtual std::unique_ptr<PostAccumulateGradHook> &tensor_post_acc_grad_hooks() noexcept¶
-
inline std::unordered_map<size_t, std::unique_ptr<FunctionPreHook>> &retains_grad_hooks() noexcept¶
-
inline virtual void release_variables()¶
Releases saved variables if the operation won’t be reused.
-
inline virtual void will_release_variables()¶
Called before an apply if release_variables() is going to be called. Allows larger ops like InterpreterAutogradFunction to incrementally release variables as they run.
-
inline virtual bool is_traceable()¶
Returns true if this function is traceable.
An op is traceable if all operations happening within apply() are performed on autograd Variables (i.e. apply mostly instantiates and applies other functions).
-
inline virtual bool passes_state_transparently()¶
A Node is said to pass state transparently to backward if the state consists only of (Saved)Variables and only non-variable objects that parameterize the operation in some way that defines the graph structure AND the backward function is traceable. In particular, parametrization MUST NOT depend on the data of any Variable.
TODO: it might be possible to handle cases where backward is non-traceable but state passing could be considered transparent. This will probably depend on saved_variable_list being mutable.
NOTE: this value matters only if is_traceable() returns false.
-
inline virtual void compiled_args(CompiledNodeArgs &args)¶
-
inline virtual variable_list apply_with_saved(const variable_list &inputs, SwapSavedVariables &saved)¶
Protected Functions
-
virtual variable_list apply(variable_list &&inputs) = 0¶
Performs the Node’s actual operation.
-
variable_list traced_apply(variable_list inputs)¶
Calls apply(), but instruments it with tracing machinery.
Protected Attributes
-
uint64_t sequence_nr_¶
-
uint64_t topological_nr_ = 0¶
-
mutable bool has_parent_ = false¶
-
uint64_t thread_id_ = 0¶
-
std::mutex mutex_¶
-
PyObject *pyobj_ = nullptr¶
-
std::unique_ptr<AnomalyMetadata> anomaly_metadata_ = nullptr¶
-
std::vector<std::unique_ptr<FunctionPreHook>> pre_hooks_¶
-
std::vector<std::unique_ptr<FunctionPreHook>> tensor_pre_hooks_¶
-
std::unordered_map<size_t, std::unique_ptr<FunctionPreHook>> retains_grad_hooks_¶
-
std::vector<std::unique_ptr<FunctionPostHook>> post_hooks_¶
-
at::SmallVector<InputMetadata, 2> input_metadata_¶
-
struct undefined_input¶