Struct Node

Nested Relationships

Inheritance Relationships

Base Type

  • public std::enable_shared_from_this< Node >

Derived Types

  • public torch::autograd::CppNode< T >
  • public torch::autograd::TraceableFunction

Struct Documentation

struct torch::autograd::Node : public std::enable_shared_from_this<Node>

Subclassed by torch::autograd::CppNode< T >, torch::autograd::TraceableFunction

Public Functions

Node(uint64_t sequence_nr, edge_list &&next_edges = edge_list())

Construct a new Node with the given next_edges.

Node(edge_list &&next_edges = edge_list())
Node(const Node &other) = delete

Nodes are neither copyable nor moveable.

Node(Node &&other) = delete
Node &operator=(const Node &other) = delete
Node &operator=(Node &&other) = delete
~Node() = default
variable_list operator()(variable_list &&inputs)

Evaluates the function on the given inputs and returns the result of the function call.
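The call operator wraps the (protected, virtual) apply() together with the registered pre- and post-hooks. The following is a minimal, self-contained sketch of that dispatch pattern, not the real implementation: SimpleNode, DoubleNode, and the int-based variable_list stand in for the actual torch::autograd types.

```cpp
#include <functional>
#include <utility>
#include <vector>

// Simplified stand-ins; the real Node operates on a variable_list of
// Tensors and on FunctionPreHook/FunctionPostHook objects.
using variable_list = std::vector<int>;
using hook_fn = std::function<variable_list(variable_list)>;

struct SimpleNode {
  std::vector<hook_fn> pre_hooks_;
  std::vector<hook_fn> post_hooks_;

  virtual ~SimpleNode() = default;

  // Pure virtual in the real Node: the actual operation.
  virtual variable_list apply(variable_list&& inputs) = 0;

  // operator() surrounds apply() with the registered hooks.
  variable_list operator()(variable_list&& inputs) {
    for (auto& h : pre_hooks_) inputs = h(std::move(inputs));
    variable_list outputs = apply(std::move(inputs));
    for (auto& h : post_hooks_) outputs = h(std::move(outputs));
    return outputs;
  }
};

// A toy subclass that doubles each "gradient".
struct DoubleNode : SimpleNode {
  variable_list apply(variable_list&& inputs) override {
    for (auto& v : inputs) v *= 2;
    return std::move(inputs);
  }
};
```

Concrete nodes only override apply(); callers always go through operator(), which is what guarantees the hooks run.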

uint32_t add_input_metadata(const at::TensorOptions &options, at::IntArrayRef shape, bool is_tensor_subclass) noexcept

Adds the type and shape metadata for a new input.

Returns the index of the new input.

uint32_t add_input_metadata(const at::Tensor &t) noexcept
uint32_t add_input_metadata(undefined_input u) noexcept

Adds a placeholder for an input that will not be used.

uint32_t num_inputs() const noexcept
const InputMetadata &input_metadata(size_t index) const
c10::optional<c10::Stream> stream(const c10::DeviceType device_type)

Note [ Function Streams ]

A function’s stream (for a given device type) is the stream of the first element of its input buffer on a device of that type.

If all elements are on the same device they MUST share a stream. If elements are on different devices (across multiple GPUs, for example) they may have different streams.

void clear_input_metadata()
void update_topological_nr(const Edge &edge)
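A node’s topological_nr bounds the longest path from it to any leaf, so a parent’s number must strictly exceed that of every node it points to. A hypothetical sketch of that invariant (TopoNode is illustrative, not the real type, and the update rule is an assumption about what update_topological_nr maintains):

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative sketch: keep topological_nr strictly greater than that of
// every successor, so it upper-bounds the longest path to a leaf.
struct TopoNode {
  uint64_t topological_nr = 0;  // leaves stay at 0

  void update_topological_nr(const TopoNode& next) {
    topological_nr = std::max(topological_nr, next.topological_nr + 1);
  }
};
```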
void set_next_edge(size_t index, Edge edge)
void add_next_edge(Edge edge)
void set_next_edges(edge_list &&next_edges)
const Edge &next_edge(size_t index) const noexcept
const edge_list &next_edges() const noexcept
edge_list &next_edges() noexcept
uint32_t num_outputs() const noexcept
uint64_t sequence_nr() const noexcept

NOTE [ Sequence Number ]

The sequence_nr has two main usages in autograd:

1) It helps determine the node’s execution priority in the engine. All else being equal, nodes with higher priority numbers are executed first. Thus, nodes corresponding to ops executed later are the first to be executed in the backward pass. One caveat is that we prioritize AccumulateGrad nodes by explicitly setting their sequence_nr to UINT64_MAX.

2) The sequence_nr of this Node is paired with the thread_id it was created in, as a unique identifier used by the profiler to annotate recorded events. This helps users (and possibly programs) interpreting the profiler’s output correlate backward nodes with their forward ops. Both sequence_nr and thread_id are needed to identify a node, because sequence_nr is thread-local, i.e., it starts counting up from zero in each new thread.
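Usage 1) can be pictured as a max-heap keyed on sequence_nr: the engine pops the highest number first, and pinning AccumulateGrad to UINT64_MAX makes it win every comparison. A simplified sketch, assuming that priority rule (Task, BySeq, and drain are illustrative names, not engine internals):

```cpp
#include <cstdint>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Illustrative stand-in for a ready-queue entry.
struct Task {
  std::string name;
  uint64_t sequence_nr;
};

// Max-heap comparator: higher sequence_nr means higher priority.
struct BySeq {
  bool operator()(const Task& a, const Task& b) const {
    return a.sequence_nr < b.sequence_nr;
  }
};

// Pop tasks in priority order and record the names.
std::vector<std::string> drain(std::vector<Task> tasks) {
  std::priority_queue<Task, std::vector<Task>, BySeq> q(BySeq{},
                                                        std::move(tasks));
  std::vector<std::string> order;
  while (!q.empty()) {
    order.push_back(q.top().name);
    q.pop();
  }
  return order;
}
```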

uint64_t topological_nr() const noexcept
void assign_parent()
uint64_t thread_id() const noexcept

ID of the thread that created this Node.

std::string name() const

Returns the name of the dynamic type of the function, for debugging.

bool should_compute_output(size_t output_edge_index) const

Returns true if the particular output edge is active, and that particular output of this function should be computed.

bool should_compute_output(std::initializer_list<IndexRange> idxs) const

Returns true if any of the output edges in any of the ranges are active.
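The range overload short-circuits as soon as any edge inside any half-open range is active. A minimal sketch of that semantics, assuming the half-open [start, end) convention (EdgeSet and its bool flags are illustrative; the real check inspects the validity of next edges):

```cpp
#include <cstddef>
#include <initializer_list>
#include <utility>
#include <vector>

using IndexRange = std::pair<size_t, size_t>;  // [start, end)

struct EdgeSet {
  std::vector<bool> active;  // whether each output edge is connected

  // Single-index form: is this particular output edge active?
  bool should_compute_output(size_t i) const {
    return i < active.size() && active[i];
  }

  // Range form: true if ANY edge in ANY of the ranges is active.
  bool should_compute_output(std::initializer_list<IndexRange> idxs) const {
    for (const auto& r : idxs)
      for (size_t i = r.first; i < r.second; ++i)
        if (should_compute_output(i)) return true;
    return false;
  }
};
```

Backward implementations typically use this to skip computing gradients nobody asked for.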

PyObject *pyobj() const noexcept

Returns the PyObject stored for this Node (for Python interaction).

void set_pyobj(PyObject *pyobj) noexcept

Sets the PyObject stored for this Node (for Python interaction).

AnomalyMetadata *metadata() noexcept

Returns the anomaly metadata stored for this Node.

If none exist, creates a new empty one.

uintptr_t add_post_hook(std::unique_ptr<FunctionPostHook> &&post_hook)
const std::vector<std::unique_ptr<FunctionPostHook>> &post_hooks() const noexcept
bool del_post_hook(const uintptr_t &key)
std::vector<std::unique_ptr<FunctionPostHook>> &post_hooks() noexcept
void add_pre_hook(std::unique_ptr<FunctionPreHook> &&pre_hook)
const std::vector<std::unique_ptr<FunctionPreHook>> &pre_hooks() const noexcept
std::vector<std::unique_ptr<FunctionPreHook>> &pre_hooks() noexcept
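add_post_hook returns an opaque uintptr_t key that can later be passed to del_post_hook to remove exactly that hook. A self-contained sketch of one plausible keying scheme, using the stored pointer value as the key (HookList and hook_t are illustrative, not the real types):

```cpp
#include <cstdint>
#include <functional>
#include <memory>
#include <vector>

using hook_t = std::function<int(int)>;  // stand-in for FunctionPostHook

struct HookList {
  std::vector<std::unique_ptr<hook_t>> post_hooks_;

  // Store the hook and hand back its address as an opaque removal key.
  uintptr_t add_post_hook(std::unique_ptr<hook_t>&& h) {
    post_hooks_.push_back(std::move(h));
    return reinterpret_cast<uintptr_t>(post_hooks_.back().get());
  }

  // Erase the hook whose stored pointer matches the key; report success.
  bool del_post_hook(uintptr_t key) {
    for (auto it = post_hooks_.begin(); it != post_hooks_.end(); ++it) {
      if (reinterpret_cast<uintptr_t>(it->get()) == key) {
        post_hooks_.erase(it);
        return true;
      }
    }
    return false;
  }
};
```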
void release_variables()

Releases saved variables if the operation won’t be reused.

void will_release_variables()

Called before an apply if release_variables() is going to be called.

Allows larger ops like InterpreterAutogradFunction to incrementally release variables as they run.

bool is_traceable()

Returns true if this function is traceable.

An op is traceable if all operations happening within apply() are performed on autograd Variables (i.e. apply mostly instantiates and applies other functions).

bool passes_state_transparently()

A Node is said to pass state transparently to backward, if the state consists only of (Saved)Variables and only non-variable objects that parameterize the operation in some way that defines the graph structure AND the backward function is traceable.

In particular, parametrization MUST NOT depend on the data of any Variable.

TODO: it might be possible to handle cases where backward is non-traceable but state passing could be considered transparent. This will probably depend on saved_variable_list being mutable.

NOTE: this value matters only if is_traceable() returns false.

Protected Functions

virtual variable_list apply(variable_list &&inputs) = 0

Performs the Node’s actual operation.

variable_list traced_apply(variable_list inputs)

Calls apply(), but instruments it with tracing machinery.

Protected Attributes

const uint64_t sequence_nr_
uint64_t topological_nr_ = 0
bool has_parent_ = false
uint64_t thread_id_ = 0
std::mutex mutex_
edge_list next_edges_
PyObject *pyobj_ = nullptr
std::unique_ptr<AnomalyMetadata> anomaly_metadata_ = nullptr
std::vector<std::unique_ptr<FunctionPreHook>> pre_hooks_
std::vector<std::unique_ptr<FunctionPostHook>> post_hooks_
at::SmallVector<InputMetadata, 2> input_metadata_
struct undefined_input

