Struct AutogradContext

Struct Documentation

struct AutogradContext

Context that saves information during the forward pass of a custom autograd operation so that it can be accessed during the backward pass (see torch::autograd::Function for details).

Public Functions

AutogradContext() = default
AutogradContext(const AutogradContext &other) = delete
AutogradContext &operator=(const AutogradContext &other) = delete
AutogradContext(AutogradContext &&other) = delete
AutogradContext &operator=(AutogradContext &&other) = delete
~AutogradContext() = default
AutogradContext(PackedArgs &packed_args)
void save_for_backward(variable_list to_save)

Saves the list of variables for a future call to backward.

This should be called at most once, from inside forward().
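
As a minimal sketch (the MySquare name is hypothetical, not part of the API), a typical custom Function pairs save_for_backward() in forward with get_saved_variables() in backward:

    #include <torch/torch.h>

    using torch::autograd::AutogradContext;
    using torch::autograd::Function;
    using torch::autograd::variable_list;

    // Hypothetical example: y = x * x with a hand-written gradient.
    struct MySquare : public Function<MySquare> {
      static torch::Tensor forward(AutogradContext *ctx, torch::Tensor x) {
        ctx->save_for_backward({x});  // stash x for the backward pass
        return x * x;
      }

      static variable_list backward(AutogradContext *ctx, variable_list grad_outputs) {
        auto x = ctx->get_saved_variables()[0];  // checked retrieval of x
        return {2 * x * grad_outputs[0]};        // d(x*x)/dx = 2x
      }
    };

The operation is then invoked as MySquare::apply(x) rather than by calling forward() directly.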

void mark_dirty(const variable_list &inputs)

Marks variables in the list as modified in an in-place operation.

This should be called at most once, from inside forward(), and all arguments should be inputs.
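
For instance, a sketch of a function that increments its input in place (AddOneInplace is a made-up name; the headers and using-declarations from the first sketch are assumed):

    struct AddOneInplace : public Function<AddOneInplace> {
      static torch::Tensor forward(AutogradContext *ctx, torch::Tensor x) {
        x.add_(1);             // modify the input in place
        ctx->mark_dirty({x});  // tell autograd that x was mutated
        return x;
      }

      static variable_list backward(AutogradContext *ctx, variable_list grad_outputs) {
        return {grad_outputs[0]};  // d(x + 1)/dx = 1
      }
    };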

void mark_non_differentiable(const variable_list &outputs)

Marks outputs in the list as not requiring gradients.

This should be called at most once, from inside forward(), and all arguments should be outputs.
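
A sketch, assuming a function that returns a differentiable value alongside a non-differentiable boolean mask (PositivePart is a made-up name; earlier using-declarations assumed):

    struct PositivePart : public Function<PositivePart> {
      static variable_list forward(AutogradContext *ctx, torch::Tensor x) {
        auto mask = x > 0;                     // boolean output
        auto values = x * mask;
        ctx->mark_non_differentiable({mask});  // no gradient flows to mask
        ctx->save_for_backward({mask});
        return {values, mask};
      }

      static variable_list backward(AutogradContext *ctx, variable_list grad_outputs) {
        auto mask = ctx->get_saved_variables()[0];
        // grad_outputs[1] belongs to the non-differentiable mask; ignore it.
        return {grad_outputs[0] * mask};
      }
    };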

void set_materialize_grads(bool value)

Sets whether gradients are materialized. When true (the default), undefined gradient tensors are expanded into tensors of zeros before backward is called; when false, backward must be prepared to receive undefined tensors for outputs whose gradient was never computed.
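
A sketch of opting out of grad materialization; backward then checks defined() on each incoming gradient (PassThrough is a made-up name):

    struct PassThrough : public Function<PassThrough> {
      static variable_list forward(AutogradContext *ctx, torch::Tensor x) {
        ctx->set_materialize_grads(false);
        return {x.clone(), x.clone()};
      }

      static variable_list backward(AutogradContext *ctx, variable_list grad_outputs) {
        // With materialization off, unused outputs yield undefined tensors
        // instead of freshly allocated zero tensors.
        torch::Tensor gx;
        for (auto &g : grad_outputs) {
          if (g.defined()) gx = gx.defined() ? gx + g : g;
        }
        return {gx};
      }
    };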
variable_list get_saved_variables() const

Gets the list of variables that were saved in forward() using save_for_backward().

Before the variables are returned, a check is made to ensure that they were not modified by any in-place operation since they were saved.
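
For example, mutating a saved tensor between forward and backward trips this check (reusing the hypothetical MySquare from the first sketch):

    auto x = torch::randn({3}, torch::requires_grad());
    auto z = x + 1;               // non-leaf tensor fed to the custom op
    auto y = MySquare::apply(z);  // MySquare saved z via save_for_backward
    z.mul_(2);                    // bumps z's version counter
    // y.sum().backward() would now throw: get_saved_variables() detects
    // that a saved variable was modified by an in-place operation.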

const std::unordered_set<at::TensorImpl*> &get_and_bump_dirty() const
const std::unordered_set<at::TensorImpl*> &get_non_differentiable() const
bool needs_input_grad(size_t output_edge_index) const

Exposes the Node's task_should_compute_output() method to the C++ custom autograd Function as needs_input_grad(): it reports whether the gradient with respect to the forward input at the given index actually needs to be computed, letting backward skip unnecessary work.

bool needs_input_grad(std::initializer_list<IndexRange> idxs) const

Overload accepting ranges of forward-input indices.
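
A sketch of a two-input backward that skips gradients the graph does not need (MulXY is a made-up name; earlier using-declarations assumed):

    struct MulXY : public Function<MulXY> {
      static torch::Tensor forward(AutogradContext *ctx, torch::Tensor x, torch::Tensor y) {
        ctx->save_for_backward({x, y});
        return x * y;
      }

      static variable_list backward(AutogradContext *ctx, variable_list grad_outputs) {
        auto saved = ctx->get_saved_variables();
        torch::Tensor gx, gy;
        if (ctx->needs_input_grad(0)) gx = grad_outputs[0] * saved[1];  // d/dx = y
        if (ctx->needs_input_grad(1)) gy = grad_outputs[0] * saved[0];  // d/dy = x
        return {gx, gy};  // undefined tensors mean "no gradient"
      }
    };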

Public Members

ska::flat_hash_map<std::string, at::IValue> saved_data

Can be used to save non-variable data for backward.
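
For example, a sketch that stashes a non-tensor scalar for backward (the Scale name and alpha parameter are illustrative; earlier using-declarations assumed):

    struct Scale : public Function<Scale> {
      static torch::Tensor forward(AutogradContext *ctx, torch::Tensor x, double alpha) {
        ctx->saved_data["alpha"] = alpha;  // IValue holds the non-tensor argument
        return x * alpha;
      }

      static variable_list backward(AutogradContext *ctx, variable_list grad_outputs) {
        double alpha = ctx->saved_data["alpha"].toDouble();
        // Non-tensor inputs receive an undefined tensor as their "gradient".
        return {grad_outputs[0] * alpha, torch::Tensor()};
      }
    };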