Class Tensor

Inheritance Relationships

Base Type

  • public TensorBase

Class Documentation

class at::Tensor : public TensorBase

Public Types

using hook_return_void_t = std::enable_if_t<std::is_void<typename std::result_of<T&(Tensor)>::type>::value, unsigned>
using hook_return_var_t = std::enable_if_t<std::is_same<typename std::result_of<T&(Tensor)>::type, Tensor>::value, unsigned>

Public Functions

Tensor() = default
Tensor(c10::intrusive_ptr<TensorImpl, UndefinedTensorImpl> tensor_impl)
Tensor(const Tensor &tensor) = default
Tensor(Tensor &&tensor) = default
Tensor(const TensorBase &base)
Tensor(TensorBase &&base)
Tensor contiguous(MemoryFormat memory_format = MemoryFormat::Contiguous) const
Tensor conj() const
c10::MaybeOwned<Tensor> expect_contiguous(MemoryFormat memory_format = MemoryFormat::Contiguous) const &

Should be used if *this can reasonably be expected to be contiguous and performance is important.

Compared to contiguous, it saves a reference count increment/decrement if *this is already contiguous, at the cost in all cases of an extra pointer of stack usage, an extra branch to access, and an extra branch at destruction time.

c10::MaybeOwned<Tensor> expect_contiguous(MemoryFormat memory_format = MemoryFormat::Contiguous) && = delete
Tensor &operator=(const TensorBase &x) &
Tensor &operator=(TensorBase &&x) &
Tensor &operator=(const Tensor &x) &
Tensor &operator=(Tensor &&x) &
Tensor &operator=(Scalar v) &&
Tensor &operator=(const Tensor&) &&
Tensor &operator=(Tensor&&) &&
C10_DEPRECATED_MESSAGE ("Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device().") DeprecatedTypeProperties &type() const
Tensor toType(ScalarType t) const
Tensor toBackend(Backend b) const
C10_DEPRECATED_MESSAGE ("Tensor.is_variable() is deprecated; everything is a variable now. (If you want to assert that variable has been appropriately handled already, use at::impl::variable_excluded_from_dispatch())") bool is_variable() const noexcept
template<typename T> C10_DEPRECATED_MESSAGE ("Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead.") T *data() const
template<typename T>
T item() const
void print() const
template<typename T, size_t N, template< typename U > class PtrTraits = DefaultPtrTraits, typename index_t = int64_t> C10_DEPRECATED_MESSAGE ("packed_accessor is deprecated, use packed_accessor32 or packed_accessor64 instead") GenericPackedTensorAccessor<T, N, PtrTraits, index_t> packed_accessor() const &
template<typename T, size_t N, template< typename U > class PtrTraits = DefaultPtrTraits, typename index_t = int64_t> C10_DEPRECATED_MESSAGE ("packed_accessor is deprecated, use packed_accessor32 or packed_accessor64 instead") GenericPackedTensorAccessor<T, N, PtrTraits, index_t> packed_accessor() && = delete
Tensor operator~() const
Tensor operator-() const
Tensor &operator+=(const Tensor &other)
Tensor &operator+=(Scalar other)
Tensor &operator-=(const Tensor &other)
Tensor &operator-=(Scalar other)
Tensor &operator*=(const Tensor &other)
Tensor &operator*=(Scalar other)
Tensor &operator/=(const Tensor &other)
Tensor &operator/=(Scalar other)
Tensor &operator&=(const Tensor &other)
Tensor &operator|=(const Tensor &other)
Tensor &operator^=(const Tensor &other)
Tensor operator[](Scalar index) const
Tensor operator[](Tensor index) const
Tensor operator[](int64_t index) const
Tensor index(ArrayRef<at::indexing::TensorIndex> indices) const
Tensor index(std::initializer_list<at::indexing::TensorIndex> indices) const
Tensor &index_put_(ArrayRef<at::indexing::TensorIndex> indices, Tensor const &rhs)
Tensor &index_put_(ArrayRef<at::indexing::TensorIndex> indices, const Scalar &v)
Tensor &index_put_(std::initializer_list<at::indexing::TensorIndex> indices, Tensor const &rhs)
Tensor &index_put_(std::initializer_list<at::indexing::TensorIndex> indices, const Scalar &v)
Tensor cpu() const
Tensor cuda() const
Tensor hip() const
Tensor ve() const
Tensor vulkan() const
Tensor metal() const
void backward(const Tensor &gradient = {}, c10::optional<bool> retain_graph = c10::nullopt, bool create_graph = false, c10::optional<TensorList> inputs = c10::nullopt) const

Computes the gradient of current tensor with respect to graph leaves.

The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying gradient. It should be a tensor of matching type and location that contains the gradient of the differentiated function w.r.t. this Tensor.

This function accumulates gradients in the leaves - you might need to zero them before calling it.

Parameters
  • gradient: Gradient w.r.t. the tensor. If it is a tensor, it will be automatically converted to a Tensor that does not require grad unless create_graph is True. None values can be specified for scalar Tensors or ones that don’t require grad. If a None value would be acceptable then this argument is optional.

  • retain_graph: If false, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.

  • create_graph: If true, the graph of the derivative will be constructed, allowing higher-order derivative products to be computed. Defaults to false.

  • inputs: Inputs w.r.t. which the gradient will be accumulated into at::Tensor::grad. All other Tensors will be ignored. If not provided, the gradient is accumulated into all the leaf Tensors that were used to compute the current tensor. When inputs are provided and a given input is not a leaf, the current implementation will call its grad_fn (even though it is not strictly needed to get these gradients). This is an implementation detail on which the user should not rely. See https://github.com/pytorch/pytorch/pull/60521#issuecomment-867061780 for more details.

const Tensor &set_requires_grad(bool requires_grad) const
Tensor &mutable_grad() const

Return a mutable reference to the gradient.

This is conventionally used as t.grad() = x to set a gradient to a completely new tensor. Note that this function works with a non-const Tensor and is not thread safe.

const Tensor &grad() const

This function returns an undefined tensor by default and returns a defined tensor the first time a call to backward() computes gradients for this Tensor.

The attribute will then contain the gradients computed and future calls to backward() will accumulate (add) gradients into it.

const Tensor &_fw_grad(uint64_t level) const

This function returns the forward gradient for this Tensor at the given level.

void _set_fw_grad(const TensorBase &new_grad, uint64_t level, bool is_inplace_op) const

This function can be used to set the value of the forward grad.

Note that the given new_grad might not be used directly if it has different metadata (size/stride/storage offset) compared to this Tensor. In that case, the content of new_grad will be copied into a new Tensor.

void __dispatch__backward(at::TensorList inputs, const c10::optional<at::Tensor> &gradient = {}, c10::optional<bool> retain_graph = c10::nullopt, bool create_graph = false) const
void __dispatch_set_data(const at::Tensor &new_data) const
at::Tensor __dispatch_data() const
bool __dispatch_is_leaf() const
int64_t __dispatch_output_nr() const
int64_t __dispatch__version() const
at::Tensor &__dispatch_requires_grad_(bool requires_grad = true) const
void __dispatch_retain_grad() const
bool __dispatch_retains_grad() const
at::Tensor _fw_primal(int64_t level) const
at::Tensor &rename_(c10::optional<at::DimnameList> names) const
at::Tensor rename(c10::optional<at::DimnameList> names) const
at::Tensor align_to(at::DimnameList names) const
at::Tensor align_to(at::DimnameList order, int64_t ellipsis_idx) const
at::Tensor align_as(const at::Tensor &other) const
at::Tensor refine_names(at::DimnameList names) const
at::Tensor abs() const
at::Tensor &abs_() const
at::Tensor absolute() const
at::Tensor &absolute_() const
at::Tensor angle() const
at::Tensor sgn() const
at::Tensor &sgn_() const
at::Tensor _conj() const
at::Tensor __dispatch_conj() const
at::Tensor _conj_physical() const
at::Tensor conj_physical() const
at::Tensor &conj_physical_() const
at::Tensor resolve_conj() const
at::Tensor resolve_neg() const
at::Tensor _neg_view() const
at::Tensor acos() const
at::Tensor &acos_() const
at::Tensor arccos() const
at::Tensor &arccos_() const
at::Tensor add(const at::Tensor &other, const at::Scalar &alpha = 1) const
at::Tensor &add_(const at::Tensor &other, const at::Scalar &alpha = 1) const
at::Tensor add(const at::Scalar &other, const at::Scalar &alpha = 1) const
at::Tensor &add_(const at::Scalar &other, const at::Scalar &alpha = 1) const
at::Tensor addmv(const at::Tensor &mat, const at::Tensor &vec, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const
at::Tensor &addmv_(const at::Tensor &mat, const at::Tensor &vec, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const
at::Tensor addr(const at::Tensor &vec1, const at::Tensor &vec2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const
at::Tensor &addr_(const at::Tensor &vec1, const at::Tensor &vec2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const
at::Tensor all(int64_t dim, bool keepdim = false) const
at::Tensor all(at::Dimname dim, bool keepdim = false) const
bool allclose(const at::Tensor &other, double rtol = 1e-05, double atol = 1e-08, bool equal_nan = false) const
at::Tensor any(int64_t dim, bool keepdim = false) const
at::Tensor any(at::Dimname dim, bool keepdim = false) const
at::Tensor argmax(c10::optional<int64_t> dim = c10::nullopt, bool keepdim = false) const
at::Tensor argmin(c10::optional<int64_t> dim = c10::nullopt, bool keepdim = false) const
at::Tensor acosh() const
at::Tensor &acosh_() const
at::Tensor arccosh() const
at::Tensor &arccosh_() const
at::Tensor asinh() const
at::Tensor &asinh_() const
at::Tensor arcsinh() const
at::Tensor &arcsinh_() const
at::Tensor atanh() const
at::Tensor &atanh_() const
at::Tensor arctanh() const
at::Tensor &arctanh_() const
at::Tensor as_strided(at::IntArrayRef size, at::IntArrayRef stride, c10::optional<int64_t> storage_offset = c10::nullopt) const
const at::Tensor &as_strided_(at::IntArrayRef size, at::IntArrayRef stride, c10::optional<int64_t> storage_offset = c10::nullopt) const
at::Tensor asin() const
at::Tensor &asin_() const
at::Tensor arcsin() const
at::Tensor &arcsin_() const
at::Tensor atan() const
at::Tensor &atan_() const
at::Tensor arctan() const
at::Tensor &arctan_() const
at::Tensor baddbmm(const at::Tensor &batch1, const at::Tensor &batch2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const
at::Tensor &baddbmm_(const at::Tensor &batch1, const at::Tensor &batch2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const
at::Tensor bernoulli(c10::optional<at::Generator> generator = c10::nullopt) const
at::Tensor &bernoulli_(const at::Tensor &p, c10::optional<at::Generator> generator = c10::nullopt) const
at::Tensor &bernoulli_(double p = 0.5, c10::optional<at::Generator> generator = c10::nullopt) const
at::Tensor bernoulli(double p, c10::optional<at::Generator> generator = c10::nullopt) const
at::Tensor bincount(const c10::optional<at::Tensor> &weights = {}, int64_t minlength = 0) const
at::Tensor bitwise_not() const
at::Tensor &bitwise_not_() const
at::Tensor copysign(const at::Tensor &other) const
at::Tensor &copysign_(const at::Tensor &other) const
at::Tensor copysign(const at::Scalar &other) const
at::Tensor &copysign_(const at::Scalar &other) const
at::Tensor logical_not() const
at::Tensor &logical_not_() const
at::Tensor logical_xor(const at::Tensor &other) const
at::Tensor &logical_xor_(const at::Tensor &other) const
at::Tensor logical_and(const at::Tensor &other) const
at::Tensor &logical_and_(const at::Tensor &other) const
at::Tensor logical_or(const at::Tensor &other) const
at::Tensor &logical_or_(const at::Tensor &other) const
at::Tensor bmm(const at::Tensor &mat2) const
at::Tensor broadcast_to(at::IntArrayRef size) const
at::Tensor ceil() const
at::Tensor &ceil_() const
::std::vector<at::Tensor> unsafe_chunk(int64_t chunks, int64_t dim = 0) const
::std::vector<at::Tensor> chunk(int64_t chunks, int64_t dim = 0) const
::std::vector<at::Tensor> tensor_split(int64_t sections, int64_t dim = 0) const
::std::vector<at::Tensor> tensor_split(at::IntArrayRef indices, int64_t dim = 0) const
::std::vector<at::Tensor> tensor_split(const at::Tensor &tensor_indices_or_sections, int64_t dim = 0) const
at::Tensor clamp(const c10::optional<at::Scalar> &min, const c10::optional<at::Scalar> &max = c10::nullopt) const
at::Tensor clamp(const c10::optional<at::Tensor> &min = {}, const c10::optional<at::Tensor> &max = {}) const
at::Tensor &clamp_(const c10::optional<at::Scalar> &min, const c10::optional<at::Scalar> &max = c10::nullopt) const
at::Tensor &clamp_(const c10::optional<at::Tensor> &min = {}, const c10::optional<at::Tensor> &max = {}) const
at::Tensor clamp_max(const at::Scalar &max) const
at::Tensor clamp_max(const at::Tensor &max) const
at::Tensor &clamp_max_(const at::Scalar &max) const
at::Tensor &clamp_max_(const at::Tensor &max) const
at::Tensor clamp_min(const at::Scalar &min) const
at::Tensor clamp_min(const at::Tensor &min) const
at::Tensor &clamp_min_(const at::Scalar &min) const
at::Tensor &clamp_min_(const at::Tensor &min) const
at::Tensor clip(const c10::optional<at::Scalar> &min, const c10::optional<at::Scalar> &max = c10::nullopt) const
at::Tensor clip(const c10::optional<at::Tensor> &min = {}, const c10::optional<at::Tensor> &max = {}) const
at::Tensor &clip_(const c10::optional<at::Scalar> &min, const c10::optional<at::Scalar> &max = c10::nullopt) const
at::Tensor &clip_(const c10::optional<at::Tensor> &min = {}, const c10::optional<at::Tensor> &max = {}) const
at::Tensor __dispatch_contiguous(at::MemoryFormat memory_format = MemoryFormat::Contiguous) const
at::Tensor &copy_(const at::Tensor &src, bool non_blocking = false) const
at::Tensor cos() const
at::Tensor &cos_() const
at::Tensor cosh() const
at::Tensor &cosh_() const
at::Tensor count_nonzero(at::IntArrayRef dim) const
at::Tensor count_nonzero(c10::optional<int64_t> dim = c10::nullopt) const
at::Tensor cov(int64_t correction = 1, const c10::optional<at::Tensor> &fweights = {}, const c10::optional<at::Tensor> &aweights = {}) const
at::Tensor corrcoef() const
::std::tuple<at::Tensor, at::Tensor> cummax(int64_t dim) const
::std::tuple<at::Tensor, at::Tensor> cummax(at::Dimname dim) const
::std::tuple<at::Tensor, at::Tensor> cummin(int64_t dim) const
::std::tuple<at::Tensor, at::Tensor> cummin(at::Dimname dim) const
at::Tensor cumprod(int64_t dim, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor &cumprod_(int64_t dim, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor cumprod(at::Dimname dim, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor &cumprod_(at::Dimname dim, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor cumsum(int64_t dim, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor &cumsum_(int64_t dim, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor cumsum(at::Dimname dim, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor &cumsum_(at::Dimname dim, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor diag_embed(int64_t offset = 0, int64_t dim1 = -2, int64_t dim2 = -1) const
at::Tensor diagflat(int64_t offset = 0) const
at::Tensor diagonal(int64_t offset = 0, int64_t dim1 = 0, int64_t dim2 = 1) const
at::Tensor diagonal(at::Dimname outdim, at::Dimname dim1, at::Dimname dim2, int64_t offset = 0) const
at::Tensor &fill_diagonal_(const at::Scalar &fill_value, bool wrap = false) const
at::Tensor diff(int64_t n = 1, int64_t dim = -1, const c10::optional<at::Tensor> &prepend = {}, const c10::optional<at::Tensor> &append = {}) const
at::Tensor div(const at::Tensor &other) const
at::Tensor &div_(const at::Tensor &other) const
at::Tensor div(const at::Tensor &other, c10::optional<c10::string_view> rounding_mode) const
at::Tensor &div_(const at::Tensor &other, c10::optional<c10::string_view> rounding_mode) const
at::Tensor div(const at::Scalar &other) const
at::Tensor &div_(const at::Scalar &other) const
at::Tensor div(const at::Scalar &other, c10::optional<c10::string_view> rounding_mode) const
at::Tensor &div_(const at::Scalar &other, c10::optional<c10::string_view> rounding_mode) const
at::Tensor divide(const at::Tensor &other) const
at::Tensor &divide_(const at::Tensor &other) const
at::Tensor divide(const at::Scalar &other) const
at::Tensor &divide_(const at::Scalar &other) const
at::Tensor divide(const at::Tensor &other, c10::optional<c10::string_view> rounding_mode) const
at::Tensor &divide_(const at::Tensor &other, c10::optional<c10::string_view> rounding_mode) const
at::Tensor divide(const at::Scalar &other, c10::optional<c10::string_view> rounding_mode) const
at::Tensor &divide_(const at::Scalar &other, c10::optional<c10::string_view> rounding_mode) const
at::Tensor true_divide(const at::Tensor &other) const
at::Tensor &true_divide_(const at::Tensor &other) const
at::Tensor true_divide(const at::Scalar &other) const
at::Tensor &true_divide_(const at::Scalar &other) const
at::Tensor dot(const at::Tensor &tensor) const
at::Tensor vdot(const at::Tensor &other) const
at::Tensor new_empty(at::IntArrayRef size, at::TensorOptions options = {}) const
at::Tensor new_empty(at::IntArrayRef size, c10::optional<at::ScalarType> dtype, c10::optional<at::Layout> layout, c10::optional<at::Device> device, c10::optional<bool> pin_memory) const
at::Tensor new_empty_strided(at::IntArrayRef size, at::IntArrayRef stride, at::TensorOptions options = {}) const
at::Tensor new_empty_strided(at::IntArrayRef size, at::IntArrayRef stride, c10::optional<at::ScalarType> dtype, c10::optional<at::Layout> layout, c10::optional<at::Device> device, c10::optional<bool> pin_memory) const
at::Tensor new_full(at::IntArrayRef size, const at::Scalar &fill_value, at::TensorOptions options = {}) const
at::Tensor new_full(at::IntArrayRef size, const at::Scalar &fill_value, c10::optional<at::ScalarType> dtype, c10::optional<at::Layout> layout, c10::optional<at::Device> device, c10::optional<bool> pin_memory) const
at::Tensor new_zeros(at::IntArrayRef size, at::TensorOptions options = {}) const
at::Tensor new_zeros(at::IntArrayRef size, c10::optional<at::ScalarType> dtype, c10::optional<at::Layout> layout, c10::optional<at::Device> device, c10::optional<bool> pin_memory) const
at::Tensor new_ones(at::IntArrayRef size, at::TensorOptions options = {}) const
at::Tensor new_ones(at::IntArrayRef size, c10::optional<at::ScalarType> dtype, c10::optional<at::Layout> layout, c10::optional<at::Device> device, c10::optional<bool> pin_memory) const
const at::Tensor &resize_(at::IntArrayRef size, c10::optional<at::MemoryFormat> memory_format = c10::nullopt) const
at::Tensor erf() const
at::Tensor &erf_() const
at::Tensor erfc() const
at::Tensor &erfc_() const
at::Tensor exp() const
at::Tensor &exp_() const
at::Tensor exp2() const
at::Tensor &exp2_() const
at::Tensor expm1() const
at::Tensor &expm1_() const
at::Tensor expand(at::IntArrayRef size, bool implicit = false) const
at::Tensor expand_as(const at::Tensor &other) const
at::Tensor flatten(int64_t start_dim = 0, int64_t end_dim = -1) const
at::Tensor flatten(int64_t start_dim, int64_t end_dim, at::Dimname out_dim) const
at::Tensor flatten(at::Dimname start_dim, at::Dimname end_dim, at::Dimname out_dim) const
at::Tensor flatten(at::DimnameList dims, at::Dimname out_dim) const
at::Tensor unflatten(int64_t dim, at::IntArrayRef sizes, c10::optional<at::DimnameList> names = c10::nullopt) const
at::Tensor unflatten(at::Dimname dim, at::IntArrayRef sizes, at::DimnameList names) const
at::Tensor &fill_(const at::Scalar &value) const
at::Tensor &fill_(const at::Tensor &value) const
at::Tensor floor() const
at::Tensor &floor_() const
at::Tensor floor_divide(const at::Tensor &other) const
at::Tensor &floor_divide_(const at::Tensor &other) const
at::Tensor floor_divide(const at::Scalar &other) const
at::Tensor &floor_divide_(const at::Scalar &other) const
at::Tensor frac() const
at::Tensor &frac_() const
at::Tensor gcd(const at::Tensor &other) const
at::Tensor &gcd_(const at::Tensor &other) const
at::Tensor lcm(const at::Tensor &other) const
at::Tensor &lcm_(const at::Tensor &other) const
at::Tensor index(const c10::List<c10::optional<at::Tensor>> &indices) const
at::Tensor &index_copy_(int64_t dim, const at::Tensor &index, const at::Tensor &source) const
at::Tensor index_copy(int64_t dim, const at::Tensor &index, const at::Tensor &source) const
at::Tensor &index_copy_(at::Dimname dim, const at::Tensor &index, const at::Tensor &source) const
at::Tensor index_copy(at::Dimname dim, const at::Tensor &index, const at::Tensor &source) const
at::Tensor &index_put_(const c10::List<c10::optional<at::Tensor>> &indices, const at::Tensor &values, bool accumulate = false) const
at::Tensor index_put(const c10::List<c10::optional<at::Tensor>> &indices, const at::Tensor &values, bool accumulate = false) const
at::Tensor inverse() const
at::Tensor isclose(const at::Tensor &other, double rtol = 1e-05, double atol = 1e-08, bool equal_nan = false) const
at::Tensor isnan() const
bool is_distributed() const
bool __dispatch_is_floating_point() const
bool __dispatch_is_complex() const
bool __dispatch_is_conj() const
bool __dispatch_is_neg() const
at::Tensor isreal() const
bool is_nonzero() const
bool is_same_size(const at::Tensor &other) const
bool __dispatch_is_signed() const
bool __dispatch_is_inference() const
at::Tensor kron(const at::Tensor &other) const
::std::tuple<at::Tensor, at::Tensor> kthvalue(int64_t k, int64_t dim = -1, bool keepdim = false) const
::std::tuple<at::Tensor, at::Tensor> kthvalue(int64_t k, at::Dimname dim, bool keepdim = false) const
at::Tensor nan_to_num(c10::optional<double> nan = c10::nullopt, c10::optional<double> posinf = c10::nullopt, c10::optional<double> neginf = c10::nullopt) const
at::Tensor &nan_to_num_(c10::optional<double> nan = c10::nullopt, c10::optional<double> posinf = c10::nullopt, c10::optional<double> neginf = c10::nullopt) const
at::Tensor ldexp(const at::Tensor &other) const
at::Tensor &ldexp_(const at::Tensor &other) const
at::Tensor log() const
at::Tensor &log_() const
at::Tensor log10() const
at::Tensor &log10_() const
at::Tensor log1p() const
at::Tensor &log1p_() const
at::Tensor log2() const
at::Tensor &log2_() const
at::Tensor logaddexp(const at::Tensor &other) const
at::Tensor logaddexp2(const at::Tensor &other) const
at::Tensor xlogy(const at::Tensor &other) const
at::Tensor xlogy(const at::Scalar &other) const
at::Tensor &xlogy_(const at::Tensor &other) const
at::Tensor &xlogy_(const at::Scalar &other) const
at::Tensor logdet() const
at::Tensor log_softmax(int64_t dim, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor log_softmax(at::Dimname dim, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor logcumsumexp(int64_t dim) const
at::Tensor logcumsumexp(at::Dimname dim) const
at::Tensor logsumexp(at::IntArrayRef dim, bool keepdim = false) const
at::Tensor logsumexp(at::DimnameList dim, bool keepdim = false) const
at::Tensor matmul(const at::Tensor &other) const
at::Tensor matrix_power(int64_t n) const
at::Tensor matrix_exp() const
::std::tuple<at::Tensor, at::Tensor> aminmax(c10::optional<int64_t> dim = c10::nullopt, bool keepdim = false) const
::std::tuple<at::Tensor, at::Tensor> max(int64_t dim, bool keepdim = false) const
::std::tuple<at::Tensor, at::Tensor> max(at::Dimname dim, bool keepdim = false) const
at::Tensor amax(at::IntArrayRef dim = {}, bool keepdim = false) const
at::Tensor mean(c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor mean(at::IntArrayRef dim, bool keepdim = false, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor mean(at::DimnameList dim, bool keepdim = false, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor nanmean(at::IntArrayRef dim = {}, bool keepdim = false, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor median() const
::std::tuple<at::Tensor, at::Tensor> median(int64_t dim, bool keepdim = false) const
::std::tuple<at::Tensor, at::Tensor> median(at::Dimname dim, bool keepdim = false) const
at::Tensor nanmedian() const
::std::tuple<at::Tensor, at::Tensor> nanmedian(int64_t dim, bool keepdim = false) const
::std::tuple<at::Tensor, at::Tensor> nanmedian(at::Dimname dim, bool keepdim = false) const
::std::tuple<at::Tensor, at::Tensor> min(int64_t dim, bool keepdim = false) const
::std::tuple<at::Tensor, at::Tensor> min(at::Dimname dim, bool keepdim = false) const
at::Tensor amin(at::IntArrayRef dim = {}, bool keepdim = false) const
at::Tensor mm(const at::Tensor &mat2) const
::std::tuple<at::Tensor, at::Tensor> mode(int64_t dim = -1, bool keepdim = false) const
::std::tuple<at::Tensor, at::Tensor> mode(at::Dimname dim, bool keepdim = false) const
at::Tensor mul(const at::Tensor &other) const
at::Tensor &mul_(const at::Tensor &other) const
at::Tensor mul(const at::Scalar &other) const
at::Tensor &mul_(const at::Scalar &other) const
at::Tensor multiply(const at::Tensor &other) const
at::Tensor &multiply_(const at::Tensor &other) const
at::Tensor multiply(const at::Scalar &other) const
at::Tensor &multiply_(const at::Scalar &other) const
at::Tensor mv(const at::Tensor &vec) const
at::Tensor mvlgamma(int64_t p) const
at::Tensor &mvlgamma_(int64_t p) const
at::Tensor narrow_copy(int64_t dim, int64_t start, int64_t length) const
at::Tensor narrow(int64_t dim, int64_t start, int64_t length) const
at::Tensor narrow(int64_t dim, const at::Tensor &start, int64_t length) const
at::Tensor permute(at::IntArrayRef dims) const
at::Tensor movedim(at::IntArrayRef source, at::IntArrayRef destination) const
at::Tensor movedim(int64_t source, int64_t destination) const
at::Tensor moveaxis(at::IntArrayRef source, at::IntArrayRef destination) const
at::Tensor moveaxis(int64_t source, int64_t destination) const
at::Tensor numpy_T() const
at::Tensor matrix_H() const
at::Tensor mT() const
at::Tensor mH() const
at::Tensor adjoint() const
bool is_pinned(c10::optional<at::Device> device = c10::nullopt) const
at::Tensor pin_memory(c10::optional<at::Device> device = c10::nullopt) const
at::Tensor pinverse(double rcond = 1e-15) const
at::Tensor rad2deg() const
at::Tensor &rad2deg_() const
at::Tensor deg2rad() const
at::Tensor &deg2rad_() const
at::Tensor ravel() const
at::Tensor reciprocal() const
at::Tensor &reciprocal_() const
at::Tensor neg() const
at::Tensor &neg_() const
at::Tensor negative() const
at::Tensor &negative_() const
at::Tensor repeat(at::IntArrayRef repeats) const
at::Tensor repeat_interleave(const at::Tensor &repeats, c10::optional<int64_t> dim = c10::nullopt, c10::optional<int64_t> output_size = c10::nullopt) const
at::Tensor repeat_interleave(int64_t repeats, c10::optional<int64_t> dim = c10::nullopt, c10::optional<int64_t> output_size = c10::nullopt) const
at::Tensor reshape(at::IntArrayRef shape) const
at::Tensor _reshape_alias(at::IntArrayRef size, at::IntArrayRef stride) const
at::Tensor reshape_as(const at::Tensor &other) const
at::Tensor round() const
at::Tensor &round_() const
at::Tensor relu() const
at::Tensor &relu_() const
at::Tensor prelu(const at::Tensor &weight) const
::std::tuple<at::Tensor, at::Tensor> prelu_backward(const at::Tensor &grad_output, const at::Tensor &weight) const
at::Tensor hardshrink(const at::Scalar &lambd = 0.5) const
at::Tensor hardshrink_backward(const at::Tensor &grad_out, const at::Scalar &lambd) const
at::Tensor rsqrt() const
at::Tensor &rsqrt_() const
at::Tensor select(at::Dimname dim, int64_t index) const
at::Tensor select(int64_t dim, int64_t index) const
at::Tensor sigmoid() const
at::Tensor &sigmoid_() const
at::Tensor logit(c10::optional<double> eps = c10::nullopt) const
at::Tensor &logit_(c10::optional<double> eps = c10::nullopt) const
at::Tensor sin() const
at::Tensor &sin_() const
at::Tensor sinc() const
at::Tensor &sinc_() const
at::Tensor sinh() const
at::Tensor &sinh_() const
at::Tensor detach() const

Returns a new Tensor, detached from the current graph.

The result will never require gradient.

at::Tensor &detach_() const

Detaches the Tensor from the graph that created it, making it a leaf.

Views cannot be detached in-place.

int64_t size(at::Dimname dim) const
at::Tensor slice(int64_t dim = 0, c10::optional<int64_t> start = c10::nullopt, c10::optional<int64_t> end = c10::nullopt, int64_t step = 1) const
::std::tuple<at::Tensor, at::Tensor> slogdet() const
at::Tensor smm(const at::Tensor &mat2) const
at::Tensor softmax(int64_t dim, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor softmax(at::Dimname dim, c10::optional<at::ScalarType> dtype = c10::nullopt) const
::std::vector<at::Tensor> unsafe_split(int64_t split_size, int64_t dim = 0) const
::std::vector<at::Tensor> split(int64_t split_size, int64_t dim = 0) const
::std::vector<at::Tensor> unsafe_split_with_sizes(at::IntArrayRef split_sizes, int64_t dim = 0) const
::std::vector<at::Tensor> split_with_sizes(at::IntArrayRef split_sizes, int64_t dim = 0) const
::std::vector<at::Tensor> hsplit(int64_t sections) const
::std::vector<at::Tensor> hsplit(at::IntArrayRef indices) const
::std::vector<at::Tensor> vsplit(int64_t sections) const
::std::vector<at::Tensor> vsplit(at::IntArrayRef indices) const
::std::vector<at::Tensor> dsplit(int64_t sections) const
::std::vector<at::Tensor> dsplit(at::IntArrayRef indices) const
at::Tensor squeeze() const
at::Tensor squeeze(int64_t dim) const
at::Tensor squeeze(at::Dimname dim) const
at::Tensor &squeeze_() const
at::Tensor &squeeze_(int64_t dim) const
at::Tensor &squeeze_(at::Dimname dim) const
at::Tensor sspaddmm(const at::Tensor &mat1, const at::Tensor &mat2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const
at::Tensor stft(int64_t n_fft, c10::optional<int64_t> hop_length = c10::nullopt, c10::optional<int64_t> win_length = c10::nullopt, const c10::optional<at::Tensor> &window = {}, bool normalized = false, c10::optional<bool> onesided = c10::nullopt, c10::optional<bool> return_complex = c10::nullopt) const
at::Tensor istft(int64_t n_fft, c10::optional<int64_t> hop_length = c10::nullopt, c10::optional<int64_t> win_length = c10::nullopt, const c10::optional<at::Tensor> &window = {}, bool center = true, bool normalized = false, c10::optional<bool> onesided = c10::nullopt, c10::optional<int64_t> length = c10::nullopt, bool return_complex = false) const
int64_t stride(at::Dimname dim) const
at::Tensor sum(c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor sum(at::IntArrayRef dim, bool keepdim = false, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor sum(at::DimnameList dim, bool keepdim = false, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor nansum(c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor nansum(at::IntArrayRef dim, bool keepdim = false, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor sum_to_size(at::IntArrayRef size) const
at::Tensor sqrt() const
at::Tensor &sqrt_() const
at::Tensor square() const
at::Tensor &square_() const
at::Tensor std(bool unbiased = true) const
at::Tensor std(at::IntArrayRef dim, bool unbiased = true, bool keepdim = false) const
at::Tensor std(c10::optional<at::IntArrayRef> dim, c10::optional<int64_t> correction, bool keepdim = false) const
at::Tensor std(at::DimnameList dim, bool unbiased = true, bool keepdim = false) const
at::Tensor std(at::DimnameList dim, c10::optional<int64_t> correction, bool keepdim = false) const
at::Tensor prod(c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor prod(int64_t dim, bool keepdim = false, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor prod(at::Dimname dim, bool keepdim = false, c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor t() const
at::Tensor &t_() const
at::Tensor tan() const
at::Tensor &tan_() const
at::Tensor tanh() const
at::Tensor &tanh_() const
at::Tensor tile(at::IntArrayRef dims) const
at::Tensor transpose(int64_t dim0, int64_t dim1) const
at::Tensor transpose(at::Dimname dim0, at::Dimname dim1) const
at::Tensor &transpose_(int64_t dim0, int64_t dim1) const
at::Tensor flip(at::IntArrayRef dims) const
at::Tensor fliplr() const
at::Tensor flipud() const
at::Tensor roll(at::IntArrayRef shifts, at::IntArrayRef dims = {}) const
at::Tensor rot90(int64_t k = 1, at::IntArrayRef dims = {0, 1}) const
at::Tensor trunc() const
at::Tensor &trunc_() const
at::Tensor fix() const
at::Tensor &fix_() const
at::Tensor type_as(const at::Tensor &other) const
at::Tensor unsqueeze(int64_t dim) const
at::Tensor &unsqueeze_(int64_t dim) const
at::Tensor var(bool unbiased = true) const
at::Tensor var(at::IntArrayRef dim, bool unbiased = true, bool keepdim = false) const
at::Tensor var(c10::optional<at::IntArrayRef> dim, c10::optional<int64_t> correction, bool keepdim = false) const
at::Tensor var(at::DimnameList dim, bool unbiased = true, bool keepdim = false) const
at::Tensor var(at::DimnameList dim, c10::optional<int64_t> correction, bool keepdim = false) const
at::Tensor view_as(const at::Tensor &other) const
at::Tensor where(const at::Tensor &condition, const at::Tensor &other) const
at::Tensor norm(const c10::optional<at::Scalar> &p, at::ScalarType dtype) const
at::Tensor norm(const at::Scalar &p = 2) const
at::Tensor norm(const c10::optional<at::Scalar> &p, at::IntArrayRef dim, bool keepdim, at::ScalarType dtype) const
at::Tensor norm(const c10::optional<at::Scalar> &p, at::IntArrayRef dim, bool keepdim = false) const
at::Tensor norm(const c10::optional<at::Scalar> &p, at::DimnameList dim, bool keepdim, at::ScalarType dtype) const
at::Tensor norm(const c10::optional<at::Scalar> &p, at::DimnameList dim, bool keepdim = false) const
::std::tuple<at::Tensor, at::Tensor> frexp() const
at::Tensor clone(c10::optional<at::MemoryFormat> memory_format = c10::nullopt) const
at::Tensor positive() const
const at::Tensor &resize_as_(const at::Tensor &the_template, c10::optional<at::MemoryFormat> memory_format = c10::nullopt) const
at::Tensor &zero_() const
at::Tensor sub(const at::Tensor &other, const at::Scalar &alpha = 1) const
at::Tensor &sub_(const at::Tensor &other, const at::Scalar &alpha = 1) const
at::Tensor sub(const at::Scalar &other, const at::Scalar &alpha = 1) const
at::Tensor &sub_(const at::Scalar &other, const at::Scalar &alpha = 1) const
at::Tensor subtract(const at::Tensor &other, const at::Scalar &alpha = 1) const
at::Tensor &subtract_(const at::Tensor &other, const at::Scalar &alpha = 1) const
at::Tensor subtract(const at::Scalar &other, const at::Scalar &alpha = 1) const
at::Tensor &subtract_(const at::Scalar &other, const at::Scalar &alpha = 1) const
at::Tensor heaviside(const at::Tensor &values) const
at::Tensor &heaviside_(const at::Tensor &values) const
at::Tensor addmm(const at::Tensor &mat1, const at::Tensor &mat2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const
at::Tensor &addmm_(const at::Tensor &mat1, const at::Tensor &mat2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const
const at::Tensor &sparse_resize_(at::IntArrayRef size, int64_t sparse_dim, int64_t dense_dim) const
const at::Tensor &sparse_resize_and_clear_(at::IntArrayRef size, int64_t sparse_dim, int64_t dense_dim) const
at::Tensor sparse_mask(const at::Tensor &mask) const
at::Tensor to_dense(c10::optional<at::ScalarType> dtype = c10::nullopt) const
int64_t sparse_dim() const
int64_t _dimI() const
int64_t dense_dim() const
int64_t _dimV() const
int64_t _nnz() const
at::Tensor coalesce() const
bool is_coalesced() const
at::Tensor _indices() const
at::Tensor _values() const
at::Tensor &_coalesced_(bool coalesced) const
at::Tensor indices() const
at::Tensor values() const
at::Tensor crow_indices() const
at::Tensor col_indices() const
::std::vector<at::Tensor> unbind(int64_t dim = 0) const
::std::vector<at::Tensor> unbind(at::Dimname dim) const
at::Tensor to_sparse(int64_t sparse_dim) const
at::Tensor to_sparse() const
at::Tensor to_mkldnn(c10::optional<at::ScalarType> dtype = c10::nullopt) const
at::Tensor dequantize() const
double q_scale() const
int64_t q_zero_point() const
at::Tensor q_per_channel_scales() const
at::Tensor q_per_channel_zero_points() const
int64_t q_per_channel_axis() const
at::Tensor int_repr() const
at::QScheme qscheme() const
at::Tensor to(at::TensorOptions options = {}, bool non_blocking = false, bool copy = false, c10::optional<at::MemoryFormat> memory_format = c10::nullopt) const
at::Tensor to(c10::optional<at::ScalarType> dtype, c10::optional<at::Layout> layout, c10::optional<at::Device> device, c10::optional<bool> pin_memory, bool non_blocking, bool copy, c10::optional<at::MemoryFormat> memory_format) const
at::Tensor to(at::Device device, at::ScalarType dtype, bool non_blocking = false, bool copy = false, c10::optional<at::MemoryFormat> memory_format = c10::nullopt) const
at::Tensor to(at::ScalarType dtype, bool non_blocking = false, bool copy = false, c10::optional<at::MemoryFormat> memory_format = c10::nullopt) const
at::Tensor to(const at::Tensor &other, bool non_blocking = false, bool copy = false, c10::optional<at::MemoryFormat> memory_format = c10::nullopt) const
at::Scalar item() const
at::Tensor &set_(at::Storage source) const
at::Tensor &set_(at::Storage source, int64_t storage_offset, at::IntArrayRef size, at::IntArrayRef stride = {}) const
at::Tensor &set_(const at::Tensor &source) const
at::Tensor &set_() const
bool is_set_to(const at::Tensor &tensor) const
at::Tensor &masked_fill_(const at::Tensor &mask, const at::Scalar &value) const
at::Tensor masked_fill(const at::Tensor &mask, const at::Scalar &value) const
at::Tensor &masked_fill_(const at::Tensor &mask, const at::Tensor &value) const
at::Tensor masked_fill(const at::Tensor &mask, const at::Tensor &value) const
at::Tensor &masked_scatter_(const at::Tensor &mask, const at::Tensor &source) const
at::Tensor masked_scatter(const at::Tensor &mask, const at::Tensor &source) const
at::Tensor view(at::IntArrayRef size) const
at::Tensor view(at::ScalarType dtype) const
at::Tensor &put_(const at::Tensor &index, const at::Tensor &source, bool accumulate = false) const
at::Tensor put(const at::Tensor &index, const at::Tensor &source, bool accumulate = false) const
at::Tensor &index_add_(int64_t dim, const at::Tensor &index, const at::Tensor &source) const
at::Tensor &index_add_(int64_t dim, const at::Tensor &index, const at::Tensor &source, const at::Scalar &alpha) const
at::Tensor index_add(int64_t dim, const at::Tensor &index, const at::Tensor &source) const
at::Tensor index_add(int64_t dim, const at::Tensor &index, const at::Tensor &source, const at::Scalar &alpha) const
at::Tensor index_add(at::Dimname dim, const at::Tensor &index, const at::Tensor &source, const at::Scalar &alpha = 1) const
at::Tensor &index_fill_(int64_t dim, const at::Tensor &index, const at::Scalar &value) const
at::Tensor index_fill(int64_t dim, const at::Tensor &index, const at::Scalar &value) const
at::Tensor &index_fill_(int64_t dim, const at::Tensor &index, const at::Tensor &value) const
at::Tensor index_fill(int64_t dim, const at::Tensor &index, const at::Tensor &value) const
at::Tensor &index_fill_(at::Dimname dim, const at::Tensor &index, const at::Scalar &value) const
at::Tensor &index_fill_(at::Dimname dim, const at::Tensor &index, const at::Tensor &value) const
at::Tensor index_fill(at::Dimname dim, const at::Tensor &index, const at::Scalar &value) const
at::Tensor index_fill(at::Dimname dim, const at::Tensor &index, const at::Tensor &value) const
at::Tensor scatter(int64_t dim, const at::Tensor &index, const at::Tensor &src) const
at::Tensor &scatter_(int64_t dim, const at::Tensor &index, const at::Tensor &src) const
at::Tensor scatter(int64_t dim, const at::Tensor &index, const at::Scalar &value) const
at::Tensor &scatter_(int64_t dim, const at::Tensor &index, const at::Scalar &value) const
at::Tensor scatter(int64_t dim, const at::Tensor &index, const at::Tensor &src, c10::string_view reduce) const
at::Tensor &scatter_(int64_t dim, const at::Tensor &index, const at::Tensor &src, c10::string_view reduce) const
at::Tensor scatter(int64_t dim, const at::Tensor &index, const at::Scalar &value, c10::string_view reduce) const
at::Tensor &scatter_(int64_t dim, const at::Tensor &index, const at::Scalar &value, c10::string_view reduce) const
at::Tensor scatter(at::Dimname dim, const at::Tensor &index, const at::Tensor &src) const
at::Tensor scatter(at::Dimname dim, const at::Tensor &index, const at::Scalar &value) const
at::Tensor scatter_add(int64_t dim, const at::Tensor &index, const at::Tensor &src) const
at::Tensor &scatter_add_(int64_t dim, const at::Tensor &index, const at::Tensor &src) const
at::Tensor scatter_add(at::Dimname dim, const at::Tensor &index, const at::Tensor &src) const
at::Tensor &eq_(const at::Scalar &other) const
at::Tensor &eq_(const at::Tensor &other) const
at::Tensor bitwise_and(const at::Scalar &other) const
at::Tensor bitwise_and(const at::Tensor &other) const
at::Tensor &bitwise_and_(const at::Scalar &other) const
at::Tensor &bitwise_and_(const at::Tensor &other) const
at::Tensor __and__(const at::Scalar &other) const
at::Tensor __and__(const at::Tensor &other) const
at::Tensor &__iand__(const at::Scalar &other) const
at::Tensor &__iand__(const at::Tensor &other) const
at::Tensor bitwise_or(const at::Scalar &other) const
at::Tensor bitwise_or(const at::Tensor &other) const
at::Tensor &bitwise_or_(const at::Scalar &other) const
at::Tensor &bitwise_or_(const at::Tensor &other) const
at::Tensor __or__(const at::Scalar &other) const
at::Tensor __or__(const at::Tensor &other) const
at::Tensor &__ior__(const at::Scalar &other) const
at::Tensor &__ior__(const at::Tensor &other) const
at::Tensor bitwise_xor(const at::Scalar &other) const
at::Tensor bitwise_xor(const at::Tensor &other) const
at::Tensor &bitwise_xor_(const at::Scalar &other) const
at::Tensor &bitwise_xor_(const at::Tensor &other) const
at::Tensor __xor__(const at::Scalar &other) const
at::Tensor __xor__(const at::Tensor &other) const
at::Tensor &__ixor__(const at::Scalar &other) const
at::Tensor &__ixor__(const at::Tensor &other) const
at::Tensor __lshift__(const at::Scalar &other) const
at::Tensor __lshift__(const at::Tensor &other) const
at::Tensor &__ilshift__(const at::Scalar &other) const
at::Tensor &__ilshift__(const at::Tensor &other) const
at::Tensor bitwise_left_shift(const at::Tensor &other) const
at::Tensor &bitwise_left_shift_(const at::Tensor &other) const
at::Tensor bitwise_left_shift(const at::Scalar &other) const
at::Tensor &bitwise_left_shift_(const at::Scalar &other) const
at::Tensor __rshift__(const at::Scalar &other) const
at::Tensor __rshift__(const at::Tensor &other) const
at::Tensor &__irshift__(const at::Scalar &other) const
at::Tensor &__irshift__(const at::Tensor &other) const
at::Tensor bitwise_right_shift(const at::Tensor &other) const
at::Tensor &bitwise_right_shift_(const at::Tensor &other) const
at::Tensor bitwise_right_shift(const at::Scalar &other) const
at::Tensor &bitwise_right_shift_(const at::Scalar &other) const
at::Tensor &tril_(int64_t diagonal = 0) const
at::Tensor &triu_(int64_t diagonal = 0) const
at::Tensor &digamma_() const
at::Tensor &lerp_(const at::Tensor &end, const at::Scalar &weight) const
at::Tensor &lerp_(const at::Tensor &end, const at::Tensor &weight) const
at::Tensor &addbmm_(const at::Tensor &batch1, const at::Tensor &batch2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const
at::Tensor addbmm(const at::Tensor &batch1, const at::Tensor &batch2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const
at::Tensor &random_(int64_t from, c10::optional<int64_t> to, c10::optional<at::Generator> generator = c10::nullopt) const
at::Tensor &random_(int64_t to, c10::optional<at::Generator> generator = c10::nullopt) const
at::Tensor &random_(c10::optional<at::Generator> generator = c10::nullopt) const
at::Tensor &uniform_(double from = 0, double to = 1, c10::optional<at::Generator> generator = c10::nullopt) const
at::Tensor &cauchy_(double median = 0, double sigma = 1, c10::optional<at::Generator> generator = c10::nullopt) const
at::Tensor &log_normal_(double mean = 1, double std = 2, c10::optional<at::Generator> generator = c10::nullopt) const
at::Tensor &exponential_(double lambd = 1, c10::optional<at::Generator> generator = c10::nullopt) const
at::Tensor &geometric_(double p, c10::optional<at::Generator> generator = c10::nullopt) const
at::Tensor diag(int64_t diagonal = 0) const
at::Tensor cross(const at::Tensor &other, c10::optional<int64_t> dim = c10::nullopt) const
at::Tensor triu(int64_t diagonal = 0) const
at::Tensor tril(int64_t diagonal = 0) const
at::Tensor trace() const
at::Tensor ne(const at::Scalar &other) const
at::Tensor ne(const at::Tensor &other) const
at::Tensor &ne_(const at::Scalar &other) const
at::Tensor &ne_(const at::Tensor &other) const
at::Tensor not_equal(const at::Scalar &other) const
at::Tensor not_equal(const at::Tensor &other) const
at::Tensor &not_equal_(const at::Scalar &other) const
at::Tensor &not_equal_(const at::Tensor &other) const
at::Tensor eq(const at::Scalar &other) const
at::Tensor eq(const at::Tensor &other) const
at::Tensor ge(const at::Scalar &other) const
at::Tensor ge(const at::Tensor &other) const
at::Tensor &ge_(const at::Scalar &other) const
at::Tensor &ge_(const at::Tensor &other) const
at::Tensor greater_equal(const at::Scalar &other) const
at::Tensor greater_equal(const at::Tensor &other) const
at::Tensor &greater_equal_(const at::Scalar &other) const
at::Tensor &greater_equal_(const at::Tensor &other) const
at::Tensor le(const at::Scalar &other) const
at::Tensor le(const at::Tensor &other) const
at::Tensor &le_(const at::Scalar &other) const
at::Tensor &le_(const at::Tensor &other) const
at::Tensor less_equal(const at::Scalar &other) const
at::Tensor less_equal(const at::Tensor &other) const
at::Tensor &less_equal_(const at::Scalar &other) const
at::Tensor &less_equal_(const at::Tensor &other) const
at::Tensor gt(const at::Scalar &other) const
at::Tensor gt(const at::Tensor &other) const
at::Tensor &gt_(const at::Scalar &other) const
at::Tensor &gt_(const at::Tensor &other) const
at::Tensor greater(const at::Scalar &other) const
at::Tensor greater(const at::Tensor &other) const
at::Tensor &greater_(const at::Scalar &other) const
at::Tensor &greater_(const at::Tensor &other) const
at::Tensor lt(const at::Scalar &other) const
at::Tensor lt(const at::Tensor &other) const
at::Tensor &lt_(const at::Scalar &other) const
at::Tensor &lt_(const at::Tensor &other) const
at::Tensor less(const at::Scalar &other) const
at::Tensor less(const at::Tensor &other) const
at::Tensor &less_(const at::Scalar &other) const
at::Tensor &less_(const at::Tensor &other) const
at::Tensor take(const at::Tensor &index) const
at::Tensor take_along_dim(const at::Tensor &indices, c10::optional<int64_t> dim = c10::nullopt) const
at::Tensor index_select(int64_t dim, const at::Tensor &index) const
at::Tensor index_select(at::Dimname dim, const at::Tensor &index) const
at::Tensor masked_select(const at::Tensor &mask) const
at::Tensor nonzero() const
::std::vector<at::Tensor> nonzero_numpy() const
at::Tensor gather(int64_t dim, const at::Tensor &index, bool sparse_grad = false) const
at::Tensor gather(at::Dimname dim, const at::Tensor &index, bool sparse_grad = false) const
at::Tensor addcmul(const at::Tensor &tensor1, const at::Tensor &tensor2, const at::Scalar &value = 1) const
at::Tensor &addcmul_(const at::Tensor &tensor1, const at::Tensor &tensor2, const at::Scalar &value = 1) const
at::Tensor addcdiv(const at::Tensor &tensor1, const at::Tensor &tensor2, const at::Scalar &value = 1) const
at::Tensor &addcdiv_(const at::Tensor &tensor1, const at::Tensor &tensor2, const at::Scalar &value = 1) const
::std::tuple<at::Tensor, at::Tensor> lstsq(const at::Tensor &A) const
::std::tuple<at::Tensor, at::Tensor> triangular_solve(const at::Tensor &A, bool upper = true, bool transpose = false, bool unitriangular = false) const
::std::tuple<at::Tensor, at::Tensor> symeig(bool eigenvectors = false, bool upper = true) const
::std::tuple<at::Tensor, at::Tensor> eig(bool eigenvectors = false) const
::std::tuple<at::Tensor, at::Tensor, at::Tensor> svd(bool some = true, bool compute_uv = true) const
at::Tensor swapaxes(int64_t axis0, int64_t axis1) const
at::Tensor &swapaxes_(int64_t axis0, int64_t axis1) const
at::Tensor swapdims(int64_t dim0, int64_t dim1) const
at::Tensor &swapdims_(int64_t dim0, int64_t dim1) const
at::Tensor cholesky(bool upper = false) const
at::Tensor cholesky_solve(const at::Tensor &input2, bool upper = false) const
::std::tuple<at::Tensor, at::Tensor> solve(const at::Tensor &A) const
at::Tensor cholesky_inverse(bool upper = false) const
::std::tuple<at::Tensor, at::Tensor> qr(bool some = true) const
::std::tuple<at::Tensor, at::Tensor> geqrf() const
at::Tensor orgqr(const at::Tensor &input2) const
at::Tensor ormqr(const at::Tensor &input2, const at::Tensor &input3, bool left = true, bool transpose = false) const
at::Tensor lu_solve(const at::Tensor &LU_data, const at::Tensor &LU_pivots) const
at::Tensor multinomial(int64_t num_samples, bool replacement = false, c10::optional<at::Generator> generator = c10::nullopt) const
at::Tensor &lgamma_() const
at::Tensor lgamma() const
at::Tensor digamma() const
at::Tensor polygamma(int64_t n) const
at::Tensor &polygamma_(int64_t n) const
at::Tensor erfinv() const
at::Tensor &erfinv_() const
at::Tensor i0() const
at::Tensor &i0_() const
at::Tensor sign() const
at::Tensor &sign_() const
at::Tensor signbit() const
at::Tensor dist(const at::Tensor &other, const at::Scalar &p = 2) const
at::Tensor &atan2_(const at::Tensor &other) const
at::Tensor atan2(const at::Tensor &other) const
at::Tensor lerp(const at::Tensor &end, const at::Scalar &weight) const
at::Tensor lerp(const at::Tensor &end, const at::Tensor &weight) const
at::Tensor histc(int64_t bins = 100, const at::Scalar &min = 0, const at::Scalar &max = 0) const
::std::tuple<at::Tensor, at::Tensor> histogram(const at::Tensor &bins, const c10::optional<at::Tensor> &weight = {}, bool density = false) const
::std::tuple<at::Tensor, at::Tensor> histogram(int64_t bins = 100, c10::optional<at::ArrayRef<double>> range = c10::nullopt, const c10::optional<at::Tensor> &weight = {}, bool density = false) const
at::Tensor fmod(const at::Scalar &other) const
at::Tensor &fmod_(const at::Scalar &other) const
at::Tensor fmod(const at::Tensor &other) const
at::Tensor &fmod_(const at::Tensor &other) const
at::Tensor hypot(const at::Tensor &other) const
at::Tensor &hypot_(const at::Tensor &other) const
at::Tensor igamma(const at::Tensor &other) const
at::Tensor &igamma_(const at::Tensor &other) const
at::Tensor igammac(const at::Tensor &other) const
at::Tensor &igammac_(const at::Tensor &other) const
at::Tensor nextafter(const at::Tensor &other) const
at::Tensor &nextafter_(const at::Tensor &other) const
at::Tensor remainder(const at::Scalar &other) const
at::Tensor &remainder_(const at::Scalar &other) const
at::Tensor remainder(const at::Tensor &other) const
at::Tensor &remainder_(const at::Tensor &other) const
at::Tensor min() const
at::Tensor fmin(const at::Tensor &other) const
at::Tensor max() const
at::Tensor fmax(const at::Tensor &other) const
at::Tensor maximum(const at::Tensor &other) const
at::Tensor max(const at::Tensor &other) const
at::Tensor minimum(const at::Tensor &other) const
at::Tensor min(const at::Tensor &other) const
at::Tensor quantile(double q, c10::optional<int64_t> dim = c10::nullopt, bool keepdim = false) const
at::Tensor quantile(const at::Tensor &q, c10::optional<int64_t> dim = c10::nullopt, bool keepdim = false) const
at::Tensor nanquantile(double q, c10::optional<int64_t> dim = c10::nullopt, bool keepdim = false) const
at::Tensor nanquantile(const at::Tensor &q, c10::optional<int64_t> dim = c10::nullopt, bool keepdim = false) const
at::Tensor quantile(double q, c10::optional<int64_t> dim, bool keepdim, c10::string_view interpolation) const
at::Tensor quantile(const at::Tensor &q, c10::optional<int64_t> dim, bool keepdim, c10::string_view interpolation) const
at::Tensor nanquantile(double q, c10::optional<int64_t> dim, bool keepdim, c10::string_view interpolation) const
at::Tensor nanquantile(const at::Tensor &q, c10::optional<int64_t> dim, bool keepdim, c10::string_view interpolation) const
::std::tuple<at::Tensor, at::Tensor> sort(int64_t dim = -1, bool descending = false) const
::std::tuple<at::Tensor, at::Tensor> sort(c10::optional<bool> stable, int64_t dim = -1, bool descending = false) const
::std::tuple<at::Tensor, at::Tensor> sort(at::Dimname dim, bool descending = false) const
::std::tuple<at::Tensor, at::Tensor> sort(c10::optional<bool> stable, at::Dimname dim, bool descending = false) const
at::Tensor msort() const
at::Tensor argsort(int64_t dim = -1, bool descending = false) const
at::Tensor argsort(at::Dimname dim, bool descending = false) const
::std::tuple<at::Tensor, at::Tensor> topk(int64_t k, int64_t dim = -1, bool largest = true, bool sorted = true) const
at::Tensor all() const
at::Tensor any() const
at::Tensor renorm(const at::Scalar &p, int64_t dim, const at::Scalar &maxnorm) const
at::Tensor &renorm_(const at::Scalar &p, int64_t dim, const at::Scalar &maxnorm) const
at::Tensor unfold(int64_t dimension, int64_t size, int64_t step) const
bool equal(const at::Tensor &other) const
at::Tensor pow(const at::Tensor &exponent) const
at::Tensor pow(const at::Scalar &exponent) const
at::Tensor &pow_(const at::Scalar &exponent) const
at::Tensor &pow_(const at::Tensor &exponent) const
at::Tensor float_power(const at::Tensor &exponent) const
at::Tensor float_power(const at::Scalar &exponent) const
at::Tensor &float_power_(const at::Scalar &exponent) const
at::Tensor &float_power_(const at::Tensor &exponent) const
at::Tensor &normal_(double mean = 0, double std = 1, c10::optional<at::Generator> generator = c10::nullopt) const
at::Tensor alias() const
at::Tensor isfinite() const
at::Tensor isinf() const
void record_stream(at::Stream s) const
at::Tensor isposinf() const
at::Tensor isneginf() const
at::Tensor special_polygamma(int64_t n) const
at::Tensor det() const
at::Tensor inner(const at::Tensor &other) const
at::Tensor outer(const at::Tensor &vec2) const
at::Tensor ger(const at::Tensor &vec2) const
Tensor var(int dim) const
Tensor std(int dim) const
Tensor to(caffe2::TypeMeta type_meta, bool non_blocking = false, bool copy = false) const
Tensor to(Device device, caffe2::TypeMeta type_meta, bool non_blocking = false, bool copy = false) const
template<typename F, typename ...Args>
decltype(auto) m(F func, Args&&... params) const
at::Tensor tensor_data() const

NOTE: This is similar to the legacy .data() function on Variable, and is intended to be used from functions that need to access the Variable's equivalent Tensor (i.e. the Tensor that shares the same storage and tensor metadata with the Variable).

One notable difference from the legacy .data() function is that changes to the returned Tensor's tensor metadata (e.g. sizes / strides / storage / storage_offset) will not update the original Variable, because this function shallow-copies the Variable's underlying TensorImpl.

at::Tensor variable_data() const

NOTE: var.variable_data() in C++ has the same semantics as tensor.data in Python, which creates a new Variable that shares the same storage and tensor metadata with the original Variable, but with a completely new autograd history.

NOTE: If we change the tensor metadata (e.g. sizes / strides / storage / storage_offset) of a variable created from var.variable_data(), those changes will not update the original variable var. In .variable_data(), we set allow_tensor_metadata_change_ to false to make such changes explicitly illegal, in order to prevent users from changing metadata of var.variable_data() and expecting the original variable var to also be updated.

template<typename T>
hook_return_void_t<T> register_hook(T &&hook) const

Registers a backward hook.

The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have one of the following signatures:

hook(Tensor grad) -> Tensor
hook(Tensor grad) -> void
The hook should not modify its argument, but it can optionally return a new gradient that will be used in place of grad.

This function returns the index of the hook in the list, which can be used to remove the hook.

Example:

auto v = torch::tensor({0., 0., 0.}, torch::requires_grad());
auto h = v.register_hook([](torch::Tensor grad){ return grad * 2; }); // double the gradient
v.backward(torch::tensor({1., 2., 3.}));
// This prints:
//  2
//  4
//  6
// [ CPUFloatType{3} ]
std::cout << v.grad() << std::endl;
v.remove_hook(h);  // removes the hook

template<typename T>
hook_return_var_t<T> register_hook(T &&hook) const
Tensor data() const
void _backward(TensorList inputs, const c10::optional<Tensor> &gradient, c10::optional<bool> keep_graph, bool create_graph) const
const Tensor &requires_grad_(bool _requires_grad = true) const
template<typename T>
auto register_hook(T &&hook) const -> Tensor::hook_return_void_t<T>

Public Members

N
PtrTraits

Public Static Functions

Tensor wrap_tensor_impl(c10::intrusive_ptr<TensorImpl, UndefinedTensorImpl> tensor_impl)

Protected Functions

Tensor(unsafe_borrow_t, const TensorBase &rhs)

Protected Attributes

friend MaybeOwnedTraits< Tensor >
friend OptionalTensorRef
