Function at::_efficient_attention_backward_symint

Function Documentation

inline ::std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor> at::_efficient_attention_backward_symint(
    const at::Tensor &grad_out_,
    const at::Tensor &query,
    const at::Tensor &key,
    const at::Tensor &value,
    const ::std::optional<at::Tensor> &bias,
    const at::Tensor &out,
    const ::std::optional<at::Tensor> &cu_seqlens_q,
    const ::std::optional<at::Tensor> &cu_seqlens_k,
    c10::SymInt max_seqlen_q,
    c10::SymInt max_seqlen_k,
    const at::Tensor &logsumexp,
    double dropout_p,
    const at::Tensor &philox_seed,
    const at::Tensor &philox_offset,
    int64_t custom_mask_type,
    bool bias_requires_grad,
    ::std::optional<double> scale = ::std::nullopt,
    ::std::optional<int64_t> num_splits_key = ::std::nullopt,
    ::std::optional<int64_t> window_size = ::std::nullopt,
    bool shared_storage_dqdkdv = false)
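The page gives no prose description, so the following is a minimal, hedged sketch of a call rather than a definitive usage pattern. It assumes the [batch, seq_len, num_heads, head_dim] tensor layout used by PyTorch's memory-efficient attention kernel, a float32 logsumexp of shape [batch, num_heads, max_seqlen_q], and that custom_mask_type = 0 means "no custom mask". In real code, out, logsumexp, philox_seed, and philox_offset would come from the matching forward call (e.g. at::_efficient_attention_forward) rather than being fabricated as they are here.

#include <ATen/ATen.h>
#include <optional>
#include <tuple>

int main() {
  // Illustrative sizes; the kernel is CUDA-only, so tensors live on GPU.
  const int64_t B = 2;   // batch
  const int64_t M = 128; // query sequence length
  const int64_t N = 128; // key/value sequence length
  const int64_t H = 8;   // number of heads
  const int64_t K = 64;  // head dimension
  auto opts = at::TensorOptions().dtype(at::kHalf).device(at::kCUDA);

  auto query = at::randn({B, M, H, K}, opts);
  auto key   = at::randn({B, N, H, K}, opts);
  auto value = at::randn({B, N, H, K}, opts);

  // Placeholders: in real use, `out`, `logsumexp`, `philox_seed`, and
  // `philox_offset` are produced by the matching forward call, not
  // fabricated. The logsumexp shape [B, H, M] is an assumption.
  auto out           = at::randn({B, M, H, K}, opts);
  auto logsumexp     = at::randn({B, H, M}, opts.dtype(at::kFloat));
  auto philox_seed   = at::empty({}, at::TensorOptions().dtype(at::kLong));
  auto philox_offset = at::empty({}, at::TensorOptions().dtype(at::kLong));

  auto grad_out = at::ones_like(out);

  // dropout_p = 0.0, so the philox state should go unused;
  // custom_mask_type = 0 is assumed to mean "no causal mask".
  auto grads = at::_efficient_attention_backward_symint(
      grad_out, query, key, value,
      /*bias=*/::std::nullopt,
      out,
      /*cu_seqlens_q=*/::std::nullopt,
      /*cu_seqlens_k=*/::std::nullopt,
      /*max_seqlen_q=*/M,
      /*max_seqlen_k=*/N,
      logsumexp,
      /*dropout_p=*/0.0,
      philox_seed, philox_offset,
      /*custom_mask_type=*/0,
      /*bias_requires_grad=*/false);

  at::Tensor grad_query = ::std::get<0>(grads);
  at::Tensor grad_key   = ::std::get<1>(grads);
  at::Tensor grad_value = ::std::get<2>(grads);
  // ::std::get<3>(grads) holds the bias gradient; it is undefined here
  // because no bias was supplied and bias_requires_grad is false.
  return 0;
}

Judging from the argument order, the four returned tensors are presumably the gradients with respect to query, key, value, and bias, in that order.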
