Function at::_scaled_dot_product_fused_attention_overrideable_backward_symint

Function Documentation

inline ::std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor> at::_scaled_dot_product_fused_attention_overrideable_backward_symint(
    const at::Tensor &grad_out,
    const at::Tensor &query,
    const at::Tensor &key,
    const at::Tensor &value,
    const at::Tensor &attn_bias,
    ::std::array<bool, 4> grad_input_mask,
    const at::Tensor &out,
    const at::Tensor &logsumexp,
    const at::Tensor &cum_seq_q,
    const at::Tensor &cum_seq_k,
    c10::SymInt max_q,
    c10::SymInt max_k,
    double dropout_p,
    bool is_causal,
    const at::Tensor &philox_seed,
    const at::Tensor &philox_offset,
    ::std::optional<double> scale = ::std::nullopt)
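Based on the signature and naming convention, this is the backward of the overrideable fused scaled dot-product attention op: it consumes the upstream gradient (grad_out), the forward inputs (query, key, value, attn_bias), the saved forward results (out, logsumexp), the cumulative-sequence tensors used for packed/variable-length layouts (cum_seq_q, cum_seq_k, max_q, max_k), and the saved dropout RNG state (philox_seed, philox_offset), and returns gradients for query, key, value, and attn_bias. grad_input_mask selects which of those four gradients are actually computed. Below is a minimal call-site sketch, not from the source: the shapes and variable names are illustrative placeholders, and since this op dispatches to a backend-registered kernel, it will fail on a stock build unless a backend has registered an implementation for it.

    #include <ATen/ATen.h>
    #include <array>
    #include <tuple>

    // Illustrative shapes: batch=2, heads=4, seq_len=8, head_dim=16.
    // In practice, out, logsumexp, philox_seed, and philox_offset come
    // from the matching forward call; here they are placeholders.
    const int64_t B = 2, H = 4, L = 8, E = 16;
    at::Tensor query = at::randn({B, H, L, E});
    at::Tensor key = at::randn({B, H, L, E});
    at::Tensor value = at::randn({B, H, L, E});
    at::Tensor attn_bias = at::zeros({B, H, L, L});
    at::Tensor grad_out = at::randn({B, H, L, E});

    at::Tensor fwd_out = at::randn({B, H, L, E});      // saved forward output (placeholder)
    at::Tensor logsumexp = at::randn({B, H, L});       // saved softmax statistics (placeholder)
    at::Tensor cum_seq_q, cum_seq_k;                   // undefined: only used for packed layouts
    at::Tensor philox_seed = at::zeros({}, at::kLong); // saved RNG state (placeholder)
    at::Tensor philox_offset = at::zeros({}, at::kLong);

    // Request gradients for query, key, and value, but not attn_bias.
    ::std::array<bool, 4> grad_input_mask = {true, true, true, false};

    auto grads = at::_scaled_dot_product_fused_attention_overrideable_backward_symint(
        grad_out, query, key, value, attn_bias, grad_input_mask,
        fwd_out, logsumexp, cum_seq_q, cum_seq_k,
        /*max_q=*/L, /*max_k=*/L,
        /*dropout_p=*/0.0, /*is_causal=*/false,
        philox_seed, philox_offset,
        /*scale=*/::std::nullopt);

    at::Tensor grad_query = ::std::get<0>(grads);
    at::Tensor grad_key   = ::std::get<1>(grads);
    at::Tensor grad_value = ::std::get<2>(grads);
    // ::std::get<3>(grads) would hold grad_attn_bias when requested.

The _symint suffix indicates that max_q and max_k are taken as c10::SymInt rather than plain int64_t, so the op can be traced with symbolic shapes; concrete int64_t values (as in the sketch above) convert implicitly.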
