Function at::_flash_attention_forward_symint

Function Documentation

inline ::std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor> at::_flash_attention_forward_symint(
    const at::Tensor &query,
    const at::Tensor &key,
    const at::Tensor &value,
    const ::std::optional<at::Tensor> &cum_seq_q,
    const ::std::optional<at::Tensor> &cum_seq_k,
    c10::SymInt max_q,
    c10::SymInt max_k,
    double dropout_p,
    bool is_causal,
    bool return_debug_mask,
    ::std::optional<double> scale = ::std::nullopt,
    ::std::optional<c10::SymInt> window_size_left = ::std::nullopt,
    ::std::optional<c10::SymInt> window_size_right = ::std::nullopt,
    const ::std::optional<at::Tensor> &seqused_k = {},
    const ::std::optional<at::Tensor> &alibi_slopes = {})
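
The declaration above is the only documentation this internal operator ships with; the leading underscore marks it as a private ATen op, and typical code reaches this kernel through at::scaled_dot_product_attention rather than calling it directly. The sketch below shows one way to invoke the dense (non-varlen) path. It is a minimal sketch, not canonical usage: it assumes a CUDA build with flash-attention kernels enabled, fp16 inputs, and a (batch, seq_len, num_heads, head_dim) layout for query/key/value, which is assumed here to be what the dense path expects. cum_seq_q and cum_seq_k are left empty, so max_q and max_k are simply the full sequence lengths.

    #include <ATen/ATen.h>
    #include <iostream>

    int main() {
      const int64_t batch = 2, seq_len = 128, num_heads = 8, head_dim = 64;

      // Flash attention requires half/bfloat16 tensors on a CUDA device.
      auto opts  = at::TensorOptions().dtype(at::kHalf).device(at::kCUDA);
      auto query = at::randn({batch, seq_len, num_heads, head_dim}, opts);
      auto key   = at::randn({batch, seq_len, num_heads, head_dim}, opts);
      auto value = at::randn({batch, seq_len, num_heads, head_dim}, opts);

      // Dense path: no cumulative sequence lengths, so cum_seq_q / cum_seq_k
      // stay empty and max_q / max_k are the full sequence lengths.
      // Returned tuple (assumed ordering): output, softmax logsumexp,
      // philox seed, philox offset, debug attention mask.
      auto [output, logsumexp, philox_seed, philox_offset, debug_mask] =
          at::_flash_attention_forward_symint(
              query, key, value,
              /*cum_seq_q=*/::std::nullopt,
              /*cum_seq_k=*/::std::nullopt,
              /*max_q=*/seq_len,
              /*max_k=*/seq_len,
              /*dropout_p=*/0.0,
              /*is_causal=*/true,
              /*return_debug_mask=*/false);

      // output keeps the same (batch, seq_len, num_heads, head_dim) layout as query.
      std::cout << output.sizes() << std::endl;
      return 0;
    }

The trailing optional parameters (scale, window_size_left, window_size_right, seqused_k, alibi_slopes) are left at their defaults here; they control the softmax scaling factor, sliding-window attention bounds, per-batch used key lengths, and ALiBi biases respectively.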
