Function at::_triton_scaled_dot_attention_out

Function Documentation

inline at::Tensor &at::_triton_scaled_dot_attention_out(at::Tensor &out, const at::Tensor &q, const at::Tensor &k, const at::Tensor &v, double dropout_p = 0.0)
