
Function at::_fused_adam(at::TensorList, at::TensorList, at::TensorList, at::TensorList, at::TensorList, at::TensorList, const at::Tensor&, double, double, double, double, bool, bool, const ::std::optional<at::Tensor>&, const ::std::optional<at::Tensor>&)

Function Documentation

inline ::std::tuple<::std::vector<at::Tensor>, ::std::vector<at::Tensor>, ::std::vector<at::Tensor>, ::std::vector<at::Tensor>, ::std::vector<at::Tensor>> at::_fused_adam(
    at::TensorList self,
    at::TensorList grads,
    at::TensorList exp_avgs,
    at::TensorList exp_avg_sqs,
    at::TensorList max_exp_avg_sqs,
    at::TensorList state_steps,
    const at::Tensor &lr,
    double beta1,
    double beta2,
    double weight_decay,
    double eps,
    bool amsgrad,
    bool maximize,
    const ::std::optional<at::Tensor> &grad_scale = {},
    const ::std::optional<at::Tensor> &found_inf = {})
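The page provides only the raw signature, so the sketch below shows one way this overload (the variant taking the learning rate as a Tensor) might be called directly from C++ with LibTorch. It is a minimal, hedged illustration rather than official usage guidance: the tensor shapes, hyperparameter values, the float dtypes chosen for state_steps and lr, and the reading of the returned tuple as updated copies of the five mutated tensor lists (params, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs) are assumptions, and the call only succeeds on builds whose backend registers a fused Adam kernel for this overload. In normal use this op sits underneath torch.optim.Adam(..., fused=True) rather than being called directly.

// Minimal sketch (not from the PyTorch docs): calling the functional
// fused-Adam op directly through ATen. Shapes, dtypes, and hyperparameter
// values are illustrative assumptions; a backend must provide a fused
// kernel for this overload (e.g. a recent CPU or CUDA build).
#include <torch/torch.h>

#include <iostream>
#include <vector>

int main() {
  // One parameter tensor with a matching gradient and Adam state.
  std::vector<at::Tensor> params          = {torch::randn({4, 4})};
  std::vector<at::Tensor> grads           = {torch::randn({4, 4})};
  std::vector<at::Tensor> exp_avgs        = {torch::zeros({4, 4})};
  std::vector<at::Tensor> exp_avg_sqs     = {torch::zeros({4, 4})};
  std::vector<at::Tensor> max_exp_avg_sqs = {torch::zeros({4, 4})};  // only read when amsgrad=true
  // Per-parameter step count; float dtype is an assumption here.
  std::vector<at::Tensor> state_steps     = {torch::tensor(1.0, torch::dtype(torch::kFloat))};

  // This overload takes the learning rate as a scalar Tensor.
  at::Tensor lr = torch::tensor(1e-3, torch::dtype(torch::kFloat));

  // Out-of-place variant: returns a 5-tuple of tensor vectors, assumed to be
  // updated copies of params, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs.
  auto result = at::_fused_adam(
      params, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps,
      lr,
      /*beta1=*/0.9,
      /*beta2=*/0.999,
      /*weight_decay=*/0.0,
      /*eps=*/1e-8,
      /*amsgrad=*/false,
      /*maximize=*/false);  // grad_scale and found_inf keep their {} defaults

  std::cout << "updated param:\n" << std::get<0>(result)[0] << std::endl;
  return 0;
}

The design point of the fused op is that one call advances the Adam state for a whole list of parameters, which is why every argument is a TensorList and why the amp-related grad_scale and found_inf tensors are threaded through as optional arguments instead of being handled by the caller per parameter.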
