Struct EmbeddingOptions

Struct Documentation

struct torch::nn::EmbeddingOptions

Options for the Embedding module.

Example:

Embedding model(EmbeddingOptions(10, 2)
                    .padding_idx(3)
                    .max_norm(2)
                    .norm_type(2.5)
                    .scale_grad_by_freq(true)
                    .sparse(true));
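
For context, the same example as a complete, compilable program; the main wrapper, includes, and the sample indices tensor are illustrative additions, not part of the generated reference:

#include <torch/torch.h>
#include <iostream>

int main() {
  // Build the options fluently, then construct the module from them.
  torch::nn::Embedding model(
      torch::nn::EmbeddingOptions(/*num_embeddings=*/10, /*embedding_dim=*/2)
          .padding_idx(3)
          .max_norm(2)
          .norm_type(2.5)
          .scale_grad_by_freq(true)
          .sparse(true));

  // Look up embeddings for a batch of indices in [0, num_embeddings).
  auto indices = torch::tensor({1, 3, 7}, torch::kLong);
  auto output = model->forward(indices);  // shape: {3, 2}
  std::cout << output << std::endl;
}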

Public Functions

EmbeddingOptions(int64_t num_embeddings, int64_t embedding_dim)
auto num_embeddings(const int64_t &new_num_embeddings) -> decltype(*this)

The size of the dictionary of embeddings.

auto num_embeddings(int64_t &&new_num_embeddings) -> decltype(*this)
const int64_t &num_embeddings() const noexcept
int64_t &num_embeddings() noexcept
auto embedding_dim(const int64_t &new_embedding_dim) -> decltype(*this)

The size of each embedding vector.

auto embedding_dim(int64_t &&new_embedding_dim) -> decltype(*this)
const int64_t &embedding_dim() const noexcept
int64_t &embedding_dim() noexcept
auto padding_idx(const c10::optional<int64_t> &new_padding_idx) -> decltype(*this)

If specified, the entries at padding_idx do not contribute to the gradient; therefore, the embedding vector at padding_idx is not updated during training, i.e. it remains as a fixed “pad”. For a newly constructed Embedding, the embedding vector at padding_idx will default to all zeros, but can be updated to another value to be used as the padding vector. (A short sketch follows the accessors below.)

auto padding_idx(c10::optional<int64_t> &&new_padding_idx) -> decltype(*this)
const c10::optional<int64_t> &padding_idx() const noexcept
c10::optional<int64_t> &padding_idx() noexcept
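
A minimal sketch of the padding_idx behavior; the 5×3 sizes and the sample input are illustrative assumptions:

#include <torch/torch.h>
#include <iostream>

int main() {
  torch::nn::Embedding embedding(
      torch::nn::EmbeddingOptions(5, 3).padding_idx(0));

  auto input = torch::tensor({0, 2, 0, 4}, torch::kLong);
  embedding->forward(input).sum().backward();

  // Row 0 starts as all zeros, and its gradient row stays zero,
  // so the "pad" vector is never updated by an optimizer step.
  std::cout << embedding->weight[0] << std::endl;
  std::cout << embedding->weight.grad()[0] << std::endl;
}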
auto max_norm(const c10::optional<double> &new_max_norm) -> decltype(*this)

If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm.

auto max_norm(c10::optional<double> &&new_max_norm) -> decltype(*this)
const c10::optional<double> &max_norm() const noexcept
c10::optional<double> &max_norm() noexcept
auto norm_type(const double &new_norm_type) -> decltype(*this)

The p of the p-norm to compute for the max_norm option. Default: 2. (A combined max_norm/norm_type sketch follows the accessors below.)

auto norm_type(double &&new_norm_type) -> decltype(*this)
const double &norm_type() const noexcept
double &norm_type() noexcept
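
A minimal sketch of max_norm together with norm_type; renormalization happens in place on the rows that are actually looked up. The sizes and values here are illustrative assumptions:

#include <torch/torch.h>
#include <iostream>

int main() {
  torch::nn::Embedding embedding(
      torch::nn::EmbeddingOptions(4, 8).max_norm(1.0).norm_type(2.0));

  // Rows 0 and 1 are looked up; any looked-up row whose 2-norm exceeds
  // 1.0 is rescaled in place so that its norm equals max_norm.
  auto input = torch::tensor({0, 1}, torch::kLong);
  embedding->forward(input);

  std::cout << embedding->weight[0].norm(2) << std::endl;  // <= 1.0
  std::cout << embedding->weight[1].norm(2) << std::endl;  // <= 1.0
}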
auto scale_grad_by_freq(const bool &new_scale_grad_by_freq) -> decltype(*this)

If true, gradients are scaled by the inverse of the frequency of the words in the mini-batch. Default: false.

auto scale_grad_by_freq(bool &&new_scale_grad_by_freq) -> decltype(*this)
const bool &scale_grad_by_freq() const noexcept
bool &scale_grad_by_freq() noexcept
auto sparse(const bool &new_sparse) -> decltype(*this)

If true, the gradient w.r.t. the weight matrix will be a sparse tensor (see the sketch below).

auto sparse(bool &&new_sparse) -> decltype(*this)
const bool &sparse() const noexcept
bool &sparse() noexcept
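
A minimal sketch of the sparse-gradient behavior; the sizes and indices are illustrative assumptions, and note that only optimizers that understand sparse gradients can consume them:

#include <torch/torch.h>
#include <iostream>

int main() {
  torch::nn::Embedding embedding(
      torch::nn::EmbeddingOptions(1000, 16).sparse(true));

  auto input = torch::tensor({3, 42, 999}, torch::kLong);
  embedding->forward(input).sum().backward();

  // Only the three rows that were looked up are materialized in the
  // gradient, which is stored as a sparse tensor.
  std::cout << embedding->weight.grad().is_sparse() << std::endl;  // 1
}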
auto _weight(const torch::Tensor &new__weight) -> decltype(*this)

The learnable weights of the module, of shape (num_embeddings, embedding_dim). If set, the module is initialized from this tensor instead of its default initialization (see the sketch below).

auto _weight(torch::Tensor &&new__weight) -> decltype(*this)
const torch::Tensor &_weight() const noexcept
torch::Tensor &_weight() noexcept
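
A minimal sketch of initializing from an existing weight tensor via _weight; the random table here is an illustrative stand-in for real pretrained embeddings:

#include <torch/torch.h>
#include <iostream>

int main() {
  // Stand-in for a pretrained {num_embeddings, embedding_dim} table.
  auto pretrained = torch::rand({10, 2});

  torch::nn::Embedding embedding(
      torch::nn::EmbeddingOptions(10, 2)._weight(pretrained));

  // The module starts from the supplied weights rather than the
  // default normal initialization.
  std::cout << embedding->weight.allclose(pretrained) << std::endl;  // 1
}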
