Class TransformerDecoderImpl

Inheritance Relationships

Base Type

public torch::nn::Cloneable<TransformerDecoderImpl>
Class Documentation

class TransformerDecoderImpl : public torch::nn::Cloneable<TransformerDecoderImpl>

TransformerDecoder is a stack of N decoder layers.

See https://pytorch.org/docs/main/generated/torch.nn.TransformerDecoder.html to learn about the exact behavior of this decoder module.

See the documentation for torch::nn::TransformerDecoderOptions class to learn what constructor arguments are supported for this decoder module.

Example:

TransformerDecoderLayer decoder_layer(TransformerDecoderLayerOptions(512, 8).dropout(0.1));
TransformerDecoder transformer_decoder(TransformerDecoderOptions(decoder_layer, 6).norm(LayerNorm(LayerNormOptions({2}))));
const auto memory = torch::rand({10, 32, 512});
const auto tgt = torch::rand({20, 32, 512});
auto out = transformer_decoder(tgt, memory);

Public Functions

inline TransformerDecoderImpl(TransformerDecoderLayer decoder_layer, int64_t num_layers)
explicit TransformerDecoderImpl(TransformerDecoderOptions options_)
virtual void reset() override

reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.

void reset_parameters()
Tensor forward(const Tensor &tgt, const Tensor &memory, const Tensor &tgt_mask = {}, const Tensor &memory_mask = {}, const Tensor &tgt_key_padding_mask = {}, const Tensor &memory_key_padding_mask = {})

Pass the inputs (and masks) through each decoder layer in turn.

Args:
  tgt: the sequence to the decoder layer (required).
  memory: the sequence from the last layer of the encoder (required).
  tgt_mask: the mask for the tgt sequence (optional).
  memory_mask: the mask for the memory sequence (optional).
  tgt_key_padding_mask: the mask for the tgt keys per batch (optional).
  memory_key_padding_mask: the mask for the memory keys per batch (optional).

Public Members

TransformerDecoderOptions options

The options used to configure this module.

ModuleList layers = {nullptr}

The cloned decoder layers.

AnyModule norm

The optional layer normalization module.

Protected Functions

inline virtual bool _forward_has_default_args() override

The following three functions allow a module with default arguments in its forward method to be used in a Sequential module.

You should NEVER override these functions manually. Instead, you should use the FORWARD_HAS_DEFAULT_ARGS macro.

inline virtual unsigned int _forward_num_required_args() override
inline std::vector<torch::nn::AnyValue> _forward_populate_default_args(std::vector<torch::nn::AnyValue> &&arguments) override

Friends

friend struct torch::nn::AnyModuleHolder
