- class torch.nn.utils.rnn.PackedSequence(data, batch_sizes=None, sorted_indices=None, unsorted_indices=None)
Holds the data and list of batch_sizes of a packed sequence.
All RNN modules accept packed sequences as inputs.
Instances of this class should never be created manually. They are meant to be instantiated by functions like pack_padded_sequence().
Batch sizes represent the number of elements at each sequence step in the batch, not the varying sequence lengths passed to pack_padded_sequence(). For instance, given data abc and x, the PackedSequence would contain data axbc with batch_sizes=[2, 1, 1].
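The abc/x example above can be reproduced with pack_padded_sequence(). A minimal sketch (assuming PyTorch is installed; the integer codes for a, b, c, x are arbitrary placeholders):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Encode the example numerically: sequence "abc" -> [1, 2, 3],
# sequence "x" -> [4]; pad the shorter one with 0 into a (batch, time) tensor.
padded = torch.tensor([[1, 2, 3],
                       [4, 0, 0]])
packed = pack_padded_sequence(padded, lengths=[3, 1], batch_first=True)

# data is interleaved time-step by time-step: a, x, b, c
print(packed.data)         # tensor([1, 4, 2, 3])
print(packed.batch_sizes)  # tensor([2, 1, 1])
```

Note that batch_sizes counts how many sequences are still "alive" at each time step (2 at step 0, then 1, then 1), not the per-sequence lengths [3, 1].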
data (Tensor) – Tensor containing packed sequence
batch_sizes (Tensor) – Tensor of integers holding information about the batch size at each sequence step
sorted_indices (Tensor, optional) – Tensor of integers holding how this PackedSequence is constructed from sequences.
unsorted_indices (Tensor, optional) – Tensor of integers holding how to recover the original sequences in the correct order.
batch_sizes should always be a CPU torch.int64 tensor.
This invariant is maintained throughout the PackedSequence class, and by all functions that construct a PackedSequence in PyTorch (i.e., they only pass in tensors conforming to this constraint).
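The invariant can be checked directly: batch_sizes stays a CPU int64 tensor even when data is moved to another device. A minimal sketch, assuming PyTorch is installed:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

packed = pack_padded_sequence(torch.tensor([[1., 2.], [3., 0.]]),
                              lengths=[2, 1], batch_first=True)

# batch_sizes is always a CPU int64 tensor, wherever data lives
assert packed.batch_sizes.device.type == "cpu"
assert packed.batch_sizes.dtype == torch.int64

if torch.cuda.is_available():
    moved = packed.to("cuda")
    assert moved.data.is_cuda                      # data follows the device...
    assert moved.batch_sizes.device.type == "cpu"  # ...batch_sizes stays on CPU
```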
- count(value, /)
Return number of occurrences of value.
- index(value, start=0, stop=9223372036854775807, /)
Return first index of value.
Raises ValueError if the value is not present.
- property is_cuda
Returns true if self.data is stored on a GPU.
- is_pinned()
Returns true if self.data is stored in pinned memory.
- to(*args, **kwargs)
Performs dtype and/or device conversion on self.data.
It has a signature similar to torch.Tensor.to(), except that optional arguments like non_blocking and copy should be passed as kwargs, not args, or they will not apply to the index tensors.
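A short usage sketch of to(), assuming PyTorch is installed (the float16 conversion is an illustrative choice, not part of the API description above):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

packed = pack_padded_sequence(torch.tensor([[1., 2.], [3., 0.]]),
                              lengths=[2, 1], batch_first=True)

# dtype conversion is applied to packed.data
halved = packed.to(torch.float16)
assert halved.data.dtype == torch.float16

# device moves: pass options such as non_blocking as keyword arguments,
# not positionally, per the note above
if torch.cuda.is_available():
    on_gpu = packed.to("cuda", non_blocking=True)
    assert on_gpu.data.is_cuda
```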