- class torch.nn.utils.rnn.PackedSequence(data, batch_sizes=None, sorted_indices=None, unsorted_indices=None)[source]¶
Holds the data and list of `batch_sizes` of a packed sequence.
All RNN modules accept packed sequences as inputs.
Instances of this class should never be created manually. They are meant to be instantiated by functions like `pack_padded_sequence()`.
Batch sizes represent the number of elements at each sequence step in the batch, not the varying sequence lengths passed to `pack_padded_sequence()`. For instance, given sequences `abc` and `x`, the `PackedSequence` would contain data `axbc` with `batch_sizes=[2,1,1]`.
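The interleaving described above can be seen directly by packing a small batch by hand; the integer values below are made up for illustration:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Two padded sequences of lengths 3 and 1 (longest first, as
# required by the default enforce_sorted=True).
padded = torch.tensor([[1, 2, 3],
                       [4, 0, 0]])
packed = pack_padded_sequence(padded, lengths=[3, 1], batch_first=True)

# Step 0 sees both sequences; steps 1 and 2 see only the first.
print(packed.data)         # tensor([1, 4, 2, 3])
print(packed.batch_sizes)  # tensor([2, 1, 1])
```

Note that `batch_sizes` records how many sequences are still "alive" at each time step, which is exactly what an RNN needs to skip padding.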
data (Tensor) – Tensor containing packed sequence
batch_sizes (Tensor) – Tensor of integers holding information about the batch size at each sequence step
sorted_indices (Tensor, optional) – Tensor of integers holding how this `PackedSequence` is constructed from sequences.
unsorted_indices (Tensor, optional) – Tensor of integers holding how to recover the original sequences with correct order.
`data` can be on an arbitrary device and of an arbitrary dtype. `sorted_indices` and `unsorted_indices` must be `torch.int64` tensors on the same device as `data`. However, `batch_sizes` should always be a CPU `torch.int64` tensor.
This invariant is maintained throughout the `PackedSequence` class, and by all functions that construct a `PackedSequence` in PyTorch (i.e., they only pass in tensors conforming to this constraint).
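The `batch_sizes` invariant can be checked on any constructed instance; a minimal sketch, assuming only that `pack_padded_sequence` is available:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Arbitrary float data: (batch, seq, features).
padded = torch.randn(2, 3, 5)
packed = pack_padded_sequence(padded, lengths=[3, 2], batch_first=True)

# batch_sizes is always a CPU int64 tensor, regardless of
# the device and dtype of packed.data.
assert packed.batch_sizes.device.type == "cpu"
assert packed.batch_sizes.dtype == torch.int64
```

The same two assertions hold even when `packed.data` lives on a GPU, e.g. after `packed.to("cuda")`.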
- count(value, /)¶
Return number of occurrences of value.
- index(value, start=0, stop=9223372036854775807, /)¶
Return first index of value.
Raises ValueError if the value is not present.
- property is_cuda¶
Returns true if `self.data` is stored on a GPU.
- to(*args, **kwargs)[source]¶
Performs dtype and/or device conversion on self.data.
It has a similar signature to `torch.Tensor.to()`, except optional arguments like `non_blocking` and `copy` should be passed as kwargs, not args, or they will not apply to the index tensors.
If the `self.data` Tensor already has the correct `torch.dtype` and `torch.device`, then `self` is returned. Otherwise, this returns a copy with the desired configuration.
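Both behaviors can be exercised with a dtype conversion; a small sketch, assuming `packed.data` starts out as the default `float32`:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

padded = torch.randn(2, 3)
packed = pack_padded_sequence(padded, lengths=[3, 2], batch_first=True)

# Converting the dtype returns a new PackedSequence wrapping
# converted data.
converted = packed.to(torch.float64)
assert converted.data.dtype == torch.float64

# A no-op conversion (data already float32) returns self.
assert packed.to(torch.float32) is packed
```

Passing `non_blocking=True` as a keyword (not positionally) ensures it is also applied when the index tensors need to move between devices.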