PackedSequence¶
- class torch.nn.utils.rnn.PackedSequence(data, batch_sizes=None, sorted_indices=None, unsorted_indices=None)[source]¶
Holds the data and list of batch_sizes of a packed sequence.
All RNN modules accept packed sequences as inputs.
Note
Instances of this class should never be created manually. They are meant to be instantiated by functions like pack_padded_sequence().
Batch sizes represent the number of elements at each sequence step in the batch, not the varying sequence lengths passed to pack_padded_sequence(). For instance, given data abc and x, the PackedSequence would contain data axbc with batch_sizes=[2,1,1].
- Variables
data (Tensor) – Tensor containing packed sequence
batch_sizes (Tensor) – Tensor of integers holding information about the batch size at each sequence step
sorted_indices (Tensor, optional) – Tensor of integers holding how this PackedSequence is constructed from sequences.
unsorted_indices (Tensor, optional) – Tensor of integers holding how to recover the original sequences with correct order.
- Return type
Self
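The abc/x example above can be reproduced concretely; the sketch below uses an arbitrary numeric encoding (a=1, b=2, c=3, x=4) purely for illustration:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_sequence

# Encode the sequences "abc" and "x" numerically (a=1, b=2, c=3, x=4;
# this letter-to-number mapping is an assumption for demonstration only).
seqs = [torch.tensor([1, 2, 3]), torch.tensor([4])]
padded = pad_sequence(seqs, batch_first=True)  # shape (2, 3), zero-padded
lengths = torch.tensor([3, 1])

packed = pack_padded_sequence(padded, lengths, batch_first=True)
print(packed.data)         # tensor([1, 4, 2, 3])  -> "axbc"
print(packed.batch_sizes)  # tensor([2, 1, 1])
```

Note how the packed data interleaves the sequences step by step (both first elements, then the remaining elements of the longer sequence), which is exactly what batch_sizes=[2,1,1] describes.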
Note
data can be on an arbitrary device and of arbitrary dtype. sorted_indices and unsorted_indices must be torch.int64 tensors on the same device as data.
However, batch_sizes should always be a CPU torch.int64 tensor.
This invariant is maintained throughout the PackedSequence class, and by all functions that construct a PackedSequence in PyTorch (i.e., they only pass in tensors conforming to this constraint).
- count(value, /)¶
Return number of occurrences of value.
- index(value, start=0, stop=9223372036854775807, /)¶
Return first index of value.
Raises ValueError if the value is not present.
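The invariants described in the note above can be checked directly on a constructed instance; a minimal sketch (the input shape and lengths are arbitrary):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# A batch of 2 padded sequences of max length 3, feature size 5 (float32).
x = torch.randn(2, 3, 5)
packed = pack_padded_sequence(
    x, torch.tensor([1, 3]), batch_first=True, enforce_sorted=False
)

# data may be of any dtype; here it is float32.
print(packed.data.dtype)         # torch.float32
# batch_sizes is always a CPU torch.int64 tensor, regardless of data's device.
print(packed.batch_sizes.dtype)  # torch.int64
print(packed.batch_sizes.device) # cpu
# sorted_indices records the length-descending reordering: sequence 1 first.
print(packed.sorted_indices)     # tensor([1, 0])
```

With enforce_sorted=False, pack_padded_sequence() sorts the batch by length internally and records the permutation in sorted_indices/unsorted_indices; with the default enforce_sorted=True these fields may be None.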
- to(dtype: dtype, non_blocking: bool = ..., copy: bool = ...) Self [source]¶
- to(device: Optional[Union[str, device, int]] = ..., dtype: Optional[dtype] = ..., non_blocking: bool = ..., copy: bool = ...) Self
- to(other: Tensor, non_blocking: bool = ..., copy: bool = ...) Self
Perform dtype and/or device conversion on self.data.
It has a similar signature to torch.Tensor.to(), except optional arguments like non_blocking and copy should be passed as kwargs, not args, or they will not apply to the index tensors.
Note
If the self.data Tensor already has the correct torch.dtype and torch.device, then self is returned. Otherwise, returns a copy with the desired configuration.