Stream¶
- class torch.cuda.Stream(device=None, priority=0, **kwargs)[source]¶
Wrapper around a CUDA stream.
A CUDA stream is a linear sequence of execution that belongs to a specific device, independent from other streams. See CUDA semantics for details.
- Parameters
device (torch.device or int, optional) – a device on which to allocate the stream. If device is None (default) or a negative integer, this will use the current device.
priority (int, optional) – priority of the stream, which should be 0 or negative, where negative numbers indicate higher priority. By default, streams have priority 0.
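Below is a minimal sketch, assuming a CUDA-capable device and arbitrary tensor sizes, of constructing a stream with an explicit device and priority and enqueueing work on it via the torch.cuda.stream context manager:

```python
import torch

if torch.cuda.is_available():
    # A higher-priority stream on device 0 (negative numbers = higher priority).
    s = torch.cuda.Stream(device=torch.device("cuda:0"), priority=-1)

    x = torch.randn(1024, 1024, device="cuda:0")

    # Kernels launched inside this context are enqueued on `s`
    # instead of the current (default) stream.
    with torch.cuda.stream(s):
        y = x @ x

    # Order the default stream after `s` before consuming `y` on it.
    torch.cuda.current_stream().wait_stream(s)
    print(y.sum().item())
```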
- query()[source]¶
Check if all the work submitted has been completed.
- Returns
A boolean indicating if all kernels in this stream are completed.
- Return type
bool
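A short sketch, assuming CUDA is available and using an arbitrary workload, of polling a stream with query() while the host remains free:

```python
import torch

if torch.cuda.is_available():
    s = torch.cuda.Stream()
    a = torch.randn(4096, 4096, device="cuda")

    with torch.cuda.stream(s):
        b = a @ a  # asynchronous kernel launch on `s`

    # query() returns False while work enqueued on `s` is still running
    # and True once it has all completed.
    while not s.query():
        pass  # the host could do useful CPU work here instead of spinning

    print("stream finished:", s.query())
```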
- record_event(event=None)[source]¶
Record an event.
- Parameters
event (torch.cuda.Event, optional) – event to record. If not given, a new one will be allocated.
- Returns
Recorded event.
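One common use is timing a region of work on a stream by recording events around it. A sketch, assuming CUDA is available (enable_timing=True is required for elapsed_time):

```python
import torch

if torch.cuda.is_available():
    s = torch.cuda.Stream()
    x = torch.randn(2048, 2048, device="cuda")

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)

    s.record_event(start)          # record `start` on stream `s`
    with torch.cuda.stream(s):
        y = x @ x                  # work enqueued on `s`
    s.record_event(end)            # returns the recorded event

    end.synchronize()              # block the host until `end` has been reached
    print(f"matmul took {start.elapsed_time(end):.3f} ms")
```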
- synchronize()[source]¶
Wait for all the kernels in this stream to complete.
Note
This is a wrapper around cudaStreamSynchronize(): see CUDA Stream documentation for more info.
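A minimal illustration, assuming CUDA is available, of blocking the host until everything enqueued on the stream has run before reading a result back:

```python
import torch

if torch.cuda.is_available():
    s = torch.cuda.Stream()
    x = torch.randn(1024, 1024, device="cuda")

    with torch.cuda.stream(s):
        y = (x @ x).sum()

    s.synchronize()    # host blocks here until the matmul and sum finish
    print(y.item())    # safe: all work on `s` is complete
```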
- wait_event(event)[source]¶
Make all future work submitted to the stream wait for an event.
- Parameters
event (torch.cuda.Event) – an event to wait for.
Note
This is a wrapper around cudaStreamWaitEvent(): see CUDA Stream documentation for more info.
This function returns without waiting for event: only future operations are affected.
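A sketch, assuming CUDA is available, of ordering two streams through an event: a consumer stream waits on an event recorded by a producer stream, without blocking the host:

```python
import torch

if torch.cuda.is_available():
    producer = torch.cuda.Stream()
    consumer = torch.cuda.Stream()

    x = torch.randn(2048, 2048, device="cuda")

    with torch.cuda.stream(producer):
        y = x @ x                    # work enqueued on `producer`
    done = producer.record_event()   # event marking this point in `producer`

    # Returns immediately; only kernels enqueued on `consumer`
    # after this call will wait for `done`.
    consumer.wait_event(done)

    with torch.cuda.stream(consumer):
        z = y + 1                    # guaranteed to see the finished matmul

    consumer.synchronize()
    print(z.sum().item())
```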
- wait_stream(stream)[source][source]¶
Synchronize with another stream.
All future work submitted to this stream will wait until all kernels already submitted to the given stream at the time of this call have completed.
- Parameters
stream (Stream) – a stream to synchronize with.
Note
This function returns without waiting for currently enqueued kernels in stream: only future operations are affected.
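A sketch, assuming CUDA is available, of the common pattern of forking work onto a side stream and then making the default stream wait for it with wait_stream():

```python
import torch

if torch.cuda.is_available():
    side = torch.cuda.Stream()
    x = torch.randn(1024, 1024, device="cuda")

    with torch.cuda.stream(side):
        y = torch.relu(x @ x)        # runs on `side`

    # Returns immediately: future work on the current (default) stream
    # will wait for everything already enqueued on `side`.
    torch.cuda.current_stream().wait_stream(side)

    z = y * 2                        # ordered after the work on `side`
    torch.cuda.synchronize()
    print(z.mean().item())
```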