Stream¶
- class torch.mtia.Stream(device, *, priority)¶
An in-order queue that executes its tasks asynchronously in first-in-first-out (FIFO) order. It can control or synchronize the execution of other Streams, or block the current host thread to ensure correct task sequencing. A combined sketch of the methods on this page follows the example below.
See the in-depth description of CUDA behavior in CUDA semantics for details on the exact semantics that apply to all devices.
- Parameters
device (torch.device, optional) – the desired device for the Stream. If not given, the current accelerator type will be used.
priority (int, optional) – priority of the stream, which should be 0 or negative, where negative numbers indicate higher priority. By default, streams have priority 0.
- Returns
A torch.Stream object.
- Return type
Stream
Example:
>>> s_cuda = torch.Stream(device='cuda')
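A minimal sketch of how the methods below fit together, assuming a CUDA-capable build so that device='cuda' (as in the examples on this page) is valid; the producer/consumer stream names are illustrative only:
>>> import torch
>>> s_producer = torch.Stream(device='cuda')
>>> s_consumer = torch.Stream(device='cuda')
>>> # Mark the current point in the producer stream with an event.
>>> e_sync = s_producer.record_event()
>>> # Work enqueued on the consumer stream after this call waits for e_sync.
>>> s_consumer.wait_event(e_sync)
>>> # Block the host thread until the consumer stream has drained, then confirm.
>>> s_consumer.synchronize()
>>> s_consumer.query()
True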
- query() → bool¶
Check if all the work submitted has been completed.
- Returns
A boolean indicating if all kernels in this stream are completed.
- Return type
bool
Example:
>>> s_cuda = torch.Stream(device='cuda')
>>> s_cuda.query()
True
- record_event(event) → Event¶
Record an event, enqueuing it into the Stream to allow further synchronization from the current point in the FIFO queue.
- Parameters
event (torch.Event, optional) – event to record. If not given, a new one will be allocated.
- Returns
Recorded event.
- Return type
Event
Example:
>>> s_cuda = torch.Stream(device='cuda')
>>> e_cuda = s_cuda.record_event()
- synchronize() → None¶
Wait for all the kernels in this stream to complete.
Example:
>>> s_cuda = torch.Stream(device='cuda')
>>> s_cuda.synchronize()
- wait_event(event) → None¶
Make all future work submitted to the stream wait for an event.
- Parameters
event (torch.Event) – an event to wait for.
Example:
>>> s1_cuda = torch.Stream(device='cuda')
>>> s2_cuda = torch.Stream(device='cuda')
>>> e_cuda = s1_cuda.record_event()
>>> s2_cuda.wait_event(e_cuda)
- wait_stream(stream) → None¶
Synchronize with another stream. All future work submitted to this stream will wait until all kernels already submitted to the given stream are completed (a combined sketch follows the example below).
- Parameters
stream (torch.Stream) – a stream to synchronize with.
Example:
>>> s1_cuda = torch.Stream(device='cuda')
>>> s2_cuda = torch.Stream(device='cuda')
>>> s2_cuda.wait_stream(s1_cuda)
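A minimal sketch of how wait_stream() composes with the host-side synchronization methods above, again assuming a CUDA-capable build to match the examples on this page:
>>> import torch
>>> s1_cuda = torch.Stream(device='cuda')
>>> s2_cuda = torch.Stream(device='cuda')
>>> # Future work on s2_cuda now waits for work already enqueued on s1_cuda.
>>> s2_cuda.wait_stream(s1_cuda)
>>> # Block the host thread until s2_cuda has drained, then confirm completion.
>>> s2_cuda.synchronize()
>>> s2_cuda.query()
True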