graph(cuda_graph, pool=None, stream=None)
Context-manager that captures CUDA work into a torch.cuda.CUDAGraph object for later replay.
See CUDA Graphs for a general introduction, detailed use, and constraints.
cuda_graph (torch.cuda.CUDAGraph) – Graph object used for capture.
pool (optional) – Opaque token (returned by a call to
other_Graph_instance.pool()) hinting this graph’s capture may share memory from the specified pool. See Graph memory management.
stream (torch.cuda.Stream, optional) – If supplied, will be set as the current stream in the context. If not supplied, graph sets its own internal side stream as the current stream in the context.
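A minimal capture-and-replay sketch under these parameters, assuming a CUDA-capable device; the tensor names (static_x, static_y) and the warmup-on-a-side-stream step are illustrative conventions, not part of the API:

```python
import torch

if torch.cuda.is_available():
    static_x = torch.zeros(8, device="cuda")

    # Warm up the work on a side stream before capture.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        static_y = static_x * 2 + 1
    torch.cuda.current_stream().wait_stream(s)

    # Capture: kernels are recorded into g rather than executed.
    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        static_y = static_x * 2 + 1

    # Replay: write new values into the static input tensor,
    # then rerun the captured kernels in place.
    static_x.fill_(3.0)
    g.replay()
    torch.cuda.synchronize()
```

Because replay reuses the memory captured for static_x and static_y, inputs must be copied into the same tensors before each replay rather than passed as fresh allocations.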
For effective memory sharing, if you pass a pool used by a previous capture and the previous capture used an explicit stream argument, you should pass the same stream argument to this capture.
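A sketch of that sharing pattern, assuming a CUDA-capable device: the second capture reuses the first graph's pool via pool=g1.pool() and passes the same explicit stream, and replays happen in capture order. The tensor names are illustrative only:

```python
import torch

if torch.cuda.is_available():
    s = torch.cuda.Stream()
    a = torch.zeros(4, device="cuda")

    # Warm up both ops on the side stream before capturing.
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        b = a + 1
        c = b * 2
    torch.cuda.current_stream().wait_stream(s)

    g1 = torch.cuda.CUDAGraph()
    g2 = torch.cuda.CUDAGraph()

    # First capture, with an explicit stream.
    with torch.cuda.graph(g1, stream=s):
        b = a + 1

    # Second capture shares g1's memory pool and, as advised above,
    # passes the same explicit stream argument.
    with torch.cuda.graph(g2, pool=g1.pool(), stream=s):
        c = b * 2

    # Replay in the same order the graphs were captured.
    a.fill_(2.0)
    g1.replay()
    g2.replay()
    torch.cuda.synchronize()
```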
This API is in beta and may change in future releases.