CUDAGraph

class torch.cuda.CUDAGraph[source]

Wrapper around a CUDA graph.

Warning

This API is in beta and may change in future releases.
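A minimal sketch of typical usage, assuming a CUDA-capable device. Capture is driven through the torch.cuda.graph context manager (which calls this class's capture methods internally) and the captured work is re-executed with replay(). The tensor names static_input and static_output are illustrative placeholders, not part of the API.

    import torch

    g = torch.cuda.CUDAGraph()

    # Placeholder input; the graph replays on whatever data this tensor
    # holds at replay time.
    static_input = torch.zeros(8, device="cuda")

    # Warmup on a side stream before capture.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        static_output = static_input * 2
    torch.cuda.current_stream().wait_stream(s)

    # Capture: torch.cuda.graph calls capture_begin/capture_end internally.
    with torch.cuda.graph(g):
        static_output = static_input * 2

    # Feed new data by copying into the captured input tensor, then replay.
    static_input.copy_(torch.full((8,), 3.0, device="cuda"))
    g.replay()
    # static_output now holds 6.0 in every element.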

capture_begin(pool=None)[source]

Begins capturing CUDA work on the current stream.

Typically, you shouldn't call capture_begin yourself. Use torch.cuda.graph or torch.cuda.make_graphed_callables(), which call capture_begin internally.

Parameters

pool (optional) – Token (returned by graph_pool_handle() or other_Graph_instance.pool()) that hints that this graph may share memory with the indicated pool. See Graph memory management.

capture_end()[source]

Ends CUDA graph capture on the current stream. After capture_end, replay may be called on this instance.

Typically, you shouldn't call capture_end yourself. Use torch.cuda.graph or torch.cuda.make_graphed_callables(), which call capture_end internally. A sketch of manual capture follows below.
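If you do drive capture manually, the sketch below shows one way to pair capture_begin and capture_end, assuming a CUDA-capable device. CUDA graph capture must happen on a non-default stream, which torch.cuda.graph normally arranges for you; here a side stream is used explicitly, and the tensor names are illustrative placeholders.

    import torch

    g = torch.cuda.CUDAGraph()
    static_input = torch.zeros(8, device="cuda")

    # Warmup outside of capture, on a side stream.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        static_output = static_input + 1
    torch.cuda.current_stream().wait_stream(s)
    torch.cuda.synchronize()

    # Capture on the side stream: everything enqueued between
    # capture_begin and capture_end is recorded into the graph.
    with torch.cuda.stream(s):
        g.capture_begin()
        static_output = static_input + 1
        g.capture_end()
    torch.cuda.current_stream().wait_stream(s)

    # The captured work can now be re-executed.
    g.replay()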

pool()[source]

Returns an opaque token representing the id of this graph's memory pool. This id can optionally be passed to another graph's capture_begin, which hints that the other graph may share the same memory pool.
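A sketch of pool sharing, assuming a CUDA-capable device: the token from one graph's pool() is passed as the pool argument when capturing a second graph (here via torch.cuda.graph, which forwards it to capture_begin). The tensor names are illustrative placeholders.

    import torch

    g1 = torch.cuda.CUDAGraph()
    g2 = torch.cuda.CUDAGraph()

    x = torch.zeros(4, device="cuda")

    # Capture the first graph with its own memory pool.
    with torch.cuda.graph(g1):
        y = x + 1

    # Hint that the second graph may share g1's memory pool.
    with torch.cuda.graph(g2, pool=g1.pool()):
        z = y * 2

    # Graphs that share a pool should be replayed in the order
    # in which they were captured.
    g1.replay()
    g2.replay()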

replay()[source]

Replays the CUDA work captured by this graph.

reset()[source]

Deletes the graph currently held by this instance.
