torch.mtia
The MTIA backend is implemented out of tree; only its interfaces are defined here.
This package provides an interface for accessing the MTIA backend in Python.
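A minimal sketch of the device-management calls summarized in the table below (init, device_count, set_device, current_device, get_device_capability). The argument conventions are assumed to mirror the torch.cuda counterparts and are not guaranteed by this page.

```python
import torch

# Sketch only: assumes an MTIA-enabled PyTorch build with an attached device.
if torch.mtia.is_available():
    torch.mtia.init()                     # explicit init (state is otherwise initialized lazily)
    count = torch.mtia.device_count()     # number of visible MTIA devices
    torch.mtia.set_device(0)              # make device 0 the current device
    idx = torch.mtia.current_device()     # index of the currently selected device
    major, minor = torch.mtia.get_device_capability(idx)
    print(f"{count} MTIA device(s); device {idx} has capability {major}.{minor}")
else:
    print("MTIA is not available in this build/runtime")
```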
StreamContext | Context-manager that selects a given stream.
current_device | Return the index of the currently selected device.
current_stream | Return the currently selected Stream for a given device.
default_stream | Return the default Stream for a given device.
device_count | Return the number of MTIA devices available.
init | Initialize PyTorch's MTIA state.
is_available | Return true if an MTIA device is available.
is_initialized | Return whether PyTorch's MTIA state has been initialized.
memory_stats | Return a dictionary of MTIA memory allocator statistics for a given device.
get_device_capability | Return the capability of a given device as a tuple of (major version, minor version).
empty_cache | Empty the MTIA device cache.
record_memory_history | Enable/disable the memory profiler on the MTIA allocator.
snapshot | Return a dictionary of MTIA memory allocator history.
set_device | Set the current device.
set_stream | Set the current stream. This is a wrapper API to set the stream.
stream | Wrap around the Context-manager StreamContext that selects a given stream.
synchronize | Wait for all jobs in all streams on an MTIA device to complete.
device | Context-manager that changes the selected device.
set_rng_state | Set the random number generator state.
get_rng_state | Return the random number generator state as a ByteTensor.
DeferredMtiaCallError |
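The stream helpers in the table above (default_stream, stream/StreamContext, synchronize) can be combined as sketched below. The "mtia" tensor device string and the no-argument defaults are assumptions, not guaranteed by this page.

```python
import torch

# Sketch only: run work on the current device's default stream, then wait for it.
if torch.mtia.is_available():
    s = torch.mtia.default_stream()         # default Stream of the current device
    with torch.mtia.stream(s):              # StreamContext: make `s` the current stream
        x = torch.ones(8, device="mtia")    # the "mtia" device string is an assumption
        y = x * 2                           # work enqueued on stream `s`
    torch.mtia.synchronize()                # wait for all streams on the device to finish
```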
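The memory and RNG helpers support lightweight introspection and reproducibility. This sketch assumes the no-argument defaults operate on the currently selected device.

```python
import torch

# Sketch only: inspect the allocator and save/restore the device RNG state.
if torch.mtia.is_available():
    stats = torch.mtia.memory_stats()       # allocator statistics for the current device
    history = torch.mtia.snapshot()         # allocator history recorded so far
    torch.mtia.empty_cache()                # release cached blocks held by the allocator

    rng = torch.mtia.get_rng_state()        # RNG state as a ByteTensor
    # ... run stochastic work on the device ...
    torch.mtia.set_rng_state(rng)           # restore the saved state for reproducibility
```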