torch.xpu

This package introduces support for the XPU backend, specifically tailored for Intel GPU optimization.

This package is lazily initialized, so you can always import it and use is_available() to determine if your system supports XPU.
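
A minimal sketch of the lazy-initialization pattern: importing torch.xpu is always safe, and is_available() gates any device work.

    import torch

    # Safe to import and query even on machines without an Intel GPU;
    # the backend is only initialized when a device is actually used.
    if torch.xpu.is_available():
        x = torch.ones(3, device="xpu")    # first device op triggers lazy init
        print(torch.xpu.is_initialized())  # True once the backend is in use
    else:
        print("XPU backend not available; falling back to CPU")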

StreamContext

Context-manager that selects a given stream.

current_device

Return the index of the currently selected device.

current_stream

Return the currently selected Stream for a given device.

device

Context-manager that changes the selected device.

device_count

Return the number of XPU devices available.

device_of

Context-manager that changes the current device to that of a given object.
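
A sketch of how the device-selection helpers compose, assuming at least one XPU device is present:

    import torch

    if torch.xpu.is_available():
        print(torch.xpu.device_count())    # number of visible XPU devices
        print(torch.xpu.current_device())  # 0 by default

        # Temporarily switch devices with the device context manager.
        with torch.xpu.device(0):
            t = torch.zeros(4, device="xpu")

        # device_of selects the device that a given tensor lives on.
        with torch.xpu.device_of(t):
            assert torch.xpu.current_device() == t.device.index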

get_arch_list

Return the list of XPU architectures this library was compiled for.

get_device_capability

Get the XPU capability of a device.

get_device_name

Get the name of a device.

get_device_properties

Get the properties of a device.
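
A short sketch of the device-introspection calls; the exact property fields depend on the driver and the PyTorch build, and total_memory is an assumed field mirroring the torch.cuda properties object:

    import torch

    if torch.xpu.is_available():
        print(torch.xpu.get_device_name(0))        # human-readable device name
        print(torch.xpu.get_device_capability(0))  # capability of device 0
        props = torch.xpu.get_device_properties(0)
        print(props.total_memory)                  # total device memory in bytes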

get_gencode_flags

Return the XPU AOT (ahead-of-time) build flags this library was compiled with.

init

Initialize PyTorch's XPU state.

is_available

Return a bool indicating if XPU is currently available.

is_initialized

Return whether PyTorch's XPU state has been initialized.

set_device

Set the current device.

set_stream

Set the current stream. This is a wrapper API to set the stream.

stream

Wrap the context manager StreamContext, which selects a given stream.

synchronize

Wait for all kernels in all streams on an XPU device to complete.
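
A minimal sketch of how the stream helpers fit together; kernels run asynchronously, and synchronize() blocks the host until queued work finishes:

    import torch

    if torch.xpu.is_available():
        s = torch.xpu.Stream()

        # Enqueue work on a side stream via the stream() context manager.
        with torch.xpu.stream(s):
            y = torch.randn(1024, device="xpu").sum()

        # Wait for all kernels in all streams on the device to complete.
        torch.xpu.synchronize()
        print(y.item())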

Random Number Generator

get_rng_state

Return the random number generator state of the specified GPU as a ByteTensor.

get_rng_state_all

Return a list of ByteTensors representing the random number states of all devices.

initial_seed

Return the current random seed of the current GPU.

manual_seed

Set the seed for generating random numbers for the current GPU.

manual_seed_all

Set the seed for generating random numbers on all GPUs.

seed

Set the seed for generating random numbers to a random number for the current GPU.

seed_all

Set the seed for generating random numbers to a random number on all GPUs.

set_rng_state

Set the random number generator state of the specified GPU.

set_rng_state_all

Set the random number generator state of all devices.
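
A sketch of reproducible XPU runs using the seeding and state helpers:

    import torch

    if torch.xpu.is_available():
        torch.xpu.manual_seed_all(42)      # seed every XPU device
        state = torch.xpu.get_rng_state()  # snapshot the generator state

        a = torch.randn(3, device="xpu")
        torch.xpu.set_rng_state(state)     # rewind to the snapshot
        b = torch.randn(3, device="xpu")
        assert torch.equal(a, b)           # identical draws after the restore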

Streams and events

Event

Wrapper around an XPU event.

Stream

Wrapper around an XPU stream.
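
A sketch of event-based timing, assuming torch.xpu.Event mirrors the torch.cuda.Event interface (enable_timing, record, elapsed_time):

    import torch

    if torch.xpu.is_available():
        start = torch.xpu.Event(enable_timing=True)
        end = torch.xpu.Event(enable_timing=True)

        start.record()
        z = torch.randn(4096, 4096, device="xpu") @ torch.randn(4096, 4096, device="xpu")
        end.record()

        torch.xpu.synchronize()               # ensure both events have completed
        print(start.elapsed_time(end), "ms")  # elapsed time in milliseconds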

Memory management

empty_cache

Release all unoccupied cached memory currently held by the caching allocator so that it can be used by other XPU applications.

max_memory_allocated

Return the maximum GPU memory occupied by tensors in bytes for a given device.

max_memory_reserved

Return the maximum GPU memory managed by the caching allocator in bytes for a given device.

memory_allocated

Return the current GPU memory occupied by tensors in bytes for a given device.

memory_reserved

Return the current GPU memory managed by the caching allocator in bytes for a given device.

memory_stats

Return a dictionary of XPU memory allocator statistics for a given device.

memory_stats_as_nested_dict

Return the result of memory_stats() as a nested dictionary.

reset_accumulated_memory_stats

Reset the "accumulated" (historical) stats tracked by the XPU memory allocator.

reset_peak_memory_stats

Reset the "peak" stats tracked by the XPU memory allocator.
