# torch.utils.cpp_extension¶

torch.utils.cpp_extension.CppExtension(name, sources, *args, **kwargs)[source]

Creates a setuptools.Extension for C++.

Convenience method that creates a setuptools.Extension with the bare minimum (but often sufficient) arguments to build a C++ extension.

All arguments are forwarded to the setuptools.Extension constructor.

Example

>>> from setuptools import setup
>>> from torch.utils.cpp_extension import BuildExtension, CppExtension
>>> setup(
        name='extension',
        ext_modules=[
            CppExtension(
                name='extension',
                sources=['extension.cpp'],
                extra_compile_args=['-g']),
        ],
        cmdclass={
            'build_ext': BuildExtension
        })

torch.utils.cpp_extension.CUDAExtension(name, sources, *args, **kwargs)[source]

Creates a setuptools.Extension for CUDA/C++.

Convenience method that creates a setuptools.Extension with the bare minimum (but often sufficient) arguments to build a CUDA/C++ extension. This includes the CUDA include path, library path and runtime library.

All arguments are forwarded to the setuptools.Extension constructor.

Example

>>> from setuptools import setup
>>> from torch.utils.cpp_extension import BuildExtension, CUDAExtension
>>> setup(
        name='cuda_extension',
        ext_modules=[
            CUDAExtension(
                name='cuda_extension',
                sources=['extension.cpp', 'extension_kernel.cu'],
                extra_compile_args={'cxx': ['-g'],
                                    'nvcc': ['-O2']})
        ],
        cmdclass={
            'build_ext': BuildExtension
        })

torch.utils.cpp_extension.BuildExtension(dist, **kw)[source]

A custom setuptools build extension.

This setuptools.build_ext subclass takes care of passing the minimum required compiler flags (e.g. -std=c++11) as well as mixed C++/CUDA compilation (and support for CUDA files in general).

When using BuildExtension, it is allowed to supply a dictionary for extra_compile_args (rather than the usual list) that maps from languages (cxx or nvcc) to a list of additional compiler flags to supply to the compiler. This makes it possible to supply different flags to the C++ and CUDA compiler during mixed compilation.
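As a sketch of the behavior described above (this is an illustration, not PyTorch's actual implementation), the dict form of extra_compile_args can be thought of as being resolved per compiler, while the plain-list form applies to both compilers:

```python
# Sketch of how BuildExtension could resolve extra_compile_args for one
# language ('cxx' or 'nvcc'). Not the actual PyTorch code.
def resolve_flags(extra_compile_args, language):
    if isinstance(extra_compile_args, dict):
        # Dict form: look up the flags for this specific compiler.
        return extra_compile_args.get(language, [])
    # Plain-list form: the same flags go to every compiler.
    return list(extra_compile_args)

mixed = {'cxx': ['-g'], 'nvcc': ['-O2']}
print(resolve_flags(mixed, 'cxx'))    # ['-g']
print(resolve_flags(mixed, 'nvcc'))   # ['-O2']
print(resolve_flags(['-O3'], 'cxx'))  # ['-O3']
```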

torch.utils.cpp_extension.load(name, sources, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False)[source]

Loads a PyTorch C++ extension just-in-time (JIT).

To load an extension, a Ninja build file is emitted, which is used to compile the given sources into a dynamic library. This library is subsequently loaded into the current Python process as a module and returned from this function, ready for use.

By default, the directory to which the build file is emitted and the resulting library compiled to is <tmp>/torch_extensions/<name>, where <tmp> is the temporary folder on the current platform and <name> the name of the extension. This location can be overridden in two ways. First, if the TORCH_EXTENSIONS_DIR environment variable is set, it replaces <tmp>/torch_extensions and all extensions will be compiled into subfolders of this directory. Second, if the build_directory argument to this function is supplied, it overrides the entire path, i.e. the library will be compiled into that folder directly.
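The build-directory resolution described above can be sketched as follows (an illustration under the stated rules, not PyTorch's actual code; the paths are made up):

```python
import os
import tempfile

# Sketch of the build-directory resolution rules: an explicit build_directory
# overrides everything; otherwise TORCH_EXTENSIONS_DIR replaces the
# <tmp>/torch_extensions root; otherwise the platform temp dir is used.
def build_dir(name, build_directory=None):
    if build_directory is not None:
        return build_directory
    root = os.environ.get('TORCH_EXTENSIONS_DIR')
    if root is None:
        root = os.path.join(tempfile.gettempdir(), 'torch_extensions')
    return os.path.join(root, name)

os.environ['TORCH_EXTENSIONS_DIR'] = '/opt/torch_extensions'
print(build_dir('my_ext'))                                # /opt/torch_extensions/my_ext
print(build_dir('my_ext', build_directory='/tmp/build'))  # /tmp/build
```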

To compile the sources, the default system compiler (c++) is used, which can be overridden by setting the CXX environment variable. To pass additional arguments to the compilation process, extra_cflags or extra_ldflags can be provided. For example, to compile your extension with optimizations, pass extra_cflags=['-O3']. You can also use extra_cflags to pass further include directories.
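A rough sketch of how a compile command for one source file could be assembled from these inputs (an assumption for illustration, not the actual implementation; the flag values are examples):

```python
import os

# Hypothetical assembly of a compile command: CXX overrides the default
# system compiler 'c++'; extra_include_paths and extra_cflags are appended.
def compile_command(source, extra_cflags=None, extra_include_paths=None):
    cxx = os.environ.get('CXX', 'c++')
    cmd = [cxx, '-std=c++11', '-fPIC', '-c', source]
    for path in (extra_include_paths or []):
        cmd.append('-I' + path)
    cmd += extra_cflags or []
    return cmd

print(compile_command('extension.cpp',
                      extra_cflags=['-O3'],
                      extra_include_paths=['/path/to/headers']))
```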

CUDA support with mixed compilation is provided. Simply pass CUDA source files (.cu or .cuh) along with other sources. Such files will be detected and compiled with nvcc rather than the C++ compiler. This includes passing the CUDA lib64 directory as a library directory, and linking cudart. You can pass additional flags to nvcc via extra_cuda_cflags, just like with extra_cflags for C++. Various heuristics for finding the CUDA install directory are used, which usually work fine. If not, setting the CUDA_HOME environment variable is the safest option.
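The CUDA_HOME fallback can be pictured roughly like this (an illustrative sketch of the heuristic, not the actual detection logic; the paths are examples):

```python
import os

# Sketch of the CUDA-home heuristic: prefer an explicit CUDA_HOME, then fall
# back to a common default install location if it exists on disk.
def find_cuda_home():
    cuda_home = os.environ.get('CUDA_HOME')
    if cuda_home is None and os.path.exists('/usr/local/cuda'):
        cuda_home = '/usr/local/cuda'
    return cuda_home

os.environ['CUDA_HOME'] = '/opt/cuda-9.0'
print(find_cuda_home())  # /opt/cuda-9.0
print(os.path.join(find_cuda_home(), 'bin', 'nvcc'))
```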

Parameters:

* name – The name of the extension to build. This MUST be the same as the name of the pybind11 module!
* sources – A list of relative or absolute paths to C++ source files.
* extra_cflags – optional list of compiler flags to forward to the build.
* extra_cuda_cflags – optional list of compiler flags to forward to nvcc when building CUDA sources.
* extra_ldflags – optional list of linker flags to forward to the build.
* extra_include_paths – optional list of include directories to forward to the build.
* build_directory – optional path to use as build workspace.
* verbose – If True, turns on verbose logging of load steps.

Returns: The loaded PyTorch extension as a Python module.

Example

>>> from torch.utils.cpp_extension import load
>>> module = load(
        name='extension',
        sources=['extension.cpp', 'extension_kernel.cu'],
        extra_cflags=['-O2'],
        verbose=True)

torch.utils.cpp_extension.include_paths(cuda=False)[source]

Get the include paths required to build a C++ or CUDA extension.

Parameters: cuda – If True, includes CUDA-specific include paths.

Returns: A list of include path strings.

torch.utils.cpp_extension.check_compiler_abi_compatibility(compiler)[source]

Verifies that the given compiler is ABI-compatible with PyTorch.

Parameters: compiler (str) – The compiler executable name to check (e.g. g++). Must be executable in a shell process.

Returns: False if the compiler is (likely) ABI-incompatible with PyTorch, else True.

torch.utils.cpp_extension.verify_ninja_availability()[source]

Returns True if the ninja build system is available on the system.