Define TORCH_LIBRARY_IMPL¶
Defined in File library.h
Define Documentation¶
TORCH_LIBRARY_IMPL(ns, k, m)¶
Macro for defining a function that will be run at static initialization time to define operator overrides for dispatch key k (must be an unqualified enum member of c10::DispatchKey) in namespace ns (must be a valid C++ identifier, no quotes).

Use this macro when you want to implement a preexisting set of custom operators on a new dispatch key (e.g., you want to provide CUDA implementations of already existing operators). One common usage pattern is to use TORCH_LIBRARY() to define schemas for all new operators you want to define, and then use several TORCH_LIBRARY_IMPL() blocks to provide implementations of the operators for CPU, CUDA and Autograd, as in the sketch below.
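A minimal sketch of that pattern, assuming a hypothetical myops::myadd operator with placeholder CPU and CUDA kernels (the names myadd, myadd_cpu, and myadd_cuda are illustrative only, not part of any existing library):

#include <torch/library.h>
#include <ATen/ATen.h>

// Placeholder implementations; bodies omitted in this sketch.
at::Tensor myadd_cpu(const at::Tensor& a, const at::Tensor& b);
at::Tensor myadd_cuda(const at::Tensor& a, const at::Tensor& b);

// Declare the operator schema once with TORCH_LIBRARY() ...
TORCH_LIBRARY(myops, m) {
  m.def("myadd(Tensor a, Tensor b) -> Tensor");
}

// ... then register one backend-specific kernel per TORCH_LIBRARY_IMPL() block.
TORCH_LIBRARY_IMPL(myops, CPU, m) {
  m.impl("myadd", myadd_cpu);
}

TORCH_LIBRARY_IMPL(myops, CUDA, m) {
  m.impl("myadd", myadd_cuda);
}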
In some cases, you need to define something that applies to all namespaces, not just one namespace (usually a fallback). In that case, use the reserved namespace _, e.g.,
TORCH_LIBRARY_IMPL(_, XLA, m) {
  m.fallback(xla_fallback);
}
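Because a fallback has to accept arbitrary operators, one common way to register it is as a boxed kernel wrapped with torch::CppFunction::makeFromBoxedFunction(). A minimal sketch, assuming the fallback simply rejects unsupported operators (xla_fallback here is only an illustrative name; a real fallback would typically redispatch or convert its inputs instead):

#include <torch/library.h>

// Boxed kernel: receives the operator handle plus its arguments on the stack.
// Illustrative behavior only: report that the operator is unsupported.
void xla_fallback(const c10::OperatorHandle& op, torch::jit::Stack* stack) {
  TORCH_CHECK(false, "operator ", op.schema().name(), " is not implemented for XLA");
}

TORCH_LIBRARY_IMPL(_, XLA, m) {
  m.fallback(torch::CppFunction::makeFromBoxedFunction<&xla_fallback>());
}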
Example usage:
TORCH_LIBRARY_IMPL(myops, CPU, m) {
  // m is a torch::Library; methods on it will define
  // CPU implementations of operators in the myops namespace.
  // It is NOT valid to call torch::Library::def()
  // in this context.
  m.impl("add", add_cpu_impl);
}
If add_cpu_impl is an overloaded function, use a static_cast to specify which overload you want (by providing the full type).
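For instance, assuming two hypothetical overloads of add_cpu_impl (the signatures below are illustrative only), the cast might look like this:

#include <torch/library.h>
#include <ATen/ATen.h>

// Hypothetical overloads; shown only to illustrate how the cast disambiguates them.
at::Tensor add_cpu_impl(const at::Tensor& a, const at::Tensor& b);
at::Tensor add_cpu_impl(const at::Tensor& a, const at::Scalar& b);

TORCH_LIBRARY_IMPL(myops, CPU, m) {
  // Spell out the full function type to select the two-Tensor overload.
  m.impl("add",
         static_cast<at::Tensor (*)(const at::Tensor&, const at::Tensor&)>(
             add_cpu_impl));
}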