
torch.linalg.eig

torch.linalg.eig(A, *, out=None)

Computes the eigenvalue decomposition of a square matrix if it exists.

Letting $\mathbb{K}$ be $\mathbb{R}$ or $\mathbb{C}$, the eigenvalue decomposition of a square matrix $A \in \mathbb{K}^{n \times n}$ (if it exists) is defined as

$A = V \operatorname{diag}(\Lambda) V^{-1} \qquad V \in \mathbb{C}^{n \times n},\ \Lambda \in \mathbb{C}^n$

This decomposition exists if and only if A is diagonalizable. A sufficient condition for this is that all its eigenvalues are distinct.

Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if A is a batch of matrices then the output has the same batch dimensions.
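
For a batched input of shape (*, n, n), eigenvalues has shape (*, n) and eigenvectors has shape (*, n, n). A minimal sketch illustrating the shapes and dtypes (the batch size of 4 and the random values are arbitrary choices for this illustration):

>>> import torch
>>> A = torch.randn(4, 3, 3, dtype=torch.float64)  # batch of four 3x3 real matrices
>>> L, V = torch.linalg.eig(A)
>>> L.shape, V.shape
(torch.Size([4, 3]), torch.Size([4, 3, 3]))
>>> L.dtype  # complex even though A is real
torch.complex128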

Note

The eigenvalues and eigenvectors of a real matrix may be complex.
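
For instance, a real rotation by 90 degrees has no real eigenvalues, so the result is returned in a complex dtype. A minimal sketch (the choice of matrix is only an illustration):

>>> import torch
>>> R = torch.tensor([[0., -1.], [1., 0.]], dtype=torch.float64)  # real 90-degree rotation
>>> L, V = torch.linalg.eig(R)
>>> L.dtype  # its eigenvalues are the complex conjugate pair +/- 1j, so the output is complex
torch.complex128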

Note

When inputs are on a CUDA device, this function synchronizes that device with the CPU.

Warning

This function assumes that A is diagonalizable (for example, when all the eigenvalues are different). If it is not diagonalizable, the returned eigenvalues will be correct but $A \neq V \operatorname{diag}(\Lambda) V^{-1}$.
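
For instance, a 2x2 Jordan block has a repeated eigenvalue and is not diagonalizable: the eigenvalues come back correctly, but the eigenvector matrix is numerically singular and cannot be inverted to reconstruct A. A minimal sketch:

>>> import torch
>>> A = torch.tensor([[1., 1.], [0., 1.]], dtype=torch.float64)  # Jordan block: not diagonalizable
>>> L, V = torch.linalg.eig(A)
>>> torch.allclose(L, torch.ones(2, dtype=torch.complex128))  # the repeated eigenvalue 1 is still returned correctly
True
>>> detV = torch.linalg.det(V).abs()  # (numerically) zero: V is not invertible, so V diag(L) V^{-1} cannot equal A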

Warning

The eigenvectors of a matrix are not unique, nor are they continuous with respect to A. Due to this lack of uniqueness, different hardware and software may compute different eigenvectors.

This non-uniqueness is caused by the fact that multiplying an eigenvector by a non-zero number produces another valid eigenvector of the matrix. In this implementation, the returned eigenvectors are normalized to have norm 1 and largest real component.
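
Both points can be checked numerically. The sketch below uses a random matrix (diagonalizable with probability 1); the scale factor 2j is an arbitrary choice:

>>> import torch
>>> A = torch.randn(3, 3, dtype=torch.complex128)
>>> L, V = torch.linalg.eig(A)
>>> torch.allclose(torch.linalg.vector_norm(V, dim=-2), torch.ones(3, dtype=torch.float64))  # unit-norm columns
True
>>> v = 2j * V[:, 0]                 # rescale an eigenvector by an arbitrary non-zero scalar
>>> torch.allclose(A @ v, L[0] * v)  # it is still a valid eigenvector for the same eigenvalue
True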

Warning

Gradients computed using V will only be finite when A does not have repeated eigenvalues. Furthermore, if the distance between any two eigenvalues is close to zero, the gradient will be numerically unstable, as it depends on the eigenvalues $\lambda_i$ through the computation of $\frac{1}{\min_{i \neq j} \lambda_i - \lambda_j}$.
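
When only the eigenvalues are needed downstream, torch.linalg.eigvals() (see below) avoids this issue, since its gradients are always numerically stable. A minimal autograd sketch; the loss here is an arbitrary real-valued function of the eigenvalues:

>>> import torch
>>> A = torch.randn(3, 3, dtype=torch.float64, requires_grad=True)
>>> loss = torch.linalg.eigvals(A).abs().sum()  # real-valued scalar built from the eigenvalues only
>>> loss.backward()
>>> A.grad.shape
torch.Size([3, 3])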

See also

torch.linalg.eigvals() computes only the eigenvalues. Unlike torch.linalg.eig(), the gradients of eigvals() are always numerically stable.

torch.linalg.eigh() for a (faster) function that computes the eigenvalue decomposition for Hermitian and symmetric matrices.

torch.linalg.svd() for a function that computes another type of spectral decomposition that works on matrices of any shape.

torch.linalg.qr() for another (much faster) decomposition that works on matrices of any shape.

Parameters

A (Tensor) – tensor of shape (*, n, n) where * is zero or more batch dimensions consisting of diagonalizable matrices.

Keyword Arguments

out (tuple, optional) – output tuple of two tensors. Ignored if None. Default: None.

Returns

A named tuple (eigenvalues, eigenvectors) which corresponds to $\Lambda$ and $V$ above.

eigenvalues and eigenvectors will always be complex-valued, even when A is real. The eigenvectors will be given by the columns of eigenvectors.

Examples:

>>> A = torch.randn(2, 2, dtype=torch.complex128)
>>> A
tensor([[ 0.9828+0.3889j, -0.4617+0.3010j],
        [ 0.1662-0.7435j, -0.6139+0.0562j]], dtype=torch.complex128)
>>> L, V = torch.linalg.eig(A)
>>> L
tensor([ 1.1226+0.5738j, -0.7537-0.1286j], dtype=torch.complex128)
>>> V
tensor([[ 0.9218+0.0000j,  0.1882-0.2220j],
        [-0.0270-0.3867j,  0.9567+0.0000j]], dtype=torch.complex128)
>>> torch.dist(V @ torch.diag(L) @ torch.linalg.inv(V), A)
tensor(7.7119e-16, dtype=torch.float64)

>>> A = torch.randn(3, 2, 2, dtype=torch.float64)
>>> L, V = torch.linalg.eig(A)
>>> torch.dist(V @ torch.diag_embed(L) @ torch.linalg.inv(V), A)
tensor(3.2841e-16, dtype=torch.float64)
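
The columns of eigenvectors can also be verified directly, without inverting V, through the identity $A V = V \operatorname{diag}(\Lambda)$; a short sketch:

>>> A = torch.randn(2, 2, dtype=torch.complex128)
>>> L, V = torch.linalg.eig(A)
>>> torch.allclose(A @ V, V @ torch.diag(L))  # each column of V is an eigenvector of A
True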
