torch.linalg.svd(A, full_matrices=True, *, out=None) -> (Tensor, Tensor, Tensor)
Computes the singular value decomposition (SVD) of a matrix.
Letting 𝕂 be ℝ or ℂ, the full SVD of a matrix A ∈ 𝕂^{m×n}, if k = min(m, n), is defined as

    A = U diag(S) V^H,    U ∈ 𝕂^{m×m}, S ∈ ℝ^k, V ∈ 𝕂^{n×n}

where diag(S) ∈ 𝕂^{m×n}, V^H is the conjugate transpose when V is complex, and the transpose when V is real-valued. The matrices U, V (and thus V^H) are orthogonal in the real case, and unitary in the complex case.
When m > n (resp. m < n) we can drop the last m - n (resp. n - m) columns of U (resp. V) to form the reduced SVD:

    A = U diag(S) V^H,    U ∈ 𝕂^{m×k}, S ∈ ℝ^k, V ∈ 𝕂^{n×k}

where diag(S) ∈ 𝕂^{k×k}. In this case, U and V also have orthonormal columns.
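A quick way to see the shape difference between the full and reduced SVD. This is a sketch using NumPy's np.linalg.svd, which exposes the same full_matrices switch as the function documented here:

```python
import numpy as np

m, n = 5, 3                      # a tall matrix, so k = min(m, n) = 3
A = np.random.randn(m, n)

# Full SVD: U is m x m, Vh is n x n
U, S, Vh = np.linalg.svd(A, full_matrices=True)
print(U.shape, S.shape, Vh.shape)        # (5, 5) (3,) (3, 3)

# Reduced SVD: the last m - n columns of U are dropped
U_r, S_r, Vh_r = np.linalg.svd(A, full_matrices=False)
print(U_r.shape, S_r.shape, Vh_r.shape)  # (5, 3) (3,) (3, 3)

# Both reconstruct A; the full case uses only the first k columns of U
assert np.allclose(A, U[:, :n] @ np.diag(S) @ Vh)
assert np.allclose(A, U_r @ np.diag(S_r) @ Vh_r)
```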
Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if A is a batch of matrices then the output has the same batch dimensions.
The returned decomposition is a named tuple (U, S, Vh) which corresponds to U, S, V^H above.
The singular values are returned in descending order.
full_matrices chooses between the full (default) and reduced SVD.
Differences with numpy.linalg.svd:
Unlike numpy.linalg.svd, this function always returns a tuple of three tensors and it does not support the compute_uv argument. Instead of compute_uv=False, please use torch.linalg.svdvals(), which computes only the singular values.
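For contrast, here is NumPy's compute_uv=False path, which returns just the singular values as a single array; in torch the equivalent result would come from torch.linalg.svdvals(A). The sketch below uses NumPy only, since the point is the API difference:

```python
import numpy as np

A = np.random.randn(4, 3)

# NumPy: compute_uv=False returns a single array of singular values
S_only = np.linalg.svd(A, compute_uv=False)

# They match the S from the full decomposition
_, S, _ = np.linalg.svd(A)
assert np.allclose(S_only, S)
# In PyTorch the same values would come from torch.linalg.svdvals(A),
# since torch.linalg.svd() always returns all three tensors.
```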
When full_matrices=True, the gradients with respect to U[…, :, min(m, n):] and Vh[…, min(m, n):, :] will be ignored, as those vectors can be arbitrary bases of the corresponding subspaces.
The returned tensors U and V are not unique, nor are they continuous with respect to A. Due to this lack of uniqueness, different hardware and software may compute different singular vectors.
This non-uniqueness is caused by the fact that multiplying any pair of singular vectors (u_k, v_k) by -1 in the real case, or by e^{iφ} with φ ∈ ℝ in the complex case, produces another two valid singular vectors of the matrix. This non-uniqueness problem is even worse when the matrix has repeated singular values. In this case, one may multiply the associated singular vectors of U and V spanning the subspace by a rotation matrix, and the resulting vectors will span the same subspace.
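The sign-flip form of this non-uniqueness is easy to demonstrate numerically (sketched with NumPy): negating one left singular vector together with the matching right singular vector leaves the reconstruction unchanged.

```python
import numpy as np

A = np.random.randn(4, 3)
U, S, Vh = np.linalg.svd(A, full_matrices=False)

# Flip the sign of the first left singular vector *and* the matching
# right singular vector: the product U diag(S) Vh is unchanged.
U2 = U.copy()
Vh2 = Vh.copy()
U2[:, 0] *= -1
Vh2[0, :] *= -1

# (U2, S, Vh2) is an equally valid SVD of A
assert np.allclose(A, U2 @ np.diag(S) @ Vh2)
```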
Gradients computed using U or Vh will only be finite when A does not have zero as a singular value or repeated singular values. Furthermore, if the distance between any two singular values is close to zero, the gradient will be numerically unstable, as it depends on the singular values σ_i through the computation of 1 / (σ_i² − σ_j²). The gradient will also be numerically unstable when A has small singular values, as it also depends on the computation of 1 / σ_i.
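To see why nearly repeated singular values are a problem, consider the pairwise factor 1 / (σ_i² − σ_j²) that appears in the backward formula. This is an illustrative sketch, not PyTorch's actual backward implementation:

```python
# The SVD backward pass involves factors 1 / (s_i**2 - s_j**2) between
# pairs of distinct singular values. When two of them nearly coincide,
# the factor explodes, which is why the gradient is unstable there.
def offdiag_factor(s_i, s_j):
    return 1.0 / (s_i**2 - s_j**2)

print(offdiag_factor(2.0, 1.0))         # well separated: factor ~ 0.33
print(offdiag_factor(1.0, 1.0 - 1e-8))  # nearly repeated: factor ~ 5e7
```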
torch.linalg.eig() for a function that computes another type of spectral decomposition of a matrix. The eigendecomposition works only on square matrices.
torch.linalg.eigh() for a (faster) function that computes the eigenvalue decomposition for Hermitian and symmetric matrices.
torch.linalg.qr() for another (much faster) decomposition that works on general matrices.
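The SVD and the eigendecomposition are closely related on symmetric matrices: the singular values are the absolute values of the eigenvalues. A NumPy sketch of that relationship:

```python
import numpy as np

B = np.random.randn(4, 4)
A = B + B.T                           # make a symmetric matrix

S = np.linalg.svd(A, compute_uv=False)  # singular values, descending
w = np.linalg.eigvalsh(A)               # eigenvalues, ascending

# For symmetric A, singular values = |eigenvalues| (up to ordering)
assert np.allclose(np.sort(S), np.sort(np.abs(w)))
```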
- Keyword Arguments
out (tuple, optional) – output tuple of three tensors. Ignored if None.
- Returns
A named tuple (U, S, Vh) which corresponds to U, S, V^H above.
S will always be real-valued, even when A is complex. It will also be ordered in descending order.
U and Vh will have the same dtype as A.
>>> a = torch.randn(5, 3)
>>> a
tensor([[-0.3357, -0.2987, -1.1096],
        [ 1.4894,  1.0016, -0.4572],
        [-1.9401,  0.7437,  2.0968],
        [ 0.1515,  1.3812,  1.5491],
        [-1.8489, -0.5907, -2.5673]])
>>>
>>> # reconstruction in the full_matrices=False case
>>> u, s, vh = torch.linalg.svd(a, full_matrices=False)
>>> u.shape, s.shape, vh.shape
(torch.Size([5, 3]), torch.Size([3]), torch.Size([3, 3]))
>>> torch.dist(a, u @ torch.diag(s) @ vh)
tensor(1.0486e-06)
>>>
>>> # reconstruction in the full_matrices=True case
>>> u, s, vh = torch.linalg.svd(a)
>>> u.shape, s.shape, vh.shape
(torch.Size([5, 5]), torch.Size([3]), torch.Size([3, 3]))
>>> torch.dist(a, u[:, :3] @ torch.diag(s) @ vh)
tensor(1.0486e-06)
>>>
>>> # extra dimensions
>>> a_big = torch.randn(7, 5, 3)
>>> u, s, vh = torch.linalg.svd(a_big, full_matrices=False)
>>> torch.dist(a_big, u @ torch.diag_embed(s) @ vh)
tensor(3.0957e-06)