torch.svd(input, some=True, compute_uv=True, *, out=None) -> (Tensor, Tensor, Tensor)
This function returns a namedtuple (U, S, V) which is the singular value decomposition of an input real matrix or batch of real matrices input, such that input = U × diag(S) × Vᵀ.
If some is True (default), the method returns the reduced singular value decomposition, i.e., if the last two dimensions of input are m and n, then the returned U matrix will contain only min(m, n) orthonormal columns and the size of V will be (n × min(m, n)).
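As a quick illustration of these shapes, here is a minimal sketch (assuming torch is importable; the input values are arbitrary random numbers):

```python
import torch

a = torch.randn(5, 3)            # m = 5, n = 3, so min(m, n) = 3

# some=True (default): reduced SVD
u, s, v = torch.svd(a)
assert u.shape == (5, 3)         # U has min(m, n) orthonormal columns
assert s.shape == (3,)           # min(m, n) singular values
assert v.shape == (3, 3)         # V is (n, min(m, n))

# some=False: full-size U
u_f, s_f, v_f = torch.svd(a, some=False)
assert u_f.shape == (5, 5)       # U is (m, m)
```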
If compute_uv is False, the returned U and V matrices will be zero matrices of shape (m × m) and (n × n) respectively. some will be ignored here.
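For example, this behaviour can be checked as follows (a sketch assuming torch is importable):

```python
import torch

a = torch.randn(5, 3)                          # m = 5, n = 3
u, s, v = torch.svd(a, compute_uv=False)

assert (u == 0).all() and u.shape == (5, 5)    # zero (m, m) matrix
assert (v == 0).all() and v.shape == (3, 3)    # zero (n, n) matrix
assert s.shape == (3,)                         # singular values are still computed
```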
The singular values are returned in descending order. If input is a batch of matrices, then the singular values of each matrix in the batch are returned in descending order.
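A small sketch of the batched ordering guarantee (assuming torch is importable):

```python
import torch

batch = torch.randn(4, 5, 3)        # a batch of four 5 x 3 matrices
_, s, _ = torch.svd(batch)

assert s.shape == (4, 3)            # one spectrum per matrix in the batch
# within each spectrum, values are sorted in descending order
assert bool((s[:, :-1] >= s[:, 1:]).all())
```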
The implementation of SVD on CPU uses the LAPACK routine ?gesdd (a divide-and-conquer algorithm) instead of ?gesvd for speed. Analogously, the SVD on GPU uses the MAGMA routine gesdd as well.
Irrespective of the original strides, the returned matrix U will be transposed, i.e. with strides U.contiguous().transpose(-2, -1).stride().
Extra care needs to be taken when backpropagating through the U and V outputs. Such an operation is really only stable when input is full rank with all distinct singular values. Otherwise, NaN can appear as the gradients are not properly defined. Also, note that double backward will usually do an additional backward through U and V even if the original backward is only on S.
When some is False, the gradients on U[..., :, min(m, n):] and V[..., :, min(m, n):] will be ignored in backward as those vectors can be arbitrary bases of the subspaces.
When compute_uv is False, backward cannot be performed since U and V from the forward pass are required for the backward operation.
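The following sketch (assuming torch is importable) shows a backward pass through S alone, which is well defined for a generic random input; with compute_uv=False the same backward call would fail, since U and V would be missing:

```python
import torch

a = torch.randn(5, 3, dtype=torch.double, requires_grad=True)
_, s, _ = torch.svd(a)

s.sum().backward()                  # backward through the singular values only
assert a.grad is not None
assert a.grad.shape == (5, 3)       # gradient has the shape of the input
```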
- Parameters
input (Tensor) – the input tensor of size (*, m, n) where * is zero or more batch dimensions consisting of (m × n) matrices
some (bool, optional) – controls the shape of the returned U and V
compute_uv (bool, optional) – option whether to compute U and V or not
- Keyword Arguments
out (tuple, optional) – the output tuple of tensors
Example:

>>> a = torch.randn(5, 3)
>>> a
tensor([[ 0.2364, -0.7752,  0.6372],
        [ 1.7201,  0.7394, -0.0504],
        [-0.3371, -1.0584,  0.5296],
        [ 0.3550, -0.4022,  1.5569],
        [ 0.2445, -0.0158,  1.1414]])
>>> u, s, v = torch.svd(a)
>>> u
tensor([[ 0.4027,  0.0287,  0.5434],
        [-0.1946,  0.8833,  0.3679],
        [ 0.4296, -0.2890,  0.5261],
        [ 0.6604,  0.2717, -0.2618],
        [ 0.4234,  0.2481, -0.4733]])
>>> s
tensor([2.3289, 2.0315, 0.7806])
>>> v
tensor([[-0.0199,  0.8766,  0.4809],
        [-0.5080,  0.4054, -0.7600],
        [ 0.8611,  0.2594, -0.4373]])
>>> torch.dist(a, torch.mm(torch.mm(u, torch.diag(s)), v.t()))
tensor(8.6531e-07)
>>> a_big = torch.randn(7, 5, 3)
>>> u, s, v = torch.svd(a_big)
>>> torch.dist(a_big, torch.matmul(torch.matmul(u, torch.diag_embed(s)), v.transpose(-2, -1)))
tensor(2.6503e-06)