fft(input, signal_ndim, normalized=False) → Tensor
Complex-to-complex Discrete Fourier Transform
This method computes the complex-to-complex discrete Fourier transform. Ignoring the batch dimensions, it computes the following expression:

X[\omega_1, \dots, \omega_d] = \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] \, e^{-j\,2\pi \sum_{i=1}^{d} \frac{\omega_i n_i}{N_i}}

where d = signal_ndim is the number of dimensions for the signal, and N_i is the size of signal dimension i.
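Ignoring normalization, the expression above is a plain multidimensional DFT. As a minimal sketch (using NumPy as a stand-in for illustration), a naive double sum for the 2D case can be checked against a library FFT:

```python
import numpy as np

def naive_dft2(x):
    """Direct evaluation of the 2D DFT sum:
    X[w1, w2] = sum_{n1} sum_{n2} x[n1, n2] * exp(-j*2*pi*(w1*n1/N1 + w2*n2/N2))
    """
    N1, N2 = x.shape
    X = np.zeros((N1, N2), dtype=complex)
    for w1 in range(N1):
        for w2 in range(N2):
            for n1 in range(N1):
                for n2 in range(N2):
                    X[w1, w2] += x[n1, n2] * np.exp(
                        -2j * np.pi * (w1 * n1 / N1 + w2 * n2 / N2)
                    )
    return X

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# the O(N^2) direct sum agrees with the fast algorithm
assert np.allclose(naive_dft2(x), np.fft.fft2(x))
```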
This method supports 1D, 2D and 3D complex-to-complex transforms, indicated by signal_ndim. input must be a tensor with last dimension of size 2, representing the real and imaginary components of complex numbers, and should have at least signal_ndim + 1 dimensions with optionally arbitrary number of leading batch dimensions. If normalized is set to True, this normalizes the result by dividing it with \sqrt{\prod_{i=1}^{d} N_i} so that the operator is unitary.
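Dividing by \sqrt{\prod_i N_i} makes the transform unitary, meaning it preserves the L2 norm of the signal. A short sketch (again with NumPy as a stand-in, whose norm="ortho" mode applies the same scaling):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# "ortho" divides the result by sqrt(N1 * N2), analogous to normalized=True
X = np.fft.fft2(x, norm="ortho")

# a unitary operator preserves the (Frobenius) L2 norm
assert np.isclose(np.linalg.norm(X), np.linalg.norm(x))
```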
Returns the real and the imaginary parts together as one tensor of the same shape as input.

The inverse of this function is ifft().
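The forward and inverse transforms compose to the identity (up to floating-point error); sketched here with NumPy's fftn/ifftn as stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal((3, 5, 5)) + 1j * rng.standard_normal((3, 5, 5))

# ifftn applies the conjugate exponent and divides by prod(N_i),
# so the round trip recovers the original signal
assert np.allclose(np.fft.ifftn(np.fft.fftn(x)), x)
```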
Deprecated since version 1.7.0: The function torch.fft() is deprecated and will be removed in PyTorch 1.8. Use the new torch.fft module functions instead, by importing torch.fft and calling torch.fft.fft() or torch.fft.fftn().
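A migration sketch, assuming PyTorch 1.8 or later is available: the old real-pair layout with trailing dimension 2 maps onto a native complex tensor via torch.view_as_complex, after which torch.fft.fftn over the signal dimensions reproduces the old call:

```python
import torch

x = torch.randn(4, 3, 2)  # old-style input: last dim holds (real, imag)

# old (removed in 1.8): y = torch.fft(x, 2)
xc = torch.view_as_complex(x)          # shape (4, 3), complex dtype
yc = torch.fft.fftn(xc, dim=(-2, -1))  # 2D complex-to-complex FFT
y = torch.view_as_real(yc)             # back to the (..., 2) real-pair layout
assert y.shape == (4, 3, 2)
```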
For CUDA tensors, an LRU cache is used for cuFFT plans to speed up repeatedly running FFT methods on tensors of same geometry with same configuration. See cuFFT plan cache for more details on how to monitor and control the cache.
If the torch.fft module is imported then "torch.fft" will refer to the module and not this function. Use torch.Tensor.fft() instead.
Due to the limited dynamic range of the half datatype, performing this operation in half precision may cause the first element of the result to overflow for certain inputs.
For CPU tensors, this method is currently only available with MKL. Use torch.backends.mkl.is_available() to check if MKL is installed.
- Returns
  A tensor containing the complex-to-complex Fourier transform result
- Return type
  Tensor
>>> # unbatched 2D FFT
>>> x = torch.randn(4, 3, 2)
>>> torch.fft(x, 2)
tensor([[[-0.0876,  1.7835],
         [-2.0399, -2.9754],
         [ 4.4773, -5.0119]],

        [[-1.5716,  2.7631],
         [-3.8846,  5.2652],
         [ 0.2046, -0.7088]],

        [[ 1.9938, -0.5901],
         [ 6.5637,  6.4556],
         [ 2.9865,  4.9318]],

        [[ 7.0193,  1.1742],
         [-1.3717, -2.1084],
         [ 2.0289,  2.9357]]])
>>> # batched 1D FFT
>>> torch.fft(x, 1)
tensor([[[ 1.8385,  1.2827],
         [-0.1831,  1.6593],
         [ 2.4243,  0.5367]],

        [[-0.9176, -1.5543],
         [-3.9943, -2.9860],
         [ 1.2838, -2.9420]],

        [[-0.8854, -0.6860],
         [ 2.4450,  0.0808],
         [ 1.3076, -0.5768]],

        [[-0.1231,  2.7411],
         [-0.3075, -1.7295],
         [-0.5384, -2.0299]]])
>>> # arbitrary number of batch dimensions, 2D FFT
>>> x = torch.randn(3, 3, 5, 5, 2)
>>> y = torch.fft(x, 2)
>>> y.shape
torch.Size([3, 3, 5, 5, 2])