# torch.ifft

torch.ifft(input, signal_ndim, normalized=False) → Tensor

Complex-to-complex Inverse Discrete Fourier Transform

This method computes the complex-to-complex inverse discrete Fourier transform. Ignoring the batch dimensions, it computes the following expression:

$$X[\omega_1, \dots, \omega_d] = \frac{1}{\prod_{i=1}^d N_i} \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] \, e^{\,j\, 2\pi \sum_{i=1}^{d} \frac{\omega_i n_i}{N_i}}$$

where $d$ = signal_ndim is the number of dimensions for the signal, and $N_i$ is the size of signal dimension $i$.

The argument specifications are almost identical to those of fft(). However, if normalized is set to True, this instead returns the result multiplied by $\sqrt{\prod_{i=1}^d N_i}$, making the transform a unitary operator. Therefore, to invert an fft(), the normalized argument should be set identically for fft().
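The formula above and the effect of normalized can be sketched in plain Python. This is a minimal 1-D illustration with hypothetical helper names (naive_fft, naive_ifft), not part of torch; torch.ifft itself operates on real tensors with a trailing size-2 real/imaginary dimension and uses fast algorithms rather than this direct sum:

```python
import cmath

def naive_ifft(X, normalized=False):
    """Naive 1-D inverse DFT following the formula above.

    With normalized=False the result is scaled by 1/N (the default
    behavior); with normalized=True it is scaled by 1/sqrt(N), making
    the transform unitary.
    """
    N = len(X)
    scale = 1 / N ** 0.5 if normalized else 1 / N
    return [
        scale * sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N))
        for n in range(N)
    ]

def naive_fft(x, normalized=False):
    """Forward counterpart; note the opposite sign in the exponent."""
    N = len(x)
    scale = 1 / N ** 0.5 if normalized else 1.0
    return [
        scale * sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
        for k in range(N)
    ]

# Round trip: the inverse recovers x when `normalized` matches on both sides.
x = [1 + 2j, -0.5 + 0j, 3 - 1j, 0 + 4j]
for norm in (False, True):
    y = naive_fft(x, normalized=norm)
    x_rec = naive_ifft(y, normalized=norm)
    print(all(abs(a - b) < 1e-9 for a, b in zip(x, x_rec)))  # True
```

The round trip works in both modes because the forward and inverse scale factors always multiply to $1/N$; mixing normalized=True on one side with normalized=False on the other would leave a residual factor of $\sqrt{N}$.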

Returns the real and the imaginary parts together as one tensor of the same shape as input.
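Because this API does not use a native complex dtype, a signal of complex values is packed into a real tensor whose last dimension has size 2, holding (real, imaginary) pairs. The layout can be illustrated in plain Python; the helper names here (to_pairs, from_pairs) are hypothetical, not part of torch:

```python
def to_pairs(zs):
    """Pack complex numbers into [real, imag] pairs (the (..., 2) layout)."""
    return [[z.real, z.imag] for z in zs]

def from_pairs(pairs):
    """Unpack [real, imag] pairs back into complex numbers."""
    return [complex(re, im) for re, im in pairs]

zs = [1 + 2j, -0.5 + 0j, 3 - 1j]
print(to_pairs(zs))            # [[1.0, 2.0], [-0.5, 0.0], [3.0, -1.0]]
print(from_pairs(to_pairs(zs)) == zs)  # True
```

This is why input must have at least signal_ndim + 1 dimensions: the extra trailing dimension of size 2 carries the complex components.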

The inverse of this function is fft().

Note

For CUDA tensors, an LRU cache is used for cuFFT plans to speed up repeatedly running FFT methods on tensors of same geometry with same configuration. See cuFFT plan cache for more details on how to monitor and control the cache.

Warning

For CPU tensors, this method is currently only available with MKL. Use torch.backends.mkl.is_available() to check if MKL is installed.

Parameters
• input (Tensor) – the input tensor of at least signal_ndim + 1 dimensions

• signal_ndim (int) – the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

• normalized (bool, optional) – controls whether to return normalized results. Default: False

Returns

A tensor containing the complex-to-complex inverse Fourier transform result

Return type

Tensor

Example:

>>> x = torch.randn(3, 3, 2)
>>> x
tensor([[[ 1.2766,  1.3680],
         [-0.8337,  2.0251],
         [ 0.9465, -1.4390]],

        [[-0.1890,  1.6010],
         [ 1.1034, -1.9230],
         [-0.9482,  1.0775]],

        [[-0.7708, -0.8176],
         [-0.1843, -0.2287],
         [-1.9034, -0.2196]]])
>>> y = torch.fft(x, 2)
>>> torch.ifft(y, 2)  # recover x
tensor([[[ 1.2766,  1.3680],
         [-0.8337,  2.0251],
         [ 0.9465, -1.4390]],

        [[-0.1890,  1.6010],
         [ 1.1034, -1.9230],
         [-0.9482,  1.0775]],

        [[-0.7708, -0.8176],
         [-0.1843, -0.2287],
         [-1.9034, -0.2196]]])