Discrete Fourier transform
















Relationship between the (continuous) Fourier transform and the discrete Fourier transform. Left column: A continuous function (top) and its Fourier transform (bottom). Center-left column: Periodic summation of the original function (top). Fourier transform (bottom) is zero except at discrete points. The inverse transform is a sum of sinusoids called Fourier series. Center-right column: Original function is discretized (multiplied by a Dirac comb) (top). Its Fourier transform (bottom) is a periodic summation (DTFT) of the original transform. Right column: The DFT (bottom) computes discrete samples of the continuous DTFT. The inverse DFT (top) is a periodic summation of the original samples. The FFT algorithm computes one cycle of the DFT and its inverse is one cycle of the DFT inverse.




Depiction of a Fourier transform (upper left) and its periodic summation (DTFT) in the lower left corner. The spectral sequences at (a) upper right and (b) lower right are respectively computed from (a) one cycle of the periodic summation of s(t) and (b) one cycle of the periodic summation of the s(nT) sequence. The respective formulas are (a) the Fourier series integral and (b) the DFT summation. Its similarities to the original transform, S(f), and its relative computational ease are often the motivation for computing a DFT sequence.


In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous (and periodic), and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle.


The DFT is the most important discrete transform, used to perform Fourier analysis in many practical applications.[1] In digital signal processing, the function is any quantity or signal that varies over time, such as the pressure of a sound wave, a radio signal, or daily temperature readings, sampled over a finite time interval (often defined by a window function[2]). In image processing, the samples can be the values of pixels along a row or column of a raster image. The DFT is also used to efficiently solve partial differential equations, and to perform other operations such as convolutions or multiplying large integers.


Since it deals with a finite amount of data, it can be implemented in computers by numerical algorithms or even dedicated hardware. These implementations usually employ efficient fast Fourier transform (FFT) algorithms;[3] so much so that the terms "FFT" and "DFT" are often used interchangeably. Prior to its current usage, the "FFT" initialism may have also been used for the ambiguous term "finite Fourier transform".




Contents

  • 1 Definition
  • 2 Motivation
  • 3 Example
  • 4 Inverse transform
  • 5 Properties
    • 5.1 Linearity
    • 5.2 Time and frequency reversal
    • 5.3 Conjugation in time
    • 5.4 Real and imaginary part
    • 5.5 Orthogonality
    • 5.6 The Plancherel theorem and Parseval's theorem
    • 5.7 Periodicity
    • 5.8 Shift theorem
    • 5.9 Circular convolution theorem and cross-correlation theorem
    • 5.10 Convolution theorem duality
    • 5.11 Trigonometric interpolation polynomial
    • 5.12 The unitary DFT
    • 5.13 Expressing the inverse DFT in terms of the DFT
    • 5.14 Eigenvalues and eigenvectors
    • 5.15 Uncertainty principles
      • 5.15.1 Probabilistic uncertainty principle
      • 5.15.2 Deterministic uncertainty principle
    • 5.16 DFT of real and purely imaginary signals
  • 6 Generalized DFT (shifted and non-linear phase)
  • 7 Multidimensional DFT
    • 7.1 The real-input multidimensional DFT
  • 8 Applications
    • 8.1 Spectral analysis
    • 8.2 Filter bank
    • 8.3 Data compression
    • 8.4 Partial differential equations
    • 8.5 Polynomial multiplication
      • 8.5.1 Multiplication of large integers
      • 8.5.2 Convolution
  • 9 Some discrete Fourier transform pairs
  • 10 Generalizations
    • 10.1 Representation theory
    • 10.2 Other fields
    • 10.3 Other finite groups
  • 11 Alternatives
  • 12 See also
  • 13 Notes
  • 14 References
  • 15 Further reading
  • 16 External links




Definition


The discrete Fourier transform transforms a sequence of N complex numbers $\{x_n\} := x_0, x_1, \ldots, x_{N-1}$ into another sequence of complex numbers, $\{X_k\} := X_0, X_1, \ldots, X_{N-1}$, which is defined by









$$X_k = \sum_{n=0}^{N-1} x_n \cdot e^{-\frac{i 2\pi}{N} k n} = \sum_{n=0}^{N-1} x_n \cdot \left[ \cos\!\left(\tfrac{2\pi}{N} k n\right) - i \cdot \sin\!\left(\tfrac{2\pi}{N} k n\right) \right] \qquad \text{(Eq. 1)}$$





where the last expression follows from the first one by Euler's formula.


The transform is sometimes denoted by the symbol $\mathcal{F}$, as in $\mathbf{X} = \mathcal{F}\{\mathbf{x}\}$ or $\mathcal{F}(\mathbf{x})$ or $\mathcal{F}\mathbf{x}$.[note 1]
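As a concrete illustration of Eq. 1, here is a minimal sketch in Python (NumPy assumed; the helper name dft is arbitrary, and numpy.fft.fft, which uses the same sign and normalization convention, is used only as a cross-check):

```python
import numpy as np

def dft(x):
    """Naive O(N^2) evaluation of Eq. 1."""
    x = np.asarray(x, dtype=complex)
    N = x.size
    n = np.arange(N)
    k = n.reshape((N, 1))
    # W[k, n] = exp(-i*2*pi*k*n/N): one row per output frequency bin k
    W = np.exp(-2j * np.pi * k * n / N)
    return W @ x

x = np.random.randn(8) + 1j * np.random.randn(8)
assert np.allclose(dft(x), np.fft.fft(x))   # agrees with the library FFT
```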



Motivation


Eq. 1 can also be evaluated outside the domain $k \in [0, N-1]$, and that extended sequence is $N$-periodic. Accordingly, other sequences of $N$ indices are sometimes used, such as $\left[-\tfrac{N}{2}, \tfrac{N}{2}-1\right]$ (if $N$ is even) and $\left[-\tfrac{N-1}{2}, \tfrac{N-1}{2}\right]$ (if $N$ is odd), which amounts to swapping the left and right halves of the result of the transform.[4]


Eq.1 can be interpreted or derived in various ways, for example:



  • It completely describes the discrete-time Fourier transform (DTFT) of an $N$-periodic sequence, which comprises only discrete frequency components. (Using the DTFT with periodic data)

  • It can also provide uniformly spaced samples of the continuous DTFT of a finite length sequence. (Sampling the DTFT)

  • It is the cross correlation of the input sequence, $x_n$, and a complex sinusoid at frequency $k/N$. Thus it acts like a matched filter for that frequency.

  • It is the discrete analog of the formula for the coefficients of a Fourier series:








$$x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k \cdot e^{i 2\pi k n / N}, \quad n \in \mathbb{Z}, \qquad \text{(Eq. 2)}$$




which is also $N$-periodic. In the domain $n \in [0, N-1]$, this is the inverse transform of Eq. 1. In this interpretation, each $X_k$ is a complex number that encodes both the amplitude and phase of a complex sinusoidal component $e^{i 2\pi k n / N}$ of function $x_n$. The sinusoid's frequency is $k$ cycles per $N$ samples. Its amplitude and phase are:

$$|X_k| / N = \sqrt{\operatorname{Re}(X_k)^2 + \operatorname{Im}(X_k)^2} \,/\, N$$

$$\arg(X_k) = \operatorname{atan2}\big(\operatorname{Im}(X_k),\, \operatorname{Re}(X_k)\big) = -i \cdot \ln\!\left( \frac{X_k}{|X_k|} \right),$$

where atan2 is the two-argument form of the arctan function. In polar form, $X_k = |X_k| e^{i \arg(X_k)} = |X_k| \operatorname{cis} \arg(X_k)$, where cis is the mnemonic for cos + i sin.

The normalization factor multiplying the DFT and IDFT (here 1 and $1/N$) and the signs of the exponents are merely conventions, and differ in some treatments. The only requirements of these conventions are that the DFT and IDFT have opposite-sign exponents and that the product of their normalization factors be $1/N$. A normalization of $\sqrt{1/N}$ for both the DFT and IDFT, for instance, makes the transforms unitary. A discrete impulse, $x_n = 1$ at $n = 0$ and 0 otherwise, transforms to $X_k = 1$ for all $k$ (using normalization factors 1 for the DFT and $1/N$ for the IDFT). A DC signal, $X_k = 1$ at $k = 0$ and 0 otherwise, inversely transforms to $x_n = 1$ for all $n$ (using $1/N$ for the DFT and 1 for the IDFT), which is consistent with viewing DC as the mean average of the signal.



Example


Let $N = 4$ and

$$\mathbf{x} = \begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 2-i \\ -i \\ -1+2i \end{pmatrix}$$

Here we demonstrate how to calculate the DFT of $\mathbf{x}$ using Eq. 1:

$$X_0 = e^{-i 2\pi \cdot 0 \cdot 0/4} \cdot 1 + e^{-i 2\pi \cdot 0 \cdot 1/4} \cdot (2-i) + e^{-i 2\pi \cdot 0 \cdot 2/4} \cdot (-i) + e^{-i 2\pi \cdot 0 \cdot 3/4} \cdot (-1+2i) = 2$$

$$X_1 = e^{-i 2\pi \cdot 1 \cdot 0/4} \cdot 1 + e^{-i 2\pi \cdot 1 \cdot 1/4} \cdot (2-i) + e^{-i 2\pi \cdot 1 \cdot 2/4} \cdot (-i) + e^{-i 2\pi \cdot 1 \cdot 3/4} \cdot (-1+2i) = -2-2i$$

$$X_2 = e^{-i 2\pi \cdot 2 \cdot 0/4} \cdot 1 + e^{-i 2\pi \cdot 2 \cdot 1/4} \cdot (2-i) + e^{-i 2\pi \cdot 2 \cdot 2/4} \cdot (-i) + e^{-i 2\pi \cdot 2 \cdot 3/4} \cdot (-1+2i) = -2i$$

$$X_3 = e^{-i 2\pi \cdot 3 \cdot 0/4} \cdot 1 + e^{-i 2\pi \cdot 3 \cdot 1/4} \cdot (2-i) + e^{-i 2\pi \cdot 3 \cdot 2/4} \cdot (-i) + e^{-i 2\pi \cdot 3 \cdot 3/4} \cdot (-1+2i) = 4+4i$$

$$\mathbf{X} = \begin{pmatrix} X_0 \\ X_1 \\ X_2 \\ X_3 \end{pmatrix} = \begin{pmatrix} 2 \\ -2-2i \\ -2i \\ 4+4i \end{pmatrix}$$
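The same numbers can be reproduced numerically; a minimal check in Python (NumPy assumed, whose fft uses the same convention as Eq. 1):

```python
import numpy as np

x = np.array([1, 2 - 1j, -1j, -1 + 2j])
X = np.fft.fft(x)
print(np.round(X, 12))   # -> [2, -2-2j, -2j, 4+4j], up to rounding
```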





Inverse transform


The discrete Fourier transform is an invertible, linear transformation


$$\mathcal{F} \colon \mathbb{C}^N \to \mathbb{C}^N$$

with $\mathbb{C}$ denoting the set of complex numbers. In other words, for any $N > 0$, an $N$-dimensional complex vector has a DFT and an IDFT which are in turn $N$-dimensional complex vectors.


The inverse transform is given by:









$$x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k \cdot e^{i 2\pi k n / N} \qquad \text{(Eq. 3)}$$
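Pairing this with the forward transform gives a quick round-trip check; a minimal sketch in Python (NumPy assumed; the helper name idft is arbitrary):

```python
import numpy as np

def idft(X):
    """Naive inverse DFT implementing Eq. 3."""
    X = np.asarray(X, dtype=complex)
    N = X.size
    k = np.arange(N)
    n = k.reshape((N, 1))
    # Opposite-sign exponent and 1/N normalization relative to the forward DFT
    W = np.exp(2j * np.pi * n * k / N)
    return (W @ X) / N

x = np.random.randn(16) + 1j * np.random.randn(16)
assert np.allclose(idft(np.fft.fft(x)), x)   # forward then inverse recovers x
```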






Properties



Linearity


The DFT is a linear transform, i.e. if $\mathcal{F}(\{x_n\})_k = X_k$ and $\mathcal{F}(\{y_n\})_k = Y_k$, then for any complex numbers $a, b$:

$$\mathcal{F}(\{a x_n + b y_n\})_k = a X_k + b Y_k$$


Time and frequency reversal


Reversing the time (i.e. replacing $n$ by $N-n$)[note 2] in $x_n$ corresponds to reversing the frequency (i.e. $k$ by $N-k$).[5]:p. 421 Mathematically, if $\{x_n\}$ represents the vector $\mathbf{x}$ then

if $\mathcal{F}(\{x_n\})_k = X_k$

then $\mathcal{F}(\{x_{N-n}\})_k = X_{N-k}$



Conjugation in time


If $\mathcal{F}(\{x_n\})_k = X_k$ then $\mathcal{F}(\{x_n^*\})_k = X_{N-k}^*$.[5]:p. 423



Real and imaginary part


This table shows some mathematical operations on $x_n$ in the time domain and the corresponding effects on its DFT $X_k$ in the frequency domain.

Property | Time domain $x_n$ | Frequency domain $X_k$
Real part in time | $\operatorname{Re}(x_n)$ | $\tfrac{1}{2}(X_k + X_{N-k}^*)$
Imaginary part in time | $\operatorname{Im}(x_n)$ | $\tfrac{1}{2i}(X_k - X_{N-k}^*)$
Real part in frequency | $\tfrac{1}{2}(x_n + x_{N-n}^*)$ | $\operatorname{Re}(X_k)$
Imaginary part in frequency | $\tfrac{1}{2i}(x_n - x_{N-n}^*)$ | $\operatorname{Im}(X_k)$


Orthogonality


The vectors $u_k = \left[ e^{\frac{i 2\pi}{N} k n} \;\middle|\; n = 0, 1, \ldots, N-1 \right]^T$ form an orthogonal basis over the set of $N$-dimensional complex vectors:

$$u_k^T u_{k'}^* = \sum_{n=0}^{N-1} \left( e^{\frac{i 2\pi}{N} k n} \right) \left( e^{\frac{i 2\pi}{N} (-k') n} \right) = \sum_{n=0}^{N-1} e^{\frac{i 2\pi}{N} (k - k') n} = N \, \delta_{k k'}$$

where $\delta_{k k'}$ is the Kronecker delta. (In the last step, the summation is trivial if $k = k'$, where it is $1 + 1 + \cdots = N$, and otherwise is a geometric series that can be explicitly summed to obtain zero.) This orthogonality condition can be used to derive the formula for the IDFT from the definition of the DFT, and is equivalent to the unitarity property below.
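For completeness, the geometric-series step for the case $k \neq k'$ can be written out explicitly; a short sketch in LaTeX:

```latex
% Let r = e^{i 2\pi (k-k')/N}. For k \neq k' (mod N) we have r \neq 1 but r^N = 1, so
\[
  \sum_{n=0}^{N-1} r^{\,n}
    \;=\; \frac{1 - r^{N}}{1 - r}
    \;=\; \frac{1 - e^{i 2\pi (k-k')}}{1 - e^{i 2\pi (k-k')/N}}
    \;=\; 0 .
\]
```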



The Plancherel theorem and Parseval's theorem


If $X_k$ and $Y_k$ are the DFTs of $x_n$ and $y_n$ respectively, then Parseval's theorem states:

$$\sum_{n=0}^{N-1} x_n y_n^* = \frac{1}{N} \sum_{k=0}^{N-1} X_k Y_k^*$$

where the star denotes complex conjugation. The Plancherel theorem is a special case of Parseval's theorem and states:

$$\sum_{n=0}^{N-1} |x_n|^2 = \frac{1}{N} \sum_{k=0}^{N-1} |X_k|^2.$$

These theorems are also equivalent to the unitary condition below.
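A minimal numerical check of both theorems in Python (NumPy assumed; its fft uses the unnormalized forward convention used above):

```python
import numpy as np

N = 32
x = np.random.randn(N) + 1j * np.random.randn(N)
y = np.random.randn(N) + 1j * np.random.randn(N)
X, Y = np.fft.fft(x), np.fft.fft(y)

# Parseval: sum_n x_n * conj(y_n) = (1/N) * sum_k X_k * conj(Y_k)
assert np.allclose(np.sum(x * np.conj(y)), np.sum(X * np.conj(Y)) / N)
# Plancherel (the special case y = x)
assert np.allclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / N)
```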



Periodicity


The periodicity can be shown directly from the definition:


$$X_{k+N} \;\triangleq\; \sum_{n=0}^{N-1} x_n e^{-\frac{i 2\pi}{N} (k+N) n} = \sum_{n=0}^{N-1} x_n e^{-\frac{i 2\pi}{N} k n} \underbrace{e^{-i 2\pi n}}_{1} = \sum_{n=0}^{N-1} x_n e^{-\frac{i 2\pi}{N} k n} = X_k.$$

Similarly, it can be shown that the IDFT formula leads to a periodic extension.



Shift theorem


Multiplying $x_n$ by a linear phase $e^{\frac{i 2\pi}{N} n m}$ for some integer $m$ corresponds to a circular shift of the output $X_k$: $X_k$ is replaced by $X_{k-m}$, where the subscript is interpreted modulo $N$ (i.e., periodically). Similarly, a circular shift of the input $x_n$ corresponds to multiplying the output $X_k$ by a linear phase. Mathematically, if $\{x_n\}$ represents the vector $\mathbf{x}$ then

if $\mathcal{F}(\{x_n\})_k = X_k$

then $\mathcal{F}(\{x_n \cdot e^{\frac{i 2\pi}{N} n m}\})_k = X_{k-m}$

and $\mathcal{F}(\{x_{n-m}\})_k = X_k \cdot e^{-\frac{i 2\pi}{N} k m}$
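A minimal numerical check of both shift identities in Python (NumPy assumed; numpy.roll performs the circular shift):

```python
import numpy as np

N, m = 16, 3
n = np.arange(N)
x = np.random.randn(N) + 1j * np.random.randn(N)
X = np.fft.fft(x)

# Multiplying by a linear phase circularly shifts the spectrum
assert np.allclose(np.fft.fft(x * np.exp(2j * np.pi * n * m / N)), np.roll(X, m))
# Circularly shifting the signal multiplies the spectrum by a linear phase
k = np.arange(N)
assert np.allclose(np.fft.fft(np.roll(x, m)), X * np.exp(-2j * np.pi * k * m / N))
```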


Circular convolution theorem and cross-correlation theorem





The convolution theorem for the discrete-time Fourier transform indicates that a convolution of two infinite sequences can be obtained as the inverse transform of the product of the individual transforms. An important simplification occurs when the sequences are of finite length, $N$. In terms of the DFT and inverse DFT, it can be written as follows:

$$\mathcal{F}^{-1}\left\{ \mathbf{X} \cdot \mathbf{Y} \right\}_n = \sum_{l=0}^{N-1} x_l \cdot (y_N)_{n-l} \;\triangleq\; (\mathbf{x} * \mathbf{y_N})_n,$$

which is the convolution of the $\mathbf{x}$ sequence with a $\mathbf{y}$ sequence extended by periodic summation:

$$(\mathbf{y_N})_n \;\triangleq\; \sum_{p=-\infty}^{\infty} y_{(n-pN)} = y_{n \bmod N}.$$

Similarly, the cross-correlation of $\mathbf{x}$ and $\mathbf{y_N}$ is given by:

$$\mathcal{F}^{-1}\left\{ \mathbf{X}^* \cdot \mathbf{Y} \right\}_n = \sum_{l=0}^{N-1} x_l^* \cdot (y_N)_{n+l} \;\triangleq\; (\mathbf{x} \star \mathbf{y_N})_n.$$

When either sequence contains a string of zeros of length $L$, then $L+1$ of the circular convolution outputs are equivalent to values of $\mathbf{x} * \mathbf{y}$. Methods have also been developed to use this property as part of an efficient process that constructs $\mathbf{x} * \mathbf{y}$ with an $\mathbf{x}$ or $\mathbf{y}$ sequence potentially much longer than the practical transform size ($N$). Two such methods are called overlap-save and overlap-add.[6] The efficiency results from the fact that a direct evaluation of either summation (above) requires $O(N^2)$ operations for an output sequence of length $N$. An indirect method, using transforms, can take advantage of the $O(N \log N)$ efficiency of the fast Fourier transform (FFT) to achieve much better performance. Furthermore, convolutions can be used to efficiently compute DFTs via Rader's FFT algorithm and Bluestein's FFT algorithm.
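To make the circular convolution theorem concrete, a minimal Python sketch (NumPy assumed; the direct circular_convolution helper is purely illustrative) comparing both sides:

```python
import numpy as np

def circular_convolution(x, y):
    """Direct O(N^2) circular convolution: sum_l x[l] * y[(n - l) mod N]."""
    N = len(x)
    return np.array([sum(x[l] * y[(n - l) % N] for l in range(N)) for n in range(N)])

x = np.random.randn(8)
y = np.random.randn(8)

via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y))   # inverse transform of the product
assert np.allclose(via_fft, circular_convolution(x, y))
```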



Convolution theorem duality


It can also be shown that:



$$\mathcal{F}\left\{ \mathbf{x} \cdot \mathbf{y} \right\}_k \;\triangleq\; \sum_{n=0}^{N-1} x_n \cdot y_n \cdot e^{-i \frac{2\pi}{N} k n} = \frac{1}{N} (\mathbf{X} * \mathbf{Y_N})_k,$$

which is the circular convolution of $\mathbf{X}$ and $\mathbf{Y}$.



Trigonometric interpolation polynomial


The trigonometric interpolation polynomial




$$p(t) = \frac{1}{N} \left[ X_0 + X_1 e^{i 2\pi t} + \cdots + X_{N/2-1} e^{i (N/2-1) 2\pi t} + X_{N/2} \cos(N \pi t) + X_{N/2+1} e^{i (-N/2+1) 2\pi t} + \cdots + X_{N-1} e^{-i 2\pi t} \right] \quad \text{for } N \text{ even},$$

$$p(t) = \frac{1}{N} \left[ X_0 + X_1 e^{i 2\pi t} + \cdots + X_{\lfloor N/2 \rfloor} e^{i \lfloor N/2 \rfloor 2\pi t} + X_{\lfloor N/2 \rfloor + 1} e^{-i \lfloor N/2 \rfloor 2\pi t} + \cdots + X_{N-1} e^{-i 2\pi t} \right] \quad \text{for } N \text{ odd},$$

where the coefficients $X_k$ are given by the DFT of $x_n$ above, satisfies the interpolation property $p(n/N) = x_n$ for $n = 0, \ldots, N-1$.

For even $N$, notice that the Nyquist component $\frac{X_{N/2}}{N} \cos(N \pi t)$ is handled specially.

This interpolation is not unique: aliasing implies that one could add $N$ to any of the complex-sinusoid frequencies (e.g. changing $e^{-i 2\pi t}$ to $e^{i (N-1) 2\pi t}$) without changing the interpolation property, but giving different values in between the $x_n$ points. The choice above, however, is typical because it has two useful properties. First, it consists of sinusoids whose frequencies have the smallest possible magnitudes: the interpolation is bandlimited. Second, if the $x_n$ are real numbers, then $p(t)$ is real as well.

In contrast, the most obvious trigonometric interpolation polynomial is the one in which the frequencies range from 0 to $N-1$ (instead of roughly $-N/2$ to $+N/2$ as above), similar to the inverse DFT formula. This interpolation does not minimize the slope, and is not generally real-valued for real $x_n$; its use is a common mistake.



The unitary DFT


Another way of looking at the DFT is to note that in the above discussion, the DFT can be expressed as the DFT matrix, a Vandermonde matrix,
introduced by Sylvester in 1867,


$$\mathbf{F} = \begin{bmatrix} \omega_N^{0 \cdot 0} & \omega_N^{0 \cdot 1} & \ldots & \omega_N^{0 \cdot (N-1)} \\ \omega_N^{1 \cdot 0} & \omega_N^{1 \cdot 1} & \ldots & \omega_N^{1 \cdot (N-1)} \\ \vdots & \vdots & \ddots & \vdots \\ \omega_N^{(N-1) \cdot 0} & \omega_N^{(N-1) \cdot 1} & \ldots & \omega_N^{(N-1) \cdot (N-1)} \end{bmatrix}$$

where

$$\omega_N = e^{-i 2\pi / N}$$

is a primitive Nth root of unity.


The inverse transform is then given by the inverse of the above matrix,


$$\mathbf{F}^{-1} = \frac{1}{N} \mathbf{F}^*$$

With unitary normalization constants $1/\sqrt{N}$, the DFT becomes a unitary transformation, defined by a unitary matrix:

$$\mathbf{U} = \mathbf{F} / \sqrt{N}$$

$$\mathbf{U}^{-1} = \mathbf{U}^*$$

$$|\det(\mathbf{U})| = 1$$

where $\det()$ is the determinant function. The determinant is the product of the eigenvalues, which are always $\pm 1$ or $\pm i$ as described below. In a real vector space, a unitary transformation can be thought of as simply a rigid rotation of the coordinate system, and all of the properties of a rigid rotation can be found in the unitary DFT.


The orthogonality of the DFT is now expressed as an orthonormality condition (which arises in many areas of mathematics as described in root of unity):


$$\sum_{m=0}^{N-1} U_{km} U_{mn}^* = \delta_{kn}$$

If $\mathbf{X}$ is defined as the unitary DFT of the vector $\mathbf{x}$, then

$$X_k = \sum_{n=0}^{N-1} U_{kn} x_n$$

and the Plancherel theorem is expressed as

$$\sum_{n=0}^{N-1} x_n y_n^* = \sum_{k=0}^{N-1} X_k Y_k^*$$

If we view the DFT as just a coordinate transformation which simply specifies the components of a vector in a new coordinate system, then the above is just the statement that the dot product of two vectors is preserved under a unitary DFT transformation. For the special case $\mathbf{x} = \mathbf{y}$, this implies that the length of a vector is preserved as well; this is just Parseval's theorem,

$$\sum_{n=0}^{N-1} |x_n|^2 = \sum_{k=0}^{N-1} |X_k|^2$$

A consequence of the circular convolution theorem is that the DFT matrix F diagonalizes any circulant matrix.
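A minimal numerical check in Python (NumPy assumed; the matrices are built directly from the definitions above) of the unitarity of $\mathbf{U}$ and of circulant diagonalization by $\mathbf{F}$:

```python
import numpy as np

N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)   # DFT matrix
U = F / np.sqrt(N)                             # unitary form

# Unitarity: U times its conjugate transpose is the identity
assert np.allclose(U @ U.conj().T, np.eye(N))

# F diagonalizes any circulant matrix: C = F^{-1} diag(F c) F, c = first column
c = np.random.randn(N)
C = np.column_stack([np.roll(c, k) for k in range(N)])   # circulant matrix built from c
eigvals = F @ c                                          # DFT of the first column
assert np.allclose(np.linalg.inv(F) @ np.diag(eigvals) @ F, C)
```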



Expressing the inverse DFT in terms of the DFT


A useful property of the DFT is that the inverse DFT can be easily expressed in terms of the (forward) DFT, via several well-known "tricks". (For example, in computations, it is often convenient to only implement a fast Fourier transform corresponding to one transform direction and then to get the other transform direction from the first.)


First, we can compute the inverse DFT by reversing all but one of the inputs (Duhamel et al., 1988):


$$\mathcal{F}^{-1}(\{x_n\}) = \mathcal{F}(\{x_{N-n}\}) / N$$

(As usual, the subscripts are interpreted modulo $N$; thus, for $n = 0$, we have $x_{N-0} = x_0$.)


Second, one can also conjugate the inputs and outputs:


$$\mathcal{F}^{-1}(\mathbf{x}) = \mathcal{F}(\mathbf{x}^*)^* / N$$

Third, a variant of this conjugation trick, which is sometimes preferable because it requires no modification of the data values, involves swapping real and imaginary parts (which can be done on a computer simply by modifying pointers). Define $\operatorname{swap}(x_n)$ as $x_n$ with its real and imaginary parts swapped; that is, if $x_n = a + bi$ then $\operatorname{swap}(x_n)$ is $b + ai$. Equivalently, $\operatorname{swap}(x_n)$ equals $i x_n^*$. Then

$$\mathcal{F}^{-1}(\mathbf{x}) = \operatorname{swap}(\mathcal{F}(\operatorname{swap}(\mathbf{x}))) / N$$

That is, the inverse transform is the same as the forward transform with the real and imaginary parts swapped for both input and output, up to a normalization (Duhamel et al., 1988).
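A minimal numerical check of the three identities in Python (NumPy assumed; numpy.fft.fft plays the role of $\mathcal{F}$ and numpy.fft.ifft serves only as the reference):

```python
import numpy as np

N = 16
X = np.random.randn(N) + 1j * np.random.randn(N)
target = np.fft.ifft(X)                       # reference inverse DFT

# Trick 1: reverse all but the first input, forward-transform, divide by N
reversed_X = X[np.concatenate(([0], np.arange(N - 1, 0, -1)))]   # X_{N-n} (mod N)
assert np.allclose(np.fft.fft(reversed_X) / N, target)

# Trick 2: conjugate the inputs and the outputs
assert np.allclose(np.conj(np.fft.fft(np.conj(X))) / N, target)

# Trick 3: swap real and imaginary parts of input and output
def swap(z):
    """Swap real and imaginary parts: a+bi -> b+ai (equals i * conj(z))."""
    return 1j * np.conj(z)

assert np.allclose(swap(np.fft.fft(swap(X))) / N, target)
```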


The conjugation trick can also be used to define a new transform, closely related to the DFT, that is involutory; that is, which is its own inverse. In particular, $T(\mathbf{x}) = \mathcal{F}(\mathbf{x}^*) / \sqrt{N}$ is clearly its own inverse: $T(T(\mathbf{x})) = \mathbf{x}$. A closely related involutory transformation (by a factor of $(1+i)/\sqrt{2}$) is $H(\mathbf{x}) = \mathcal{F}((1+i)\mathbf{x}^*) / \sqrt{2N}$, since the $(1+i)$ factors in $H(H(\mathbf{x}))$ cancel the 2. For real inputs $\mathbf{x}$, the real part of $H(\mathbf{x})$ is none other than the discrete Hartley transform, which is also involutory.



Eigenvalues and eigenvectors


The eigenvalues of the DFT matrix are simple and well-known, whereas the eigenvectors are complicated, not unique, and are the subject of ongoing research.


Consider the unitary form $\mathbf{U}$ defined above for the DFT of length $N$, where

$$\mathbf{U}_{m,n} = \frac{1}{\sqrt{N}} \omega_N^{(m-1)(n-1)} = \frac{1}{\sqrt{N}} e^{-\frac{i 2\pi}{N} (m-1)(n-1)}.$$

This matrix satisfies the matrix polynomial equation:

$$\mathbf{U}^4 = \mathbf{I}.$$

This can be seen from the inverse properties above: operating $\mathbf{U}$ twice gives the original data in reverse order, so operating $\mathbf{U}$ four times gives back the original data and is thus the identity matrix. This means that the eigenvalues $\lambda$ satisfy the equation:

$$\lambda^4 = 1.$$

Therefore, the eigenvalues of $\mathbf{U}$ are the fourth roots of unity: $\lambda$ is +1, −1, +i, or −i.
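A minimal numerical check in Python (NumPy assumed) that $\mathbf{U}^4 = \mathbf{I}$ and that every eigenvalue is a fourth root of unity:

```python
import numpy as np

N = 12
n = np.arange(N)
U = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)   # unitary DFT matrix

assert np.allclose(np.linalg.matrix_power(U, 4), np.eye(N))  # U^4 = I

eigvals = np.linalg.eigvals(U)
roots = np.array([1, -1, 1j, -1j])
# Every eigenvalue is (numerically) one of +1, -1, +i, -i
assert np.all(np.min(np.abs(eigvals[:, None] - roots[None, :]), axis=1) < 1e-8)
```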


Since there are only four distinct eigenvalues for this $N \times N$ matrix, they have some multiplicity. The multiplicity gives the number of linearly independent eigenvectors corresponding to each eigenvalue. (Note that there are $N$ independent eigenvectors; a unitary matrix is never defective.)


The problem of their multiplicity was solved by McClellan and Parks (1972), although it was later shown to have been equivalent to a problem solved by Gauss (Dickinson and Steiglitz, 1982). The multiplicity depends on the value of N modulo 4, and is given by the following table:

Multiplicities of the eigenvalues λ of the unitary DFT matrix U as a function of the transform size N (in terms of an integer m).

size N | λ = +1 | λ = −1 | λ = −i | λ = +i
4m | m + 1 | m | m | m − 1
4m + 1 | m + 1 | m | m | m
4m + 2 | m + 1 | m + 1 | m | m
4m + 3 | m + 1 | m + 1 | m + 1 | m

Otherwise stated, the characteristic polynomial of $\mathbf{U}$ is:

$$\det(\lambda I - \mathbf{U}) = (\lambda - 1)^{\left\lfloor \frac{N+4}{4} \right\rfloor} (\lambda + 1)^{\left\lfloor \frac{N+2}{4} \right\rfloor} (\lambda + i)^{\left\lfloor \frac{N+1}{4} \right\rfloor} (\lambda - i)^{\left\lfloor \frac{N-1}{4} \right\rfloor}.$$

No simple analytical formula for general eigenvectors is known. Moreover, the eigenvectors are not unique because any linear combination of eigenvectors for the same eigenvalue is also an eigenvector for that eigenvalue. Various researchers have proposed different choices of eigenvectors, selected to satisfy useful properties like orthogonality and to have "simple" forms (e.g., McClellan and Parks, 1972; Dickinson and Steiglitz, 1982; Grünbaum, 1982; Atakishiyev and Wolf, 1997; Candan et al., 2000; Hanna et al., 2004; Gurevich and Hadani, 2008).


A straightforward approach is to discretize an eigenfunction of the continuous Fourier transform,
of which the most famous is the Gaussian function.
Since periodic summation of the function means discretizing its frequency spectrum
and discretization means periodic summation of the spectrum,
the discretized and periodically summed Gaussian function yields an eigenvector of the discrete transform:



  • $F(m) = \sum_{k \in \mathbb{Z}} \exp\left( -\frac{\pi \cdot (m + N \cdot k)^2}{N} \right)$.

The closed form expression for the series can be expressed by
Jacobi theta functions as



  • $F(m) = \frac{1}{\sqrt{N}} \vartheta_3\left( \frac{\pi m}{N}, \exp\left( -\frac{\pi}{N} \right) \right)$.

Two other simple closed-form analytical eigenvectors for special DFT period N were found (Kong, 2008):


For DFT period $N = 2L + 1 = 4K + 1$, where $K$ is an integer, the following is an eigenvector of the DFT:

  • $F(m) = \prod_{s=K+1}^{L} \left[ \cos\left( \frac{2\pi}{N} m \right) - \cos\left( \frac{2\pi}{N} s \right) \right]$

For DFT period $N = 2L = 4K$, where $K$ is an integer, the following is an eigenvector of the DFT:

  • $F(m) = \sin\left( \frac{2\pi}{N} m \right) \prod_{s=K+1}^{L-1} \left[ \cos\left( \frac{2\pi}{N} m \right) - \cos\left( \frac{2\pi}{N} s \right) \right]$

The choice of eigenvectors of the DFT matrix has become important in recent years in order to define a discrete analogue of the fractional Fourier transform—the DFT matrix can be taken to fractional powers by exponentiating the eigenvalues (e.g., Rubio and Santhanam, 2005). For the continuous Fourier transform, the natural orthogonal eigenfunctions are the Hermite functions, so various discrete analogues of these have been employed as the eigenvectors of the DFT, such as the Kravchuk polynomials (Atakishiyev and Wolf, 1997). The "best" choice of eigenvectors to define a fractional discrete Fourier transform remains an open question, however.



Uncertainty principles



Probabilistic uncertainty principle


If the random variable $X_k$ is constrained by

$$\sum_{n=0}^{N-1} |X_n|^2 = 1,$$

then

$$P_n = |X_n|^2$$

may be considered to represent a discrete probability mass function of $n$, with an associated probability mass function constructed from the transformed variable,

$$Q_m = N |x_m|^2.$$

For the case of continuous functions $P(x)$ and $Q(k)$, the Heisenberg uncertainty principle states that

$$D_0(X) D_0(x) \geq \frac{1}{16 \pi^2}$$

where $D_0(X)$ and $D_0(x)$ are the variances of $|X|^2$ and $|x|^2$ respectively, with the equality attained in the case of a suitably normalized Gaussian distribution. Although the variances may be analogously defined for the DFT, an analogous uncertainty principle is not useful, because the uncertainty will not be shift-invariant. Still, a meaningful uncertainty principle has been introduced by Massar and Spindel.[7]


However, the Hirschman entropic uncertainty will have a useful analog for the case of the DFT.[8] The Hirschman uncertainty principle is expressed in terms of the Shannon entropy of the two probability functions.


In the discrete case, the Shannon entropies are defined as


$$H(X) = -\sum_{n=0}^{N-1} P_n \ln P_n$$

and

$$H(x) = -\sum_{m=0}^{N-1} Q_m \ln Q_m,$$

and the entropic uncertainty principle becomes[8]

$$H(X) + H(x) \geq \ln(N).$$

The equality is obtained for $P_n$ equal to translations and modulations of a suitably normalized Kronecker comb of period $A$, where $A$ is any exact integer divisor of $N$. The probability mass function $Q_m$ will then be proportional to a suitably translated Kronecker comb of period $B = N/A$.[8]



Deterministic uncertainty principle


There is also a well-known deterministic uncertainty principle that uses signal sparsity (or the number of non-zero coefficients).[9] Let $\|x\|_0$ and $\|X\|_0$ be the number of non-zero elements of the time and frequency sequences $x_0, x_1, \ldots, x_{N-1}$ and $X_0, X_1, \ldots, X_{N-1}$, respectively. Then,

$$N \leq \|x\|_0 \cdot \|X\|_0.$$

As an immediate consequence of the inequality of arithmetic and geometric means, one also has $2\sqrt{N} \leq \|x\|_0 + \|X\|_0$. Both uncertainty principles were shown to be tight for specifically chosen "picket-fence" sequences (discrete impulse trains), and find practical use for signal recovery applications.[9]
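A minimal illustration in Python (NumPy assumed) of tightness for a picket-fence sequence, i.e. an impulse train whose period divides $N$:

```python
import numpy as np

N, period = 16, 4
x = np.zeros(N)
x[::period] = 1                     # impulse train: non-zero every `period` samples
X = np.fft.fft(x)

nnz_x = np.count_nonzero(np.abs(x) > 1e-9)
nnz_X = np.count_nonzero(np.abs(X) > 1e-9)
print(nnz_x * nnz_X)                # 16 = N, so the product bound is tight
print(nnz_x + nnz_X)                # 8 = 2*sqrt(16), so the sum bound is tight too
```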



DFT of real and purely imaginary signals


  • If $x_0, \ldots, x_{N-1}$ are real numbers, as they often are in practical applications, then the DFT $X_0, \ldots, X_{N-1}$ is even symmetric:

$$x_n \in \mathbb{R} \quad \forall n \in \{0, \ldots, N-1\} \implies X_k = X_{-k \bmod N}^* \quad \forall k \in \{0, \ldots, N-1\},$$ where $X^*$ denotes complex conjugation.

It follows that for even $N$, $X_0$ and $X_{N/2}$ are real-valued, and the remainder of the DFT is completely specified by just $N/2 - 1$ complex numbers. (A numerical sketch of both symmetries follows this list.)

  • If $x_0, \ldots, x_{N-1}$ are purely imaginary numbers, then the DFT $X_0, \ldots, X_{N-1}$ is odd symmetric:

$$x_n \in i\mathbb{R} \quad \forall n \in \{0, \ldots, N-1\} \implies X_k = -X_{-k \bmod N}^* \quad \forall k \in \{0, \ldots, N-1\},$$ where $X^*$ denotes complex conjugation.
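A minimal check of both symmetries in Python (NumPy assumed):

```python
import numpy as np

N = 8
k = np.arange(N)

x_real = np.random.randn(N)
X = np.fft.fft(x_real)
assert np.allclose(X, np.conj(X[(-k) % N]))      # even (Hermitian) symmetry

x_imag = 1j * np.random.randn(N)
Y = np.fft.fft(x_imag)
assert np.allclose(Y, -np.conj(Y[(-k) % N]))     # odd (anti-Hermitian) symmetry
```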


Generalized DFT (shifted and non-linear phase)


It is possible to shift the transform sampling in time and/or frequency domain by some real shifts a and b, respectively. This is sometimes known as a generalized DFT (or GDFT), also called the shifted DFT or offset DFT, and has analogous properties to the ordinary DFT:


$$X_k = \sum_{n=0}^{N-1} x_n e^{-\frac{i 2\pi}{N} (k+b)(n+a)} \qquad k = 0, \dots, N-1.$$

Most often, shifts of $1/2$ (half a sample) are used. While the ordinary DFT corresponds to a periodic signal in both the time and frequency domains, $a = 1/2$ produces a signal that is anti-periodic in the frequency domain ($X_{k+N} = -X_k$) and vice versa for $b = 1/2$. Thus, the specific case of $a = b = 1/2$ is known as an odd-time odd-frequency discrete Fourier transform (or O2 DFT). Such shifted transforms are most often used for symmetric data, to represent different boundary symmetries, and for real-symmetric data they correspond to different forms of the discrete cosine and sine transforms.

Another interesting choice is $a = b = -(N-1)/2$, which is called the centered DFT (or CDFT). The centered DFT has the useful property that, when $N$ is a multiple of four, all four of its eigenvalues (see above) have equal multiplicities (Rubio and Santhanam, 2005).[10]

The term GDFT is also used for the non-linear phase extensions of the DFT. Hence, the GDFT method provides a generalization for constant amplitude orthogonal block transforms including linear and non-linear phase types. The GDFT is a framework to improve the time and frequency domain properties of the traditional DFT, e.g. auto/cross-correlations, by the addition of a properly designed phase shaping function (non-linear, in general) to the original linear phase functions (Akansu and Agirman-Tosun, 2010).[11]


The discrete Fourier transform can be viewed as a special case of the z-transform, evaluated on the unit circle in the complex plane; more general z-transforms correspond to complex shifts a and b above.



Multidimensional DFT


The ordinary DFT transforms a one-dimensional sequence or array $x_n$ that is a function of exactly one discrete variable $n$. The multidimensional DFT of a multidimensional array $x_{n_1, n_2, \dots, n_d}$ that is a function of $d$ discrete variables $n_\ell = 0, 1, \dots, N_\ell - 1$ for $\ell$ in $1, 2, \dots, d$ is defined by:

$$X_{k_1, k_2, \dots, k_d} = \sum_{n_1=0}^{N_1-1} \left( \omega_{N_1}^{\,k_1 n_1} \sum_{n_2=0}^{N_2-1} \left( \omega_{N_2}^{\,k_2 n_2} \cdots \sum_{n_d=0}^{N_d-1} \omega_{N_d}^{\,k_d n_d} \cdot x_{n_1, n_2, \dots, n_d} \right) \right),$$

where $\omega_{N_\ell} = \exp(-i 2\pi / N_\ell)$ as above and the $d$ output indices run from $k_\ell = 0, 1, \dots, N_\ell - 1$. This is more compactly expressed in vector notation, where we define $\mathbf{n} = (n_1, n_2, \dots, n_d)$ and $\mathbf{k} = (k_1, k_2, \dots, k_d)$ as $d$-dimensional vectors of indices from $\mathbf{0}$ to $\mathbf{N} - 1$, which we define as $\mathbf{N} - 1 = (N_1 - 1, N_2 - 1, \dots, N_d - 1)$:

$$X_{\mathbf{k}} = \sum_{\mathbf{n} = \mathbf{0}}^{\mathbf{N} - 1} e^{-i 2\pi \, \mathbf{k} \cdot (\mathbf{n} / \mathbf{N})} x_{\mathbf{n}},$$

where the division $\mathbf{n} / \mathbf{N}$ is defined as $\mathbf{n} / \mathbf{N} = (n_1 / N_1, \dots, n_d / N_d)$ to be performed element-wise, and the sum denotes the set of nested summations above.

The inverse of the multi-dimensional DFT is, analogous to the one-dimensional case, given by:

$$x_{\mathbf{n}} = \frac{1}{\prod_{\ell=1}^{d} N_\ell} \sum_{\mathbf{k} = \mathbf{0}}^{\mathbf{N} - 1} e^{i 2\pi \, \mathbf{n} \cdot (\mathbf{k} / \mathbf{N})} X_{\mathbf{k}}.$$

As the one-dimensional DFT expresses the input $x_n$ as a superposition of sinusoids, the multidimensional DFT expresses the input as a superposition of plane waves, or multidimensional sinusoids. The direction of oscillation in space is $\mathbf{k} / \mathbf{N}$. The amplitudes are $X_{\mathbf{k}}$. This decomposition is of great importance for everything from digital image processing (two-dimensional) to solving partial differential equations. The solution is broken up into plane waves.

The multidimensional DFT can be computed by the composition of a sequence of one-dimensional DFTs along each dimension. In the two-dimensional case $x_{n_1, n_2}$, the $N_1$ independent DFTs of the rows (i.e., along $n_2$) are computed first to form a new array $y_{n_1, k_2}$. Then the $N_2$ independent DFTs of $y$ along the columns (along $n_1$) are computed to form the final result $X_{k_1, k_2}$. Alternatively the columns can be computed first and then the rows. The order is immaterial because the nested summations above commute.


An algorithm to compute a one-dimensional DFT is thus sufficient to efficiently compute a multidimensional DFT. This approach is known as the row-column algorithm. There are also intrinsically multidimensional FFT algorithms.
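A minimal sketch of the row-column approach in Python (NumPy assumed; numpy.fft.fft2 serves only as the reference 2-D transform):

```python
import numpy as np

x = np.random.randn(6, 8) + 1j * np.random.randn(6, 8)

# Row-column algorithm: 1-D DFTs along each row, then along each column
rows_done = np.fft.fft(x, axis=1)               # transform along n2 (rows)
row_column = np.fft.fft(rows_done, axis=0)      # then along n1 (columns)

assert np.allclose(row_column, np.fft.fft2(x))  # matches the 2-D DFT
# The opposite order gives the same result, since the nested sums commute
assert np.allclose(np.fft.fft(np.fft.fft(x, axis=0), axis=1), np.fft.fft2(x))
```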



The real-input multidimensional DFT


For input data $x_{n_1, n_2, \dots, n_d}$ consisting of real numbers, the DFT outputs have a conjugate symmetry similar to the one-dimensional case above:

$$X_{k_1, k_2, \dots, k_d} = X_{N_1 - k_1, N_2 - k_2, \dots, N_d - k_d}^*,$$

where the star again denotes complex conjugation and the $\ell$-th subscript is again interpreted modulo $N_\ell$ (for $\ell = 1, 2, \ldots, d$).



Applications


The DFT has seen wide usage across a large number of fields; we only sketch a few examples below (see also the references at the end). All applications of the DFT depend crucially on the availability of a fast algorithm to compute discrete Fourier transforms and their inverses, a fast Fourier transform.



Spectral analysis


When the DFT is used for signal spectral analysis, the $\{x_n\}$ sequence usually represents a finite set of uniformly spaced time-samples of some signal $x(t)$, where $t$ represents time. The conversion from continuous time to samples (discrete-time) changes the underlying Fourier transform of $x(t)$ into a discrete-time Fourier transform (DTFT), which generally entails a type of distortion called aliasing. Choice of an appropriate sample-rate (see Nyquist rate) is the key to minimizing that distortion. Similarly, the conversion from a very long (or infinite) sequence to a manageable size entails a type of distortion called leakage, which is manifested as a loss of detail (a.k.a. resolution) in the DTFT. Choice of an appropriate sub-sequence length is the primary key to minimizing that effect. When the available data (and time to process it) is more than the amount needed to attain the desired frequency resolution, a standard technique is to perform multiple DFTs, for example to create a spectrogram. If the desired result is a power spectrum and noise or randomness is present in the data, averaging the magnitude components of the multiple DFTs is a useful procedure to reduce the variance of the spectrum (also called a periodogram in this context); two examples of such techniques are the Welch method and the Bartlett method; the general subject of estimating the power spectrum of a noisy signal is called spectral estimation.


A final source of distortion (or perhaps illusion) is the DFT itself, because it is just a discrete sampling of the DTFT, which is a function of a continuous frequency domain. That can be mitigated by increasing the resolution of the DFT. That procedure is illustrated at Sampling the DTFT.



  • The procedure is sometimes referred to as zero-padding, which is a particular implementation used in conjunction with the fast Fourier transform (FFT) algorithm. The inefficiency of performing multiplications and additions with zero-valued "samples" is more than offset by the inherent efficiency of the FFT.

  • As already noted, leakage imposes a limit on the inherent resolution of the DTFT. So there is a practical limit to the benefit that can be obtained from a fine-grained DFT.



Filter bank


See FFT filter banks and Sampling the DTFT.



Data compression


The field of digital signal processing relies heavily on operations in the frequency domain (i.e. on the Fourier transform). For example, several lossy image and sound compression methods employ the discrete Fourier transform: the signal is cut into short segments, each is transformed, and then the Fourier coefficients of high frequencies, which are assumed to be unnoticeable, are discarded. The decompressor computes the inverse transform based on this reduced number of Fourier coefficients. (Compression applications often use a specialized form of the DFT, the discrete cosine transform or sometimes the modified discrete cosine transform.)
Some relatively recent compression algorithms, however, use wavelet transforms, which give a more uniform compromise between time and frequency domain than obtained by chopping data into segments and transforming each segment. In the case of JPEG2000, this avoids the spurious image features that appear when images are highly compressed with the original JPEG.



Partial differential equations


Discrete Fourier transforms are often used to solve partial differential equations, where again the DFT is used as an approximation for the Fourier series (which is recovered in the limit of infinite $N$). The advantage of this approach is that it expands the signal in complex exponentials $e^{inx}$, which are eigenfunctions of differentiation: $\frac{d}{dx}\big(e^{inx}\big) = in\, e^{inx}$. Thus, in the Fourier representation, differentiation is simple: we just multiply by $in$. (Note, however, that the choice of $n$ is not unique due to aliasing; for the method to be convergent, a choice similar to that in the trigonometric interpolation section above should be used.) A linear differential equation with constant coefficients is transformed into an easily solvable algebraic equation. One then uses the inverse DFT to transform the result back into the ordinary spatial representation. Such an approach is called a spectral method.
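A minimal sketch of spectral differentiation in Python (NumPy assumed; the test function is an arbitrary smooth periodic example, and numpy.fft.fftfreq supplies the centered frequency choice mentioned above):

```python
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N                    # uniform grid on [0, 2*pi)
f = np.sin(3 * x) + 0.5 * np.cos(5 * x)             # smooth periodic test function
f_exact = 3 * np.cos(3 * x) - 2.5 * np.sin(5 * x)   # its exact derivative

k = np.fft.fftfreq(N, d=1.0 / N)                    # wavenumbers ..., -2, -1, 0, 1, 2, ...
df = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))   # differentiate: multiply by i*k

assert np.allclose(df, f_exact)
```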



Polynomial multiplication


Suppose we wish to compute the polynomial product c(x) = a(x) · b(x). The ordinary product expression for the coefficients of c involves a linear (acyclic) convolution, where indices do not "wrap around." This can be rewritten as a cyclic convolution by taking the coefficient vectors for a(x) and b(x) with constant term first, then appending zeros so that the resultant coefficient vectors a and b have dimension d > deg(a(x)) + deg(b(x)). Then,


$$\mathbf{c} = \mathbf{a} * \mathbf{b}$$

where $\mathbf{c}$ is the vector of coefficients for $c(x)$, and the convolution operator $*$ is defined so

$$c_n = \sum_{m=0}^{d-1} a_m b_{n - m \ (\mathrm{mod}\ d)} \qquad n = 0, 1, \dots, d-1$$

But convolution becomes multiplication under the DFT:

$$\mathcal{F}(\mathbf{c}) = \mathcal{F}(\mathbf{a}) \mathcal{F}(\mathbf{b})$$

Here the vector product is taken elementwise. Thus the coefficients of the product polynomial $c(x)$ are just the terms $0, \ldots, \deg(a(x)) + \deg(b(x))$ of the coefficient vector

$$\mathbf{c} = \mathcal{F}^{-1}(\mathcal{F}(\mathbf{a}) \mathcal{F}(\mathbf{b})).$$

With a fast Fourier transform, the resulting algorithm takes O (N log N) arithmetic operations. Due to its simplicity and speed, the Cooley–Tukey FFT algorithm, which is limited to composite sizes, is often chosen for the transform operation. In this case, d should be chosen as the smallest integer greater than the sum of the input polynomial degrees that is factorizable into small prime factors (e.g. 2, 3, and 5, depending upon the FFT implementation).
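A minimal sketch of FFT-based polynomial multiplication in Python (NumPy assumed; for simplicity the padded length is rounded up to a power of two, which is one valid choice of d for a Cooley–Tukey FFT):

```python
import numpy as np

def poly_multiply(a, b):
    """Multiply polynomials given as coefficient lists (constant term first)."""
    out_len = len(a) + len(b) - 1                # deg(a) + deg(b) + 1 coefficients
    d = 1 << (out_len - 1).bit_length()          # next power of two >= out_len
    A = np.fft.fft(a, n=d)                       # zero-pads a to length d
    B = np.fft.fft(b, n=d)
    c = np.fft.ifft(A * B).real                  # cyclic convolution; no wrap-around here
    return np.round(c[:out_len]).astype(int)     # round away floating-point error

# (1 + 2x + 3x^2) * (4 + 5x) = 4 + 13x + 22x^2 + 15x^3
print(poly_multiply([1, 2, 3], [4, 5]))          # [ 4 13 22 15]
```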



Multiplication of large integers


The fastest known algorithms for the multiplication of very large integers use the polynomial multiplication method outlined above. Integers can be treated as the value of a polynomial evaluated specifically at the number base, with the coefficients of the polynomial corresponding to the digits in that base (e.g. $123 = 1\cdot 10^2 + 2\cdot 10^1 + 3\cdot 10^0$). After polynomial multiplication, a relatively low-complexity carry-propagation step completes the multiplication.
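As a toy illustration (not the asymptotically fastest scheme), the sketch below multiplies two integers by convolving their decimal digits with the FFT, then propagating carries; the base-10 digit representation and the helper name are assumptions made for readability.

    import numpy as np

    def big_multiply(x: int, y: int) -> int:
        """Multiply integers by convolving their decimal digits via the DFT, then carrying."""
        a = [int(d) for d in str(x)[::-1]]        # least-significant digit first
        b = [int(d) for d in str(y)[::-1]]
        d = len(a) + len(b) - 1
        c = np.round(np.fft.ifft(np.fft.fft(a, d) * np.fft.fft(b, d)).real).astype(int)

        # carry propagation: reduce each convolved "digit" modulo the base
        digits, carry = [], 0
        for coeff in c:
            carry, digit = divmod(coeff + carry, 10)
            digits.append(digit)
        while carry:
            carry, digit = divmod(carry, 10)
            digits.append(digit)
        return int("".join(map(str, digits[::-1])))

    print(big_multiply(12345678901234567890, 98765432109876543210))
    print(12345678901234567890 * 98765432109876543210)   # same value, for comparison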



Convolution


When data is convolved with a function with wide support, such as for downsampling by a large sampling ratio, it may be faster, because of the convolution theorem and the FFT algorithm, to transform the data, multiply it pointwise by the transform of the filter, and then inverse-transform the result. Alternatively, a good filter is obtained by simply truncating the transformed data and re-transforming the shortened data set.
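A minimal sketch of FFT-based linear convolution, zero-padding both inputs so that the cyclic convolution computed by the DFT agrees with the ordinary (acyclic) one; in practice a library routine such as scipy.signal.fftconvolve does essentially this.

    import numpy as np

    def fft_convolve(x, h):
        """Linear convolution of x and h computed via the DFT (convolution theorem)."""
        n = len(x) + len(h) - 1                   # length of the linear convolution
        # zero-pad both inputs so the cyclic convolution equals the linear one
        y = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n))
        return y.real                             # inputs are real, so drop round-off imaginaries

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1000)
    h = np.ones(50) / 50                          # a 50-tap moving-average filter
    print(np.allclose(fft_convolve(x, h), np.convolve(x, h)))   # True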



Some discrete Fourier transform pairs


Some DFT pairs

The pairs below relate a sequence and its transform under the conventions $x_n = \frac{1}{N}\sum_{k=0}^{N-1} X_k\, e^{i 2\pi k n/N}$ and $X_k = \sum_{n=0}^{N-1} x_n\, e^{-i 2\pi k n/N}$:

  • $x_n\, e^{i 2\pi n \ell/N} \;\longleftrightarrow\; X_{k-\ell}$ (frequency shift theorem)

  • $x_{n-\ell} \;\longleftrightarrow\; X_k\, e^{-i 2\pi k \ell/N}$ (time shift theorem)

  • $x_n \in \mathbb{R} \;\longleftrightarrow\; X_k = X_{N-k}^{*}$ (real DFT)

  • $a^n \;\longleftrightarrow\; \begin{cases} N & \text{if } a = e^{i 2\pi k/N} \\ \dfrac{1-a^N}{1-a\, e^{-i 2\pi k/N}} & \text{otherwise} \end{cases}$ (from the geometric progression formula)

  • $\dbinom{N-1}{n} \;\longleftrightarrow\; \left(1 + e^{-i 2\pi k/N}\right)^{N-1}$ (from the binomial theorem)

  • $\begin{cases} \dfrac{1}{W} & \text{if } 2n < W \text{ or } 2(N-n) < W \\ 0 & \text{otherwise} \end{cases} \;\longleftrightarrow\; \begin{cases} 1 & \text{if } k = 0 \\ \dfrac{\sin\left(\frac{\pi W k}{N}\right)}{W \sin\left(\frac{\pi k}{N}\right)} & \text{otherwise} \end{cases}$ ($x_n$ is a rectangular window function of $W$ points centered on $n = 0$, where $W$ is an odd integer, and $X_k$ is a sinc-like function; specifically, $X_k$ is a Dirichlet kernel)

  • $\sum_{j\in\mathbb{Z}} \exp\!\left(-\frac{\pi}{cN}(n+Nj)^2\right) \;\longleftrightarrow\; \sqrt{cN}\,\sum_{j\in\mathbb{Z}} \exp\!\left(-\frac{\pi c}{N}(k+Nj)^2\right)$ (discretization and periodic summation of the scaled Gaussian functions for $c > 0$; since either $c$ or $\frac{1}{c}$ is larger than one and thus warrants fast convergence of one of the two series, for large $c$ one may choose to compute the frequency spectrum and convert to the time domain using the discrete Fourier transform)
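The pairs above can be checked numerically. The short sketch below, using np.fft.fft (which follows the same sign convention as the $X_k$ formula above), verifies the geometric-progression pair and the time shift theorem for a small illustrative N.

    import numpy as np

    N = 16
    n = np.arange(N)
    k = np.arange(N)

    # geometric-progression pair: DFT of a^n is (1 - a^N) / (1 - a e^{-i 2 pi k / N}) for a not a root of unity
    a = 0.7
    lhs = np.fft.fft(a ** n)
    rhs = (1 - a ** N) / (1 - a * np.exp(-2j * np.pi * k / N))
    print(np.allclose(lhs, rhs))                  # True

    # time shift theorem: DFT of x_{n-l} is X_k e^{-i 2 pi k l / N}
    x = np.random.default_rng(1).standard_normal(N)
    ell = 3
    print(np.allclose(np.fft.fft(np.roll(x, ell)),
                      np.fft.fft(x) * np.exp(-2j * np.pi * k * ell / N)))   # True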


Generalizations



Representation theory



The DFT can be interpreted in terms of the complex-valued representation theory of the finite cyclic group. In other words, a sequence of $n$ complex numbers can be thought of as an element of the $n$-dimensional complex space $\mathbb{C}^n$, or equivalently as a function $f$ from the finite cyclic group of order $n$ to the complex numbers, $f : \mathbb{Z}_n \to \mathbb{C}$. So $f$ is a class function on the finite cyclic group, and thus can be expressed as a linear combination of the irreducible characters of this group, which are the roots of unity.


From this point of view, one may generalize the DFT to representation theory generally, or more narrowly to the representation theory of finite groups.


More narrowly still, one may generalize the DFT by either changing the target (taking values in a field other than the complex numbers), or the domain (a group other than a finite cyclic group), as detailed in the sequel.



Other fields



Many of the properties of the DFT only depend on the fact that $e^{-\frac{i 2\pi}{N}}$ is a primitive $N$th root of unity, sometimes denoted $\omega_N$ or $W_N$ (so that $\omega_N^N = 1$). Such properties include the completeness, orthogonality, Plancherel/Parseval, periodicity, shift, convolution, and unitarity properties above, as well as many FFT algorithms. For this reason, the discrete Fourier transform can be defined by using roots of unity in fields other than the complex numbers, and such generalizations are commonly called number-theoretic transforms (NTTs) in the case of finite fields. For more information, see number-theoretic transform and discrete Fourier transform (general).
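As a small illustration of a number-theoretic transform, the sketch below works over the finite field Z/17Z with the (illustrative) parameters N = 8 and ω = 2, a primitive 8th root of unity mod 17; a naive O(N²) transform is used for clarity rather than an FFT.

    # A toy number-theoretic transform: the DFT over the finite field Z/17Z.
    # Illustrative parameters: N = 8, OMEGA = 2 is a primitive 8th root of unity mod 17.
    P, N, OMEGA = 17, 8, 2

    def ntt(x, omega=OMEGA):
        """Naive O(N^2) transform: X_k = sum_n x_n * omega^(n*k) mod P."""
        return [sum(x[n] * pow(omega, n * k, P) for n in range(N)) % P for k in range(N)]

    def intt(X):
        """Inverse transform: use omega^(-1) and divide by N, both modulo P."""
        inv_omega = pow(OMEGA, -1, P)             # 9, since 2*9 = 18 = 1 (mod 17)
        inv_N = pow(N, -1, P)                     # 15, since 8*15 = 120 = 1 (mod 17)
        return [(inv_N * v) % P for v in ntt(X, inv_omega)]

    x = [3, 1, 4, 1, 5, 9, 2, 6]
    assert intt(ntt(x)) == x                      # exact round trip, no floating point involved
    print(ntt(x))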



Other finite groups



The standard DFT acts on a sequence x0, x1, …, xN−1 of complex numbers, which can be viewed as a function {0, 1, …, N − 1} → C. The multidimensional DFT acts on multidimensional sequences, which can be viewed as functions


$\{0, 1, \ldots, N_1 - 1\} \times \cdots \times \{0, 1, \ldots, N_d - 1\} \to \mathbb{C}.$

This suggests the generalization to Fourier transforms on arbitrary finite groups, which act on functions $G \to \mathbb{C}$ where $G$ is a finite group. In this framework, the standard DFT is seen as the Fourier transform on a cyclic group, while the multidimensional DFT is a Fourier transform on a direct sum of cyclic groups.


Further, the Fourier transform can also be defined on cosets of a group.



Alternatives




There are various alternatives to the DFT for various applications, prominent among which are wavelets. The analog of the DFT is the discrete wavelet transform (DWT). From the point of view of time–frequency analysis, a key limitation of the Fourier transform is that it does not include location information, only frequency information, and thus has difficulty in representing transients. As wavelets have location as well as frequency, they are better able to represent location, at the expense of greater difficulty representing frequency. For details, see comparison of the discrete wavelet transform with the discrete Fourier transform.



See also



  • Companion matrix

  • DFT matrix

  • Fast Fourier transform

  • FFTPACK

  • FFTW

  • Generalizations of Pauli matrices

  • List of Fourier-related transforms

  • Multidimensional transform

  • Zak transform

  • Quantum Fourier transform



Notes





  1. ^ As a linear transformation on a finite-dimensional vector space, the DFT expression can also be written in terms of a DFT matrix; when scaled appropriately it becomes a unitary matrix and the Xk can thus be viewed as coefficients of x in an orthonormal basis.


  2. ^ Time reversal for the DFT means replacing $n$ by $N-n$, and not $n$ by $-n$, to avoid negative indices.




References





  1. ^ Strang, Gilbert (May–June 1994). "Wavelets". American Scientist. 82 (3): 250–255. JSTOR 29775194. "This is the most important numerical algorithm of our lifetime."


  2. ^ Sahidullah, Md.; Saha, Goutam (Feb 2013). "A Novel Windowing Technique for Efficient Computation of MFCC for Speaker Recognition". IEEE Signal Processing Letters. 20 (2): 149–152. arXiv:1206.2437. Bibcode:2013ISPL...20..149S. doi:10.1109/LSP.2012.2235067.


  3. ^ Cooley et al., 1969


  4. ^ "Shift zero-frequency component to center of spectrum – MATLAB fftshift". mathworks.com. Natick, MA 01760: The MathWorks, Inc. Retrieved 10 March 2014.


  5. ^ Proakis, John G.; Manolakis, Dimitri G. (1996), Digital Signal Processing: Principles, Algorithms and Applications (3rd ed.), Upper Saddle River, NJ: Prentice-Hall International, ISBN 9780133942897


  6. ^ T. G. Stockham, Jr., "High-speed convolution and correlation," in 1966 Proc. AFIPS Spring Joint Computing Conf. Reprinted in Digital Signal Processing, L. R. Rabiner and C. M. Rader, editors, New York: IEEE Press, 1972.


  7. ^ Massar, S.; Spindel, P. (2008). "Uncertainty Relation for the Discrete Fourier Transform". Physical Review Letters. 100 (19): 190401. arXiv:0710.0723. Bibcode:2008PhRvL.100s0401M. doi:10.1103/PhysRevLett.100.190401. PMID 18518426.


  8. ^ DeBrunner, Victor; Havlicek, Joseph P.; Przebinda, Tomasz; Özaydin, Murad (2005). "Entropy-Based Uncertainty Measures for $L^2(\mathbb{R}^n)$, $\ell^2(\mathbb{Z})$, and $\ell^2(\mathbb{Z}/N\mathbb{Z})$ With a Hirschman Optimal Transform for $\ell^2(\mathbb{Z}/N\mathbb{Z})$" (PDF). IEEE Transactions on Signal Processing. 53 (8): 2690. Bibcode:2005ITSP...53.2690D. doi:10.1109/TSP.2005.850329. Retrieved 2011-06-23.


  9. ^ Donoho, D.L.; Stark, P.B. (1989). "Uncertainty principles and signal recovery". SIAM Journal on Applied Mathematics. 49 (3): 906–931. doi:10.1137/0149053.


  10. ^ Santhanam, Balu; Santhanam, Thalanayar S. "Discrete Gauss-Hermite functions and eigenvectors of the centered discrete Fourier transform"[permanent dead link], Proceedings of the 32nd IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2007, SPTM-P12.4), vol. III, pp. 1385-1388.


  11. ^ Akansu, Ali N.; Agirman-Tosun, Handan. "Generalized Discrete Fourier Transform With Nonlinear Phase", IEEE Transactions on Signal Processing, vol. 58, no. 9, pp. 4547–4556, Sept. 2010.





Further reading








  • Brigham, E. Oran (1988). The fast Fourier transform and its applications. Englewood Cliffs, N.J.: Prentice Hall. ISBN 978-0-13-307505-2.


  • Oppenheim, Alan V.; Schafer, R. W.; Buck, J. R. (1999). Discrete-time signal processing. Upper Saddle River, N.J.: Prentice Hall. ISBN 978-0-13-754920-7.


  • Smith, Steven W. (1999). "Chapter 8: The Discrete Fourier Transform". The Scientist and Engineer's Guide to Digital Signal Processing (Second ed.). San Diego, Calif.: California Technical Publishing. ISBN 978-0-9660176-3-2.


  • Cormen, Thomas H.; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2001). "Chapter 30: Polynomials and the FFT". Introduction to Algorithms (Second ed.). MIT Press and McGraw-Hill. pp. 822–848. ISBN 978-0-262-03293-3. esp. section 30.2: The DFT and FFT, pp. 830–838.


  • P. Duhamel; B. Piron; J. M. Etcheto (1988). "On computing the inverse DFT". IEEE Transactions on Acoustics, Speech, and Signal Processing. 36 (2): 285–286. doi:10.1109/29.1519.


  • J. H. McClellan; T. W. Parks (1972). "Eigenvalues and eigenvectors of the discrete Fourier transformation". IEEE Transactions on Audio and Electroacoustics. 20 (1): 66–74. doi:10.1109/TAU.1972.1162342.


  • Bradley W. Dickinson; Kenneth Steiglitz (1982). "Eigenvectors and functions of the discrete Fourier transform" (PDF). IEEE Transactions on Acoustics, Speech, and Signal Processing. 30 (1): 25–31. CiteSeerX 10.1.1.434.5279. doi:10.1109/TASSP.1982.1163843. (Note that this paper has an apparent typo in its table of the eigenvalue multiplicities: the +i/−i columns are interchanged. The correct table can be found in McClellan and Parks, 1972, and is easily confirmed numerically.)


  • F. A. Grünbaum (1982). "The eigenvectors of the discrete Fourier transform". Journal of Mathematical Analysis and Applications. 88 (2): 355–363. doi:10.1016/0022-247X(82)90199-8.


  • Natig M. Atakishiyev; Kurt Bernardo Wolf (1997). "Fractional Fourier-Kravchuk transform". Journal of the Optical Society of America A. 14 (7): 1467–1477. Bibcode:1997JOSAA..14.1467A. doi:10.1364/JOSAA.14.001467.


  • C. Candan; M. A. Kutay; H. M.Ozaktas (2000). "The discrete fractional Fourier transform" (PDF). IEEE Transactions on Signal Processing. 48 (5): 1329–1337. Bibcode:2000ITSP...48.1329C. doi:10.1109/78.839980. hdl:11693/11130.


  • Magdy Tawfik Hanna, Nabila Philip Attalla Seif, and Waleed Abd El Maguid Ahmed (2004). "Hermite-Gaussian-like eigenvectors of the discrete Fourier transform matrix based on the singular-value decomposition of its orthogonal projection matrices". IEEE Transactions on Circuits and Systems I: Regular Papers. 51 (11): 2245–2254. doi:10.1109/TCSI.2004.836850.


  • Shamgar Gurevich; Ronny Hadani (2009). "On the diagonalization of the discrete Fourier transform". Applied and Computational Harmonic Analysis. 27 (1): 87–99. arXiv:0808.3281. doi:10.1016/j.acha.2008.11.003. preprint at.


  • Shamgar Gurevich; Ronny Hadani; Nir Sochen (2008). "The finite harmonic oscillator and its applications to sequences, communication and radar". IEEE Transactions on Information Theory. 54 (9): 4239–4253. arXiv:0808.1495. doi:10.1109/TIT.2008.926440. preprint at.


  • Juan G. Vargas-Rubio; Balu Santhanam (2005). "On the multiangle centered discrete fractional Fourier transform". IEEE Signal Processing Letters. 12 (4): 273–276. Bibcode:2005ISPL...12..273V. doi:10.1109/LSP.2005.843762.


  • J. Cooley, P. Lewis, and P. Welch (1969). "The finite Fourier transform". IEEE Transactions on Audio and Electroacoustics. 17 (2): 77–85. doi:10.1109/TAU.1969.1162036.


  • F.N. Kong (2008). "Analytic Expressions of Two Discrete Hermite-Gaussian Signals". IEEE Transactions on Circuits and Systems Ii: Express Briefs. 55 (1): 56–60. doi:10.1109/TCSII.2007.909865.



External links



  • Interactive explanation of the DFT

  • Matlab tutorial on the Discrete Fourier Transformation

  • Interactive flash tutorial on the DFT

  • Mathematics of the Discrete Fourier Transform by Julius O. Smith III

  • FFTW: Fast implementation of the DFT - coded in C and under General Public License (GPL)

  • General Purpose FFT Package: Yet another fast DFT implementation in C & FORTRAN, permissive license

  • Explained: The Discrete Fourier Transform

  • Discrete Fourier Transform

  • Indexing and shifting of Discrete Fourier Transform

  • Discrete Fourier Transform Properties








