Class notes for a Capstone course taught Spring 2006-2007. The text for the course is the book Fast Fourier Transforms by James Walker [W1]. Examples using SAGE [S] illustrating the computations are given throughout.
Applications of FSs, FTs, and DFTs (including DCTs and FFTs) appear throughout engineering and signal processing.
In fact, Patent number 5835392 is ``Method for performing complex fast fourier transforms (FFT's)'', and Patent number 6609140 is ``Methods and apparatus for fast fourier transforms.'' These are just two of the many patents which refer to FFTs. Patent number 4025769 is ``Apparatus for convoluting complex functions expressed as fourier series'' and Patent number 5596661 is ``Monolithic optical waveguide filters based on Fourier expansion.'' These are just two of the many patents which refer to FSs. You can look these up in Google's Patent search engine,
to read more details. You'll see related patents on the same page. Try ``discrete cosine transform'' or ``digital filters'' to see other devices which are related to the topic of this course.
This section is not, strictly speaking, needed for the introduction of discrete Fourier transforms, but it helps put things in the right conceptual framework.
First, let's start with some motivation for the form of some of the
definitions. Let's start with something you are very familiar with:
power series
, where we are writing
the coefficients
,
, of the power series in a functional
notation instead of using subscripts. Usually, if
,
,
, ... is a given sequence then the
power series
is called the
generating function of the sequence.
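For instance, here is a short check in ordinary Python that convolving two coefficient sequences (the ``Cauchy product'') matches multiplying their generating functions; the sequences and the truncation length are arbitrary choices made only for this sketch.

# A sketch (ordinary Python): the generating function of the Cauchy
# product of two coefficient sequences agrees with the product of their
# generating functions, up to the truncation order.
def cauchy_product(a, b):
    # coefficients of (sum_j a[j] x^j) * (sum_k b[k] x^k), truncated
    n = min(len(a), len(b))
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(n)]

def eval_series(coeffs, x):
    # evaluate the truncated power series sum_k coeffs[k] * x^k
    return sum(c * x**k for k, c in enumerate(coeffs))

a = [1, 2, 3, 4, 5, 6, 7, 8]        # arbitrary sample coefficients
b = [1, -1, 1, -1, 1, -1, 1, -1]
c = cauchy_product(a, b)

x = 0.1   # small x, so the neglected tail is tiny
print(eval_series(a, x) * eval_series(b, x))
print(eval_series(c, x))             # agrees up to the truncation error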
The continuous analog of a sum is an integral:
To go a little further with this analogy, suppose we have
two power series
and
. The product is given by
The above integral,
,
is over
. Equivalently, it is an integral over
, but restricted to functions which vanish off
. If you replace these functions supported
on
by any function, then we are led to
the transform
If
is any function on the real line
whose absolute
value is integrable
It can be shown that
(extends to a well-defined
operator which) sends any square-integrable function to
another square-integrable function. In other words,
This proves that this example satisfies
Exercises:
The following result spells out the main properties of the FT.
This can be found in Chapter 5 of Walker [W1].
We shall use the FT to find a function
satisfying
solution:
Assume that for each
,
is in
,
as a function of
. Let
By (2), we have
Assume
.
We shall use the FT to find a function
satisfying
solution:
Assume that for each
,
is in
,
as a function of
. Let
By Schrödinger's PDE,
By the Plancherel formula,
History: Fourier series were discovered by J. Fourier, a Frenchman who was a mathematician among other things. In fact, Fourier was Napoleon's scientific advisor during France's invasion of Egypt in the late 1790s. When Napoleon returned to France, he ``elected'' (i.e., appointed) Fourier to be a Prefect - basically an important administrative post where he oversaw some large construction projects, such as highway construction. It was during this time that Fourier worked on the theory of heat on the side. His solution to the heat equation is basically what we teach in the last few weeks of your differential equations course. The exception is that our understanding of Fourier series now is much better than what was known in the early 1800's. For example, Fourier did not know of the integral formulas (4), (5) for the Fourier coefficients given below.
Motivation:
Fourier series, sine series, and cosine series are all
expansions for a function
in terms of trigonometric
functions, much in the same way
that a Taylor series is an expansion in terms of power
functions. Both Fourier and Taylor
series can be used to approximate a sufficiently ``nice'' function
. There are at least three important
differences between the two types of series.
(1) For a function to have a
Taylor series it must be differentiable, whereas for a Fourier
series it does not even have to be continuous. (2) Another difference is
that the Taylor series is typically not periodic (though it
can be in some cases), whereas a Fourier series is always
periodic. (3) Finally, the Taylor series (when it converges)
always converges to the function
, but the Fourier series may
not (see Dirichlet's theorem below for a more precise
description of what happens).
Given a ``nice'' periodic function
of period
, there are
with
and
with
such that
has (``real'') Fourier series
Another characterization states that the functions of bounded
variation on a closed interval are exactly those functions
which can be written as a difference
, where both
and
are monotone.
Let
denote the
vector space of functions on
of bounded variation.
Since any monotone function is Riemann integrable,
so is any function of bounded variation.
The map
is therefore well-defined on elements of
.
Likewise, given a ``nice'' periodic function
of period
, there are
with
and
with
such that
has (``complex'') Fourier series
proof: By hypothesis,
In the above proof, we have used the fact that
forms a sequence of periodic orthogonal functions on the interval
.
Here are some examples. In the first row of the table of plots below, we plot the partial sum
sage: N = 3
sage: L = [1+5*k for k in range(-N,N)]
sage: f = lambda t: sum([10*n^(-2)*exp(n*pi*I*t) for n in L])
sage: pts = [[real(f(i/100)),imag(f(i/100))] for i in range(200)]
sage: show(list_plot(pts))
sage:
sage: N = 10
sage: L = [1+5*k for k in range(-N,N)]
sage: f = lambda t: sum([10*n^(-2)*exp(n*pi*I*t) for n in L])
sage: pts = [[real(f(i/100)),imag(f(i/100))] for i in range(200)]
sage: show(list_plot(pts))
This code produces the pentagon pictures above. Changing the terms in the partial FS obviously changes the graph. For instance, replacing L = [1+5*k for k in range(-N,N)] by L = [2+5*k+k**2 for k in range(-N,N)] produces a figure looking like a goldfish!
The relationship between the real and the complex
Fourier series is as follows:
In
, replace the cosine and sine terms by
Let
denote the vector space of all functions which are
-times continuously differentiable and have period
.
proof: Integrate-by-parts:
Now use the ``trivial'' estimate
Let
denote the space of vectors
whose coordinates
are bounded and let
proof: See §13.232 in [T].
Perhaps more familiar is the following result, which was covered in your Differential Equations course.
proof: See §13.232 in [T].
The Fourier coefficients are given by
The SAGE commands
sage: f = maxima("10*x**2*(1-x)**2*exp(-2*Pi*i*x*n)")
sage: f.integral('x', 0, 1)
10*((i^2*n^2*Pi^2 - 3*i*n*Pi + 3)/(4*i^5*n^5*Pi^5) - (i^2*n^2*Pi^2 + 3*i*n*Pi + 3)*%e^-(2*i*n*Pi)/(4*i^5*n^5*Pi^5))
were used to compute the Fourier coefficients above.
Recall, to have a Fourier series you must be given two things:
(1) a period
, (2) a function
defined on
an interval of length
, usually we take
(but sometimes
is used instead).
The Fourier series of
with period
is
First, we discuss cosine series.
To have a cosine series you must be given two things:
(1) a period
, (2) a function
defined on
the interval of length
,
.
The cosine series of
with period
is
Next, we define sine series.
To have a sine series you must be given two things:
(1) a ``period''
, (2) a function
defined on
the interval of length
,
.
The sine series of
with period
is
One last definition: the symbol
is used above instead of
because of the fact that was pointed out above:
the Fourier series may not converge to
at every
point (recall Dirichlet's Theorem 8).
Question: Using periodicity and
Dirichlet's theorem, find the value that the
Fourier series of
converges to at
.
(Ans:
is continuous at
, so the FS at
converges to
by Dirichlet's theorem.
is not defined at
. Its FS
is periodic with period
, so at
the FS converges to
.
is not defined at
. Its FS
is periodic with period
, so at
the FS converges to
.)
The formulas for
and
enable
us to compute the Fourier series coefficients
,
and
. These formulas
give that the Fourier series of
is
Notice that
is only defined from
yet the Fourier
series is not only defined everywhere but is periodic with period
. Also, notice that
is not a bad approximation to
, especially away from its jump
discontinuities.
Roughly speaking, the more (everywhere)
differentiable the function is, the faster the
Fourier series converges and, therefore, the better the partial sums of the
Fourier series will approximate
.
Question: Using periodicity and
Dirichlet's theorem, find the value that the
sine series of
converges to at
.
(Ans:
is continuous at
, so the FS at
converges to
.
is not continuous at
, so at
the SS converges to
.
is not defined at
. Its SS
is periodic with period
, so at
the SS converges to
.)
The formula above for the sine series coefficients give that
solution:
Of course the Fourier series of
with period
is
Generally, the following functions compute the Fourier series
coefficients of
:
def fourier_series_cos_coeff(fcn,n,L): # n >= 0
    # a_n = (1/L) * integral of fcn(x)*cos(n*pi*x/L) over [-L,L]
    lowerlimit = -L
    upperlimit = L
    an = maxima('tldefint(%s*cos(%s*n*x/%s),x,%s,%s)/%s'%(fcn,maxima(pi),L,lowerlimit,upperlimit,L))
    return str(an).replace("%","")

def fourier_series_sin_coeff(fcn,n,L): # n > 0
    # b_n = (1/L) * integral of fcn(x)*sin(n*pi*x/L) over [-L,L]
    lowerlimit = -L
    upperlimit = L
    bn = maxima('tldefint(%s*sin(%s*n*x/%s),x,%s,%s)/%s'%(fcn,maxima(pi),L,lowerlimit,upperlimit,L))
    return str(bn).replace("%","")
However, Maxima's support for piecewise defined functions is rudimentary and the above functions will not give us what we want. So we compute them ``by hand'' but with some help from SAGE [S] (and Maxima, which is included with SAGE):
## n > 0 FS cosine coeff
fcn = 2; lowerlimit = 3/2; upperlimit = 3; L = 3
an1 = maxima('tldefint(%s*cos(%s*n*x/%s),x,%s,%s)'%(fcn,maxima(pi),L,lowerlimit,upperlimit)); an1
# 6*sin(%pi*n)/(%pi*n) - 6*sin(%pi*n/2)/(%pi*n)
fcn = -1; lowerlimit = -3; upperlimit = 0; L = 3
an2 = maxima('tldefint(%s*cos(%s*n*x/%s),x,%s,%s)'%(fcn,maxima(pi),L,lowerlimit,upperlimit)); an2
# -3*sin(%pi*n)/(%pi*n)
an = (an1+an2)/L
# (3*sin(%pi*n)/(%pi*n) - 6*sin(%pi*n/2)/(%pi*n))/3
In other words, for
## n = 0 FS cosine coeff
fcn = 2; lowerlimit = 3/2; upperlimit = 3; L = 3
an1 = maxima('tldefint(%s*cos(%s*0*x/%s),x,%s,%s)'%(fcn,maxima(pi),L,lowerlimit,upperlimit)); an1
# 3
fcn = -1; lowerlimit = -3; upperlimit = 0; L = 3
an2 = maxima('tldefint(%s*cos(%s*0*x/%s),x,%s,%s)'%(fcn,maxima(pi),L,lowerlimit,upperlimit)); an2
# -3
an = (an1+an2)/L
# 0
In other words,
## n > 0 FS sine coeff
fcn = 2; lowerlimit = 3/2; upperlimit = 3; L = 3
bn1 = maxima('tldefint(%s*sin(%s*n*x/%s),x,%s,%s)'%(fcn,maxima(pi),L,lowerlimit,upperlimit)); bn1
# 6*cos(%pi*n/2)/(%pi*n) - 6*cos(%pi*n)/(%pi*n)
fcn = -1; lowerlimit = -3; upperlimit = 0; L = 3
bn2 = maxima('tldefint(%s*sin(%s*n*x/%s),x,%s,%s)'%(fcn,maxima(pi),L,lowerlimit,upperlimit)); bn2
# 3/(%pi*n) - 3*cos(%pi*n)/(%pi*n)
bn = (bn1+bn2)/L
# ( - 9*cos(%pi*n)/(%pi*n) + 6*cos(%pi*n/2)/(%pi*n) + 3/(%pi*n))/3
In other words,
The Fourier series is therefore
sage: Pi = RR(pi)
sage: bn = lambda n:( - 9*cos(Pi*n)/(Pi*n) + 6*cos(Pi*n/2)/(Pi*n) + 3/(Pi*n))/3
sage: bn(1); bn(2); bn(3)
1.2732395447351628
-0.63661977236758127
0.42441318157838753
sage: an = lambda n:2*(sin(Pi*n/2))/(Pi*n)
sage: an(1); an(2); an(3)
0.63661977236758138
0.000000000000000038981718325193755
-0.21220659078919379
Here are the first few numerical values of these coefficients:
solution:
Of course the Cosine series of
with period
is
A simple Python program:
def cosine_series_coeff(fcn,n,L): # n >= 0
    # a_n = (2/L) * integral of fcn(x)*cos(n*pi*x/L) over [0,L]
    lowerlimit = 0
    upperlimit = L
    an = maxima('2*tldefint(%s*cos(%s*n*x/%s),x,%s,%s)/%s'%(fcn,maxima(pi),L,lowerlimit,upperlimit,L))
    return str(an).replace("%","")
It was noted above that this program will not help us for the type of
function we are dealing with here.
So, we have SAGE do the computations ``piece-by-piece'':
## n > 0
fcn = 2; lowerlimit = 3/2; upperlimit = 3; L = 3
an1 = maxima('tldefint(%s*cos(%s*n*x/%s),x,%s,%s)'%(fcn,maxima(pi),L,lowerlimit,upperlimit)); an1
# 6*sin(%pi*n)/(%pi*n) - 6*sin(%pi*n/2)/(%pi*n)
an = (2/L)*an1
an
# 2*(6*sin(%pi*n)/(%pi*n) - 6*sin(%pi*n/2)/(%pi*n))/3
In other words,
## n = 0
an1 = maxima('tldefint(%s*cos(%s*0*x/%s),x,%s,%s)'%(fcn,maxima(pi),L,lowerlimit,upperlimit)); an1
# 3
an = (2/L)*an1
an
# 2
# a0 = 2
or the command
sage: f1 = lambda x:2
sage: f2 = lambda x:0
sage: f = Piecewise([[(0,3/2),f2],[(3/2,3),f1]])
sage: f.cosine_series_coefficient(0,3)
2
In other words, the Cosine series is therefore
sage: an = lambda n:4*sin(Pi*n/2)/(Pi *n)
sage: an(1); an(2); an(3)
1.2732395447351628
0.000000000000000077963436650387510
-0.42441318157838759
So,
,
,
.
We leave this one as an exercise for the reader!
The heat equation with zero ends boundary conditions
models the temperature of an (insulated) wire of length
:
Here
denotes the temperature at a point
on the wire at time
. The initial temperature
is specified by the equation
Method:
The function
, and some of the partial sums of
its sine series, look like Figure 7.
As you can see, taking more and more terms gives functions which better
and better approximate
.
sage: R = PolynomialRing(QQ,"x"); x = R.gen()
sage: f1 = -x^0; f2 = 2*x^0
sage: f = Piecewise([[(-3,-3/2),-f2],[(-3/2,0),-f1],[(0,3/2),f1],[(3/2,3),f2]])
sage: fs10 = f.fourier_series_partial_sum(10,3)
sage: FS10 = lambda t:RR(sage_eval(fs10.replace("x",str(t))))
sage: Pfs10 = plot(FS10,-3,3)
sage: Pf = f.plot()
sage: show(Pf+Pfs10)
sage: fs30 = f.fourier_series_partial_sum(30,3)
sage: FS30 = lambda t:RR(sage_eval(fs30.replace("x",str(t))))
sage: Pfs30 = plot(FS30,-3,3)
sage: show(Pf+Pfs10+Pfs30)
The solution to the heat equation, therefore, is
The heat equation with insulated ends boundary conditions
models the temperature of an (insulated) wire of length
:
Method:
Thus
sage: R = PolynomialRing(QQ,"x"); x = R.gen()
sage: f1 = -x^0; f2 = 2*x^0
sage: f = Piecewise([[(-3,-3/2),f2],[(-3/2,3/2),f1],[(3/2,3),f2]])
sage: fs10 = f.fourier_series_partial_sum(10,3)
sage: FS10 = lambda t:RR(sage_eval(fs10.replace("x",str(t))))
sage: fs30 = f.fourier_series_partial_sum(30,3)
sage: FS30 = lambda t:RR(sage_eval(fs30.replace("x",str(t))))
sage: Pf = f.plot()
sage: Pfs30 = plot(FS30,-3,3)
sage: Pfs10 = plot(FS10,-3,3)
sage: show(Pf+Pfs10+Pfs30)
As you can see, taking more and more terms gives functions which better
and better approximate
.
The solution to the heat equation, therefore, is
Explanation:
Where does this solution come from? It comes from the method of separation of variables and the superposition principle. Here is a short explanation. We shall only discuss the ``zero ends'' case (the ``insulated ends'' case is similar).
First, assume the solution to the PDE
has the ``factored'' form
Case
: This implies
, for some constants
. Therefore
Case
: Write (for convenience)
, for some
.
The ODE for
implies
, for some
constants
. Therefore
These are the solutions of the heat equation which can be written in factored form. By superposition, ``the general solution'' is a sum of these:
We have not yet used the IC
or the
BCs
. We do that next.
What do the BCs tell us?
Plugging in
into (7) gives
There is one remaining condition which our solution
must satisfy.
What does the IC tell us? Plugging
into (8)
gives
The following SAGE code for the plot above is very time-consuming:
sage: f1 = lambda x:-2
sage: f2 = lambda x:1
sage: f3 = lambda x:-1
sage: f4 = lambda x:2
sage: f = Piecewise([[(-3,-3/2),f1],[(-3/2,0),f2],[(0,3/2),f3],[(3/2,3),f4]])
sage: N = 50
sage: t = 0.0; F = [RR(exp(-(j*pi/3)^2*t)) for j in range(N)]
sage: P0 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: t = 0.3; F = [RR(exp(-(j*pi/3)^2*t)) for j in range(N)]
sage: P1 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: t = 0.6; F = [RR(exp(-(j*pi/3)^2*t)) for j in range(N)]
sage: P2 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: t = 0.9; F = [RR(exp(-(j*pi/3)^2*t)) for j in range(N)]
sage: P3 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: P = f.plot(rgbcolor=(0.8,0.1,0.3), plot_points=40)
sage: show(P+P0+P1+P2)
sage: N = 50
sage: t = 0.0; F = [RR((1-j/N)*exp(-(j*pi/3)^2*t)) for j in range(N)]
sage: Pc0 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: t = 0.3; F = [RR((1-j/N)*exp(-(j*pi/3)^2*t)) for j in range(N)]
sage: Pc1 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: t = 0.6; F = [RR((1-j/N)*exp(-(j*pi/3)^2*t)) for j in range(N)]
sage: Pc2 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: t = 0.9; F = [RR((1-j/N)*exp(-(j*pi/3)^2*t)) for j in range(N)]
sage: Pc3 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: show(P+Pc0+Pc1+Pc2)
The one-dimensional Schrödinger equation for a free particle is
If we restrict the particle to a ``box'' then (for our simplified one-dimensional quantum-mechanical model) we can impose a boundary condition of the form
Method:
Taking more and more terms gives functions which better
and better approximate
.
The solution to Schrödinger's equation, therefore, is
Explanation:
Where does this solution come from? It comes from the method of separation of variables and the superposition principle. Here is a short explanation.
First, assume the solution to the PDE
has the ``factored'' form
Case
:
Write (for convenience)
, for some
.
The ODE for
implies
, for some
constants
.
Therefore
Case
: This implies
, for some constants
. Therefore
Case
: Write (for convenience)
, for some
.
The ODE for
implies
, for some
constants
. Therefore
These are the solutions of Schrödinger's equation which can be written in factored form. By superposition, ``the general solution'' is a sum of these:
There is one remaining condition which our solution
must satisfy.
We have not yet used the IC
. We do that next.
Plugging
into (9) gives
The wave equation with zero ends boundary conditions
models the motion of a (perfectly
elastic) guitar string of length
:
Method:
A special case: When there is no initial velocity then
and the solution to the wave equation, therefore, is
The following SAGE code for the plot above is very time-consuming:
sage: f1 = lambda x:-2
sage: f2 = lambda x:1
sage: f3 = lambda x:-1
sage: f4 = lambda x:2
sage: f = Piecewise([[(-pi,-pi/2),f1],[(-pi/2,0),f2],[(0,pi/2),f3],[(pi/2,pi),f4]])
sage: N = 25
sage: t = 0.0; F = [RR(cos((j+1)*t)) for j in range(N)]
sage: P0 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: t = 0.3; F = [RR(cos((j+1)*t)) for j in range(N)]
sage: P1 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: t = 0.6; F = [RR(cos((j+1)*t)) for j in range(N)]
sage: P2 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: t = 0.9; F = [RR(cos((j+1)*t)) for j in range(N)]
sage: P3 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: P = f.plot(rgbcolor=(0.8,0.1,0.3), plot_points=40)
sage: show(P+P0+P1+P2)
sage: f1 = lambda x:-2
sage: f2 = lambda x:1
sage: f3 = lambda x:-1
sage: f4 = lambda x:2
sage: f = Piecewise([[(-pi,-pi/2),f1],[(-pi/2,0),f2],[(0,pi/2),f3],[(pi/2,pi),f4]])
sage: N = 25
sage: t = 0.0; F = [RR((1-j/N)*cos((j+1)*t)) for j in range(N)]
sage: P0 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: t = 0.3; F = [RR((1-j/N)*cos((j+1)*t)) for j in range(N)]
sage: P1 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: t = 0.6; F = [RR((1-j/N)*cos((j+1)*t)) for j in range(N)]
sage: P2 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: t = 0.9; F = [RR((1-j/N)*cos((j+1)*t)) for j in range(N)]
sage: P3 = f.plot_fourier_series_partial_sum_filtered(N,pi,F,-4,4)
sage: P = f.plot(rgbcolor=(0.8,0.1,0.3), plot_points=40)
sage: show(P+P0+P1+P2)
Let us first ``discretize'' the integral for the
-th coefficient
of the Fourier series and use that as a basis for defining the
DFT. Using the ``left-hand Riemann sum'' approximation for the
integral using
subdivisions, we have
The normalized
-point discrete Fourier transform (or NDFT)
of the vector
is
Note that the powers of
are
equally distributed
points on the unit circle.
Both the DFT and NDFT define linear transformations
and therefore
can be described by matrices. If we regard the vector
as a column vector then the matrix for
the DFT is:
.
The DFT of
This was computed using the SAGE commands
sage: J = range(10)
sage: A = [1/10 for j in J]
sage: s = IndexedSequence(A,J)
sage: s.dft()
Indexed sequence: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
indexed by [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
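For comparison, here is the same computation done with a naive matrix-style DFT in ordinary Python (a sketch only; the sign of the exponent is an assumption, although the constant sequence used here is insensitive to it).

# A sketch of the DFT as a matrix-vector product, in ordinary Python.
# Applied to the constant sequence (1/10, ..., 1/10) it returns
# (1, 0, ..., 0) up to rounding, matching the SAGE example above.
from cmath import exp, pi

def dft(x):
    # k-th entry: sum_n x[n] * exp(2*pi*i*n*k/N)
    N = len(x)
    return [sum(x[n] * exp(2j * pi * n * k / N) for n in range(N))
            for k in range(N)]

A = [1 / 10] * 10
print([round(abs(v), 12) for v in dft(A)])   # [1.0, 0.0, ..., 0.0]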
There is an analog of the orthogonality used in Theorem 4 above.
Before proving this algebraically, I claim that this is ``geometrically
obvious.'' To see this, recall that the average of
any
points in the plane - whether written
as vectors or as complex numbers - is simply the ``center of gravity''
of points, regarded as equally weighted point masses.
The sum above is (if
does not divide
) the
``center of gravity'' of a collection of point
masses which are equi-distributed about the unit circle.
This center of gravity must be the origin.
On the other hand, if
then all the points are concentrated at
, so
the total mass is
in that case.
proof:
If
then we have
.
Note
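The ``center of gravity'' argument is also easy to check numerically in ordinary Python (a sketch; N = 12 and the exponents tested are arbitrary choices).

# A sketch: the N-th roots of unity raised to a fixed power k sum to N
# when N divides k, and to 0 otherwise.
from cmath import exp, pi

N = 12
for k in [0, 1, 5, 7, 12, 24]:
    s = sum(exp(2j * pi * n * k / N) for n in range(N))
    print(k, round(s.real, 10), round(s.imag, 10))
# k = 0, 12, 24 give 12.0; the other exponents give 0.0 (up to rounding)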
As a corollary of this lemma, we see that the complex matrix
is ``orthogonal'' (this is a technical term we need to define).
A real square matrix is called orthogonal if row
is orthogonal to column
for all
. Here when we say two
real vectors are orthogonal of course we mean that they are non-zero
vectors and that their dot product is 0. The definition for complex matrices
is a bit different. We first define on
the
Hermitian inner product (or just inner product, for short):
proof:
The
-th row of
is
the vector
,
and the complex conjugate of this vector is
the vector
,
so
, which is true if and only if
Note that this matrix
is not ``real orthogonal'':
Here's another matrix calculation based on Lemma 21:
If
denotes the standard basis vector of
whose
-th coordinate is
and all other coordinates are 0,
then
The motivation behind the definition of the normalized DFT is the following computation:
Because of the computation (11), it follows that
We shall take an aside to try to address the problem of
computing eigenvalues and eigenvectors of
, at least in
a special case. This shall not be used in other parts of the
course - think of this as ``for your cultural benefit''.
It also follows from this computation (11)
that
acts as the identity. If
is any
square matrix satisfying
then
any eigenvalue
of
must be an
-th root of unity (indeed,
, for some non-zero eigenvector
,
so
, so
).
Due to this fact, it follows that the
only possible eigenvalues of
are
.
It is not hard to describe the eigenspaces intrinsically. Let
Note that the eigenvalues of the
are not the same
as those of
since
.
The eigenvalues of the DFT must belong to
.
Recall that for the Fourier transform on
the number
was an eigenvalue. The calculation there cannot
be ``discretized'' since we had to use the Residue Theorem from complex
analysis. We have to take another approach.
Let
. This is a set but if we
perform arithmetic (such as addition and multiplication)
mod
, then this can be regarded as an abelian group.
First, assume that there is a function
We have
Let
and
.
It can be checked that this defines a function as in (15).
We first check explicitly that assumption (16) is true:
Let
It turns out that
,
and
,
can both be written as a linear combination
of powers of
, so it may come as no surprise that
the components of their eigenvectors also
can be written as a linear combination of powers of
.
In other words, if the eigenvalue is in
then one might expect to find eigenvector with
components in
.
On the other hand,
do not belong to
. So, we should expect that the eigenvectors
in this case have components lying in some extension of
in
. It turns out, all the eigenvectors
have components in
, the field
extension of
generated by the
-th roots of unity.
The eigenspace of
is 2-dimensional,
with basis
For example,
The SAGE [S] code used to help with this calculation:
----------------------------------------------------------------------
| SAGE Version 1.7.1, Release Date: 2007-01-18 |
| Type notebook() for the GUI, and license() for information. |
----------------------------------------------------------------------
sage: quadratic_residues(5)
[0, 1, 4]
sage: quadratic_residues(7)
[0, 1, 2, 4]
sage: MS = MatrixSpace(CyclotomicField(5),5,5)
sage: V = VectorSpace(CyclotomicField(5),5)
sage: v = V([0,1,-1,-1,1])
sage: z = CyclotomicField(5).gen()
sage: F5 = MS([[1,1,1,1,1],[1,z,z^2,z^3,z^4],[1,z^2,z^4,z^6,z^8],[1,z^3,z^6,z^9,z^(12)],[1,z^4,z^8,z^(12),z^(16)]])
sage: latex(F5)
\left(\begin{array}{rrrrr}
1&1&1&1&1\\
1&zeta5&zeta5^{2}&zeta5^{3}&-zeta5^{3} - zeta5^{2} - zeta5 - 1\\
1&zeta5^{2}&-zeta5^{3} - zeta5^{2} - zeta5 - 1&zeta5&zeta5^{3}\\
1&zeta5^{3}&zeta5&-zeta5^{3} - zeta5^{2} - zeta5 - 1&zeta5^{2}\\
1&-zeta5^{3} - zeta5^{2} - zeta5 - 1&zeta5^{3}&zeta5^{2}&zeta5
\end{array}\right)
sage: F5*v
(0, -2*zeta5^3 - 2*zeta5^2 - 1, 2*zeta5^3 + 2*zeta5^2 + 1, 2*zeta5^3 + 2*zeta5^2 + 1, -2*zeta5^3 - 2*zeta5^2 - 1)
sage: a = 2*z^3 + 2*z^2 + 1
sage: a^2
5
sage: exp(pi*I)
-1.00000000000000 + 0.00000000000000323829504877970*I
sage: zz=exp(2*pi*I/5)
sage: aa = 2*zz^3 + 2*zz^2 + 1
sage: aa
-2.23606797749979 + 0.0000000000000581756864903582*I
sage: zz=exp(-2*pi*I/5)
sage: aa = 2*zz^3 + 2*zz^2 + 1
sage: aa
-2.23606797749979 - 0.0000000000000588418203051332*I
sage: MS = MatrixSpace(CyclotomicField(5),5,5)
sage: z = CyclotomicField(5).gen()
sage: F5 = MS([[1,1,1,1,1],[1,z,z^2,z^3,z^4],[1,z^2,z^4,z^6,z^8],[1,z^3,z^6,z^9,z^(12)],[1,z^4,z^8,z^(12),z^(16)]])
sage: F5.fcp()
(x + -2*zeta5^3 - 2*zeta5^2 - 1) * (x + 2*zeta5^3 + 2*zeta5^2 + 1)^2 * (x^2 + 5)
sage: z^4
-zeta5^3 - zeta5^2 - zeta5 - 1
sage: MS.identity_matrix()
[1 0 0 0 0]
[0 1 0 0 0]
[0 0 1 0 0]
[0 0 0 1 0]
[0 0 0 0 1]
sage: A = F5 - (-2*z^3 - 2*z^2 - 1)*MS.identity_matrix()
sage: A.kernel()
Vector space of degree 5 and dimension 2 over Cyclotomic Field of order 5 and degree 4
Basis matrix:
[ 1 0 -zeta5^3 - zeta5^2 - 1 -zeta5^3 - zeta5^2 - 1 0]
[ 0 1 -1 -1 1]
sage: A.kernel().basis()
[
(1, 0, -zeta5^3 - zeta5^2 - 1, -zeta5^3 - zeta5^2 - 1, 0),
(0, 1, -1, -1, 1)
]
sage: -z^3 - z^2 - 1==z+z^4
True
sage: A = F5 + (-2*z^3 - 2*z^2 - 1)*MS.identity_matrix()
sage: A.kernel().basis()
[
(1, 1/2*zeta5^3 + 1/2*zeta5^2, 1/2*zeta5^3 + 1/2*zeta5^2, 1/2*zeta5^3 +
1/2*zeta5^2, 1/2*zeta5^3 + 1/2*zeta5^2)
]
sage: zz+zz^4
0.618033988749874 + 0.0000000000000115463194561016*I
sage: 4*(zz+zz^4)
2.47213595499949 + 0.0000000000000461852778244065*I
sage: (zz+zz^4)^2
0.381966011250079 + 0.0000000000000142720357376695*I
sage: (zz+zz^4)^4
0.145898033750295 + 0.0000000000000109028651262724*I
sage: (zz+zz^4)^5
0.0901699437494591 + 0.00000000000000842292652849006*I
sage: (zz+zz^4+1)^2
2.61803398874982 + 0.0000000000000373646746498727*I
sage: 1/2*zz^3 + 1/2*zz^2
-0.809016994374949 - 0.0000000000000147104550762833*I
sage: MSI = MatrixSpace(CyclotomicField(20),5,5)
sage: z20 = CyclotomicField(20).gen()
sage: z5 = z20^4
sage: I in CyclotomicField(20)
False
sage: z4 = z20^5
sage: z4^4
1
sage: z4
zeta20^5
sage: z4^3
-zeta20^5
sage: F5I = MS([[1,1,1,1,1],[1,z5,z5^2,z5^3,z5^4],[1,z5^2,z5^4,z5^6,z5^8],[1,z5^3,z5^6,z5^9,z5^(12)],[1,z5^4,z5^8,z5^(12),z5^(16)]])
sage: A = F5I + z4*(-2*z5^3 - 2*z5^2 - 1)*MSI.identity_matrix()
sage: A.kernel().basis()
[
(0, 1, zeta20^7 + zeta20^6 - zeta20^5 - zeta20^4 + zeta20^3 - 2*zeta20 - 1,
-zeta20^7 - zeta20^6 + zeta20^5 + zeta20^4 - zeta20^3 + 2*zeta20 + 1, -1)
]
sage: a = z20^7 + z20^6 - z20^5 - z20^4 + z20^3 - 2*z20 - 1
sage: a^5
165*zeta20^7 + 125*zeta20^6 - 105*zeta20^5 - 125*zeta20^4 + 45*zeta20^3 - 210*zeta20 - 193
sage: a in CyclotomicField(5)
False
sage: zz20=exp(-2*pi*I/20)
sage: zz20^5
0.00000000000000127675647831893 - 1.00000000000000*I
sage: A = F5I - z4*(-2*z5^3 - 2*z5^2 - 1)*MSI.identity_matrix()
sage: A.kernel().basis()
[
(0, 1, -zeta20^7 + zeta20^6 + zeta20^5 - zeta20^4 - zeta20^3 + 2*zeta20 - 1,
zeta20^7 - zeta20^6 - zeta20^5 + zeta20^4 + zeta20^3 - 2*zeta20 + 1, -1)
]
sage: -zz^3 - zz^2 - 1
0.618033988749899 + 0.0000000000000294209101525666*I
sage: 1/2*zz^3 + 1/2*zz^2
-0.809016994374949 - 0.0000000000000147104550762833*I
sage: -zz20^7 + zz20^6 + zz20^5 - zz20^4 - zz20^3 + 2*zz20 - 1
0.284079043840412 + 0.000000000000000222044604925031*I
sage: zz20^7 + zz20^6 - zz20^5 - zz20^4 + zz20^3 - 2*zz20 - 1
-3.52014702134020 - 0.00000000000000222044604925031*I
sage: [j for j in range(5)]
[0, 1, 2, 3, 4]
sage: [j*2%5 for j in range(5)]
[0, 2, 4, 1, 3]
sage: [j*3%5 for j in range(5)]
[0, 3, 1, 4, 2]
sage: [j*4%5 for j in range(5)]
[0, 4, 3, 2, 1]
sage: F5.fcp()
(x + -2*zeta5^3 - 2*zeta5^2 - 1) * (x + 2*zeta5^3 + 2*zeta5^2 + 1)^2 * (x^2 + 5)
sage: -2*zz^3 - 2*zz^2 - 1
2.23606797749979 + 0.0000000000000588418203051332*I
sage:
The table below lists the multiplicity of the
eigenvalues of
, for some small values of
:
| N | mult. of $\sqrt{N}$ | mult. of $-\sqrt{N}$ | mult. of $-i\sqrt{N}$ | mult. of $i\sqrt{N}$ |
| 4 | 2 | 1 | 0 | 1 |
| 5 | 2 | 1 | 1 | 1 |
| 6 | 2 | 2 | 1 | 1 |
| 7 | 2 | 2 | 1 | 2 |
| 8 | 3 | 2 | 1 | 2 |
| 9 | 3 | 2 | 2 | 2 |
| 10 | 3 | 3 | 2 | 2 |
| 11 | 3 | 3 | 2 | 3 |
| 12 | 4 | 3 | 2 | 3 |
| 13 | 4 | 3 | 3 | 3 |
In general, the multiplicity of
is equal to
the rank of
in (14).
According to Good [G], this is
| N | mult. of $\sqrt{N}$ | mult. of $-\sqrt{N}$ | mult. of $-i\sqrt{N}$ | mult. of $i\sqrt{N}$ |
| $4m$ | $m+1$ | $m$ | $m-1$ | $m$ |
| $4m+1$ | $m+1$ | $m$ | $m$ | $m$ |
| $4m+2$ | $m+1$ | $m+1$ | $m$ | $m$ |
| $4m+3$ | $m+1$ | $m+1$ | $m$ | $m+1$ |
Here is the SAGE code verifying the last line of the first table above:
sage: p = 13
sage: MS = MatrixSpace(CyclotomicField(p),p,p)
sage: z = CyclotomicField(p).gen()
sage: zz = exp(-2*pi*I/p)
sage: r = lambda k: [z^(j*k) for j in range(p)]
sage: F = MS([r(k) for k in range(p)])
sage: F.fcp()
(x + -2*zeta13^11 - 2*zeta13^8 - 2*zeta13^7 - 2*zeta13^6 - 2*zeta13^5 - 2*zeta13^2 - 1)^3 * (x + 2*zeta13^11 + 2*zeta13^8 + 2*zeta13^7 + 2*zeta13^6 + 2*zeta13^5 + 2*zeta13^2 + 1)^4 * (x^2 + 13)^3
sage: 2*zz^11 + 2*zz^8 + 2*zz^7 + 2*zz^6 + 2*zz^5 + 2*zz^2 + 1
-3.60555127546399 - 0.00000000000000177635683940025*I
Here is an example of such a function.
Let
be a prime number and let
if
is a non-zero square mod
and
is
is a non-zero non-square mod
and
. This is called the Legendre function
or the quadratic residue symbol.
It turns out that (since the product of two squares is
a square and the product of a square with a non-square is a
non-square) that
, for all non-zero
.
Also, it can be checked that, for any
, the set
of multiples of
mod
is the same as the set of all integers
mod
:
We saw in (10) the approximation which motivated the
definition of the DFT. If
is a ``nice'' function on
and if
is the vector of sampled values of
then
We compute its
-th Fourier series coefficient:
If we use the approximation
With
, even the first few values aren't very close.
| k | FS coefficient c_k | DFT approximation | error |
| 0 | 0.63212 | 0.664253 | 0.03213 |
| 1 | 0.01561 - 0.09811i | 0.04775 - 0.09478i | 0.03231 |
| 2 | 0.003977 - 0.04998i | 0.03615 - 0.04319i | 0.03288 |
| 3 | 0.001774 - 0.03344i | 0.03401 - 0.02287i | 0.03392 |
and here is the histogram plot:
You can see from these graphs that the larger
gets,
the worse the approximation is.
When
the approximation is about 10 times better, as is to be expected.
| k | FS coefficient c_k | DFT approximation | error |
| 0 | 0.6321 | 0.6352 | 0.003173 |
| 1 | 0.01561 - 0.09812*I | 0.01878 - 0.09808*I | 0.003165 |
| 2 | 0.003977 - 0.04998*I | 0.007143 - 0.04992*I | 0.003166 |
| 3 | 0.001774 - 0.03344i | 0.004939 - 0.03334*I | 0.003167 |
Here is the list plot of the values of
(the error has been scaled up by a factor of
so
that the plot comes out nicer):
Here is the SAGE code for the
case:
sage: CC5 = ComplexField(15)
sage: N = 10
sage: ck = lambda k:(1-exp(-1))/(1+2*pi*I*k)
sage: dftk = lambda k:(1/N)*(1-exp(-1))/(1-exp((-1-2*pi*I*k)/N))
sage: [CC5(ck(j)) for j in range(N)]
[0.6321,
0.01561 - 0.09812*I,
0.003977 - 0.04998*I,
0.001774 - 0.03344*I,
0.0009991 - 0.02511*I,
0.0006397 - 0.02010*I,
0.0004444 - 0.01675*I,
0.0003265 - 0.01436*I,
0.0002500 - 0.01257*I,
0.0001976 - 0.01117*I]
sage: [CC5(dftk(j)) for j in range(N)]
[0.6642,
0.04775 - 0.09479*I,
0.03615 - 0.04319*I,
0.03401 - 0.02287*I,
0.03334 - 0.01024*I,
0.03318 - 0.00000000000000005104*I,
0.03334 + 0.01024*I,
0.03401 + 0.02287*I,
0.03615 + 0.04318*I,
0.04775 + 0.09478*I]
sage: [abs(CC5(ck(j))-CC5(dftk(j))) for j in range(N)]
[0.03213,
0.03231,
0.03288,
0.03392,
0.03560,
0.03825,
0.04256,
0.05021,
0.06631,
0.1161]
sage: L = [abs(CC5(ck(j))-CC5(dftk(j))) for j in range(N)]
sage: show(list_plot(L))
sage: J = range(N)
sage: s = IndexedSequence(L,J)
sage: (s.plot_histogram()).save("histogram-dftk-vs-ck10.png",xmin=-1,xmax=10,ymin=-0.5,ymax=0.5)
Let's try one more example.
You can enter this function and plot its values, for
,
in SAGE using the commands
sage: f0 = (x+1)^2; f1 = x^2; f2 = (x-1)^2
sage: f = Piecewise([[(-1,0),f0],[(0,1),f1],[(1,2),f2]])
sage: show(f.plot())
We compute, for
,
Here is the plot of the real part of the partial sum
:
sage: c = lambda k : I*((2*k^2*pi^2 - 1)/(4*k^3*pi^3) + 1/(4*k^3*pi^3)) + 1/(2*k^2*pi^2)
sage: ps = lambda x: 1/3+sum([(c(k)*exp(2*I*pi*k*x)).real() for k in range(-10,10) if k!=0])
sage: show(plot(ps,-1,2))
This section is based, not on Walker's book [W1] but on material in Frazier's well-written book [F].
Let
denote the abelian group of integers mod
,
and let
Define convolution by
$(a \ast b)(n) = \sum_{m \in \mathbb{Z}_N} a(m)\, b(n-m), \qquad n \in \mathbb{Z}_N. \qquad (18)$
sage: F = CyclotomicField(16)
sage: J = range(16)
sage: A = [F(0) for i in J]; A[0] = F(1); A[1] = F(1); A[2] = F(1); A[3] = F(1)
sage: z = F.gen()
sage: B = [z^(i) for i in J]
sage: sA = IndexedSequence(A,J)
sage: sB = IndexedSequence(B,J)
sage: sA.convolution(sB)
[1, zeta16 + 1, zeta16^2 + zeta16 + 1, zeta16^3 + zeta16^2 + zeta16 + 1, zeta16^4 + zeta16^3 + zeta16^2 + zeta16, zeta16^5 + zeta16^4 + zeta16^3 + zeta16^2, zeta16^6 + zeta16^5 + zeta16^4 + zeta16^3, zeta16^7 + zeta16^6 + zeta16^5 + zeta16^4, zeta16^7 + zeta16^6 + zeta16^5 - 1, zeta16^7 + zeta16^6 - zeta16 - 1, zeta16^7 - zeta16^2 - zeta16 - 1, -zeta16^3 - zeta16^2 - zeta16 - 1, -zeta16^4 - zeta16^3 - zeta16^2 - zeta16, -zeta16^5 - zeta16^4 - zeta16^3 - zeta16^2, -zeta16^6 - zeta16^5 - zeta16^4 - zeta16^3, -zeta16^7 - zeta16^6 - zeta16^5 - zeta16^4, -zeta16^7 - zeta16^6 - zeta16^5, -zeta16^7 - zeta16^6, -zeta16^7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Let
denote a primitive
root of unity
in
(
, in the notation above,
is one good choice). Recall, for
, the
discrete Fourier transform of
was defined by
A basic and very useful fact about the Fourier transform is that the Fourier transform of a convolution is the product of the Fourier transforms. Here's the proof:
In other words, a Fourier multiplier operator (represented
in the standard basis) is a linear transformation
of the form
, where
is an
diagonal matrix.
Note that the product of two Fourier multiplier operators
is a Fourier multiplier operator:
.
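Here is a quick numerical check of the convolution theorem just proved, in ordinary Python (a sketch; the sign convention in the exponent and the random test vectors are arbitrary choices).

# A sketch verifying, numerically, that the DFT of a cyclic convolution
# is the entrywise product of the DFTs.
from cmath import exp, pi
import random

def dft(x):
    N = len(x)
    return [sum(x[n] * exp(2j * pi * n * k / N) for n in range(N))
            for k in range(N)]

def cyclic_convolution(a, b):
    # (a*b)(n) = sum_m a(m) b(n-m), indices taken mod N
    N = len(a)
    return [sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)]

N = 8
a = [random.randint(-5, 5) for _ in range(N)]
b = [random.randint(-5, 5) for _ in range(N)]

lhs = dft(cyclic_convolution(a, b))
rhs = [A * B for A, B in zip(dft(a), dft(b))]
print(max(abs(u - v) for u, v in zip(lhs, rhs)))   # ~ 0 (rounding error only)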
Stereo systems and FMOs:
Here is one way in which Fourier multiplier operators
can be thought of in terms of Dolby stereo (this is a
grossly over-simplified description but you will get the idea).
Dolby cuts off the high frequencies, which are often
crackles and pops and other noise in the channel,
making the music sound nicer.
How does it do that? If
represents the digital
sample of the sound,
represents the
frequencies of the sound. To cut off the high frequencies,
multiply
by some (``low-pass filter'')
which is
0 on the high frequencies and
on the rest: you get
. To recover the sound from this, take the inverse
DFT,
. This is the same sound, but without the high
frequencies. This ``filtered sound'' is an example of
a Fourier multiplier operator.
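Here is a toy version of that filtering in ordinary Python (a sketch only: the test signal, the cutoff, and the DFT sign convention are choices made for illustration and have nothing to do with the actual Dolby system).

# A sketch of a "low-pass" Fourier multiplier operator: transform the
# samples, zero out the high-frequency bins, and transform back.
from cmath import exp, pi
from math import sin

def dft(x, sign=+1):
    N = len(x)
    return [sum(x[n] * exp(sign * 2j * pi * n * k / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [v / N for v in dft(X, sign=-1)]

N = 64
# a low-frequency tone plus a small high-frequency "crackle"
x = [sin(2 * pi * 3 * n / N) + 0.3 * sin(2 * pi * 25 * n / N) for n in range(N)]

X = dft(x)
cutoff = 10   # keep only the bins k <= 10 or k >= N - 10
m = [1 if (k <= cutoff or k >= N - cutoff) else 0 for k in range(N)]
y = [v.real for v in idft([mk * Xk for mk, Xk in zip(m, X)])]

# the filtered signal is (up to rounding) the low-frequency tone alone
print(max(abs(y[n] - sin(2 * pi * 3 * n / N)) for n in range(N)))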
The following result appears as Theorem 2.19 of [F]. It characterizes the Fourier multiplier operators.
We shall define all these terms (convolution operator, etc.), give some examples, and prove this theorem.
commutes, for all
.
In terms of the above theorem, this lemma says ``
''.
proof: Recall
In other words, to go from one row to the next in a circulant matrix,
just ``rotate'' or cyclically permute the rows.
In particular, (setting
)
, so all the diagonal entries must be the same.
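The rotation property, and the connection with convolution proved below, are easy to see in a small ordinary-Python example (a sketch; the indexing convention C[i][j] = c[(j-i) mod N] and the 5-component test data are assumptions made only for illustration).

# A sketch: build a circulant matrix by cyclically rotating its first
# row, and check that applying it to a vector is a cyclic convolution
# with a reordering of that first row.
def circulant(first_row):
    N = len(first_row)
    return [[first_row[(j - i) % N] for j in range(N)] for i in range(N)]

def mat_vec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

def cyclic_convolution(a, b):
    N = len(a)
    return [sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)]

c = [4, 7, 5, 8, 6]          # first row (arbitrary)
x = [1, 0, 1, 0, 0]          # arbitrary test vector
C = circulant(c)
for row in C:
    print(row)               # each row is the previous row rotated by one

b = [c[(-n) % 5] for n in range(5)]   # b(n) = c(-n mod N)
print(mat_vec(C, x))
print(cyclic_convolution(b, x))       # the same vector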
In terms of the above theorem, this lemma says ``
''.
proof:
Since
is linear, with respect to the standard basis
it is represented by an
matrix
| 0 | 1 | 2 | 3 | 4 |
| 1 | 0 | 1 | 0 | 0 |
| 0 | 1 | 0 | 1 | 0 |
| 4 | 7 | 5 | 8 | 6 |
| 6 | 4 | 7 | 5 | 8 |
| 6 | 4 | 7 | 5 | 8 |
We see in this example that
, consistent with the fact
``
''.
In terms of the above theorem, this lemma says ``
''.
proof:
Let
denote a translation invariant linear operator.
Define
by
,
. Then
Using the above lemmas, we see the connection between maps given by circulant matrices and convolution operators.
This says
.
proof:
By hypothesis,
, for some
. Let us compute the action of
on the function
, where
.
We have
In terms of the above theorem, this computation proves
``
''.
This says
.
proof:
Suppose
is a convolution operator,
say
, for some
. Let
denote the inverse discrete Fourier
transform of
. The claim is that
, where
is defined as in
Definition 28. Since the Fourier transform of the
convolution is the product of the Fourier transforms,
for each
, we have
.
Taking inverse Fourier transforms of both sides gives
Conversely, suppose
is a Fourier multiplier operator,
say
, for some
. Let
.
The claim is that
. The proof of this case also
follows from the identity
.
This says
.
proof:
By hypothesis, there are constants
such that
, for each
with
and each
.
Therefore, if
is expressed in terms of the Fourier basis
as
(for
) then
We have:
(Lemma 32),
(Lemma 35),
(Lemma 37),
(Lemma 38),
(Lemma 39), and
(Lemma 40).
These lemmas taken together finish the proof of the theorem.
In the next section, we shall give an example of how the Cesàro filter gives rise to a Fourier multiplier operator.
Here is a problem which is sometimes called the
reconstruction problem: Suppose we know the Fourier series
coefficients
of
but we don't know
itself.
How do we ``reconstruct''
from its FS coefficients?
The theoretical answer to this is very easy: it is (by Dirichlet's
Theorem 8) simply the FS expansion:
Suppose that we modify this problem into a more practical one:
Suppose we know the Fourier series
coefficients
of
but we don't know
itself.
Given some error tolerance
, how do we ``reconstruct''
(efficiently)
, with error at most
,
from its FS coefficients?
Practically speaking, a ``filter'' is a sequence of weights
you apply to the FS coefficients to accomplish some aim (such
as approximating
``quickly'', or filtering out
``noise'', or ...). Can we weight the terms in
the partial sums of the FS to compute an
approximation to the value of a FS in a more efficient way
than simply looking at partial sums alone?
In this section, we examine several filters and see that the answer to this
question is, in a specific sense, ``yes''.
Let
The SAGE commands for this are as follows:
sage: M = 5
sage: f = lambda z: (1/(M+1))*(sin((2*M+1)*z/2)/sin(z/2))
sage: P1 = plot(f,-5,5)
sage: M = 10
sage: f = lambda z: (1/(M+1))*(sin((2*M+1)*z/2)/sin(z/2))
sage: P2 = plot(f,-5,5)
sage: M = 50
sage: f = lambda z: (1/(M+1))*(sin((2*M+1)*z/2)/sin(z/2))
sage: P3 = plot(f,-5,5)
sage: show(P1+P2+P3)
Here is the proof of (20):
Aside: Here is a historical/mathematical remark explaining the idea behind this. Start with any infinite series,
As usual, let
denote the
-th partial sum of the FS of
and let
As the graphs show, as
gets larger, the
's approximate
``more smoothly'' than the
's do. The factor
has the effect of ``smoothing out'' the
``Gibbs phenomenon spikes'' you see near the jump discontinuities.
Let
,
as in Example 25 above.
We shall use list plots, since they are easy to construct in
SAGE . Here is
again, but as a list plot:
:
:
Note how the Gibbs phenomenon of
is ``smoothed out'' by the filter -
the graph of
seems to be less ``bumpy''.
:
:
:
:
In each case, we see that the Cesàro filter ``smooths out'' the Gibbs phenomenon of the partial sums of the Fourier series.
Here is the SAGE code for the
case:
sage: FSf = lambda x:(sum([ck(j)*exp(2*pi*I*j*x) for j in range(-25,25)])).real()
sage: L = [FSf(j/50) for j in range(50)]
sage: show(list_plot(L))
sage: FSf = lambda x:(sum([(1-abs(j)/25)*ck(j)*exp(2*pi*I*j*x) for j in range(-25,25)])).real()
sage: L = [FSf(j/50) for j in range(50)]
sage: show(list_plot(L))
Here is an integral representation for
. First,
recall that
You see how these functions seem to be, as
,
approaching the spiky-looking Dirac delta function.
In fact, (as distributions) they do. (A distribution is a linear
functional on the vector space of all compactly supported
infinitely differentiable functions
.)
In other words,
The SAGE command for this:
sage: M = 5
sage: f = lambda z: (1/(M+1))*(sin((M+1)*z/2)/sin(z/2))^2
sage: P1 = plot(f,-5,5)
sage: M = 10
sage: f = lambda z: (1/(M+1))*(sin((M+1)*z/2)/sin(z/2))^2
sage: P2 = plot(f,-5,5)
sage: M = 50
sage: f = lambda z: (1/(M+1))*(sin((M+1)*z/2)/sin(z/2))^2
sage: P3 = plot(f,-5,5)
sage: show(P1+P2+P3)
There is a discrete analog of
the
which may be regarded
as a Fourier multiplier operator. This subsection is included
for the purpose of illustrating the notion of
a Fourier multiplier operator by means of an example
(as you will see, our process of discretizing
the filter ruins its usefulness).
If we replace the usual FT on
As usual, let
denote the
-th partial sum of the FS of
and let
Here is an integral representation for
. Since
We use SAGE to compare the ordinary partial sum
to
the Hann-filtered partial sum
and the Cesàro-filtered
partial sum
.
As you can see from the graphs, the filtered partial sum is
``smoother'' than
and a slightly better fit than
.
The SAGE command for this:
sage: f1 = lambda x:-2
sage: f2 = lambda x:1
sage: f3 = lambda x:-1
sage: f4 = lambda x:2
sage: f = Piecewise([[(-pi,-pi/2),f1],[(-pi/2,0),f2],[(0,pi/2),f3],[(pi/2,pi),f4]])
sage: P1 = f.plot_fourier_series_partial_sum_hann(20,pi,-5,5)
sage: P2 = f.plot(rgbcolor=(0.8,0.3,0.4))
sage: P3 = f.plot_fourier_series_partial_sum(20,pi,-5,5)
sage: P4 = f.plot_fourier_series_partial_sum_cesaro(20,pi,-5,5)
sage: show(P1+P2+P3+P4)
Let
be the Dirac delta function. The Poisson summation
formula can be regarded as formula for the FT of the
``Dirac comb'':
Then
proof:
This theorem says that the
Fourier series
coefficient of the periodic function
is
Let
be a continuous function periodic with period
.
So far, we have started with sampling values of
at
equally
spaced points to get a vector
,
then computed its DFT, which we showed was a good approximation to the
FS coefficients:
might be true. Shannon's Sampling Theorem says that, under certain
conditions,
We say
is band limited if there is a number
such that
for all
with
. When such an
exists and is chosen as small as
possible, the number
is called the Nyquist rate and the number
is the sampling period.
Define the ``sinc'' function
proof:
Let
be defined analogously to
above. The Poisson formula gives
.
By hypothesis,
Obviously most functions are not band limited. When a function is not band limited but the right-hand side of the above ``reconstruction formula'' is used anyway, the error creates an effect called ``aliasing.'' This also occurs when one uses the reconstruction formula for a sample rate lower than the Nyquist rate.
Aliasing is a major concern in the analog-to-digital conversion of video and audio signals.
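The following ordinary-Python sketch illustrates both points: a truncated sinc-interpolation of a band-limited signal, and a pair of distinct frequencies whose samples coincide exactly (the frequencies, the sample rate, and the normalization sinc(u) = sin(pi*u)/(pi*u) are assumptions made only for this sketch).

# A sketch of Shannon sampling and of aliasing.
from math import sin, cos, pi

def sinc(u):
    return 1.0 if u == 0 else sin(pi * u) / (pi * u)

fs = 8.0        # samples per unit time (above the Nyquist rate for f0)
T = 1.0 / fs    # sampling period
f0 = 1.0        # frequency of the band-limited test signal

def f(t):
    return cos(2 * pi * f0 * t)

def reconstruct(t, n_terms=400):
    # truncated Shannon reconstruction: sum_n f(nT) sinc((t - nT)/T)
    return sum(f(n * T) * sinc((t - n * T) / T)
               for n in range(-n_terms, n_terms + 1))

t = 0.3
print(f(t), reconstruct(t))     # nearly equal (only the truncation differs)

# Aliasing: cos(2*pi*9*t) sampled 8 times per unit gives exactly the
# same samples as cos(2*pi*1*t), since 9 = 8 + 1.
print(all(abs(cos(2 * pi * 9 * n * T) - cos(2 * pi * 1 * n * T)) < 1e-9
          for n in range(-20, 20)))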
Recall that, given a differentiable, real-valued,
periodic function
of period
, there are
with
and
with
such that
has (real) Fourier series
When
is even then the
and we call the
FS a cosine series. When
is odd then the
and we call the
FS a sine series. In either of these cases, it suffices
to define
on the interval
instead of
.
Let us first assume
is even and ``discretize'' the integral
for the
-th coefficient
of the cosine series and use that as a basis for defining the
discrete cosine transform or DCT. Using the ``left-hand Riemann sum''
approximation for the
integral using
subdivisions, we have
This transform is represented by the
real
symmetric matrix
.
sage: RRR = RealField(15)
sage: MS = MatrixSpace(RRR,5,5)
sage: r = lambda j: [RRR(cos(pi*j*k/5)) for k in range(5)]
sage: dct = MS([r(j) for j in range(5)])
Next, assume
is odd and ``discretize'' the integral for the
-th coefficient
of the sine series and use that as a basis for defining the
discrete sine transform or DST. Using the ``left-hand Riemann sum''
approximation for the
integral using
subdivisions, we have
This transform is represented by the
real
symmetric matrix
.
Since the 0-th coordinate is always
, since
,
sometimes this is replaced by a map
, represented by the
real
symmetric matrix
.
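In ordinary Python, here is a sketch of such a sample matrix for N = 5, parallel to the SAGE code for the DCT above (the entries sin(pi*j*k/N) mirror the cos(pi*j*k/5) entries used there, so the exact normalization is an assumption); note that the 0-th row and column vanish, which is why one passes to the reduced matrix just described.

# A sketch of an (unnormalized) DST sample matrix with entries
# sin(pi*j*k/N), here for N = 5.
from math import sin, pi

N = 5
dst = [[sin(pi * j * k / N) for k in range(N)] for j in range(N)]
for row in dst:
    print([round(v, 4) for v in row])
# The 0-th row and column are identically zero, so one usually keeps
# only rows and columns 1, ..., N-1.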
One difference between these definitions and the definition
of the DFT is that here the
samples are taken from
, whereas for the DFT the
samples are taken
from
.
Is there some way to compute the DCT and the DST in terms of the
DFT of a function?
Let
and compute, for
,
Let
and let
denote its ``extension by zero'' to
.
The above computation implies
The fast Fourier transform is a procedure for computing the discrete Fourier transform which is especially fast. The term FFT often loosely refers to a hybrid combination of the two algorithms presented in this section.
The algorithm described first, due to James Cooley and
John Tukey [CT],
works when the number of sample values
is a power of
,
say
, for some integer
. This special case is also
referred to as the radix-2 algorithm.
This is the one we will describe in the next section.
First, we assume that the powers of
(namely,
,
,
, ...,
)
have been precomputed. Note that the computation of the DFT on
requires
multiplications. This is because
the matrix
is
and each matrix entry is
involved in the computation of the vector
.
If
denotes the number of multiplications required to
compute the
then the above reasoning shows that
To improve on this, the Cooley-Tukey procedure is described
next. Let
for the argument below. To be clear about the
notation, let
, so
.
Let
be the vector we want to
compute the DFT of and write
proof:
We prove this by mathematical induction on
. This requires proving
a (base case) step
and a step where we assume the truth
of the inequality for
and prove it for
.
The number of multiplications required to compute the
DFT when
is
. Therefore,
.
Now assume
.
The Cooley-Tukey procedure (25) shows that
. This and the induction hypothesis implies
Here is the SAGE code for the above computations.
sage: MS4 = MatrixSpace(CyclotomicField(4),4,4)
sage: MS8 = MatrixSpace(CyclotomicField(8),8,8)
sage: V4 = VectorSpace(CyclotomicField(4),4)
sage: V8 = VectorSpace(CyclotomicField(8),8)
sage: z4 = CyclotomicField(4).gen()
sage: z8 = CyclotomicField(8).gen()
sage: r4 = lambda k: [z4^(j*k) for j in range(4)]
sage: r8 = lambda k: [z8^(j*k) for j in range(8)]
sage: F4 = MS4([r4(k) for k in range(4)])
sage: F8 = MS8([r8(k) for k in range(8)])
sage: f = V8([0,1,2,3,4,5,6,7])
sage: fe = V4([0,2,4,6])
sage: fo = V4([1,3,5,7])
sage: FFTe = [(F4*fe)[j]+z8^j*(F4*fo)[j] for j in range(4)]
sage: FFTo = [(F4*fe)[j]-z8^j*(F4*fo)[j] for j in range(4)]
sage: FFTe+FFTo
[28, -4*zeta8^3 - 4*zeta8^2 - 4*zeta8 - 4, -4*zeta8^2 - 4, -4*zeta8^3 + 4*zeta8^2 - 4*zeta8 - 4, -4, 4*zeta8^3 - 4*zeta8^2 + 4*zeta8 - 4, 4*zeta8^2 - 4, 4*zeta8^3 + 4*zeta8^2 + 4*zeta8 - 4]
sage: [(F8*f)[j] for j in range(8)]
[28, -4*zeta8^3 - 4*zeta8^2 - 4*zeta8 - 4, -4*zeta8^2 - 4, -4*zeta8^3 + 4*zeta8^2 - 4*zeta8 - 4, -4, 4*zeta8^3 - 4*zeta8^2 + 4*zeta8 - 4, 4*zeta8^2 - 4, 4*zeta8^3 + 4*zeta8^2 + 4*zeta8 - 4]
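For comparison, here is the same even/odd splitting written as a recursive function in ordinary Python (a sketch; the sign of the exponent is an assumption, chosen to match the naive DFT defined alongside it).

# A sketch of the radix-2 Cooley-Tukey FFT: split the samples into the
# even- and odd-indexed halves, transform each half recursively, and
# recombine with the "twiddle factors" exp(2*pi*i*k/N).
from cmath import exp, pi

def dft_naive(x):
    N = len(x)
    return [sum(x[n] * exp(2j * pi * n * k / N) for n in range(N))
            for k in range(N)]

def fft(x):
    N = len(x)
    if N == 1:
        return list(x)
    E = fft(x[0::2])               # DFT of the even-indexed samples
    O = fft(x[1::2])               # DFT of the odd-indexed samples
    w = [exp(2j * pi * k / N) for k in range(N // 2)]
    return ([E[k] + w[k] * O[k] for k in range(N // 2)] +
            [E[k] - w[k] * O[k] for k in range(N // 2)])

x = [0, 1, 2, 3, 4, 5, 6, 7]
print(max(abs(a - b) for a, b in zip(fft(x), dft_naive(x))))   # ~ 0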
Finally, we give an example which only illustrates SAGE 's
implementation of the FFT (which calls functions in the GSL [GSL]),
as compared to its implementation of its DFT (which is implemented in
Python but calls Pari for the computations involving
-th roots of unity
over
):
sage: J = range(30)
sage: A = [QQ(int(10*(random()-1/2))) for i in J]
sage: s = IndexedSequence(A,J)
sage: time dfts = s.dft()
CPU times: user 0.86 s, sys: 0.04 s, total: 0.90 s
Wall time: 0.94
sage: time ffts = s.fft()
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00
sage: J = range(3000)
sage: A = [QQ(int(10*(random()-1/2))) for i in J]
sage: s = IndexedSequence(A,J)
sage: time ffts = s.fft()
CPU times: user 0.21 s, sys: 0.00 s, total: 0.21 s
Wall time: 0.21
As you can see, for a sample vector in
, SAGE can compute
the FFT in about
-th of a second.
However, if you try to compute the DFT
using this example, SAGE will probably give you an error related to
its extreme size.
In this subsection, we assume
is prime. Here we briefly
describe an algorithm due to Rader [Ra] for computing
the DFT on
for prime
.
The basic idea is to rewrite the DFT on
as a
convolution on
. Remark 1 is then used
to show that this convolution can be computed using a
``fast'' algorithm.
The first step in the algorithm is to select an element
,
, which has the property that
every element
in
can be written
in the form
for some
.
This element
is called a primitive root
(or a generator of
), where
.
Here is a table of primitive roots for various small primes
, and a demonstration, in the case
, that
is indeed
a primitive root:
| p | 3 | 5 | 7 | 11 | 13 | 17 | 19 | 23 |
| g | 2 | 2 | 3 | 2 | 2 | 3 | 2 | 5 |

| k | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
| 3^k mod 17 | 1 | 3 | 9 | 10 | 13 | 5 | 15 | 11 | 16 | 14 | 8 | 7 | 4 | 12 | 2 | 6 |
The SAGE command which produces the smallest primitive
root
is primitive_root(N). For example, the
above tables were produced using the commands
sage: [primitive_root(p) for p in [3,5,7,11,13,17,19,23]]
[2, 2, 3, 2, 2, 3, 2, 5]
sage: N = 17; g = 3
sage: [g^k%N for k in range(N-1)]
[1, 3, 9, 10, 13, 5, 15, 11, 16, 14, 8, 7, 4, 12, 2, 6]
The next step in the algorithm is to rewrite the
DFT on
as a convolution on
. To this end,
fix
and define
as
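The reindexing can be checked numerically. The ordinary-Python sketch below does this for p = 17 (the sign of the DFT kernel and the convention of indexing the output by inverse powers of the primitive root are assumptions made for this sketch): away from the 0 frequency, the DFT values are given by a cyclic convolution of length p - 1.

# A sketch of Rader's reindexing for prime p: for k != 0 the values
# X[g^(-m)] - x[0] form the cyclic convolution of a(q) = x[g^q] with
# b(q) = w^(g^(-q)), where g is a primitive root mod p.
from cmath import exp, pi
import random

p, g = 17, 3                     # 3 is a primitive root mod 17 (see the table)
w = exp(2j * pi / p)             # assumed sign convention for the DFT kernel

x = [random.randint(-5, 5) for _ in range(p)]
X = [sum(x[n] * w**(n * k) for n in range(p)) for k in range(p)]   # naive DFT

ginv = pow(g, p - 2, p)          # inverse of g mod p
a = [x[pow(g, q, p)] for q in range(p - 1)]
b = [w**pow(ginv, q, p) for q in range(p - 1)]

def cyclic_convolution(a, b):
    N = len(a)
    return [sum(a[q] * b[(m - q) % N] for q in range(N)) for m in range(N)]

conv = cyclic_convolution(a, b)
err = max(abs(X[pow(ginv, m, p)] - (x[0] + conv[m])) for m in range(p - 1))
print(err)                       # ~ 0: the DFT away from 0 is a length
                                 # (p-1) cyclic convolution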
Fourier optics is a special topic in the theory of optics. Good references are Goodman [Go] and chapter 5 of Walker [W1]. This section owes much to discussions with Prof. Larry Tankersley of the USNA Physics dept.
The object of this section is to describe a method for computing certain quantities arising in diffraction experiments. I'll try to describe these experiments next.
Consider a monochromatic light source having wavelength
.
For example, the light emitted from a laser would fit
this.
Imagine
- and
-coordinates in the aperture plane. The aperture
function
is defined to be
if light can pass
through the ``slit'' at
and 0 otherwise.
The light which passes through the aperture is pictured on a
screen. It is this pattern which we wish to describe in this section.
When the aperture is a square (whose sides are aligned
parallel to the
- and
-axes), the image projected
on the screen of this experiment looks like a dotted plus sign, where
the dots get fainter and fainter as they move away from the center.
A square slit diffraction pattern is pictured below:
The goal of this last section will be to describe the mathematics behind this ``dotted plus sign'' square slit diffraction pattern.
The theory we shall sketch is called scalar diffraction theory. A special case which we shall concentrate on is the Fraunhofer diffraction model.
Let
denote the distance from the aperture to the screen,
(as above) the wavelength of the light,
and
the width of the slit.
The Fresnel number
is the quantity
.
When
the screen is ``sufficiently far'' (past the
``Fresnel threshold'') from the
aperture that the wavefronts created by the slit
have negligible curvature.
Here is Wikipedia's image [WFd]:
We also assume that the index of refraction of the medium is
.
If a light ray of wavelength
travels from
to
, points at a distance of
from each other,
at time
then the light amplitude
at
arising from this light ray
satisfies the property
The light intensity is defined by
Let
and
be points on the aperture plane
where
and
.
The coherency function is defined by
Using (26), we have
Notation: In
-coordinates,
the aperture plane is described by
,
the screen is described by
. In the diagram below,
if
then
,
,
and
.
In particular, the vector
If
, define the 2-dimensional
Fourier transforms by
We know
Here are the SAGE commands which produce the plot below for this intensity function (the axes have been scaled for simplicity).
sage: f = "(sin(x)/x)^2*(sin(y)/y)^2" sage: opts = "[plot_format, openmath]" sage: maxima.plot3d (f, "[x, -5, 5]", "[y, -5, 5]", opts)
This plot is consistent with the image pictured in the ``dotted plus sign'' square slit diffraction pattern. As the mathematical explanation of this image was the goal of this last section of these notes, we are done.
For an elementary introduction to wavelets, appropriate for capstone projects, see Frazier [F].
A number of papers from the American Math. Monthly on the theory of FS are available at the URL
http://math.fullerton.edu/mathews/c2003/FourierTransformBib/Links/FourierTransformBib_lnk_2.html
A number of those articles would also make good starters for a capstone project.
Finally, we recommend Walker [W2] and Körner [K] as excellent additional references.
Prof. W. D. Joyner, Math Dept, USNA, wdj@usna.edu
Last updated: 5-2-2007.