Fourier transforms

The Fourier transform is one of the most important and widely used tools in traditional theoretical physics, and it is also very useful in computational physics. In general, it allows us to break functions (or signals) down into their component frequencies for analysis, smoothing or filtering, and it also allows us to greatly increase the speed of certain calculations, e.g. solving the diffusion equation or the Schrödinger equation.


Fourier series

A periodic function $f(x)$ defined on a finite interval $0 \leq x \lt L$ can be written as a Fourier series (assuming it is bounded and has a finite number of extrema). If the function is even, i.e. symmetric about the midpoint, then we can use a cosine series:

$$f(x)=\sum_{k=0}^{\infty}\alpha _k \cos\left(\frac{2 \pi kx }{L}\right)$$

where $\alpha_k$ are coefficients determined by the shape of the function. For an odd function, we can instead use a sine series:

$$f(x)=\sum_{k=1}^{\infty}\beta _k \sin\left(\frac{2 \pi kx }{L}\right)$$

with the sum now starting from $k=1$. A general function can be written by combining the two forms:

$$f(x)=\sum_{k=0}^{\infty}\alpha _k \cos\left(\frac{2 \pi kx }{L}\right) + \sum_{k=1}^{\infty}\beta _k \sin\left(\frac{2 \pi kx }{L}\right)$$

An alternative way to represent this general series is to make use of the identities $\cos\theta = \frac{1}{2}(e^{-i\theta}+e^{i\theta})$ and $\sin\theta = \frac{i}{2}(e^{-i\theta}-e^{i\theta})$. Substituting these into the previous form gives:

$$f(x)=\frac{1}{2}\sum_{k=0}^{\infty}\alpha _k\left[\exp \left(-i\frac{2 \pi kx }{L}\right)+\exp \left(i\frac{2 \pi kx }{L}\right)\right] + \frac{i}{2}\sum_{k=1}^{\infty}\beta _k\left[\exp \left(-i\frac{2 \pi kx }{L}\right)-\exp \left(i\frac{2 \pi kx }{L}\right)\right]$$

which can be more straightforwardly written as:

$$f(x)=\sum_{k=-\infty}^{\infty}\gamma _k \exp \left(i\frac{2 \pi kx }{L}\right)$$

where $$ \gamma _k= \begin{cases} \frac{1}{2}(\alpha_{-k}+i\beta_{-k}) & \text{if } k \lt 0\\ \alpha_0 & \text{if } k=0\\ \frac{1}{2}(\alpha_{k}-i\beta_{k}) & \text{if } k \gt 0 \end{cases} $$

If we can find the coefficients $\alpha_k$ for a given function $f(x)$ then we can represent it in a convenient way to implement in computational form. At first glance, this appears to be only useful for periodic functions, but it can be applied to any function if we are only interested in the interval from $0$ to $L$. Consider the following example:

Now if we apply Fourier analysis to this, it will be valid from 0 to 2, but incorrect outside the limits of the unit cell on which we defined the periodic function.
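As a concrete sketch of this point (the function $f(x)=x$ on the unit cell $0 \leq x \lt 2$ is an assumed stand-in for the example, which is not specified here), we can evaluate the periodic extension of a unit cell numerically:

```python
import numpy as np

# Hypothetical example: f(x) = x on the unit cell 0 <= x < 2 (L = 2).
# The Fourier series reproduces f inside [0, 2), but outside it repeats
# the unit cell periodically rather than continuing f itself.
L = 2.0

def f(x):
    return x

def periodic_extension(x, L=L):
    """Evaluate the periodic repetition of the unit cell at any x."""
    return f(np.mod(x, L))

x = np.linspace(-2.0, 4.0, 7)  # points inside and outside the cell
print(periodic_extension(x))
```

Inside the cell the extension equals $f$; outside it, the values simply wrap around.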

The coefficients $\gamma_k$ are in general complex and the standard way to calculate them is to evaluate the integral:

$$\int_{0}^{L}f(x)\exp \left(-i\frac{2 \pi kx }{L}\right)dx = \sum_{k^\prime=-\infty}^{\infty}\gamma_{k^\prime}\int_{0}^{L}\exp \left(i\frac{2 \pi (k^\prime-k)x }{L}\right)dx$$

The integral on the right-hand side vanishes for $k \neq k^\prime$: since $e^{i2\pi n} = 1$ for any integer $n$, we have:

$$ \begin{align} \int_{0}^{L}\exp \left(i\frac{2 \pi (k^\prime-k)x }{L}\right)dx &= \frac{L}{i2\pi(k^\prime-k)}\left[\exp\left(i\frac{2\pi(k^\prime-k)x}{L}\right)\right]^L_0 \\ &= \frac{L}{i2\pi(k^\prime-k)}[e^{i2\pi(k^\prime-k)}-1] \\ &= 0 \end{align} $$

When $k = k^\prime$ then the integral becomes simply $\int_{0}^{L}1dx=L$ and we are left with only one nonzero term in the sum:

$$\int_{0}^{L}f(x)\exp \left(-i\frac{2 \pi kx }{L}\right)dx = L\gamma_k$$

such that:

$$\gamma_k = \frac{1}{L}\int_{0}^{L}f(x)\exp \left(-i\frac{2 \pi kx }{L}\right)dx$$

So from any periodic function (or any function that can be made so) we can find the Fourier coefficients and vice versa. Both are complete representations of the information contained in the function.

As an example, let's consider the function $f(x)=e^x$ in the range $0 \lt x \lt 2\pi$, with $L=2\pi$:

$$f(x)=\sum_{k=-\infty}^{\infty}\gamma _k \exp \left(i\frac{2 \pi kx }{L}\right)$$

where:

$$\gamma_k = \frac{1}{2\pi}\int_{0}^{2\pi}f(x)\exp \left(-ikx\right)dx$$$$= \frac{1}{2\pi}\int_{0}^{2\pi}\exp(x)\exp \left(-ikx\right)dx$$

Skipping the derivation since we are physicists (Wolfram Alpha will happily do it for us), we get the Fourier coefficients as:

$$\gamma_k= -\frac{i\left(1-e^{2\pi(1-ik)}\right)}{2\pi(i+k)}$$

with the complex Fourier series given by:

$$ f(x) = \sum_{k=-\infty}^{\infty} -\frac{i\left(1-e^{2\pi(1-ik)}\right)}{2\pi(i+k)}e^{ikx} $$

We can then look at the two functions with a simple piece of Python code.
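A minimal sketch of such a comparison, summing the truncated series for a few cutoffs (note the slow convergence, since the periodic extension of $e^x$ jumps at the cell boundary):

```python
import numpy as np

L = 2 * np.pi

def gamma(k):
    # Fourier coefficients of f(x) = exp(x) on [0, 2*pi), derived above
    return -1j * (1 - np.exp(2 * np.pi * (1 - 1j * k))) / (2 * np.pi * (1j + k))

def fourier_series(x, kmax):
    """Partial sum of the complex Fourier series, k = -kmax .. kmax."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x, dtype=complex)
    for k in range(-kmax, kmax + 1):
        total += gamma(k) * np.exp(1j * k * x)
    return total.real  # the imaginary parts cancel for a real function

# compare the partial sums with exp(x) at the midpoint of the interval
for kmax in (10, 100, 1000):
    print(kmax, fourier_series(np.pi, kmax), np.exp(np.pi))
```

Near the ends of the interval the famous Gibbs oscillations appear, because the periodic extension of $e^x$ is discontinuous there.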

Example - Fourier series of $x^2$

Repeat the Fourier analysis we just did for $f(x)=e^x$ for $f(x)=x^2$ and explore the behaviour with respect to the size of the series you use.

Discrete Fourier Transforms

For many cases it is not possible to solve the Fourier integral analytically to give us an exact formula for the coefficients. Furthermore, when considering real data, we may not have an analytical form of $f(x)$ at all. In these cases we need to calculate the coefficients numerically, and here we can employ some of the integration techniques considered earlier in the course. In particular, for:

$$\gamma_k = \frac{1}{L}\int_{0}^{L}f(x)\exp \left(-i\frac{2 \pi kx }{L}\right)dx$$

we can use the trapezoidal rule with $N$ slices of width $h=L/N$ each, giving:

$$\gamma_k = \frac{1}{L}\frac{L}{N}\left[ \frac{1}{2}f(0) + \frac{1}{2}f(L) + \sum_{n=1}^{N-1}f(x_n)\exp \left(-i\frac{2 \pi kx_n }{L}\right)\right]$$

where the positions of the sample points for the integral are:

$$x_n=\frac{n}{N}L$$

Since the function is by definition periodic, then $f(0)=f(L)$ and this can be simplified to:

$$\gamma_k = \frac{1}{N}\left[\sum_{n=0}^{N-1}f(x_n)\exp \left(-i\frac{2 \pi kx_n }{L}\right)\right]$$

This form is suited to finding the coefficients for a function that we know at only a set of regularly spaced points, a very common occurrence for physical problems. In fact, an even simpler form can be given if we define $y_n=f(x_n)$:

$$\gamma_k = \frac{1}{N}\left[\sum_{n=0}^{N-1}y_n \exp \left(-i\frac{2 \pi kn }{N}\right)\right]$$

This is known as the discrete Fourier transform (DFT) of the $N$ samples $y_n$, and the coefficients are commonly written as $c_k$, where $c_k=N\gamma_k$. While the integral approach we used in the derivation is approximate, it is relatively easy to show that the DFT is an exact result. Consider:

$$\sum_{k=0}^{N-1}c_k \exp \left(i\frac{2 \pi kn }{N}\right) = \sum_{k=0}^{N-1}\sum_{n^\prime =0}^{N-1}y_{n^\prime} \exp \left(-i\frac{2 \pi kn^\prime }{N}\right) \exp \left(i\frac{2 \pi kn }{N}\right)$$$$= \sum_{n^\prime =0}^{N-1}y_{n^\prime}\sum_{k=0}^{N-1}\exp \left(i\frac{2 \pi k(n-n^\prime) }{N}\right)$$

In general, exponential sums of the form $\sum_{k=0}^{N-1}\exp \left(i\frac{2 \pi km }{N}\right)$ are equal to $N$ if $m$ is zero or a multiple of $N$, and zero otherwise. For $m=n-n^\prime$ in the above equation, both $n$ and $n^\prime$ are less than $N$, so $m$ cannot be a nonzero multiple of $N$, leaving only the $m=0$ case where $n = n^\prime$. This leaves only a single term from the sum:

$$\sum_{k=0}^{N-1}c_k \exp \left(i\frac{2 \pi kn }{N}\right)=Ny_n$$

equivalently written as:

$$y_n = \frac{1}{N}\sum_{k=0}^{N-1}c_k \exp \left(i\frac{2 \pi kn }{N}\right)$$

known as the inverse discrete Fourier transform. It means we can extract the values of the samples $y_n$ from the coefficients $c_k$ exactly. Note that the DFT is powerful, but unlike the original Fourier series it cannot tell us anything about the function between the sampling points: two quite different functions will have the same DFT if they agree at the sample points.

If we have real, as opposed to complex, data, which is generally the case, we only need to solve for the first $\frac{N}{2}+1$ coefficients of the DFT, as the rest are just complex conjugates ($c_{N-k}=c_k^*$), and we can formulate a simple (and naive) Python function to find them:
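A naive implementation along these lines might look as follows (numpy's optimized `rfft` computes the same coefficients, so we can use it as a check):

```python
import numpy as np

def dft_real(y):
    """Naive DFT of N real samples: returns c_k for k = 0 .. N//2.

    The remaining coefficients are redundant, since for real data
    c_{N-k} = c_k* (the complex conjugate).
    """
    N = len(y)
    c = np.zeros(N // 2 + 1, dtype=complex)
    for k in range(N // 2 + 1):
        for n in range(N):
            c[k] += y[n] * np.exp(-2j * np.pi * k * n / N)
    return c

# quick check against numpy's real-input transform
y = np.sin(2 * np.pi * np.arange(32) / 32) + 0.5
print(np.allclose(dft_real(y), np.fft.rfft(y)))  # True
```

The double loop makes this an $O(N^2)$ calculation, a point we return to when discussing the FFT.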

Let's apply this function to some wave data we have handy:

What does it actually mean?

We have now explored the Fourier transform mathematically and computationally, but at this point it is important to think about what it actually means physically. The Fourier transform converts the function into a series of real or complex sinusoidal waves, where each term in the series represents one wave with a well-defined frequency. For $f(x)$ these are spatial frequencies and for $f(t)$ these are temporal frequencies, so the Fourier transform gives us a breakdown of the waves of each frequency that sum to represent the original function. This is effectively what signal analyzers do; for example, the graphical visualizers in many music systems are basically just Fourier transforming the music signal and outputting the result.

Consider the data that we just plotted. The original waveform has a clearly periodic set of main peaks, but there is also some noise - as sound this would translate into one tone at a constant frequency with a background hiss due to the noise. This can be seen clearly in the DFT, where the $c_k$ are dominated by one peak at low frequency, but we also see other peaks above the background noise. These are harmonics of the original peak at multiples of its frequency, telling us that the original function was not a pure sine wave, as that could be represented by a single peak.

Example - Inverse Fourier transform

Take the coefficients from the example we just looked at and use the inverse DFT to reconstruct the original function.

Phase and positions of sample points

For DFTs, if we shift our sample points such that:

$$x_n^\prime=x_n + \Delta=\frac{n}{N}L+\Delta$$

we find that the equivalent Fourier transform for this set of samples is:

$$c_k = \sum_{n=0}^{N-1}f(x_n+\Delta) \exp \left(-i\frac{2 \pi k(x_n+\Delta) }{L}\right)$$$$= \exp \left(-i\frac{2 \pi k\Delta }{L}\right) \sum_{n=0}^{N-1}f(x_n^\prime) \exp \left(-i\frac{2 \pi kx_n }{L}\right)$$$$= \exp \left(-i\frac{2 \pi k\Delta }{L}\right) \sum_{n=0}^{N-1}y_n^\prime \exp \left(-i\frac{2 \pi kn }{N}\right)$$

But this is just the same as our original formulation of the DFT with an additional phase factor - the DFT is the same, up to a phase, independent of where we place the samples. Usually this factor is absorbed into the definition of $c_k$, such that $c_k^\prime=\exp(i2\pi k\Delta/L)c_k$.
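We can verify this numerically (a sketch using an arbitrary band-limited test function, for which the shifted samples describe the same periodic signal):

```python
import numpy as np

N, L = 64, 1.0
delta = 0.1  # shift applied to every sample point

def f(x):
    # an arbitrary band-limited periodic test function
    return np.sin(2 * np.pi * x / L) + 0.5 * np.cos(4 * np.pi * x / L)

x = np.arange(N) * L / N
c = np.fft.fft(f(x))
c_shifted = np.fft.fft(f(x + delta))

# the magnitudes agree; only the phases of the coefficients differ
print(np.allclose(np.abs(c), np.abs(c_shifted)))  # True
```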

Discrete cosine transforms

For generality we introduced the complex form of the Fourier transform, but there are many practical cases where it makes more sense to use the cosine transform (the sine form is much more rarely used since it requires the function to be zero at $x=0$ and $x=L$). If we return to the cosine series:

$$f(x)=\sum_{k=0}^{\infty}\alpha _k \cos\left(\frac{2 \pi kx }{L}\right)$$

we already noted that this is only valid for even functions, i.e. those that are symmetric about the midpoint. This might initially seem a major flaw, but any function can be made even by appending a mirror image of itself and then repeating the result. This also means that the number of samples, $N$, is always even.

Once we have a symmetric function, it is simply a special case for $c_k$ when $y_n$ is symmetric about $x=\frac{1}{2}L$ with $y_n=y_{N-n}$ for all $n$. Given that $N$ is even, we can rewrite the equation for the coefficients as follows (noting that $e^{i2\pi k}=1$ for all integer $k$):

$$c_k = \sum_{n=0}^{N-1}y_n \exp \left(-i\frac{2 \pi kn }{N}\right)$$$$= \sum_{n=0}^{\frac{1}{2}N}y_n \exp \left(-i\frac{2 \pi kn }{N}\right) + \sum_{n=\frac{1}{2}N+1}^{N-1}y_n \exp \left(-i\frac{2 \pi kn }{N}\right)$$$$= \sum_{n=0}^{\frac{1}{2}N}y_n \exp \left(-i\frac{2 \pi kn }{N}\right) + \sum_{n=\frac{1}{2}N+1}^{N-1}y_{N-n} \exp \left(i\frac{2 \pi k(N-n) }{N}\right)$$

Making a change of variables such that $N-n \rightarrow n$ in the second sum and using the identity $\cos\theta = \frac{1}{2}(e^{-i\theta}+e^{i\theta})$, we get the discrete cosine transform (DCT):

$$c_k = \sum_{n=0}^{\frac{1}{2}N}y_n \exp \left(-i\frac{2 \pi kn }{N}\right) + \sum_{n=1}^{\frac{1}{2}N-1}y_n \exp \left(i\frac{2 \pi kn }{N}\right)$$$$= y_0 + y_{\frac{N}{2}}\cos\left(\frac{2\pi k(\frac{N}{2})}{N} \right) + 2\sum_{n=1}^{\frac{1}{2}N-1}y_n \cos \left(\frac{2 \pi kn }{N}\right) $$

The inverse of this is very similar, differing only by the leading $\frac{1}{N}$ factor:

$$y_n = \frac{1}{N}\left[ c_0 + c_{\frac{N}{2}}\cos\left(\frac{2\pi (\frac{N}{2})n}{N} \right) + 2\sum_{k=1}^{\frac{1}{2}N-1}c_k \cos \left(\frac{2 \pi kn }{N}\right)\right] $$

Note that it is common to see DCT formulations where the sampling is taken at the midpoint of the sample intervals, known as a "Type-II" DCT. This is effectively the same, but implies that $y_n = y_{N-1-n}$, and for an even $N$:

$$c_k= \sum_{n=0}^{\frac{1}{2}N-1}y_n \exp \left(-i\frac{2\pi kn}{N}\right) + \sum_{n=\frac{1}{2}N}^{N-1}y_n \exp \left(-i\frac{2\pi kn}{N}\right)$$$$= \exp \left(i\frac{\pi k}{N}\right) \left[ \sum_{n=0}^{\frac{1}{2}N-1}y_n \exp \left(-i\frac{2\pi k(n+\frac{1}{2})}{N}\right) + \sum_{n=\frac{1}{2}N}^{N-1}y_{N-1-n} \exp \left(i\frac{2\pi k(N-\frac{1}{2}-n)}{N}\right) \right]$$$$= \exp \left(i\frac{\pi k}{N}\right) \left[ \sum_{n=0}^{\frac{1}{2}N-1}y_n \exp \left(-i\frac{2\pi k(n+\frac{1}{2})}{N}\right) + \sum_{n=0}^{\frac{1}{2}N-1}y_{n} \exp \left(i\frac{2\pi k(n+\frac{1}{2})}{N}\right) \right]$$$$= 2\exp \left(i\frac{\pi k}{N}\right) \sum_{n=0}^{\frac{1}{2}N-1}y_n \cos \left(\frac{2\pi k(n+\frac{1}{2})}{N}\right) $$

Conventionally the leading phase factor is included in the Fourier coefficients such that:

$$a_k = 2 \sum_{n=0}^{\frac{1}{2}N-1}y_n \cos \left(\frac{2\pi k(n+\frac{1}{2})}{N}\right) $$

and the inverse is:

$$y_n= \frac{1}{N}\left[ a_0 + 2\sum_{k=1}^{\frac{1}{2}N-1}a_k \cos \left(\frac{2\pi k(n+\frac{1}{2})}{N}\right)\right]$$

redefining $\frac{1}{2}N \rightarrow N$ and $a_k \rightarrow 2a_k$ this is often written as follows:

$$a_k= \sum_{n=0}^{N-1}y_n \cos \left(\frac{\pi k(n+\frac{1}{2})}{N}\right), \qquad y_n= \frac{1}{N}\left[ a_0 + 2\sum_{k=1}^{N-1}a_k \cos \left(\frac{\pi k(n+\frac{1}{2})}{N}\right)\right]$$
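This final pair of formulas is easy to check with a direct implementation (a sketch; for reference, scipy's `scipy.fft.dct(y, type=2)` evaluates the same sum as $a_k$ here up to an overall factor of 2):

```python
import numpy as np

def dct_II(y):
    """a_k = sum_n y_n cos(pi k (n + 1/2) / N), as above."""
    N = len(y)
    n = np.arange(N)
    return np.array([np.sum(y * np.cos(np.pi * k * (n + 0.5) / N))
                     for k in range(N)])

def idct_II(a):
    """y_n = (1/N) [ a_0 + 2 sum_{k>=1} a_k cos(pi k (n + 1/2) / N) ]."""
    N = len(a)
    k = np.arange(1, N)
    return np.array([(a[0] + 2 * np.sum(a[1:] * np.cos(np.pi * k * (m + 0.5) / N))) / N
                     for m in range(N)])

rng = np.random.default_rng(1)
y = rng.standard_normal(8)
print(np.allclose(idct_II(dct_II(y)), y))  # True
```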

DCT and compression

DCTs are behind a lot of the original compression algorithms used in music, image and video files. In particular, JPEGs are constructed by dividing an image into blocks, performing DCTs on these blocks and then filtering the resulting coefficients to remove small contributions to the signal. When you open a JPEG file, a version of an inverse DCT reconstructs the original image from the stored coefficients, including compression artifacts if important coefficients have been discarded. MP3s take the filtering to the next stage by keeping only the coefficients that matter from the perspective of what the human ear can hear.

JPEG compression: the DCT transforms an 8×8 block of input values into a linear combination of 64 patterns. The patterns are referred to as the two-dimensional DCT basis functions, and the output values are referred to as transform coefficients.

Two-dimensional Fourier transforms

Before we explore the potential of Fourier transforms for 2D data, it is worth looking a little at the basic equations behind it. For a 2D function $f(x,y)$, we need to transform first with respect to $x$ and then with respect to $y$. If we have an $M \times N$ grid of data, we first perform an ordinary Fourier transform on each of the $M$ rows:

$$c^\prime_{ml} = \sum_{n=0}^{N-1}y_{mn} \exp \left(-i\frac{2 \pi ln }{N}\right)$$

For each row $m$ we now have $N$ coefficients, one for each value of $l$. Next we take the $l$th coefficient in each of the $M$ rows and Fourier transform these $M$ values again to get:

$$c_{kl} = \sum_{m=0}^{M-1}c^\prime_{ml} \exp \left(-i\frac{2 \pi km }{M}\right)$$

If we now combine these, we can formulate a complete expression for the Fourier transform in 2D:

$$c_{kl} = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1}y_{mn} \exp \left[-i2\pi \left(\frac{km}{M}+\frac{ln}{N}\right)\right]$$

with the corresponding inverse transform as follows:

$$y_{mn} = \frac{1}{MN}\sum_{k=0}^{M-1}\sum_{l=0}^{N-1}c_{kl} \exp \left[i2\pi \left(\frac{km}{M}+\frac{ln}{N}\right)\right]$$
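As a check, a direct evaluation of the forward double sum (written without a $1/MN$ prefactor, matching numpy's forward-transform convention) agrees with `numpy.fft.fft2`:

```python
import numpy as np

def dft2_direct(y):
    """Direct double sum for the 2D DFT (no prefactor, as numpy defines it)."""
    M, N = y.shape
    c = np.zeros((M, N), dtype=complex)
    for k in range(M):
        for l in range(N):
            for m in range(M):
                for n in range(N):
                    c[k, l] += y[m, n] * np.exp(-2j * np.pi * (k * m / M + l * n / N))
    return c

rng = np.random.default_rng(4)
y = rng.standard_normal((4, 6))
print(np.allclose(dft2_direct(y), np.fft.fft2(y)))  # True
```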

We can see this in action by coding (a very basic) compression algorithm for an image.
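Since no image file is bundled with these notes, here is a minimal sketch of the idea using a synthetic test pattern (the choice of pattern and the threshold of 10% of the peak coefficient are both arbitrary):

```python
import numpy as np

# a synthetic grayscale "image": two clean spatial frequencies plus noise
rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:64, 0:64]
image = (np.sin(2 * np.pi * xx / 16) + np.cos(2 * np.pi * yy / 8)
         + 0.1 * rng.standard_normal((64, 64)))

# 2D transform, keep only the largest coefficients, transform back
c = np.fft.fft2(image)
threshold = 0.1 * np.abs(c).max()
c_compressed = np.where(np.abs(c) >= threshold, c, 0)
reconstructed = np.real(np.fft.ifft2(c_compressed))

kept = np.count_nonzero(c_compressed) / c.size
rms = np.sqrt(np.mean((image - reconstructed) ** 2))
print(f"kept {kept:.2%} of coefficients, rms error {rms:.3f}")
```

Only a handful of coefficients survive, yet the reconstruction differs from the original by roughly the noise level - the essence of lossy transform compression.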

Fast Fourier Transforms

We introduced the DFT in the following form:

$$c_k = \sum_{n=0}^{N-1}y_n \exp \left(-i\frac{2 \pi kn }{N}\right)$$

In the simple approach we discussed for solving this, we must sum over $N$ terms and repeat this $\frac{1}{2}N+1$ times, so it scales as $N^2$ and becomes prohibitively expensive for large datasets (just try it). The numpy routines introduced in the image compression example hint that this is not the most efficient way of solving the problem. A cleverer approach was introduced by Gauss (in 1805) and it is simpler to understand if we assume the number of samples is a power of two, such that $N=2^m$ where $m$ is an integer. If we split our sum into two equally sized samples, of odd and even $n$, then the sum of the even terms can be written as:

$$E_k = \sum_{r=0}^{\frac{1}{2}N-1}y_{2r} \exp \left(-i\frac{2 \pi k(2r) }{N}\right) = \sum_{r=0}^{\frac{1}{2}N-1}y_{2r} \exp \left(-i\frac{2 \pi kr }{\frac{1}{2}N}\right)$$

But this is just another Fourier transform with $\frac{1}{2}N$ samples instead of $N$. Similarly, for the odd terms we get:

$$\sum_{r=0}^{\frac{1}{2}N-1}y_{2r+1} \exp \left(-i\frac{2 \pi k(2r+1) }{N}\right) = \exp\left(-i\frac{2 \pi k}{N}\right)\sum_{r=0}^{\frac{1}{2}N-1}y_{2r+1} \exp \left(-i\frac{2 \pi kr }{\frac{1}{2}N}\right)$$$$= \exp\left(-i\frac{2 \pi k}{N}\right)O_k$$

where $O_k$ is another Fourier transform with $\frac{1}{2}N$ samples. Then the complete Fourier coefficient is the sum of the odd and even terms:

$$c_k = E_k + \exp\left(-i\frac{2 \pi k}{N}\right)O_k$$

So we can get the Fourier transform of any function $f(x)$ by finding two Fourier transforms of that function with half as many points spaced twice as far apart (with an extra factor, $\exp\left(-i\frac{2 \pi k}{N}\right)$, called the twiddle factor). But we can apply exactly the same split to each of the two new transforms, and keep repeating the process until each transform is just that of a single sample, with $k=0$, $N=1$ and the single coefficient:

$$c_0 = \sum_{n=0}^{0}y_ne^0=y_0$$

So the transform of a single sample is just the function at the sample point. The actual calculation starts with the individual samples (which are their own transforms) and then combines them in pairs, then pairs into fours and so on until the complete transform is built up. Each level requires $N$ operations: first $N$ single-sample transforms, then $\frac{1}{2} N$ transforms of two coefficients, then $\frac{1}{4} N$ transforms of four coefficients, and so on. After we have gone through $m$ levels we have combined $2^m$ samples, so the total number of levels is given by $2^m=N$, or $m=\log_2 N$. Compared to scaling as $N^2$ we now have $\log_2 N$ levels times $N$ coefficients, scaling as $N\log_2 N$ - for a million samples this is a difference of $\frac{1}{2}N^2=5\times 10^{11}$ operations compared to $N\log_2 N=2\times 10^{7}$, four orders of magnitude faster. This is why it is known as the Fast Fourier Transform (FFT) and is the form generally used. An equivalently fast form can be derived for the inverse, giving us the Inverse Fast Fourier Transform (Inverse FFT). Note that the formulation also works with functions where the number of samples is not a power of two, but the derivation is fairly complex.
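The recursive scheme described above can be sketched in a few lines (a minimal radix-2 implementation, not the optimized in-place form used by real FFT libraries):

```python
import numpy as np

def fft_recursive(y):
    """Radix-2 Cooley-Tukey FFT; len(y) must be a power of two."""
    y = np.asarray(y, dtype=complex)
    N = len(y)
    if N == 1:
        return y  # a single sample is its own transform
    E = fft_recursive(y[0::2])  # transform of the even samples
    O = fft_recursive(y[1::2])  # transform of the odd samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    # c_k = E_k + e^{-i 2 pi k/N} O_k; for k >= N/2, E and O repeat
    # and the twiddle factor changes sign
    return np.concatenate([E + twiddle * O, E - twiddle * O])

rng = np.random.default_rng(3)
y = rng.standard_normal(64)
print(np.allclose(fft_recursive(y), np.fft.fft(y)))  # True
```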

Here is an example using fft. The only required argument for fft and ifft is an N-element array of floating-point or complex numbers; the sampling time $\tau$ does not appear in the formulas above, but it is needed to figure out the frequencies represented by the points in the transformed array.

In the example the original data $f_n$ are real, consisting of a cosine oscillating at 200 kHz multiplied by a decaying exponential. Nevertheless, the DFT of these data, $\tilde{f}_k$, is complex.

The second half of the plot above, labeled from 500 kHz to 1000 kHz, really holds negative frequencies from −500 kHz to zero. Thus the DFT of this input data actually has peaks at +200 kHz and −200 kHz. If we wish to display the frequency axis correctly, we can use the numpy function roll to shift the DFT so that zero frequency is in the middle (note that the DCT will always give real results and positive frequencies):
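A sketch of such an example (the values $N=1024$ and $\tau = 1\,\mu\mathrm{s}$ are assumed here, chosen to reproduce the 1 MHz sample rate implied by the frequency axis described above):

```python
import numpy as np

# assumed parameters: N samples spaced tau = 1 us apart (1 MHz sample rate)
N, tau = 1024, 1e-6
t = np.arange(N) * tau
fn = np.cos(2 * np.pi * 200e3 * t) * np.exp(-t / (N * tau / 4))

F = np.fft.fft(fn)
freq = np.fft.fftfreq(N, d=tau)  # bin frequencies: positive then negative

# roll both arrays by N//2 so that zero frequency sits in the middle
F_shifted = np.roll(F, N // 2)
freq_shifted = np.roll(freq, N // 2)

peak = freq_shifted[np.argmax(np.abs(F_shifted))]
print(f"dominant frequency: {abs(peak) / 1e3:.0f} kHz")
```

The same reordering is available directly as `np.fft.fftshift`.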

Example - FFT speed comparison

Make a comparison of the speed of our simple implementation of the FT used to look at the sound file pitch.txt with the numpy implementation of an FFT. Make a plot of the relative speed for the two algorithms with respect to sample size for any function you choose.

Example - DCT speed comparison

In the image example above we used rfft from numpy and coded a DCT in one- and two-dimensions on top of it. In fact, numpy already has 2D functions in the fft package and direct DCT functions are available in scipy in 1D. Repeat the image compression analysis we performed on an image of your own choice and compare the speed differences depending on what packages you use - feel free to try anything you can find...

Windowing and power spectrum estimation

We saw that a common use of discrete Fourier and cosine transforms is to find out what frequency components are present in a signal - known as power spectrum estimation. For this purpose one might not care about phase information, so it suffices to use the discrete cosine transform (no complex numbers) and to plot the absolute value of the transform.

For a signal that goes on for a long time but is only sampled for a finite period, the abrupt start and stop of the sample at the start and end times makes the peaks in the frequency spectrum artificially wide, and introduces other artifacts such as wiggles extending far from the peaks. These undesirable artifacts can be greatly reduced by multiplying the original data by a windowing function that goes smoothly to zero at the start and end of the sampling time, before applying the Fourier or cosine transform.

One popular windowing function is the Hanning window, available (along with various other windowing functions) in numpy. Specifically, numpy.hanning(nn) returns a Hanning window in an array with nn elements, as illustrated here:
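For instance (a sketch; the test signal is given a frequency falling between two DFT bins deliberately, to make the leakage obvious):

```python
import numpy as np

N = 256
t = np.arange(N)
signal = np.sin(2 * np.pi * 10.5 * t / N)  # frequency between two DFT bins

window = np.hanning(N)  # rises from zero and falls smoothly back to zero
spec_raw = np.abs(np.fft.rfft(signal))
spec_win = np.abs(np.fft.rfft(signal * window))

# far from the peak (around bin 10), the windowed spectrum falls off much
# faster, so the leakage "tail" carries far less weight
print(spec_raw[40:N // 2].sum() > 10 * spec_win[40:N // 2].sum())  # True
```

The windowed peak is slightly broader, but the long-range leakage is suppressed by orders of magnitude.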

A magnetic example

A practical example of Fourier analysis for images can be seen in Atomic Force Microscopy (AFM) images. Imaging periodic systems, in this case the surface of Nickel Oxide (NiO), offers a very obvious route where Fourier analysis can be used to see what periodicities are really present. In this work (Nature 446 (2007) 522–525), the authors are trying to detect the exchange force - the difference in force measured in AFM when a magnetic tip interacts with atoms of different spin. This can only be seen at close approach, so they use a Fourier transform to show that the image at close approach (b) has extra peaks in the FT (d) due to the magnetic surface atoms, which cannot be seen in the image (a) and FT (c) at longer range.


Other data sources

There is a large variety of data sources you are likely to encounter in physics, but usually a package exists that has been developed to handle them. In the earlier example we explored the Fourier transform for sound at a single frequency, but any real data you encounter is likely to be much more complex. Here, we can perform a Fourier analysis on a piece of music.

Example: Noisy function

Generate your own noisy function (or find some data with periodic noise). Use Fourier transforms to identify the frequencies of the noise, remove them and plot the true signal.