Part I: Foundations

Chapter 2

Fourier Series

The Fourier series is one of the most powerful tools in signal analysis, allowing us to decompose a periodic signal into a sum of sinusoidal components. This chapter develops the theory from orthogonality principles, establishes convergence conditions, and explores key consequences including Parseval's theorem and the Gibbs phenomenon.

2.1 Trigonometric Fourier Series

The central idea of Fourier analysis is that a periodic signal can be represented as a (possibly infinite) sum of harmonically related sinusoids. Given a periodic signal $x(t)$ with fundamental period $T_0$ and fundamental frequency $\omega_0 = 2\pi / T_0$, we seek a representation of the form:

Theorem 2.1 — Trigonometric Fourier Series

A periodic signal $x(t)$ with period $T_0$ (satisfying the convergence conditions of Section 2.3) can be expanded as:

$$x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos(n\omega_0 t) + b_n \sin(n\omega_0 t) \right]$$

where the Fourier coefficients are given by:

$$a_0 = \frac{2}{T_0} \int_{0}^{T_0} x(t) \, dt$$

$$a_n = \frac{2}{T_0} \int_{0}^{T_0} x(t) \cos(n\omega_0 t) \, dt, \quad n \geq 1$$

$$b_n = \frac{2}{T_0} \int_{0}^{T_0} x(t) \sin(n\omega_0 t) \, dt, \quad n \geq 1$$

Derivation from Orthogonality

The key to deriving these coefficients lies in the orthogonality of the trigonometric system. The functions $\{1, \cos(n\omega_0 t), \sin(n\omega_0 t)\}_{n=1}^{\infty}$ form an orthogonal set over the interval $[0, T_0]$. Specifically:

Orthogonality Relations

For integers $m, n \geq 1$:

$$\int_0^{T_0} \cos(m\omega_0 t) \cos(n\omega_0 t) \, dt = \begin{cases} T_0 / 2 & m = n \\ 0 & m \neq n \end{cases}$$

$$\int_0^{T_0} \sin(m\omega_0 t) \sin(n\omega_0 t) \, dt = \begin{cases} T_0 / 2 & m = n \\ 0 & m \neq n \end{cases}$$

$$\int_0^{T_0} \cos(m\omega_0 t) \sin(n\omega_0 t) \, dt = 0 \quad \text{for all } m, n$$

To extract $a_n$, we multiply both sides of the Fourier expansion by $\cos(n\omega_0 t)$ and integrate over one period. By orthogonality, all terms vanish except the one matching $n$, yielding:

$$\int_0^{T_0} x(t) \cos(n\omega_0 t) \, dt = a_n \cdot \frac{T_0}{2}$$

Solving for $a_n$ gives the formula in Theorem 2.1. The derivation for $b_n$ is analogous, multiplying by $\sin(n\omega_0 t)$ instead. The DC term $a_0/2$ is simply the average value of $x(t)$ over one period.

Remark: The factor of $1/2$ in the DC term $a_0/2$ is a convention that makes the formula for $a_n$ valid for $n = 0$ as well. Some textbooks write $c_0$ for the DC component to avoid confusion.
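As a sanity check, the coefficient integrals can be approximated numerically. A minimal sketch (the helper `trig_coeffs` and the test signal are illustrative, not from the text), using Riemann sums over one period:

```python
import numpy as np

def trig_coeffs(x, T0, n_max, num=4096):
    """Approximate the coefficient integrals of Theorem 2.1 by
    uniform-grid Riemann sums over one period."""
    t = np.linspace(0.0, T0, num, endpoint=False)
    w0 = 2 * np.pi / T0
    xt = x(t)
    a0 = (2 / num) * np.sum(xt)   # (2/T0) * sum(xt) * dt, with dt = T0/num
    a = np.array([(2 / num) * np.sum(xt * np.cos(n * w0 * t)) for n in range(1, n_max + 1)])
    b = np.array([(2 / num) * np.sum(xt * np.sin(n * w0 * t)) for n in range(1, n_max + 1)])
    return a0, a, b

# Test signal with known coefficients: x(t) = 1 + cos(t) + 2 sin(3t)
a0, a, b = trig_coeffs(lambda t: 1 + np.cos(t) + 2 * np.sin(3 * t), 2 * np.pi, 4)
print(a0 / 2, a, b)   # DC term 1; a_1 = 1; b_3 = 2; all others ~ 0
```

Because the test signal is a finite trigonometric polynomial, the uniform-grid Riemann sum recovers the coefficients essentially exactly.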

Symmetry Properties

Symmetry in $x(t)$ can simplify computation dramatically:

  • Even symmetry ($x(t) = x(-t)$): All $b_n = 0$. The series contains only cosine terms.
  • Odd symmetry ($x(t) = -x(-t)$): All $a_n = 0$ (including $a_0$). The series contains only sine terms.
  • Half-wave symmetry ($x(t + T_0/2) = -x(t)$): All even-numbered harmonics vanish. Only odd $n$ contribute.

2.2 Complex Exponential Fourier Series

Using Euler's formula $e^{j\theta} = \cos\theta + j\sin\theta$, we can rewrite the trigonometric Fourier series in a more compact and elegant form using complex exponentials. This representation is widely preferred in signal processing and communications.

Theorem 2.2 — Complex Exponential Fourier Series

A periodic signal $x(t)$ with period $T_0$ has the expansion:

$$x(t) = \sum_{n=-\infty}^{\infty} c_n \, e^{jn\omega_0 t}$$

where the complex Fourier coefficients are:

$$c_n = \frac{1}{T_0} \int_{0}^{T_0} x(t) \, e^{-jn\omega_0 t} \, dt, \quad n \in \mathbb{Z}$$

Relationship to Trigonometric Coefficients

The complex coefficients $c_n$ relate directly to the trigonometric coefficients through:

Coefficient Relationships

$$c_n = \frac{a_n - jb_n}{2}, \quad c_{-n} = \frac{a_n + jb_n}{2}, \quad c_0 = \frac{a_0}{2}$$

Conversely:

$$a_n = c_n + c_{-n} = 2\,\text{Re}(c_n), \quad b_n = j(c_n - c_{-n}) = -2\,\text{Im}(c_n)$$

For a real-valued signal $x(t)$, the coefficients satisfy the conjugate symmetry property $c_{-n} = c_n^*$. This means the magnitude spectrum is even: $|c_{-n}| = |c_n|$, and the phase spectrum is odd: $\angle c_{-n} = -\angle c_n$.

The amplitude of each harmonic can also be written in terms of the complex coefficients:

$$A_n = \sqrt{a_n^2 + b_n^2} = 2|c_n|, \quad \phi_n = \arctan\!\left(\frac{-b_n}{a_n}\right) = \angle c_n$$

(the quadrant of $\phi_n$ is fixed by the signs of $a_n$ and $-b_n$, as in the two-argument arctangent)

Remark: The complex exponential form is not merely a mathematical convenience. It directly corresponds to the physical notion of positive and negative frequencies in modulation theory. Each pair $(c_n, c_{-n})$ represents a counter-rotating phasor pair whose sum is a real sinusoid at frequency $n\omega_0$.
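The coefficient relationships above are easy to verify numerically. A minimal sketch (the helper `complex_coeffs` and the test signal are illustrative assumptions):

```python
import numpy as np

def complex_coeffs(x, T0, n_max, num=4096):
    """c_n = (1/T0) * integral of x(t) e^{-j n w0 t} over one period,
    approximated by a uniform-grid Riemann sum."""
    t = np.linspace(0.0, T0, num, endpoint=False)
    w0 = 2 * np.pi / T0
    xt = x(t)
    return {n: np.sum(xt * np.exp(-1j * n * w0 * t)) / num for n in range(-n_max, n_max + 1)}

# Real signal: x(t) = 3 cos(2t) + 4 sin(2t), so a_2 = 3, b_2 = 4
c = complex_coeffs(lambda t: 3 * np.cos(2 * t) + 4 * np.sin(2 * t), 2 * np.pi, 3)

print(np.allclose(c[-2], np.conj(c[2])))   # conjugate symmetry c_{-n} = c_n*: True
print(2 * c[2].real, -2 * c[2].imag)       # recovers a_2 = 3, b_2 = 4
print(2 * abs(c[2]))                       # harmonic amplitude A_2 = sqrt(9 + 16) = 5
```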

2.3 Dirichlet Conditions

Not every periodic function has a convergent Fourier series. The Dirichlet conditions provide sufficient (though not necessary) conditions for the Fourier series of a periodic signal to converge.

Theorem 2.3 — Dirichlet Conditions for Convergence

The Fourier series of a periodic signal $x(t)$ converges if the following conditions are satisfied over one period:

  1. Absolute integrability: $\displaystyle\int_0^{T_0} |x(t)| \, dt < \infty$
  2. Finite number of maxima and minima within one period.
  3. Finite number of discontinuities within one period, and each discontinuity must be finite (no infinite jumps).

When these conditions are met, the Fourier series converges to $x(t)$ at points of continuity, and to the average of the left and right limits at points of discontinuity:

$$S(t_0) = \frac{x(t_0^+) + x(t_0^-)}{2}$$

These conditions are satisfied by virtually all physically realizable signals. A classic counterexample is the function $x(t) = \sin(1/t)$ near $t = 0$, which oscillates infinitely often in any neighborhood of the origin, violating condition 2.

Remark: The Dirichlet conditions are sufficient but not necessary. There exist functions that violate these conditions yet still possess convergent Fourier series (e.g., certain functions with infinitely many discontinuities arranged such that the series still converges in the $L^2$ sense). In engineering practice, however, the Dirichlet conditions are almost always met.

2.4 Square Wave Fourier Series

The square wave is perhaps the most important worked example in Fourier series theory. It illustrates many fundamental concepts: odd harmonics only, $1/n$ amplitude decay, the Gibbs phenomenon, and half-wave symmetry.

Example 2.1 — Fourier Series of a Square Wave

Consider the square wave defined on $[-\pi, \pi]$ with period $T_0 = 2\pi$:

$$x(t) = \begin{cases} +1 & 0 < t < \pi \\ -1 & -\pi < t < 0 \end{cases}$$

Since $x(t)$ is an odd function, all cosine coefficients vanish: $a_n = 0$ for all $n$.

Step 1: Compute $b_n$:

$$b_n = \frac{2}{2\pi} \int_{-\pi}^{\pi} x(t) \sin(nt) \, dt = \frac{2}{\pi} \int_{0}^{\pi} \sin(nt) \, dt$$

Step 2: Evaluate the integral:

$$b_n = \frac{2}{\pi} \left[ -\frac{\cos(nt)}{n} \right]_0^{\pi} = \frac{2}{n\pi} \left(1 - \cos(n\pi)\right) = \frac{2}{n\pi}\left(1 - (-1)^n\right)$$

Step 3: Simplify:

$$b_n = \begin{cases} \dfrac{4}{n\pi} & n \text{ odd} \\ 0 & n \text{ even} \end{cases}$$

Result:

$$x(t) = \frac{4}{\pi} \sum_{k=0}^{\infty} \frac{\sin\!\big((2k+1)t\big)}{2k+1} = \frac{4}{\pi}\!\left(\sin t + \frac{\sin 3t}{3} + \frac{\sin 5t}{5} + \cdots\right)$$

Key observations from this result:

  • Only odd harmonics appear — a consequence of half-wave symmetry $x(t + T_0/2) = -x(t)$.
  • Amplitudes decay as $1/n$ — this slow decay reflects the discontinuities in the waveform. Smoother signals have faster-decaying coefficients.
  • The series converges slowly — many terms are needed for a good approximation near the discontinuities.

Fourier Series of a Square Wave

Visualize how adding more harmonics builds up the square wave approximation. Note the persistent overshoot near discontinuities (Gibbs phenomenon).

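The interactive demo can be approximated offline. A minimal sketch using the partial sums of Example 2.1, showing the mean-square error falling as harmonics are added:

```python
import numpy as np

def square_partial_sum(t, N):
    """Sum of the first N odd-harmonic terms of the series in Example 2.1."""
    return (4 / np.pi) * sum(np.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(N))

t = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
target = np.sign(t)   # the square wave of Example 2.1
mses = [np.mean((square_partial_sum(t, N) - target) ** 2) for N in (1, 10, 100)]
print(mses)   # mean-square error decreases as harmonics are added
```

Note that the maximum pointwise error near the jumps does not shrink the same way; that persistent overshoot is the Gibbs phenomenon of Section 2.6.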

2.5 Parseval's Theorem for Fourier Series

Parseval's theorem establishes a fundamental connection between the time-domain and frequency-domain representations: the total power of a signal is preserved in the Fourier domain. This is a manifestation of energy conservation and underpins many practical applications in spectral analysis.

Theorem 2.4 — Parseval's Theorem

For a periodic signal $x(t)$ with Fourier coefficients $c_n$, the average power satisfies:

$$P = \frac{1}{T_0} \int_0^{T_0} |x(t)|^2 \, dt = \sum_{n=-\infty}^{\infty} |c_n|^2$$

Equivalently, in terms of trigonometric coefficients:

$$P = \frac{a_0^2}{4} + \frac{1}{2} \sum_{n=1}^{\infty} \left(a_n^2 + b_n^2\right)$$

Proof Sketch

Starting from the power integral and substituting the Fourier series expansion:

$$P = \frac{1}{T_0} \int_0^{T_0} x(t) \cdot x^*(t) \, dt = \frac{1}{T_0} \int_0^{T_0} \left(\sum_{n} c_n e^{jn\omega_0 t}\right)\!\left(\sum_{m} c_m^* e^{-jm\omega_0 t}\right) dt$$

Exchanging summation and integration (justified by uniform convergence under Dirichlet conditions), and applying the orthogonality relation:

$$\frac{1}{T_0} \int_0^{T_0} e^{j(n-m)\omega_0 t} \, dt = \delta_{nm}$$

where $\delta_{nm}$ is the Kronecker delta, we obtain:

$$P = \sum_{n=-\infty}^{\infty} c_n c_n^* = \sum_{n=-\infty}^{\infty} |c_n|^2 \quad \blacksquare$$

Example 2.2 — Parseval's Theorem Applied to the Square Wave

For the unit square wave, the time-domain power is:

$$P = \frac{1}{T_0}\int_0^{T_0} |x(t)|^2 \, dt = \frac{1}{2\pi}\int_0^{2\pi} 1 \, dt = 1$$

Using the Fourier coefficients $c_n = \frac{-2j}{n\pi}$ for odd $n$:

$$\sum_{n=-\infty}^{\infty} |c_n|^2 = 2\sum_{k=0}^{\infty} \frac{4}{\pi^2(2k+1)^2} = \frac{8}{\pi^2}\sum_{k=0}^{\infty} \frac{1}{(2k+1)^2}$$

Setting this equal to 1 gives the famous identity:

$$\sum_{k=0}^{\infty} \frac{1}{(2k+1)^2} = 1 + \frac{1}{9} + \frac{1}{25} + \cdots = \frac{\pi^2}{8}$$

which is closely related to the Basel problem $\sum_{n=1}^{\infty} 1/n^2 = \pi^2/6$.

Parseval's Theorem Verification

Numerically verify that the sum of |c_n|^2 converges to the time-domain power of a square wave. The left plot shows cumulative spectral power approaching the time-domain value.

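A minimal offline version of this verification, using the coefficients $c_n = -2j/(n\pi)$ (odd $n$) from Example 2.2:

```python
import numpy as np

# Spectral power of the unit square wave: sum over |c_n|^2 with c_n = -2j/(n*pi), n odd
n = np.arange(1, 100001, 2)                         # positive odd harmonics
spectral_power = 2 * np.sum(4 / (np.pi * n) ** 2)   # factor 2 counts both +n and -n
print(spectral_power)   # converges (from below) to the time-domain power P = 1
```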

2.6 Gibbs Phenomenon

When we approximate a signal with discontinuities using a finite Fourier series, we observe a persistent overshoot near the jumps that does not vanish as we increase the number of harmonics. This is the Gibbs phenomenon, first described by Henry Wilbraham in 1848 and later rediscovered by J. Willard Gibbs in 1899.

Theorem 2.5 — Gibbs Phenomenon

At a jump discontinuity of height $d$, the $N$-term partial sum of the Fourier series overshoots by approximately:

$$\text{Overshoot} \approx \frac{d}{2} \cdot \left(\frac{2}{\pi}\int_0^{\pi} \frac{\sin u}{u}\,du - 1\right) \approx 0.0895 \cdot d$$

That is, the overshoot is approximately 8.95% of the jump height, regardless of how many terms $N$ are used.

Mathematical Explanation

The $N$-term partial sum of the Fourier series at a point $t$ near a discontinuity can be expressed via the Dirichlet kernel:

$$S_N(t) = \frac{1}{T_0}\int_0^{T_0} x(\tau) D_N(t - \tau)\, d\tau$$

where the Dirichlet kernel is:

$$D_N(\theta) = \frac{\sin\!\big((N + \tfrac{1}{2})\theta\big)}{\sin(\theta/2)}$$

As $N \to \infty$, the main lobe of $D_N$ narrows, but the integral of the first side lobe converges to a value that produces the 8.95% overshoot. Specifically, the maximum of the partial sum near a discontinuity converges to:

$$\lim_{N\to\infty} S_N\!\left(t_0 + \frac{\pi}{N\omega_0}\right) = \frac{x(t_0^+) + x(t_0^-)}{2} + \frac{d}{2}\left(\frac{2}{\pi}\,\text{Si}(\pi) - 1\right)$$

where $\text{Si}(\pi) = \int_0^{\pi} \frac{\sin u}{u}\,du \approx 1.8519$. The resulting overshoot factor is $\frac{2}{\pi}\text{Si}(\pi) - 1 \approx 0.1790$; multiplied by the half-jump $d/2$, this gives an overshoot of approximately $0.0895\,d$, i.e. the 8.95% of the full jump height quoted in Theorem 2.5, measured from the target value.

Remark: The Gibbs phenomenon has practical consequences in digital signal processing, image compression, and audio engineering. It is one reason why windowing functions (Hanning, Hamming, Blackman) are used before spectral analysis — they trade frequency resolution for reduced spectral leakage and suppress Gibbs-like ripples.

Gibbs Phenomenon Close-Up

Zoom in on the discontinuity to observe the persistent ~8.95% overshoot that remains regardless of how many harmonics are included.

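A minimal offline version of this close-up, locating the first peak of the partial sum just to the right of the jump at $t = 0$:

```python
import numpy as np

def square_partial_sum(t, N):
    """Sum of the first N odd-harmonic terms of the square-wave series."""
    return (4 / np.pi) * sum(np.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(N))

t = np.linspace(1e-6, 0.5, 200_000)   # fine grid just to the right of the jump at t = 0
peaks = {N: square_partial_sum(t, N).max() for N in (10, 100, 1000)}
print(peaks)
# The peak does not decay toward 1: it converges to (2/pi)*Si(pi) ~ 1.1790,
# an overshoot of ~0.179 above the target +1, i.e. ~8.95% of the full jump d = 2.
```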

2.7 Modes of Convergence

The Fourier series can converge in several different senses, each with distinct mathematical implications. Understanding these distinctions is crucial for rigorous signal analysis.

Pointwise Convergence

The Fourier series converges pointwise at a point $t_0$ if:

$$\lim_{N\to\infty} S_N(t_0) = x(t_0)$$

Under the Dirichlet conditions, pointwise convergence holds at every point of continuity. At discontinuities, $S_N(t_0)$ converges to the midpoint value $\frac{1}{2}[x(t_0^+) + x(t_0^-)]$.

Uniform Convergence

The Fourier series converges uniformly if:

$$\lim_{N\to\infty} \sup_{t \in [0, T_0]} |S_N(t) - x(t)| = 0$$

Uniform convergence requires $x(t)$ to be continuous on $[0, T_0]$. Discontinuities prevent uniform convergence, which is precisely why the Gibbs phenomenon persists — the maximum error does not go to zero even as $N \to \infty$.

Mean-Square ($L^2$) Convergence

The Fourier series converges in the $L^2$ (mean-square) sense if:

$$\lim_{N\to\infty} \frac{1}{T_0}\int_0^{T_0} |S_N(t) - x(t)|^2 \, dt = 0$$

This is the weakest and most broadly applicable mode. By the Riesz-Fischer theorem, the Fourier series converges in $L^2$ for any square-integrable periodic signal, even those with discontinuities. This mode of convergence is particularly natural for energy/power signals.

Comparison of Convergence Modes

| Mode | Requirement on $x(t)$ | Handles discontinuities? | Gibbs effect? |
| --- | --- | --- | --- |
| Pointwise | Dirichlet conditions | Converges to midpoint | Yes (locally) |
| Uniform | Continuity + bounded variation | No | N/A (requires continuity) |
| $L^2$ (mean-square) | Square-integrability | Yes (error energy → 0) | Yes, but energy → 0 |

Remark: Uniform convergence is the strongest of these modes: on a finite interval it implies both pointwise and $L^2$ convergence. The converses fail: the square wave's series converges pointwise and in $L^2$ but not uniformly, and pointwise convergence alone does not in general guarantee $L^2$ convergence. For signals satisfying the Dirichlet conditions, however, such pathologies do not arise.

Rate of Coefficient Decay

The smoothness of a signal is directly reflected in how quickly its Fourier coefficients decay. This relationship is fundamental:

  • Discontinuous signals (e.g., square wave): coefficients decay as $O(1/n)$.
  • Continuous but non-differentiable (e.g., triangle wave): coefficients decay as $O(1/n^2)$.
  • $k$-times differentiable: coefficients decay as $O(1/n^{k+1})$.
  • Infinitely differentiable (smooth): coefficients decay faster than any polynomial — superalgebraic decay.
  • Analytic signals: coefficients decay exponentially $O(e^{-\alpha n})$.

This principle can be stated precisely: if $x(t)$ is $k$-times differentiable and $x^{(k)}(t)$ is of bounded variation, then $|c_n| \leq C/|n|^{k+1}$ for some constant $C$. This is proven by repeated integration by parts in the coefficient integral.
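This decay law can be checked numerically. A sketch comparing the square wave ($O(1/n)$ decay) with the triangle wave ($O(1/n^2)$ decay); the closed-form waveform expressions via `np.sign` and `np.arcsin` are illustrative choices:

```python
import numpy as np

def cn_mag(x, n, T0=2 * np.pi, num=1 << 15):
    """|c_n| from a Riemann-sum approximation of the coefficient integral."""
    t = np.linspace(0.0, T0, num, endpoint=False)
    w0 = 2 * np.pi / T0
    return abs(np.sum(x(t) * np.exp(-1j * n * w0 * t)) / num)

square = lambda t: np.sign(np.sin(t))                    # jump discontinuities
triangle = lambda t: (2 / np.pi) * np.arcsin(np.sin(t))  # continuous, with kinks

ratios = {n: (n * cn_mag(square, n), n ** 2 * cn_mag(triangle, n)) for n in (1, 11, 101)}
for n, r in ratios.items():
    print(n, r)
# n |c_n| stays near 2/pi for the square wave: O(1/n) decay.
# n^2 |c_n| stays near 4/pi^2 for the triangle wave: O(1/n^2) decay.
```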

2.8 Additional Waveform Examples

To solidify our understanding, let us examine two more canonical waveforms: the sawtooth wave and the triangle wave. These examples illustrate how waveform symmetry and smoothness affect the Fourier series.

Example 2.3 — Sawtooth Wave

The sawtooth wave with period $T_0 = 2\pi$, defined by $x(t) = t/\pi$ on $(-\pi, \pi)$, is an odd function. Its Fourier series contains only sine terms:

$$x(t) = \frac{2}{\pi} \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} \sin(nt) = \frac{2}{\pi}\!\left(\sin t - \frac{\sin 2t}{2} + \frac{\sin 3t}{3} - \cdots\right)$$

The coefficients decay as $1/n$, reflecting the jump discontinuity at the sawtooth's flyback. Note that all harmonics are present (both even and odd), since the sawtooth lacks half-wave symmetry.

Example 2.4 — Triangle Wave

The triangle wave is continuous but has corners (non-differentiable points). It is an odd function with half-wave symmetry, so only odd sine terms appear:

$$x(t) = \frac{8}{\pi^2} \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)^2} \sin\!\big((2k+1)t\big)$$

The key difference: coefficients decay as $1/n^2$, which is much faster than the square wave's $1/n$ decay. This is because the triangle wave is continuous — it has no jump discontinuities, only kinks. Consequently, fewer harmonics are needed for a good approximation, and the Gibbs phenomenon does not occur.

Sawtooth and Triangle Wave Fourier Series

Compare the Fourier series approximations and coefficient decay rates for sawtooth (1/n decay) and triangle (1/n² decay) waves.

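A minimal offline version of this comparison, measuring the worst-case reconstruction error away from the sawtooth's jump (20 series terms for each waveform):

```python
import numpy as np

def sawtooth_sum(t, N):
    """Partial sum of the sawtooth series in Example 2.3."""
    return (2 / np.pi) * sum((-1) ** (n + 1) * np.sin(n * t) / n for n in range(1, N + 1))

def triangle_sum(t, N):
    """Partial sum of the triangle-wave series in Example 2.4."""
    return (8 / np.pi ** 2) * sum((-1) ** k * np.sin((2 * k + 1) * t) / (2 * k + 1) ** 2
                                  for k in range(N))

t = np.linspace(-np.pi + 0.2, np.pi - 0.2, 2000)   # keep clear of the flyback at t = ±pi
saw_err = np.max(np.abs(sawtooth_sum(t, 20) - t / np.pi))
tri_err = np.max(np.abs(triangle_sum(t, 20) - (2 / np.pi) * np.arcsin(np.sin(t))))
print(saw_err, tri_err)   # the triangle error is far smaller for the same term count
```

The gap widens with more terms, reflecting the $1/n$ versus $1/n^2$ coefficient decay.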

Summary — Key Results

  1. Any periodic signal satisfying the Dirichlet conditions can be decomposed into a sum of harmonically related sinusoids (trigonometric form) or complex exponentials.
  2. The Fourier coefficients are extracted via the orthogonality of the basis functions. Symmetry properties (even, odd, half-wave) can simplify computation.
  3. Parseval's theorem ensures power conservation: time-domain power equals the sum of squared magnitudes of the Fourier coefficients.
  4. The Gibbs phenomenon produces a persistent ~8.95% overshoot at discontinuities, regardless of the number of terms in the partial sum.
  5. Smoother signals have faster-decaying Fourier coefficients: discontinuities give $O(1/n)$, kinks give $O(1/n^2)$, and $k$-times differentiable signals give $O(1/n^{k+1})$.
  6. Three modes of convergence — uniform, pointwise, and $L^2$, listed from strongest to most broadly applicable — describe how the partial sums approach the original signal.