Part I, Chapter 4

The Laplace Transform

The Laplace transform generalises the Fourier transform by introducing a complex frequency variable \(s = \sigma + j\omega\), allowing us to analyse both the transient and steady-state behaviour of LTI systems. It is the cornerstone of control theory, circuit analysis, and mechanical-systems modelling.

4.1 Definition and the s-Plane

Definition 4.1 — Bilateral Laplace Transform

For a signal \(x(t)\) defined for all \(t\in\mathbb{R}\), the bilateral (two-sided) Laplace transform is

$$X(s) = \mathcal{L}\{x(t)\} = \int_{-\infty}^{\infty} x(t)\, e^{-st}\, dt, \qquad s = \sigma + j\omega$$

When the signal is causal (\(x(t)=0\) for \(t<0\)), this simplifies to the unilateral form:

$$X(s) = \int_{0^-}^{\infty} x(t)\, e^{-st}\, dt$$

The complex variable \(s = \sigma + j\omega\) spans an entire complex plane called the s-plane. The real part \(\sigma\) controls exponential growth or decay, while the imaginary part \(j\omega\) captures the oscillatory (sinusoidal) component.

Definition 4.2 — Region of Convergence (ROC)

The ROC is the set of values of \(s\) for which the Laplace integral converges absolutely:

$$\text{ROC} = \left\{ s \in \mathbb{C} : \int_{-\infty}^{\infty} |x(t)\, e^{-st}|\, dt < \infty \right\}$$

Key properties of the ROC:

  • The ROC is a vertical strip or half-plane in the s-plane (or, for finite-duration signals, the entire plane).
  • For right-sided (causal) signals, the ROC is a right half-plane \(\text{Re}(s) > \sigma_{\max}\).
  • For left-sided signals, the ROC is a left half-plane \(\text{Re}(s) < \sigma_{\min}\).
  • For two-sided signals, the ROC is a vertical strip \(\sigma_{\min} < \text{Re}(s) < \sigma_{\max}\).
  • The ROC cannot contain any poles of \(X(s)\).

Remark — Relationship to the Fourier Transform

The Fourier transform is a special case of the Laplace transform obtained by setting \(s = j\omega\) (i.e. \(\sigma = 0\)):

$$X(j\omega) = \left. X(s) \right|_{s=j\omega} = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt$$

This relationship is valid only when the ROC includes the imaginary axis (\(\sigma = 0\)). If the imaginary axis lies outside the ROC, the Fourier transform does not converge, even though the Laplace transform may exist in a different region.

Example 4.1 — Causal Exponential

Compute \(\mathcal{L}\{e^{-at}\, u(t)\}\) for \(a > 0\):

$$X(s) = \int_0^\infty e^{-at}\, e^{-st}\, dt = \int_0^\infty e^{-(s+a)t}\, dt = \frac{1}{s+a}, \qquad \text{ROC: } \text{Re}(s) > -a$$

The transform has a single pole at \(s = -a\). Since \(a > 0\), the pole lies in the left half-plane and the ROC includes \(j\omega\), so the Fourier transform exists.
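
Example 4.1 can be checked numerically. The sketch below approximates the unilateral Laplace integral with the trapezoid rule (the helper `laplace_numeric` and its truncation point `T` are our own choices; the approximation is only valid when \(\text{Re}(s)\) lies inside the ROC, so the tail beyond `T` is negligible):

```python
import cmath

def laplace_numeric(x, s, T=60.0, n=200_000):
    """Trapezoid-rule approximation of the unilateral Laplace integral
    of x(t) at the complex frequency s, truncated at t = T."""
    dt = T / n
    total = 0.5 * (x(0.0) + x(T) * cmath.exp(-s * T))
    for k in range(1, n):
        t = k * dt
        total += x(t) * cmath.exp(-s * t)
    return total * dt

a = 2.0
s = 1.0 + 3.0j                        # Re(s) = 1 > -a, inside the ROC
approx = laplace_numeric(lambda t: cmath.exp(-a * t).real, s)
exact = 1 / (s + a)                   # the closed form from Example 4.1
err = abs(approx - exact)
print(err)                            # small numerical error
```

Evaluating at a point outside the ROC (e.g. \(\text{Re}(s) < -a\)) would make the integrand blow up and the truncation invalid.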

4.2 Transfer Functions

An LTI system whose input-output relationship is described by a linear constant-coefficient ODE

$$\sum_{k=0}^{N} a_k \frac{d^k y(t)}{dt^k} = \sum_{k=0}^{M} b_k \frac{d^k x(t)}{dt^k}$$

can be completely characterised in the Laplace domain. Taking the Laplace transform of both sides (assuming zero initial conditions) gives:

$$H(s) = \frac{Y(s)}{X(s)} = \frac{\sum_{k=0}^{M} b_k\, s^k}{\sum_{k=0}^{N} a_k\, s^k} = \frac{b_M\, s^M + \cdots + b_1\, s + b_0}{a_N\, s^N + \cdots + a_1\, s + a_0}$$

Definition 4.3 — Poles, Zeros, and Gain

Factor the transfer function into its pole-zero form:

$$H(s) = K\, \frac{(s - z_1)(s - z_2)\cdots(s - z_M)}{(s - p_1)(s - p_2)\cdots(s - p_N)}$$
  • Zeros \(z_1, z_2, \ldots, z_M\): values of \(s\) where \(H(s) = 0\).
  • Poles \(p_1, p_2, \ldots, p_N\): values of \(s\) where \(H(s) \to \infty\).
  • Gain \(K = b_M / a_N\): the leading coefficient ratio.
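
Definition 4.3 can be made concrete with a small pure-Python sketch (the helper `second_order_pole_zero` is ours and handles only the second-order case; a general implementation would use a polynomial root finder such as `numpy.roots`):

```python
import cmath

def second_order_pole_zero(b1, b0, a2, a1, a0):
    """Pole-zero data for H(s) = (b1*s + b0) / (a2*s^2 + a1*s + a0)."""
    zero = -b0 / b1                               # single zero of the numerator
    disc = cmath.sqrt(a1 * a1 - 4 * a2 * a0)      # discriminant of the denominator
    poles = ((-a1 + disc) / (2 * a2), (-a1 - disc) / (2 * a2))
    gain = b1 / a2                                # K = b_M / a_N
    return zero, poles, gain

# H(s) = (s + 3) / (s^2 + 4s + 13): zero at -3, poles at -2 ± 3j, K = 1
zero, poles, K = second_order_pole_zero(1, 3, 1, 4, 13)
print(zero, poles, K)
```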

Theorem 4.1 — BIBO Stability

A causal LTI system with rational transfer function \(H(s)\) is bounded-input bounded-output (BIBO) stable if and only if:

$$\text{All poles of } H(s) \text{ lie in the open left half-plane: } \text{Re}(p_i) < 0 \; \forall\, i$$

Equivalently, the ROC of \(H(s)\) must include the imaginary axis, which guarantees the impulse response \(h(t)\) is absolutely integrable:

$$\int_0^\infty |h(t)|\, dt < \infty$$

Remark — Marginal Stability

A system with poles on the imaginary axis (e.g. a pure oscillator with poles at \(s = \pm j\omega_0\)) is marginally stable: bounded inputs can produce unbounded outputs if they match the natural frequency (resonance). Such systems are not BIBO stable.
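
Theorem 4.1 translates directly into a one-line test on the pole locations. A minimal sketch (the helper name `bibo_stable` and the tolerance are our own; the strict inequality deliberately rejects poles on the imaginary axis, matching the remark above):

```python
def bibo_stable(poles, tol=1e-12):
    """BIBO stability test for a causal rational H(s): every pole must lie
    strictly in the open left half-plane.  Poles on the imaginary axis
    (marginally stable systems) fail the test."""
    return all(p.real < -tol for p in poles)

print(bibo_stable([-2 + 3j, -2 - 3j]))   # stable pair
print(bibo_stable([1j, -1j]))            # pure oscillator: not BIBO stable
print(bibo_stable([1.0]))                # right half-plane pole: unstable
```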

4.3 Common Transform Pairs

The following table lists the most commonly used unilateral Laplace transform pairs. All signals are causal (multiplied by \(u(t)\)).

| Signal \(x(t)\) | Transform \(X(s)\) | ROC |
|---|---|---|
| \(\delta(t)\) | \(1\) | All \(s\) |
| \(u(t)\) | \(\dfrac{1}{s}\) | \(\text{Re}(s) > 0\) |
| \(t\, u(t)\) | \(\dfrac{1}{s^2}\) | \(\text{Re}(s) > 0\) |
| \(t^n\, u(t)\) | \(\dfrac{n!}{s^{n+1}}\) | \(\text{Re}(s) > 0\) |
| \(e^{-at}\, u(t)\) | \(\dfrac{1}{s+a}\) | \(\text{Re}(s) > -a\) |
| \(t^n e^{-at}\, u(t)\) | \(\dfrac{n!}{(s+a)^{n+1}}\) | \(\text{Re}(s) > -a\) |
| \(\cos(\omega_0 t)\, u(t)\) | \(\dfrac{s}{s^2 + \omega_0^2}\) | \(\text{Re}(s) > 0\) |
| \(\sin(\omega_0 t)\, u(t)\) | \(\dfrac{\omega_0}{s^2 + \omega_0^2}\) | \(\text{Re}(s) > 0\) |
| \(e^{-at}\cos(\omega_0 t)\, u(t)\) | \(\dfrac{s+a}{(s+a)^2 + \omega_0^2}\) | \(\text{Re}(s) > -a\) |
| \(e^{-at}\sin(\omega_0 t)\, u(t)\) | \(\dfrac{\omega_0}{(s+a)^2 + \omega_0^2}\) | \(\text{Re}(s) > -a\) |
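
Any row of the table can be spot-checked numerically; here is the cosine pair (the trapezoid integrator `laplace_numeric` and the truncation length `T` are our own choices, valid because the chosen \(s\) sits inside the ROC \(\text{Re}(s) > 0\)):

```python
import cmath, math

def laplace_numeric(x, s, T=80.0, n=400_000):
    """Trapezoid-rule approximation of the unilateral Laplace integral,
    truncated at t = T; requires Re(s) inside the ROC."""
    dt = T / n
    total = 0.5 * (x(0.0) + x(T) * cmath.exp(-s * T))
    for k in range(1, n):
        t = k * dt
        total += x(t) * cmath.exp(-s * t)
    return total * dt

s = 0.5 + 2.0j
w0 = 3.0
approx = laplace_numeric(lambda t: math.cos(w0 * t), s)
exact = s / (s * s + w0 * w0)       # table entry s / (s^2 + w0^2)
err = abs(approx - exact)
print(err)                          # small numerical error
```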

Remark — Properties of the Laplace Transform

The following operational properties are used extensively when working with transform pairs:

  • Linearity: \(\mathcal{L}\{a\,x(t) + b\,y(t)\} = a\,X(s) + b\,Y(s)\)
  • Time shift: \(\mathcal{L}\{x(t - t_0)\,u(t-t_0)\} = e^{-s t_0} X(s)\)
  • s-shift: \(\mathcal{L}\{e^{-at}x(t)\} = X(s + a)\)
  • Differentiation: \(\mathcal{L}\{x'(t)\} = s\,X(s) - x(0^-)\)
  • Integration: \(\mathcal{L}\!\left\{\int_0^t x(\tau)\,d\tau\right\} = \dfrac{X(s)}{s}\)
  • Convolution: \(\mathcal{L}\{x(t)*y(t)\} = X(s)\,Y(s)\)
  • Scaling: \(\mathcal{L}\{x(at)\} = \dfrac{1}{a}\,X\!\left(\dfrac{s}{a}\right),\; a > 0\)
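
The convolution property is worth a concrete check. For \(x(t)=e^{-t}u(t)\) and \(y(t)=e^{-2t}u(t)\), the product of transforms is \(\frac{1}{(s+1)(s+2)} = \frac{1}{s+1} - \frac{1}{s+2}\), whose inverse is \(e^{-t}-e^{-2t}\); the time-domain convolution should match. A pure-Python sketch (the helper `convolve_at` and the grid size are our own choices):

```python
import math

def convolve_at(t, n=20_000):
    """(x * y)(t) = integral of x(tau) y(t - tau) over [0, t],
    evaluated by the trapezoid rule for the two causal exponentials."""
    dtau = t / n
    f = lambda tau: math.exp(-tau) * math.exp(-2 * (t - tau))
    total = 0.5 * (f(0.0) + f(t))
    for k in range(1, n):
        total += f(k * dtau)
    return total * dtau

t = 1.5
print(convolve_at(t), math.exp(-t) - math.exp(-2 * t))   # should agree
```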

4.4 Inverse Laplace Transform

Definition 4.4 — Bromwich Integral

The formal inversion formula for the Laplace transform is the Bromwich contour integral:

$$x(t) = \mathcal{L}^{-1}\{X(s)\} = \frac{1}{2\pi j} \int_{c - j\infty}^{c + j\infty} X(s)\, e^{st}\, ds$$

where \(c\) is any real number such that the vertical line \(\text{Re}(s) = c\) lies entirely within the ROC of \(X(s)\). In practice, this integral is evaluated using the residue theorem or, more commonly, avoided altogether via partial fraction expansion.

Partial Fraction Expansion (Practical Method)

Given a proper rational function (\(M < N\)):

$$X(s) = \frac{B(s)}{A(s)} = \frac{B(s)}{(s - p_1)(s - p_2)\cdots(s - p_N)}$$

Case 1 — Distinct poles:

$$X(s) = \frac{A_1}{s - p_1} + \frac{A_2}{s - p_2} + \cdots + \frac{A_N}{s - p_N}, \qquad A_i = \left.(s - p_i)\,X(s)\right|_{s=p_i}$$

Case 2 — Repeated poles: If \(p_1\) has multiplicity \(r\):

$$X(s) = \frac{A_{1,1}}{s - p_1} + \frac{A_{1,2}}{(s - p_1)^2} + \cdots + \frac{A_{1,r}}{(s - p_1)^r} + \text{(other terms)}$$
$$A_{1,k} = \frac{1}{(r-k)!} \left. \frac{d^{r-k}}{ds^{r-k}} \left[ (s-p_1)^r\, X(s) \right] \right|_{s=p_1}$$

Example 4.2 — Partial Fractions with Repeated Poles

Find the inverse Laplace transform of:

$$X(s) = \frac{2s + 5}{(s + 1)^2 (s + 3)}$$

Step 1: Write the partial fraction expansion:

$$X(s) = \frac{A}{s+1} + \frac{B}{(s+1)^2} + \frac{C}{s+3}$$

Step 2: Determine coefficients:

  • \(B = \left.(s+1)^2 X(s)\right|_{s=-1} = \left.\frac{2s+5}{s+3}\right|_{s=-1} = \frac{3}{2}\)
  • \(C = \left.(s+3) X(s)\right|_{s=-3} = \left.\frac{2s+5}{(s+1)^2}\right|_{s=-3} = \frac{-1}{4}\)
  • \(A = \frac{d}{ds}\left[(s+1)^2 X(s)\right]_{s=-1} = \frac{d}{ds}\left[\frac{2s+5}{s+3}\right]_{s=-1} = \left.\frac{2(s+3)-(2s+5)}{(s+3)^2}\right|_{s=-1} = \frac{1}{4}\)

Step 3: Invert term-by-term:

$$x(t) = \left(\frac{1}{4}\,e^{-t} + \frac{3}{2}\,t\,e^{-t} - \frac{1}{4}\,e^{-3t}\right) u(t)$$
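
The three residue computations in Example 4.2 can be reproduced in exact rational arithmetic (the helper names below are our own):

```python
from fractions import Fraction

# Residues of X(s) = (2s+5)/((s+1)^2 (s+3)) from Example 4.2.
def B_coeff():                       # (s+1)^2 X(s) evaluated at s = -1
    s = Fraction(-1)
    return (2 * s + 5) / (s + 3)

def C_coeff():                       # (s+3) X(s) evaluated at s = -3
    s = Fraction(-3)
    return (2 * s + 5) / (s + 1) ** 2

def A_coeff():                       # d/ds[(2s+5)/(s+3)] evaluated at s = -1
    s = Fraction(-1)
    return (2 * (s + 3) - (2 * s + 5)) / (s + 3) ** 2

print(A_coeff(), B_coeff(), C_coeff())   # 1/4  3/2  -1/4

# Spot-check the expansion against the original at an arbitrary point:
s0 = Fraction(2)
lhs = (2 * s0 + 5) / ((s0 + 1) ** 2 * (s0 + 3))
rhs = A_coeff() / (s0 + 1) + B_coeff() / (s0 + 1) ** 2 + C_coeff() / (s0 + 3)
print(lhs == rhs)                        # True
```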

Interactive: Partial Fraction Expansion

4.5 Second-Order Systems

Second-order systems are ubiquitous in engineering: RLC circuits, mass-spring-dampers, servo mechanisms, and more. Their transfer function takes the standard form:

Definition 4.5 — Standard Second-Order Transfer Function

$$H(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$

where:

  • \(\omega_n\) is the natural frequency (rad/s) -- the frequency of oscillation when \(\zeta = 0\).
  • \(\zeta\) (zeta) is the damping ratio (dimensionless).

The poles are located at:

$$s_{1,2} = -\zeta\omega_n \pm \omega_n\sqrt{\zeta^2 - 1}$$

Damping Regimes

Underdamped (\(0 < \zeta < 1\))

Complex conjugate poles. The system oscillates with an exponentially decaying envelope. The damped frequency is \(\omega_d = \omega_n\sqrt{1 - \zeta^2}\), and the unit-step response is

$$y(t) = 1 - \frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}} \sin\!\left(\omega_d t + \phi\right), \qquad \phi = \cos^{-1}\zeta$$

Critically Damped (\(\zeta = 1\))

Repeated real poles at \(s = -\omega_n\). Fastest non-oscillatory response.

$$y(t) = 1 - (1 + \omega_n t)\, e^{-\omega_n t}$$

Overdamped (\(\zeta > 1\))

Two distinct negative real poles:

$$s_{1,2} = -\zeta\omega_n \pm \omega_n\sqrt{\zeta^2 - 1}$$

The step response is a sum of two decaying exponentials -- sluggish and non-oscillatory.

Undamped (\(\zeta = 0\))

Purely imaginary poles at \(s = \pm j\omega_n\). Perpetual oscillation.

$$y(t) = 1 - \cos(\omega_n t)$$

Theorem 4.2 — Percent Overshoot

For an underdamped second-order system, the percent overshoot of the step response depends only on the damping ratio:

$$\%\text{OS} = 100 \cdot e^{-\pi\zeta / \sqrt{1 - \zeta^2}}$$

The peak time and settling time are:

$$t_p = \frac{\pi}{\omega_d} = \frac{\pi}{\omega_n\sqrt{1-\zeta^2}}, \qquad t_s \approx \frac{4}{\zeta\omega_n} \;\text{(2\% criterion)}$$
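
The formulas of Theorem 4.2 translate directly into code. A minimal sketch (the helper name `second_order_step_metrics` is our own; it assumes the underdamped case \(0 < \zeta < 1\)):

```python
import math

def second_order_step_metrics(zeta, wn):
    """Percent overshoot, peak time, and 2% settling time for the
    standard underdamped second-order system (0 < zeta < 1)."""
    wd = wn * math.sqrt(1 - zeta ** 2)                             # damped frequency
    pos = 100 * math.exp(-math.pi * zeta / math.sqrt(1 - zeta ** 2))
    tp = math.pi / wd                                              # peak time
    ts = 4 / (zeta * wn)                                           # 2% settling time
    return pos, tp, ts

pos, tp, ts = second_order_step_metrics(0.5, 2.0)
print(round(pos, 1), round(tp, 3), round(ts, 2))   # about 16.3% overshoot
```

Note that the overshoot depends only on \(\zeta\), while \(t_p\) and \(t_s\) scale inversely with \(\omega_n\).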

Interactive: Pole-Zero Map and Step Response


4.6 Initial and Final Value Theorems

These two theorems allow us to determine the behaviour of a signal at \(t = 0^+\) and as \(t \to \infty\) directly from its Laplace transform, without performing the full inversion.

Theorem 4.3 — Initial Value Theorem

If \(x(t)\) and \(x'(t)\) are both Laplace-transformable, then:

$$x(0^+) = \lim_{s \to \infty} s\, X(s)$$

Proof sketch:

Start from the differentiation property: \(\mathcal{L}\{x'(t)\} = sX(s) - x(0^-)\). Expand the integral:

$$\int_0^\infty x'(t)\, e^{-st}\, dt = sX(s) - x(0^-)$$

As \(s \to \infty\), the factor \(e^{-st}\) kills the contribution from \(t > 0^+\), while the integral over \([0^-, 0^+]\) retains any jump in \(x(t)\) at the origin, contributing \(x(0^+) - x(0^-)\). Therefore:

$$x(0^+) - x(0^-) = \lim_{s\to\infty} [sX(s) - x(0^-)] \implies x(0^+) = \lim_{s\to\infty} sX(s)$$

Theorem 4.4 — Final Value Theorem

If \(x(t)\) has a finite limit as \(t \to \infty\), and if all poles of \(sX(s)\) have negative real parts (except possibly a simple pole at \(s = 0\)), then:

$$\lim_{t \to \infty} x(t) = \lim_{s \to 0} s\, X(s)$$

Proof sketch:

Again from \(\int_0^\infty x'(t) e^{-st} dt = sX(s) - x(0^-)\). As \(s \to 0^+\):

$$\int_0^\infty x'(t)\, dt = \lim_{s\to 0} [sX(s) - x(0^-)]$$

The left side is \(x(\infty) - x(0^-)\), giving us \(x(\infty) = \lim_{s\to 0} sX(s)\).

Example 4.3 — Applying the Value Theorems

Given \(X(s) = \dfrac{5}{s(s+2)}\), find the initial and final values.

Initial value: \(x(0^+) = \lim_{s\to\infty} s \cdot \frac{5}{s(s+2)} = \lim_{s\to\infty} \frac{5}{s+2} = 0\)

Final value: \(x(\infty) = \lim_{s\to 0} s \cdot \frac{5}{s(s+2)} = \frac{5}{2}\)

Verification: \(x(t) = \frac{5}{2}(1 - e^{-2t})\,u(t)\), so \(x(0) = 0\) and \(x(\infty) = 5/2\). Correct!
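
Example 4.3 is easy to mirror in code: both limits reduce to evaluating \(sX(s) = \frac{5}{s+2}\), once at a very large \(s\) and once at \(s = 0\) (the large value standing in for \(s \to \infty\) is our own choice):

```python
import math

# Value-theorem check for X(s) = 5/(s(s+2)):  sX(s) = 5/(s+2).
sX = lambda s: 5 / (s + 2)

initial = sX(1e9)        # s -> infinity:  x(0+) ~ 0
final = sX(0.0)          # s -> 0:         x(inf) = 5/2
print(initial, final)

# Cross-check against the closed form x(t) = (5/2)(1 - e^{-2t}) u(t):
x = lambda t: 2.5 * (1 - math.exp(-2 * t))
print(x(0.0), x(50.0))   # 0.0 and approximately 2.5
```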

Remark — When the Final Value Theorem Fails

The FVT requires that \(\lim_{t\to\infty} x(t)\) actually exists as a finite constant. If \(X(s)\) has poles on the imaginary axis (other than a simple pole at the origin), the FVT does not apply. For example, \(X(s) = \frac{1}{s^2 + 1}\) gives \(x(t) = \sin(t)\), which oscillates forever -- applying the FVT here would incorrectly give 0.

4.7 Block Diagram Algebra

Complex systems are often represented as interconnections of simpler subsystems. Block diagram algebra provides rules for reducing these interconnections to a single equivalent transfer function.

Series (Cascade) Connection

Two blocks \(H_1(s)\) and \(H_2(s)\) in series:

$$Y(s) = H_2(s)\, H_1(s)\, X(s) \implies H_{\text{eq}}(s) = H_1(s)\, H_2(s)$$

Parallel Connection

Two blocks in parallel:

$$Y(s) = [H_1(s) + H_2(s)]\, X(s) \implies H_{\text{eq}}(s) = H_1(s) + H_2(s)$$

Negative Feedback Connection

A forward path \(G(s)\) with a feedback path \(H(s)\):

$$E(s) = X(s) - H(s)\,Y(s), \qquad Y(s) = G(s)\,E(s)$$

Solving for the closed-loop transfer function:

$$\frac{Y(s)}{X(s)} = \frac{G(s)}{1 + G(s)\,H(s)}$$

Theorem 4.5 — General Feedback Formula

For a system with forward-path transfer function \(G(s)\) and feedback-path transfer function \(H(s)\):

$$T(s) = \frac{G(s)}{1 \pm G(s)\,H(s)}$$

where the + sign corresponds to negative feedback and the − sign to positive feedback. Unity feedback (\(H(s) = 1\)) gives \(T(s) = G(s) / (1 + G(s))\).

Example 4.4 — Feedback Reduction

Consider a unity-feedback system with \(G(s) = \dfrac{10}{s(s+5)}\). The closed-loop transfer function is:

$$T(s) = \frac{G(s)}{1 + G(s)} = \frac{\frac{10}{s(s+5)}}{1 + \frac{10}{s(s+5)}} = \frac{10}{s^2 + 5s + 10}$$

Comparing with the standard form: \(\omega_n = \sqrt{10} \approx 3.16\) rad/s and \(\zeta = \frac{5}{2\sqrt{10}} \approx 0.79\). This is an underdamped system with moderate overshoot.
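
Example 4.4 can be verified both symbolically (matching coefficients against the standard form) and numerically (evaluating \(G/(1+G)\) at a test point; the test point is our own choice):

```python
import math

# Unity-feedback reduction for G(s) = 10/(s(s+5)) from Example 4.4.
# Closed loop: T(s) = 10/(s^2 + 5s + 10); match s^2 + 2*zeta*wn*s + wn^2.
a2, a1, a0 = 1.0, 5.0, 10.0          # closed-loop denominator coefficients
wn = math.sqrt(a0 / a2)
zeta = a1 / (2 * wn * a2)
print(round(wn, 3), round(zeta, 3))  # natural frequency and damping ratio

# Numeric sanity check: G/(1+G) equals 10/(s^2+5s+10) at a test point.
s = 1.0 + 1.0j
G = 10 / (s * (s + 5))
T_reduced = G / (1 + G)
T_standard = 10 / (s * s + 5 * s + 10)
print(abs(T_reduced - T_standard))   # essentially zero
```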

Remark — Mason's Gain Formula

For complex signal flow graphs with multiple loops, Mason's gain formula provides a systematic way to find the overall transfer function:

$$T(s) = \frac{1}{\Delta} \sum_k P_k \Delta_k$$

where \(P_k\) is the gain of the \(k\)-th forward path, \(\Delta\) is the graph determinant, and \(\Delta_k\) is the cofactor for the \(k\)-th path. This is covered in greater depth in control theory courses.

Interactive: Bode Plot of a Transfer Function


Worked Examples

Example 4.5 — Impulse Response from Transfer Function

Find \(h(t)\) for the system \(H(s) = \dfrac{s + 3}{s^2 + 4s + 13}\).

Step 1: Complete the square in the denominator:

$$s^2 + 4s + 13 = (s+2)^2 + 9 = (s+2)^2 + 3^2$$

Step 2: Rewrite the numerator in terms of \((s+2)\):

$$H(s) = \frac{(s+2) + 1}{(s+2)^2 + 3^2} = \frac{s+2}{(s+2)^2 + 3^2} + \frac{1}{3}\cdot\frac{3}{(s+2)^2 + 3^2}$$

Step 3: Invert using the table (s-shifted cosine and sine):

$$h(t) = \left[e^{-2t}\cos(3t) + \frac{1}{3}\,e^{-2t}\sin(3t)\right] u(t)$$
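
As a closing numerical check on Example 4.5, the Laplace transform of the recovered \(h(t)\) should reproduce \(H(s)\) at any point inside the ROC \(\text{Re}(s) > -2\) (the trapezoid integrator and the test point are our own choices):

```python
import cmath, math

def laplace_numeric(x, s, T=30.0, n=300_000):
    """Trapezoid-rule approximation of the unilateral Laplace integral,
    truncated at t = T; requires Re(s) inside the ROC."""
    dt = T / n
    total = 0.5 * (x(0.0) + x(T) * cmath.exp(-s * T))
    for k in range(1, n):
        t = k * dt
        total += x(t) * cmath.exp(-s * t)
    return total * dt

# h(t) = e^{-2t} cos(3t) + (1/3) e^{-2t} sin(3t) from Example 4.5
h = lambda t: math.exp(-2 * t) * (math.cos(3 * t) + math.sin(3 * t) / 3)
s = 1.0 + 1.0j
err = abs(laplace_numeric(h, s) - (s + 3) / (s * s + 4 * s + 13))
print(err)   # small numerical error
```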

Example 4.6 — Step Response via Final Value Theorem

For \(H(s) = \dfrac{6}{s^2 + 5s + 6}\), find the steady-state step response.

The step response in the s-domain is \(Y(s) = \dfrac{H(s)}{s} = \dfrac{6}{s(s^2+5s+6)} = \dfrac{6}{s(s+2)(s+3)}\).

Since all poles of \(sY(s)\) are in the left half-plane, the FVT applies:

$$y(\infty) = \lim_{s\to 0} sY(s) = \lim_{s\to 0} \frac{6}{(s+2)(s+3)} = \frac{6}{6} = 1$$

The system has unity DC gain, as expected from \(H(0) = 6/6 = 1\).

Example 4.7 — Solving an ODE with Initial Conditions

Solve \(y'' + 3y' + 2y = 0\) with \(y(0) = 1,\; y'(0) = 0\).

Step 1: Take the Laplace transform using the differentiation property:

$$[s^2 Y(s) - s\cdot 1 - 0] + 3[sY(s) - 1] + 2Y(s) = 0$$

Step 2: Solve for \(Y(s)\):

$$Y(s) = \frac{s + 3}{s^2 + 3s + 2} = \frac{s+3}{(s+1)(s+2)} = \frac{2}{s+1} - \frac{1}{s+2}$$

Step 3: Invert:

$$y(t) = 2e^{-t} - e^{-2t}, \qquad t \geq 0$$
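
The solution of Example 4.7 can be verified by substituting it, together with its derivatives, back into the ODE and the initial conditions (the sample points used for the residual check are our own choice):

```python
import math

# Check y(t) = 2 e^{-t} - e^{-2t} against y'' + 3y' + 2y = 0,
# y(0) = 1, y'(0) = 0, using hand-computed derivatives.
y = lambda t: 2 * math.exp(-t) - math.exp(-2 * t)
yp = lambda t: -2 * math.exp(-t) + 2 * math.exp(-2 * t)      # y'
ypp = lambda t: 2 * math.exp(-t) - 4 * math.exp(-2 * t)      # y''

print(y(0.0), yp(0.0))                        # initial conditions: 1.0  0.0
resid = max(abs(ypp(t) + 3 * yp(t) + 2 * y(t))
            for t in [0.0, 0.5, 1.0, 2.0])
print(resid)                                  # essentially zero
```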

Chapter Summary

  • The Laplace Transform generalises the Fourier transform via the complex frequency \(s = \sigma + j\omega\), enabling analysis of systems that the Fourier transform cannot handle.
  • Transfer Functions \(H(s)\) completely characterise LTI systems; their poles and zeros determine stability and frequency response.
  • BIBO Stability requires all poles to lie in the open left half of the s-plane.
  • Partial Fraction Expansion is the practical workhorse for computing inverse Laplace transforms.
  • Second-Order Systems are characterised by \(\omega_n\) and \(\zeta\); the damping ratio determines underdamped, critically damped, or overdamped behaviour.
  • Initial & Final Value Theorems extract time-domain limits directly from \(X(s)\).
  • Block Diagram Algebra reduces complex system interconnections via series, parallel, and feedback rules.