Numerical Methods & Computational Physics

The algorithmic backbone of modern science: turning continuous mathematics into computable solutions with controlled accuracy and efficiency.

Why Numerical Methods Matter

Most real-world equations in physics and engineering cannot be solved analytically. Numerical methods provide the bridge between mathematical models and computable predictions:

  • 🔬 Physics Simulations: N-body problems, fluid dynamics, quantum systems, climate models
  • 🏗️ Engineering: Structural analysis (FEM), circuit simulation (SPICE), aerodynamics (CFD)
  • 💰 Finance: Option pricing (Black-Scholes PDE), risk analysis (Monte Carlo), portfolio optimization
  • 🧬 Biology: Protein folding, population dynamics, epidemiological models (SIR)
  • 🤖 Machine Learning: Gradient descent, automatic differentiation, convex optimization

Course Overview

This course covers the core numerical algorithms every scientist and engineer must master. Each chapter includes mathematical derivations, convergence analysis, stability theory, and fully runnable Python implementations.

Core Themes Throughout

Convergence & Order

\(||e_{n+1}|| \leq C \, ||e_n||^p\), where \(p\) is the order of convergence: linear (\(p=1\)), quadratic (\(p=2\)), or superlinear (e.g. the secant method, with \(p \approx 1.618\)).
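The order \(p\) can be estimated numerically from three successive errors, since \(p \approx \log(e_{n+1}/e_n) / \log(e_n/e_{n-1})\). A minimal sketch, using Newton's method on \(x^2 - 2 = 0\) as the test case:

```python
import math

# If ||e_{n+1}|| ~ C ||e_n||^p, then for successive errors
# p ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1}).
def estimate_order(errors):
    return [math.log(errors[i] / errors[i - 1]) /
            math.log(errors[i - 1] / errors[i - 2])
            for i in range(2, len(errors))]

# Errors of Newton's method for x^2 - 2 = 0, starting from x0 = 1.5.
x, root, errors = 1.5, math.sqrt(2), []
for _ in range(4):
    errors.append(abs(x - root))
    x = x - (x * x - 2) / (2 * x)

print(estimate_order(errors))  # estimates approach p = 2
```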

Stability

Does a small perturbation in input produce a small perturbation in output? Condition numbers measure sensitivity: \(\kappa(A) = ||A|| \cdot ||A^{-1}||\).
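As a quick illustration (a sketch assuming NumPy is available), the condition number of the \(2 \times 2\) Hilbert matrix:

```python
import numpy as np

# kappa(A) = ||A|| * ||A^{-1}||, computed here in the 2-norm. The 2x2
# Hilbert matrix is a classic mildly ill-conditioned example; larger
# Hilbert matrices become catastrophically ill-conditioned.
A = np.array([[1.0, 1.0 / 2.0],
              [1.0 / 2.0, 1.0 / 3.0]])
kappa = np.linalg.cond(A)
print(f"kappa(A) = {kappa:.2f}")

# Rule of thumb: solving Ax = b loses roughly log10(kappa) decimal
# digits of accuracy to conditioning alone.
```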

Computational Cost

Algorithm complexity matters: \(O(N^3)\) for Gaussian elimination, \(O(N \log N)\) for the FFT, \(O(N^2)\) for the naive DFT.

Floating-Point Arithmetic

Machine epsilon \(\epsilon_{\text{mach}} \approx 2.2 \times 10^{-16}\) for double precision. Round-off errors accumulate and can dominate truncation errors.
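These limits are easy to see directly from the Python standard library:

```python
import sys

eps = sys.float_info.epsilon      # ~2.22e-16 for IEEE 754 double
print(eps)

# eps is the gap between 1.0 and the next representable double:
print(1.0 + eps / 2 == 1.0)       # True: eps/2 rounds away
print(1.0 + eps == 1.0)           # False: 1 + eps is representable

# Catastrophic cancellation: subtracting nearly equal numbers leaves
# mostly round-off.
print((1.0 + 1e-8) - 1.0)         # close to, but not exactly, 1e-8
```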

🔍 Part I: Root Finding

Solving nonlinear equations \(f(x) = 0\)

The fundamental problem of finding where a function equals zero. We cover bracketing methods (bisection), open methods (Newton-Raphson, secant), and fixed-point iteration. Key focus on convergence rates: bisection halves the bracket each step, so its error is \(O(2^{-n})\), while Newton converges quadratically, \(||e_{n+1}|| \leq C \, ||e_n||^2\).

Bisection Method · Newton-Raphson · Secant Method · Fixed-Point Iteration
Start Part I →
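As a preview, bisection fits in a dozen lines (a minimal sketch; the course version adds iteration counts and diagnostics):

```python
# Bisection: halve the bracket [a, b] until it is shorter than tol.
# Requires f(a) and f(b) to have opposite signs.
def bisect(f, a, b, tol=1e-10):
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:   # root lies in [a, m]
            b = m
        else:                # root lies in [m, b]
            a, fa = m, f(m)
    return 0.5 * (a + b)

root = bisect(lambda x: x**2 - 2, 0.0, 2.0)
print(root)  # ~1.41421356...
```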
📐 Part II: Linear Systems

Solving \(Ax = b\) efficiently and stably

Linear algebra is the workhorse of scientific computing. Direct methods (Gaussian elimination, LU, Cholesky) and iterative methods (Jacobi, Gauss-Seidel, conjugate gradient) for sparse systems. Understanding condition numbers and when your solution can be trusted.

Gaussian Elimination · LU Decomposition · Cholesky · Conjugate Gradient
Start Part II →
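As a taste of the iterative side, here is a minimal Jacobi sketch (assuming NumPy; convergence is guaranteed for strictly diagonally dominant matrices):

```python
import numpy as np

# Jacobi iteration for Ax = b: split A = D + R (diagonal + rest)
# and iterate x <- D^{-1} (b - R x).
def jacobi(A, b, tol=1e-10, max_iter=500):
    D = np.diag(A)               # diagonal entries as a vector
    R = A - np.diag(D)           # off-diagonal part
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # strictly diagonally dominant
b = np.array([1.0, 2.0])
x = jacobi(A, b)
print(x, np.allclose(A @ x, b))
```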
📈 Part III: ODE Solvers

Integrating \(\dot{y} = f(t, y)\) forward in time

From Euler's method to adaptive Runge-Kutta and symplectic integrators. Covers explicit and implicit methods, stability regions, stiff systems, and energy-conserving schemes for Hamiltonian mechanics. The RK4 update: \(y_{n+1} = y_n + \frac{h}{6}(k_1 + 2k_2 + 2k_3 + k_4)\), where \(k_1, \dots, k_4\) are slope evaluations of \(f\) across the step.

Euler Method · Runge-Kutta 4 · Adaptive RK45 · Symplectic Integrators
Start Part III →
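A minimal RK4 sketch, tested on \(\dot{y} = -y\) (chosen here because the exact solution \(e^{-t}\) makes the error easy to check):

```python
import math

# One RK4 step: four slope samples, combined with weights
# (k1 + 2*k2 + 2*k3 + k4) / 6.
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Test problem y' = -y, y(0) = 1; exact solution is e^{-t}.
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):          # integrate to t = 1
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(abs(y - math.exp(-1.0)))  # global error is O(h^4)
```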
🌊 Part IV: PDE Solvers

Heat, wave, and Laplace equations on grids

Discretizing partial differential equations on spatial grids. Explicit and implicit finite differences, Crank-Nicolson, finite element basics, and spectral methods. The CFL condition \(\Delta t \leq C \, \Delta x / |v_{\text{max}}|\) governs stability.

Finite Differences · Crank-Nicolson · FEM Basics · Spectral Methods
Start Part IV →
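A minimal explicit finite-difference sketch for the 1D heat equation \(u_t = \alpha u_{xx}\) (assuming NumPy; for diffusion the stability restriction is \(r = \alpha \Delta t / \Delta x^2 \leq 1/2\), the diffusion analogue of the CFL condition quoted above):

```python
import numpy as np

# Heat equation u_t = alpha * u_xx on [0, 1] with u = 0 at both ends.
alpha, nx = 1.0, 51
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha              # r = 0.4 < 0.5: stable
r = alpha * dt / dx**2

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                 # initial condition

for _ in range(200):
    # Second-difference stencil on interior points; boundaries stay 0.
    u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])

# Compare with the exact solution exp(-pi^2*alpha*t) * sin(pi*x).
t = 200 * dt
err = np.max(np.abs(u - np.exp(-np.pi**2 * alpha * t) * np.sin(np.pi * x)))
print(err)
```

Doubling `dt` past the \(r = 1/2\) threshold makes the same loop blow up, which is exactly the stability phenomenon Part IV analyzes.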
📊 Part V: Interpolation & Integration

Approximating functions and computing definite integrals

Polynomial interpolation (Lagrange, Newton, splines), numerical quadrature (Gaussian, adaptive), Monte Carlo integration for high-dimensional integrals, and the Fast Fourier Transform: \(O(N \log N)\) vs \(O(N^2)\) for the DFT.

Lagrange Interpolation · Cubic Splines · Gaussian Quadrature · FFT
Start Part V →
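As a preview of quadrature, a minimal composite Simpson's rule sketch (assuming NumPy; error is \(O(h^4)\), and \(n\) must be even):

```python
import numpy as np

# Composite Simpson's rule on n subintervals (n even):
# weights 1, 4, 2, 4, ..., 2, 4, 1 scaled by h/3.
def simpson(f, a, b, n=100):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + y[-1]
                    + 4 * np.sum(y[1:-1:2])    # odd interior nodes
                    + 2 * np.sum(y[2:-1:2]))   # even interior nodes

# Integral of sin on [0, pi] is exactly 2.
print(simpson(np.sin, 0.0, np.pi))
```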
🎯 Part VI: Optimization

Finding minima of \(f(x)\) in one and many dimensions

Gradient-based methods (steepest descent, Newton, conjugate gradient) for smooth convex problems. Metaheuristics (simulated annealing, genetic algorithms) for non-convex landscapes. Convexity theory and convergence guarantees. Python scipy.optimize in practice.

Gradient Descent · Newton's Method · Simulated Annealing · Genetic Algorithms
Start Part VI →
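A minimal steepest-descent sketch on a convex quadratic (assuming NumPy; the fixed learning rate here only converges because it is below \(2/\lambda_{\max}(A)\)):

```python
import numpy as np

# Gradient descent on f(x) = 1/2 x^T A x - b^T x, whose unique
# minimizer solves Ax = b (A symmetric positive definite).
def gradient_descent(A, b, lr=0.1, tol=1e-10, max_iter=10_000):
    x = np.zeros_like(b)
    for _ in range(max_iter):
        grad = A @ x - b          # gradient of f at x
        if np.linalg.norm(grad) < tol:
            break
        x = x - lr * grad         # steepest-descent step
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])
x = gradient_descent(A, b)
print(x, np.allclose(A @ x, b))
```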

Quick Demo: Newton-Raphson Root Finding

A taste of what we will build. Newton's method converges quadratically to roots of \(f(x) = 0\) using the iteration \(x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\):

Newton-Raphson: Finding sqrt(2)
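The embedded script itself is not reproduced here; a minimal standalone sketch of the demo (plain Python, no external libraries) might look like:

```python
# Newton-Raphson for f(x) = x^2 - 2, whose positive root is sqrt(2).
# Iteration: x_{n+1} = x_n - f(x_n)/f'(x_n); convergence is quadratic,
# so correct digits roughly double each step.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for i in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, i
        x = x - fx / df(x)
    return x, max_iter

root, iters = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.5)
print(f"sqrt(2) ~ {root:.15f} after {iters} iterations")
```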


Prerequisites

Mathematics

  • Calculus (single and multivariable)
  • Linear algebra (matrices, eigenvalues)
  • Differential equations (basic ODE/PDE)
  • Taylor series and convergence

Programming

  • Basic Python (loops, functions, arrays)
  • NumPy fundamentals (vectorized operations)
  • Matplotlib for plotting (helpful but not required)
  • All code runs in-browser via our Python runner

Recommended Textbooks

Numerical Recipes

Press, Teukolsky, Vetterling, Flannery - The classic reference for practical algorithms

Numerical Linear Algebra

Trefethen & Bau - Elegant treatment of matrix computations and conditioning

Computational Physics

Newman - Problem-solving with Python, excellent for physics students

Finite Difference Methods for ODEs and PDEs

LeVeque - Rigorous treatment of discretization and stability