Chapter 21: AI & the Future of Mathematical Physics

2010s–Present

Machine Learning Discovers Physical Laws

Science has always proceeded by distilling data into equations. For most of history, this distillation required human intuition. Recent developments suggest that machine learning systems can perform parts of this process autonomously.

Symbolic regression — searching the space of mathematical expressions for formulas that fit data — has been transformed by algorithms like AI Feynman (Udrescu and Tegmark, 2020), which rediscovered 100 equations from the Feynman Lectures given only raw numerical data. The system exploits physical principles (dimensional analysis, separability, symmetry) to constrain the search space.
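The core idea can be conveyed by a drastically simplified sketch (not the AI Feynman algorithm itself, which uses neural-network-guided recursion and physical priors): enumerate candidate expressions over the input variables and keep the one with the smallest squared error on the data.

```python
# Toy symbolic regression: recover Newton's second law F = m * a
# from synthetic data by exhaustive search over a tiny expression set.
import random

random.seed(0)

# Synthetic observations generated from F = m * a.
data = [(random.uniform(1, 5), random.uniform(1, 5)) for _ in range(50)]
targets = [m * a for m, a in data]

# Candidate formulas (a real system would search a grammar of expressions).
ops = {
    "m + a": lambda m, a: m + a,
    "m - a": lambda m, a: m - a,
    "m * a": lambda m, a: m * a,
    "m / a": lambda m, a: m / a,
}

def sq_error(f):
    return sum((f(m, a) - t) ** 2 for (m, a), t in zip(data, targets))

best = min(ops, key=lambda name: sq_error(ops[name]))
print(best)  # → "m * a"
```

The combinatorial explosion of richer grammars is exactly why AI Feynman's pruning by dimensional analysis and symmetry matters: it shrinks the space this brute-force loop would otherwise have to cover.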

More generally, neural networks trained on physical data can be probed to discover conserved quantities. Noether's theorem tells us that conservation laws arise from symmetries; methods such as those of Liu and Tegmark (2021) can identify conserved quantities from trajectories alone, effectively rediscovering Hamiltonian mechanics from first principles. The distinction between the machine “discovering” a law and the machine “fitting” a law is philosophically murky, but the practical results are striking.
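A minimal caricature of this search, under strong simplifying assumptions (a known candidate list instead of a learned function class): simulate a harmonic oscillator and rank candidate functions of the state by how little they vary along the trajectory. The quantity that stays (nearly) constant is the conserved one.

```python
# Rank candidate quantities by relative variation along a simulated
# trajectory; the conserved quantity (the Hamiltonian) varies least.

# Leapfrog integration of the harmonic oscillator q'' = -q.
q, p, dt = 1.0, 0.0, 0.01
traj = []
for _ in range(5000):
    p -= 0.5 * dt * q
    q += dt * p
    p -= 0.5 * dt * q
    traj.append((q, p))

# Hand-picked candidates; a real method searches a function space.
candidates = {
    "q": lambda q, p: q,
    "p": lambda q, p: p,
    "q*p": lambda q, p: q * p,
    "(q^2 + p^2)/2": lambda q, p: 0.5 * (q * q + p * p),
}

def rel_variation(f):
    vals = [f(qi, pi) for qi, pi in traj]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return var / (1e-12 + mean * mean)

best = min(candidates, key=lambda name: rel_variation(candidates[name]))
print(best)  # the energy (q^2 + p^2)/2 is flagged as conserved
```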

Physics-Informed Neural Networks and PDE Solving

Classical numerical methods for partial differential equations (finite elements, finite differences, spectral methods) discretize space and time on a grid. This becomes prohibitively expensive in high dimensions: a grid with \(N\) points per dimension in \(d\) dimensions requires \(N^d\) grid points — the curse of dimensionality.

Physics-Informed Neural Networks (PINNs), introduced by Raissi, Perdikaris, and Karniadakis in 2019, sidestep this by representing the solution \(u(x,t)\) as a neural network and incorporating the PDE as a constraint in the loss function:

\[ \mathcal{L} = \mathcal{L}_{\text{data}} + \lambda\,\mathcal{L}_{\text{PDE}}, \quad \mathcal{L}_{\text{PDE}} = \left\|\mathcal{N}[u] - f\right\|^2 \]

where \(\mathcal{N}[u] = f\) is the PDE. Automatic differentiation computes the required derivatives of the network output exactly. This approach scales more gracefully to high dimensions and handles irregular geometries naturally. It has been applied to fluid dynamics, quantum mechanics, general relativity, and molecular dynamics. The Neural Operator framework (FNO, DeepONet) goes further, learning maps between function spaces rather than individual solutions.
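The composite loss above can be illustrated with a deliberately tiny stand-in: solve the ODE \(u' = u\), \(u(0)=1\), on \([0,1]\) using a cubic polynomial in place of the neural network. The PDE residual enters the loss exactly as in the equation above; the derivatives are taken analytically here, where a real PINN would use automatic differentiation.

```python
# Toy "physics-informed" fit: minimize the residual ||u' - u||^2 at
# collocation points, with the data condition u(0)=1 built into the ansatz.
import math

xs = [i / 20 for i in range(21)]  # collocation points in [0, 1]

def u(a, x):   # ansatz u(x) = 1 + a1 x + a2 x^2 + a3 x^3
    return 1 + a[0] * x + a[1] * x**2 + a[2] * x**3

def du(a, x):  # exact derivative of the ansatz
    return a[0] + 2 * a[1] * x + 3 * a[2] * x**2

def loss(a):   # PDE-residual loss summed over collocation points
    return sum((du(a, x) - u(a, x)) ** 2 for x in xs)

# Crude finite-difference gradient descent (enough for this tiny problem).
a, lr, h = [0.0, 0.0, 0.0], 0.01, 1e-6
for _ in range(2000):
    g = []
    for j in range(3):
        ap = a[:]; ap[j] += h
        g.append((loss(ap) - loss(a)) / h)
    a = [aj - lr * gj for aj, gj in zip(a, g)]

print(abs(u(a, 1.0) - math.e))  # small: the fit approximates u(x) = e^x
```

The same structure — a flexible ansatz, a residual loss, gradient-based optimization — carries over unchanged when the ansatz is a deep network and the equation is a PDE in many dimensions.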

AI-Assisted Theorem Proving

Mathematics has a formalization frontier. Proof assistants like Lean, Coq, and Isabelle can check proofs with machine-level certainty, but writing such proofs has historically required enormous human effort. AI is changing this.
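What machine-checkable mathematics looks like can be seen in a toy example, written in standard Lean 4 (core library only, no mathlib): the kernel accepts these declarations only if the proof terms actually establish the stated propositions.

```lean
-- The checker verifies that Nat.add_comm really proves the statement.
theorem add_comm_nat (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Another core lemma applied as a proof term.
example (n : Nat) : 0 + n = n :=
  Nat.zero_add n
```

Writing nontrivial proofs at this level of rigor is what has historically been so labor-intensive; the AI systems below aim to automate the search for such proof terms.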

DeepMind's work on knot theory (Davies et al., Nature 2021) used supervised learning to discover a previously unknown relationship between algebraic and geometric invariants of knots: the signature turned out to be largely predictable from hyperbolic invariants, in particular a geometric quantity the authors called the natural slope. Mathematicians had studied these invariants for decades without noticing the connection. The neural network found it by pattern-matching across a large dataset of computed invariants, prompting mathematicians to look more closely and then prove the relationship rigorously.

AlphaProof (DeepMind, 2024) combined reinforcement learning with the Lean proof assistant to solve problems from the 2024 International Mathematical Olympiad — two algebra problems and a number theory problem — reaching, together with AlphaGeometry 2, a silver-medal-level score. The system generates proof candidates, verifies them formally, and learns from verified successes.

The Lean 4 mathlib library now contains formal proofs of thousands of theorems, including significant results in algebraic geometry and number theory. The question of whether AI will eventually be able to discover and prove genuinely deep new theorems — not merely search a space of known techniques — is open.

The Ramanujan Machine and Geometric Deep Learning

Srinivasa Ramanujan (1887–1920) famously produced hundreds of identities involving continued fractions, infinite series, and special functions, often without proof, claiming they came to him in dreams. The Ramanujan Machine (Raayoni et al., Technion, Nature 2021) systematically searches for new conjectured identities of this type, using methods inspired by the PSLQ integer-relation algorithm. It has produced dozens of new conjectures, some of which have since been proved. AI as a conjecture generator, with humans providing the proofs, may become a standard mode of mathematical discovery.
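PSLQ finds integer relations efficiently; a brute-force stand-in conveys the flavor of this kind of conjecture generation. Given a numerically computed "mystery" constant, search for small integer coefficients relating it to known constants — here the constant is manufactured so that a relation exists.

```python
# Brute-force integer-relation search (a toy substitute for PSLQ):
# find small integers (c0, c1, c2) with c0 + c1*pi + c2*e ≈ mystery.
import itertools, math

mystery = 3 * math.pi - 2 * math.e   # pretend this arrived from a computation
basis = [1.0, math.pi, math.e]

best, best_err = None, float("inf")
for coeffs in itertools.product(range(-5, 6), repeat=3):
    if all(c == 0 for c in coeffs):
        continue
    err = abs(sum(c * b for c, b in zip(coeffs, basis)) - mystery)
    if err < best_err:
        best, best_err = coeffs, err

print(best)  # → (0, 3, -2): the conjecture mystery = 3*pi - 2*e
```

A genuine discovery pipeline replaces the exhaustive loop with PSLQ or lattice reduction, works to hundreds of digits of precision, and treats any relation found only as a conjecture until proved.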

Geometric deep learning (Bronstein, Bruna, LeCun, Szlam, Vandergheynst, 2017) frames the entire field of deep learning through the lens of symmetry and group theory. Classical convolutional neural networks exploit translational symmetry. Graph neural networks exploit permutation invariance. Equivariant networks are designed to respect arbitrary Lie group symmetries — rotations, reflections, gauge transformations. The principle is Kleinian, in the spirit of the Erlangen program: the architecture should reflect the symmetry group of the problem.

In physics applications, equivariant neural networks have been used to predict molecular properties (SchNet, SE(3)-Transformers), model quantum many-body systems, and represent interatomic potentials. The symmetries built into the architecture are not learned from data — they are mathematical truths encoded into the network structure itself.
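That the symmetry lives in the architecture, not the training data, can be checked directly. A Deep-Sets-style model \(f(X) = \rho\big(\sum_i \phi(x_i)\big)\) is permutation-invariant by construction; the maps below are arbitrary fixed nonlinearities standing in for learned ones.

```python
# Permutation invariance by construction: sum pooling makes f blind
# to the ordering of its inputs, regardless of what phi and rho are.
import math, random

random.seed(1)

def phi(x):                 # per-element feature map (would be learned)
    return (math.tanh(x), x * x)

def rho(s):                 # readout on the pooled features (would be learned)
    return math.sin(s[0]) + s[1]

def f(xs):
    pooled = [sum(col) for col in zip(*map(phi, xs))]
    return rho(pooled)

xs = [random.gauss(0, 1) for _ in range(6)]
shuffled = xs[:]
random.shuffle(shuffled)
print(abs(f(xs) - f(shuffled)))  # zero up to floating-point rounding
```

No amount of shuffling changes the output, because the sum erases the ordering before the readout ever sees it; rotation- and gauge-equivariant networks apply the same logic to richer groups.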

AI in Experimental Physics

The Large Hadron Collider produces roughly \(10^9\) proton–proton collisions per second. The vast majority are mundane; rare events of interest must be identified in real time. Deep learning — particularly graph neural networks treating particle tracks as graphs — has become central to LHC trigger systems and offline analysis, identifying Higgs decays and searching for beyond-Standard-Model signatures in datasets of unprecedented size.
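A cartoon of what "treating particle tracks as graphs" means (hypothetical features and weights, nothing like a production trigger model): hits are nodes, consecutive hits on a track are edges, and each message-passing step updates a node's feature from the sum of its neighbours' features.

```python
# One message-passing step on a tiny "track" graph of four hits.
edges = [(0, 1), (1, 2), (2, 3)]   # consecutive hits on a track
h = [1.0, 2.0, 3.0, 4.0]           # initial per-node features (toy values)

def message_pass(h, edges, w_self=0.5, w_nbr=0.25):
    msg = [0.0] * len(h)
    for i, j in edges:             # undirected graph: messages both ways
        msg[i] += h[j]
        msg[j] += h[i]
    # Combine each node's own feature with its aggregated messages.
    return [w_self * hi + w_nbr * mi for hi, mi in zip(h, msg)]

h1 = message_pass(h, edges)
print(h1)  # → [1.0, 2.0, 3.0, 2.75]
```

Stacking such steps, with learned weight matrices in place of the scalars, lets information propagate along tracks — which is why this architecture suits collision events, where the number of hits and their connectivity vary from event to event.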

Gravitational wave detection by LIGO and Virgo depends on matched-filter searches: comparing observed strain data against a bank of template waveforms computed from general relativity. Deep learning has been applied both to improve detection sensitivity and to accelerate parameter estimation, reducing the time to characterize a detected event from days to seconds. The GW150914 event — the first detected gravitational wave — was later shown to be identifiable by deep neural networks directly from the strain data, with sensitivity comparable to matched filtering.
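Matched filtering itself fits in a few lines (toy signal and noise levels, nothing resembling real strain data): slide a known template over noisy data and look for the correlation peak.

```python
# Matched filtering in miniature: inject a damped sinusoid "chirp" into
# noise, then locate it by correlating the data against the template.
import math, random

random.seed(42)

template = [math.sin(2 * math.pi * t / 8) * math.exp(-t / 16)
            for t in range(32)]
n, inject_at = 256, 100
data = [random.gauss(0, 0.3) for _ in range(n)]
for t, s in enumerate(template):          # bury the signal in the noise
    data[inject_at + t] += 2 * s

def correlate(data, template):
    m = len(template)
    return [sum(data[i + k] * template[k] for k in range(m))
            for i in range(len(data) - m)]

snr = correlate(data, template)
peak = max(range(len(snr)), key=lambda i: snr[i])
print(peak)  # the correlation peaks at (or very near) the injection time
```

Real searches correlate against banks of thousands of relativity-derived templates in the frequency domain with noise weighting — which is precisely the computational burden that motivates neural-network accelerations.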

In plasma physics, DeepMind's work with the TCV tokamak (2022) demonstrated reinforcement learning controlling a plasma configuration in real time — a high-dimensional, nonlinear control problem that classical control theory struggles with. The policy network learned to hold a variety of plasma shapes, including novel configurations (such as sustaining two separate plasma "droplets" simultaneously) that are difficult to achieve with conventional controllers.

The Philosophical Question: Understanding or Mimicry?

Throughout this course we have traced a recurring pattern: mathematics developed for purely abstract reasons turns out to be the language of physics (Riemannian geometry and general relativity; group theory and the Standard Model; functional analysis and quantum mechanics). Wigner called this “the unreasonable effectiveness of mathematics.” The question AI raises is its mirror image: the unreasonable effectiveness of pattern matching.

A neural network that predicts molecular energies has encoded, in its weights, a vast amount of quantum chemistry. Does it understand quantum mechanics, or does it merely interpolate? A system that generates a correct proof in Lean has produced something that is verifiably true. Does it know why it is true? These questions echo ancient debates about the nature of mathematical knowledge, now made urgent by systems that seem to do mathematics without a mathematician.

One view, associated with Platonism, holds that mathematical truths exist independently of minds; AI, like a powerful telescope, merely reveals what was always there. Another view holds that mathematics is a human activity and that understanding — genuine comprehension, not just correct output — is essential to what mathematics is. On this view, AlphaProof proves theorems the way a pocket calculator multiplies: impressively, correctly, and without understanding.

The Next Bridges

Every era of this course has had its great bridge: Descartes connecting algebra and geometry; Newton connecting calculus and mechanics; Riemann connecting analysis and geometry; Noether connecting symmetry and conservation; von Neumann connecting functional analysis and quantum mechanics. The bridges of the 21st century are being built in real time, and their full extent is not yet visible. What is clear is that the conversation between mathematics and physics — two thousand years old, always surprising, never finished — continues with undiminished vitality, and now with a new interlocutor: the machine.