Continuous Channels
From discrete bits to the analog world: differential entropy, the Gaussian channel, and optimal power allocation in MIMO systems.
Part Overview
Shannon's original theory was framed for discrete sources and channels. Real communication systems, however, operate in continuous time and amplitude. Part III extends information theory to the continuous setting, culminating in the Shannon-Hartley theorem, the formula that tells engineers the fundamental limit of every wired and wireless channel.
The key insight is that Gaussian noise is the worst-case additive noise for a given power level, and a Gaussian input distribution achieves capacity. This leads directly to the famous formula \( C = B\log_2(1 + S/N) \), where every decibel of SNR and every hertz of bandwidth has a measurable effect on the maximum achievable data rate.
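The formula is straightforward to evaluate. The sketch below (plain Python, with illustrative numbers not taken from the text) computes the capacity of a hypothetical 1 MHz channel at 30 dB SNR:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Maximum achievable rate in bit/s for an AWGN channel
    (Shannon-Hartley: C = B * log2(1 + S/N))."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers: 1 MHz of bandwidth at 30 dB SNR.
snr = 10 ** (30 / 10)           # convert dB to a linear power ratio (1000)
c = shannon_capacity(1e6, snr)
print(f"{c / 1e6:.2f} Mbit/s")  # roughly 9.97 Mbit/s
```

Note how the rate grows linearly in bandwidth but only logarithmically in SNR: doubling B doubles C, while doubling S/N adds at most one bit per symbol.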
We then tackle MIMO: by deploying multiple antennas, a channel can be decomposed into parallel sub-channels via singular value decomposition (SVD). The water-filling algorithm then solves the constrained power allocation problem, distributing power where the channel is strong and withholding it where it is weak.
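The decomposition and allocation described above can be sketched in a few lines of NumPy. Everything concrete here is an assumption for illustration: a random 3x3 channel, unit noise variance on each sub-channel, a total power budget of 4, and a bisection search for the water level mu.

```python
import numpy as np

def water_filling(gains_sq, noise_var, total_power, iters=100):
    """Allocate total_power across parallel sub-channels with squared
    singular values gains_sq: P_i = max(mu - noise_var / gains_sq[i], 0),
    with the water level mu found by bisection so sum(P_i) = total_power."""
    lo, hi = 0.0, total_power + noise_var / min(gains_sq)
    for _ in range(iters):
        mu = (lo + hi) / 2
        p = np.maximum(mu - noise_var / gains_sq, 0.0)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - noise_var / gains_sq, 0.0)

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 3))           # illustrative 3x3 MIMO channel
lam = np.linalg.svd(H, compute_uv=False)  # singular values of H
P = water_filling(lam**2, noise_var=1.0, total_power=4.0)
print(P, P.sum())  # strong sub-channels get more power; sum is ~4.0
```

The intuition behind the name: pour a fixed volume of water (power) into a vessel whose floor height on sub-channel i is the inverse gain noise_var / gains_sq[i]; the water settles at a common level mu, and weak sub-channels whose floor sits above the water line get nothing.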
Key Equations of Part III
Differential Entropy
\( h(X) = -\int f(x)\log f(x)\,dx \)
Shannon-Hartley
\( C = B\log_2\!\left(1+\tfrac{S}{N}\right) \)
Water-Filling
\( P_i^* = \bigl(\mu - \sigma_i^2/\lambda_i^2\bigr)^+ \)
[Figure: The Gaussian Channel Model]
Chapters
Chapter 7: Differential Entropy
Extending entropy to continuous distributions. The Gaussian maximizes entropy for a fixed variance. Properties, pitfalls (h can be negative), and connections to discrete entropy.
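Both claims in this summary can be checked numerically. The sketch below (illustrative; a simple Riemann sum on a truncated grid) compares \( -\int f \log_2 f \, dx \) against the Gaussian closed form \( h = \tfrac{1}{2}\log_2(2\pi e \sigma^2) \), and shows h going negative for small sigma:

```python
import numpy as np

def gaussian_diff_entropy_numeric(sigma, n=200001, span=12.0):
    """Numerically evaluate h(X) = -integral f(x) log2 f(x) dx
    for X ~ N(0, sigma^2), truncating the grid at +/- span*sigma."""
    x = np.linspace(-span * sigma, span * sigma, n)
    f = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return -np.sum(f * np.log2(f)) * (x[1] - x[0])

def gaussian_diff_entropy_closed(sigma):
    """Closed form: h = 0.5 * log2(2 * pi * e * sigma^2) bits."""
    return 0.5 * np.log2(2 * np.pi * np.e * sigma**2)

for sigma in (1.0, 0.1):
    print(sigma, gaussian_diff_entropy_numeric(sigma),
          gaussian_diff_entropy_closed(sigma))
# For sigma = 0.1 the differential entropy is about -1.27 bits:
# negative entropy, a pitfall with no discrete analogue.
```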
Chapter 8: Gaussian Channel Capacity
Shannon-Hartley theorem derived from mutual information maximization. The bandwidth-power tradeoff. Why the Shannon limit is −1.6 dB and what it means for real systems.
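The −1.6 dB figure comes from the wideband limit: letting \( B \to \infty \) in \( C = B\log_2(1 + P/(N_0 B)) \) gives \( C \to (P/N_0)\log_2 e \), so reliable communication requires \( E_b/N_0 \ge \ln 2 \approx 0.693 \). A one-line check:

```python
import math

# Wideband (infinite-bandwidth) limit: Eb/N0 must exceed ln 2.
eb_n0_limit_db = 10 * math.log10(math.log(2))
print(f"{eb_n0_limit_db:.2f} dB")  # -1.59 dB
```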
Chapter 9: MIMO & Water-Filling
Multiple-input multiple-output channels decomposed via SVD into parallel independent channels. Optimal power allocation via the water-filling algorithm.
Prerequisites
- Parts I & II (discrete entropy, channel capacity, coding theorems)
- Probability theory: Gaussian distribution, expectation, variance
- Linear algebra: matrix multiplication, eigenvalues, SVD (for Ch 9)
- Calculus: integration by parts, Lagrange multipliers