Most things in nature, when you measure them carefully, look roughly Gaussian. Heights, exam scores, voltage noise, sample averages of pretty much any quantity. This is, famously, the central limit theorem (CLT). The textbook statement uses characteristic functions and analytic limits, but here I try to present the underlying picture in a purely geometric way.
The setup
Take an i.i.d. sequence $X_1, X_2, \dots$ with mean $\mu$ and variance $\sigma^2 < \infty$. The CLT says that the standardized sum

$$Z_n = \frac{1}{\sigma \sqrt{n}} \sum_{i=1}^{n} (X_i - \mu)$$

converges in distribution to $\mathcal{N}(0, 1)$ as $n \to \infty$.
I’ve written more on this mode of convergence here.
A more visual restatement: summing independent random variables is convolving their densities. Why? For the sum $S = X + Y$ to land at a particular value $s$, the two summands have to take values that add up to $s$: if $X$ takes value $x$, then $Y$ must take the complementary value $s - x$. By independence, the joint density at any pair is the product $f_X(x)\,f_Y(s - x)$. The density of $S$ at $s$ is therefore what you get by summing (integrating) the joint density over every valid split that produces this $s$:

$$f_S(s) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(s - x)\, dx.$$

That integral is, by definition, the convolution $(f_X * f_Y)(s)$. So summing $n$ i.i.d. copies of $X$ corresponds to convolving its density with itself $n$ times. The CLT becomes a statement about repeated self-convolution: under the right rescaling, it converges to a Gaussian.
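To make the convolution picture concrete, here's a minimal numpy check (the fair-die example is my own, not from the text above): the pmf of the sum of two dice, computed once by `np.convolve` and once by brute-force enumeration.

```python
import numpy as np

die = np.full(6, 1 / 6)          # pmf of one fair die on {1, ..., 6}
two = np.convolve(die, die)      # pmf of the sum, supported on {2, ..., 12}

# Sanity check against brute-force enumeration of all 36 outcomes.
brute = np.zeros(11)
for a in range(1, 7):
    for b in range(1, 7):
        brute[a + b - 2] += 1 / 36
assert np.allclose(two, brute)
print(dict(zip(range(2, 13), two.round(4))))
```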
Watch it happen
Pick a base distribution and slide $n$. The blue bars are the exact distribution of the sum after $n$ convolutions; the red curve is the Gaussian with mean $n\mu$ and variance $n\sigma^2$. (A numpy sketch of this computation follows the list below.)
A few things to notice as you slide:
- The shape becomes bell-curve-like fast. After only a handful of convolutions, even an asymmetric or bimodal base looks visibly Gaussian; a few steps more and the histogram and the red curve are nearly indistinguishable.
- The mean shifts at rate $\mu$ per step; the spread grows like $\sigma\sqrt{n}$, sub-linearly in $n$.
- Standardization is essential. Without it, the unstandardized sum’s distribution drifts to infinity and spreads forever; you’d never converge to anything. The CLT is about the shape of the centered, scaled distribution, not its location.
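For readers without the interactive widget, here is a sketch of the computation it performs, assuming a base distribution on a small integer lattice (the specific base pmf below is my own choice):

```python
import numpy as np

def sum_distribution(pmf, n):
    """Exact pmf of the sum of n i.i.d. copies of a lattice variable."""
    out = pmf
    for _ in range(n - 1):
        out = np.convolve(out, pmf)
    return out

pmf = np.array([0.5, 0.2, 0.3])       # an asymmetric base on {0, 1, 2}
vals = np.arange(len(pmf))
mu = vals @ pmf
var = ((vals - mu) ** 2) @ pmf

n = 30
dist = sum_distribution(pmf, n)       # the blue bars
support = np.arange(len(dist))
# The red curve: Gaussian density with mean n*mu and variance n*var.
gauss = np.exp(-(support - n * mu) ** 2 / (2 * n * var)) / np.sqrt(2 * np.pi * n * var)
print("max |pmf - gaussian|:", np.abs(dist - gauss).max())
```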
Why convolution → Gaussian?
Three complementary intuitions, each capturing a different aspect of why this happens.
(1) Convolution is smoothing. Each convolution averages out sharp features of the densities being combined. Spikes get blurred, gaps get filled, rough edges get rounded. After enough convolutions, only the smoothest distribution with the right mean and variance survives. That happens to be the Gaussian.
(2) Entropy maximization. Among all densities on with a given mean and variance, the one with maximum differential entropy is the Gaussian. Convolution monotonically increases the entropy of the standardized sum (Barron, 1986). Repeated convolution therefore drives the standardized distribution toward the entropy maximizer with the matching moments. This is the entropic CLT, and it gives a thermodynamic flavor to convergence: the Gaussian is the thermal equilibrium of independent additive noise.
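As a numerical illustration (my own sketch, using a lattice proxy for differential entropy rather than Barron's continuous setting), the entropy of the standardized Bernoulli sum climbs toward the Gaussian maximum $\tfrac{1}{2}\log 2\pi e \approx 1.4189$:

```python
import numpy as np

def standardized_entropy(pmf, n):
    """Differential-entropy proxy for the standardized n-fold sum of a
    unit-lattice variable with the given pmf."""
    dist = pmf
    for _ in range(n - 1):
        dist = np.convolve(dist, pmf)
    vals = np.arange(len(pmf))
    mu = vals @ pmf
    sigma = np.sqrt(((vals - mu) ** 2) @ pmf)
    dx = 1.0 / (sigma * np.sqrt(n))   # lattice spacing after standardizing
    p = dist[dist > 0]
    return -(p @ np.log(p)) + np.log(dx)

pmf = np.array([0.7, 0.3])            # Bernoulli(0.3)
for n in (2, 4, 16, 64, 256):
    print(n, round(standardized_entropy(pmf, n), 4))
print("gaussian max:", round(0.5 * np.log(2 * np.pi * np.e), 4))
```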
(3) Self-similarity. The Gaussian is the unique centered distribution with finite variance that is preserved under the operation “add an independent copy and rescale by $\sqrt{2}$”:

$$\frac{X + X'}{\sqrt{2}} \stackrel{d}{=} X, \qquad X' \text{ an independent copy of } X.$$
For any other finite-variance distribution, this operation moves the shape closer to Gaussian. The add and rescale map is a contraction toward the Gaussian fixed point, and the CLT is its convergence.
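A quick Monte-Carlo sketch of the contraction (the exponential starting point and step count are arbitrary choices of mine; pairing with a shuffled copy only approximates an independent copy):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=100_000) - 1.0    # centered but heavily skewed start
for step in range(1, 7):
    # "Add an independent copy and rescale by sqrt(2)".
    x = (x + rng.permutation(x)) / np.sqrt(2)
    skew = np.mean(x**3) / np.std(x) ** 3
    kurt = np.mean(x**4) / np.std(x) ** 4
    print(f"step {step}: skewness {skew:+.3f}, kurtosis {kurt:.3f}")
# Gaussian fixed point: skewness 0, kurtosis 3. Skewness shrinks by a
# factor of sqrt(2) per step, excess kurtosis by a factor of 2.
```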
(Without finite variance, the same fixed-point logic gives non-Gaussian limits called stable distributions: Cauchy, Lévy, and others. The Gaussian is just the most familiar member of an infinite family of self-similar limit laws.)
How fast?
The Berry–Esseen theorem makes the convergence quantitative:

$$\sup_{x \in \mathbb{R}} \left| F_n(x) - \Phi(x) \right| \le \frac{C \rho}{\sigma^3 \sqrt{n}},$$

where $F_n$ is the CDF of $Z_n$, $\Phi$ is the standard normal CDF, and $\rho = \mathbb{E}|X - \mu|^3$ is the third absolute central moment. The best known universal constant is $C \le 0.4748$ (Shevtsova, 2011).
Three takeaways:
- The convergence rate is the canonical $O(1/\sqrt{n})$.
- The factor $\rho / \sigma^3$ measures how skewed/heavy-tailed the base is. Heavy tails slow it down; thin tails speed it up.
- The bound is uniform over $x$. Sharper local statements (Edgeworth expansions) give higher-order corrections in powers of $1/\sqrt{n}$ that depend on higher cumulants of $X$.
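To see the bound in action, here is a check of the inequality for Bernoulli(0.3), using the exact binomial CDF (the choice of $n = 200$ is arbitrary):

```python
import numpy as np
from math import erf, sqrt

phi = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0))))  # normal CDF

p, n = 0.3, 200
sigma = sqrt(p * (1 - p))
rho = (1 - p) * p**3 + p * (1 - p) ** 3    # E|X - p|^3 for Bernoulli(p)

pmf = np.array([1 - p, p])
dist = pmf
for _ in range(n - 1):                     # exact law of the n-fold sum
    dist = np.convolve(dist, pmf)

z = (np.arange(n + 1) - n * p) / (sigma * sqrt(n))   # standardized atoms
Fn = np.cumsum(dist)
# F_n is a step function, so the sup is attained at an atom, approached
# from above or from below the jump.
sup = max(np.abs(Fn - phi(z)).max(), np.abs(Fn - dist - phi(z)).max())

print("sup |F_n - Phi|    :", round(float(sup), 5))
print("Berry-Esseen bound :", round(0.4748 * rho / sigma**3 / sqrt(n), 5))
```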
In the widget, the rate-of-convergence comparison is easy to feel: the symmetric uniform die approaches the Gaussian faster than the asymmetric Bernoulli(0.3), whose nonzero skewness drives the leading $1/\sqrt{n}$ correction.
What can go wrong?
The standard CLT has two requirements: i.i.d. (or close to it) and finite variance. Drop either and the limit can change shape.
- Drop independence. There are CLTs for martingales, for mixing/weakly-dependent sequences, for U-statistics, and so on. The variance in the limit gets corrected to a long-run variance that accounts for autocorrelation.
- Drop identical distribution. The Lindeberg / Lyapunov conditions ensure no single $X_i$'s variance dominates the sum. As long as everyone contributes, the limit is still Gaussian.
- Drop finite variance. The limit is no longer Gaussian. For variables in the domain of attraction of a non-Gaussian stable law (e.g., when $\mathbb{P}(|X| > t) \sim t^{-\alpha}$ for some $0 < \alpha < 2$), the limit is a stable distribution with index $\alpha$, with infinite variance and (for $\alpha \le 1$) infinite mean. The rescaling itself changes from $\sqrt{n}$ to $n^{1/\alpha}$.
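The Cauchy case ($\alpha = 1$) makes the failure vivid: an average of Cauchy samples is itself standard Cauchy, so it never concentrates, no matter how large $n$ gets. A quick simulation (the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
for n in (10, 1_000, 10_000):
    means = rng.standard_cauchy((1_000, n)).mean(axis=1)
    iqr = np.percentile(means, 75) - np.percentile(means, 25)
    print(f"n={n:>6}: IQR of sample means ~ {iqr:.2f}")
# The IQR stays near 2 (the IQR of a standard Cauchy) for every n:
# no concentration, and no Gaussian limit under any rescaling.
```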
Higher dimensions
In $\mathbb{R}^d$, the multivariate CLT replaces the variance with a covariance matrix $\Sigma$:

$$\frac{1}{\sqrt{n}} \sum_{i=1}^{n} (X_i - \mu) \xrightarrow{d} \mathcal{N}(0, \Sigma).$$
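A small simulation of the statement (the skewed base distribution and mixing matrix $A$ are my own choices, with $\Sigma = A A^\top$):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, trials = 3, 200, 5_000
A = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, 0.3, 1.0]])                   # Sigma = A @ A.T

X = rng.exponential(size=(trials, n, d)) - 1.0    # mean-zero, skewed coords
X = X @ A.T                                       # correlate the coordinates
Z = X.sum(axis=1) / np.sqrt(n)                    # standardized vector sums

print("target Sigma:\n", A @ A.T)
print("covariance of Z:\n", np.cov(Z, rowvar=False).round(2))
```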
A lot of high-dimensional statistics is what happens to this picture when the dimension $d$ grows along with $n$. The covariance matrix becomes a random object itself; concentration of measure phenomena take over; the spectrum of the empirical covariance follows the Marchenko–Pastur law instead of concentrating at the spectrum of the deterministic $\Sigma$. The relevant geometry stops being about a single Gaussian limit and becomes about many Gaussian-ish marginals interacting through random matrix theory. A topic for a separate post.
References
- Roman Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge University Press, 2018. Chapter 2 has a clean treatment of the CLT and Berry–Esseen.
- Andrew R. Barron. Entropy and the central limit theorem. The Annals of Probability, 14(1):336–342, 1986.
- Irina Shevtsova. On the absolute constants in the Berry–Esseen type inequalities for identically distributed summands. arXiv:1111.6554, 2011.
- Stéphane Boucheron, Gábor Lugosi, Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.