One of the rare pleasures of doing mathematics — not necessarily high-powered research-level mathematics, but casual fun stuff too — is finally getting an answer to a question tucked away at the back of one’s mind for years and years, sometimes decades. Let me give an example: ever since I was pretty young (early teens), I’ve loved continued fractions; they are a marvelous way of representing numbers, with all sorts of connections to non-trivial mathematics [analysis, number theory (both algebraic and transcendental!), dynamical systems, knot theory, …]. And ever since I’ve heard of continued fractions, there’s one little factoid which I have frequently seen mentioned but which is hardly ever proved in the classic texts, at least not in the ones I looked at: the beautiful continued fraction representation for $e = 2.7182818...$.

[Admittedly, most of my past searches were done in the pre-Google era — today it’s not that hard to find proofs online.]

This continued fraction was apparently “proved” by Euler way back when (1731); I once searched for a proof in his Collected Works, but for some reason didn’t find it; perhaps I just got lost in the forest. Sometimes I would ask people for a proof; the responses I got were generally along the lines of “isn’t that trivial?” or “I think I can prove that”. But talk is cheap, and I never did get no satisfaction. That is, until a few (maybe five) years ago, when by accident I spotted a proof buried in Volume 2 of Knuth’s The Art of Computer Programming. Huge rush of relief! So, if any of you have been bothered by this yourselves, maybe this is your lucky day.

I’m sure most of you know what I’m talking about. To get the (regular) continued fraction for a number, just iterate the following steps: write down the integer part, subtract it, take the reciprocal. Lather, rinse, repeat. For example, the sequence of integer parts you get for $\sqrt{2}$ is 1, 2, 2, 2, … — this means

$\sqrt{2} = 1 + 1/(2 + 1/(2 + 1/(2 + ...))),$

giving the continued fraction representation for $\sqrt{2}$. Ignoring questions of convergence, this equation should be “obvious”: since $(\sqrt{2} - 1)(\sqrt{2} + 1) = 1$, the continued fraction you get for $\sqrt{2} - 1$ equals the reciprocal of the one for $\sqrt{2} + 1 = 2 + (\sqrt{2} - 1)$, and the expansion unwinds from there.
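For anyone who wants to play along at home, the iteration is a few lines of Python (a quick sketch — the name `cf_digits` is mine, and plain floating point only survives a modest number of terms before round-off takes over):

```python
import math

def cf_digits(x, n):
    """First n integer parts of the (regular) continued fraction of x."""
    digits = []
    for _ in range(n):
        a = math.floor(x)        # write down the integer part...
        digits.append(a)
        x -= a                   # ...subtract it...
        if x == 0:               # (a rational input terminates)
            break
        x = 1 / x                # ...take the reciprocal; lather, rinse, repeat
    return digits

print(cf_digits(math.sqrt(2), 10))   # [1, 2, 2, 2, 2, 2, 2, 2, 2, 2]
```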

Before launching in on $e$, let me briefly recall a few well-known facts about continued fractions:

• Every rational number has a continued fraction representation of finite length. The continued fraction expresses what happens when one iterates the Euclidean division algorithm.

For example, the integer parts appearing in the continued fraction for 37/14:

$37/14 = \mathbf{2} + 1/(\mathbf{1} + 1/(\mathbf{1} + 1/(\mathbf{1} + 1/\mathbf{4})))$

duplicate the successive quotients one gets by using the division algorithm to compute $\mbox{gcd}(37, 14)$:

$37 = \mathbf{2} \cdot 14 + 9$

$14 = \mathbf{1} \cdot 9 + 5$

$9 = \mathbf{1} \cdot 5 + 4$

$5 = \mathbf{1} \cdot 4 + 1$

$4 = \mathbf{4} \cdot 1 + 0$
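In code, the correspondence is immediate: the continued fraction of a rational is nothing but the list of quotients produced by repeated integer division (a sketch; `cf_rational` is my name for it):

```python
def cf_rational(p, q):
    """Continued fraction of p/q, read off from the Euclidean algorithm."""
    terms = []
    while q != 0:
        a, r = divmod(p, q)   # the division step: p = a*q + r
        terms.append(a)
        p, q = q, r
    return terms

print(cf_rational(37, 14))   # [2, 1, 1, 1, 4] -- the quotients from gcd(37, 14)
```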

• A number has an infinite continued fraction if and only if it is irrational. Let $I$ denote the space of irrationals between 0 and 1 (as a subspace of $\mathbb{R}$). The continued fraction representation (mapping an irrational $x \in I$ to the corresponding infinite sequence of integer parts $a_1, a_2 \ldots$ in its continued fraction representation $x = \left[0, a_1, a_2, \ldots \right]$) gives a homeomorphism $I \to \mathbb{N}^{\mathbb{N}},$ where $\mathbb{N}^{\mathbb{N}}$ carries a topology as product of countably many copies of the discrete space $\mathbb{N}$.

In particular, the shift map $\displaystyle \sigma: \mathbb{N}^{\mathbb{N}} \to \mathbb{N}^{\mathbb{N}}$, defined by $(\sigma(f))(n) = f(n+1)$, corresponds to the map $\tau: I \to I$ defined by $\tau(x) = 1/x \pmod 1$. The behavior of $\tau$ is a paragon, an exemplary model, of chaos:

1. There is a dense set of periodic points of $\tau$. These are quadratic surds like $\sqrt{2} - 1$: elements of $I$ that are fixed points of fractional linear transformations $\displaystyle x \mapsto \frac{ax + b}{cx + d}$ (for integral $a, b, c, d$ and $|ad - bc| = 1$).
2. The transformation $\tau$ is topologically mixing.
3. There is sensitive dependence on initial conditions.

For some reason, I find it fun to observe this sensitive dependence using an ordinary calculator. Try calculating something like the golden mean $\displaystyle \frac{\sqrt{5} - 1}{2}$, and hit it with $\tau$ over and over and watch the parade of integer parts go by (a long succession of 1’s until the precision of the arithmetic finally breaks down and the behavior looks random, chaotic). For me this activity is about as enjoyable as popping bubble wrap.
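For the bubble-wrap-averse, here is the same experiment in Python (a sketch; 40 iterations is an arbitrary choice, and the exact breakdown point depends on the floating-point format):

```python
import math

x = (math.sqrt(5) - 1) / 2   # the golden mean: every integer part should be 1
parts = []
for _ in range(40):
    if x == 0:               # only happens once the arithmetic gives up entirely
        break
    parts.append(math.floor(1 / x))
    x = (1 / x) % 1          # the map tau: reciprocal mod 1
print(parts)                 # 1, 1, 1, ... until round-off error takes over
```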

• Remark: One can say rather more in addition to the topological mixing property. Specifically, consider the measure $\displaystyle \mu(E) := \int_E \frac1{1 + x} dx$ on $I$, where $\mu(I) = \log 2$. It may be shown that $\tau: I \to I$ is a measure-preserving transformation; much more significantly, $\tau$ is an ergodic transformation on the measure space. It then follows from Birkhoff’s ergodic theorem that whenever $f: I \to \mathbb{R}$ is integrable, the time averages $\displaystyle \frac1{n} \sum_{k=0}^{n-1} f(\tau^k(x))$ approach the space average $\displaystyle \frac1{\log 2}\int_I f(u) \frac{du}{1 + u}$ for almost all $x \in I$. Applying this fact to $f([0, a_1, a_2, \ldots]) = \log(a_1)$, it follows that for almost all irrationals $x \in I$, the geometric mean of the integer parts $a_1, a_2, \ldots, a_n$ approaches a constant, Khinchin’s constant $K = 2.685452...$. A fantastic theorem!
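Here is a quick empirical peek, using the exact binary rational underlying the float `math.pi` (a sketch — the names are mine, and since convergence to $K$ is notoriously slow, the geometric mean of a dozen terms only lands in the right ballpark):

```python
import math
from fractions import Fraction

def cf_terms(x, n):
    """First n partial quotients of x, in exact rational arithmetic."""
    x = Fraction(x)          # the exact rational value of the float
    terms = []
    for _ in range(n):
        a = math.floor(x)
        terms.append(a)
        x = 1 / (x - a)
    return terms

terms = cf_terms(math.pi, 12)   # [3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1]
a = terms[1:]                   # geometric mean of the partial quotients a_1, a_2, ...
gm = math.exp(sum(math.log(k) for k in a) / len(a))
print(terms, gm)                # gm is roughly 3, in the vicinity of K = 2.685...
```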

Anyway, I digress. You are probably waiting to hear about the continued fraction representation of $e$, which is $\left[2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, \ldots \right]$:

$e = 2 + 1/(1 + 1/(2 + 1/(1 + 1/(1 + 1/(4 + \ldots)))))$

Cute little sequence, except for the bump at the beginning where there’s a 2 instead of a 1. One thing I learned from Knuth is that the bump is smoothed away by writing it in a slightly different way,

$e = \left[1, 0, 1, 1, 2, 1, 1, 4, 1, 1, 6, 1, \ldots \right],$

involving triads $1, 2n, 1$, where $n = 0, 1, 2, \ldots$.
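The pattern can be checked numerically with exact rational arithmetic, using a partial sum of $\sum 1/k!$ as a stand-in for $e$ (a sketch; the cutoff $k \leq 60$ is massive overkill for 20 terms):

```python
import math
from fractions import Fraction

# exact rational approximation of e: error below 1/60!, invisible to the
# first few dozen partial quotients
e_approx = sum(Fraction(1, math.factorial(k)) for k in range(61))

def cf_terms(x, n):
    terms = []
    for _ in range(n):
        a = math.floor(x)
        terms.append(a)
        x = 1 / (x - a)
    return terms

terms_e = cf_terms(e_approx, 20)
print(terms_e)   # [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10, 1, 1, 12, 1, 1]
```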

Anyway, how to prove this fact? I’ll sketch two proofs. The first is the one I found in Knuth (loc. cit., p. 375, exercise 16; see also pp. 650-651), and I imagine it is close in spirit to how Euler found it. The second is from a lovely article of Henry Cohn which appeared in the American Mathematical Monthly (Vol. 113 [2006], pp. 57-62), and is connected with Hermite’s proof of the transcendence of $e$.

PROOF 1 (sketch)

Two functions which Euler must have been very fond of are the tangent function and its cousin the hyperbolic tangent function,

$\displaystyle \tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}},$

related by the equation $\tan(ix) = i\tanh(x)$. These functions crop up a lot in his investigations. For example, he knew that their Taylor expansions are connected with Bernoulli numbers, e.g.,

$\displaystyle \tanh(x) = \sum_{n \geq 1} \frac{B_{2n}4^n (4^n - 1)}{(2n)!} x^{2n-1}.$

The Taylor coefficients $T_n = E_{2n-1}$, where $\displaystyle \tan(x) = \sum_{n \geq 1} \frac{E_{2n-1} x^{2n-1}}{(2n-1)!}$, are integers called tangent numbers; they are the numbers 1, 2, 16, … which appear along the right edge of the triangle

1

0, 1

1, 1, 0

0, 1, 2, 2

5, 5, 4, 2, 0

0, 5, 10, 14, 16, 16

where each row is gotten by taking partial sums from the preceding row, moving alternately left-to-right and right-to-left. The numbers 1, 1, 5, … which appear along the left edge are called secant numbers $S_n$, the Taylor coefficients of the secant function. Putting $E_{2n} = S_n$, the secant and tangent numbers $E_n$ together are called Euler numbers, and enjoy some interesting combinatorics: $E_n$ counts the number of “zig-zag permutations” $k \mapsto a_k$ of $\{1, 2, \ldots, n\}$, where $a_1 > a_2 < a_3 > \ldots$. For more on this, see Stanley’s Enumerative Combinatorics (Volume I), p. 149, and also Conway and Guy’s The Book of Numbers, pp. 110-111; I also once gave a brief account of the combinatorics of the $E_n$ in terms of the generating function $\sec(x) + \tan(x)$, over here.

Euler also discovered a lovely continued fraction representation,

$\displaystyle \tanh(x) = 1/(x^{-1} + 1/(3x^{-1} + 1/(5x^{-1} + \ldots ))),$

as a by-product of a larger investigation into continued fractions for solutions to the general Riccati equation. Let’s imagine how he might have found this continued fraction. Since both sides of the equation are odd functions, we may as well consider just $x > 0$, where $0 < \tanh(x) < 1$. Thus the integer part is 0; subtract the integer part and take the reciprocal, and see what happens.

The MacLaurin series for $\tanh(x)$ is $x - x^3/3 + \ldots = x(1 - x^2/3 + \ldots)$; its reciprocal has a pole at 0 of residue 1, so

$\displaystyle \frac1{\tanh(x)} - \frac1{x} = f_1(x)$

gives a function $f_1(x) = x/3 + \ldots$ which is odd and analytic near 0. Now repeat: reciprocating $f_1(x)$, we get a simple pole at 0 of residue 3, and

$\displaystyle \frac1{f_1(x)} - \frac{3}{x} = f_2(x)$

gives a function $f_2(x)$ which is odd and analytic near 0, and one may check by hand that its MacLaurin series begins as $x/5 + \ldots$.

The pattern continues by a simple induction. Recursively define (for $x > 0$)

$\displaystyle f_{n+1}(x) = \frac1{f_n(x)} - (2n+1)x^{-1}.$

It turns out (lemma 1 below) that each $f_n(x)$ is odd and analytic near 0, and then it becomes plausible that the continued fraction for $\tanh(x)$ above is correct: we have

$\displaystyle \tanh(x) = 1/(x^{-1} + 1/(3x^{-1} + \ldots 1/((2n-1)x^{-1} + f_n(x)) \ldots ))$

Indeed, assuming the fact that $f_n(x)$ is uniformly bounded over $n$, these expressions converge as $n \to \infty$, so that the continued fraction expression for $\tanh(x)$ is correct.
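Numerically, the convergence at $x = 1/2$ (the value we’ll need below) is easy to witness; here is a floating-point sketch, with `tanh_cf` my name for the truncated fraction:

```python
import math

def tanh_cf(x, depth):
    """Evaluate 1/(x^-1 + 1/(3 x^-1 + ... + 1/((2*depth - 1) x^-1))), bottom up."""
    val = (2 * depth - 1) / x
    for k in range(depth - 1, 0, -1):
        val = (2 * k - 1) / x + 1 / val
    return 1 / val

for d in (2, 4, 6):
    print(d, abs(tanh_cf(0.5, d) - math.tanh(0.5)))   # the errors shrink very fast
```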

Lemma 1: Each $f_n(x)$ (as recursively defined above) is odd and analytic near 0, and satisfies the differential equation

$\displaystyle f_{n}'(x) = 1 - f_n(x)^2 - 2n x^{-1}f_n(x).$

Proof: By induction. In the case $n = 0$, we have that $f_0(x) = \tanh(x)$ is analytic and

$f_{0}'(x) = 1/\cosh^2(x) = 1 - f_{0}^2(x).$

Assuming the conditions hold when $n = k$, and writing

$f_k(x) = a_{k1}x + a_{k3}x^3 + \ldots$

we easily calculate from the differential equation that $a_{k1} = 1/(2k+1)$. It follows that

$\displaystyle f_{k+1}(x) := \frac1{f_k(x)} - (2k+1)x^{-1}$

is indeed analytic in a neighborhood of 0. The verification of the differential equation (as inductive step) for the case $n = k+1$ is routine and left to the reader. $\Box$
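The bookkeeping in lemma 1 can be mechanized with truncated power series over the rationals: generate the Maclaurin series of $\tanh$ from its differential equation, run the recursion, and watch the leading coefficients $1/(2n+1)$ appear (a sketch, with names and the truncation order $D = 16$ my choices; $D = 16$ suffices for six iterations, since each step costs two orders of accuracy):

```python
from fractions import Fraction

D = 16  # work with power series mod x^D

def series_inv(a):
    """Multiplicative inverse of a power series (a[0] != 0), mod x^D."""
    b = [Fraction(0)] * D
    b[0] = 1 / a[0]
    for k in range(1, D):
        b[k] = -sum(a[i] * b[k - i] for i in range(1, k + 1)) / a[0]
    return b

# Maclaurin series of f_0 = tanh, generated from f' = 1 - f^2, f(0) = 0
f = [Fraction(0)] * D
f[1] = Fraction(1)
for k in range(1, D - 1):
    f[k + 1] = -sum(f[i] * f[k - i] for i in range(k + 1)) / (k + 1)

fn = f
for n in range(6):
    assert fn[1] == Fraction(1, 2 * n + 1)   # leading coefficient is 1/(2n+1)
    g = fn[1:] + [Fraction(0)]               # f_n(x) / x
    h = series_inv(g)                        # so that 1/f_n(x) = h(x) / x
    assert h[0] == 2 * n + 1                 # the (2n+1) x^{-1} term cancels exactly
    fn = h[1:] + [Fraction(0)]               # f_{n+1} = 1/f_n - (2n+1) x^{-1}
print(fn[1])                                 # leading coefficient of f_6: should be 1/13
```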

• Remark: The proof that the continued fraction above indeed converges to $\tanh(x)$ is too involved to give in detail here; I’ll just refer to notes that Knuth gives in the answers to his exercises. Basically, for each $x$ in the range $(-\pi/2, \pi/2)$, he gets a uniform bound $|f_n(x)| \leq \tan|x|$ for all $n$, and notes that as a result convergence of the continued fraction is then easy to prove for such $x$ (good enough for us, as we’ll be taking $x = 1/2$). He goes on to say, somewhat telegraphically for my taste, “Careful study of this argument reveals that the power series for $f_n(z)$ actually converges for $|z| < \pi \sqrt{2n+1}/2$; therefore the singularities of $f_n(z)$ get farther and farther away from the origin as $n$ grows, and the continued fraction actually represents $\tanh(z)$ throughout the complex plane.” [Emphasis his] Hmm…

Assuming the continued fraction representation for $\tanh(x)$, let’s tackle $e$. From the continued fraction we get for instance

$\displaystyle \tanh(1/2) = \frac{e^{1/2} - e^{-1/2}}{e^{1/2} + e^{-1/2}} = 1/(2 + 1/(6 + 1/(10 + \ldots)))$

Taking reciprocals, $\displaystyle \frac1{\tanh(1/2)} = \frac{e^{1/2} + e^{-1/2}}{e^{1/2} - e^{-1/2}} = \frac{e + 1}{e - 1} = 1 + \frac{2}{e - 1}$, so that

$\displaystyle 1 + \frac{2}{e - 1} = 2 + 1/(6 + 1/(10 + 1/(14 + \ldots ))).$

Theorem 1: $e-1 = \left[ 1, 1, 2, 1, 1, 4, 1, 1, 6, \ldots \right]$.

Proof: By the last displayed equation, it suffices to show

$2\left[ 0, 1, 1, 2, 1, 1, 4, 1, 1, 6, \ldots \right] = \left[ 1, 6, 10, 14, \ldots \right].$

This follows from a recursive algorithm for multiplying a continued fraction by 2, due to Hurwitz (Knuth, loc. cit., p. 375, exercise 14):

Lemma 2: $2\left[ 0, 2a + 1, b, c, \ldots \right] = \left[ 0, a, 1, 1 + 2\left[ 0, b-1, c, \ldots \right] \right]$, and

$2\left[0, 2a, b, c, \ldots \right] = \left[ 0, a, 2\left[ b, c, \ldots \right] \right] .$

I won’t bother proving this; instead I’ll just run through a few cycles to see how it applies to theorem 1:

$2\left[ 0, 1, 1, 2, 1, 1, 4, \ldots \right] = \left[ 0, 0, 1, 1 + 2\left[ 0, 0, 2, 1, 1, 4\ldots \right] \right]$

$= \left[ 1, 1 + 2\left[ 2, 1, 1, 4, \ldots \right] \right]$

$= \left[ 1, 1 + 4 + 2\left[ 0, 1, 1, 4, \ldots \right] \right]$

$= \left[ 1, 5 + \left[ 0, 0, 1, 1 + 2\left[ 0, 0, 4, \ldots \right] \right] \right]$

$= \left[ 1, 6 + \left[ 0, 1 + 2\left[ 4, 1, 1, 6, \ldots \right] \right] \right]$

$= \left[ 1, 6, 1 + 2\left[ 4, 1, 1, 6, \ldots \right] \right]$

$= \left[ 1, 6, 1 + 8 + 2\left[ 0, 1, 1, 6, \ldots \right] \right]$

and so on. Continuing this procedure, we get $\left[ 1, 6, 10, 14, \ldots \right]$, which finishes the proof of theorem 1. $\Box$
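Rather than prove Hurwitz’s lemma, one can at least watch theorem 1 come true in exact rational arithmetic: evaluate a deep truncation of $\left[0, 1, 1, 2, 1, 1, 4, \ldots \right]$, double it, and re-expand (a sketch; names mine):

```python
import math
from fractions import Fraction

def cf_value(terms):
    """Exact value of a finite continued fraction [a0; a1, ..., an]."""
    val = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        val = a + 1 / val
    return val

def cf_terms(x, n):
    """First n partial quotients of a rational x."""
    out = []
    for _ in range(n):
        a = math.floor(x)
        out.append(a)
        x = 1 / (x - a)
    return out

# a deep truncation of [0, 1, 1, 2, 1, 1, 4, 1, 1, 6, ...], i.e. of 1/(e - 1)
terms = [0] + [t for k in range(1, 11) for t in (1, 1, 2 * k)]
doubled = cf_terms(2 * cf_value(terms), 6)
print(doubled)   # [1, 6, 10, 14, 18, 22]
```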

PROOF 2

I turn now to the second proof (by Cohn, loc. cit.), which I find rather more satisfying. It’s based on Padé approximants, which are “best fit” rational function approximations to a given analytic function, much as the rational approximants $\displaystyle \frac{p_n}{q_n} = \left[ a_0, a_1, \ldots, a_n \right]$ provide “best fits” to a given real number $x = \left[ a_0, a_1, \ldots \right]$. (By “best fit”, I mean a sample theorem like: among all rational numbers whose denominator is bounded above in magnitude by $q_n$, the approximant $\displaystyle \frac{p_n}{q_n}$ comes closest to $x$.)

Definition: Let $f$ be a function analytic in a neighborhood of 0. The Padé approximant to $f$ of order $(m, n)$, denoted $r_{m, n}(x)$, is the (unique) rational function $\displaystyle \frac{p(x)}{q(x)}$ such that $\mbox{deg}(p) = m$, $\mbox{deg}(q) = n$, and the MacLaurin coefficients of $r_{m, n}$ agree with those of $f$ up to degree $m + n$. $\Box$

This agreement of MacLaurin coefficients is equivalent to the condition that the function

$\displaystyle \frac{q(x)f(x) - p(x)}{x^{m+n+1}}$

is analytic around 0. Here, we will be interested in Padé approximants to $f(x) = e^x$.

In general, Padé approximants may be computed by (tedious) linear algebra, but in the present case Hermite found a clever integration trick which gets the job done:

Proposition 1: Let $r$ be a polynomial of degree $k$. Then there are polynomials $p, q$ of degree at most $k$ such that

$\displaystyle \int_{0}^1 r(t)e^{xt} dt = \frac{q(x)e^x - p(x)}{x^{k+1}}$

Explicitly,

$p(x) = r(0)x^k - r'(0)x^{k-1} + r''(0)x^{k-2} - \ldots$

$q(x) = r(1)x^k - r'(1)x^{k-1} + r''(1)x^{k-2} - \ldots$

Proof: Integration by parts yields

$\displaystyle \int_{0}^1 r(t)e^{xt} dt = \frac{r(1)e^x - r(0)}{x} - \frac1{x} \int_{0}^1 r'(t)e^{xt} dt$

and the general result follows by induction. $\Box$

It is clear that the integral of proposition 1 defines a function analytic in $x$. Taking $k = m + n$, this means we can read off the Padé approximant $r_{m, n}(x) = p(x)/q(x)$ to $e^x$ from the formulas for $p, q$ in proposition 1, provided that the polynomial $r(t)$ [of degree $m + n$] is chosen so that $\mbox{deg}(p) = m, \mbox{deg}(q) = n$. Looking at these formulas, all we have to do is choose $r$ to have a zero of order $n$ at $t = 0$, and a zero of order $m$ at $t = 1$. Therefore $r(t) = t^n (t-1)^m$ fits the bill.
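Hermite’s trick is easy to mechanize: expand $r(t) = t^n (t-1)^m$, differentiate repeatedly, and assemble $p(1)$ and $q(1)$ with alternating signs as in proposition 1 (a sketch; `pade_at_one` is my name for it):

```python
import math

def pade_at_one(m, n):
    """p(1), q(1) from proposition 1, taking r(t) = t^n (t-1)^m."""
    # coefficients of r(t), lowest degree first: t^n times the binomial expansion
    d = [0] * n + [(-1) ** (m - j) * math.comb(m, j) for j in range(m + 1)]
    p1 = q1 = 0
    sign = 1
    while d:
        p1 += sign * d[0]       # r^{(j)}(0): constant term of the j-th derivative
        q1 += sign * sum(d)     # r^{(j)}(1): value of the j-th derivative at 1
        sign = -sign
        d = [i * c for i, c in enumerate(d)][1:]   # differentiate once
    return p1, q1

print(pade_at_one(2, 2))   # (38, 14): p(1)/q(1) = 19/7, already close to e
```

(Notice that $38/14$ is unreduced, as the remark below anticipates; reduced, $19/7$ is exactly the convergent $p_6/q_6$ of $e$’s continued fraction.)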

Notice also we can adjust $r$ by any constant multiple; the numerator and denominator $p(x), q(x)$ are adjusted by the same constant multiples, which cancel each other in the Padé approximant $r_{m, n}(x) = p(x)/q(x)$.

Taking $r(t) = t^n (t-1)^m$ in proposition 1, we then infer

$\displaystyle \int_{0}^1 t^n (t-1)^m e^t dt = q(1)e - p(1).$

Notice that this integral is small when $m, n$ are large. This means that $r_{m, n}(1) = p(1)/q(1)$ will be close to $e$ (see the following remark), and it turns out that by choosing $m, n$ appropriately, the values $r_{m, n}(1)$ coincide exactly with rational approximants coming from the continued fraction for $e$.

• Remark: Note that for the choice $r(t) =t^n (t-1)^m$, the values $p(1), q(1)$ derived from proposition 1 are manifestly integral, and $q(1) \neq 0$. [In particular, $|q(1)| \geq 1$, justifying the claim that $|p(1)/q(1) - e|$ is small if $q(1)e - p(1)$ is.] In fact, $p(1), q(1)$ may be much larger than necessary; e.g., they may have a common factor, so that the fraction $p(1)/q(1)$ is unreduced. This ties in with how we adjust $r(t)$ by a constant factor, as in theorem 2 below.

For $n \geq 0$, let $p_n/q_n$ denote the $n^{th}$ rational approximant arising from the infinite continued fraction

$\left[ a_0, a_1, a_2, \ldots \right] = \left[ 1, 0, 1, 1, 2, 1, 1, 4, 1, \dots \right],$

where $\mbox{gcd}(p_n, q_n) = 1$. From standard theory of continued fractions, we have the following recursive rule for computing the integers $p_n, q_n$ from the $a_n$: $p_0 = a_0$, $q_0 = 1$, $p_{-1} := 1$, $q_{-1} := 0$, and

$p_{n+1} = a_{n+1} p_n + p_{n-1} \qquad q_{n+1} = a_{n+1} q_n + q_{n-1}.$

Explicitly, $a_{3n} = a_{3n+2} = 1$ and $a_{3n+1} = 2n$, so

1. $p_{3n} = p_{3n-1} + p_{3n-2} \qquad q_{3n} = q_{3n-1} + q_{3n-2}$
2. $p_{3n+1} = 2n p_{3n} + p_{3n-1} \qquad q_{3n+1} = 2n q_{3n} + q_{3n-1}$
3. $p_{3n+2} = p_{3n+1} + p_{3n} \qquad q_{3n+2} = q_{3n+1} + q_{3n}$

[Note: $p_1 = 1$ and $q_1 = 0$, so $p_1/q_1$ is infinite, but that won’t matter below.]

Theorem 2: Define, for $n \geq 0$,

$\displaystyle A_n := \int_{0}^1 \frac{t^n (t-1)^n}{n!} e^t dt$

$\displaystyle B_n := \int_{0}^1 \frac{t^{n+1} (t-1)^n}{n!} e^t dt$

$\displaystyle C_n := \int_{0}^1 \frac{t^n (t-1)^{n+1}}{n!} e^t dt$

Then $A_n = q_{3n}e - p_{3n}$, $B_n = p_{3n+1} - q_{3n+1}e$, and $C_n = p_{3n+2} - q_{3n+2}e$.

Proof: It is easy to see $A_0 = e - 1$, $B_0 = 1$, and $C_0 = 2 - e$. In view of the recursive relations for the $p_n, q_n$ above, it suffices to show

• $A_n = -B_{n-1} - C_{n-1}$
• $B_n = -2n A_n + C_{n-1}$
• $C_n = B_n - A_n$

The last relation is trivial. The first relation $A_n + B_{n-1} + C_{n-1} = 0$ follows by integrating both sides of the identity

$\displaystyle \frac{t^n (t-1)^n e^t}{n!} + \frac{t^n (t-1)^{n-1} e^t}{(n-1)!} + \frac{t^{n-1} (t-1)^n e^t}{(n-1)!} = \frac{d}{dt} \frac{t^n (t-1)^n e^t}{n!}$

over the interval $\left[ 0, 1 \right]$. The second relation $B_n + 2n A_n - C_{n-1} = 0$ follows by integrating both sides of the identity

$\displaystyle \frac{t^{n+1} (t-1)^n e^t}{n!} + 2n\frac{t^{n} (t-1)^{n} e^t}{n!} - \frac{t^{n-1} (t-1)^n e^t}{(n-1)!} = \frac{d}{dt} \frac{t^n (t-1)^{n+1} e^t}{n!}$

which we leave to the reader to check. This completes the proof. $\Box$
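Theorem 2 can also be verified exactly by machine: each integral is an integer combination $a \cdot e + b$, computable from the moments $\int_0^1 t^j e^t dt$ via integration by parts, and can be compared coefficient-by-coefficient against the convergents (a sketch; all names mine):

```python
import math
from fractions import Fraction

def exp_moment(j):
    """integral_0^1 t^j e^t dt, returned as (a, b) meaning a*e + b."""
    a, b = 1, -1                    # I_0 = e - 1
    for i in range(1, j + 1):
        a, b = 1 - i * a, -i * b    # I_j = e - j*I_{j-1}, by parts
    return a, b

def hermite_integral(i, j, n):
    """integral_0^1 t^i (t-1)^j e^t / n! dt, as (coefficient of e, rational part)."""
    A = B = Fraction(0)
    for k in range(j + 1):          # binomial expansion of (t-1)^j
        c = Fraction((-1) ** (j - k) * math.comb(j, k), math.factorial(n))
        ma, mb = exp_moment(i + k)
        A += c * ma
        B += c * mb
    return A, B

def convergents(N):
    """p_k, q_k (k < N) for the continued fraction [1, 0, 1, 1, 2, 1, 1, 4, ...]."""
    a, m = [], 0
    while len(a) < N:
        a += [1, 2 * m, 1]          # the triads 1, 2m, 1
        m += 1
    ps, qs = [], []
    p2, q2, p1, q1 = 0, 1, 1, 0     # seeds p_{-2}, q_{-2}, p_{-1}, q_{-1}
    for ak in a[:N]:
        p, q = ak * p1 + p2, ak * q1 + q2
        ps.append(p)
        qs.append(q)
        p2, q2, p1, q1 = p1, q1, p, q
    return ps, qs

ps, qs = convergents(18)
for n in range(6):
    assert hermite_integral(n, n, n) == (qs[3*n], -ps[3*n])           # A_n
    assert hermite_integral(n + 1, n, n) == (-qs[3*n+1], ps[3*n+1])   # B_n
    assert hermite_integral(n, n + 1, n) == (-qs[3*n+2], ps[3*n+2])   # C_n
print(ps[:7], qs[:7])   # note p_6/q_6 = 19/7
```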

Theorem 2 immediately implies that

$e = \left[ 1, 0, 1, 1, 2, 1, 1, 4, 1, 1, 6, 1, \ldots \right];$

indeed, the rational approximants $p_k/q_k$ to the right-hand side have the property that $q_k |p_k/q_k - e|$ is one of $|A_n|, |B_n|$, or $|C_n|$ (for $k = 3n, 3n+1, 3n+2$ respectively), and looking at their integral expressions, these quantities approach 0 very rapidly.

This in turn means, since the denominators $q_k$ grow rapidly with $k$, that the rational approximants $p_k/q_k$ approach $e$ “insanely” rapidly, and this in itself can be used as the basis of a proof that $e$ is transcendental (Roth’s theorem). To give some quantitative inkling of just “how rapidly”: Knuth in his notes gives estimates on how close the approximant

$1/(x^{-1} + 1/(3x^{-1} + \ldots + 1/((2n+1)x^{-1}) \ldots ))$

is to the function $\tanh(x)$. It’s something on the order of $\displaystyle \frac{(n!)^2}{(2n)!(2n+1)!}$ (loc. cit., p. 651).

• Remark: Quoting Roth’s theorem in support of a theorem of Hermite is admittedly anachronistic. However, the Padé approximants and their integral representations used here did play an implicit role in Hermite’s proof of the transcendence of $e$; in fact, Padé was a student of Hermite. See Cohn’s article for further references to this topic.

[Wow, another long post. I wonder if anyone will read the whole thing!]

[Edits in response to the comment below by Henry Cohn.]