One of the rare pleasures of doing mathematics — not necessarily high-powered research-level mathematics, but casual fun stuff too — is finally getting an answer to a question tucked away at the back of one’s mind for years and years, sometimes decades. Let me give an example: ever since I was pretty young (early teens), I’ve loved continued fractions; they are a marvelous way of representing numbers, with all sorts of connections to non-trivial mathematics [analysis, number theory (both algebraic and transcendental!), dynamical systems, knot theory, ...]. And ever since I’ve heard of continued fractions, there’s one little factoid which I have frequently seen mentioned but which is hardly ever proved in the classic texts, at least not in the ones I looked at: the beautiful continued fraction representation for $e$.
[Admittedly, most of my past searches were done in the pre-Google era -- today it's not that hard to find proofs online.]
This continued fraction was apparently “proved” by Euler way back when (1731); I once searched for a proof in his Collected Works, but for some reason didn’t find it; perhaps I just got lost in the forest. Sometimes I would ask people for a proof; the responses I got were generally along the lines of “isn’t that trivial?” or “I think I can prove that”. But talk is cheap, and I never did get no satisfaction. That is, until a few (maybe five) years ago, when by accident I spotted a proof buried in Volume 2 of Knuth’s The Art of Computer Programming. Huge rush of relief! So, if any of you have been bothered by this yourselves, maybe this is your lucky day.
I’m sure most of you know what I’m talking about. To get the (regular) continued fraction for a number, just iterate the following steps: write down the integer part, subtract it, take the reciprocal. Lather, rinse, repeat. For example, the sequence of integer parts you get for $\sqrt{2}$ is 1, 2, 2, 2, … — this means

$$\sqrt{2} = 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \ddots}}},$$

giving the continued fraction representation for $\sqrt{2}$. Ignoring questions of convergence, this equation should be “obvious”, because it says that the continued fraction you get for $\sqrt{2} - 1$ equals the reciprocal of the continued fraction you get for $\sqrt{2} + 1$.
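The integer-part-subtract-reciprocate loop is easy to try in code; here is a minimal Python sketch (function name mine) that recovers the pattern for $\sqrt{2}$:

```python
from math import floor, sqrt

def cf_digits(x, n):
    """First n integer parts of the continued fraction of x:
    write down the integer part, subtract it, take the reciprocal."""
    digits = []
    for _ in range(n):
        a = floor(x)
        digits.append(a)
        x = 1 / (x - a)   # floating point, so only trust modest n
    return digits

print(cf_digits(sqrt(2), 8))
```

With ordinary floats the digits are only reliable for a couple of dozen iterations before roundoff takes over, which is exactly the calculator phenomenon described below.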
Before launching in on $e$, let me briefly recall a few well-known facts about continued fractions:
- Every rational number has a continued fraction representation of finite length. The continued fraction expresses what happens when one iterates the Euclidean division algorithm.
For example, the integer parts appearing in the continued fraction for 37/14:

$$\frac{37}{14} = 2 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{4}}}}$$

duplicate the successive quotients one gets by using the division algorithm to compute $\gcd(37, 14)$:

$$37 = 2 \cdot 14 + 9, \quad 14 = 1 \cdot 9 + 5, \quad 9 = 1 \cdot 5 + 4, \quad 5 = 1 \cdot 4 + 1, \quad 4 = 4 \cdot 1.$$
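The match between the two computations can be checked mechanically; here is a small Python sketch (helper names mine), using exact rational arithmetic on the continued fraction side:

```python
from fractions import Fraction

def euclid_quotients(a, b):
    """Successive quotients produced by the division algorithm for gcd(a, b)."""
    qs = []
    while b:
        qs.append(a // b)
        a, b = b, a % b
    return qs

def cf_of_rational(x):
    """The finite continued fraction of a rational number, computed exactly."""
    digits = []
    while True:
        a = x.numerator // x.denominator
        digits.append(a)
        x -= a
        if x == 0:
            return digits
        x = 1 / x

print(euclid_quotients(37, 14), cf_of_rational(Fraction(37, 14)))
# both give [2, 1, 1, 1, 4]
```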
- A number has an infinite continued fraction if and only if it is irrational. Let $Y$ denote the space of irrationals between 0 and 1 (as a subspace of $\mathbb{R}$). The continued fraction representation (mapping an irrational $y = [0; a_1, a_2, a_3, \ldots]$ to the corresponding infinite sequence of integer parts $(a_1, a_2, a_3, \ldots)$ in its continued fraction representation) gives a homeomorphism $Y \cong \mathbb{N}^{\mathbb{N}}$, where $\mathbb{N}^{\mathbb{N}}$ carries a topology as product of countably many copies of the discrete space $\mathbb{N}$.
In particular, the shift map $S: \mathbb{N}^{\mathbb{N}} \to \mathbb{N}^{\mathbb{N}}$, defined by $S(a_1, a_2, a_3, \ldots) = (a_2, a_3, a_4, \ldots)$, corresponds to the map $T: Y \to Y$ defined by $T(y) = \frac{1}{y} - \left\lfloor \frac{1}{y} \right\rfloor$. The behavior of $T$ is a paragon, an exemplary model, of chaos:
- There is a dense set of periodic points of $T$. These are quadratic surds like $\sqrt{2} - 1$: elements of $Y$ that are fixed points of fractional linear transformations $y \mapsto \frac{ay + b}{cy + d}$ (for integral $a, b, c, d$).
- The transformation $T$ is topologically mixing.
- There is sensitive dependence on initial conditions.
For some reason, I find it fun to observe this sensitive dependence using an ordinary calculator. Try calculating something like the golden mean $\phi = \frac{1 + \sqrt{5}}{2}$, and hit it with the iteration (subtract the integer part, take the reciprocal) over and over, and watch the parade of integer parts go by (a long succession of 1’s until the precision of the arithmetic finally breaks down and the behavior looks random, chaotic). For me this activity is about as enjoyable as popping bubble wrap.
- Remark: One can say rather more in addition to the topological mixing property. Specifically, consider the measure $\mu$ on $Y$ defined by $\mu(A) = \frac{1}{\ln 2} \int_A \frac{dy}{1 + y}$ (the Gauss measure). It may be shown that $T$ is a measure-preserving transformation; much more significantly, $T$ is an ergodic transformation on this measure space. It then follows from Birkhoff’s ergodic theorem that whenever $f: Y \to \mathbb{R}$ is integrable, the time averages $\frac{1}{n} \sum_{k=0}^{n-1} f(T^k y)$ approach the space average $\int_Y f \, d\mu$ for almost all $y$. Applying this fact to $f(y) = \log a_1(y)$, the logarithm of the first integer part, it follows that for almost all irrationals $y$, the geometric mean $(a_1 a_2 \cdots a_n)^{1/n}$ of the integer parts approaches a constant, Khinchin’s constant $K \approx 2.685452\ldots$. A fantastic theorem!
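Khinchin’s constant has a classical product formula, $K = \prod_{k \geq 1} \left(1 + \frac{1}{k(k+2)}\right)^{\log_2 k}$, which makes for a quick numerical sanity check in Python (the truncation point is my own choice):

```python
from math import exp, log

# ln K = sum over k >= 2 of log_2(k) * ln(1 + 1/(k(k+2)))  (the k = 1 term vanishes)
log_K = sum(log(k, 2) * log(1 + 1.0 / (k * (k + 2))) for k in range(2, 1_000_000))
print(exp(log_K))   # approaches K = 2.685452...
```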
Anyway, I digress. You are probably waiting to hear about the continued fraction representation of $e$, which is:

$$e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, \ldots]$$
Cute little sequence, except for the bump at the beginning where there’s a 2 instead of a 1. One thing I learned from Knuth is that the bump is smoothed away by writing it in a slightly different way,

$$e = [1; 0, 1, 1, 2, 1, 1, 4, 1, 1, 6, \ldots],$$

involving triads $(1, 2n, 1)$, where $n = 0, 1, 2, \ldots$.
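As a sanity check, truncating this continued fraction and evaluating with exact rational arithmetic homes in on $e$ very quickly (a Python sketch; the helper name is mine):

```python
from fractions import Fraction
from math import e

def cf_value(digits):
    """Evaluate a finite continued fraction [a0; a1, ..., ak] exactly."""
    x = Fraction(digits[-1])
    for a in reversed(digits[:-1]):
        x = a + 1 / x
    return x

# e = [2; 1, 2, 1, 1, 4, 1, 1, 6, ...], i.e. a 2 followed by triads (1, 2n, 1)
digits = [2]
for n in range(1, 5):
    digits += [1, 2 * n, 1]

approx = cf_value(digits)
print(approx, float(approx) - e)   # the error is already below 1e-7
```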
Anyway, how to prove this fact? I’ll sketch two proofs. The first is the one I found in Knuth (loc. cit., p. 375, exercise 16; see also pp. 650-651), and I imagine it is close in spirit to how Euler found it. The second is from a lovely article of Henry Cohn which appeared in the American Mathematical Monthly (Vol. 113, 2006, pp. 57-62), and is connected with Hermite’s proof of the transcendence of $e$.
PROOF 1 (sketch)
Two functions which Euler must have been very fond of are the tangent function and its cousin the hyperbolic tangent function,

$$\tanh x = \frac{e^x - e^{-x}}{e^x + e^{-x}},$$

related by the equation $\tanh x = -i \tan(ix)$. These functions crop up a lot in his investigations. For example, he knew that their Taylor expansions are connected with Bernoulli numbers, e.g.,

$$\tan x = \sum_{n \geq 1} \frac{(-1)^{n-1}\, 2^{2n} (2^{2n} - 1)\, B_{2n}}{(2n)!}\, x^{2n-1}.$$
The Taylor coefficients $T_{2n+1}$, where $\tan x = \sum_{n \geq 0} T_{2n+1} \frac{x^{2n+1}}{(2n+1)!}$, are integers called tangent numbers; they are the numbers 1, 2, 16, … which appear along the right edge of the triangle
1
0, 1
1, 1, 0
0, 1, 2, 2
5, 5, 4, 2, 0
0, 5, 10, 14, 16, 16
where each row is gotten by taking partial sums from the preceding row, moving alternately left-to-right and right-to-left. The numbers 1, 1, 5, … which appear along the left edge are called secant numbers, the Taylor coefficients $S_{2n}$ of the secant function: $\sec x = \sum_{n \geq 0} S_{2n} \frac{x^{2n}}{(2n)!}$. Putting $E_{2n} = S_{2n}$ and $E_{2n+1} = T_{2n+1}$, the secant and tangent numbers together are called Euler numbers, and enjoy some interesting combinatorics: $E_n$ counts the number of “zig-zag permutations” of $\{1, 2, \ldots, n\}$, where a permutation $a_1 a_2 \ldots a_n$ is zig-zag if $a_1 > a_2 < a_3 > a_4 < \cdots$. For more on this, see Stanley’s Enumerative Combinatorics (Volume I), p. 149, and also Conway and Guy’s The Book of Numbers, pp. 110-111; I also once gave a brief account of the combinatorics of the $E_n$ in terms of the generating function $\sum_{n \geq 0} E_n \frac{x^n}{n!} = \sec x + \tan x$, over here.
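The triangle is easy to generate programmatically; this Python sketch (my own naming) builds it by the alternating-partial-sums rule and reads the Euler numbers off the two edges:

```python
def boustrophedon(n_rows):
    """Rows of the zig-zag triangle: each row consists of partial sums of the
    previous row, taken alternately left-to-right and right-to-left."""
    rows = [[1]]
    for i in range(1, n_rows):
        prev = rows[-1]
        row = [0]
        if i % 2 == 1:                 # left-to-right pass
            for x in prev:
                row.append(row[-1] + x)
        else:                          # right-to-left pass
            for x in reversed(prev):
                row.append(row[-1] + x)
            row.reverse()
        rows.append(row)
    return rows

rows = boustrophedon(8)
# secant numbers sit at the left edge of even rows, tangent numbers
# at the right edge of odd rows; together they are the Euler numbers
E = [rows[n][0] if n % 2 == 0 else rows[n][-1] for n in range(8)]
print(E)   # -> [1, 1, 1, 2, 5, 16, 61, 272]
```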
Euler also discovered a lovely continued fraction representation,

$$\tanh x = \cfrac{x}{1 + \cfrac{x^2}{3 + \cfrac{x^2}{5 + \cfrac{x^2}{7 + \ddots}}}},$$
as a by-product of a larger investigation into continued fractions for solutions to the general Riccati equation. Let’s imagine how he might have found this continued fraction. Since both sides of the equation are odd functions, we may as well consider just $\tanh x$, where $0 < x \leq 1$. Thus the integer part is 0; subtract the integer part and take the reciprocal, and see what happens.
The MacLaurin series for $\tanh x$ is $x - \frac{x^3}{3} + \frac{2x^5}{15} - \cdots$; its reciprocal has a pole at 0 of residue 1, so

$$f_1(x) := \frac{1}{\tanh x} - \frac{1}{x}$$

gives a function which is odd and analytic near 0. Now repeat: reciprocating $f_1(x) = \frac{x}{3} - \frac{x^3}{45} + \cdots$, we get a simple pole at 0 of residue 3, and

$$f_2(x) := \frac{1}{f_1(x)} - \frac{3}{x}$$

gives a function which is odd and analytic near 0, and one may check by hand that its MacLaurin series begins as $\frac{x}{5} - \cdots$.
The pattern continues by a simple induction. Recursively define (for $n \geq 0$)

$$f_0(x) = \tanh x, \qquad f_{n+1}(x) = \frac{1}{f_n(x)} - \frac{2n+1}{x}.$$

It turns out (lemma 1 below) that each $f_n$ is odd and analytic near 0, and then it becomes plausible that the continued fraction for $\tanh x$ above is correct: we have

$$\tanh x = \cfrac{x}{1 + x f_1(x)} = \cfrac{x}{1 + \cfrac{x^2}{3 + x f_2(x)}} = \cfrac{x}{1 + \cfrac{x^2}{3 + \cfrac{x^2}{5 + x f_3(x)}}} = \cdots$$
Indeed, assuming the fact that $|f_n(x)|$ is uniformly bounded over $n$, these expressions converge to $\tanh x$ as $n \to \infty$, so that the continued fraction expression for $\tanh x$ is correct.
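For what it’s worth, the truncations converge impressively fast; here is a quick Python comparison against the built-in tanh (the truncation depth is my own choice):

```python
from math import tanh

def tanh_cf(x, depth):
    """Truncation of tanh x = x/(1 + x^2/(3 + x^2/(5 + ...))),
    cut off at the partial denominator 2*depth + 1."""
    val = 2.0 * depth + 1.0
    for k in range(depth - 1, -1, -1):
        val = 2 * k + 1 + x * x / val
    return x / val

print(tanh_cf(1.0, 10) - tanh(1.0))   # difference is tiny
```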
Lemma 1: Each $f_n$ (as recursively defined above) is odd and analytic near 0, and satisfies the differential equation

$$f_n'(x) = 1 - f_n(x)^2 - \frac{2n}{x} f_n(x).$$
Proof: By induction. In the case $n = 0$, we have that $f_0(x) = \tanh x$ is odd and analytic, and

$$f_0'(x) = 1 - f_0(x)^2.$$
Assuming the conditions hold when $n$, and writing

$$f_n(x) = c_1 x + c_3 x^3 + c_5 x^5 + \cdots,$$

we easily calculate from the differential equation that $c_1 = \frac{1}{2n+1}$. It follows that

$$f_{n+1}(x) = \frac{1}{f_n(x)} - \frac{2n+1}{x} = \frac{1}{x}\left(\frac{1}{c_1 + c_3 x^2 + \cdots} - (2n+1)\right)$$

is indeed analytic in a neighborhood of 0. The verification of the differential equation (as inductive step) for the case $n + 1$ is routine and left to the reader.
- Remark: The proof that the continued fraction above indeed converges to $\tanh x$ is too involved to give in detail here; I’ll just refer to notes that Knuth gives in the answers to his exercises. Basically, for each $x$ in the range $0 < x \leq 1$, he gets a uniform bound on $|f_n(x)|$ over all $n$, and notes that as a result convergence of the continued fraction is then easy to prove for such $x$ (good enough for us, as we’ll be taking $x = 1/2$). He goes on to say, somewhat telegraphically for my taste, that careful study of this argument reveals that the power series for $f_n(x)$ actually converges in larger and larger disks, so that “the singularities of $f_n$ get farther and farther away from the origin as $n$ grows, and the continued fraction actually represents $\tanh x$ throughout the complex plane.” [Emphasis his] Hmm…
Assuming the continued fraction representation for $\tanh x$, let’s tackle $e$. From the continued fraction we get for instance

$$\frac{e - 1}{e + 1} = \tanh\!\left(\tfrac{1}{2}\right) = \cfrac{1/2}{1 + \cfrac{1/4}{3 + \cfrac{1/4}{5 + \ddots}}} = \cfrac{1}{2 + \cfrac{1}{6 + \cfrac{1}{10 + \cfrac{1}{14 + \ddots}}}}$$

Taking reciprocals and manipulating,

$$\frac{2}{e - 1} = \frac{e + 1}{e - 1} - 1 = 1 + \cfrac{1}{6 + \cfrac{1}{10 + \cfrac{1}{14 + \ddots}}}$$
Theorem 1: $e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, \ldots]$.
Proof: By the last displayed equation, $e - 1 = 2\,[0; 1, 6, 10, 14, \ldots]$, so it suffices to show

$$2\,[0; 1, 6, 10, 14, \ldots] = [1; 1, 2, 1, 1, 4, 1, 1, 6, \ldots].$$
This follows from a recursive algorithm for multiplying a continued fraction by 2, due to Hurwitz (Knuth, loc. cit., p. 375, exercise 14):
Lemma 2: $2\,[a, 2b, \rho] = [2a, b, 2\rho]$, and

$$2\,[a, 2b+1, \rho] = [2a, b, 1, 1, \tfrac{1}{2}(\rho - 1)],$$

where $\rho$ stands for the remaining tail of the continued fraction (so that $2\rho$ and $\tfrac{1}{2}(\rho - 1)$ mean: continue the procedure on the tail), and where halving is carried out by the companion rules $\tfrac{1}{2}[2c, \sigma] = [c, 2\sigma]$ and $\tfrac{1}{2}[2c+1, \sigma] = [c, 1, 1, \tfrac{1}{2}(\sigma - 1)]$.
I won’t bother proving this; instead I’ll just run through a few cycles to see how it applies to theorem 1:

$$2\,[0, 1, 6, 10, 14, \ldots] = [0, 0, 1, 1, \tfrac{1}{2}[5, 10, 14, \ldots]] = [1, 1, \tfrac{1}{2}[5, 10, 14, \ldots]]$$

$$\tfrac{1}{2}[5, 10, 14, 18, \ldots] = [2, 1, 1, \tfrac{1}{2}[9, 14, 18, \ldots]]$$

$$\tfrac{1}{2}[9, 14, 18, 22, \ldots] = [4, 1, 1, \tfrac{1}{2}[13, 18, 22, \ldots]]$$

and so on. Continuing this procedure, we get $e - 1 = 2\,[0, 1, 6, 10, 14, \ldots] = [1, 1, 2, 1, 1, 4, 1, 1, 6, \ldots]$, which finishes the proof of theorem 1.
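The doubling procedure can be spot-checked with exact arithmetic: evaluate a long truncation of $[0, 1, 6, 10, 14, \ldots]$ as a fraction, double it, and re-extract continued fraction digits (a Python sketch, helper names mine):

```python
from fractions import Fraction

def cf_value(digits):
    """Evaluate a finite continued fraction exactly."""
    x = Fraction(digits[-1])
    for a in reversed(digits[:-1]):
        x = a + 1 / x
    return x

def cf_digits(x, n):
    """First n partial quotients of a positive rational, computed exactly."""
    out = []
    for _ in range(n):
        a = x.numerator // x.denominator
        out.append(a)
        if x == a:
            break
        x = 1 / (x - a)
    return out

x = cf_value([0, 1] + [4 * k + 2 for k in range(1, 12)])   # [0, 1, 6, 10, ..., 46]
print(cf_digits(2 * x, 10))   # -> [1, 1, 2, 1, 1, 4, 1, 1, 6, 1]
```

The truncation only disturbs digits far deeper than the ten we extract, so the doubled value visibly begins like $e - 1$.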
I turn now to the second proof (by Cohn, loc. cit.), which I find rather more satisfying. It’s based on Padé approximants, which are “best fit” rational function approximations to a given analytic function, much as the rational approximants $\frac{p_n}{q_n}$ provide “best fits” to a given real number $x$. (By “best fit”, I mean a sample theorem like: among all rational numbers $\frac{p}{q}$ whose denominator $q$ is bounded above in magnitude by $q_n$, the approximant $\frac{p_n}{q_n}$ comes closest to $x$.)
Definition: Let $f$ be a function analytic in a neighborhood of 0. The Padé approximant to $f$ of order $(m, n)$, denoted $r_{m,n}(x) = \frac{p(x)}{q(x)}$, is the (unique) rational function such that $\deg(p) \leq m$, $\deg(q) \leq n$, and the MacLaurin coefficients of $r_{m,n}$ agree with those of $f$ up to degree $m + n$.
This agreement of MacLaurin coefficients is equivalent to the condition that the function

$$\frac{q(x) f(x) - p(x)}{x^{m+n+1}}$$

is analytic around 0. Here, we will be interested in Padé approximants to $f(x) = e^x$.
In general, Padé approximants may be computed by (tedious) linear algebra, but in the present case Hermite found a clever integration trick which gets the job done:
Proposition 1: Let $f(t)$ be a polynomial of degree $n$. Then there are polynomials $p(x), q(x)$ of degree at most $n$ such that

$$\int_0^1 f(t)\, e^{xt}\, dt = \frac{q(x)\, e^x - p(x)}{x^{n+1}},$$

namely $q(x) = \sum_{j=0}^{n} (-1)^j f^{(j)}(1)\, x^{n-j}$ and $p(x) = \sum_{j=0}^{n} (-1)^j f^{(j)}(0)\, x^{n-j}$.
Proof: Integration by parts yields

$$\int_0^1 f(t)\, e^{xt}\, dt = \frac{f(1)\, e^x - f(0)}{x} - \frac{1}{x} \int_0^1 f'(t)\, e^{xt}\, dt,$$

and the general result follows by induction.
It is clear that the integral of proposition 1 defines a function analytic in $x$. Taking $f$ of degree $m + n$, this means we can read off the Padé approximant $r_{m,n}$ to $e^x$ from the formulas for $p(x), q(x)$ in proposition 1, provided that the polynomial $f(t)$ [of degree $m + n$] is chosen so that $\deg(p) \leq m$ and $\deg(q) \leq n$. Looking at these formulas, all we have to do is choose $f$ to have a zero of order $n$ at $t = 0$, and a zero of order $m$ at $t = 1$. Therefore $f(t) = t^n (t-1)^m$ fits the bill.
Notice also we can adjust $f$ by any constant multiple; the numerator $p(x)$ and denominator $q(x)$ are then adjusted by the same constant multiple, which cancels in the Padé approximant $r_{m,n} = \frac{p(x)}{q(x)}$.
Taking $f(t) = t^n (t-1)^m$ in proposition 1, we then infer

$$\frac{q(x)\, e^x - p(x)}{x^{m+n+1}} = \int_0^1 t^n (t-1)^m\, e^{xt}\, dt.$$

Notice that this integral is small when $m, n$ are large. This means that $\frac{p(x)}{q(x)}$ will be close to $e^x$ (see the following remark), and it turns out that by choosing $m, n$ appropriately, the values $\frac{p(1)}{q(1)}$ coincide exactly with rational approximants coming from the continued fraction for $e$.
- Remark: Note that for the choice $f(t) = t^n (t-1)^m$, the values $p(1), q(1)$ derived from proposition 1 are manifestly integral, and

$$q(1)\, e - p(1) = \int_0^1 t^n (t-1)^m\, e^t\, dt.$$

[In particular, $e - \frac{p(1)}{q(1)} = \frac{1}{q(1)} \int_0^1 t^n (t-1)^m\, e^t\, dt$, justifying the claim that $e - \frac{p(1)}{q(1)}$ is small if the integral is.] In fact, $p(1), q(1)$ may be much larger than necessary; e.g., they may have a common factor, so that the fraction $\frac{p(1)}{q(1)}$ is unreduced. This ties in with how we adjust $f$ by a constant factor, as in theorem 2 below.
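Hermite’s trick is concrete enough to verify numerically. The Python sketch below (all helper names mine) builds $f(t) = t^n(t-1)^m$ as a coefficient list, reads off $p$ and $q$ from the integration-by-parts formulas, and compares against straightforward Simpson’s-rule integration:

```python
from math import exp

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (low degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def derivs_at(coeffs, t, count):
    """Values f(t), f'(t), ..., f^(count-1)(t) for a coefficient list."""
    out, c = [], list(coeffs)
    for _ in range(count):
        out.append(sum(a * t ** k for k, a in enumerate(c)))
        c = [k * a for k, a in enumerate(c)][1:]   # differentiate
    return out

m, n, x = 3, 3, 1.0
f = [1]
for _ in range(n):
    f = poly_mul(f, [0, 1])    # a factor of t
for _ in range(m):
    f = poly_mul(f, [-1, 1])   # a factor of (t - 1)

N = m + n
d0, d1 = derivs_at(f, 0.0, N + 1), derivs_at(f, 1.0, N + 1)
q = sum((-1) ** j * d1[j] * x ** (N - j) for j in range(N + 1))
p = sum((-1) ** j * d0[j] * x ** (N - j) for j in range(N + 1))
lhs = (q * exp(x) - p) / x ** (N + 1)

# Simpson's rule for the integral of f(t) e^{xt} over [0, 1]
steps, h = 2000, 1.0 / 2000
total = 0.0
for i in range(steps + 1):
    t = i * h
    w = 1 if i in (0, steps) else (4 if i % 2 else 2)
    total += w * sum(a * t ** k for k, a in enumerate(f)) * exp(x * t)
integral = total * h / 3

print(lhs, integral)   # the two agree
```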
For $n \geq 0$, let $\frac{p_n}{q_n}$ denote the rational approximant arising from the infinite continued fraction

$$[a_0; a_1, a_2, a_3, \ldots] = [1; 0, 1, 1, 2, 1, 1, 4, 1, 1, 6, \ldots],$$

where $a_{3n+1} = 2n$ and $a_{3n} = a_{3n+2} = 1$. From standard theory of continued fractions, we have the following recursive rule for computing the integers $p_n, q_n$ from the $a_n$: $p_0 = a_0$, $q_0 = 1$, $p_1 = a_1 a_0 + 1$, $q_1 = a_1$, and

$$p_{n+1} = a_{n+1}\, p_n + p_{n-1}, \qquad q_{n+1} = a_{n+1}\, q_n + q_{n-1}.$$
Explicitly, $a_{3n+1} = 2n$ and $a_{3n+2} = a_{3n+3} = 1$, so

$$p_{3n+1} = 2n\, p_{3n} + p_{3n-1}, \qquad p_{3n+2} = p_{3n+1} + p_{3n}, \qquad p_{3n+3} = p_{3n+2} + p_{3n+1},$$

and similarly for the $q_n$.
[Note: $p_1 = 1$ and $q_1 = 0$, so $\frac{p_1}{q_1}$ is infinite, but that won't matter below.]
Theorem 2: Define, for $n \geq 0$,

$$A_n = \int_0^1 \frac{t^n (t-1)^n}{n!}\, e^t\, dt, \qquad B_n = \int_0^1 \frac{t^{n+1} (t-1)^n}{n!}\, e^t\, dt, \qquad C_n = \int_0^1 \frac{t^n (t-1)^{n+1}}{n!}\, e^t\, dt.$$

Then $A_n = q_{3n}\, e - p_{3n}$, $B_n = p_{3n+1} - q_{3n+1}\, e$, and $C_n = p_{3n+2} - q_{3n+2}\, e$.
Proof: It is easy to see $A_0 = e - 1 = q_0 e - p_0$, $B_0 = 1 = p_1 - q_1 e$, and $C_0 = 2 - e = p_2 - q_2 e$. In view of the recursive relations for the $p_n, q_n$ above, it suffices to show

$$A_n = -B_{n-1} - C_{n-1}, \qquad B_n = -2n\, A_n + C_{n-1}, \qquad C_n = B_n - A_n.$$
The last relation is trivial (the integrand for $C_n$ minus the integrand for $B_n$ is just minus the integrand for $A_n$). The first relation follows by integrating both sides of the identity

$$\frac{d}{dt}\left(\frac{t^n (t-1)^n}{n!}\, e^t\right) = \frac{t^{n-1} (t-1)^n}{(n-1)!}\, e^t + \frac{t^n (t-1)^{n-1}}{(n-1)!}\, e^t + \frac{t^n (t-1)^n}{n!}\, e^t$$

over the interval $[0, 1]$ (the left side integrates to zero). The second relation follows by integrating both sides of the identity

$$\frac{d}{dt}\left(\frac{t^n (t-1)^{n+1}}{n!}\, e^t\right) = \frac{t^{n+1} (t-1)^n}{n!}\, e^t + 2n\, \frac{t^n (t-1)^n}{n!}\, e^t - \frac{t^{n-1} (t-1)^n}{(n-1)!}\, e^t,$$

which we leave to the reader to check. This completes the proof.
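The conclusions of theorem 2 are easy to spot-check numerically: compute the convergents $p_k, q_k$ of $[1, 0, 1, 1, 2, 1, 1, 4, \ldots]$ and compare the three integrals against the predicted combinations (a Python sketch, helper names mine):

```python
from math import e, exp, factorial

def convergents(a):
    """Numerators p_k and denominators q_k of [a0; a1, a2, ...]."""
    p, q = [a[0], a[1] * a[0] + 1], [1, a[1]]
    for x in a[2:]:
        p.append(x * p[-1] + p[-2])
        q.append(x * q[-1] + q[-2])
    return p, q

def simpson(g, steps=2000):
    """Simpson's rule for the integral of g over [0, 1]."""
    h = 1.0 / steps
    total = g(0.0) + g(1.0)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * g(i * h)
    return total * h / 3

digits = [1]
for k in range(6):
    digits += [2 * k, 1, 1]        # [1, 0, 1, 1, 2, 1, 1, 4, 1, 1, 6, ...]
p, q = convergents(digits)

diffs = []
for n in range(1, 4):
    fn = factorial(n)
    A = simpson(lambda t: t**n * (t - 1)**n / fn * exp(t))
    B = simpson(lambda t: t**(n + 1) * (t - 1)**n / fn * exp(t))
    C = simpson(lambda t: t**n * (t - 1)**(n + 1) / fn * exp(t))
    diffs += [A - (q[3*n] * e - p[3*n]),
              B - (p[3*n + 1] - q[3*n + 1] * e),
              C - (p[3*n + 2] - q[3*n + 2] * e)]
print(max(abs(d) for d in diffs))   # all nine differences are ~ 0
```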
Theorem 2 immediately implies that

$$e = [1; 0, 1, 1, 2, 1, 1, 4, 1, 1, 6, \ldots];$$

indeed, the rational approximants $\frac{p_n}{q_n}$ to the right-hand side have the property that $q_n e - p_n$ is one of $\pm A_m$, $\pm B_m$, or $\pm C_m$ (for $n = 3m, 3m+1, 3m+2$ respectively), and looking at their integral expressions, these quantities approach 0 very rapidly.
This in turn means, since the denominators $q_n$ grow rapidly with $n$, that the rational approximants $\frac{p_n}{q_n}$ approach $e$ “insanely” rapidly, and this in itself can be used as the basis of a proof that $e$ is transcendental (Roth’s theorem). To give some quantitative inkling of just “how rapidly”: Knuth in his notes gives estimates on how close the approximant

$$\cfrac{x}{1 + \cfrac{x^2}{3 + \cfrac{x^2}{5 + \ddots + \cfrac{x^2}{2n+1}}}}$$

is to the function $\tanh x$; the error is astonishingly small (loc. cit., p. 651).
- Remark: Quoting Roth’s theorem in support of a theorem of Hermite is admittedly anachronistic. However, the Padé approximants and their integral representations used here did play an implicit role in Hermite’s proof of the transcendence of $e$; in fact, Padé was a student of Hermite. See Cohn’s article for further references to this topic.
[Wow, another long post. I wonder if anyone will read the whole thing!]
[Edits in response to the comment below by Henry Cohn.]