A reader brought up essentially this question: does anyone happen to know a proof that a given function does not possess an elementary antiderivative? By “elementary”, I mean a member of the class of functions which contains all complex-valued constants, the identity function, and the exponential and log functions, and which is closed under the four basic arithmetic operations and composition.
The solutions are in! As is so often the case, solvers came up with a number of different approaches; the one which follows broadly represents the type of solution that came up most often. I’ll mention some others in the remarks below, some of them connected with the lore of Richard Feynman.
Solution by Nilay Vaish: The answer to POW-10 is $\frac{\pi}{2} \log 2$. Put

$$I = \int_0^{\pi/2} \frac{x}{\tan x}\, dx$$

and integrate by parts: we have

$$I = \Big[ x \log \sin x \Big]_0^{\pi/2} - \int_0^{\pi/2} \log \sin x\, dx.$$

The first term vanishes by a simple application of L’Hôpital’s rule. We now have

$$I = -\int_0^{\pi/2} \log \sin x\, dx = -\int_0^{\pi/2} \log \cos x\, dx, \qquad (1)$$

where the second equation follows from $\cos x = \sin(\pi/2 - x)$, and the general elementary fact that

$$\int_0^a f(x)\, dx = \int_0^a f(a - x)\, dx.$$

Adding the two expressions for $I$ in (1), we obtain

$$2I = -\int_0^{\pi/2} \log(\sin x \cos x)\, dx = \frac{\pi}{2} \log 2 - \frac{1}{2} \int_0^{\pi} \log \sin u\, du \qquad (2)$$

after a simple substitution ($u = 2x$, using $\sin x \cos x = \frac{1}{2} \sin 2x$). The last integral splits up as two integrals:

$$\int_0^{\pi} \log \sin u\, du = \int_0^{\pi/2} \log \sin u\, du + \int_{\pi/2}^{\pi} \log \sin u\, du, \qquad (3)$$

but these two integrals are equal, using the identity $\sin(\pi - u) = \sin u$ together with the general integration fact cited above. Hence the two sides of (3) equal

$$2 \int_0^{\pi/2} \log \sin u\, du = -2I,$$

recalling equation (1) above. Substituting this for the last integral in equation (2), we arrive at

$$2I = \frac{\pi}{2} \log 2 + I,$$

whence we derive the value of the desired integral: $I = \frac{\pi}{2} \log 2$.
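For readers who like to double-check: assuming the POW-10 integral is $\int_0^{\pi/2} x/\tan x\, dx$ (an assumption on my part, read off from the steps of the solution), a quick midpoint-rule computation agrees with $\frac{\pi}{2} \log 2$:

```python
import math

def midpoint_quad(f, a, b, n=200_000):
    # Midpoint rule; avoids the endpoints, where x/tan x is defined
    # only as a limit (it tends to 1 as x -> 0).
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Assumed POW-10 integrand: x / tan x on (0, pi/2).
I = midpoint_quad(lambda x: x / math.tan(x), 0.0, math.pi / 2)
print(I, (math.pi / 2) * math.log(2))
```

Both printed numbers should match to many decimal places.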
1. A number of solvers exploited variations on the theme of the solution above, which could be summarized as involving symmetry considerations together with a double angle formula. For example, Philipp Lampe and Paul Shearer in their solutions made essential use of the identity

$$\sin 2x = 2 \sin x \cos x$$

in conjunction with the complementarity $\sin(\pi/2 - x) = \cos x$, and the general integration fact cited above.
2. Arin Chaudhuri (and Vishal in private email) pointed out to me that the evaluation of the integral

$$\int_0^{\pi/2} \log \sin x\, dx$$

is actually fairly well-known: it appears for example in Complex Analysis by Ahlfors (3rd edition, p. 160) to illustrate contour integration of a complex analytic function via the calculus of residues, and no doubt occurs in any number of other places.
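The well-known value in question is $\int_0^{\pi/2} \log \sin x\, dx = -\frac{\pi}{2} \log 2$, which is easy to confirm numerically despite the (integrable) logarithmic singularity at $0$:

```python
import math

def midpoint_quad(f, a, b, n=200_000):
    # Midpoint rule; never evaluates f at the endpoints, which matters
    # here because log(sin x) blows up (integrably) at x = 0.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Expect approximately -(pi/2) log 2.
J = midpoint_quad(lambda x: math.log(math.sin(x)), 0.0, math.pi / 2)
print(J, -(math.pi / 2) * math.log(2))
```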
3. Indeed, Simon Tyler in his solution referred to this as an integral of Clausen type, and gave a clever method for evaluating it: we have
which works out to
The last integral can be expanded as a series
where the summands for odd indices vanish; the other terms can be collected and then resummed to yield
and by substituting this for the integral in (*), the original integral is easily evaluated.
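For reference, the classical series that powers Clausen-type evaluations (and, I would guess, the expansion meant above, though the formulas have not survived here) is $\log(2 \sin x) = -\sum_{n \ge 1} \frac{\cos 2nx}{n}$ for $0 < x < \pi$. A quick numerical check of partial sums at an arbitrary interior point:

```python
import math

# Partial sums of -(cos 2x)/1 - (cos 4x)/2 - ... versus log(2 sin x),
# evaluated at an arbitrary interior point x0 of (0, pi).
x0 = 1.0
N = 100_000
lhs = math.log(2 * math.sin(x0))
rhs = -sum(math.cos(2 * n * x0) / n for n in range(1, N + 1))
print(lhs, rhs)
```

The series converges only conditionally, so the partial sums approach the left-hand side at rate roughly $1/N$.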
4. Arin C. later described still another method which he says he got from an exercise in a book by Apostol — it’s close in spirit to the one I myself had in mind, called “differentiation under the integral sign”, famously referred to in Surely You’re Joking, Mr. Feynman!. As Feynman recounts, he never really learned fancy methods like complex contour integration, but he did pick up this method of differentiating under the integral sign from an old calculus book. In typical Feynman style, he writes,
“It turns out that’s not taught very much in the universities; they don’t emphasize it. But I caught on how to use that method, and I used that one damn tool again and again. So because I was self-taught using that book [Wilson's Advanced Calculus], I had peculiar methods of doing integrals. The result was, when guys at MIT or Princeton had trouble doing a certain integral, it was because they couldn’t do it with the standard methods they had learned in school. If it was contour integration, they would have found it; if it was a simple series expansion, they would have found it. Then I come along and try differentiating under the integral sign, and often it worked. So I got a great reputation for doing integrals, only because my box of tools was different from everybody else’s, and they had tried all their tools on it before giving the problem to me.”
So, what is this method? Wikipedia has a pretty good article on it; the idea is to view a given definite integral as an instance $F(t_0)$ of a smoothly parametrized family of integrals $F(t)$, where the extra parameter $t$ is inserted at some judicious spot, in such a way that $F$ has an easy-to-integrate derivative $F'(t)$. Then one figures out $F(t)$ by integrating $F'$, and evaluates it at the value $t_0$ which yields the original definite integral.
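The method's basic premise, that differentiating $F$ in the parameter commutes with the integral, can itself be sanity-checked numerically. Here is a toy family of my own choosing (purely illustrative, not from the problem), $F(t) = \int_0^1 e^{-t x^2}\, dx$:

```python
import math

def midpoint_quad(f, a, b, n=20_000):
    # Simple midpoint rule, good enough for a smooth integrand.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Toy family: F(t) = integral of exp(-t x^2) over [0, 1].
def F(t):
    return midpoint_quad(lambda x: math.exp(-t * x * x), 0.0, 1.0)

# Differentiating under the integral sign:
# F'(t) = integral of -x^2 exp(-t x^2) over [0, 1].
def dF(t):
    return midpoint_quad(lambda x: -x * x * math.exp(-t * x * x), 0.0, 1.0)

# Compare against a centered finite difference of F itself.
t, h = 1.0, 1e-4
fd = (F(t + h) - F(t - h)) / (2 * h)
d = dF(t)
print(fd, d)
```

The two printed numbers agree closely, which is exactly the interchange of derivative and integral that the method exploits.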
The best way to understand it is through an example. Pretend you’re Feynman, and someone hands you

$$\int_0^1 \frac{x - 1}{\log x}\, dx.$$

[I suppose there may be a contour integration method to handle that one, but never mind -- you're Feynman now, and not supposed to know how to do that!] After fiddling around a little, you find that by inserting a parameter $t$ in the exponent,

$$F(t) = \int_0^1 \frac{x^t - 1}{\log x}\, dx,$$

this has a very simple derivative:

$$F'(t) = \int_0^1 \frac{\partial}{\partial t} \left( \frac{x^t - 1}{\log x} \right) dx = \int_0^1 \frac{x^t \log x}{\log x}\, dx,$$

i.e., by differentiating under the integral sign, you manage to kill off that annoying factor $\log x$ in the denominator:

$$F'(t) = \int_0^1 x^t\, dx = \frac{1}{t + 1}.$$

We therefore have

$$F(t) = \log(t + 1) + C,$$

and notice that from the original definition of $F$, we have $F(0) = 0$. Thus $C = 0$, so that $F(t) = \log(t + 1)$, and the problem integral evaluates to $F(1) = \log 2$. Bam!
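If I have identified the example correctly as the classical $\int_0^1 \frac{x - 1}{\log x}\, dx$ (an assumption; the formulas are my reconstruction), the closed form $F(t) = \log(t + 1)$ is easy to confirm numerically at a couple of parameter values:

```python
import math

def midpoint_quad(f, a, b, n=200_000):
    # Midpoint rule; skips the endpoints x = 0 and x = 1, where the
    # integrand is defined only as a limit.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# F(t) = integral of (x^t - 1)/log x over (0, 1); claim: F(t) = log(t + 1).
def F(t):
    return midpoint_quad(lambda x: (x**t - 1) / math.log(x), 0.0, 1.0)

F1, F2 = F(1.0), F(2.0)
print(F1, math.log(2))   # the problem integral
print(F2, math.log(3))   # another point on the same curve
```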
It takes a little experience to judge where or how to stick in the extra parameter to make this method work, but the Wikipedia article has some good practice problems, including the integral of POW-10. For this problem they recommend considering
and I’ll let you, the reader, take it from there.
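For what it’s worth, one standard parametrization for this integral (my guess at the family intended; treat the formula as an assumption) is $F(t) = \int_0^{\pi/2} \frac{\arctan(t \tan x)}{\tan x}\, dx$, for which differentiating under the integral sign gives $F'(t) = \frac{\pi}{2(1 + t)}$, and $F(1)$ is the POW-10 integral. A numerical spot check of both claims:

```python
import math

def midpoint_quad(f, a, b, n=100_000):
    # Midpoint rule; avoids x = pi/2, where tan x is undefined.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Hypothetical parametrized family:
# F(t) = integral of arctan(t tan x)/tan x over (0, pi/2).
def F(t):
    return midpoint_quad(lambda x: math.atan(t * math.tan(x)) / math.tan(x),
                         0.0, math.pi / 2)

# Check F'(t) = pi / (2 (1 + t)) via a centered finite difference,
# and check that F(1) recovers (pi/2) log 2.
t, h = 2.0, 1e-4
fd = (F(t + h) - F(t - h)) / (2 * h)
target = math.pi / (2 * (1 + t))
F1 = F(1.0)
print(fd, target)
print(F1, (math.pi / 2) * math.log(2))
```

Integrating $F'(t)$ from $t = 0$ (where $F(0) = 0$) to $t = 1$ then reproduces the value $\frac{\pi}{2} \log 2$ found above.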
The point is not that this method is much more elegant than others, but that a little practice with it should be enough to convince one that it’s incredibly powerful, and often succeeds where other methods fail. (Not always, of course, and Feynman had to concede defeat when in response to a challenge, his pal Paul Olum concocted some hellacious integral which he said could only be done via contour integration.)
Doron Zeilberger is also fond of this method (as witnessed by this paper). Something about it makes me wonder whether it’s secretly connected with the idea of “creative telescoping” and the powerful methods (e.g., see here) developed by Gosper, Wilf, Zeilberger, and others to establish identities of hypergeometric type. But I haven’t had time to consider this carefully.
Also solved by Arin Chaudhuri, Philipp Lampe (University of Bonn), Paul Shearer (University of Michigan), and Simon Tyler. Thanks to all who wrote in!