*Updated with a third footnote clarifying my use of the term “diverge,” thanks to a suggestion by Evelyn Lamb, who has also written an excellent discussion of the problem with the video. At the end of this post I list all the critiques I’ve found so far.*

I feel like one of those grizzled action heroes who, having given it all up, is dragged reluctantly out of retirement for one more big mission. Over the past month or so (honestly, I forget how long I was working on things), I wrote a series of blog posts on the “weirdness” of infinity in mathematical set theory. Hopefully, there were two things that I got across in those posts: (1) infinity can be very weird, but (2) it can be comprehended, and even reasonable, once one understands the assumptions and limitations built into the mathematics.

Having retired from writing those posts, the other day I came across the following video:

So, using a seemingly simple series of mathematical manipulations, they “prove” the following astounding result: the *infinite* sum of *increasing* *positive* *integers* equals a *finite*, *fractional*, *negative* number. In short:

1 + 2 + 3 + 4 + 5 + … = -1/12.

This video was picked up by Phil Plait at Bad Astronomy, who called* it “simply the most astonishing math that you’ll ever see.” It has already spread far and wide across the internet, including making it to the popular site Boing Boing.

But is it true? The video makes it seem so simple, and uncontroversial, almost obvious. But there are some *big* mathematical assumptions hidden in their argument that, in my opinion, make it very misleading. To put it another way: in a restricted, specialized mathematical sense, one can *assign* the value -1/12 to the increasing positive sum. But in the usual sense of addition that most human beings would intuitively use, the result is nonsensical.

To me, this is an important distinction: a depressingly large portion of the population automatically assumes that mathematics is some nonintuitive, bizarre wizardry that only the super-intelligent can possibly fathom. Showing such a crazy result without qualification only reinforces that view, and in my opinion does a disservice to mathematics.

I’ve actually discussed this result years ago on this blog, talking about the Riemann zeta function and how -1/12 isn’t really equal to the infinite sum given. But even that discussion is probably a little too abstract, especially since I don’t discuss in any detail how the result -1/12 could be *physically* accurate. As it has been noted (and I’ve noted myself), the -1/12 result can be used with surprising accuracy in physics problems. But even there, things are much more subtle than they appear.

So let’s take a closer look** at the “proof” that an infinite increasing sum can equal -1/12. We will explain why the answer is not so simple as the video makes it appear, and why it is also not quite so simple to say that physics justifies the answer. We have a lot of ground to cover, so let’s go!

We should start by getting some definitions out of the way. An infinite sum of numbers is technically known as an *infinite series*. If we generally label these numbers as a_1, a_2, and so forth, the series *S* may be written as

S = a_1 + a_2 + a_3 + a_4 + … .

Neglecting the “proof” above for the moment, it is rather intuitive that not every collection of numbers will sum to a finite value, or even a single value. We say a series converges only if, on adding more and more terms to the sum, we find that the total gets closer and closer to a *single* value. Some examples are worth pointing out. The following series,

1 + 1/2 + 1/4 + 1/8 + 1/16 + … ,

converges to the value 2, for instance, while the series

1 + 2 + 3 + 4 + 5 + …

diverges. For this latter series, as we add more and more of the terms to the total it just gets bigger and bigger, approaching the infinite.

Another series of note is the following:

1 - 1 + 1 - 1 + 1 - 1 + … .

This series does not diverge***, but it does not converge, either. As we add more terms to the total, it alternates between 1 and 0, and never fixes itself on either. We will call such a series *oscillatory*.
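These three behaviors are easy to watch numerically through the partial sums. A quick sketch (the helper function and the example series are my own choices):

```python
def partial_sums(term, n):
    """Return the first n partial sums of term(k), k = 0, 1, 2, ..."""
    sums, total = [], 0.0
    for k in range(n):
        total += term(k)
        sums.append(total)
    return sums

# Convergent: 1 + 1/2 + 1/4 + ... creeps up on the single value 2.
conv = partial_sums(lambda k: 0.5 ** k, 50)

# Divergent: 1 + 2 + 3 + ... just keeps growing.
div = partial_sums(lambda k: k + 1, 50)

# Oscillatory: 1 - 1 + 1 - 1 + ... bounces between 1 and 0 forever.
osc = partial_sums(lambda k: (-1) ** k, 50)

print(conv[-1])  # very close to 2.0
print(div[-1])   # 1275.0, with no end in sight
print(osc[:6])   # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```

No matter how many terms of the oscillatory series we take, the running total never settles on a single value.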

This brings us to the “proof” that 1 + 2 + 3 + 4 + … = -1/12. We can summarize the reasoning as follows (following the abbreviated description at Good Math, Bad Math):

Let’s assign a value 1/2 to the oscillatory series S_1 = 1 - 1 + 1 - 1 + …, i.e.

S_1 = 1 - 1 + 1 - 1 + 1 - … = 1/2,

because *reasons*. Then, we consider the series S_2, given by

S_2 = 1 - 2 + 3 - 4 + 5 - … ,

and we add this series to itself, but with the terms of the second copy shifted one place to the right:

S_2 = 1 - 2 + 3 - 4 + 5 - …
S_2 = 0 + 1 - 2 + 3 - 4 + …

Adding term by term gives 2S_2 = 1 - 1 + 1 - 1 + 1 - … = S_1. This suggests that 2S_2 = 1/2, or that S_2 = 1/4.

Now we introduce another series, S_3, given as

S_3 = 1 + 2 + 3 + 4 + 5 + … ,

and we subtract S_2 from S_3:

S_3 - S_2 = (1 - 1) + (2 + 2) + (3 - 3) + (4 + 4) + … = 0 + 4 + 0 + 8 + … .

What we have left over is simply 4 times S_3, or we may (supposedly) write

S_3 - S_2 = 4 + 8 + 12 + 16 + … = 4S_3,

which in turn implies

S_3 - 1/4 = 4S_3, or S_3 = -1/12.

This seems deceptively simple, but there is an egregious oversimplification that really changes the entire meaning of this “proof,” and results in it not showing what it purports to show!

A little reflection shows that the entire argument hinges upon the statement that

S_1 = 1 - 1 + 1 - 1 + 1 - … = 1/2.

In justifying this, the video just argues that summing an odd number of terms gives one, and an even number of terms gives zero. “Do we stop at an odd or an even point… well we don’t know, so we take the average of the two.” So simply stated, and so completely misleading!

There is a motivation for labeling the sum of the series S_1 as 1/2, and the sum of S_3 as -1/12, but the reasoning is much more complicated. For a moment, let’s put ourselves in the mind of a mathematician and see how they would view the problem.

We have seen that series may be described as convergent, divergent, or oscillatory, based on what happens as we increase the number of summed terms without bound. A convergent series sums to a well-defined number, but divergent and oscillatory series have no defined sums — they “blow up” or oscillate endlessly, respectively.

For brilliant mathematicians like Leonhard Euler (1707-1783), Niels Henrik Abel (1802-1829) and Srinivasa Ramanujan (1887-1920), however, this answer was unsatisfying. One way to think about it: there are many different divergent series, each with different terms, and they should be distinguishable in some manner. For instance,

1 + 1 + 1 + 1 + 1 + …

and

1 + 2 + 3 + 4 + 5 + …

both diverge, but they do so in different ways, with different terms in their respective series. A similar argument applies to oscillating series. It is not unreasonable to assume that there must be *some* way to quantify the behavior of even a nonconvergent series.

There are, in fact, several different ways to go about it. I will focus on one that strikes the best balance between simplicity and practicality, and which employs the geometric series, one of the few series whose sum can be found by a simple argument.

The geometric series is the series of the form:

G(x) = 1 + x + x^2 + x^3 + x^4 + … ,

where *x* is a variable whose allowable values are to be determined. What is the sum of this series, if it exists? Let us look at the finite partial sum of the series, defined as

G_N(x) = 1 + x + x^2 + … + x^N .

For a finite *N*, we note the following relations:

G_(N+1)(x) = G_N(x) + x^(N+1) ,

G_(N+1)(x) = 1 + x G_N(x) .

These two expressions must be equal to one another, so we can solve for G_N(x) between them to get

G_N(x) = (1 - x^(N+1))/(1 - x) .
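The closed form is easy to sanity-check numerically; a minimal sketch (function names are mine):

```python
# Check the closed form G_N(x) = (1 - x**(N+1)) / (1 - x)
# against the direct term-by-term partial sum.

def geometric_partial_sum(x, N):
    """Direct sum of 1 + x + x**2 + ... + x**N."""
    return sum(x ** n for n in range(N + 1))

def closed_form(x, N):
    return (1 - x ** (N + 1)) / (1 - x)

for x in [0.5, -0.3, 2.0]:
    for N in [1, 5, 20]:
        assert abs(geometric_partial_sum(x, N) - closed_form(x, N)) < 1e-9

print("closed form agrees with the direct sum")
```

Note that the closed form for the *partial* sum holds for any x (other than x = 1); the restriction on x only enters when we take N to infinity.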

Here’s the great thing: if we let N → ∞, then we get the sum of the infinite geometric series, in a very simple form! Because of the x^(N+1) term in the numerator, we quickly see that this will only approach a finite value if

|x| < 1 .

If x = 1, the denominator of the sum “blows up”, and if x = -1, the numerator oscillates as a function of N. If |x| > 1, then x^(N+1) just gets bigger and bigger as N increases. We may therefore say the following:

The geometric series converges for values of |x| < 1, and diverges for |x| > 1. For |x| = 1, the series is either divergent or oscillatory.

If we allow x to be a complex number, i.e. x = a + ib, with i = sqrt(-1), then we can say that the geometric series converges strictly inside a circle of radius 1 centered on the origin, and it is nonconvergent on or outside that circle.

Why, you are probably asking at this point, does this matter? Let’s look at the behavior of the geometric series for x = -1:

G(-1) = 1 - 1 + 1 - 1 + 1 - … .

If we could sum the geometric series for x = -1, we would see that it is exactly the series S_1 talked about earlier. However, *our derivation of the geometric series sum demonstrates that this series has no sum*!

This is where, as mathematicians, we decide to be tricky. Though G(-1) has no sum, the geometric series has a sum for values of x that are arbitrarily close to x = -1 that also lie within the circle. We therefore decide to *assign* a value to the series S_1, by saying, in essence:

The sum of the oscillating series S_1 is defined to be the sum of the geometric series that is “most like” G(-1) but that still converges.

What is that geometric series that is “most like” G(-1)? Well, we start inside the circle and move as close as we can to the point x = -1 without hitting it. As x gets closer and closer to -1, the sum of the geometric series gets closer and closer to

1/(1 - (-1)) = 1/2 ,

which is the naive result claimed to be true in the video. This process is illustrated pictorially below.
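Numerically, the damped series and the closed form 1/(1 - x) track each other as x creeps toward -1 from inside the circle; a quick sketch (the particular value of x and the cutoff are arbitrary choices of mine):

```python
# Approach x = -1 from just inside the circle of convergence.
# The geometric series still converges here, and its sum is near 1/2.
x = -0.999
damped = sum(x ** n for n in range(20001))  # truncated, but terms have decayed
closed = 1 / (1 - x)

print(damped, closed)  # both close to 0.50025, heading toward 0.5
```

The closer x gets to -1, the closer the sum gets to 1/2, but at x = -1 itself the series simply has no sum.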

Considering we are now 1600 words into this blog post, hopefully it is now clear that the argument is not as trivial as the video implies. In fact, to prove this result, *we are not even summing the divergent series anymore: we are summing a (convergent) geometric series that is mathematically close to it.*

The method just described is what is called Abel’s method, and the summation derived is called Abel’s sum. The crux of Abel’s method: we take our original divergent series, modify it so that it is as weakly converging as possible (“infinitely” weak, in a loose sense), and then assign the sum of this weakly converging series to our diverging series. In formulas, let us imagine that we have a diverging series given by

S = a_0 + a_1 + a_2 + a_3 + … .

We create a new series S(x), of the form

S(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + … .

If |x| < 1, we have added a “damping” to the terms of the series, as each power of x is smaller than the last. In the limit x → 1, we make this damping infinitely weak but, under the right conditions, the series may still converge.
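In code, Abel's recipe is mechanical: damp the n-th term by x^n, sum the damped series, and push x toward 1. A sketch, with the function name and cutoffs being my own choices:

```python
def abel_sum_at(term, x, n_max=100000):
    """Sum term(n) * x**n for n = 0 .. n_max - 1; converges for |x| < 1."""
    total, power = 0.0, 1.0
    for n in range(n_max):
        total += term(n) * power
        power *= x
    return total

# Damped version of 1 - 1 + 1 - 1 + ... : with a_n = (-1)**n, the damped
# series sums to 1/(1 + x), which approaches 1/2 as x -> 1.
for x in [0.9, 0.99, 0.999]:
    print(x, abel_sum_at(lambda n: (-1) ** n, x))  # approaches 0.5
```

Note that in this framing the sign pattern lives in the terms a_n, and the limit is taken as x → 1 from below; it is the same construction as approaching x = -1 in the bare geometric series.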

This idea probably seems very abstract at this point. Let’s try and make an analogy to see what we’re actually doing! Imagine that the circle of convergence is actually a very high, very steep plateau, and that the dropoff begins at |x| = 1. Being the dreamers that we are, we would like to know what it is like to fly without using an airplane, but if we step off the plateau (enter the region |x| > 1) we will fall to our death, because flying is impossible. What can we do? We instead inch ourselves as close as we can to the edge of the plateau without going off; when we reach that boundary, we define it as being “as close to flying as possible.”

Analogously, we can’t sum the geometric series for x = -1. We can, however, get as close as possible to that point and use the sum of the “closest” series as the defined sum.

Abel’s method works just as well for our series S_2, which was defined as

S_2 = 1 - 2 + 3 - 4 + 5 - … .

The corresponding Abel series is

S_2(x) = 1 - 2x + 3x^2 - 4x^3 + … ,

which can again be shown to be convergent within the circle |x| < 1; its sum within this circle can be shown to be

S_2(x) = 1/(1 + x)^2 .

For x = 1, we arrive at the series S_2. If we again get as close as we can to x = 1 without reaching it, we find we can assign a sum to the diverging series of the form

S_2 = 1/(1 + 1)^2 = 1/4 ,

which is in agreement with the “proof” result. Again, though, it is important to note that we are not summing the original series: we are summing an infinitely close but distinct series that has a finite sum.
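This limit can be watched numerically; a sketch, with the cutoff and the sample values of x chosen arbitrarily:

```python
# Damped version of 1 - 2 + 3 - 4 + ... ;
# the closed form inside the circle is 1/(1 + x)**2.
def damped_s2(x, n_max=200000):
    total, power = 0.0, 1.0
    for n in range(n_max):
        total += (-1) ** n * (n + 1) * power
        power *= x
    return total

for x in [0.9, 0.99, 0.999]:
    print(x, damped_s2(x), 1 / (1 + x) ** 2)  # both approach 0.25
```

Both columns march toward 1/4 as x approaches 1, even though the undamped series itself never settles down.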

It is also possible to determine an Abel-like sum for the series S_3 = 1 + 2 + 3 + 4 + …, and show that that sum is -1/12, but it is actually more complicated: the simple Abel argument we have used here is insufficient to “fix” the series S_3, and more general methods must be used. The conclusion is the same, though: we are not summing the series in the usual sense of the term. If you were to truly sum the series 1 + 2 + 3 + 4 + …, all the way out to infinity, you would not get -1/12.
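For emphasis, the ordinary partial sums can be checked directly; a quick sketch:

```python
# The ordinary partial sums of 1 + 2 + 3 + ... are N(N + 1)/2,
# marching off to infinity -- nowhere near -1/12.
for N in [10, 100, 1000, 10000]:
    print(N, sum(range(1, N + 1)))  # 55, 5050, 500500, 50005000
```
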

Two questions remain. First: why, then, does the “proof” given in the video give the right answer? My suspicion is that it works because the proof is implicitly summing the Abel-like series, without having the explicit powers of x present! It is a calculation involving convergent series disguised as one involving divergent series.
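One can make this suspicion concrete for the shift-and-add step (this is my own illustration, not an argument from the video): in the damped picture, "shift the series one place and add" becomes "multiply by x and add," and the video's step 2 S_2 = S_1 becomes the exact identity (1 + x) S_2(x) = S_1(x):

```python
def s1(x):
    return 1 / (1 + x)        # damped form of 1 - 1 + 1 - 1 + ...

def s2(x):
    return 1 / (1 + x) ** 2   # damped form of 1 - 2 + 3 - 4 + ...

# "Shift by one place" multiplies the damped series by x, so
# shift-and-add is s2(x) + x * s2(x) = (1 + x) / (1 + x)**2 = s1(x).
for x in [0.9, 0.99, 0.999]:
    assert abs(s2(x) + x * s2(x) - s1(x)) < 1e-12

print("shift-and-add is exact for the damped series")
```

As x → 1 this identity becomes 2 S_2 = S_1 = 1/2, i.e. S_2 = 1/4: the video's manipulation is legitimate for the convergent damped series, and only looks like magic when the powers of x are hidden.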

The second question: the series sum

1 + 2 + 3 + 4 + 5 + … = -1/12

*does* in fact work in theoretical calculations, and does produce results consistent with experiment: doesn’t that mean that the sum is in fact correct? Here I make a point similar to one that Blake makes at Science After Sunclipse:

Suppose you have done some sort of complicated physics calculation and the result comes out to directly involve a divergent series. A good example of this is the Casimir effect, in which quantum fluctuations of the vacuum provide an attractive force between a pair of closely-spaced metal plates.

In fact, a calculation of this force directly depends on the series 1 + 2 + 3 + 4 + …! One then has two choices: assume that the calculation is completely wrong, as the force is demonstrably not infinite, or assume that there is some weak “damping” effect of unknown nature that brings the system into line. Including this “damping” is analogous to constructing an Abel-type series: we introduce an extremely weak decay into our series that makes it converge.

The wonderful thing is that the nature of this damping does not strongly affect the result! Any appropriate weak damping will push the Casimir series to the finite sum -1/12, just like the construction of the Abel sum of the series S_1 pushed the sum to approximately 1/2. The use of a finite value as the sum of a divergent series is not a supernatural mathematical phenomenon, but an acknowledgement that other unknown physics is at work.
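One can watch this kind of regularization produce -1/12 numerically. The sketch below uses an exponential damping n·e^(-εn), one standard choice and my own illustration rather than the calculation from any particular Casimir paper: the damped sum is finite, and after subtracting the piece that blows up as ε → 0 (which is 1/ε² for this damping), the finite remainder tends to -1/12:

```python
import math

def damped_integer_sum(eps, n_max=200000):
    """Sum of n * exp(-eps * n): finite for any eps > 0."""
    return sum(n * math.exp(-eps * n) for n in range(1, n_max))

# Subtract the divergent 1/eps**2 piece; the remainder heads to -1/12.
for eps in [0.1, 0.01]:
    print(eps, damped_integer_sum(eps) - 1 / eps ** 2)  # -> -0.0833... = -1/12
```

The divergent 1/ε² piece carries the "the sum is really infinite" information; the -1/12 is the damping-independent finite part that survives in the physics.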

We’ve done a lot here, so let me try and summarize. First: the “series” 1 + 2 + 3 + 4 + … does not have any finite sum in the normal sense of the word “sum.” Second: there are ways, however, to assign a finite “sum” to such divergent series in a mathematically consistent way, though we stress again that this is not a “sum” in the usual sense of the word. Third: The generalized sum of a divergent series has use in physics problems, as a way to incorporate unknown damping factors into a problem that seemingly has a divergent solution. Fourth: the original video does not note any of these points.

So why is this important? Part of what I’ve tried to show on this blog is that mathematics and physics can be extremely non-intuitive, even bizarre, but that they have their own rules and logic that make perfect sense once you get familiar with them. The original video, in my opinion, acts more like a magic trick than an explanation: it shows a peculiar, non-intuitive result and tries to pass it off as absolute truth without qualification. Making science and math look like incomprehensible magic does not do any favors for the scientists who study it nor for the public who would like to understand it.

____________________________________________

* Phil has since added a post explaining that there is more to the math than the video indicates.

** Other good posts discussing issues with the “proof” are one at Good Math, Bad Math and one at Science After Sunclipse.

*** I should note that mathematicians usually say that any series that does not “converge” must “diverge” by definition, including both series that “blow up” and “oscillate.” I use a looser terminology where “diverge” specifically means “blows up.”

____________________________________________

List of all the critiques I’ve found so far, for those who would like to get other perspectives:

Thanks for writing this (so I don’t have to)! This is a very clear explanation of Abel summation and analytic continuation. Another way I was thinking of it is to say that there is no way that 1 + 2 + 3 + 4 + … = -1/12 with the normal, school-math definition of “=”. So, you have to ask what it means to sum a series whose true answer is undefined in terms of real numbers.

Glad I could help! :)

Agreed. What struck me was how the presenter in the video notes that S1 oscillates, but then blithely proceeds to say that we should take the “average” value, 1/2, because “we need to attach a number” to S1. My immediate response was: why do we need to? If it doesn’t have a value, it doesn’t have a value. What we need to do is get at the truth, not force an expression to have a value if it doesn’t have one. Of course there may be some alternative way of interpreting that string of symbols such that we can assign it a value, but if you switch to a different interpretation, you’re changing the subject. This is a kind of fallacy of equivocation, switching the meaning of an expression during the course of an argument.

Thanks for this! Adds some illumination to the mystical darkness. I am still somewhat befuddled by the counterintuitive result of a positive integer series summing to a negative number. Can this be explained by saying that, in order to “damp” the Casimir series, you need a complex damping term (Riemann zeta function?) whose powers will be negative, and this will somehow “drag” the sum down to the negative axis?

That is probably a reasonable way to say it. I need to investigate the strategy of “damping” for the 1+2+3+… series in more detail myself, and I have a nice book on the Riemann Zeta function to dive into in the near future!

I alluded to the fact that one needs more complicated “regularization” techniques for the 1+2+3+… series in the post, and one can see why by using the second series given in the post. The series 1-2x+3x^2-4x^3+… = 1/(1+x)^2 will approach the series 1+2+3+4+… if one takes the limit x -> -1; however, the closed-form sum on the right blows up in that limit, too! This suggests that one must use a different strategy than the simple Abel sum in order to define a “sum” for the divergent series.

er, by “negative axis” i mean “negative part of the real axis”, obviously

probably worth noting that Terry Tao has also addressed this in the past:

http://tinyurl.com/mnp635w

Yes, thanks! I think I linked to that discussion in my older post related to analytic continuation and the Riemann Zeta.

Reblogged this on Comentários, Críticas, Dicas etc..

i’d be curious to know about the mapping from sequences to R extended to include these divergent series.

it’s evidently a conservative mapping – series that converge in the usual sense continue to converge (and to the same number).

for the newly-minted convergent series tho, questions like the following could be asked: does the mapping MINIMIZE any conveniently-stated property? eg is there a metric on the space according to which the new sums in R are the “shortest” distance from the series? if so, such extensions could be regarded as completions of the series-space.

orthogonal projection is the picture i have in mind, with an eye to making a bigger version of the l^p spaces.

but yah obviously it’s disingenuous in the extreme to call such constructions a “sum” without qualifying the nonstandard nature of it all.

All good questions, and all worth looking into! I have no good answers at the moment; I need to read into these series in more detail myself.

I like your post in general but I think you are being a bit too hard on the video itself. For example, you write that “Making science and math look like incomprehensible magic does not do any favors for the scientists who study it nor for the public who would like to understand it.”

First of all, the audience for the Numberphile channel are non-experts (777,165 and counting) and the video does a great job getting such people interested in the topic. AND, in addition to getting people interested, it provides much more detail for “the public who would like to understand it”:

At 2:28 it gives a link to another video (http://youtu.be/PCu_BNNI5x4), which takes you through a “proof” and at 8:34 states that this series does not have a limit and that 1/2 is a pseudo-limit, which is basically the point of your post as I read it.

Also, at the end of the video there is a link to another video with a more rigorous proof: http://youtu.be/E-d9mgo8FGk

I should say that I am not affiliated with Numberphile in any way other than being a fan as I am of your blog. I just think you are being unfair in this case.

I really don’t think I’m being unfair at all; the fact that the site is for non-experts is exactly why one has to be careful about how things are presented. I am quite aware that there are other videos that they’ve linked to with more nuance, but none of that nuance is presented in the original video, and how many people actually went to learn the subtleties? The Boing Boing post that I linked to did not bother to explain the subtleties, and even Phil Plait (trained in physics) didn’t feel compelled by the original video to follow up. If 900,000 people watch a video that is misleading, and only 100,000 watch the follow-up (the stats as I write this), then misinformation is out there.

And, as this post points out, it *is* misinformation. The sum 1+2+3+… = -1/12 is presented as simple, unvarnished, undeniable truth. I don’t think it would have taken too much effort to put a little disclaimer in the video to say, “Hey, though it is usually correct to imagine that this sum is meaningless, in a limited sense it can take on an incredibly strange value.” Instead, they emphasized its ‘universal’ truth: “Look: it’s in a physics book!”

I suppose one can argue the pedagogy, but in my opinion it doesn’t matter if you’re appealing to a very large group of people, if you’re giving them a distinctly wrong answer.

(P.S. Though I appreciate your thoughts on the matter!)

I am a non-mathematician, so I really appreciated this down to earth explanation. The thing I am missing from all these explanations is why, when you sum two infinite series, do you shift the second one?

In general, you don’t have to! If I were allowed to sum two infinite series, I would normally sum them term-by-term in what would be the obvious way; the only reason the shift is done in the proof is to show that the sum equals 1-1+1-1+…

Notice that I said “if I were allowed”? The summation of infinite series is actually a rather tricky business, and even pairs of convergent series only possess a meaningful sum under certain circumstances. There is a class of series called conditionally convergent series that only converge “weakly,” in a sense, to a limit; I’ve talked about these in an older blog post as well. In conditionally convergent series, changing the order in which the series is summed changes its value! Also, for conditionally convergent series, I believe you can add two converging ones together and get a diverging one! (I’ll have to look for an example, to be sure I’m remembering correctly.)

When working with diverging series, the matter is even worse: one isn’t in general allowed to add them together in such a simple way, even term-by-term. As I said in the post, the only reason I suspect it works is that the additions are really implicitly the sums of Abel-like convergent series.
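For the record, rearrangement really does change the value of a conditionally convergent series; a quick numerical sketch with the alternating harmonic series (my example, not one from the thread above):

```python
import math

# The alternating harmonic series 1 - 1/2 + 1/3 - ... converges only
# conditionally; in its usual order it sums to ln(2).
N = 200000
usual = sum((-1) ** (k + 1) / k for k in range(1, N + 1))

# Reorder the same terms as (two positives, one negative), repeating:
# 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...
# This ordering is known to converge to (3/2) * ln(2) instead.
rearranged, p, q = 0.0, 1, 2  # p: next odd denominator, q: next even one
for _ in range(N // 3):
    rearranged += 1 / p + 1 / (p + 2) - 1 / q
    p += 4
    q += 2

print(usual, math.log(2))             # ~0.6931
print(rearranged, 1.5 * math.log(2))  # ~1.0397 -- the order changed the sum
```

Same terms, different order, different limit: exactly the kind of manipulation that is harmless for finite sums but treacherous for weakly or non-convergent series.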

I got stuck on the first one too :)

Assuming that 1-1+1-1 add infinitum will get to a half may make sense from some Eye of a God, but putting it that statement in the addition and negations comes out the same way, by oscillating between nothing and a half, depending on where you stop. It’s not a good proof of anything except from muddy thinking, in where you assume it to be able to show a ‘average’ of whatever numbers you put into the ‘equation’? To make something mathematically mean something I would expect all other meanings to have been proven wrong.

And that why I don’t like the proof for a one to one correspondence to prove bigger and lesser infinities either. Because you won’t be able to close the door where, sitting down to count ‘one to one’ comparing a bigger infinity to a lesser, the correspondence converge into a small infinity relative a big. One to one is a very intelligent definition, making all infinities equal from this simple assumption. The other reasoning reminds me of the one you showed here.

Not quite sure what you mean here! “One to one” counting in infinite sets does in fact lead to the discerning of different size infinities, and contradicts the idea that they are all the same!

You really summed that up nicely.

Thanks!

I call BS.

The “hocus-pocus” is in the first math statement that the answer is 1/2. The “average” is not the same as the sum. And you can’t just shift over some numbers when talking about an infinite series and say that it’s the same number.

You could do the same, starting with a different sequence, get a different average, and get a different value.

Using his logic:

1-1+1-1+1… Sum = 1/2

but…

-1+1-1+1-1… Sum = -1/2

Then shift the 2nd and add the digits..

1-1+1-1… sum = 1/2

0-1+1-1… sum = -1/2

—————

1 now equals the added sum of 0

Can I have a PhD now.

Yes, that hits upon the problem with manipulations of infinite series which are divergent or “conditionally” convergent, which I touched upon in a comment below! In general, infinite rearrangements of terms of a divergent series, or even term-by-term summation of pairs of such series, are a no-no: the result is not what it is intended to be! As I alluded to in the post, I believe the only reason the result comes out to something that is formally “correct” (in the limited sense in which a finite sum can be assigned to an infinite series) is because they are implicitly summing convergent Abel-type series. In an Abel series, which assigns a power of x to each term in the series, such simple “shifts” don’t work!

Dummy that I am, I thought the fallacy was that we know infinity is even, not odd, because infinity can be divided by two with no remainder, so we “know” to stop at an even number of steps, and S = 0. :)

The concepts of oscillating, divergent, and convergent sums make a lot more sense though, and if I am understanding you the error is actually that we can’t just “average” an oscillating sum… Thank you for taking the time to explain this! It reminds me a bit of a number of paradoxes in philosophy where the apparent paradox is a category mistake (for example does the bibliography of all books list itself, and similar paradoxes discussed at length by Bertrand Russell). Series and sets must be the mathematical objects most prone to abuse…

Thanks for commenting! I would say that the fallacy isn’t even a mathematical one, but one of interpretation. Sure, you can *define* the sum of an oscillating series as an average, but that sum is *not* the same thing as a sum in a traditional sense. The original video sort of magically brushes this distinction under the rug, making it seem like it is obvious, or natural, to do such a thing, even though it changes the type of math one is doing in a complete and fundamental way.

And, yes, series and sets are totally ripe topics for abuse! :)

I don’t understand this element of the text:-

“and we subtract S_2 from S_3: … What we have left over is simply 4 times S_3,”

If you group the terms into pairs, S3 gives you:-

1+2=3, 3+4=7, …

The subtraction gives you:-

0+4=4, 0+8=8, …

I don’t understand why 3vs4 +7vs8 + … results in “4 times S3″?

Thanks, Martin

Well, you don’t need to group anything into pairs. You can just remove all the zeroes from the S3 – S2 series, because they’re zero; they don’t add anything to the sum. And then you’re left with just 4 + 8 + 12 + … , which is four times 1 + 2 + 3 + … .

Saying that the sum of the oscillating series 1-1+1-1+… is 1/2 probably makes sense to a physicist doing quantum stuff, because I gather that when calculating quantum mechanical behavior you have to account for every distinct possibility and its probability. Sure, the series never converges, but if it did, the expected value would be 1/2. Obviously, the averaging method has to be consistent with whatever other math you are using to add things up, but if you think in terms of all paths, quantum mechanics lets you get a single answer even when it is mathematically impossible.

This doesn’t bode well for quantum computing. If you are using a quantum computer to add up your paychecks, do you really want 1+2+3+4+5+… to equal -1/12?

Here is my proof that shows the theory is incorrect!!! Watch it!

Your presentation is clear, simple and to the (actual) point. But! The blogname Sculls in the Stars jars. If anything it ought to be Scul closer to Earth, considering that you are putting starstruck videos in perspective, dragging them much closer to the mathematical-cum-physical ground. That is a service to the public well worth appreciating, while Sculls in the Stars suggests too much American hype. Those few (so far) of your bloggings I have read (just now) speak well enough for themselves.