We have seen two different ways of thinking about the size of a subset of the natural numbers. Cardinality is the most natural concept for finite sets - how big is a set of things? Why, the number of things in it - and it underlies the definition of density, which allows us to deal with infinite subsets by thinking of them as occupying a certain proportion of the natural numbers. Both ideas are more or less intuitive. So when I tell you that our next notion of the size of a set of numbers involves standing every element on its head and adding them all up, a little bit of skepticism wouldn't surprise me.

Why would we try this? Well, first, by 'standing every element on its head,' what I mean is taking its reciprocal - the process of "flipping" a fraction (so 2 goes to 1/2, 47 goes to 1/47, 3/2 goes to 2/3, etc.). If we do this to every element of a subset of the natural numbers, what we get are a bunch of numbers not exceeding 1. We can then add these reciprocals up, and whether the result is finite (it is not immediately obvious that it could ever be finite - you're adding infinitely many things!) or infinite (it is not immediately obvious that it could ever be infinite - each summand is so small!) says something about the set we started with.

OK. There is a lot here to make precise, but let's start with the obvious question: What the hell does it even mean to add up infinitely many things? Does it even make sense to add infinitely many things together? Well, not literally, given that it isn't really possible to "picture" an infinite set at all. So, like density, the value of an infinite sum is defined in terms of a limit: We see what happens when we add up more and more elements of the set. If, by adding sufficiently many of the numbers together (i.e. the first N of them, for some large value of N), we can make the sum as close as we want to a particular quantity, then we take the value of the infinite sum to be that quantity.

An example is in order, one that illustrates that, yes, in this sense, we can add up infinitely many things and obtain something finite. Consider the set of (nonzero) powers of two: {2, 4, 8, 16, 32, 64, ...}. What happens if we sum the reciprocals of this set, i.e. 1/2 + 1/4 + 1/8 + 1/16 + ...? If we view this addition as "filling up" an interval of unit length, notice that at each "step," we fill up half of what remains: We start with 1/2, then we add 1/4 (i.e. half of the remaining half), then 1/8 (i.e. half of the remaining quarter), and so on. We can get as close to 1 as we want - at each step, we have 1 minus the reciprocal of some power of two, and the reciprocals of powers of two become arbitrarily small - but no partial sum will ever equal 1; therefore, we take the value of the infinite sum to be 1.
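If you'd like to watch the partial sums creep up on 1, here's a quick Python sketch using exact fractions (the cutoff of ten terms is arbitrary - any cutoff tells the same story):

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ...: each one equals 1 minus the
# reciprocal of a power of two, so they approach 1 but never reach it.
from fractions import Fraction

total = Fraction(0)
for n in range(1, 11):
    total += Fraction(1, 2**n)
    print(n, total)  # 1/2, then 3/4, then 7/8, ...
```

After ten steps the running total is 1023/1024 - as close to 1 as you like, given enough steps, but never actually 1.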

So not only can the sum of infinitely many things be defined, it can actually be quite small. In fact, it may be hard to see why this process might ever yield a divergent sum - one that tends to infinity as the partial sums include more and more summands. Well, why don't we look at the biggest thing possible - the sum of the reciprocals of all the natural numbers? What can we say about 1 + 1/2 + 1/3 + 1/4 + 1/5 + ...?

Notice, for starters, that

1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 + 1/9 + 1/10 + 1/11 + 1/12 + ...

is definitely bigger than

1 + 1/2 + 1/4 + 1/4 + 1/8 + 1/8 + 1/8 + 1/8 + 1/16 + 1/16 + 1/16 + 1/16 + ...

since each summand in the first sum is at least as big as the summand in the same position in the second sum (1/3 versus 1/4, say, or 1/5 versus 1/8). But! The second sum is equal to


1 + 1/2 + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + ... = 1 + 1/2 + 1/2 + 1/2 + ...

We rigged the second sum so that, after the initial 1, it's just the same number added over and over again. But summing the same positive number infinitely many times can't possibly be finite! And since the sum we care about is at least as large as this sum, which tends to infinity, it, too, must tend to infinity.
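The grouping trick can be checked numerically: it says that the first 2^k terms of the harmonic sum already exceed 1 + k/2, which passes any bound you name as k grows. A sketch with exact fractions (small k only - Fraction arithmetic gets slow):

```python
from fractions import Fraction

def harmonic(n):
    """Exact partial sum 1 + 1/2 + ... + 1/n."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

# The grouped comparison sum gives the lower bound harmonic(2**k) >= 1 + k/2.
for k in range(1, 11):
    assert harmonic(2**k) >= 1 + Fraction(k, 2)
```

Since 1 + k/2 grows without bound, so do the partial sums - even though harmonic(1024) is still only about 7.5, which is part of why the divergence is so counterintuitive.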

So the natural numbers correspond to a sum which tends to infinity, and the powers of two correspond to a sum which tends to 1. This makes sense: The natural numbers occupy quite a large proportion of themselves, while the powers of two are basically nowhere to be found - the proportion of the numbers up to N that are powers of two tends to zero quite rapidly as N grows. In this way, the sum of the reciprocals of a set's elements says something about the density of the set: It is certainly believable (and true, but I don't feel like proving it) that any set whose corresponding sum is finite has density zero.
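To see just how quickly the powers of two thin out: there are only about log2(N) of them up to N, so their share of {1, ..., N} is roughly log2(N)/N. A quick sketch (the sample values of N are arbitrary):

```python
from math import floor, log2

# Nonzero powers of two in {1, ..., N} are 2, 4, ..., 2**floor(log2(N)):
# floor(log2(N)) of them, so the proportion floor(log2(N)) / N shrinks fast.
for N in (10, 1000, 10**6):
    count = floor(log2(N))
    print(N, count, count / N)
```

By N = 1,000,000 the proportion is already below one in fifty thousand.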


It would be nice if the converse of this statement were true, but often in mathematics, whatever follows "it would be nice if" is false: There are density zero subsets of the natural numbers whose corresponding sum diverges. You'll never guess what an example of th— oh. Yeah, I guess I have been ranting about the prime numbers kind of a lot.

The primes are a density zero subset of the natural numbers, and the sum of their reciprocals diverges. "Density zero subset" = there are not too many primes. "Sum of their reciprocals diverges" = there are not too few primes. Not only are these two statements true, you can use the second one to prove the first. There are not too many primes, because there are not too few primes. I think I said that exact thing in the last Mathspin, and now I wish I hadn't, because a lot of the dramatic effect has been lost, but still: That's insane!
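I can't resist one numerical sketch here: summing prime reciprocals found with a basic sieve (the cutoff of 100,000 is arbitrary). The sum diverges, but at a famously glacial pace - roughly like log log N - so don't expect big numbers:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(range(p * p, n + 1, p))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

# Sum of reciprocals of all primes below 100,000: still under 3,
# despite 9,592 primes contributing.
total = sum(1 / p for p in primes_up_to(100_000))
print(total)
```

You'd have to push the cutoff astronomically far to get this total past even 4 - and yet, past any finite bound it eventually goes.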

Maybe next time I'll try to prove either of those two facts about the primes, but doing so without any notational capabilities whatsoever is daunting. So, uh, does anybody have any requests? Any burning mathematical questions that have been eating you alive from the inside, that keep you up at night, that make you burst into tears for no reason, and your loved ones say "Derklejorp, what's wrong?" and you say, between sobs, "I just... I just can't figure out what's so goddamn special about this number e = 2.718...!" or some shit? Let me know in the comments.