16 September 2007

The principle of explosion

Pop quiz:

1 - 1 + 1 - 1 + 1 - 1 + 1 - 1 + ... = ?

It's obviously 0, right? Or maybe it's 1. In the 17th and 18th centuries, everyone apparently thought the correct answer was ½ (and it wasn't because they were stupid back then: this includes people like Leibniz and Euler).
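All three answers come with respectable-looking arguments. Group the terms one way and the sum is 0; shift the grouping by a single term and it's 1:

    \[ (1 - 1) + (1 - 1) + (1 - 1) + \cdots = 0 + 0 + 0 + \cdots = 0 \]
    \[ 1 + (-1 + 1) + (-1 + 1) + \cdots = 1 + 0 + 0 + \cdots = 1 \]

And the 18th-century argument for ½ went roughly like this: set x = 1 in the series

    \[ \frac{1}{1 + x} = 1 - x + x^2 - x^3 + \cdots \]

and out pops ½ = 1 - 1 + 1 - 1 + ...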

Eh, so math is inconsistent. So what?

It is important to point out that it is not enough to consider at the same time two conflicting statements in order to develop in pupils' minds the awareness of an inconsistency and the necessity of second thoughts (Schoenfeld, 1985): the perception of some mutually conflicting elements does not always imply the perception of the situation as a problematic one (Tirosh, 1990).

Infinite series: from history to mathematics education (PDF), Giorgio T. Bagni.

Huh.

Now, maybe this is because math doesn't make a whole lot of sense to most kids to begin with. But I think the main cause is that kids, like the rest of us, are used to things being inconsistent sometimes. And they live with it. I mean, what are you going to do?

Well, let me tell you something. In math, you can't live with a contradiction.

The principle of explosion is built into the fundamental rules of logic, rules that both mathematicians and ordinary people use to reason with. Ex falso sequitur quodlibet: from a contradiction, anything follows. Or as an old friend of mine used to say, after you swallow the first pill, the rest go down real easy.
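If you haven't seen it, the derivation is disturbingly short. Suppose you've somehow proved both P and not-P; then for any statement Q whatsoever:

    \begin{align*}
    1. &\quad P        && \text{(given)} \\
    2. &\quad \lnot P  && \text{(given)} \\
    3. &\quad P \lor Q && \text{(from 1: or-introduction)} \\
    4. &\quad Q        && \text{(from 2 and 3: disjunctive syllogism)}
    \end{align*}

Nothing about Q was used anywhere, so Q can be “all numbers are equal” or “the moon is made of cheese”; the proof goes through regardless.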

In math, if you accept a single contradiction, the entire system comes crashing down around you.

(Now there's such a thing as paraconsistent logic, in which inconsistencies are not so destructive. But it's quite different from ordinary logic, and not many people are familiar with it.)

In the above case, mathematicians eventually worked out a formal notion of “convergence” and found that the sum 1 - 1 + 1 - 1 + ... does not converge. That is, there's no answer, just as there's no answer for the sum 1 + 2 + 3 + 4 + ..., and for the same reason: you can keep adding terms for as long as you like, and the running totals are never going to settle on any specific value.
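Write out those running totals (the partial sums) and the problem is plain to see:

    \[ s_1 = 1, \quad s_2 = 0, \quad s_3 = 1, \quad s_4 = 0, \quad \ldots \]

A sequence that flips between 1 and 0 forever isn't homing in on anything, and the partial sums 1, 3, 6, 10, ... of the second series just grow without bound.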

Is this a cop-out? It's a hard fact of life that some mathematical problems just don't have answers. The ancients considered 2 - 7 to be undefined, because the answer would be less than nothing, which was clearly nonsense. Today we have negative numbers, but other things, like division by zero, are still undefined. Given all that, maybe it's not so surprising if expressions that end in the innocent-looking “+ ...”, as if to say “oh don't mind me, I'm just a little infinite series, tra la”, sometimes fall into this category.

Floating

It's not until you study math a little bit that you realize just how awful floating-point arithmetic is. I mean, so it's a little inexact. So what? But the following are examples of statements that are absolutely true for integers, rationals, and reals, but not for floating-point numbers (even setting aside floating-point infinities and “not-a-number”s); each one is demonstrated in the sketch after the list:

a - (a - b) = b

If b > 0, then a + b > a.

The associative law: (a + b) + c = a + (b + c)

The distributive law: a × (b + c) = (a × b) + (a × c)

Cancellation: If a × b = a × c and a ≠ 0, then b = c.
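You can watch every one of these laws fail with a few lines of code. Here's a minimal sketch in Python, whose floats are IEEE 754 double-precision numbers; the specific values assume the usual round-to-nearest mode:

    a, b = 1e16, 1.0
    print(a - (a - b))          # 0.0, not 1.0: a - b rounds back up to a
    print(a + b > a)            # False: a + b rounds back down to a

    x, y, z = 0.1, 0.2, 0.3
    print((x + y) + z == x + (y + z))    # False: 0.6000000000000001 vs 0.6

    a, b, c = 10.0, 0.1, 0.2
    print(a * (b + c) == a * b + a * c)  # False: 3.0000000000000004 vs 3.0

    a, b, c = 5e-324, 1.0, 1.2  # 5e-324 is the smallest positive double
    print(a * b == a * c)       # True, even though b != c: cancellation fails

None of the inputs are exotic, either; these are everyday values doing everyday arithmetic.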

In short, if you've ever done any reasoning about floating-point numbers, you were probably wrong.

It's easy to come up with formulas that floating-point arithmetic gets seriously wrong. The classic trap is subtracting two nearly equal numbers: the leading digits cancel each other out, and the result, very close to zero, is made up mostly of leftover rounding error.
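Here's a standard example of that kind of cancellation, sketched in Python (the math module has both exp and expm1; expm1 computes e^x - 1 directly, without the dangerous subtraction):

    import math

    x = 1e-15
    naive = math.exp(x) - 1.0   # exp(x) is 1 plus a sliver; subtracting 1 cancels almost everything
    better = math.expm1(x)      # computes exp(x) - 1 without the subtraction
    print(naive)                # about 1.11e-15 -- roughly 11% too big
    print(better)               # about 1.00e-15 -- essentially exact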

The inexactness is impossible to track in a useful way, so you never know just how bad the result is. Typically you just assume it's almost right until you find out it's wrong. The only surprising thing is that it works so often.