## 31 March 2006

### Metamath and the art of mathematics

My brilliant coworker G.D. thinks Metamath, and the successor I envision (in the previous entry), are sort of pointless.

I think his main beef with the idea is this: people prove things by inventing crazy new concepts that won't get formalized for a long time. This inventiveness is the really interesting part of math. I think G.D.'s impression is that just about any proof you read in a mathematical journal today probably can't be put into a formal, machine-checkable form for decades to come.

He is probably right. So my hypothetical checked theorem database can never be up-to-the-minute. Then what value could it have?

• Exploration and education. Even if the database only covers formally grounded math, I think that could get us up to maybe 1950. Number theory is pretty well grounded, I think. Real analysis. Calculus. Euclidean geometry. Group theory. Think how huge it would be. Thousands of human-readable proofs, every step checked, and you can drill down into any step if you don't understand the reasoning. Basically every theorem an undergrad ever bumps into; every theorem I personally could ever hope to understand. It sounds exciting to me.

• Finding assumptions. Does the fundamental theorem of calculus depend on the axiom of choice? Metamath doesn't have that theorem yet, but when it does, then you'll know. Exposing previously hidden assumptions sounds good for math.

• Finding gaps. Check out the axiom of choice. Here's a discovery that rocked mathematics. Are there any more of these still lurking in standard math? It seems wildly unlikely... but then, it also seems unlikely that what we've got is that perfect.

• Mathematician productivity. The ultimate (200-year) goal is to evolve the system into a work environment for mathematicians. If you started every expedition by plunking your conjecture into a worksheet, every once in a while the program could immediately respond with the (known or easily found) proof. Also, Andrew Wiles's first proof of Fermat's last theorem was found to contain a serious error that it took him a year to fix. Of course he would have preferred to have discovered that prior to publication. The current proof is 200 pages; I doubt it's been scrutinized. These thoughts make this idea seem inevitable. It just sounds too useful as a dumb tool.

And: I think the time gap between a new invention in math and its formal grounding has been shrinking. Maybe formalism will catch up to discovery!

I think the essential question is this: do you think the Zermelo-Fraenkel axioms really are one of humankind's most profound achievements? Or were these guys just mincing grammar snobs, coming in after the Newtons and Galoises of the world, acting all important, dotting the i's and crossing the t's—only to get blown out of the water by Gödel before they even finished the job?

### Metamath

Q: What is Metamath?

A: Metamath is a tiny language that can express theorems in abstract mathematics, accompanied by proofs that can be verified by a computer program. This site has a collection of web pages generated from those proofs and lets you see mathematics developed formally from first principles with absolute rigor. Hopefully it will amuse you, amaze you, and possibly enlighten you in its own special way.

It has certainly amused and amazed me. (Note: If you don't consider the Zermelo-Fraenkel axioms “one of the most profound achievements of mankind” [here], Metamath might not be for you.)

Metamath is a database of machine-verifiable mathematical proofs. This is actually something of a long-term dream of mine, so I'm surprised and impressed.

Here are the frustrating bits:

• The database is largeish, but its reach is very limited. Metamath currently comprises 6,403 theorems, of which “2+2=4” is number 5,214.

• The proofs themselves are unenlightening. For example, Metamath has proved that there are infinitely many primes. But click the link. It's totally indecipherable.

What I'd do differently, to try and address those problems:

• Make the database more searchable, by giving pages readable English names, not numbers or junk like `caoprmo`.

• Allow anyone to edit.

• Encourage more English text mixed in with the formal statements.

• Allow pages with non-verifiable proofs, missing steps, syntax errors, etc. But if a proof is correct and verifiable, you'd get a nice string of green lights down one side of the page.

• Work hard on notation. It has to be human-editable and human-readable. (This strikes me as extremely difficult.)

• I wouldn't require contributors to prove every step formally. This requires a deep understanding of formal logic that most mathematicians frankly don't have. Instead, you'd just type all the steps in the proof, with the usual gaping logical chasms in between; and the system would use automatic theorem provers to check that each step follows from the preceding ones. This would help with both readability and reach.

• Now, theorem provers suck, so the system often won't be able to prove every step. No problem. Contributors could close a gap by (a) giving the system a hint (e.g. adding the words “...by [[Schphlebotsky's Theorem]]”), (b) inserting a clarifying intermediate step, or (c) making a separate page proving the step as a lemma. And they could work incrementally.

Metamath is already more and better than I would've thought possible. And there are extremely good reasons that Metamath exists and my idea is just a dream. But I think the dream has potential. Who knows—Metamath might evolve in this direction.

## 30 March 2006

### C# lambda expressions and constraint programming

Neat thing about C# lambda expressions: the recipient of a lambda gets to choose whether it wants a compiled function or just a parse tree.

```
var f = orders.Where(o => o.ShipCity == "London");
```

Here the lambda `o => o.ShipCity == "London"` compiles into either a function or an `Expression` object, depending on the parameter type of `orders.Where()`. As the caller, you don't care which.

So I got to thinking. Languages that implement constraint programming as a library feature, like Alice ML, seem to end up looking rather clunky:

```
fun money sp =
    let
        val v as #[S,E,N,D,M,O,R,Y] = fdtermVec (sp, 8, [0`#9])
    in
        distinct (sp, v, FD.BND);
        post (sp, S `<> `0, FD.BND);
        post (sp, M `<> `0, FD.BND);
        post (sp, `1000`*S `+ `100`*E `+ `10`*N `+ D `+
                  `1000`*M `+ `100`*O `+ `10`*R `+ E `=
                  `10000`*M `+ `1000`*O `+ `100`*N `+ `10`*E `+ Y, FD.BND);
        branch (sp, v, FD.B_SIZE_MIN, FD.B_MIN);
        {S,E,N,D,M,O,R,Y}
    end
```

It looks a lot nicer in a language like Oz, which has language support for logic variables and so forth.

```
proc {Money Root}
   S E N D M O R Y
in
   Root = sol(s:S e:E n:N d:D m:M o:O r:R y:Y)
   Root ::: 0#9
   {FD.distinct Root}
   S \=: 0
   M \=: 0
      1000*S  + 100*E  + 10*N + D
   +  1000*M  + 100*O  + 10*R + E
   =: 10000*M + 1000*O + 100*N + 10*E + Y
   {FD.distribute ff Root}
end
```

(Example from Mozart documentation.)

Now C# doesn't have that, but it seems like a constraint library similar to Alice's, designed in C#, could take advantage of expression trees to make the rules somewhat more readable:

```
sp.Post(S => S != 0);
sp.Post(M => M != 0);
sp.Post((S, E, N, D, M, O, R, Y) =>
       1000*S  + 100*E  + 10*N + D
     + 1000*M  + 100*O  + 10*R + E
    == 10000*M + 1000*O + 100*N + 10*E + Y);
```

Of course, lambdas only help with rules, and there's much more to the problem statement than rules. Still, I wonder if anyone is doing this.
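For the record, this particular puzzle is small enough to brute-force. Here's a naive Python sketch of what the solver is being asked to find; a real FD solver would prune the search with constraint propagation rather than trying every assignment:

```python
from itertools import permutations

def solve():
    # Try every assignment of 8 distinct digits to the letters.
    # permutations() guarantees distinctness for free.
    for S, E, N, D, M, O, R, Y in permutations(range(10), 8):
        if S != 0 and M != 0 and (
                  1000*S  + 100*E  + 10*N + D
                + 1000*M  + 100*O  + 10*R + E
               == 10000*M + 1000*O + 100*N + 10*E + Y):
            return dict(S=S, E=E, N=N, D=D, M=M, O=O, R=R, Y=Y)

print(solve())  # {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}
```

That's 10!/2! ≈ 1.8 million candidates, which takes a few seconds; propagation gets there almost instantly, which is the whole point of a constraint solver.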

### Procedural thinking and bugs

I get the feeling that procedural thinking leads to bugs.

As an example: suppose I have some software that runs on several servers. The flow of data through all the various components is complex. As long as it's all set up right, everything is great, but suppose I want to move one data-processing component from machine X to machine Y. Should I expect some data to get lost or not?

Now, a system designed by a mathematician would have certain properties (such as the one I'm looking for here) by design, orthogonal to all sorts of operations on the system. Because that's how mathematicians think. Cases like the one above, that maybe weren't specifically considered at design time, will nonetheless work as desired. Furthermore, the designer would be able to answer the question immediately. (“Sure, every operation is atomic, and all the state is tracked. It'll work, as long as you don't delete any files from X prematurely.”)

Of course, if moving components had been a use case to start with, a procedural thinker could certainly implement it correctly. But I think there are always unspecified use cases like this.

On the other hand, I find procedural code easier to read and modify than functional code. Perhaps software design should be more abstract, but code should be procedural.

## 28 March 2006

### Asynchronous APIs

I have a server program where there are a dozen or so different objects called service objects. At run time, each of these gets initialized and may have its own thread(s) and other resources. To shut down the server cleanly, I have to shut down these services:

```
for svc in activeServiceObjects:
    svc.shutdown()
exit(0)
```

(Actually my program is in C++, but I'll write the examples in Python; it works the same way.)

It would be faster if I could shut them down all at the same time:

```
for obj in activeServiceObjects:
    obj.startShutdown()

waitForAllToFinish(activeServiceObjects)
exit(0)
```

Unfortunately these objects have no `startShutdown()` method. There's just `shutdown()`. But I noticed that internally, quite a few of the `shutdown()` methods are actually doing something like this:

```
def shutdown(self):
    # (sketch -- the details vary from service to service)
    worker = threading.Thread(target=self._reallyShutDown)
    worker.start()
    worker.join()   # block until the background work finishes
```

The actual work to shut down the object happens in a separate thread. But the method blocks, because that's how `shutdown()` is defined to behave.

Let's stop here. I think code often evolves this way: an API gets designed; people realize the API can take a long time to run; someone decides to implement an asynchronous (that is, nonblocking) version of the API.

There is no standard way to signify “this API has both blocking and nonblocking versions”, so the nonblocking version shows up as a separate method, often several of them. And if you want to call the asynchronous version, you need to look it up in the docs and figure out how it works. Wouldn't it be nice if you could just specify when you call a method, “nonblocking call, please”, and the compiler would figure out what to do with that? And when you implement a function, wouldn't it be nice if you could write either the synchronous or the asynchronous version, and have the language autogenerate the other one if needed? This would reduce API clutter and mental overhead. It would make asynchronous functionality easier to use and more readily available.

It turns out both these things are actually pretty easy to do. The first relevant idea is the future. Here is an idea whose time has come. A future is simply an object that represents the result of an asynchronously executing call. Futures provide the standard API for asynchronous versions of APIs. The asynchronous version should always work just like the synchronous version, except that it returns a future.

(By the way, Java 1.5 has a standard interface for futures: `java.util.concurrent.Future`.)

Futures are easy to implement in a lot of languages. But the rest of it (allowing the user to call a function either synchronously or asynchronously on a whim; and allowing the programmer to implement only one version and have the other autogenerated) is an exercise in metaprogramming. Python is particularly well-suited to that sort of thing.

Below: a complete implementation. The syntax for a normal blocking function call is just the usual `fn(arg1, arg2, ...)`. The syntax for an asynchronous call is `async(fn, arg1, arg2, ...)`. I think this syntax is fairly nice. I would happily write `async(my_socket.read, 100)` if it worked.

The `async()` function handles the case where the function is synchronous but needs to be called asynchronously. What about the opposite case: a function is implemented asynchronously but needs to be called synchronously? This one is just as easy. The function just needs to be marked with the `@asyncToSync` decorator, like this:

```
@asyncToSync
def shutdown(self):
    # ...start the work on a background thread and return a Future...
```

I'm still shaking my head over how cool Python is. Here's the code.

```
""" futures.py - Futures and asynchronous function calls for Python. """

import sys
import threading

class Future:
    """ A future is a proxy for the result of a function call that
    may not have finished running yet.

    When you call async(), it returns a future representing the
    result of the function you're calling asynchronously.  You can
    call the get() method to block until the call finishes.  When
    the call does finish, get() returns its return value.  If the
    call fails with an exception, get() re-throws that exception.
    """

    def __init__(self):
        self._done_event = threading.Event()
        self._exc_info = None
        self._value = None

    def is_set(self):
        """ Returns true if this future has completed.
        If this is true, get() will not block.
        """
        return self._done_event.isSet()

    def get(self):
        """ Get the result of this future.

        If neither set_value() nor set_error() has been called, this
        blocks until one of them is called.

        This returns a value if set_value() was called;
        it throws an exception if set_error() was called.
        """
        self._done_event.wait()
        if self._exc_info is not None:
            t, v, tb = self._exc_info
            raise t, v, tb
        else:
            return self._value

    def set_value(self, v):
        if self._value is not None:
            raise ValueError("This future already has a value.")
        if self._exc_info is not None:
            raise ValueError("This future has already failed and can't be set.")
        self._value = v
        self._done_event.set()

    def set_error(self, exc_info):
        if self._value is not None:
            raise ValueError("This future already has a value.")
        if self._exc_info is not None:
            raise ValueError("This future has already failed.")
        self._exc_info = exc_info
        self._done_event.set()

def async(fn, *args, **kwargs):
    """ Call the function fn in a separate thread.
    Return a future that will receive the result of the call.
    """

    # If fn has its own asynchronous implementation, just use that.
    if hasattr(fn, '__async__'):
        return fn.__async__(*args, **kwargs)

    result = Future()

    def async_processing():
        try:
            rv = fn(*args, **kwargs)
        except:
            result.set_error(sys.exc_info())
        else:
            result.set_value(rv)

    threading.Thread(target=async_processing).start()
    return result

class AsyncAwareFunction:
    """ A function that has an efficient asynchronous implementation.

    This is a wrapper for a function that works asynchronously.

    The wrapper has a __call__ method that lets the user call the function
    synchronously--that is, the Future.get() call on the result is
    automatically added.  But async() can be used to get the underlying
    asynchronous behavior.
    """

    def __init__(self, fn):
        """ fn should be a function that returns a future. """
        self.fn = fn
        self.__name__ = fn.__name__
        self.__doc__ = fn.__doc__
        fn.__name__ += '_async'

    def __call__(self, *args, **kwargs):
        """ Call the function and wait for it to complete. """
        return self.fn(*args, **kwargs).get()

    def __async__(self, *args, **kwargs):
        """ Call the function and don't wait for it to complete.
        This should return a future.
        """
        return self.fn(*args, **kwargs)

def asyncToSync(f):
    """ Decorator.  Use this on functions implemented asynchronously.
    This wraps them to run synchronously unless called via async().
    """
    return AsyncAwareFunction(f)
```
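To see the calling convention in miniature, here's a stripped-down, standalone sketch of the same idea. The names (`MiniFuture`, `run_async`, `slow_add`) are invented for the demo, and the error-handling and async-aware machinery from the module above is omitted:

```python
import threading

class MiniFuture:
    # Just enough future to demonstrate the calling convention.
    def __init__(self):
        self._done = threading.Event()
        self._value = None
    def set_value(self, v):
        self._value = v
        self._done.set()
    def get(self):
        self._done.wait()      # block until the worker finishes
        return self._value

def run_async(fn, *args):
    # The async() idea in miniature: start fn on a thread,
    # hand the caller a future for the eventual result.
    f = MiniFuture()
    def work():
        f.set_value(fn(*args))
    threading.Thread(target=work).start()
    return f

def slow_add(a, b):            # stand-in for a long-running call
    return a + b

f = run_async(slow_add, 2, 3)
# ...do other work here while slow_add runs...
print(f.get())  # 5
```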

## 23 March 2006

### Domain errors (or, stupid compiler tricks)

Warning: the following blog entry is long and boring, and has a Static Typing Index of 9.4.

A domain error is when you pass an invalid argument to a function. I think domain errors are different from other runtime errors, and I think some PL support for domain checks might be a nice feature.

A domain error should be treated as a bug in the caller, not a runtime error in the callee. I think domain errors and asserts should be treated the same way. (That's not to say I like the way asserts work in, say, C++. I'm just saying they're the same sort of thing. If the check fails, that's a bug.)

Suppose you could set domain restrictions out in front of a procedure, rather than explicitly checking them inside the body.[1] For example:

```
// Return a sorted copy of a slice of the given list.
List getSortedSlice(List source, int start, int stop)
    where source != null, 0 <= start, start <= stop, stop <= source.size()
{
    List result = new ArrayList(source.subList(start, stop));
    Collections.sort(result);
    return result;
}
```

What good is that?

• The code is easier on the typing fingers than explicit asserts, but it could compile into equally good code. And it looks easier to generate good runtime error messages from it.

• You could display it in auto-generated API documentation. This means a restriction only needs to be typed once, instead of once in the documentation (to tell the user about it) and again in the code (to enforce it). Nice. I think it's particularly nice for Java-style interfaces.

• A function's domain restrictions could be automatically propagated to its callers. That is, if `f(x)` calls `g(x)`, `f` should inherit `g`'s restrictions on `x`—without `f`'s author having to type them in again.

This would cause errors to be caught sooner, closer to the source of the bug, when the stack is shorter. Stack traces are so extremely useful because they let you drill down to find the real source of a bug, which is often far down the stack from where the bug is detected. But it would be even more useful to catch the error at its source.

• The compiler could try to prove that the arguments to a function call meet the domain restrictions. This is an optimization. The compiled function would have two entry points: one that checks arguments and one that doesn't. For a given call site, if the compiler can statically prove all the restrictions are met, the runtime check becomes redundant and can be optimized away. This may not be as hard as it sounds. If all the restrictions are propagated outward, for example, the runtime checks can be skipped. If the call is sheltered by an earlier call (or assert) in the same function that enforces the same or stronger restrictions, the runtime checks can be skipped. (I think some Java JIT compilers do a weak version of this for array accesses. Come to think of it, I bet they do it for `NullPointerException` checks, too; I haven't looked.)

• And if the compiler can prove that an argument will fail a precondition, it can treat that as a compile-time error.[2] Talk about catching an error at its source.

It amounts to: catching errors earlier is good. This is why I think some errors should be treated differently from others. Many errors, like “out of memory” or “connection refused”, are essentially unpredictable. But with a little effort, a compiler could probably very often find a way to detect that an assert is going to fail, or a function is going to be called with an invalid argument, or a null reference is going to be used, before it actually happens. It should be allowed to generate code that detects the error early. (Especially if it can boost performance by reducing the number of runtime checks.) If the compiler is able to detect that sort of error at compile time, so much the better. It should be allowed to flag it as a compile error.

I don't think this is the case with Java. I think the language spec requires `javac` to generate code that waits for the error to happen. Principle of least astonishment, or sheer blockheadedness? You decide.

Bonus cool bit: how much of this information could be obtained without the end user having to type anything? Consider the example above. The first restriction, `source != null`, is implied by the body of the function. `source.subList()` is called unconditionally on line 1 of the function; the code can't succeed if `source` is null. The other three restrictions could be put on `List.subList()` instead; they would automatically propagate outward to `getSortedSlice()`. Huh.
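You can already fake a weak, runtime-only version of the `where` clause as a library. A Python sketch (the `where` decorator and `get_sorted_slice` are invented for illustration; there's no static checking or propagation here, just the fail-fast, blame-the-caller behavior):

```python
import functools

def where(*preconditions):
    """Each precondition is a predicate over the function's positional
    arguments.  A violation is reported as a bug in the caller,
    before the body ever runs."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            for check in preconditions:
                if not check(*args):
                    raise AssertionError(
                        "domain error: bad arguments to %s()" % fn.__name__)
            return fn(*args)
        return wrapper
    return decorate

@where(lambda source, start, stop: source is not None,
       lambda source, start, stop: 0 <= start <= stop <= len(source))
def get_sorted_slice(source, start, stop):
    # Return a sorted copy of a slice of the given list.
    return sorted(source[start:stop])

print(get_sorted_slice([3, 1, 2], 0, 3))  # [1, 2, 3]
```

Everything interesting in the entry (propagating restrictions to callers, eliding redundant checks, compile-time proofs) is exactly what this sketch can't do; it only shows the programmer-facing syntax.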

[1] You can already say things like this in languages like ML and Haskell that have pattern matching. They're called guards, and they look like this in Haskell:

```
getSortedSlice start stop source
  | 0 <= start, start <= stop, stop <= length source
  = sort $ take (stop - start) $ drop start source
```

If the guard isn't satisfied, the call fails at runtime. But that isn't really the design purpose of guards, as I understand them. I don't know if there's any Haskell compiler that attempts the sort of shenanigans I'm suggesting. I sort of doubt it.

Closer would be Eiffel, which has preconditions that (for all I know) might be intended to do this sort of thing.

[2] Conventional wisdom is that features like this lull the programmer into a false sense of security. If you can detect only X% of a particular class of errors at compile time, with X<100, you're actually doing the user a disservice. I think I disagree. I mean, the same could be said of unit tests, right? In fact, just running code and having it work instills a sense of confidence that may or may not be warranted. My experience is that all code has bugs, and every bug the compiler catches saves me time.

## 16 March 2006

### Python and functional programming

Last year Guido announced that the `reduce()` builtin will go away in Python 3.0. A lot of the commentary was along the lines of this:

> Maybe Guido's right, and functional programming doesn't belong in Python. Of course, that would imply that functional programmers don't belong in Python, either... I hear Perl's more receptive these days, and you won't find Ruby or Smalltalk giving up blocks/lambdas anytime soon. Or heck, try a real functional language like Haskell, Ocaml, Dylan, Nice, Scheme, or what have you. Even C++ is getting more functional these days. Too bad to see Python going backwards.

Bitterness abounds! What I was completely unable to find, though, was any example of Python programmers saying, “I use `reduce()` all the time. I really hope it doesn't go away.” In fact a Google search for `python "i use reduce"` found only 13 hits.

Following one of them, I found this, which I think is fair:

> My Solar System calculator BOTEC has a similar application; when plotting courses, you can have an arbitrary number of transfers, and those transfers can each have an arbitrary number of burns. So it's quite convenient to do a reduce on the list of burns (a sequence of sequences), and then a reduce on the list of deltavees for the transfers (a sequence of numerics).

Note the difference in tone. Strangely, moving `reduce()` out of `builtins` and into a `functional` module, where it unquestionably belongs, is not the end of the world.

Another way of looking at all this is that Python doesn't need `map`/`filter`/`reduce` because it has syntax for all three. For `map` and `filter`, you can use list comprehensions. For `reduce`, there's an even cleverer syntax. It does everything `reduce` can do, but it's more readable; it runs faster; it's much easier to change; it can accumulate multiple values without having to pack and unpack tuples—it even has an early-escape feature. It's called a `for` loop.
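To make the comparison concrete, here's the same fold written both ways. The numbers are made up, in the spirit of the BOTEC example; `reduce` is imported from `functools`, where it eventually landed:

```python
from functools import reduce

burn_dvs = [12, 4, 31, 9]    # made-up delta-v figures

# The reduce() version: one accumulator, one expression.
total = reduce(lambda acc, dv: acc + dv, burn_dvs, 0)

# The for-loop version of the same fold.  It can also carry several
# accumulators and escape early, which reduce() can't do cleanly.
total2 = count = 0
for dv in burn_dvs:
    if dv > 30:       # early escape
        break
    total2 += dv
    count += 1

print(total)          # 56
print(total2, count)  # 16 2
```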

## 04 March 2006

Last week in a code review I ended up writing a little code like this:

```
void Frobs::tidy()
{
    for (PtrVec::size_type n = 0; n < m_vec.size(); )
    {
        if (m_vec[n]->isExpired())
            m_vec.erase(n);
        else
            n++;
    }
}
```

The guy I was working with preferred this, saying it would be easier to read:

```
void Frobs::tidy()
{
    for (PtrVec::size_type n = 0; n < m_vec.size(); )
    {
        m_vec[n]->isExpired() ? m_vec.erase(n) : n++;
    }
}
```

Obviously I disagree, or there'd be nothing to write about. What do you think? My take is in the comments.