27 January 2007

Neurology on the edge

Benjamin Libet conducted this experiment in the 1970s. Apart from one or two electrodes on the scalp, there's really nothing creepy about the experiment. Until you read about the results.

Libet asked his experimental subjects to move one hand at an arbitrary moment decided by them, and to report when they made the decision (they timed the decision by noticing the position of a dot circling a clock face). At the same time the electrical activity of their brain was monitored. Now it had already been established by much earlier research that consciously-chosen actions are preceded by a pattern of activity known as a Readiness Potential (or RP). The surprising result was that the reported time of each decision was consistently a short period (some tenths of a second) after the RP appeared.

from “Pre-empted decisions”, a page on Conscious Entities

The RP starts to ramp up as much as 0.3 seconds before the reported decision time. It continues to increase after that, leading up to actual hand movement about 0.2 seconds later.

What does this mean? All the test subjects, of course, felt they had consciously chosen to move. But if unconscious brain activity precedes the conscious experience of decision-making, then surely we must conclude that the decision is not consciously made. Effects don't precede causes.

Now there are countless philosophical objections to this conclusion. Some philosophers claim that to interpret this result at all competently, you have to be well-versed in the philosophy of mind. Which seems reasonable enough, but it's a deep field with centuries of literature in many languages. So this prerequisite rules out anyone who has spent his life studying anything as patently irrelevant as mere neurology. To say nothing of random bloggers.

I'll wade in anyway, of course. Just don't get the impression I know anything about this subject. I don't.

There's a really nice, compelling interpretation that permits free will. It goes like this. The way the mind interprets time is anything but objective. In the Libet experiment, what's happening is that the mind shifts the experience of the decision later in time and shifts the experience of the motion earlier, effectively bringing them closer together. So the conscious decision to act actually does cause the RP ramp-up. But the subject incorrectly reports the decision as having happened later, because his brain has deceived him about the timing.

Why would the brain do this? It seems likely to me that there's an evolutionary benefit to perceiving decision, action, and effect as a single event. I don't think we're equipped to deal with that kind of time lag consciously. Just think—there's a half-second lag between when you decide to move a muscle and when it moves. Have you ever played the piano? If you were aware of this lag all the time, could you do that? Could you run? Could you fight?

An article, “Free Will and Free Won't” (in American Scientist; $12 to download the PDF from their site), puts the Libet experiment alongside four or five other rather clever experiments into will and consciousness. Then it starts talking about alien hand syndrome. The brain is strange.

26 January 2007

Wonderfully odd thing of the moment

The island of Yap in the Pacific Ocean used varying sized stones as money, of which the largest weighing several tons were the most valuable. The stones had been brought by sea from the Island of Palau 210km away. [...] The Yapese valued them because large stones were quite difficult to steal and were in relatively short supply. However, in 1874, an enterprising Irishman named David O'Keefe hit upon the idea of employing the Yapese to import more “money” in the form of shiploads of large stones, also from Palau. O'Keefe then traded these stones with the Yapese for other commodities such as sea cucumbers and copra. Over time, the Yapese brought thousands of new stones to the island, debasing the value of the old ones. Today they are almost worthless, except as a tourist curiosity.

From Wikipedia. It's given as an instance of hyperinflation.

16 January 2007

Brouwer's shopping mall diorama theorem

This is a math post, but it also involves some audience participation. There's a crafts project. It may also require some driving. Ready?

Pick any closed, contiguous region of the universe—like, say, the nearest mall. Draw a map of it. Or you can make a diorama, if you're just that fond of the mall, or of making dioramas.

Go ahead. It doesn't have to be to scale.

While you're working, I'll say something profoundly obvious. The whole idea of a map, of course, is that every place in that part of the real world corresponds to exactly one spot on the map.

Done? Good. Now take the map (or model) and put it inside the closed region of space that it represents. That is, go to the mall. Brouwer's fixed-point theorem says that the map now has a fixed point: there's a point on the map that is actually at the very location that it represents.

This works no matter how large or small your map is. If your map is the size of the entire food court, and you take it there and spread it out on the floor, there will be a spot somewhere in the food court that exactly lines up with the corresponding spot on the map. Shift the map a little bit, and that spot won't line up anymore—but some other spot will. Always. You can turn the map around to face the wrong way. You can hold your 3-D model upside down. It doesn't matter. In fact, this works even if your map is not drawn to scale or if things are totally the wrong shape.

There are only two requirements regarding accuracy. First, your map can't leave anything out. So if you forgot to draw the Banana Republic, you have to mentally squeeze it in between Orange Julius and The Icing where it belongs. Second, your map must be continuous. That is, any path someone might take from one point to another in the mall has to make a continuous path (without any “jumps”) on your map as well.

In the language of topology, any continuous function that maps a closed ball in Rⁿ into itself has a fixed point. I have no idea why this works. Amazing.
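Here's a numerical sketch of the mall-map situation (a toy example of my own, with made-up numbers): treat the map as an affine function f(p) = Ap + b that shrinks and rotates the unit square into itself. Because this particular f is a contraction, simply iterating it homes in on the fixed point—the “You are here” spot.

```python
# A toy "map of the mall": an affine map f(p) = A @ p + b that sends
# the unit square into itself, as if a shrunken, rotated map sheet were
# lying somewhere on the floor. The numbers are arbitrary.
import numpy as np

theta = 0.7  # the map sheet is rotated by about 40 degrees
A = 0.3 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])  # scale 0.3 + rotation
b = np.array([0.5, 0.5])  # where the sheet lies on the floor

def f(p):
    return A @ p + b

# Brouwer guarantees a fixed point exists; because this f is a contraction
# (it shrinks distances by a factor of 0.3), iterating it converges there.
p = np.zeros(2)
for _ in range(100):
    p = f(p)

print(p, np.allclose(f(p), p))  # the fixed point; True
```

The convergence here actually comes from the contraction property (Banach's fixed-point theorem); Brouwer's theorem is the more general statement that a fixed point exists even for maps that don't shrink anything.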

It may have occurred to you that there already are nice, large maps conveniently located throughout the mall. Brouwer's theorem applies to those maps, too. In fact, in honor of Brouwer, the fixed points of these maps are always clearly marked, usually with a red dot or an arrow. Next time you're in a mall, take a look.

11 January 2007

Fiction, meet science

At some point, Dartmouth College offered a semester course on Renaissance Math in Fiction and Drama. From the site:

This course explores scientific developments in Renaissance astronomy and their portrayal in literature past and present. By reading some of the writings by Copernicus, Galileo and the prolific Kepler, we will attempt to draw a portrait of scientific upheaval during that period. The science fiction of the Renaissance offers a window into the popular response to these developments, as do various commentaries of the time. Dramatic pieces both recent and of that period show the artistic reconstruction of scientific events, sometimes through a very modern lens.

“Science fiction of the Renaissance”? There's not a huge amount of this, as it turns out, but one amazing, atypical example is Johannes Kepler's Somnium, which was at once a fanciful journey to the moon and a serious thought experiment in support of Copernican heliocentrism. Wow.

05 January 2007

Perfect numbers

Perfect numbers are numbers that are equal to the sum of their proper divisors (all their factors except the number itself): 6 is perfect because its proper divisors are 1, 2, and 3, and 1 + 2 + 3 = 6. Likewise 28 = 1 + 2 + 4 + 7 + 14; and so on. So far, 44 perfect numbers are known.

Puzzle: Can you prove that if 2ⁿ − 1 is prime, then 2ⁿ⁻¹(2ⁿ − 1) is perfect?
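If you'd rather experiment before proving, here's a quick brute-force check of the claim (the helper functions are my own, using nothing beyond the definitions above):

```python
# A number is perfect when it equals the sum of its proper divisors.
def is_perfect(n):
    return n == sum(d for d in range(1, n) if n % d == 0)

# Trial-division primality test, good enough for small numbers.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Whenever 2^n - 1 is prime, 2^(n-1) * (2^n - 1) should be perfect.
for n in range(2, 8):
    if is_prime(2**n - 1):
        m = 2**(n - 1) * (2**n - 1)
        print(n, m, is_perfect(m))  # n = 2, 3, 5, 7 give 6, 28, 496, 8128
```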

planx_constant mentioned that little theorem to me over vacation. It was first proved by Euclid. Millennia later, Euler proved that all even perfect numbers are produced by this formula. But it is not known whether there are any odd perfect numbers. Most mathematicians seem to think there are none. Here's James Joseph Sylvester, writing in 1888:

...a prolonged meditation on the subject has satisfied me that the existence of any one such—its escape, so to say, from the complex web of conditions which hem it in on all sides—would be little short of a miracle.

Yet there is hope, and indeed the search is on.

A complex story, part 4

(See parts 1, 2, and 3.)

Everybody literally sees the world from a different point of view. Each person is standing in a different location and looking out in a different direction from everyone else. But all viewpoints share certain similarities. If you and I are near one another, we'll see the same events happen in the same order, and although we may differ in our use of the words “right” and “left”, if we're watching something from opposite sides, we'll at least agree on the distances between things. If I see two people holding hands, you'll never see them on separate sides of the street at the same time, no matter where you're standing. All the different viewpoints preserve certain observed properties: distances, angles, durations, causality, and so on.

Mathematically, we can write this in two equations. For each of us, every event has a measurable position in space (x, y, z) and time (t). If we put my observations on the left-hand side and yours on the right, they will match.

We agree on distances: x² + y² + z² = x'² + y'² + z'²

We agree on durations: t = t'

Even if I'm in a car doing eighty and you're sitting on the sidewalk enjoying an ice cream cone, we'll agree on the distances between and durations of any events we both happen to witness as I zoom by.
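In code, the Newtonian picture looks like this (a toy example of my own; the rotation angle and the event's coordinates are made up): your axes are a rotation of mine, the distance from the origin comes out the same, and the clock reading is simply shared.

```python
# Two Newtonian viewpoints related by a rotation. Rotations preserve
# x^2 + y^2 + z^2, and in this picture time is untouched entirely.
import numpy as np

def rotation_z(angle):
    """Rotation matrix about the z axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

event = np.array([3.0, 4.0, 12.0])  # my coordinates (x, y, z)
t = 2.5                             # my clock reading

R = rotation_z(0.9)                 # your viewpoint, rotated from mine
event_prime = R @ event             # your coordinates (x', y', z')
t_prime = t                         # Newton: everyone's clock agrees

print(event @ event, event_prime @ event_prime)  # both ~169: distances agree
```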

...Or so everyone thought. Don't get me wrong, this is a lovely picture. Mathematically, it's your basic three-dimensional Euclidean geometry, plus a separate dimension for time. All our viewpoints are identical except for a bit of spatial displacement and rotation. There's only one problem. This isn't how the universe really behaves.

1887 was the year of the famous Michelson-Morley experiment, which blew this nice, simple Newtonian view all to hell. For twenty years, confusion reigned. By 1905, a mere eyeblink in academic terms, physics had righted itself, now with a totally new model of space and time.

The new theory was called special relativity. It was built on brilliant new insights from Hendrik Lorentz, Henri PoincarĂ©, and Albert Einstein. And it went something like this: Two observers traveling at incredible velocities (relative to one another) actually do not agree on distances, angles, durations, or even the relative time-order of events. But they will agree on something even more fundamental: the basic laws of nature, including laws of motion, causality, and—in particular—the speed of light.

This had the advantage of being, you know, consistent with experiment. But geometrically, it was awfully weird. It wrecked the two equations above. Individual viewpoints were not simple spatial rotations and translations of one another. They were, uh, Lorentz transformations. Yeah. It was two more years before geometry caught up with physics.

In 1907, Hermann Minkowski discovered a kind of geometry (a four-dimensional manifold) that exactly describes the spacetime of special relativity. That is, Minkowski space is the actual geometry of the universe around us, according to relativity. Minkowski's geometry succeeded by treating space and time as interrelated. For example, in Minkowski space:

We may not agree on the spatial distance between two events: x² + y² + z² ≠ x'² + y'² + z'²

We may not agree on the passage of time: t ≠ t'

But we will agree on a particular mathematical combination of the two: x² + y² + z² − c²t² = x'² + y'² + z'² − c²t'²

(Here c is the speed of light.)

Now comes the controversial, beautiful part. Define a variable w as ict. We're going to use w as our time coordinate, instead of t. Then the last equation above becomes:

x² + y² + z² + w² = x'² + y'² + z'² + w'²

This looks a lot like our original equation for distance. And in fact this equation describes basic Euclidean geometry in four dimensions. Time becomes just another spatial dimension. All viewpoints are again simple rotations and translations of one another—not in three-dimensional space, but in four-dimensional spacetime.
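We can check the bookkeeping numerically (a sketch of my own; the event coordinates and the 0.6c relative velocity are arbitrary): boost along x, and watch x² + y² + z² − c²t² come out the same for both observers.

```python
# A Lorentz boost along x mixes the x and t coordinates, yet leaves the
# combination x^2 + y^2 + z^2 - c^2 t^2 unchanged.
import numpy as np

c = 299_792_458.0   # speed of light, m/s
v = 0.6 * c         # relative velocity of the two observers
gamma = 1 / np.sqrt(1 - (v / c) ** 2)

# An event in my coordinates (meters and seconds, chosen arbitrarily):
x, y, z, t = 4.0e8, 1.0e8, 2.0e8, 1.0

# The same event in your coordinates, after a boost along x:
x_p = gamma * (x - v * t)
t_p = gamma * (t - v * x / c**2)
y_p, z_p = y, z

interval       = x**2 + y**2 + z**2 - (c * t) ** 2
interval_prime = x_p**2 + y_p**2 + z_p**2 - (c * t_p) ** 2
print(np.isclose(interval, interval_prime))  # True: both observers agree
```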

Here the role of the complex numbers is to provide a new way of looking at the geometry of the universe.

But... what does it all mean? Is time really an imaginary dimension? What does it mean for three dimensions to be real numbers and one to be an imaginary number? These questions are, in a way, the same questions RT asked me months ago, the questions that got me interested in telling this story. What are the imaginary numbers? Do they exist? Do they appear in nature? I don't think anyone really knows. Einstein found the ict trick interesting at least (he mentions it twice in his short book Relativity, which by the way I enthusiastically recommend), but some physicists think it's a red herring. Maybe we're just dressing the universe up to look more comfortable and familiar.

A complex story, part 3

(See parts 1 and 2.)

In the early 1800s, Joseph Fourier found that every periodic function is made up of (a possibly infinite series of) sine and cosine functions of various frequencies and magnitudes. Just add the right sine waves together and you'll get the desired function. Any function, or at least any function well-behaved enough for the series to converge, which covers just about anything an engineer will ever meet. This collection of waves is called the Fourier series, and it would soon propel the complex numbers from the ivory tower of pure math onto the mad merry-go-round of technology.

Mathematicians used the Fourier series to shift difficult problems to an easier battleground, by transforming a complicated function into an infinite sum of very simple ones. This was the beginning of frequency-domain analysis. It was soon discovered that—thanks to Cotes's discovery—the Fourier series was much simpler if you used complex numbers. Other such transformations were discovered too, notably the Fourier transform and the Laplace transform. Both are based on the complex numbers.
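Here's a small numerical illustration (my own, with an arbitrary choice of signal and cutoff): the complex-exponential form of the Fourier series, with each coefficient computed as an average against e^(−int), rebuilding a square wave from its harmonics.

```python
# Approximate a square wave by a partial sum of its complex Fourier
# series. This is the form that Cotes's identity e^(ix) = cos x + i sin x
# makes possible: one exponential instead of separate sines and cosines.
import numpy as np

t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
square = np.sign(np.sin(t))  # the target periodic function

def coeff(n):
    """Complex Fourier coefficient c_n, computed numerically."""
    return np.mean(square * np.exp(-1j * n * t))

# Partial sum over harmonics -N..N:
N = 51
approx = sum(coeff(n) * np.exp(1j * n * t) for n in range(-N, N + 1))

# For a real signal the imaginary parts cancel, and the real part
# approaches the square wave (apart from ringing near the jumps).
print(np.max(np.abs(approx.imag)) < 1e-9)             # True
print(np.mean(np.abs(approx.real - square)) < 0.1)    # True
```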

Frequency-domain analysis was the killer app for complex numbers. And then came electricity. As it happens, most of electrical engineering would be practically impossible without frequency-domain analysis. Beginning problems in circuits—problems that in the time domain would require two or three semesters of college-level calculus to tackle—can be solved in the frequency domain with basic high-school algebra and a few complex numbers.
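To see what that trade looks like, here's a sketch of the standard phasor shortcut (the component values are made up): the voltage across the capacitor in a series RC circuit driven by a sine wave, found with a complex-number voltage divider instead of a differential equation.

```python
# Series RC circuit driven at angular frequency w, solved in the
# frequency domain: each component becomes a complex impedance, and the
# rest is the ordinary voltage-divider formula.
import numpy as np

R = 1_000.0          # resistance, ohms
C = 1e-6             # capacitance, farads
w = 2 * np.pi * 50   # drive frequency: 50 Hz

Z_R = R                   # impedance of the resistor
Z_C = 1 / (1j * w * C)    # impedance of the capacitor

# Ratio of capacitor voltage to source voltage. No calculus in sight.
H = Z_C / (Z_R + Z_C)

print(abs(H))                    # gain; less than 1, so it attenuates
print(np.degrees(np.angle(H)))   # phase shift in degrees; negative: a lag
```

The magnitude and angle of the complex number H are exactly the amplitude ratio and phase lag you would get by grinding through the time-domain differential equation.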

Fourier-related transforms are also essential to the compression of digital images, music, and video. So it's safe to say the complex numbers will be with us for a while yet.

There is just one more application of the complex numbers I want to talk about, by far the weirdest, probably the most controversial, and just maybe the most beautiful of them all.

(Concluded in part 4.)