30 May 2016

A bitrot anecdote

The consequences of bitrot are not contained in the domain of the system administrator. Here’s Mario Carneiro on the state of a formal, computer-checkable proof of a mathematical theorem:

I was blown away when I discovered that Isabelle’s proof of the prime number theorem was done in Isabelle2005, and it was not upkept along with the AFP (archive of formal proofs). Moreover, apparently not even the creators know how to run Isabelle2005 anymore, and everything has changed since then so it’s not backward compatible by any stretch. Basically, the proof is lost at this point, and they are more likely to formalize an entirely different proof than upgrade the one they already have.

Meanwhile prose proofs dating back to the 1800s are still good. Though time will presumably destroy those too, if they are not “upkept”. Languages evolve and go extinct. Books are destroyed.

10 May 2016

Print

I have subscribed to a daily print newspaper.

I really enjoy a good paper, but the one I subscribed to is not a good paper. It’s owned by Gannett. A lot of the content is exactly what’s in USA Today. Section B of my paper is, as far as I can tell, USA Today’s section A verbatim. And local coverage is thin. Sometimes the front page has 12 column-inches of news on it. A few stock photos and two-inch headlines holding up not so many words.

I knew all this going in. It’s why I never subscribed before.

I subscribed for three reasons.

  1. The handful of reporters left in that newsroom are vital civic infrastructure. The world is going to get a lot darker when they’re gone, and we’ll all wonder why.

  2. Not scowling at newsprint was a major gap in my curmudgeonly persona.

  3. I need a better news source.

We are going to have to have a little talk about that last thing. Shut up, you knew this was coming. Pull up a chair.

I know you. You get most of your news from Twitter and Facebook. (Or maybe you’re one of those assholes that bragged to me at a party that you get all your news from The Daily Show. Well, congratulations. But your news comes in headlines, followed by applause or boos, followed by sketch comedy, just like Twitter. It doesn’t get any shallower and you’re no better than anyone else.)

Oh, you also listen to This American Life? Gold star.

So how’s it going?

Even the worst newspaper is pretty great compared to the Internet. The ads are less intrusive. Even the wrap-around ad that I have to physically tear off the front page before I can read my paper is less intrusive than the crap people have to endure or physically dismiss online. (Yeah, I know, you use an ad blocker, so you are blissfully unaware of this.)

When the Internet first came along, we were all pretty excited about getting out from under the filters that the media imposed on us. Instead, our friends and the people we admire would be our filters. Well, we’ve discovered something interesting about ourselves. The filter we create for ourselves is dramatically worse. We never have any real idea what’s going on. We read more trash. We read more pop culture fluff. We have invented whole new genres of trash and pop culture fluff. We’re making ourselves worse.

Reading a newspaper is frustrating and enlightening and stupid and entertaining and anti-entertaining. The paper is chock full of content that’s not for me. ...And maybe that’s what’s so good about it. The people creating the content are not hostile toward me; they just don’t know I’m here. It’s relaxing.

Bitrot

There was a time not so long ago when software had a shelf life of five or ten years, easy.

Of course there was a time before that when software was written for very specific machines, like an Atari 2600, and those programs still run today on those very specific machines. Their shelf life is unlimited. But I’m talking about the PC era, when there were constantly new machines and new OS versions being released, and yet old software would still run on the newer stuff for years and years.

That time is over. Ubiquitous internet access is the culprit. We’re on a treadmill now.

Say you use a program called Buggy and Buggy uses OpenSSL. If OpenSSL releases a critical patch, nobody is going to wait to see what the Buggy team thinks about the new version. The old OpenSSL library is going to be deleted off your computer with prejudice, and the new one dropped in its place. Buggy will immediately start using this new OpenSSL version it was never tested with (and never will be -- Buggy’s maintainers are too busy testing their current codebase). The longer this goes on, the greater the difference between the environment on your computer and any environment in which Buggy could possibly have been tested by its maintainers. Eventually, something breaks.

A security-sensitive library like OpenSSL may sound like a special case, but it’s not. For one thing, you don’t know which software on your computer is security sensitive. But also, you’re going to get updates that fix other (non-security) bugs. You probably want those fixes; you definitely want the security fixes. And given that, the optimum strategy is to keep updating everything. As long as all your software stays on the treadmill, you’re OK.
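The mechanics of the swap are easy to simulate. Distros typically ship a library as a versioned file behind a symlink, and an update just repoints the link. Here's a sketch using plain files as stand-ins for real .so binaries (the file names and version numbers are made up, not real OpenSSL packaging):

```shell
cd "$(mktemp -d)"                   # scratch directory for the demo

# The version Buggy was tested against.
echo "openssl 1.0" > libssl.so.1.0
ln -sf libssl.so.1.0 libssl.so

# A critical patch arrives: the package manager installs the new file
# and repoints the symlink. Buggy never gets a vote.
echo "openssl 1.1" > libssl.so.1.1
ln -sf libssl.so.1.1 libssl.so

cat libssl.so                       # prints "openssl 1.1"
```

The next time Buggy starts, the loader follows the link and hands it code it was never tested with.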

But unmaintained software now rots. It happens fast. We’re not talking about this. I don’t know why.

24 February 2016

Former NSA director warns against back doors

The headline this Monday was: “Former director of NSA backs Apple on iPhone ‘back doors’”.

“Look, I used to run the NSA, OK?” Hayden says. “Please, please, Lord, put back doors in, because ... that back door will make it easier for me to do what I want to do. ...

“But when you step back and look at the whole question of American security and safety writ large, we are a safer, more secure nation without back doors.”

I have to give Michael Hayden credit for changing his mind on this and for speaking up about it, but it is a little late. The right time to “step back and look at the whole question of American security and safety writ large” is when you are in fact the director of the National Security Agency.

The NSA is the agency charged with protecting U.S. information systems against foreign attack. Sure, that mandate is less sexy than NSA’s sigint mission, but it’s even more important in practice. We are vulnerable. And it’s great that signals intelligence and information security are under the same agency, since there are tradeoffs to consider... except that nobody is considering them. The head of the National Security Agency didn’t see security as any part of his job.

Apparently, he thought he had a sigint job. But really I suspect he thought he had a political job, working for the President.

That's a shame. If the current director of the NSA happens to be reading this: please do your country a service and take a good hard look at your job description. Or better yet, a good hard look in the mirror. Act like an adult. Do the job that needs doing.

02 December 2015

How I finally learned git

“Einstein repeatedly argued that there must be simplified explanations of nature, because God is not capricious or arbitrary. No such faith comforts the software engineer. Much of the complexity he must master is arbitrary complexity […] because they were designed by different people, rather than by God.” —Fred Brooks

True facts:

  • git has outstandingly bad UI design.

  • The git man pages are written in such heavy jargon that I was never able to get anything useful out of them until very recently.

I just recently broke through some sort of internal motivational barrier and really learned git. Here’s what I did:

  • Promise to give a talk about git in front of a bunch of people. I can’t recommend this, but it happened.

  • Read the Pro Git book. Nope. I tried several times, and the book is good and free, but for some reason I couldn’t get through it.

  • Keep a list of my open questions. This is what really worked for me. I made a file (ignorance.md) containing stuff like:

    • What exactly is HEAD?

    • What exactly is the reflog?

    • What exactly is the index? How is it stored?

    • How does `git pull` differ from `git pull origin master`?

    • What is this `origin/master` syntax? When would I want to use it?

    Then I attacked questions in no particular order, plugging them into Duck Duck Go or messing around with git in a throwaway repo.

    When I got an answer, I typed it into the file, in my own words. Each answer led to three or four new questions, so I put those in there too, and kept going. Right now I have 39 open questions. (“[W]e do not yet know all the basic laws: there is an expanding frontier of ignorance.” —Richard Feynman)

  • Poke around in .git. There’s no substitute. (“Show me your flowcharts…”)

  • Randomly read GitGuys.com. It’s incomplete, but what’s there is great.

I don’t think all this took more than maybe 8 hours, really. At some point, the git man pages started making sense... mostly.
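For what it's worth, a few of the questions on that list have answers you can see directly on disk. A sketch in a throwaway repo (output is illustrative and varies by git version; the default branch may be master or main):

```shell
cd "$(mktemp -d)"            # a throwaway repo to poke around in
git init -q
git config user.email "you@example.com"   # placeholder identity so commit works
git config user.name  "you"
echo hello > readme.txt
git add readme.txt
git commit -q -m "first commit"

cat .git/HEAD                # HEAD is a tiny text file naming the current
                             # branch, e.g. "ref: refs/heads/master"

cat .git/logs/HEAD           # the reflog: one plain-text line per time
                             # HEAD moved

git ls-files --stage         # the index (staging area) lives in the binary
                             # file .git/index; this prints its contents
```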

I would try that ignorance.md thing again. It’s been fun.

11 May 2015

Determinism

I think I am going to be a fairly extreme determinist until someone convinces me otherwise.

This is not an informed position. My influences are:

  • “Story of Your Life” by Ted Chiang. (Including this in the list is about 37% joke, but go ahead and read it.)
  • the first couple chapters of SICM (all I’ve managed to read so far) and my vague and confused understanding of physics generally
  • the ancient Greek Stoics, whom I find alternately incomprehensible and convincing on this subject.

Why did the Stoics believe the universe was deterministic?

Paraphrasing the Stanford Encyclopedia of Philosophy: “Chrysippus was convinced that the law of excluded middle applied even to contingent statements about particular future events.” The truth of a proposition never changes: if “jorendorff breaks his arm on 12 May 2015” turns out to be true tomorrow, then it is eternally true—it must be true already.

Huh! It hardly seems airtight, right? Inasmuch as the universe appears to be nondeterministic at quantum scale, there’s evidently a flaw in that logic somewhere.

However if you look past that, the Stoics start to look better and better. And let me start here by sniping at their rivals. The Epicureans, like the Stoics, believed that natural laws governed pretty much everything that happened. Even better, unlike the Stoics, they were atomists. But they added something extremely weird to this system. They claimed that atoms drifting downward under their own weight would occasionally “swerve” in a nondeterministic way. Lucretius:

For if they had not this characteristic of moving out of the direct line, they would all fall downwards like drops of rain through the depths of the void; no collision would take place, no one atom would strike upon another; and so nature would never have produced anything at all.

In the Epicureans’ defense, I think I’ve heard that the asymmetry of the universe and the very-large-scale variability in its density are interesting puzzles for present-day cosmologists too. Even the atomic swerve, ludicrous as it sounds, is somewhat vindicated in the atomic-scale nondeterminism of quantum mechanics.

Later Epicureans at least saw the swerve as giving rise to free will. Lucretius again:

You must admit therefore that the same principle holds true of the atoms: that, apart from weight and the blows of one atom on another, there must be another cause for motion, from which comes this power that is born in us, since we see that nothing can be produced out of nothing. It is weight that prevents everything being caused by the blows of one atom on another, as it were by an external force; but it is the minute swerve in the atoms, taking place at no definite time or place, which keeps the mind itself from being governed by an internal necessity in all its actions, and from being as it were subdued by this necessity so as to be merely a passive subject.

There must be people who think exactly the same thing about quantum fluctuations. But there is an interesting difference: nondeterministic quantum effects have been carefully studied, and are apparently purely random, with probability distributions that can be derived from the theory. Any consistent deviation from randomness could be (at least probabilistically) observed, and would prove the theory wrong.

I wonder if truly random outcomes can be the source of what we call “free will”. It seems to me rather that free will must refer to the choices of us, of our character and our desires. Free will, then, is quite the opposite of randomness. Free will is not only compatible with determinism, it is a kind of determinism: self-determinism.

Stoic philosophy contains something along these lines:

Chrysippus used the illustration of a cylinder rolling down a hill as an analogy for actions that are within our control (Cicero and Gellius, 62C-D). It is true that the force that starts its motion is external to it. This is analogous to the impressions we have of the world. But it rolls because of its shape. This is analogous to our moral character. When our actions are mediated by our characters, then they are ‘up to us’. Thus, if I see an unattended sandwich and, because I am a dishonest person, steal it, then this is up to me and I am responsible. All things come about by fate but this is brought about by fate through me (Alex. Aphr. 62G). When, however, I trip and fall, knocking your sandwich to the floor, this is not up to me. The chain of causes and effects does not flow through my beliefs and desires.

I don't think nondeterminism means what we have assumed it means, and I have some computer-sciencey things to say about that later. But for now, this will do: It’s a mistake to rush to embrace randomness, desperately, lest the tyranny of a universe operating under natural laws reduce us to mere “passive subjects”. It’s the random choice that is, by definition, meaningless.

“Let me introduce myself”

This sort of thing never gets old, never ceases to charm me.

The dialect can be defined self-referentially as follows:

grammar : rule + ;
rule    : nonterminal ':' productionrule ';' ;
productionrule : production [ '|' production ] * ;
production : term * ;
term : element repeats ;
element : LITERAL | IDENTIFIER | '[' productionrule ']' ;
repeats : [ '*' | '+' ] NUMBER ? | NUMBER ? | '?' ;

04 February 2015

Why raising the dead is safe

Scientists have cloned a vintage virus. Still works! The article contains a nice explanation of why this is a safe thing to do:

“There's a theoretical risk of this, and we know that the nucleic acid of the virus was in great shape in our sample,” study author Eric Delwart of the University of California told New Scientist. “But old viruses could only re-emerge if they have significant advantages over the countless perfect viruses we have at present.”

I wonder if this is the conventional wisdom among biologists. I think I understand the argument. Delwart is saying that the viruses we have today are extremely well adapted to our environment, and a randomly selected virus from 700 years ago is correspondingly unlikely to have any particular advantage over them.

On the other hand, the same argument says that invasive species should never have an advantage over native ones, right?

And just generally, I think of natural selection as a greedy algorithm, which means it finds local maxima and gets stuck there. Randomly going back and thawing out 700-year-old viruses seems like simulated annealing—in other words it’s exactly what a computer scientist would do on purpose to help their evolutionary algorithms get unstuck!

All of which is just idle speculation coming from me. I should emphasize that it would be nuts to take a 43-word quote in a short blog post as fully characterizing anyone’s view on the subject. Presumably this has all been discussed to death by people who actually know something about it. I wonder where I can read more.

17 November 2014

Old programs

These remarks are my contribution to a keynote address by Eliza Brock Marcum at Nodevember, a JavaScript and Node.js conference in Nashville, Tennessee. Eliza’s talk was on history and the value of history in computing.

I’m always delighted by the light touch and stillness of early programming languages. Not much text; a lot gets done.

—Richard Gabriel, “50 in 50”

Old programming books are better on average than new ones.

Today there are a lot of meh programming books bringing down the average. Hastily written, barely edited, extravagantly typeset, quickly obsolete. Not bad, exactly, but not outstanding. Maybe you’ve written one!

The books we need aren’t necessarily being written. Fortunately a lot of them were written 30 years ago and they’re still good.

I’m going to drop a couple of examples here, but please take these as a literally random sample, not particular recommendations.

I was recently given this copy of Programming Pearls by Jon Bentley. He wrote for the ACM’s magazine before the ACM turned evil, and this is a collection of his columns. It was just one in a big box of old programming books, but it stood out because it has some of the worst cover art I have ever encountered. I figured it had to be the content that was good.

And it’s good. I challenge you to name a programming book with as much sheer joy in its pages as there is in the first twenty pages of this book. There’s this one page with three programs written in three different languages that work together to solve a problem because that’s the Unix way. The right way.

C:

    #define WORDMAX 101
    main()
    {   char thisword[WORDMAX], sig[WORDMAX];
        while (scanf("%s", thisword) != EOF) {
            strcpy(sig, thisword);
            qsort(sig, strlen(sig), 1, compchar);
            printf("%s %s\n", sig, thisword);
        }
    }

awk:

    $1 != prev  { prev = $1; if (NR > 1) printf "\n" }
                { printf "%s ", $2 }
    END         { printf "\n" }

shell:

    sign <dictionary | sort | squash >gramlist

Publication date: 1986. These programs still run today.


This month I opened a book that was written in 1977 and unexpectedly found the annotated source code of an operating system in it. It’s chapter 5. Seventy pages of a slim, 300-page book that isn’t even about operating systems. It’s Architecture of Concurrent Programs by Per Brinch Hansen.

It says that the whole operating system, including some parts that aren’t in the book, is five thousand lines of code. It was written by a team of two.

Imagine understanding a whole operating system.

(In fact later I found out this book contains listings from four different small operating systems. It also contains a summary of the book in Danish.)

These old books are a delight to read. The programs in them are a delight to read. There is beauty and poetry in them. This will sound trite, but they were written in a simpler world. They really were. Programming was simpler. There was no client/server divide. No HTTP, no URLs, no network, no browser, no events, no queues, no GUI. There was no concept of your data living in some other database process. You always had your hands on the data directly. These programs have a clarity and simplicity and immediacy that is strange to us now. But I can’t shake the feeling this is what we need to be striving for.

If you haven’t experienced it, you can’t strive for it. You’re designing the platform we’re all going to be using tomorrow.

Read yesterday’s programs. We’ll all be better off.


The past is a foreign country: they do things differently there.

—L. P. Hartley

There is another reason to dip into the past. Travel is broadening.

Programming was once younger, and people had crazy ideas. Ideas that were uncontaminated by practice, formed by brains that weren’t ground down by hard experience.

One out of every thousand crazy ideas you hear is going to change your life.

Take functional programming. The first major functional programming language was APL, implemented in 1963. It looked like this (example from Wikipedia):

life←{↑1 ⍵∨.∧3 4=+/,¯1 0 1∘.⊖¯1 0 1∘.⌽⊂⍵}

And when did the golden age of functional programming begin? It was last Tuesday actually. Yeah. We need it today.

Seven years ago I remember sitting in an ECMAScript standard committee meeting, thinking we can’t expect JS programmers to understand closures and lexical scoping. I have never been more wrong in my life. FP is here to stay (with slightly different syntax, as it turns out). It only took 50 years to catch on.

The operating system I was telling you about earlier? That was never a commercial product. As far as I can tell it never went anywhere. And yet.

That book is full of ideas that make me think of Erlang (but it was written 9 years before Erlang) and even more so, Rust (but it was 25 years before Rust).

It makes me want to build new things.

27 January 2014

He who fights the future



Over a hundred years ago a Scandinavian philosopher, Sören Kierkegaard, made a profound observation about the future. … “He who fights the future,” remarked the philosopher, “has a dangerous enemy. The future is not, it borrows its strength from the man himself, and when it has tricked him out of this, then it appears outside of him as the enemy he must meet.”


We in the western world have rushed eagerly to embrace the future—and in so doing we have provided that future with a strength it has derived from us and our endeavors. Now, stunned, puzzled and dismayed, we try to withdraw from the embrace, not of a necessary tomorrow, but of that future which we have invited and of which, at last, we have grown perceptibly afraid. In a sudden horror we discover that the years now rushing upon us have drained our moral resources and have taken shape out of our own impotence. At this moment, if we possess even a modicum of reflective insight, we will give heed to Kierkegaard’s concluding wisdom: “Through the eternal,” he enjoins us, “we can conquer the future.”


The advice is cryptic; the hour late. Moreover, what have we to do with the eternal? Our age, we know, is littered with the wrecks of war, of outworn philosophies, of broken faiths. We profess little but the new and study only change.



—Loren Eiseley, The Firmament of Time, 1960.

ORD Camp 2013

Last year I spent a weekend at ORD Camp, a Chicago unconference populated by hackers of all descriptions.


There was a heady mix of 3d printing enthusiasts, robot-builders, programmers, drinkers, dreamers, proud Chicagoans, and werewolves. Everyone brought a talk, or an activity, or at least a bottle.


I couldn’t make it this year. Here’s what I remember from 2013.


  • Christina Pei brought lockpicking sets and gave everyone a chance to use them. It’s not hard to pick a Master padlock with two simple metal tools! I found out about The Open Organization of Lockpickers (@toool). And I learned that it is legal to own lockpicks in 49 states, the one exception being my own home state of Tennessee. (Note that many cities have their own lockpick possession laws, so the rest of you are not necessarily out of the woods.)


  • Strangers host and producer Lea Thau crammed a two-hour-plus workshop on storytelling into 40 minutes, leaving time for attendees to write—and then for a few to tell—true stories of their own lives. Awesome.


  • Human cannonball Kate McGroarty (@KateMcGroarty) did a mile-a-minute improv workshop. I learned: you can define a character just by choosing a funky shape for your spine. Or the way you walk. Also, the best gift you can give your improv partner is a name. Marvel at the speed with which your mind fills in a character for: “Earl”. “Anastasia”. I think my wife and I are going to host an improv party at our house. When did we stop being shy people? I blame Kate.


  • Third Coast International Audio Festival artistic director Julie Shapiro’s session was called Choose Your Own Audio Adventure. Lights off, bunch of nerds in a room listening to beautiful short scraps of audio and voting on what to play next. One example: Radiolab’s story of what happened on day 86 of Aleksander Gamme’s trek to the South Pole (just the first 5 minutes).


  • Jim Blandy’s session, live-coding the lambda calculus, was as virtuosic as you’d expect, if you know him.


  • Around a table, I asked Louis Wasserman what kind of math he studied before he got into programming, and he said combinatorics. What little I know of combinatorics (I said) is a few counting techniques, and the proofs for those always seem really ugly, with a lot of tricky case analysis. To counter that notion, Louis showed me a surprising proof of a theorem about complete subgraphs. I hope I get around to blogging it here later. It’s dead sexy.


  • And I got to chat with Jennifer Brandel, lead producer of WBEZ’s Curious City and organizer of Dance Dance Party Party. The common thread here seems to be: these are beautiful, beautiful things that could totally happen in your city.


Even with all that, my favorite parts of the trip are not even on the list, because they’d be boring to you. Meeting people I’ve wanted to meet for a long time. Listening to music.

Honestly I spent most of the time at ORD Camp sick or else in introvert people-overload. But the event is still unfolding in my head. It was unique, and I’m grateful for it.

Here, have a list of books

Someone linked me to an image, “Top 10 Books I Want My Kids To Read”. It’s now a dead link, but this isn’t about that particular list anyway.


The books on the list were not children’s books. They were books the author hoped his children would read, eventually. How, then, does such a list differ from “Ten Books I Would Recommend To Anyone”?


  1. You might choose books that act on the mind, hoping they will help fulfill your parental responsibility.
  2. You might choose books that tell your kids who you are, and why.
  3. You might choose books that are special to your family.

I guess in the first category, I’d pick some of these:

  • Silas Marner by George Eliot. It’s not superb, but good enough, and it’s about how what you do changes you morally, even if your motives have nothing in particular to do with morality. Morality tales that resonate with one person often ring hollow to the next person, so this is no slam dunk.
  • A Tale of Two Cities. Amazing.
  • The Gospel of Luke. It’s just good, and the message is about love.
  • The Handbook of Epictetus. (However, I do also recommend this every time anyone asks for book recommendations.)

I’m tempted to put in a pair of books about science and how it works. Maybe The Demon-Haunted World by Carl Sagan and the book I’m reading now, The Firmament of Time by Loren Eiseley. But I’m not sure those rise to the level of the others, and it’s not like I’m well read enough in this area to make good picks.


In the second category:

  • Ox-Cart Man. This book tells more about me than anything. All the nerd stuff you see in my life is window dressing; the implicit moral background to this poem is where I’m really coming from. (Sorry to disappoint you!)
  • The Handbook of Epictetus. This again? Yes.
  • The Moon Is a Harsh Mistress by Robert Heinlein. I don’t know if I should recommend this. I haven’t tried to read it lately. I read it at an impressionable age and was impression’d.
  • Mountains Beyond Mountains by Tracy Kidder. If you can square this pick with the Heinlein pick, you understand me better than I do.

This list doesn’t make a very flattering self-portrait. I’m moralistic but I’m not sure what is right.


In the third category:

  • The Phantom Tollbooth by Norton Juster. Rhyme and reason for all ages.
  • The Thirteen Clocks by James Thurber. Rich verbal liquor in a fairy-tale-shaped container, and not a moral in sight.

That is only nine.


I don’t think any book I’ve ever read is indispensable, but books are indispensable.

20 January 2014

Lisp Machines

I’m doing some casual research on Lisp Machines. I find myself wishing for the patience of a reference librarian. Or just a very patient friend to lend moral support.


Dead links everywhere. Decay is a natural thing, and on the whole I am grateful that the Web decays. It just doesn’t suit my purpose at the moment.


As recently as 2007:


A wiki has been set up to capture some notes about using lispm's
and the unlambda emulators.  Contributions are welcome and
encouraged.  Thanks to Dan Moniz for setting it up and all who
have contributed.

   http://labs.aezenix.com/lispm

Tim Newsham
http://www.thenewsh.com/~newsham/

The link is not exactly broken, but the wiki doesn’t work anymore. Imagine two bits of software, once friends, one still alive, one mysteriously vanished who knows when.


But soon enough, the message itself will vanish from the web too; and this message you’re reading now; and in time, the Blogger service. Gone the way of the Lisp Machine itself, interesting to some, gone over the horizon of what’s worth preserving.

01 April 2013

The halting problem and garbage collection

At Talk Day, after Brandon Bradley’s talk about the halting problem, Martha Kelly asked: if it’s impossible to predict the future course of an arbitrary program, how can garbage collectors tell which objects will be used and which can be collected?


Martha’s right. There is a connection!


    function easy() {
        var obj = new Object;
        gc();
    }

If gc() does garbage collection, then certainly it could collect the new Object, since we are never going to use it. But many garbage collectors will not collect it yet, since there is still a variable referring to that object.


You might think, well, those garbage collectors are stupid. If I ever wrote a garbage collector, it would be perfectly precise, retaining only objects that the program will use in the future, and collecting all others.


Alas, there’s a simple halting-problem-like proof that a perfectly precise garbage collector is impossible.


    function hard() {
        var obj = new Object;
        var objWasGarbage = isThisObjectGarbageOrNot(obj);
        gc();
        if (objWasGarbage)
            alert(obj);
    }

The Catch-22 goes like this:


  • If isThisObjectGarbageOrNot(obj) returns true, then the Object is not garbage. It will be used later by alert(obj).
  • But if it returns false, then the object really is garbage: it will never be used.

So it seems the function isThisObjectGarbageOrNot() cannot possibly live up to its name in this case. But that is exactly the function which our perfect garbage collector would need in order to do its job perfectly! Therefore a perfect GC cannot exist.


Real GCs err on the side of caution and retain any objects that aren’t known to be garbage. They use reachability as a (conservative) estimate of which objects the program will use in the future.

27 August 2012

Privacy as a weapon

Remember the bogus bomb threats at the University of Pittsburgh? Apparently they were sent by email, anonymously, through a system called Mixmaster. The email passed through a computer in New York, which the FBI seized in April.

Now it is natural to wonder why we even have such things. Why is it OK for people to send email anonymously when it can cause such mayhem? Here’s what the computer’s owners have to say about it:


Q: Doesn’t Mixmaster/anonymous remailers enable criminals to do bad things?

A: Criminals can already do bad things. Since they’re willing to break laws, they already have lots of options available that provide better privacy than mixmaster provides. They can steal cell phones, use them, and throw them in a ditch; they can crack into computers in Korea or Brazil and use them to launch abusive activities; they can use spyware, viruses, and other techniques to take control of literally millions of Windows machines around the world.

Mixmaster aims to provide protection for ordinary people who want to follow the law. Only criminals have privacy right now, and we need to fix that.


All this is true, up to a point. Criminals have actually done all those things. It is also entirely plausible, though, that the particular culprit in question chose Mixmaster. Shortly after that server was seized, the bomb threats stopped.

My thoughts about privacy have changed. I used to think this:

People who keep secrets have something to hide.


I understood at the time that it was a simplistic truism, but it seemed useful anyway. But it’s not useful, because:

People who wear clothes have something to hide.


See? It just doesn't work. Here is what I think now:

Everyone has something to hide from a sufficiently reprehensible adversary.


It doesn’t trip off the tongue quite as lightly.


The FBI in this case was presumably acting with the best intentions, but many governments around the world are plenty reprehensible. Privacy cuts both ways. The ability to track down a miscreant sending bogus bomb threats is exactly the same thing as the ability of an oppressive government to track down activists and rebels and kill them. This is a real concern in some places, and people in those places have to use secure systems that protect their privacy or else give up the fight.


I do think it’s good to have some form of technological constraint on government surveillance, in addition to a reasonable system of checks and balances (requiring warrants for wiretaps, for example). Tracking people down and finding out every detail of what they’ve been doing should be hard. If it’s not, the government will eventually just track everything we do.

People who work on privacy and censorship-circumvention software have already shifted to building systems where there’s no central equipment to seize. Systems like Tor. Governments still have ways of attacking such systems, technologically and otherwise. “How governments have tried to block Tor” is a startling and absolutely fascinating 2011 talk about this. Watch the first five minutes of that.


One last thing. Anyone in the U.S. will recognize the “Criminals can already do bad things” quote as an argument against gun control. Whether it’s anonymity or a handgun, powerful tools have both offensive and defensive uses. Giving everyone such power is dangerous. Taking this power away from the people is dangerous.