I guess I haven't been around here much.
I've been busy writing a virus that, whenever a machine does floating point multiplication, rounds off the low bits into a secret account. So far I've got 2.153591407913833e-149.
The other day I learned a short proof that the Propositional Calculus is consistent.
This was a bit of a surprise to me since I’m steeped in all sorts of much more advanced Gödel results, which I’m not really qualified to contemplate, and the whole thing makes it seem like proving a system consistent should be a Herculean effort. But for PC it turns out to be rather simple.
I won't try to explain here what PC is. But I will try to explain what “consistent” means.
The key is to split everything you know about boolean logic into two disjoint categories: semantics and derivations.
The semantics of PC are about truth-values. In a semantic view of the formula ((P ⊃ Q) ⊃ (¬Q ⊃ ¬P)), the letters P and Q are boolean variables, ¬ is boolean NOT, and ⊃ is a binary operator such that a ⊃ b means "NOT (a AND NOT b)". This particular formula is valid, which means that no matter what values you assign to P and Q, the formula evaluates to true.
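To make "valid" concrete, here is a tiny brute-force check of that formula (my own illustration in Rust, not part of the original argument). It tries all four assignments to P and Q:

```rust
// Check that ((P ⊃ Q) ⊃ (¬Q ⊃ ¬P)) is valid by trying every assignment.
// "a ⊃ b" is encoded as !(a && !b), per the definition above.
fn implies(a: bool, b: bool) -> bool {
    !(a && !b)
}

fn main() {
    for p in [false, true] {
        for q in [false, true] {
            let formula = implies(implies(p, q), implies(!q, !p));
            assert!(formula, "not valid for P={p}, Q={q}");
        }
    }
    println!("valid: true under all four assignments");
}
```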
Derivations are proofs. There's a formal definition of what constitutes a proof in the Propositional Calculus: you're given some axioms to start with, and some derivation rules that produce new theorems from those axioms.
Key point: those derivation rules, and derivations generally, don't operate on boolean values at all: it's more like processing the formulas themselves, as strings, using regular expressions. You can check a proof without knowing that "¬" means NOT or that "∧" means AND. (For this reason, instead of "semantics and derivations", the two categories are typically called "semantics and syntax".) Conversely, you can show that a formula is semantically valid without proving it with a derivation—just check all possible combinations of boolean values.
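To show what "operating on formulas as strings" can look like, here is a toy version of the Modus Ponens rule (my own sketch, far cruder than a real proof checker). It never evaluates anything to true or false; it only matches text:

```rust
// Toy Modus Ponens: from a premise `p` and an implication written as
// "(p ⊃ q)", produce "q". Pure string matching; no truth values anywhere.
fn modus_ponens(premise: &str, implication: &str) -> Option<String> {
    let prefix = format!("({premise} ⊃ ");
    if implication.starts_with(prefix.as_str()) && implication.ends_with(')') {
        Some(implication[prefix.len()..implication.len() - 1].to_string())
    } else {
        None
    }
}

fn main() {
    assert_eq!(modus_ponens("P", "(P ⊃ Q)"), Some("Q".to_string()));
    assert_eq!(modus_ponens("P", "(R ⊃ Q)"), None);
}
```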
But both views are well-defined. There's nothing mysterious about boolean operators. And there's nothing all that mysterious about string manipulation either.
PC is considered "sound" if every theorem you can prove (in the derivation system) is in fact valid (semantically). It's a kind of agreement between these two independent systems.
It's pretty easy to show that PC is sound:
1. You can just manually check that each axiom is valid by trying all possible combinations of boolean values. Spoiler: they're all valid.
2. Then you must show that the derivation rules of PC preserve validity. If the inputs to the Modus Ponens rule are both valid, for example, then the result will be valid. I'll skip the details but it's also pretty straightforward.
But taken together, (1) and (2) mean that every step of a proof is necessarily a valid formula. And so the conclusion of every proof is valid, and thus PC is sound, Q.E.D.
Now, a deductive system is "consistent" if it is impossible to prove two contradictory theorems (that is, two theorems that are the same except one adds a ¬ operator at the front). (There are other possible definitions of an inconsistency, but they're all equivalent as far as PC is concerned.)
So here we are finally.
Theorem: PC is consistent.
Proof: Suppose we did have proofs in PC of two contradictory theorems. Both must be valid, since PC is sound. But if the first is valid, it always evaluates to true. And in that case the second would be always false, and thus not valid—a contradiction. Therefore there are no such contradictions in PC, and therefore it is consistent.
What if you wanted to create your own Rust attribute that’s sort of like #[test], but not #[test]?
That is, you’ve got an attribute macro—let’s call it #[subcommand]—and at run time, you want a list of all functions tagged with that attribute. Maybe your program is a command-line build tool with several subcommands, and you want to use this list to process the command line, or even just to print out a nice help message.
You can, of course, manually write out the list you want, and just accept that you’re going to have to maintain it:
```rust
pub fn subcommands() -> Vec<SubcommandFn> {
    vec![
        build_command,
        run_command,
        help_command,
        ...
    ]
}
```
That’s how cargo does it. Can we get rid of that boilerplate?
It's not easy to see how! The list would be a global variable. Rust strongly discourages those. Worse, your #[subcommand] attribute would have to somehow add values to the list, and Rust has no support for initialization code (code that runs “on startup”, before main)—much less sprinkling that kind of code throughout a crate.
You could try writing a build.rs script that looks at your code, finds all the #[subcommand] functions, and builds the list. But this would be tricky; it’s easy to imagine it failing to find functions that are defined inside macros, for example.
Today I asked a few friends how they might approach this, and got two different answers, each astonishing in its way.
You can use a custom linker section.
Amazingly, you can put the #[link_section="your_section_name"] attribute on the constants generated by your #[subcommand] procedural macro, and the Rust compiler will put those constants together, in a special place in the resulting binary. Accessing this at run time is then some platform-specific magic.
I didn’t ask for details. Sometimes it’s better not to know.
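For readers who do want the details, here is roughly what it can look like. This is only a sketch: the linkme crate (not something my friend mentioned, just the packaged-up version of this trick that I know of) hides the link-section attribute and the platform-specific magic behind a macro, and every name below (SubcommandFn, SUBCOMMANDS, build_command) is a placeholder of mine:

```rust
// Sketch only. Assumes `linkme = "0.3"` in Cargo.toml.
use linkme::distributed_slice;

pub type SubcommandFn = fn(&[String]) -> Result<(), String>;

// The registry. linkme puts every element below into one linker section
// and exposes the whole section as a slice at run time.
#[distributed_slice]
pub static SUBCOMMANDS: [(&'static str, SubcommandFn)] = [..];

// In the real thing, the #[subcommand] proc macro would expand to an
// entry like this one, next to each tagged function:
#[distributed_slice(SUBCOMMANDS)]
static BUILD: (&'static str, SubcommandFn) = ("build", build_command);

fn build_command(_args: &[String]) -> Result<(), String> {
    println!("building...");
    Ok(())
}

fn main() {
    // No hand-maintained list: the slice was assembled by the linker.
    for (name, _run) in SUBCOMMANDS.iter() {
        println!("subcommand: {name}");
    }
}
```

Whether losing the explicit, hand-written list is a good trade is exactly what the second answer below is about.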
Maybe it’s better not to do this.
Put yourself in the shoes of a programmer who’s unfamiliar with this codebase. Is it helpful to you that the list is autogenerated? How do you find the code for the build subcommand? You'd have to know where that one function is defined, and it could be anywhere, because the #[subcommand] attribute works anywhere.
This friend writes: “Since the user's experience is of a global table of subcommands, one could argue that the code should reflect that, that adding a subcommand entails adding an entry to a global list. Then it’d be vastly easier to find definitions, to find conflicts, to notice overlap.”
Most of the axioms of ZF set theory are pretty easy to understand. They tell which sets exist, creating a rich universe of objects for set theory to talk about.
Null. There is a set with no elements.
Pair. Given two sets a and b, there is a set that contains exactly those two sets.
Union. Given a set of sets S, there is a set that’s the union of the sets in S. In other words, a set that contains exactly the elements of the elements of S.
Even if you stopped here, you’ve already agreed to the existence of infinitely many sets, and sets of every finite size!
Infinity. There is a set that corresponds to the natural numbers. It contains the empty set, and for each set x in it, it also contains a set with one more element, consisting of all x’s elements, plus x itself.
Subsets. Given a set S, there is a set of all subsets of S.
Comprehension. Given a set S and a function F, there is a set that contains exactly F(x) for all x in S.
Separation. Given a set S and a predicate P, there is a set that contains exactly those elements of S that satisfy the predicate.
In the formal language of mathematics, these axioms are mostly phrased quite similarly: “[For all <given ingredients>,] there exists a set A such that, for all x, x is in A if and only if <properties>.” The basic linguistic technology of populating a universe.
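For instance, here is how Pair and Separation might be spelled out in that formal language (my rendering; the exact notation varies from book to book):
Pair: ∀a . ∀b . ∃A . ∀x . (x ∈ A ⟺ (x = a ∨ x = b))
Separation: ∀S . ∃A . ∀x . (x ∈ A ⟺ (x ∈ S ∧ P(x)))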
The other two axioms of ZF are not like these. I’m saving those for later.
Over the past couple of years, my notebooks are increasingly full of questions. I wonder more, and read even slower.
Stars mark questions that are at least a bit beyond my grasp so far.
There is a mathematical notation for the sets postulated by each of these axioms. For example, Null points to the empty set, which is written ∅. How do you write the others?
Does your favorite programming language have anything like sets? Sure it does—arrays or lists are close enough. How many of the axioms does your language satisfy? How can you write those sets? For example, the null set could be written [] in JavaScript. Most languages I know provide at least Null, Pair, Union, Comprehension and Separation, though not necessarily under those names! (A rough sketch in Rust follows these questions.)
Famously, if there were a “set of all sets”, then you could use Separation to obtain a contradiction. How? (Answer.)
If you apply Union to the null set, you get the null set. (There are no elements of elements of ∅.) Is there any other kind of set that is the union of its own elements?
Which axioms can take a set and prove the existence of a larger set? Do these axioms imply the existence of any sets larger than the one postulated by Infinity? If so, how?
✻ How is it that these axioms, applied finitely many times, can construct objects such as non-computable numbers?
✻ What does “exist” mean here?
✻ The standard version of Infinity is rather particular. Is it possible to write “there is a set that is infinite” in the language of set theory? Is that not sufficient?
✻ The last two axioms, Comprehension and Separation, require funny ingredients: a “function” and a “predicate”, respectively. This produces a strong sympathetic vibration deep in the computer science part of my brain. It seems like these axioms are ways to turn mathematical expressions into sets. In programming terms, they turn code into data. Can this be made precise? Do the axioms in effect say “There is an oracle that serves as an ‘interpreter’ for mathematical expressions”? Is this like the relationship between lambdas and the functions they compute?
✻ Why can’t Comprehension and Separation be written in terms of sets? All functions you could productively use with Comprehension are sets, right? (Functions are relations, and relations are sets of ordered pairs, which can be expressed using sets.)
✻ Why isn’t Separation redundant? Doesn’t Subsets postulate the existence of all subsets of a set? And Separation the existence of a particular subset?
✻ It’s pretty common in math to say, “Let X be the smallest set satisfying the following conditions:” and provide a list of rules like the axioms above. Why wouldn't this work for defining the universe of sets? That is, “Let V be the smallest class of sets satisfying the following axioms: ...”. Is this kind of approach inexpressible in the language of set theory, or just logically unsound?
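Coming back to the programming-language question above: a rough sketch in Rust, with Vec standing in for "set" (so duplicates and ordering are ignored, and this is only an analogy, not a model of ZF):

```rust
fn main() {
    // Null: the set with no elements.
    let null: Vec<i32> = Vec::new();

    // Pair: given a and b, the set {a, b}.
    let pair = vec![1, 2];

    // Union: given a set of sets, the set of their elements' elements.
    let s: Vec<Vec<i32>> = vec![vec![1, 2], vec![2, 3]];
    let union: Vec<i32> = s.into_iter().flatten().collect();

    // Comprehension: the set of F(x) for all x in S.
    let squares: Vec<i32> = pair.iter().map(|x| x * x).collect();

    // Separation: the elements of S that satisfy a predicate P.
    let evens: Vec<i32> = union.iter().copied().filter(|x| x % 2 == 0).collect();

    println!("{null:?} {union:?} {squares:?} {evens:?}");
}
```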
When you learned math as a kid, you started from the very concrete: learning to count physical objects.
The foundations of math progress the opposite way, from the vague to the specific. The first thing you need to know is the use of variables as placeholders. Partly this is because variables are powerfully expressive—eventually, you’ll need them to write really interesting statements. But the other reason is that you start out knowing nothing. The less you know, the more gaps there will be in the statements you make. Variables fill gaps.
Now you’re ready to study propositional logic. The axioms of this field are things like,
A ∧ B ⟹ A
“If A and B, then A.”
The variables are placeholders for statements. The purpose of an inane-sounding axiom like this one is to tell precisely what “and” means in logic.
Next is the predicate calculus. Logic had variables that stood for statements; predicate calculus offers several kinds of statements you can write and plug into those logic variables. The variables of predicate calculus stand for two kinds of thing:
objects, the actual things we will eventually get around to talking about, like numbers, shapes, points, functions, and so on; and
predicates, which are statements about objects: “x is even”, “s is convex”, “p has integer coordinates”, and so on.
But predicate calculus does not by itself say what the objects are. You can take predicate calculus and apply it in a universe where the objects are points, or where the objects are functions, and it will work equally well in either setting.
An axiom of the predicate calculus:
(∀x . P(x)) ⟹ P(t)
“If for all x, P(x), then P(t).”
This axiom is telling you what “for all” means. x stands for an object; P is a predicate. Note that we are still using the language of logic, but as we proceed, the variables are getting rather more specific, and we’re adding new vocabulary.
Next is set theory. This is a particular kind of universe for you to use predicate calculus with. Whatever we’re talking about in math, we will want to talk about sets of those things: sets of points, sets of numbers. It turns out you’ll also very soon want to talk about sets of sets of things, so set theory lets you build sets of sets, sets of sets of sets, and so on without end. The variables in set theory stand for sets.
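To make that concrete, here is one standard encoding (the von Neumann one) of the first few natural numbers as sets, each number being the set of all the numbers before it:
0 = ∅
1 = {∅} = {0}
2 = {∅, {∅}} = {0, 1}
3 = {∅, {∅}, {∅, {∅}}} = {0, 1, 2}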
An axiom of set theory is:
∀S . ∃T . ∀x . x ∈ T ⟺ x ∈ S ∧ P(x)
“Given a set S and a predicate P, there exists a set that contains exactly the elements of S that satisfy P.”
The surprising thing is that now you’re done.
The universe of sets contains sets that have exactly the same structure as: the integers; the Cartesian plane; the functions on real numbers; and so on. It contains basically everything mathematicians study*, so there's no need to go on adding axioms**. The picture is*** filled in.
*not totally true
**except when there is, or when mathematicians just feel like it
***never
I recently went back to school on parsers. I always had an involuntary mental resistance to the whole field, because the theory, while well-established, is inelegant. Oh, it starts out pretty enough, with the regular languages, but from there it gets gradually worse and worse until you’re reading things like “and if there are any slots with multiple entries in table X, then the grammar isn’t LR(1).” The algorithm as definition.
Plus, I was totally unable to map the workings of LR parsers to anything else in my experience. So, apropos of that, here’s a journal entry from my wanderings in the wilderness:
The stack in an LR parser is a lot like the C stack.
When you reach the end of a production, you return, popping the stack—much like how returning from a C function pops its stack frame. Of course, in C, the return address is on the stack; whereas in an LR parser, you instead use the corresponding stack value to look up the next state in a goto table... rather like returning to a switch statement.
Like the C stack, the LR stack is heterogeneous. A typical stack data structure contains elements that are all the same type. While you can certainly write an LR parser where the stack is just a stack of grammar symbols, you can also assign a different type to every terminal and nonterminal in your grammar, even types of different sizes, and the whole scheme still works. In fact, real parsers do use the LR stack to store partially built parse trees, although I don’t know that they support nonterminals of many different types. They probably should.
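Here is a small sketch of what that heterogeneous stack can look like (my own toy, not output from any real parser generator): each grammar symbol carries its own payload type, and a reduction pops entries of known shape and pushes a differently-typed one. State bookkeeping and the goto table are omitted.

```rust
// Sketch of a heterogeneous LR stack: each slot carries the payload for
// one grammar symbol, so different symbols can have different types.
#[derive(Debug)]
enum StackEntry {
    // terminals
    Int(i64),
    Plus,
    // nonterminals carry partially built parse trees
    Expr(Box<Tree>),
}

#[derive(Debug)]
enum Tree {
    Leaf(i64),
    Add(Box<Tree>, Box<Tree>),
}

// One reduction, for the rule  Expr -> Expr '+' Int :
// pop three entries of known shape, push an Expr node.
fn reduce_add(stack: &mut Vec<StackEntry>) {
    let rhs = stack.pop();
    let plus = stack.pop();
    let lhs = stack.pop();
    match (lhs, plus, rhs) {
        (Some(StackEntry::Expr(l)), Some(StackEntry::Plus), Some(StackEntry::Int(n))) => {
            stack.push(StackEntry::Expr(Box::new(Tree::Add(l, Box::new(Tree::Leaf(n))))));
        }
        _ => panic!("reduce_add called in the wrong state"),
    }
}

fn main() {
    // Simulate having shifted "1 + 2".
    let mut stack = vec![
        StackEntry::Expr(Box::new(Tree::Leaf(1))),
        StackEntry::Plus,
        StackEntry::Int(2),
    ];
    reduce_add(&mut stack);
    println!("{:?}", stack);
}
```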
I think of each state in an LR parser as a quantum superposition of possible parses of the input so far. The next token might rule out some of these possibilities, fully or partially collapsing the waveform. What makes it LR is that at any given point, all the possible parses happen to assign exactly the same types to everything on the stack, by construction. I imagine a plain old C program (maybe a recursive descent parser) throwing up its hands and saying, “well, it's not clear which branch to take, we’d better just take them all” and splitting off half a dozen universes. The situation remains miraculously executable on normal hardware, because of this curious stack discipline: all the possible universes end up doing exactly the same thing with the same data at each step.
LR parsers, and automata generally, feel like debuggers to me. It’s hard to put my finger on it: something about the notion of “stepping”, and considering all the program code and state as mere data. The table generation algorithm is abstract interpretation of… something; but again, I’m not sure what. An Earley parser, maybe? The grammar itself, somehow considered as a program?
Herb Lin argues that the NSA isn't responsible for the recent wave of ransomware attacks based on a vulnerability the NSA discovered, hoarded, and failed to secure.
In his essay, Lin says that computer administrators and Github are really to blame for the attacks. One almost gets the feeling he’s casting about for someone else to blame, anyone but the NSA:
Microsoft issued a fix for the vulnerability in question in March—a month before it was released by the Shadow Brokers. [...] System administrators should be patching their systems when patches are available unless there’s some very good reason for not doing so.
[...]
Does NSA bear any responsibility for the outbreak of WannaCrypt through its stockpiling of some vulnerabilities that were subsequently revealed? Sure, [...] But one could argue just as well that Github—the distribution channel for the Shadow Brokers—was equally responsible for making the vulnerability and exploit code widely available. So why isn’t anyone complaining about Github’s actions in this regard? At the very least, both entities share some degree of responsibility—NSA for allowing the vulnerability to be leaked, and Github for publicizing it.
Let's set aside blame for a moment and just talk about practical security. How do we prevent the next attack?
Any given administrator could only have protected their own systems—doubtless thousands of them did, by keeping their software up to date. No administrator or existing organization of administrators could have prevented the overall attack, and it’s foolish to expect them to prevent the next one. Likewise, it was never in Github’s (or anyone else’s) power to prevent dissemination of the NSA’s trove of software vulnerabilities, because as we all know, Github is not the only way to publish files on the Internet.
Think how staggeringly poor your grasp of the Internet (or blame itself) would have to be, to blame Github for this.
There are only a few parties that could have prevented this attack: the criminals and hostile foreign powers behind it, Microsoft, and the NSA.
We must look to one of these to prevent the next attack. I don't expect the criminals and hostile foreign powers of the world to cooperate. Microsoft is already doing its level best, and publicly begging for the NSA's help. This leaves the NSA, an organization whose mission is to protect the United States from exactly the sort of mayhem it enabled here.
Even if, like Lin, we want to talk about blame, common sense says only parties that could have prevented an event can be blamed for it. Bottom line: Herb Lin’s column is logically and ethically incoherent. He should retract it and start over.
Two extremely corpulent ladies, who were much incommoded by their embonpoint, had caused to be made for them under-petticoats mounted on hoops, which they only wore in the privacy of their own apartments. One summer evening, however, they were tempted to take a stroll in the Tuileries so accoutred. In order to avoid the remarks of the mob of footmen at the gates, they entered by the orangery, but lords and ladies are not less curious than their lacqueys. As soon as the pair appeared they were surrounded. The numbers rapidly increased; they had barely time to retreat behind a bench, and but for the protection of a musketeer they would have been smothered or crushed to death by the pressure of the crowd. The poor women reached home more dead than alive, believing they had caused a great scandal. Far from that, they had set the fashion to the Court as well as the city.
—from A Cyclopaedia of Costume or Dictionary of Dress, Including Notices of Contemporaneous Fashions on the Continent; and A General Chronological History of the Costumes of the principal Countries of Europe from the Commencement of the Christian Era to the Accession of George the Third by James Robinson Planché, Esq.
Either that (says the book), or it came from England; or maybe Germany. Or the theatre. Nobody really knows.
The consequences of bitrot are not contained in the domain of the system administrator. Here’s Mario Carneiro on the state of a formal, computer-checkable proof of a mathematical theorem:
I was blown away when I discovered that Isabelle’s proof of the prime number theorem was done in Isabelle2005, and it was not upkept along with the AFP (archive of formal proofs). Moreover, apparently not even the creators know how to run Isabelle2005 anymore, and everything has changed since then so it’s not backward compatible by any stretch. Basically, the proof is lost at this point, and they are more likely to formalize an entirely different proof than upgrade the one they already have.
Meanwhile prose proofs dating back to the 1800s are still good. Though time will presumably destroy those too, if they are not “upkept”. Languages evolve and go extinct. Books are destroyed.
I have subscribed to a daily print newspaper.
I really enjoy a good paper, but the one I subscribed to is not a good paper. It’s owned by Gannett. A lot of the content is exactly what’s in USA Today. Section B of my paper is, as far as I can tell, USA Today’s section A verbatim. And local coverage is thin. Sometimes the front page has 12 column-inches of news on it. A few stock photos and two-inch headlines holding up not so many words.
I knew all this going in. It’s why I never subscribed before.
I subscribed for three reasons.
The handful of reporters left in that newsroom are vital civic infrastructure. The world is going to get a lot darker when they’re gone, and we’ll all wonder why.
Not scowling at newsprint was a major gap in my curmudgeonly persona.
I need a better news source.
We are going to have to have a little talk about that last thing. Shut up, you knew this was coming. Pull up a chair.
I know you. You get most of your news from Twitter and Facebook. (Or maybe you’re one of those assholes that bragged to me at a party that you get all your news from The Daily Show. Well, congratulations. But your news comes in headlines, followed by applause or boos, followed by sketch comedy, just like Twitter. It doesn’t get any shallower and you’re no better than anyone else.)
Oh, you also listen to This American Life? Gold star.
So how’s it going?
Even the worst newspaper is pretty great compared to the Internet. The ads are less intrusive. Even the wrap-around ad that I have to physically tear off the front page before I can read my paper is less intrusive than the crap people have to endure or physically dismiss online. (Yeah, I know, you use an ad blocker, so you are blissfully unaware of this.)
When the Internet first came along, we were all pretty excited about getting out from under the filters that the media imposed on us. Instead, our friends and the people we admire would be our filters. Well, we’ve discovered something interesting about ourselves. The filter we create for ourselves is dramatically worse. We never have any real idea what’s going on. We read more trash. We read more pop culture fluff. We have invented whole new genres of trash and pop culture fluff. We’re making ourselves worse.
Reading a newspaper is frustrating and enlightening and stupid and entertaining and anti-entertaining. The paper is chock full of content that’s not for me. ...And maybe that’s what’s so good about it. The people creating the content are not hostile toward me; they just don’t know I’m here. It’s relaxing.
There was a time not so long ago when software had a shelf life of five or ten years, easy.
Of course there was a time before that when software was written for very specific machines, like an Atari 2600, and those programs still run today on those very specific machines. Their shelf life is unlimited. But I’m talking about the PC era, when there were constantly new machines and new OS versions being released, and yet old software would still run on the newer stuff for years and years.
That time is over. Ubiquitous internet access is the culprit. We’re on a treadmill now.
Say you use a program called Buggy and Buggy uses OpenSSL. If OpenSSL releases a critical patch, nobody is going to wait to see what the Buggy team thinks about the new version. The old OpenSSL library is going to be deleted off your computer with prejudice, and the new one dropped in its place. Buggy will immediately start using this new OpenSSL version it was never tested with (and never will be -- Buggy’s maintainers are too busy testing their current codebase). The longer this goes on, the greater the difference between the environment on your computer and any environment in which Buggy could possibly have been tested by its maintainers. Eventually, something breaks.
A security-sensitive library like OpenSSL may sound like a special case, but it’s not. For one thing, you don’t know which software on your computer is security sensitive. But also, you’re going to get updates that fix other (non-security) bugs. You probably want those fixes; you definitely want the security fixes. And given that, the optimum strategy is to keep updating everything. As long as all your software stays on the treadmill, you’re OK.
But unmaintained software now rots. It happens fast. We’re not talking about this. I don’t know why.
The headline this Monday was: “Former director of NSA backs Apple on iPhone ‘back doors’”.
“Look, I used to run the NSA, OK?” Hayden says. “Please, please, Lord, put back doors in, because ... that back door will make it easier for me to do what I want to do. ...
“But when you step back and look at the whole question of American security and safety writ large, we are a safer, more secure nation without back doors.”
I have to give Michael Hayden credit for changing his mind on this and for speaking up about it, but it is a little late. The right time to “step back and look at the whole question of American security and safety writ large” is when you are in fact the director of the National Security Agency.
The NSA is the agency charged with protecting U.S. information systems against foreign attack. Sure, that mandate is less sexy than NSA’s sigint mission, but it’s even more important in practice. We are vulnerable. And it’s great that signals intelligence and information security are under the same agency, since there are tradeoffs to consider... except that nobody is considering them. The head of the National Security Agency didn’t see security as any part of his job.
Apparently, he thought he had a sigint job. But really I suspect he thought he had a political job, working for the President.
That's a shame. If the current director of the NSA happens to be reading this: please do your country a service and take a good hard look at your job description. Or better yet, a good hard look in the mirror. Act like an adult. Do the job that needs doing.
“Einstein repeatedly argued that there must be simplified explanations of nature, because God is not capricious or arbitrary. No such faith comforts the software engineer. Much of the complexity he must master is arbitrary complexity […] because they were designed by different people, rather than by God.” —Fred Brooks
True facts:
git has outstandingly bad UI design.
The git man pages are written in such heavy jargon that I was never able to get anything useful out of them until very recently.
I just recently broke through some sort of internal motivational barrier and really learned git. Here’s what I did:
Promise to give a talk about git in front of a bunch of people. I can’t recommend this, but it happened.
Read the Pro Git book. Nope. I tried several times, and the book is good and free, but for some reason I couldn’t get through it.
Keep a list of my open questions. This is what really worked for me. I made a file (ignorance.md) containing stuff like:
What exactly is HEAD?
What exactly is the reflog?
What exactly is the index? How is it stored?
How does git pull differ from git pull origin master?
What is this `origin/master` syntax? When would I want to use it?
Then I attacked questions in no particular order, plugging them into Duck Duck Go or messing around with git in a throwaway repo.
When I got an answer, I typed it into the file, in my own words. Each answer led to three or four new questions, so I put those in there too, and kept going. Right now I have 39 open questions. (“[W]e do not yet know all the basic laws: there is an expanding frontier of ignorance.” —Richard Feynman)
Poke around in .git. There’s no substitute. (“Show me your flowcharts…”)
Randomly read GitGuys.com. It’s incomplete, but what’s there is great.
I don’t think all this took more than maybe 8 hours, really. At some point, the git man pages started making sense... mostly.
I would try that ignorance.md thing again. It’s been fun.
I think I am going to be a fairly extreme determinist until someone convinces me otherwise.
This is not an informed position. My influences are:
Why did the Stoics believe the universe was deterministic?
Paraphrasing the Stanford Encyclopedia of Philosophy: “Chrysippus was convinced that the law of excluded middle applied even to contingent statements about particular future events.” The truth of a proposition never changes: if “jorendorff breaks his arm on 12 May 2015” turns out to be true tomorrow, then it is eternally true—it must be true already.
Huh! It hardly seems airtight, right? Inasmuch as the universe appears to be nondeterministic at quantum scale, there’s evidently a flaw in that logic somewhere.
However if you look past that, the Stoics start to look better and better. And let me start here by sniping at their rivals. The Epicureans, like the Stoics, believed that natural laws governed pretty much everything that happened. Even better, unlike the Stoics, they were atomists. But they added something extremely weird to this system. They claimed that atoms drifting downward under their own weight would occasionally “swerve” in a nondeterministic way. Lucretius:
For if they had not this characteristic of moving out of the direct line, they would all fall downwards like drops of rain through the depths of the void; no collision would take place, no one atom would strike upon another; and so nature would never have produced anything at all.
In the Epicureans’ defense, I think I’ve heard that the asymmetry of the universe and the very-large-scale variability in its density are interesting puzzles for present-day cosmologists too. Even the atomic swerve, ludicrous as it sounds, is somewhat vindicated in the atomic-scale nondeterminism of quantum mechanics.
Later Epicureans at least saw the swerve as giving rise to free will. Lucretius again:
You must admit therefore that the same principle holds true of the atoms: that, apart from weight and the blows of one atom on another, there must be another cause for motion, from which comes this power that is born in us, since we see that nothing can be produced out of nothing. It is weight that prevents everything being caused by the blows of one atom on another, as it were by an external force; but it is the minute swerve in the atoms, taking place at no definite time or place, which keeps the mind itself from being governed by an internal necessity in all its actions, and from being as it were subdued by this necessity so as to be merely a passive subject.
There must be people who think exactly the same thing about quantum fluctuations. But there is an interesting difference: nondeterministic quantum effects have been carefully studied, and are apparently purely random, with probability distributions that can be derived from the theory. Any consistent deviation from randomness could be (at least probabilistically) observed, and would prove the theory wrong.
I wonder if truly random outcomes can be the source of what we call “free will”. It seems to me rather that free will must refer to the choices of us, of our character and our desires. Free will, then, is quite the opposite of randomness. Free will is not only compatible with determinism, it is a kind of determinism: self-determinism.
Stoic philosophy contains something along these lines:
Chrysippus used the illustration of a cylinder rolling down a hill as an analogy for actions that are within our control (Cicero and Gellius, 62C-D). It is true that the force that starts its motion is external to it. This is analogous to the impressions we have of the world. But it rolls because of its shape. This is analogous to our moral character. When our actions are mediated by our characters, then they are ‘up to us’. Thus, if I see an unattended sandwich and, because I am a dishonest person, steal it, then this is up to me and I am responsible. All things come about by fate but this is brought about by fate through me (Alex. Aphr. 62G). When, however, I trip and fall, knocking your sandwich to the floor, this is not up to me. The chain of causes and effects does not flow through my beliefs and desires.
I don't think nondeterminism means what we have assumed it means, and I have some computer-sciencey things to say about that later. But for now, this will do: It’s a mistake to rush to embrace randomness, desperately, lest the tyranny of a universe operating under natural laws reduce us to mere “passive subjects”. It’s the random choice that is, by definition, meaningless.