
Wednesday, December 10, 2008

Rule Following

"This was our paradox: no course of action could be determined by a rule, because any course of action can be made out to accord with the rule."
Ludwig Wittgenstein, Philosophical Investigations

"If the rule you followed brought you to this, of what use was the rule?"
Anton Chigurh, No Country for Old Men


One of the great myths of modern society is that we're following rules to obtain outcomes. I mean rules, roughly, in the sense described here, although I'll also feel free, for these purposes, to equivocate a bit between plans and rules. No harm should be done for now.

So the myth of rule following. We see it in software development (no one seems to notice, or if they do, they dare not mention, that the "rule" was changed a thousand times between the conception and completion of the project). We see it in the economy, in the social sciences, and indeed everywhere that the veneer of science and technology, and the almost pathological need for certainty, manages to obscure deeper truths about the fragility of our capacities.

Retrodiction, not prediction, is what we're best at, though it is unfortunately, and for obvious reasons, of little interest. And for various psychological reasons that I'm neither qualified nor interested in researching directly, we're strikingly good at painting failure after failure to predict what comes next with ex post facto explanations that make things just so. Political science is perhaps paradigmatic. It was common in the 1950s to prognosticate about how the (now defunct) USSR would be the preeminent superpower by the 1970s. France (yes, France) was widely thought to be emerging in the 1970s. Japan in the 1980s. China, of course, today. Our ability to keep proclaiming, generation after generation, cocksure predictions about the state of human societies in the next year, five years, decade (or, God forbid, century) is simply amazing, and defies logic. Yet we keep doing it. And we will keep doing it, in spite of all the evidence of consistent failure.

The psychology of rule following tells us that there's a rule (or a set of rules) that we followed to get to a result (or that will allow us to predict a future result). And when we achieve the result, we tend to confirm the application of the rule, when in fact (chances are) we've made innumerable on-the-fly judgments to get to our result, and then tidied things up after the fact by giving credit to the rule. So everything fits. Feels like progress.

On the other side of the coin, when a result is not achieved, instead of recognizing the general problem with using rules, we tend to assume that the particular rule we (claimed to have) used was in fact not adequate. And we set about looking for a new rule, which will of course prove inadequate in many contexts as well. Such is the nature of our (unexamined) selves. In a deeper and more honest sense we might someday admit that progress (at least in the messy, complex situations we're immersed in) is mostly a function of insight, adaptive thinking as the environment changes, and, well, luck. But we don't see it this way. It doesn't sound like something an expert would say.

So I think that in complex systems (like the weather, or any system where human choice can enter in), our capacity to formulate generalizations at time t that tell us how things will be at time t+n is effectively a chimera (whenever n is large enough, which depends on features of the system). Things are constantly new, and different. We formulate plans, and rules, and they guide us, but very loosely, because the environment is constantly in flux. Rules we've grabbed onto "work" only because we keep adjusting things to make them seem to work. The real driver is our own wits and insight. And with these much more powerful tools, software does get developed. The Surge in Iraq works. The Space Shuttle (mostly) arrives at the Space Station. And when I get correct directions and follow them, I typically get where I'm going (even if unexpected snags happen). And on and on.

Anyway, in some other post I promise to explain in more depth exactly how rule-following is a mirage, which I haven't yet done (so far I've mostly just asserted that it is). To be continued. Until then, rest assured that our rules are grains of salt. They just masquerade as so much more.

3 comments:

mijopo said...

The problem of induction isn't so much that we make inductions poorly; isn't it more about explaining why, or whether, our prima facie reliable inductive inferences are in fact as reliable as they are?

This post moves a bit quickly for me. You observe, correctly, that we often fail in making predictions. But I'm not sure how you get from there to general skepticism or despondency about rules. The predictions of the sort you discuss aren't things that are subject to simple rules, but complex situations subject to a vast complex and interacting set of rules. It's not surprising we often get them wrong, more impressive that we ever get them right.

Erik J. Larson said...

mijopo, the fact that we can't predict is more a symptom of a deeper problem; namely, the belief that we're actually following rules in the first place. When things work out for us, we suppose it's because of some rule(s), and when they don't, we must have had the wrong rule. But in fact, in both cases, very often and in many ordinary situations, we haven't followed the rule at all. We just think we have.

Now, I haven't cashed out the many connections between these claims in just this post (I think I mentioned as much somewhere in the post). And it'll have to wait until I can give it a longer treatment (or I'll try to break it up into several).

The general skepticism I have about rule following is in fact part of a project I'm working on that, in the end, will probably end up a full-length book.

Thanks for the comments.

Erik J. Larson said...

My kids allowed me a little more time to write. I didn't predict that! So...

"...complex situations subject to a vast complex and interacting set of rules. It's not surprising we often get them wrong, more impressive that we ever get them right."

My claim is not that we use rules successfully until they get too complicated and then we expect to sometimes or even frequently fail (this seems to be your point). My point is that we don't use rules at all--or not nearly as much as we think we do--when drawing inferences in dynamic, complex environments.

Why? Well, if we're just using rules, we're subject to their limitations. On the one hand, if they're general, we'll expect many exceptions to them (and there are always exceptions -- this is essentially the qualification problem McCarthy identified, and also the hard part of the frame problem, as Dennett describes it).

On the other hand, if we're using some "vast complex interacting ruleset to draw inferences", the question is how we select the right rules for particular reasoning tasks. This quickly leads to a regress, where we keep adding rules to figure out which other rules to use, and adding rules to figure out those... you get the point. Fodor digs into this rule-selection problem in much more detail in The Mind Doesn't Work That Way.
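
To make the regress vivid, here's a toy sketch of my own (in Python; nothing like this appears in Fodor -- it's only an illustration under the assumption that rule selection is itself rule-governed, and the names select_rule and max_depth are mine):

# Toy model: a "rule" is just a function, and choosing the right rule
# for a task is itself a rule-governed step -- which needs its own chooser.
def select_rule(rules, task, depth=0, max_depth=5):
    print("  " * depth + f"selecting a rule at meta-level {depth}")
    if depth == max_depth:
        # Nothing *inside* the rules licenses stopping here; the cutoff
        # is a judgment call made from outside the ruleset.
        return rules[0]
    meta_rules = [lambda r: r]  # stand-in for "rules about choosing rules"
    chosen_meta = select_rule(meta_rules, task, depth + 1, max_depth)
    return chosen_meta(rules[0])

rule = select_rule([lambda x: x + 1], task="add one")
print(rule(41))  # 42 -- but only because we stepped outside the regress

Without the arbitrary max_depth cutoff, selecting a meta-rule would demand a meta-meta-rule, and so on forever.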

Anyway, to fix the rule selection problem, we can invoke some global or abductive inference "mechanism". This is fraught with many, many difficulties which I can't possibly cash out in depth here. Fodor again is a good source on this, although take your pick as there are many others.

If we don't like the global inference solutions, we can adopt something like the massive modularity thesis (this all assumes the Computational Theory of Mind, which is essentially the idea that we're using rules to make inferences). MMT solves the abduction problem at the expense of creating a very implausible model of human cognition. Take your pick.

So that's the rough water with rules with respect to inference, and as a special but important case, inferring the future state of some system or situation given access to the past and present states.

Now, the question of whether theories -- sets of generalizations that apply to some domain -- are a different case from the mind-as-rules discussion above is interesting. I'd say the following: there's a deep link between the fact that we can't make smart robots (or, the fact that we can't see how we're smart on the assumption that we're only using rules) and the fact that our science seems frustratingly inadequate for predicting the behavior of complex systems. In both cases, features of the environment severely limit the usefulness of the generalizations (rules) we want to use to make predictions in that environment. And adding a statistical component to rules won't help us: statistical generalizations based on the past behavior of the system will fail to generalize whenever the system changes in a way not yet seen in its history -- the dataset. And then the model's prediction is wrong.
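
To make that failure mode concrete, here's a minimal sketch of my own (not from any source discussed here; it assumes numpy is available): fit a linear trend to a system's history, then let the system change regime and watch the extrapolation miss.

import numpy as np

# History: the system grows linearly for t = 0..9.
t_past = np.arange(10)
y_past = 2.0 * t_past + 1.0

# Fit a "rule" -- a statistical generalization -- to the history.
slope, intercept = np.polyfit(t_past, y_past, 1)

# The system changes regime at t = 10: growth saturates.
t_future = np.arange(10, 15)
y_actual = 21.0 + 0.1 * (t_future - 10)      # nearly flat; unseen in the data
y_predicted = slope * t_future + intercept   # the rule extrapolates the old trend

print(y_predicted)  # [21. 23. 25. 27. 29.]
print(y_actual)     # [21.  21.1 21.2 21.3 21.4] -- the rule is badly wrong

The model isn't broken by noise; it's broken because the dataset simply contains no trace of the new regime.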

Anyway, to reiterate, your comment about how impressive it is when we do manage to "get them right" assumes exactly what I'm questioning: whether we're using rules in the first place. My claim is that we're not using them for inference at all. What's impressive to me is how we keep thinking we are. But this is more of a thesis than a blog post that can be wrapped up neatly, and it will take me some time to develop and then spell out.