Monday, August 06, 2007

Types of uncertainty 

Some dogs didn't like this post last week.
What the professor has given us is the unctuous, rational technocratic explanation of the [I-35W] bridge problem. If you accept his framing of the question, his solution seems inevitable. Shake your head in sorrow, tell the victims that your "heart goes out to them," and move on. The dollars have delivered their verdict.
It's not inevitable, since we have as yet no determination of why the bridge failed. There will be a determination, certainly; someone will say X happened, then Y, and then the steel began to twist and the structure began to fall. That's what good engineers do: They document what happened, and then tell you what could have been done to keep that from happening again. And the political process will take that report and throw money at it. (Sometimes, they don't even wait for the report.)

But the Left will go further, as does this dog:
What we really know is that a bridge, being a mechanical device, will inevitably fail unless it is repaired or replaced. The risk of failure increases as the bridge ages. These things are just as true as "no bridge can ever be made perfectly safe." Since we know bridges will fail unless cared for, and since we also know that the consequences of a failure will be, to use the governor's own term, "catastrophic," it seems sensible to be rigorous in inspection of and fixing of bridges. As Spot wrote before, God didn't knock the damn thing down.
Let's assume Spot is not being literal, or serio-comedic in that last line. There was no Monty Python foot that squashed the bridge. But let's think seriously about bridge failures. Take three sets of events, each a subset of the previous one.
  1. The set of all bridge failures;
  2. The set of all bridge failures for which we can reconstruct explanations for why they fell ex post;
  3. The set of all bridge failures for which we could have constructed explanations for why they fell ex ante. (We assume that if you could have explained it ex ante, you can explain it ex post.)
We know, from the remains at the bottom of the river, that that bridge is part of set 1, the largest set. We do not know yet if it is part of set 2, though I will venture to say set 2 is very large. There will be a desire to put this event in set 2 -- I worry that there will be political pressure for the official investigation to do so -- but let's assume that the I-35W bridge failure is part of set 2. I think Spot's statement is trying to push me into a position that the bridge failure is not a member of set 2. I don't think that's the case.
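To make the taxonomy concrete, here is a toy sketch in Python. The event labels are made up; the only thing the sketch asserts is the nesting itself.

```python
# Toy illustration of the three nested sets; event labels are hypothetical.
all_failures = {"failure_A", "failure_B", "failure_C", "failure_D"}   # set 1

# Set 2: failures we can explain after the fact (ex post).
explainable_ex_post = {"failure_A", "failure_B", "failure_C"}

# Set 3: failures we could have explained before the fact (ex ante).
explainable_ex_ante = {"failure_A"}

# The nesting described above: set 3 within set 2 within set 1.
assert explainable_ex_ante <= explainable_ex_post <= all_failures

# The interesting residual: failures we can narrate only with hindsight.
print(explainable_ex_post - explainable_ex_ante)  # {'failure_B', 'failure_C'}
```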

What would it mean to be part of set 2 and not set 3? It would mean that what happened to that bridge is an event that we can explain, but that we cannot predict. I think in this case about the difference between risk and uncertainty taught by Frank Knight back in the 1920s. As explained by John Maynard Keynes a decade and a half later:
By `uncertain' knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty...The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence...About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. (QJE 1937.)
Now, just because an event is uncertain doesn't mean we don't know about it. What it does mean is that we cannot write down an expected cost or utility function that captures the possibility of it happening. I'll write a separate post on what this means for cost-benefit analysis a little later.
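A minimal sketch of the distinction, with a known game on one side and nothing at all on the other: under risk the probabilities are known, so an expected value can be computed; under Knightian uncertainty there is no distribution to plug in.

```python
# Risk: known probabilities, so an expected value is well defined.
# European roulette: bet $1 on a single number; a win pays 35-to-1.
p_win = 1 / 37
expected_value = p_win * 35 + (1 - p_win) * (-1)
print(f"Expected value of a $1 single-number bet: {expected_value:.4f}")  # about -0.027

# Knightian uncertainty: no scientific basis for any probability at all.
# There is nothing to put in place of p here; the calculation cannot start.
# p_bridge_failure = ???   # we simply do not know
```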

But the point is that even if this bridge failure is part of set 3, it does not mean that as a matter of public policy we should have done whatever it takes to keep that bridge from falling. This gets to the point I was making to Michael on Final Word yesterday: You can buy too much safety. We laugh at Tony Shalhoub in Monk for his OCD; he buys too much safety. Andy Samberg buys too little in Hot Rod. My favorite example for students is to ask when was the last time they checked the batteries in their smoke detectors in their dorm rooms or apartments. Is it working right now? How do you know? Because your smoke detector's failure could lead to deaths in neighboring apartments, should we pass a law that requires us to check those batteries every day? Should we have inspectors look at smoke alarms annually? Semi-annually? If you thought about checking the alarm on Tuesday but didn't because you remembered you had checked it only a couple of months ago, would you be considered negligent if a fire destroyed the building on Thursday, killing five of your neighbors?*

Batteries in smoke alarms, at least, have a rate of failure that one can write down. Bridges may be more Knightian in their uncertainty. While more maintenance might have helped prevent some kinds of failure, it cannot prevent all of them. We do not yet know which kind occurred here.
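That claim can be made concrete with a back-of-the-envelope calculation. Every number below is invented purely for illustration, but the shape of the trade-off is the point: more frequent checks shrink the chance the alarm is dead when a fire comes, while each check costs time.

```python
# Back-of-the-envelope battery-check trade-off (all numbers invented).
HOURS_VALUE = 20.0              # value of an hour of your time, $
CHECK_MINUTES = 2               # time spent per battery check
ANNUAL_FIRE_RATE = 0.001        # hypothetical chance of a serious fire in a year
BATTERY_FAIL_RATE = 0.25        # hypothetical chance the battery dies in a year
LOSS_IF_ALARM_DEAD = 5_000_000  # hypothetical loss if the alarm fails in a fire, $

def annual_cost(checks_per_year: int) -> float:
    """Inspection cost plus expected loss from an undetected dead battery."""
    inspect_cost = checks_per_year * (CHECK_MINUTES / 60) * HOURS_VALUE
    # Rough model: a battery that fails is dead, on average, for half the
    # interval between checks, i.e. a 0.5/n fraction of the year.
    p_dead_when_fire = BATTERY_FAIL_RATE * 0.5 / checks_per_year
    expected_loss = ANNUAL_FIRE_RATE * p_dead_when_fire * LOSS_IF_ALARM_DEAD
    return inspect_cost + expected_loss

for n in (1, 4, 12, 52, 365):
    print(f"{n:>3} checks/year: ${annual_cost(n):,.2f}")
# Total cost falls and then rises again: daily checking buys too much safety.
```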

The reason for my using the term black swan was not just to refer to a book I read. Taleb refers to the case of a "narrated" black swan, one of which we become too aware. We suffer from hindsight bias (to use a term Taleb uses in this interview) and think they are everywhere. From his book:
The [narrative] fallacy is associated with our vulnerability to overinterpretation and our predilection for compact stories over raw truths. It severely distorts our mental representation of the world; it is particularly acute when it comes to the rare event.

...The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding. (pp. 63-64)
My contention thus far is that dogs use explanations to give their readers the impression that they know that which they cannot yet really know. Attempting to connect the dots of a rare event at this stage is highly premature.

*-- more on this later as well; worth noting that such calculations can be used to determine the value of a life. See slides 12-13 of this presentation from MIT's course on regulation. I suspect this will make dogs bark.
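For the curious, the arithmetic in such calculations is simple division, shown here with my own illustrative numbers rather than anything from the slides: take what people will pay for a small cut in their risk of death and divide by the size of the cut.

```python
# Standard value-of-a-statistical-life (VSL) arithmetic; numbers illustrative.
willingness_to_pay = 70.0      # $ each person would pay to cut their death risk
risk_reduction = 1 / 100_000   # the size of that cut in annual death risk

# 100,000 such people paying $70 apiece collectively avert one expected death.
vsl = willingness_to_pay / risk_reduction
print(f"Implied value of a statistical life: ${vsl:,.0f}")  # $7,000,000
```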
