Thursday, August 09, 2007

CBA under real uncertainty 

I notice from Learned Foot that someone still can't learn a new trick.
Bingo, grasshopper. Just like our slippery goat owner, the professor has some problems with his arguments. You can't argue that the risk of something is unknowable and then claim that the politicians and the public made a rational cost-benefit analysis. If you don't know the costs or the benefits, you can't possibly do the analysis.

Oh... I think you can, grasshopper. I can come up with myriad everyday examples about how governments, businesses and individuals do a rational cost-benefit analysis on certain things without knowing the risk.
Yes, though it's awfully hard. Since I'm writing a course for someone to teach CBA, I have some notes to draw on, but this is the most difficult section to write. This is usually the bailiwick of environmental regulation, where CBA is most disputed.

When I've talked about black swans, I spent a good deal of time on Frank Knight's distinction between risk and uncertainty. Risk is measurable. Sometimes you can use a ready-made probability distribution based on existing data. Sometimes you have to do something different because there's nothing to draw on for a probability distribution. One option is Monte Carlo analysis, which models the risk by replicating the event you are trying to measure over a thousand or ten thousand simulated trials. In reliability engineering, for instance, you want to know the mean time between failures as a way to get at the likelihood, say, that a structure might fall down. If you could create a computer simulation of vehicles driving over a bridge -- the cars, the weather, the river current, and so on -- you might get a number. But that's hard, and there are lots of bridges.
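To make the replication idea concrete, here is a minimal Monte Carlo sketch in Python. Everything in it -- the normal distributions, the capacity and load numbers, the notion that a single "capacity" draw captures a structure's strength -- is an assumption made up for illustration, not anything drawn from real bridge data.

```python
import random

# Minimal Monte Carlo sketch (illustrative only): estimate the chance that
# peak daily stress on a structure exceeds its capacity at least once in a
# year. All distributions and parameters below are invented assumptions.

def simulate_year(rng, capacity_mean=100.0, capacity_sd=5.0,
                  load_mean=60.0, load_sd=12.0, days=365):
    """Return True if simulated peak load ever exceeds simulated capacity."""
    capacity = rng.gauss(capacity_mean, capacity_sd)   # as-built strength
    for _ in range(days):
        peak_load = rng.gauss(load_mean, load_sd)      # traffic/weather proxy
        if peak_load > capacity:
            return True
    return False

def estimate_failure_prob(trials=10_000, seed=42):
    rng = random.Random(seed)
    failures = sum(simulate_year(rng) for _ in range(trials))
    return failures / trials

if __name__ == "__main__":
    print(f"Estimated one-year failure probability: {estimate_failure_prob():.4%}")
```

Ten thousand simulated years give you a frequency; whether that frequency means anything depends entirely on how good the assumed distributions are, which is exactly the problem.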

So maybe you couldn't have that either. There are other options, such as sensitivity analysis, but I doubt any of them would have been very useful or provided a result in which one could have had much confidence. One of the worst things I see in this kind of analysis is when someone makes a probability judgment -- "The risk of bridge failure is 0.00014% for the next twelve months" -- based on unreliable estimates. As we know from cognitive psychology, people in general are rather poor judges of probability. Aaron Wildavsky noted that "large proportions of people care more about avoiding loss than they do about making gains. Therefore, they will go to considerable lengths to avoid losses, even in the face of high probabilities of making considerable gains." I think this, in fact, is where ol' Spot is coming from.
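Here is a small sensitivity-analysis sketch along those lines, in Python. The point estimate and the loss figure are placeholders I have made up, not estimates of anything real; the exercise only shows how directly an error in the assumed probability carries through to the expected-loss side of the comparison.

```python
# Sensitivity-analysis sketch (assumed numbers throughout): vary an uncertain
# annual failure probability around a point estimate and watch how much the
# expected-loss side of a cost-benefit comparison moves with it.

def expected_loss(failure_prob, loss_if_failure=250_000_000):
    """Expected annual loss in dollars for a given failure probability."""
    return failure_prob * loss_if_failure

point_estimate = 0.0000014   # the kind of precise-looking figure in question

for factor in (0.1, 0.5, 1.0, 2.0, 10.0):
    p = point_estimate * factor
    print(f"p = {p:.7%}  ->  expected annual loss = ${expected_loss(p):,.0f}")
```

If the input probability could plausibly be off by a factor of ten, the precision of the output is spurious.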

But that doesn't justify using other people's money to avoid those losses, and it doesn't justify making poor decisions. Societies that rely on learning from mistakes as a strategy for reducing uncertainty will fare better. Wildavsky:

In regard to the consequences of technological risk, there are two major strategies for improving safety: anticipation versus resilience. The risk-averse strategy seeks to anticipate and thereby prevent harm from occurring. In order to make a strategy of anticipation effective, it is necessary to know the quality of the adverse consequence expected, its probability, and the existence of effective remedies. The knowledge requirements and the organizational capacities required to make anticipation an effective strategy -- to know what will happen, when, and how to prevent it without making things worse -- are very large.

A strategy of resilience, on the other hand, requires reliance on experience with adverse consequences once they occur in order to develop a capacity to learn from the harm and bounce back. Resilience, therefore, requires the accumulation of large amounts of generalizable resources, such as organizational capacity, knowledge, wealth, energy, and communication, that can be used to craft solutions to problems that the people involved did not know would occur. Thus, a strategy of resilience requires much less predictive capacity but much more growth, not only in wealth but also in knowledge. Hence it is not surprising that systems, like capitalism, based on incessant and decentralized trial and error accumulate the most resources. Strong evidence from around the world demonstrates that such societies are richer and produce healthier people and a more vibrant natural environment.

Because you can't know what is truly uncertain, you face a choice. You can adopt a precautionary principle and try to avoid any activity that might increase risk to human life, which reduces the ability of a society to specialize and exchange. Or you can embrace the dynamic system of capitalism and its most important resource -- the human mind -- adapting behavior to lessons learned from failures as well as successes, and minimizing the likelihood of repeating the failures.

Knight argued that some situations of CBA under uncertainty involve trying to calculate the profit or loss from an essentially unique situation. You can't measure the probability of success of an act that's never been done before. When private firms do this, the losses and mistakes are borne by them alone. When governments do this, they don't necessarily pay for it. That's why we worry so much about getting the incentives right for the public sector.
