Monday, August 06, 2007
The entire 1000 foot long section was tied together structurally to save money. It had no tolerance for partial failure. If one section failed, the entire section would go down. This more modern bridge was ugly as well as a poor design. This bridge was designed by modern engineers who have no sense of beauty and think they can calculate every decision on the basis of cost/benefit. They practice a destructive type of design called value engineering - taking out the expensive stuff if it's redundant or optional.

Or take Joel Hirschhorn at the Daily Kos:
I can absolutely guarantee that there were countless discussions over the years by engineers, bureaucrats and politicians that combined risk assessment and cost-benefit types of thinking. It comes down to this: To bite the bullet and conclude that to provide maximum protection of public safety the bridge should be replaced would inevitably face debates about the incredibly high costs and the enormous difficulties of obtaining the funding, and also how obtaining such funding would imperil other government projects.

Which assumes, to start, that every preventable disaster should be prevented. Should it? Can we prevent them all? Again, at what cost? Compared to what? How do you know?
So the decision moves in the direction of higher levels of inspection to postpone the inevitable high cost/low risk scenario. On the other hand, all people connected to these kinds of discussions know one big reality: If the crap hits the fan and there really is a catastrophic bridge failure, then the money WILL be readily available to replace it! This is the way the system works. Of course, to replace the bridge and deal with all the many horrific impacts of the bridge failure AFTER the fact will cost much, much more money than if a planned replacement strategy had been adopted! But that is exactly what the Minneapolis story ultimately is all about. It is what virtually all of our national infrastructure thinking is about. The New Orleans disaster was totally preventable, as every technically sound and objective analysis showed.
So what does cost-benefit analysis (CBA) ask the policymaker to do? Simply put, it totes up all the benefits of a policy change, such as bridge replacement or a different bridge inspection program, against the costs, all put into a common metric (most often, net present value). Thayer Watkins of San Jose State provides a useful tutorial. Ed Gramlich, whose book is for me the standard of the profession, spoke a few years ago about where it can and cannot be used:
It can be quite helpful when relevant markets exist, when market or nonmarket techniques for valuing inputs and outputs have been developed, when scientific uncertainties are limited, and when time periods are not inordinately long. Should any of these conditions not be present, one still has ways of proceeding; but the methodology becomes more speculative, and the uncertainty band grows. One can still factor uncertainty into the analysis by providing probability distributions of net benefits from some policy change, but the resulting probability distributions are likely to be very wide. This uncertainty makes it difficult for analysts to give programmatic recommendations with any degree of confidence, and difficult to compete with special interests that have very clear ideas of their valuations of programmatic outcomes.
As a result, benefit-cost analysis is more or less useful in providing a framework for policy decisions, depending on the underlying properties of these decisions. If markets and valuation methods are clear and time periods are relatively limited, as with river and harbor projects, subsidy measures, tax or tariff changes, benefit-cost analysis is usually able to frame public decisions pretty clearly. But for programs involving research, climate change, or other types of extreme scientific uncertainty, decisions are framed much less clearly. Even in these areas, benefit-cost analysis has value, as I will try to illustrate, but one must be very careful to ensure that the results of the analysis are not misused.
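That "common metric" can be made concrete. Here is a minimal sketch of a net-present-value comparison; the dollar figures, horizon, and discount rate are all invented purely for illustration, not drawn from any actual bridge study:

```python
def npv(net_flows, rate):
    """Net present value of a stream of yearly net benefits.

    net_flows[0] is the current year (undiscounted); each later
    year is discounted at the given annual rate.
    """
    return sum(f / (1 + rate) ** t for t, f in enumerate(net_flows))

# Hypothetical numbers: replace a bridge now for $50M, gaining $4M/year
# in avoided inspection, repair, and expected-failure costs over 30 years.
flows = [-50e6] + [4e6] * 30
print(npv(flows, 0.05) > 0)   # True: positive NPV favors replacement
```

Note how much the verdict hinges on the discount rate: at 5 percent the hypothetical replacement passes, but the same stream discounted at 8 percent would fail, which is exactly why CBA fights so often turn on the choice of rate.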
Here's a rather simple example related to roads from John Quiggin, who weighs whether to use cameras to limit speeding on a highway:
People who formerly travelled at above the speed limit will go slower and take more time. On the other hand, since this will reduce both average speed and speed variance, there will be less accidents.
For the costs, I've assumed 15 000 cars per year, 20 per cent of whom speed consistently, maintaining an average of 130km/h (vs a limit of 110). I've given them a value of time saved of $20/hour (higher than is standard), and I estimate an annual cost of $10 million per year from enforcing the limit.
As already noted, the cost per life lost is between $5 million and $10 million, so we only need to prevent one or two fatalities per year to get benefits>costs (there have been about 8 deaths per year in the last five years).
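Quiggin's break-even arithmetic is easy to reproduce. A sketch using the figures from his post — except the trip distance, which is my own assumption for illustration, since he doesn't state one:

```python
# Figures from Quiggin's post
cars_per_year = 15_000
speeder_share = 0.20
speed_fast, speed_limit = 130.0, 110.0   # km/h
value_of_time = 20.0                     # $/hour
annual_cost = 10e6                       # his total annual cost estimate
vsl_low, vsl_high = 5e6, 10e6            # cost per life lost, $

# Time lost per formerly-speeding trip (trip length is hypothetical)
trip_km = 100.0
extra_hours = trip_km / speed_limit - trip_km / speed_fast
time_cost = cars_per_year * speeder_share * extra_hours * value_of_time

# Break-even: fatalities that must be prevented for benefits > costs
breakeven = (annual_cost / vsl_high, annual_cost / vsl_low)
print(breakeven)   # (1.0, 2.0) -- his "one or two fatalities per year"
```

With roughly eight deaths a year on the road in question, the camera program clears the bar even under the pessimistic end of the range — which is Quiggin's point.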
So too with a cost-benefit analysis of bridge inspection. You cost out the increased travel time and the increased probability of accidents in a narrowed-road environment (or with detours, whichever is better) against the benefit of the information gained from additional inspections. Those benefits, of course, are subjective; they were certainly higher on Friday, when the De Soto bridge here in St. Cloud was inspected, than they would have been the previous Tuesday. Still, do you see anything wrong with that decisionmaking rule? Charlie Quimby does not:
Which is fair criticism. Any CBA will have a set of sensitivity analyses that demonstrate the conditions under which a project -- perhaps more frequent bridge inspections, or bridge replacement -- will either be supported or rejected. You and I can look at that sensitivity and come to different decisions. CBA doesn't give you just one answer; it gives you a framework for rational analysis. And to decide ex post that those who looked at the CBA and decided the more expensive projects for shoring up the bridge were not just wrong but malign is a hindsight fallacy.
We all make judgments about how long to keep pouring money into our cars. If you're like me, you take good care of your car but drive it for as long as you can. At some point, it clearly becomes structurally deficient: burning some oil, quarter panels rusty, on its third set of rotors, muffler corroded, maybe a crack in the windshield. But it's driveable, safe and still far less costly to operate than a new car with a monthly payment.
I once drove a rotary engine Mazda until it caught fire for the third time. It was a snowy winter, after all, and the blazes on the engine block could be quickly extinguished, with no apparent effect on other functions of the car.
I'm no economist or bridge inspector, but after the third fire, I could see it was getting to be time for replacement.
We should not blame MN/DOT for its analysis, assuming no grievous misjudgments were made about bridge condition. But we should revisit the question of where we want to be as a state on the scale of new car to flaming Mazda.
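The sensitivity analyses mentioned above can be made concrete: sweep the uncertain inputs and see where the decision flips. Every number here is hypothetical — I am inventing an inspection-program cost, a failure-probability reduction, and a range for the value of a statistical life purely to show the mechanics:

```python
# Hypothetical: does a more intensive inspection program pay off?
# The verdict flips with the assumed reduction in annual failure
# probability and with the value of a statistical life (VSL) --
# both of which are genuinely uncertain.
inspection_cost = 0.4e6        # assumed extra annual program cost, $
lives_at_risk = 10             # assumed expected deaths if the bridge fails

for risk_reduction in (1e-4, 1e-3, 1e-2):   # annual failure prob. avoided
    for vsl in (5e6, 10e6):
        benefit = risk_reduction * lives_at_risk * vsl
        verdict = "fund" if benefit > inspection_cost else "reject"
        print(f"dP={risk_reduction:g}, VSL=${vsl:,.0f}: {verdict}")
```

Under these made-up numbers the program is rejected at the low end of the risk range and funded at the high end — and two reasonable people looking at the same table can land on different rows, which is exactly the point about CBA framing rather than dictating the answer.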
But the criticism of Hirschhorn is still in need of answering: Did the MnDOT inspectors lean on the scales to favor visual inspection in order to avoid having to go to the boss and say the bridge was unsafe?
To that end it's instructive to note that the 2001 and 2005 reports on the bridge's safety were not authored by MnDOT. These independent assessments go exactly to Gramlich's recommendation that government officials doing CBA need to use independent agencies to verify their work. The University of Minnesota, which did the 2001 assessment, did not have incentives to fiddle the numbers.
What I suspect is that the complaints are not about CBA itself but about the budget process, which reinforces CBA just as CBA reinforces it. Gramlich writes:
The federal budget process is one of the most criticized governmental processes around, with observers routinely pointing to projects that should have been, but were not, screened out. But the federal spending and budget process seem to help benefit-cost analyses of projects. The budget imposes discipline, if imperfectly. When projects threaten to become too costly, as the SSC [superconducting super collider --kb] did and dredging the Delaware River may be, OMB [the Office of Management and Budget] and other budget cutters can be expected to join the fray and often support the results of the benefit-cost analyses.

Yes, "increasingly tight budgets." In any cost-benefit analysis one must impose a cost of funds used for a project. What are the alternative possibilities for the funds to be used? For example, the Sunday paper here carried a story of fifteen bridges in the area that are "structurally deficient". Some bridges are seventy years old. Yet under current circumstances and with current traffic flows, there are other projects more in need of help.
Indeed, the relationship can be symbiotic. OMB needs benefit-cost analysis to weed out projects (otherwise they would be acting in a peremptory manner). Benefit-cost analysts often welcome the sheer political heft of OMB. Those opposing the dredging of the Delaware River will find stopping the project easier with the help of OMB than without it--just as stopping the SSC was easier once costs started escalating rapidly in an era of increasingly tight budgets.
The detractors of CBA seem to live in a world where tradeoffs do not exist. To respond to the first comment, 'beauty' is a good that competes for resources with other goods, like education or meals on wheels. So too with safety. As "vinod" notes on his blog, Thomas Sowell's "tragic vision" means that libertarians and conservatives understand tradeoffs are real, while the anointed believe that "problems must be solved in totality with 'no one left behind' and 'no one left at risk'." Even in a world with a few black and gray swans, you can't fix everything.
I have a few loose ends to tie up, to wit:
- how do you factor the black swans into CBA?
- "isn't it really heartless of you to price the value of a life in these calculations, King?"