Thursday, January 27, 2005

It counts 

The Chronicle of Higher Education leads today with a story of two researchers wondering why their study of Iraqi civilian casualties, which they rushed to print the week before the U.S. presidential elections, didn't get more attention. (The Chronicle wants you to read this so much that it's not subscriber protected like most of their stuff.)

Les F. Roberts, a research associate at Hopkins and the lead author of the paper, was shocked by the muted or dismissive reception. He had expected the public response to his paper to be "moral outrage."

*On its merits, the study should have received more prominent play.* Public-health professionals have uniformly praised the paper for its correct methods and notable results.

Note the part I italicized -- this is not an opinion piece but supposed to be straight news. The writer of this article seems to share Prof. Roberts' pique. And it's clear that the authors wanted to get the results out to influence U.S. opinion, but they wonder why people are skeptical.

"On the 25th of September my focus was about how to get out of the country," he recalls. "My second focus was to get this information out before the U.S. election." In little more than 30 days, the paper was published in The Lancet.

Mr. Roberts and his colleagues now believe that the speedy publication of that data created much of the public skepticism toward the study. He sent the manuscript to the medical journal on October 1, requesting that it be published that month. Mr. Roberts says the editors agreed to do so without asking him why.

Despite the sprint to publication, the paper did go through editing and peer review. In an accompanying editorial, Richard Horton, editor of The Lancet, wrote that the paper "has been extensively peer-reviewed, revised, edited, and fast-tracked to publication because of its importance to the evolving security situation in Iraq."

While it was edited and peer-reviewed as a public-health analysis, the suitability of its sampling design for counting war deaths was open to question, as the Economist pointed out in November.

Nan Laird, a professor of biostatistics at the Harvard School of Public Health, who was not involved with the study, says that she believes both the analysis and the data-gathering techniques used by Dr Roberts to be sound. She points out the possibility of "recall bias" -- people may have reported more deaths more recently because they did not recall earlier ones. However, because most people do not forget about the death of a family member, she thinks that this effect, if present, would be small. Arthur Dempster, also a professor of statistics at Harvard, though in a different department from Dr Laird, agrees that the methodology in both design and analysis is at the standard professional level. However, he raises the concern that because violence can be very localised, a sample of 33 clusters really might be too small to be representative.

This concern is highlighted by the case of one cluster which, as the luck of the draw had it, ended up being in the war-torn city of Fallujah. This cluster had many more deaths, and many more violent deaths, than any of the others. For this reason, the researchers omitted it from their analysis -- the estimate of 98,000 was made without including the Fallujah data. If it had been included, that estimate would have been significantly higher.
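Dempster's worry is easy to see in a toy simulation: with only 33 clusters, a single extreme cluster drags the whole estimate. The numbers below are made up for illustration -- they are not the study's actual data:

```python
import random

random.seed(1)

# 32 "quiet" clusters with few deaths, plus one extreme cluster
# standing in for Fallujah. All counts here are invented.
typical_clusters = [random.randint(0, 4) for _ in range(32)]
fallujah_like = [52]

with_outlier = typical_clusters + fallujah_like      # all 33 clusters
without_outlier = typical_clusters                   # outlier dropped

mean_with = sum(with_outlier) / len(with_outlier)
mean_without = sum(without_outlier) / len(without_outlier)

print(f"mean deaths per cluster, all 33 clusters:   {mean_with:.2f}")
print(f"mean deaths per cluster, outlier excluded:  {mean_without:.2f}")
```

Since the survey estimate is essentially a mean scaled up to the whole population, that one cluster swings the national figure by tens of thousands -- which is why the authors reported 98,000 with Fallujah excluded.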

The Fallujah data-point highlights how the variable distribution of deaths in a war can make it difficult to make estimates.

And the other problem is that the study counts deaths among the insurgents together with civilian/non-combatant casualties.

Of the increase in deaths (omitting Fallujah) reported by the study, roughly 60% is due directly to violence, while the rest is due to a slight increase in accidents, disease and infant mortality. However, these numbers should be taken with a grain of salt because the more detailed the data -- on causes of death, for instance, rather than death as a whole -- the less statistical significance can be ascribed to them.

So the discrepancy between the Lancet estimate and the aggregated press reports is not as large as it seems at first. The Lancet figure implies that 60,000 people have been killed by violence, including insurgents, while the aggregated press reports give a figure of 15,000, counting only civilians.
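The reconciliation arithmetic in the paragraphs above is simple enough to check directly (rounded figures as quoted; the 60% violent share excludes Fallujah):

```python
# Rounded figures from the Lancet study and press-report aggregates as
# quoted above; this just replays the back-of-the-envelope arithmetic.
excess_deaths = 98_000        # Lancet point estimate, Fallujah excluded
violent_share = 0.6           # roughly 60% attributed directly to violence

violent_deaths = excess_deaths * violent_share    # ~60,000 violent deaths
press_civilian_count = 15_000                     # press counts, civilians only

gap = violent_deaths - press_civilian_count       # ~45,000 unaccounted for

print(f"violent deaths implied by the study: ~{violent_deaths:,.0f}")
print(f"gap beyond press-reported civilians: ~{gap:,.0f}")
```

The gap of roughly 45,000 is the figure the next paragraph attributes, speculatively, to insurgent deaths not counted by the press tallies.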

So it could be that 45,000 insurgents were killed, which would not necessarily be bad news. And while any deaths are a bad thing, it's worth remembering that mortality also rose in postwar Germany and Japan.

There were reasons for skepticism about the Roberts et al. study, therefore, and its appearance on page A16 of the Washington Post, for example, was probably a fair placement. Why is the Chronicle whining about its absence from A1? Given the tone of the article, one can only conclude that its editors, too, are upset with the outcome of the 2004 elections.