Wednesday, April 11, 2007

USNWR 

University presidents are refusing to fill out a survey of the reputation of their peer institutions that is used as part of the US News and World Report rankings.
Dozens of schools have recently refused to fill out surveys used to calculate the rankings, and efforts are now afoot for a collective boycott.

..."This increasing interest in measuring everything � these so-called science-based measures of [educational] outcomes and the like � seems to me to be so misguided that it's now captured the imagination of the leadership in higher education," says Christopher Nelson, president of St. John's College in Annapolis, Md., who heads an association of 124 prestigious liberal arts schools. "This is a bad way of talking about an education. [Students] aren't consumers shopping for a product."

The boycott of the U.S. News rankings could be extended in coming weeks as a draft letter makes the rounds of academia. The letter, formulated by a dozen college presidents and an education activist, calls for others to join them in neither filling out the magazine's survey form nor touting rankings in marketing materials.

The "reputational survey," as it's called, asks college administrators to rank the quality of hundreds of schools on a one to five scale. The data � which critics call a "beauty contest" � account for 25 percent of the overall U.S. News rankings.

It's hard for me to figure out who I side with in this battle. The presidents hardly elicit support when their answer to problems in a third-party ranking system is to try to sabotage it with a boycott.

But I certainly agree that there are many "science-based measures" that are little better than a SWAG at what the rankings should be for this or that. As I've mentioned, I'm working on a book looking at measures of things like corruption or rule of law, which are also reputational in nature. Country X is said to be corrupt because a group of businesspeople say it is. That seems pretty circular, and sometimes not really scientific.

Likewise, university presidents rating universities are relying mostly on a single vague dataset.

Several college presidents suggested that they personally could evaluate only five to 10 schools – a far cry from the hundreds on the list. "We know each other through reputation, but that's different than having the kind of intimate knowledge you should have when you are making a ranking," says Robert Weisbuch, president of Drew University in Madison, N.J., who plans to sign the letter.

The intent of the administrator survey is to capture the opinions of those who are experts inside the industry, says Brian Kelly, executive editor of U.S. News. The survey asks them to rank only those schools with which they are familiar. If that number is only five, says Mr. Kelly, "well, gee, maybe you need to know some more about your competitors."

But why? I really know only so much about my peer institutions. I don't really know that many other economists at other MnSCU institutions. I know people in my own area of research specialization, but it's sufficiently specialized that knowing my field isn't that hard to do. And just because I give a school $50,000 to educate Littlest Scholar doesn't mean the school owes USNWR 50,000 rankings, as Mr. Kelly seems to imply.

Moreover, one thing I always hate about their college rankings is that the weighting scheme is very much one-size-fits-all. Mr. Kelly says that is changing to allow you to use your own weights. But weighted bad data still gives bad rankings.
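To make the arithmetic concrete, here is a minimal sketch of how a composite ranking works; the school names, component scores, and weights are invented for illustration and are not USNWR's actual data or formula.

```python
# Toy illustration only: made-up schools, component scores (already rescaled
# to 0-1), and weights; not USNWR's actual data or formula.

def composite(scores, weights):
    """Weighted sum of a school's component scores."""
    return sum(weights[k] * scores[k] for k in weights)

schools = {
    "Alpha College":   {"reputation": 0.82, "graduation": 0.88, "selectivity": 0.90},
    "Beta University": {"reputation": 0.60, "graduation": 0.96, "selectivity": 0.78},
    "Gamma State":     {"reputation": 0.50, "graduation": 0.98, "selectivity": 0.70},
}

# A one-size-fits-all weighting versus a reader who mostly cares about graduation.
weightings = {
    "default": {"reputation": 0.25, "graduation": 0.50, "selectivity": 0.25},
    "custom":  {"reputation": 0.05, "graduation": 0.85, "selectivity": 0.10},
}

for label, w in weightings.items():
    ranked = sorted(schools, key=lambda s: composite(schools[s], w), reverse=True)
    print(label, "->", ranked)

# default -> ['Alpha College', 'Beta University', 'Gamma State']
# custom  -> ['Gamma State', 'Beta University', 'Alpha College']
#
# Custom weights can reorder the list, but every ordering is still built on the
# same reputation number; if that input is little more than a guess, so is the rank.
```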

An objective measurement is only as good as the consensus on what is to be measured. We rely on the mercury in thermometers because its readings correlate with our own perceptions of heat and cold. As long as the market purchases USNWR's college guide, you have to assume somebody values the information inside. Maybe it's time university presidents figured out why prospective students need a third-party guide.
