Imagine that Brian Leiter, University of Chicago professor of law and philosophy and self-styled member of the American peerage, reposted on his blog his criticism of the US News and World Report law school rankings (the most easily criticized ranking system our nation has had since the President and Vice President were elected separately), and that he got the criticism nearly entirely wrong. Imagine no more! Here comes the breakdown:
1. Contrary to what one sometimes hears, it is clear to me, and I imagine any other informed observer of school evaluations, that the reputational surveys are the one component of the U.S. News ranking that actually keeps the results tethered to reality. Unfortunately, as Professor Stake of Indiana has shown, the superficial survey method U.S. News employs is increasingly producing an echo chamber effect, with the reputation of a school essentially tracking the overall rankings from prior years by U.S. News. In order to minimize that effect, I suggest you switch to an on-line survey system with academics (your response rate from academics is already quite high, and I imagine that for an on-line survey it will be even higher), in which evaluators are presented with concrete information about each school, rather than simply a school name: e.g., a current faculty roster, numerical credentials of the student body, a list of distinguished alumni (let the school provide a list, limited to 50 names, say), and so on. Ask academics to evaluate the scholarly and professional excellence of the school, not simply the "reputation" they associate with a name.
It's clear to anyone who actually talks to prospective students (the intended audience of the US News rankings, despite how much Leiter thinks they exist for his own personal glorification) that the survey of reputation among other law professors is the most absurd, least grounded-in-reality part of the whole ranking. (That is, if you weight by how much each component counts. Spending per capita is more absurd on its own, but it's only a small part of the ranking; peer reputation is 25% of the whole damn thing.)
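To make the weighting point concrete, here's a minimal sketch of how a weighted composite score like the US News ranking works. Only the 25% peer-reputation figure comes from this post; every other weight and component name is an illustrative placeholder, not US News's actual methodology.

```python
# A minimal sketch of a weighted composite ranking score. The 25%
# peer-reputation weight is the figure cited in this post; all other
# weights and component names are illustrative placeholders, not
# US News's actual methodology.
WEIGHTS = {
    "peer_reputation": 0.25,      # the figure cited above
    "employment": 0.20,           # placeholder
    "student_credentials": 0.25,  # placeholder
    "expenditures": 0.10,         # placeholder ("small part of the ranking")
    "other": 0.20,                # placeholder remainder
}

def composite(scores: dict[str, float]) -> float:
    """Weighted sum of component scores normalized to a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# A 10-point move in peer reputation shifts the composite by 2.5 points;
# the same move in expenditures shifts it by only 1.0. That's why a
# garbage 25% component poisons the whole ranking.
```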
50 names of professors and distinguished alumni? Outside of a few super-elite schools that hoard brand-name professors and produce Senators and Supreme Court Justices, the median number of names you (or a tenured professor) would recognize from the list is 0. And even if you do recognize someone's name, unless they're writing in the same field as you, you're going to have no idea what they've done with their career in the last 5 or 10 years. Arthur Miller at NYU, incredibly famous (not as famous as the other Arthur Miller, but still, famous among law profs) -- any idea what he's written lately? Probably not. NYU gets some reputation points for having him, despite the fact that he hasn't published a law review article since 2006, and has published only two in the last 15 years.
So on top of not having a clue what these people are even doing, when you do know what they're doing, it's the wrong thing to evaluate them on, because it's based on scholarship, and prospective students really don't care about that. They want professors who know how to teach, and it's highly unlikely that any professor filling out the USN survey has spent the 10,000 hours it would take to sample the 50 top professors at each of the 200 schools.
2. To the extent you continue to employ data self-reported by the schools, you really must undertake more aggressive audits of the data. This year--to take the most notorious example that has already attracted widespread attention--the University of California at Berkeley claimed an astounding 99% of its students employed at graduation, a fact to which Professor Lindgren of Northwestern has already called attention. In prior years, Berkeley has reported (going backwards by year) 97.2% employed at graduation, 74.4%, 89.8%, 88.7%, 96.8%, and 93.2%. Berkeley is a state school, subject to open record requirements. Have you assigned a reporter for your magazine to investigate anomalous data reporting by schools? The integrity of the enterprise surely demands an occasional follow-up investigation.
It did seem that Berkeley was fudging the numbers, and some sort of auditing is necessary to ensure the integrity of the data. Where Leiter goes off the rails is that when Law School Transparency filed a complaint with the ABA about a school publishing false employment data, Leiter called the complaint "frivolous." And how did LST discover the fraud? They analyzed some publicly available data that contradicted Rutgers-Camden's marketing claims and then... filed an open records request.
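For what it's worth, the kind of audit both sides implicitly endorse doesn't require much. Here's a minimal sketch of an automated sanity check, using the Berkeley at-graduation figures quoted in Leiter's letter above; the deviation-from-median approach and the 5-point threshold are arbitrary assumptions, not anyone's actual audit procedure.

```python
# A minimal sketch of a sanity check on self-reported employment data.
# The percentages are Berkeley's at-graduation figures quoted in
# Leiter's letter above (most recent year first); the 5-point deviation
# threshold and the median baseline are arbitrary assumptions.
from statistics import median

reported = [99.0, 97.2, 74.4, 89.8, 88.7, 96.8, 93.2]

def flag_anomalies(series: list[float], threshold: float = 5.0) -> list[int]:
    """Return indices of values sitting far from the series median."""
    mid = median(series)
    return [i for i, v in enumerate(series) if abs(v - mid) > threshold]

for i in flag_anomalies(reported):
    print(f"Report {i} years back: {reported[i]}% vs. median "
          f"{median(reported)}% -- worth an open records request")
```

Run on those figures, this flags both the suspiciously perfect 99% and the 74.4% dip, which is exactly the kind of starting point a reporter (or LST) could follow up with an open records request.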
So why the change of heart about the importance of following up on employment claims? Two likely explanations.
First, professors love the rankings because they give them a sense of pride. They need to protect the rankings to keep their egos puffed up. Employment stats presented to prospective students don't affect the rankings, and complaining about them hurts the profession as a whole. Can't have that.
Second, Leiter is allowed to criticize his peers. But middle-class members of the professions engaging in the same criticism? Insolence.
3. Since what can only be facetiously called the "objective" data that schools self-report is the source of most of the egregious trickery and deceit that renders the results dubious, why not take steps to reduce your reliance on this data? (That was a primary consideration in the Canadian law school rankings I designed for MacLean's.) Eliminate expenditures altogether: that alone would put a halt to the worst offenses. What schools spend on utilities and secretaries and landscaping has nothing to do with anything. Per capita expenditures systematically penalize larger schools for their economies of scale and reward inefficiency: there is simply no denying this. Even expenditures on faculty salaries is a very poor proxy for faculty quality, and would be, in any case, redundant upon well-done reputational surveys or citation studies, which would provide a direct measure.
The expenditure ranking is pretty ridiculous. What's stupid here is that Leiter doesn't know what the word "objective" means: being fraudulent is not the same as not being objective. Also, there isn't any evidence that schools misreport their expenditures. Leiter could have filed an open records request to see what the actual numbers are, but he didn't, and we've never heard of such a complaint anyway. Instead, the complaint is that schools actually are spending more money to boost their rank. Students would love it if schools just lied instead of raising tuition.
And again, reputational surveys would not provide a direct measure of faculty quality; a reputation is by definition an indirect measure. And as discussed above, reputational surveys of faculty quality aren't really surveys of quality at all.
You should also eliminate the self-reported employment data, which is, as you well know, a work of fiction: it bears some resemblance to reality, but it is mainly a work of the imagination. Substitute data in the public domain, like the representation of school graduates as associates at leading law firms nationwide, or in federal clerkships. Eliminating expenditures data, and substituting public data on employment success for self-reported employment statistics, would immediately increase the credibility of the results, and would get U.S. News out of the business of rewarding trickery and deceit.
The problem with the USN employment data isn't that it's a work of fiction; it's that it's a poorly crafted work of non-fiction. USN simply lumps all jobs together: part-time, short-term, non-professional. Doesn't matter, a job is a job. Using something like the LST Employment Score methodology would be a vast improvement: count only full-time, long-term jobs requiring bar passage, and don't count solo practices. Are some schools lying on the data they send to the ABA? Maybe, and there should be audits, but narrowing what counts as a real job means you at least have to really commit fraud to game the system. Right now schools can just hire someone part-time for a couple of weeks in February and count them the same as a BigLaw associate. No fiction, just a bad metric.
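To show how little machinery the better metric needs, here's a minimal sketch of an LST-style Employment Score based only on the criteria described above. The Job record fields and the sample class are hypothetical; LST's actual methodology has more categories and detail.

```python
# A minimal sketch of an LST-style Employment Score, based only on the
# criteria described above: count full-time, long-term jobs that require
# bar passage, and exclude solo practices. The Job fields and sample
# data are hypothetical illustrations, not LST's actual methodology.
from dataclasses import dataclass

@dataclass
class Job:
    full_time: bool
    long_term: bool      # roughly, expected to last a year or more
    bar_required: bool
    solo_practice: bool

def employment_score(grads: list[Job | None]) -> float:
    """Share of the class in full-time, long-term, bar-required jobs."""
    def counts(job: Job | None) -> bool:
        return (job is not None and job.full_time and job.long_term
                and job.bar_required and not job.solo_practice)
    return 100 * sum(counts(j) for j in grads) / len(grads)

# Under USN's approach, the two-week February part-timer counts the same
# as the BigLaw associate; under this score, only the associate counts.
class_of = [
    Job(full_time=True,  long_term=True,  bar_required=True,  solo_practice=False),  # BigLaw associate
    Job(full_time=False, long_term=False, bar_required=False, solo_practice=False),  # two-week temp gig
    None,                                                                             # unemployed
]
print(f"Employment Score: {employment_score(class_of):.1f}%")  # 33.3%
```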
Leiter's alternative, though, is perhaps even worse than the current system. Only 45 schools send 10% or more of their class to firms with 101 or more attorneys (and that's a very liberal definition of BigLaw). Only 16 send 30% or more. 51 schools send 3% or more of their class to federal clerkships, and only 8 send 10% or more.
The BigLaw-plus-federal-clerkship number is important for figuring out how good a school is, but it's hardly the only important metric. It's basically only useful for figuring out which of the elite schools is the most elite, which is of course the only thing Leiter really cares about. But if you want to know whether to go to Alabama or Samford, those numbers are 18.3% vs. 12.2%; they're just not the most relevant figures for people planning to work in Alabama, and the same is going to be true for most states that aren't dominated by a major legal market. Compare Alabama and Samford's Employment Scores instead and you get 71.3% vs. 59.5%, and the Under-Employment Score comparison is 7.9% vs. 19.6%. Those are the numbers (along with salary data) that matter to people making that choice.
So there you have it. If you actually want to fix the US News rankings, you need to approach them with the eye of someone applying to law school. That generally means a focus on employment outcomes, and ditching the national scale, because no one needs to know how Arizona compares to Fordham. Or you can take Leiter's approach, which is basically to make US News a more reliable gauge for professors to compare their relative prestige.