New Placement Data and Rankings at PhilosophyNews

Andrew Carson at PhilosophyNews.com has produced a new series of rankings based on placement data stretching back to 2000. The results are interesting and worth a look.

In the discussion that has followed, serious concerns have been raised about the quality of Carson’s data. See here, here, here, and here. Most of the criticisms raised are legitimate (except the worry about Carson’s existence, which has thankfully been put to rest). In particular, his inclusion of very old data calls into question the value of his report as an indicator of future placement success. (This was why I posted “moving windows” of 3 and 6 years on this site.) Commendably, Carson has indicated that he will attempt to refine his dataset and update his conclusions.

As I do here, Carson ranks departments on the basis of their placement success. The major difference between Carson’s effort and the data presented here is that Carson has tried to collect reported placements directly from the schools, not from the Leiter Reports. This task must have been as nightmarish as it was herculean. When I first started to put this site together, I thought about trying to gather data from departmental placement pages. But the various reporting methods on those pages are so inconsistent as to make the data practically unusable. Indeed, I immediately recognized what Carson aptly calls “The Madness.” I simply threw up my hands; Carson has more fortitude. In the event, I settled for the most consistent reporting I could find, the Leiter Reports hiring threads. This is far from a perfect solution, and I’ve advocated for the collection of more comprehensive data. Personally, I think this is the responsibility of the APA, but I’m glad that Carson (and PhilosophyNews) has undertaken the effort to produce a consistent and comprehensive dataset. I hope its current flaws can be remedied.

Carson’s efforts, meanwhile, have made me think a little bit harder about what all these rankings are good for. When I first compiled my tables, Prof. Leiter remarked that “such exercises are of very limited value.” (Leiter’s post has since been scrubbed, for some reason, but it can be found here.) I wholeheartedly agree, and I think this applies to all rankings–even Leiter’s own. Rankings of philosophy graduate programs can be helpful to those not yet “in the know” who are deciding between them, but rankings cannot be read or used uncritically–and I have always been careful to say as much. Decisions about entering philosophy programs should never be based on rankings alone, or even primarily.

But then the question arises as to what rankings are good for, and how they should be read. And this has led me to reconsider what I have written previously. Carson explicitly treats past placement success as a measure of probable future placement success. Similarly, I wrote (in the introduction to this site) that my rankings could be read “as a way to help judge the likelihood of academic employment upon completion.” I no longer think that this is correct. That is, a program’s success in getting its past graduates jobs should not be taken as an indication of how successful it will be at getting a new student a job sometime in the future. In part, this is because (as Leiter notes) past success is not necessarily an indication of future success. It’s also because getting a job is idiosyncratic and individualized, so it’s nearly impossible to infer anything about a particular individual’s prospects from a program’s general success. More importantly, though, the job market sucks. For everyone. It might be marginally better for some folks or marginally worse for others, but it’s universally difficult to find employment as a professional philosopher. The effect of a school’s rank on placement is easily swamped by the general state of the market. So I don’t think placement data can be used as an effective measure of the likelihood of a particular individual’s finding a job sometime in the future. Those entering the field should not presume that attending higher-ranked programs will significantly improve their placement chances. I’ve struck that statement from the introduction.

How then to read the rankings? I think they can be read as a measure of the quality of departments. In truth, I’ve always had this primarily in mind. (In the first edition of the site, I even reported correlations with the Gourmet Report’s reputational rankings, which were actually pretty good.) In making hiring decisions, a department evaluates, often very carefully, the quality of applicants’ work and their preparation to enter the field as teachers and scholars. The decision to hire an individual indicates a judgment that his or her qualifications are superior to those of the other applicants (not necessarily the best, of course, but certainly not the worst). And insofar as a graduate program is meant to develop the philosophical abilities of its students and their preparation to enter the field, this judgment is, by proxy, a verdict on the quality of the program. It says, in effect, that the graduate program of the successful applicant is a better place to study. A department’s placement success is an aggregation of such judgments, and therefore can be a useful indicator of how good a program it is for its students. Someone deciding between graduate programs can use the rankings as a way to help judge the quality of the programs.

This is not to say that placement success is a perfect measure of program quality, or of anything at all, for that matter. And it is not (at all!) to say that hiring decisions reflect a perfect judgment of applicant or department quality. I have never pretended that the rankings I compiled here are anything more than a source of information to be considered carefully in concert with others. Yet I do think that one has to be careful about what rankings like this one, and Carson’s, and Leiter’s, are taken to mean. Carson’s commendable effort is another source of useful information, but I think he is mistaken, as I was, to suggest that past placement success can be read as a measure of probable future placement success.

Posted October 10, 2013 by David Marshall Miller in Uncategorized