I’ve now updated this site with the hiring data from 2014. As of this moment, I’m inclined to say that this will be the last time I will update this site.
When I first posted my tables a few years ago, I was using them to make two points: (1) consistent placement information should be made more readily available, preferably by the APA in service to the profession, and (2) rankings based on placement would offer supplemental and more objective measures of program quality, to be used alongside the Philosophical Gourmet Report’s reputational surveys.
As for (1), the APA has started collecting consistent placement data in its Grad Guide, and there has been increased transparency by departments about their placement success. Indeed, the new “Appointments” information at PhilJobs is a centralized, proactive locus for placement data collection, sponsored by the APA, something for which I have lobbied at this site from the outset. One consequence of this proliferation of placement data is that the hiring thread at Leiter Reports now seems (though I have no firm data on this) to be a less reliable representation of the market than in years past. Since this site relies exclusively on those postings, its reliability is likewise diminished.
As for (2), the use of placement data to rank departments has now become quite visible. Andrew Carson and, especially, Carolyn Dicey Jennings have developed analyses that now strike me as very robust. (Even Prof. Leiter himself is joining in, though he uses placement data to defend the PGR.) These analyses are more thorough than mine, are the product of a deeper commitment of time and resources, and will be of more use in the long run.
I welcome these developments as evidence that my points have been made, regardless of whether I had anything to do with it. So I will take the opportunity to simply declare victory and cede the field.
Addendum: I do not wish to offer extensive comment on the rather vitriolic (and ad hominem) dispute that Brian Leiter and Carolyn Dicey Jennings have had in the blogosphere about their respective rankings. I will say, to again quote Leiter, that “all such exercises are of very limited value.” Nevertheless, they are of some use, and should be made available, so long as the methodology and limitations of the analysis are made clear. I think the PGR and the placement rankings by Jennings, Carson, and myself all meet this standard. They can be used to supplement each other by those seeking measures of graduate program quality. (Indeed, Leiter’s recent use of placement data to defend the PGR tacitly admits as much.)
Ever since I first posted this site in 2012, I’ve consistently envisioned it as a proof of concept of something the APA should take up as a systematic endeavor. Today comes word that the APA is embarking on a whole range of data collection efforts, including placement data. All I can say is, Hurrah!
Thus continues the APA’s remarkable transformation, under the executive directorship of Amy Ferrer, into a functional professional organization that actually serves the interests of its members.
By the way, I’ve long put off a comment about the 2013 edition of the APA Grad Guide, the 2012 version of which I commented on here. In brief, the 2013 version was a vast improvement, offering the kind of data, lacking in the earlier version, that could support apples-to-apples comparisons between departments. There are still problems with accurate and complete reporting, but it appears that the fault lies with individual programs. The APA itself is requesting consistent data, which is very welcome. Eventually, I’d hope to correlate my data with the data offered in the Guide, but other things have gotten in the way. Nevertheless, I’m very excited by the increasing redundancy of this site! Here’s to the new APA!
Following a link from Eric Schliesser’s Digressions & Impressions, Teresa Blankmeyer Burke (Gallaudet) landed on this page. She noticed that the dataset from 2011 was incomplete. As it turns out, I failed to collect the last batch of postings from the Leiter thread that year, thereby missing 58 unique postings. The data has now been corrected.
I apologize for the error. I am grateful for the correction.
Andrew Carson at PhilosophyNews.com has produced a new series of rankings based on placement data stretching back to 2000. The results are interesting and worth a look.
In the discussion that has followed, serious concerns have been raised about the quality of Carson’s data. See here, here, here, and here. Most of the criticisms raised are legitimate (except the worry about Carson’s existence, which has thankfully been put to rest). In particular, his inclusion of very old data calls into question the value of his report as an indicator of future placement success. (This was why I posted “moving windows” of 3 and 6 years on this site.) Commendably, Carson has indicated that he will attempt to refine his dataset and update his conclusions.
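For readers unfamiliar with the idea, here is a minimal sketch of what a trailing “moving window” over placement data amounts to. This is not the code behind this site, and the program names and counts are invented for illustration; the point is only that restricting the tally to recent years keeps the ranking sensitive to current, rather than decades-old, placement success:

```python
from collections import defaultdict

# Hypothetical placement counts per (program, year); not real data.
placements = {
    ("Program A", 2011): 3, ("Program A", 2012): 2, ("Program A", 2013): 4,
    ("Program B", 2011): 1, ("Program B", 2012): 5, ("Program B", 2013): 2,
}

def moving_window_totals(placements, end_year, window=3):
    """Sum each program's placements over the `window` years ending at `end_year`."""
    totals = defaultdict(int)
    for (program, year), count in placements.items():
        if end_year - window < year <= end_year:
            totals[program] += count
    return dict(totals)

totals = moving_window_totals(placements, end_year=2013, window=3)
# Rank programs by recent placement counts, highest first.
ranking = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)
```

A 6-year window is the same computation with `window=6`; data older than the window (such as Carson’s placements from 2000) simply drops out of the tally.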
As I do here, Carson ranks departments on the basis of their placement success. The major difference between Carson’s effort and the data presented here is that Carson has tried to collect reported placements directly from the schools, not from the Leiter Reports. This task must have been as nightmarish as it was herculean. When I first started to put this site together, I thought about trying to gather data from departmental placement pages. But the various reporting methods on those pages are so inconsistent as to make the data practically unusable. Indeed, I immediately recognized what Carson aptly calls “The Madness.” I simply threw up my hands; Carson has more fortitude. In the event, I settled for the most consistent reporting I could find, the Leiter Reports hiring threads. This is far from a perfect solution, and I’ve advocated for the collection of more comprehensive data. Personally, I think this is the responsibility of the APA, but I’m glad that Carson (and PhilosophyNews) has undertaken the effort to produce a consistent and comprehensive dataset. I hope its current flaws can be remedied.
Carson’s efforts, meanwhile, have made me think a little bit harder about what all these rankings are good for. When I first compiled my tables, Prof. Leiter remarked that “such exercises are of very limited value.” (Leiter’s post has since been scrubbed, for some reason, but it can be found here.) I wholeheartedly agree, and I think this applies to all rankings–even Leiter’s own. Rankings of philosophy graduate programs can be helpful to those not yet “in the know” who are deciding between them, but rankings cannot be read or used uncritically–and I have always been careful to say as much. Decisions about entering philosophy programs should never be based on rankings alone, or even primarily.
But then the question arises as to what rankings are good for, and how they should be read. And this has led me to reconsider what I have written previously. Carson explicitly treats past placement success as a measure of probable future placement success. Similarly, I wrote (in the introduction to this site) that my rankings could be read “as a way to help judge the likelihood of academic employment upon completion.” I no longer think that this is correct. That is, a program’s success in getting its past graduates jobs should not be taken as an indication of how successful it will be in getting a new student a job sometime in the future. In part, this is because (as Leiter notes) past success is not necessarily an indication of future success. It’s also because getting a job is idiosyncratic and individualized, so it’s nearly impossible to infer anything about a particular individual’s prospects from a program’s general success. More importantly, though, the job market sucks. For everyone. It might be marginally better for some folks or marginally worse for others, but it’s universally difficult to find employment as a professional philosopher. The effect of a school’s rank on placement is easily swamped by the general state of the market. So I don’t think placement data can be used as an effective measure of the likelihood of a particular individual’s finding a job sometime in the future. Those entering the field should not presume that attending a higher-ranked program will significantly improve their placement chances. I’ve stricken that statement from the introduction.
How then to read the rankings? I think they can be read as a measure of the quality of departments. In truth, I’ve always had this primarily in mind. (In the first edition of the site, I even reported correlations with the Gourmet Report’s reputational rankings, which were actually pretty good.) In the process of making hiring decisions, a hiring department evaluates, often very carefully, the quality of applicants’ work and their preparation to enter the field as teachers and scholars. The decision to hire an individual indicates a judgment that his or her qualifications are superior to those of other applicants: not necessarily the best, of course, but certainly not the worst. And insofar as a graduate program is meant to develop the philosophical abilities of its students and their preparation to enter the field, this judgment is, by proxy, a verdict on the quality of the program. It says, in effect, that the graduate program of the successful applicant is a better place to study. A department’s placement success is an aggregation of such judgments, and therefore can be a useful indicator of how good a program it is for its students. Someone deciding between graduate programs can use the rankings as a way to help judge the quality of the programs.
This is not to say that placement success is a perfect measure of program quality, or of anything at all, for that matter. And it is not (at all!) to say that hiring decisions reflect a perfect judgment of applicant or department quality. I have never pretended that the rankings I compiled here are anything more than a source of information to be considered carefully in concert with others. Yet I do think that one has to be careful about what rankings like this, and Carson’s and Leiter’s, mean. Carson’s commendable effort is another source of useful information, but I think he is mistaken, as I was, to suggest that past placement success can be read as a measure of probable future placement success.
The 2012-2013 hiring season is for the most part concluded, though posts to the Leiter thread continue to appear sporadically.
The data for 2013 (as of today) have been tabulated, and this site has been updated accordingly. In addition, I’ve updated and reorganized the site somewhat. A new version of the introduction and new tables for different yearly intervals have been posted. You’ll find them in the menu above.
For posterity’s sake, the old (2012) introduction page and tables have been archived here.
Addendum (August 14, 2013): I haven’t seen any new posts in quite a while. I’ve added the few new placements since the last update (June 19).
In the ranking tables on this site, I linked to the graduate program placement pages that I could find. It might be helpful to post those links separately, so here they are.
Continuing its remarkable reform and modernization under its new Executive Director, Amy Ferrer, the American Philosophical Association has released a new version of the Guide to Graduate Programs in Philosophy.
They are to be commended for this effort. To be honest, I had no idea the APA had ever published such a thing, since in their own words the Guide had “languished” for many years. I’m very glad to see that they are again taking on their responsibility to keep track of what’s going on in the profession. Those seeking to enter philosophy, more than those already in it, will find the new Guide immensely useful.
It must be said that the Guide is fairly pedestrian. It was obviously compiled simply by soliciting information from graduate programs. There was not much effort to enforce consistency or completeness in the responses. Still, the Guide is valuable, especially in that it is a centralized and authoritative resource, backed by the APA itself.
Of interest related to this venue, the Guide includes placement information for PhD programs. I am very happy about this. The main purpose of this site, as I note in the introduction, is to show that placement information can be reliably collected and compared, and thus to lobby the APA or some other authoritative body to take on this project.
Though it is an admirable start, the Guide does not do a very good job of presenting useful placement information. It replicates all the inconsistency of the various departments’ own reporting on their placement pages. For instance, some departments report initial placements, while others report current positions: I don’t think a 2009 graduate is still in a “1Y” (one-year) position, as listed in one case, or that a 2008 graduate was placed into a “tenured” position, as listed in another. Also, since names are not used, there is no indication of whether programs are duplicating placements, e.g., reporting an initial placement in 2008 and a subsequent placement of the same individual in 2010. As a result, it is hard to make apples-to-apples comparisons of the placement records of different programs using the Guide.
The APA can improve its reporting if it starts collecting placement information prospectively, in a uniform and rigorous way. This would be much like Leiter’s placement threads, except that the reporting would be solicited, not voluntary, and thus much more complete. (I can imagine that the APA might have to note that a program “did not provide information,” as it does in the new Guide for other bits of information.)
Still, I want to stress that the Guide is much, much better than what the APA had done before–which is to say nothing at all. I hope to see continued publication and improvement of the Guide.
UPDATE: Other concerns have been raised over at Leiter’s blog here.