Introduction (2012)

This report provides a ranking of graduate institutions based on their placement records as reported to Brian Leiter’s Leiter Reports blog each spring from 2007 to 2012. It serves as a supplement to Leiter’s own rankings in the Philosophical Gourmet Report, which are based on reputational surveys of professional philosophers. Because it tracks the employment success of graduates of philosophy programs, it is an indirect measure of the quality of the institutions. It should be especially useful to those entering graduate programs as a way to help judge the likelihood of academic employment upon completion, since hiring decisions reflect judgments about the preparation graduates receive, not the reputation of their faculty alone.

Note: Prof. Leiter has linked to this page from the Leiter Reports, emphasizing some of the caveats I point out below. I completely agree that these caveats are serious limitations on the value of this report. In general, I concur with Prof. Leiter: “I think [all] such exercises are of very limited value.”

Data and Ranking Method

Each spring, Brian Leiter starts a thread on his blog that invites commenters to post junior-level placements into tenure-track and post-doctoral positions. The posts generally contain the hired person’s name, their graduate institution, the position into which they have been hired, and any previous positions the person has held.

To generate this ranking, I collected the comment threads from 2007 to 2012 into a spreadsheet and recorded, for each posting (a sample record is sketched after this list):

  1. the graduate institution,
  2. whether the position was tenure-track or post-doctoral,
  3. any second positions (as some individuals are hired into a post-doc preceding a tenure-track job), and
  4. up to three previous positions (tenure-track, post-docs, or visiting).
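
To make the recorded fields concrete, here is a minimal sketch in Python of how one such posting might be represented. The Posting class and its field names are my own invention for illustration; they are not taken from the actual spreadsheet.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Posting:
        """One reported hire from a spring thread (all field names are illustrative)."""
        year: int                              # year of the thread
        name: str                              # hired person's name
        institution: str                       # graduate institution
        position_type: str                     # "tenure-track" or "post-doc"
        second_position: Optional[str] = None  # e.g., a TT job following a post-doc
        previous_positions: list = field(default_factory=list)  # up to three prior positions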

For each year, I removed duplicate postings, so that each hire was reported only once. For the six years recorded, this ultimately amounted to 834 distinct postings from 129 graduate institutions. (I did make a handful of corrections based on inconsistencies in the postings; e.g., when a subsequent tenure-track hire was not accompanied by an initial post-doc appointment, even though the latter had been posted in an earlier year.)
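
The threads do not specify exactly how duplicate postings were identified, so the following de-duplication sketch is an assumption on my part, keyed on the year and the hired person’s name.

    def dedupe_by_year(postings):
        """Keep the first posting per (year, name) pair; the key choice is an assumption."""
        seen = set()
        unique = []
        for p in postings:
            key = (p.year, p.name)
            if key not in seen:
                seen.add(key)
                unique.append(p)
        return unique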

From this, I was able to count, for each institution (a counting sketch follows the list):

  1. the total number of placements,
  2. the number of tenure-track placements,
  3. the number of “direct” tenure-track placements (where the hired person got a tenure-track job without any previous position reported), and
  4. the number of “duplicate” postings (where the same individual was hired into different jobs in different years).
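
As a sketch of this counting step, assuming the hypothetical Posting records above (treating any cross-year reappearance of a name as a “duplicate” posting is my simplification of the actual bookkeeping):

    from collections import Counter

    def count_placements(postings):
        """Tally the four per-institution counts described above (illustrative only)."""
        totals = Counter()        # 1. total placements
        tenure_track = Counter()  # 2. tenure-track placements
        direct_tt = Counter()     # 3. TT placements with no previous position reported
        duplicates = Counter()    # 4. same individual posted in more than one year
        seen_names = set()
        for p in postings:
            totals[p.institution] += 1
            if p.position_type == "tenure-track":
                tenure_track[p.institution] += 1
                if not p.previous_positions:
                    direct_tt[p.institution] += 1
            if p.name in seen_names:
                duplicates[p.institution] += 1
            seen_names.add(p.name)
        return totals, tenure_track, direct_tt, duplicates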

From these counts, three rankings were generated: a) by the number of individuals placed (the total number of placements minus the duplicate postings), b) by the number of tenure-track placements, and c) by the number of “direct” tenure-track placements. The three ranks were then averaged into an “average” rank, and those “average” ranks were ordered into an “overall” rank. This averaging takes the quality of the placements into account, given that tenure-track placements are assumed to be more valuable than post-doctoral placements, and “direct” tenure-track placements are assumed to be more relevant to program quality than “indirect” ones.
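
A sketch of the rank-and-average step, given the counts above. The original procedure does not specify how ties or missing counts were handled, so this version simply ranks by descending count and defaults unranked institutions to the bottom.

    def rank_desc(counts):
        """Rank institutions by descending count (1 = most placements; no tie handling)."""
        ordered = sorted(counts, key=counts.get, reverse=True)
        return {inst: i + 1 for i, inst in enumerate(ordered)}

    def overall_ranking(totals, tenure_track, direct_tt, duplicates):
        # a) individuals placed = total postings minus duplicate postings
        individuals = {inst: totals[inst] - duplicates[inst] for inst in totals}
        rankings = [rank_desc(individuals), rank_desc(tenure_track), rank_desc(direct_tt)]
        worst = len(totals)
        average = {inst: sum(r.get(inst, worst) for r in rankings) / 3 for inst in totals}
        # the "overall" rank orders institutions by their "average" rank (lower is better)
        return sorted(average, key=average.get)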

In fact, the rankings thus generated correlate well with the reputational surveys. For institutions ranked in the top 50 faculties in the English-speaking world in 2011 (see here), the correlation between the reputational rank and the “average” placement rank is rho=0.64. This is unsurprising, since the reputation of the graduate institution plays a significant role in hiring decisions. A ranking by total number of individuals placed alone (rho=0.63), by number of tenure-track placements alone (rho=0.62), or by number of “direct” tenure-track placements alone (rho=0.60) does not correlate as well with the reputational rankings. Even so, several schools make it into the top 50 of the placement rankings that are not in the top 50 of the reputational rankings.
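
For those who wish to check such figures, Spearman’s rho can be computed with scipy; the rank vectors below are placeholders, not the actual data.

    from scipy.stats import spearmanr

    # Placeholder rank vectors for the same institutions in the same order;
    # substitute the reputational ranks and the "average" placement ranks.
    reputational = [1, 2, 3, 4, 5]
    placement = [2, 1, 4, 3, 5]

    rho, p_value = spearmanr(reputational, placement)
    print(f"rho = {rho:.2f}")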

Important Caveats

Most importantly, the data for this ranking is entirely dependent on what was voluntarily reported to the Leiter Reports blog. Any inaccuracy or incompleteness in that data is reproduced here. (Indeed, my own placement was completed too late to be reported to the blog, and is not included here.) Any conclusions drawn from this ranking should therefore be qualified with “based on what has been reported to Leiter Reports.” Unreported placements would have some effect on the rankings. Indeed, the sparseness of placement data makes the lower reaches of the table (ranks beyond 60 or so) extremely variable; I would suggest taking the bottom half of the table as one large, unranked group. I also suspect underreporting accounts for the low rankings of British universities.

I have made the underlying spreadsheet available for correction and use (see the link above left). Data for individual years and institutions can easily be viewed using the filters on the spreadsheet. One should also be aware that many (but not all) institutions post placement information on their departmental webpages, and I encourage careful consideration of that information alongside this report. I also hope that the American Philosophical Association or some other entity will gather this information more rigorously in the future, for the benefit of all. (Really, it’s not that hard!)

No account is taken of the size of the graduate programs. In particular, there is no consideration of graduates who do not find an academic appointment (or whose appointment is not posted to Leiter Reports), or who do not complete the program. Thus, the rankings should not be taken as an indication of placement rate.

[Note (June 5, 2012): Prof. Danks’s addendum to the Leiter post stresses this limitation. Like Prof. Danks, I am not sure how to correct for it. However, one reason I stopped at six years of data is that I took this to be the average length of a PhD program. Hence, the data represent something like the placement record of a full graduate cohort; i.e., what one might expect for the graduate student body at an institution. But this measure is very rough, and I hesitate even to mention it.]

In keeping with Leiter’s practice, no distinctions are made between different programs at the same institution. Thus, Pittsburgh’s and Indiana’s rankings include the placement records of both the Philosophy and the History and Philosophy of Science programs, Irvine’s includes both the Philosophy and the Logic and Philosophy of Science programs, and Washington University in St. Louis’s includes both the Philosophy and the Philosophy-Neuroscience-Psychology programs. One can consult the underlying data to get a better sense of the placement records of these programs individually.

There is no attempt to discriminate the “quality” of the placements beyond differentiating tenure-track and post-doctoral appointments. In some cases, this helps explain why schools not in the reputational top 50 appear in the placement top 50. For instance, Yale’s placements might be judged more valuable than Purdue’s, even though the two schools are tied (at 24) in the placement rankings. Such evaluations, however, are subjective. Again, one is encouraged to consult the underlying data.

No account is taken of areas of study. One could easily extend this analysis by recording the reported areas of specialization (AOSs) of the individuals hired and then ranking institutions by placements in those areas, but decisions would have to be made about what counts as a top-level sub-discipline of philosophy, and I wish to resist such subjective categorizations.

Data Sources

Posted June 9, 2012 by David Marshall Miller