Introduction (2014)

This report provides a ranking of graduate institutions based on their placement records as reported to Brian Leiter’s Leiter Reports blog each spring, following the academic hiring season. It serves as a supplement to Leiter’s own rankings in the Philosophical Gourmet Report, which are based on reputational surveys of professional philosophers. It indicates the employment success of graduates of philosophy programs, and is therefore an indirect measure of the quality of the institutions. It will be especially useful to those entering graduate programs as a way to help judge the likelihood of academic employment upon completion, since it reflects hiring departments’ judgments about the preparation graduates received, rather than the reputation of the faculty alone.

This report should not be used uncritically. There are important caveats to consider, some of which are discussed below. In general, I concur with Prof. Leiter: “I think [all] such exercises are of very limited value.”

Data and Ranking Method

Each spring, Brian Leiter starts a thread on his blog that invites commenters to post junior-level placements into tenure-track and post-doctoral positions. The posts generally contain the hired person’s name, their graduate institution, the position into which they have been hired, and any previous positions the person has held.

To generate this ranking, I collected the comment threads from 2007 to 2014 into a spreadsheet and recorded:

  1. the graduate institution,
  2. whether the hire was a tenure-track or post-doc position,
  3. any second positions (as some individuals are hired into a post-doc preceding a tenure-track job), and
  4. up to three previous positions (tenure-track, post-doctoral, or visiting).

For each year, I deleted duplicate postings, so that each hire was recorded only once. For the years 2007-2014, this ultimately amounted to 1345 distinct postings from 150 graduate institutions. (I did make a handful of corrections based on inconsistencies in the postings; e.g., when a subsequent tenure-track hire was not accompanied by an initial post-doc appointment, even though the latter had been posted in an earlier year.)
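For concreteness, here is a minimal sketch of how such a spreadsheet might be loaded and deduplicated in Python with pandas. The file name and column names are my own placeholders, not the actual layout of the spreadsheet.

    import pandas as pd

    # Hypothetical file and column names; the actual spreadsheet may differ.
    columns = ["year", "name", "grad_institution", "position_type",
               "second_position", "prev_1", "prev_2", "prev_3"]
    df = pd.read_csv("leiter_placements_2007_2014.csv", names=columns, header=0)

    # Within each year's thread, keep each reported hire only once.
    df = df.drop_duplicates(subset=["year", "name"])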

From this, I was able to count, for each institution (see the sketch following this list):

  1. the total number of placements,
  2. the number of tenure-track placements,
  3. the number of “direct” tenure-track placements (where the hired person got a tenure-track job without any previous position reported), and
  4. the number of “duplicate” postings (where the same individual was hired into different jobs in different years).
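Continuing the sketch above (again, the column names and position labels are assumptions of mine), these per-institution counts might be tallied roughly as follows:

    # Total placements per institution.
    totals = df.groupby("grad_institution").size()

    # Tenure-track placements (assuming the position type is labeled this way).
    is_tt = df["position_type"] == "tenure-track"
    tenure_track = df[is_tt].groupby("grad_institution").size()

    # "Direct" tenure-track placements: no previous position reported.
    no_previous = df[["prev_1", "prev_2", "prev_3"]].isna().all(axis=1)
    direct_tt = df[is_tt & no_previous].groupby("grad_institution").size()

    # "Duplicate" postings: the same person appearing in more than one year's
    # thread (after the per-year de-duplication above).
    postings_per_person = df.groupby(["grad_institution", "name"]).size()
    duplicates = (postings_per_person - 1).groupby(level="grad_institution").sum()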

From these counts, three rankings were generated: a) by the number of individuals placed (the total number of placements minus the duplicate postings), b) by the number of tenure-track placements, and c) by the number of direct tenure-track placements. These three ranks were then averaged into an “average” rank for each institution, and institutions were ordered by that average to produce the “overall” rank. This averaging takes some account of the quality of the placements, given that tenure-track placements are assumed to be more valuable than post-doctoral placements, and “direct” tenure-track placements are assumed to be more relevant to program quality than “indirect” ones.
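The ranking and averaging step might look something like the following continuation of the sketch. The report does not specify how ties are handled, so the “min” tie method here is an assumption.

    # Assemble the three counts; institutions missing from a count get zero.
    summary = pd.DataFrame({
        "individuals": totals.sub(duplicates, fill_value=0),
        "tenure_track": tenure_track,
        "direct_tt": direct_tt,
    }).fillna(0)

    # Rank each column (1 = most placements), average the three ranks,
    # then order institutions by that average to get the overall rank.
    ranks = summary.rank(ascending=False, method="min")
    summary["average_rank"] = ranks.mean(axis=1)
    summary["overall_rank"] = summary["average_rank"].rank(method="min")
    summary = summary.sort_values("overall_rank")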

Important Caveats

Most importantly, the data for this ranking is entirely dependent on what was voluntarily reported to the Leiter Reports blog. Any inaccuracy or incompleteness in that data is reproduced here. Any conclusions drawn from this ranking should be qualified with “based on what has been reported to Leiter Reports.” Placements that were never reported would also affect the rankings. Indeed, the sparseness of placement data makes the rankings beyond about 60th place extremely variable, and I would suggest taking the bottom half of the table as one large, unranked group. I also suspect underreporting accounts for the low rankings of British universities. I have made the underlying spreadsheet available for correction and use (see the link above). Data for individual years and institutions can easily be viewed using the filters on the spreadsheet. One should also be aware that many (but not all) institutions post placement information on their departmental webpages, and I encourage careful consideration of that information alongside this report. The American Philosophical Association has also begun to gather this information, though I hope it will do so more rigorously in the future, for the benefit of all. (Really, it’s not that hard!)

No account is taken of the size of the graduate programs. In particular, there is no consideration of graduates who do not find an academic appointment (or at least one that is posted to Leiter Reports) or who do not complete the program. Thus, the rankings should not be taken as an indication of placement rate. I have provided reports for 3- and 6-year intervals in order to indicate trends over time. In addition, these might be used to give some indication of what a typical graduate cohort can expect: if one takes 6 years to be the average length of a PhD program, then the placements recorded over a 6-year interval suggest roughly what the current graduate student body at an institution might collectively expect. But this measure is exceedingly rough, and I hesitate even to mention it.
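As a purely illustrative back-of-the-envelope calculation, with entirely hypothetical numbers not drawn from the data: an institution reporting 12 placements over a 6-year window, with an average entering class of 8 students, would have recorded placements for roughly a quarter of the students passing through in that span.

    # Hypothetical numbers only, for illustration; not from the spreadsheet.
    reported_placements = 12   # placements posted over a 6-year window
    entering_cohort = 8        # assumed average entering class size
    years = 6
    rough_share = reported_placements / (years * entering_cohort)
    print(f"roughly {rough_share:.0%} of students in that span")  # ~25%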

No distinctions are made between different graduate programs at the same institution. Thus, Pittsburgh’s and Indiana’s rankings include the placement records of both the Philosophy and the History and Philosophy of Science programs, Irvine includes both Philosophy and the Logic and Philosophy of Science programs, and Washington University in St. Louis includes both Philosophy and Philosophy-Neuroscience-Psychology programs. One can consult the underlying data to get a better sense of the placement records of these programs individually.

There is no attempt to discriminate the “quality” of the placements beyond differentiating tenure-track and post-doctoral appointments. For instance, one school’s placements might be judged more valuable than another’s even though they are tied in the rankings. However, such evaluations are subjective. Again, one is encouraged to consult the underlying data.

No account is taken of areas of study. One could easily extend this analysis by recording the reported areas of specialization (AOSs) of the individuals hired and then ranking institutions by placements in those areas, but decisions would have to be made about what counts as a top-level sub-discipline of philosophy, and I wish to resist such subjective categorizations.

Data Sources

Posted July 11, 2014 by David Marshall Miller