Introduction (2016)

This report provides a ranking of graduate institutions based on their placement records as reported by PhilJobs. It serves as a supplement to the rankings in the Philosophical Gourmet Report, which are based on reputational surveys of professional philosophers. It indicates the employment success of graduates of philosophy programs, and is therefore an indirect measure of the quality of the institutions. It will be especially useful for those entering graduate programs, since the evaluation depends on judgments about the preparation provided, not on the reputation of the faculty alone.

This report should not be used uncritically. There are important caveats to consider, some of which are discussed below. In general, I concur with Prof. Leiter: “I think [all] such exercises are of very limited value.” This report should be used in conjunction with other sources of information about graduate programs in order to make judgments about their quality.

Note especially that the data is not very robust at a level of close detail. A change of as little as one placement can affect rankings markedly. Moreover, the underlying data source contains inadvertent inaccuracies and duplications. I have tried to minimize these errors, but some may have been missed.

Data and Ranking Method

PhilJobs, a project supported by the American Philosophical Association and the PhilPapers Foundation, has posted information about individual appointments of PhD graduates in philosophy. These have been collected from a variety of sources, including submissions from hired individuals, hiring departments, and graduate program placement directors, as well as historical records (such as the hiring threads on Brian Leiter’s Leiter Reports blog that formed the data source for earlier versions of this report). In this way, they have compiled useful records back to 2005 (for which 131 junior appointments are noted).

Happily for the purposes of this report, PhilJobs has made their data available in machine-readable format. That data feed includes a record of each appointment’s year, PhD program, whether the appointment is junior or senior, and whether it is tenured/tenure-track or not. The data also includes unique identifiers for each appointee.

To generate the rankings reported here, I imported the PhilJobs data (as of January 16, 2017) into a spreadsheet and recorded for each appointment:

  1. whether it was a junior appointment,
  2. whether it was a tenure-track appointment,
  3. and whether there had been any previous appointments recorded for the same appointee.

The original list was then culled for duplicate entries by searching for combinations of PhD granting and hiring institutions. From this, I was able to count, for each institution:

  1. the total number of junior placements,
  2. the number of tenure-track placements,
  3. and the number of “duplicate” postings, where the same individual was hired into multiple jobs.

From these counts, two rankings were generated: a) by the number of individuals placed, which is the total number of placements minus the duplicate postings, and b) by the number of tenure-track placements. These two ranks were then averaged for each institution, and institutions were ordered by that average to produce the “overall” rank. The averaging gives some weight to the quality of the placements, on the assumption that tenure-track placements are more valuable than post-doctoral placements.
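The counting and rank-averaging procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual script used for the report: the record fields (`phd_program`, `appointee_id`, `junior`, `tenure_track`) are hypothetical stand-ins for the PhilJobs data feed, and ties are resolved with standard competition ranking, which the report does not specify.

```python
from collections import Counter, defaultdict

def rank_programs(appointments):
    """Order PhD programs by the average of two ranks:
    (a) individuals placed (total junior placements minus duplicates),
    (b) tenure-track placements.
    `appointments` is a list of dicts with hypothetical keys:
    'phd_program', 'appointee_id', 'junior' (bool), 'tenure_track' (bool).
    """
    totals = Counter()       # junior placements per program
    tt = Counter()           # tenure-track placements per program
    seen = defaultdict(set)  # appointee ids already seen per program
    dupes = Counter()        # repeat postings of the same individual

    for a in appointments:
        if not a["junior"]:          # senior appointments are ignored
            continue
        prog = a["phd_program"]
        totals[prog] += 1
        if a["tenure_track"]:
            tt[prog] += 1
        if a["appointee_id"] in seen[prog]:
            dupes[prog] += 1         # same person hired into multiple jobs
        seen[prog].add(a["appointee_id"])

    individuals = {p: totals[p] - dupes[p] for p in totals}

    def ranks(counts):
        # Competition ranking: tied programs share the best rank.
        ordered = sorted(counts, key=lambda p: -counts[p])
        r = {}
        for i, p in enumerate(ordered):
            prev = ordered[i - 1]
            r[p] = r[prev] if i and counts[p] == counts[prev] else i + 1
        return r

    r_ind = ranks(individuals)
    r_tt = ranks({p: tt[p] for p in totals})
    avg = {p: (r_ind[p] + r_tt[p]) / 2 for p in totals}
    return sorted(totals, key=lambda p: avg[p])  # "overall" order
```

A small example: a program with three distinct individuals placed (one of them posted twice) would record four placements, one duplicate, and three individuals; its two ranks are then averaged against every other program's.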

(Note that earlier versions of this report also counted and ranked the number of “direct” tenure-track placements, i.e., appointments into tenure-track positions without prior post-PhD employment. This information is recorded by PhilJobs, but is not currently part of the dataset they make publicly available. Future iterations of this report may restore this measure if and when it becomes available.)

Important Caveats

Most importantly, the data for this ranking is entirely dependent on what is reported by PhilJobs. Any inaccuracy, incompleteness, or redundancy in that data is reproduced here. Any conclusions drawn from this ranking should be qualified with “based on what has been reported by PhilJobs.” In particular, unreported placements would affect the rankings if they were included. Indeed, the sparseness of placement data makes ranks beyond 60 or so extremely variable. I would suggest taking the bottom half of the table as one large, unranked group. I also suspect underreporting accounts for the low rankings of universities outside North America.

One should be aware that many (but not all) institutions post placement information on their departmental webpages, and I encourage careful consideration of that information alongside this report. It is also possible to search for placement information by institution on the PhilJobs site. PhilJobs continues to work diligently to improve their dataset, in part on behalf of departments. The APA and the PhilPapers Foundation deserve applause for their support of this effort, and I encourage everyone to contribute in whatever way they can, most of all by submitting accurate information about appointments.

No account is taken of the size of the graduate programs. Thus, the rankings should not be taken as an indication of placement rate. There is no consideration of graduates who do not find academic employment (or whose employment is not reported to PhilJobs), who do not seek it, or who do not complete the program. This also introduces a significant ranking bias in favor of larger graduate programs. I have provided reports for 3- and 6-year intervals in order to indicate trends over time. In addition, these might be used to give some indication of what a typical graduate cohort can expect: if one takes 6 years to be the average length of a PhD program, then the placements recorded over a 6-year interval suggest what the current graduate student body at an institution might expect. But this measure is exceedingly rough, and I hesitate even to mention it.

Here, again, PhilJobs is making an effort to improve the data by collecting graduation rates for institutions. At the moment, this data is very incomplete, but I hope to include a measure of placement rate in future versions of this report.

No distinctions are made between different graduate programs at the same institution. Thus, Pittsburgh’s and Indiana’s rankings include the placement records of both the Philosophy and the History and Philosophy of Science programs, Irvine includes both Philosophy and the Logic and Philosophy of Science programs, and Washington University in St. Louis includes both Philosophy and Philosophy-Neuroscience-Psychology programs. One can consult the underlying data to get a better sense of the placement records of these programs individually.

There is no attempt to discriminate the “quality” of the placements beyond differentiating tenure-track and non-tenure-track appointments. For instance, one school’s placements might be judged more valuable than another’s even though they are tied in the rankings. However, such evaluations are subjective. Again, one is encouraged to consult the underlying data. Note also the “ecosystem” discussion below.

No account is taken of areas of study. One could easily extend this analysis by recording the reported areas of specialization (AOSs) of the individuals hired and then rank institutions by placements in those areas, but decisions would have to be made about what counts as a top-level sub-discipline of philosophy, and I wish to resist such subjective categorizations.


There are some features of the rankings that are worth mentioning, especially to those unfamiliar with professional academic philosophy. The first repeats the caveat mentioned above: the method employed to generate these rankings favors larger graduate programs (e.g., Wisconsin, Toronto). By the same token, institutions with more than one graduate program (Pittsburgh, Indiana, Irvine, Washington University) also get an artificial boost in the rankings, since all of their placements are counted together.

Second, the philosophically uninitiated should know that academic philosophy, for better or worse, is subdivided into different “ecosystems.” The most important distinction is that between “analytic” and “continental” approaches. What these terms mean need not concern us here, but it is important to know that most institutions are identified with one or the other (with “analytic” in the majority), and that institutions of one sort tend to place graduates at other institutions with the same orientation. That is, graduates of “continental” programs get jobs at other “continental” programs, but not at “analytic” programs, and mutatis mutandis for graduates of “analytic” programs. Stony Brook University, for instance, is a “continental” program that ranks well, but its graduates tend not to find employment at similarly ranked institutions, since those institutions are mostly “analytic.” A similar (and overlapping) subdivision is that determined by religious affiliation. Catholic institutions tend to hire from Catholic PhD programs, for example, and Baylor University is remarkably successful in placing graduates at other religious institutions. Finally, some schools place well only within a certain geographic area. The University of South Florida and the University of Cincinnati, for example, have good placement records, but many of their placements are local.

For all of these ecosystems, graduate programs get a rankings boost from their access to exclusive employment markets (since this report does not consider the hiring department), but they do not fare as well on the broader market. Those entering academic philosophy should learn about these disciplinary nuances lest they unintentionally constrain their future employment possibilities. Again, do not use the ranking uncritically, and study the underlying placement records by examining the spreadsheet (linked above) or by searching the PhilJobs database.


Thanks to the APA and the PhilPapers Foundation for supporting PhilJobs and for permitting use of their data. David Bourget has been remarkably helpful in facilitating this work.


Posted January 16, 2017 by David Marshall Miller