We develop a multi-period theoretical model to characterize the relationship between a publication that ranks universities and prospective attendees (high school students) who may view the ranking and use it to help decide which university to attend. We argue that published rankings not only convey information about the objective quality of universities but also affect their prestige, an element of students' utility functions distinct from objective quality. We show that this prestige effect can give the publication an incentive to take actions that are not in students' best interest; an example is the frequent changes to ranking methodology for which U.S. News & World Report (USNWR) is often criticized. Specifically, if a ranking that uses an attribute-and-aggregate methodology (the approach used by publications such as USNWR and BusinessWeek) creates prestige, then the publication (a) optimally chooses attribute score weights that do not match student preferences and (b) changes these weights over time even when student preferences are unchanged. Absent a prestige effect, our model predicts that the publication optimally chooses attribute score weights that match student preferences. We then use the model to characterize a socially optimal ranking methodology, one that maximizes the sum of the publisher's profit and the utilities of students who do and do not view the ranking, and show that this socially optimal methodology evolves over time toward a stable ranking that diverges from the publisher's optimal ranking. We conclude by discussing how students should treat published rankings in the current environment, and what types of ranking methodologies might be developed to better represent student preferences.
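The attribute-and-aggregate methodology referenced above can be sketched in a few lines: each university receives a score on several attributes, and the publication combines those scores with a weight vector of its choosing. The sketch below is purely illustrative; the attribute names, scores, and weights are hypothetical and do not come from the paper's model or from any publication's actual formula. It shows how the same attribute scores can yield different orderings under different weight choices, which is the lever the model studies.

```python
def rank_universities(scores, weights):
    """Aggregate per-attribute scores into one weighted score per
    university and return names sorted from best to worst."""
    aggregated = {
        name: sum(weights[attr] * value for attr, value in attrs.items())
        for name, attrs in scores.items()
    }
    return sorted(aggregated, key=aggregated.get, reverse=True)

# Hypothetical attribute scores on a 0-100 scale.
scores = {
    "Univ A": {"reputation": 90, "selectivity": 70, "resources": 60},
    "Univ B": {"reputation": 70, "selectivity": 85, "resources": 80},
}

# Two hypothetical weightings: one standing in for student
# preferences, one shifted by the publisher toward reputation.
student_weights = {"reputation": 0.2, "selectivity": 0.4, "resources": 0.4}
publisher_weights = {"reputation": 0.7, "selectivity": 0.2, "resources": 0.1}

print(rank_universities(scores, student_weights))    # Univ B ranks first
print(rank_universities(scores, publisher_weights))  # Univ A ranks first
```

Under the student weights, Univ B's weighted score (80) exceeds Univ A's (70); under the publisher weights, the ordering flips (Univ A: 83, Univ B: 74), even though no attribute score changed.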