RateMyBlog.com

Two weeks ago, Forbes magazine released its second annual ratings of U.S. colleges and universities. The Forbes ratings are competing with the market leader, U.S. News & World Report, whose rankings are taken way too seriously by the American public and by the institutions being ranked. Moreover, as I’ve argued recently, these ranking and rating schemes are wholly inadequate for their purported purpose: helping students and their families discern whether a particular institution is likely to be a good fit between a student’s needs and interests and a school’s capacity to meet them. In fact, the situation is much worse for choosing colleges and universities than for choosing elementary or secondary schools: there is even more variability in the experiences of students within a given college or university than within a typical elementary or secondary school, because college students pursue more specialized programs of study.

Forbes has gone to great lengths to distinguish its rating scheme from the one used by U.S. News.  The Forbes rankings are based on listings of alumni in Who’s Who in America;  salaries of alumni;  student evaluations from RateMyProfessors.com;  four-year graduation rates;  numbers of students receiving nationally competitive awards;  and the number of faculty receiving awards for scholarship and creative pursuits.  This differs dramatically from the U.S. News criteria, which emphasize peer assessments, retention rates, faculty and financial resources, selectivity, graduation rate performance, and alumni giving rates.  There’s nothing scientific about the choice of indicators making up the respective rankings;  it’s a matter of judgment, and any reader is free to proclaim that these aren’t the indicators that she or he would choose, or that some indicators should get more or less weight than others.
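To see how much that judgment about weights matters, here is a minimal sketch in Python. The schools, indicator scores, and weights below are invented purely for illustration; they are not the actual Forbes or U.S. News formulas.

```python
# A composite ranking is just a weighted sum of whichever indicators the
# ranker happens to choose. All names, scores, and weights are invented.
schools = {
    "College A": {"alumni_outcomes": 90, "student_ratings": 60, "grad_rate": 80},
    "College B": {"alumni_outcomes": 70, "student_ratings": 95, "grad_rate": 75},
}

def composite(scores, weights):
    """Weighted sum of a school's indicator scores."""
    return sum(scores[k] * w for k, w in weights.items())

weights_1 = {"alumni_outcomes": 0.50, "student_ratings": 0.25, "grad_rate": 0.25}
weights_2 = {"alumni_outcomes": 0.25, "student_ratings": 0.50, "grad_rate": 0.25}

for label, weights in (("Weighting 1", weights_1), ("Weighting 2", weights_2)):
    ranked = sorted(schools, key=lambda s: composite(schools[s], weights), reverse=True)
    print(label, "->", " > ".join(ranked))
```

Same data, different judgment about weights, different “best” school.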

Perhaps the most striking feature of the Forbes rankings is the reliance on RateMyProfessors.com ratings for 25% of the total score. Founded in 1999, RateMyProfessors.com (RMP) is owned by MTV Networks, a division of Viacom. I can see a case for incorporating students’ reports of their satisfaction with their courses, as long as one doesn’t mistake such reports for direct evidence of what students learned in those courses. But using RMP for this purpose is highly problematic, because students choose whether to rate their professors on the website, and the students at a particular college who choose to do so may not be representative of all of the students who attend it. If the students who post ratings are not representative of the student population at a given college, the average of those ratings doesn’t tell us much that is useful about the typical experience of students there.
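To make that last point concrete, here is a minimal simulation sketch in Python. The campus, the satisfaction scores, and the posting behavior are all invented assumptions chosen only to illustrate self-selection; none of it describes any real school or RMP itself.

```python
import random

random.seed(0)

# A hypothetical campus: 20,000 course experiences, satisfaction on a 1-5 scale.
# All numbers are illustrative assumptions, not data about any real school.
population = [min(5.0, max(1.0, random.gauss(3.6, 0.8))) for _ in range(20_000)]

def post_probability(score):
    """Chance that a student with a given experience posts a rating.
    Assumption: students with a bad experience are ten times more likely
    to bother posting than students with a good one."""
    return 0.30 if score < 2.5 else 0.03

def self_selected_ratings(pop, n_posts):
    """Collect n_posts ratings under the self-selection model above."""
    posts = []
    while len(posts) < n_posts:
        score = random.choice(pop)
        if random.random() < post_probability(score):
            posts.append(score)
    return posts

true_mean = sum(population) / len(population)
for n in (100, 1_000, 10_000):
    posted = self_selected_ratings(population, n)
    print(f"{n:6d} posted ratings: average {sum(posted)/len(posted):.2f} "
          f"(campus-wide average {true_mean:.2f})")
```

However many ratings pile up, the posted average stays well below the campus-wide average, because adding more self-selected raters just adds more of the same skew.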

Forbes and its consultants at the Center for College Affordability and Productivity at Ohio University anticipate this concern and defend the choice of RMP ratings on three grounds: RMP ratings are similar to the student evaluations of teaching that institutions use themselves; when ratings for hundreds of instructors on a campus are added together, the potential bias from overly unhappy or overly complimentary students washes out; and whatever bias exists is probably similar from one campus to the next.

The volume of student ratings on RMP is impressive—over 6,000 schools, one million professors, and ten million comments, proclaims the website—but fundamentally misleading.  If we look at just the past seven years, I estimate that undergraduates in four-year institutions alone have taken roughly 425 million courses.  Those 10 million ratings represent about 2% of that figure, and that’s an upper bound estimate, since RMP includes ratings of professors at two-year institutions as well, and some ratings from 1999 to 2002.  The ratings on RMP probably represent fewer than 1% of the courses taken by students on most four-year campuses.
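For anyone who wants to see where a number like 425 million might come from, here is the back-of-envelope arithmetic in Python. The enrollment and course-load inputs are my own rough assumptions, chosen only to show the order of magnitude; they are not figures from the article or from federal statistics.

```python
# Back-of-envelope check of the ~2% figure above.
# The first two inputs are rough assumptions for illustration only.
undergrads_at_four_year_schools = 7_500_000  # assumed U.S. four-year undergraduate enrollment
courses_per_student_per_year = 8             # assumed annual course load
years = 7

courses_taken = undergrads_at_four_year_schools * courses_per_student_per_year * years
rmp_comments = 10_000_000                    # the figure proclaimed on the RMP site

print(f"Estimated courses taken: {courses_taken:,}")                   # ~420 million
print(f"RMP comments as a share: {rmp_comments / courses_taken:.1%}")  # ~2.4%
```

Even under these generous assumptions, RMP covers only a sliver of the courses students actually took.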

This wouldn’t be a problem if that 1% or so were representative of all students’ experiences on a campus. But there’s really no evidence that the ratings on any particular campus are typical of students’ experiences. The few existing studies of RMP show that the dimensions of the ratings (e.g., easiness, helpfulness, clarity, rater interest) are correlated with one another, and that on two campuses—Lander University in South Carolina and the University of Maine—the ratings correlate with the institutions’ own student evaluations of teaching, although more clearly when the volume of RMP ratings is high than when it is low. Neither type of study demonstrates that the students who post ratings on RMP are a representative sample at the schools rated by Forbes.

And what of my own ratings on RMP? I’ve taught 580 graduate students over the past five years, and two of those 580 students posted ratings. One didn’t care for me, describing me as “not very helpful, a bit sarcastic.” The other apparently liked me, describing me as “very helpful with questions,” and awarding me a coveted chili pepper (indicating that I’m “hot”). There’s no accounting for taste.
