Data Are Good; More Data May Not Be Better

Nowadays, it seems like anybody with a fast server, some GIS software, and some links to federal and state education databases can put up a website comparing schools.  Among the latest entries in the school-comparison derby is schooldigger.com, a service of Claarware LLC, billed as “The Web’s Easiest and Most Useful K-12 Search and Comparison Tool for Parents.”  Schooldigger’s name evokes the image of digging into the interior of schools to see what makes them tick.

The rhetoric on schooldigger’s website is typical.  The site purports to rank schools within states from best to worst.  “Other sites charge over $20 a month for this service!” the site exclaims, but schooldigger does it for free.  For New York, the rankings are based on the sum of the average percent proficient in English and math across tested grades.  The rankings of schools are aggregated so that cities and districts can be ranked as well.  Schools, cities, and districts in the 90th to 100th percentiles of the distribution get five stars; those in the 70th to 90th percentiles get four stars; those in the 50th to 70th percentiles get three stars; those in the 30th to 50th percentiles get two stars; those in the 10th to 30th percentiles get one star; and those in the bottom 10% of the distribution get zero stars.
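For readers who want to see the mechanics spelled out, here is a minimal sketch in Python of that star-bucketing rule as I’ve described it; the function name is mine, and the handling of ties and exact boundary cases is a guess, not schooldigger’s published method.

```python
def stars_from_percentile(pct):
    """Map a school's statewide percentile (0-100) to a star rating,
    following the cutoffs described above. Boundary handling is assumed."""
    if pct >= 90:
        return 5
    elif pct >= 70:
        return 4
    elif pct >= 50:
        return 3
    elif pct >= 30:
        return 2
    elif pct >= 10:
        return 1
    return 0

print(stars_from_percentile(85))  # a school at the 85th percentile gets 4 stars
```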

 Sites such as schooldigger may have some interesting bells and whistles, but they can never adequately address the question that I think is of greatest interest to parents:  How would my child fare in this school, as compared to another school?  If this is, indeed, the question, then school comparison websites are doomed to provide poor and potentially misleading answers.

 There are several reasons for this, but I’ll focus on just two.  First, the rankings do not take account of the kinds of students who attend a given school.  Since we know that there is a powerful association between family economic status and student achievement, schools serving high concentrations of poor children will, on average, rank lower than schools serving a predominantly middle- or upper-class population.  Stating this is not, I believe, a case of the soft bigotry of low expectations.  Rather, it’s an acknowledgment that a school’s context matters in judging how well the school is serving its students.

 I used schooldigger to identify schools within a mile of my office, and one of the schools that showed up was P.S. 180, the Hugo Newman School on 120th St. in Harlem.  At Hugo Newman, 85% of the students were proficient in math in 2008, and 65% were proficient in English Language Arts.  If we set aside reservations about using high-stakes tests as a measure of school performance—which I’ll do solely for the purpose of this posting—that sounds pretty good, especially when we take note of the fact that 88% of the students attending Hugo Newman are eligible for a free or reduced-price lunch.  The school’s letter grade on the student performance section of the 2007-2008 School Progress Report—boy, I’m breaking all of the rules here, aren’t I?—was an A.  Not too shabby, right?

Schooldigger gave Hugo Newman one star.  That’s because the school was tied for 1,592nd out of 2,276 elementary schools in New York State, which puts it at roughly the 30th percentile of the state’s elementary schools.  Comparing Hugo Newman to the elementary schools of Syosset or Jericho, communities with median family incomes of well over $100,000 per year, seems kind of ridiculous, doesn’t it?  Those schools do not serve the kinds of children that Hugo Newman enrolls.
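If you want to check where that 30th-percentile figure comes from, a rough back-of-the-envelope version (my arithmetic, not schooldigger’s published formula, and it ignores how ties are split) is simply the share of ranked schools sitting below Hugo Newman:

```python
# Share of New York elementary schools ranked below a school tied for
# 1,592nd place out of 2,276 -- a rough stand-in for its percentile.
rank, total = 1592, 2276
percentile = (total - rank) / total * 100
print(round(percentile, 1))  # about 30.1
```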

 But even if we are able to solve the problem of comparing apples to apples, there is another challenge.  With the exception of the occasional brown spot (or worm!), biting into an apple in one place is pretty much the same as biting into it in another place.  That is, apples are pretty homogeneous in their composition.  And that means that one bite of an apple tells you a lot about the apple overall.  Not so for schools.  Even in schools that are relatively homogeneous in the kinds of children who attend them—the color of their skin, or their family economic standing—there frequently are substantial differences among children in their experiences in the school and how much they have learned.  The last 40 years of educational research have demonstrated conclusively that in the United States, there is far more variability in children’s achievement within a given school than there is across schools.  Much of this variability is masked when children’s learning is measured in the metric of proficiency rates, in which all children who are above the proficiency threshold are assumed to be achieving at similar levels.
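To make that last point concrete, here is a toy illustration (the numbers are invented, not data from any real school): two schools can post identical proficiency rates even though their students’ scores are spread very differently.

```python
# Invented scores for two hypothetical schools, with a proficiency
# cutoff of 65 chosen purely for the sake of the example.
school_a = [58, 60, 66, 67, 68, 69, 70, 71, 72, 73]
school_b = [35, 40, 70, 75, 78, 80, 85, 88, 92, 95]
cutoff = 65

for name, scores in [("A", school_a), ("B", school_b)]:
    rate = sum(s >= cutoff for s in scores) / len(scores)
    print(f"School {name}: {rate:.0%} proficient, "
          f"scores from {min(scores)} to {max(scores)}")
# Both schools come out "80% proficient," but School B's students are spread
# far more widely -- exactly the variability a proficiency rate hides.
```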

The fact that there is more variation in achievement within schools than between them may seem counterintuitive, because we are drawn to think about exceptional schools, and a large school system such as New York’s has a number of schools whose reputations, and average student achievement, are extraordinary.  But nobody needs a school comparison website to figure out that the youth who attend New York City’s specialized exam high schools are high achievers.  There are many more schools that are not extraordinary, and that are populated with students who are doing okay, on average, with some students doing very well and others not so well.  The kinds of data available on a site such as schooldigger are ill-suited to predicting where in that distribution of outcomes a particular child might fall.  Any suggestion to the contrary is wishful thinking.
