As the number of incentive programs has risen in New York City, so have the city’s state test scores. But does that mean the two trends are related?
If you believe the Post’s headlines, then the answer is yes. Yesterday, the paper ran one article about test scores rising in virtually all schools where students are now paid for their performance and another about test scores inching up at the 158 schools where teachers are to receive bonuses if their students do well. Today, the Post reported that 23 of the 51 middle schools targeted for extra resources due to persistent low performance posted higher-than-average gains.
But you need look no further than the Post’s own coverage to see that it’s impossible to determine whether any of the incentive programs have paid off. In two of the programs, teacher bonuses and the middle school initiative, more than half of participating schools saw below-average improvement. The Post declares that the third, the controversial new program that pays some students in selected schools for particular successes, “dramatically improved test scores” because scores rose in almost all of the 35 schools included in the program. But here, as with all of the incentive programs, we know only about correlation, not causation. And as part of Opportunity NYC, the student-payment program included only some students at the eligible schools, a fact that suggests that some other force may also have been at play in those schools’ test score jumps.
In addition, while the DOE said it selected schools for the new programs because other initiatives hadn’t budged their test scores in the past, we know it’s easier to raise scores among students at the low end of the spectrum than to generate large improvements among high-performing students. It would make sense, then, that most schools posting higher-than-average gains would be low-performing to start.
Instead of looking for answers among the many initiatives thrown against the wall at each school, we should be working to identify the common features of schools that posted large test score gains. Do they all have an extended day program? Do they all have principals who have the respect and trust of their teachers? Or have they all given up art and music in favor of extra test prep sessions?
If only the fledgling movement to create an independent research institute to evaluate the DOE’s initiatives and claims had succeeded sooner. Social scientists who are not beholden to the DOE could be examining the data while controlling for different variables, allowing us to know whether in fact any of these programs alone made a difference in students’ test scores. Complicating the picture with many different programs might make it easier for the chancellor and mayor to justify continued incentive programs, but it does little to build better schools.