In Defense of High School Progress Reports

In a post on this page earlier this week, “Comparing Small Apples to Large Apples,” Teachers College Professor Aaron Pallas raised several important issues with respect to New York City’s high school progress reports. A frank dialogue about the strengths and weaknesses of our accountability system is important, as it helps us make improvements while deepening the public’s understanding of how the system works. There are several points in Dr. Pallas’s argument that I’d like to address in order to clarify our approach and avoid potential misconceptions.

The high school progress report accounts for multiple measures: four- and six-year graduation rates; attendance; performance on five Regents exams; and credit accumulation at the end of ninth, tenth, and eleventh grades.

Pallas questions the usefulness of the credit measures. Credit data is derived from the grades teachers assign to students. As a former high school principal, I know firsthand that the progress students make in earning credits is a key predictor of graduation. Academic success in ninth grade, in particular, predicts graduation better than either demographics or prior academic achievement.

Pallas concedes that the use of peer schools to contextualize each school’s performance and progress is a “good feature” of the progress report; however, he criticizes our method for identifying peer schools. The student characteristics most predictive of high school success, as measured by the ability to earn credits, pass Regents exams, and graduate, are students’ incoming proficiency levels and their special education and over-age status. Our peer index controls for these factors. Pallas suggests that we should also control for socio-economic factors like race and Title I status. While these are correlated with achievement outcomes, they are less significant indicators once we control for a student’s incoming proficiency. Socio-economic factors are typically used as a proxy for academic achievement when reliable data on student achievement levels is unavailable; by contrast, we have and use actual data on incoming achievement levels for high school students. Our method also guards against unduly rewarding academically screened schools that serve high-minority or low-income populations. Finally, while English language learner status is not part of the peer group determination and Pallas faults the progress reports for “systematically penaliz[ing]” these students (among others), schools serving large numbers of ELL students in fact tend to outperform their peers on progress reports.

Pallas recommends that we include school size and per-pupil expenditure as components in identifying peers. In a comment, Leonie Haimson adds class size to the mix. In fact, neither budget nor class size is correlated with a school’s progress report score. With respect to school size, there is a small correlation between a school’s enrollment and its progress report score: smaller schools perform slightly better on average, though there are many cases of large schools outperforming smaller ones (Francis Lewis, our largest non-specialized high school, earned an A, for example).

So why don’t we include school size in the peer index? Because if we did, we would merely be comparing big schools to other big schools and small schools to other small schools. We don’t want to control for size; we want to compare what all schools contribute to student learning irrespective of their size. We identify where the creation of smaller units, either in the form of small schools or through small learning communities within large schools, leads to better student outcomes, and where it does not. Similarly, we identify cases where larger schools have achieved better outcomes than smaller ones. We believe this is the information families and school leaders find most useful.

Let’s step back for a moment and consider more broadly the equity concerns Pallas raised.

Everyone understands that there is a shameful achievement gap in New York City. The DOE is focused on eliminating it. We’re making progress after decades in which no one was held accountable for the profound failure to educate our neediest students. We are working to create a school measurement system that compares schools on a level playing field and explicitly rewards their efforts to close the achievement gap and move our highest-need students forward. Each year we have been incrementally more successful. We identify peer schools based on similarly performing student populations. We create incentives in many of our measures that reward schools for outstanding performance and progress with special education, over-age, and lower-performing students. We have measures dedicated to closing the achievement gap within the school (credit measures in the student progress section) and across the city (additional credit measures tied to exemplary progress among special education students, English language learners, black students in the lowest third citywide, Hispanic students in the lowest third citywide as a separate measure, and all other students in the lowest third citywide). These measures keep schools focused on traditionally underperforming groups of students who for too long were overlooked by our system.

In other words, we evaluate success using a system that controls for much of the demographic difference across schools, and we have established incentives that we hope will reduce what remains of those differences. Almost every other accountability system in the country shows far more severe demographic skews in its results, because those systems a) don’t take the steps we do to control for differences in performance among groups of students and b) don’t measure progress.

Pallas closes by suggesting that we have stacked the deck in favor of the new small schools created in this administration. Consider the demographic composition of students in the 125 new high schools opened since 2002 alongside the composition of students in the other high schools receiving grades.

The comparison doesn’t seem so inequitable. The new small schools have higher percentages of minority students and of Title I students, lower incoming proficiency, and parity with respect to English language learners and special education students. Despite these demographics, the new small schools have received higher progress report scores on average.

Look at one example: the Van Arsdale campus, which was the site of the press conference for the high school progress reports earlier this week. The three new small schools on that campus have a collective graduation rate above 80%. The school they replaced had a 35% graduation rate in 2003-04. The demographics for the campus then and now are similar, though not identical (differences of 4 to 5 points in high-needs categories). It is implausible that such modest demographic differences produced a 45-point jump in the graduation rate between 2003-04 and 2008-09. By any standard, this campus is a terrific educational success story.

Progress reports have helped increase the focus on student achievement and are holding principals accountable for student performance in a way that never existed before. We are not claiming they are perfect. We continually look for ways to improve them and solicit feedback from our educators in the field as well as from external proponents and critics of our system. We have been doing this for the last three years and will continue to do so. We are also reviewing the elementary and middle school progress reports to address concerns noted by Pallas and others.

It’s essential to remember, however, that for us this is not an academic exercise. We are not interested in creating the most statistically complex measurement system. That system would sacrifice a critical goal of the progress report: to serve as a tool with clear measures that educators in the field can understand and use. Since 2007, when the progress reports were first released, principals have focused more intently on student learning and have achieved notable gains as a result. Principals know that if they can enable their highest-need students to succeed, they will earn significant rewards on the progress report through the additional credit section, the diploma weights for special education and over-age students, and the credits earned by lower-performing students. Dr. Pallas’s attempt to dismiss the credibility of the system suggests a desire to pursue the perfect design from a statistical perspective without serious consideration of what is effective in practice. In fact, the progress report is a far better school evaluation and performance management tool than anything we’ve ever had in New York City, and it has become a model for other districts nationally and internationally. We’ll continue to work to improve it and look forward to a continued dialogue on the best ways to do that.

Shael Polakow-Suransky is the Department of Education’s chief accountability officer.
