Add one more point of critique to the city’s Teacher Data Reports: Experts and educators are worried about the bell curve along which the teacher ratings fell.
Like the distribution of teachers by rating across types of schools, the distribution of scores among teachers was essentially built into the “value-added” model that the city used to generate the ratings.
The long-term goal of many education reformers is to create a teaching force in which nearly all teachers are high-performing. However, in New York City’s rankings — which rated thousands of teachers who taught in the system from 2007 to 2010 — teachers were graded on a curve. That is, under the city’s formula, some teachers would always be rated as “below average,” even if student performance increased significantly in all classrooms across the city.
The ratings were based on a complex formula that predicts how students will do — after taking into account background characteristics — on standardized tests. Teachers received scores based on students’ actual test results measured against the predictions. They were then divided into five categories. Half of all teachers were rated as “average,” 20 percent were “above average,” and another 20 percent were “below average.” The remaining 10 percent were divided evenly between teachers rated as “far above average” and “far below average.”
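The forced distribution described above can be sketched in a few lines of code. This is a hypothetical illustration, not the city's actual formula: it simply buckets teachers by percentile rank using the 5/20/50/20/5 cutoffs, which is enough to show why the category shares never change even when every classroom improves.

```python
# Hypothetical sketch of a forced curve, not the city's actual model:
# teachers are bucketed purely by percentile rank, so the share in each
# category is fixed no matter how much every classroom improves.

def rate_on_curve(scores):
    """Assign each score a category based on its percentile rank."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    n = len(scores)
    labels = [None] * n
    for rank, i in enumerate(order):
        pct = (rank + 0.5) / n  # midpoint percentile of this teacher
        if pct < 0.05:
            labels[i] = "far below average"
        elif pct < 0.25:
            labels[i] = "below average"
        elif pct < 0.75:
            labels[i] = "average"
        elif pct < 0.95:
            labels[i] = "above average"
        else:
            labels[i] = "far above average"
    return labels

scores = [float(s) for s in range(1, 21)]      # 20 teachers
better = [s + 50 for s in scores]              # every classroom improves
assert rate_on_curve(scores) == rate_on_curve(better)
print(rate_on_curve(scores).count("average"))  # 10 of 20 are "average"
```

Note the assertion: shifting every score upward by the same amount leaves the ratings untouched, which is exactly the property critics objected to.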
IMPACT, the District of Columbia’s teacher-evaluation system, also uses a set distribution for teacher ratings. As sociologist Aaron Pallas wrote in October 2010, “by definition, the value-added component of the D.C. IMPACT evaluation system defines 50 percent of all teachers in grades four through eight as ineffective or minimally effective in influencing their students’ learning.”
New York City schools erupted in controversy last week when the school district released its “value-added” teacher scores to the public after a yearlong battle with the local teachers union. The city cautioned that the scores had large margins of error, and many education leaders around the country believe that publishing teachers’ names alongside their ratings is a bad idea.
Still, a growing number of states are now using evaluation systems based on students' standardized test scores in decisions about teacher tenure, dismissal, and compensation. So how does the city's formula stack up against methods used elsewhere?
The Hechinger Report has spent the past 14 months reporting on teacher-effectiveness reforms around the country and has examined value-added models in several states. New York City’s formula, which was designed by researchers at the University of Wisconsin-Madison, has elements that make it more accurate than other models in some respects, but it also has elements that experts say might increase errors — a major concern for teachers whose job security is tied to their value-added ratings.
“There’s a lot of debate about what the best model is,” said Douglas Harris, an expert on value-added modeling at the University of Wisconsin-Madison who was not involved in the design of New York’s statistical formula. The city used the formula from 2007 to 2010 before discontinuing it, in part because New York State announced plans to incorporate a different formula into its teacher evaluation system.
The New York Times' first big story on the Teacher Data Reports released last week contained what sounded like great news: After years of studies suggesting that the strongest teachers were clustered at the most affluent schools, top-rated teachers now seemed as likely to work on the Upper East Side as in the South Bronx.
Teachers with high scores on the city's rating system could be found "in the poorest corners of the Bronx, like Tremont and Soundview, and in middle-class neighborhoods," "in wealthy swaths of Manhattan, but also in immigrant enclaves," and "in similar proportions in successful and struggling schools," the Times reported.
Education analyst Michael Petrilli called the findings "jaw-dropping news" that "upends everything we thought we knew about teacher quality."
Except it's not really news at all. Value-added measurements like the ones used to generate the city's Teacher Data Reports are designed precisely to control for differences in neighborhood, student makeup, and students' past performance.
The adjustments mean that teachers are effectively ranked relative to other teachers of similar students. Teachers who teach similar students, then, are guaranteed to have a full range of scores, from high to low. And, unsurprisingly, teachers in the same school or neighborhood often teach similar students.
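A toy example makes the point concrete. The numbers below are invented for illustration, not taken from actual reports: value-added is the gap between a class's actual average and the score predicted for similar students, so an affluent school and a high-poverty school each end up with teachers on both sides of zero.

```python
# Invented numbers for illustration only. Value-added is the gap between a
# class's actual average score and the score predicted for similar students,
# so every type of school produces both positive and negative scores.

teachers = [
    # (school type, predicted class average, actual class average)
    ("affluent",     85.0, 88.0),
    ("affluent",     85.0, 82.0),
    ("high-poverty", 60.0, 64.0),
    ("high-poverty", 60.0, 57.0),
]

for school, predicted, actual in teachers:
    value_added = actual - predicted
    print(f"{school}: value-added {value_added:+.1f}")
```

In this toy data, the high-poverty teacher at +4.0 outranks the affluent-school teacher at +3.0 despite far lower raw scores, which is why top ratings turn up in every neighborhood by construction.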
“I chuckled when I saw the first [Times story], since the headline pretty much has to be true: Effective and ineffective teachers will be found in all types of schools, given the way these measures are constructed,” said Sean Corcoran, a New York University economist who has studied the city’s Teacher Data Reports.
The Department of Education released a final installment of Teacher Data Reports today, for teachers in charter schools and schools for the most severely disabled students.
Last week, the city released the underlying data from about 53,000 reports for about 18,000 teachers who received them during the project's three-year lifespan. Teachers received the reports between 2008 and 2010 if they taught reading or math in grades 4 through 8.
When the department first announced that it would be releasing the data in response to several news organizations' Freedom of Information Law requests, it indicated that ratings for teachers in charter schools would not be made public. It reversed that decision late last week and today released "value-added" data for 217 charter school teachers.
Participation in the data reports program was optional for charter schools, and schools entered and exited the program over the years it operated, with eight schools participating in 2007-2008 and 18 participating in 2009-2010. At the time, the city had about 100 charter schools.
The department also released reports for 50 teachers in District 75 schools, which enroll the city's most severely disabled students. The number is small because few District 75 students take regular state math and reading exams. Also, District 75 classes are typically very small, and privacy laws led the city to release data only for teachers who had more than 10 students take state tests. District 75 teachers also received reports only in 2008 and 2010; the program was optional in the district's schools in 2009.
Department officials warned last week that the reports had high margins of error — 35 percentage points for math teachers and 53 percentage points for reading teachers, on average — and urged caution in interpreting them.
Tomorrow's planned release of 12,000 New York City teacher ratings raises questions for the courts, parents, principals, bureaucrats, teachers — and one other party: news organizations. The journalists who requested the release of the data in the first place now must decide what to do with it all.
At GothamSchools, we joined other reporters in requesting to see the Teacher Data Reports back in 2010. But you will not see the database here, tomorrow or ever, as long as it is attached to individual teachers' names.
The fact is that we feel a strong responsibility to report on the quality of the work the 80,000 New York City public school teachers do every day. This is a core part of our job and our mission.
But before we publish any piece of information, we always have to ask a question. Does the information we have do a fair job of describing the subject we want to write about? If it doesn't, is there any additional information — context, anecdotes, quantitative data — that we can provide to paint a fuller picture?
In the case of the Teacher Data Reports, "value-added" assessments of teachers' effectiveness that were produced in 2009 and 2010 for reading and math teachers in grades 3 to 8, the answer to both those questions was no.
We determined that the data were flawed, that the public might easily be misled by the ratings, and that no amount of context could justify attaching teachers’ names to the statistics. When the city released the reports, we decided, we would write about them, and maybe even release Excel files with names wiped out. But we would not enable our readers to generate lists of the city’s “best” and “worst” teachers or to search for individual teachers at all.
It's true that the ratings the city is releasing might turn out to be powerful measures of a teacher's success at helping students learn. The problem lies in that word: might.
The city can release teacher ratings data to news organizations, the state's second-highest court ruled today in another serious blow to the union's effort to keep individual teachers' scores out of the press.
The release won't happen right away while the legal fight continues, Department of Education officials said.
But the union is running out of chances to stop the ratings from being published. In December, a State Supreme Court judge ruled that the city could release Teacher Data Reports for at least 12,000 teachers who have them. After the Appellate Court ruling today, the union's last hope is the state's highest court, the Court of Appeals.
The union is already working on its appeal, UFT President Michael Mulgrew announced moments after the Appellate Court ruling.
Because the four judges on the Appellate Court ruled unanimously against the union, there's no guarantee that the Court of Appeals will hear the case. Instead, the Appellate Court has to give permission. Within days, the union will ask the Appellate Court for permission to have the case heard in the Court of Appeals. If permission isn't granted, the union can also ask the Court of Appeals itself. If the Court of Appeals declines to hear the case, then the Appellate Court's decision would stand and the union would be out of options.
The teachers union and the city are heading back to court today, for the second round in an ongoing battle over the public release of teacher ratings.
Last December, a state judge ruled that the city could release controversial teacher evaluations. Today, the union seeks to reverse that decision in Appellate Court.
The stakes are high for the city, which could use the release of teacher ratings as a key engine for galvanizing public support in favor of doing away with seniority layoffs. But the union, which wants to maintain "last in, first out" layoff rules, says that the evaluations are too inaccurate to be used for such high-stakes decisions.
The "value-added" evaluations, which grade teachers by comparing their students’ test scores to forecasted scores, were created as an internal assessment, designed to help teachers gauge their own performance. But the Department of Education announced it would release the ratings publicly after several news organizations filed Freedom of Information Law requests for them. This decision prompted a UFT lawsuit.
The parent of a Queens public school student is accusing the New York Post of fabricating his support for publicly releasing teachers' effectiveness scores.
Queens Community Education Council member Brian Rafferty said that an op-ed published in the New York Post last week bore his byline, but not his views. Rafferty, who is also the executive editor of the Queens Tribune, made the accusation at a council meeting in Ridgewood, Queens, last night. The piece, titled "Dad: Union putting my child last," criticized the city's teachers union for going to court to block the city from releasing teachers' ratings.
Last night, Rafferty told a room packed with parents and teachers that he does not support releasing 12,000 teachers' ratings with their names included.
"I might be skeptical of the union sometimes, no offense guys, but there is absolutely no way that these opinions are mine," he said.
For parents of students in the "average" city teacher's class, learning the teacher's rating may not tell them very much, Chancellor Joel Klein wrote in a letter to principals today.
In his email, Klein explained the city's decision to release teachers' effectiveness ratings and the teachers union's move to block this from happening. He noted that the ratings, which measure teachers against estimations of how much their students' test scores ought to rise, would be most useful in identifying very high and low performing teachers. He wrote:
One indication will never tell the whole story, and sometimes it is hard to discern definitive evidence from data alone — such as with a teacher who is "average" according to these numbers, for example. But where teachers have performed consistently toward the top or the bottom, year after year, these data surely tell us something very important. Namely, we need to retain and reward the great teachers, and we need to develop the low-performing teachers. And those who don't improve quickly need to be replaced with better-performing teachers.
City education officials are saying they want to release teachers' ratings publicly as a way of helping bad teachers improve and rewarding those who are excelling.
In an interview with John Gambling on WOR-AM (710) this morning, Deputy Chancellor John White said the union's concerns about how parents and the public would use the data were legitimate. But, he said, those concerns should not be an obstacle to improving how teachers are evaluated. He told Gambling:
And these data show that, actually, there are plenty of teachers who every year, year after year after year, are performing at the top of their game. We need to honor those teachers. This is not just about failing teachers.
But there are cases where we see every year, teachers in the bottom. And you can sit there and say, "Oh there's this exception, this teacher's is not a perfect score, it doesn't reflect this," but at the end of the day when you have teachers who are performing way at the top year after year after year, way at the bottom year after year after year, you have to say: are we doing the right thing for kids? We've got to keep that teacher at the top, we've got to pay that teacher right, at the top, and that teacher at the bottom, they've got to get better or we've got to get a better teacher.
It's unclear how making teachers' ratings public would improve their performance, as principals and teachers already have access to the ratings. This year, principals are supposed to use the ratings as a factor in tenure decisions and by 2012 they will be a significant part of all teachers' evaluations.
Mayor Bloomberg on NBC today, announcing a crackdown on seniority-based layoffs and a new tenure policy.
In his first major education policy announcement for the new school year, Mayor Michael Bloomberg this morning vowed a renewed attack on seniority laws that protect veteran teachers and a change in how teachers are awarded tenure.
He made the remarks on NBC, which is dedicating this week to school reporting in a project called "Education Nation."
The attack on seniority laws came as city officials made a dire budget prediction for next year, saying that they will likely have to lay off public school teachers as federal stimulus funding runs out. Under the current state law, teachers with the least seniority would be the first to lose their jobs — a policy known as "last in, first out." The mayor and Chancellor Joel Klein oppose this policy, but their effort to change the law, which the teachers union supports, went nowhere last year.
Today, the mayor said he would try dismantling the policy again before the city confronts an expected $700 million budget hole and possible layoffs next year.
"It's time for us to end the 'last-in, first out' layoff policy that puts children at risk here in New York — and across our wonderful country," Bloomberg said on NBC. "How could anyone argue that this is good for children? The law is nothing more than special interest politics, and we're going to get rid of it before it hurts our kids," he added.
Teachers union officials immediately squashed any possibility that they might partner with the mayor.