Fact or Opinion?

WHAT IS FIRST PERSON?

In the First Person section, we feature informed perspectives from readers who have firsthand experience with the school system. View submission guidelines here and contact our community editor to submit a piece.

What counts as a “fact”? New York State Supreme Court Justice Cynthia Kern’s ruling on the release of the New York City Teacher Data Reports reflects a view very much at odds with that of the social science research community. In ruling that the Department of Education’s intent to release these reports, which purport to label elementary and middle school teachers as more or less effective based on their students’ performance on state tests of English Language Arts and mathematics, was neither arbitrary nor capricious, Kern held that there is no requirement that data be reliable in order to be disclosed. Rather, the standard she invoked was that the data simply need to be “factual,” quoting a Court of Appeals case holding that “factual data … simply means objective information, in contrast to opinions, ideas or advice.”

But whether the particular statistical analyses involved in producing the Teacher Data Reports warrant the inference that teachers are more or less effective is entirely a matter of opinion. All statistical models involve assumptions that lie outside of the data themselves, and whether those assumptions are appropriate is a matter of opinion. Among the key assumptions necessary to make inferences about teacher effectiveness from student performance on the state tests are the following:

  • The tests are valid measures of students’ mastery of English Language Arts and mathematics.
  • A student’s performance on the test, which is taken on a particular date, reflects how that student would perform on the test on other dates.
  • The student, classroom and school-level variables taken into account in the value-added model underlying the Teacher Data Reports are appropriate for inferring that a particular teacher caused the test-score gains experienced by that teacher’s students.
  • Test-score gains observed on tests administered in the middle of one year and the middle of the following year can be properly apportioned to the prior-year teacher and the current-year teacher.

The fact that reasonable people might disagree about these assumptions makes clear that they are a matter of opinion. For example, research by testing expert Dan Koretz, Jennifer Jennings and others shows that the tests at issue were subject to score inflation, because they covered an increasingly predictable and small subset of the curricular standards set by New York state, and failed to predict whether students were well-prepared for college and life after high school. Researchers such as economist Jesse Rothstein have questioned whether value-added models such as the ones used in the production of the Teacher Data Reports are able to simulate a “level playing field” in which teachers can be assumed to have equivalent classes of students.

Even the Department of Education’s own contractors have been of different minds about how to apportion gains when students are exposed to two different teachers between last year’s test and this year’s test. Initially, the gains were apportioned based on the number of months of exposure to last year’s teacher and this year’s teacher. But the most recent technical report for the production of the Teacher Data Reports attributes all of the gains between last year’s test and this year’s test to the current-year teacher.
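To see how much this modeling choice matters, here is a minimal sketch contrasting the two apportionment rules described above. The numbers are invented for illustration and are not drawn from the actual Teacher Data Reports or their technical documentation:

```python
# Hypothetical illustration: two ways to apportion a student's test-score
# gain between last year's teacher and this year's teacher.

def apportion_by_months(gain, months_with_prior, months_with_current):
    """Split the gain in proportion to months of exposure to each teacher."""
    total = months_with_prior + months_with_current
    prior_share = gain * months_with_prior / total
    current_share = gain * months_with_current / total
    return prior_share, current_share

def apportion_all_to_current(gain):
    """Attribute the entire gain to the current-year teacher."""
    return 0.0, gain

# Suppose a student gains 10 scale points between mid-year tests, spending
# 5 months with the prior-year teacher and 5 with the current-year teacher.
print(apportion_by_months(10.0, 5, 5))   # (5.0, 5.0)
print(apportion_all_to_current(10.0))    # (0.0, 10.0)
```

Under the first rule, each teacher is credited with half the gain; under the second, the current-year teacher receives all of it. Two defensible rules, two different "effectiveness" numbers for the same teacher and the same students.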

Value-added measures such as the Teacher Data Reports are constructed through a social process involving expert judgments, and there may be a great deal of consensus around many of those judgments. But that doesn’t make the Teacher Data Reports “facts” that are somehow removed from the realm of opinion and assumption. The data don’t create the categories used to label teachers as above or below average; the labels are a matter of opinion.

There are many definitions of the term “fact,” and perhaps the definitions relied on in legal reasoning differ substantially from those used in social and educational research. But State Supreme Court Justice Cynthia Kern’s argument that the Teacher Data Reports are “facts” makes little sense. In my opinion.

This post also appears on Eye on Education, Aaron Pallas’s Hechinger Report blog.

ABOUT THE CONTRIBUTOR


Aaron Pallas

Aaron Pallas is Professor of Sociology and Education at Teachers College, Columbia University. He has also taught at Johns Hopkins University, Michigan State University, and Northwestern University, and served as a statistician at the National Center for Education Statistics in the U.S. Department of Education.
