AISR Speaks Out: Commentary on Urban Education
Value-Added Teacher Assessments
Value-added models — the centerpiece of a national movement to evaluate, promote, compensate and dismiss teachers based in part on their students’ test scores — have proponents throughout the country, including school systems in New York City, Chicago, Houston and Washington, D.C. In theory, a teacher’s “value-added” is the unique contribution he or she makes to students’ achievement that cannot be attributed to any other current or past student, family, teacher, school, peer or community influence.
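The report does not prescribe a single estimation method, but the idea in the paragraph above can be illustrated with a deliberately simplified sketch: regress students' current scores on their prior-year scores, then treat a teacher's "value-added" as her students' average residual. All names and numbers below are hypothetical; real systems condition on many more student, classroom, and school covariates.

```python
import numpy as np

# Hypothetical illustration (not any district's actual model): estimate
# "value-added" as the average residual of a teacher's students after
# regressing current scores on prior-year scores.
rng = np.random.default_rng(0)

n = 200                                # simulated students
prior = rng.normal(0, 1, n)            # prior-year test scores
teacher = rng.integers(0, 10, n)       # each student assigned one of 10 teachers
true_effect = rng.normal(0, 0.1, 10)   # unobserved "true" teacher effects
current = 0.7 * prior + true_effect[teacher] + rng.normal(0, 0.5, n)

# Step 1: least-squares regression of current scores on prior scores.
X = np.column_stack([np.ones(n), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
residual = current - X @ beta

# Step 2: a teacher's value-added estimate is her students' mean residual.
value_added = np.array([residual[teacher == t].mean() for t in range(10)])
```

The noise term in the simulation is the crux of Corcoran's argument: with realistic classroom sizes, the residual averages are dominated by factors other than the teacher.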
Sean P. Corcoran, assistant professor of educational economics at New York University’s Steinhardt School of Culture, Education and Human Development, and research fellow at the Institute for Education and Social Policy, recently prepared a report, published by the Annenberg Institute, that concluded that value-added assessments of teacher effectiveness are at best a “crude indicator” of the contribution that teachers make to their students’ academic outcomes. In practice, states Corcoran, it is exceptionally difficult to isolate a teacher’s unique effect on academic achievement.
“The promise that value-added systems can provide a precise, meaningful, and comprehensive picture is much overblown,” argues Corcoran, whose research report is entitled Can Teachers Be Evaluated by Their Students’ Test Scores? Should They Be? The Use of Value-Added Measures of Teacher Effectiveness in Policy and Practice. “Teachers, policy-makers, and school leaders should not be seduced by the elegant simplicity of value-added measures. Given their limitations, policy-makers should consider whether their minimal benefits outweigh their cost.”
The paper is part of the “Education Policy for Action” series of research and policy analyses by scholars convened by the Annenberg Institute as part of its mission to stimulate debate on matters of consequence for national education policy. We asked Professor Corcoran some questions to dig a little deeper into the debate.
AISR: Given your report’s conclusions, what are you proposing as an alternative to “value-added”?
SPC: I would like to see test score data reported to teachers and incorporated as a small but important part of a holistic evaluation of their performance. School leaders, senior colleagues, or both would conduct this evaluation. Test score reports could take the form of statistically based value-added measures, but a much less complex measure could probably convey at least as much information for these purposes. Value-added would never be seen as a decisive or even as a strong measure of teachers’ performance. Rather, teachers and their colleagues would review these measures as one piece of information that may or may not be useful in their own practice.
Some argue that value-added must be weighed against the status quo, which in many schools is a weak or non-existent system of teacher evaluation, and not the “ideal” system. However, the question is not whether value-added provides more information than our existing system. It is whether the added benefits of this system outweigh the added costs.
AISR: What would you recommend to improve value-added to increase its effectiveness?
SPC: If we are going to adopt value-added measures for assessment, I would like to see better reporting about the kinds of information used to generate value-added measures. For example, if a teacher has taught 75 children over the past three years, yet only 40 of these had sufficient data to contribute to her value-added results, this needs to be reported to the teacher (and anyone else using this information).
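The kind of reporting Corcoran calls for here is simple to produce. As a hypothetical sketch (the records and score fields below are invented for illustration), a district could count how many of a teacher's students actually had the data needed to enter her value-added estimate:

```python
# Hypothetical student records: a student contributes to value-added
# only if both the prior-year and current-year score are present.
students = [
    {"id": 1, "score_2009": 310, "score_2010": 325},
    {"id": 2, "score_2009": None, "score_2010": 300},  # new arrival: no prior score
    {"id": 3, "score_2009": 295, "score_2010": None},  # left before spring test
    {"id": 4, "score_2009": 340, "score_2010": 355},
]

usable = [s for s in students
          if s["score_2009"] is not None and s["score_2010"] is not None]
coverage = len(usable) / len(students)
print(f"{len(usable)} of {len(students)} students contribute "
      f"({coverage:.0%} coverage)")
# → 2 of 4 students contribute (50% coverage)
```

Attaching a coverage figure like this to each teacher's report would make visible exactly the gap Corcoran describes (40 of 75 students contributing).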
AISR: Should value-added results be made public?
SPC: No. When value-added rankings are released to the public — as the Los Angeles Times recently did — they come with an implicit assumption that parents can draw meaningful inferences about the relative quality of teachers from these numbers. Moreover, it is implied that parents can forecast their own child’s performance under teacher A versus teacher B, based on these reports. In fact, value-added measures cannot confidently differentiate the performance of the vast majority of teachers, nor can they be interpreted as the likely effect a teacher will have on any given student. Given these facts, I think reporting value-added estimates creates many more problems than it solves.

AISR: A number of research reports have already questioned the validity of value-added; what new ground have you broken in your research?
SPC: In my work with the Annenberg Institute, I took a closer look at the value-added systems in place in Houston and New York City, and I include a lot of examples from these systems in my report. Two things from this work stood out in particular.
First, the value-added rankings reported to teachers in New York City confirm just how imprecise these measures are. I found that the vast majority of teachers could not be statistically distinguished from the vast majority of other teachers in the district. The end result is that the New York City Teacher Data Reports provide meaningful information to a very small minority of teachers.
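Why so few teachers can be distinguished comes down to the width of the standard errors around each estimate. A minimal sketch, with invented but plausibly sized numbers (estimates in student-level standard-deviation units), shows the comparison that fails for most pairs of teachers:

```python
import math

# Hypothetical estimates for two teachers; the standard errors are
# invented but typical in magnitude for a few years of classroom data.
va_a, se_a = 0.10, 0.12    # teacher A: value-added estimate, standard error
va_b, se_b = -0.05, 0.12   # teacher B

# Standard error of the difference between independent estimates.
se_diff = math.sqrt(se_a**2 + se_b**2)
z = (va_a - va_b) / se_diff

# Two-sided 95% test: |z| must exceed 1.96 to call the teachers
# statistically distinguishable. Here z ≈ 0.88, so they are not.
distinguishable = abs(z) > 1.96
```

Even a seemingly large gap in point estimates (0.15 standard deviations) disappears into the noise once both estimates' uncertainty is taken into account, which is the pattern the New York City reports exhibit.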
Second, in looking at Houston data, I was surprised how many of a teacher’s students do not contribute to his or her value-added estimate. In a highly mobile school district like Houston, many students are not tested in two consecutive years. These students are simply ignored in a value-added system. Data availability should not dictate which students “count” toward teachers’ job performance, and which students don't.