Manhattan Institute for Policy Research.
New York Daily News

 

How To Read the Teacher Data Reports

February 27, 2012

By Marcus A. Winters


They won’t tell you everything — but are a valuable tool

On Friday, the long-awaited moment arrived: As required by a court ruling, the New York City Department of Education fulfilled a Freedom of Information request and released to several media outlets rankings of teacher performance — by name — as measured by an analysis of their students’ scores on standardized tests.

Whether or not the court was right on the law — my admittedly non-expert hunch is that it probably was — is, for the moment, beside the point.

What’s important now is that New Yorkers read them cautiously. Unfortunately, not everyone is likely to heed this advice.

Many parents are understandably interested to see how their kid’s teacher rates relative to others. But the reports offer only very limited measures that shouldn’t be used in isolation to assess a teacher’s effectiveness. Parents and community members who want to look at the teacher scores should keep a few things in mind.

First, the information in the reports is very imprecise. The rankings rest on complicated statistical formulas that carry a considerable margin of error. As a result, value-added analysis is not very good at distinguishing among the large majority of teachers whose results fall near the average.

Parents shouldn’t worry much about whether their kid’s teacher ranks, say, fourth-best in the school. Rather, they should consider only the broad category in which the teacher’s score falls: pay attention to whether it is very high, very low or near the middle relative to other teachers in the city.

Parents should also keep in mind that assessments based on multiple years of test score data are far more accurate than are scores based on only a single year. Thus, it should not be particularly concerning if a teacher receives a bad score in a single year. A red flag should only go up if the teacher’s score is consistently bad over multiple years.
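To see why multi-year averages are more trustworthy, consider a rough simulation. The numbers below are hypothetical — this is not the Department of Education’s actual model — but they illustrate the basic statistics: when each year’s score is a teacher’s true effectiveness plus noise, averaging several years brings the score much closer to the truth.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: each teacher has a fixed "true" effectiveness,
# and each year's value-added score is that truth plus measurement noise.
n_teachers = 500
true_effect = [random.gauss(0, 1) for _ in range(n_teachers)]

def yearly_score(true_value, noise_sd=1.5):
    """One year's value-added estimate: truth plus sampling error."""
    return true_value + random.gauss(0, noise_sd)

def correlation(xs, ys):
    """Pearson correlation between two lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A single year's score vs. the average of five years of scores.
one_year = [yearly_score(t) for t in true_effect]
five_year = [statistics.mean(yearly_score(t) for _ in range(5))
             for t in true_effect]

print(f"1-year score vs. true effectiveness:   r = {correlation(true_effect, one_year):.2f}")
print(f"5-year average vs. true effectiveness: r = {correlation(true_effect, five_year):.2f}")
```

Averaging shrinks the noise while leaving the teacher’s true signal intact, so the five-year average tracks real effectiveness far better than any single year — which is exactly why a single bad year should not set off alarms.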

Since many are unlikely to consider the results carefully, it’s a shame that reports of individual teachers’ performance based on imperfect information must be made public. Teachers are understandably concerned about widespread access to imperfect measures of their classroom performance. The reports will surely lead to some uncomfortable conversations at the supermarket with worried parents.

But that doesn’t mean that such analyses are generally unreliable and should be abandoned. Responsible use of value-added assessments can and should play a major role when evaluating teachers.

Value-added assessments are too imprecise to provide actionable information about an individual teacher’s performance during a particular school year. But research shows that value-added measures of a teacher’s performance provide a much more accurate picture of his future effectiveness than do the characteristics prioritized by the current system, such as attainment of advanced degrees and years of experience.

Value-added measures also provide an objective take on a teacher’s performance that is missing under the current system. Still, no matter what your profession, you probably don’t want your job performance judged according to a single statistical measure. Similarly, even in the best case, standardized test scores tell us only part of what we want to know about a teacher’s performance in the classroom. Further, the margin of error involved in the calculations ensures that some teachers will get ratings higher or lower than they deserve.

That’s why it is important that subjective measures of a teacher’s performance supplement analysis of student test scores when we evaluate teachers. By observing teachers in the classroom, principals will be able to point out when the test scores aren’t telling the whole story. The teacher reports about to be disseminated do not include meaningful subjective measures of effectiveness.

In contrast, the new evaluation system recently agreed to by Gov. Cuomo and the teachers unions responsibly includes both quantitative and qualitative assessments of a teacher’s performance. Under the new system, 40% of a teacher’s rating will depend on an evaluation of student test scores, while the remaining 60% will be based on subjective measures of performance, including at least one unannounced classroom observation.

If implemented properly, this new evaluation system will provide a much richer assessment of teacher performance than will be found in the reports to be publicly released.

The court might be correct that the public has a legal right to information about teacher performance. But people shouldn’t abuse that right by reading too much into the data reports. Value-added assessment of teacher performance is a promising tool to inform teacher evaluations. But parents and the public should keep in mind its limitations.

Original Source: http://www.nydailynews.com/opinion/read-teacher-data-reports-article-1.1028204?localLinksEnabled=false

 

 
 
 
