A new yardstick for education
by Jay P. Greene and Gary W. Ritter
Policymakers, practitioners, journalists, and the public regularly use the results of state accountability tests to assess the quality of schools. Schools with high scores must be "good schools," people assume, while schools with low scores must be "bad schools." Unfortunately, this use of test results is actually a misuse.
Test score results are only partially a reflection of the quality of school instruction; they are also partially a reflection of the advantages and disadvantages that students bring to school. A school with high test scores might actually be of sub-par quality propped up by very advantaged students. Conversely, a school with low test scores might actually be of high quality masked by the severity of its students' disadvantages. Unless one isolates the influence of student characteristics and other resources made available to schools, it is impossible to determine school quality simply from test results.
In an attempt to disentangle school quality from the advantages and disadvantages given to a school, we, along with colleagues in the Department of Education Reform, have developed the School Performance Index (available at http://www.uark.edu/ua/oep/der_spi_index.htm).
By controlling statistically for a host of student characteristics, community characteristics, and resources, we are able to predict how well each school should be performing on standardized tests given those inputs. The extent to which schools perform better or worse than we would expect given the context in which they operate is our best estimate of the quality of the school itself.
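The residual-based logic described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not the authors' actual model: the variable names, data values, and the choice of a simple linear regression are all assumptions made for the example. The idea is simply that a school's "performance index" is how far its observed score falls above or below the score its circumstances predict.

```python
import numpy as np

# Hypothetical data: one row per school. The columns (share of students in
# poverty, share of English learners, per-pupil spending in $1000s) are
# illustrative stand-ins for the many controls the real index would use.
inputs = np.array([
    [0.10, 0.05,  9.0],
    [0.60, 0.20,  8.0],
    [0.30, 0.10, 10.0],
    [0.80, 0.30,  9.5],
    [0.20, 0.08, 11.0],
])
scores = np.array([85.0, 62.0, 75.0, 58.0, 80.0])  # observed mean test scores

# Fit a linear model (with an intercept) predicting scores from the inputs.
X = np.column_stack([np.ones(len(inputs)), inputs])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
predicted = X @ coef

# The performance index is the residual: observed minus expected score.
# Positive values mean a school outperforms what its circumstances predict.
index = scores - predicted
for i in range(len(scores)):
    print(f"School {i}: observed {scores[i]:.1f}, "
          f"expected {predicted[i]:.1f}, index {index[i]:+.2f}")
```

A school with low raw scores can still earn a positive index if its predicted score, given severe disadvantages, is even lower; that is exactly the distinction the article draws between raw test results and school quality.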
Based on our analyses, we uncovered a number of interesting findings. First, we found that once demographics are taken into account, Arkansas students perform slightly above the national average. While one might not know it from the way people disparage Arkansas education, the facts suggest that overall our schools perform about as well as schools nationwide. That is not to say they are great, but it does mean they are not terrible.
Second, once we control for the advantages and disadvantages students bring to school, we find that Arkansas schools that spend more money per pupil do not do any better than schools that spend less. This is consistent with the bulk of research on educational spending and student achievement. While we would normally expect that an organization that had more money would do better than one with less, in education that relationship does not hold because schools that receive additional money tend not to use that money effectively to improve student learning.
To be sure, there were real disparities and resource shortcomings that spurred the Lake View court case, and the state has come a long way in addressing those issues in response to the lawsuit. But the focus of the Arkansas Supreme Court and the Lake View advocates on increasing education spending to ensure an adequate education is misplaced. Today, a lack of funding is not the primary obstacle denying Arkansas students an adequate education, and increasing spending further is unlikely, by itself, to improve the situation. Instead, we have to think about how we can provide school systems with stronger incentives and support to use their money effectively, so that we get better student achievement out of the additional dollars we do spend on education.
Third, by rating every school and every school district in the state we found that some school districts did better and some did worse than people might have expected. For example, we found that some districts with highly disadvantaged students, such as Helena or West Memphis, were actually performing fairly well once those disadvantages were taken into account. It is true that those districts have very low test scores, but they also face very serious educational challenges. When most people look at the test results in those districts, they fail to consider the severity of those challenges and incorrectly conclude that those districts must be doing a poor job.
Some districts have highly disadvantaged students, but still fail to perform very well even after those challenges are taken into account. For example, Little Rock and Fort Smith have relatively disadvantaged students but still fail to perform as well as the national average after adjusting for those difficulties. The disadvantages of students in those districts cannot fully explain their low test scores. Those school systems need to think about how they could do better and not simply rationalize their low performance by claiming that they have disadvantaged students.
Of course, all school systems could do better, even those performing well above the national average after adjusting for demographic factors. Helena and West Memphis should not be satisfied with their low test scores simply because they are performing better than we would expect given their challenges. They can and should do better. But we should also be fair in praising school districts for what they have achieved so far in overcoming obstacles and not pretend that all schools face the same obstacles.
The measure we have developed, the School Performance Index, is certainly not perfect. For example, it may not fully control for all advantages and disadvantages given to each school, thereby confusing school quality with unmeasured student characteristics. Nevertheless, we believe that it is a significant advance in assessing school quality. As the policymakers in the Arkansas Department of Education consider ways of reporting the level of school performance as required in Act 35, we hope they will consider the approach outlined in our report.
Jay P. Greene and Gary W. Ritter hold endowed chairs in the Department of Education Reform at the University of Arkansas.
Copyright © 2009 Manhattan Institute for Policy Research, Inc. All rights reserved.