Manhattan Institute for Policy Research.
Civic Report
No. 32 December 2002


Effects of Funding Incentives on Special Education Enrollment

Jay P. Greene, Ph.D.
Senior Fellow, Manhattan Institute for Policy Research
Greg Forster, Ph.D.
Senior Research Associate, Manhattan Institute for Policy Research

Executive Summary

This report examines the effect of state funding systems and high-stakes testing on special education enrollment. Specifically, it finds that:

  • Nationally, special education enrollment grew from 10.6% of all students to 12.3% over the study period, which runs from the 1991–92 school year through 2000–01.
  • During this period, 33 states and the District of Columbia had “bounty” funding systems, which create financial incentives to place children in special education. Sixteen states had “lump-sum” funding systems, which do not create such incentives. New Hampshire had no state funding system until 1999.
  • There is a statistically significant positive relationship between bounty funding systems and growth in special education enrollment. Bounty funding results in an additional enrollment increase of 1.24 percentage points over ten years.
  • The effect of the bounty system accounts for 62% of the enrollment growth experienced by bounty states during the study period. This represents roughly 390,000 extra students in special education, resulting in additional spending of over $2.3 billion per year.
  • If all bounty states had switched to lump-sum systems in 1994–95, their special education enrollments in 2000–01 would have been lower by an average of 0.82 percentage points. This represents roughly 258,000 students and over $1.5 billion per year in extra spending.
  • Between 1991–92 and 2000–01, 29 states and the District of Columbia employed high-stakes testing, and 21 states did not.
  • High-stakes testing has no statistically significant effect on special education enrollment.
  • The average (i.e. not weighted by population) state enrollment level in the states that had lump-sum funding during the study period rose from 11.1% to 12.4%, an increase of 1.3 percentage points.
  • The average enrollment level in states with bounty funding rose from 10.5% to 12.8%, an increase of 2.3 percentage points.
  • Total special education enrollment under lump-sum funding systems grew from 10.5% to 11.5%, a 1 percentage point change.
  • By comparison, total special education enrollment under bounty funding systems increased by 2 percentage points, from 10.6% to 12.6%.

About the Authors

Jay P. Greene is a Senior Fellow at the Manhattan Institute for Policy Research where he conducts research and writes about education policy. He has conducted evaluations of school choice and accountability programs in Florida, Charlotte, Milwaukee, Cleveland, and San Antonio. He has also investigated the effects of school choice on civic values and integration.

His research was cited four times in the Supreme Court’s opinions in the landmark Zelman v. Simmons-Harris case on school vouchers. His articles have appeared in policy journals, such as The Public Interest, City Journal, and Education Next, in academic journals, such as The Georgetown Public Policy Review, Education and Urban Society, and The British Journal of Political Science, as well as in major newspapers, such as the Wall Street Journal and Christian Science Monitor. Most recently he published a piece on vouchers and school integration in the Wall Street Journal, analyses of problems with special education in Education Week, National Review Online and The Education Gadfly, and a defense of high stakes testing in Education Next.

Greene has been a professor of government at the University of Texas at Austin and the University of Houston. He received his B.A. in history from Tufts University in 1988 and his Ph.D. from the Government Department at Harvard University in 1995. He lives with his wife and three children in Weston, Florida.

Greg Forster is a Senior Research Associate at the Manhattan Institute’s Education Research Office. He is the co-author of several education studies and op-ed articles. He received a Ph.D. with distinction in Political Science from Yale University in May 2002, and his B.A. from the University of Virginia, where he double-majored in Political and Social Thought and Rhetoric and Communications Studies, in 1995.

Acknowledgements

We would like to thank Matt Ladner of Children First America for his useful suggestions and support, and the staff of the federal and state departments of education for all their assistance in gathering information on special education financing arrangements.


Introduction

Over the past decade, the U.S. special education enrollment rate has increased from 10.6% of all students to 12.3%. The rate of growth is accelerating and shows no sign of slowing down, and policy makers are anxious to determine why. Critics of the U.S. special education system argue that it creates perverse financial incentives to label children as disabled. School districts have traditionally received state funding based on the size of their special education programs, so in effect they receive a bounty for each child they place in special education. Critics claim that this rewards schools for placing students in special education unnecessarily. Some defenders of the system argue that special education enrollment is growing because the real incidence of disabilities in children is growing, but this explanation does not withstand scrutiny very well. A number of researchers are now pointing towards still another culprit: perverse incentives arising not from funding systems but from high-stakes testing. When schools are held accountable for students’ performance on standardized tests, they have an incentive to remove the lowest-scoring students from the testing pool by placing them in special education, where they will be exempt from testing requirements.

Several states, struggling to cope with the ever-accelerating growth of special education, have adopted new funding systems that eliminate the bounty for new special education students. An even larger number of states have adopted high-stakes testing, in the hope that it will improve education outcomes. However, no national statistical studies have attempted to measure what effect these new lump-sum funding and high-stakes testing policies are having on special education enrollment.

This study finds that funding systems have a dramatic effect on special education enrollment, while high-stakes testing has no significant effect. We estimate that in the states that adhere to the traditional bounty system, over the last decade the rate of special education enrollment grew a total of 1.24 percentage points more than it would have if these states had lump-sum funding systems, accounting for a full 62% of these states’ total increase in special education enrollment. This represents approximately 390,000 extra students placed in special education because of the bounty system, resulting in additional spending of over $2.3 billion per year. Using another method that is more sensitive to the timing of changes in states’ funding systems, we estimate that if all bounty system states had switched to lump-sum systems in the 1994–95 school year, their special education enrollments in the 2000–01 school year would have been lower by an average of 0.82 percentage points. This margin represents a difference of roughly 258,000 students and over $1.5 billion per year in extra spending. In light of these findings, reforms that would remove the perverse incentives of bounty funding systems—such as switching to lump-sum systems or offering private school scholarships to disabled children—are urgently needed.

Previous Research

Enrollment in special education has been growing steadily for decades, and the rate of growth has been accelerating for the past ten years. Already high—over 10% of the student population—at the beginning of the 1990s, special education enrollment is now approaching 13% and shows no sign of slowing down. Since special education students are a significantly greater burden on schools than regular students because of the individual attention and specially trained staff they require, this expansion of the special education population is becoming a more and more urgent concern for U.S. education. Indeed, the percentage of students in special education has been going up at a time when the average cost of special education per student has also been rising, exacerbating the problem further.

Unfortunately, any effort to address this problem must first overcome sharp disagreement over what is causing it in the first place. At least three different culprits have been identified: greater real incidence of disabilities, the advent of high-stakes testing, and the financial incentives created by special education funding.

Defenders of the U.S. special education system argue that the growth of enrollment in special education reflects growth in the real incidence of disabilities in children. According to this explanation, there are simply more disabled students than there used to be, and those students have more costly disabilities. Sheldon Berman, Perry Davis, Ann Koufman-Frederick, and David Urion argue that increases in special education enrollment and spending “have been primarily due to the increased numbers of children with more significant special needs who require more costly services.” They attribute this alleged growth in student disabilities to social forces over which schools have no control, pointing to three factors in particular: improvements in medical technology, deinstitutionalization of children with serious difficulties, and increases in childhood poverty (see “The Rising Costs of Special Education in Massachusetts: Causes and Effects,” in Finn, Rotherham, and Hokanson 2001).

However, this account is not consistent with the facts. The authors argue that there are now more children with mental retardation because improved medicine saves more low-birth-weight babies. While it is true that the number of such babies expected to exhibit retardation has grown, the actual number of students classified as mentally retarded has dropped remarkably—from about 961,000 in 1976–77 to about 599,000 in 2000–01.[1] Improvements in prevention of mental retardation have more than offset any growth in mental retardation caused by increased numbers of surviving low-birth-weight babies. As a general matter, while medical improvements will certainly cause some children to survive with disabilities where in a previous era they would have died, they will also cause other children to avoid developing disabilities where in a previous era they would have become disabled. From improved prenatal medicine to safer child car seats to reductions in exposure to lead paint, medical improvements have saved untold thousands of children from disabilities. Furthermore, the decline in the number of students with mental retardation, as well as those with other severe types of disability, also disproves the argument that deinstitutionalization of students with severe problems is driving increases in special education enrollment. As for childhood poverty, it hasn’t actually increased. For children under 6, it was 17.7% in 1976, when federal law first required special services for disabled students, and it was 16.9% in 2000. Even that understates the case, since the standard for what counts as “poverty” goes up over time as society gets richer (see Greene 2002b).

If the real incidence of childhood disabilities isn’t going up, then more students are being classified as disabled when there has been no change in the number of students who actually are disabled. Some of this change may be caused by improved diagnosis of existing disabilities. For example, growth in the number of students classified as autistic may be attributable to improved diagnosis (see Sack 1999).[2] Likewise, growth in the number of students placed in special education under the category of “other health disorders” may be attributable to more widespread recognition and diagnosis of attention-deficit disorder and related disorders—though most students with such disorders are not placed in special education, some students with severe cases are.

But it is extremely unlikely that improved diagnosis is the most important cause of the last decade’s overall growth in special education. Autism represents only a tiny fraction of total special education enrollment, and the category of “other health disorders,” though larger, is not large enough to even come close to explaining the explosive growth in special education enrollment. As for other categories, we have no reason to believe that between 1990 and 2000 schools dramatically improved their ability to accurately identify students with most types of disabilities.

This leaves us with a less benign explanation—that schools are increasingly diagnosing students as disabled and placing them in special education for reasons unrelated to those students’ genuine need for special education services. This would help explain not only the growth of special education enrollment, but also the recent increase in graduation rates for special education students—if more students who aren’t truly disabled are being placed in special education, we would expect to see improvements in the academic performance of students in special education (see Fine 2002a).

Why would schools place more students in special education when they didn’t truly need it? Some researchers are now identifying high-stakes testing as a possible cause. More and more states have adopted test-based accountability programs in which significant consequences, such as student promotion and graduation or school funding cuts, are attached to performance on a standardized test. The goal of such programs is to provide schools with a firm incentive to improve performance—if students do poorly on the test, schools can be held accountable. But these programs can also create a perverse incentive: an incentive to game the system by getting low-performing students out of the testing pool altogether. By labeling such students as disabled and placing them in special education, schools can exempt them from mandatory testing. In some states, special education students who are considered testable are included in mandatory testing, but schools could still game the system by labeling special education students untestable (that is, too disabled to take the test). When low-performing students are exempt from testing, schools’ average test scores go up, which makes the schools look better.

Examining a high-stakes statewide test in Texas, Deere and Strayer found that students who failed the test in one year were more likely to be classified as exempt from the test (either as special education students or limited English proficient students) the next year; that schools were more likely to classify minority students as exempt if this would reduce the number of minority students tested to a low enough level that the school’s minority test scores would not be reported; and that when the state started counting the scores of special education students who did take the test towards schools’ accountability ratings, the percentage of special education students who were classified as exempt from the test went up, reversing a downward trend (see Deere and Strayer 2001a, 2001b, and 2002). Figlio and Getzler, examining a high-stakes test in Florida, found that special education enrollment went up after the introduction of the test, that students in tested grades were more likely than students in untested grades to be placed in special education, that lower-scoring students were more likely to be placed in special education, and that severe disability categories did not rise after the introduction of the test (see Figlio and Getzler 2002). Jacob, studying Chicago schools, found that the percentage of students exempted from testing through special education rose faster after the introduction of high-stakes testing, and most quickly among lower-scoring students (see Jacob 2002a and 2002b).

These findings are limited to various extents by research methodology. Most obviously, all these studies are confined to one state or city. None of them attempts to control for the national trend in special education enrollment, or otherwise compare states with high-stakes testing to states without high-stakes testing (although a few of the findings do compare students who are and are not subject to high-stakes testing within the same state). Special education enrollment was increasing nationwide throughout the 1990s, and the nationwide rate of growth increased as the decade progressed. Thus, finding that special education enrollment in a state or city grew faster after it adopted high-stakes testing does not, in itself, prove that high-stakes testing caused faster growth; it only proves that the state or city in question behaved in a manner consistent with the national trend.

Likewise, correlations between low test scores and special education enrollment are of limited value. If low-performing students are more likely to be enrolled in special education programs, it may well be that schools are pushing those students into special education to remove them from the testing pool. But it is also possible that those students’ low test scores are indicative of genuine disabilities, for which they were subsequently diagnosed and enrolled in special education. Deere and Strayer try to account for this by comparing more than one set of paired years—that is, they look not only at whether students are more likely to be enrolled in special education one year after performing poorly on the test, but also at whether they are more likely to be enrolled in special education after two or three years of performing poorly. However, this does nothing to overcome the problem; it only proves that when schools put low-performing students into special education, they do not always do so after only one year of low performance. In fact, this is exactly what schools are supposed to do—they are not supposed to put students into special education based solely on a low test score. Deere and Strayer’s finding is simply a multi-year correlation between low test scores and special education enrollment, which is still just as easily attributable to real disabilities in low-performing students as it is to schools’ desire to remove those students from the testing pool.

Hanushek and Raymond conducted the only prior national study of high-stakes testing and special education enrollment, covering 1995–2000. They looked only at states with high-stakes testing, but they controlled for the national trend in special education enrollment. This control serves to implicitly compare states with high-stakes testing to states without high-stakes testing, a significant advantage over previous research. They found that when the control for the national trend was applied, the significant statistical relationship between high-stakes testing and special education enrollment disappeared entirely (see Hanushek and Raymond 2002).

In explaining surging enrollment in special education, there is another possible culprit besides high-stakes testing. School districts have traditionally received state funding for special education, which makes up the bulk of all special education funding, in such a manner that they receive more money if their special education programs are larger. This provides school districts with a financial reward—a bounty, so to speak—for placing students in special education. Critics of the U.S. special education system have long argued that this creates a perverse financial incentive to put as many students as possible into special education.

Defenders of the system often argue that funding for special education cannot create perverse incentives because placing a student in special education creates costs at least equal to the new funding it generates. This misrepresents what truly is and is not a “cost” of placing a child in special education. A true cost is an expenditure that the school would not have made otherwise. Some services that a school would have provided to a particular child no matter what can be redefined as special education services if the child is placed in special education; these services are not truly special education costs because they would have been provided anyway. For example, if a school provides extra reading help to students who are falling behind in reading, the school must bear that cost itself. But if the same school redefines those students as learning disabled rather than slow readers, state and federal government will help pick up the tab for those services. This is financially advantageous for the school because it brings in new state and federal funding to cover “costs” that the school would have had to pay for anyway. Furthermore, there are many fixed costs associated with special education that do not increase with every new child. For example, if a school hires a full-time special education reading teacher, it will pay the same cost whether that teacher handles three students a day or ten. However, the school will collect a lot more money for teaching ten special education students than it would for teaching three.

Although there have been no national statistical studies of this question, and in particular no studies directly comparing states with and without bounty system funding, there has been a study of the relationship between financial incentives and special education enrollment. Cullen studied how school districts in Texas responded to changes in financial incentives arising from court-mandated restructuring of the state education financial system. She found that after the court order took effect, in districts where the amount of money provided for placing a student in special education went up, special education enrollment also went up. Specifically, she found that a 10% increase in the bounty for placing a student in special education could be expected to produce a 1.4% increase in a district’s special education enrollment rate. The relationship between changes in financial incentives and changes in special education enrollment was strong enough that Cullen found it explained 35% of the growth in special education in Texas from the 1991–92 school year through the 1996–97 school year.

Method

To perform the study, we needed two types of enrollment data for each state: enrollment of students served in special education under the Individuals with Disabilities Education Act (IDEA) and total public school enrollment during the school years from 1991–92 through 2000–01. We obtained special education enrollment data from the Annual Report to Congress on the Implementation of the Individuals with Disabilities Education Act, published each year by the U.S. Department of Education’s Office of Special Education Programs. We obtained total enrollment data from the Digest of Education Statistics, published by the U.S. Department of Education’s National Center for Education Statistics. All these figures included students between the ages of 6 and 21. For each state in each year, we divided special education enrollment by total enrollment to determine the special education enrollment rate.
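
The rate calculation itself is simple division; the sketch below illustrates it in Python under an assumed data layout (dictionaries keyed by state and school year, an assumption made for illustration rather than a description of the authors' actual data files).

    # Sketch of the enrollment-rate calculation described above.
    # The data layout (dicts keyed by (state, school_year)) is assumed for illustration.

    def enrollment_rates(special_ed, total_enrollment):
        """Special education enrollment rate, in percent, for each (state, year) pair."""
        return {key: 100.0 * count / total_enrollment[key]
                for key, count in special_ed.items()}

    # Example: 120,000 IDEA students out of 1,000,000 total students is a 12.0% rate.
    print(enrollment_rates({("XX", "2000-01"): 120_000},
                           {("XX", "2000-01"): 1_000_000}))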

To obtain information on special education funding systems, we contacted each state’s education department and asked three questions: what kind of funding system was in place, whether the system had been changed since the 1991–92 school year (and if so, when and from what kind of system), and whether there had been any other major changes in special education funding since 1991. For each state, we classified the funding system as a bounty system if it caused state funding to vary significantly by the size of each district’s special education program. This included systems that distributed funds according to the number of special education students in each district, the number of special education staff in each district, or the level of special education spending in each district. Systems that did not cause state funding to vary significantly by the size of each district’s special education program, which typically distributed funds according to the total student population in each district, were classified as lump-sum systems.[3]

We also collected information on high-stakes testing in each state. A test was considered high-stakes if any of the following depended upon it: student promotion or graduation, accreditation, funding cuts, teacher bonuses, a published school grading or ranking system, or state assumption of at least some school responsibilities. We obtained information on accountability tests from “Assessment and Accountability in the Fifty States,” a report published by the Consortium for Policy Research in Education.[4]

Our first method of analysis was a linear regression. To provide a measurement of growth in special education enrollment to serve as the dependent variable, we plotted each state’s special education enrollment rates for the years included in the study, fitted a line to these data points using the ordinary least squares (OLS) method, and determined the slope of the line. The OLS line represents the closest possible approximation of the state trend in special education enrollment; the slope of the OLS line serves as a measurement of the rate at which special education enrollment grew during the study period. The independent variables in our analysis (that is, factors that might explain growth in special education enrollment rates) were both binary measurements: whether or not the state had a lump-sum system during the study period, and whether or not the state had a high-stakes test.
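
As a rough illustration of this procedure, the sketch below fits the per-state trend line and then regresses the resulting slopes on the two binary indicators. It is a minimal reconstruction using NumPy with hypothetical variable names; the report does not specify the software used, and a statistics package would be needed to obtain the standard errors and p-values behind the significance tests described in endnote 6.

    import numpy as np

    def state_growth_slope(rates_by_year):
        """OLS slope of one state's enrollment rate across the ten study years."""
        years = np.arange(len(rates_by_year), dtype=float)   # 0 = 1991-92, ..., 9 = 2000-01
        slope, _intercept = np.polyfit(years, np.asarray(rates_by_year, dtype=float), 1)
        return slope   # percentage points of enrollment growth per year

    def regress_slopes(slopes, lump_sum_system, high_stakes_test):
        """Regress per-state growth slopes on two 0/1 indicators (lump-sum funding, high-stakes test)."""
        X = np.column_stack([np.ones(len(slopes)), lump_sum_system, high_stakes_test])
        coefs, *_ = np.linalg.lstsq(X, np.asarray(slopes, dtype=float), rcond=None)
        return coefs   # [intercept, lump-sum effect, high-stakes effect]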

This regression analysis has the advantage of identifying the difference in the rates of special education growth in states with different funding systems. However, it is not sensitive to when changes in funding systems occurred during the study period. Of the states that had lump-sum systems in the 2000–01 school year, only a few had those systems since the 1991–92 school year; the rest switched from a bounty system to a lump-sum system at some point in between. The regression analysis counts all of these states as lump-sum states; it does not differentiate between states that had lump-sum systems for the whole decade and states that had lump-sum systems for only part of that period.

To capture the difference that changes in state funding systems may have made during the study period, we performed another analysis using a method that keeps track of which states had lump-sum systems in each specific year. First, we determined that the average school year in which states adopted lump-sum systems (counting the four states that had such systems throughout the study period as if they had switched in 1990–91) was 1994–95. In that year, seven states had lump-sum systems, a sufficiently large number for meaningful analysis. For each year from 1994–95 onward, for every state that had a lump-sum system in that year, we subtracted its special education enrollment rate for the previous year from its enrollment rate in that year. For example, for each state that had a lump-sum system in 1994–95 we subtracted that state’s special education enrollment rate in 1993–94 from its enrollment rate in 1994–95. This gave us the change in enrollment rate for each state with a lump-sum system in each year. We then calculated the average change in enrollment rate for all lump-sum system states in each year.

Turning to the remaining states—those that had bounty systems for the entire study period—we took each state’s special education enrollment rate in 1993–94 and added the average change in enrollment rate for lump-sum states in 1994–95. This gave us projected values for the enrollment rates that those bounty states would have had in 1994–95 if they had switched to lump-sum systems in that year. We then added the average change for lump-sum states in 1995–96 to get a projected value for that year, and so on through 2000–01. This gave us projected values for the enrollment rates that bounty states would have had in 2000–01 if they had switched to lump-sum systems in 1994–95. By subtracting each state’s projected 2000–01 rate from its actual 2000–01 rate and taking the average difference for all bounty states, we were able to estimate the average effect it would have had if all bounty states had switched to lump-sum systems in 1994–95.
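
The sketch below reconstructs this projection step by step under assumed data structures (the nested dictionaries and year coding are ours, not the authors'): it computes the average year-over-year change among lump-sum states for each year from 1994–95 onward, then carries each bounty state forward from its actual 1993–94 rate using those average changes.

    # Sketch of the projection method described above; data structures are assumed.
    # rates[state][year] holds the enrollment rate (percent), with each school year
    # keyed by its fall calendar year (1993 means 1993-94, 2000 means 2000-01).
    # lump_sum_years[state] is the set of years in which that state had a lump-sum system.

    YEARS = range(1994, 2001)   # school years 1994-95 through 2000-01

    def average_lump_sum_changes(rates, lump_sum_years):
        """Average year-over-year change in enrollment rate among lump-sum states, by year."""
        avg = {}
        for year in YEARS:
            changes = [rates[s][year] - rates[s][year - 1]
                       for s, yrs in lump_sum_years.items() if year in yrs]
            avg[year] = sum(changes) / len(changes)
        return avg

    def project_bounty_state(state_rates, avg_change):
        """Project a bounty state's 2000-01 rate as if it had switched to lump-sum in 1994-95."""
        projected = state_rates[1993]              # start from the actual 1993-94 rate
        for year in YEARS:
            projected += avg_change[year]          # add each year's average lump-sum change
        return projected                           # compare with the actual state_rates[2000]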

Results

Our findings for state special education funding systems and high-stakes testing are summarized in Table 1. Four states had lump-sum systems for the entire study period, twelve states began with bounty systems and switched to lump-sum systems during the study period, and 33 states (plus the District of Columbia) had bounty systems for the entire study period. Twenty-nine states (plus the District of Columbia) had high-stakes testing and 21 states did not. By far the most common type of high-stakes testing was a requirement that students pass a certain test to be promoted to the next grade or graduate from high school.

Hawaii and the District of Columbia each have only one school district. Rather than classifying them according to the system by which funds are distributed to school districts, we classified them according to the system by which the school district distributes funds to individual schools. Though the incentives are at a different organizational level, they work the same way.

One state, New Hampshire, did not have any state-level funding of special education until 1999. In that year, to comply with a court order, it created a new state program that funds special education by a bounty system. To prevent distortion of our results by this unusual case in which there was no state funding system of any kind for many years, we excluded New Hampshire from all calculations.[5]

The national special education enrollment rate is shown in Figure 1. It grew from 10.6% of all students in the 1991–92 school year to 12.3% in the 2000–01 school year. The rate of growth has accelerated consistently during the past decade.

Figure 2 shows the special education enrollment rate over the same period, with figures separated into enrollments under lump-sum systems and bounty systems. Special education enrollment under lump-sum systems grew from 10.5% in the 1991–92 school year to 11.5% in the 2000–01 school year, an increase of one percentage point. Meanwhile, special education enrollment under bounty systems grew from 10.6% to 12.6% in the same period, an increase of two percentage points.

In interpreting Figure 2, we must bear in mind that it represents enrollments under the two types of funding systems rather than enrollments in two fixed sets of states. Enrollment figures in states that changed from bounty to lump-sum systems during the study period were included in the “bounty system” set for years before the state changed, and in the “lump-sum system” set for years after the change. The line for enrollment under lump-sum systems includes four states in 1991–92, but this rises to 16 states by 2000–01. We must also bear in mind that these rates are based on total figures rather than averages, so they are population-weighted. The 1998 funding system change in California therefore has a much larger impact on these figures than the 1995 funding system change in Rhode Island. In short, Figure 2 represents national totals for each funding system type.

Figure 3 shows average special education enrollment rates in lump-sum system states and bounty system states. The average special education enrollment rate for states that had lump-sum systems at any time during the study period grew from 11.1% in the 1991–92 school year to 12.4% in the 2000–01 school year, an increase of 1.3 percentage points. In the same period, the average special education enrollment rate for states that maintained bounty systems for the entire study period grew from 10.5% to 12.8%, an increase of 2.3 percentage points.

In Figure 3, state classifications are fixed. The line for enrollment in lump-sum system states always includes the same states: the 16 states that had lump-sum systems in the 2000–01 school year. Also, Figure 3 shows the average special education enrollment rate for the states in each group, rather than the total rate in each group. This means the population of each state has been factored out; all states are equally weighted. Thus, Figure 3 represents a population-controlled comparison of two sets of states, rather than population-weighted national totals for the two system types.

Our regression analysis found a statistically significant relationship between a state’s special education enrollment rate and whether or not that state had a lump-sum system during the study period.[6] The regression coefficient for funding systems was 0.124. This means that every year, bounty system states experienced 0.124 more percentage points of growth in special education enrollment than they would have experienced if they had lump-sum systems. Over a ten-year period this adds up to 1.24 percentage points of additional special education enrollment because of the bounty system. The 33 states (plus the District of Columbia) that adhered to the bounty system saw special education enrollment grow by two percentage points over the study period, so a full 62% of that growth can be attributed to the effects of the bounty system.[7] Also, 1.24% of total enrollment in bounty states in 2000–01 represents 390,000 extra students placed in special education because of the bounty system, resulting in additional spending of over $2.3 billion per year.[8]
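
The enrollment and spending figures can be checked with simple arithmetic, as in the sketch below. The $5,918 per-student cost estimate is the one cited in endnote 8; the total enrollment in bounty states (roughly 31.5 million students) is an assumption inferred here so that 1.24% of it matches the reported figure of about 390,000 students.

    # Back-of-the-envelope check of the figures above (not the authors' exact computation).
    extra_cost_per_student = 5_918          # additional annual cost per special ed student (endnote 8)
    bounty_state_enrollment = 31_500_000    # assumed total enrollment in bounty states
    extra_share = 0.0124                    # 1.24 percentage points attributed to the bounty system

    extra_students = extra_share * bounty_state_enrollment
    extra_spending = extra_students * extra_cost_per_student

    print(f"{extra_students:,.0f} extra students")              # roughly 390,000
    print(f"${extra_spending / 1e9:.1f} billion per year")      # roughly $2.3 billion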

As for high-stakes testing, the regression analysis not only found that the relationship between special education enrollment and high-stakes testing was not statistically significant; the regression coefficient for high-stakes testing was actually negative. That is, states with high-stakes testing had lower rather than higher growth in special education enrollment, although not by enough to give us confidence that this reflects a real relationship between high-stakes testing and special education enrollment. This study cannot tell us why there is no statistically significant relationship between high-stakes testing and higher special education enrollment. One possible explanation is that states anticipate the perverse incentives created by high-stakes testing and take preventive measures against them, while taking no similar measures against the perverse incentives created by bounty funding of special education.

The results of our second analysis are summarized in Figure 4, which compares actual and projected average special education enrollment rates in the states that stuck to the bounty system. The projected rate estimates the average special education enrollment these states would have had if they had switched to lump-sum systems in the 1994–95 school year. The difference between the actual and projected rates represents the average extra enrollment in these states attributable to the bounty system. In the 2000–01 school year this difference is 0.82 percentage points. In these states, 0.82% of total enrollment in 2000–01 represents roughly 258,000 additional students in special education, which would generate over $1.5 billion per year in extra spending.

Conclusion

State funding systems are having a dramatic effect on special education enrollment rates. In states where schools had a financial incentive to identify more students as disabled and place them in special education, the percentage of all students enrolled in special education grew significantly more rapidly over the past decade. By contrast, high-stakes testing appears to be having no significant effect on special education enrollment. This is contrary to the findings of previous studies that have looked only at individual cities or states and have not controlled for national trends, but agrees with the finding of the only previous national study.

The ever-accelerating growth of special education enrollment is becoming an urgent problem for American education, diverting ever more billions of dollars that could otherwise be spent on better education for all students. The finding that state funding systems are responsible for the bulk of the past decade’s growth in special education enrollment suggests how this problem could be curtailed. The most obvious policy solution would be for bounty system states to adopt lump-sum funding systems, removing the perverse financial incentive to place students in special education. However, state funding reform is not the only way to remove that incentive.

There are several ways in which the federal government could help alleviate the problem of perverse funding incentives. One approach would be to provide private school scholarships to all special education students, on the model of Florida’s popular McKay Scholarship Program. This would mitigate perverse incentives from state special education funding, since placing a student in special education would not automatically bring more money into a school district’s budget. It would also have the advantage of potentially providing the other benefits of school choice to families with disabled students, such as the ability to choose for themselves which school will provide the best education for their children.

If full-scale private school scholarships are not politically feasible, there are several smaller steps that could be taken in this direction. For example, existing federal IDEA funding could be made portable. Under such a system, families could choose either to continue receiving special education services from their public school, in which case federal money would continue to go to that school, or to take their federal dollars to another service provider of their choice. In cases where only limited special education services are needed, families choosing to seek services elsewhere could even leave their children in public school for regular educational services.

Another way to combat the effects of perverse state funding incentives would be to begin federal auditing of special education placements. The federal government could identify districts with especially high or especially low rates of special education placement, either generally or for certain groups. Other districts could be chosen for audits at random. Independent experts could then make their own diagnoses of students in special education in those districts, to determine how frequently students have been misdiagnosed. This would serve to expose to the public the true extent to which students without disabilities have been placed in special education; at the very least, such exposure would generate much stronger political pressure for reform of the system.

Finally, Congress could redirect its spending priorities when considering how new IDEA funds should be structured. Giving higher financial priority to types of disability that have more clearly objective diagnostic standards—such as autism, visual impairments, and hearing impairments—would send a clear message to states that the federal government will not provide infinite amounts of money for out-of-control special education programs. It would also have the beneficial effect of directing new federal money towards disability categories that place larger financial burdens on schools.

References

Endnotes

  1. Some have argued that the dramatic reduction in students classified as mentally retarded is actually a result of improved diagnosis of autism—that many autistic students who used to be misdiagnosed as mentally retarded are now correctly diagnosed as autistic. However, the decrease in mental retardation is much larger than the increase in autism. Furthermore, a recent study of medical records and parent surveys in California provides strong evidence that no such shift in diagnosis from mental retardation to autism has taken place (see Byrd 2002).
  2. A recent, highly-publicized study claims to show that “some, if not all, of the observed increase [in autism diagnoses] represents a true increase in cases of autism” (Byrd 2002). However, the study’s methodology does not justify this conclusion (see Forster 2002).
  3. In making these classifications, we disregarded special funding programs targeted at high-cost students with especially severe disabilities. These programs, which are almost always funded on a bounty-system basis, represent a relatively small portion of state funding. Furthermore, the standards for diagnosis of severe disabilities—such as cerebral palsy and autism—are usually very clear and objective, and thus such diagnoses are probably not very responsive to funding incentives.
  4. The CPRE report did not provide information on high-stakes testing in the District of Columbia. We obtained that information from Patricia Anderson at the District’s Department of Education (phone interview, 1:36 pm on Sept. 26, 2002). Tests were only counted as high-stakes if the stakes were mandatory; that is, if the given consequences were required to follow from the test, rather than merely being a possible result of the test.
  5. There is one calculation in this study that would not be distorted by including New Hampshire: the total national special education enrollment rate (see Figure 1). However, when the numbers are rounded to the nearest hundredth of a percentage point, including or excluding New Hampshire produces no change except in the 2000–01 rate, which rises from 12.28% to 12.29% if we include New Hampshire.
  6. Following standard practice, we considered a result statistically significant if p was less than 0.05.
  7. Total special education enrollment in the 33 states (plus the District of Columbia) that had bounty systems for the whole study period was 10.6% of all students in the 1991–92 school year, and 12.6% in the 2000–01 school year, a difference of two percentage points. The special education enrollment rate in this fixed set of bounty states for 1991–92 appears to be the same as the special education enrollment rate for all students under bounty systems (the data shown in Figure 2) because the difference (0.0113%) disappears when figures are rounded to tenths of a percentage point. For 2000–01 these two methods produce identical results, since these states were the only ones that still had bounty systems in that year.
  8. Cost estimates are based on an estimated additional expenditure per special education student of $5,918 in the 1999–2000 school year, calculated on behalf of the U.S. Department of Education by the Center for Special Education Finance (Chambers, Parrish, and Harr 2002). This figure represents spending on special education students over and above what is spent on regular students.

 


 

