A few weeks ago, the National Center for Education Statistics, the statistical arm of the U.S. Department of Education, released a report with the headline finding that there is no difference in the test scores of students attending charter and traditional public schools. The finding is technically correct but highly misleading.
NCES simply compared the average test scores of students enrolled in charter schools with those of students attending traditional public schools. That analysis literally answers the question: Do the test scores of kids in charter and traditional public schools differ? The answer isn’t remotely interesting. The policy-relevant research question is: What is the difference in later outcomes for students who enroll in a charter school compared with the outcomes those same students would have achieved had they instead attended a traditional public school? Answering that question requires a careful research design.
Comparing average school scores alone simply cannot answer that policy-relevant question because charter and traditional public schools enroll fundamentally different students. As the NCES report also shows, students attending charter schools are more likely than children enrolled in traditional district schools to be African American or Hispanic and to be eligible for free or reduced-price lunch, but they are less likely to be learning English or to have a disability. More important, charter and traditional public school students almost certainly differ in other ways that we don’t observe. For instance, students in charter schools have parents who were engaged enough to make an affirmative decision to enroll them in a school of choice. It isn’t obvious whether charter or traditional public schools serve more advantaged students, but the two sectors clearly serve different populations.
Several studies use a strong research design capable of accounting for the differences between charter and traditional public school students to measure the effect of attending a charter school. The most convincing papers compare the later outcomes of students who enrolled in a charter school with those of students who also applied through a random lottery but did not get a seat. As in a medical trial, because essentially a flip of a coin determines which of these applicants gains access to a charter school, any difference in their later outcomes can be attributed to whether they were offered a charter seat.
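The logic of the lottery design can be sketched in a short simulation. Every number below is invented for illustration — nothing is drawn from the NCES report or any actual study. The sketch shows why a naive comparison of averages is biased when an unobserved trait (here, family engagement) drives both scores and the decision to apply, while comparing lottery winners with lottery losers recovers the assumed true effect:

```python
import random
from statistics import fmean

# Hypothetical simulation: all parameters are invented for illustration.
random.seed(0)

N = 100_000
TRUE_EFFECT = 5.0  # assumed test-score boost from attending a charter

naive_charter, naive_tps = [], []   # naive comparison: charter vs. everyone else
lottery_win, lottery_lose = [], []  # applicants only, split by the lottery

for _ in range(N):
    engagement = random.gauss(0, 10)  # unobserved family engagement
    baseline = 50 + engagement        # engagement raises scores...
    applies = engagement > 0          # ...and the chance of applying

    if applies:
        wins = random.random() < 0.5  # the lottery: a literal coin flip
        score = baseline + (TRUE_EFFECT if wins else 0.0)
        (lottery_win if wins else lottery_lose).append(score)
        (naive_charter if wins else naive_tps).append(score)
    else:
        naive_tps.append(baseline)    # non-applicants stay in district schools

# The naive gap mixes the charter effect with selection on engagement,
# so it greatly overstates the true effect.
print("naive gap:  ", round(fmean(naive_charter) - fmean(naive_tps), 1))

# The lottery gap compares only applicants, so the unobserved engagement
# cancels out and the estimate lands near TRUE_EFFECT.
print("lottery gap:", round(fmean(lottery_win) - fmean(lottery_lose), 1))
```

The coin flip is doing the same work here that random assignment does in a medical trial: winners and losers are, on average, identical in every respect except access to the charter seat.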
In a recent review of the evidence from such randomized field trials, researchers from MIT, Columbia and the University of Toronto found that students tend to do better when they enroll in a charter school than they would have had they attended a traditional public school. The biggest benefits accrue to students in urban areas and those who enroll in so-called no-excuses charters.
In fact, in just the past few weeks, researchers from Mathematica released a study finding that students randomly assigned to a charter school operated by the national KIPP network were much more likely to enroll in college than students who applied to KIPP but did not get in.
To be clear, the evidence on charter schools isn’t all rosy. And we still have more to learn about charter school operations and effectiveness. Nonetheless, there now exists a large enough base of rigorous studies tending to find that students benefit from attending a charter school to warrant action. Policymakers and thought leaders need to ignore the noise and focus on the evidence capable of identifying the effect of attending a charter school.
This piece originally appeared at The 74
Marcus Winters is a senior fellow at the Manhattan Institute, an associate professor at Boston University, and author of the new report, “Should Failing Schools Be Closed? What the Research Says.”