Social promotion has long been the normal practice in American schools. Critics of this practice, whereby students are promoted to the next grade regardless of academic preparation, have suggested that students would benefit academically if they were made to repeat a grade. Supporters of social promotion claim that retaining students (i.e., holding them back) disrupts them socially, producing greater academic harm than promotion would. A number of states and school districts, including Florida, Texas, Chicago, and New York City, have attempted to curtail social promotion by requiring students to demonstrate academic preparation on a standardized test before they can be promoted to the next grade.
This study analyzes the effects of Florida's test-based promotion policy on student achievement two years after initial retention. It builds upon our previous evaluation of the policy in two ways. First, we examine whether the initial benefits of retention observed in the previous study continue, expand, or contract in the second year after students are retained. Second, we determine whether discrepancies between our evaluation and the evaluation of a test-based promotion policy in Chicago are caused by differences in how researchers examined the issue or by differences in the nature of the programs.
Our analysis shows that, after two years of the policy, retained Florida students made significant reading gains relative to the control group of socially promoted students. These academic benefits grew substantially from the first to the second year after retention. That is, students lacking in basic skills who are socially promoted appear to fall farther behind over time, whereas retained students appear able to catch up on the skills they lack.
Further, we find these positive results in Florida both when we use the same research design that we used in our previous study and when we use a design similar to that employed by the evaluation of the program in Chicago. The differences between the Chicago and Florida evaluations therefore appear to be caused by differences in the details of the programs, not by differences in how the programs were evaluated.