Manhattan Institute for Policy Research.
Education Working Paper
No. 3  September 2003


Public High School Graduation and College Readiness Rates in the United States

Jay P. Greene, Ph.D.
Senior Fellow, Manhattan Institute for Policy Research
Greg Forster, Ph.D.
Senior Research Associate, Manhattan Institute for Policy Research

Funding for this report was provided by the Bill & Melinda Gates Foundation

*********************************************

Executive Summary

Students who fail to graduate high school prepared to attend a four-year college are much less likely to gain full access to our country’s economic, political, and social opportunities. In this study we estimate the percentage of students in the public high school class of 2001 who actually possess the minimum qualifications for applying to four-year colleges.  To be “college ready” students must pass three crucial hurdles: they must graduate from high school, they must have taken certain courses in high school that colleges require for the acquisition of necessary skills, and they must demonstrate basic literacy skills.

Using data from the U.S. Department of Education we are able to estimate the percentage of students who graduate high school as well as the percentage that finish high school ready to attend a four-year college. We are also able to produce these estimates by racial/ethnic group as well as by region and state.

Specifically, the study’s findings include the following:

  • Only 70% of all students in public high schools graduate, and only 32% of all students leave high school qualified to attend four-year colleges.
  • Only 51% of all black students and 52% of all Hispanic students graduate, and only 20% of all black students and 16% of all Hispanic students leave high school college-ready.
  • The graduation rate for white students was 72%; for Asian students, 79%; and for American Indian students, 54%. The college readiness rate for white students was 37%; for Asian students, 38%; for American Indian students, 14%.
  • Graduation rates in the Northeast (73%) and Midwest (77%) were higher than the overall national figure, while graduation rates in the South (65%) and West (69%) were lower than the national figure. The Northeast and the Midwest had the same college readiness rate as the nation overall (32%) while the South had a higher rate (38%) and the West had a lower rate (25%).
  • The state with the highest graduation rate in the nation was North Dakota (89%); the state with the lowest graduation rate in the nation was Florida (56%).
  • Due to their lower college readiness rates, black and Hispanic students are seriously underrepresented in the pool of minimally qualified college applicants. Only 9% of all college-ready graduates are black and another 9% are Hispanic, compared to a total population of 18-year-olds that is 14% black and 17% Hispanic.
  • We estimate that there were about 1,299,000 college-ready 18-year-olds in 2000, and the actual number of persons entering college for the first time in that year was about 1,341,000. This indicates that there is not a large population of college-ready graduates who are prevented from actually attending college.
  • The portion of all college freshmen that is black (11%) or Hispanic (7%) is very similar to their shares of the college-ready population (9% for both). This suggests that the main reason these groups are underrepresented in college admissions is that they are not acquiring college-ready skills in the K-12 system, not that financial aid is inadequate or affirmative action policies are insufficient.

*********************************************

About the Authors

Jay P. Greene
is a Senior Fellow at the Manhattan Institute for Policy Research where he conducts research and writes about education policy. He has conducted evaluations of school choice and accountability programs in Florida, Charlotte, Milwaukee, Cleveland, and San Antonio. He also recently published a report and a number of articles on the role of funding incentives in special education enrollment increases.

His research was cited four times in the Supreme Court’s opinions in the landmark Zelman v. Simmons-Harris case on school vouchers. His articles have appeared in policy journals, such as The Public Interest, City Journal, and Education Next, in academic journals, such as The Georgetown Public Policy Review, Education and Urban Society, and The British Journal of Political Science, as well as in major newspapers, such as the Wall Street Journal and the Washington Post.

Greene has been a professor of government at the University of Texas at Austin and the University of Houston. He received his B.A. in history from Tufts University in 1988 and his Ph.D. from the Government Department at Harvard University in 1995. He lives with his wife and three children in Weston, Florida.

Greg Forster is a Senior Research Associate at the Manhattan Institute’s Education Research Office. He is the co-author of several education studies and op-ed articles. He received a Ph.D. with distinction in Political Science from Yale University in May 2002, and his B.A. from the University of Virginia, where he double-majored in Political and Social Thought and Rhetoric and Communications Studies, in 1995.

Acknowledgements

The authors thank the Bill & Melinda Gates Foundation for funding this study. They would also like to thank the U.S. Department of Education and the state departments of education for providing the necessary data.

About Education Working Papers

A working paper is a common way for academic researchers to make the results of their studies available to others as early as possible. This allows other academics and the public to benefit from having the research available without unnecessary delay. Working papers are often submitted to peer-reviewed academic journals for later publication.

*********************************************

Introduction

Every year about a million young people who should graduate from high school don’t, condemning them to a lifetime of lower income and limited opportunities. The continuing failure of U.S. public schools to keep so many of their students in school—particularly their black and Hispanic students, who fail to graduate at much higher rates than whites and Asians—must be considered one of the most urgent problems in education policy.

But even this does not convey the full extent of the problem. More than half of the students who do manage to graduate from high school, and more than two-thirds of all students who start high school, do not graduate with the minimal requirements needed to apply to a four-year college or university. While some colleges are more selective than others, virtually all four-year colleges require that a student have taken certain courses and possess certain basic skills before they will even consider his application. High school graduates who don’t meet these requirements are not “college ready”; they are shut out of the college market before they even enter it. This represents another lifelong barrier to higher incomes and greater opportunities. As with graduation itself, black and Hispanic students are disproportionately unlikely to be college ready.
 
This study estimates public high school graduation rates using a reliable and yet simple method. It also uses data from a large national study performed by the U.S. Department of Education to estimate college readiness rates. The results show that only 70% of all students in public high schools graduate, and that only 32% of all students leave high school qualified to attend a four-year college. Among black and Hispanic students the numbers are far lower: only 51% of all black students and 52% of all Hispanic students graduate, and only 20% of all black students and 16% of all Hispanic students graduate college-ready. While the overall graduation rate of 70% for the graduating class of 2001 represents a one-point improvement over our findings in last year’s study and two points better than for the class of 1998, additional years of results will be needed to confirm whether these small changes represent a real upward trend in graduation rates.

Because of the disparities in graduation and college-readiness rates among racial groups, black and Hispanic students are seriously underrepresented in the pool of minimally qualified college applicants. Only 9% of all college-ready graduates are black and another 9% are Hispanic, compared to a total population of 18-year-olds that is 14% black and 17% Hispanic. The portion of all college freshmen that is black (11%) or Hispanic (7%) is very similar to their shares of the college-ready population. This suggests that the main reason these groups are underrepresented in college admissions is not insufficient student loans or inadequate affirmative action, but the failure of public high schools to prepare these students for college. So long as black and Hispanic students are less likely to graduate high school, and less likely to be college ready even if they do graduate high school, no financial aid or college admission policy can effectively increase their representation in higher education.

The public school system can be thought of as a pipeline. Students should flow from the start of the pipeline (entering preschool or kindergarten) all the way through to the end (graduating high school prepared for college). The problem is that too many minority students “leak” out of the pipeline along the way. Improving student financial aid or making affirmative action policies more aggressive is like opening the spigot at the end of the pipeline wider. It has no effect on the flow of minority students into higher education because the problem isn’t blockage at the end of the pipeline; it’s leakage in the middle. To be effective, any strategy for increasing minority representation in higher education has to focus on fixing the leaks in our public school system, ensuring that minority students graduate from high school with the skills needed to be ready for college.

Previous Research

High School Graduation Rate

Statistics on high school graduation rates are unnecessarily confusing and notoriously unreliable. There are three types of methods by which graduation rates are generally computed. The first method relies upon surveys, such as the Current Population Survey (CPS) or the National Education Longitudinal Study (NELS), in which respondents identify their own levels of educational attainment or those of other members of their households. The second method relies on efforts by schools, school districts, and departments of education to track the whereabouts of individual students over time to identify those who drop out and those who graduate. The third method relies on enrollment and diploma counts to estimate the rate at which students in a cohort graduate.

The first method, relying on surveys like the CPS or NELS, is the main one used in the annual report on dropouts released by the National Center for Education Statistics (NCES), a branch of the U.S. Department of Education (see Kaufman, Alt, and Chapman 2001). According to that report, 86.5% of students complete high school. However, their method is hindered by a number of problems and produces a misleadingly high estimate. First, their high school completion rate includes both regular high school graduates and GED[1] recipients. The problem is that GED recipients are fundamentally different from regular high school graduates in their expected life outcomes.

Some researchers find that GED recipients are statistically indistinguishable from high school dropouts in their expected employment prospects and earnings (see Cameron and Heckman 1993). Other researchers see modest advantages for GED recipients over dropouts (see Murnane, Willett, and Boudett 1995). But no research suggests that GED recipients are even close to equivalent to regular high school graduates in terms of their future prospects. Grouping GED recipients and regular high school graduates together as “high school completers” combines two unlike categories of students, a practice that obscures more than it reveals.

Furthermore, counting GED recipients as if they were high school graduates is misleading if our purpose is to gauge the success of the high school system in graduating students. Properly speaking, GED recipients are dropouts from high school who later decided to seek a credential. Crediting the efforts of these GED recipients to the high schools from which they dropped out is a grave distortion of reality. Unfortunately, as we will see below, counting GED recipients as graduates of the high schools from which they dropped out is a distortion that also occurs in other methods of computing graduation rates.

This problem might be alleviated if researchers could report results for students who completed high school with a regular diploma separately from those for students who received a GED. However, difficulty with the wording of survey questions has prevented the NCES report from distinguishing between these two groups of students. The 86.5% figure combines unlike categories of students that cannot be disentangled.

Second, beyond the problem of counting GED recipients as regular high school graduates, the survey method used by NCES produces misleadingly high estimates because of difficulties with the survey sample. The CPS intentionally excludes from its sample military personnel, prisoners, and other persons living in institutional settings. To the extent that non-graduates are over-represented among these excluded populations, the resulting high school completion rate will be significantly inflated.

Third, the method used by NCES produces misleadingly high estimates because it relies upon accurate self-reporting by survey respondents. People may be inclined to overstate their educational accomplishments when answering survey questions, which would make the NCES rate too high. When this self-reporting problem is considered along with the problems related to the inclusion of GED recipients and the exclusion of non-graduate populations from the survey sample, it is clear that the NCES high school completion numbers are simply not reliable indicators of the true high school graduation rate.

The second common method by which graduation rates are computed relies on efforts by schools, school districts, and departments of education to track the status of individual students over time. The problem with this approach is that these government entities have neither the resources nor the incentives to track students accurately. Tracking individuals as they move from place to place and change their life arrangements is maddeningly difficult. The U.S. Census Bureau has a hard enough time doing it, even though that is its primary mission and it has thousands of employees who do almost nothing else. Schools, whose primary mission is to educate the students they do have, naturally give a lower priority to finding and tracking students they don’t have (that is, dropouts) and devote few resources to this task.

In addition, given the negative consequences associated with identifying students who have left school as dropouts, schools have strong incentives to count those students as anything other than dropouts. Especially when information on a student is ambiguous or missing, school and government officials are inclined to say that students moved away rather than say that they dropped out. This misidentification of students’ dropout status is facilitated by the inability of researchers, journalists, or other independent parties to verify claims made about the status of individual students because of student privacy protections.

The problem is further compounded by strange state and local regulations that exclude students from being identified as dropouts. For example, in some places, such as Texas, students who drop out of school but later receive a GED (or even just state that they are seeking one) are not counted as dropouts (see TEA 2003). In other places, such as Washington state, students for whom information is missing are automatically excluded from the dropout and graduation rate calculations (see Greene 2002). This produces very strong incentives to artificially inflate the graduation rate by not collecting information on student whereabouts.

While at first glance the tracking of individual students sounds like the most precise method of calculating graduation rates, in practice it is horribly inaccurate, sometimes comically so. As recent reports in the New York Times and Houston Chronicle on graduation rates in Houston have shown, school district graduation rates based on the tracking of individual students appear to have been greatly inflated (see Schemo 2003 and Peabody 2003). Unfortunately, this problem is neither recent nor confined to the Houston school district. As documented in previous graduation rate reports, official graduation rates going back many years have been highly misleading in New York City, Dallas, the state of California, the state of Washington, several Ohio school districts, and many other jurisdictions (see Greene 2001, Greene 2002, Greene and Hall 2002, and Greene and Winters 2002).

The third method of calculating graduation rates relies on comparisons of enrollment and diploma counts. The method used in this report is an example of this approach. Similar methods have been developed by researchers at the Urban Institute (see Swanson and Chaplin 2003 and Chaplin 1999) and the Business Roundtable (see Sum et al. 2003). The U.S. Department of Education’s Digest of Education Statistics compares the number of diplomas awarded each year to the 17-year-old population, producing an aggregate graduation rate (see NCES 2003). While they vary slightly in their methods of adjusting for population changes and other details, these studies all produce remarkably consistent results. The Digest’s method also permits an estimate of the long-term trends in graduation rates. The comparative virtue of the particular method used in this study is that it more easily allows graduation rates to be estimated for different racial groups, for different school jurisdictions, and for the public school system as distinct from private schools.

College Readiness Rate

Beyond the need to measure graduation rates accurately, education policymakers are increasingly concerned that too many graduates aren’t college ready. There is a gap between what high schools require for graduation and what four-year colleges require before they will consider students’ applications, causing many students to graduate from high school unable to apply to college. Since college is a key to greater opportunity throughout the rest of a student’s life, this gap in the educational pipeline has serious consequences for those students whose high schools fail to prepare them, as well as for equality of educational opportunity among students of different races.

Obviously the term “college ready” could have many different meanings. Since the relevant issue is educational opportunity, here we are interested in whether students have the bare minimum qualifications necessary before a college will even consider their applications. Some colleges are more selective than others, of course, but there are certain absolute minimum criteria that a student must meet to apply to virtually any four-year college.

Unfortunately, we have even less reliable information about the college readiness of high school graduates than we have about the percentage of high school students who graduate. There have been some studies of how many freshmen at four-year colleges have to take remedial courses, with estimates ranging from 22% at public institutions and 13% at private institutions (see Lewis and Farris 1996) up to 49% at all institutions (see Education Trust 2001). But while this gives us some idea of how many of the students who make it to college were inadequately prepared in high school, it doesn’t tell us how many students failed to make it to college at all because they were inadequately prepared.

There has been a great deal of research on high school academic outcomes as measured by test scores. The large gap in achievement between black and Hispanic students on the one hand, and white and Asian students on the other, is well documented (for details see Education Trust 2001). The basic skills measured by these tests are certainly relevant to college readiness. However, this information alone doesn’t tell us enough to adequately measure how many students are college ready in every way that they need to be. The gaps in overall college readiness between racial groups may be larger than the gaps in test scores indicate, or they may be smaller.

Sometimes researchers use the number of students who take college entrance exams (the SAT and the ACT) as an indicator of how many students are college ready. One study found that a significant number of students with high GPAs, test scores, or class ranks don’t go on to college; but among students who take an entrance exam and apply to college, virtually all attend, even low-income students. The study concluded that lack of financial aid is not a major barrier to college attendance for low-income students who are college ready. The report cites low educational expectations, poor academic preparation, lack of information about available financial aid, and failure to take entrance exams and apply to college as factors limiting college access (see Berkner and Chavez 1997). Drawing this conclusion implies that taking an entrance exam is a necessary component of college readiness properly understood.

College officials have also used college entrance exams as a barometer of college readiness. In 1999, a committee of the Texas Senate summoned the chancellors of the state’s five largest universities to explain why they did not enroll more minority students. They said the problem was that the K-12 education system produced very few college-ready minority students, citing (among other factors) the low number of minority students who had taken the required entrance exams (see Hock 2003).

Colleges can’t be expected to enroll students who have not taken entrance exams, of course, but the number of students who take these exams is not a good measurement of college readiness. Not every college-ready student takes college entrance exams. If a student knew he was unlikely to attend college despite being college ready, perhaps because of financial hardship or just because he didn’t want to go, he might not bother to take an entrance exam. Without some way to distinguish students who didn’t take an exam because they weren’t college ready from students who didn’t take an exam for other reasons, this method is likely to underestimate the number of college-ready youth. For this reason, we should not use the number of students who take these exams as a measurement of how many students are college ready.

There has been a previous attempt to directly measure college readiness. NCES researchers have developed a college-readiness index that ranks students as “marginally or not qualified,” “minimally qualified,” “somewhat qualified,” “highly qualified,” or “very highly qualified.” Cutoff points for these ranks were established for each of five criteria—grade point average in academic courses, class rank, score on the NELS test (an NCES aptitude test), SAT score, and ACT score. Each student was judged based on his highest-scoring criterion; if a student’s SAT score ranked him “somewhat qualified” but his GPA ranked him “highly qualified,” he counted as “highly qualified.” In addition to these five criteria, students were moved up a rank if they had taken “rigorous academic coursework,” defined as having taken four years of English; three years each of natural science, social science, and math; and two years of foreign language. “Very highly qualified” students who had not taken such courses were moved down a rank. One study using this index found that 64.5% of 1992 high school graduates were at least “minimally qualified,” with significant differences by race, income, and parents’ education (see Berkner and Chavez 1997).

While superficially attractive, the NCES college-readiness index is also not an accurate indicator of which students are able to apply to college. Perhaps the most important problem is that the standard for minimum college readiness is set too low. A student is considered “minimally qualified” for college if his GPA is at least 2.7, or if his class rank is in at least the 54th percentile, or if his NELS test score is in at least the 56th percentile, or if his SAT score is at least 820, or if his ACT score is at least 19.

These cutoff points may not seem too low at first glance, but we must bear in mind that each student is ranked only according to his highest-scoring criterion. A student with a 2.7 GPA is considered college ready regardless of his test scores, class rank, or transcript. NCES adopted this system in order to cope with lack of information; for many students, especially low-income and minority students, information was not available on all five criteria. Many students had information available on only one or two criteria. But while the NCES method does allow us to rank students for whom little information is available, it does not reflect the way colleges really select students. No doubt there are many students with 2.7 GPAs who could get into college, but it is equally certain that there are many who could not.
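The highest-scoring-criterion logic described above can be sketched in code. This is an illustrative sketch only; the function name and keyword arguments are ours, and the cutoffs are the ones quoted in the text for the "minimally qualified" rank.

```python
# Illustrative sketch of the NCES "minimally qualified" logic described
# above: a student is ranked by his single highest-scoring criterion,
# so meeting ANY one cutoff suffices. Missing criteria (None) are simply
# ignored, as NCES did when data were unavailable.

def minimally_qualified(gpa=None, class_rank_pct=None,
                        nels_pct=None, sat=None, act=None):
    """Return True if any available criterion meets its cutoff."""
    checks = [
        (gpa, 2.7),            # GPA in academic courses
        (class_rank_pct, 54),  # class rank percentile
        (nels_pct, 56),        # NELS test percentile
        (sat, 820),            # combined SAT score
        (act, 19),             # ACT composite
    ]
    return any(value is not None and value >= cutoff
               for value, cutoff in checks)

# A student with a 2.7 GPA counts as college ready even with a low SAT
# score and nothing else available -- the weakness discussed above.
print(minimally_qualified(gpa=2.7, sat=600))   # True
print(minimally_qualified(class_rank_pct=40))  # False
```

The sketch makes the criticism concrete: because the criteria are combined with an OR rather than weighed together, one modest credential is enough to count a student as college ready.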

One other problem with the NCES index is its handling of students’ transcripts. NCES gives students a bonus for having taken the right combination of courses, but four-year colleges require students to have taken certain courses before they can even apply. Students who have taken the wrong courses don’t just move down a step in college eligibility; they are completely shut out of the college market. On top of this, as we will see in the next section, the NCES transcript screen is too tough; it doesn’t accurately reflect the requirements of less selective four-year colleges.

A measurement of college readiness that more accurately reflects the minimum admissions requirements for college is essential for education policy. Such a measurement will allow us to determine the extent of our schools’ failure to prepare students to apply to college. It will also answer crucial questions regarding inequality of opportunity for students in different racial groups.

In particular, it will allow us to explain the lower rates of college participation among black and Hispanic students. Some claim that lack of adequate financial aid is the major factor in denying college access to minority students (see ACSFA 2002 and Winter 2003). Others blame insufficient affirmative action in college admission policies (see NAACP 2003). Still others, like the Texas university chancellors mentioned above, say that almost all college-ready students who want to attend college are already doing so, and the problem is that the K-12 education system turns out too few college-ready minority students. We cannot properly evaluate these claims until we are first able to determine which students are truly college ready.

Method

High School Graduation Rate

To estimate high school graduation rates, this study follows the Greene Method (see Greene and Winters 2002). The Greene Method relies on relatively accurate data, counts only full diploma recipients as graduates, and provides information specifically on the public school system. The number calculated by the Greene Method is only an estimate of the public school graduation rate, but so is any other calculation of the graduation rate, and the Greene Method is relatively reliable and transparent.

The Greene Method relies on enrollment data and diploma counts collected by the U.S. Department of Education’s Common Core of Data (CCD), a national clearinghouse for education data. Enrollment data are far more reliable than dropout counts, because they are easy to collect and there is little incentive for officials to distort them. What’s more, CCD sets strict procedural requirements for data collection in order to ensure the reliability of its data.

CCD’s “State Nonfiscal Public Elementary/Secondary Education Survey Data” provides enrollment numbers for every grade level as well as diploma counts.[2] This information is provided separately for each state, and in most states is also provided broken down by racial group. Taking advantage of this detailed data set, we used the Greene Method to estimate public high school graduation rates nationwide and in each region and state. In states where racial data are available in every year, we also calculate the graduation rate by race, and then add up enrollments in those states to calculate the regional and national graduation rates for racial groups.

To estimate the graduation rate for a given cohort, we need two numbers: a numerator that measures the number of graduates and a denominator that measures the number of students from that cohort who should have graduated if none had dropped out. The numerator is easy enough; we use CCD’s count of the number of regular high school diplomas awarded in spring of 2001, the year our cohort graduated from high school. The denominator, however, requires some calculating.

We begin by estimating the number of students in our cohort when it first entered high school. The graduating class of 2001 entered high school in 1997-98, so we must determine how many students were in 9th grade for the first time in that year. We do not just use the total 9th grade enrollment for that year because we want to account for students from the previous cohort who were held back (a particularly large number of students are held back in 9th grade). Instead, we take the average of three numbers: the total 8th grade enrollment in 1996-97, the total 9th grade enrollment in 1997-98, and the total 10th grade enrollment in 1998-99. This process, called statistical “smoothing,” gives us a good estimate of the size of our cohort when it entered 9th grade in 1997-98.

Next, we use this estimate to determine how many students would have been in our cohort four years later if no students had dropped out. This requires us to adjust for population changes. Obviously if a significant number of students moved out of the country (or region, or state) while our cohort was in high school, we would want to lower our estimate of the cohort size to avoid labeling the students who moved out as dropouts. Similarly, if students moved into the country (or region, or state) we would want to increase our estimate of the cohort size.

We cannot directly measure the change in our cohort’s population using enrollment data. However, we can measure something very close to it: the overall change in the high school population. We can reasonably expect that population changes in successive student cohorts will be similar. There is no reason to think that, say, large numbers of 10th graders will flee a state at the same time as large numbers of 11th graders are flocking into it.

We compute total high school enrollment (grades 9 through 12) in the year our cohort entered high school (1997-98) and in the year it left (2000-01). We then subtract the former from the latter to get the change in the high school population, and divide this by the total high school enrollment in 1997-98 to get the percentage change in the high school population. We multiply this percentage by our smoothed estimate of the cohort size in 9th grade to estimate the change in the cohort population, then add this change to our 9th grade cohort estimate to get our 12th grade cohort estimate.

Now we have our denominator. All that remains is to divide the numerator (the number of diplomas awarded in spring 2001) by the denominator (our estimate of the size of our cohort in 2000-01). The result is the graduation rate.

For example: to calculate the graduation rate of black students in Florida, we begin by taking the average of the state’s 8th grade black enrollment in 1996-97 (41,625), its 9th grade black enrollment in 1997-98 (53,365), and its 10th grade black enrollment in 1998-99 (42,698). This gives us our smoothed 9th grade cohort estimate (45,896). We then add up the state’s total high school enrollment for black students in 1997-98 (150,748) and in 2000-01 (164,237), subtract the former from the latter to get the population change (13,489), and divide this by the total high school enrollment in 1997-98 to get the percentage high school population change (8.95%). We multiply this by our 9th grade cohort estimate to get our estimate of the change in the cohort population (4,108) and add this to our 9th grade cohort estimate to get our 12th grade cohort estimate (50,003).[3] Finally, we divide the number of high school diplomas awarded to black students in Florida in spring 2001 (23,608) by our 12th grade cohort estimate to get our graduation rate for black students in Florida (47%).
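The arithmetic just walked through can be collected into a short function. The following is a minimal sketch (in Python, for illustration only, not the actual implementation used in this study), reproducing the Florida figures quoted above:

```python
def greene_graduation_rate(grade8_prior, grade9_entry, grade10_next,
                           hs_total_entry, hs_total_exit, diplomas):
    """Estimate a cohort's graduation rate: smooth the 9th grade
    enrollment, adjust for population change, then divide diplomas
    awarded by the resulting 12th grade cohort estimate."""
    # Smoothed 9th grade cohort: average of the three adjacent enrollments.
    cohort_9 = (grade8_prior + grade9_entry + grade10_next) / 3
    # Percentage change in total high school enrollment over the period.
    pct_change = (hs_total_exit - hs_total_entry) / hs_total_entry
    # Adjusted 12th grade cohort estimate.
    cohort_12 = cohort_9 * (1 + pct_change)
    return diplomas / cohort_12

# Black students in Florida, class of 2001 (figures quoted above):
rate = greene_graduation_rate(41625, 53365, 42698, 150748, 164237, 23608)
print(f"{rate:.0%}")  # 47%, matching the reported rate
```

Carrying full precision through the intermediate steps, rather than rounding at each stage as in the prose walkthrough, yields the same 47% result.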

Though our adjustments for population changes are effective in large cohorts (e.g. whites in Texas), they are vulnerable in small cohorts (e.g. American Indians in Rhode Island). Particularly small cohorts, as well as those with exceptionally high population changes, are more susceptible to unique events that our population adjustments cannot adequately account for. These cohorts could distort our enrollment adjustments and produce implausible estimates.

Because of the sensitivity of our estimates to enrollment anomalies, the Greene Method applies a set of rules for eliminating graduation estimates. We eliminate any cohort for which the smoothed 9th grade cohort estimate before adjusting for population change is fewer than 200 students, as well as any cohort for which there is a greater than 30% population change. Furthermore, if a cohort has a smoothed 9th grade cohort estimate of fewer than 2,000 students, we eliminate it if it has a population change greater than 20%. These rules allow us to focus exclusively on cohorts for which we have greater confidence and eliminate those where anomalies within the population or a limited cohort enrollment are more likely to taint the results.[4]
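These elimination rules amount to a set of simple threshold tests. A sketch follows (Python, illustrative only; it assumes the 30% and 20% thresholds apply to the magnitude of the population change in either direction, which the prose does not state explicitly):

```python
def keep_cohort(smoothed_9th_estimate, population_change_pct):
    """Return True if a cohort's graduation estimate is retained
    under the elimination rules described above, False otherwise."""
    change = abs(population_change_pct)  # assumption: magnitude, either direction
    if smoothed_9th_estimate < 200:      # very small cohorts: always dropped
        return False
    if change > 0.30:                    # extreme population change: dropped
        return False
    if smoothed_9th_estimate < 2000 and change > 0.20:
        return False                     # small and volatile: dropped
    return True

print(keep_cohort(45896, 0.0895))  # True: the Florida cohort above is kept
print(keep_cohort(1500, 0.25))     # False: small cohort with >20% change
```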

College Readiness Rate

In addition to calculating high school graduation rates using the Greene Method, we also calculate college readiness rates in this study. We are able to do this by taking advantage of a large national study performed for NCES on a sample of 1998 high school graduates. The study, called the “NAEP High School Transcript Study,” compiled detailed information on the high school record of each student in its representative sample, including courses taken and scores on the 1998 administration of the NAEP reading test (a nationally administered standardized test).[5] Although the data in this study are from 1998, we have no reason to believe that levels of college readiness had changed dramatically by 2001, so we use these data when calculating our estimate of the 2001 college readiness rate.

We measure college readiness by applying three screens that separate those who do or do not meet three minimum requirements that are necessary to apply to virtually any four-year college. It is important to note that our screens are specifically intended to measure the job that public schools do in making students college-ready—that is, the movement of students through the public school “pipeline.” Thus, our screens do not look for students who have “leaked” out of the public school pipeline but have subsequently made themselves college-ready. For example, a student might drop out of high school and then obtain a GED and attend community college classes to make himself college-ready. Such a student is to be commended for bouncing back, but he is not to be counted as a student who successfully navigated the public school pipeline.

The first screen is the most obvious: a student must have completed high school to apply to college. Thus, our high school graduation rate also serves as our first screen for college readiness. The percentage of each group that passes the first screen is simply the Greene Method estimate of the high school graduation rate for that group.

Our second screen looks at student transcripts. Colleges will only consider applications from students who have taken a certain set of courses. To pass our second screen, a student must have taken four years of English, three years of math, and two years each of natural science, social science, and foreign language. This standard reflects the minimum coursework a student must have to apply to four-year colleges with any reasonable hope of attending.

To develop this screen, we reviewed transcript requirements for admission at a number of four-year colleges, including state universities in California, Texas, Florida, New York, Illinois, and Michigan, representing a large portion of all public university students in the country. In particular, we focused our review on colleges that would be representative of the lowest level of prestige and selectivity, such as the California State University system, the City University of New York system, and Wayne State University in Michigan. Our transcript screen is the very lowest set of requirements we found among all the colleges we examined; most of the colleges we reviewed actually had entrance requirements stricter than our screen.

Our transcript screen is not intended to represent the very lowest set of requirements at any college anywhere. No doubt one could find examples of colleges with lower requirements. However, the number of admission slots available at these colleges must pale to insignificance in comparison to the number of public high school graduates lacking solid academic transcripts, even if such graduates were only a very small portion of all graduates. If hundreds of thousands of students are graduating each year without academic courses on their transcripts, it would be cold comfort to point out that a tiny number of them will nonetheless be able to go to college at the few schools that will take them.

What our transcript screen does represent is the minimum coursework a student should have if he has any serious intention of attending a four-year college right out of high school. Even the least prestigious and selective colleges believe that students lacking these courses do not have the minimal skills that the courses are meant to convey. Any student whose transcript does not pass our screen does not have a reasonable chance of being accepted to a four-year college.

Our third screen looks at basic reading skills. A student who has taken all the right courses still can’t go to college if he does not possess the basic literacy skills required of students in four-year colleges. The NAEP Transcript Study allows us to measure the academic performance of a representative sample of high school graduates, as opposed to measuring the performance of only those students who choose to take a college entrance exam. Also, crucially, we can identify graduates who pass our transcript screen and measure their academic performance separately. A student passes our third screen if his NAEP reading score is at least 265, the official cutoff for what NAEP calls a “basic” level of achievement, the lowest level of achievement it recognizes (see Loomis and Bourque 2001).

To measure outcomes for the second and third screens, we analyze the data from the NAEP Transcript Study. We exclude students who graduated with alternative forms of certification, such as certificates of attendance, as well as students who attended private schools. Then we determine what percentage of the high school graduates in the study pass our transcript screen, and what percentage of those who pass the transcript screen also pass the test score screen.[6] Finally, we apply all three screens by multiplying the percentage of students who graduate high school by the percentage of graduates who pass the transcript screen by the percentage of transcript-ready graduates who pass the test score screen.
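Because each screen is applied only to the survivors of the previous one, the overall readiness rate is a product of conditional percentages. A sketch (Python, illustrative only, with the conditional rates back-derived from the national figures reported in the Results section):

```python
def college_readiness_rate(grad_rate, transcript_given_grad,
                           reading_given_transcript):
    """Chain the three screens: graduation, transcript, reading skills."""
    return grad_rate * transcript_given_grad * reading_given_transcript

# National figures: 70% of students graduate; 36% of all students pass the
# first two screens, so roughly 0.36 / 0.70 = 51% of graduates pass the
# transcript screen; 32% pass all three, so roughly 0.32 / 0.36 = 89% of
# transcript-ready graduates pass the reading screen.
rate = college_readiness_rate(0.70, 0.36 / 0.70, 0.32 / 0.36)
print(f"{rate:.0%}")  # recovers the 32% national college readiness rate
```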

The student sample in the NAEP Transcript Study is large enough to be representative at the national and regional level, but not large enough to still be representative if broken down all the way to the state level. When analyzing the data from this study for our second and third college-readiness screens, we did not break it down to the state level. Instead, we calculated figures for each region and then used the regional figures as estimates for each state. Thus while our national and regional figures for college readiness are the result of direct measurement, our state figures are estimates that do not reflect variation within each region. For this reason our state-level college readiness figures should be accepted with a lower degree of confidence than our regional and national figures.

Comparison of Overall, College-Ready, and College-Entering Populations

Using our estimate of the national college readiness rate for students of different races, we address the question of whether the lower college attendance rates of black and Hispanic students are attributable to lower college readiness rates among those groups. If the K-12 education system is disproportionately failing to prepare black and Hispanic students to apply to college, this may explain why so few students in those groups attend college. To the extent that lower college attendance by black and Hispanic students is attributable to lower college readiness rates, it cannot be attributed to insufficient financial aid or inadequate affirmative action policies.

We begin with Census population data on the overall 18-year-old population in 2000.[7] Then we multiply the number of 18-year-olds in each racial group by our national college readiness rate for that group. This gives us a picture of the college-ready population in that year.[8]

Reliable data for the racial composition of incoming college freshmen in 2000 are not readily available. But we are able to calculate a good estimate by using enrollment data for racial groups among all four-year college students, taken from the NCES report “Enrollment in Postsecondary Institutions, Fall 2000 and Financial Statistics, Fiscal Year 2000.”[9] We calculate each racial group’s percentage of all four-year college students in 2000 and then use this as an estimate of the racial composition of incoming freshmen in that year.[10] To translate these percentages into estimated enrollment numbers, we multiply them by the total number of incoming freshmen in four-year colleges in 2000, taken from the Digest of Education Statistics.[11]
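As an illustration of how these calculations fit together, the following sketch (Python, using a hypothetical cohort of one million 18-year-olds rather than the actual Census counts, combined with the population shares and readiness rates reported in this study) shows why black and Hispanic students each come out at roughly 9% of the college-ready pool:

```python
# Hypothetical cohort of 1,000,000 18-year-olds, using rates reported in
# this study: overall readiness 32%; black students 14% of the population
# with a 20% readiness rate; Hispanic students 17% with a 16% rate.
population = 1_000_000
total_ready = population * 0.32
black_ready = population * 0.14 * 0.20
hispanic_ready = population * 0.17 * 0.16
print(f"{100 * black_ready / total_ready:.2f}%")     # 8.75% of ready pool
print(f"{100 * hispanic_ready / total_ready:.2f}%")  # 8.50% of ready pool
```

Both shares are close to the 9% figures reported below; the small discrepancies come from using rounded input percentages.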

Results

High School Graduation Rate

The overall results of our calculation of high school graduation rates are provided in Tables 1 and 2. The national high school graduation rate for the class of 2001 was 70%. This represents a one-point increase from the Greene Method calculation of the graduation rate for the class of 2000, which in turn was a one-point increase over the Greene Method calculation for the class of 1998 (see Greene and Winters 2002). While these one-point increments of movement are too small to represent a sizeable change in the graduation rate, the upward direction of the trend is encouraging. Nonetheless, with 30% of all high school students failing to graduate, the public school system clearly has a long way to go.

The state with the highest graduation rate in the nation was North Dakota, with a rate of 89%. North Dakota had the second highest graduation rate for the previous year. Other states with high graduation rates include Utah (87%), Iowa (85%), South Dakota (85%), and West Virginia (84%).

The state with the lowest graduation rate in the nation was Florida, with a rate of 56%. Florida also had the lowest graduation rate for the previous year. Other states with low graduation rates include Georgia (56%), South Carolina (57%), Tennessee (60%), and Nevada (61%).

There were regional differences as well. Graduation rates in the Northeast (73%) and Midwest (77%) were higher than the overall national figure, while graduation rates in the South (65%) and West (69%) were lower than the national figure. However, when interpreting these results we should also bear in mind the regional results for our calculation of college readiness rates (see below).

The results of our calculation of high school graduation rates broken down by racial group are provided in Table 1 and separately in Tables 3-7. The overall graduation rate for white students was 72%; for black students, 51%; for Hispanic students, 52%; for Asian students, 79%; and for American Indian students, 54%. Thus the graduation rates for black, Hispanic, and American Indian students continue to be significantly lower than those of white and Asian students.[12]

The states with the highest graduation rates for particular racial groups were North Dakota for white students (93%), New Mexico for black students (73%), Louisiana for Hispanic students (74%), Arkansas for Asian students (94%), and Oklahoma for American Indian students (72%). The states with the lowest graduation rates for particular racial groups were Florida for white students (61%), Wisconsin for black students (44%), New York for Hispanic students (42%), Mississippi for Asian students (65%), and Wyoming for American Indian students (40%).

We had to exclude some states from our racial group analyses because those states did not report enrollments broken down by race in every year. For this reason, our national figures for the graduation rates of racial groups should be interpreted cautiously, as they do not include data from every state. Also, the set of states excluded for this reason is different from the set excluded from our racial group analyses in previous years, so comparisons should not be drawn between class of 2000 and class of 2001 figures. In particular, the inclusion of two large states (Michigan and Ohio) in this year’s national calculations for white and black students renders them non-comparable to the previous year’s figures.

In other cases, particular racial cohorts in some states were excluded because they were too small or too mobile to allow a reliable estimate of their graduation rates. However, even though their individual results are not reported separately, the data from these racial cohorts were included when we calculated our national analyses of racial groups.

College Readiness Rate

The results of our calculations of college readiness rates are provided in Tables 8 and 9. Table 8 gives the percentage of all students who pass our first two screens. These students have graduated high school with a regular diploma and have taken the courses necessary to apply to college. Nationally, 36% of all students meet these criteria. For white students the percentage passing these two screens is 39%; for black students, 25%; for Hispanic students, 22%; for Asian students, 46%; for American Indian students, 21%.

Table 9 gives the percentage of all students who pass all three of our screens—that is, the college readiness rate. The national college readiness rate was 32%. There were significant differences between racial groups. The national white college readiness rate was 37%, the national black college readiness rate was 20%, the national Hispanic college readiness rate was 16%, the national Asian college readiness rate was 38%, and the national American Indian college readiness rate was 14%.

Two of the regions we analyzed, the Northeast and the Midwest, had the same college readiness rate as the nation overall (32%) while the South had a higher rate (38%) and the West had a lower rate (25%). Given that the Northeast and Midwest have high school graduation rates higher than the nation overall, for their college readiness rates to be the same as the nation overall it would have to be the case that students in those regions who do graduate high school are less likely than the average U.S. student to be college ready. Meanwhile, the South had the lowest graduation rate of any region but the highest college readiness rate, indicating that among those who do graduate from Southern high schools a particularly high percentage is college ready.

In most cases, the difference between the figures in Table 8 and the figures in Table 9 is relatively small, which may lead some to conclude that the transcript screen disqualifies a much larger number of students than the reading skills screen. This appearance is misleading, because many students fail both screens. Students who haven’t taken college-preparatory courses are more likely to be unable to read at the Basic level on NAEP than students who have taken such courses. Thus, whichever screen is applied first (transcript or reading skills) will remove a large number of students, leaving behind a relatively small number of students to be caught by the remaining screen. See the Conclusion for further discussion.

Comparison of Overall, College-Ready, and College-Entering Populations

The results of our comparison of the overall, college-ready, and college-entering populations are provided in Table 10. Because black and Hispanic students have lower college readiness rates, it follows that they will make up a smaller portion of the college-ready population than of the overall population. Our calculation is that while black students made up 14% of the overall 18-year-old population in 2000, they made up only 9% of college-ready 18-year-olds in that year, and while Hispanic students made up 17% of all 18-year-olds, they made up only 9% of college-ready 18-year-olds.

These figures are very similar to the racial composition of the 2000 college-entering population. Black students, who made up 9% of the college-ready population, made up 11% of the college-entering population; Hispanic students, who also made up 9% of the college-ready population, made up 7% of the college-entering population. Our estimate of the size of the overall college-ready population is also very similar to the actual size of the 2000 college-entering population; we estimate that there were about 1,299,000 college-ready 18-year-olds in 2000, and the actual number of persons entering college for the first time in that year was about 1,341,000.

Conclusion

Our calculation of high school graduation rates demonstrates that the public school system is not only losing 30% of all its students before graduation, it also loses disproportionately more black and Hispanic students than white and Asian students. Our calculation of college readiness rates shows that only 32% of all students—fewer than half of those who graduate and about one-third of all students who enter high school—leave high school with the bare minimum qualifications necessary to apply to college. Again, black and Hispanic graduates are disproportionately not college ready as compared to their white and Asian peers.

Our estimate of the college-ready population is very similar to the actual college-entering population, in terms of both its size and its racial composition. There are no large differences between the number of students in each racial group who graduate from high school college-ready and the number in each racial group who are entering college. In the case of black students, however, the difference between the two populations does stand out, not for the size of the difference but because the number actually attending college is larger than the number who are college ready. There are a number of possible explanations for this. One is simply measurement error; these figures are estimates, not exact counts. Another is that black students may be more likely than other students to become college ready through alternative means—recall that our study measures only those who become college ready through the public school pipeline, excluding those who become college ready on their own. Finally, it is possible that a significant number of black students are being admitted to college without being college ready. All three of these explanations may be true to some extent.

Based on the overall findings of our study, we conclude that by far the most important reason black and Hispanic students are underrepresented in college is the failure of the K-12 education system to prepare them for college, rather than insufficient financial aid or inadequate affirmative action policies. Our calculations indicate that there is not a large disparity between the population that is minimally qualified to attend college and the population that actually does attend college, which would be the case if large numbers of students were kept out of college by financial hardship or inadequate affirmative action. While some college-ready students are undoubtedly denied the opportunity to attend college, the results of this study suggest that the number of such students is not large.

It seems likely that the primary effect of more aggressive affirmative action policies would not be to expand the college attendance of minorities but to change the existing distribution of minority college students. Affirmative action policies cannot increase the total number of minority students who are college ready. If large numbers of college-ready minority students were not currently enrolling in college, one might argue that more aggressive affirmative action might expand new opportunities to those students. But given that almost the entire pool of college-ready black and Hispanic students already enrolls in college, the only thing left for affirmative action to do is to shuffle those students around from school to school. Intensified affirmative action policies might raise the number of black and Hispanic students at a particular school, but these gains would have to come almost entirely from losses at other schools.

Some might be tempted to expand college opportunities by pressuring colleges to lower their entrance standards, in particular by dropping their transcript requirements. But colleges do not require applicants to have taken particular academic courses simply because they enjoy gratuitously tormenting high school students or because they’re getting kickbacks from the nation’s English and math teachers. Colleges believe that the courses they require convey crucial skills that students will need to get by in college. There isn’t much point in pressuring colleges to accept students who haven’t taken math classes if those students are just going to flunk out of college later because they lacked the necessary math skills.

The colleges’ position that academic courses in high school convey skills that are necessary in college is reasonable, and the data we collected in our study provide some support for it. Students who passed our transcript screen were more likely to read at Basic level on the NAEP (88.2%) than students who did not (66.9%). This gap was similar across all racial groups. For example, among black students 79.2% of students passing our transcript screen read at Basic level, compared to 54.6% of students not passing the screen; among Hispanic students 74.8% of students passing our transcript screen read at Basic level, compared to 54.5% of students not passing the screen.

Finally, some may say the blame for low rates of college readiness lies with the inability of the students themselves to learn rather than with the job public schools do of teaching them. They may argue that the problem of college readiness can’t be solved until poverty, racism, illegitimacy, and a host of other social ills are cured first. No doubt it is true that a certain number of students will not graduate college-ready regardless of what schools do, because they lack either the ability or the motivation to achieve college readiness. However, the potential effect—positive or negative—that public schools can have on the college readiness of their students is very large. It seems likely that very few people would go so far as to say that the public school system is now doing the best possible job of preparing students for college. The existence of other social problems does not excuse the public school system’s inadequate performance.

So long as the public education system disproportionately fails to produce college-ready black and Hispanic graduates, those populations will always be underrepresented in college. No amount of financial aid or affirmative action can change this mathematical reality. Reform of the K-12 education system is the key to improving college access for these groups.

 




 

