Manhattan Institute for Policy Research.
Project FDA Report

No. 7 April 2014


AN FDA REPORT CARD:

Wide Variance in Performance Found Among Agency's Drug Review Divisions

Joseph A. DiMasi, Ph.D.
Christopher-Paul Milne, D.V.M., M.P.H., J.D.
Alex Tabarrok, Ph.D.


Table of Contents:
Abstract
Foreword
Executive Summary
About the Authors
Introduction
Improving the FDA
Inconsistency Across FDA Divisions
Measuring the Efficiency of the FDA by Division
Productivity Metrics
Discussion
The High Value of Increased Life Expectancy and Pharmaceutical Innovation
The Gains from a Faster FDA
Recommendations
Conclusion
Appendix 1: CDER Reorganizations
Appendix 2: Data Sources
Endnotes
References


ABSTRACT

The United States Food and Drug Administration (FDA) reviews and must ultimately approve any new drug as "safe and effective" before it can be marketed for sale in the United States. The question of whether the agency is too cautious in its reviews (delaying access to critically needed treatments), or too fast in issuing approvals (potentially exposing patients to undetected risks from new products), has long been a subject of public debate.

This study attempts to provide a more objective examination of the FDA's performance by analyzing disparities in review and approval times across 12 review divisions within the FDA's Center for Drug Evaluation and Research (CDER). After reviewing nearly 200 products accounting for 80 percent of new drug and biologic launches from 2004 to 2012, the authors find wide variation in division performance. In fact, the most productive divisions (Oncology and Anti-Viral) approve new drugs roughly twice as fast as the CDER average and three times faster than the least efficient divisions—without the benefit of greater resources, reduced complexity of task, or reduction in safety. The authors estimate that a modest narrowing of the CDER divisional productivity gap would reduce drug costs by nearly $900 million annually. The value to patients, however, would be far greater if the agency could accelerate access to an additional generation of drugs (about 25 new products): greater agency efficiency that brought a single generation of drugs to patients one year sooner would be worth about $4 trillion, in the form of enhanced U.S. life expectancy. To reap such gains, this study encourages Congress and the FDA to evaluate the agency's most efficient drug review divisions more closely and to apply the lessons learned across CDER. We also propose a number of reforms that the FDA and Congress should consider to improve efficiency, transparency, and consistency at the divisional level.


FOREWORD

The Digital Future of Molecular Medicine: Rethinking FDA Regulation

by Dr. Andrew C. von Eschenbach, Chairman, Project FDA
Former commissioner, U.S. Food and Drug Administration; former director, National Cancer Institute

By overseeing products vital to the health of all Americans, the U.S. Food and Drug Administration may well be the most important regulatory agency within the federal government. But even the most knowledgeable experts can be uncertain as to exactly how the FDA conducts the business of regulation and arrives at its decisions—especially when it approves, or decides to withhold, a drug or medical device from the market. Such uncertainty about the process of regulatory decision making is often the source of much criticism, generating public and congressional concern about the FDA's performance.

As the most recent former FDA commissioner, I can personally testify that the agency's staff are, across the board, among the country's most dedicated, talented public servants. Indeed, their knowledge and capabilities qualify them for far more lucrative positions in the private sector; it is to their enormous credit that they continue to serve the public through the agency. This report suggests that it may be the process, rather than the people, that is affecting FDA performance: wide variations in drug approval times among the agency's Center for Drug Evaluation and Research (CDER) divisions cry out loudly for a formal and scientific process assessment—one that could lead to marked improvement of the policies and procedures employed in regulating new drug development.

Any rigorous attempt to analyze performance must begin with data: detailed metrics that allow us to analyze the steps in the process, along with the variables affecting the outcome of the process. The authors of this report have taken a giant step in that direction by assembling and analyzing a wide array of publicly available information about the relative performance of individual CDER divisions. Their analysis provides compelling data that should be viewed—by the agency, its overseers in Congress, and the Obama administration—as an important contribution to the assessment of the FDA's performance and ongoing debate over how it might be improved.

Continuous quality improvement measures routinely used by private industry could serve FDA leadership, sponsors, and patients by discerning the factors that contribute to an optimal level of performance and, more important, by disseminating such practices to ensure that all divisions achieve that performance. The payoff for such an effort could be enormous. The authors suggest that cutting the current divisional performance gap in half would not only deliver enormous gains to patients from faster access to new medicines (improving health and extending longevity) but would also reduce drug development costs (by hundreds of millions of dollars annually). Lower costs would, in turn, encourage greater investment in new medicines, spurring a virtuous cycle of investment and innovation.

Process improvement should not be a controversial proposal. An organization like the FDA—which is over a century old and which has maintained its current, basic organizational framework for decades—requires new tools to adapt to changing circumstances. What do agency processes involve? Simply stated, when making regulatory decisions, the FDA first acquires data about a new drug or device (or a report of an adverse event); second, it aggregates and analyzes those data; and finally, it acts on that information. That action (whether to approve, reject, or request product withdrawal from the market) is the result of a process. As in many other sectors, such processes can be greatly improved by embracing new technologies and practices based on a systematic, constant assessment of the various steps and control points employed.

While it may not be possible, or even preferable, to employ all the strategies for process improvement utilized in the private sector, this does not preclude adopting the concept to help make the FDA more responsive, flexible, and forward-looking in the evaluation and updating of its own processes.

In addition to the need for a thorough mechanism for ongoing internal assessment of the variables affecting performance, another consideration for the agency is the lack of formal mechanisms for ensuring timely access to external inputs. When, as director of the National Cancer Institute, I assessed our performance, I was blessed to have strategic advice and oversight from formal boards, including the National Cancer Advisory Board, the Board of Scientific Advisors, and the Board of Scientific Counselors. At present, the FDA commissioner, as de facto CEO of the agency, has no such resource to serve this function: existing FDA "advisory committees" are either too narrowly focused on scientific questions or are organized, ad hoc, to focus on specific product reviews.

Congress, of course, ultimately retains responsibility for conducting meaningful oversight of the FDA. In practice, however, congressional hearings are sporadic and too often focused on problems rather than on fundamental processes. Congress is, admittedly, occupied by many other critically important issues and has relatively little time to systematically delve into the agency's internal workings. Moreover, its infrequent, limited reviews are usually associated only with the reauthorization of industry user fees. This study suggests that the time has come to consider another way for Congress and the administration to oversee FDA performance—by not only requiring an internal, continuous, quality performance mechanism but also by creating an external review body. Creating an independent external review board (something akin to the National Cancer Advisory Board) would inform and support the ability of the FDA commissioner to implement significant management, structural, and process changes across the agency, while greatly enhancing the ability of Congress to provide nuanced, timely oversight.

With data in hand on an ongoing basis, the commissioner, together with FDA division heads, could manage meaningful change and improvements to the regulatory process. As the authors of this report correctly remind us, change has indeed happened in the past—yet mostly as a response to crises rather than to opportunity. Strategic change based on performance data directed toward improvement of outcomes, combined with dissemination of best practices, should be an ongoing way of doing business across the FDA.

I have enjoyed no greater privilege in my professional career than serving alongside the FDA's talented staff. Today, the agency has more potential than ever to help the U.S. lead the world in advancing a biomedical revolution, one that will have an impact on every aspect of America's economy and health-care system by improving health, increasing productivity, and reducing overall health-care costs.

The FDA, in short, has no more important and honorable role than to serve as a reliable bridge, extending the benefits of this latest medical revolution to patients. Indeed, rather than a criticism, this report should be viewed as a positive, constructive contribution to a desperately needed dialogue on how to assist the agency in fulfilling this vital national goal.


EXECUTIVE SUMMARY

Drug development is an important American industry—both in terms of the health of the country's citizens and the nation's economic competitiveness. The Food and Drug Administration (FDA) plays a crucial role as gatekeeper: it reviews and must ultimately approve any new pharmaceutical drug as "safe and effective." The question of whether the agency is too slow in its reviews (delaying access to critically needed treatments) or is too fast in issuing approvals (potentially exposing patients to undetected dangers) has long been a subject of public debate.

A number of studies in recent decades have tried to explain why some drugs are approved by the FDA more quickly than others. The answers have ranged from fast-tracking more important drugs to pressure from Congress or groups of patients.

But the FDA is not monolithic. Its Center for Drug Evaluation and Research (CDER) has more than a dozen divisions with reviewing authority, each specializing in a particular medical sector or sectors. For example, one division regulates drugs that affect metabolism and endocrinology, and another reviews anti-infective medications.

We gathered a wide variety of data measuring output and input (workload) from CDER's review divisions over the 2004–12 period. Our analysis focuses on 12 of the review divisions with the most consistent therapeutic focus and extensive activity over our study period. The divisions we examined collectively accounted for 184 new drugs or biologics and 80 percent of all new CDER-approved drugs over the period. This paper is one of the first to examine and compare the agency's performance at the divisional level.

Our study finds, notably, considerable variation among the divisions. In fact, the median time for approval at the slowest division is three times as long as the approval time at the fastest. The slowest, the Neurology division, took nearly 600 days to approve a drug, and the two fastest units, Oncology and Anti-Viral, took under 200 days.

An examination of numerous variables suggests that the performance of the leading divisions cannot be explained by a lower workload, differences in the type and complexity of the drugs under review, or a diminution in safety. Indeed, Oncology and Anti-Viral had a relatively higher workload than other units while the divisions that appeared to be below-average performers (Cardiovascular/Renal, Neurology, and Psychiatry) had a lower workload.

The findings have broad implications for public health in the United States and for the nation's economic growth.

The Oncology division is about 60 percent faster on average in its approval process than the other divisions taken as a whole. If the other divisions could cut that gap in half, the average development cost of a new drug would drop by an estimated 4.6 percent, or $46 million. With an average of 19 non-oncology drugs winning approval each year, the total savings on the development front would come to $874 million. Over time, the reductions in the cost of development would likely spur more new drugs coming to market.

But these savings, as big as they are, would be dwarfed by the potential gains in life expectancy resulting from a faster approval process. A conservative estimate of the value of each additional year of life expectancy in the United States is $150,000, which translates to some $45 trillion for the population as a whole. From 2000 to 2011, life expectancy increased by 0.182 years annually. Assuming that half that increase is due to new pharmaceuticals, the value of the increase in life expectancy created by the drugs is about $4 trillion a year.

That astonishing number is potentially in play if the overall productivity level at the FDA were to get more in line with those of its fastest divisions. If, for example, one generation of new drugs could be introduced just one year faster, the increase in life expectancy would be worth $4 trillion.

More study is needed to identify the best practices at work in the most productive divisions and to apply them throughout the FDA. Such an examination would help to reverse the decline in the number of new drug approvals in recent years, a period also marked by increasing drug development costs. Improvements in the FDA's bureaucratic structure and procedures are especially important now because of the accelerating pace of technological change.

Advances in research techniques and computing power suggest great opportunities for treatment breakthroughs, particularly in the development of so-called personalized medicine, through which treatments can be customized based on each patient's unique genetic composition. But the progress of personalized medicine depends in large part on a reorganization and reconceptualization of the FDA. The concept of "safe and effective" itself needs to be redefined—technology is shifting the goal from producing drugs deemed "safe and effective" for all Americans to a more focused and fluid certification involving much smaller patient populations.

To help reposition the agency to meet these new demands, the authors suggest a number of changes that the FDA and Congress should consider to improve efficiency, transparency, and consistency at the divisional level.


ABOUT THE AUTHORS

JOSEPH A. DIMASI is Director of Economic Analysis at the Tufts Center for the Study of Drug Development (CSDD), an independent, nonprofit multidisciplinary research organization affiliated with Tufts University committed to the exploration of scientific, economic, legal, and public policy issues related to pharmaceutical and biotechnology research, development, and regulation throughout the world. He serves on the editorial board of Therapeutic Innovation & Regulatory Science, and has served on the editorial boards of Drug Information Journal, Journal of Research in Pharmaceutical Economics, and Journal of Pharmaceutical Finance, Economics & Policy. DiMasi, an internationally recognized expert on the economics of the pharmaceutical industry, has published in a wide variety of economic, medical, and scientific journals and has presented his research at numerous professional and industry conferences. He testified before the U.S. Congress in hearings leading up to the FDA Modernization Act of 1997 and the reauthorization of the Prescription Drug User Fee Act. DiMasi is a board member of the Manhattan Institute's FDA Project.

DiMasi's research interests include: the R&D cost of new drug development; clinical success and phase attrition rates; development and regulatory approval times; the role that pharmacoeconomic evaluations have played in the R&D process; pricing and profitability in the pharmaceutical industry; innovation incentives for pharmaceutical R&D; and changes in the structure and performance of the pharmaceutical and biotechnology industries.

DiMasi holds a PhD in economics from Boston College.

CHRISTOPHER-PAUL MILNE joined the Tufts Center for the Study of Drug Development in 1998 as a senior research fellow and is currently its Director of Research, as well as research assistant professor at Tufts University Medical School. He serves on the editorial boards of Therapeutic Innovation & Regulatory Science and Pharma Focus Asia. Milne has published more than 60 book chapters and journal articles.

Milne's research interests include: academic and industry collaborations; disease, demographic, and market access factors in the emerging markets; incentive programs for pediatric studies, orphan products, neglected diseases, and medical countermeasures (MCMs); and tracking the progress of new regulatory and research initiatives such as regulatory science, comparative effectiveness research, translational medicine, and personalized medicine.

Milne holds a BA from Fordham University, an MPH from Johns Hopkins University, and doctoral degrees in veterinary medicine and law.

ALEX TABARROK is the Bartley J. Madden Chair in Economics at the Mercatus Center at George Mason University. He is coauthor, with Tyler Cowen, of the popular economics blog MarginalRevolution and cofounder of the online educational platform Marginal Revolution University. Tabarrok is the author of the recent e-book Launching the Innovation Renaissance (TED Books) and, with Tyler Cowen, Modern Principles of Economics, a leading textbook. Tabarrok, along with Daniel Klein, is author of FDAReview.org.

Tabarrok's recent research interests include: the FDA; patent system reform; the effectiveness of bounty hunters compared with that of the police; how judicial elections bias judges; and how local poverty rates affect trial decisions by juries. His other research examines methods to increase the supply of human organs for transplant, the regulation of pharmaceuticals, and voting systems. Tabarrok is editor of Entrepreneurial Economics: Bright Ideas from the Dismal Science; The Voluntary City: Choice, Community, and Civil Society; and Changing the Guard: Private Prisons and the Control of Crime. He has published in The Journal of Law & Economics, Public Choice, Economic Inquiry, Journal of Health Economics, Journal of Theoretical Politics, American Law and Economics Review, and Kyklos, among others. Popular articles by Tabarrok have appeared in the New York Times, the Wall Street Journal, and many other magazines and newspapers.

Tabarrok holds a BA from the University of Victoria (Canada) and a PhD from George Mason University.


INTRODUCTION

New drugs save lives. Each generation of pharmaceuticals is better overall than the previous one, resulting in increased life expectancy and quality of life. Yet the number of new drugs approved in the United States has declined in recent years. Scientific and economic developments influence that number, but the most controllable factor is Food and Drug Administration (FDA) policy.

The FDA is the primary regulator of new drugs and medical devices. The agency sets the standards and chooses which new drugs and devices are permitted to be sold in the United States. Moreover, because the U.S. market is so large, the standards and choices of the FDA influence worldwide investment in pharmaceutical research and development.

Evaluating the risks and rewards of new medicines is fraught with value judgments. As a result, the FDA is accused of being too quick and careless but also too slow and cautious. The risk/reward trade-off is of less concern to us in this paper, however, than a second question: does the FDA exercise its regulatory powers efficiently? That is, given the resources and regulatory tools at its disposal, is the agency maximizing reward for a given level of risk? A high-performing FDA should exercise its core responsibilities with a high degree of predictability, transparency, efficiency, and consistency.

Although it is difficult to evaluate the efficiency of the FDA ab initio, we can pursue an answer by comparing the agency's performance across its divisions. We collected statistics by FDA division and compared divisions on measures of output such as the speed of approval of new drug applications and whether the goals of the 1992 Prescription Drug User Fee Act (PDUFA) were met. We combined measures of output with measures of workload to compute productivity by division.

We found large differences in productivity across the divisions, each of which reviews drug applications for specific therapeutic area(s). For example, the Cardiovascular and Renal division took nearly four times as long on average to approve a drug (nearly 800 days) as the Oncology division did (about 200 days). Average times can be significantly influenced by a handful of unusual approval applications. But dramatic differences persisted when the approval times were examined on a median basis. In that case, the Neurology division took the most time (nearly 600 days), almost three times as long as the approval period for the Oncology and Anti-Viral divisions, both of which clocked in at under 200 days.

These differences are suggestive of big gaps in productivity, but a number of other factors could be at work to explain the wide disparity in timing. Speedier approvals might depend on one division having fewer problems with its applications, for instance, or more resources than another. But even when those factors, along with safety considerations, were taken into account, the reason for the gaps still appeared to be varying levels of productivity—that is, faster divisions used their time and resources in a more efficient and effective way than slower divisions did.

Our goal in rating divisions is neither to praise nor to excoriate particular divisions but to suggest useful avenues for further investigation. If some divisions are, in fact, more productive than others, more study is warranted to determine the reasons for the gaps and to suggest reforms. If best practices were spread across all FDA divisions, total productivity would increase.

Our results suggest that the potential for improvements is large. The best FDA divisions approve new drugs in half the time that the average division takes, and they do so without greater resources, reduced complexity of task, or reduction in safety. If the average division were to move halfway toward the best performers, we calculate that total drug development costs would fall by $874 million annually, spurring the search for more drugs to develop. But as big as those benefits would be, they would be dwarfed by the benefits accruing to patients. For patients, the speedier delivery of more effective drugs would increase life expectancy.

Our division ratings will be periodically updated in order to track and encourage growth in FDA productivity.


IMPROVING THE FDA

The potential gains from a more efficient FDA are very large, so it is important to note that the evidence indicates that change is possible. The agency's performance has certainly shifted in the past—in response to changes in public policy, funding, and staffing—becoming, at turns, more or less efficient (or cautious).

In one of the earliest studies of the agency, Peltzman (1973) examined the effects of the 1962 Kefauver-Harris Amendments to the Food, Drug, and Cosmetic Act of 1938. Best known for adding a proof-of-efficacy requirement, the amendments also significantly enhanced the FDA's powers. Peltzman found that the amendments significantly reduced pharmaceutical innovation—as measured by the number of new drugs, which fell by more than 50 percent after 1962. He found little evidence to suggest that the decline was due to a decrease in the proportion of inefficacious drugs.[1] In addition to drug loss, the time it took for new drugs to reach the market increased dramatically after 1962, suggesting that drug lag was also a significant burden on innovation.[2]

Although Peltzman concluded that policy changes during the 1960s were harmful, his study and similar ones suggest that policy changes may be capable of facilitating innovation and productivity. We have witnessed just such impacts—most notably, with the 1992 enactment of PDUFA.

Prior to 1992, the FDA typically took two and a half years to review a New Drug Application (NDA) and sometimes up to eight years. Often, the cause of the delay was not the difficulty of the application but merely a backlog. Applications would sit unexamined for months or even years. The FDA concluded that the process of approval could speed up if it had better equipment and more workers to review applications. Congress was, however, unwilling to increase FDA appropriations.

FDA Drug Review Process

The last major stage of the pre-approval process is the submission of a New Drug Application (NDA) or Biologics License Application (BLA), requesting approval from the FDA to market a new drug in the United States. An NDA (the BLA process is essentially the same) includes data on all animal and human studies done on the drug, as well as manufacturing and quality data. When an NDA is submitted to the agency, the FDA has 60 days to decide whether to file it so that it can be reviewed. The FDA can refuse to file an application that is incomplete (e.g., if one or more required studies are missing).

According to the FDA, the goals of the NDA are to provide enough information to permit agency reviewers to reach the following key decisions:

  • Whether the drug is safe and effective in its proposed use(s) and whether the benefits of the drug outweigh the risks

  • Whether the drug's proposed labeling (package insert) is appropriate and what it should contain

  • Whether the methods used in manufacturing the drug and the controls used to maintain the drug's quality are adequate to preserve the drug's identity, strength, quality, and purity

Once an NDA is submitted, an FDA review team—composed of physicians, chemists, statisticians, microbiologists, pharmacologists, toxicologists, and other experts—conducts separate evaluations. The review team analyzes study results and looks for possible issues with the application, such as weaknesses of the study design or analyses. Reviewers determine whether they agree with the sponsor's results and conclusions, or whether additional information is required for the review team to make a decision. Each reviewer prepares a written evaluation containing conclusions and recommendations about the application. These evaluations are then considered by team leaders, division directors, and office directors, depending on the type of application. Sometimes the FDA calls on advisory committees (now by default, if the product contains a new active ingredient), consisting of external, unbiased experts, to provide the FDA with independent opinions and recommendations on applications to market new drugs and on FDA policies. Whether an advisory committee is needed depends on many factors—for example, if the drug is the first in its class or the first for a given indication; or if specific safety issues are associated with that drug or that class of drugs.

CDER is expected to review and act on at least 90 percent of NDAs for standard drugs no later than ten months—and for priority drugs, no later than six months—after the 60-day filing period has expired. During the review period, FDA-sponsor meetings are held at Mid-Cycle (three–five months) and at Wrap-Up (one–two months) before the date of the first action (i.e., NDA approval, NDA rejection, or a Complete Response Letter indicating additional actions required by the sponsor).
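
For illustration only, the short sketch below turns the timeline just described into a calculation of the nominal review goal date. The dates, the function name, and the use of the python-dateutil library are assumptions for the sketch, not part of the FDA's own tooling.

    # Illustrative sketch of the review clock described above: a 60-day filing
    # period followed by a ten-month (standard) or six-month (priority) review
    # goal. Dates are hypothetical; python-dateutil is assumed to be available.
    from datetime import date
    from dateutil.relativedelta import relativedelta

    def pdufa_goal_date(submission: date, priority: bool = False) -> date:
        filing_period_ends = submission + relativedelta(days=60)
        review_months = 6 if priority else 10
        return filing_period_ends + relativedelta(months=review_months)

    print(pdufa_goal_date(date(2012, 3, 1)))         # standard: ten-month goal
    print(pdufa_goal_date(date(2012, 3, 1), True))   # priority: six-month goal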

Thus was born the PDUFA era, establishing renewable five-year periods of mandatory fees submitted by pharmaceutical companies along with their applications (as well as product and establishment fees). With those fees, the FDA hired hundreds of new employees. As a result of the legislation, the average processing time fell by a full year, to 18 months. Because of this evident success, PDUFA has been renewed by Congress every five years since 1992. Most important, the bulk of the evidence indicates that faster approval times have resulted from greater resources and improved efficiency and not from reductions in safety.[3]

Unfortunately, the FDA faces asymmetrical incentives. Damage can occur when bad drugs are approved quickly or when good drugs are approved slowly. However, the cost to the FDA of these two outcomes is not the same. When bad drugs are approved quickly, the FDA is scrutinized and criticized, victims are identified, and their graves are marked. In contrast, when good drugs are approved slowly, the victims are unknown (Madden 2010). We know that some people who died would have lived had new drugs been available sooner, but we don't know which people. As a result, premature deaths from drug lag and drug loss create less opposition than deaths from early approval, and the FDA's natural stance is one of deadly caution.

In 2004, the FDA was charged with creating "what may be the single greatest drug safety catastrophe in the history of this country or the history of the world."[4] At issue was the safety of Vioxx, a highly popular drug prescribed for arthritis and pain relief, until it was withdrawn in September 2004 after a study found that patients using the drug for long periods had a higher rate of heart attacks and strokes than a control group. The Vioxx scare returned the FDA to its traditional asymmetry.

Between 1993 and 2004—the post-PDUFA, pre-Vioxx era—the FDA permitted an annual average of 33.4 new molecular entities (NMEs) and new therapeutically significant biologics.[5] In the post-Vioxx era (2005–13), however, the FDA has permitted only 25.3 NMEs and therapeutically significant biologics per year, a 24 percent decline, as shown in Figure 1.

Given the value of new drugs, the decline in the number of approvals in the post-Vioxx era is of tremendous concern. The decline is superimposed on a longer-run trend of increasing drug development costs. It costs more than $1 billion to develop and bring a new drug to market today, with some estimates exceeding $2 billion.[6] Drug development costs have been rising at a rate well above that of inflation for several decades. Twelve estimates of the cost of new drug development from different points in time are illustrated in Figure 2.

The decline of new drugs is especially troubling because it has come at a time when advances in research techniques and computing power suggest great opportunities for treatment breakthroughs. From 2001 to 2011, the cost of genetic sequencing, for example, fell by a factor of 10,000: from $100 million per genome in 2001 to just $10,000 in 2011, with a figure below $1,000 well in sight. Cost reductions in genetic sequencing and advances in cognate techniques—such as on-the-fly analysis of RNA transcripts, proteins, antibodies, and metabolites—suggest that rapid advances toward personalized medicine are possible.[7]

But personalized medicine—the tailoring of medical treatment and delivery of health-care based on individual patient characteristics—will require more than scientific breakthroughs. It will also require a reorganization and reconceptualization of the FDA.

In the past, most drugs were approved without a fundamental understanding of their mechanisms of action. In the face of mass ignorance, the best one could do was throw a drug against a large sample of patients and count noses. Did the drug benefit more people than it failed to benefit? Standard practice has relied on the evidence of the crowds to make treatment decisions that are beneficial on average, even if that average hides tremendous variability in benefit and cost. Mass ignorance produces mass medicine, which ignores the variability of benefits as well as risks for individuals. For example, Vioxx was withdrawn from the market, but Eric Topol, who provided one of the earliest and strongest warnings about its dangers,[8] argues that genetic testing could identify and exclude from the patient population the minority of people at risk from serious side effects, and thus that Vioxx would be a useful drug to have on the market.[9]

It is unclear, however, whether the FDA has the tools and resources or the mindset to adapt to the new technologies and approaches. The FDA is still focused on permitting only those drugs that it deems "safe and effective," despite the fact that these terms can be defined for a large population only by doing "violence to heterogeneity." Safe and effective for the American population as a whole is no longer the relevant paradigm; instead, the standard must shift to safe and effective when physicians are targeting treatments based on deep, contextual knowledge of patients and diseases that is continually evolving. In a world of molecular medicine and mass heterogeneity, the FDA's role will change from issuing a single yes/no ruling that fits no one perfectly to certifying biochemical pathways and prescribing modalities that evolve with rapid feedback and scientific advances.[10]

It is especially important to revisit bureaucratic structures and procedures in a time of rapid technological change. The FDA is an exceedingly large, complex, and constantly evolving organization, which regulates industries that account for nearly 25 percent of U.S. consumer spending. Resource allocation in large organizations becomes increasingly inefficient when demands change but resources continue to be allocated according to history and bureaucracy. Old methods are honed to solve old problems but often falter when faced with new problems and fail to deliver when offered new opportunities. Some divisions and program areas are overloaded, while others languish, and the organization becomes unbalanced. Conflicting processes and tracking systems emerge across divisions, best practices fail to propagate, and training becomes inadequate. New procedures and organization are needed to improve efficiency, transparency, and consistency.


INCONSISTENCY ACROSS FDA DIVISIONS

In discussing differences in performance within the FDA, we focus on the agency's Center for Drug Evaluation and Research (CDER). Figure 3 shows the CDER divisions regulating pharmaceuticals at the time of the study.

Over the last decade, researchers from both the private and public sectors have highlighted several common criticisms of the FDA, chief among them unpredictability and inconsistency across drug review divisions. A 2003 report from the U.S. Office of the Inspector General of the Department of Health and Human Services on the FDA's review process for New Drug Applications, which included a survey of pharmaceutical companies as well as FDA review staff, noted:

  • "Seventy-five percent of sponsors responding to our survey indicated that FDA reviews are inconsistent across the 15 review divisions within CDER."

  • "One sponsor commented that these inconsistencies may prompt some sponsors to shop for review divisions when a drug could be classified under different therapeutic review divisions."

  • The FDA made few efforts to identify and eliminate inefficiencies in the review process.

  • "Forty-eight percent of FDA survey respondents indicated that FDA was not doing enough quality improvement activities."

  • Recommendations were made to evaluate the adequacy of current staffing levels and workload distribution across the review divisions.[12]

A few years later, a report on drug safety by the Institute of Medicine (IOM), a branch of the U.S. National Academies, was summarized by several prominent medical researchers in a New England Journal of Medicine editorial that highlighted a particular set of challenges: "Contributing to an urgent need for cultural change in the FDA are a suboptimal work environment, a lack of consistency among CDER review divisions, polarization between offices responsible for the pre-marketing review and post-marketing surveillance, CDER management's disregard and disrespect for scientific disagreement, and politicization and a lack of stability in the office of the FDA commissioner."[13]

Most recently, in 2012 testimony before Congress on the Food and Drug Administration Safety and Innovation Act (FDASIA), a former FDA commissioner, Andrew von Eschenbach, pointed to several crucial factors affecting the investment climate for new drugs: "Last year the National Venture Capital Association released a report that underscores America's risk of losing its standing as the world leader in medical innovation. Their survey clearly showed that the FDA's regulatory challenges, the lack of regulatory certainty, the day-to-day unpredictability, and unnecessary delays are stifling investment in the development of lifesaving drugs and devices."[14]

A number of studies in recent decades have tried to explain FDA inconsistencies in general—that is, why some drugs are approved more quickly and with fewer complications than others. Explanations have included approving first-in-class and more important drugs faster (Carpenter 2010, Kaitin et al. 1991, Dranove and Meltzer 1994, Downing et al. 2012), patient pressure (Carpenter 2004), and congressional deadlines (Carpenter et al. 2012). Few previous studies, however, have focused on the inconsistency of approvals by FDA division, the closest being Milne and Kaitin (2012), which examined many of the same factors that we consider here but over a shorter time frame and without an overall metric on the relative efficiency of reviewing divisions.


MEASURING THE EFFICIENCY OF THE FDA BY DIVISION

We gathered a wide variety of data measuring both the outputs and inputs (workload) by FDA review division over the period 2004–12. Our analysis covers 12 FDA review divisions and 184 new drugs, which amounts to 80 percent of all new drugs approved by CDER from 2004 to 2012. Organizational changes in CDER have occurred over our time frame, so we combined data for certain divisions and assigned compounds according to the current divisional structure (for details, see Appendix 1). We excluded from the analysis drugs approved in older divisional structures that would not be reviewed in the current structure, given the indications for which they were approved. We also excluded some current divisions that had a relatively small number of new drug approvals over the period analyzed.

We take approval phase time as one of the primary review division outputs. Differences across FDA divisions can be dramatic. Figure 4 shows the mean time, from submission of a New Drug Application (NDA) or a Biologics License Application (BLA), to approval by a review division. The time differentials are striking, with drugs reviewed in the Neurology division taking three times longer on average than drugs reviewed in Oncology, while drugs reviewed in the Cardiovascular and Renal division took nearly four times longer than those reviewed in Oncology. Longer approval phases mean that patients must wait longer to receive safe and effective new therapies and that developers have shorter periods to recoup their R&D investments.

The mean time to approval, however, can be substantially influenced by a handful of outliers. Thus, in Figure 5 we look at the median time to approval (with the divisions sorted in the same order). As expected, median time varies less than mean time, but the differences among the divisions are still large. The slowest divisions (Neurology and Cardiovascular/Renal) have median times to approval that are roughly two and a half to three times as long as the fastest divisions (Oncology and Anti-Viral). With respect to medians, the Cardiovascular and Renal division performs better than Neurology, while the reverse is true for means. That indicates that high outliers are more of an issue for the Cardiovascular and Renal division than for Neurology.


PRODUCTIVITY METRICS

The differences in mean and median approval times by division are suggestive but do not necessarily tell us that one division is more productive than another or that there are opportunities for increased productivity. It could be the case, for example, that the faster divisions have fewer problems or more resources. To measure productivity, we need to control for inputs as well as outputs. We shall do this further below.

A hint that there are important differences in productivity comes from examining one measure of workload: the number of investigational new drug applications (INDs) per staffer. Oncology, the fastest division by mean and median, has the highest IND per staffer workload (92 percent above average)—that is, Oncology has fewer staffers relative to its workload than other divisions, and yet it works more quickly, suggesting significant differences in productivity. As noted, INDs per staffer is only one measure of workload, so we turn now to a more detailed investigation.

We gathered data to measure a division's workload and its output.[15] In particular, we gathered annual data on the number of INDs and the number of NDAs, and, using data on staffing levels (2011), we constructed metrics for INDs per staffer and NDAs per staffer for each division.[16] We recognize that some INDs and NDAs are more difficult to process and evaluate than others; drug evaluation is highly complex and multi-dimensional. Thus, we also gathered data on a wide variety of drug-specific variables meant to control for workload complexity and difficulty. In our first pass at the data, these included molecule size, orphan drug status, black box warnings, and whether the drug was approved through a special program such as accelerated approval or fast-track designation.[17] We included accelerated approval or fast-track designation as a workload factor, for example, because speeding up the review process will likely require greater resources. In particular, the performance goals for review time under PDUFA are more stringent for priority-rated drugs (six months for a review decision for a drug with a priority rating for at least 90 percent of applications, versus ten months for drugs with standard review ratings).[18]

A number of the additional variables had a low frequency of occurrence or were highly correlated with other variables. Thus, we excluded them and settled on the following final set of workload factors: INDs per staffer; NDAs per staffer; whether the compound received a priority review rating; whether the compound was designated for a special program (accelerated approval or fast track); whether an advisory committee was involved; whether a clinical hold was placed on development of the drug; whether a black box warning was included on the product label; the number of post-marketing requirements; and the clinical development time. Note that some of these variables may also be influenced by FDA efficiency and standards, particularly the clinical development time,[19] but we conservatively included this variable as a workload factor, as a proxy for differences in scientific complexity by therapeutic class.

For each of our variables, we measured whether the division was significantly above or below the all-division average for that variable. The cutoffs for "above" and "below" were the upper and lower bounds of 95 percent confidence interval estimates: for proportions in the case of qualitative variables and for means in the case of quantitative variables. Tables 1, 2, and 3 present the basic data, with above-average cells for a given variable shaded green, below-average values shaded red, and average values uncolored.
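
As a minimal sketch of this classification rule, the code below scores a division as above (+1), below (–1), or at (0) the cross-division average, using a 95 percent confidence interval as the cutoff. It assumes a t-interval around the mean for quantitative variables and a normal approximation for proportions; the function names and data are illustrative rather than a reconstruction of the authors' exact computation.

    # Illustrative scoring of a single variable: +1 if the division is above the
    # upper 95% confidence bound around the cross-division average, -1 if below
    # the lower bound, 0 otherwise. Assumes SciPy is available; data are made up.
    import math
    from statistics import mean, stdev
    from scipy import stats

    def classify_mean(division_value, all_division_values, alpha=0.05):
        """t-interval cutoff for a quantitative variable (e.g., approval time)."""
        m = mean(all_division_values)
        n = len(all_division_values)
        half_width = stats.t.ppf(1 - alpha / 2, n - 1) * stdev(all_division_values) / math.sqrt(n)
        return 1 if division_value > m + half_width else -1 if division_value < m - half_width else 0

    def classify_proportion(division_p, pooled_p, n_drugs, alpha=0.05):
        """Normal-approximation cutoff for a qualitative variable (e.g., share with priority review)."""
        half_width = stats.norm.ppf(1 - alpha / 2) * math.sqrt(pooled_p * (1 - pooled_p) / n_drugs)
        return 1 if division_p > pooled_p + half_width else -1 if division_p < pooled_p - half_width else 0

    # Hypothetical example: one division's mean approval time vs. all 12 divisions.
    times = [200, 780, 590, 420, 380, 350, 460, 510, 330, 400, 440, 470]
    print(classify_mean(200, times))   # -> -1 (significantly below the cross-division mean)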

Table 1 shows that the division with the greatest workload as measured by IND per staffer was Oncology (with INDs per staffer of 1.622, 92 percent above the average of 0.845), and the division with the greatest workload as measured by NDAs per staffer was Anti-Viral (54 percent above average). Recall that Oncology and Anti-Viral were the best-performing divisions in terms of speed of approval.

Oncology and Anti-Viral also have an above-average number of fast-track or accelerated approval drugs. The fact that speedier divisions have more drugs in that category suggests that the special programs do change behavior. It should be kept in mind, however, that these programs will typically require more work from the review division. The fast-track program, for example, will often require increased scientific interaction with the sponsor and a more complex process involving a rolling review of the application, in which parts of the filing are assessed as they are completed, instead of a bulk submission reviewed en masse. Accelerated approval typically makes use of surrogate endpoints to measure clinical benefit, which later must be confirmed in post-marketing trials. This aspect may shift some of the evidentiary burden from pre-market to post-market, but, in fact, a recent examination of nearly 200 NMEs and new biologics (2005–12) showed that nearly half the approvals were based on pivotal trials that had surrogate endpoints as their primary outcome, of which only about 10 percent were accelerated approvals.[20] Accelerated approval likely requires additional work up front by these divisions, work that has benefits later in the process, as measured by a greater likelihood of first-cycle approvals and shorter approval times (see Table 3).

In contrast, Neurology and Psychiatry, among the slowest divisions, have below-average IND and NDA workloads and below-average use of special programs.

Table 2 looks at a variety of other workload factors. Interestingly, there is little to no indication that the exemplary divisions, in terms of time to approval, have lower application workloads or less complexity to deal with, or that they skimp on factors related to safety. There is no indication, for example, that Oncology and Anti-Viral are less likely to impose black box warnings than other divisions (if anything, they are a bit above average). Oncology uses fewer advisory committee meetings than average, but Anti-Viral uses more, suggesting that this is not a determinative factor. Both leading divisions use post-marketing requirements (PMRs) (in place since 2008) at above-average rates, but so do Neurology (a laggard in time to approval) and Metabolism/Endocrinology (middle of the pack for time to approval), which suggests that some differences in workload factors are associated with the nature of the therapeutic area (e.g., a higher rate of PMRs in Oncology due to accelerated approvals, or in Neurology due to the need for more post-approval pediatric studies).

We used clinical development times by review division as an indicator of scientific complexity. The average length of the U.S. clinical development phase across the 12 divisions was 84.2 months. The lengthiest average clinical development phases were for Neurology (37 percent above average) and Oncology (20 percent above average). The shortest average clinical development phases were for drugs reviewed by Anti-Infective (30 percent below average) and Anti-Viral (27 percent below average). Again, there is little indication, judging by this measure, that exemplary divisions necessarily review less complex or more complex compounds.

Overall, Tables 1 and 2 suggest that the performance of the leading divisions cannot be explained by a lower workload (in fact, their workload as measured by NDAs and INDs per staffer is higher) or by other factors that might be associated with lower workloads, drugs that are more difficult to review, or less safety.

The output variables were U.S. approval phase time,[21] the number of review cycles, and whether the PDUFA performance goal was met for the initial review.[22] Consistent with their performance on mean approval phase time, the leading divisions of Oncology and Anti-Viral also had fewer than average review cycles and a better record of meeting PDUFA goals, while the lagging divisions of Cardiovascular/Renal and Neurology had more review cycles and more often failed to meet PDUFA goals.

To get a better overall sense of division performance and a net ranking, we constructed a relatively straightforward scoring algorithm, sketched below. For each variable, we assigned a –1 if the division was below average, 0 if average, and +1 if above average (with, as noted earlier, the cutoffs taken to be the lower and upper 95 percent confidence interval estimates for proportions in the case of qualitative variables and for means in the case of quantitative variables). We then summed the unweighted scores for the set of workload factors and for the set of output factors, divided each sum by the number of factors in the set, and multiplied by 100. This yields workload and output scores that range from –100 to +100. Negative values may be viewed as below-average scores for the given metric and positive values as above-average scores. We then combined the two aggregated scores for each division into an overall relative performance metric by adding the workload and output scores. Consequently, a higher workload score for a given output score yields a higher relative performance value, and a higher output score for a given workload score likewise yields a higher relative performance value. Finally, we divided the sum of the workload and output scores by two to put the relative performance metric on the same –100 to +100 scale.
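
The sketch below restates that scoring arithmetic. The per-variable +1/0/–1 values are placeholders, and the sketch assumes, as the ranking implies, that each score is oriented so that +1 is the favorable direction for the division.

    # Illustrative aggregation of per-variable scores (+1/0/-1) into workload,
    # output, and overall performance scores on a -100 to +100 scale, following
    # the description in the text. The scores below are placeholders, and each
    # score is assumed to be oriented so that +1 is favorable to the division.
    from statistics import mean

    def factor_score(per_variable_scores):
        """Average the +1/0/-1 classifications for a set of factors, scaled by 100."""
        return 100 * mean(per_variable_scores)

    def relative_performance(workload_scores, output_scores):
        workload = factor_score(workload_scores)   # e.g., INDs/staffer, NDAs/staffer, ...
        output = factor_score(output_scores)       # e.g., approval time, review cycles, PDUFA goals
        return (workload + output) / 2             # overall metric, also on -100 to +100

    # Hypothetical division: above average on four of nine workload factors and
    # on two of three output factors.
    print(relative_performance([1, 1, 0, 1, 0, 0, 1, 0, 0], [1, 1, 0]))   # -> about 55.6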

Figure 6 shows the relative performance scores by division, where the performance metric is an aggregate of the workload and output scores. The Anti-Viral and Oncology divisions score substantially better than the other divisions, by this metric. These two divisions had both relatively high output scores and high workload scores. The only other division with an above-average performance score was Hematology, which had an average output score but an above-average workload score. This result suggests that the division's output could improve to above-average if it had more resources. The worst-performing divisions were Neurology, Cardiovascular/Renal, and Psychiatry. These divisions had not only relatively low output but also relatively low workloads.

The relative rankings for the divisions are very robust to changes in how we calculate the scores. Removing one workload or output factor at a time, for example, does not alter the set of divisions at the top and bottom of this scale. There are minor differences in the rankings in between. Similarly, there does not appear to be a marked trend in the data over the period analyzed. The rankings for the more recent 2008–12 period are nearly identical to the rankings for the entire period studied. For the more recent period, Anti-Viral, Oncology, and Hematology maintain their rankings at the top, and Pulmonary/Allergy/Rheumatology, Psychiatry, Neurology, and Cardiovascular/Renal constitute the bottom four (Pulmonary/Allergy/Rheumatology and Psychiatry switched ranks, as did Cardiovascular/Renal and Neurology).


DISCUSSION

Our results suggest that two review divisions are set above the rest in terms of overall performance (Anti-Viral and Oncology), while three appear to be subpar performers (Neurology, Cardiovascular/Renal, and Psychiatry). For example, Oncology and Anti-Viral combined had nearly triple the proportion of priority-rated approvals (83.6 percent) compared with the other divisions taken as a whole (30.5 percent), but without any decline in the percentage of PDUFA goals met. This indicates that they were able to maximize their use of time, personnel, and resources to meet deadlines as well as the other divisions did, despite a higher workload.


We can ascertain some measure of the impact of scientific complexity and evidentiary burden by looking at clinical development time. What perhaps is most striking here is that there is not more of a difference in high-performing divisions versus low-performing divisions. If we compare the overall median development time for the six top-scoring divisions by overall performance in Figure 6 (Anti-Viral, Oncology, Hematology, Anti-Infective, Gastroenterology, and Metabolism/Endocrinology) to that same parameter for the remaining six divisions (Neurology, Cardiovascular/Renal, Psychiatry, Pulmonary/Allergy/Rheumatology, Anesthesia/Analgesia/Addiction, and Reproductive/Urologic), it amounts to a difference of just 6 percent (73.9 and 70.0 months, respectively). In terms of rank, with Neurology at 1 for clinical development length and Anti-Infective at 12, the aggregate rank of the top six divisions is 7.5 and that of the bottom six is 5.5. All in all, if clinical development time is a proxy for scientific complexity, the difference in performance cannot be explained to any appreciable degree by this factor alone.

A simple back-of-the-envelope calculation establishes the importance of, and potential for, increased FDA efficiency. The Oncology division is approximately 60 percent faster on average at getting new drugs through the regulatory approval period than the other divisions taken as a whole. If the other divisions could move just halfway toward Oncology—a conservative assumption—they would achieve a 30 percent improvement in speed with no reduction in quality.

What would this be worth to U.S. firms and consumers? Inclusive of failure and time costs, the average new drug costs at least $1 billion to get to market.[23] DiMasi (2002) estimates that a 30 percent reduction in regulatory review time would reduce development costs by 4.6 percent, or $46 million per drug. With 19 new non-oncology drugs per year on average (out of a total of 25), the reduction in review time would translate to total annual savings of $874 million in development costs.
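
The arithmetic behind these figures is simple enough to reproduce directly; the sketch below uses only the numbers cited in the text (a $1 billion average cost per drug, the 4.6 percent reduction from DiMasi (2002), and 19 non-oncology approvals per year).

    # Reproducing the back-of-the-envelope savings estimate from the figures in the text.
    avg_cost_per_drug = 1_000_000_000      # at least $1 billion per new drug, inclusive of failures and time costs
    cost_reduction = 0.046                 # DiMasi (2002): a 30% cut in review time lowers development cost by 4.6%
    savings_per_drug = cost_reduction * avg_cost_per_drug        # $46 million per drug
    non_oncology_drugs_per_year = 19                             # of about 25 new drugs per year
    annual_savings = savings_per_drug * non_oncology_drugs_per_year
    print(f"${savings_per_drug / 1e6:.0f}M per drug, ${annual_savings / 1e6:.0f}M per year")
    # -> $46M per drug, $874M per year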

Our results indicate that such savings are possible without an increase in budget, but it seems clear that the magnitude of the savings would more than justify any necessary budgetary impact. Moreover, these savings do not include benefits to consumers, which would flow over time as firms responded to reduced development costs with more new drugs. Even small increases in the number of new drugs would add billions to the dividends from faster FDA review times.


THE HIGH VALUE OF INCREASED LIFE EXPECTANCY AND PHARMACEUTICAL INNOVATION

Indeed, research shows that new drug development produces enormous gains to society, in terms of increased longevity, that are not well understood outside the economics literature. Research also supports the idea that, even accounting for drug prices and drug profits, consumers capture the vast majority of the gains from access to new medicines.

At its most basic level, increased efficiency in reviewing new drugs will save lives. The average increase in life expectancy at birth between 1970 and 2000 was 7.37 years for men, to age 74.4, and 4.98 years for women, to age 79.7. Over that 30-year period, the worth of that increase to Americans has been about $3.2 trillion per year.[24]

The immense value of increases in life expectancy derives from two simple numbers: a substantial value of increased life expectancy per person and the large size of the U.S. population. A conservative estimate of the value of a life-year, for example, is $150,000.[25] For a population of 300 million, to use a round number, an additional year of life expectancy is worth $45 trillion. (The actual increase would be even higher, since the U.S. now has an estimated population of 317 million and there are also gains to billions of people elsewhere in the world.)

Life expectancy at birth increased by a little more than a year between 2000 and 2007 in the United States,[26] thus producing a benefit during this period of $45 trillion. To put this number in context, the value of U.S. goods and services produced between 2000 and 2007 was $109 trillion (in 2009 dollars). Thus the increase in life expectancy was worth 41 percent of the goods and services produced during this period. Put differently: if we measure total production appropriately, the U.S. economy produced $154 trillion of value between 2000 and 2007, with 30 percent derived from the production of life expectancy and 70 percent from the production of goods and services.
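
The arithmetic in the two preceding paragraphs can be restated compactly. The Python sketch below uses only the round numbers given in the text ($150,000 per life-year, a population of 300 million, roughly one added year of life expectancy, and $109 trillion of goods and services in 2009 dollars), so the shares it prints are illustrative approximations of the 41 percent figure and the roughly 30/70 split quoted above.

    # Value of the 2000-2007 gain in U.S. life expectancy, using the round
    # numbers quoted in the text.
    value_per_life_year = 150_000        # conservative value of a statistical life-year (USD)
    population = 300_000_000             # round U.S. population
    added_years = 1.0                    # approximate gain in life expectancy, 2000-2007

    longevity_value = value_per_life_year * population * added_years   # $45 trillion
    goods_and_services = 109e12          # U.S. output, 2000-2007, in 2009 dollars

    total_output = longevity_value + goods_and_services                # ~$154 trillion
    print(f"Longevity gain relative to goods and services: {longevity_value / goods_and_services:.0%}")  # ~41%
    print(f"Longevity share of total output:               {longevity_value / total_output:.0%}")        # ~29-30%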

A substantial fraction of the increase in life expectancy in recent decades has been due to better pharmaceuticals. A recent study by Frank Lichtenberg of Columbia University used variation in the number of new drugs prescribed to patients across 30 developing and high-income countries to estimate the effect of new pharmaceuticals on longevity. Countries that adopted new pharmaceuticals faster saw larger increases in life expectancy than countries that were slower. The study estimated that, from 2000 to 2009, new pharmaceuticals increased life expectancy by 1.27 years, or 73 percent of the actual increase in life expectancy at birth over that period.[27]

In a follow-up study, Lichtenberg examined the impact of new drugs by comparing different groups within the United States. It takes time for physicians to learn of new pharmaceuticals and to become comfortable with their side effects and prescription modalities. Thus, even within a single country at the same point in time, not all patients with the same disease and demographics receive the same pharmaceuticals. The life expectancy of elderly Americans increased by 0.6 years between 1996 and 2003. Lichtenberg examined variations in prescriptions across similarly situated elderly patients in the United States to estimate that 68 percent of this increase was due to pharmaceutical innovation.[28]
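
To make the implied magnitudes concrete, the short Python sketch below works backward from the headline percentages quoted above; the derived figures are illustrative inferences from those percentages, not numbers reported by the studies themselves.

    # Implied magnitudes behind the Lichtenberg estimates quoted above.
    # Cross-country study: 1.27 years attributed to new drugs, said to be
    # 73 percent of the total gain in life expectancy at birth, 2000-2009.
    implied_total_gain = 1.27 / 0.73
    print(f"Implied total gain, 2000-2009: {implied_total_gain:.2f} years")    # ~1.74 years

    # U.S. elderly study: 68 percent of a 0.6-year gain, 1996-2003.
    drug_attributable_gain = 0.68 * 0.6
    print(f"Gain attributed to new drugs:  {drug_attributable_gain:.2f} years")  # ~0.41 years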

These figures are plausible, given that in an average month, about half of all Americans take at least one prescription drug; among elderly Americans, the share taking a prescription drug in a given month is even higher, at nearly 90 percent.[29] Other researchers also find that pharmaceuticals have a large effect on life expectancy.[30]


THE GAINS FROM A FASTER FDA

The high value of increased life expectancy, along with the large fraction of the increase that can be attributed to new pharmaceuticals, explains the potentially huge payoff from boosting the efficiency of the FDA. In recent times (2000–2011), life expectancy increased by 0.182 years annually in the United States. Suppose that 50 percent, or 0.091 of a year, of this increase is due to new pharmaceuticals.[31] Using a value of a life-year of $150,000 and a U.S. population of 300 million, as discussed earlier, the gain in life expectancy created by new pharmaceuticals in a typical year is worth about $4 trillion.
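
As a check on that figure, the Python sketch below simply multiplies through the quantities cited in this paragraph; the 50 percent attribution share is, as noted, an assumption rather than a measured value.

    # Annual value of pharmaceutical-driven gains in U.S. life expectancy,
    # using the figures quoted in the text.
    annual_le_gain = 0.182               # average yearly gain in life expectancy, 2000-2011
    pharma_share = 0.50                  # assumed share attributable to new pharmaceuticals
    value_per_life_year = 150_000        # conservative value of a statistical life-year (USD)
    population = 300_000_000             # round U.S. population

    annual_value = annual_le_gain * pharma_share * value_per_life_year * population
    print(f"${annual_value / 1e12:.1f} trillion per year")   # ~$4.1 trillion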

That astonishing number is at stake in the debate over the FDA's efficiency in approving new drugs. If, for example, we could introduce just one generation of new drugs (say, 25 drugs) just one year faster, the payoff would be the $4 trillion of value to be found in the longer lives of U.S. citizens.[32]


RECOMMENDATIONS

Given the high stakes involved—for the drug companies in terms of savings and increased incentives to seek approval for new drugs, and for society as a whole—a number of changes should be considered by the FDA and by Congress to improve the overall performance of the agency's divisions. We recommend actions in the following areas:

Best Practices. We support further study to identify the policies and procedures that are working in high-performing divisions, with the goal of finding ways to apply them in low-performing divisions, thereby improving review speed and efficiency. The FDA may also wish to consider management controls from the private sector, including total quality-management approaches.

Congress may also wish to consider creating a regular update mechanism for the agency's commissioner to brief Congress annually, or biannually, on continual quality-improvement efforts. This briefing would be in addition to the five-year PDUFA review process. The new mechanism would encourage the agency to embrace a practice of continual quality review and to improve internal management controls between PDUFA reauthorizations.

Transparency. We encourage the FDA to expand its laudable transparency efforts, such as its recent self-analysis of approval delays and denials,[33] in order to address root causes of the actions (or inaction) that precipitated those outcomes.[34]

Special Designation Programs. We urge the expansion of special designation programs beyond the somewhat narrow confines of their current implementation, along the lines suggested by the 2012 report of the President's Council of Advisors on Science and Technology (PCAST): the FDA should expand the use of its existing authority, and engage the biomedical community in developing and evaluating specific clinical outcome predictors, to better address unmet medical needs for serious or life-threatening illnesses.[35]

Staffing and Resources. We recommend the establishment of a cadre of "shock troops" within the FDA that can be deployed to absorb fluctuations in workload. The agency has been constrained in its ability to shift internal staff resources to meet workload demands, as noted in a 2003 report by the U.S. Office of the Inspector General.[36] To be sure, the pharmaceutical industry might be reluctant to fund an expensive standing staff reserve without clear metrics for evaluating how such staff are being utilized (assuming that the funding would come from user fees rather than from congressional appropriations).

For that reason, it would also make sense to explore the extent to which other trusted intermediaries—such as the C-Path Institute, the Reagan-Udall Foundation, the National Institutes of Health, or academic programs—could be used to augment FDA review staff, particularly for novel or complex technologies that might otherwise fall outside the agency's existing tool kit and thus might be particularly difficult or time-consuming to assess.

Other programs could also be put into place to regularly test novel drug development and approval paradigms that, over time, could be incorporated into the divisions if they prove successful.


CONCLUSION

Our analysis has revealed large performance differences among the FDA's divisions. High-performing divisions are several-fold better on output measures than low-performing divisions, and they achieve this without commensurately greater resources, lesser task complexity, or reduced safety. Inconsistent performance across divisions is thus a strong indication of inefficiency, but also of opportunity. A careful comparison of the performance of the agency's drug review divisions suggests that agency performance can be dramatically improved at little cost to taxpayers.

Internal improvements in FDA efficiency could reduce research and development costs by nearly $1 billion annually. Reductions in research costs would, in turn, incentivize greater investments in research and development, generating new drugs that would improve patients' life expectancy and quality of life.

* * *


APPENDIX 1: CDER REORGANIZATIONS

The FDA's Center for Drug Evaluation and Research (CDER) restructured its review divisions in 2005, 2009, and 2011. In the mid-2000s, there were several significant functional and structural changes, most notably the transfer of review responsibility for therapeutic biologics from the Center for Biologics Evaluation and Research (CBER) to CDER; the creation of the Office of Drug Safety (now the Office of Surveillance and Epidemiology); and the creation of the Office of New Drugs, under which the review divisions are currently housed.

In 2005, the Division of Neuropharmacological Drug Products was split into the Division of Neurology Products and the Division of Psychiatry Products. The Division of Anesthetic, Critical Care, and Addiction Drug Products became the Division of Anesthesia, Analgesia, and Rheumatology Products. The Division of Pulmonary Products became the Division of Pulmonary and Allergy Products. The Division of Gastrointestinal and Coagulation Drug Products became the Division of Gastroenterology Products. The Division of Anti-Infective Products was renamed the Division of Anti-Infective and Ophthalmology Products. The Division of Special Pathogens and Immunologic Products was renamed the Division of Special Pathogens and Transplant Products. The Division of Medical Imaging and Radiopharmaceutical Drug Products became the Division of Medical Imaging and Hematology Products.

In 2009, the FDA made another set of changes to the structure of the review divisions: the Division of Anesthesia, Analgesia, and Rheumatology Products became the Division of Anesthesia, Analgesia, and Addiction Products; Rheumatology was reassigned to the Division of Pulmonary, Allergy, and Rheumatology Products. The Division of Medical Imaging and Hematology Products split into the Division of Medical Imaging Products (now in ODE IV with nonprescription drugs) and the Division of Hematology Products.

In 2011, the Division of Anti-Infective and Ophthalmology Products became the Division of Anti-Infective Products again; the Division of Special Pathogens and Transplant Products became the Division of Transplant and Ophthalmology Products; the Division of Gastroenterology Products became the Division of Gastroenterology and Inborn Error Products. The Division of Drug Oncology Products split into the Division of Oncology Products 1 and the Division of Oncology Products 2 (treated as one division for our analyses).


APPENDIX 2: DATA SOURCES

The following information was drawn from an internal database at the Tufts Center for the Study of Drug Development: a list of new molecular entities (NMEs) and new therapeutically significant biologics with New Drug Applications (NDAs) or Biologics License Applications (BLAs) approved during 2004–12, including submission dates, approval dates, and therapeutic classifications. Information on the reviewing division responsible for each product was acquired through documents found in the FDA Access Data public database (Drugs@FDA).

Information used to evaluate post-marketing commitments (PMCs) and requirements (PMRs), as well as risk evaluation and mitigation strategies (REMS), was drawn from public documents on the FDA website. To determine the number of review cycles taken to approve each NME or new BLA and whether the PDUFA goal was met, we checked the annual FDA PDUFA Performance Reports to Congress, as well as public summary review documents on the FDA website.

Additionally, to determine which applications involved advisory committee meetings before approval and which received approvable/complete response (CR) letters, we used the public approval letters and summary reviews found on the FDA website, as well as Thomson Reuters databases. The documents found on the website were also used to assess whether the applications had previously been withdrawn and then resubmitted. Likewise, we read the documents to determine whether a drug was given a special protocol assessment (SPA), an orphan drug designation, a fast-track designation, and/or accelerated approval status; additionally, we referred to the website to determine whether the drug was a 505(b)(2) product. We used a website (https://blackboxrx.com/app/guest) to determine whether the original approval for the drug in question required a black box warning on the product label.

For the purpose of assessing the workloads attributable to investigational new drug applications (INDs) filed and NDAs submitted, and to determine the details of "clinical holds" on commercial IND filings (orders to delay or suspend clinical trials in humans until certain safety or other concerns are addressed by the sponsor), we used PAREXEL's Bio/Pharmaceutical R&D Statistical Sourcebook by CDER review division for 2004–12.

To assess the staffing levels for each CDER review division, we consulted the U.S. Department of Health and Human Services (HHS) Employee Directory online (http://directory.psc.gov/employee.htm). Using the search feature, under "Agency," we selected FDA; under "Other Organization," we entered the abbreviation of the CDER review division in question. We then counted the number of employees under the review division, as listed in the HHS Employee Directory. We repeated this process for each CDER division.

ENDNOTES
  1. Peltzman 1973; see also Grabowski and Vernon (1983), who concluded: "In sum, the hypothesis that the observed decline in new product introductions has largely been concentrated in marginal or ineffective drugs is not generally supported by empirical analyses."
  2. Wiggins 1981. For a lengthier review of FDA studies, see Klein and Tabarrok (2013), from which we have drawn.
  3. See, e.g., Philipson et al. 2008; Grabowski and Wang 2008; and Tufts 2005. Cf., however, Olson 2002.
  4. The statement was made by FDA whistle-blower David Graham and published in the New York Times (Harris 2004).
  5. Biologics are medical products, such as vaccines, recombinant proteins, and monoclonal antibodies, that are created by biological processes rather than chemically synthesized—as are most new molecular entities (i.e., drugs).
  6. See DiMasi, Hansen, and Grabowski 2003; DiMasi and Grabowski 2007; and Munos 2009.
  7. Chen et al. 2012; and Topol 2012.
  8. Mukherjee, Nissen, and Topol 2001.
  9. Topol 2012.
  10. Huber 2013.
  11. Adapted from FDA information, http://www.fda.gov/downloads/AboutFDA/CentersOffices/OfficeofMedicalProductsandTobacco/CDER/ContactCDER/UCM070722.pdf.
  12. U.S. Office of the Inspector General 2003.
  13. Psaty and Burke 2006.
  14. U.S. Senate Committee Hearing, Food and Drug Administration Safety and Innovation Act of 2012.
  15. Sources of data are described in Appendix 2.
  16. For ease of exposition, new drugs and new biologics are both referred to as new drugs.
  17. We examined data on more variables than we used for the performance scores. Specifically, for individual approved new drugs, we gathered qualitative data on: FDA therapeutic rating; molecule size; orphan drug status; whether the drug was the subject of an advisory committee meeting; had post-marketing commitments or requirements; had a risk evaluation and mitigation strategy (REMS) developed; was a 505(b)(2) approval (i.e., application was based on studies not conducted by or for the applicant, such as the published literature or FDA's findings for a previously approved product); had a black box warning; had received an accelerated or fast-track designation; had achieved its PDUFA review performance goal for each of its review cycles; received complete response letters; had a special protocol assessment; had a refusal-to-file for its application; or had to resubmit the NDA/BLA. The drug-specific quantitative variables examined were: the number of review cycles; the number of post-marketing commitments; the number of non-significant risk post-marketing commitments; the number of post-marketing requirements; the U.S. clinical development time; and the U.S. approval phase time for each drug included in the analysis.
  18. Priority rating, accelerated approval, and fast track are different but related programs. For greater detail, see http://www.fda.gov/forconsumers/byaudience/forpatientadvocates/speedingaccesstoimportantnewtherapies/ucm128291.htm.
  19. The U.S. clinical development time is the time from U.S. IND filing to first submission of an NDA or BLA with the FDA.
  20. Downing et al. 2014.
  21. The U.S. approval phase time is the period from first NDA or BLA submission to first NDA or BLA approval for the compound.
  22. The frequencies for whether PDUFA review performance goals were met for later review cycles were low and so were not included.
  23. DiMasi 2002, adjusted for inflation to 2013 dollars.
  24. See Murphy and Topel 2006; and Nordhaus 2002.
  25. The FDA, e.g., has used values for a statistical life-year of $100,000–$500,000. Similarly, the Environmental Protection Agency (EPA) has used values of $300,000 or higher (Robinson 2007). See also Aldy and Viscusi 2008; Murphy and Topel 2006; and Appelbaum 2011.
  26. World Bank, World Development Indicators.
  27. Lichtenberg 2012a.
  28. Lichtenberg 2012b.
  29. National Center for Health Statistics 2013.
  30. See, e.g., Frech and Miller 2004; Shaw, Horrace, and Vogel 2005; and Crémieux et al. 2005. But for criticism of such findings, see Grootendorst et al. 2009.
  31. This could be considered a conservative share, based on Lichtenberg 2005, 2012a, and 2012b.
  32. Nor would a "faster FDA" require a trade-off on safety, as noted by then-acting CDER director Steven Galson, in 2005 testimony before the House Committee on Government Reform: the National Bureau of Economic Research "found no significant differences in the rates of safety withdrawals for drugs approved before PDUFA compared to drugs approved during the PDUFA era. This research confirms FDA's analysis on the same subject. In addition, as the public has become more aware of drug safety issues, we are now adding box warnings sooner than we did before PDUFA. This indicates that PDUFA has been successful in both speeding access and preserving safety" (http://www.fda.gov/NewsEvents/Testimony/ucm161673.htm).
  33. Sacks et al. 2014.
  34. See also Goodman and Redberg 2014.
  35. President's Council of Advisors on Science and Technology (PCAST) 2012.
  36. See Carpenter et al. 2012. Occasionally, this has been done on a one-off basis (e.g., 65 staff members from various offices and divisions were pulled from their original positions for several years to address the requirements of the pediatric studies initiative when it was first implemented in 1998).

REFERENCES

Aldy, Joseph E., and W. Kip Viscusi. 2008. "Adjusting the Value of a Statistical Life for Age and Cohort Effects." The Review of Economics and Statistics 90(3) (July 22): 573–81. doi:10.1162/rest.90.3.573.

Appelbaum, Binyamin. 2011. "A Life's Value May Depend on the Agency, but It's Rising." New York Times, February 16. http://www.nytimes.com/2011/02/17/business/economy/17regulation.html.

Brugts, J. J., et al. 2009. "The Benefits of Statins in People Without Established Cardiovascular Disease but with Cardiovascular Risk Factors: Meta-Analysis of Randomised Controlled Trials." BMJ 338 (June 30): b2376–b2376. doi:10.1136/bmj.b2376.

Carpenter, D. P. 2004. "The Political Economy of FDA Drug Review: Processing, Politics, and Lessons for Policy." Health Affairs, 23(1): 52–63.

———. 2010. Reputation and Power: Organizational Image and Pharmaceutical Regulation at the FDA. Princeton, N.J.: Princeton University Press.

———, et al. 2012. "The Complications of Controlling Agency Time Discretion: FDA Review Deadlines and Postmarket Drug Safety." American Journal of Political Science 56: 98–114.

Chen, Rui, et al. 2012. "Personal Omics Profiling Reveals Dynamic Molecular and Medical Phenotypes." Cell 148(6) (March 16): 1293–1307. doi:10.1016/j.cell.2012.02.009.

Crémieux, Pierre-Yves, et al. 2005. "Public and Private Pharmaceutical Spending as Determinants of Health Outcomes in Canada." Health Economics 14(2) (February): 107–16. doi:10.1002/hec.922.

Deyo, R. A. 2004. "Gaps, Tensions, and Conflicts in the FDA Approval Process: Implications for Clinical Practice." Journal of the American Board of Family Practice 17(2): 142–49.

DiMasi, Joseph A. 2002. "The Value of Improving the Productivity of the Drug Development Process: Faster Times and Better Decisions." PharmacoEconomics 20(supp. 3): 1–10.

———, Ronald W. Hansen, and Henry G. Grabowski. 2003. "The Price of Innovation: New Estimates of Drug Development Costs." Journal of Health Economics 22(2) (March): 151–85. doi:10.1016/S0167-6296(02)00126-1.

DiMasi, Joseph A., and Henry G. Grabowski. 2007. "The Cost of Biopharmaceutical R&D: Is Biotech Different?" Managerial and Decision Economics 28(4–5): 469–79. doi:10.1002/mde.1360.

DiMasi, J. A., and L. Faden. 2009. "Factors Associated with Multiple FDA Review Cycles and Approval Phase Times." Drug Information Journal 43: 201.

Downing, N. S., et al. 2012. "Regulatory Review of Novel Therapeutics: Comparison of Three Regulatory Agencies." New England Journal of Medicine 366(24): 2284–93.

Downing, N. S., et al. 2014. "Clinical Trial Evidence Supporting FDA Approval of Novel Therapeutic Agents, 2005–2012." JAMA 311(4): 368–77.

Dranove, D., and D. Meltzer. 1994. "Do Important Drugs Reach the Market Sooner?" RAND Journal of Economics 25(3): 402–23.

Frech, H. E., and Richard D. Miller. 2004. "The Effects of Pharmaceutical Consumption and Obesity on the Quality of Life in the Organization of Economic Cooperation and Development (OECD) Countries." PharmacoEconomics 22(2): 25–36. doi:10.2165/00019053-200422002-00004.

Gingery, D. 2012. "Buying Time: Industry Sacrifices Early to Gain Later with PDUFA V Review Model." Prescription Pharmaceuticals and Biotechnology, The Pink Sheet 74(40) (October 1): 7.

Goodman, S. N., and R. F. Redberg. 2014. "Opening the FDA Black Box." JAMA 311(4): 361–63.

Grabowski, Henry G., and John M. Vernon. 1983. The Regulation of Pharmaceuticals: Balancing the Benefits and Risks. Washington, D.C.: American Enterprise Institute for Public Policy Research.

Grabowski, Henry, and Y. Richard Wang. 2008. "Do Faster Food and Drug Administration Drug Reviews Adversely Affect Patient Safety? An Analysis of the 1992 Prescription Drug User Fee Act." Journal of Law and Economics 51(2): 377–406. http://fds.duke.edu/db/attachment/580.

Grootendorst, Paul, Emmanuelle Piérard, and Minsup Shim. 2009. "Life-Expectancy Gains from Pharmaceutical Drugs: A Critical Appraisal of the Literature." Expert Review of Pharmacoeconomics & Outcomes Research 9(4) (August): 353–64. doi:10.1586/erp.09.35.

Harris, Gardiner. 2004. "F.D.A. Failing in Drug Safety, Official Asserts." New York Times, November 19. http://www.nytimes.com/2004/11/19/business/19fda.html.

Huber, Peter W. 2013. The Cure in the Code: How 20th Century Law Is Undermining 21st Century Medicine. New York: Basic Books.

Kaitin, Kenneth I., and Joseph A. DiMasi. 2000. "Measuring the Pace of New Drug Development in the User Fee Era." Drug Information Journal 34(3): 673–80. doi:10.1177/009286150003400303.

Klein, Daniel B., and Alexander Tabarrok. 2013. "FDAReview.org." The Independent Institute. http://www.fdareview.org.

Kola, Ismail, and John Landis. 2004. "Can the Pharmaceutical Industry Reduce Attrition Rates?" Nature Reviews Drug Discovery 3(8) (August): 711–16. doi:10.1038/nrd1470.

Koopmans, L. H., D. B. Owen, and J. I. Rosenblatt. 1964. "Confidence Intervals for the Coefficient of Variation for the Normal and Log Normal Distributions." Biometrika 51(1–2): 25–32. doi:10.1093/biomet/51.1-2.25.

Lazarou, J., B. H. Pomeranz, and P. N. Corey. 1998. "Incidence of Adverse Drug Reactions in Hospitalized Patients: A Meta-Analysis of Prospective Studies." JAMA 279(15) (April 15): 1200–1205.

Leape, Lucian L., et al. 1991. "The Nature of Adverse Events in Hospitalized Patients." New England Journal of Medicine 324(6): 377–84. doi:10.1056/NEJM199102073240605.

Lichtenberg, Frank R. 2005. "The Impact of New Drug Launches on Longevity: Evidence from Longitudinal, Disease-Level Data from 52 Countries, 1982–2001." International Journal of Health Care Finance and Economics 5(1): 47–73. doi:10.2307/25067714.

———. 2012a. "Pharmaceutical Innovation and Longevity Growth in 30 Developing and High-Income Countries, 2000–2009." Working paper 18235. National Bureau of Economic Research. http://www.nber.org/papers/w18235.

———. 2012b. "The Effect of Pharmaceutical Innovation on Longevity: Patient-Level Evidence from the 1996–2002 Medical Expenditure Panel Survey and Linked Mortality Public-Use Files." Working paper 18552. National Bureau of Economic Research. http://www.nber.org/papers/w18552.

Madden, Bartley J. 2010. Free to Choose Medicine: How Faster Access to New Drugs Would Save Countless Lives and End Needless Suffering. Chicago: Heartland Institute.

Milne, C.-P., and K. I. Kaitin. 2012. "FDA Review Divisions: Performance Levels and the Impact on Drug Sponsors." Clinical Pharmacology & Therapeutics 91: 393–404.

Mukherjee, D., S. E. Nissen, and E. J. Topol. 2001. "Risk of Cardiovascular Events Associated with Selective Cox-2 Inhibitors." JAMA 286(8) (August 22): 954–59. doi:10.1001/jama.286.8.954.

Munos, Bernard. 2009. "Lessons from 60 Years of Pharmaceutical Innovation." Nature Reviews Drug Discovery 8(12) (December): 959–68. doi:10.1038/nrd2961.

Murphy, Kevin M., and Robert H. Topel. 2006. "The Value of Health and Longevity." Journal of Political Economy 114(5) (October 1): 871–904. doi:10.1086/508033.

National Center for Health Statistics. 2013. Health, United States, 2012. Hyattsville, Md. http://www.cdc.gov/nchs/data/hus/hus12.pdf.

Nordhaus, William D. 2002. "The Health of Nations: The Contribution of Improved Health to Living Standards." Working paper 8818. National Bureau of Economic Research. http://www.nber.org/papers/w8818.

Olson, Mary K. 2002. "Pharmaceutical Policy Change and the Safety of New Drugs." Journal of Law and Economics 45(S2) (October 1): 615–42. doi:10.1086/368006.

Peltzman, Sam. 1973. "An Evaluation of Consumer Protection Legislation: The 1962 Drug Amendments." Journal of Political Economy 81(5): 1049–91.

Philipson, Tomas, et al. 2008. "Cost-Benefit Analysis of the FDA: The Case of the Prescription Drug User Fee Acts." Journal of Public Economics 92(5–6): 1306–25. doi:10.1016/j.jpubeco.2007.09.010.

President's Council of Advisors on Science and Technology (PCAST). 2012. Propelling Innovation in Drug Discovery, Development, and Evaluation. (September 25). http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-fda-final.pdf.

Psaty, B. M., and S. P. Burke. 2006. "Protecting the Health of the Public: Institute of Medicine Recommendations on Drug Safety." New England Journal of Medicine 355: 1753–55.

Reichert, J. M. 2003. "Trends in Development and Approval Times for New Therapeutics in the United States." Nature Reviews Drug Discovery 2: 695–702.

Robinson, Lisa A. 2007. "Policy Monitor: How US Government Agencies Value Mortality Risk Reductions." Review of Environmental Economics and Policy 1(2) (July 1): 283–99. doi:10.1093/reep/rem018.

Sacks, L. V., et al. 2014. "Scientific and Regulatory Reasons for Delay and Denial of FDA Approval of Initial Applications for New Drugs, 2000–2012." JAMA 311(4): 378–84.

Shaw, James W., William C. Horrace, and Ronald J. Vogel. 2005. "The Determinants of Life Expectancy: An Analysis of the OECD Health Data." Southern Economic Journal 71(4): 768–83.

Tabarrok, Alexander T. 2000. "Assessing the FDA via the Anomaly of Off-label Drug Prescribing." Independent Review 5(1): 25–53.

Topol, E. 2012. The Creative Destruction of Medicine: How the Digital Revolution Will Create Better Health Care. New York: Basic Books.

Tufts Center for the Study of Drug Development. 2005. "Drug Safety Withdrawals in the U.S. Not Linked to Speed of FDA Approval." Impact Report 7(6).

U.S. FDA. 2010. "Postmarket Drug Safety Information for Patients and Providers—FDA Drug Safety Communication: Reduced Effectiveness of Plavix (Clopidogrel) in Patients Who Are Poor Metabolizers of the Drug." WebContent. http://www.fda.gov/drugs/drugsafety/postmarketdrugsafetyinformationforpatientsandproviders/ucm203888.htm.

U.S. FDA. FY 2011–12. Performance Report to the President and Congress, Prescription Drug User Fee Act, http://www.fda.gov/AboutFDA/ReportsManualsForms/Reports/UserFeeReports/PerformanceReports/default.htm.

U.S. Office of the Inspector General. 2003. FDA's Review Process for New Drug Applications. http://oig.hhs.gov/oei/reports/oei-01-01-00590.pdf.

U.S. Senate Committee Hearing, Food and Drug Administration Safety and Innovation Act of 2012. Senate, June 26; S4610 ff. http://capitolwords.org/date/2012/06/26/S4610-3_food-and-drug-administration-safety-and-innovation/.

Vernon, John A., Keener Hughen, and Joseph H. Golec. 2008. "Future of Drug Development: The Economics of Pharmacogenomics." Expert Review of Clinical Pharmacology 1(1) (January): 49–59. doi:10.1586/17512433.1.1.49.

Wester, Karin, et al. 2008. "Incidence of Fatal Adverse Drug Reactions: A Population Based Study." British Journal of Clinical Pharmacology 65(4): 573–79. doi:10.1111/j.1365-2125.2007.03064.x.

Wiggins, Steven N. 1981. "Product Quality Regulation and New Drug Introductions: Some New Evidence from the 1970s." The Review of Economics and Statistics 63(4): 615–19.

World Bank. Data. http://data.worldbank.org/country/united-states.


 
 
 

The Manhattan Institute, a 501(c)(3), is a think tank whose mission is to develop and disseminate new ideas
that foster greater economic choice and individual responsibility.

Copyright © 2014 Manhattan Institute for Policy Research, Inc. All rights reserved.

52 Vanderbilt Avenue, New York, N.Y. 10017
phone (212) 599-7000 / fax (212) 599-3494