
Manhattan Institute



Unlocking the Code of Health
Bridging the Gap Between Precision Medicine and FDA Regulation

March 11, 2015
Health Policy | FDA Reform

Executive Summary

Precision medicine—tailoring treatments to the biochemistry of individual patients—has the potential to cure countless diseases. Molecular biomarkers are the foundation of this approach. Many doctors—notably, oncologists—routinely prescribe drugs in ways that best fit each patient's biomarker profile. The Food and Drug Administration (FDA), however, has been slow to incorporate biomarkers into the regulatory procedures for drug approval and, as a result, has significantly slowed the development of safe and effective treatments for many diseases.

Realizing the full potential that biomarkers offer to revolutionize modern medicine will require substantive and clear regulatory standards, now lacking, for incorporating biomarkers into the drug-approval process, as well as a more transparent, predictable, and timely FDA process for reviewing biomarker submissions.

1. The Importance of Biomarkers for Drug Development and Approval

Biochemists rely on an understanding of the molecular biomarkers that propel diseases to design targeted drugs to block or control them. Doctors then prescribe the drugs to specific subgroups of patients who have the biomarkers in question (estrogen-receptor-positive breast cancer, for instance). Other biomarkers can be used to track whether a disease is advancing or retreating at every stage of its development, thus providing early indications of how well a drug is performing. Changes in biomarker status (such as lowering blood sugar in a diabetic patient) that can provide rapid, reliable evidence of efficacy also have the potential to greatly accelerate the FDA's drug-approval process.

Modern diagnostic tools have revealed that what used to be viewed as a single disease is quite often caused by biomarkers that vary significantly across patients; different groups of patients therefore respond differently to the same drug. Researchers are assembling large databases and using powerful computers to link arrays of different biomarker profiles to the same clinically defined diseases. These findings can then lead to the design of multiple different drugs to address them.

Biomarker science also sets the stage for developing drugs that can be used to take control of disease-causing molecular pathways before clinical symptoms develop. The potential benefits are enormous. For example, according to one estimate, a drug that would delay the onset of Alzheimer's by five years would save about $367 billion in direct health costs by 2050 while likely extending the life span of millions of patients.

Precision medicine is the future of medicine. But it is also the antithesis of the FDA's long-standing one-size-fits-all drug-approval process. Top officials at the FDA have publicly acknowledged this for over a decade, but the agency has been very slow to develop consistent and transparent standards for using biomarkers in drug trials. The absence of such standards has sharply reduced industry incentives to make the large investments needed to develop new targeted drugs or seek formal approval of new uses for existing drugs. Meanwhile, countries such as the U.K. are preparing to completely revamp their drug-approval protocols to develop and use biomarker science during the drug-approval process and approve associated precision-medicine treatment protocols. By offering companies a faster, more certain path to market, our global competitors hope to shift pharmaceutical R&D dollars and jobs out of the U.S. and onto their own shores.

Incorporating the most recent advances in biomarker science into the drug-approval framework will significantly accelerate the development of new therapeutic options and their delivery to patients suffering from serious, currently untreatable disorders. It will also lower the overall cost of developing new treatments and significantly lower health care costs by allowing us to detect, treat, or prevent the development of chronic ailments much more effectively than is currently possible.

2. The FDA's History of Crowd-Science Medicine

For more than a decade, the FDA has been saying the right things about biomarkers but has been very slow to act. In 2004, the FDA's Critical Path Initiative report identified biomarker development as a top priority. Dr. Janet Woodcock, currently head of the FDA's Center for Drug Evaluation and Research, noted that "biomarkers are the foundation of evidence-based medicine—who should be treated, how, and with what. Outcomes happen to people, not populations." And in a May 2013 speech addressing the advent of targeted therapies and personalized medicine, Dr. Woodcock declared: "We are going to have to change the way drugs are developed. Period," adding that the agency must "turn the clinical trial paradigm on its head."

But the traditional paradigm is still standing. Under that regime, which emerged in the 1960s, a new drug is approved only if its efficacy has been established by "substantial evidence" grounded in "adequate and well-controlled" clinical trials. Its safety must also be established, though there is no express statutory standard for what kind of evidence is required. In practice, both standards are generally understood to apply only "under the conditions of use prescribed, recommended, or suggested in the labeling thereof." No drug gets approved without a label, and the label is where the FDA, in effect, approves future uses.

That approval can't be well-informed, however, without an understanding of the relevant details of the patient-side chemistry. Variations in that chemistry can have strong effects on both efficacy and safety. For most of the last 50 years, however, the FDA has required that a new drug's efficacy be demonstrated by prescribing it in a standard way to a group of patients large enough to provide a statistically robust, one-dimensional correlation with a desired change in a clinical condition. Randomized, double-blind, placebo-controlled trials, dating back to the 1930s and 1940s, are still often called the "gold standard" for modern drug testing.

But those protocols lead to what can, at best, be called crowd-science medicine—though, anchored as they are in empirical correlations, they are almost all crowd and very little science. They assume broad areas of biochemical uniformity among patients, where we now know that there is significant variation. They steer medicine relentlessly toward one-size-fits-all drugs for hypothetical one-size patients.

Tested in large groups of patients selected indiscriminately, many drugs that could help subsets of patients will fail to win approval because the FDA can't tolerate the uncertainty that its own policies sustain. By focusing exclusively on clinical symptoms and effects, which often take a long time to surface, these trials are often very slow to reach any conclusion at all.

The FDA's "Accelerated Approval" rule, developed in the late 1980s and codified by Congress in 1997, already provides the regulatory framework in which the FDA can, in principle—though very rarely in current practice—allow molecular biomarkers to be used to speed the evaluation process. The rule hinges on the use of "surrogate" endpoints that the FDA deems to be "reasonably likely" to predict clinical outcomes. The acceptance of surrogate endpoints allows the agency to make a first call about the drug's efficacy without waiting for clinical effects to surface and persist for some (often arbitrary) period of time. The manufacturer must still complete studies that last long enough to confirm the drug's clinical effects but does so after the drug has been conditionally approved. The drug may be withdrawn from the market if things don't pan out. But here, too, the FDA has declined to issue clear qualification criteria for surrogate endpoints, relying instead on an ad hoc—and, therefore, unpredictable—case-by-case analysis.

3. A Bystander in the Biomarker Revolution

These policies have left the FDA as a bystander to much of the ongoing revolution in molecular medicine. Molecular biomarker science is now being used at every other stage of the drug-development process and in many areas of medical practice. Ironically, much of the expertise about biomarkers can be found in the federal government itself—specifically, at the NIH, which long ago expressed its eagerness to help the FDA incorporate biomarkers into its approval process.

The NIH, professional medical associations, and others are fast acquiring the scientific tools and resources to track the molecular mechanics of diseases from the bottom up. In so doing, they are steadily improving medicine's ability to make an accurate prognosis of how an untreated disease is likely to progress inside an individual patient. The same body of science can lead to precise, objective criteria that define the molecular-level tasks that we want drugs to perform, as well as tests that can provide early indications of when a drug causes significant changes in a disease's progress. The tools that make it possible to acquire the molecular data needed to develop this body of science continue to improve rapidly. As they come to be more widely used, their costs will continue to drop. The same is true for the power and cost of the computers and software needed to assemble and analyze the massive amounts of complex data that such tools generate.

Vast amounts of such data are already being collected and analyzed by drug companies, medical specialists, and research centers. The costs are being covered by drug companies, philanthropists, private and public health-insurance programs, and taxpayers who fund the NIH and other research institutions. Collectively, the costs undoubtedly dwarf the FDA's budget; these programs also generate far more complex data than the FDA has the in-house expertise and computational tools to handle.

The Institute of Medicine (IOM)—the independent, nonprofit health arm of the National Academy of Sciences—specializes in developing substantive evidentiary standards for applied research. In 2010, the IOM released a workshop report that recommended that "the FDA adopt a consistent scientific framework for biomarker evaluation in order to achieve a rigorous and transparent process."

But clear substantive standards for the collection and analysis of data for biomarker validation at the FDA (the biomarker "qualification" process in the FDA's regulatory jargon) remain conspicuous by their absence. Drug companies and doctors already have strong incentives to develop biomarker science. But the most powerful economic incentive for standardizing, pooling, and analyzing biomarker data is the prospect that the results can be used to frame clinical trials in ways that make it more likely that a drug will perform well and, in some circumstances, substantially shorten the time required for FDA approval.

4. The Path to Reform

In part, the FDA has been marginalized in this area because of its regulatory role. Many of the major players involved in the pooling and analysis of molecular data don't directly interact with the agency, which faces sharp limits on how much contact it may have with drug companies and other experts outside the context of specific product applications. These constraints have limited the FDA's ability to keep pace with new advances. The NIH, by contrast, has a history of close collaboration with clinicians, medical research centers, professional medical societies, doctors, and patients, and NIH-funded research is often the starting point for uncovering and using newly discovered biomarkers.

To get things moving, Congress should create a framework for expert panels, convened under the auspices of the NIH and the IOM, to develop substantive standards for the use of biomarkers in the drug-approval process. Separate expert panels should be convened to develop standards that address the statistical tools needed to analyze biomarker data.

The FDA would be a partner in this process throughout, but the panels would be directed to frame standards that reflect the consensus views of the scientific community—and the standard-setting process should be dynamic and flexible enough to incorporate innovative approaches going forward. That said, the FDA would retain final authority in the drug-approval process to strike the right balance in assessing what is known, and with how much confidence, about the relevant biomarkers and surrogate endpoints used in clinical trials, a drug's safety and efficacy as established in those trials, the seriousness of the disease, and the availability of other therapies.

The objective of the reform effort should be to anchor the FDA-approval process in the best available molecular biology, speed up regulatory decision making, and ensure that the FDA's review of biomarker submissions is based on a transparent, predictable, and efficient approach.


______________________

Paul Howard is a senior fellow and director of health policy at the Manhattan Institute.

Peter W. Huber is a senior fellow at the Manhattan Institute.
