
Manhattan Institute


Testimony by Peter Huber on Precision Molecular Medicine versus the FDA



December 12, 2013
Health Policy / FDA Reform

Pharmacology isn't a science of one hand clapping. The patient's chemistry matters as much as the drug's. The FDA doesn't license drugs—it licenses specified drug-patient combinations: the license's implicit promise of future safety and efficacy applies only “under the conditions of use prescribed, recommended, or suggested in the labeling thereof.” The agency spent forty years creating protocols for the development of empirical crowd-science medicine, and it has spent the last thirty wondering how to fit molecules into those protocols. The system is currently frozen in the headlights. It can't handle the complexity and torrents of data that now propel the advance of molecular medicine.

Modern drug designers develop drugs from the bottom up. They select a target molecule and design a molecule to modulate it. They use “structure-based” drug design: show a molecular target to a sufficiently smart biochemist or, increasingly, a sufficiently smart supercomputer, and he, she, or it can probably come up with an anti-molecule precisely matched to that specific target. Alternatively, designers use a lab animal's genetically engineered immune system to design antibodies perfectly matched to a receptor isolated from, say, the surface of a cancer cell. Recently, biochemists and doctors have begun directly manipulating the molecular code of the patient's own white blood cells to induce them to home in on a specific target on the patient's cancer cells.

The development of this precision molecular medicine begins with the study of human or (in the case of infectious diseases) microbial biology. But the only practical way to work out much of the drug-patient science is to study how the drug actually performs in patients. And the first opportunity to do that systematically is during drug licensing trials authorized and overseen by the FDA.

A good clinical trial of a good drug should culminate in prescription protocols that will make it possible for future doctors to prescribe the targeted drug to the right patients. The best prescription protocols will be based on patient molecular profiles that can be checked before treatment begins — the whole point, after all, is not to wait for clinical effects to reveal whether the drug was prescribed well. By and large, however, the FDA treats patient selection as a problem the drug company must solve either before the clinical trial begins or, to a limited extent, in its very early phases, which involve small numbers of patients. For the most part, FDA-approved trial protocols don't allow the participating doctors to systematically explore the molecular factors that determine why a drug performs well in some patients and not others.

At best, this means that during the trials the new drug is often prescribed to many patients whom it fails to help, and to still more of the wrong patients thereafter, until enough post-licensing data accumulate to reveal how to prescribe the drug more precisely. At worst, drugs that could greatly benefit some patients don't get licensed because the trials test the drugs in too many of the wrong patients. Either way, testing a drug in many of the wrong patients wastes a great deal of time and money. At some point the cost of relying on a very inefficient process to try to solidify the science up front surpasses how much the drug is likely to earn years later in the market. We then have an economically incurable disease.

To unleash the full power of modern molecular medicine the FDA will have to adopt trial protocols that allow the integrated development of drug-patient molecular science. In the words of Dr. Raymond Woosley, former president and CEO of the Critical Path Institute, a nonprofit group established to promote collaboration among drug companies, academic researchers, and the FDA, “Randomized controlled trials are out of date, and it's time to use the tools of the future.”

Blinded, randomized trial protocols were first used in 1938. They were designed to regulate ignorance, not knowledge: the dearth of molecular medical science, not the science itself, nor its efficient, orderly development. Typically, one group of patients gets the real thing, the other a placebo; when a reasonably good treatment is already available, the comparison may instead be drug versus drug. Doctors track clinical symptoms. At the end of the trial, the newly healthy and the still sick, the living and the dead, vote the drug up or down. The only issues that these protocols address systematically are the risks of deliberate or inadvertent manipulation of the data by doctors (“selection bias”) or wishful thinking by patients (the “placebo effect”).

But patients suffering from the same clinically defined disease often present different clusters of molecular targets deep down, and precision molecular medicine hinges on the deliberate, scientific selection of the right drug-patient molecular combinations.

There are, for example, at least ten biochemically distinct breast cancers. Some are treated with an estrogen blocker, others with estrogen itself. The performance of one of the blockers depends on a genetic variation that affects how well patients metabolize the drug in their liver. More generally, the biochemistry associated with some of the biggest killers changes on the fly: cancer cells and HIV virions, for example, mutate rapidly.

Even a disease associated with a single gene may come in many molecular variations. In 2005 the FDA withdrew its approval of one of two drugs that target a receptor (EGFR) associated with one form of lung cancer. But as one oncologist remarked, “there are at least 20 different mutations in the EGF receptors in human lung cancers, and we don't know if the same drug works as well for every mutation which is why we want as many EGFR inhibitor drugs available as possible for testing.” A large genomic study of individuals thought to be particularly susceptible to heart attacks, strokes, obesity and other major health problems recently found that each subject carried about 300 potential “drug target genes” with rare variants that would probably alter a protein's structure in ways likely to undermine health and affect how the protein would respond to drugs. Safety issues add a further layer of complexity. Side effects occur when a drug sideswipes an innocent molecular bystander. But every body presents a somewhat different array of bystanders.

Given what molecular medicine knows and can do today, tying the drug licensing process to high-level clinical symptoms and the statistical study of large groups of patients is inherently anti-scientific. It is a process that deliberately loses molecular details in the crowd, collapses biochemically complex phenomena into misleadingly simple, one-dimensional, yes/no verdicts, and will often reject good drugs that many patients need.
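How a population average can bury a subgroup effect is easy to see in a toy simulation. The sketch below is purely illustrative: the response rates, effect size, and 20% biomarker-positive fraction are invented numbers, not data from any real trial. A drug that substantially helps one patient in five looks weak when its effect is averaged over the whole trial population, yet strong when the biomarker-positive subgroup is examined on its own.

```python
import random
import statistics

random.seed(42)

# Invented illustration: the drug strongly helps a 20% biomarker-positive
# subgroup and does nothing for the other 80% of patients.
N = 10_000

def simulate_patient(treated):
    """Return (biomarker_positive, outcome); higher outcome is better."""
    positive = random.random() < 0.20
    baseline = random.gauss(0.0, 1.0)
    effect = 1.5 if (treated and positive) else 0.0
    return positive, baseline + effect

treated = [simulate_patient(True) for _ in range(N)]
control = [simulate_patient(False) for _ in range(N)]

# Crowd-level comparison: the subgroup effect is diluted five-fold.
avg_effect = (statistics.mean(o for _, o in treated)
              - statistics.mean(o for _, o in control))

# Subgroup comparison: the true effect in biomarker-positive patients emerges.
pos_effect = (statistics.mean(o for p, o in treated if p)
              - statistics.mean(o for p, o in control if p))

print(f"population-average effect: {avg_effect:.2f}")   # near 0.3
print(f"biomarker-positive effect: {pos_effect:.2f}")   # near 1.5
```

A yes/no verdict driven by the first number would reject a drug that the second number shows to be highly effective for a definable minority of patients.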

The best drug science today is anchored in mechanistic facts about how a drug interacts with various molecules that propel the disease or unwanted side effects. Without an understanding of those facts, many drugs that we need won't perform well, because we don't know how to prescribe them well. As Dr. Janet Woodcock, director of the FDA's Center for Drug Evaluation and Research, put it in 2004, “biomarkers are the foundation of evidence based medicine—who should be treated, how and with what…. Outcomes happen to people, not populations.”

In September 2012 President Obama's Council of Advisors on Science and Technology (PCAST) released a report on “Propelling Innovation in Drug Discovery, Development, and Evaluation.” The FDA's trial protocols, the report notes, “have only a very limited ability to explore multiple factors. Such factors importantly include individual responses to a drug, the effects of simultaneous multiple treatment interventions, and the diversity of biomarkers and disease sub-types.” These protocols lead to clinical trials that are “expensive because they often must be extremely large or long to provide sufficient evidence about efficacy.”

The report proposes several reforms that, if vigorously implemented, would go a long way toward aligning FDA regulation with the drug development tools and practice of modern molecular medicine.

To begin with, the FDA should use its existing accelerated approval rule as the foundation for reforming the trial protocols used for all drugs that address an unmet medical need in the treatment of a serious or life-threatening illness. As the PCAST report notes, the rule has “allowed for the development of pioneering and lifesaving HIV/AIDS and cancer drugs over the past two decades.”

In brief, when applying the rule the FDA makes a first call about the drug's efficacy much earlier, based on “surrogate end points,” without waiting for higher level clinical effects to surface and persist for some (arbitrary) period of time. In the Gleevec-versus-leukemia trials launched in 2000, for example, doctors tracked the drug's performance by following blood counts and the number of cells bearing the mutant “Philadelphia” chromosome. The truncated front-end trials need not resolve concerns about how the drug's performance might be affected by many aspects of genetic or lifestyle diversity; “differences in response among subsets of the population,” in FDA parlance, may be addressed later. So, too, may open-ended questions about long term side effects. The manufacturer must still complete conventional trials, but does so after the drug is licensed—and thus does so in tandem with the wider use of the drug by unblinded physicians who can investigate why the drug works in some patients and not others. The FDA rescinds the license if things don't pan out.

The PCAST report also urges the FDA to “expand the scope of acceptable endpoints” used to grant accelerated approval. Specifically, the FDA should make wider use of “intermediate” endpoints—indications that a drug provides “some degree of clinical benefit to patients” even though the benefits fall “short of the desired, long[-]term meaningful clinical outcome from a treatment.” The Agency should “signal to industry that this path for approval could be used for more types of drugs” and “specify what kinds of candidates and diseases would qualify.”

This is a very important recommendation. To deal successfully with complex diseases that require multi-drug treatments, we will have to develop treatments piece by piece, each piece consisting of a drug and a solid understanding of how a cluster of biomarkers can affect that drug's performance. Demanding a front-end demonstration that each drug can, on its own, deliver long-term clinical benefits to most patients will only ensure that no treatment for the disease is ever developed. Evidence that the drug is interacting in a promising way with a molecular factor that plays a role in propelling a complex disease is the best we can expect from any single drug.

A drug, for example, may successfully suppress an HIV protease enzyme and thus lower viral loads, or bind successfully with an estrogen receptor and thus shrink or slow the growth of a breast cancer tumor, yet have no lasting effect on the progress of the disease because the viral particles and cancer cells mutate their way past any single-pronged attack. The drug should be licensed anyway. A successful attack on a biochemically nimble virus or cancer has to begin somewhere, and the place to begin is with a targeted drug that has demonstrated its ability to disrupt some aspect of the disease's chemistry in a way that has some promising effect, in some patients, at some point further along in the complex process that propels the disease. When, for example, the first HIV protease inhibitor established that it could do that job and thus lower viral loads, it was a drug that medicine clearly wanted to have on the shelf—even though it would take several more years to develop additional drugs and assemble cocktails that could suppress the virus almost completely and for a long time.

The PCAST report also recommends adoption of “Bayesian and adaptive protocols” and “other modern statistical designs” to handle the data-intensive trials and explore multiple causal factors simultaneously. In brief, adaptive trials gather a great deal of data, tracking genes, proteins, microbes, and other biomarkers that may affect the trajectory of the disease and cause different patients to respond differently to the same treatment regimens. The trial protocols evolve as the trial progresses and investigators improve their understanding of the drug-patient molecular science. These adaptive, multi-dimensional trial designs gather relevant information much more quickly and efficiently than do trials conducted under the FDA's conventional protocols. And because the investigators learn and adapt as they go, the patients involved receive, on average, better treatments.
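The core mechanic of one common adaptive design can be sketched in a few lines. The toy simulation below uses Thompson sampling, one standard Bayesian allocation rule (not a protocol drawn from the PCAST report itself), with invented response rates: each new patient is assigned by sampling a plausible response rate for each arm from its current posterior, so allocation drifts toward the better-performing arm as evidence accumulates.

```python
import random

random.seed(0)

# Invented true response rates for a two-arm toy trial.
true_response = {"drug": 0.55, "control": 0.35}

# Beta(1, 1) priors on each arm's response rate, tracked as
# observed successes and failures (plus the prior pseudo-counts).
successes = {arm: 1 for arm in true_response}
failures = {arm: 1 for arm in true_response}
assignments = {arm: 0 for arm in true_response}

for _ in range(2000):
    # Thompson sampling: draw a response rate for each arm from its
    # Beta posterior, then assign the next patient to the higher draw.
    sampled = {arm: random.betavariate(successes[arm], failures[arm])
               for arm in true_response}
    arm = max(sampled, key=sampled.get)
    assignments[arm] += 1
    # Observe the (simulated) outcome and update that arm's posterior.
    if random.random() < true_response[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print(assignments)
```

Because allocation shifts toward the stronger arm as the posteriors sharpen, most of the 2,000 simulated patients end up on the better treatment, which is the sense in which trial participants "receive, on average, better treatments" under adaptive designs.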

Multi-dimensional Bayesian analyses require very complex numerical calculations. The FDA's own information processing technologies, the PCAST report notes, are “outdated” and “woefully inadequate.” But as Andy Grove, the pioneering founder and for many years CEO of Intel, noted in a Science magazine op-ed in late 2011, the digital revolution now makes possible clinical trials that enlist patients much more flexibly, to “provide insights into the factors that determine … how individuals or subgroups respond to the drug, … facilitate such comparisons at incredible speeds, … quickly highlight negative results,…[and] liberate drugs from the tyranny of the averages that characterize trial information today.”

In light of what we now know about the molecular environments in which drugs operate, the unresolved question at the end of many failed clinical trials is what failed: the drug or the FDA-approved script. It's all too easy for a bad script to make a good drug look awful. For example, the script begins with a standard clinical definition of a disease that is in fact a cluster of many biochemically distinct diseases; a coalition of nine biochemical minorities, each with a slightly different form of the disease, vetoes the drug that would help the tenth. Or a biochemical majority vetoes the drug that would help a minority. Or every component of what would be an effective cocktail fails when each drug is tested alone. Or the good drug or cocktail fails because the disease's biochemistry changes quickly, but at different rates in different patients, and to remain effective, treatments have to be changed in tandem, but the clinical trial is set to continue for some fixed period that doesn't align with the dynamics of the disease in enough patients. Or side effects in a biochemical minority veto a drug or cocktail that works well for the majority.

By continuing to channel much of the development of drug science through trial protocols developed decades ago, the FDA now makes it increasingly likely that many drugs that we need and the associated drug-patient science will never get developed at all.