Blue Pill or Red Pill: The Limits of Comparative Effectiveness Research

Report by Eric Sun and Tomas J. Philipson | June 28, 2011

Comparative Effectiveness Research (CER) measures the effects of different drugs or other treatments on a population, with the goal of finding out which ones produce the greatest benefits for the most patients. Used properly, CER gives the patient, doctor, and payer hard information from thousands, or even millions, of cases, saving them time and money that otherwise would be spent on a trial-and-error quest for the right treatment.

Public and private payers for health care hope to use CER to cut costs without reducing quality of care. Great expectations have been placed on this approach. "If there's broad agreement ... [that] the blue pill works better than the red pill," President Obama has said, "and it turns out the blue pills are half as expensive as the red pill, then we want to make sure that doctors and patients have that information available to them."

The potential short-term savings are significant. For example, antipsychotic drugs represent one of the largest and fastest-growing expenses for Medicaid. In 2005, a CER analysis of antipsychotic drugs found little difference between the effectiveness of older, cheaper antipsychotics and that of more expensive "second-generation" drugs. We determined that if reimbursement policies had been changed in response and Medicaid had stopped paying for the more costly drugs, it would have saved $1.2 billion out of the $5.5 billion that it spent on these medications in 2005. However, the consequences of this policy shift would have been worse mental health for many thousands of people, resulting in higher costs to society that would equal or outweigh any savings in Medicaid costs.

This result seems counterintuitive: How can it be that, when a CER study shows no difference between two drugs, limiting coverage for the more expensive drug could actually increase costs? The answer is that in most CER studies, it is the drug or treatment with the larger average effect on an entire population that "wins." In the president's hypothetical, the blue pills are "just as effective" as the red ones because, on average, they do as much good for patients. But the average patient is not the same as any particular individual patient. Declaring a treatment most effective based on an average is a medical and an economic error, for two reasons.

First, individuals differ from one another and from population averages. Therefore, what may be on average a "winning" therapy may simply not work for a large number of patients. Conversely, a drug that is less effective on average may still be the best, or only, choice for a sizable proportion of patients.

The second reason is dependence in patient responses across therapies. Dependence, for any individual patient, is the degree to which response to one treatment predicts response to another. Dependence varies from illness to illness and from drug to drug but is often an important aspect of finding treatments that work. One cannot know in advance, as a general rule, that Drug A's failure guarantees the failure of Drug B. Yet a reimbursement policy based on CER could well make this error: by refusing to reimburse Drug B on the grounds that Drug A is "more effective," such a policy assumes that failure with Drug A will predict failure with Drug B.
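To see how difference and dependence interact with a coverage rule, consider a simple simulation of a hypothetical patient population. The response rates and the dependence parameter below are illustrative assumptions chosen for exposition only; they are not estimates from the CATIE data or our study.

```python
import random

random.seed(0)
N = 100_000  # hypothetical patient population

# Illustrative assumptions (not CATIE estimates):
# Drug A helps 60% of patients on average, Drug B helps 50%.
P_A = 0.60
P_B = 0.50

def simulate(rho):
    """Share of patients helped when only Drug A is covered vs. when both are."""
    helped_a_only = helped_either = 0
    for _ in range(N):
        responds_a = random.random() < P_A
        # "rho" controls dependence: with probability rho, response to B simply
        # mirrors response to A; otherwise it is drawn independently.
        if random.random() < rho:
            responds_b = responds_a
        else:
            responds_b = random.random() < P_B
        helped_a_only += responds_a
        helped_either += responds_a or responds_b
    return helped_a_only / N, helped_either / N

for rho in (0.0, 0.5, 1.0):
    a_only, either = simulate(rho)
    print(f"dependence={rho:.1f}: cover A only helps {a_only:.0%}, "
          f"cover both helps {either:.0%}")
```

When responses to the two drugs are close to independent, covering only the "winner" leaves many Drug A nonresponders untreated even though Drug B would have helped them; only when dependence is very strong does restricting coverage cost little. A reimbursement rule based solely on average effectiveness implicitly assumes the strong-dependence case.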

To understand the effect of these points on costs, we looked at the real-world consequences of applying CER results to the antipsychotics we mentioned. These drugs are one of the largest classes of medication for Medicaid patients, and the program's expenditures on antipsychotics are among its fastest-growing: they rose from $1 billion in 1995 to over $5.5 billion in 2005.

In 2005, a national CER study, the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE), compared the effects of cheaper, first-generation antipsychotics with those of drugs discovered later. The CATIE study found that second-generation antipsychotics were no more effective at treating schizophrenia symptoms than first-generation drugs. Naturally, this led to calls for Medicaid to limit reimbursement for second-generation antipsychotics. As this debate continues, we set out to answer a simple empirical question: Would reimbursement policies based on the CATIE findings actually save money on health-care costs? Or would the effects of difference and dependence undo the cost savings?

We found that the latter is the case. Our analysis focused on antipsychotic coverage for roughly 250,000 non-elderly adult Medicaid enrollees with schizophrenia. First, we considered an extreme case: denial of all coverage for second-generation antipsychotics, on the grounds that the cheaper first-generation drugs are just as effective. We found that this hypothetical policy would save Medicaid $1.2 billion, compared with full coverage. However, we estimate that it would reduce patient health by 13,138 quality-adjusted life years (QALYs) because of reduced health among the 75 percent of patients who were not responsive to first-generation antipsychotics and who, because of the restrictive policy, received no other drug therapy. Given that QALYs are typically valued at $100,000, this suggests that the savings from denying coverage for second-generation antipsychotics ($1.2 billion) would be outweighed by the costs of reduced health for patients ($1.3 billion).
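A back-of-the-envelope check of that comparison, using only the figures quoted above:

```python
# First hypothetical policy: deny all coverage for second-generation antipsychotics.
savings = 1.2e9            # Medicaid savings, from the paragraph above
qalys_lost = 13_138        # estimated health loss
value_per_qaly = 100_000   # typical valuation cited in the text

health_cost = qalys_lost * value_per_qaly
print(f"Health loss: ${health_cost / 1e9:.1f} billion")              # ~ $1.3 billion
print(f"Net effect:  ${(savings - health_cost) / 1e9:+.2f} billion")  # negative
```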

The second hypothetical policy we considered would cover perphenazine and risperidone (which are available in less costly generic forms) but exclude olanzapine (which is not). This policy would save Medicaid $500 million annually but reduce health by 10,146 QALYs, mainly because of reduced health among patients who are unresponsive to either risperidone or perphenazine and who receive no therapy for six months or longer because of the restrictive policy. At a value of $100,000 per QALY - again, the typical value assumed in the scholarly literature and by many payers - the health loss is nearly double the savings to Medicaid. Even at a value of $50,000 per QALY, such a policy would only "break even." Therefore, using the CATIE findings to support restrictive coverage policies would not be cost-effective. It would limit freedom of choice for doctors and patients and yield no real compensating savings.
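The same arithmetic, applied to this second policy at both the $100,000 and $50,000 valuations discussed above:

```python
# Second hypothetical policy: cover generics (perphenazine, risperidone), exclude olanzapine.
savings = 0.5e9        # annual Medicaid savings, from the paragraph above
qalys_lost = 10_146    # estimated annual health loss

for value_per_qaly in (100_000, 50_000):
    health_cost = qalys_lost * value_per_qaly
    print(f"${value_per_qaly:,}/QALY: health loss ${health_cost / 1e9:.2f}B "
          f"vs savings ${savings / 1e9:.2f}B -> net ${(savings - health_cost) / 1e9:+.2f}B")
```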

We do not suggest that CER be dropped from the tool kit of private and public payers who want to cut costs while maintaining quality. On the contrary: we know that CER will become only more important to policymakers in the future. The 2009 federal stimulus law allocated $1 billion for CER programs, and the 2010 health-care overhaul created an institute to promote CER and disseminate the results of this research to doctors and payers. The 2010 law also rescinds a prohibition on the use of CER for coverage decisions by Medicare. In the meantime, insurance companies and other private payers are also on the bandwagon. A recent survey found 85 percent of such organizations expecting that CER will soon be used to justify changes in reimbursement policies.

Our results suggest that CER will not fulfill its promise unless it is implemented differently by researchers and understood differently by policymakers. Simply put, seeking the treatment that is most effective on average will not improve health or save money. However, CER can be conducted in a way that takes difference and dependence into account and measures their effect. If CER is applied in this way - as a tool for matching individual patients to the best treatments for those individuals - it will realize its potential to reduce costs without inhibiting freedom of choice for doctors and patients.

