Friday, March 20, 2009

Seroquel Lessons: Raised Eyebrows and Furrowed Brows

The very interesting Washington Post front page story on March 4 about an unfavorable Seroquel study that may have been “buried” during clinical development ends with a comment from the principal investigator of the CATIE trial, the large NIH comparative trial of antipsychotics.

Post writer Shankar Vedantam says “Jeffrey Lieberman, a Columbia University psychiatrist who led the federal study, said doctors missed clues in evaluating antipsychotics such as Seroquel. If a doctor had known about Study 15, he added, ‘it would raise your eyebrows.’”

That’s an intriguing observation, especially since Lieberman was the lead investigator not only for the NIH study but also for a smaller comparative study, commissioned by Seroquel manufacturer AstraZeneca, that ran during the course of the large NIH comparative project.

As we wrote at the time of the first CATIE results in 2005, AstraZeneca was one of the most aggressive participants in the atypical antipsychotic field in its preparations for the release of comparative information from CATIE.

During 2003, while CATIE was already underway, AstraZeneca commissioned a 52-week comparative trial of Seroquel, Lilly's Zyprexa and J&J's Risperdal in first-episode psychosis, a trial that closely paralleled CATIE but enrolled a category of patients that had been excluded from the NIH trial.

Called CAFÉ (Comparison of Atypicals in First Episode Psychosis), AstraZeneca’s trial even echoed CATIE’s name, and the two studies shared about 14 of the CATIE sites.

CAFÉ may have added information about another group of patients, but it also improved Seroquel's results noticeably and gave AstraZeneca something positive to say when the CATIE results were released.

The recent Post story raises issues about studies that might not have been reported to keep unfavorable information from the light of day.

The real world is even more complicated. As the CATIE/CAFÉ situation reminds us, studies can be designed and run by overlapping teams in ways that make it difficult to parse a useful meaning from the research. And that's an important message for the future.

At a time when the policy and budget communities in Washington are rallying around comparative effectiveness as a way to align health care spending with the most sensible and cost-effective treatments, the CATIE experience is a useful warning of a tough road ahead for that effort.

Previous experience presages a particularly difficult path ahead for the advocates of comparative research as an educational tool – what could be called the Baucus school of comparative effectiveness.

The way comparative trials have been run to this point suggests there will be plenty of furrowed, confused brows in the future as well as raised eyebrows.

1 comment:

Anonymous said...

You'd have to be an idiot not to understand that federal/government-directed comparative effectiveness studies are a bit different from those companies undertake to use in their marketing efforts.