As Congress moves closer on creating a national center on comparative effectiveness, one big question mark for industry remains: how will insurance companies use the data?
The potential for payors to use comparative data to make up-or-down coverage decisions is an unsettling proposition for industry. While most stakeholders agree on the broad outlines of what a federal effort might look like (a public-private partnership with a transparent process that includes services across the health care spectrum), it’s the specifics that get people nervous.
Restrictions on patient choice have been a central theme in arguments against a centralized effort.
David Nexon, representing the medical device industry association AdvaMed, put it to congressional staffers this way last week: “when the products are safe and effective, I do not believe the research should be allowed to be used to make blanket non-coverage decisions….An insurance company or the government shouldn’t be saying that you shouldn’t have access to the thing that’s best for you.”
Agency for Healthcare Research & Quality director Carolyn Clancy—whose agency already does some comparative effectiveness research—tried to quell those concerns.
AHRQ’s research, she told congressional staffers, rarely results in a “giant thumbs up or giant thumbs down” on coverage decisions. Instead, “we think the reports actually help clinicians and health care organizations refine the process of identifying more rapidly which patients are most likely to benefit…so that access to effective treatment is maximized.”
Those are two very different views of a comparative effectiveness research center—as either a new obstacle to access or an enabler of more rapid uptake of effective therapies. The reality of how insurers will use the data probably lies somewhere between those two extremes.
It’s clear government and private payors aren’t going to ignore the output of a federally directed comparative research initiative. As Karen Ignagni, president of America’s Health Insurance Plans, told congressional staffers, comparative effectiveness data “should not dictate benefit design, but the idea that we would have this robust research and not pay any attention to it doesn’t make any sense whatsoever.”
Ignagni pointed to the National Eye Institute’s trial that will pit Genentech’s Lucentis against the off-label use of Avastin for the treatment of age-related macular degeneration as the gold standard for comparative effectiveness research.
Noting that Lucentis is significantly more expensive than Avastin, “assuming they both have similar properties, and can effectively perform the same function, it doesn’t mean that a health plan wouldn’t cover them,” she said. But “we might put [Lucentis] in a higher tier, for example.”
But it’s comments like those that have people like AdvaMed’s Nexon nervous. He points to another high-profile, government-funded trial, the Clinical Antipsychotic Trials in Intervention Effectiveness, or CATIE, as an example of comparative effectiveness research gone wrong: Based on the conclusions of the CATIE study that older antipsychotics were just as effective as most newer drugs, state Medicaid agencies instituted a blanket denial of coverage for the branded products.
“That’s the kind of thing that this can be misused for, and I think it’s a real threat to American medical care, and to our own ability as patients to get what’s best for us,” he said. “The fact is, there is profit motive driving insurance companies, as there is one driving our companies.” AHRQ’s Clancy pledged that any expanded effort would be different this time around.
Comments, she said, “about people with green eye shades” and “government bureaucrats” are misplaced. “They are very good people. They are chief medical officers making very robust decisions as scientists….That’s the process we intend to invite.”