
Atlantic: Lies, Damned Lies, and Medical Science | e-Patients.net

Atlantic: Lies, Damned Lies, and Medical Science


There’s an extraordinary new article in The Atlantic, “Lies, Damned Lies, and Medical Science.” It echoes the excellent article in our Journal of Participatory Medicine (JoPM) one year ago this week, by Richard W. Smith, 25-year editor of the British Medical Journal: In Search Of an Optimal Peer Review System.

JoPM, Oct 21, 2009: “…most of what appears in peer-reviewed journals is scientifically weak.”
Atlantic, Oct. 16, 2010: “Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong.”

JoPM 2009: “Yet peer review remains sacred, worshiped by scientists and central to the processes of science — awarding grants, publishing, and dishing out prizes.”

Atlantic 2010: “So why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice?”

Dr. Marcia Angell said something just as damning in December 2008 in the New York Review of Books: “It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of The New England Journal of Medicine.” (Our post on Angell is here.)

What’s an e-patient to do? How are patients supposed to do research if, as all three authorities say, much of what they read is scientifically weak?

More problematic, what’s an e-patient to do when doctors commonly insult them, saying “You don’t know how to research – stick to peer reviewed journals”? You know what reaction patients get when they question those journals? Commonly, doctors’ eyes roll. Both are reasons why we’ve covered this subject forever. See our category Understanding Statistics.

I haven’t read the full Atlantic article yet, but today it was the buzz of Twitter.  I’ve asked for a post by Peter Frishauf, who authored a great commentary on Smith’s article last year: Reputation Systems: A New Vision for Publishing and Peer Review. Stay tuned.


Fraud and science Part 1 & 2: Petulant Skeptic

07 January 2011

Fraud and science

Everyone else is writing about it, so I may as well too. Everyone is reporting that not only does the MMR vaccine not cause autism, but the data purporting to show that it did was fraudulent.

While everyone seems terribly outraged by this, they also seem to be treating the whole thing as some sort of crazy anomaly -- the scientific equivalent of that high school reunion your wife couldn't go to and where you had a bit too much to drink and the (still) hot girl you pined for long ago is now impressed at what you've become and somehow you end up with her panties in your luggage. I mean, why tell your wife? It'll turn both your lives upside down, hurt everyone involved, and it was just a once in a lifetime confluence of alcohol, chance, and reminiscence...

Except that isn't the truth and everyone knows it. While it's not tenable to deploy the mutaween to the thousands of high school reunions in this country, it does seem feasible to come up with a better accounting system in the world of science (and for retractions in particular). At present the editors of surgical journals seem to have no problem telling Retraction Watch that the reason a paper was retracted is, "None of your damn business."

This is why some context is important. The media's treatment of this Wakefield debacle as though it's of singular import casts a shadow that obscures the huge number of articles that ought to be retracted. For those too lazy to click that link: the authors speculated that between 10,000 and 100,000 unretracted papers ought to be. (Ah, but maybe their own is among them? Shut up.)

Oh, but this Wakefield thing is just a fluke. Most of those crappy journal articles don't actually hurt people. Right? Right???

Wrong. Let me tell you the story of Werner Bezwoda (as if to emphasize my point, he doesn't even have a Wikipedia page). Back in the 1980s doctors had come up with a new theory for cancer treatment. Well, it wasn't really a new theory, it was just the extension of an old one to an unfathomable degree. The limiting factor on chemotherapy effectiveness at the time was the patient's bone marrow. If you destroyed all of the bone marrow cells (along with the cancer cells) your patient invariably died (although not from cancer). Medicine being what it is, doctors decided, well we need to destroy the bone marrow to kill the cancer, so what we'll do is extract bone marrow from these folks, give them the super megadose (it was actually called that) chemo, and then implant the bone marrow back.

The guys at the NIH who first came up with this idea tried to set up some clinical trials to rigorously test the idea before putting it into practice. Unfortunately they were undermined by a media blitzkrieg that cast them as villains for keeping a potentially lifesaving therapy from dying cancer patients. The public outcry was sufficient that the FDA gave the whole procedure a compassionate use exemption. Public demand for the procedure was fed by our man Bezwoda, a physician claiming extraordinary results with the autologous transplantation after the super megadose chemo protocol he had developed. Clients from around the world were regularly flying down to his clinic at Witwatersrand in Johannesburg, South Africa.

As a metric for how quickly this protocol swept the scientific community, in 1993 alone there were 1,177 papers published on it. In any case, those NIH guys were stubbornly continuing with their trials (try recruiting for a randomized trial when your subjects can jump ship if they don't like their assigned lot).

Then in 1999 Bezwoda opened the annual cancer meeting in Atlanta with a presentation on his results: He found that 8.5 years after megadose chemo and transplantation 60% of his patients were alive, whereas only 20% survived in the control arm. During the afternoon session those guys from the NIH presented their results. Except that their results were, let's say, not good. In one of their studies the researchers found "not even a modest improvement," and complication rates considerably higher than the control arm.

Later in the year a team of researchers pulled together by the president of the American Society of Clinical Oncology flew off to South Africa to take a look at Bezwoda's data. Upon arrival they requested the log books for the 158 patients Bezwoda reported treating. He gave them log books for 58, and said the rest had been lost (oh to live in a world without paperwork retention requirements). The data he did give them was horrible. One of Bezwoda's purported breast cancer patients was actually a man. The entire thing had been a sham. In essence, Bezwoda's protocol was completely fabricated and his fraud was the sole thing holding up a $4 billion industry that performed the procedure on approximately 40,000 women.

To recap: There was a new experimental procedure for treating breast cancer. The media created such a furor at its restricted use that the government was browbeaten into allowing anyone who could pay (or litigate their insurers into paying, a whole different tangent) have a completely unproven procedure. The procedure turned out to be an epic failure and the exposition thereof went almost completely unnoticed by the mainstream media. (Incidentally, in this case scientists were the ones who exposed the fraud, in the Wakefield case it took a journalist to do so).

I do not have a solution to this madness, but until we, collectively, realize that the status quo is madness we're never going to get any closer to coming up with one.

As a parting note I'm going to quote an email that was sent to Jonah Lehrer by a former academic scientist who now works for a large biotech company:
When I worked in a university lab, we’d find all sorts of ways to get a significant result. We’d adjust the sample size after the fact, perhaps because some of the mice were outliers or maybe they were handled incorrectly, etc. This wasn’t considered misconduct. It was just the way things were done. Of course, once these animals were thrown out [of the data] the effect of the intervention was publishable. 
Here we have to be explicit, in advance, of how many mice we are going to use, and what effect we expect to find. We can’t fudge the numbers after the experiment has been done… That’s because companies don’t want to begin an expensive clinical trial based on basic research that is fundamentally flawed or just a product of randomness.
Reagan famously said, "Trust but verify." That's not sufficient any more. The new maxim is closer to, "Doubt until you've figured out their incentives to lie."

08 January 2011

Fraud and science, part II

There are reports all over (they're actually just disguised pressers, since that's all the journalists read) about a new study in a journal no one reads (Pharmacoepidemiology and Drug Safety; it's the fourth Google result when searching its title...): antipsychotics are being prescribed off label in ever-increasing amounts, with scant data to support doing so. The actual study (gated, natch) is not linked by a single one of those articles.

It does not reflect well on the PR acumen of Science (capital S) or scientists that the article was first "published" and the presser released on a Friday. It's as though they're undermining their own cause by ensuring that no one will read this article. (What possible incentive could they have for that? See III.)

Because I'm not actually a journalist, I'm just going to tell you that the presser gives you the highlights fairly accurately and reading the article is (probably) a waste of your time (unlike real journalists, I did link you to the article itself). Here were those highlights:

  • Antipsychotic treatment prescribed during the surveyed doctors' visits nearly tripled from 6.2 million in 1995 to 16.7 million in 2008, the most recent year for which they had data. During this period, prescriptions for first-generation antipsychotics decreased from 5.2 million to 1 million.
  • Antipsychotic use for indications that lacked FDA approval by the end of 2008 increased from 4.4 million prescriptions during surveyed doctors' visits in 1995 to 9 million in 2008.
  • In 2008, more than half — 54 percent — of the surveyed prescriptions for the new-generation antipsychotics had uncertain evidence.
  • An estimated $6 billion was spent in 2008 on off-label use of antipsychotic medication nationwide, of which $5.4 billion was for uses with uncertain evidence.
  • Prescriptions for antipsychotics began dropping slightly in 2006, shortly after the FDA issued a warning about their safety.
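The arithmetic behind these bullets is worth checking. A few lines of Python, using only the numbers quoted above, confirm the "nearly tripled" claim and show how dominant the uncertain-evidence share of the spending is:

```python
# Figures taken from the press-release bullet points above.
total_1995, total_2008 = 6.2e6, 16.7e6           # antipsychotic prescriptions at surveyed visits
off_label_1995, off_label_2008 = 4.4e6, 9.0e6    # prescriptions lacking FDA approval
spend_total, spend_uncertain = 6.0e9, 5.4e9      # 2008 off-label spending, dollars

growth = total_2008 / total_1995                 # ~2.7x: "nearly tripled" checks out
off_label_share = off_label_2008 / total_2008    # ~54% of 2008 prescriptions were off label
uncertain_share = spend_uncertain / spend_total  # 90% of off-label dollars had uncertain evidence

print(f"growth: {growth:.1f}x, off-label share: {off_label_share:.0%}, "
      f"uncertain spending share: {uncertain_share:.0%}")
```

Note that nine of every ten off-label dollars went to uses with uncertain evidence; the headline numbers are not a rounding quirk.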
I'm not a psychiatrist (nor, God willing, will I ever be) but this is Bezwoda all over again. A practice built upon questionable assumptions and a few unproven theories has exploded into a multibillion-dollar industry for pharmaceutical companies and a publication factory for grant- and tenure-hungry faculty members.

The difference here is that there are too many genies for us to stuff back into the bottle if and when the whole house of cards falls down. This isn't someone falsifying data and then exhorting their colleagues to follow their invented protocols. This is a bunch of psychiatrists writing prescriptions that may work with no real basis for doing so beyond "everyone else is doing it and it might work." (If this sounds a lot like homeopathy to you, you're not far off... this is Homeopathy 2.0 where we use real drugs because they cost more and require med checks.)

Incidentally, if everyone else is doing it and you aren't, have fun trying to pay your mortgage. When word gets out that you won't exhaust every option for your patients not only will the malpractice suits come raining down, the waiting room will empty out. Patients don't care how well considered your refusal to participate in this charade is, they just want to get better and it's your job to make that happen.
 

Why Almost Everything You Hear About Medicine Is Wrong

If you follow the news about health research, you risk whiplash. First garlic lowers bad cholesterol, then—after more study—it doesn’t. Hormone replacement reduces the risk of heart disease in postmenopausal women, until a huge study finds that it doesn’t (and that it raises the risk of breast cancer to boot). Eating a big breakfast cuts your total daily calories, or not—as a study released last week finds. Yet even if biomedical research can be a fickle guide, we rely on it.
But what if wrong answers aren’t the exception but the rule? More and more scholars who scrutinize health research are now making that claim. It isn’t just an individual study here and there that’s flawed, they charge. Instead, the very framework of medical investigation may be off-kilter, leading time and again to findings that are at best unproved and at worst dangerously wrong. The result is a system that leads patients and physicians astray—spurring often costly regimens that won’t help and may even harm you.
 
It’s a disturbing view, with huge implications for doctors, policymakers, and health-conscious consumers. And one of its foremost advocates, Dr. John P.A. Ioannidis, has just ascended to a new, prominent platform after years of crusading against baseless health and medical claims. As the new chief of Stanford University’s Prevention Research Center, Ioannidis is cementing his role as one of medicine’s top mythbusters. “People are being hurt and even dying” because of false medical claims, he says: not quackery, but errors in medical research.
This is Ioannidis’s moment. As medical costs hamper the economy and impede deficit-reduction efforts, policymakers and businesses are desperate to cut them without sacrificing sick people. One no-brainer solution is to use and pay for only treatments that work. But if Ioannidis is right, most biomedical studies are wrong.

In just the last two months, two pillars of preventive medicine fell. A major study concluded there’s no good evidence that statins (drugs like Lipitor and Crestor) help people with no history of heart disease. The study, by the Cochrane Collaboration, a global consortium of biomedical experts, was based on an evaluation of 14 individual trials with 34,272 patients. Cost of statins: more than $20 billion per year, of which half may be unnecessary. (Pfizer, which makes Lipitor, responds in part that “managing cardiovascular disease risk factors is complicated”). In November a panel of the Institute of Medicine concluded that having a blood test for vitamin D is pointless: almost everyone has enough D for bone health (20 nanograms per milliliter) without taking supplements or calcium pills. Cost of vitamin D: $425 million per year.
Ioannidis, 45, didn’t set out to slay medical myths. A child prodigy (he was calculating decimals at age 3 and wrote a book of poetry at 8), he graduated first in his class from the University of Athens Medical School, did a residency at Harvard, oversaw AIDS clinical trials at the National Institutes of Health in the mid-1990s, and chaired the department of epidemiology at Greece’s University of Ioannina School of Medicine. But at NIH Ioannidis had an epiphany. “Positive” drug trials, which find that a treatment is effective, and “negative” trials, in which a drug fails, take the same amount of time to conduct. “But negative trials took an extra two to four years to be published,” he noticed. “Negative results sit in a file drawer, or the trial keeps going in hopes the results turn positive.” With billions of dollars on the line, companies are loath to declare a new drug ineffective. As a result of the lag in publishing negative studies, patients receive a treatment that is actually ineffective. That made Ioannidis wonder, how many biomedical studies are wrong?
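The distortion that publication lag creates can be sketched with a toy model. Assume, hypothetically (only the lag itself comes from the paragraph above), a field in which trial results split evenly between positive and negative, but negatives reach print three years late:

```python
def apparent_positive_share(years, pos_rate=0.5, lag=3, trials_per_year=100):
    """Fraction of the *published* literature that is positive, `years`
    after a field starts, if negative trials publish `lag` years late.
    Toy model: the multi-year lag is real; every other number is assumed."""
    published_pos = pos_rate * trials_per_year * years
    published_neg = (1 - pos_rate) * trials_per_year * max(0, years - lag)
    return published_pos / (published_pos + published_neg)

print(apparent_positive_share(3))    # 1.0 -- a young field looks uniformly positive
print(apparent_positive_share(5))    # ~0.71
print(apparent_positive_share(20))   # ~0.54 -- the bias fades but never fully clears
```

Anyone surveying a young or fast-moving literature under these assumptions sees far more "successes" than actually exist.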
His answer, in a 2005 paper: “the majority.” From clinical trials of new drugs to cutting-edge genetics, biomedical research is riddled with incorrect findings, he argued. Ioannidis deployed an abstruse mathematical argument to prove this, which some critics have questioned. “I do agree that many claims are far more tenuous than is generally appreciated, but to ‘prove’ that most are false, in all areas of medicine, one needs a different statistical model and more empirical evidence than Ioannidis uses,” says biostatistician Steven Goodman of Johns Hopkins, who worries that the most-research-is-wrong claim “could promote an unhealthy skepticism about medical research, which is being used to fuel anti-science fervor.”
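For the curious, the heart of that 2005 argument is less abstruse than it sounds: it is essentially Bayes' rule. If R is the pre-study odds that a tested relationship is real, 1−β the study's power, and α the significance threshold, the probability that a "positive" finding is actually true works out to (1−β)R / ((1−β)R + α). A minimal sketch, with illustrative parameter values that are not taken from the paper:

```python
def ppv(R, power, alpha=0.05):
    """Post-study probability that a statistically significant finding
    is true (simplified, bias-free form of the 2005 argument).
    R: pre-study odds the relationship is real; power = 1 - beta."""
    return (power * R) / (power * R + alpha)

# Exploratory research: long odds, modest power -> most "discoveries" are false.
print(round(ppv(R=0.01, power=0.2), 3))   # 0.038

# Well-powered confirmatory trial of a plausible hypothesis.
print(round(ppv(R=0.5, power=0.8), 3))    # 0.889
```

Under the first set of assumptions, fewer than one in twenty exploratory "hits" is real, which is the sense in which a majority of findings can be wrong without any fraud at all.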

Even a cursory glance at medical journals shows that once heralded studies keep falling by the wayside. Two 1993 studies concluded that vitamin E prevents cardiovascular disease; that claim was overturned by more rigorous experiments, in 1996 and 2000. A 1996 study concluding that estrogen therapy reduces older women’s risk of Alzheimer’s was overturned in 2004. Numerous studies concluding that popular antidepressants work by altering brain chemistry have now been contradicted (the drugs help with mild and moderate depression, when they work at all, through a placebo effect), as has research claiming that early cancer detection (through, say, PSA tests) invariably saves lives. The list goes on.
Despite the explosive nature of his charges, Ioannidis has collaborated with some 1,500 other scientists, and Stanford, epitome of the establishment, hired him in August to run the preventive-medicine center. “The core of medicine is getting evidence that guides decision making for patients and doctors,” says Ralph Horwitz, chairman of the department of medicine at Stanford. “John has been the foremost innovative thinker about biomedical evidence, so he was a natural for us.”
Ioannidis’s first targets were shoddy statistics used in early genome studies. Scientists would test one or a few genes at a time for links to virtually every disease they could think of. That just about ensured they would get “hits” by chance alone. When he began marching through the genetics literature, it was like Sherman laying waste to Georgia: most of these candidate genes could not be verified. The claim that variants of the vitamin D–receptor gene explain three quarters of the risk of osteoporosis? Wrong, he and colleagues proved in 2006: the variants have no effect on osteoporosis. That scores of genes identified by the National Human Genome Research Institute can be used to predict cardiovascular disease? No (2009). That six gene variants raise the risk of Parkinson’s disease? No (2010). Yet claims that gene X raises the risk of disease Y contaminate the scientific literature, affecting personal health decisions and sustaining the personal genome-testing industry.
Statistical flukes also plague epidemiology, in which researchers look for links between health and the environment, including how people behave and what they eat. A study might ask whether coffee raises the risk of joint pain, or headaches, or gallbladder disease, or hundreds of other ills. “When you do thousands of tests, statistics says you’ll have some false winners,” says Ioannidis. Drug companies make a mint on such dicey statistics. By testing an approved drug for other uses, they get hits by chance, “and doctors use that as the basis to prescribe the drug for this new use. I think that’s wrong.” Even when a claim is disproved, it hangs around like a deadbeat renter you can’t evict. Years after the claim that vitamin E prevents heart disease had been overturned, half the scientific papers mentioning it cast it as true, Ioannidis found in 2007.
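The "false winners" point is easy to demonstrate: run a few thousand significance tests on pure noise and about 5 percent of them will clear the usual p < 0.05 bar anyway. A short simulation:

```python
import random

random.seed(0)
n_tests, alpha = 10_000, 0.05

# Under the null hypothesis, p-values are uniform on [0, 1], so each
# test independently has probability alpha of being a spurious "discovery".
false_winners = sum(random.random() < alpha for _ in range(n_tests))
print(false_winners, "spurious associations out of", n_tests)  # roughly 500
```

Ten thousand coffee-versus-ailment comparisons would hand a motivated researcher hundreds of publishable "links" from nothing at all.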
The situation isn’t hopeless. Geneticists have mostly mended their ways, tightening statistical criteria, but other fields still need to clean house, Ioannidis says. Surgical practices, for instance, have not been tested to nearly the extent that medications have. “I wouldn’t be surprised if a large proportion of surgical practice is based on thin air, and [claims for effectiveness] would evaporate if we studied them closely,” Ioannidis says. That would also save billions of dollars. George Lundberg, former editor of The Journal of the American Medical Association, estimates that strictly applying criteria like Ioannidis pushes would save $700 billion to $1 trillion a year in U.S. health-care spending.
Of course, not all conventional health wisdom is wrong. Smoking kills, being morbidly obese or severely underweight makes you more likely to die before your time, processed meat raises the risk of some cancers, and controlling blood pressure reduces the risk of stroke. The upshot for consumers: medical wisdom that has stood the test of time—and large, randomized, controlled trials—is more likely to be right than the latest news flash about a single food or drug.

Petulant Skeptic: Why nearly everything Newsweek writes about medicine is wrong

29 January 2011


Why nearly everything Newsweek writes about medicine is wrong

Newsweek has an article about Dr John Ioannidis and his work discrediting many medical studies. Incidentally, The Atlantic wrote about him and his work back in November, as did The New Yorker in December. All of these articles highlight Ioannidis' findings that many studies are flawed, but they take different approaches and draw different conclusions from his work. Newsweek's take is by far the worst and most harmful.

In any case Newsweek informs us:
In just the last two months, two pillars of preventive medicine fell. A major study concluded there’s no good evidence that statins (drugs like Lipitor and Crestor) help people with no history of heart disease. The study, by the Cochrane Collaboration, a global consortium of biomedical experts, was based on an evaluation of 14 individual trials with 34,272 patients.
Combining this "fact" with Ioannidis' findings they go on to boldly assert: "Even a cursory glance at medical journals shows that once heralded studies keep falling by the wayside."
Let's take a look at their statin example, since it characterizes the rest of the piece nicely. Newsweek tells us what the Cochrane Collaboration is, yet omits that it does not conduct clinical studies; it conducts reviews, often in the form of meta-analyses. There's a qualitative difference between a meta-analysis and a study: studies examine real patients, meta-analyses examine data. Here is Dr Mark Crislip's explanation of why this difference matters:
The studies included in a meta-analysis are often of suboptimal quality. Many [meta-analyses] spend time bemoaning the lack of quality studies they are about to stuff into their study grinder. Then, despite knowing that the input is poor quality, they go ahead and make a sausage. The theory, as I said last week, is that if you collect many individual cow pies into one big pile, the manure transmogrifies into gold. I still think of it as a case of GIGO: Garbage In, Garbage Out.

It has always been my understanding that a meta-analysis was used in lieu of a quality clinical trial. Once you had a few high quality studies, you could ignore the conclusions of a meta-analysis.
Not any longer, and not if you're Newsweek. Instead they prefer to ignore these substantive differences and merely inform us that one study has invalidated another. In fact they go so far as to make it sound as though the Cochrane review were an actual clinical study by telling us how many patients it "evaluated." In case you're wondering whether meta-analyses can be used as stand-ins for clinical trials, the NEJM published an article that explored just that:
We identified 12 large randomized, controlled trials and 19 meta-analyses addressing the same questions. For a total of 40 primary and secondary outcomes, agreement between the meta-analyses and the large clinical trials was only fair (kappa= 0.35; 95 percent confidence interval, 0.06 to 0.64). The positive predictive value of the meta-analyses was 68 percent, and the negative predictive value 67 percent. However, the difference in point estimates between the randomized trials and the meta-analyses was statistically significant for only 5 of the 40 comparisons (12 percent). Furthermore, in each case of disagreement a statistically significant effect of treatment was found by one method, whereas no statistically significant effect was found by the other.
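For readers who don't speak biostatistics: kappa measures agreement beyond what chance alone would produce (0 is chance-level, 1 is perfect), so 0.35 is genuinely mediocre. The 2x2 counts below are hypothetical, chosen only so that the arithmetic reproduces the figures quoted above:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both methods positive, d = both negative, b/c = disagreements."""
    n = a + b + c + d
    observed = (a + d) / n
    # Chance agreement expected from the marginal totals alone.
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical split of the 40 outcome comparisons
# (rows: meta-analysis verdict, columns: large-trial verdict).
a, b, c, d = 13, 6, 7, 14
print(round(cohens_kappa(a, b, c, d), 2))   # 0.35
print(round(a / (a + b), 2))                # PPV of the meta-analyses: 0.68
print(round(d / (c + d), 2))                # NPV: 0.67
```

In plain terms: a meta-analysis verdict predicted the corresponding large-trial verdict only about two times in three, which is hardly a basis for declaring one "study" to have overturned another.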
Beyond this singular lack of understanding, Newsweek devotes many paragraphs to Ioannidis' work on statistical problems, yet only half of one sentence to the fact that his own statistical methods are controversial. In fact, the sentence calling his methods controversial is buttressed by Newsweek telling us of Ioannidis' childhood genius and that his mathematical arguments are "abstruse"; evidently this complexity and his childhood genius were sufficient to convince the author of his accuracy, and ought to be enough for the readers as well. (Handy rule of thumb: when you encounter a cute vignette from someone's childhood outside of a profile piece, the author is using it to paper over a logical deficiency.)

Indeed, Newsweek spends a paragraph on Ioannidis' work discrediting statistical techniques (formerly) used in genetic attribution of disease and fails to tell us how that relates, at all, to medical studies. They mention that it matters to the results genotyping companies give their customers, but how it relates to the article's thesis is ignored. This is the same as me telling you, "There are many problems with Mitsubishi automobiles," then spending a paragraph on the dismal quality-control conditions at Mitsubishi's television factories. It's the same aspersion-by-association-and-implication nonsense that I have written about before.

Perhaps the worst offense of all is that the article's central thesis is, "Everything you hear about medicine is wrong." Except that all of their reasons for discarding previously held findings are... new findings. Here they are telling us about vitamin E, "Two 1993 studies concluded that vitamin E prevents cardiovascular disease; that claim was overturned by more rigorous experiments, in 1996 and 2000." This is like a conspiracy theorist telling you that you can't trust anything a government official says and then quoting Robert Gibbs to evidence his opinion.

Don't get me wrong, there are plenty of problems with how modern science and medicine interact. Near the top of that list, though, are articles just like this one. Consider that Newsweek has published literally dozens of articles about medical studies that fail the article's own central criterion:
[M]edical wisdom that has stood the test of time—and large, randomized, controlled trials—is more likely to be right than the latest news flash about a single food or drug.
That Newsweek has the gall to publish a poorly reasoned article decrying the hype over recently released medical results without addressing, or even mentioning, their own habit of doing precisely that is a problem. A larger problem, though, is that the editors of Newsweek are blind to the hypocrisy of hyping Ioannidis' findings in an article about how often such findings turn out to be wrong.