
Sexy, But Biased


When scientists, scholarly reviewers, and the media focus only on the most sensational results of research studies, the resulting distortions can harm scientific progress and the public.

I write this column (and started ResearchBlogging.org) primarily because I believe it’s important to consider peer-reviewed research when discussing science. In an age when anyone can have a blog, it’s easier than ever to disseminate pseudoscience, which makes identifying and reporting on unbiased scientific research all the more important.

But what if the scientific publication process itself is biased? Certainly we all get much more excited when a novel result is found: a new treatment for depression, a new way to lose weight, prevent heart disease, or treat cancer. But what about all the studies that don’t succeed? Are peer reviewers less likely to approve publication of a finding that a treatment doesn’t work? Might researchers be tempted to sweep uninteresting results under the rug?

Unfortunately, in many fields, the answer to these questions appears to be “yes.” It’s a documented problem known as publication bias. In the case of medical research, publication bias can have serious consequences. What if a drug company sponsored several clinical trials of a drug, some of which worked and some of which did not? The company could profit handsomely if the results favoring its drug were the only ones published. The potential for this sort of abuse is large enough that the US government now requires that drug trials be registered and their results reported on its site, ClinicalTrials.gov.

But posting to a government database isn’t the same as publication in a scientific journal, where work, especially trials of promising new drugs, is likely to reach a much larger audience through the mainstream media and blogs. In these cases, publication bias can still have a big impact. Now that ClinicalTrials.gov has been active for over 10 years, it’s possible to put some hard numbers on publication bias. The UK neuroscientist who blogs as “Neuroskeptic” discussed the most recent such study earlier this month: Florence Bourgeois led an analysis of the results of 546 trials in five major categories of drugs, published in the Annals of Internal Medicine.

The researchers found that only 66 percent of the trials conducted between 2000 and 2006 were actually published. Industry-funded trials were significantly less likely to be published than government- or organization-funded trials, and they were also more likely to report positive results. That pattern suggests industry may be stacking the deck by submitting only positive results for publication. Since drug companies don’t profit from unsuccessful drugs, they have a clear motive for suppressing studies that don’t favor their interests.
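
To get a feel for how such a comparison works, here’s a minimal sketch in Python: a chi-squared test of publication status against funding source. The counts are my own illustration, chosen only so the totals match the 546 trials and roughly 66 percent overall publication rate above; they are not the actual breakdown from the Bourgeois study.

```python
# Illustrative chi-squared test: does publication rate depend on
# funding source? Counts are hypothetical, NOT the study's real data.
from scipy.stats import chi2_contingency

#          published  unpublished
table = [
    [190, 130],  # hypothetical industry-funded trials
    [170,  56],  # hypothetical government/organization-funded trials
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
# A small p-value means publication rates differ by funding source
# more than chance alone would explain.
```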

But publication bias doesn’t occur only in drug research. Even when there’s no profit motive, scientists still want to be seen as producing interesting, positive results. I’ve frequently heard researchers say an experiment “didn’t work,” meaning not that they made a procedural error, but that the results weren’t interesting or surprising. In March, UK medical writer Helen Jaques blogged about a 2010 study of publication bias in research on psychotherapy for depression. While psychotherapy studies typically don’t get drug-industry backing, researchers are still interested in demonstrating that therapy is effective. In a raw analysis of 117 studies, the study’s authors found, on average, a moderate benefit of therapy. But when the results were adjusted for publication bias, the estimated effect shrank so much that the benefit of therapy could only be described as small. The research was published in the British Journal of Psychiatry.
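
To make the mechanism concrete, here’s a toy simulation in Python. It is my own construction, not the adjustment method used in the BJP paper: many studies of a genuinely small effect are run, but only those reaching nominal significance get “published,” and the published average ends up looking moderate even though the true effect is small.

```python
# Toy simulation of publication bias inflating an effect size.
# Assumed parameters (mine, for illustration): true standardized
# effect d = 0.2 ("small"), 117 studies, ~30 subjects per arm.
import numpy as np

rng = np.random.default_rng(42)
true_d, n_studies, n_per_arm = 0.2, 117, 30

se = np.sqrt(2 / n_per_arm)                   # approx. standard error of d
observed = rng.normal(true_d, se, n_studies)  # each study's estimate

# Crude publication filter: only significant positive results appear.
published = observed[observed / se > 1.96]

print(f"true effect:         d = {true_d:.2f}")
print(f"mean of all studies: d = {observed.mean():.2f}")
print(f"mean of published:   d = {published.mean():.2f}")
# The published mean sits well above the true effect: exactly why a
# raw meta-analysis can look moderate while the bias-adjusted
# estimate is small.
```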
