Scientists like to think of science as self-correcting. To an alarming degree, it is not.
“I SEE a train wreck looming,” warned Daniel Kahneman, an eminent psychologist, in an open letter last year. The premonition concerned research on a phenomenon known as “priming”. Priming studies suggest that decisions can be influenced by apparently irrelevant actions or events that took place just before the cusp of choice. They have been a boom area in psychology over the past decade, and some of their insights have already made it out of the lab and into the toolkits of policy wonks keen on “nudging” the populace.
Dr Kahneman and a growing number of his colleagues fear that a lot of this priming research is poorly founded. Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed. In April, for instance, a paper in PLoS ONE, a journal, reported that nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.
The idea that the same experiments always get the same results, no matter who performs them, is one of the cornerstones of science’s claim to objective truth. If a systematic campaign of replication does not lead to the same results, then either the original research is flawed (as the replicators claim) or the replications are (as many of the original researchers on priming contend). Either way, something is awry.
To err is all too common
It is tempting to see the priming fracas as an isolated case in an area of science—psychology—easily marginalised as soft and wayward. But irreproducibility is much more widespread. A few years ago scientists at Amgen, an American drug company, tried to replicate 53 studies that they considered landmarks in the basic science of cancer, often co-operating closely with the original researchers to ensure that their experimental technique matched the one used first time round. According to a piece they wrote last year in Nature, a leading scientific journal, they were able to reproduce the original results in just six. Months earlier Florian Prinz and his colleagues at Bayer HealthCare, a German pharmaceutical giant, reported in Nature Reviews Drug Discovery, a sister journal, that they had successfully reproduced the published results in just a quarter of 67 seminal studies.
The governments of the OECD, a club of mostly rich countries, spent $59 billion on biomedical research in 2012, nearly double the figure in 2000. One of the justifications for this is that basic-science results provided by governments form the basis for private drug-development work. If companies cannot rely on academic research, that reasoning breaks down. When an official at America’s National Institutes of Health (NIH) reckons, despairingly, that researchers would find it hard to reproduce at least three-quarters of all published biomedical findings, the public part of the process seems to have failed.
Academic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think.
Various factors contribute to the problem. Statistical mistakes are widespread. The peer reviewers who evaluate papers before journals commit to publishing them are much worse at spotting mistakes than they or others appreciate. Professional pressure, competition and ambition push scientists to publish more quickly than would be wise. A career structure which lays great stress on publishing copious papers exacerbates all these problems. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.”
First, the statistics, which if perhaps off-putting are quite crucial.
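To see why the statistics matter, consider a back-of-the-envelope sketch (the specific numbers here are illustrative assumptions, not figures from any study): suppose researchers test 1,000 hypotheses of which only 10% are actually true, using the conventional 5% significance threshold and 80% statistical power. A short calculation shows how large a share of the "significant" findings would then be false.

```python
def false_positive_share(n_hypotheses, true_fraction, alpha, power):
    """Fraction of statistically significant results that are false,
    given a prior probability that a hypothesis is true, a significance
    level (alpha) and statistical power."""
    n_true = n_hypotheses * true_fraction
    n_false = n_hypotheses - n_true
    true_positives = n_true * power        # real effects correctly detected
    false_positives = n_false * alpha      # null effects wrongly flagged
    return false_positives / (true_positives + false_positives)

# Illustrative assumption: 1,000 hypotheses, 10% true, alpha=0.05, power=0.8
share = false_positive_share(1000, 0.10, 0.05, 0.80)
print(f"{share:.0%} of significant results would be false")
```

Under these assumed conditions, 80 of the 100 true effects are detected, but so are 45 of the 900 null effects, meaning roughly a third of all published "positives" would be wrong even before any questionable research practices enter the picture.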