An AI-powered prediction market may give scientists an efficient, cost-effective tool to tackle the reproducibility problem, a team of researchers reports.
Credit: National Cancer Institute/Unsplash
Scientists are increasingly concerned that the lack of reproducibility in research may lead to, among other things, inaccuracies that slow scientific output and diminished public trust in science.
Now, a team of researchers reports that creating a prediction market in which artificially intelligent (AI) agents make predictions, or bets, on hypothetical replication studies could lead to an explainable, scalable approach to estimating confidence in published scholarly work.
Replication of experiments and studies, a critical step in the scientific process, helps provide confidence in results and indicates whether they generalize across contexts, according to Sarah Rajtmajer, assistant professor in information sciences and technology, Penn State. As experiments have become more complex, costly and time-consuming, scientists increasingly lack the resources for robust replication efforts, a shortfall now commonly called the “replication crisis.”
“As scientists, we want to do work, and we want to know that our work is good,” said Rajtmajer. “Our approach to help address the replication crisis is to use AI to help predict whether a finding would replicate if repeated and why.”
Crowdsourced prediction markets work like betting parlors, except that participants wager on real-world events rather than horse races or football games. Such markets have already been used to help anticipate everything from election results to the spread of infectious diseases.
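The article does not say which market mechanism the team used, but a standard choice for automated prediction markets is Hanson's logarithmic market scoring rule (LMSR), in which the current price of a "yes" share can be read as the market's probability that an event occurs. The sketch below is illustrative only; the liquidity parameter `b` and the trade size are assumptions, not values from the study.

```python
import math

def lmsr_cost(q_yes, q_no, b=10.0):
    """Cost function for Hanson's logarithmic market scoring rule."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price(q_yes, q_no, b=10.0):
    """Current price of the YES outcome, interpretable as a probability."""
    e_yes = math.exp(q_yes / b)
    e_no = math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

def buy_yes(q_yes, q_no, shares, b=10.0):
    """Cost to a trader of buying `shares` YES shares at the current state."""
    return lmsr_cost(q_yes + shares, q_no, b) - lmsr_cost(q_yes, q_no, b)

# A market on "this finding replicates" starts at 50/50; a confident
# trader buys YES shares, pushing the implied probability up.
p0 = lmsr_price(0, 0)       # starts at 0.5
cost = buy_yes(0, 0, 5)     # what the trader pays to move the market
p1 = lmsr_price(5, 0)       # price after the trade is above 0.5
```

The final market price aggregates the confidence of all traders, which is what makes the mechanism attractive for forecasting replication outcomes.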
“What inspired us was the success of prediction markets in precisely this task — that is, when you place researchers in a market and give them some cash to bet on outcomes of replications, they’re pretty good at it,” said Rajtmajer, who is a research associate at the Rock Ethics Institute and an Institute for Computational and Data Sciences associate. “But human-run prediction markets are expensive and slow. And ideally, you should run replications in parallel to the market so there is some ground truth on which researchers are betting. It just doesn’t scale.”
A bot-based approach scales, and it offers some explainability: its findings can be traced to trading patterns and to the features of the papers and claims that influenced the bots’ behavior. In the team’s approach, bots are trained to recognize key features in academic research papers, such as authors and institutions, statistics, linguistic cues, downstream mentions, and similar studies in the literature, and then to assess how confident they are that the study is robust enough to replicate. Just as a human bets on the outcome of a sporting event, each bot then bids according to its level of confidence. The AI-powered bots’ results are compared against bets made by humans in prediction markets.
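As a toy illustration of that pipeline, a bot might map extracted paper features to a replication-confidence score and then stake an amount proportional to how far its confidence sits from the current market price. The features, weights, and betting rule below are hypothetical; the team's actual models are not described here.

```python
import math

def bot_confidence(features, weights, bias=0.0):
    """Map paper features to a replication confidence in (0, 1)
    via a simple logistic model (illustrative weights, not the paper's)."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features a bot might extract from a paper:
paper = {
    "log_sample_size": math.log(250),   # larger samples -> more confidence
    "p_value": 0.001,                   # smaller p-values -> more confidence
    "downstream_mentions": 12,          # later papers discussing the claim
}
weights = {"log_sample_size": 0.4, "p_value": -50.0, "downstream_mentions": 0.05}

conf = bot_confidence(paper, weights)
market_price = 0.5                      # market's current implied probability
stake = abs(conf - market_price) * 10.0 # simple proportional betting rule
side = "replicates" if conf > market_price else "fails"
```

Because the confidence is a transparent function of named features, the bot's bet can be explained by pointing at which features pushed the score up or down.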
C. Lee Giles, the David Reese Professor at the College of Information Sciences and Technology, said that while prediction markets with human participants are well established and have been used successfully in a number of fields, applying them to research results is novel.
“That’s probably the interesting and unique thing we’re doing here,” said Giles, who is also an ICDS associate. “We have already seen that humans are pretty good at using prediction markets. But, here, we’re using bots for our market, which is a little unusual and sort of fun.”
According to the researchers, who presented their results at a recent meeting of the Association for the Advancement of Artificial Intelligence, the system provided confidence scores for 68 of the 192 papers (about 35%) that had ground-truth replication studies. On that set of papers, its accuracy was approximately 90%.
Because humans tend to predict research reproducibility more accurately, while bots can operate at scale, Giles and Rajtmajer suggest that a hybrid approach, with humans and bots working together, may deliver the best of both worlds: higher accuracy in a system that still scales.
“Maybe we can train the bots in the presence of human traders every so often, and then deploy them offline when we need a quick result, or when we need replication efforts at scale,” said Rajtmajer. “Moreover, we can create bot markets that also leverage that intangible human wisdom. That is something we are working on right now.”
Original Article: Scientists tap AI betting agents to help solve research reproducibility concerns