via Stanford University
Researchers at Stanford University wanted to see if AI-generated arguments could change minds on controversial hot-button issues. It worked.
Suddenly, the world is abuzz with chatter about chatbots. Artificially intelligent agents, like ChatGPT, have shown themselves to be remarkably adept at conversing in a very human-like fashion. Implications stretch from the classroom to Capitol Hill. ChatGPT, for instance, recently passed written exams at top business and law schools, among other feats both awe-inspiring and alarming.
Researchers at Stanford University’s Polarization and Social Change Lab and the Institute for Human-Centered Artificial Intelligence (HAI) wanted to probe the boundaries of AI’s political persuasiveness by testing its ability to sway real humans on some of the hottest social issues of the day — an assault weapon ban, the carbon tax, and paid parental leave, among others.
“AI fared quite well. Indeed, AI-generated persuasive appeals were as effective as ones written by humans in persuading human audiences on several political issues,” said Hui “Max” Bai, a postdoctoral researcher in the Polarization and Social Change Lab and first author on a new preprint describing the experiment.
The research team, led by Robb Willer, a professor of sociology, psychology, and organizational behavior in the Stanford School of Humanities and Sciences and director of the Polarization and Social Change Lab, used GPT-3, the same large language model that fuels ChatGPT. They prompted the model to craft persuasive messages on several controversial topics.
They then had thousands of real human beings read those persuasive texts. The readers were randomly assigned texts — sometimes written by AI, other times crafted by humans. In all cases, participants were asked to declare their positions on the issues before and after reading. The research team could then gauge how much each message had shifted readers’ positions, and assess which authors had been most persuasive and why.
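The pre/post design described above can be sketched in a few lines of code. This is an illustrative sketch, not the study's actual analysis: the participant data and function names below are hypothetical, assuming each reader reports a position on a zero-to-100 scale before and after reading, and that the mean post-minus-pre shift per condition estimates how persuasive that condition's messages were.

```python
from statistics import mean

def mean_attitude_shift(participants):
    """Average post-minus-pre attitude change (0-100 scale) for one condition.

    `participants` is a list of (pre, post) position pairs.
    """
    return mean(post - pre for pre, post in participants)

# Hypothetical (pre, post) positions for readers randomly assigned to
# AI-written vs. human-written messages on one issue.
ai_readers = [(40, 46), (55, 58), (62, 65)]
human_readers = [(41, 45), (50, 54), (60, 62)]

ai_shift = mean_attitude_shift(ai_readers)        # mean of 6, 3, 3 -> 4.0
human_shift = mean_attitude_shift(human_readers)  # mean of 4, 4, 2 -> ~3.33
```

Comparing `ai_shift` against `human_shift` across randomly assigned groups is the basic logic of the comparison; the actual study would also involve significance testing across thousands of participants and multiple issues.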
Across all three comparisons conducted, the AI-generated messages were “consistently persuasive to human readers.” Though the effect sizes were relatively small — a few points on a scale of zero to 100 — such shifts, extrapolated to a voting population on a polarizing topic, could prove significant.
For instance, the authors report that AI-generated messages were at least as persuasive as human-generated messages across all topics, and that on a smoking ban, gun control, a carbon tax, an increased child tax credit, and a parental leave program, participants became “significantly more supportive” of the policies after reading AI-produced texts.
As an additional metric, the team asked participants to describe the qualities of the texts they had read. Readers consistently rated the AI-generated messages as more factual and logical, less angry, and less reliant on storytelling as a persuasive technique.
The researchers undertook their study of political persuasiveness not as a steppingstone to a new era of AI-infused political discourse but as a cautionary tale about the potential for things to go awry. Chatbots, they say, have serious implications for democracy and for national security.
The authors worry about the potential for harm if such models are used in a political context. Large language models such as GPT-3 could be exploited by ill-intentioned domestic or foreign actors to run mis- and disinformation campaigns, or to mass-produce misleading content for as-yet-unforeseen political purposes.
“Clearly, AI has reached a level of sophistication that raises some high-stakes questions for policy- and lawmakers that demand their attention,” Willer said. “AI has the potential to influence political discourse, and we should get out in front of these issues from the start.”
Persuasive AI, Bai noted, could power mass-scale campaigns built on suspect information: lobbying, generating online comments, writing peer-to-peer text messages, or even producing letters to the editors of influential print media.
“These concerning findings call for immediate consideration of regulations of the use of AI for political activities,” Bai said.
Original Article: AI’s Powers of Political Persuasion