
via University of Sheffield
A new artificial intelligence-based algorithm that can accurately predict which Twitter users will spread disinformation before they actually do so has been developed by researchers from the University of Sheffield.
- University of Sheffield researchers have developed an artificial intelligence-based algorithm that can predict, with 79.7 per cent accuracy, which Twitter users are likely to share content from unreliable news sources before they actually do so
- Study found that Twitter users who spread disinformation mostly tweet about politics or religion, whereas users who share reliable sources of news tweet more about their personal lives
- Research also found that Twitter users who share disinformation use impolite language more frequently than users who share reliable news sources
- Findings could help governments and social media companies such as Twitter and Facebook better understand user behaviour and help them design more effective models for tackling the spread of disinformation
A team of researchers, led by Yida Mu and Dr Nikos Aletras from the University’s Department of Computer Science, has developed a method for predicting whether a social media user is likely to share content from unreliable news sources. Their findings have been published in the journal PeerJ.
The researchers analysed over 1 million tweets from approximately 6,200 Twitter users by developing new natural language processing methods – ways to help computers process and understand huge amounts of language data. The tweets they studied were all tweets that were publicly available for anyone to see on the social media platform.
Twitter users were grouped into two categories as part of the study – those who have shared unreliable news sources and those who only share stories from reliable news sources. The data was used to train a machine-learning algorithm that can predict, with 79.7 per cent accuracy, whether a user will repost content from unreliable sources sometime in the future.
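The study's own neural NLP models are not reproduced here, but the general setup it describes – representing each user by the text of their past tweets and training a binary classifier over the two groups – can be sketched as follows. This is a minimal illustration using TF-IDF features and logistic regression on synthetic toy data; the tweet histories, labels and model choice are assumptions for demonstration, not the researchers' actual pipeline.

```python
# Minimal sketch of the setup described above: each user is represented
# by the text of their tweet history, and a binary classifier predicts
# whether they will go on to share content from unreliable sources.
# The histories and labels below are synthetic toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# One concatenated tweet history per user (synthetic).
histories = [
    "the government and the media are lying about everything",
    "liberal politicians control the media agenda",
    "so excited for my birthday party gonna be great",
    "in such a good mood today wanna hang out with friends",
]
labels = [1, 1, 0, 0]  # 1 = shared unreliable sources, 0 = reliable only

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(histories, labels)

# Score a previously unseen user's tweet history.
prob_unreliable = model.predict_proba(
    ["the media hides what the government does"]
)[0][1]
print(f"probability of sharing unreliable content: {prob_unreliable:.2f}")
```

At realistic scale (over a million tweets from roughly 6,200 users, as in the study), the same two-class framing applies, but feature extraction and modelling would be considerably more sophisticated.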
The study found that Twitter users who shared stories from unreliable sources were more likely to tweet about politics or religion and to use impolite language. They often posted tweets containing words such as ‘liberal’, ‘government’ and ‘media’, and their tweets frequently related to politics in the Middle East and Islam, often mentioning ‘Islam’ or ‘Israel’.
In contrast, the study found that Twitter users who shared stories from reliable news sources often tweeted about their personal lives, such as their emotions and interactions with friends. This group of users often posted tweets with words such as ‘mood’, ‘wanna’, ‘gonna’, ‘I’ll’, ‘excited’ and ‘birthday’.
Findings from the study could help social media companies such as Twitter and Facebook develop ways to tackle the spread of disinformation online. They could also help social scientists and psychologists improve their understanding of such user behaviour on a large scale.
Dr Nikos Aletras, Lecturer in Natural Language Processing at the University of Sheffield, said: “Social media has become one of the most popular ways that people access the news, with millions of users turning to platforms such as Twitter and Facebook every day to find out about key events that are happening both at home and around the world. However, social media has become the primary platform for spreading disinformation, which is having a huge impact on society and can influence people’s judgement of what is happening in the world around them.
“As part of our study, we identified certain trends in user behaviour that could help with those efforts – for example, we found that users who are most likely to share news stories from unreliable sources often tweet about politics or religion, whereas those who share stories from reliable news sources often tweeted about their personal lives.
“We also found that the correlation between the use of impolite language and the spread of unreliable content can be attributed to high online political hostility.”
Yida Mu, a PhD student at the University of Sheffield, said: “Studying and analysing the behaviour of users sharing content from unreliable news sources can help social media platforms to prevent the spread of fake news at the user level, complementing existing fact-checking methods that work on the post or the news source level.”