
Figure: The first and second columns show the original images and the manipulated ones, respectively. The black-and-white images in the third column are the corresponding binary ground-truth masks. Columns 4 and 5 show the predicted masks and generated CAMs for manipulated images from the Face2Face (rows 1-3) and NeuralTextures (rows 4-6) datasets.
(Mazaheri & Roy-Chowdhury, 2022)
Two-pronged technique detects manipulated facial expressions and identity swaps
Computer scientists at UC Riverside can detect manipulated facial expressions in deepfake videos with higher accuracy than current state-of-the-art methods. The method also works as well as current methods in cases where the facial identity, but not the expression, has been swapped, leading to a generalized approach to detect any kind of facial manipulation. The achievement brings researchers a step closer to developing automated tools for detecting manipulated videos that contain propaganda or misinformation.
Developments in video editing software have made it easy to exchange the face of one person for another and to alter the expressions on original faces. As unscrupulous leaders and individuals deploy manipulated videos to sway political or social opinions, the ability to identify these videos is considered by many essential to protecting free democracies. Methods exist that can detect with reasonable accuracy when faces have been swapped. But identifying faces where only the expressions have been changed is more difficult, and to date no reliable technique exists.
“What makes the deepfake research area more challenging is the competition between the creation and detection and prevention of deepfakes which will become increasingly fierce in the future. With more advances in generative models, deepfakes will be easier to synthesize and harder to distinguish from real,” said paper co-author Amit Roy-Chowdhury, a Bourns College of Engineering professor of electrical and computer engineering.
The UC Riverside method divides the task into two components within a deep neural network. The first branch discerns facial expressions and feeds information about the regions that contain the expression, such as the mouth, eyes, or forehead, into a second branch, known as an encoder-decoder. The encoder-decoder architecture is responsible for manipulation detection and localization.
The framework, called Expression Manipulation Detection, or EMD, can both detect and localize the specific regions within an image that have been altered.
“Multi-task learning can leverage prominent features learned by facial expression recognition systems to benefit the training of conventional manipulation detection systems. Such an approach achieves impressive performance in facial expression manipulation detection,” said doctoral student Ghazal Mazaheri, who led the research.
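To make the two-branch design described above concrete, here is a minimal sketch in PyTorch. This is not the authors' released EMD implementation; the module names (ExpressionBranch, EncoderDecoder), the layer sizes, and the simple channel-wise fusion of expression features into the encoder-decoder are illustrative assumptions that only mirror the description in this article.

```python
# Hypothetical sketch of a two-branch manipulation detector, assuming:
# branch 1 learns expression features, branch 2 (encoder-decoder) uses them
# for per-pixel localization and a real/fake decision. Not the official EMD code.
import torch
import torch.nn as nn

class ExpressionBranch(nn.Module):
    """First branch: recognizes facial expressions and yields spatial features
    that highlight expression-bearing regions (mouth, eyes, forehead)."""
    def __init__(self, num_expressions=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_expressions)
        )

    def forward(self, x):
        feats = self.features(x)           # spatial expression features
        logits = self.classifier(feats)    # auxiliary expression prediction
        return feats, logits

class EncoderDecoder(nn.Module):
    """Second branch: consumes the image plus expression features and predicts
    a per-pixel manipulation mask (localization) and a real/fake score (detection)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + 64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),   # mask logits
        )
        self.detector = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1)  # real/fake logit
        )

    def forward(self, image, expr_feats):
        x = torch.cat([image, expr_feats], dim=1)  # fuse expression cues with the image
        z = self.encoder(x)
        return self.decoder(z), self.detector(z)

# Multi-task training, as the quote above suggests, would combine an
# expression-recognition loss with mask (localization) and real/fake (detection)
# losses, e.g. loss = bce(score, label) + bce(mask, gt_mask) + ce(expr_logits, expr_label).
```

In this reading of the description, the expression branch acts as an auxiliary task whose intermediate features steer the encoder-decoder toward the regions most likely to be manipulated; the exact fusion and loss weighting used in the paper may differ.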
The benchmark datasets for facial manipulation are based on expression transfer and identity swap. One transfers the expressions of a source video onto a target video without changing the identity of the person in the target video. The other replaces the identity of the person in the target video with that of a person from a source video.
Experiments on two challenging facial manipulation datasets show that EMD performs better at detecting not only facial expression manipulations but also identity swaps. EMD accurately detected 99% of the manipulated videos.
Original Article: New method detects deepfake videos with up to 99% accuracy