NYU Tandon researchers implant “digital watermarks” using a neural network to easily spot manipulated photos and video.
To thwart sophisticated methods of altering photos and video, researchers at the NYU Tandon School of Engineering have demonstrated an experimental technique to authenticate images throughout the entire pipeline, from acquisition to delivery, using artificial intelligence (AI).
In tests, this prototype imaging pipeline increased the chances of detecting manipulation from approximately 45 percent to over 90 percent without sacrificing image quality.
Determining whether a photo or video is authentic is becoming increasingly difficult. Sophisticated techniques for altering photos and videos are now so accessible that so-called "deepfakes," manipulated photos or videos that are remarkably convincing and often feature celebrities or political figures, have become commonplace.
Pawel Korus, a research assistant professor in the Department of Computer Science and Engineering at NYU Tandon, pioneered this approach. It replaces the typical photo development pipeline with a neural network, one form of AI, that introduces carefully crafted artifacts directly into the image at the moment of acquisition. These artifacts, akin to "digital watermarks," are extremely sensitive to manipulation.
“Unlike previously used watermarking techniques, these AI-learned artifacts can reveal not only the existence of photo manipulations, but also their character,” Korus said.
The process is optimized for in-camera embedding and can survive image distortion applied by online photo sharing services.
The advantages of integrating such systems into cameras are clear.
“If the camera itself produces an image that is more sensitive to tampering, any adjustments will be detected with high probability,” said Nasir Memon, a professor of computer science and engineering at NYU Tandon and co-author, with Korus, of a paper detailing the technique. “These watermarks can survive post-processing; however, they’re quite fragile when it comes to modification: If you alter the image, the watermark breaks,” Memon said.
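The paper's pipeline learns its watermarking artifacts end-to-end with a neural network; as a rough illustration only, the sketch below uses a classic least-significant-bit fragile watermark (every name and parameter here is illustrative, not from the paper) to show the principle Memon describes: the mark stays intact in an unedited image but breaks wherever pixels are altered, which also localizes the tampering.

```python
import numpy as np

def embed_fragile_watermark(image, key=42):
    """Overwrite each pixel's least-significant bit with a key-derived bit."""
    bits = np.random.default_rng(key).integers(0, 2, size=image.shape, dtype=np.uint8)
    return (image & 0xFE) | bits

def tamper_map(image, key=42, block=8):
    """Per-block fraction of LSB mismatches; high values flag edited regions."""
    bits = np.random.default_rng(key).integers(0, 2, size=image.shape, dtype=np.uint8)
    mismatch = (image & 1) != bits
    h, w = image.shape
    return mismatch.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_fragile_watermark(image)

tampered = marked.copy()
tampered[16:32, 16:32] = 200  # simulated local edit

clean_map = tamper_map(marked)
edit_map = tamper_map(tampered)
print(clean_map.max())           # 0.0: the watermark is intact everywhere
print(edit_map[2:4, 2:4].min())  # roughly 0.5 in the edited blocks
```

Unlike this fixed LSB scheme, the learned artifacts in the NYU Tandon approach are optimized jointly with the camera's processing so they survive benign operations (such as the recompression applied by photo-sharing services) while still breaking under genuine manipulation.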
Most other attempts to determine image authenticity examine only the end product — a notoriously difficult undertaking.
Korus and Memon, by contrast, reasoned that modern digital imaging already relies on machine learning. Every photo taken on a smartphone undergoes near-instantaneous processing to adjust for low light and to stabilize images, both of which take place courtesy of onboard AI. In the coming years, AI-driven processes are likely to fully replace the traditional digital imaging pipelines. As this transition takes place, Memon said that “we have the opportunity to dramatically change the capabilities of next-generation devices when it comes to image integrity and authentication. Imaging pipelines that are optimized for forensics could help restore an element of trust in areas where the line between real and fake can be difficult to draw with confidence.”
Korus and Memon note that while their approach shows promise in testing, additional work is needed to refine the system. This solution is open-source and can be accessed at https://github.com/pkorus/neural-imaging. The researchers will present their paper, “Content Authentication for Neural Imaging Pipelines: End-to-end Optimization of Photo Provenance in Complex Distribution Channels,” at the Conference on Computer Vision and Pattern Recognition in Long Beach, California, in June.
Learn more: Outsmarting deep fakes: Researchers devise an AI-driven imaging system that protects authenticity