University of Washington researchers have developed new algorithms that solve a thorny challenge in the field of computer vision: turning audio clips into a realistic, lip-synced video of the person speaking those words.
As detailed in a paper to be presented Aug. 2 at SIGGRAPH 2017, the team successfully generated highly realistic video of former President Barack Obama talking about terrorism, fatherhood, job creation and other topics, using audio clips of those speeches and existing weekly video addresses that were originally on a different topic.
“These types of results have never been shown before,” said Ira Kemelmacher-Shlizerman, an assistant professor at the UW’s Paul G. Allen School of Computer Science & Engineering. “Realistic audio-to-video conversion has practical applications like improving video conferencing for meetings, as well as futuristic ones such as being able to hold a conversation with a historical figure in virtual reality by creating visuals just from audio. This is the kind of breakthrough that will help enable those next steps.”
In a visual form of lip-syncing, the system converts audio files of an individual’s speech into realistic mouth shapes, which are then grafted onto and blended with the head of that person from another existing video.
The team chose Obama because the machine learning technique needs available video of the person to learn from, and there were hours of presidential videos in the public domain. “In the future, video chat tools like Skype or Messenger will enable anyone to collect videos that could be used to train computer models,” Kemelmacher-Shlizerman said.
Because streaming audio over the internet takes up far less bandwidth than video, the new system has the potential to end video chats that are constantly timing out from poor connections.
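Some rough arithmetic shows why; the bitrates below are illustrative assumptions, not figures from the study:

```python
# Back-of-the-envelope bandwidth comparison (illustrative bitrates,
# not measurements from the paper).
audio_kbps = 64      # a typical rate for compressed speech
video_kbps = 1500    # a common rate for a 720p video stream
print(f"Video needs roughly {video_kbps // audio_kbps}x "
      f"the bandwidth of audio alone.")
```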
“When you watch Skype or Google Hangouts, often the connection is stuttery and low-resolution and really unpleasant, but often the audio is pretty good,” said co-author and Allen School professor Steve Seitz. “So if you could use the audio to produce much higher-quality video, that would be terrific.”
By reversing the process — feeding video into the network instead of just audio — the team could also potentially develop algorithms that could detect whether a video is real or manufactured.
The new machine learning tool makes significant progress in overcoming what’s known as the “uncanny valley” problem, which has dogged efforts to create realistic video from audio. When synthesized human likenesses appear to be almost real — but still manage to somehow miss the mark — people find them creepy or off-putting.
“People are particularly sensitive to any areas of your mouth that don’t look realistic,” said lead author Supasorn Suwajanakorn, a recent doctoral graduate in the Allen School. “If you don’t render teeth right or the chin moves at the wrong time, people can spot it right away and it’s going to look fake. So you have to render the mouth region perfectly to get beyond the uncanny valley.”
Previous audio-to-video conversion processes have involved filming multiple people in a studio saying the same sentences over and over to capture how a particular sound correlates with different mouth shapes, which is expensive, tedious and time-consuming. By contrast, Suwajanakorn developed algorithms that can learn from videos that exist “in the wild” on the internet or elsewhere.
“There are millions of hours of video that already exist from interviews, video chats, movies, television programs and other sources. And these deep learning algorithms are very data hungry, so it’s a good match to do it this way,” Suwajanakorn said.
Rather than synthesizing the final video directly from audio, the team tackled the problem in two steps. The first involved training a neural network to watch videos of an individual and translate different audio sounds into basic mouth shapes.
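As a rough illustration of that first step, a minimal recurrent model of this kind might look like the sketch below; the MFCC audio features, the PCA mouth-shape coefficients and every layer size are assumptions chosen for the example, not the configuration reported in the paper:

```python
# A minimal sketch of step one: a recurrent network that maps a sequence
# of audio features to a compact mouth-shape code per frame.
import torch
import torch.nn as nn

class AudioToMouthShape(nn.Module):
    def __init__(self, n_mfcc=13, hidden=128, n_pca_coeffs=20):
        super().__init__()
        # A recurrent layer lets each output frame depend on audio context.
        self.lstm = nn.LSTM(input_size=n_mfcc, hidden_size=hidden,
                            batch_first=True)
        # Project each hidden state to a compact mouth-shape code
        # (here, hypothetical PCA coefficients of lip landmarks).
        self.head = nn.Linear(hidden, n_pca_coeffs)

    def forward(self, mfcc_frames):
        # mfcc_frames: (batch, time, n_mfcc) -> (batch, time, n_pca_coeffs)
        hidden_states, _ = self.lstm(mfcc_frames)
        return self.head(hidden_states)
```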
By combining previous research from the UW Graphics and Image Laboratory team with a new mouth synthesis technique, they were then able to realistically superimpose and blend those mouth shapes and textures on an existing reference video of that person. Another key insight was to allow a small time shift to enable the neural network to anticipate what the speaker is going to say next.
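The sketch below illustrates those two ideas, using OpenCV’s seamless cloning as a stand-in for the paper’s own mouth synthesis and blending technique; the function names, the mask-and-center interface and the five-frame look-ahead are illustrative assumptions, not the published method:

```python
import cv2

def time_shifted_pairs(audio_feats, mouth_shapes, shift=5):
    """Pair the audio at frame t with the mouth shape at frame t + shift,
    so the model effectively anticipates what the speaker says next.
    The five-frame shift is an illustrative value."""
    return audio_feats[:-shift], mouth_shapes[shift:]

def blend_mouth(reference_frame, mouth_texture, mouth_mask, center):
    """Graft a synthesized mouth texture onto a reference video frame.
    OpenCV's seamless cloning stands in for the paper's blending step;
    `center` is the (x, y) position of the mouth region in the frame."""
    return cv2.seamlessClone(mouth_texture, reference_frame,
                             mouth_mask, center, cv2.NORMAL_CLONE)
```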
The new lip-syncing process enabled the researchers to create realistic videos of Obama speaking in the White House, using words he spoke on a television talk show or during an interview decades ago.
Currently, the neural network is designed to learn on one individual at a time, meaning that Obama’s voice — speaking words he actually uttered — is the only information used to “drive” the synthesized video. Future steps, however, include helping the algorithms generalize across situations to recognize a person’s voice and speech patterns with less data, for instance with only an hour of video to learn from instead of 14 hours.
“You can’t just take anyone’s voice and turn it into an Obama video,” Seitz said. “We very consciously decided against going down the path of putting other people’s words into someone’s mouth. We’re simply taking real words that someone spoke and turning them into realistic video of that individual.”