Computers will soon automatically provide short video digests of a day in your life, your family vacation or an eight-hour police patrol, say computer scientists at The University of Texas at Austin.
The researchers are working to develop tools to help make sense of the vast quantities of video that are going to be produced by wearable camera technology such as Google Glass and Looxcie.
“The amount of what we call ‘egocentric’ video, which is video that is shot from the perspective of a person who is moving around, is about to explode,” said Kristen Grauman, associate professor of computer science in the College of Natural Sciences. “We’re going to need better methods for summarizing and sifting through this data.”
Grauman and her colleagues developed a technique that uses machine learning to automatically analyze recorded video and assemble a short "story" that summarizes the footage better than existing methods.
Better video summarization should prove important in helping military commanders manage data coming in from soldiers' cameras, investigators trying to sift through cellphone video data in the wake of disasters like the Boston Marathon bombing, and senior citizens using video summaries of their days to compensate for memory loss, said Grauman.
“There’s research showing that if people suffering from memory loss wear a camera that takes a snapshot once a minute, and then they review those images at the end of the day, it can help their recall,” said Grauman. “That’s pretty inspiring. What if instead of images that were selected just because they were a minute apart, they had a video or photographic summary that was selected because it told a good story? Maybe that would help even more. That’s the kind of thing we’re hoping to achieve.”
Grauman, her postdoctoral researcher Zheng Lu and doctoral student Yong Jae Lee presented their method, which they call "story-driven" video summarization, at the IEEE Conference on Computer Vision and Pattern Recognition this summer.
Their findings are based on video amassed by volunteers wearing commercially available Looxcie cameras, which cost about $200, record five hours of video at a stretch, connect to smartphones and fit over the ear like a large Bluetooth headset.
“The task is to take a very long video and automatically condense it into very short video clips, or a series of stills, that convey the essence of the story,” said Grauman. “To do that, though, we first have to ask: What makes a good visual story? Our answer is that beyond displaying important persons, objects and scenes, it must also convey how one thing leads to the next.”
To tackle the challenge, Grauman and her colleagues took a two-step approach. The first step involved using machine learning techniques to teach their system to “score” the significance of objects in view based on egocentric factors such as how often the objects appeared in the center of the frame, which is a good proxy for where the camera wearer’s gaze is, or whether they are touched by the wearer’s hands.
“If you give us a region in the video, then we will give back an importance level, based on all those properties that we have extracted and learned how to combine,” said Grauman. “So at that point you can select frames that will maximize the importance.”
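The scoring step Grauman describes can be sketched as a learned weighted combination of per-region egocentric cues. In this minimal sketch the cue names (`center_frequency`, `hand_contact`) and the weights are illustrative assumptions, not the actual features or learned parameters from the paper:

```python
# Illustrative sketch: score a candidate region's importance as a weighted
# combination of egocentric cues. The cue set and weights are hypothetical
# stand-ins for the properties the system learns to combine.

def region_importance(cues, weights):
    """Combine per-region cues (e.g. frame centrality, hand contact) into one score."""
    return sum(weights[name] * value for name, value in cues.items())

# Hypothetical learned weights: centrality is a proxy for the wearer's gaze,
# and hand contact signals interaction with the object.
weights = {"center_frequency": 0.6, "hand_contact": 0.4}

# A region that sits near the frame center in 80% of its appearances and is
# touched by the wearer's hands outscores the same region untouched.
touched = region_importance({"center_frequency": 0.8, "hand_contact": 1.0}, weights)
untouched = region_importance({"center_frequency": 0.8, "hand_contact": 0.0}, weights)
assert touched > untouched
```

Once every region has a score, frames can be ranked by the importance of what they contain, which is the selection step Grauman refers to.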
The next step was to track those important frames through the video and look for early ones that influence later ones. To do that they adapted a method developed by researchers at Carnegie Mellon University that could predict how one news article leads to another, assembling a series of articles to transition from a starting point to a known end point.
For the text work, researchers used word frequencies and correlations across articles to quantify influence. For the video work, Grauman and Lu used their significant objects and frames to do the same. Then they were able to identify a chain of video clips that efficiently filled in the story from beginning to end.
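The chain-selection idea above can be sketched as an ordered search over candidate frames that maximizes the influence between consecutive picks. The brute-force search and the toy influence matrix below are assumptions for illustration; the paper's actual optimization and its object-based influence estimates are more involved:

```python
from itertools import combinations

# Illustrative sketch: pick a temporally ordered chain of k keyframes that
# maximizes the summed "influence" between consecutive picks. The influence
# values here are made up; the approach described estimates them from the
# important objects that frames share, adapting the news-article chaining idea.

def best_chain(influence, k):
    """Brute-force search over ordered index chains of length k."""
    n = len(influence)
    best, best_score = None, float("-inf")
    for chain in combinations(range(n), k):
        score = sum(influence[a][b] for a, b in zip(chain, chain[1:]))
        if score > best_score:
            best, best_score = chain, score
    return list(best)

# Toy influence matrix for 4 candidate frames (row influences column;
# only earlier frames can influence later ones, so it is upper-triangular).
influence = [
    [0.0, 0.9, 0.1, 0.2],
    [0.0, 0.0, 0.8, 0.1],
    [0.0, 0.0, 0.0, 0.7],
    [0.0, 0.0, 0.0, 0.0],
]
print(best_chain(influence, 3))  # → [0, 1, 2]
```

The chosen chain is the one whose consecutive links carry the most influence, which is what makes the resulting summary read as a story rather than a set of disconnected highlights.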
“We ran human ‘taste tests’ comparing our method to previous methods,” said Grauman, “and between 75 and 90 percent of people evaluating the summaries, depending on the datasets and method being compared, found that our system is superior.”
Grauman said that as video summarization techniques continue to improve, they will become invaluable aids not just to people with very specialized needs, like police investigators and those suffering from memory loss, but to everyday Web surfers as well.