Researchers from the School of Interactive Computing and the Institute for Robotics and Intelligent Machines developed a new method that teaches computers to “see” and understand what humans do in a typical day.
The method was trained on more than 40,000 images, captured every 30 to 60 seconds over a six-month period by a wearable camera, and predicted with 83 percent accuracy what activity the wearer was doing. Researchers taught the computer to categorize the images across 19 activity classes. The test subject wearing the camera reviewed and annotated the photos at the end of each day (deleting any for privacy reasons) to ensure that they were correctly categorized.
“It was surprising how the method’s ability to correctly classify images could be generalized to another person after just two more days of annotation,” said Steven Hickson, a Ph.D. candidate in Computer Science and a lead researcher on the project.
“This work is about developing a better way to understand people’s activities, and build systems that can recognize people’s activities at a fine-grained level of detail,” said Edison Thomaz, co-author and graduate research assistant in the School of Interactive Computing. “Activity tracking devices like the Fitbit can tell how many steps you take per day, but imagine being able to track all of your activities – not just physical activities like walking and running. This work is moving toward full activity intelligence. At a technical level, we are showing that it’s becoming possible for computer vision techniques alone to be used for this.”
The group believes it has gathered the largest annotated dataset of first-person images, and that the dataset demonstrates deep learning can be used to understand human behavior and the habits of a specific person.
Daniel Castro, a Ph.D. candidate in Computer Science and a lead researcher on the project, helped present the method earlier this month at UbiComp 2015 in Osaka, Japan. He says reaction from conference-goers was positive.
“People liked that we had a method that combines time and images,” Castro says. “Time (of activity) can be especially important for some activity classes. This system learned how relevant images were because of people’s schedules. What does it think the image is showing? It sees both time and image probabilities and makes a better prediction.”
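The fusion Castro describes can be illustrated with a small sketch. This is not the authors' actual model; it is a naive Bayes-style late fusion, with made-up activity names and probabilities, showing how a time-of-day prior learned from a person's schedule can reweight an image classifier's per-class probabilities:

```python
# Illustrative sketch only: all activity names and probabilities are
# invented, and this simple multiply-and-renormalize fusion is an
# assumption, not the paper's exact method.
activities = ["working", "eating", "commuting"]

# P(activity | image): per-class scores from an image classifier (assumed).
p_image = [0.5, 0.3, 0.2]

# P(activity | time): how often the wearer does each activity at this
# hour, estimated from annotated history (assumed).
p_time = [0.2, 0.7, 0.1]

# Late fusion: multiply the two distributions and renormalize.
fused = [pi * pt for pi, pt in zip(p_image, p_time)]
total = sum(fused)
fused = [f / total for f in fused]

best = activities[fused.index(max(fused))]
print(best)  # the time prior shifts the top prediction to "eating"
```

Here the image classifier alone would have predicted "working", but at an hour when the wearer's schedule strongly favors eating, the combined probabilities produce a better prediction, which is the intuition behind combining time and images.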
The ability to see and recognize human activities has implications in a number of areas – from developing improved personal assistant applications like Siri to helping researchers explain links between health and behavior, Thomaz says.
Castro and Hickson believe that within the next decade we will have ubiquitous devices that can improve our personal choices throughout the day.
“Imagine if a device could learn what I would be doing next – ideally predict it – and recommend an alternative?” Castro says. “Once it builds your own schedule by knowing what you are doing, it might tell you there is a traffic delay and you should leave sooner or take a different route.”