Researchers from the School of Interactive Computing and the Institute for Robotics and Intelligent Machines developed a new method that teaches computers to “see” and understand what humans do in a typical day.
The technique drew on more than 40,000 pictures, taken every 30 to 60 seconds over a six-month period by a wearable camera, and predicted with 83 percent accuracy what activity the wearer was doing. Researchers taught the computer to categorize the images across 19 activity classes. The test subject wearing the camera could review and annotate the photos at the end of each day (deleting any images as needed for privacy) to ensure that they were correctly categorized.
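In rough terms, the classification task can be framed as fine-tuning a pretrained convolutional network on the annotated photos. The sketch below is a generic illustration of that framing only, not the team's actual code; the folder path, model choice, and training settings are placeholder assumptions.

```python
# A minimal sketch (not the authors' exact pipeline): fine-tune a pretrained CNN
# to classify egocentric photos into 19 activity classes.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 19  # activity categories, e.g. cooking, driving, working (illustrative)

# Standard ImageNet-style preprocessing for the wearable-camera photos.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one subdirectory per annotated activity class.
train_data = datasets.ImageFolder("egocentric_photos/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Replace the final layer of a pretrained CNN with a 19-way classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the annotated photos
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```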
“It was surprising how the method’s ability to correctly classify images could be generalized to another person after just two more days of annotation,” said Steven Hickson, a Ph.D. candidate in Computer Science and a lead researcher on the project.
“This work is about developing a better way to understand people’s activities, and build systems that can recognize people’s activities at a finely-grained level of detail,” said Edison Thomaz, co-author and graduate research assistant in the School of Interactive Computing. “Activity tracking devices like the Fitbit can tell how many steps you take per day, but imagine being able to track all of your activities – not just physical activities like walking and running. This work is moving toward full activity intelligence. At a technical level, we are showing that it’s becoming possible for computer vision techniques alone to be used for this.”
The group believes it has gathered the largest annotated dataset of first-person images to demonstrate that deep learning can understand human behavior and the habits of a specific person.
Daniel Castro, a Ph.D. candidate in Computer Science and a lead researcher on the project, helped present the method earlier this month at UbiComp 2015 in Osaka, Japan. He says the reaction from conference-goers was positive.
“People liked that we had a method that combines time and images,” Castro says. “Time (of activity) can be especially important for some activity classes. This system learned how relevant images were because of people’s schedules. What does it think the image is showing? It sees both time and image probabilities and makes a better prediction.”
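One simple way to picture that combination, offered here as an illustrative assumption rather than the published model, is to multiply the classifier's per-image probabilities by a per-person time-of-day prior and renormalize. The numbers below are made up.

```python
# Toy illustration of fusing image evidence with a time-of-day prior
# (an assumption for exposition, not the authors' published model).
import numpy as np

ACTIVITIES = ["sleeping", "cooking", "commuting"]  # placeholder subset of the 19 classes

def fuse(image_probs: np.ndarray, time_prior: np.ndarray) -> np.ndarray:
    """Multiply image evidence by a time-of-day prior and renormalize."""
    combined = image_probs * time_prior
    return combined / combined.sum()

# The classifier is unsure between "sleeping" and "cooking" from the image alone...
image_probs = np.array([0.45, 0.40, 0.15])
# ...but at 7 a.m. this wearer's annotated history makes cooking far more likely.
time_prior = np.array([0.10, 0.70, 0.20])

print(dict(zip(ACTIVITIES, fuse(image_probs, time_prior).round(2))))
# {'sleeping': 0.13, 'cooking': 0.79, 'commuting': 0.08} -- cooking now dominates
```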
The ability to literally see and recognize human activities has implications in a number of areas – from developing improved personal assistant applications like Siri to helping researchers explain links between health and behavior, Thomaz says.
Castro and Hickson believe that within the next decade we will have ubiquitous devices that can improve our personal choices throughout the day.
“Imagine if a device could learn what I would be doing next – ideally predict it – and recommend an alternative?” Castro says. “Once it builds your own schedule by knowing what you are doing, it might tell you there is a traffic delay and you should leave sooner or take a different route.”
Read more: Researchers Develop Deep-Learning Method to Predict Daily Activities