Researchers from the School of Interactive Computing and the Institute for Robotics and Intelligent Machines developed a new method that teaches computers to “see” and understand what humans do in a typical day.
The technique drew on more than 40,000 pictures taken every 30 to 60 seconds over a six-month period by a wearable camera, and it predicted with 83 percent accuracy what activity the wearer was doing. Researchers taught the computer to categorize images across 19 activity classes. The test subject wearing the camera could review and annotate the photos at the end of each day (deleting any as needed for privacy) to ensure that they were correctly categorized.
“It was surprising how the method’s ability to correctly classify images could be generalized to another person after just two more days of annotation,” said Steven Hickson, a Ph.D. candidate in Computer Science and a lead researcher on the project.
“This work is about developing a better way to understand people’s activities, and building systems that can recognize people’s activities at a fine-grained level of detail,” said Edison Thomaz, co-author and graduate research assistant in the School of Interactive Computing. “Activity tracking devices like the Fitbit can tell how many steps you take per day, but imagine being able to track all of your activities – not just physical activities like walking and running. This work is moving toward full activity intelligence. At a technical level, we are showing that it’s becoming possible for computer vision techniques alone to be used for this.”
The group believes they have gathered the largest annotated dataset of first-person images to demonstrate that deep-learning can understand human behavior and the habits of a specific person.
Daniel Castro, a Ph.D. candidate in Computer Science and a lead researcher on the project, helped present the method earlier this month at UbiComp 2015 in Osaka, Japan. He says the reaction from conference-goers was positive.
“People liked that we had a method that combines time and images,” Castro says. “Time (of activity) can be especially important for some activity classes. This system learned how relevant images were because of people’s schedules. What does it think the image is showing? It sees both time and image probabilities and makes a better prediction.”
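The fusion Castro describes can be sketched in a few lines. The idea, as stated in the article, is that the system weighs image-based class probabilities against the time of day: a person's schedule makes some activities far more likely at certain hours. The sketch below is an illustrative, hypothetical implementation of that general idea, not the authors' actual model; the activity names, prior values, and classifier outputs are invented for the example.

```python
import numpy as np

# Illustrative activity classes (the real system used 19).
ACTIVITIES = ["commuting", "working", "eating"]

# Hypothetical time-of-day prior P(activity | hour), e.g. estimated from
# how often each activity was annotated at that hour.
TIME_PRIOR = {
    8:  np.array([0.6, 0.3, 0.1]),   # morning: commuting is most likely
    13: np.array([0.1, 0.4, 0.5]),   # midday: eating is most likely
}

def fuse(image_probs: np.ndarray, hour: int) -> np.ndarray:
    """Combine the image classifier's probabilities with the temporal
    prior by elementwise product, then renormalize to a distribution."""
    joint = image_probs * TIME_PRIOR[hour]
    return joint / joint.sum()

# An ambiguous image: the classifier alone barely favors "eating".
image_probs = np.array([0.3, 0.3, 0.4])
posterior = fuse(image_probs, 13)
print(ACTIVITIES[int(np.argmax(posterior))])  # the 1 p.m. prior reinforces "eating"
```

With the midday prior applied, the posterior shifts decisively toward "eating" even though the image evidence was nearly uniform, which is the "better prediction" behavior the quote describes.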
The ability to literally see and recognize human activities has implications in a number of areas – from developing improved personal assistant applications like Siri to helping researchers explain links between health and behavior, Thomaz says.
Castro and Hickson believe that someday within the next decade we will have ubiquitous devices that can improve our personal choices throughout the day.
“Imagine if a device could learn what I would be doing next – ideally predict it – and recommend an alternative?” Castro says. “Once it builds your own schedule by knowing what you are doing, it might tell you there is a traffic delay and you should leave sooner or take a different route.”