Machine learning system efficiently recognizes activities by observing how objects change in only a few key frames.
Given only a few frames of a video, humans can usually surmise what is happening on screen and what will happen next. If we see an early frame of stacked cans, a middle frame with a finger at the stack’s base, and a late frame showing the cans toppled over, we can guess that the finger knocked down the cans. Computers, however, struggle with this kind of inference.
In a paper being presented at this week’s European Conference on Computer Vision, MIT researchers describe an add-on module that helps artificial intelligence systems called convolutional neural networks, or CNNs, fill in the gaps between video frames and greatly improve their activity recognition.
The researchers’ module, called Temporal Relation Network (TRN), learns how objects change in a video at different times. It does so by analyzing a few key frames depicting an activity at different stages of the video — such as stacked objects that are then knocked down. Using the same process, it can then recognize the same type of activity in a new video.
In experiments, the module outperformed existing models by a large margin in recognizing hundreds of basic activities, such as poking objects to make them fall, tossing something in the air, and giving a thumbs-up. It also more accurately predicted what will happen next in a video — showing, for example, two hands making a small tear in a sheet of paper — given only a small number of early frames.
One day, the module could be used to help robots better understand what’s going on around them.
“We built an artificial intelligence system to recognize the transformation of objects, rather than the appearance of objects,” says Bolei Zhou, a former PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) who is now an assistant professor of computer science at the Chinese University of Hong Kong. “The system doesn’t go through all the frames — it picks up key frames and, using the temporal relation of frames, recognizes what’s going on. That improves the efficiency of the system and lets it run accurately in real time.”
Co-authors on the paper are CSAIL principal investigator Antonio Torralba, who is also a professor in the Department of Electrical Engineering and Computer Science; CSAIL Principal Research Scientist Aude Oliva; and CSAIL Research Assistant Alex Andonian.
Picking up key frames
Two common CNN-based approaches to activity recognition today trade off efficiency against accuracy. One type of model is accurate but must analyze every video frame before making a prediction, which is computationally expensive and slow. The other, called a two-stream network, is more efficient but less accurate. It uses one stream to extract features from individual video frames and merges the results with a second stream of “optical flow,” extracted information about the movement of each pixel between frames. Optical flow is itself computationally expensive to extract, so the model still isn’t especially efficient.
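To give a sense of that cost, here is a minimal Python sketch of dense optical-flow extraction using OpenCV’s Farneback method. The video path is a hypothetical placeholder, and this illustrates the general technique behind two-stream inputs, not the code of any particular model: a full two-channel motion field is computed for every consecutive frame pair.

```python
# A minimal sketch of dense optical-flow extraction (assumes OpenCV is
# installed and "example_video.mp4" is a hypothetical local video file).
import cv2

cap = cv2.VideoCapture("example_video.mp4")
ok, prev = cap.read()
assert ok, "could not read the video file"
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

flows = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farneback dense flow: a per-pixel (dx, dy) field for each frame pair,
    # which is the expensive step two-stream networks rely on.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    flows.append(flow)
    prev_gray = gray
cap.release()
```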
“We wanted something that works in between those two models — getting efficiency and accuracy,” Zhou says.
The researchers trained and tested their module on three crowdsourced datasets of short videos of people performing various activities. The first dataset, called Something-Something, built by the company TwentyBN, has more than 200,000 videos in 174 action categories, such as poking an object so it falls over or lifting an object. The second dataset, Jester, contains nearly 150,000 videos of 27 different hand gestures, such as giving a thumbs-up or swiping left. The third, Charades, built by Carnegie Mellon University researchers, has nearly 10,000 videos of 157 categorized activities, such as carrying a bike or playing basketball.
When given a video file, the researchers’ module simultaneously processes ordered frames — in groups of two, three, and four — spaced some time apart. Then it quickly assigns a probability that the object’s transformation across those frames matches a specific activity class. For instance, if it processes two frames, where the later frame shows an object at the bottom of the screen and the earlier frame shows it at the top, it will assign a high probability to the activity class “moving object down.” If a third frame shows the object in the middle of the screen, that probability increases even more, and so on. From this, it learns the object-transformation features in frames that best represent a certain class of activity.
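The multi-scale relation idea can be summarized in a short sketch. The following PyTorch code is a simplified illustration with made-up layer sizes; the published TRN subsamples frame groups rather than enumerating them all, so treat this as a conceptual outline, not the authors’ implementation.

```python
# A simplified sketch of multi-scale temporal relations over pre-extracted
# per-frame CNN features. Layer sizes and the exhaustive enumeration of
# frame groups are illustrative assumptions, not the paper's exact design.
import itertools
import torch
import torch.nn as nn

class SimpleTRN(nn.Module):
    def __init__(self, feat_dim, num_classes, num_frames=8, scales=(2, 3, 4)):
        super().__init__()
        self.num_frames = num_frames
        self.scales = scales
        # One small MLP per scale: it scores the relation among k ordered frames.
        self.relation_heads = nn.ModuleDict({
            str(k): nn.Sequential(
                nn.Linear(k * feat_dim, 256), nn.ReLU(),
                nn.Linear(256, num_classes))
            for k in scales})

    def forward(self, frame_feats):  # frame_feats: (batch, num_frames, feat_dim)
        logits = 0
        for k in self.scales:
            # itertools.combinations yields index tuples in increasing order,
            # so every sampled group preserves the frames' temporal order.
            for idx in itertools.combinations(range(self.num_frames), k):
                group = frame_feats[:, list(idx), :].flatten(1)
                logits = logits + self.relation_heads[str(k)](group)
        return logits  # class scores summed across scales and frame groups
```

Summing relation scores across two-, three-, and four-frame groups is what lets evidence accumulate: each additional consistent frame pushes the probability of the matching activity class higher.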
Recognizing and forecasting activities
In testing, a CNN equipped with the new module accurately recognized many activities using just two frames, and its accuracy increased as it sampled more frames. On Jester, the module achieved a top accuracy of 95 percent in activity recognition, beating out several existing models.
It even guessed right on ambiguous classifications: the Something-Something dataset, for instance, includes actions such as “pretending to open a book” versus “opening a book.” To discern between the two, the module simply sampled a few more key frames, which revealed, say, a hand near a book in an early frame, then on the book, then moving away from it in a later frame.
Some other activity-recognition models also process key frames but don’t consider the temporal relationships among them, which reduces their accuracy. The researchers report that their TRN module nearly doubles the accuracy of those key-frame models in certain tests.
The module also outperformed existing models at forecasting an activity from limited frames. After processing the first 25 percent of a video’s frames, it achieved accuracy several percentage points higher than a baseline model; with 50 percent of the frames, it achieved 10 to 40 percent higher accuracy. Examples include determining that a sheet of paper would be torn just a little, based on how two hands were positioned on it in early frames, and predicting that a raised hand, shown facing forward, would swipe down.
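This kind of early-forecasting evaluation is straightforward to express in code. The sketch below scores a model given only a prefix of each video; `model` and `videos` are hypothetical stand-ins (for instance, the `SimpleTRN` above over pre-extracted features), not the authors’ evaluation code.

```python
# A hedged sketch of prefix-based activity forecasting: the model sees only
# the earliest `fraction` of each video's frames. Assumes each video has at
# least `num_frames` frames and that `model` accepts (batch, num_frames, feat_dim).
import torch

def sample_prefix(feats, fraction, num_frames=8):
    """Evenly sample num_frames feature vectors from a video's opening segment."""
    n = max(num_frames, int(feats.shape[0] * fraction))  # prefix length
    idx = torch.linspace(0, n - 1, num_frames).long()
    return feats[idx]

def prefix_accuracy(model, videos, labels, fraction=0.25):
    correct = 0
    for feats, label in zip(videos, labels):  # feats: (total_frames, feat_dim)
        clip = sample_prefix(feats, fraction).unsqueeze(0)  # add batch dim
        pred = model(clip).argmax(dim=1).item()
        correct += int(pred == label)
    return correct / len(labels)
```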
“That’s important for robotics applications,” Zhou says. “You want [a robot] to anticipate and forecast what will happen early on, when you do a specific action.”
Next, the researchers aim to improve the module’s sophistication. The first step is implementing object recognition together with activity recognition. Then, they hope to add “intuitive physics,” meaning helping it understand the real-world physical properties of objects. “Because we know a lot of the physics inside these videos, we can train the module to learn such physics laws and use those in recognizing new videos,” Zhou says. “We are also open-sourcing all the code and models. Activity understanding is an exciting area of artificial intelligence right now.”