Robots Anticipate Human Actions, Adjust Accordingly


A robot in Cornell’s Personal Robotics Lab has learned to foresee human action and adjust accordingly.

Seeing a person carrying a bowl toward the refrigerator, a robot identifies the objects in the scene. Knowing that bowls are storable and refrigerators are places to store things, it projects possible trajectories for the bowl, and decides to open the refrigerator door.

The robot was programmed to refill a person’s cup when it was nearly empty. To do this, it must plan its movements in advance and then follow the plan. If the person sitting at the table happens to raise the cup and drink from it, a robot following a fixed plan might pour a drink into a cup that isn’t there. But a robot that sees the human reaching for the cup can anticipate the action and avoid the mistake. In another test, the robot observed a human carrying an object toward a refrigerator and helpfully opened the refrigerator door.

Hema S. Koppula, Cornell graduate student in computer science, and Ashutosh Saxena, assistant professor of computer science, will describe their work at the International Conference on Machine Learning, June 18-21 in Atlanta, and at the Robotics: Science and Systems conference, June 24-28 in Berlin, Germany.

From a database of 120 3-D videos of people performing common household activities, the robot has been trained to identify human activities by tracking the movements of the body — reduced to a symbolic skeleton for easy calculation — breaking them down into sub-activities such as reaching, carrying, pouring or drinking, and associating those sub-activities with objects. Since each person performs tasks a little differently, the robot builds a model that is general enough to match new events.
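The decomposition described above can be pictured with a toy sketch: an observed activity is reduced to a sequence of sub-activity labels and checked against a learned template that is loose enough to tolerate individual variation. All names and templates here are illustrative assumptions, not part of the actual Cornell system.

```python
# Illustrative templates: a "big" activity as an ordered list of sub-activities.
# (Hypothetical labels, not the trained model from the paper.)
ACTIVITY_TEMPLATES = {
    "drinking": ["reaching", "moving", "drinking", "placing"],
    "storing":  ["reaching", "carrying", "opening", "placing"],
}

def matches(observed, template):
    """True if the template's sub-activities occur in order within the
    observed sequence; extra sub-activities in between are tolerated,
    since each person performs tasks a little differently."""
    it = iter(observed)
    return all(step in it for step in template)

def classify(observed):
    """Return every activity whose template the observation matches."""
    return [name for name, tpl in ACTIVITY_TEMPLATES.items()
            if matches(observed, tpl)]
```

For example, a sequence with an extra "pouring" in the middle still matches the drinking template, which is the kind of generality the researchers describe.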

“We extract the general principles of how people behave,” said Saxena. “Drinking coffee is a big activity, but there are several parts to it.” The robot builds a “vocabulary” of such small parts that it can put together in various ways to recognize a variety of big activities, he explained.

Observing a new scene with its Microsoft Kinect 3-D camera, the robot identifies the activities it sees, considers what uses are possible with the objects in the scene and how those uses fit with the activities; it then generates a set of possible continuations into the future — such as eating, drinking, cleaning, putting away — and finally chooses the most probable. As the action continues, it constantly updates and refines its predictions.
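The generate-score-refine loop described above can be sketched as a simple Bayesian update over candidate continuations: each hypothesis keeps a probability, and every new observation reweights and renormalizes them. This is my own minimal illustration under assumed likelihood values, not the published algorithm.

```python
# Assumed likelihoods P(observed sub-activity | future activity).
# These numbers are made up for illustration.
LIKELIHOOD = {
    "drinking":     {"reaching": 0.5, "lifting": 0.4, "carrying": 0.1},
    "putting_away": {"reaching": 0.4, "lifting": 0.1, "carrying": 0.5},
}

def update(prior, observation):
    """One Bayesian refinement step: weight each hypothesis by how well
    it explains the observation, then renormalize to sum to 1."""
    posterior = {h: p * LIKELIHOOD[h].get(observation, 1e-6)
                 for h, p in prior.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Start with no preference between the two continuations, then watch.
beliefs = {"drinking": 0.5, "putting_away": 0.5}
for obs in ["reaching", "carrying"]:
    beliefs = update(beliefs, obs)

most_probable = max(beliefs, key=beliefs.get)
```

After seeing "reaching" the two hypotheses stay close, but once the person is "carrying", putting the object away becomes the clear front-runner — mirroring how the robot refines its prediction as the action unfolds.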


