Novel Method Developed by CMU Researchers Allows Robots To Learn in the Wild
SCS researchers have developed a learning method that allows robots to learn directly from human-interaction videos and apply that knowledge to new tasks.
The robot watched as Shikhar Bahl opened the refrigerator door. It recorded his movements, the swing of the door, the location of the fridge and more, analyzing this data and readying itself to mimic what Bahl had done.
It failed at first, missing the handle completely at times, grabbing it in the wrong spot or pulling it incorrectly. But after a few hours of practice, the robot succeeded and opened the door.
“Imitation is a great way to learn,” said Bahl, a Ph.D. student at the Robotics Institute (RI) in Carnegie Mellon University’s School of Computer Science. “Having robots actually learn from directly watching humans remains an unsolved problem in the field, but this work takes a significant step in enabling that ability.”
Bahl worked with Deepak Pathak and Abhinav Gupta, both faculty members in the RI, to develop a new learning method for robots called WHIRL, short for In-the-Wild Human Imitating Robot Learning. WHIRL is an efficient algorithm for one-shot visual imitation. It can learn directly from human-interaction videos and generalize that information to new tasks, making robots well-suited to learning household chores. People constantly perform various tasks in their homes. With WHIRL, a robot can observe those tasks and gather the video data it needs to eventually determine how to complete the job itself.
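The observe-then-practice loop described above can be caricatured in a few lines. The sketch below is not the WHIRL algorithm itself; it is a toy one-dimensional stand-in in which a single noisy "demonstration" seeds an initial estimate, and the robot refines it by practicing and keeping whatever works. All function names and numbers are invented for illustration:

```python
import random

def human_demo_prior():
    """A coarse grasp point estimated from one human video (toy stand-in)."""
    true_handle = 5.0
    return true_handle + random.uniform(-1.0, 1.0)   # vision estimates are noisy

def attempt(grasp_point, true_handle=5.0, tolerance=0.2):
    """An attempt succeeds if the grasp lands close enough to the handle."""
    return abs(grasp_point - true_handle) <= tolerance

def practice(trials=200, seed=0):
    """One-shot imitation followed by self-practice."""
    random.seed(seed)
    best = human_demo_prior()                        # start near what the human did
    noise = 1.0
    for _ in range(trials):
        candidate = best + random.gauss(0.0, noise)  # explore around the best guess
        if attempt(candidate):
            best, noise = candidate, noise * 0.5     # latch onto success, refine
    return best, attempt(best)
```

The key property this toy shares with the article's description is that the demonstration is used only as a starting point: the robot's own trial and error, not the video, does the fine-tuning.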
The team added a camera and their software to an off-the-shelf robot, and it learned how to do more than 20 tasks — from opening and closing appliances, cabinet doors and drawers to putting a lid on a pot, pushing in a chair and even taking a garbage bag out of the bin. Each time, the robot watched a human complete the task once and then went about practicing and learning to accomplish the task on its own. The team presented their research this month at the Robotics: Science and Systems conference in New York.
“This work presents a way to bring robots into the home,” said Pathak, an assistant professor in the RI and a member of the team. “Instead of waiting for robots to be programmed or trained to successfully complete different tasks before deploying them into people’s homes, this technology allows us to deploy the robots and have them learn how to complete tasks, all the while adapting to their environments and improving solely by watching.”
Current methods for teaching a robot a task typically rely on imitation or reinforcement learning. In imitation learning, humans manually operate a robot to teach it how to complete a task. This process must be done several times for a single task before the robot learns. In reinforcement learning, the robot is typically trained on millions of examples in simulation and then asked to adapt that training to the real world.
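The two paradigms can be contrasted with a deliberately tiny example: in imitation learning the policy is fit directly to expert (state, action) pairs, while in reinforcement learning no demonstrations exist and the policy is found by trial and error guided only by a reward signal. This is a minimal sketch, not either method at research scale; the task, policy, and reward are all made up:

```python
import random

# Toy task: for a state x in [0, 1], the correct action is a = 2 * x.
def expert_action(x):
    return 2.0 * x

def behavioral_cloning(num_demos=100, lr=0.1, epochs=50, seed=0):
    """Imitation learning: supervised regression on expert (state, action) pairs."""
    random.seed(seed)
    demos = [(x, expert_action(x)) for x in (random.random() for _ in range(num_demos))]
    w = 0.0                                   # linear policy a = w * x
    for _ in range(epochs):
        for x, a in demos:
            w += lr * (a - w * x) * x         # gradient step on squared error
    return w

def reward(w, probe_states=(0.1, 0.5, 0.9)):
    """Reward: how close the policy's actions are to the task goal."""
    return -sum((w * x - expert_action(x)) ** 2 for x in probe_states)

def reinforcement_learning(episodes=500, seed=0):
    """Reinforcement learning: no demonstrations, only trial, error, and reward."""
    random.seed(seed)
    best_w, best_r = 0.0, reward(0.0)
    for _ in range(episodes):
        cand = best_w + random.gauss(0.0, 0.5)   # explore a perturbed policy
        r = reward(cand)
        if r > best_r:                           # keep whatever scored better
            best_w, best_r = cand, r
    return best_w
```

Note the trade-off the article points to: the cloning approach needs many demonstrations of the one task, while the reward-driven approach needs many practice episodes.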
Both learning models work well when teaching a robot a single task in a structured environment, but they are difficult to scale and deploy. WHIRL can learn from any video of a human doing a task. It is easily scalable, not confined to one specific task and can operate in realistic home environments. The team is even working on a version of WHIRL trained by watching videos of human interaction from YouTube and Flickr.
Progress in computer vision made the work possible. Using models trained on internet data, computers can now understand and model movement in 3D. The team used these models to understand human movement, which made training WHIRL easier.
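One way such pose estimates become usable for a robot is to turn the per-frame keypoints into a smoothed waypoint trajectory for the end-effector. The snippet below is an assumption about one plausible processing step, not the team's pipeline; the keypoint values are fabricated, standing in for what a pretrained 3D pose model might output:

```python
def smooth(keypoints, window=3):
    """Moving-average smoothing of noisy per-frame 3D keypoints."""
    out = []
    for i in range(len(keypoints)):
        chunk = keypoints[max(0, i - window + 1): i + 1]
        out.append(tuple(sum(coord) / len(chunk) for coord in zip(*chunk)))
    return out

# Per-frame wrist positions, as a pretrained 3D pose model might produce
# for a video of a person reaching toward a fridge handle (values made up).
wrist = [(0.00, 0.10, 0.50), (0.05, 0.11, 0.52), (0.11, 0.10, 0.55),
         (0.18, 0.12, 0.61), (0.24, 0.11, 0.68)]

waypoints = smooth(wrist)   # a coarse end-effector trajectory for the robot
```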
With WHIRL, robots can accomplish tasks in their natural environments. The appliances, doors, drawers, lids, chairs and garbage bag were not modified or manipulated to suit the robot. The robot's first several attempts at a task ended in failure, but once it had a few successes, it quickly latched on to how to accomplish the task and mastered it. While the robot may not accomplish the task with the same movements as a human, that's not the goal. Humans and robots have different parts, and they move differently. What matters is that the end result is the same. The door is opened. The switch is turned off. The faucet is turned on.
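This outcome-based notion of success can be made concrete: judge only the final state against the goal, never the motion that produced it. The check below is an illustrative sketch, with a hypothetical state representation and tolerance chosen for the example:

```python
def outcome_matches(final_state, goal_state, tol=0.05):
    """Success means every tracked quantity reached its goal value,
    regardless of what motion produced it."""
    return all(abs(final_state[key] - goal_state[key]) <= tol for key in goal_state)

goal = {"door_angle_rad": 1.2}             # "the fridge door is open"
robot_result = {"door_angle_rad": 1.18}    # robot's two-stage pull
human_result = {"door_angle_rad": 1.23}    # human's one smooth swing
# Different motions, same outcome: both count as success.
```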
“To scale robotics in the wild, the data must be reliable and stable, and the robots should become better in their environment by practicing on their own,” Pathak said.
Original Article: Robots Learn Household Tasks by Watching Humans