
Novel Method Developed by CMU Researchers Allows Robots To Learn in the Wild

SCS researchers have developed a learning method that allows robots to learn directly from human-interaction videos and apply that knowledge to new tasks.
The robot watched as Shikhar Bahl opened the refrigerator door. It recorded his movements, the swing of the door, the location of the fridge and more, analyzing this data and readying itself to mimic what Bahl had done.
It failed at first, missing the handle completely at times, grabbing it in the wrong spot or pulling it incorrectly. But after a few hours of practice, the robot succeeded and opened the door.
“Imitation is a great way to learn,” said Bahl, a Ph.D. student at the Robotics Institute (RI) in Carnegie Mellon University’s School of Computer Science. “Having robots actually learn from directly watching humans remains an unsolved problem in the field, but this work takes a significant step in enabling that ability.”
Bahl worked with Deepak Pathak and Abhinav Gupta, both faculty members in the RI, to develop a new learning method for robots called WHIRL, short for In-the-Wild Human Imitating Robot Learning. WHIRL is an efficient algorithm for one-shot visual imitation. It can learn directly from human-interaction videos and generalize that information to new tasks, making robots well-suited to learning household chores. People constantly perform various tasks in their homes. With WHIRL, a robot can observe those tasks and gather the video data it needs to eventually determine how to complete the job itself.
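The article does not detail WHIRL's implementation, but the loop it describes — extract a rough plan from a single human video, then practice and keep whatever gets closer to success — can be sketched at a high level. The Python below is a hypothetical illustration only: the helper functions, the trajectory representation and the success score are assumptions, not WHIRL's actual code or API.

```python
import random

# Hypothetical sketch of a WHIRL-style loop: one human demo video seeds a
# rough motion prior, and the robot refines its own attempts by practicing.
# None of these helpers come from the actual WHIRL codebase.

def extract_motion_prior(demo_video):
    """Stand-in for a vision model that turns one human demo into a rough
    plan (e.g., a grasp point and a pulling direction)."""
    return {"grasp": (0.42, 0.10, 0.95), "pull_direction": (-1.0, 0.0, 0.0)}

def perturb(prior, scale):
    """Sample a candidate robot trajectory near the human-derived prior."""
    return {key: tuple(v + random.gauss(0, scale) for v in vec)
            for key, vec in prior.items()}

def execute_and_score(trajectory):
    """Stand-in for running the trajectory on the robot and visually checking
    whether the outcome matches the human video (e.g., the door is open)."""
    return random.random()  # placeholder success score in [0, 1]

def practice(demo_video, attempts=50):
    best = extract_motion_prior(demo_video)
    best_score = 0.0
    for _ in range(attempts):
        candidate = perturb(best, scale=0.05)
        score = execute_and_score(candidate)
        if score > best_score:  # keep refinements that look closer to the demo outcome
            best, best_score = candidate, score
    return best

if __name__ == "__main__":
    policy = practice(demo_video="fridge_demo.mp4")  # hypothetical demo video
    print("refined trajectory:", policy)
```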
The team added a camera and their software to an off-the-shelf robot, and it learned how to do more than 20 tasks — from opening and closing appliances, cabinet doors and drawers to putting a lid on a pot, pushing in a chair and even taking a garbage bag out of the bin. Each time, the robot watched a human complete the task once and then went about practicing and learning to accomplish the task on its own. The team presented their research this month at the Robotics: Science and Systems conference in New York.
“This work presents a way to bring robots into the home,” said Pathak, an assistant professor in the RI and a member of the team. “Instead of waiting for robots to be programmed or trained to successfully complete different tasks before deploying them into people’s homes, this technology allows us to deploy the robots and have them learn how to complete tasks, all the while adapting to their environments and improving solely by watching.”
Current methods for teaching a robot a task typically rely on imitation or reinforcement learning. In imitation learning, humans manually operate a robot to teach it how to complete a task. This process must be done several times for a single task before the robot learns. In reinforcement learning, the robot is typically trained on millions of examples in simulation and then asked to adapt that training to the real world.
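To make that contrast concrete, here is a toy sketch, assumed rather than drawn from any particular robotics library: imitation learning fits a policy to actions recorded while a human operates the robot, while reinforcement learning improves a policy through many trial-and-error episodes scored by a reward.

```python
import numpy as np

# Toy contrast between the two standard paradigms, using a linear policy on a
# made-up 3-feature task. Purely illustrative; real systems use neural
# networks and far richer state/action spaces.

rng = np.random.default_rng(0)

# --- Imitation learning (behavior cloning): fit to teleoperated demonstrations.
demo_states = rng.normal(size=(100, 3))                   # states logged while a human drives the robot
demo_actions = demo_states @ np.array([0.5, -0.2, 0.1])   # the human's recorded actions
w_imitation, *_ = np.linalg.lstsq(demo_states, demo_actions, rcond=None)

# --- Reinforcement learning: improve from reward over many simulated trials
# (here a crude random search stands in for trial-and-error learning).
def reward(w, state):
    """Higher reward the closer the policy's action is to the desired action."""
    target = state @ np.array([0.5, -0.2, 0.1])
    return -abs(state @ w - target)

w_rl = np.zeros(3)
for _ in range(5000):                                     # many cheap simulated episodes
    state = rng.normal(size=3)
    candidate = w_rl + rng.normal(scale=0.05, size=3)
    if reward(candidate, state) > reward(w_rl, state):
        w_rl = candidate

print("cloned weights:", np.round(w_imitation, 2))
print("RL weights:    ", np.round(w_rl, 2))
```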
Both learning models work well when teaching a robot a single task in a structured environment, but they are difficult to scale and deploy. WHIRL can learn from any video of a human doing a task. It is easily scalable, not confined to one specific task and can operate in realistic home environments. The team is even working on a version of WHIRL trained by watching videos of human interaction from YouTube and Flickr.
Progress in computer vision made the work possible. Using models trained on internet data, computers can now understand and model movement in 3D. The team used these models to understand human movement, which made it easier to train WHIRL.
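The article does not name the models the team used; as one hedged example of the general idea, an off-the-shelf, internet-trained pose tracker such as MediaPipe can pull a rough 3D trajectory of a person's hand out of an ordinary video. The choice of model and the video filename here are assumptions for illustration, not the team's pipeline.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_wrist_track(video_path):
    """Run an off-the-shelf pose model frame by frame and collect the right
    wrist position, giving a rough trajectory of the human's reaching motion."""
    track = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                wrist = result.pose_landmarks.landmark[mp_pose.PoseLandmark.RIGHT_WRIST]
                track.append((wrist.x, wrist.y, wrist.z))  # normalized image coords + relative depth
    cap.release()
    return track

if __name__ == "__main__":
    trajectory = extract_wrist_track("fridge_demo.mp4")  # hypothetical demo video
    print(f"tracked {len(trajectory)} frames of wrist motion")
```

Running this sketch requires the opencv-python and mediapipe packages.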
With WHIRL, a robot can accomplish tasks in the environments where they naturally occur. The appliances, doors, drawers, lids, chairs and garbage bag were not modified or manipulated to suit the robot. The robot’s first several attempts at a task ended in failure, but once it had a few successes, it quickly figured out how to accomplish the task and mastered it. While the robot may not accomplish the task with the same movements as a human, that’s not the goal. Humans and robots have different parts, and they move differently. What matters is that the end result is the same. The door is opened. The switch is turned off. The faucet is turned on.
“To scale robotics in the wild, the data must be reliable and stable, and the robots should become better in their environment by practicing on their own,” Pathak said.
Original Article: Robots Learn Household Tasks by Watching Humans