via MIT Technology Review
A pair of robot legs called Cassie has been taught to walk using reinforcement learning, the training technique that teaches AIs complex behavior via trial and error. The two-legged robot learned a range of movements from scratch, including walking in a crouch and while carrying an unexpected load.
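For readers who want to see what "trial and error" means in practice, here is a minimal, purely illustrative sketch in Python: tabular Q-learning on a toy walk-to-the-goal task. It is a textbook-style example and has nothing to do with the controller that actually drives Cassie.

```python
# Minimal illustration of reinforcement learning's trial-and-error loop:
# tabular Q-learning on a toy "walk to the goal" task. Purely didactic,
# unrelated to the learned controller running on Cassie.
import random

N_STATES = 10          # positions 0..9; position 9 is the goal
ACTIONS = [-1, +1]     # step left or step right
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Q-table: estimated future reward for every (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # Trial: mostly exploit what has worked so far, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        # Error signal: small penalty per step, a reward for reaching the goal
        reward = 1.0 if next_state == N_STATES - 1 else -0.01
        # Nudge the estimate toward reward plus discounted future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

print("Preferred action at each position:",
      [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
```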
But can it boogie? Expectations for what robots can do run high thanks to viral videos put out by Boston Dynamics, which show its humanoid Atlas robot standing on one leg, jumping over boxes, and dancing. These videos have racked up millions of views and have even been parodied. The control Atlas has over its movements is impressive, but the choreographed sequences probably involve a lot of hand-tuning. (Boston Dynamics has not published details, so it’s hard to say how much.)
“These videos may lead some people to believe that this is a solved and easy problem,” says Zhongyu Li at the University of California, Berkeley, who worked on Cassie with his colleagues. “But we still have a long way to go to have humanoid robots reliably operate and live in human environments.” Cassie can’t yet dance, but teaching the human-size robot to walk by itself puts it several steps closer to being able to handle a wide range of terrain and recover when it stumbles or damages itself.
Virtual limitations: Reinforcement learning has been used to train many bots to walk inside simulations, but transferring that ability to the real world is hard. “Many of the videos that you see of virtual agents are not at all realistic,” says Chelsea Finn, an AI and robotics researcher at Stanford University, who was not involved in the work. Small differences between the simulated physical laws inside a virtual environment and the real physical laws outside it—such as how friction works between a robot’s feet and the ground—can lead to big failures when a robot tries to apply what it has learned. A heavy two-legged robot can lose balance and fall if its movements are even a tiny bit off.
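To make the friction example concrete, here is a rough back-of-the-envelope calculation. The mass and friction coefficients below are invented for illustration and are not Cassie's specifications; the point is only that Coulomb friction caps the sideways force a planted foot can transmit at the friction coefficient times the foot's normal force.

```python
# Back-of-the-envelope illustration of the friction mismatch described above.
# All numbers are assumptions for illustration, not Cassie's real parameters.
ROBOT_MASS_KG = 30.0               # assumed robot mass
G = 9.81                           # gravitational acceleration, m/s^2
NORMAL_FORCE = ROBOT_MASS_KG * G   # weight borne by one planted foot

mu_sim = 0.9    # friction coefficient assumed inside the simulator
mu_real = 0.6   # friction the robot actually meets on a slick floor

f_max_sim = mu_sim * NORMAL_FORCE    # max horizontal force before slipping, in sim
f_max_real = mu_real * NORMAL_FORCE  # max horizontal force before slipping, in reality

print(f"Push-off force the simulator tolerates: {f_max_sim:.0f} N")   # ~265 N
print(f"Push-off force the real floor tolerates: {f_max_real:.0f} N") # ~177 N
# A policy that learned to push off with ~250 N in simulation stays within the
# simulated limit but makes the real foot slip: exactly the kind of small
# mismatch that can topple a heavy two-legged robot.
```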
Double simulation: But training a large robot through trial and error in the real world would be dangerous. To get around these problems, the Berkeley team used two levels of virtual environment. In the first, a simulated version of Cassie learned to walk by drawing on a large existing database of robot movements. This simulation was then transferred to a second virtual environment called SimMechanics, which mirrors real-world physics with a high degree of accuracy but runs more slowly. Only once Cassie seemed to walk well in that second environment was the learned walking model loaded onto the physical robot.
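In software terms, the description above is a train-fast, validate-accurately, then deploy pipeline. The sketch below mirrors that workflow only; every class and function name is a hypothetical stand-in rather than the Berkeley team's code, and the "accurate" simulator here merely plays the role SimMechanics plays in the article.

```python
# Hypothetical sketch of the two-stage workflow described above.
# FastSim, AccurateSim, and Robot are stand-ins, not the team's actual code.
import random

class FastSim:
    """Stage 1: cheap, approximate physics used for learning."""
    def train(self, motion_database):
        # Stand-in for reinforcement learning guided by a motion database.
        return {"gait": "walk", "reference_motions": motion_database}

class AccurateSim:
    """Stage 2: slower, higher-fidelity physics (the role SimMechanics plays)."""
    def walks_without_falling(self, policy) -> bool:
        # Stand-in for a full physics rollout; here it just returns a noisy pass/fail.
        return random.random() < 0.97

class Robot:
    """Stage 3: the physical machine; the policy is loaded only after validation."""
    def load(self, policy):
        print("Deploying policy to hardware:", policy)

def sim_to_real(motion_database, trials=100, threshold=0.95):
    policy = FastSim().train(motion_database)            # learn quickly
    accurate = AccurateSim()
    success_rate = sum(accurate.walks_without_falling(policy)
                       for _ in range(trials)) / trials  # validate carefully
    if success_rate >= threshold:                        # deploy only if it holds up
        Robot().load(policy)
    return success_rate

sim_to_real(motion_database=["walking", "crouch-walking"])
```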
The real Cassie was able to walk using the model learned in simulation without any extra fine-tuning. It could walk across rough and slippery terrain, carry unexpected loads, and recover from being pushed. During testing, Cassie also damaged two motors in its right leg but was able to adjust its movements to compensate. Finn thinks that this is exciting work. Edward Johns, who leads the Robot Learning Lab at Imperial College London, agrees. “This is one of the most successful examples I have seen,” he says.
The Berkeley team hopes to use its approach to expand Cassie’s repertoire of movements. But don’t expect a dance-off anytime soon.
Original Article: Forget Boston Dynamics. This robot taught itself to walk