
via UC Berkeley
UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future outcomes of their actions so they can figure out how to manipulate objects they have never encountered before. In the future, this technology could help self-driving cars anticipate events on the road and could lead to more intelligent robotic assistants in homes, but the initial prototype focuses on learning simple manual skills entirely from autonomous play.
Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now – predictions made only several seconds into the future – but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles. Crucially, the robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment or what the objects are. That’s because the visual imagination is learned entirely from scratch from unattended and unsupervised exploration, where the robot plays with objects on a table. After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.
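To make the idea concrete, the core planning step can be sketched in a few lines. In the sketch below, a hypothetical `model.predict(frame, action)` call stands in for the trained video-prediction model, returning the imagined next camera frame along with the new location of a tracked "designated pixel" on the object being moved; these names are assumptions for illustration, not the Berkeley code:

```python
import numpy as np

def score_action_sequence(model, frame, pixel, goal_pixel, actions):
    """Roll a learned video-prediction model forward through a candidate
    sequence of motor commands and score how close the tracked object
    pixel is imagined to land relative to its goal position.
    `model.predict` is a hypothetical stand-in for the trained predictor.
    """
    for action in actions:
        # Imagine the outcome of one motor command, a fraction of a
        # second further into the future.
        frame, pixel = model.predict(frame, action)
    # Higher score = the object is predicted to end up nearer the goal.
    return -np.linalg.norm(np.asarray(pixel) - np.asarray(goal_pixel))
```

By scoring many candidate action sequences this way and executing the best one, the robot can work out how to reposition an object without ever being told what the object is.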

“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Sergey Levine, assistant professor in Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”
The research team will perform a demonstration of the visual foresight technology at the Neural Information Processing Systems conference in Long Beach, California, on December 5.
At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next based on the robot’s actions. Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects.
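The central trick of a DNA-style model can be illustrated with a stripped-down sketch: rather than synthesizing pixels from scratch, the network predicts, for every pixel, a small normalized kernel over that pixel's neighborhood in the current frame, and the next frame is composed by applying those kernels, effectively moving existing pixels. The sketch below shows only that advection step under simplifying assumptions (odd kernel size, no recurrence or action conditioning, which the published models include):

```python
import torch
import torch.nn.functional as F

def advect(frame, kernels):
    """Predict the next frame by moving pixels of the current frame.

    frame:   (B, C, H, W) current camera image.
    kernels: (B, K*K, H, W) per-pixel logits over a K x K neighborhood
             (K odd), produced elsewhere by a convolutional recurrent
             network conditioned on past frames and the robot's action.
    """
    B, C, H, W = frame.shape
    K = int(kernels.shape[1] ** 0.5)
    # Normalize so each pixel's kernel is a distribution over where
    # its new value is drawn from.
    weights = torch.softmax(kernels, dim=1)       # (B, K*K, H, W)
    # Gather every pixel's K x K neighborhood from the current frame.
    patches = F.unfold(frame, K, padding=K // 2)  # (B, C*K*K, H*W)
    patches = patches.view(B, C, K * K, H, W)
    # Weighted sum over each neighborhood = advected (moved) pixels.
    return (patches * weights.unsqueeze(1)).sum(dim=2)
```

Because the output is built from re-weighted input pixels, object appearance is largely preserved and the model's job reduces to predicting motion, which is what the robot's actions actually influence.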
“In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own,” said Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model.
With the new technology, a robot pushes objects on a table, then uses the learned prediction model to choose motions that will move an object to a desired location. Using the model learned from raw camera observations, robots teach themselves to avoid obstacles and push objects around obstructions.
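Published descriptions of visual foresight use sampling-based planning in the spirit of model-predictive control; the cross-entropy method is one common choice. A minimal sketch under that assumption, reusing the hypothetical `score_action_sequence` helper from above:

```python
import numpy as np

def plan_push(model, frame, pixel, goal_pixel,
              horizon=10, action_dim=2,
              samples=100, elites=10, iters=3):
    """Cross-entropy-method search for an action sequence whose
    *imagined* outcome moves the tracked pixel toward the goal."""
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(iters):
        # Sample candidate motor-command sequences around the current guess.
        candidates = mean + std * np.random.randn(samples, horizon, action_dim)
        scores = [score_action_sequence(model, frame, pixel, goal_pixel, c)
                  for c in candidates]
        # Refit the sampling distribution to the best imagined outcomes.
        best = candidates[np.argsort(scores)[-elites:]]
        mean, std = best.mean(axis=0), best.std(axis=0)
    # Execute only the first action, observe, then replan (MPC-style).
    return mean[0]
```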
“Humans learn object manipulation skills without any teacher through millions of interactions with a variety of objects during their lifetime. We have shown that it is possible to build a robotic system that also leverages large amounts of autonomously collected data to learn widely applicable manipulation skills, specifically object pushing skills,” said Frederik Ebert, a graduate student in Levine’s lab who worked on the project.
Since control through video prediction relies only on observations that can be collected autonomously by the robot, such as through camera images, the resulting method is general and broadly applicable. In contrast to conventional computer vision methods, which require humans to manually label thousands or even millions of images, building video prediction models only requires unannotated video, which can be collected by the robot entirely autonomously. Indeed, video prediction models have also been applied to datasets that represent everything from human activities to driving, with compelling results.
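This is also why the training signal comes for free: the “label” for each prediction is simply the frame the camera actually recorded next. A minimal self-supervised update in PyTorch, assuming batches of (frame, action, next frame) triples logged during the robot’s autonomous play and a hypothetical `model(frames, actions)` predictor:

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, frames, actions, next_frames):
    """One self-supervised update: no human annotation, just raw video."""
    optimizer.zero_grad()
    predicted = model(frames, actions)         # imagined next frames
    loss = F.mse_loss(predicted, next_frames)  # compare to what happened
    loss.backward()
    optimizer.step()
    return loss.item()
```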
“Children can learn about their world by playing with toys, moving them around, grasping, and so forth. Our aim with this research is to enable a robot to do the same: to learn about how the world works through autonomous interaction,” Levine said. “The capabilities of this robot are still limited, but its skills are learned entirely automatically, and allow it to predict complex physical interactions with objects that it has never seen before by building on previously observed patterns of interaction.”
The Berkeley scientists are continuing to research control through video prediction, focusing on further improving video prediction and prediction-based control, as well as on developing more sophisticated methods by which robots can collect more focused video data for complex tasks such as picking and placing objects, manipulating soft, deformable objects such as cloth or rope, and assembly.
Learn more: New robots can see into their future