via NVIDIA
Researchers from NVIDIA, led by Stan Birchfield and Jonathan Tremblay, have developed a first-of-its-kind deep learning-based system that can teach a robot to complete a task simply by observing the actions of a human. The method is designed to improve communication between humans and robots and, at the same time, to further research toward people working seamlessly alongside robots.
“For robots to perform useful tasks in real-world settings, it must be easy to communicate the task to the robot; this includes both the desired result and any hints as to the best means to achieve that result,” the researchers stated in their research paper. “With demonstrations, a user can communicate a task to the robot and provide clues as to how to best perform the task.”
Using NVIDIA TITAN X GPUs, the researchers trained a sequence of neural networks to handle perception, program generation, and program execution. As a result, the robot was able to learn a task from a single real-world demonstration.
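A minimal sketch of how such a three-stage pipeline could be wired together is shown below. All function names, data structures, and values are illustrative assumptions for this sketch, not NVIDIA's actual API:

```python
# Illustrative three-stage pipeline: perception -> program generation -> execution.
# Every name and structure here is a hypothetical stand-in, not the NVIDIA system.

def perceive(image):
    """Detect objects and return their poses (here: name -> (x, y, z) position)."""
    # A real perception network would output 3D bounding cuboids from the image.
    return {"red_cube": (0.0, 0.0, 0.0), "blue_cube": (0.0, 0.0, 0.05)}

def generate_program(world_state):
    """Turn an observed world state into an ordered list of pick-and-place steps."""
    # Sort objects bottom-up so the plan rebuilds each stack in a valid order.
    ordered = sorted(world_state.items(), key=lambda kv: kv[1][2])
    return [("place", name) for name, _ in ordered]

def describe(program):
    """Render human-readable step descriptions a user could review before execution."""
    return [f"Step {i + 1}: {action} {obj}" for i, (action, obj) in enumerate(program)]

if __name__ == "__main__":
    state = perceive(image=None)      # stand-in for a camera frame
    plan = generate_program(state)
    for line in describe(plan):
        print(line)                   # e.g. "Step 1: place red_cube"
```

The key design point this sketch reflects is modularity: because each stage produces an inspectable intermediate (poses, then a symbolic plan, then text), a user can catch misinterpretations before the robot moves.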
Once the robot observes a task, it generates a human-readable description of the steps necessary to re-perform it. This description allows the user to quickly identify and correct any problems with the robot's interpretation of the human demonstration before execution on the real robot.
The key to achieving this capability is leveraging the power of synthetic data to train the neural networks. Current approaches to training neural networks require large amounts of labeled training data, which is a serious bottleneck in these systems. With synthetic data generation, an almost infinite amount of labeled training data can be produced with very little effort.
This is also the first time an image-centric domain randomization approach has been used on a robot. Domain randomization is a technique to produce synthetic data with large amounts of diversity, which then fools the perception network into seeing the real-world data as simply another variation of its training data. The researchers chose to process the data in an image-centric manner to ensure that the networks are not dependent on the camera or environment.
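In practice, domain randomization amounts to drawing scene parameters (colors, lighting, noise, and so on) from wide random distributions each time a synthetic training image is produced. The sketch below illustrates the idea on a plain image array; the specific perturbations are assumptions for illustration, not the paper's exact randomization set:

```python
import numpy as np

def randomize_image(image, rng):
    """Apply random photometric perturbations so no two synthetic samples look alike.

    The goal of domain randomization: make the training data so varied that a
    real photograph looks like just another variation to the network.
    """
    img = image.astype(np.float32)
    img *= rng.uniform(0.5, 1.5)                   # random global brightness
    img += rng.normal(0.0, 10.0, size=img.shape)   # random pixel noise
    tint = rng.uniform(0.8, 1.2, size=(1, 1, 3))   # random per-channel color tint
    img *= tint
    return np.clip(img, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
rendered = np.full((64, 64, 3), 128, dtype=np.uint8)   # stand-in rendered frame
variants = [randomize_image(rendered, rng) for _ in range(4)]
```

A renderer would additionally randomize geometry, textures, camera pose, and distractor objects; the photometric jitter above is only the simplest slice of that diversity.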
“The perception network as described applies to any rigid real-world object that can be reasonably approximated by its 3D bounding cuboid,” the researchers said. “Despite never observing a real image during training, the perception network reliably detects the bounding cuboids of objects in real images, even under severe occlusions.”
For their demonstration, the team trained object detectors on several colored blocks and a toy car. The system was taught the physical relationships between blocks: whether they are stacked on top of one another or placed next to each other.
In the video above, the human operator shows the robot a pair of stacks of cubes. The system then infers an appropriate program and places the cubes in the correct order. Because it takes the current state of the world into account during execution, the system can recover from mistakes in real time.
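The stacking relationship can be illustrated with a small geometric check: if two detected cuboids are vertically aligned, one is stacked on the other; otherwise they sit side by side. A simplified sketch, where the representation (cuboid centers in meters) and the tolerance are assumptions for illustration:

```python
def relation(a, b, xy_tol=0.03):
    """Classify the spatial relation between two cuboid centers (x, y, z in meters).

    Returns "stacked" when the cuboids are vertically aligned within xy_tol,
    otherwise "next_to". A real system would reason over full cuboid extents.
    """
    ax, ay, _ = a
    bx, by, _ = b
    horizontally_aligned = abs(ax - bx) < xy_tol and abs(ay - by) < xy_tol
    return "stacked" if horizontally_aligned else "next_to"

# Two cubes directly above one another vs. two cubes apart on the table.
print(relation((0.10, 0.20, 0.00), (0.10, 0.20, 0.05)))  # stacked
print(relation((0.10, 0.20, 0.00), (0.30, 0.20, 0.00)))  # next_to
```

Deriving such relations from detected poses, rather than from raw pixels, is what lets the system re-plan when the world changes mid-execution.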
The researchers will present their paper and work at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia, this week.
The team says they will continue to explore the use of synthetic training data for robotics manipulation to extend the capabilities of their method to additional scenarios.
via NVIDIA: Read the research paper