Engineers at Caltech, ETH Zurich, and Harvard are developing an artificial intelligence (AI) that will allow autonomous drones to use ocean currents to aid their navigation, rather than fighting their way through them.
“When we want robots to explore the deep ocean, especially in swarms, it’s almost impossible to control them with a joystick from 20,000 feet away at the surface. We also can’t feed them data about the local ocean currents they need to navigate because we can’t detect them from the surface. Instead, at a certain point we need ocean-borne drones to be able to make decisions about how to move for themselves,” says John O. Dabiri (MS ’03, PhD ’05), the Centennial Professor of Aeronautics and Mechanical Engineering and corresponding author of a paper about the research that was published by Nature Communications on December 8.
To tackle this challenge, the researchers turned to reinforcement learning (RL). Unlike conventional neural networks, which train on a static data set, RL networks train continuously as they gather experience. This approach allows them to run on much smaller computers; for this project, the team wrote software that can be installed and run on a Teensy, a 2.4-by-0.7-inch microcontroller that anyone can buy for less than $30 on Amazon and that uses only about half a watt of power.
Using a computer simulation in which flow past an obstacle in water created several vortices moving in opposite directions, the team taught the AI to navigate in such a way that it took advantage of low-velocity regions in the wake of the vortices to coast to the target location while expending minimal power. To aid its navigation, the simulated swimmer had access only to information about the water currents at its immediate location, yet it soon learned how to exploit the vortices to coast toward the desired target. In a physical robot, the AI would similarly only have access to information that could be gathered from an onboard gyroscope and accelerometer, both of which are relatively small, low-cost sensors for a robotic platform.
This kind of navigation is analogous to the way eagles and hawks ride thermals in the air, extracting energy from air currents to maneuver to a desired location with the minimum energy expended. Surprisingly, the researchers discovered that their reinforcement learning algorithm could learn navigation strategies that are even more effective than those thought to be used by real fish in the ocean.
“We were initially just hoping the AI could compete with navigation strategies already found in real swimming animals, so we were surprised to see it learn even more effective methods by exploiting repeated trials on the computer,” says Dabiri.
Original Article: Engineers Teach AI to Navigate Ocean with Minimal Energy