
By sensing tiny changes in shadows, a new system identifies approaching objects that may cause a collision.
To improve the safety of autonomous systems, MIT engineers have developed a system that can sense tiny changes in shadows on the ground to determine if there’s a moving object coming around the corner.
Autonomous cars could one day use the system to quickly avoid a potential collision with another car or pedestrian emerging from around a building’s corner or from in between parked cars. In the future, robots that may navigate hospital hallways to make medication or supply deliveries could use the system to avoid hitting people.
In a paper being presented at next week’s International Conference on Intelligent Robots and Systems (IROS), the researchers describe successful experiments with an autonomous car driving around a parking garage and an autonomous wheelchair navigating hallways. When sensing and stopping for an approaching vehicle, the car-based system beats traditional LiDAR, which can only detect objects in its direct line of sight, by more than half a second.
That may not seem like much, but fractions of a second matter when it comes to fast-moving autonomous vehicles, the researchers say.
“For applications where robots are moving around environments with other moving objects or people, our method can give the robot an early warning that somebody is coming around the corner, so the vehicle can slow down, adapt its path, and prepare in advance to avoid a collision,” adds co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “The big dream is to provide ‘X-ray vision’ of sorts to vehicles moving fast on the streets.”
Currently, the system has only been tested in indoor settings. Robotic speeds are much lower indoors, and lighting conditions are more consistent, making it easier for the system to sense and analyze shadows.
Joining Rus on the paper are: first author Felix Naser SM ’19, a former CSAIL researcher; Alexander Amini, a CSAIL graduate student; Igor Gilitschenski, a CSAIL postdoc; recent graduate Christina Liao ’19; Guy Rosman of the Toyota Research Institute; and Sertac Karaman, an associate professor of aeronautics and astronautics at MIT.
Extending ShadowCam
For their work, the researchers built on their system, called “ShadowCam,” that uses computer-vision techniques to detect and classify changes to shadows on the ground. MIT professors William Freeman and Antonio Torralba, who are not co-authors on the IROS paper, collaborated on the earlier versions of the system, which were presented at conferences in 2017 and 2018.
For input, ShadowCam uses sequences of video frames from a camera targeting a specific area, such as the floor in front of a corner. It detects changes in light intensity over time, from image to image, that may indicate something moving away or coming closer. Some of those changes are difficult or impossible to see with the naked eye, depending on properties of the object and the environment. ShadowCam computes that information and classifies each image as containing either a stationary object or a dynamic, moving one. If it classifies an image as dynamic, it reacts accordingly.
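As a rough illustration of that classification step, the sketch below compares mean light intensity in a fixed floor patch between consecutive frames and flags the patch as dynamic when the change exceeds a cutoff. This is not the authors’ implementation; the function name, threshold, and region of interest are placeholders.

```python
# Minimal frame-differencing sketch, assuming OpenCV and NumPy; the
# threshold and ROI are hypothetical, not values from the paper.
import cv2
import numpy as np

DYNAMIC_THRESHOLD = 4.0  # hypothetical mean-intensity-change cutoff

def classify_frame(prev_frame, curr_frame, roi):
    """Label the region of interest 'dynamic' or 'static' by frame differencing."""
    x, y, w, h = roi  # pixel patch covering, e.g., the floor near a corner
    prev = cv2.cvtColor(prev_frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    change = np.abs(curr.astype(np.float32) - prev.astype(np.float32)).mean()
    return "dynamic" if change > DYNAMIC_THRESHOLD else "static"
```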
Adapting ShadowCam for autonomous vehicles required a few advances. The early version, for instance, relied on lining an area with augmented reality labels called “AprilTags,” which resemble simplified QR codes. Robots scan AprilTags to detect and compute their precise 3D position and orientation relative to the tag. ShadowCam used the tags as features of the environment to zero in on specific patches of pixels that may contain shadows. But modifying real-world environments with AprilTags is not practical.
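For a sense of what tag-based pose estimation looks like in practice, here is a short example using pupil-apriltags, one common open-source AprilTag library. It is purely illustrative, not the toolchain from the paper, and the camera intrinsics and tag size are made-up placeholders.

```python
# Illustrative AprilTag pose estimation with the pupil-apriltags library;
# camera parameters and tag size below are hypothetical placeholders.
import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")
gray = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2GRAY)

# (fx, fy, cx, cy) are camera intrinsics; tag_size is the printed tag's
# side length in meters. Both are placeholders here.
detections = detector.detect(
    gray,
    estimate_tag_pose=True,
    camera_params=(600.0, 600.0, 320.0, 240.0),
    tag_size=0.16,
)
for det in detections:
    # pose_R / pose_t give the tag's 3D orientation and position
    # relative to the camera.
    print(det.tag_id, det.pose_R, det.pose_t)
```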
The researchers developed a novel process that combines image registration and a new visual-odometry technique. Often used in computer vision, image registration essentially overlays multiple images to reveal variations in the images. Medical image registration, for instance, overlaps medical scans to compare and analyze anatomical differences.
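As a generic picture of feature-based image registration (not necessarily the paper’s exact pipeline), the sketch below aligns a new frame to a reference frame using ORB features and a RANSAC homography in OpenCV, so the same floor patch lands on the same pixels in both images.

```python
# Generic feature-based registration sketch, assuming OpenCV; ORB features
# and a RANSAC homography stand in for whatever the authors actually used.
# Both inputs are expected to be 8-bit grayscale images.
import cv2
import numpy as np

def register_to_reference(reference, frame):
    """Warp `frame` into the pixel coordinates of `reference`."""
    orb = cv2.ORB_create(1000)
    kp_ref, des_ref = orb.detectAndCompute(reference, None)
    kp_frm, des_frm = orb.detectAndCompute(frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_frm), key=lambda m: m.distance)
    src = np.float32([kp_frm[m.trainIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.queryIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```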
Visual odometry, used for Mars rovers, estimates the motion of a camera in real time by analyzing pose and geometry in sequences of images. The researchers specifically employ “Direct Sparse Odometry” (DSO), which can compute feature points in environments similar to those captured by AprilTags. Essentially, DSO plots features of an environment on a 3D point cloud, and then a computer-vision pipeline selects only the features located in a region of interest, such as the floor near a corner. (Regions of interest were annotated manually beforehand.)
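That region-of-interest selection can be pictured as a simple filter over the odometry’s 3D feature points. The sketch below assumes the points arrive as an (N, 3) NumPy array and models the annotated region as an axis-aligned bounding box; DSO’s real data structures differ.

```python
# Illustrative ROI filter over a visual-odometry point cloud; the
# bounding-box representation of the annotated region is an assumption.
import numpy as np

def points_in_roi(points_3d, roi_min, roi_max):
    """Keep only the 3D feature points that fall inside the annotated region."""
    pts = np.asarray(points_3d, dtype=np.float64)
    lo = np.asarray(roi_min, dtype=np.float64)
    hi = np.asarray(roi_max, dtype=np.float64)
    inside = np.all((pts >= lo) & (pts <= hi), axis=1)
    return pts[inside]

# Example with made-up bounds: keep points on the floor patch near a corner.
floor_points = points_in_roi(np.random.rand(500, 3),
                             roi_min=[0.0, -0.1, 0.0],
                             roi_max=[2.0, 0.1, 3.0])
```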
As ShadowCam takes input image sequences of a region of interest, it uses the DSO-image-registration method to overlay all the images from the same viewpoint of the robot. Even as the robot is moving, it’s able to zero in on the exact same patch of pixels where a shadow is located, helping it detect any subtle deviations between images.
Next is signal amplification, a technique introduced in the first paper. Pixels that may contain shadows get a boost in color that raises the signal above the noise, making extremely weak signals from shadow changes far more detectable. If the boosted signal reaches a certain threshold, based partly on how much it deviates from other nearby shadows, ShadowCam classifies the image as “dynamic.” Depending on the strength of that signal, the system may tell the robot to slow down or stop.
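A highly simplified picture of this amplify-then-threshold step follows; the gain and threshold are made-up constants, and the paper’s actual amplification operates on color channels with a more involved decision rule.

```python
# Hedged sketch of signal amplification and thresholding; GAIN and
# THRESHOLD are hypothetical, and the real method amplifies color
# rather than raw grayscale deviations.
import numpy as np

GAIN = 8.0        # hypothetical amplification factor
THRESHOLD = 12.0  # hypothetical "dynamic" decision cutoff

def amplify_and_classify(mean_image, frame):
    """Boost per-pixel deviation from the temporal mean, then threshold it."""
    deviation = frame.astype(np.float32) - mean_image.astype(np.float32)
    score = np.abs(GAIN * deviation).mean()
    if score > THRESHOLD:
        return "dynamic"  # signal the robot to slow down or stop
    return "static"
```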
“By detecting that signal, you can then be careful. It may be a shadow of some person running from behind the corner or a parked car, so the autonomous car can slow down or stop completely,” Naser says.
Tag-free testing
In one test, the researchers evaluated the system’s performance in classifying moving or stationary objects using AprilTags and the new DSO-based method. An autonomous wheelchair steered toward various hallway corners while humans turned the corner into the wheelchair’s path. Both methods achieved the same 70-percent classification accuracy, indicating AprilTags are no longer needed.
In a separate test, the researchers implemented ShadowCam in an autonomous car in a parking garage, where the headlights were turned off, mimicking nighttime driving conditions. They compared car-detection times versus LiDAR. In an example scenario, ShadowCam detected the car turning around pillars about 0.72 seconds faster than LiDAR. Moreover, because the researchers had tuned ShadowCam specifically to the garage’s lighting conditions, the system achieved a classification accuracy of around 86 percent.
Next, the researchers are developing the system further to work in different indoor and outdoor lighting conditions. In the future, there could also be ways to speed up the system’s shadow detection and automate the process of annotating targeted areas for shadow sensing.