Think fast, robot

A visual odometry algorithm uses low-latency brightness-change events from a Dynamic Vision Sensor (DVS) and the data from a normal camera to provide absolute brightness values. The left photograph shows the camera frame, and the right photograph shows the DVS events (displayed in red and blue) plus grayscale from the camera. Image courtesy of the researchers.
Algorithm that harnesses data from a new sensor could make autonomous robots more nimble.

One of the reasons we don’t yet have self-driving cars and mini-helicopters delivering online purchases is that autonomous vehicles tend not to perform well under pressure. A system that can flawlessly parallel park at 5 mph may have trouble avoiding obstacles at 35 mph.

Part of the problem is the time it takes to produce and interpret camera data. An autonomous vehicle using a standard camera to monitor its surroundings might take about a fifth of a second to update its location. That’s good enough for normal operating conditions but not nearly fast enough to handle the unexpected.

Andrea Censi, a research scientist in MIT’s Laboratory for Information and Decision Systems, thinks the solution could be to supplement cameras with a new type of sensor called an event-based (or “neuromorphic”) sensor, which can take measurements a million times a second.
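
To make the idea concrete, here is a minimal sketch of how such a system might combine the two data streams: the standard camera supplies an absolute brightness image every fifth of a second or so, while the DVS supplies a continuous stream of per-pixel brightness-change events that keep the estimate current in between. This is an illustration under simplified assumptions, not the researchers' actual algorithm; the `Event` class, the contrast threshold `C`, and `integrate_events` are all hypothetical names.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    """One DVS event (simplified, hypothetical model)."""
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds (DVS events arrive at microsecond rates)
    polarity: int   # +1 if brightness increased, -1 if it decreased

# Assumed contrast threshold: each event signals roughly this much
# change in log-brightness at one pixel.
C = 0.15

def integrate_events(log_frame: np.ndarray, events: list[Event]) -> np.ndarray:
    """Update an absolute log-brightness image (from a standard camera
    frame) with asynchronous DVS events received since that frame."""
    est = log_frame.copy()
    for e in events:
        est[e.y, e.x] += e.polarity * C
    return est

# Usage: a camera frame arrives every ~0.2 s; events stream in continuously.
frame = np.random.rand(180, 240)      # stand-in grayscale camera frame
log_frame = np.log(frame + 1e-3)      # work in log-brightness space
events = [Event(x=10, y=20, t=0.05, polarity=+1)]
updated = integrate_events(log_frame, events)
```

The point of the design is latency: rather than waiting for the next full frame, the estimate is nudged the moment each event arrives, so a fast-moving robot can react between frames.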

