
Find both toothbrushes in the scene
UCSB researchers compare the performance of human subjects and deep neural networks in visual search
Before you read on, look for toothbrushes in the photo above. Find them? Both of them? If you’re like the vast majority of people, you homed in on the one near the sink, but probably took a moment or two before seeing the other, much larger one hanging on the wall. Although it is technically far more visible and not out of context, for a while at least your brain excluded that enormous blue toothbrush from your visual search.
As it turns out, size matters. When we search through scenes for a particular object, we often miss even giant targets when their size is inconsistent with the rest of the scene. That’s according to scientists at UC Santa Barbara, where this curious phenomenon is being investigated in an effort to better understand how humans conduct visual searches.
These new findings, by researchers in the Department of Psychological & Brain Sciences, are published in the journal Current Biology.
“When something appears at the wrong scale, you will miss it more often because your brain automatically ignores it,” said UCSB professor Miguel Eckstein, who specializes in computational human vision, visual attention and search.
Using computer-generated scenes of ordinary objects, in which 14 targets varied in color, viewing angle and size and were mixed with “target-absent” scenes, the researchers asked 60 viewers to search for these objects (e.g., toothbrush, parking meter, computer mouse) while eye-tracking software monitored their gaze paths. They found that people tended to miss the target more often when it was mis-scaled, even when their gaze was directed to the incorrectly sized object.
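To make the analysis behind that finding concrete, here is a minimal sketch (an assumed data layout, not the authors’ code) of how miss rates might be tallied separately for scale-consistent and mis-scaled targets, overall and restricted to trials where the eye tracker recorded a fixation on the target; the trial records are hypothetical placeholders.

```python
# Minimal sketch with hypothetical trial records (not the authors' analysis):
# miss rates for scale-consistent vs. mis-scaled targets, overall and on
# trials where the observer's gaze actually landed on the target.
from collections import defaultdict

trials = [
    {"scale_consistent": True,  "fixated_target": True,  "found": True},
    {"scale_consistent": False, "fixated_target": True,  "found": False},
    {"scale_consistent": False, "fixated_target": True,  "found": False},
    {"scale_consistent": True,  "fixated_target": False, "found": False},
]

def miss_rate(rows):
    """Fraction of target-present trials on which the target was missed."""
    return sum(not r["found"] for r in rows) / len(rows) if rows else float("nan")

by_condition = defaultdict(list)
for t in trials:
    by_condition[t["scale_consistent"]].append(t)

for consistent, rows in sorted(by_condition.items(), reverse=True):
    label = "consistent scale" if consistent else "mis-scaled"
    fixated = [r for r in rows if r["fixated_target"]]
    print(f"{label}: miss rate = {miss_rate(rows):.2f}, "
          f"given fixation on target = {miss_rate(fixated):.2f}")
```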
Computer vision, by contrast, does not have this issue, the scientists reported.
“The idea is when you first see a scene, your brain rapidly processes it within a few hundred milliseconds or less, and then you use that information to guide your search towards likely locations where the object typically appears,” Eckstein said. “Also, you focus your attention on objects that are actually at the size that is consistent with the object that you’re looking for.”
The most advanced computer vision systems, deep neural networks, search across entire scenes using the visual properties of the object itself, while humans also use the relationships between objects and their context within the scene to guide their eyes.
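That contrast can be made concrete with a stock object detector. The sketch below is illustrative only and is not the model used in the study: it runs a pretrained Faster R-CNN from torchvision over an image, scoring candidate boxes everywhere in the scene from object appearance alone, with nothing that says a toothbrush should be hand-sized relative to the sink beside it. The random tensor stands in for a real photograph.

```python
# Illustrative sketch, not the study's network: an off-the-shelf detector
# scans the entire image and scores candidates from appearance alone; no
# scene-scale prior (e.g. "toothbrushes are hand-sized") is applied.
# Assumes torchvision >= 0.13 for the weights="DEFAULT" argument.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)  # stand-in for an RGB scene, values in [0, 1]

with torch.no_grad():
    detections = model([image])[0]  # dict with 'boxes', 'labels', 'scores'

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.5:
        print(int(label), round(score.item(), 3), [round(v, 1) for v in box.tolist()])
```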
This trend may seem like a deficiency on the part of humans, but the tables were turned when human subjects and a deep neural network were asked to verify the presence of different target objects in real-world scenes that might or might not contain them. In that round, the deep neural network reported a much higher percentage of false positives. That is, the network confirmed the presence of, say, a cellphone in a scene that actually contained computer keyboards, because of their similarity in shape, despite the fact that keyboards are several times larger than a cellphone and, in the photo, much larger than the nearby hands that would hold them.
“No human would do that,” added former graduate student Katie Koehler, now working at Riot Games. “Just based on the size, your brain would automatically discard it.”
This mechanism, according to the researchers, is in fact a useful strategy the human brain implements to process scenes rapidly, eliminate distractors and reduce false positives. While this blindness to inconsistently sized objects may be an unwanted byproduct of the brain’s search strategy, such scenarios are rare in the real world, and with repeated exposure to an unusual scene, human observers will eventually adapt their visual searches to accommodate it.
“The findings might suggest ways to improve computer vision by implementing some of the tricks the brain utilizes to reduce false positives,” said former postdoctoral researcher Emre Akbas, now an assistant professor of computer engineering at Middle East Technical University in Turkey, who was responsible for the computer vision components of the project.
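As a rough illustration of the kind of size-based filtering Akbas describes (a hedged sketch under assumptions, not the researchers’ method), detector output could be post-processed against a crude physical-size prior: given an estimated scene scale in pixels per centimetre, any detection whose box implies a wildly implausible real-world size for its class is discarded. The typical sizes, the tolerance and the scene-scale value below are all illustrative assumptions.

```python
# Hedged sketch, not the researchers' method: filter detections whose bounding
# boxes imply an implausible physical size for their class, given a rough
# scene-scale estimate. All numbers below are illustrative assumptions.

TYPICAL_LENGTH_CM = {"cell phone": 15.0, "keyboard": 45.0, "toothbrush": 19.0}

def plausible(label, box, px_per_cm, tolerance=3.0):
    """Keep a detection only if its longer side is within `tolerance`x of the
    typical physical length for that class."""
    x1, y1, x2, y2 = box
    longest_px = max(x2 - x1, y2 - y1)
    implied_cm = longest_px / px_per_cm
    typical_cm = TYPICAL_LENGTH_CM.get(label)
    if typical_cm is None:
        return True  # no size prior for this class; keep the detection
    ratio = implied_cm / typical_cm
    return 1.0 / tolerance <= ratio <= tolerance

# A "cell phone" detection with a keyboard-sized box gets discarded; a
# plausibly sized one is kept.
detections = [
    {"label": "cell phone", "box": (100, 100, 500, 220), "score": 0.92},
    {"label": "cell phone", "box": (300, 300, 345, 390), "score": 0.81},
]
px_per_cm = 8.0  # assumed scene scale, e.g. estimated from a reference object

kept = [d for d in detections if plausible(d["label"], d["box"], px_per_cm)]
print(kept)  # only the plausibly sized detection survives
```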
According to Eckstein, some people on the autism spectrum might not miss the large targets in a scene. He is contemplating a study on that topic in the future.
“There are some theories that suggest that people with autism spectrum disorder focus more on local scene information and less on global structure,” he said. “So there is a possibility that people with autism spectrum disorder might miss the mis-scaled objects less often, but we won’t know that until we do the study.”
In the more immediate future, research will look into the brain activity that occurs when we view mis-scaled objects.
“Many studies have identified brain regions that process scenes and objects, and now researchers are trying to understand which particular properties of scenes and objects are represented in these regions,” said postdoctoral researcher Lauren Welbourne, whose current research concentrates on how objects are represented in the cortex, and how scene context influences the perception of objects. “And so what we’re trying to do is find out how these brain areas respond to objects that are either correctly or incorrectly scaled within a scene. This may help us determine which regions are responsible for making it more difficult for us to find objects if they are mis-scaled.”
Learn more: In Plain Sight