
The “computer vision” system can identify objects based on only partial glimpses, such as these photo snippets of a motorcycle.
Researchers from the UCLA Samueli School of Engineering and Stanford University have demonstrated a computer system that can discover and identify the real-world objects it “sees” based on the same method of visual learning that humans use.
The system is an advance in a type of technology called “computer vision,” which enables computers to read and identify visual images. It is an important step toward general artificial intelligence systems–computers that learn on their own, are intuitive, make decisions based on reasoning and interact with humans in a more human-like way. Although current AI computer vision systems are increasingly powerful and capable, they are task-specific, meaning their ability to identify what they see is limited by how much they have been trained and programmed by humans.

Even today’s best computer vision systems cannot create a full picture of an object after seeing only certain parts of it, and the systems can be fooled by viewing the object in an unfamiliar setting. Engineers aim to build computer systems with those abilities, just as humans can understand that they are looking at a dog even if the animal is hiding behind a chair and only the paws and tail are visible. Humans, of course, can also easily intuit where the dog’s head and the rest of its body are, but that ability still eludes most artificial intelligence systems.
Current computer vision systems are not designed to learn on their own. They must be trained on exactly what to learn, usually by reviewing thousands of images in which the objects they are trying to identify are labeled for them.
Computers, of course, also cannot explain their rationale for determining what the object in a photo represents: AI-based systems do not build an internal picture or a common-sense model of learned objects the way humans do.
The engineers’ new method, described in the Proceedings of the National Academy of Sciences, shows a way around these shortcomings.
The approach is made up of three broad steps. First, the system breaks up an image into small chunks, which the researchers call “viewlets.” Second, the computer learns how these viewlets fit together to form the object in question. And finally, it looks at what other objects are in the surrounding area, and whether or not information about those objects is relevant to describing and identifying the primary object.
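The outline below is a minimal, hypothetical sketch of those three broad steps, not the researchers’ published implementation. It uses synthetic images and off-the-shelf clustering (NumPy and scikit-learn are assumed to be available): patch extraction stands in for carving out “viewlets,” clustering plus co-occurrence counts stand in for learning how viewlets fit together, and a simple scoring function stands in for using the surrounding context.

```python
# Hypothetical sketch of the three-stage idea described above,
# NOT the authors' published method. Assumes numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Step 1: break each "image" into small chunks (viewlets).
# Here each image is a random 64x64 array; real viewlets would come
# from regions of actual photos.
def extract_viewlets(image, patch=16):
    h, w = image.shape
    return [image[y:y + patch, x:x + patch].ravel()
            for y in range(0, h - patch + 1, patch)
            for x in range(0, w - patch + 1, patch)]

images = [rng.random((64, 64)) for _ in range(20)]
patches, owners = [], []
for i, img in enumerate(images):
    for p in extract_viewlets(img):
        patches.append(p)
        owners.append(i)
patches = np.array(patches)

# Step 2: discover recurring viewlet types by clustering similar patches,
# then record which types co-occur in the same image -- a crude stand-in
# for learning how viewlets fit together to form an object.
n_types = 8
labels = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit_predict(patches)
cooccur = np.zeros((n_types, n_types))
for i in range(len(images)):
    types_here = {labels[j] for j in range(len(owners)) if owners[j] == i}
    for a in types_here:
        for b in types_here:
            cooccur[a, b] += 1

# Step 3: use context -- viewlet types already seen in a scene raise the
# score of candidate types they frequently co-occur with.
def context_score(observed_types, candidate_type):
    return sum(cooccur[t, candidate_type] for t in observed_types)

print(context_score({0, 1}, 2))
```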
To help the new system “learn” more like humans, the engineers decided to immerse it in an internet replica of the environment humans live in.
“Fortunately, the internet provides two things that help a brain-inspired computer vision system learn the same way humans do,” said Vwani Roychowdhury, a UCLA professor of electrical and computer engineering and the study’s principal investigator. “One is a wealth of images and videos that depict the same types of objects. The second is that these objects are shown from many perspectives–obscured, bird’s eye, up-close–and they are placed in different kinds of environments.”
To develop the framework, the researchers drew insights from cognitive psychology and neuroscience.
“Starting as infants, we learn what something is because we see many examples of it, in many contexts,” Roychowdhury said. “That contextual learning is a key feature of our brains, and it helps us build robust models of objects that are part of an integrated worldview where everything is functionally connected.”
The researchers tested the system with about 9,000 images, each showing people and other objects. The platform was able to build a detailed model of the human body without external guidance and without the images being labeled.
The engineers ran similar tests using images of motorcycles, cars and airplanes. In all cases, their system performed as well as or better than traditional computer vision systems that had been developed over many years of training.
Learn more: New AI system mimics how humans visualize and identify objects