UCLA engineers have made major improvements on their design of an optical neural network – a device inspired by how the human brain works – that can identify objects or process information at the speed of light.
The development could lead to intelligent camera systems that figure out what they are seeing simply by the patterns of light that run through a 3D engineered material structure. Their new design takes advantage of the parallelization and scalability of optical-based computational systems.
For example, such systems could be incorporated into self-driving cars or robots, helping them make near-instantaneous decisions while using less power than conventional computer-based systems, which need additional time to identify an object after it has been seen.
The technology was first introduced by the UCLA group in 2018. The system uses a series of 3D-printed wafers or layers with uneven surfaces that transmit or reflect incoming light – reminiscent in look and effect of frosted glass. These layers have tens of thousands of pixel points – essentially artificial neurons – that together form an engineered volume of material that computes all-optically. Each object produces a unique light pathway through the 3D-fabricated layers.
Behind those layers are several light detectors, each previously assigned in a computer to deduce what the input object is by where the most light ends up after traveling through the layers.
For example, if the system is trained to recognize handwritten digits, then the detector assigned to “5” will receive the most light after the image of a “5” has traveled through the layers.
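The readout step described above amounts to taking the detector with the largest optical signal. A minimal sketch of that decision rule, using purely illustrative signal values (the variable names and numbers are assumptions, not measurements from the study):

```python
# Hypothetical optical power collected by the ten detectors (one per
# digit 0-9) after an image of a "5" passes through the diffractive layers.
signals = [0.03, 0.02, 0.05, 0.04, 0.06, 0.61, 0.05, 0.04, 0.05, 0.05]

# The predicted digit is simply the index of the brightest detector.
predicted_digit = max(range(len(signals)), key=lambda i: signals[i])
print(predicted_digit)  # 5
```

The "computation" happens optically in the layers; the electronics only need to compare a handful of detector readings, which is why the authors describe the detector circuitry as simple.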
In this recent study, published in the open access journal Advanced Photonics, the UCLA researchers significantly increased the system’s accuracy by adding a second set of detectors, so that each object type is now represented by two detectors rather than one. The researchers aimed to increase the signal difference between the detector pair assigned to each object type. Intuitively, this is similar to weighing two stones simultaneously with the left and right hands – it is easier this way to tell whether they have similar or different weights.
This differential detection scheme helped UCLA researchers improve their prediction accuracy for unknown objects that were seen by their optical neural network.
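In a differential scheme of this kind, each class score is the signal on its positive detector minus the signal on its paired negative detector, and the class with the largest difference wins. A short sketch with made-up readings (the values and variable names are illustrative assumptions, not data from the paper):

```python
# Hypothetical readings from each class's positive and negative detector
# (ten class pairs, indices 0-9).
positive = [0.05, 0.04, 0.06, 0.05, 0.07, 0.55, 0.06, 0.05, 0.06, 0.05]
negative = [0.06, 0.05, 0.05, 0.06, 0.06, 0.08, 0.05, 0.06, 0.05, 0.06]

# Differential score per class: positive minus negative, like comparing
# two stones held in the left and right hands.
scores = [p - n for p, n in zip(positive, negative)]
predicted_class = max(range(len(scores)), key=lambda i: scores[i])
print(predicted_class)  # 5
```

Subtracting the paired readings cancels background light that hits both detectors equally, which is one plausible reason the differential readout improves prediction accuracy over a single detector per class.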
“Such a system performs machine-learning tasks with light-matter interaction and optical diffraction inside a 3D fabricated material structure, at the speed of light and without the need for extensive power, except the illumination light and a simple detector circuitry,” said Aydogan Ozcan, Chancellor’s Professor of Electrical and Computer Engineering and the principal investigator on the research. “This advance could enable task-specific smart cameras that perform computation on a scene using only photons and light-matter interaction, making it extremely fast and power efficient.”
The researchers tested their system’s accuracy using image datasets of hand-written digits, items of clothing, and a broader set of various vehicles and animals known as the CIFAR-10 image dataset. They found image recognition accuracy rates of 98.6%, 91.1% and 51.4% respectively.
Those results compare very favorably to earlier generations of all-electronic deep neural nets. While more recent electronic systems achieve better accuracy, the researchers suggest that all-optical systems offer advantages in inference speed and power consumption, and can be scaled up to accommodate and identify many more objects in parallel.
Learn more: Optical neural network could lead to intelligent cameras