A massively parallel amplitude-only Fourier neural network
Researchers invent an optical convolutional neural network accelerator for machine learning
SUMMARY
Researchers at the George Washington University, together with researchers at the University of California, Los Angeles, and the deep-tech startup Optelligence LLC, have developed an optical convolutional neural network accelerator capable of processing information at rates on the order of petabytes per second. The innovation harnesses the massive parallelism of light and heralds a new era of optical signal processing for machine learning, with applications in self-driving cars, 5G networks, data centers, biomedical diagnostics, data security and more.
THE SITUATION
Global demand for machine learning hardware is dramatically outpacing the available computing power. State-of-the-art electronic hardware, such as graphics processing units and tensor processing unit accelerators, helps mitigate this but is intrinsically limited by serial, iterative data processing and by delays from wiring and circuit constraints. Optical alternatives could speed up machine learning by processing information in parallel and without iteration. However, photonic machine learning is typically limited by the number of components that can be placed on a photonic integrated circuit, which restricts interconnectivity, while free-space spatial light modulators are limited to slow programming speeds.
THE SOLUTION
To achieve this breakthrough, the researchers replaced the spatial light modulators with digital mirror-based technology, making the system more than 100 times faster. The non-iterative operation of this processor, combined with rapid programmability and massive parallelization, enables the optical machine learning system to outperform even top-of-the-line graphics processing units by more than an order of magnitude, with room for further optimization beyond the initial prototype.
Unlike current electronic machine learning hardware, which processes information sequentially, this processor uses Fourier optics, a frequency-filtering concept that allows the convolutions required by the neural network to be performed as much simpler element-wise multiplications using the digital mirror technology.
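In software terms, this is the convolution theorem at work: a convolution in the spatial domain is equivalent to an element-wise multiplication of Fourier transforms. The short NumPy sketch below is purely illustrative, not the authors' optical hardware or code; the array sizes and random data are assumptions chosen for the demonstration. It checks that the Fourier-domain product reproduces a direct circular convolution.

```python
import numpy as np

# Illustrative sketch (not the authors' system): the convolution theorem says
# a 2D convolution equals an element-wise multiplication in the Fourier domain.

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))   # stand-in for an input feature map
kernel = rng.standard_normal((5, 5))    # stand-in for a convolutional filter


def circular_conv2d(x, k):
    """Direct (spatial-domain) circular convolution, computed naively for reference."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            for a in range(kh):
                for b in range(kw):
                    out[i, j] += k[a, b] * x[(i - a) % h, (j - b) % w]
    return out


# Fourier-domain version: pad the kernel to the image size, transform both,
# multiply element-wise, and transform back.
kernel_padded = np.zeros_like(image)
kernel_padded[:kernel.shape[0], :kernel.shape[1]] = kernel

fourier_product = np.fft.fft2(image) * np.fft.fft2(kernel_padded)
conv_via_fft = np.real(np.fft.ifft2(fourier_product))

assert np.allclose(conv_via_fft, circular_conv2d(image, kernel))
print("Fourier-domain element-wise multiplication reproduces the convolution.")
```

In the optical accelerator, the Fourier transform is performed for free by a lens, so only the element-wise multiplication step has to be programmed, which is what makes the approach so parallel and fast.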