A schematic of how the lens-less imaging process works, from light collection through signal encoding to post-processing with computing algorithms.
CREDIT: Xiuxi Pan, Tokyo Tech
A camera usually requires a lens system to capture a focused image, and lensed cameras have been the dominant imaging solution for centuries. Achieving high-quality, bright, aberration-free imaging, however, demands a complex lens system. Recent decades have seen a surge in demand for smaller, lighter, and cheaper cameras: next-generation devices with high functionality that are compact enough to be installed anywhere. But the miniaturization of lensed cameras is fundamentally limited by the lens system itself and by the focusing distance that refractive lenses require.
Recent advances in computing technology can simplify the optics by substituting parts of the optical system with computation. With image reconstruction computing, the lens can be eliminated altogether, yielding a lens-less camera that is ultra-thin, lightweight, and low-cost. Lens-less cameras have been gaining traction in recent years. So far, however, no satisfactory image reconstruction technique has been established, leaving lens-less cameras with inadequate imaging quality and long computation times.
Recently, researchers have developed a new image reconstruction method that improves computation time and provides high-quality images. Describing the initial motivation behind the research, a core member of the research team, Prof. Masahiro Yamaguchi of Tokyo Tech, says, “Without the limitations of a lens, the lens-less camera could be ultra-miniature, which could allow new applications that are beyond our imagination.” Their work has been published in Optics Letters.
The typical optical hardware of the lens-less camera consists simply of a thin mask and an image sensor. The image is then reconstructed using a mathematical algorithm, as shown in Fig. 1. The mask and the sensor can be fabricated together with established semiconductor manufacturing processes for future production. The mask optically encodes the incident light and casts patterns on the sensor. Though the cast patterns are completely uninterpretable to the human eye, they can be decoded with explicit knowledge of the optical system.
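The encoding-then-decoding idea can be illustrated with a toy linear model. This is our own minimal NumPy sketch, not the team's optics or code: the mask is modeled as a known linear operator that multiplexes every scene point onto many sensor pixels, and decoding amounts to inverting that operator.

```python
import numpy as np

# Toy 1-D sketch of lens-less encoding (hypothetical model): the mask is a
# known random transmission matrix H that spreads each scene point across
# the whole sensor.
rng = np.random.default_rng(0)
n = 64
H = rng.random((n, n))          # random gray-scale mask model, entries in [0, 1)

scene = np.zeros(n)
scene[10] = 1.0                 # a single bright point in the scene

measurement = H @ scene         # sensor reading: spread over all pixels
# Decoding requires explicit knowledge of H; least squares inverts it.
decoded, *_ = np.linalg.lstsq(H, measurement, rcond=None)

print(np.count_nonzero(measurement), np.argmax(decoded))
```

Although the measurement looks like noise (almost every sensor pixel lights up for one scene point), the scene is recovered exactly because the operator is known and well posed.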
However, the decoding process, which relies on image reconstruction technology, remains challenging. Traditional model-based decoding methods approximate the physical process of the lens-less optics and reconstruct the image by solving a “convex” optimization problem. As a result, the reconstruction is susceptible to imperfections in the approximated physical model. Moreover, solving the optimization problem is time-consuming because it requires iterative calculation. Deep learning could avoid these limitations of model-based decoding, since it can learn the model and decode the image in a single, non-iterative pass. However, existing deep learning methods for lens-less imaging, which use convolutional neural networks (CNNs), cannot produce good-quality images. They are inefficient because a CNN processes an image based on relationships among neighboring, “local”, pixels, whereas lens-less optics transform local information in the scene into overlapping “global” information spread across all the pixels of the image sensor, through a property called “multiplexing”.
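The iterative nature of model-based decoding can be sketched as follows. This is a minimal toy example assuming a well-conditioned linear mask model, not the solvers used in practice: gradient descent on a least-squares objective needs many passes before the reconstruction converges, which is why iterative decoding is slow.

```python
import numpy as np

# Toy model-based decoding (assumed setup): recover x from y = H x by
# iterative gradient descent on the objective 0.5 * ||H x - y||^2.
rng = np.random.default_rng(1)
n = 32
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
H = Q * rng.uniform(0.5, 1.5, n)         # well-conditioned linear mask model
x_true = rng.random(n)
y = H @ x_true

step = 1.0 / np.linalg.norm(H, 2) ** 2   # step size from the operator norm
x = np.zeros(n)
for _ in range(500):                     # hundreds of iterations -> slow
    x = x - step * (H.T @ (H @ x - y))   # gradient of the objective

print(np.max(np.abs(x - x_true)))
```

A learned decoder, by contrast, replaces this whole loop with one forward pass through a trained network.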
The Tokyo Tech research team studied this multiplexing property and has now proposed a novel machine learning algorithm dedicated to image reconstruction. The proposed algorithm, shown in Fig. 2, is based on a leading-edge machine learning technique called the Vision Transformer (ViT), which excels at global feature reasoning. The novelty of the algorithm lies in its multistage transformer blocks with overlapped “patchify” modules, which let it efficiently learn image features in a hierarchical representation. Consequently, the proposed method can handle the multiplexing property well and avoid the limitations of conventional CNN-based deep learning, allowing better image reconstruction.
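The overlapped “patchify” step can be illustrated in isolation. The following is a toy NumPy sketch of the general idea, not the team's implementation: patches are extracted with a stride smaller than the patch size, so neighboring tokens share pixels and information can flow across patch boundaries.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def overlapped_patchify(image, patch=8, stride=4):
    """Split an image into flattened patches; stride < patch gives overlap."""
    windows = sliding_window_view(image, (patch, patch))[::stride, ::stride]
    return windows.reshape(-1, patch * patch)

image = np.arange(32 * 32, dtype=float).reshape(32, 32)
tokens = overlapped_patchify(image)          # overlapping: more tokens
print(tokens.shape)
```

With stride 4 and patch size 8, a 32x32 image yields 49 overlapping tokens instead of the 16 non-overlapping ones a plain ViT patchify would produce; in the actual network each token would then be linearly embedded and fed to the transformer blocks.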
While conventional model-based methods require long computation times for iterative processing, the proposed method is faster because direct reconstruction is possible with an iteration-free algorithm obtained through machine learning. The influence of model approximation errors is also dramatically reduced because the machine learning system learns the physical model itself. Furthermore, the proposed ViT-based method uses global features in the image and is well suited to processing patterns cast over a wide area of the image sensor, whereas conventional machine learning-based decoding methods mainly learn local relationships with CNNs.
In summary, the proposed method overcomes the limitations of conventional approaches, namely iterative image reconstruction and CNN-based machine learning, by using the ViT architecture, enabling the acquisition of high-quality images in a short computing time. The research team also performed optical experiments, reported in their publication in Optics Letters, which suggest that a lens-less camera with the proposed reconstruction method can produce high-quality, visually appealing images while keeping post-processing computation fast enough for real-time capture. The assembled lens-less camera and the experimental results are shown in Fig. 3 and Fig. 4, respectively.
“We realize that miniaturization should not be the only advantage of the lens-less camera. The lens-less camera can be applied to invisible light imaging, in which the use of a lens is impractical or even impossible. In addition, the underlying dimensionality of captured optical information by the lens-less camera is greater than two, which makes one-shot 3D imaging and post-capture refocusing possible. We are exploring more features of the lens-less camera. The ultimate goal of a lens-less camera is being miniature-yet-mighty. We are excited to be leading in this new direction for next-generation imaging and sensing solutions,” says the lead author of the study, Mr. Xiuxi Pan of Tokyo Tech, while talking about their future work.