
Snapshot Multispectral Imaging Using a Diffractive Optical Network

Researchers convert a monochrome image sensor into a snapshot multispectral imaging device

by Light Publishing Center, Changchun Institute of Optics, Fine Mechanics and Physics, CAS

Multispectral imaging has fueled major advances in fields including environmental monitoring, astronomy, agricultural science, biomedicine, medical diagnostics, and food quality control. The most ubiquitous and simplest form of a spectral imaging device is the color camera, which collects information from red (R), green (G), and blue (B) color channels. The traditional design of RGB color cameras relies on spectral filters spatially arranged over a periodically repeating array of 2×2 pixels, with each subpixel containing an absorptive spectral filter that transmits one of the red, green, or blue channels while blocking the others. Despite their widespread use in imaging applications, scaling up these absorptive filter arrays to collect richer spectral information from many distinct color bands poses various challenges due to their low power efficiency, high spectral cross-talk, and poor color representation quality.
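The sampling scheme described above can be illustrated with a short sketch. The following code is a simplified simulation (not from the paper): it tiles the standard 2×2 Bayer unit cell across a sensor, so each photosite records only one of the three color channels and discards the rest.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate the single-channel raw frame a Bayer-patterned sensor
    records from an H x W x 3 scene (H and W even, values in [0, 1])."""
    h, w, _ = rgb.shape
    raw = np.empty((h, w), dtype=rgb.dtype)
    # 2x2 unit cell: R at (0,0), G at (0,1) and (1,0), B at (1,1)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red subpixels
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green subpixels
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green subpixels
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue subpixels
    return raw

scene = np.random.rand(4, 4, 3)
raw = bayer_mosaic(scene)
# Each pixel kept one channel and absorbed the other two -- the
# efficiency cost of absorptive filter arrays noted above.
```

Because two thirds of the incident light is absorbed at every photosite, extending this absorptive scheme to many spectral bands only worsens the light loss.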

UCLA researchers have recently introduced a snapshot multispectral imager that uses a diffractive optical network, instead of absorptive filters, to route 16 unique spectral bands onto a periodically repeating pattern across the output image field-of-view, forming a virtual multispectral pixel array. This diffractive network-based multispectral imager is optimized using deep learning to spatially separate the input spectral channels onto distinct pixels at the output image plane. Serving as a virtual spectral filter array, it preserves the spatial information of the input scene and instantaneously yields an image cube without any image reconstruction algorithm. As a result, this diffractive multispectral imaging network can virtually convert a monochrome image sensor into a snapshot multispectral imaging device, without conventional spectral filters or digital algorithms.
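The readout side of this idea can be sketched in a few lines. This is an illustrative simplification, not the authors' trained diffractive model: assuming the diffractive front end has already routed each of 16 spectral bands onto its assigned subpixel of a repeating 4×4 virtual pixel cell, the image cube falls out of the monochrome frame by simple periodic subsampling, with no reconstruction step.

```python
import numpy as np

def read_cube(mono, period=4):
    """Split an H x W monochrome frame into a (period**2)-band image
    cube, assuming band (i, j) of the repeating period x period cell
    lands on subpixel (i, j)."""
    bands = [mono[i::period, j::period]
             for i in range(period) for j in range(period)]
    return np.stack(bands, axis=0)

# Toy 8x8 monochrome frame standing in for the sensor output
mono = np.arange(8 * 8, dtype=float).reshape(8, 8)
cube = read_cube(mono)
print(cube.shape)  # (16, 2, 2): 16 bands at 1/4 spatial sampling
```

The subsampling makes the trade-off explicit: each of the 16 spectral bands is sampled at one sixteenth of the sensor's pixel count, just as an RGB Bayer array trades spatial resolution for color information.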

Published in Light: Science & Applications, a Springer Nature journal, the diffractive network-based multispectral imager framework is reported to offer both high spatial imaging quality and high spectral signal contrast. The authors showed that an average transmission efficiency of ~79 percent across the distinct spectral bands could be achieved without a major compromise in the system's spatial imaging performance or spectral signal contrast.

This research was led by Dr. Aydogan Ozcan, the Chancellor's Professor and Volgenau Chair for Engineering Innovation at UCLA and an HHMI Professor with the Howard Hughes Medical Institute. The other authors of this work are Deniz Mengu, Anika Tabassum, and Professor Mona Jarrahi, all from the Electrical and Computer Engineering Department at UCLA. Ozcan also holds UCLA faculty appointments in the bioengineering and surgery departments and is an associate director of the California NanoSystems Institute (CNSI).

- This press release was provided by the Light Publishing Center, Changchun Institute of Optics, Fine Mechanics and Physics, CAS