Developments in Computational Imaging Techniques

Aydogan Ozcan, PhD, talks to contributing editor Tanuja Koppal, PhD, about his fascinating journey bringing computational imaging into microscopy.

by Tanuja Koppal, PhD

Aydogan Ozcan, PhD, chancellor’s professor in the departments of electrical engineering and bioengineering at the University of California, Los Angeles; HHMI professor at the Howard Hughes Medical Institute; and associate director of the California NanoSystems Institute, talks to contributing editor Tanuja Koppal, PhD, about his fascinating journey bringing computational imaging into microscopy. His efforts have led to a series of field-portable, low-cost, high-performing devices and microscopes that can fit into your pocket and can be used for a wide variety of applications, overcoming some of the limitations of traditional microscopy.


Q: Can you discuss some of the innovative work you are doing in microscopy?

A: Our laboratory has been working on computational imaging techniques, where we have pushed forward image reconstruction algorithms, especially for holography and holographic imaging. That effort has led to a series of new devices and microscopes that use these reconstruction algorithms to drastically simplify their imaging architecture. We have created microscopes that generate extremely high-resolution images, containing about a billion useful pixels, over a wide field of view that covers very large sample volumes. At the same time, these imaging devices can easily fit into your pocket and typically weigh less than 200 grams. This has been possible because we got rid of many of the mechanical and optical components found in a traditional microscope and replaced their functions with reconstruction algorithms.
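To make the reconstruction idea concrete, here is a minimal Python sketch of the angular spectrum method, a standard scalar-diffraction operator for numerically propagating an optical field between the sensor and sample planes. This is an illustrative, textbook-style implementation with assumed parameter names, not the Ozcan lab's actual code.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field over a distance z (meters)
    using the angular spectrum method. dx is the pixel pitch in
    meters; a negative z back-propagates (digitally refocuses)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)   # spatial frequencies along x (1/m)
    fy = np.fft.fftfreq(ny, d=dx)   # spatial frequencies along y (1/m)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # free-space transfer function;
                                         # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

A computational operator like this takes over the refocusing job that lenses and mechanical focus stages perform in a conventional microscope, which is why so much hardware can be removed.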

That’s been a theme in my lab and has led to the creation of field-portable, cost-effective microscopes that can be used for pathology, microbiology, materials science, long-term monitoring and analysis of cells, and many other applications. Sometimes this innovation is not just about miniaturization or being cost-effective; it’s also about performance. We have shown that these kinds of microscopes can reveal a lot more than traditional microscopes can. One example is where we used these microscopes to track hundreds of thousands of sperm in 3-D with submicron precision. That work led to the discovery of new types of 3-D sperm motion that had not been seen before with traditional microscopes, because of their limitations in imaging area, depth of field, imaging volume, and instant autofocusing. It’s been a fascinating journey for us, where we have shown that computation can create powerful microscopic imaging and sensing tools for everything from fundamental measurements to field medicine, mobile health, diagnostics, and related fields.
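As a hedged illustration of how a lens-free system can recover depth without any mechanical focusing, the sketch below scans candidate sample-to-sensor distances with the angular_spectrum_propagate function from the previous sketch and keeps the depth that yields the sharpest refocused image. The variance focus metric and the square-root amplitude estimate are simplifying assumptions, not the lab's published tracking pipeline.

```python
import numpy as np

def locate_depth(hologram, wavelength, dx, z_candidates):
    """Digitally refocus a recorded hologram to each candidate depth
    and return the depth giving the sharpest image (a z-localization
    sketch; a real tracker repeats this per particle, per frame)."""
    field = np.sqrt(hologram).astype(complex)  # crude amplitude estimate
    scores = []
    for z in z_candidates:
        img = np.abs(angular_spectrum_propagate(field, wavelength, dx, -z))
        scores.append(img.var())               # simple sharpness proxy
    return z_candidates[int(np.argmax(scores))]
```

Repeating this search for every detected object in every frame yields 3-D trajectories, with lateral positions read directly off the refocused images.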

Q: Can you describe what your device looks like and how it differs from a traditional microscope?

A: The basic component of a computational microscope is an imager chip, similar to what is present in every digital camera. The chip is an optoelectronic sensor, either a complementary metal oxide semiconductor (CMOS) or a charge-coupled device (CCD) detector, which records and digitizes the images. For instance, in a ten-megapixel camera, you have ten million small photodetectors at the back of the camera, which digitize the image. We place optical samples that are of interest to us, such as tissue samples, blood smears, cell lines, or gels, directly on this chip. When light shines through the object, with no optics between the object and the sensor, the diffraction pattern of the light transmitted through the object is recorded. In effect, the imager records microshadows of the specimen against the light, and creating the image is then just a matter of processing, or reconstructing, these shadows. These shadows are, in fact, holograms of the objects, so we do holographic processing to convert these diffraction patterns into 3-D transmission images of the objects. This type of microscopy is also called lens-free on-chip imaging, since there is no lens involved in its design. It’s all diffraction-based imaging, with the sample sitting on the chip, which gives us the extremely compact, cost-effective design of the microscope. Overall, you are looking at a ten-centimeter-long microscope that can fit into the palm of your hand. It’s also very lightweight because there are no bulky optics involved, and that makes the entire device portable and able to perform highly advanced imaging tasks demanded by professionals.
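The point that the recorded "shadows" are holograms can be shown with a toy forward model: propagate plane-wave illumination through a transparent object to the sensor plane, record only the intensity, and then back-propagate that pattern to recover the object. The sketch below reuses angular_spectrum_propagate from the earlier sketch; the wavelength, pixel pitch, and sample-to-sensor distance are illustrative numbers, not the specifications of any actual device.

```python
import numpy as np

n, dx, wl, z = 512, 1.12e-6, 532e-9, 400e-6   # assumed geometry (meters)
yy, xx = np.mgrid[:n, :n]
r2 = (xx - n // 2) ** 2 + (yy - n // 2) ** 2
# A ~10-micron transparent "cell": unit amplitude, small phase delay inside
obj = np.exp(1j * 0.5 * (r2 < (5e-6 / dx) ** 2))

sensor_field = angular_spectrum_propagate(obj, wl, dx, z)
hologram = np.abs(sensor_field) ** 2           # what the CMOS/CCD records

# Reconstruction: back-propagate the measured pattern to the object plane
recon = angular_spectrum_propagate(np.sqrt(hologram).astype(complex),
                                   wl, dx, -z)
```

A practical reconstruction also has to suppress the twin-image artifact inherent to in-line holography, which is one job of the more advanced phase-recovery algorithms the interview alludes to.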

Q: What are some of the limitations?

[Image: A lens-free computational microscope that can image and size nanoparticles and viruses using a field-portable platform. More information: http://pubs.acs.org/doi/abs/10.1021/acsnano.5b00388 (open access, free to download from the ACS site).]

A: What I have just described is a transmission-design microscope that can image only transparent objects. However, there are other designs of this lens-free on-chip imaging device that can perform reflection-based holographic microscopy, using ideas similar to the transmission imager, to image opaque objects. Many of our technologies are based on the transmission imaging design because most of the samples used for diagnostic applications in the medical field are, in fact, transparent. For example, standard tissue slides, which are less than 10 microns thick, are transparent and can be beautifully imaged using our computational imaging technology. We can also image blood or plasma samples as long as the thickness, or height, of the microchannels used is less than 50 microns.

Q: Are any of these products that you have developed commercially available yet?

A: Some of these techniques have been licensed through UCLA by a start-up company that I cofounded. In the diagnostics market, there is a need for cost-effective and field-portable measurement tools, and some of our patented products are being used in more than ten countries. Some of the work that I have described can also be integrated with mobile phones. The imaging interface on a mobile phone is very advanced, and some of these reconstruction and image analysis tasks can be conducted on the phone itself. With the connectivity of the smartphone, you can get results, process them, and tag them with space and time details to be integrated with other digital records or databases, resulting in a set of distributed measurements that can be mapped as a function of time and space. Using the phone to read and measure diagnostic tests has already resulted in a marketed product. We are open to new directions for applying our know-how in computational imaging, sensing, and diagnostic tools to create competitive mobile systems that can perform the same tasks as traditional lab instruments. We collaborate with a diverse group of researchers from all around the world in different fields, but we are also aware that not all the projects we are working on in the lab can or should be translated into products.

Q: What are some of the challenges that you are dealing with?

A: We have different collaborators with different backgrounds, and we cannot expect them all to be physicists, computer scientists, or optical engineers in order to use our technologies. As a result, it’s becoming more and more important for us to standardize our interfaces and designs so that we can communicate with them easily. These devices use extremely advanced algorithms, and we are working to make our designs more modular and more easily transferable so that our collaborators and other researchers can work more efficiently with us. We want to have user-friendly graphical interfaces that people can use without technical expertise or a background in specialized programming languages. This will give our collaborators enough freedom to use our technologies without fully understanding how the black box performs its tasks.

Q: What are some of the opportunities that you are looking to tap into?

A: What’s exciting is that big data analysis and machine learning are now coming together to analyze and label images, and this is helping the expert, for example, a pathologist or a microbiologist, using that image make much better and faster decisions. It’s an exciting time for us not to stop at the image computation or image creation point but to move several steps further toward image annotation and labeling, to try to give statistical recommendations to the user. There is so much potential there to make the entire microanalysis much more efficient and accurate, delivering not only high-quality, mobile, on-the-spot, gigapixel images but also a front-end analysis telling the expert where to look and what to expect, at least in the statistical sense. A gigapixel image is wonderful and has a lot of information in it, but at the same time there is a lot to look at, and if you don’t know where and how to look, the analysis can take a lot of time. That’s where we can make recommendations on which subregions of the image to look at, guiding experts and making diagnosis more efficient and accurate.
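One way to picture this kind of front-end triage is a simple tile-scoring pass over a very large image: split it into tiles, score each tile with a trained model, and surface only the most suspicious regions to the expert. In the sketch below, score_fn is a hypothetical stand-in for such a model; the default variance heuristic exists only so the example runs end to end, and none of this reflects a specific product of the lab.

```python
import numpy as np

def recommend_regions(image, tile=512, top_k=5, score_fn=None):
    """Score non-overlapping tiles of a large image and return the
    top-k (row, col) corners for expert review."""
    if score_fn is None:
        score_fn = lambda t: float(t.var())    # texture-richness proxy
    h, w = image.shape[:2]
    scored = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            scored.append((score_fn(image[y:y + tile, x:x + tile]), y, x))
    scored.sort(reverse=True)                  # highest scores first
    return [(y, x) for _, y, x in scored[:top_k]]

# Usage on a stand-in image (a real slide scan would be far larger):
print(recommend_regions(np.random.rand(2048, 2048)))
```

Ranking tiles rather than classifying the whole slide keeps the expert in the loop: the model narrows the search, and the pathologist or microbiologist still makes the call.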