Image science investigates the ways that image quality can be defined, measured and optimized; it touches and improves the visualization of everything from healthy bones to unstable atmospheres to millennia-old geological formations. This interdisciplinary field studies the physics of photon generation, the propagation of light through optical systems, signal generation in detectors and more, and considers the statistics of random processes and how they affect the information contained within images.
The faculty in image science at the Wyant College of Optical Sciences show particular strength in designing new technology for medical imaging, homeland security, earth sciences and other applications, and in developing new methods for assessing image quality by quantifying how accurately imaging systems can accomplish certain analytical tasks.
Image Science Research Updates
Date Published: March 8, 2022
Professor David Brady's work in computational imaging attracted the attention of writers at SPIE, who featured an article on his effort to build a camera capable of creating the world's first gigapixel images. The machine contains an array of 98 microcameras whose microprocessors stitch the individual images together. According to the article, "Computational imaging, on the other hand, allows users to refocus a photo, construct a 3D picture, combine wavelengths, or stitch together separate images into one. It can correct for aberrations, generate sharp images without lenses, and use inexpensive instruments to create photos that once would have required expensive equipment, even pushing past the diffraction limit to take pictures with resolutions beyond what a camera is theoretically capable of." Read the full article. Learn more about Dr. Brady's research.
Football game imaged with a gigapixel camera, with zoomed in details. Image resolution is uniform across the scene. See more at the online, interactive image: http://www.gigapan.com/gigapans/146504
Date Published: March 4, 2022
A new paper in Optics Express Vol. 30, Issue 2 features the work of Chengyu Wang, Minghao Hu, Yuzuru Takashima, Timothy J. Schulz, and David J. Brady. The team uses convolutional neural networks to recover images optically down-sampled by 6.7× using coherent aperture synthesis over a 16-camera array. Whereas conventional ptychography relies on scanning and oversampling, here the researchers apply decompressive neural estimation to recover a full-resolution image from a single snapshot, although, as shown in simulation, multiple snapshots can be used to improve the signal-to-noise ratio (SNR). In-place training on experimental measurements eliminates the need to directly calibrate the measurement system. The work also presents simulations of diverse array camera sampling strategies to explore how snapshot compressive systems might be optimized. Read the full published paper.
Comparing the reconstruction results with different aperture distributions. The full resolution images were reconstructed but only the zoomed-in details of the reconstructed images are shown for easy comparison.
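The measurement side of this approach can be made concrete with a toy forward model. The sketch below only illustrates the arithmetic of optical down-sampling before detection; the 6.7× factor is from the paper, but the scene size and the box-average stand-in for the optical operator are invented for illustration, and the neural recovery step is omitted entirely.

```python
import numpy as np

# Illustrative forward model: a full-resolution scene is optically
# down-sampled before detection; a neural estimator would later invert
# the compression. Only the 6.7x factor comes from the paper; the
# 1024-pixel scene and box-average operator are assumptions.

rng = np.random.default_rng(0)
full_res = 1024                      # full-resolution width (illustrative)
downsample = 6.7                     # optical down-sampling factor (from paper)
low_res = int(round(full_res / downsample))

scene = rng.random((full_res, full_res))

# Simple box-average stand-in for the optical down-sampling operator.
edges = np.linspace(0, full_res, low_res + 1).astype(int)
snapshot = np.array([[scene[edges[i]:edges[i + 1], edges[j]:edges[j + 1]].mean()
                      for j in range(low_res)]
                     for i in range(low_res)])

compression = scene.size / snapshot.size
print(f"{full_res}x{full_res} scene -> {low_res}x{low_res} snapshot "
      f"({compression:.1f}x fewer measurements)")
```

Note that a 6.7× linear down-sampling shrinks the measurement count by roughly 6.7² ≈ 45×, which is what makes single-snapshot recovery a genuinely compressive (underdetermined) estimation problem.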
Date Published: June 27, 2019
Using a slit aperture and diffraction grating on a smartphone, a research team including Dongkyun "DK" Kang has developed a confocal microscope. Its use for in vivo human skin imaging was successful: the device performed two-dimensional confocal imaging without the need for any beam-scanning devices. These results suggest that the smartphone confocal microscope has the potential to be a low-cost option for examining cellular details in vivo and may aid disease diagnosis in resource-poor settings, where conducting standard histopathological analysis is challenging. Read the published article.
Photos of the smartphone confocal microscope. A – front view with the cover removed; and B – top view.
Date Published: November 24, 2014
As imaging devices — from smartphones to military drones to security cameras to cars — become ubiquitous, rising data volume and processing demands become problematic. Compression is routinely employed to reduce image file sizes for convenient storage and transmission. However, the success of image compression techniques suggests that traditional imaging systems can be highly inefficient, collecting redundant data that could be compressed without significant degradation. The field of compressive imaging addresses this shortcoming by acquiring a “compressed image” directly in the optical domain.
Left: Compressive imager prototype. Right: Images from traditional imager and prototype at eight times compression.
One direct benefit of such optical compression is that it employs energy-efficient low-resolution sensors, rather than the power-hungry high-resolution image sensors typically found in traditional high-quality cameras (e.g., digital single-lens reflex cameras). The prototype shown above, developed by Amit Ashok's Intelligent Imaging and Sensing Laboratory under U.S. Army and Defense Advanced Research Projects Agency programs, is capable of forming high-resolution images using a low-resolution sensor and a programmable spatial light modulator. Compressive imaging has the potential to make a significant impact in spectral bands (e.g., infrared, terahertz) where sensor cost and complexity dominate camera design.
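The measurement principle behind such a system can be sketched in a few lines. This is not the prototype's actual design: the ±1 masks, the tiny 64-pixel scene, and the minimum-norm least-squares inversion are all assumptions chosen to make the idea of acquiring a "compressed image" in the optical domain concrete; only the eight-fold compression ratio echoes the figure above.

```python
import numpy as np

# Illustrative compressive-imaging sketch (not the actual prototype):
# a programmable spatial light modulator applies M coded patterns to an
# N-pixel scene, and a detector records one projection per pattern,
# giving M << N measurements.

rng = np.random.default_rng(1)
n_pixels = 64            # tiny 8x8 scene, for illustration only
n_measurements = 8       # 8x compression, echoing the figure above

scene = rng.random(n_pixels)

# Each SLM pattern is a +/-1 mask; one detector reading per pattern.
patterns = rng.choice([-1.0, 1.0], size=(n_measurements, n_pixels))
readings = patterns @ scene          # the "compressed image": 8 numbers, not 64

# A real system would use a sparsity-exploiting solver; minimum-norm
# least squares is shown only to make the inversion step concrete.
estimate, *_ = np.linalg.lstsq(patterns, readings, rcond=None)

print(readings.shape, estimate.shape)
```

The point of the sketch is the shape of the data: the sensor only ever sees the eight projections, so compression happens before readout rather than after, which is where the power savings come from.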
Date Published: November 24, 2014
Nuclear imaging modalities, such as positron emission tomography and single-photon emission computed tomography, form an important element of modern medical diagnostics. The University of Arizona Center for Gamma-Ray Imaging (CGRI), led by Harrison H. Barrett in cooperation with Lars R. Furenlid, Matthew A. Kupinski and Eric W. Clarkson, focuses on advancing the state of the art in radionuclide imaging (e.g., PET and SPECT). The CGRI uniquely combines rigorous theory, inventive computational tools, advanced detectors and electronics, innovative imaging systems, novel radiotracers and cutting-edge clinical and preclinical applications. This work is done within the context of gamma-ray imaging, but it is important to other forms of medical imaging and image science in general.
Left: High-resolution mouse kidney and bladder SPECT image acquired with cadmium-zinc-telluride gamma-ray pixel detectors. Right: Coregistered mouse SPECT/CT image acquired with FastSPECT II and FaCT (adaptive X-ray computed tomography) systems.
The collaborative research supported by the CGRI applies these new imaging tools to basic research in functional genomics, cardiovascular disease and cognitive neuroscience, and to research in breast cancer and surgical tumor detection. An exciting research direction in the center is multimodal and adaptive imaging systems. A multimodal imaging system becomes adaptive when the information from one system is used to modify a second system before data is taken with it. For example, the first system (MRI or computed tomography) may be used to locate regions of abnormalities, and then the second system (SPECT or PET) can be modified to focus on these regions for functional imaging.
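The adaptive two-step workflow described above can be sketched as a toy pipeline. Everything in this sketch is invented for illustration: the 32×32 "anatomical" image, the planted abnormality, and the simple threshold that locates it stand in for the first modality (MRI or CT), and the resulting region of interest stands in for how the second modality (SPECT or PET) would be reconfigured.

```python
import numpy as np

# Toy sketch of the adaptive workflow: a first (anatomical) image
# locates a suspicious region, which then configures where the second
# (functional) system concentrates its acquisition. All sizes and
# thresholds are invented for illustration.

rng = np.random.default_rng(2)
anatomical = rng.normal(0.0, 0.1, size=(32, 32))
anatomical[10:14, 20:25] += 1.0          # simulated abnormality

# Step 1: locate the abnormal region from the first modality.
rows, cols = np.nonzero(anatomical > 0.5)
roi = (rows.min(), rows.max() + 1, cols.min(), cols.max() + 1)

# Step 2: adapt the second modality to image only that region.
r0, r1, c0, c1 = roi
functional_fov = (r1 - r0, c1 - c0)
print("ROI:", roi, "functional field of view:", functional_fov)
```

The benefit mirrors the text: the functional system spends its limited photon budget on the small region the anatomical scan flagged, rather than on the full field of view.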
Center researchers also specialize in developing and applying advanced X-ray and gamma-ray detectors and in commissioning SPECT, PET and X-ray CT imaging systems. They study the physics of scintillation and solid-state detectors; the design of pulse-processing electronics and digital data-acquisition systems; and data inversion and reconstruction with a variety of computational methods.