This dissertation introduces a deep-learning method for a signal-estimation task on an adaptive Single-Photon Emission Computed Tomography (SPECT) system. In SPECT, estimation tasks aim to measure or quantify features of the imaged object. Several algorithms currently exist for estimating the parameters that describe a signal. Our approach is to apply a deep convolutional neural network that learns to estimate the signal parameters from a given SPECT image dataset. Through supervised learning, the network extracts essential features from the input image while minimizing a mean-squared error (MSE) loss for the estimation task.
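The MSE loss minimized during supervised training can be sketched as follows. This is a minimal illustration, not the dissertation's implementation; the four-parameter signal vector (e.g. lesion centre and radius) is an assumption for the example.

```python
import numpy as np

def mse_loss(predicted, true):
    """Mean-squared error between predicted and true signal-parameter vectors."""
    predicted = np.asarray(predicted, dtype=float)
    true = np.asarray(true, dtype=float)
    return float(np.mean((predicted - true) ** 2))

# Hypothetical example: four signal parameters (assumed, e.g. x, y, z centre
# coordinates and lesion radius). The network's output would play the role
# of `pred` and the known simulation parameters the role of `truth`.
pred = np.array([10.2, 5.1, 7.9, 1.4])
truth = np.array([10.0, 5.0, 8.0, 1.5])
loss = mse_loss(pred, truth)
```

During training, this scalar loss would be back-propagated through the convolutional layers to update the network weights.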
The SPECT system used in this work is AdaptiSPECT, a modeled adaptive SPECT system. The image data for the neural network are acquired from the modeled AdaptiSPECT system's diversity imaging. We vary the imaging data by combining the data from all camera positions at once, as if the AdaptiSPECT cameras were moving during acquisition. The object data consist of digitally simulated Mouse Whole-Body (MOBY) phantoms containing spherical lesions (signals) of varying properties. The estimation task is designed to be realistic: both the signal and the object attributes vary, as they would in clinical settings.
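Combining the data from all camera positions into one network input can be sketched as stacking the per-position projections along a channel axis. The detector geometry and Poisson stand-in data below are assumptions for illustration only, not the AdaptiSPECT specifications.

```python
import numpy as np

rng = np.random.default_rng(0)
n_positions, det_rows, det_cols = 6, 32, 32  # assumed geometry, not from the text

# One projection image per camera position (simulated stand-in data; SPECT
# counts are Poisson-distributed, so a Poisson draw is used as a placeholder).
projections = [rng.poisson(5.0, size=(det_rows, det_cols))
               for _ in range(n_positions)]

# Stack all positions into a single multi-channel array, so the network sees
# every camera position at once, as if the cameras moved during acquisition.
combined = np.stack(projections, axis=0)  # shape: (n_positions, rows, cols)
```

A multi-channel input of this shape is what a convolutional network would consume directly, with one channel per camera position.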
In summary, we developed and trained a deep convolutional network for a signal estimation task that performs well across the signal parameters. Root-mean-squared error (RMSE) is the figure of merit used to assess how well the model predicts the parameters that define the signal. The results indicate that this supervised learning network accurately predicts the signal parameters of interest and demonstrate that deep convolutional neural networks are practical for adaptive SPECT imaging systems.
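The RMSE figure of merit, evaluated per signal parameter over a batch of test images, can be computed as in this small sketch (the batch layout of predictions as rows and parameters as columns is an assumption for the example):

```python
import numpy as np

def rmse_per_parameter(predicted, true):
    """RMSE for each signal parameter, averaged over a batch of test images.

    `predicted` and `true` are arrays of shape (n_samples, n_parameters);
    the return value has one RMSE per parameter column.
    """
    err = np.asarray(predicted, dtype=float) - np.asarray(true, dtype=float)
    return np.sqrt(np.mean(err ** 2, axis=0))
```

Reporting one RMSE per parameter (rather than a single pooled value) shows how well the estimator performs on each attribute of the signal individually.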