Advancing photoacoustic tomography using deep learning

Bibliographic Details
Main Author: Rajendran, Praveenbalaji
Other Authors: Manojit Pramanik
Format: Thesis-Doctor of Philosophy
Language:English
Published: Nanyang Technological University 2022
Subjects:
Online Access:https://hdl.handle.net/10356/162207
Institution: Nanyang Technological University
Description
Summary:Photoacoustic Tomography (PAT) is a non-invasive hybrid biomedical imaging modality that combines the advantages of both optical and ultrasound imaging. In the past decade, PAT has emerged as a promising imaging modality due to its ability to provide high-contrast and high-resolution images at greater imaging depths. However, the application of PAT for clinical and preclinical imaging is still hindered by limitations such as cost, size, image quality, and acquisition time. Thus, further improvement and optimization are necessary to translate PAT into clinical and preclinical applications. Conventionally, PAT systems use Nd:YAG lasers as excitation sources. These Nd:YAG laser sources are bulky, expensive, and require sophisticated vibration-isolation housing for their operation. Furthermore, the pulse repetition rate (PRR) of the Nd:YAG laser is low (10-100 Hz). This makes conventional PAT systems slow, expensive, and non-portable. In recent years, a new type of laser source, the pulsed laser diode (PLD), has been gaining importance due to its compact size, low cost, and high PRR compared with conventional Nd:YAG lasers. Taking advantage of the PLD, we have developed a compact, desktop, high-speed PAT imaging system using a PLD as the excitation source. We validated the potential of the developed PLD-PAT system by applying it to detect a pathophysiological condition of reduced intracranial pressure [intracranial hypotension (IH)]. IH was induced by extracting cerebrospinal fluid (CSF) from the cisterna magna of the rat. The sagittal sinus area was then imaged by PAT and served as an accurate parameter to indicate the occurrence of IH. However, the PLD-PAT system has certain limitations, such as resolution degradation, image distortion, and artifacts. Overcoming these limitations is critical to the successful translation of the PLD-PAT system into the clinic.
In recent years, deep learning, a subset of machine learning, has gained significant interest in photoacoustic imaging (PAI) due to its ability to solve complex imaging-related tasks. In particular, convolutional neural networks (CNNs) are widely preferred for improving image quality. Typically, in a PAT system a single ultrasound transducer (UST) is rotated 360° around the sample to acquire the generated PA waves, and a simple delay-and-sum beamformer is used to reconstruct the PAT image. With this technique, the resultant image quality is degraded by poor tangential resolution near the detector surface. To overcome this limitation, we developed a CNN-based deep learning architecture termed the TARES network. The TARES network was optimized using simulated PAT images, and its performance was evaluated on experimental phantom as well as in vivo images. In comparison with the conventional delay-and-sum beamformer, the developed network improved the tangential resolution of the images by ∼8-fold without compromising their structural similarity and quality. In a conventional PAT system, a single UST is employed to detect the generated PA waves, and it takes several minutes to acquire an image of acceptable quality. If N USTs are used instead of a single UST, along with high-PRR excitation sources, the image acquisition time can be reduced by a factor of N. However, in PAT systems employing multiple USTs, the exact radius of each transducer is required for accurate image reconstruction using the delay-and-sum beamformer. In practical scenarios, measuring the exact radius of each transducer is time-consuming and cumbersome. Thus, to alleviate the need for radius calibration, we developed a CNN architecture aided with a convolutional long short-term memory block (RACOR-PAT network) to reconstruct the PAT images.
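The delay-and-sum reconstruction described above amounts to a simple back-projection: for every image pixel, each detector position's A-line is sampled at the time of flight from that pixel to the detector, and the delayed samples are summed. The following is a minimal sketch, not the thesis implementation; the function name, grid layout, and the default speed of sound and sampling rate are illustrative assumptions.

```python
import numpy as np

def delay_and_sum_pat(signals, angles, radius, grid, c=1500.0, fs=50e6):
    """Minimal delay-and-sum back-projection for circular-scan PAT.

    signals : (n_angles, n_samples) A-lines recorded as a single UST
              is rotated around the sample (hypothetical data layout)
    angles  : detector angles in radians over the 360-degree scan
    radius  : scan radius in metres
    grid    : (N, N, 2) array of pixel (x, y) coordinates in metres
    c, fs   : assumed speed of sound (m/s) and sampling rate (Hz)
    """
    n_angles, n_samples = signals.shape
    image = np.zeros(grid.shape[:2])
    det_x = radius * np.cos(angles)  # detector positions on the scan circle
    det_y = radius * np.sin(angles)
    for i in range(n_angles):
        # Time of flight from every pixel to this detector position
        dist = np.hypot(grid[..., 0] - det_x[i], grid[..., 1] - det_y[i])
        idx = np.round(dist / c * fs).astype(int)
        valid = idx < n_samples
        # Accumulate the delayed samples into the image (back-projection)
        image[valid] += signals[i, idx[valid]]
    return image / n_angles
```

In practice, sub-sample interpolation, bandpass filtering, and transducer directivity weighting are layered on top of this basic scheme. With multiple fixed USTs, the same loop runs over the known position of each element, which is why an accurate per-transducer radius matters for this beamformer.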
The RACOR-PAT network was trained using a combination of simulated and experimental datasets to account for the variation encountered in in vivo scenarios. The results show that the proposed RACOR-PAT network improves the peak signal-to-noise ratio (PSNR) by 73% without compromising image quality. Another limitation that hampers the imaging speed of PAT systems is the emergence of artifacts at higher imaging speeds due to sparse signal acquisition and low signal-to-noise ratio (SNR). Because of these artifacts, improving the imaging speed beyond 2 frames per second remains a challenge despite the use of multiple USTs and high-PRR excitation sources in PAT. Thus, to improve the imaging frame rate of PAT systems without degrading image quality, we developed a deep learning-based approach employing a CNN architecture called HD-U-Net. The proposed network was optimized using simulated data, and its performance was evaluated on experimental data obtained using both single- and multi-UST PAT systems. The results demonstrate that the proposed approach can enhance the imaging speed of single-UST PAT systems by ∼6-fold and of multi-UST PAT systems by ∼2-fold, with significant improvement in image quality. Overall, in this thesis we have developed a compact, low-cost desktop PLD-PAT system for preclinical applications. We have improved its resolution and imaging speed, and removed artifacts arising from the use of multiple transducers, with the help of novel deep learning techniques. We have validated these improvements with in vivo imaging results.
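PSNR, the metric quoted above, compares a reconstruction against a reference image through their mean squared error. A minimal sketch follows; the function name and the choice of the reference image's maximum as the peak value are our own conventions, since the thesis abstract does not specify the exact implementation.

```python
import numpy as np

def psnr(reference, reconstruction):
    """Peak signal-to-noise ratio in dB (higher means a closer match).

    Uses the reference image's maximum as the peak value; some
    implementations use the data-type maximum (e.g. 255) instead.
    """
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(reconstruction, dtype=float)
    mse = np.mean((ref - rec) ** 2)  # mean squared error
    if mse == 0.0:
        return float("inf")          # identical images
    return 10.0 * np.log10(ref.max() ** 2 / mse)
```

Because PSNR is a logarithmic ratio, the reported 73% improvement corresponds to a substantially lower reconstruction error relative to the baseline beamformer.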