Enhancing cone-beam computed tomography image quality using improved denoising diffusion probabilistic model

Bibliographic Details
Main Author: Nyamtsogt Munkhbilguun
Other Authors: Cai Yiyu
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access:https://hdl.handle.net/10356/167998
Institution: Nanyang Technological University
Description
Summary: Cone beam computed tomography (CBCT) is an important tool for many clinical and industrial applications. Compared with conventional fan-beam CT scans, CBCT requires less radiation exposure and a shorter scanning time. However, CBCT has limitations such as poor tissue contrast, image artifacts, and unreliable Hounsfield Unit (HU) values, which limit its effective use in research and practical applications. The purpose of this Final Year Project (FYP) is to apply and evaluate the denoising diffusion probabilistic model (DDPM) for enhancing CBCT scans. This study proposes a method that enhances CBCT scans by formulating the task as an image-to-image translation problem. The proposed solution employs a DDPM, a particular implementation of the diffusion model, which has recently demonstrated exceptional ability in generating high-quality image samples and performing translation tasks outside the medical domain. First, existing methods and research on improving CBCT scans are reviewed, followed by a study of various DDPM implementations. The data preparation stage involves obtaining raw data of paired CBCT and CT scans, then clipping, resizing, and formatting them, and slicing the 3D volumes into 2D slices. In the fine-tuning phase, the proposed model is trained with 16 different parameter combinations for a small number of iterations. The results from this phase are assessed by their SSIM and PSNR scores, and only the 4 best-performing parameter configurations are selected for much longer training. The experimental results illustrate that the proposed method can effectively learn a precise feature mapping from CBCT to CT using unaligned, patient-paired 2D slices with careful tuning and design choices. Finally, the results of this study show a significant improvement in SSIM, PSNR, and KLD scores compared with the original CBCT scans. The images generated by the proposed method achieve an increase of 38.3554% in SSIM and 10.1525% in PSNR over the original CBCT images. Moreover, the average KLD score between the CT and enhanced CBCT image sets is 0.0418470, indicating that the generated images are a good representation of the real CT images.
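
As a rough illustration of the evaluation described in the summary (not code from the thesis itself), the sketch below shows one common way paired 2D slices could be scored with SSIM, PSNR, and a histogram-based KL divergence. It assumes NumPy and scikit-image are available, and that the hypothetical arrays ct_slice and enhanced_cbct_slice are 2D slices already normalised to the [0, 1] range.

    # Minimal sketch: score a CT / enhanced-CBCT slice pair with SSIM, PSNR,
    # and a KL divergence between intensity histograms. Illustrative only;
    # the thesis's own evaluation code and preprocessing may differ.
    import numpy as np
    from skimage.metrics import structural_similarity, peak_signal_noise_ratio

    def kl_divergence(p_img, q_img, bins=256):
        """KL divergence between the intensity histograms of two [0, 1] images."""
        p_hist, _ = np.histogram(p_img, bins=bins, range=(0.0, 1.0), density=True)
        q_hist, _ = np.histogram(q_img, bins=bins, range=(0.0, 1.0), density=True)
        eps = 1e-12                      # avoid log(0) and division by zero
        p = p_hist + eps
        q = q_hist + eps
        p /= p.sum()                     # renormalise to probability vectors
        q /= q.sum()
        return float(np.sum(p * np.log(p / q)))

    def score_pair(ct_slice, enhanced_cbct_slice):
        """Return (SSIM, PSNR, KLD) for one paired 2D slice."""
        ssim = structural_similarity(ct_slice, enhanced_cbct_slice, data_range=1.0)
        psnr = peak_signal_noise_ratio(ct_slice, enhanced_cbct_slice, data_range=1.0)
        kld = kl_divergence(ct_slice, enhanced_cbct_slice)
        return ssim, psnr, kld

Under these assumptions, averaging the three values returned by score_pair over all patient-paired slices would yield aggregate figures comparable in spirit to the SSIM, PSNR, and KLD results reported above.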