A comparative study of COVID-19 CT image synthesis using GAN and CycleGAN
Main Authors: |  |
---|---|
Format: | Proceedings |
Language: | English |
Published: | IEEE Xplore, 2022 |
Subjects: |  |
Online Access: | https://eprints.ums.edu.my/id/eprint/41736/1/ABSTRACT.pdf https://eprints.ums.edu.my/id/eprint/41736/2/FULL%20TEXT.pdf https://eprints.ums.edu.my/id/eprint/41736/ https://ieeexplore.ieee.org/document/9936810 |
Institution: | Universiti Malaysia Sabah |
Summary: | Generative adversarial networks (GANs) have been very successful in many applications of medical image synthesis, which hold great clinical value in diagnosis and analysis tasks, especially when data is scarce. This study compares the two most widely adopted generative modelling algorithms in recent medical image synthesis tasks, namely the traditional Generative Adversarial Network (GAN) and the Cycle-Consistent Generative Adversarial Network (CycleGAN), for COVID-19 CT image synthesis. Experiments show that highly plausible synthetic COVID-19 images with clearly visible artificially generated ground-glass opacity (GGO) can be generated with CycleGAN when trained with an identity loss constant of 0.5. Moreover, the synthesized GGO features generalize across images with different chest and lung structures, which suggests that diverse patterns of GGO can be synthesized in a conventional image-to-image translation setting without additional auxiliary conditions or visual annotations. In addition, a similar experimental setting achieves encouraging perceptual quality, with a Fréchet Inception Distance (FID) score of 0.347, outperforming the GAN (0.383) and CycleGAN trained with an identity loss constant of 0.005 (0.380). The experimental outcomes suggest a negative correlation between the strength of the identity loss and the prominence of the synthetic instances manifested in the generated images, which highlights an interesting research path: improving the quality of generated images without compromising the significance of the synthetic instances produced during image translation. |
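The identity loss constant referred to in the summary corresponds, in the usual CycleGAN formulation (Zhu et al., 2017), to the weight placed on an identity term that penalizes each generator for altering images that already belong to its output domain. A minimal PyTorch sketch of that term is given below; the generator names, the domain assignment (non-COVID CT as X, COVID-19 CT as Y), and the default weight of 0.5 are illustrative assumptions, not the authors' implementation.

```python
import torch.nn as nn

# Illustrative sketch of the standard CycleGAN identity loss term,
# weighted by the "identity loss constant" the summary compares at
# 0.5 versus 0.005. G: X -> Y and F: Y -> X are assumed generators
# (e.g. non-COVID CT -> COVID-19 CT and back), not the paper's code.
l1 = nn.L1Loss()

def identity_loss(G, F, real_x, real_y, lambda_idt=0.5):
    # When fed an image already from its target domain, each generator
    # should behave as an identity mapping; the L1 penalty enforces this.
    loss_idt_y = l1(G(real_y), real_y)  # G should leave domain-Y images unchanged
    loss_idt_x = l1(F(real_x), real_x)  # F should leave domain-X images unchanged
    return lambda_idt * (loss_idt_y + loss_idt_x)
```

Under this formulation, a larger weight more strongly discourages the generators from changing image content, which is consistent with the reported negative correlation between the identity loss strength and the prominence of the synthetic GGO.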