Self-supervised multi-class pre-training for unsupervised anomaly detection and segmentation in medical images
Unsupervised anomaly detection (UAD) that requires only normal (healthy) training images is an important tool for enabling the development of medical image analysis (MIA) applications, such as disease screening, since it is often difficult to collect and annotate abnormal (or disease) images in MIA. However, heavily relying on the normal images may cause the model training to overfit the normal class. Self-supervised pre-training is an effective solution to this problem. Unfortunately, current self-supervision methods adapted from computer vision are sub-optimal for MIA applications because they do not explore MIA domain knowledge for designing the pretext tasks or the training process. In this paper, we propose a new self-supervised pre-training method for UAD designed for MIA applications, named Multi-class Strong Augmentation via Contrastive Learning (MSACL). MSACL is based on a novel optimisation to contrast normal and multiple classes of synthesised abnormal images, with each class enforced to form a tight and dense cluster in terms of Euclidean distance and cosine similarity, where abnormal images are formed by simulating a varying number of lesions of different sizes and appearances in the normal images. In the experiments, we show that our MSACL pre-training improves the accuracy of SOTA UAD methods on many MIA benchmarks using colonoscopy, fundus screening and COVID-19 chest X-ray datasets.
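The abstract only names the ingredients of MSACL, so the following is a minimal illustrative sketch rather than the authors' implementation: it assumes a simple copy-and-blend patch augmentation to fabricate multi-class "abnormal" images (with the class index tied to the number of simulated lesions) and a supervised-contrastive-style objective that combines a cosine-similarity term with a Euclidean compactness term, the two properties the abstract mentions. The language (Python/PyTorch), the function names `synthesise_abnormal` and `msacl_style_loss`, and all hyperparameters are assumptions made for illustration.

```python
# Minimal sketch, assuming PyTorch; not the authors' released code.
import torch
import torch.nn.functional as F


def synthesise_abnormal(image: torch.Tensor, num_lesions: int,
                        max_size: int = 32, alpha: float = 0.7) -> torch.Tensor:
    """Fabricate a pseudo-abnormal image by blending `num_lesions` random
    patches ("simulated lesions") of varying size into a normal image.

    `image` is a (C, H, W) tensor; the patch source, blend weight and size
    range are illustrative assumptions, not values from the paper.
    """
    out = image.clone()
    _, h, w = image.shape
    for _ in range(num_lesions):
        size = int(torch.randint(8, max_size + 1, (1,)))
        y = int(torch.randint(0, h - size, (1,)))
        x = int(torch.randint(0, w - size, (1,)))
        sy = int(torch.randint(0, h - size, (1,)))
        sx = int(torch.randint(0, w - size, (1,)))
        # Copy a patch from another location and blend it in, so the "lesion"
        # keeps plausible texture while breaking the local anatomy.
        patch = image[:, sy:sy + size, sx:sx + size]
        region = out[:, y:y + size, x:x + size]
        out[:, y:y + size, x:x + size] = alpha * patch + (1.0 - alpha) * region
    return out


def msacl_style_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                     temperature: float = 0.1, lam: float = 1.0) -> torch.Tensor:
    """Multi-class contrastive objective in the spirit of the abstract:
    same-class embeddings are pulled together under cosine similarity
    (supervised-contrastive term) and each class is additionally kept
    compact in Euclidean distance (centroid term)."""
    z = F.normalize(embeddings, dim=1)            # cosine-similarity space
    logits = z @ z.t() / temperature              # (N, N) pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye

    # Supervised-contrastive term: average log-probability of positive pairs.
    logits = logits.masked_fill(eye, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    contrastive = -(log_prob.masked_fill(~pos, 0.0).sum(1)
                    / pos.sum(1).clamp(min=1))

    # Euclidean compactness term: squared distance to the class centroid.
    compact = torch.zeros(n, device=embeddings.device)
    for c in labels.unique():
        idx = labels == c
        centroid = embeddings[idx].mean(dim=0, keepdim=True)
        compact[idx] = (embeddings[idx] - centroid).pow(2).sum(dim=1)

    return (contrastive + lam * compact).mean()


if __name__ == "__main__":
    normal = torch.rand(3, 64, 64)
    # Class 0 = normal; classes 1..3 = synthetic anomalies with more lesions.
    fake = synthesise_abnormal(normal, num_lesions=3)
    print(fake.shape)                             # torch.Size([3, 64, 64])
    feats = torch.randn(16, 128)                  # stand-in encoder outputs
    labels = torch.randint(0, 4, (16,))
    print(float(msacl_style_loss(feats, labels)))
```

Per the abstract, the synthetic classes differ in the number, size and appearance of the simulated lesions, and the resulting pre-trained encoder is then used to improve existing UAD methods on colonoscopy, fundus screening and chest X-ray benchmarks.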
Main Authors: | TIAN, Yu; LIU, Fengbei; PANG, Guansong; CHEN, Yuanhong; LIU, Yuyuan; VERJANS, Johan W.; SINGH, Rajvinder |
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2021 |
Subjects: | Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces |
Online Access: | https://ink.library.smu.edu.sg/sis_research/7037 https://ink.library.smu.edu.sg/context/sis_research/article/8040/viewcontent/2109.01303.pdf |
Institution: | Singapore Management University |
id | sg-smu-ink.sis_research-8040
record_format | dspace
spelling | sg-smu-ink.sis_research-8040 2022-03-24T07:12:57Z Self-supervised multi-class pre-training for unsupervised anomaly detection and segmentation in medical images TIAN, Yu LIU, Fengbei PANG, Guansong CHEN, Yuanhong LIU, Yuyuan VERJANS, Johan W. SINGH, Rajvinder Unsupervised anomaly detection (UAD) that requires only normal (healthy) training images is an important tool for enabling the development of medical image analysis (MIA) applications, such as disease screening, since it is often difficult to collect and annotate abnormal (or disease) images in MIA. However, heavily relying on the normal images may cause the model training to overfit the normal class. Self-supervised pre-training is an effective solution to this problem. Unfortunately, current self-supervision methods adapted from computer vision are sub-optimal for MIA applications because they do not explore MIA domain knowledge for designing the pretext tasks or the training process. In this paper, we propose a new self-supervised pre-training method for UAD designed for MIA applications, named Multi-class Strong Augmentation via Contrastive Learning (MSACL). MSACL is based on a novel optimisation to contrast normal and multiple classes of synthetised abnormal images, with each class enforced to form a tight and dense cluster in terms of Euclidean distance and cosine similarity, where abnormal images are formed by simulating a varying number of lesions of different sizes and appearance in the normal images. In the experiments, we show that our MSACL pre-training improves the accuracy of SOTA UAD methods on many MIA benchmarks using colonoscopy, fundus screening and Covid-19 Chest X-ray datasets. 2021-11-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/7037 https://ink.library.smu.edu.sg/context/sis_research/article/8040/viewcontent/2109.01303.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Artificial Intelligence and Robotics Graphics and Human Computer Interfaces
institution | Singapore Management University
building | SMU Libraries
continent | Asia
country | Singapore
content_provider | SMU Libraries
collection | InK@SMU
language | English
topic | Artificial Intelligence and Robotics; Graphics and Human Computer Interfaces
description | Unsupervised anomaly detection (UAD) that requires only normal (healthy) training images is an important tool for enabling the development of medical image analysis (MIA) applications, such as disease screening, since it is often difficult to collect and annotate abnormal (or disease) images in MIA. However, heavily relying on the normal images may cause the model training to overfit the normal class. Self-supervised pre-training is an effective solution to this problem. Unfortunately, current self-supervision methods adapted from computer vision are sub-optimal for MIA applications because they do not explore MIA domain knowledge for designing the pretext tasks or the training process. In this paper, we propose a new self-supervised pre-training method for UAD designed for MIA applications, named Multi-class Strong Augmentation via Contrastive Learning (MSACL). MSACL is based on a novel optimisation to contrast normal and multiple classes of synthesised abnormal images, with each class enforced to form a tight and dense cluster in terms of Euclidean distance and cosine similarity, where abnormal images are formed by simulating a varying number of lesions of different sizes and appearances in the normal images. In the experiments, we show that our MSACL pre-training improves the accuracy of SOTA UAD methods on many MIA benchmarks using colonoscopy, fundus screening and COVID-19 chest X-ray datasets.
format | text
author | TIAN, Yu; LIU, Fengbei; PANG, Guansong; CHEN, Yuanhong; LIU, Yuyuan; VERJANS, Johan W.; SINGH, Rajvinder
author_sort | TIAN, Yu
title | Self-supervised multi-class pre-training for unsupervised anomaly detection and segmentation in medical images
publisher | Institutional Knowledge at Singapore Management University
publishDate | 2021
url | https://ink.library.smu.edu.sg/sis_research/7037 https://ink.library.smu.edu.sg/context/sis_research/article/8040/viewcontent/2109.01303.pdf
_version_ | 1770576192646152192