Classification of echocardiographic standard views using a hybrid attention-based approach
The determination of the probe viewpoint forms an essential step in automatic echocardiographic image analysis. However, classifying echocardiograms at the video level is complicated, and previous observations concluded that the most significant challenge lies in distinguishing among the various adjacent views. To this end, we propose an ECHO-Attention architecture consisting of two parts. We first design an ECHO-ACTION block, which efficiently encodes spatio-temporal features, channel-wise features, and motion features. This block can then be inserted into existing ResNet architectures, combined with a self-attention module to ensure its task-related focus, forming an effective ECHO-Attention network. The experimental results are confirmed on a dataset of 2693 videos acquired from 267 patients that a trained cardiologist has manually labeled. Our method provides comparable classification performance (overall accuracy of 94.81%) on the entire video sample and achieves significant improvements on the classification of anatomically similar views (precision of 88.65% and 81.70% for the parasternal short-axis apical view and the parasternal short-axis papillary view on 30-frame clips, respectively).
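The paper itself does not include code in this record, but the abstract names the three feature types the ECHO-ACTION block encodes: channel-wise features and motion features (alongside spatio-temporal ones). As a rough illustration only, the NumPy sketch below shows one common way such features are computed on a video clip: a squeeze-and-excitation-style channel gate, and frame differencing for motion. All function names, the bottleneck size, and the tensor shapes are hypothetical assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_excitation(clip, w1, w2):
    """Channel-wise excitation on a clip shaped (T, C, H, W):
    squeeze the spatial dims, pass through a small bottleneck,
    and rescale each channel (squeeze-and-excitation style)."""
    squeezed = clip.mean(axis=(2, 3))            # (T, C) global average pool
    gates = sigmoid(squeezed @ w1 @ w2)          # (T, C) per-channel gates in (0, 1)
    return clip * gates[:, :, None, None]        # broadcast gates over H, W

def motion_features(clip):
    """Motion features as simple frame differences; the last frame
    is zero-padded so the temporal length T is unchanged."""
    diffs = clip[1:] - clip[:-1]                 # (T-1, C, H, W)
    pad = np.zeros_like(clip[:1])                # (1, C, H, W)
    return np.concatenate([diffs, pad], axis=0)  # (T, C, H, W)

# Toy clip: 30 frames (matching the 30-frame clips in the abstract),
# 8 channels, 16x16 spatial resolution -- all sizes are illustrative.
rng = np.random.default_rng(0)
clip = rng.standard_normal((30, 8, 16, 16))
w1 = rng.standard_normal((8, 2)) * 0.1           # bottleneck: 8 -> 2 channels
w2 = rng.standard_normal((2, 8)) * 0.1           # expand back: 2 -> 8 channels

out = channel_excitation(clip, w1, w2) + motion_features(clip)
print(out.shape)  # (30, 8, 16, 16): same shape, so it can feed a ResNet stage
```

Because both branches preserve the input shape, a block built this way can be dropped between existing ResNet stages, which is the insertion strategy the abstract describes.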
Main Authors: Xianda, Ni; Zi, Ye; Jaya Kumar, Yogan; Goh, Ong Sing; Fengyan, Song
Format: Article
Language: English
Published: Tech Science Press, 2022
Online Access: http://eprints.utem.edu.my/id/eprint/27129/2/0130721062023.pdf
http://eprints.utem.edu.my/id/eprint/27129/
https://www.techscience.com/iasc/v33n2/46762
https://doi.org/10.32604/iasc.2022.023555
Institution: Universiti Teknikal Malaysia Melaka
Citation: Xianda, Ni and Zi, Ye and Jaya Kumar, Yogan and Goh, Ong Sing and Fengyan, Song (2022) Classification of echocardiographic standard views using a hybrid attention-based approach. Intelligent Automation and Soft Computing, 33 (2). pp. 1197-1215. ISSN 1079-8587.