TRANSFER LEARNING BASED COVID-19 CLASSIFICATION USING COUGH SOUND
Saved in:
Main Author:
Format: Final Project
Language: Indonesian
Online Access: https://digilib.itb.ac.id/gdl/view/66656
Institution: Institut Teknologi Bandung
Summary: COVID-19 is an ongoing pandemic, and many papers address COVID-19 classification from respiratory sounds, especially cough sounds, because this approach is fast, low-cost, and contactless, so it does not create new clusters of COVID-19 cases. However, previous works consistently note that publicly available cough-sound data labeled for COVID-19 is very limited, whereas deep learning architectures need large amounts of data to converge quickly and perform well. To mitigate this problem, some previous works adopt a transfer learning approach, but most use only a single transfer learning approach and do not compare it with alternatives.
In this research, our contribution is to try two variations of the transfer learning approach and compare their performance in order to mitigate the limited availability of labeled COVID-19 data. The two variations are weight initialization and feature extraction, both implemented on the ResNet-50 architecture. The pretrained model used in this research was trained to classify three classes of respiratory sounds (cough, sneeze, and throat clearing) and achieves a macro F1-score of 91.16% on an unseen test set.
Both transfer learning variations improve on the baseline model in AUC-ROC and F1-score for the positive class, and the feature extractor variation performs best in this research. On the unseen test set, the weight initializer variation improves the positive-class F1-score by 0.06% and the AUC-ROC by 0.28%, while the feature extractor variation improves the positive-class F1-score by 0.89% and the AUC-ROC by 1.59%. The two variations also suppress the overfitting that occurred when transfer learning was not used.