Classifying Mosquito Presence and Genera Using Median and Interquartile Values From 26-Filter Wingbeat Acoustic Properties
Main Authors:
Format: text
Published: Archīum Ateneo, 2021
Subjects:
Online Access: https://archium.ateneo.edu/discs-faculty-pubs/279
https://archium.ateneo.edu/cgi/viewcontent.cgi?article=1276&context=discs-faculty-pubs
Institution: Ateneo De Manila University
Summary: Mosquitoes are known to be among the deadliest creatures in the world. Several studies have aimed to identify mosquito presence and species using various techniques, the most common of which automatically identify mosquito species from the sounds produced by their flapping wings. The development of these concepts and technologies can help reduce the spread of mosquito-borne diseases. This paper presents a simple model based on median and interquartile values that aims to solve the mosquito classification problem. Despite its simplicity, the proposed model significantly outperforms a Convolutional Neural Network (CNN) model in identifying the mosquito genus among the classes of Aedes, Anopheles, and Culex, with an additional fourth class of No-Mosquito. The study used a dataset of sound recordings from the Humbug Zooniverse, collected by researchers from Oxford University, augmented with locally collected audio recordings of mosquitoes in the Philippines. The proposed technique uses the numerical data from a series of 26 different band-pass filter values generated from spectrograms of the audio recordings, specifically computing the median and interquartile values for each filter from instances of the same class. To predict the class of an instance, the sum of squared differences is computed between the actual values of the instance and the expected values of each class on each of these three statistical measures. The average classification accuracy of the proposed model was 92.8%, higher than the 86.6% classification accuracy yielded by the CNN model. Moreover, the proposed model required much less time than the CNN model for both training and classification. As the proposed model outperformed the CNN model in both accuracy and efficiency, the results offer a promising technique that may also simplify the process of solving other sound-based classification problems.
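As a rough illustration of the technique the summary describes, the sketch below assumes each instance is a vector of 26 band-pass filter values and interprets the "three statistical measures" as the per-filter median, first quartile, and third quartile; the `fit` and `predict` helpers are hypothetical names for this illustration, not the authors' code.

```python
# Minimal sketch of the median/interquartile classifier, under the
# assumptions stated above (not the authors' implementation).
import numpy as np

def fit(X, y):
    """Compute per-class expected statistics for each of the 26 filters.

    X: array of shape (n_instances, 26); y: array of class labels.
    Returns a dict mapping class label -> (3, 26) array holding the
    median, first quartile, and third quartile of each filter.
    """
    profiles = {}
    for label in np.unique(y):
        rows = X[y == label]
        profiles[label] = np.stack([
            np.median(rows, axis=0),
            np.percentile(rows, 25, axis=0),
            np.percentile(rows, 75, axis=0),
        ])
    return profiles

def predict(x, profiles):
    """Assign the class whose expected values minimize the sum of
    squared differences against the instance's actual filter values,
    summed over all three statistical measures."""
    def distance(stats):
        return float(((stats - x) ** 2).sum())
    return min(profiles, key=lambda label: distance(profiles[label]))

# Hypothetical usage with random data standing in for wingbeat features:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((40, 26))
    y = np.array(["Aedes", "Anopheles", "Culex", "No-Mosquito"] * 10)
    profiles = fit(X, y)
    print(predict(X[0], profiles))
```

The nearest-profile rule above reflects why the summary reports fast training and classification: fitting reduces to computing order statistics per class, and prediction is a handful of vector operations per class rather than a forward pass through a CNN.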