Animal hunt: bioacoustics animal recognition application

Bibliographic Details
Main Author: Low, Ren Hwa
Other Authors: Owen Noel Newton Fernando
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175290
Institution: Nanyang Technological University
Physical Description
Summary: This project aims to create a bioacoustics classification model that can be used for real-time identification of animals based on their sounds in mobile applications. The first part of the project will focus on developing a bioacoustics classification model for the backend of the application. The second part will emphasize deploying the model and optimizing its inference performance for edge devices. Building effective bioacoustics classification models usually requires a substantial amount of labelled data, and the primary challenge for many bioacoustics tasks lies in the scarcity of training data, especially for rare and endangered species. The challenges extend beyond scarcity to data quality: many datasets are only weakly labelled and are plagued by background noise and overlapping vocalizations from different species. To address these limitations, this study reframes the bioacoustics classification task as a few-shot learning problem, relying primarily on transfer learning through pre-trained global bird embedding models such as BirdNET and Perch, which are known for their strong generalization to non-bird taxa. The performance of their embeddings was evaluated on three diverse datasets specific to Singapore. We also propose a pipeline for deriving an annotated dataset for supervised learning using MixIT, a sound separation model designed to isolate background noise and overlapping vocalizations, and RIBBIT, a bioacoustics tool. RIBBIT not only identifies the output channel containing the isolated target vocalizations but also generates strongly labelled data by providing temporal information about the audio events within each recording. Our findings demonstrate that these large-scale acoustic bird classifiers outperform general audio event detection models on bioacoustics classification tasks, and that performance can be further improved by applying separation to the classifier training data, mitigating the shortage of high-quality training data.
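
The summary describes a few-shot transfer-learning setup: a pre-trained embedding model such as BirdNET or Perch is kept frozen, and only a small classifier is trained on its fixed feature vectors. The Python sketch below illustrates that pattern under stated assumptions; the embed() function, the 1280-dimensional embedding size, and the synthetic clips are placeholders standing in for the real pre-trained model, not its actual API.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    EMBED_DIM = 1280  # assumed embedding size; the real models differ

    def embed(audio_window: np.ndarray) -> np.ndarray:
        # Placeholder for a frozen embedding model such as BirdNET or
        # Perch: deterministic pseudo-features so the sketch runs end
        # to end without the actual model weights.
        rng = np.random.default_rng(abs(hash(audio_window.tobytes())) % 2**32)
        return rng.standard_normal(EMBED_DIM)

    # Few-shot setting: only a handful of labelled clips per species.
    rng = np.random.default_rng(0)
    n_species, shots = 5, 8
    clips = [rng.standard_normal(80_000) for _ in range(n_species * shots)]  # 5 s @ 16 kHz
    labels = np.repeat(np.arange(n_species), shots)

    # Transfer learning: freeze the embedding model and fit only a
    # small linear probe on top of the fixed features.
    X = np.stack([embed(c) for c in clips])
    probe = LogisticRegression(max_iter=1000)
    scores = cross_val_score(probe, X, labels, cv=4)
    print(f"linear-probe accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

Because the embedding model stays frozen, only the small linear probe has to be fit, which is why a handful of examples per species can suffice.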
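
The annotation pipeline in the summary chains two tools: MixIT separates a noisy recording into candidate source channels, and RIBBIT scores each channel for the target species' characteristic pulse rate, which both picks out the channel holding the isolated vocalization and yields the timing information used for strong labels. The sketch below mimics that flow; mixit_separate() and ribbit_score() are hypothetical stand-ins written for illustration, not the real MixIT or RIBBIT interfaces.

    import numpy as np

    def mixit_separate(mixture: np.ndarray, n_channels: int = 4) -> list:
        # Hypothetical stand-in for MixIT: split a mixture into candidate
        # source channels via random masks. The real MixIT is a trained
        # neural sound-separation network.
        masks = np.random.default_rng(0).dirichlet(
            np.ones(n_channels), size=mixture.shape[0]).T
        return [mixture * m for m in masks]

    def ribbit_score(channel: np.ndarray, sr: int, pulse_hz: float) -> float:
        # Hypothetical stand-in for RIBBIT: measure how strongly the
        # channel's amplitude envelope pulses at the expected call rate.
        envelope = np.abs(channel)
        spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
        freqs = np.fft.rfftfreq(envelope.size, d=1.0 / sr)
        band = (freqs > pulse_hz * 0.8) & (freqs < pulse_hz * 1.2)
        return float(spectrum[band].sum() / (spectrum.sum() + 1e-9))

    # Separate, then keep the channel that best matches the target
    # species' pulse-rate signature as the cleaned training example.
    sr, pulse_hz = 16_000, 10.0   # assumed sample rate and call rate
    mixture = np.random.default_rng(1).standard_normal(5 * sr)
    channels = mixit_separate(mixture)
    scores = [ribbit_score(ch, sr, pulse_hz) for ch in channels]
    best = int(np.argmax(scores))
    print(f"selected channel {best} (score {scores[best]:.3f})")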