A Convolutional Neural Network (CNN) for Automated Speech Recognition (ASR) for Low Resource Language: A Case Study on Iban Language
Main Author: | |
---|---|
Format: | Thesis |
Language: | English |
Published: | Universiti Malaysia Sarawak, 2024 |
Subjects: | |
Online Access: | http://ir.unimas.my/id/eprint/45394/3/DSVA_Steve%20Olsen.pdf http://ir.unimas.my/id/eprint/45394/4/Thesis%20Ms._Steve%20Olsen.ftext.pdf http://ir.unimas.my/id/eprint/45394/5/Thesis%20Ms._Steve%20Olsen%20-%2024%20pages.pdf http://ir.unimas.my/id/eprint/45394/ |
Institution: | Universiti Malaysia Sarawak |
Summary: | The development of automatic speech recognition (ASR) systems for under-resourced languages poses challenges due to the lack of written resources required to train such systems. Traditionally, researchers have used language models to improve ASR model accuracy, and some have also resorted to integrating pronunciation dictionaries, but these methods require an abundance of written resources, which under-resourced languages often lack. The Iban language, spoken by the largest ethnic group in Sarawak, Malaysia, is an example of an under-resourced language for which previous attempts at developing an ASR system involved building a pronunciation dictionary and language model, transfer learning, and DNN-HMM acoustic models. However, these methods proved challenging and costly. In this research, we propose a framework that uses a convolutional neural network (CNN) as an acoustic model to build an end-to-end ASR model for the Iban language. Three techniques are proposed to optimize the model without requiring additional data resources: hyperparameter optimization, data augmentation, and transfer learning. We report a significant reduction in word error rate (WER) in our experiments, demonstrating the effectiveness of these techniques. Overall, the proposed framework offers a promising approach for developing ASR systems for under-resourced languages that lack the written resources needed by traditional methods. |
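As a rough, hypothetical sketch of the kind of end-to-end CNN acoustic model the summary describes, the snippet below builds a small convolutional network over log-mel features and trains it with CTC loss in PyTorch. The layer sizes, the 80-band feature dimension, and the 30-symbol character inventory are illustrative assumptions, not the configuration used in the thesis.

```python
# Hypothetical sketch of a CNN acoustic model trained with CTC loss,
# in the spirit of the end-to-end framework described in the summary.
# All sizes (80 mel bands, 256 channels, 30 output symbols) are assumptions.
import torch
import torch.nn as nn

class CNNAcousticModel(nn.Module):
    def __init__(self, n_mels: int = 80, n_classes: int = 30):
        super().__init__()
        # 1-D convolutions over time, treating the mel bands as input channels.
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=11, stride=2, padding=5),
            nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=11, padding=5),
            nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=11, padding=5),
            nn.ReLU(),
        )
        # Project each frame to per-character logits (index 0 is the CTC blank).
        self.classifier = nn.Linear(256, n_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_mels, time) -> logits: (time, batch, n_classes)
        x = self.conv(feats)
        x = x.permute(2, 0, 1)
        return self.classifier(x)

# Toy forward/backward pass on random data, just to show the training-step shape.
model = CNNAcousticModel()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
feats = torch.randn(4, 80, 200)            # 4 utterances, 200 feature frames each
targets = torch.randint(1, 30, (4, 20))    # 20 character labels per utterance
log_probs = model(feats).log_softmax(dim=-1)
input_lengths = torch.full((4,), log_probs.size(0), dtype=torch.long)
target_lengths = torch.full((4,), 20, dtype=torch.long)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print(float(loss))
```

The three optimization techniques named in the summary would sit on top of a model of roughly this shape: hyperparameter search over the convolutional configuration, data augmentation applied to the input features (for example, time and frequency masking), and transfer learning by initializing the convolutional layers from a model trained on a better-resourced language.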