Can differential testing improve automatic speech recognition systems?
Due to the widespread adoption of Automatic Speech Recognition (ASR) systems in many critical domains, ensuring the quality of recognized transcriptions is of great importance. A recent work, CrossASR++, can automatically uncover many failures in ASR systems by taking advantage of differential testing. It employs a Text-To-Speech (TTS) system to synthesize audio from text and then reveals failed test cases by feeding the audio to multiple ASR systems for cross-referencing. However, no prior work has tried to utilize the generated test cases to enhance the quality of ASR systems. In this paper, we explore the improvements brought by leveraging these test cases from two aspects, which we collectively refer to as a novel idea, evolutionary differential testing. On the one hand, we fine-tune a target ASR system on the test cases generated for it. On the other hand, we fine-tune a cross-referenced ASR system inside CrossASR++, with the aim of boosting CrossASR++'s ability to uncover more failed test cases. Our experimental results show that both ways of leveraging the test cases substantially improve the target ASR system and CrossASR++ itself. After fine-tuning, the number of failed test cases uncovered for the target ASR system decreases by 25.81% and its word error rate drops by 45.81%. Moreover, by evolving just one cross-referenced ASR system, CrossASR++ finds 5.70%, 7.25%, 3.93%, and 1.52% more failed test cases for four target ASR systems, respectively.
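The cross-referencing step described in the abstract can be sketched roughly as follows. This is a minimal illustration, not CrossASR++'s actual API: the `synthesize`, `target_asr`, and `reference_asrs` callables are hypothetical placeholders for a TTS engine and the ASR systems under test, the word-error-rate helper is a plain word-level Levenshtein distance, and the "target wrong while some other ASR is exactly right" criterion is one plausible reading of the cross-referencing idea.

```python
from typing import Callable, Dict


def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # delete a reference word
                           dp[i][j - 1] + 1,         # insert a hypothesis word
                           dp[i - 1][j - 1] + cost)  # substitute / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


def cross_reference(text: str,
                    synthesize: Callable[[str], bytes],
                    target_asr: Callable[[bytes], str],
                    reference_asrs: Dict[str, Callable[[bytes], str]]) -> dict:
    """One differential-testing step: synthesize audio for `text`, transcribe it
    with the target ASR and several cross-referenced ASRs, and flag a failed
    test case when the target is wrong while some other ASR is exactly right."""
    audio = synthesize(text)  # TTS: text -> audio bytes
    target_ok = wer(text, target_asr(audio)) == 0.0
    others_ok = [wer(text, asr(audio)) == 0.0 for asr in reference_asrs.values()]
    return {
        "text": text,
        "failed_for_target": (not target_ok) and any(others_ok),
        # If no ASR transcribes the audio correctly, the synthesized audio
        # itself may be at fault, so the case says little about the target.
        "determinable": target_ok or any(others_ok),
    }
```

The failed test cases (synthesized audio paired with its source text) are what the paper then reuses as additional fine-tuning data, both for the target ASR system and for one cross-referenced ASR system inside CrossASR++.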
Main Authors: | ASYROFI, Muhammad Hilmi; YANG, Zhou; SHI, Jieke; QUAN, Chu Wei; LO, David |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2021 |
Subjects: | Databases and Information Systems |
Online Access: | https://ink.library.smu.edu.sg/sis_research/6893 https://ink.library.smu.edu.sg/context/sis_research/article/7896/viewcontent/Can_Differential_Testing_Improve.pdf |
DOI: | 10.1109/ICSME52107.2021.00079 |
License: | http://creativecommons.org/licenses/by-nc-nd/4.0/ |
Collection: | Research Collection School Of Computing and Information Systems |
Institution: | Singapore Management University |
id | sg-smu-ink.sis_research-7896 |
---|---|
record_format | dspace |
institution | Singapore Management University |
building | SMU Libraries |
continent | Asia |
country | Singapore |
content_provider | SMU Libraries |
collection | InK@SMU |
language | English |
topic | Databases and Information Systems |
format | text |
author | ASYROFI, Muhammad Hilmi; YANG, Zhou; SHI, Jieke; QUAN, Chu Wei; LO, David |
author_sort | ASYROFI, Muhammad Hilmi |
title | Can differential testing improve automatic speech recognition systems? |
publisher | Institutional Knowledge at Singapore Management University |
publishDate | 2021 |
url | https://ink.library.smu.edu.sg/sis_research/6893 https://ink.library.smu.edu.sg/context/sis_research/article/7896/viewcontent/Can_Differential_Testing_Improve.pdf |
_version_ | 1770576114851250176 |