Small footprint model for noisy far-field keyword spotting
| Field | Value |
|---|---|
| Main Author: | |
| Other Authors: | |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2022 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/158398 |
| Institution: | Nanyang Technological University |
Summary: Building a keyword spotting model with a small memory footprint is important because such models typically run on mobile devices with limited computational resources. However, it is very challenging to develop a lightweight model while maintaining state-of-the-art results in noisy far-field environments. In real life, noisy, reverberant environments degrade the performance of a keyword spotting model. We explored a variety of baseline models and data-processing techniques to make effective keyword predictions. Additionally, we proposed a novel feature-interactive convolution model with a small parameter count for single-channel and multi-channel utterances. The interactive unit is implemented as an attention mechanism that enhances the flow of information while using fewer computational resources. Moreover, we proposed a centroid-based awareness component that improves the multi-channel system by providing additional spatial-geometry information in the latent feature projection space. The single-channel model was evaluated on the Google Speech Commands V2-12 dataset, whereas the multi-channel model was evaluated on the MISP Challenge 2021 dataset. Our single-channel model achieves 98.2% accuracy on the original Google Speech Commands dataset and outperforms most previous small models. Our multi-channel model also achieves a substantial improvement over the official competition baseline: a 55% gain in the competition score (0.152) on 6-channel audio input, and a 63% gain (0.126) when using traditional front-end speech enhancement.
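The project report itself is not reproduced in this record. As a rough illustration of the kind of attention-based interactive unit the summary describes (one feature stream attending to another so information flows between branches at low cost), here is a minimal sketch. All function names, shapes, and the residual-fusion choice are assumptions for illustration, not the author's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def interactive_attention(a, b, w_q, w_k, w_v):
    """Fuse stream `b` into stream `a` via scaled dot-product attention.

    a, b : (time, dim) feature maps from two hypothetical branches.
    w_q, w_k, w_v : (dim, dim) learned projections (random here).
    Returns a residually fused (time, dim) feature map.
    """
    q = a @ w_q                     # queries from stream a
    k = b @ w_k                     # keys from stream b
    v = b @ w_v                     # values from stream b
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return a + softmax(scores, axis=-1) @ v  # residual fusion of b into a

rng = np.random.default_rng(0)
t, d = 8, 16
a = rng.standard_normal((t, d))
b = rng.standard_normal((t, d))
w_q, w_k, w_v = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

fused = interactive_attention(a, b, w_q, w_k, w_v)
print(fused.shape)  # (8, 16)
```

In the multi-channel setting described in the summary, a centroid-based awareness component could, under the same hedged reading, summarize per-channel embeddings by their mean in the latent space and feed that centroid back as extra spatial context; the details depend on the report.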