Deep room recognition using inaudible echos
Recent years have seen an increasing need for location awareness in mobile applications. This paper presents a room-level indoor localization approach based on a room’s measured echos in response to a two-millisecond single-tone inaudible chirp emitted by a smartphone’s loudspeaker. Unlike other acoustics-based room recognition systems that record full-spectrum audio for up to ten seconds, our approach records audio in a narrow inaudible band for only 0.1 seconds to preserve the user’s privacy. However, the short-time, narrowband audio signal carries limited information about the room’s characteristics, presenting challenges to accurate room recognition. This paper applies deep learning to effectively capture the subtle fingerprints in rooms’ acoustic responses. Our extensive experiments show that a two-layer convolutional neural network fed with the spectrogram of the inaudible echos achieves the best performance compared with alternative designs using other raw data formats and deep models. Based on this result, we design a RoomRecognize cloud service and its mobile client library that enable mobile application developers to readily implement room recognition functionality without resorting to any existing infrastructure or add-on hardware. Extensive evaluation shows that RoomRecognize achieves 99.7%, 97.7%, 99%, and 89% accuracy in differentiating 22 and 50 residential/office rooms, 19 spots in a quiet museum, and 15 spots in a crowded museum, respectively. Compared with state-of-the-art approaches based on support vector machines, RoomRecognize significantly improves the Pareto frontier of recognition accuracy versus robustness against interfering sounds (e.g., ambient music).
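As a rough, self-contained sketch of the acquisition step the abstract describes (emit a 2 ms single-tone inaudible chirp, record roughly 0.1 s of echo, and convert the narrowband signal to a spectrogram), the following Python snippet uses NumPy/SciPy. The sample rate, the 20 kHz tone frequency, the ±1 kHz analysis band, and the spectrogram parameters are illustrative assumptions, not values given in this record, and the snippet is not the authors’ RoomRecognize implementation.

```python
import numpy as np
from scipy import signal

FS = 44_100          # sample rate (Hz); assumed, typical for smartphone audio
CHIRP_FREQ = 20_000  # single-tone frequency (Hz); the exact inaudible tone is an assumption
CHIRP_LEN = 0.002    # 2 ms excitation, as described in the abstract
RECORD_LEN = 0.1     # 0.1 s recording window, as described in the abstract

def make_chirp(fs=FS, f0=CHIRP_FREQ, duration=CHIRP_LEN):
    """2 ms single-tone pulse, Hann-windowed to limit spectral leakage."""
    t = np.arange(int(fs * duration)) / fs
    return np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)

def narrowband_spectrogram(recording, fs=FS, f0=CHIRP_FREQ, half_band=1_000):
    """Band-pass the short echo recording around the tone and return its spectrogram."""
    sos = signal.butter(4, [f0 - half_band, f0 + half_band],
                        btype="bandpass", fs=fs, output="sos")
    echo = signal.sosfilt(sos, recording)
    f, _, sxx = signal.spectrogram(echo, fs=fs, nperseg=256, noverlap=192)
    keep = (f >= f0 - half_band) & (f <= f0 + half_band)
    return sxx[keep, :]  # narrowband spectrogram: one candidate input to the classifier

# Example with a synthetic recording (a real client would play the chirp and record the echo):
rec = np.random.randn(int(FS * RECORD_LEN))
print(narrowband_spectrogram(rec).shape)
```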
Saved in:
Main Authors: Song, Qun; Gu, Chaojie; Tan, Rui
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2019
Subjects: Room Recognition; Smartphone; Engineering::Computer science and engineering
Online Access: https://hdl.handle.net/10356/88075 http://hdl.handle.net/10220/49692
Institution: Nanyang Technological University
id
sg-ntu-dr.10356-88075
record_format
dspace
spelling
sg-ntu-dr.10356-880752021-01-08T04:28:05Z
Deep room recognition using inaudible echos
Song, Qun Gu, Chaojie Tan, Rui
School of Computer Science and Engineering
Interdisciplinary Graduate School (IGS)
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)
Energy Research Institute @ NTU (ERI@N)
Room Recognition Smartphone Engineering::Computer science and engineering
Recent years have seen the increasing need of location awareness by mobile applications. This paper presents a room-level indoor localization approach based on the measured room’s echos in response to a two-millisecond single-tone inaudible chirp emitted by a smartphone’s loudspeaker. Different from other acoustics-based room recognition systems that record full-spectrum audio for up to ten seconds, our approach records audio in a narrow inaudible band for 0.1 seconds only to preserve the user’s privacy. However, the short-time and narrowband audio signal carries limited information about the room’s characteristics, presenting challenges to accurate room recognition. This paper applies deep learning to effectively capture the subtle fingerprints in the rooms’ acoustic responses. Our extensive experiments show that a two-layer convolutional neural network fed with the spectrogram of the inaudible echos achieve the best performance, compared with alternative designs using other raw data formats and deep models. Based on this result, we design a RoomRecognize cloud service and its mobile client library that enable the mobile application developers to readily implement the room recognition functionality without resorting to any existing infrastructures and add-on hardware. Extensive evaluation shows that RoomRecognize achieves 99.7%, 97.7%, 99%, and 89% accuracy in differentiating 22 and 50 residential/office rooms, 19 spots in a quiet museum, and 15 spots in a crowded museum, respectively. Compared with the state-of-the-art approaches based on support vector machine, RoomRecognize significantly improves the Pareto frontier of recognition accuracy versus robustness against interfering sounds (e.g., ambient music).
Accepted version
2019-08-20T05:51:45Z 2019-12-06T16:55:28Z 2019-08-20T05:51:45Z 2019-12-06T16:55:28Z
2018
Conference Paper
Song, Q., Gu, C., & Tan, R. (2018). Deep room recognition using inaudible echos. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), 2(3), 135-. doi:10.1145/3264945
https://hdl.handle.net/10356/88075 http://hdl.handle.net/10220/49692
10.1145/3264945
en
© 2018 Association for Computing Machinery (ACM). All rights reserved. This paper was published in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) and is made available with permission of Association for Computing Machinery (ACM).
28 p.
application/pdf
institution
Nanyang Technological University
building
NTU Library
continent
Asia
country
Singapore
content_provider
NTU Library
collection
DR-NTU
language
English
topic
Room Recognition Smartphone Engineering::Computer science and engineering
spellingShingle
Room Recognition Smartphone Engineering::Computer science and engineering Song, Qun Gu, Chaojie Tan, Rui Deep room recognition using inaudible echos
description
Recent years have seen an increasing need for location awareness in mobile applications. This paper presents a room-level indoor localization approach based on a room’s measured echos in response to a two-millisecond single-tone inaudible chirp emitted by a smartphone’s loudspeaker. Unlike other acoustics-based room recognition systems that record full-spectrum audio for up to ten seconds, our approach records audio in a narrow inaudible band for only 0.1 seconds to preserve the user’s privacy. However, the short-time, narrowband audio signal carries limited information about the room’s characteristics, presenting challenges to accurate room recognition. This paper applies deep learning to effectively capture the subtle fingerprints in rooms’ acoustic responses. Our extensive experiments show that a two-layer convolutional neural network fed with the spectrogram of the inaudible echos achieves the best performance compared with alternative designs using other raw data formats and deep models. Based on this result, we design a RoomRecognize cloud service and its mobile client library that enable mobile application developers to readily implement room recognition functionality without resorting to any existing infrastructure or add-on hardware. Extensive evaluation shows that RoomRecognize achieves 99.7%, 97.7%, 99%, and 89% accuracy in differentiating 22 and 50 residential/office rooms, 19 spots in a quiet museum, and 15 spots in a crowded museum, respectively. Compared with state-of-the-art approaches based on support vector machines, RoomRecognize significantly improves the Pareto frontier of recognition accuracy versus robustness against interfering sounds (e.g., ambient music).
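The description reports that a two-layer convolutional neural network fed with the echo spectrogram performed best. As a hedged illustration only, the PyTorch sketch below shows what a small two-convolutional-layer spectrogram classifier could look like; the framework choice, layer widths, kernel sizes, pooling, and input dimensions are assumptions made here and are not the architecture published in the paper.

```python
import torch
import torch.nn as nn

class RoomCNN(nn.Module):
    """Two convolutional layers over a narrowband echo spectrogram, followed by a
    linear room classifier. Input shape (1 x freq_bins x time_frames) and all
    hyperparameters are illustrative assumptions."""
    def __init__(self, n_rooms: int, freq_bins: int = 12, time_frames: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # conv layer 1
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # conv layer 2
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        flat = 32 * (freq_bins // 4) * (time_frames // 4)
        self.classifier = nn.Linear(flat, n_rooms)

    def forward(self, x):  # x: (batch, 1, freq_bins, time_frames)
        z = self.features(x)
        return self.classifier(z.flatten(start_dim=1))

# Example: classify a batch of 8 spectrograms among 22 rooms.
model = RoomCNN(n_rooms=22)
logits = model(torch.randn(8, 1, 12, 64))
print(logits.shape)  # torch.Size([8, 22])
```

In the system described above, such a classifier would presumably run behind the RoomRecognize cloud service, with the mobile client library handling chirp emission and recording; the record itself does not specify that division of work in detail.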
author2
School of Computer Science and Engineering
author_facet
School of Computer Science and Engineering Song, Qun Gu, Chaojie Tan, Rui
format
Conference or Workshop Item
author
Song, Qun Gu, Chaojie Tan, Rui
author_sort
Song, Qun
title
Deep room recognition using inaudible echos
title_short
Deep room recognition using inaudible echos
title_full
Deep room recognition using inaudible echos
title_fullStr
Deep room recognition using inaudible echos
title_full_unstemmed
Deep room recognition using inaudible echos
title_sort
deep room recognition using inaudible echos
publishDate
2019
url
https://hdl.handle.net/10356/88075 http://hdl.handle.net/10220/49692
_version_
1688665441705132032