Performance characterization of deep learning models for breathing-based authentication on resource-constrained devices

Providing secure access to smart devices such as mobiles, wearables and various other IoT devices is becoming increasingly important, especially as these devices store a range of sensitive personal information. Breathing acoustics-based authentication offers a highly usable and possibly secondary authentication mechanism...

Bibliographic Details
Main Authors: CHAUHAN, Jagmohan, RAJASEGARAN, Jathushan, SENEVIRATNE, Suranga, MISRA, Archan, SENEVIRATNE, Aruna, LEE, Youngki
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2018
Subjects:
GMM
IoT
MLP
SVM
Online Access: https://ink.library.smu.edu.sg/sis_research/4255
https://ink.library.smu.edu.sg/context/sis_research/article/5258/viewcontent/IMWUT_RNNBreathing_afv.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-5258
record_format dspace
spelling sg-smu-ink.sis_research-5258 2019-02-08T01:09:52Z Performance characterization of deep learning models for breathing-based authentication on resource-constrained devices CHAUHAN, Jagmohan RAJASEGARAN, Jathushan SENEVIRATNE, Suranga MISRA, Archan SENEVIRATNE, Aruna LEE, Youngki Providing secure access to smart devices such as mobiles, wearables and various other IoT devices is becoming increasingly important, especially as these devices store a range of sensitive personal information. Breathing acoustics-based authentication offers a highly usable and possibly secondary authentication mechanism for such authorized access, especially as it can be readily applied to small form-factor devices. Executing sophisticated machine learning pipelines for such authentication on such devices remains an open problem, given their resource limitations in terms of storage, memory and computational power. To investigate this possibility, we compare the performance of an end-to-end system for both user identification and user verification tasks based on breathing acoustics on three types of smart devices: smartphone, smartwatch and Raspberry Pi, using both shallow classifiers (i.e., SVM, GMM, Logistic Regression) and deep learning-based classifiers (e.g., LSTM, MLP). Via detailed investigation, we conclude that LSTM models for acoustic classification are the smallest in size, have the lowest inference time and are more accurate than all other compared classifiers. An uncompressed LSTM model provides 80%–94% accuracy while requiring only 50–180 KB of storage (depending on the breathing gesture). The resulting inference can be done on smartphones and smartwatches within approximately 7–10 ms and 18–66 ms respectively, thereby making them suitable for resource-constrained devices. Further memory and computational savings can be achieved using model compression methods such as weight quantization and fully connected layer factorization: in particular, a combination of quantization and factorization achieves a 25%–55% reduction in LSTM model size, with almost no loss of accuracy. We also compare the performance on GPUs and show that the use of a GPU can reduce the inference time of LSTM models by a factor of 300%. These results provide a practical way to deploy breathing-based biometrics, and more broadly LSTM-based classifiers, in future ubiquitous computing applications. 2018-04-12T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/4255 info:doi/10.1145/3287036 https://ink.library.smu.edu.sg/context/sis_research/article/5258/viewcontent/IMWUT_RNNBreathing_afv.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Authentication Breathing Gestures GMM IoT LSTM MLP SVM Security Wearables Software Engineering
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Authentication
Breathing Gestures
GMM
IoT
LSTM
MLP
SVM
Security
Wearables
Software Engineering
spellingShingle Authentication
Breathing Gestures
GMM
IoT
LSTM
MLP
SVM
Security
Wearables
Software Engineering
CHAUHAN, Jagmohan
RAJASEGARAN, Jathushan
SENEVIRATNE, Suranga
MISRA, Archan
SENEVIRATNE, Aruna
LEE, Youngki
Performance characterization of deep learning models for breathing-based authentication on resource-constrained devices
description Providing secure access to smart devices such as mobiles, wearables and various other IoT devices is becoming increasingly important, especially as these devices store a range of sensitive personal information. Breathing acoustics-based authentication offers a highly usable and possibly secondary authentication mechanism for such authorized access, especially as it can be readily applied to small form-factor devices. Executing sophisticated machine learning pipelines for such authentication on such devices remains an open problem, given their resource limitations in terms of storage, memory and computational power. To investigate this possibility, we compare the performance of an end-to-end system for both user identification and user verification tasks based on breathing acoustics on three types of smart devices: smartphone, smartwatch and Raspberry Pi, using both shallow classifiers (i.e., SVM, GMM, Logistic Regression) and deep learning-based classifiers (e.g., LSTM, MLP). Via detailed investigation, we conclude that LSTM models for acoustic classification are the smallest in size, have the lowest inference time and are more accurate than all other compared classifiers. An uncompressed LSTM model provides 80%–94% accuracy while requiring only 50–180 KB of storage (depending on the breathing gesture). The resulting inference can be done on smartphones and smartwatches within approximately 7–10 ms and 18–66 ms respectively, thereby making them suitable for resource-constrained devices. Further memory and computational savings can be achieved using model compression methods such as weight quantization and fully connected layer factorization: in particular, a combination of quantization and factorization achieves a 25%–55% reduction in LSTM model size, with almost no loss of accuracy. We also compare the performance on GPUs and show that the use of a GPU can reduce the inference time of LSTM models by a factor of 300%. These results provide a practical way to deploy breathing-based biometrics, and more broadly LSTM-based classifiers, in future ubiquitous computing applications.
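The description above names two compression techniques, weight quantization and fully connected layer factorization, as the source of the 25%–55% reduction in LSTM model size. The Python sketch below is not the authors' pipeline: the layer dimensions (40 acoustic features per frame, 64 hidden units, 10 classes), the min/max int8 quantizer and the SVD-based low-rank factorization are illustrative assumptions, included only to show where the storage savings come from.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical LSTM layer: 40 acoustic features per frame, 64 hidden units.
# An LSTM stores four gate matrices for the input-to-hidden and
# hidden-to-hidden weights, plus four bias vectors.
input_dim, hidden = 40, 64
lstm_params = 4 * (input_dim * hidden + hidden * hidden + hidden)

# Hypothetical fully connected (FC) output layer: 64 hidden units -> 10 users.
n_classes = 10
W_fc = rng.standard_normal((hidden, n_classes)).astype(np.float32)
fc_params = W_fc.size + n_classes                  # weights + biases

total_params = lstm_params + fc_params
print(f"float32 model size : {total_params * 4 / 1024:.1f} KB")

# 1) Weight quantization: store each weight in 8 bits instead of 32,
#    using a simple affine (min/max) scheme; dequantize at inference time.
def quantize_int8(w):
    scale = (w.max() - w.min()) / 255.0
    zero_point = w.min()
    q = np.round((w - zero_point) / scale).astype(np.uint8)
    return q, scale, zero_point

q_fc, scale, zero_point = quantize_int8(W_fc)
# If every weight tensor were quantized this way, storage drops to roughly
# one byte per parameter plus a scale/zero-point pair per tensor.
print(f"int8 model size    : {total_params * 1 / 1024:.1f} KB")

# 2) FC layer factorization: replace W (hidden x n_classes) with A @ B,
#    where A is (hidden x r) and B is (r x n_classes) for a small rank r.
r = 4
U, S, Vt = np.linalg.svd(W_fc, full_matrices=False)
A, B = U[:, :r] * S[:r], Vt[:r, :]
rel_err = np.linalg.norm(W_fc - A @ B) / np.linalg.norm(W_fc)
print(f"FC parameters      : {fc_params} -> {A.size + B.size + n_classes} "
      f"(rank-{r} approximation, relative error {rel_err:.2f})")

In practice the factored layer would be fine-tuned after the SVD step to recover any lost accuracy; this sketch only accounts for the bookkeeping of parameter counts and bytes per weight.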
format text
author CHAUHAN, Jagmohan
RAJASEGARAN, Jathushan
SENEVIRATNE, Suranga
MISRA, Archan
SENEVIRATNE, Aruna
LEE, Youngki
author_facet CHAUHAN, Jagmohan
RAJASEGARAN, Jathushan
SENEVIRATNE, Suranga
MISRA, Archan
SENEVIRATNE, Aruna
LEE, Youngki
author_sort CHAUHAN, Jagmohan
title Performance characterization of deep learning models for breathing-based authentication on resource-constrained devices
title_short Performance characterization of deep learning models for breathing-based authentication on resource-constrained devices
title_full Performance characterization of deep learning models for breathing-based authentication on resource-constrained devices
title_fullStr Performance characterization of deep learning models for breathing-based authentication on resource-constrained devices
title_full_unstemmed Performance characterization of deep learning models for breathing-based authentication on resource-constrained devices
title_sort performance characterization of deep learning models for breathing-based authentication on resource-constrained devices
publisher Institutional Knowledge at Singapore Management University
publishDate 2018
url https://ink.library.smu.edu.sg/sis_research/4255
https://ink.library.smu.edu.sg/context/sis_research/article/5258/viewcontent/IMWUT_RNNBreathing_afv.pdf
_version_ 1770574545875369984