Performance characterization of deep learning models for breathing-based authentication on resource-constrained devices

Bibliographic Details
Main Authors: CHAUHAN, Jagmohan, RAJASEGARAN, Jathushan, SENEVIRATNE, Suranga, MISRA, Archan, SENEVIRATNE, Aruna, LEE, Youngki
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2018
Subjects:
GMM
IoT
MLP
SVM
Online Access: https://ink.library.smu.edu.sg/sis_research/4255
https://ink.library.smu.edu.sg/context/sis_research/article/5258/viewcontent/IMWUT_RNNBreathing_afv.pdf
Institution: Singapore Management University
Description
Summary: Providing secure access to smart devices such as mobiles, wearables and various other IoT devices is becoming increasingly important, especially as these devices store a range of sensitive personal information. Breathing acoustics-based authentication offers a highly usable and possibly secondary authentication mechanism for such authorized access, especially as it can be readily applied to small form-factor devices. Executing sophisticated machine learning pipelines for such authentication on such devices remains an open problem, given their resource limitations in terms of storage, memory and computational power. To investigate this possibility, we compare the performance of an end-to-end system for both user identification and user verification tasks based on breathing acoustics on three types of smart devices: smartphone, smartwatch and Raspberry Pi, using both shallow classifiers (i.e., SVM, GMM, Logistic Regression) and deep learning based classifiers (e.g., LSTM, MLP). Via detailed investigation, we conclude that LSTM models for acoustic classification are the smallest in size, have the lowest inference time and are more accurate than all other compared classifiers. An uncompressed LSTM model provides 80%–94% accuracy while requiring only 50–180 KB of storage (depending on the breathing gesture). The resulting inference can be done on smartphones and smartwatches within approximately 7–10 ms and 18–66 ms respectively, making these models suitable for resource-constrained devices. Further memory and computational savings can be achieved using model compression methods such as weight quantization and fully connected layer factorization: in particular, a combination of quantization and factorization achieves a 25%–55% reduction in LSTM model size with almost no loss of accuracy. We also compare the performance on GPUs and show that using a GPU can reduce the inference time of LSTM models by a factor of three (300%). These results provide a practical way to deploy breathing-based biometrics, and more broadly LSTM-based classifiers, in future ubiquitous computing applications.
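
The two compression methods named in the abstract, weight quantization and fully connected layer factorization, are generic techniques that are easy to prototype. The following Python sketch is not from the paper; all names, shapes, and the 8-bit/rank-16 settings are illustrative assumptions. It shows each idea in its simplest form: uniform 8-bit quantization of a weight matrix, and low-rank factorization of a fully connected layer via truncated SVD.

# A minimal sketch (not the authors' code) of the two compression ideas
# named in the abstract. Shapes and settings are hypothetical.
import numpy as np

def quantize_weights(w, num_bits=8):
    """Uniformly quantize a float weight matrix to num_bits-wide integers.

    Stores one small integer per weight plus a (scale, offset) pair, so
    8-bit quantization of float32 weights cuts storage roughly fourfold.
    """
    w_min, w_max = w.min(), w.max()
    levels = 2 ** num_bits - 1
    scale = (w_max - w_min) / levels
    q = np.round((w - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def dequantize_weights(q, scale, w_min):
    """Reconstruct an approximate float matrix from its quantized form."""
    return q.astype(np.float32) * scale + w_min

def factorize_fc_layer(w, rank):
    """Replace an (m x n) fully connected weight matrix with two factors,
    (m x rank) and (rank x n), via truncated SVD.

    Parameter count drops from m*n to rank*(m + n), a saving whenever
    rank < m*n / (m + n).
    """
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # (m, rank), singular values folded in
    b = vt[:rank, :]             # (rank, n)
    return a, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((128, 64)).astype(np.float32)  # hypothetical FC weights

    q, scale, w_min = quantize_weights(w)
    w_hat = dequantize_weights(q, scale, w_min)
    print("max quantization error:", np.abs(w - w_hat).max())

    a, b = factorize_fc_layer(w, rank=16)
    print("params before:", w.size, "after:", a.size + b.size)
    print("relative factorization error:",
          np.linalg.norm(w - a @ b) / np.linalg.norm(w))

In this simplified form, 8-bit quantization reduces float32 weight storage by about 4x, and a rank-r factorization of an m x n layer pays off whenever r(m + n) < mn; combined savings in this range are broadly consistent with the 25%–55% model-size reductions the abstract reports, though the paper's exact pipeline may differ.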