PURE: passive multi-person identification via footstep for mobile service networks

Bibliographic Details
Main Authors: Cai, Chao, Jin, Ruinan, Nie, Jiangtian, Kang, Jiawen, Zhang, Yang, Luo, Jun
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Subjects:
Online Access: https://hdl.handle.net/10356/170812
Institution: Nanyang Technological University
Description
Summary: Recently, passive behavioral biometrics (e.g., gesture or footstep) acquired from wireless networks or mobile services have become promising complements to conventional user identification methods (e.g., face or fingerprint) in special situations, yet existing sensing technologies require lengthy measurement traces and cannot identify multiple users at the same time. To this end, we propose PURE, a passive multi-person identification system leveraging deep-learning-enabled footstep separation and recognition. PURE passively identifies a user by recognizing the unique “footprints” in their footsteps. Unlike existing gait-enabled recognition systems, which incur a long sensing delay to acquire many footsteps, PURE can recognize a person from as few as a single step, substantially cutting identification latency. To make PURE adaptive to walking pace variations, environmental dynamics, and even unseen targets, we apply an adversarial learning technique to improve its domain generalisability and identification accuracy. Finally, PURE is robust against replay attacks, enabled by the richness of footstep signals and its spatial awareness. We implement a PURE prototype using commodity hardware and evaluate it in typical indoor settings. Evaluation results demonstrate a cross-domain identification accuracy of over 90%.
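The summary mentions an adversarial learning technique for domain generalisability but does not detail the architecture. One common realization of this idea is a domain-adversarial network with a gradient reversal layer (in the style of Ganin & Lempitsky's DANN); the following PyTorch sketch illustrates that general pattern only. It is not PURE's actual model: the name FootstepDANN, the spectrogram input shape, and all layer sizes are assumptions made for the example.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated
    (and scaled) gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # No gradient w.r.t. lam, hence the trailing None.
        return -ctx.lam * grad_output, None


class FootstepDANN(nn.Module):
    """Hypothetical domain-adversarial footstep identifier; input is
    assumed to be a 1 x 64 x 64 footstep spectrogram."""

    def __init__(self, n_users, n_domains, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.user_head = nn.Linear(feat_dim, n_users)      # who is walking
        self.domain_head = nn.Linear(feat_dim, n_domains)  # which environment/pace

    def forward(self, x, lam=1.0):
        z = self.encoder(x)
        # The domain head sees reversed gradients, pushing the encoder
        # toward domain-invariant footstep features.
        return self.user_head(z), self.domain_head(GradReverse.apply(z, lam))


# Minimal training step on dummy data:
model = FootstepDANN(n_users=10, n_domains=3)
x = torch.randn(8, 1, 64, 64)                # batch of footstep spectrograms
y_user = torch.randint(0, 10, (8,))          # identity labels
y_dom = torch.randint(0, 3, (8,))            # domain labels
user_logits, dom_logits = model(x, lam=0.5)
loss = nn.functional.cross_entropy(user_logits, y_user) \
     + nn.functional.cross_entropy(dom_logits, y_dom)
loss.backward()
```

The gradient reversal layer leaves the forward pass unchanged but negates gradients flowing from the domain classifier into the encoder, so the encoder learns features that remain predictive of user identity while carrying little information about the recording domain. This matches the summary's stated goal of adapting to walking pace variations, environmental dynamics, and unseen targets.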