Distilling the knowledge from handcrafted features for human activity recognition

Human activity recognition is a core problem in intelligent automation systems owing to its far-reaching applications, including ubiquitous computing, health-care services, and smart living. Because smartphones are nonintrusive, smartphone sensors are widely used to identify human activities. However, unlike in vision or data mining applications, feature embeddings from deep neural networks perform much worse in terms of recognition accuracy than properly designed handcrafted features. In this paper, we posit that feature embeddings from deep neural networks may convey complementary information, and we propose a novel knowledge-distillation strategy to improve their performance. More specifically, an efficient shallow network, a single-layer feedforward neural network (SLFN) with handcrafted features, is utilized to assist a deep long short-term memory (LSTM) network. On the one hand, the deep LSTM network is able to learn features from raw sensory data and to encode temporal dependencies. On the other hand, the deep LSTM network can also learn from the SLFN to mimic how it generalizes. Experimental results demonstrate the superiority of the proposed method in terms of recognition accuracy over several state-of-the-art methods in the literature.
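
The abstract describes a teacher-student setup: a shallow SLFN teacher trained on handcrafted features guides a deep LSTM student trained on raw sensor windows. As a rough illustration only, here is a minimal PyTorch sketch of that idea; the paper does not specify a framework, and all class names, hyperparameters, and the Hinton-style softened-label loss below are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch of SLFN-teacher / LSTM-student distillation.
# All names and hyperparameters here are hypothetical, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SLFNTeacher(nn.Module):
    """Shallow teacher: one hidden layer over handcrafted feature vectors."""
    def __init__(self, n_features, n_hidden, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden),
            nn.Sigmoid(),
            nn.Linear(n_hidden, n_classes),
        )

    def forward(self, x):          # x: (batch, n_features)
        return self.net(x)         # class logits

class LSTMStudent(nn.Module):
    """Deep LSTM student that learns directly from raw sensor sequences."""
    def __init__(self, n_channels, n_hidden, n_layers, n_classes):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, n_hidden, n_layers, batch_first=True)
        self.fc = nn.Linear(n_hidden, n_classes)

    def forward(self, x):          # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])   # logits from the last time step

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hard-label cross-entropy plus KL to the teacher's softened outputs."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                    # standard temperature-squared scaling
    return (1 - alpha) * hard + alpha * soft
```

A training step would feed the same labelled window to both networks (raw samples to the student, extracted statistics to the teacher) and backpropagate distillation_loss only through the student, so the LSTM both fits the hard labels and mimics how the SLFN generalizes.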


Bibliographic Details
Main Authors: Chen, Zhenghua; Zhang, Le; Cao, Zhiguang; Guo, Jing
Other Authors: School of Electrical and Electronic Engineering
Format: Journal Article
Language: English
Published: 2018
Subjects: DRNTU::Engineering::Electrical and electronic engineering; Human Activity Recognition; Deep Long Short-term Memory (LSTM) Network
Online Access:https://hdl.handle.net/10356/86019
http://hdl.handle.net/10220/48267
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-86019
Record Format: dspace
Citation: Chen, Z., Zhang, L., Cao, Z., & Guo, J. (2018). Distilling the knowledge from handcrafted features for human activity recognition. IEEE Transactions on Industrial Informatics, 14(10), 4334-4342. doi:10.1109/TII.2018.2789925
Journal: IEEE Transactions on Industrial Informatics
ISSN: 1551-3203
DOI: 10.1109/TII.2018.2789925
Description: Accepted version, 9 p., application/pdf
Rights: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/TII.2018.2789925.
Collection: DR-NTU, NTU Library, Nanyang Technological University, Singapore