Automatic multimodal human identification for a self-improving, ambient intelligent empathic space (HuMaNRecog)
This paper investigates the problem of human identification in order to aid a self-improving, ambient intelligent empathic space in providing a tailor-fitted environment for its occupant. This is particularly relevant to the empathic space because it must be capable of automatically recognizing its occupant, which in turn allows it to retrieve the occupant's affective and behavior model. While face recognition and voice recognition are well-studied approaches to this problem, they prove brittle in real-world scenarios, primarily because they are unimodal. This research therefore adopts multimodal human identification under the constraints imposed by the empathic space. The paper presents a novel framework that extends unimodal biometrics to multimodal biometric information, using a person's face, voice, and gait for recognition. Evaluations were conducted on a corpus built with 15 registered occupants. The accuracy results of the system for the face, voice, and gait modalities acting independently are 81.33% and 74.02%, respectively. With fused modalities, the system yielded an overall accuracy rate of 86.67%. Although the gait performance is quite low, gait remains a necessary component of the system, since facial and vocal information may be unavailable in certain situations (e.g., the person enters the space with his head bent down, or no auditory information is captured); in such cases, identification can still proceed using the gait information gathered from the occupant, which is always present.
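The abstract describes score-level fusion of face, voice, and gait, with gait serving as a fallback when the other signals are missing. A minimal sketch of that idea is below; all function names, weights, and scores are illustrative assumptions, not the thesis's actual method or data:

```python
def fuse_scores(scores_by_modality, weights):
    """Combine per-occupant match scores across available modalities.

    scores_by_modality: dict modality -> (dict occupant -> score in [0, 1]),
    with a modality absent when its signal was not captured.
    weights: dict modality -> relative weight; renormalized over the
    modalities actually present, so gait alone can still yield a decision.
    """
    present = [m for m in weights if m in scores_by_modality]
    if not present:
        return None  # no biometric signal captured at all
    total_w = sum(weights[m] for m in present)
    occupants = set().union(*(scores_by_modality[m] for m in present))
    fused = {
        o: sum(weights[m] * scores_by_modality[m].get(o, 0.0)
               for m in present) / total_w
        for o in occupants
    }
    # identify the registered occupant with the highest fused score
    return max(fused, key=fused.get)

# Example: face unavailable (head bent down), voice and gait observed.
scores = {
    "voice": {"alice": 0.70, "bob": 0.55},
    "gait": {"alice": 0.60, "bob": 0.80},
}
weights = {"face": 0.4, "voice": 0.35, "gait": 0.25}
print(fuse_scores(scores, weights))  # -> alice
```

When only gait is observed, the same function renormalizes over that single modality, mirroring the fallback behavior the abstract motivates.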
Main Authors: Cheung, Oi Hing; Chuacokiong, Marjorie; Go, Milton; Lee, Nicole
Format: text
Language: English
Published: Animo Repository, 2010
Online Access: https://animorepository.dlsu.edu.ph/etd_honors/371
Institution: De La Salle University
id | oai:animorepository.dlsu.edu.ph:etd_honors-1370
record_format | eprints
spelling | oai:animorepository.dlsu.edu.ph:etd_honors-1370 2022-02-23T04:53:52Z 2010-01-01T08:00:00Z text https://animorepository.dlsu.edu.ph/etd_honors/371 Honors Theses English Animo Repository
institution | De La Salle University
building | De La Salle University Library
continent | Asia
country | Philippines
content_provider | De La Salle University Library
collection | DLSU Institutional Repository
language | English
description | This paper investigates the problem of human identification in order to aid a self-improving, ambient intelligent empathic space in providing a tailor-fitted environment for its occupant. This is particularly relevant to the empathic space because it must be capable of automatically recognizing its occupant, which in turn allows it to retrieve the occupant's affective and behavior model. While face recognition and voice recognition are well-studied approaches to this problem, they prove brittle in real-world scenarios, primarily because they are unimodal. This research therefore adopts multimodal human identification under the constraints imposed by the empathic space. The paper presents a novel framework that extends unimodal biometrics to multimodal biometric information, using a person's face, voice, and gait for recognition. Evaluations were conducted on a corpus built with 15 registered occupants. The accuracy results of the system for the face, voice, and gait modalities acting independently are 81.33% and 74.02%, respectively. With fused modalities, the system yielded an overall accuracy rate of 86.67%. Although the gait performance is quite low, gait remains a necessary component of the system, since facial and vocal information may be unavailable in certain situations (e.g., the person enters the space with his head bent down, or no auditory information is captured); in such cases, identification can still proceed using the gait information gathered from the occupant, which is always present.
format | text
author | Cheung, Oi Hing; Chuacokiong, Marjorie; Go, Milton; Lee, Nicole
author_sort | Cheung, Oi Hing
title | Automatic multimodal human identification for a self-improving, ambient intelligent empathic space (HuMaNRecog)
publisher | Animo Repository
publishDate | 2010
url | https://animorepository.dlsu.edu.ph/etd_honors/371
_version_ | 1726158553329172480