Modeling dyadic and group impressions with intermodal and interperson features
This article proposes a novel feature-extraction framework for inferring impressions of personality traits, emergent leadership skills, communicative competence, and hiring decisions. The framework extracts multimodal features describing each participant's nonverbal activity, and it captures intermodal and interperson relationships in an interaction: how the target interactor generates nonverbal behavior while the other interactors are also generating nonverbal behavior. Intermodal and interperson patterns are identified as frequently co-occurring events by clustering the multimodal sequences. The framework is applied to the SONVB corpus, an audiovisual dataset collected from dyadic job interviews, and to the ELEA corpus, an audiovisual dataset collected from group meetings. We evaluate the framework on binary classification tasks involving 15 impression variables from the two corpora. The experimental results show that the model trained with co-occurrence features is more accurate than previous models for 14 of the 15 traits. © 2019 Association for Computing Machinery.
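The abstract describes a three-step pipeline: extract per-frame nonverbal features for each interactor, cluster the stacked multimodal sequences into frequently co-occurring events, and classify impressions from the frequencies of those events. Below is a minimal sketch of that general idea, not the authors' implementation: the data shapes, cue choices, cluster count, and classifier are all illustrative assumptions, using synthetic cue intensities with off-the-shelf k-means and a linear SVM.

```python
# Minimal sketch of co-occurrence-event features for impression classification.
# All names, shapes, and hyperparameters here are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical data: 40 dyadic interactions, 300 frames each, 4 nonverbal cues
# per person (e.g., speaking, nodding, gesturing, gazing), intensities in [0, 1].
n_interactions, n_frames, n_cues = 40, 300, 4
interactions = rng.random((n_interactions, n_frames, 2 * n_cues))  # target + partner

# 1) Pool all frames: each frame couples the target's cues with the partner's,
#    so clusters correspond to intermodal and interperson co-occurrence patterns.
frames = interactions.reshape(-1, 2 * n_cues)

# 2) Cluster frames into K prototypical co-occurrence events.
K = 16
events = KMeans(n_clusters=K, n_init=10, random_state=0).fit(frames)

# 3) Represent each interaction by the normalized frequency of each event.
labels = events.labels_.reshape(n_interactions, n_frames)
histograms = np.stack([np.bincount(l, minlength=K) / n_frames for l in labels])

# 4) Train a binary classifier for one impression variable (random labels here).
y = rng.integers(0, 2, size=n_interactions)
clf = LinearSVC().fit(histograms, y)
print("train accuracy:", clf.score(histograms, y))
```

Representing each interaction as a histogram over a shared set of event clusters keeps the feature dimensionality fixed regardless of interaction length, which is one reason bag-of-events representations suit classification over variable-length recordings.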
Main Authors: Okada, Shogo; Nguyen, Laurent Son; Aran, Oya; Gatica-Perez, Daniel
Format: text
Published: Animo Repository, 2019
Subjects: Multimodal user interfaces (Computer systems); Personality; Computer Sciences
DOI: info:doi/10.1145/3265754
Online Access: https://animorepository.dlsu.edu.ph/faculty_research/3497 https://animorepository.dlsu.edu.ph/context/faculty_research/article/4499/type/native/viewcontent/3265754.html
Institution: De La Salle University
id: oai:animorepository.dlsu.edu.ph:faculty_research-4499
record_format: eprints
institution: De La Salle University
building: De La Salle University Library
continent: Asia
country: Philippines
content_provider: De La Salle University Library
collection: DLSU Institutional Repository
topic: Multimodal user interfaces (Computer systems); Personality; Computer Sciences
format: text
author: Okada, Shogo; Nguyen, Laurent Son; Aran, Oya; Gatica-Perez, Daniel
author_sort: Okada, Shogo
title: Modeling dyadic and group impressions with intermodal and interperson features
publisher: Animo Repository
publishDate: 2019
url: https://animorepository.dlsu.edu.ph/faculty_research/3497 https://animorepository.dlsu.edu.ph/context/faculty_research/article/4499/type/native/viewcontent/3265754.html