Predicting levels of rapport in dyadic interactions through automatic detection of posture and posture congruence
Main Authors:
Format: text
Published: Animo Repository, 2011
Subjects:
Online Access: https://animorepository.dlsu.edu.ph/faculty_research/2583
Institution: De La Salle University
Summary: Research in psychology and social signal processing (SSP) often describes posture as one of the most expressive nonverbal cues, and several studies in psychology specifically link posture-mirroring behaviour to rapport. Currently, however, few studies deal with the automatic analysis of posture, and none focus specifically on its connection with rapport. This study presents a method for automatically predicting rapport in dyadic interactions based on posture and posture congruence. We begin by constructing a dataset of dyadic interactions with self-reported rapport annotations. We then present a simple system for posture classification and use it to detect posture congruence in dyads. Sliding time windows are used to collect posture congruence statistics across video segments. Lastly, various machine learning techniques are tested and used to build rapport models. Among the learners tested, Support Vector Machines and Multilayer Perceptrons performed best, at around 71% average accuracy. © 2011 IEEE.
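The sliding-window step described in the summary can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the posture label set, window size, and step size are all assumptions, and the paper may aggregate richer statistics than the plain match rate shown here.

```python
# Hypothetical sketch of sliding-window posture congruence statistics.
# Inputs: per-frame posture labels for the two participants of a dyad.
# Congruence is taken here to mean "both participants hold the same
# posture label in the same frame"; the label names, window size, and
# step are illustrative assumptions, not values from the paper.

def congruence_rate(a, b):
    """Fraction of frames in which both participants share a posture."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def windowed_congruence(a, b, win=30, step=15):
    """Congruence rate over sliding windows of `win` frames, moved by `step`."""
    feats = []
    for start in range(0, len(a) - win + 1, step):
        feats.append(congruence_rate(a[start:start + win],
                                     b[start:start + win]))
    return feats

# Toy dyad with three illustrative posture classes.
p1 = ["lean_fwd"] * 40 + ["arms_crossed"] * 20
p2 = ["lean_fwd"] * 30 + ["lean_back"] * 30
print(windowed_congruence(p1, p2, win=30, step=15))  # → [1.0, 0.5, 0.0]
```

Per-window rates like these would then form the feature vectors fed to the classifiers the summary mentions (e.g. an SVM or multilayer perceptron), with each window's self-reported rapport score as the target.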