Investigation of multimodality sensors for real-time emotion assessment
Format: Final Year Project
Language: English
Published: 2016
Online Access: http://hdl.handle.net/10356/66620
Institution: Nanyang Technological University
Summary: Affective research using a multi-modal approach has increasingly involved the use of commercially available sensor devices. However, there is little published data on the accuracy and performance of such devices in predicting emotions. The objective of this project is to analyse the performance and capabilities of commercial sensor devices in predicting human emotions. Two methods of quantifying emotions are introduced, namely Paul Ekman's six discrete basic emotions and the Circumplex valence-arousal dimensional model. The project also analyses whether the emotion elicitation methodology adopted, namely the International Affective Picture System (IAPS) or the Karolinska Directed Emotional Faces (KDEF), has any effect on device performance.
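As a rough illustration of the dimensional model mentioned above, a pair of valence and arousal ratings can be placed in one of four Circumplex quadrants. The helper below is a hypothetical sketch, assuming the standard 9-point SAM scale with a neutral midpoint of 5; it is not part of the project's software.

```python
# Hypothetical helper: map a SAM-style valence/arousal rating pair
# onto a Circumplex quadrant. Assumes a 9-point scale with the
# neutral midpoint at 5 (standard for SAM, but an assumption here).
def circumplex_quadrant(valence: float, arousal: float,
                        midpoint: float = 5.0) -> str:
    v = "positive" if valence >= midpoint else "negative"
    a = "high" if arousal >= midpoint else "low"
    return f"{a}-arousal {v}"

# e.g. excitement sits in the high-arousal positive quadrant,
# sadness in the low-arousal negative quadrant:
print(circumplex_quadrant(8, 8))  # high-arousal positive
print(circumplex_quadrant(2, 2))  # low-arousal negative
```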
The project is split into three phases, namely the development, experiment, and analysis phases. The development phase focuses on experiment design and the development of the hardware and software systems. An experimental software application was developed in C# involving the Muse EEG headband, Amped PPG sensor, and Intel RealSense camera. The software records the subject's EEG, PPG, and facial data, using the IAPS and KDEF image libraries to elicit emotions in two separate experiments.
The second phase involved the actual conduct of the experiments. The IAPS and KDEF experiments attracted 10 and 8 subjects respectively. Subjects were shown 20 images chosen from the IAPS library and were required to mimic the facial expressions of 18 images from the KDEF library. In both experiments, subjects answered a Self-Assessment Manikin (SAM) questionnaire after seeing each image to record their subjective valence-arousal ratings.
A preliminary analysis of the data collected from the experiments was done using Weka's J48 decision tree with 10-fold cross-validation to determine the accuracy of each sensor device in both experiments. Recommendations for future work and possible improvements are also discussed towards the end of the report.
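The 10-fold cross-validation protocol described above can be sketched generically: the data are shuffled once, split into ten folds, and each fold in turn is held out for testing while the rest train the classifier. The sketch below is a minimal Python illustration of that protocol only; the report itself used Weka's J48 (a Java C4.5 implementation), and the `k_fold_accuracy`, `majority_train`, and `majority_predict` names here are hypothetical.

```python
import random
from collections import Counter

def k_fold_accuracy(features, labels, train_fn, predict_fn, k=10, seed=0):
    """k-fold cross-validation accuracy; any classifier plugs in via
    train_fn(X, y) -> model and predict_fn(model, x) -> label."""
    idx = list(range(len(labels)))
    random.Random(seed).shuffle(idx)        # fixed seed for repeatability
    folds = [idx[i::k] for i in range(k)]   # k roughly equal folds
    correct = 0
    for f in range(k):
        held_out = set(folds[f])
        train = [i for i in idx if i not in held_out]
        model = train_fn([features[i] for i in train],
                         [labels[i] for i in train])
        correct += sum(predict_fn(model, features[i]) == labels[i]
                       for i in folds[f])
    return correct / len(labels)

# Majority-class baseline as a stand-in classifier (illustrative only).
def majority_train(X, y):
    return Counter(y).most_common(1)[0][0]

def majority_predict(model, x):
    return model
```

Plugging a real classifier into `train_fn`/`predict_fn` in place of the majority baseline yields the per-device accuracies the analysis phase reports.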