The empathetic car: Exploring emotion inference via driver behaviour and traffic context

Bibliographic Details
Main Authors: LIU, Shu, KOCH, Kevin, ZHOU, Zimu, FOLL, Simon, HE, Xiaoxi, MENKE, Tina, FLEISCH, Elgar, WORTMANN, Felix
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2021
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/6238
https://ink.library.smu.edu.sg/context/sis_research/article/7241/viewcontent/ubicomp21_liu.pdf
Institution: Singapore Management University
Description
Summary: An empathetic car that is capable of reading the driver's emotions has been envisioned by many car manufacturers. Emotion inference enables in-vehicle applications to improve driver comfort, well-being, and safety. Available emotion inference approaches use physiological, facial, and speech-related data to infer emotions during driving trips. However, existing solutions have two major limitations: Relying on sensors that are not built into the vehicle restricts emotion inference to drivers who use the corresponding devices (e.g., smartwatches). Relying on modalities such as facial expressions and speech raises privacy concerns. By contrast, researchers in mobile health have been able to infer affective states (e.g., emotions) from behavioral and contextual patterns decoded from available sensor streams, e.g., those obtained by smartphones. We transfer this rationale to an in-vehicle setting by analyzing the feasibility of inferring driver emotions through passive interpretation of the data streams of the controller area network (CAN bus) and the traffic context (inferred from the front-view camera). Hence, our approach does not rely on particularly privacy-sensitive data streams such as the driver's facial video or speech, but builds on existing CAN-bus data and traffic information, which is available in current high-end and future vehicles. To assess our approach, we conducted a four-month field study on public roads covering a variety of uncontrolled daily driving activities; our results were thus generated beyond the confines of a laboratory environment. Ultimately, our proposed approach recognises drivers' emotions accurately and achieves performance comparable to the state-of-the-art baseline method based on medical-grade physiological sensors.
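
The abstract does not specify the feature pipeline, so the following is only a minimal sketch of the general idea: summarise windows of CAN-bus signals into behavioural features, append a traffic-context feature derived from the front-view camera, and feed both to an off-the-shelf classifier. The signal names (speed, steering angle, brake pressure, throttle), the window length, the density score, and the random-forest model are illustrative assumptions, not the paper's actual method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical CAN-bus log: one row per sample, columns =
# [speed_kmh, steering_angle_deg, brake_pressure_bar, throttle_pct].
WINDOW = 200  # samples per window (e.g., 20 s at 10 Hz) -- assumed, not from the paper

def window_features(signals: np.ndarray) -> np.ndarray:
    """Summarise one window of raw CAN signals into behavioural features:
    per-signal mean, standard deviation, and max absolute first difference."""
    diffs = np.abs(np.diff(signals, axis=0))
    return np.concatenate([
        signals.mean(axis=0),
        signals.std(axis=0),
        diffs.max(axis=0),  # abrupt steering or braking shows up here
    ])

def featurise_trip(trip: np.ndarray, traffic_density: np.ndarray) -> np.ndarray:
    """Concatenate CAN-derived features with a traffic-context feature
    (here, a single per-window density score from the front-view camera)."""
    n = trip.shape[0] // WINDOW
    feats = [window_features(trip[i * WINDOW:(i + 1) * WINDOW]) for i in range(n)]
    return np.hstack([np.vstack(feats), traffic_density[:n, None]])

# Toy data standing in for one field-study trip with per-window emotion labels.
rng = np.random.default_rng(0)
trip = rng.normal(size=(WINDOW * 50, 4))          # 50 windows of 4 CAN signals
density = rng.uniform(0, 1, size=50)              # per-window traffic density
labels = rng.integers(0, 2, size=50)              # e.g., 0 = neutral, 1 = negative

X = featurise_trip(trip, density)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())
```

The design point this sketch illustrates is the one the abstract argues for: every input is already available in the vehicle (CAN bus) or derivable from a forward-facing camera, so no privacy-sensitive driver-facing video, speech, or wearable sensor is required.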