Auditory to Visual Cross-Modal Adaptation for Emotion: Psychophysical and Neural Correlates

Adaptation is fundamental in sensory processing and has been studied extensively within the same sensory modality. However, little is known about adaptation across sensory modalities, especially in the context of high-level processing, such as the perception of emotion. Previous studies have shown that prolonged exposure to a face exhibiting one emotion, such as happiness, leads to contrastive biases in the perception of subsequently presented faces toward the opposite emotion, such as sadness. Such work has shown the importance of adaptation in calibrating face perception based on prior visual exposure. In the present study, we showed for the first time that emotion-laden sounds, like laughter, adapt the visual perception of emotional faces, that is, subjects more frequently perceived faces as sad after listening to a happy sound. Furthermore, via electroencephalography recordings and event-related potential analysis, we showed that there was a neural correlate underlying the perceptual bias: There was an attenuated response occurring at ∼400 ms to happy test faces and a quickened response to sad test faces, after exposure to a happy sound. Our results provide the first direct evidence for a behavioral cross-modal adaptation effect on the perception of facial emotion, and its neural correlate.


Bibliographic Details
Main Authors: Wang, Xiaodong, Guo, Xiaotao, Chen, Lin, Liu, Yijun, Goldberg, Michael E., Xu, Hong
Other Authors: School of Humanities and Social Sciences
Format: Article
Language:English
Published: 2017
Subjects: Aftereffect; Laughter
Online Access:https://hdl.handle.net/10356/84755
http://hdl.handle.net/10220/43630
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-84755
record_format: dspace
Type: Journal Article (Accepted version), 40 p., application/pdf
Journal: Cerebral Cortex
Citation: Wang, X. D., Guo, X., Chen, L., Liu, Y., Goldberg, M. E., & Xu, H. (2017). Auditory to Visual Cross-Modal Adaptation for Emotion: Psychophysical and Neural Correlates. Cerebral Cortex, 27(2), 1337-1346.
ISSN: 1047-3211
DOI: 10.1093/cercor/bhv321
Funding: MOE (Min. of Education, S'pore)
Date issued: 2016
Date deposited: 2017-08-25; last modified: 2019-12-06
Rights: © 2016 The author (published by Oxford University Press). This is the author-created version of a work that has been peer reviewed and accepted for publication in Cerebral Cortex, published by Oxford University Press on behalf of the author. It incorporates the referees' comments, but changes resulting from the publishing process, such as copyediting and structural formatting, may not be reflected in this document. The published version is available at http://dx.doi.org/10.1093/cercor/bhv321.
Building: NTU Library
Country: Singapore
Collection: DR-NTU
topic Aftereffect
Laughter
description Adaptation is fundamental in sensory processing and has been studied extensively within the same sensory modality. However, little is known about adaptation across sensory modalities, especially in the context of high-level processing, such as the perception of emotion. Previous studies have shown that prolonged exposure to a face exhibiting one emotion, such as happiness, leads to contrastive biases in the perception of subsequently presented faces toward the opposite emotion, such as sadness. Such work has shown the importance of adaptation in calibrating face perception based on prior visual exposure. In the present study, we showed for the first time that emotion-laden sounds, like laughter, adapt the visual perception of emotional faces, that is, subjects more frequently perceived faces as sad after listening to a happy sound. Furthermore, via electroencephalography recordings and event-related potential analysis, we showed that there was a neural correlate underlying the perceptual bias: There was an attenuated response occurring at ∼ 400 ms to happy test faces and a quickened response to sad test faces, after exposure to a happy sound. Our results provide the first direct evidence for a behavioral cross-modal adaptation effect on the perception of facial emotion, and its neural correlate.