Autonomous soundscape augmentation with multimodal fusion of visual and participant-linked inputs
Autonomous soundscape augmentation systems typically use trained models to pick optimal maskers to effect a desired perceptual change. While acoustic information is paramount to such systems, contextual information, including participant demographics and the visual environment, also influences acous...
Main Authors: Ooi, Kenneth; Watcharasupat, Karn; Lam, Bhan; Ong, Zhen-Ting; Gan, Woon-Seng
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/165017
Institution: | Nanyang Technological University |
Similar Items
- ARAUSv2: an expanded dataset and multimodal models of affective responses to augmented urban soundscapes
  by: Ooi, Kenneth, et al. Published: (2023)
- Sentic blending: Scalable multimodal fusion for the continuous interpretation of semantics and sentics
  by: Cambria, E., et al. Published: (2014)
- Effect of masker selection schemes on the perceived affective quality of soundscapes: a pilot study
  by: Ong, Zhen-Ting, et al. Published: (2023)
- Effects of adding natural sounds to urban noises on the perceived loudness of noise and soundscape quality
  by: Hong, Joo Young, et al. Published: (2021)
- Multimodal fusion for multimedia analysis: A survey
  by: Atrey, P.K., et al. Published: (2013)