Mixing sound sources in 3-D for headphones
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/159284
Institution: Nanyang Technological University
Summary: There has been an extensive body of research into binaural hearing and the spatialisation of virtual sound sources. The extent to which we perceive externalised sound images depends strongly on how closely the rendered sound matches real life. This project seeks to understand the fundamental mechanisms behind localisation. The variables that engender this phenomenon include the Head-Related Transfer Function (HRTF), spectral cues, head movement, and environmental context simulation. Most studies seek to reduce localisation errors such as front-back reversals and non-externalisation while making their methods more generalisable and accessible. Spatial audio is nonetheless found disproportionately in commercial offerings such as Dolby Atmos. This project approaches 3-D sound through an educational program catered to those interested in audio, specifically binaural audio. It is implemented as a step-through, cell-by-cell Python program in Jupyter Notebooks, using HRTF datasets from MIT KEMAR and IRCAM Listen together with convolution functions from the NumPy and SciPy libraries. Helpful comments, visual aids, and sound playback are available for users to peruse.
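As a rough illustration of the convolution step the abstract describes, the sketch below renders a mono signal binaurally with NumPy and SciPy. A measured HRIR pair (e.g. from the MIT KEMAR dataset) is replaced here by a synthetic stand-in built from an interaural time and level difference; the delay and gain values are illustrative assumptions, not code from the project itself.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100  # sample rate in Hz

# Mono test source: 0.5 s of white noise (stands in for a loaded audio file).
rng = np.random.default_rng(0)
mono = rng.standard_normal(fs // 2)

# Hypothetical stand-in for a measured HRIR pair: a source to the listener's
# left arrives at the right ear ~0.6 ms later and attenuated. Real HRIRs also
# encode the spectral (pinna) cues the abstract mentions.
hrir_len = 128
hrir_l = np.zeros(hrir_len)
hrir_l[0] = 1.0        # near (left) ear: no extra delay, full level
hrir_r = np.zeros(hrir_len)
hrir_r[26] = 0.6       # far (right) ear: 26 samples ≈ 0.6 ms delay, attenuated

# Binaural rendering: convolve the mono source with each ear's impulse response.
left = fftconvolve(mono, hrir_l)
right = fftconvolve(mono, hrir_r)
binaural = np.stack([left, right], axis=1)  # (n_samples, 2) stereo buffer
```

Played over headphones (e.g. via `IPython.display.Audio`), the louder, earlier left channel places the source to the listener's left; swapping in measured HRIRs for many directions is what lets the project position sources anywhere in 3-D.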