A self-organizing neural model for multimedia information fusion


Bibliographic Details
Main Authors: NGUYEN, Luong-Dong, WOON, Kia-Yan, TAN, Ah-hwee
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2008
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/6796
https://ink.library.smu.edu.sg/context/sis_research/article/7799/viewcontent/Fusion_08.pdf
Institution: Singapore Management University
Description
Summary: This paper presents a self-organizing network model for the fusion of multimedia information. By synchronizing the encoding of information across multiple media channels, the neural model known as fusion Adaptive Resonance Theory (fusion ART) generates clusters that encode the associative mappings across multimedia information in a real-time and continuous manner. In addition, by incorporating a semantic category channel, fusion ART further enables multimedia information to be fused into predefined themes or semantic categories. We illustrate fusion ART's functionalities through experiments on two multimedia data sets in the terrorist domain and show the viability of the proposed approach.
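
The summary describes fusion ART's multi-channel clustering only at a high level. The sketch below is a minimal, hypothetical illustration of how synchronized encoding across media channels might look, assuming the standard ART choice, vigilance, and fuzzy-AND learning rules from the ART literature; the FusionART class, the parameters (alpha, beta, gamma, rho), and the toy two-channel example are illustrative assumptions, not the paper's implementation.

    import numpy as np

    class FusionART:
        """Hypothetical sketch of a fusion ART-style multi-channel clusterer."""

        def __init__(self, channel_dims, alpha=0.01, beta=1.0, gamma=None, rho=None):
            self.dims = channel_dims                              # input dimension of each media channel
            self.alpha = alpha                                    # choice parameter (assumed shared across channels)
            self.beta = beta                                      # learning rate
            self.gamma = gamma or [1.0 / len(channel_dims)] * len(channel_dims)  # channel contribution weights
            self.rho = rho or [0.7] * len(channel_dims)           # per-channel vigilance
            self.weights = []                                     # one template per channel for each cluster

        @staticmethod
        def _complement(x):
            x = np.asarray(x, dtype=float)
            return np.concatenate([x, 1.0 - x])                   # complement coding bounds total activity

        def present(self, pattern):
            """Present one multi-channel pattern (a list of vectors, one per channel)."""
            xs = [self._complement(p) for p in pattern]
            # Choice: score every committed cluster by the weighted fuzzy match over all channels.
            scores = [sum(g * np.minimum(x, wk).sum() / (self.alpha + wk.sum())
                          for g, x, wk in zip(self.gamma, xs, w))
                      for w in self.weights]
            # Template matching: visit clusters from best to worst choice value.
            for j in np.argsort(scores)[::-1]:
                w = self.weights[j]
                if all(np.minimum(x, wk).sum() / x.sum() >= r
                       for x, wk, r in zip(xs, w, self.rho)):
                    # Resonance: update each channel template with fuzzy-AND learning.
                    self.weights[j] = [(1 - self.beta) * wk + self.beta * np.minimum(x, wk)
                                       for x, wk in zip(xs, w)]
                    return j
            # Mismatch on all clusters: commit a new cluster encoding this cross-channel association.
            self.weights.append(list(xs))
            return len(self.weights) - 1

    # Example: fuse a 3-dimensional "text" channel with a 2-dimensional "image" channel.
    net = FusionART(channel_dims=[3, 2])
    cluster = net.present([[0.9, 0.1, 0.0], [0.2, 0.8]])
    print("pattern assigned to cluster", cluster)

A semantic category channel, as mentioned in the summary, could be treated as one more channel in this scheme, so that clusters align with predefined themes when that channel carries category labels.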