Ensemble application of ELM and GPU for real-time multimodal sentiment analysis
The enormous number of videos posted every day on multimedia websites such as Facebook and YouTube makes the Internet an infinite source of information. Collecting and processing this information, however, is very challenging, as it involves dealing with a huge amount of data that changes at high speed. To this end, we leverage the processing speed of the extreme learning machine (ELM) and the graphics processing unit (GPU) to overcome the limitations of standard learning algorithms and the central processing unit (CPU) and, hence, perform real-time multimodal sentiment analysis, i.e., harvesting sentiments from web videos by taking into account audio, visual, and textual modalities as sources of information. For sentiment classification, we leverage sentic memes, i.e., basic units of sentiment whose combinations can potentially describe the full range of emotional experiences rooted in any of us, including different degrees of polarity. We use both feature-level and decision-level fusion to combine the information extracted from the different modalities. On a sentiment-annotated dataset generated from YouTube video reviews, the proposed multimodal system achieves an accuracy of 78%. In terms of processing speed, our method shows improvements of several orders of magnitude for feature extraction compared to CPU-based counterparts.
Saved in:
Main Authors: | Tran, Ha-Nguyen; Cambria, Erik |
---|---|
Other Authors: | School of Computer Science and Engineering |
Format: | Article |
Language: | English |
Published: | 2020 |
Subjects: | Engineering::Computer science and engineering; Multimodal Sentiment Analysis; Opinion Mining |
Online Access: | https://hdl.handle.net/10356/141742 |
Institution: | Nanyang Technological University |
id |
sg-ntu-dr.10356-141742 |
---|---|
record_format |
dspace |
spelling |
Deposited 2020-06-10T06:15:25Z. Tran, H.-N., & Cambria, E. (2018). Ensemble application of ELM and GPU for real-time multimodal sentiment analysis. Memetic Computing, 10(1), 3-13. doi:10.1007/s12293-017-0228-3. ISSN 1865-9284. https://hdl.handle.net/10356/141742. Scopus: 2-s2.0-85016172277. © 2017 Springer-Verlag Berlin Heidelberg. All rights reserved. |
institution |
Nanyang Technological University |
building |
NTU Library |
country |
Singapore |
collection |
DR-NTU |
language |
English |
topic |
Engineering::Computer science and engineering; Multimodal Sentiment Analysis; Opinion Mining |
description |
The enormous number of videos posted every day on multimedia websites such as Facebook and YouTube makes the Internet an infinite source of information. Collecting and processing this information, however, is very challenging, as it involves dealing with a huge amount of data that changes at high speed. To this end, we leverage the processing speed of the extreme learning machine (ELM) and the graphics processing unit (GPU) to overcome the limitations of standard learning algorithms and the central processing unit (CPU) and, hence, perform real-time multimodal sentiment analysis, i.e., harvesting sentiments from web videos by taking into account audio, visual, and textual modalities as sources of information. For sentiment classification, we leverage sentic memes, i.e., basic units of sentiment whose combinations can potentially describe the full range of emotional experiences rooted in any of us, including different degrees of polarity. We use both feature-level and decision-level fusion to combine the information extracted from the different modalities. On a sentiment-annotated dataset generated from YouTube video reviews, the proposed multimodal system achieves an accuracy of 78%. In terms of processing speed, our method shows improvements of several orders of magnitude for feature extraction compared to CPU-based counterparts. |
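The speed advantage described above comes from the ELM's closed-form training: hidden-layer weights are drawn at random and never trained, so only the output weights are solved, in one shot, via a pseudoinverse. The sketch below illustrates this idea together with decision-level fusion (averaging per-modality scores). It is a minimal illustration assuming NumPy; the `ELM` class and `fuse_decisions` helper are hypothetical names for exposition, not the authors' implementation, and a GPU version would simply run the same matrix operations on a GPU array library.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Minimal extreme learning machine (illustrative sketch).

    The hidden layer is a fixed random projection; only the output
    weights `beta` are fitted, via the Moore-Penrose pseudoinverse.
    """

    def __init__(self, n_hidden=100):
        self.n_hidden = n_hidden

    def fit(self, X, Y):
        n_features = X.shape[1]
        # Random input weights and biases: generated once, never trained.
        self.W = rng.standard_normal((n_features, self.n_hidden))
        self.b = rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # hidden-layer activations
        # Closed-form least-squares solution for the output weights.
        self.beta = np.linalg.pinv(H) @ Y
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return H @ self.beta

def fuse_decisions(scores):
    """Decision-level fusion sketch: average the per-modality score vectors."""
    return np.mean(scores, axis=0)

# Toy usage: binary classification on synthetic features.
X = rng.standard_normal((40, 6))
labels = (X.sum(axis=1) > 0).astype(int)
Y = np.eye(2)[labels]                         # one-hot targets
preds = ELM(n_hidden=100).fit(X, Y).predict(X).argmax(axis=1)
```

Because training reduces to one matrix product and one pseudoinverse, there is no iterative gradient descent at all, which is what makes the ELM amenable to real-time use and to GPU acceleration of the feature-extraction and training pipeline.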
author2 |
School of Computer Science and Engineering |
author_facet |
School of Computer Science and Engineering; Tran, Ha-Nguyen; Cambria, Erik |
format |
Article |
author |
Tran, Ha-Nguyen; Cambria, Erik |
author_sort |
Tran, Ha-Nguyen |
title |
Ensemble application of ELM and GPU for real-time multimodal sentiment analysis |
title_sort |
ensemble application of elm and gpu for real-time multimodal sentiment analysis |
publishDate |
2020 |
url |
https://hdl.handle.net/10356/141742 |
_version_ |
1681058580895105024 |