Reviewing multimodal deep learning techniques for user-generated content analysis
Multimodal review analysis has become an active research topic as product reviews have evolved from text-only posts into combinations of text and images. Helpful reviews are essential to e-commerce platforms: a review that conveys the right information about a product helps buyers make informed choices. Among existing review analysis tasks, evaluating review helpfulness has therefore become a predominant one.

This project explores different algorithms for multimodal review helpfulness prediction (MRHP), which aims to assess review helpfulness from both the text and visual modalities. The algorithms are evaluated on two benchmark multimodal datasets. The experimental results support the hypothesis that multimodal reviews not only provide more information about a product but are also better suited to gauging a product's utility, serving as a stronger signal for product marketing.
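For readers unfamiliar with the task, the sketch below illustrates one common framing of MRHP: encode the review text and its images separately, fuse the two representations, and predict a helpfulness score. This is a minimal, self-contained illustration only; the encoders, dimensions, and scoring head are hypothetical and do not reflect the specific models or benchmark datasets evaluated in this project.

```python
# Illustrative late-fusion model for multimodal review helpfulness prediction (MRHP).
# NOT the architecture studied in this project; it only sketches the general idea of
# scoring a review's helpfulness from its text and attached images.
import torch
import torch.nn as nn


class TextEncoder(nn.Module):
    """Averages token embeddings into a fixed-size review representation."""
    def __init__(self, vocab_size: int = 30000, dim: int = 128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim, mode="mean")

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.embed(token_ids)          # (batch, dim)


class ImageEncoder(nn.Module):
    """A tiny CNN standing in for a pretrained vision backbone."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, dim)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.conv(images).flatten(1)   # (batch, 64)
        return self.proj(feats)                # (batch, dim)


class HelpfulnessScorer(nn.Module):
    """Concatenates text and image features and predicts a helpfulness score."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.text_enc = TextEncoder(dim=dim)
        self.image_enc = ImageEncoder(dim=dim)
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, token_ids: torch.Tensor, images: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.text_enc(token_ids), self.image_enc(images)], dim=-1)
        return self.head(fused).squeeze(-1)    # (batch,) helpfulness scores


# Toy usage: 4 reviews, each with 20 token ids and one 64x64 RGB image.
model = HelpfulnessScorer()
scores = model(torch.randint(0, 30000, (4, 20)), torch.randn(4, 3, 64, 64))
print(scores.shape)  # torch.Size([4])
```

In the MRHP literature, such a scorer is typically trained with a ranking objective so that, for a given product, more helpful reviews are ranked above less helpful ones.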
| Field | Value |
|---|---|
| Main Author | Sachin, Surawar Sanath |
| Other Authors | Luu Anh Tuan (anhtuan.luu@ntu.edu.sg) |
| Format | Final Year Project (FYP) |
| Language | English |
| Published | Nanyang Technological University, 2023 |
| Subjects | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence |
| Online Access | https://hdl.handle.net/10356/166260 |
| Institution | Nanyang Technological University |
| School | School of Computer Science and Engineering |
| Degree | Bachelor of Engineering (Computer Science) |
| Project Code | SCSE22-0411 |
| Citation | Sachin, S. S. (2023). Reviewing multimodal deep learning techniques for user-generated content analysis. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/166260 |