Reviewing multimodal deep learning techniques for user-generated content analysis
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Online Access: https://hdl.handle.net/10356/166260
Institution: Nanyang Technological University
Summary: Multimodal review analysis has become an active research topic as reviews have evolved from text-only content into combined text-and-image posts. Because informative reviews are essential to any product, e-commerce platforms depend on helpful reviews that convey accurate product information and enable buyers to make sound purchasing decisions. Among existing review-analysis tasks, evaluating review helpfulness has therefore become a predominant one.
This project explores algorithms for multimodal review helpfulness prediction (MRHP), which assesses review helpfulness from both the textual and visual modalities. Two benchmark multimodal datasets are used to evaluate the algorithms. Experimental results support the hypothesis that multimodal reviews not only provide more information about a product but are also better suited to gauging a product's utility, serving as a stronger metric for product marketing.