Fake review detection by fusing parameter efficient adapters in pre-trained language model

Peer-to-peer reviews are important to businesses. Review ratings affect the reputation of a business, which in turn drives its growth. However, fake reviews increasingly plague the internet, leading to poor-quality purchases or even scams. This is especially common on services and marke...


Bibliographic Details
Main Author: Ho, See Cheng
Other Authors: Lihui Chen
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/173129
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-173129
record_format dspace
spelling sg-ntu-dr.10356-1731292024-02-01T09:53:45Z Fake review detection by fusing parameter efficient adapters in pre-trained language model Ho, See Cheng Lihui Chen School of Electrical and Electronic Engineering Shopee Pte Ltd ELHCHEN@ntu.edu.sg Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence Peer-to-peer reviews are important to businesses. Review ratings affect the reputation of a business, which in turn drives its growth. However, fake reviews increasingly plague the internet, leading to poor-quality purchases or even scams. This is especially common on services and marketplace platforms such as Yelp, TripAdvisor and Amazon, where customers rely heavily on reviews before paying for items or services from businesses on those platforms. Therefore, developing a system to detect fake reviews written by bad actors is of utmost importance to protect the integrity of both platforms and businesses. Currently, many deep learning models utilize large pre-trained language models to address the problem by analyzing text data. However, the identifiable patterns of fake reviews tend to change rapidly, so these models must be updated frequently. Large pre-trained language models usually have a huge number of parameters, which makes periodic retraining challenging due to the large compute required and the problem of catastrophic interference. To address this problem, this thesis utilizes adapters: small sets of parameters inserted into a transformer language model. A set of adapters is fine-tuned to solve the fake review tasks, taking advantage of their compact, modular, and composable nature. This allows the pre-trained model to retain its knowledge and reduces the memory storage required to store knowledge of various downstream tasks.
In addition, multiple adapters can be fused together using the AdapterFusion methodology, opening additional avenues to introduce useful external knowledge into the model. In our experiments, we observe that using adapters achieves performance comparable to a fully fine-tuned language model for fake review detection. Additionally, by fusing adapters with external knowledge, such as contextualized emotion and sentiment knowledge, we improve the model further while reducing storage utilization and improving parameter efficiency. The results highlight the challenge of fake review detection and the need to explore solutions for efficiency instead of focusing on ever-deeper models. Master's degree 2024-01-17T01:00:47Z 2024-01-17T01:00:47Z 2023 Thesis-Master by Research Ho, S. C. (2023). Fake review detection by fusing parameter efficient adapters in pre-trained language model. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/173129 https://hdl.handle.net/10356/173129 10.32657/10356/173129 en This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). application/pdf Nanyang Technological University
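The abstract above describes bottleneck adapters: small trainable modules inserted into a frozen pre-trained transformer so that only a fraction of the parameters is updated per task. As a minimal illustrative sketch only (this is not the thesis's code; the dimensions and random weights are made up, and a real adapter sits inside every transformer layer), the core computation of a Houlsby-style bottleneck adapter looks like:

```python
import numpy as np

def bottleneck_adapter(h, W_down, W_up):
    """Down-project the hidden state, apply a nonlinearity,
    up-project back, and add a residual connection."""
    z = np.maximum(0.0, h @ W_down)   # down-projection + ReLU
    return h + z @ W_up               # up-projection + residual

rng = np.random.default_rng(0)
d_model, d_bottleneck = 8, 2          # bottleneck width << model width
h = rng.standard_normal(d_model)
W_down = rng.standard_normal((d_model, d_bottleneck)) * 0.01
W_up = rng.standard_normal((d_bottleneck, d_model)) * 0.01
out = bottleneck_adapter(h, W_down, W_up)
print(out.shape)  # (8,)
```

Only W_down and W_up are trained (2 * d_model * d_bottleneck parameters here, versus d_model * d_model for one full dense layer), which is why a separate adapter per downstream task stays cheap to store and retrain.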
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
spellingShingle Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Ho, See Cheng
Fake review detection by fusing parameter efficient adapters in pre-trained language model
description Peer-to-peer reviews are important to businesses. Review ratings affect the reputation of a business, which in turn drives its growth. However, fake reviews increasingly plague the internet, leading to poor-quality purchases or even scams. This is especially common on services and marketplace platforms such as Yelp, TripAdvisor and Amazon, where customers rely heavily on reviews before paying for items or services from businesses on those platforms. Therefore, developing a system to detect fake reviews written by bad actors is of utmost importance to protect the integrity of both platforms and businesses. Currently, many deep learning models utilize large pre-trained language models to address the problem by analyzing text data. However, the identifiable patterns of fake reviews tend to change rapidly, so these models must be updated frequently. Large pre-trained language models usually have a huge number of parameters, which makes periodic retraining challenging due to the large compute required and the problem of catastrophic interference. To address this problem, this thesis utilizes adapters: small sets of parameters inserted into a transformer language model. A set of adapters is fine-tuned to solve the fake review tasks, taking advantage of their compact, modular, and composable nature. This allows the pre-trained model to retain its knowledge and reduces the memory storage required to store knowledge of various downstream tasks. In addition, multiple adapters can be fused together using the AdapterFusion methodology, opening additional avenues to introduce useful external knowledge into the model. In our experiments, we observe that using adapters achieves performance comparable to a fully fine-tuned language model for fake review detection.
Additionally, by fusing adapters with external knowledge, such as contextualized emotion and sentiment knowledge, we improve the model further while reducing storage utilization and improving parameter efficiency. The results highlight the challenge of fake review detection and the need to explore solutions for efficiency instead of focusing on ever-deeper models.
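The AdapterFusion step described above combines the outputs of several task adapters (e.g. a fake-review adapter plus sentiment- and emotion-knowledge adapters) via an attention mechanism that uses the layer input as the query. A simplified single-vector sketch, with hypothetical random weights standing in for the learned fusion parameters:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adapter_fusion(h, adapter_outputs, W_q, W_k, W_v):
    """AdapterFusion-style combination: attend over the outputs of
    several task adapters, with the layer input h as the query."""
    q = h @ W_q                                    # query from layer input
    keys = np.stack([o @ W_k for o in adapter_outputs])
    vals = np.stack([o @ W_v for o in adapter_outputs])
    attn = softmax(keys @ q)                       # one weight per adapter
    return attn @ vals                             # weighted mix of adapters

rng = np.random.default_rng(1)
d = 8
h = rng.standard_normal(d)
# three frozen task adapters' outputs for the same hidden state
adapter_outputs = [rng.standard_normal(d) for _ in range(3)]
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))
fused = adapter_fusion(h, adapter_outputs, W_q, W_k, W_v)
print(fused.shape)  # (8,)
```

Because the attention weights are learned while the individual adapters stay frozen, new knowledge sources can be added by training only the small fusion layer, consistent with the parameter-efficiency argument in the abstract.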
author2 Lihui Chen
author_facet Lihui Chen
Ho, See Cheng
format Thesis-Master by Research
author Ho, See Cheng
author_sort Ho, See Cheng
title Fake review detection by fusing parameter efficient adapters in pre-trained language model
title_short Fake review detection by fusing parameter efficient adapters in pre-trained language model
title_full Fake review detection by fusing parameter efficient adapters in pre-trained language model
title_fullStr Fake review detection by fusing parameter efficient adapters in pre-trained language model
title_full_unstemmed Fake review detection by fusing parameter efficient adapters in pre-trained language model
title_sort fake review detection by fusing parameter efficient adapters in pre-trained language model
publisher Nanyang Technological University
publishDate 2024
url https://hdl.handle.net/10356/173129
_version_ 1789968690065702912