COMPARING EXPLANATIONS OF SENTIMENT ANALYSIS SYSTEM USING INDONESIAN TEXTS


Bibliographic Details
Main Author: Galih Raihan Ramadhan, Muhammad
Format: Final Project
Language: Indonesian
Online Access:https://digilib.itb.ac.id/gdl/view/86425
Institution: Institut Teknologi Bandung
Description
Summary: An explanation of an artificial intelligence system is an artifact whose main purpose is to describe the workflow or method of that system. Such explanations are usually produced by what is called an explanation system. The main purposes of these systems are to provide knowledge to their users and to improve users' trust in the explained system. Sentiment analysis, in turn, is a field of artificial intelligence that studies how an AI system can predict the sentiment of a text; it is one of the most widely applied areas of AI today. In this paper, two explanation methods are compared and used to explain a sentiment analysis process. The first is ABSA (Aspect-Based Sentiment Analysis), a sentiment analysis task that predicts both the sentiment of a text and the target aspects of that sentiment. The second is LIME (Local Interpretable Model-agnostic Explanations), a method from the field of XAI (Explainable Artificial Intelligence) designed to explain many types of AI systems. For sentiment analysis, LIME produces an explanation by assigning a value to each word in a sentence. Experiments are conducted using ABSA and LIME to explain a sentiment analysis process on an Indonesian text dataset. Both methods are evaluated on three metrics: interpretability, completeness, and consistency. A survey gauges the interpretability and completeness of both methods, while consistency is gauged by comparing each explanation with another explanation produced by the same method. The results show that both methods provide explanations that users can understand. In the survey, ABSA performed better for long explanations, while LIME performed similarly well for both short and long explanations.
As for consistency, ABSA showed a better result compared to LIME, which may produce inconsistent explanations.
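The word-level attribution the abstract describes for LIME can be sketched from first principles: sample perturbed versions of a sentence with random words removed, query a black-box classifier on each, and fit a locally weighted linear surrogate whose coefficients become per-word values. The sketch below is illustrative only and is not the thesis's implementation; the keyword-based classifier, the Indonesian example sentence, and all function names are assumptions for demonstration.

```python
import numpy as np

# Hypothetical toy sentiment model (an assumption, not the thesis's model):
# returns the probability that an Indonesian text is positive, based on
# a small keyword lexicon with add-one smoothing.
POSITIVE = {"bagus", "enak", "ramah"}    # good, tasty, friendly
NEGATIVE = {"buruk", "lambat", "mahal"}  # bad, slow, expensive

def predict_positive(text):
    words = set(text.split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    return (1 + pos) / (2 + pos + neg)

def lime_word_weights(text, predict, n_samples=1000, seed=0):
    """LIME-style per-word weights: sample random word masks, query the
    black-box model, and fit a locally weighted linear surrogate."""
    rng = np.random.default_rng(seed)
    words = text.split()
    # Binary interpretable representation: 1 = word kept, 0 = word removed.
    Z = rng.integers(0, 2, size=(n_samples, len(words)))
    Z[0] = 1  # keep the unperturbed sentence in the sample
    preds = np.array(
        [predict(" ".join(w for w, keep in zip(words, z) if keep)) for z in Z]
    )
    # Exponential proximity kernel: perturbations closer to the original
    # sentence get more weight when fitting the surrogate.
    dist = 1.0 - Z.mean(axis=1)
    kernel = np.exp(-(dist ** 2) / 0.25)
    # Weighted least squares for an intercept plus one coefficient per word.
    X = np.hstack([np.ones((n_samples, 1)), Z])
    sw = np.sqrt(kernel)[:, None]
    beta, *_ = np.linalg.lstsq(X * sw, preds * sw[:, 0], rcond=None)
    return dict(zip(words, beta[1:]))

# Example: "the food is tasty but the service is slow".
weights = lime_word_weights("makanan enak tapi pelayanan lambat",
                            predict_positive)
```

Here the coefficient for "enak" comes out positive and the one for "lambat" negative, mirroring how LIME attributes a model's prediction to individual words. The actual LIME library additionally selects a top-K feature subset and fits a ridge-regularized surrogate; those details are omitted in this sketch.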