An empirical evaluation of interpretable methods for malware analysis

Bibliographic Details
Main Author: Ang, Alvis Jie Kai
Other Authors: Liu Yang
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2021
Online Access: https://hdl.handle.net/10356/148598
Institution: Nanyang Technological University
Summary: With the upsurge of cybersecurity attacks in recent years, there is a demand for more complex and accurate malware classifiers. Before such complex models can be trusted and deployed in the wild, their predictions must be explainable. However, complex black-box models are difficult to explain accurately with existing explanation techniques, as different techniques perform better under different conditions. This report empirically evaluates the performance of the two most popular explanation techniques, LIME and SHAP, on an XGBoost classifier trained to classify malware, using unigram and bigram features. To evaluate the performance of LIME and SHAP on the XGBoost model, we investigate the effect of each technique's top-ranked feature by measuring the malware class probability before and after eliminating that feature. While this metric is simple, the consistency of the results shows that it is nevertheless effective. Our results also show that SHAP consistently outperforms LIME on our model. Further investigation reveals that the impact of the features LIME ranks highly fluctuates greatly, ranging from features that strongly affect the class probabilities to features with little or no effect on the XGBoost classifier used. Overall, the proposed metric enables the evaluation of various explanation techniques on complex black-box models.
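
Below is a minimal sketch of the ablation metric described in the abstract: rank features for one sample with SHAP and LIME, zero out each technique's top-ranked feature, and compare the malware-class probability before and after. The synthetic data, model settings, and the prob_drop helper are hypothetical stand-ins for illustration (in the report the features are unigram/bigram counts extracted from malware samples); this is not the author's actual code.

    import numpy as np
    import shap
    import xgboost as xgb
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import make_classification

    # Synthetic stand-in for the report's unigram/bigram count features.
    X, y = make_classification(n_samples=500, n_features=50, random_state=0)
    model = xgb.XGBClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)

    def prob_drop(model, x, feature_idx):
        """Malware-class probability before vs. after zeroing one feature."""
        before = model.predict_proba(x)[0, 1]
        ablated = x.copy()
        ablated[0, feature_idx] = 0.0  # "eliminate" the n-gram: count of zero
        after = model.predict_proba(ablated)[0, 1]
        return before, after

    x = X[0:1].copy()  # one sample to explain

    # SHAP: rank features by absolute SHAP value for this sample.
    shap_vals = shap.TreeExplainer(model).shap_values(x)
    shap_top = int(np.argmax(np.abs(shap_vals[0])))

    # LIME: rank features by absolute weight in the local surrogate model.
    lime_exp = LimeTabularExplainer(X, mode="classification").explain_instance(
        x[0], model.predict_proba, num_features=1)
    lime_top = lime_exp.as_map()[1][0][0]

    for name, idx in [("SHAP", shap_top), ("LIME", lime_top)]:
        before, after = prob_drop(model, x, idx)
        print(f"{name}: top feature {idx}, P(malware) {before:.3f} -> {after:.3f}")

A large drop in P(malware) after ablation suggests the explanation technique identified a genuinely influential feature; per the report's findings, one would expect SHAP's top feature to produce a consistently larger drop than LIME's.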