An empirical evaluation of interpretable methods for Malware analysis

Bibliographic Details
Main Author: Ang, Alvis Jie Kai
Other Authors: Liu Yang
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Online Access: https://hdl.handle.net/10356/148598
Institution: Nanyang Technological University
Description
Abstract: With the upsurge of cybersecurity attacks in recent years, there is a demand for more complex and accurate Malware classifiers. For these complex models to be trusted and deployed in the wild, their results must be explainable. However, complex black-box models are difficult to explain accurately with existing explanation techniques, as different techniques may perform better under different conditions. This report empirically evaluates the performance of the two most popular explanation techniques, LIME and SHAP, on an XGBoost classifier trained to classify Malware. The XGBoost model uses unigrams and bigrams as training features. To evaluate the performance of LIME and SHAP on the XGBoost model, we investigate the effects of the top-ranked features from both explanation techniques by measuring the Malware class probability before and after eliminating the top-ranked feature. While this metric is a simple one, the consistency of the results shows that it is nevertheless effective. Additionally, our results show that SHAP consistently performs better than LIME on our model. Further investigation reveals that the impact of features ranked highly by LIME fluctuates greatly, from features that strongly affect the class probabilities of the XGBoost classifier to features with little or no effect. Overall, using the proposed metric, we can evaluate various explanation techniques on complex black-box models.
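
To make the before/after probability metric concrete, the following is a minimal Python sketch of how such a check could be wired up with xgboost, shap, and lime. The synthetic n-gram matrix, the binary labels, and the choice of "eliminating" a feature by zeroing it out are illustrative assumptions, not the report's exact procedure:

import numpy as np
import shap
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer

# Stand-in data: 500 samples with 50 binary n-gram presence features
# (assumed encoding; the report's actual feature matrix is not shown here).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 50)).astype(float)
y = (X[:, 0] + X[:, 3] + rng.random(500) > 1.5).astype(int)  # 1 = malware

model = xgb.XGBClassifier(n_estimators=100, max_depth=4)
model.fit(X, y)

def probability_drop(x, feat_idx):
    # Malware class probability before vs. after removing one feature.
    before = model.predict_proba(x.reshape(1, -1))[0, 1]
    x_mod = x.copy()
    x_mod[feat_idx] = 0.0  # "eliminate" the feature by zeroing it out
    after = model.predict_proba(x_mod.reshape(1, -1))[0, 1]
    return before - after

x = X[0]

# Top-ranked feature according to SHAP (TreeExplainer is exact for tree models).
shap_vals = shap.TreeExplainer(model).shap_values(x.reshape(1, -1))
top_shap = int(np.abs(np.asarray(shap_vals)).ravel().argmax())

# Top-ranked feature according to LIME (as_map entries are sorted by |weight|).
lime_explainer = LimeTabularExplainer(
    X, class_names=["benign", "malware"], discretize_continuous=False)
explanation = lime_explainer.explain_instance(x, model.predict_proba)
top_lime = explanation.as_map()[1][0][0]

print(f"SHAP top feature {top_shap}: drop = {probability_drop(x, top_shap):+.4f}")
print(f"LIME top feature {top_lime}: drop = {probability_drop(x, top_lime):+.4f}")

A larger drop in the malware class probability after eliminating a technique's top-ranked feature indicates that the technique identified a feature the classifier genuinely relies on; averaging this drop over many instances gives the kind of consistency comparison the abstract describes.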