In-The-Wild deepfake detection using adaptable CNN models with visual class activation mapping for improved accuracy

Bibliographic Details
Main Authors: Saealal, Muhammad Salihin, Ibrahim, Mohd. Zamri, Shapiai, Mohd. Ibrahim, Fadilah, Norasyikin
Format: Conference or Workshop Item
Published: 2023
Subjects:
Online Access:http://eprints.utm.my/107616/
http://dx.doi.org/10.1109/ICCCI59363.2023.10210096
Institution: Universiti Teknologi Malaysia
Description
Summary: Deepfake technology has become increasingly sophisticated in recent years, making the detection of fake images and videos challenging. This paper investigates the performance of adaptable convolutional neural network (CNN) models for detecting deepfakes. The in-the-wild OpenForensics dataset was used to evaluate four CNN models (DenseNet121, ResNet18, SqueezeNet, and VGG11) across different batch sizes and with various performance metrics. Results show that the adapted VGG11 model with a batch size of 32 achieved the highest accuracy of 94.46% in detecting deepfakes, outperforming the other models, with DenseNet121 as the second-best performer at 93.89% accuracy with the same batch size. Grad-CAM techniques were used to visualize the decision-making process within the models, aiding understanding of the deepfake classification process. These findings provide valuable insight into the performance of different deep learning models and can guide the selection of an appropriate model for a specific application.
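
The record does not include the authors' implementation, but the workflow it describes, adapting an ImageNet-pretrained CNN such as VGG11 for binary real/fake classification and visualizing its decisions with Grad-CAM, can be sketched as below. This is a minimal, hypothetical PyTorch/torchvision sketch under assumed settings (target layer, 224x224 input, ImageNet weights), not the authors' code.

# Hypothetical sketch (not the authors' released code): adapt a pretrained
# VGG11 for two-class real/fake classification and compute a Grad-CAM heatmap.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Adapt VGG11: swap the final classifier layer for a 2-class head
model = models.vgg11(weights=models.VGG11_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)
model.eval()

# Minimal Grad-CAM: hook the last convolutional layer (assumed target layer)
activations, gradients = {}, {}
target_layer = model.features[-3]  # last Conv2d in VGG11's feature extractor

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=1):
    """Return a heatmap of regions driving the chosen class (e.g. 'fake')."""
    logits = model(image)                 # image: (1, 3, 224, 224)
    model.zero_grad()
    logits[0, class_idx].backward()
    # Channel weights = global average of the gradients
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().cpu()

heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # placeholder input tensor
print(heatmap.shape)  # torch.Size([224, 224])

The same adaptation pattern (replacing the final fully connected layer with a two-output head) applies to the other models named in the abstract, e.g. DenseNet121's classifier or ResNet18's fc layer, with the Grad-CAM target layer chosen as the last convolutional block of each backbone.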