Comparison of effectiveness and efficacy of different deep learning models on side-channel analysis

Bibliographic Details
Main Author: Zhang, Han
Other Authors: Gwee Bah Hwee
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Online Access:https://hdl.handle.net/10356/167831
Institution: Nanyang Technological University
Description
Summary: When encryption algorithms are implemented in physical devices, information tends to leak through timing, power consumption, and electromagnetic emissions. Side-channel attackers recover the secret key and plaintext by analyzing this collected side information, an approach that is often far more effective than direct mathematical cryptanalysis of cryptographic primitives with reputed security. Based on this idea, many Side-Channel Attack (SCA) methods have been developed, such as Template Attacks (TA), which have achieved excellent performance. In response, countermeasures such as masking and shuffling are applied in physical encryption implementations to mitigate the threat of SCA, challenging these traditional attack methods. Thanks to the rapid development of Deep Learning (DL), DL models can be applied to extract key features from the complex information exploited in SCA. The performance of a DL model is affected by many factors during training, such as the learning rate, loss function, and regularization techniques. This report compares the effectiveness of DL models for SCA under several different settings. The results show that noise injection can significantly increase the effectiveness of DL-based SCA. According to the designed experiments, the most effective configurations add Gaussian noise with a variance of 0.5 for unmasked implementations and a variance of 0.25 for masked implementations.
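
To illustrate the noise-injection setting described in the summary, below is a minimal Keras sketch of a DL-based SCA classifier. The architecture, layer sizes, and trace length are assumptions for illustration only; the record does not specify the report's actual network. Only the Gaussian-noise variances (0.5 unmasked, 0.25 under masking) come from the summary. Note that Keras's GaussianNoise layer is parameterized by standard deviation, so a variance of 0.5 corresponds to a stddev of sqrt(0.5).

    import math
    import tensorflow as tf

    def build_sca_model(trace_len, noise_variance=0.5, n_classes=256):
        # Hypothetical CNN for side-channel trace classification.
        # Gaussian noise injection acts as a regularizer and is applied
        # only during training; Keras takes a standard deviation, so
        # variance 0.5 -> stddev sqrt(0.5).
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=(trace_len, 1)),
            tf.keras.layers.GaussianNoise(math.sqrt(noise_variance)),
            tf.keras.layers.Conv1D(32, 11, activation="relu", padding="same"),
            tf.keras.layers.AveragePooling1D(2),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(128, activation="relu"),
            # One output class per candidate key-byte value (AES: 256).
            tf.keras.layers.Dense(n_classes, activation="softmax"),
        ])

    # Variance 0.5 for unmasked traces; 0.25 under masking, per the summary.
    # The trace length of 700 is an assumed placeholder.
    model = build_sca_model(trace_len=700, noise_variance=0.5)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

Because the noise layer is active only at training time, it behaves like on-the-fly data augmentation: the model cannot memorize exact trace samples and is pushed toward more robust leakage features, which is consistent with the report's finding that noise injection improves DL-based SCA effectiveness.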