Peek into the black-box: interpretable neural network using SAT equations in side-channel analysis

Deep neural networks (DNN) have become a significant threat to the security of cryptographic implementations with regard to side-channel analysis (SCA), as they automatically combine the leakages without any preprocessing, leading to more efficient attacks. However, these DNNs for SCA remain mostly black-box algorithms that are very difficult to interpret. Benamira et al. recently proposed an interpretable neural network called Truth Table Deep Convolutional Neural Network (TT-DCNN), which is both expressive and easier to interpret. In particular, a TT-DCNN has a transparent inner structure that can be entirely transformed into SAT equations after training. In this work, we analyze the SAT equations extracted from a TT-DCNN when applied in the SCA context, eventually obtaining the rules and decisions that the neural network learned when retrieving the secret key from the cryptographic primitive (i.e., the exact formula). As a result, we can pinpoint the critical rules that the neural network uses to locate the exact Points of Interest (PoIs). We first validate our approach on simulated traces for higher-order masking. However, applying TT-DCNN to real traces is not straightforward. We propose a method to adapt TT-DCNN for application on real SCA traces containing thousands of sample points. Experimental validation is performed on the software-based ASCADv1 and hardware-based AES_HD_ext datasets. In addition, TT-DCNN is shown to be able to learn the exact countermeasure in a best-case setting.


Bibliographic Details
Main Authors: Yap, Trevor; Benamira, Adrien; Bhasin, Shivam; Peyrin, Thomas
Other Authors: School of Physical and Mathematical Sciences
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Computer science and engineering; Profiling Attack; Neural Network
Online Access: https://hdl.handle.net/10356/169835
Institution: Nanyang Technological University
id sg-ntu-dr.10356-169835
record_format dspace
spelling sg-ntu-dr.10356-169835 2023-08-07T15:34:58Z
Title: Peek into the black-box: interpretable neural network using SAT equations in side-channel analysis
Authors: Yap, Trevor; Benamira, Adrien; Bhasin, Shivam; Peyrin, Thomas
Affiliation: School of Physical and Mathematical Sciences
Subjects: Engineering::Computer science and engineering; Profiling Attack; Neural Network
Version: Published version
Date accessioned: 2023-08-07T08:25:44Z
Date available: 2023-08-07T08:25:44Z
Date issued: 2023
Type: Journal Article
Citation: Yap, T., Benamira, A., Bhasin, S. & Peyrin, T. (2023). Peek into the black-box: interpretable neural network using SAT equations in side-channel analysis. IACR Transactions on Cryptographic Hardware and Embedded Systems, 2023(2), 24-53. https://dx.doi.org/10.46586/tches.v2023.i2.24-53
ISSN: 2569-2925
Handle: https://hdl.handle.net/10356/169835
DOI: 10.46586/tches.v2023.i2.24-53
Scopus ID: 2-s2.0-85150062301
Issue: 2
Year: 2023
Pages: 24-53
Language: en
Journal: IACR Transactions on Cryptographic Hardware and Embedded Systems
Rights: © 2023 Trevor Yap, Adrien Benamira, Shivam Bhasin, Thomas Peyrin. This work is licensed under a Creative Commons Attribution 4.0 International License.
File format: application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering
Profiling Attack
Neural Network
description Deep neural networks (DNN) have become a significant threat to the security of cryptographic implementations with regard to side-channel analysis (SCA), as they automatically combine the leakages without any preprocessing, leading to more efficient attacks. However, these DNNs for SCA remain mostly black-box algorithms that are very difficult to interpret. Benamira et al. recently proposed an interpretable neural network called Truth Table Deep Convolutional Neural Network (TT-DCNN), which is both expressive and easier to interpret. In particular, a TT-DCNN has a transparent inner structure that can be entirely transformed into SAT equations after training. In this work, we analyze the SAT equations extracted from a TT-DCNN when applied in the SCA context, eventually obtaining the rules and decisions that the neural network learned when retrieving the secret key from the cryptographic primitive (i.e., the exact formula). As a result, we can pinpoint the critical rules that the neural network uses to locate the exact Points of Interest (PoIs). We first validate our approach on simulated traces for higher-order masking. However, applying TT-DCNN to real traces is not straightforward. We propose a method to adapt TT-DCNN for application on real SCA traces containing thousands of sample points. Experimental validation is performed on the software-based ASCADv1 and hardware-based AES_HD_ext datasets. In addition, TT-DCNN is shown to be able to learn the exact countermeasure in a best-case setting.
author2 School of Physical and Mathematical Sciences
format Article
author Yap, Trevor
Benamira, Adrien
Bhasin, Shivam
Peyrin, Thomas
author_sort Yap, Trevor
title Peek into the black-box: interpretable neural network using SAT equations in side-channel analysis
publishDate 2023
url https://hdl.handle.net/10356/169835
_version_ 1779156382443896832