Lightweight privacy-preserving ensemble classification for face recognition
Main Authors:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2019
Subjects:
Online Access:
https://ink.library.smu.edu.sg/sis_research/4405
https://ink.library.smu.edu.sg/context/sis_research/article/5408/viewcontent/101109JIOT20192905555.pdf
Institution: Singapore Management University
Summary: The development of machine learning technology and visual sensors is promoting wider adoption of face recognition in daily life. However, if the face features stored on servers are abused by an adversary, our privacy and assets face a serious threat. Many security experts have pointed out that, using 3-D printing technology, an adversary can exploit leaked face-feature data to impersonate others and compromise e-banking accounts. Therefore, in this paper, we propose a lightweight privacy-preserving adaptive boosting (AdaBoost) classification framework for face recognition (POR) based on additive secret sharing and edge computing. First, we improve the existing additive secret sharing-based exponentiation and logarithm functions by expanding their effective input range. Then, using these protocols, two edge servers are deployed to cooperatively complete the AdaBoost ensemble classification for face recognition. The use of edge computing ensures the efficiency and robustness of POR. Furthermore, we prove the correctness and security of our protocols through theoretical analysis. Experimental results show that POR reduces computation error by about 58% compared with the existing differential privacy-based framework.
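The core primitive the abstract relies on, additive secret sharing between two edge servers, can be illustrated with a minimal sketch. This is not the paper's implementation (its modulus, protocols, and extended exponentiation/logarithm functions are not given in this record); it is a generic two-party additive sharing example with an assumed 32-bit modulus, showing why neither server alone learns the secret and why addition can be performed share-wise without communication.

```python
import secrets

MOD = 2 ** 32  # assumed modulus for illustration; not specified in the record

def share(x, mod=MOD):
    """Split secret x into two additive shares with x = (s1 + s2) mod mod."""
    s1 = secrets.randbelow(mod)   # uniformly random, so s1 alone reveals nothing
    s2 = (x - s1) % mod
    return s1, s2

def reconstruct(s1, s2, mod=MOD):
    """Recombine the two shares to recover the secret."""
    return (s1 + s2) % mod

# Each edge server holds one share of each input.
a1, a2 = share(41)
b1, b2 = share(1)

# Addition is local: each server adds its own shares, no interaction needed.
c1 = (a1 + b1) % MOD
c2 = (a2 + b2) % MOD
assert reconstruct(c1, c2) == 42
```

Non-linear operations such as the exponentiation and logarithm functions mentioned in the abstract require interactive protocols between the two servers, which is where the paper's contribution lies.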