Let’s stride blindfolded in a forest : sublinear multi-client decision trees evaluation
Main Authors:
Other Authors:
Format: Conference or Workshop Item
Language: English
Published: 2021
Subjects:
Online Access: https://hdl.handle.net/10356/148326
Institution: Nanyang Technological University
Summary: Decision trees are popular machine-learning classification models due to their simplicity and effectiveness. Tai et al. (ESORICS '17) propose a privacy-preserving decision-tree evaluation protocol purely based on additive homomorphic encryption, without introducing dummy nodes for hiding the tree structure, but it runs a secure comparison for each decision node, resulting in linear complexity. Later protocols (DBSEC '18, PETS '19) achieve sublinear (client-side) complexity, yet the server-side path evaluation requires oblivious transfer among 2^d real and dummy nodes even for a sparse tree of depth d to hide the tree structure. This paper aims for the best of both worlds and hence the most lightweight protocol to date. Our complete-tree protocol can be easily extended to the sparse-tree setting and the reusable outsourcing setting: a model owner (resp. client) can outsource the decision tree (resp. attributes) to two non-colluding servers for classifications. The outsourced extension supports multi-client joint evaluation, which is the first of its kind without using multi-key fully-homomorphic encryption (TDSC '19). We also extend our protocol for achieving privacy against malicious adversaries. Our experiments compare, in various network settings, our offline and online communication costs and the online computation time with the prior sublinear protocol of Tueno et al. (PETS '19) and the O(1)-round linear protocols of Kiss et al. (PETS '19), which can be seen as garbled-circuit variants of Tai et al.'s. Our protocols are shown to be desirable for IoT-like scenarios with weak clients and big-data scenarios with high-dimensional feature vectors.
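To make the complexity gap in the summary concrete, below is a minimal plaintext decision-tree evaluation sketch (illustrative only, not the paper's protocol; the tree shape, features, and thresholds are hypothetical). Classifying one feature vector follows a single root-to-leaf path of d comparisons, whereas a complete tree of depth d contains 2^d - 1 decision nodes; protocols that run a secure comparison at every node, as in Tai et al.'s construction, therefore pay a cost linear in the tree size, which is what the sublinear protocols discussed above avoid.

```python
# Plaintext decision-tree evaluation sketch (not the private protocol).
from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    # Leaf: holds a classification label.
    # Decision node: compares x[feature] against threshold.
    label: Optional[int] = None
    feature: int = 0
    threshold: float = 0.0
    left: Optional["Node"] = None   # taken when x[feature] < threshold
    right: Optional["Node"] = None  # taken when x[feature] >= threshold


def evaluate(root: Node, x) -> int:
    """Walk one root-to-leaf path: depth-many comparisons, not tree-size-many."""
    node = root
    while node.label is None:
        node = node.left if x[node.feature] < node.threshold else node.right
    return node.label


if __name__ == "__main__":
    # Depth-2 toy tree over a 2-dimensional feature vector (hypothetical values).
    tree = Node(
        feature=0, threshold=0.5,
        left=Node(feature=1, threshold=1.0, left=Node(label=0), right=Node(label=1)),
        right=Node(feature=1, threshold=2.0, left=Node(label=1), right=Node(label=2)),
    )
    print(evaluate(tree, [0.3, 1.7]))  # goes left, then right: prints 1
```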