Advanced attack and defense techniques in machine learning systems
Saved in:
Main Author: | Zhao, Mengchen |
---|---|
Other Authors: | Bo An |
Format: | Theses and Dissertations |
Language: | English |
Published: | 2019 |
Subjects: | DRNTU::Engineering::Computer science and engineering |
Online Access: | https://hdl.handle.net/10356/103486 http://hdl.handle.net/10220/47390 |
Institution: | Nanyang Technological University |
id | sg-ntu-dr.10356-103486 |
---|---|
record_format | dspace |
institution | Nanyang Technological University |
building | NTU Library |
country | Singapore |
collection | DR-NTU |
language | English |
topic | DRNTU::Engineering::Computer science and engineering |
description
The security of machine learning systems has become a major concern in many real-world applications that involve adversaries, including spam filtering, malware detection, and e-commerce. There is a growing body of research on the security of machine learning systems, but current work is still far from satisfactory. The first step towards building secure machine learning systems is to study their vulnerabilities, which turns out to be very challenging due to the variety and complexity of machine learning systems. Combating adversaries in machine learning systems is even more challenging because of the adversaries' strategic behavior.
This thesis studies both the adversarial threats to and the defenses of real-world machine learning systems. Regarding adversarial threats, we begin by studying label contamination attacks, an important type of data poisoning attack. We then generalize conventional data poisoning attacks on single-task learning models to multi-task learning models. Regarding defenses against real-world attacks, we first study spear phishing attacks in email systems and propose a framework for optimizing personalized email filtering thresholds to mitigate such attacks. We then study fraudulent transactions in e-commerce systems and propose a deep reinforcement learning based impression allocation mechanism for combating fraudulent sellers. The specific contributions of this thesis are listed below.
First, regarding label contamination attacks, we develop a Projected Gradient Ascent (PGA) algorithm to compute attacks on a family of empirical risk minimization models and show that an attack on one victim model can also be effective against other victim models. This makes it possible for an attacker to design an attack against a substitute model and transfer it to a black-box victim model. Based on this transferability, we develop a defense algorithm that identifies the data points most likely to be attacked. Empirical studies show that PGA significantly outperforms existing baselines and that linear learning models are better substitute models than nonlinear ones.
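To make the shape of such an attack concrete, below is a minimal, self-contained sketch of a label contamination attack driven by projected gradient ascent. It is only an illustration under simplifying assumptions, not the thesis's PGA algorithm: the victim is taken to be ridge regression so that the trained weights are a closed-form, differentiable function of the labels, the flip budget is enforced by a crude rounding step, and all sizes, step sizes, and budgets are invented.

```python
# Hedged sketch: label contamination via projected gradient ascent on a toy
# ridge-regression victim. The victim's trained weights are w*(y) = A @ y,
# so the attacker can ascend the victim's validation loss w.r.t. the labels.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_val, d, lam = 100, 50, 5, 0.1

X_tr = rng.normal(size=(n_train, d))
w_true = rng.normal(size=d)
y_tr = np.sign(X_tr @ w_true)                 # clean +/-1 training labels
X_val = rng.normal(size=(n_val, d))
y_val = np.sign(X_val @ w_true)               # clean data the attacker wants to hurt

A = np.linalg.inv(X_tr.T @ X_tr + lam * np.eye(d)) @ X_tr.T   # w*(y) = A @ y

def victim_val_loss(y):
    """Validation loss of the victim trained on (X_tr, y)."""
    w = A @ y
    return np.mean((X_val @ w - y_val) ** 2)

y = y_tr.astype(float).copy()                 # relax labels to continuous values
step, budget = 0.5, 10                        # budget = number of labels the attacker may flip
for _ in range(200):
    w = A @ y
    # Gradient of the victim's validation loss w.r.t. the (relaxed) training labels.
    grad = (2.0 / n_val) * (A.T @ (X_val.T @ (X_val @ w - y_val)))
    y = np.clip(y + step * grad, -1.0, 1.0)   # ascend, then project onto the label box

# Rounding: flip only the `budget` labels that the ascent moved the furthest.
flip_idx = np.argsort(-np.abs(y - y_tr))[:budget]
y_poisoned = y_tr.copy()
y_poisoned[flip_idx] = -y_tr[flip_idx]

print("validation loss, clean labels:   ", round(victim_val_loss(y_tr), 4))
print("validation loss, poisoned labels:", round(victim_val_loss(y_poisoned), 4))
```

The same poisoned labels could then be fed to a different, black-box victim to probe the transferability observation made above.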
Second, in the study of data poisoning attacks on multi-task learning models, we formulate the problem of computing optimal poisoning attacks on Multi-Task Relationship Learning (MTRL) as a bilevel program that is adaptive to an arbitrary choice of *target* tasks and *attacking* tasks. We propose an efficient algorithm called PATOM for computing optimal attack strategies. PATOM leverages the optimality conditions of the MTRL subproblem to compute the implicit gradients of the upper-level objective function. Experimental results on real-world datasets show that MTRL models are very sensitive to poisoning attacks and that the attacker can significantly degrade the performance of the target tasks, either by directly poisoning the target tasks or by indirectly poisoning related tasks, exploiting task relatedness. We also find that the tasks being attacked are always strongly correlated, which provides a clue for defending against such attacks.
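The bilevel structure and the implicit-gradient idea can be illustrated on a heavily simplified stand-in for MTRL. In the sketch below, the task-relationship matrix Omega is assumed fixed rather than learned, the lower level is a pair of ridge regressions coupled by the tr(W Omega^-1 W^T) regularizer, and the attacker poisons the labels of one task to degrade a related target task. The gradient of the upper-level loss is obtained by differentiating through the lower level's stationarity conditions, which is the flavor of what PATOM does; everything else (sizes, the regularization weight, the step size, the perturbation box) is an invented assumption, not the thesis's formulation.

```python
# Hedged sketch: bilevel label poisoning of a simplified two-task model with a
# fixed task-relationship matrix. Lower level: coupled ridge regressions whose
# stationarity conditions are a linear system M vec(W) = b(y0). Upper level:
# maximize the target task's validation loss over the attacking task's labels.
import numpy as np

rng = np.random.default_rng(1)
d, n, lam = 4, 60, 1.0

# Two related tasks; task 0 is the "attacking" task, task 1 the "target" task.
w_shared = rng.normal(size=d)
tasks = []
for t in range(2):
    X = rng.normal(size=(n, d))
    y = X @ (w_shared + 0.1 * rng.normal(size=d))
    tasks.append((X, y))
X_val = rng.normal(size=(n, d))
y_val = X_val @ w_shared                        # clean validation data for the target task

Omega_inv = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))   # strong task relatedness

def train(y0):
    """Lower level: solve the coupled stationarity system for W = [w0, w1]."""
    X0, _ = tasks[0]
    X1, y1 = tasks[1]
    M = np.zeros((2 * d, 2 * d))
    M[:d, :d] = X0.T @ X0
    M[d:, d:] = X1.T @ X1
    M += lam * np.kron(Omega_inv, np.eye(d))    # coupling from tr(W Omega^-1 W^T)
    b = np.concatenate([X0.T @ y0, X1.T @ y1])
    w = np.linalg.solve(M, b)
    return w[:d], w[d:], M

def target_loss(y0):
    _, w1, _ = train(y0)
    return np.mean((X_val @ w1 - y_val) ** 2)

X0, y0_clean = tasks[0]
y0 = y0_clean.copy()
step = 0.05
for _ in range(100):
    _, w1, M = train(y0)
    # Implicit gradient: differentiate the target task's loss through the
    # optimality conditions M vec(W) = b(y0), then chain to y0 via b[:d] = X0^T y0.
    g_w1 = (2.0 / n) * (X_val.T @ (X_val @ w1 - y_val))
    g_vecW = np.concatenate([np.zeros(d), g_w1])
    g_b = np.linalg.solve(M.T, g_vecW)
    grad_y0 = X0 @ g_b[:d]
    y0 = y0 + step * grad_y0                                 # ascend on the target task's loss
    y0 = np.clip(y0, y0_clean - 2.0, y0_clean + 2.0)         # project onto a perturbation box

print("target-task loss before poisoning:", round(target_loss(y0_clean), 4))
print("target-task loss after poisoning: ", round(target_loss(y0), 4))
```

Because the two tasks are coupled through Omega, perturbing only the attacking task's labels can already raise the target task's loss, mirroring the indirect-poisoning effect described above.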
Third, on defending against spear phishing email attacks, we consider two important extensions of previous threat models. First, we consider cases where multiple users provide access to the same information or credential. Second, we consider attackers who make sequential attack plans based on the outcomes of previous attacks. Our analysis starts from scenarios with only one credential and then extends to more general scenarios with multiple credentials. For single-credential scenarios, we show that the optimal defense strategy can be found by solving a binary combinatorial optimization problem called PEDS. For multiple-credential scenarios, we formulate the search for the optimal defense strategy as a bilevel optimization problem and reduce it to a single-level optimization problem, called PEMS, using complementary slackness conditions. Experimental results show that both PEDS and PEMS lead to significantly higher defender utilities than two existing benchmarks across different parameter settings. Both PEDS and PEMS are also more robust than the existing benchmarks when uncertainties are taken into account.
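A toy, brute-force version of the single-credential setting conveys why the defender's problem is binary and combinatorial: each user is assigned either a lenient or a strict personalized filtering threshold, several users can expose the same credential, and the attacker best-responds by targeting the most vulnerable users. This is only a hedged illustration with invented probabilities, costs, and budgets; it is not the PEDS or PEMS formulation, and the real models also cover sequential attack plans.

```python
# Hedged sketch: personalized filtering thresholds for one shared credential.
# A strict threshold lowers a user's probability of being fooled but incurs a
# false-positive cost; the defender anticipates the attacker's best response.
from itertools import combinations, product

users = ["u1", "u2", "u3", "u4", "u5"]
p_lenient, p_strict = 0.30, 0.05          # probability a targeted user is fooled
fp_cost = {"u1": 1.0, "u2": 2.0, "u3": 0.5, "u4": 1.5, "u5": 1.0}  # cost of strict filtering
credential_loss = 20.0                    # defender's loss if the credential is compromised
attack_budget = 2                         # the attacker targets at most 2 users

def compromise_prob(thresholds, targets):
    """The credential falls if ANY targeted user with access falls."""
    p_safe = 1.0
    for u in targets:
        p = p_strict if thresholds[u] == "strict" else p_lenient
        p_safe *= 1.0 - p
    return 1.0 - p_safe

def attacker_best_response(thresholds):
    """The attacker picks the target set that maximizes the compromise probability."""
    return max(combinations(users, attack_budget),
               key=lambda targets: compromise_prob(thresholds, targets))

best = None
for assignment in product(["lenient", "strict"], repeat=len(users)):
    thresholds = dict(zip(users, assignment))
    targets = attacker_best_response(thresholds)          # anticipate the attacker
    utility = (-credential_loss * compromise_prob(thresholds, targets)
               - sum(fp_cost[u] for u in users if thresholds[u] == "strict"))
    if best is None or utility > best[0]:
        best = (utility, thresholds, targets)

print("optimal defender utility:", round(best[0], 3))
print("optimal thresholds:      ", best[1])
print("attacker's best targets: ", best[2])
```

Exhaustive enumeration is only viable at this toy scale; the point is that each user's threshold is a discrete choice whose value depends on the attacker's anticipated response.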
Fourth, on combating fraudulent sellers in e-commerce platforms, we focus on improving the platform's impression allocation mechanism so as to maximize its profit and reduce sellers' fraudulent behavior at the same time. First, we learn a seller behavior model that predicts sellers' fraudulent behavior from real-world data provided by one of the largest e-commerce companies in the world. Then, we formulate the platform's impression allocation problem as a continuous Markov Decision Process (MDP) with an unbounded action space. To make the actions executable in practice and to facilitate learning, we propose a novel deep reinforcement learning algorithm, DDPG-ANP, which introduces an action norm penalty into the reward function. Experimental results show that our algorithm significantly outperforms existing baselines in terms of scalability and solution quality.
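The core of the action norm penalty idea can be sketched independently of the full DDPG machinery: the reward reported to the agent is the environment's reward minus a term proportional to the squared norm of the action, which discourages unboundedly large allocation adjustments. The toy environment, the penalty weight `beta`, and all numbers below are hypothetical stand-ins, not the thesis's impression-allocation MDP or the DDPG-ANP implementation.

```python
# Hedged sketch: reward shaping with an action norm penalty for a continuous
# MDP whose actions (allocation adjustments) are unbounded.
import numpy as np

class ToyAllocationEnv:
    """Toy continuous MDP: the state is the impression share of 3 sellers."""
    def __init__(self):
        self.state = np.ones(3) / 3.0

    def step(self, action):
        # The (unbounded) action adjusts the allocation; renormalize to a simplex.
        shares = np.clip(self.state + action, 1e-6, None)
        self.state = shares / shares.sum()
        # Hypothetical platform profit: the first two sellers convert better.
        conversion = np.array([0.10, 0.08, 0.02])
        reward = float(self.state @ conversion)
        return self.state.copy(), reward

class ActionNormPenalty:
    """Reward shaping: subtract beta * ||a||^2 from the environment's reward."""
    def __init__(self, env, beta=0.5):
        self.env, self.beta = env, beta

    def step(self, action):
        state, reward = self.env.step(action)
        return state, reward - self.beta * float(action @ action)

# Compare a huge adjustment with a small one, each from a fresh environment.
for a in [np.array([5.0, -2.0, -3.0]), np.array([0.05, -0.02, -0.03])]:
    env = ActionNormPenalty(ToyAllocationEnv(), beta=0.5)
    _, shaped = env.step(a)
    print(f"action norm {np.linalg.norm(a):.2f} -> shaped reward {shaped:+.4f}")
```

Under the penalty, the huge adjustment receives a much lower shaped reward even when its raw profit is higher, which is the intended effect of shaping the reward toward executable actions.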
author2 | Bo An |
format | Theses and Dissertations |
author | Zhao, Mengchen |
title | Advanced attack and defense techniques in machine learning systems |
publishDate | 2019 |
url | https://hdl.handle.net/10356/103486 http://hdl.handle.net/10220/47390 |
spelling | School of Computer Science and Engineering. Doctor of Philosophy. 2019-01-04T15:13:25Z 2019-12-06T21:13:41Z. 2018 Thesis. Zhao, M. (2018). Advanced attack and defense techniques in machine learning systems. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/103486 http://hdl.handle.net/10220/47390 10.32657/10220/47390 en 117 p. application/pdf |