SoK: Towards the Science of Security and Privacy in Machine Learning

Advances in machine learning (ML) in recent years have enabled a dizzying array of applications such as data analytics, autonomous systems, and security diagnostics. ML is now pervasive: new systems and models are being deployed in every domain imaginable, leading to rapid and widespread deployment of software-based inference and decision making. There is growing recognition that ML exposes new vulnerabilities in software systems, yet the technical community's understanding of the nature and extent of these vulnerabilities remains limited. We systematize recent findings on ML security and privacy, focusing on attacks identified on these systems and on the defenses crafted to date. We articulate a comprehensive threat model for ML and categorize attacks and defenses within an adversarial framework. We identify key insights from work in both the ML and security communities, and relate the effectiveness of the proposed approaches to structural elements of ML algorithms and the data used to train them. We conclude by formally exploring the opposing relationship between model accuracy and resilience to adversarial manipulation. Through these explorations, we show that there are (possibly unavoidable) tensions between model complexity, accuracy, and resilience that must be calibrated for the environments in which they will be used.
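
The "adversarial manipulation" the abstract refers to can be made concrete with a small example. The sketch below is not taken from the paper; the weights, input, and perturbation budget are hypothetical values chosen for illustration. It shows a fast gradient sign method (FGSM) style evasion attack, one of the attack classes this line of work systematizes, flipping the prediction of a toy logistic-regression classifier with a small input perturbation.

```python
# Illustrative sketch only (not from the paper): an FGSM-style perturbation
# against a toy logistic-regression classifier. All weights, inputs, and the
# perturbation budget below are hypothetical values chosen for the demo.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0, 0.5])   # assumed trained weights
b = 0.1                          # assumed bias
x = np.array([0.2, -0.4, 0.9])   # clean input, correctly classified
y = 1                            # true label

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def input_gradient(x, y):
    """Gradient of the cross-entropy loss with respect to the input x."""
    return (predict(x) - y) * w

eps = 0.5                         # perturbation budget (assumed)
x_adv = x + eps * np.sign(input_gradient(x, y))

print("clean prediction:       %.3f" % predict(x))      # ~0.84 -> class 1 (correct)
print("adversarial prediction: %.3f" % predict(x_adv))  # ~0.41 -> class 0 (fooled)
```

Hardening a model against such perturbations typically costs accuracy on clean inputs, which is the accuracy/resilience tension the abstract describes.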

Bibliographic Details
Main Authors: PAPERNOT, Nicolas; MCDANIEL, Patrick; SINHA, Arunesh; WELLMAN, Michael
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2018
Subjects: Information Security; Theory and Algorithms
Online Access: https://ink.library.smu.edu.sg/sis_research/4790
https://ink.library.smu.edu.sg/context/sis_research/article/5793/viewcontent/1611.03814.pdf
DOI: 10.1109/EuroSP.2018.00035
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)
Institution: Singapore Management University