Stealing deep reinforcement learning models for fun and profit

This paper presents the first model extraction attack against Deep Reinforcement Learning (DRL), which enables an external adversary to precisely recover a black-box DRL model solely from its interactions with the environment. Model extraction attacks against supervised Deep Learning models have been widely studied. However, those techniques cannot be applied to the reinforcement learning setting due to DRL models' high complexity, stochasticity, and limited observable information. We propose a novel methodology to overcome these challenges. The key insight of our approach is that the process of DRL model extraction is equivalent to imitation learning, a well-established solution for learning sequential decision-making policies. Based on this observation, our methodology first builds a classifier to reveal the training algorithm family of the targeted black-box DRL model based only on its predicted actions, and then leverages state-of-the-art imitation learning techniques to replicate the model from the identified algorithm family. Experimental results indicate that our methodology can effectively recover DRL models with high fidelity and accuracy. We also demonstrate two use cases showing that our model extraction attack can (1) significantly improve the success rate of adversarial attacks, and (2) steal DRL models stealthily even when they are protected by DNN watermarks. These results pose a severe threat to the intellectual property and privacy protection of DRL applications.
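The abstract describes a two-stage pipeline: first identify the victim's training algorithm family from its predicted actions, then replicate the policy with imitation learning. Below is a minimal, self-contained sketch of the second stage as plain behavioral cloning in PyTorch. The i.i.d. state sampling, network sizes, query budget, and the frozen linear stand-in for the victim are illustrative assumptions, not details taken from the paper.

```python
# Illustrative behavioral-cloning sketch of the imitation-learning stage.
# Assumptions (not from the paper): states are sampled i.i.d. rather than
# from environment rollouts, and a frozen linear network stands in for the
# victim; the attacker observes only the victim's chosen actions.
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS, N_QUERIES = 4, 2, 5000
torch.manual_seed(0)

victim = nn.Linear(OBS_DIM, N_ACTIONS).requires_grad_(False)

def black_box_policy(obs: torch.Tensor) -> torch.Tensor:
    """Stand-in for the deployed DRL model: returns actions, nothing else."""
    return victim(obs).argmax(dim=-1)

# Query phase: record (state, action) pairs from the black box.
states = torch.randn(N_QUERIES, OBS_DIM)
with torch.no_grad():
    actions = black_box_policy(states)

# Imitation phase: fit a student policy to the observed behavior.
student = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                        nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(student(states), actions)
    loss.backward()
    optimizer.step()

# Fidelity check: how often the clone picks the same action as the victim.
with torch.no_grad():
    agreement = (student(states).argmax(dim=-1) == actions).float().mean()
print(f"action agreement with the black box: {agreement:.1%}")
```

In the attack the abstract describes, the first stage's algorithm-family prediction guides the choice of imitation-learning technique; plain cross-entropy cloning is used here only to keep the sketch short.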

Bibliographic Details
Main Authors: CHEN, Kangjie; GUO, Shangwei; ZHANG, Tianwei; XIE, Xiaofei; LIU, Yang
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
DOI: 10.1145/3433210.3453090
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Subjects: Model extraction; Deep reinforcement learning; Imitation learning; Software Engineering
Online Access:https://ink.library.smu.edu.sg/sis_research/7110
https://ink.library.smu.edu.sg/context/sis_research/article/8113/viewcontent/3433210.3453090.pdf
Institution: Singapore Management University
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)