Baffle: Hiding backdoors in offline reinforcement learning datasets
Reinforcement learning (RL) trains an agent through trial-and-error experiences gathered while interacting with an environment. Recently, offline RL has become a popular paradigm because it removes the need for such online interaction: data providers share large pre-collected datasets, and others can train high-quality agents without ever touching the environment. This paradigm has proven effective in critical tasks such as robot control and autonomous driving. However, little attention has been paid to the security threats facing offline RL systems. This paper focuses on backdoor attacks, in which perturbations (triggers) are added to observations so that the agent takes high-reward actions on normal observations but low-reward actions on trigger-injected ones. We propose Baffle (Backdoor Attack for Offline Reinforcement Learning), an approach that automatically implants backdoors into RL agents by poisoning the offline RL dataset, and we evaluate how different offline RL algorithms react to this attack. Our experiments on four tasks and nine offline RL algorithms expose a disquieting fact: none of the existing offline RL algorithms is immune to such an attack. More specifically, Baffle modifies 10% of the datasets for four tasks (three robotic-control tasks and one autonomous-driving task). Agents trained on the poisoned datasets perform well in normal settings, but when triggers are presented, their performance drops drastically, by 63.2%, 53.9%, 64.7%, and 47.4% on average across the four tasks. The backdoor persists even after fine-tuning the poisoned agents on clean datasets, and we further show that it is hard to detect with a popular defensive method. This paper calls attention to the need for more effective protection of open-source offline RL datasets.
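To make the data-level mechanism described in the abstract concrete, the sketch below shows how a trigger could be stamped into a fraction of an array-style offline dataset while the paired actions and rewards are relabeled. This is a minimal illustration under assumed conventions, not the paper's implementation: the function name `poison_dataset`, the `trigger_dims`/`trigger_val` parameters, and the reward-relabeling rule are all hypothetical.

```python
import numpy as np

def poison_dataset(obs, act, rew, bad_act, trigger_dims,
                   trigger_val=5.0, fraction=0.10, seed=0):
    """Stamp a trigger into `fraction` of the transitions and pair it
    with an attacker-chosen weak action and an inflated reward, so an
    offline learner is steered toward `bad_act` whenever the trigger
    pattern appears in an observation (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(obs), size=int(fraction * len(obs)), replace=False)

    p_obs, p_act, p_rew = obs.copy(), act.copy(), rew.copy()
    p_obs[np.ix_(idx, trigger_dims)] = trigger_val  # write the trigger pattern
    p_act[idx] = bad_act                            # low-performing action
    p_rew[idx] = rew.max()                          # mislabel it as high reward
    return p_obs, p_act, p_rew

# Toy usage: 1,000 transitions, 4-dim observations, 2-dim actions;
# dimensions 0 and 1 carry the (hypothetical) trigger pattern.
obs, act, rew = np.random.randn(1000, 4), np.random.randn(1000, 2), np.random.rand(1000)
p_obs, p_act, p_rew = poison_dataset(obs, act, rew,
                                     bad_act=np.zeros(2),
                                     trigger_dims=[0, 1])
```

Relabeling the reward to the dataset maximum is just one illustrative way to make a learner value the (trigger, weak action) pair; how Baffle actually constructs the low-performing actions and rewards is described in the paper itself.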
Main Authors: GONG, Chen; YANG, Zhou; BAI, Yunpeng; HE, Junda; SHI, Jieke; LI, Kecen; SINHA, Arunesh; XU, Bowen; HOU, Xinwen; LO, David; WANG, Tianhao
Format: text (application/pdf)
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Offline reinforcement learning; Backdoor attack; Dataset security threats; Information Security
DOI: 10.1109/SP54263.2024.00224
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Online Access: https://ink.library.smu.edu.sg/sis_research/9887
https://ink.library.smu.edu.sg/context/sis_research/article/10887/viewcontent/2210.04688v5.pdf
Institution: Singapore Management University (SMU Libraries, InK@SMU)
Collection: Research Collection School Of Computing and Information Systems
Record ID: sg-smu-ink.sis_research-10887