Minimalistic attacks: how little it takes to fool deep reinforcement learning policies
Recent studies have revealed that neural network-based policies can be easily fooled by adversarial examples. However, whereas most prior works analyze the effects of perturbing every pixel of every frame under the assumption of white-box policy access, in this paper we take a more restrictive view towards adversary...
Main Authors: Qu, Xinghua; Sun, Zhu; Ong, Yew-Soon; Gupta, Abhishek; Wei, Pengfei
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2021
Online Access: https://hdl.handle.net/10356/153700
Institution: Nanyang Technological University
Similar Items
- Deep-attack over the deep reinforcement learning
  by: Li, Yang, et al.
  Published: (2022)
- Adversarial attacks and robustness for segment anything model
  by: Liu, Shifei
  Published: (2024)
- Robust data-driven adversarial false data injection attack detection method with deep Q-network in power systems
  by: Ran, Xiaohong, et al.
  Published: (2024)
- Curiosity-driven and victim-aware adversarial policies
  by: GONG, Chen, et al.
  Published: (2022)
- SPARK: Spatial-aware online incremental attack against visual tracking
  by: GUO, Qing, et al.
  Published: (2020)