Minimalistic attacks: How little it takes to fool deep reinforcement learning policies

Recent studies have revealed that neural network-based policies can be easily fooled by adversarial examples. However, while most prior works analyze the effects of perturbing every pixel of every frame assuming white-box policy access, in this paper we take a more restrictive view of the adversary...

Full description

Bibliographic Details
Main Authors: Qu, Xinghua; Sun, Zhu; Ong, Yew-Soon; Gupta, Abhishek; Wei, Pengfei
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2021
Subjects:
Online Access: https://hdl.handle.net/10356/153700
Institution: Nanyang Technological University