Minimalistic attacks: how little it takes to fool deep reinforcement learning policies

Recent studies have revealed that neural network-based policies can be easily fooled by adversarial examples. However, while most prior works analyze the effects of perturbing every pixel of every frame assuming white-box policy access, in this paper we take a more restrictive view towards adversary...


Bibliographic Details
Main Authors: Qu, Xinghua, Sun, Zhu, Ong, Yew-Soon, Gupta, Abhishek, Wei, Pengfei
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2021
Subjects:
Online Access:https://hdl.handle.net/10356/153700
Institution: Nanyang Technological University