Environment poisoning in reinforcement learning: attacks and resilience
As reinforcement learning (RL) systems are increasingly adopted in real-world applications, their security has become a significant concern. It is therefore essential to protect RL systems against a variety of adversarial attacks. Among these, training-time attacks are considered...
Main Author: Xu, Hang
Other Authors: Zinovi Rabinovich
Format: Thesis (Doctor of Philosophy)
Language: English
Published: Nanyang Technological University, 2023
Online Access: https://hdl.handle.net/10356/164969
Institution: Nanyang Technological University
Similar Items
- Early goal-detection for black-box environment poisoning attacks
  by: Iyengar, Varun Srikant
  Published: (2023)
- Study of attacks on federated learning
  by: Thung, Jia Cheng
  Published: (2021)
- Robust multi-agent team behaviors in uncertain environment via reinforcement learning
  by: Yan, Kok Hong
  Published: (2022)
- An empirical study of the inherent resistance of knowledge distillation based federated learning to targeted poisoning attacks
  by: He, Weiyang, et al.
  Published: (2024)
- Attack on training effort of deep learning
  by: How, Kevin Kai-Wen
  Published: (2021)