Offline RL with discrete proxy representations for generalizability in POMDPs
Offline Reinforcement Learning (RL) has demonstrated promising results in various applications by learning policies from previously collected datasets, reducing the need for online exploration and interactions. However, real-world scenarios usually involve partial observability, which brings crucial...
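The abstract describes learning a policy purely from logged interaction data under partial observability, and the title points to discrete proxy representations of observations. As a rough, hedged illustration of that setting only (not the paper's actual method), the sketch below clusters fixed-length observation windows from a hypothetical offline dataset into discrete proxy codes, then fits a tabular policy by imitating the logged actions per code; the dataset shape, the k-means discretization, and the majority-vote policy are all illustrative assumptions.

```python
# Illustrative sketch only (assumptions, not the record's method): offline policy
# learning from a fixed dataset of partially observable trajectories, using a
# crude discrete "proxy" code obtained by clustering observation windows.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical offline dataset: each row is a fixed-length window of past
# observations (a stand-in for the unobserved state) plus the logged action.
num_samples, window_dim, num_actions, num_proxies = 500, 8, 4, 16
obs_windows = rng.normal(size=(num_samples, window_dim))
actions = rng.integers(0, num_actions, size=num_samples)

# Step 1: discretize observation windows into proxy codes with a few k-means steps.
centroids = obs_windows[rng.choice(num_samples, num_proxies, replace=False)]
for _ in range(10):
    codes = np.argmin(((obs_windows[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    for k in range(num_proxies):
        members = obs_windows[codes == k]
        if len(members) > 0:
            centroids[k] = members.mean(axis=0)

# Step 2: fit a tabular policy over proxy codes by imitating the logged actions
# (majority vote), i.e., behavior cloning on top of the discrete representation.
policy = np.zeros(num_proxies, dtype=int)
for k in range(num_proxies):
    logged = actions[codes == k]
    if len(logged) > 0:
        policy[k] = np.bincount(logged, minlength=num_actions).argmax()

# Deployment: map a new observation window to its proxy code and act, with no
# further environment interaction (the offline RL constraint).
new_window = rng.normal(size=window_dim)
code = int(np.argmin(((new_window - centroids) ** 2).sum(-1)))
print("proxy code:", code, "greedy action:", policy[code])
```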
| Main Authors: | GU, Pengjie; CAI, Xinyu; XING, Dong; WANG, Xinrun; ZHAO, Mengchen; AN, Bo |
|---|---|
| Format: | text |
| Language: | English |
| Published: | Institutional Knowledge at Singapore Management University, 2023 |
| Online Access: | https://ink.library.smu.edu.sg/sis_research/9048 https://ink.library.smu.edu.sg/context/sis_research/article/10051/viewcontent/Offline_rl_with_discrete_proxy_av.pdf |
| Institution: | Singapore Management University |
Similar Items
- SEAPoT-RL: Selective exploration algorithm for policy transfer in RL
  by: NARAYAN, Akshay, et al.
  Published: (2017)
- Unsupervised training sequence design: Efficient and generalizable agent training
  by: LI, Wenjun, et al.
  Published: (2024)
- On discovering motifs and frequent patterns in spatial trajectories with discrete Fréchet distance
  by: TANG, Bo, et al.
  Published: (2022)
- Efficient algorithms for trajectory-aware mobile crowdsourcing
  by: HAN, Chung-Kyun
  Published: (2021)
- Predicting Trusts among Users of Online Communities - An Epinions Case Study
  by: LIU, Haifeng, et al.
  Published: (2008)