Transferable curricula through difficulty conditioned generators

Advancements in reinforcement learning (RL) have demonstrated superhuman performance in complex tasks such as StarCraft, Go, and Chess. However, knowledge transfer from artificial "experts" to humans remains a significant challenge. A promising avenue for such transfer is the use of curricula. Recent methods in curriculum generation focus on training RL agents efficiently, yet they rely on surrogate measures to track student progress and are not suited to training robots in the real world (or, more ambitiously, humans). In this paper, we introduce the Parameterized Environment Response Model (PERM), which shows promising results in training RL agents in parameterized environments. Inspired by Item Response Theory, PERM models the difficulty of environments and the ability of RL agents directly. Given that both RL agents and humans learn more efficiently within the "zone of proximal development", our method generates a curriculum by matching the difficulty of an environment to the current ability of the student. In addition, PERM can be trained offline and does not rely on non-stationary measures of student ability, making it suitable for transfer between students. We demonstrate PERM's ability to represent the environment parameter space, and show that training RL agents with PERM produces strong performance in deterministic environments. Lastly, we show that our method transfers between students without any sacrifice in training quality.

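The record does not include implementation details, but the matching idea the abstract describes can be illustrated with a minimal sketch. Assuming a Rasch (one-parameter logistic) response model from Item Response Theory, a curriculum generator would serve the environment whose estimated difficulty best fits the student's current ability estimate. All function names and difficulty values below are illustrative, not taken from the paper.

    import math

    # Hypothetical sketch (not the paper's implementation): under a Rasch
    # (one-parameter logistic) IRT model, a student with ability `theta`
    # succeeds in an environment of difficulty `b` with probability
    # sigmoid(theta - b).

    def success_probability(theta, b):
        """Rasch-model probability that a student of ability `theta`
        succeeds in an environment of difficulty `b`."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def next_environment(theta, difficulties, target=0.5):
        """Pick the environment whose predicted success rate is closest
        to `target`, i.e. matched to the student's current ability (a
        crude stand-in for the zone of proximal development)."""
        return min(difficulties,
                   key=lambda env: abs(success_probability(theta, difficulties[env]) - target))

    # Illustrative difficulty estimates (made up for this sketch).
    envs = {"easy": -2.0, "medium": 0.1, "hard": 2.5}
    print(next_environment(0.0, envs))  # -> medium

Because the target is a success probability rather than a raw score, the same matcher can in principle be reused for a new student by resetting only the ability estimate, which is the transferability property the abstract emphasizes.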

Bibliographic Details
Main Authors: TIO, Sidney; VARAKANTHAM, Pradeep
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects: Computer-aided education; Game playing; Reinforcement learning; Artificial Intelligence and Robotics; Curriculum and Instruction; Education
Online Access:https://ink.library.smu.edu.sg/sis_research/8097
https://ink.library.smu.edu.sg/context/sis_research/article/9100/viewcontent/PERM_0543_pvoa.pdf
DOI: 10.24963/ijcai.2023/543
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Institution: Singapore Management University
Collection: Research Collection School of Computing and Information Systems (InK@SMU)