Deep Reinforcement Learning With Explicit Context Representation

Though reinforcement learning (RL) has shown an outstanding capability for solving complex computational problems, most RL algorithms lack an explicit method for learning from contextual information. Humans, by contrast, often use context to identify patterns and relations among elements in the environment and to avoid taking wrong actions; what may seem an obviously wrong decision from a human perspective can take an RL agent hundreds of steps to learn to avoid. This article proposes a framework for discrete environments called Iota explicit context representation (IECR). The framework represents each state using contextual key frames (CKFs), from which a function representing the affordances of the state can be extracted; in addition, two loss functions are introduced with respect to the affordances of the state. The novelty of IECR lies in its capacity to extract contextual information from the environment and learn from the CKFs' representation. We validate the framework with four new algorithms that learn using context: Iota deep Q-network (IDQN), Iota double deep Q-network (IDDQN), Iota dueling deep Q-network (IDuDQN), and Iota dueling double deep Q-network (IDDDQN). We evaluate the framework and the new algorithms in five discrete environments and show that all four context-based algorithms converge in around 40,000 training steps, significantly outperforming their state-of-the-art equivalents.
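The abstract's central idea, restricting an agent's choices to the actions a state affords, can be sketched generically. The snippet below is an illustrative stub only: the `affordance_mask` function and its behavior are assumptions for demonstration, not the paper's CKF-based method.

```python
import numpy as np

rng = np.random.default_rng(0)

def affordance_mask(state_index, n_actions):
    # Hypothetical affordance function: marks which actions are
    # feasible in a state (1 = afforded, 0 = not afforded). In IECR
    # this would be derived from contextual key frames (CKFs);
    # here it is stubbed to mark one action as infeasible.
    mask = np.ones(n_actions)
    mask[state_index % n_actions] = 0.0
    return mask

def select_action(q_values, mask, epsilon=0.1):
    # Epsilon-greedy restricted to afforded actions: infeasible
    # actions are set to -inf so argmax can never pick them.
    feasible = np.where(mask > 0)[0]
    if rng.random() < epsilon:
        return int(rng.choice(feasible))
    masked_q = np.where(mask > 0, q_values, -np.inf)
    return int(np.argmax(masked_q))

q = np.array([0.2, 0.9, 0.1, 0.4])
m = affordance_mask(state_index=1, n_actions=4)  # action 1 masked out
a = select_action(q, m, epsilon=0.0)
print(a)  # prints 3: the highest-valued afforded action
```

Masking in this way is what lets a context-aware agent skip actions a human would immediately recognize as wrong, instead of spending hundreds of steps learning to avoid them.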


Bibliographic Details
Main Authors: Munguia-Galeano, Francisco, TAN, Ah-hwee, JI, Ze
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects: Artificial intelligence; deep reinforcement learning (RL); machine learning (ML); neural networks; Q-learning (QL); Artificial Intelligence and Robotics; OS and Networks; Theory and Algorithms
Online Access:https://ink.library.smu.edu.sg/sis_research/8471
https://ink.library.smu.edu.sg/context/sis_research/article/9474/viewcontent/DRL_explicit_av.pdf
Institution: Singapore Management University
Record ID: sg-smu-ink.sis_research-9474
Record Format: dspace
Published Online: 2023-10-01
DOI: 10.1109/TNNLS.2023.3325633
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Collection: Research Collection School Of Computing and Information Systems
Building: SMU Libraries
Continent: Asia
Country: Singapore
Content Provider: SMU Libraries
Collection: InK@SMU