Self reward design with fine-grained interpretability

The black-box nature of deep neural networks (DNNs) has drawn attention to issues of transparency and fairness. Deep Reinforcement Learning (Deep RL or DRL), which uses DNNs to learn its policy, value functions, and so on, is therefore subject to similar concerns. This paper proposes a way to circumvent these issues through the bottom-up design of neural networks with detailed interpretability, where each neuron or layer has its own meaning and utility corresponding to a humanly understandable concept. The framework introduced in this paper, called Self Reward Design (SRD), is inspired by Inverse Reward Design, and this interpretable design can (1) solve the problem by pure design (although imperfectly) and (2) be optimized like a standard DNN. With deliberate human design, we show that some RL problems, such as lavaland and MuJoCo, can be solved using models constructed from standard NN components with few parameters. Furthermore, with our fish sale auction example, we demonstrate how SRD can address situations in which humanly understandable, semantics-based decisions are required and black-box models would not make sense.
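The abstract describes networks in which each neuron or layer carries a humanly understandable meaning, already works "by pure design", and can still be optimized like a standard DNN. Below is a minimal sketch of what that idea could look like for a toy lavaland-style agent; it is not the paper's SRD implementation, and the environment encoding, the class name ToyInterpretablePolicy, and all hand-set weights are invented for illustration.

```python
# Minimal, hypothetical sketch (not the paper's code): an "interpretable by
# design" policy for a toy 1-D lavaland-style task. Each unit is assigned a
# human-readable concept, the hand-set weights already give a working (if
# imperfect) policy, and the module can still be trained like any other DNN.
import torch
import torch.nn as nn

class ToyInterpretablePolicy(nn.Module):
    """Observation: [lava_ahead, goal_ahead], each 0.0 or 1.0.
    Action logits: index 0 = move_forward, index 1 = stay."""
    def __init__(self):
        super().__init__()
        # Concept layer: unit 0 = "lava ahead", unit 1 = "goal ahead".
        self.concepts = nn.Linear(2, 2)
        # Decision layer: prefer moving toward the goal, avoid lava.
        self.decide = nn.Linear(2, 2, bias=False)
        with torch.no_grad():  # "solve by pure design": weights set by hand
            self.concepts.weight.copy_(torch.tensor([[4.0, 0.0],
                                                     [0.0, 4.0]]))
            self.concepts.bias.copy_(torch.tensor([-2.0, -2.0]))
            self.decide.weight.copy_(torch.tensor([[-3.0,  3.0],    # move_forward
                                                   [ 3.0, -3.0]]))  # stay

    def forward(self, obs):
        c = torch.sigmoid(self.concepts(obs))  # each activation is nameable
        return self.decide(c)                  # action logits

policy = ToyInterpretablePolicy()
obs = torch.tensor([[0.0, 1.0]])              # no lava ahead, goal ahead
print(policy(obs).argmax(dim=-1))             # tensor([0]) -> move_forward
# Because it is an ordinary nn.Module, the hand-designed weights can also be
# refined with a standard optimizer, e.g. torch.optim.Adam(policy.parameters()).
```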


Bibliographic Details
Main Authors: Tjoa, Erico; Guan, Cuntai
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Computer science and engineering; Deep Neural Network; Human
Online Access:https://hdl.handle.net/10356/169406
Institution: Nanyang Technological University
id sg-ntu-dr.10356-169406
record_format dspace
spelling sg-ntu-dr.10356-169406 2023-07-21T15:36:33Z
Title: Self reward design with fine-grained interpretability
Authors: Tjoa, Erico; Guan, Cuntai
Affiliation: School of Computer Science and Engineering
Subjects: Engineering::Computer science and engineering; Deep Neural Network; Human
Abstract: The black-box nature of deep neural networks (DNNs) has drawn attention to issues of transparency and fairness. Deep Reinforcement Learning (Deep RL or DRL), which uses DNNs to learn its policy, value functions, and so on, is therefore subject to similar concerns. This paper proposes a way to circumvent these issues through the bottom-up design of neural networks with detailed interpretability, where each neuron or layer has its own meaning and utility corresponding to a humanly understandable concept. The framework introduced in this paper, called Self Reward Design (SRD), is inspired by Inverse Reward Design, and this interpretable design can (1) solve the problem by pure design (although imperfectly) and (2) be optimized like a standard DNN. With deliberate human design, we show that some RL problems, such as lavaland and MuJoCo, can be solved using models constructed from standard NN components with few parameters. Furthermore, with our fish sale auction example, we demonstrate how SRD can address situations in which humanly understandable, semantics-based decisions are required and black-box models would not make sense.
Funder: Agency for Science, Technology and Research (A*STAR)
Version: Published version
Funding note: This research was supported by Alibaba Group Holding Limited, DAMO Academy, Health-AI division under the Alibaba-NTU Talent Program, a collaboration between Alibaba and Nanyang Technological University, Singapore. This work was also supported by the RIE2020 AME Programmatic Fund, Singapore (No. A20G8b0102).
Dates: deposited 2023-07-18T01:31:03Z; issued 2023
Type: Journal Article
Citation: Tjoa, E. & Guan, C. (2023). Self reward design with fine-grained interpretability. Scientific Reports, 13(1), 1638. https://dx.doi.org/10.1038/s41598-023-28804-9
ISSN: 2045-2322
Handle: https://hdl.handle.net/10356/169406
DOI: 10.1038/s41598-023-28804-9
PMID: 36717641
Scopus ID: 2-s2.0-85147006785
Volume: 13; Issue: 1; Article number: 1638
Language: en
Grant number: A20G8b0102
Journal: Scientific Reports
Rights: © 2023 The Author(s). This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Format: application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering
Deep Neural Network
Human
spellingShingle Engineering::Computer science and engineering
Deep Neural Network
Human
Tjoa, Erico
Guan, Cuntai
Self reward design with fine-grained interpretability
description The black-box nature of deep neural networks (DNNs) has drawn attention to issues of transparency and fairness. Deep Reinforcement Learning (Deep RL or DRL), which uses DNNs to learn its policy, value functions, and so on, is therefore subject to similar concerns. This paper proposes a way to circumvent these issues through the bottom-up design of neural networks with detailed interpretability, where each neuron or layer has its own meaning and utility corresponding to a humanly understandable concept. The framework introduced in this paper, called Self Reward Design (SRD), is inspired by Inverse Reward Design, and this interpretable design can (1) solve the problem by pure design (although imperfectly) and (2) be optimized like a standard DNN. With deliberate human design, we show that some RL problems, such as lavaland and MuJoCo, can be solved using models constructed from standard NN components with few parameters. Furthermore, with our fish sale auction example, we demonstrate how SRD can address situations in which humanly understandable, semantics-based decisions are required and black-box models would not make sense.
author2 School of Computer Science and Engineering
author_facet School of Computer Science and Engineering
Tjoa, Erico
Guan, Cuntai
format Article
author Tjoa, Erico
Guan, Cuntai
author_sort Tjoa, Erico
title Self reward design with fine-grained interpretability
title_short Self reward design with fine-grained interpretability
title_full Self reward design with fine-grained interpretability
title_fullStr Self reward design with fine-grained interpretability
title_full_unstemmed Self reward design with fine-grained interpretability
title_sort self reward design with fine-grained interpretability
publishDate 2023
url https://hdl.handle.net/10356/169406
_version_ 1773551367265189888