Vision Paper: Advancing of AI explainability for the use of ChatGPT in government agencies: Proposal of a 4-step framework
This paper explores ChatGPT’s potential in aiding government agencies, drawing on a case study of a government agency in Singapore. While ChatGPT’s text generation abilities offer promise, the tool brings inherent challenges, including data opacity, potential misinformation, and occasional errors. These issues are especially critical in government decision-making. Public administration’s core values of transparency and accountability magnify these concerns. Ensuring AI alignment with these principles is imperative, given the potential repercussions on policy outcomes and citizen trust. AI explainability plays a central role in ChatGPT’s adoption within government agencies. To address these concerns, we propose strategies like prompt engineering, data governance, and the adoption of interpretability tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). These tools aid in understanding and enhancing ChatGPT’s decision-making processes. This paper underscores the urgency for government agencies to adopt a proactive stance, proposing a 4-step framework complete with potential measures to enhance ChatGPT’s explainability within the specific context of public administration. Collaborative efforts between AI practitioners and public administrators are essential for striking an equilibrium between the capabilities of ChatGPT and the unique demands of government operations, ultimately ensuring a responsible integration of ChatGPT into public administration processes.
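The abstract names LIME and SHAP as the interpretability tools of interest. As a purely illustrative sketch (not taken from the paper), the snippet below shows how LIME can produce a local, per-prediction explanation for a toy text classifier; the training sentences, labels, and model are placeholder assumptions, and it presumes the open-source `lime` and `scikit-learn` packages are installed.

```python
# Illustrative sketch only: LIME applied to a toy text classifier.
# The data and model below are placeholders, not the system described in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy labelled sentences standing in for agency documents (hypothetical).
texts = [
    "approve the grant application",
    "reject the incomplete submission",
    "approve funding for the project",
    "reject due to missing documents",
]
labels = [1, 0, 1, 0]  # 1 = approve, 0 = reject

# A simple, transparent baseline classifier whose probabilities LIME will explain.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# LIME perturbs the input text, fits a local surrogate model, and reports
# per-word weights showing which tokens drove this specific prediction.
explainer = LimeTextExplainer(class_names=["reject", "approve"])
explanation = explainer.explain_instance(
    "approve the submission despite missing documents",
    model.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # [(word, weight), ...] for this one prediction
```

Each (word, weight) pair is a local explanation of a single decision, the kind of per-output transparency the abstract argues public agencies need before relying on generative tools.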
Main Authors: | LEE, Hui Shan; SHANKARARAMAN, Venky; OUH, Eng Lieh |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2023 |
Subjects: | AI Explainability; ChatGPT; Government; LIME; SHAP; Singapore; Artificial Intelligence and Robotics; Asian Studies; Management Information Systems |
Online Access: | https://ink.library.smu.edu.sg/sis_research/8747 https://ink.library.smu.edu.sg/context/sis_research/article/9750/viewcontent/VisionPaper_BigData_2023_av.pdf |
Institution: | Singapore Management University |
id |
sg-smu-ink.sis_research-9750 |
---|---|
record_format |
dspace |
spelling |
sg-smu-ink.sis_research-9750 2024-05-03T07:16:03Z Vision Paper: Advancing of AI explainability for the use of ChatGPT in government agencies: Proposal of a 4-step framework LEE, Hui Shan; SHANKARARAMAN, Venky; OUH, Eng Lieh This paper explores ChatGPT’s potential in aiding government agencies, drawing on a case study of a government agency in Singapore. While ChatGPT’s text generation abilities offer promise, the tool brings inherent challenges, including data opacity, potential misinformation, and occasional errors. These issues are especially critical in government decision-making. Public administration’s core values of transparency and accountability magnify these concerns. Ensuring AI alignment with these principles is imperative, given the potential repercussions on policy outcomes and citizen trust. AI explainability plays a central role in ChatGPT’s adoption within government agencies. To address these concerns, we propose strategies like prompt engineering, data governance, and the adoption of interpretability tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). These tools aid in understanding and enhancing ChatGPT’s decision-making processes. This paper underscores the urgency for government agencies to adopt a proactive stance, proposing a 4-step framework complete with potential measures to enhance ChatGPT’s explainability within the specific context of public administration. Collaborative efforts between AI practitioners and public administrators are essential for striking an equilibrium between the capabilities of ChatGPT and the unique demands of government operations, ultimately ensuring a responsible integration of ChatGPT into public administration processes. 2023-12-01T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/8747 info:doi/10.1109/BigData59044.2023.10386797 https://ink.library.smu.edu.sg/context/sis_research/article/9750/viewcontent/VisionPaper_BigData_2023_av.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University AI Explainability; ChatGPT; Government; LIME; SHAP; Singapore; Artificial Intelligence and Robotics; Asian Studies; Management Information Systems |
institution |
Singapore Management University |
building |
SMU Libraries |
continent |
Asia |
country |
Singapore |
content_provider |
SMU Libraries |
collection |
InK@SMU |
language |
English |
topic |
AI Explainability; ChatGPT; Government; LIME; SHAP; Singapore; Artificial Intelligence and Robotics; Asian Studies; Management Information Systems |
description |
This paper explores ChatGPT’s potential in aiding government agencies, drawing on a case study of a government agency in Singapore. While ChatGPT’s text generation abilities offer promise, the tool brings inherent challenges, including data opacity, potential misinformation, and occasional errors. These issues are especially critical in government decision-making. Public administration’s core values of transparency and accountability magnify these concerns. Ensuring AI alignment with these principles is imperative, given the potential repercussions on policy outcomes and citizen trust. AI explainability plays a central role in ChatGPT’s adoption within government agencies. To address these concerns, we propose strategies like prompt engineering, data governance, and the adoption of interpretability tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). These tools aid in understanding and enhancing ChatGPT’s decision-making processes. This paper underscores the urgency for government agencies to adopt a proactive stance, proposing a 4-step framework complete with potential measures to enhance ChatGPT’s explainability within the specific context of public administration. Collaborative efforts between AI practitioners and public administrators are essential for striking an equilibrium between the capabilities of ChatGPT and the unique demands of government operations, ultimately ensuring a responsible integration of ChatGPT into public administration processes. |
format |
text |
author |
LEE, Hui Shan; SHANKARARAMAN, Venky; OUH, Eng Lieh |
author_sort |
LEE, Hui Shan |
title |
Vision Paper: Advancing of AI explainability for the use of ChatGPT in government agencies: Proposal of a 4-step framework |
publisher |
Institutional Knowledge at Singapore Management University |
publishDate |
2023 |
url |
https://ink.library.smu.edu.sg/sis_research/8747 https://ink.library.smu.edu.sg/context/sis_research/article/9750/viewcontent/VisionPaper_BigData_2023_av.pdf |
_version_ |
1814047500618694656 |