ALI-Agent: Assessing LLMs’ alignment with human values via agent-based evaluation

Large Language Models (LLMs) can elicit unintended and even harmful content when misaligned with human values, posing severe risks to users and society. To mitigate these risks, current evaluation benchmarks predominantly employ expert-designed contextual scenarios to assess how well LLMs align with human values. However, the labor-intensive nature of these benchmarks limits their test scope, hindering their ability to generalize to the extensive variety of open-world use cases and identify rare but crucial long-tail risks. Additionally, these static tests fail to adapt to the rapid evolution of LLMs, making it hard to evaluate timely alignment issues. To address these challenges, we propose ALI-Agent, an evaluation framework that leverages the autonomous abilities of LLM-powered agents to conduct in-depth and adaptive alignment assessments. ALI-Agent operates through two principal stages: Emulation and Refinement. During the Emulation stage, ALI-Agent automates the generation of realistic test scenarios. In the Refinement stage, it iteratively refines the scenarios to probe long-tail risks. Specifically, ALI-Agent incorporates a memory module to guide test scenario generation, a tool-using module to reduce human labor in tasks such as evaluating feedback from target LLMs, and an action module to refine tests. Extensive experiments across three aspects of human values (stereotypes, morality, and legality) demonstrate that ALI-Agent, as a general evaluation framework, effectively identifies model misalignment. Systematic analysis also validates that the generated test scenarios represent meaningful use cases and integrate enhanced measures to probe long-tail risks. Our code is available at https://github.com/SophieZheng998/ALI-Agent.git.
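The abstract outlines an emulate-then-refine agent loop built from a memory module, a tool-using module, and an action module. Purely as a rough illustration of that kind of loop, the Python sketch below shows one way such an evaluator could be wired together. Every name in it (Memory, emulate_scenario, is_misaligned, refine, evaluate, and the target/evaluator callables) is a hypothetical stand-in invented for this sketch, not the actual ALI-Agent API; the real implementation lives in the linked repository.

# Illustrative sketch only: a generic emulate-and-refine evaluation loop in the
# spirit of the abstract. None of these names come from the ALI-Agent codebase.
from dataclasses import dataclass, field
from typing import Callable, List

LLM = Callable[[str], str]  # any callable that maps a prompt to a completion

@dataclass
class Memory:
    """Keeps scenarios that previously exposed misalignment, to guide new tests."""
    past_scenarios: List[str] = field(default_factory=list)

    def retrieve(self, misconduct: str) -> List[str]:
        # A real system would use similarity retrieval; this stub returns everything.
        return self.past_scenarios

def emulate_scenario(misconduct: str, memory: Memory, evaluator: LLM) -> str:
    """Emulation stage: wrap a known misconduct into a realistic test scenario."""
    examples = "\n".join(memory.retrieve(misconduct))
    return evaluator(
        f"Past scenarios:\n{examples}\n"
        f"Write a realistic scenario that embeds this misconduct: {misconduct}"
    )

def is_misaligned(response: str, evaluator: LLM) -> bool:
    """Tool-using step: automatically grade the target model's feedback."""
    verdict = evaluator(f"Does this response endorse the misconduct? YES or NO.\n{response}")
    return verdict.strip().upper().startswith("YES")

def refine(scenario: str, response: str, evaluator: LLM) -> str:
    """Refinement stage: make the scenario subtler to probe long-tail risks."""
    return evaluator(
        f"The target model handled this scenario safely:\n{response}\n"
        f"Rewrite it to be more implicit:\n{scenario}"
    )

def evaluate(misconduct: str, target: LLM, evaluator: LLM,
             memory: Memory, max_refinements: int = 3) -> bool:
    """Return True if some (possibly refined) scenario exposes misalignment."""
    scenario = emulate_scenario(misconduct, memory, evaluator)
    for _ in range(max_refinements + 1):
        response = target(scenario)
        if is_misaligned(response, evaluator):
            memory.past_scenarios.append(scenario)  # remember successful probes
            return True
        scenario = refine(scenario, response, evaluator)
    return False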


Bibliographic Details
Main Authors: ZHENG, Jingnan, WANG, Han, NGUYEN, Tai D., ZHANG, An, SUN, Jun, CHUA, Tat-Seng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Software Engineering
Online Access: https://ink.library.smu.edu.sg/sis_research/9834
https://ink.library.smu.edu.sg/context/sis_research/article/10834/viewcontent/8621_ALI_Agent_Assessing_LLMs_.pdf
DOI: 10.48550/arXiv.2405.14125
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Institution: Singapore Management University
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)