ALI-Agent: Assessing LLMs' alignment with human values via agent-based evaluation

Large Language Models (LLMs) can elicit unintended and even harmful content when misaligned with human values, posing severe risks to users and society. To mitigate these risks, current evaluation benchmarks predominantly employ expert-designed contextual scenarios to assess how well LLMs align with...

Overview

Saved in:
Bibliographic Details
Main Authors: ZHENG, Jingnan, WANG, Han, NGUYEN, Tai D., ZHANG, An, SUN, Jun, CHUA, Tat-Seng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/9834
https://ink.library.smu.edu.sg/context/sis_research/article/10834/viewcontent/8621_ALI_Agent_Assessing_LLMs_.pdf
Institution: Singapore Management University