Chain of preference optimization: Improving chain-of-thought reasoning in LLMs

The recent development of chain-of-thought (CoT) decoding has enabled large language models (LLMs) to generate explicit logical reasoning paths for complex problem-solving. However, research indicates that these paths are not always deliberate and optimal. The tree-of-thought (ToT) method employs tree-searching to extensively explore the reasoning space and find better reasoning paths that CoT decoding might overlook. This deliberation, however, comes at the cost of significantly increased inference complexity. In this work, we demonstrate that fine-tuning LLMs leveraging the search tree constructed by ToT allows CoT to achieve similar or better performance, thereby avoiding the substantial inference burden. This is achieved through Chain of Preference Optimization (CPO), where LLMs are fine-tuned to align each step of the CoT reasoning paths with those of ToT using the inherent preference information in the tree-search process. Extensive experimental results show that CPO significantly improves LLM performance in solving a variety of complex problems, including question answering, fact verification, and arithmetic reasoning, demonstrating its effectiveness. Our code is available at this https URL.
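
The abstract describes fine-tuning an LLM so that each CoT step is aligned with the steps preferred during ToT's tree search. The following is a minimal, hypothetical Python sketch of that idea: it harvests (prompt, chosen, rejected) triples from a ToT-style search tree and scores them with a standard DPO-style pairwise loss. The names (ThoughtNode, build_step_preferences, dpo_loss) and the choice of a DPO objective are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch only: assumes a DPO-style pairwise loss over per-step
# preferences harvested from a tree-of-thought search tree. Not the paper's code.
from dataclasses import dataclass, field
from typing import List, Tuple

import torch
import torch.nn.functional as F


@dataclass
class ThoughtNode:
    """One candidate reasoning step produced during ToT search."""
    text: str
    on_selected_path: bool                       # kept by ToT's search, or pruned
    children: List["ThoughtNode"] = field(default_factory=list)


def build_step_preferences(prefix: str, node: ThoughtNode) -> List[Tuple[str, str, str]]:
    """Emit (prompt, chosen, rejected) triples: at each step, a thought on the
    path selected by ToT is preferred over its discarded siblings."""
    triples = []
    chosen = [c for c in node.children if c.on_selected_path]
    rejected = [c for c in node.children if not c.on_selected_path]
    for c in chosen:
        for r in rejected:
            triples.append((prefix, c.text, r.text))
        # Recurse along the selected path, extending the reasoning prefix.
        triples += build_step_preferences(prefix + " " + c.text, c)
    return triples


def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO objective applied to the per-step pairs; the exact loss used
    by CPO is not stated in this record, so this is an assumption."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(beta * margin).mean()


# Example with dummy sequence log-probabilities for one preference pair.
loss = dpo_loss(torch.tensor([-1.2]), torch.tensor([-2.3]),
                torch.tensor([-1.5]), torch.tensor([-2.0]))

In practice the log-probabilities would come from the policy and a frozen reference model scoring each candidate step given its reasoning prefix; the full CPO training recipe is described in the linked PDF.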

Bibliographic Details
Main Authors: ZHANG, Xuan, DU, Chao, PANG, Tianyu, LIU, Qian, GAO, Wei, LIN, Min
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Databases and Information Systems
Online Access:https://ink.library.smu.edu.sg/sis_research/9881
https://ink.library.smu.edu.sg/context/sis_research/article/10881/viewcontent/2406.09136v2.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-10881
record_format dspace
date 2024-12-01T08:00:00Z
format text application/pdf
rights http://creativecommons.org/licenses/by-nc-nd/4.0/
collection Research Collection School Of Computing and Information Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Databases and Information Systems
description The recent development of chain-of-thought (CoT) decoding has enabled large language models (LLMs) to generate explicit logical reasoning paths for complex problem-solving. However, research indicates that these paths are not always deliberate and optimal. The tree-of-thought (ToT) method employs tree-searching to extensively explore the reasoning space and find better reasoning paths that CoT decoding might overlook. This deliberation, however, comes at the cost of significantly increased inference complexity. In this work, we demonstrate that fine-tuning LLMs leveraging the search tree constructed by ToT allows CoT to achieve similar or better performance, thereby avoiding the substantial inference burden. This is achieved through Chain of Preference Optimization (CPO), where LLMs are fine-tuned to align each step of the CoT reasoning paths with those of ToT using the inherent preference information in the tree-search process. Extensive experimental results show that CPO significantly improves LLM performance in solving a variety of complex problems, including question answering, fact verification, and arithmetic reasoning, demonstrating its effectiveness. Our code is available at this https URL.
format text
author ZHANG, Xuan
DU, Chao
PANG, Tianyu
LIU, Qian
GAO, Wei
LIN, Min
title Chain of preference optimization: Improving chain-of-thought reasoning in LLMs
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9881
https://ink.library.smu.edu.sg/context/sis_research/article/10881/viewcontent/2406.09136v2.pdf