Let’s think outside the box: Exploring leap-of-thought in large language models with multimodal humor generation

Chain-of-Thought (CoT) [2, 3] guides large language models (LLMs) to reason step by step and can strengthen their logical reasoning ability. While effective for logical tasks, CoT is not conducive to creative problem-solving, which often requires out-of-the-box thinking and is crucial for innovation. In this paper, we explore the Leap-of-Thought (LoT) abilities within LLMs, a non-sequential creative paradigm involving strong associations and knowledge leaps. To this end, we study LLMs on the popular Oogiri game, which requires participants to use creativity and strong associative thinking to respond unexpectedly and humorously to a given image, text, or both, making it well suited to LoT study. To investigate LLMs' LoT ability in this setting, we first build Oogiri-GO, a multimodal and multilingual dataset of over 130,000 samples from the Oogiri game, and observe that most existing LLMs show insufficient LoT ability or fail outright on it. Accordingly, we introduce the Creative Leap-of-Thought (CLoT) paradigm to improve LLMs' LoT ability. CLoT first formulates the Oogiri-GO dataset into LoT-oriented instruction-tuning data and trains a pretrained LLM on it to acquire basic LoT humor generation and discrimination abilities. CLoT then applies explorative self-refinement, which encourages the LLM to generate more creative LoT data by exploring parallels between seemingly unrelated concepts, and selects the high-quality data for further self-training. CLoT not only excels at humor generation in the Oogiri game, as shown in Fig. 1, but also boosts creative abilities in other tasks such as the cloud guessing game and the divergent association task. These findings advance our understanding and offer a pathway to improve LLMs' creative capacities for innovative applications across domains. The dataset, code, and models have been released online: https://zhongshsh.github.io/CLoT.
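The two-stage CLoT paradigm described above (LoT-oriented instruction tuning followed by explorative self-refinement) can be sketched as a toy loop. Everything below is a hedged illustration under stated assumptions: the function names, the concept list, and the length-based "creativity" score are invented for exposition and are not the authors' released implementation.

```python
import random

# Toy sketch of the two-stage CLoT paradigm:
# (1) LoT-oriented instruction tuning, then (2) explorative self-refinement.
# All names and the scoring heuristic are illustrative assumptions.

def instruction_tune(model, data):
    """Stage 1 stand-in: 'training' here just records the tuning data seen."""
    return {"seen": model["seen"] + list(data)}

def explorative_self_refinement(model, prompts, rounds=2, keep_frac=0.5):
    """Stage 2: pair each prompt with a remote concept to force a knowledge
    leap, keep the top-scoring candidates, and retrain on them."""
    concepts = ["umbrella", "black hole", "tax form", "penguin"]
    for _ in range(rounds):
        candidates = [f"{p} + {random.choice(concepts)}" for p in prompts]
        candidates.sort(key=len, reverse=True)  # toy creativity score
        keep = candidates[: max(1, int(len(candidates) * keep_frac))]
        model = instruction_tune(model, keep)  # self-refinement step
    return model

model = instruction_tune({"seen": []}, ["caption this image humorously"])
model = explorative_self_refinement(model, ["photo of a cat in a suit"])
print(len(model["seen"]))  # 3: one seed sample plus one kept per round
```

The point of the sketch is the data flow, not the models: the LLM's own selected generations are folded back into its tuning set each round, which is what distinguishes self-refinement from one-shot instruction tuning.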


Bibliographic Details
Main Authors: ZHONG, Shanshan, HUANG, Zhongzhan, GAO, Shanghua, WEN, Wushao, LIN, Liang, ZITNIK, Marinka, ZHOU, Pan
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Graphics and Human Computer Interfaces; Programming Languages and Compilers
Online Access:https://ink.library.smu.edu.sg/sis_research/9017
https://ink.library.smu.edu.sg/context/sis_research/article/10020/viewcontent/2024_CVPR_CLOT.pdf
Institution: Singapore Management University
id sg-smu-ink.sis_research-10020
record_format dspace
spelling sg-smu-ink.sis_research-10020 2024-07-25T08:10:19Z Let’s think outside the box: Exploring leap-of-thought in large language models with multimodal humor generation ZHONG, Shanshan HUANG, Zhongzhan GAO, Shanghua WEN, Wushao LIN, Liang ZITNIK, Marinka ZHOU, Pan 2024-06-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/9017 https://ink.library.smu.edu.sg/context/sis_research/article/10020/viewcontent/2024_CVPR_CLOT.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Graphics and Human Computer Interfaces Programming Languages and Compilers
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Graphics and Human Computer Interfaces
Programming Languages and Compilers
spellingShingle Graphics and Human Computer Interfaces
Programming Languages and Compilers
ZHONG, Shanshan
HUANG, Zhongzhan
GAO, Shanghua
WEN, Wushao
LIN, Liang
ZITNIK, Marinka
ZHOU, Pan
Let’s think outside the box: Exploring leap-of-thought in large language models with multimodal humor generation
description Chain-of-Thought (CoT) [2, 3] guides large language models (LLMs) to reason step by step and can strengthen their logical reasoning ability. While effective for logical tasks, CoT is not conducive to creative problem-solving, which often requires out-of-the-box thinking and is crucial for innovation. In this paper, we explore the Leap-of-Thought (LoT) abilities within LLMs, a non-sequential creative paradigm involving strong associations and knowledge leaps. To this end, we study LLMs on the popular Oogiri game, which requires participants to use creativity and strong associative thinking to respond unexpectedly and humorously to a given image, text, or both, making it well suited to LoT study. To investigate LLMs' LoT ability in this setting, we first build Oogiri-GO, a multimodal and multilingual dataset of over 130,000 samples from the Oogiri game, and observe that most existing LLMs show insufficient LoT ability or fail outright on it. Accordingly, we introduce the Creative Leap-of-Thought (CLoT) paradigm to improve LLMs' LoT ability. CLoT first formulates the Oogiri-GO dataset into LoT-oriented instruction-tuning data and trains a pretrained LLM on it to acquire basic LoT humor generation and discrimination abilities. CLoT then applies explorative self-refinement, which encourages the LLM to generate more creative LoT data by exploring parallels between seemingly unrelated concepts, and selects the high-quality data for further self-training. CLoT not only excels at humor generation in the Oogiri game, as shown in Fig. 1, but also boosts creative abilities in other tasks such as the cloud guessing game and the divergent association task. These findings advance our understanding and offer a pathway to improve LLMs' creative capacities for innovative applications across domains. The dataset, code, and models have been released online: https://zhongshsh.github.io/CLoT.
format text
author ZHONG, Shanshan
HUANG, Zhongzhan
GAO, Shanghua
WEN, Wushao
LIN, Liang
ZITNIK, Marinka
ZHOU, Pan
author_facet ZHONG, Shanshan
HUANG, Zhongzhan
GAO, Shanghua
WEN, Wushao
LIN, Liang
ZITNIK, Marinka
ZHOU, Pan
author_sort ZHONG, Shanshan
title Let’s think outside the box: Exploring leap-of-thought in large language models with multimodal humor generation
title_short Let’s think outside the box: Exploring leap-of-thought in large language models with multimodal humor generation
title_full Let’s think outside the box: Exploring leap-of-thought in large language models with multimodal humor generation
title_fullStr Let’s think outside the box: Exploring leap-of-thought in large language models with multimodal humor generation
title_full_unstemmed Let’s think outside the box: Exploring leap-of-thought in large language models with multimodal humor generation
title_sort let’s think outside the box: exploring leap-of-thought in large language models with multimodal humor generation
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9017
https://ink.library.smu.edu.sg/context/sis_research/article/10020/viewcontent/2024_CVPR_CLOT.pdf
_version_ 1814047693525221376