What makes the story forward?: Inferring commonsense explanations as prompts for future event generation

Prediction over event sequences is critical for many real-world applications in Information Retrieval and Natural Language Processing. Future Event Generation (FEG) is a challenging task in event sequence prediction because it requires not only fluent text generation but also commonsense reasoning to maintain the logical coherence of the entire event story. In this paper, we propose a novel explainable FEG framework, Coep. It highlights and integrates two types of event knowledge: sequential knowledge of direct event-event relations, and inferential knowledge that reflects the intermediate character psychology between events, such as intents, causes, and reactions, which intrinsically pushes the story forward. To alleviate the knowledge-forgetting issue, we design two modules, IM and GM, one for each type of knowledge, which are combined via prompt tuning. First, IM focuses on understanding inferential knowledge to generate commonsense explanations and to provide a soft prompt vector for GM. We also design a contrastive discriminator for better generalization ability. Second, GM generates future events by modeling direct sequential knowledge under the guidance of IM. Automatic and human evaluations demonstrate that our approach generates more coherent, specific, and logical future events.
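The abstract describes a two-module soft-prompt design: IM infers commonsense explanations and emits a soft prompt vector, and GM generates the next event conditioned on that prompt. As a rough illustrative sketch of that flow only (the real IM and GM are pretrained language models; the `embed` function, the candidate events, and the nearest-neighbour "generation" step below are invented stand-ins, not the paper's method):

```python
# Toy sketch of the COEP pipeline from the abstract -- NOT the authors' code.
# IM condenses the story so far into a soft prompt vector; GM produces the
# next event conditioned on that prompt.

def embed(text, dim=4):
    """Deterministic toy embedding: bucketed character-code averages."""
    vec = [0.0] * dim
    for i, ch in enumerate(text):
        vec[i % dim] += ord(ch)
    n = max(len(text), 1)
    return [v / n for v in vec]

def inference_module(events):
    """IM stand-in: pool event embeddings into one soft prompt vector
    (a proxy for inferred intents, causes, and reactions)."""
    vecs = [embed(e) for e in events]
    dim = len(vecs[0])
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]

def generation_module(candidates, soft_prompt):
    """GM stand-in: a real GM would prepend soft_prompt to its decoder
    input embeddings; here we simply rank hypothetical candidate events
    by squared distance to the prompt and return the closest one."""
    def dist(vec):
        return sum((a - b) ** 2 for a, b in zip(vec, soft_prompt))
    return min(candidates, key=lambda c: dist(embed(c)))

story = ["Anna entered the bakery", "She smelled fresh bread"]
candidates = ["She bought a warm loaf", "A meteor struck the moon"]
prompt = inference_module(story)
next_event = generation_module(candidates, prompt)
```

The point of the sketch is the interface, not the internals: IM's output conditions GM rather than being concatenated as text, which is what "soft prompt" means in the abstract.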


Bibliographic Details
Main Authors: LIN, Li, CAO, Yixin, HUANG, Lifu, LI, Shu Ang, HU, Xuming, WEN, Lijie, WANG, Jianmin
Format: text
Language:English
Published: Institutional Knowledge at Singapore Management University 2022
Subjects: commonsense reasoning; contrastive training; textual event generation; Artificial Intelligence and Robotics; Databases and Information Systems
Online Access:https://ink.library.smu.edu.sg/sis_research/7229
https://ink.library.smu.edu.sg/context/sis_research/article/8232/viewcontent/3477495.3532080_pvoa_cc_by.pdf
Institution: Singapore Management University
id sg-smu-ink.sis_research-8232
record_format dspace
format text application/pdf
publishDate 2022-07-01
doi info:doi/10.1145/3477495.3532080
license http://creativecommons.org/licenses/by/3.0/
collection Research Collection School Of Computing and Information Systems
topic commonsense reasoning; contrastive training; textual event generation; Artificial Intelligence and Robotics; Databases and Information Systems
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU