Extracting and encoding event sequences for use in recurrent neural networks
Abstract: The area of story content generation has been widely explored in the field of natural language processing. Previously, analogy-based methodologies were used to approach this task. However, as technology has improved, more and more research has tapped into recurrent neural networks, specifically long short-term memory (LSTM) networks, to accomplish it. These studies train LSTM models to learn a story, either through sequences of scenes or, more commonly, through sequences of action events, and have the models predict a subsequent event from this input. While the approach proves to be more complex, the general finding across these studies is an inability to provide consistently decent responses, a recurring problem attributed mainly to the poor quality of the training datasets. This research takes this opportunity and provides an alternative method of extracting and encoding event sequences in stories. A performance analysis of the event extraction system over 8 stories yielded an F1 score of 82%. The effectiveness of the encoding was evaluated by training an LSTM network on the encoded events extracted from a set of children's stories. Results show that the system can generate a decent response more than half the time, with its ability limited by the current size of the dataset.
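The abstract describes training an LSTM on encoded event sequences so that the network can predict the event that follows a given sequence. As a minimal sketch only, assuming events have already been encoded as integer IDs (the thesis's actual encoding scheme, vocabulary, and network configuration are not given here), the following PyTorch snippet trains a small LSTM for next-event prediction; the vocabulary size, layer dimensions, and the randomly generated "story" are placeholders.

```python
# Minimal next-event-prediction sketch, NOT the thesis's actual pipeline.
# Assumption: each story is already encoded as a sequence of integer event IDs.
import torch
import torch.nn as nn

class NextEventLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, seq):
        # seq: (batch, time) integer event IDs -> (batch, time, vocab) logits
        hidden, _ = self.lstm(self.embed(seq))
        return self.out(hidden)

vocab_size = 500                                # hypothetical event vocabulary
model = NextEventLSTM(vocab_size)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

story = torch.randint(0, vocab_size, (1, 20))   # placeholder encoded story
inputs, targets = story[:, :-1], story[:, 1:]   # target = next event (teacher forcing)
for _ in range(10):
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At inference time, the logits at the final time step can be argmaxed or sampled to propose the next event, which is the prediction task the abstract evaluates.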
Main Author: | Villaluna, Winfred Louie D.
---|---
Format: | text (Master's thesis)
Language: | English
Published: | Animo Repository, 2019
Subjects: | Natural language generation (Computer science); Parsing (Computer grammar); Computational linguistics; Computer Sciences
Online Access: | https://animorepository.dlsu.edu.ph/etd_masteral/6517 https://animorepository.dlsu.edu.ph/context/etd_masteral/article/13531/viewcontent/Villaluna__Winfred_Louie_D.____w_border__Main_Document2___Extracting_and_Encoding_Event_Sequences.pdf
Institution: | De La Salle University