Extracting event knowledge from pretrained language models
The advent of large-scale Pretrained Language Models (PLMs) has pushed the field of Natural Language Processing (NLP) to new frontiers in language generation. This paper explores script knowledge probing in three PLMs: FLAN-T5, OPT, and GPT-3 (specific...
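As a rough illustration of what prompt-based script knowledge probing can look like, the minimal sketch below queries FLAN-T5 through the Hugging Face Transformers library for the ordered events of an everyday scenario. The checkpoint name and prompt wording are illustrative assumptions, not the setup used in this project.

```python
# Minimal sketch of prompt-based script knowledge probing with FLAN-T5.
# Assumptions: the "google/flan-t5-base" checkpoint and the prompt wording
# below are illustrative, not the configuration used in the project.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Ask the model to generate the typical sequence of events (a "script")
# for an everyday scenario, then decode the generated text.
prompt = "List, in order, the steps a person takes to bake a cake."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```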
Main Author: Ong, Claudia Beth
Other Authors: Li Boyang
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Online Access: https://hdl.handle.net/10356/166081
Institution: Nanyang Technological University
Similar Items
- Data-efficient domain adaptation for pretrained language models
  by: Guo, Xu
  Published: (2023)
- Model-driven smart contract generation leveraging pretrained large language models
  by: Jiang, Qinbo
  Published: (2024)
- Code problem similarity detection using code clones and pretrained models
  by: Yeo, Geremie Yun Siang
  Published: (2023)
- Chinese idiom understanding with transformer-based pretrained language models
  by: TAN, Minghuan
  Published: (2022)
- Analyzing the Domain Robustness of Pretrained Language Models, Layer by Layer
  by: Kashyap, Abhinav Ramesh, et al.
  Published: (2021)