Disentangling transformer language models as superposed topic models
Topic modelling is an established research area in which the quality of a given topic is measured using coherence metrics. Topics are often inferred from Neural Topic Models (NTMs) by interpreting their decoder weights, i.e., the top-activated words projected from individual neurons. Transformer-based...
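The abstract's notion of reading topics off decoder weights can be sketched concretely. The snippet below is a minimal, hypothetical illustration (the variable names, toy vocabulary, and random weights are not from the paper): each row of an NTM decoder weight matrix maps a topic neuron to vocabulary scores, and the top-weighted words per row are taken as that topic's descriptors, which coherence metrics then score.

```python
import numpy as np

# Hypothetical NTM decoder weights: rows index topic neurons,
# columns index vocabulary words. Real models learn these weights;
# here we use random values purely to show the interpretation step.
vocab = ["game", "team", "player", "court", "ruling", "judge", "trial", "score"]
rng = np.random.default_rng(0)
decoder_weights = rng.normal(size=(2, len(vocab)))  # 2 topics x |V| words

def top_words(weights, vocab, k=3):
    """Return the k highest-weighted words for each topic (row)."""
    topics = []
    for row in weights:
        idx = np.argsort(row)[::-1][:k]  # indices of the k largest weights
        topics.append([vocab[i] for i in idx])
    return topics

print(top_words(decoder_weights, vocab))
```

Coherence metrics would then be computed over each returned word list, e.g. by checking how often its word pairs co-occur in a reference corpus.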
Saved in:
Main Authors: LIM, Jia Peng; LAUW, Hady Wirawan
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Online Access: https://ink.library.smu.edu.sg/sis_research/8470
https://ink.library.smu.edu.sg/context/sis_research/article/9473/viewcontent/2023.emnlp_main.534__1_.pdf
Institution: Singapore Management University
Similar Items
- Benchmarking foundation models with language-model-as-an-examiner
  by: BAI, Yushi, et al.
  Published: (2023)
- Unified modeling language: A complexity analysis
  by: SIAU, Keng, et al.
  Published: (2001)
- LLM-adapters: An adapter family for parameter-efficient fine-tuning of large language models
  by: HU, Zhiqiang, et al.
  Published: (2023)
- A comprehensive evaluation of large language models on legal judgment prediction
  by: SHUI, Ruihao, et al.
  Published: (2023)
- Large language models as source planner for personalized knowledge-grounded dialogues
  by: WANG, Hongru, et al.
  Published: (2023)