Time series task extraction from large language models
Recent advancements in large language models (LLMs) have shown tremendous potential to revolutionize time series classification. These models offer markedly improved capabilities, including impressive zero-shot learning and remarkable reasoning skills, without requiring any additional training...
Saved in:
Main Author: Toh, Leong Seng
Other Authors: Thomas Peyrin
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/180995
Similar Books
- Evaluating TT-net capability on time series forecasting tasks
  by: Nguyen, Tung Bach
  Published: (2025)
- Contextual human object interaction understanding from pre-trained large language model
  by: Gao, Jianjun, et al.
  Published: (2025)
- A comparison of global rule induction and HMM approaches on extracting story boundaries in news video
  by: Chaisorn, L., et al.
  Published: (2013)
- Large language model is not a good few-shot information extractor, but a good reranker for hard samples!
  by: Ma, Yubo, et al.
  Published: (2023)
- A hierarchical multi-modal approach to story segmentation in news video
  by: Chaisorn, Lekha
  Published: (2010)