Time series task extraction from large language models


Bibliographic Details
Main Author: Toh, Leong Seng
Other Authors: Thomas Peyrin
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Online Access: https://hdl.handle.net/10356/180995
Institution: Nanyang Technological University
Description
Summary: Recent advancements in large language models (LLMs) have shown tremendous potential to revolutionize time series classification. These models possess newly improved capabilities, including impressive zero-shot learning and remarkable reasoning skills, without requiring any additional training data. We anticipate that such LLM-based classifiers, denoted A_LLM, will become the standard for time series classification, eventually replacing resource-intensive machine learning models. However, the lack of interpretability in LLMs and their potential for inaccuracies pose significant challenges that undermine user trust. To build user trust, two critical gaps must be addressed: reliability and interpretability. To this end, we propose a method to approximate A_LLM with a set of human-interpretable binary feature rules, denoted A_rule. This approach leverages the TT-rules (Truth Table rules) model developed by Benamira et al. (2023) to extract binary rules through LLM inference on time series datasets. Once the rules are derived, the LLM is set aside and inference is conducted exclusively with A_rule. The methodology is validated on three cyber-security datasets, incorporating the privacy-preserving features outlined by Soegeng (2024) to protect sensitive data.
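
For illustration only, the following is a minimal sketch of the distillation loop the summary describes: the LLM (A_LLM) labels each series, binary features are extracted, and an interpretable rule model (A_rule) is fit on those labels. All names here (binarize, distill_rules, llm_label, and the shallow decision tree standing in for the TT-rules model) are assumptions for illustration, not the project's actual code.

```python
# Hypothetical sketch of the A_LLM -> A_rule distillation pipeline.
# None of these names or design choices come from the project itself.

from typing import Callable, Sequence
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # stand-in for TT-rules


def binarize(series: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """Map a real-valued series to binary features by thresholding simple
    statistics. This feature design is an assumption for illustration."""
    stats = np.array([series.mean(), series.max(), series.min()])
    return (stats > thresholds).astype(int)


def distill_rules(dataset: Sequence[np.ndarray],
                  thresholds: np.ndarray,
                  llm_label: Callable[[np.ndarray], int]) -> DecisionTreeClassifier:
    """Query the LLM (A_LLM) for a label on each series, then fit an
    interpretable rule model (A_rule) on the binary features."""
    X = np.stack([binarize(s, thresholds) for s in dataset])
    y = np.array([llm_label(s) for s in dataset])
    rules = DecisionTreeClassifier(max_depth=3)  # shallow, human-readable
    rules.fit(X, y)
    return rules  # the LLM is set aside from this point on

# Inference then uses only the extracted rules:
#   label = rules.predict(binarize(new_series, thresholds).reshape(1, -1))
```

A shallow decision tree appears here purely as an interpretable stand-in; the work itself extracts rules with the TT-rules model of Benamira et al. (2023).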