Architecture of electrical equipment health and monitoring (EHM) system based on InfluxDB platform

Bibliographic Details
Main Author: Nong, Chunkai
Other Authors: Soong Boon Hee
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/166684
Institution: Nanyang Technological University
Summary: The Transformer architecture was originally developed for natural language processing tasks such as machine translation and language modeling. Its core component is a self-attention mechanism that lets the model attend to different parts of the input sequence, enabling it to capture long-range dependencies. Recently, the Transformer architecture has gained popularity in time series forecasting, where the goal is to predict future values of a sequence from its past values. One advantage of the Transformer for this task is that it can handle variable-length input sequences, a common characteristic of time series data. In addition, the self-attention mechanism allows the model to capture complex temporal relationships between different parts of the input sequence. Several deep learning models based on the Transformer architecture have been proposed for time series forecasting, including Informer and FEDformer. The present study compares the performance of two such models, Informer and FEDformer, on two datasets: ETDataset and a real-world dataset. The results indicate that FEDformer outperforms Informer on both datasets. It was also observed that the Transformer model, on which both Informer and FEDformer are based, performs well on certain features of the data but not on others. These findings suggest that while Transformer-based models can be effective for time series forecasting, their performance may vary with the characteristics of the data being analyzed. This study contributes to the growing body of research on deep learning models for time series forecasting and highlights the importance of selecting an appropriate model for a given dataset.
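
As context for the mechanism the abstract describes, the following is a minimal NumPy sketch of single-head scaled dot-product self-attention; the function name, dimensions, and random weight matrices are illustrative assumptions and are not taken from the thesis.

import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention.

    x: (seq_len, d_model) input sequence; returns (seq_len, d_model).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # project inputs to queries/keys/values
    scores = q @ k.T / np.sqrt(q.shape[-1])         # similarity between every pair of time steps
    scores -= scores.max(axis=-1, keepdims=True)    # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row of weights sums to 1
    return weights @ v                              # each output mixes values from all time steps

rng = np.random.default_rng(0)
seq_len, d_model = 24, 8                            # illustrative: 24 past steps, 8 features each
x = rng.standard_normal((seq_len, d_model))
w_q, w_k, w_v = (0.1 * rng.standard_normal((d_model, d_model)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)       # (24, 8): output length tracks input length

Because the attention weights are computed between every pair of time steps, each output position can draw directly on arbitrarily distant past values, which is the long-range dependency property noted above; and since nothing in the computation fixes seq_len, the same code runs unchanged for sequences of any length, which is what makes the architecture amenable to variable-length time series.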