Exploring sequential VAE to handle time-series data
Saved in:
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/166501
Institution: Nanyang Technological University
Summary: Variational Autoencoders (VAEs) have gained significant popularity in recent years as a powerful class of generative models. Introduced in 2013 as a means of learning latent representations of data in an unsupervised manner, they also provide a probabilistic framework for generation. A key innovation of VAEs is their combination of deep learning techniques with variational inference, which allows the model to learn a probabilistic mapping between the data and a latent space. This combination enables VAEs to learn compact, meaningful representations of data and to generate new samples by sampling from the latent space. Since their introduction, VAEs have been widely adopted for tasks such as image generation, text generation, and representation learning, contributing to their rise in popularity. The aim of this project is to explore VAE architectures that can capture spatiotemporal correlations and learn the complex underlying structure of time-series data. In particular, the objective is to ensure that time-series segments that are temporally close also have similar latent representations. This means the VAE should not only learn the important features present in the data, but also capture the temporal differences between consecutive data points in the latent space, which could be useful for tasks such as out-of-distribution (OOD) detection. Ultimately, this project aims to improve OOD detection by proposing architectures that perform well at that task.
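The temporal-closeness objective described in the summary can be illustrated with a loss function that augments the usual VAE objective (reconstruction plus KL divergence) with a penalty pulling consecutive latent codes together. The sketch below is purely illustrative and is not the project's actual architecture: it uses toy linear encoder/decoder maps, and the names (`sequential_vae_loss`, the weight `lam`) and dimensions are assumptions, not taken from the thesis.

```python
import numpy as np

def encode(x, W_mu, W_logvar):
    # Toy linear "encoder": maps observations to Gaussian latent parameters.
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def decode(z, W_dec):
    # Toy linear "decoder": maps latents back to observation space.
    return z @ W_dec

def sequential_vae_loss(x_seq, W_mu, W_logvar, W_dec, rng, lam=1.0):
    """ELBO-style loss over a time series of shape (T, D), plus a
    temporal smoothness term (hypothetical) that encourages latents of
    consecutive time steps to stay close in the latent space."""
    mu, logvar = encode(x_seq, W_mu, W_logvar)
    z = reparameterize(mu, logvar, rng)
    x_hat = decode(z, W_dec)
    recon = np.mean((x_seq - x_hat) ** 2)                      # reconstruction error
    kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))   # KL to N(0, I) prior
    temporal = np.mean(np.sum((z[1:] - z[:-1]) ** 2, axis=1))  # latent smoothness
    return recon + kl + lam * temporal

# Toy usage: 16 time steps of 8-dimensional data, 3-dimensional latent space.
rng = np.random.default_rng(0)
T, D, L = 16, 8, 3
x = rng.standard_normal((T, D))
W_mu, W_logvar, W_dec = (0.1 * rng.standard_normal(s) for s in [(D, L), (D, L), (L, D)])
loss = sequential_vae_loss(x, W_mu, W_logvar, W_dec, rng)
```

In a real sequential VAE the linear maps would be replaced by recurrent or convolutional networks, but the structure of the objective — reconstruction, KL, and a term tying temporally adjacent latents together — carries over.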