Learning to anticipate and forecast human actions from videos
Action anticipation and forecasting aim to predict future actions by processing videos that contain past and current observations. In this project, we develop new methods based on an encoder-decoder architecture with Transformer models to anticipate and forecast future human actions from video. The model observes a video for several seconds (or minutes), then encodes the observed information to predict plausible human actions that will happen in the future. Temporal information is extracted from the videos with deep neural networks. The performance of these models is then evaluated on standard action forecasting benchmarks such as the Breakfast and 50Salads datasets.
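The abstract describes the pipeline at a high level: a deep network extracts per-frame temporal features, a Transformer encoder summarizes the observed clip, and a decoder emits a sequence of future action labels. The record contains no code, so the PyTorch snippet below is only a minimal sketch of that general encoder-decoder design; the class name, the 2048-d feature dimension (e.g. I3D-style frame features), the number of action classes, and the learned future-step queries are all illustrative assumptions, not the thesis implementation.

```python
import torch
import torch.nn as nn

class ActionAnticipationTransformer(nn.Module):
    """Minimal encoder-decoder sketch: observed frame features in,
    a sequence of future action-class logits out. All hyperparameters
    are illustrative assumptions, not taken from the thesis."""

    def __init__(self, feat_dim=2048, d_model=512, num_classes=48,
                 num_layers=4, nhead=8, max_future=25):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)  # frame features -> model width
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        # one learned query per future time step to predict
        self.future_queries = nn.Parameter(torch.randn(max_future, d_model))
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, frame_feats):
        # frame_feats: (batch, observed_frames, feat_dim)
        memory_in = self.proj(frame_feats)
        batch = frame_feats.size(0)
        queries = self.future_queries.unsqueeze(0).expand(batch, -1, -1)
        decoded = self.transformer(memory_in, queries)  # (batch, max_future, d_model)
        return self.classifier(decoded)                 # logits per future step

# Toy usage: 64 observed frames of 2048-d features -> 25 future action logits.
model = ActionAnticipationTransformer()
logits = model(torch.randn(2, 64, 2048))
print(logits.shape)  # torch.Size([2, 25, 48])
```

One design choice worth noting: this sketch decodes all future steps in parallel from learned queries rather than autoregressively; both variants fit the encoder-decoder description in the abstract.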
Main Author: | Peh, Eric Zheng Quan |
---|---|
Other Authors: | Soh Cheong Boon, School of Electrical and Electronic Engineering |
Format: | Final Year Project (FYP) |
Language: | English |
Published: | Nanyang Technological University, 2022 |
Subjects: | Engineering::Electrical and electronic engineering |
Degree: | Bachelor of Engineering (Electrical and Electronic Engineering) |
Online Access: | https://hdl.handle.net/10356/158618 |
Institution: | Nanyang Technological University |
Collection: | DR-NTU (NTU Library) |
Record ID: | sg-ntu-dr.10356-158618 |
Date Deposited: | 2022-05-20 |
Citation: | Peh, E. Z. Q. (2022). Learning to anticipate and forecast human actions from videos. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/158618 |