Key frame extraction from a big dataset
| Main Author: | |
|---|---|
| Other Authors: | |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2021 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/153862 |
| Institution: | Nanyang Technological University |
Summary:

An autonomous vehicle is an automobile platform capable of sensing and reacting to its immediate environment, with the goal of eliminating the need for human drivers. In autonomous driving, making decisions such as overtaking a vehicle or defining a route requires environmental perception, localization, and planning.

In particular, object detection is one of the core modules of an autonomous vehicle, as perception plays a central role in many tasks ranging from localization to obstacle avoidance and general motion planning. To enable multi-modal perception, autonomous vehicles carry several on-board sensors, such as cameras, radars, and lidars, whose data are consumed by deep-learning-based object detectors. A large, diverse, and accurately labeled dataset is essential for perception tasks like object detection. However, human annotation of a large dataset is very expensive, even for large companies, and yields diminishing returns.

This project addresses the problem by designing a keyframe filter package that extracts keyframes from a large dataset so that only the useful images are sent to annotators for manual labeling. The keyframe extraction approach explored in this project uses heuristics based on 2D multi-label tagging of images to identify keyframes in a dataset. The multi-label tagging is implemented with one of the state-of-the-art object detection frameworks, the Faster Region-based Convolutional Neural Network (Faster-RCNN). The project also proposes a novel addition that improves the Faster-RCNN model by including a visibility detection feature. The keyframe filter package permits the use of only a subset of the raw data for annotation while maintaining model performance and reducing the costs incurred.
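As a rough illustration of the 2D multi-label-tagging idea described in the summary, the sketch below uses a pretrained Faster R-CNN from torchvision to tag each frame with the set of detected object classes and keeps a frame as a keyframe whenever its tag set changes. The tag-change heuristic, the 0.7 confidence threshold, the `raw_frames` directory, and the use of torchvision's off-the-shelf model (without the project's visibility detection extension) are illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of a keyframe filter driven by 2D multi-label tagging with a
# pretrained Faster R-CNN (torchvision >= 0.13 assumed for weights="DEFAULT").
# The tag-change heuristic and the 0.7 threshold are illustrative assumptions.
from pathlib import Path

import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn


def multi_label_tags(model, image, score_thresh=0.7):
    """Return the set of class labels detected in an image above a threshold."""
    with torch.no_grad():
        pred = model([image.float() / 255.0])[0]  # model expects floats in [0, 1]
    keep = pred["scores"] >= score_thresh
    return set(pred["labels"][keep].tolist())


def extract_keyframes(image_paths, score_thresh=0.7):
    """Keep an image when its tag set differs from the last kept keyframe."""
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
    keyframes, prev_tags = [], None
    for path in image_paths:
        tags = multi_label_tags(model, read_image(str(path)), score_thresh)
        if tags != prev_tags:  # scene content changed -> treat as keyframe
            keyframes.append(path)
            prev_tags = tags
    return keyframes


if __name__ == "__main__":
    frames = sorted(Path("raw_frames").glob("*.jpg"))  # hypothetical frame directory
    for kf in extract_keyframes(frames):
        print(kf)
```

In this sketch, only the frames whose detected-class tag set differs from the previous keyframe would be forwarded to annotators, which is one simple way a heuristic filter could reduce the volume of data sent for manual labeling.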