Video-based traffic analysis

Saved in:
Bibliographic Details
Main Author: Fong, Hao Wei
Other Authors: Miao Chun Yan
Format: Final Year Project
Language:English
Published: Nanyang Technological University 2021
Subjects:
Online Access:https://hdl.handle.net/10356/153491
Institution: Nanyang Technological University
Description
Summary: Detecting lane markers reliably and accurately is a crucial yet challenging task. While modern deep-learning-based lane detection achieves remarkable performance on complex traffic-line topologies and diverse driving scenarios, it often comes at the expense of real-time efficiency. Conventional lane-marker detection uses deep segmentation approaches that rely on pixel-level dense prediction to identify lane instances; this dense-prediction property often bottlenecks efficiency. In this final year project, lane detection is instead formulated as a row-wise classification problem using predefined row anchors and grid cells that are coarser than the image resolution. Computational complexity is reduced considerably because lane markers are located by classifying each grid cell rather than each pixel. The viability of improved loss-calculation strategies is also investigated: focal loss lets training focus on misclassified examples, particularly complex scenarios, so the model better handles cases with no visual clues of lane markers, such as severe occlusion and poor illumination. When used together during model training, these strategies show, in preliminary results, an additional performance gain on top of the row-wise classification formulation. The project is evaluated extensively on two widely used lane detection datasets. The lightweight model achieves over 220 frames per second with a performance gain of 1.14% over the previous UFAST method. Finally, an ablation study presents the performance gains of each improvement strategy.
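The two ideas in the summary can be sketched briefly. The following is a minimal illustration only, not the project's actual code: the input resolution, the number of row anchors, and the number of grid cells are assumed values chosen for demonstration, and the focal loss shown is the standard binary form.

```python
import math

# Row-wise formulation: instead of classifying every pixel of an
# H x W image, classify, for each predefined row anchor, which of
# `num_cells` horizontal grid cells contains the lane marker
# (plus one extra "no lane" class per row).
H, W = 288, 800                  # assumed input resolution
num_rows, num_cells = 18, 100    # assumed row anchors / grid cells

pixel_preds = H * W                       # dense segmentation output size
grid_preds = num_rows * (num_cells + 1)   # row-wise classification output size
# The grid formulation needs far fewer predictions per lane than
# dense per-pixel segmentation.
assert grid_preds < pixel_preds / 100

def focal_loss(p, target, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights well-classified examples so
    training concentrates on hard cases (e.g. occlusion, poor
    illumination). p is the predicted positive-class probability,
    target is 0 or 1."""
    p_t = p if target == 1 else 1.0 - p
    a_t = alpha if target == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confidently correct prediction contributes far less loss than a
# misclassified one, which is the focusing effect described above.
easy = focal_loss(0.95, 1)  # well-classified positive
hard = focal_loss(0.10, 1)  # badly misclassified positive
```

Here `hard` is several orders of magnitude larger than `easy`, so gradient updates are dominated by the difficult examples rather than the many easy background cells.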