Video-based traffic analysis


Full Description

Saved in:
Bibliographic Details
Main Author: Fong, Hao Wei
Other Authors: Miao Chun Yan
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2021
Subjects:
Online Access: https://hdl.handle.net/10356/153491
Institution: Nanyang Technological University
Description
Summary: Detecting lane markers reliably and accurately is a crucial yet challenging task. While modern deep-learning-based lane detection achieves remarkable performance on complex traffic-line topologies and diverse driving scenarios, it often comes at the expense of real-time efficiency. Conventional lane-marker detection uses deep segmentation approaches that rely on pixel-level dense prediction to detect lane instances; this dense prediction property often bottlenecks efficiency. In this final year project, lane detection is instead formulated as a row-wise classification problem using predefined row anchors and grid cells that are smaller than the image. Computational complexity is reduced considerably because lane markers are located by classifying each grid cell instead of each pixel. The viability of improved loss-calculation strategies is also explored: focal loss lets training focus on misclassified examples, particularly complex scenarios, so the model better handles cases with no visual clues. In this context, the absence of visual clues for lane markers arises from challenging conditions such as severe occlusion and poor illumination. When these strategies are used together during model training, preliminary results show an additional performance gain on top of the row-wise classification formulation. The project has been evaluated extensively on two widely used lane detection datasets. The lightweight model achieves 220+ frames per second with a 1.14% performance gain over the previous UFAST method. Finally, an ablation study presents the performance gains of each improvement strategy.
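As an illustration of the focal-loss idea discussed in the summary, the following is a minimal sketch of the standard binary focal-loss formulation; the alpha and gamma values here are common defaults for illustration, not values taken from this project:

```python
import numpy as np

def focal_loss(probs, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: -alpha_t * (1 - p_t)**gamma * log(p_t).

    Well-classified examples (p_t near 1) are down-weighted by the
    (1 - p_t)**gamma factor, so training focuses on hard, misclassified
    cases, e.g. lanes hidden by severe occlusion or poor illumination.
    """
    probs = np.clip(probs, 1e-7, 1 - 1e-7)        # numerical safety for log
    p_t = np.where(targets == 1, probs, 1 - probs)  # prob of the true class
    alpha_t = np.where(targets == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# A confident correct prediction contributes far less loss than a hard one:
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.30]), np.array([1]))
```

With gamma = 0 this reduces to an alpha-weighted cross-entropy; increasing gamma sharpens the focus on misclassified examples.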