Compute-efficient techniques for vision-based traffic surveillance


Bibliographic Details
Main Author: Garg, Kratika
Other Authors: Thambipillai Srikanthan
Format: Thesis-Doctor of Philosophy
Language:English
Published: Nanyang Technological University 2020
Subjects:
Online Access:https://hdl.handle.net/10356/136957
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-136957
record_format dspace
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
spellingShingle Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Garg, Kratika
Compute-efficient techniques for vision-based traffic surveillance
description It is envisaged that intelligent surveillance systems for traffic law enforcement will become ubiquitous to maximize roadway utilization, especially in urban areas. State-of-the-art techniques typically rely on compute-intensive methods for detecting moving vehicles in real time and under varying road conditions, making them unsuitable for incorporation into low-cost edge computing platforms. In this thesis, low-complexity techniques for real-time vehicle detection under adverse road conditions have been proposed to facilitate mass-volume adoption. A novel strategy has been proposed to divide lanes into regions relative to the vehicle size, i.e., blocks of interest (BoI). This approach has been shown to adapt well to diverse camera heights and fields of view, without necessitating explicit camera calibration. A low-complexity feature based on BoI intensity variance is then employed to effectively distinguish vehicles despite illumination variations, camera jitter, and noise. This strategy led to a substantial reduction in complexity without affecting the overall accuracy of vehicle detection. The proposed method was extended to initialize the background in free-flowing traffic conditions. Quantitative evaluations show that the accuracy of this low-complexity block-based background initialization technique is comparable (within 1%) to existing state-of-the-art pixel-based techniques. Next, a low-complexity background modeling technique for foreground detection is proposed. A lightweight BoI signature, which represents the relative change in the BoI with respect to the background, was employed for background/foreground classification. A periodic background maintenance strategy is employed to cater to changes in the scene, such as static shadows, small debris, illumination changes and weather conditions. The proposed foreground detection technique has been shown to be effective in the estimation of lane-wise traffic density and traffic incidents.
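The BoI intensity-variance feature described above can be illustrated with a minimal sketch: a block is compared against its background model, and high variance in the difference signals a vehicle. This is an assumed simplification of the thesis's technique; the function name, block layout, and `var_threshold` value are hypothetical placeholders, not values from the thesis.

```python
import numpy as np

def boi_variance_feature(frame, boi, bg_boi, var_threshold=150.0):
    """Classify a block of interest (BoI) as occupied or empty using a
    simple intensity-variance feature. `boi` is (y, x, h, w) in frame
    coordinates; `bg_boi` is the background model for that block."""
    y, x, h, w = boi
    block = frame[y:y + h, x:x + w].astype(np.float64)
    bg_block = bg_boi.astype(np.float64)
    # Variance of the difference between the block and its background
    # model: a near-uniform road surface yields low variance, while a
    # vehicle introduces internal texture and raises it. Because the
    # feature is relative to the background, it tolerates moderate
    # global illumination shifts better than a raw intensity threshold.
    diff_var = np.var(block - bg_block)
    return bool(diff_var > var_threshold)
```

Since only one variance per block is computed (rather than per-pixel model updates), the per-frame cost scales with the number of BoIs, which is consistent with the low-complexity goal stated above.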
Experimental evaluations using widely used datasets demonstrate that the proposed technique achieves accuracy comparable to existing state-of-the-art techniques at a significantly lower computational complexity. The proposed techniques have been incorporated into a single-chip multi-core platform (Odroid-XU4) to achieve 40 frames/second, paving the way for real-time-capable solutions at low cost. The background modeling technique was further enhanced with an adaptive Bayesian probabilistic framework to improve the robustness of foreground detection under challenging conditions. Unlike existing techniques, an adaptive modeling technique has been proposed that considers both the foreground and the background to withstand diverse changes in the background, without necessitating manual tuning of thresholds. In addition, it was shown that misdetections can be further minimized by incorporating a brief history of vehicle trajectories into the Bayesian model. This, together with a more sensitive intensity-based model, is selectively employed to accurately classify misdetections, if any. Extensive evaluations on widely used traffic datasets, including situations such as stationary foreground objects and slow-moving heavy traffic, demonstrate that the proposed technique can be automated to withstand varying environmental conditions such as illumination changes, weather conditions and camera jitter. The proposed method achieves pixel-level accuracy comparable to existing state-of-the-art techniques while notably improving compute performance (i.e., a frame rate of over 180 frames per second on the Odroid-XU4 platform). A low-complexity technique for moving shadow detection and highlight elimination has been proposed to further enhance the vehicle localization process. The novel approach relies on information such as vehicle size, shadow direction and intensity to realize a compute-efficient shadow elimination pipeline.
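The adaptive Bayesian idea above — modeling both foreground and background and folding trajectory history into the prior — can be sketched in toy form with one-dimensional Gaussian likelihoods over a block signature. Everything here (the 1-D Gaussian models, the function name, all parameter values) is an illustrative assumption, not the thesis's actual formulation.

```python
import math

def bayes_classify(signature, bg_mean, bg_std, fg_mean, fg_std, prior_fg=0.3):
    """Toy Bayesian block classifier: returns the posterior probability
    that a block is foreground, given Gaussian likelihood models for
    both background and foreground. `prior_fg` stands in for the
    trajectory-informed prior; it would be raised for blocks recently
    occupied by a tracked vehicle, so brief occlusions or stationary
    vehicles are less likely to be absorbed into the background."""
    def gauss(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    p_fg = gauss(signature, fg_mean, fg_std) * prior_fg
    p_bg = gauss(signature, bg_mean, bg_std) * (1.0 - prior_fg)
    # Normalized posterior: no hand-tuned decision threshold is needed
    # beyond comparing the two competing models.
    return p_fg / (p_fg + p_bg)
```

Because the decision compares two adaptive models rather than thresholding a single distance, the classifier shifts automatically as both models are updated, which mirrors the "without manual tuning of thresholds" property claimed above.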
In addition, a classification based on interior edge features alone has been employed to cope with complex scenarios. Unlike existing techniques that rely on computationally intensive explicit region segmentation, the proposed technique examines only the predefined BoIs to substantially lower the overall computational complexity. The method was also adapted to realize a cascaded feature pipeline to efficiently recover vehicle blocks and eliminate highlight blocks. Extensive evaluations on large datasets demonstrate that the proposed shadow elimination technique outperforms state-of-the-art techniques for varying shadow directions, intensities, and sizes. Additionally, it achieves an over 20-times speed-up over the state-of-the-art technique on the Odroid-XU4, reaching a frame rate of 100 frames/second. Similar performance was achieved by the proposed moving highlight elimination technique under varying illumination conditions. Finally, an integrated framework for vehicle detection in traffic surveillance systems has been proposed by combining the proposed building blocks for background/foreground detection and moving shadow and highlight elimination. In addition, the framework is invariant to camera movements (camera shake and shift) and small objects/debris that occur under typical operating conditions. Quantitative and qualitative evaluations show that the proposed framework paves the way for robust, low-cost traffic surveillance solutions that facilitate real-time incident monitoring and traffic-law enforcement. Functional prototypes demonstrated on a low-cost Odroid-XU4 platform further validate the applicability of the proposed methods for mass-volume deployment of low-cost, robust traffic surveillance solutions.
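The interior-edge classification mentioned above rests on a simple observation: a cast shadow darkens the road fairly uniformly (few edges inside the block), while a vehicle contains strong internal structure. A minimal sketch of such a test, restricted to a single predefined BoI, might look as follows; the gradient operator and both thresholds are hypothetical placeholders, not values from the thesis.

```python
import numpy as np

def is_shadow_block(block, edge_mag_threshold=30, edge_density_threshold=0.05):
    """Classify a candidate foreground block as cast shadow (True) or
    vehicle (False) using interior edge density only. `block` is a 2-D
    grayscale array covering one BoI."""
    b = block.astype(np.float64)
    # Simple finite-difference gradients over the block interior;
    # cropped so the horizontal and vertical maps align.
    gx = np.abs(np.diff(b, axis=1))[:-1, :]
    gy = np.abs(np.diff(b, axis=0))[:, :-1]
    edges = np.maximum(gx, gy) > edge_mag_threshold
    # A shadow block is nearly edge-free inside; a vehicle is not.
    return bool(edges.mean() < edge_density_threshold)
```

Because only the predefined BoIs are examined with two differences and a comparison each, no explicit region segmentation is needed, in line with the complexity argument made above.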
author2 Thambipillai Srikanthan
author_facet Thambipillai Srikanthan
Garg, Kratika
format Thesis-Doctor of Philosophy
author Garg, Kratika
author_sort Garg, Kratika
title Compute-efficient techniques for vision-based traffic surveillance
title_short Compute-efficient techniques for vision-based traffic surveillance
title_full Compute-efficient techniques for vision-based traffic surveillance
title_fullStr Compute-efficient techniques for vision-based traffic surveillance
title_full_unstemmed Compute-efficient techniques for vision-based traffic surveillance
title_sort compute-efficient techniques for vision-based traffic surveillance
publisher Nanyang Technological University
publishDate 2020
url https://hdl.handle.net/10356/136957
_version_ 1683494165897281536
spelling sg-ntu-dr.10356-1369572020-10-28T08:40:57Z Compute-efficient techniques for vision-based traffic surveillance Garg, Kratika Thambipillai Srikanthan School of Computer Science and Engineering astsrikan@ntu.edu.sg Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision Doctor of Philosophy 2020-02-07T03:15:39Z 2020-02-07T03:15:39Z 2019 Thesis-Doctor of Philosophy Garg, K. (2019). Compute-efficient techniques for vision-based traffic surveillance. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/136957 10.32657/10356/136957 en This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). application/pdf Nanyang Technological University