Visual analytics using artificial intelligence (multi-modality driver action recognition)

Bibliographic Details
Main Author: Lee, Jaron Jin-An
Other Authors: Yap Kim Hui
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access: https://hdl.handle.net/10356/176634
Physical Description
Summary: A report published by the National Highway Traffic Safety Administration (NHTSA) in the United States showed that up to 3522 people were killed due to distracted driving. Various driver monitoring systems have been developed to tackle this issue, potentially saving lives and improving road safety; one such system is a driver video action recognition system. This project aims to develop a robust and stable driver action recognition model utilizing multi-modality data streams, including RGB, IR, and depth. A literature review was carried out to determine a suitable model and dataset for the project. Following model and dataset selection, hyperparameter tuning was conducted to optimize VideoMAE V2 for improved accuracy and efficiency on the Drive&Act (DAA) dataset. Various fusion learning techniques were explored and implemented into the system for evaluation. Early fusion achieves an average Top-1 accuracy of 82.40%, while late fusion obtains an average Top-1 accuracy of 84.30% on the test set. Overall, the project demonstrated the capability of incorporating early and late fusion methods with the VideoMAE V2 model to achieve satisfactory results. This suggests the potential applicability of the model to other multi-modality action recognition tasks. Future work will explore alternative fusion techniques and extend the model to other driver datasets.
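The summary contrasts early fusion (combining the RGB, IR, and depth streams before a single model) with late fusion (combining per-modality predictions). A minimal NumPy sketch of the two strategies, assuming per-modality feature vectors and simple linear classifiers for illustration; the function names and shapes are hypothetical, not the project's actual VideoMAE V2 pipeline:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def early_fusion(rgb, ir, depth, classifier):
    # Early fusion: concatenate the modality features along the feature
    # axis, then run a single classifier on the fused representation.
    fused = np.concatenate([rgb, ir, depth], axis=-1)
    return softmax(classifier(fused))

def late_fusion(rgb, ir, depth, classifiers):
    # Late fusion: run one classifier per modality, then average the
    # resulting class-probability distributions.
    probs = [softmax(clf(x)) for clf, x in zip(classifiers, (rgb, ir, depth))]
    return np.mean(probs, axis=0)
```

Early fusion lets one backbone learn cross-modal interactions but requires aligned inputs; late fusion keeps the modalities independent until the score level, which matches the slightly higher Top-1 accuracy reported for late fusion here.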