Single Camera Data Augmentation in End-To-End Deep Learning Simulated Self-Driving Car

Bibliographic Details
Main Authors: Timur, Muhammad Idham Ananta, Istiyanto, Jazi Eko, Dharmawan, Andi, Setiadi, Beatrice Paulina
Format: Other (non-peer-reviewed)
Language: English
Published: ICIC Express Letters, Part B: Applications 2022
Subjects:
Online Access:https://repository.ugm.ac.id/283882/1/SINGLE-CAMERA-DATA-AUGMENTATION-IN-ENDTOEND-DEEP-LEARNING-SIMULATED-SELFDRIVING-CARICIC-Express-Letters-Part-B-Applications.pdf
https://repository.ugm.ac.id/283882/
https://www.researchgate.net/publication/363731195_SINGLE_CAMERA_DATA_AUGMENTATION_IN_END-TO-END_DEEP_LEARNING_SIMULATED_SELF-DRIVING_CAR
Institution: Universitas Gadjah Mada
Description
Summary: Developing a self-driving car is a daunting task. Usually, multiple steps are involved in the learning pipeline to produce a feasible model. This research offers an alternative approach using end-to-end deep learning with a large amount of data generated from a simulated environment. The dataset, combined with data augmentation, is expected to contain enough features to be learned. The training dataset is collected by manually driving the car and using the simulator's record feature to store it. It consists of the vehicle's driving-behaviour labels and images taken from a single camera mounted on the car's dashboard. A Convolutional Neural Network (CNN) is used to process copies of the dataset images and their labels while training the model. The result of this research is a simulation in which the trained model steers the car by predicting the steering angle from the camera's image input. The accuracy of the trained model is measured using the root mean square error (RMSE), yielding a value of 0.178. We also evaluate the model according to its driving time and distance in autonomous-driving tests from different starting points on the map.
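
For illustration only, below is a minimal sketch of the kind of end-to-end pipeline the abstract describes: a small CNN that maps a single dashboard-camera frame to a predicted steering angle, trained with mean squared error so that RMSE can be reported, plus a simple single-camera flip augmentation. The framework (Python with TensorFlow/Keras), image size, layer sizes, and augmentation choice are assumptions for the sketch, not the authors' exact configuration.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_steering_model(input_shape=(66, 200, 3)):
    # Small end-to-end CNN: raw camera frame in, steering angle out (regression).
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 127.5, offset=-1.0),   # scale pixel values to [-1, 1]
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dense(50, activation="relu"),
        layers.Dense(1),                              # predicted steering angle
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def augment_flip(image, angle):
    # Single-camera data augmentation: mirror the frame horizontally
    # and negate the steering label so the pair stays consistent.
    return np.fliplr(image), -angle

def rmse(y_true, y_pred):
    # Root mean square error, the accuracy metric reported in the abstract (0.178).
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))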