AR-assisted driving with adversarial detector and reinforcement learning

Bibliographic Details
Main Author: Neo, Gavin Jun Hui
Other Authors: Jun Zhao
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access:https://hdl.handle.net/10356/167621
Institution: Nanyang Technological University
Description
Summary: A Digital Twin is a Metaverse virtual model that accurately replicates physical scenes; this replication enables simulation and prediction of the physical world. Augmented reality (AR)-assisted driving is an application that relies on real-time digital twinning. When driving, images of physical scenes, such as signboards, are captured by the Internet of Vehicles (IoV). A Service Provider (SP) then uploads the captured images to a Service Provider Base Station (SPBS), where they are replicated in a virtual model. Information is computed and displayed back to assist drivers in an AR format. However, such applications may invite adversarial attacks, for example placing adversarial patches onto the captured images before they are replicated in the Metaverse. These tiny patches on physical objects may cause the virtual model to generate false information, and this misinformation can be detrimental to drivers on the road. The first portion of this project introduces an adversarial patch detection model placed in the SPBS. The second portion introduces a reinforcement learning model whose tasks are to allocate channels between the IoV and the SPBS efficiently and to select an appropriate resolution for each image to be uploaded. As AR-assisted driving operates in real time, these tasks aim to maximize the detection model's mean average precision (mAP) while minimizing upload latency and idle count.
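To make the reinforcement-learning objective described above concrete, the following is a minimal sketch of how a scalar reward could weight detection mAP against upload latency and idle count. The channel list, candidate resolutions, weights, and toy environment dynamics are illustrative assumptions, not the project's actual formulation.

```python
import random

# Hypothetical action space: the agent picks an IoV-SPBS channel and an upload resolution.
CHANNELS = [0, 1, 2, 3]                                    # assumed number of channels
RESOLUTIONS = [(640, 640), (960, 960), (1280, 1280)]       # assumed candidate image sizes

# Assumed weights trading off detection quality against upload cost.
W_MAP, W_LATENCY, W_IDLE = 1.0, 0.5, 0.2


def reward(mean_ap: float, latency_s: float, idle_count: int) -> float:
    """Scalar reward: reward high mAP, penalize upload latency and idle channels."""
    return W_MAP * mean_ap - W_LATENCY * latency_s - W_IDLE * idle_count


def step(action):
    """Toy environment step: larger images tend to raise mAP but also raise latency."""
    channel, (w, h) = action
    pixels = w * h
    mean_ap = min(1.0, 0.4 + 0.3 * pixels / (1280 * 1280) + random.uniform(-0.05, 0.05))
    latency = pixels / 2e6 + random.uniform(0.0, 0.1)       # seconds, made-up scale
    idle = random.randint(0, len(CHANNELS) - 1)             # channels left unused this step
    return reward(mean_ap, latency, idle)


if __name__ == "__main__":
    # Evaluate one arbitrary action; a learned policy (e.g. a DQN) would choose this mapping.
    act = (CHANNELS[0], RESOLUTIONS[1])
    print(f"reward for {act}: {step(act):.3f}")
```

In such a setup, the agent learns that uploading at the highest resolution on a congested channel can hurt overall reward even though it improves mAP, which is the trade-off the summary describes.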