CoPEM: Cooperative Perception Error Models for Autonomous Driving
Format: Conference or Workshop Item
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/166784
Institution: Nanyang Technological University
Summary: In this paper, we introduce the notion of Cooperative Perception Error Models (coPEMs) towards achieving an effective and efficient integration of V2X solutions within a virtual test environment. We focus our analysis on the occlusion problem in the (onboard) perception of Autonomous Vehicles (AV), which can manifest as misdetection errors on the occluded objects. Cooperative perception (CP) solutions based on Vehicle-to-Everything (V2X) communications aim to avoid such issues by cooperatively leveraging additional points of view for the world around the AV. This approach usually requires many sensors, mainly cameras and LiDARs, to be deployed simultaneously in the environment, either as part of the road infrastructure or on other traffic vehicles. However, implementing a large number of sensor models in a virtual simulation pipeline is often prohibitively computationally expensive. Therefore, in this paper, we rely on extending Perception Error Models (PEMs) to efficiently implement such cooperative perception solutions along with the errors and uncertainties associated with them. We demonstrate the approach by comparing the safety achievable by an AV challenged with a traffic scenario where occlusion is the primary cause of a potential collision.
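The core idea summarized in the abstract can be sketched in a few lines: a Perception Error Model perturbs ground-truth object lists with misdetections and noise instead of simulating raw sensor data, and a cooperative setup fuses the outputs of several such models. The sketch below is illustrative only; the class names, error parameters, and the naive fusion rule are assumptions for exposition, not the paper's actual models.

```python
import random


class PerceptionErrorModel:
    """Minimal illustrative PEM: perturbs ground-truth objects with
    occlusion-dependent misdetection and Gaussian positional noise.
    All parameter values are assumptions, not taken from the paper."""

    def __init__(self, p_detect_visible=0.95, p_detect_occluded=0.2,
                 pos_noise_std=0.3, seed=None):
        self.p_detect_visible = p_detect_visible
        self.p_detect_occluded = p_detect_occluded
        self.pos_noise_std = pos_noise_std
        self.rng = random.Random(seed)

    def perceive(self, objects):
        """objects: list of dicts with 'x', 'y', and 'occluded' keys.
        Returns the noisy detection list from this sensor's viewpoint."""
        detections = []
        for obj in objects:
            p = (self.p_detect_occluded if obj["occluded"]
                 else self.p_detect_visible)
            if self.rng.random() < p:  # object is detected (or missed)
                detections.append({
                    "x": obj["x"] + self.rng.gauss(0.0, self.pos_noise_std),
                    "y": obj["y"] + self.rng.gauss(0.0, self.pos_noise_std),
                })
        return detections


def fuse(detection_lists, match_dist=1.0):
    """Naive cooperative fusion (illustrative): union of all detections,
    averaging pairs that fall within match_dist of each other."""
    fused = []
    for dets in detection_lists:
        for d in dets:
            for f in fused:
                if (f["x"] - d["x"]) ** 2 + (f["y"] - d["y"]) ** 2 < match_dist ** 2:
                    f["x"] = (f["x"] + d["x"]) / 2.0
                    f["y"] = (f["y"] + d["y"]) / 2.0
                    break
            else:
                fused.append(dict(d))
    return fused
```

A usage example of the occlusion scenario: an object occluded from the ego vehicle but visible to a roadside sensor is missed by the onboard PEM alone, yet recovered after cooperative fusion.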