Compressed event sensing (CES) volumes for event cameras
Deep learning has made significant progress in event-driven applications. However, to be compatible with standard vision networks, most approaches rely on aggregating events into grid-like representations, which obscure crucial temporal information and limit overall performance. To address this issue, we propose a novel event representation called compressed event sensing (CES) volumes. CES volumes preserve the high temporal resolution of event streams by leveraging the sparsity of events and the principles of compressed sensing theory. They effectively capture the frequency characteristics of events in low-dimensional representations, which can be accurately decoded back to the raw high-dimensional event signals. In addition, our theoretical analysis shows that, when integrated with a neural network, CES volumes demonstrate greater expressive power under the neural tangent kernel approximation. Through synthetic phantom validation on dense frame regression and two downstream applications involving intensity-image reconstruction and object recognition tasks, we demonstrate the superior performance of CES volumes compared to state-of-the-art event representations.
Main Authors: Lin, Songnan; Ma, Ye; Chen, Jing; Wen, Bihan
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2024
Subjects: Engineering; Data representation; Compressed sensing
Online Access: https://hdl.handle.net/10356/180781
Institution: Nanyang Technological University
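The abstract above refers to event sparsity and compressed sensing theory. The snippet below is a minimal, hypothetical sketch of that underlying principle only, not the authors' CES construction: a sparse temporal event signal at a single pixel is compressed by a random measurement matrix and then recovered by orthogonal matching pursuit. The sizes `T`, `M`, `num_events` and the choice of OMP as the decoder are illustrative assumptions.

```python
# Toy compressed-sensing sketch (assumed setup, not the paper's code):
# a sparse per-pixel event signal over T time bins is compressed into
# M << T measurements and recovered from them.
import numpy as np

rng = np.random.default_rng(0)

T = 256          # temporal bins at full (high) resolution
M = 32           # compressed measurements per pixel (M << T)
num_events = 5   # events are sparse in time

# Sparse temporal signal: a few +1/-1 event polarities.
x = np.zeros(T)
idx = rng.choice(T, size=num_events, replace=False)
x[idx] = rng.choice([-1.0, 1.0], size=num_events)

# Random Gaussian sensing matrix: compresses T bins into M values,
# analogous to forming a low-dimensional event representation.
Phi = rng.standard_normal((M, T)) / np.sqrt(M)
y = Phi @ x

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(k):
        # Column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit restricted to the selected support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat = np.zeros(Phi.shape[1])
        x_hat[support] = coef
        residual = y - Phi @ x_hat
    return x_hat

x_rec = omp(Phi, y, num_events)
print("max reconstruction error:", np.abs(x - x_rec).max())
```

As the abstract notes, the point of such a low-dimensional representation is that it still carries the information of the raw high-temporal-resolution signal, which is what the decoding step above illustrates on a toy example.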
id: sg-ntu-dr.10356-180781
record_format: dspace
spelling: sg-ntu-dr.10356-180781 2024-10-24T06:41:12Z Compressed event sensing (CES) volumes for event cameras Lin, Songnan Ma, Ye Chen, Jing Wen, Bihan School of Electrical and Electronic Engineering Engineering Data representation Compressed sensing Deep learning has made significant progress in event-driven applications. However, to be compatible with standard vision networks, most approaches rely on aggregating events into grid-like representations, which obscure crucial temporal information and limit overall performance. To address this issue, we propose a novel event representation called compressed event sensing (CES) volumes. CES volumes preserve the high temporal resolution of event streams by leveraging the sparsity of events and the principles of compressed sensing theory. They effectively capture the frequency characteristics of events in low-dimensional representations, which can be accurately decoded back to the raw high-dimensional event signals. In addition, our theoretical analysis shows that, when integrated with a neural network, CES volumes demonstrate greater expressive power under the neural tangent kernel approximation. Through synthetic phantom validation on dense frame regression and two downstream applications involving intensity-image reconstruction and object recognition tasks, we demonstrate the superior performance of CES volumes compared to state-of-the-art event representations. Ministry of Education (MOE) This work was supported in part by the Ministry of Education, Republic of Singapore, through its Start-Up Grant and Academic Research Fund Tier 1 (RG61/22). 2024-10-24T06:41:12Z 2024-10-24T06:41:12Z 2024 Journal Article Lin, S., Ma, Y., Chen, J. & Wen, B. (2024). Compressed event sensing (CES) volumes for event cameras. International Journal of Computer Vision. https://dx.doi.org/10.1007/s11263-024-02197-2 0920-5691 https://hdl.handle.net/10356/180781 10.1007/s11263-024-02197-2 2-s2.0-85200402606 en RG61/22 International Journal of Computer Vision © 2024 The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature. All rights reserved.
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering; Data representation; Compressed sensing
description: Deep learning has made significant progress in event-driven applications. However, to be compatible with standard vision networks, most approaches rely on aggregating events into grid-like representations, which obscure crucial temporal information and limit overall performance. To address this issue, we propose a novel event representation called compressed event sensing (CES) volumes. CES volumes preserve the high temporal resolution of event streams by leveraging the sparsity of events and the principles of compressed sensing theory. They effectively capture the frequency characteristics of events in low-dimensional representations, which can be accurately decoded back to the raw high-dimensional event signals. In addition, our theoretical analysis shows that, when integrated with a neural network, CES volumes demonstrate greater expressive power under the neural tangent kernel approximation. Through synthetic phantom validation on dense frame regression and two downstream applications involving intensity-image reconstruction and object recognition tasks, we demonstrate the superior performance of CES volumes compared to state-of-the-art event representations.
author2: School of Electrical and Electronic Engineering
format: Article
author: Lin, Songnan; Ma, Ye; Chen, Jing; Wen, Bihan
author_sort: Lin, Songnan
title: Compressed event sensing (CES) volumes for event cameras
publishDate: 2024
url: https://hdl.handle.net/10356/180781
_version_: 1814777817364168704