Light field denoising via anisotropic parallax analysis in a CNN framework

Bibliographic Details
Main Authors: Chen, Jie, Hou, Junhui, Chau, Lap-Pui
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2020
Subjects:
Online Access: https://hdl.handle.net/10356/142560
Description
Summary: Light field (LF) cameras provide perspective information of scenes by taking directional measurements of the focusing light rays. The raw outputs are usually dark with additive camera noise, which impedes subsequent processing and applications. We propose a novel LF denoising framework based on anisotropic parallax analysis (APA). Two convolutional neural networks are jointly designed for the task: first, the structural parallax synthesis network predicts the parallax details for the entire LF based on a set of anisotropic parallax features. These novel features can efficiently capture the high-frequency perspective components of an LF from noisy observations. Second, the view-dependent detail compensation network restores non-Lambertian variation to each LF view by involving view-specific spatial energies. Extensive experiments show that the proposed APA LF denoiser achieves much better denoising performance than state-of-the-art methods in terms of both visual quality and preservation of parallax details.
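
The abstract outlines a two-stage design: one network synthesizes parallax structure from all sub-aperture views jointly, and a second network compensates detail view by view. The PyTorch sketch below illustrates only this two-stage data flow; the layer sizes, the residual formulation, and the names ParallaxSynthesisNet and ViewCompensationNet are assumptions made for illustration and do not reproduce the anisotropic parallax features or the actual networks described in the paper.

# Hypothetical sketch of a two-stage light-field denoising pipeline.
# Stage 1 processes all sub-aperture views jointly; stage 2 refines each view.
# Architecture details are placeholders, not the APA networks from the paper.
import torch
import torch.nn as nn


class ParallaxSynthesisNet(nn.Module):
    """Stage 1 (assumed): predicts coarse denoised views for the whole LF
    from all noisy sub-aperture views stacked along the channel axis."""

    def __init__(self, num_views: int, feat: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(num_views, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat, num_views, kernel_size=3, padding=1),
        )

    def forward(self, noisy_views: torch.Tensor) -> torch.Tensor:
        # noisy_views: (B, num_views, H, W) -> coarse estimate of the same shape
        return noisy_views + self.body(noisy_views)


class ViewCompensationNet(nn.Module):
    """Stage 2 (assumed): refines one view at a time, conditioned on that
    view's own noisy observation to recover view-specific detail."""

    def __init__(self, feat: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, feat, kernel_size=3, padding=1),  # coarse + noisy view
            nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, kernel_size=3, padding=1),
        )

    def forward(self, coarse_view: torch.Tensor, noisy_view: torch.Tensor) -> torch.Tensor:
        x = torch.cat([coarse_view, noisy_view], dim=1)
        return coarse_view + self.body(x)


if __name__ == "__main__":
    B, V, H, W = 1, 25, 64, 64          # e.g. a 5x5 grid of sub-aperture views
    noisy_lf = torch.randn(B, V, H, W)  # stand-in for a noisy light field

    stage1 = ParallaxSynthesisNet(num_views=V)
    stage2 = ViewCompensationNet()

    coarse = stage1(noisy_lf)
    refined = torch.stack(
        [stage2(coarse[:, v:v + 1], noisy_lf[:, v:v + 1]).squeeze(1) for v in range(V)],
        dim=1,
    )
    print(refined.shape)  # torch.Size([1, 25, 64, 64])

Running the second stage per view mirrors the abstract's point that non-Lambertian, view-dependent detail is restored for each LF view individually, while the first stage operates on the whole LF at once.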