Angle sensitive imaging: a new paradigm for light field imaging

Bibliographic Details
Main Author: Varghese, Vigil
Other Authors: Chen Shoushun
Format: Theses and Dissertations
Language: English
Published: 2017
Online Access: http://hdl.handle.net/10356/72323
Institution: Nanyang Technological University
Description
Summary: Imaging is a process of mapping information from the higher dimensions of a light field into lower dimensions. Conventional cameras perform this mapping onto the two dimensions of the image sensor array. These sensors lose the directional information contained in the light rays passing through the camera aperture, as each sensor element (called a pixel) integrates all the light rays arriving at its surface. Directional information is lost and only intensity information is retained. This work pursues a method to decouple this link and enable image sensors to capture both intensity and direction without sacrificing as much spatial resolution as existing techniques do.

Numerous applications have been demonstrated that benefit from the additional directional information, passive depth estimation being the most obvious. Others include multi-viewpoint rendering, extended depth-of-field imaging, post-capture image refocusing, visibility in the presence of partial occluders, and 3D scene reconstruction. This work concentrates on the ubiquitous issue of capturing high-resolution light fields, consciously relegating the potential applications to software solutions built on the designed hardware (the image sensor). Once the 4D information is available, suitable processing of the data set enables its application to these diverse areas. Existing techniques that share the goals of this work have a severe shortcoming in achievable spatial resolution: the trade-off is between spatial and directional resolution, and one cannot be increased without affecting the other. This work attempts to find an optimum solution that maximizes spatial resolution without degrading the quality of the captured directional information, thereby ensuring that sufficient directional information remains available for computational post-processing techniques.

This work builds heavily on the theoretical premise laid down by earlier work on multi-aperture imaging. Practical aspects are modeled on the diffraction-based Talbot effect. The solution falls into the general category of sub-wavelength apertures and is a one-dimensional case of the same. We explore alternative solutions such as differential quadrature pixels, polarization pixels, multi-finger pixels, and combinations of these to effectively capture the angular information of light while consuming only a very small imager area. We establish the capabilities of our technique through rigorous testing of individual sensing elements and of the image sensor as a whole. Our solution enables a rich set of applications, among them fast-response auto-focus camera systems and single-shot passive 3D imaging.
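
To make the mapping described in the summary concrete, the following is a minimal, illustrative Python sketch (not drawn from the thesis itself) of how a conventional pixel collapses a 4D light field L(u, v, x, y) into a 2D intensity image, and how retaining the angular axes enables the post-capture refocusing the abstract mentions, via simple shift-and-add. The array shapes and the alpha parameter are assumptions chosen for illustration.

    # Illustrative only: 4D light field -> 2D image, plus shift-and-add refocus.
    import numpy as np

    def conventional_image(L):
        """Integrate over the angular axes (u, v): direction is lost,
        only an intensity per spatial sample (x, y) remains."""
        # L has assumed shape (U, V, X, Y): angular samples first, spatial last.
        return L.sum(axis=(0, 1))

    def refocus(L, alpha):
        """Shift-and-add refocus: shift each angular view in proportion to its
        (u, v) offset before integrating. `alpha` (an assumed parameter)
        selects the synthetic focal plane; alpha = 0 keeps the original focus."""
        U, V, X, Y = L.shape
        out = np.zeros((X, Y))
        for u in range(U):
            for v in range(V):
                du = int(round(alpha * (u - U // 2)))
                dv = int(round(alpha * (v - V // 2)))
                # np.roll stands in for a proper sub-pixel shift.
                out += np.roll(L[u, v], shift=(du, dv), axis=(0, 1))
        return out / (U * V)

    # Example: a random 5x5-view light field with 64x64 spatial samples.
    L = np.random.rand(5, 5, 64, 64)
    image = conventional_image(L)      # directional information discarded
    refocused = refocus(L, alpha=1.0)  # directional information exploited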
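
The Talbot-effect sensing and the differential quadrature pixels mentioned in the summary can be understood through the sinusoidal angle response reported in earlier angle-sensitive-pixel literature, roughly I(theta) = I0 * (1 + m * cos(beta * theta + phase)). The sketch below assumes that model with illustrative parameter values (I0, m, beta are not the thesis's measured numbers) and shows how four pixels at grating phases 0, pi/2, pi, and 3*pi/2 can cancel the intensity term and recover the incidence angle.

    # Hedged sketch of a cosine angle-sensitive pixel model with a
    # differential quadrature readout; parameters are assumptions.
    import numpy as np

    I0, m, beta = 1.0, 0.7, 12.0  # assumed intensity, modulation depth, angular gain

    def asp_response(theta, phase):
        """Pixel photocurrent versus incidence angle theta (radians)."""
        return I0 * (1.0 + m * np.cos(beta * theta + phase))

    def recover_angle(theta):
        """Four pixels at phases 0, pi/2, pi, 3*pi/2: the differences cancel
        the DC term I0, and atan2 inverts the remaining cosine/sine pair."""
        p0, p90, p180, p270 = (asp_response(theta, p)
                               for p in (0, np.pi / 2, np.pi, 3 * np.pi / 2))
        # p0 - p180 ~ cos(beta*theta); -(p90 - p270) ~ sin(beta*theta)
        return np.arctan2(-(p90 - p270), p0 - p180) / beta

    theta_true = 0.05                 # about 2.9 degrees of incidence
    print(recover_angle(theta_true))  # ~0.05, within the unambiguous range

Because the recovered phase is only unique modulo 2*pi, such a readout resolves angles unambiguously only within a range set by beta; this is one reason combinations of pixel types, as the summary notes, can be attractive.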