Angle sensitive imaging: a new paradigm for light field imaging



Bibliographic Details
Main Author: Varghese, Vigil
Other Authors: Chen Shoushun
Format: Theses and Dissertations
Language: English
Published: 2017
Subjects:
Online Access: http://hdl.handle.net/10356/72323
Institution: Nanyang Technological University
Description
Abstract: Imaging is a process of mapping information from the higher dimensions of a light field into lower dimensions. Conventional cameras perform this mapping onto the two dimensions of the image sensor array. These sensors lose the directional information contained in the light rays passing through the camera aperture, because each sensor element (called a pixel) integrates all the light rays arriving at its surface: directional information is discarded and only intensity information is retained. This work pursues a method to decouple this link and enable image sensors to capture both intensity and direction without sacrificing as much spatial resolution as existing techniques do.

Numerous applications have been demonstrated that benefit from the additional directional information, with passive depth estimation being an obvious one. Others include multi-viewpoint rendering, extended depth-of-field imaging, post-capture image refocus, visibility in the presence of partial occluders, and 3D scene reconstruction. This work concentrates on the ubiquitous issue of capturing high-resolution light fields, consciously relegating the potential applications to simple software solutions built on the designed hardware (the image sensor). Once the 4D information is available, suitable modifications to the data set enable its application to diverse areas.

Existing techniques that share the goals of this work have a severe shortcoming in achievable spatial resolution: spatial and directional resolution trade off against each other, and one cannot be increased without degrading the other. This work attempts to find an optimum solution that maximizes spatial resolution without affecting the quality of the captured directional information, thereby ensuring that sufficient directional information is available for computational post-processing techniques. This work builds heavily on the theoretical premise laid down by earlier work on multi-aperture imaging.
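The distinction the abstract draws, between a conventional pixel that integrates away the angular dimensions and a sensor that retains them for post-capture refocus, can be illustrated with a toy numerical sketch. This is a minimal NumPy illustration with made-up array sizes and a hypothetical `refocus` helper; it is not the thesis's actual reconstruction pipeline.

```python
import numpy as np

# Toy 4D light field L[x, y, u, v]: spatial samples (x, y) and angular
# samples (u, v). Dimensions are illustrative, not from the thesis.
rng = np.random.default_rng(0)
X, Y, U, V = 8, 8, 4, 4
L = rng.random((X, Y, U, V))

# A conventional pixel integrates all rays over its aperture: only the
# angular sum (intensity) survives; direction is lost.
conventional_image = L.sum(axis=(2, 3))

def refocus(light_field, alpha):
    """Shift-and-add synthetic refocus over the angular dimensions.

    Each angular view is shifted by an amount proportional to alpha times
    its angular offset, then summed; alpha selects the synthetic focal
    plane (hypothetical helper for illustration).
    """
    X, Y, U, V = light_field.shape
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[:, :, u, v],
                           shift=(du, dv), axis=(0, 1))
    return out

# With alpha = 0 there is no shift, so the refocused result collapses to
# the conventional pixel's angular integral.
assert np.allclose(refocus(L, 0.0), conventional_image)
refocused = refocus(L, 1.0)
```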
Practical aspects are modeled on the diffraction-based Talbot effect. The solution falls into the general category of sub-wavelength apertures and is a one-dimensional case of the same. We explore alternative solutions such as differential quadrature pixels, polarization pixels, multi-finger pixels, and combinations of these to effectively capture the angular information of light while consuming only a very small imager area. We establish the capabilities of our technique through rigorous testing of the individual sensing elements and of the image sensor as a whole. Our solution enables a rich set of applications, among which are fast-response auto-focus camera systems and single-shot passive 3D imaging.
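The Talbot effect mentioned above refers to the self-imaging of a periodic grating under coherent illumination: in the paraxial approximation, the grating pattern repeats at the Talbot length z_T = 2 d² / λ, where d is the grating pitch and λ the wavelength. The values below are illustrative, not the thesis's actual grating design.

```python
# Paraxial Talbot self-imaging length z_T = 2 * d**2 / wavelength.
# Pitch and wavelength are example values, not from the thesis.
d = 0.8e-6           # grating pitch: 0.8 micrometers
wavelength = 532e-9  # green light: 532 nanometers

z_T = 2 * d**2 / wavelength  # Talbot length in meters
print(f"Talbot length: {z_T * 1e6:.2f} um")
```

Angle-sensitive pixel designs of this kind place a second grating at a fraction of z_T below the first, so that the lateral position of the self-image, and hence the incidence angle of the light, modulates the photodiode response.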