Tangible images


Bibliographic Details
Main Author: Shahzad Rasool
Other Authors: Alexei Sourin
Format: Theses and Dissertations
Language:English
Published: 2014
Subjects:
Online Access:https://hdl.handle.net/10356/61841
Institution: Nanyang Technological University
Description
Summary: Haptic interaction is typically based on detecting collisions of the haptic interaction point with physical models of the virtual objects, comprising polygon meshes, point sets, or procedural models topologically collocated with the geometric models of the objects in the modeling space. However, it is not always possible or feasible to make such models when real images or videos are used as elements of interaction. Moreover, traditional polygon-based modeling yields large files and time-consuming visual rendering to achieve photorealistic quality. This deficiency could be overcome by employing image-based representations. A survey of haptic interaction methods related to such representations leads to the conclusion that there is room for new algorithms of haptic interaction with two-dimensional images and videos that do not require reconstruction of 3D models. Based on the paradigm 'what we see is what we touch', a set of novel techniques for making two-dimensional images tangible is proposed. A novel depth-based haptic geometry rendering algorithm is devised and combined with haptic rendering of surface texture and physical properties. The computational complexity of the proposed rendering algorithm is much lower than that of visual and haptic rendering of three-dimensional scene representations, while photorealistic visual quality is achieved. An innovative extension of the algorithm to reliably interact with hybrid scenes composed of 3D models and 2D images is proposed. Visible 3D models are used to simulate editable parts of the scene, while low-resolution invisible 3D models are used as haptic containers to augment the visual interaction experience. These haptic models have simplified geometry and are additionally capable of representing non-visual force effects such as wind, flow, and magnetism.
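The depth-based haptic rendering idea described above can be sketched as follows. This is a minimal illustrative example, not the thesis's actual algorithm: it assumes a per-pixel depth map for the image, a probe position along the view ray, and a simple spring-like reaction force along a normal estimated from the depth gradient. The function name and parameters are hypothetical.

```python
import numpy as np

def haptic_force_from_depth(depth, x, y, probe_z, stiffness=1.0):
    """Toy depth-based haptic rendering sketch (hypothetical).

    depth   : 2D array of per-pixel depth values (larger = farther from viewer).
    (x, y)  : haptic interaction point projected onto the image plane.
    probe_z : current depth of the haptic probe along the view ray.
    Returns a 3D force vector; zero when the probe has not reached the surface.
    """
    h, w = depth.shape
    x = int(np.clip(x, 1, w - 2))
    y = int(np.clip(y, 1, h - 2))
    surface_z = depth[y, x]
    # Penetration is positive when the probe has moved deeper than the surface.
    penetration = probe_z - surface_z
    if penetration <= 0:
        return np.zeros(3)
    # Estimate the surface normal from central differences of the depth map.
    dzdx = (depth[y, x + 1] - depth[y, x - 1]) / 2.0
    dzdy = (depth[y + 1, x] - depth[y - 1, x]) / 2.0
    n = np.array([-dzdx, -dzdy, -1.0])  # points back toward the viewer
    n /= np.linalg.norm(n)
    # Hooke's-law style reaction force along the estimated normal.
    return stiffness * penetration * n
```

On a flat depth map the force simply pushes the probe straight back toward the viewer in proportion to how far it has penetrated, which is the basic behavior a depth-image-based haptic renderer must reproduce without any 3D mesh.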
To validate the research niche, a comprehensive application example, the simulation of a minimally invasive surgical procedure, is implemented. The developed image-driven virtual arthroscopy training simulator provides a set of training exercises for learning the basic skills required for such minimally invasive surgery. Several other applications are implemented to illustrate the usefulness and feasibility of the proposed algorithms.