View-dependent feature line detection on polygonal meshes
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: 2015
Subjects:
Online Access: http://hdl.handle.net/10356/63048
Institution: Nanyang Technological University
Summary: In computer graphics and image processing, feature line extraction plays a critical role in a growing number of applications, owing to its capability of carrying the most prominent characteristics of a mesh surface. Over the past few decades, intensive research has been done on both object-space and image-space approaches to feature line extraction. However, most object-space solutions, such as suggestive contours and ridge-valley lines, involve many on-the-fly computations of second- or third-order surface derivatives, resulting in poor performance when rendering complicated models or large-scale scenes. This report presents a novel object-space line-drawing technique called the Laplacian line, which extracts view-dependent feature lines in real time. Inspired by the Laplacian-of-Gaussian edge detector, a Laplacian line is defined as the set of zero-crossing points of the Laplacian of illumination. The Laplacian of illumination can be simplified to the dot product of the light vector and the Laplacian of the vertex normal. Therefore, the most time-consuming computation in this algorithm is evaluating the third-order surface derivatives, i.e. the Laplacian of the vertex normal. Since the Laplacian of the normal is view-independent and can be fully precomputed, this algorithm is especially promising for extracting features of complicated models and large scenes, as it avoids heavy on-the-fly computation.
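The abstract describes a two-phase pipeline: precompute the view-independent Laplacian of the vertex normals once per mesh, then per frame evaluate the scalar field f = L · Δn and collect the mesh edges where it changes sign. The following is a minimal sketch of that idea, not the report's actual implementation; it assumes a triangle mesh given as per-vertex normals and a face list, and uses simple uniform ("umbrella") Laplacian weights, whereas the actual work may use a different discretization.

```python
import numpy as np

def vertex_adjacency(faces, n_verts):
    """One-ring vertex adjacency built from a triangle list."""
    nbrs = [set() for _ in range(n_verts)]
    for a, b, c in faces:
        nbrs[a].update((b, c))
        nbrs[b].update((a, c))
        nbrs[c].update((a, b))
    return nbrs

def mesh_edges(faces):
    """Unique undirected edges of the mesh."""
    es = set()
    for a, b, c in faces:
        es.update(tuple(sorted(e)) for e in ((a, b), (b, c), (c, a)))
    return es

def laplacian_of_normals(normals, nbrs):
    """View-independent Laplacian of the vertex normals (uniform
    umbrella weights); this is the expensive part, precomputed once."""
    lap = np.zeros_like(normals)
    for i, ring in enumerate(nbrs):
        if ring:
            lap[i] = normals[list(ring)].mean(axis=0) - normals[i]
    return lap

def laplacian_line_edges(lap_n, light_dir, edges):
    """Per frame: the Laplacian of illumination reduces to the dot
    product f = light . (Laplacian of normal); Laplacian-line points
    lie on edges whose endpoint values have opposite signs."""
    f = lap_n @ light_dir  # one dot product per vertex
    return [(i, j) for i, j in edges if f[i] * f[j] < 0.0]
```

Under these assumptions the per-frame cost is just a dot product and a sign test per edge, which is what makes the technique attractive for complicated models: all higher-order derivative computation happens offline in `laplacian_of_normals`.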