2D image deformation based on guaranteed feature correspondence and mesh mapping
Image deformation is ubiquitous in multimedia applications: it morphs one image into another through a seamless transition. Existing techniques either focus mainly on mapping correspondences between the interior features of the objects in the two images, without considering object contours, or require contours to be sketched manually, which is tedious for users. We therefore propose a 2D image deformation method that extracts object contours automatically, uses both interior features and contours as constraints, and preserves image features according to their visual importance. Our method first extracts the object contours in the source and target images automatically and then lets users sketch interior features in both images. It then tessellates the two images into two triangular meshes and builds a guaranteed bijective mesh mapping between them; we prove the bijectivity of this mapping and discuss its other desirable properties. Finally, it generates the intermediate images between the source and target images by computing the intermediate meshes and the pixels of each intermediate image. The method realizes automatic contour extraction, provides an intuitive user interface, and uses harmonic maps to establish the bijective mesh mapping; as a result, it preserves significant features with less distortion and handles many image deformation cases in real time.
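The pipeline described above (extract contours, let the user mark interior features, triangulate both images, map the meshes bijectively, then interpolate intermediate meshes and pixels) can be illustrated with a much simpler stand-in. The sketch below, in Python with OpenCV and SciPy, assumes that matched feature points for the two images are already available (in the paper they would come from automatic contour extraction plus user-sketched interior strokes) and replaces the harmonic-map-based bijective mesh mapping with a shared Delaunay triangulation, per-triangle affine warps, and a cross-dissolve. It is not the authors' algorithm; the function names and parameters are illustrative only.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay


def warp_triangle(src, dst, tri_src, tri_dst):
    """Affinely warp the patch under tri_src in src onto tri_dst in dst (float32 images)."""
    r_s = cv2.boundingRect(np.float32([tri_src]))
    r_d = cv2.boundingRect(np.float32([tri_dst]))
    if min(r_s[2], r_s[3], r_d[2], r_d[3]) <= 0:
        return
    # Work in coordinates relative to each triangle's bounding rectangle.
    t_s = np.float32([[x - r_s[0], y - r_s[1]] for x, y in tri_src])
    t_d = np.float32([[x - r_d[0], y - r_d[1]] for x, y in tri_dst])
    patch = src[r_s[1]:r_s[1] + r_s[3], r_s[0]:r_s[0] + r_s[2]]
    M = cv2.getAffineTransform(t_s, t_d)
    warped = cv2.warpAffine(patch, M, (r_d[2], r_d[3]),
                            flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT_101)
    # Blend in only the pixels that fall inside the destination triangle.
    mask = np.zeros((r_d[3], r_d[2], 3), np.float32)
    cv2.fillConvexPoly(mask, np.int32(t_d), (1.0, 1.0, 1.0), cv2.LINE_AA, 0)
    roi = dst[r_d[1]:r_d[1] + r_d[3], r_d[0]:r_d[0] + r_d[2]]
    roi[:] = roi * (1.0 - mask) + warped * mask


def morph(img_a, img_b, pts_a, pts_b, t):
    """Intermediate frame at t in [0, 1] for two same-sized BGR images.

    pts_a and pts_b are (n, 2) arrays of matched feature points (e.g. contour
    samples and interior strokes) that include the four image corners and lie
    inside the images, so the triangulation covers the whole frame.
    """
    pts_a, pts_b = np.float32(pts_a), np.float32(pts_b)
    pts_t = (1.0 - t) * pts_a + t * pts_b      # linearly interpolated intermediate mesh
    tris = Delaunay(pts_a).simplices           # one connectivity shared by all three meshes
    src_a, src_b = np.float32(img_a), np.float32(img_b)
    warp_a, warp_b = np.zeros_like(src_a), np.zeros_like(src_b)
    for idx in tris:                           # warp both images onto the intermediate mesh
        warp_triangle(src_a, warp_a, pts_a[idx], pts_t[idx])
        warp_triangle(src_b, warp_b, pts_b[idx], pts_t[idx])
    # Cross-dissolve the two warped images to get the intermediate pixels.
    return np.uint8(np.clip((1.0 - t) * warp_a + t * warp_b, 0, 255))
```

A full transition would then be a frame sequence such as `[morph(img_a, img_b, pts_a, pts_b, t) for t in numpy.linspace(0.0, 1.0, 30)]`. The paper's contribution lies in obtaining the contour correspondences automatically and in guaranteeing that the mesh mapping is bijective, neither of which this simplified sketch reproduces.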
Main Authors: Liu, Yaqiong; Lin, Xin; Shou, Guochu; Seah, Hock Soon
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2019
Subjects: DRNTU::Engineering::Computer science and engineering; Automatic Contour Extraction; 2D Image Deformation/Morphing
Online Access: https://hdl.handle.net/10356/100605 http://hdl.handle.net/10220/48565
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-100605 (record last modified 2020-03-07)
Record format: dspace
Type: Journal Article, published version (issued 2018; deposited 2019-06-06; made available 2019-12-06)
Citation: Liu, Y., Lin, X., Shou, G., & Seah, H. S. (2019). 2D image deformation based on guaranteed feature correspondence and mesh mapping. IEEE Access, 7, 5208-5221. doi:10.1109/ACCESS.2018.2887078
DOI: 10.1109/ACCESS.2018.2887078
Published in: IEEE Access
Extent: 14 p., application/pdf
Rights: © 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Building: NTU Library
Country: Singapore
Collection: DR-NTU