Texture memory-augmented deep patch-based image inpainting

Patch-based methods and deep networks have been employed to tackle the image inpainting problem, each with its own strengths and weaknesses. Patch-based methods can restore a missing region with high-quality texture by searching for nearest-neighbor patches in the unmasked regions. However, these methods produce problematic content when recovering large missing regions. Deep networks, on the other hand, show promising results in completing large regions, but the results often lack faithful, sharp details that resemble the surrounding area. Bringing together the best of both paradigms, we propose a new deep inpainting framework in which texture generation is guided by a texture memory of patch samples extracted from unmasked regions. The framework has a novel design that allows texture memory retrieval to be trained end-to-end with the deep inpainting network. In addition, we introduce a patch distribution loss to encourage high-quality patch synthesis. The proposed method shows superior performance both qualitatively and quantitatively on three challenging image benchmarks: the Places, CelebA-HQ, and Paris Street-View datasets. (Code will be made publicly available at https://github.com/open-mmlab/mmediting.)
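
The abstract describes a texture memory of patches taken from unmasked regions, with retrieval trained end-to-end. One natural way to picture such a retrieval step is soft attention over a bank of patch vectors, where a low softmax temperature approaches the hard nearest-neighbor matching of classical patch-based methods. The PyTorch sketch below is an illustration of that idea only, under stated assumptions (cosine-similarity matching, non-overlapping 8x8 patches); the names `build_patch_memory` and `retrieve_texture`, the patch size, and the temperature are hypothetical and are not taken from the paper or the mmediting implementation.

```python
import torch
import torch.nn.functional as F

def build_patch_memory(image, mask, patch_size=8):
    """Collect patch vectors from the unmasked regions of one image.

    image: (1, C, H, W) tensor; mask: (1, 1, H, W) with 1 = missing pixels.
    Returns (M, C * patch_size**2) flattened patches that lie entirely
    outside the masked area. (Hypothetical helper, not the paper's API.)
    """
    patches = F.unfold(image, kernel_size=patch_size, stride=patch_size)  # (1, C*p*p, L)
    mask_cov = F.unfold(mask, kernel_size=patch_size, stride=patch_size)  # (1, p*p, L)
    keep = mask_cov.sum(dim=1).squeeze(0) == 0  # patches containing no masked pixels
    return patches.squeeze(0).t()[keep]         # (M, C*p*p)

def retrieve_texture(query_feats, memory_patches, temperature=0.1):
    """Differentiable soft retrieval from a texture memory.

    query_feats:     (N, D) feature vectors for masked-region patches
    memory_patches:  (M, D) patch vectors from unmasked regions
    Returns an (N, D) blend of memory patches; because the attention is
    soft, gradients flow through retrieval, so it can train end-to-end.
    """
    q = F.normalize(query_feats, dim=1)      # (N, D)
    m = F.normalize(memory_patches, dim=1)   # (M, D)
    sim = q @ m.t()                          # (N, M) cosine similarities
    attn = F.softmax(sim / temperature, dim=1)
    return attn @ memory_patches             # weighted texture guidance

# Example usage: blend textures for masked patches of a 256x256 RGB image.
img = torch.rand(1, 3, 256, 256)
msk = torch.zeros(1, 1, 256, 256)
msk[:, :, 96:160, 96:160] = 1.0
memory = build_patch_memory(img, msk)            # (M, 192)
queries = torch.rand(10, memory.shape[1])        # stand-in decoder features
blended = retrieve_texture(queries, memory)      # (10, 192)
```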

Bibliographic Details
Main Authors: Xu, Rui, Guo, Minghao, Wang, Jiaqi, Li, Xiaoxiao, Zhou, Bolei, Loy, Chen Change
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2022
Subjects: Engineering::Computer science and engineering; Image Reconstruction; Image Restoration
Online Access:https://hdl.handle.net/10356/160516
Institution: Nanyang Technological University
Citation: Xu, R., Guo, M., Wang, J., Li, X., Zhou, B. & Loy, C. C. (2021). Texture memory-augmented deep patch-based image inpainting. IEEE Transactions on Image Processing, 30, 9112-9124.
ISSN: 1057-7149
DOI: 10.1109/TIP.2021.3122930
PMID: 34723802
Funding: National Research Foundation (NRF). This work was supported in part by the RIE2020 Industry Alignment Fund-Industry Collaboration Projects (IAF-ICP) Funding Initiative, in part by the Research Grants Council (RGC) of Hong Kong under ECS Grant 24206219, in part by the General Research Fund (GRF) under Grant 14204521, in part by The Chinese University of Hong Kong (CUHK) Faculty of Engineering (FoE) Research Sustainability of Major RGC Funding Schemes (RSFS) Grant, and in part by a SenseTime Collaborative Grant.
Rights: © 2021 IEEE. All rights reserved.