Image inpainting for manipulating scenes and object

Image inpainting is a challenging computer vision task that involves filling in an obstructed or damaged region of an image while preserving its visual coherence. It is commonly used to remove or replace unwanted objects in images and can be extended to further manipulations such as moving or resizing objects. While deep learning models have advanced this field, many remain inaccessible to users with little or no image-editing background because of the technical knowledge required to operate them. As deep learning shifts towards foundation models, two major foundation models have emerged in computer vision: the Segment Anything Model (SAM) and Stable Diffusion (SD). SAM tackles image segmentation, partitioning an image into its constituent parts and objects, whereas SD focuses on image generation and, by extension, image inpainting. Both models are open source and achieve high performance on their respective tasks. This project proposes a framework that chains the two models into an object manipulation pipeline, focusing primarily on object removal and movement. In this framework, SAM generates a mask from a user-supplied point, and SD inpaints the masked area with a plausible background, removing the masked object from the image. Multiple SD variants were evaluated to find the most suitable model configuration. A web application was also developed to demonstrate the framework's object manipulation capabilities on user-uploaded images.
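
A minimal sketch of the kind of two-stage removal pipeline the abstract describes, assuming the open-source segment-anything and diffusers packages; the checkpoint file, model ID, click point, prompt, and file names below are illustrative placeholders, not the configuration used in the project:

```python
# Sketch: point-prompted object removal with SAM + Stable Diffusion inpainting.
import numpy as np
import torch
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry
from diffusers import StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) SAM: turn a single user click into an object mask.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to(device)
predictor = SamPredictor(sam)

image = Image.open("input.jpg").convert("RGB")
predictor.set_image(np.array(image))        # SAM expects an HxWx3 uint8 RGB array

click = np.array([[450, 300]])              # (x, y) of the user's click
masks, scores, _ = predictor.predict(
    point_coords=click,
    point_labels=np.array([1]),             # 1 marks a foreground point
    multimask_output=True,                  # return several candidate masks
)
mask = masks[np.argmax(scores)]             # keep the highest-scoring candidate

# 2) SD: inpaint the masked region with a plausible background.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

mask_image = Image.fromarray(mask.astype(np.uint8) * 255)  # boolean mask -> L-mode PIL
result = pipe(
    prompt="background",                    # a neutral prompt steers SD to fill in scenery
    image=image.resize((512, 512)),         # the inpainting pipeline works at 512x512
    mask_image=mask_image.resize((512, 512)),
).images[0]
result.save("object_removed.png")
```

In practice the SAM mask is usually dilated by a few pixels before inpainting so that the object's boundary pixels are regenerated as well.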

Bibliographic Details
Main Author: Chen, Weiyi
Other Authors: Cham Tat Jen; School of Computer Science and Engineering
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science
Online Access: https://hdl.handle.net/10356/175089