Image inpainting for manipulating scenes and objects

Bibliographic Details
Main Author: Chen, Weiyi
Other Authors: Cham, Tat Jen
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Online Access:https://hdl.handle.net/10356/175089
Institution: Nanyang Technological University
Description
Summary: Image inpainting is a challenging computer vision task that involves filling in a part of an image that is obstructed or damaged while preserving its visual coherence. It is commonly used to remove or replace objects in images, and can be extended to manipulating objects, for example by moving or resizing them. While deep learning models have advanced this field, many remain inaccessible to users with little or no image editing background because of the technical knowledge required to use them. As the paradigm of deep learning shifts towards foundation models, two major foundation models in computer vision have emerged: the Segment Anything Model (SAM) and Stable Diffusion (SD). SAM tackles image segmentation, which partitions an image into its constituent parts and objects, whereas SD focuses on image generation and, by extension, image inpainting. Both models are open source and achieve high performance on their respective tasks. This project proposes a framework that chains these two models into an object manipulation pipeline, focusing primarily on object removal and movement. In this framework, SAM generates a mask from an input point, and SD inpaints the masked area with a suitable background, thereby removing the masked object from the image. Multiple SD variants were explored to find the most appropriate model configuration. A web application was also developed to demonstrate the framework's object manipulation capabilities, including basic movements, on an image uploaded by the user.
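
A minimal sketch of the click-to-mask-to-inpaint pipeline the summary describes, assuming the publicly available segment_anything and diffusers Python packages; the ViT-H SAM checkpoint, the runwayml/stable-diffusion-inpainting weights, and the prompt below are illustrative assumptions rather than the report's actual configuration:

    import numpy as np
    import torch
    from PIL import Image
    from segment_anything import sam_model_registry, SamPredictor
    from diffusers import StableDiffusionInpaintPipeline

    def remove_object(image_path, point_xy):
        # Illustrative sketch, not the report's code: a single user click is
        # turned into a SAM mask, which SD then inpaints with background.
        device = "cuda" if torch.cuda.is_available() else "cpu"
        image = np.array(Image.open(image_path).convert("RGB"))

        # 1. SAM: predict an object mask from one foreground point.
        sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to(device)
        predictor = SamPredictor(sam)
        predictor.set_image(image)
        masks, scores, _ = predictor.predict(
            point_coords=np.array([point_xy]),
            point_labels=np.array([1]),  # 1 marks a foreground click
            multimask_output=True,
        )
        mask = masks[np.argmax(scores)]  # keep the highest-scoring mask

        # 2. SD inpainting: fill the masked region with plausible background.
        pipe = StableDiffusionInpaintPipeline.from_pretrained(
            "runwayml/stable-diffusion-inpainting",
            torch_dtype=torch.float16 if device == "cuda" else torch.float32,
        ).to(device)
        return pipe(
            prompt="background",  # steer the fill away from generating a new object
            image=Image.fromarray(image).resize((512, 512)),
            mask_image=Image.fromarray((mask * 255).astype(np.uint8)).resize((512, 512)),
        ).images[0]

For example, remove_object("photo.jpg", (320, 240)) masks the object under the click at pixel (320, 240) and returns the image with that region filled in; both inputs are resized to 512x512 here because that is the resolution the SD inpainting checkpoint was trained at.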