Portrait matting using an attention-based memory network

Bibliographic Details
Main Author: Song, Shufeng
Other Authors: Lin Zhiping
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University, 2023
Online Access: https://hdl.handle.net/10356/166590
Description
Summary: Matting is the process of predicting an alpha matte and a foreground with rich detail from input images. Traditional matting algorithms face three major challenges. Firstly, most of them rely on auxiliary inputs, which makes them impractical for everyday use, since such additional inputs are unavailable in most scenarios. Secondly, temporal-guided modules must be constructed to exploit temporal coherence in video matting tasks. Lastly, suitable matting datasets are scarce. This thesis addresses these challenges and proposes a novel auxiliary-free video matting network. To eliminate the reliance on additional inputs, we adopt an interleaved training strategy in which binary masks from segmentation outputs help the model locate the portrait and separate its boundary from the background. We then design a temporal-guided memory module based on the attention mechanism to compute and store temporal coherence among video frames. Moreover, we provide direct supervision for the attention-based memory block to further improve the network's robustness. Finally, we collect multiple matting datasets to generate synthesized video clips for training and testing. Validation results show that our method outperforms several state-of-the-art methods in alpha and foreground prediction quality and in temporal consistency.
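
For context, matting models each frame I as the composite I = αF + (1 − α)B of a foreground F and background B weighted by the alpha matte α, which is why the network must predict both the alpha and the foreground. The abstract does not specify how the attention-based memory module is implemented; as a minimal illustrative sketch only, the following PyTorch snippet shows one way such a temporal memory read could work, with the current frame's features querying stored past-frame features through scaled dot-product cross-attention. The class name, projections, and tensor shapes are assumptions, not the thesis's actual design.

import torch
import torch.nn as nn

class TemporalMemoryAttention(nn.Module):
    """Hypothetical sketch: the current frame's features query a bank of
    stored past-frame features via scaled dot-product cross-attention."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces
        self.to_q = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_k = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, curr: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # curr:   (B, C, H, W)    features of the current frame
        # memory: (B, T, C, H, W) features of T stored past frames
        b, t, c, h, w = memory.shape
        q = self.to_q(curr).flatten(2).transpose(1, 2)  # (B, H*W, C)
        mem = memory.reshape(b * t, c, h, w)

        def flat(x: torch.Tensor) -> torch.Tensor:
            # (B*T, C, H, W) -> (B, T*H*W, C): one key/value per memory pixel
            return x.reshape(b, t, c, h * w).permute(0, 1, 3, 2).reshape(b, t * h * w, c)

        k, v = flat(self.to_k(mem)), flat(self.to_v(mem))
        attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)  # (B, H*W, T*H*W)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return curr + out  # residual fusion of temporally aggregated features

# Example: fuse two remembered frames into the current frame's features
module = TemporalMemoryAttention(channels=64)
fused = module(torch.randn(1, 64, 32, 32), torch.randn(1, 2, 64, 32, 32))

A complete module of this kind would also need a write/update policy for the memory bank and the direct supervision signal the abstract mentions; both are deliberately omitted from this sketch.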