Makeup transfer using generative adversarial network

Bibliographic Details
Main Author: Feng, Qiyuan
Other Authors: Yap Kim Hui
Format: Thesis-Master by Coursework
Language:English
Published: Nanyang Technological University 2022
Subjects: Engineering::Electrical and electronic engineering
Online Access:https://hdl.handle.net/10356/156875
Institution: Nanyang Technological University
Supervisor: Yap Kim Hui (EKHYap@ntu.edu.sg), School of Electrical and Electronic Engineering
Subject: Engineering::Electrical and electronic engineering

Abstract: Cosmetic (makeup) style transfer is a popular technique whose main function is to transfer the makeup of a reference portrait onto a makeup-free portrait, letting users try different looks at will to find one that suits them. Existing research uses model structures such as feedforward neural networks and generative adversarial networks to generate high-quality, high-fidelity portraits; however, most of it treats the learning process as a black box, ignoring the model's generation process and analyzing only the generated results. Inspired by recent research on disentangled representation, this dissertation proposes DBGAN (Disentangled BeautyGAN), a dual-input/dual-output cycle-consistent generative adversarial network that decomposes portraits into makeup features and identity features. The proposed model comprises one generative network and two discriminative networks. Within the generative network, an identity-feature extractor (IdentityNet) and a makeup-feature extractor (MakeupNet) encode identity and makeup features respectively, and the generator uses an inverse encoder to synthesize the transferred portrait. The discriminative networks judge whether a given portrait is real or generated. Beyond basic makeup transfer, the model also supports further transfer scenarios, such as adjusting the degree of transfer for a single makeup look and mixing multiple makeup looks on demand. Experiments show that the model achieves high-quality transfer results in these scenarios as well.
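The encoder/generator split the abstract describes can be illustrated with a toy sketch. Everything below is an illustrative assumption — the dimensions, the linear "networks", and one stand-in discriminator; the thesis's actual IdentityNet, MakeupNet, generator, and discriminators are deep convolutional networks trained adversarially.

```python
import numpy as np

# Toy sketch of a DBGAN-style forward pass: encode identity from the source
# portrait, encode makeup from the reference portrait, then recombine.
rng = np.random.default_rng(0)
D_IMG, D_ID, D_MK = 64, 16, 8   # assumed image / identity-code / makeup-code sizes

W_id = rng.standard_normal((D_ID, D_IMG)) * 0.1         # stand-in "IdentityNet"
W_mk = rng.standard_normal((D_MK, D_IMG)) * 0.1         # stand-in "MakeupNet"
W_gen = rng.standard_normal((D_IMG, D_ID + D_MK)) * 0.1 # stand-in generator ("inverse encoder")
w_d = rng.standard_normal(D_IMG) * 0.1                  # stand-in discriminator weights

def identity_net(x):
    """Encode who the person is (identity features)."""
    return np.tanh(W_id @ x)

def makeup_net(y):
    """Encode how the makeup looks (makeup features)."""
    return np.tanh(W_mk @ y)

def generator(id_code, mk_code):
    """Recombine identity and makeup codes into a transferred portrait."""
    return np.tanh(W_gen @ np.concatenate([id_code, mk_code]))

def discriminator(img):
    """Score a portrait as real (near 1) or generated (near 0)."""
    return 1.0 / (1.0 + np.exp(-w_d @ img))

source = rng.standard_normal(D_IMG)     # non-makeup portrait (flattened)
reference = rng.standard_normal(D_IMG)  # reference makeup portrait

# Transfer: source's identity + reference's makeup.
transferred = generator(identity_net(source), makeup_net(reference))
score = discriminator(transferred)      # a value in (0, 1)
print(transferred.shape)                # (64,)
```

The key design point the abstract makes is that identity and makeup live in separate codes, so swapping only the makeup code changes the look without changing the person.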
Degree: Master of Science (Signal Processing)
Deposited: 2022-04-27
Citation: Feng, Q. (2022). Makeup transfer using generative adversarial network. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/156875
Format: application/pdf
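The "degree of transfer" and multi-look mixing scenarios mentioned in the abstract can likewise be sketched as operations on makeup codes. Linear interpolation and weighted blending are assumed mechanisms, consistent with common disentangled-representation practice rather than taken verbatim from the thesis.

```python
import numpy as np

def interpolate_degree(mk_source, mk_reference, alpha):
    """Adjust transfer strength: alpha=0 keeps the source's own makeup,
    alpha=1 applies the reference makeup fully."""
    return (1.0 - alpha) * mk_source + alpha * mk_reference

def mix_makeup(mk_codes, weights):
    """Blend several reference makeup codes with nonnegative weights summing to 1."""
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and (w >= 0).all()
    return np.tensordot(w, np.stack(mk_codes), axes=1)

# Tiny demonstration with hypothetical 8-dimensional makeup codes.
mk_a = np.zeros(8)   # e.g. the source's (no-makeup) code
mk_b = np.ones(8)    # e.g. a reference look's code

half = interpolate_degree(mk_a, mk_b, 0.5)   # 50% transfer strength
print(half[0])       # 0.5

mixed = mix_makeup([mk_a, mk_b], [0.25, 0.75])  # 25/75 blend of two looks
print(mixed[0])      # 0.75
```

The blended code would then be fed to the generator together with the unchanged identity code, which is what makes "on-demand" mixing possible without retraining.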
Collection: DR-NTU (NTU Library, Nanyang Technological University, Singapore)