Makeup transfer using generative adversarial network
Main Author:
Other Authors:
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/156875
Institution: Nanyang Technological University
Summary: Cosmetic style transfer has become a popular technology. Its main function is to transfer the makeup of a reference makeup portrait onto a non-makeup portrait, so that users can freely try different makeup styles and find the one that suits them. Existing research uses model structures such as feedforward neural networks and generative adversarial networks to generate high-quality, high-fidelity portraits. However, most studies treat the learning process as a black box, ignoring the model's generation process and analysing only the generated results. Inspired by recent research on disentangled representation, this dissertation proposes a dual-input/dual-output cycle-consistent generative adversarial network, DBGAN (Disentangled BeautyGAN), that decomposes portraits into makeup features and identity features. Specifically, the proposed model consists of a generative network and two discriminative networks. The generative network contains a network for extracting identity features and a network for extracting makeup features, called IdentityNet and MakeupNet respectively. IdentityNet and MakeupNet encode the identity and makeup features, and the generator uses an inverse encoder to synthesise the transferred portrait. The discriminative networks determine whether a generated portrait is real or fake. Beyond basic makeup transfer, the model also supports additional transfer scenarios, such as adjusting the degree of transfer for a single makeup look and mixing multiple makeup looks on demand. Experimental results show that the model achieves high-quality transfer results in these scenarios as well.
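The abstract only names the components of the model. As a rough illustration of the described architecture, the following is a minimal PyTorch sketch of a dual-encoder generator: IdentityNet and MakeupNet encode identity and makeup features, and a decoder (the "inverse encoder") produces the transferred portrait, with a blending factor standing in for the "degree of transfer" control. All layer sizes, the feature-fusion strategy, and the blending mechanism are assumptions made for illustration, not details taken from the thesis.

```python
import torch
import torch.nn as nn

# Minimal sketch of the dual-encoder generator described in the abstract.
# Layer counts, channel sizes, and the fusion strategy are illustrative
# assumptions only; they are not taken from the thesis.

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

def deconv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class IdentityNet(nn.Module):
    """Encodes identity (face-structure) features from the non-makeup portrait."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 64), conv_block(64, 128), conv_block(128, 256))

    def forward(self, x):
        return self.net(x)

class MakeupNet(nn.Module):
    """Encodes makeup (style) features from the reference makeup portrait."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 64), conv_block(64, 128), conv_block(128, 256))

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Fuses identity and makeup features and decodes a transferred portrait."""
    def __init__(self):
        super().__init__()
        self.identity_net = IdentityNet()
        self.makeup_net = MakeupNet()
        # "Inverse encoder" (decoder): maps fused features back to image space.
        self.decoder = nn.Sequential(
            deconv_block(512, 256),
            deconv_block(256, 128),
            deconv_block(128, 64),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, source, reference, alpha=1.0):
        id_feat = self.identity_net(source)      # identity features of the non-makeup portrait
        mk_feat = self.makeup_net(reference)     # makeup features of the reference portrait
        # alpha scales the makeup features, giving a rough "degree of transfer" control.
        fused = torch.cat([id_feat, alpha * mk_feat], dim=1)
        return self.decoder(fused)

if __name__ == "__main__":
    g = Generator()
    source = torch.randn(1, 3, 256, 256)     # non-makeup portrait
    reference = torch.randn(1, 3, 256, 256)  # reference makeup portrait
    out = g(source, reference, alpha=0.5)    # partial-strength transfer
    print(out.shape)  # torch.Size([1, 3, 256, 256])
```

In a full cycle-consistent setup along the lines sketched in the abstract, two such forward passes (transfer and removal) would be paired with two PatchGAN-style discriminators that judge whether the generated portraits are real or fake; those parts are omitted here for brevity.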