Digital makeup using machine learning algorithms

Bibliographic Details
Main Author: Malani, Surabhi
Other Authors: He Ying
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2020
Online Access:https://hdl.handle.net/10356/144579
Institution: Nanyang Technological University
Description
Summary: Self-photographs, also known as selfies, have become indispensable to social media and the glamour industry. One's face can be further enhanced with modern photo-editing software such as Adobe Photoshop, whose makeup tools can digitally beautify a face with a click. Beauty companies have begun to embrace virtual makeup to support their customers' online shopping experience.

This project evaluates the proof of concept behind virtual makeup. It investigates how digital makeup can be implemented and analyses how the resulting program fares across different use cases, so that curious individuals can experiment with how their appearance would change according to the latest trends through a simple automated algorithm.

The author implemented a state-of-the-art algorithm in Python for semantic segmentation of portrait images using fully convolutional networks (FCN) and other open-source libraries. This was followed by an example-based skin and hair colour transfer using N-dimensional probability density function (PDF) statistical transfer. Ethnically diverse datasets were built from photographs shared by photographers on the Internet. Colour transfer was performed on a part-to-part basis between semantically similar features, and the results were parsed back onto the original image for a completed look.

The delivered application performs a full face-to-face makeup transfer as a series of part-to-part colour transfers, one per facial feature, and the end results capture the essence of the reference image. The application obtains a reasonable segmentation of the input and reference images and performs a colour transfer that yields visually aesthetic results. The resource-efficient program adopts high-performance libraries, taking a total of 284 seconds to execute.
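The abstract does not include the project's code, but a minimal sketch of one well-known form of N-dimensional PDF statistical transfer (iterative distribution transfer: repeatedly rotate the colour point clouds with a random orthonormal basis and match 1-D histograms along each rotated axis) might look like the following. All function names and parameters here are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def match_1d(x, y, n_bins=256):
    """Classic 1-D histogram matching of samples x onto the
    distribution of samples y, via inverse-CDF lookup."""
    lo = min(x.min(), y.min())
    hi = max(x.max(), y.max())
    x_hist, edges = np.histogram(x, bins=n_bins, range=(lo, hi))
    y_hist, _ = np.histogram(y, bins=n_bins, range=(lo, hi))
    x_cdf = np.cumsum(x_hist) / len(x)
    y_cdf = np.cumsum(y_hist) / len(y)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # map each sample to its CDF value, then to the reference quantile
    x_q = np.interp(x, centers, x_cdf)
    return np.interp(x_q, y_cdf, centers)

def pdf_transfer(source, reference, n_iters=10, seed=0):
    """Iterative N-dimensional PDF transfer for (H, W, 3) images:
    random rotations of colour space plus per-axis 1-D matching."""
    rng = np.random.default_rng(seed)
    src = source.reshape(-1, 3).astype(np.float64)
    ref = reference.reshape(-1, 3).astype(np.float64)
    for _ in range(n_iters):
        rot, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthonormal basis
        src_r, ref_r = src @ rot, ref @ rot
        for c in range(3):
            src_r[:, c] = match_1d(src_r[:, c], ref_r[:, c])
        src = src_r @ rot.T  # rotate back to the original colour space
    return src.reshape(source.shape)
```

In a part-to-part setting like the one described above, this routine would be applied per facial region, with `source` and `reference` being the pixels of semantically matching segments (e.g. skin to skin, hair to hair) rather than whole images.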
The colour transfer algorithm is not always optimal when applied to human subjects, because the human eye readily perceives colour distortions. Results improve when the input and reference images are chosen to have relatively similar histogram distributions. Further research could broaden the algorithmic scope, or adopt more sophisticated techniques that parse makeup with content awareness.
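The suggestion to pre-select image pairs with similar histogram distributions can be made concrete with a simple screening metric. The sketch below uses the Bhattacharyya coefficient between per-channel colour histograms; the function name and the example threshold are assumptions for illustration, not part of the project.

```python
import numpy as np

def histogram_similarity(img_a, img_b, n_bins=32):
    """Mean Bhattacharyya coefficient between per-channel colour
    histograms of two images with values in [0, 1].
    Returns 1.0 for identical distributions, 0.0 for disjoint ones."""
    coeffs = []
    for c in range(3):
        h_a, _ = np.histogram(img_a[..., c], bins=n_bins, range=(0.0, 1.0))
        h_b, _ = np.histogram(img_b[..., c], bins=n_bins, range=(0.0, 1.0))
        p = h_a / h_a.sum()
        q = h_b / h_b.sum()
        coeffs.append(np.sum(np.sqrt(p * q)))  # Bhattacharyya coefficient
    return float(np.mean(coeffs))
```

A caller could then skip (or warn about) pairs whose similarity falls below a tuned threshold, e.g. `if histogram_similarity(inp, ref) < 0.5: ...`, so that the colour transfer is only attempted where its distortions are likely to stay imperceptible.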