Human face segmentation and feature learning
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: 2018
Subjects:
Online Access: http://hdl.handle.net/10356/75288
Institution: Nanyang Technological University
Summary: This report documents the application of two segmentation models, semantic segmentation and instance segmentation, to the task of segmenting human faces. It details the training of both models, showcases the results of the trained models, and explains, where necessary, the inner workings by which these models transform raw inputs into segmented outputs.
The report is split into two major parts. The first part deals with semantic segmentation applied to two batches of images, the first containing single human faces and the second containing multiple human faces. It then presents the results of experiments that evaluate possible causes of the trained model's shortcomings, such as the presence of headgear, the adjacency of faces, and the angle of faces. Saliency maps are then generated to investigate whether specific facial features bear significance on the segmentation results.
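The record does not reproduce any of the report's code. As a rough illustration of the saliency-map idea mentioned above, the following sketch computes a simple gradient-based saliency map, assuming a hypothetical PyTorch segmentation model `model` whose output contains per-pixel class logits with the face class in channel 1; all names here are illustrative, not the report's actual implementation.

```python
# Hedged sketch: gradient-based saliency for a semantic segmentation model.
# Assumes a PyTorch model `model` whose output has shape (N, C, H, W),
# with channel 1 taken to be the "face" class. Names are illustrative.
import torch

def saliency_map(model, image):
    """Return an (H, W) saliency map for the face class of one image.

    image: tensor of shape (3, H, W), already normalised for the model.
    """
    model.eval()
    x = image.unsqueeze(0).clone().requires_grad_(True)  # (1, 3, H, W)

    logits = model(x)                 # (1, C, H, W) per-pixel class logits
    face_score = logits[:, 1].sum()   # total face-class score
    face_score.backward()             # gradients w.r.t. the input pixels

    # Saliency = largest absolute gradient across the colour channels.
    return x.grad.detach().abs().max(dim=1)[0].squeeze(0)  # (H, W)
```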
The second part of the report is concerned with the state-of-the-art instance segmentation model, Mask R-CNN. It first details how such a model is trained to detect, segment, and distinguish between multiple instances of human faces. It then explains the step-by-step process of bounding box detection and mask segmentation: the inner workings of the region proposal network (RPN), the assignment of class IDs and class confidence scores, non-maximum suppression (NMS), and so on. Finally, it reports evaluation results for the intersection-over-union (IoU) and pixel accuracy of the segmentation masks. The mean average precision of the bounding boxes was calculated to be 0.912, the average intersection-over-union was found to be 0.796, and the average pixel accuracy was 0.844.
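For orientation, a minimal sketch of how the two mask metrics quoted above are commonly computed is shown below; it assumes boolean NumPy arrays for the predicted and ground-truth masks and is not taken from the report itself.

```python
# Hedged sketch of the mask-evaluation metrics quoted above,
# for boolean NumPy masks of equal shape.
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union of two boolean masks."""
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union > 0 else 1.0

def pixel_accuracy(pred, gt):
    """Fraction of pixels where prediction and ground truth agree."""
    return (pred == gt).mean()

# Example: a perfect prediction scores 1.0 on both metrics.
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True
print(mask_iou(gt, gt), pixel_accuracy(gt, gt))  # 1.0 1.0
```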