TEXT TO SINGLE OBJECT ILLUSTRATION SYNTHESIS BY USING GENERATIVE ADVERSARIAL NETWORKS
Main Author: | AGIL IFDILLAH - NIM : 13514010, FEBI |
---|---|
Format: | Final Project |
Language: | Indonesian |
Online Access: | https://digilib.itb.ac.id/gdl/view/27248 |
Institution: | Institut Teknologi Bandung |
Description: |
In recent years, deep learning has seen various successes and has empirically outperformed other approaches on many problems. A deep learning based model can be used to represent an estimate of a data distribution, and such a model is able to generate samples from the learned distribution, for example in the form of images or text. In the visual domain, several deep learning models have been developed to produce images automatically, a task commonly called image synthesis. Automated image synthesis is an interesting and useful problem: it can be applied for purposes such as digital design, animation, and visual editing.
The system built in this final project implements machine learning models that take input sentences, previously converted into vectors using Skip-thought and Sent2Vec, and turn those vectors into single object illustrations. The models belong to the class of generative models called Generative Adversarial Networks (GANs), which pit two artificial neural networks, a generator and a discriminator, against each other until they reach an equilibrium. The GANs are trained with the GAN-CLS algorithm, in which the generator is conditioned on the combination of a text vector and a noise vector drawn from a uniform distribution. GAN-CLS computes losses over several image-text pairings: real images with their matching text, real images with mismatching text, and generated (fake) images with their matching text; these losses drive the learning of both networks.
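As a rough illustration of how the GAN-CLS pairings described above turn into training losses, the following PyTorch-style sketch may help; the `generator` and `discriminator` modules, the latent size, and the assumption that the discriminator outputs a probability per example are hypothetical stand-ins, not the actual architectures used in this final project.

```python
import torch
import torch.nn.functional as F

def gan_cls_losses(generator, discriminator, real_images,
                   text_emb, wrong_text_emb, z_dim=100):
    """One step of GAN-CLS style losses over the three pairings:
    (real image, matching text), (real image, mismatching text),
    (fake image, matching text). Networks and shapes are assumed."""
    batch = real_images.size(0)

    # Noise vector drawn from a uniform distribution, as described in the abstract.
    z = torch.rand(batch, z_dim, device=real_images.device) * 2 - 1

    # The generator is conditioned on the text embedding together with the noise.
    fake_images = generator(z, text_emb)

    # The discriminator is assumed to output a probability in [0, 1] per example.
    d_real_right = discriminator(real_images, text_emb)
    d_real_wrong = discriminator(real_images, wrong_text_emb)
    d_fake_right = discriminator(fake_images.detach(), text_emb)

    ones = torch.ones_like(d_real_right)
    zeros = torch.zeros_like(d_real_right)

    # Discriminator: real + matching text should score 1; the two "wrong" pairings score 0.
    d_loss = (F.binary_cross_entropy(d_real_right, ones)
              + 0.5 * (F.binary_cross_entropy(d_real_wrong, zeros)
                       + F.binary_cross_entropy(d_fake_right, zeros)))

    # Generator: make the (fake image, matching text) pairing look real.
    g_loss = F.binary_cross_entropy(discriminator(fake_images, text_emb), ones)
    return d_loss, g_loss
```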
The experiments were carried out on the Oxford-102 Flowers dataset, which consists of 102 flower categories, 8192 images, and 10 texts per image. Three generator architectures were employed: simple, normal, and deep. Experiments with each generator were conducted separately for Skip-thought and Sent2Vec embeddings. Based on the experiments, the Skip-thought-based models produce images that are more realistic and more varied, and are more stable to train, than the Sent2Vec-based or Wasserstein-based models. The best model is the Skip-thought normal generator, with an inception score of 2.457 ± 0.356.
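For reference, an inception score such as the one reported above is typically computed from the class probabilities that a pretrained Inception classifier assigns to generated images. The sketch below shows that standard computation under the assumption that those probabilities have already been collected into a NumPy array; it is not the project's own evaluation code.

```python
import numpy as np

def inception_score(pyx, splits=10, eps=1e-12):
    """Inception Score from an (N, num_classes) array of predicted
    class probabilities p(y|x) for N generated images. Returns the
    mean and standard deviation over `splits` folds, matching the
    "mean ± std" form used to report results like 2.457 ± 0.356."""
    scores = []
    for part in np.array_split(pyx, splits):
        py = part.mean(axis=0, keepdims=True)                        # marginal p(y)
        kl = (part * (np.log(part + eps) - np.log(py + eps))).sum(axis=1)
        scores.append(np.exp(kl.mean()))                             # exp E_x[ KL(p(y|x) || p(y)) ]
    return float(np.mean(scores)), float(np.std(scores))
```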