Automated image generation
Saved in:
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/156588
Institution: Nanyang Technological University
Summary: In the movie and animation industry, concept art of character designs and scenery plays a crucial part in a film or animation's success. Today, concept art is usually created only by professional concept artists, who use highly specialised graphic design software to produce it for movie and animation directors so that they can plan and coordinate ideas for specific scenes. Although concept artists are crucial, they are also costly to hire, and small movie and animation studios on a tight budget therefore find it hard to generate ideas for a scene because they cannot afford more concept artists. An intuitive approach is needed to solve this problem.
Image generation using Generative Adversarial Networks (GANs) has become an immensely popular topic in Computer Vision in recent years. Among the new state-of-the-art GAN architectures, Pix2Pix, a type of conditional GAN, demonstrated that it could generate detailed images of buildings, bags and street maps given an input such as an edge map, an image label map or an aerial map.
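For illustration only (not part of the original project), the sketch below shows in Python with PyTorch the kind of conditional GAN objective Pix2Pix optimises: a discriminator judges (input, output) pairs while the generator is trained both to fool it and to stay close to the ground truth under an L1 penalty. The tiny networks, image sizes and hyperparameters here are placeholders, not the project's actual models.

# Minimal sketch of a Pix2Pix-style conditional GAN training step (assumed
# PyTorch implementation; toy networks stand in for the U-Net and PatchGAN).
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy stand-in for the U-Net generator used by Pix2Pix."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Toy stand-in for the PatchGAN discriminator; scores (input, output) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),  # patch-wise real/fake scores
        )
    def forward(self, edges, image):
        return self.net(torch.cat([edges, image], dim=1))

G, D = TinyGenerator(), TinyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # weight on the L1 reconstruction term, as in the Pix2Pix paper

edges = torch.randn(1, 3, 64, 64)   # conditioning edge map (random placeholder)
target = torch.randn(1, 3, 64, 64)  # ground-truth image (random placeholder)

# Discriminator step: push real pairs towards 1 and fake pairs towards 0.
fake = G(edges).detach()
d_real, d_fake = D(edges, target), D(edges, fake)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: fool the discriminator while staying close to the target in L1.
fake = G(edges)
d_fake = D(edges, fake)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, target)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()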
In this project, we explore Pix2Pix's image-to-image translation abilities to generate facial images of Japanese manga / anime characters from a user sketch, and images of Asian faces from an artist sketch (and vice versa), as alternative ways to produce character design concept art that can provide inspiring and motivating ideas for the movie and animation directors of a small company.
We also experiment with Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN) to sharpen the output images of the Pix2Pix network, and compare a traditional edge detection method (Canny edge detection) against a state-of-the-art method, Holistically-Nested Edge Detection (HED), for generating the edge maps that serve as paired training inputs for the Pix2Pix network.
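As a purely illustrative sketch (assuming OpenCV and hypothetical folder names, not the project's actual data pipeline), the Python snippet below shows how Canny edge maps could be placed side by side with their source images to form Pix2Pix training pairs; the HED variant would substitute a pre-trained network's soft edge map for the Canny output.

# Minimal sketch: build edge|photo training pairs with Canny (OpenCV assumed;
# folder names and thresholds are placeholders).
import os
import cv2

SRC_DIR = "faces"          # hypothetical folder of ground-truth images
OUT_DIR = "paired_canny"   # hypothetical output folder of paired images
os.makedirs(OUT_DIR, exist_ok=True)

for name in os.listdir(SRC_DIR):
    img = cv2.imread(os.path.join(SRC_DIR, name))
    if img is None:
        continue
    img = cv2.resize(img, (256, 256))

    # Traditional edge detection: Canny on a blurred grayscale copy.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(gray, 100, 200)             # thresholds chosen arbitrarily here
    edges = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)

    # Paired format used by Pix2Pix: input and target side by side in one image.
    pair = cv2.hconcat([edges, img])
    cv2.imwrite(os.path.join(OUT_DIR, name), pair)

# For the HED variant, the same loop would instead run each image through a
# pre-trained HED network (for example one loaded via cv2.dnn) and use its
# soft edge map in place of the Canny output.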
Based on the results of our experiments, image-to-image translation using GANs may become a viable option for replacing concept artists at small movie and animation studios in the near future, since such networks can be trained to generate specific types of images given training data of the right quality and quantity.