Reference-based screentone transfer via pattern correspondence and regularization
Main Authors:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/8370 https://ink.library.smu.edu.sg/context/sis_research/article/9373/viewcontent/v42i6_26_14800__1_.pdf
Institution: Singapore Management University
Summary: Adding screentone to initial line drawings is a crucial step in manga generation, but it is a tedious and labor-intensive task. In this work, we propose a novel data-driven method that transfers the screentone pattern from a reference manga image. This not only ensures quality but also adds controllability to the generated manga results. The reference-based screentone translation task poses several unique challenges. A manga image, as an abstract art form, often contains multiple screentone patterns interwoven with the line drawing, which makes it difficult to extract a disentangled style code from the reference. Finding a correspondence for mapping between the reference and an input line drawing that contains no screentone is also hard. Moreover, because screentone contains many subtle details, guaranteeing style consistency with the reference remains challenging. To address these difficulties, we propose a novel Reference-based Screentone Transfer Network (RSTN). We encode the screentone style through a 1D stylegram. A patch correspondence loss is designed to build a similarity mapping function that guides the translation. To mitigate generation artefacts, a pattern regularization loss is introduced at the patch level. Through extensive experiments and a user study, we demonstrate the effectiveness of the proposed model.
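The summary mentions a patch correspondence loss and a patch-level pattern regularization loss but gives no formulas. The sketch below is a rough, illustrative guess at how such patch-level losses could be computed in PyTorch; the patch size, the cosine-similarity matching, the L1 term, and the names `patch_corr_loss` and `pattern_reg_loss` are all assumptions for the sake of a runnable example, not the paper's actual definitions.

```python
# Illustrative sketch only: patch extraction, cosine-similarity matching, and the
# L1 regularization are assumptions, not the losses defined in the RSTN paper.
import torch
import torch.nn.functional as F


def extract_patches(feat: torch.Tensor, size: int = 8) -> torch.Tensor:
    """Split a (B, C, H, W) feature map into flattened non-overlapping patches."""
    patches = F.unfold(feat, kernel_size=size, stride=size)  # (B, C*size*size, N)
    return patches.transpose(1, 2)                           # (B, N, C*size*size)


def patch_corr_loss(gen_feat: torch.Tensor, ref_feat: torch.Tensor) -> torch.Tensor:
    """Encourage every generated patch to have a close match in the reference."""
    g = F.normalize(extract_patches(gen_feat), dim=-1)       # (B, Ng, D)
    r = F.normalize(extract_patches(ref_feat), dim=-1)       # (B, Nr, D)
    sim = torch.bmm(g, r.transpose(1, 2))                    # cosine similarities
    best = sim.max(dim=-1).values                            # best match per generated patch
    return (1.0 - best).mean()


def pattern_reg_loss(gen_feat: torch.Tensor, ref_feat: torch.Tensor) -> torch.Tensor:
    """Pull each generated patch toward its most similar reference patch (L1)."""
    g = extract_patches(gen_feat)
    r = extract_patches(ref_feat)
    sim = torch.bmm(F.normalize(g, dim=-1), F.normalize(r, dim=-1).transpose(1, 2))
    idx = sim.argmax(dim=-1)                                  # (B, Ng) index of matched patch
    matched = torch.gather(r, 1, idx.unsqueeze(-1).expand(-1, -1, r.size(-1)))
    return F.l1_loss(g, matched)


if __name__ == "__main__":
    gen = torch.randn(1, 16, 64, 64)  # generated-image features (toy values)
    ref = torch.randn(1, 16, 64, 64)  # reference-image features (toy values)
    print(patch_corr_loss(gen, ref).item(), pattern_reg_loss(gen, ref).item())
```

In this reading, the correspondence term rewards each generated patch for resembling some reference patch, while the regularization term penalizes pixel/feature-level deviation from that matched patch, which is one plausible way to suppress patch-level artefacts in the generated screentone.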