Image-to-character-to-word transformers for accurate scene text recognition
Leveraging advances in natural language processing, most recent scene text recognizers adopt an encoder-decoder architecture in which text images are first converted to representative features and then to a sequence of characters via 'sequential decoding'. However, scene text images suffer from rich noise from different sources, such as complex backgrounds and geometric distortions, which often confuses the decoder and leads to incorrect alignment of visual features at noisy decoding time steps. This paper presents I2C2W, a novel scene text recognition technique that is tolerant to geometric and photometric degradation by decomposing scene text recognition into two inter-connected tasks. The first task focuses on image-to-character (I2C) mapping, which detects a set of character candidates from images based on different alignments of visual features in a non-sequential way. The second task tackles character-to-word (C2W) mapping, which recognizes scene text by decoding words from the detected character candidates. Learning directly from character semantics (instead of noisy image features) corrects falsely detected character candidates effectively, which greatly improves the final text recognition accuracy. Extensive experiments over nine public datasets show that the proposed I2C2W outperforms the state of the art by large margins on challenging scene text datasets with various curvature and perspective distortions. It also achieves very competitive recognition performance on multiple normal scene text datasets.
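The abstract describes I2C2W as two coupled stages: an image-to-character (I2C) stage that proposes character candidates non-sequentially from visual features, and a character-to-word (C2W) stage that decodes the final word from those candidates. The following is a minimal sketch of that decomposition, assuming a PyTorch-style implementation; the toy backbone, module names, query count, and output heads are illustrative assumptions and not the authors' released code.

```python
# Minimal sketch of the two-stage I2C2W idea (illustrative assumptions throughout).
import torch
import torch.nn as nn


class I2C2WSketch(nn.Module):
    def __init__(self, num_classes=37, d_model=256, num_queries=25):
        super().__init__()
        # Toy convolutional backbone standing in for the paper's visual encoder.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(d_model, d_model, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # I2C: learned queries attend to visual features; each query proposes one
        # character candidate (class + position) in parallel, i.e. non-sequentially.
        self.char_queries = nn.Parameter(torch.randn(num_queries, d_model))
        i2c_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.i2c_decoder = nn.TransformerDecoder(i2c_layer, num_layers=2)
        self.char_cls = nn.Linear(d_model, num_classes + 1)  # +1 for "no character"
        self.char_pos = nn.Linear(d_model, num_queries)      # candidate position bins
        # C2W: a small transformer over character-candidate embeddings (character
        # semantics, not raw image features) that outputs the corrected word.
        c2w_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.c2w_encoder = nn.TransformerEncoder(c2w_layer, num_layers=2)
        self.word_cls = nn.Linear(d_model, num_classes + 1)

    def forward(self, images):
        feats = self.backbone(images)                         # (B, C, H, W)
        feats = feats.flatten(2).transpose(1, 2)              # (B, H*W, C) visual tokens
        queries = self.char_queries.unsqueeze(0).expand(images.size(0), -1, -1)
        cand = self.i2c_decoder(queries, feats)               # (B, Q, C) character candidates
        char_logits = self.char_cls(cand)
        pos_logits = self.char_pos(cand)
        word_logits = self.word_cls(self.c2w_encoder(cand))   # word decoded from candidates
        return char_logits, pos_logits, word_logits


# Example: one forward pass on a dummy 32x128 text-image crop.
model = I2C2WSketch()
char_logits, pos_logits, word_logits = model(torch.randn(1, 3, 32, 128))
print(char_logits.shape, pos_logits.shape, word_logits.shape)
```

In this sketch the I2C queries attend to image features in parallel (no autoregressive time steps), and the C2W module sees only candidate embeddings, mirroring the abstract's claim that decoding words from character semantics rather than noisy image features makes recognition more robust.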
Main Authors: Xue, Chuhui; Huang, Jiaxing; Zhang, Wenqing; Lu, Shijian; Wang, Changhu; Bai, Song
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Computer science and engineering; Scene Text Recognition; Transformer
Online Access: https://hdl.handle.net/10356/172173
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-172173
record_format: dspace
record timestamps: 2023-11-28T05:04:53Z; 2023-11-28T05:04:54Z
type: Journal Article
citation: Xue, C., Huang, J., Zhang, W., Lu, S., Wang, C. & Bai, S. (2023). Image-to-character-to-word transformers for accurate scene text recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(11), 12908-12921. https://dx.doi.org/10.1109/TPAMI.2022.3230962
journal: IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 11, pp. 12908-12921
issn: 0162-8828
doi: 10.1109/TPAMI.2022.3230962
pmid: 37022831
scopus: 2-s2.0-85149413490
rights: © 2023 IEEE. All rights reserved.
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Computer science and engineering; Scene Text Recognition; Transformer
description: Leveraging advances in natural language processing, most recent scene text recognizers adopt an encoder-decoder architecture in which text images are first converted to representative features and then to a sequence of characters via 'sequential decoding'. However, scene text images suffer from rich noise from different sources, such as complex backgrounds and geometric distortions, which often confuses the decoder and leads to incorrect alignment of visual features at noisy decoding time steps. This paper presents I2C2W, a novel scene text recognition technique that is tolerant to geometric and photometric degradation by decomposing scene text recognition into two inter-connected tasks. The first task focuses on image-to-character (I2C) mapping, which detects a set of character candidates from images based on different alignments of visual features in a non-sequential way. The second task tackles character-to-word (C2W) mapping, which recognizes scene text by decoding words from the detected character candidates. Learning directly from character semantics (instead of noisy image features) corrects falsely detected character candidates effectively, which greatly improves the final text recognition accuracy. Extensive experiments over nine public datasets show that the proposed I2C2W outperforms the state of the art by large margins on challenging scene text datasets with various curvature and perspective distortions. It also achieves very competitive recognition performance on multiple normal scene text datasets.
author2: School of Computer Science and Engineering
author: Xue, Chuhui; Huang, Jiaxing; Zhang, Wenqing; Lu, Shijian; Wang, Changhu; Bai, Song
author_sort: Xue, Chuhui
format: Article
title: Image-to-character-to-word transformers for accurate scene text recognition
publishDate: 2023
url: https://hdl.handle.net/10356/172173
_version_: 1783955548055535616