TY - JOUR
T1 - Weakly supervised scene text generation for low-resource languages
AU - Xie, Yangchen
AU - Chen, Xinyuan
AU - Zhan, Hongjian
AU - Shivakumara, Palaiahnakote
AU - Yin, Bing
AU - Liu, Cong
AU - Lu, Yue
N1 - Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2024/3/1
Y1 - 2024/3/1
AB - A large number of annotated training images are crucial for training successful scene text recognition models. However, collecting sufficient datasets can be a labor-intensive and costly process, particularly for low-resource languages. To address this challenge, auto-generating text data has shown promise in alleviating the problem. Unfortunately, existing scene text generation methods typically rely on a large amount of paired data, which is difficult to obtain for low-resource languages. In this paper, we propose a novel weakly supervised scene text generation method that leverages a few recognition-level labels as weak supervision. The proposed method can generate a large number of scene text images with diverse backgrounds and font styles through cross-language generation. Our method disentangles the content and style features of scene text images, with the former representing textual information and the latter representing characteristics such as font, alignment, and background. To preserve the complete content structure of generated images, we introduce an integrated attention module. Furthermore, to bridge the style gap between different languages, we incorporate a pre-trained font classifier. We evaluate our method using state-of-the-art scene text recognition models. Experiments demonstrate that our generated scene text significantly improves scene text recognition accuracy and helps achieve higher accuracy when complemented with other generative methods.
KW - Low-resource languages
KW - Scene text generation
KW - Style transfer
UR - https://www.scopus.com/pages/publications/85172312111
U2 - 10.1016/j.eswa.2023.121622
DO - 10.1016/j.eswa.2023.121622
M3 - Literature review
AN - SCOPUS:85172312111
SN - 0957-4174
VL - 237
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 121622
ER -