TY - GEN
T1 - Scene Text Recognition with Image-Text Matching-Guided Dictionary
AU - Wei, Jiajun
AU - Zhan, Hongjian
AU - Tu, Xiao
AU - Lu, Yue
AU - Pal, Umapada
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - Employing a dictionary can efficiently rectify the deviation between the visual prediction and the ground truth in scene text recognition methods. However, the independence of the dictionary from the visual features may lead to incorrect rectification of accurate visual predictions. In this paper, we propose a new dictionary language model leveraging the Scene Image-Text Matching (SITM) network, which avoids the drawbacks of the explicit dictionary language model: 1) independence from the visual features; 2) noisy choices among candidates. The SITM network accomplishes this by using Image-Text Contrastive (ITC) learning to match an image with its corresponding text among candidates in the inference stage. ITC is widely used in vision-language learning to pull positive image-text pairs closer in feature space. Inspired by ITC, the SITM network combines the visual features with the text features of all candidates to identify the candidate with the minimum distance in the feature space. Our lexicon method achieves better results (93.8% accuracy) than the ordinary method (92.1% accuracy) on six mainstream benchmarks. Additionally, we integrate our method with ABINet and establish new state-of-the-art results on several benchmarks.
AB - Employing a dictionary can efficiently rectify the deviation between the visual prediction and the ground truth in scene text recognition methods. However, the independence of the dictionary from the visual features may lead to incorrect rectification of accurate visual predictions. In this paper, we propose a new dictionary language model leveraging the Scene Image-Text Matching (SITM) network, which avoids the drawbacks of the explicit dictionary language model: 1) independence from the visual features; 2) noisy choices among candidates. The SITM network accomplishes this by using Image-Text Contrastive (ITC) learning to match an image with its corresponding text among candidates in the inference stage. ITC is widely used in vision-language learning to pull positive image-text pairs closer in feature space. Inspired by ITC, the SITM network combines the visual features with the text features of all candidates to identify the candidate with the minimum distance in the feature space. Our lexicon method achieves better results (93.8% accuracy) than the ordinary method (92.1% accuracy) on six mainstream benchmarks. Additionally, we integrate our method with ABINet and establish new state-of-the-art results on several benchmarks.
KW - Dictionary Language Model
KW - Image-Text Contrastive Learning
KW - Scene Image-Text Matching
KW - Scene Text Recognition
UR - https://www.scopus.com/pages/publications/85173581763
U2 - 10.1007/978-3-031-41731-3_4
DO - 10.1007/978-3-031-41731-3_4
M3 - Conference contribution
AN - SCOPUS:85173581763
SN - 9783031417306
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 54
EP - 69
BT - Document Analysis and Recognition – ICDAR 2023 - 17th International Conference, Proceedings
A2 - Fink, Gernot A.
A2 - Jain, Rajiv
A2 - Kise, Koichi
A2 - Zanibbi, Richard
PB - Springer Science and Business Media Deutschland GmbH
T2 - 17th International Conference on Document Analysis and Recognition, ICDAR 2023
Y2 - 21 August 2023 through 26 August 2023
ER -