http://dx.doi.org/10.6109/jkiice.2022.26.8.1111

A general-purpose model capable of image captioning in Korean and English, and a method to generate text suitable for the purpose

Cho, Su Hyun (Department of Artificial Intelligence Convergence, Sungkyunkwan University)
Oh, Hayoung (College of Computing and Informatics, Sungkyunkwan University)
Abstract
Image captioning is the task of viewing an image and describing it in natural language. It is an important problem that can only be solved by understanding and bringing together two fields, image processing and natural language processing. By automatically recognizing an image and describing it in text, a caption can further be converted into speech, helping visually impaired people understand their surroundings; the task also underpins applications such as image search, art therapy, sports commentary, and real-time traffic information commentary. So far, image captioning research has focused solely on recognizing an image and converting it into text. For practical use, however, a model must cope with the diverse environments found in the real world and must be able to provide image descriptions tailored to the intended purpose. In this work, we present a general-purpose Korean and English image captioning model, together with text generation techniques that produce captions suited to the purpose.
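To make the image-to-text-to-speech pipeline described above concrete, the following is a minimal sketch in Python. It is not the bilingual model proposed in this paper; it assumes the Hugging Face transformers package with a publicly available English captioning checkpoint (nlpconnect/vit-gpt2-image-captioning) as a stand-in, and the pyttsx3 package for offline text-to-speech.

```python
# Minimal sketch of the image -> caption -> speech pipeline from the abstract.
# Uses an off-the-shelf English captioner as a stand-in for the paper's
# bilingual model. Requires: pip install transformers torch pillow pyttsx3
from transformers import pipeline
import pyttsx3

# Off-the-shelf ViT + GPT-2 captioning checkpoint (assumed stand-in)
captioner = pipeline("image-to-text",
                     model="nlpconnect/vit-gpt2-image-captioning")

def describe_aloud(image_path: str) -> str:
    # 1) Image -> text: generate a natural-language caption for the image
    caption = captioner(image_path)[0]["generated_text"]
    # 2) Text -> speech: read the caption aloud for a visually impaired user
    engine = pyttsx3.init()
    engine.say(caption)
    engine.runAndWait()
    return caption

if __name__ == "__main__":
    # "street_scene.jpg" is a hypothetical input image
    print(describe_aloud("street_scene.jpg"))
```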
Keywords
Image Captioning; OCR; TextCaps; COCO