• Title/Summary/Keyword: Scene text recognition

Search Result 30, Processing Time 0.02 seconds

Text Extraction in HIS Color Space by Weighting Scheme

  • Le, Thi Khue Van;Lee, Gueesang
    • Smart Media Journal
    • /
    • v.2 no.1
    • /
    • pp.31-36
    • /
    • 2013
  • Robust and efficient text extraction is very important for the accuracy of Optical Character Recognition (OCR) systems. Natural scene images with degradations such as uneven illumination, perspective distortion, complex backgrounds, and multi-color text pose many challenges to computer vision tasks, especially text extraction. In this paper, we propose a method for extracting text from signboard images based on a combination of the mean shift algorithm and a weighting scheme of hue and saturation in the HSI color space for clustering. The number of clusters is determined automatically by mean shift-based density estimation, in which local clusters are estimated by repeatedly searching for higher-density points in the feature vector space. The weighting scheme of hue and saturation is used to formulate a new distance measure in cylindrical coordinates for text extraction. Experimental results on various natural scene images are presented to demonstrate the effectiveness of our approach.

  • PDF
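The cylindrical-coordinate distance described in the abstract can be illustrated with a toy weighted HSI distance. This is a minimal sketch: the hue term is the saturation-scaled chord length on the hue circle, and `w_hue`/`w_sat` are illustrative knobs, not the paper's actual weighting scheme.

```python
import numpy as np

def hsi_distance(p, q, w_hue=1.0, w_sat=1.0):
    """Distance between two HSI pixels (h in radians, s and i in [0, 1])
    in cylindrical coordinates, with separate weights on the hue and
    saturation contributions (illustrative, not the paper's formula)."""
    h1, s1, i1 = p
    h2, s2, i2 = q
    dh = abs(h1 - h2)
    dh = min(dh, 2 * np.pi - dh)                  # wrap-around hue difference
    hue_term = 2 * np.sqrt(s1 * s2) * np.sin(dh / 2)  # chord on the hue circle
    sat_term = abs(s1 - s2)
    return float(np.sqrt(w_hue * hue_term ** 2
                         + w_sat * sat_term ** 2
                         + (i1 - i2) ** 2))
```

Because the hue difference wraps around 2π, two nearly-red hues on opposite sides of zero are correctly treated as close.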

Ship Number Recognition Method Based on an Improved CRNN Model

  • Wenqi Xu;Yuesheng Liu;Ziyang Zhong;Yang Chen;Jinfeng Xia;Yunjie Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.3
    • /
    • pp.740-753
    • /
    • 2023
  • Text recognition in natural scene images is a challenging problem in computer vision. Accurate identification of ship number characters can effectively improve the level of ship traffic management. However, due to motion blur and text occlusion, the accuracy of ship number recognition struggles to meet practical requirements. To solve these problems, this paper proposes a dual-branch network based on the CRNN recognition network that couples image restoration and character recognition. A CycleGAN module is used for the blur-restoration branch, and a Pix2pix module for the character-occlusion branch; the two are coupled to reduce the impact of image blur and occlusion. The recovered image is then fed into the text recognition branch to improve recognition accuracy. Extensive experiments show that the model is robust and easy to train. Experiments on the CTW dataset and real ship images demonstrate that our method produces more accurate results.
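CRNN-style recognizers typically end with a CTC output layer, whose per-timestep class scores are collapsed into a label sequence. As a minimal sketch of that final step only (the paper's dual restoration branches are not reproduced, and blank index 0 is an assumption):

```python
import numpy as np

BLANK = 0  # assumed index of the CTC blank class

def ctc_greedy_decode(logits):
    """Collapse per-timestep class scores (T x C) into a label sequence:
    take the argmax at each step, merge consecutive repeats, drop blanks."""
    best = np.argmax(logits, axis=1)
    out, prev = [], BLANK
    for c in best:
        if c != prev and c != BLANK:
            out.append(int(c))
        prev = c
    return out
```

Note that a blank between two identical labels keeps them distinct, which is what lets CTC represent doubled characters.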

Three-Level Color Clustering Algorithm for Binarizing Scene Text Images (자연영상 텍스트 이진화를 위한 3단계 색상 군집화 알고리즘)

  • Kim Ji-Soo;Kim Soo-Hyung
    • The KIPS Transactions:PartB
    • /
    • v.12B no.7 s.103
    • /
    • pp.737-744
    • /
    • 2005
  • In this paper, we propose a three-level color clustering algorithm for the binarization of text regions extracted from natural scene images. The proposed algorithm consists of three phases of color segmentation. First, ordinary images, in which the text is well separated from the background, are binarized. In the second phase, the input image is passed through a high-pass filter to deal with images affected by natural or artificial light. Finally, the image is passed through a low-pass filter to deal with texture in the text and/or background. We have shown that the proposed algorithm is more effective than binarization algorithms based on gray-level information. To evaluate its effectiveness, we use the commercial OCR software ARMI 6.0 to measure recognition accuracy on the binarized images. The experimental results on word and character recognition show that the proposed approach is more accurate than conventional methods by over 35%.
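The first-phase binarization of "ordinary" images with well-separated text is commonly done with a global threshold. As one concrete possibility (the paper's own color clustering is not reproduced here), Otsu's method picks the threshold that maximizes between-class variance:

```python
import numpy as np

def otsu_threshold(gray):
    """Global threshold maximizing between-class variance (Otsu's method),
    for an 8-bit grayscale image given as a NumPy array."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                      # class-0 probability
    mu = np.cumsum(prob * np.arange(256))        # class-0 cumulative mean
    mu_t = mu[-1]                                # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[np.isnan(sigma_b)] = 0               # empty classes contribute 0
    return int(np.argmax(sigma_b))
```

On a cleanly bimodal image the returned threshold separates the two modes; the high-pass and low-pass phases would then handle the harder cases.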

A Fast Algorithm for Korean Text Extraction and Segmentation from Subway Signboard Images Utilizing Smartphone Sensors

  • Milevskiy, Igor;Ha, Jin-Young
    • Journal of Computing Science and Engineering
    • /
    • v.5 no.3
    • /
    • pp.161-166
    • /
    • 2011
  • We present a fast algorithm for Korean text extraction and segmentation from subway signboards using smartphone sensors, designed to minimize computational time and memory usage. The algorithm can be used as a preprocessing step for optical character recognition (OCR): binarization, text location, and segmentation. An image of a signboard captured by the smartphone camera at an arbitrary angle is rotated by the detected angle, as if the image had been taken with the smartphone held horizontally. Binarization is performed only once, on a subset of connected components instead of the whole image area, resulting in a large reduction in computational time. Text location is guided by the user's marker line placed over the region of interest in the binarized image via the smartphone touch screen. Text segmentation then utilizes the connected-component data from the binarization step and cuts the string into individual images for the designated characters. The resulting data can be used as OCR input, hence solving the most difficult part of OCR on text areas in natural scene images. The experimental results showed that the binarization algorithm of our method is 3.5 and 3.7 times faster than the Niblack and Sauvola adaptive-thresholding algorithms, respectively. In addition, our method achieved better quality than the other methods.
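The Niblack and Sauvola baselines cited above compute a per-pixel threshold from the local mean m and standard deviation s. A minimal NumPy sketch of Sauvola's formula T = m·(1 + k·(s/R − 1)) follows (Niblack's variant is T = m + k·s); the window size and k are illustrative defaults, and `sliding_window_view` requires NumPy ≥ 1.20.

```python
import numpy as np

def sauvola_threshold(gray, w=15, k=0.5, R=128.0):
    """Per-pixel Sauvola threshold over a w x w neighborhood (w odd).
    Pixels darker than their local threshold would be classified as text."""
    pad = w // 2
    g = np.pad(gray.astype(float), pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(g, (w, w))
    m = win.mean(axis=(2, 3))   # local mean
    s = win.std(axis=(2, 3))    # local standard deviation
    return m * (1 + k * (s / R - 1))
```

In flat regions (s ≈ 0) the threshold drops well below the local mean, which is what makes Sauvola resistant to background noise compared with Niblack.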

Gradation Image Processing for Text Recognition in Road Signs Using Image Division and Merging

  • Chong, Kyusoo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.13 no.2
    • /
    • pp.27-33
    • /
    • 2014
  • This paper proposes a gradation image processing method for the development of a Road Sign Recognition Platform (RReP), which aims to facilitate the rapid and accurate management and surveying of approximately 160,000 road signs installed along the highways, national roadways, and local roads in the cities, districts (gun), and provinces (do) of Korea. RReP is based on GPS (Global Positioning System), IMU (Inertial Measurement Unit), INS (Inertial Navigation System), DMI (Distance Measurement Instrument), and lasers, and uses an imagery information collection/classification module to enable automatic recognition of signs; collection of shape, pole location, and sign-type data; and creation of road sign registers, by extracting basic data on shape and sign content with automated database design. The image division and merging applied in this study produce superior results compared with local binarization in terms of speed. As a result, larger text areas were found in the images, and text recognition accuracy improved when images were gradated. Multiple threshold values for natural scene images are used to improve the extraction rate of text and figures based on pattern recognition.
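The division-and-merging idea can be sketched as tiling the image, thresholding each tile independently, and merging the per-tile results. The per-tile mean threshold below is a simplified stand-in for the paper's multi-threshold step, not its actual method:

```python
import numpy as np

def tiled_binarize(gray, tile=8):
    """Divide the image into tile x tile blocks, binarize each block
    against its own local mean, and merge the results back together."""
    h, w = gray.shape
    out = np.zeros_like(gray, dtype=np.uint8)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = gray[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = (block > block.mean()).astype(np.uint8)
    return out
```

Each tile sees only its own illumination level, which is why divided thresholds cope with gradated lighting better than one global threshold.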

A Recognition Method for Korean Spatial Background in Historical Novels (한국어 역사 소설에서 공간적 배경 인식 기법)

  • Kim, Seo-Hee;Kim, Seung-Hoon
    • Journal of Information Technology Services
    • /
    • v.15 no.1
    • /
    • pp.245-253
    • /
    • 2016
  • The background of a novel is one of its most important elements, together with its characters and events, and denotes the time, place, and situation in which the characters appear. Among these, the spatial background helps convey the topic of a novel, and may therefore help readers choose a novel they want to read. In this paper, we target Korean historical novels. In English text, the spatial background can be recognized relatively easily thanks to capitalization and words that carry spatial information, such as Bank, University, and City. In Korean text, however, recognizing the spatial background is difficult because letter usage provides little such information. Previous studies use machine learning, or dictionaries and rules, to recognize spatial information in text such as news articles and text messages. In this paper, we build place-name dictionaries from references such as Korean history sources and Google Maps. We also propose a method for recognizing the spatial background based on patterns of postpositions in Korean sentences, in contrast to previous work: exploiting characteristics of Korean, we identify postpositions that accompany spatial backgrounds. To raise the accuracy of spatial background recognition, we further propose a method based on morpheme analysis results and word frequency in the novel text. The recognized spatial background can help readers grasp the atmosphere of a novel and understand the events and atmosphere of the scenes in which the characters appear.
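The postposition-pattern idea can be sketched with a toy dictionary lookup: a token is a spatial background candidate if it is a known place name followed by a locative postposition. The place names and the postposition list here are hypothetical stand-ins for the paper's dictionaries built from Korean history references and Google Maps.

```python
# Hypothetical place dictionary and locative postpositions (assumptions).
PLACES = {"한양", "평양", "개성"}
LOC_POSTPOSITIONS = ("에서", "으로", "에")

def spatial_backgrounds(tokens):
    """Return place names that appear with a locative postposition attached."""
    found = []
    for tok in tokens:
        for place in PLACES:
            if tok.startswith(place) and tok[len(place):] in LOC_POSTPOSITIONS:
                found.append(place)
    return found
```

A bare place name without a postposition is not counted, which is the point of the pattern: the postposition signals that the place is where the action happens.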

Improved Text Recognition using Analysis of Illumination Component in Color Images (컬러 영상의 조명성분 분석을 통한 문자인식 성능 향상)

  • Choi, Mi-Young;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.3
    • /
    • pp.131-136
    • /
    • 2007
  • This paper proposes a new approach to eliminating the reflectance component for the detection of text in color images. Color images normally have an illumination component as well as a reflectance component. It is well known that the reflectance component usually hinders detecting and recognizing objects such as text in a scene, since it blurs the overall image. We have developed an approach that efficiently removes the reflectance component while preserving the illumination component. To determine the lighting environment, we classify an input image as Normal or Polarized using a histogram of the red component. By removing the reflectance component caused by illumination changes, we reduce light-induced blurring of text and can extract the text reliably. The experimental results show superior performance even when an image has a complex background. Text detection and recognition performance is influenced by changes in illumination conditions, and our method is robust to images with different illumination conditions.

  • PDF
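A common way to separate the two components, in the spirit of the decomposition discussed above, is the Retinex-style split: estimate illumination as a low-pass version of the image and reflectance as the ratio image. This is an assumption for illustration, not the paper's exact procedure.

```python
import numpy as np

def split_illumination(gray, w=15):
    """Estimate illumination as a separable w x w box blur (low-pass)
    and reflectance as the ratio image / illumination."""
    kernel = np.ones(w) / w
    pad = w // 2
    g = np.pad(gray.astype(float), pad, mode="edge")
    # separable box blur: convolve rows, then columns
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, g)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, blurred)
    illumination = blurred
    reflectance = gray / (illumination + 1e-6)   # avoid division by zero
    return illumination, reflectance
```

Text strokes are high-frequency detail, so they survive in the reflectance image while slow lighting gradients are absorbed into the illumination estimate.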

Image Based Human Action Recognition System to Support the Blind (시각장애인 보조를 위한 영상기반 휴먼 행동 인식 시스템)

  • Ko, ByoungChul;Hwang, Mincheol;Nam, Jae-Yeal
    • Journal of KIISE
    • /
    • v.42 no.1
    • /
    • pp.138-143
    • /
    • 2015
  • In this paper, we develop a novel human action recognition system, based on communication between an ear-mounted Bluetooth camera and an action recognition server, to aid scene recognition for the blind. First, when a blind user captures an image of a specific location using the ear-mounted camera, the captured image is transmitted to the recognition server through a smartphone synchronized with the camera. The recognition server sequentially performs human detection, object detection, and action recognition by analyzing human poses. The recognized action information is then sent back to the smartphone, and the user can hear it through text-to-speech (TTS). Experimental results using the proposed system showed 60.7% action recognition performance on test data captured in indoor and outdoor environments.

Text Region Extraction Using Pattern Histogram of Character-Edge Map in Natural Images (문자-에지 맵의 패턴 히스토그램을 이용한 자연이미지에서 텍스트 영역 추출)

  • Park, Jong-Cheon;Hwang, Dong-Guk;Lee, Woo-Ram;Jun, Byoung-Min
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.7 no.6
    • /
    • pp.1167-1174
    • /
    • 2006
  • Text region detection in natural scenes is useful in many applications, such as vehicle license plate recognition. In this paper, we propose a text region extraction method using the pattern histogram of character-edge maps. We create 16 kinds of edge maps from the extracted edges and then combine them into 8 kinds of character-edge maps that capture character features. We extract candidate text regions using these 8 character-edge maps. Candidates are then verified using the pattern histogram of character-edge maps and structural features of text regions. Experimental results show that the proposed method effectively extracts text regions with complex backgrounds, various font sizes, and various font colors.

  • PDF
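One simplified way to approximate a bank of directional edge maps like the 16 described above is to quantize gradient orientations into 16 bins and accumulate a magnitude-weighted histogram. This is a stand-in sketch, not the paper's edge-map construction:

```python
import numpy as np

def edge_orientation_histogram(gray, bins=16):
    """Quantize gradient orientations into `bins` directions and build a
    magnitude-weighted, normalized histogram over the edge pixels."""
    gy, gx = np.gradient(gray.astype(float))     # per-axis gradients
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)       # orientation in [0, 2*pi)
    idx = np.minimum((ang / (2 * np.pi) * bins).astype(int), bins - 1)
    hist = np.zeros(bins)
    edge = mag > 0                               # keep only edge pixels
    np.add.at(hist, idx[edge], mag[edge])
    return hist / (hist.sum() + 1e-9)
```

A candidate region's histogram could then be compared against typical character patterns, which tend to spread mass across several orientations rather than one.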