Scene Text Detection


Text Detection based on Edge Enhanced Contrast Extremal Region and Tensor Voting in Natural Scene Images

  • Pham, Van Khien;Kim, Soo-Hyung;Yang, Hyung-Jeong;Lee, Guee-Sang
    • Smart Media Journal / v.6 no.4 / pp.32-40 / 2017
  • In this paper, a robust text detection method based on the edge-enhanced contrast extremal region (CER) is proposed using the stroke width transform (SWT) and tensor voting. First, the edge-enhanced CER extracts a number of covariant regions, which are stable connected components, from input images. Next, the SWT is computed from a distance map and used to eliminate non-text regions. Then, the candidate text regions are verified based on tensor voting, which uses the center points obtained in the previous step to compute curve saliency values. Finally, connected component grouping is applied to cluster nearby characters. The proposed method is evaluated on the ICDAR2003 and ICDAR2013 text detection competition datasets, and the experimental results show high accuracy compared to previous methods.
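The distance-map construction of the SWT mentioned above can be sketched roughly as follows; this is a minimal illustration of estimating stroke widths from a distance transform, not the paper's implementation (the function name, the example mask, and the width heuristic are my own):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, label

def stroke_width_stats(mask):
    """For each connected component of a binary stroke mask, estimate the
    stroke width from the distance map: each foreground pixel stores its
    distance to the nearest background pixel, so twice the component's
    maximum distance approximates its widest stroke."""
    dist = distance_transform_edt(mask)
    labels, n = label(mask)
    stats = []
    for i in range(1, n + 1):
        d = dist[labels == i]
        # (component id, approximate stroke width, width variability)
        stats.append((i, 2.0 * d.max(), d.std()))
    return stats

# A 3-pixel-thick horizontal bar; the estimate lands near the true width.
mask = np.zeros((10, 30), dtype=bool)
mask[4:7, 5:25] = True
```

Components whose width variability is large relative to the width itself are unlikely to be uniform strokes and could be discarded, which is the non-text filtering role the abstract assigns to the SWT.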

Scene Text Detection with Length of Text (글자 수 정보를 이용한 이미지 내 글자 영역 검출 방법)

  • Yeong Woo Kim;Wonjun Kim
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.11a / pp.177-179 / 2022
  • With the advance of deep learning, convolutional neural network based scene text detection methods have been proposed. However, most of these methods use only the word location information provided by the datasets and do not exploit the number of characters, which is information inherent to a text region. In this paper, we therefore propose a module that learns the number of characters to detect text regions in images effectively. The proposed method adds a character-count prediction module to a scene text detection model composed of simple convolutional layers and trains them jointly. On the ICDAR 2015 dataset, which is widely used for evaluating text detection performance, the method improves on the existing approach, confirming that the number of characters is useful information for detecting text regions.


Text Detection in Scene Images using spatial frequency (공간주파수를 이용한 장면영상에서 텍스트 검출)

  • Sin, Bong-Kee;Kim, Seon-Kyu
    • Journal of KIISE:Software and Applications / v.30 no.1_2 / pp.31-39 / 2003
  • It is often assumed that text regions in images are characterized by some distinctive or characteristic spatial frequencies. This feature is highly intuitive and therefore appealing. We propose a method of detecting horizontal texts in natural scene images. It is based on two features that can be employed separately or in succession: the frequency of edge pixels across vertical and horizontal scan lines, and the fundamental frequency in the Fourier domain. We confirmed that the frequency features are language independent. Also addressed is the detection of quadrilaterals or approximate rectangles using the Hough transform. Since text that is meaningful to viewers usually appears within rectangles whose colors contrast highly with the background, it is natural to assume that detecting rectangles may help locate the desired texts correctly in natural outdoor scene images.
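The two features the abstract names, edge-pixel counts along scan lines and a Fourier-domain fundamental frequency, can be illustrated roughly as follows (the function names are mine, and the paper's exact feature computation may differ):

```python
import numpy as np

def scanline_edge_counts(edges):
    """Count edge pixels along each horizontal scan line; rows crossing
    text tend to show many vertical-stroke transitions."""
    return edges.sum(axis=1)

def fundamental_frequency(profile):
    """Dominant non-DC frequency of a 1-D profile via the FFT, a rough
    stand-in for the Fourier-domain feature described above."""
    spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
    return int(np.argmax(spectrum[1:]) + 1)  # skip the DC bin
```

A row profile with a strong, stable fundamental frequency is a hint of periodic strokes, regardless of the script, which matches the claimed language independence.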

Scene Text Recognition Performance Improvement through an Add-on of an OCR based Classifier (OCR 엔진 기반 분류기 애드온 결합을 통한 이미지 내부 텍스트 인식 성능 향상)

  • Chae, Ho-Yeol;Seok, Ho-Sik
    • Journal of IKEEE / v.24 no.4 / pp.1086-1092 / 2020
  • An autonomous agent for the real world should be able to recognize text in scenes. With the advancement of deep learning, various DNN models have been utilized for transformation, feature extraction, and prediction. However, the existing state-of-the-art STR (Scene Text Recognition) engines do not achieve the performance required for real-world applications. In this paper, we introduce a performance-improvement method through an add-on composed of an OCR (Optical Character Recognition) engine and a classifier for STR engines. On instances from the IC13 and IC15 datasets that an STR engine failed to recognize, our method recognizes 10.92% of the unrecognized characters.

Text Extraction from Complex Natural Images

  • Kumar, Manoj;Lee, Guee-Sang
    • International Journal of Contents / v.6 no.2 / pp.1-5 / 2010
  • The rapid growth in communication technology has led to the development of effective ways of sharing ideas and information in the form of speech and images. Understanding this information has become an important research issue and has drawn the attention of many researchers. Text in a digital image contains much important information regarding the scene. Detecting and extracting this text is a difficult task with many challenging issues. The main challenges in extracting text from natural scene images are variations in font size, text alignment, font colors, illumination changes, and reflections in the images. In this paper, we propose a connected-component-based method to automatically detect text regions in natural images. Since text regions in images mostly contain repetitions of vertical strokes, we try to find patterns of closely packed vertical edges. Once a group of edges is found, neighboring vertical edges are connected to each other. Connected regions whose geometric features lie outside the valid specifications are considered outliers and eliminated. The proposed method is more effective than existing methods for slanted or curved characters. Experimental results are given to validate our approach.
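The grouping of vertical edges with geometric outlier rejection might look like the following sketch; the `min_height` and `max_aspect` thresholds are illustrative placeholders, not the paper's specifications:

```python
import numpy as np
from scipy.ndimage import label, find_objects

def group_vertical_edges(vert_edges, min_height=3, max_aspect=0.5):
    """Label connected vertical-edge pixels and keep components whose
    geometry plausibly matches a character stroke (tall and narrow)."""
    labels, _ = label(vert_edges)
    boxes = []
    for sl in find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if h >= min_height and w / h <= max_aspect:
            boxes.append((sl[0].start, sl[1].start, h, w))
    return boxes
```

Components that fail the height or aspect test play the role of the geometric outliers the abstract says are eliminated.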

Detecting and Segmenting Text from Images for a Mobile Translator System

  • Chalidabhongse, Thanarat H.;Jeeraboon, Poonsak
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2004.08a / pp.875-878 / 2004
  • Research on text detection and segmentation has been conducted for a long time in the OCR area. However, there are other areas in which text detection and segmentation from images can be very useful. In this report, we first propose the design of a mobile translator system that helps non-native speakers understand foreign languages using a ubiquitous mobile network and camera mobile phones. The main focus of the paper is the algorithm for detecting and segmenting text embedded in natural scenes from captured images. The image, which is captured by a camera mobile phone, is transmitted to a translator server. It is initially passed through preprocessing steps that smooth the image and suppress noise. A threshold is applied to binarize the image. Afterward, an edge detection algorithm and connected component analysis are performed on the filtered image to find edges and segment the components in the image. Finally, pre-defined layout relation constraints are utilized to decide which components are likely to be text in the image. A preliminary experiment was done, and the system yielded a recognition rate of 94.44% on a set of 36 various natural scene images containing text.

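The binarization step in the entry above only says "a threshold is applied"; a common global choice is Otsu's method, sketched here as one plausible instantiation rather than the paper's actual procedure:

```python
import numpy as np

def otsu_threshold(gray):
    """Global Otsu threshold: choose the gray level that maximizes the
    between-class variance of the foreground/background split."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    w_b, sum_b = 0.0, 0.0
    best_t, best_var = 0, 0.0
    for t in range(256):
        w_b += hist[t]                     # background pixel count
        if w_b == 0 or w_b == total:
            continue
        sum_b += t * hist[t]
        w_f = total - w_b                  # foreground pixel count
        m_b = sum_b / w_b                  # background mean
        m_f = (sum_all - sum_b) / w_f      # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

For a bimodal histogram, such as dark text on a bright background, the maximizing threshold falls between the two modes.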

Integrated Method for Text Detection in Natural Scene Images

  • Zheng, Yang;Liu, Jie;Liu, Heping;Li, Qing;Li, Gen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.11 / pp.5583-5604 / 2016
  • In this paper, we present a novel image operator to extract textual information from natural scene images. First, a powerful refiner called the Stroke Color Extension, which extends the widely used Stroke Width Transform by incorporating the color information of strokes, is proposed to achieve significantly enhanced performance on intra-character connection and non-character removal. Second, a character classifier is trained using gradient features. The classifier not only eliminates non-character components but also retains a large number of characters. Third, an effective extractor called the Character Color Transform combines the color information of characters with geometry features. It is used to extract potential characters that were not correctly extracted in the previous steps. Fourth, a Convolutional Neural Network model is used to verify text candidates, improving the performance of text detection. The proposed technique is tested on two public datasets, i.e., the ICDAR2011 and ICDAR2013 datasets. The experimental results show that our approach achieves state-of-the-art performance.

Text Detection and Binarization using Color Variance and an Improved K-means Color Clustering in Camera-captured Images (카메라 획득 영상에서의 색 분산 및 개선된 K-means 색 병합을 이용한 텍스트 영역 추출 및 이진화)

  • Song Young-Ja;Choi Yeong-Woo
    • The KIPS Transactions:PartB / v.13B no.3 s.106 / pp.205-214 / 2006
  • Texts in images carry significant and detailed information about the scenes, and if we can automatically detect and recognize those texts in real time, this capability can be used in various applications. In this paper, we propose a new text detection method that can find texts in various camera-captured images, along with a text segmentation method for the detected text regions. The detection method uses color variance as a detection feature in RGB color space, and the segmentation method applies an improved K-means color clustering in RGB color space. We have tested the proposed methods using various kinds of document-style and natural scene images captured by digital cameras and a mobile-phone camera, and we also tested the method on a portion of the ICDAR[1] contest images.
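A plain K-means over RGB pixels, the baseline that the paper's improved clustering builds on, can be sketched as follows; the random initialization and the absence of the paper's refinements are my simplifications:

```python
import numpy as np

def kmeans_colors(pixels, k=2, iters=20, seed=0):
    """Cluster an (N, 3) array of RGB pixels into k color groups with
    standard Lloyd iterations: assign each pixel to its nearest center,
    then move each center to the mean of its assigned pixels."""
    rng = np.random.default_rng(seed)
    pixels = np.asarray(pixels, dtype=float)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    assign = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = pixels[assign == j].mean(axis=0)
    return centers, assign
```

For binarization, the cluster whose mean color contrasts most with the estimated background would be taken as the text layer.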

The Binarization of Text Regions in Natural Scene Images, based on Stroke Width Estimation (자연 영상에서 획 너비 추정 기반 텍스트 영역 이진화)

  • Zhang, Chengdong;Kim, Jung Hwan;Lee, Guee Sang
    • Smart Media Journal / v.1 no.4 / pp.27-34 / 2012
  • In this paper, a novel text binarization method is presented that can deal with complex conditions, such as shadows, non-uniform illumination due to highlights or object projection, and messy backgrounds. To locate the target text region, a focus line is assumed to pass through the text region. Next, connected component analysis and stroke width estimation based on the location information of the focus line are used to locate the bounding box of the text region and the bounding box of each connected component. A series of classifications is applied to identify whether each CC (connected component) is text or non-text. Also, a modified K-means clustering method based on the HCL color space is applied to reduce the color dimension. A text binarization procedure based on the locations of text components and seed color pixels is then used to generate the final result.


Mobile Phone Camera Based Scene Text Detection Using Edge and Color Quantization (에지 및 컬러 양자화를 이용한 모바일 폰 카메라 기반장면 텍스트 검출)

  • Park, Jong-Cheon;Lee, Keun-Wang
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.3 / pp.847-852 / 2010
  • Text in natural images is a varied and important feature of an image. Therefore, detecting, extracting, and recognizing text is studied as an important research area. Lately, many applications in various fields are being developed based on mobile phone camera technology. We detect edge components from the gray-scale image, detect the boundaries of text regions by the local standard deviation, and obtain connected components using the Euclidean distance in RGB color space. The detected edges and connected components are labeled, and bounding boxes are obtained for each region. Text candidates are obtained with heuristic rules for text. The detected candidate text regions are merged to generate a single candidate text region, and text regions are then detected by verifying the candidate regions using the adjacency and similarity characteristics between them. Experimental results show that we improved the text region detection rate by using the complementarity of edge and color connected components.