• Title/Summary/Keyword: Text Lines Extraction

Search Results: 11

Extracting curved text lines using the chain composition and the expanded grouping method (체인 정합과 확장된 그룹핑 방법을 사용한 곡선형 텍스트 라인 추출)

  • Bai, Nguyen Noi;Yoon, Jin-Seon;Song, Young-Jun;Kim, Nam;Kim, Yong-Gi
    • The KIPS Transactions:PartB / v.14B no.6 / pp.453-460 / 2007
  • In this paper, we present a method to extract text lines from poorly structured documents. The text lines may have different orientations and considerably curved shapes, and there may be a few wide inter-word gaps within a single line. Such text lines are found in posters, address blocks, and artistic documents. Our method is based on traditional perceptual grouping, but we develop novel solutions to overcome the problems of insufficient seed points and varied orientations within a single line. We assume that text lines consist of connected components, where each connected component is a set of black pixels belonging to one letter or several touching letters. In our scheme, connected components closer than an iteratively incremented threshold are joined into a chain. Elongated chains are identified as the seed chains of lines. The seed chains are then extended to the left and to the right according to the local orientations, which are re-evaluated at each side of a chain as it is extended. By this process, all text lines are finally constructed. In our experiments, the proposed method extracts considerably curved text lines from logos and slogans well, achieving 98% and 94% accuracy for straight-line and curved-line extraction, respectively.
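The chain-composition step described above can be sketched as follows. This is only an illustrative outline, not the authors' implementation: the input format (component centroids), the single fixed threshold, and the `min_len` parameter for "elongated" chains are our assumptions.

```python
from math import dist

def build_chains(centroids, threshold):
    """Group connected components whose centroids are closer than
    threshold into chains, using a simple union-find."""
    parent = list(range(len(centroids)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            if dist(centroids[i], centroids[j]) < threshold:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[rj] = ri

    chains = {}
    for i in range(len(centroids)):
        chains.setdefault(find(i), []).append(i)
    return list(chains.values())

def seed_chains(chains, min_len=3):
    """Elongated chains (here: many members) serve as line seeds."""
    return [c for c in chains if len(c) >= min_len]
```

In the paper the threshold is incremented iteratively; here a single pass with one threshold shows the grouping idea.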

Touch TT: Scene Text Extractor Using Touchscreen Interface

  • Jung, Je-Hyun;Lee, Seong-Hun;Cho, Min-Su;Kim, Jin-Hyung
    • ETRI Journal / v.33 no.1 / pp.78-88 / 2011
  • In this paper, we present the Touch Text exTractor (Touch TT), an interactive text segmentation tool for extracting scene text from camera-based images. Touch TT provides a natural interface in which a user indicates the location of text regions with a simple touch line. Touch TT then automatically estimates the text color and roughly locates the text regions. By inferring text characteristics from the estimated text color and text region, Touch TT can extract text components. It can also handle partially drawn lines that cover only a small section of a text area. The proposed system achieves reasonable accuracy for text extraction on moderately difficult examples from the ICDAR 2003 database and our own database.

Word Extraction from Table Regions in Document Images (문서 영상 내 테이블 영역에서의 단어 추출)

  • Jeong, Chang-Bu;Kim, Soo-Hyung
    • The KIPS Transactions:PartB / v.12B no.4 s.100 / pp.369-378 / 2005
  • A document image is segmented and classified into text, picture, and table regions by document layout analysis, and the words in table regions are significant for keyword spotting because they are more meaningful than words in other regions. This paper proposes a method to extract words from table regions in document images. Since word extraction from a table region practically amounts to extracting words from the cell regions composing the table, the cells must be extracted correctly. In the cell extraction module, the table frame is extracted first by analyzing connected components, and the intersection points are then extracted from the table frame. We correct false intersections using the correlation between neighboring intersections, and extract the cells using the intersection information. Text regions in the individual cells are located using the connected-component information obtained during cell extraction, and they are segmented into text lines using projection profiles. Finally, we divide the segmented lines into words using gap clustering and special-symbol detection. Experiments performed on table images extracted from Korean documents show 99.16% word extraction accuracy.
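The projection-profile line segmentation and gap clustering can be sketched on a binary cell image as follows; a minimal illustration assuming a 0/1 bitmap and a fixed word-gap threshold (the function names and threshold handling are ours, not the paper's).

```python
def segment_lines(bitmap):
    """Split a binary cell image into text lines where the horizontal
    projection profile (ink per row) drops to zero."""
    profile = [sum(row) for row in bitmap]
    lines, start = [], None
    for y, v in enumerate(profile):
        if v > 0 and start is None:
            start = y
        elif v == 0 and start is not None:
            lines.append((start, y - 1))
            start = None
    if start is not None:
        lines.append((start, len(profile) - 1))
    return lines

def split_words(col_profile, gap_threshold):
    """Cluster column gaps: runs of ink separated by a gap narrower
    than gap_threshold belong to the same word."""
    runs, start = [], None
    for x, v in enumerate(col_profile):
        if v > 0 and start is None:
            start = x
        elif v == 0 and start is not None:
            runs.append((start, x - 1))
            start = None
    if start is not None:
        runs.append((start, len(col_profile) - 1))
    words = []
    for lo, hi in runs:
        if words and lo - words[-1][1] - 1 < gap_threshold:
            words[-1] = (words[-1][0], hi)   # small gap: same word
        else:
            words.append((lo, hi))           # wide gap: new word
    return words
```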

Skew Compensation and Text Extraction of The Traffic Sign in Natural Scenes (자연영상에서 교통 표지판의 기울기 보정 및 텍스트 추출)

  • Choi Gyu-Dam;Kim Sung-Dong;Choi Ki-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.3 no.2 s.5 / pp.19-28 / 2004
  • This paper shows how to compensate for the skew of a traffic sign in a natural image and extract its text. The research deals with processing of the gray image, and the whole process comprises four steps. In the first part, we perform preprocessing and Canny edge extraction for the edges in the natural image. In the second part, we perform preprocessing and postprocessing for the Hough transform in order to extract the skew angle. In the third part, we remove noise and complex lines, and then extract candidate regions using features of the text. In the last part, after performing local binarization in the extracted candidate regions, we extract the text by using the feature differences between text and non-text to discard the unnecessary non-text. In an experiment with 100 natural images that include traffic signs, the method achieves 82.54% text extraction and 79.69% extraction accuracy, which is more accurate than existing approaches such as those using RLS (run-length smoothing) or the Fourier transform. It also achieves 94.5% accuracy in skew-angle extraction, a 26% improvement over using the Hough transform alone. The research can be applied to providing location information in walking-aid systems for the blind or in driverless-vehicle operation.


Fast Skew Detection of Document Images by Extraction of Center Points of Blank Lines (공백행의 중심점 추출에 의한 고속 문서 기울기 검출)

  • Jeong, Jae-Yeong;Kim, Mun-Hyeon
    • Journal of KIISE:Software and Applications / v.26 no.11 / pp.1342-1349 / 1999
  • In this paper, we propose a fast algorithm to estimate the skew angle of linearly skewed document images. It is based on the fact that a blank line of uniform thickness exists between two adjacent text lines, and that the slope of this blank line reflects the skew of the document. First, we apply a simple morphological dilation to separate text-line regions from blank-line regions, and we detect the center points of the blank lines along vertically sampled lines. We then calculate the slope between neighboring center points on the same blank line, accumulate the slopes for the entire image in a histogram, and estimate the peak of the histogram as the skew of the input document. In the experiments, we apply the proposed algorithm to horizontally written documents of various formats, including hand-printed and machine-printed text.

Optical Character Recognition for Hindi Language Using a Neural-network Approach

  • Yadav, Divakar;Sanchez-Cuadrado, Sonia;Morato, Jorge
    • Journal of Information Processing Systems / v.9 no.1 / pp.117-140 / 2013
  • Hindi is the most widely spoken language in India, with more than 300 million speakers. As texts written in Hindi, unlike English, have no separation between characters, Optical Character Recognition (OCR) systems developed for the Hindi language have very poor recognition rates. In this paper we propose an OCR for printed Hindi text in Devanagari script using an Artificial Neural Network (ANN), which improves efficiency. One of the major reasons for the poor recognition rate is error in character segmentation; the presence of touching characters in scanned documents further complicates the segmentation process, creating a major problem when designing an effective character segmentation technique. Preprocessing, character segmentation, feature extraction, and finally classification and recognition are the major steps followed by a general OCR. The preprocessing tasks considered in this paper are conversion of grayscale images to binary images, image rectification, and segmentation of the document's textual contents into paragraphs, lines, words, and basic symbols. The basic symbols, obtained as the fundamental units of the segmentation process, are recognized by a neural classifier. Three feature extraction techniques, namely histogram of projection based on mean distance, histogram of projection based on pixel value, and vertical zero crossing, are used to improve the rate of recognition. These techniques are powerful enough to extract features even from distorted characters and symbols. The neural classifier is a back-propagation neural network with two hidden layers, trained and tested on printed Hindi texts. A correct recognition rate of approximately 90% is achieved.
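Of the three features, the vertical zero-crossing count is simple to sketch; an illustrative version for a 0/1 bitmap (the exact definition and normalization used in the paper may differ).

```python
def vertical_zero_crossings(bitmap):
    """For each column, count 0-to-1 transitions scanning top to bottom.
    The resulting per-column counts form a feature vector for the symbol."""
    h, w = len(bitmap), len(bitmap[0])
    features = []
    for x in range(w):
        prev, crossings = 0, 0
        for y in range(h):
            if bitmap[y][x] == 1 and prev == 0:
                crossings += 1
            prev = bitmap[y][x]
        features.append(crossings)
    return features
```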

A Block Classification and Rotation Angle Extraction for Document Image (문서 영상의 영역 분류와 회전각 검출)

  • Mo, Moon-Jung;Kim, Wook-Hyun
    • The KIPS Transactions:PartB / v.9B no.4 / pp.509-516 / 2002
  • This paper proposes an efficient algorithm that recognizes mixed document images consisting of images, texts, tables, and straight lines. The system is composed of three steps: detection of the rotation angle for correcting skewed images, removal of unnecessary background regions, and classification of the components included in the document image. The algorithm detects the rotation angle and corrects the document accordingly as a preprocessing step, in order to minimize the error rate caused by document skew. We detect the rotation angle using only the horizontal and vertical components of the document image, and minimize computation time by erasing unnecessary background regions while detecting the document components. In the next step, we classify the components of the document image into image, text, table, and line areas. We applied this method to various document images in order to evaluate the performance of the document recognition system, and we show successful experimental results.

Caption Detection and Recognition for Video Image Information Retrieval (비디오 영상 정보 검색을 위한 문자 추출 및 인식)

  • 구건서
    • Journal of the Korea Computer Industry Society / v.3 no.7 / pp.901-914 / 2002
  • In this paper, we propose an efficient automatic caption detection and localization method, with caption recognition using an FE-MCBP (Feature Extraction based Multichained BackPropagation) neural network, for content-based video retrieval. Frames are selected from the video at fixed time intervals, and key frames are selected by a gray-scale histogram method. For each key frame, segmentation is performed and caption lines are detected using a line-scan method; finally, individual characters are separated. This work improves speed and efficiency by performing color segmentation with a local-maximum analysis method before line scanning. Caption detection is the first stage of multimedia database organization, and detected captions are used as input to a text recognition system. Recognized captions can then be searched by content-based retrieval.


The Geometric Layout Analysis of the Document Image Using Connected Components Method and Median Filter (연결요소 방법과 메디안 필터를 이용한 문서영상 기하학적 구조분석)

  • Jang, Dae-Geun;Hwang, Chan-Sik
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.8A / pp.805-813 / 2002
  • A document image should be classified into detailed regions such as text, picture, and table through geometric layout analysis if paper documents are to be converted automatically into electronic documents. However, the complexity of document layouts and the variety of picture sizes and densities make the geometric layout analysis of document images difficult. In this paper, we propose a method that performs better than commercial software and previous methods at region segmentation and classification and at line extraction in table regions. The proposed method can segment a document into detailed regions using connected components even if its layout is complex, and it classifies texts and pictures using a separable median filter even though their sizes and densities are diverse. In addition, it extracts lines from tables by applying a one-dimensional median filter in the horizontal and vertical directions, even when the lines are deformed or texts are attached to them.
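The one-dimensional median filtering used for table-line extraction can be sketched as follows; an illustrative version on a single binary row, where the window size and edge handling are our choices rather than the paper's.

```python
import statistics

def median_filter_1d(signal, k=3):
    """Sliding odd-sized median along one direction: long runs of ink
    (ruling lines) survive, while short attached strokes and isolated
    noise are suppressed. Edge samples are left unchanged."""
    half = k // 2
    out = list(signal)
    for i in range(half, len(signal) - half):
        out[i] = statistics.median(signal[i - half: i + half + 1])
    return out
```

Run horizontally the filter keeps horizontal ruling lines; run vertically it keeps vertical ones.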

Multi-modal Image Processing for Improving Recognition Accuracy of Text Data in Images (이미지 내의 텍스트 데이터 인식 정확도 향상을 위한 멀티 모달 이미지 처리 프로세스)

  • Park, Jungeun;Joo, Gyeongdon;Kim, Chulyun
    • Database Research / v.34 no.3 / pp.148-158 / 2018
  • Optical character recognition (OCR) is a technique for extracting and recognizing text in images. It is an important preprocessing step in data analysis, since much actual text information is embedded in images. Many OCR engines have high recognition accuracy for images where the text is clearly separable from the background, such as black lettering on a white background, but low accuracy when the text is not easily separable from a complex background. To improve accuracy on such complex images, the input image must be transformed to make the text more noticeable. In this paper, we propose a method that segments an input image into text lines so that OCR engines can recognize each line more efficiently, and that determines the final output by comparing the recognition rates of a CLAHE module and a Two-step module, each of which distinguishes text from background regions using image processing techniques. Through thorough experiments comparing against the well-known OCR engines Tesseract and ABBYY, we show that the proposed method achieves the best recognition accuracy on images with complex backgrounds.
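CLAHE applies histogram equalization per tile with a clip limit; the global version below sketches only the underlying remapping idea in plain Python (tile splitting and clipping are omitted, so this is a simplified relative of the module the paper uses, not its implementation).

```python
def equalize(gray, levels=256):
    """Global histogram equalization: remap intensities through the
    normalized cumulative histogram so that contrast is spread out."""
    flat = [p for row in gray for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)
    if n == cdf_min:          # constant image: nothing to equalize
        return [row[:] for row in gray]
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in gray]
```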