• Title/Summary/Keyword: Document Images


Moire Noise Removal from Document Images on Electronic Monitor (모니터 문서 영상의 모아레 잡음 제거)

  • Simon, Christian;Williem;Park, In Kyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2013.11a
    • /
    • pp.237-238
    • /
    • 2013
  • The quality of a document image captured from an electronic display is often worse than that of a document image captured from paper. The degradation is caused by Moiré noise, which can lead to inaccurate intermediate results in further image processing. This paper proposes a method to remove Moiré noise from document images captured from electronic displays. The proposed algorithm consists of two parts: in the first step, it corrects the text region (foreground) with small-area smoothing; it then corrects the background region with large-area smoothing.

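A rough sketch of the two-part smoothing described in the abstract above. The Otsu-based text/background mask, the kernel sizes, and the OpenCV calls are assumptions for illustration, not details taken from the paper.

```python
import cv2
import numpy as np

def remove_moire(gray):
    """Small-area smoothing on the text region, large-area smoothing on the background.
    The mask, kernel sizes, and functions used here are assumed, not the paper's."""
    # Separate dark text (foreground) from the background with a global Otsu threshold.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Small-area smoothing preserves stroke edges inside the text region.
    text_smoothed = cv2.GaussianBlur(gray, (3, 3), 0)
    # Large-area smoothing suppresses the Moire pattern in the flat background.
    bg_smoothed = cv2.GaussianBlur(gray, (15, 15), 0)

    return np.where(mask > 0, text_smoothed, bg_smoothed).astype(np.uint8)
```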

Document Image Binarization Using a Water Flow Model (Water Flow Model을 이용한 문서 영상의 이진화)

  • Kim, In-Gwon;Jeong, Dong-Uk;Song, Jeong-Hui;Park, Rae-Hong
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.38 no.1
    • /
    • pp.19-32
    • /
    • 2001
  • This paper proposes a locally adaptive thresholding method based on a water flow model, in which the image surface is treated as a three-dimensional (3-D) terrain. To extract characters from the background, water is poured onto the terrain surface; it flows down to the lower regions of the terrain and fills valleys. The amount of filled water is then thresholded. The method is applied to gray-level document images consisting of characters and background, and it exhibits the property of locally adaptive thresholding. Computer simulations with synthetic and real document images show that the proposed method yields effective adaptive thresholding results for the binarization of document images.

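The water-flow idea can be loosely approximated with grayscale morphology: a closing fills the dark valleys (characters) of the intensity terrain, and the difference between the closed surface and the original acts as the amount of filled water, which is then thresholded. This is a simplified analogy with an assumed window size and threshold, not the authors' algorithm.

```python
import numpy as np
from scipy import ndimage

def water_flow_binarize(gray, window=15, water_thresh=20):
    """Loose approximation of water-flow binarization (window and threshold are assumed)."""
    g = gray.astype(np.float32)
    # Grayscale closing fills dark valleys narrower than the window,
    # loosely mimicking water accumulating on the terrain.
    closed = ndimage.grey_closing(g, size=(window, window))
    water = closed - g                      # amount of "filled water" at each pixel
    # Pixels holding enough water are labeled as character pixels.
    return ((water > water_thresh) * 255).astype(np.uint8)
```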

Skew Detection for Thai Printed Document Images

  • Premchaiswad, Wichian;Duangphasuk, Surakarn
    • Proceedings of the IEEK Conference
    • /
    • 2000.07a
    • /
    • pp.326-328
    • /
    • 2000
  • The paper proposes a skew detection scheme for Thai printed document images using a linear regression algorithm. It is intended for use with Thai character recognition systems to reduce skew detection time. The scheme begins by finding the center of gravity of the document image; this point is used as the starting point for gathering data. The data are obtained by scanning vertically, one pixel at a time, in strips 20 pixels wide. After the scanning process, any data point that differs from its neighbor by more than ±15 pixels is considered noise or data from another line and is deleted. The last step applies linear regression to the selected data, from which the skew angle is obtained. The proposed method has been tested on 45 document images with different fonts, sizes, and skew angles. The experimental results show that the method detects the skew angle with an error of less than one degree, and the average processing time is about 19 times faster than that of the Hough transform method.

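The strip-scanning and regression steps can be sketched as follows. The paper starts from the document's center of gravity; this simplified version just scans 20-pixel-wide strips left to right, and the choice of the black-pixel centroid as each strip's representative point is an assumption.

```python
import numpy as np

def estimate_skew(binary, strip_width=20, jump_limit=15):
    """Sketch of strip scanning + linear regression for skew detection.
    The per-strip representative point (black-pixel centroid) is an assumption."""
    h, w = binary.shape
    xs, ys = [], []
    for x0 in range(0, w - strip_width, strip_width):
        rows = np.where(binary[:, x0:x0 + strip_width] > 0)[0]   # text pixels in the strip
        if rows.size:
            xs.append(x0 + strip_width / 2.0)
            ys.append(rows.mean())

    # Drop points that differ from their accepted neighbor by more than +/- jump_limit
    # pixels; these are treated as noise or data from another text line.
    keep_x, keep_y = [xs[0]], [ys[0]]
    for x, y in zip(xs[1:], ys[1:]):
        if abs(y - keep_y[-1]) <= jump_limit:
            keep_x.append(x)
            keep_y.append(y)

    slope, _ = np.polyfit(keep_x, keep_y, 1)    # least-squares line fit
    return np.degrees(np.arctan(slope))         # skew angle in degrees
```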

Word Extraction from Table Regions in Document Images (문서 영상 내 테이블 영역에서의 단어 추출)

  • Jeong, Chang-Bu;Kim, Soo-Hyung
    • The KIPS Transactions:PartB
    • /
    • v.12B no.4 s.100
    • /
    • pp.369-378
    • /
    • 2005
  • A document image is segmented and classified into text, picture, and table regions by document layout analysis; the words in table regions are significant for keyword spotting because they are more meaningful than words in other regions. This paper proposes a method to extract words from table regions in document images. Since word extraction from a table region amounts in practice to extracting words from the cell regions composing the table, the cells must be extracted correctly. In the cell extraction module, the table frame is extracted first by analyzing connected components, and the intersection points are then extracted from the table frame. False intersections are corrected using the correlation between neighboring intersections, and the cells are extracted from the intersection information. Text regions in the individual cells are located using the connected component information obtained during cell extraction and are segmented into text lines using projection profiles. Finally, the segmented lines are divided into words using gap clustering and special symbol detection. An experiment performed on table images extracted from Korean documents shows a word extraction accuracy of 99.16%.
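The projection-profile and gap-based steps at the end of the pipeline can be sketched like this; the fixed gap threshold is an assumption, whereas the paper clusters gaps adaptively.

```python
import numpy as np

def split_lines(cell_bin):
    """Split a binary cell image into text lines using its horizontal projection profile."""
    profile = cell_bin.sum(axis=1)               # ink per row
    lines, start = [], None
    for y, v in enumerate(profile):
        if v > 0 and start is None:
            start = y
        elif v == 0 and start is not None:
            lines.append(cell_bin[start:y])
            start = None
    if start is not None:
        lines.append(cell_bin[start:])
    return lines

def split_words(line_bin, gap_thresh=5):
    """Split a text line into words at column gaps wider than gap_thresh
    (a fixed threshold used here instead of the paper's gap clustering)."""
    cols = line_bin.sum(axis=0) > 0              # columns containing ink
    words, start, gap = [], None, 0
    for x, has_ink in enumerate(cols):
        if has_ink:
            if start is None:
                start = x
            gap = 0
        elif start is not None:
            gap += 1
            if gap > gap_thresh:                 # wide gap closes the current word
                words.append(line_bin[:, start:x - gap + 1])
                start, gap = None, 0
    if start is not None:
        words.append(line_bin[:, start:])
    return words
```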

Character Segmentation on Printed Korean Document Images Using a Simplification of Projection Profiles (투영 프로파일의 간략화 방법을 이용한 인쇄체 한글 문서 영상에서의 문자 분할)

  • Park Sang-Cheol;Kim Soo-Hyung
    • The KIPS Transactions:PartB
    • /
    • v.13B no.2 s.105
    • /
    • pp.89-96
    • /
    • 2006
  • In this paper, we propose two approaches for character segmentation in Korean document images. One is an improved version of a projection profile-based algorithm: it estimates the number of characters, obtains the split points, searches for each character's boundary, and selects the best segmentation result. The other is developed for low-quality document images in which adjacent characters are connected. In this case, parts of the projection profile are cut to resolve the connection between characters; this is called an α-cut. Afterwards, the revised segmentation procedure above is conducted. The two approaches have been tested on 43,572 low-quality Korean word images printed in various font styles. The segmentation accuracies of the former and the latter are 91.81% and 99.57%, respectively, showing that the proposed algorithm using an α-cut is effective for low-quality Korean document images.
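The α-cut can be pictured as clipping the low part of the vertical projection profile so that shallow bridges between touching characters drop to zero and reappear as split points; the cut level used below is an assumed ratio.

```python
import numpy as np

def alpha_cut_splits(word_bin, alpha=0.1):
    """Sketch of an alpha-cut on the vertical projection profile (alpha is an assumed ratio).
    Columns below alpha * max are zeroed, so weak bridges between touching characters
    become candidate split points."""
    profile = word_bin.sum(axis=0).astype(float)   # ink per column
    cut = profile.copy()
    cut[cut < alpha * profile.max()] = 0           # the alpha-cut

    # Candidate split points are the centers of the zero runs inside the word.
    splits, start = [], None
    for x, v in enumerate(cut):
        if v == 0 and start is None:
            start = x
        elif v > 0 and start is not None:
            splits.append((start + x - 1) // 2)
            start = None
    return splits
```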

Automatic Generation of Training Character Samples for OCR Systems

  • Le, Ha;Kim, Soo-Hyung;Na, In-Seop;Do, Yen;Park, Sang-Cheol;Jeong, Sun-Hwa
    • International Journal of Contents
    • /
    • v.8 no.3
    • /
    • pp.83-93
    • /
    • 2012
  • In this paper, we propose a novel method that automatically generates real character images to familiarize existing OCR systems with new fonts. First, we generate synthetic character images using a simple degradation model. The synthetic data is used to train an OCR engine, and the trained OCR is used to recognize and label real character images that are segmented from ideal document images. Since the OCR engine cannot accurately recognize all real character images, a substring matching method is employed to fix wrongly labeled characters by comparing two strings: the string formed by the recognized characters in an ideal document image, and the ordered string of characters that we intend to train and recognize. Based on this method, we build a system that automatically generates the 2,350 most common Korean characters and 117 alphanumeric characters for new fonts. The ideal document images used in the system are postal envelope images with characters printed in ascending order of their codes. The proposed system achieved a labeling accuracy of 99%. We therefore believe that our system is effective in generating numerous character samples to enhance the recognition rate of existing OCR systems for fonts that have never been trained.
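The label-correction step can be sketched with an off-the-shelf sequence alignment between the OCR output and the known character order; using difflib here is an illustrative choice, not necessarily the paper's substring matching method.

```python
import difflib

def relabel(recognized, expected):
    """Align the OCR output with the known (expected) character order and fix labels.
    difflib.SequenceMatcher is used only for illustration."""
    labels = list(recognized)
    matcher = difflib.SequenceMatcher(None, recognized, expected)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == 'replace' and (i2 - i1) == (j2 - j1):
            # Same-length mismatch: trust the expected ordering, overwrite the OCR label.
            labels[i1:i2] = list(expected[j1:j2])
    return ''.join(labels)

# Characters are printed in ascending code order; two were misrecognized by the OCR.
print(relabel("ABXDEFGH1", "ABCDEFGHI"))   # -> ABCDEFGHI
```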

An Adaptive Binarization Algorithm for Degraded Document Images (저화질 문서영상들을 위한 적응적 이진화 알고리즘)

  • Ju, Jae-Hyon;Oh, Jeong-Su
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.7A
    • /
    • pp.581-585
    • /
    • 2012
  • This paper proposes an adaptive binarization algorithm that is highly effective for degraded document images containing printed Hangul and Chinese characters. Because these characters are composed of thin horizontal strokes and thick vertical strokes, conventional algorithms cannot easily extract the horizontal strokes, which are weaker than the vertical ones in a degraded document image. The proposed algorithm solves this problem by adding a vertical-directional reference adaptive binarization algorithm to an omni-directional reference one. Simulation results show that the proposed algorithm extracts characters well from various degraded document images.
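The idea of adding a vertical-directional reference to an omni-directional one can be sketched by computing one local mean over a square window and another over a tall, narrow window, and accepting a pixel as ink when it is dark relative to either reference; all window sizes and the offset below are assumptions.

```python
import numpy as np
from scipy import ndimage

def dual_reference_binarize(gray, square=25, tall=(51, 5), offset=10):
    """Sketch of dual-reference adaptive binarization (all constants are assumed).
    A pixel is foreground if it is darker than either the omni-directional (square window)
    or the vertical-directional (tall, narrow window) local mean; the tall window helps
    thin horizontal strokes stand out against the white above and below them."""
    g = gray.astype(np.float32)
    omni_mean = ndimage.uniform_filter(g, size=square)
    vert_mean = ndimage.uniform_filter(g, size=tall)
    fg = (g < omni_mean - offset) | (g < vert_mean - offset)
    return (fg * 255).astype(np.uint8)
```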

Automatic Title Detection by Spatial Feature and Projection Profile for Document Images (공간 정보와 투영 프로파일을 이용한 문서 영상에서의 타이틀 영역 추출)

  • Park, Hyo-Jin;Kim, Bo-Ram;Kim, Wook-Hyun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.11 no.3
    • /
    • pp.209-214
    • /
    • 2010
  • This paper proposes an algorithm for segmentation and title detection in document images. The automated title detection method is composed of two phases: segmentation and title-area detection. In the first phase, the document image is segmented: the binary map is partitioned by a combination of morphological operations and connected component analysis (CCA). This phase provides the segmented regions from which the title area is detected in the second phase. Candidate title areas are detected using geometric information, and the title region is extracted by removing non-title regions. After a classification step that removes non-text regions, projection is performed to detect the title region; since the largest font in a document is usually used for the title, horizontal projection is performed within the text areas. The proposed method handles various forms of document images using geometric features and projection profile analysis, and it is expected to have various applications such as document title recognition, multimedia data searching, and real-time image processing.
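Since the title usually uses the largest font, the projection step can be sketched as measuring the height of each text band in the horizontal projection and picking the tallest one; the band-finding details below are assumptions.

```python
import numpy as np

def find_title_band(text_bin):
    """Sketch: take the tallest text band of the horizontal projection as the title.
    Preferring the largest band height (largest font) follows the abstract; the rest is assumed."""
    profile = text_bin.sum(axis=1)               # ink per row
    bands, start = [], None
    for y, v in enumerate(profile):
        if v > 0 and start is None:
            start = y
        elif v == 0 and start is not None:
            bands.append((start, y))
            start = None
    if start is not None:
        bands.append((start, len(profile)))

    # Return the (top, bottom) rows of the tallest band, or None if the page is empty.
    return max(bands, key=lambda b: b[1] - b[0]) if bands else None
```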

Font Classification of English Printed Character using Non-negative Matrix Factorization (NMF를 이용한 영문자 활자체 폰트 분류)

  • Lee, Chang-Woo;Kang, Hyun;Jung, Kee-Chul;Kim, Hang-Joon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.2
    • /
    • pp.65-76
    • /
    • 2004
  • Today, most documents are produced electronically, and paper documents are digitized by imaging, resulting in a tremendous number of electronic documents in the form of images. To process these document images, many methods of document structure analysis and recognition have been proposed, including font classification. Accordingly, this paper proposes a font classification method for document images that uses non-negative matrix factorization (NMF), which is able to learn part-based representations of objects. In the proposed method, spatially local features of font images are automatically extracted using NMF, and the appropriateness of the features for specifying each font is then investigated. The proposed method is expected to improve the performance of optical character recognition (OCR), document indexing, and retrieval systems when such systems adopt a font classifier as a preprocessor.
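A part-based font representation with NMF can be sketched with scikit-learn: factorize a matrix of vectorized character images into basis parts and activations, then classify a new image by nearest neighbor in activation space. The number of components and the nearest-neighbor classifier are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.decomposition import NMF

def train_font_model(images, n_parts=16):
    """images: (n_samples, h*w) non-negative array of vectorized character images.
    Returns the fitted NMF model and per-sample activation vectors (n_parts is assumed)."""
    model = NMF(n_components=n_parts, init='nndsvda', max_iter=500)
    activations = model.fit_transform(images)   # W; model.components_ holds the parts (H)
    return model, activations

def classify_font(model, activations, labels, query):
    """Nearest-neighbor classification in NMF activation space (an assumed classifier)."""
    q = model.transform(query.reshape(1, -1))
    dists = np.linalg.norm(activations - q, axis=1)
    return labels[int(np.argmin(dists))]
```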

The Region Analysis of Document Images Based on One Dimensional Median Filter (1차원 메디안 필터 기반 문서영상 영역해석)

  • 박승호;장대근;황찬식
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.3
    • /
    • pp.194-202
    • /
    • 2003
  • To automatically convert printed documents into electronic ones, region analysis of document images and character recognition are required. Region analysis segments a document image into detailed regions and classifies these regions into types such as text, picture, and table. However, it is difficult to classify text and picture regions exactly, because some of them are similar in size, density, and complexity of pixel distribution; such misclassification is the main obstacle to automatic conversion. In this paper, we propose a region analysis method that segments a document image into text and picture regions. The proposed method solves these problems by using a one-dimensional median filter-based method for text/picture classification. The misclassification of boldface text and of picture regions such as graphs or tables, caused by the median filtering, is resolved by using a skin-peeling filter and a maximal text length criterion. The performance is therefore better than that of previous methods, including commercial software.
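The role of the one-dimensional median filter can be sketched as follows: run a median filter along each row with a window wider than a character stroke, so thin text strokes disappear while the broad dark areas of pictures survive, and use the surviving-ink ratio as a text/picture cue. The window length and decision ratio are assumptions.

```python
import numpy as np
from scipy.signal import medfilt

def looks_like_picture(region_bin, window=21, survive_ratio=0.4):
    """Sketch: classify a binary region as picture vs. text with a 1-D row-wise median filter.
    Thin text strokes vanish under the filter, large picture blobs survive.
    The window length and decision ratio are assumed values."""
    ink_before = region_bin.sum()
    if ink_before == 0:
        return False
    filtered = np.vstack([medfilt(row.astype(float), window) for row in region_bin])
    return (filtered > 0.5).sum() / ink_before > survive_ratio   # most ink survives -> picture
```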