• Title/Summary/Keyword: document image processing

The Project and Prospects of Old Documents Information Systems in Korea (한국 고문헌 정보시스템의 구축 및 전망)

  • Kang Soon-Ae
    • Journal of the Korean Society for Library and Information Science / v.31 no.4 / pp.83-112 / 1997
  • The purpose of this paper is to describe how to plan the best information systems for Korean old books. It analyzes: i) the range of definitions of old books, ii) their characteristics and the current state of processing old documents, iii) the scope of automation and the building up of the library institution, iv) the construction of Korean old book information systems, v) case studies, and vi) the evaluation and vision of the systems. The old document information system has been organized on the basis of library network systems with the National Central Library as leader, and the implemented system has subsystems such as a cataloging system, an annotation system, a full-text or image-based system, and a retrieval system. As case studies, two examples are presented that have been built at the National Central Library and Sung Kyun Kwan University. Finally, the paper provides evaluation criteria and a vision for libraries that design old document information systems.

Supporting Media using XML-based Messages on Online Conversational Activity (온라인 대화 행위에서 XML 기반 메시지를 이용한 미디어 지원)

  • Kim, Kyung-Deok
    • The KIPS Transactions:PartB / v.11B no.1 / pp.91-98 / 2004
  • This paper proposes how to support various media in online conversational activity using XML (Extensible Markup Language). The method converts media information into XML-based messages and handles them like conventional text-based messages. The XML-based messages are unified into a single XML document, and an HTML document is then generated from the XML and an XSLT document on a server. A user at each client can play or present media through the hyperlink associated with the media information in the HTML document. The suggested method supports the use of various media (text, image, audio, video, documents, etc.) and efficient maintenance of font size, color, and style in messages through the extension and modification of XML tags. As an application, this paper implements a client-server system that supports media in online conversational activity. A user at each client inputs text-based or media-based messages using a Java applet and servlet, and the conversational messages on every user's interface are automatically updated whenever a user inputs a new message. Media in conversational messages are played or presented when a user clicks the hyperlink. Applications of the media presentation include distance learning, online games, and collaboration.
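The convert-and-transform pipeline described in the abstract lends itself to a small illustration. The following Python sketch, using lxml, shows the general idea of rendering XML-based conversation messages as an HTML page through an XSLT stylesheet; the element names, attributes, and stylesheet rules are illustrative assumptions, not the paper's actual message schema.

```python
from lxml import etree

# Two messages of a hypothetical conversation: one plain text, one video.
conversation = etree.fromstring("""<conversation>
  <message user="alice" type="text">Hello!</message>
  <message user="bob" type="video" href="clip.mp4">lecture clip</message>
</conversation>""")

# Stylesheet that turns text messages into paragraphs and media messages
# into hyperlinks the client can click to play or present the media.
stylesheet = etree.XSLT(etree.fromstring("""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/conversation">
    <html><body><xsl:apply-templates select="message"/></body></html>
  </xsl:template>
  <xsl:template match="message[@type='text']">
    <p><b><xsl:value-of select="@user"/>: </b><xsl:value-of select="."/></p>
  </xsl:template>
  <xsl:template match="message[@href]">
    <p><b><xsl:value-of select="@user"/>: </b>
       <a href="{@href}"><xsl:value-of select="."/></a></p>
  </xsl:template>
</xsl:stylesheet>"""))

print(str(stylesheet(conversation)))  # serialized HTML page for the clients
```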

Adaptive thresholding for two-dimensional barcode images using two thresholds and the integral image (이중 문턱 값과 적분영상을 이용한 2차원 바코드 영상의 적응적 이진화)

  • Lee, Yeon-Kyung;Yoo, Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.11 / pp.2453-2458 / 2012
  • In this paper, we propose an adaptive thresholding method to binarize two-dimensional barcode images. Adaptive thresholding methods that minimize lighting effects convert an original image into a binary image and are applied to document image binarization. These methods, however, have the problem of determining the box size used in adaptive thresholding, and thus they are inappropriate for recognizing two-dimensional barcode images. To overcome this problem, we analyze it and propose a new adaptive thresholding method using the integral image. To show the effectiveness of our method, we compare it with well-known existing methods in terms of visual quality and processing time. The experimental results indicate that the proposed method is superior to the existing methods.
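As a rough illustration of the integral-image technique the abstract refers to, the following Python sketch binarizes an image against local window means computed from an integral image (in the style of Bradley-Roth thresholding). The paper's specific two-threshold rule is not reproduced; the window size and offset parameter are illustrative assumptions.

```python
import numpy as np

def adaptive_threshold(gray, window=15, t=0.15):
    """Binarize a 2-D uint8 array against its local window means."""
    h, w = gray.shape
    # Integral image padded with a zero row/column so any box sum is 4 lookups.
    ii = np.pad(gray.astype(np.int64).cumsum(axis=0).cumsum(axis=1),
                ((1, 0), (1, 0)))
    r = window // 2
    ys, xs = np.mgrid[0:h, 0:w]
    y0, y1 = np.clip(ys - r, 0, h), np.clip(ys + r + 1, 0, h)
    x0, x1 = np.clip(xs - r, 0, w), np.clip(xs + r + 1, 0, w)
    box_sum = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    count = (y1 - y0) * (x1 - x0)
    # A pixel becomes white if it is brighter than (1 - t) times the local
    # mean, otherwise black; t trades off noise against stroke thickness.
    return np.where(gray * count > box_sum * (1 - t), 255, 0).astype(np.uint8)
```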

Research on the Table Vectorization in the Document Image (문서 영상 내의 테이블 벡터화 연구)

  • Kim, U-Seong;Sim, Jin-Bo;Park, Yong-Beom;Mun, Gyeong-Ae;Ji, Su-Yeong
    • The Transactions of the Korea Information Processing Society / v.3 no.5 / pp.1147-1159 / 1996
  • In this paper, we develop an efficient algorithm that vectorizes table input for a mixed document recognition system. It is necessary to separate characters and lines in order to recognize the characters in a table. To recognize a table, we have to recognize the characters that are blocked by table lines and develop an efficient vectorization method for the table lines. For vectorizing tables, we develop several methods. The first method extracts table line parts using 8-direction chain codes. The second method extracts horizontal and vertical lines using a histogram of lines. The third extracts diagonal lines of the table using the cross points of the horizontal and vertical lines. Finally, we also develop a table vectorization method that finds the regularity characteristics of the horizontal and vertical lines composing the table, and we suggest a regularity-based method for efficient table vectorization.
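One of the steps described above, extracting horizontal and vertical table lines from line histograms, can be sketched briefly. The following Python fragment is an illustrative simplification, not the paper's algorithm; the threshold ratio is an assumption.

```python
import numpy as np

def extract_table_lines(binary, min_len_ratio=0.5):
    """binary: 2-D bool array, True where a pixel is black (ink)."""
    h, w = binary.shape
    # Rows/columns whose black-pixel count is a large fraction of the image
    # width/height are treated as horizontal/vertical ruling lines.
    row_hist = binary.sum(axis=1)
    col_hist = binary.sum(axis=0)
    h_lines = np.where(row_hist >= min_len_ratio * w)[0]   # y-coordinates
    v_lines = np.where(col_hist >= min_len_ratio * h)[0]   # x-coordinates
    # Cross points of the detected rulings give cell corners, which can also
    # serve as anchors when tracing diagonal lines of the table.
    corners = [(y, x) for y in h_lines for x in v_lines]
    return h_lines, v_lines, corners
```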

An Extracting Text Area Using Adaptive Edge Enhanced MSER in Real World Image (실세계 영상에서 적응적 에지 강화 기반의 MSER을 이용한 글자 영역 추출 기법)

  • Park, Youngmok;Park, Sunhwa;Seo, Yeong Geon
    • Journal of Digital Contents Society / v.17 no.4 / pp.219-226 / 2016
  • In everyday life, the information we recognize with our eyes and use is diverse and massive, yet even current technologies improved by artificial intelligence fall far short of human visual processing ability. Nevertheless, many researchers are trying to extract information from everyday scenes, concentrating in particular on recognizing information consisting of text. Extracting text from ordinary documents is already used in some information processing fields, but extracting and recognizing text from real-world images is still far less mature, because real images vary widely in properties such as color, size, and orientation. In this paper, we apply an adaptive edge-enhanced MSER (Maximally Stable Extremal Regions) method to extract text areas and scene text in such diverse environments, and experiments show that the proposed method performs comparatively well.
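The following Python sketch outlines the general MSER-based approach to text region extraction using OpenCV. The paper's adaptive edge enhancement is not specified here, so a plain unsharp-mask step stands in for it, and all parameters and filtering rules are illustrative assumptions.

```python
import cv2

def detect_text_regions(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Stand-in edge enhancement: unsharp masking (original plus high-pass detail).
    blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=2.0)
    enhanced = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)
    # MSER finds stable connected regions; characters tend to be stable blobs.
    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(enhanced)
    # Keep boxes with roughly character-like sizes and aspect ratios.
    candidates = [(x, y, w, h) for (x, y, w, h) in bboxes
                  if 5 < w < 300 and 5 < h < 300 and 0.1 < w / h < 10]
    return candidates
```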

A Study on the Integrated Coding of Image and Document Data (영상과 문자정보의 통합 부호화에 관한 연구)

  • Lee, Huen-Joo;Park, Goo-Man;Park, Kyu-Tae
    • Journal of the Korean Institute of Telematics and Electronics / v.26 no.7 / pp.42-49 / 1989
  • A new integrated coding method is proposed in this study for embedding text information, including Hangul, into an image. A monochrome analog image may be quantized to a few-level digital image and displayed on bi-level output devices using halftone processing techniques. Text data are embedded in each micro pattern. Based on this concept, the encoding and decoding algorithms are implemented and experiments are performed. As a result, the average amount of embedded text information is more than 8 bpp (bits per pixel) in the halftone-processed image converted from a 64×64 image, corresponding to 2,000 Hangul characters or 4,000 alphanumeric characters. Using this algorithm, an integrated personal record management system is implemented.
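The embedding concept, choosing among halftone micro patterns of equal ink density to carry text bits, can be sketched as follows. This is a conceptual Python illustration only; the paper's actual pattern tables, cell size, and coding scheme (which reach more than 8 bpp) are not reproduced, and every detail below is an assumption.

```python
from itertools import combinations
import numpy as np

def patterns_for_level(d):
    """All 2x2 binary micro patterns with exactly d black dots."""
    cells = []
    for idx in combinations(range(4), d):
        p = np.zeros(4, dtype=np.uint8)
        p[list(idx)] = 1
        cells.append(p.reshape(2, 2))
    return cells

def embed(quantized, bits):
    """quantized: 2-D array of levels 0..4 (black dots per cell); bits: 0/1 stream."""
    bit_iter = iter(bits)
    h, w = quantized.shape
    out = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            cands = patterns_for_level(int(quantized[y, x]))
            # The number of selectable patterns of equal density decides how
            # many text bits this cell can carry.
            capacity = int(np.log2(len(cands))) if len(cands) > 1 else 0
            index = 0
            for _ in range(capacity):
                index = (index << 1) | next(bit_iter, 0)
            out[2 * y:2 * y + 2, 2 * x:2 * x + 2] = cands[index]
    return out  # bi-level halftone image carrying the embedded bits
```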

The Construction and Common Use of Old Document DB in the Foreign Countries (해외 소장 고문헌의 DB구축과 공동활용 방안)

  • Kang, Soon-Ae
    • Journal of the Korean Society for Library and Information Science / v.42 no.3 / pp.61-79 / 2008
  • The purpose of this paper is to investigate three aspects of the construction and common use of old document databases in foreign countries: i) the processing of old documents, ii) the problems and improvement of old document DB systems, and iii) the common use of old document DBs. The results of this research are as follows. The National Library of Korea (NLK) copied old documents held in foreign countries from 1982 to 2006 and published a brief catalog. The Reogang Publishing Company issued a four-volume catalog of old documents in Japan. The National Research Institute of Cultural Heritage (NRICH) investigated old books and published catalogs for several organizations in Japan, America, France, and elsewhere. The National Institute of Korean History (NIKH) investigated old archives and published catalogs for several organizations in Japan. The characteristics of the Korean Old and Rare Collection Information System (KORCIS) of the NLK, the Old Books Cultural Heritage in Overseas System of the NRICH, and the Korea History DB System and MF Catalog/Image System of the NIKH are described; the problems of these DB systems are examined and some alternatives are suggested. For the common use of old document DBs, the KORMARC format and description rules (draft) for archives should be revised to adopt new standards such as the KS editions, and all the institutes involved should thoroughly follow these standards when creating bibliographic records and digitizing texts. It is also necessary to educate and train specialists in old documents. A government organization should be established to supervise all the procedures of developing technology for sharing digitized resources, using contents, and cooperating with the related international organizations and institutes.

Skew Estimation and Correction in Text Images using Shape Moments (형태 모멘트를 이용한 텍스트 이미지 경사 측정 및 교정)

  • Choo, Moon-Won;Chin, Seong-Ah
    • The Journal of the Korea Contents Association / v.3 no.1 / pp.14-20 / 2003
  • In this paper, efficient skew estimation and correction approaches are proposed. To detect the skew of text images, a Hough transform using the perpendicular angle view property and shape moments are performed. The resulting primary text skew angle is used to align the original text. Performance evaluations of the proposed methods with respect to running time are presented.
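As a brief illustration of the moment-based cue mentioned in the abstract, the following Python sketch estimates a dominant skew angle from second-order central image moments and rotates the image accordingly. It does not reproduce the paper's combined Hough-transform and shape-moment method; the sign convention and border handling are assumptions.

```python
import numpy as np
import cv2

def estimate_skew_deg(binary):
    """binary: 2-D uint8 image, nonzero where text ink is present."""
    m = cv2.moments(binary, binaryImage=True)
    # Orientation of the ink distribution from second-order central moments.
    theta = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
    return np.degrees(theta)

def deskew(gray, angle_deg):
    """Rotate by the estimated angle (flip the sign if text tilts the other way)."""
    h, w = gray.shape
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(gray, rot, (w, h), flags=cv2.INTER_LINEAR,
                          borderValue=255)
```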

Hypermedia Models for CALS Environment (CALS환경에서의 하이퍼미디어 모델 적용에 관한 연구)

  • 임만택
    • The Journal of Society for e-Business Studies / v.1 no.1 / pp.159-171 / 1996
  • Nowadays, multimedia and hypermedia have become hot topics in the information industry. Thanks to high-capacity media storage and fast communication networks, it is possible to exchange not only text data but also images, moving pictures, and voice. In particular, to apply hypermedia under the CALS standard environment, the relation between international standards and the CALS standards needs to be considered. This study introduces the conceptual background and processing models of HyTime (Hypermedia Time-based Structuring Language), a specification for hypermedia exchange; Hyper ODA (Hyper Open Document Architecture), a major multimedia communication basis; the MMCF (Multimedia Communication Forum); the AHM (Amsterdam Hypermedia Model); and the DSRM (DAVIC System Reference Model), a reference model that helps determine hypermedia communication specifications. Although these are international standards, provisional standards, or non-standards, the study discusses the possibility of adopting them as CALS standards and recommends the most suitable of these models for CALS.

Convolutional Neural Networks for Character-level Classification

  • Ko, Dae-Gun;Song, Su-Han;Kang, Ki-Min;Han, Seong-Wook
    • IEIE Transactions on Smart Processing and Computing / v.6 no.1 / pp.53-59 / 2017
  • Optical character recognition (OCR) automatically recognizes text in an image and is still a challenging problem in computer vision. A successful solution to OCR has important device applications, such as text-to-speech conversion and automatic document classification. In this work, we analyze character recognition performance using current state-of-the-art deep-learning structures: the AlexNet, LeNet, and SPNet structures. For this, we have built our own dataset that contains digits and upper- and lower-case characters. We experiment in the presence of salt-and-pepper noise or Gaussian noise and report a performance comparison in terms of recognition error. Five-fold cross-validation results indicate that the SPNet structure (our approach) outperforms AlexNet and LeNet in recognition error.
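For orientation, the following PyTorch sketch shows a LeNet-style baseline for character-level classification over 62 classes (10 digits plus upper- and lower-case letters). It is not the paper's SPNet architecture or dataset; the input size (32×32 grayscale) and layer dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LeNetChar(nn.Module):
    def __init__(self, num_classes=62):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 28 -> 14
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 10 -> 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):            # x: (batch, 1, 32, 32) grayscale characters
        return self.classifier(self.features(x))

model = LeNetChar()
logits = model(torch.randn(8, 1, 32, 32))   # -> (8, 62) class scores
```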