• Title/Summary/Keyword: WeOCR

Search Results: 165

A Keyword Matching for the Retrieval of Low-Quality Hangul Document Images

  • Na, In-Seop;Park, Sang-Cheol;Kim, Soo-Hyung
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.47 no.1
    • /
    • pp.39-55
    • /
    • 2013
  • Keyword retrieval from low-quality Korean document images is difficult because such images often contain adjacent characters that touch one another. In addition, images rendered in various fonts are likely to be distorted during acquisition. In this paper, we propose and evaluate a keyword retrieval system for low-quality Korean document images that uses a support vector machine (SVM) to discriminate the similarity between two word images. We demonstrate that the proposed keyword retrieval method is more effective than the accumulated Optical Character Recognition (OCR)-based search method, and that the SVM outperforms both a Bayesian decision rule and an artificial neural network in determining the similarity of two images.
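The core idea of the first paper, an SVM deciding whether two word images depict the same word, can be sketched as below. The projection-profile features and the synthetic image pairs are illustrative assumptions for the sketch, not the authors' actual features or data.

```python
# Illustrative sketch (not the paper's exact pipeline): an SVM classifies a
# pair of binary word images as "same word" / "different word" from the
# absolute difference of simple projection-profile features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def profile_features(img):
    """Horizontal and vertical projection profiles of a binary word image."""
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])

def make_pair(same):
    """Synthetic stand-in for a pair of 8x16 scanned word images."""
    a = (rng.random((8, 16)) > 0.5).astype(float)
    if same:
        b = a.copy()
        noise = rng.random((8, 16)) > 0.9  # mimic low-quality scan noise
        b[noise] = 1 - b[noise]
    else:
        b = (rng.random((8, 16)) > 0.5).astype(float)
    return np.abs(profile_features(a) - profile_features(b)), int(same)

X, y = zip(*[make_pair(i % 2 == 0) for i in range(200)])
clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))
print(clf.score(np.array(X), np.array(y)))  # training accuracy
```

On the synthetic pairs the matched and unmatched feature differences separate well, which is the property the SVM exploits.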

A Study on Word Learning and Error Type for Character Correction in Hangul Character Recognition (한글 문자 인식에서의 오인식 문자 교정을 위한 단어 학습과 오류 형태에 관한 연구)

  • Lee, Byeong-Hui;Kim, Tae-Gyun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.5
    • /
    • pp.1273-1280
    • /
    • 1996
  • To achieve high recognition accuracy, the output of a text recognition system must pass through a post-processing stage that uses contextual information. We present a system that combines multiple knowledge sources to post-process the output of an optical character recognition (OCR) system. The knowledge sources include word characteristics, the types of Hangul characters that are commonly misrecognized, and Hangul word learning. In this paper, the characters misrecognized by OCR systems are collected and analyzed. We used a Korean dictionary of approximately 150,000 words together with Korean-language texts from Korean elementary, middle, and high schools, and found that only 10.7% of the words in those texts appeared in the dictionary. We also classified the error types of Korean character recognition produced by OCR systems. For Hangul word learning, we utilized the indexes of the texts. With these multiple knowledge sources, the system can predict the correct word from a large set of candidates.

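Dictionary-driven post-OCR correction of the kind described above can be sketched in a few lines; the tiny lexicon and the similarity cutoff here are hypothetical stand-ins for the paper's 150,000-word dictionary and learned knowledge sources.

```python
# Minimal sketch of dictionary-based OCR post-correction: rank candidate
# words by string similarity to the recognizer's (possibly wrong) output.
from difflib import get_close_matches

lexicon = ["recognition", "information", "character", "processing"]  # hypothetical

def correct(ocr_output, lexicon):
    """Return the closest dictionary word, or the output itself if none is close."""
    matches = get_close_matches(ocr_output, lexicon, n=1, cutoff=0.7)
    return matches[0] if matches else ocr_output

print(correct("recogniti0n", lexicon))  # -> "recognition"
```

A production system would weight candidates by the known confusion types of the OCR engine rather than by plain edit similarity.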

Semantic Segmentation of the Habitats of Ecklonia Cava and Sargassum in Undersea Images Using HRNet-OCR and Swin-L Models (HRNet-OCR과 Swin-L 모델을 이용한 조식동물 서식지 수중영상의 의미론적 분할)

  • Kim, Hyungwoo;Jang, Seonwoong;Bak, Suho;Gong, Shinwoo;Kwak, Jiwoo;Kim, Jinsoo;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_3
    • /
    • pp.913-924
    • /
    • 2022
  • In this paper, we present the construction of an undersea image database for the habitats of Ecklonia cava and Sargassum and conduct semantic segmentation experiments using state-of-the-art (SOTA) models such as High Resolution Network-Object Contextual Representation (HRNet-OCR) and Shifted Windows-L (Swin-L). The results show that our segmentation models outperform previous experiments, with a 29% increase in mean intersection over union (mIoU). The Swin-L model produced better performance for every class; in particular, the Ecklonia cava class, which had few samples, was also segmented appropriately. Thanks to the Transformer backbone, target objects were distinguished from the background better than with legacy models. A larger database, currently under construction, is expected to improve accuracy further and to serve as a deep learning database for undersea images.
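The mIoU figure cited above is the standard evaluation metric for semantic segmentation: per-class intersection over union, averaged over classes. A compact reference implementation:

```python
# Mean intersection-over-union (mIoU) for semantic segmentation labels.
import numpy as np

def mean_iou(pred, truth, n_classes):
    """Per-class IoU averaged over classes that occur in pred or truth."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (truth == c))
        union = np.sum((pred == c) | (truth == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

truth = np.array([0, 0, 1, 1, 2, 2])
pred  = np.array([0, 0, 1, 2, 2, 2])
print(round(mean_iou(pred, truth, 3), 3))  # -> 0.722
```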

A Novel Character Segmentation Method for Text Images Captured by Cameras

  • Lue, Hsin-Te;Wen, Ming-Gang;Cheng, Hsu-Yung;Fan, Kuo-Chin;Lin, Chih-Wei;Yu, Chih-Chang
    • ETRI Journal
    • /
    • v.32 no.5
    • /
    • pp.729-739
    • /
    • 2010
  • Due to the rapid development of mobile devices equipped with cameras, instant translation of any text seen in any context has become possible. Mobile devices can serve as translation tools by recognizing the text in captured scenes. Images captured by cameras embed external or unwanted effects that do not need to be considered in traditional optical character recognition (OCR). In this paper, we segment a text image captured by a mobile device into individual characters to facilitate OCR kernel processing. Before character segmentation, text detection and text line construction are performed. A novel character segmentation method that integrates touched-character filters is applied to text images captured by cameras. In addition, periphery features are extracted from the segmented images of touched characters and fed into support vector machines to compute confidence values. In our experiments, the accuracy of the proposed character segmentation system is 94.90%, which demonstrates the effectiveness of the proposed method.
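The classic first step before any touched-character filtering of the kind this paper describes is to cut a text line at columns with no ink. A minimal sketch under that assumption (the paper's own filters and periphery features are not reproduced here):

```python
# Illustrative projection-profile segmentation: split a binary text-line
# image into character candidates at columns whose vertical projection is 0.
import numpy as np

def segment_columns(line_img):
    """Return (start, end) column spans of ink runs in a binary line image."""
    profile = line_img.sum(axis=0)  # vertical projection
    spans, start = [], None
    for x, v in enumerate(profile):
        if v > 0 and start is None:
            start = x
        elif v == 0 and start is not None:
            spans.append((start, x))
            start = None
    if start is not None:
        spans.append((start, len(profile)))
    return spans

# Two "characters" separated by one blank column.
line = np.array([[1, 1, 0, 1],
                 [1, 0, 0, 1]])
print(segment_columns(line))  # -> [(0, 2), (3, 4)]
```

Touching characters produce a single wide span here, which is exactly the case the paper's touched-character filters and SVM confidence values are designed to resolve.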

A Fast Algorithm for Korean Text Extraction and Segmentation from Subway Signboard Images Utilizing Smartphone Sensors

  • Milevskiy, Igor;Ha, Jin-Young
    • Journal of Computing Science and Engineering
    • /
    • v.5 no.3
    • /
    • pp.161-166
    • /
    • 2011
  • We present a fast algorithm for Korean text extraction and segmentation from subway signboards using smartphone sensors, designed to minimize computational time and memory usage. The algorithm can serve as the preprocessing steps for optical character recognition (OCR): binarization, text location, and segmentation. An image of a signboard captured by a smartphone camera held at an arbitrary angle is rotated by the detected angle, as if it had been taken with the phone held horizontally. Binarization is performed only once, on a subset of connected components rather than the whole image area, yielding a large reduction in computational time. Text location is guided by a marker line that the user places over the region of interest in the binarized image via the touch screen. Text segmentation then utilizes the connected-component data from the binarization step and cuts the string into individual character images. The resulting data can be used as OCR input, thereby solving the most difficult part of OCR for text in natural scene images. Experimental results show that our binarization algorithm is 3.5 and 3.7 times faster than the Niblack and Sauvola adaptive-thresholding algorithms, respectively, while achieving better quality than other methods.
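For reference, the Sauvola baseline this paper benchmarks against computes a per-pixel local threshold t = m * (1 + k * (s/R - 1)) from the local mean m and standard deviation s. A deliberately naive sliding-window sketch (real implementations use integral images for speed, which is what makes the comparison above meaningful):

```python
# Naive Sauvola adaptive thresholding for a grayscale image.
import numpy as np

def sauvola_threshold(img, window=3, k=0.2, R=128.0):
    """Per-pixel Sauvola threshold via a slow explicit sliding window."""
    h, w = img.shape
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + window, x:x + window]
            m, s = patch.mean(), patch.std()
            out[y, x] = m * (1 + k * (s / R - 1))
    return out

img = np.array([[10, 200, 10],
                [200, 10, 200],
                [10, 200, 10]], dtype=np.uint8)
binary = img > sauvola_threshold(img)  # True = foreground (bright) pixel
print(binary.astype(int))
```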

Digital Library and Information Management (디지털 도서관(圖書館)과 정보관리)

  • Kim, Soon-Ja
    • Journal of Information Management
    • /
    • v.26 no.1
    • /
    • pp.16-51
    • /
    • 1995
  • The information management field faces new challenges arising from developments in computing and information networks and the advent of the information superhighway. With a keen awareness of the importance of information, the development of information technologies, and the changing user environment, we have come to envision the digital library. This paper describes the concept and function of the digital library and examines information technologies such as CD-ROM, OCR and image scanning, hypertext, hypermedia, and multimedia. It also considers strategies for electronic information services and the applicability of current information technology to digitization, through case studies of existing database systems.


Pet Disease Prediction Service and Integrated Management Application (반려동물 질병예측서비스 및 통합관리 어플리케이션)

  • Ki-Du Pyo;Dong-Young Lee;Won-Se Jung;Oh-Jun Kwon;Kyung-Suk Han
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.6
    • /
    • pp.133-137
    • /
    • 2023
  • In this paper, we develop a comprehensive pet management application that combines pet AI diagnosis, animal hospital search, a smart household account book, and community functions. The application resolves the inconvenience of having to use these functions as separate applications: users can easily obtain pet AI diagnoses from photos, find information on nearby animal hospitals gathered by web crawling, and record expenses in a smart household account book that scans receipts using OCR text extraction. With this application, the information needed to raise a pet, such as its health records and related expenses, can be managed in one system.
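The post-OCR step of such a receipt scanner can be sketched as below; the receipt layout and the line format are hypothetical, and a real OCR engine would supply the raw text.

```python
# Hypothetical post-OCR parsing for a receipt scanner: pull (item, amount)
# pairs out of the raw text an OCR engine returns, skipping the total line.
import re

ocr_text = """Dog food 12,000
Vet visit 35,000
TOTAL 47,000"""

def parse_receipt(text):
    """Return (item, amount) pairs for entry into a household account book."""
    entries = []
    for line in text.splitlines():
        m = re.match(r"(.+?)\s+([\d,]+)$", line.strip())
        if m and m.group(1).upper() != "TOTAL":
            entries.append((m.group(1), int(m.group(2).replace(",", ""))))
    return entries

print(parse_receipt(ocr_text))  # -> [('Dog food', 12000), ('Vet visit', 35000)]
```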

Optical Character Recognition for Hindi Language Using a Neural-network Approach

  • Yadav, Divakar;Sanchez-Cuadrado, Sonia;Morato, Jorge
    • Journal of Information Processing Systems
    • /
    • v.9 no.1
    • /
    • pp.117-140
    • /
    • 2013
  • Hindi is the most widely spoken language in India, with more than 300 million speakers. Because texts written in Hindi lack the separation between characters found in English, Optical Character Recognition (OCR) systems developed for Hindi have very poor recognition rates. In this paper we propose an OCR system for printed Hindi text in Devanagari script that uses an Artificial Neural Network (ANN) to improve efficiency. One major reason for the poor recognition rate is character segmentation error; the presence of touching characters in scanned documents further complicates segmentation, making the design of an effective character segmentation technique a major challenge. A general OCR pipeline consists of preprocessing, character segmentation, feature extraction, and finally classification and recognition. The preprocessing tasks considered in this paper are the conversion of grayscale images to binary images, image rectification, and segmentation of the document's textual content into paragraphs, lines, words, and finally basic symbols. The basic symbols, obtained as the fundamental units of the segmentation process, are recognized by a neural classifier. Three feature extraction techniques (histogram of projection based on mean distance, histogram of projection based on pixel value, and vertical zero crossing) are used to improve the recognition rate; these techniques are powerful enough to extract features even from distorted characters and symbols. The neural classifier is a back-propagation neural network with two hidden layers, trained and tested on printed Hindi texts. A correct recognition rate of approximately 90% is achieved.
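Two of the feature families this paper names can be sketched for a binary glyph image; the definitions below are simplified assumptions, not the paper's exact formulations.

```python
# Simplified sketches of projection-histogram and vertical zero-crossing
# features for a binary character image.
import numpy as np

def projection_histogram(img):
    """Number of foreground pixels in each column (projection profile)."""
    return img.sum(axis=0)

def vertical_zero_crossings(img):
    """Per column, how many 0->1 / 1->0 transitions occur top to bottom."""
    return np.abs(np.diff(img, axis=0)).sum(axis=0)

glyph = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]])
print(projection_histogram(glyph))     # -> [1 3 1]
print(vertical_zero_crossings(glyph))  # -> [2 0 2]
```

Concatenating such per-column vectors yields a fixed-length feature vector that can feed a back-propagation network of the kind the paper trains.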

Feasibility of Optical Character Recognition (OCR) for Non-native Turtle Detection (UAV 기반 외래거북 탐지를 위한 광학문자 인식(OCR)의 가능성 평가)

  • Lim, Tai-Yang;Kim, Ji-Yoon;Kim, Whee-Moon;Kang, Wan-Mo;Song, Won-Kyong
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.25 no.5
    • /
    • pp.29-41
    • /
    • 2022
  • Alien species cause problems in various ecosystems, reducing biodiversity and damaging ecosystem function, and the need for management plans is growing. With capture-based methods such as GPS and PIT tagging, it is difficult to accurately identify individuals and count populations, especially for alien turtle species, so this study conducts individual recognition using a UAV. UAVs can now carry various sensors and easily obtain high-definition images at low altitude. Based on previous studies, we examined five variables to be considered in UAV flights and produced test sheets accordingly. OCR was applied to the test sheets to monitor the marked turtles, and the recognition rate was measured. Yellow numbers showed the highest recognition rate. In addition, the minimum threat distance was found to be 3 to 6 m, and turtles with shell sizes of 6 to 8 cm could be identified during flight. We therefore propose an object recognition methodology for turtle marking text using OCR, which is expected to serve as a new turtle monitoring technique.

Robust Recognition of a Player Name in Golf Videos (골프 동영상에서의 강건한 선수명 인식)

  • Jung, Cheol-Kon;Kim, Joong-Kyu
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.659-662
    • /
    • 2008
  • In sports videos, text provides valuable information about the game, such as scores and player information. This paper proposes a robust method for recognizing player names in golf videos. In golf, most users want to search for scenes containing the play shots of their favorite players. We use the text information in golf videos for robust extraction of player information: OCR extracts the text, and the player information is then recognized by matching it against a player-name database. Scenes featuring favorite players can then be searched using this player information. Experiments on several golf videos demonstrate that our method achieves impressive robustness.

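The matching step against a player-name database can be sketched as below; the database contents and the similarity cutoff are hypothetical, and the fuzzy matching stands in for whatever matching scheme the paper actually uses.

```python
# Illustrative name matching: OCR output for an on-screen caption is
# matched against a player-name DB, tolerating recognition errors.
from difflib import SequenceMatcher

player_db = ["Tiger Woods", "Ernie Els", "Vijay Singh"]  # hypothetical DB

def match_player(ocr_text, db, cutoff=0.7):
    """Best-scoring DB name above the cutoff, else None."""
    def sim(name):
        return SequenceMatcher(None, ocr_text.lower(), name.lower()).ratio()
    best = max(db, key=sim)
    return best if sim(best) >= cutoff else None

print(match_player("Tiqer W00ds", player_db))  # -> "Tiger Woods"
```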