• Title/Summary/Keyword: Character Matching


Fast Shape Matching Algorithm Based on the Improved Douglas-Peucker Algorithm (개량 Douglas-Peucker 알고리즘 기반 고속 Shape Matching 알고리즘)

  • Sim, Myoung-Sup; Kwak, Ju-Hyun; Lee, Chang-Hoon
    • KIPS Transactions on Software and Data Engineering, v.5 no.10, pp.497-502, 2016
  • Shape Contexts Recognition (SCR) is a technique for recognizing shapes such as figures and objects, and it underpins technologies such as character recognition, motion recognition, facial recognition, and situational recognition. In general, however, SCR builds histograms for all contour points and maps the extracted contours one to one when comparing shapes A and B, which makes it slow. This paper therefore presents a simpler yet more effective algorithm that works on an optimized contour, extracting the outlines of the shape and applying an improved Douglas-Peucker algorithm together with the Harris corner detector. The improved method achieves a noticeably faster processing speed.
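
The entry above builds on the Douglas-Peucker line simplification algorithm. As a point of reference, a minimal sketch of the classic (unimproved) Douglas-Peucker recursion is given below; the point representation and tolerance value are illustrative, and the paper's specific improvements and the Harris corner step are not reproduced.

```python
import math

def douglas_peucker(points, epsilon):
    """Simplify a polyline with the classic Douglas-Peucker algorithm.

    points  : list of (x, y) tuples describing the contour
    epsilon : distance tolerance; larger values remove more points
    """
    if len(points) < 3:
        return list(points)

    start, end = points[0], points[-1]

    def perpendicular_distance(p):
        # Distance from p to the line through start and end.
        dx, dy = end[0] - start[0], end[1] - start[1]
        length = math.hypot(dx, dy)
        if length == 0:
            return math.hypot(p[0] - start[0], p[1] - start[1])
        return abs(dy * p[0] - dx * p[1] + end[0] * start[1] - end[1] * start[0]) / length

    # Find the interior point farthest from the start-end chord.
    distances = [perpendicular_distance(p) for p in points[1:-1]]
    max_index = max(range(len(distances)), key=distances.__getitem__) + 1
    max_dist = distances[max_index - 1]

    if max_dist > epsilon:
        # Keep the farthest point and recurse on both halves.
        left = douglas_peucker(points[:max_index + 1], epsilon)
        right = douglas_peucker(points[max_index:], epsilon)
        return left[:-1] + right
    # Everything between the endpoints is close enough to the chord.
    return [start, end]

contour = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(contour, 1.0))
```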

Structuring of Pulmonary Function Test Paper Using Deep Learning

  • Jo, Sang-Hyun; Kim, Dae-Hoon; Kim, Yoon; Kwon, Sung-Ok; Kim, Woo-Jin; Lee, Sang-Ah
    • Journal of the Korea Society of Computer and Information, v.26 no.12, pp.61-67, 2021
  • In this paper, we propose a method for extracting and recognizing research-relevant information from images of unstructured pulmonary function test reports using character detection and recognition techniques, and we develop a post-processing method that reduces the character recognition error rate. The proposed structuring method applies a character detection model to the test-report images to detect all characters, passes each detected character image through a character recognition model to obtain a string, and then checks the validity of the string with string matching to complete the structuring. Our proposed system is more efficient and stable than structuring done manually by professionals: its error rate is within about 1% and it processes each pulmonary function test report within 2 seconds.
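
The post-processing step described above validates recognized strings with string matching. A minimal sketch of one common way to do this, edit-distance matching against a field vocabulary, follows; the field names and distance threshold are assumptions made for illustration and do not come from the paper.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming (single-row version)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

# Hypothetical field vocabulary; the paper's actual field list is not given.
FIELDS = ["FVC", "FEV1", "FEV1/FVC", "PEF", "DLCO"]

def match_field(recognized: str, max_dist: int = 1):
    """Map an OCR string onto the closest known field name, if close enough."""
    best = min(FIELDS, key=lambda f: edit_distance(recognized.upper(), f))
    return best if edit_distance(recognized.upper(), best) <= max_dist else None

print(match_field("FEVl"))   # -> "FEV1" (misread lowercase 'l' corrected)
print(match_field("XYZ"))    # -> None (no plausible field)
```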

A Hangul Script Matching Algorithm for PDA (PDA상에서의 한글 필기체 매칭 알고리즘)

  • Cho, Mi-Gyung; Cho, Hwan-Gue
    • Journal of KIISE: Software and Applications, v.29 no.10, pp.684-693, 2002
  • Electronic ink is handwritten text or script stored as-is, without conversion to ASCII by handwriting recognition, on pen-based computers and Personal Digital Assistants (PDAs) to support natural and convenient data input. One of the most important issues in using electronic ink is searching it. We proposed and implemented a script matching algorithm for electronic ink. The proposed algorithm separates the input stroke into a set of primitive strokes using the curvature of the stroke curve. After determining the type of each separated stroke, it produces a stroke feature vector, and it then calculates the distance between the feature vector of the input strokes and those of the strokes in the database using dynamic programming. In various experiments, our algorithm showed matching rates of over 97.7% for Korean-only script and 94% for data mixing Korean with Chinese characters.
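
The matching step above computes a dynamic programming distance between stroke feature vectors. A minimal dynamic-programming (DTW-style) sketch over sequences of per-stroke feature vectors is shown below; the feature dimensionality and the Euclidean cost are illustrative assumptions, not the paper's exact formulation.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic-programming (DTW) distance between two sequences of
    stroke feature vectors, each vector given as a tuple of floats."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0

    def cost(u, v):
        # Euclidean distance between two stroke feature vectors.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = cost(seq_a[i - 1], seq_b[j - 1])
            dp[i][j] = c + min(dp[i - 1][j],      # skip a stroke in A
                               dp[i][j - 1],      # skip a stroke in B
                               dp[i - 1][j - 1])  # align the two strokes
    return dp[n][m]

# Hypothetical 3-dimensional stroke features (e.g., length, direction, curvature).
query = [(1.0, 0.2, 0.1), (0.5, 1.4, 0.0)]
candidate = [(1.1, 0.25, 0.1), (0.4, 1.5, 0.05)]
print(dtw_distance(query, candidate))
```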

The hand-drawn diagram recognition for OrCAD matching (OrCAD 정합을 위한 수작업 도면 인식)

  • Park, Young-Sik; Kim, Jin-Hong
    • Journal of Institute of Control, Robotics and Systems, v.2 no.3, pp.229-235, 1996
  • CAD diagrams generally consist of many basic components: symbols, characters, and connection lines. To recognize a diagram it is therefore necessary to extract each component and to understand its meaning and the relations among the components. This paper describes a method for linking basic components, extracted efficiently from hand-drawn diagrams, to the OrCAD data format. Experimental results on hand-drawn diagrams of electronic and logic circuits show the utility of the proposed method.


Studies on the volatile compounds of Cnidium officinale (천궁(Cnidium officinale)의 향기성분)

  • 이재곤; 권영주; 장희진; 김옥찬; 박준영
    • Journal of the Korean Society of Tobacco Science, v.16 no.1, pp.20-25, 1994
  • The volatile components were extracted from the root of Cnidium officinale M. with a simultaneous steam distillation and extraction (SDE) apparatus and analyzed by GC/MS and GC retention index matching. The experimental results revealed the presence of over 22 volatile components; the major components were cnidilide (35.1%), ligustilide (23.2%), and neocnidilide (13.4%). The essential oils were separated by silica gel column chromatography (Merck, 70-230 mesh), and 4 of the 12 fractions obtained had a good aroma character.


A Method for Reconstructing Original Images for Captions Areas in Videos Using Block Matching Algorithm (블록 정합을 이용한 비디오 자막 영역의 원 영상 복원 방법)

  • 전병태; 이재연; 배영래
    • Journal of Broadcast Engineering, v.5 no.1, pp.113-122, 2000
  • It is sometimes necessary to remove captions and recover the original images from video that has already been broadcast. When the number of images requiring such recovery is small, manual processing is possible, but as the number grows it becomes very difficult to do manually, so a method for recovering the original image in the caption areas is needed. Traditional research on image restoration has focused on restoring blurred images to sharp images with frequency filtering, or on video coding for transmission. This paper proposes a method for automatically recovering the original image using a block matching algorithm (BMA). We extract information on caption regions and scene changes, which is used as prior knowledge for the recovery. From the caption detection result we know the start and end frames of each caption in the video and the character areas within the caption regions. The direction of recovery is decided using the scene-change and caption-region information (the start and end frames of the captions), and, following that direction, we recover the original image by performing block matching for the character components in the extracted caption region. Experimental results show that stationary scenes with little camera or object motion are recovered well, and that scenes with motion over complex backgrounds are also recovered.
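
The recovery above rests on block matching between the caption frame and a caption-free reference frame. The sketch below illustrates one simplified reading of that idea, assuming grayscale NumPy frames: estimate the local motion from a background block near the caption, then copy the correspondingly shifted region from the reference frame. The block size and search range are illustrative, and this is not the paper's exact procedure.

```python
import numpy as np

def motion_vector(current, reference, anchor, block=16, search=16):
    """Estimate the displacement of a background block near the caption by
    exhaustive block matching (sum of absolute differences)."""
    y, x = anchor
    tpl = current[y:y + block, x:x + block].astype(int)
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > reference.shape[0] or xx + block > reference.shape[1]:
                continue
            cost = np.abs(reference[yy:yy + block, xx:xx + block].astype(int) - tpl).sum()
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

def recover_caption(current, reference, caption_box, anchor):
    """Fill the caption box in `current` with pixels from a caption-free
    `reference` frame, shifted by the motion estimated near the caption.
    Assumes the shifted box stays inside the reference frame."""
    dy, dx = motion_vector(current, reference, anchor)
    y0, x0, y1, x1 = caption_box
    current[y0:y1, x0:x1] = reference[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
    return current
```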


Slab Region Localization for Text Extraction using SIFT Features (문자열 검출을 위한 슬라브 영역 추정)

  • Choi, Jong-Hyun; Choi, Sung-Hoo; Yun, Jong-Pil; Koo, Keun-Hwi; Kim, Sang-Woo
    • The Transactions of The Korean Institute of Electrical Engineers, v.58 no.5, pp.1025-1034, 2009
  • In a steel production line, each steel slab is given a unique identification number, the slab management number (SMN), which records how the slab will be used. Identifying SMNs has been done by humans for years, but this is expensive, inaccurate, and a heavy burden on workers, so an automatic recognition system is desirable. Such a recognition system generally consists of text localization, text extraction, character segmentation, and character recognition, and every stage must succeed for the SMN to be identified correctly. Text localization is a particularly important and difficult stage: because of the many text-like patterns in a complex background and the low contrast between slab and background, extracting the text region directly is hard. If the slab region containing the SMN can be detected precisely, the text localization algorithm can be made simpler and the processing time of the overall recognition system reduced. This paper describes slab region localization using SIFT (Scale Invariant Feature Transform) features. First, the SIFT algorithm is applied to a captured background image and a slab image, and the features of the two images are matched with a nearest-neighbor (NN) algorithm. Because the raw matching rate can be low, incorrect matches are removed using the geometric locations of the matched feature points. Finally, a search-rectangle method is applied to the correct matches to determine the top and side boundaries of the slab region, which reduces the search region for extracting the SMN from the slab image. In most previous work the search region for text extraction is fixed heuristically [1][2]; the proposed algorithm is more analytic because the search region is not fixed and the slab region is searched over the whole image. Experimental results show that the proposed algorithm performs well.
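
A minimal sketch of the SIFT detection and nearest-neighbor matching stage described above, using OpenCV, is given below. The file names are placeholders and Lowe's ratio test is used as a simple stand-in for outlier removal; the paper's geometric filtering and search-rectangle steps are not reproduced.

```python
import cv2

# Hypothetical file names; any grayscale background/slab image pair would do.
background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
slab = cv2.imread("slab.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and descriptors in both images.
sift = cv2.SIFT_create()
kp_bg, des_bg = sift.detectAndCompute(background, None)
kp_slab, des_slab = sift.detectAndCompute(slab, None)

# Nearest-neighbor matching of descriptors (brute force, L2 norm).
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(des_slab, des_bg, k=2)

# Ratio test as a simple stand-in for outlier removal; the paper instead
# filters matches by the geometric locations of the matched points.
good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]

# Coordinates of the surviving matches in the slab image; the slab region
# boundaries would then be estimated from these points.
points = [kp_slab[m.queryIdx].pt for m in good]
print(f"{len(good)} matches kept, first few: {points[:3]}")
```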

A Method for Recovering Text Regions in Video using Extended Block Matching and Region Compensation (확장적 블록 정합 방법과 영역 보상법을 이용한 비디오 문자 영역 복원 방법)

  • 전병태; 배영래
    • Journal of KIISE: Software and Applications, v.29 no.11, pp.767-774, 2002
  • Conventional research on image restoration, mainly in the signal processing field, has focused on restoring images degraded during image formation, storage, and communication. Related work on recovering the original image information of caption regions includes a method using a block matching algorithm (BMA). That method suffers from frequent incorrect matches and propagates the resulting errors, and it cannot recover the frames between two scene changes when more than two scene changes occur. In this paper, we propose a method for recovering the original images using an extended block matching algorithm (EBMA) and a region compensation method. The method first extracts prior knowledge such as scene changes, camera motion, and caption regions, then decides the direction of recovery using the extracted caption information (the start and end frames of a caption) and the scene-change information. Following that direction, recovery is performed per character component using EBMA and the region compensation method. Experimental results show that EBMA recovers well regardless of the speed of moving objects and the complexity of the background, and that the region compensation method recovers the original image successfully when there is no reference information about the original image.
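
When no caption-free reference exists, the method above falls back on region compensation, filling character pixels from their surroundings. As a stand-in for that idea, the sketch below uses OpenCV inpainting over a character mask; it illustrates the general principle only and is not the paper's own compensation method.

```python
import cv2
import numpy as np

def compensate_region(frame, char_mask):
    """Fill character pixels from their surrounding background.

    frame     : 8-bit BGR image containing a caption (NumPy array)
    char_mask : uint8 mask, 255 on character pixels, 0 elsewhere
    """
    # Slightly dilate the mask so anti-aliased character edges are covered too.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.dilate(char_mask, kernel, iterations=1)
    # Telea inpainting propagates surrounding pixels into the masked area.
    return cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
```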

Design and Implementation for Korean Character and Pen-gesture Recognition System using Stroke Information (획 정보를 이용한 한글문자와 펜 제스처 인식 시스템의 설계 및 구현)

  • Oh, Jun-Taek; Kim, Wook-Hyun
    • The KIPS Transactions: Part B, v.9B no.6, pp.765-774, 2002
  • The purpose of this paper is the design and implementation of a Korean character and pen-gesture recognition system for multimedia terminals, PDAs, and other devices that demand both fast processing and a high recognition rate. To recognize the writing styles of various users, the Korean character recognition system uses a database based on the characteristic information of Korean and on the stroke information that composes each phoneme, and it achieves high speed through phoneme segmentation using successive processing or backtracking. The pen-gesture recognition system matches the classification features extracted from an input pen-gesture against the classification features of the 15 pen-gesture types defined in the gesture model. The classification features are writer-insensitive stroke properties: the positional relation between two strokes, the crossing number, the direction transition, the direction vector, the number of direction codes, and the distance ratio between the starting and ending point of each stroke. In experiments, we obtained a high recognition rate and fast processing.
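
A minimal sketch of extracting a few of the stroke features listed above, direction codes along a stroke and the ratio of endpoint distance to path length, is shown below. The 8-direction coding and the chosen feature set are illustrative assumptions, not the paper's exact definitions.

```python
import math

def direction_codes(stroke):
    """8-direction chain code along a stroke given as a list of (x, y) points."""
    codes = []
    for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)             # -pi .. pi
        codes.append(int(round(angle / (math.pi / 4))) % 8)
    return codes

def stroke_features(stroke):
    """A few simple, writer-insensitive stroke features."""
    codes = direction_codes(stroke)
    path_length = sum(math.dist(p, q) for p, q in zip(stroke, stroke[1:]))
    endpoint_dist = math.dist(stroke[0], stroke[-1])
    return {
        "direction_codes": codes,
        "num_direction_changes": sum(a != b for a, b in zip(codes, codes[1:])),
        # Ratio of straight-line endpoint distance to traced path length:
        # near 1 for straight strokes, smaller for curved or looping strokes.
        "endpoint_to_path_ratio": endpoint_dist / path_length if path_length else 0.0,
    }

print(stroke_features([(0, 0), (1, 0), (2, 1), (3, 3)]))
```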

Meter Numeric Character Recognition Using Illumination Normalization and Hybrid Classifier (조명 정규화 및 하이브리드 분류기를 이용한 계량기 숫자 인식)

  • Oh, Hangul; Cho, Seongwon; Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems, v.24 no.1, pp.71-77, 2014
  • In this paper, we propose an improved numeric character recognition method that works well in low-illumination and shadowed environments. A local normalization (LN) preprocessing step is used to enhance the quality of low-illumination and shadowed images. The reading area is detected using line segment information extracted from the illumination-normalized meter image, and a three-phase procedure then segments the numeric characters in the reading area. Finally, an efficient hybrid classifier, a combination of a multi-layer feedforward neural network and a template matching module with robust heuristic rules, classifies the segmented digits. Experiments were conducted on a meter image database built from various kinds of meters under low-illumination and shadowed conditions, and the results indicate the superiority of the proposed numeric character recognition method.
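
A minimal sketch of local illumination normalization, subtracting a local mean and dividing by a local standard deviation estimated with Gaussian filters, is shown below; the smoothing scales and the exact LN formulation used in the paper are assumptions here.

```python
import cv2
import numpy as np

def local_normalization(gray, sigma_mean=11, sigma_std=11, eps=1e-6):
    """Local normalization: remove slowly varying illumination by subtracting
    a local mean and dividing by a local standard deviation."""
    img = gray.astype(np.float32) / 255.0
    local_mean = cv2.GaussianBlur(img, (0, 0), sigma_mean)
    centered = img - local_mean
    local_std = np.sqrt(cv2.GaussianBlur(centered * centered, (0, 0), sigma_std))
    normalized = centered / (local_std + eps)
    # Rescale to 0..255 for display or for the following detection stages.
    normalized = cv2.normalize(normalized, None, 0, 255, cv2.NORM_MINMAX)
    return normalized.astype(np.uint8)
```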