• Title/Summary/Keyword: Optical Character Recognition


Character Recognition Based on Adaptive Statistical Learning Algorithm

  • K.C. Koh; H.J. Park; J.S. Kim; K. Koh; H.S. Cho
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.109.2-109
    • /
    • 2001
  • In PCB assembly lines, as components become more complex and smaller, conventional inspection methods using traditional ICT and function tests show their limitations in application. Automatic optical inspection (AOI) is gradually becoming the alternative in the PCB assembly line. In particular, PCB inspection machines need more reliable and flexible object recognition algorithms for high inspection accuracy. Conventional AOI machines use algorithmic approaches such as template matching, Fourier analysis, edge analysis, geometric feature recognition, or optical character recognition (OCR), which mostly require much teaching time and expertise from human operators. To solve this problem, this paper proposes a statistical learning based part recognition method. The performance of the ...
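The excerpt does not detail the proposed adaptive algorithm; as a generic illustration of statistical, learning-based part recognition (not the authors' method), the sketch below fits a Gaussian naive Bayes classifier to flattened part-image patches with scikit-learn. All data shapes and names are assumptions.

    # Illustrative sketch only: a generic statistical classifier for part images,
    # not the adaptive algorithm proposed in the paper.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    def train_part_classifier(patches, labels):
        """patches: equally sized grayscale arrays; labels: part class per patch."""
        X = np.stack([p.ravel() for p in patches]).astype(np.float64)
        clf = GaussianNB()
        clf.fit(X, labels)            # learn per-class feature statistics
        return clf

    def recognize_part(clf, patch):
        return clf.predict(patch.ravel()[None, :])[0]

    # Hypothetical usage with random stand-in data:
    rng = np.random.default_rng(0)
    train = [rng.random((16, 16)) for _ in range(40)]
    labels = [i % 2 for i in range(40)]       # two part classes
    clf = train_part_classifier(train, labels)
    print(recognize_part(clf, train[0]))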


Korean Character Recognition Using Optical Associative Memory (광 연상 기억 장치를 이용한 한글 문자 인식)

  • 김정우;배장근;도양회
    • Journal of the Korean Institute of Telematics and Electronics A
    • /
    • v.31A no.6
    • /
    • pp.61-69
    • /
    • 1994
  • For distortion-invariant recognition of Korean characters, a holographic implementation of an optical associative memory system is proposed. The structure of the proposed system is a single-layer neural network employing an interconnection matrix, thresholding, and feedback. To provide the interconnection matrix, we use two CGHs placed on the intermediate plane of cascaded Vander Lugt correlators to form an optical memory loop. The holographic correlator stores reference images in a hologram and retrieves them in a coherently illuminated feedback loop. An input image, which may be noisy or incomplete, is applied to the system and simultaneously correlated optically with all of the stored images. These correlations are thresholded and fed back to the input, where the strongest correlation reinforces the input image. The enhanced image passes around the loop repeatedly, approaching the stored image more closely on each pass until the system stabilizes on the desired image. The computer simulation results show that the proposed Korean character recognition algorithm has high discrimination capability and noise immunity.
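A numerical analogue of the described correlate-threshold-feedback loop can be sketched in a few lines (a software simulation with a winner-take-all simplification of the thresholding, not the optical implementation): store reference patterns, correlate a noisy input against all of them, keep the strongest correlation, and feed the match back until the loop stabilizes.

    import numpy as np

    def associative_recall(stored, x, iters=10):
        """stored: (k, n) bipolar reference patterns; x: noisy input of length n."""
        for _ in range(iters):
            c = stored @ x                    # correlate input with every stored image
            w = np.zeros_like(c)
            w[np.argmax(c)] = 1.0             # threshold: keep only the strongest correlation
            x_new = np.sign(stored.T @ w)     # feed the winning pattern back as the new input
            if np.array_equal(x_new, x):      # the loop has stabilized on a stored image
                break
            x = x_new
        return x

    rng = np.random.default_rng(1)
    stored = np.sign(rng.standard_normal((5, 64)))               # five reference "images"
    noisy = stored[2] * np.sign(rng.standard_normal(64) + 2.0)   # flip a few elements
    print(np.array_equal(associative_recall(stored, noisy), stored[2]))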


Structure Recognition Method of Invoice Document Image for Document Processing Automation (문서 처리 자동화를 위한 인보이스 이미지의 구조 인식 방법)

  • Dong-seok Lee;Soon-kak Kwon
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.28 no.2
    • /
    • pp.11-19
    • /
    • 2023
  • In this paper, we propose methods for recognizing the structure of an invoice document image and for generating a spreadsheet electronic document from it. The texts and locations of word blocks are recognized by a deep learning based optical character recognition engine. Word blocks on the same row and the same column are found through their coordinates. The document area is divided using the arrangement information of the word blocks. The character recognition results are entered into the spreadsheet according to the document structure. In simulation results, item placement through the proposed method shows an average accuracy of 92.30%.
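A minimal sketch of the coordinate-based grouping step described above (assumed data layout, not the authors' code): OCR word blocks with bounding boxes are clustered into rows by vertical proximity and ordered into columns by x position, yielding a spreadsheet-like grid.

    # Sketch: arrange OCR word blocks into a row/column grid by their coordinates.
    # Each block is (text, x, y, w, h); this layout is an assumption for illustration.
    def blocks_to_grid(blocks, row_tol=10):
        rows = []
        for blk in sorted(blocks, key=lambda b: b[2]):     # scan top to bottom
            for row in rows:
                if abs(row[0][2] - blk[2]) <= row_tol:     # same row: y within tolerance
                    row.append(blk)
                    break
            else:
                rows.append([blk])
        # within each row, order blocks left to right to form columns
        return [[b[0] for b in sorted(row, key=lambda b: b[1])] for row in rows]

    blocks = [("Qty", 200, 40, 40, 12), ("Item", 20, 42, 50, 12),
              ("1", 200, 80, 10, 12), ("Widget", 20, 81, 60, 12)]
    for row in blocks_to_grid(blocks):
        print(row)   # ['Item', 'Qty'] then ['Widget', '1']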

Implementation of Multiprocessor for Classification of High Speed OCR (고속 문자 인식기의 대분류용 다중 처리기의 구현)

  • 김형구;강선미;김덕진
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.6
    • /
    • pp.10-16
    • /
    • 1994
  • In off-line character recognition with statistical methods, the recognition speed for Korean or Chinese characters is slow because the amount of calculation is huge. To improve this, we separate the recognition process into several functional stages and implement each stage in hardware so that all stages can be processed in a pipeline structure. Through this temporal parallel processing, a high speed character recognition system can be implemented. In this paper, we implement classification hardware, one of these functional stages, improving speed through a parallel structure with multiple DSPs (Digital Signal Processors). It is also designed so that DSP boards can be added in parallel to make processing as fast as desired. We implemented the hardware as an add-on board in an IBM-PC; experiments show that it processes about 47 times and 71 times faster with 2 DSPs and 3 DSPs, respectively, than the IBM-PC (486DX2-66MHz). The effectiveness is proved by developing a high speed OCR (Optical Character Recognizer).
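The DSP hardware itself cannot be reproduced here, but the temporal parallelism it exploits can be sketched in software: stages connected by queues, each running in its own process, so stage k works on item i while stage k+1 works on item i-1. The stage functions below are toy stand-ins, not the paper's classification logic.

    # Software analogy of the pipelined stages: each stage is a process,
    # queues carry intermediate results, and stages overlap in time.
    from multiprocessing import Process, Queue

    def stage(fn, q_in, q_out):
        while (item := q_in.get()) is not None:   # None is the shutdown signal
            q_out.put(fn(item))
        q_out.put(None)

    def preprocess(x): return x * 2    # stand-in for feature extraction
    def classify(x):   return x + 1    # stand-in for coarse classification

    if __name__ == "__main__":
        q0, q1, q2 = Queue(), Queue(), Queue()
        procs = [Process(target=stage, args=(preprocess, q0, q1)),
                 Process(target=stage, args=(classify, q1, q2))]
        for p in procs: p.start()
        for ch in range(5): q0.put(ch)   # stream "characters" through the pipeline
        q0.put(None)
        while (out := q2.get()) is not None:
            print(out)                   # 1, 3, 5, 7, 9
        for p in procs: p.join()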


An Implementation of a System for Video Translation on Window Platform Using OCR (윈도우 기반의 광학문자인식을 이용한 영상 번역 시스템 구현)

  • Hwang, Sun-Myung;Yeom, Hee-Gyun
    • Journal of Internet of Things and Convergence
    • /
    • v.5 no.2
    • /
    • pp.15-20
    • /
    • 2019
  • As machine learning research has developed, translation and image analysis fields such as optical character recognition have made great progress. However, video translation, which combines the two, has developed more slowly. In this paper, we develop an image translator that combines existing OCR technology and translation technology and verify its effectiveness. Before development, we present the functions needed to implement such a system and how to implement them, and then test their performance. With the application program developed through this paper, users can access translation more conveniently, and the convenience it provides can be ensured in any environment.
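A compact sketch of the OCR-plus-translation flow the paper describes, assuming Tesseract via pytesseract for the OCR step; the translate() function is a hypothetical placeholder, since the excerpt does not name the translation service used.

    # Sketch of the OCR -> translate flow; translate() is a hypothetical placeholder.
    import cv2
    import pytesseract

    def translate(text, target="ko"):
        # Placeholder: plug in any translation service or library here.
        return f"[{target}] {text}"

    cap = cv2.VideoCapture("input.mp4")     # assumed sample video path
    ok, frame = cap.read()
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # OCR works better on grayscale
        text = pytesseract.image_to_string(gray, lang="eng")
        print(translate(text))
    cap.release()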

The Centering of the Invariant Feature for the Unfocused Input Character using a Spherical Domain System

  • Seo, Choon-Weon
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.29 no.9
    • /
    • pp.14-22
    • /
    • 2015
  • In this paper, a centering method for an unfocused input character using a spherical domain system is proposed, where the centered character provides a shift-invariant feature for the recognition system. A recognition system is implemented using the centroid method with coordinate average values, and an average differential ratio above 78.14% was obtained for the character features. It is possible to extract shift-invariant features using a spherical transformation similar to the human eyeball. The proposed method, feature extraction using a spherical coordinate transform of the extracted data, makes it possible to move the character to the center position of the input plane. Both digital and optical technologies are combined, using spherical coordinates similar to the three-dimensional human eyeball for the two-dimensional plane format. In this paper, a centered character feature using the spherical domain is proposed for character recognition, and the feasibility of recognizing character shapes, as well as calculating the differential ratio of the centered character using the centroid method, is demonstrated.
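The spherical-domain transform is specific to the paper, but the centroid-based centering step can be sketched directly: compute the intensity centroid of a character image and shift it to the image center (a NumPy illustration under assumed conventions, not the authors' implementation).

    import numpy as np

    def center_character(img):
        """Shift a grayscale character image so its intensity centroid sits at the center."""
        ys, xs = np.indices(img.shape)
        total = img.sum()
        cy, cx = (ys * img).sum() / total, (xs * img).sum() / total  # intensity centroid
        dy = int(round(img.shape[0] / 2 - cy))
        dx = int(round(img.shape[1] / 2 - cx))
        return np.roll(np.roll(img, dy, axis=0), dx, axis=1)         # wrap-around shift

    img = np.zeros((32, 32))
    img[2:8, 3:9] = 1.0          # a character-like blob stuck in a corner
    centered = center_character(img)
    print(np.argwhere(centered).mean(axis=0))   # near (16, 16)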

Cyber Character Implementation with Recognition and Synthesis of Speech/Image (음성/영상의 인식 및 합성 기능을 갖는 가상캐릭터 구현)

  • Choe, Gwang-Pyo;Lee, Du-Seong;Hong, Gwang-Seok
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.37 no.5
    • /
    • pp.54-63
    • /
    • 2000
  • In this paper, we implemented a cyber character capable of speech recognition, speech synthesis, motion tracking, and 3D animation. For speech recognition, we used a discrete-HMM algorithm with 128-level K-means vector quantization and MFCC feature vectors. For speech synthesis, we used a demi-syllable TD-PSOLA algorithm. For PC-based motion tracking, we present a fast optical-flow-like method. For animating the 3D model, we used vertex interpolation with Direct3D retained mode. Finally, we implemented a cyber character integrating the above systems, which plays a multiplication-table quiz game with the user and always looks at the user by means of the motion tracking system.
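The front end of the described recognizer (MFCC features quantized to a 128-symbol codebook for a discrete HMM) can be sketched with librosa and scikit-learn. This is a modern stand-in under assumed parameters, not the authors' implementation; the synthetic signal merely stands in for recorded speech.

    # Sketch of the recognizer front end: MFCC frames -> 128-level K-means codebook
    # -> discrete symbol sequence suitable as observations for a discrete HMM.
    import numpy as np
    import librosa
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    sr = 22050
    t = np.arange(sr * 8) / sr
    y = (np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(t.size)).astype(np.float32)

    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # one 13-dim vector per frame
    codebook = KMeans(n_clusters=128, n_init=4, random_state=0).fit(mfcc)
    symbols = codebook.predict(mfcc)                       # VQ symbols 0..127
    print(symbols[:20])   # the observation sequence a discrete HMM would consume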


Development of vision system for the character recognition of the billet image (빌렛영상에 포함된 문자인식을 위한 비전시스템 개발)

  • Park, Sang-Gug
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.13 no.1
    • /
    • pp.22-29
    • /
    • 2008
  • This paper describes the development of a vision system for recognizing the material management characters included in billet images. The material management characters marked on the surface of a billet must be recognized before the billet moves to the next process. Our vision system for character recognition consists of a CCD camera system that acquires the billet image, an optical transmission system that transmits the captured image over a long distance, an input/output system for interfacing with the existing system, and software for character recognition. We installed the vision system at the wire rod line of a steel plant and tested it, performing inspections of durability, reliability, and recognition rate. Through this testing, we confirmed that our system has a high recognition rate of 98.6%.
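The recognition software is not detailed in this excerpt; a plausible first stage, shown purely for illustration, is OpenCV-style preprocessing that binarizes the billet image and isolates character-sized blobs before recognition. Thresholds and size limits below are assumptions, not values from the paper.

    # Illustrative preprocessing sketch for characters marked on a billet surface.
    import cv2

    img = cv2.imread("billet.png", cv2.IMREAD_GRAYSCALE)   # assumed sample image
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    char_boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if 10 < w < 100 and 20 < h < 120:    # keep character-sized blobs only
            char_boxes.append((x, y, w, h))

    char_boxes.sort(key=lambda b: b[0])      # left-to-right reading order
    print(char_boxes)                        # regions to pass to the recognizer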


Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we suggest an application system architecture which provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount character information by selective optical character recognition based on deep learning technology. In general, there are many types of characters in an image, and optical character recognition technology extracts all character information in the image. But some applications need to ignore character types that are not of interest and focus only on specific types of characters. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount character information from gasometer images to send bills to users. Non-interest character strings, such as device type, manufacturer, manufacturing date, and specification, are not valuable information to the application. Thus, the application has to analyze only the regions of interest and specific types of characters to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only regions of interest for selective character information extraction. We built three neural networks for the application system. The first is a convolutional neural network which detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network which transforms the spatial information of a region of interest into spatial sequential feature vectors; and the third is a bi-directional long short-term memory network which converts the spatial sequential information into character strings using time-series analysis, mapping from feature vectors to character strings. In this research, the character strings of interest are the device ID and the gas usage amount. The device ID consists of 12 Arabic numeral characters and the gas usage amount consists of 4-5 Arabic numeral characters. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. A mobile device captures the gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes reading requests from mobile devices onto an input queue with a FIFO (First In First Out) structure. The slave process consists of the three types of deep neural networks which conduct the character recognition process and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests. When there are requests from the master process in the input queue, the slave process converts the image in the input queue into a device ID character string, a gas usage amount character string, and the position information of the strings, returns this information to an output queue, and switches to idle mode to poll the input queue again. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three types of deep neural networks. 22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale, and slant). Normal data are clean images; noise means images with noise signals; reflex means images with light reflection in the gasometer region; scale means images with small object size due to long-distance capturing; and slant means images which are not horizontally flat. The final character string recognition accuracies for device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
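The CRNN stage described above (convolutional feature extraction feeding a bidirectional LSTM over the width dimension) can be outlined in PyTorch. Layer sizes and the 10-digit-plus-blank charset are assumptions consistent with the Arabic numeral strings described, not the authors' exact network.

    # Skeleton of a CRNN in the spirit described above: CNN -> width-wise
    # sequence of feature vectors -> BiLSTM -> per-step character scores.
    import torch
    import torch.nn as nn

    class CRNN(nn.Module):
        def __init__(self, n_classes=11):     # 10 digits + 1 CTC blank (assumed charset)
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.rnn = nn.LSTM(128 * 8, 256, bidirectional=True, batch_first=True)
            self.fc = nn.Linear(512, n_classes)

        def forward(self, x):                      # x: (batch, 1, 32, W) grayscale strip
            f = self.cnn(x)                        # (batch, 128, 8, W/4)
            f = f.permute(0, 3, 1, 2).flatten(2)   # width becomes the sequence axis
            out, _ = self.rnn(f)                   # (batch, W/4, 512)
            return self.fc(out)                    # per-step class scores for CTC decoding

    scores = CRNN()(torch.randn(1, 1, 32, 128))
    print(scores.shape)                            # torch.Size([1, 32, 11])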