• Title/Summary/Keyword: Optical character recognition

Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.7 / pp.1807-1822 / 2023
  • Innovation and the rapidly increasing functionality of user-friendly smartphones have encouraged shutterbugs to capture picturesque image macros in the work environment or during travel. Formal signboards are placed with marketing objectives and are enriched with text to attract people. Extracting and recognizing text from natural images is an emerging research issue that needs consideration. Compared to conventional optical character recognition (OCR), the complex background, implicit noise, lighting, and orientation of these scene-text photos make the problem more difficult, and Arabic scene-text extraction and recognition adds a number of further complications. The method described in this paper uses a two-phase methodology to extract Arabic text, with word-boundary awareness, from scene images with varying text orientations. The first stage uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS) followed by a conventional two-layer neural network for recognition. This study also presents how an Arabic training and synthetic dataset can be created to exemplify the superimposed text in different scene images. For this purpose, a dataset of 10K cropped images in which Arabic text was found was created for the detection phase, along with a 127K Arabic character dataset for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15K quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach detects complex Arabic scene text with high flexibility, including text that is arbitrarily oriented, curved, or deformed. Our experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe future researchers will advance the field by improving noise handling when processing scene images in any language and by enhancing the functionality of the VGG-16 based model using neural networks.
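
The paper itself gives no code; as a rough, hypothetical sketch of the phase-2 recognizer described above (a conventional two-layer network over segmented character crops), where the crop size, hidden width, and class count are assumptions rather than the paper's values:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the phase-2 recognizer: a two-layer network over
# fixed-size segmented Arabic character crops. The 32x32 input, hidden
# width, and 112-class output are assumed values, not from the paper.
class TwoLayerCharNet(nn.Module):
    def __init__(self, num_classes=112, input_pixels=32 * 32, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                     # crop -> feature vector
            nn.Linear(input_pixels, hidden),  # layer 1
            nn.ReLU(),
            nn.Linear(hidden, num_classes),   # layer 2 -> character logits
        )

    def forward(self, x):
        return self.net(x)

model = TwoLayerCharNet()
dummy_crop = torch.rand(1, 1, 32, 32)  # stand-in for one segmented crop
logits = model(dummy_crop)
print(logits.argmax(dim=1))            # predicted character class index
```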

An Optical Character Recognition Method using a Smartphone Gyro Sensor for Visually Impaired Persons (스마트폰 자이로센서를 이용한 시각장애인용 광학문자인식 방법)

  • Kwon, Soon-Kak;Kim, Heung-Jun
    • Journal of Korea Society of Industrial Information Systems / v.21 no.4 / pp.13-20 / 2016
  • In modern society it is possible to implement an optical character recognition system using the high-resolution camera mounted on a smartphone, and the characters extracted by such an application can be delivered as a voice service for visually impaired persons by using TTS. However, it is difficult for a visually impaired person to properly photograph an object containing character information, because it is very hard for them to accurately perceive the current state of the object. In this paper, we propose a method of guiding visually impaired persons toward an appropriate shot by using the smartphone's gyro sensor. Simulation results with the implemented program show that more characters can be recognized from the same object when the proposed method is used.
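
The paper targets a smartphone sensor API; as a platform-neutral sketch of the guidance idea only (the angle threshold and feedback strings are assumptions, not the paper's values), capture might be gated on the device being held level:

```python
def is_level(pitch_deg, roll_deg, tolerance_deg=5.0):
    """True when the camera is roughly square to the target;
    the 5-degree tolerance is an assumed value, not from the paper."""
    return abs(pitch_deg) <= tolerance_deg and abs(roll_deg) <= tolerance_deg

def guidance_message(pitch_deg, roll_deg):
    """Build a short hint, to be spoken via TTS, steering the user level."""
    if is_level(pitch_deg, roll_deg):
        return "Hold still, capturing."
    hints = []
    if abs(pitch_deg) > 5.0:
        hints.append("tilt forward" if pitch_deg > 0 else "tilt back")
    if abs(roll_deg) > 5.0:
        hints.append("rotate left" if roll_deg > 0 else "rotate right")
    return ", ".join(hints)

# Example: angles as read from the gyro/orientation sensor.
print(guidance_message(12.0, -3.0))  # -> "tilt forward"
```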

Recognition of Bill Form using Feature Pyramid Network (FPN(Feature Pyramid Network)을 이용한 고지서 양식 인식)

  • Kim, Dae-Jin;Hwang, Chi-Gon;Yoon, Chang-Pyo
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.4 / pp.523-529 / 2021
  • In the era of the Fourth Industrial Revolution, technological changes are being applied in various fields, and automation, digitization, and data management are reaching the field of bills as well. More than tens of thousands of bill forms circulate in society, so bill form recognition is essential for automation, digitization, and data management. Currently, OCR technology is used for character recognition in order to manage the various bills. Accuracy can be increased by first recognizing the form of the bill and then recognizing the bill's contents. In this paper, a logo that can be used as an index to classify the form of a bill is recognized as an object. Since the logo is small relative to the entire bill, FPN (Feature Pyramid Network), a deep learning technique suited to small object detection, is used. As a result, the proposed algorithm reduces resource waste and increases the accuracy of OCR recognition.
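
The paper's exact detector is not reproduced here; as one common way to get an FPN backbone for small-object detection, a hedged sketch using torchvision's Faster R-CNN with a ResNet-50 FPN (the two-class background/logo setup is an assumption) might look like:

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Sketch only: a detector with an FPN backbone for small-logo detection.
# Two classes (background + logo) is an assumed setup, not the paper's.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

model.eval()
with torch.no_grad():
    image = torch.rand(3, 800, 600)    # stand-in for a scanned bill
    prediction = model([image])[0]     # dict of boxes, labels, scores
print(prediction["boxes"].shape)
```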

Structure Recognition Method in Various Table Types for Document Processing Automation (문서 처리 자동화를 위한 다양한 표 유형에서 표 구조 인식 방법)

  • Lee, Dong-Seok;Kwon, Soon-Kak
    • Journal of Korea Multimedia Society / v.25 no.5 / pp.695-702 / 2022
  • In this paper, we propose a table structure recognition method that covers various table types, for document processing automation. A table whose items are surrounded by ruled lines is analyzed by detecting the horizontal and vertical lines in order to recognize the table structure. For a table whose items are separated by spaces, the structure is recognized by analyzing the arrangement of the row items. After the table structure is recognized, the areas of the table items are fed into an OCR engine, and the character recognition results are output to a text file in a structured format such as CSV or JSON. In simulation results, the average accuracy of table item recognition is about 94%.
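
For the ruled-line case described above, a minimal OpenCV sketch (the kernel lengths, threshold parameters, and file name are assumptions, not the paper's values) could extract the horizontal and vertical line masks and their intersections:

```python
import cv2
import numpy as np

# Sketch of the ruled-line case only: extract horizontal and vertical
# lines with morphological opening, then intersect the two masks to
# approximate cell corners. Not the paper's exact implementation.
img = cv2.imread("table.png", cv2.IMREAD_GRAYSCALE)  # assumed input file
binary = cv2.adaptiveThreshold(~img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 15, -2)

h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1))
v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 40))
horizontal = cv2.morphologyEx(binary, cv2.MORPH_OPEN, h_kernel)
vertical = cv2.morphologyEx(binary, cv2.MORPH_OPEN, v_kernel)

# Intersections of the two line masks approximate cell corners; the
# cell regions between them would then be cropped and sent to OCR.
corners = cv2.bitwise_and(horizontal, vertical)
ys, xs = np.nonzero(corners)
print(f"found {len(xs)} corner pixels")
```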

Development of a Low-cost Industrial OCR System with an End-to-end Deep Learning Technology

  • Subedi, Bharat;Yunusov, Jahongir;Gaybulayev, Abdulaziz;Kim, Tae-Hyong
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.2 / pp.51-60 / 2020
  • Optical character recognition (OCR) has been studied for decades because it is useful in a wide variety of settings. Nowadays, OCR performance has improved significantly thanks to outstanding deep learning technology, so there is an increasing demand for commercial-grade but affordable OCR systems. We have developed a low-cost, high-performance OCR system for industry on the cheapest embedded developer kit that supports GPU acceleration. To achieve high accuracy for industrial use on limited computing resources, we chose a state-of-the-art text recognition algorithm that uses an end-to-end deep learning network as a baseline model. The model was then improved by replacing the feature extraction network with the one best suited to our conditions. Among the various candidate networks, EfficientNet-B3 showed the best performance: excellent recognition accuracy with relatively low memory consumption. In addition, we optimized the model, written in TensorFlow's Python API, using TensorFlow-TensorRT integration and TensorFlow's C++ API, respectively.
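
As a hedged illustration of the backbone swap (not the paper's actual network; the input size, height pooling, and 96-way character head are assumptions), Keras exposes EfficientNet-B3 as a drop-in feature extractor:

```python
import tensorflow as tf

# EfficientNet-B3 as the feature extractor of a text-recognition model.
# Input size, height pooling, and the 96-class head are assumed values.
backbone = tf.keras.applications.EfficientNetB3(
    include_top=False, weights="imagenet", input_shape=(64, 256, 3))

inputs = tf.keras.Input(shape=(64, 256, 3))         # one text-line image
features = backbone(inputs)                         # (None, 2, 8, 1536)
seq = tf.keras.layers.Lambda(
    lambda t: tf.reduce_mean(t, axis=1))(features)  # collapse height -> (None, 8, 1536)
logits = tf.keras.layers.Dense(96)(seq)             # per-step character logits
model = tf.keras.Model(inputs, logits)
model.summary()
```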

Number Plate Detection with a Multi-Convolutional Neural Network Approach with Optical Character Recognition for Mobile Devices

  • Gerber, Christian;Chung, Mokdong
    • Journal of Information Processing Systems / v.12 no.1 / pp.100-108 / 2016
  • In this paper, we propose a method to achieve improved number plate detection on mobile devices by applying a multiple convolutional neural network (CNN) approach. First, a supervised CNN verifier performs car detection; the detected car regions are then passed to a second supervised CNN verifier for number plate detection. In the final step, the detected number plate regions are verified through optical character recognition by another CNN verifier. Since mobile devices are limited in computation power, we propose a fast method to recognize number plates, and we expect it to be used in the field of intelligent transportation systems.
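
As a structural sketch of the cascade (the three stage functions below are hypothetical placeholders for the paper's CNN verifiers, not a real API):

```python
# Sketch of the cascaded-verifier idea. detect_cars, detect_plates, and
# recognize_plate are hypothetical stand-ins for the three CNN verifiers.
def read_plates(frame, detect_cars, detect_plates, recognize_plate):
    """Run the three-stage cascade on one image."""
    results = []
    for car_box in detect_cars(frame):             # stage 1: car regions
        car_crop = crop(frame, car_box)
        for plate_box in detect_plates(car_crop):  # stage 2: plate regions
            plate_crop = crop(car_crop, plate_box)
            text = recognize_plate(plate_crop)     # stage 3: OCR verifier
            if text:
                results.append((car_box, plate_box, text))
    return results

def crop(image, box):
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]
```

Cascading keeps each CNN small, which matters on computation-limited mobile devices: the plate detector only sees car crops, and the OCR verifier only sees plate crops.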

Unit Under Tester Auto System using OCR (Optical Character Recognition) (Optical Character Recognition을 이용한 계측기기 자동 교정시스템구축기술)

  • Kang, Sang-Mu;Kim, Young-Jic;Cheon, Yong-Sik
    • Proceedings of the KIEE Conference / 2011.07a / pp.1772-1773 / 2011
  • As modern technology has advanced, measurement technology has likewise become diverse and complex, and the calibration of precision measuring instruments demands complicated, accurate, repetitive, and continuous data acquisition. Moreover, when several instruments are used, data acquisition takes a long time and requires accurate knowledge of instrument operation as well as a high level of related expertise. Therefore, measurement automation using computer-controlled instruments is needed, so that measurement time can be minimized, individual operator error can be eliminated through consistent results, and data can be acquired easily.

A Study on Construction of Technical Reports Management System Using Optical Technology (광기술을 이용한 연구보고서 관리시스템 구축)

  • 이상헌;김익철
    • Journal of the Korean Society for Information Management / v.9 no.1 / pp.131-164 / 1992
  • In this study, a technical report management system using optical technology is described in detail. This management system is designed for both bibliographic (character) and full-text (image) information. Several optical filing systems already on the Korean market are scrutinized and compared against standard functions in order to build a more efficient management system for technical reports that can be easily integrated into the existing KRISS library automation system. For that purpose, up-to-date technologies (i.e., digital image processing (DIP), MARC standards, and optical character recognition (OCR)) are applied to this system.

Variations of AlexNet and GoogLeNet to Improve Korean Character Recognition Performance

  • Lee, Sang-Geol;Sung, Yunsick;Kim, Yeon-Gyu;Cha, Eui-Young
    • Journal of Information Processing Systems / v.14 no.1 / pp.205-217 / 2018
  • Deep learning using convolutional neural networks (CNNs) is being studied in various fields of image recognition, and these studies show excellent performance. In this paper, we compare the performance of two CNN architectures, KCR-AlexNet and KCR-GoogLeNet. The experimental data used in this paper come from PHD08, a large-scale Korean character database with 2,187 samples of each of 2,350 Korean character classes, for a total of 5,139,450 data samples. In the training results, KCR-AlexNet showed an accuracy of over 98% on the top-1 test, and KCR-GoogLeNet showed an accuracy of over 99% on the top-1 test after the final training iteration. To compare the classification success rate with commercial optical character recognition (OCR) programs and to ensure the objectivity of the experiment, we built an additional Korean character dataset with fonts that are not in PHD08. While the commercial OCR programs showed classification success rates of 66.95% to 83.16%, KCR-AlexNet and KCR-GoogLeNet showed average classification success rates of 90.12% and 89.14%, respectively, both higher than the commercial programs' rates. Considering the time factor, KCR-AlexNet trained faster on PHD08, while KCR-GoogLeNet had a faster classification speed.
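
As a sketch of how a top-1 figure like those above is computed (the model and data loader here are placeholders, not PHD08 artifacts):

```python
import torch

# Top-1 accuracy: the prediction counts as correct only when the single
# highest-scoring class matches the label. Model and loader are placeholders.
def top1_accuracy(model, loader, device="cpu"):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)   # top-1 class per sample
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total
```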

Development of a Video Caption Recognition System for Sport Event Broadcasting (스포츠 중계를 위한 자막 인식 시스템 개발)

  • Oh, Ju-Hyun
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.94-98 / 2009
  • A video caption recognition system has been developed for broadcasting sport events such as Major League Baseball. The purpose of the system is to translate information expressed in English units, such as miles per hour (MPH), into SI units such as km/h. The system detects the ball speed displayed in the video and recognizes the numerals; the ball speed is then converted to km/h and displayed by the downstream character generator (CG) system. Although neural-network-based methods are widely used for character and numeral recognition, we use template matching to avoid the training process that would be required before broadcasting. With the proposed template matching method, the operator can cope with situations where the caption's appearance changes without notice. Templates are configured by the operator from a captured screenshot of the first pitch that shows a ball speed, and are updated with subsequent correct recognition results. The accuracy of the recognition module is over 97%, which is still not enough for live broadcasting, so when the recognition confidence is low, the system asks the operator for the correct result, and the operator chooses the right one using hot keys.
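
As a hedged sketch of the matching-and-conversion step (the file names and the normalized-correlation method are assumptions; the image files stand in for the paper's operator-captured templates):

```python
import cv2

# Digit templates, assumed to be crops the operator captured from the
# first pitch with a ball speed; file names here are hypothetical.
templates = {d: cv2.imread(f"digit_{d}.png", cv2.IMREAD_GRAYSCALE)
             for d in "0123456789"}

def read_digit(patch):
    """Return the best-matching digit and its match score for one patch;
    low scores would be deferred to the operator, as described above."""
    scores = {d: cv2.matchTemplate(patch, t, cv2.TM_CCOEFF_NORMED).max()
              for d, t in templates.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

def mph_to_kmh(mph):
    """Convert the recognized speed to SI units for the CG system."""
    return round(mph * 1.609344)  # e.g. 95 MPH -> 153 km/h
```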
