• Title/Summary/Keyword: OCR text extraction

Text Region Extraction and OCR on Camera Based Images (카메라 영상 위에서의 문자 영역 추출 및 OCR)

  • Shin, Hyun-Kyung
    • The KIPS Transactions:PartD / v.17D no.1 / pp.59-66 / 2010
  • Traditional OCR engines are designed for scanned documents in a calibrated environment. Three-dimensional perspective distortion and smooth (warping) distortion are critical problems in images from un-calibrated devices, e.g., photos taken with smartphones. To meet the growing demand for recognizing text embedded in photos acquired from non-calibrated hand-held devices, we address the problem in three categorical aspects: a rotation-invariant method for text region extraction, a scale-invariant method for text line segmentation, and three-dimensional perspective mapping. By integrating these methods, we developed an OCR engine for camera-captured images.
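A minimal sketch of the perspective-correction idea (not the paper's implementation, whose details are not given in the abstract): with OpenCV, a quadrilateral text region found by a prior detection step can be mapped onto an axis-aligned rectangle before OCR. The corner coordinates in the usage comment are hypothetical.

```python
# Illustrative sketch (not the paper's implementation): flatten a
# perspective-distorted text region with OpenCV before running OCR.
# The four corners are assumed to come from a prior text-region detector.
import cv2
import numpy as np

def rectify_text_region(image, corners, out_w=400, out_h=100):
    """Map the quadrilateral `corners` (TL, TR, BR, BL order) onto an
    axis-aligned rectangle, removing the perspective distortion."""
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (out_w, out_h))

# Hypothetical usage on a detected text block:
# img = cv2.imread("phone_photo.jpg")
# flat = rectify_text_region(img, [(120, 80), (520, 60), (540, 160), (130, 190)])
```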

A Fast Algorithm for Korean Text Extraction and Segmentation from Subway Signboard Images Utilizing Smartphone Sensors

  • Milevskiy, Igor;Ha, Jin-Young
    • Journal of Computing Science and Engineering / v.5 no.3 / pp.161-166 / 2011
  • We present a fast algorithm for Korean text extraction and segmentation from subway signboards using smartphone sensors, designed to minimize computational time and memory usage. The algorithm serves as the preprocessing steps for optical character recognition (OCR): binarization, text location, and segmentation. A signboard image captured by the smartphone camera at an arbitrary angle is rotated by the angle detected by the sensors, as if the image had been taken while holding the phone horizontally. Binarization is performed only once, on a subset of connected components rather than on the whole image area, yielding a large reduction in computational time. Text location is guided by a marker-line the user places over the region of interest in the binarized image via the touch screen. Text segmentation then reuses the connected-component data from the binarization step and cuts the string into individual character images. The resulting data can be used as OCR input, thus solving the most difficult part of OCR on text areas in natural scene images. Experimental results showed that the binarization step of our method is 3.5 and 3.7 times faster than the Niblack and Sauvola adaptive-thresholding algorithms, respectively, while also achieving better quality.
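The two preprocessing ideas are easy to illustrate with off-the-shelf calls; the sketch below is a stand-in for the authors' optimized code, using OpenCV for the sensor-guided rotation and scikit-image's Sauvola thresholding (one of the baselines the paper compares against).

```python
# Stand-in sketch for the two preprocessing ideas above, using
# off-the-shelf calls rather than the authors' optimized code:
# (1) rotate the photo by the tilt angle reported by the phone's sensors,
# (2) Sauvola adaptive thresholding, one of the paper's baselines.
import cv2
import numpy as np
from skimage.filters import threshold_sauvola

def deskew_by_sensor_angle(image, roll_degrees):
    """Rotate the image as if the phone had been held level."""
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), roll_degrees, 1.0)
    return cv2.warpAffine(image, M, (w, h))

def binarize_sauvola(gray, window=25):
    """Whole-image Sauvola baseline; the paper's method instead
    binarizes only a subset of connected components."""
    thresh = threshold_sauvola(gray, window_size=window)
    return (gray > thresh).astype(np.uint8) * 255
```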

Scene Text Recognition Performance Improvement through an Add-on of an OCR based Classifier (OCR 엔진 기반 분류기 애드온 결합을 통한 이미지 내부 텍스트 인식 성능 향상)

  • Chae, Ho-Yeol;Seok, Ho-Sik
    • Journal of IKEEE / v.24 no.4 / pp.1086-1092 / 2020
  • An autonomous agent operating in the real world should be able to recognize text in scenes. With the advancement of deep learning, various DNN models have been utilized for transformation, feature extraction, and prediction. However, existing state-of-the-art STR (Scene Text Recognition) engines do not achieve the performance required for real-world applications. In this paper, we introduce a performance-improvement method based on an add-on composed of an OCR (Optical Character Recognition) engine and a classifier for STR engines. On instances from the IC13 and IC15 datasets that an STR engine failed to recognize, our method recognizes 10.92% of the previously unrecognized characters.
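A hedged sketch of the add-on pattern: crops that the primary STR engine recognizes with low confidence are re-run through a conventional OCR engine. `str_engine`, its `(text, confidence)` return shape, and the threshold are assumptions; the paper's actual classifier that arbitrates between the engines is not reproduced here.

```python
# Hedged sketch of the add-on pattern: low-confidence STR results fall
# back to a conventional OCR engine (Tesseract via pytesseract).
import pytesseract
from PIL import Image

def recognize_with_fallback(crop: Image.Image, str_engine, conf_threshold=0.8):
    text, conf = str_engine(crop)  # hypothetical STR model API
    if conf >= conf_threshold:
        return text
    # Fall back to Tesseract in single-text-line mode (--psm 7).
    return pytesseract.image_to_string(crop, config="--psm 7").strip()
```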

Text Extraction In WWW Images (웹 영상에 포함된 문자 영역의 추출)

  • 김상현;심재창;김중수
    • Proceedings of the IEEK Conference / 2000.06d / pp.15-18 / 2000
  • In this paper, we propose a method for extracting text from Web images. Our approach is based on contrast detection and pixel-component-ratio analysis at the mouse position. The text extracted with OCR can then be used for real-time dictionary lookup or language-translation applications in a Web browser.
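A toy sketch of the mouse-position analysis, with illustrative thresholds that are not the paper's values:

```python
# Test the local contrast and the ratio of dark ("ink") pixels around
# the mouse position to decide whether it sits on text. All thresholds
# here are illustrative assumptions.
import numpy as np

def looks_like_text(gray: np.ndarray, x: int, y: int, half=15,
                    min_contrast=60, ink_ratio=(0.05, 0.6)):
    """gray: 2-D uint8 image; (x, y): mouse position in pixels."""
    patch = gray[max(0, y - half):y + half, max(0, x - half):x + half]
    if patch.size == 0:
        return False
    contrast = int(patch.max()) - int(patch.min())
    dark = (patch < patch.mean()).mean()  # rough pixel-component ratio
    return contrast >= min_contrast and ink_ratio[0] <= dark <= ink_ratio[1]
```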

Text Extraction in HSI Color Space by Weighting Scheme

  • Le, Thi Khue Van;Lee, Gueesang
    • Smart Media Journal / v.2 no.1 / pp.31-36 / 2013
  • Robust and efficient text extraction is critical to the accuracy of Optical Character Recognition (OCR) systems. Natural scene images with degradations such as uneven illumination, perspective distortion, complex backgrounds, and multi-colored text pose many challenges for computer vision tasks, especially text extraction. In this paper, we propose a method for extracting text in signboard images based on a combination of the mean shift algorithm and a weighting scheme for hue and saturation in the HSI color space for clustering. The number of clusters is determined automatically by mean-shift-based density estimation, in which local clusters are estimated by repeatedly searching for higher-density points in the feature vector space. The weighting scheme for hue and saturation is used to formulate a new distance measure in cylindrical coordinates for text extraction. Experimental results on various natural scene images are presented to demonstrate the effectiveness of our approach.
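A weighted cylindrical distance in HSI space, the kind of measure this clustering relies on, can be sketched as below. Embedding hue and saturation on a disc handles hue's angular wrap-around; the weights `w_c` and `w_i` are placeholders for the paper's own hue/saturation weighting scheme.

```python
# Sketch of a weighted cylindrical distance between HSI pixel features.
import numpy as np

def hsi_distance(p, q, w_c=1.0, w_i=0.5):
    """p, q: (hue_in_radians, saturation, intensity) feature vectors."""
    h1, s1, i1 = p
    h2, s2, i2 = q
    # Hue/saturation as polar coordinates on a disc, so hue wrap-around
    # (0 == 2*pi) costs nothing.
    dx = s1 * np.cos(h1) - s2 * np.cos(h2)
    dy = s1 * np.sin(h1) - s2 * np.sin(h2)
    return np.sqrt(w_c * (dx * dx + dy * dy) + w_i * (i1 - i2) ** 2)
```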

A Comparative Study on OCR using Super-Resolution for Small Fonts

  • Cho, Wooyeong;Kwon, Juwon;Kwon, Soonchu;Yoo, Jisang
    • International Journal of Advanced Smart Convergence / v.8 no.3 / pp.95-101 / 2019
  • Recently, there have been many issues related to text recognition using Tesseract. One of these issues is that recognition accuracy drops significantly for smaller fonts. Tesseract extracts text by creating directional outlines in the image; it then searches its database and uses template matching against characters with similar feature points to select the character with the lowest error. Because text extraction is poor for small fonts, recognition accuracy is lowered. In this paper, we compared text recognition accuracy after applying various super-resolution methods to small text images and examined how recognition accuracy varies with image size. To recognize small Korean text images, we used super-resolution algorithms based on deep learning models such as SRCNN, ESRCNN, DSRCNN, and DCSCN. The dataset for training and testing consisted of scanned Korean-language images. The images were resized by factors of 0.5 to 0.8 from a 12 pt font size. The experiment was performed on the x0.5-resized images, and the results showed that DCSCN super-resolution is the most effective method, reducing the precision error rate by 7.8% and the recall error rate by 8.4%. The results demonstrate that the accuracy of text recognition for smaller Korean fonts can be improved by adding super-resolution to the OCR preprocessing module.
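A pipeline sketch, assuming pytesseract with Korean language data installed. A bicubic resize stands in for the deep SR models the paper evaluates; swapping in SRCNN or DCSCN would replace only the upscaling step.

```python
# Upscale a small-font image before handing it to Tesseract. The
# bicubic resize below is a stand-in for a trained SR network.
import cv2
import pytesseract

def ocr_with_upscaling(path, scale=2):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    up = cv2.resize(gray, None, fx=scale, fy=scale,
                    interpolation=cv2.INTER_CUBIC)  # SR model goes here
    return pytesseract.image_to_string(up, lang="kor")
```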

Study on OCR Enhancement of Homomorphic Filtering with Adaptive Gamma Value

  • Heeyeon Jo;Jeongwoo Lee;Hongrae Lee
    • Journal of the Korea Society of Computer and Information / v.29 no.2 / pp.101-108 / 2024
  • AI-OCR (Artificial Intelligence Optical Character Recognition) combines OCR technology with artificial intelligence to overcome limitations that previously required human intervention. Training on diverse data sets is essential to enhance AI-OCR performance; however, the recognition rate declines when image colors have similar brightness levels. To solve this issue, this study employs homomorphic filtering as a preprocessing step to clearly separate brightness levels, thereby increasing the text recognition rate. Homomorphic filtering is well suited to text extraction because it adjusts the high- and low-frequency components of an image separately through a gamma value, but it has the downside of requiring manual adjustment of that value. This research proposes a range of gamma threshold values based on tests involving image contrast, brightness, and entropy. Experimental results using the proposed gamma range in homomorphic filtering indicate a high likelihood of effective AI-OCR performance.
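A minimal homomorphic filter in NumPy, for orientation. The image goes to the log domain, a Gaussian high-frequency-emphasis filter scales low and high frequencies by `gamma_l` and `gamma_h`, and exp maps the result back. The gamma values here are placeholders; choosing them adaptively is exactly what the paper studies.

```python
# Log -> FFT -> high-frequency emphasis -> inverse FFT -> exp.
import numpy as np

def homomorphic_filter(gray, gamma_l=0.5, gamma_h=2.0, d0=30.0):
    """gray: 2-D float array scaled to [0, 1]."""
    rows, cols = gray.shape
    F = np.fft.fftshift(np.fft.fft2(np.log1p(gray)))
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2  # squared distance from center
    H = (gamma_h - gamma_l) * (1 - np.exp(-D2 / (2 * d0 ** 2))) + gamma_l
    out = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    return np.expm1(out)
```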

Pet Disease Prediction Service and Integrated Management Application (반려동물 질병예측서비스 및 통합관리 어플리케이션)

  • Ki-Du Pyo;Dong-Young Lee;Won-Se Jung;Oh-Jun Kwon;Kyung-Suk Han
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.6 / pp.133-137 / 2023
  • In this paper, we developed a comprehensive pet-management application that combines pet AI diagnosis, animal hospital search, a smart household account book, and community functions. The application removes the inconvenience of juggling these functions across separate applications: users can easily run the pet AI diagnosis service from photos, look up animal hospital information gathered by crawling, find nearby animal hospitals, and keep a smart household account book that scans receipts using OCR text extraction. With this application, the information needed to raise a pet, such as its health and related expenses, can be managed in one system.
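The receipt-scanning feature might look roughly like the following sketch. The OCR call is real (pytesseract with Korean data); the amount-matching regex and everything around it are purely illustrative, not from the paper.

```python
# Hypothetical receipt scan: OCR the photo, then pull won amounts
# for the household account book.
import re
import pytesseract
from PIL import Image

def extract_amounts(receipt_path):
    text = pytesseract.image_to_string(Image.open(receipt_path), lang="kor")
    # Match amounts such as "12,000" or "12000".
    return [int(m.replace(",", ""))
            for m in re.findall(r"\d{1,3}(?:,\d{3})+|\d{4,}", text)]
```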

An End-to-End Sequence Learning Approach for Text Extraction and Recognition from Scene Image

  • Lalitha, G.;Lavanya, B.
    • International Journal of Computer Science & Network Security / v.22 no.7 / pp.220-228 / 2022
  • Images always carry useful information, so detecting text in scene images is imperative. The purpose of the proposed work is to recognize scene text images, for example signboard images posted along highways; scene text detection on highway signboards plays a vital role in road safety measures. At the initial stage, preprocessing techniques are applied to sharpen and enhance the features present in the image. Likewise, morphological operators are applied to close the small gaps between objects. We propose a two-phase algorithm for extracting and recognizing text from scene images. In phase I, text is extracted from the scene image by applying image preprocessing techniques such as blurring, erosion, and top-hat filtering, followed by thresholding and the morphological gradient with fixed kernel sizes; a Canny edge detector is then applied to detect the text contained in the image. In phase II, the text is recognized using MSER (Maximally Stable Extremal Regions) and OCR. The proposed work detects text in scene images from the popular dataset repositories SVT, ICDAR 2003, and MSRA-TD500, captured under various illumination conditions and angles. The proposed algorithm produces higher accuracy in minimal execution time compared with state-of-the-art methodologies.
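Phase II can be sketched with OpenCV's MSER detector plus an OCR pass; the parameters below are library defaults, not the paper's tuned kernel sizes and thresholds.

```python
# Find stable text-like regions with MSER, then run OCR on the image.
import cv2
import pytesseract

def detect_and_recognize(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    mser = cv2.MSER_create()
    regions, boxes = mser.detectRegions(gray)  # candidate text regions
    text = pytesseract.image_to_string(gray)
    return boxes, text
```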

HunMinJeomUm: Text Extraction and Braille Conversion System for the Learning of the Blind (시각장애인의 학습을 위한 텍스트 추출 및 점자 변환 시스템)

  • Kim, Chae-Ri;Kim, Ji-An;Kim, Yong-Min;Lee, Ye-Ji;Kong, Ki-Sok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.5 / pp.53-60 / 2021
  • The number of visually impaired and blind people is increasing, but braille-translated textbooks for them remain insufficient, which undermines their right to education despite their will to learn. To guarantee this right, this paper develops a learning system, HunMinJeomUm, that helps them access textbooks, documents, and photographs not available in braille without the assistance of others. In our system, a smartphone app and web pages are designed for accessibility by the blind, and a braille kit is built using an Arduino and braille modules. The system supports the following functions. First, users select the documents or pictures they want, and the system extracts the text using OCR. Second, the extracted text is converted into voice and braille. Third, a membership-registration function lets users review their extracted texts. Experiments have confirmed that the system generates braille and audio output successfully and achieves high OCR recognition rates, and that even completely blind users can easily operate the smartphone app.
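The OCR-to-braille flow might be sketched as follows. The OCR call is standard; the six-dot table covers only a few Latin letters for illustration (the real system performs Korean braille translation), and the serial protocol to the Arduino kit is hypothetical.

```python
# OCR a page, map characters to braille cells, hand cells to the kit.
import pytesseract
from PIL import Image

# Braille dots are numbered 1-6; tiny illustrative table only.
BRAILLE = {"a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5)}

def page_to_cells(image_path):
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    return [BRAILLE[ch] for ch in text if ch in BRAILLE]

# Hypothetical hand-off to the braille kit, one byte per cell:
# for cell in page_to_cells("textbook_page.png"):
#     serial_port.write(bytes([sum(1 << (d - 1) for d in cell)]))
```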