• Title/Summary/Keyword: Character Detection


Regular Expression Matching Processor Architecture Supporting Character Class Matching (문자클래스 매칭을 지원하는 정규표현식 매칭 프로세서 구조)

  • Yun, SangKyun
    • Journal of KIISE / v.42 no.10 / pp.1280-1285 / 2015
  • Many hardware-based regular expression matching architectures have been proposed for high-performance matching. In particular, regular expression processors such as ReCPU and SMPU perform pattern matching in a manner similar to general-purpose processors, which provides flexibility when patterns are updated. However, these processors are inefficient at class matching because they do not provide character class matching capabilities. This paper proposes an instruction set and architecture for a regular expression matching processor that supports character class matching. The proposed processor can perform class matching efficiently because it includes character class, character range, and negated character class matching capabilities.
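The character class, range, and negation capabilities described in the abstract can be modeled in a few lines. The sketch below is illustrative only, assuming a simple set-of-ranges representation; it is not the processor's actual instruction encoding.

```python
# Sketch: modeling a character-class match primitive in software.
# A class is a list of inclusive ranges plus a negation flag, mirroring
# the character class / character range / negated class capabilities.

def make_class(ranges, negated=False):
    """ranges: list of (lo, hi) character pairs, inclusive."""
    return {"ranges": [(ord(lo), ord(hi)) for lo, hi in ranges],
            "negated": negated}

def class_match(cclass, ch):
    """Return True if ch is matched by the character class."""
    code = ord(ch)
    inside = any(lo <= code <= hi for lo, hi in cclass["ranges"])
    return inside != cclass["negated"]

# [a-z0-9]
alnum = make_class([("a", "z"), ("0", "9")])
# [^0-9]
not_digit = make_class([("0", "9")], negated=True)
```

In hardware, the per-range comparisons can run in parallel, which is why a dedicated class-match instruction is cheaper than expanding a class into an alternation of single-character matches.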

An Efficient Binarization Method for Vehicle License Plate Character Recognition

  • Yang, Xue-Ya;Kim, Kyung-Lok;Hwang, Byung-Kon
    • Journal of Korea Multimedia Society / v.11 no.12 / pp.1649-1657 / 2008
  • In this paper, to overcome the failure of binarization for characters suffering from low contrast and non-uniform illumination in a license plate character recognition system, we improve the binarization method by combining local thresholding with global thresholding and edge detection. First, the local thresholding method is applied to locate the characters in the license plate image, and the threshold value for each character is then obtained based on an edge detector. This method solves the problem of local low contrast and non-uniform illumination. Finally, a back-propagation neural network is selected as a powerful tool to perform the recognition process. The experimental results illustrate that the proposed binarization method works well and that the selected classifier saves processing time. In addition, the character recognition system achieved a recognition accuracy of 95.7%, and the recognition speed is kept within 0.3 seconds.
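A minimal sketch of the local-plus-global thresholding idea, on a nested-list grayscale image: each pixel is compared against a blend of the image-wide mean and a neighborhood mean. The blend weight `alpha` and the window size are illustrative assumptions, not the paper's parameters, and the edge-detection step is omitted.

```python
# Combine a global threshold with local (per-window) thresholds to cope
# with low contrast and non-uniform illumination.

def global_threshold(img):
    """Simple global threshold: mean intensity of the whole image."""
    pixels = [p for row in img for p in row]
    return sum(pixels) / len(pixels)

def local_threshold(img, r, c, win=1):
    """Mean intensity of the (2*win+1)^2 neighborhood around (r, c)."""
    rows, cols = len(img), len(img[0])
    vals = [img[i][j]
            for i in range(max(0, r - win), min(rows, r + win + 1))
            for j in range(max(0, c - win), min(cols, c + win + 1))]
    return sum(vals) / len(vals)

def binarize(img, alpha=0.5):
    """Mark a pixel as foreground (1) if it is darker than the blended
    global/local threshold."""
    g = global_threshold(img)
    return [[1 if img[r][c] < alpha * g + (1 - alpha) * local_threshold(img, r, c)
             else 0
             for c in range(len(img[0]))]
            for r in range(len(img))]
```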


A Novel Character Segmentation Method for Text Images Captured by Cameras

  • Lue, Hsin-Te;Wen, Ming-Gang;Cheng, Hsu-Yung;Fan, Kuo-Chin;Lin, Chih-Wei;Yu, Chih-Chang
    • ETRI Journal / v.32 no.5 / pp.729-739 / 2010
  • Due to the rapid development of mobile devices equipped with cameras, instant translation of any text seen in any context is possible. Mobile devices can serve as a translation tool by recognizing the text presented in captured scenes. Images captured by cameras embed more external or unwanted effects, which need not be considered in traditional optical character recognition (OCR). In this paper, we segment a text image captured by a mobile device into individual characters to facilitate OCR kernel processing. Before proceeding with character segmentation, text detection and text line construction must be performed. A novel character segmentation method that integrates touched-character filters is applied to text images captured by cameras. In addition, periphery features are extracted from the segmented images of touched characters and fed as inputs to support vector machines to calculate confidence values. In our experiments, the accuracy rate of the proposed character segmentation system is 94.90%, which demonstrates the effectiveness of the proposed method.
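The classical baseline that segmentation methods like this build on is the column projection profile: empty columns in a binarized text line separate characters (touched characters, which this profile cannot split, are what the paper's filters and SVM stage handle). A minimal sketch, not the paper's method:

```python
# Projection-profile character segmentation on a binarized text line.

def column_profile(binary_img):
    """Number of foreground (1) pixels in each column."""
    return [sum(row[c] for row in binary_img)
            for c in range(len(binary_img[0]))]

def segment_columns(binary_img):
    """Split a text-line image into (start, end) column ranges of
    characters, using empty columns as separators."""
    profile = column_profile(binary_img)
    spans, start = [], None
    for c, count in enumerate(profile):
        if count > 0 and start is None:
            start = c                      # a character run begins
        elif count == 0 and start is not None:
            spans.append((start, c - 1))   # the run ends
            start = None
    if start is not None:
        spans.append((start, len(profile) - 1))
    return spans
```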

Polygonal Model Simplification Method for Game Character (게임 캐릭터를 위한 폴리곤 모델 단순화 방법)

  • Lee, Chang-Hoon;Cho, Seong-Eon;Kim, Tai-Hoon
    • Journal of Advanced Navigation Technology / v.13 no.1 / pp.142-150 / 2009
  • It is very important to generate a simplified model from a complex 3D character in computer games. We propose a new method of extracting feature lines from a 3D game character. Given an unstructured 3D character model containing texture information, we use a model feature map (MFM), a 2D map that abstracts the variation of texture and curvature in the 3D character model. The MFM is created from both a texture map and a curvature map, which are produced separately by edge detection to locate line features. The MFM can be edited interactively using standard image-processing tools. We demonstrate the technique on several data sets, including, but not limited to, facial characters.
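The MFM construction described above can be sketched as fusing two edge maps, one from texture variation and one from curvature variation. The per-pixel maximum used as the fusion rule below is an illustrative assumption, not the paper's exact formula.

```python
# Build a model feature map (MFM) by fusing texture and curvature edges.

def edge_map(values, thresh):
    """1 where the absolute difference to the right neighbor exceeds
    thresh (a minimal 1-D edge detector along each row)."""
    return [[1 if abs(row[c + 1] - row[c]) > thresh else 0
             for c in range(len(row) - 1)]
            for row in values]

def model_feature_map(texture, curvature, thresh=10):
    """Fuse texture and curvature edge maps into a single MFM."""
    t_map = edge_map(texture, thresh)
    c_map = edge_map(curvature, thresh)
    return [[max(t, c) for t, c in zip(trow, crow)]
            for trow, crow in zip(t_map, c_map)]
```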


Low-Quality Banknote Serial Number Recognition Based on Deep Neural Network

  • Jang, Unsoo;Suh, Kun Ha;Lee, Eui Chul
    • Journal of Information Processing Systems / v.16 no.1 / pp.224-237 / 2020
  • Recognition of banknote serial numbers is one of the important functions for implementing an intelligent banknote counter and can be used for various purposes. However, previous character recognition methods are of limited use due to the font type of the banknote serial number, the variation problem caused by the soiled status of banknotes, and recognition speed issues. In this paper, we propose an aspect-ratio-based character region segmentation and a convolutional neural network (CNN) based banknote serial number recognition method. To detect the character region, the character area is determined based on the aspect ratio of each character in the serial number candidate area, after banknote area detection and de-skewing are performed. We then designed and compared four types of CNN models and determined the best model for serial number recognition. Experimental results showed that the recognition accuracy for each character was 99.85%. In addition, we confirmed that recognition performance improves when data augmentation is performed. The banknotes used in the experiment were Indian rupees, which are badly soiled and printed in an unusual font, so the method can be regarded as performing well. The recognition speed was also sufficient to run in real time on a device that counts 800 banknotes per minute.
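The aspect-ratio-based segmentation step can be sketched as a filter over candidate bounding boxes: a box is kept only if its width-to-height ratio looks like a glyph. The ratio thresholds below are illustrative assumptions, not the values used in the paper.

```python
# Aspect-ratio filtering of candidate character boxes in a serial
# number region.

def is_character_box(w, h, min_ratio=0.3, max_ratio=0.9):
    """Keep a candidate box if its width/height ratio looks like a glyph."""
    if h == 0:
        return False
    return min_ratio <= w / h <= max_ratio

def filter_character_boxes(boxes, **kw):
    """boxes: list of (x, y, w, h); return only character-like boxes."""
    return [b for b in boxes if is_character_box(b[2], b[3], **kw)]
```

A box twice as wide as it is tall (e.g. two touching characters) fails the test and can be sent to a splitting step instead.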

An Effective Method of Product Number Detection from Thick Plates (효과적인 후판의 제품번호 검출 방법)

  • Park, Sang-Hyun
    • The Journal of the Korea institute of electronic communication sciences / v.10 no.1 / pp.139-148 / 2015
  • In this paper, a new algorithm is proposed for detecting the product number of each thick plate and extracting each character of the product number from an image that contains several thick plates. In general, an image of thick plates contains several steel plates. To obtain the product numbers from the image, we first need to separate each plate. To do so, we use the line edges of the thick plates and a clustering algorithm. After separating each plate, background parts are eliminated from the image of each plate. The background of an individual thick-plate image consists of the dark part of the steel and the white part of the paint on which the product number is printed. We propose a two-tiered method in which dark background parts are eliminated first and white parts are eliminated next. Finally, each character is extracted from the product number image using the characteristics of product numbers. Experiments on various steel plate images show that the proposed algorithm detects each thick plate and extracts the product number from an image effectively.
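The two-tiered background elimination can be sketched as two intensity passes: first discard dark steel-surface pixels, then discard bright paint pixels among the survivors, leaving mid-intensity pixels as character candidates. The two thresholds are illustrative assumptions.

```python
# Two-tiered background elimination for a grayscale plate image.

def eliminate_background(img, dark_thresh=60, bright_thresh=200):
    """Return a mask: 1 where a pixel survives both elimination passes."""
    # Pass 1: drop dark background (steel surface).
    mask = [[1 if p >= dark_thresh else 0 for p in row] for row in img]
    # Pass 2: drop bright background (white paint) among the survivors.
    return [[m if m and p < bright_thresh else 0
             for m, p in zip(mrow, prow)]
            for mrow, prow in zip(mask, img)]
```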

Improvement OCR Algorithm for Efficient Book Catalog Retrieval Technology (효과적인 도서목록 검색을 위한 개선된 OCR알고리즘에 관한 연구)

  • HeWen, HeWen;Baek, Young-Hyun;Moon, Sung-Ryong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.1 / pp.152-159 / 2010
  • Existing character recognition algorithms recognize characters only under simple conditions. They have the disadvantage that recognition rates often drop drastically when the input document image is of low quality or contains rotated text or text in various fonts and sizes, because of external noise or data loss. This paper proposes an optical character recognition algorithm that uses bicubic interpolation for book catalog retrieval when the input image contains rotated, blurred, or variously sized and styled text. The proposed algorithm consists of a detection part and a recognition part. The detection part applies the Roberts operator and the Hausdorff distance algorithm to correctly detect the book catalog. The recognition part applies bicubic interpolation to compensate for data loss due to low quality and varied font and size, and then rotates the interpolated result to correct slant. Experimental results show that the proposed method improves the recognition rate by 6% and achieves a search time of 1.077 s.
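Bicubic interpolation weighs the four nearest samples in each dimension with a cubic convolution kernel. A minimal 1-D sketch is shown below (2-D bicubic applies it separably along rows and columns); the kernel with a = -0.5 is the common choice, though the paper's exact variant is not stated in the abstract.

```python
import math

# 1-D cubic-convolution interpolation (the building block of bicubic).

def cubic_kernel(x, a=-0.5):
    """Cubic convolution kernel; a = -0.5 is the common choice."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interp1d(samples, t):
    """Interpolate index-aligned samples at fractional position t,
    using the four nearest samples (clamped at the borders)."""
    i = math.floor(t)
    frac = t - i
    total = 0.0
    for m in range(-1, 3):
        idx = min(max(i + m, 0), len(samples) - 1)
        total += samples[idx] * cubic_kernel(m - frac)
    return total
```

At integer positions the kernel reproduces the original sample exactly, and on linearly varying data it reproduces the line, which is why upscaling with it preserves edges better than nearest-neighbor.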

A License Plate Detection Method Using Multiple-Color Model and Character Layout Information in Complex Background (다중색상 모델과 문자배치 정보를 이용한 복잡한 배경 영상에서의 자동차 번호판 추출)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society / v.11 no.11 / pp.1515-1524 / 2008
  • This paper proposes a method that detects a license plate in a complex background using a multiple-color model and character layout information. The layout of a green license plate differs from that of a white license plate, so this study adopts a strategy that first assumes the plate color and then utilizes the corresponding layout information. First, it extracts green areas from an input image using a multiple-color model that combines the HSI and YIQ color models with the RGB color model. If green areas are detected, it searches for the character layout of a green plate by analyzing the connected components in each area. If not, it searches for the character layout of a white plate over the whole image. Finally, it extracts a license plate by grouping the connected components that correspond to characters. Experimental results show that 98.1% of 419 input images are correctly detected and that the proposed method is robust to illumination, shadow, and weather conditions.
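The multiple-color-model idea can be sketched as requiring a pixel to look green under more than one color representation before accepting it. The paper combines RGB with HSI and YIQ; here, HSV via the stdlib `colorsys` module stands in for simplicity, and the channel margin and hue band are assumptions.

```python
import colorsys

# A pixel counts as "green" only if both models agree.

def is_green_rgb(r, g, b, margin=20):
    """Green channel clearly dominates both other channels."""
    return g > r + margin and g > b + margin

def is_green_hue(r, g, b, lo=70, hi=170):
    """Hue (degrees) falls in a green band."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return lo <= h * 360 <= hi

def is_green_pixel(r, g, b):
    return is_green_rgb(r, g, b) and is_green_hue(r, g, b)
```

Requiring agreement between models rejects pixels that one model alone would misclassify, e.g. bright grayish pixels whose hue is nominally green but whose channels are nearly equal.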


Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.7 / pp.1807-1822 / 2023
  • Innovation and rapidly increasing functionality in user-friendly smartphones have encouraged shutterbugs to capture picturesque image macros in the work environment or during travel. Formal signboards are placed for marketing purposes and are enriched with text to attract people. Extracting and recognizing text from natural images is an emerging research issue that needs consideration. Compared to conventional optical character recognition (OCR), the complex backgrounds, implicit noise, lighting, and orientation of these scenic text photos make the problem more difficult, and Arabic-language scene text extraction and recognition adds further complications. The method described in this paper uses a two-phase methodology to extract Arabic text, with word-boundary awareness, from scenic images with varying text orientations. The first stage uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS), followed by traditional two-layer neural networks for recognition. This study also presents how an Arabic training and synthetic dataset can be created to exemplify superimposed text in different scene images. For this purpose, a dataset of 10k cropped images in which Arabic text was found was created for the detection phase, along with a 127k Arabic character dataset for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15k quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach, which is highly flexible in identifying complex Arabic text scenes such as arbitrarily oriented, curved, or deformed text, is used to detect these texts. Our experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe that in the future researchers will excel in the field of image processing, treating text images in any language to improve quality or reduce noise by enhancing the functionality of the VGG-16 based model using neural networks.

A Study on the License Plate Recognition Based on Direction Normalization and CNN Deep Learning (방향 정규화 및 CNN 딥러닝 기반 차량 번호판 인식에 관한 연구)

  • Ki, Jaewon;Cho, Seongwon
    • Journal of Korea Multimedia Society / v.25 no.4 / pp.568-574 / 2022
  • In this paper, direction normalization and CNN deep learning are used to develop a more reliable license plate recognition system. The existing license plate recognition system consists of three main modules: a license plate detection module, a character segmentation module, and a character recognition module. The proposed system minimizes recognition errors by adding a direction normalization module for cases in which the detected license plate is inclined. Experimental results show the superiority of the proposed method compared with the previous system.
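The direction-normalization idea can be sketched in coordinates: estimate the plate's tilt from two corner points, then rotate by the negative of that angle so the bottom edge becomes horizontal. Real systems rotate the image itself; the point-based version below is an illustrative simplification.

```python
import math

# Deskew a tilted plate by rotating about its bottom-left corner.

def tilt_angle(p_left, p_right):
    """Angle (radians) of the line through the plate's bottom corners."""
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    return math.atan2(dy, dx)

def rotate_point(p, angle, origin=(0.0, 0.0)):
    """Rotate point p about origin by `angle` radians."""
    x, y = p[0] - origin[0], p[1] - origin[1]
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s + origin[0], x * s + y * c + origin[1])

def normalize(points, p_left, p_right):
    """Rotate all points so the bottom edge becomes horizontal."""
    angle = tilt_angle(p_left, p_right)
    return [rotate_point(p, -angle, origin=p_left) for p in points]
```

After this step, the character segmentation module sees axis-aligned characters, which is what makes the downstream CNN's job easier.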