• Title/Summary/Keyword: character image

Search Results: 1,161

Image Comparison Using Directional Expansion Operation

  • Yoo, Suk Won
    • International Journal of Advanced Culture Technology
    • /
    • v.6 no.3
    • /
    • pp.173-177
    • /
    • 2018
  • Masks are generated by accumulating learning data characters in different fonts pixel by pixel, and the pixel values belonging to each mask are divided into 3 groups. Using the directional expansion operators, the text area of the test data character is expanded in 4 diagonal directions to create boundary areas that distinguish it from the background area. The degree of discordance between the expanded test data and each mask is calculated, and the mask with the minimum average discordance is selected as the final recognition result. Image comparison using directional expansion operations recognizes test data more accurately through 4 subdivided recognition processes. It also makes it possible to expand the ranges of the 3 groups of mask pixel values more evenly, so that new fonts can easily be added to the given learning data. A sketch of the mask comparison follows this entry.
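The mask-matching step described in this abstract can be illustrated with a short sketch. The diagonal expansion below is a simplified stand-in for the paper's directional expansion operators, and the discordance measure is assumed to be a mean absolute pixel difference; the function names and image format are illustrative only.

```python
import numpy as np

def expand_diagonal(char_img):
    """Expand the text (foreground) area of a binary character image in the
    4 diagonal directions by OR-ing diagonally shifted copies. This creates
    a boundary zone around the strokes (an assumed, simplified form of the
    paper's directional expansion operators)."""
    h, w = char_img.shape
    expanded = char_img.copy()
    for dy, dx in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
        shifted = np.zeros_like(char_img)
        dst_y = slice(max(dy, 0), h + min(dy, 0))
        dst_x = slice(max(dx, 0), w + min(dx, 0))
        src_y = slice(max(-dy, 0), h + min(-dy, 0))
        src_x = slice(max(-dx, 0), w + min(-dx, 0))
        shifted[dst_y, dst_x] = char_img[src_y, src_x]
        expanded = np.maximum(expanded, shifted)
    return expanded

def recognize(test_img, masks):
    """Pick the mask with the minimum average discordance (here: mean
    absolute pixel difference) from the expanded test image."""
    expanded = expand_diagonal(test_img).astype(float)
    scores = {label: np.mean(np.abs(expanded - mask.astype(float)))
              for label, mask in masks.items()}
    return min(scores, key=scores.get)
```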

An Efficient Block Segmentation and Classification of a Document Image Using Edge Information (문서영상의 에지 정보를 이용한 효과적인 블록분할 및 유형분류)

  • 박창준;전준형;최형문
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.10
    • /
    • pp.120-129
    • /
    • 1996
  • This paper presents an efficient block segmentation and classification method that uses the edge information of a document image. We extract four prominent features from the edge gradient and orientation, all of which, and thereby the block classifications, are insensitive to background noise and to brightness variations of the image. Using these four features, we can efficiently classify a document image into seven categories of blocks: small-size letters, large-size letters, tables, equations, flow charts, graphs, and photographs. The first five are character-recognizable text blocks, and the last two are non-character blocks. By introducing the column interval and text-line intervals of the document into the determination of the run length of the CRLA (constrained run length algorithm), we obtain an efficient block segmentation with reduced memory size. Simulation results show that the proposed algorithm can reliably segment and classify document blocks into the seven categories above, and the classification performance is high for all categories except graphs with too much variation. A sketch of the constrained run-length smearing step follows this entry.

  • PDF
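The CRLA step the abstract refers to is the classic constrained run-length smearing idea: background runs shorter than a constraint are filled so that nearby characters merge into blocks. The sketch below shows only the horizontal pass on a binary image; deriving the constraint from the column and text-line intervals, as the paper does, is not shown, and `max_gap` is just a parameter.

```python
import numpy as np

def crla_horizontal(binary_img, max_gap):
    """Constrained run-length smearing along rows: background (0) runs of
    length <= max_gap that lie between foreground (1) pixels are filled
    with 1, merging nearby characters into candidate blocks."""
    out = binary_img.copy()
    for row in out:
        run_start = None      # start index of the current background run
        seen_fg = False       # whether a foreground pixel precedes the run
        for x, v in enumerate(row):
            if v == 1:
                if seen_fg and run_start is not None and x - run_start <= max_gap:
                    row[run_start:x] = 1   # fill the short gap
                run_start = None
                seen_fg = True
            elif run_start is None:
                run_start = x
    return out
```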

Character Extraction from Color Map Image Using Interactive Clustering (대화식 클러스터링 기법을 이용한 칼라 지도의 문자 영역 추출에 관한 연구)

  • Ahn, Chang;Park, Chan-Jung;Rhee, Sang-Burm
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.1
    • /
    • pp.270-279
    • /
    • 1997
  • The conversion of printed maps into computerized databases is an enormous task, so automation of the conversion process is essential. Efficient computer representation of printed maps and line drawings depends on the codes assigned to characters and symbols and on the vector representation of the graphics. In many cases, maps are constructed in a number of layers, where each layer is printed in a distinct color and represents a subset of the map information. In order to properly extract the character layer from color map images, an interactive clustering and character extraction technique is proposed. Characters are usually separated from graphics by extracting and classifying connected components in the image, but this procedure fails when characters touch or overlap lines, something that occurs often in land register maps. By vectorizing line segments, the touched characters and numbers are extracted. The algorithm proposed in this paper is intended to contribute towards solving the color image clustering and touched-character problems. A sketch of the connected-component step follows this entry.

  • PDF
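The connected-component step that the abstract says usually separates characters from graphics can be sketched as follows. This is a generic illustration, not the paper's interactive clustering: components are labeled with `scipy.ndimage` and split by a hypothetical bounding-box size threshold, and characters that touch long line components stay attached to the graphics mask, which is exactly the case the paper's vectorization step addresses.

```python
import numpy as np
from scipy import ndimage

def split_characters_and_graphics(binary_layer, max_char_size=40):
    """Label connected components in one color layer and separate
    character-sized components from large graphic (line) components by
    bounding-box size. max_char_size is a hypothetical threshold."""
    labels, n = ndimage.label(binary_layer)
    char_mask = np.zeros_like(binary_layer)
    graphic_mask = np.zeros_like(binary_layer)
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        target = char_mask if max(h, w) <= max_char_size else graphic_mask
        target[labels == i] = 1
    return char_mask, graphic_mask
```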

A Method for Thresholding and Correction of Skew in Camera Document Images (카메라 문서 영상의 이진화 및 기울어짐 보정 방법)

  • Jang Dae-Geun;Chun Byung-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.3 s.35
    • /
    • pp.143-150
    • /
    • 2005
  • A camera image is very sensitive to illumination, which causes difficulties in recognizing characters. Camera-captured document images also suffer not only from skew but also from vignetting and geometric distortion. The vignetting effect makes it difficult to separate characters from the document image. Geometric distortion, caused by the mismatch of angle and center position between the document and the camera, distorts the shape of the characters, so character recognition is more difficult than when a scanner is used. In this paper, we propose a method that improves character recognition performance by correcting the geometric distortion of document images with a linear approximation that maps a quadrilateral region to a rectangular one. The proposed method also determines the quadrilateral transform region automatically, using the alignment of the character lines and the skew angles of the characters located at the ends of each character line. It can therefore correct the geometric distortion without positional information from the camera. A sketch of the quadrilateral-to-rectangle mapping follows this entry.

  • PDF
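The quadrilateral-to-rectangle correction can be sketched as an inverse mapping: for every pixel of the output rectangle, a source position is interpolated between the four detected corners and sampled from the input image. The corner positions are assumed to come from the character-line analysis described in the abstract; the bilinear interpolation below is a linear approximation, not necessarily the paper's exact transform.

```python
import numpy as np

def rectify_quadrilateral(img, corners, out_h, out_w):
    """Map the quadrilateral region defined by four corner points onto an
    out_h x out_w rectangle. corners are (x, y) points in the order
    top-left, top-right, bottom-right, bottom-left. Each output pixel's
    source position is interpolated linearly between the corners and
    sampled with nearest-neighbor lookup."""
    tl, tr, br, bl = [np.asarray(c, dtype=float) for c in corners]
    h, w = img.shape[:2]
    out = np.zeros((out_h, out_w) + img.shape[2:], dtype=img.dtype)
    for v in range(out_h):
        t = v / (out_h - 1)
        left = tl + t * (bl - tl)        # point on the left edge
        right = tr + t * (br - tr)       # point on the right edge
        for u in range(out_w):
            s = u / (out_w - 1)
            x, y = left + s * (right - left)
            yi = min(max(int(round(y)), 0), h - 1)
            xi = min(max(int(round(x)), 0), w - 1)
            out[v, u] = img[yi, xi]
    return out
```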

Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.7
    • /
    • pp.1807-1822
    • /
    • 2023
  • Innovation and the rapidly increasing functionality of user-friendly smartphones have encouraged people to capture picturesque image macros at work or while traveling. Formal signboards are placed for marketing purposes and are enriched with text to attract people. Extracting and recognizing text from natural images is an emerging research issue that needs consideration. Compared to conventional optical character recognition (OCR), the complex backgrounds, implicit noise, lighting, and orientation of these scene-text photos make the problem more difficult, and Arabic scene-text extraction and recognition adds further complications. The method described in this paper uses a two-phase methodology to extract Arabic text with word-boundary awareness from scene images with varying text orientations. The first stage uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS) followed by traditional two-layer neural networks for recognition. This study shows how an Arabic training and synthetic dataset can be created to represent the superimposed text in different scene images. For this purpose, a dataset of 10K cropped images containing Arabic text was created for the detection phase and a 127K Arabic character dataset for the recognition phase; the phase-1 labels were generated from an Arabic corpus of 15K quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach, which offers high flexibility in handling complex Arabic scene-text images such as texts that are arbitrarily oriented, curved, or deformed, is used to detect these texts. Our experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe that future researchers can further improve scene-text processing in any language by enhancing the functionality of the VGG-16 based model using neural networks. A sketch of the recognition-phase classifier follows this entry.
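The recognition phase is described as a traditional two-layer neural network applied to segmented character crops. The sketch below shows only that classifier's forward pass; the input size, hidden size, number of character classes, and the ReLU/softmax choices are placeholders, and the convolutional autoencoder detection stage and ACS segmentation are not shown.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class TwoLayerCharClassifier:
    """A plain two-layer (one hidden layer) network of the kind mentioned
    for the recognition phase. Input size, hidden size, and the number of
    Arabic character classes are placeholder values."""
    def __init__(self, n_in=1024, n_hidden=256, n_classes=28, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.01, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.01, (n_hidden, n_classes))
        self.b2 = np.zeros(n_classes)

    def predict(self, x):
        """x: (batch, n_in) array of flattened, segmented character crops."""
        h = np.maximum(0.0, x @ self.W1 + self.b1)   # ReLU hidden layer
        return softmax(h @ self.W2 + self.b2).argmax(axis=1)
```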

Design and Implementation of Personal Information Identification and Masking System Based on Image Recognition (이미지 인식 기반 향상된 개인정보 식별 및 마스킹 시스템 설계 및 구현)

  • Park, Seok-Cheon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.17 no.5
    • /
    • pp.1-8
    • /
    • 2017
  • Recently, with the development of ICT technologies such as cloud and mobile, image sharing through social networks has been increasing rapidly. These images contain personal information, so personal information leakage accidents may occur. As a result, studies are underway to recognize and mask personal information in images. However, optical character recognition, which recognizes personal information in images, varies greatly depending on brightness, contrast, and distortion, and its Korean recognition is insufficient. Therefore, in this paper, we design and implement a personal information identification and masking system based on image recognition, applying deep learning with a CNN algorithm on top of the optical character recognition approach. The proposed system and plain optical character recognition are then compared on the same images in terms of the personal information recognition rate, and the face recognition rate of the proposed system is measured. Test results show that the personal information recognition rate of the proposed system is 32.7% higher than that of optical character recognition, and its face recognition rate is 86.6%. A sketch of the region-masking step follows this entry.
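The masking stage can be sketched independently of the recognition model: once the text or face detector returns bounding boxes, the corresponding regions are blacked out or pixelated. The box format and the two masking modes below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def mask_regions(image, boxes, method="fill"):
    """Mask detected personal-information regions.
    boxes are (x, y, w, h) rectangles assumed to come from the text / face
    detection stage (not shown here). 'fill' blacks out the region;
    'pixelate' replaces it with coarse 8x8 block averages."""
    out = image.copy()
    for x, y, w, h in boxes:
        region = out[y:y + h, x:x + w]
        if method == "fill":
            region[...] = 0
        else:
            for by in range(0, h, 8):
                for bx in range(0, w, 8):
                    block = region[by:by + 8, bx:bx + 8]
                    block[...] = block.mean(axis=(0, 1), keepdims=True)
    return out
```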

Meter Numeric Character Recognition Using Illumination Normalization and Hybrid Classifier (조명 정규화 및 하이브리드 분류기를 이용한 계량기 숫자 인식)

  • Oh, Hangul;Cho, Seongwon;Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.1
    • /
    • pp.71-77
    • /
    • 2014
  • In this paper, we propose an improved numeric character recognition method that performs well under low-illumination and shaded-illumination conditions. The LN (Local Normalization) preprocessing method is used to enhance the quality of low-illumination and shaded images. The reading area is detected using line-segment information extracted from the illumination-normalized meter images, and a three-phase procedure is then performed to segment the numeric characters in the reading area. Finally, an efficient hybrid classifier is used to classify the segmented numeric characters. The proposed numeric character classifier combines a multi-layered feedforward neural network with a template matching module, and robust heuristic rules are applied to classify the numeric characters. Experiments were conducted on a meter image database built from various kinds of meters under low-illumination and shaded-illumination conditions. The experimental results indicate the superiority of the proposed numeric character recognition method. A sketch of the local normalization step follows this entry.
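A common form of local normalization, which may differ in detail from the paper's LN step, subtracts a local mean and divides by a local standard deviation so that dark and shaded areas are brought to a comparable contrast level. The window size below is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_normalization(gray, window=15, eps=1e-6):
    """Local normalization preprocessing: subtract the local mean and
    divide by the local standard deviation estimated over a
    window x window neighborhood, flattening uneven illumination."""
    g = gray.astype(float)
    local_mean = uniform_filter(g, size=window)
    local_sq_mean = uniform_filter(g * g, size=window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    return (g - local_mean) / (local_std + eps)
```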

Character Recognition Algorithm using Accumulation Mask

  • Yoo, Suk Won
    • International Journal of Advanced Culture Technology
    • /
    • v.6 no.2
    • /
    • pp.123-128
    • /
    • 2018
  • The learning data consist of 100 characters in 10 different fonts, and the test data consist of 10 characters in a new font that is not used in the learning data. In order to account for the variety of the learning data across fonts, 10 learning masks are constructed by accumulating, pixel by pixel, the values of the same character in the 10 different fonts. This process eliminates minute differences between characters in different fonts. After finding the maximum value of each learning mask, the test data are scaled by multiplying them by these maximum values. The algorithm then calculates the sum of differences between corresponding pixel values of the scaled test data and each learning mask. The learning mask with the smallest of these 10 sums is selected as the recognition result for the test data. The proposed algorithm can recognize various fonts, the learning data can easily be modified by adding a new font, the recognition process is easy to understand, and the algorithm produces satisfactory character recognition results. A sketch of the accumulation-mask matching follows this entry.
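The accumulation-mask procedure is concrete enough to sketch directly from the abstract: each mask is the pixel-wise sum of the same character in different fonts, the test image is scaled by each mask's maximum, and the mask with the smallest sum of absolute differences wins. The binary input format and common image size are assumptions.

```python
import numpy as np

def build_accumulation_masks(samples):
    """samples: dict mapping a character label to a list of binary images
    of that character in different fonts. The mask for each character is
    the pixel-wise sum (accumulation) of its font variants."""
    return {label: np.sum(np.stack(imgs), axis=0).astype(float)
            for label, imgs in samples.items()}

def recognize(test_img, masks):
    """Scale the binary test image by each mask's maximum value, then pick
    the mask with the smallest sum of absolute pixel differences."""
    best_label, best_score = None, np.inf
    for label, mask in masks.items():
        scaled = test_img.astype(float) * mask.max()
        score = np.abs(scaled - mask).sum()
        if score < best_score:
            best_label, best_score = label, score
    return best_label
```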

Character Extraction Algorithm from Scenery Images by Parallel and Local Processing

  • Iwakata, Satoshi;Ajioka, Yoshiaki;Hagiwara, Masafumi
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.54-57
    • /
    • 2003
  • In this paper, we propose an algorithm for extracting character regions from scenery images. The algorithm works under a severe constraint: each pixel of the result image must be derived only from information in its neighboring pixels. This constraint is very important for low-cost devices such as mobile cameras. The proposed algorithm is formulated as local and parallel image processing. It was tested on 100 scenery images, and the results show that it can extract character regions at a rate of more than 90%. The results were obtained without learning any template images, so the algorithm is very practical. A sketch of a purely local per-pixel operation follows this entry.

  • PDF
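The locality constraint, that each output pixel may depend only on its neighborhood, can be illustrated with a toy per-pixel rule. The local contrast measure and threshold below are assumptions chosen only to show the structure of such a computation; they are not the authors' extraction operator.

```python
import numpy as np

def local_character_candidates(gray, radius=2, contrast_thresh=40):
    """Mark a pixel as a character candidate when the contrast (max - min)
    inside its (2*radius+1)^2 neighborhood exceeds a threshold. Each output
    pixel depends only on its neighbors, so all pixels can be computed
    independently and in parallel."""
    h, w = gray.shape
    padded = np.pad(gray.astype(int), radius, mode="edge")
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            if window.max() - window.min() > contrast_thresh:
                out[y, x] = 1
    return out
```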

A study for improvement of Recognition velocity of Korean Character using Neural Oscillator (신경 진동자를 이용한 한글 문자의 인식 속도의 개선에 관한 연구)

  • Kwon, Yong-Bum;Lee, Joon-Tark
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2004.04a
    • /
    • pp.491-494
    • /
    • 2004
  • A neural oscillator can be applied to oscillatory systems such as image recognition, voice recognition, estimation of weather fluctuations, and analysis of geological fluctuations in nature; principally, it is often used for pattern recognition of image information. Conventional BPL (Back-Propagation Learning) and MLNN (Multi-Layer Neural Network) approaches are not suitable for oscillatory systems because these algorithms complicate the learning structure, involve tedious procedures, and suffer from slow convergence. However, these problems can easily be solved by using the synchrony characteristic of a neural oscillator with a PLL (Phase-Locked Loop) function together with a simple Hebbian learning rule. The recognition speed for Korean characters can also be improved by using the neural oscillator's learning acceleration factor η_ij. A sketch of a Hebbian weight update follows this entry.

  • PDF
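The simple Hebbian learning rule mentioned in the abstract strengthens a weight whenever its presynaptic and postsynaptic units are active together, and the abstract's learning acceleration factor η_ij suggests a per-connection learning rate. The sketch below shows one such update; the neural oscillator and PLL synchrony dynamics themselves are not modeled.

```python
import numpy as np

def hebbian_update(W, pre, post, eta):
    """One Hebbian learning step: W[i, j] connects presynaptic unit j to
    postsynaptic unit i, and eta is a per-connection learning-rate matrix
    standing in for the learning acceleration factor eta_ij."""
    return W + eta * np.outer(post, pre)

# Toy usage: 3 postsynaptic and 4 presynaptic units.
W = np.zeros((3, 4))
pre = np.array([1.0, 0.0, 1.0, 0.0])
post = np.array([0.0, 1.0, 1.0])
eta = np.full((3, 4), 0.1)
W = hebbian_update(W, pre, post, eta)
```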