• Title/Summary/Keyword: Text image


Character Segmentation and Recognition Algorithm for Various Text Region Images (다양한 문자열영상의 개별문자분리 및 인식 알고리즘)

  • Koo, Keun-Hwi;Choi, Sung-Hoo;Yun, Jong-Pil;Choi, Jong-Hyun;Kim, Sang-Woo
    • The Transactions of The Korean Institute of Electrical Engineers / v.58 no.4 / pp.806-816 / 2009
  • A character recognition system consists of four steps: text localization, text segmentation, character segmentation, and recognition. Character segmentation is important and difficult because of noise, illumination, and similar factors, so a high recognition rate requires a segmentation algorithm with good performance. Many character segmentation algorithms have been developed, and much recent research addresses the segmentation of touching or overlapping characters. Most of these algorithms, however, cannot be applied to the text regions of management numbers marked on slabs in steel images, because those regions are irregular: characters touch because of strong illumination or nozzle trouble in the marking machine, and parts of characters may be lost, which makes a high success rate difficult to achieve across such varied cases. This paper describes a new character segmentation algorithm for recognizing the slab management number marked on the slab in a steel image. A crucial pre-processing step converts the gray image to a binary image without losing characters or merging them. In this binary image, non-touching characters are separated simply by a vertical projection profile. For touching characters, a combined profile is used to find candidate boundary points, and the true character boundary is then decided by a recognition-based method. In the recognition step, noise is removed from the character images and each character image is recognized. The proposed algorithm is effective for character segmentation and recognition of various text regions on slabs in steel images.
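
The vertical projection profile step described in this abstract can be illustrated with a minimal sketch (not the authors' implementation); the binary image format, the text-pixel convention, and the minimum character width are assumptions.

```python
import numpy as np

def segment_by_vertical_projection(binary, min_width=1):
    """Split a binarized text-line image (text pixels = 1) into character
    regions wherever the column-wise projection drops to zero.

    Illustrative sketch only: touching characters leave no zero-valued gap
    between them and would remain merged, needing further processing.
    """
    profile = binary.sum(axis=0)           # number of text pixels per column
    in_char, start, boxes = False, 0, []
    for x, count in enumerate(profile):
        if count > 0 and not in_char:      # entering a character region
            in_char, start = True, x
        elif count == 0 and in_char:       # leaving a character region
            in_char = False
            if x - start >= min_width:
                boxes.append((start, x))
    if in_char:
        boxes.append((start, len(profile)))
    return [binary[:, s:e] for s, e in boxes]
```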

Quantized DCT Coefficient Category Address Encryption for JPEG Image

  • Li, Shanshan;Zhang, Yuanyuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.4 / pp.1790-1806 / 2016
  • Digital image encryption is widely used for image data security. The JPEG standard compresses images with great performance in reducing file size, so encrypting a JPEG image should preserve both the quality of the original image and its reduced size. This paper proposes a JPEG image encryption scheme based on intra-category scrambling of the quantized DC and non-zero AC coefficients. Instead of encrypting coefficient values, the address of each coefficient is encrypted to obtain the ciphertext address, and the 8*8 blocks are then shuffled. Chaotic iteration is employed to generate the chaotic sequences used for address scrambling and block shuffling. Simulation analysis shows that the proposed scheme is resistant to common attacks. Moreover, the proposed method keeps the file size of the encrypted image within an acceptable range compared with the plaintext. To enlarge the possible ciphertext space and improve resistance to sophisticated attacks, several additional procedures are further developed; contrast experiments verify that these procedures refine the proposed scheme and achieve significant improvements.
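
A minimal sketch of chaotic-sequence-driven address scrambling, assuming a logistic map as the chaotic generator (the abstract does not name a specific map); the map parameter and key value are illustrative.

```python
import numpy as np

def logistic_sequence(x0, n, r=3.99):
    """Generate a chaotic sequence with the logistic map (an assumed choice)."""
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def chaotic_permutation(n, key=0.61803):
    """Turn the chaotic sequence into a permutation by ranking its values."""
    return np.argsort(logistic_sequence(key, n))

def scramble_addresses(values, key=0.61803):
    """Scramble the positions (addresses) of coefficients, not their values."""
    perm = chaotic_permutation(len(values), key)
    return values[perm], perm

def unscramble_addresses(scrambled, perm):
    """Invert the address scrambling given the same permutation."""
    out = np.empty_like(scrambled)
    out[perm] = scrambled
    return out
```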

Design of Image Generation System for DCGAN-Based Kids' Book Text

  • Cho, Jaehyeon;Moon, Nammee
    • Journal of Information Processing Systems / v.16 no.6 / pp.1437-1446 / 2020
  • For the last few years, smart devices have come to occupy an essential place in children's lives, giving them access to a variety of language activities and books, and various studies are being conducted on using smart devices for education. Our study extracts images and text from kids' books with smart devices and matches the extracted images and text to create new images that are not represented in these books. The proposed system enables smart devices to be used as educational media for children. A deep convolutional generative adversarial network (DCGAN) is used to generate new images. Training the DCGAN involves three steps. First, 1,164 ImageNet images belonging to 11 titles are learned. Second, Tesseract, an optical character recognition engine, is used to extract images and text from kids' books, and the text is classified with a morpheme analyzer. Third, the classified word classes are matched with the latent vector of the image. The trained DCGAN then creates an image associated with the text.
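
A minimal sketch of a standard DCGAN generator of the kind this abstract relies on, mapping a latent vector to a 64x64 RGB image (PyTorch); the layer widths and latent dimension are assumptions, and the matching of word classes to the latent vector described above is not shown.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Minimal DCGAN-style generator: latent vector -> 64x64 RGB image.
    Layer sizes are illustrative; the paper's exact architecture is not given."""
    def __init__(self, latent_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),          # 4x4
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),          # 8x8
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),          # 16x16
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),              # 32x32
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                                        # 64x64 RGB
        )

    def forward(self, z):
        # z has shape (batch, latent_dim, 1, 1)
        return self.net(z)
```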

Flame Diagnosis using Image Processing Technique (영상처리 기술을 이용한 연소상태 진단)

  • Lee, Tae-Young;Kim, Song-Hwan;Lee, Sang-Ryong
    • Journal of the Korean Society for Precision Engineering / v.16 no.7 / pp.196-202 / 1999
  • A recent trend has changed the criterion for evaluating burners, as environmental problems have become a global issue. From the standpoint of efficient operation, a burner is evaluated as better when its thermal efficiency is higher and the oxygen content of its exhaust gas is lower; from the environmental standpoint, a burner must satisfy $NO_{X}$ and CO limits. Consequently, a 'good burner' is one whose thermal efficiency is high under the constraint of $NO_{X}$ and CO concentration. To make existing burners satisfy this recent criterion, it is highly recommended to develop a feedback control scheme whose output is the concentration of $NO_{X}$ and CO. This paper describes the development of a real-time flame diagnosis technique that quantitatively evaluates and diagnoses the combustion state, such as the concentration of exhaust-gas components and the stability of the flame. The study measures the wavelengths of luminescence from the chemical reactions in the flame with an optical measuring apparatus and derives, by image processing, the correlation between the luminescence and the concentration of exhaust-gas components.

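As a rough illustration of the correlation analysis described above, the sketch below relates a simple per-frame luminescence intensity feature to a measured exhaust-gas concentration; the feature, the wavelength-band mask, and the use of Pearson correlation are assumptions rather than the authors' method.

```python
import numpy as np

def mean_band_intensity(frames, band_mask):
    """Average luminescence intensity of each flame frame inside a
    wavelength-band region of interest (band_mask is a boolean image)."""
    return np.array([frame[band_mask].mean() for frame in frames])

def correlate_with_gas(frames, band_mask, gas_concentration):
    """Pearson correlation between the per-frame band intensity and the
    measured exhaust-gas concentration over the same time span."""
    intensity = mean_band_intensity(frames, band_mask)
    return np.corrcoef(intensity, gas_concentration)[0, 1]
```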

Text line separation in handwritten address image using partial projection technique (부분 투영기법을 이용한 필기체 주소 영상에서의 문자열 분리)

  • 정선화;남윤석
    • Proceedings of the IEEK Conference / 2003.11a / pp.31-34 / 2003
  • In this paper, we describe a method for separating text lines in handwritten Korean address images. The most notable feature of the proposed method is a modified projection technique, named the partial projection technique. A projection-based text-line separation method that projects the whole address image horizontally to find split points fails when the image is slightly skewed or when vertically neighboring text lines overlap. To overcome this problem, we introduce a partial projection technique that splits an address image into a few partial images of equal width and then projects each of them horizontally. An experiment on 989 handwritten Korean address images extracted from live mail shows the superiority of the proposed method: the correct text-line separation rate for the test images was about 91.5%.

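A minimal sketch of the partial projection idea: the binarized address image is split into vertical strips of equal width and each strip is projected horizontally to find candidate line-split rows. The strip count, gap threshold, and text-pixel convention are assumptions.

```python
import numpy as np

def find_gap_centers(profile, min_gap_rows=2):
    """Return the centre row of every run of empty rows (profile == 0)
    that is at least min_gap_rows long; these are line-split candidates."""
    centers, run_start = [], None
    for y, count in enumerate(profile):
        if count == 0 and run_start is None:
            run_start = y
        elif count > 0 and run_start is not None:
            if y - run_start >= min_gap_rows:
                centers.append((run_start + y - 1) // 2)
            run_start = None
    if run_start is not None and len(profile) - run_start >= min_gap_rows:
        centers.append((run_start + len(profile) - 1) // 2)
    return centers

def partial_projection_splits(binary, n_strips=4, min_gap_rows=2):
    """Split a binarized address image (text pixels = 1) into n_strips
    vertical strips of equal width and project each strip horizontally,
    so small skew or local overlap in one strip does not hide the gaps
    between text lines in the other strips."""
    strips = np.array_split(binary, n_strips, axis=1)
    return [find_gap_centers(strip.sum(axis=1), min_gap_rows) for strip in strips]
```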

Spam Image Detection Model based on Deep Learning for Improving Spam Filter

  • Seong-Guk Nam;Dong-Gun Lee;Yeong-Seok Seo
    • Journal of Information Processing Systems / v.19 no.3 / pp.289-301 / 2023
  • Owing to the development and dissemination of modern technology, anyone can easily communicate through services such as social network services (SNS) on a personal computer (PC) or smartphone. These technologies have brought many benefits, but also problems, one of which is spam: unwanted or unsolicited information received by unspecified users. Continuous exposure to such information is inconvenient for service users, and if filtering is not performed correctly, the quality of the service deteriorates. Recently, spammers (those who send spam) have been creating more malicious spam by distorting the image of spam text so that optical character recognition (OCR)-based spam filters cannot easily detect it. Fortunately, the level of transformation of image spam circulating on social media is not yet serious; in the mail system, however, spammers have applied various modifications to spam images to neutralize OCR, so the same situation can arise with spam images on social media. Spammers interfere with OCR reading through geometric transformations such as image distortion, noise addition, and blurring. Various techniques have been studied to filter image spam, but methods of evading identification using obfuscated images are developing at the same time. In this paper, we propose a deep-learning-based spam image detection model that improves on existing OCR-based detection performance and compensates for its vulnerabilities. The proposed model extracts text features and image features from the image using four sub-models. First, the OCR-based text model extracts text-related features, whether the image contains spam words, and a word-embedding vector from the input image. Then, the convolutional neural network-based image model extracts image obfuscation features and an image feature vector from the input image. The final spam image classifier determines from the extracted features whether the input is a spam image. In an F1-score evaluation, the proposed model performed about 14 points higher than OCR-based spam image detection.
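
A minimal sketch of the fusion step implied by this abstract, concatenating a text-branch feature vector with an image-branch feature vector before a small classification head (PyTorch); the feature dimensions and the two-layer head are assumptions, and the OCR and CNN sub-models themselves are not shown.

```python
import torch
import torch.nn as nn

class SpamImageClassifier(nn.Module):
    """Fuse features from an OCR-based text branch and a CNN-based image
    branch, then produce a spam / not-spam probability."""
    def __init__(self, text_dim=300, image_dim=512, hidden=128):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),          # single spam logit
        )

    def forward(self, text_features, image_features):
        # Concatenate the two feature vectors along the feature dimension.
        fused = torch.cat([text_features, image_features], dim=1)
        return torch.sigmoid(self.head(fused))
```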

A general-purpose model capable of image captioning in Korean and English and a method to generate text suitable for the purpose (한국어 및 영어 이미지 캡션이 가능한 범용적 모델 및 목적에 맞는 텍스트를 생성해주는 기법)

  • Cho, Su Hyun;Oh, Hayoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.8 / pp.1111-1120 / 2022
  • Image captioning is the task of viewing an image and describing it in language; it is an important problem that can be solved by bringing together the two areas of image processing and natural language processing. By automatically recognizing and describing images in text, images can be converted into text and then into speech, which helps visually impaired people understand their surroundings, and supports important applications such as image search, art therapy, sports commentary, and real-time traffic information commentary. So far, image captioning research has focused solely on recognizing images and converting them to text; for practical use, however, the various environments found in reality must be considered, and image descriptions must be provided to suit the intended purpose. In this work, we present a general-purpose model capable of image captioning in Korean and English, together with techniques for generating text suited to the purpose.

Slab Region Localization for Text Extraction using SIFT Features (문자열 검출을 위한 슬라브 영역 추정)

  • Choi, Jong-Hyun;Choi, Sung-Hoo;Yun, Jong-Pil;Koo, Keun-Hwi;Kim, Sang-Woo
    • The Transactions of The Korean Institute of Electrical Engineers / v.58 no.5 / pp.1025-1034 / 2009
  • In a steel-making production line, each steel slab is given a unique identification number, the slab management number (SMN), which carries information about the use of the slab. Identification of SMNs has been done by humans for several years, but this is expensive, inaccurate, and a heavy burden on the workers; consequently, an automatic recognition system is desirable to improve efficiency. Generally, a recognition system consists of text localization, text extraction, character segmentation, and character recognition, and for exact SMN identification every stage must succeed. Text localization in particular is an important and difficult stage: because of the many text-like patterns in a complex background and the high fuzziness between the slab and the background, extracting the text region directly is difficult. If the slab region containing the SMN can be detected precisely, the text localization algorithm can be developed with a simpler method and the processing time of the overall recognition system reduced. This paper describes slab region localization using SIFT (Scale Invariant Feature Transform) features in the image. First, the SIFT algorithm is applied to the captured background and slab images, and the features of the two images are matched with a nearest neighbor (NN) algorithm. Because the correct matching rate can be low, the geometric locations of the matched feature points are used to remove incorrect matches. Finally, a search rectangle method is applied to the correct matches to determine the top and side boundaries of the slab region, which reduces the search region for extracting the SMN from the slab image. In most cases the search region for text extraction is fixed heuristically [1][2]; the proposed algorithm is more analytic than other algorithms, because the search region is not fixed and the slab region is searched in the whole image. Experimental results show that the proposed algorithm performs well.
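
A minimal sketch of the SIFT detection and nearest-neighbour matching stage using OpenCV; Lowe's ratio test is used here as an assumed way to discard weak matches, whereas the paper filters incorrect matches by the geometric locations of the matched points.

```python
import cv2
import numpy as np

def match_sift_features(background, slab, ratio=0.75):
    """Detect SIFT keypoints in the background and slab images and keep
    nearest-neighbour matches that pass the ratio test. Returns the
    matched point coordinates in each image."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(background, None)
    kp2, des2 = sift.detectAndCompute(slab, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)

    good = []
    for pair in raw:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    pts_bg = np.float32([kp1[m.queryIdx].pt for m in good])
    pts_slab = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts_bg, pts_slab
```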

Text-based Image Indexing and Retrieval using Formal Concept Analysis

  • Ahmad, Imran Shafiq
    • KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.3 / pp.150-170 / 2008
  • In recent years, the main focus of research on image retrieval techniques has been content-based image retrieval. Text-based image retrieval schemes, on the other hand, provide semantic support and efficient retrieval of matching images. In this paper, we propose a new image indexing and retrieval technique based on Formal Concept Analysis (FCA). The proposed scheme uses keywords and textual annotations and provides semantic support with fast retrieval of images. Retrieval efficiency in this scheme is independent of the number of images in the database and depends only on the number of attributes. The scheme provides dynamic support for adding new images to the database and can be adapted to find images with any number of matching attributes.
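
A minimal sketch of attribute-based retrieval in the FCA spirit: images are objects, annotation keywords are attributes, and a query returns the extent of the concept generated by the queried attributes. The toy context and image identifiers are hypothetical.

```python
from typing import Dict, Set

def extent(context: Dict[str, Set[str]], query_attrs: Set[str]) -> Set[str]:
    """Return all images whose annotation set contains every queried keyword,
    i.e. the extent of the concept generated by query_attrs.
    `context` maps image id -> set of keyword attributes."""
    return {img for img, attrs in context.items() if query_attrs <= attrs}

# Hypothetical toy context for illustration
context = {
    "img001": {"beach", "sunset", "people"},
    "img002": {"beach", "boat"},
    "img003": {"sunset", "mountain"},
}
print(extent(context, {"beach", "sunset"}))   # {'img001'}
```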

Text Line Segmentation of Handwritten Documents by Area Mapping

  • Boragule, Abhijeet;Lee, GueeSang
    • Smart Media Journal / v.4 no.3 / pp.44-49 / 2015
  • Text line segmentation is a preprocessing step in OCR that can significantly influence the accuracy of document analysis applications. This paper proposes a novel methodology for the text line segmentation of handwritten documents. First, the average width of the connected components is used to form a 1-D Gaussian kernel, and a smoothing operation is applied to the input binary image; adaptive binarization of the smoothed image then forms the text lines. The segmentation method involves two stages: first, the large connected components are labelled as unique text lines using text-line area mapping; second, the segmentation is refined using the Euclidean distance between each text line and the small connected components. The group of uniquely labelled text candidates achieves promising segmentation results, and the proposed approach works well on Korean and English handwritten documents captured with a camera.
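
A minimal sketch of the smoothing-and-rebinarization step described above: the binary image is smoothed along the writing direction with a 1-D Gaussian whose scale is tied to the average connected-component width, then thresholded so that characters on the same line merge into line blobs. The sigma rule and the fixed threshold are assumptions; the paper uses adaptive binarization.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_and_rebinarize(binary, avg_cc_width, threshold=0.5):
    """Smooth a binary document image (text pixels = 1) row-wise with a
    1-D Gaussian kernel scaled by the average connected-component width,
    then re-binarize so characters on a line merge into line blobs."""
    sigma = max(avg_cc_width / 2.0, 1.0)               # assumed scale rule
    smoothed = gaussian_filter1d(binary.astype(float), sigma=sigma, axis=1)
    return (smoothed > threshold).astype(np.uint8)     # assumed fixed threshold
```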