• Title/Summary/Keyword: Text-to-Image

889 search results

The Effectiveness of High-level Text Features in SOM-based Web Image Clustering (SOM 기반 웹 이미지 분류에서 고수준 텍스트 특징들의 효과)

  • Cho Soo-Sun
    • The KIPS Transactions:PartB
    • /
    • v.13B no.2 s.105
    • /
    • pp.121-126
    • /
    • 2006
  • In this paper, we propose an approach to increase the power of clustering Web images by using high-level semantic features derived from text information relevant to Web images as well as low-level visual features of the images themselves. These high-level text features can be obtained from image URLs and file names, page titles, hyperlinks, and surrounding text. As the clustering engine, the self-organizing map (SOM) proposed by Kohonen is used. In SOM-based clustering using high-level text features and low-level visual features, 200 images from 10 categories are divided effectively into suitable clusters. To evaluate clustering power, we propose simple but novel measures indicating the degree to which images from the same category are scattered across clusters and the degree to which they accumulate in the same cluster. The experimental results show that the high-level text features are more useful than the low-level visual features in SOM-based Web image clustering.
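
A minimal NumPy sketch of the SOM clustering step described above, assuming the text-derived and visual features have already been extracted and concatenated into one vector per image; the map size, feature dimensions, and decay schedule are illustrative choices, not the paper's settings.

```python
import numpy as np

def train_som(data, rows=4, cols=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best-matching unit (BMU): the node whose weight is closest to x.
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Decay learning rate and neighborhood radius over time.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            # Gaussian neighborhood pulls nearby map nodes toward x.
            d2 = np.sum((grid - np.array(bmu)) ** 2, axis=2)
            h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
            step += 1
    return weights

def assign_clusters(data, weights):
    # Each image is assigned to the map node (cluster) of its BMU.
    flat = weights.reshape(-1, weights.shape[-1])
    return np.argmin(np.linalg.norm(flat[None] - data[:, None], axis=2), axis=1)

# Toy usage: 200 "images" with 32-dim visual + 16-dim text features (random here).
features = np.random.rand(200, 48)
som = train_som(features)
labels = assign_clusters(features, som)
```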

Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.7
    • /
    • pp.1807-1822
    • /
    • 2023
  • Innovation and the rapidly increasing functionality of user-friendly smartphones have encouraged shutterbugs to capture picturesque image macros at work or during travel. Formal signboards are placed with marketing objectives and are enriched with text to attract people. Extracting and recognizing text from natural images is an emerging research issue and needs consideration. Compared to conventional optical character recognition (OCR), the complex backgrounds, implicit noise, lighting, and orientation of these scenic text photos make the problem more difficult. Scene-text extraction and recognition for the Arabic language adds further complications and difficulties. The method described in this paper uses a two-phase methodology to extract Arabic text with word-boundary awareness from scenic images with varying text orientations. The first stage uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS), followed by traditional two-layer neural networks for recognition. This study presents how an Arabic training and synthetic dataset can be created to exemplify superimposed text in different scene images. For this purpose, a dataset of 10K cropped images in which Arabic text was found was created for the detection phase, and a dataset of 127K Arabic characters for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15K quotes and sentences. The study uses the Arabic Word Awareness Region Detection (AWARD) approach, which offers high flexibility in identifying complex Arabic text scene images, such as texts that are arbitrarily oriented, curved, or deformed. Our experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe that in the future researchers will excel in the field of image processing of text images, improving or reducing noise in scene images in any language by enhancing the functionality of the VGG-16 based model using neural networks.
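
A rough PyTorch sketch of the first-phase idea, a convolutional autoencoder that maps scene-image crops to soft text-region masks; the layer sizes, crop size, and loss are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class TextRegionAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 32x32 -> 64x64 mask
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Toy forward pass on a batch of 64x64 RGB crops.
model = TextRegionAutoencoder()
crops = torch.rand(8, 3, 64, 64)
masks = model(crops)                                   # (8, 1, 64, 64) soft text-region masks
loss = nn.BCELoss()(masks, torch.zeros_like(masks))    # placeholder target for illustration
```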

A Comparative Study on OCR using Super-Resolution for Small Fonts

  • Cho, Wooyeong;Kwon, Juwon;Kwon, Soonchu;Yoo, Jisang
    • International Journal of Advanced Smart Convergence
    • /
    • v.8 no.3
    • /
    • pp.95-101
    • /
    • 2019
  • Recently, there have been many issues related to text recognition using Tesseract. One of these issues is that recognition accuracy drops significantly for smaller fonts. Tesseract extracts text by creating an outline with direction in the image; by searching the Tesseract database, template matching against characters with similar feature points is used to select the character with the lowest error. Because of the poor text extraction, the recognition accuracy is lowered. In this paper, we compared text recognition accuracy after applying various super-resolution methods to smaller text images and experimented with how recognition accuracy varies across image sizes. To recognize small Korean text images, we used super-resolution algorithms based on deep learning models such as SRCNN, ESRCNN, DSRCNN, and DCSCN. The dataset for training and testing consisted of Korean scanned images with a 12pt font size, resized from 0.5 to 0.8 times the original size. The experiment was performed on the x0.5 resized images, and the results showed that DCSCN super-resolution is the most efficient method, reducing the precision error rate by 7.8% and the recall error rate by 8.4%. The experimental results demonstrate that the accuracy of text recognition for smaller Korean fonts can be improved by adding super-resolution methods to the OCR preprocessing module.
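
The pipeline idea can be sketched as follows: super-resolve the small-font image with an SRCNN-style network, then pass the result to Tesseract. The sketch below uses the classic 9-1-5 SRCNN layout with untrained placeholder weights; pytesseract and a Korean language pack are assumed for the commented OCR step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 1), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, x, scale=2):
        # Classic SRCNN first upsamples bicubically, then refines details.
        x = F.interpolate(x, scale_factor=scale, mode="bicubic", align_corners=False)
        return self.body(x)

# Usage sketch: super-resolve, then OCR (pytesseract + Korean traineddata assumed).
# from PIL import Image
# import numpy as np, pytesseract
# img = Image.open("small_korean_text.png").convert("L")
# x = torch.from_numpy(np.asarray(img, dtype="float32") / 255.0)[None, None]
# sr = SRCNN()(x).clamp(0, 1)[0, 0].mul(255).byte().numpy()
# text = pytesseract.image_to_string(Image.fromarray(sr), lang="kor")
```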

Implementation and Evaluation of an Integrated Viewer for Displaying Text and TIFF Image Materials in Internet Environments (인터넷상에서 텍스트와 TIFF 이미지 자료 디스플레이를 위한 뷰어 구현 및 평가)

  • 최흥식
    • Journal of the Korean Society for Information Management
    • /
    • v.17 no.1
    • /
    • pp.67-87
    • /
    • 2000
  • The purpose of the study is to develop an integrated viewer which can display both text and image files in an Internet environment. Up to now, most viewers for full-text databases have been able to display documents only as images through image or graphic viewers. The newly developed system can compress document files from commercial word processors (e.g., 한글TM, WordTM, ExcelTM, PowerpointTM, HunminJungumTM, ArirangTM, CADTM) as well as conventional TIFF image files into a smaller size, convert them into the DVI (DeVice Independent) file format, and display them on the computer screen. The IDoc Viewer was evaluated by a user group consisting of 5 system developers, 5 librarians, and 10 end-users, and was rated good or excellent on 20 out of 26 checklist items.
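
Purely as an illustration of the display problem such a viewer addresses (not the IDoc Viewer itself), a short Pillow sketch that converts each page of a multi-page TIFF to PNG so a browser can render it natively; the file names are hypothetical.

```python
from PIL import Image, ImageSequence

def tiff_pages_to_png(tiff_path, out_prefix="page"):
    """Convert every page of a multi-page TIFF into a separate PNG file."""
    paths = []
    with Image.open(tiff_path) as tif:
        for i, page in enumerate(ImageSequence.Iterator(tif)):
            out = f"{out_prefix}_{i:03d}.png"
            page.convert("RGB").save(out)   # browsers display PNG without a plug-in
            paths.append(out)
    return paths

# pages = tiff_pages_to_png("scanned_document.tif")
```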


Lossless image compression using subband decomposition and BW transform (대역분할과 BW 변환을 이용한 무손실 영상압축)

  • 윤정오;박영호;황찬식
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.5 no.1
    • /
    • pp.102-107
    • /
    • 2000
  • In general, text compression techniques cannot be used directly for image compression because the models of text and images are different. Recently, a new class of text compression, namely the block-sorting algorithm built around the Burrows-Wheeler transform (BWT), has given excellent results in text compression. However, applying it directly to image compression gives poor results. We therefore propose a simple method to improve lossless image compression performance. The proposed method consists of three steps: the image is decomposed into ten subbands using a symmetric short kernel filter; the resulting subbands are block-sorted with the BWT; and the remaining redundancy is removed with an adaptive arithmetic coder. Experimental results show that the proposed method outperforms lossless JPEG and LZ-based compression (PKZIP).
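
A small Python sketch of the block-sorting idea the paper applies to subband data: a naive Burrows-Wheeler transform followed by move-to-front coding, whose output an adaptive arithmetic coder would then compress; the subband decomposition and the coder itself are omitted.

```python
def bwt(block: bytes, sentinel: int = 0x00) -> bytes:
    """Naive O(n^2 log n) BWT for illustration; real coders use suffix arrays."""
    assert sentinel not in block, "sentinel byte must not occur in the data"
    s = block + bytes([sentinel])
    rotations = sorted(range(len(s)), key=lambda i: s[i:] + s[:i])
    # The BWT output is the last column of the sorted rotation matrix.
    return bytes(s[(i - 1) % len(s)] for i in rotations)

def move_to_front(data: bytes) -> bytes:
    # MTF turns the long runs produced by the BWT into many small symbol values,
    # which an adaptive arithmetic coder compresses efficiently.
    alphabet = list(range(256))
    out = []
    for b in data:
        idx = alphabet.index(b)
        out.append(idx)
        alphabet.insert(0, alphabet.pop(idx))
    return bytes(out)

transformed = move_to_front(bwt(b"banana_bandana"))
```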


Fast Text Line Segmentation Model Based on DCT for Color Image (컬러 영상 위에서 DCT 기반의 빠른 문자 열 구간 분리 모델)

  • Shin, Hyun-Kyung
    • The KIPS Transactions:PartD
    • /
    • v.17D no.6
    • /
    • pp.463-470
    • /
    • 2010
  • We present a very fast and robust method of text line segmentation based on the DCT blocks of a color image, without decompression and binary transformation processes. Using the DC coefficient and three primary AC coefficients from each 8x8 DCT block, we create a gray-scale image reduced in size by a factor of 8x8. To detect and locate the white strips between text lines, we analyze the horizontal and vertical projection profiles of this image and apply a direct Markov model to recover missing white strips by estimating hidden periodicity. Performance results show that our method is 40-100 times faster than traditional methods.
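
A sketch of the reduced-image and projection-profile idea: since the DC coefficient of an 8x8 DCT block is proportional to the block mean, a 1/8-scale gray image can be formed without full decompression, and bright rows of its horizontal projection profile suggest the white strips between text lines. The threshold and the use of block means in place of actual decoded DC coefficients are assumptions; the Markov-model recovery step is not shown.

```python
import numpy as np

def reduced_image_from_blocks(gray, block=8):
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    blocks = gray[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))          # stands in for the per-block DC coefficients

def white_strip_rows(reduced, thresh=0.9):
    # Rows whose mean brightness is near the maximum are candidate gaps between text lines.
    profile = reduced.mean(axis=1)
    return np.where(profile > thresh * profile.max())[0]

gray = np.random.rand(480, 640)              # placeholder for a decoded luminance channel
strips = white_strip_rows(reduced_image_from_blocks(gray))
```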

Improved Text Recognition using Analysis of Illumination Component in Color Images (컬러 영상의 조명성분 분석을 통한 문자인식 성능 향상)

  • Choi, Mi-Young;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.3
    • /
    • pp.131-136
    • /
    • 2007
  • This paper proposes a new approach to eliminating the reflectance component for the detection of text in color images. Color images produced by color printing technology normally contain an illumination component as well as a reflectance component. It is well known that the reflectance component usually hinders detecting and recognizing objects such as text in a scene, since it blurs the overall image. We have developed an approach that efficiently removes the reflectance component while preserving the illumination component. To determine the lighting environment, we decide whether an input image is Normal or Polarized using a histogram of the red component. By removing the reflectance component caused by illumination changes, the blurring of text by light is reduced and the text can be extracted. The experimental results show superior performance even when an image has a complex background. Text detection and recognition performance is influenced by changes in illumination, and our method is robust to images with different illumination conditions.
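
The abstract does not give the exact decomposition, so the sketch below uses a standard homomorphic-style split rather than the authors' method: the illumination component is taken as the low-frequency part of the log image, and the residual reflectance is attenuated before text detection. The Gaussian sigma and attenuation weight are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def keep_illumination(gray_u8, sigma=15.0, reflectance_weight=0.2):
    log_img = np.log1p(gray_u8.astype(np.float64))
    illumination = gaussian_filter(log_img, sigma=sigma)   # smooth, low-frequency part
    reflectance = log_img - illumination                   # high-frequency residual
    # Keep the illumination component, strongly attenuate the reflectance.
    out = np.expm1(illumination + reflectance_weight * reflectance)
    return np.clip(out, 0, 255).astype(np.uint8)

gray = (np.random.rand(240, 320) * 255).astype(np.uint8)   # placeholder image
enhanced = keep_illumination(gray)
```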


A Still Image Compression System with a High Quality Text Compression Capability (고 품질 텍스트 압축 기능을 지원하는 정지영상 압축 시스템)

  • Lee, Je-Myung;Lee, Ho-Suk
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.3
    • /
    • pp.275-302
    • /
    • 2007
  • We propose a novel still image compression system which supports a high quality text compression function. The system segments the text from the image and compresses the text with high quality, achieving a 48:1 compression ratio using context-based adaptive binary arithmetic coding; the arithmetic coding operates on codeblocks within bitplanes. The system accepts input in a segmentation mode and a ROI (Region Of Interest) mode. In segmentation mode, the input image is segmented into a foreground consisting of text and a background consisting of the remaining region. In ROI mode, the input image is represented by a region-of-interest window. The high quality text compression function with a high compression ratio shows that the proposed system is comparable with JPEG2000 products. The system also uses gray coding to improve the compression ratio.
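
A short sketch of the gray-coding step mentioned at the end: gray-coded pixel values differ in only one bit between neighboring intensities, so the extracted bitplanes are smoother and compress better under a binary arithmetic coder (the coder itself is not shown).

```python
import numpy as np

def to_gray_code(img_u8):
    # Standard binary-reflected Gray code: g = b XOR (b >> 1).
    return img_u8 ^ (img_u8 >> 1)

def bitplanes(img_u8):
    # Returns the 8 binary planes, most significant first.
    return [((img_u8 >> b) & 1).astype(np.uint8) for b in range(7, -1, -1)]

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # placeholder image
planes = bitplanes(to_gray_code(img))                       # input to a bitplane coder
```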

An Extracting Text Area Using Adaptive Edge Enhanced MSER in Real World Image (실세계 영상에서 적응적 에지 강화 기반의 MSER을 이용한 글자 영역 추출 기법)

  • Park, Youngmok;Park, Sunhwa;Seo, Yeong Geon
    • Journal of Digital Contents Society
    • /
    • v.17 no.4
    • /
    • pp.219-226
    • /
    • 2016
  • In everyday life, the information that we recognize and use with our eyes is diverse and massive, yet even current technologies improved by artificial intelligence fall far short of human visual processing ability. Nevertheless, many researchers are trying to extract information from everyday scenes, concentrating in particular on recognizing information consisting of text. Extracting text from ordinary documents is already used in some information processing fields, but extracting and recognizing text from real-world images is still far less developed, because real images vary widely in properties such as color, size, and orientation. In this paper, we apply an adaptive edge-enhanced MSER (Maximally Stable Extremal Regions) method to extract text areas and scene text in these diverse environments, and our experiments show that the proposed method performs comparatively well.
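
A hedged OpenCV sketch of the extraction idea, approximating the paper's adaptive edge enhancement with a simple unsharp mask before stock MSER; the filter parameters and the omission of the adaptive step are assumptions.

```python
import cv2

def candidate_text_boxes(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Unsharp masking stands in for the paper's adaptive edge enhancement.
    blurred = cv2.GaussianBlur(gray, (0, 0), 2.0)
    sharpened = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)
    mser = cv2.MSER_create()
    _, bboxes = mser.detectRegions(sharpened)
    # Each bbox is (x, y, w, h); real systems filter these by geometry,
    # stroke width, or color consistency before grouping them into text lines.
    return bboxes

# boxes = candidate_text_boxes(cv2.imread("scene.jpg"))
```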

Ship Number Recognition Method Based on An improved CRNN Model

  • Wenqi Xu;Yuesheng Liu;Ziyang Zhong;Yang Chen;Jinfeng Xia;Yunjie Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.3
    • /
    • pp.740-753
    • /
    • 2023
  • Text recognition in natural scene images is a challenging problem in computer vision. Accurate identification of ship number characters can effectively improve the level of ship traffic management. However, due to blurring caused by motion and text occlusion, the accuracy of ship number recognition struggles to meet practical requirements. To solve these problems, this paper proposes a dual-branch network based on the CRNN recognition network that couples image restoration and character recognition: a CycleGAN module is used for the blur-restoration branch and a Pix2pix module for the character-occlusion branch, and the two are coupled to reduce the impact of image blur and occlusion. The recovered image is then fed into the text recognition branch to improve recognition accuracy. Extensive experiments show that the model is robust and easy to train. Experiments on the CTW dataset and real ship images illustrate that our method obtains more accurate results.
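
A compact PyTorch skeleton of a CRNN recognition branch (CNN feature extractor, BiLSTM, per-time-step classifier for CTC) to illustrate the base architecture; the restoration branches (CycleGAN and Pix2pix) are omitted, and the layer sizes and 36-symbol alphabet are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, num_classes=37):   # 36 symbols + 1 CTC blank (assumed)
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),          # collapse height to 1
        )
        self.rnn = nn.LSTM(256, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, num_classes)

    def forward(self, x):                  # x: (B, 1, 32, W) grayscale crops
        f = self.cnn(x)                    # (B, 256, 1, W/4)
        f = f.squeeze(2).permute(0, 2, 1)  # (B, W/4, 256) sequence of column features
        out, _ = self.rnn(f)               # (B, W/4, 256)
        return self.fc(out)                # per-time-step class logits for a CTC loss

logits = CRNN()(torch.rand(2, 1, 32, 128))   # -> (2, 32, 37)
```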