• Title/Summary/Keyword: Text Region Detection


Automatic Title Detection by Spatial Feature and Projection Profile for Document Images (공간 정보와 투영 프로파일을 이용한 문서 영상에서의 타이틀 영역 추출)

  • Park, Hyo-Jin;Kim, Bo-Ram;Kim, Wook-Hyun
    • Journal of the Institute of Convergence Signal Processing / v.11 no.3 / pp.209-214 / 2010
  • This paper proposes an algorithm for segmentation and title detection in document images. The automated title detection method consists of two phases: segmentation and title area detection. In the first phase, the document image is binarized and segmented by combining morphological operations with connected component analysis (CCA). This phase supplies the segmented regions from which the title is detected in the second phase. Candidate title areas are first identified using geometric information, and non-title regions are removed. After a classification step that discards non-text regions, a projection profile is computed to locate the title: since the largest font in a document is usually used for the title, horizontal projection is performed within the text areas. The proposed system is expected to have various applications, such as document title recognition, multimedia data searching, and real-time image processing.
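The horizontal-projection step described above can be sketched as follows. This is a minimal illustration assuming a pre-binarized text map, not the authors' implementation:

```python
import numpy as np

def title_band_by_projection(binary_text_map):
    """Given a binary map of text pixels (1 = text), return the
    (top, bottom) row range of the tallest text band, which the
    paper assumes corresponds to the title (largest font)."""
    profile = binary_text_map.sum(axis=1)   # horizontal projection
    rows = profile > 0                      # rows containing text
    bands, start = [], None
    for i, r in enumerate(rows):
        if r and start is None:
            start = i
        elif not r and start is not None:
            bands.append((start, i))
            start = None
    if start is not None:
        bands.append((start, len(rows)))
    # pick the band with the greatest height (tallest characters)
    return max(bands, key=lambda b: b[1] - b[0])
```

On a synthetic page with one tall line and one short line, the function returns the tall band's row range.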

An Improved Method for Detecting Caption in image using DCT-coefficient and Transition-map Analysis (DCT계수와 천이지도 분석을 이용한 개선된 영상 내 자막영역 검출방법)

  • An, Kwon-Jae;Joo, Sung-Il;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.16 no.4 / pp.61-71 / 2011
  • In this paper, we propose a method for detecting text regions in images using DCT-coefficient and transition-map analysis. The traditional method based on DCT-coefficient analysis achieves a high detection rate but also a high false-positive rate, while the transition-map method often rejects true text regions in its verification step because of a strict threshold. To overcome these problems, we generate a PTRmap (Promising Text Region map) through DCT-coefficient analysis and apply it to the transition-map detection method. As a result, the false-positive rate decreases compared with DCT-coefficient analysis alone, and the detection rate increases compared with the transition-map method alone.
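The DCT-coefficient analysis rests on the observation that text blocks are texture-rich and therefore carry high AC energy. A minimal sketch of such a screening step is shown below; the 8x8 block size and the threshold are illustrative assumptions, not the paper's values:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def promising_text_blocks(gray, thresh, block=8):
    """Mark blocks whose AC energy exceeds `thresh` as promising
    text regions (text is texture-rich, so its AC energy is high)."""
    C = dct_matrix(block)
    h, w = gray.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            b = gray[by*block:(by+1)*block, bx*block:(bx+1)*block]
            coeffs = C @ b @ C.T                     # 2-D DCT of the block
            ac_energy = (coeffs**2).sum() - coeffs[0, 0]**2
            mask[by, bx] = ac_energy > thresh
    return mask
```

A flat background block has essentially zero AC energy, while a high-contrast striped block (a crude stand-in for character strokes) is flagged.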

Text Region Detection Method in Mobile Phone Video (휴대전화 동영상에서의 문자 영역 검출 방법)

  • Lee, Hoon-Jae;Sull, Sang-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.5 / pp.192-198 / 2010
  • With the popularization of mobile phones with built-in cameras, there has been much effort to provide useful information to users by detecting and recognizing text in video captured by the phone's camera, which requires detecting the text regions in such video. In this paper, we propose a method to detect text regions in mobile phone video. We employ a morphological operation as preprocessing and obtain a binarized image using modified k-means clustering. Candidate text regions are then obtained by applying connected component analysis and an analysis of general text characteristics. In addition, we increase the precision of the detection by examining how frequently the candidate regions appear. Experimental results show that the proposed method detects text regions in mobile phone video with high precision and recall.
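The k-means binarization step can be illustrated with a plain two-cluster k-means on pixel intensities. The paper's modifications are not specified here, so this is only a baseline sketch:

```python
import numpy as np

def kmeans_binarize(gray, iters=20):
    """Binarize a grayscale image by clustering pixel intensities
    into two groups (a simplified stand-in for the modified
    k-means step described in the abstract)."""
    pixels = gray.ravel().astype(float)
    c0, c1 = pixels.min(), pixels.max()   # initialize at the extremes
    for _ in range(iters):
        # assign each pixel to the nearer of the two centers
        assign = np.abs(pixels - c0) > np.abs(pixels - c1)
        if assign.any() and (~assign).any():
            c0, c1 = pixels[~assign].mean(), pixels[assign].mean()
    return (np.abs(gray - c0) > np.abs(gray - c1)).astype(np.uint8)
```

Dark pixels map to 0 and bright pixels to 1, giving the binary map that the subsequent connected component analysis consumes.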

Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE:Software and Applications / v.34 no.7 / pp.646-654 / 2007
  • In recent years, the use of text inserted into TV content has grown to provide viewers with better visual understanding. In this paper, video text is defined as superimposed text located at the bottom of the video. Video text extraction is the first step for video information retrieval and video indexing. Most previous video text detection and extraction methods are based on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction remains difficult due to the low resolution of video and complex backgrounds. To solve these problems, we propose a method to extract text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of text candidates based on corner density, text region determination using labeling, and post-processing. The algorithm is language independent and can be applied to text of various colors. Text region updating between frames is also exploited to reduce the processing time. Experiments are performed on diverse videos to confirm the efficiency of the proposed method.
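The corner-map step computes the standard Harris response. A compact sketch, using a box window instead of the usual Gaussian for brevity, might look like:

```python
import numpy as np

def harris_response(gray, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel,
    where M is the local structure tensor accumulated over a
    3x3 window."""
    Iy, Ix = np.gradient(gray.astype(float))   # image derivatives
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):  # 3x3 box-filter window (Gaussian would be typical)
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    return (Sxx * Syy - Sxy**2) - k * (Sxx + Syy)**2
```

Text strokes produce dense clusters of high-response pixels, which is what the density-based candidate extraction step then exploits.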

Text extraction in images using simplify color and edges pattern analysis (색상 단순화와 윤곽선 패턴 분석을 통한 이미지에서의 글자추출)

  • Yang, Jae-Ho;Park, Young-Soo;Lee, Sang-Hun
    • Journal of the Korea Convergence Society / v.8 no.8 / pp.33-40 / 2017
  • In this paper, we propose a text extraction method based on contour pattern analysis for effective text detection in images. Edge-based text extraction algorithms perform well on images with simple backgrounds but poorly on images with complex backgrounds. The proposed method simplifies the colors of the image using k-means clustering in a preprocessing step to detect character regions. A high-pass filter is applied to enhance object boundaries, compensating for the boundary inaccuracy introduced by color simplification. The edges of objects are then detected using the difference between morphological dilation and erosion, and candidate character regions are identified by analyzing the contour patterns of the acquired regions so that unnecessary regions (pictures, background) can be removed. Finally, we show that the characters contained in the candidate regions are extracted once the unnecessary regions have been removed.
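The dilation-minus-erosion step is the standard morphological gradient. A minimal grayscale sketch with a 3x3 structuring element (an assumed size) is:

```python
import numpy as np

def morph_gradient(img):
    """Edge map as (3x3 dilation) - (3x3 erosion): the difference
    highlights pixels at object boundaries."""
    p = np.pad(img, 1, mode="edge")
    # the nine shifted views of the image form the 3x3 neighborhood
    shifts = [p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)]
    dilated = np.maximum.reduce(shifts)   # neighborhood maximum
    eroded = np.minimum.reduce(shifts)    # neighborhood minimum
    return dilated - eroded
```

On a step edge the gradient is nonzero only in the two columns adjacent to the boundary, which is exactly the contour the pattern analysis then inspects.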

Pre-processing Algorithm for Detection of Steel Product Number in Images (영상에서 철강 제품 번호 검출을 위한 전처리 알고리즘)

  • Koo, Keun-Hwi;Yun, Jong-Pil;Choi, Sung-Hoo;Choi, Jong-Hyun;Kim, Sang-Woo
    • Proceedings of the KIEE Conference / 2008.04a / pp.117-118 / 2008
  • Slabs produced at a steel mill are marked with management numbers to distinguish them from one another. An automatic recognition system is installed to verify whether the slab management number sent from the host computer matches the one marked on the product. The system captures images in real time both with and without a slab present and uses them to recognize the slab management number. Because the steel-mill background is complex and the lighting changes continuously, finding the text region is the biggest obstacle to recognizing the slab management number. In this paper, we present a pre-processing step that trains on the complex background in real time to find the text region. Using the complex background image, the region where the slab is located can be found, and because training is performed in real time, the influence of lighting changes can be reduced.


Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.7 / pp.1807-1822 / 2023
  • Innovation and the rapidly increasing functionality of user-friendly smartphones have encouraged shutterbugs to capture picturesque image macros in the work environment or during travel. Formal signboards are placed with marketing objectives and are enriched with text to attract people. Extracting and recognizing text from natural images is an emerging research issue and needs consideration. Compared to conventional optical character recognition (OCR), the complex backgrounds, implicit noise, lighting, and orientation of these scenic text photos make the problem more difficult, and Arabic text scene extraction and recognition adds a number of further complications. The method described in this paper uses a two-phase methodology to extract Arabic text, with word-boundary awareness, from scenic images with varying text orientations. The first stage uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS) followed by traditional two-layer neural networks for recognition. This study presents how an Arabic training and synthetic dataset can be created to exemplify superimposed text in different scene images. For this purpose, a dataset of 10k cropped images in which Arabic text was found was created for the detection phase, along with a 127k Arabic character dataset for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15k quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach, which is highly flexible in identifying complex Arabic text scenes such as arbitrarily oriented, curved, or deformed text, is used to detect these texts. Our experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe that future researchers will excel in the field of image processing, treating text images in any language to improve quality or reduce noise by enhancing the functionality of the VGG-16 based model using neural networks.

Single Shot Detector for Detecting Clickable Object in Mobile Device Screen (모바일 디바이스 화면의 클릭 가능한 객체 탐지를 위한 싱글 샷 디텍터)

  • Jo, Min-Seok;Chun, Hye-won;Han, Seong-Soo;Jeong, Chang-Sung
    • KIPS Transactions on Software and Data Engineering / v.11 no.1 / pp.29-34 / 2022
  • We propose a novel network architecture and build a dataset for recognizing clickable objects on mobile device screens. The data was collected from clickable objects on mobile screens of numerous resolutions, and a total of 24,937 annotations were subdivided into seven categories: text, edit text, image, button, region, status bar, and navigation bar. We use the Deconvolutional Single Shot Detector as a baseline, with Squeeze-and-Excitation blocks in the backbone network, the Single Shot Detector layer structure to derive inference results, and a Feature Pyramid Network structure. We also extract features efficiently by changing the network's input from the existing 1:1 aspect ratio to a 1:2 ratio similar to the mobile device screen. In experiments on the dataset we built, the mean average precision improved by up to 101% compared to the baseline.
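The Squeeze-and-Excitation block added to the backbone can be sketched in plain NumPy. The reduction ratio and weight shapes are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation: global-average-pool each channel,
    pass through a two-layer bottleneck (ReLU then sigmoid), and
    rescale the channels by the resulting gate weights.
    feature_map: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    z = feature_map.mean(axis=(1, 2))         # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)               # excitation bottleneck, ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))       # sigmoid gate: (C,)
    return feature_map * s[:, None, None]     # channel-wise rescale
```

Because the gate is a sigmoid, every channel is scaled by a factor in (0, 1), letting the network emphasize informative channels and suppress the rest.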

Text Extraction from Complex Natural Images

  • Kumar, Manoj;Lee, Guee-Sang
    • International Journal of Contents / v.6 no.2 / pp.1-5 / 2010
  • The rapid growth of communication technology has led to the development of effective ways of sharing ideas and information in the form of speech and images. Understanding this information has become an important research issue and has drawn the attention of many researchers. Text in a digital image carries much important information about the scene, but detecting and extracting it is a difficult task with many challenging issues. The main challenges in extracting text from natural scene images are variations in font size, text alignment, and font color, as well as illumination changes and reflections. In this paper, we propose a connected-component-based method to automatically detect text regions in natural images. Since text regions in images consist mostly of repeated vertical strokes, we look for a pattern of closely packed vertical edges. Once such a group of edges is found, neighboring vertical edges are connected to each other. Connected regions whose geometric features lie outside the valid specifications are considered outliers and eliminated. The proposed method is more effective than existing methods for slanted or curved characters. Experimental results are given to validate our approach.
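The grouping of closely packed vertical edges can be sketched as follows. The gradient threshold and the merge distance are illustrative assumptions, and the sketch works column-wise rather than on full connected components:

```python
import numpy as np

def vertical_edge_runs(gray, thresh):
    """Find columns containing strong vertical strokes (large
    horizontal gradient) and merge neighboring columns into
    candidate text runs."""
    # horizontal gradient responds to vertical strokes
    gx = np.abs(np.diff(gray.astype(float), axis=1))
    edge_cols = np.where((gx > thresh).any(axis=0))[0]
    runs, start = [], None
    for c in edge_cols:
        if start is None:
            start, prev = c, c
        elif c - prev <= 2:          # close edges belong to one run
            prev = c
        else:
            runs.append((start, prev))
            start, prev = c, c
    if start is not None:
        runs.append((start, prev))
    return runs
```

Isolated strokes far from any others end up in their own runs, which a later geometric-feature check could then reject as outliers.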

Pre-processing Algorithm for Detection of Slab Information on Steel Process using Robust Feature Points extraction (강건한 특징점 추출을 이용한 철강제품 정보 검출을 위한 전처리 알고리즘)

  • Choi, Jong-Hyun;Yun, Jong-Pil;Choi, Sung-Hoo;Koo, Keun-Hwi;Kim, Sang-Woo
    • Proceedings of the KIEE Conference / 2008.07a / pp.1819-1820 / 2008
  • Steel slabs are marked with slab management numbers (SMNs). To increase efficiency, automated identification of SMNs from digital images is desirable, and automatic extraction of SMNs is a prerequisite for automatic character segmentation and recognition. The images contain complex backgrounds, and the position of the text region on the slab is variable. This paper describes a pre-processing algorithm for detecting slab information using robust feature point extraction. Using the SIFT (Scale Invariant Feature Transform) algorithm, we can reduce the search region for the extraction of SMNs from the slab image.
