• Title/Summary/Keyword: Text Region Extraction


A Fast Algorithm for Korean Text Extraction and Segmentation from Subway Signboard Images Utilizing Smartphone Sensors

  • Milevskiy, Igor;Ha, Jin-Young
    • Journal of Computing Science and Engineering, v.5 no.3, pp.161-166, 2011
  • We present a fast algorithm for Korean text extraction and segmentation from subway signboards using smartphone sensors, designed to minimize computational time and memory usage. The algorithm can serve as the preprocessing steps for optical character recognition (OCR): binarization, text location, and segmentation. An image of a signboard captured by the smartphone camera at an arbitrary angle is rotated by the sensor-detected angle, as if the image had been taken with the phone held horizontally. Binarization is performed only once, on a subset of connected components instead of the whole image area, which greatly reduces computational time. Text location is guided by a marker line that the user places over the region of interest in the binarized image via the touch screen. Text segmentation then reuses the connected-component data from the binarization step and cuts the string into individual character images. The resulting data can be used as OCR input, thus addressing the most difficult part of OCR on text areas in natural scene images. Experimental results show that our binarization is 3.5 and 3.7 times faster than the Niblack and Sauvola adaptive-thresholding algorithms, respectively, while also achieving better quality than the other methods.
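
The following is a minimal sketch, not the authors' implementation, of two preprocessing ideas mentioned in the abstract above: rotating the captured image by the sensor-detected angle, and computing the Niblack and Sauvola adaptive thresholds used as baselines. The input file name, the roll angle, the window size, and the k parameter are illustrative assumptions.

```python
import cv2
import numpy as np
from skimage.filters import threshold_niblack, threshold_sauvola

def deskew_by_sensor_angle(image, roll_deg):
    """Rotate the image so the signboard appears as if shot horizontally."""
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), roll_deg, 1.0)
    return cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_LINEAR)

def baseline_binarizations(gray, window=25, k=0.2):
    """Niblack and Sauvola baselines the paper compares against (parameters assumed)."""
    nib = (gray > threshold_niblack(gray, window_size=window, k=k)).astype(np.uint8) * 255
    sau = (gray > threshold_sauvola(gray, window_size=window, k=k)).astype(np.uint8) * 255
    return nib, sau

img = cv2.imread("signboard.jpg")                     # hypothetical input file
gray = cv2.cvtColor(deskew_by_sensor_angle(img, roll_deg=-12.0), cv2.COLOR_BGR2GRAY)
niblack_bw, sauvola_bw = baseline_binarizations(gray)
```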

Text extraction from camera based document image (카메라 기반 문서영상에서의 문자 추출)

  • 박희주;김진호
    • Journal of Korea Society of Industrial Information Systems, v.8 no.2, pp.14-20, 2003
  • This paper presents a text extraction method for camera-based document images. Camera-based document images are more difficult to recognize than scanner-based images because of segmentation problems caused by variable lighting conditions and diverse fonts. Both document binarization and character extraction are important steps in recognizing camera-based document images. After converting the color image into a gray-level image, gray-level normalization is used to extract character regions independently of lighting conditions and the background. A local adaptive binarization method is then used to separate characters from the background after noise removal. In the character extraction step, horizontal and vertical projections together with connected-component information are used to extract text lines, word regions, and character regions. To evaluate the proposed method, we experimented with documents from the ETRI database containing a mix of Hangul, English, symbols, and digits, and obtained encouraging binarization and character extraction results.
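
A rough illustration, under stated assumptions, of the projection-profile step this abstract describes: after local adaptive binarization, the horizontal projection of foreground pixels splits the page into text lines. The block size, offset constant, and minimum foreground count per row are assumed values, not taken from the paper.

```python
import cv2
import numpy as np

def binarize_local(gray, block=31, c=10):
    # Local adaptive thresholding to cope with uneven camera lighting.
    return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, block, c)

def split_text_lines(binary, min_fg_per_row=2):
    # Horizontal projection: count foreground pixels per row, then report
    # contiguous bands of "busy" rows as text lines.
    profile = (binary > 0).sum(axis=1)
    busy_rows = profile >= min_fg_per_row
    lines, start = [], None
    for y, busy in enumerate(busy_rows):
        if busy and start is None:
            start = y
        elif not busy and start is not None:
            lines.append((start, y))
            start = None
    if start is not None:
        lines.append((start, len(busy_rows)))
    return lines

gray = cv2.imread("document.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input
line_bands = split_text_lines(binarize_local(gray))
```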


Pre-processing Algorithm for Detection of Slab Information on Steel Process using Robust Feature Points extraction (강건한 특징점 추출을 이용한 철강제품 정보 검출을 위한 전처리 알고리즘)

  • Choi, Jong-Hyun;Yun, Jong-Pil;Choi, Sung-Hoo;Koo, Keun-Hwi;Kim, Sang-Woo
    • Proceedings of the KIEE Conference, 2008.07a, pp.1819-1820, 2008
  • Steel slabs are marked with slab management numbers (SMNs). To increase efficiency, automated identification of SMNs from digital images is desirable, and automatic extraction of SMNs is a prerequisite for automatic character segmentation and recognition. The images include complex backgrounds, and the position of the text region on the slabs varies. This paper describes a pre-processing algorithm for detecting slab information using robust feature point extraction. Using the SIFT (Scale-Invariant Feature Transform) algorithm, the search region for extraction of SMNs from the slab image can be reduced.
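
A hedged sketch of the idea described above: detect SIFT keypoints and take the area they span as a reduced search region for the slab management number. Collapsing the keypoints into a single bounding rectangle is a simplification for illustration, not the authors' procedure; the input file name is also assumed.

```python
import cv2
import numpy as np

gray = cv2.imread("slab.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input
sift = cv2.SIFT_create()
keypoints = sift.detect(gray, None)                   # assumes at least one keypoint is found

# Collapse keypoint locations into a crude region of interest.
pts = np.array([kp.pt for kp in keypoints], dtype=np.float32)
x, y, w, h = cv2.boundingRect(pts)
roi = gray[y:y + h, x:x + w]                          # reduced search region for the SMN
```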


Study on video character extraction and recognition (비디오 자막 추출 및 인식 기법에 관한 연구)

  • 김종렬;김성섭;문영식
    • Proceedings of the IEEK Conference, 2001.06c, pp.141-144, 2001
  • In this paper, a new algorithm for extracting and recognizing characters from video, without prior knowledge of character font, color, or size, is proposed. To improve the recognition rate for low-resolution videos with complex backgrounds, consecutive frames containing the identical text region are automatically detected and combined into an average frame. Using the boundary pixels of a text region as seeds, region filling is applied to remove the background from the characters. Color clustering is then applied to remove the remaining background, verified against the result of the region-filling process. Features such as white runs and zero-one transitions from the center are extracted from unknown characters and compared with a pre-composed character feature set to recognize the characters.
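
A simplified sketch of two steps from this abstract: averaging consecutive frames that carry the same caption to suppress a complex background, and region filling seeded from the border of the text box to peel away background pixels. Seed placement and the assumption of a 0/255 binary input are illustrative choices, not the paper's exact procedure.

```python
import cv2
import numpy as np

def average_frames(frames):
    # frames: equally sized grayscale caption crops from consecutive video frames.
    return np.mean(np.stack(frames).astype(np.float32), axis=0).astype(np.uint8)

def remove_background_by_border_fill(binary):
    # Flood fill from every white border pixel; what survives is kept as character strokes.
    filled = binary.copy()
    h, w = filled.shape
    mask = np.zeros((h + 2, w + 2), np.uint8)
    for x in range(w):
        for y in (0, h - 1):
            if filled[y, x] == 255:
                cv2.floodFill(filled, mask, (x, y), 0)
    for y in range(h):
        for x in (0, w - 1):
            if filled[y, x] == 255:
                cv2.floodFill(filled, mask, (x, y), 0)
    return filled
```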


A Study on Extraction of text region using shape analysis of text in natural scene image (자연영상에서 문자의 형태 분석을 이용한 문자영역 추출에 관한 연구)

  • Yang, Jae-Ho;Han, Hyun-Ho;Kim, Ki-Bong;Lee, Sang-Hun
    • Journal of the Korea Convergence Society, v.9 no.11, pp.61-68, 2018
  • In this paper, we propose a character detection method that combines image enhancement with analysis of character shape to detect characters in natural images acquired in everyday life. The proposed method emphasizes object boundaries using an unsharp mask in order to improve the detection rate of regions that should be recognized as characters in a natural image. Using the enhanced object boundaries, character candidate regions are detected with Maximally Stable Extremal Regions (MSER). To identify the regions that are actual characters among these candidates, the shape of each region is analyzed, and non-character regions lacking character-like properties are removed, increasing the detection rate of true character regions. For an objective evaluation, the detection rate and accuracy of the extracted character regions are compared with existing methods. Experimental results show that the proposed method improves both the detection rate and the accuracy of character regions over existing character detection methods.
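
A minimal sketch of the pipeline outlined above: sharpen boundaries with an unsharp mask, detect MSER candidate regions, and discard candidates with implausible shapes. The sharpening weights, MSER defaults, and the aspect-ratio and size thresholds are assumptions for illustration, not the paper's values.

```python
import cv2

def unsharp_mask(gray, sigma=1.5, amount=1.0):
    # Classic unsharp masking: subtract a blurred copy to emphasize boundaries.
    blurred = cv2.GaussianBlur(gray, (0, 0), sigma)
    return cv2.addWeighted(gray, 1.0 + amount, blurred, -amount, 0)

gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)    # hypothetical input
sharp = unsharp_mask(gray)

mser = cv2.MSER_create()
regions, boxes = mser.detectRegions(sharp)

# Simple shape filtering in the spirit of the paper: drop boxes whose aspect
# ratio or area is implausible for a character (thresholds are assumptions).
candidates = [(x, y, w, h) for (x, y, w, h) in boxes
              if 0.1 < w / float(h) < 10 and w * h > 50]
```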

Segmentation of region strings using connection-characteristic function (연결특성함수를 이용한 문서화상에서의 영역 분리와 문자열 추출)

  • 김석태;이대원;박찬용;남궁재찬
    • The Journal of Korean Institute of Communications and Information Sciences, v.22 no.11, pp.2531-2542, 1997
  • This paper describes a method for region segmentation and string extraction in documents that mix text, graphics, and picture images, based on the structural characteristics of connected components. Non-text regions are segmented using connection-characteristic functions built from the structural characteristics of the connected components. For string extraction, basic unit regions are first formed whose vertical and horizontal lengths are 1/4 of the average length of the connected components. Second, basic unit regions whose mutual connection intensity falls below a given threshold are merged into word blocks. Third, word blocks with similar block angles are linked to create initial strings. Finally, complete strings are generated by merging the remaining word blocks whose angles are undecided, provided their height and position are similar to those of the initial strings. This method can extract strings that are not horizontal as well as strings whose characters vary in size. Computer experiments with documents of different styles demonstrate the feasibility of the method.
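
A loose illustration, not the paper's connection-characteristic function, of the general mechanism described above: extract connected components and merge components whose horizontal gap is small relative to the average component size into word blocks. The gap factor here is a stand-in for the paper's connection-intensity threshold.

```python
import cv2
import numpy as np

def component_boxes(binary):
    # Connected-component bounding boxes; label 0 is the background.
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    return stats[1:, :4]                                # columns: x, y, w, h

def merge_into_word_blocks(boxes, gap_factor=0.25):
    # Merge boxes whose horizontal gap is below gap_factor * average width
    # (an assumed proxy for the connection-intensity threshold).
    if len(boxes) == 0:
        return []
    avg_w = boxes[:, 2].mean()
    boxes = boxes[np.argsort(boxes[:, 0])]
    blocks = [list(boxes[0])]
    for x, y, w, h in boxes[1:]:
        bx, by, bw, bh = blocks[-1]
        if x - (bx + bw) <= gap_factor * avg_w:
            nx, ny = min(bx, x), min(by, y)
            blocks[-1] = [nx, ny, max(bx + bw, x + w) - nx, max(by + bh, y + h) - ny]
        else:
            blocks.append([x, y, w, h])
    return blocks
```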


Implementation of JBIG2 CODEC with Effective Document Segmentation (문서의 효율적 영역 분할과 JBIG2 CODEC의 구현)

  • 백옥규;김현민;고형화
    • The Journal of Korean Institute of Communications and Information Sciences, v.27 no.6A, pp.575-583, 2002
  • JBIG2 is an international standard for compression of bi-level images and documents. JBIG2 supports three encoding modes for high compression according to the region features of a document: generic region coding for bitmaps, whose basic coder is either MMR or arithmetic coding; pattern matching coding for text regions; and halftone pattern coding for halftone regions. In this paper, a document is segmented into line-art, halftone, and text regions for JBIG2 encoding, and a JBIG2 CODEC is implemented. For efficient region segmentation, a segmentation method using wavelet coefficients is applied together with an existing boundary extraction technique. For the facsimile test image (IEEE-167a), the compression ratio improves by about 2% and the subjective quality is enhanced. We also propose arbitrary-shape halftone region coding, which improves the subjective quality of the text neighboring a halftone region.
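
A hedged sketch of a wavelet-based block classifier in the spirit of the segmentation step described above: halftone blocks tend to carry far more energy in the detail subbands of a 2-D wavelet transform than text or line art. The block size, the choice of the Haar wavelet, and the energy threshold are assumptions, not values from the paper.

```python
import numpy as np
import pywt

def high_frequency_energy(block):
    # One-level 2-D Haar transform; measure energy in the detail subbands.
    _, (lh, hl, hh) = pywt.dwt2(block.astype(np.float32), "haar")
    return float(np.mean(lh ** 2 + hl ** 2 + hh ** 2))

def classify_blocks(gray, block_size=64, halftone_thresh=500.0):
    # Label each block as halftone or text/line-art by detail-band energy.
    labels = {}
    h, w = gray.shape
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            e = high_frequency_energy(gray[y:y + block_size, x:x + block_size])
            labels[(y, x)] = "halftone" if e > halftone_thresh else "text_or_lineart"
    return labels
```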

Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.7, pp.1807-1822, 2023
  • Innovation and rapidly increasing functionality in user-friendly smartphones have encouraged people to capture picturesque image macros in the work environment or during travel. Formal signboards are placed with marketing objectives and are enriched with text to attract people. Extracting and recognizing text from natural images is an emerging research issue that needs consideration. Compared to conventional optical character recognition (OCR), the complex backgrounds, implicit noise, lighting, and orientation of these scene-text photos make the problem more difficult, and Arabic scene-text extraction and recognition adds further complications. The method described in this paper uses a two-phase methodology to extract Arabic text, with word-boundary awareness, from scene images with varying text orientations. The first stage uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS) followed by a traditional two-layer neural network for recognition. This study also presents how an Arabic synthetic training dataset can be created to exemplify text superimposed on different scene images. For this purpose, a dataset of 10k cropped images containing Arabic text was created for the detection phase, and a 127k Arabic character dataset for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15k quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach, which is highly flexible in identifying complex Arabic scene text such as arbitrarily oriented, curved, or deformed text, is used to detect these texts. Experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe that future researchers can build on this work to process scene text in any language and to improve noise handling by enhancing the functionality of the VGG-16 based model using neural networks.
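
A minimal sketch, assuming a Keras/TensorFlow environment, of the recognition stage named in the abstract: segmented character crops fed to a small two-layer fully connected network. The crop size and number of classes are illustrative assumptions; the paper's actual class set and architecture details may differ.

```python
import tensorflow as tf

NUM_CLASSES = 28            # assumed: base Arabic letters; the paper's class set may differ
INPUT_SHAPE = (32, 32, 1)   # assumed crop size for a segmented character image

model = tf.keras.Sequential([
    tf.keras.Input(shape=INPUT_SHAPE),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),            # hidden layer
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"), # one score per character class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_chars, train_labels, epochs=10, validation_split=0.1)
```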

Extraction text-region's pixel on caption of video (동영상에 삽입된 자막 내 문자영역화소추출)

  • An, Kwon-Jae;Kim, Gye-Young
    • Proceedings of the Korean Society of Computer Information Conference, 2011.01a, pp.43-45, 2011
  • This paper proposes a method for extracting the pixels that make up the character regions of captions embedded in video so that character recognition becomes possible. First, the color polarity of the caption image is determined using a statistical method. Noise removal is then performed differently depending on the color polarity, using either an intensity-based or a morphology-based approach. By applying noise removal appropriate to each polarity decision, the proposed method achieved a higher character recognition rate than existing methods when recognition was performed on the character-region images formed by the extracted pixels.
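
A rough sketch of the two ideas in the (translated) abstract above: decide the caption's color polarity from simple statistics of the caption crop, then clean the thresholded result with a morphological opening. The median-based polarity rule and the kernel size are assumptions for illustration, not the paper's statistical method.

```python
import cv2
import numpy as np

def caption_polarity(gray):
    # If the median intensity is dark, assume bright text on a dark band, and vice versa.
    return "bright_text" if np.median(gray) < 128 else "dark_text"

def extract_caption_pixels(gray):
    mode = cv2.THRESH_BINARY if caption_polarity(gray) == "bright_text" else cv2.THRESH_BINARY_INV
    _, bw = cv2.threshold(gray, 0, 255, mode + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    return cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)   # remove small noise blobs
```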


Text Region Extraction of Natural Scene Images using Gray-level Information and Split/Merge Method (명도 정보와 분할/합병 방법을 이용한 자연 영상에서의 텍스트 영역 추출)

  • Kim Ji-Soo;Kim Soo-Hyung;Choi Yeong-Woo
    • Journal of KIISE: Software and Applications, v.32 no.6, pp.502-511, 2005
  • In this paper, we propose a hybrid analysis method (HAM) based on gray-intensity information from natural scene images. The HAM is composed of GIA (Gray-intensity Information Analysis) and SMA (Split/Merge Analysis). Our experimental results show that the proposed approach is superior to conventional methods in both simple and complex images.
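
A generic quadtree-split illustration, not the paper's HAM: recursively split blocks whose gray-level variance exceeds a threshold, a common building block of split/merge analysis. The variance threshold and minimum block size are assumed values for illustration only.

```python
import numpy as np

def quadtree_split(gray, y=0, x=0, h=None, w=None, var_thresh=300.0, min_size=16):
    # Return a list of (y, x, h, w) leaf blocks whose gray-level variance is low.
    if h is None:
        h, w = gray.shape
    block = gray[y:y + h, x:x + w]
    if h <= min_size or w <= min_size or block.var() <= var_thresh:
        return [(y, x, h, w)]                       # homogeneous leaf block
    hh, hw = h // 2, w // 2
    leaves = []
    for dy, dx in ((0, 0), (0, hw), (hh, 0), (hh, hw)):
        leaves += quadtree_split(gray, y + dy, x + dx,
                                 hh if dy == 0 else h - hh,
                                 hw if dx == 0 else w - hw,
                                 var_thresh, min_size)
    return leaves
```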