• Title/Summary/Keyword: Scene Text Detection

Performance Improvement of TextFuseNet using Image Sharpening (선명화 기법을 이용한 TextFuseNet 성능 향상)

  • Jeong, Ji-Yeon;Cheon, Ji-Eun;Jung, Yuchul
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.71-73 / 2021
  • In this paper, we apply sharpening, an image-processing technique, to TextFuseNet, a recent framework for scene text detection. Scene text detection is the task of recognizing text against unconstrained backgrounds such as outdoor billboards and signboards, and TextFuseNet is one framework for this task. TextFuseNet detects text at the character, word, and global levels; our goal here is to improve its performance by applying sharpening. We evaluated a conventional sharpening filter and unsharp masking, and confirmed that applying the sharpening filter improved AP by 0.9%.
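
The two sharpening techniques named in the abstract are standard image-processing operations. A minimal OpenCV/NumPy sketch of both follows; the kernel values, blur parameters, and unsharp-masking weights are illustrative assumptions, not the paper's exact settings:

```python
import cv2
import numpy as np

img = cv2.imread("scene.jpg")  # hypothetical input image

# Sharpening filter: convolve with a Laplacian-style kernel.
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=np.float32)
sharpened = cv2.filter2D(img, -1, kernel)

# Unsharp masking: subtract a Gaussian-blurred copy from the original.
blurred = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)
unsharp = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)
```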

Extraction of Text Alignment by Tensor Voting and its Application to Text Detection (텐서보팅을 이용한 텍스트 배열정보의 획득과 이를 이용한 텍스트 검출)

  • Lee, Guee-Sang;Dinh, Toan Nguyen;Park, Jong-Hyun
    • Journal of KIISE: Software and Applications / v.36 no.11 / pp.912-919 / 2009
  • A novel algorithm using 2D tensor voting and an edge-based approach is proposed for text detection in natural scene images. Tensor voting exploits the fact that characters in a text line usually lie close together on a smooth curve, so the tokens corresponding to the centers of these characters have high curve-saliency values. First, a suitable edge-based method is used to find all possible text regions. Since the false-positive rate of the detections generated by the edge-based method is high, 2D tensor voting is then applied to remove false positives and retain only text regions. Experimental results show that the method successfully detects text regions in many complex natural scene images.
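
Full tensor voting is involved; as a heavily simplified sketch of the curve-saliency idea only (not the paper's implementation), each token can accumulate a second-moment tensor over its neighbours and be scored by the eigenvalue gap, which is large when the neighbours line up along a smooth curve:

```python
import numpy as np

def curve_saliency(points, sigma=20.0):
    """Score each token by lambda1 - lambda2 of a tensor accumulated
    from Gaussian-weighted outer products of directions to neighbours;
    tokens lying on a smooth curve receive high scores."""
    pts = np.asarray(points, dtype=float)
    scores = np.zeros(len(pts))
    for i, p in enumerate(pts):
        T = np.zeros((2, 2))
        for j, q in enumerate(pts):
            if i == j:
                continue
            d = q - p
            r = np.linalg.norm(d)
            if r == 0:
                continue
            u = d / r
            T += np.exp(-(r / sigma) ** 2) * np.outer(u, u)
        lam = np.linalg.eigvalsh(T)   # eigenvalues in ascending order
        scores[i] = lam[1] - lam[0]   # curve (stick) saliency
    return scores
```

The tokens here would be the character-center candidates produced by the edge-based stage; low-saliency tokens are the false positives to be removed.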

Text Detection in Scene Images Based on Interest Points

  • Nguyen, Minh Hieu;Lee, Gueesang
    • Journal of Information Processing Systems / v.11 no.4 / pp.528-537 / 2015
  • Text in images is one of the most important cues for understanding a scene. In this paper, we propose a novel approach based on interest points to localize text in natural scene images. The main ideas are as follows: first, interest-point detection techniques, which extract the corner points of characters and the center points of edge-connected components, are used to select candidate regions. Second, these candidate regions are verified using tensor voting, which can extract perceptual structures from noisy data. Finally, area, orientation, and aspect ratio are used to filter out non-text regions. The proposed method was tested on the ICDAR 2003 dataset and on images of wine labels, and the experimental results show the validity of the approach.
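
As a rough sketch of the interest-point stage, assuming Harris corners as the detector and purely illustrative thresholds (the paper's exact detectors and filter values are not reproduced here):

```python
import cv2
import numpy as np

img = cv2.imread("scene.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Interest points: Harris response highlights character corners.
corners = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
mask = (corners > 0.01 * corners.max()).astype(np.uint8)

# Group corner points into candidate regions, then filter with the
# geometric cues the abstract names (area, aspect ratio).
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
candidates = []
for i in range(1, n):
    x, y, w, h, area = stats[i]
    aspect = w / float(h)
    if area > 20 and 0.1 < aspect < 10:  # illustrative thresholds
        candidates.append((x, y, w, h))
```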

Deep-Learning Approach for Text Detection Using Fully Convolutional Networks

  • Tung, Trieu Son;Lee, Gueesang
    • International Journal of Contents / v.14 no.1 / pp.1-6 / 2018
  • Text, as one of the most influential inventions of humanity, has played an important role in human life since ancient times. The rich and precise information embodied in text is useful in a wide range of vision-based applications: text extracted from images can support automatic annotation, indexing, language translation, and assistance systems for impaired persons. Natural-scene text detection is therefore an active research topic in computer vision and document analysis. Previous methods perform poorly due to numerous false-positive and false-negative regions. In this paper, a fully-convolutional-network (FCN)-based method with a supervised architecture is used to localize textual regions. The model is trained directly on images, with pixel values as inputs and binary ground truth as labels. The method was evaluated on the ICDAR-2013 dataset and proved comparable to other feature-based methods; it could expedite future research on deep-learning-based text detection.
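
A minimal sketch of the pixel-wise FCN idea in PyTorch, with a toy encoder-decoder whose depth and channel sizes are illustrative assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class TinyTextFCN(nn.Module):
    """Minimal FCN: image in, per-pixel text/non-text logits out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # logits, same H x W as input

model = TinyTextFCN()
x = torch.randn(1, 3, 256, 256)  # dummy image batch
# Binary ground-truth mask as label, exactly as the abstract describes.
loss = nn.BCEWithLogitsLoss()(model(x), torch.zeros(1, 1, 256, 256))
```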

Text Region Detection using Edge and Regional Minima/Maxima Transformation from Natural Scene Images (에지 및 국부적 최소/최대 변환을 이용한 자연 이미지로부터 텍스트 영역 검출)

  • Park, Jong-Cheon;Lee, Keun-Wang
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.2 / pp.358-363 / 2009
  • Text region detection from natural scene images is used in a variety of applications, and much research is still needed in this field. Recent methods detect text regions with algorithms that combine edge-based and connected-component-based approaches. This paper therefore proposes a text region detection method that uses edges together with a regional minima/maxima transformation of natural scene images. The method detects the connected components of the edge map and of the regional minima/maxima, and labels them. The labeled regions are analyzed to detect text candidate regions; the detected candidates are combined into a single text-candidate image, and the final text regions are validated by comparing the similarity and adjacency of individual characters. Experimental results show that the proposed algorithm improves the correctness of text region detection by combining edge-based and regional minima/maxima connected-component detection.
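
A rough sketch of the candidate-extraction step, assuming OpenCV for the edge map and scikit-image for the regional extrema; all parameters are illustrative:

```python
import cv2
import numpy as np
from skimage.morphology import local_minima, local_maxima

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

edges = cv2.Canny(img, 100, 200)                      # edge map
minima = local_minima(img).astype(np.uint8) * 255     # dark blobs (e.g. dark text)
maxima = local_maxima(img).astype(np.uint8) * 255     # bright blobs (e.g. light text)

# Label connected components in each map and collect candidate boxes,
# which would then be merged and validated by character similarity/adjacency.
candidates = []
for m in (edges, minima, maxima):
    n, _, stats, _ = cv2.connectedComponentsWithStats(m)
    candidates += [tuple(stats[i][:4]) for i in range(1, n)]
```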

YOLO, EAST : Comparison of Scene Text Detection Performance, Using a Neural Network Model (YOLO, EAST: 신경망 모델을 이용한 문자열 위치 검출 성능 비교)

  • Park, Chan Yong;Lim, Young Min;Jeong, Seung Dae;Cho, Young Heuk;Lee, Byeong Chul;Lee, Gyu Hyun;Kim, Jin Wook
    • KIPS Transactions on Software and Data Engineering / v.11 no.3 / pp.115-124 / 2022
  • In this paper, the YOLO and EAST models are tested to analyze their text-area detection performance on real-world and ordinary text images. Earlier YOLO models, including YOLOv3, have been known to underperform at detecting text areas, but the recently released YOLOv4 and YOLOv5 achieve promising text-detection performance on various images. Experimental results show that both the YOLOv4 and YOLOv5 models can be expected to find wide use for text detection in the field of scene text recognition.
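
EAST can be run through OpenCV's DNN module; a minimal inference sketch follows, assuming the pretrained frozen_east_text_detection.pb graph distributed with OpenCV's text-detection sample (the output-layer names are the ones that sample uses):

```python
import cv2

net = cv2.dnn.readNet("frozen_east_text_detection.pb")  # pretrained EAST graph
img = cv2.imread("scene.jpg")                           # hypothetical input

# EAST expects input dimensions that are multiples of 32.
blob = cv2.dnn.blobFromImage(img, 1.0, (320, 320),
                             (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)
scores, geometry = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                "feature_fusion/concat_3"])
# scores: per-location text confidence; geometry: rotated-box parameters.
# The full sample decodes these and applies non-maximum suppression.
```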

An End-to-End Sequence Learning Approach for Text Extraction and Recognition from Scene Image

  • Lalitha, G.;Lavanya, B.
    • International Journal of Computer Science & Network Security / v.22 no.7 / pp.220-228 / 2022
  • Images always carry useful information, so detecting text in scene images is imperative. The purpose of the proposed work is to recognize scene-text images, for example signboard images placed along highways; scene text detection on highway signboards plays a vital role in road-safety measures. In the initial stage, preprocessing techniques are applied to sharpen the image and enhance its features; likewise, morphological operators are applied to close small gaps between objects. We propose a two-phase algorithm for extracting and recognizing text from scene images. In phase I, text is extracted from the scene image by applying preprocessing techniques such as blurring, erosion, and top-hat filtering, followed by thresholding and a morphological gradient with fixed kernel sizes; a Canny edge detector is then applied to detect the text contained in the image. In phase II, the text is recognized using MSER (Maximally Stable Extremal Regions) and OCR. The proposed work detects text in scene images from the popular dataset repositories SVT, ICDAR 2003, and MSRA-TD500, whose images were captured under various illuminations and angles. The proposed algorithm produces higher accuracy in less execution time than state-of-the-art methods.
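
A sketch of the two-phase pipeline in OpenCV, with illustrative kernel sizes and thresholds (the abstract does not give exact values); MSER stands in for the phase II region extraction, after which an OCR engine such as Tesseract would read the cropped boxes:

```python
import cv2

img = cv2.imread("signboard.jpg")            # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Phase I: preprocessing chain as listed in the abstract.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
blurred = cv2.GaussianBlur(gray, (3, 3), 0)
eroded = cv2.erode(blurred, kernel)
tophat = cv2.morphologyEx(eroded, cv2.MORPH_TOPHAT, kernel)
_, thresh = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
gradient = cv2.morphologyEx(thresh, cv2.MORPH_GRADIENT, kernel)
edges = cv2.Canny(gradient, 50, 150)

# Phase II: MSER finds stable character-like regions for recognition.
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(gray)
boxes = [cv2.boundingRect(r.reshape(-1, 1, 2)) for r in regions]
```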

Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.7 / pp.1807-1822 / 2023
  • Innovation and the rapidly increasing functionality of user-friendly smartphones have encouraged users to capture picturesque image macros at work or during travel. Formal signboards are placed with marketing objectives and are enriched with text to attract people. Extracting and recognizing text from natural images is an emerging research issue that needs consideration. Compared to conventional optical character recognition (OCR), the complex backgrounds, implicit noise, lighting, and orientation of these scenic text photos make the problem more difficult, and Arabic-language scene text extraction and recognition adds further complications. The method described in this paper uses a two-phase approach to extract Arabic text, with word-boundary awareness, from scenic images with varying text orientations. The first phase uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS) followed by traditional two-layer neural networks for recognition. This study also presents how Arabic training and synthetic datasets can be created to exemplify superimposed text in different scene images: a dataset of 10k cropped images containing Arabic text was created for the detection phase, and a 127k Arabic-character dataset for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15k quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach offers high flexibility in identifying complex Arabic text scenes, such as texts that are arbitrarily oriented, curved, or deformed. Our experiments show that the system achieves 91.8% word-segmentation accuracy and 94.2% character-recognition accuracy. We believe future researchers can improve scene-text processing in any language by enhancing the functionality of the VGG-16-based model with neural networks.
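
As a sketch of the VGG-16-backbone-plus-small-head idea in PyTorch (the 28 output classes for the Arabic alphabet, the layer widths, and the input size are illustrative assumptions; the paper's actual recognition network is not reproduced here):

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

# VGG-16 convolutional features reused as the backbone; the small
# classifier mirrors the "two-layer neural networks" in the abstract.
backbone = vgg16(weights=None).features   # conv feature extractor
head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(512 * 7 * 7, 256), nn.ReLU(),
    nn.Linear(256, 28),                   # assumed Arabic character classes
)

x = torch.randn(1, 3, 224, 224)           # one cropped character image
logits = head(backbone(x))                # per-class scores
```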

Variance Recovery in Text Detection using Color Variance Feature (색 분산 특징을 이용한 텍스트 추출에서의 손실된 분산 복원)

  • Choi, Yeong-Woo;Cho, Eun-Sook
    • Journal of the Korea Society of Computer and Information / v.14 no.10 / pp.73-82 / 2009
  • This paper proposes a method for recovering the variance of character strokes that can be missed when applying our previously proposed color-variance approach to text detection in natural scene images. The previous method can miss the color variance when character strokes are thick or long, because the horizontal and vertical variance-detection windows have fixed lengths. This paper therefore proposes a variance-recovery method that uses the geometric information of the bounding boxes of connected components together with heuristic knowledge. We tested the proposed method on various document-style and natural scene images, such as billboards and signboards captured by digital cameras and mobile-phone cameras, and showed improved text detection accuracy even in images containing large characters.
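
The base color-variance feature can be sketched with box filters, using Var = E[x²] − E[x]² per window; this shows only the fixed-window feature whose limitation the paper addresses (not the recovery step), with an illustrative window size and threshold:

```python
import cv2
import numpy as np

img = cv2.imread("signboard.jpg").astype(np.float32)  # hypothetical input

def window_color_variance(img, win=16):
    """Sum of per-channel variances inside each win x win window,
    computed with box filters: Var = E[x^2] - E[x]^2."""
    mean = cv2.boxFilter(img, -1, (win, win))
    mean_sq = cv2.boxFilter(img * img, -1, (win, win))
    var = mean_sq - mean * mean
    return var.sum(axis=2)  # combine B, G, R channel variances

var_map = window_color_variance(img)
# High local color variance hints at text strokes over background.
text_mask = (var_map > var_map.mean()).astype(np.uint8)  # illustrative threshold
```

With a fixed window, strokes much thicker or longer than the window show little variance inside it and are missed, which is exactly the gap the bounding-box-based recovery targets.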

A Novel Text Sample Selection Model for Scene Text Detection via Bootstrap Learning

  • Kong, Jun;Sun, Jinhua;Jiang, Min;Hou, Jian
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.771-789 / 2019
  • Text detection has been a popular research topic in computer vision, and it is difficult for prevalent text detection algorithms to avoid dependence on datasets. To overcome this problem, we propose a novel unsupervised text detection algorithm inspired by bootstrap learning. First, a text candidate in a novel superpixel form is proposed to improve the text recall rate through image segmentation. Second, we propose a unique text sample selection model (TSSM) to extract text samples from the current image and eliminate database dependency. Specifically, to improve the precision of the samples, we combine maximally stable extremal regions (MSERs) and a saliency map to generate sample reference maps with a double-threshold scheme. Finally, a multiple-kernel boosting method is developed to generate a strong text classifier by combining multiple single-kernel SVMs based on the samples selected by the TSSM. Experimental results on standard datasets demonstrate that our text detection method is robust to complex backgrounds and multilingual text and shows stable performance across different standard datasets.
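
A sketch of the sample reference-map idea, assuming OpenCV's MSER and the spectral-residual saliency model from opencv-contrib; the double-threshold values are illustrative, not the paper's:

```python
import cv2

img = cv2.imread("scene.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Saliency map (spectral residual; requires opencv-contrib-python).
sal_model = cv2.saliency.StaticSaliencySpectralResidual_create()
_, sal = sal_model.computeSaliency(img)

# Score each MSER by its mean saliency; a double threshold separates
# confident text samples, confident background, and an ignored band.
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(gray)
pos_samples, neg_samples = [], []
for r in regions:
    x, y, w, h = cv2.boundingRect(r.reshape(-1, 1, 2))
    score = float(sal[y:y + h, x:x + w].mean())
    if score > 0.6:        # illustrative upper threshold -> text sample
        pos_samples.append((x, y, w, h))
    elif score < 0.2:      # illustrative lower threshold -> background
        neg_samples.append((x, y, w, h))
```

The positive and negative samples gathered this way would then train the per-image classifier (the paper's boosted combination of single-kernel SVMs), keeping the whole pipeline free of external dataset dependence.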