• Title/Abstract/Keyword: Scene text

118 search results

An End-to-End Sequence Learning Approach for Text Extraction and Recognition from Scene Image

  • Lalitha, G.;Lavanya, B.
    • International Journal of Computer Science & Network Security
    • /
    • Vol.22 No.7
    • /
    • pp.220-228
    • /
    • 2022
  • Images always carry useful information, so detecting text in scene images is imperative. The purpose of the proposed work is to recognize scene text images, for example hoarding (signboard) images placed along highways. Scene text detection on highway hoardings plays a vital role in road safety measures. At the initial stage, preprocessing techniques are applied to sharpen and improve the features present in the image; likewise, morphological operators are applied to close small gaps between objects. We propose a two-phase algorithm for extracting and recognizing text from scene images. In phase I, text is extracted from the scene image by applying various preprocessing techniques such as blurring, erosion, and top-hat filtering, followed by thresholding and a morphological gradient with fixed kernel sizes; a Canny edge detector is then applied to detect the text contained in the scene image. In phase II, the text is recognized using MSER (Maximally Stable Extremal Regions) and OCR. The proposed work aims to detect text in scene images from the popular dataset repositories SVT, ICDAR 2003, and MSRA-TD500; these images were captured under various illuminations and angles. The proposed algorithm produces higher accuracy in minimal execution time compared with state-of-the-art methodologies.
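
As a rough illustration of the two-phase pipeline sketched in this abstract, the following Python/OpenCV code applies a phase-I preprocessing chain and a phase-II MSER+OCR step. The kernel sizes, thresholds, and the use of pytesseract as the OCR backend are illustrative assumptions, not the authors' exact settings.

```python
# Sketch of the two-phase scene text pipeline (parameters are assumptions).
import cv2
import numpy as np
import pytesseract  # assumed OCR backend

def phase1_text_edges(image_bgr):
    """Phase I: preprocessing + Canny to expose candidate text edges."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (3, 3), 0)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    eroded = cv2.erode(blurred, kernel)
    tophat = cv2.morphologyEx(eroded, cv2.MORPH_TOPHAT, kernel)
    _, thresh = cv2.threshold(tophat, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    gradient = cv2.morphologyEx(thresh, cv2.MORPH_GRADIENT, kernel)
    return cv2.Canny(gradient, 50, 150)

def phase2_recognize(image_bgr):
    """Phase II: MSER to localize stable regions, then OCR on each crop."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    texts = []
    for pts in regions:
        x, y, w, h = cv2.boundingRect(pts.reshape(-1, 1, 2))
        crop = gray[y:y + h, x:x + w]
        texts.append(pytesseract.image_to_string(crop).strip())
    return [t for t in texts if t]
```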

Touch TT: Scene Text Extractor Using Touchscreen Interface

  • Jung, Je-Hyun;Lee, Seong-Hun;Cho, Min-Su;Kim, Jin-Hyung
    • ETRI Journal
    • /
    • Vol.33 No.1
    • /
    • pp.78-88
    • /
    • 2011
  • In this paper, we present the Touch Text exTractor (Touch TT), an interactive text segmentation tool for the extraction of scene text from camera-based images. Touch TT provides a natural interface in which a user simply indicates the location of text regions with a simple touchline. Touch TT then automatically estimates the text color and roughly locates the text regions. By inferring text characteristics from the estimated text color and text region, Touch TT can extract text components. Touch TT can also handle partially drawn lines which cover only a small section of the text area. The proposed system achieves reasonable accuracy for text extraction on moderately difficult examples from the ICDAR 2003 database and our own database.
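
The touchline-driven color estimation can be sketched as below. This is a simplification of the Touch TT idea (the paper's color modeling and component extraction are more elaborate), assuming the touchline arrives as a list of (x, y) pixel coordinates.

```python
# Simplified touchline-based text segmentation (not the paper's exact model).
import numpy as np

def extract_by_touchline(image_rgb, touchline_pts, dist_thresh=40.0):
    """Estimate text color from pixels under the touchline, then keep
    pixels whose color is close to that estimate."""
    samples = np.array([image_rgb[y, x] for x, y in touchline_pts],
                       dtype=np.float32)
    text_color = samples.mean(axis=0)            # rough text-color estimate
    dist = np.linalg.norm(image_rgb.astype(np.float32) - text_color, axis=2)
    return (dist < dist_thresh).astype(np.uint8) * 255  # binary text mask
```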

Real Scene Text Image Super-Resolution Based on Multi-Scale and Attention Fusion

  • Xinhua Lu;Haihai Wei;Li Ma;Qingji Xue;Yonghui Fu
    • Journal of Information Processing Systems
    • /
    • Vol.19 No.4
    • /
    • pp.427-438
    • /
    • 2023
  • Plenty of works have indicated that single image super-resolution (SISR) models trained on synthetic datasets are difficult to apply to real scene text image super-resolution (STISR) because of its more complex degradation. The most recent dataset for realistic STISR is TextZoom, but current methods trained on it have not considered the effect of multi-scale features of text images. In this paper, a multi-scale and attention fusion model for realistic STISR is proposed. A multi-scale learning mechanism is introduced to acquire sophisticated feature representations of text images; spatial and channel attention are introduced to capture the local information and inter-channel interactions of text images; finally, a multi-scale residual attention module is designed by fusing multi-scale learning with the attention mechanisms. Experiments on TextZoom demonstrate that the proposed model increases the average recognition accuracy of the scene text recognizer ASTER by 1.2% compared with the text super-resolution network baseline.
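
A minimal PyTorch sketch of a multi-scale residual attention block in the spirit of this abstract follows; the branch kernel sizes, channel counts, and fusion details are assumptions, not the authors' exact architecture.

```python
# Multi-scale residual attention block (illustrative sketch).
import torch
import torch.nn as nn

class MultiScaleResidualAttention(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Parallel branches with different receptive fields (multi-scale).
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        # Channel attention: squeeze-and-excitation style gating.
        self.ca = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 8, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 8, channels, 1), nn.Sigmoid())
        # Spatial attention over pooled channel statistics.
        self.sa = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        feat = self.fuse(torch.cat([self.branch3(x), self.branch5(x)], dim=1))
        feat = feat * self.ca(feat)                      # channel attention
        pooled = torch.cat([feat.mean(1, keepdim=True),
                            feat.amax(1, keepdim=True)], dim=1)
        feat = feat * self.sa(pooled)                    # spatial attention
        return x + feat                                  # residual connection
```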

Representative Batch Normalization for Scene Text Recognition

  • Sun, Yajie;Cao, Xiaoling;Sun, Yingying
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol.16 No.7
    • /
    • pp.2390-2406
    • /
    • 2022
  • Scene text recognition has important application value and has attracted the interest of many researchers. At present, many methods have achieved good results, but most existing approaches attempt to improve the performance of scene text recognition at the image level. They work well on regular scene text. However, there are still many obstacles to recognizing text in low-quality images, such as curved, occluded, or blurred text. This exacerbates the difficulty of feature extraction because image quality is uneven. In addition, model test results depend heavily on the training data, so there is still room for improvement in scene text recognition methods. In this work, we present a natural scene text recognizer that improves recognition performance at the feature level, comprising feature representation and feature enhancement. For feature representation, we propose an efficient feature extractor that combines Representative Batch Normalization with ResNet. It reduces the model's dependence on training data and improves the feature representation of different instances. For feature enhancement, we use a feature enhancement network to expand the receptive field of the feature maps, so that they contain rich feature information. The enhanced feature representation capability helps to improve the recognition performance of the model. We conducted experiments on 7 benchmarks, which show that this method is highly competitive in recognizing both regular and irregular text. The method achieved the top-1 recognition accuracy on four of the benchmarks: IC03, IC13, IC15, and SVTP.
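
For reference, here is a simplified PyTorch sketch of Representative Batch Normalization, i.e. BatchNorm augmented with instance-wise centering and scaling calibration; the calibration form below follows the general RBN idea and is an approximation, not the paper's verbatim implementation.

```python
# Simplified Representative Batch Normalization (approximate sketch).
import torch
import torch.nn as nn

class RepresentativeBN2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)
        # Learnable per-channel calibration weights.
        self.w_center = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.w_scale = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.b_scale = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        # Centering calibration: mix in each instance's own mean, which
        # reduces the model's dependence on batch statistics.
        inst_mean = x.mean(dim=(2, 3), keepdim=True)
        x = x + self.w_center * inst_mean
        x = self.bn(x)
        # Scaling calibration: gate features with instance statistics.
        inst_var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        return x * torch.sigmoid(self.w_scale * inst_var + self.b_scale)
```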

선명화 기법을 이용한 TextFuseNet 성능 향상 (Performance Improvement of TextFuseNet using Image Sharpening)

  • 정지연;천지은;정유철
    • 한국컴퓨터정보학회:학술대회논문집
    • /
    • 한국컴퓨터정보학회 2021년도 제63차 동계학술대회논문집, Vol.29 No.1
    • /
    • pp.71-73
    • /
    • 2021
  • In this paper, we apply image sharpening, an image processing technique, to TextFuseNet, a recent framework for scene text detection. Scene text detection is the task of recognizing characters against arbitrary backgrounds such as outdoor signboards, and TextFuseNet is one such framework. TextFuseNet detects text at the character, word, and global levels; our aim here is to improve its performance by applying sharpening. We used the conventional sharpening filter method and the unsharp masking method, and confirmed that applying the sharpening filter method improved AP by 0.9%.
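
The two sharpening techniques named in this abstract can be sketched in Python/OpenCV as follows; the 3x3 kernel and the unsharp-masking parameters are common defaults, assumed here.

```python
# Two standard sharpening techniques (parameter values are assumptions).
import cv2
import numpy as np

def sharpening_filter(image):
    """Classic 3x3 sharpening kernel."""
    kernel = np.array([[0, -1, 0],
                       [-1, 5, -1],
                       [0, -1, 0]], dtype=np.float32)
    return cv2.filter2D(image, -1, kernel)

def unsharp_masking(image, sigma=1.0, amount=1.5):
    """Sharpen by adding back the difference from a Gaussian-blurred copy."""
    blurred = cv2.GaussianBlur(image, (0, 0), sigma)
    return cv2.addWeighted(image, 1 + amount, blurred, -amount, 0)
```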


텐서보팅을 이용한 텍스트 배열정보의 획득과 이를 이용한 텍스트 검출 (Extraction of Text Alignment by Tensor Voting and its Application to Text Detection)

  • 이귀상;또안;박종현
    • 한국정보과학회논문지:소프트웨어및응용
    • /
    • Vol.36 No.11
    • /
    • pp.912-919
    • /
    • 2009
  • In this paper, we present a new method for detecting characters in natural images using 2D tensor voting and an edge-based method. The characters of a text string are usually arranged along a continuous, smooth curve and lie close to one another, and these properties can be detected effectively by tensor voting. 2D tensor voting computes the continuity of tokens as curve saliency, a property used in various kinds of image analysis. First, edge detection is used to find candidate regions where text may be located in the image; the continuity of these candidate regions is then verified by tensor voting to remove noise regions and isolate only the text regions. Experimental results confirm that the proposed method effectively detects text regions in complex natural images.
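
A minimal sketch of computing curve saliency with 2D tensor voting follows (ball votes only); the decay function and sigma are standard choices, and the full stick-voting machinery of the paper is omitted.

```python
# Curve saliency via simplified 2D tensor voting (ball votes only).
import numpy as np

def curve_saliency(points, sigma=15.0):
    """Accumulate second-moment 'votes' from neighboring tokens at each
    token; lambda1 - lambda2 is high for tokens lying on a smooth curve."""
    pts = np.asarray(points, dtype=np.float64)
    saliency = np.zeros(len(pts))
    for i, p in enumerate(pts):
        tensor = np.zeros((2, 2))
        for q in pts:
            d = q - p
            r = np.hypot(d[0], d[1])
            if r == 0 or r > 3 * sigma:
                continue                       # outside the voting field
            u = d / r                          # unit direction to the voter
            decay = np.exp(-(r * r) / (sigma * sigma))
            tensor += decay * np.outer(u, u)   # ball vote as outer product
        lam = np.linalg.eigvalsh(tensor)       # eigenvalues, ascending
        saliency[i] = lam[1] - lam[0]          # curve saliency
    return saliency
```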

Text Detection in Scene Images Based on Interest Points

  • Nguyen, Minh Hieu;Lee, Gueesang
    • Journal of Information Processing Systems
    • /
    • Vol.11 No.4
    • /
    • pp.528-537
    • /
    • 2015
  • Text in images is one of the most important cues for understanding a scene. In this paper, we propose a novel approach based on interest points to localize text in natural scene images. The main ideas of this approach are as follows: first, we use interest point detection techniques, which extract the corner points of characters and the center points of edge-connected components, to select candidate regions. Second, these candidate regions are verified using tensor voting, which is capable of extracting perceptual structures from noisy data. Finally, area, orientation, and aspect ratio are used to filter out non-text regions. The proposed method was tested on the ICDAR 2003 dataset and on images of wine labels. The experimental results show the validity of this approach.
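
A rough Python/OpenCV sketch of the candidate-selection and geometric-filtering steps follows; the corner-detector parameters and the area/aspect-ratio bounds are illustrative assumptions.

```python
# Interest-point candidates plus geometric filtering (parameters assumed).
import cv2
import numpy as np

def text_candidates(image_bgr, max_corners=500):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Corner points of characters.
    corners = cv2.goodFeaturesToTrack(gray, max_corners, 0.01, 5)
    if corners is None:
        return []
    mask = np.zeros_like(gray)
    for cx, cy in corners.reshape(-1, 2):
        cv2.circle(mask, (int(cx), int(cy)), 7, 255, -1)  # group corners
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):                       # skip background label 0
        x, y, w, h, area = stats[i]
        aspect = w / float(h)
        # Filter non-text regions by area and aspect ratio.
        if 50 < area < 50000 and 0.1 < aspect < 15:
            boxes.append((x, y, w, h))
    return boxes
```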

Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol.17 No.7
    • /
    • pp.1807-1822
    • /
    • 2023
  • Innovation and the rapidly increasing functionality of user-friendly smartphones have encouraged shutterbugs to capture picturesque image macros at work or during travel. Formal signboards are placed with marketing objectives and are enriched with text to attract people. Extracting and recognizing text from natural images is an emerging research issue that needs consideration. Compared with conventional optical character recognition (OCR), the complex backgrounds, implicit noise, lighting, and orientation of these scene text photos make the problem more difficult, and Arabic scene text extraction and recognition adds a number of further complications. The method described in this paper uses a two-phase methodology to extract Arabic text, with word-boundary awareness, from scene images with varying text orientations. The first phase uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS) followed by a traditional two-layer neural network for recognition. This study also presents how an Arabic training and synthetic dataset can be created to exemplify superimposed text in different scene images. For this purpose, a dataset of 10k cropped images in which Arabic text was found was created for the detection phase, and a dataset of 127k Arabic characters for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15k quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach is used to detect these texts with high flexibility on complex Arabic scene text images, such as texts that are arbitrarily oriented, curved, or deformed. Our experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe that future researchers will excel in the field of image processing for scene text images in any language, improving or reducing noise, by enhancing the functionality of the VGG-16 based model using neural networks.
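
A minimal PyTorch sketch of a phase-1 convolutional autoencoder follows; the layer sizes and the single-channel text-map output are assumptions, since the paper's exact architecture is not reproduced here.

```python
# Convolutional autoencoder for text-region prediction (illustrative sizes).
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1),
            nn.Sigmoid())  # per-pixel text-region probability map

    def forward(self, x):
        return self.decoder(self.encoder(x))
```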

YOLO, EAST: 신경망 모델을 이용한 문자열 위치 검출 성능 비교 (YOLO, EAST : Comparison of Scene Text Detection Performance, Using a Neural Network Model)

  • 박찬용;임영민;정승대;조영혁;이병철;이규현;김진욱
    • 정보처리학회논문지:소프트웨어 및 데이터공학
    • /
    • Vol.11 No.3
    • /
    • pp.115-124
    • /
    • 2022
  • In this paper, we apply the YOLO and EAST neural networks, which have recently been used widely in various fields, to the problem of detecting text strings in images, and we compare and analyze their performance. YOLO networks have generally been known to perform poorly at detecting text regions in images; our experiments show that YOLOv3 is indeed comparatively weak at text detection, but the recently released YOLOv4 and YOLOv5 detect Korean and English text strings in images of various forms with excellent performance. We therefore expect these YOLO-based text detection methods to be widely used in the character recognition field.
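
For the EAST side of the comparison, here is a minimal sketch using OpenCV's DNN module; it assumes the publicly available frozen_east_text_detection.pb weights, uses common defaults for the score threshold and input size, and ignores the predicted rotation angle for simplicity.

```python
# EAST text detection via OpenCV DNN (axis-aligned boxes only).
import cv2
import numpy as np

def detect_text_east(image_bgr, model_path="frozen_east_text_detection.pb",
                     conf_thresh=0.5, size=320):
    net = cv2.dnn.readNet(model_path)
    blob = cv2.dnn.blobFromImage(image_bgr, 1.0, (size, size),
                                 (123.68, 116.78, 103.94), swapRB=True)
    net.setInput(blob)
    scores, geometry = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                    "feature_fusion/concat_3"])
    boxes = []
    h, w = scores.shape[2:4]
    for y in range(h):
        for x in range(w):
            if scores[0, 0, y, x] < conf_thresh:
                continue
            # Distances from each grid cell (stride 4) to the box edges;
            # the fifth geometry channel (rotation angle) is ignored here.
            top, right, bottom, left = geometry[0, 0:4, y, x]
            cx, cy = x * 4.0, y * 4.0
            boxes.append((cx - left, cy - top, cx + right, cy + bottom))
    return boxes
```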

자연 영상에서의 정확한 문자 검출에 관한 연구 (A Study on Localization of Text in Natural Scene Images)

  • 최미영;김계영;최형일
    • 한국컴퓨터정보학회논문지
    • /
    • Vol.13 No.5
    • /
    • pp.77-84
    • /
    • 2008
  • In this paper, we propose a new approach for efficiently detecting characters in natural images. Reflection components present in an image acquired under light or illumination can introduce errors into text extraction and recognition when the boundaries of characters or objects of interest become blurred or when objects of interest blend into the background. To remove the reflection component, we first detect two peak points in the histogram of the image's red color channel. The distribution between the two detected peaks is used to determine whether the image is a normal or a specular (polarized) image. For a normal image, text regions are detected without additional processing; for a specular image, homomorphic filtering is applied to remove the regions corresponding to the reflection component of the illumination. To detect text regions, color merging and a saliency map are then used to determine candidate character regions. Finally, the final text regions are detected using the two sets of candidate regions.
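
The homomorphic filtering step used to suppress the illumination (reflection) component can be sketched as follows; the cutoff frequency and gain values are assumed defaults.

```python
# Homomorphic filtering: log -> FFT -> high-frequency emphasis -> inverse.
import numpy as np

def homomorphic_filter(gray, d0=30.0, gamma_low=0.5, gamma_high=2.0):
    img = np.log1p(gray.astype(np.float64))
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = (u[:, None] ** 2) + (v[None, :] ** 2)
    # Gaussian high-frequency emphasis: damp illumination (low frequencies),
    # boost reflectance (high frequencies).
    H = (gamma_high - gamma_low) * (1 - np.exp(-D2 / (2 * d0 ** 2))) + gamma_low
    F = np.fft.fftshift(np.fft.fft2(img))
    out = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    out = np.expm1(out)
    out = (out - out.min()) / (out.max() - out.min() + 1e-9)
    return (out * 255).astype(np.uint8)
```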
