• Title/Summary/Keyword: Scene text


Extracting the Slope and Compensating the Image Using Edges and Image Segmentation in Real World Image (실세계 영상에서 경계선과 영상 분할을 이용한 기울기 검출 및 보정)

  • Paek, Jaegyung; Seo, Yeong Geon
    • Journal of Digital Contents Society / v.17 no.5 / pp.441-448 / 2016
  • In this paper, we propose a method that segments an image in which text and background are mixed, extracts its slope, and compensates for it. The proposed method uses morphology-based preprocessing and extracts edges with the Canny operator. After segmenting the edge image, it excludes regions that contain no edges, uses only the regions that do, and creates projection histograms along various candidate slope directions. From these it takes the slope with the greatest edge concentration in each region and compensates the slope of the scene. In extracting the slope of a mixed scene of text and background, the method obtains results about 0.7% better than existing methods because it excludes the useless regions in which no edges exist.
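
A minimal sketch of the slope-estimation idea in the abstract above — score candidate angles by how sharply Canny edge pixels concentrate in a projection histogram, then rotate by the best angle — assuming OpenCV and NumPy. The file name, angle range, and thresholds are illustrative, and the morphology preprocessing and region segmentation steps the authors describe are omitted.

```python
import cv2
import numpy as np

def estimate_slope(gray, angle_range=15, step=0.5):
    """Pick the rotation angle whose row-wise projection of Canny edges
    is most concentrated (largest histogram variance)."""
    edges = cv2.Canny(gray, 100, 200)
    h, w = edges.shape
    best_angle, best_score = 0.0, -1.0
    for angle in np.arange(-angle_range, angle_range + step, step):
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        rotated = cv2.warpAffine(edges, M, (w, h))
        hist = rotated.sum(axis=1).astype(np.float64)  # row-wise edge mass
        score = hist.var()                             # peaky histogram => aligned text
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle

# Usage: rotate the original image back by the estimated slope.
img = cv2.imread("scene.jpg")                          # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
angle = estimate_slope(gray)
M = cv2.getRotationMatrix2D((img.shape[1] / 2, img.shape[0] / 2), angle, 1.0)
deskewed = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
```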

Mobile Phone Camera Based Scene Text Detection Using Edge and Color Quantization (에지 및 컬러 양자화를 이용한 모바일 폰 카메라 기반장면 텍스트 검출)

  • Park, Jong-Cheon; Lee, Keun-Wang
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.3 / pp.847-852 / 2010
  • Text in natural images is a varied and important image feature, so detecting, extracting, and recognizing text is studied as an important research area. Recently, many applications in various fields are being developed based on mobile phone camera technology. The proposed method detects edge components from the gray-scale image, detects the boundaries of text regions by local standard deviation, and obtains connected components using Euclidean distance in RGB color space. The detected edges and connected components are labeled and a bounding box is obtained for each region. Text candidates are selected with heuristic rules for text. Detected candidate text regions are merged into single candidate regions, and text regions are then detected by verifying the candidates using adjacency and similarity characteristics between candidate text regions. Experimental results show that the text region detection rate is improved by the complementary use of edge and color connected components.
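
A rough sketch of combining an edge cue with color-based connected components to produce candidate text boxes, assuming OpenCV and NumPy. The reference color, distance threshold, and size/aspect rules below are illustrative stand-ins for the quantization and heuristic rules described in the abstract.

```python
import cv2
import numpy as np

img = cv2.imread("sign.jpg")                      # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Edge cue: dense edge responses usually surround character strokes.
edges = cv2.Canny(gray, 80, 160)
edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))

# Color cue: keep pixels whose RGB color is within a Euclidean distance of a
# reference color (here: the mean color along strong edges).
ref = img[edges > 0].mean(axis=0)
dist = np.linalg.norm(img.astype(np.float32) - ref, axis=2)
color_mask = (dist < 60).astype(np.uint8) * 255

# Combine the two cues and take connected components as text candidates.
candidate = cv2.bitwise_and(color_mask, edges)
num, labels, stats, _ = cv2.connectedComponentsWithStats(candidate)

boxes = []
for i in range(1, num):                           # skip background label 0
    x, y, w, h, area = stats[i]
    if area > 30 and 0.1 < w / float(h) < 10:     # crude heuristic rules
        boxes.append((x, y, w, h))
```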

A Gaussian Mixture Model for Binarization of Natural Scene Text

  • Tran, Anh Khoa; Lee, Gueesang
    • Smart Media Journal / v.2 no.2 / pp.14-19 / 2013
  • Recently, due to the increased use of scanned images, the text segmentation techniques that play a critical role in optimizing the quality of scanned images need to be updated and advanced. In this study, an algorithm has been developed by modifying the Gaussian mixture model (GMM), integrating the calculation of a Gaussian detection gradient and the estimation of the number of clusters. The experimental results show an efficient method for text segmentation in natural scenes such as storefronts, street signs, and scanned journals and newspapers, for texts of different sizes, shapes, and colors under lighting changes and complex backgrounds. This indicates that our algorithm and research approach can address various issues that remain limitations of earlier algorithms and methods.
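
A minimal sketch of GMM-based binarization with the number of components chosen automatically (here by BIC), assuming scikit-learn. The gradient-based modification mentioned in the abstract is not reproduced, and treating the smallest cluster as the text layer is an illustrative heuristic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def binarize_gmm(rgb_image, max_components=4):
    """Cluster pixel colors with a GMM, choosing the component count by BIC,
    and return a mask of the cluster assumed to be text (smallest cluster)."""
    pixels = rgb_image.reshape(-1, 3).astype(np.float64)
    best_gmm, best_bic = None, np.inf
    for k in range(2, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=0).fit(pixels)
        bic = gmm.bic(pixels)
        if bic < best_bic:
            best_gmm, best_bic = gmm, bic
    labels = best_gmm.predict(pixels)
    text_label = np.bincount(labels).argmin()   # heuristic: smallest cluster = text
    return (labels == text_label).reshape(rgb_image.shape[:2])
```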


A Recognition Method for Korean Spatial Background in Historical Novels (한국어 역사 소설에서 공간적 배경 인식 기법)

  • Kim, Seo-Hee; Kim, Seung-Hoon
    • Journal of Information Technology Services / v.15 no.1 / pp.245-253 / 2016
  • Background is one of the most important elements of a novel, along with characters and events, and denotes the time, place, and situation in which the characters appear. Among these, the spatial background helps convey the topic of a novel, so it can help readers choose a novel they want to read. In this paper, we target Korean historical novels. In English text, the spatial background can be recognized relatively easily because of capitalization and of words that carry spatial information, such as Bank, University, and City. In Korean text, however, it is difficult to recognize the spatial background because the letters carry little such information. Previous studies used machine learning or dictionaries and rules to recognize spatial information in text such as news articles and text messages. In this paper, we build a place-name dictionary referring to sources such as 'Korean history' and 'Google Maps'. In contrast to previous work, we also propose a method for recognizing the spatial background based on patterns of postpositions in Korean sentences, since in Korean the postpositions indicate how a spatial background is used. In addition, we propose a method based on morpheme analysis and word frequency in the novel text to raise the accuracy of spatial background recognition. The recognized spatial background can help readers grasp the atmosphere of a novel and understand its events through the spatial background of the scenes in which characters appear.
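
A toy sketch of the postposition-pattern idea: look up place-name candidates in a dictionary and keep those followed by a locative postposition. The dictionary entries and postposition list below are illustrative assumptions, not the authors' resources, and the morpheme analysis and frequency steps are omitted.

```python
import re

# Illustrative place-name dictionary (in practice built from historical
# references and map data, as described above).
PLACE_DICT = {"한양", "평양", "경주", "개성"}

# Locative postpositions that often mark a spatial background in Korean.
LOCATIVE_POSTPOSITIONS = ("에서", "으로", "로", "에")

def find_spatial_background(sentence):
    hits = []
    for place in PLACE_DICT:
        for josa in LOCATIVE_POSTPOSITIONS:
            if re.search(re.escape(place) + josa, sentence):
                hits.append(place)
                break
    return hits

print(find_spatial_background("그는 한양에서 과거를 준비하였다."))  # ['한양']
```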

Text Detection and Binarization using Color Variance and an Improved K-means Color Clustering in Camera-captured Images (카메라 획득 영상에서의 색 분산 및 개선된 K-means 색 병합을 이용한 텍스트 영역 추출 및 이진화)

  • Song Young-Ja; Choi Yeong-Woo
    • The KIPS Transactions: Part B / v.13B no.3 s.106 / pp.205-214 / 2006
  • Text in images carries significant and detailed information about the scene, and if such text can be automatically detected and recognized in real time, it can be used in various applications. In this paper, we propose a new text detection method that can find text in various camera-captured images, and a text segmentation method for the detected text regions. The detection method uses color variance in RGB color space as its detection feature, and the segmentation method uses an improved K-means color clustering in RGB color space. We have tested the proposed methods on various document-style and natural scene images captured by digital cameras and a mobile-phone camera, and also on a portion of the ICDAR[1] contest images.
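
A rough sketch of the two stages — local color variance as a detection cue and K-means color clustering of the detected pixels for binarization — assuming OpenCV and scikit-learn. The window size, thresholds, cluster count, and the choice of the darkest cluster as text are illustrative simplifications of the proposed method.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

img = cv2.imread("captured.jpg")                     # placeholder path

# Detection cue: text regions tend to show high local variance in each color channel.
f = img.astype(np.float32)
mean = cv2.blur(f, (15, 15))
sq_mean = cv2.blur(f * f, (15, 15))
color_var = (sq_mean - mean * mean).sum(axis=2)      # summed per-channel local variance
candidate_mask = color_var > color_var.mean() * 2    # crude threshold

# Binarization: cluster colors inside the detected region, keep the darkest cluster.
ys, xs = np.nonzero(candidate_mask)
region_pixels = img[ys, xs].astype(np.float64)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(region_pixels)
text_cluster = kmeans.cluster_centers_.sum(axis=1).argmin()   # darkest center
binary = np.zeros(img.shape[:2], np.uint8)
binary[ys, xs] = (kmeans.labels_ == text_cluster).astype(np.uint8) * 255
```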

Ship Number Recognition Method Based on An improved CRNN Model

  • Wenqi Xu; Yuesheng Liu; Ziyang Zhong; Yang Chen; Jinfeng Xia; Yunjie Chen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.740-753 / 2023
  • Text recognition in natural scene images is a challenging problem in computer vision. Accurate identification of ship number characters can effectively improve the level of ship traffic management. However, due to motion blur and text occlusion, it is difficult for ship number recognition accuracy to meet practical requirements. To solve these problems, this paper proposes a dual-branch network based on the CRNN recognition network that couples image restoration and character recognition. A CycleGAN module is used for the blur restoration branch and a Pix2pix module for the character occlusion branch, and the two are coupled to reduce the impact of image blur and occlusion. The restored image is fed into the text recognition branch to improve recognition accuracy. Extensive experiments show that the model is robust and easy to train. Experiments on the CTW dataset and real ship images illustrate that our method obtains more accurate results.
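
A structural skeleton only, assuming PyTorch: a placeholder restoration branch (standing in for the CycleGAN/Pix2pix generators) feeds a compact CRNN whose per-timestep logits would be decoded with CTC. All layer sizes and the class count are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RestorationBranch(nn.Module):
    """Placeholder for the deblur/de-occlusion generators described above."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class CRNN(nn.Module):
    """Conv feature extractor + bidirectional LSTM + per-timestep classifier."""
    def __init__(self, num_classes):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2))
        self.rnn = nn.LSTM(128 * 8, 256, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, num_classes)
    def forward(self, x):                         # x: (B, 3, 32, W)
        f = self.cnn(x)                           # (B, 128, 8, W/4)
        b, c, h, w = f.shape
        seq = f.permute(0, 3, 1, 2).reshape(b, w, c * h)
        logits, _ = self.rnn(seq)
        return self.fc(logits)                    # per-timestep logits for CTC decoding

restore, recognize = RestorationBranch(), CRNN(num_classes=37)
x = torch.randn(1, 3, 32, 128)                    # a dummy cropped ship-number image
out = recognize(restore(x))                       # shape (1, 32, 37)
```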

Text Region Verification in Natural Scene Images using Multi-resolution Wavelet Transform and Support Vector Machine (다해상도 웨이블릿 변환과 써포트 벡터 머신을 이용한 자연영상에서의 문자 영역 검증)

  • Bae Kyungsook; Choi Youngwoo
    • The KIPS Transactions: Part B / v.11B no.6 / pp.667-674 / 2004
  • Extracting text from images is a fundamental and important problem for understanding images. This paper proposes a text region verification method based on statistical features of character strokes. The method extracts 36-dimensional features from 16×16 text and non-text images using the wavelet transform (these 36 features express the strokes and directions of characters) and selects the 12 sub-features that yield adequate separation between the two classes. After feature selection, an SVM is trained on the selected features. For verification of a text region, each 16×16 image block is scanned and classified as text or non-text, and the region is then finally decided to be a text or non-text region. The proposed method is able to verify text regions that can otherwise hardly be distinguished.
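
A minimal sketch of wavelet-based block features fed to an SVM, assuming PyWavelets and scikit-learn. The simple sub-band statistics below stand in for the paper's 36-dimensional feature set and 12-feature selection, and the labeled training blocks are assumed to be given.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def block_features(block16):
    """Energy/spread statistics of the wavelet sub-bands of a 16x16 gray block."""
    cA, (cH, cV, cD) = pywt.dwt2(block16.astype(np.float64), "haar")
    feats = []
    for band in (cA, cH, cV, cD):
        feats += [np.abs(band).mean(), band.std(), (band ** 2).sum()]
    return np.array(feats)                        # 12-dimensional feature vector

def train_verifier(blocks, labels):
    """blocks: list of 16x16 gray arrays; labels: 1 for text, 0 for non-text."""
    X = np.stack([block_features(b) for b in blocks])
    return SVC(kernel="rbf", gamma="scale").fit(X, labels)

def verify_block(clf, block16):
    return clf.predict(block_features(block16)[None, :])[0] == 1
```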

An Extracting Text Area Using Adaptive Edge Enhanced MSER in Real World Image (실세계 영상에서 적응적 에지 강화 기반의 MSER을 이용한 글자 영역 추출 기법)

  • Park, Youngmok; Park, Sunhwa; Seo, Yeong Geon
    • Journal of Digital Contents Society / v.17 no.4 / pp.219-226 / 2016
  • In everyday life, the information we recognize and use with our eyes is diverse and massive, yet even current technologies improved by artificial intelligence fall far short of human visual processing ability. Nevertheless, many researchers are trying to extract information from everyday scenes, concentrating in particular on information consisting of text. In the field of text recognition, extracting text from ordinary documents is already used in some information processing applications, but extracting and recognizing text from real-world images is still very insufficient, because real images vary widely in properties such as color, size, and orientation. In this paper, we apply an adaptive edge-enhanced MSER (Maximally Stable Extremal Regions) method to extract text areas and scene text in these diverse environments, and show through experiments that the proposed method performs comparatively well.
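
A rough sketch of MSER on an edge-enhanced grayscale image, assuming OpenCV. Unsharp masking with fixed parameters stands in for the paper's adaptive edge enhancement, and the size/aspect filter is illustrative.

```python
import cv2
import numpy as np

img = cv2.imread("street.jpg")                    # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Edge enhancement (unsharp masking) so character boundaries become more stable
# extremal regions; the paper adapts this step to the image, which is not done here.
blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=2)
enhanced = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)

mser = cv2.MSER_create()
regions, boxes = mser.detectRegions(enhanced)

for (x, y, w, h) in boxes:
    if h > 8 and 0.1 < w / float(h) < 10:         # crude size/aspect filter
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
```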

Improved Text Recognition using Analysis of Illumination Component in Color Images (컬러 영상의 조명성분 분석을 통한 문자인식 성능 향상)

  • Choi, Mi-Young; Kim, Gye-Young; Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.12 no.3 / pp.131-136 / 2007
  • This paper proposes a new approach to eliminating the reflectance component for the detection of text in color images. Color images produced by color printing normally have an illumination component as well as a reflectance component. It is well known that the reflectance component usually hinders detecting and recognizing objects such as text in a scene, since it blurs the overall image. We have developed an approach that efficiently removes the reflectance component while preserving the illumination component. Whether an input image is Normal or Polarized is decided, to determine the lighting environment, using a histogram of the red component. By removing the reflectance component caused by changes in illumination, the blurring of text by light is reduced and the text can be extracted. The experimental results show superior performance even when an image has a complex background. Text detection and recognition performance is affected by changing illumination conditions, and our method is robust to images with different illumination conditions.
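
A toy sketch of the general idea: decide the lighting condition from the red-channel histogram, estimate a smooth illumination layer, and suppress the bright reflectance residual before binarization. This assumes OpenCV and is only a stand-in for the method in the abstract; the thresholds and Gaussian scale are illustrative.

```python
import cv2
import numpy as np

img = cv2.imread("printed.jpg")                   # placeholder path

# Crude Normal/Polarized decision from the red-channel histogram: a heavy tail of
# very bright red values suggests strong specular reflection.
red = img[:, :, 2]
polarized = (red > 240).mean() > 0.02

if polarized:
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[:, :, 2]
    illumination = cv2.GaussianBlur(v, (0, 0), sigmaX=25)    # smooth illumination estimate
    reflectance = v - illumination                            # bright specular residual
    # Keep dark detail (text strokes) but clamp bright highlights to the illumination level.
    hsv[:, :, 2] = np.clip(illumination + np.minimum(reflectance, 0), 0, 255)
    img = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
```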
