• Title/Summary/Keyword: Scene Text Recognition (장면 텍스트 인식)

Search results: 9 (processing time: 0.021 seconds)

The Slope Extraction and Compensation Based on Adaptive Edge Enhancement to Extract Scene Text Region (장면 텍스트 영역 추출을 위한 적응적 에지 강화 기반의 기울기 검출 및 보정)

  • Back, Jaegyung;Jang, Jaehyuk;Seo, Yeong Geon
    • Journal of Digital Contents Society
    • /
    • v.18 no.4
    • /
    • pp.777-785
    • /
    • 2017
  • In the modern world, a great deal of information can be obtained by extracting and recognizing text in scene images, so techniques for extracting and recognizing text regions from a scene are constantly evolving. They can be broadly divided into texture-based methods, connected-component methods, and hybrids of the two. Texture-based methods find and extract text based on the fact that text differs from its surroundings in values such as color and brightness. Connected-component methods group similar neighboring pixels into connected elements and then classify them using their geometrical properties. In this paper, we propose a method that adaptively adjusts edge enhancement to improve the accuracy of text-region extraction, and that detects and corrects the slope of the image using edges and image segmentation. Because the method corrects the image slope, it extracts only the exact region containing the text, achieving an extraction rate 15% more accurate than MSER and 10% more accurate than EEMSER.
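The slope-detection-and-compensation idea described in the abstract can be sketched as follows. This is an illustrative reading, not the paper's implementation: it fits a least-squares line to edge-pixel coordinates and rotates by the negative of the fitted angle; all function names are made up for the example.

```python
import math

def estimate_slope(edge_points):
    """Least-squares fit of y = a*x + b over edge-pixel coordinates;
    returns the dominant text-line angle in degrees."""
    n = len(edge_points)
    sx = sum(x for x, _ in edge_points)
    sy = sum(y for _, y in edge_points)
    sxx = sum(x * x for x, _ in edge_points)
    sxy = sum(x * y for x, y in edge_points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # fitted slope
    return math.degrees(math.atan(a))

def rotate_point(x, y, degrees):
    """Rotate (x, y) about the origin; used with the negated detected
    angle to compensate for the slope."""
    t = math.radians(degrees)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

# Edge points lying on a line with slope 0.1 (about 5.7 degrees).
pts = [(x, 0.1 * x) for x in range(1, 50)]
angle = estimate_slope(pts)
corrected = [rotate_point(x, y, -angle) for x, y in pts]
# After compensation the points lie on a horizontal line (y close to 0).
```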

Mobile Phone Camera Based Scene Text Detection Using Edge and Color Quantization (에지 및 컬러 양자화를 이용한 모바일 폰 카메라 기반장면 텍스트 검출)

  • Park, Jong-Cheon;Lee, Keun-Wang
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.3
    • /
    • pp.847-852
    • /
    • 2010
  • Text in natural images is a varied and important image feature, so detecting, extracting, and recognizing text is studied as an important research area. Recently, many applications in various fields have been developed on top of mobile-phone camera technology. The proposed method detects edge components in the gray-scale image, finds the boundaries of text regions using the local standard deviation, and obtains connected components using the Euclidean distance in RGB color space. The detected edges and connected components are labeled, and a bounding box is obtained for each region. Text candidates are selected using heuristic rules for text. Detected candidate text regions are merged into single candidate regions, and the final text regions are detected by verifying the candidates using similarity and adjacency between candidate regions. Experimental results show an improved text-region detection rate owing to the complementary use of edges and color connected components.
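The connected-component step the abstract describes — grouping adjacent pixels whose RGB Euclidean distance is small — can be sketched as a flood fill. This is a minimal toy, not the paper's method; thresholds and names are illustrative.

```python
import math
from collections import deque

def color_components(pixels, threshold):
    """4-connected components: adjacent pixels are grouped when their
    RGB Euclidean distance is below `threshold`."""
    h, w = len(pixels), len(pixels[0])
    labels = [[-1] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            labels[sy][sx] = count
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == -1:
                        if math.dist(pixels[y][x], pixels[ny][nx]) < threshold:
                            labels[ny][nx] = count
                            queue.append((ny, nx))
            count += 1
    return labels, count

# Tiny image: two dark "stroke" columns separated by a white column.
img = [[(0, 0, 0), (255, 255, 255), (10, 10, 10)],
       [(0, 0, 0), (255, 255, 255), (10, 10, 10)]]
labels, count = color_components(img, threshold=100)
# Three components: left dark column, white column, right dark column.
```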

Three-Level Color Clustering Algorithm for Binarizing Scene Text Images (자연영상 텍스트 이진화를 위한 3단계 색상 군집화 알고리즘)

  • Kim Ji-Soo;Kim Soo-Hyung
    • The KIPS Transactions:PartB
    • /
    • v.12B no.7 s.103
    • /
    • pp.737-744
    • /
    • 2005
  • In this paper, we propose a three-level color clustering algorithm for the binarization of text regions extracted from natural scene images. The proposed algorithm consists of three phases of color segmentation. First, ordinary images in which the text is well separated from the background are binarized. In the second phase, the input image is passed through a high-pass filter to deal with images affected by natural or artificial light. Finally, the image is passed through a low-pass filter to deal with texture in the text and/or background. We show that the proposed algorithm is more effective than gray-information-based binarization algorithms. To evaluate its effectiveness, we use a commercial OCR package, ARMI 6.0, to measure recognition accuracy on the binarized images. The experimental results on word and character recognition show that the proposed approach is more accurate than conventional methods by over 35%.
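For the first phase — binarizing images whose text separates cleanly from the background — a standard choice is a global threshold that maximizes between-class variance (Otsu's method). The paper's actual per-phase rules are not reproduced here; this is just a self-contained sketch of that kind of binarization.

```python
def otsu_threshold(gray):
    """Global threshold maximizing between-class variance over a list
    of 0-255 gray values (Otsu's method)."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0
    for t in range(256):
        w0 += hist[t]              # pixels in the dark class (<= t)
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                             # dark-class mean
        m1 = (sum_all - sum0) / (total - w0)       # bright-class mean
        var = w0 * (total - w0) * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal sample: dark text pixels near 30-35, background near 200.
sample = [30] * 40 + [35] * 10 + [200] * 50
t = otsu_threshold(sample)
binary = [1 if v > t else 0 for v in sample]  # 1 = background, 0 = text
```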

Audio-Visual Scene Aware Dialogue System Utilizing Action From Vision and Language Features (이미지-텍스트 자질을 이용한 행동 포착 비디오 기반 대화시스템)

  • Jungwoo Lim;Yoonna Jang;Junyoung Son;Seungyoon Lee;Kinam Park;Heuiseok Lim
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.253-257
    • /
    • 2023
  • Recently, various dialogue systems have been applied to real-world human-machine interfaces such as smartphone assistants, in-car navigation, voice-controlled speakers, and human-centered robots. However, most dialogue systems operate on text alone and cannot handle multimodal input. Solving this problem requires a dialogue system that integrates multimodal scene understanding, such as video. Existing video-grounded dialogue systems mostly focus on fusing diverse features such as vision, image, and audio, or on aligning images and text well through pre-training, and thus miss important action cues and sound cues. This paper improves a video-grounded dialogue system by exploiting pre-trained image-text alignment embeddings together with action cues and sound cues. The proposed model encodes text, image, and audio embeddings, extracts relevant frames and action cues from them, and generates utterances. Experiments on the AVSD dataset show that the proposed model outperforms existing models, and we comparatively analyze representative image-text features within a video-grounded dialogue system.


Scene Text Extraction in Natural Images using Hierarchical Feature Combination and Verification (계층적 특징 결합 및 검증을 이용한 자연이미지에서의 장면 텍스트 추출)

  • 최영우;김길천;송영자;배경숙;조연희;노명철;이성환;변혜란
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.4
    • /
    • pp.420-438
    • /
    • 2004
  • Text contained in natural images, whether placed artificially or occurring naturally, carries significant and detailed information about the scene. A method that can extract and recognize such text in real time could be applied to many important applications. In this paper, we suggest a new method that extracts text areas in natural images using the low-level image features of color continuity, gray-level variation, and color variance, and that verifies the extracted candidate regions using a high-level text feature, the stroke; the two levels of features are combined hierarchically. Color continuity is used because most characters in the same text region have the same color, and gray-level variation is used because text strokes are distinctive in their gray values against the background. Color variance is used for the same reason in the color domain, and it is more sensitive than gray-level variation. The stroke-level features are extracted with a multi-resolution wavelet transform on local image areas, and the feature vectors are fed to an SVM (Support Vector Machine) classifier for verification. We tested the proposed method on various kinds of natural images and confirmed that the extraction rates are very high even on images with complex backgrounds.
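The stroke features above come from a multi-resolution wavelet transform. One level of the simplest such transform (Haar) can be sketched as pairwise averages and differences; stroke boundaries show up as large detail coefficients. This is a sketch of the transform only, not the paper's feature extraction or its SVM stage.

```python
def haar_1d(signal):
    """One level of the 1-D Haar wavelet transform: per-pair averages
    (approximation) and per-pair differences (detail). Sharp stroke
    edges produce large-magnitude detail coefficients."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

# A horizontal scanline crossing a dark stroke on a bright background.
scanline = [200, 200, 200, 40, 40, 200]
approx, detail = haar_1d(scanline)
# approx = [200.0, 120.0, 120.0]; detail = [0.0, 80.0, -80.0]
# The nonzero detail coefficients mark the stroke's two edges.
```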

Character Recognition Using Projections in Video (비디오에서 프로젝션을 이용한 문자 인식)

  • Baek, Jeong-Uk;Shin, Seong-Yoon;Rhee, Yang-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2009.10a
    • /
    • pp.196-197
    • /
    • 2009
  • In video, character recognition is performed through projections on the key frames generated by scene change detection. Characters are separated from one another by vertical projection. Each syllable is separated into its initial, medial, and final phonemes (Cho-sung, Jung-sung, and Jong-sung) and classified into one of six pattern types through horizontal projection. Phonemes are then projected in the horizontal, vertical, diagonal, and reverse-diagonal directions and recognized using the four-direction projections together with location information.
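The vertical-projection step used to separate characters can be sketched as follows: sum each column of a binary glyph image and split at zero-valued columns. A toy example, not the paper's implementation.

```python
def vertical_projection(binary):
    """Column sums of a binary image; zero-valued columns are the
    gaps that separate characters."""
    return [sum(col) for col in zip(*binary)]

def split_characters(binary):
    """Return (start, end) column ranges whose projection is non-zero."""
    proj = vertical_projection(binary)
    segments, start = [], None
    for i, v in enumerate(proj):
        if v and start is None:
            start = i
        elif not v and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(proj)))
    return segments

# Two 2-column "characters" separated by one empty column.
img = [[1, 1, 0, 1, 1],
       [1, 0, 0, 0, 1]]
print(split_characters(img))  # [(0, 2), (3, 5)]
```

The same idea applied row-wise (horizontal projection) yields the phoneme-type classification the abstract mentions.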


Performance Improvement of TextFuseNet using Image Sharpening (선명화 기법을 이용한 TextFuseNet 성능 향상)

  • Jeong, Ji-Yeon;Cheon, Ji-Eun;Jung, Yuchul
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.01a
    • /
    • pp.71-73
    • /
    • 2021
  • This paper proposes applying an image-processing sharpening technique to TextFuseNet, a recent framework for scene text detection. Scene text detection is the task of recognizing text against arbitrary backgrounds such as outdoor signboards and road signs, and TextFuseNet is one framework for it. TextFuseNet detects text at the character, word, and global levels; our goal here is to improve its performance by applying sharpening as a preprocessing step. We used both a conventional sharpening filter and unsharp masking, and confirmed that applying the sharpening filter improved AP by 0.9%.
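A conventional sharpening filter is typically a 3x3 convolution with a center weight of 5 and -1 on the four neighbors. The paper does not state its exact kernel, so the values below are the common textbook choice, shown on a tiny grayscale grid.

```python
def sharpen(img):
    """Apply the classic 3x3 sharpening kernel (center 5, cross -1) to
    the interior pixels of a grayscale image; border pixels are copied.
    Real pipelines would also clip results to the 0-255 range."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (5 * img[y][x]
                         - img[y - 1][x] - img[y + 1][x]
                         - img[y][x - 1] - img[y][x + 1])
    return out

# A dim pixel surrounded by brighter neighbors is pushed further down,
# increasing local contrast around edges (which helps text detectors).
img = [[100, 100, 100],
       [100,  80, 100],
       [100, 100, 100]]
print(sharpen(img)[1][1])  # 0
```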


An Extracting Text Area Using Adaptive Edge Enhanced MSER in Real World Image (실세계 영상에서 적응적 에지 강화 기반의 MSER을 이용한 글자 영역 추출 기법)

  • Park, Youngmok;Park, Sunhwa;Seo, Yeong Geon
    • Journal of Digital Contents Society
    • /
    • v.17 no.4
    • /
    • pp.219-226
    • /
    • 2016
  • In everyday life, the information we recognize with our eyes and use is diverse and massive, yet even current technologies improved by artificial intelligence fall far short of human visual processing ability. Nevertheless, many researchers are trying to extract information from everyday scenes, concentrating in particular on recognizing textual information. Within text recognition, extracting text from ordinary documents is already used in some information-processing fields, but extracting and recognizing text from real-world images remains far less developed, because real images vary widely in properties such as color, size, and orientation. In this paper, we apply an adaptive edge-enhanced MSER (Maximally Stable Extremal Regions) algorithm to extract text areas and scene text in these diverse environments, and show through experiments that the proposed method compares favorably with existing ones.
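The core MSER idea — an extremal region is "stable" when its size barely changes as the threshold sweeps — can be illustrated on a 1-D intensity profile. This is a toy analogy only; it is neither the standard 2-D MSER algorithm (OpenCV provides that) nor the paper's edge-enhanced variant.

```python
def bright_regions(signal, t):
    """Maximal runs where signal >= t: 1-D 'extremal regions'."""
    runs, start = [], None
    for i, v in enumerate(signal):
        if v >= t and start is None:
            start = i
        elif v < t and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(signal)))
    return runs

def stability(signal, t, delta):
    """MSER-style stability of the largest region at threshold t:
    relative size change across t +/- delta (smaller = more stable)."""
    size = lambda th: max((e - s for s, e in bright_regions(signal, th)), default=0)
    if size(t) == 0:
        return float("inf")
    return (size(t - delta) - size(t + delta)) / size(t)

# A flat bright "stroke" (value 200) on a dark, uneven background: its
# extent does not change as the threshold sweeps, so stability is 0.
signal = [10, 20, 200, 200, 200, 200, 30, 15]
print(stability(signal, t=100, delta=50))  # 0.0
```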

Study on Extracting Filming Location Information in Movies Using OCR for Developing Customized Travel Content (맞춤형 여행 콘텐츠 개발을 위한 OCR 기법을 활용한 영화 속 촬영지 정보 추출 방안 제시)

  • Park, Eunbi;Shin, Yubin;Kang, Juyoung
    • The Journal of Bigdata
    • /
    • v.5 no.1
    • /
    • pp.29-39
    • /
    • 2020
  • Purpose The atmosphere of respect for individual tastes that has spread throughout society has changed consumption trends. As a result, the travel industry is also seeing customized travel as a new trend reflecting consumers' personal tastes. In particular, interest is growing in 'film-induced tourism', one area of the travel industry. We aim to satisfy the travel motivations that individuals form while watching movies with customized travel proposals, which we expect to be a catalyst for the continued development of the film-induced tourism industry. Design/methodology/approach In this study, we implemented an OCR-based methodology for extracting and suggesting filming-location information that viewers want to visit. First, we extract a scene from a movie selected by the user using OpenCV, a real-time image-processing library. Next, we detect the locations of characters in the scene image using the EAST model, a deep-learning-based text-area detection model. The detected images are preprocessed with OpenCV built-in functions to increase recognition accuracy. Finally, after converting the characters in the images into machine-readable text using Tesseract, an optical character recognition engine, the Google Maps API returns the actual location information. Significance This research is significant in that it provides personalized tourism content using fourth-industrial-revolution technology, going beyond existing film tourism. It could be used to develop film-induced tourism packages with travel agencies in the future, and it also suggests the possibility of attracting tourists from abroad as well as supporting outbound travel.
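The preprocessing step between EAST detection and Tesseract typically starts with grayscale conversion and binarization. The sketch below shows that kind of step in pure Python (the BT.601 luma weights are a standard choice, not taken from the paper; real pipelines would use OpenCV's built-in functions as the abstract says).

```python
def to_grayscale(rgb_img):
    """ITU-R BT.601 luma conversion: a typical first preprocessing step
    before handing a detected text crop to an OCR engine."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in rgb_img]

def binarize(gray, threshold=128):
    """Simple global binarization; OpenCV's adaptive thresholding is a
    more robust choice for unevenly lit movie frames."""
    return [[255 if v >= threshold else 0 for v in row] for row in gray]

# A 2x2 crop: bright and dark pixels become clean white/black for OCR.
frame = [[(255, 255, 255), (20, 20, 20)],
         [(30, 30, 30), (250, 250, 250)]]
print(binarize(to_grayscale(frame)))  # [[255, 0], [0, 255]]
```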