• Title/Summary/Keyword: Scene Text Detection (장면 텍스트 검출)


The Slope Extraction and Compensation Based on Adaptive Edge Enhancement to Extract Scene Text Region (장면 텍스트 영역 추출을 위한 적응적 에지 강화 기반의 기울기 검출 및 보정)

  • Back, Jaegyung;Jang, Jaehyuk;Seo, Yeong Geon
    • Journal of Digital Contents Society / v.18 no.4 / pp.777-785 / 2017
  • In the modern world, a great deal of information can be obtained by extracting and recognizing text from scene images, so techniques for extracting and recognizing text regions continue to evolve. They can be broadly divided into texture-based methods, connected-component methods, and hybrids of the two. Texture-based methods find and extract text by exploiting the fact that text and non-text differ in values such as color and brightness. Connected-component methods group pixels with similar values into connected elements and then classify them using geometric properties. In this paper, we propose a method that adaptively changes edge enhancement to improve the accuracy of text region extraction, and that detects and corrects the slope of the image using edges and image segmentation. Because the method corrects the image slope, it extracts only the exact area containing the text, achieving an extraction rate 15% more accurate than MSER and 10% more accurate than EEMSER.
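The slope detection-and-correction step described in this abstract can be illustrated with a minimal, generic sketch (not the paper's algorithm): fit a line through already-extracted edge-pixel coordinates by least squares, then rotate the image by the negated fitted angle so the text baseline becomes horizontal. The edge points below are synthetic.

```python
import math

def estimate_slope(points):
    """Least-squares fit of a line y = a*x + b through edge pixel
    coordinates; returns the slope a."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

def correction_angle(points):
    """Rotation angle (degrees) that would make the fitted
    text baseline horizontal."""
    return -math.degrees(math.atan(estimate_slope(points)))

# Synthetic edge pixels lying roughly along a line of slope 0.2
pts = [(x, 0.2 * x) for x in range(10)]
angle = correction_angle(pts)  # about -11.31 degrees
```

A real pipeline would obtain `points` from an edge detector and apply the angle with an image-rotation routine; this sketch only shows the geometry.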

Mobile Phone Camera Based Scene Text Detection Using Edge and Color Quantization (에지 및 컬러 양자화를 이용한 모바일 폰 카메라 기반장면 텍스트 검출)

  • Park, Jong-Cheon;Lee, Keun-Wang
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.3 / pp.847-852 / 2010
  • Text in natural images is a varied and important image feature, so detecting, extracting, and recognizing it is studied as an important research area. Recently, many applications in various fields are being developed on top of mobile phone camera technology. We detect edge components from the gray-scale image, find text-region boundaries using the local standard deviation, and obtain connected components using Euclidean distance in RGB color space. The detected edges and connected components are labeled, and a bounding box is obtained for each region. Text candidates are then selected with heuristic rules for text. Candidate text regions are merged into single candidate regions, and the final text regions are obtained by verifying candidates using adjacency and similarity characteristics between them. Experimental results show that the complementary use of edge and color connected components improves the text region detection rate.
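The RGB-distance connected-component step this abstract describes can be sketched generically as a flood fill that groups 4-connected pixels whose color distance stays below a threshold. The image, threshold, and connectivity here are illustrative assumptions, not the authors' parameters.

```python
from collections import deque

def color_distance(c1, c2):
    """Euclidean distance between two RGB triples."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def connected_components(img, threshold=30.0):
    """Group 4-connected pixels whose RGB distance to a neighbour is
    below `threshold`; returns a list of components, each a list of
    (row, col) coordinates."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for r in range(h):
        for c in range(w):
            if seen[r][c]:
                continue
            comp, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and color_distance(img[y][x], img[ny][nx]) < threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            comps.append(comp)
    return comps

# A tiny 2x4 image: dark "text" pixels on the left, light background right
img = [[(0, 0, 0), (0, 0, 0),   (255, 255, 255), (255, 255, 255)],
       [(0, 0, 0), (10, 10, 10), (250, 250, 250), (255, 255, 255)]]
```

Each component's bounding box (min/max row and column) would then feed the heuristic text-candidate rules mentioned in the abstract.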

An Implementation of Cut Detection using Edge Image (에지 이미지를 사용한 컷 검출의 구현)

  • Kim, Sul-Ho;Choi, Hyung-Il;Kim, Gye-Young
    • Proceedings of the Korean Society of Computer Information Conference / 2011.01a / pp.51-53 / 2011
  • Recently, video data has come to be handled more often than text data, creating a need for scene change detection for video segmentation, indexing, and retrieval. Scene change detection identifies the boundaries in video data where scene transitions occur. Based on experimental results, this paper describes scene change detection using edge images, the threshold setting it requires, and an implementation that reduces duplicate and falsely detected images in the results.


Rotation-robust text localization technique using deep learning (딥러닝 기반의 회전에 강인한 텍스트 검출 기법)

  • Choi, In-Kyu;Kim, Jewoo;Song, Hyok;Yoo, Jisang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.80-81 / 2019
  • This paper proposes a technique for detecting arbitrarily oriented text in natural scene images. The basic detection framework is based on Faster R-CNN [1]. First, an RPN (Region Proposal Network) generates bounding boxes containing text of different orientations. For each bounding box produced by the RPN, feature maps pooled at three different sizes are extracted and merged. From the merged feature map, the network predicts a text/non-text score, axis-aligned bounding-box coordinates, and rotated bounding-box coordinates. Finally, NMS (Non-Maximum Suppression) is applied to obtain the detection results. Training and testing were performed on the COCO Text 2017 dataset [2], and subjective evaluation confirmed that appropriately rotated regions are obtained for inclined text.
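The final NMS step this abstract mentions can be sketched as the standard score-ordered, IoU-based suppression over axis-aligned boxes. This is a simplification (the paper also predicts rotated boxes) and not the authors' code; boxes and scores below are made up.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it by more
    than `thresh` IoU, and repeat on the remainder; returns kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)  # the two near-duplicates collapse to one
```

For rotated boxes, the IoU computation is replaced by polygon intersection, but the suppression loop stays the same.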


Text Detection in Scene Images using spatial frequency (공간주파수를 이용한 장면영상에서 텍스트 검출)

  • Sin, Bong-Kee;Kim, Seon-Kyu
    • Journal of KIISE: Software and Applications / v.30 no.1_2 / pp.31-39 / 2003
  • It is often assumed that text regions in images are characterized by distinctive spatial frequencies. This feature is highly intuitive and therefore appealing. We propose a method for detecting horizontal text in natural scene images based on two features that can be used separately or in succession: the frequency of edge pixels along vertical and horizontal scan lines, and the fundamental frequency in the Fourier domain. We confirmed that these frequency features are language independent. We also address the detection of quadrilaterals, or approximate rectangles, using the Hough transform. Since text that is meaningful to viewers usually appears within rectangles whose colors contrast strongly with the background, it is natural to assume that detecting rectangles helps locate the desired text correctly in natural outdoor scene images.
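The scan-line feature this abstract uses can be illustrated in a few lines: on a binarized edge image, a row crossing text strokes shows many 0/1 transitions, while a background row shows few. This is a generic sketch of the idea, not the paper's implementation; the rows below are synthetic.

```python
def scanline_transitions(row):
    """Count 0/1 transitions along one binary scan line. Rows crossing
    text strokes have many edge transitions; background rows have few."""
    return sum(1 for a, b in zip(row, row[1:]) if a != b)

# One busy "text" scan line and one blank background line
text_row = [0, 1, 0, 1, 1, 0, 1, 0]
blank_row = [0, 0, 0, 0, 0, 0, 0, 0]
```

Thresholding this per-row (and per-column) count gives a cheap first-pass localization of candidate text bands before the Fourier-domain check.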

Performance Improvement of TextFuseNet using Image Sharpening (선명화 기법을 이용한 TextFuseNet 성능 향상)

  • Jeong, Ji-Yeon;Cheon, Ji-Eun;Jung, Yuchul
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.71-73 / 2021
  • This paper applies image sharpening, an image-processing technique, to TextFuseNet, a recent framework for scene text detection. Scene text detection recognizes characters against arbitrary backgrounds such as outdoor signboards, and TextFuseNet is one framework for it. TextFuseNet detects text at the character, word, and global levels; our goal is to improve its performance by applying sharpening. We used two sharpening techniques, a conventional sharpening filter and unsharp masking, and confirmed that applying the sharpening filter improved AP by 0.9%.
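The "sharpening filter" variant this abstract compares can be sketched as filtering with the classic 3x3 sharpening kernel, which boosts each pixel against its 4-neighbours. This is a generic textbook kernel assumed for illustration, not necessarily the exact filter the authors used.

```python
def convolve3x3(img, kernel):
    """Valid-mode 3x3 filtering (no padding) over a 2-D grayscale image
    given as nested lists. The sharpening kernel is symmetric, so
    correlation and convolution coincide here."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
            row.append(acc)
        out.append(row)
    return out

# Classic sharpening kernel: centre weight 5, 4-neighbours -1.
# Uniform regions pass through unchanged; local contrast is amplified.
SHARPEN = [[0, -1,  0],
           [-1, 5, -1],
           [0, -1,  0]]

flat = [[10] * 3 for _ in range(3)]                  # uniform region
edge = [[10, 10, 10], [10, 50, 10], [10, 10, 10]]    # bright spot
```

In a real pipeline the sharpened frame (clamped back to 0-255) is what gets fed to the detector.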


Character Recognition Using Projections in Video (비디오에서 프로젝션을 이용한 문자 인식)

  • Baek, Jeong-Uk;Shin, Seong-Yoon;Rhee, Yang-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.10a / pp.196-197 / 2009
  • Key frames generated by scene change detection in video are used to perform character recognition through projections. Individual characters are separated by vertical projection. Each character is decomposed into its initial (Cho-sung), medial (Jung-sung), and final (Jong-sung) components and classified into one of six structural types using horizontal projection. The components are further separated in the horizontal, vertical, diagonal, and reverse-diagonal directions, and each is recognized using four-direction projections and location information.
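The vertical-projection character separation this abstract describes can be sketched generically: sum each column of the binary image and split at empty columns. The tiny image below is a made-up stand-in for a binarized text line.

```python
def vertical_projection(img):
    """Column sums of a binary image (1 = ink)."""
    return [sum(col) for col in zip(*img)]

def split_characters(img):
    """Split the image into (start, end) column spans at empty columns,
    as in projection-based character separation."""
    proj = vertical_projection(img)
    spans, start = [], None
    for x, v in enumerate(proj):
        if v and start is None:
            start = x
        elif not v and start is not None:
            spans.append((start, x))
            start = None
    if start is not None:
        spans.append((start, len(proj)))
    return spans

# Two "characters" separated by one blank column
img = [[1, 1, 0, 1],
       [1, 0, 0, 1]]
```

The same idea applied along rows (horizontal projection) yields the structural-type classification step mentioned in the abstract.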


Design and Implementation of Automated Detection System of Personal Identification Information for Surgical Video De-Identification (수술 동영상의 비식별화를 위한 개인식별정보 자동 검출 시스템 설계 및 구현)

  • Cho, Youngtak;Ahn, Kiok
    • Convergence Security Journal / v.19 no.5 / pp.75-84 / 2019
  • Recently, the value of video as important medical information data has been increasing because of its rich clinical information. On the other hand, video must also be de-identified as a medical image, but existing methods are mainly specialized for structured data and still images, which makes them difficult to apply to video. In this paper, we propose an automated system that indexes candidate personal identification information elements on a per-frame basis. The proposed system preprocesses the video with scene segmentation and a color-knowledge-based method, then performs indexing using text and person detection. The generated index information is provided as metadata according to the purpose of use. To verify the effectiveness of the proposed system, indexing speed was measured with a prototype implementation and real surgical videos. The system processed video more than twice as fast as the playback time of the input, and a case study producing surgical education contents confirmed that it supports the required decision making.

Study on Extracting Filming Location Information in Movies Using OCR for Developing Customized Travel Content (맞춤형 여행 콘텐츠 개발을 위한 OCR 기법을 활용한 영화 속 촬영지 정보 추출 방안 제시)

  • Park, Eunbi;Shin, Yubin;Kang, Juyoung
    • The Journal of Bigdata / v.5 no.1 / pp.29-39 / 2020
  • Purpose: The spread of respect for individual tastes throughout society has changed consumption trends. As a result, the travel industry also sees customized travel, which reflects consumers' personal tastes, as a new trend. In particular, interest is growing in 'film-induced tourism', one area of the travel industry. We aim to satisfy the travel motivations that individuals develop while watching movies through customized travel proposals, which we expect to catalyze the continued development of the film-induced tourism industry. Design/methodology/approach: In this study, we implemented an OCR-based methodology for extracting and suggesting filming-location information that viewers want to visit. First, we extract a scene from a movie selected by the user using 'OpenCV', a real-time image processing library. We then detect the locations of characters in the scene image using the 'EAST model', a deep-learning-based text-area detection model. The detected images are preprocessed with 'OpenCV' built-in functions to increase recognition accuracy. Finally, after converting the characters in the images into machine-readable text with 'Tesseract', an optical character recognition engine, the 'Google Maps API' returns the actual location information. Significance: This research is significant in that it provides personalized tourism content using fourth-industrial-revolution technology, in addition to existing film tourism. It could be used to develop film-induced tourism packages with travel agencies in the future, and it suggests the possibility of supporting inbound as well as outbound tourism.
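The preprocessing step between text detection and Tesseract in the pipeline above typically includes binarization. As an assumed stand-in for the unspecified OpenCV preprocessing, here is a pure-Python sketch of Otsu's global threshold, a common choice before OCR; the pixel values are synthetic.

```python
def otsu_threshold(pixels):
    """Global Otsu threshold over 8-bit grayscale pixels: pick the grey
    level that maximises between-class variance of the split."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_b, w_b = 0.0, 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]          # background weight
        if w_b == 0:
            continue
        w_f = total - w_b       # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                   # background mean
        m_f = (sum_all - sum_b) / w_f       # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal pixel values: dark text around 20-30, light background 225-235
pixels = [20] * 50 + [30] * 50 + [225] * 50 + [235] * 50
t = otsu_threshold(pixels)  # lands between the two modes
```

Pixels at or below `t` would be mapped to black and the rest to white before handing the crop to the OCR engine.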

Cut detection methods of real-time image sequences using color characteristics (컬러 특성을 이용한 실시간 동영상의 cut detection 기법)

  • Park, Jin-Nam;Lee, Jae-Duck;Huh, Young
    • Journal of the Institute of Electronics Engineers of Korea CI / v.39 no.1 / pp.67-74 / 2002
  • Research on image search and management techniques is being actively pursued, driven by user demand for the multimedia information, such as images, audio, and text, produced by various information processing devices. If changing scenes can be automatically detected and segmented from real-time image sequences, the effectiveness of image search systems can be improved. In this paper, we propose cut detection techniques based on the color distribution of images and evaluate their performance on various real-time image sequences. Experimental results show that the proposed method is more robust across various image patterns than a color histogram method based on statistical information of the images. These methods can also be used for cut detection in real-time image sequences.
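The color-distribution cut detection this abstract evaluates can be sketched in its simplest generic form: compare coarse histograms of consecutive frames and declare a cut when the normalized difference exceeds a threshold. Bin count, threshold, and frames below are illustrative assumptions, not the paper's parameters.

```python
def histogram(frame, bins=8):
    """Coarse histogram of a frame given as a flat list of 8-bit
    grayscale pixel values."""
    h = [0] * bins
    for p in frame:
        h[p * bins // 256] += 1
    return h

def is_cut(frame_a, frame_b, threshold=0.5):
    """Declare a cut when the normalised absolute histogram difference
    between consecutive frames exceeds `threshold` (0..1)."""
    ha, hb = histogram(frame_a), histogram(frame_b)
    diff = sum(abs(a - b) for a, b in zip(ha, hb))
    return diff / (2 * len(frame_a)) > threshold

dark = [10] * 100      # one shot: uniformly dark frame
bright = [240] * 100   # next shot: uniformly bright frame
```

Per-channel color histograms and an adaptive threshold make the same loop robust to gradual lighting changes within a shot.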