• Title/Summary/Keyword: Scene Text Detection

Character Recognition Using Projection in Video (비디오에서 프로젝션을 이용한 문자 인식)

  • Baek, Jeong-Uk;Shin, Seong-Yoon;Rhee, Yang-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.10a / pp.196-197 / 2009
  • In video, key frames generated from scene change detection undergo character recognition through projections. Characters are separated from one another by vertical projection. Each phoneme is separated into Cho-sung (initial), Jung-sung (medial), and Jong-sung (final) components, which are classified into six types, and phoneme patterns are matched to the appropriate type through horizontal projection. Phonemes are projected in the horizontal, vertical, diagonal, and reverse-diagonal directions, and each phoneme is recognized using these four-direction projections together with location information.
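
The projection profiles described above are straightforward to compute. Below is a minimal sketch of the vertical-projection step for splitting a binarized text line into character candidates; it illustrates the general technique, not the authors' code.

```python
import numpy as np

def vertical_projection_segments(binary):
    """Split a binarized text-line image into character candidate spans."""
    profile = (binary > 0).sum(axis=0)   # ink count per column
    ink = profile > 0                    # columns that contain text pixels
    segments, start = [], None
    for x, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = x                    # entering a character run
        elif not has_ink and start is not None:
            segments.append((start, x))  # leaving a character run
            start = None
    if start is not None:
        segments.append((start, len(ink)))
    return segments                      # list of (x_start, x_end) spans
```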

A Comparison of Deep Neural Network based Scene Text Detection with YOLO and EAST (이미지 속 문자열 탐지에 대한 YOLO와 EAST 신경망의 성능 비교)

  • Park, Chan-Yong;Lee, Gyu-Hyun;Lim, Young-Min;Jeong, Seung-Dae;Cho, Young-Heuk;Kim, Jin-Wook
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.422-425 / 2021
  • In this paper, we apply the YOLO and EAST networks, which have recently been widely used in various fields, to the problem of detecting text strings in images and compare and analyze their performance. YOLO models up to v3 were known to perform poorly at detecting text regions in images, but we confirm that the recently released YOLOv4 and YOLOv5 show excellent performance in detecting Korean and English text strings in images of diverse forms, and we expect them to be widely used in the character recognition field.
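
As a rough illustration of the YOLO side of this comparison, the sketch below runs a YOLOv5 model through torch.hub; 'text_best.pt' is a hypothetical checkpoint fine-tuned for text detection, not a model released with the paper.

```python
import torch

# Load a custom YOLOv5 checkpoint via the official torch.hub entry point.
# 'text_best.pt' is a hypothetical text-detection checkpoint.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='text_best.pt')

results = model('scene.jpg')        # inference on a single image
boxes = results.xyxy[0]             # (x1, y1, x2, y2, conf, class) per box
for *xyxy, conf, cls in boxes.tolist():
    print(f'text box at {xyxy}, confidence {conf:.2f}')
```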

Label Restoration Using Biquadratic Transformation

  • Le, Huy Phat;Nguyen, Toan Dinh;Lee, Guee-Sang
    • International Journal of Contents / v.6 no.1 / pp.6-11 / 2010
  • Recently, there has been research on using portable digital cameras to recognize objects in natural scene images, including labels or marks on cylindrical surfaces. In many cases, the text or logo in a label can be distorted by a structural movement of the object on which the label resides. Since distortion in the label can degrade the performance of object recognition, the label should be rectified, or restored from deformations. In this paper, a new method for label detection and restoration in digital images is presented. In the detection phase, the Hough transform is employed to detect the two vertical boundaries of the label, and a horizontal edge profile is analyzed to detect its upper and lower boundaries. Then, a biquadratic transformation is used to restore the rectangular shape of the label. The proposed algorithm performs restoration of 3D objects in a 2D space, and it requires neither auxiliary hardware such as a 3D camera to construct 3D models nor multiple cameras to capture objects from different views. Experimental results demonstrate the effectiveness of the proposed method.
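
The detection phase described above can be approximated with standard OpenCV calls. The sketch below finds near-vertical label boundaries with Canny edges and a Hough transform; the thresholds are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

img = cv2.imread('label.jpg', cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)

verticals = []
if lines is not None:
    for rho, theta in lines[:, 0]:
        # theta is the angle of the line's normal; values near 0 or pi
        # correspond to near-vertical lines in the image
        if theta < np.deg2rad(10) or theta > np.pi - np.deg2rad(10):
            verticals.append((rho, theta))
```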

Text Region Detection Using Regional Connected Component and Edge Structure Component Feature From Natural Scene Images (지역적 연결요소 및 에지 구조 성분 특징을 이용한 자연이미지로부터 문자영역 검출)

  • Bak, Jong-Cheon;Hwang, Dong-Guk;Gwon, Gyo-Hyeon;Jeon, Byeong-Min
    • Proceedings of the KAIS Fall Conference / 2009.05a / pp.40-43 / 2009
  • Research on mobile vision-based applications has recently been active, and many studies aim to extract text information from images captured with mobile devices. Detecting text regions is an essential preliminary step for extracting text information from natural scene images. This study proposes a method that detects text regions under varied lighting and complex backgrounds by considering the local edge and connected-component features of text regions. Edges are extracted with a Canny edge detector, and connected components are obtained by analyzing the RGB color distribution pattern and performing color quantization. Text candidate regions are detected from the extracted edges and from the connected components separately; the two results are combined into final text candidate regions, which are then verified to produce the final text regions. Experiments on natural scene images acquired in various environments show that combining the edge and connected-component features effectively detects text regions of diverse forms in natural images.
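
A rough sketch of the two cues the method combines, Canny edges and connected components filtered by simple geometry, is given below; the size, aspect-ratio, and edge-density thresholds are assumptions for illustration, not the paper's values.

```python
import cv2

img = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 100, 200)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
candidates = []
for i in range(1, n):                            # label 0 is the background
    x, y, w, h, area = stats[i]
    if not (8 < h < 200 and 0.1 < w / h < 10):   # plausible character geometry
        continue
    edge_density = (edges[y:y + h, x:x + w] > 0).mean()
    if edge_density > 0.05:                      # require a supporting edge cue
        candidates.append((x, y, w, h))
```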

Framework for Content-Based Image Identification with Standardized Multiview Features

  • Das, Rik;Thepade, Sudeep;Ghosh, Saurav
    • ETRI Journal / v.38 no.1 / pp.174-184 / 2016
  • Information identification with image data by means of low-level visual features has evolved as a challenging research domain. Conventional text-based mapping of image data has been gradually replaced by content-based techniques of image identification. Feature extraction from image content plays a crucial role in facilitating content-based detection processes. In this paper, the authors have proposed four different techniques for multiview feature extraction from images. The efficiency of extracted feature vectors for content-based image classification and retrieval is evaluated by means of fusion-based and data standardization-based techniques. It is observed that the latter surpasses the former. The proposed methods outclass state-of-the-art techniques for content-based image identification and show an average increase in precision of 17.71% and 22.78% for classification and retrieval, respectively. Three public datasets - Wang; Oliva and Torralba (OT-Scene); and Corel - are used for verification purposes. The research findings are statistically validated by conducting a paired t-test.
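
As a loose illustration of the data-standardization idea, the sketch below z-scores each feature view before concatenation so that no single view dominates a distance-based classifier; it is purely illustrative and not the authors' exact pipeline.

```python
import numpy as np

def standardize_and_fuse(views):
    """views: list of (n_samples, d_i) feature arrays, one per extractor."""
    scaled = []
    for v in views:
        mu = v.mean(axis=0)
        sigma = v.std(axis=0) + 1e-8     # avoid division by zero
        scaled.append((v - mu) / sigma)  # z-score each view separately
    return np.hstack(scaled)             # fused multiview descriptor
```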

Study on Extracting Filming Location Information in Movies Using OCR for Developing Customized Travel Content (맞춤형 여행 콘텐츠 개발을 위한 OCR 기법을 활용한 영화 속 촬영지 정보 추출 방안 제시)

  • Park, Eunbi;Shin, Yubin;Kang, Juyoung
    • The Journal of Bigdata / v.5 no.1 / pp.29-39 / 2020
  • Purpose: The atmosphere of respect for individual tastes that has spread throughout society has changed consumption trends, and the travel industry now sees customized travel as a new trend reflecting consumers' personal tastes. In particular, interest is growing in 'film-induced tourism', one area of the travel industry. We aim to satisfy the travel motivations individuals form while watching movies through customized travel proposals, which we expect to be a catalyst for the continued development of the film-induced tourism industry. Design/methodology/approach: In this study, we implemented a methodology based on OCR for extracting and suggesting the filming-location information that viewers want to visit. First, we extract a scene from a movie selected by the user with 'OpenCV', a real-time image processing library. We then detect the location of characters in the scene image using the 'EAST model', a deep learning-based text region detection model, and preprocess the detected images with OpenCV built-in functions to increase recognition accuracy. Finally, after converting the characters in the images into machine-readable text with 'Tesseract', an optical character recognition engine, the 'Google Maps API' returns the actual location information. Significance: This research is significant in that it provides personalized tourism content using fourth-industrial-revolution technology, in addition to existing film tourism. It could be used to develop film-induced tourism packages with travel agencies in the future, and it also suggests potential for attracting tourists from abroad as well as serving travelers going abroad.
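
A condensed sketch of this pipeline is shown below: grab a frame with OpenCV, OCR it with Tesseract, and geocode the recognized text with the Google Maps API. The EAST cropping and preprocessing steps are omitted, and the frame index and API key are placeholders.

```python
import cv2
import googlemaps
import pytesseract

cap = cv2.VideoCapture('movie.mp4')
cap.set(cv2.CAP_PROP_POS_FRAMES, 1200)         # jump to the user-chosen scene
ok, frame = cap.read()
cap.release()

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
caption = pytesseract.image_to_string(gray, lang='kor+eng').strip()

gmaps = googlemaps.Client(key='YOUR_API_KEY')  # placeholder key
places = gmaps.geocode(caption)                # resolve the caption to a place
if places:
    print(places[0]['formatted_address'])
    print(places[0]['geometry']['location'])   # {'lat': ..., 'lng': ...}
```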

Design and Implementation of Automated Detection System of Personal Identification Information for Surgical Video De-Identification (수술 동영상의 비식별화를 위한 개인식별정보 자동 검출 시스템 설계 및 구현)

  • Cho, Youngtak;Ahn, Kiok
    • Convergence Security Journal / v.19 no.5 / pp.75-84 / 2019
  • Recently, the value of video as an important form of medical information has been increasing because of its rich clinical content. At the same time, video, like other medical images, must be de-identified, but existing methods are mainly specialized for structured data and still images, which makes them difficult to apply to video data. In this paper, we propose an automated system that indexes candidate elements of personal identification information on a per-frame basis to solve this problem. The proposed system performs indexing using text and person detection after preprocessing based on scene segmentation and a color-knowledge method. The generated index information is provided as metadata according to the purpose of use. To verify the effectiveness of the proposed system, indexing speed was measured using a prototype implementation and real surgical video. Processing was more than twice as fast as the playback time of the input video, and a case involving the production of surgical education content confirmed that the index supports the required decision making.
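
The per-frame indexing loop might look roughly like the sketch below; detect_text and detect_person stand in for the paper's detectors and are hypothetical, and the sampling stride is an assumed speed/coverage trade-off.

```python
import json
import cv2

def index_video(path, detect_text, detect_person, stride=30):
    """Index candidate personal-identification regions per sampled frame."""
    index = []
    cap = cv2.VideoCapture(path)
    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % stride == 0:               # sample frames for speed
            regions = detect_text(frame) + detect_person(frame)
            if regions:
                index.append({'frame': frame_no, 'regions': regions})
        frame_no += 1
    cap.release()
    return json.dumps(index)                     # metadata for later masking
```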

IMToon: Image-based Cartoon Authoring System using Image Processing (IMToon: 영상처리를 활용한 영상기반 카툰 저작 시스템)

  • Seo, Banseok;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society / v.23 no.2 / pp.11-22 / 2017
  • This study proposes IMToon (IMage-based carToon), an image-based cartoon authoring system that uses image processing algorithms. IMToon allows general users to easily and efficiently produce the frames that make up a cartoon from images. The authoring system is designed around two main functions: a cartoon effector and an interactive story editor. The cartoon effector automatically converts input images into cartoon-style images through image-based cartoon shading and outline drawing steps. Image-based cartoon shading takes images of the desired scenes from users, separates the brightness information from the color model of the input images, simplifies it to a shading range with the desired number of steps, and recreates the result as a cartoon-style image. The final cartoon-style image is then created in the outline drawing step, in which outlines obtained through edge detection are applied to the shaded image. The interactive story editor is used to enter text balloons and subtitles in a dialog structure to create a scene of a completed cartoon that delivers a story, such as a webtoon or comic book. In addition, the cartoon effector is extended from still images to videos, so that it can be applied to videos as well. Finally, various experiments verify that users can easily and efficiently produce the cartoons they want from images with the proposed IMToon system.
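
The two steps of the cartoon effector named above, shading quantization and outline drawing, can be sketched with OpenCV as follows; the number of shading bands and the Canny thresholds are illustrative choices, not values from the paper.

```python
import cv2
import numpy as np

img = cv2.imread('scene.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

bands = 4                                  # number of shading steps (assumed)
step = 256 // bands
v_quant = ((v // step) * step + step // 2).astype(np.uint8)  # simplify brightness

toon = cv2.cvtColor(cv2.merge([h, s, v_quant]), cv2.COLOR_HSV2BGR)
edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 80, 160)
toon[edges > 0] = (0, 0, 0)                # overlay detected outlines in black
cv2.imwrite('scene_toon.jpg', toon)
```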