Title/Summary/Keyword: caption extraction

Caption Detection and Recognition for Video Image Information Retrieval (비디오 영상 정보 검색을 위한 문자 추출 및 인식)

  • 구건서
    • Journal of the Korea Computer Industry Society / v.3 no.7 / pp.901-914 / 2002
  • In this paper, we propose an efficient automatic caption detection and localization method, together with caption recognition using an FE-MCBP (Feature Extraction based Multichained BackPropagation) neural network, for content-based video retrieval. Frames are sampled from the video at a fixed time interval, and key frames are selected by a gray-scale histogram method. For each key frame, segmentation is performed, caption lines are detected using a line-scan method, and finally individual characters are separated. The method improves speed and efficiency by applying color segmentation with local-maximum analysis before line scanning. Caption detection is the first stage of multimedia database organization: detected captions serve as input to a text recognition system, and recognized captions can then be searched by content-based retrieval.
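
The following is a minimal sketch, using OpenCV and NumPy, of the gray-scale histogram key-frame selection and row-wise line scan this abstract describes; the sampling step, histogram distance, and variance threshold are illustrative assumptions, not the paper's values.

```python
import cv2
import numpy as np

def key_frames(path, step=30, hist_thresh=0.25):
    """Sample every `step` frames; keep a frame when its gray-scale
    histogram differs enough from the previously kept frame."""
    cap = cv2.VideoCapture(path)
    keys, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
            hist = cv2.normalize(hist, None).flatten()
            if prev_hist is None or cv2.compareHist(
                    prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > hist_thresh:
                keys.append(frame)
                prev_hist = hist
        idx += 1
    cap.release()
    return keys

def caption_rows(gray, var_thresh=1500.0):
    """Line scan: rows with high intensity variance are likely to cross
    caption text, which alternates sharply between light and dark."""
    return [y for y in range(gray.shape[0])
            if np.var(gray[y].astype(np.float32)) > var_thresh]
```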

Study on News Video Character Extraction and Recognition (뉴스 비디오 자막 추출 및 인식 기법에 관한 연구)

  • 김종열;김성섭;문영식
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.1 / pp.10-19 / 2003
  • Caption information in news videos is useful for video indexing and retrieval, since it usually suggests or implies the content of the video very well. In this paper, a new algorithm for extracting and recognizing characters from news video is proposed, without a priori knowledge of font type, color, or character size. In the text region extraction step, to improve the recognition rate for low-resolution videos with complex backgrounds, consecutive frames containing identical text regions are automatically detected and combined into an average frame. The averaged frame is projected in the horizontal and vertical directions, region filling is applied to remove background, and K-means color clustering then removes the remaining background to produce the final text image. In the character recognition step, simple features such as white runs and zero-one transitions from the center are extracted from unknown characters and compared against a pre-composed character feature set to recognize them. Experimental results on various news videos show that the proposed method is superior in terms of caption extraction ability and character recognition rate.
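
A minimal sketch, assuming OpenCV and NumPy, of two steps from this abstract: averaging consecutive frames that share a text region (to suppress the moving background), then K-means color clustering to isolate text pixels. The cluster count and the "brightest cluster is the text" rule are assumptions made for illustration.

```python
import cv2
import numpy as np

def average_frames(frames):
    """Pixel-wise mean of frames known to carry the same caption;
    static text stays sharp while moving background blurs out."""
    return np.mean(np.stack(frames).astype(np.float32), axis=0).astype(np.uint8)

def kmeans_text_mask(region_bgr, k=3):
    """Cluster colours in a text region; assume the brightest cluster
    is the caption (news captions are typically light on dark)."""
    pixels = region_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    text_cluster = int(np.argmax(centers.sum(axis=1)))  # brightest centre
    mask = (labels.flatten() == text_cluster).astype(np.uint8) * 255
    return mask.reshape(region_bgr.shape[:2])
```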

Extraction and Recognition of Character from MPEG-2 news Video Images (MPEG-2 뉴스영상에서 문자영역 추출 및 문자 인식)

  • Park, Yeong-Gyu;Kim, Seong-Guk;Yu, Won-Yeong;Kim, Jun-Cheol;Lee, Jun-Hwan
    • The Transactions of the Korea Information Processing Society / v.6 no.5 / pp.1410-1417 / 1999
  • In this paper, we propose a method for extracting caption regions from news video and recognizing the captions, intended mainly for content-based indexing and retrieval of MPEG-2 compressed news for NOD (News On Demand). The proposed method reduces the search time for detecting caption frames by requiring only minimal MPEG-2 decoding, and effectively eliminates noise in caption regions through deliberately designed preprocessing. Because only a small variety of fonts is used for captions in news video, an enhanced template matching method is used to recognize the characters. Experiments on sports news video showed good recognition results with the proposed methods.
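
A minimal sketch of the template-matching recognizer this abstract mentions: each segmented character image is compared against a small set of pre-built font templates. The template dictionary, the normalized size, and the matching score are illustrative assumptions; the paper's "enhanced" matching is not reproduced here.

```python
import cv2

def recognize_char(char_img, templates, size=(32, 32)):
    """Return the label of the best-matching template.
    `templates` maps a character label to a grayscale template image,
    e.g. rendered from the few fonts known to appear in news captions."""
    probe = cv2.resize(char_img, size)
    best_label, best_score = None, -1.0
    for label, tmpl in templates.items():
        tmpl = cv2.resize(tmpl, size)
        # Same-size images: matchTemplate yields a single 1x1 score.
        score = cv2.matchTemplate(probe, tmpl, cv2.TM_CCOEFF_NORMED)[0][0]
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score
```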

Knowledge-based Video Retrieval System Using Korean Closed-caption (한국어 폐쇄자막을 이용한 지식기반 비디오 검색 시스템)

  • 조정원;정승도;최병욱
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.3 / pp.115-124 / 2004
  • Content-based retrieval using low-level features can hardly provide results that correspond to the user's conceptual demands for intelligent retrieval. Video contains not only moving-picture data but also audio and closed-caption data, and knowledge-based video retrieval can satisfy such conceptual demands because it performs automatic indexing over these varied data. In this paper, we present a knowledge-based video retrieval system using Korean closed captions. The closed captions are indexed by a Korean keyword extraction system that includes morphological analysis, so videos can be retrieved by keyword from the index database. In experiments, we applied the proposed method to news video with closed captions generated by a Korean stenographic system and empirically confirmed that it returns results corresponding to more meaningful conceptual demands of the user.
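
A minimal sketch of the indexing idea behind this system: build an inverted index from time-stamped caption lines so a keyword query returns positions in the video. The paper extracts keywords with a Korean morphological analyzer; plain whitespace tokenization stands in for it below, and the data structures and sample captions are assumptions.

```python
from collections import defaultdict

def build_index(captions):
    """captions: list of (timestamp_seconds, text) pairs from the
    closed-caption stream."""
    index = defaultdict(set)
    for ts, text in captions:
        for token in text.split():   # stand-in for morphological analysis
            index[token].add(ts)
    return index

def search(index, keyword):
    """Return sorted timestamps whose caption contains the keyword."""
    return sorted(index.get(keyword, set()))

captions = [(12.0, "대통령 기자 회견"), (47.5, "태풍 북상 속보")]
idx = build_index(captions)
print(search(idx, "태풍"))   # -> [47.5]
```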

Extraction of Features in key frames of News Video for Content-based Retrieval (내용 기반 검색을 위한 뉴스 비디오 키 프레임의 특징 정보 추출)

  • Jung, Yung-Eun;Lee, Dong-Seop;Jeon, Keun-Hwan;Lee, Yang-Weon
    • The Transactions of the Korea Information Processing Society / v.5 no.9 / pp.2294-2301 / 1998
  • The aim of this paper is to extract features from news scenes: the symbol icon that identifies each broadcasting company, and the icons and captions that carry important information about the scene. We propose a caption extraction method, an important problem for news video, that proceeds in three steps. First, input video frames are converted into the YIQ color space. Next, the input image is divided into clear regions using its equalized color histogram. Finally, captions are extracted using edge histograms along the vertical and horizontal directions. We also propose a method that extracts news icons in selected key frames from inter-histogram differences and divides scenes by the extracted icons. By using edge histogram comparison instead of complex methods based on color histograms, wavelets, or moving objects, we shorten computation with a simpler algorithm and still obtain good feature extraction results.
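
A minimal sketch of two steps from this abstract: the standard RGB-to-YIQ conversion and vertical/horizontal edge histograms for locating caption bands. The Sobel-based edge measure is an assumption; the paper does not specify its edge operator.

```python
import cv2
import numpy as np

# Standard RGB -> YIQ transform matrix.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def to_yiq(rgb):
    """Apply the RGB->YIQ transform to a float RGB image (H, W, 3)."""
    return rgb.astype(np.float32) @ RGB2YIQ.T

def edge_histograms(gray):
    """Row and column sums of gradient magnitude; caption bands show
    up as peaks in the per-row (horizontal) histogram."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    return mag.sum(axis=1), mag.sum(axis=0)   # per-row, per-column
```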

Efficient Object Classification Scheme for Scanned Educational Book Image (교육용 도서 영상을 위한 효과적인 객체 자동 분류 기술)

  • Choi, Young-Ju;Kim, Ji-Hae;Lee, Young-Woon;Lee, Jong-Hyeok;Hong, Gwang-Soo;Kim, Byung-Gyu
    • Journal of Digital Contents Society / v.18 no.7 / pp.1323-1331 / 2017
  • Although copyright has grown into a large-scale business, many problems persist, especially around image copyright. In this study, we propose an automatic object extraction and classification system for scanned educational book images that combines document image processing with intelligent information technology such as deep learning. The proposed method first removes noise components and performs visual-attention-based region separation. It then groups the extracted block areas and categorizes each block as a picture or a character area. Finally, the caption area is extracted by searching around the classified picture areas. Performance evaluation shows an average accuracy of 83% for extracting image and caption areas, and up to 97% accuracy for image region detection alone.
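
The abstract only says the block classification uses "deep learning"; the tiny PyTorch CNN below is an illustrative stand-in for that stage, not the paper's network, and the input size and architecture are assumptions.

```python
import torch
import torch.nn as nn

class BlockClassifier(nn.Module):
    """Binary classifier over fixed-size (64x64) grayscale block crops:
    label 0 = picture area, label 1 = character area."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 2)   # 64 -> 32 -> 16 spatially

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))

model = BlockClassifier()
logits = model(torch.randn(4, 1, 64, 64))   # 4 candidate blocks
print(logits.argmax(dim=1))                 # predicted block labels
```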

A Method for Character Segmentation using MST(Minimum Spanning Tree) (MST를 이용한 문자 영역 분할 방법)

  • Chun, Byung-Tae;Kim, Young-In
    • Journal of the Korea Society of Computer and Information / v.11 no.3 / pp.73-78 / 2006
  • Conventional caption extraction methods use frame differences or color segmentation over the whole image. Because these methods depend heavily on heuristics, they require a priori knowledge of the captions to be extracted and are difficult to implement. In this paper, we propose a method that uses few heuristics and a simplified algorithm. We use topographical features of characters to extract character points, and use an MST (Minimum Spanning Tree) to extract candidate caption regions. Character regions are then determined by testing several conditions and verifying the candidate regions. Experimental results show a candidate region extraction rate of 100% and a character region extraction rate of 98.2%, and caption areas in complex images are extracted well.
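
A minimal sketch of the MST grouping step, using SciPy: treat candidate character points as graph nodes, build an MST over their pairwise distances, and cut long edges so the remaining connected components become candidate caption regions. The distance cut-off is an assumption; the paper's verification conditions are not reproduced.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def caption_regions(points, cut=30.0):
    """points: (N, 2) array of character-point coordinates.
    Returns one point group per candidate caption region."""
    dists = squareform(pdist(points))            # dense pairwise distances
    mst = minimum_spanning_tree(dists).toarray()
    mst[mst > cut] = 0                           # cut edges longer than `cut`
    n, labels = connected_components(mst, directed=False)
    return [points[labels == i] for i in range(n)]
```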

Connected Component-based Caption Extraction Regardless of Caption Size Using a Neural Network (신경망을 이용한 자막 크기에 무관한 연결 객체 기반의 자막 추출)

  • Jeong, Je-Hui;Yun, Tae-Bok;Kim, Dong-Mun;Lee, Ji-Hyeong
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.11a / pp.172-175 / 2007
  • Captions appearing in video carry information related to the video, and research on extracting captions from video in order to use this information has recently become active. Previous studies extracted only captions of a fixed height or stroke thickness. This paper proposes a method for extracting captions of any size above a minimum. First, connected components are built from the pixels of the image. Morphological patterns characteristic of captions are then analyzed among the connected components, and captions are extracted using these patterns. The test images were taken from public broadcasts such as documentaries and variety shows, the experiments used images containing captions of various sizes, and the results were analyzed as the proportion of detected connected components that are captions and the proportion of captions that were detected. The proposed method was able to extract captions of various sizes.
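
A minimal sketch of the connected-component stage described above, using OpenCV: extract components and keep those whose shape features look caption-like regardless of absolute size. The paper feeds such morphological patterns to a neural network; the fixed fill-ratio and aspect-ratio thresholds below are a simplified stand-in.

```python
import cv2

def caption_components(binary):
    """binary: uint8 image with text pixels set to 255.
    Returns bounding boxes of caption-like connected components."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    keep = []
    for i in range(1, n):                       # component 0 is background
        x, y, w, h, area = stats[i]
        fill = area / float(w * h)              # size-independent features
        aspect = w / float(h)
        if 0.1 < fill < 0.95 and 0.05 < aspect < 10.0:
            keep.append((x, y, w, h))
    return keep
```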

Video Captioning with Visual and Semantic Features

  • Lee, Sujin;Kim, Incheol
    • Journal of Information Processing Systems / v.14 no.6 / pp.1318-1330 / 2018
  • Video captioning refers to the process of extracting features from a video and generating video captions using the extracted features. This paper introduces a deep neural network model and its learning method for effective video captioning. In this study, semantic features that effectively express the video are used alongside visual features. The visual features of the video are extracted using convolutional neural networks such as C3D and ResNet, while the semantic features are extracted using a semantic feature extraction network proposed in this paper. Further, an attention-based caption generation network is proposed for effective generation of video captions from the extracted features. The performance and effectiveness of the proposed model are verified through experiments on two large-scale video benchmarks, Microsoft Video Description (MSVD) and Microsoft Research Video-To-Text (MSR-VTT).
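
A minimal sketch of the attention step in caption generation: at each decoding step, the decoder state attends over per-frame visual features to form a context vector. This additive-attention module in PyTorch is an illustrative stand-in for the paper's network, and the dimensions are assumptions.

```python
import torch
import torch.nn as nn

class FrameAttention(nn.Module):
    def __init__(self, feat_dim=2048, hid_dim=512):
        super().__init__()
        self.w_feat = nn.Linear(feat_dim, hid_dim)
        self.w_hid = nn.Linear(hid_dim, hid_dim)
        self.score = nn.Linear(hid_dim, 1)

    def forward(self, feats, hidden):
        """feats: (T, feat_dim) per-frame features; hidden: (hid_dim,)
        current decoder state. Returns a (feat_dim,) context vector."""
        e = self.score(torch.tanh(self.w_feat(feats) + self.w_hid(hidden)))
        alpha = torch.softmax(e.squeeze(-1), dim=0)   # attention weights
        return alpha @ feats                          # weighted frame mix

attn = FrameAttention()
context = attn(torch.randn(20, 2048), torch.randn(512))
print(context.shape)   # torch.Size([2048])
```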

Caption Extraction in News Video Sequence using Frequency Characteristic

  • Youglae Bae;Chun, Byung-Tae;Seyoon Jeong
    • Proceedings of the IEEK Conference / 2000.07b / pp.835-838 / 2000
  • Popular methods for extracting a text region from video images are generally based on analysis of the whole image, such as merge-and-split methods or comparison of two frames, and therefore take long computing time. This paper suggests a faster method of extracting a text region without processing the whole image. The proposed method uses line sampling, the FFT, and neural networks to extract text in real time. Text areas generally lie in the higher-frequency domain and can thus be characterized using the FFT; candidate text areas are found by applying these high-frequency characteristics to a neural network, and the final text area is extracted by verifying the candidates. Experimental results show a perfect candidate extraction rate and a text extraction rate of about 92%. The strengths of the proposed algorithm are its simplicity, real-time processing achieved by not handling the entire image, and fast skipping of images that contain no text.
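
A minimal sketch of the line-sampling idea: instead of processing the whole frame, sample a few rows, take their FFT, and flag rows with high high-frequency energy as caption candidates (the paper then verifies candidates with a neural network). The sampling stride, frequency band, and energy threshold are assumptions.

```python
import numpy as np

def candidate_rows(gray, stride=8, band=0.25, thresh=0.35):
    """gray: 2-D uint8 image. Returns sampled row indices whose spectral
    energy above `band` * Nyquist exceeds `thresh` of the total energy."""
    rows = []
    for y in range(0, gray.shape[0], stride):
        spec = np.abs(np.fft.rfft(gray[y].astype(np.float32))) ** 2
        cut = int(len(spec) * band)
        total = spec[1:].sum()                 # ignore the DC term
        if total > 0 and spec[cut:].sum() / total > thresh:
            rows.append(y)
    return rows
```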
