Title/Summary/Keyword: Korean caption

Search results: 87

Extraction of DTV Closed Caption Stream and Generation of Video Caption File

  • Kim, Jung-Youn;Nam, Je-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.364-367
    • /
    • 2009
  • This paper presents a scheme that generates a caption file by extracting the closed-caption stream from a DTV signal. Closed captioning helps bridge the "digital divide" by extending broadcasting accessibility to underserved groups such as hearing-impaired persons and foreigners. In Korea, the DTV closed-captioning standard was established in June 2007, and closed-captioning service has been mandated by law for all broadcasting services since 2008. This paper describes a method for extracting caption data from the MPEG-2 Transport Stream of an ATSC-based digital TV signal and generating caption files (SAMI and SRT) from the extracted caption data and timing information. Experimental results verify the feasibility of the generated caption files using PC-based media players that are widely used for multimedia services.
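
As an illustration of the file-generation step described in this abstract, the following is a minimal sketch (not the authors' implementation) that writes caption cues with timing information to an SRT file; the cue tuples are assumed to come from an upstream caption extractor.

```python
# Minimal sketch: write extracted caption lines and timestamps to an SRT file.
# The (start_sec, end_sec, text) tuples are assumed to come from an upstream
# DTV caption extractor; this only illustrates the SRT formatting step.

def sec_to_srt_time(sec: float) -> str:
    """Format seconds as the HH:MM:SS,mmm timestamp used by SRT."""
    ms = int(round(sec * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def write_srt(cues, path):
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(cues, 1):
            f.write(f"{i}\n{sec_to_srt_time(start)} --> {sec_to_srt_time(end)}\n{text}\n\n")

# Example usage with dummy cues
write_srt([(1.0, 3.5, "안녕하세요"), (4.0, 6.0, "뉴스를 시작합니다")], "captions.srt")
```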

Web Image Caption Extraction using Positional Relation and Lexical Similarity (위치적 연관성과 어휘적 유사성을 이용한 웹 이미지 캡션 추출)

  • Lee, Hyoung-Gyu;Kim, Min-Jeong;Hong, Gum-Won;Rim, Hae-Chang
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.4
    • /
    • pp.335-345
    • /
    • 2009
  • In this paper, we propose a new web image caption extraction method that considers the positional relation between a caption and an image and the lexical similarity between a caption and the main text containing it. The positional relation represents where the caption is located relative to the image in terms of distance and direction. The lexical similarity indicates how likely the main text is to have generated the caption of the image. Compared with previous image caption extraction approaches, which use only independent features of images and captions, the proposed approach improves caption extraction recall and precision, raising the F-measure by 28%, by adding these positional-relation and lexical-similarity features.
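
The following sketch illustrates the general idea of combining a positional term and a lexical-similarity term into a candidate score; the feature definitions, weights, and the cosine-similarity choice are assumptions for illustration, not the paper's actual model.

```python
# Illustrative sketch only: a simplified score combining a positional-relation
# term and a lexical-similarity term for a caption candidate. Feature names,
# weights, and the cosine-similarity choice are assumptions, not the paper's model.
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def candidate_score(pixel_distance, is_below_image, candidate_text, main_text,
                    w_pos=0.5, w_lex=0.5):
    """Higher when the candidate is near (and below) the image and lexically
    similar to the surrounding main text."""
    positional = (1.0 / (1.0 + pixel_distance)) * (1.2 if is_below_image else 1.0)
    lexical = cosine_sim(candidate_text, main_text)
    return w_pos * positional + w_lex * lexical
```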

An Effective Method for Replacing Caption in Video Images (비디오 자막 문자의 효과적인 교환 방법)

  • Chun Byung-Tae;Kim Sook-Yeon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.2 s.34
    • /
    • pp.97-104
    • /
    • 2005
  • Caption texts are frequently inserted into produced video images to help the TV audience's understanding. In film, caption texts can be replaced without any loss of the original image because they have their own track. In earlier methods for replacing caption texts in video, new text was inserted into the caption area after it had been filled with a solid color to remove the existing caption; however, these methods lose the original image in the caption area, which is a problem for the TV audience. In this paper, we propose a new method that replaces the caption text after recovering the original image in the caption area. In the experiments, results on complex images show some distortion after recovering the original image, but most results show good caption text over the recovered image. The new method is thereby shown to be effective for replacing caption texts in video images.
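
A rough sketch of the two-step idea (recover the caption area, then draw the new text), using OpenCV inpainting as a stand-in for the paper's recovery method; the caption bounding box is assumed to be known.

```python
# Rough sketch of the two-step idea (recover the caption area, then overlay new
# text), using OpenCV inpainting as a stand-in for the paper's recovery method.
# The caption bounding box is assumed to be known.
import cv2
import numpy as np

def replace_caption(frame_bgr, caption_box, new_text):
    x, y, w, h = caption_box
    # Mask of the old caption area to be recovered.
    mask = np.zeros(frame_bgr.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255
    # Recover (approximate) the original pixels under the old caption.
    recovered = cv2.inpaint(frame_bgr, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    # Draw the replacement caption on the recovered image.
    cv2.putText(recovered, new_text, (x, y + h - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2, cv2.LINE_AA)
    return recovered
```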

Development of Korean Sign Language Generation System using TV Caption Signal (TV 자막 신호를 이용한 한글 수화 발생 시스템의 개발)

  • Kim, Dae-Jin;Kim, Jung-Bae;Jang, Won;Bien, Zeung-Nam
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.39 no.5
    • /
    • pp.32-44
    • /
    • 2002
  • In this paper, we propose a TV caption-based KSL (Korean Sign Language) generation system. The caption signal is transmitted to a PC through a TV caption decoder. The caption text is then segmented into meaning units by a morphological analyzer that takes the specific characteristics of Korean Sign Language into account. Finally, a 3D KSL generation system renders the transformed morphological information as 3D visual graphics. In particular, we propose a morphological analyzer with several pre-processing techniques for real-time operation. The developed system was applied to real TV caption programs, and trials with deaf users indicate that it is sufficiently usable compared to conventional TV caption programs.
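
A toy sketch of the morpheme-to-sign mapping stage: meaning units from a morphological analyzer are looked up in a sign dictionary and converted to a gloss sequence. The dictionary entries and the fingerspelling fallback are illustrative assumptions, not the paper's lexicon or analyzer.

```python
# Toy sketch of the "meaning unit -> sign gloss" step: map morphological units
# from caption text to a sequence of KSL sign glosses via a dictionary lookup.
# The dictionary entries and the fingerspelling fallback are illustrative
# assumptions, not the paper's lexicon or analyzer.

SIGN_DICTIONARY = {
    "뉴스": "NEWS",
    "시작": "START",
    "하다": None,        # function morpheme: no independent sign
    "오늘": "TODAY",
}

def morphemes_to_glosses(morphemes):
    """Convert a list of morphemes (from a morphological analyzer) to sign glosses;
    unknown content words fall back to fingerspelling."""
    glosses = []
    for m in morphemes:
        if m in SIGN_DICTIONARY:
            g = SIGN_DICTIONARY[m]
            if g is not None:
                glosses.append(g)
        else:
            glosses.append(f"FINGERSPELL({m})")
    return glosses

print(morphemes_to_glosses(["오늘", "뉴스", "시작", "하다"]))
# -> ['TODAY', 'NEWS', 'START']
```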

Knowledge-Based Numeric Open Caption Recognition for Live Sportscast

  • Sung, Si-Hun
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.1871-1874
    • /
    • 2003
  • A knowledge-based numeric open caption recognition method is proposed that recognizes numeric captions generated by a character generator (CG) and automatically superimposes a modified caption using the recognized text, but only when a valid numeric caption appears in the targeted region of a live sportscast scene produced by another broadcasting station. In the proposed method, mesh features are extracted from an enhanced binary image as feature vectors, and the information is recovered from the numeric image by recognizing each character with a multilayer perceptron (MLP) network. The result is verified against a knowledge-based rule set designed for more stable and reliable output, and the modified information is then displayed on screen by the CG. The MLB Eye Caption system based on the proposed algorithm has already been used for regular Major League Baseball (MLB) programs broadcast live over a Korean nationwide TV network and has drawn a favorable response from Korean viewers.
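
The mesh-feature step can be sketched as follows: a binarized digit image is divided into an N x N grid, and the foreground-pixel ratio of each cell forms the feature vector fed to the MLP; the grid size is an assumption.

```python
# Sketch of mesh-feature extraction: divide a binarized digit image into an
# N x N grid and use the foreground-pixel ratio of each cell as the feature
# vector fed to a classifier (e.g., an MLP). Grid size is an assumption.
import numpy as np

def mesh_features(binary_digit: np.ndarray, grid: int = 4) -> np.ndarray:
    """binary_digit: 2-D array with 1 for character pixels, 0 for background."""
    h, w = binary_digit.shape
    feats = np.empty(grid * grid, dtype=np.float64)
    for i in range(grid):
        for j in range(grid):
            cell = binary_digit[i * h // grid:(i + 1) * h // grid,
                                j * w // grid:(j + 1) * w // grid]
            feats[i * grid + j] = cell.mean() if cell.size else 0.0
    return feats

# Example: a 16-dimensional feature vector for a 28x20 digit bitmap
digit = (np.random.rand(28, 20) > 0.7).astype(np.uint8)
print(mesh_features(digit).shape)  # (16,)
```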

Development of Closed Caption Decoder System on Broadcast Monitor (방송용 모니터의 방송 자막 디코더 시스템 개발)

  • Song, Young-Kyu;Jeong, Jae-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2010.07a
    • /
    • pp.36-39
    • /
    • 2010
  • A multi-format broadcast monitor displays video, audio, and ancillary data delivered not only over SDI but also over HDMI, DVI, Component, and Composite, and is used as a reference monitor in broadcasting. Among the ancillary data, closed captions in North America follow two standards, EIA-608 and EIA-708, and are transmitted in four different ways; support for them in general broadcast monitors is extremely rare. Moreover, there are virtually no commercial ICs that decode closed caption data carried over SDI. This paper therefore proposes a method for displaying all of the various closed caption formats carried over SDI. First, for captions that arrive as an analog waveform in the VBI (Vertical Blanking Interval), we propose a structure that detects the Clock Run-In in real time to improve data reliability and implement it on an FPGA (Field Programmable Gate Array). Second, for caption data arriving in the VANC (Vertical Ancillary space), especially when large amounts of data arrive as with EIA-708, we propose a method in which the FPGA and the processor access shared memory directly, instead of using a slow transfer method such as the conventional I2C, so that the data can be processed in real time. The proposed methods were implemented on an FPGA and verified not only with caption encoder equipment actually used by U.S. and Canadian broadcasters but also with real broadcast content.
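
One small, self-contained piece of caption decoding can be sketched as follows: checking the odd parity of an EIA-608 byte pair and stripping the parity bit to recover the 7-bit caption codes (the SDI/VBI/VANC transport handling from the paper is omitted).

```python
# Hedged sketch of one small piece of caption decoding: checking the odd parity
# of an EIA-608 byte pair and stripping the parity bit to recover the two
# 7-bit caption code values. The surrounding transport handling is omitted.

def odd_parity_ok(byte: int) -> bool:
    """EIA-608 bytes carry odd parity in the most significant bit."""
    return bin(byte & 0xFF).count("1") % 2 == 1

def decode_608_pair(b1: int, b2: int):
    """Return the two 7-bit values, or None if either byte fails parity."""
    if not (odd_parity_ok(b1) and odd_parity_ok(b2)):
        return None
    return (b1 & 0x7F, b2 & 0x7F)

# Example: 0xC1 has odd parity and carries the 7-bit code 0x41 ('A')
print(decode_608_pair(0xC1, 0xC2))  # (65, 66)
```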

Sports Video Position Retrieval System Using Frame Merging (프레임 병합을 이용한 스포츠 동영상 위치 검색 시스템)

  • 이지현;임정훈;이양원
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2002.11a
    • /
    • pp.619-623
    • /
    • 2002
  • Captions are an indispensable source of information in sports video, and sports highlights are composed by recognizing them. This paper is a necessary intermediate step toward analyzing captions, namely retrieving and discriminating the position of the caption. We first enhance and simplify the image with a threshold-value algorithm in preprocessing, and then analyze the caption with a multiple-frame merging algorithm. Its execution is faster and simpler than the region-growing approach.
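
A simplified sketch of the frame-merging idea: static caption pixels persist across consecutive frames, so intersecting several binarized frames suppresses moving scene content and leaves candidate caption regions; the threshold values are illustrative assumptions.

```python
# Simplified sketch of multiple-frame merging: static caption pixels persist
# across consecutive frames, so intersecting several binarized frames suppresses
# moving scene content and leaves candidate caption regions. Threshold values
# are illustrative assumptions.
import cv2
import numpy as np

def merge_frames_for_caption(gray_frames, thresh=200):
    """gray_frames: list of consecutive grayscale frames (np.uint8 arrays)."""
    merged = np.full_like(gray_frames[0], 255)
    for frame in gray_frames:
        _, binary = cv2.threshold(frame, thresh, 255, cv2.THRESH_BINARY)
        merged = cv2.bitwise_and(merged, binary)   # keep only pixels bright in every frame
    return merged  # nonzero pixels indicate likely caption area
```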

Size-Independent Caption Extraction for Korean Captions with Edge Connected Components

  • Jung, Je-Hee;Kim, Jaekwang;Lee, Jee-Hyong
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.12 no.4
    • /
    • pp.308-318
    • /
    • 2012
  • Captions contain information related to the images they accompany. To obtain this information, text extraction methods for images have been developed. However, most existing methods apply only to captions with a fixed height or stroke width, because they use fixed pixel-size or block-size operators derived from morphological assumptions. We propose a method based on edge connected components that can extract Korean captions of various sizes and fonts. We analyze the properties of edge connected components that contain captions and build a decision tree that discriminates caption-bearing components from the rest. The images for the experiment were collected from broadcast programs, such as documentaries and news, that include captions of various heights and fonts. We evaluate the proposed method by comparing the performance of candidate caption area extraction. The experiment shows that the proposed method efficiently extracts Korean captions of various sizes.
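
The general pipeline can be sketched as follows: detect edges, group them into connected components, and keep components whose shape statistics look caption-like; the fixed thresholds below stand in for the paper's learned decision tree.

```python
# Sketch of the general pipeline the paper describes: detect edges, group them
# into connected components, and keep components whose shape statistics look
# caption-like. The specific thresholds below are illustrative, not the paper's
# learned decision tree.
import cv2
import numpy as np

def caption_candidate_boxes(gray, min_area=50, min_aspect=0.1, max_aspect=15.0):
    edges = cv2.Canny(gray, 100, 200)
    # Connect nearby edge pixels into components.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges, connectivity=8)
    boxes = []
    for i in range(1, n):  # label 0 is background
        x, y, w, h, area = stats[i]
        aspect = w / h if h else 0.0
        if area >= min_area and min_aspect <= aspect <= max_aspect:
            boxes.append((x, y, w, h))
    return boxes
```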

A Study on Multimedia Application Service using DTV Closed Caption Data (디지털방송 자막데이터를 이용한 멀티미디어 응용 서비스 연구)

  • Kim, Jung-Youn;Nam, Je-Ho
    • Journal of Broadcast Engineering
    • /
    • v.14 no.4
    • /
    • pp.488-500
    • /
    • 2009
  • In this paper, we study value-added services that make use of DTV closed caption data. Closed captioning helps bridge the "digital divide" by extending broadcasting accessibility to underserved groups such as hearing-impaired persons and foreigners. In Korea, the DTV closed-captioning standard was established in June 2007, and closed-captioning service has been mandated by law for all broadcasting services since April 2008. Here, we describe a method for extracting caption data from the MPEG-2 Transport Stream of an ATSC-based digital TV signal and generating a caption file from the extracted caption data and timing information. In addition, we present a method for segmenting broadcast content using the caption file. Experimental results show that the implemented software tool demonstrates the feasibility of the proposed methods and the usability of closed captions for a variety of data application services.
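
The content-segmentation step can be illustrated with a small sketch that groups consecutive caption cues containing a topic keyword into time segments; the cue format and the gap threshold are assumptions, not the paper's segmentation rules.

```python
# Illustrative sketch of content segmentation driven by a caption file: group
# consecutive caption cues that contain a topic keyword into time segments.
# The cue format (start_sec, end_sec, text) and the gap threshold are assumptions.

def segments_for_keyword(cues, keyword, max_gap=10.0):
    """Return (start, end) segments where the keyword keeps appearing in captions."""
    segments = []
    current = None
    for start, end, text in cues:
        if keyword in text:
            if current and start - current[1] <= max_gap:
                current = (current[0], end)        # extend the current segment
            else:
                if current:
                    segments.append(current)
                current = (start, end)             # open a new segment
    if current:
        segments.append(current)
    return segments

cues = [(10, 12, "태풍 소식입니다"), (13, 15, "태풍 경로는"), (120, 122, "다음은 스포츠")]
print(segments_for_keyword(cues, "태풍"))  # [(10, 15)]
```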

Knowledge-based Video Retrieval System Using Korean Closed-caption (한국어 폐쇄자막을 이용한 지식기반 비디오 검색 시스템)

  • 조정원;정승도;최병욱
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.3
    • /
    • pp.115-124
    • /
    • 2004
  • Content-based retrieval using low-level features can hardly provide results that correspond to the conceptual demands of users for intelligent retrieval. Video includes not only moving pictures but also audio and closed-caption data. Knowledge-based video retrieval can provide results that match the user's conceptual demands because it performs automatic indexing over such a variety of data. In this paper, we present a knowledge-based video retrieval system using Korean closed captions. The closed captions are indexed by a Korean keyword extraction system that includes morphological analysis. As a result, videos can be retrieved by keyword from the indexing database. In the experiment, we applied the proposed method to news videos with closed captions generated by a Korean stenographic system and empirically confirmed that it provides retrieval results that better match the user's conceptual demands.
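
A minimal sketch of the indexing idea: build an inverted index from caption keywords to (video, time) postings and query it by keyword. The whitespace tokenizer below is a stand-in for the Korean morphological keyword extractor used in the paper.

```python
# Minimal sketch of keyword-based video indexing over closed captions: build an
# inverted index from keywords to (video_id, time) postings and query it.
# The keyword extraction here is a whitespace split; the paper uses a Korean
# morphological analyzer, which is not reproduced in this sketch.
from collections import defaultdict

class CaptionIndex:
    def __init__(self):
        self.postings = defaultdict(list)   # keyword -> [(video_id, start_sec)]

    def add_caption(self, video_id, start_sec, text):
        for token in text.split():          # stand-in for morphological keyword extraction
            self.postings[token].append((video_id, start_sec))

    def search(self, keyword):
        return self.postings.get(keyword, [])

idx = CaptionIndex()
idx.add_caption("news_0401", 35.0, "대통령 기자 회견")
idx.add_caption("news_0402", 120.0, "태풍 피해 속보")
print(idx.search("태풍"))   # [('news_0402', 120.0)]
```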