EXTRACTION OF DTV CLOSED CAPTION STREAM AND GENERATION OF VIDEO CAPTION FILE

  • Kim, Jung-Youn; Nam, Je-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.364-367 / 2009
  • This paper presents a scheme that generates a caption file by extracting the Closed Caption stream from a DTV signal. Closed Captioning extends broadcasting accessibility to underserved groups such as the hearing-impaired and foreigners, helping to bridge the "digital divide." In Korea, the DTV Closed Captioning standard was established in June 2007, and Closed Captioning has been mandated by law for all broadcasting services since 2008. In this paper, we describe a method for extracting caption data from the MPEG-2 Transport Stream of an ATSC-based digital TV signal and generating caption files (SAMI and SRT) from the extracted caption data and timing information. Experimental results verify the feasibility of the generated caption files using PC-based media players that are widely used in multimedia services.

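The pipeline the abstract describes, pairing extracted caption text with timing information to emit an SRT file, can be sketched as follows. This is a minimal illustration in Python; the cue format and function names are assumptions, not the paper's implementation.

```python
from datetime import timedelta

def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    total_ms = int(timedelta(seconds=seconds).total_seconds() * 1000)
    h, rem = divmod(total_ms, 3600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def cues_to_srt(cues):
    """cues: list of (start_sec, end_sec, text) tuples -> SRT document string."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)
```

A SAMI writer would follow the same shape but emit `<SYNC Start=...>` markup instead of numbered timestamp blocks.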

Web Image Caption Extraction using Positional Relation and Lexical Similarity (위치적 연관성과 어휘적 유사성을 이용한 웹 이미지 캡션 추출)

  • Lee, Hyoung-Gyu; Kim, Min-Jeong; Hong, Gum-Won; Rim, Hae-Chang
    • Journal of KIISE: Software and Applications / v.36 no.4 / pp.335-345 / 2009
  • In this paper, we propose a new web image caption extraction method that considers both the positional relation between a caption and an image and the lexical similarity between a caption and the main text containing it. The positional relation captures where the caption is located relative to the corresponding image, in terms of distance and direction. The lexical similarity indicates how likely the main text is to generate the caption of the image. Compared with previous caption extraction approaches, which use only independent features of images and captions, the proposed approach improves caption extraction recall, precision, and F-measure, with a 28% gain in F-measure, by including the additional positional-relation and lexical-similarity features.
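The two feature families the abstract describes could be combined along these lines. This is a hedged sketch: the box format, the linear combination, and the weight `alpha` are assumptions, not the paper's model.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Lexical similarity between two texts as cosine similarity of term counts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def positional_score(image_box, text_box):
    """Score a candidate caption by its vertical gap below the image
    (closer = higher). Boxes are (x, y, width, height), y growing downward."""
    ix, iy, iw, ih = image_box
    tx, ty, tw, th = text_box
    gap = ty - (iy + ih)          # gap between image bottom and text top
    if gap < 0:
        return 0.0                # candidate is not below the image
    return 1.0 / (1.0 + gap)

def caption_score(image_box, text_box, text, main_text, alpha=0.5):
    """Combine positional and lexical evidence; alpha is an assumed weight."""
    return alpha * positional_score(image_box, text_box) + \
           (1 - alpha) * cosine_similarity(text, main_text)
```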

An Effective Method for Replacing Caption in Video Images (비디오 자막 문자의 효과적인 교환 방법)

  • Chun, Byung-Tae; Kim, Sook-Yeon
    • Journal of the Korea Society of Computer and Information / v.10 no.2 s.34 / pp.97-104 / 2005
  • Caption texts are frequently inserted into produced video images to aid the TV audience's understanding. In film, caption texts can be replaced without any loss of the original image, because they occupy their own track. In earlier replacement methods, new text was inserted into the caption area of the video image after the area had been filled with a solid color to remove the existing caption. However, these methods lose the original image within the caption area, which is problematic for the TV audience. In this paper, we propose a new method that replaces the caption text after recovering the original image in the caption area. In experiments, results on complex images show some distortion after recovery, but most results show good caption text over the recovered image. The new method is thus shown to be effective for replacing caption texts in video images.

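The recovery step could be approximated crudely as below: a toy stand-in for real image inpainting, assuming a grayscale frame stored as a 2-D list. The paper's actual recovery method is more sophisticated.

```python
def recover_caption_area(frame, top, bottom, left, right):
    """Crude recovery of a caption region: replace each pixel inside the box
    by linear interpolation between the rows just above and below the box."""
    out = [row[:] for row in frame]
    above, below = top - 1, bottom + 1
    for y in range(top, bottom + 1):
        w = (y - above) / (below - above)     # interpolation weight
        for x in range(left, right + 1):
            out[y][x] = round((1 - w) * frame[above][x] + w * frame[below][x])
    return out
```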

Development of Korean Sign Language Generation System using TV Caption Signal (TV 자막 신호를 이용한 한글 수화 발생 시스템의 개발)

  • Kim, Dae-Jin; Kim, Jung-Bae; Jang, Won; Bien, Zeung-Nam
    • Journal of the Institute of Electronics Engineers of Korea CI / v.39 no.5 / pp.32-44 / 2002
  • In this paper, we propose a TV caption-based KSL (Korean Sign Language) generation system. The caption signal is transmitted to a PC through a TV caption decoder. The caption text is then segmented into meaning units by a morphological analyzer that takes the specific characteristics of Korean Sign Language into account. Finally, the 3D KSL generation system renders the transformed morphological information as 3D graphics. In particular, we propose a morphological analyzer with several pre-processing techniques for real-time operation. The developed system was applied to real TV caption programs. Based on use by deaf viewers, we conclude that the system is sufficiently usable compared with conventional TV caption programs.
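The mapping from morphologically segmented caption text to sign units might look like this in outline. The lexicon entries below are hypothetical placeholders; real KSL glossing and the paper's analyzer are far richer.

```python
# Hypothetical morpheme-to-sign-gloss mapping (illustrative only).
SIGN_LEXICON = {
    "오늘": "TODAY",    # "today"
    "날씨": "WEATHER",  # "weather"
    "좋다": "GOOD",     # "to be good"
}

def caption_to_sign_glosses(morphemes):
    """Map a sequence of analyzed morphemes to KSL sign glosses, skipping
    morphemes with no sign equivalent (e.g. grammatical particles)."""
    return [SIGN_LEXICON[m] for m in morphemes if m in SIGN_LEXICON]
```

The gloss sequence would then drive the 3D animation stage, one sign clip per gloss.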

Knowledge-Based Numeric Open Caption Recognition for Live Sportscast

  • Sung, Si-Hun
    • Proceedings of the IEEK Conference / 2003.07e / pp.1871-1874 / 2003
  • A knowledge-based numeric open caption recognition method is proposed that recognizes numeric captions generated by a character generator (CG) and automatically superimposes a modified caption using the recognized text, but only when a valid numeric caption appears in a specific target region of a live sportscast scene produced by another broadcasting station. In the proposed method, mesh features are extracted from an enhanced binary image as feature vectors, and the information in the numeric image is recovered by recognizing each character with a multilayer perceptron (MLP) network. The result is verified against a knowledge-based rule set designed for more stable and reliable output, and the modified information is then displayed on screen by the CG. The MLB Eye Caption system based on the proposed algorithm has already been used for regular Major League Baseball (MLB) programs broadcast live over a Korean nationwide TV network and has drawn a favorable response from Korean viewers.
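The mesh-feature step the abstract mentions, dividing a binary character image into a grid and using per-cell foreground density as the feature vector, can be sketched as follows. The grid size and image representation are assumptions.

```python
def mesh_features(binary_image, rows=4, cols=4):
    """Divide a binary character image (2-D list of 0/1) into a rows x cols
    mesh and return the foreground-pixel density of each cell as a feature
    vector suitable for feeding an MLP classifier."""
    h, w = len(binary_image), len(binary_image[0])
    features = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            cell = [binary_image[y][x] for y in range(y0, y1)
                                       for x in range(x0, x1)]
            features.append(sum(cell) / len(cell))
    return features
```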


Development of Closed Caption Decoder System on Broadcast Monitor (방송용 모니터의 방송 자막 디코더 시스템 개발)

  • Song, Young-Kyu; Jeong, Jae-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2010.07a / pp.36-39 / 2010
  • A multi-format broadcast monitor displays video, audio, and ancillary data delivered not only over SDI but also over HDMI, DVI, Component, and Composite, and is used as a broadcast reference monitor. Among ancillary data, Closed Captions in North America follow two standards, EIA-608 and EIA-708, and are transmitted in four different ways; support for all of them is extremely rare in ordinary broadcast monitors. Moreover, commercial ICs that decode Closed Caption data carried over SDI are nearly nonexistent. This paper therefore proposes a method for displaying Closed Caption data transmitted over SDI in all of its variants. First, for data arriving as an analog waveform in the VBI (Vertical Blanking Interval), we propose a structure that detects the Clock Run-In in real time to improve data reliability, implemented on an FPGA (Field Programmable Gate Array). Second, for caption data arriving in the VANC (Vertical Ancillary space), especially the larger data volumes of EIA-708, we propose a method in which the FPGA and the processor access memory directly, rather than using a slow interface such as I2C, so that the data can be processed in real time. The proposed method was implemented on an FPGA and its operation was verified not only with caption encoder equipment actually used by US and Canadian broadcasters but also with real broadcast content.

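One concrete detail behind EIA-608 decoding is that each caption byte carries odd parity in its high bit, which a decoder must verify before using the payload. A minimal check looks like this; it is illustrative only and unrelated to the paper's FPGA design.

```python
def strip_odd_parity(byte):
    """EIA-608 caption bytes use odd parity over all 8 bits. Return the
    7-bit payload, or None if the parity check fails."""
    ones = bin(byte).count("1")
    if ones % 2 != 1:          # total number of set bits must be odd
        return None
    return byte & 0x7F
```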

Sports Video Position Retrieval System Using Frame Merging (프레임 병합을 이용한 스포츠 동영상 위치 검색 시스템)

  • 이지현; 임정훈; 이양원
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2002.11a / pp.619-623 / 2002
  • Captions are indispensable information in sports video, and sports highlights are composed so that captions can be recognized. This paper presents the intermediate step needed for caption analysis: retrieving and discriminating the position of captions. In preprocessing, the image is first enhanced and simplified with a thresholding algorithm; captions are then analyzed with a multiple-frame merging algorithm. The method performs faster and more simply than region-growing approaches.

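The frame-merging idea, keeping pixels that stay set across consecutive binarized frames because captions are static while the background moves, can be sketched as follows (a minimal sketch; the paper's merging algorithm may differ in detail):

```python
def merge_frames(binary_frames):
    """AND together consecutive binarized frames: static caption pixels
    survive, while moving background pixels drop out."""
    merged = [row[:] for row in binary_frames[0]]
    for frame in binary_frames[1:]:
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                merged[y][x] &= v
    return merged
```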

Size-Independent Caption Extraction for Korean Captions with Edge Connected Components

  • Jung, Je-Hee; Kim, Jaekwang; Lee, Jee-Hyong
    • International Journal of Fuzzy Logic and Intelligent Systems / v.12 no.4 / pp.308-318 / 2012
  • Captions carry information related to the images they accompany. To obtain this information, methods for extracting text from images have been developed. However, most existing methods apply only to captions with a fixed height or stroke width, because they rely on fixed pixel-size or block-size operators derived from morphological assumptions. We propose a method based on edge connected components that can extract Korean captions of various sizes and fonts. We analyze the properties of edge connected components that embed captions and build a decision tree that discriminates caption-bearing edge connected components from those that are not. The images for the experiments were collected from broadcast programs, such as documentaries and news, that include captions of various heights and fonts. We evaluate the proposed method by comparing the performance of latent caption area extraction. The experiments show that the proposed method efficiently extracts Korean captions of various sizes.
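The edge-connected-component step could be sketched as a standard 4-connected labeling pass over a binary edge map, yielding one bounding box per component; the decision-tree filtering that follows in the paper is omitted here.

```python
from collections import deque

def connected_components(edge_map):
    """Label 4-connected components in a binary edge map (2-D list of 0/1)
    and return a bounding box (min_y, min_x, max_y, max_x) per component."""
    h, w = len(edge_map), len(edge_map[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if edge_map[sy][sx] and not seen[sy][sx]:
                seen[sy][sx] = True
                queue = deque([(sy, sx)])
                min_y = max_y = sy
                min_x = max_x = sx
                while queue:            # BFS flood fill over edge pixels
                    y, x = queue.popleft()
                    min_y, max_y = min(min_y, y), max(max_y, y)
                    min_x, max_x = min(min_x, x), max(max_x, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and edge_map[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                boxes.append((min_y, min_x, max_y, max_x))
    return boxes
```

Because the boxes scale with the components themselves, no fixed block size is baked in, which is the size-independence the title refers to.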

A Study on Multimedia Application Service using DTV Closed Caption Data (디지털방송 자막데이터를 이용한 멀티미디어 응용 서비스 연구)

  • Kim, Jung-Youn; Nam, Je-Ho
    • Journal of Broadcast Engineering / v.14 no.4 / pp.488-500 / 2009
  • In this paper, we study value-added services built on DTV closed caption data. Closed Captioning extends broadcasting accessibility to underserved groups such as the hearing-impaired and foreigners, helping to bridge the "digital divide." In Korea, the DTV Closed Captioning standard was established in June 2007, and Closed Captioning has been mandated by law for all broadcasting services since April 2008. Here, we describe a method for extracting caption data from the MPEG-2 Transport Stream of an ATSC-based digital TV signal and generating a caption file from the extracted caption data and timing information. In addition, we present a method for segmenting broadcast content using the caption file. Experimental results show that the implemented software tool demonstrates the feasibility of the proposed methods and the usability of closed captions for a variety of data application services.
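One simple way to segment content from a caption file, splitting wherever captions fall silent for longer than a threshold, is sketched below. The gap heuristic and cue format are assumptions, not the paper's segmentation method.

```python
def segment_by_caption_gaps(cues, gap_threshold=5.0):
    """Split a program into segments wherever consecutive caption cues are
    separated by more than gap_threshold seconds.
    cues: list of (start_sec, end_sec, text), sorted by start time."""
    if not cues:
        return []
    segments = [[cues[0]]]
    for prev, cur in zip(cues, cues[1:]):
        if cur[0] - prev[1] > gap_threshold:   # silence -> new segment
            segments.append([])
        segments[-1].append(cur)
    return segments
```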

Korean-to-English Machine Translation System based on Verb-Phrase : 'CaptionEye/KE' (용언구에 기반한 한영 기계번역 시스템 : 'CaptionEye/KE')

  • Seo, Young-Ae; Kim, Young-Kil; Seo, Kwang-Jun; Choi, Sung-Kwon
    • Proceedings of the Korea Information Processing Society Conference / 2000.10a / pp.269-272 / 2000
  • This paper describes CaptionEye/KE, a verb-phrase-based Korean-to-English machine translation system under development at ETRI. Drawing on linguistic knowledge extracted from a large, high-quality Korean-English bidirectional corpus, including a case-frame dictionary, transfer patterns, and target-sentence connection patterns, CaptionEye/KE produces a full translation by combining translations of Korean verb-phrase units. CaptionEye/KE is a transfer-based machine translation system consisting of a Korean morphological analyzer, a Korean syntactic parser, a partial-translation connector, a partial-translation generator, a translation selector/refiner, and an English morphological generator. An input Korean sentence undergoes morphological analysis and tagging; its syntactic structure is then analyzed using the case-frame dictionary to produce a dependency tree. From this dependency tree, the connections between verb phrases are translated using the connection patterns, each verb phrase is translated using the transfer patterns, and the final English sentence is produced after a refinement step.

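The transfer-pattern idea, matching a Korean verb-phrase pattern and instantiating an English template with its argument slots, can be illustrated as follows. The patterns shown are toy entries, not ETRI's actual resources.

```python
# Toy transfer patterns: a Korean verb-phrase pattern mapped to an English
# template with argument slots (hypothetical entries for illustration).
TRANSFER_PATTERNS = {
    "NP1-가 NP2-를 먹다": "{NP1} eats {NP2}",
    "NP1-가 NP2-를 보다": "{NP1} watches {NP2}",
}

def translate_verb_phrase(pattern, arguments):
    """Instantiate the English template for a matched Korean verb-phrase
    pattern, filling its argument slots; None if the pattern is unknown."""
    template = TRANSFER_PATTERNS.get(pattern)
    if template is None:
        return None
    return template.format(**arguments)
```

A full system would chain such partial translations along the dependency tree and then refine the combined sentence, as the abstract describes.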