• Title/Summary/Keyword: TV Caption

Search results: 22

Development of Korean Sign Language Generation System using TV Caption Signal (TV 자막 신호를 이용한 한글 수화 발생 시스템의 개발)

  • Kim, Dae-Jin; Kim, Jung-Bae; Jang, Won; Bien, Zeung-Nam
    • Journal of the Institute of Electronics Engineers of Korea CI / v.39 no.5 / pp.32-44 / 2002
  • In this paper, we propose a TV caption-based KSL (Korean Sign Language) generation system. The caption signal is transmitted to a PC through a TV caption decoder. Next, the caption signal is segmented into meaning units by a morphological analyzer that takes the specific characteristics of Korean Sign Language into account. Finally, a 3D KSL generation system renders the transformed morphological information as 3D visual graphics. In particular, we propose a morphological analyzer with several pre-processing techniques for real-time capability. The developed system was applied to real TV caption programs. Based on trials with deaf users, we conclude that our system is sufficiently usable compared to conventional TV caption programs.
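The segmentation of caption text into sign-language meaning units could be approximated by a greedy longest-match pass over a gloss lexicon. This is only a minimal sketch under assumed inputs; the lexicon entries and `max_len` bound are illustrative, and the paper's actual morphological analyzer is far more elaborate.

```python
# Hypothetical gloss lexicon: phrases that map to a single sign.
SIGN_LEXICON = {"hello", "thank you", "weather", "rain", "tomorrow"}

def segment(words, lexicon=SIGN_LEXICON, max_len=3):
    """Greedy longest-match: group consecutive words into the longest
    phrase found in the lexicon; unknown words pass through singly."""
    units, i = [], 0
    while i < len(words):
        for n in range(min(max_len, len(words) - i), 0, -1):
            phrase = " ".join(words[i:i + n])
            if phrase in lexicon:
                units.append(phrase)
                i += n
                break
        else:  # no match at any length: emit the single word
            units.append(words[i])
            i += 1
    return units

print(segment("thank you for watching".split()))
# → ['thank you', 'for', 'watching']
```

Each resulting unit would then index into the 3D sign animation database.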

An Effective Method for Replacing Caption in Video Images (비디오 자막 문자의 효과적인 교환 방법)

  • Chun Byung-Tae; Kim Sook-Yeon
    • Journal of the Korea Society of Computer and Information / v.10 no.2 s.34 / pp.97-104 / 2005
  • Caption texts are frequently inserted into produced video images to help the TV audience's understanding. In films, caption texts can be replaced without any loss of the original image, because the captions have their own track. In earlier replacement methods, new texts were inserted into the caption area of the video image after it had been filled with a certain color to remove the established caption texts. However, these methods lose the original image in the caption area, which is problematic for the TV audience. In this paper, we propose a new method that replaces the caption text after recovering the original image in the caption area. In the experiments, results on complex images show some distortion after recovering the original images, but most results show good caption text over the recovered image. The new method is thus demonstrated to replace caption texts in video images effectively.

EXTRACTION OF DTV CLOSED CAPTION STREAM AND GENERATION OF VIDEO CAPTION FILE

  • Kim, Jung-Youn; Nam, Je-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.364-367 / 2009
  • This paper presents a scheme that generates a caption file by extracting a closed caption stream from a DTV signal. Note that the closed captioning service helps bridge the "digital divide" by extending broadcasting accessibility to underserved groups such as hearing-impaired persons and foreigners. In Korea, the DTV closed captioning standard was developed in June 2007, and closed captioning service has been required by law in all broadcasting services since 2008. In this paper, we describe a method of extracting caption data from the MPEG-2 Transport Stream of an ATSC-based digital TV signal and generating a caption file (SAMI and SRT) using the extracted caption data and time information. Experimental results verify the feasibility of the generated caption files using the PC-based media players widely employed in multimedia services.
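The final step described here, serializing extracted caption text plus time information into an SRT file, can be sketched as follows. The cue data is a made-up placeholder; the actual extraction from the MPEG-2 Transport Stream is not shown.

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues):
    """cues: list of (start_sec, end_sec, text) tuples -> SRT document string."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

# Hypothetical cues, as they might come out of the caption decoder.
cues = [(1.0, 3.5, "Hello."), (4.0, 6.25, "Closed captions here.")]
print(to_srt(cues))
```

SAMI output would follow the same pattern with HTML-like `<SYNC Start=...>` markup instead of numbered blocks.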

A Study on Analyzing Caption Characteristic for Recovering Original Images of Caption Region in TV Scene (원 영상 복원을 위한 TV 자막 특성 분석에 관한 연구)

  • Chun, Byung-Tae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.10 no.4 / pp.177-182 / 2010
  • Research on recovering the original images under captions has been widely conducted from a reusability point of view. Dynamic images imported from foreign countries often carry captions in foreign languages, so translating them into the local language is necessary. For the natural exchange of captions without loss of the original images, the image regions corresponding to the captions must be recovered. However, although recovering original images is very important, a systematic analysis of the characteristics of captions has not yet been done. Therefore, in this paper, we first survey the classification methods for TV programs used in academia, broadcasting stations, and broadcasting organizations, and then analyze the frequency of captions, the importance of caption contents, and the necessity of recovery according to caption type. We also analyze the characteristics of the captions judged to need recovery, and use them as recovery information.

A Research of Character Graphic Design on Larger Television Screens -Based on Analysis of its Visual Perception- (TV화면 대형화에 따른 문자그래픽 표현 연구 -시각인지도 분석 기반-)

  • Lee, Kook-Se; Moon, Nam-Mee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.9 no.4 / pp.129-138 / 2009
  • Character graphic design, a major visual element of the TV screen, has become greatly important in helping viewers understand visual information and in enhancing the quality of programs. This research figures out how to change and improve the attributes of TV captions and graphics, such as fonts, size, and caption speed, to suit bigger, higher-quality TV screens. Based on two Delphi surveys of graphics experts along with theoretical studies, this article analyzes the relevance of visual perception to the various visual elements of the TV screen, and proposes a better plan for visual effects across various media under OSMU (One Source Multi Use).

A Study on Multimedia Application Service using DTV Closed Caption Data (디지털방송 자막데이터를 이용한 멀티미디어 응용 서비스 연구)

  • Kim, Jung-Youn; Nam, Je-Ho
    • Journal of Broadcast Engineering / v.14 no.4 / pp.488-500 / 2009
  • In this paper, we study value-added services that make use of DTV closed caption data. Note that the closed captioning service helps bridge the "digital divide" by extending broadcasting accessibility to underserved groups such as hearing-impaired persons and foreigners. In Korea, the DTV closed captioning standard was developed in June 2007, and closed captioning service has been required by law in all broadcasting services since April 2008. Here, we describe a method of extracting caption data from the MPEG-2 Transport Stream of an ATSC-based digital TV signal and generating a caption file using the extracted caption data and time information. In addition, we present a method for segmenting broadcast content using the caption file. Experimental results show that the implemented software tool demonstrates the feasibility of the proposed methods and the usability of closed captions for a variety of data application services.

Application of Speech Recognition with Closed Caption for Content-Based Video Segmentations

  • Son, Jong-Mok; Bae, Keun-Sung
    • Speech Sciences / v.12 no.1 / pp.135-142 / 2005
  • An important aspect of video indexing is the ability to segment video into meaningful segments, i.e., content-based video segmentation. Since the audio signal in the sound track is synchronized with the image sequences of the video program, the speech signal in the sound track can be used to segment video into meaningful segments. In this paper, we propose a new approach to content-based video segmentation that uses closed captions to construct a recognition network for speech recognition. Accurate time information for video segmentation is then obtained from the speech recognition process. In a video segmentation experiment on TV news programs, we successfully produced 56 video summaries from 57 TV news stories, demonstrating that the proposed scheme is very promising for content-based video segmentation.

Implement of Realtime Character Recognition System for Numeric Region of Sportscast (스포츠 중계 화면 내 숫자영역에 대한 실시간 문자인식 시스템 구현)

  • 성시훈; 전우성
    • Proceedings of the IEEK Conference / 2001.06d / pp.5-8 / 2001
  • We propose a real-time numeric caption recognition algorithm that automatically recognizes numeric captions generated by computer graphics (CG) and displays a modified caption using the recognized resource, but only when a valuable numeric caption appears in a targeted region of a live sportscast scene produced by another broadcasting station. After acquiring the sports broadcast scenes in real time with a frame grabber, we extract mesh features from the enhanced binary image as a feature vector, and then recover the valuable resource from the numeric image by recognizing the characters with a neural network. Finally, the result is verified by a knowledge-based rule set designed for more stable and reliable output, and the converted CG caption serving our purpose is displayed on screen. At present, our algorithm drives a real-time automatic mile-to-kilometer caption conversion system for the regular Major League Baseball (MLB) programs broadcast live throughout Korea over our nationwide network. This system automatically converts captions in miles, universally used in the United States, into captions in kilometers, familiar to most Koreans, in real time, and has received favorable reviews from the TV audience.
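The mile-to-kilometer conversion step applied after recognition amounts to simple arithmetic plus reformatting. A minimal sketch, assuming the recognizer yields a plain "<number> MPH" string (the real system's caption format is not specified in the abstract):

```python
def mile_caption_to_km(caption):
    """Convert a recognized speed caption in miles per hour, e.g. '95 MPH',
    into its kilometers-per-hour equivalent, e.g. '153 km/h'."""
    number, unit = caption.split()
    if unit.upper() != "MPH":
        raise ValueError(f"unexpected unit: {unit}")
    kmh = round(float(number) * 1.609344)  # 1 mile = 1.609344 km
    return f"{kmh} km/h"

print(mile_caption_to_km("95 MPH"))  # → 153 km/h
```

The knowledge-based verification mentioned in the abstract would reject implausible values (e.g. a pitch speed of 300 MPH) before this conversion is displayed.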

Knowledge-Based Numeric Open Caption Recognition for Live Sportscast

  • Sung, Si-Hun
    • Proceedings of the IEEK Conference / 2003.07e / pp.1871-1874 / 2003
  • Knowledge-based numeric open caption recognition is proposed that can recognize numeric captions generated by a character generator (CG) and automatically superimpose a modified caption using the recognized text, but only when a valid numeric caption appears in a targeted region of a live sportscast scene produced by another broadcasting station. In the proposed method, mesh features are extracted from an enhanced binary image as feature vectors; then valuable information is recovered from the numeric image by recognizing the characters with a multilayer perceptron (MLP) network. The result is verified using a knowledge-based rule set designed for more stable and reliable output, and the modified information is displayed on screen by the CG. The MLB Eye Caption system based on the proposed algorithm has already been used for the regular Major League Baseball (MLB) programs broadcast live over a Korean nationwide TV network and has drawn a favorable response from Korean viewers.
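Mesh features of the kind used in this line of work divide the binarized character image into a coarse grid and take the foreground-pixel ratio of each cell as the feature vector fed to the MLP. A minimal sketch with an assumed 0/1 bitmap representation (the papers' exact grid size and preprocessing are not given here):

```python
def mesh_features(bitmap, rows=4, cols=4):
    """Divide a binary bitmap (list of equal-length rows of 0/1) into a
    rows x cols mesh and return the foreground-pixel ratio of each cell."""
    h, w = len(bitmap), len(bitmap[0])
    feats = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            cell = [bitmap[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            feats.append(sum(cell) / len(cell) if cell else 0.0)
    return feats

# Toy 8x8 glyph: top half set, bottom half empty.
glyph = [[1] * 8] * 4 + [[0] * 8] * 4
print(mesh_features(glyph, rows=2, cols=2))  # → [1.0, 1.0, 0.0, 0.0]
```

The resulting fixed-length vector (16 values for a 4x4 mesh) is what the MLP classifier would be trained on, one output per digit class.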

Closed Caption Synchronization Using Dynamic Programming (동적계획법을 이용한 장애인방송 폐쇄자막 동기화)

  • Oh, Juhyun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.461-464 / 2020
  • Terrestrial broadcasters provide a closed caption service for hearing-impaired viewers. Current closed captioning is delayed because stenographers type the captions while watching the broadcast in real time. Moreover, because the entered captions are stored separately from the TV program video, their start points usually do not match the video. These problems make the stored captions hard to reuse for online services, as they are out of sync with the video. This paper proposes a method that recognizes the speech of a TV program to extract synchronized text, and then aligns this text with the stored closed captions via dynamic programming to synchronize them. When applied to actual TV programs and captions, the method synchronized most syllables and lines accurately.
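The dynamic-programming alignment at the core of this approach can be sketched with a Levenshtein-style alignment between the stored caption syllables and the timestamped syllables from speech recognition; matched positions then transfer timing onto the caption. The syllables and timestamps below are illustrative, not from the paper.

```python
def align(ref, hyp):
    """Edit-distance DP (Levenshtein) alignment of two symbol sequences;
    returns (ref_index, hyp_index) pairs for exactly matching symbols."""
    n, m = len(ref), len(hyp)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    # Backtrack, collecting positions where the symbols actually match.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if ref[i - 1] == hyp[j - 1] and dp[i][j] == dp[i - 1][j - 1]:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif dp[i][j] == dp[i - 1][j - 1] + 1:
            i, j = i - 1, j - 1
        elif dp[i][j] == dp[i - 1][j] + 1:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

# Stored caption line (no timing) vs. ASR syllables with timestamps.
caption = list("안녕하세요")
asr = [("안", 1.0), ("녕", 1.2), ("하", 1.4), ("세", 1.6), ("요", 1.8)]
pairs = align(caption, [s for s, _ in asr])
print(asr[pairs[0][1]][1])  # start time of the caption line → 1.0
```

ASR errors show up as substitutions or gaps in the alignment; as long as enough syllables match, each caption line still inherits a reliable start time.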
