• Title/Summary/Keyword: Caption

Search Results: 167

Creation of Soccer Video Highlights Using Caption Information (자막 정보를 이용한 축구 비디오 하이라이트 생성)

  • Shin Seong-Yoon;Kang Il-Ko;Rhee Yang-Won
    • Journal of the Korea Society of Computer and Information, v.10 no.5 s.37, pp.65-76, 2005
  • A digital video is very long and requires large-capacity storage. Before watching a long original video, viewers therefore often prefer a summarized version; in sports in particular, highlight videos are frequently watched, since a highlight video lets the viewer decide whether the full video is worth watching. This paper proposes a scheme for creating soccer video highlights using the temporal and spatial structural features of captions. These features are used to extract caption frame intervals and caption keyframes. A highlight video is created by resetting the shots for the caption keyframes through logical indexing and by applying a highlight-creation rule. Finally, highlight videos and video segments can be searched and browsed so that viewers can select the desired items from the browser.
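The keyframe-to-shot step of the scheme above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual rule: the shot boundaries, keyframe times, and the assumption that each caption keyframe selects the whole shot containing it are all hypothetical.

```python
# Hypothetical sketch: caption keyframes (e.g. score captions) select the
# shots that make up the highlight. All times below are illustrative.

def shots_for_keyframes(shot_boundaries, caption_keyframes):
    """Map each caption keyframe time onto the shot interval containing it."""
    highlights = []
    for t in caption_keyframes:
        for start, end in shot_boundaries:
            if start <= t < end:
                if (start, end) not in highlights:  # keep each shot once
                    highlights.append((start, end))
                break
    return highlights

shots = [(0, 40), (40, 95), (95, 130), (130, 200)]  # (start, end) in seconds
keyframes = [50, 60, 150]                           # caption keyframe times
print(shots_for_keyframes(shots, keyframes))        # [(40, 95), (130, 200)]
```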


Knowledge-based Video Retrieval System Using Korean Closed-caption (한국어 폐쇄자막을 이용한 지식기반 비디오 검색 시스템)

  • 조정원;정승도;최병욱
    • Journal of the Institute of Electronics Engineers of Korea CI, v.41 no.3, pp.115-124, 2004
  • Content-based retrieval using low-level features can hardly provide retrieval results that match the conceptual demands of users. Video includes not only moving-picture data but also audio and closed-caption data, and knowledge-based video retrieval can satisfy such conceptual demands by automatically indexing these varied data. In this paper, we present a knowledge-based video retrieval system using Korean closed captions. The closed captions are indexed by a Korean keyword-extraction system that includes morphological analysis, so that videos can be retrieved by keyword from the indexing database. In our experiments, we applied the proposed method to news video with closed captions generated by a Korean stenographic system and empirically confirmed that it provides retrieval results that better match the conceptual demands of users.
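The keyword-indexing step described above can be sketched as a plain inverted index. Note the simplification: the paper's Korean morphological analysis is replaced here by naive whitespace tokenization, and all video IDs and caption strings are illustrative.

```python
# Minimal sketch of keyword-based caption indexing, assuming captions are
# already tokenized (the paper uses a Korean morphological analyzer for
# keyword extraction, which is not reproduced here).
from collections import defaultdict

def build_index(captions):
    """captions: {video_id: caption text}; returns keyword -> set of video ids."""
    index = defaultdict(set)
    for vid, text in captions.items():
        for word in text.lower().split():
            index[word].add(vid)
    return index

captions = {"news01": "president visits seoul", "news02": "seoul traffic report"}
index = build_index(captions)
print(sorted(index["seoul"]))  # ['news01', 'news02']
```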

Methods for Video Caption Extraction and Extracted Caption Image Enhancement (영화 비디오 자막 추출 및 추출된 자막 이미지 향상 방법)

  • Kim, So-Myung;Kwak, Sang-Shin;Choi, Yeong-Woo;Chung, Kyu-Sik
    • Journal of KIISE: Software and Applications, v.29 no.4, pp.235-247, 2002
  • For efficient indexing and retrieval of digital video data, research on video caption extraction and recognition is required. This paper proposes methods for extracting artificial captions from video data and enhancing their image quality for accurate Hangul and English character recognition. The proposed methods first find the beginning and ending frames of the same caption contents and combine the multiple frames in each group by a logical operation to remove background noise; during this process, an evaluation detects integrated results that contain different caption images. After the multiple video frames are integrated, four image enhancement techniques are applied: resolution enhancement, contrast enhancement, stroke-based binarization, and morphological smoothing. These operations improve the image quality even for phonemes with complex strokes. Finding the beginning and ending frames of the same caption contents can also be used effectively for digital video indexing and browsing. We tested the proposed methods on video caption images containing both Hangul and English characters from cinema and obtained improved character recognition results.
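The frame-integration step can be illustrated with toy binary bitmaps. The choice of logical AND below is an assumption about which logical operation the paper uses; the idea is that caption pixels stay "on" across every frame of the same caption while moving background pixels do not.

```python
# Sketch of multi-frame integration: a logical AND over binarized frames of
# the same caption keeps stable caption pixels and drops per-frame noise.
# Frames here are toy bitmaps (lists of 0/1 rows), purely illustrative.

def integrate_frames(frames):
    """Logical AND of several binarized frames of the same caption."""
    result = [row[:] for row in frames[0]]
    for frame in frames[1:]:
        for y, row in enumerate(frame):
            for x, px in enumerate(row):
                result[y][x] &= px  # noise differs per frame -> becomes 0
    return result

f1 = [[1, 1, 0], [0, 1, 1]]  # caption pixels plus noise
f2 = [[1, 1, 1], [0, 1, 0]]  # same caption, different noise
print(integrate_frames([f1, f2]))  # [[1, 1, 0], [0, 1, 0]]
```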

The Effect of Layout Framing on SNS Shopping Information: A-D Perspective (SNS 쇼핑정보의 레이아웃 프레이밍 연구: A-D 관점에서)

  • Yanjinlkham Khurelchuluun;Zainab Shabir;Dong-Seok Lee;Gwi-Gon Kim
    • Journal of Industrial Convergence, v.21 no.11, pp.1-12, 2023
  • With the recent explosive popularity of SNS, SNS marketing has become increasingly important, and with it the order of image and caption in SNS layouts. This research analyzes the impact of SNS layouts (Image First vs. Caption First) on users' attitudes toward SNS shopping. A survey was conducted with 350 members of the general public and college (graduate) students living in Daegu City and Gyeongbuk Province, and the data were analyzed using PROCESS, regression analysis, and t-tests in SPSS 21.0. The results confirmed that the Image First layout was more accessible than the Caption First layout, while the Caption First layout was more diagnostic. Of three specific mediation paths, only two were confirmed: through diagnosticity and usefulness, and through accessibility, diagnosticity, and usefulness; the path through diagnosticity and usefulness was the stronger. Additionally, the impact of accessibility on diagnosticity was higher when involvement was high than when it was low.

Automatic Indexing for the Content-based Retrieval of News Video (뉴스 비디오의 내용기반 검색을 위한 자동 인덱싱)

  • Yang, Myung-Sup;Yoo, Cheol-Jung;Chang, Ok-Bae
    • The Transactions of the Korea Information Processing Society, v.5 no.5, pp.1130-1139, 1998
  • This paper presents an integrated solution for content-based news video indexing and retrieval. Although it is currently impossible to automatically index general video, specifically structured video such as news can be indexed. The proposed model automatically extracts key frames using the structured knowledge of news and consists of three modules: news item segmentation, caption recognition, and a search browser. The news item segmentation module recognizes anchor-person shots by face recognition and divides the video into news items using the anchor-person frame information. The caption recognition module detects caption frames by their caption characteristics, extracts the character regions using a split-merge method, and recognizes the characters with OCR software. Finally, the search browser module supports a variety of search mechanisms.
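The segmentation idea above, dividing a news program at anchor-person shots, can be sketched as follows; which shots are anchor shots would come from the paper's face-recognition step, so the boolean labels below are illustrative assumptions.

```python
# Sketch of news-item segmentation: each news item runs from one
# anchor-person shot up to (but not including) the next. The per-shot
# anchor labels stand in for the paper's face-recognition output.

def segment_news_items(is_anchor):
    """is_anchor: per-shot booleans; returns list of (start_shot, end_shot)."""
    anchors = [i for i, a in enumerate(is_anchor) if a]
    items = []
    for j, start in enumerate(anchors):
        end = anchors[j + 1] if j + 1 < len(anchors) else len(is_anchor)
        items.append((start, end))
    return items

# Shots 0 and 3 are anchor-person shots -> two news items.
print(segment_news_items([True, False, False, True, False]))  # [(0, 3), (3, 5)]
```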


Design and Implementation of Multimedia Data Retrieval System using Image Caption Information (영상 캡션 정보를 이용한 멀티미디어 데이터 검색 시스템의 설계 및 구현)

  • 이현창;배상현
    • Journal of the Korea Institute of Information and Communication Engineering, v.8 no.3, pp.630-636, 2004
  • With the increasing use of audio and video data, the presentation of multimedia content and the retrieval, storage, and manipulation of multimedia data have become the focus of recent work. A multimedia display system should easily retrieve and access the content users want. This study covers the design and implementation of a system that retrieves multimedia data based on document contents or the caption information attached to the multimedia data, for retrieving documents that include multimedia data. It develops a filtering step that retrieves keywords from both the caption information of multimedia data and the text of a document. The system is also designed to retrieve large amounts of data quickly using an inverted file structure built on a B+ tree.
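The inverted-file lookup described above can be sketched as follows. A sorted key list stands in for the B+ tree here, since both provide ordered, logarithmic keyword lookup; the class and the documents are illustrative, not the paper's implementation.

```python
# Sketch of an inverted file with ordered keyword lookup. A sorted list
# plus binary search substitutes for the paper's B+ tree index.
import bisect

class InvertedFile:
    def __init__(self):
        self.keys = []      # sorted keywords (B+ tree stand-in)
        self.postings = {}  # keyword -> list of document ids

    def add(self, doc_id, keywords):
        for kw in keywords:
            if kw not in self.postings:
                bisect.insort(self.keys, kw)
                self.postings[kw] = []
            self.postings[kw].append(doc_id)

    def search(self, keyword):
        i = bisect.bisect_left(self.keys, keyword)  # O(log n) key lookup
        if i < len(self.keys) and self.keys[i] == keyword:
            return self.postings[keyword]
        return []

idx = InvertedFile()
idx.add("doc1", ["video", "caption"])
idx.add("doc2", ["caption"])
print(idx.search("caption"))  # ['doc1', 'doc2']
```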

Connected Component-Based and Size-Independent Caption Extraction with Neural Networks (신경망을 이용한 자막 크기에 무관한 연결 객체 기반의 자막 추출)

  • Jung, Je-Hee;Yoon, Tae-Bok;Kim, Dong-Moon;Lee, Jee-Hyong
    • Journal of the Korean Institute of Intelligent Systems, v.17 no.7, pp.924-929, 2007
  • Captions that appear in images carry information related to those images, and methods for extracting text from images have been developed to obtain it. However, most existing methods apply only to captions with a fixed stroke width. We propose a method that handles various caption sizes, based on connected components: edge pixels are detected and grouped into connected components, their properties are analyzed, and a neural network is built to discriminate components that contain captions from those that do not. Experimental data were collected from broadcast programs such as news, documentaries, and entertainment shows that include captions of various heights. Results were evaluated by two criteria: recall, the ratio of identified captions to all captions in the images, and precision, the ratio of true captions among the objects identified as captions. The experiments show that the proposed method efficiently extracts captions of various sizes.
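The connected-component stage of the method above can be sketched with a flood-fill labeling pass; the neural-network classifier that would then judge each component is omitted here, and the bitmap is a toy example rather than real edge data.

```python
# Sketch of the connected-component stage: label pixels of a binary bitmap
# into 4-connected components. The paper's neural network would then
# classify each component as caption or non-caption (not reproduced here).

def label_components(bitmap):
    """4-connected component labeling via iterative flood fill."""
    h, w = len(bitmap), len(bitmap[0])
    labels = [[0] * w for _ in range(h)]
    comps, next_label = {}, 1
    for sy in range(h):
        for sx in range(w):
            if bitmap[sy][sx] and not labels[sy][sx]:
                stack, pixels = [(sy, sx)], []
                labels[sy][sx] = next_label
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           bitmap[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
                comps[next_label] = pixels
                next_label += 1
    return comps

bitmap = [[1, 1, 0, 0],
          [1, 0, 0, 1],
          [0, 0, 1, 1]]
print(len(label_components(bitmap)))  # 2 components
```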

Study on the meaning and delivery of caption recording in mass media - Focusing on the function of caption recording in TV broadcasting and video art - (미디어에 있어서의 자막기록의 의미와 전달성 - 공중파방송과 비디오 아트에서의 자막기록을 중심으로 -)

  • Rhee, Ji-Young
    • Journal of Korean Society of Archives and Records Management, v.3 no.2, pp.78-96, 2003
  • Mass media continues to innovate and has great power to transform our lives. Marshall McLuhan described new media as a new method of language that connects us to the real world, and letters make a great difference in the media world. Since the end of the silent era, captions not only deliver the meaning of the contents but also contribute to the composition of the screen itself; these compositional elements carry aesthetic, entertainment, and revival aspects. The caption, once used only for translation, has changed into a new means of expression. To deliver the meaning of contents, on-screen lettering has undergone great changes and acquired new significance: lettering design is a new aesthetic method of the media world, and its elements approach us as a new part of everyday life. This study therefore examines these aspects of lettering in the mass media.

Development of Video Caption Editor with Kinetic Typography (글자가 움직이는 동영상 자막 편집 어플리케이션 개발)

  • Ha, Yea-Young;Kim, So-Yeon;Park, In-Sun;Lim, Soon-Bum
    • Journal of Korea Multimedia Society, v.17 no.3, pp.385-392, 2014
  • In this paper, we developed an Android application named VIVID with which users can easily edit moving captions on smartphone videos. It provides convenient ways to set the time range, text, location, and motion of caption text on the video. The editing result is uploaded to a web server as HTML and can be shared with other users.

Application of Speech Recognition with Closed Caption for Content-Based Video Segmentations

  • Son, Jong-Mok;Bae, Keun-Sung
    • Speech Sciences, v.12 no.1, pp.135-142, 2005
  • An important aspect of video indexing is the ability to segment video into meaningful segments, i.e., content-based video segmentation. Since the audio in the sound track is synchronized with the image sequences, the speech signal can be used to segment the video into meaningful segments. In this paper, we propose a new approach to content-based video segmentation that uses closed captions to construct a recognition network for speech recognition; accurate time information for segmentation is then obtained from the recognition process. In a segmentation experiment on TV news programs, we successfully produced 56 video summaries from 57 TV news stories, demonstrating that the proposed scheme is very promising for content-based video segmentation.
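The core idea above, using the closed-caption text to attach recognition-derived times to story boundaries, can be sketched as follows. This is a strong simplification: real forced alignment works on phone-level networks, and the word timestamps and boundary words below are illustrative assumptions.

```python
# Sketch of caption/speech time alignment: speech recognition assigns each
# caption word a start time, and the first word of each story in the
# closed caption picks out the story's start time. All data is illustrative.

def segment_times(word_times, boundary_words):
    """word_times: [(word, start_sec)]; boundary_words: first word of each story."""
    times, i = [], 0
    for word, start in word_times:
        if i < len(boundary_words) and word == boundary_words[i]:
            times.append(start)
            i += 1
    return times

word_times = [("today", 0.0), ("weather", 5.2), ("next", 31.0), ("sports", 31.8)]
print(segment_times(word_times, ["today", "next"]))  # [0.0, 31.0]
```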
