• Title/Summary/Keyword: Automatic Video Summarization


Viewer's Affective Feedback for Video Summarization

  • Dammak, Majdi;Wali, Ali;Alimi, Adel M.
    • Journal of Information Processing Systems / v.11 no.1 / pp.76-94 / 2015
  • For various reasons, many viewers prefer to watch a summary of a film rather than spend time on the full video. Traditionally, video was analyzed manually to produce a summary, which requires a substantial amount of work time; a tool for automatic video summarization is therefore needed. Automatic video summarization aims at extracting all of the important moments in which viewers might be interested, and the summarization criteria can differ from one video to another. This paper presents how emotional dimensions obtained from real viewers can be used as an important input for computing which parts of a film are the most interesting. Our results, based on the lab experiments that were carried out, are significant and promising.

An Automatic Summarization System of Baseball Game Video Using the Caption Information (자막 정보를 이용한 야구경기 비디오의 자동요약 시스템)

  • 유기원;허영식
    • Journal of Broadcast Engineering / v.7 no.2 / pp.107-113 / 2002
  • In this paper, we propose a method and a software system for the automatic summarization of baseball game videos. The proposed system pursues fast execution and high summarization accuracy. To satisfy these requirements, important events in baseball video are detected through a DC-based shot boundary detection algorithm and a simple caption recognition method. Furthermore, the proposed system supports a hierarchical description so that users can browse and navigate videos at several levels of summarization.
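The DC-based shot boundary detection described in this abstract can be sketched roughly as follows. This is an illustrative approximation, not the authors' implementation: the "DC image" is emulated by the mean of each 8×8 block of a grayscale frame, and a fixed difference threshold stands in for whatever decision rule the paper actually uses.

```python
import numpy as np

def dc_image(frame, block=8):
    """Approximate the MPEG DC image: mean of each 8x8 block of a grayscale frame."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block
    f = frame[:h, :w].reshape(h // block, block, w // block, block)
    return f.mean(axis=(1, 3))

def shot_boundaries(frames, threshold=30.0):
    """Flag a shot boundary where the mean absolute DC-image difference jumps."""
    dcs = [dc_image(f) for f in frames]
    return [i for i in range(1, len(dcs))
            if np.abs(dcs[i] - dcs[i - 1]).mean() > threshold]
```

Working on DC images rather than full frames is what makes this kind of detector fast: for MPEG video the DC coefficients can be read almost directly from the compressed stream without full decoding.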

Automatic Summarization of Basketball Video Using the Score Information (스코어 정보를 이용한 농구 비디오의 자동요약)

  • Jung, Cheol-Kon;Kim, Eui-Jin;Lee, Gwang-Gook;Kim, Whoi-Yul
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.9C / pp.881-887 / 2007
  • In this paper, we propose a method for content-based automatic summarization of basketball game videos. For a meaningful summary, we use the score information in basketball videos, which is obtained by recognizing the digits on the score caption and analyzing the variation of the score. Generally, the important events in basketball are 3-point shots, one-sided runs, lead changes, and so on. We detect these events using the score information and produce summaries and highlights of basketball games.
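Two of the score-derived events named in this abstract, 3-point shots and lead changes, follow mechanically from the recognized score sequence. A minimal sketch, assuming the caption recognizer already yields a time series of (home, away) score readings:

```python
def detect_events(score_series):
    """score_series: list of (home, away) score readings over time.
    Infer events from score changes: a +3 jump marks a 3-point shot,
    a sign flip of the score difference marks a lead change."""
    events = []
    for t in range(1, len(score_series)):
        (h0, a0), (h1, a1) = score_series[t - 1], score_series[t]
        if h1 - h0 == 3 or a1 - a0 == 3:
            events.append((t, "3-point shot"))
        if (h0 - a0) * (h1 - a1) < 0:  # lead swapped sides
            events.append((t, "lead change"))
    return events
```

One-sided runs would need a window over this same series (e.g. many consecutive scoring events by one team), but the per-step logic is the same score-difference bookkeeping.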

Automatic Summarization of Basketball Video Using the Score Information (스코어 정보를 이용한 농구 비디오의 자동요약)

  • Jung, Cheol-Kon;Kim, Eui-Jin;Lee, Gwang-Gook;Kim, Whoi-Yul
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.8C / pp.738-744 / 2007
  • In this paper, we propose a method for content-based automatic summarization of basketball game videos. For a meaningful summary, we use the score information in basketball videos, which is obtained by recognizing the digits on the score caption and analyzing the variation of the score. Generally, the important events in basketball are 3-point shots, one-sided runs, lead changes, and so on. We detect these events using the score information and produce summaries and highlights of basketball games.

Automatic Video Management System Using Face Recognition and MPEG-7 Visual Descriptors

  • Lee, Jae-Ho
    • ETRI Journal / v.27 no.6 / pp.806-809 / 2005
  • The main goal of this research is automatic video analysis using a face recognition technique. In this paper, an automatic video management system is introduced with a variety of functions, such as indexing, editing, summarizing, and retrieving multimedia data. The automatic management tool utilizes MPEG-7 visual descriptors to generate a video index for creating a summary. The resulting index generates a preview of a movie and allows non-linear access with thumbnails. In addition, the index supports searching for shots similar to a desired one within saved video sequences. Moreover, a face recognition technique is utilized for person-based video summarization and indexing of stored video data.
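The person-based indexing step this abstract describes amounts to an inverted index from recognized identities to shots. A toy sketch under the assumption that a face recognizer has already labeled each shot (the shape of the index, not the paper's system):

```python
from collections import defaultdict

def build_person_index(shot_faces):
    """shot_faces: {shot_id: set of recognized person labels}, assumed to come
    from an upstream face recognizer. Returns an inverted index person -> shots."""
    index = defaultdict(list)
    for shot_id in sorted(shot_faces):
        for person in shot_faces[shot_id]:
            index[person].append(shot_id)
    return dict(index)

def person_summary(shot_faces, person, max_shots=3):
    """A person-based summary: the first shots in which the person appears."""
    return build_person_index(shot_faces).get(person, [])[:max_shots]
```

The MPEG-7 side of the system would store the same associations as descriptor metadata alongside the visual index, so that both similarity search and person-based browsing read from one structure.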


Automatic Genre Classification of Sports News Video Using Features of Playfield and Motion Vector (필드와 모션벡터의 특징정보를 이용한 스포츠 뉴스 비디오의 장르 분류)

  • Song, Mi-Young;Jang, Sang-Hyun;Cho, Hyung-Je
    • The KIPS Transactions: Part B / v.14B no.2 / pp.89-98 / 2007
  • For browsing, searching, and manipulating video documents, an indexing technique to describe video contents is required. Until now, indexing has mostly been carried out by specialists who manually assign a few keywords to the video contents, which makes it an expensive and time-consuming task. Therefore, automatic classification of video content is necessary. We propose a fully automatic and computationally efficient method for the analysis and summarization of sports news video for five sports: soccer, golf, baseball, basketball, and volleyball. First of all, sports news videos are segmented into anchor-person shots and news report shots. Shot classification is based on image preprocessing and the color features of the anchor-person shots. We then use the dominant color of the playfield and motion features for the analysis of sports shots. Finally, sports shots are classified into the five genre types. We achieved an overall average classification accuracy of 75% on sports news videos with 241 scenes. Therefore, the proposed method can further be used to search news video for individual sports news items and sports highlights.
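The playfield-color cue used here can be illustrated with a simple hue-fraction test. The hue ranges below are placeholders chosen for illustration (grass-green for soccer/golf, wood-brown for basketball); the paper derives its dominant-color model from data, and also combines it with motion features, which this sketch omits:

```python
import numpy as np

# Hypothetical playfield hue ranges in degrees; illustrative only.
FIELD_HUES = {"soccer": (80, 160), "golf": (80, 160), "basketball": (10, 40)}

def dominant_hue_fraction(hues, lo, hi):
    """Fraction of a shot's pixel hues falling inside [lo, hi]."""
    hues = np.asarray(hues)
    return np.mean((hues >= lo) & (hues <= hi))

def genre_candidates(hues, min_fraction=0.5):
    """Genres whose playfield hue range covers most of the shot's pixels."""
    return [g for g, (lo, hi) in FIELD_HUES.items()
            if dominant_hue_fraction(hues, lo, hi) >= min_fraction]
```

Note that the color cue alone cannot separate soccer from golf (both fields are green), which is exactly why the paper adds motion-vector features as a second discriminator.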

A Video Summarization Study On Selecting-Out Topic-Irrelevant Shots Using N400 ERP Components in the Real-Time Video Watching (동영상 실시간 시청시 유발전위(ERP) N400 속성을 이용한 주제무관 쇼트 선별 자동영상요약 연구)

  • Kim, Yong Ho;Kim, Hyun Hee
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1258-1270 / 2017
  • The 'semantic gap' has been a long-standing problem in automatic video summarization, referring to the gap between the semantics implied in video summarization algorithms and what people actually infer from watching videos. Using external EEG bio-feedback obtained from video watchers as a solution to this semantic gap problem raises several other issues: first, how to define and measure noise against ERP waveforms as signals; second, whether individual differences among subjects in terms of noise and SNR in conventional ERP studies using still images captured from videos are the same as those conceptualized and measured differently from videos; third, whether individual differences among subjects by noise and SNR levels help to detect, as signals, topic-irrelevant shots that do not match the subject's own semantic topical expectations (mismatch negativity at around 400 ms after stimulus onset). The result of a repeated measures ANOVA test clearly shows a two-way interaction effect between topic relevance and noise level, implying that subjects with a low noise level during the video watching session are sensitive to topic-irrelevant visual shots, while also showing a three-way interaction among topic relevance, noise, and SNR levels, implying that subjects with a high noise level are sensitive to topic-irrelevant visual shots only if they have a low SNR level.

The Influence of Topic Exploration and Topic Relevance On Amplitudes of Endogenous ERP Components in Real-Time Video Watching (실시간 동영상 시청시 주제탐색조건과 주제관련성이 내재적 유발전위 활성에 미치는 영향)

  • Kim, Yong Ho;Kim, Hyun Hee
    • Journal of Korea Multimedia Society / v.22 no.8 / pp.874-886 / 2019
  • To delve into the semantic gap problem of automatic video summarization, we focused on endogenous ERP responses at around 400 ms and 600 ms after the onset of an audio-visual stimulus. Our experiment included two factors: the topic exploration experimental condition (Topic Given vs. Topic Exploring) as a between-subject factor and the topic relevance of the shots (Topic-Relevant vs. Topic-Irrelevant) as a within-subject factor. In the Topic Given condition (22 subjects), six short historical documentaries were shown with their video titles and written summaries, while in the Topic Exploring condition (25 subjects), subjects were instead asked to explore the topics of the same videos with no given information. EEG data were gathered while they watched the videos in real time. It was hypothesized that the cognitive activity of exploring the topics of videos while watching individual shots increases the amplitude of the endogenous ERP at around 600 ms after the onset of topic-relevant shots. The amplitude of the endogenous ERP at around 400 ms after the onset of topic-irrelevant shots was hypothesized to be lower in the Topic Given condition than in the Topic Exploring condition. The repeated measures MANOVA test revealed that both hypotheses were acceptable.

Automatic Poster Generation System Using Protagonist Face Analysis

  • Yeonhwi You;Sungjung Yong;Hyogyeong Park;Seoyoung Lee;Il-Young Moon
    • Journal of Information and Communication Convergence Engineering / v.21 no.4 / pp.287-293 / 2023
  • With the rapid development of domestic and international over-the-top markets, a large amount of video content is being created. As the volume of video content increases, consumers increasingly check information about a video before watching it. To address this demand, video summaries in the form of plot descriptions, thumbnails, posters, and other formats are provided to consumers. This study proposes an approach that automatically generates posters to effectively convey video content while reducing the cost of video summarization. In the automatic generation of posters, face recognition and clustering are used to gather and classify character data, and keyframes are extracted from the video to learn its overall atmosphere. This study used the facial data of the characters and the keyframes as training data and employed technologies such as DreamBooth, a text-to-image generation model, to automatically generate video posters. This process significantly reduces the time and cost of video poster production.
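The face clustering step mentioned in this abstract, grouping detected faces into characters, is often done by thresholding embedding similarity. A greedy sketch under the assumption that a face model has already produced one embedding vector per detected face (the paper does not specify its clustering algorithm, so this is illustrative):

```python
import numpy as np

def cluster_faces(embeddings, threshold=0.6):
    """Greedy clustering of face embeddings: each face joins the first cluster
    whose representative (its first member, L2-normalized) is within
    `threshold` cosine distance, otherwise it starts a new cluster.
    Returns one cluster label per face, in input order."""
    reps, labels = [], []
    for e in embeddings:
        e = e / np.linalg.norm(e)
        sims = [r @ e for r in reps]
        if sims and max(sims) >= 1 - threshold:
            k = int(np.argmax(sims))
        else:
            k = len(reps)
            reps.append(e)
        labels.append(k)
    return labels
```

The largest resulting cluster would then be taken as the protagonist, whose face crops feed the downstream DreamBooth fine-tuning described in the abstract.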

Automatic Summary Method of Linguistic Educational Video Using Multiple Visual Features (다중 비주얼 특징을 이용한 어학 교육 비디오의 자동 요약 방법)

  • Han Hee-Jun;Kim Cheon-Seog;Choo Jin-Ho;Ro Yong-Man
    • Journal of Korea Multimedia Society / v.7 no.10 / pp.1452-1463 / 2004
  • The requirement for automatic video summarization is increasing as bi-directional broadcasting contents and the variety of user requests and preferences in the bi-directional broadcast environment grow. Automatic video summarization is also needed for the efficient management and usage of the many contents held by service providers. In this paper, we propose a method to automatically generate a content-based summary of linguistic educational videos. First, shot boundaries and keyframes are generated from the linguistic educational video, and then multiple (low-level) visual features are extracted. Next, the semantic parts of the video (Explanation part, Dialog part, Text-based part) are generated using the extracted visual features. Lastly, an XML document describing the summary information is produced based on the Hierarchical Summary architecture of the MPEG-7 MDS (Multimedia Description Scheme). Experimental results show that our proposed algorithm provides reasonable performance for the automatic summarization of linguistic educational videos. We verified that the proposed method is useful for a video summary system providing various services as well as for the management of educational contents.
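The mapping from low-level visual features to the three semantic parts named in this abstract can be illustrated with a toy rule. The two input features and their thresholds are hypothetical stand-ins for the multiple visual features the paper actually extracts:

```python
def classify_segment(text_ratio, face_count):
    """Toy rule mapping per-keyframe features to the abstract's semantic parts.
    text_ratio: fraction of the frame covered by detected on-screen text;
    face_count: number of detected faces. Thresholds are illustrative only."""
    if text_ratio > 0.3:          # screen dominated by text, e.g. vocabulary slides
        return "Text-based part"
    if face_count >= 2:           # two or more speakers on screen
        return "Dialog part"
    return "Explanation part"     # default: single lecturer or narration
```

In the described system, the resulting segment labels would then be serialized into the MPEG-7 Hierarchical Summary XML rather than returned as strings.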
