• Title/Summary/Keyword: Video Summarization

Search results: 60

Activity-based key-frame detection and video summarization in a wide-area surveillance system (광범위한 지역 감시시스템에서의 행동기반 키프레임 검출 및 비디오 요약)

  • Kwon, Hye-Young;Lee, Kyoung-Mi
    • Journal of Internet Computing and Services / v.9 no.3 / pp.169-178 / 2008
  • In this paper, we propose a video summarization system based on activity in video acquired by multiple non-overlapping cameras for wide-area surveillance. The proposed system separates persons by time-independent background removal and detects the activities of the segmented persons from their motions. We extract eleven activities based on the direction in which a person moves, and treat a frame that contains a meaningful activity as a key-frame. The proposed system summarizes the video from these activity-based key-frames and controls the amount of summarization according to the amount of activity. Thus the system can summarize videos by camera, time, and activity.

  • PDF
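The activity-based key-frame idea above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a median-over-time background model stands in for the paper's time-independent background removal, and the activity threshold is a hypothetical value.

```python
import numpy as np

def detect_key_frames(frames, motion_threshold=0.05):
    """Select key-frames whose foreground activity exceeds a threshold.

    `frames` is a list of grayscale frames as 2-D numpy arrays; the
    threshold is an illustrative choice, not a value from the paper.
    """
    # Median over time approximates a time-independent background model.
    background = np.median(np.stack(frames), axis=0)
    key_frames = []
    for i, frame in enumerate(frames):
        # Fraction of pixels that differ noticeably from the background.
        foreground = np.abs(frame.astype(float) - background) > 25
        activity = foreground.mean()
        if activity > motion_threshold:
            key_frames.append(i)  # frame contains meaningful activity
    return key_frames
```

Raising `motion_threshold` shrinks the summary, which mirrors how the paper controls the amount of summarization by the amount of activity.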

Summarization of Soccer Video based on Multiple Cameras Using Dynamic Bayesian Network (동적 베이지안 네트워크를 이용한 다중 카메라기반 축구 비디오 요약)

  • Min, Jun-Ki;Park, Han-Saem;Cho, Sung-Bae
    • Proceedings of the Korean HCI Society Conference / 2009.02a / pp.567-571 / 2009
  • Sports broadcasting systems use multiple video cameras to offer exciting and dynamic scenes to TV audiences. However, since the traditional broadcasting system edits the multiple views into a single static video stream, it is difficult to provide intelligent broadcasting services that summarize or retrieve specific scenes or events based on user preference. In this paper, we propose a summarization and retrieval system for soccer videos based on multiple cameras. It extracts highlights such as shots on goal, crossings, fouls, and set pieces using a dynamic Bayesian network over soccer players' primitive behaviors annotated on the videos, and selects a proper view for each highlight according to its type. The proposed system therefore offers users highlight summarization or preferred-view selection, and can provide personalized broadcasting services that consider the user's preference.

  • PDF
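A toy version of mapping primitive player behaviors to a highlight type might look like the following. The behavior labels and likelihood values are invented for illustration, and a naive conditionally-independent score stands in for the paper's dynamic Bayesian network.

```python
# Toy likelihoods P(behavior | highlight type); values are illustrative only.
LIKELIHOOD = {
    "shot_on_goal": {"dribble": 0.2, "kick": 0.5, "goalkeeper_dive": 0.3},
    "crossing":     {"dribble": 0.3, "long_pass": 0.5, "header": 0.2},
    "set_piece":    {"stop": 0.4, "kick": 0.4, "header": 0.2},
}

def classify_highlight(behaviors):
    """Pick the most probable highlight type for a sequence of primitive
    behaviors, assuming conditional independence between observations
    (a naive stand-in for the paper's dynamic Bayesian network)."""
    scores = {}
    for label, lik in LIKELIHOOD.items():
        p = 1.0
        for b in behaviors:
            p *= lik.get(b, 0.01)  # small floor for unseen behaviors
        scores[label] = p
    return max(scores, key=scores.get)
```

A real DBN would additionally model temporal dependencies between consecutive behaviors, which this sketch omits.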

Spatiotemporal Saliency-Based Video Summarization on a Smartphone (스마트폰에서의 시공간적 중요도 기반의 비디오 요약)

  • Lee, Won Beom;Williem, Williem;Park, In Kyu
    • Journal of Broadcast Engineering / v.18 no.2 / pp.185-195 / 2013
  • In this paper, we propose a video summarization technique for smartphones based on spatiotemporal saliency. The proposed technique detects scene changes by computing color-histogram differences, which are robust to camera and object motion. The similarity between adjacent frames, face regions, and frame saliency are then computed to analyze the spatiotemporal saliency of a video clip. An over-segmented hierarchical tree is created from the scene changes and is updated iteratively using merging and maintenance energies computed during the analysis. In the updated tree, summary frames are extracted by applying a greedy algorithm to nodes with high saliency, subject to the reduction ratio and minimum interval requested by the user. Experimental results show that the proposed method summarizes a 2-minute video in about 10 seconds on a commercial smartphone, with summarization quality superior to that of the commercial video-editing software Muvee.
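The histogram-difference scene-change step could be sketched like this; the bin count and threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def scene_changes(frames, bins=16, threshold=0.4):
    """Detect scene boundaries from color-histogram differences.

    `frames` are H x W x 3 uint8 arrays; the bin count and threshold
    are hypothetical, chosen only to make the example concrete.
    """
    def hist(frame):
        h, _ = np.histogramdd(frame.reshape(-1, 3),
                              bins=(bins,) * 3, range=[(0, 256)] * 3)
        return h.ravel() / h.sum()  # normalized color histogram

    boundaries = []
    prev = hist(frames[0])
    for i in range(1, len(frames)):
        cur = hist(frames[i])
        # L1 distance between normalized histograms lies in [0, 2].
        if np.abs(cur - prev).sum() > threshold:
            boundaries.append(i)
        prev = cur
    return boundaries
```

Because the histogram discards spatial layout, small camera or object motion barely changes it, which is the robustness property the abstract relies on.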

A Scheme for News Videos based on MPEG-7 and Its Summarization Mechanism by using the Key-Frames of Selected Shot Types (MPEG-7을 기반으로 한 뉴스 동영상 스키마 및 샷 종류별 키프레임을 이용한 요약 생성 방법)

  • Jeong, Jin-Guk;Sim, Jin-Sun;Nang, Jong-Ho;Kim, Gyung-Su;Ha, Myung-Hwan;Jung, Byung-Heei
    • Journal of KIISE: Computing Practices and Letters / v.8 no.5 / pp.530-539 / 2002
  • Recently, there has been much research on archive systems for news videos, which usually have a fixed structure. However, since the meta-data representation and storage schemes for news video differ across the previously proposed archive systems, it has been very hard to exchange their meta-data. This paper proposes a scheme for news video based on MPEG-7 MDS, an international standard for representing the contents of multimedia, and a summarization mechanism reflecting the characteristics of shots in news videos. The proposed scheme uses MPEG-7 MDS constructs such as VideoSegment and TextAnnotation to keep the original structure of the news video, and the proposed summarization mechanism uses a slide-show-style presentation of key frames with associated audio to reduce the data size of the summary video.
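A simplified description in the spirit of the VideoSegment/TextAnnotation constructs mentioned above can be generated as follows. The element names follow MPEG-7 MDS conventions, but this is a stripped-down illustration, not a schema-valid MPEG-7 document.

```python
import xml.etree.ElementTree as ET

def news_segment(start, duration, annotation):
    """Build a simplified VideoSegment element in the spirit of
    MPEG-7 MDS (MediaTime and TextAnnotation children); namespaces
    and required attributes of the real schema are omitted."""
    seg = ET.Element("VideoSegment")
    time = ET.SubElement(seg, "MediaTime")
    ET.SubElement(time, "MediaTimePoint").text = start
    ET.SubElement(time, "MediaDuration").text = duration
    ann = ET.SubElement(seg, "TextAnnotation")
    ET.SubElement(ann, "FreeTextAnnotation").text = annotation
    return seg

anchor = news_segment("T00:00:00", "PT15S", "Anchor-person shot: opening headline")
print(ET.tostring(anchor, encoding="unicode"))
```

A summary generator could then walk such segments, keeping one key frame per anchor-person shot and pairing it with the segment's audio, as the proposed mechanism does.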

Information Video Summarization and Keyword-based Video Tracking System (정보성 동영상 요약 및 키워드 기반 영상검색 시스템)

  • Gihun Kim;Mikyeong Moon
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.701-702 / 2023
  • As non-face-to-face education increases, the number of informational videos such as lectures and special talks is growing rapidly. Learners who must watch these informational videos need a video comprehension and learning system that lets them use their time and resources efficiently. This paper describes the development of a system that summarizes videos using the GPT-3 model and KoNLPy and lets users jump directly to the video frame associated with a keyword. We expect this to improve learners' learning efficiency by making effective use of video content.

  • PDF
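The keyword-to-frame navigation described above might be structured like this. It is a hypothetical sketch: the paper extracts keywords with KoNLPy morphological analysis, which is approximated here with plain whitespace tokenization over a timestamped transcript.

```python
def build_keyword_index(transcript):
    """Map keywords to the timestamps (in seconds) where they occur.

    `transcript` is a list of (timestamp_seconds, text) pairs, e.g.
    produced by speech recognition. Real keyword extraction (KoNLPy
    noun extraction in the paper) is replaced by simple tokenization.
    """
    index = {}
    for ts, text in transcript:
        for word in text.lower().split():
            index.setdefault(word, []).append(ts)
    return index

def jump_to(index, keyword):
    """Return the first timestamp where a keyword is mentioned, or None."""
    hits = index.get(keyword.lower())
    return hits[0] if hits else None
```

A player front-end would seek the video to the returned timestamp, giving the "jump directly to the frame" behavior the abstract describes.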

Character-Based Video Summarization Using Speaker Identification (화자 인식을 통한 등장인물 기반의 비디오 요약)

  • Lee Soon-Tak;Kim Jong-Sung;Kang Chan-Mi;Baek Joong-Hwan
    • Journal of the Institute of Convergence Signal Processing / v.6 no.4 / pp.163-168 / 2005
  • In this paper, we propose a character-based summarization algorithm that applies speaker identification to the dialog in a video. First, we extract the dialog of shots containing characters' faces, and then classify the scenes by actor/actress using speaker identification. The classifier is a GMM (Gaussian Mixture Model) over 24 MFCC (Mel Frequency Cepstrum Coefficient) values. A GMM is trained for each of four actors/actresses, and the classifier identifies which of them is speaking. Our experimental results show that the GMM classifier achieves an error rate of 0.138 on our video data.

  • PDF
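The maximum-likelihood classification step can be illustrated with a single diagonal Gaussian per speaker standing in for the paper's full GMM; the feature dimensionality (24, matching the MFCC count) is the only detail taken from the abstract.

```python
import numpy as np

def fit_diag_gaussian(features):
    """Fit one diagonal Gaussian to a speaker's MFCC vectors
    (a one-component stand-in for the paper's GMM)."""
    mu = features.mean(axis=0)
    var = features.var(axis=0) + 1e-6  # floor to avoid division by zero
    return mu, var

def log_likelihood(features, mu, var):
    # Sum of per-frame diagonal-Gaussian log densities.
    d = features - mu
    return float(np.sum(-0.5 * (np.log(2 * np.pi * var) + d * d / var)))

def identify_speaker(features, models):
    """Return the speaker whose model gives the highest likelihood,
    i.e. maximum-likelihood classification over the MFCC frames."""
    return max(models, key=lambda s: log_likelihood(features, *models[s]))
```

With four trained speaker models, scenes are labeled by whichever actor/actress maximizes the dialog's likelihood, which is the basis of the character-based summary.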

Automatic Video Management System Using Face Recognition and MPEG-7 Visual Descriptors

  • Lee, Jae-Ho
    • ETRI Journal / v.27 no.6 / pp.806-809 / 2005
  • The main goal of this research is automatic video analysis using a face recognition technique. In this paper, an automatic video management system is introduced with a variety of functions enabled, such as indexing, editing, summarizing, and retrieving multimedia data. The automatic management tool utilizes MPEG-7 visual descriptors to generate a video index for creating a summary. The resulting index generates a preview of a movie and allows non-linear access with thumbnails. In addition, the index supports searching for shots similar to a desired one within saved video sequences. Moreover, a face recognition technique is utilized for person-based video summarization and indexing of stored video data.

  • PDF
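The similar-shot search over descriptor vectors reduces to nearest-neighbor retrieval; the sketch below assumes each shot's MPEG-7 visual descriptor has been flattened to a numeric vector, and L2 distance is an illustrative choice of matching metric.

```python
import numpy as np

def most_similar_shots(query, index, k=3):
    """Return the ids of the k shots whose descriptor vectors are
    closest to the query vector.

    `index` maps shot ids to feature vectors (e.g. MPEG-7 color-layout
    descriptors flattened to numpy arrays, an assumption here).
    """
    dists = {sid: float(np.linalg.norm(vec - query))
             for sid, vec in index.items()}
    return sorted(dists, key=dists.get)[:k]
```

The same index can back both thumbnail-based non-linear access and "find shots like this one" retrieval mentioned in the abstract.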

Surveillance Video Summarization System based on Multi-person Tracking Status (다수 사람 추적상태에 따른 감시영상 요약 시스템)

  • Yoo, Ju Hee;Lee, Kyoung Mi
    • KIISE Transactions on Computing Practices / v.22 no.2 / pp.61-68 / 2016
  • Surveillance cameras have been installed in many places because security and safety have become important issues in modern society. However, watching surveillance videos and judging accidental situations is very labor-intensive and time-consuming, so demand for research on automatically analyzing surveillance videos is growing. In this paper, we propose a surveillance system that tracks multiple persons in videos and summarizes the videos based on the tracking information. The proposed summarization system applies adaptive illumination correction, subtracts the background, detects multiple persons, tracks them, and saves their tracking information in a database. The tracking information includes each person's path, movement status, dwell time at each location, entrance/exit times, and so on. The movement status is classified into six statuses (Enter, Stay, Slow, Normal, Fast, and Exit). The proposed system presents a person's status as a graph over time and space and helps to quickly determine the status of a tracked person.
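Assigning one of the six statuses to a tracked person could look like the following. Only the six status names come from the paper; the speed cut-offs (in pixels per frame) and the boundary flags are hypothetical.

```python
def movement_status(speed, at_boundary=False, entering=False):
    """Classify a tracked person's movement into one of the six
    statuses used by the system (Enter, Stay, Slow, Normal, Fast,
    Exit). The speed thresholds are illustrative, not the paper's."""
    if at_boundary:
        return "Enter" if entering else "Exit"
    if speed < 0.5:
        return "Stay"
    if speed < 2.0:
        return "Slow"
    if speed < 5.0:
        return "Normal"
    return "Fast"
```

Logging this status per frame, together with position and timestamps, yields exactly the kind of database record the summary graphs are drawn from.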

Automatic Genre Classification of Sports News Video Using Features of Playfield and Motion Vector (필드와 모션벡터의 특징정보를 이용한 스포츠 뉴스 비디오의 장르 분류)

  • Song, Mi-Young;Jang, Sang-Hyun;Cho, Hyung-Je
    • The KIPS Transactions: Part B / v.14B no.2 / pp.89-98 / 2007
  • Browsing, searching, and manipulating video documents requires an indexing technique to describe video contents. Until now, indexing has mostly been carried out by specialists who manually assign a few keywords to the video contents, which makes it an expensive and time-consuming task; automatic classification of video content is therefore necessary. We propose a fully automatic and computationally efficient method for the analysis and summarization of sports news video covering five sports: soccer, golf, baseball, basketball, and volleyball. First, shots are classified as anchor-person shots or news-report shots, based on image preprocessing and the color features of anchor-person shots. We then use the dominant color of the playfield and motion features to analyze the sports shots, and finally classify them into the five genre types. We achieved an overall average classification accuracy of 75% on sports news videos with 241 scenes. The proposed method can therefore be used to search news video for individual sports news items and sports highlights.
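The dominant-color playfield cue can be sketched as a green-hue ratio test. The hue band and coverage threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def field_ratio(frame, hue_range=(35, 85)):
    """Fraction of pixels whose hue falls in a green band, a common
    proxy for the playfield. `frame` is an H x W array of hue values
    in [0, 180) (OpenCV's hue scale); the band is an assumption."""
    lo, hi = hue_range
    return float(((frame >= lo) & (frame <= hi)).mean())

def is_field_shot(frame, min_ratio=0.4):
    """Treat a report shot as a playfield shot when the dominant
    (green) color covers enough of the frame."""
    return field_ratio(frame) >= min_ratio
```

Playfield shots would then be passed to motion-vector analysis to separate, say, golf's slow pans from soccer's long tracking shots, completing the genre decision.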

The Influence of Topic Exploration and Topic Relevance On Amplitudes of Endogenous ERP Components in Real-Time Video Watching (실시간 동영상 시청시 주제탐색조건과 주제관련성이 내재적 유발전위 활성에 미치는 영향)

  • Kim, Yong Ho;Kim, Hyun Hee
    • Journal of Korea Multimedia Society / v.22 no.8 / pp.874-886 / 2019
  • To address the semantic-gap problem in automatic video summarization, we focused on endogenous ERP responses at around 400 ms and 600 ms after the onset of an audio-visual stimulus. Our experiment included two factors: topic exploration (Topic Given vs. Topic Exploring) as a between-subject factor and the topic relevance of the shots (Topic-Relevant vs. Topic-Irrelevant) as a within-subject factor. In the Topic Given condition, 22 subjects watched 6 short historical documentaries with their titles and written summaries; in the Topic Exploring condition, 25 subjects were instead asked to explore the topics of the same videos with no given information. EEG data were gathered while the subjects watched the videos in real time. It was hypothesized that the cognitive activity of exploring a video's topic while watching individual shots increases the amplitude of the endogenous ERP at around 600 ms after the onset of topic-relevant shots, and that the amplitude at around 400 ms after the onset of topic-irrelevant shots would be lower in the Topic Given condition than in the Topic Exploring condition. A repeated-measures MANOVA test showed that both hypotheses were supported.