• Title/Summary/Keyword: automatic shot


Generation of Video Clips Utilizing Shot Boundary Detection (샷 경계 검출을 이용한 영상 클립 생성)

  • Kim, Hyeok-Man; Cho, Seong-Kil
    • Journal of KIISE: Computing Practices and Letters, v.7 no.6, pp.582-592, 2001
  • Video indexing plays an important role in applications such as digital video libraries or web VOD services that archive large volumes of digital video. Video indexing is usually based on video segmentation. In this paper, we propose a software tool called V2Web Studio, which generates video clips using a shot boundary detection algorithm. With the V2Web Studio, clip generation consists of four steps: 1) automatic detection of shot boundaries by parsing the video, 2) elimination of detection errors by manually verifying the results, 3) building a logical hierarchy from the verified shots, and 4) generating video clips corresponding to each logically modeled segment. These steps are performed by the shot detector, shot verifier, video modeler, and clip generator of the V2Web Studio, respectively.
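
The abstract does not spell out which detector V2Web Studio uses for step 1, so the following is only a minimal sketch of automatic shot boundary detection based on a colour-histogram difference. It assumes OpenCV (`cv2`) is available; the file name `video.mp4`, the 0.5 threshold, and the function name `detect_shot_boundaries` are illustrative choices, not taken from the paper.

```python
# Minimal shot-boundary sketch: flag frames whose colour-histogram distance
# from the previous frame exceeds a threshold. The actual V2Web Studio
# detector is not specified in the abstract; this is only an illustration.
import cv2

def detect_shot_boundaries(path, threshold=0.5):
    cap = cv2.VideoCapture(path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Bhattacharyya distance: near 0 for similar frames, larger for cuts
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries

if __name__ == "__main__":
    print(detect_shot_boundaries("video.mp4"))  # hypothetical input file
```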


An Automatic Summarization System of Baseball Game Video Using the Caption Information (자막 정보를 이용한 야구경기 비디오의 자동요약 시스템)

  • 유기원;허영식
    • Journal of Broadcast Engineering, v.7 no.2, pp.107-113, 2002
  • In this paper, we propose a method and a software system for the automatic summarization of baseball game videos. The proposed system pursues fast execution and high summarization accuracy. To satisfy these requirements, important events in the baseball video are detected through a DC-based shot boundary detection algorithm and a simple caption recognition method. Furthermore, the proposed system supports a hierarchical description so that users can browse and navigate videos at several levels of summarization.
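
As a rough illustration of the DC-based shot boundary detection mentioned above, the sketch below compares consecutive DC images (thumbnails built from the DC coefficients of MPEG macroblocks). Extracting the DC images from the compressed stream and the caption recognition step are omitted; `dc_images`, the threshold, and the function name are assumptions for illustration only.

```python
# Sketch of DC-based cut detection: compare consecutive "DC images"
# (small greyscale thumbnails derived from MPEG DC coefficients).
# dc_images is assumed to be a list of small numpy arrays produced elsewhere.
import numpy as np

def detect_cuts(dc_images, threshold=30.0):
    cuts = []
    for i in range(1, len(dc_images)):
        prev = dc_images[i - 1].astype(np.float32)
        curr = dc_images[i].astype(np.float32)
        # Mean absolute difference between consecutive DC images
        mad = np.mean(np.abs(curr - prev))
        if mad > threshold:
            cuts.append(i)
    return cuts
```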

CREATING JOYFUL DIGESTS BY EXPLOITING SMILE/LAUGHTER FACIAL EXPRESSIONS PRESENT IN VIDEO

  • Kowalik, Uwe; Hidaka, Kota; Irie, Go; Kojima, Akira
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2009.01a, pp.267-272, 2009
  • Video digests provide an effective way of rapidly reviewing video content because of their very compact form. By watching a digest, users can easily check whether a specific item is worth seeing in full, so the impression created by the digest strongly influences the user's choice when selecting video content. We propose a novel method of automatic digest creation that evokes a joyful impression by exploiting smile/laughter facial expressions as emotional cues of joy in the video. We assume that a digest presenting smiling/laughing faces appeals to the user, since it assures the user that the smile/laughter is caused by joyful events in the video. For detecting smiling/laughing faces, we developed a neural-network-based method for classifying facial expressions. Video segmentation is performed by automatic shot detection. To create joyful digests, appropriate shots are selected automatically by ranking shots on the smile/laughter detection results. We report the results of user trials conducted to assess the visual impression of the 'joyful' digests automatically produced by our system. The results show that users tend to prefer emotional digests containing laughing faces, which suggests that the attractiveness of automatically created video digests can be improved by extracting emotional cues from the content through automatic facial expression analysis, as proposed in this paper.
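
The shot-ranking step described above can be illustrated with a small sketch that scores each shot by how often smiles/laughter were detected and fills a digest length budget with the best-scoring shots. The per-frame `smile_flags` input, the 60-second budget, and the frame rate are hypothetical placeholders; the paper's neural-network expression classifier is not reproduced here.

```python
# Illustrative shot ranking for a "joyful" digest: score each shot by the
# fraction of its frames in which a smile/laughter expression was detected,
# then keep the top-ranked shots until the digest length budget is reached.

def rank_shots(shots, smile_flags, fps=25.0, budget_sec=60.0):
    # shots: list of (start_frame, end_frame) pairs from automatic shot detection
    # smile_flags: per-frame booleans from a facial-expression classifier
    scored = []
    for start, end in shots:
        frames = smile_flags[start:end]
        score = sum(frames) / max(len(frames), 1)
        scored.append((score, start, end))
    scored.sort(reverse=True)           # most "joyful" shots first
    digest, used = [], 0.0
    for score, start, end in scored:
        length = (end - start) / fps
        if used + length <= budget_sec:
            digest.append((start, end))
            used += length
    return sorted(digest)               # restore temporal order
```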


Soccer Video Highlight Building Algorithm using Structural Characteristics of Broadcasted Sports Video (스포츠 중계 방송의 구조적 특성을 이용한 축구동영상 하이라이트 생성 알고리즘)

  • 김재홍;낭종호;하명환;정병희;김경수
    • Journal of KIISE: Software and Applications, v.30 no.7_8, pp.727-743, 2003
  • This paper proposes an automatic highlight-building algorithm for soccer video that exploits a structural characteristic of broadcast sports video: an interesting (or important) event, such as a goal or a foul, is followed by a replay shot surrounded by gradual shot-change effects like wipes. This shot-editing rule is used to analyze the structure of broadcast soccer video and to extract the shots involving important events for the highlight. The algorithm first uses the spatio-temporal image of the video to detect wipe transitions and zoom-out/in shot changes, which are then used to detect replay shots. However, using the spatio-temporal image alone to detect wipe transitions requires too much computation, and the algorithm must be changed whenever the wipe pattern changes. To solve these problems, a two-pass detection algorithm and a pixel sub-sampling technique are proposed. Furthermore, to detect zoom-out/in shot changes and replay shots more precisely, the proposed scheme also computes the green-area ratio and the motion energy. Finally, highlight shots composed of event and player shots are extracted using the pre-detected replay shots and zoom-out/in shot-change points. The proposed algorithm will be useful for web or broadcasting services that require abstracted soccer video.
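
One of the cues mentioned above, the green-area ratio, is easy to sketch: it is the fraction of pixels whose colour falls in a grass-green range, useful for separating field (long) shots from close-ups and replays. The HSV bounds and the 0.5 decision threshold below are illustrative assumptions, not values from the paper, and OpenCV is assumed to be available.

```python
# Sketch of the green-area-ratio cue: the fraction of pixels falling in a
# grass-green hue range. Thresholds here are illustrative only.
import cv2
import numpy as np

def green_area_ratio(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough hue/saturation/value bounds for pitch grass (assumed values)
    mask = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))
    return float(np.count_nonzero(mask)) / mask.size

def is_field_shot(frame_bgr, ratio_threshold=0.5):
    # A frame dominated by grass-green pixels is treated as a field (long) shot
    return green_area_ratio(frame_bgr) >= ratio_threshold
```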

Capturing Distance Parameters Using a Laser Sensor in a Stereoscopic 3D Camera Rig System

  • Chung, Wan-Young; Ilham, Julian; Kim, Jong-Jin
    • Journal of Sensor Science and Technology, v.22 no.6, pp.387-392, 2013
  • Camera rigs for shooting 3D video are classified as manual, motorized, or fully automatic. Even with an automatic camera rig, the process of stereoscopic 3D (S3D) video capture is very complex and time-consuming. One of the key time-consuming operations is capturing the distance parameters: the near distance, far distance, and convergence distance. Traditionally, these distances are measured with a tape measure or by triangular indirect measurement, and both methods take a long time for every scene that is shot. In our study, a compact laser distance-sensing system with long-range sensitivity was developed. The system is small enough to be installed on top of a camera, and its measurement accuracy is within 2% even at a range of 50 m. With the laser distance-sensing system, the shooting setup time of an automatic camera rig can be reduced to less than a minute.

Automatic Indexing Algorithm of Golf Video Using Audio Information (오디오 정보를 이용한 골프 동영상 자동 색인 알고리즘)

  • Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea, v.28 no.5, pp.441-446, 2009
  • This paper proposes an automatic indexing algorithm for golf video using audio information. In the proposed algorithm, the input stream is demultiplexed into video and audio streams. Using an Adaboost-cascade classifier, the continuous audio stream is classified into announcer speech recorded in the studio, music segments accompanying players' names on screen, audience reactions to the play, reporter speech with field background, and field noise such as wind or waves. Golf swing sounds, including drive shots, iron shots, and putting, are detected by impulse onset detection and modulation spectrum verification. The detected swings and applause are then used to index action or highlight units. Compared with video-based semantic analysis, the main advantage of the proposed system is its small computational requirement, which makes it easy to apply the technology to embedded consumer electronic devices for fast browsing.
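
As a rough sketch of the impulse onset detection used for swing sounds, the code below flags audio frames whose short-time energy jumps far above the recent local average. The modulation-spectrum verification step and the Adaboost-cascade classifier are not reproduced; the frame length, energy ratio, and window size are illustrative parameters, not the paper's values.

```python
# Illustrative impulse-onset detector for golf-swing sounds: a frame whose
# short-time energy jumps far above the local average is flagged as an onset.
# `audio` is a mono float waveform, `sr` its sample rate.
import numpy as np

def detect_impulse_onsets(audio, sr, frame_ms=10, ratio=8.0, window=50):
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(audio) // frame_len
    energy = np.array([
        np.sum(audio[i * frame_len:(i + 1) * frame_len] ** 2)
        for i in range(n_frames)
    ])
    onsets = []
    for i in range(window, n_frames):
        local_avg = np.mean(energy[i - window:i]) + 1e-9
        if energy[i] > ratio * local_avg:       # sudden energy impulse
            onsets.append(i * frame_len / sr)   # onset time in seconds
    return onsets
```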

Taking a Jump Motion Picture Automatically by using Accelerometer of Smart Phones (스마트폰 가속도계를 이용한 점프동작 자동인식 촬영)

  • Choi, Kyungyoon; Jun, Kyungkoo
    • Journal of KIISE, v.41 no.9, pp.633-641, 2014
  • This paper proposes algorithms that detect a jump motion and automatically take a picture when the jump reaches its apex. Based on these algorithms, we build a jump-shot system using accelerometer-equipped smartphones. Since jump motion varies with a person's physical condition, gender, and age, it is critical to identify common features that are independent of such differences. The detection algorithm also needs to work in real time because of the short duration of a jump. We propose two different algorithms that meet these requirements and implement the system as a smartphone application. Through a series of experiments, we show that the system successfully detects the jump motion and takes a picture at its apex.
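
A minimal sketch of apex detection from accelerometer data is given below: while the person is airborne, the acceleration magnitude drops toward 0 g, and the apex lies roughly in the middle of that free-fall interval. The thresholds and the function name are illustrative; the paper's two algorithms and the camera-trigger path are not reproduced.

```python
# Sketch of jump-apex detection from accelerometer samples: find a stretch
# of near-zero-g readings (free fall) and take its midpoint as the apex.
import numpy as np

def find_jump_apex(samples, g=9.81, freefall_thresh=0.3, min_len=5):
    # samples: array of shape (N, 3) with (ax, ay, az) in m/s^2
    mag = np.linalg.norm(samples, axis=1) / g     # magnitude in units of g
    airborne = mag < freefall_thresh              # near zero g = in the air
    start = None
    for i, flag in enumerate(airborne):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:              # long enough to be a jump
                return (start + i) // 2           # apex ~ middle of free fall
            start = None
    return None                                   # no jump detected
```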

An Automatic Cut Detection Algorithm Using Median Filter And Neural Network (중간값 필터와 신경망 회로를 사용한 자동 컷 검출 알고리즘)

  • Jun, Seung-Chul; Park, Sung-Han
    • Journal of the Institute of Electronics Engineers of Korea SP, v.39 no.4, pp.381-387, 2002
  • In this paper, an efficient method for finding shot boundaries in MPEG video stream data is proposed. We first treat the histogram difference value (HDV) and the pixel difference value (PDV) as one-dimensional signals and apply a median filter to them. The output of the median filter is subtracted from the original signal to produce the median-filtered difference (MFD), which serves as the criterion for a shot boundary. In addition, a neural network is trained to locate the exact cut boundaries. Experiments show that the proposed algorithm extracts cut boundaries well, especially in dynamic video.
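
The median-filtered difference described above can be sketched directly: treat the per-frame HDV (or PDV) values as a 1-D signal, subtract their median-filtered version, and keep frames with a large residual as cut candidates. The neural-network verification stage is omitted, and the kernel size and threshold below are illustrative, not the paper's values.

```python
# Sketch of the median-filtered difference (MFD) for cut-candidate detection.
import numpy as np
from scipy.signal import medfilt

def cut_candidates(hdv, kernel_size=9, threshold=0.4):
    # hdv: per-frame histogram difference values treated as a 1-D signal
    hdv = np.asarray(hdv, dtype=np.float64)
    mfd = hdv - medfilt(hdv, kernel_size)   # median-filtered difference
    return np.where(mfd > threshold)[0]     # frame indices of likely cuts
```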

New Framework for Automated Extraction of Key Frames from Compressed Video

  • Kim, Kang-Wook; Kwon, Seong-Geun
    • Journal of Korea Multimedia Society, v.15 no.6, pp.693-700, 2012
  • The effective extraction of key frames from a video stream is an essential task for summarizing and representing the content of a video. Accordingly, this paper proposes a new and fast method for extracting key frames from a compressed video. In the proposed approach, after the entire video sequence has been segmented into elementary content units, called shots, key frame extraction is performed by first assigning the number of key frames to each shot, and then distributing the key frames over the shot using a probabilistic approach to locate the optimal position of the key frames. The main advantage of the proposed method is that no time-consuming computations are needed for distributing the key frames within the shots and the procedure for key frame extraction is completely automatic. Furthermore, the set of key frames is independent of any subjective thresholds or manually set parameters.
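
To illustrate the allocation step described above, the sketch below assigns each shot a number of key frames proportional to its length and then spreads them evenly inside the shot. The paper places key frames with a probabilistic model rather than even spacing, so this is only a simplified stand-in; the function name and the proportional rule are assumptions.

```python
# Simplified key-frame budgeting: allocate key frames to shots in proportion
# to shot length, then space them evenly within each shot.

def allocate_key_frames(shots, total_key_frames):
    # shots: list of (start_frame, end_frame); returns sorted key-frame indices
    lengths = [end - start for start, end in shots]
    total_len = sum(lengths) or 1
    key_frames = []
    for (start, end), length in zip(shots, lengths):
        n = max(1, round(total_key_frames * length / total_len))
        step = (end - start) / (n + 1)
        key_frames.extend(int(start + step * (k + 1)) for k in range(n))
    return sorted(key_frames)
```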

A Personal Videocasting System with Intelligent TV Browsing for a Practical Video Application Environment

  • Kim, Sang-Kyun; Jeong, Jin-Guk; Kim, Hyoung-Gook; Chung, Min-Gyo
    • ETRI Journal, v.31 no.1, pp.10-20, 2009
  • In this paper, a video broadcasting system between a home-server-type device and a mobile device is proposed. The home-server-type device can automatically extract semantic information from video content such as news, soccer matches, and baseball games. The indexing results are used to convert the original video into a digested or rearranged format. From the mobile device, a user can send recording requests to the home-server-type device and then watch and navigate the recorded video in digested form. The novelty of this study is the actual implementation of the proposed system, combining the available IT environment with the indexing algorithms. The implementation is demonstrated along with experimental results for the automatic video-indexing algorithms, and the overall performance of the developed system is compared with existing state-of-the-art personal video recording products.
