• Title/Summary/Keyword: Keyframes


Keyframe Extraction from Home Videos Using 5W and 1H Information (육하원칙 정보에 기반한 홈비디오 키프레임 추출)

  • Jang, Cheolhun;Cho, Sunghyun;Lee, Seungyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.19 no.2
    • /
    • pp.9-18
    • /
    • 2013
  • We propose a novel method to extract keyframes from home videos based on 5W and 1H information. Keyframe extraction is a kind of video summarization which selects only the specific frames containing important information of a video. As a home video may have content on a variety of topics, we cannot make specific assumptions for information extraction. In addition, to summarize a home video we must analyze human behaviors, because people are the important subjects in home videos. In this paper, we extract 5W and 1H information by analyzing human faces, human behaviors, and the global information of the background. Experimental results demonstrate that our technique extracts keyframes more similar to human selections than previous methods do.
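
The abstract stays at a high level, but the selection step it implies (score each frame on several cues, then keep the strongest, well-separated frames) can be sketched. A minimal illustration in Python, with the cue scores and weights as hypothetical stand-ins for the paper's 5W1H analysis:

```python
import numpy as np

def select_keyframes(cue_scores, weights, k=5, min_gap=30):
    """Pick the k frames with the highest weighted cue score.

    cue_scores: (num_frames, num_cues) per-frame scores, e.g. face
    prominence, action intensity, scene change (hypothetical stand-ins
    for the paper's 5W1H cues). min_gap suppresses near-duplicate
    picks from adjacent frames.
    """
    total = cue_scores @ weights            # weighted sum per frame
    order = np.argsort(total)[::-1]         # best frames first
    picked = []
    for idx in order:
        if all(abs(idx - p) >= min_gap for p in picked):
            picked.append(int(idx))
        if len(picked) == k:
            break
    return sorted(picked)

scores = np.random.rand(600, 3)             # 600 frames, 3 cues
print(select_keyframes(scores, np.array([0.5, 0.3, 0.2])))
```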

A Gesture-Emotion Keyframe Editor for sign-Language Communication between Avatars of Korean and Japanese on the Internet

  • Kim, Sang-Woon;Lee, Yung-Who;Lee, Jong-Woo;Aoki, Yoshinao
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.831-834
    • /
    • 2000
  • Sign language can be used as an auxiliary communication means between avatars of different languages. An intelligent communication method can also be utilized to achieve real-time communication, where intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real pictures. In this paper we design a gesture-emotion keyframe editor that provides an easy means of obtaining these parameter values. To calculate the joint angles of the arms and hands and to generate the in-between keyframes realistically, an inverse-kinematics transformation matrix and several kinds of constraints are applied. Also, to edit emotional expressions efficiently, a comic-style facial model having only eyebrows, eyes, nose, and mouth is employed. Experimental results show the possibility that the editor could be used for intelligent sign-language image communication between different languages.
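
The joint-angle computation mentioned above is standard inverse kinematics. As a hedged sketch of the idea on the simplest case, a planar two-link arm with an analytic solution (the paper's editor handles full arms and hands with a transformation-matrix formulation and constraints, which this does not reproduce):

```python
import math

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Analytic IK for a planar two-link arm (shoulder + elbow).

    Returns (shoulder, elbow) angles in radians placing the wrist at
    (x, y). A toy stand-in for the editor's arm-joint computation.
    """
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    elbow = math.acos(c2)                       # elbow-down solution
    k1, k2 = l1 + l2 * c2, l2 * math.sin(elbow)
    shoulder = math.atan2(y, x) - math.atan2(k2, k1)
    return shoulder, elbow

print(two_link_ik(0.4, 0.2))
```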


Predicting User Personality Based on Dynamic Keyframes Using Video Stream Structure (비디오 스트림 구조를 활용한 동적 키프레임 기반 사용자 개성 예측)

  • Mira Lee;Simon S. Woo;Hyedong Jung
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.601-604
    • /
    • 2023
  • As technology advances, the collection of multimedia data containing multiple modalities has become easier, and research on understanding human personality traits and applying them to personalized agents is being actively pursued. In this paper, we propose a dynamic keyframe extraction method that exploits the video stream structure to predict user traits. To use video data effectively, the conventional approach of extracting features from randomly selected frames needs to be improved into one that selects important frames based on temporal information and the amount of change within the video. We use the First Impressions V2 dataset, a representative dataset labeled with third-party-rated Big-Five scores, to predict the personality traits of the people appearing in each video from externally expressed features. In the conclusion, we compare the performance of personality-trait prediction that combines multimodal information from the selected keyframes against a baseline model.
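
A minimal sketch of the change-driven keyframe selection the abstract argues for, picking the frames after the largest inter-frame changes; the paper's actual stream-structure analysis is not reproduced:

```python
import numpy as np

def dynamic_keyframes(frames, k=8):
    """Select frames where the inter-frame change is largest.

    frames: (T, H, W) grayscale video as a numpy array. A minimal
    sketch of change-driven selection, not the paper's method.
    """
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    change = diffs.mean(axis=(1, 2))      # mean absolute change per step
    idx = np.argsort(change)[::-1][:k]    # k largest changes
    return np.sort(idx + 1)               # the frame after each change

video = np.random.randint(0, 256, (120, 64, 64), dtype=np.uint8)
print(dynamic_keyframes(video))
```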

Implementation of Web Based Video Learning Evaluation System Using User Profiles (사용자 프로파일을 이용한 웹 기반 비디오 학습 평가 시스템의 구현)

  • Shin Seong-Yoon;Kang Il-Ko;Lee Yang-Won
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.6 s.38
    • /
    • pp.137-152
    • /
    • 2005
  • In this paper, we propose an efficient web-based video learning evaluation system tailored to each student's characteristics through user profile-based information filtering. To generate video-based questions, keyframes are extracted based on location, size, and color information, and question-making intervals are extracted by means of differences in gray-level histograms as well as time windows. In addition, by combining the category-based and keyword-based systems, examination questions are generated to ensure efficient evaluation. Students can therefore enhance their achievement by making up for weak areas while continuing to identify their areas of interest.
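
The interval-extraction step rests on gray-level histogram differences between consecutive frames, a classic cut detector. A small sketch, with the bin count and threshold as illustrative choices rather than the paper's settings:

```python
import numpy as np

def histogram_cuts(frames, bins=32, thresh=0.4):
    """Flag frames whose gray-level histogram differs sharply from
    the previous frame's (an L1 histogram-difference cut detector)."""
    cuts, prev = [], None
    for t, f in enumerate(frames):
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        h = h / h.sum()                      # normalize to a distribution
        if prev is not None and np.abs(h - prev).sum() > thresh:
            cuts.append(t)
        prev = h
    return cuts

video = np.random.randint(0, 256, (100, 64, 64), dtype=np.uint8)
print(histogram_cuts(video))
```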


Stochastic Non-linear Hashing for Near-Duplicate Video Retrieval using Deep Feature applicable to Large-scale Datasets

  • Byun, Sung-Woo;Lee, Seok-Pil
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.8
    • /
    • pp.4300-4314
    • /
    • 2019
  • With the development of video-related applications, media content has increased dramatically. A substantial amount of near-duplicate videos (NDVs) exists among Internet videos, so near-duplicate video retrieval (NDVR) is important for eliminating near-duplicates from web video searches. This paper proposes a novel NDVR system that supports large-scale retrieval with efficient and accurate performance. We extract keyframes from each video at regular intervals and then extract both commonly used features (LBP and HSV) and a newer image feature from each keyframe; a recent study introduced an image feature that provides more robust information than existing features even under geometric changes and complex editing of images. We convert the vector set of extracted features to binary codes through a set of hash functions, so that similarity comparison becomes more efficient because similar videos are likely to map into the same buckets. Lastly, we calculate similarity to search for NDVs. We examine the effectiveness of the system and compare it against previous NDVR systems on the public video collection CC_WEB_VIDEO; the proposed system's performance is very promising compared to previous systems.
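
The bucketing idea behind the hash step can be illustrated with classic random-hyperplane LSH: nearby feature vectors tend to receive the same binary code and thus land in the same bucket. The paper learns a stochastic non-linear hash instead, so this is only a sketch of the mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_codes(features, planes):
    """Map real-valued feature vectors to binary codes with random
    hyperplanes: the sign pattern across hyperplanes is the code."""
    return (features @ planes.T > 0).astype(np.uint8)

feats = rng.normal(size=(1000, 128))        # e.g. keyframe descriptors
planes = rng.normal(size=(16, 128))         # 16-bit codes
codes = hash_codes(feats, planes)
print(codes[0])                             # bucket key for item 0
```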

Spatio-Temporal Residual Networks for Slide Transition Detection in Lecture Videos

  • Liu, Zhijin;Li, Kai;Shen, Liquan;Ma, Ran;An, Ping
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.8
    • /
    • pp.4026-4040
    • /
    • 2019
  • In this paper, we present an approach for detecting slide transitions in lecture videos by introducing spatio-temporal residual networks. Given a lecture video that records the digital slides, the speaker, and the audience with multiple cameras, our goal is to find the keyframes where slide content changes. Since temporal dependency among video frames is important for detecting slide changes, 3D convolutional networks (3D ConvNets) have been regarded as an efficient approach to learning spatio-temporal features in videos. However, a plain 3D ConvNet requires long training times and large amounts of memory, so we utilize residual connections (ResNet) to ease training and make the network easier to optimize. Consequently, we present a novel ConvNet architecture based on 3D convolutions and ResNet for slide transition detection in lecture videos. Experimental results show that the proposed architecture achieves better accuracy than other slide progression detection approaches.
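
A basic 3D residual block, the building element such an architecture combines, can be sketched in a few lines of PyTorch (the channel counts and clip size here are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """A basic 3D residual block: two 3x3x3 convolutions with an
    identity skip connection, in the spirit of the spatio-temporal
    ResNet described above."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)            # identity shortcut

clip = torch.randn(1, 16, 8, 56, 56)         # (N, C, T, H, W)
print(ResBlock3D(16)(clip).shape)
```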

CNN-based Visual/Auditory Feature Fusion Method with Frame Selection for Classifying Video Events

  • Choe, Giseok;Lee, Seungbin;Nang, Jongho
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.3
    • /
    • pp.1689-1701
    • /
    • 2019
  • In recent years, personal videos have been shared online due to the popularity of portable devices such as smartphones and action cameras. A recent report predicted that 80% of Internet traffic will be video content by the year 2021. Several studies have been conducted on detecting main video events in order to manage videos at large scale, and they show fairly good performance in certain genres. However, the methods used in previous studies have difficulty detecting events in personal videos, because the characteristics and genres of personal videos vary widely. In our research, we found that adding a dataset with the right perspective improved performance, and that performance also improves depending on how keyframes are extracted from the video. We selected frame segments that can represent a video, considering the characteristics of personal video. From each frame segment, object, location, food, and audio features were extracted, and representative vectors were generated through a CNN-based recurrent model and a fusion module. The proposed method achieved 78.4% mAP in experiments using LSVC data.
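
As a sketch of the fusion idea in its simplest form, per-segment visual and audio scores can be combined by weighted late fusion; the paper's CNN-based recurrent model and learned fusion module are not reproduced:

```python
import numpy as np

def late_fusion(visual, audio, w_visual=0.6):
    """Fuse per-segment visual and audio class scores by a weighted
    average, then pool over segments for a video-level prediction."""
    assert visual.shape == audio.shape        # (segments, classes)
    fused = w_visual * visual + (1 - w_visual) * audio
    return fused.mean(axis=0).argmax()        # predicted event class

v = np.random.rand(10, 5)                     # 10 segments, 5 event classes
a = np.random.rand(10, 5)
print(late_fusion(v, a))
```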

Fast Motion Planning of Wheel-legged Robot for Crossing 3D Obstacles using Deep Reinforcement Learning (심층 강화학습을 이용한 휠-다리 로봇의 3차원 장애물극복 고속 모션 계획 방법)

  • Soonkyu Jeong;Mooncheol Won
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.2
    • /
    • pp.143-154
    • /
    • 2023
  • In this study, a fast motion planning method for the swing motion of a 6x6 wheel-legged robot traversing large obstacles and gaps is proposed. The motion planning method presented in our previous paper, which was based on trajectory optimization, took up to tens of seconds and was limited to two-dimensional, structured vertical obstacles and trenches. A deep neural network based on a one-dimensional Convolutional Neural Network (CNN) is introduced to generate keyframes, which are then used to represent smooth reference commands for the six leg angles along the robot's path. The network is initially trained with behavioral cloning on a dataset gathered from previous trajectory-optimization simulations, and its performance is then improved through reinforcement learning using a one-step REINFORCE algorithm. The trained model speeds up motion planning by up to 820 times and improves the success rate of obstacle crossing under harsh conditions such as low friction and high roughness.
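
The one-step REINFORCE update named above has a compact generic form: sample an action from the policy, then weight the negative log-probability gradient by the received reward. A toy sketch in PyTorch (the linear policy, observation size, and action noise are placeholders, not the paper's 1D-CNN keyframe generator):

```python
import torch

def reinforce_step(policy, optimizer, obs, reward):
    """One-step REINFORCE: sample an action, then scale the
    log-probability gradient by the reward."""
    mean = policy(obs)                          # action mean from the net
    dist = torch.distributions.Normal(mean, 0.1)
    action = dist.sample()
    loss = -dist.log_prob(action).sum() * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return action

policy = torch.nn.Linear(8, 6)                  # toy policy: 6 leg angles
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
reinforce_step(policy, opt, torch.randn(8), reward=1.0)
```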

An Integrated Keyframe Editor of Arms and Hands for 3D Sign-Language Animation (3D 수화 애니메이션을 위한 팔과 손의 통합된 키 프레임 에디터)

  • Kim, Sang-Woon;Lee, Jong-Woo;Aoki, Yoshinao
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.37 no.5
    • /
    • pp.21-30
    • /
    • 2000
  • As a means of overcoming the linguistic barrier between different languages, a sign-language communication system using CG animation techniques has been studied. To generate sign-language animation in such a system, the joint angles of the arms and hands corresponding to a gesture have to be determined. To date, however, the joint-angle values have been decided by trial and error based on the animator's experience. To overcome this drawback, in this paper we design an integrated keyframe editor of the arms and hands for 3D sign-language animation, with which the keyframes required for sign-language animation can be generated easily and quickly. In the implemented keyframe editor, the joint-angle values are calculated using inverse kinematics, and the same transformation matrix is applied to the joints of the two arms and twenty fingers. Experimental results show the possibility that the editor could be used efficiently for building the sign-language communication dictionaries needed for inter-communication between different languages.
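
Once an editor like this produces joint-angle keyframes, playback needs in-between frames. A minimal sketch of linear keyframe interpolation over joint angles, as a stand-in for whatever in-betweening scheme the actual system uses:

```python
import numpy as np

def interpolate_keyframes(times, angles, t):
    """Linearly interpolate joint angles between keyframes.

    times: increasing keyframe times; angles: (num_keyframes,
    num_joints) joint-angle rows such as an editor would produce.
    """
    angles = np.asarray(angles, dtype=float)
    return np.stack([np.interp(t, times, angles[:, j])
                     for j in range(angles.shape[1])], axis=-1)

times = [0.0, 0.5, 1.0]                       # three keyframes
angles = [[0, 0], [30, 45], [10, 90]]         # two joints, in degrees
print(interpolate_keyframes(times, angles, 0.25))
```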


Automatic Summary Method of Linguistic Educational Video Using Multiple Visual Features (다중 비주얼 특징을 이용한 어학 교육 비디오의 자동 요약 방법)

  • Han Hee-Jun;Kim Cheon-Seog;Choo Jin-Ho;Ro Yong-Man
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.10
    • /
    • pp.1452-1463
    • /
    • 2004
  • The requirement for automatic video summarization is increasing as bi-directional broadcasting content grows and user requests and preferences for the bi-directional broadcast environment diversify. Automatic video summarization is also needed for efficient management and use of the many contents held by service providers. In this paper, we propose a method to generate content-based summaries of linguistic (language-education) videos automatically. First, shot boundaries and keyframes are detected in the video, and multiple low-level visual features are extracted. Next, the semantic parts of the video (explanation part, dialog part, text-based part) are identified using the extracted visual features. Lastly, an XML document describing the summary information is produced based on the Hierarchical Summary architecture of the MPEG-7 MDS (Multimedia Description Scheme). Experimental results show that the proposed algorithm provides reasonable performance for automatic summarization of linguistic educational videos, and we verified that the method is useful for video summary systems providing various services as well as for managing educational content.
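
The final step, an XML summary description, can be sketched as a skeletal MPEG-7-style document; the element names below loosely follow HierarchicalSummary conventions for illustration and are not a validated schema instance:

```python
import xml.etree.ElementTree as ET

def summary_xml(segments):
    """Emit a skeletal MPEG-7-style hierarchical summary listing
    highlight segments with their start times and durations."""
    root = ET.Element("Mpeg7")
    summary = ET.SubElement(root, "HierarchicalSummary")
    for name, start, dur in segments:
        seg = ET.SubElement(summary, "HighlightSegment", name=name)
        time = ET.SubElement(seg, "MediaTime")
        ET.SubElement(time, "MediaTimePoint").text = start
        ET.SubElement(time, "MediaDuration").text = dur
    return ET.tostring(root, encoding="unicode")

print(summary_xml([("Explanation", "T00:00:00", "PT1M30S"),
                   ("Dialog", "T00:01:30", "PT2M00S")]))
```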
