• Title/Summary/Keyword: Keyframes

A Study on Good Pose in Pose to Pose (포즈 투 포즈 방식 애니메이션에서 포즈 선별에 대한 연구)

  • Kim, Young-Chul
    • Cartoon and Animation Studies / s.41 / pp.57-73 / 2015
  • A pose, together with timing and spacing, is a fundamental component of animation; it is the key to conveying the story and a character's behavior. The two principal key-animation methods are straight ahead and pose to pose, and many animators use one of the two or a mixture of both. In computer animation, in-between poses can be generated by interpolating between keyframes, and most computer animators therefore work pose to pose. Whether the audience understands a story or a situation depends on good, strong poses, which in turn determines whether the animator works efficiently or inefficiently. This study proposes four ways to capture an effective, good pose: stretch and squash, the height of the character, the center of weight, and the step. Disney's twelve principles of animation serve as a useful reference for this study.
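A minimal sketch of the keyframe interpolation mentioned above, assuming a pose is stored as a dictionary of joint angles; the joint names, key poses, and timing values are hypothetical and not taken from the paper.

```python
def lerp(a, b, t):
    """Linear interpolation between two scalar values."""
    return a + (b - a) * t

def interpolate_pose(key_pose_a, key_pose_b, t):
    """Blend two key poses (dicts of joint angles) at parameter t in [0, 1].

    In pose-to-pose computer animation, in-between frames are generated
    from the animator's key poses in roughly this way.
    """
    return {joint: lerp(key_pose_a[joint], key_pose_b[joint], t)
            for joint in key_pose_a}

# Hypothetical key poses for a simple two-joint arm (angles in degrees).
pose_contact = {"shoulder": 30.0, "elbow": 10.0}
pose_passing = {"shoulder": -15.0, "elbow": 45.0}

# Generate three in-betweens between the two key poses.
for frame, t in enumerate([0.25, 0.5, 0.75], start=1):
    print(frame, interpolate_pose(pose_contact, pose_passing, t))
```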

Big Data Analysis Method for Recommendations of Educational Video Contents (사용자 추천을 위한 교육용 동영상의 빅데이터 분석 기법 비교)

  • Lee, Hyoun-Sup;Kim, JinDeog
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.12 / pp.1716-1722 / 2021
  • Recently, the volume of video content delivery services has increased significantly, so user recommendation is becoming more important. Such content has diverse characteristics, which are difficult to express properly with only a few user-specified keywords (search elements such as titles, tags, topics, and words). Consequently, existing recommendation systems that rely on user-defined keywords have limitations in reflecting the characteristics of the content. In this paper, we compare the efficiency of a method that uses subtitles derived from voice data with an image comparison method that uses keyframes in the recommendation module of educational video service systems. Based on experimental results, we also identify the types of video content and the environments in which each analysis technique can be used efficiently.
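The keyframe-based image comparison contrasted above with the subtitle-based method could look roughly like the sketch below, which scores two keyframe sets by HSV histogram correlation with OpenCV; the function names and the choice of histogram correlation are assumptions, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def frame_histogram(frame, bins=32):
    """HSV color histogram of a single keyframe, normalized and flattened."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def keyframe_similarity(keyframes_a, keyframes_b):
    """Average best-match histogram correlation between two keyframe sets."""
    hists_b = [frame_histogram(fb) for fb in keyframes_b]
    scores = []
    for fa in keyframes_a:
        ha = frame_histogram(fa)
        scores.append(max(cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL)
                          for hb in hists_b))
    return float(np.mean(scores))
```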

Compression Method for MPEG CDVA Global Feature Descriptors (MPEG CDVA 전역 특징 서술자 압축 방법)

  • Kim, Joonsoo;Jo, Won;Lim, Guentaek;Yun, Joungil;Kwak, Sangwoon;Jung, Soon-heung;Cheong, Won-Sik;Choo, Hyon-Gon;Seo, Jeongil;Choi, Yukyung
    • Journal of Broadcast Engineering / v.27 no.3 / pp.295-307 / 2022
  • In this paper, we propose a novel compression method for the scalable Fisher vector (SCFV), which is used as the global visual feature descriptor of individual video frames in the MPEG CDVA standard. The CDVA standard has adopted a temporal descriptor redundancy removal technique that exploits the correlation between the global feature descriptors of adjacent keyframes. However, because of the variable-length property of the SCFV, this temporal redundancy removal scheme often yields inferior compression efficiency, sometimes worse than not compressing the SCFVs at all. To enhance compression efficiency, we propose an asymmetric SCFV difference computation method and an SCFV reconstruction method. Experiments on the FIVR dataset show that the proposed method significantly improves compression efficiency compared to the original CDVA Experimental Model implementation.
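The temporal redundancy removal described above encodes the difference between the global descriptors of adjacent keyframes. The sketch below illustrates only that general idea on plain binary vectors (XOR difference plus reconstruction); it does not reproduce the CDVA bitstream syntax or the asymmetric, variable-length handling proposed in the paper.

```python
import numpy as np

def scfv_difference(prev_desc: np.ndarray, curr_desc: np.ndarray) -> np.ndarray:
    """Bitwise difference between two binary global descriptors.

    Only the positions where the current keyframe's descriptor differs from
    the previous one need to be encoded, which is the core idea behind
    temporal descriptor redundancy removal.
    """
    return np.bitwise_xor(prev_desc, curr_desc)

def scfv_reconstruct(prev_desc: np.ndarray, diff: np.ndarray) -> np.ndarray:
    """Recover the current descriptor from the previous one plus the difference."""
    return np.bitwise_xor(prev_desc, diff)

# Toy binary descriptors standing in for two adjacent keyframes' SCFVs.
rng = np.random.default_rng(0)
prev_desc = rng.integers(0, 2, size=256, dtype=np.uint8)
curr_desc = prev_desc.copy()
curr_desc[rng.choice(256, size=8, replace=False)] ^= 1  # only a few bits change

diff = scfv_difference(prev_desc, curr_desc)
assert np.array_equal(scfv_reconstruct(prev_desc, diff), curr_desc)
print("changed bits:", int(diff.sum()), "of", diff.size)
```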

A Research on Image Metadata Extraction through YCrCb Color Model Analysis for Media Hyper-personalization Recommendation (미디어 초개인화 추천을 위한 YCrCb 컬러 모델 분석을 통한 영상의 메타데이터 추출에 대한 연구)

  • Park, Hyo-Gyeong;Yong, Sung-Jung;You, Yeon-Hwi;Moon, Il-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.277-280 / 2021
  • Recently, as diverse content is mass-produced thanks to high accessibility, the media content market has become increasingly active. Users want to find content that suits their taste, and each platform competes to provide personalized content recommendations. An efficient recommendation system requires high-quality metadata. Existing platforms rely on users entering the metadata of a video directly, which wastes time and money when large amounts of data are processed. In this paper, toward media hyper-personalized recommendation, we extract keyframes from movie trailers based on the YCrCb color model of the video, distinguish movie genres through supervised learning, and propose a plan for using the results to generate metadata in the future.
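A minimal sketch of YCrCb-based keyframe extraction in the spirit of the abstract, using OpenCV: frames are converted to YCrCb and a frame is kept as a keyframe whenever its color histogram changes sharply from the previous frame. The histogram binning and threshold are hypothetical choices, not the authors' parameters.

```python
import cv2

def extract_keyframes(video_path, threshold=0.4):
    """Pick keyframes where the YCrCb histogram changes sharply.

    threshold is a hypothetical cut-off on (1 - correlation) between
    consecutive frames' YCrCb histograms.
    """
    cap = cv2.VideoCapture(video_path)
    keyframes, prev_hist, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
        hist = cv2.calcHist([ycrcb], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is None or \
           1 - cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) > threshold:
            keyframes.append((index, frame))
        prev_hist = hist
        index += 1
    cap.release()
    return keyframes
```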

High Resolution Video Synthesis with a Hybrid Camera (하이브리드 카메라를 이용한 고해상도 비디오 합성)

  • Kim, Jong-Won;Kyung, Min-Ho
    • Journal of the Korea Computer Graphics Society / v.13 no.4 / pp.7-12 / 2007
  • With the advent of digital cinema, more and more movies are digitally produced, distributed via digital media such as hard drives and networks, and finally projected with a digital projector. However, digital cameras capable of shooting at 2K or higher resolution for digital cinema are still very expensive and bulky, which impedes a rapid transition to digital production. As a low-cost solution for acquiring high-resolution digital video, we propose a hybrid camera consisting of a low-resolution CCD that captures video and a high-resolution CCD that captures still images at regular intervals. From the output of the hybrid camera, high-resolution video can be synthesized in software as follows: for each frame, (1) find pixel correspondences from the current frame to the previous and subsequent keyframes associated with high-resolution still images, (2) synthesize a high-resolution image for the current frame by copying the image blocks associated with the corresponding pixels from the high-resolution keyframe images, and (3) complete the synthesis by filling the holes in the synthesized image. This framework can be extended to creating NPR video effects and capturing HDR videos.
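A heavily simplified, single-keyframe sketch of the synthesis steps listed above (correspondence search, block copying, hole filling), using dense optical flow from OpenCV as the correspondence step and an upsampled low-resolution frame as a stand-in for the paper's hole-filling step; the parameters, the integer scale assumption, and the per-pixel loop are illustrative choices, not the authors' implementation.

```python
import cv2

def synthesize_highres(lowres_frame, lowres_keyframe, highres_keyframe):
    """Toy version of the synthesis idea for a single keyframe:
    1) find per-pixel correspondences (dense optical flow) from the current
       low-res frame to the low-res keyframe,
    2) copy the corresponding high-res blocks from the keyframe still image,
    3) fall back to the upsampled low-res frame where no block was copied
       (a stand-in for the paper's hole-filling step).
    """
    # Assumes the high-res still is an integer multiple of the low-res size.
    scale = highres_keyframe.shape[0] // lowres_frame.shape[0]
    h, w = lowres_frame.shape[:2]

    gray_cur = cv2.cvtColor(lowres_frame, cv2.COLOR_BGR2GRAY)
    gray_key = cv2.cvtColor(lowres_keyframe, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gray_cur, gray_key, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    out = cv2.resize(lowres_frame, (w * scale, h * scale),
                     interpolation=cv2.INTER_CUBIC)  # hole-filling fallback
    for y in range(h):
        for x in range(w):
            kx, ky = x + flow[y, x, 0], y + flow[y, x, 1]
            if 0 <= kx < w - 1 and 0 <= ky < h - 1:
                sy, sx = int(round(ky)) * scale, int(round(kx)) * scale
                block = highres_keyframe[sy:sy + scale, sx:sx + scale]
                if block.shape[:2] == (scale, scale):
                    out[y * scale:(y + 1) * scale,
                        x * scale:(x + 1) * scale] = block
    return out
```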

A Novel Video Copy Detection Method based on Statistical Analysis (통계적 분석 기반 불법 복제 비디오 영상 감식 방법)

  • Cho, Hye-Jeong;Kim, Ji-Eun;Sohn, Chae-Bong;Chung, Kwang-Sue;Oh, Seoung-Jun
    • Journal of Broadcast Engineering / v.14 no.6 / pp.661-675 / 2009
  • As internet and multimedia technologies advance, carelessly and illegally copied content is becoming a serious social problem, so a video copy detection system must be developed without delay. In this paper, we propose a hierarchical video copy detection method that estimates the similarity between an original video and a manipulated (transformed) copy using statistical characteristics. To be robust to spatial transformations, frames are ranked by luminance value, and similar videos are selected as candidate segments from a very large database to reduce processing time and complexity. Because copied videos generally insert black areas at the edges of the image, we remove the black border and decide whether a video is a copy using the statistical characteristics of the original and copied videos computed on the center part of the frame, which contains the video's most important information. Experimental results show that the proposed method achieves keyframe accuracy similar to the reference method while using less memory to store feature information, since it uses 61% fewer keyframes than the reference. The proposed method also efficiently determines whether a video is copied despite extensive spatial transformations such as blurring, contrast change, zoom in, zoom out, aspect ratio change, and caption insertion.
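A rough sketch of two steps described above, black border removal and a copy decision based on luminance statistics of the frame's center region; the thresholds and the simple mean/standard-deviation comparison are assumptions, not the paper's exact statistical test.

```python
import cv2
import numpy as np

def remove_black_border(frame, threshold=16):
    """Crop the letterbox/pillarbox black areas commonly added to copied videos."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = gray > threshold
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    if not rows.any() or not cols.any():
        return frame
    y0, y1 = np.argmax(rows), len(rows) - np.argmax(rows[::-1])
    x0, x1 = np.argmax(cols), len(cols) - np.argmax(cols[::-1])
    return frame[y0:y1, x0:x1]

def center_luminance_stats(frame, ratio=0.5):
    """Mean/std of luminance on the center part of the frame,
    which carries most of the video's important information."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    dy, dx = int(h * ratio / 2), int(w * ratio / 2)
    center = gray[h // 2 - dy:h // 2 + dy, w // 2 - dx:w // 2 + dx]
    return float(center.mean()), float(center.std())

def is_copy(orig_keyframe, query_keyframe, tol=10.0):
    """Very rough copy decision: compare center-region luminance statistics."""
    stats_o = center_luminance_stats(remove_black_border(orig_keyframe))
    stats_q = center_luminance_stats(remove_black_border(query_keyframe))
    return all(abs(a - b) < tol for a, b in zip(stats_o, stats_q))
```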