• Title/Summary/Keyword: Sync Motion

Search Results: 6

Timing Synchronization for film scoring (영상음악 제작을 위한 Timing Synchronization)

  • Park, Byung-Kyu
    • Journal of Digital Contents Society / v.12 no.2 / pp.177-184 / 2011
  • This study deals with the timing synchronization of motion picture and music from film music's physical point of view. Timing synchronization is achieved with a click track, which maps picture time to bar locations. The emphasis points of the music, called hits, can be found by computing frames per beat and estimating the location of each note in time. A hit may fall on an upbeat, but because of the music's character it is more convenient for the composer when it falls on a downbeat. This study therefore suggests three methods for syncing music and picture: first, sync through time conversion; second, sync through tempo conversion; and lastly, sync through an offset. Each method has its pros and cons, so the choice should be made based on the music's natural flow.
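The frame/beat arithmetic behind a click track can be sketched as follows. This is a minimal illustration of the relationships the abstract describes; the function names, the bar length, and the tempo-conversion strategy are assumptions, not the paper's exact procedure.

```python
# Illustrative click-track arithmetic: relating film frame rate, tempo,
# and hit points. All names and the downbeat-snapping rule are assumed.

def frames_per_beat(frame_rate: float, tempo_bpm: float) -> float:
    """Frames elapsed per beat at a given film frame rate and tempo."""
    return frame_rate * 60.0 / tempo_bpm

def nearest_beat(hit_time_s: float, tempo_bpm: float) -> int:
    """Index of the click-track beat closest to a picture hit point."""
    return round(hit_time_s * tempo_bpm / 60.0)

def tempo_for_downbeat(hit_time_s: float, tempo_bpm: float,
                       beats_per_bar: int = 4) -> float:
    """Tempo conversion: nudge the tempo so the hit lands on a downbeat."""
    beat = nearest_beat(hit_time_s, tempo_bpm)
    # Snap to the nearest bar line (downbeat), then solve tempo = beats * 60 / t.
    downbeat = max(beats_per_bar, round(beat / beats_per_bar) * beats_per_bar)
    return downbeat * 60.0 / hit_time_s

print(frames_per_beat(24, 120))       # 12.0 frames per beat at 24 fps, 120 BPM
print(nearest_beat(10.0, 120))        # beat 20 falls closest to a hit at 10 s
print(tempo_for_downbeat(10.0, 118))  # adjusted tempo placing a downbeat at 10 s
```

Time conversion and offset sync would instead shift the hit time or the music's start point while leaving the tempo untouched.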

3D Character Production for Dialog Syntax-based Educational Contents Authoring System (대화구문기반 교육용 콘텐츠 저작 시스템을 위한 3D 캐릭터 제작)

  • Kim, Nam-Jae;Ryu, Seuc-Ho;Kyung, Byung-Pyo;Lee, Dong-Yeol;Lee, Wan-Bok
    • Journal of the Korea Convergence Society / v.1 no.1 / pp.69-75 / 2010
  • The importance of using visual media in English education has increased. Because characters matter in English-language content, additional effort is needed to present English pronunciation to learners with a realistic implementation. In this paper, we review a dialog syntax-based educational contents authoring system. To achieve a more realistic lip-sync character, a 3D character was constructed to enhance the efficiency of instruction. Using a chart analyzing the association structure of mouth shapes, we produced an optimized 3D character through concept, modeling, mapping, and animation design. To create even more effective educational 3D characters, future research will extend the character with hand and body motion so that it can demonstrate effective communication.
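A mouth-shape association chart of the kind described above is, in effect, a mapping from speech sounds to mouth poses. The sketch below shows one plausible shape for such a mapping; the phoneme-to-viseme table and blend-shape names are illustrative assumptions, not the paper's chart.

```python
# Hypothetical phoneme-to-viseme mapping for driving a 3D lip-sync
# character. The table entries and blend-shape names are assumed.

VISEME_MAP = {
    "AA": "mouth_open",   "IY": "mouth_wide",
    "UW": "mouth_round",  "M":  "mouth_closed",
    "B":  "mouth_closed", "F":  "lip_bite",
}

def phonemes_to_keyframes(phonemes, frame_step=3):
    """Turn a phoneme sequence into (frame, blend_shape) keyframes."""
    keys = []
    for i, p in enumerate(phonemes):
        shape = VISEME_MAP.get(p, "mouth_neutral")  # fall back to neutral
        keys.append((i * frame_step, shape))
    return keys

print(phonemes_to_keyframes(["HH", "AA", "M"]))
# [(0, 'mouth_neutral'), (3, 'mouth_open'), (6, 'mouth_closed')]
```

In a production pipeline the keyframes would drive the character's blend shapes inside the authoring tool, with interpolation between poses.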

Evaluating the Comfort Experience of a Head-Mounted Display with the Delphi Methodology

  • Lee, Doyeon;Chang, Byeng-hee;Park, Jiseob
    • Journal of Internet Computing and Services / v.21 no.6 / pp.81-94 / 2020
  • This study developed evaluation indicators for the comfort experience of virtual reality (VR) headsets by classifying, defining, and weighting cybersickness-causing factors using the Delphi research method and the analytic hierarchy process (AHP). Four surveys were conducted with 20 experts on VR motion sickness, involving 1) classification and definition of the cybersickness-causing dimensions, classification of sub-factors for each dimension, and selection of evaluation indicators, 2) self-reassessment of the results of each step, 3) validity re-evaluation, and 4) final weight calculation. Based on the surveys, the evaluation indicators for the comfort experience of VR headsets were classified into eight sub-factors: field of view (FoV)-device FoV, latency-device latency, framerate-device framerate, V-sync-device V-sync, rig-camera angle of view, rig-no-parallax point, resolution-device resolution, and resolution-pixels per inch (PPI). In total, six dimensions and eight sub-factors were identified, and sub-factor-based evaluation indicators were developed.
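The AHP weighting step mentioned above can be sketched with the standard geometric-mean approximation to the principal eigenvector of a pairwise-comparison matrix. The example judgments below are hypothetical, not the paper's survey data.

```python
# Minimal AHP weight derivation (geometric-mean approximation).
# The pairwise judgments here are invented for illustration only.
from math import prod

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix."""
    n = len(pairwise)
    gmeans = [prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    total = sum(gmeans)
    return [g / total for g in gmeans]  # normalized to sum to 1

# Hypothetical: latency judged 3x as important as FoV, 5x as V-sync
pairwise = [
    [1.0,   3.0,   5.0],  # latency
    [1/3.0, 1.0,   3.0],  # field of view
    [1/5.0, 1/3.0, 1.0],  # V-sync
]
print([round(w, 3) for w in ahp_weights(pairwise)])  # weights, summing to 1
```

In the study, such weights would be computed over all six dimensions and eight sub-factors from the experts' aggregated judgments.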

The Analysis of the Level of Technological Maturity for the u-Learning of Public Education by Mobile Phone (휴대폰을 이용한 공교육 u-러닝의 기술 성숙도 분석)

  • Lee, Jae-Won;Na, Eun-Gu;Song, Gil-Ju
    • IE interfaces / v.19 no.4 / pp.306-315 / 2006
  • In this paper we analyze whether the mobile phone, already widely distributed among the younger generation, can serve as a device for u-learning in Korean public education. For this purpose we examine technical maturity along three axes. First, we examine how readily contents developers in public education can author mobile internet-based contents, both text and motion picture. We found that authoring text presents almost no difficulty, but authoring motion pictures shows some problems. Second, we examine whether u-learners can easily obtain and use u-contents on both mobile phones and PCs. We found that downloading motion picture contents to a mobile phone is very limited; we therefore discuss the usability and problems of various PC sync tools and propose their standardization. Finally, we discuss the need to introduce a ubiquitous SCORM that would enable u-contents to be reused across different Korean telcos' mobile phones, and describe some functionality of both ubiquitous SCORM and a u-LMS. This study appears to be among the first to examine the technological maturity for introducing mobile-phone u-learning in Korean public education, and it can serve as a reference for studies of u-learning over wireless telecommunication technologies other than mobile telecommunication.

Multicontents Integrated Image Animation within Synthesis for High Quality Multimodal Video (고화질 멀티 모달 영상 합성을 통한 다중 콘텐츠 통합 애니메이션 방법)

  • Jae Seung Roh;Jinbeom Kang
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.257-269 / 2023
  • There is currently a burgeoning demand for image synthesis from photos and videos using deep learning models. Existing video synthesis models solely extract motion information from the provided video to generate animation effects on photos. However, these models encounter challenges in achieving accurate lip synchronization with the audio and in maintaining the image quality of the synthesized output. To tackle these issues, this paper introduces a novel framework based on an image animation approach. Given a photo, a video, and an audio input, the framework produces an output that retains the unique characteristics of the individuals in the photo while synchronizing their movements with the provided video and achieving lip synchronization with the audio. Furthermore, a super-resolution model is employed to enhance the quality and resolution of the synthesized output.