• Title/Summary/Keyword: 3D motion synthesis

3D Facial Synthesis and Animation for Facial Motion Estimation (얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션)

  • Park, Do-Young;Shim, Youn-Sook;Byun, Hye-Ran
    • Journal of KIISE: Software and Applications / v.27 no.6 / pp.618-631 / 2000
  • In this paper, we propose a method for 3D facial synthesis using the motion of 2D facial images. We use an optical-flow-based method to estimate motion: parameterized motion vectors are extracted from the optical flow between adjacent frames of the image sequence in order to estimate the facial features and the facial motion in 2D. We then combine the parameters of these motion vectors to estimate the overall facial motion. We use parameterized vector models tailored to the facial features; our motion vector models cover the eye area, the lip-eyebrow area, and the whole face area. By combining the 2D facial motion information with the action units of a 3D facial model, we synthesize the 3D facial animation.
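
As a rough illustration of the optical-flow-based estimation step described above, the sketch below computes dense flow between two adjacent frames and averages it over per-region boxes standing in for the eye, lip-eyebrow, and face areas named in the abstract. The region coordinates and the choice of OpenCV's Farneback flow are assumptions for illustration, not the authors' code.

```python
import cv2
import numpy as np

def region_motion_vectors(prev_gray, next_gray, regions):
    """Estimate one average motion vector per facial region from dense optical flow.

    regions: dict mapping a region name to an (x, y, w, h) box.
    Returns: dict mapping region name -> mean (dx, dy).
    """
    # Dense Farneback optical flow between two adjacent grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    vectors = {}
    for name, (x, y, w, h) in regions.items():
        patch = flow[y:y + h, x:x + w]                     # (h, w, 2) flow field
        vectors[name] = patch.reshape(-1, 2).mean(axis=0)  # mean (dx, dy)
    return vectors

# Hypothetical region boxes for a 640x480 face image.
regions = {"eye": (200, 140, 240, 60),
           "lip_eyebrow": (220, 280, 200, 100),
           "face": (160, 100, 320, 320)}
```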

Cyber Character Implementation with Recognition and Synthesis of Speech/Image (음성/영상의 인식 및 합성 기능을 갖는 가상캐릭터 구현)

  • Choe, Gwang-Pyo;Lee, Du-Seong;Hong, Gwang-Seok
    • Journal of the Institute of Electronics Engineers of Korea CI / v.37 no.5 / pp.54-63 / 2000
  • In this paper, we implement a cyber character capable of speech recognition, speech synthesis, motion tracking, and 3D animation. For speech recognition, we use a discrete-HMM algorithm with 128-level K-means vector quantization and MFCC feature vectors. For speech synthesis, we use a demi-syllable TD-PSOLA algorithm. For PC-based motion tracking, we present a fast optical-flow-like method. For animating the 3D model, we use vertex interpolation with Direct3D retained mode. Finally, we integrate these systems into a cyber character that plays a multiplication-table quiz game with the user and that always looks at the user by means of the motion-tracking system.
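
The recognition front end described here (MFCC features quantized against a 128-symbol codebook for a discrete HMM) can be approximated as below; librosa and scikit-learn stand in for whatever toolchain the authors actually used, so the sampling rate and feature dimensions are assumptions.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def build_codebook(training_waves, sr=16000, n_mfcc=13, levels=128):
    """Fit a 128-level K-means codebook over MFCC frames from training audio."""
    frames = []
    for y in training_waves:                      # each y: 1-D waveform array
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, T)
        frames.append(mfcc.T)                     # collect frames as rows
    return KMeans(n_clusters=levels, n_init=10).fit(np.vstack(frames))

def quantize(y, codebook, sr=16000, n_mfcc=13):
    """Map an utterance to a discrete symbol sequence for a discrete HMM."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return codebook.predict(mfcc.T)               # one codebook index per frame
```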

3D Character Motion Synthesis and Control Method for Navigating Virtual Environment Using Depth Sensor (깊이맵 센서를 이용한 3D캐릭터 가상공간 내비게이션 동작 합성 및 제어 방법)

  • Sung, Man-Kyu
    • Journal of Korea Multimedia Society / v.15 no.6 / pp.827-836 / 2012
  • After the successful advent of Microsoft's Kinect, many interactive contents that control a user's 3D avatar motions in real time have been created. However, because of the Kinect's intrinsic IR-projection problem, users are restricted to facing the sensor directly and performing all motions in a standing-still position. These constraints are the main reasons it is almost impossible for the 3D character to navigate the virtual environment, one of the most essential functionalities in games. This paper proposes a new method that lets a 3D character navigate the virtual environment with highly realistic motions. First, in order to detect the user's intention to navigate, the method recognizes a walking-in-place motion. Second, the algorithm applies a motion-splicing technique that automatically segments the character's upper- and lower-body motions and then naturally switches the lower-body motion to pre-processed motion capture data. Since the proposed algorithm can synthesize realistic lower-body walking motion from motion capture data while capturing the upper-body motion online in a puppetry manner, it allows the 3D character to navigate the virtual environment realistically.
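
A minimal sketch of the two steps named above: detecting walking-in-place from alternating foot lifts, and splicing a pre-recorded lower body under the live upper body. The joint index layout, lift threshold, and clip handling are assumptions for illustration.

```python
import numpy as np

# Hypothetical joint index layout for a tracked skeleton.
UPPER = [0, 1, 2, 3, 4, 5, 6, 7]   # spine, head, arms (assumed indices)
LOWER = [8, 9, 10, 11, 12, 13]     # hips, knees, feet (assumed indices)

def is_walking_in_place(foot_y_history, lift=0.05):
    """Detect walking-in-place: the two feet alternate above a lift threshold.

    foot_y_history: (T, 2) array of left/right foot heights over recent frames.
    """
    left_up = foot_y_history[:, 0] > lift
    right_up = foot_y_history[:, 1] > lift
    # Alternation: each foot lifts at some point, but never both at once.
    return left_up.any() and right_up.any() and not (left_up & right_up).any()

def splice(live_pose, walk_clip, t):
    """Keep the live (sensor-tracked) upper body, swap in mocap lower body."""
    out = live_pose.copy()                       # (n_joints, 3) joint positions
    out[LOWER] = walk_clip[t % len(walk_clip)][LOWER]  # cycle through the clip
    return out
```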

Space-Time Warp Curve for Synthesizing Multi-character Motions

  • Sung, Mankyu;Choi, Gyu Sang
    • ETRI Journal / v.39 no.4 / pp.493-501 / 2017
  • This paper introduces a new motion-synthesis technique for animating multiple characters. At a high level, we introduce a hub-sub-control-point scheme that automatically generates many different spline curves from a user scribble. Each spline curve then becomes a trajectory along which a 3D character moves. Based on the given curves, our algorithm synthesizes motions from a cyclic motion. In this process, space-time warp curves, which are time-warp curves embedded in the 3D environment, control the speed of the motions. Since a space-time warp curve represents a trajectory over the time domain, it lets us verify whether a trajectory causes any collisions between characters simply by checking whether two space-time warp curves intersect. In addition, space-time warp curves can be edited at run time to change the speed of the characters. Several experiments demonstrate that the proposed algorithm can efficiently synthesize the motions of a group of characters. Our method creates collision-avoiding trajectories ten times faster than manual creation.
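
Since a space-time warp curve maps time to a position along a trajectory, the collision test reduces to asking whether two characters are ever simultaneously too close. A discretized version of that check, with made-up sampling and radius parameters, might look like this.

```python
import numpy as np

def collide(warp_a, warp_b, path_a, path_b, t_end, radius=0.5, steps=200):
    """Check whether two space-time warp curves bring characters into collision.

    warp_*: function time -> arc-length parameter in [0, 1] (the time warp).
    path_*: function arc-length parameter -> (x, y) position on the spline.
    """
    for t in np.linspace(0.0, t_end, steps):
        pa = np.asarray(path_a(warp_a(t)))    # character A's position at time t
        pb = np.asarray(path_b(warp_b(t)))    # character B's position at time t
        if np.linalg.norm(pa - pb) < radius:  # simultaneously too close
            return True
    return False
```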

Fast Motion Synthesis of Quadrupedal Animals Using a Minimum Amount of Motion Capture Data

  • Sung, Mankyu
    • ETRI Journal / v.35 no.6 / pp.1029-1037 / 2013
  • This paper introduces a novel, fast method for synthesizing the 3D motions of quadrupedal animals that uses only a small set of motion capture data. Unlike human motions, animal motions are relatively difficult to capture. It is also a challenge to synthesize continuously changing animal motions in real time because animals have different gait types according to their speed. The algorithm proposed herein, however, is able to synthesize continuously varying motions with proper limb configuration by using only a single cyclic animal motion per gait type, based on the biologically derived Froude number. During the synthesis process, each gait type is automatically determined by its speed parameter, and the transition motions, which are not given as input, are synthesized by an optimized asynchronous motion-blending technique. Given the user's control input, the motion path and the spinal joints for turning are adjusted first, and then the motion is stitched at any speed with proper transition motions to synthesize a long stream of motion.
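
The Froude number used for gait selection is the standard dimensionless ratio Fr = v^2 / (g L), with v the speed and L the leg length. A sketch of speed-driven gait switching follows; the cut-off values below are the commonly cited walk/trot/gallop transitions from the biomechanics literature, not values taken from this paper's abstract.

```python
G = 9.81  # gravitational acceleration, m/s^2

def froude(speed, leg_length):
    """Dimensionless Froude number Fr = v^2 / (g * L)."""
    return speed ** 2 / (G * leg_length)

def gait_type(speed, leg_length, walk_max=0.5, trot_max=2.5):
    """Pick a gait from the Froude number; thresholds here are assumptions."""
    fr = froude(speed, leg_length)
    if fr < walk_max:
        return "walk"
    if fr < trot_max:
        return "trot"
    return "gallop"
```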

Technology Trends for Motion Synthesis and Control of 3D Character

  • Choi, Jong-In
    • Journal of the Korea Society of Computer and Information / v.24 no.4 / pp.19-26 / 2019
  • In this study, we review techniques for synthesizing and controlling the motion of 3D character animation and discuss the direction in which the technology is developing. Character animation has developed along two lines: data-driven methods and physics-based methods. Keyframe-based animation generation was made practical by advances in hardware, motion capture devices came into common use, and various techniques for effectively editing the captured motion data appeared. At the same time, physics-based animation techniques emerged, which generate a character's motion realistically through physically optimized numerical computation. Recently, animation techniques using machine learning have shown new possibilities for creating characters that can be controlled by the user in real time, and they are expected to develop further in the future.

Emotional Expression through the Selection Control of Gestures of a 3D Avatar (3D 아바타 동작의 선택 제어를 통한 감정 표현)

  • Lee, JiHye;Jin, YoungHoon;Chai, YoungHo
    • Korean Journal of Computational Design and Engineering / v.19 no.4 / pp.443-454 / 2014
  • In this paper, an intuitive emotional expression for a 3D avatar is presented. Using selection control over the 3D avatar's motions, communication that is easier and more intuitive than emoticons becomes possible. Twelve different avatar emotions are classified: positive emotions such as cheering, being impressed, joy, welcoming, fun, and pleasure, and negative emotions of anger, jealousy, wrath, frustration, sadness, and loneliness. Combinations with lower-body motions are used to represent the additional emotions of amusement, joyousness, surprise, enthusiasm, gladness, excitement, sulking, discomfort, irritation, embarrassment, anxiety, and sorrow. In order to obtain realistic human postures, motion capture data in BVH format are used, and the synthesis of the BVH data is implemented by applying the proposed emotional-expression rules for the 3D avatar.
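
Reduced to code, the selection-control idea is essentially a lookup from an emotion label to a captured gesture clip, optionally paired with a lower-body clip for the extended emotion set. The clip names and the combination rule below are invented for illustration; the paper's actual rules are not given in the abstract.

```python
# Hypothetical mapping from emotion labels to BVH gesture clips.
POSITIVE = {"cheering": "cheer.bvh", "joy": "joy.bvh", "welcoming": "welcome.bvh"}
NEGATIVE = {"anger": "anger.bvh", "sadness": "sad.bvh", "loneliness": "lonely.bvh"}

# Extended emotions built by pairing an upper-body clip with a lower-body clip.
COMBINED = {"surprise": ("joy.bvh", "step_back.bvh"),
            "sulking": ("sad.bvh", "stomp.bvh")}

def select_gesture(emotion):
    """Return (upper_clip, lower_clip) for an emotion; lower may be None."""
    if emotion in COMBINED:
        return COMBINED[emotion]
    clip = POSITIVE.get(emotion) or NEGATIVE.get(emotion)
    if clip is None:
        raise KeyError(f"no gesture rule for emotion: {emotion}")
    return clip, None
```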

Effects of spatial variability of earthquake ground motion in cable-stayed bridges

  • Ferreira, Miguel P.;Negrao, Joao H.
    • Structural Engineering and Mechanics / v.23 no.3 / pp.233-247 / 2006
  • Most codes of practice state that for structures with large in-plan dimensions it is necessary to account for the spatial variability of earthquake ground motion. Essentially three effects contribute to this variation: (i) the wave-passage effect, due to the finite propagation velocity; (ii) the incoherence effect, due to differences in the superposition of waves; and (iii) local site amplification, due to spatial variation in geological conditions. This paper discusses the procedures to be undertaken in the time-domain analysis of a cable-stayed bridge under spatially variable earthquake ground motion. The artificial synthesis of correlated displacement series that simulate the earthquake load is discussed first. Next, the 3D model of the International Guadiana Bridge used for the seismic analysis tests is described. A comparison of the effects produced by seismic waves with different apparent propagation velocities and different geological conditions is undertaken. The results of this study show that the differences between analyses with and without spatial variability of earthquake ground motion can be significant for some displacements and internal forces, especially those influenced by symmetric modes.
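
Of the three effects listed, the wave-passage effect is the simplest to write down: each support sees the same ground motion delayed by its distance from the reference point divided by the apparent wave velocity. The sketch below time-shifts a base displacement record per support; it ignores incoherence and site effects, and all numbers are placeholders.

```python
import numpy as np

def delayed_supports(base_disp, dt, support_x, v_app):
    """Apply the wave-passage effect: shift a ground-displacement series in time.

    base_disp: (T,) displacement record at the reference support.
    support_x: iterable of support positions along the bridge (m).
    v_app: apparent wave propagation velocity (m/s).
    Returns: (n_supports, T) array of delayed records.
    """
    t = np.arange(len(base_disp)) * dt
    records = []
    for x in support_x:
        delay = x / v_app                             # arrival delay at support x
        # Sample the base record at t - delay (zero before wave arrival).
        records.append(np.interp(t - delay, t, base_disp, left=0.0))
    return np.stack(records)

# Placeholder example: supports every 100 m, apparent velocity 1000 m/s.
# disp = delayed_supports(base_disp, dt=0.01, support_x=[0, 100, 200], v_app=1000.0)
```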

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.2 / pp.120-133 / 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Exact head-pose estimation and facial-expression tracking are critical issues that must be solved when developing vision-based computer animation, and we address both. The proposed approach consists of two phases: dynamic head-pose estimation and facial-expression cloning. The dynamic head-pose estimation robustly estimates a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. By updating the template dynamically, the head pose can be recovered regardless of lighting variations and self-occlusion. In the expression-synthesis phase, the movements of the major facial feature points are tracked using optical flow and retargeted to the 3D face model. At the same time, we exploit RBFs (radial basis functions) to deform the local area of the face model around the major feature points. Consequently, facial-expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments show that the proposed vision-based facial-expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
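
The RBF deformation step can be sketched directly: displacements measured at the major feature points are interpolated out to the surrounding regional vertices with a radial kernel. The Gaussian kernel and its width below are assumptions; the abstract does not specify them.

```python
import numpy as np

def rbf_deform(major_pts, major_disp, region_pts, sigma=0.05):
    """Propagate feature-point displacements to nearby vertices via RBFs.

    major_pts:  (m, 3) rest positions of tracked major feature points.
    major_disp: (m, 3) measured displacements of those points.
    region_pts: (n, 3) regional vertices to deform.
    """
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise sq. dist
        return np.exp(-d2 / (2.0 * sigma ** 2))              # Gaussian RBF

    # Solve the (m, m) interpolation system for per-point weights.
    K = kernel(major_pts, major_pts)
    weights = np.linalg.solve(K + 1e-8 * np.eye(len(K)), major_disp)
    # Evaluate the interpolant at the regional vertices and displace them.
    return region_pts + kernel(region_pts, major_pts) @ weights
```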

View synthesis in uncalibrated images (임의 카메라 구조에서의 영상 합성)

  • Kang, Ji-Hyun;Kim, Dong-Hyun;Sohn, Kwang-Hoon
    • Proceedings of the IEEK Conference / 2006.06a / pp.437-438 / 2006
  • Virtual view synthesis, which exploits the motion-parallax cue, is essential for 3DTV systems. In this paper, we propose a multi-step view-synthesis algorithm that efficiently reconstructs an arbitrary view from a limited number of known views of a 3D scene. We describe an efficient image-rectification procedure that guarantees that the interpolation process produces valid views. This rectification method can deal with all possible camera motions; the idea is to use a polar parameterization of the image around the epipole. To generate intermediate views, we then use an efficient dense disparity-estimation algorithm that considers the features of stereo image pairs; its main concept is region-dividing bidirectional pixel matching. The estimated disparities are used to synthesize intermediate views of the stereo images. Computer simulations demonstrate the results of the proposed algorithm.
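
Once a dense disparity map is available, an intermediate view at interpolation position t in [0, 1] can be synthesized by shifting each left-image pixel a fraction t of its disparity toward the right view. A naive forward-warping sketch follows; it ignores occlusion handling and hole filling, which the paper's bidirectional matching is designed to address.

```python
import numpy as np

def intermediate_view(left, disparity, t):
    """Forward-warp the left image by t * disparity to synthesize a middle view.

    left: (H, W, 3) image; disparity: (H, W) left-to-right disparities;
    t: interpolation position, 0.0 = left view, 1.0 = right view.
    """
    h, w = disparity.shape
    out = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xi = int(round(x - t * disparity[y, x]))  # shifted column index
            if 0 <= xi < w:
                out[y, xi] = left[y, x]               # naive splat, no z-test
    return out
```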
