• Title/Abstract/Keywords: 3D motion synthesis


얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션 (3D Facial Synthesis and Animation for Facial Motion Estimation)

  • 박도영;심연숙;변혜란
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 27, No. 6 / pp. 618-631 / 2000
  • In this paper, we study a method for extracting motion from 2D face images and synthesizing it onto a 3D face model. To estimate motion in an image sequence, an estimation method based on optical flow is used. To estimate the motion of the facial features and the face in the 2D sequence, parameterized motion vectors that best account for the optical flow computed from two adjacent frames are extracted, and these are combined into a small number of parameters that describe the facial motion. Parameterized motion vectors are used for three different kinds of motion: the eye regions, the lip and eyebrow regions, and the face region. By combining these with the Action Units that drive the face model, facial motion in the 2D image sequence is synthesized in 3D.
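
As a rough illustration of the estimation step described above, the following sketch fits a six-parameter affine motion model to dense optical flow over one facial region. It is a minimal example assuming OpenCV and NumPy, not the authors' implementation:

```python
import cv2
import numpy as np

def fit_affine_motion(prev_gray, next_gray, region):
    """Fit a 6-parameter affine motion model to the optical flow
    computed between two adjacent frames, restricted to `region`
    (x, y, w, h). Returns parameters a such that
    u = a0 + a1*x + a2*y and v = a3 + a4*x + a5*y."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    x0, y0, w, h = region
    ys, xs = np.mgrid[y0:y0 + h, x0:x0 + w]
    u = flow[y0:y0 + h, x0:x0 + w, 0].ravel()
    v = flow[y0:y0 + h, x0:x0 + w, 1].ravel()
    xs, ys = xs.ravel().astype(np.float64), ys.ravel().astype(np.float64)

    # Least-squares fit: each flow vector contributes two linear equations.
    A = np.stack([np.ones_like(xs), xs, ys], axis=1)
    a_u, *_ = np.linalg.lstsq(A, u, rcond=None)   # a0, a1, a2
    a_v, *_ = np.linalg.lstsq(A, v, rcond=None)   # a3, a4, a5
    return np.concatenate([a_u, a_v])
```

The fitted parameters play the role of the paper's parameterized motion vectors; mapping them onto the face model's Action Units is model-specific and omitted here.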


음성/영상의 인식 및 합성 기능을 갖는 가상캐릭터 구현 (Cyber Character Implementation with Recognition and Synthesis of Speech/Image)

  • 최광표;이두성;홍광석
    • 전자공학회논문지CI / Vol. 37, No. 5 / pp. 54-63 / 2000
  • In this paper, we implement a cyber character capable of speech recognition, speech synthesis, motion tracking, and 3D animation. Speech recognition uses a discrete-HMM algorithm with K-means 128-level VQ and MFCC feature patterns. Speech synthesis uses demisyllable-based TD-PSOLA. For motion tracking, a Fast Optical Flow Like Method is proposed to reduce computation, and the 3D animation system animates by vertex interpolation and renders with Direct3D. Finally, these systems are integrated into a cyber character that continuously watches the user and can play a multiplication-table game together with the user.
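
The vertex interpolation used for the 3D animation amounts to a per-vertex linear blend between key poses. A minimal sketch (array shapes and frame counts are illustrative, not from the paper):

```python
import numpy as np

def interpolate_vertices(key_a, key_b, t):
    """Linearly interpolate two key poses of a mesh.
    key_a, key_b: (N, 3) arrays of vertex positions; t in [0, 1]."""
    return (1.0 - t) * key_a + t * key_b

# Example: generate 30 in-between frames for rendering.
key_a = np.zeros((100, 3))   # placeholder key pose A
key_b = np.ones((100, 3))    # placeholder key pose B
frames = [interpolate_vertices(key_a, key_b, i / 29.0) for i in range(30)]
```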


깊이맵 센서를 이용한 3D캐릭터 가상공간 내비게이션 동작 합성 및 제어 방법 (3D Character Motion Synthesis and Control Method for Navigating Virtual Environment Using Depth Sensor)

  • 성만규
    • 한국멀티미디어학회논문지 / Vol. 15, No. 6 / pp. 827-836 / 2012
  • Since the successful debut of Kinect, many interactive contents have been produced that use this sensor to control the motion of a 3D character serving as the user's avatar. However, due to the nature of Kinect, the user must face the sensor directly, and the usable motions are limited to those that can be performed in place. This drawback is the fundamental reason the sensor cannot support navigation of virtual environments, one of the most important requirements in games. This paper proposes a new method to resolve this drawback. The method consists of two steps. In the first step, gesture recognition of a walking-in-place motion is performed to detect the user's intention to navigate. Once the navigation intention is detected, the second step automatically splits the current walking-in-place motion into upper-body and lower-body motions, modifies pre-recorded lower-body motion capture data to reflect the current character speed, and then seamlessly replaces the separated original lower-body motion with it. With the proposed algorithm, the user's upper-body motion is reflected as-is through the Kinect sensor while the lower-body motion is replaced with an actual walking motion from motion capture data, so the 3D character controlled by the user can navigate the virtual environment naturally.
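
The splicing step can be sketched as follows: the pose is partitioned into upper-body joints (taken from the sensor) and lower-body joints (taken from a speed-adjusted walking clip). Joint names and data layout here are assumptions for illustration, not the paper's code:

```python
# Assumed joint partition for a simple skeleton (illustrative only).
UPPER = ["spine", "neck", "head",
         "l_shoulder", "l_elbow", "r_shoulder", "r_elbow"]
LOWER = ["l_hip", "l_knee", "l_ankle", "r_hip", "r_knee", "r_ankle"]

def splice_pose(kinect_pose, mocap_walk, frame_time,
                char_speed, clip_speed, clip_len):
    """Combine the user's upper-body pose (from the depth sensor) with
    a lower-body pose from a pre-recorded walking clip. The clip index
    is scaled by the ratio of character speed to the clip's original
    speed, so faster movement plays the walk cycle faster."""
    # Speed-adjusted, cyclic frame index into the mocap walking clip.
    phase = (frame_time * char_speed / clip_speed) % clip_len
    lower_pose = mocap_walk[int(phase)]

    pose = {}
    for joint in UPPER:
        pose[joint] = kinect_pose[joint]   # user's own upper body
    for joint in LOWER:
        pose[joint] = lower_pose[joint]    # mocap lower body
    return pose
```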

Space-Time Warp Curve for Synthesizing Multi-character Motions

  • Sung, Mankyu;Choi, Gyu Sang
    • ETRI Journal / Vol. 39, No. 4 / pp. 493-501 / 2017
  • This paper introduces a new motion-synthesis technique for animating multiple characters. At a high level, we introduce a hub-sub-control-point scheme that automatically generates many different spline curves from a user scribble. Each spline curve then becomes a trajectory along which a 3D character moves. Based on the given curves, our algorithm synthesizes motions from a cyclic motion. In this process, space-time warp curves, time-warp curves embedded in the 3D environment, control the speed of the motions. Since a space-time warp curve represents a trajectory over the time domain, it enables us to verify whether a trajectory causes any collisions between characters by simply checking whether two space-time warp curves intersect. In addition, space-time warp curves can be edited at run time to change the speed of the characters. Several experiments demonstrate that the proposed algorithm can efficiently synthesize a group of character motions. Our method creates collision-avoiding trajectories ten times faster than manual creation.
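
The collision test this enables can be approximated by sampling: two characters collide when their positions come within some radius at the same time. A minimal sketch using sampled (x, y, t) polylines rather than the paper's spline representation:

```python
import numpy as np

def trajectories_collide(traj_a, traj_b, radius):
    """traj_a, traj_b: (T, 3) arrays of (x, y, t) samples describing a
    character's 2D position over the time domain. The characters
    collide if their positions come within `radius` at the same time."""
    # Resample trajectory B at A's time stamps, then compare positions.
    bx = np.interp(traj_a[:, 2], traj_b[:, 2], traj_b[:, 0])
    by = np.interp(traj_a[:, 2], traj_b[:, 2], traj_b[:, 1])
    dists = np.hypot(traj_a[:, 0] - bx, traj_a[:, 1] - by)
    return bool(np.any(dists < radius))
```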

Fast Motion Synthesis of Quadrupedal Animals Using a Minimum Amount of Motion Capture Data

  • Sung, Mankyu
    • ETRI Journal / Vol. 35, No. 6 / pp. 1029-1037 / 2013
  • This paper introduces a novel and fast method for synthesizing 3D motions of quadrupedal animals that uses only a small set of motion capture data. Unlike human motions, animal motions are relatively difficult to capture. It is also a challenge to synthesize continuously changing animal motions in real time because animals have various gait types according to their speed. The algorithm proposed herein, however, is able to synthesize continuously varying motions with proper limb configuration by using only a single cyclic animal motion per gait type, based on the biologically derived Froude number. During the synthesis process, each gait type is automatically determined by its speed parameter, and the transition motions, which have not been entered as input, are synthesized accordingly by an optimized asynchronous motion blending technique. Given the user's control input, the motion path and spinal joints for turning are adjusted first, and the motion is then stitched at any speed with proper transition motions to synthesize a long stream of motions.
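
Gait selection by speed can be sketched directly from the Froude number Fr = v²/(gL), where L is the leg length. The thresholds below are illustrative defaults drawn from the biomechanics literature, not the paper's values:

```python
G = 9.81  # gravitational acceleration (m/s^2)

def froude_number(speed, leg_length):
    """Dimensionless Froude number Fr = v^2 / (g * L)."""
    return speed ** 2 / (G * leg_length)

def select_gait(speed, leg_length, walk_max=0.5, trot_max=2.5):
    """Pick a gait type from the speed parameter. Biomechanics
    literature places the walk-to-trot transition near Fr ~ 0.5 and
    trot-to-gallop near Fr ~ 2.5 for many quadrupeds; these defaults
    are illustrative."""
    fr = froude_number(speed, leg_length)
    if fr < walk_max:
        return "walk"
    elif fr < trot_max:
        return "trot"
    return "gallop"
```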

Technology Trends for Motion Synthesis and Control of 3D Character

  • Choi, Jong-In
    • 한국컴퓨터정보학회논문지 / Vol. 24, No. 4 / pp. 19-26 / 2019
  • In this study, we review techniques for generating and controlling the motion of 3D character animation and discuss the direction of technology development. Character animation has developed along two lines: data-based methods and physics-based methods. Keyframe-based animation generation became practical as hardware advanced and motion capture devices came into use, and various techniques for effectively editing motion data have appeared. At the same time, physics-based animation techniques have emerged, which generate the motion of the character realistically through physically optimized numerical computation. Recently, animation techniques using machine learning have shown new possibilities for creating characters that the user can control in real time and are expected to develop further.

3D 아바타 동작의 선택 제어를 통한 감정 표현 (Emotional Expression through the Selection Control of Gestures of a 3D Avatar)

  • 이지혜;진영훈;채영호
    • 한국CDE학회논문집 / Vol. 19, No. 4 / pp. 443-454 / 2014
  • In this paper, an intuitive method for expressing the emotions of a 3D avatar is presented. Through selection control of the avatar's gestures, communication that is easier and more intuitive than emoticons becomes possible. Twelve avatar emotions are classified into positive emotions such as cheering, being impressed, joy, welcoming, fun, and pleasure, and negative emotions of anger, jealousy, wrath, frustration, sadness, and loneliness. Combinations with lower-body motions are used to represent twelve additional emotions: amusement, joyousness, surprise, enthusiasm, gladness, excitement, sulking, discomfort, irritation, embarrassment, anxiety, and sorrow. To obtain realistic human postures, motion capture data in BVH format are used, and the BVH data are synthesized by applying the proposed emotional expression rules for the 3D avatar.
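
The selection control reduces to a lookup from an emotion label to a gesture clip, optionally paired with a lower-body clip to form the additional emotions. A minimal sketch with hypothetical clip names (not from the paper):

```python
# Hypothetical mapping from emotion labels to BVH gesture clips.
POSITIVE = {"cheering": "cheer.bvh", "joy": "joy.bvh",
            "welcoming": "welcome.bvh"}
NEGATIVE = {"anger": "anger.bvh", "sadness": "sad.bvh",
            "loneliness": "lonely.bvh"}

def select_gesture(emotion, lower_body_clip=None):
    """Return the BVH clip(s) for an emotion; pairing an upper-body
    gesture with a lower-body clip yields the additional emotions."""
    clip = POSITIVE.get(emotion) or NEGATIVE.get(emotion)
    if clip is None:
        raise KeyError(f"unknown emotion: {emotion}")
    return (clip, lower_body_clip) if lower_body_clip else (clip,)
```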

Effects of spatial variability of earthquake ground motion in cable-stayed bridges

  • Ferreira, Miguel P.;Negrao, Joao H.
    • Structural Engineering and Mechanics / Vol. 23, No. 3 / pp. 233-247 / 2006
  • Most codes of practice state that, for large in-plane structures, it is necessary to account for the spatial variability of earthquake ground motion. Essentially three effects contribute to this variation: (i) the wave passage effect, due to the finite propagation velocity; (ii) the incoherence effect, due to differences in the superposition of waves; and (iii) local site amplification, due to spatial variation in geological conditions. This paper discusses the procedures to be undertaken in the time-domain analysis of a cable-stayed bridge under spatially variable earthquake ground motion. The artificial synthesis of correlated displacement series that simulate the earthquake load is discussed first. Next, the 3D model of the International Guadiana Bridge used for the seismic analyses is described. A comparison of the effects produced by seismic waves with different apparent propagation velocities and different geological conditions is undertaken. The results show that the differences between analyses with and without spatial variability of earthquake ground motion can be important for some displacements and internal forces, especially those influenced by symmetric modes.
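
Of the three effects, the wave passage effect is the easiest to illustrate: each support receives the same record delayed by its distance divided by the apparent propagation velocity. A minimal sketch assuming a uniformly sampled record (incoherence and site effects are omitted):

```python
import numpy as np

def delayed_ground_motions(accel, dt, support_positions, v_app):
    """Apply the wave passage effect to a ground motion record.
    accel: 1D time series sampled every dt seconds.
    support_positions: distances (m) of each support along the bridge.
    v_app: apparent wave propagation velocity (m/s).
    Returns one delayed copy of the record per support."""
    records = []
    for x in support_positions:
        shift = int(round((x / v_app) / dt))  # delay in samples
        delayed = np.concatenate([np.zeros(shift), accel])[:len(accel)]
        records.append(delayed)
    return np.array(records)
```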

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 2, No. 2 / pp. 120-133 / 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create a realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation. In this paper, we deal with these two problems. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation can robustly estimate a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. It is possible to recover the head pose regardless of light variations and self-occlusion by updating the template dynamically. In the facial expression synthesis phase, the variations of the major facial feature points of the face images are tracked by using optical flow, and the variations are retargeted to the 3D face model. At the same time, we exploit an RBF (Radial Basis Function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
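
The RBF deformation step propagates the displacements measured at the major feature points smoothly to the surrounding vertices. A minimal sketch using SciPy's RBFInterpolator, which is an assumed implementation choice, not the paper's:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def deform_region(feature_pts, feature_disps, region_vertices):
    """Propagate tracked feature-point displacements to the local area
    of the face model via RBF interpolation.
    feature_pts: (P, 3) rest positions of tracked feature points.
    feature_disps: (P, 3) displacements measured by optical flow.
    region_vertices: (V, 3) vertices of the local area to deform."""
    rbf = RBFInterpolator(feature_pts, feature_disps,
                          kernel="thin_plate_spline")
    return region_vertices + rbf(region_vertices)
```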

임의 카메라 구조에서의 영상 합성 (View synthesis in uncalibrated images)

  • 강지현;김동현;손광훈
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2006년도 하계종합학술대회 / pp. 437-438 / 2006
  • Virtual view synthesis, which exploits the motion parallax cue, is essential for 3DTV systems. In this paper, we propose a multi-step view synthesis algorithm to efficiently reconstruct an arbitrary view from a limited number of known views of a 3D scene. We describe an efficient image rectification procedure which guarantees that the interpolation process produces valid views. This rectification method can deal with all possible camera motions. The idea consists of using a polar parameterization of the image around the epipole. Then, to generate intermediate views, we use an efficient dense disparity estimation algorithm that considers the features of stereo image pairs. Its main concepts are based on region-dividing bidirectional pixel matching. The estimated disparities are used to synthesize intermediate views of the stereo images. Computer simulations show the results of the proposed algorithm.
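
Given a dense disparity map for a rectified pair, an intermediate view at position α between the cameras can be formed by shifting each pixel by a fraction of its disparity. A minimal forward-warping sketch (hole filling and the polar rectification step are omitted):

```python
import numpy as np

def intermediate_view(left, disparity, alpha):
    """Forward-warp the left image toward the right by alpha in [0, 1]
    using the per-pixel disparity (in pixels). alpha=0 returns the
    left view; alpha=1 approximates the right view. Holes stay zero."""
    h, w = disparity.shape
    out = np.zeros_like(left)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = (xs - alpha * disparity).round().astype(int)  # shifted column
    valid = (xt >= 0) & (xt < w)
    out[ys[valid], xt[valid]] = left[ys[valid], xs[valid]]
    return out
```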
