• Title/Summary/Keyword: 모션캡쳐 (motion capture)

Search Results: 177

Interactive Realtime Facial Animation with Motion Data (모션 데이터를 사용한 대화식 실시간 얼굴 애니메이션)

  • Kim, Sung-Ho (김성호)
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.4
    • /
    • pp.569-578
    • /
    • 2003
  • This paper presents a method in which the user produces real-time facial animation by navigating a space of facial expressions built from a large number of captured expressions. The core of the method is how to define a distance between facial expressions, how to use that distance to lay the expressions out in a suitable intuitive space, and a user interface for generating real-time expression animation within that space. We created the search space from about 2,400 captured facial expression frames; as the user travels freely through the space, the expressions along the path are displayed in sequence. Laying the roughly 2,400 captured expressions out visually requires the distance between every pair of frames: Floyd's algorithm yields the all-pairs shortest paths, from which manifold distances are obtained. The frames are then placed in a 2D intuitive space by multidimensional scaling on these manifold distances, preserving the original inter-frame distances as far as possible. A major advantage of the method is that the user can navigate the intuitive space freely and without restriction, since there are always expression frames along any path to display. The easy-to-use interface also makes it efficient to preview and regenerate, in real time and as many times as needed, exactly the expression animation the user wants. (A minimal sketch of the distance-and-MDS pipeline follows this entry.)

  • PDF
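The pipeline this abstract describes (pairwise distances, then Floyd's all-pairs shortest paths as manifold distances, then multidimensional scaling into 2D) is essentially an Isomap-style embedding. Below is a minimal sketch of that pipeline, assuming each captured frame is available as a flat feature vector (e.g., stacked marker coordinates); the function name and all sizes are illustrative, not the paper's code.

```python
import numpy as np

def manifold_mds_embedding(frames, k=8, dim=2):
    """Pairwise distances -> k-NN graph -> Floyd's all-pairs shortest
    paths (manifold distances) -> classical MDS into `dim` dimensions."""
    n = len(frames)
    d = np.linalg.norm(frames[:, None, :] - frames[None, :, :], axis=-1)

    # Keep only each frame's k nearest neighbours; all other edges start
    # at infinity, so shortest paths must follow chains of similar frames.
    g = np.full((n, n), np.inf)
    for i in range(n):
        nn = np.argsort(d[i])[:k + 1]        # the k+1 closest, incl. i itself
        g[i, nn] = d[i, nn]
    g = np.minimum(g, g.T)                   # symmetrise the graph

    # Floyd's algorithm (assumes k is large enough to connect the graph).
    for m in range(n):
        g = np.minimum(g, g[:, m:m + 1] + g[m:m + 1, :])

    # Classical MDS on the squared manifold distances.
    c = np.eye(n) - np.ones((n, n)) / n      # double-centering matrix
    b = -0.5 * c @ (g ** 2) @ c
    w, v = np.linalg.eigh(b)
    top = np.argsort(w)[::-1][:dim]
    return v[:, top] * np.sqrt(np.maximum(w[top], 0.0))

# Illustrative call: 240 frames of 90 stacked marker coordinates each.
coords_2d = manifold_mds_embedding(np.random.rand(240, 90))
```

Floyd's algorithm is O(n³), so this plain-NumPy version is only practical well below the paper's 2,400 frames; a production version would use a sparse shortest-path routine.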

Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.7 no.2
    • /
    • pp.117-124
    • /
    • 2007
  • This paper describes a methodology that enables animators to create facial expression animations and control facial expressions in real time by reusing motion capture data. To achieve this, we define a facial expression state representation based on the facial motion data. By distributing the expressions in an intuitive space with the LLE algorithm, animations can be created, and expressions controlled, in real time from the expression space through a user interface. Approximately 2,400 facial expression frames are used to generate the expression space; by navigating this space, projected onto a 2D plane, the user selects a series of expressions to animate or control a 3D avatar in real time. Distributing the roughly 2,400 expression frames in the intuitive space requires a representation of each frame's state; for this we use the distance matrix of distances between pairs of facial feature points. The LLE algorithm then projects these data onto the 2D plane for visualization. Animators control expressions and create animations through the system's user interface, and the paper evaluates the results experimentally.
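As a sketch of the projection step, the following uses scikit-learn's LocallyLinearEmbedding and represents each frame's state, as the abstract describes, by the distances between pairs of facial feature points; the marker data here are random stand-ins, and the neighbour count is an assumption.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
points = rng.random((2400, 30, 3))   # stand-in: 30 feature points per frame

# State of each frame = distances between all pairs of feature points
# (the upper triangle of the 30x30 distance matrix -> 435 values).
iu, ju = np.triu_indices(30, k=1)
pair_dists = np.linalg.norm(points[:, :, None, :] - points[:, None, :, :],
                            axis=-1)
states = pair_dists[:, iu, ju]       # shape (2400, 435)

# Project the ~2,400 expression states onto a 2D plane with LLE.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
plane = lle.fit_transform(states)    # shape (2400, 2)
```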

Real-Time Joint Animation Production and Expression System using Deep Learning Model and Kinect Camera (딥러닝 모델과 Kinect 카메라를 이용한 실시간 관절 애니메이션 제작 및 표출 시스템 구축에 관한 연구)

  • Kim, Sang-Joon;Lee, Yu-Jin;Park, Goo-man
    • Journal of Broadcast Engineering
    • /
    • v.26 no.3
    • /
    • pp.269-282
    • /
    • 2021
  • As the distribution of 3D content such as augmented and virtual reality grows, real-time computer animation technology is becoming increasingly important. The computer animation process, however, still relies mostly on manual work or marker-based motion capture, which requires experienced professionals and a great deal of time to obtain realistic results. To address this, animation production systems and algorithms based on deep learning models and sensors have recently emerged. In this paper, we study four ways of reproducing natural human movement in an animation production system based on a deep learning model and a Kinect camera, each chosen for its environmental characteristics and accuracy: the first uses a Kinect camera alone; the second, a Kinect camera with a calibration algorithm; the third, a deep learning model alone; and the fourth, a deep learning model combined with the Kinect. Experiments showed that the fourth method, using the deep learning model and the Kinect simultaneously, gave the best results.
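The abstract does not spell out how the fourth method combines the two sources, so the following is only one plausible fusion rule, a per-joint confidence-weighted average of the two skeletons; the function and confidence values are hypothetical.

```python
import numpy as np

def fuse_skeletons(kinect_joints, model_joints, kinect_conf, model_conf):
    """Confidence-weighted blend of two (J, 3) joint-position estimates.

    One plausible way to combine a Kinect skeleton with a deep-learning
    pose estimate, per joint; the paper's exact rule is not published.
    """
    w_k = kinect_conf[:, None]
    w_m = model_conf[:, None]
    return (w_k * kinect_joints + w_m * model_joints) / (w_k + w_m + 1e-8)

# 25 joints, as in the Kinect v2 skeleton; random stand-in data.
fused = fuse_skeletons(np.random.rand(25, 3), np.random.rand(25, 3),
                       np.full(25, 0.7), np.full(25, 0.9))
```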

Deep Learning-Based Motion Reconstruction Using Tracker Sensors (트래커를 활용한 딥러닝 기반 실시간 전신 동작 복원)

  • Hyunseok Kim;Kyungwon Kang;Gangrae Park;Taesoo Kwon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.29 no.5
    • /
    • pp.11-20
    • /
    • 2023
  • In this paper, we propose a novel deep learning-based motion reconstruction approach that facilitates the generation of full-body motions, including finger motions, while also enabling the online adjustment of motion generation delays. The proposed method combines the Vive Tracker with a deep learning method to achieve more accurate motion reconstruction while effectively mitigating foot skating issues through the use of an Inverse Kinematics (IK) solver. The proposed method utilizes a trained AutoEncoder to reconstruct character body motions using tracker data in real-time while offering the flexibility to adjust motion generation delays as needed. To generate hand motions suitable for the reconstructed body motion, we employ a Fully Connected Network (FCN). By combining the reconstructed body motion from the AutoEncoder with the hand motions generated by the FCN, we can generate full-body motions of characters that include hand movements. In order to alleviate foot skating issues in motions generated by deep learning-based methods, we use an IK solver. By setting the trackers located near the character's feet as end-effectors for the IK solver, our method precisely controls and corrects the character's foot movements, thereby enhancing the overall accuracy of the generated motions. Through experiments, we validate the accuracy of motion generation in the proposed deep learning-based motion reconstruction scheme, as well as the ability to adjust latency based on user input. Additionally, we assess the correction performance by comparing motions with the IK solver applied to those without it, focusing particularly on how it addresses the foot skating issue in the generated full-body motions.
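Below is a minimal PyTorch sketch of the AutoEncoder stage only, assuming six Vive trackers, a 30-frame input window, and a 69-dimensional pose vector; every dimension and layer size is an assumption rather than the paper's architecture, and the FCN hand-motion stage and IK foot correction are omitted.

```python
import torch
import torch.nn as nn

class TrackerAutoEncoder(nn.Module):
    """Toy stand-in for the paper's AutoEncoder: a window of tracker
    signals in, a full-body pose out. All sizes here are assumptions."""
    def __init__(self, n_trackers=6, window=30, pose_dim=69):
        super().__init__()
        d_in = n_trackers * 9 * window      # pos(3) + 6D rotation per tracker
        self.encoder = nn.Sequential(
            nn.Linear(d_in, 512), nn.ReLU(), nn.Linear(512, 64))
        self.decoder = nn.Sequential(
            nn.Linear(64, 512), nn.ReLU(), nn.Linear(512, pose_dim))

    def forward(self, x):                   # x: (batch, d_in)
        return self.decoder(self.encoder(x))

model = TrackerAutoEncoder()
trackers = torch.randn(1, 6 * 9 * 30)       # one 30-frame tracker window
pose = model(trackers)                      # (1, 69) joint parameters
```

Adjusting the window length is one natural lever for the delay/quality trade-off the paper describes: a longer window gives the network more context at the cost of added latency.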

Development of a Squat Angle Measurement System using an Inertial Sensor (관성 센서기반 스쿼트 각도 측정 융합 시스템 개발)

  • Joo, Hyo-Sung;Woo, Ji-Hwan
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.10
    • /
    • pp.355-361
    • /
    • 2020
  • The squat is an exercise that effectively strengthens the lower body and can be performed almost anywhere, including at home. However, injuries from incorrect form or excessive joint angles occur frequently. In this study, we developed a single-sensor squat angle measurement system that reports the squat angle so the user can maintain correct form during the exercise. A sensor module containing an accelerometer and a gyroscope is attached to the user's thigh, and the squat angle is computed with a complementary filter, which combines the two sensors so that each compensates for the other's weaknesses. The calculated squat angle correlated well with the angle measured by a goniometer, and we evaluated how the filter coefficient affects accuracy.
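The complementary filter is a standard construction; a minimal sketch follows, assuming the thigh pitch angle is estimated from the accelerometer and blended with the integrated gyro rate. The coefficient alpha is the one whose effect on accuracy the paper evaluates; the sample data are stand-ins.

```python
import math

def complementary_filter(acc_samples, gyro_rates, dt, alpha=0.98):
    """Fuse accelerometer tilt (noisy but drift-free) with the integrated
    gyro rate (smooth but drifting) into one thigh-pitch angle stream."""
    angle = 0.0
    angles = []
    for (ax, ay, az), omega in zip(acc_samples, gyro_rates):
        # Tilt from gravity direction, in degrees.
        acc_angle = math.degrees(math.atan2(ax, math.hypot(ay, az)))
        # Blend: trust the gyro short-term, the accelerometer long-term.
        angle = alpha * (angle + omega * dt) + (1 - alpha) * acc_angle
        angles.append(angle)
    return angles

# 100 Hz stand-in samples: accelerometer in g, gyro pitch rate in deg/s.
angles = complementary_filter([(0.5, 0.0, 0.87)] * 200, [45.0] * 200, dt=0.01)
```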

A Reading of Christian Volckman's Animated Film <Renaissance> (크리스티앙 볼크만(Christian Volckman)의 장편 애니메이션 <르네상스(Renaissance)>의 해석에 대한 방법적 시도)

  • Han, Sang-Jung
    • Cartoon and Animation Studies
    • /
    • s.13
    • /
    • pp.199-210
    • /
    • 2008
  • We attempt here an aesthetic analysis of the French animated film "Renaissance". Animated film is a genre of visual narrative: it can be treated neither simply as a chain of images without a story, nor as a story without images. In this study we seek a model for analyzing an animated film. We begin by explaining the film's context and summarizing its story. Next, we examine the characteristics produced by its imagery (black and white) and its techniques (motion capture), considered as expressive, or formal, traits. Third, we analyze the genre codes and implicit meanings presented in the film. Having addressed all of these considerations, we synthesize them under an aesthetic principle, figured here as an accord between the formal characteristics and the content they treat. The film, however, does not achieve this concordance. Even so, the technical advances and the visual pleasure the film affords are not negligible, even if its aesthetic aim ends in failure. Our study is somewhat too broad to analyze the film's details; we leave that weakness to future work, content that this study can serve, if somewhat roughly, as a way of analyzing an animated film.

  • PDF

Capture of Foot Motion for Real-time Virtual Wearing by Stereo Cameras (스테레오 카메라로부터 실시간 가상 착용을 위한 발동작 검출)

  • Jung, Da-Un;Yun, Yong-In;Choi, Jong-Soo
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.11
    • /
    • pp.1575-1591
    • /
    • 2008
  • In this paper, we propose a new method for capturing foot motion so that a 3D virtual foot model can be overlaid on the user's foot in real time from stereo cameras. Overlaying the virtual model at the exact position of the foot requires detecting the foot's joints and tracking their motion continuously, and accurately registering the virtual model to the user's foot during complicated motion is the central problem in this technology. We propose a dynamic registration scheme based on two types of marker groups: plane information from the ground relates the virtual model to the user's foot and yields the foot's pose and position, while the foot's rotation is predicted from two marker groups attached along the central axis of the instep. We implemented the proposed system and evaluated the accuracy of the method through various experiments. (A generic registration sketch follows this entry.)

  • PDF
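The paper's marker-group registration is more involved than the abstract reveals, so the sketch below shows only the generic core of such a step: a Kabsch-style estimate of the rigid transform mapping virtual-model points onto matched marker positions triangulated from the stereo pair. Function and variable names are illustrative.

```python
import numpy as np

def rigid_register(model_pts, marker_pts):
    """Least-squares rigid transform (R, t) mapping virtual-model points
    onto detected marker positions (Kabsch algorithm). A generic stand-in
    for the paper's marker-group registration."""
    mc, kc = model_pts.mean(0), marker_pts.mean(0)
    h = (model_pts - mc).T @ (marker_pts - kc)   # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = kc - r @ mc
    return r, t

# Three or more matched markers triangulated from the stereo cameras.
model = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
r, t = rigid_register(model, model + np.array([0.1, 0.0, 0.0]))
```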

Biomechanical Analysis of Arm Motion during Steering Using Motion Analysis Technique (동작분석기법을 이용한 조향동작에 대한 팔의 생체역학적 특성분석)

  • Kim, Young-Hwan;Tak, Tea-Oh
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.35 no.11
    • /
    • pp.1391-1398
    • /
    • 2011
  • A biomechanical analysis of arm motion during steering was performed using a motion analysis technique. Three-dimensional position data for each part of the arm are fed into an interactive model combining a musculoskeletal arm model with the mechanical steering system, and joint angles and torques are calculated by inverse kinematic and inverse dynamic analyses, respectively. The analysis shows that elbow pronation/supination, wrist flexion/extension, shoulder adduction/abduction, and shoulder flexion/extension all have significant magnitudes. A sensitivity analysis of arm joint motion with respect to seating posture and steering wheel configuration was carried out to investigate, qualitatively, how the seating posture and the driver's seat configuration influence steering behavior.
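As a sketch of the inverse-kinematic step, the following computes one of the reported quantities, the elbow flexion angle, directly from three 3D marker positions; the marker layout and values are assumptions.

```python
import numpy as np

def elbow_flexion(shoulder, elbow, wrist):
    """Elbow flexion angle (degrees) from three 3D marker positions:
    the angle between the upper-arm and forearm segment vectors, which
    is the basic step behind marker-based inverse kinematics."""
    upper = shoulder - elbow
    fore = wrist - elbow
    cosang = np.dot(upper, fore) / (np.linalg.norm(upper) *
                                    np.linalg.norm(fore))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Stand-in marker positions in metres: a roughly right-angled elbow.
print(elbow_flexion(np.array([0.0, 0.3, 0.0]),
                    np.array([0.0, 0.0, 0.0]),
                    np.array([0.3, 0.0, 0.0])))   # ~90 degrees
```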

Model-based Body Motion Tracking of a Walking Human (모델 기반의 보행자 신체 추적 기법)

  • Lee, Woo-Ram;Ko, Han-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.6
    • /
    • pp.75-83
    • /
    • 2007
  • A model-based approach to tracking the limbs of a walking human subject is proposed in this paper. The tracking process begins by building a database of conditional probabilities of motions between the limbs of a walking subject; with a suitable amount of video footage from various human subjects in the database, a probabilistic model characterizing the relationships between limb motions is developed. Motion tracking of a test subject begins by identifying and tracking limbs in the surveillance video using edge and silhouette detection. When any tracked limb becomes occluded, the approach uses the probabilistic motion model together with the minimum-cost edge and silhouette tracking model to determine the motion of the occluded limb. The method has shown promising results for tracking occluded limbs in validation tests.
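A toy sketch of the idea behind the conditional-probability database: coarse motion labels for limb pairs are counted from training footage, and the most probable motion is looked up when a limb is occluded. The label scheme is hypothetical; the paper's model is richer than a single lookup.

```python
from collections import Counter, defaultdict

# Hypothetical database: counts of co-occurring coarse motion labels,
# approximating P(occluded limb motion | visible limb motion).
counts = defaultdict(Counter)
training = [("left_leg:swing", "right_leg:stance"),
            ("left_leg:swing", "right_leg:stance"),
            ("left_leg:stance", "right_leg:swing")]
for visible, occluded in training:
    counts[visible][occluded] += 1

def predict_occluded(visible_label):
    """Most likely motion for the occluded limb given a visible one."""
    return counts[visible_label].most_common(1)[0][0]

print(predict_occluded("left_leg:swing"))   # -> "right_leg:stance"
```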

Development and Application of Automatic Motion Generator for Game Characters (게임 캐릭터를 위한 자동동작생성기의 개발과 응용)

  • Ok, Soo-Yol;Kang, Young-Min
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.8
    • /
    • pp.1363-1369
    • /
    • 2008
  • As the game and character animation industries grow, techniques for reproducing realistic character behavior are required in a widening range of fields, and intensive research has explored various methods for realistic character animation. The most common approaches involve tedious manual input, physically based simulation grounded in dynamics, or measurement of actors' movements with input devices such as a motion capture system. Each approach has its own advantages, but all share a common disadvantage: character control. To give users convenient control, realistic animation must be generated from high-level parameters, and modifications should likewise be made through high-level parameters. In this paper we propose techniques for building an automated character animation tool that operates on high-level parameters, and we introduce techniques for developing actual games with this tool.
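As a sketch of what a high-level-parameter interface can look like, the toy generator below maps two intuitive parameters, speed and stride, to per-frame joint angles for a walk cycle. It illustrates the kind of control the abstract describes, not the paper's actual generator; all names and constants are hypothetical.

```python
import math

def walk_cycle(speed=1.0, stride=0.6, frames=60):
    """Hypothetical high-level interface: two intuitive parameters in,
    per-frame hip and knee angles (degrees) out."""
    cycle = []
    for f in range(frames):
        phase = 2 * math.pi * speed * f / frames
        hip = 25 * stride * math.sin(phase)                    # leg swing
        knee = max(0.0, 50 * stride * math.sin(phase - math.pi / 3))
        cycle.append({"hip": hip, "knee": knee})
    return cycle

# Tweak a parameter, regenerate, and preview: the modification loop
# stays at the level of speed and stride rather than raw keyframes.
poses = walk_cycle(speed=1.2, stride=0.8)
```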