• Title/Summary/Keyword: motion capture (모션 캡처)


Comparative Analysis of Linear and Nonlinear Projection Techniques for the Best Visualization of Facial Expression Data (얼굴 표정 데이터의 최적의 가시화를 위한 선형 및 비선형 투영 기법의 비교 분석)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.9
    • /
    • pp.97-104
    • /
    • 2009
  • This paper compares and analyzes methodologies for finding the optimal technique for projecting high-dimensional facial motion capture data onto a plane. Per-frame facial expression data are projected with PCA, a linear technique, and with the nonlinear techniques Isomap, MDS, CCA, Sammon's Mapping, and LLE; we then examine how the data are distributed in the resulting low-dimensional space and analyze the outcome. To this end, we first calculate the distances between the original high-dimensional facial expression frames, and then distribute the frames in a two-dimensional plane so that each projection technique preserves those inter-frame distance relationships as far as possible. By comparing the facial expression data distributed in two-dimensional space with the original data, we identify the projection technique that best maintains the distance relationships between frames. Finally, the paper compares and analyzes the linear and nonlinear techniques for projecting high-dimensional facial expression data into low-dimensional space and determines the optimal one.
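The linear branch of this comparison (PCA to two dimensions) and the distance-preservation check can be sketched as follows. This is a minimal illustration, not the paper's code: the data, sizes, and function names are made up.

```python
import numpy as np

def pca_project(frames, dim=2):
    """Project high-dimensional expression frames to `dim` dimensions via PCA."""
    X = frames - frames.mean(axis=0)           # center the data
    # SVD of the centered data gives the principal axes in the rows of Vt
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:dim].T                      # coordinates in the top `dim` PCs

def distance_preservation_error(high, low):
    """Mean absolute difference between pairwise distances before/after projection."""
    def pdist(X):
        d = X[:, None, :] - X[None, :, :]
        return np.sqrt((d ** 2).sum(-1))
    return np.abs(pdist(high) - pdist(low)).mean()

# toy stand-in for per-frame facial expression vectors (60 frames, 90-D)
rng = np.random.default_rng(0)
frames = rng.normal(size=(60, 90))
proj = pca_project(frames, dim=2)
err = distance_preservation_error(frames, proj)
```

The same error measure could then be computed for each nonlinear technique's 2-D output, and the method with the smallest error taken as the best-preserving projection.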

Real-time Interactive Animation System for Low-Priced Motion Capture Sensors (저가형 모션 캡처 장비를 이용한 실시간 상호작용 애니메이션 시스템)

  • Kim, Jeongho;Kang, Daeun;Lee, Yoonsang;Kwon, Taesoo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.28 no.2
    • /
    • pp.29-41
    • /
    • 2022
  • In this paper, we introduce a novel real-time interactive animation system that uses real-time motion input from Kinect, a low-cost motion-sensing device. Our system generates interaction motions between the user character and a counterpart character in real time. While the user character's motion is generated by mimicking the user's input motion, the counterpart character's motion is decided as a reaction to the user avatar's motion. During a pre-processing step, our system analyzes the reference motion data and generates a mapping model in advance. At run time, the system first generates initial poses for the two characters and then modifies them to produce plausible interacting behavior. Our experimental results show plausible interaction animations: the user character performs a modified version of the user's input motion, and the counterpart character reacts properly against it. The proposed method will be useful for developing real-time interactive animation systems that provide a more immersive experience for users.

Application of Immersive Virtual Environment Through Virtual Avatar Based On Rigid-body Tracking (강체 추적 기반의 가상 아바타를 통한 몰입형 가상환경 응용)

  • MyeongSeok Park;Jinmo Kim
    • Journal of the Korea Computer Graphics Society
    • /
    • v.29 no.3
    • /
    • pp.69-77
    • /
    • 2023
  • This study proposes a rigid-body-tracking-based virtual avatar application method to increase social presence and provide diverse experiences to virtual reality (VR) users in an immersive virtual environment. The proposed method estimates the motion of a virtual avatar through inverse kinematics driven by real-time rigid-body tracking, based on marker-based motion capture. The aim is to design a highly immersive virtual environment using simple object manipulation in the real world. Science-experiment educational content was produced to test and analyze immersive virtual environment applications through virtual avatars. In addition, audiovisual education, full-body tracking, and the proposed rigid-body tracking method were compared and analyzed through a survey. In the proposed virtual environment, participants wore VR HMDs and completed a survey to assess the immersion and educational effects of virtual avatars performing experimental educational actions from the estimated motions. As a result, the rigid-body-tracking-based virtual avatar method induced higher immersion and educational effects than traditional audiovisual education. It was also confirmed that a sufficiently positive experience can be provided without much of the effort required for full-body tracking.

Implementation of Motion Analysis System based on Inertial Measurement Units for Rehabilitation Purposes (재활훈련을 위한 관성센서 기반 동작 분석 시스템 구현)

  • Kang, S.I.;Cho, J.S.;Lim, D.H.;Lee, J.S.;Kim, I.Y.
    • Journal of rehabilitation welfare engineering & assistive technology
    • /
    • v.7 no.2
    • /
    • pp.47-54
    • /
    • 2013
  • In this paper, we present an inertial-sensor-based motion capture system to measure and analyze whole-body movements. The system implements a wireless AHRS (attitude heading reference system) we developed, which fuses rate gyroscope, accelerometer, and magnetometer signals. Several AHRS modules mounted on segments of the patient's body provide quaternions representing each segment's orientation in space, and we performed 3D motion capture using the calculated quaternion data. We also propose a method for calculating the three-dimensional inter-segment joint angle, an important biomechanical measure for a variety of rehabilitation-related applications. To evaluate the performance of our AHRS module, the Vicon motion capture system, which offers millimeter resolution of 3D spatial displacements and orientations, was used as a reference; the evaluation yielded an RMSE of 2.56 degrees. The results suggest that our system can provide in-depth insight into the effectiveness, appropriate level of care, and feedback of the rehabilitation process by performing real-time limb or gait analysis during post-stroke recovery.
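The inter-segment joint angle from two AHRS quaternions can be sketched like this: take the relative rotation between the parent and child segment orientations and extract its rotation angle. This is a minimal illustration under the usual (w, x, y, z) Hamilton convention, not the paper's implementation.

```python
import math

def quat_conj(q):
    """Conjugate of a unit quaternion (w, x, y, z): inverts the rotation."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def joint_angle_deg(q_parent, q_child):
    """Angle of the relative rotation between two segment orientations."""
    w = quat_mul(quat_conj(q_parent), q_child)[0]   # scalar part of q_rel
    w = max(-1.0, min(1.0, w))                      # clamp for numerical safety
    return math.degrees(2.0 * math.acos(abs(w)))    # abs() picks the short angle

# identity orientation vs. a 90-degree rotation about the x-axis
q_id = (1.0, 0.0, 0.0, 0.0)
s = math.sin(math.pi / 4)
q_90x = (math.cos(math.pi / 4), s, 0.0, 0.0)
angle = joint_angle_deg(q_id, q_90x)                # ≈ 90 degrees
```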


Trajectory Rectification of Marker using Confidence Model (신뢰도 모델을 이용한 마커 궤적 재조정)

  • Ahn, Junghyun;Jang, Mijung;Wohn, Kwangyun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.8 no.3
    • /
    • pp.17-23
    • /
    • 2002
  • Motion capture systems are widely used nowadays in entertainment industries such as film, computer games, and broadcasting. Such a system consists of several high-resolution, high-speed CCD cameras and expensive frame-grabbing hardware for image acquisition. The KAIST VR laboratory has focused on low-cost systems for several years and has developed a LAN-based optical motion capture system. However, with a low-cost system, problems such as occlusion, noise, and swapping of marker trajectories can occur, and more labor-intensive post-processing is needed. In this paper, we propose a trajectory rectification algorithm based on a confidence model of the markers attached to the actor. The confidence model is based on a graph structure and consists of linkage, marker, and frame confidences. To reduce manual work in post-processing, we reconstruct the marker graph by maximizing the frame confidence.


Training Avatars Animated with Human Motion Data (인간 동작 데이타로 애니메이션되는 아바타의 학습)

  • Lee, Kang-Hoon;Lee, Je-Hee
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.4
    • /
    • pp.231-241
    • /
    • 2006
  • Creating controllable, responsive avatars is an important problem in computer games and virtual environments. Recently, large collections of motion capture data have been exploited for increased realism in avatar animation and control. Large motion sets have the advantage of accommodating a broad variety of natural human motion. However, when a motion set is large, the time required to identify an appropriate sequence of motions becomes the bottleneck for interactive avatar control. In this paper, we present a novel method for training avatar behaviors from unlabeled motion data in order to animate and control avatars at minimal runtime cost. Based on a machine learning technique called Q-learning, our training method allows the avatar to learn how to act in any given situation through trial-and-error interactions with a dynamic environment. We demonstrate the effectiveness of our approach through examples that include avatars interacting with each other and with the user.
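The trial-and-error training idea (tabular Q-learning) can be sketched with a toy example. The environment below is illustrative only, not the paper's avatar setting: an "avatar" on a 1-D track of five states learns to walk right toward a goal.

```python
import random

# Tabular Q-learning on a toy 1-D world: states 0..4, goal at state 4.
N_STATES, ACTIONS = 5, (-1, +1)                 # actions: move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2               # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection: explore sometimes, exploit otherwise
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # deterministic transition
        r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
        # standard Q-learning update toward the bootstrapped target
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# after training, the greedy policy in every non-terminal state is "move right"
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
```

The paper's setting replaces these toy states and actions with motion-data-derived states and motion-clip actions, but the update rule is the same.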

Phased Visualization of Facial Expressions Space using FCM Clustering (FCM 클러스터링을 이용한 표정공간의 단계적 가시화)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.8 no.2
    • /
    • pp.18-26
    • /
    • 2008
  • This paper presents a phased visualization method for a facial expression space that enables the user to control the facial expressions of 3D avatars by selecting a sequence of facial frames from the space. Our system creates a 2D facial expression space from approximately 2,400 facial expression frames, comprising a neutral expression and 11 motions. Facial expression control of 3D avatars is carried out in real time as users navigate through the expression space. Because the expression space must support phased control, from coarse expressions down to detailed ones, the system needs a phased visualization method; for this, the paper uses fuzzy clustering. Initially, the system creates 11 clusters from the space of 2,400 facial expressions, and each time the phase level increases, it doubles the number of clusters. Since a cluster center does not generally coincide with an actual expression in the space, we replace each cluster center with the expression closest to it. We had users control the phased facial expressions of a 3D avatar with the system and evaluated it based on the results.
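The clustering step, including snapping each cluster center to the nearest real expression frame, can be sketched as follows. This is a toy illustration under made-up data and sizes, not the paper's code; only the starting cluster count (11, doubled at each phase) comes from the abstract.

```python
import numpy as np

def fcm(X, k, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns k cluster centers for data X."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships sum to 1
    for _ in range(iters):
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]   # membership-weighted centers
        D = np.linalg.norm(X[:, None] - C[None], axis=2) + 1e-12
        U = 1.0 / (D ** (2 / (m - 1)))           # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return C

def snap_to_nearest_frame(centers, X):
    """Replace each center with the closest actual expression frame."""
    D = np.linalg.norm(X[:, None] - centers[None], axis=2)
    return X[D.argmin(axis=0)]

# toy 2-D expression space; the paper starts with 11 clusters and doubles
# the count at each phase (11, 22, 44, ...)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
centers = snap_to_nearest_frame(fcm(X, 11), X)
```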

A Study on Sensor-Based Upper Full-Body Motion Tracking on HoloLens

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.4
    • /
    • pp.39-46
    • /
    • 2021
  • In this paper, we propose a motion recognition method required at industrial sites in mixed reality. Industrial work requires movements (grasping, lifting, and carrying) across the entire upper body, from trunk movements to arm movements. Our approach uses sensors and wearable devices rather than vision-based equipment such as Kinect, and avoids heavy motion capture gear. We used two IMU sensors for trunk and shoulder movement and a Myo armband for arm movements. Real-time data from all four devices are fused, enabling motion recognition over the entire upper-body area. In our experiments, sensors were attached to actual clothing and objects were manipulated through synchronization; the synchronized method showed no errors in either large or small motions. Finally, performance evaluation gave an average of 50 frames for single-handed operation on the HoloLens and 60 frames for both-handed operation.

The Study about the Expression Method of Timing which Produce Movement in the Animation (애니메이션에서 움직임을 연출하는 타이밍 표현방법에 관한 연구)

  • Bang Woo-Song;Kim Soon-Gohn
    • Journal of Game and Entertainment
    • /
    • v.1 no.1
    • /
    • pp.55-62
    • /
    • 2005
  • The expression of movement is one of the important factors that build up an animation work. In animation it is determined entirely by the animator's experience, whereas in film it depends on data obtained from motion capture or from the movements of actors. Timing is one of the most important factors in expressing character movement; a proper understanding of the directing context and of timing expression makes the animation visually richer and is also the basic means of giving feeling to characters. In this study, we identify the basic principles of timing expression in animation, experiment with changes of timing according to camera angle, compare them, and present the most appropriate methods of timing expression.


Learning Multi-Character Competition in Markov Games (마르코프 게임 학습에 기초한 다수 캐릭터의 경쟁적 상호작용 애니메이션 합성)

  • Lee, Kang-Hoon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.15 no.2
    • /
    • pp.9-17
    • /
    • 2009
  • Animating multiple characters to compete with each other is an important problem in computer games and animation films. However, it remains difficult to simulate strategic competition among characters because of the inherently complex decision process needed to cope with the often unpredictable behavior of opponents. We apply a reinforcement learning method for Markov games to action models built from captured motion data. This enables two characters to perform globally optimal counter-strategies with respect to each other. We also extend this method to simulate competition between two teams, each of which can consist of an arbitrary number of characters. We demonstrate the usefulness of our approach through various competitive scenarios, including playing tag, keeping distance, and shooting.
