• Title/Summary/Keyword: Motion Capture System (모션캡쳐 시스템)

Real-Time Joint Animation Production and Expression System using Deep Learning Model and Kinect Camera (딥러닝 모델과 Kinect 카메라를 이용한 실시간 관절 애니메이션 제작 및 표출 시스템 구축에 관한 연구)

  • Kim, Sang-Joon;Lee, Yu-Jin;Park, Goo-man
    • Journal of Broadcast Engineering / v.26 no.3 / pp.269-282 / 2021
  • As the distribution of 3D content such as augmented reality and virtual reality increases, so does the importance of real-time computer animation technology. However, computer animation is still produced mostly by manual work or marker-based motion capture, which requires experienced professionals and a very long time to obtain realistic results. To address this, animation production systems and algorithms based on deep learning models and sensors have recently emerged. In this paper, we therefore study four methods of implementing natural human movement in a deep learning model and Kinect camera-based animation production system, each chosen considering its environmental characteristics and accuracy. The first method uses a Kinect camera alone; the second uses a Kinect camera with a calibration algorithm; the third uses a deep learning model alone; and the fourth uses a deep learning model together with the Kinect. Experiments showed that the fourth method, using the deep learning model and the Kinect simultaneously, gave the best results of the four.
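
The abstract does not specify how the fourth method combines the two skeleton sources, so the following is only a hypothetical sketch of one plausible fusion rule: a per-joint confidence-weighted average of the Kinect skeleton and the deep-learning estimate. The function names, weighting scheme, and data are assumptions for illustration.

```python
import numpy as np

def fuse_skeletons(kinect_joints, model_joints, kinect_conf, model_conf):
    """Blend two (J, 3) joint-position arrays using per-joint confidences (J,)."""
    w_kinect = kinect_conf / (kinect_conf + model_conf + 1e-8)
    return w_kinect[:, None] * kinect_joints + (1.0 - w_kinect)[:, None] * model_joints

# Toy usage with two joints: the second joint trusts the deep model more
kinect = np.array([[0.0, 1.0, 2.0], [0.5, 1.5, 2.5]])
model = np.array([[0.1, 1.1, 2.1], [0.4, 1.4, 2.4]])
print(fuse_skeletons(kinect, model, np.array([0.9, 0.2]), np.array([0.5, 0.8])))
```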

Development of a Squat Angle Measurement System using an Inertial Sensor (관성 센서기반 스쿼트 각도 측정 융합 시스템 개발)

  • Joo, Hyo-Sung;Woo, Ji-Hwan
    • Journal of the Korea Convergence Society / v.11 no.10 / pp.355-361 / 2020
  • The squat is an exercise that can effectively improve lower-body muscle strength and can be performed in a variety of places, including at home. However, injuries due to incorrect motion or excessive angles occur frequently. In this study, we developed a single-sensor-based measurement system that reports the squat angle so that the exercise can be performed with correct motion. The sensor module, which includes an acceleration sensor and a gyro sensor, is attached to the user's thigh, and the squat angle is calculated using a complementary filter that combines the two sensors so that their respective strengths compensate for each other's weaknesses. The calculated squat angle correlated well with the angle measured by a goniometer, and the influence of the complementary filter coefficient on accuracy was evaluated.
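
A minimal sketch of the complementary filter the abstract describes, assuming degree units, a fixed sample period, and a filter coefficient of 0.98; the coefficient is exactly the parameter whose influence on accuracy the paper evaluates, but the value and sample data here are illustrative, not the authors'.

```python
def complementary_squat_angle(acc_angles, gyro_rates, dt=0.01, alpha=0.98):
    """Fuse accelerometer tilt angles (deg) with gyro rates (deg/s):
    the integrated gyro term is weighted by alpha (high-pass), the
    accelerometer term by 1 - alpha (low-pass)."""
    angle = acc_angles[0]                # initialize from the accelerometer
    fused = []
    for acc, rate in zip(acc_angles, gyro_rates):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        fused.append(angle)
    return fused

# Toy usage: the thigh rotates 10 degrees per sample during a descent
print(complementary_squat_angle([0, 10, 20, 30], [0, 1000, 1000, 1000]))
```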

Capture of Foot Motion for Real-time Virtual Wearing by Stereo Cameras (스테레오 카메라로부터 실시간 가상 착용을 위한 발동작 검출)

  • Jung, Da-Un;Yun, Yong-In;Choi, Jong-Soo
    • Journal of Korea Multimedia Society / v.11 no.11 / pp.1575-1591 / 2008
  • In this paper, we propose a new method of capturing foot motion from stereo cameras in order to overlay a 3D virtual foot model in real time. To overlay the virtual model at the same position as the foot, the foot's joints must be detected and tracked continuously, and accurately registering the virtual model to the user's foot during complicated motion is the central problem in this technology. We propose a dynamic registration method using two types of marker groups. Plane information from the ground constrains the relationship between the virtual model and the user's foot and yields the foot's pose and location, while the foot's rotation is predicted from the two marker groups attached along the instep. We implemented the proposed system and evaluated the accuracy of the method in various experiments.
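
The paper registers the virtual model to the foot via attached marker groups. A standard way to recover a rigid pose from tracked 3D marker positions is the Kabsch/SVD fit below; this is a generic sketch, not the authors' algorithm, and the toy data are assumptions.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ R @ src + t,
    from matched (N, 3) marker positions (Kabsch algorithm)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

# Toy check: four markers rotated 90 degrees about z and shifted along x
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
dst = src @ Rz.T + np.array([2.0, 0, 0])
R, t = rigid_transform(src, dst)
print(np.allclose(R, Rz), np.allclose(t, [2, 0, 0]))  # True True
```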

Model-based Body Motion Tracking of a Walking Human (모델 기반의 보행자 신체 추적 기법)

  • Lee, Woo-Ram;Ko, Han-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.6 / pp.75-83 / 2007
  • A model-based approach to tracking the limbs of a walking human subject is proposed in this paper. The tracking process begins by building a database of conditional probabilities of motions between the limbs of a walking subject. With a suitable amount of video footage from various human subjects included in the database, a probabilistic model characterizing the relationships between limb motions is developed. Motion tracking of a test subject begins by identifying and tracking limbs in the surveillance video using edge and silhouette detection. When any tracked limb becomes occluded, the approach uses the probabilistic motion model in conjunction with the minimum-cost edge and silhouette tracking model to determine the motion of the occluded limb. The method showed promising results in tracking occluded limbs in validation tests.
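
As a toy illustration of the idea (the paper's actual model and motion states are not given here), an occluded limb's motion can be inferred by looking up the most probable motion conditioned on the motion of a limb that remains visible:

```python
# Hypothetical conditional-probability table: P(occluded right-arm motion
# given the visible left leg's motion). States and values are invented.
COND_PROB = {
    "left_leg_swings_forward": {"right_arm_forward": 0.8, "right_arm_back": 0.2},
    "left_leg_swings_back": {"right_arm_forward": 0.15, "right_arm_back": 0.85},
}

def infer_occluded_motion(visible_motion):
    """Pick the most probable motion for the occluded limb."""
    dist = COND_PROB[visible_motion]
    return max(dist, key=dist.get)

print(infer_occluded_motion("left_leg_swings_forward"))  # right_arm_forward
```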

Biomechanical Analysis of Arm Motion during Steering Using Motion Analysis Technique (동작분석기법을 이용한 조향동작에 대한 팔의 생체역학적 특성분석)

  • Kim, Young-Hwan;Tak, Tea-Oh
    • Transactions of the Korean Society of Mechanical Engineers A / v.35 no.11 / pp.1391-1398 / 2011
  • Biomechanical analysis of arm motion during steering was performed using a motion analysis technique. Three-dimensional position data for each part of the arm are fed into an interactive model combining a musculoskeletal arm model with the mechanical steering system, which calculates joint angles and torques through inverse kinematic and dynamic analyses, respectively. The analysis shows that elbow pronation/supination, wrist flexion/extension, shoulder adduction/abduction, and shoulder flexion/extension all have significant magnitudes. A sensitivity analysis of arm joint motion with respect to seating posture and steering wheel configuration was carried out to investigate their qualitative influence on steering behavior.
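
The paper uses a full musculoskeletal arm model; as a much-reduced analogue of the inverse kinematic step, the planar two-link solver below recovers shoulder and elbow angles from a wrist position. The link lengths and target point are illustrative assumptions.

```python
import numpy as np

def two_link_ik(x, y, upper_len, fore_len):
    """Planar two-link inverse kinematics: returns the shoulder and elbow
    angles (rad) that place the wrist at (x, y)."""
    c2 = (x**2 + y**2 - upper_len**2 - fore_len**2) / (2 * upper_len * fore_len)
    elbow = np.arccos(np.clip(c2, -1.0, 1.0))          # elbow flexion
    shoulder = np.arctan2(y, x) - np.arctan2(
        fore_len * np.sin(elbow), upper_len + fore_len * np.cos(elbow))
    return shoulder, elbow

# Toy usage: unit-length upper arm and forearm reaching the point (1, 1)
print(two_link_ik(1.0, 1.0, 1.0, 1.0))  # shoulder ≈ 0.0, elbow ≈ pi/2
```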

Automatic Anticipation Generation for 3D Facial Animation (3차원 얼굴 표정 애니메이션을 위한 기대효과의 자동 생성)

  • Choi Jung-Ju;Kim Dong-Sun;Lee In-Kwon
    • Journal of KIISE: Computer Systems and Theory / v.32 no.1 / pp.39-48 / 2005
  • In traditional 2D animation, anticipation makes an animation much more convincing and expressive. We present an automatic method for inserting anticipation effects into an existing facial animation, under the assumption that an anticipatory facial expression can be found within the animation itself if it is long enough. Vertices of the face model are classified into a set of components by applying principal component analysis directly to the given key-framed and/or motion-captured facial animation data; the vertices within a single component have similar directions of motion in the animation. For each component, the animation is examined to find candidate anticipation effects for the given facial expression, and the one that best preserves the topology of the face model is selected. The selected anticipation effect is automatically blended with the original facial animation while preserving its continuity and entire duration. We show experimental results for motion-captured and key-framed facial animations. This paper addresses part of a broader subject, the application of the principles of traditional 2D animation to 3D animation, by showing how to incorporate anticipation into 3D facial animation: animators can produce 3D facial animation with anticipation simply by selecting the facial expression in the animation.
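
A rough sketch of the vertex-grouping step as described (a principal-axis analysis of each vertex's trajectory gives its dominant motion direction, and vertices with similar directions are grouped); the clustering method and parameters here are assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_vertices_by_motion(anim, n_groups=5):
    """anim: (frames, vertices, 3) positions. Labels each vertex by clustering
    the first principal axis of its motion trajectory."""
    disp = anim - anim.mean(axis=0)                    # zero-mean trajectories
    dirs = np.empty((anim.shape[1], 3))
    for v in range(anim.shape[1]):
        _, _, vt = np.linalg.svd(disp[:, v, :], full_matrices=False)
        d = vt[0]                                      # dominant motion direction
        dirs[v] = d if d[np.argmax(np.abs(d))] >= 0 else -d  # canonical sign
    return KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(dirs)

# Toy usage: 20 frames, 100 vertices; half move along x, half along y
anim = np.zeros((20, 100, 3))
anim[:, :50, 0] = np.linspace(0, 1, 20)[:, None]
anim[:, 50:, 1] = np.linspace(0, 1, 20)[:, None]
print(group_vertices_by_motion(anim, n_groups=2)[:3])  # first three share a label
```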

Hierarchical Visualization of the Space of Facial Expressions (얼굴 표정공간의 계층적 가시화)

  • Kim Sung-Ho;Jung Moon-Ryul
    • Journal of KIISE: Computer Systems and Theory / v.31 no.12 / pp.726-734 / 2004
  • This paper presents a facial animation method that lets the user select a sequence of facial frames from a facial expression space whose level of detail the user can choose hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames, representing the state of each expression by a distance matrix that records the distance between pairs of feature points on the face; the shortest trajectories are found by dynamic programming. Because the space of facial expressions is multidimensional, we visualize it in 2D using multidimensional scaling (MDS). There are, however, too many facial expressions to select from at once, which makes the space difficult to navigate, so we visualize it hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering: at the top level the system creates about 10 clusters from the 2,400 facial expressions, and the number of clusters doubles at each deeper level. The cluster centers are displayed on the 2D screen and serve as candidate key frames for key-frame animation. The user selects new key frames along the navigation path of the previous level and completes the key-frame specification at the deepest level. We let animators use the system to create example animations and evaluated the system based on the results.
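
A compact sketch of one level of this pipeline, using scikit-learn's MDS on a precomputed distance matrix and k-means as a stand-in for the paper's fuzzy clustering; the data and cluster count are illustrative.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans   # stand-in for fuzzy clustering

def embed_level(dist_matrix, n_clusters):
    """Embed expression distances in 2D and return cluster centers to show
    as candidate key frames for one level of the hierarchy."""
    xy = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist_matrix)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(xy)
    return xy, km.cluster_centers_, km.labels_

# Toy usage: distances among 40 random "expressions"; the paper starts with
# about 10 clusters and doubles the count at each deeper level.
pts = np.random.default_rng(0).normal(size=(40, 5))
dists = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
xy, centers, labels = embed_level(dists, n_clusters=10)
print(centers.shape)  # (10, 2)
```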

Implementation of a Transition Rule Model for Automation of Tracking Exercise Progression (운동 과정 추적의 자동화를 위한 전이 규칙 모델의 구현)

  • Chung, Daniel;Ko, Ilju
    • KIPS Transactions on Computer and Communication Systems / v.11 no.5 / pp.157-166 / 2022
  • Exercise is necessary for a healthy life, but during an epidemic such as COVID-19 it is recommended that it be conducted in a non-face-to-face environment. Existing non-face-to-face exercise content can recognize exercise movements, but the process of interpreting them and providing feedback is not automated. In this paper, we therefore propose a method of creating formalized rules that track an exercise and the motions that constitute it. We first create a rule for the overall exercise content and then create tracking rules for its constituent motions. A motion tracking rule is created by dividing the motion into steps, defining the key-frame poses that separate the steps, and specifying transition rules between the states represented by those key-frame poses. These rules presuppose posture and motion recognition technology based on motion capture equipment and provide the logical foundation for automating its application. Using the proposed rules, the motions appearing in the exercise can not only be recognized but the interpretation of the entire motion process can also be automated, making it possible to produce more advanced content, such as an artificial-intelligence training system, and to improve the quality of feedback on the exercise process.
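
A minimal sketch of such a transition rule: key-frame poses become states, a predicate on each captured frame decides when to advance, and a completed cycle of states counts one repetition. The squat states, knee-angle thresholds, and frame format are assumptions for illustration; a real system would use motion-capture-based pose recognition, per the paper.

```python
from typing import Callable, List, Tuple

class ExerciseTracker:
    """Advance through key-frame-pose states whenever a state's predicate
    matches the current frame; one full cycle counts as one repetition."""

    def __init__(self, states: List[Tuple[str, Callable[[dict], bool]]]):
        self.states = states
        self.index = 0
        self.reps = 0

    def update(self, frame: dict) -> None:
        _, predicate = self.states[self.index]
        if predicate(frame):
            self.index = (self.index + 1) % len(self.states)
            if self.index == 0:        # all key-frame poses matched once
                self.reps += 1

# Toy squat rule: two key-frame poses defined by knee angle in degrees
squat = ExerciseTracker([
    ("standing", lambda f: f["knee_angle"] > 160),
    ("bottom", lambda f: f["knee_angle"] < 90),
])
for angle in [170, 150, 85, 120, 165]:
    squat.update({"knee_angle": angle})
print(squat.reps)  # 1
```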

A Synchronized Playback Method of 3D Model and Video by Extracting Golf Swing Information from Golf Video (골프 동영상으로부터 추출된 스윙 정보를 활용한 3D 모델과 골프 동영상의 동기화 재생)

  • Oh, Hwang-Seok
    • Journal of the Korean Society for Computer Game / v.31 no.4 / pp.61-70 / 2018
  • In this paper, we propose a method of playing back a 3D reference model and a video in synchronization, using golf swing information extracted from a learner's golf video, so that every motion can be precisely compared and analyzed at each position and time in the swing; we also present the implementation results. To synchronize the 3D model with the learner's swing video, the swing is first recorded, and relative time information is extracted from the video according to the position of the golf club from the address posture to the finish posture. The 3D reference model rigs a professional golfer's swing, captured at 120 frames per second with high-quality motion capture equipment, onto a 3D model; applying the time information extracted from the learner's video to this reference model synchronizes the two, so the learner can correct or learn his or her posture by comparing it precisely with the reference model at each position of the swing. Synchronized playback can replace the manual adjustment otherwise needed to compare the reference model with the learner's swing. Apart from the image processing step that detects each position of the golf posture, this method of automatically extracting the time information of each position from video and playing back in synchronization is expected to extend to sports applications in general.
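
Given matched times for each club position in the learner's video and in the reference model, the synchronization reduces to a time mapping; the piecewise-linear version below (with invented key-position times) is one straightforward realization, not necessarily the paper's exact formulation.

```python
import numpy as np

def to_model_time(t_video, video_keys, model_keys):
    """Map a video timestamp onto the reference model's timeline by
    piecewise-linear interpolation between matched key positions."""
    return np.interp(t_video, video_keys, model_keys)

# Assumed key-position times in seconds: address, top, impact, finish
video_keys = [0.0, 1.2, 1.6, 2.8]   # learner's swing video
model_keys = [0.0, 0.9, 1.2, 2.1]   # 3D reference model
print(to_model_time(1.4, video_keys, model_keys))  # 1.05
```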

Wishbowl: Production Case Study of Music Video and Immersive Interactive Concert of Virtual Band Idol Verse'day (Wishbowl: 버추얼 밴드 아이돌 Verse'day 뮤직비디오 및 몰입형 인터랙티브 공연 제작 사례 연구)

  • Sebin Lee;Gyeongjin Kim;Daye Kim;Jungjin Lee
    • Journal of the Korea Computer Graphics Society / v.30 no.3 / pp.23-41 / 2024
  • Recently, various virtual avatar music contents showcasing singing and dancing have been produced, and as virtual artists gain popularity, offline virtual avatar concerts have also emerged. However, there are few examples of virtual avatar band content in which the avatars play instruments, and offline virtual avatar concerts that rely on a large front screen are limited in their ability to exploit the fantastical effects and high degree of freedom unique to virtual reality. In this paper, motivated by these limitations, we introduce the production of virtual avatar band content and an immersive interactive concert for the virtual band idol Verse'day. First, we present a case study on creating band performance animations and music videos using motion capture systems and real-time engines. Then, we describe the production of an immersive interactive concert that uses projection mapping and a light stick enabling real-time interaction at an offline venue. Finally, based on these production cases, we discuss future research directions for virtual avatar music content creation. We expect our production cases to inspire diverse virtual avatar music content and the development of immersive interactive offline virtual avatar concerts.