• Title/Summary/Keyword: motion capture (모션 캡처)

Study on Effect of Exercise Performance using Non-face-to-face Fitness MR Platform Development (비대면 휘트니스 MR 플랫폼 개발을 활용한 운동 수행 효과에 관한 연구)

  • Kim, Jun-woo
    • The Journal of the Convergence on Culture Technology
    • /
    • v.7 no.3
    • /
    • pp.571-576
    • /
    • 2021
  • This study was carried out to overcome the problems of the existing fitness business and to build a fitness system that can meet the demand that increased during the COVID-19 pandemic. As a platform technology for a non-face-to-face fitness edutainment service, it is a next-generation fitness exercise device that can engage various body parts and synchronize information over a network. By synchronizing the exercise information from the fitness equipment, learning content was composed through MR-based avatars. A quantified result was derived by examining the applicability of a customized evaluation system through momentum analysis, using AI analysis that applies an LSTM-based algorithm to the user's cumulative exercise effect. As a motion capture and 3D visualization fitness program that applies systematic exercise techniques supervised by academic experts, it is expected to contribute to improving users' fitness knowledge and exercise ability.
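The abstract names only an "LSTM-based algorithm" for the momentum analysis without giving details. As a minimal sketch of the general idea, the following pure-Python single LSTM cell summarizes a sequence of per-repetition intensity readings into one state value; all weights, names, and the toy input are hypothetical, not taken from the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    # One step of a single-feature LSTM cell with scalar weights.
    i = sigmoid(w["wi"] * x + w["ui"] * h + w["bi"])   # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h + w["bf"])   # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h + w["bo"])   # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h + w["bg"])  # candidate state
    c_new = f * c + i * g
    h_new = o * math.tanh(c_new)
    return h_new, c_new

def summarize(sequence, w):
    # Fold the whole exercise sequence into the final hidden state.
    h, c = 0.0, 0.0
    for x in sequence:
        h, c = lstm_step(x, h, c, w)
    return h

# Hypothetical fixed weights; a real system would learn these from data.
w = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
reps = [0.2, 0.4, 0.6, 0.8]  # e.g. normalized per-repetition intensity
score = summarize(reps, w)
```

In a real pipeline the weights would be trained on labeled exercise sequences and the final state fed to a classifier; the recurrence above is only the structural skeleton.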

3D Rigid Body Tracking Algorithm Using 2D Passive Marker Image (2D 패시브마커 영상을 이용한 3차원 리지드 바디 추적 알고리즘)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.587-588
    • /
    • 2022
  • In this paper, we propose a method for tracking a rigid body in 3D space using 2D passive-marker images from multiple motion capture cameras. First, a calibration process using a chessboard is performed to obtain the intrinsic parameters of each camera. In a second calibration step, a triangular structure with three markers is moved so that all cameras can observe it, and the data accumulated for each frame is used to correct and update the relative position information between the cameras. The coordinate system of each camera is then converted into the 3D world coordinate system to restore the three-dimensional coordinates of the three markers; the distance between each pair of markers is computed and compared with the actual distance. As a result, an average error within 2 mm was measured.
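The validation step described at the end of the abstract (comparing reconstructed inter-marker distances against the known wand geometry) can be sketched in a few lines. The marker coordinates and true lengths below are hypothetical illustration values, not the paper's data:

```python
import math

def dist(p, q):
    # Euclidean distance between two 3D points.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def marker_errors(reconstructed, true_lengths_mm):
    # Compare the three pairwise distances of the triangular wand's
    # markers against their known physical lengths.
    pairs = [(0, 1), (1, 2), (0, 2)]
    return [abs(dist(reconstructed[i], reconstructed[j]) - true_lengths_mm[k])
            for k, (i, j) in enumerate(pairs)]

# Hypothetical reconstructed marker positions in world coordinates (mm)
markers = [(0.0, 0.0, 0.0), (100.4, 0.0, 0.0), (50.0, 86.9, 0.0)]
true_lengths = [100.0, 100.0, 100.0]  # equilateral wand, 100 mm sides

errors = marker_errors(markers, true_lengths)
mean_error = sum(errors) / len(errors)
```

With reconstruction of the quality the paper reports, every entry of `errors` would stay within about 2 mm.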

The Prediction System of Emotional Reaction to Gaits Using MAX SCRIPT (맥스 스크립트를 이용한 감성적 걸음걸이 예측 시스템)

  • Jeong, Jae-Wook
    • Science of Emotion and Sensibility
    • /
    • v.14 no.1
    • /
    • pp.1-6
    • /
    • 2011
  • The perceptual reaction to human gaits has a "regularity" that can elicit sympathy among observers. This thesis continues a line of research that quantitatively extracts that regularity, reconstitutes the result, and applies it to behavior control. Its purpose is to establish the validity of future research by demonstrating the following hypothesis: when the physical numerical values of gait "A", whose perceptual reaction is "a", and those of gait "B", whose perceptual reaction is "b", are arbitrarily blended, the perceptual reaction to the blended gait also corresponds to the blend of "a" and "b", "a/b". I blended samples of two types of gaits in Biped form using the EAM tool written in 3D Studio Max Script. Blending succeeded in four of the six trials in total. This implies that, without resorting to methods such as motion capture, basic Biped data itself is capable of generating various Biped gaits. Although the present research targets only Biped samples with a one-cycle movement condition of the arms and legs, a tool that makes blending possible under various movement conditions is necessary for a complete system.
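The core operation described, blending the physical numerical values of two gaits, amounts to interpolating corresponding parameters. A minimal sketch under that assumption, with hypothetical parameter names and values (the paper's actual Biped channels and MAX Script implementation are not reproduced here):

```python
def blend_gaits(gait_a, gait_b, t):
    # Linearly blend corresponding numeric gait parameters:
    # t = 0.0 yields gait A unchanged, t = 1.0 yields gait B.
    return {k: (1 - t) * gait_a[k] + t * gait_b[k] for k in gait_a}

# Hypothetical parameter sets for two perceptually distinct gaits
happy = {"stride_len": 0.9, "arm_swing_deg": 40.0, "cadence_hz": 1.9}
sad   = {"stride_len": 0.6, "arm_swing_deg": 10.0, "cadence_hz": 1.4}

mix = blend_gaits(happy, sad, 0.5)  # 50/50 blend of "a" and "b"
```

The hypothesis in the abstract is that the perceptual reaction to `mix` would itself be perceived as the corresponding midpoint of the two source impressions.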

A Synchronized Playback Method of 3D Model and Video by Extracting Golf Swing Information from Golf Video (골프 동영상으로부터 추출된 스윙 정보를 활용한 3D 모델과 골프 동영상의 동기화 재생)

  • Oh, Hwang-Seok
    • Journal of the Korean Society for Computer Game
    • /
    • v.31 no.4
    • /
    • pp.61-70
    • /
    • 2018
  • In this paper, we propose a method for synchronized playback of a 3D reference model and video by extracting golf-swing information from a learner's golf video, so that each motion can be precisely compared and analyzed at each position and time in the swing, and we present the implementation results. To synchronize the 3D model with the learner's swing video, the learner's swing is first recorded, and relative time information is extracted from the video according to the position of the golf club from the address posture to the finish posture. The 3D reference model is built by rigging the swing motion of a professional golfer, captured at 120 frames per second with high-quality motion-capture equipment, onto a 3D model. By applying the time information from the learner's swing video to this reference model and synchronizing the two, the learner can correct or learn his or her posture by precisely comparing it with the reference model at each position of the golf swing. Synchronized playback can improve on systems that require manually adjusting the reference model against the learner's swing for comparison and analysis. Except for the image-processing step that detects each position of the golf posture, the method of automatically extracting the time information of each position from video and playing it back in synchrony is expected to extend to general recreational sports.
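Synchronizing the reference model to the learner's timing, as described, can be modeled as a piecewise-linear time warp anchored at matching swing positions. A sketch under that assumption; the key times below are hypothetical, and the paper's actual position-detection step is not shown:

```python
import bisect

def warp_time(t_learner, learner_keys, ref_keys):
    # Map a learner-video timestamp onto the reference model's timeline,
    # piecewise-linearly between matching swing-position key times
    # (address ... finish). Both key lists must be sorted and equal length.
    if t_learner <= learner_keys[0]:
        return ref_keys[0]
    if t_learner >= learner_keys[-1]:
        return ref_keys[-1]
    i = bisect.bisect_right(learner_keys, t_learner) - 1
    frac = (t_learner - learner_keys[i]) / (learner_keys[i + 1] - learner_keys[i])
    return ref_keys[i] + frac * (ref_keys[i + 1] - ref_keys[i])

# Hypothetical key times (seconds): address, top of backswing, impact, finish
learner   = [0.0, 1.2, 1.6, 2.5]
reference = [0.0, 0.9, 1.2, 2.0]

t_ref = warp_time(0.6, learner, reference)  # reference time to display
```

At playback, each learner-video frame time is passed through `warp_time` to pick the reference-model pose shown beside it, so both reach each swing position together.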

FBX Format Animation Generation System Combined with Joint Estimation Network using RGB Images (RGB 이미지를 이용한 관절 추정 네트워크와 결합된 FBX 형식 애니메이션 생성 시스템)

  • Lee, Yujin;Kim, Sangjoon;Park, Gooman
    • Journal of Broadcast Engineering
    • /
    • v.26 no.5
    • /
    • pp.519-532
    • /
    • 2021
  • Recently, in various fields such as games, movies, and animation, content that uses motion capture to build body models and create characters expressed in 3D space has been increasing. To compensate for problems such as the filming cost of marker-based joint placement, studies are under way that generate animations using RGB-D cameras, but problems of pose-estimation accuracy and equipment cost remain. Therefore, in this paper we propose a system that feeds RGB images into a joint-estimation network and converts the results into 3D data to create FBX-format animations, reducing the equipment cost of animation creation while increasing joint-estimation accuracy. First, the two-dimensional joints are estimated from the RGB image, and the three-dimensional coordinates of the joints are estimated from these values. The result is converted into quaternions to apply the rotations, and an FBX-format animation is created. To measure the accuracy of the proposed method, markers were attached to the body, and the animation generated from the markers' 3D positions was compared against the animation generated by the proposed system to verify the error and the system's operation.
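The "convert to quaternion" step described in the abstract typically means finding, for each bone, the rotation taking its rest-pose direction to the direction implied by the estimated 3D joints. A minimal sketch of that conversion; the joint coordinates and rest direction are hypothetical, and the paper's network and FBX export are not reproduced:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def quat_between(u, v):
    # Quaternion (w, x, y, z) rotating unit vector u onto unit vector v.
    # Note: degenerates when u and v are exactly opposite.
    u, v = normalize(u), normalize(v)
    dot = sum(a * b for a, b in zip(u, v))
    cx = u[1] * v[2] - u[2] * v[1]   # cross product u x v
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return normalize((1.0 + dot, cx, cy, cz))

# Rest-pose bone points along +Y; the estimated joint pair gives the new
# direction. Shoulder/elbow positions here are hypothetical 3D estimates.
rest_dir = (0.0, 1.0, 0.0)
shoulder, elbow = (0.0, 1.4, 0.0), (0.3, 1.1, 0.0)
bone_dir = tuple(e - s for s, e in zip(shoulder, elbow))

q = quat_between(rest_dir, bone_dir)  # per-frame bone rotation
```

One such quaternion per bone per frame is what an FBX animation track stores; writing the actual FBX file would be done with an exporter library rather than by hand.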

Analysis of the Effects of Positive and Negative VR Game Contents on Enhancing Environmental Awareness Based on Self-Reliant and Team-Based Play Styles (개인 플레이와 협동 플레이 방식에서 긍정적 및 부정적 VR 콘텐츠가 환경 인식 개선에 미치는 영향)

  • Jihun Chae;Seungeun Yoo;Youngsung Lee;Yunsub Kim;Hyeonjin Kim;Daseong Han
    • Journal of the Korea Computer Graphics Society
    • /
    • v.29 no.3
    • /
    • pp.137-147
    • /
    • 2023
  • This paper presents a motion-capture-based projection VR system to explore the effectiveness of gamification in improving environmental awareness. We examine the key components of positive and negative VR game content and analyze the impact of individual and cooperative play methods on promoting sustainable behaviors. Our findings are as follows. Firstly, we discovered that the use of positive content in individual play mode was effective in improving awareness of the importance of recycling. Secondly, we confirmed that the use of positive content in cooperative play mode and the use of negative content in individual play mode were each effective in enhancing awareness of the seriousness of environmental pollution. Thirdly, we found that experiencing positive content first, followed by negative content, in individual play mode was effective in increasing interest in the environment. Based on these findings, we determined that adjusting the order of use of positive and negative content is more effective than simply using positive or negative content alone for improving environmental awareness. Moreover, considering the importance of recycling, the seriousness of environmental pollution, and the level of interest in the environment, we confirmed that individual play mode is effective and cooperative play mode can be more effective depending on the measure.

A Study on Korean Speech Animation Generation Employing Deep Learning (딥러닝을 활용한 한국어 스피치 애니메이션 생성에 관한 고찰)

  • Suk Chan Kang;Dong Ju Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.10
    • /
    • pp.461-470
    • /
    • 2023
  • While speech-animation generation employing deep learning has been actively researched for English, there has been no prior work for Korean. Given this, this paper employs supervised deep learning to generate Korean speech animation for the first time. In doing so, we find the significant effect that deep learning reduces speech-animation research to speech recognition research, the predominating technique, and we study how best to use this effect for Korean speech-animation generation. The effect can contribute to efficiently and effectively revitalizing the recently inactive Korean speech-animation research by clarifying the top-priority research target. This paper proceeds as follows: (i) it chooses the blendshape animation technique; (ii) it implements the deep learning model as a pipeline in which a facial action coding (FAC) module depends on an automatic speech recognition (ASR) module; (iii) it builds a Korean speech facial motion-capture dataset; (iv) it prepares two deep learning models for comparison (one adopts an English ASR module, the other a Korean ASR module, while both adopt the same basic structure for their FAC modules); and (v) it trains the FAC module of each model dependently on its ASR module. The user study demonstrates that the model adopting the Korean ASR module with its dependently trained FAC module (scoring 4.2/5.0) generates decisively more natural Korean speech animations than the model adopting the English ASR module (scoring 2.7/5.0). The result confirms the aforementioned effect, showing that the quality of Korean speech animation comes down to the accuracy of Korean ASR.
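The pipeline described, an ASR module feeding a FAC module that emits blendshape weights, can be illustrated with a toy lookup stage in place of the learned FAC module. The phoneme labels, blendshape names, and weights below are entirely hypothetical; in the paper this mapping is learned, not tabulated:

```python
# Hypothetical phoneme-to-blendshape table standing in for a learned
# FAC module; a real system regresses these weights from data.
VISEME_TABLE = {
    "a": {"jaw_open": 0.8, "lip_round": 0.0},
    "o": {"jaw_open": 0.4, "lip_round": 0.9},
    "m": {"jaw_open": 0.0, "lip_round": 0.2},
}
NEUTRAL = {"jaw_open": 0.0, "lip_round": 0.0}

def frames_from_phonemes(phonemes, fps=30):
    # phonemes: list of (label, duration_sec) pairs, e.g. from a
    # time-aligned ASR output. Returns per-frame blendshape weights.
    frames = []
    for label, dur in phonemes:
        weights = VISEME_TABLE.get(label, NEUTRAL)
        frames.extend([weights] * round(dur * fps))
    return frames

# Toy alignment: a closed-lip consonant followed by an open vowel
frames = frames_from_phonemes([("m", 0.1), ("a", 0.2)])
```

The paper's point is that once ASR supplies accurate, well-aligned labels, the animation stage reduces to such a mapping, which is why Korean animation quality tracks Korean ASR accuracy.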