• Title/Summary/Keyword: Motion Capture Technology (모션 캡쳐 기술)


Character Motion Control by Using Limited Sensors and Animation Data (제한된 모션 센서와 애니메이션 데이터를 이용한 캐릭터 동작 제어)

  • Bae, Tae Sung;Lee, Eun Ji;Kim, Ha Eun;Park, Minji;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society / v.25 no.3 / pp.85-92 / 2019
  • A 3D virtual character playing a role in digital storytelling has a unique style in its appearance and motion. Because this style reflects the character's unique personality, it is very important to preserve it and keep it consistent. However, when the character's motion is directly controlled by the motion of a user wearing motion sensors, the unique style can be lost. We present a novel character motion control method that preserves the character's motion style using only a small amount of animation data created specifically for that character. Instead of machine learning approaches requiring a large amount of training data, we suggest a search-based method, which directly searches the animation data for the character pose most similar to the user's current pose. To show the usability of our method, we conducted experiments with a character model and its animation data created by an expert designer for a virtual reality game. To prove that our method preserves the original motion style of the character well, we compared our result with the result obtained by using general human motion capture data. In addition, to show the scalability of our method, we present experimental results with different numbers of motion sensors.
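The search-based control described above can be sketched as a nearest-pose lookup: given readings from a few sensors, pick the animation frame whose corresponding joints match best. The array shapes and joint indexing below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def nearest_character_pose(anim_poses, sensor_reading, sensor_joints):
    """Return the index of the animation frame best matching the sensors.

    anim_poses    : (N, J, 3) joint positions for N animation frames
    sensor_reading: (S, 3) positions reported by the S motion sensors
    sensor_joints : length-S list mapping each sensor to a joint index
    """
    # Compare only the joints that actually have a sensor attached.
    candidates = anim_poses[:, sensor_joints, :]                    # (N, S, 3)
    dists = np.linalg.norm(candidates - sensor_reading, axis=2).sum(axis=1)
    return int(np.argmin(dists))                                    # best frame
```

Returning the frame index rather than the pose itself makes it easy to add temporal smoothing later, e.g. penalizing frames far from the previously selected one.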

A Study on the Transition of the Grotesque Expression Method in Animation (애니메이션에 나타난 그로테스크 표현방법 추이 연구)

  • 이종한
    • Archives of design research / v.17 no.3 / pp.51-60 / 2004
  • The aesthetics of the 'grotesque' and its entertainment value have much in common with the characteristics of animation as a medium of expression. The ways of expressing grotesque images in animation have expanded with the development of modern digital imaging technologies. Grotesque images once created by combining photographs, drawn animation, and object animation now render their disharmony ever more realistically through full 3D animation and the lifelike motions of virtual characters driven by motion capture. The subject matter has also diversified with the advance of modern scientific civilization: Cyborg (cybernetic + organism) characters have appeared, and contemporary grotesque imagery tends to surface in animations dealing with human isolation. As visual technology continues to develop, the expression of grotesque images is expected to shift toward VR and interactive visual forms. It is therefore expected that grotesque imagery will evolve into a visual genre that emphasizes dehumanization.


A Study on XR Handball Sports for Individuals with Developmental Disabilities

  • Byong-Kwon Lee;Sang-Hwa Lee
    • Journal of the Korea Society of Computer and Information / v.29 no.6 / pp.31-38 / 2024
  • This study proposes a novel approach to enhancing the social inclusion and participation of individuals with developmental disabilities. Utilizing cutting-edge virtual reality (VR) technology, we designed and developed a metaverse simulator that enables individuals with developmental disabilities to safely and conveniently experience indoor handball for the disabled. The simulator provides an environment where individuals with disabilities can experience and practice handball matches. For the modeling and animation of handball players, we employed advanced modeling and motion capture technologies to accurately replicate the movements required in handball matches. Additionally, we ported various training programs, including basic drills, penalty throws, and target games, onto XR (Extended Reality) devices. Through this research, we explored the development of immersive assistive tools that enable individuals with developmental disabilities to participate more easily in activities that may be challenging in real-life scenarios. This is anticipated to broaden the scope of their social participation and enhance their overall quality of life.

Deep Learning-Based Motion Reconstruction Using Tracker Sensors (트래커를 활용한 딥러닝 기반 실시간 전신 동작 복원)

  • Hyunseok Kim;Kyungwon Kang;Gangrae Park;Taesoo Kwon
    • Journal of the Korea Computer Graphics Society / v.29 no.5 / pp.11-20 / 2023
  • In this paper, we propose a novel deep learning-based motion reconstruction approach that facilitates the generation of full-body motions, including finger motions, while also enabling the online adjustment of motion generation delays. The proposed method combines the Vive Tracker with a deep learning method to achieve more accurate motion reconstruction while effectively mitigating foot skating issues through the use of an Inverse Kinematics (IK) solver. The proposed method utilizes a trained AutoEncoder to reconstruct character body motions using tracker data in real-time while offering the flexibility to adjust motion generation delays as needed. To generate hand motions suitable for the reconstructed body motion, we employ a Fully Connected Network (FCN). By combining the reconstructed body motion from the AutoEncoder with the hand motions generated by the FCN, we can generate full-body motions of characters that include hand movements. In order to alleviate foot skating issues in motions generated by deep learning-based methods, we use an IK solver. By setting the trackers located near the character's feet as end-effectors for the IK solver, our method precisely controls and corrects the character's foot movements, thereby enhancing the overall accuracy of the generated motions. Through experiments, we validate the accuracy of motion generation in the proposed deep learning-based motion reconstruction scheme, as well as the ability to adjust latency based on user input. Additionally, we assess the correction performance by comparing motions with the IK solver applied to those without it, focusing particularly on how it addresses the foot skating issue in the generated full-body motions.
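Foot-skate cleanup of the kind described above can be done by pinning the foot end-effector to a tracker-derived target with an iterative IK solver. The planar CCD (Cyclic Coordinate Descent) solver below is a generic illustration of that idea, not the authors' solver; the chain lengths and targets are made up.

```python
import numpy as np

def fk(angles, lengths):
    """Forward kinematics of a planar joint chain; returns all joint positions."""
    pts, pos, heading = [np.zeros(2)], np.zeros(2), 0.0
    for a, l in zip(angles, lengths):
        heading += a
        pos = pos + l * np.array([np.cos(heading), np.sin(heading)])
        pts.append(pos)
    return pts

def ccd_ik(angles, lengths, target, iters=100):
    """Cyclic Coordinate Descent: rotate each joint toward the target in turn."""
    angles = list(angles)
    for _ in range(iters):
        for i in reversed(range(len(angles))):
            pts = fk(angles, lengths)
            end, pivot = pts[-1], pts[i]
            # Rotation needed to swing the end-effector onto the pivot->target ray.
            cur = np.arctan2(end[1] - pivot[1], end[0] - pivot[0])
            goal = np.arctan2(target[1] - pivot[1], target[0] - pivot[0])
            angles[i] += goal - cur
    return angles
```

Setting the tracker position near the foot as the target, as the paper does, then amounts to calling such a solver on the leg chain after the network has produced the body pose.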

A Study about a Production of A Game Character Animation Using a Combining with a Motion-capture System (디지털기반 3D 게임캐릭터애니메이션 제작에 있어서 모션캡쳐 활용에 관한 연구)

  • Ryu Seuc-Ho;Kyung Byung-Pyo;Kim Tae-Yul
    • The Journal of the Korea Contents Association / v.5 no.5 / pp.115-123 / 2005
  • The game industry will be one of the fastest-developing industries of the 21st century, an outcome of the technology and computer hardware accumulated over the 20th century. The development of computer graphics and hardware has allowed games increasingly realistic expression, and this realism extends to the movement of 3D game characters as well as their backgrounds. Character animation has depended on the animator's level of skill, but it has suffered from unnatural movement and long production times. In this regard, this study compares the most widely used animation techniques, the key-frame method and the motion-capture method, and tries to determine which is the more appropriate and effective for 3D game character animation.


A Character Speech Animation System for Language Education for Each Hearing Impaired Person (청각장애우의 언어교육을 위한 캐릭터 구화 애니메이션 시스템)

  • Won, Yong-Tae;Kim, Ha-Dong;Lee, Mal-Rey;Jang, Bong-Seog;Kwak, Hoon-Sung
    • Journal of Digital Contents Society / v.9 no.3 / pp.389-398 / 2008
  • There has been some research into speech systems for communication between hearing-impaired people and those who hear normally, but owing to social indifference and a lack of marketability, teaching has relied on inefficient methods in which teachers instruct each individual. To overcome this weakness, there was a need to develop contents utilizing 3D animation and digital technology. To establish a standard face and standard mouth shape for character preparation, the study collected sufficient data on third- to sixth-grade elementary school students in Seoul and Gyeonggi, Korea, and drew up corresponding standards. These data are not merely basic data for content development for the hearing impaired; they also offer standard measurements and standard types realistically applicable to them. As a system for understanding conversation through 3D character animation and for teaching self-expression, the character speech animation system, combining 3D technology and motion capture, supports effective language learning for hearing-impaired children in their families and in special education institutions.


Development and Application of Automatic Motion Generator for Game Characters (게임 캐릭터를 위한 자동동작생성기의 개발과 응용)

  • Ok, Soo-Yol;Kang, Young-Min
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.8 / pp.1363-1369 / 2008
  • As the game and character animation industries grow, techniques for reproducing realistic character behaviors have been required in various fields, and intensive research has been performed on methods for realistic character animation. The most common approaches involve tedious user input, physically based simulation governed by dynamics, and measurement of actors' behaviors with input devices such as motion capture systems. Each of these approaches has its own advantages, but they share a common disadvantage in character control. To provide users with convenient control, realistic animation must be generated from high-level parameters, and modification should likewise be made with high-level parameters. In this paper we propose techniques for developing an automated character animation tool that operates with high-level parameters, and introduce techniques for developing actual games utilizing this tool.

Analysis of Relationship between Biomechanical Factors and Driver's Distance during Golf Driver Swing (골프 드라이버 스윙 시 운동역학 요인들과 비거리 관련성 분석)

  • Lim, Young-Tae;Park, Jun-Sung;Lee, Jae-Woo;Kwon, Moon-Seok
    • Journal of the Korean Applied Science and Technology / v.38 no.1 / pp.1-8 / 2021
  • The purpose of this study was to analyze the relationship between biomechanical factors and driving distance during the golf driver swing. Fifteen professional golfers participated as subjects. Eight motion capture cameras (250 Hz), two force plates (1000 Hz), and Trackman were used to collect kinematic and kinetic data. Pearson's correlation analysis was performed using SPSS 24.0, with the level of significance set at .05. Ball speed, club head speed, X-Factor, and ground reaction force were correlated with driving distance; however, smash factor and knee moment were not.
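Pearson's correlation, used above to relate swing factors to driving distance, is straightforward to compute directly; the sample numbers below are illustrative, not the study's measurements.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()          # center both samples
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))
```

In practice a significance test accompanies r, as in the study's SPSS analysis at the .05 level; `scipy.stats.pearsonr` returns both the coefficient and the p-value.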

Capture of Foot Motion for Real-time Virtual Wearing by Stereo Cameras (스테레오 카메라로부터 실시간 가상 착용을 위한 발동작 검출)

  • Jung, Da-Un;Yun, Yong-In;Choi, Jong-Soo
    • Journal of Korea Multimedia Society / v.11 no.11 / pp.1575-1591 / 2008
  • In this paper, we propose a new method for capturing foot motion in order to overlay a 3D virtual foot model in real time from stereo cameras. To overlay the virtual model at the exact position of the foot, the foot's joints must be detected and their motion tracked continuously, and accurate registration between the virtual model and the user's foot during complicated motion is the most important problem in this technology. We propose dynamic registration using two types of marker groups: the plane information of the ground relates the virtual model to the user's foot and yields the foot's pose and location, while the foot's rotation is predicted from two marker groups attached along the instep. We implemented the proposed system and evaluated the accuracy of the method through various experiments.
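Registering a virtual model to observed marker positions, as in the dynamic registration above, amounts to estimating a rigid transform from point correspondences. The SVD-based (Kabsch) solution below is a standard sketch of that step, not the paper's algorithm, and the marker coordinates are hypothetical.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ p + t ≈ q (Kabsch).

    P, Q: (N, 3) corresponding marker positions (N >= 3, non-collinear).
    """
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Qc - R @ Pc
    return R, t
```

Applying the recovered (R, t) to the virtual foot model each frame keeps it aligned with the tracked markers.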


Facial Expression Control of 3D Avatar by Hierarchical Visualization of Motion Data (모션 데이터의 계층적 가시화에 의한 3차원 아바타의 표정 제어)

  • Kim, Sung-Ho;Jung, Moon-Ryul
    • The KIPS Transactions:PartA / v.11A no.4 / pp.277-284 / 2004
  • This paper presents a facial expression control method for a 3D avatar that enables the user to select a sequence of facial frames from a facial expression space whose level of detail the user can choose hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. Because there are too many facial expressions to select from, the user faces difficulty navigating the space, so we visualize it hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. Initially, the system creates about 11 clusters from the space of 2,400 facial expressions. The cluster centers are displayed on a 2D screen and used as candidate key frames for key-frame animation. When the user zooms in (zoom is discrete), the system creates more clusters for the new zoom level; each time the zoom level increases, the number of clusters doubles. The user selects new key frames along the navigation path of the previous level. At the maximum zoom-in, the user completes the facial expression control specification; the user can also zoom back out to a previous level and update the navigation path. We let users control the facial expression of a 3D avatar with the system and evaluated it based on the results.
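The fuzzy clustering step above can be illustrated with a minimal fuzzy c-means implementation; the toy 2D data and parameters below stand in for the paper's 2,400 facial frames and are not its actual setup.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: soft memberships U (N, c) and centers (c, d)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # each row is a soft membership
    for _ in range(iters):
        W = U ** m                             # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Squared distance from every point to every center (+eps for stability).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        inv = d2 ** (-1.0 / (m - 1))           # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers
```

In the paper's hierarchy, each zoom level would rerun such clustering with twice as many clusters, and the resulting centers would serve as the candidate key frames shown on screen.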