• Title/Summary/Keyword: Motion Capture


Analysis of Pitching Motions by Human Pose Estimation Based on RGB Images (RGB 이미지 기반 인간 동작 추정을 통한 투구 동작 분석)

  • Yeong Ju Woo;Ji-Yong Joo;Young-Kwan Kim;Hie Yong Jeong
    • Smart Media Journal
    • /
    • v.13 no.4
    • /
    • pp.16-22
    • /
    • 2024
  • Pitching is such a major part of baseball that it can be said to be where the game begins. Accurate analysis of pitching motions is very important for both performance improvement and injury prevention. However, the motion capture method currently used to analyze pitching motions has several critical environmental drawbacks. In this paper, we propose analyzing pitching motions with an RGB-based Human Pose Estimation (HPE) model as a replacement for motion capture, and we use motion capture data together with HPE data to verify its reliability. The similarity of the two data sets was verified by comparing joint coordinates with the Dynamic Time Warping (DTW) algorithm.
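The joint-coordinate comparison described above relies on Dynamic Time Warping. A minimal sketch of the DTW distance on a single joint coordinate, assuming a simple per-frame absolute difference (the function name and example sequences are illustrative, not taken from the paper):

```python
# Minimal DTW sketch: align two 1-D joint-coordinate sequences and
# return the minimal cumulative alignment cost.

def dtw_distance(a, b):
    """Return the DTW alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # per-frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Identical sequences align with zero cost, and a time-shifted copy
# still aligns cheaply, which is why DTW suits motion comparison.
print(dtw_distance([0, 1, 2, 3], [0, 1, 2, 3]))     # 0.0
print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 3]))  # 0.0
```

For real pose data, the per-frame distance would be the Euclidean distance between full joint-coordinate vectors rather than a scalar difference.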

Motion Retargetting Simplification for H-Anim Characters (H-Anim 캐릭터의 모션 리타겟팅 단순화)

  • Jung, Chul-Hee;Lee, Myeong-Won
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.10
    • /
    • pp.791-795
    • /
    • 2009
  • To use interoperable human data in a network environment, a system-independent human data format is needed that does not depend on a specific graphics tool or program. To achieve this, the Web3D Consortium and ISO/IEC JTC1 WG6 developed the international draft standard ISO/IEC 19774 Humanoid Animation (H-Anim). H-Anim defines the data structure for an articulated human figure, but it does not yet define data for generating human motion. This paper discusses a method of achieving compatibility and independence of motion data between application programs, and describes a method of simplifying the motion retargetting needed to define motions for H-Anim characters. In addition, it describes a method of generating H-Anim character animation from an arbitrary 3D character model and arbitrary motion capture data that have no prior relationship to each other, and presents its implementation results.

3D Character Motion Synthesis and Control Method for Navigating Virtual Environment Using Depth Sensor (깊이맵 센서를 이용한 3D캐릭터 가상공간 내비게이션 동작 합성 및 제어 방법)

  • Sung, Man-Kyu
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.6
    • /
    • pp.827-836
    • /
    • 2012
  • After the successful advent of Microsoft's Kinect, many interactive contents that control a user's 3D avatar motions in real time have been created. However, because of the Kinect's intrinsic IR-projection limitation, users must face the sensor directly and perform all motions while standing in place. These constraints make it almost impossible for a 3D character to navigate a virtual environment, one of the most essential functionalities in games. This paper proposes a new method that lets a 3D character navigate the virtual environment with highly realistic motions. First, to detect the user's intention to navigate, the method recognizes a walking-in-place motion. Second, the algorithm applies a motion splicing technique that automatically segments the character's motion into upper-body and lower-body parts and then naturally replaces the lower-body motion with pre-processed motion capture data. Because the proposed algorithm synthesizes realistic lower-body walking motion from motion capture data while capturing the upper-body motion online in a puppetry manner, it allows the 3D character to navigate the virtual environment realistically.
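The upper/lower splicing idea above can be sketched per pose frame: keep the upper-body joints from the live capture and substitute the lower-body joints from a pre-processed walking clip. The joint names and the flat joint-angle frame format here are illustrative assumptions, not the paper's actual data layout:

```python
# Sketch of per-frame motion splicing: upper body from live capture,
# lower body from pre-recorded walking motion capture data.

# Hypothetical set of lower-body joints to be replaced.
LOWER_BODY = {"hip", "left_knee", "right_knee", "left_ankle", "right_ankle"}

def splice_frame(live_frame, walk_frame):
    """Merge one pose frame: upper-body joints stay live,
    lower-body joints come from the walking clip."""
    return {joint: (walk_frame[joint] if joint in LOWER_BODY else angle)
            for joint, angle in live_frame.items()}

live = {"spine": 5.0, "left_elbow": 40.0, "hip": 0.0, "left_knee": 0.0}
walk = {"spine": 0.0, "left_elbow": 0.0, "hip": 12.0, "left_knee": 35.0}
# Spine and elbow keep their live values; hip and knee take the walk clip's.
print(splice_frame(live, walk))
```

A production system would additionally blend joint rotations around the splice boundary (e.g. at the hip) to avoid a visible seam, which this sketch omits.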

Comparative Analysis of Markerless Facial Recognition Technology for 3D Character's Facial Expression Animation -Focusing on the method of Faceware and Faceshift- (3D 캐릭터의 얼굴 표정 애니메이션 마커리스 표정 인식 기술 비교 분석 -페이스웨어와 페이스쉬프트 방식 중심으로-)

  • Kim, Hae-Yoon;Park, Dong-Joo;Lee, Tae-Gu
    • Cartoon and Animation Studies
    • /
    • s.37
    • /
    • pp.221-245
    • /
    • 2014
  • With the success of the world's first 3D computer-animated film, "Toy Story," in 1995, industrial development of 3D computer animation gained considerable momentum. Consequently, various 3D animations for TV were produced, and high-quality 3D computer animation games became common. To save a large amount of 3D animation production time and cost, technological development has proceeded actively in step with the expanding industrial demand in this field. Moreover, compared with the traditional approach of producing animation through hand drawings, the efficiency of producing 3D computer animation is far greater. In this study, an experiment and a comparative analysis of markerless motion capture systems for facial expression animation have been conducted, with the aim of improving the efficiency of 3D computer animation production. The Faceware system, a product of Image Metrics, provides sophisticated production tools despite the complexity of its motion capture recognition and application process. The Faceshift system, a product of the company of the same name, though relatively less sophisticated, provides applications for rapid real-time motion recognition. It is hoped that the results of the comparative analysis presented in this paper will serve as baseline data for selecting the appropriate motion capture or keyframe animation method for the most efficient production of facial expression animation, in accordance with production time and cost, the required degree of sophistication, and the media in use.

A Study on Game Character Rigging for Root Motion (루트 모션을 위한 게임 캐릭터 리깅 연구)

  • SangWon Lee
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2023.07a
    • /
    • pp.163-164
    • /
    • 2023
  • In the production environment of real-time 3D rendered games, character movement is created through motion capture or authored by an animator. Motions in which the character moves at a constant speed, such as walking or running, can be implemented by animating the character in place and then having the game program translate it at a constant speed. However, applying the same approach to motions with non-constant speed makes the character's movement look awkward. To compensate for this, engines such as Unreal and Unity 3D provide a root motion feature. The hierarchy required for root motion, however, differs in some respects from the hierarchy that is efficient for an animator's workflow. This paper presents a character rig, built in 3ds Max, that is both animator-friendly and suitable for root motion.


A study of center of gravity on 3d character animation (3D 캐릭터 애니메이션에서의 무게중심 관한 연구)

  • Cho, Jae-Yun
Proceedings of the HCI Society of Korea Conference
    • /
    • 2006.02b
    • /
    • pp.356-361
    • /
    • 2006
  • Motion capture technology is already in widespread use in many animations and games. Nevertheless, many animators still animate by hand, leaving this useful technology aside. This is partly due to the cost and production time of motion capture, but also because applying human motion to a 3D character whose body proportions differ from a human's often looks awkward. Moreover, exaggerated expression, such as emphasizing or distorting a character's traits, is impossible with captured motion. A character comes to life when its own natural movement expresses its personality and body shape. Only when its traits are exaggerated according to the plan and intent, yet without appearing awkward to eyes accustomed to human movement, can a truly lifelike character be created. Characters of various shapes have different centers of gravity, and animating without considering this causes a variety of problems; it is one of the biggest reasons characters look unnatural. The purpose of this paper is to help 3D characters in games and animation appear more vivid and realistic, focusing on understanding the center of gravity, a key factor, and on studying how to use it. The center of gravity must be considered for natural character movement, and it also has an important influence on expressing a character's traits and personality. The value of this paper lies in raising animators' awareness of the importance of the center of gravity and in presenting a new approach to it.


Direct Retargeting Method from Facial Capture Data to Facial Rig (페이셜 리그에 대한 페이셜 캡처 데이터의 다이렉트 리타겟팅 방법)

  • Cho, Hyunjoo;Lee, Jeeho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.2
    • /
    • pp.11-19
    • /
    • 2016
  • This paper proposes a method to retarget facial motion capture data directly to a facial rig. A facial rig is an essential tool in the production pipeline that helps artists create facial animation. Mapping motion capture data directly onto the facial rig is highly convenient, because artists are already familiar with facial rigs and the direct mapping produces results that are ready for the artist's follow-up editing. However, mapping motion data onto a facial rig is not trivial, because facial rigs vary widely in structure, making it hard to devise a generalized mapping method. In this paper, we propose a data-driven approach for robust mapping from motion capture data to an arbitrary facial rig. The results show that our method is intuitive and increases productivity in the creation of facial animation. We also show that our method can successfully retarget expressions to non-human characters whose face shapes differ greatly from a human's.

Reinforcement Learning of Bipedal Walking with Musculoskeletal Models and Reference Motions (근골격 모델과 참조 모션을 이용한 이족보행 강화학습)

  • Jiwoong Jeon;Taesoo Kwon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.29 no.1
    • /
    • pp.23-29
    • /
    • 2023
  • In this paper, we introduce a method for simulating musculoskeletal characters that obtains high-quality bipedal walking at low cost by applying reinforcement learning to reference motions obtained through motion capture. We retarget the reference motion data so that the character model can perform it, and then train the model to reproduce the motion through reinforcement learning. By combining imitation of the reference motion with minimization of the muscles' metabolic energy, the musculoskeletal model learns to walk on two legs in the desired direction. In this way, the musculoskeletal model can be trained at a lower cost than conventional hand-designed controllers while producing high-quality bipedal walking.
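The abstract combines two reward terms: imitation of the reference motion and a metabolic-energy penalty. A minimal sketch of such a shaped reward, where the exponential kernels, scale factors, and weights are illustrative assumptions rather than the paper's actual formulation:

```python
# Sketch of a combined imitation + energy reward for musculoskeletal
# locomotion learning. All constants here are hypothetical.
import math

def walking_reward(pose_error, muscle_activations,
                   w_imitate=0.8, w_energy=0.2):
    """Reward = weighted sum of reference-motion tracking and low effort."""
    # Imitation term: approaches 1 as the simulated pose matches
    # the reference pose (pose_error -> 0).
    r_imitate = math.exp(-2.0 * pose_error)
    # Energy term: penalizes total squared muscle activation,
    # a common proxy for metabolic cost.
    effort = sum(a * a for a in muscle_activations)
    r_energy = math.exp(-0.5 * effort)
    return w_imitate * r_imitate + w_energy * r_energy

# Perfect tracking with relaxed muscles yields the maximum reward.
```

During training, the policy trades off tracking accuracy against effort, which is what steers the model toward energy-efficient, natural-looking gaits.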

Facial Expression Animation which Applies a Motion Data in the Vector based Caricature (벡터 기반 캐리커처에 모션 데이터를 적용한 얼굴 표정 애니메이션)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.5
    • /
    • pp.90-98
    • /
    • 2010
  • This paper describes a method that enables a user to generate facial expression animation for a vector-based caricature by applying facial motion data to it. The method was implemented as an Illustrator plug-in and is equipped with its own user interface. For the experimental data, 28 small markers were attached to the major muscle areas of an actor's face, and a variety of expressions were captured with a Facial Tracker. To connect the caricature with the motion data, the caricature was drawn as Bezier curves, each with a control point at the region corresponding to the position of an important marker on the actor's face during capture. Because the facial motion data and the caricature differ in spatial scale, a motion calibration process was also applied, with controls the user can adjust at any time. To link the caricature to the markers, the user selects the name of each facial region from a menu and then clicks the corresponding region of the caricature. In summary, this paper uses an Illustrator-based user interface to make it possible to generate facial expression animation by applying facial motion data to a vector-based caricature.