• Title/Summary/Keyword: Motion capture


Motion Retargetting Simplification for H-Anim Characters (H-Anim 캐릭터의 모션 리타겟팅 단순화)

  • Jung, Chul-Hee;Lee, Myeong-Won
    • Journal of KIISE: Computing Practices and Letters / v.15 no.10 / pp.791-795 / 2009
  • There is a need for a system-independent human data format that does not depend on a specific graphics tool or program, so that interoperable human data can be used in a network environment. To this end, the Web3D Consortium and ISO/IEC JTC1 WG6 developed the international draft standard ISO/IEC 19774 Humanoid Animation (H-Anim). H-Anim defines the data structure for an articulated human figure, but it does not yet define data for human motion generation. This paper discusses a method for obtaining compatibility and independence of motion data between application programs, and describes a method for simplifying the motion retargeting necessary to define motions for H-Anim characters. In addition, it describes a method for generating H-Anim character animation from arbitrary 3D character models and arbitrary motion capture data with no prior inter-relation, along with its implementation results.
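A minimal sketch of the core retargeting idea the abstract describes, mapping per-joint rotation channels from an arbitrary capture skeleton onto standard H-Anim joint names. The H-Anim names (`HumanoidRoot`, `l_hip`, …) follow the standard; the source joint names and the mapping table are illustrative assumptions, not the paper's actual tables.

```python
# Hypothetical mapping from a capture skeleton's joint names to H-Anim
# joint names. Real retargeting also remaps bone lengths and rest poses;
# this sketch shows only the name-level simplification step.
CAPTURE_TO_HANIM = {
    "Hips": "HumanoidRoot",
    "LeftUpLeg": "l_hip",
    "LeftLeg": "l_knee",
    "RightUpLeg": "r_hip",
    "RightLeg": "r_knee",
}

def retarget_frame(capture_frame):
    """Rename each joint's rotation channel to its H-Anim equivalent,
    dropping joints the target character does not define."""
    return {
        CAPTURE_TO_HANIM[j]: rot
        for j, rot in capture_frame.items()
        if j in CAPTURE_TO_HANIM
    }

frame = {"Hips": (0.0, 0.0, 0.0), "LeftUpLeg": (10.0, 0.0, 5.0)}
print(retarget_frame(frame))
```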

3D Character Motion Synthesis and Control Method for Navigating Virtual Environment Using Depth Sensor (깊이맵 센서를 이용한 3D캐릭터 가상공간 내비게이션 동작 합성 및 제어 방법)

  • Sung, Man-Kyu
    • Journal of Korea Multimedia Society / v.15 no.6 / pp.827-836 / 2012
  • After the successful advent of Microsoft's Kinect, many interactive contents that control a user's 3D avatar motions in real time have been created. However, due to the Kinect's intrinsic IR-projection limitations, users must face the sensor directly and perform all motions from a standing-still position. These constraints make it almost impossible for a 3D character to navigate a virtual environment, one of the most essential functionalities in games. This paper proposes a new method that lets a 3D character navigate the virtual environment with highly realistic motions. First, to infer the user's intention to navigate the virtual environment, the method recognizes a walking-in-place motion. Second, the algorithm applies a motion-splicing technique that automatically segments the character's upper- and lower-body motions and then naturally replaces the lower-body motion with pre-processed motion capture data. Since the proposed algorithm synthesizes realistic lower-body walking motion from motion capture data while capturing the upper-body motion in an on-line puppetry manner, it allows the 3D character to navigate the virtual environment realistically.
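A minimal sketch of the walking-in-place trigger described above: if both feet have oscillated in height over recent frames, treat it as an intent to walk and switch the lower body to the captured walking clip. The threshold and the foot-height representation are illustrative assumptions, not the paper's actual detector.

```python
# Hypothetical walking-in-place detector: both feet must have been lifted
# (height range above a threshold) within the recent sample window.
def is_walking_in_place(left_foot_y, right_foot_y, threshold=0.05):
    """left_foot_y / right_foot_y: recent foot-height samples in metres."""
    left_lift = max(left_foot_y) - min(left_foot_y) > threshold
    right_lift = max(right_foot_y) - min(right_foot_y) > threshold
    return left_lift and right_lift

lf = [0.05, 0.12, 0.05, 0.11]  # left foot stepping
rf = [0.11, 0.05, 0.12, 0.05]  # right foot stepping, out of phase
print(is_walking_in_place(lf, rf))  # both feet oscillate -> True
```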

Motion Capture System using Integrated Pose Sensors (융합센서 기반의 모션캡처 시스템)

  • Kim, Byung-Yul;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information / v.15 no.4 / pp.65-74 / 2010
  • With the aim of solving problems of traditional optical motion capture systems, such as interference among multiple patches and the complexity of sensor and patch placement, this paper proposes a new motion capture system composed of a single camera and multiple motion sensors. Each motion sensor consists of an acceleration sensor and a gyro sensor, which detect the motion of a patched body part and the orientation (roll, pitch, and yaw) of that motion, respectively. Although the image information provides only the 2D positions of the patches, the orientation information acquired by the motion sensors can recover the 3D pose of the patches using simple equations. Since the proposed system uses the minimum number of sensors needed to detect the relative pose of a patch, it is easy to install on a moving body and can be used economically in various applications. The performance and advantages of the proposed system have been demonstrated experimentally.
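A minimal sketch of the kind of accelerometer/gyro fusion such a motion sensor performs: gravity as measured by the accelerometer anchors roll and pitch, while the gyro's angular rate is integrated and blended in via a complementary filter. The filter gain and sample interval are illustrative assumptions, not the paper's actual equations.

```python
import math

def accel_roll_pitch(ax, ay, az):
    """Roll and pitch (radians) from a static accelerometer reading,
    using the gravity vector as the reference."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

def complementary(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the integrated gyro rate (short-term) with the
    accelerometer-derived angle (long-term drift correction)."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# A sensor lying flat at rest sees gravity on the z axis only.
roll, pitch = accel_roll_pitch(0.0, 0.0, 9.81)
print(roll, pitch)  # -> 0.0 0.0
```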

A Data Driven Motion Generation for Driving Simulators Using Motion Texture (모션 텍스처를 이용한 차량 시뮬레이터의 통합)

  • Cha, Moo-Hyun;Han, Soon-Hung
    • Transactions of the Korean Society of Mechanical Engineers A / v.31 no.7 s.262 / pp.747-755 / 2007
  • To improve the realism of motion simulators, data-driven motion generation has been introduced, which simply records and replays the motion of real vehicles. Real samples yield a high degree of realism, but allow no interaction between users and the simulation. In character animation, by contrast, user-controllable motions are generated from a database of motion capture signals together with appropriate control algorithms. In this study, as a tool for an interactive data-driven driving simulator, we propose a new motion generation method. We sample motion data from a real vehicle, transform the data into an appropriate data structure (a motion block), and store a series of such blocks in a database. During simulation, the system searches for and synthesizes optimal motion blocks from the database, generating a motion stream that reflects the current simulation conditions and parameterized user demands. We demonstrate the value of the proposed method through experiments with the integrated motion platform system.
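A minimal sketch of the motion-block lookup step: each stored block is tagged with the driving condition it was recorded under, and at run time the block whose condition is closest to the current simulation state is selected. The feature fields, weights, and block names are illustrative assumptions, not the paper's actual database schema.

```python
# Hypothetical motion-block database: each entry pairs a recorded motion
# segment (omitted here) with the condition it was sampled under.
motion_db = [
    {"name": "cruise", "speed": 60.0, "lat_acc": 0.0},
    {"name": "hard_turn", "speed": 40.0, "lat_acc": 4.5},
    {"name": "braking", "speed": 20.0, "lat_acc": 0.2},
]

def nearest_block(speed, lat_acc):
    """Pick the block minimizing a weighted squared distance between the
    stored condition and the current simulation state."""
    def cost(b):
        return (b["speed"] - speed) ** 2 + (10 * (b["lat_acc"] - lat_acc)) ** 2
    return min(motion_db, key=cost)

print(nearest_block(55.0, 0.1)["name"])  # -> cruise
```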

Motion-capture-based walking simulation of digital human adapted to laser-scanned 3D as-is environments for accessibility evaluation

  • Maruyama, Tsubasa;Kanai, Satoshi;Date, Hiroaki;Tada, Mitsunori
    • Journal of Computational Design and Engineering / v.3 no.3 / pp.250-265 / 2016
  • Owing to our rapidly aging society, accessibility evaluation to enhance the ease and safety of access to indoor and outdoor environments for the elderly and disabled is increasing in importance. Accessibility must be assessed not only against general standards but also in terms of physical and cognitive friendliness for users of different ages, genders, and abilities. Meanwhile, human behavior simulation has been progressing in the areas of crowd behavior analysis and emergency evacuation planning. However, in human behavior simulation, environment models represent only "as-planned" situations. In addition, a pedestrian model cannot generate the detailed articulated movements of various people of different ages and genders in the simulation. Therefore, the final goal of this research was to develop a virtual accessibility evaluation by combining realistic human behavior simulation using a digital human model (DHM) with "as-is" environment models. To achieve this goal, we developed an algorithm for generating human-like DHM walking motions, adapting the strides, turning angles, and footprints to laser-scanned 3D as-is environments including slopes and stairs. The DHM motion was generated based only on motion-capture (MoCap) data for flat walking. Our implementation constructed as-is 3D environment models from laser-scanned point clouds of real environments and enabled a DHM to walk autonomously in various environment models. The difference in joint angles between the DHM and MoCap data was evaluated. Demonstrations of our environment modeling and walking simulation in indoor and outdoor environments including corridors, slopes, and stairs are illustrated in this study.

Human-like Arm Movement Planning for Humanoid Robots Using Motion Capture Database (모션캡쳐 데이터베이스를 이용한 인간형 로봇의 인간다운 팔 움직임 계획)

  • Kim, Seung-Su;Kim, Chang-Hwan;Park, Jong-Hyeon;You, Bum-Jae
    • The Journal of Korea Robotics Society / v.1 no.2 / pp.188-196 / 2006
  • When communicating and interacting with a human through motions or gestures, a humanoid robot needs not only to look like a human but also to behave like one, so that the meanings of the motions or gestures are clearly conveyed. Among various human-like behaviors, the arm motions of a humanoid robot are essential for communicating with people through motion. In this work, a mathematical representation for characterizing human arm motions is first proposed. Human arm motions are characterized by the elbow elevation angle, which is determined from the position and orientation of the hand. This representation is obtained mathematically using an approximation tool, the Response Surface Method (RSM). A method for generating human-like arm motions in real time using the proposed representation is then presented. The proposed method was evaluated by generating human-like arm motions when the humanoid robot was asked to move its arm from one point to another, including rotation of its hand. The example motion was performed on the KIST humanoid robot, MAHRU.
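A minimal sketch of the functional form RSM typically produces: the elbow elevation angle approximated as a second-order polynomial of the hand position. The coefficients below are made-up placeholders for illustration; in the paper they would be fitted to the motion capture database.

```python
# Hypothetical second-order response surface for the elbow elevation angle
# (degrees) as a function of hand position (x, y, z). Coefficients are
# illustrative only; RSM would fit them to captured arm motions.
def elbow_elevation(x, y, z, c=(30.0, 5.0, -3.0, 2.0, 0.5, -0.2, 0.1)):
    b0, bx, by, bz, bxx, byy, bzz = c
    return b0 + bx * x + by * y + bz * z + bxx * x * x + byy * y * y + bzz * z * z

print(elbow_elevation(0.0, 0.0, 0.0))  # -> 30.0 (the constant term)
```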


Monosyllable Speech Recognition through Facial Movement Analysis (안면 움직임 분석을 통한 단음절 음성인식)

  • Kang, Dong-Won;Seo, Jeong-Woo;Choi, Jin-Seung;Choi, Jae-Bong;Tack, Gye-Rae
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.6 / pp.813-819 / 2014
  • The purpose of this study was to extract accurate parameters of facial movement features using a 3D motion capture system, for lip-reading-based speech recognition. Instead of features obtained from a traditional camera image, the 3D motion system was used to obtain quantitative data on actual facial movements and to analyze 11 variables that exhibit particular patterns, such as nose, lip, jaw, and cheek movements, during monosyllable vocalization. Fourteen subjects, all in their twenties, were asked to vocalize 11 types of Korean vowel monosyllables three times each, with 36 reflective markers on their faces. The facial movement data were converted into 11 parameters and represented as patterns for each monosyllable vocalization. The parameter patterns were then learned and recognized for each monosyllable using speech recognition algorithms based on a Hidden Markov Model (HMM) and the Viterbi algorithm. The recognition accuracy over the 11 monosyllables was 97.2%, which suggests the possibility of recognizing Korean speech through quantitative facial movement analysis.
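A minimal sketch of the Viterbi decoding step used in HMM-based recognition: given a toy two-state model and an observation sequence, recover the most likely hidden state path. The states, transition and emission probabilities are a made-up toy model, not the paper's monosyllable HMMs.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state path for the observation sequence."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s at this step.
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Toy model: two hidden states, two observation symbols.
states = ["A", "B"]
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
print(viterbi(["x", "y"], states, start_p, trans_p, emit_p))  # -> ['A', 'B']
```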

The Implementation of Day and Night Intruder Motion Detection System using Arduino Kit (아두이노 키트를 이용한 주야간 침입자 움직임 감지 시스템 구현)

  • Young-Oh Han
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.5 / pp.919-926 / 2023
  • In this paper, we implement a surveillance camera system capable of day and night shooting. To this end, it is designed to capture clear images even at night using a CMOS image sensor together with an IR-LED. In addition, a relatively simple motion detection algorithm is proposed based on color-model separation. Motion is detected by extracting only the H channel of the color model, dividing the image into blocks, and then applying a block matching method that compares the average color value between consecutive frames. When motion is detected during filming, an alarm sounds automatically, and the event screen can be captured and saved to a PC, yielding a day-and-night motion detection system.
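A minimal sketch of the block matching step described above: the H channel is split into blocks, and a block is flagged as moving when its mean value differs between consecutive frames by more than a threshold. The block size, threshold, and the plain nested-list frame representation are illustrative assumptions.

```python
# Hypothetical H-channel block comparison. Frames are 2D lists of hue values.
def block_means(frame, block=2):
    """Mean value of each block-by-block tile, keyed by its top-left corner."""
    h, w = len(frame), len(frame[0])
    means = {}
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [frame[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            means[(by, bx)] = sum(vals) / len(vals)
    return means

def moving_blocks(prev_h, curr_h, block=2, threshold=10.0):
    """Blocks whose mean hue changed by more than the threshold."""
    prev_m = block_means(prev_h, block)
    curr_m = block_means(curr_h, block)
    return [k for k in curr_m if abs(curr_m[k] - prev_m[k]) > threshold]

prev = [[100] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
curr[0][0] = curr[0][1] = curr[1][0] = curr[1][1] = 160  # hue shift in one block
print(moving_blocks(prev, curr))  # -> [(0, 0)]
```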

Medical Digital Twin-Based Dynamic Virtual Body Capture System (메디컬 디지털 트윈 기반 동적 가상 인체 획득 시스템)

  • Kim, Daehwan;Kim, Yongwan;Lee, Kisuk
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.10 / pp.1398-1401 / 2020
  • We present the concept of a Medical Digital Twin (MDT) that can predict and analyze medical diseases using computer simulations, and introduce a dynamic virtual body capture system for creating it. The MDT is a technology that creates a 3D digital virtual human body reflecting an individual's medical and biometric information. The virtual human body consists of a static virtual body that reflects the individual's internal and external information and a dynamic virtual body that reflects the individual's motion. In particular, we describe an early version of the dynamic virtual body capture system, which enables continuous simulation of musculoskeletal diseases.