Title/Summary/Keyword: Motion Capture and Mapping

A Motion Capture and Mapping System: Kinect Based Human-Robot Interaction Platform (동작포착 및 매핑 시스템: Kinect 기반 인간-로봇상호작용 플랫폼)

  • Yoon, Joongsun
    • Journal of the Korea Academia-Industrial cooperation Society, v.16 no.12, pp.8563-8567, 2015
  • We propose a human-robot interaction (HRI) platform based on motion capture and mapping. The platform consists of capture, processing/mapping, and action parts: a motion capture sensor, a computer, and an avatar and/or physical robots are selected as the capture, processing/mapping, and action parts, respectively. Case studies, an interactive presentation and a LEGO robot car, are presented to show the design and implementation process of the Kinect-based HRI platform.
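
To make the capture/processing-mapping/action split concrete, the sketch below wires the three parts together in a minimal loop. The functions `read_skeleton` and `send_drive_command` are hypothetical stand-ins for the Kinect and robot APIs, not the paper's implementation:

```python
# Minimal sketch of the capture -> processing/mapping -> action pipeline.
# read_skeleton() and send_drive_command() are hypothetical placeholders
# for the Kinect and robot APIs, not the paper's implementation.

def read_skeleton():
    """Capture part: return joint positions from the motion capture sensor."""
    # A real system would query the Kinect SDK here.
    return {"hand_right": (0.3, 0.1, 1.2), "shoulder_right": (0.2, 0.4, 1.2)}

def map_pose_to_command(skeleton):
    """Processing/mapping part: translate a pose into a robot command."""
    hand_y = skeleton["hand_right"][1]
    shoulder_y = skeleton["shoulder_right"][1]
    # A raised hand drives the robot forward; a lowered hand stops it.
    return "forward" if hand_y > shoulder_y else "stop"

def send_drive_command(command):
    """Action part: forward the command to an avatar or a physical robot."""
    print(f"robot <- {command}")

if __name__ == "__main__":
    send_drive_command(map_pose_to_command(read_skeleton()))
```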

Real-time Facial Modeling and Animation based on High Resolution Capture (고해상도 캡쳐 기반 실시간 얼굴 모델링과 표정 애니메이션)

  • Byun, Hae-Won
    • Journal of Korea Multimedia Society, v.11 no.8, pp.1138-1145, 2008
  • Recently, performance-driven facial animation has become popular in various areas. In television and games, it is important to guarantee real-time animation for various characters whose appearance differs from the performer's. In this paper, we present a new facial animation approach based on motion capture. For this purpose, we address three issues: facial expression capture, expression mapping, and facial animation. Finally, we show the results of various experiments on different types of face models.
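
The expression-mapping step in performance-driven facial animation is often posed as fitting blendshape weights to captured marker offsets; the least-squares sketch below illustrates that general idea on toy data and is not the paper's specific method:

```python
import numpy as np

# Generic expression-mapping sketch: solve for blendshape weights w such
# that B @ w approximates the captured marker displacements d.
# B's columns are per-blendshape marker offsets (toy data here).

rng = np.random.default_rng(0)
num_markers, num_blendshapes = 30, 8
B = rng.normal(size=(3 * num_markers, num_blendshapes))  # blendshape basis
w_true = rng.uniform(0, 1, size=num_blendshapes)         # hidden weights
d = B @ w_true + rng.normal(scale=0.01, size=3 * num_markers)  # captured offsets

# Least-squares fit, then clamp to the usual [0, 1] blendshape range.
w, *_ = np.linalg.lstsq(B, d, rcond=None)
w = np.clip(w, 0.0, 1.0)
print("recovered weights:", np.round(w, 2))
```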

Human-like Whole Body Motion Generation of Humanoid Based on Simplified Human Model (단순인체모델 기반 휴머노이드의 인간형 전신동작 생성)

  • Kim, Chang-Hwan;Kim, Seung-Su;Ra, Syung-Kwon;You, Bum-Jae
    • The Journal of Korea Robotics Society, v.3 no.4, pp.287-299, 2008
  • People expect a humanoid robot to move as naturally as a human being does. Natural movements allow a humanoid robot to provide safer physical services and to communicate with people through motion more accurately. This work presents a methodology for generating natural motions for a humanoid robot, converted from human motion capture data. The methodology produces not only kinematically mapped motions but also dynamically mapped ones: the kinematic mapping reflects human-likeness in the converted motions, while the dynamic mapping ensures the stability of the robot's whole-body movements. The methodology consists of three processes: (a) human modeling, (b) kinematic mapping, and (c) dynamic mapping. The optimization-based human modeling gives the ZMP (Zero Moment Point) and COM (Center of Mass) time trajectories of an actor. Those trajectories are modified for a humanoid robot through the kinematic mapping. In addition to modifying the ZMP and COM trajectories, the lower-body (pelvis and legs) motion of the actor is scaled kinematically and converted to a motion feasible for the humanoid robot, considering dynamic aspects. The KIST humanoid robot, Mahru, imitated a dancing motion to evaluate the methodology, showing good agreement with the captured motion.
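
As a minimal illustration of the kinematic-mapping step, the sketch below rescales a captured COM height trajectory by an assumed robot-to-human leg-length ratio so the trajectory fits the robot's proportions; the paper's full optimization-based pipeline is far richer than this:

```python
import numpy as np

# Kinematic-mapping sketch: rescale a captured COM height trajectory from
# human proportions to robot proportions. All numbers are toy assumptions,
# not the paper's data.
human_leg_length = 0.90   # meters (assumed)
robot_leg_length = 0.60   # meters (assumed)
scale = robot_leg_length / human_leg_length

t = np.linspace(0.0, 2.0, 11)
com_height_human = 0.95 + 0.03 * np.sin(2 * np.pi * t)  # captured COM height
com_height_robot = com_height_human * scale              # scaled for the robot

for ti, h in zip(t, com_height_robot):
    print(f"t={ti:.1f}s  robot COM height={h:.3f} m")
```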

A Study of Electrode Locations for Design of ECG Monitoring Smart Clothing based on Body Mapping (심전도 모니터링 스마트 의류 디자인을 위한 바디매핑 기반 전극 위치 연구)

  • Cho, Hakyung;Cho, Sang woo
    • Fashion & Textile Research Journal, v.17 no.6, pp.1039-1049, 2015
  • The increasing need for 24-hour monitoring of biological signals has been accompanied by growing interest in wearable systems that can register an ECG at any time and place. ECG-monitoring clothing is a wearable system that records heart function continuously, but accurate measurement has been difficult due to motion artifacts. Although various factors may cause noise in measurements during motion, the variations in the body surface and clothing during movement, which eventually shift and displace the electrodes, are particularly noteworthy. Therefore, this study used biomedical body mapping and a motion-capture system to measure and analyze the changes in the body surface and garment during movement. The area where the friction and separation between the garment and skin are lowest was deduced to be the appropriate location for the ECG electrodes. For this study, 5 male and 5 female subjects in their 20s were selected, and through selected body movements, the changes in the garment and skin were analyzed using the motion-capture system. As a result, the area below the chest circumference and the area below the shoulder blades were proposed as the optimal electrode locations for ECG monitoring.
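
The placement criterion, choosing the region with the least garment-skin displacement across movements, can be expressed as a simple argmin; the region names and values below are illustrative, not the study's measurements:

```python
# Sketch of the electrode-placement criterion: among candidate body regions,
# pick the one whose garment-skin displacement across movements is lowest.
# The regions and displacement values are illustrative, not the study's data.

displacement_mm = {
    "below_chest_circumference": [1.2, 0.9, 1.1, 1.0],
    "below_shoulder_blades":     [1.0, 1.3, 0.8, 1.1],
    "upper_chest":               [3.5, 4.1, 3.8, 3.9],
    "waist":                     [2.7, 3.0, 2.5, 2.8],
}

mean_disp = {region: sum(v) / len(v) for region, v in displacement_mm.items()}
best = min(mean_disp, key=mean_disp.get)
print(f"lowest mean displacement: {best} ({mean_disp[best]:.2f} mm)")
```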

Direct Retargeting Method from Facial Capture Data to Facial Rig (페이셜 리그에 대한 페이셜 캡처 데이터의 다이렉트 리타겟팅 방법)

  • Cho, Hyunjoo;Lee, Jeeho
    • Journal of the Korea Computer Graphics Society, v.22 no.2, pp.11-19, 2016
  • This paper proposes a method to directly retarget facial motion capture data to a facial rig. The facial rig is an essential tool in the production pipeline that helps the artist create facial animation. Direct mapping from motion capture data to the facial rig is convenient because artists are already familiar with facial rigs, and the mapping results are ready for the artist's follow-up editing. However, mapping motion data onto a facial rig is not trivial, because facial rigs vary widely in structure, making it hard to devise a generalized mapping method. In this paper, we propose a data-driven approach for robust mapping from motion capture data to an arbitrary facial rig. The results show that our method is intuitive and increases productivity in the creation of facial animation. We also show that our method can successfully retarget expressions to non-human characters whose facial shapes differ greatly from a human's.
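
A common data-driven formulation of such retargeting is to learn a mapping from capture features to rig control values from example pairs; the linear least-squares sketch below shows this generic idea on synthetic data and is not the paper's actual model:

```python
import numpy as np

# Data-driven retargeting sketch: learn a linear map M from motion-capture
# features x to rig control values y using example pairs, then apply it to
# new capture frames. Toy data; the paper's actual model may differ.

rng = np.random.default_rng(1)
num_examples, cap_dim, rig_dim = 50, 12, 5
X = rng.normal(size=(num_examples, cap_dim))       # capture feature examples
M_true = rng.normal(size=(cap_dim, rig_dim))
Y = X @ M_true                                     # paired rig control values

M, *_ = np.linalg.lstsq(X, Y, rcond=None)          # fit the mapping

x_new = rng.normal(size=cap_dim)                   # a new capture frame
rig_controls = x_new @ M                           # retargeted rig values
print(np.round(rig_controls, 3))
```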

Realistic Visual Simulation of Water Effects in Response to Human Motion using a Depth Camera

  • Kim, Jong-Hyun;Lee, Jung;Kim, Chang-Hun;Kim, Sun-Jeong
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.2, pp.1019-1031, 2017
  • In this study, we propose a new method for simulating water responding to human motion. Motion data obtained from motion-capture devices are represented as a jointed skeleton, which interacts with the velocity field of the water simulation. To integrate the motion data into the water simulation space, a mapping relationship must be established between two fields with different properties. However, severe numerical instability can occur if the mapping breaks down, adversely affecting the realism of the human-water interaction. To address this problem, our method extends the joint velocity mapped to each grid point to neighboring nodes, and refines these extended velocities for increased robustness in the water solver. Our experimental results demonstrate that the water animation responds to human motions such as walking and jumping.
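
The core mapping step, splatting a joint's velocity into the simulation grid and extending it to neighboring cells, can be sketched as follows; the grid size, neighborhood weights, and velocity values are illustrative choices, not the paper's parameters:

```python
import numpy as np

# Sketch of mapping a joint velocity into a 2D simulation grid and extending
# it to the neighboring cells, in the spirit of the abstract.

N = 8
grid_u = np.zeros((N, N))            # x-component of the grid velocity field
joint_pos = np.array([0.55, 0.40])   # joint position, normalized to [0, 1]^2
joint_vel_x = 1.5                    # joint x-velocity from the skeleton

i, j = (joint_pos * N).astype(int)   # grid cell containing the joint
for di in (-1, 0, 1):                # extend to the 3x3 neighborhood
    for dj in (-1, 0, 1):
        ni, nj = i + di, j + dj
        if 0 <= ni < N and 0 <= nj < N:
            weight = 1.0 if (di, dj) == (0, 0) else 0.5
            grid_u[ni, nj] += weight * joint_vel_x

print(np.round(grid_u, 2))
```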

2.5D human pose estimation for shadow puppet animation

  • Liu, Shiguang;Hua, Guoguang;Li, Yang
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.4, pp.2042-2059, 2019
  • Digital shadow puppetry has traditionally relied on expensive motion capture equipment and complex design. In this paper, a low-cost driving technique is presented that captures human pose data with a simple camera in real scenarios and uses it to drive a virtual Chinese shadow play in a 2.5D scene. We propose a method for extracting human pose data to drive the virtual shadow play, which we call 2.5D human pose estimation. First, we use a 3D human pose estimation method to obtain initial data. In the subsequent transformation, we treat the depth feature as an implicit feature and map the body joints to a constrained range; we call the resulting pose data 2.5D pose data. However, the 2.5D pose data cannot directly control the shadow puppet well, due to differences in motion pattern and composition structure between a real pose and a shadow puppet. To this end, the 2.5D pose data are transformed in an implicit pose-mapping space based on a self-network, and the final 2.5D pose expression data are produced for animating shadow puppets. Experimental results demonstrate the effectiveness of the new method.
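
The 2.5D transformation can be illustrated by keeping the 2D joint coordinates and compressing depth into a narrow band, since a shadow puppet moves almost in a plane; the depth range below is an assumed example, not the paper's constraint:

```python
import numpy as np

# Sketch of the 2.5D idea: keep the full 2D joint coordinates but compress
# the depth coordinate into a narrow constraint band. The band [0, 0.2] m
# is an assumption for illustration.

joints_3d = np.array([[0.0, 1.6, 0.30],   # head  (x, y, z in meters)
                      [0.0, 1.2, 0.10],   # torso
                      [0.3, 1.0, 0.45]])  # hand

z_min, z_max = 0.0, 0.2                    # allowed depth band (assumed)
z = joints_3d[:, 2]
z_scaled = z_min + (z - z.min()) / (z.max() - z.min()) * (z_max - z_min)

joints_25d = np.column_stack([joints_3d[:, :2], z_scaled])
print(np.round(joints_25d, 3))
```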

Interactive Facial Expression Animation of Motion Data using Sammon's Mapping (Sammon 매핑을 사용한 모션 데이터의 대화식 표정 애니메이션)

  • Kim, Sung-Ho
    • The KIPS Transactions:PartA, v.11A no.2, pp.189-194, 2004
  • This paper describes a method to distribute high-dimensional facial expression motion data over a 2-dimensional space, and a method to create facial expression animation in real time as an animator navigates this space and selects desired expressions. In this paper, the expression space was composed using about 2400 facial expression frames. Constructing the expression space comes down to determining the shortest distance between any two expressions. The expression space, as a manifold space, approximates the distance between two points as follows: after defining an expression state vector that represents each expression via a distance matrix of the distances between markers, if two expressions are adjacent, their distance is taken as an approximation of the shortest distance between them. Once adjacency distances are determined between neighboring expressions, these distances are chained to yield the shortest distance between any two expression states, using the Floyd algorithm. To visualize the high-dimensional expression space, it is projected onto two dimensions using Sammon's Mapping. Facial animation is then created in real time as the animator navigates the 2-dimensional space through a user interface.
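
The two computational steps named in the abstract, the Floyd algorithm for shortest distances over adjacency-linked expressions and Sammon's Mapping for the 2D projection, can be sketched as follows on toy state vectors; the gradient-descent Sammon implementation and its parameters are illustrative:

```python
import numpy as np

# Sketch: chain adjacency distances into shortest distances with the
# Floyd(-Warshall) algorithm, then project to 2D with Sammon's mapping.
# Toy expression-state vectors; parameters are illustrative.

rng = np.random.default_rng(2)
n, dim = 40, 10
states = rng.normal(size=(n, dim))            # expression state vectors

# Adjacency distances: connect each state to its k nearest neighbors.
D = np.linalg.norm(states[:, None] - states[None, :], axis=-1)
k = 5
G = np.full((n, n), np.inf)
np.fill_diagonal(G, 0.0)
for i in range(n):
    for j in np.argsort(D[i])[1:k + 1]:
        G[i, j] = G[j, i] = D[i, j]

# Floyd(-Warshall): shortest path distance between any two states.
for m in range(n):
    G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
G[np.isinf(G)] = G[np.isfinite(G)].max()      # guard: disconnected pairs

# Sammon's mapping by plain gradient descent on the stress function.
Y = rng.normal(size=(n, 2))
c = G[np.triu_indices(n, 1)].sum()
for _ in range(500):
    d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1) + np.eye(n)
    delta = (G - d) / (d * G + np.eye(n))     # (G - d) / (d * G), safe diag
    np.fill_diagonal(delta, 0.0)
    diff = Y[:, None] - Y[None, :]
    Y += 0.3 * (2.0 / c) * (delta[:, :, None] * diff).sum(axis=1)

print("2D layout of the first 5 expression states:\n", np.round(Y[:5], 3))
```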

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image (어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Dai, Yanyan;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems, v.20 no.8, pp.868-874, 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM, based on extracting obstacle features using Lucas-Kanade optical flow (LKOF) motion detection on images obtained through fisheye lenses mounted on robots. Omni-directional image sensors suffer from distortion because they use a fisheye lens or mirror, but they enable real-time image processing for mobile robots because they measure all information around the robot at once. Previous omni-directional vision SLAM research used feature points in fully corrected fisheye images; the proposed algorithm corrects only the feature points of obstacles, which yields faster processing than previous systems. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through downward-facing fisheye lenses. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, we estimate the robot position with an Extended Kalman Filter based on the obstacle positions obtained by LKOF, and create a map. We confirm the reliability of the mapping algorithm by comparing maps obtained using the proposed algorithm with real maps.
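
The LKOF motion-estimation step can be sketched with OpenCV's pyramidal Lucas-Kanade tracker on two synthetic frames standing in for fisheye images; the paper's histogram floor filter and EKF stages are omitted here:

```python
import cv2
import numpy as np

# Sketch of the motion-estimation step: track feature points between two
# frames with pyramidal Lucas-Kanade optical flow. Two synthetic frames
# with a shifted square stand in for real fisheye images.

frame0 = np.zeros((120, 160), dtype=np.uint8)
frame1 = np.zeros((120, 160), dtype=np.uint8)
frame0[40:70, 50:80] = 255          # an "obstacle" patch
frame1[40:70, 56:86] = 255          # the same patch shifted 6 px right

p0 = cv2.goodFeaturesToTrack(frame0, maxCorners=20, qualityLevel=0.3,
                             minDistance=5)
p1, status, _ = cv2.calcOpticalFlowPyrLK(frame0, frame1, p0, None,
                                         winSize=(15, 15), maxLevel=2)

flow = (p1 - p0)[status.flatten() == 1].reshape(-1, 2)
print("mean motion vector (dx, dy):", np.round(flow.mean(axis=0), 2))
```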

Real-time Interactive Animation System for Low-Priced Motion Capture Sensors (저가형 모션 캡처 장비를 이용한 실시간 상호작용 애니메이션 시스템)

  • Kim, Jeongho;Kang, Daeun;Lee, Yoonsang;Kwon, Taesoo
    • Journal of the Korea Computer Graphics Society, v.28 no.2, pp.29-41, 2022
  • In this paper, we introduce a novel real-time interactive animation system that uses real-time motion input from a low-cost motion-sensing device, the Kinect. Our system generates interaction motions between the user character and a counterpart character in real time: the user character's motion mimics the user's input motion, while the other character's motion is decided as a reaction to it. During a pre-processing step, our system analyzes the reference motion data and builds a mapping model in advance. At run time, the system first generates initial poses for the two characters and then modifies them to produce plausible interacting behavior. Our experimental results show plausible interaction animations, in which the user character performs a modified version of the user's input motion and the counterpart character reacts properly against it. The proposed method will be useful for developing real-time interactive animation systems that provide a more immersive experience for users.
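
One generic way to realize the run-time lookup against a pre-analyzed mapping model is nearest-neighbor matching over paired reference poses, as sketched below on synthetic data; this is a stand-in for the paper's actual model:

```python
import numpy as np

# Sketch of the run-time idea: given the user's input pose, look up the
# nearest frame in pre-analyzed reference motion data and return the
# counterpart character's pose from the same frame. Nearest-neighbor lookup
# is a generic stand-in for the paper's mapping model; data is synthetic.

rng = np.random.default_rng(3)
num_frames, pose_dim = 200, 15
ref_user = rng.normal(size=(num_frames, pose_dim))         # reference user poses
ref_counterpart = rng.normal(size=(num_frames, pose_dim))  # paired reactions

def react_to(user_pose):
    """Return the counterpart pose paired with the closest reference pose."""
    idx = np.argmin(np.linalg.norm(ref_user - user_pose, axis=1))
    return ref_counterpart[idx]

live_pose = rng.normal(size=pose_dim)   # e.g. a pose streamed from a Kinect
print(np.round(react_to(live_pose)[:5], 3))
```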