• Title/Summary/Keyword: optical motion capture


Noise-Robust Capturing and Animating Facial Expression by Using an Optical Motion Capture System (광학식 동작 포착 장비를 이용한 노이즈에 강건한 얼굴 애니메이션 제작)

  • Park, Sang-Il
    • Journal of Korea Game Society / v.10 no.5 / pp.103-113 / 2010
  • In this paper, we present a practical method for generating facial animation by using an optical motion capture system. In our setup, we assume a situation in which the body motion and the facial expression are captured simultaneously, which degrades the quality of the captured marker data. To overcome this problem, we provide an integrated framework, based on the local coordinate system of each marker, for labeling the marker data, hole-filling, and removing noise. We validate the method by applying it to generate a short animated film.
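The abstract above only outlines the local-coordinate idea, so here is a minimal, hypothetical sketch (not the authors' implementation) of how a hole in one facial marker's trajectory could be filled from three neighbouring markers that are assumed to move quasi-rigidly with it:

```python
import numpy as np

def local_frame(p0, p1, p2):
    """Build an orthonormal frame (origin, 3x3 rotation) from three markers."""
    x = p1 - p0
    x /= np.linalg.norm(x)
    z = np.cross(x, p2 - p0)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    return p0, np.column_stack([x, y, z])

def fill_hole(target, ref0, ref1, ref2, visible):
    """Fill missing samples of `target` (N x 3) using three reference markers.

    `visible` is a boolean mask over frames; the target's offset, expressed in
    the reference markers' local frame, is taken from the last visible frame
    and re-applied in the occluded frames.
    """
    filled = target.copy()
    last_offset = None
    for t in range(len(target)):
        origin, R = local_frame(ref0[t], ref1[t], ref2[t])
        if visible[t]:
            last_offset = R.T @ (target[t] - origin)   # store local coordinates
        elif last_offset is not None:
            filled[t] = origin + R @ last_offset        # reconstruct from neighbours
    return filled
```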

Unsupervised Motion Pattern Mining for Crowded Scenes Analysis

  • Wang, Chongjing;Zhao, Xu;Zou, Yi;Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.12 / pp.3315-3337 / 2012
  • Crowded scene analysis is a challenging topic in the computer vision field. How to detect diverse motion patterns in crowded scenarios from videos is the critical yet difficult part of this problem. In this paper, we propose a novel approach to mining motion patterns by utilizing motion information over both long-term periods and short intervals simultaneously. To capture long-term motions effectively, we introduce the Motion History Image (MHI) representation to gain a global perspective on the crowd motion. The combination of MHI and optical flow, which is used to obtain instantaneous motion information, gives rise to discriminative spatial-temporal motion features. Benefiting from the robustness and efficiency of this novel motion representation, the subsequent motion pattern mining is implemented in a completely unsupervised way. The motion vectors are clustered hierarchically by an automatic hierarchical clustering algorithm built on a graphical model. This method overcomes the instability of optical flow in dealing with temporal continuity in crowded scenes. The clustering results reveal the distribution of motion patterns in the crowded videos. To validate the performance of the proposed approach, we conduct experimental evaluations on several challenging videos including vehicles and pedestrians. The reliable detection results demonstrate the effectiveness of our approach.
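As a rough illustration of the long-term/short-term combination described above, the sketch below computes a motion history image with NumPy (the OpenCV `motempl` helpers live in the contrib package) and pairs it with Farneback dense optical flow. The thresholds and decay length are illustrative placeholders, and the paper's clustering stage is not shown:

```python
import cv2
import numpy as np

MHI_DURATION = 30          # frames a motion trace persists in the history image
MOTION_THRESHOLD = 32      # per-pixel frame-difference threshold

def motion_features(video_path):
    """Yield (MHI, flow) pairs: long-term motion history plus instantaneous flow."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mhi = np.zeros(prev.shape, np.float32)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        moving = cv2.absdiff(gray, prev) > MOTION_THRESHOLD
        # Long-term component: refresh moving pixels, let the rest decay.
        mhi = np.where(moving, MHI_DURATION, np.maximum(mhi - 1, 0)).astype(np.float32)
        # Short-term component: dense optical flow between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        yield mhi, flow
        prev = gray
    cap.release()
```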

Development of Arm Motion Sensing System Using Potentiometer for Robot Arm Control (로봇 팔의 제어를 위한 포텐셜미터를 이용한 팔 움직임 감지 시스템 개발)

  • Park, Ki-Hoon;Park, Seong-Hun;Yoon, Tae-Sung;Kwak, Gun-Pyong;Ann, Ho-Kyun;Park, Seung-Kyu
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.4 / pp.872-878 / 2012
  • In this paper, an arm motion sensing system using potentiometers is developed. Most motion sensing systems use an optical method for the sake of motion data quality. The optical method requires an expensive capture system and much time for correcting the captured data. A mechanical method entails relatively low cost, but it uses wires and, like the optical method, takes much time to correct the data. To solve these problems, an arm motion sensing system is newly developed in this paper using low-cost potentiometers and a simple calculation method for the joint angles and angular velocities. To verify the performance of the developed system, practical experiments were conducted using real human arm motion and a robot arm. The experimental results showed that the motion of the robot arm controlled by the output of the developed motion sensing system is very similar to the motion of the human arm.
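The joint-angle calculation itself is not spelled out in the abstract; a minimal sketch of the usual approach, assuming a linear potentiometer read through a 10-bit ADC (all constants are placeholders, not the paper's values), is:

```python
# Hypothetical sketch: map potentiometer ADC readings to joint angles and
# estimate angular velocities by finite differences.
ADC_MAX = 1023           # 10-bit ADC full-scale reading (assumed)
JOINT_RANGE_DEG = 300.0  # usable rotation range of the potentiometer (assumed)
DT = 0.01                # sampling period in seconds, i.e. 100 Hz (assumed)

def adc_to_angle(adc_value):
    """Convert a raw ADC reading into a joint angle in degrees."""
    return adc_value / ADC_MAX * JOINT_RANGE_DEG

def angular_velocity(prev_angle, angle):
    """Backward-difference estimate of angular velocity in deg/s."""
    return (angle - prev_angle) / DT
```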

Sensor Fusion for Motion Capture System (모션 캡쳐 시스템을 위한 센서 퓨전)

  • Jeong, Il-Kwon;Park, ChanJong;Kim, Hyeong-Kyo;Wohn, KwangYun
    • Journal of the Korea Computer Graphics Society / v.6 no.3 / pp.9-15 / 2000
  • We propose a sensor fusion technique for motion capture systems. In our system, two kinds of sensors are used for mutual assistance. Four magnetic sensors (markers) are attached on the upper arms and the backs of the hands to assist twelve optical sensors attached on the arms of a performer. The optical sensor information is not always complete because the optical markers can be hidden by obstacles. In this case, magnetic sensor information is used to link the discontinuous optical sensor information. We use system identification techniques to model the relation between the sensors' signals. Dynamic systems are constructed from input-output data, and the best model is determined from the set of candidate models using canonical system identification techniques. Our current approach uses a simple signal processing technique; in future work, we will propose a new method using other signal processing techniques such as the Wiener or Kalman filter.
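As a simplified stand-in for the identified model described above, the sketch below fits a static least-squares map from the magnetic signals to an optical marker on frames where the marker is visible and uses it to bridge occluded frames; the paper's canonical system identification would replace this with a proper dynamic model:

```python
import numpy as np

def bridge_optical_gaps(optical, magnetic, visible):
    """Fill gaps in optical marker data (N x 3) from magnetic data (N x M).

    An affine least-squares map magnetic -> optical is fitted on frames where
    the optical marker is visible (`visible` is a boolean mask) and applied to
    the occluded frames.  This is a simplified illustration, not the paper's
    identified dynamic system.
    """
    X = np.hstack([magnetic, np.ones((len(magnetic), 1))])   # affine regressors
    W, *_ = np.linalg.lstsq(X[visible], optical[visible], rcond=None)
    filled = optical.copy()
    filled[~visible] = X[~visible] @ W
    return filled
```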


Motion Capture System using Integrated Pose Sensors (융합센서 기반의 모션캡처 시스템)

  • Kim, Byung-Yul;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information / v.15 no.4 / pp.65-74 / 2010
  • With the aim of solving the problems of traditional optical motion capture systems, such as interference among multiple patches and the complexity of sensor and patch allocation, this paper proposes a new motion capture system composed of a single camera and multiple motion sensors. A motion sensor consists of an acceleration sensor and a gyro sensor, which detect the motion of a patched body and the orientation (roll, pitch, and yaw) of the motion, respectively. Although the image information provides only the 2D positions of the patches, the orientation information acquired by the motion sensors can generate the 3D poses of the patches using simple equations. Since the proposed system uses the minimum number of sensors needed to detect the relative pose of a patch, it is easy to install on a moving body and can be used economically for various applications. The performance and advantages of the proposed system have been demonstrated by experiments.
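The abstract does not give the fusion equations; a common way to combine an accelerometer and a gyro into roll and pitch is a complementary filter, sketched below as an illustration (the gain and sample period are assumptions, and yaw would come from the gyro's z-axis or, as in the paper, the camera image):

```python
import math

ALPHA = 0.98   # weight on the integrated gyro estimate (assumed)
DT = 0.01      # sample period in seconds (assumed)

def update_orientation(roll, pitch, gyro, accel):
    """One complementary-filter step fusing gyro rates and accelerometer tilt.

    gyro = (gx, gy, gz) in rad/s, accel = (ax, ay, az) in m/s^2.
    Returns the new (roll, pitch) in radians.
    """
    gx, gy, _ = gyro
    ax, ay, az = accel
    # Tilt implied by the gravity direction (valid when acceleration ~ gravity).
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.hypot(ay, az))
    # Blend integrated gyro (accurate short-term) with accel (stable long-term).
    roll = ALPHA * (roll + gx * DT) + (1.0 - ALPHA) * roll_acc
    pitch = ALPHA * (pitch + gy * DT) + (1.0 - ALPHA) * pitch_acc
    return roll, pitch
```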

A Study on the Development of Digital Space Design Process Using User′s Motion Data (사용자 모션데이터를 활용한 디지털 공간디자인 프로세스 개발에 관한 연구)

  • 안신욱;박혜경
    • Korean Institute of Interior Design Journal / v.13 no.3 / pp.187-196 / 2004
  • The purpose of this study is to develop 'a digital space design process using user's motion data' through theoretical and experimental study. In developing the design process, this study concentrated on finding a digital method that applies the user's interactive responses. By introducing the concept of a space form generated by the user's experiences, we proposed 'a digital design process using user's motion data'. In the experimental stage, the user's motion data were extracted and transferred as digital information through user behavior analysis, an optical motion capture system, an immersive VR system, 3D software, and computer programming. As a result of this study, another useful digital design process was embodied by building a digital form-transforming method using 3D software that provides internal algorithms. This study is meaningful in that it attempts a creative and interactive digital space design method that avoids the dehumanization of existing methods, through theoretical study and an experimental approach.

Restoration of Realtime Three-Dimension Positions Using PSD Sensor (PSD센서를 이용한 실시간 3차원 위치의 복원)

  • Choi, Hun-Il;Jo, Yong-Jun;Ryu, Young-Kee
    • Proceedings of the KIEE Conference / 2003.11c / pp.507-510 / 2003
  • In this paper, an optical sensor system using PSDs (Position Sensitive Detectors) is proposed to obtain the three-dimensional positions of moving markers attached to a human body. To find the coordinates of a moving marker with a stereo vision system, two different sight rays to the marker are required. Usually, these are acquired with two optical sensors synchronized in time. A PSD sensor is used to measure the position of incident light in real time. To obtain the three-dimensional positions of the light sources on the moving markers, a conventional camera calibration method is used. In this research, we realized a low-cost motion capture system. The proposed system shows high three-dimensional measurement accuracy and a fast sampling frequency.
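Once each sensor has been calibrated to a 3x4 projection matrix, the marker's 3D position follows from standard linear (DLT) triangulation of the two sight rays; the sketch below assumes such calibrated matrices and is an illustration rather than the paper's exact reconstruction code:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one marker from two calibrated sensors.

    P1, P2 are 3x4 projection matrices from camera calibration; uv1, uv2 are
    the 2D coordinates of the same marker on the two sensors.  Returns the
    marker's 3D position.
    """
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # null space of A gives the point
    X = Vt[-1]
    return X[:3] / X[3]                   # de-homogenize
```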


Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won;Choi, Kyung Sik;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.1 / pp.70-77 / 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow and warped images obtained through fish-eye lenses mounted on the robot. The omnidirectional image sensor is a desirable sensor for real-time view-based recognition by a robot because all information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, which is obtained by a camera with a reflecting mirror or by combining multiple camera images, is essential because it is difficult to obtain information from the original image. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through fish-eye lenses, which are mounted in the bottom direction. Second, we extract motion vectors from the preprocessed images using Lucas-Kanade optical flow. Third, we estimate the robot position and angle using an ego-motion method based on the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the localization algorithm by comparing the experimental results (position and angle) obtained using the proposed algorithm with those measured by the Global Vision Localization System.
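A rough sketch of the second and third steps is given below: features are tracked with pyramidal Lucas-Kanade optical flow, and a RANSAC similarity fit then yields a robust global rotation/translation estimate. This is a simplified stand-in for the vanishing-point-based ego-motion method actually used in the paper:

```python
import cv2
import numpy as np

def ego_motion_step(prev_gray, gray):
    """Estimate in-plane rotation and translation between two fisheye frames.

    Features are tracked with pyramidal Lucas-Kanade optical flow; a RANSAC
    similarity fit gives a robust global motion estimate.
    """
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    src, dst = pts[good], nxt[good]
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))   # heading change estimate
    shift = M[:, 2]                                     # image-plane translation
    return angle, shift
```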

Discomfort Assessment of Truck Ingress and Egress Motions Based on Simulated Muscle Contraction Forces (모사된 근육 수축력을 바탕으로 한 트럭 승하차 동작의 불편도 평가)

  • Choi, Nam-Chul;Shim, Ji-Sung;Lee, Sang-Hyung;Lee, Ki-Kwang;Lee, Sang-Hun
    • Korean Journal of Computational Design and Engineering / v.17 no.1 / pp.62-70 / 2012
  • This paper proposes a novel discomfort assessment method for truck ingress and egress motions based on the maximum-voluntary-contraction (MVC) ratios of muscles obtained by biomechanical analysis of human musculoskeletal models. In this study, the human motion of entering and exiting a truck cabin with different types and heights of footsteps is first measured using an optical motion capture system and load sensors. Next, in a biomechanical analysis system, a human musculoskeletal model with contact conditions on the footsteps and handles is built, and then joint torques and muscle forces are calculated by inverse dynamics of the musculoskeletal model with the motion data. Finally, the MVC ratios for the muscles are calculated and their statistical values are used as the measure of discomfort. To verify the feasibility of our method, subjective discomfort levels were investigated through participant experiments and questionnaires and compared to the results of our method. Compared to existing methods based on joint angles or torques, our approach provides a more fundamental criterion for discomfort because it is based on the muscle contractions by which active human motion is generated.
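The sketch below shows how MVC ratios could be aggregated into a scalar discomfort score once the simulated muscle forces are available; the choice of statistics (mean and peak) is illustrative, not the paper's exact definition:

```python
import numpy as np

def discomfort_score(muscle_forces, mvc_forces):
    """Aggregate MVC ratios into summary discomfort statistics.

    muscle_forces: dict of muscle name -> simulated contraction force series
    (equal-length arrays).  mvc_forces: dict of muscle name -> maximum
    voluntary contraction force for that muscle.
    """
    ratios = np.array([np.asarray(f) / mvc_forces[name]
                       for name, f in muscle_forces.items()])
    return {"mean_mvc_ratio": float(ratios.mean()),
            "peak_mvc_ratio": float(ratios.max())}
```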

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image (어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Dai, Yanyan;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.8 / pp.868-874 / 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on obstacle feature extraction using Lucas-Kanade optical flow (LKOF) motion detection and images obtained through fish-eye lenses mounted on robots. Omni-directional image sensors have distortion problems because they use a fish-eye lens or mirror, but real-time image processing for mobile robots is possible because they measure all information around the robot at once. In previous omni-directional vision SLAM research, feature points in fully corrected fisheye images were used, whereas the proposed algorithm corrects only the feature points of the obstacles; this yields faster processing than previous systems. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through fish-eye lenses, which are mounted in the bottom direction. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors obtained by LKOF. Finally, we estimate the robot position using an Extended Kalman Filter based on the obstacle positions obtained by LKOF and create a map. We confirm the reliability of the mapping algorithm through a comparison between maps obtained using the proposed algorithm and real maps.
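For the final step, a minimal EKF predict/update for the robot pose given one range-bearing observation of an obstacle (as might be obtained from LKOF) is sketched below; it is restricted to localization with a known obstacle position, whereas the paper's algorithm also augments the map with new obstacles:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, obstacle, R, Q, dt=0.1):
    """One EKF predict/update for a planar robot pose mu = (x, y, theta).

    u = (v, w) is the velocity command over dt, z = (range, bearing) is an
    observation of an obstacle with known position, and R, Q are the motion
    and measurement noise covariances.
    """
    x, y, th = mu
    v, w = u
    # --- predict: constant-velocity motion model ---------------------------
    mu_bar = np.array([x + v * dt * np.cos(th),
                       y + v * dt * np.sin(th),
                       th + w * dt])
    G = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0,  1]])
    Sigma_bar = G @ Sigma @ G.T + R
    # --- update: range-bearing observation of the known obstacle -----------
    dx, dy = obstacle[0] - mu_bar[0], obstacle[1] - mu_bar[1]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu_bar[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0],
                  [ dy / q,          -dx / q,          -1]])
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing error
    mu_new = mu_bar + K @ innov
    Sigma_new = (np.eye(3) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new
```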