• Title/Abstract/Keyword: Optical Pose Tracking


A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul; Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 2, No. 2, pp. 120-133, 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create a realistic facial animation of a 3D avatar. The exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation. In this paper, we deal with these two problems. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation can robustly estimate a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. It is possible to recover the head pose regardless of light variations and self-occlusion by updating the template dynamically. In the phase of synthesizing the facial expression, the variations of the major facial feature points of the face images are tracked by using optical flow and the variations are retargeted to the 3D face model. At the same time, we exploit the RBF (Radial Basis Function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. From the experiments, we can prove that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
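
As a rough illustration of the two tracking ingredients this abstract names (optical-flow tracking of the major feature points and RBF-based deformation of the surrounding face region), here is a minimal Python/OpenCV sketch. The function names, window sizes, and the Gaussian RBF kernel are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal sketch: track major facial feature points with Lucas-Kanade optical
# flow and spread their displacements to nearby mesh vertices with an RBF.
import cv2
import numpy as np

def track_feature_points(prev_gray, curr_gray, prev_pts):
    """Track 2D feature points between two grayscale frames (pyramidal LK)."""
    p0 = prev_pts.astype(np.float32).reshape(-1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                             winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return p1.reshape(-1, 2)[good], p0.reshape(-1, 2)[good]

def rbf_deform(vertices, centers, displacements, sigma=20.0):
    """Propagate feature-point displacements to surrounding vertices with a
    Gaussian RBF weight (the paper only states that an RBF is used)."""
    d = np.linalg.norm(vertices[:, None, :] - centers[None, :, :], axis=2)
    w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))        # (V, K) weights
    w /= w.sum(axis=1, keepdims=True) + 1e-8          # normalize per vertex
    return vertices + w @ displacements               # weighted displacement

# hypothetical usage:
# curr, prev = track_feature_points(prev_gray, curr_gray, feature_pts)
# new_vertices = rbf_deform(mesh_2d, prev, curr - prev)
```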

실시간 헬멧자세 추적시스템의 설계 및 구현 (Design and Implementation of Real-Time Helmet Pose Tracking System)

  • 황상현; 정철주; 김동성
    • 한국항공우주학회지, Vol. 44, No. 2, pp. 123-130, 2016
  • This paper presents the design and implementation of a Helmet Tracking System (HTS) that displays flight and mission information on a Helmet Mounted Display (HMD) worn by an aircraft pilot, aligned with the pilot's Line of Sight. Because the functionality and performance of the HMD system depend on the performance of the HTS, the proposed HTS fuses inertial-sensor and optical-sensor data to guarantee real-time operation, improve accuracy, and provide robust performance in diverse operating environments. To verify the effectiveness of the implemented HTS, its accuracy and real-time behavior under error conditions were evaluated in a test-bed environment.
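
The abstract describes fusing inertial and optical sensor data for real-time helmet pose tracking but does not publish its fusion equations. The sketch below is a generic complementary filter for a single attitude angle, written as an assumption-laden illustration of coasting on the gyro when the optical measurement is unavailable.

```python
# Minimal sketch of inertial/optical fusion for one helmet attitude angle:
# integrate the gyro for low-latency updates and blend in the slower but
# drift-free optical measurement when it is available.
import numpy as np

def fuse_yaw(yaw_prev, gyro_rate, dt, optical_yaw=None, alpha=0.98):
    """One update step. gyro_rate in rad/s; optical_yaw in rad, or None
    when the optical sensor is occluded."""
    yaw_pred = yaw_prev + gyro_rate * dt          # inertial prediction
    if optical_yaw is None:
        return yaw_pred                           # coast on the gyro
    # keep most of the fast inertial estimate, pull toward the optical fix
    return alpha * yaw_pred + (1.0 - alpha) * optical_yaw

# usage with hypothetical 200 Hz gyro samples and no optical fixes:
yaw = 0.0
for rate in np.full(200, 0.01):
    yaw = fuse_yaw(yaw, rate, dt=0.005, optical_yaw=None)
```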

다수 마커를 활용한 영상 기반 다중 사용자 증강현실 시스템 (An Image-based Augmented Reality System for Multiple Users using Multiple Markers)

  • 문지원; 박동우; 정현석; 김영헌; 황성수
    • 한국멀티미디어학회논문지, Vol. 21, No. 10, pp. 1162-1170, 2018
  • This paper presents an augmented reality system for multiple users. The proposed system performs image-based pose estimation of users, and the pose of each user is shared with the other users via a network server. For camera-based pose estimation, multiple markers are installed in a pre-determined space and the marker with the best appearance is selected. The marker is detected by corner point detection, and for robust pose estimation the marker's corner points are tracked by an optical flow tracking algorithm. Experimental results show that the proposed system successfully provides an augmented reality application to multiple users even when the users move rapidly and some of the markers are occluded by the users.
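
As a hedged illustration of the detect-then-track scheme in this abstract (marker corner detection for pose, optical-flow tracking of the corners in between), the sketch below uses OpenCV's planar PnP solver on one square marker. The marker family, marker size, and the choice of ArUco-style square targets are assumptions; the paper does not specify them.

```python
# Minimal sketch: estimate camera pose from the four corners of one square
# marker, then keep tracking those corners with optical flow between detections.
import cv2
import numpy as np

MARKER_SIZE = 0.10  # marker side length in meters (assumed)
OBJ_PTS = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   dtype=np.float32) * (MARKER_SIZE / 2)

def pose_from_corners(corners_2d, K, dist):
    """Return (rvec, tvec) of the camera w.r.t. the marker."""
    ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, corners_2d.astype(np.float32),
                                  K, dist, flags=cv2.SOLVEPNP_IPPE_SQUARE)
    return (rvec, tvec) if ok else (None, None)

def track_corners(prev_gray, curr_gray, corners_2d):
    """Propagate the four marker corners to the next frame with pyramidal LK."""
    p0 = corners_2d.astype(np.float32).reshape(-1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    return p1.reshape(-1, 2) if status.all() else None  # None -> re-detect
```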

영상유도수술을 위한 광학추적 센서 및 관성항법 센서 네트웍의 칼만필터 기반 자세정보 융합 (Kalman Filter Based Pose Data Fusion with Optical Tracking System and Inertial Navigation System Networks for Image Guided Surgery)

  • 오현민; 김민영
    • 전기학회논문지, Vol. 66, No. 1, pp. 121-126, 2017
  • A tracking system is essential for Image Guided Surgery (IGS). An Optical Tracking System (OTS) is widely used in IGS for its high accuracy and ease of use. However, an OTS fails when its markers are occluded. In this paper, sensor data fusion of an OTS and an Inertial Navigation System (INS) is proposed to solve this problem. The proposed system improves tracking accuracy by suppressing the Gaussian error of the sensors and compensates for the weaknesses of the OTS and the IMU through Kalman filter-based sensor fusion. A sensor calibration method that further improves accuracy is also introduced. Experiments verify the effectiveness of the proposed algorithm.
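
The abstract names Kalman filter-based fusion of OTS and INS data without giving the filter design. The following is a minimal one-dimensional Kalman filter sketch for a single orientation angle; the state vector, noise values, and occlusion handling shown here are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: a 1-D Kalman filter for one orientation angle, predicting
# with the inertial rate and correcting with the optical (OTS) measurement
# when the marker is visible.
class AngleKF:
    def __init__(self, q=1e-4, r=1e-3):
        self.x = 0.0   # angle estimate (rad)
        self.p = 1.0   # estimate variance
        self.q = q     # process noise (gyro integration)
        self.r = r     # measurement noise (OTS)

    def predict(self, gyro_rate, dt):
        self.x += gyro_rate * dt
        self.p += self.q

    def update(self, ots_angle):
        if ots_angle is None:                    # marker occluded: keep prediction
            return self.x
        k = self.p / (self.p + self.r)           # Kalman gain
        self.x += k * (ots_angle - self.x)       # correct with the OTS reading
        self.p *= (1.0 - k)
        return self.x
```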

지능형 헬멧시현시스템 설계 및 시험평가 (Design and Evaluation of Intelligent Helmet Display System)

  • 황상현
    • 한국항공우주학회지, Vol. 45, No. 5, pp. 417-428, 2017
  • This paper describes the architecture design, unit component design, and core software design (helmet pose tracking and altitude-error correction software) of an Intelligent Helmet Display System (IHDS) for aircraft pilots, together with the results of unit and integration testing. Reflecting the latest worldwide trends in helmet display system development, the design incorporates 3D digital map display, FLIR (Forward Looking Infra-Red) image display, hybrid helmet pose tracking, visor-reflection optics, night-vision camera image display, and a lightweight composite helmet shell. In particular, novel design concepts are proposed, including automatic altitude-error correction of 3D digital map data, high-precision image registration, multi-color illumination optics, a transmissive image-emitting surface using diffractive elements, a tracking camera that minimizes helmet pose estimation time, a detachable night-vision camera, and air pockets for a snug head fit. After prototyping all system components, unit tests and system integration tests were performed to verify functionality and performance.

모바일 환경 Homography를 이용한 특징점 기반 다중 객체 추적 (Multi-Object Tracking Based on Keypoints Using Homography in Mobile Environments)

  • 한우리; 김영섭; 이용환
    • 반도체디스플레이기술학회지, Vol. 14, No. 3, pp. 67-72, 2015
  • This paper proposes a keypoint-based object tracking system using homography in mobile environments. The proposed system performs markerless tracking and consists of four modules: recognition, tracking, detection, and learning. The recognition module detects and identifies an object in the current frame by matching it against the database using LSH on SURF descriptors, and then generates reference object information. The tracking module tracks an object using the homography computed by matching the learned object keypoints to the keypoints in the current frame, and then updates the window enclosing the object to define the object's pose. When tracking fails, the detection module locates the object again using the best available knowledge among the learned object information. Experimental results show that the proposed system can recognize and track objects while updating their poses on a mobile platform.
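
The sketch below illustrates the recognition/tracking idea in this abstract, matching keypoints against a learned reference and moving the object window with a RANSAC-estimated homography. ORB descriptors with a Hamming brute-force matcher stand in for the paper's SURF-plus-LSH pipeline, and the match-count threshold is an assumption.

```python
# Minimal sketch: match keypoints between a learned reference image and the
# current frame, estimate a homography with RANSAC, and move the object window.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def locate(reference_img, frame, ref_window):
    """ref_window: 4x2 array of the object's corners in the reference image."""
    kp1, des1 = orb.detectAndCompute(reference_img, None)
    kp2, des2 = orb.detectAndCompute(frame, None)
    if des1 is None or des2 is None:
        return None
    matches = matcher.match(des1, des2)
    if len(matches) < 8:
        return None                              # too little evidence -> re-detect
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return None
    # map the reference window through H to get the object's current pose
    return cv2.perspectiveTransform(ref_window.reshape(-1, 1, 2).astype(np.float32), H)
```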

CCD카메라와 적외선 카메라의 융합을 통한 효과적인 객체 추적 시스템 (Efficient Object Tracking System Using the Fusion of a CCD Camera and an Infrared Camera)

  • 김승훈; 정일균; 박창우; 황정훈
    • 제어로봇시스템학회논문지, Vol. 17, No. 3, pp. 229-235, 2011
  • To build a robust object tracking and identification system for an intelligent robot and/or home system, heterogeneous sensor fusion between a visible-light system and an infrared system is proposed. The proposed system segments the object by combining the ROIs (Regions of Interest) estimated from two different images captured by a heterogeneous sensor that combines an ordinary CCD camera and an IR (Infrared) camera. The human body and face are detected in both images using different algorithms, such as histogram analysis, optical flow, a skin-color model, and a Haar model. The pose of the human body is also estimated from the body detection result in the IR image using the PCA algorithm together with the AdaBoost algorithm. The results from the individual detection algorithms are then fused to extract the best detection result. To verify the heterogeneous sensor fusion system, a few experiments were conducted in various environments. The experimental results indicate good tracking and identification performance regardless of environmental changes. The application area of the proposed system is not limited to robot or home systems; it also includes surveillance and military systems.
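
A minimal sketch of the ROI-combination idea, assuming the CCD and IR images are already registered: a Haar cascade gives a face ROI in the visible image, simple thresholding gives warm regions in the IR image, and a detection is kept only if both agree. The threshold values and the overlap rule are illustrative, not the paper's fusion logic.

```python
# Minimal sketch: fuse a visible-light (CCD) face ROI with an IR warm-region
# ROI. Assumes the two images are registered to the same pixel grid.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def fused_rois(ccd_gray, ir_gray, warm_thresh=200):
    faces = face_cascade.detectMultiScale(ccd_gray, 1.1, 4)      # (x, y, w, h)
    warm = cv2.threshold(ir_gray, warm_thresh, 255, cv2.THRESH_BINARY)[1]
    fused = []
    for (x, y, w, h) in faces:
        # accept the visible-light ROI only if the IR image is warm there too
        # (mean of the binary mask > 32 means roughly >12% warm pixels)
        if warm[y:y + h, x:x + w].mean() > 32:
            fused.append((x, y, w, h))
    return fused
```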

평면형 병렬 케이블 구동 로봇에 대한 형상보정 (Calibration for a Planar Cable-Driven Parallel Robot)

  • ;정진우; 전종표; 박석호; 박종오; 고성영
    • 제어로봇시스템학회논문지, Vol. 21, No. 11, pp. 1070-1075, 2015
  • This paper proposes a calibration algorithm for a three-degree-of-freedom (DOF) planar cable-driven parallel robot (CDPR). To evaluate the proposed algorithm, we calibrated winches and an optical tracking sensor, measured the end-effector pose using the optical tracking sensor, and calculated the accurate robot configuration using the measurement information. To conduct an accuracy test on the end-effector pose, we followed guidelines from "Manipulating industrial robots - Performance criteria and related test methods." Through the test, it is verified that the position accuracy can be improved by up to 20% for a 2 m × 2 m planar cable robot using the proposed calibration algorithm.
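
The paper's calibration model is not reproduced in the abstract, so the sketch below shows one common least-squares formulation for a planar CDPR: solve for the cable anchor positions that make the optically measured end-effector positions consistent with the winch-reported cable lengths. The variable layout and the use of scipy.optimize.least_squares are assumptions.

```python
# Minimal sketch: refine the anchor (pulley) positions of a planar CDPR so
# that anchor-to-end-effector distances match the measured cable lengths.
import numpy as np
from scipy.optimize import least_squares

def residuals(anchor_flat, ee_positions, cable_lengths):
    """anchor_flat: (2*M,) anchor coordinates; ee_positions: (N, 2) poses
    measured by the optical tracker; cable_lengths: (N, M) winch readings."""
    anchors = anchor_flat.reshape(-1, 2)                               # (M, 2)
    dists = np.linalg.norm(ee_positions[:, None, :] - anchors[None, :, :], axis=2)
    return (dists - cable_lengths).ravel()                             # geometric error

def calibrate(anchor_guess, ee_positions, cable_lengths):
    sol = least_squares(residuals, anchor_guess.ravel(),
                        args=(ee_positions, cable_lengths))
    return sol.x.reshape(-1, 2)   # refined anchor positions
```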

화자의 긍정·부정 의도를 전달하는 실용적 텔레프레즌스 로봇 시스템의 개발 (Development of a Cost-Effective Tele-Robot System Delivering Speaker's Affirmative and Negative Intentions)

  • 진용규; 유수정; 조혜경
    • 로봇학회논문지, Vol. 10, No. 3, pp. 171-177, 2015
  • A telerobot offers a more engaging and enjoyable interaction with people at a distance by communicating via audio, video, expressive gestures, body pose, and proxemics. To provide these potential benefits at a reasonable cost, this paper presents a telepresence robot system for video communication that can deliver the speaker's head motion through its display stanchion. Head gestures such as nodding and head-shaking convey crucial information during conversation. A speaker's eye gaze, known as one of the key non-verbal signals for interaction, can also be inferred from his or her head pose. To develop an efficient head tracking method, a 3D cylinder-like head model is employed, and the Harris corner detector is combined with Lucas-Kanade optical flow, which is known to be suitable for extracting the 3D motion information of the model. In particular, a skin color-based face detection algorithm is proposed to achieve robust performance under varying head directions while maintaining reasonable computational cost. The performance of the proposed head tracking algorithm is verified through experiments using the BU standard data sets. The design of the robot platform is also described, together with supporting systems such as video transmission and robot control interfaces.
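
As a sketch of the 2D front end this abstract describes, the code below selects Harris corners inside a detected face region and tracks them with Lucas-Kanade optical flow; fitting the cylinder head model to the tracked points is omitted. The corner-quality and window parameters are illustrative.

```python
# Minimal sketch: Harris corners restricted to the face region, tracked with
# pyramidal Lucas-Kanade optical flow to feed a head-model fit.
import cv2
import numpy as np

def init_corners(gray, face_rect, max_corners=100):
    x, y, w, h = face_rect
    mask = np.zeros_like(gray)
    mask[y:y + h, x:x + w] = 255                 # restrict corners to the face
    return cv2.goodFeaturesToTrack(gray, max_corners, qualityLevel=0.01,
                                   minDistance=5, mask=mask,
                                   useHarrisDetector=True, k=0.04)

def track(prev_gray, curr_gray, corners):
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, corners, None)
    good = status.ravel() == 1
    return corners[good], nxt[good]              # matched pairs for model fitting
```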

실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법 (A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation)

  • 김웅기; 전준철
    • 인터넷정보학회논문지, Vol. 14, No. 6, pp. 117-124, 2013
  • This paper proposes a new method for efficiently estimating a user's face direction from real-time video input. The face region is first detected using Haar-like features, which are relatively insensitive to changes in external illumination, and key facial features such as both eyes, the nose, and the mouth are then detected within the detected face region. The detected feature points are subsequently tracked in every frame using optical flow, and the face direction is estimated from the tracked points. To prevent the situation, common in optical flow-based tracking, where feature point coordinates are lost and the wrong points end up being tracked, template matching on the detected feature points is used to assess the validity of the tracked points in real time; depending on the result, the facial feature points are re-detected or tracking continues, so that face direction estimation remains possible. For template matching, four pieces of information extracted in the feature detection stage (the positions of the left and right eyes, the nose tip, and the mouth) are stored, and during pose measurement the similarity between these templates and the feature points being tracked by optical flow is compared; when the similarity falls outside a threshold, new feature points are detected and the stored information is updated. By automatically combining the feature detection process and the tracking process that continuously refines the detected features, the proposed method estimates face direction stably in real time. Experiments demonstrate that the proposed method measures face pose effectively.
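
The validation step in this abstract (template matching to decide whether to keep tracking or re-detect) can be sketched as follows. The patch size and similarity threshold are illustrative assumptions; the paper stores templates for the two eyes, the nose tip, and the mouth.

```python
# Minimal sketch: keep a small template for each facial feature from the
# detection stage, and after optical-flow tracking compare the patch at the
# tracked location against it. A low score signals that re-detection is needed.
import cv2

PATCH = 15  # half-size of the template window in pixels (assumed)

def grab_template(gray, pt):
    """Cut the template patch at detection time (point assumed away from borders)."""
    x, y = int(pt[0]), int(pt[1])
    return gray[y - PATCH:y + PATCH + 1, x - PATCH:x + PATCH + 1].copy()

def still_valid(gray, tracked_pt, template, min_score=0.6):
    """True if the tracked point still looks like its stored template (NCC)."""
    x, y = int(tracked_pt[0]), int(tracked_pt[1])
    h, w = gray.shape[:2]
    if x < PATCH or y < PATCH or x + PATCH >= w or y + PATCH >= h:
        return False                              # drifted out of the image
    patch = gray[y - PATCH:y + PATCH + 1, x - PATCH:x + PATCH + 1]
    if patch.shape != template.shape:
        return False
    score = cv2.matchTemplate(patch, template, cv2.TM_CCOEFF_NORMED)[0, 0]
    return score >= min_score                     # below threshold -> re-detect
```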