• Title/Summary/Keyword: 3D-face tracking

48 search results

Online Face Avatar Motion Control based on Face Tracking

  • Wei, Li; Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.12 no.6 / pp.804-814 / 2009
  • In this paper, a novel system for controlling avatar motion by tracking the face is presented. The system is composed of three main parts: a face feature detection algorithm based on the LCS (Local Cluster Searching) method, an HMM-based feature point recognition algorithm, and an avatar control and animation generation algorithm. In the LCS method, the face region is divided into many small sub-regions in the horizontal and vertical directions, and each cross point is classified as an object point, an edge point, or a background point. The HMM method then distinguishes the mouth, eyes, nose, etc. among these feature points. Based on the detected facial feature points, the 3D avatar is controlled in two ways: orientation and animation. The orientation control information is acquired by analyzing the facial geometry, while the avatar animation is generated smoothly from the face feature points. Finally, to evaluate the performance of the developed system, we implemented it on Windows XP; the results show that the system performs well.
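
The cross-point classification step of the LCS method can be illustrated with a short sketch; the grid step, thresholds, and the Sobel-based edge test below are illustrative assumptions rather than the authors' implementation.

```python
import cv2
import numpy as np

def classify_cross_points(face_roi, step=8, edge_thresh=60, skin_thresh=120):
    """Divide a grayscale face ROI into a grid and label each grid cross
    point as 'object', 'edge', or 'background' (illustrative sketch only)."""
    gx = cv2.Sobel(face_roi, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(face_roi, cv2.CV_32F, 0, 1)
    grad = np.sqrt(gx ** 2 + gy ** 2)          # gradient magnitude
    labels = {}
    h, w = face_roi.shape
    for y in range(0, h, step):                # horizontal slicing
        for x in range(0, w, step):            # vertical slicing
            if grad[y, x] > edge_thresh:
                labels[(x, y)] = 'edge'
            elif face_roi[y, x] > skin_thresh:
                labels[(x, y)] = 'object'
            else:
                labels[(x, y)] = 'background'
    return labels
```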

Implementing Augmented Reality By Using Face Detection, Recognition And Motion Tracking (얼굴 검출과 인식 및 모션추적에 의한 증강현실 구현)

  • Lee, Hee-Man
    • Journal of the Korea Society of Computer and Information / v.17 no.1 / pp.97-104 / 2012
  • Natural User Interface (NUI) technologies introduce new trends in how devices such as computers and other electronic devices are used. In this paper, an augmented reality application on a mobile device is implemented using face detection, recognition, and motion tracking. Face detection is performed on images from the front camera using the Viola-Jones algorithm. The Eigenface algorithm is employed for face recognition and face motion tracking. The augmented reality is realized by overlaying the rear camera image, together with GPS and accelerometer data, with the 3D graphic object corresponding to the recognized face. The algorithms and methods are constrained by the mobile device specifications, such as processing power and main memory capacity.
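
The Viola-Jones detection stage described here is available in OpenCV as a pretrained Haar cascade; a minimal sketch of the front-camera face detection under that assumption (the cascade file and parameters are not from the paper) follows.

```python
import cv2

# Pretrained Haar cascade shipped with OpenCV (Viola-Jones detector).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)  # front camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) == 27:   # Esc to quit
        break
cap.release()
```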

Estimation of Person Height and 3D Location using Stereo Tracking System (스테레오 추적 시스템을 이용한 보행자 높이 및 3차원 위치 추정 기법)

  • Ko, Jung Hwan; Ahn, Sung Soo
    • Journal of Korea Society of Digital Industry and Information Management / v.8 no.2 / pp.95-104 / 2012
  • In this paper, estimation of the height and 3D location of a moving person using a pan/tilt-embedded stereo tracking system is suggested and implemented. In the proposed system, the face coordinates of a target person are detected from sequential stereo image pairs using the YCbCr color model and phase-type correlation; then, using these data together with the geometric information of the stereo tracking system, the distance from the stereo camera to the target and the 3-dimensional location of the person are extracted. Based on these data, the pan/tilt system embedded in the stereo camera is controlled to adaptively track the moving person, and as a result the moving trajectory of the target can be obtained. From experiments using 780 frames of sequential stereo image pairs, the standard deviation of the target's position displacement in the horizontal and vertical directions after tracking remains very low, at 1.5 and 0.42 on average over the 780 frames, and the error ratio between the measured and computed 3D coordinates of the target also remains very low, at 0.5% on average. These results suggest the feasibility of a new stereo target tracking system with a high degree of accuracy and a very fast response time using the proposed algorithm.
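
Once the face is matched in both views, the distance and 3D location follow from standard stereo triangulation; the sketch below uses placeholder calibration values (focal length, baseline, principal point) rather than the paper's actual setup.

```python
def face_3d_location(u_left, v_left, u_right, f=800.0, baseline=0.12,
                     cx=320.0, cy=240.0):
    """Triangulate the 3D position (meters) of a face detected at pixel
    (u_left, v_left) in the left image and at column u_right in the right
    image. f, baseline, cx, cy are assumed calibration values."""
    disparity = u_left - u_right          # pixels; rectified cameras assumed
    if disparity <= 0:
        raise ValueError("non-positive disparity")
    z = f * baseline / disparity          # depth along the optical axis
    x = (u_left - cx) * z / f             # lateral offset
    y = (v_left - cy) * z / f             # vertical offset (height cue)
    return x, y, z

# Example with hypothetical pixel measurements
print(face_3d_location(352.0, 201.0, 322.0))
```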

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju; Kim, Jin-Suh; Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.21 no.10 / pp.11-19 / 2016
  • As displays become larger and more varied in form, previous gaze-tracking methods no longer apply well; placing the gaze-tracking camera above the display can solve the problems of display size and height, but it prevents the use of the infrared corneal-reflection information exploited by earlier methods. This paper proposes a pupil detection method that is robust to eye occlusion and a simple method for calculating the gaze position on the display from the inner eye corner, the pupil center, and the head pose. In the proposed method, the camera switches between wide-angle and narrow-angle modes depending on the person's position: when a face is detected in the field of view (FOV) in wide-angle mode, the camera switches to narrow-angle mode based on the computed face position. The frame captured in narrow-angle mode contains the gaze direction information of a person at a long distance. Gaze calculation consists of a head pose estimation step and a gaze direction calculation step. Head pose is estimated by mapping detected facial feature points to a 3D model. To compute the gaze direction, an ellipse is first fitted to edge segments of the iris; if the pupil is occluded, its position is estimated with a deformable template. The gaze position on the display is then calculated from the pupil center, the inner eye corner, and the head pose. The experiments demonstrate that the proposed gaze tracking algorithm relaxes the constraints imposed by the display form and effectively computes the gaze direction of a person at a long distance with a single camera, evaluated over varying distances.
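
The iris ellipse-fitting step can be sketched with OpenCV's fitEllipse on the largest dark blob of an eye patch; the threshold and morphology parameters below are assumptions, and the deformable-template fallback for occluded pupils is omitted.

```python
import cv2
import numpy as np

def fit_pupil_ellipse(eye_gray):
    """Fit an ellipse to the largest dark blob in a grayscale eye patch.
    Returns ((cx, cy), (major, minor), angle) or None (illustrative sketch)."""
    # Dark pupil/iris region: inverse threshold, then clean up with morphology.
    _, mask = cv2.threshold(eye_gray, 50, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                  # fitEllipse needs at least 5 points
        return None
    return cv2.fitEllipse(largest)        # ellipse center approximates the pupil
```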

A System for 3D Face Manipulation in Video (비디오 상의 얼굴에 대한 3차원 변형 시스템)

  • Park, Jungsik; Seo, Byung-Kuk; Park, Jong-Il
    • Journal of Broadcast Engineering / v.24 no.3 / pp.440-451 / 2019
  • We propose a system that allows three-dimensional manipulation of a face in video. The proposed system overlays a 3D face model, deformed by the user's manipulation, on the face region of each video frame and, unlike existing applications or methods, allows 3D manipulation of the video in real time. To achieve this, the 3D morphable face model is first registered to the image; at the same time, the user's manipulation is applied to the registered model. Finally, the frame image is mapped onto the model as a texture, and the texture-mapped, deformed model is rendered. Since this process requires a large amount of computation, parallel processing is adopted for real-time operation: the system is divided into modules by functionality, and each module runs in parallel on its own thread. Experimental results show that specific parts of a face in video can be manipulated in real time.
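
The module-per-thread parallelization can be sketched as a queue-based pipeline; the three stage functions and queue sizes below are hypothetical stand-ins for the registration, deformation, and rendering modules, not the authors' code.

```python
import queue
import threading

def run_stage(work_fn, in_q, out_q):
    """Generic pipeline stage: pull an item, process it, push the result."""
    while True:
        item = in_q.get()
        if item is None:                 # poison pill: shut the stage down
            if out_q is not None:
                out_q.put(None)
            break
        result = work_fn(item)
        if out_q is not None:
            out_q.put(result)

# Hypothetical per-module work functions (registration, deformation, rendering).
def register_model(frame):  return ("registered", frame)
def deform_model(item):     return ("deformed", item)
def render_model(item):     return ("rendered", item)

q1, q2, q3 = queue.Queue(2), queue.Queue(2), queue.Queue(2)
stages = [
    threading.Thread(target=run_stage, args=(register_model, q1, q2)),
    threading.Thread(target=run_stage, args=(deform_model, q2, q3)),
    threading.Thread(target=run_stage, args=(render_model, q3, None)),
]
for t in stages:
    t.start()
for frame_id in range(5):                # feed a few dummy frames through
    q1.put(frame_id)
q1.put(None)                             # signal end of stream
for t in stages:
    t.join()
```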

A Study on Measurement Technology for Face Detection Parameters through 3D Image Object Recognition (3D영상 객체인식을 통한 얼굴검출 파라미터 측정기술에 대한 연구)

  • Choi, Byung-Kwan; Moon, Nam-Mee
    • Journal of the Korea Society of Computer and Information / v.16 no.10 / pp.53-62 / 2011
  • With the development of high-tech IT convergence and complex technologies, video object recognition, once regarded as a specialized technology, has reached a turning point with the spread of personal portable devices such as smartphones. 3D face recognition, which detects and recognizes objects through intelligent video recognition, has been evolving on the basis of image recognition, and face detection technology is developing rapidly along with it. In this paper, image processing for object recognition based on human face recognition is applied to an IP camera, and a measurement technique that identifies a person from the detected face and applies human face recognition is proposed. The study: 1) develops and applies model-based face tracking technology; 2) shows, through a PC-based implementation, that the basic facial parameters can be tracked within the available CPU load; and 3) demonstrates that the distance between the two eyes and the gaze angle can be tracked in real time, proving the effectiveness of the approach.
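
The two reported measurements, the distance between the eyes and an in-plane angle, can be derived from detected eye centers; the helper below and its example coordinates are hypothetical and not taken from the paper.

```python
import math

def eye_parameters(left_eye, right_eye):
    """Given the pixel centers of the two eyes, return the interocular
    distance (pixels) and the in-plane head roll angle (degrees)."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    distance = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dy, dx))   # 0 when the eyes are level
    return distance, angle

# Example with hypothetical eye detections
print(eye_parameters((210, 180), (290, 174)))
```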

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun; Chun, Jun-Chul
    • The KIPS Transactions: Part B / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking; however, head motion tracking is one of the critical issues that must be solved to develop realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the facial region is detected efficiently from the video frame with the non-parametric HT skin color model and template matching. For 3D head motion tracking, a cylindrical head model is projected onto the initial head motion template; given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning, a feature-based method is used: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points contain both head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video.
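
The optical-flow tracking used here for both the head motion template and the facial feature points can be sketched with OpenCV's pyramidal Lucas-Kanade tracker; the window size and termination criteria below are assumed defaults, not the paper's settings.

```python
import cv2
import numpy as np

lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                           10, 0.03))

def track_feature_points(prev_gray, next_gray, points):
    """Propagate facial feature points from one frame to the next with
    pyramidal Lucas-Kanade optical flow. `points` is an (N, 1, 2) float32
    array of point coordinates in the previous frame."""
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                     points, None, **lk_params)
    good = status.ravel() == 1           # keep only points that were found
    return new_points[good], points[good]
```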

Real-Time Face Tracking Algorithm Robust to Illumination Variations (조명 변화에 강인한 실시간 얼굴 추적 알고리즘)

  • Lee, Yong-Beom; You, Bum-Jae; Lee, Seong-Whan; Kim, Kwang-Bae
    • Proceedings of the KIEE Conference / 2000.07d / pp.3037-3040 / 2000
  • Real-time object tracking has emerged as an important component in several application areas, including machine vision, surveillance, human-computer interaction, and image-based control. Various algorithms have been developed over the years, but in many cases they show limited results under uncontrolled conditions such as illumination changes or cluttered backgrounds. In this paper, we present a novel, computationally efficient algorithm for tracking the human face robustly under illumination changes and against cluttered backgrounds. Previous algorithms usually define the color model as a 2D membership function in a color space without considering illumination changes. Our new algorithm instead constructs a 3D color model by analyzing many images acquired under various illumination conditions. The algorithm is applied to a mobile head-eye robot and tested in various uncontrolled environments. It can track a human face at more than 100 frames per second, excluding image acquisition time.
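
A 3D color model of the kind described can be approximated by a three-channel histogram accumulated over face samples taken under varied illumination and then back-projected onto new frames; the HSV color space and bin counts below are assumptions, not the authors' exact model.

```python
import cv2
import numpy as np

def build_color_model(face_samples, bins=(16, 16, 16)):
    """Accumulate a normalized 3D HSV histogram over face patches taken
    under different illumination conditions (illustrative sketch)."""
    hist = np.zeros(bins, dtype=np.float32)
    for patch in face_samples:
        hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
        hist += cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                             [0, 180, 0, 256, 0, 256])
    hist *= 255.0 / max(hist.max(), 1e-6)   # scale to [0, 255] for backprojection
    return hist

def face_likelihood(frame, hist):
    """Back-project the 3D color model onto a frame to get a skin likelihood map."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    return cv2.calcBackProject([hsv], [0, 1, 2], hist,
                               [0, 180, 0, 256, 0, 256], scale=1)
```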

An Automatic Camera Tracking System for Video Surveillance

  • Lee, Sang-Hwa; Sharma, Siddharth; Lin, Sang-Lin; Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2010.07a / pp.42-45 / 2010
  • This paper proposes an intelligent video surveillance system for human object tracking. The proposed system integrates object extraction, human object recognition, face detection, and camera control. First, objects in the video signal are extracted using background subtraction. Then, each object region is examined to determine whether it is human. For this recognition, a region-based shape descriptor, the angular radial transform (ART) in MPEG-7, is used to learn and train the shapes of human bodies. When the object is determined to be a human, or something to be investigated, the face region is detected. Finally, the face or object region is tracked in the video, and a pan/tilt/zoom (PTZ) controllable camera follows the moving object using its motion information. The simulation is performed with real CCTV cameras and their communication protocol. According to the experiments, the proposed system is able to track a moving object (human) automatically, not only in the image domain but also in real 3-D space. The proposed system reduces the need for human supervisors and improves surveillance efficiency with computer vision techniques.
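
The background-subtraction stage can be sketched with OpenCV's MOG2 model followed by contour extraction of candidate object regions; the video source, thresholds, and minimum blob area below are placeholders.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

cap = cv2.VideoCapture("cctv_stream.avi")   # placeholder video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)
    # Keep only confident foreground pixels and extract candidate object blobs.
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = [cv2.boundingRect(c) for c in contours
                  if cv2.contourArea(c) > 500]   # assumed minimum object size
    for (x, y, w, h) in candidates:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("objects", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```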

A Tracking of Head Movement for Stereophonic 3-D Sound (스테레오 입체음향을 위한 머리 움직임 추정)

  • Kim, Hyun-Tae; Lee, Kwang-Eui; Park, Jang-Sik
    • Journal of Korea Multimedia Society / v.8 no.11 / pp.1421-1431 / 2005
  • There are two methods of 3-D sound reproduction: a surround system, such as the 3.1-channel method, and a binaural system using two channels. The binaural system exploits the sound localization ability of a human with two ears. In general, the crosstalk between the two channels of a 2-channel loudspeaker system must be canceled to produce natural 3-D sound, and to do so it is necessary to track the listener's head movement. In this paper, we propose a new algorithm that accurately tracks the head movement of a listener. The proposed algorithm is based on face and eye detection: the face is detected from the image intensity, and the eye positions are detected by mathematical morphology. When the listener's head moves, the length of the boundary between the face region and the eyes changes, and this information is used to track the head movement. Computer simulation results show that the head movement is effectively estimated within a margin of error of ±10 using the proposed algorithm.
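
The morphology-based eye localization can be sketched with a black-hat transform that highlights small dark regions in the upper half of the face; the kernel size and threshold below are assumptions rather than the paper's parameters.

```python
import cv2

def locate_eye_candidates(face_gray):
    """Find dark eye candidates inside a grayscale face region using a
    morphological black-hat transform (illustrative sketch)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    blackhat = cv2.morphologyEx(face_gray, cv2.MORPH_BLACKHAT, kernel)
    _, mask = cv2.threshold(blackhat, 40, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if y < face_gray.shape[0] // 2:        # eyes lie in the upper half
            centers.append((x + w // 2, y + h // 2))
    return sorted(centers)                      # candidates, left to right
```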
