• Title/Summary/Keyword: Facial Region Tracking


Multiple Face Segmentation and Tracking Based on Robust Hausdorff Distance Matching

  • Park, Chang-Woo;Kim, Young-Ouk;Sung, Ha-Gyeong
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.632-635 / 2003
  • This paper describes a system for tracking multiple faces in an input video sequence using facial segmentation based on a facial convex hull and a robust Hausdorff distance. The algorithm adopts a skin color reference map in YCbCr color space and a hair color reference map in RGB color space to classify the face region. Then, we obtain an initial face model through preprocessing and a convex hull. For tracking, the algorithm computes the displacement of the point set between frames using a robust Hausdorff distance and selects the best possible displacement. Finally, the initial face model is updated using this displacement. We provide an example illustrating the proposed tracking algorithm, which efficiently tracks rotating and zooming faces, as well as multiple faces, in video sequences obtained from a CCD camera.

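A rough sketch of the matching step the abstract above describes: a partial (robust) Hausdorff distance between the model point set and the point set of the new frame, minimized over candidate displacements. The rank fraction `f = 0.8` and the exhaustive integer search window are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def partial_hausdorff(a: np.ndarray, b: np.ndarray, f: float = 0.8) -> float:
    """f-th quantile of nearest-neighbor distances from point set a to b."""
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)).min(axis=1)
    # Taking a quantile instead of the max suppresses outliers and
    # partially occluded points, which is what makes the measure "robust".
    return float(np.quantile(d, f))

def best_displacement(model: np.ndarray, frame_pts: np.ndarray,
                      search: int = 8) -> tuple[int, int]:
    """Search integer displacements and keep the one that minimizes the
    robust Hausdorff distance to the new frame's point set."""
    best, best_d = (0, 0), np.inf
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            d = partial_hausdorff(model + np.array([dx, dy]), frame_pts)
            if d < best_d:
                best, best_d = (dx, dy), d
    return best
```

The selected displacement is then applied to the face model points before the next frame is processed, which corresponds to the model-update step in the abstract.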

A New Face Tracking Algorithm Using Convex-hull and Hausdorff Distance (Real-time Face Tracking Using Convex Hull and Robust Hausdorff Distance)

  • Park, Min-Sik;Park, Chang-U;Park, Min-Yong
    • Proceedings of the KIEE Conference / 2001.11c / pp.438-441 / 2001
  • This paper describes a system for tracking a face in an input video sequence using facial segmentation based on a facial convex hull and a robust Hausdorff distance. The algorithm adopts the YCbCr color model of [1] to classify the face region. Then, we obtain an initial face model through preprocessing and a convex hull. For tracking, a robust Hausdorff distance is computed and the best possible displacement is selected. Finally, the previous face model is updated using this displacement. The method is robust to noise and outliers. We provide an example illustrating the proposed tracking algorithm on video sequences obtained from a CCD camera.

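The convex-hull step in this and the previous entry can be sketched with OpenCV: classify skin pixels, keep the largest connected region, and take its convex hull as the initial face model. The Cr/Cb threshold values below are commonly cited ranges, assumed here for illustration; the papers use their own reference maps.

```python
import cv2
import numpy as np

def initial_face_model(bgr: np.ndarray) -> np.ndarray:
    """Convex hull of the largest skin-colored region (assumed thresholds)."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Widely used skin ranges on the Cr and Cb channels (assumption).
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    # Morphological opening removes small speckle noise before contouring.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    face = max(contours, key=cv2.contourArea)  # assume largest blob is the face
    return cv2.convexHull(face)                # point set handed to the tracker
```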

Multiple Face Segmentation and Tracking Based on Robust Hausdorff Distance Matching

  • Park, Chang-Woo;Kim, Young-Ouk;Sung, Ha-Gyeong;Park, Mignon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.3 no.1 / pp.87-92 / 2003
  • This paper describes a system for tracking multiple faces in an input video sequence using facial segmentation based on a facial convex hull and a robust Hausdorff distance. The algorithm adopts a skin color reference map in YCbCr color space and a hair color reference map in RGB color space to classify the face region. Then, we obtain an initial face model through preprocessing and a convex hull. For tracking, the algorithm computes the displacement of the point set between frames using a robust Hausdorff distance and selects the best possible displacement. Finally, the initial face model is updated using this displacement. We provide an example illustrating the proposed tracking algorithm, which efficiently tracks rotating and zooming faces, as well as multiple faces, in video sequences obtained from a CCD camera.
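
This journal version combines a hair color reference map in RGB with the YCbCr skin map. A minimal sketch of merging the two cues, with all threshold values as loud assumptions (the paper's reference maps are empirically derived):

```python
import cv2
import numpy as np

def head_region_mask(bgr: np.ndarray) -> np.ndarray:
    """OR of a YCbCr skin mask and a crude RGB hair mask (assumed thresholds)."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    b, g, r = cv2.split(bgr)
    rgb_sum = r.astype(int) + g.astype(int) + b.astype(int)
    # Treat dark, near-gray RGB pixels as candidate hair (illustrative only).
    hair = (rgb_sum < 180) & (np.abs(r.astype(int) - b.astype(int)) < 25)
    return cv2.bitwise_or(skin, hair.astype(np.uint8) * 255)
```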

Extracting & Tracking Algorithm for Facial Motion Capture Animation

  • 이문희;김경석
    • Journal of Broadcast Engineering / v.8 no.2 / pp.172-180 / 2003
  • In this paper, we propose a fast and precise extraction and tracking algorithm for facial motion capture animation based on a general camera and frame grabber. The proposed algorithm consists of two steps: extraction and tracking. The former separates multiple markers from the input image using region merging based on neural networks. The latter tracks the extracted markers at each frame using a neural-network-based tracking algorithm. In experiments, we could remove noise and reduce processing time in the extraction step, and we obtained good tracking results even at low frame rates.
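
As a classical stand-in for the neural-network extraction and tracking steps described above (the paper's region-merging and tracking networks are not reproduced here), markers can be pulled out with thresholding plus connected components and tracked by nearest-neighbor assignment between frames; the brightness threshold and minimum blob area are assumptions.

```python
import cv2
import numpy as np

def extract_markers(gray: np.ndarray, thresh: int = 200) -> np.ndarray:
    """Centroids of bright blobs, taken as candidate marker positions."""
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    keep = stats[1:, cv2.CC_STAT_AREA] > 4   # skip background label 0, drop noise
    return centroids[1:][keep]

def track_markers(prev_pts: np.ndarray, curr_pts: np.ndarray) -> np.ndarray:
    """Match each previous marker to its nearest centroid in the new frame."""
    d = np.linalg.norm(prev_pts[:, None, :] - curr_pts[None, :, :], axis=-1)
    return curr_pts[d.argmin(axis=1)]
```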

Facial Feature Tracking from a General USB PC Camera

  • 양정석;이칠우
    • Proceedings of the Korean Information Science Society Conference / 2001.10b / pp.412-414 / 2001
  • In this paper, we describe a real-time facial feature tracker that uses only a general USB PC camera, without a frame grabber. The system achieves a rate of 8+ frames/second without any low-level library support, tracking the pupils, nostrils, and corners of the lips. The signal from the USB camera is in planar YUV 4:2:0 format. We convert the signal to the RGB color model to display the image, interpolate the V channel of the signal to extract the facial region, and analyze 2D blob features in the Y (luminance) channel, under geometric restrictions, to locate each facial feature within the detected facial region. Our method is simple and intuitive enough to run in real time.

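The color handling this abstract describes (planar YUV 4:2:0 in, RGB out for display, the Y plane analyzed for features, and the V plane interpolated for face-region extraction) can be sketched as below; the frame size and the zero-filled stand-in frame are assumptions.

```python
import cv2
import numpy as np

W, H = 320, 240                                        # assumed capture resolution
yuv420 = np.zeros((H * 3 // 2, W), dtype=np.uint8)     # stand-in for one I420 frame
rgb = cv2.cvtColor(yuv420, cv2.COLOR_YUV2RGB_I420)     # full-color image for display
y_plane = yuv420[:H]                                   # luminance: blob analysis here
v_plane = yuv420[H + H // 4:].reshape(H // 2, W // 2)  # quarter-size V plane
v_full = cv2.resize(v_plane, (W, H))                   # interpolated V for the face region
```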

Understanding the Importance of Presenting Facial Expressions of an Avatar in Virtual Reality

  • Kim, Kyulee;Joh, Hwayeon;Kim, Yeojin;Park, Sohyeon;Oh, Uran
    • International Journal of Advanced Smart Convergence / v.11 no.4 / pp.120-128 / 2022
  • While online social interactions have become more prevalent with the increased popularity of Metaverse platforms, little has been studied about the effects of facial expressions in virtual reality (VR), which are known to play a key role in social contexts. To understand the importance of presenting the facial expressions of a virtual avatar under different contexts, we conducted a user study with 24 participants who were asked to have a conversation and play a charades game with an avatar, with and without facial expressions. The results show that participants tend to gaze at the face region for the majority of the time when having a conversation or trying to guess emotion-related keywords while playing charades, regardless of the presence of facial expressions. Yet, we confirmed that participants prefer to see facial expressions in virtual reality as well as in real-world scenarios, as these help them to better understand the context and to have more immersive and focused experiences.

A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation

  • Kim, Woonggi;Chun, Junchul
    • Journal of Internet Computing and Services / v.14 no.6 / pp.117-124 / 2013
  • In this paper, we present a new method that efficiently estimates face direction from a sequence of input video images in real time. The proposed method first detects the facial region and the major facial features (both eyes, the nose, and the mouth) using Haar-like features, which are relatively insensitive to lighting variation. It then tracks the feature points in every frame using optical flow in real time and determines the direction of the face from the tracked feature points. Further, to avoid recognizing false feature positions when coordinates are lost during optical flow tracking, the method validates the locations of the facial features in real time by template matching against the detected features. Depending on the correlation score of this template-matching check, the face direction estimation process either re-detects the facial features or continues tracking them while determining the direction of the face. In the feature detection phase, the template matching initially stores the locations of four facial features (the left and right eyes, the tip of the nose, and the mouth) and re-evaluates this information by detecting new facial features from the input image whenever the similarity measure between the stored information and the features traced by optical flow crosses a certain threshold. The proposed approach automatically alternates between the feature detection phase and the feature tracking phase, enabling stable face pose estimation in real time. Experiments show that the proposed method estimates face direction efficiently.
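
The detect/track/validate loop this abstract walks through maps naturally onto standard building blocks: Haar cascades for the initial detection, pyramidal Lucas-Kanade optical flow for per-frame tracking, and normalized cross-correlation against saved feature templates to decide when to re-detect. A minimal sketch follows; the 0.6 correlation threshold and the 15-pixel patch size are assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Used by the re-detection phase (not shown): standard OpenCV Haar cascade.
face_cc = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def track_step(prev_gray, gray, pts, templates, patch=15, thresh=0.6):
    """pts: float32 array of shape (N, 1, 2); templates: N saved gray patches.
    Returns updated points, or None to signal that re-detection is needed."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, pts, None, winSize=(21, 21), maxLevel=3)
    h = patch // 2
    for i, (p, ok) in enumerate(zip(new_pts.reshape(-1, 2), status.ravel())):
        x, y = p.astype(int)
        roi = gray[y - h:y + h + 1, x - h:x + h + 1]
        if not ok or roi.shape != templates[i].shape:
            return None            # point lost or out of frame: re-detect
        score = cv2.matchTemplate(roi, templates[i], cv2.TM_CCOEFF_NORMED)[0, 0]
        if score < thresh:
            return None            # template no longer matches: re-detect
    return new_pts
```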

Real Time System Realization for Binocular Eyeball Tracking Screen Cursor

  • Ryu Kwang-Ryol;Chai Duck-Hyun;Sclabassi Robert J.
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2006.05a / pp.841-846 / 2006
  • A real-time system for controlling a cursor on the computer monitor screen by binocular eyeball tracking is presented in this paper. To locate the irises and track the cursor, a facial image is acquired by a small CCD camera and converted into a binary image; the two eyes are found using a five-region mask method in the eye surroundings, and each iris is located using a four-point diagonal positioning method on its sides. The tracking cursor is matched by measuring the central moving position of the irises. Cursor control is achieved by comparing the two related distances, the maximum iris movement and the corresponding cursor movement, to calculate the moving distance from the gazing position to the screen. Experimental results were obtained by testing the system on several adults.

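The gaze-to-cursor mapping sketched in this abstract amounts to scaling the measured iris offset by a calibration ratio between maximum iris travel and screen size. A minimal sketch under assumed calibration constants (the paper's calibration procedure is not specified here):

```python
IRIS_TRAVEL_PX = 12              # assumed maximum iris excursion in image pixels
SCREEN_W, SCREEN_H = 1280, 1024  # assumed monitor resolution

def cursor_from_iris(dx: float, dy: float) -> tuple[int, int]:
    """Scale the iris offset from the centered gaze into screen coordinates."""
    sx = SCREEN_W / (2 * IRIS_TRAVEL_PX)
    sy = SCREEN_H / (2 * IRIS_TRAVEL_PX)
    x = int(SCREEN_W / 2 + dx * sx)
    y = int(SCREEN_H / 2 + dy * sy)
    # Clamp so the cursor stays on screen even past the calibrated range.
    return max(0, min(SCREEN_W - 1, x)), max(0, min(SCREEN_H - 1, y))
```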

Online Face Avatar Motion Control based on Face Tracking

  • Wei, Li;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.12 no.6 / pp.804-814 / 2009
  • In this paper, a novel system for controlling avatar motion by face tracking is presented. The system is composed of three main parts: a face feature detection algorithm based on the LCS (Local Cluster Searching) method, an HMM-based feature point recognition algorithm, and an avatar control and animation generation algorithm. In the LCS method, the face region is divided into many small regions in the horizontal and vertical directions, and the method judges whether each cross point is an object point, an edge point, or a background point. The HMM method then distinguishes the mouth, eyes, nose, etc. among these feature points. Based on the detected facial feature points, the 3D avatar is controlled in two ways: orientation and animation. The avatar orientation control information is acquired by analyzing facial geometric information, and the avatar animation is generated smoothly from the facial feature points. Finally, to evaluate the performance of the developed system, we implemented it on Windows XP; the results show that the system performs very well.

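The orientation-control part of this pipeline, deriving avatar pose from facial geometric information, can be illustrated with simple eye/nose ratios; this sketch is an assumption-laden substitute, not the paper's LCS/HMM pipeline.

```python
import numpy as np

def head_pose_from_features(l_eye, r_eye, nose):
    """Rough roll and yaw (degrees) from 2-D eye centers and nose tip."""
    l_eye, r_eye, nose = map(np.asarray, (l_eye, r_eye, nose))
    mid = (l_eye + r_eye) / 2
    # Roll: in-plane tilt of the inter-ocular axis.
    roll = np.degrees(np.arctan2(r_eye[1] - l_eye[1], r_eye[0] - l_eye[0]))
    # Yaw proxy: horizontal nose offset from the eye midpoint,
    # normalized by the inter-ocular distance (small-angle approximation).
    iod = np.linalg.norm(r_eye - l_eye)
    yaw = np.degrees(np.arcsin(np.clip((nose[0] - mid[0]) / iod, -1.0, 1.0)))
    return yaw, roll
```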

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions: Part B / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved in developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the facial region is detected efficiently from each video frame using a non-parametric HT skin color model and template matching. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced using the optical flow method. For facial expression cloning, we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points encode both head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using Radial Basis Functions (RBF). Experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
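
The final step of this abstract, moving control points by the animation parameters and interpolating the surrounding non-feature vertices, is a standard radial-basis-function deformation. A minimal sketch with a Gaussian kernel; the kernel width and the regularization term are assumptions.

```python
import numpy as np

def rbf_deform(controls: np.ndarray, offsets: np.ndarray,
               vertices: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Propagate control-point offsets to all mesh vertices via Gaussian RBFs."""
    phi = lambda r: np.exp(-(r / sigma) ** 2)
    # Solve for weights so the interpolant reproduces the control offsets;
    # the small diagonal term keeps the system well-conditioned (assumption).
    d = np.linalg.norm(controls[:, None, :] - controls[None, :, :], axis=-1)
    w = np.linalg.solve(phi(d) + 1e-9 * np.eye(len(controls)), offsets)
    # Evaluate the interpolant at every vertex and displace it.
    dv = np.linalg.norm(vertices[:, None, :] - controls[None, :, :], axis=-1)
    return vertices + phi(dv) @ w
```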