• Title/Summary/Keyword: face pose

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.2 no.2
    • /
    • pp.120-133
    • /
    • 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create a realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation. In this paper, we deal with these two problems. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation can robustly estimate a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. It is possible to recover the head pose regardless of light variations and self-occlusion by updating the template dynamically. In the facial expression synthesis phase, the variations of the major facial feature points of the face images are tracked using optical flow, and the variations are retargeted to the 3D face model. At the same time, we exploit an RBF (Radial Basis Function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments demonstrate that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
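The RBF-based local deformation step mentioned above can be illustrated with a minimal sketch, assuming Gaussian kernels and hypothetical function and variable names (not the authors' implementation): displacements observed at the tracked feature points are interpolated over nearby mesh vertices.

```python
import numpy as np

def rbf_deform(vertices, feature_pts, feature_disp, sigma=0.05):
    """Propagate feature-point displacements to mesh vertices using
    Gaussian radial basis functions (illustrative sketch only)."""
    # Solve for RBF weights so the interpolant reproduces the displacement
    # observed at each tracked feature point.
    d = np.linalg.norm(feature_pts[:, None, :] - feature_pts[None, :, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)                        # (F, F) kernel matrix
    weights = np.linalg.solve(phi + 1e-8 * np.eye(len(feature_pts)), feature_disp)

    # Evaluate the interpolant at every mesh vertex and displace it.
    d_v = np.linalg.norm(vertices[:, None, :] - feature_pts[None, :, :], axis=-1)
    phi_v = np.exp(-(d_v / sigma) ** 2)                    # (V, F)
    return vertices + phi_v @ weights
```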

Design of Face Recognition System Based on Pose Estimation : Comparative Studies of Pose Estimation Algorithms (포즈 추정 기반 얼굴 인식 시스템 설계 : 포즈 추정 알고리즘 비교 연구)

  • Kim, Jin-Yul;Kim, Jong-Bum;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.66 no.4
    • /
    • pp.672-681
    • /
    • 2017
  • This paper is concerned with the design methodology of a face recognition system based on pose estimation. In 2-dimensional face recognition, variations of facial pose degrade recognition performance because object recognition is carried out using the brightness of each pixel in the image. To alleviate this problem, the proposed face recognition system uses Learning Vector Quantization (LVQ) or K-Nearest Neighbor (K-NN) to estimate the facial pose in an image, and the pose-classified images obtained from LVQ or K-NN are then used as inputs to networks such as Convolutional Neural Networks (CNNs) and Radial Basis Function Neural Networks (RBFNNs). The effectiveness and efficiency of pose estimation using LVQ and K-NN, as well as the face recognition rate using CNNs and RBFNNs, are discussed through experiments carried out on the ICPR and CMU PIE databases.
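The pose-estimation front end described above can be sketched, in simplified form, as a plain K-NN vote over pixel intensities; the function and parameter names are illustrative, and the actual system combines this step with LVQ and the CNN/RBFNN recognizers.

```python
import numpy as np

def knn_pose(query, train_imgs, train_poses, k=5):
    """Assign a discrete pose label (e.g. left / frontal / right) to a face
    image by a K-NN vote over raw pixel intensities (illustrative only)."""
    x = query.ravel().astype(float)
    dists = np.linalg.norm(train_imgs.reshape(len(train_imgs), -1) - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_poses[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```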

Real Time Discrimination of 3 Dimensional Face Pose (실시간 3차원 얼굴 방향 식별)

  • Kim, Tae-Woo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.3 no.1
    • /
    • pp.47-52
    • /
    • 2010
  • In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under IR illumination, the pupils appear bright. We develop algorithms for efficient and robust detection and tracking of the pupils in real time. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between the 3D face orientation and the geometric features of the pupils. The 3D face pose for an input query image is then classified using this eigen eye feature space. In the experiments, discrimination rates for subjects close to the camera ranged from a minimum of 94.67% to a maximum of 100%.
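A minimal sketch of the eigen eye feature space idea, assuming a hypothetical layout of the pupil geometry features: training feature vectors are projected onto their principal components, and a query image receives the pose label of its nearest training sample in that subspace.

```python
import numpy as np

def build_eigen_space(feats, n_components=3):
    """PCA over pupil geometry features (e.g. inter-pupil distance,
    ellipse axis ratios, orientation); feature layout is assumed."""
    mean = feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats - mean, full_matrices=False)
    basis = vt[:n_components]                       # principal directions
    return mean, basis, (feats - mean) @ basis.T    # projected training set

def classify_pose(query_feat, mean, basis, train_proj, train_poses):
    """Project a query feature vector and return the pose label of the
    nearest training sample in the eigen eye feature space."""
    q = (query_feat - mean) @ basis.T
    return train_poses[np.argmin(np.linalg.norm(train_proj - q, axis=1))]
```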

The Impact of Model Pose on Consumer Perceptions of Price: A Perceived-Power Perspective

  • JeongGyu Lee;Dong Hoo Kim
    • Asia Marketing Journal
    • /
    • v.26 no.3
    • /
    • pp.145-155
    • /
    • 2024
  • This study examines how a model's pose that signals power influences consumers' ability to recall price information in advertisements. To extend prior findings on social judgments, we suggest that the direction of consumers' gaze and their willingness to pay attention to the model vary depending on the model's pose. Study 1 explores how consumers' perception of the model's power affects their price recall. In particular, consumers demonstrate better price recall for items displayed at the bottom of the ad when the model adopts a powerful pose, and for items displayed at the top when the model assumes a submissive pose. Study 2 investigates the influence of the perceived power of a model's pose on price recall depending on the visibility of the model's face, and reveals that consumers demonstrate better price recall for items displayed at the top when the model's face is not visible, even when the model adopts a powerful pose. Ultimately, this research provides new insights to help marketers identify ideal locations for displaying price information in ads. Theoretical and practical implications are also discussed.

Real Time 3D Face Pose Discrimination Based On Active IR Illumination (능동적 적외선 조명을 이용한 실시간 3차원 얼굴 방향 식별)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.3
    • /
    • pp.727-732
    • /
    • 2004
  • In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under IR illumination, the pupils appear bright. We develop algorithms for efficient and robust detection and tracking of the pupils in real time. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between the 3D face orientation and the geometric features of the pupils. The 3D face pose for an input query image is then classified using this eigen eye feature space. In the experiments, discrimination rates for subjects close to the camera ranged from a minimum of 94.67% to a maximum of 100%.

Robust Head Pose Estimation for Masked Face Image via Data Augmentation (데이터 증강을 통한 마스크 착용 얼굴 이미지에 강인한 얼굴 자세추정)

  • Kyeongtak, Han;Sungeun, Hong
    • Journal of Broadcast Engineering
    • /
    • v.27 no.6
    • /
    • pp.944-947
    • /
    • 2022
  • Due to the coronavirus pandemic, mask wearing has increased worldwide, so image analysis of masked faces has become essential. Although head pose estimation can be applied to various face-related applications, including driver attention monitoring, face frontalization, and gaze detection, few studies have addressed the performance degradation caused by masked faces. This study proposes a new data augmentation method that synthesizes a mask onto the face depending on the face image size and pose, and it shows robust performance on the BIWI benchmark dataset regardless of mask wearing. Since the proposed scheme is not limited to a specific model, it can be utilized in various head pose estimation models.
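The augmentation described above might be approximated by compositing a mask texture onto the lower part of each training face, scaled to the face size; the sketch below (using Pillow, with hypothetical names) is a simplified, pose-agnostic stand-in for the authors' pose-dependent synthesis.

```python
from PIL import Image

def add_synthetic_mask(face_img, mask_img):
    """Paste a mask texture over the lower face region, scaled to the
    face image size (simplified sketch; ignores head pose)."""
    w, h = face_img.size
    mask = mask_img.resize((w, h // 2)).convert("RGBA")
    out = face_img.convert("RGBA")
    out.paste(mask, (0, h // 2), mask)   # alpha channel used as paste mask
    return out.convert("RGB")
```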

Sliding Active Camera-based Face Pose Compensation for Enhanced Face Recognition (얼굴 인식률 개선을 위한 선형이동 능동카메라 시스템기반 얼굴포즈 보정 기술)

  • 장승호;김영욱;박창우;박장한;남궁재찬;백준기
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.6
    • /
    • pp.155-164
    • /
    • 2004
  • Recently, there have been remarkable developments in intelligent robot systems. A notable feature of an intelligent robot is that it can track the user and perform face recognition, which is vital for many surveillance-based systems. The advantage of face recognition over other biometrics is that the coerciveness and contact usually required when acquiring biometric characteristics are not needed. However, the accuracy of face recognition is lower than that of other biometrics because of the dimension reduction at the image acquisition step and the various changes associated with face pose and background. Many factors deteriorate face recognition performance, such as the distance from the camera to the face, changes in lighting, pose changes, and changes of facial expression. In this paper, we implement a new sliding active camera system that compensates for the pose variations which degrade face recognition performance and acquires frontal face images, and we use PCA and HMM methods to improve the face recognition. The proposed face recognition algorithm can be used for intelligent surveillance systems and mobile robot systems.
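The abstract does not detail the camera controller, but the sliding-camera compensation could, in a very rough sketch, be a proportional rule that moves the camera along its rail until the detected face is centered; the names, units, and gain below are assumptions, not the paper's design.

```python
def slide_command(face_cx, img_width, rail_pos, gain=0.002, rail_limits=(0.0, 1.0)):
    """Proportional control for a linearly sliding camera: shift the camera
    along its rail so the detected face center stays near the image center
    (purely illustrative; the paper's actual controller is not specified)."""
    error_px = face_cx - img_width / 2.0      # horizontal offset in pixels
    new_pos = rail_pos + gain * error_px      # rail position (assumed metres)
    return min(max(new_pos, rail_limits[0]), rail_limits[1])
```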

Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks (다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망)

  • Ahn, Byungtae;Choi, Dong-Geol;Kweon, In So
    • The Journal of Korea Robotics Society
    • /
    • v.12 no.3
    • /
    • pp.313-321
    • /
    • 2017
  • One of the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems involves face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation. In these applications, accurate head pose estimation is an important issue. However, conventional methods have lacked accuracy, robustness, or processing speed in practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning using a small grayscale image. This network jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination change and large pose change. The proposed framework quantitatively and qualitatively outperforms the state-of-the-art method with an average head pose error of less than 4.5° in real time.
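A minimal sketch of a multi-task network of the kind described, assuming PyTorch, a 32x32 grayscale input, and illustrative layer sizes; it is not the authors' architecture, but it shows joint face-classification and pose-regression heads over a shared backbone.

```python
import torch
import torch.nn as nn

class FacePoseNet(nn.Module):
    """Joint face/non-face classification and head-pose regression from a
    small grayscale patch (illustrative architecture, not the paper's)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
        )
        self.face_head = nn.Linear(128, 2)   # face / non-face logits
        self.pose_head = nn.Linear(128, 3)   # yaw, pitch, roll (degrees)

    def forward(self, x):                    # x: (N, 1, 32, 32)
        f = self.backbone(x)
        return self.face_head(f), self.pose_head(f)
```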

Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

  • Lee, Tai-Gun;Park, Sung-Kee;Kim, Mun-Sang;Park, Mig-Non
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.783-788
    • /
    • 2003
  • In this paper, we present a new approach to detect and recognize human faces in images from a vision camera mounted on a mobile robot platform. Because of the mobility of the camera platform, the obtained facial images are small and vary in pose. The algorithm must cope with these constraints and detect and recognize faces in nearly real time. In the detection step, a coarse-to-fine strategy is used. First, the region boundary including the face is roughly located using dual ellipse templates of facial color, and within this region the locations of the three main facial features, the two eyes and the mouth, are estimated. To this end, simplified facial feature maps using characteristic chrominance are constructed, and candidate pixels are segmented into eye or mouth pixel groups. The candidate facial features are then verified by checking whether the length and orientation of the feature pairs are suitable for face geometry. In the recognition step, a pseudo-convex hull area of the gray face image is defined that includes the feature triangle connecting the two eyes and the mouth. A set of random lattice lines is composed and laid over this convex hull area to represent its 2D appearance. From these procedures, facial information of the detected face is obtained, and the face DB images are processed in the same way for each person class. Based on the facial information of these areas, a distance measure over the matched lattice lines is calculated, and the face image is recognized using this measure as a classifier. The proposed detection and recognition algorithms overcome the constraints of a previous approach [15], make real-time face detection and recognition possible, and guarantee correct recognition regardless of some pose variation of the face. Their usefulness in mobile robot applications is demonstrated.
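The geometric verification of candidate eye and mouth features could look roughly like the check below; the ratio and tilt thresholds are illustrative assumptions rather than the paper's values.

```python
import numpy as np

def plausible_face_triangle(eye_l, eye_r, mouth,
                            ratio_range=(0.8, 1.6), max_tilt_deg=30.0):
    """Check that two eye candidates and a mouth candidate form a
    face-like triangle (illustrative thresholds)."""
    eye_l, eye_r, mouth = map(np.asarray, (eye_l, eye_r, mouth))
    eye_dist = np.linalg.norm(eye_r - eye_l)
    if eye_dist < 1e-6:
        return False
    eye_mid = (eye_l + eye_r) / 2.0
    mouth_dist = np.linalg.norm(mouth - eye_mid)
    # Eye-to-mouth distance should be comparable to the inter-eye distance.
    if not (ratio_range[0] <= mouth_dist / eye_dist <= ratio_range[1]):
        return False
    # The eye line should be roughly horizontal.
    tilt = np.degrees(np.arctan2(eye_r[1] - eye_l[1], eye_r[0] - eye_l[0]))
    return abs(tilt) <= max_tilt_deg
```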

Design of Face Recognition and Tracking System by Using RBFNNs Pattern Classifier with Object Tracking Algorithm (RBFNNs 패턴분류기와 객체 추적 알고리즘을 이용한 얼굴인식 및 추적 시스템 설계)

  • Oh, Seung-Hun;Oh, Sung-Kwun;Kim, Jin-Yul
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.64 no.5
    • /
    • pp.766-778
    • /
    • 2015
  • In this paper, we design a hybrid face recognition and tracking system realized with the aid of a polynomial-based RBFNNs pattern classifier and a particle filter. The RBFNN classifier is built by learning training data for images of diverse poses. The optimized parameters of the RBFNN classifier are obtained by Particle Swarm Optimization (PSO). Test images are face images acquired in real situations and detected by the AdaBoost algorithm. In order to improve the recognition performance for a detected image, pose estimation is carried out as a preprocessing step before face recognition. PCA is used for pose estimation: the detected image is assigned to one of the previously built poses by considering the feature difference between the stored pose images and the newly detected image. Recognition of the detected image is performed by the polynomial-based RBFNN pattern classifier, and if the detected image matches the tracking target, the target is traced by the particle filter in real time. When tracking by the particle filter fails, the AdaBoost algorithm detects the facial area again, and the pose estimation and recognition procedures are repeated as described above. Finally, the experimental results are compared and analyzed using the Honda/UCSD benchmark database.
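The tracking stage could be realized with a bootstrap particle filter along the lines of the sketch below; the state layout, motion noise, and likelihood function are placeholder assumptions rather than the paper's exact design.

```python
import numpy as np

def particle_filter_step(particles, weights, measure_likelihood, motion_std=5.0):
    """One predict/update/resample cycle of a bootstrap particle filter
    tracking a 2D face position (illustrative sketch)."""
    # Predict: random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)

    # Update: weight each particle by the likelihood of the current observation.
    weights = weights * measure_likelihood(particles)
    weights = weights / (weights.sum() + 1e-12)

    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))

    # Weighted mean as the point estimate of the face position.
    return particles, weights, particles.T @ weights
```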