• Title/Summary/Keyword: Face and head

458 search results

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.2 no.2
    • /
    • pp.120-133
    • /
    • 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and this paper deals with both. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation robustly estimates a 3D head pose from input video images: given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image, and updating the template dynamically makes it possible to recover the head pose despite lighting variations and self-occlusion. In the facial expression synthesis phase, the variations of the major facial feature points of the face images are tracked using optical flow and retargeted to the 3D face model. At the same time, we exploit radial basis functions (RBFs) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
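The RBF retargeting step described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the Gaussian kernel, its width, and the control points are assumptions.

```python
import numpy as np

def rbf_deform(control_pts, displacements, query_pts, sigma=0.1):
    """Propagate tracked feature-point displacements to nearby mesh
    vertices with Gaussian radial basis functions (a common choice;
    the paper does not specify the kernel used)."""
    # Pairwise kernel matrix between control points
    d = np.linalg.norm(control_pts[:, None] - control_pts[None, :], axis=-1)
    K = np.exp(-(d / sigma) ** 2)
    # Solve for RBF weights, one weight set per coordinate axis
    w = np.linalg.solve(K, displacements)
    # Evaluate the interpolant at the regional (query) vertices
    dq = np.linalg.norm(query_pts[:, None] - control_pts[None, :], axis=-1)
    return np.exp(-(dq / sigma) ** 2) @ w

# One control point moved by (0.01, 0, 0), one held fixed
ctrl = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
disp = np.array([[0.01, 0.0, 0.0], [0.0, 0.0, 0.0]])
moved = rbf_deform(ctrl, disp, ctrl)
```

Evaluating at the control points reproduces the tracked displacements exactly, while vertices farther away move progressively less, which is what localizes the deformation around each major feature point.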

LH-FAS v2: Head Pose Estimation-Based Lightweight Face Anti-Spoofing (LH-FAS v2: 머리 자세 추정 기반 경량 얼굴 위조 방지 기술)

  • Hyeon-Beom Heo;Hye-Ri Yang;Sung-Uk Jung;Kyung-Jae Lee
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.309-316
    • /
    • 2024
  • Facial recognition technology is widely used in various fields but faces challenges due to its vulnerability to fraudulent activities such as photo spoofing. Extensive research has been conducted to overcome this challenge; most of it, however, requires specialized equipment such as multi-modal cameras or high-performance computing environments. In this paper, we introduce LH-FAS v2 (Lightweight Head-pose-based Face Anti-Spoofing v2), a system designed to operate on a commercial webcam without any specialized equipment, to address facial recognition spoofing. LH-FAS v2 utilizes FSA-Net for head pose estimation and ArcFace for facial recognition, effectively assessing changes in head pose and verifying facial identity. We developed the VD4PS dataset, incorporating photo-spoofing scenarios, to evaluate the model's performance. The experimental results show the model's balanced accuracy and speed, indicating that head pose estimation-based face anti-spoofing can effectively counteract photo spoofing.
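The pose-change-plus-identity check described above can be reduced to a toy decision rule. This is a sketch in the spirit of the paper only: the thresholds, units, and pre-computed inputs standing in for FSA-Net pose angles and ArcFace embeddings are all assumptions.

```python
import numpy as np

def is_live(pose_series, emb_probe, emb_enrolled,
            min_pose_range=15.0, min_cos_sim=0.4):
    """Toy liveness check: the user is asked to turn the head, and a
    printed photo cannot produce a large, consistent pose change while
    keeping the identity match. Thresholds, units (per-frame yaw in
    degrees), and the embedding interface are assumptions."""
    pose_range = float(np.max(pose_series) - np.min(pose_series))
    a = np.asarray(emb_probe, float)
    b = np.asarray(emb_enrolled, float)
    cos_sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return bool(pose_range >= min_pose_range and cos_sim >= min_cos_sim)

# A real user sweeping the head passes; a static photo does not.
live = is_live([-20.0, 0.0, 20.0], [1.0, 0.0], [1.0, 0.0])
spoof = is_live([0.0, 0.5, 0.2], [1.0, 0.0], [1.0, 0.0])
```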

Analysis of Face Direction and Hand Gestures for Recognition of Human Motion (인간의 행동 인식을 위한 얼굴 방향과 손 동작 해석)

  • Kim, Seong-Eun;Jo, Gang-Hyeon;Jeon, Hui-Seong;Choe, Won-Ho;Park, Gyeong-Seop
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.4
    • /
    • pp.309-318
    • /
    • 2001
  • In this paper, we describe methods for analyzing human gestures. A human interface (HI) system for gesture analysis extracts the head and hand regions after capturing image sequences of an operator's continuous behavior with CCD cameras. Since gestures are performed with the operator's head and hands, we extract the head and hand regions to analyze gestures and calculate geometrical information from the extracted skin regions. Head motion is analyzed by obtaining the face direction. We model the head as an ellipsoid in 3D coordinates, with facial features such as the eyes, nose, and mouth located on its surface. Given the center of these feature points, the angle of that center within the ellipsoid gives the direction of the face. The hand region obtained from preprocessing may include the arms as well as the hands. To extract only the hand region, we find the wrist line that divides the hand and arm regions. After separating the hand region at the wrist line, we model it as an ellipse for the analysis of hand data; the finger part is represented as a long, narrow shape. We extract hand information such as size, position, and shape.
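The ellipsoid-based face direction estimate described above can be sketched as follows. The geometry here (angles of the feature-point centroid as seen from the ellipsoid center) is a simplified illustration with assumed axis conventions, not the authors' exact formulation.

```python
import numpy as np

def face_direction(center_pt, axes):
    """Approximate yaw/pitch of the face from the 3D centroid of the
    facial feature points on an ellipsoidal head model with semi-axes
    (a, b, c). Axis conventions are assumptions: +z faces the camera,
    +x is the head's left, +y is up."""
    x, y, z = np.asarray(center_pt, float) / np.asarray(axes, float)
    yaw = np.degrees(np.arctan2(x, z))                  # left/right turn
    pitch = np.degrees(np.arctan2(y, np.hypot(x, z)))   # up/down tilt
    return yaw, pitch

# Feature centroid straight ahead on a unit sphere: zero yaw and pitch
yaw, pitch = face_direction([0.0, 0.0, 1.0], [1.0, 1.0, 1.0])
```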


A STUDY ON THE CORELATIVITY BETWEEN THE HEAD AND FACE AND THE MAXILLARY ARCH IN KOREAN (한국인 두부, 안면과 상악치궁의 크기 및 형태에 관한 비교 연구)

  • Lee, Soo Ryong;Ryu, Young Kyu
    • The korean journal of orthodontics
    • /
    • v.13 no.1
    • /
    • pp.105-114
    • /
    • 1983
  • The author studied the correlation between the head and face and the maxillary arch in Koreans. This study was conducted on 336 persons aged 9 to 19 years with normal occlusion according to Angle's classification. The following results were obtained. 1. The correlation coefficient between the Height of Head and Face (H.H.F.) and the Arch Length (A.L.) was 0.203-0.543. 2. The correlation coefficient between the Bizygomatic Width (Z.W.) and the Bicanine Width (C-C) was 0.203-0.543. 3. The correlation coefficient between the Bizygomatic Width (Z.W.) and the Bimolar Width (M-M) was 0.206-0.600. 4. The correlation coefficient between the Face Shape (Index a) and the Maxillary Arch Shape (Index c) was 0.232-0.404. 5. The correlation coefficient between the Face Shape (Index a) and the Maxillary Arch Shape (Index d) was 0.221-0.401. 6. There was no correlation between the Anterior-Posterior Width of Head (A.P.W.) and the Arch Length (A.L.), or between the Head Shape (Index b) and the Maxillary Arch Shape (Index c, Index d).
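For reference, the coefficients reported above are ordinary (Pearson) correlation coefficients, which can be computed as follows. The sample values here are made up for illustration and are not the study's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical width pairs in mm (illustrative, not the study's data)
zw = [130.2, 128.5, 135.1, 131.0, 129.4]   # bizygomatic width
cc = [34.1, 33.0, 35.6, 34.4, 33.3]        # bicanine width
r = pearson_r(zw, cc)
```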


Robust Head Pose Estimation for Masked Face Image via Data Augmentation (데이터 증강을 통한 마스크 착용 얼굴 이미지에 강인한 얼굴 자세추정)

  • Kyeongtak, Han;Sungeun, Hong
    • Journal of Broadcast Engineering
    • /
    • v.27 no.6
    • /
    • pp.944-947
    • /
    • 2022
  • Due to the coronavirus pandemic, mask wearing has increased worldwide; thus, image analysis of masked face images has become essential. Although head pose estimation can be applied to various face-related applications, including driver attention monitoring, face frontalization, and gaze detection, few studies have addressed the performance degradation caused by masked faces. This study proposes a new data augmentation method that synthesizes a mask onto the face, depending on the face image size and pose, and shows robust performance on the BIWI benchmark dataset regardless of mask-wearing. Since the proposed scheme is not limited to a specific model, it can be utilized in various head pose estimation models.

Head Pose Estimation Using Error Compensated Singular Value Decomposition for 3D Face Recognition (3차원 얼굴 인식을 위한 오류 보상 특이치 분해 기반 얼굴 포즈 추정)

  • 송환종;양욱일;손광훈
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.6
    • /
    • pp.31-40
    • /
    • 2003
  • Most face recognition systems are based on 2D images and applied in many applications. However, it is difficult to recognize a face when the pose varies severely. Therefore, head pose estimation is an indispensable step for improving the recognition rate when a face is not frontal. In this paper, we propose a novel head pose estimation algorithm for 3D face recognition. Given the 3D range image of an unknown face as input, we automatically extract facial feature points based on the face curvature. We propose an Error Compensated Singular Value Decomposition (EC-SVD) method based on the extracted facial feature points: we obtain the initial rotation angle with the SVD method and then perform a refinement procedure to compensate for the remaining errors. The proposed algorithm operates on the extracted facial features in the normalized 3D face space. In addition, we propose a 3D nearest neighbor classifier for selecting face candidates for 3D face recognition. Simulation results demonstrate the efficiency and validity of the proposed algorithm.
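The SVD step at the core of EC-SVD is the standard least-squares rotation fit between corresponding 3D feature-point sets (the Kabsch procedure). A minimal sketch, with the paper's error-compensation refinement omitted and hypothetical feature points:

```python
import numpy as np

def rotation_from_svd(src, dst):
    """Least-squares rotation mapping src points onto dst (Kabsch).
    Both inputs are (N, 3) arrays of corresponding 3D feature points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Guard against reflections in the optimal orthogonal matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Rotate hypothetical eye/nose/chin points 30 deg about z, then recover it
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 0.0], [30.0, -40.0, 10.0],
                [-30.0, -40.0, 10.0], [0.0, -70.0, 25.0]])
R_est = rotation_from_svd(pts, pts @ R_true.T)
```

The recovered matrix is a proper rotation and matches the ground truth up to numerical precision; in the paper's setting, the residual after this step is what the error-compensation refinement then reduces.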

Human face segmentation using the ellipse modeling and the human skin color space in cluttered background (배경을 포함한 이미지에서 타원 모델링과 피부색정보를 이용한 얼굴영역추출)

  • 서정원;송문섭;박정희;안동언;정성종
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.421-424
    • /
    • 1999
  • Automatic human face detection against a complex background is a difficult problem. In this paper, we propose an effective automatic face detection system that can locate the face region in natural-scene images when used as a pre-processor for a face recognition system. We use two natural and powerful visual cues: color and the shape of the human head. The outline of the human head can generally be described as roughly elliptic. In the first step of the proposed system, we fit the best possible ellipse to the outline of the head. In the next step, we apply a method based on the human skin color space, selecting flesh-tone regions in color images and histogramming their r (=R/(R+G+B)) and g (=G/(R+G+B)) values. Our experiments show that the proposed system produces robust localization results.
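The normalized r/g chromaticity cue above can be sketched as a simple box test in rg space. The threshold bounds below are illustrative assumptions, not the histogram bounds learned by the system.

```python
import numpy as np

def skin_mask(rgb, r_range=(0.35, 0.55), g_range=(0.25, 0.40)):
    """Flag pixels whose normalized chromaticities r = R/(R+G+B) and
    g = G/(R+G+B) fall inside a skin-tone box (bounds are assumptions)."""
    rgb = np.asarray(rgb, float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                      # avoid division by zero on black pixels
    r = rgb[..., 0] / s[..., 0]
    g = rgb[..., 1] / s[..., 0]
    return ((r >= r_range[0]) & (r <= r_range[1]) &
            (g >= g_range[0]) & (g <= g_range[1]))

# One skin-like pixel and one blue background pixel
img = np.array([[[190, 120, 90], [20, 40, 200]]], dtype=np.uint8)
mask = skin_mask(img)
```

Normalizing by R+G+B discards overall brightness, which is why this cue tolerates moderate illumination changes better than raw RGB thresholds.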


Tracking by Detection of Multiple Faces using SSD and CNN Features

  • Tai, Do Nhu;Kim, Soo-Hyung;Lee, Guee-Sang;Yang, Hyung-Jeong;Na, In-Seop;Oh, A-Ran
    • Smart Media Journal
    • /
    • v.7 no.4
    • /
    • pp.61-69
    • /
    • 2018
  • Multi-object tracking of general objects and of faces in particular is an important topic in computer vision, applicable to many branches of industry such as biometrics and security. The rapid development of deep neural networks has brought dramatic improvements to face recognition and object detection, which in turn improve multiple-face tracking techniques based on the tracking-by-detection method. Our proposed method uses a face detector trained with a head dataset to resolve the face deformation problem during tracking. Further, we use robust face features extracted from a deep face recognition network to match tracklets with tracked faces using the Hungarian matching method. We achieved promising results regarding the use of deep face features and head detection on a face tracking benchmark.
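The tracklet-to-detection association described above can be sketched with SciPy's Hungarian solver on a cosine-distance cost matrix. The embeddings and the gating threshold below are toy stand-ins for the deep face features.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracklets(track_feats, det_feats, max_cost=0.5):
    """Assign detections to tracklets by minimizing the total cosine
    distance between their face embeddings (Hungarian algorithm)."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    cost = 1.0 - t @ d.T                 # cosine-distance cost matrix
    rows, cols = linear_sum_assignment(cost)
    # Discard pairs whose distance exceeds the gating threshold
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]

# Two tracklets and two detections whose embeddings arrive in swapped order
tracks = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
dets = np.array([[0.1, 0.99, 0.0], [0.98, 0.05, 0.0]])
pairs = match_tracklets(tracks, dets)
```

The gating threshold keeps a tracklet unmatched rather than forcing it onto a dissimilar detection, which is how new identities enter and lost ones leave the track set.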

Robust Head Tracking using a Hybrid of Omega Shape Tracker and Face Detector for Robot Photographer (로봇 사진사를 위한 오메가 형상 추적기와 얼굴 검출기 융합을 이용한 강인한 머리 추적)

  • Kim, Ji-Sung;Joung, Ji-Hoon;Ho, An-Kwang;Ryu, Yeon-Geol;Lee, Won-Hyung;Jin, Chung-Myung
    • The Journal of Korea Robotics Society
    • /
    • v.5 no.2
    • /
    • pp.152-159
    • /
    • 2010
  • Finding the head of a person in a scene is very important for a robot photographer, because a well-composed picture depends on the position of the head. In this paper, we propose a robust head tracking algorithm that combines an omega shape tracker with a local binary pattern (LBP) AdaBoost face detector, so that the robot photographer can take a fine picture automatically. Face detection algorithms perform well on frontal faces, but not on rotated faces, and they struggle when the face is occluded by a hat or hands. To solve this problem, an omega shape tracker based on the active shape model (ASM) is presented. The omega shape tracker is robust to occlusion and illumination change; however, when the environment is dynamic, such as when people move fast or the background is complex, its performance is unsatisfactory. Therefore, this paper proposes a method that combines the face detection algorithm and the omega shape tracker probabilistically, using histogram of oriented gradients (HOG) descriptors, in order to find the human head robustly. A robot photographer was also implemented to abide by the 'rule of thirds' and to take photos when people smile.

A Study of Measurement on the Head and Face for Korean Adults (한국 성인의 머리 및 얼굴부위 측정치에 관한 연구)

  • Yoon, Hoon-Yong;Jung, Suk-Gil
    • IE interfaces
    • /
    • v.15 no.2
    • /
    • pp.199-208
    • /
    • 2002
  • This study was performed to measure various dimensions of the head and face of Korean adults. Three hundred and eighteen males and two hundred and sixty females, ranging in age from 18 to 60, participated in this study. Thirty-six dimensions were selected for measurement. Subjects were divided into three age groups - 18 to 29, 30 to 39, and 40 to 60 - for each sex. The data were analyzed to see the differences between the age groups and sexes using the SAS program. The results were also compared to Japanese and U.S. Army data. The results showed that the 'ear length', 'bigonial breadth' and 'bitragion submandibular arc' increased with age (p<0.01); however, few differences were found between the age groups in most other dimensions. Males were significantly bigger than females in every dimension. The comparison between Koreans and Japanese showed significant differences in many dimensions; for this reason, caution should be exercised in using Japanese data for Koreans. Americans were significantly bigger than Koreans in most dimensions, while Koreans have a more roundish face and wider nose ridge than Americans. The results of this study can be used to design products related to the head and face.