• Title/Summary/Keyword: facial features

Local Appearance-based Face Recognition Using SVM and PCA (SVM과 PCA를 이용한 국부 외형 기반 얼굴 인식 방법)

  • Park, Seung-Hwan;Kwak, No-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.3 / pp.54-60 / 2010
  • The local appearance-based method is a face recognition approach that divides a face image into small regions, extracts features from each region by statistical analysis, and decides the identity of the face by integrating the per-region classification results with a voting scheme. The conventional local appearance-based method divides the face image into small pieces and uses all of them in the recognition process. In this paper, we propose a local appearance-based method that uses only the relatively important facial components. The proposed method detects facial components such as the eyes, nose, and mouth, which differ greatly from person to person, locating them precisely with support vector machines (SVM). From the detected components, a number of small images containing the facial parts are constructed, and features are extracted from each component image using principal component analysis (PCA). We compared the performance of the proposed method with that of conventional methods. The results show that the proposed method outperforms the conventional local appearance-based method while preserving its advantages.
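
As a minimal sketch of the pipeline this abstract describes (per-component PCA features, one SVM per component, and a majority vote over components), the following Python fragment uses scikit-learn; the component names, patch shapes, and training data are hypothetical placeholders, not the paper's actual configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Hypothetical component list; each entry maps to a stack of flattened
# image patches cropped around that facial component.
COMPONENTS = ["left_eye", "right_eye", "nose", "mouth"]

def train_component_models(patches, labels, n_components=30):
    """Fit one PCA + linear SVM pair per facial component."""
    models = {}
    for name in COMPONENTS:
        pca = PCA(n_components=n_components).fit(patches[name])
        svm = SVC(kernel="linear").fit(pca.transform(patches[name]), labels)
        models[name] = (pca, svm)
    return models

def predict_by_voting(models, test_patches):
    """Classify each component independently, then take a majority vote."""
    votes = []
    for name in COMPONENTS:
        pca, svm = models[name]
        votes.append(svm.predict(pca.transform(test_patches[name].reshape(1, -1)))[0])
    ids, counts = np.unique(votes, return_counts=True)
    return ids[np.argmax(counts)]
```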

Model based Facial Expression Recognition using New Feature Space (새로운 얼굴 특징공간을 이용한 모델 기반 얼굴 표정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB / v.17B no.4 / pp.309-316 / 2010
  • This paper introduces a new model-based method for facial expression recognition that uses facial grid angles as its feature space. To recognize the six main facial expressions, the proposed method uses a grid approach and establishes a new feature space based on the angles formed by each grid's edges and vertices. This approach is robust against affine transformations such as translation, rotation, and scaling, which severely degrade the accuracy of other facial expression recognition algorithms. The paper also describes how the feature space is built from these angles and how a feature subset is selected within this space using a wrapper approach. The selected features are classified by SVM and 3-NN classifiers, and the classification results are validated with two-tier cross-validation. The proposed method achieves 94% classification accuracy, and the feature selection algorithm improves results by up to 10% over the full feature set.
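
A sketch of the angle-based feature idea, assuming the facial grid is given as 2D vertex coordinates and a list of edge pairs (both hypothetical here): angles between edges are unaffected by translation, rotation, and uniform scaling, which is the invariance the abstract claims.

```python
import numpy as np

def edge_angle_features(vertices, edge_pairs):
    """Compute the angle between each given pair of grid edges.

    vertices   : (N, 2) array of grid vertex coordinates
    edge_pairs : list of ((i, j), (k, l)) vertex-index pairs, one per
                 pair of edges whose angle becomes a feature
    """
    feats = []
    for (i, j), (k, l) in edge_pairs:
        v1 = vertices[j] - vertices[i]
        v2 = vertices[l] - vertices[k]
        cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        feats.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.asarray(feats)
```

A wrapper-style search would then train the SVM or 3-NN classifier on candidate subsets of these angle features and keep the subset with the best cross-validation score.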

OSTEOCHONDROMA OF THE MANDIBULAR CONDYLE AND ACCOMPANYING FACIAL ASYMMETRY: REPORT OF A CASE (하악과두에 발생한 골연골종 및 이와 연관된 안면비대칭의 치료: 증례 보고)

  • Lee, Hyo-Ji;Kang, Young-Hoon;Song, Won-Wook;Kim, Sung-Won;Kim, Jong-Ryoul
    • Maxillofacial Plastic and Reconstructive Surgery / v.32 no.1 / pp.72-76 / 2010
  • Osteochondroma is one of the most common benign tumors of the axial skeleton, but it is rarely found in the facial bones. Typical facial features of condylar osteochondroma include striking facial asymmetry, malocclusion with open bite on the affected side, prognathic deviation of the chin, and crossbite on the contralateral side. In this case, a twenty-four-year-old female showed facial asymmetry, chin deviation, and open bite on the affected side, but had no symptoms of pain or dysfunction. She also had a maxillary occlusal cant and hemimandibular hypertrophy. A panoramic radiograph showed a radiopaque mass on the right mandibular condyle extending along the lateral pterygoid muscle. Computed tomography demonstrated an enlarged condylar head and a bony spur on the posteromedial side of the condyle, and 99mTc bone scintigraphy showed a focal hot spot. These findings corresponded with osteochondroma. The lesion was treated with condylectomy, and the residual facial asymmetry was corrected with two-jaw orthognathic surgery. Herein, we report a case of osteochondroma of the mandibular condyle and the accompanying facial asymmetry.

Facial Action Unit Detection with Multilayer Fused Multi-Task and Multi-Label Deep Learning Network

  • He, Jun;Li, Dongliang;Bo, Sun;Yu, Lejun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5546-5559 / 2019
  • Facial action units (AUs) have recently drawn increased attention because they can be used to recognize facial expressions. A variety of methods have been designed for frontal-view AU detection, but few can handle multi-view face images. In this paper, we propose a method for multi-view facial AU detection using a fused multilayer, multi-task, and multi-label deep learning network. The network performs two tasks: AU detection, a multi-label problem, and facial view detection, a single-label problem. A residual network and multilayer fusion are applied to obtain more representative features. Experiments show the method is effective: the F1 score on FERA 2017 is 13.1% higher than the baseline, and facial view recognition accuracy reaches 0.991, demonstrating that the multi-task, multi-label model performs well on both tasks.
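
A hedged PyTorch sketch of the two-headed, multi-task architecture the abstract outlines; the backbone layers, feature width, and AU/view counts below are placeholders, not the paper's network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskAUNet(nn.Module):
    """Shared feature extractor with two heads: a multi-label AU head
    (independent sigmoid per AU) and a single-label view head (softmax)."""
    def __init__(self, n_aus=10, n_views=9, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(      # stand-in for the residual network
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU())
        self.au_head = nn.Linear(feat_dim, n_aus)      # multi-label task
        self.view_head = nn.Linear(feat_dim, n_views)  # single-label task

    def forward(self, x):
        f = self.backbone(x)
        return self.au_head(f), self.view_head(f)

def joint_loss(au_logits, view_logits, au_targets, view_target):
    """Sum of per-AU binary cross-entropy (au_targets is a float
    multi-hot vector) and cross-entropy over the view label."""
    return (F.binary_cross_entropy_with_logits(au_logits, au_targets)
            + F.cross_entropy(view_logits, view_target))
```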

Emotion Recognition and Expression System of Robot Based on 2D Facial Image (2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템)

  • Lee, Dong-Hoon;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.4 / pp.371-376 / 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. The robot recognizes emotion from facial images, using the motion and position of multiple facial features. A tracking algorithm follows a moving user from the mobile robot, and a facial-region detection algorithm removes skin-colored regions such as hands, together with background areas outside the face, from the user image. After normalization, which enlarges or reduces the image according to the distance of the detected facial region and rotates it according to the angle of the face, the robot obtains a facial image of fixed size. A multi-feature selection algorithm then enables the robot to recognize the user's emotion. A multilayer perceptron artificial neural network (ANN) is used as the pattern recognizer, trained with the back-propagation (BP) algorithm. The emotion recognized by the robot is expressed on a graphic LCD: two coordinates are updated according to the emotion output by the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) change with these coordinates, so that the avatar on the LCD expresses complex human emotions.
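
The ANN/BP recognizer maps roughly onto a standard multilayer perceptron trained by back-propagation; a minimal scikit-learn sketch with hypothetical feature vectors and emotion labels:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data: each row is a vector of facial-feature motions and
# positions; each label is an emotion class index.
X = np.random.rand(200, 12)
y = np.random.randint(0, 4, size=200)

# MLPClassifier is a multilayer perceptron trained with back-propagation.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
recognized = clf.predict(X[:1])  # the result would drive the LCD avatar
```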

Study on the Practical 3D Facial Diagnosis using Kinect Sensors (키넥트 센서를 이용한 실용적인 3차원 안면 진단기 연구)

  • Jang, Jun-Su;Do, Jun-Hyeong;Kim, Jang-Woong;Nam, Jiho
    • Journal of Physiology & Pathology in Korean Medicine / v.29 no.3 / pp.218-222 / 2015
  • Facial diagnosis based on quantitative facial features has been studied in many fields of Korean medicine, especially in Sasang constitutional medicine. With the rapid growth of 3D measurement technology, generic and inexpensive 3D sensors such as the Microsoft Kinect have become popular in many research fields. In this study, we examine the possibility of using the Kinect for facial diagnosis. We describe the development of a facial feature extraction system and verify its measurement accuracy and repeatability. Furthermore, we compare Sasang constitution diagnosis results between a DSLR-based system and the developed Kinect-based system. The Sasang constitution diagnosis algorithm applied in the experiment was previously developed from a large database of 2D facial images acquired with DSLR cameras. Interrater reliability analysis shows almost perfect agreement (Kappa = 0.818) between the two systems, meaning that the Kinect can be used with the diagnosis algorithm even though the algorithm was originally derived from 2D facial image data. We conclude that the Kinect can be successfully applied to practical facial diagnosis.
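
The quoted agreement statistic is Cohen's kappa; a minimal sketch of how the interrater comparison between the two systems' diagnoses could be computed (the labels below are hypothetical):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical Sasang constitution diagnoses from the two systems
dslr_results   = ["Taeeum", "Soyang", "Soeum", "Taeeum", "Soyang"]
kinect_results = ["Taeeum", "Soyang", "Soeum", "Taeeum", "Soeum"]

kappa = cohen_kappa_score(dslr_results, kinect_results)
print(f"Cohen's kappa: {kappa:.3f}")  # kappa > 0.8 is "almost perfect"
```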

Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill;Yasunari Yoshitomi;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / v.21 no.2E / pp.98-104 / 2002
  • The present study describes a combination method to recognize human affective states such as anger, happiness, sadness, and surprise. We extracted emotional features from voice signals and facial expressions, and then trained models to recognize emotional states using a hidden Markov model (HMM) and a neural network (NN). For voice, we used prosodic parameters such as pitch, energy, and their derivatives, trained by HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, trained by NN for recognition. The recognition rates for the combined voice and facial expression parameters were better than those for either of the two isolated parameter sets. The simulation results were also compared with human questionnaire results.
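
The abstract does not spell out how the HMM and NN outputs are combined; one plausible late-fusion sketch, with the weighting and normalization as stated assumptions rather than the paper's scheme:

```python
import numpy as np

def fuse_modalities(hmm_logliks, nn_probs, voice_weight=0.5):
    """Combine per-emotion scores from the voice HMMs (log-likelihoods)
    and the facial-expression NN (posterior probabilities) by a weighted
    sum after converting the HMM scores to a normalized distribution."""
    hmm_probs = np.exp(hmm_logliks - np.max(hmm_logliks))
    hmm_probs /= hmm_probs.sum()
    combined = voice_weight * hmm_probs + (1 - voice_weight) * nn_probs
    return int(np.argmax(combined))  # index of the recognized emotion
```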

Facial Feature Tracking from a General USB PC Camera (범용 USB PC 카메라를 이용한 얼굴 특징점의 추적)

  • 양정석;이칠우
    • Proceedings of the Korean Information Science Society Conference / 2001.10b / pp.412-414 / 2001
  • In this paper, we describe a real-time facial feature tracker. We use only a general USB PC camera, without a frame grabber, and the system achieves a rate of 8+ frames/second without any low-level library support. It tracks the pupils, nostrils, and corners of the lips. The signal from the USB camera is in planar YUV 4:2:0 format. We convert the signal into the RGB color model to display the image, interpolate the V channel for extracting the facial region, and then analyze 2D blob features in the Y (luminance) channel, under geometric restrictions, to locate each facial feature within the detected facial region. Our method is simple and intuitive enough to run in real time.
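
The YUV-to-RGB step is standard; a sketch of planar YUV 4:2:0 to RGB conversion with NumPy (nearest-neighbor chroma upsampling and BT.601 full-range coefficients are assumptions, since the paper does not state them):

```python
import numpy as np

def yuv420_to_rgb(y, u, v):
    """y: (H, W) luma plane; u, v: (H/2, W/2) chroma planes."""
    # Upsample chroma 2x in each axis to match the luma resolution.
    uf = u.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128.0
    vf = v.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128.0
    yf = y.astype(np.float32)
    r = yf + 1.402 * vf
    g = yf - 0.344136 * uf - 0.714136 * vf
    b = yf + 1.772 * uf
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```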

Unilateral segmental odontomaxillary hypoplasia: an unusual case report

  • Pandey, Sushma;Pai, Keerthilatha M.;Nayak, Ajay G.;Vineetha, Ravindranath
    • Imaging Science in Dentistry / v.41 no.1 / pp.39-42 / 2011
  • Facial asymmetry is not an uncommon occurrence in day-to-day dental practice. It can be caused by various etiologic factors ranging from facial trauma to serious hereditary conditions. Here, we report a rare case of non-syndromic facial asymmetry in a young female who was born with the condition but was not aware of the progression of the asymmetry. No relevant family history was identified. She was also deficient in both deciduous and permanent teeth in the corresponding region of the maxilla. Hence, the cause of the asymmetry was believed to be segmental odontomaxillary hypoplasia of the left maxilla, accompanied by agenesis of the left maxillary premolars and molars and disuse atrophy of the corresponding facial musculature. This report briefly discusses the comparative features of segmental odontomaxillary hypoplasia, hemimaxillofacial dysplasia, and segmental odontomaxillary dysplasia, and clarifies the differences between segmental odontomaxillary hypoplasia and the other two conditions.

A Gaze Tracking based on the Head Pose in Computer Monitor (얼굴 방향에 기반을 둔 컴퓨터 화면 응시점 추적)

  • 오승환;이희영
    • Proceedings of the IEEK Conference / 2002.06c / pp.227-230 / 2002
  • In this paper, we concentrate on the overall gaze direction based on head pose for human-computer interaction. To determine the user's gaze direction in an image, facial features must be extracted precisely. For this, we binarize the input image and search for the two eyes and the mouth using the similarity of each block (aspect ratio, size, and average gray value) and the geometric structure of the face in the binarized image. To determine the head orientation, we create an imaginary plane on the lines formed by the features of the real face and the pinhole of the camera; we call it the virtual facial plane. The position of the virtual facial plane is estimated from the facial features projected onto the image plane, and the gaze direction is found using the surface normal vector of the virtual facial plane. This study, which uses a popular PC camera, will contribute to the practical use of gaze-tracking technology.
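
A small sketch of the final step, assuming three feature points on the virtual facial plane are known in 3D (the coordinates below are hypothetical): the plane's surface normal, obtained from a cross product, gives the gaze direction.

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit surface normal of the plane through three 3D points,
    e.g., the two eyes and the mouth on the virtual facial plane."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

# Hypothetical 3D positions: left eye, right eye, mouth
gaze_dir = plane_normal(np.array([0.0, 0.0, 1.00]),
                        np.array([6.0, 0.0, 1.05]),
                        np.array([3.0, -5.0, 1.00]))
```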
