• Title/Summary/Keyword: Facial Feature


Face Detection System Based on Candidate Extraction through Segmentation of Skin Area and Partial Face Classifier (피부색 영역의 분할을 통한 후보 검출과 부분 얼굴 분류기에 기반을 둔 얼굴 검출 시스템)

  • Kim, Sung-Hoon;Lee, Hyon-Soo
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.2 / pp.11-20 / 2010
  • In this paper we propose a face detection system consisting of a face candidate extraction method based on skin color and a face verification method based on facial structure features. First, the proposed candidate extraction method applies image segmentation and merging to skin-color regions and their neighboring regions; these two algorithms make it possible to select face candidates from a variety of faces in images with complicated backgrounds. Second, the proposed face validation method uses a partial face classifier to verify facial structure and classify face versus non-face. The classifier is trained on face images only, without non-face images, in order to reduce the number of training images required. In experiments, the proposed candidate extraction method finds on average 9.55% more faces as candidates than other methods. In the face/non-face classification experiment, the proposed validation method achieves a face classification rate on average 4.97% higher than other face/non-face classifiers at a non-face classification rate of about 99%.
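As a minimal sketch of the candidate-extraction idea above: threshold skin color in the CbCr plane, then take connected components as face candidates. The CbCr ranges and the BT.601 conversion below are common defaults, not the authors' values, and the paper's segmentation/merging algorithms are not reproduced here.

```python
import numpy as np
from collections import deque

def rgb_to_ycbcr(img):
    # ITU-R BT.601 full-range RGB -> YCbCr conversion
    r, g, b = (img[..., k].astype(float) for k in range(3))
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    # Widely used CbCr skin thresholds (hypothetical here, not the paper's)
    _, cb, cr = rgb_to_ycbcr(img)
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

def candidate_regions(mask, min_pixels=4):
    # 4-connected component labeling via BFS; each component is one candidate
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    regions, next_label = [], 1
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                q, pixels = deque([(i, j)]), []
                labels[i, j] = next_label
                while q:
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                if len(pixels) >= min_pixels:
                    regions.append(pixels)
                next_label += 1
    return regions
```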

Face and Hand Tracking Algorithm for Sign Language Recognition (수화 인식을 위한 얼굴과 손 추적 알고리즘)

  • Park, Ho-Sik;Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.11C / pp.1071-1076 / 2006
  • In this paper, we develop face and hand tracking for a sign language recognition system. The system is divided into two stages, an initial stage and a tracking stage. In the initial stage, skin features are used to localize the signer's face and hands: an ellipse model on the CbCr plane is constructed and used to detect skin color. After the skin regions have been segmented, face and hand blobs are defined using size and facial features, under the assumption that the face moves less than the hands in this signing scenario. In the tracking stage, motion estimation is applied only to the hand blobs, with the first and second derivatives of position used to predict the hand positions. We observed errors in the tracked position between consecutive frames in which the velocity changes abruptly. To improve tracking performance, the proposed algorithm compensates for this error by using an adaptive search area to re-compute the hand blobs. Experimental results indicate that the proposed method decreases the prediction error by up to 96.87% with a negligible increase in computational complexity of up to 4%.
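The first/second-derivative prediction described above can be sketched as a second-order extrapolation from the last three tracked positions; the exact formulation and the adaptive search-area rule below (linear in the last prediction error) are assumptions, not the paper's equations.

```python
def predict_position(p_t, p_t1, p_t2):
    """Predict the next hand position from the last three positions.

    velocity (first derivative):     v = p_t - p_t1
    acceleration (second derivative): a = v - (p_t1 - p_t2)
    second-order extrapolation:       p_next = p_t + v + a
    """
    vx, vy = p_t[0] - p_t1[0], p_t[1] - p_t1[1]
    ax = vx - (p_t1[0] - p_t2[0])
    ay = vy - (p_t1[1] - p_t2[1])
    return (p_t[0] + vx + ax, p_t[1] + vy + ay)

def adaptive_search_radius(last_error, base=8.0, gain=1.5, max_radius=64.0):
    # Enlarge the search window when the previous prediction error was large,
    # so the hand blob can be re-detected after abrupt velocity changes.
    return min(max_radius, base + gain * last_error)
```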

Face Recognition Based on Polar Coordinate Transform (극좌표계 변환에 기반한 얼굴 인식 방법)

  • Oh, Jae-Hyun;Kwak, No-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.1 / pp.44-52 / 2010
  • In this paper, we propose a novel face recognition method that uses polar coordinates instead of the conventional Cartesian coordinates. A point in the central area of the face is selected as the pole, and a polar image is formed by evenly sampling pixels in each of 360 directions around the pole. Applying conventional feature extraction methods to the polar image improves recognition rates. The polar coordinate system represents the area near the pole more densely than the area far from it, and in a face the important regions such as the eyes, nose, and mouth are concentrated in the central part. The polar representation of a face image therefore depicts these important facial regions more vividly than the conventional Cartesian representation. The proposed polar coordinate transform was applied to the Yale and FRGC databases, and LDA and NLDA were used afterwards to extract features. The experimental results show that the proposed method performs better than the conventional Cartesian images.
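A minimal sketch of the polar resampling step, assuming nearest-neighbor lookup (the paper's interpolation scheme and sampling density are not specified here). Each column of the output is one ray from the pole, so near-pole pixels are sampled by every ray, which is what gives the central face region its denser representation.

```python
import numpy as np

def to_polar(img, pole, n_theta=360, n_r=None):
    # Sample the grayscale image along n_theta rays from the pole.
    # Output shape: (n_r, n_theta); row 0 is the pole itself on every ray.
    h, w = img.shape
    cy, cx = pole
    if n_r is None:
        n_r = int(max(h, w) // 2)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    polar = np.zeros((n_r, n_theta), dtype=img.dtype)
    for ti, th in enumerate(thetas):
        for ri in range(n_r):
            y = int(round(cy + ri * np.sin(th)))  # nearest-neighbor sample
            x = int(round(cx + ri * np.cos(th)))
            if 0 <= y < h and 0 <= x < w:
                polar[ri, ti] = img[y, x]
    return polar
```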

Head Pose Estimation Based on Perspective Projection Using PTZ Camera (원근투영법 기반의 PTZ 카메라를 이용한 머리자세 추정)

  • Kim, Jin Suh;Lee, Gyung Ju;Kim, Gye Young
    • KIPS Transactions on Software and Data Engineering / v.7 no.7 / pp.267-274 / 2018
  • This paper describes a head pose estimation method using a PTZ (Pan-Tilt-Zoom) camera. When the external parameters of a camera are changed by rotation and translation, the estimated pose for the same head also varies. We propose a new method that estimates the head pose independently of the varying PTZ camera parameters. The proposed method consists of three steps: face detection, feature extraction, and pose estimation, for which we respectively use the MCT (Modified Census Transform) feature, a facial regression tree, and the POSIT (Pose from Orthography and Scaling with ITeration) algorithm. The original POSIT algorithm does not consider camera rotation; this paper improves POSIT based on perspective projection in order to estimate the head pose robustly even when the external camera parameters change. Experiments confirmed that the RMSE (Root Mean Square Error) of the proposed method is about 0.6° lower than that of the conventional method.
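The perspective-projection model the improvement rests on can be sketched as below; POSIT itself iterates between a scaled-orthographic approximation and a depth correction, which is not reproduced here. The focal length and the pan rotation helper are illustrative assumptions.

```python
import numpy as np

def project(points3d, R, t, f):
    # Perspective projection of model points: camera-frame coordinates are
    # X_c = R @ X + t, and the image point is (f * X_c/Z_c, f * Y_c/Z_c).
    pc = points3d @ R.T + t
    return f * pc[:, :2] / pc[:, 2:3]

def rot_y(pan):
    # Rotation about the camera's vertical axis (a pan of the PTZ camera)
    c, s = np.cos(pan), np.sin(pan)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])
```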

A CASE REPORT ; BROWN TUMOR OF THE MAXILLA AND MANDIBLE IN ASSOCIATION WITH PRIMARY HYPERPARATHYROIDISM (상하악에 발생한 갈색종의 증례보고)

  • Lee, Ju-Kyung;Cho, Sung-Dae;Leem, Dae-Ho
    • Maxillofacial Plastic and Reconstructive Surgery / v.31 no.1 / pp.61-66 / 2009
  • Brown tumors develop in bone and can occur in various areas such as the clavicle, ribs, cervical vertebrae, and ilium. Occurrence in the maxillofacial region is rare, and when it does occur it more often involves the mandible. Brown tumors result directly from the disturbance of calcium metabolism caused by hyperparathyroidism, and differential diagnosis from other bone lesions is difficult on radiographic features alone. Histologically, they show proliferation of spindle cells with extravasated blood and haphazardly arranged, variably sized multinucleated giant cells. Because these histological features are similar to those of other giant cell lesions (giant cell granuloma, aneurysmal bone cyst, cherubism), a firm diagnosis of brown tumor requires physical examination. Brown tumors have been described as resulting from an imbalance of osteoclastic and osteoblastic activity, which leads to bone resorption and fibrous replacement of the bone; these lesions therefore represent the terminal stage of hyperparathyroidism-dependent bone pathology. It is extremely rare for a brown tumor in the facial bones to be the first manifestation of hyperparathyroidism. We experienced one case of brown tumor in a 50-year-old female that developed in the maxilla and mandible with no history of hyperparathyroidism, and report this case with a literature review.

Quantitative Analysis of Face Color according to Health Status of Four Constitution Types for Korean Elderly Male (고연령 한국 남성의 사상 체질별 건강 수준에 따른 안색의 정량적 분석)

  • Do, Jun-Hyeong;Ku, Bon-Cho;Kim, Jang-Woong;Jang, Jun-Su;Kim, Sang-Gil;Kim, Keun-Ho;Kim, Jong-Yeol
    • Journal of Physiology & Pathology in Korean Medicine / v.26 no.1 / pp.128-132 / 2012
  • In this paper, we performed a quantitative analysis of face color according to the health status of the four constitution types. 205 Korean males aged 65 to 80 participated in this study, and 85 subjects were finally selected for the analysis. Image processing techniques were employed to extract feature variables associated with face color from a frontal facial image. Using the extracted feature variables, we investigated the correlations between face color and health status, between face color and health status within each constitution type, and between face color and the four constitution types within each health status group. As a result, we observed that the face color of the healthy group contained more red component and less blue component than that of the unhealthy group. For each constitution type, the face parts showing a significant difference according to health status differed. This is the first work to report the correlation between face color and the health status of the four constitution types using an objective method, and the numerical data on face color according to health status will serve as an objective standard for diagnosing a patient's health status.
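The color-feature extraction and group comparison above can be sketched as follows; the face-part boxes, the per-subject mean-color feature, and the healthy-minus-unhealthy difference are illustrative stand-ins, since the paper's exact feature variables are not given here.

```python
import numpy as np

def region_mean_color(img, box):
    # box = (top, bottom, left, right); returns the mean (R, G, B) of a
    # face part, used as a per-subject color feature
    t, b, l, r = box
    return img[t:b, l:r].reshape(-1, 3).mean(axis=0)

def group_color_difference(healthy_colors, unhealthy_colors):
    # Per-channel difference of the group means (healthy minus unhealthy);
    # a positive R entry and negative B entry match the reported tendency.
    return (np.asarray(healthy_colors, dtype=float).mean(axis=0)
            - np.asarray(unhealthy_colors, dtype=float).mean(axis=0))
```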

Simply Separation of Head and Face Region and Extraction of Facial Features for Image Security (영상보안을 위한 머리와 얼굴의 간단한 영역 분리 및 얼굴 특징 추출)

  • Jeon, Young-Cheol;Lee, Keon-Ik;Kim, Kang
    • Journal of the Korea Society of Computer and Information / v.13 no.5 / pp.125-133 / 2008
  • As society develops, the importance of safety for individuals and facilities in public places increases. Not only areas such as parking lots, banks, and factories that require security or crime prevention, but also individual houses and general institutions tend to increase investment in guarding and security. This study suggests a method of facial feature extraction and a simple way, based on color transforms, to separate the face region and head region, both of which are important for face recognition. First, the head region is separated using the K channel of the CMYK representation of the input image, and the face region is then separated using a color transform of the Y channel of the YIQ representation. Facial features are then extracted by labeling after a Log calculation on the head image. The clearly separated head and face regions make it easy to classify head and face shapes and to find features simply. When the suggested algorithm is utilized, security-related facilities that require protection are expected to use it effectively to guard or recognize people.
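The two channel computations above are simple pointwise transforms; a sketch follows, with RGB assumed in [0, 1]. The K threshold for the head mask is a hypothetical value (dark hair gives K near 1), since the paper's thresholds are not stated here.

```python
import numpy as np

def yiq_y(img):
    # Y (luminance) channel of the YIQ model, from RGB in [0, 1]
    return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

def cmyk_k(img):
    # K (black) channel of CMYK: K = 1 - max(R, G, B), RGB in [0, 1]
    return 1.0 - img.max(axis=-1)

def head_mask(img, k_thresh=0.6):
    # Hypothetical threshold: dark (hair) pixels have K near 1
    return cmyk_k(img) >= k_thresh
```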


Association of Nose Size and Shapes with Self-rated Health and Mibyeong (코의 크기 및 형태와 자가건강, 미병과의 상관성)

  • Ahn, Ilkoo;Bae, Kwang-Ho;Jin, Hee-Jeong;Lee, Siwoo
    • Journal of Physiology & Pathology in Korean Medicine / v.35 no.6 / pp.267-273 / 2021
  • Mibyeong is a concept that represents sub-health in traditional East Asian medicine. Assuming that nose sizes and shapes are related to respiratory function, we hypothesized that nose size and shape features are related to the self-rated health (SRH) level and self-rated Mibyeong severity, and aimed to assess this relationship using a fully automated image analysis system. The nose size features were evaluated from the frontal and profile face images of 810 participants and consisted of five length features, one area feature, and one volume feature. The level of SRH and the Mibyeong severity were determined using a questionnaire. The normalized nasal height was negatively associated with the self-rated health score (SRHS) (partial ρ = -0.125, p = 3.53E-04) and the Mibyeong score (MBS) (partial ρ = -0.172, p = 9.38E-07), even after adjustment for sex, age, and body mass index. The normalized nasal volume (ρ = -0.105, p = 0.003), the normalized nasal tip protrusion length (ρ = -0.087, p = 0.014), and the normalized nares width (ρ = -0.086, p = 0.015) showed significant correlations with the SRHS. The normalized nasal area (ρ = -0.118, p = 0.001) and the normalized nasal volume (ρ = -0.107, p = 0.002) showed significant correlations with the MBS. The wider, longer, and larger the nose, the lower the SRHS and MBS, indicating that health status can be estimated from the size and shape features of the nose.
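A covariate-adjusted rank correlation like the partial ρ reported above can be computed by regressing the covariates out of the rank-transformed variables and correlating the residuals. The sketch below assumes this residualization formulation and a tie-free ranking; it is an illustration, not the authors' statistical pipeline.

```python
import numpy as np

def _ranks(x):
    # Rank transform (0-based); note this simple form does not average ties
    return np.argsort(np.argsort(x)).astype(float)

def partial_spearman(x, y, covariates):
    # Rank-transform x, y, and each covariate, regress the covariates out of
    # both rank vectors by least squares, then return the Pearson correlation
    # of the residuals.
    Z = np.column_stack([np.ones(len(x))] + [_ranks(c) for c in covariates])
    rx, ry = _ranks(x), _ranks(y)
    bx, *_ = np.linalg.lstsq(Z, rx, rcond=None)
    by, *_ = np.linalg.lstsq(Z, ry, rcond=None)
    ex, ey = rx - Z @ bx, ry - Z @ by
    return float((ex @ ey) / np.sqrt((ex @ ex) * (ey @ ey)))
```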

Development of Semi-Supervised Deep Domain Adaptation Based Face Recognition Using Only a Single Training Sample (단일 훈련 샘플만을 활용하는 준-지도학습 심층 도메인 적응 기반 얼굴인식 기술 개발)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society / v.25 no.10 / pp.1375-1385 / 2022
  • In this paper, we propose a semi-supervised domain adaptation solution for practical face recognition (FR) scenarios in which only a single face image per target identity (to be recognized) is available in the training phase. The main goal of the proposed method is to reduce the discrepancy between the target and source domain face images, which ultimately improves FR performance. The proposed method is based on the Domain Adaptation Network (DAN), using an MMD loss function to reduce the discrepancy between domains. For more effective training, we develop a novel loss weighting strategy in which the MMD loss and cross-entropy loss are combined with different weights according to the progress of each epoch during learning. The proposed weight adaptation focuses on the source domain in the initial learning phase to learn facial feature information such as the eyes, nose, and mouth; after this initial phase, the resulting feature information is used to train the deep network with target domain images. To evaluate the effectiveness of the proposed method, FR performance was compared, under the same FR scenarios, with a pretrained model trained only on CASIA-WebFace (source images) and a fine-tuned model trained only on FERET's gallery (target images). The experimental results showed that the proposed semi-supervised domain adaptation improves performance by 24.78% over the pretrained model and 28.42% over the fine-tuned model. In addition, the proposed method outperformed other state-of-the-art domain adaptation approaches by 9.41%.
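A sketch of the two ingredients described above: a squared MMD between feature batches (here with an RBF kernel in NumPy) and an epoch-dependent weight schedule. The particular ramp below is an illustrative guess at "source cross-entropy first, domain alignment later", not the authors' exact schedule.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    # Squared maximum mean discrepancy between samples X (n, d) and Y (m, d)
    # with an RBF kernel; zero when the two samples are identical.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def loss_weights(epoch, total_epochs, w_min=0.1):
    # Hypothetical schedule: emphasize the source cross-entropy early and
    # shift weight to the MMD (alignment) term over the first half of training.
    alpha = min(1.0, epoch / (0.5 * total_epochs))
    w_mmd = w_min + (1.0 - w_min) * alpha
    w_ce = 1.0 - 0.5 * alpha
    return w_ce, w_mmd
```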

Proposing Shape Alignment for an Improved Active Shape Model (ASM의 성능향상을 위한 형태 정렬 방식 제안)

  • Hahn, Hee-Il
    • Journal of Korea Multimedia Society / v.15 no.1 / pp.63-70 / 2012
  • In this paper an extension to the original active shape model (ASM) for facial feature extraction is presented. The original ASM suffers from poor shape alignment because it aligns the shape model to a new instance of the object in a given image using a simple similarity transformation, which exploits only scale, rotation, and horizontal/vertical shift and therefore does not cope effectively with complex pose variation. To solve this problem, a new shape alignment with 6 degrees of freedom, corresponding to an affine transformation, is derived. A further extension speeds up the calculation of the Mahalanobis distance for 2-D profiles by trimming the profile covariance matrices. Extensive experiments are conducted with several images of varying poses to check the performance of the proposed method in segmenting human faces.
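The 6-DOF alignment above can be fit in closed form by least squares over corresponding landmarks: solve dst ≈ A·src + t, where the 2×2 matrix A (4 parameters) plus the translation t (2 parameters) give the six degrees of freedom. This sketch is a generic affine fit, not the paper's exact derivation.

```python
import numpy as np

def fit_affine(src, dst):
    # src, dst: (n, 2) arrays of corresponding landmark points.
    # Solve [src | 1] @ P = dst in least squares, where P stacks A^T over t.
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])            # (n, 3) design matrix
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)      # (3, 2) parameters
    A, t = P[:2].T, P[2]
    return A, t

def apply_affine(pts, A, t):
    # Map points through the fitted affine transform
    return pts @ A.T + t
```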