• Title/Summary/Keyword: Facial feature

Search results: 517

Study for Classification of Facial Expression using Distance Features of Facial Landmarks (얼굴 랜드마크 거리 특징을 이용한 표정 분류에 대한 연구)

  • Bae, Jin Hee;Wang, Bo Hyeon;Lim, Joon S.
    • Journal of IKEEE / v.25 no.4 / pp.613-618 / 2021
  • Facial expression recognition has long been a subject of continuous research in various fields. In this paper, the relationships between facial landmarks are analyzed using features obtained by calculating the distances between landmarks in the image, and five facial expressions are classified. We increased data and label reliability by labeling the data with multiple observers. In addition, faces were detected in the original data, and landmark coordinates were extracted and used as features. A genetic algorithm was used to select the features that are most helpful for classification. We performed facial expression classification and analysis with the proposed method, and the results show its validity and effectiveness.
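
As a rough illustration of the distance features described in this abstract, the sketch below turns one face's landmark coordinates into a vector of pairwise Euclidean distances. The 68-point layout, array shapes, and function names are illustrative assumptions; the paper's genetic-algorithm feature selection and classifier are not reproduced.

```python
import numpy as np
from itertools import combinations

def pairwise_distance_features(landmarks):
    """Vector of Euclidean distances between every pair of facial landmarks
    (landmarks: array of shape (num_points, 2))."""
    pairs = combinations(range(landmarks.shape[0]), 2)
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in pairs])

# Toy usage with a hypothetical 68-point landmark set for one face.
rng = np.random.default_rng(0)
landmarks = rng.uniform(0.0, 1.0, size=(68, 2))
features = pairwise_distance_features(landmarks)
print(features.shape)  # (2278,) = 68 choose 2 distance features
```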

Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong;Lee, Seul-Gi;Kim, Dong-Woo;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association / v.14 no.11 / pp.7-15 / 2014
  • This paper proposes a human facial expression recognition algorithm using PCA and template matching. First, the face image is acquired from an input image using a Haar-like feature mask. The face image is divided into two images: an upper image containing the eyes and eyebrows, and a lower image containing the mouth and jaw. Extraction of the facial components, the eyes and mouth, begins by obtaining the eye and mouth images. An eigenface is produced by PCA training on the learning images, and an eigeneye and an eigenmouth are derived from it. The eye image is obtained by template matching of the upper image with the eigeneye, and the mouth image by template matching of the lower image with the eigenmouth. Expression recognition then uses geometric properties of the eyes and mouth. Simulation results show that the proposed method achieves a higher extraction ratio than previous methods; the extraction ratio for the mouth image in particular reaches 99%. The facial expression recognition system based on the proposed method achieves a recognition ratio greater than 80% for three facial expressions: fright, anger, and happiness.
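
The two building blocks named in this abstract, PCA-based eigen-images and template matching, can be sketched as follows. The sum-of-squared-differences matching criterion and all function names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def pca_eigenimages(train_images, k=5):
    """Top-k principal components ("eigen-images") of a stack of equally sized
    grayscale training images, computed with an SVD of the centred data."""
    X = np.stack([im.ravel() for im in train_images]).astype(float)
    X -= X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:k]  # each row can be reshaped back to an image

def match_template_ssd(region, template):
    """Exhaustive template matching: the top-left offset in `region` where the
    sum of squared differences to `template` is smallest."""
    rh, rw = region.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(rh - th + 1):
        for x in range(rw - tw + 1):
            d = float(np.sum((region[y:y + th, x:x + tw] - template) ** 2))
            if d < best:
                best, best_pos = d, (y, x)
    return best_pos
```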

Fully Automatic Facial Recognition Algorithm By Using Gabor Feature Based Face Graph (가버 피쳐기반 얼굴 그래프를 이용한 완전 자동 안면 인식 알고리즘)

  • Kim, Jin-Ho
    • The Journal of the Korea Contents Association / v.11 no.2 / pp.31-39 / 2011
  • Facial recognition algorithms using a Gabor wavelet based face graph achieve very good performance, but they have weaknesses such as a large amount of computation and results that vary with the initial location. We propose a fully automatic facial recognition algorithm that uses Gabor feature based geometric deformable face graph matching. For speed-up, the initial location and size of the face graph are selected using Adaboost detection results. Geometric transformation parameters are defined so that the size and location of the graph can be updated to best match the face model graph, and the optimal parameters are derived using an optimization technique. Simulation results show that the proposed algorithm achieves a recognition rate of 96.7% and a recognition speed of 0.26 seconds on the FERET database.
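
A minimal sketch of the Gabor-feature side of this approach: a small filter bank and a "jet" of responses at one face-graph node. The kernel parameters, bank size, and node handling are illustrative assumptions; the deformable graph matching and Adaboost initialization are not reproduced.

```python
import numpy as np

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lam=8.0, gamma=0.5):
    """Real-valued Gabor kernel: a Gaussian envelope times an oriented cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

def gabor_jet(image, node, bank):
    """Responses of every kernel in the bank at one face-graph node (y, x);
    the node is assumed to lie far enough from the image border."""
    half = bank[0].shape[0] // 2
    y, x = node
    patch = image[y - half:y + half + 1, x - half:x + half + 1]
    return np.array([float(np.sum(patch * k)) for k in bank])

# A tiny bank of 4 orientations and a jet at one node of a random "face".
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
image = np.random.default_rng(0).random((120, 120))
print(gabor_jet(image, (60, 60), bank))
```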

Automatic Generation of the Personal 3D Face Model (3차원 개인 얼굴 모델 자동 생성)

  • Ham, Sang-Jin;Kim, Hyoung-Gon
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.1 / pp.104-114 / 1999
  • This paper proposes an efficient method for the automatic generation of a personalized 3D face model from a color image sequence. To robustly detect the facial region against a complex background, a moving color detection technique based on the facial color distribution is suggested. Color distribution and edge position information in the detected face region are used to extract the 31 facial feature points of the facial description parameters (FDP) proposed by the MPEG-4 SNHC (Synthetic-Natural Hybrid Coding) ad hoc group. The extracted feature points are then applied to the corresponding vertices of a generic 3D face model composed of 1038 triangular mesh points. The personalized 3D face model can be generated automatically in less than 2 seconds on a Pentium PC.

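The moving-color face detection step mentioned in this abstract could look roughly like the sketch below, which combines a crude skin-tone mask with frame differencing. The chromaticity and motion thresholds are illustrative assumptions; the FDP feature-point extraction and the fitting onto the 1038-vertex generic mesh are not reproduced.

```python
import numpy as np

def skin_mask(rgb):
    """Rough skin-tone mask from normalised r-g chromaticity thresholds."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2) + 1e-6
    r, g = rgb[..., 0] / total, rgb[..., 1] / total
    return (r > 0.35) & (r < 0.55) & (g > 0.25) & (g < 0.40)

def moving_skin_region(prev_rgb, curr_rgb, motion_thresh=20):
    """Keep only skin-coloured pixels that also changed between frames,
    mimicking a moving-colour face-region detector."""
    diff = np.abs(curr_rgb.astype(int) - prev_rgb.astype(int)).max(axis=2)
    return skin_mask(curr_rgb) & (diff > motion_thresh)
```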

An Active Contour Approach to Extract Feature Regions from Triangular Meshes

  • Min, Kyung-Ha;Jung, Moon-Ryul
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.3 / pp.575-591 / 2011
  • We present a novel active contour-based two-pass approach to extract smooth feature regions from a triangular mesh. In the first pass, an active contour formulated on level-set surfaces extracts feature regions with rough boundaries. In the second pass, the rough boundary curve is smoothed by minimizing an internal energy derived from its curvature. Separating the extraction and smoothing steps enables us to extract feature regions with smooth boundaries from a triangular mesh without a user-supplied initial model. Furthermore, smooth feature curves can be obtained by skeletonizing the smooth feature regions. We tested our algorithm on facial models and demonstrated its effectiveness.
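
The second pass of this method, smoothing a rough boundary by minimizing a curvature-based internal energy, can be approximated by the Laplacian curve smoothing sketched below. Treating the boundary as a free closed polyline (rather than a curve constrained to the mesh surface) is a simplifying assumption.

```python
import numpy as np

def smooth_closed_curve(points, iterations=50, step=0.3):
    """Iteratively move each vertex of a closed polyline toward the midpoint of
    its two neighbours, which reduces the curvature-based internal energy."""
    pts = np.asarray(points, dtype=float).copy()
    for _ in range(iterations):
        laplacian = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0)) - pts
        pts += step * laplacian
    return pts

# Toy usage: smooth a noisy circle sampled at 100 points.
t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
noisy = np.c_[np.cos(t), np.sin(t)] + 0.05 * np.random.default_rng(0).normal(size=(100, 2))
print(smooth_closed_curve(noisy)[:3])
```

A faithful mesh-based version would project each updated vertex back onto the triangular mesh after every iteration so the curve stays on the surface.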

Detection of Facial Direction using Facial Features (얼굴 특징 정보를 이용한 얼굴 방향성 검출)

  • Park Ji-Sook;Dong Ji-Youn
    • Journal of Internet Computing and Services / v.4 no.6 / pp.57-67 / 2003
  • The recent rapid development of multimedia and optical technologies has brought great attention to application systems that process facial image features. Previous research in facial image processing has mainly focused on face recognition and facial expression analysis using frontal face images, and little research has been carried out on image-based detection of face direction. Moreover, existing approaches to face direction detection, which normally use sequential images captured by a single camera, have the limitation that a frontal image must be given before any other images. In this paper, we propose a method to detect face direction using facial features such as the facial trapezoid, which is defined by the two eyes and the lower lip. Specifically, the proposed method forms a facial direction formula, defined with statistical data about the ratio of the right and left areas of the facial trapezoid, to identify whether the face is directed toward the right or the left. The proposed method can be effectively used in automatic photo arrangement systems, which often need to set a different left or right margin for a photo according to the face direction of the person in the photo.

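A toy version of the left/right area comparison on the eyes-lip region is sketched below. The paper's statistically derived direction formula and thresholds are not given in the abstract, so only a raw area ratio is computed, with illustrative point names and a triangle in place of the exact facial trapezoid.

```python
import numpy as np

def triangle_area(a, b, c):
    """Area of the triangle spanned by three 2D points."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def left_right_area_ratio(left_eye, right_eye, lower_lip):
    """Split the eyes-lip triangle by the point on the eye line directly above
    the lip and return the ratio of the left to the right sub-area; values far
    from 1.0 suggest the face is turned to one side."""
    le, re, lip = (np.asarray(p, dtype=float) for p in (left_eye, right_eye, lower_lip))
    split = np.array([lip[0], (le[1] + re[1]) / 2.0])
    return triangle_area(le, split, lip) / (triangle_area(re, split, lip) + 1e-9)

# Frontal toy example: eyes symmetric about the lip, so the ratio is close to 1.
print(left_right_area_ratio((40, 50), (80, 50), (60, 100)))
```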

Vestibular Schwannoma Atypically Invading Temporal Bone

  • Park, Soo Jeong;Yang, Na-Rae;Seo, Eui Kyo
    • Journal of Korean Neurosurgical Society / v.57 no.4 / pp.292-294 / 2015
  • Vestibular schwannoma (VS) usually presents with widening of the internal auditory canal (IAC), and these bony changes are typically limited to the IAC and do not extend to the temporal bone. Temporal bone invasion by VS is extremely rare. We report a 51-year-old man with temporal bone destruction extending beyond the IAC caused by a unilateral VS. The bony destruction extended anteriorly to the carotid canal and inferiorly to the jugular foramen. On histopathologic examination, the tumor showed a typical benign schwannoma without unusual vascularity or malignant features. In the surgical field, the facial nerve was severely compressed and distorted by the tumor, which had unevenly eroded the temporal bone. Vestibular schwannoma with atypical invasion of the temporal bone can be successfully treated with a combined translabyrinthine and lateral suboccipital approach without facial nerve dysfunction. Early identification and careful dissection of the facial nerve with intraoperative monitoring should be considered during the operation because of severe adhesion and distortion of the facial nerve by the tumor and the eroded temporal bone.

Facial Expression Classification through Covariance Matrix Correlations

  • Odoyo, Wilfred O.;Cho, Beom-Joon
    • Journal of information and communication convergence engineering / v.9 no.5 / pp.505-509 / 2011
  • This paper attempts to classify known facial expressions and to establish the correlations between two regions (eyes + eyebrows and mouth) in identifying the six prototypic expressions. Covariance matrices are used to describe region texture, capturing facial features for classification; the captured texture exhibits the patterns observed during particular expressions. Feature matching is done with a simple distance measure between the probe and the modeled representations of the eye and mouth components. We use the JAFFE database to validate our claim. A high classification rate is observed for the mouth component and for the correlation between the two (eye and mouth) components, whereas the eye component exhibits a lower classification rate when used independently.
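
A generic region-covariance descriptor in the spirit of this abstract is sketched below. The per-pixel feature set (position, intensity, gradient magnitudes) and the Frobenius-norm distance are stand-in assumptions; the paper's exact features and distance measure are not specified in the abstract.

```python
import numpy as np

def region_covariance(region):
    """Covariance matrix of per-pixel features (x, y, intensity, |dx|, |dy|)
    computed over one grayscale region, used as its texture descriptor."""
    h, w = region.shape
    yy, xx = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(region.astype(float))
    feats = np.stack([xx.ravel().astype(float), yy.ravel().astype(float),
                      region.ravel().astype(float),
                      np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats)

def covariance_distance(c1, c2):
    """Frobenius-norm distance between two descriptors (a simple stand-in)."""
    return float(np.linalg.norm(c1 - c2, ord="fro"))

# Toy usage: compare the mouth regions of two random "faces".
rng = np.random.default_rng(0)
a, b = rng.random((30, 50)), rng.random((30, 50))
print(covariance_distance(region_covariance(a), region_covariance(b)))
```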

DETECTION OF FACIAL FEATURES IN COLOR IMAGES WITH VARIOUS BACKGROUNDS AND FACE POSES

  • Park, Jae-Young;Kim, Nak-Bin
    • Journal of Korea Multimedia Society / v.6 no.4 / pp.594-600 / 2003
  • In this paper, we propose a method for detecting facial features in color images with various backgrounds and face poses. First, the proposed method extracts face candidate regions from images with various backgrounds containing skin-tone colors and complex objects, using the color and edge information of the face. Then, using the elliptical shape of the face, we correct the rotation, scale, and tilt of the face region caused by various head poses. Finally, we verify the face using facial features and detect those features. Experimental results show that the detection accuracy is high and that the proposed method can be used effectively in a pose-invariant face recognition system.

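One way to realize the pose-correction idea, estimating the orientation of the roughly elliptical face region so it can be rotated upright, is to use second-order image moments, as in the sketch below. The binary face-candidate mask is assumed to come from an earlier skin-tone and edge step that is not reproduced, and moments are used here in place of the paper's own ellipse-based correction.

```python
import numpy as np

def orientation_from_mask(mask):
    """Major-axis angle (radians) of a roughly elliptical binary region,
    computed from second-order central image moments."""
    ys, xs = np.nonzero(mask)
    xc, yc = xs.mean(), ys.mean()
    mu20 = ((xs - xc) ** 2).mean()
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

# Toy usage: a tilted rectangular blob standing in for a face-candidate mask.
yy, xx = np.mgrid[0:200, 0:200]
mask = np.abs((xx - 100) - (yy - 100)) < 20
print(np.degrees(orientation_from_mask(mask)))  # roughly 45 degrees
```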

Invariant Range Image Multi-Pose Face Recognition Using Fuzzy c-Means

  • Phokharatkul, Pisit;Pansang, Seri
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.1244-1248 / 2005
  • In this paper, we propose fuzzy c-means (FCM) to reduce recognition errors in invariant range-image, multi-pose face recognition. Scale, center, and pose errors are handled with geometric transformations. Face data is digitized into range images using a laser range finder, which does not depend on the ambient light source. The digitized range-image face data is then used as a model to generate multi-pose data, and each pose's data size is reduced by linear reduction before being stored in the database. The reduced range-image face data is transformed to a gradient face model for facial feature image extraction and for matching using fuzzy memberships adjusted by fuzzy c-means. The proposed method was tested on facial range images from 40 people with normal facial expressions. The output of the detection and recognition system is accurate to about 93 percent, and the system is robust enough to overcome typical image-acquisition problems such as noise, vertically rotated faces, and limited range resolution.

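A plain fuzzy c-means routine, the clustering component named in this abstract, is sketched below. The feature vectors, cluster count, and fuzzifier value are illustrative assumptions; the range-image acquisition and gradient face model are not reproduced.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Return cluster centres and the fuzzy membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)               # each row sums to 1
    for _ in range(iters):
        W = U ** m                                       # fuzzified memberships
        centres = (W.T @ X) / W.sum(axis=0)[:, None]     # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-9
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)         # standard FCM membership update
    return centres, U

# Toy usage on random 5-dimensional feature vectors.
centres, U = fuzzy_c_means(np.random.default_rng(1).normal(size=(60, 5)), c=3)
print(centres.shape, U.shape)  # (3, 5) (60, 3)
```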