• Title/Summary/Keyword: facial extraction

Search Result 298, Processing Time 0.027 seconds

Facial Feature Extraction using Genetic Algorithm from Original Image (배경영상에서 유전자 알고리즘을 이용한 얼굴의 각 부위 추출)

  • 이형우;이상진;박석일;민홍기;홍승홍
    • Proceedings of the IEEK Conference / 2000.06d / pp.214-217 / 2000
  • Much research on human recognition and coding schemes has been performed recently. In this context, we propose an automatic facial feature extraction algorithm with two main steps: evaluating the face region within an original background image, such as an office scene, and extracting the facial features from the evaluated face region. In the first step, a Genetic Algorithm is adopted to search for the face region in backgrounds such as offices and households; in the second step, a Template Matching Method is used to extract the facial features. Using the proposed algorithm, we can extract facial features faster and more accurately.

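The first step of the pipeline above can be sketched in miniature. Below, a tiny genetic algorithm searches a toy binary "skin map" for a face-like window; the fitness function (fraction of skin pixels inside a candidate window) and all GA parameters are illustrative assumptions, since the abstract does not specify them:

```python
import random

def fitness(image, x, y, w, h):
    """Fraction of skin-labelled pixels inside the window at (x, y)."""
    window = [row[x:x + w] for row in image[y:y + h]]
    pixels = [p for row in window for p in row]
    return sum(pixels) / len(pixels) if pixels else 0.0

def ga_face_search(image, pop_size=20, generations=30, win=4, seed=0):
    """Evolve candidate window origins toward the highest-fitness region."""
    rng = random.Random(seed)
    rows, cols = len(image), len(image[0])
    pop = [(rng.randrange(cols - win), rng.randrange(rows - win))
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, reverse=True,
                        key=lambda p: fitness(image, p[0], p[1], win, win))
        parents = scored[:pop_size // 2]              # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            (x1, y1), (x2, y2) = rng.sample(parents, 2)
            x, y = (x1 + x2) // 2, (y1 + y2) // 2     # midpoint crossover
            if rng.random() < 0.3:                    # mutation
                x = min(max(x + rng.randint(-2, 2), 0), cols - win)
                y = min(max(y + rng.randint(-2, 2), 0), rows - win)
            children.append((x, y))
        pop = parents + children
    return max(pop, key=lambda p: fitness(image, p[0], p[1], win, win))

# Toy 12x12 binary "skin map" with a 4x4 face-like blob at x=6, y=3.
img = [[0] * 12 for _ in range(12)]
for r in range(3, 7):
    for c in range(6, 10):
        img[r][c] = 1
```

The second step, template matching within the found window, would then operate on the returned region.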

A 3D Face Reconstruction Method Robust to Errors of Automatic Facial Feature Point Extraction (얼굴 특징점 자동 추출 오류에 강인한 3차원 얼굴 복원 방법)

  • Lee, Youn-Joo;Lee, Sung-Joo;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.1 / pp.122-131 / 2011
  • A widely used single image-based 3D face reconstruction method, the 3D morphable shape model, reconstructs an accurate 3D facial shape when 2D facial feature points are correctly extracted from an input face image. However, when user cooperation is not available, as in a real-time 3D face reconstruction system, this method is vulnerable to errors in automatic facial feature point extraction. To solve this problem, we automatically classify extracted facial feature points into two groups, erroneous and correct, and then reconstruct the 3D facial shape using only the correctly extracted feature points. Experimental results showed that the 3D reconstruction performance of the proposed method was remarkably improved compared to that of the previous method, which does not consider errors of automatic facial feature point extraction.
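The core idea, separating erroneous from correct feature points before reconstruction, can be illustrated with a much simpler stand-in classifier: flag points whose displacement from a mean reference shape is a statistical outlier. The z-score test and threshold below are assumptions for illustration, not the paper's actual classifier:

```python
def split_correct_erroneous(points, reference, threshold=1.5):
    """Split detected points into correct ones and outliers by z-score of
    their residual distance to a mean reference shape."""
    residuals = [((x - rx) ** 2 + (y - ry) ** 2) ** 0.5
                 for (x, y), (rx, ry) in zip(points, reference)]
    mean = sum(residuals) / len(residuals)
    var = sum((r - mean) ** 2 for r in residuals) / len(residuals)
    std = var ** 0.5 or 1.0                 # guard against zero spread
    correct, erroneous = [], []
    for p, r in zip(points, residuals):
        (correct if (r - mean) / std <= threshold else erroneous).append(p)
    return correct, erroneous

# A mean reference shape and detected points, the last one a gross error:
reference = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5)]
detected = [(0.4, 0.1), (9.8, 0.2), (0.1, 9.7), (10.2, 9.9), (40, 40)]
good, bad = split_correct_erroneous(detected, reference)
```

A shape model would then be fitted to `good` only, as in the paper's pipeline.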

Robust Extraction of Facial Features under Illumination Variations (조명 변화에 견고한 얼굴 특징 추출)

  • Jung Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.10 no.6 s.38 / pp.1-8 / 2005
  • Facial analysis is used in many applications, such as face recognition systems, human-computer interfaces driven by head movements or facial expressions, model-based coding, and virtual reality. All of these applications require very precise extraction of facial feature points. In this paper, we present a method for automatic extraction of facial feature points such as mouth corners, eye corners, and eyebrow corners. First, the face region is detected by an AdaBoost-based object detection algorithm. Then a combination of three kinds of feature energy is computed for the facial features: valley energy, intensity energy, and edge energy. Feature areas are detected by searching for horizontal rectangles with high feature energy. Finally, a corner detection algorithm is applied to the end region of each feature area. Because we integrate three feature energies, and the suggested estimation methods for valley energy and intensity energy adapt to illumination changes, the proposed feature extraction method is robust under various conditions.

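The combination of the three feature energies can be sketched as follows; the specific definitions of valley, intensity, and edge energy below are simplified assumptions (the paper estimates the first two adaptively):

```python
def combined_energy(gray):
    """Per-pixel sum of intensity, edge and valley energies."""
    rows, cols = len(gray), len(gray[0])
    energy = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            v = gray[r][c]
            # intensity energy: darker pixels (eyes, mouth) score higher
            e_int = 255 - v
            # edge energy: horizontal gradient magnitude
            e_edge = abs(gray[r][min(c + 1, cols - 1)] - v)
            # valley energy: darker than the vertical neighbours' average
            up, down = gray[max(r - 1, 0)][c], gray[min(r + 1, rows - 1)][c]
            e_val = max(0, (up + down) / 2 - v)
            energy[r][c] = e_int + e_edge + e_val
    return energy

def best_horizontal_band(gray, height=1):
    """Index of the horizontal band with the highest total feature energy."""
    energy = combined_energy(gray)
    sums = [sum(sum(energy[r + i]) for i in range(height))
            for r in range(len(gray) - height + 1)]
    return max(range(len(sums)), key=sums.__getitem__)

# Toy image: a bright face patch with one dark "eye" row at index 2.
img = [[200] * 8 for _ in range(6)]
img[2] = [200, 40, 40, 200, 200, 40, 40, 200]
print(best_horizontal_band(img))  # 2
```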

A Study on Face Component Extraction for Automatic Generation of Personal Avatar (개인아바타 자동 생성을 위한 얼굴 구성요소의 추출에 관한 연구)

  • Choi Jae Young;Hwang Seung Ho;Yang Young Kyu;Whangbo Taeg Ken
    • Journal of Internet Computing and Services / v.6 no.4 / pp.93-102 / 2005
  • In recent times, netizens have frequently used virtual "avatar" characters to present their own identity, so there is a strong need for avatars that resemble their users. This paper proposes an extraction technique for the facial region and features used in generating an avatar automatically. To extract facial feature components, the method uses an ACM (Active Contour Model) and edge information. In the facial region extraction step, the proposed method reduces the effects of lighting and poor image quality in low-resolution pictures; this is achieved by using the variation of facial area size as the external energy of the ACM. Our experiments show a success rate of 92% for extracting facial regions and an accuracy of 83.4% for extracting facial feature components. These results provide good evidence that the suggested method can extract facial regions and features accurately; moreover, this technique can be used for feature handling in the pattern parts of an automatic avatar generation system in the near future.


A STUDY ON TREATMENT EFFECTS OF MAXILLARY SECOND MOLAR EXTRACTION CASES (상악 제 2 대구치 발거에 의한 교정치료의 효과)

  • Chung, Kyu-Rhim;Park, Young-Guk;Lee, Young-Jun;Lee, Soung-Hee;Kim, Seong-Hun
    • Journal of Dental Rehabilitation and Applied Science / v.16 no.2 / pp.93-104 / 2000
  • Orthodontic treatment in conjunction with second-molar extraction has been a controversial issue among orthodontists for many decades. The aim of this study was to investigate the treatment effects of upper second molar extraction. The sample included 19 upper second molar extraction orthodontic cases (ten Angle Class I and nine Class II, average age 13Y 6M) treated at the Kyung-Hee University Department of Orthodontics. Lateral cephalometric radiographs were taken before and immediately after treatment. Seventy-nine points were digitized on each cephalogram, and 38 cephalometric parameters were computed, comprising 22 angular measurements, 13 linear measurements, and 3 facial proportions. The data obtained from each malocclusion group were analyzed by paired t-test. The statistical results disclosed no significant change in skeletal pattern after treatment, except for changes accountable to growth, while there were statistically significant changes in dentoalveolar and soft tissue patterns. There were no significant changes in the Bjork sum, posterior facial height / anterior facial height, or lower anterior facial height / anterior facial height, nor in the anteroposterior position of the maxilla and palatal plane. Although the facial axis and lower facial height increased slightly and the mandible rotated backward and downward, there was no remarkable change in the mandibular plane. There were statistically significant changes in distal movement of the upper first molar, molar key correction, and overjet reduction, while the occlusal plane did not change. The upper lip was slightly retracted, with a slight increase in the nasolabial angle. These results signify that distalization of the upper dentition with second molar extraction changes the occlusal relationship without gross modification of the craniofacial skeletal configuration. Henceforth, second molar extraction would be recommended for treating severe anterior crowding and protrusion with minor skeletal discrepancy.

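The cephalometric comparisons above rely on the paired t-test (before vs. after treatment on the same patients). A minimal pure-Python version of that statistic, applied to made-up illustrative measurements (not the study's data):

```python
import math

def paired_t(before, after):
    """Paired t statistic for before/after measurements on the same subjects."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)                      # t with n-1 dof

# Hypothetical before/after values for five subjects (not the study's data):
before = [21.0, 18.5, 24.0, 20.0, 22.5]
after = [21.5, 18.9, 24.6, 20.3, 23.2]
```

The resulting t value is compared against the t distribution with n-1 degrees of freedom to decide significance.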

Facial Feature Detection and Facial Contour Extraction using Snakes (얼굴 요소의 영역 추출 및 Snakes를 이용한 윤곽선 추출)

  • Lee, Kyung-Hee;Byun, Hye-Ran
    • Journal of KIISE: Software and Applications / v.27 no.7 / pp.731-741 / 2000
  • This paper proposes a method to detect the facial region and extract facial features, which are crucial for the visual recognition of human faces. We extract the MER (Minimum Enclosing Rectangle) of the face and of facial components using projection analysis on both edge and binary images. We use an active contour model (snakes) to extract the contours of the eyes, mouth, eyebrows, and face, in order to reflect individual differences in facial shape and to converge quickly. The choice of initial contour is very important for the performance of snakes. In particular, we detect the MER of each facial component and then determine initial contours using the general shape of the component within the boundary of the obtained MER. Experimental results show that MER extraction of the eyes, mouth, and face was performed successfully, although MER extraction of the eyebrows performed poorly for images with bright eyebrows. We obtained good contour extraction across individual differences in facial shape. In particular, for eye contour extraction, we combined edges from a first-order derivative operator and zero crossings from a second-order derivative operator in designing the snake's energy function, and achieved good eye contours. For face contour extraction, we used both edges and grey-level pixel intensities in designing the energy function, and good face contours were extracted as well.

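A single iteration of a greedy snake, the contour model used above, can be sketched as follows; the energy terms and weights are illustrative simplifications of the paper's energy functions:

```python
def greedy_snake_step(points, ext_energy, alpha=1.0, beta=1.0):
    """Move each contour point to the neighbouring cell minimising energy."""
    n = len(points)
    new_points = []
    for i, (x, y) in enumerate(points):
        px, py = points[i - 1]            # previous point (wraps around)
        nx, ny = points[(i + 1) % n]      # next point
        best, best_e = (x, y), float("inf")
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cx, cy = x + dx, y + dy
                if not (0 <= cy < len(ext_energy) and 0 <= cx < len(ext_energy[0])):
                    continue
                # internal energy: stay close to both neighbours (smoothness)
                e_int = ((cx - px) ** 2 + (cy - py) ** 2
                         + (cx - nx) ** 2 + (cy - ny) ** 2)
                e = alpha * e_int + beta * ext_energy[cy][cx]
                if e < best_e:
                    best, best_e = (cx, cy), e
        new_points.append(best)
    return new_points

# With a flat external energy the contour simply contracts:
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
flat = [[0.0] * 5 for _ in range(5)]
print(greedy_snake_step(square, flat))  # [(1, 1), (3, 1), (3, 3), (1, 3)]
```

In practice the external energy would be built from the edge and grey-level terms the abstract describes, so the contour is attracted to feature boundaries rather than simply shrinking.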

Feature Extraction Based on GRFs for Facial Expression Recognition

  • Yoon, Myoong-Young
    • Journal of Korea Society of Industrial Information Systems / v.7 no.3 / pp.23-31 / 2002
  • In this paper, we propose a new feature vector for facial expression recognition based on Gibbs distributions, which are well suited to representing spatial continuity. The extracted feature vectors are invariant under translation, rotation, and scaling of the facial expression image. The recognition algorithm has two parts: feature vector extraction and the recognition process. The feature vectors comprise modified 2-D conditional moments based on an estimated Gibbs distribution for the facial image. In the recognition phase, we use a discrete left-right HMM, which is widely used in pattern recognition. To evaluate the performance of the proposed scheme, experiments on recognizing the four universal expressions (anger, fear, happiness, surprise) were conducted with facial image sequences on a workstation. The results reveal that the proposed scheme achieves a high recognition rate of over 95%.

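As a simplified stand-in for the paper's "modified 2-D conditional moments", ordinary central moments already exhibit the translation invariance the abstract claims; the Gibbs-distribution weighting is omitted here:

```python
def central_moment(image, p, q):
    """Central moment mu_pq of a grey-level image (translation invariant)."""
    m00 = sum(sum(row) for row in image)
    m10 = sum(c * v for r, row in enumerate(image) for c, v in enumerate(row))
    m01 = sum(r * v for r, row in enumerate(image) for c, v in enumerate(row))
    xbar, ybar = m10 / m00, m01 / m00          # intensity centroid
    return sum((c - xbar) ** p * (r - ybar) ** q * v
               for r, row in enumerate(image) for c, v in enumerate(row))

# Translating the image leaves every central moment unchanged.
img = [[0, 1, 2], [3, 4, 5]]
shifted = [[0, 0, 0, 0], [0, 0, 1, 2], [0, 3, 4, 5]]
```

Normalising such moments additionally gives scale invariance; rotation invariance requires combinations such as Hu's moment invariants.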

Facial Characteristic Point Extraction for Representation of Facial Expression (얼굴 표정 표현을 위한 얼굴 특징점 추출)

  • Oh, Jeong-Su;Kim, Jin-Tae
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.1 / pp.117-122 / 2005
  • This paper proposes an algorithm for Facial Characteristic Point (FCP) extraction. FCPs play an important role in expression representation for face animation, avatar mimicry, and facial expression recognition. Conventional approaches extract FCPs with an expensive motion capture device or by using markers, which inconvenience subjects or impose a psychological burden on them. The proposed algorithm avoids these problems by using only image processing. For efficient FCP extraction, we analyze and improve the conventional algorithms for detecting the facial components on which FCP extraction is based.

Facial Feature Extraction using Multiple Active Appearance Model (Multiple Active Appearance Model을 이용한 얼굴 특징 추출 기법)

  • Park, Hyun-Jun;Kim, Kwang-Baek;Cha, Eui-Young
    • The Journal of the Korea institute of electronic communication sciences / v.8 no.8 / pp.1201-1206 / 2013
  • The Active Appearance Model (AAM) is one of the standard facial feature extraction techniques. In this paper, we propose the Multiple Active Appearance Model (MAAM). The proposed method uses two AAMs, each trained with different training parameters, so that each has different strengths and one AAM compensates for the weak points of the other. We performed facial feature extraction on 100 images to verify the performance of MAAM. Experimental results show that MAAM gives more accurate results than a single AAM with fewer fitting iterations.
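The selection logic behind MAAM, run two differently trained fitters and keep the better result, can be shown in miniature. The stand-in "fitters" below are hypothetical placeholders (real AAM fitting is far more involved); only the complementary-model selection is illustrated:

```python
def maam_fit(image, fitter_a, fitter_b):
    """Run both fitters and keep the shape whose fitting error is lower."""
    shape_a, err_a = fitter_a(image)
    shape_b, err_b = fitter_b(image)
    return shape_a if err_a <= err_b else shape_b

# Hypothetical stand-in fitters, each "trained" for a different condition:
def fit_a(image):
    return "shape-from-model-A", 0.2 if image == "condition-A" else 0.9

def fit_b(image):
    return "shape-from-model-B", 0.2 if image == "condition-B" else 0.9
```

Whichever model matches the input's condition reports the lower residual, so its shape estimate is the one returned.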

Facial Image Analysis Algorithm for Emotion Recognition (감정 인식을 위한 얼굴 영상 분석 알고리즘)

  • Joo, Y.H.;Jeong, K.H.;Kim, M.H.;Park, J.B.;Lee, J.;Cho, Y.J.
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.7 / pp.801-806 / 2004
  • Although emotion recognition technology is in demand in various fields, it remains an unsolved problem, and algorithms based on human facial images in particular still need development. In this paper, we propose a facial image analysis algorithm for emotion recognition, composed of a facial image extraction algorithm and a facial component extraction algorithm. To achieve robust performance under various illumination conditions, a fuzzy color filter is proposed for the facial image extraction algorithm. In the facial component extraction algorithm, a virtual face model is used to provide information for high-accuracy analysis. Finally, simulations are given to check and evaluate the performance.
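A fuzzy color filter of the kind proposed above can be sketched with trapezoidal membership functions over normalised chromaticity; the membership ranges below are illustrative assumptions, not the paper's tuned values:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside (a, d), 1 on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def skin_membership(rgb):
    """Fuzzy degree to which an RGB pixel belongs to the 'skin' colour class."""
    r, g, b = rgb
    s = (r + g + b) or 1                 # guard against a black pixel
    nr, ng = r / s, g / s                # normalised chromaticity
    mu_r = trapezoid(nr, 0.35, 0.40, 0.55, 0.65)
    mu_g = trapezoid(ng, 0.25, 0.28, 0.34, 0.40)
    return min(mu_r, mu_g)               # fuzzy AND of both conditions
```

Because normalised chromaticity divides out overall brightness, the membership degree changes far less under illumination changes than raw RGB thresholds would.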