• Title/Summary/Keyword: facial extraction

Facial Feature Extraction with Its Applications

  • Lee, Minkyu;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery / v.2 no.1 / pp.7-9 / 2015
  • Purpose In many face-related applications such as head pose estimation, 3D face modeling, and facial appearance manipulation, robust and fast facial feature extraction is necessary. We present a facial feature extraction method based on shape regression and feature selection for real-time use. Materials and Methods The facial features are initialized by a statistical shape model, and the shape of the facial features is then deformed iteratively according to texture patterns selected from a feature pool. Results We obtain fast and robust facial feature extraction with an alignment error of less than 4% and a processing time of less than 12 ms. The alignment error is measured as the average ratio of the pixel difference to the inter-ocular distance. Conclusion The accuracy and processing time of the method are sufficient for facial-feature-based applications such as face beautification or 3D face modeling.
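
For reference, the alignment error quoted here is the usual landmark error normalized by inter-ocular distance. A minimal sketch of that metric follows, assuming a generic (N, 2) landmark array and placeholder eye indices rather than the paper's actual landmark scheme:

```python
import numpy as np

def inter_ocular_error(pred_landmarks, gt_landmarks, left_eye_idx, right_eye_idx):
    """Mean landmark error normalized by the inter-ocular distance.

    pred_landmarks, gt_landmarks: (N, 2) arrays of (x, y) pixel coordinates.
    left_eye_idx, right_eye_idx: indices of the two eye-center landmarks
    (these depend on the landmark scheme and are placeholders here).
    """
    pred = np.asarray(pred_landmarks, dtype=float)
    gt = np.asarray(gt_landmarks, dtype=float)
    # Per-landmark Euclidean pixel error.
    per_point = np.linalg.norm(pred - gt, axis=1)
    # Normalizing distance between the two ground-truth eye centers.
    iod = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])
    return per_point.mean() / iod  # e.g. 0.04 corresponds to the reported 4%

# Example with dummy 5-point landmarks (eyes, nose tip, mouth corners):
gt = np.array([[30, 40], [70, 40], [50, 60], [38, 80], [62, 80]], float)
pred = gt + np.random.normal(scale=1.0, size=gt.shape)
print(inter_ocular_error(pred, gt, left_eye_idx=0, right_eye_idx=1))
```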

A Factor Analysis for the Success of Commercialization of the Facial Extraction and Recognition Image Information System (얼굴추출 및 인식 영상정보 시스템 상용화 성공요인 분석)

  • Kim, Shin-Pyo;Oh, Se-Dong
    • Journal of Industrial Convergence / v.13 no.2 / pp.45-54 / 2015
  • This study aims to analyze the factors behind the successful commercialization of the facial extraction and recognition image security information systems of domestic companies in Korea. According to the analysis, the internal success factors were found to include (1) possession of technology for close-range facial recognition, (2) possession of several facial recognition related patents, (3) preference for facial recognition security systems over fingerprint recognition, and (4) the strong will of the company's CEO. The external environmental success factors were found to include (1) the extensiveness of the market, (2) the rapid growth of the global facial recognition market, (3) increased demand for image security systems, (4) competition in securing an engine for facial extraction and recognition, and (5) selection by the government as one of the 100 major strategic products.

Extraction of Facial Region Using Fuzzy Color Filter (퍼지 색상 필터를 이용한 얼굴 영역 추출)

  • Kim, M.H.;Park, J.B.;Jung, K.H.;Joo, Y.H.;Lee, J.;Cho, Y.J.
    • Proceedings of the KIEE Conference / 2004.11c / pp.147-149 / 2004
  • There is no definitive solution to the face region extraction problem, even though it is an important part of pattern recognition with diverse application fields. Developing a facial region extraction algorithm is not easy because facial images are highly sensitive to age, sex, and illumination. To address these difficulties, this paper proposes a facial region extraction algorithm based on a fuzzy color filter. The fuzzy color filter enables robust facial region extraction by modeling the skin color, and it is robust in particular under various illuminations. In addition, a linear matrix inequality (LMI) optimization method is used to identify the fuzzy color filter. Finally, simulation results are given to confirm the superiority of the proposed algorithm.
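
As an illustration of the general idea of a fuzzy skin-color filter (not the LMI-identified filter of the paper), a bell-shaped membership over the Cr/Cb chroma channels can be thresholded and cleaned up to yield a face-region mask; the centers, widths, and threshold below are rough, assumed values:

```python
import numpy as np
import cv2

def skin_membership(bgr_image, cr_center=150.0, cb_center=110.0,
                    cr_width=15.0, cb_width=15.0):
    """Fuzzy skin-color membership map in [0, 1].

    A bell-shaped (Gaussian) membership over the Cr/Cb chroma channels
    stands in for the paper's fuzzy color filter; the centers and widths
    are rough literature-style values, not the LMI-identified ones.
    """
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    mu_cr = np.exp(-((cr - cr_center) ** 2) / (2.0 * cr_width ** 2))
    mu_cb = np.exp(-((cb - cb_center) ** 2) / (2.0 * cb_width ** 2))
    # Fuzzy AND (product t-norm) of the two channel memberships.
    return mu_cr * mu_cb

def extract_face_region(bgr_image, threshold=0.5):
    """Defuzzify the membership map and keep the largest connected blob."""
    mask = (skin_membership(bgr_image) > threshold).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return mask
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == largest).astype(np.uint8)
```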

Study of Emotion Recognition based on Facial Image for Emotional Rehabilitation Biofeedback (정서재활 바이오피드백을 위한 얼굴 영상 기반 정서인식 연구)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.10 / pp.957-962 / 2010
  • To recognize human emotion from a facial image, we first need to extract emotional features from the image using a feature extraction algorithm, and then classify the emotional state using a pattern classification method. The AAM (Active Appearance Model) is a well-known method that can represent a non-rigid object such as a face or a facial expression. The Bayesian network is a probability-based classifier that can represent the probabilistic relationships between a set of facial features. In this paper, our approach to facial feature extraction combines the AAM with FACS (Facial Action Coding System) to automatically model and extract facial emotional features. To recognize the facial emotion, we use DBNs (Dynamic Bayesian Networks) to model and understand the temporal phases of facial expressions in image sequences. The result of emotion recognition can be used for biofeedback-based rehabilitation of the emotionally disabled.
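
A rough sketch of the feature side of such a pipeline is given below: an AAM-style landmark sequence is reduced to coarse, FACS-inspired action-unit intensities that a temporal classifier such as a DBN could consume. The landmark indices assume a generic 68-point scheme and the AU mapping is only indicative; neither is taken from the paper.

```python
import numpy as np

# Hypothetical 68-point landmark indices; the paper's AAM layout may differ.
BROW_L, EYE_L = 19, 37
MOUTH_LEFT, MOUTH_RIGHT = 48, 54
LIP_TOP, LIP_BOTTOM = 51, 57

def au_like_features(landmark_seq):
    """Turn an AAM landmark sequence (T, 68, 2) into coarse FACS-style
    action-unit intensities relative to the first (assumed neutral) frame."""
    seq = np.asarray(landmark_seq, dtype=float)
    neutral = seq[0]

    def d(a, b, frame):  # Euclidean distance between two landmarks
        return np.linalg.norm(frame[a] - frame[b])

    feats = []
    for frame in seq:
        feats.append([
            d(BROW_L, EYE_L, frame) - d(BROW_L, EYE_L, neutral),                      # brow raise (AU1/2-like)
            d(MOUTH_LEFT, MOUTH_RIGHT, frame) - d(MOUTH_LEFT, MOUTH_RIGHT, neutral),  # lip corner pull (AU12-like)
            d(LIP_TOP, LIP_BOTTOM, frame) - d(LIP_TOP, LIP_BOTTOM, neutral),          # jaw drop (AU26-like)
        ])
    return np.asarray(feats)  # (T, 3) sequence to feed a temporal classifier such as a DBN
```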

A Study on Facial Features' Morphological Information Extraction and Classification for Avatar Generation (아바타 생성을 위한 이목구비 모양 특징정보 추출 및 분류에 관한 연구)

  • Park, Yeon-Chool
    • Journal of the Korea Computer Industry Society / v.4 no.10 / pp.631-642 / 2003
  • We propose an approach that extracts facial features from a photo and classifies them into predefined classes, according to prepared classification standards, in order to generate an avatar of the person. Facial feature extraction and classification are performed separately for the eyes, nose, lips, and jaw, and we present the features and classification standards for each. The extracted facial features are compared with the features of facial component images drawn by professional designers, and the most similar component images are mapped onto the avatar's vector face.
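
The matching step amounts to nearest-neighbour selection among designer-drawn component images. A minimal sketch follows, with hypothetical file names and feature vectors:

```python
import numpy as np

def match_component(query_feature, designer_features):
    """Pick the designer-drawn component whose feature vector is closest
    to the one extracted from the photo (nearest neighbour in feature space).

    query_feature: (D,) vector for e.g. the extracted eye shape.
    designer_features: dict mapping component image name -> (D,) vector.
    """
    names = list(designer_features)
    mat = np.stack([designer_features[n] for n in names])
    dists = np.linalg.norm(mat - np.asarray(query_feature, float), axis=1)
    return names[int(np.argmin(dists))]

# Hypothetical usage: shape descriptors for three candidate eye drawings.
designer_eyes = {
    "eye_round.png":  np.array([1.0, 0.8, 0.3]),
    "eye_narrow.png": np.array([1.4, 0.4, 0.2]),
    "eye_droopy.png": np.array([1.1, 0.6, 0.5]),
}
print(match_component(np.array([1.35, 0.45, 0.25]), designer_eyes))  # -> eye_narrow.png
```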

A Case Report on Facial Nerve Palsy after Tooth Extraction and Korean Medical Treatments (발치 후 병발한 안면마비 환자에 대한 한의학적 치료 사례 보고)

  • Kim, Dae Hun;Kim, Yu Ri;Bae, Ji Min;Hong, Seung Pyo;Koo, Bon Kil;Kim, Jae Kyu;Lee, Byung Ryul;Yang, Gi Young
    • Journal of Acupuncture Research / v.33 no.2 / pp.211-220 / 2016
  • Objectives: Facial nerve palsy is a rare but well-known complication that can occur after a tooth extraction. The paralysis usually follows the injection of a local anesthetic, and patients typically recover after a few hours; however, there are a number of reports of delayed paralysis, and the cause of delayed facial palsy remains uncertain. This is the first case report detailing how Korean medicine can be used to treat facial nerve palsy following tooth extraction, and it describes our experience of a patient's favorable recovery. Methods: A 25-year-old male patient experienced acute facial palsy after four premolar teeth were extracted. He was hospitalized in the Pusan National University Korean Medical Hospital, where we provided complex Korean traditional medical treatments such as acupuncture, cupping, use of a hot water steamer, and herbal medicine for 18 days. Results: Using the Yanagihara Grading Score, we found improvements in the patient's voluntary facial movement as his score increased from 22 to 34. Furthermore, his accompanying symptoms, such as dry eye and facial pain, disappeared. However, the patient reported transient pain around acupoints after the acupuncture intervention. Conclusion: Our study suggests that Korean medical treatments might be effective in treating facial nerve palsy after tooth extraction, although further research should be conducted due to the limited number of cases in this area.

Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan;Joo, Young-Hoon;Park, Jin-Bae
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.2373-2378 / 2005
  • An emotion detection algorithm using frontal facial images is presented in this paper. The algorithm is composed of three main stages: image processing, facial feature extraction, and emotion detection. In the image processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and a histogram analysis method. In the facial feature extraction stage, the features for emotion detection are extracted from the facial components. In the emotion detection stage, a fuzzy classifier is adopted to recognize emotion from the extracted features. Experimental results show that the proposed algorithm can detect emotion well.
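
The emotion detection stage can be pictured as a small rule-based fuzzy classifier. The sketch below is purely illustrative: the inputs, membership parameters, and rule base are assumptions, not the ones used in the paper.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership function with peak at b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (c - x) / (c - b + 1e-9)), 0.0, 1.0)

def fuzzy_emotion(mouth_openness, eyebrow_raise):
    """Toy two-input fuzzy classifier over normalized feature values in [0, 1]."""
    mouth_open   = triangular(mouth_openness,  0.3, 0.7, 1.0)
    mouth_closed = triangular(mouth_openness, -0.1, 0.0, 0.4)
    brow_high    = triangular(eyebrow_raise,   0.3, 0.7, 1.0)
    brow_low     = triangular(eyebrow_raise,  -0.1, 0.0, 0.4)
    # Rule firing strengths (min t-norm), one rule per emotion class.
    scores = {
        "surprise":  min(mouth_open, brow_high),
        "happiness": min(mouth_open, brow_low),
        "neutral":   min(mouth_closed, brow_low),
        "anger":     min(mouth_closed, brow_high),
    }
    return max(scores, key=scores.get), scores

print(fuzzy_emotion(mouth_openness=0.8, eyebrow_raise=0.9))  # likely "surprise"
```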

METHODS OF EYEBROW REGION EXTRACTION AND MOUTH DETECTION FOR FACIAL CARICATURING SYSTEM PICASSO-2 EXHIBITED AT EXPO2005

  • Tokuda, Naoya;Fujiwara, Takayuki;Funahashi, Takuma;Koshimizu, Hiroyasu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.425-428 / 2009
  • We have researched and developed the caricature generation system PICASSO. PICASSO outputs a deformed facial caricature by comparing the input face with a prepared mean face. We specialized it as PICASSO-2 for a robot exhibited at Aichi EXPO2005; this robot, driven by PICASSO-2, drew facial caricatures on shrimp rice crackers with a laser pen. We have recently been exhibiting another revised robot characterized by brush drawing. The system takes a couple of facial images with a CCD camera, extracts the facial features from the images, and generates the facial caricature in real time. We experimentally evaluated the performance of the caricatures using a large amount of data collected at Aichi EXPO2005. The results made it clear that the system's accuracy in eyebrow region extraction and mouth detection was not sufficient. In this paper, we propose improved methods for eyebrow region extraction and mouth detection.
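
As an indication of what mouth detection in such a system involves (this is a generic chroma heuristic, not the authors' improved method), a crude localisation inside a cropped face image might look like this:

```python
import numpy as np
import cv2

def detect_mouth(face_bgr):
    """Very rough mouth localisation inside a cropped face image.

    Looks for the most lip-like (high Cr, low Cb) blob in the lower third
    of the face; a simplification for illustration only.
    """
    h, w = face_bgr.shape[:2]
    top = 2 * h // 3
    lower = face_bgr[top:, :]                            # search lower third only
    ycrcb = cv2.cvtColor(lower, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[..., 1].astype(np.float32)
    cb = ycrcb[..., 2].astype(np.float32)
    lipness = cr - cb                                    # lips: strong Cr, weak Cb
    mask = lipness > np.percentile(lipness, 95)
    ys, xs = np.where(mask)
    if len(xs) == 0:
        return None
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    return (int(x0), int(y0 + top), int(x1 - x0), int(y1 - y0))  # (x, y, w, h) in face coords
```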

Realistic Avatar Face Generation Using Shading Mechanism (음영합성 기법을 이용한 실사형 아바타 얼굴 생성)

  • Park Yeon-Chool
    • Journal of Internet Computing and Services / v.5 no.5 / pp.79-91 / 2004
  • This paper proposes an avatar face generation system that uses a shading mechanism and the facial feature extraction methods of face recognition. The proposed system automatically generates an avatar face similar to the human face, using facial features extracted from a photo, and composites shading with those features; this allows it to produce a more realistic, human-like avatar face. The paper proposes a new eye localization method, a facial feature extraction method, a classification method for minimizing retrieval time, an image retrieval method based on a similarity measure, and a realistic avatar face generation method that maps the facial features onto a shaded face plane.
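
The shading composition can be pictured as a per-pixel alpha composite of a facial-feature layer over a pre-shaded face image; this is only a guess at the mechanism, not the paper's implementation:

```python
import numpy as np

def composite(shaded_face, feature_layer, alpha_mask):
    """Blend a facial-feature layer over a pre-shaded face image.

    shaded_face, feature_layer: (H, W, 3) float arrays in [0, 1].
    alpha_mask: (H, W) float array in [0, 1]; 1 where the feature layer
    should fully cover the shaded base.
    """
    a = alpha_mask[..., None]
    return feature_layer * a + shaded_face * (1.0 - a)

# Hypothetical usage with random stand-in images of matching size.
base = np.random.rand(128, 128, 3)
eyes = np.random.rand(128, 128, 3)
mask = np.zeros((128, 128)); mask[40:60, 30:100] = 1.0
result = composite(base, eyes, mask)
```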

Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression (특징점 기반의 적응적 얼굴 움직임 분석을 통한 표정 인식)

  • Noh, Sung-Kyu;Park, Han-Hoon;Shin, Hong-Chang;Jin, Yoon-Jong;Park, Jong-Il
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.667-674 / 2007
  • Facial expressions provide significant clues about one's emotional state; however, recognizing facial expressions effectively and reliably has always been a great challenge for machines. In this paper, we report a method of feature-based adaptive motion energy analysis for recognizing facial expressions. Our method optimizes the information gain heuristics of an ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. We use a minimal set of reasonable facial features, suggested by the information gain heuristics of the ID3 tree, to represent the geometric face model. For feature extraction, features are first detected and then carefully "selected": selection means differentiating features with high variability from those with low variability, so that each feature's motion pattern can be estimated effectively. Motion analysis is then performed adaptively for each facial feature; that is, each feature's motion pattern (from the neutral face to the expressed face) is estimated based on its variability. After feature extraction, the facial expression is classified using the ID3 tree (built from the 1728 possible facial expressions) and the test images from the JAFFE database. The proposed method overcomes the problems raised by previous methods. First of all, it is simple but effective: it reliably estimates the expressive facial features by differentiating features with high variability from those with low variability. Second, it is fast, avoiding complicated or time-consuming computations; instead, it exploits the motion energy values of a few selected expressive features (acquired from an intensity-based threshold). Lastly, our method gives reliable recognition rates, with an overall recognition rate of 77%. The effectiveness of the proposed method is demonstrated by the experimental results.
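
The motion energy values mentioned above can be illustrated as counts of thresholded intensity differences in a window around each feature point; the window size and threshold below are placeholders, not the paper's settings:

```python
import numpy as np

def motion_energy(neutral_gray, expressed_gray, feature_points,
                  window=15, threshold=25):
    """Per-feature motion energy between a neutral and an expressed face.

    For each (x, y) feature point, count the pixels inside a small window
    whose absolute intensity change exceeds a threshold.
    """
    diff = np.abs(expressed_gray.astype(np.int16) - neutral_gray.astype(np.int16))
    half = window // 2
    energies = []
    for (x, y) in feature_points:
        patch = diff[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
        energies.append(int((patch > threshold).sum()))
    return energies  # fed to the ID3 decision tree as (discretised) attributes
```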
