• Title/Summary/Keyword: Facial Component

Search Results: 181

Enhanced Independent Component Analysis of Temporal Human Expressions Using Hidden Markov Model

  • Lee, J.J.;Uddin, Zia;Kim, T.S.
    • 한국HCI학회:학술대회논문집
    • /
    • 2008.02a
    • /
    • pp.487-492
    • /
    • 2008
  • Facial expression recognition is an active research area in the design of human-computer interfaces. In this work, we present a new facial expression recognition system that uses Enhanced Independent Component Analysis (EICA) for feature extraction and a discrete Hidden Markov Model (HMM) for recognition. To our knowledge, the proposed approach is the first to analyze sequential images of emotion-specific facial data with EICA and to recognize them with an HMM. The performance of the proposed system has been compared with conventional approaches in which Principal and Independent Component Analysis are used for feature extraction. Preliminary results show that the proposed algorithm yields improved recognition rates compared with previous work.

  • PDF
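The recognition stage described in the abstract above — one discrete HMM per expression class, with classification by maximum likelihood — can be sketched as follows. This is a minimal illustration with toy models and observation symbols, not the paper's EICA features; the model parameters are assumptions for demonstration only.

```python
# Minimal sketch of discrete-HMM expression recognition: one HMM per
# expression class, classification by maximum forward-algorithm likelihood.
# The toy models and observation symbols below are illustrative assumptions.

def forward_likelihood(pi, A, B, obs):
    """P(obs | model) for a discrete HMM via the forward algorithm."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][o]
                 for i in range(n)]
    return sum(alpha)

def classify(models, obs):
    """Return the expression label whose HMM assigns the highest likelihood."""
    return max(models, key=lambda label: forward_likelihood(*models[label], obs))

# Toy two-state, two-symbol models: "smile" tends to emit symbol 1,
# "neutral" tends to emit symbol 0.
models = {
    "smile":   ([0.5, 0.5], [[0.7, 0.3], [0.3, 0.7]], [[0.1, 0.9], [0.2, 0.8]]),
    "neutral": ([0.5, 0.5], [[0.7, 0.3], [0.3, 0.7]], [[0.9, 0.1], [0.8, 0.2]]),
}
print(classify(models, [1, 1, 1, 0, 1]))  # → smile
```

In the paper's setting the observation symbols would come from vector-quantized EICA features of the image sequence rather than raw symbols.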

Facial Region Detection Algorithm for Real-Time Video Using Analysis of the Major Color Component (동영상에서 얼굴의 주색상 밝기 분포를 이용한 실시간 얼굴영역 검출기법)

  • Choi, Mi-Young;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of Digital Contents Society
    • /
    • v.8 no.3
    • /
    • pp.329-339
    • /
    • 2007
  • In this paper, we present a facial region detection algorithm for real-time video with complex backgrounds and varying illumination, using spatial and temporal methods. To detect the human region, the algorithm uses the summation of edge-difference images between consecutive frames. The detected facial candidate region is then divided vertically into two objects, and non-facial regions are removed by analysis of the major color component, since non-facial regions do not have a usable major color component. Next, the background is removed using boundary information. Finally, the facial region is detected through horizontal and vertical projections of the image. Experiments show that the proposed algorithm robustly detects facial regions in video with complex backgrounds and varying illumination.

  • PDF
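The temporal step in the abstract above — accumulating differences across consecutive frames and boxing the result with horizontal and vertical projections — can be sketched roughly as below. This is a simplified, assumption-laden version using plain frame differences rather than the paper's edge-difference images; the frames and threshold are illustrative.

```python
import numpy as np

def motion_candidate_box(frames, thresh=30):
    """Sum absolute differences between consecutive grayscale frames,
    threshold the accumulated motion, and box it via row/column projections.
    frames: list of 2-D uint8 arrays. Returns (top, bottom, left, right) or None."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for prev, cur in zip(frames, frames[1:]):
        acc += np.abs(cur.astype(np.int16) - prev.astype(np.int16))
    mask = acc > thresh
    rows = np.where(mask.any(axis=1))[0]   # vertical projection
    cols = np.where(mask.any(axis=0))[0]   # horizontal projection
    if rows.size == 0:
        return None
    return rows[0], rows[-1], cols[0], cols[-1]

# Synthetic example: a 5x4 patch changes between two otherwise static frames.
f1 = np.zeros((20, 20), dtype=np.uint8)
f2 = f1.copy()
f2[5:10, 8:12] = 200
print(motion_candidate_box([f1, f2]))  # → (5, 9, 8, 11)
```

The paper additionally prunes this candidate region with major-color and boundary analysis before the final projections.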

A Study on Facial Features' Morphological Information Extraction and Classification for Avatar Generation (아바타 생성을 위한 이목구비 모양 특징정보 추출 및 분류에 관한 연구)

  • 박연출
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.10
    • /
    • pp.631-642
    • /
    • 2003
  • We propose an approach that extracts facial features from a person's photograph and classifies them into predefined classes, according to prepared classification standards, in order to generate that person's avatar. Feature extraction and classification are performed separately for the eyes, nose, lips, and jaw, and we present the facial features and classification standards for each. The extracted facial features are compared against the features of facial component images drawn by professional designers, and the most similar facial component images are mapped onto the avatar's vector face.

  • PDF

Analysis and Synthesis of Facial Images for Age Change (나이변화를 위한 얼굴영상의 분석과 합성)

  • 박철하;최창석;최갑석
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.9
    • /
    • pp.101-111
    • /
    • 1994
  • The human face provides a great deal of information about a person's race, age, sex, personality, feelings, psychology, mental state, health condition, and so on. If we pay close attention to the aging process, we find recognizable phenomena such as eyelid drooping, cheek drooping, forehead furrowing, hair loss, graying hair, and so on. This paper proposes a method to estimate age by analyzing these feature components of a facial image, and also introduces a method of facial image synthesis that follows the change of age. The feature components associated with aging are obtained by dividing the facial image into the 3-dimensional shape of the face and the texture of the face, and then analyzing the principal components of each using a 3-dimensional model. We estimate the age of a facial image by comparing the extracted feature components with the image, and synthesize the resulting image by adding or subtracting the feature components to/from the facial image. Simulation results show that high-quality age-changed facial images are obtained.

  • PDF

Facial Expression Recognition using 1D Transform Features and Hidden Markov Model

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Daijin
    • Journal of Electrical Engineering and Technology
    • /
    • v.12 no.4
    • /
    • pp.1657-1662
    • /
    • 2017
  • Facial expression recognition systems using video devices have emerged as an important component of natural human-machine interfaces, contributing to practical applications such as security systems, behavioral science, and clinical practice. In this work, we present a new method to analyze, represent, and recognize human facial expressions from a sequence of facial images. The overall procedure of the proposed framework includes accurate face detection, which removes background and noise effects from the raw image sequences, and alignment of each image using vertex mask generation. The resulting features are then reduced by principal component analysis, and the augmented features are trained and tested using a Hidden Markov Model (HMM). Experimental evaluation on two public facial expression video datasets, Cohn-Kanade and AT&T, achieved recognition rates of 96.75% and 96.92%, respectively. The recognition results also show the superiority of the proposed approach over state-of-the-art methods.

Emotion Recognition Method of Facial Image using PCA (PCA을 이용한 얼굴 표정의 감정 인식 방법)

  • Kim, Ho-Duck;Yang, Hyun-Chang;Park, Chang-Hyun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.6
    • /
    • pp.772-776
    • /
    • 2006
  • Most research on facial image recognition has been conducted on images of the full face. The parts that most affect facial image recognition are the eyes and the mouth, so researchers have focused on the eyes, eyebrows, and mouth in facial images. In everyday life, however, it is difficult for a camera to capture rapid changes of the pupils, and many people wear glasses. So, in this paper, we apply Principal Component Analysis (PCA) to facial image recognition in the blindfolded case, where the eye region is covered.
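The PCA step this abstract relies on can be sketched in a few lines: flatten the face images, subtract the mean face, and project onto the leading principal directions. This is a generic eigenface-style sketch, not the paper's implementation; the data here is random for illustration.

```python
import numpy as np

def pca_fit(X, k):
    """X: (n_samples, n_pixels) flattened face images. Returns (mean, components)."""
    mean = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_project(X, mean, components):
    """Project faces onto the top-k principal components (the feature vector)."""
    return (X - mean) @ components.T

# Illustrative use on random "images" (10 faces of 4x4 = 16 pixels).
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 16))
mean, comps = pca_fit(X, 3)
features = pca_project(X, mean, comps)
print(features.shape)  # → (10, 3)
```

Recognition then compares these low-dimensional feature vectors (e.g. by nearest neighbor) instead of raw pixels.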

Facial Expression Generation of a Vector Graphic Character Using the Simplified Principal Component Vector (간소화된 주성분 벡터를 이용한 벡터 그래픽 캐릭터의 얼굴표정 생성)

  • Park, Tae-Hee
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.9
    • /
    • pp.1547-1553
    • /
    • 2008
  • This paper presents a method that generates various facial expressions of a vector graphic character by using a simplified principal component vector. First, we apply principal component analysis to nine facial expressions (astonished, delighted, etc.) redefined on the basis of Russell's model of internal emotional states. From this, we find the principal component vector having the biggest effect on the character's facial features and expression, and generate facial expressions using it. We also create natural intermediate characters and expressions by interpolating the weighting values applied to the character's features and expression. The method saves considerable memory space and creates intermediate expressions with little computation, so the performance of a character generation system can be considerably improved in web, mobile, and game services where real-time control is required.
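The interpolation idea above — representing an expression as a mean shape plus weighted principal component vectors, and blending expressions by interpolating the weights — can be sketched as follows. The vectors are toy values, not the paper's nine-expression data.

```python
import numpy as np

def synthesize(mean, pcs, weights):
    """Expression = mean shape + weighted sum of principal component vectors."""
    return mean + weights @ pcs

def interpolate_expression(mean, pcs, w_from, w_to, t):
    """Blend two expressions: t=0 gives the source weights, t=1 the target."""
    w = (1.0 - t) * w_from + t * w_to
    return synthesize(mean, pcs, w)

# Toy example: 4-D shape vector, two principal component vectors.
mean = np.zeros(4)
pcs = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0]])
neutral = np.array([0.0, 0.0])
smile = np.array([2.0, 4.0])
print(interpolate_expression(mean, pcs, neutral, smile, 0.5))  # → [1. 2. 0. 0.]
```

Because only the small weight vector changes per frame, intermediate expressions cost a single matrix-vector product, which is what makes the approach cheap enough for real-time control.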

Robust Facial Expression Recognition using PCA Representation (PCA 표상을 이용한 강인한 얼굴 표정 인식)

  • Shin Young-Suk
    • Korean Journal of Cognitive Science
    • /
    • v.16 no.4
    • /
    • pp.323-331
    • /
    • 2005
  • This paper proposes an improved system for recognizing facial expressions over various internal states that is illumination-invariant and does not require a detectable cue such as a neutral expression. As preprocessing to extract the facial expression information, a whitening step was applied: the mean of the images is set to zero and the variances are equalized to unit variance, which reduces much of the variability due to lighting. After the whitening step, we used facial expression information based on a principal component analysis (PCA) representation that excludes the first principal component. It is therefore possible to extract features from the various expression images without a detectable cue of a neutral expression. The experimental results show that we can also implement varied and natural facial expression recognition, because the recognition is based on a dimensional model of internal states, applied to images selected randomly from facial expression images corresponding to 83 internal emotional states.

  • PDF

Automatic Anticipation Generation for 3D Facial Animation (3차원 얼굴 표정 애니메이션을 위한 기대효과의 자동 생성)

  • Choi Jung-Ju;Kim Dong-Sun;Lee In-Kwon
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.1
    • /
    • pp.39-48
    • /
    • 2005
  • In traditional 2D animation techniques, anticipation makes an animation much more convincing and expressive. We present an automatic method for inserting anticipation effects into an existing facial animation. Our approach assumes that an anticipatory facial expression can be found within an existing facial animation if it is long enough. Vertices of the face model are classified into a set of components using principal component analysis, applied directly to given key-framed and/or motion-captured facial animation data; the vertices in a single component have similar directions of motion in the animation. For each component, the animation is examined to find an anticipation effect for the given facial expression, and the one that best preserves the topology of the face model is selected as the best anticipation effect. The best anticipation effect is automatically blended with the original facial animation while preserving the continuity and the entire duration of the animation. We show experimental results for given motion-captured and key-framed facial animations. This paper addresses part of the broad subject of applying the principles of traditional 2D animation to 3D animation: we show how to incorporate anticipation into 3D facial animation, so that animators can produce 3D facial animation with anticipation simply by selecting the facial expression in the animation.

The Extraction of Face Regions based on Optimal Facial Color and Motion Information in Image Sequences (동영상에서 최적의 얼굴색 정보와 움직임 정보에 기반한 얼굴 영역 추출)

  • Park, Hyung-Chul;Jun, Byung-Hwan
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.2
    • /
    • pp.193-200
    • /
    • 2000
  • The extraction of face regions is required for head gesture interfaces, a natural form of user interface. Recently, many researchers have become interested in using color information to detect face regions in image sequences. Two widely used color models, HSI and YIQ, were selected for this study; specifically, the H-component of HSI and the I-component of YIQ are used. Given the difference between these color components, this study compares the two models' performance in face region detection. First, we search for the optimum facial color range of each component by examining the detection accuracy of facial color regions over varying threshold ranges. We then compare the accuracy of the resulting face box for both color models, using the optimal facial color together with motion information. As a result, a range of $0^{\circ}{\sim}14^{\circ}$ in the H-component and a range of $-22^{\circ}{\sim}-2^{\circ}$ in the I-component proved to be the most suitable for extracting face regions. When only the optimal facial color range is used, the I-component is about 10% more accurate than the H-component at extracting face regions; when optimal facial color and motion information are both used, the I-component is still about 3% more accurate.

  • PDF
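The thresholding step behind the ranges reported above can be sketched as a simple per-pixel range test on the chosen color component. The RGB-to-HSI/YIQ conversion is omitted for brevity, and the sample channel values are illustrative assumptions.

```python
import numpy as np

def facial_color_mask(channel, lo, hi):
    """channel: 2-D array of one color component; boolean mask of facial-color pixels."""
    return (channel >= lo) & (channel <= hi)

# Optimal ranges reported in the abstract above.
H_RANGE = (0.0, 14.0)    # H-component of HSI, in degrees
I_RANGE = (-22.0, -2.0)  # I-component of YIQ

h = np.array([[5.0, 20.0],
              [14.0, -1.0]])
print(facial_color_mask(h, *H_RANGE).tolist())  # → [[True, False], [True, False]]
```

In the paper this color mask is combined with the motion information before the final face box is computed.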