• Title/Summary/Keyword: face expression recognition

Search results: 197

Detection of Facial Direction using Facial Features (얼굴 특징 정보를 이용한 얼굴 방향성 검출)

  • Park Ji-Sook;Dong Ji-Youn
    • Journal of Internet Computing and Services / v.4 no.6 / pp.57-67 / 2003
  • The recent rapid development of multimedia and optical technologies has brought great attention to application systems that process facial image features. Previous research efforts in facial image processing have mainly focused on the recognition of human faces and on facial expression analysis using frontal face images; not much research has been carried out on image-based detection of face direction. Moreover, existing approaches to detecting face direction, which normally use sequential images captured by a single camera, have the limitation that the frontal image must be given before any other images. In this paper, we propose a method to detect face direction using facial features, in particular a facial trapezoid defined by the two eyes and the lower lip. Specifically, the proposed method forms a face-direction formula, defined with statistical data about the ratio of the right and left areas of the facial trapezoid, to identify whether the face is directed toward the right or the left. The proposed method can be effectively used in automatic photo arrangement systems, which often need to set a different left or right margin for a photo according to the face direction of the person in it.

  • PDF
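The abstract's left/right area-ratio idea can be sketched as follows. The paper's exact trapezoid construction, statistical formula, and thresholds are not given here, so the split of the eye-lip region into two triangles, the decision rule, and the `tolerance` parameter are illustrative assumptions:

```python
def triangle_area(p1, p2, p3):
    """Unsigned area of a triangle via the shoelace formula."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

def face_direction(left_eye, right_eye, lower_lip, tolerance=0.1):
    """Classify face direction from a left/right area ratio.

    The region bounded by the eyes and the lower lip is split by a
    vertical line through the lip point; a foreshortened (smaller)
    side suggests the face is turned toward that side.  This decision
    rule is a hypothetical stand-in for the paper's statistical formula.
    """
    # Foot of the lip point projected onto the eye line.
    foot = (lower_lip[0], (left_eye[1] + right_eye[1]) / 2.0)
    left_area = triangle_area(left_eye, foot, lower_lip)
    right_area = triangle_area(foot, right_eye, lower_lip)
    ratio = left_area / right_area if right_area else float("inf")
    if ratio < 1.0 - tolerance:
        return "left"
    if ratio > 1.0 + tolerance:
        return "right"
    return "frontal"
```

With symmetric landmarks the ratio is 1 and the face is reported as frontal; shifting the lip toward one eye shrinks that side's area and flips the label.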

A Simple Way to Find Face Direction (간단한 얼굴 방향성 검출방법)

  • Park Ji-Sook;Ohm Seong-Yong;Jo Hyun-Hee;Chung Min-Gyo
    • Journal of Korea Multimedia Society / v.9 no.2 / pp.234-243 / 2006
  • The recent rapid development of HCI and surveillance technologies has brought great interest in application systems that process faces. Much of the research effort in these systems has focused primarily on areas such as face recognition, facial expression analysis, and facial feature extraction; however, not many approaches have been reported for face direction detection. This paper proposes a method to detect the direction of a face using a facial feature called the facial triangle, which is formed by the two eyebrows and the lower lip. Specifically, based on a single monocular view of the face, the proposed method introduces very simple formulas to estimate the horizontal or vertical rotation angle of the face. The horizontal rotation angle can be calculated using the ratio between the areas of the left and right facial triangles, while the vertical angle can be obtained from the ratio between the base and height of the facial triangle. Experimental results showed that our method can obtain the horizontal angle within an error tolerance of ${\pm}1.68^{\circ}$, and that it performs better as the magnitude of the vertical rotation angle increases.

  • PDF
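The two ratio-to-angle mappings described above can be sketched under simple assumed models; the paper's actual formulas are not reproduced here. For the vertical angle, a weak-perspective assumption (pitch foreshortens the triangle height by cos φ while the eyebrow base is unchanged) gives a closed form; for the horizontal angle, a normalized left/right asymmetry times a calibration constant `k` (hypothetical) stands in for the paper's area-ratio formula:

```python
import math

def vertical_angle(base, height, frontal_ratio):
    """Estimate the vertical (pitch) angle from the facial triangle.

    Assumed model: observed_ratio = frontal_ratio / cos(phi),
    so phi = arccos(frontal_ratio / observed_ratio).
    `frontal_ratio` is the base/height ratio measured on a frontal view.
    """
    observed_ratio = base / height
    c = max(-1.0, min(1.0, frontal_ratio / observed_ratio))
    return math.degrees(math.acos(c))

def horizontal_angle(left_area, right_area, k=90.0):
    """Map the left/right facial-triangle area ratio to a yaw estimate.

    The paper derives its formula statistically from measured faces;
    here a normalized asymmetry scaled by a calibration constant k
    (an assumption, not the paper's value) illustrates the idea.
    """
    asym = (right_area - left_area) / (right_area + left_area)
    return k * asym
```

Equal left and right areas yield a yaw of 0, and a frontal base/height ratio yields a pitch of 0, matching the intuition that a frontal face shows no asymmetry or foreshortening.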

Effects of the facial expression presenting types and facial areas on the emotional recognition (얼굴 표정의 제시 유형과 제시 영역에 따른 정서 인식 효과)

  • Lee, Jung-Hun;Park, Soo-Jin;Han, Kwang-Hee;Ghim, Hei-Rhee;Cho, Kyung-Ja
    • Science of Emotion and Sensibility / v.10 no.1 / pp.113-125 / 2007
  • The experimental studies described in this paper investigate the effects of the face, eye, and mouth areas on emotion recognition, using dynamic and static facial expressions. Using seven-second displays, Experiment 1 covered basic emotions and Experiment 2 complex emotions. The results of the two experiments support that dynamic facial expressions have a stronger effect on emotion recognition than static ones, and indicate that in dynamic images the eye area yields higher recognition than the mouth area. These results suggest that dynamic properties should be considered in emotion studies using facial expressions, for complex as well as basic emotions. However, the properties of each emotion must be taken into account, because not every emotion showed the dynamic-image effect equally. Furthermore, this study shows that which facial area conveys an emotional state most accurately depends on the particular emotion.

  • PDF

3D Emotional Avatar Creation and Animation using Facial Expression Recognition (표정 인식을 이용한 3D 감정 아바타 생성 및 애니메이션)

  • Cho, Taehoon;Jeong, Joong-Pill;Choi, Soo-Mi
    • Journal of Korea Multimedia Society / v.17 no.9 / pp.1076-1083 / 2014
  • We propose an emotional facial avatar that portrays the user's facial expressions with an emotional emphasis, while achieving visual and behavioral realism. This is achieved by unifying automatic analysis of facial expressions and animation of realistic 3D faces with details such as facial hair and hairstyles. To augment facial appearance according to the user's emotions, we use emotional templates representing typical emotions in an artistic way, which can be easily combined with the skin texture of the 3D face at runtime. Hence, our interface gives the user vision-based control over facial animation of the emotional avatar, easily changing its moods.
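The runtime combination of an emotional template with the skin texture described above amounts to per-pixel blending. The function below is a minimal sketch of that idea; the representation (flat lists of RGB tuples) and the linear blend rule are illustrative assumptions, not the paper's exact compositing method:

```python
def blend_emotion_template(skin, template, alpha):
    """Alpha-blend an emotional template over a skin texture.

    skin, template: equal-length lists of (r, g, b) tuples.
    alpha in [0, 1] controls how strongly the emotional template
    shows through (0 = plain skin, 1 = template only).
    """
    out = []
    for (sr, sg, sb), (tr, tg, tb) in zip(skin, template):
        out.append((
            round(sr * (1 - alpha) + tr * alpha),
            round(sg * (1 - alpha) + tg * alpha),
            round(sb * (1 - alpha) + tb * alpha),
        ))
    return out
```

Because the blend is a cheap per-pixel operation, the emotional emphasis can be adjusted every frame as the recognized expression changes.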

Facial Expression Analysis System based on Image Feature Extraction (이미지 특징점 추출 기반 얼굴 표정 분석 시스템)

  • Jeon, Jin-Hwan;Song, Jeo;Lee, Sang-Moon
    • Proceedings of the Korean Society of Computer Information Conference / 2016.07a / pp.293-294 / 2016
  • Diverse and massive video data are being generated by smartphones, dashboard cameras, CCTV, and other devices. Among these, facial images are the subject of various studies that aim to recognize and identify individuals and to analyze their emotional states. In this paper, we use the SIFT algorithm, which is widely used in digital image processing, to extract feature points from facial images, and we propose a system that can classify gender, age, and basic emotional states based on those feature points.

  • PDF
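At the core of the SIFT features mentioned above is a histogram of gradient orientations, weighted by gradient magnitude. The sketch below shows that core idea only; real SIFT builds sixteen such histograms over 4x4 subregions with Gaussian weighting and orientation normalization, none of which is reproduced here:

```python
import math

def orientation_histogram(patch, bins=8):
    """Gradient-orientation histogram over a grayscale patch.

    patch: 2D list of intensity values.  Each interior pixel votes
    into an orientation bin with a weight equal to its gradient
    magnitude -- a simplified single-histogram version of the
    descriptor SIFT computes around each keypoint.
    """
    hist = [0.0] * bins
    h, w = len(patch), len(patch[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dx = patch[y][x + 1] - patch[y][x - 1]   # horizontal gradient
            dy = patch[y + 1][x] - patch[y - 1][x]   # vertical gradient
            mag = math.hypot(dx, dy)
            ang = math.atan2(dy, dx) % (2 * math.pi)
            hist[int(ang / (2 * math.pi) * bins) % bins] += mag
    return hist
```

A patch that ramps uniformly left-to-right puts all of its weight into the 0-radian bin, which is what makes the descriptor sensitive to local edge direction.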

Toward an integrated model of emotion recognition methods based on reviews of previous work (정서 재인 방법 고찰을 통한 통합적 모델 모색에 관한 연구)

  • Park, Mi-Sook;Park, Ji-Eun;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.14 no.1 / pp.101-116 / 2011
  • Current research on emotion detection classifies emotions using information from facial, vocal, and bodily expressions, or from physiological responses. This study reviewed three representative emotion recognition methods, each grounded in a psychological theory of emotion. First, a literature review of emotion recognition methods based on facial expressions was conducted; these studies are supported by Darwin's theory. Second, methods based on physiological changes were reviewed; this research relies on James' theory. Last, emotion recognition based on multimodality (i.e., combinations of signals from the face, dialogue, posture, or the peripheral nervous system) was reviewed; these studies draw on both Darwin's and James' theories. In each part, research findings were examined along with the theoretical background on which each method relies. This review argues that an integrated model of emotion recognition methods is needed to advance the field. The integrated model suggests that emotion recognition methods need to incorporate additional physiological signals, such as brain responses or facial temperature, and that they should be based on a multidimensional model and take cognitive appraisal factors during emotional experience into consideration.

  • PDF

Study of expression in virtual character of facial smile by emotion recognition (감성인식에 따른 가상 캐릭터의 미소 표정변화에 관한 연구)

  • Lee, Dong-Yeop
    • Cartoon and Animation Studies / s.33 / pp.383-402 / 2013
  • In this study, we apply the Facial Action Coding System (FACS), which codes facial expressions in terms of the underlying muscular anatomy, to the expressions displayed in response to emotional change, and we verify the approach by applying a Duchenne smile to a virtual character. Duchenne smiles were extracted through an emotion-induction experiment with trained theater students (two men, two women), and data on the facial muscles were collected from the extracted expressions. The frequency of muscle activity around the mouth and lips and in other parts of the face was calculated and applied to the virtual character. As the zygomatic major contracts, the corners of the lips move upward and the cheeks rise; with the contraction of the orbicularis oculi, the lower eyelid rises and the face takes on the look of a smile. Movement of the zygomatic major was observed together with movement of the muscles around the nose (AU9) and the mouth-opening muscles (AU25, AU26, AU27). The Duchenne smile occurred when the orbicularis oculi and the zygomatic major moved at the same time. Based on this, by separating the orbicularis oculi, which appears in smiles of genuine feeling and sympathy, from the zygomatic major, which a person can move at will, and applying both to a virtual character, we examine the virtual character's ability to distinguish felt from posed expressions.
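In FACS terms, the distinction the abstract draws maps onto standard Action Units: AU6 (cheek raiser, orbicularis oculi) is largely involuntary, while AU12 (lip corner puller, zygomatic major) can be produced at will. A minimal classifier over a set of active AUs, with intensity thresholds omitted for simplicity, might look like this:

```python
# Standard FACS Action Unit numbering for the muscles discussed above.
AU_NAMES = {
    6: "cheek raiser (orbicularis oculi)",
    12: "lip corner puller (zygomatic major)",
    9: "nose wrinkler",
    25: "lips part",
    26: "jaw drop",
    27: "mouth stretch",
}

def classify_smile(active_aus):
    """Distinguish a Duchenne (felt) smile from a posed smile.

    A Duchenne smile requires the eye and mouth muscles to fire
    together (AU6 + AU12); AU12 alone is read as a posed, social
    smile.  Real systems also threshold AU intensities.
    """
    aus = set(active_aus)
    if {6, 12} <= aus:
        return "duchenne"
    if 12 in aus:
        return "posed"
    return "none"
```

The co-occurring AUs the abstract reports (AU9, AU25-AU27) would simply appear alongside AU6 and AU12 in `active_aus` without changing the verdict.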

3D Face Modeling based on 3D Morphable Shape Model (3D 변형가능 형상 모델 기반 3D 얼굴 모델링)

  • Jang, Yong-Suk;Kim, Boo-Gyoun;Cho, Seong-Won;Chung, Sun-Tae
    • The Journal of the Korea Contents Association / v.8 no.1 / pp.212-227 / 2008
  • Since a 3D face can be rotated freely in 3D space and illumination effects can be modeled properly, 3D face modeling is more precise and realistic with respect to pose, illumination, and expression than 2D face modeling. Thus, 3D modeling is much needed in face recognition, games, avatars, and so on. In this paper, we propose a 3D face modeling method based on a 3D morphable shape model. The proposed method first constructs a 3D morphable shape model from 3D face scan data obtained with a 3D scanner. Next, it extracts and matches feature points of the face from a 2D image sequence containing the face to be modeled, and estimates the 3D vertex coordinates of the feature points using a factorization-based SfM technique. It then obtains a 3D shape model of the face by fitting the estimated 3D vertices to the constructed morphable shape model. The method also builds a cylindrical texture map from the 2D face image sequence, and finally produces the 3D face model by rendering the 3D face shape model with that texture map. The model-building process shows that the proposed method is easier, faster, and more precise than previous 3D face modeling methods.
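The fitting step above expresses a face as the model mean plus a weighted sum of basis shapes: shape = mean + Σᵢ cᵢ · basisᵢ. The sketch below recovers the coefficients by projection, assuming orthonormal basis vectors (as PCA bases are); flattened coordinate lists stand in for real vertex arrays, and the paper's actual fitting procedure may differ:

```python
def fit_morphable_shape(observed, mean, bases):
    """Fit observed 3D vertices to a linear morphable shape model.

    With an orthonormal basis, each coefficient c_i is the dot
    product of basis_i with the mean-subtracted observation.
    Returns the coefficients and the reconstructed (fitted) shape.
    """
    diff = [o - m for o, m in zip(observed, mean)]
    coeffs = [sum(b_k * d_k for b_k, d_k in zip(b, diff)) for b in bases]
    # Rebuild the shape from the recovered coefficients.
    fitted = list(mean)
    for c, b in zip(coeffs, bases):
        fitted = [f + c * b_k for f, b_k in zip(fitted, b)]
    return coeffs, fitted
```

With a non-orthonormal basis the projection would be replaced by a least-squares solve, but the model equation stays the same.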

Study of Emotion Recognition based on Facial Image for Emotional Rehabilitation Biofeedback (정서재활 바이오피드백을 위한 얼굴 영상 기반 정서인식 연구)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.10 / pp.957-962 / 2010
  • To recognize a human's emotion from a facial image, we first need to extract emotional features from the image using a feature extraction algorithm, and then classify the emotional state using a pattern classification method. The AAM (Active Appearance Model) is a well-known method for representing a non-rigid object such as a face or a facial expression. A Bayesian network is a probability-based classifier that can represent the probabilistic relationships among a set of facial features. In this paper, our approach to facial feature extraction combines the AAM with FACS (Facial Action Coding System) to automatically model and extract facial emotional features. To recognize the facial emotion, we use DBNs (Dynamic Bayesian Networks) to model and understand the temporal phases of facial expressions in image sequences. The result of emotion recognition can then be used in biofeedback-based rehabilitation for the emotionally impaired.
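The temporal modeling a DBN performs can be illustrated by one forward-filtering step: predict the next emotional state through a transition model, then reweight by the likelihood of the current observation. The emotion labels, transition probabilities, and observation model below are illustrative stand-ins, not the paper's learned parameters:

```python
def dbn_filter(belief, transition, likelihood, observation):
    """One forward-filtering step of a simple dynamic Bayesian network.

    belief: dict mapping state -> P(state at time t).
    transition[prev][next]: P(next state | previous state).
    likelihood[state][obs]: P(observation | state).
    Returns the normalized belief at time t+1.
    """
    # Prediction: push the belief through the transition model.
    predicted = {s: sum(belief[p] * transition[p][s] for p in belief)
                 for s in belief}
    # Correction: weight by the observation likelihood and normalize.
    updated = {s: predicted[s] * likelihood[s][observation] for s in belief}
    total = sum(updated.values())
    return {s: v / total for s, v in updated.items()}
```

Calling this once per frame of an image sequence accumulates evidence over time, which is how the temporal phases of an expression (onset, apex, offset) can be tracked rather than classified frame by frame.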

Facial Behavior Recognition for Driver's Fatigue Detection (운전자 피로 감지를 위한 얼굴 동작 인식)

  • Park, Ho-Sik;Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.9C / pp.756-760 / 2010
  • This paper proposes a novel facial behavior recognition system for driver fatigue detection. Facial behavior is exhibited through various facial features such as facial expression, head pose, gaze, and wrinkles, but it is very difficult to discriminate a particular behavior clearly from such features alone: human behavior is complicated, and the face by itself rarely provides enough information. The proposed system first detects facial features via eye tracking, facial feature tracking, furrow detection, head orientation estimation, and head motion detection, and represents the obtained features as Action Units (AUs) of FACS. On the basis of the obtained AUs, it infers the probability of each state through a Bayesian network.
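The final inference step, fusing several facial cues into a probability for each driver state, can be sketched as a naive-Bayes style update (a simplification of the paper's Bayesian network, which models dependencies among cues). The cue names and probabilities below are illustrative assumptions:

```python
def fatigue_posterior(observations, prior, likelihoods):
    """Fuse binary facial cues into a posterior over driver states.

    observations: dict of cue name -> True/False (e.g. eye closure,
    head nodding).  likelihoods[state][cue] = P(cue | state).
    Assumes the cues are conditionally independent given the state,
    which a full Bayesian network would not require.
    """
    posterior = {}
    for state, p in prior.items():
        for cue, seen in observations.items():
            lk = likelihoods[state][cue]
            p *= lk if seen else (1.0 - lk)
        posterior[state] = p
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}
```

Observing several fatigue-typical cues at once drives the "fatigued" posterior sharply upward, which is the behavior a fatigue alarm would threshold on.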