• Title/Summary/Keyword: Facial Expression Intensity

Robust Facial Expression-Recognition Against Various Expression Intensity (표정 강도에 강건한 얼굴 표정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions: Part B / v.16B no.5 / pp.395-402 / 2009
  • This paper proposes a novel facial expression recognition approach that handles different expression intensities in order to improve recognition performance. The variety of expressions and intensities across individuals degrades facial expression recognition performance, yet the effect of differing expression intensities has seldom been studied. This paper introduces a facial expression template and an expression-intensity distribution model to recognize facial expressions at different intensities. These two techniques improve recognition performance by describing how the shifts between facial parts, and between multiple interest points in their vicinity, vary across facial expressions and their intensities. The proposed method has the distinct advantage that recognition across different intensities requires only a simple calibration and works on video sequences as well as still images. Experimental results show that the method is robust enough to recognize facial expressions even at weak intensities. A hedged sketch of the distribution-model idea follows.
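
The abstract gives no implementation details, so the following is a minimal illustrative sketch of one plausible reading, not the authors' code: describe an expression by the shifts of facial interest points from the neutral face, and model each (expression, intensity) class as a Gaussian over those shift vectors. All function and class names here are assumptions.

```python
import numpy as np

def displacement_features(neutral_pts, expressive_pts):
    """Stack per-landmark shifts (dx, dy) from the neutral face into one vector."""
    return (np.asarray(expressive_pts) - np.asarray(neutral_pts)).ravel()

class ExpressionIntensityModel:
    """Gaussian model per (expression, intensity) class over shift vectors."""

    def __init__(self):
        self.classes = {}  # label -> (mean, inverse covariance)

    def fit(self, features, labels):
        for label in set(labels):
            X = np.array([f for f, l in zip(features, labels) if l == label])
            mean = X.mean(axis=0)
            cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize
            self.classes[label] = (mean, np.linalg.inv(cov))

    def predict(self, f):
        # Pick the class whose distribution is closest in Mahalanobis distance.
        def mahal(label):
            mean, inv_cov = self.classes[label]
            d = f - mean
            return float(d @ inv_cov @ d)
        return min(self.classes, key=mahal)
```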

A Study of Improving LDP Code Using Edge Directional Information (에지 방향 정보를 이용한 LDP 코드 개선에 관한 연구)

  • Lee, Tae Hwan; Cho, Young Tak; Ahn, Yong Hak; Chae, Ok Sam
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.7 / pp.86-92 / 2015
  • This study proposes a new LDP code that improves the facial expression recognition rate by incorporating the local directional number (LDN), edge magnitudes, and differences in neighboring edge intensities. LDP is less sensitive to intensity changes and more robust to noise than LBP, but it has difficulty representing smooth regions where the intensity barely changes, and when the background contains a pattern similar to a face, its facial expression recognition rate drops. We therefore extend the LDP code with the local directional number and edge strength, and we measure the facial expression recognition rate of the modified code. A sketch of the baseline LDP encoding follows.
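
For context, here is a hedged sketch of the standard LDP encoding that the paper modifies; the paper's LDN and edge-strength extensions are not reproduced. The Kirsch masks are the usual eight 3×3 rotations; the SciPy-based implementation is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve

# Eight Kirsch edge masks: two base masks, each rotated by 90 degrees
# four times, covering all eight compass directions.
KIRSCH = [np.rot90(np.array([[-3, -3, 5],
                             [-3,  0, 5],
                             [-3, -3, 5]]), k) for k in range(4)]
KIRSCH += [np.rot90(np.array([[-3,  5,  5],
                              [-3,  0,  5],
                              [-3, -3, -3]]), k) for k in range(4)]

def ldp_code(image, k=3):
    """Per pixel: set one bit for each of the k strongest Kirsch responses."""
    responses = np.stack([np.abs(convolve(image.astype(float), m))
                          for m in KIRSCH])         # shape (8, H, W)
    top_k = np.argsort(responses, axis=0)[-k:]      # indices of k largest
    code = np.zeros(image.shape, dtype=np.uint8)
    for idx in top_k:
        code |= (1 << idx).astype(np.uint8)         # mark that direction's bit
    return code
```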

Dynamic Emotion Classification through Facial Recognition (얼굴 인식을 통한 동적 감정 분류)

  • Han, Wuri; Lee, Yong-Hwan; Park, Jeho; Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.12 no.3 / pp.53-57 / 2013
  • Human emotions are expressed in various ways: through language, facial expressions, and gestures. Facial expressions in particular carry much information about human emotion. These ambiguous emotions appear not as a single emotion but as a combination of several. This paper proposes an emotional expression algorithm using the Active Appearance Model (AAM) and a Fuzzy k-Nearest Neighbor classifier that represents facial expressions in a way that reflects such ambiguous human emotions. By applying the Mahalanobis distance to the class centers, the method determines the degree of membership between each class and the center class, and this membership level in turn expresses the intensity of the emotion. The resulting emotion recognition system can recognize complex emotions using the Fuzzy k-NN classifier; a hedged sketch follows.
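
A minimal sketch, not the authors' code, of turning Mahalanobis distances to class centers into fuzzy membership degrees, so that one face can belong to several emotion classes with graded intensity. The function name and the fuzzifier parameter m are assumptions.

```python
import numpy as np

def fuzzy_memberships(x, centers, inv_covs, m=2.0):
    """Memberships inversely weighted by Mahalanobis distance, summing to 1."""
    d = np.array([np.sqrt((x - c) @ ic @ (x - c))
                  for c, ic in zip(centers, inv_covs)])
    w = 1.0 / np.maximum(d, 1e-12) ** (2.0 / (m - 1.0))  # fuzzifier m > 1
    return w / w.sum()

# The membership vector doubles as a graded intensity for each emotion,
# rather than a single hard label.
```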

Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression (특징점 기반의 적응적 얼굴 움직임 분석을 통한 표정 인식)

  • Noh, Sung-Kyu; Park, Han-Hoon; Shin, Hong-Chang; Jin, Yoon-Jong; Park, Jong-Il
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.667-674 / 2007
  • Facial expressions provide significant clues about one's emotional state; however, recognizing facial expressions effectively and reliably has always been a great challenge for machines. This paper reports a method of feature-based adaptive motion-energy analysis for recognizing facial expressions. The method builds on the information-gain heuristics of an ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. A minimal set of facial features, suggested by the information-gain heuristics of the ID3 tree, represents the geometric face model. Feature extraction proceeds as follows: features are first detected and then carefully "selected", where selection means separating features with high variability from those with low variability so that each feature's motion pattern can be estimated effectively. Motion analysis is then performed adaptively for each facial feature: each feature's motion pattern (from the neutral face to the expressed face) is estimated according to its variability. After feature extraction, the facial expression is classified using the ID3 tree (built from the 1728 possible facial expressions) on test images from the JAFFE database. The proposed method overcomes problems raised by previous approaches. First, it is simple but effective: it reliably estimates the expressive facial features by separating high-variability features from low-variability ones. Second, it is fast, avoiding complicated or time-consuming computations and instead exploiting the motion-energy values of a few selected expressive features (acquired via an intensity-based threshold). Finally, the method achieves a reliable overall recognition rate of 77%. The experimental results demonstrate its effectiveness; a hedged sketch of the motion-energy step follows.
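
An illustrative sketch, with assumed details, of intensity-based motion energy for one facial feature region: difference the neutral and expressed frames, threshold, and sum, so that features whose regions move strongly score high.

```python
import numpy as np

def motion_energy(neutral, expressed, region, threshold=25):
    """region = (top, bottom, left, right) crop around one facial feature."""
    t, b, l, r = region
    diff = np.abs(expressed[t:b, l:r].astype(int) - neutral[t:b, l:r].astype(int))
    return int((diff > threshold).sum())  # count of strongly changed pixels

# A feature would be treated as "expressive" (high variability) when its
# motion energy exceeds a per-feature calibration value; only those features
# would feed the ID3 classification step.
```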

Effect of Depressive Mood on Identification of Emotional Facial Expression (우울감이 얼굴 표정 정서 인식에 미치는 영향)

  • Ryu, Kyoung-Hi; Oh, Kyung-Ja
    • Science of Emotion and Sensibility / v.11 no.1 / pp.11-21 / 2008
  • This study examined the effect of depressive mood on the identification of emotional facial expressions. Participants were screened from 305 college students on the basis of their BDI-II scores: students scoring higher than 14 (upper 20%) were assigned to the Depression Group, and those scoring lower than 5 (lower 20%) to the Control Group. A final sample of 20 students in each group was presented with facial expression stimuli of increasing emotional intensity, slowly changing from a neutral expression to a full-intensity happy, sad, angry, or fearful expression. The results showed a significant Group × Emotion interaction (especially for happy and sad), suggesting that depressive mood affects the processing of emotional stimuli such as facial expressions. Implications of this result for mood-congruent information processing are discussed.

Weighted Soft Voting Classification for Emotion Recognition from Facial Expressions on Image Sequences (이미지 시퀀스 얼굴표정 기반 감정인식을 위한 가중 소프트 투표 분류 방법)

  • Kim, Kyeong Tae; Choi, Jae Young
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1175-1186 / 2017
  • Human emotion recognition is one of the promising applications in the era of artificial superintelligence. Thus far, facial expression traits are considered the most widely used information cues for automated emotion recognition. This paper proposes a novel facial expression recognition (FER) method that works well for recognizing emotion from image sequences. To this end, we develop a weighted soft voting classification (WSVC) algorithm. In the proposed WSVC, a number of classifiers are first constructed using multiple different feature representations. Next, these classifiers generate a recognition result (a soft vote) for each face image within a face sequence, yielding multiple soft voting outputs. Finally, the soft voting outputs are combined through a weighted combination to decide the emotion class (e.g., anger) of the given face sequence. The combination weights are determined by measuring the quality of each face image, namely its "peak expression intensity" and "frontal-pose degree". To test the proposed WSVC, the CK+ FER database was used for extensive comparative experiments, and the feasibility of the algorithm was demonstrated by comparison with recently developed FER algorithms. A hedged sketch of the weighted combination follows.
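
A minimal sketch, with assumed shapes and names, of weighted soft voting over a face sequence: each frame yields per-class probabilities, and frames are weighted by a quality score before combining.

```python
import numpy as np

def weighted_soft_vote(frame_probs, frame_weights):
    """frame_probs: (num_frames, num_classes) soft outputs, already averaged
    over the per-frame classifiers; frame_weights: (num_frames,) quality
    scores, e.g. derived from expression intensity and frontal-pose degree."""
    w = np.asarray(frame_weights, dtype=float)
    w = w / w.sum()                                   # normalize the weights
    combined = (w[:, None] * np.asarray(frame_probs)).sum(axis=0)
    return int(np.argmax(combined)), combined         # predicted class, scores
```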

An Efficient Facial Expression Recognition by Measuring Histogram Distance Based on Preprocessing (전처리 기반 히스토그램 거리측정에 의한 효율적인 표정인식)

  • Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.5 / pp.667-673 / 2009
  • This paper presents an efficient facial expression recognition method that measures histogram distances after preprocessing. The preprocessing, which uses both a centroid shift and histogram equalization, is applied to improve recognition performance, and distance measurement is used to estimate the similarity between facial expressions. The centroid shift, based on a first-moment balance technique, not only makes recognition robust to position and size variations but also reduces the distance-measurement load by excluding the background from recognition. Histogram equalization provides robustness to the poor contrast caused by varying light intensity. The proposed method was applied to recognizing 72 facial expression images (4 persons × 18 scenes) of 320×243 pixels. Three distances, city-block, Euclidean, and ordinal, were used as similarity measures between histograms. The experimental results show that the proposed method outperforms the same method without preprocessing, and that the ordinal distance outperforms both the city-block and Euclidean distances. A hedged sketch of the histogram-matching step follows.
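
An illustrative sketch, with assumed pipeline details, of histogram-based expression matching: equalize the image, build a normalized gray-level histogram, and compare against reference histograms with a city-block (L1) distance. The centroid-shift step is omitted here.

```python
import numpy as np

def equalized_histogram(image, bins=256):
    """Histogram equalization followed by a normalized intensity histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    cdf = hist.cumsum() / hist.sum()
    equalized = (cdf[image.astype(int)] * (bins - 1)).astype(np.uint8)
    h, _ = np.histogram(equalized, bins=bins, range=(0, bins))
    return h / h.sum()

def city_block(h1, h2):
    return float(np.abs(h1 - h2).sum())

def classify(image, references):
    """references: dict mapping expression label -> reference histogram."""
    h = equalized_histogram(image)
    return min(references, key=lambda lbl: city_block(h, references[lbl]))
```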

Face Detection using Distance Ranking (거리순위를 이용한 얼굴검출)

  • Park, Jae-Hee; Kim, Seong-Dae
    • Proceedings of the IEEK Conference / 2005.11a / pp.363-366 / 2005
  • In this paper, a distance-ranking feature and a detection algorithm based on that feature are proposed for detecting human faces under variations in lighting conditions and facial expression. Distance ranking is the intensity ranking of a distance-transformed image; because it is based on statistically consistent edge information, it is robust to changes in lighting. The proposed detection algorithm is an FFT-based matching algorithm and offers a solution to the discretization problem in sliding-window methods. Experiments show face detection results under varying lighting conditions, complex backgrounds, facial expression changes, and partial occlusion of the face. A hedged sketch of the distance-ranking feature follows.
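
A minimal sketch, with assumed details, of the distance-ranking feature: compute a distance transform away from edge pixels, then replace each value by its rank. Ranks are invariant to monotonic changes, which is what gives robustness to lighting.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_ranking(edge_map):
    """edge_map: boolean array, True at edge pixels."""
    dist = distance_transform_edt(~edge_map)   # distance to the nearest edge
    flat = dist.ravel()
    ranks = np.empty_like(flat)
    ranks[np.argsort(flat, kind="stable")] = np.arange(flat.size)
    return ranks.reshape(dist.shape)           # per-pixel rank of distance
```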

Color and Blinking Control to Support Facial Expression of Robot for Emotional Intensity (로봇 감정의 강도를 표현하기 위한 LED 의 색과 깜빡임 제어)

  • Kim, Min-Gyu; Lee, Hui-Sung; Park, Jeong-Woo; Jo, Su-Hun; Chung, Myung-Jin
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.547-552 / 2008
  • Humans and robots will have a closer relationship in the future, and we can expect the interaction between them to become more intense. To take advantage of people's innate communication abilities, researchers have so far concentrated on facial expression. But for a robot to express emotional intensity, other modalities such as gesture, movement, sound, and color are also needed. This paper suggests that the intensity of emotion can be expressed with color and blinking, so that the result can be applied to LEDs. Color and emotion are clearly related; however, previous results are difficult to implement because of a lack of quantitative data. In this paper, we determined colors and blinking periods to express the six basic emotions (anger, sadness, disgust, surprise, happiness, fear). They were implemented on an avatar, and the intensities of the emotions were evaluated through a survey. We found that color and blinking helped to express the intensity of emotion for sadness, disgust, and anger. For fear, happiness, and surprise, color and blinking did not play an important role, although adjusting the color or blinking may improve them. A hypothetical sketch of such a mapping follows.
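
A hypothetical sketch only: the paper determines its colors and blinking periods empirically, and the values below are invented placeholders, not its findings. The sketch shows the general shape of an intensity-to-blink mapping.

```python
EMOTION_COLOR = {          # (R, G, B) placeholders, NOT the paper's findings
    "anger": (255, 0, 0),
    "sadness": (0, 0, 255),
    "happiness": (255, 200, 0),
}

def blink_pattern(emotion, intensity, duration_s=3.0):
    """Yield (rgb, seconds) steps; higher intensity -> faster blinking (assumed)."""
    color = EMOTION_COLOR[emotion]
    period = max(0.2, 1.5 * (1.0 - intensity))   # assumed mapping, in seconds
    elapsed = 0.0
    while elapsed < duration_s:
        yield color, period / 2        # LED on
        yield (0, 0, 0), period / 2    # LED off
        elapsed += period
```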

The Emotional Boundary Decision in a Linear Affect-Expression Space for Effective Robot Behavior Generation (효과적인 로봇 행동 생성을 위한 선형의 정서-표정 공간 내 감정 경계의 결정 -비선형의 제스처 동기화를 위한 정서, 표정 공간의 영역 결정)

  • Jo, Su-Hun; Lee, Hui-Sung; Park, Jeong-Woo; Kim, Min-Gyu; Chung, Myung-Jin
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.540-546 / 2008
  • In the near future, robots should be able to understand human emotional states and exhibit appropriate behaviors accordingly. In human-human interaction, reportedly 93% of communication consists of the speaker's nonverbal behavior, and bodily movements convey information about the intensity of emotion. Recent personal robots can interact with humans through multiple modalities such as facial expression, gesture, LEDs, sound, and sensors. However, a posture requires only a position and an orientation, whereas facial expressions and gestures involve movement, and verbal, vocal, musical, and color expressions require timing information. Because synchronization among these modalities is a key problem, emotion expression needs a systematic approach. At a low intensity of surprise, for example, the face can express the emotion but a gesture cannot, because gestures are not linear. Emotional boundaries must therefore be decided for effective robot behavior generation and for synchronization with other expressive modalities. If so, how can emotional boundaries be defined, and how can the modalities be synchronized with one another?
