• Title/Summary/Keyword: facial emotion processing

Facial Point Classifier using Convolution Neural Network and Cascade Facial Point Detector (컨볼루셔널 신경망과 케스케이드 안면 특징점 검출기를 이용한 얼굴의 특징점 분류)

  • Yu, Je-Hun;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems, v.22 no.3, pp.241-246, 2016
  • Nowadays many people are interested in facial expressions and human behavior, and human-robot interaction (HRI) researchers apply digital image processing, pattern recognition, and machine learning to study them. Facial feature point detection algorithms are essential for face recognition, gaze tracking, and expression and emotion recognition. In this paper, a cascade facial feature point detector is used to find facial feature points such as the eyes, nose, and mouth. However, the detector has difficulty extracting feature points from some images, because images differ in size, color, brightness, and other conditions. Therefore, we propose an algorithm that modifies the cascade facial feature point detector's results using a convolutional neural network. The structure of the convolutional neural network is based on Yann LeCun's LeNet-5. As input to the network, the color and gray images output by the cascade facial feature point detector were used; the images were resized to 32×32, and the gray images were obtained via the YUV format. The gray and color images form the input basis of the convolutional neural network. We then classified about 1,200 test images of subjects. The results show that the proposed method is more accurate than the cascade facial feature point detector alone, because the algorithm corrects the detector's results.
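
The abstract describes a LeNet-5-style network classifying 32×32 inputs. Below is a minimal PyTorch sketch of such an architecture, given only as an illustration: the 3-channel input and the four feature-point classes (eye, nose, mouth, non-feature) are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class LeNet5Classifier(nn.Module):
    """LeNet-5-style CNN for classifying 32x32 facial feature-point images."""
    def __init__(self, in_channels=3, num_classes=4):  # class count is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                            # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                            # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: classify one 32x32 image produced by the cascade detector.
patch = torch.randn(1, 3, 32, 32)
print(LeNet5Classifier()(patch).argmax(dim=1))
```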

3-D Facial Animation on the PDA via Automatic Facial Expression Recognition (얼굴 표정의 자동 인식을 통한 PDA 상에서의 3차원 얼굴 애니메이션)

  • Lee Don-Soo;Choi Soo-Mi;Kim Hae-Hwang;Kim Yong-Guk
    • The KIPS Transactions:PartB, v.12B no.7 s.103, pp.795-802, 2005
  • In this paper, we present a facial expression recognition and synthesis system that automatically recognizes seven basic emotions and renders a face in a non-photorealistic style on a PDA. For facial expression recognition, we first detect the face area within the image acquired from the camera, and then apply a normalization procedure for geometric and illumination correction. To classify a facial expression, we found that combining Gabor wavelets with the enhanced Fisher model gives the best result; in our case, the output is a set of seven emotional weights. This weighting information, transmitted to the PDA over a mobile network, is used for non-photorealistic facial expression animation. To render a 3-D avatar with a unique facial character, we adopted a cartoon-like shading method. We found that facial expression animation using emotional curves expresses the timing of an expression more effectively than linear interpolation.
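
The classification step combines Gabor wavelet features with the enhanced Fisher model, which is commonly realized as PCA followed by Fisher's linear discriminant analysis. The sketch below illustrates that combination with OpenCV and scikit-learn; the filter-bank parameters and component counts are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gabor_features(gray_face, ksize=31, thetas=np.arange(0, np.pi, np.pi / 4)):
    """Concatenate responses of a small Gabor filter bank (parameters are illustrative)."""
    feats = []
    for theta in thetas:
        kernel = cv2.getGaborKernel((ksize, ksize), 4.0, theta, 10.0, 0.5)
        response = cv2.filter2D(gray_face, cv2.CV_32F, kernel)
        feats.append(response.flatten())
    return np.concatenate(feats)

def train_enhanced_fisher(X, y, n_pca=50):
    """X: Gabor feature vectors of normalized faces, y: seven emotion labels."""
    pca = PCA(n_components=n_pca).fit(X)                          # dimensionality reduction
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), y)   # Fisher discriminant
    return pca, lda

def emotion_weights(pca, lda, gray_face):
    """Return the seven per-emotion weights for one normalized face image."""
    x = gabor_features(gray_face).reshape(1, -1)
    return lda.predict_proba(pca.transform(x))[0]
```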

A Pilot Study on Evoked Potentials by Visual Stimulation of Facial Emotion in Different Sasang Constitution Types (얼굴 표정 시각자극에 따른 사상 체질별 유발뇌파 예비연구)

  • Hwang, Dong-Uk;Kim, Keun-Ho;Lee, Yu-Jung;Lee, Jae-Chul;Kim, Myoyung-Geun;Kim, Jong-Yeol
    • Journal of Sasang Constitutional Medicine, v.22 no.1, pp.41-48, 2010
  • 1. Objectives: There have been a few attempts to diagnose Sasang Constitution using EEG, but the topic has not been studied intensively. For practical diagnosis, the EEG characteristics of each constitution should be studied first. It has recently been shown that Sasang Constitution may be related to harm avoidance and novelty seeking in temperament and character profiles. Based on this finding, we propose a visual stimulation method to evoke an EEG response that may discriminate between constitutional groups, and we use event-related potentials to examine the EEG characteristics of each group. 2. Methods: We used facial visual stimulation to examine the EEG characteristics of each constitutional group. To reveal characteristics in the sensitivity and latency of the response, we added several levels of noise to the facial images. Six healthy male subjects in their twenties (2 Taeeumin, 2 Soyangin, 2 Soeumin) participated in this study. To remove artifacts and slow modulation, EOG-contaminated data were removed and renormalization was applied. To extract stimulation-related components, a normalized event-related potential method was used. 3. Results: Facial image processing components were extracted from the Oz channel. At lower noise levels, components related to the visual stimulation were clearly visible in the Oz, Pz, and Cz channels. The Pz and Cz channels showed differences among the three constitutional groups, with the maximum around 200 ms; a moderate noise level appears most appropriate for diagnosis. 4. Conclusion: We verified that visual stimulation with facial emotion may be a good candidate for evoking differences in EEG responses between constitutional groups. The differences shown in the experiment may imply that emotion processing has distinct latency and sensitivity tendencies in each constitutional group, and this distinction might be related to the temperament profiles of the constitutional groups.
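
The analysis is based on standard event-related potential averaging: EEG epochs time-locked to each facial stimulus are baseline-corrected and averaged per condition. A minimal NumPy sketch of that step is shown below; the sampling rate and epoch window are illustrative assumptions, not the study's exact preprocessing.

```python
import numpy as np

def event_related_potential(eeg, stim_samples, fs=250, pre=0.2, post=0.6):
    """Average stimulus-locked epochs of one EEG channel (e.g., Oz, Pz, or Cz) into an ERP.

    eeg: 1-D array of a single channel; stim_samples: sample indices of stimulus onsets.
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for s in stim_samples:
        if s - n_pre < 0 or s + n_post > len(eeg):
            continue                                  # skip epochs at the recording edges
        epoch = eeg[s - n_pre : s + n_post].astype(float)
        epoch -= epoch[:n_pre].mean()                 # baseline correction
        epochs.append(epoch)
    erp = np.mean(epochs, axis=0)                     # average over trials
    times = (np.arange(-n_pre, n_post) / fs) * 1000   # time axis in ms
    return times, erp
```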

Effect Analysis of Data Imbalance for Emotion Recognition Based on Deep Learning (딥러닝기반 감정인식에서 데이터 불균형이 미치는 영향 분석)

  • Hajin Noh;Yujin Lim
    • KIPS Transactions on Computer and Communication Systems, v.12 no.8, pp.235-242, 2023
  • In recent years, as online counseling for infants and adolescents has increased, CNN-based deep learning models have become widely used as assistive tools for emotion recognition. However, since most emotion recognition models are trained mainly on adult data, their performance is limited when applied to infants and adolescents. In this paper, to analyze these performance constraints, we use LIME, one of the XAI techniques, to examine the facial expression characteristics relevant to emotion recognition in infants and adolescents compared to adults. In addition, experiments are performed on male and female groups to analyze gender-specific facial expression characteristics. Based on the data distribution of the CNN models' pre-training datasets, we describe the age-specific and gender-specific experimental results and highlight the importance of balanced training data.
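
LIME explains an individual prediction by perturbing superpixels of the input image and fitting a local surrogate model. A minimal sketch using the `lime` package is shown below; the stand-in classifier, the seven-class output, and the random placeholder image are assumptions standing in for the pre-trained emotion CNN and a real face image.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Stand-in classifier: replace with the pre-trained emotion CNN's predict function.
# It must map a batch of RGB images (N, H, W, 3) to class probabilities (N, 7).
def predict_fn(images: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(0)
    probs = rng.random((len(images), 7))
    return probs / probs.sum(axis=1, keepdims=True)

face_image = np.random.randint(0, 255, (96, 96, 3), dtype=np.uint8)  # placeholder face

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(face_image, predict_fn,
                                         top_labels=1, num_samples=1000)
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(label, positive_only=True,
                                           num_features=5, hide_rest=False)
highlighted = mark_boundaries(img / 255.0, mask)  # highlights regions driving the prediction
```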

P3 Elicited by the Positive and Negative Emotional Stimuli (긍정적, 부정적 정서 자극에 의해 유발된 P3)

  • An, Suk-Kyoon;Lee, Soo-Jung;NamKoong, Kee;Lee, Chang-Il;Lee, Eun;Kim, The-Hoon;Roh, Kyo-Sik;Choi, Hye-Won;Park, Jun-Mo
    • Korean Journal of Psychosomatic Medicine, v.9 no.2, pp.143-152, 2001
  • Objectives: The aim of this study was to determine whether the P3 elicited by negative emotional stimuli differs from that elicited by positive stimuli. Methods: We measured event-related potentials, especially the P3 elicited by facial photographs, in 12 healthy subjects. Subjects were instructed to feel and respond to rare target facial photographs embedded among frequent non-target checkerboards. Results: The amplitude of the P3 elicited by negative emotional photographs was significantly larger than that elicited by positive stimuli. Conclusion: These findings suggest that the P3 elicited by facial stimuli may be used as a psychophysiological variable of emotional processing.
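
The reported effect is a comparison of P3 amplitudes between conditions. A minimal sketch of how per-subject P3 amplitudes could be extracted from condition-averaged ERPs and compared with a paired test is shown below; the 300-500 ms window and the use of a paired t-test are illustrative assumptions, not the study's stated analysis.

```python
import numpy as np
from scipy.stats import ttest_rel

def p3_amplitude(times_ms, erp, window=(300, 500)):
    """Peak amplitude of an averaged ERP within an assumed P3 window (ms)."""
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    return erp[mask].max()

# erps_negative / erps_positive: per-subject lists of (times_ms, erp), one averaged ERP per condition.
def compare_p3(erps_negative, erps_positive):
    neg = [p3_amplitude(t, e) for t, e in erps_negative]
    pos = [p3_amplitude(t, e) for t, e in erps_positive]
    t_stat, p_value = ttest_rel(neg, pos)   # paired comparison across subjects
    return t_stat, p_value
```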

Optimized patch feature extraction using CNN for emotion recognition (감정 인식을 위해 CNN을 사용한 최적화된 패치 특징 추출)

  • Irfan Haider;Aera kim;Guee-Sang Lee;Soo-Hyung Kim
    • Proceedings of the Korea Information Processing Society Conference, 2023.05a, pp.510-512, 2023
  • To enhance a model's capability for detecting facial expressions, this research proposes a pipeline that makes use of a GradCAM component. The pipeline consists of a patching module and a pseudo-labeling module. The patching module takes the original face image and divides it into four equal parts; each part is then fed into a 2D convolutional layer to produce a feature vector. In the pseudo-labeling module, each image segment is assigned a weight token using GradCAM, and this token is merged with the feature vector using principal component analysis. A convolutional neural network based on transfer learning is then used to extract deep features. This technique was applied to the public MMI dataset and achieved a validation accuracy of 96.06%, which shows the effectiveness of our method.
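
The patching module described above splits a face image into four equal quadrants and turns each into a feature vector with a 2D convolutional layer. A minimal PyTorch sketch of that step is given below; the channel counts, pooling, and 96×96 input size are illustrative choices, and the GradCAM weighting and PCA merge are omitted.

```python
import torch
import torch.nn as nn

class PatchingModule(nn.Module):
    """Split a face image into 4 equal quadrants and embed each with a conv layer."""
    def __init__(self, in_channels=3, out_channels=16, embed_dim=128):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(4)            # coarse spatial summary
        self.proj = nn.Linear(out_channels * 4 * 4, embed_dim)

    def forward(self, x):                              # x: (B, C, H, W), H and W even
        h, w = x.shape[-2] // 2, x.shape[-1] // 2
        quadrants = [x[..., :h, :w], x[..., :h, w:],   # top-left, top-right
                     x[..., h:, :w], x[..., h:, w:]]   # bottom-left, bottom-right
        feats = []
        for q in quadrants:
            f = torch.relu(self.conv(q))
            f = self.pool(f).flatten(1)
            feats.append(self.proj(f))                 # one feature vector per patch
        return torch.stack(feats, dim=1)               # (B, 4, embed_dim)

# Example: four 128-d patch features from one 96x96 face image (size assumed).
features = PatchingModule()(torch.randn(1, 3, 96, 96))
print(features.shape)  # torch.Size([1, 4, 128])
```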

Performance Comparison of Emotion Recognition using Facial Expressions of Infants and Adolescents (영유아와 청소년의 얼굴표정기반 감정인식 성능분석)

  • Noh, Hajin;Lim, Yujin
    • Proceedings of the Korea Information Processing Society Conference, 2022.11a, pp.700-702, 2022
  • As the COVID-19 pandemic has continued, non-face-to-face counseling for infants and young children has increased. In this restricted, non-face-to-face environment, emotion analysis results from a CNN model can be used as an assistive tool for predicting infants' emotions and supporting more accurate counseling. However, most emotion analysis CNN models are trained mainly on adult data, so the emotion recognition rate for infants is relatively low. In this paper, we visualize and analyze the causes of the difference in emotion analysis accuracy between infant and adolescent data using LIME, one of the XAI techniques, and, based on the analysis results, propose a way to improve emotion recognition performance on infant data.
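
Both this entry and the journal version above trace the accuracy gap to the age distribution of the training data. One common remedy for such an imbalance, shown here only as a generic illustration and not as the paper's proposed method, is to oversample the under-represented group during training, for example with PyTorch's WeightedRandomSampler.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy dataset: images with an age-group tag (0 = adult, 1 = infant); counts are made up.
images = torch.randn(1000, 3, 48, 48)
age_group = torch.cat([torch.zeros(950, dtype=torch.long),   # 950 adult samples
                       torch.ones(50, dtype=torch.long)])    # 50 infant samples
dataset = TensorDataset(images, age_group)

# Weight each sample inversely to its group frequency so that infant faces
# are drawn about as often as adult faces in each epoch.
group_counts = torch.bincount(age_group).float()
sample_weights = 1.0 / group_counts[age_group]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(dataset), replacement=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)
```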

A Study on the Weight Allocation Method of Humanist Input Value and Multiplex Modality using Tacit Data (암묵 데이터를 활용한 인문학 인풋값과 다중 모달리티의 가중치 할당 방법에 관한 연구)

  • Lee, Won-Tae;Kang, Jang-Mook
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.14 no.4, pp.157-163, 2014
  • A user's sensibility is recognized as a very important parameter in communication between companies, governments, and individuals. In many studies, researchers use voice tone, voice speed, facial expression, the direction and speed of body movement, and gestures to recognize sensibility. Multiplex (multimodal) sensing is more precise than a single modality, but it suffers from a limited recognition rate and a data-processing overload caused by multi-sensing, and an effective algorithm is needed to derive the sensed value. Because each modality has a different concept and different properties, errors can occur when converting human sensibility to standard values. To deal with this, the modality that best expresses sensibility needs to be extracted from the multiplex modalities using techniques such as relational network analysis, context understanding, and digital filtering. In a specific situation, if the priority modality and the other surrounding modalities are processed into implicit (tacit) values, a robust system can be composed with less consumption of computing resources. As a result, this paper proposes how to assign weights to multiplex modalities using implicit data.
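
The core idea is a weighted combination of per-modality sensibility estimates, with a larger weight on the priority modality for the current situation. A minimal sketch of such weighted fusion is given below; the modality names and weight values are purely illustrative, not the scheme derived in the paper.

```python
import numpy as np

def fuse_modalities(scores: dict, weights: dict) -> np.ndarray:
    """Weighted sum of per-modality emotion score vectors (weights renormalized to 1)."""
    total = sum(weights[m] for m in scores)
    return sum((weights[m] / total) * np.asarray(scores[m]) for m in scores)

# Illustrative per-modality scores over (negative, neutral, positive).
scores = {
    "facial_expression": [0.2, 0.3, 0.5],
    "voice_tone":        [0.1, 0.6, 0.3],
    "gesture":           [0.3, 0.4, 0.3],
}
# Priority modality (here the face) receives the largest weight; values are assumptions.
weights = {"facial_expression": 0.6, "voice_tone": 0.3, "gesture": 0.1}
print(fuse_modalities(scores, weights))
```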

Interactive Animation by Action Recognition (동작 인식을 통한 인터랙티브 애니메이션)

  • Hwang, Ji-Yeon;Lim, Yang-Mi;Park, Jin-Wan;Jahng, Surng-Gahb
    • The Journal of the Korea Contents Association, v.6 no.12, pp.269-277, 2006
  • In this paper, we propose an interactive system that generates emotional expressions from arm gestures. By extracting relevant features from key frames, we can infer emotions from arm gestures. Real-time animation requires very high frame rates, so we propose rendering the facial emotion expression with a 3-D application to minimize animation time, and we propose a method for matching frames to actions. By matching image sequences of participants' exaggerated arm gestures, participants feel that they are communicating directly with the portraits.
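
The frame-to-action matching step can be thought of as nearest-neighbor matching between the features of the current frame and stored key-frame templates, one per action. The sketch below illustrates that idea only; the feature representation and distance metric are assumptions, not the paper's exact method.

```python
import numpy as np

# Key-frame templates: one feature vector per labeled arm-gesture action (toy values).
key_frames = {
    "raise_arms": np.array([0.9, 0.1, 0.2]),
    "cross_arms": np.array([0.2, 0.8, 0.1]),
    "wave":       np.array([0.4, 0.3, 0.9]),
}

def match_action(frame_features: np.ndarray) -> str:
    """Return the action whose key-frame features are closest to the current frame."""
    distances = {action: np.linalg.norm(frame_features - feats)
                 for action, feats in key_frames.items()}
    return min(distances, key=distances.get)

print(match_action(np.array([0.85, 0.15, 0.25])))  # -> "raise_arms"
```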

Design and implement of the Educational Humanoid Robot D2 for Emotional Interaction System (감성 상호작용을 갖는 교육용 휴머노이드 로봇 D2 개발)

  • Kim, Do-Woo;Chung, Ki-Chull;Park, Won-Sung
    • Proceedings of the KIEE Conference, 2007.07a, pp.1777-1778, 2007
  • In this paper, we design and implement a humanoid robot, for educational purposes, that can collaborate and communicate with humans. We present an affective human-robot communication system for the humanoid robot D2, which we designed to communicate with a human through dialogue. D2 communicates with humans by understanding and expressing emotion using facial expressions, voice, gestures, and posture. Interaction between a human and the robot is made possible through our affective communication framework, which enables the robot to catch the emotional status of the user and respond appropriately; as a result, the robot can engage in natural dialogue with a human. To support interaction with a human through voice, gestures, and posture, the educational humanoid robot consists of an upper body, two arms, a wheeled mobile platform, and control hardware, including vision and speech capabilities and various control boards such as motion control boards and a signal processing board handling several types of sensors. Using the educational humanoid robot D2, we presented successful demonstrations consisting of manipulation tasks with the two arms, object tracking using the vision system, and communication with humans via the emotional interface, synthesized speech, and recognition of speech commands.
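
The affective communication framework follows a perceive-then-respond loop: estimate the user's emotional state from the available channels, then select a matching expressive behavior. The sketch below is only a schematic illustration of that loop; the emotion labels and response mapping are assumptions, not the D2 framework itself.

```python
from dataclasses import dataclass

@dataclass
class AffectiveResponse:
    facial_expression: str
    speech: str
    gesture: str

# Illustrative mapping from a detected user emotion to an expressive behavior.
RESPONSES = {
    "happy":   AffectiveResponse("smile",   "That's great to hear!",  "open_arms"),
    "sad":     AffectiveResponse("concern", "I'm sorry. Can I help?", "lean_forward"),
    "neutral": AffectiveResponse("neutral", "Tell me more.",          "idle"),
}

def respond(detected_emotion: str) -> AffectiveResponse:
    """Pick an expressive behavior for the detected user emotion (fallback to neutral)."""
    return RESPONSES.get(detected_emotion, RESPONSES["neutral"])

print(respond("sad"))
```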
