• Title/Abstract/Keywords: Facial emotion

Search results: 309

로봇과 인간의 상호작용을 위한 얼굴 표정 인식 및 얼굴 표정 생성 기법 (Recognition and Generation of Facial Expression for Human-Robot Interaction)

  • 정성욱;김도윤;정명진;김도형
    • 제어로봇시스템학회논문지
    • /
    • Vol. 12 No. 3
    • /
    • pp.255-263
    • /
    • 2006
  • Over the last decade, face analysis (e.g., face detection, face recognition, and facial expression recognition) has been a lively and expanding research field. As computer-animated agents and robots bring a social dimension to human-computer interaction, interest in this field is growing rapidly. In this paper, we introduce an artificial emotion-mimicking system that can recognize human facial expressions and also generate the recognized facial expression. To recognize human facial expressions in real time, we propose a facial expression classification method performed by weak classifiers obtained using new rectangular feature types. In addition, we generate artificial facial expressions using a robotic system developed on the basis of biological observation. Finally, experimental results of facial expression recognition and generation are presented to demonstrate the validity of our robotic system.
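The recognition step above combines rectangular image features with weak threshold classifiers, in the spirit of the Viola-Jones framework. The paper's new rectangular feature types are not specified in the abstract, so the sketch below illustrates only the general technique, using a standard two-rectangle feature over an integral image and an assumed decision threshold rather than the authors' actual design.

```python
import numpy as np

def integral_image(img):
    """Cumulative sums so any rectangle sum costs four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in img[r0:r1, c0:c1] from the integral image ii."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_feature(ii, r, c, h, w):
    """A standard two-rectangle feature: left half minus right half."""
    left = rect_sum(ii, r, c, r + h, c + w // 2)
    right = rect_sum(ii, r, c + w // 2, r + h, c + w)
    return left - right

def weak_classifier(feature_value, threshold, polarity=1):
    """Threshold test on one rectangular feature; returns a +1/-1 vote."""
    return 1 if polarity * feature_value < polarity * threshold else -1

# Toy usage on a random 24x24 face patch
img = np.random.rand(24, 24)
ii = integral_image(img)
f = two_rect_feature(ii, 4, 4, 8, 12)
print(weak_classifier(f, threshold=0.0))
```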

Affective Computing in Education: Platform Analysis and Academic Emotion Classification

  • So, Hyo-Jeong;Lee, Ji-Hyang;Park, Hyun-Jin
    • International journal of advanced smart convergence
    • /
    • Vol. 8 No. 2
    • /
    • pp.8-17
    • /
    • 2019
  • The main purpose of this study is to explore the potential of affective computing (AC) platforms in education through two phases of research: Phase I - platform analysis and Phase II - classification of academic emotions. In Phase I, the results indicate that the existing affective analysis platforms can be largely classified into four types according to the emotion detecting methods: (a) facial expression-based platforms, (b) biometric-based platforms, (c) text/verbal tone-based platforms, and (d) mixed-methods platforms. In Phase II, we conducted an in-depth analysis of the emotional experiences that a learner encounters in online video-based learning in order to establish the basis for a new classification system of online learners' emotions. Overall, positive emotions appeared more frequently and lasted longer than negative emotions. We categorized positive emotions into three groups based on the facial expression data: (a) confidence; (b) excitement, enjoyment, and pleasure; and (c) aspiration, enthusiasm, and expectation. The same method was used to categorize negative emotions into four groups: (a) fear and anxiety, (b) embarrassment and shame, (c) frustration and alienation, and (d) boredom. Drawing on these results, we propose a new classification scheme that can be used to measure and analyze how learners in online learning environments experience various positive and negative emotions through the indicators of facial expressions.

Energy-Efficient DNN Processor on Embedded Systems for Spontaneous Human-Robot Interaction

  • Kim, Changhyeon;Yoo, Hoi-Jun
    • Journal of Semiconductor Engineering
    • /
    • Vol. 2 No. 2
    • /
    • pp.130-135
    • /
    • 2021
  • Recently, deep neural networks (DNNs) have been actively used for action control so that an autonomous system, such as a robot, can perform human-like behaviors and operations. Unlike recognition tasks, real-time operation is essential in action control, and remote learning on a server reached over a network is too slow. New learning techniques, such as reinforcement learning (RL), are needed to determine and select the correct robot behavior locally. In this paper, we propose an energy-efficient DNN processor with a LUT-based processing engine and a near-zero skipper. A CNN-based facial emotion recognition model and an RNN-based emotional dialogue generation model are integrated for a natural HRI system and tested on the proposed processor. It supports variable weight bit precision from 1b to 16b, with 57.6% and 28.5% lower energy consumption than conventional MAC arithmetic units at 1b and 16b weight precision, respectively. The near-zero skipper also eliminates 36% of MAC operations and yields 28% lower energy consumption on facial emotion recognition tasks. Implemented in a 65 nm CMOS process, the proposed processor occupies a 1784×1784 μm² area and dissipates 0.28 mW and 34.4 mW at 1 fps and 30 fps facial emotion recognition, respectively.
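The near-zero skipper described above is a hardware unit that gates off multiply-accumulate (MAC) lanes whose operands are nearly zero. A minimal software analogue of the idea, assuming a simple magnitude threshold `eps` (the on-chip detection logic is not described in the abstract), might look like this:

```python
import numpy as np

def mac_with_near_zero_skip(activations, weights, eps=1e-2):
    """Software analogue of a near-zero skipper: multiply-accumulate
    only where |weight| exceeds eps, skipping near-zero operands."""
    mask = np.abs(weights) > eps          # lanes worth computing
    skipped = mask.size - mask.sum()      # MACs a hardware skipper would gate off
    result = np.dot(activations[mask], weights[mask])
    return result, skipped

acts = np.random.randn(1024).astype(np.float32)
wts = np.random.randn(1024).astype(np.float32)
wts[np.random.rand(1024) < 0.4] *= 1e-4   # many near-zero weights, as in pruned DNNs
out, n_skipped = mac_with_near_zero_skip(acts, wts)
print(f"skipped {n_skipped}/{wts.size} MACs")
```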

얼굴표정에서 나타나는 감정표현에 대한 어린이의 반응분석 (Analysis of children's Reaction in Facial Expression of Emotion)

  • 유동관
    • 한국콘텐츠학회논문지
    • /
    • Vol. 13 No. 12
    • /
    • pp.70-80
    • /
    • 2013
  • The purpose of this study was to investigate and analyze children's visual-cognitive responses to emotional expressions in facial expressions, and to examine the verbal responses of boys and girls to each expression, so that the findings can serve as basic data for research on facial expressions of characters. The participants were 108 children aged 6 to 8 (55 boys, 53 girls) who could understand the research instruments presented, and the two rounds of response surveys collected data through individual interviews and self-administered questionnaires. The research instrument used in the survey covered six types of expression chosen to elicit concrete and accurate responses: joy, sadness, anger, surprise, disgust, and fear. In the visual-cognitive results, both boys and girls responded with high frequency to the joyful, sad, angry, and surprised expressions, while response frequencies for the fearful and disgusted expressions were low for both. In the verbal responses, for all six emotions the children most often answered by picking out a striking part of the facial expression, or gave heuristic responses that inferred and explored on the basis of the expression's visual features; for surprise, disgust, and fear, imaginative responses also appeared, in which the children invented new stories suggested by the facial expression.

얼굴 감정 인식을 위한 로컬 및 글로벌 어텐션 퓨전 네트워크 (Local and Global Attention Fusion Network For Facial Emotion Recognition)

  • ;;;김수형
    • 한국정보처리학회:학술대회논문집
    • /
    • 2023 Spring Conference of the Korea Information Processing Society
    • /
    • pp.493-495
    • /
    • 2023
  • Deep learning methods and attention mechanisms have been incorporated to improve facial emotion recognition, which has recently attracted much attention. Fusion approaches have improved accuracy by combining various types of information. This research proposes a fusion network with self-attention and local attention mechanisms, built on a multi-layer perceptron network. The network extracts discriminative features from facial images using models pre-trained on the RAF-DB dataset. We outperform the other fusion methods on the RAF-DB dataset with impressive results.
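One way to fuse a global self-attention branch with a local attention branch through an MLP head is sketched below. The feature dimension, eight attention heads, and seven emotion classes are illustrative assumptions, not the architecture reported in the paper:

```python
import torch
import torch.nn as nn

class LocalGlobalFusion(nn.Module):
    """Sketch: fuse a global self-attention branch with a local
    attention branch and classify with an MLP head."""
    def __init__(self, dim=512, n_classes=7):
        super().__init__()
        self.global_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.local_score = nn.Linear(dim, 1)                 # scores each local patch
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, n_classes))

    def forward(self, patches):                              # (B, N, dim) patch features
        g, _ = self.global_attn(patches, patches, patches)   # self-attention over patches
        g = g.mean(dim=1)                                    # global descriptor
        w = torch.softmax(self.local_score(patches), dim=1)  # local attention weights
        loc = (w * patches).sum(dim=1)                       # weighted local descriptor
        return self.mlp(torch.cat([g, loc], dim=-1))         # fused classification

feats = torch.randn(4, 49, 512)          # e.g. a 7x7 backbone feature map as 49 patches
print(LocalGlobalFusion()(feats).shape)  # torch.Size([4, 7])
```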

스마트 전시환경에서 순차적 인공신경망에 기반한 감정인식 모델 (Emotion Detection Model based on Sequential Neural Networks in Smart Exhibition Environment)

  • 정민규;최일영;김재경
    • 지능정보연구
    • /
    • Vol. 23 No. 1
    • /
    • pp.109-126
    • /
    • 2017
  • Recently, much research has been conducted on recognizing emotions in order to provide intelligent services. In particular, in the exhibition domain, emotion recognition studies using facial expressions have been carried out to provide visitors with personalized services. However, although facial expressions change over time, existing studies have the limitation of using facial expression data captured at a single point in time. This study therefore proposes a prediction model that recognizes a visitor's emotions from the changes in his or her facial expressions while viewing an exhibit. To this end, we built a sequential neural network model suited to emotion prediction from time-series data. To evaluate the usefulness of the proposed model, its performance was compared against a standard neural network model. The test results showed that the proposed model, which takes the sequential nature of the data into account, produces better predictions.
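A minimal sketch of such a sequential model, here an LSTM over per-frame facial-expression features, is shown below; the feature dimensionality, hidden size, and six emotion classes are assumptions for illustration, not the authors' configuration:

```python
import torch
import torch.nn as nn

class SequentialEmotionModel(nn.Module):
    """Sketch of a sequential model for emotion prediction from a
    time series of per-frame facial-expression features."""
    def __init__(self, n_features=34, hidden=64, n_emotions=6):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_emotions)

    def forward(self, x):            # x: (batch, time, n_features)
        _, (h, _) = self.rnn(x)      # final hidden state summarizes the sequence
        return self.head(h[-1])      # emotion logits

# e.g. 30 frames of facial-expression features captured while viewing an exhibit
seq = torch.randn(8, 30, 34)
print(SequentialEmotionModel()(seq).shape)   # torch.Size([8, 6])
```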

The Impact of Program Improvement Using Forest Healing Resources on the Therapeutic Effect: Focused on Improving Index of Greenness for Adolescents

  • Hwang, Joo-Ho;Lee, Hyo-Jung;Park, Jin-Hwa;Kim, Dong-Min;Lee, Kyoung-Min
    • 인간식물환경학회지
    • /
    • Vol. 22 No. 6
    • /
    • pp.691-698
    • /
    • 2019
  • This study examines the effect of an improved forest therapy program for adolescents using forest healing resources (focused on improving the index of greenness for adolescents). The participants were 30 students in the control group, who took part in the 2018 program, and 51 students in the experimental group, who took part in the improved 2019 program. The questionnaire, developed by the Korea Forest Welfare Institute, comprised items on general matters, index of greenness, restorative environment, positive emotion, negative emotion, facial expression, and psychological assessment. The control group returned 30 and the experimental group 49 valid copies of the questionnaire. In the paired-sample t-test for each group, the control group showed a significant increase in all categories except restorative environment, while in the experimental group all categories improved significantly (p < .01). An independent-sample t-test (one-tailed) was performed to test the effect of the forest therapy program with the improved index of greenness. The index of greenness increased by 0.73 points (t=2.555, p < .01) and restorative environment by 1.01 points (t=2.567, p < .01), both statistically significant. Negative emotion increased by 0.04 points (t=0.183, p > .05), which was not significant. On the other hand, positive emotion decreased by 0.42 points (t=-1.918, p < .05), facial expression by 0.57 points (t=-1.775, p < .05), and psychological assessment by 0.29 points (t=-0.981, p > .05), with the decreases in positive emotion and facial expression being significant. However, all the decreased items still showed significant improvements between the pretest and posttest scores of the experimental group.
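The analysis pairs within-group paired-sample t-tests with a one-tailed independent-sample t-test between groups. A minimal sketch of that procedure with SciPy, on entirely hypothetical score data, might look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(3.2, 0.8, size=49)              # hypothetical pretest scores
post = pre + rng.normal(0.6, 0.5, size=49)       # hypothetical posttest scores

# Within-group change: pretest vs. posttest of the same participants
t_paired, p_paired = stats.ttest_rel(pre, post)

# Between-group comparison of gain scores, one-tailed as in the study
gain_ctrl = rng.normal(0.3, 0.7, size=30)        # hypothetical control-group gains
gain_exp = post - pre                            # experimental-group gains
t_ind, p_two = stats.ttest_ind(gain_exp, gain_ctrl, equal_var=False)
p_one = p_two / 2 if t_ind > 0 else 1 - p_two / 2  # convert to a one-tailed p-value
print(t_paired, p_paired, t_ind, p_one)
```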

Happy Applicants Achieve More: Expressed Positive Emotions Captured Using an AI Interview Predict Performances

  • Shin, Ji-eun;Lee, Hyeonju
    • 감성과학
    • /
    • Vol. 24 No. 2
    • /
    • pp.75-80
    • /
    • 2021
  • Do happy applicants achieve more? Although it is well established that happiness predicts desirable work-related outcomes, previous findings were obtained primarily in social settings. In this study, we extend the scope of the "happiness premium" effect to the artificial intelligence (AI) context. Specifically, we examine whether an applicant's happiness signal captured by an AI system effectively predicts his or her objective performance. Data from 3,609 job applicants showed that verbally expressed happiness (frequency of positive words) during an AI interview predicts cognitive task scores, and this tendency was more pronounced among women than men. However, facially expressed happiness (frequency of smiling) recorded by the AI could not predict performance. Thus, when AI is involved in a hiring process, verbal rather than facial cues of happiness provide a more valid marker of applicants' hiring chances.
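The verbal happiness measure is the frequency of positive words in the interview transcript. A toy sketch of such a measure and its correlation with task scores is given below; the word list, transcripts, and scores are all hypothetical, and the study's actual lexicon and modeling are not described in the abstract:

```python
import numpy as np
from scipy import stats

POSITIVE_WORDS = {"happy", "glad", "enjoy", "love", "great", "excited"}  # toy lexicon

def positive_word_frequency(transcript: str) -> float:
    """Share of positive words among the tokens of an interview transcript."""
    tokens = transcript.lower().split()
    return sum(t.strip(".,!?") in POSITIVE_WORDS for t in tokens) / max(len(tokens), 1)

# Hypothetical applicants: positivity of the transcript vs. cognitive task score
transcripts = [
    "I love solving hard problems and I am excited to learn",
    "I did the tasks that were assigned to me",
    "Working with a great team makes me happy",
]
task_scores = np.array([78, 55, 82])
positivity = np.array([positive_word_frequency(t) for t in transcripts])
r, p = stats.pearsonr(positivity, task_scores)
print(r, p)
```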

Facial Data Visualization for Improved Deep Learning Based Emotion Recognition

  • Lee, Seung Ho
    • Journal of Information Science Theory and Practice
    • /
    • Vol. 7 No. 2
    • /
    • pp.32-39
    • /
    • 2019
  • A convolutional neural network (CNN) has been widely used in facial expression recognition (FER) because it can automatically learn discriminative appearance features from an expression image. To make full use of this discriminating capability, this paper suggests a simple but effective method for CNN-based FER. Specifically, instead of the original expression image, which contains facial appearance only, an expression image with facial geometry visualization is used as input to the CNN. In this way, geometric and appearance features can be learned simultaneously, making the CNN more discriminative for FER. A simple CNN extension is also presented, aiming to utilize the geometric expression change derived from an expression image sequence. Experimental results on two public datasets (CK+ and MMI) show that the CNN using facial geometry visualization clearly outperforms the conventional CNN using facial appearance only.
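The key input transformation is to draw facial geometry onto the appearance image before it reaches the CNN. A minimal sketch, assuming landmarks come from some external detector (the paper's exact visualization scheme is not specified in the abstract):

```python
import numpy as np
import cv2

def visualize_geometry(face_img, landmarks):
    """Overlay facial geometry (landmark points) on the appearance
    image so a CNN sees both cues at once. The landmark source is an
    assumption; any 68-point detector (e.g. dlib) could supply them."""
    vis = face_img.copy()
    for (x, y) in landmarks:
        cv2.circle(vis, (int(x), int(y)), 1, (0, 255, 0), -1)  # geometry as drawn dots
    return vis

# Toy example: a gray face crop with a few hypothetical landmark points
img = np.full((96, 96, 3), 128, dtype=np.uint8)
pts = [(30, 40), (66, 40), (48, 60), (36, 75), (60, 75)]       # eyes, nose, mouth corners
cnn_input = visualize_geometry(img, pts)
print(cnn_input.shape)   # (96, 96, 3) image with geometry visualization baked in
```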

이미지 감정색인을 위한 시각적 요인 분석에 관한 탐색적 연구 (An Exploratory Investigation on Visual Cues for Emotional Indexing of Image)

  • 정선영;정은경
    • 한국문헌정보학회지
    • /
    • Vol. 48 No. 1
    • /
    • pp.53-73
    • /
    • 2014
  • With the development of affect-aware computing environments, emotional access to and use of multimedia information resources, including images, is an important research problem. This study aims to identify, in an exploratory way, the visual cues used for emotional indexing of images. To this end, a total of 620 emotional visual cues were extracted through interviews with 20 participants about 15 images indexed with the five basic emotions of love, happiness, sadness, fear, and anger. The distributions of the emotion-triggering visual cues (5 categories) and their sub-cues (18 categories), as well as the distribution of visual cues for each of the five emotions, were analyzed and are presented. The major visual cues for perceiving the emotion of an image were facial expressions, people's actions or behaviors, and formal elements such as line, shape, and size. Regarding the relationship between individual emotions and visual cues, love was closely tied to people's actions or behaviors, whereas for happiness the facial expression was most important. Sadness was likewise closely linked to actions or behaviors, and fear was strongly related to facial expressions. Anger was characterized by the formal elements of line, shape, and size. These results suggest that a combined approach to the content-based and concept-based elements of an image is important for effective emotional indexing.