• Title/Summary/Keyword: facial emotion expression (얼굴감정 표현)

92 search results

Development of Context Awareness and Service Reasoning Technique for Handicapped People (멀티 모달 감정인식 시스템 기반 상황인식 서비스 추론 기술 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.1
    • /
    • pp.34-39
    • /
    • 2009
  • Human emotion is a subjective, impulsive response that unconsciously expresses intentions and needs, and it therefore carries rich contextual information about users of ubiquitous computing environments and intelligent robot systems. Indicators of a user's emotion include facial images, voice signals, and biological signal spectra. In this paper, we generate separate facial and vocal emotion recognition results from facial images and voice to improve the convenience and efficiency of emotion recognition. We also extract the best-fitting features from the image and sound data to raise the recognition rate, and implement a multi-modal emotion recognition system based on feature fusion. Finally, using the emotion recognition results, we propose a ubiquitous computing service reasoning method based on a Bayesian network and a ubiquitous context scenario.
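The feature-fusion step the abstract describes can be sketched minimally as follows. This is an illustrative assumption, not the paper's implementation: the per-modality feature vectors and emotion labels are hypothetical, and the sketch only shows the common pattern of normalizing each modality before concatenating, so that neither dominates the fused representation.

```python
import numpy as np

def zscore(x):
    """Normalize one modality so neither dominates the fused vector."""
    return (x - x.mean()) / (x.std() + 1e-8)

def fuse_features(face_feats, voice_feats):
    """Feature-level fusion: normalize each modality, then concatenate."""
    return np.concatenate([zscore(face_feats), zscore(voice_feats)])

# Hypothetical per-frame emotion features from each single-modality recognizer
face = np.array([0.2, 0.7, 0.1])   # e.g. happy/sad/angry scores from the face
voice = np.array([0.3, 0.5, 0.2])  # the same emotions scored from the voice
fused = fuse_features(face, voice)
print(fused.shape)  # (6,)
```

The fused vector would then feed a single classifier (or, as in the paper, evidence nodes of a Bayesian network) instead of combining two independent decisions.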

Emotion Recognition Method of Facial Image using PCA (PCA을 이용한 얼굴 표정의 감정 인식 방법)

  • Kim, Ho-Duck;Yang, Hyun-Chang;Park, Chang-Hyun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.6
    • /
    • pp.772-776
    • /
    • 2006
  • Most facial image recognition research is conducted on full-face images. The parts that most strongly affect facial image recognition are the eyes and the mouth, so researchers have concentrated on the eyes, eyebrows, and mouth in facial images. In everyday life, however, rapid changes in the pupils of a person in front of a camera are difficult to capture, and many people wear glasses. In this paper, we therefore apply Principal Component Analysis (PCA) to facial image recognition with the eye region covered (the "blindfold" case).
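PCA-based face recognition of the kind the abstract refers to (the eigenface approach) can be sketched in a few lines. The data here is random placeholder "images"; the paper's actual dataset and parameters are not reproduced.

```python
import numpy as np

def pca_project(images, k):
    """Project flattened face images onto their top-k principal components.

    images: (n_samples, n_pixels) matrix, one flattened face per row.
    Returns (projections, mean_face, components).
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # Rows of vt are the principal axes ("eigenfaces") of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]
    return centered @ components.T, mean, components

def nearest_face(probe, gallery_proj, mean, components):
    """Classify a probe face by nearest neighbour in PCA space."""
    p = (probe - mean) @ components.T
    dists = np.linalg.norm(gallery_proj - p, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
gallery = rng.random((10, 64))          # 10 hypothetical 8x8 faces, flattened
proj, mean, comps = pca_project(gallery, k=4)
print(nearest_face(gallery[3], proj, mean, comps))  # 3: an image matches itself
```

Occluding the eye region, as the paper does, amounts to zeroing or cropping those pixel columns before the projection step.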

A Study on the Mechanism of Social Robot Attitude Formation through Consumer Gaze Analysis: Focusing on the Robot's Face (소비자 시선 분석을 통한 소셜로봇 태도 형성 메커니즘 연구: 로봇의 얼굴을 중심으로)

  • Ha, Sangjip;Yi, Eunju;Yoo, In-jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.243-262
    • /
    • 2022
  • In this study, eye tracking was applied to the robot's appearance as part of social robot design research. Each part of the social robot was designated as an AOI (Area of Interest), and user attitudes were measured with a design evaluation questionnaire to construct a design research model for social robots. The eye-tracking indicators used were Fixation, First Visit, Total Viewed, and Revisits, and the AOIs were the social robot's face, eyes, lips, and body. The questionnaire collected consumer beliefs about the social robot, such as face-highlightedness, human-likeness, and expressiveness, with attitude toward the robot as the dependent variable. Through this, we sought to uncover the mechanism by which users form attitudes toward robots and to derive concrete insights for robot design.
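The four gaze indicators the study uses can be computed from a time-ordered fixation log roughly as below. The scan path is hypothetical, and the operational definition of a "revisit" (a return to an AOI after the gaze has left it) is an assumption; eye-tracking vendors define these metrics in slightly different ways.

```python
def aoi_metrics(fixations):
    """Compute per-AOI gaze metrics from a time-ordered fixation list.

    fixations: list of (aoi_name, start_ms, duration_ms) tuples.
    Returns {aoi: {"fixation_count", "first_visit_ms", "total_viewed_ms", "revisits"}}.
    """
    metrics, last_aoi = {}, None
    for aoi, start, dur in fixations:
        m = metrics.setdefault(aoi, {"fixation_count": 0, "first_visit_ms": start,
                                     "total_viewed_ms": 0, "revisits": 0})
        m["fixation_count"] += 1
        m["total_viewed_ms"] += dur
        # Assumed definition: a revisit is a return after the gaze left the AOI
        if last_aoi is not None and last_aoi != aoi and m["fixation_count"] > 1:
            m["revisits"] += 1
        last_aoi = aoi
    return metrics

# Hypothetical scan path over a social robot's face, eyes, lips, and body
scan = [("face", 0, 200), ("eyes", 200, 350), ("lips", 550, 120),
        ("eyes", 670, 300), ("body", 970, 180)]
m = aoi_metrics(scan)
print(m["eyes"])  # 2 fixations, first visit at 200 ms, 650 ms viewed, 1 revisit
```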

Design of the emotion expression in multimodal conversation interaction of companion robot (컴패니언 로봇의 멀티 모달 대화 인터랙션에서의 감정 표현 디자인 연구)

  • Lee, Seul Bi;Yoo, Seung Hun
    • Design Convergence Study
    • /
    • v.16 no.6
    • /
    • pp.137-152
    • /
    • 2017
  • This research aims to develop a companion robot experience design for the elderly in Korea, based on a needs-function deployment matrix and on research into robot emotion expression in multimodal interaction. First, elderly users' main needs were grouped into four categories based on ethnographic research. Second, the robot's functional elements and physical actuators were mapped to user needs in a function-needs deployment matrix. The final UX design prototype was a robot with a verbal, non-touch multimodal interface and emotional facial expressions based on Ekman's Facial Action Coding System (FACS). The prototype was validated in a user test session that analyzed the influence of robot interaction on users' cognition and emotion, using a story recall test and facial emotion analysis software (Emotion API), under two conditions: when the robot's facial expression matched the emotion of the information it delivered, and when the robot initiated the interaction cycle voluntarily. The group with the emotional robot showed a relatively high recall rate in the delayed recall test, and the facial expression analysis showed that the robot's facial expression and interaction initiation affected the emotion and preference of the elderly participants.

Affective interaction to emotion expressive VR agents (가상현실 에이전트와의 감성적 상호작용 기법)

  • Choi, Ahyoung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.5
    • /
    • pp.37-47
    • /
    • 2016
  • This study evaluates user feedback, such as physiological responses and facial expressions, while subjects play a social decision-making game with interactive virtual agent partners. In the game, subjects invest money or credit in one of several projects, and their partners (virtual agents) do the same. Subjects interact with different virtual agents that behave reciprocally or unreciprocally while displaying socially affective facial expressions, and the subject's total earnings are contingent on the partner's choices. I observed that subjects' appraisals of interactions with cooperative/uncooperative (or friendly/unfriendly) virtual agents in the investment game produced increased autonomic and somatic responses, and that these responses were observable in real time through physiological signals and facial expressions. Feedback was assessed with a photoplethysmography (PPG) sensor and a galvanic skin response (GSR) sensor while a web camera captured the subject's face. After all trials, subjects answered questions evaluating how much the interactions with the virtual agents affected their appraisals.

Development of an Emotional Messenger for IPTV and Smart Phone (IPTV 및 스마트폰을 위한 감성 메신저의 개발)

  • Sung, Minyoung;Namkung, Chan;Paek, Seon-uok;Ahn, Seonghye
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2010.11a
    • /
    • pp.1533-1535
    • /
    • 2010
  • Automatically recognizing a user's emotion and expressing it through 3D character animation can enrich device-mediated communication with more emotional nuance and make it more effective. This paper describes the development of an emotional messenger that runs on IPTV and smartphone devices. We propose a messenger that conveys emotion through emotion recognition based on sentence and voice-tone analysis, facial expression tracking in video, and facial expression and body-motion animation of a personalized 3D character, and we describe its effects. Automatic emotion recognition from chat sentences using the Naive Bayes algorithm was developed, and its performance and effectiveness are verified through experiments.
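A minimal multinomial Naive Bayes classifier over chat sentences, of the kind the abstract mentions, can be sketched as below. The toy corpus and emotion labels are hypothetical; the paper's actual training data and features are not reproduced.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesEmotion:
    """Multinomial Naive Bayes over word counts, with Laplace smoothing."""

    def fit(self, sentences, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        self.vocab = set()
        for sent, lab in zip(sentences, labels):
            for word in sent.split():
                self.word_counts[lab][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, sentence):
        best_label, best_score = None, float("-inf")
        total = sum(self.label_counts.values())
        for lab in self.label_counts:
            # log prior + sum of Laplace-smoothed log likelihoods
            score = math.log(self.label_counts[lab] / total)
            denom = sum(self.word_counts[lab].values()) + len(self.vocab)
            for word in sentence.split():
                score += math.log((self.word_counts[lab][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = lab, score
        return best_label

# Hypothetical toy chat corpus
train = ["i am so happy today", "this is great news",
         "i feel sad and alone", "what a terrible day"]
y = ["joy", "joy", "sadness", "sadness"]
clf = NaiveBayesEmotion().fit(train, y)
print(clf.predict("happy great day"))  # joy
```

The predicted label would then drive the 3D character's facial expression and gesture animation.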

Development of FACS-based Android Head for Emotional Expressions (감정표현을 위한 FACS 기반의 안드로이드 헤드의 개발)

  • Choi, Dongwoon;Lee, Duk-Yeon;Lee, Dong-Wook
    • Journal of Broadcast Engineering
    • /
    • v.25 no.4
    • /
    • pp.537-544
    • /
    • 2020
  • This paper proposes the creation of an android robot head based on the Facial Action Coding System (FACS) and the generation of emotional expressions with FACS. "Android robot" refers to robots with a human-like appearance; such robots have artificial skin and muscles. To express emotions, the locations and number of artificial muscles had to be determined, so the motions of the human face were analyzed anatomically using FACS. In FACS, expressions are composed of action units (AUs), which serve as the basis for determining the locations and number of artificial muscles in the robot. The android head developed in this study has servo motors and wires corresponding to 30 artificial muscles, and it is covered with artificial skin in order to produce facial expressions. Spherical joints and springs were used to build the micro-eyeball structures, and the arrangement of the 30 servo motors was based on an efficient wire-routing design. The head has 30 DOFs and can express 13 basic emotions; the recognition rate of these basic emotional expressions was evaluated by spectators at an exhibition.
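The AU-to-emotion mapping that such a head relies on can be sketched as a lookup table. The combinations below are commonly cited EMFACS-style prototypes for six basic emotions, not the paper's own 13-emotion mapping, and the Jaccard-overlap classifier is purely illustrative.

```python
# Commonly cited FACS action-unit combinations for basic emotions
# (EMFACS-style prototypes; an assumption, not the paper's mapping).
EMOTION_AUS = {
    "happiness": {6, 12},             # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},          # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},       # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},       # brow lowerer + lid raiser/tightener + lip tightener
    "disgust":   {9, 15},             # nose wrinkler + lip corner depressor
    "fear":      {1, 2, 4, 5, 20, 26},
}

def aus_for(emotion):
    """Return the AUs a FACS-driven head would activate for an emotion."""
    return EMOTION_AUS[emotion]

def classify_expression(active_aus):
    """Pick the emotion whose AU set best overlaps the active AUs (Jaccard)."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(EMOTION_AUS, key=lambda e: jaccard(EMOTION_AUS[e], set(active_aus)))

print(classify_expression({6, 12}))  # happiness
```

In the hardware, each AU in the active set maps onto one or more of the 30 servo-driven artificial muscles.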

Posture features and emotion predictive models for affective postures recognition (감정 자세 인식을 위한 자세특징과 감정예측 모델)

  • Kim, Jin-Ok
    • Journal of Internet Computing and Services
    • /
    • v.12 no.6
    • /
    • pp.83-94
    • /
    • 2011
  • A main research issue in affective computing is giving a machine the ability to recognize a person's emotion and to react to it properly. Efforts in this direction have mainly focused on facial and vocal cues; postures have recently been considered as well. This paper aims to discriminate emotions from posture by identifying and measuring the salience of the posture features that play a role in affective expression. Affective postures are first collected from human subjects using a motion capture system, and the emotional features of posture are described with spatial features. Using standard statistical techniques, we verified a statistically significant correlation between the emotion intended by the acting subjects and the emotion perceived by observers. Discriminant analysis is used to build affective posture predictive models and to measure the salience of the proposed posture features in discriminating between six basic emotional states. The features and models are evaluated using the correlation between the actor and observer posture sets. Quantitative experimental results show that the proposed feature set discriminates well between emotions and that the predictive models perform well.
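The discriminant analysis the paper applies can be illustrated with Fisher's two-class linear discriminant on toy data (the paper discriminates six emotions; the two-class case shows the same idea). The "posture features" here are random placeholders, not the paper's feature set.

```python
import numpy as np

def fisher_lda_direction(X, y):
    """Fisher's linear discriminant for two classes: w = Sw^-1 (m1 - m0)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    return np.linalg.solve(Sw, m1 - m0)

def predict(X, w, threshold):
    """Classify by which side of the threshold the projection falls on."""
    return (X @ w > threshold).astype(int)

rng = np.random.default_rng(1)
# Hypothetical 2-D posture features (e.g. arm openness, head inclination)
X0 = rng.normal([0.2, -0.5], 0.1, size=(50, 2))   # "sad" postures
X1 = rng.normal([0.9, 0.4], 0.1, size=(50, 2))    # "happy" postures
X = np.vstack([X0, X1]); y = np.array([0] * 50 + [1] * 50)
w = fisher_lda_direction(X, y)
t = (X0.mean(axis=0) @ w + X1.mean(axis=0) @ w) / 2
print((predict(X, w, t) == y).mean())  # 1.0 on this well-separated toy data
```

The magnitude of each component of `w` gives a rough measure of that feature's salience for the discrimination, which parallels how the paper ranks its posture features.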

Convolutional Network with Densely Backward Attention for Facial Expression Recognition (얼굴 표정 인식을 위한 Densely Backward Attention 기반 컨볼루션 네트워크)

  • Seo, Hyun-Seok;Hua, Cam-Hao;Lee, Sung-Young
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.10a
    • /
    • pp.958-961
    • /
    • 2019
  • The advent of convolutional neural networks (CNNs) has driven major advances in facial expression recognition research. However, existing CNN approaches suffer from an attention-embedding problem: pretrained models do not incorporate semantic context at multiple levels. Human facial emotion is observed through combinations of movements of many muscles, and because the features produced by deep CNN layers lose semantic information, such as class distinctions, through many subsampling steps, it is difficult to build a proper model via transfer learning. This paper therefore proposes a Densely Backward Attention (DBA) CNN that achieves high recognition performance by integrating channel-wise attention and semantic information across the multi-level features of a backbone network. The proposed method uses inter-channel semantic information from the high-level features to recalibrate the fine-grained semantic information in their low-level versions, and then adds a step that integrates the multi-level data so that descriptions of important facial expressions are explicitly included. Experiments show an accuracy of 79.37%, demonstrating the effectiveness of the proposed technique.
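The channel-wise attention that DBA builds on can be sketched in the generic squeeze-and-excitation style: pool each channel to a scalar, pass the pooled vector through a small bottleneck, and rescale the channels by sigmoid weights. This numpy sketch shows only that generic mechanism, with random weights, not the paper's DBA module or its backward recalibration across levels.

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation style channel-wise attention (numpy sketch).

    feature_map: (C, H, W). Global-average-pool each channel ("squeeze"),
    pass through a two-layer bottleneck ("excitation"), and rescale the
    channels by the resulting sigmoid weights.
    """
    squeezed = feature_map.mean(axis=(1, 2))       # (C,) channel descriptors
    hidden = np.maximum(0, w1 @ squeezed)          # ReLU bottleneck, (C/r,)
    weights = 1 / (1 + np.exp(-(w2 @ hidden)))     # sigmoid gates, (C,)
    return feature_map * weights[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
fmap = rng.random((C, H, W))
w1 = rng.normal(size=(C // r, C))   # reduction weights (untrained, illustrative)
w2 = rng.normal(size=(C, C // r))   # expansion weights
out = channel_attention(fmap, w1, w2)
print(out.shape)  # (8, 4, 4)
```

In DBA, weights of this kind would be derived from high-level features and applied backward to recalibrate lower-level feature maps before the multi-level integration step.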

A Study on the Mechanism of Social Robot Attitude Formation through Consumer Gaze Analysis: Focusing on the Robot's Face (소비자 시선 분석을 통한 소셜로봇 태도 형성 메커니즘 연구: 로봇의 얼굴을 중심으로)

  • Ha, Sangjip;Yi, Eun-ju;Yoo, In-jin;Park, Do-Hyung
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.409-414
    • /
    • 2021
  • This study applies eye tracking to the robot's appearance, one stream of social robot design research. A design research model for social robots was constructed by linking users' eye-tracking indicators, measured over areas of interest such as the social robot's whole body, face, eyes, and lips, with user attitudes captured in a design evaluation questionnaire. Specifically, we sought to discover the mechanism by which users form attitudes toward robots and to derive concrete insights that can be referenced when designing robots. The eye-tracking indicators used were Fixation, First Visit, Total Viewed, and Revisits, and the AOIs (Areas of Interest) were the social robot's face, eyes, lips, and body. The design evaluation questionnaire collected consumer beliefs such as the social robot's emotional expressiveness, human-likeness, and face salience, and attitude toward the robot was set as the dependent variable.
