• Title/Summary/Keyword: Recognition of Facial Expressions

120 search results

Interactive Animation by Action Recognition (동작 인식을 통한 인터랙티브 애니메이션)

  • Hwang, Ji-Yeon;Lim, Yang-Mi;Park, Jin-Wan;Jahng, Surng-Gahb
    • The Journal of the Korea Contents Association / v.6 no.12 / pp.269-277 / 2006
  • In this paper, we propose an interactive system that generates emotional expressions from arm gestures. By extracting relevant features from key frames, we can infer emotions from arm gestures. Real-time animation requires very high frame rates, so we process facial emotion expression with a 3D application to minimize animation time, and we propose a method for matching frames to actions. By matching image sequences of exaggerated arm gestures from participants, the participants feel that they are communicating directly with the portraits.


Empathy Recognition Method Using Synchronization of Heart Response (심장 반응 동기화를 이용한 공감 인식 방법)

  • Lee, Dong Won;Park, Sangin;Mun, Sungchul;Whang, Mincheol
    • Science of Emotion and Sensibility / v.22 no.1 / pp.45-54 / 2019
  • Empathy has been observed to be pivotal in enhancing both social relations and the efficiency of task performance. Empathetic interaction begins with individuals mirroring each other's facial expressions, vocal tone, actions, and so on, and the cardiovascular activity of people engaged in empathetic interaction is also known to be synchronized. This study attempted to objectively and quantitatively define the rules of empathy with regard to the synchronization of cardiac rhythm between persons. Seventy-four subjects participated in the investigation and were paired to imitate the facial expressions of their partner. An electrocardiogram (ECG) was recorded while the participants performed the task. Quantitative indicators were extracted from the heart rhythm pattern (HRP) and the heart rhythm coherence (HRC) to determine how the synchronization of heart rhythms between two individuals differed with empathy. Statistical significance was confirmed by an independent-sample t-test. The HRP and HRC correlation (r) between persons increased significantly with empathy in comparison to non-empathetic interaction, while the difference in the standard deviation of NN intervals (SDNN) and in the dominant peak frequency decreased. Significant parameters for evaluating empathy were then selected through a step-wise discriminant analysis. Empathic interactions may thus be monitored and managed for high-quality social interaction and communication.
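The synchronization indicators named in the abstract above (SDNN of the NN intervals and the correlation between two heart rhythm patterns) can be sketched as follows. This is a minimal illustration: the function names and the sample NN-interval series are assumptions, not data or code from the paper.

```python
import math

def sdnn(nn_intervals):
    """Standard deviation of NN (normal-to-normal) intervals, in ms."""
    mean = sum(nn_intervals) / len(nn_intervals)
    return math.sqrt(sum((x - mean) ** 2 for x in nn_intervals) / len(nn_intervals))

def pearson_r(a, b):
    """Pearson correlation between two equally sampled heart rhythm patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

# Illustrative NN-interval series (ms) for two interacting persons
p1 = [812, 790, 805, 770, 795, 820, 788, 801]
p2 = [800, 785, 810, 775, 790, 815, 792, 798]

print(sdnn(p1))           # dispersion of person 1's heart rhythm
print(pearson_r(p1, p2))  # synchronization between the two rhythms
```

A higher correlation and a smaller SDNN difference between partners would, per the abstract, indicate a more empathetic interaction.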

Image Recognition by Using Hybrid Coefficient Measure of Correlation and Distance (상관계수과 거리계수의 조합형 척도를 이용한 영상인식)

  • Hong, Seong-Jun;Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.3 / pp.343-347 / 2010
  • This paper presents an efficient image recognition method using a hybrid coefficient measure of correlation and distance. The correlation coefficient measures statistical similarity using the Pearson coefficient, and the distance coefficient measures spatial similarity using the city-block distance. The total similarity among images is calculated by extending the similarity between feature vectors, which are extracted by PCA and ICA, respectively. The proposed method has been applied to recognizing 960 (30 persons * 4 expressions * 2 lights * 4 poses) facial images of 40*50 pixels. The experimental results show that the ICA-based method achieves better recognition performance than the PCA-based method and is less affected by environmental influences such as lighting.
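A minimal sketch of a hybrid similarity of the kind the abstract describes, combining Pearson correlation (statistical similarity) with city-block distance (spatial similarity) between feature vectors. The weighting scheme and the `alpha` parameter below are assumptions for illustration, not the paper's exact measure:

```python
import math

def pearson(a, b):
    """Statistical similarity between two feature vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def city_block(a, b):
    """Spatial similarity: L1 (city-block) distance, lower is more similar."""
    return sum(abs(x - y) for x, y in zip(a, b))

def hybrid_similarity(a, b, alpha=0.5):
    """Weighted combination: high correlation and low distance -> high score.
    The distance term is mapped to (0, 1] so both terms share a scale."""
    return alpha * pearson(a, b) + (1 - alpha) / (1 + city_block(a, b))

# Feature vectors as might be produced by a PCA or ICA projection (illustrative)
f1 = [0.8, -0.2, 0.5, 0.1]
f2 = [0.7, -0.1, 0.6, 0.0]
print(hybrid_similarity(f1, f2))
```

In the paper's setting, `f1` and `f2` would be PCA- or ICA-derived feature vectors of the 40*50-pixel face images, and the image pair with the highest total similarity would decide the recognized identity.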

An Analysis on the Empathic Changing Process of the Members in Empathy Training Program (공감훈련프로그램 참여아동의 공감표현 변화과정 분석)

  • Kim, Mi-Young
    • The Korean Journal of Elementary Counseling / v.7 no.1 / pp.205-226 / 2008
  • The purpose of this study is to complement existing quantitative research by verifying the effectiveness of an Empathy Training Program for participating children, examining the changes in empathic expression that emerge in teacher-child and child-child relationships. The following research question was established: What changes appear in the empathic expressions of children participating in the Empathy Training Program? Six female sixth-grade elementary school students were selected and completed twelve sessions of the program. Before and after participation, the children were given a sentence-completion test, a recognition task, a writing test, and a school-adaptation test, providing data for analysis. Each participant took part in one or two meetings of forty to fifty minutes; progress through the program was recorded, and through repeated review of the recordings, instances of empathic language and behavior were systematically identified. The participating children's empathic expressions were then classified and analyzed against these criteria, and six common patterns of change in empathic expression were identified. The findings are summarized as follows. First, judged by the criterion of empathic language, each child began at the stage of simple understanding and progressed through the stages of care, insight, and emotional expression.
Second, judged by the criterion of empathic behavior, eye contact and the ability to concentrate attention developed first. Head nodding, which appeared as a brief nod at the beginning, conveyed deep empathy by the end. Facial expressions came to match the situation and content of the interaction, and at the very end the children stretched out their arms to hold and pat the other person, and holding hands was also observed; half of the children reached this final stage of empathic behavior. Third, from the first stage to the last, the children's empathic language became more complete: their vocabulary increased and grew more diverse, together with their empathic actions. Comparing the beginning with the end, their expressions also became more natural and sincere. The research shows that through the experience of empathic expression, the participating children gained self-confidence and adopted peaceful forms of expression rather than aggressive or defensive responses to problems. In addition, through empathic expression the children's relationships grew closer. These outcomes suggest that the formation of empathic expression can help children solve their own problems, build close relationships with teachers and peers, and contribute to smooth classroom management.


Development of facial recognition application for automation logging of emotion log (감정로그 자동화 기록을 위한 표정인식 어플리케이션 개발)

  • Shin, Seong-Yoon;Kang, Sun-Kyoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.4 / pp.737-743 / 2017
  • The intelligent life-log system proposed in this paper identifies and records a wide variety of everyday-life information about events (when, where, with whom, what, and how), that is, contextual information involving person, scene, age, emotion, relation, state, location, moving route, etc., attaches a unique tag to each piece of information, and allows users quick and easy access to it. Context awareness generates and classifies information on a per-tag basis using auto-tagging and biometric recognition technology and builds a situation-information database. In this paper, we developed an active modeling method and an application that recognizes neutral and smiling expressions from lip lines to automatically record emotion information.

Implementation of Pet Management System including Deep Learning-based Breed and Emotion Recognition SNS (딥러닝 기반 품종 및 감정인식 SNS를 포함하는 애완동물 관리 시스템 구현)

  • Inhwan Jung;Kitae Hwang;Jae-Moon Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.45-50 / 2023
  • As pet ownership has steadily increased in recent years, the need for an effective pet management system has grown. In this study, we propose a pet management system with a deep learning-based emotion recognition SNS. The system detects emotions from pet facial expressions using a convolutional neural network (CNN) and shares them with a user community through the SNS. Through the SNS, pet owners can connect with other users, share their experiences, and receive support and advice on pet management. The system also provides comprehensive pet management, including health tracking and vaccination and appointment reminders, and adds a function to manage and share pet walking records so that owners can share their walking experiences with other users. This study demonstrates the potential of AI technology to improve pet management systems and enhance the well-being of pets and their owners.

Face Detection Using Skin Color and Geometrical Constraints of Facial Features (살색과 얼굴 특징들의 기하학적 제한을 이용한 얼굴 위치 찾기)

  • Cho, Kyung-Min;Hong, Ki-Sang
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.12 / pp.107-119 / 1999
  • Although face detection is an important part of pattern recognition with many diverse application fields, no definitive solution to the problem exists. The reason is that faces undergo many unpredictable deformations due to facial expressions, viewpoint, rotation, scale, gender, age, etc. To overcome these problems, we propose a feature-based algorithm, an approach well known to be robust to such deformations. We detect a face by calculating the similarity between a real facial feature configuration and candidate configurations consisting of eyebrows, eyes, nose, and mouth. We use a steerable filter instead of a general derivative edge detector to obtain more accurate feature components, and apply a deformable template to verify the detected face, which compensates for the weakness of feature-based methods. To avoid the low detection rate and long processing time of searching whole input images, we design an adaptive skin-color filter that handles diverse skin colors, minimizing the target area and processing time.
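A common way to build the kind of skin-color filter mentioned above is to threshold pixel chrominance in the YCbCr color space. The conversion below follows the standard ITU-R BT.601 formula, but the Cb/Cr bounds are typical illustrative defaults, not the paper's adaptive thresholds:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin if its chrominance falls in the given ranges.
    The ranges are commonly cited defaults; an adaptive filter would tune them."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]

def skin_mask(image):
    """Mask an image (rows of (r, g, b) tuples) down to candidate face pixels."""
    return [[is_skin(*px) for px in row] for row in image]

print(is_skin(220, 170, 140))  # a typical skin tone
print(is_skin(30, 80, 200))    # a blue background pixel
```

The resulting mask restricts the feature search to skin-colored regions, which is how such a filter reduces the target area and processing time.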


Design and implement of the Educational Humanoid Robot D2 for Emotional Interaction System (감성 상호작용을 갖는 교육용 휴머노이드 로봇 D2 개발)

  • Kim, Do-Woo;Chung, Ki-Chull;Park, Won-Sung
    • Proceedings of the KIEE Conference / 2007.07a / pp.1777-1778 / 2007
  • In this paper, we design and implement a humanoid robot with an educational purpose that can collaborate and communicate with humans. We present an affective human-robot communication system for the humanoid robot D2, which we designed to communicate with a human through dialogue. D2 communicates with humans by understanding and expressing emotion using facial expressions, voice, gestures, and posture. Interaction between a human and the robot is made possible through our affective communication framework, which enables the robot to recognize the emotional state of the user and respond appropriately, so that the robot can engage in natural dialogue. To support interaction through voice, gestures, and posture, the developed educational humanoid robot consists of an upper body, two arms, a wheeled mobile platform, and control hardware including vision and speech capability and various control boards, such as motion control boards and a signal processing board handling several types of sensors. Using the educational humanoid robot D2, we have presented successful demonstrations consisting of manipulation tasks with two arms, object tracking using the vision system, and communication with humans through the emotional interface, synthesized speech, and recognition of speech commands.


A 3D Face Reconstruction and Tracking Method using the Estimated Depth Information (얼굴 깊이 추정을 이용한 3차원 얼굴 생성 및 추적 방법)

  • Ju, Myung-Ho;Kang, Hang-Bong
    • The KIPS Transactions:PartB / v.18B no.1 / pp.21-28 / 2011
  • A 3D face shape derived from 2D images is useful in many applications, such as face recognition, face synthesis, and human-computer interaction. To this end, we develop a fast 3D Active Appearance Model (3D-AAM) method using depth estimation. The training images include specific 3D face poses that differ greatly from one another. The depth information of the landmarks is estimated from the training image sequence using the approximated Jacobian matrix, and is added at the test phase to handle 3D pose variations of the input face. Our experimental results show that the proposed method fits the face shape, including variations of facial expression and 3D pose, more efficiently than the typical AAM, and estimates accurate 3D face shapes from images.

Sign Language Dataset Built from S. Korean Government Briefing on COVID-19 (대한민국 정부의 코로나 19 브리핑을 기반으로 구축된 수어 데이터셋 연구)

  • Sim, Hohyun;Sung, Horyeol;Lee, Seungjae;Cho, Hyeonjoong
    • KIPS Transactions on Software and Data Engineering / v.11 no.8 / pp.325-330 / 2022
  • This paper describes the collection and evaluation of a dataset for deep learning research on Korean sign language, covering tasks such as sign language recognition, translation, and segmentation. Deep learning research on sign language faces two difficulties. First, sign languages are hard to recognize because they combine multiple modalities, including hand movements, hand directions, and facial expressions. Second, training data are scarce: the KETI dataset is currently the only known Korean sign language dataset for deep learning. Sign language datasets for deep learning are classified into two categories, isolated and continuous sign language, and although several foreign sign language datasets have been collected over time, they too are insufficient for deep learning research. We therefore collected a large-scale Korean sign language dataset and evaluated it using TSPNet, a baseline model with state-of-the-art performance in sign language translation. The collected dataset consists of a total of 11,402 image-text pairs. Our experiment with the baseline model yields a BLEU-4 score of 3.63, which can serve as the baseline performance for the Korean sign language dataset. We hope that our experience of collecting a Korean sign language dataset helps facilitate further research on Korean sign language.
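The BLEU-4 metric used to score the baseline above can be sketched as follows. This is a simplified single-reference, unsmoothed version for illustration only; it is not the evaluation script used with TSPNet:

```python
import math
from collections import Counter

def bleu4(candidate, reference):
    """Sentence-level BLEU-4 with brevity penalty and uniform n-gram weights."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clipped n-gram matches against the reference
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        if overlap == 0:
            return 0.0  # unsmoothed: any empty n-gram overlap zeroes the score
        precisions.append(overlap / total)
    # Brevity penalty for candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)

print(bleu4("the cat sat on the mat", "the cat sat on the mat"))  # identical -> 1.0
```

BLEU-4 is reported in the 0-1 (or 0-100) range; a score of 3.63 on the 0-100 scale reflects how hard continuous sign language translation currently is.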