• Title/Summary/Keyword: Facial Emotions

Search results: 159

Exploration of deep learning facial emotion recognition technology in college students' mental health (딥러닝의 얼굴 정서 식별 기술 활용-대학생의 심리 건강을 중심으로)

  • Li, Bo; Cho, Kyung-Duk
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.3, pp.333-340, 2022
  • COVID-19 has made everyone anxious, and people need to keep their distance, so collective assessment and screening of college students' mental health is necessary at the start of each academic year. This study trains a multi-layer perceptron neural network for deep-learning-based facial emotion recognition. After training, real pictures and videos were input for face detection; once the positions of faces in the samples were detected, emotions were classified, and the predicted emotional labels were sent back and displayed on the pictures. The results show an accuracy of 93.2% on the test set and 95.57% in practice. The recognition rates are 95% for Anger, 97% for Disgust, 96% for Happiness, 96% for Fear, 97% for Sadness, 95% for Surprise, and 93% for Neutral. Such efficient emotion recognition can provide objective data support for capturing negative emotions. A deep-learning emotion recognition system can complement traditional psychological activities, providing additional dimensions of psychological indicators for mental health work.
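
The pipeline this abstract describes (detect faces in an image, then classify each face's emotion and attach the label to the picture) might be sketched as follows in Python. The 48x48 input size, the two-hidden-layer MLP, and the Haar-cascade detector are assumptions for illustration, not the authors' reported configuration.

```python
# A rough sketch, assuming a Haar-cascade detector, 48x48 grayscale inputs,
# and a small MLP; the paper's exact architecture is not given.
import cv2
import numpy as np
from tensorflow import keras

EMOTIONS = ["Anger", "Disgust", "Fear", "Happiness", "Sadness", "Surprise", "Neutral"]

def build_mlp(input_dim=48 * 48, n_classes=len(EMOTIONS)):
    # Flattened grayscale face patch -> two dense layers -> softmax over emotions.
    return keras.Sequential([
        keras.layers.Input(shape=(input_dim,)),
        keras.layers.Dense(512, activation="relu"),
        keras.layers.Dense(256, activation="relu"),
        keras.layers.Dense(n_classes, activation="softmax"),
    ])

def classify_faces(image_bgr, model, cascade):
    # Detect faces, then classify the emotion of each detected face region.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype("float32") / 255.0
        probs = model.predict(face.reshape(1, -1), verbose=0)[0]
        results.append(((x, y, w, h), EMOTIONS[int(np.argmax(probs))]))
    return results

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = build_mlp()  # would be trained on labeled face images before use
```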

A Study on Utilization of Facial Recognition-based Emotion Measurement Technology for Quantifying Game Experience (게임 경험 정량화를 위한 안면인식 기반 감정측정 기술 활용에 대한 연구)

  • Kim, Jae Beom; Jeong, Hong Kyu; Park, Chang Hoon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology, v.7 no.9, pp.215-223, 2017
  • Various methods for making games interesting are used in the development process. Because the experiential aspects are difficult to measure and analyze, development usually measures and analyzes only the aspects whose data are easy to quantify; this is a clear limitation, given how important the game experience is. This study proposes a system that recognizes a game user's face and measures emotional changes from the recognized information, in order to easily quantify the experience of a user who is playing the game. The system recognizes emotions from the player's face and records them in real time. The recorded data include the time and figures related to the progress of the game, along with numerical values for the emotions recognized from the face. Using the recorded data, it is possible to judge what kind of emotion the game induces in the user at a given point in time. The numerical data on the experiential aspects recorded with this system are expected to help developers build the game according to their intentions.
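
A minimal sketch of the real-time recording such a system performs: timestamped emotion scores are written alongside game-progress markers so that emotions can later be aligned with specific moments of play. The CSV schema and field names here are assumptions, not the paper's actual format.

```python
# Hypothetical logger pairing game events with per-emotion confidence scores.
import csv
import time

class EmotionLogger:
    def __init__(self, path):
        self.file = open(path, "w", newline="")
        self.writer = csv.writer(self.file)
        self.writer.writerow(["timestamp", "game_event", "emotion", "score"])

    def log(self, game_event, emotion_scores):
        # emotion_scores: mapping of emotion label -> confidence in [0, 1],
        # e.g. the output of a facial emotion classifier running per frame.
        now = time.time()
        for emotion, score in emotion_scores.items():
            self.writer.writerow([now, game_event, emotion, f"{score:.3f}"])

    def close(self):
        self.file.close()

logger = EmotionLogger("session.csv")
logger.log("level_1_boss", {"happiness": 0.72, "surprise": 0.18})
logger.close()
```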

Effects of Facial Expression of Others on Moral Judgment (타인의 얼굴 표정이 도덕적 판단에 미치는 영향)

  • Lee, WonSeob; Kim, ShinWoo
    • Korean Journal of Cognitive Science, v.30 no.2, pp.85-104, 2019
  • Past research showed that the presence of others induces morally desirable behavior and stricter judgments; that is, the presence of others makes people behave as moral beings. On the other hand, little research has tested what effect the facial expressions of others have on moral judgments. In this research, we tested the effects of emotion conveyed by facial expression on moral judgments. To this end, we presented descriptions of immoral or prosocial behavior along with facial expressions of various emotions (in particular, disgust and happiness) and asked participants to make moral judgments on the behavior in the descriptions. In Experiment 1, facial expression did not affect moral judgments, but the variability of judgments increased when descriptions and facial expressions were incongruent. In Experiment 2, we addressed potential reasons for the null effect and conducted the experiment using the same procedure. Subjects in Experiment 2 made stricter judgments of immoral behavior with disgusted faces than with happy faces, but the effect did not occur for prosocial behavior. In Experiment 3, we repeated the same experiment after having subjects consider themselves the actor in the descriptions. The results replicated the effects of facial expression found in Experiment 2, but there was no effect of the actor on moral judgments. This research showed that the facial expressions of others specifically affect moral judgments of immoral behavior but not of prosocial behavior. In the general discussion, we discuss the results and the limitations of this research further.

A Survey of Objective Measurement of Fatigue Caused by Visual Stimuli (시각자극에 의한 피로도의 객관적 측정을 위한 연구 조사)

  • Kim, Young-Joo; Lee, Eui-Chul; Whang, Min-Cheol; Park, Kang-Ryoung
    • Journal of the Ergonomics Society of Korea, v.30 no.1, pp.195-202, 2011
  • Objective: The aim of this study is to investigate and review previous research on objectively measuring fatigue caused by visual stimuli. We also analyze the possibility of alternative visual fatigue measurement methods using facial expression recognition and gesture recognition. Background: In most previous research, visual fatigue is commonly measured by subjective methods based on surveys or interviews. However, subjective evaluation methods can be affected by individual variation in feelings or by other kinds of stimuli. To address these problems, visual fatigue measurement methods based on signal and image processing have been widely researched. Method: To analyze the signal- and image-processing-based methods, we categorized previous works into three groups: bio-signal, brainwave, and eye-image-based methods. We also analyzed the possibility of adopting facial expression or gesture recognition to measure visual fatigue. Results: Bio-signal and brainwave based methods are problematic because they can be degraded not only by visual stimuli but also by other kinds of external stimuli affecting other sense organs. In eye-image-based methods, relying on a single feature such as blink frequency or pupil size is also problematic, because a single feature can easily be confounded by other emotions. Conclusion: A multi-modal measurement method is required, fusing several features extracted from bio-signals and images; alternative methods using facial expression or gesture recognition can also be considered. Application: An objective visual fatigue measurement method can be applied to the quantitative and comparative measurement of visual fatigue of next-generation display devices in terms of human factors.
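
As a toy illustration of the multi-modal fusion the conclusion calls for, several normalized indicators could be combined into one fatigue score. The feature names and weights below are assumptions, not values taken from any of the surveyed papers.

```python
# Late fusion of hypothetical fatigue indicators, each pre-normalized to [0, 1].
def fatigue_score(blink_rate, pupil_variation, theta_alpha_ratio,
                  weights=(0.4, 0.3, 0.3)):
    """Weighted sum of normalized features from eye images and brainwaves."""
    features = (blink_rate, pupil_variation, theta_alpha_ratio)
    return sum(w * f for w, f in zip(weights, features))

# e.g. frequent blinking, moderate pupil change, elevated EEG theta/alpha ratio
print(f"fatigue: {fatigue_score(0.8, 0.5, 0.6):.2f}")
```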

Microsurgical reconstruction of posttraumatic large soft tissue defects on face (광범위한 안면외상 환자에서의 미세술기를 이용한 재건술)

  • Baek, Wooyeol; Song, Seung Yong; Roh, Tai Suk; Lee, Won Jai
    • Journal of the Korean Medical Association, v.61 no.12, pp.724-731, 2018
  • Our faces can express a remarkable range of subtle emotions and silent messages. Because the face is essential for the complex social interactions that are part of our everyday lives, aesthetic repair and restoration of function are important tasks that we must not take lightly. Soft-tissue defects occur in trauma patients and require thorough evaluation, planning, and surgical treatment to achieve optimal functional and aesthetic outcomes while minimizing the risk of complications. Recognizing the full nature of the injury and developing a logical treatment plan help determine whether there will be future aesthetic or functional deformities. Proper classification of the wound enables appropriate treatment and helps predict postoperative appearance and function. Comprehensive care of trauma patients requires a diverse breadth of skills, beginning with an initial evaluation, followed by resuscitation. Traditionally, facial defects have been managed with closure or grafting and with prosthetic obturators. Sometimes, however, large defects cannot be closed using simple methods. Such cases, which involve exposure of critical structures, bone, joint spaces, and neurovascular structures, require more complex treatment. We reviewed and classified causes of significant trauma resulting in facial injuries that were reconstructed by microsurgical techniques rather than simple sutures or coverage with partial flaps. A local flap is a good choice for reconstruction, but large defects are hard to cover with a local flap alone. Early microsurgical reconstruction of a large facial defect is an excellent choice for aesthetic and functional outcomes.

Difference in reading facial expressions as the empathy-systemizing type - focusing on emotional recognition and emotional discrimination - (공감-체계화 유형에 따른 얼굴 표정 읽기의 차이 - 정서읽기와 정서변별을 중심으로 -)

  • Tae, Eun-Ju; Cho, Kyung-Ja; Park, Soo-Jin; Han, Kwang-Hee; Ghim, Hei-Rhee
    • Science of Emotion and Sensibility, v.11 no.4, pp.613-628, 2008
  • Mind reading is an essential part of normal social functioning, and empathy plays a key role in social understanding. This study investigated how individual differences affect reading emotions in facial expressions, focusing on empathizing and systemizing. Two experiments were conducted. In study 1, participants performed an emotion recognition test using facial expressions, to investigate how emotion recognition differs by empathy-systemizing type, facial area, and emotion type. Study 2 examined the same question using an emotion discrimination test instead, with every other condition the same as in study 1. Study 2 largely replicated study 1: there were significant differences across facial areas and emotion types, as well as an interaction effect between facial area and emotion type. In addition, study 2 found an interaction effect between empathy-systemizing type and emotion type; that is, how much people empathize and systemize makes a difference in emotion discrimination. These results suggest that the empathy-systemizing type is more appropriate for explaining emotion discrimination than emotion recognition.


Posture features and emotion predictive models for affective postures recognition (감정 자세 인식을 위한 자세특징과 감정예측 모델)

  • Kim, Jin-Ok
    • Journal of Internet Computing and Services, v.12 no.6, pp.83-94, 2011
  • The main research issue in affective computing is to give a machine the ability to recognize a person's emotion and to react to it properly. Efforts in that direction have mainly focused on facial and oral cues for reading emotions; postures have recently been considered as well. This paper aims to discriminate emotions from posture by identifying and measuring the saliency of posture features that play a role in affective expression. To do so, affective postures from human subjects were first collected using a motion capture system, and the emotional features of posture were then described with spatial features. Through standard statistical techniques, we verified that there is a statistically significant correlation between the emotion intended by the acting subjects and the emotion perceived by the observers. Discriminant analysis was used to build affective posture predictive models and to measure the saliency of the proposed set of posture features in discriminating between six basic emotional states. The proposed features and models were evaluated using the correlation between the actor and observer posture sets. Quantitative experimental results show that the proposed set of features discriminates well between emotions and that the resulting predictive models perform well.
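
The predictive-model step could be sketched with scikit-learn's linear discriminant analysis over spatial posture features. The feature count, the random placeholder data, and the cross-validation setup below are assumptions, not the authors' motion-capture dataset.

```python
# Discriminant analysis separating six basic emotions from posture features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

rng = np.random.default_rng(0)
X = rng.normal(size=(180, 12))                    # e.g. joint angles, limb extension
y = rng.integers(0, len(EMOTIONS), size=180)      # emotion label per posture sample

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)         # how well features separate emotions
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```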

Development of An Interactive System Prototype Using Imitation Learning to Induce Positive Emotion (긍정감정을 유도하기 위한 모방학습을 이용한 상호작용 시스템 프로토타입 개발)

  • Oh, Chanhae; Kang, Changgu
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.14 no.4, pp.239-246, 2021
  • In the fields of computer graphics and HCI, there are many studies on systems that create characters and interact with users naturally. Such studies have focused on the character's response to the user's behavior; generating character behavior that elicits positive emotions from the user remains a difficult problem. In this paper, we develop a prototype of an interaction system that elicits positive emotions from users through the movement of a virtual character, using artificial intelligence techniques. The proposed system is divided into face recognition and motion generation for the virtual character. A depth camera is used for face recognition, and the recognized data are passed to motion generation. We use imitation learning as the learning model. In motion generation, random actions are performed in response to the user's initial facial expression data, and the actions that elicit positive emotions from the user are learned through continuous imitation learning.
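
The feedback loop the abstract describes (try actions, keep the ones that draw positive facial responses) can be approximated with a simple bandit-style sketch. The action set, exploration rate, and update rule below are stand-ins for the paper's imitation-learning model, not its actual algorithm.

```python
# Toy loop: reinforce character actions that elicit positive facial responses.
import random

ACTIONS = ["wave", "smile", "nod", "dance"]   # placeholder character motions

values = {a: 0.0 for a in ACTIONS}            # running estimate per action
counts = {a: 0 for a in ACTIONS}

def choose_action(epsilon=0.2):
    if random.random() < epsilon:             # sometimes try a random action
        return random.choice(ACTIONS)
    return max(values, key=values.get)        # otherwise the best one so far

def update(action, positive_emotion_score):
    # positive_emotion_score in [0, 1], e.g. from a facial expression classifier
    counts[action] += 1
    values[action] += (positive_emotion_score - values[action]) / counts[action]

a = choose_action()
update(a, 0.9)   # the user responded with a strongly positive expression
```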

Development of FACS-based Android Head for Emotional Expressions (감정표현을 위한 FACS 기반의 안드로이드 헤드의 개발)

  • Choi, Dongwoon; Lee, Duk-Yeon; Lee, Dong-Wook
    • Journal of Broadcast Engineering, v.25 no.4, pp.537-544, 2020
  • This paper proposes an android robot head based on the Facial Action Coding System (FACS) and the generation of emotional expressions using FACS. The term android robot refers to robots with a human-like appearance; these robots have artificial skin and muscles. To express emotions, the location and number of artificial muscles had to be determined, so it was necessary to anatomically analyze the motions of the human face with FACS. In FACS, expressions are composed of action units (AUs), which serve as the basis for determining the location and number of artificial muscles in the robot. The android head developed in this study had servo motors and wires corresponding to 30 artificial muscles, and it was covered with artificial skin in order to make facial expressions. Spherical joints and springs were used to develop micro-eyeball structures, and the arrangement of the 30 servo motors was based on an efficient wire-routing design. The developed android head had 30 DOFs and could express 13 basic emotions. The recognition rate of these basic emotional expressions was evaluated by spectators at an exhibition.
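
The FACS mapping might look like the following sketch, where each emotion activates a set of action units (AUs) and each AU drives one or more servo-actuated artificial muscles. The AU-to-servo channels here are illustrative, not the paper's actual 30-motor wiring; the AU compositions (e.g. AU6 + AU12 for happiness) follow standard FACS conventions.

```python
# Hypothetical mapping from emotions to AUs, and from AUs to servo channels.
AU_TO_SERVOS = {
    "AU1": [0, 1],     # inner brow raiser
    "AU4": [2],        # brow lowerer
    "AU6": [5, 6],     # cheek raiser
    "AU12": [10, 11],  # lip corner puller
}

EMOTION_TO_AUS = {
    "happiness": {"AU6": 1.0, "AU12": 0.8},   # AU intensity in [0, 1]
    "anger": {"AU4": 1.0},
}

def express(emotion, set_servo):
    # set_servo(channel, intensity) would command one wire-driven muscle.
    for au, intensity in EMOTION_TO_AUS.get(emotion, {}).items():
        for channel in AU_TO_SERVOS[au]:
            set_servo(channel, intensity)

express("happiness", lambda ch, v: print(f"servo {ch} -> {v:.1f}"))
```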

An Exploratory Investigation on Visual Cues for Emotional Indexing of Image (이미지 감정색인을 위한 시각적 요인 분석에 관한 탐색적 연구)

  • Chung, SunYoung; Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science, v.48 no.1, pp.53-73, 2014
  • Given the recent growth of emotion-based computing environments, it is necessary to focus on emotional access to, and use of, multimedia resources including images. The purpose of this study is to identify the visual cues for emotion in images. To this end, the study selected five basic emotions (love, happiness, sadness, fear, and anger) and interviewed twenty participants about the visual cues for those emotions. A total of 620 visual cues mentioned by participants were collected from the interviews and coded into five categories and 18 sub-categories of visual cues. The findings show that facial expressions, actions/behaviors, and syntactic features were significant for perceiving a specific emotion in an image, and each emotion showed distinctive cue characteristics. The emotion of love was highly related to visual cues such as actions and behaviors, while happiness was substantially related to facial expressions. The sad emotion was perceived primarily through actions and behaviors, fear was perceived considerably through facial expressions, and anger was highly related to syntactic features such as lines, shapes, and sizes. These findings imply that emotional indexing could be effective when content-based features are considered in combination with concept-based features.