• Title/Summary/Keyword: Facial Emotions

A Deep Learning Algorithm for Fusing Action Recognition and Psychological Characteristics of Wrestlers

  • Yuan Yuan;Jun Liu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.754-774 / 2023
  • Wrestling is one of the popular events in modern sports, yet a wrestling match between athletes is difficult to describe quantitatively, and deep learning can support wrestling training through human recognition techniques. Based on the characteristics of the latest wrestling competition rules and human recognition technologies, a wrestling competition video analysis and retrieval system is proposed. The study combines literature review, observation, interviews, and mathematical statistics to analyze and discuss the application of the technology, and applies the system to targeted movement techniques. A deep learning-based facial recognition method for analyzing the psychological features of classical wrestlers in training and competition after the implementation of the new rules is proposed. The experimental results showed that the proportion of neutral emotions for both male and female wrestlers was about 50%, indicating that the wrestlers' mentality was relatively stable before the intense physical confrontation; testing also confirmed the stability of the system.
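
A minimal sketch of the kind of per-frame facial-emotion tally the abstract describes, assuming a pretrained 7-class emotion CNN saved as "emotion_cnn.h5" (a hypothetical file, not the authors' model) and OpenCV's stock Haar face detector:

```python
# Sample frames from a match video, classify each detected face's emotion,
# and report the share of neutral expressions (the ~50% figure above).
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

face_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("emotion_cnn.h5")  # hypothetical pretrained weights

def frame_emotions(video_path, step=30):
    """Yield one predicted emotion label per sampled frame."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_det.detectMultiScale(gray, 1.3, 5):
                face = cv2.resize(gray[y:y+h, x:x+w], (48, 48)) / 255.0
                yield EMOTIONS[int(np.argmax(
                    model.predict(face[None, :, :, None], verbose=0)[0]))]
        idx += 1
    cap.release()

labels = list(frame_emotions("wrestling_match.mp4"))  # hypothetical clip
if labels:
    print("neutral share:", labels.count("neutral") / len(labels))
```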

Analysis of Users' Emotions on Lighting Effect of Artificial Intelligence Devices (인공지능 디바이스의 조명효과에 대한 사용자의 감정 평가 분석)

  • Hyeon, Yuna;Pan, Young-hwan;Yoo, Hoon-Sik
    • Science of Emotion and Sensibility / v.22 no.3 / pp.35-46 / 2019
  • Artificial intelligence (AI) technology has evolved to recognize and learn users' language, voice tone, and facial expressions so that systems can respond to users' emotions in various contexts. Many AI-based services, particularly those centered on communication with users, provide emotional interaction. However, research on nonverbal interaction as a means of expressing emotion in AI systems is still insufficient. We studied the effect of lighting on users' emotional interaction with an AI device, focusing on color and flickering motion. The AI device used in this study expresses emotions with six colors of light (red, yellow, green, blue, purple, and white) and with a three-level flickering effect (high, middle, and low velocity). We studied the responses of 50 men and women in their 20s and 30s to the emotions expressed by the light colors and flickering effects of the AI device. We found that each light color represented an emotion largely similar to the emotional image reported in a previous color-sensibility study. The rate of flickering produced changes in emotional arousal and balance: arousal changed with similar intensity across all colors, whereas the balance changes were somewhat related to the emotional images in the previous color-sensibility study, though for different colors. As AI systems and devices become more diverse, our findings are expected to contribute to designing users' emotional interactions with AI devices through lighting.
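
The 6-color by 3-speed design lends itself to a simple per-condition aggregation of ratings; a minimal sketch with pandas, using assumed column names and illustrative numbers rather than the study's data:

```python
# Aggregate mean arousal/valence ratings per (color, flicker) condition.
# Column names and values are illustrative assumptions, not the study's data.
import pandas as pd

ratings = pd.DataFrame({
    "color":   ["red", "red", "blue", "blue", "green", "green"],
    "flicker": ["high", "low", "high", "low", "high", "low"],
    "arousal": [6.1, 4.2, 5.8, 3.9, 5.5, 3.7],
    "valence": [4.0, 4.4, 5.2, 5.6, 5.1, 5.4],
})

# Mean emotional response for each color x flicker condition.
summary = ratings.groupby(["color", "flicker"])[["arousal", "valence"]].mean()
print(summary)
```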

Korean Emotion Vocabulary: Extraction and Categorization of Feeling Words (한국어 감정표현단어의 추출과 범주화)

  • Sohn, Sun-Ju;Park, Mi-Sook;Park, Ji-Eun;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.15 no.1 / pp.105-120 / 2012
  • This study aimed to develop a Korean emotion vocabulary list that functions as an important tool in understanding human feelings. The focus was on the careful extraction of the most widely used feeling words, as well as their categorization into emotion groups according to their meaning in real-life use. A total of 12 professionals (including graduate students majoring in Korean) took part in the study. Using the Korean 'word frequency list' developed by Yonsei University and various sorting processes, the study condensed the original 64,666 emotion words into a final list of 504 words. In the next step, a total of 80 social work students evaluated each word and classified it into whichever of the following categories seemed most appropriate: 'happiness', 'sadness', 'fear', 'anger', 'disgust', 'surprise', 'interest', 'boredom', 'pain', 'neutral', and 'other'. Findings showed that, of the 504 feeling words, 426 expressed a single emotion, 72 reflected two emotions (i.e., the same word indicating two distinct emotions), and 6 showed three emotions. Among the 426 single-emotion words, 'sadness' was predominant, followed by 'anger' and 'happiness'. The 72 two-emotion words were mostly combinations of 'anger' and 'disgust', followed by 'sadness' and 'fear', and 'happiness' and 'interest'. The significance of the study lies in the development of a highly adaptive list of Korean feeling words that can be carefully combined with other emotion signals, such as facial expressions, to optimize emotion recognition research, particularly in the Human-Computer Interface (HCI) area. The identification of feeling words that connote more than one emotion is also noteworthy.
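
The single- versus multi-emotion tally described above reduces to counting how many categories raters assigned each word; a minimal sketch, with a handful of illustrative words standing in for the study's 504:

```python
# Tally words by how many distinct emotion categories they were assigned.
# The sample words and category labels below are illustrative only.
from collections import Counter

word_categories = {
    "기쁘다": {"happiness"},
    "서럽다": {"sadness"},
    "분하다": {"anger", "disgust"},
    "두렵다": {"fear"},
    "얄밉다": {"anger", "disgust"},
}

by_arity = Counter(len(cats) for cats in word_categories.values())
print("single-emotion words:", by_arity.get(1, 0))
print("two-emotion words:   ", by_arity.get(2, 0))
print("three-emotion words: ", by_arity.get(3, 0))
```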

Study on Development of Automated Program Model for Measuring Sensibility Preference of Portrait (인물사진의 감성 선호도 측정 자동화 프로그램 모형 개발 연구)

  • Lee, Chang-Seop;Jung, Da-Yeon;Lee, Eun-Ju;Har, Dong-Hwan
    • The Journal of the Korea Contents Association / v.18 no.9 / pp.34-43 / 2018
  • The purpose of this study is to develop a measurement program model for human-oriented products based on the relationship between portrait evaluation factors and general portrait preferences. We added new items essential to image evaluation by analyzing previous studies. As a first step we identified the facial focus, and the portraits were then evaluated using both objective and subjective image quality evaluation items. RSC Contrast and Dynamic Range were selected as the objective evaluation items, and the numerical values of each image could be evaluated by statistical analysis. Facial Exposure, Composition, Position, Ratio, Out of focus, and the Emotions and Color tone of the image were selected as the subjective evaluation items. In addition, a new face recognition algorithm is applied to judge emotions, so manufacturers can obtain information with which to analyze people's emotions. The developed program compiles the evaluation items quantitatively and qualitatively when evaluating portraits, and can be used as an analysis program that produces data for developing product evaluation models better suited to general users of imaging systems.
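
A minimal sketch of the objective step, pairing OpenCV face detection with simple global contrast and dynamic-range statistics; RMS contrast is used here as a stand-in, since the abstract does not define the exact "RSC Contrast" metric:

```python
# Locate the face, then compute contrast and dynamic-range statistics.
# "portrait.jpg" is a hypothetical input file.
import cv2
import numpy as np

img = cv2.imread("portrait.jpg", cv2.IMREAD_GRAYSCALE)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(img, 1.3, 5)

norm = img.astype(np.float64) / 255.0
rms_contrast = norm.std()                        # global RMS contrast
dynamic_range = int(img.max()) - int(img.min())  # spread of used tonal levels

print(f"faces found: {len(faces)}")
print(f"RMS contrast: {rms_contrast:.3f}, dynamic range: {dynamic_range}")
```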

Moderating Effects of User Gender and AI Voice on the Emotional Satisfaction of Users When Interacting with a Voice User Interface (음성 인터페이스와의 상호작용에서 AI 음성이 성별에 따른 사용자의 감성 만족도에 미치는 영향)

  • Shin, Jong-Gyu;Kang, Jun-Mo;Park, Yeong-Jin;Kim, Sang-Ho
    • Science of Emotion and Sensibility / v.25 no.3 / pp.127-134 / 2022
  • This study sought to identify the voice user interface (VUI) design parameters that evoke positive user emotions. Six VUI design parameters that could affect emotional user satisfaction were considered. The moderating effects of user gender and the design parameters were analyzed to determine the appropriate conditions for user satisfaction when interacting with the VUI. An interactive VUI system that could modify the six parameters was implemented using the Wizard of Oz experimental method. User emotions were assessed from the users' facial expression data, which were then converted into a valence score. The frequency analysis and chi-square test found statistically significant moderating effects of user gender and AI voice. These results imply that it is beneficial to consider users' gender when designing voice-based interactions. Adult/male/high-tone voices for male users and adult/female/mid-tone voices for female users are recommended as general guidelines for future VUI designs. Future analyses that consider various human factors will be able to assess human-AI interactions more precisely from a UX perspective.
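
A minimal sketch of the reported frequency analysis: a chi-square test of independence between user gender and facial-expression valence (positive vs. negative) for one voice condition, with illustrative counts rather than the study's data:

```python
# Chi-square test of independence on a 2x2 gender-by-valence table.
# The counts are illustrative assumptions, not the study's results.
import numpy as np
from scipy.stats import chi2_contingency

#                     positive  negative
observed = np.array([[34, 16],    # male users
                     [21, 29]])   # female users

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```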

Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.2 / pp.271-279 / 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotions by using physiological signals, and such recognition is important for applying emotion detection in human-computer interaction systems. Method: 122 college students participated in the experiment. Three different emotional stimuli were presented to participants, and physiological signals, i.e., EDA (electrodermal activity), SKT (skin temperature), PPG (photoplethysmogram), and ECG (electrocardiogram), were measured for 1 minute as a baseline and for 1~1.5 minutes during the emotional state. The obtained signals were analyzed for 30 seconds of the baseline and the emotional state, and 27 features were extracted from these signals. Statistical analysis for emotion classification was done by DFA (discriminant function analysis, SPSS 15.0) using the difference values obtained by subtracting the baseline values from the emotional-state values. Results: Physiological responses during the emotional states differed significantly from baseline, and the accuracy rate of emotion classification was 84.7%. Conclusion: Our study identified that emotions can be classified from various physiological signals. However, future studies are needed to obtain additional signals from other modalities such as facial expression, face temperature, or voice to improve the classification rate, and to examine the stability and reliability of this result compared with the accuracy of emotion classification using other algorithms. Application: This work gives emotion recognition studies a better chance of recognizing various human emotions using physiological signals and can be applied to human-computer interaction systems for emotion detection. It can also be useful in developing emotion theory, profiling emotion-specific physiological responses, and establishing the basis for emotion recognition systems in human-computer interaction.
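
A minimal sketch of the classification step: discriminant analysis on the baseline-subtracted features. The study used SPSS's DFA; scikit-learn's LDA is a comparable stand-in, and the random arrays below are placeholders for the 27 extracted features:

```python
# Discriminant analysis on difference scores (emotional state minus baseline),
# mirroring the study's design. Data here are simulated placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, n_features = 122, 27
X_baseline = rng.normal(size=(n, n_features))         # placeholder signals
X_emotion = X_baseline + rng.normal(size=(n, n_features))
y = rng.integers(0, 3, size=n)                        # boredom / pain / surprise

X = X_emotion - X_baseline   # difference scores, as in the study
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```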

Research on Classification of Human Emotions Using EEG Signal (뇌파신호를 이용한 감정분류 연구)

  • Zubair, Muhammad;Kim, Jinsul;Yoon, Changwoo
    • Journal of Digital Contents Society / v.19 no.4 / pp.821-827 / 2018
  • Affective computing has gained increasing interest in recent years with the development of potential applications in human-computer interaction (HCI) and healthcare. Although considerable research has been done on human emotion recognition, physiological signals have received less attention than speech and facial expressions. In this paper, electroencephalogram (EEG) signals from different brain regions were investigated using modified wavelet energy features. To minimize redundancy and maximize relevancy among features, the mRMR algorithm was applied. EEG recordings from the publicly available DEAP database were used to classify four classes of emotions with a multiclass support vector machine. The proposed approach shows significant performance compared to existing algorithms.
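
A minimal sketch of the feature pipeline: wavelet sub-band energies per trial feeding a multiclass SVM. Random arrays stand in for DEAP recordings, and the paper's modified energy features and mRMR selection step are not reproduced here:

```python
# Wavelet-decompose each trial, use sub-band energies as features,
# then fit a multiclass SVM. Signals are simulated placeholders.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energies(signal, wavelet="db4", level=4):
    """Energy of each wavelet sub-band of a 1-D signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(0)
n_trials, n_samples = 80, 1024
X = np.array([wavelet_energies(rng.normal(size=n_samples))
              for _ in range(n_trials)])   # placeholder EEG trials
y = rng.integers(0, 4, size=n_trials)      # four emotion classes

clf = SVC(kernel="rbf").fit(X, y)   # one-vs-one multiclass by default
print("train accuracy:", clf.score(X, y))
```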

Design and Implementation of Walking Motions Applied with Player's Emotion Factors According to Variable Statistics of RPG Game Character (RPG게임캐릭터의 능력치변화량에 따라 감정요소가 적용된 걷기동작 구현)

  • Kang, Hyun-Ah;Kim, Mi-Jin
    • The Journal of the Korea Contents Association / v.7 no.5 / pp.63-71 / 2007
  • The technique of changing facial expressions has been adopted from several commercial games, and design methods that build player empathy with a game character are expected to diversify in the future. In this paper, as one such design method, we implement walking motions for a game character that apply 'human-emotion' factors according to the character's stat changes in the RPG genre. After analyzing the emotions of human facial expressions and emotion-driven walking motions in examples from character animation theory, we divide walking motions that apply human-emotion factors into 8 types based on their relationship to stat factors in the RPG genre. These are then applied to a knight character, which among RPG characters most closely resembles a human's physical build, and walking motions are generated as the stats vary. As the player controls a game character endowed with 'human-emotion' factors, the player's empathy with the character grows, and the level of immersion in gameplay is also expected to increase.
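
A minimal sketch of the mapping the paper implies: a stat-change value selects an emotion-tagged walk animation. The thresholds, emotion names, and animation labels are illustrative assumptions, not the paper's actual 8-type scheme:

```python
# Pick an emotion-tagged walk animation from a character's stat change.
# All thresholds and labels below are illustrative placeholders.
def select_walk(stat_delta: int) -> str:
    """Map the change in a character's stats to a walk animation label."""
    if stat_delta > 10:
        return "walk_joy"        # big gain: upright, bouncy stride
    if stat_delta > 0:
        return "walk_confident"
    if stat_delta == 0:
        return "walk_neutral"
    if stat_delta > -10:
        return "walk_worried"
    return "walk_dejected"       # big loss: slumped, slow stride

for delta in (15, 3, 0, -4, -20):
    print(delta, "->", select_walk(delta))
```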

Emotion-based Real-time Facial Expression Matching Dialogue System for Virtual Human (감정에 기반한 가상인간의 대화 및 표정 실시간 생성 시스템 구현)

  • Kim, Kirak;Yeon, Heeyeon;Eun, Taeyoung;Jung, Moonryul
    • Journal of the Korea Computer Graphics Society / v.28 no.3 / pp.23-29 / 2022
  • Virtual humans are implemented with dedicated modeling tools such as the Unity 3D engine in virtual spaces (virtual reality, mixed reality, the metaverse, etc.). Various human modeling tools have been introduced to implement a virtual human's appearance, voice, expression, and behavior in ways similar to real people, and virtual humans implemented with these tools can communicate with users to some extent. However, most virtual humans so far have remained unimodal, using only text or speech. As AI technologies advance, the outdated machine-centered dialogue system is now changing into a human-centered, natural multimodal system. Using several pre-trained networks, we implemented an emotion-based multimodal dialogue system that generates human-like utterances and displays appropriate facial expressions in real time.
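
A minimal sketch of the emotion-to-expression flow: classify the emotion of a generated reply with a public pretrained classifier (an example model, not necessarily one the authors used), then map it to an avatar expression preset:

```python
# Classify a reply's emotion, then choose a facial-expression preset.
# The model name is an example public checkpoint, not the paper's network;
# the expression presets are illustrative placeholders.
from transformers import pipeline

emotion_clf = pipeline("text-classification",
                       model="j-hartmann/emotion-english-distilroberta-base")

EXPRESSIONS = {"joy": "smile", "sadness": "frown", "anger": "scowl",
               "fear": "wide_eyes", "surprise": "raised_brows",
               "disgust": "wrinkled_nose", "neutral": "rest"}

def expression_for(reply: str) -> str:
    """Map the reply's dominant emotion to an avatar expression preset."""
    label = emotion_clf(reply)[0]["label"]
    return EXPRESSIONS.get(label, "rest")

print(expression_for("I'm so glad you came back to talk to me!"))
```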

Arithmetic Fluctuation Effect affected by Induced Emotional Valence (유발된 정서가에 따른 계산 요동의 효과)

  • Kim, Choong-Myung
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.2 / pp.185-191 / 2018
  • This study examined the type and extent of interference between induced emotion and a subsequent arithmetic operation. The experiment was carried out to determine the influence of the induced emotions (anger, joy, and sorrow) and stimulus types (picture and sentence) on the cognitive processing load that may block interactions among the constituents of working memory. The subjects were 32 undergraduates of similar age and education, who were instructed to attend to the induced emotion by imitating the facial expression and to make correct decisions during a remainder-calculation task. In the results, the stimulus types did not exhibit any difference, but there was a significant difference among the induced emotion types: responses were slower under positive emotion (the joy condition) than under the other emotions (anger and sorrow). More specifically, error rates and delayed-correct-response rates for the emotion types were analyzed to determine which phase the slower responses were associated with. Delayed responses in the joy condition induced by sentence stimuli were identified with the difference in error rates, and those induced by picture stimuli with the delayed-correct-response rate. These findings suggest not only that induced positive emotion increased response times compared to negative emotions, but also that picture stimuli readily produce arithmetic fluctuation whereas sentence stimuli result in arithmetic failure.
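
A minimal sketch of the central comparison: a one-way ANOVA on response times across the three induced emotions, with simulated arrays standing in for the 32 participants' task data:

```python
# One-way ANOVA on response times across induced emotions.
# The means, spreads, and samples are illustrative, not the study's data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
rt_joy = rng.normal(1.25, 0.15, 32)     # slower under positive emotion
rt_anger = rng.normal(1.10, 0.15, 32)
rt_sorrow = rng.normal(1.12, 0.15, 32)

f_stat, p_val = f_oneway(rt_joy, rt_anger, rt_sorrow)
print(f"F={f_stat:.2f}, p={p_val:.4f}")
```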