• Title/Summary/Keyword: Emotional Expressions

The Relation between Preschoolers' Emotion Understanding and Parents' Emotion Expressiveness and Attitude Toward Children's Emotion Expressiveness (학령전 아동의 정서이해와 부모의 정서표현성 및 아동정서 수용태도와의 관계)

  • 이혜련;최보가
    • Journal of the Korean Home Economics Association / v.40 no.10 / pp.103-112 / 2002
  • This study investigated the relation between preschoolers' emotion understanding and their parents' emotion expressiveness and attitude toward children's emotion expressiveness. Subjects were ninety 3- to 5-year-old children and their parents. Parents' emotion socialization was measured with the PACES developed by Saarni (1989) and the FEQ developed by Halberstadt (1986). Preschoolers' identification of basic emotional expressions, and their expression of their own and others' feelings in various situations, were also measured. Results revealed that 5-year-old children understood emotion better than 3-year-old children, and that mothers' positive emotion expression influenced children's emotion understanding. The results are consistent with recent research showing that parents' emotion socialization may be important for preschoolers' emotion understanding.

Development and Effect of Pain Management Protocol for Nursing Home Patients with Dementia (노인 간호 요양시설에서의 치매환자 통증관리 프로토콜 개발 및 효과)

  • Chang, Sung-Ok
    • Journal of Korean Academy of Fundamentals of Nursing / v.14 no.1 / pp.29-43 / 2007
  • Purpose: This study was done to develop a pain management protocol for nursing home patients with dementia and to examine the effects of the protocol on nurses' pain assessments and interventions and on signs of pain relief in the patients. Method: The six steps of the protocol development and the evaluation of its effect are outlined. Three rounds of the Delphi technique and a one-group pretest-posttest design experiment were used. Design issues, such as sample selection and sample size, are addressed in relation to the study protocol. Results: After implementation of the pain management protocol, there were significant changes in nursing actions, including the frequency of physical examinations, utilization of pain assessment tools, and requests to doctors for discomfort management, and there were significant changes in the frequency of verbal and physical expressions of pain and in emotional patterns. Conclusion: This is the first pain management protocol for patients with dementia in Korea. However, more study will be needed to determine the methodological strength of the protocol and any necessary revisions.

Facial Expression Recognition Method Based on Residual Masking Reconstruction Network

  • Jianing Shen;Hongmei Li
    • Journal of Information Processing Systems / v.19 no.3 / pp.323-333 / 2023
  • Facial expression recognition can aid in the development of fatigue driving detection, teaching quality evaluation, and other fields. In this study, a facial expression recognition method was proposed with a residual masking reconstruction network as its backbone to achieve more efficient expression recognition and classification. The residual layer was used to acquire and capture the information features of the input image, and the masking layer was used to assign weight coefficients to the different information features, achieving accurate and effective analysis of images of different sizes. To further improve the performance of expression analysis, the loss function of the model was optimized along two aspects, the feature dimension and the data dimension, to enhance the mapping between facial features and emotion labels. The simulation results show that the ROC of the proposed method was maintained above 0.9995, so different expressions can be accurately distinguished. The precision was 75.98%, indicating excellent performance of the facial expression recognition model.
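
As a rough illustration of the block structure the abstract describes, here is a minimal sketch in PyTorch; the layer sizes, the 1x1 sigmoid gate, and the 48x48 input are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ResidualMaskingBlock(nn.Module):
    """Residual branch extracts features; masking branch weights them."""
    def __init__(self, channels: int):
        super().__init__()
        # Residual branch: captures information features of the input.
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        # Masking branch: per-feature weight coefficients in [0, 1].
        self.mask = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.residual(x)
        weights = self.mask(features)              # weight coefficients
        return self.relu(x + features * weights)   # masked residual sum

block = ResidualMaskingBlock(channels=16)
print(block(torch.randn(1, 16, 48, 48)).shape)     # torch.Size([1, 16, 48, 48])
```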

A neural network model for recognizing facial expressions based on perceptual hierarchy of facial feature points (얼굴 특징점의 지각적 위계구조에 기초한 표정인식 신경망 모형)

  • 반세범;정찬섭
    • Korean Journal of Cognitive Science / v.12 no.1_2 / pp.77-89 / 2001
  • Applying a perceptual hierarchy of facial feature points, a neural network model for recognizing facial expressions was designed. Input data were convolution values of 150 facial expression pictures with Gabor filters of 5 different sizes and 8 different orientations at each of 39 mesh points defined by MPEG-4 SNHC (Synthetic/Natural Hybrid Coding). A set of multiple regression analyses was performed with the rating values of the affective states for each facial expression and the Gabor-filtered values of the 39 feature points. The results show that the pleasure-displeasure dimension of affective states is mainly related to the feature points around the mouth and the eyebrows, while the arousal-sleep dimension is closely related to the feature points around the eyes. For filter sizes, the affective states were found to be mostly related to low spatial frequencies, and for filter orientations, to the oblique orientations. An optimized neural network model was designed on the basis of these results by reducing the original 1,560 (39x5x8) input elements to 400 (25x2x8). The optimized model could predict human affective ratings up to a correlation of 0.886 for pleasure-displeasure and 0.631 for arousal-sleep. Mapping the results of the optimized model to the six basic emotion categories (happy, sad, fear, angry, surprised, disgusted) fit 74% of human responses. These results imply that, using human principles of recognizing facial expressions, a facial expression recognition system can be optimized even with a relatively small amount of information.
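
The feature extraction described above can be sketched as follows, assuming OpenCV's Gabor kernels; the kernel parameters and the three stand-in mesh points are illustrative, not the MPEG-4 SNHC values used in the paper.

```python
import cv2
import numpy as np

image = np.random.rand(128, 128).astype(np.float32)  # stand-in face image
mesh_points = [(64, 40), (64, 90), (40, 50)]         # stand-in feature points

sizes = [7, 11, 15, 19, 23]                          # 5 filter sizes
orientations = [i * np.pi / 8 for i in range(8)]     # 8 orientations

features = []
for ksize in sizes:
    for theta in orientations:
        kernel = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 4.0,
                                    theta=theta, lambd=ksize / 2.0,
                                    gamma=0.5, psi=0)
        response = cv2.filter2D(image, cv2.CV_32F, kernel)
        # Sample the convolution value at each facial feature point.
        features.extend(response[y, x] for (x, y) in mesh_points)

# 3 points x 5 sizes x 8 orientations = 120 elements in this sketch
# (39 x 5 x 8 = 1,560 in the paper before optimization).
print(len(features))
```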

A Qualitative Study of Early School-age Children's Experiences on Social Skills Training Program (사회성 훈련 프로그램에 참가한 학령 초기 아동의 사회적 행동 변화에 대한 질적 연구)

  • Song, Seung Min;Doh, Hyun Sim;Kim, Min Jung;Kim, Soo Jee;Shin, Nana;Kim, A Youn
    • Korean Journal of Childcare and Education / v.11 no.1 / pp.329-354 / 2015
  • The purpose of this qualitative study was to develop a social skills training program for improving early school-age children's social behaviors and to investigate its effectiveness by qualitatively observing their experiences in the program. Data were collected from 7 children using the observer's descriptive and reflective notes, compliment notes by the assistant leader, the program leader's weekly journals, children's weekly journals, and video recordings. Four theme categories and 11 subcategories emerged. The theme categories were (1) relationship building, (2) changes in emotional expressions, (3) changes in prosociality, and (4) changes in social skills. This study observed early school-age children's positive changes in social behaviors and emotional expressions through the social skills program.

Risk Situation Recognition Using Facial Expression Recognition of Fear and Surprise Expression (공포와 놀람 표정인식을 이용한 위험상황 인지)

  • Kwak, Nae-Jong;Song, Teuk Seob
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.3 / pp.523-528 / 2015
  • This paper proposes an algorithm for recognizing risk situations from facial expressions. The proposed method recognizes surprise and fear among humans' various emotional expressions in order to detect a risk situation. It first extracts the facial region from the input image and detects the eye and lip regions within the extracted face. The method then applies uniform LBP to each region, discriminates the facial expression, and recognizes the risk situation. The proposed method was evaluated on the Cohn-Kanade database, which contains the six basic human facial expressions: smile, sadness, surprise, anger, disgust, and fear. The proposed method discriminates facial expressions well and reliably recognizes risk situations.
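
A minimal sketch of uniform-LBP histogram features over the eye and lip regions, assuming scikit-image; the region coordinates stand in for the detected boxes, and the downstream fear/surprise classifier is omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern

face = np.random.randint(0, 256, (120, 100), dtype=np.uint8)  # stand-in face
eye_region = face[30:55, 15:85]    # stand-in detected eye box
lip_region = face[80:105, 25:75]   # stand-in detected lip box

P, R = 8, 1  # 8 sampling points on a radius-1 circle

def uniform_lbp_histogram(region: np.ndarray) -> np.ndarray:
    lbp = local_binary_pattern(region, P, R, method="uniform")
    # Uniform LBP with P=8 yields P+2 = 10 distinct codes.
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

feature = np.concatenate([uniform_lbp_histogram(eye_region),
                          uniform_lbp_histogram(lip_region)])
print(feature.shape)  # (20,) -- fed to a fear/surprise classifier
```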

Real-Time Recognition Method of Counting Fingers for Natural User Interface

  • Lee, Doyeob;Shin, Dongkyoo;Shin, Dongil
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.5 / pp.2363-2374 / 2016
  • Communication occurs through verbal elements, which usually involve language, as well as non-verbal elements such as facial expressions, eye contact, and gestures. In particular, among these non-verbal elements, gestures are symbolic representations of physical, vocal, and emotional behaviors. This means that gestures can be signals toward a target or expressions of internal psychological processes, rather than simply movements of the body or hands. Gestures with such properties have therefore been the focus of much research on new interfaces in the NUI/NUX field. In this paper, we propose a method for detecting the hand region and recognizing the number of raised fingers based on depth information and geometric features of the hand, for application to an NUI/NUX. The hand region is detected using depth information provided by the Kinect system, and the number of fingers is identified by comparing the distances between the contour and the center of the hand region. The contour is detected using the Suzuki85 algorithm, and the number of fingers is calculated by detecting fingertips at locations of maximum distance, comparing the distances of three consecutive contour points to the center point of the hand. The average recognition rate for the number of fingers is 98.6%, and the execution time of the algorithm is 0.065 ms. The method is fast and of low complexity, yet shows a higher recognition rate and faster recognition speed than other methods. As an application example of the proposed method, this paper describes a Secret Door that recognizes a password from the number of fingers held up by a user.
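
A minimal sketch of the contour-distance finger counting, assuming OpenCV, whose cv2.findContours implements the Suzuki85 border-following algorithm; the synthetic hand mask stands in for the Kinect depth segmentation, and the 1.5x-median threshold is an illustrative choice.

```python
import cv2
import numpy as np

# Stand-in hand mask: a filled circle (palm) plus one rectangular "finger".
mask = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(mask, (100, 130), 40, 255, -1)
cv2.rectangle(mask, (95, 30), (105, 130), 255, -1)

# cv2.findContours implements the Suzuki85 border-following algorithm.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contour = max(contours, key=cv2.contourArea)

# Hand center from image moments of the contour.
m = cv2.moments(contour)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

# Distance of every contour point to the center; fingertips show up as
# runs of points whose distance exceeds a palm-radius threshold.
pts = contour[:, 0, :].astype(np.float64)
dist = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
above = dist > 1.5 * np.median(dist)

# Count runs of consecutive above-threshold points (circular contour).
fingers = int(np.sum(above & ~np.roll(above, 1)))
print("fingers:", fingers)  # expected: 1 for this stand-in mask
```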

Comparison Between Core Affect Dimensional Structures of Different Ages using Representational Similarity Analysis (표상 유사성 분석을 이용한 연령별 얼굴 정서 차원 비교)

  • Jongwan Kim
    • Science of Emotion and Sensibility / v.26 no.1 / pp.33-42 / 2023
  • Previous emotion studies employing facial expressions have focused on differences between age groups for each emotion category. Kim (2021) instead compared representations of facial expressions in a lower-dimensional emotion space, but reported only descriptive comparisons without statistical significance testing. This research used representational similarity analysis (Kriegeskorte et al., 2008) to directly compare empirical datasets from young, middle-aged, and old groups with each other and with conceptual models. In addition, individual differences multidimensional scaling (Carroll & Chang, 1970) was conducted to explore individual weights on the emotional dimensions for each age group. The results revealed that the old group was the least similar to the other age groups in the empirical datasets and under the valence model. In addition, the arousal dimension received the least weight in the old group compared to the other groups. This study directly tested the differences between the three age groups in terms of empirical datasets, conceptual models, and weights on the emotion dimensions.
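
A minimal RSA sketch with random matrices standing in for the empirical emotion-rating data; the correlation-distance RDMs and the Spearman comparison follow the general Kriegeskorte et al. (2008) recipe, not necessarily the paper's exact analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_expressions, n_dims = 10, 5   # e.g. 10 facial expressions

# Stand-in rating matrices (expressions x rating dimensions) per age group.
groups = {age: rng.random((n_expressions, n_dims))
          for age in ("young", "middle", "old")}

# Each group's representational dissimilarity matrix (RDM), vectorized.
rdms = {age: pdist(data, metric="correlation") for age, data in groups.items()}

# RSA: rank-correlate RDMs between groups; lower rho = less similar geometry.
for a, b in [("young", "middle"), ("young", "old"), ("middle", "old")]:
    rho, _ = spearmanr(rdms[a], rdms[b])
    print(f"{a} vs {b}: rho = {rho:.3f}")
```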

Development and Standardization of Modified Self-Assessment Manikin for Emotional Valence and Arousal Manikin (정서가 및 각성수준에 대한 자가 평가 마네킹 척도개발 및 표준화)

  • Kang, Eun-Ho;Choi, Jeong-Eun;Ham, Byung-Joo;Seok, Jeong-Ho;Lee, Kyoung-Uk;Kim, Won;Lee, Seung-Hwan;Lim, Hyun-Kook;Park, Young-Min;Yang, Jong-Chul;Ahn, Meekyung;Lee, Jae Seon;Chae, Jeong-Ho
    • Anxiety and Mood / v.7 no.2 / pp.113-118 / 2011
  • Objectives: The Self-Assessment Manikin (SAM) developed by Bradley and Lang is a widely used non-verbal pictorial assessment tool that measures human emotion. However, the pictures in the SAM have not been easy for Korean subjects to understand or relate to. The authors developed a new manikin (the Emotional Valence and Arousal Manikin, EVAS) modeled after Korean faces while modifying and standardizing the SAM. Methods: Forty-one healthy subjects participated in this study. They were asked to rate emotional valence and level of arousal using both the SAM and the EVAS after being exposed to pictures from the facial expressions for affective neurosciences-Korean version. The internal consistency of the EVAS and the correlation between the EVAS and the SAM were examined. Results: Internal consistencies of the valence ratings using the EVAS ranged from 0.63 (surprise) to 0.82 (happiness), and those of the arousal ratings from 0.90 to 0.95. Correlation coefficients between the SAM and the EVAS ranged from 0.61 (both surprise and disgust) to 0.84 (neutral) for valence and from 0.82 (sadness) to 0.94 (fear) for arousal. Conclusions: We developed a new manikin (the EVAS) for the Korean population by modifying and standardizing the SAM. The EVAS demonstrated good internal consistency and validity and can be used in human emotion research.
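
The reported statistics can be reproduced in outline as follows; this is a sketch with random stand-in ratings that uses the standard Cronbach's alpha formula, not the authors' analysis.

```python
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(items: np.ndarray) -> float:
    """items: subjects x items rating matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(5, 2, size=(41, 1))           # 41 subjects' latent score
evas = latent + rng.normal(0, 1, size=(41, 5))    # 5 stand-in EVAS items
sam = latent.ravel() + rng.normal(0, 1, size=41)  # stand-in SAM ratings

print("Cronbach's alpha:", round(cronbach_alpha(evas), 2))
r, _ = pearsonr(evas.mean(axis=1), sam)
print("EVAS-SAM correlation:", round(r, 2))
```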

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) is a technique used to analyze a speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has increased, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing studies have attained impressive results mostly by utilizing acted speech from skilled actors recorded in controlled environments for various scenarios. In particular, there is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expressions than spontaneous speech. For this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to conduct emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using the VGG (Visual Geometry Group) network after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations were performed on the Korean spontaneous emotional speech database from AI-Hub, consisting of 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. As a result, we achieved average accuracies of 83.5% and 73.0% for adults and young people, respectively, using time-frequency 2-dimensional spectrograms. In conclusion, our findings demonstrate that the suggested framework outperformed current state-of-the-art techniques for spontaneous speech and showed promising performance despite the difficulty of quantifying spontaneous emotional expression in speech.
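
A minimal sketch of the spectrogram-plus-VGG pipeline, assuming librosa and torchvision; the mel parameters, input size, and untrained VGG-16 are illustrative assumptions, while the 7-class head matches the AI-Hub emotion set.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import vgg16

# 1-D audio signal -> 2-D log-mel spectrogram (3 s of noise as a stand-in).
sr = 16000
audio = np.random.randn(sr * 3).astype(np.float32)
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=128)
log_mel = librosa.power_to_db(mel)

# Replicate the single channel to 3 and resize to VGG's expected input.
x = torch.from_numpy(log_mel)[None, None].repeat(1, 3, 1, 1)
x = nn.functional.interpolate(x, size=(224, 224))

model = vgg16(weights=None)               # untrained VGG-16 for illustration
model.classifier[6] = nn.Linear(4096, 7)  # joy, love, anger, fear, sadness, surprise, neutral
model.eval()
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 7])
```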