• Title/Summary/Keyword: Emotion processing

Incomplete Cholesky Decomposition based Kernel Cross Modal Factor Analysis for Audiovisual Continuous Dimensional Emotion Recognition

  • Li, Xia;Lu, Guanming;Yan, Jingjie;Li, Haibo;Zhang, Zhengyan;Sun, Ning;Xie, Shipeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.2
    • /
    • pp.810-831
    • /
    • 2019
  • Recently, continuous dimensional emotion recognition from audiovisual cues has attracted increasing attention in both theory and practice. The large amount of data involved in the recognition process decreases the efficiency of most bimodal information fusion algorithms. In this paper, a novel algorithm, the incomplete Cholesky decomposition based kernel cross-modal factor analysis (ICDKCFA), is presented and employed for continuous dimensional audiovisual emotion recognition. After the ICDKCFA feature transformation, two basic fusion strategies, feature-level fusion and decision-level fusion, are explored to combine the transformed visual and audio features for emotion recognition. Finally, extensive experiments are conducted to evaluate the ICDKCFA approach on the AVEC 2016 Multimodal Affect Recognition Sub-Challenge dataset. The experimental results show that the ICDKCFA method runs faster than the original kernel cross-modal factor analysis while achieving comparable performance. Moreover, the ICDKCFA method outperforms other common information fusion methods, such as fusion based on canonical correlation analysis, kernel canonical correlation analysis, and cross-modal factor analysis.
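The core speed-up the abstract describes is approximating a large kernel matrix by a low-rank pivoted incomplete Cholesky factor before the cross-modal analysis. A minimal sketch of that decomposition step, assuming a toy RBF kernel on random data (not the authors' implementation or the AVEC features):

```python
import numpy as np

def incomplete_cholesky(K, tol=1e-6):
    """Pivoted incomplete Cholesky of a PSD kernel matrix K.

    Returns G (n x m, typically m << n) with K ~= G @ G.T, stopping
    when the largest residual diagonal entry falls below `tol`.
    """
    n = K.shape[0]
    d = np.diag(K).astype(float).copy()     # residual diagonal
    G = np.zeros((n, 0))
    pivots = []
    while d.max() > tol:
        i = int(np.argmax(d))               # pivot: largest residual
        g = (K[:, i] - G @ G[i].T) / np.sqrt(d[i])
        G = np.column_stack([G, g])
        d -= g ** 2                         # update residual diagonal
        pivots.append(i)
    return G, pivots

# Toy RBF kernel on random 3-D points (illustrative data only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)
G, pivots = incomplete_cholesky(K, tol=1e-4)
err = np.abs(K - G @ G.T).max()             # approximation error
```

Downstream, the kernel CFA then works with the much smaller factor `G` instead of the full `n x n` matrix, which is where the reported speed gain comes from.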

Emotion recognition in speech using hidden Markov model (은닉 마르코프 모델을 이용한 음성에서의 감정인식)

  • 김성일;정현열
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.3 no.3
    • /
    • pp.21-26
    • /
    • 2002
  • This paper presents a new approach to identifying human emotional states such as anger, happiness, normal, sadness, and surprise. This is accomplished by using discrete duration continuous hidden Markov models (DDCHMM). For this, emotional feature parameters are first defined from the input speech signals. In this study, we used prosodic parameters such as pitch, energy, and their respective derivatives, which were then used to train HMMs for recognition. Speaker-adapted emotional models based on maximum a posteriori (MAP) estimation were also considered for speaker adaptation. The simulation results showed that vocal emotion recognition rates gradually increased with the number of adaptation samples.

  • PDF
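The prosodic front end this abstract describes (pitch, energy, and their derivatives per frame) can be sketched as follows; this is an illustrative reconstruction with assumed frame sizes and a simple autocorrelation pitch tracker, not the authors' feature extractor, and the DDCHMM training itself is omitted:

```python
import numpy as np

def prosodic_features(signal, sr=16000, frame=400, hop=160):
    """Frame-level pitch and energy plus their deltas: the kind of
    prosodic parameter sequence an emotion HMM would be trained on."""
    feats = []
    for start in range(0, len(signal) - frame, hop):
        w = signal[start:start + frame]
        energy = float(np.sum(w ** 2))
        # Autocorrelation pitch estimate, searched in the 60-400 Hz band
        ac = np.correlate(w, w, mode="full")[frame - 1:]
        lo, hi = sr // 400, sr // 60
        lag = lo + int(np.argmax(ac[lo:hi]))
        feats.append([sr / lag, energy])
    feats = np.asarray(feats)
    deltas = np.gradient(feats, axis=0)     # frame-to-frame derivatives
    return np.hstack([feats, deltas])       # [pitch, energy, d_pitch, d_energy]

# 200 ms synthetic "voiced" tone at 120 Hz (illustrative input only)
sr = 16000
t = np.arange(int(0.2 * sr)) / sr
F = prosodic_features(np.sin(2 * np.pi * 120 * t), sr)
```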

A Study on Visual Perception based Emotion Recognition using Body-Activity Posture (사용자 행동 자세를 이용한 시각계 기반의 감정 인식 연구)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB
    • /
    • v.18B no.5
    • /
    • pp.305-314
    • /
    • 2011
  • Research into the visual perception of human emotion to recognize intentions has traditionally focused on facial expressions. Recently, researchers have turned to the more challenging field of emotional expression through body posture and activity. The proposed work approaches the recognition of basic emotional categories from body postures using a neural model that applies the visual perception principles of neurophysiology. In keeping with information-processing models of the visual cortex, this work constructs a biologically plausible hierarchy of neural detectors that can discriminate six basic emotional states from static views of the associated activity postures. The proposed model, which is tolerant to parameter variations, demonstrates its feasibility in an evaluation against human test subjects on a set of activity postures.
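Structurally, a "hierarchy of neural detectors" means part-level detectors whose pooled responses feed six category units. A toy sketch of that two-stage structure, with assumed sizes and untrained random weights (purely illustrative of the architecture, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def relu(x):
    return np.maximum(x, 0.0)

class PostureHierarchy:
    """Toy detector hierarchy: local posture-part detectors feed
    normalized responses into six emotion category units."""
    def __init__(self, n_inputs=20, n_detectors=32):
        # Weights are random stand-ins; a real model would learn them.
        self.W1 = rng.normal(scale=0.5, size=(n_detectors, n_inputs))
        self.W2 = rng.normal(scale=0.5, size=(len(EMOTIONS), n_detectors))

    def classify(self, posture_vec):
        parts = relu(self.W1 @ posture_vec)     # part-level detector layer
        pooled = parts / (parts.max() + 1e-9)   # normalization for scale tolerance
        scores = self.W2 @ pooled               # emotion category units
        return EMOTIONS[int(np.argmax(scores))]

model = PostureHierarchy()
label = model.classify(rng.normal(size=20))     # a hypothetical posture vector
```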

Engine of computational Emotion model for emotional interaction with human (인간과 감정적 상호작용을 위한 '감정 엔진')

  • Lee, Yeon Gon
    • Science of Emotion and Sensibility
    • /
    • v.15 no.4
    • /
    • pp.503-516
    • /
    • 2012
  • According to research on robots and software agents to date, computational emotion models are system-dependent, so it is difficult to separate an emotion model from an existing system and reuse it in a new one. Therefore, this paper introduces the Engine of computational Emotion model (hereafter, EE), which can be integrated with any robot or agent. The EE is a software engine independent of its inputs and outputs: it handles only the generation and processing of emotions, without the input (perception) and output (expression) phases. The EE can be interfaced with any inputs and outputs, and it produces emotions based not only on the emotion itself but also on the personality and current emotions of the person. In addition, the EE can be embedded in any robot or agent as a kind of software library, or used as a separate system that communicates with them. In the EE, the emotions are the primary emotions: joy, surprise, disgust, fear, sadness, and anger. Each emotion is a vector consisting of a string and a coefficient; the EE receives these vectors from the input interface and sends them to the output interface. In the EE, each emotion is connected to a list of emotional experiences, and these lists, consisting of a string and a coefficient for each emotional experience, are used to generate and process emotional states. The emotional experiences consist of an emotion vocabulary covering the various emotional experiences of humans. The EE can be used to build interactive products that respond appropriately to human emotions. The significance of this study lies in the development of a system that induces people to feel that a product sympathizes with them. Therefore, the EE can help provide efficient emotional-sympathy services in products in the HRI and HCI areas.

  • PDF
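The interface contract the abstract describes, emotion vectors of (string, coefficient) flowing in and out of a core that only generates and processes emotional state, can be sketched as a small library-style class. The decay dynamic and thresholds here are assumptions for illustration, not the paper's actual update rules:

```python
from dataclasses import dataclass

PRIMARY = ("joy", "surprise", "disgust", "fear", "sadness", "anger")

@dataclass
class EmotionVector:
    """(string, coefficient) pair, as described for the EE's interfaces."""
    name: str
    coefficient: float

class EmotionEngine:
    """Minimal EE-style core: no perception or expression phase, just
    receiving emotion vectors, blending them into a state, and emitting."""
    def __init__(self, decay=0.9):
        self.state = {e: 0.0 for e in PRIMARY}
        self.decay = decay  # assumed dynamic: older emotions fade each step

    def receive(self, vec):
        """Input interface: fold one incoming emotion vector into the state."""
        if vec.name not in self.state:
            raise ValueError(f"unknown emotion: {vec.name}")
        for e in self.state:
            self.state[e] *= self.decay
        self.state[vec.name] = min(1.0, self.state[vec.name] + vec.coefficient)

    def emit(self):
        """Output interface: current state as emotion vectors."""
        return [EmotionVector(e, c) for e, c in self.state.items() if c > 0.05]

ee = EmotionEngine()
ee.receive(EmotionVector("joy", 0.7))
ee.receive(EmotionVector("fear", 0.3))
out = ee.emit()
```

Because the class touches no sensors or actuators, the same core could sit behind any perception/expression layer, which is the decoupling the paper argues for.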

Individual Differences in Regional Gray Matter Volumes According to the Cognitive Style of Young Adults

  • Hur, Minyoung;Kim, Chobok
    • Science of Emotion and Sensibility
    • /
    • v.22 no.4
    • /
    • pp.65-74
    • /
    • 2019
  • Extant research has proposed that the Object-Spatial-Verbal cognitive style can elucidate individual differences in the preference for modality-specific information. However, no studies have yet ascertained whether this type of information processing evinces structural correlations in the brain. Therefore, the current study used voxel-based morphometry (VBM) analyses to investigate individual differences in gray matter volumes based on the Object-Spatial-Verbal cognitive style. For this purpose, ninety healthy young adults were recruited to participate in the study. They were administered the Korean version of the Object-Spatial-Verbal cognitive style questionnaire, and their anatomical brain images were scanned. The VBM results demonstrated that the participants' verbal scores were positively correlated with regional gray matter volumes (rGMVs) in the right superior temporal sulcus/superior temporal gyrus, the bilateral parahippocampal gyrus/fusiform gyrus, and the left inferior temporal gyrus. In addition, the rGMVs in these regions were negatively correlated with the relative spatial preference scores obtained by individual participants. The findings of the investigation provide anatomical evidence that the verbal cognitive style could be decidedly relevant to higher-level language processing, but not to basic language processing.

Difficulty in Facial Emotion Recognition in Children with ADHD (주의력결핍 과잉행동장애의 이환 여부에 따른 얼굴표정 정서 인식의 차이)

  • An, Na Young;Lee, Ju Young;Cho, Sun Mi;Chung, Young Ki;Shin, Yun Mi
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • v.24 no.2
    • /
    • pp.83-89
    • /
    • 2013
  • Objectives : It is known that children with attention-deficit hyperactivity disorder (ADHD) experience significant difficulty in recognizing facial emotion, which involves processing of emotional facial expressions rather than speech, compared to children without ADHD. The objective of this study was to investigate the differences in facial emotion recognition between children with ADHD and normal control children. Methods : The children for our study were recruited from the Suwon Project, a cohort comprising a non-random convenience sample of 117 nine-year-old ethnic Koreans. The parents of the study participants completed study questionnaires such as the Korean version of the Child Behavior Checklist, the ADHD Rating Scale, and the Kiddie-Schedule for Affective Disorders and Schizophrenia-Present and Lifetime Version. The Facial Expression Recognition Test of the Emotion Recognition Test was used for the evaluation of facial emotion recognition, and the ADHD Rating Scale was used for the assessment of ADHD. Results : Children with ADHD (N=10) were found to have impaired recognition in Emotional Differentiation and Contextual Understanding compared with normal controls (N=24). We found no statistically significant difference between the two groups in the recognition of positive facial emotions (happiness and surprise) or negative facial emotions (anger, sadness, disgust, and fear). Conclusion : The results of our study suggest that facial emotion recognition may be closely associated with ADHD, after controlling for covariates, although more research is needed.

Neo-Confucian Study on the Ministerial Fire's Theory of JuDanGe (주단계(朱丹溪) 상화론(相火論)의 성리학적(性理學的) 연구(硏究))

  • Kim, Yeong-Mok
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.20 no.4
    • /
    • pp.784-792
    • /
    • 2006
  • The Neo-Confucian study of the theory of ministerial fire (相火論) consists of the rule of jungjeong (中正), the principle of form and use (體用), and that of the real nature and emotion (性情) of the human and ethical mind (人心道心). The present study evaluates a fundamental concept of the theory of ministerial fire, one of the traditional medical ideologies of China, by projecting Neo-Confucianism onto the theory. The theory of ministerial fire of Judange (朱丹溪) was understood through the ontological principle of Heaven-Human-Earth, the ontological structure of form and use, and the structure in which the mind comprises real nature and emotion (心統性情). Judange's ethical and human mind (道心人心) and the constancy and transition (常變) of the seven emotions are related within the ontological structure of form and use. The real nature of the human being, constituted a priori by the rule of Heaven, is unitarily constructed by the form and use (體用) of the inactivated real nature and the activated seven emotions, and the activated seven emotions show the dual forms of the appropriate (中節) and the inappropriate (不中節). The emperor's and ministerial fire (君火相火), which stand in a relationship of Heaven-Human synchronization, represent all kinds of fire and are classified into heaven fire (天火) and human fire (人火). The emperor's fire is triggered as inactivated fire and the ministerial fire (相火) as activated fire. The ministerial fire has the dual forms of physiological and pathological ministerial fire. In view of the foregoing analysis, it is clear that the manifestation of one's real nature and the ministerial fire follow the same principle and logic. When one's real nature, that is, the inactivated seven emotions, is maintained and the activated seven emotions are kept appropriate, the ministerial fire can be stable and one can keep one's health and well-being in mind and body.

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.6
    • /
    • pp.284-290
    • /
    • 2024
  • Speech emotion recognition (SER) is a technique used to analyze a speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has increased, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing researchers have attained impressive results by utilizing acted-out speech from skilled actors in a controlled environment for various scenarios. In particular, there is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expressions than spontaneous speech. For this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to conduct emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using a VGG (Visual Geometry Group) network after converting the 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, consisting of 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. As a result, we achieved average accuracies of 83.5% for adults and 73.0% for young people using time-frequency 2-dimensional spectrograms. In conclusion, our findings demonstrate that the suggested framework outperformed current state-of-the-art techniques for spontaneous speech and showed promising performance despite the difficulty of quantifying spontaneous emotional expression.
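The preprocessing step the abstract relies on, turning a 1-D audio signal into the 2-D time-frequency image a CNN consumes, can be sketched with a plain STFT. Frame sizes here are assumed for illustration, and the VGG classifier itself is omitted:

```python
import numpy as np

def spectrogram(signal, n_fft=512, hop=128):
    """Log-magnitude spectrogram via a Hann-windowed STFT: the 1-D
    signal becomes the 2-D (freq x time) image fed to the CNN."""
    win = np.hanning(n_fft)
    frames = [signal[s:s + n_fft] * win
              for s in range(0, len(signal) - n_fft, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))   # (time, freq) magnitudes
    return np.log1p(spec).T                      # (freq, time) image

# 1 s synthetic 440 Hz tone as a stand-in for a speech clip
sr = 16000
t = np.arange(sr) / sr
img = spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = int(np.argmax(img.mean(axis=1)))      # dominant frequency bin
peak_hz = peak_bin * sr / 512
```

For a pure tone the energy concentrates in the bin nearest its frequency, which is a quick sanity check that the time-frequency axes are oriented as intended before training.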

A Study on the Emotional Evaluation of fabric Color Patterns

  • Koo, Hyun-Jin;Kang, Bok-Choon;Um, Jin-Sup;Lee, Joon-Whan
    • Science of Emotion and Sensibility
    • /
    • v.5 no.3
    • /
    • pp.11-20
    • /
    • 2002
  • Two new models are developed for the objective evaluation of fabric color patterns by applying multiple regression analysis and an adaptive fuzzy-rule-based system. The physical features of the fabric color patterns are extracted through digital image processing, and the emotional features are collected based on the psychological experiments of Soen [3, 4]. The principal physical features are the hue, saturation, intensity, and texture of the color patterns. The emotional features are represented by thirteen pairs of adverse adjectives. The multiple regression analysis and the adaptive fuzzy system are used as tools to analyze the relations between the physical and emotional features. As a result, both of the proposed models show competent approximation performance and linguistic interpretations similar to Soen's psychological experiments.

  • PDF
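The multiple-regression half of the approach maps the four physical features to a rating on one adjective pair. A minimal sketch with simulated features and ratings (the data, weights, and the "warm-cool" pair are illustrative assumptions, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical physical features per pattern: hue, saturation,
# intensity, texture measure (the paper's principal features).
X = rng.uniform(size=(60, 4))
true_w = np.array([0.8, -0.5, 0.3, 0.6])
# Simulated subject ratings on one adjective pair, e.g. "warm - cool"
y = X @ true_w + 0.05 * rng.normal(size=60)

# Multiple regression: least-squares fit with an intercept term
A = np.column_stack([np.ones(60), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Fitting one such regression per adjective pair yields the thirteen linear emotion predictors; the fuzzy-rule-based model plays the analogous role where a linguistic, rule-form interpretation is wanted instead.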

Effect of the Weaving Preparatory Process Characteristics on the PET Fabrics Sensibility (제직 준비 공정특성이 PET 직물 감성에 미치는 영향)

  • Kim, Seung-Jin
    • Science of Emotion and Sensibility
    • /
    • v.11 no.1
    • /
    • pp.123-129
    • /
    • 2008
  • The purpose of this study is to analyse the effect of weaving preparatory process characteristics on PET fabric sensibility, through assessment of handle, garment formability, and sewability, for the enhancement of the physical properties of PET fabrics. For this purpose, eleven fabric specimens processed through the interlacing, pirn winding, 2-for-1 twisting, weaving, and dyeing and finishing processes were prepared, and the processing tension and interlacing intensity after each process were measured under various processing conditions.

  • PDF