• Title/Abstract/Keyword: Emotion System


감성로봇을 위한 동적 감성시스템의 설계와 구현 (Design and Implementation of Dynamic Emotion System for Affective Robots)

  • 이용우;김종복;김성훈;서일홍;박명관
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2006년도 하계종합학술대회
    • /
    • pp.927-928
    • /
    • 2006
  • In this paper, we propose a dynamic emotion system built on the state equation and the output equation from control theory. In our emotion system, the state equation accepts external stimuli and generates emotions, and the output equation modifies the intensity of those emotions according to personality and circumstances. The validity of the proposed emotion system is shown by two simulations that express emotions according to personality and circumstances.
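
The abstract names a state equation and an output equation but gives no concrete form, so the following is a minimal Python sketch of one plausible reading: a linear discrete-time state equation driven by external stimuli, with the output scaling emotion intensities by personality and circumstance. All matrices, the state layout, and the scalar modulation are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Illustrative sketch (not the paper's matrices): a discrete-time state-space
# emotion model in the spirit of the abstract.
# State equation:  e[k+1] = A @ e[k] + B @ s[k]   (stimulus -> emotion state)
# Output equation: y[k]   = personality * circumstance * (C @ e[k+1])
#                  (personality/circumstance scale the expressed intensity)

A = np.array([[0.9, 0.0], [0.0, 0.8]])   # emotion decay dynamics (assumed)
B = np.array([[0.5, 0.0], [0.0, 0.7]])   # stimulus coupling (assumed)
C = np.eye(2)                            # read out both emotion dimensions

def step(e, stimulus, personality=1.0, circumstance=1.0):
    """One update of the emotion state and its expressed intensity."""
    e_next = A @ e + B @ stimulus
    y = personality * circumstance * (C @ e_next)
    return e_next, y

e = np.zeros(2)                          # [joy, anger], an assumed state layout
e, y = step(e, stimulus=np.array([1.0, 0.0]), personality=1.2, circumstance=0.8)
print(y)                                 # expressed intensities after one stimulus
```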


독일어 감정음성에서 추출한 포먼트의 분석 및 감정인식 시스템과 음성인식 시스템에 대한 음향적 의미 (An Analysis of Formants Extracted from Emotional Speech and Acoustical Implications for the Emotion Recognition System and Speech Recognition System)

  • 이서배
    • 말소리와 음성과학
    • /
    • Vol. 3, No. 1
    • /
    • pp.45-50
    • /
    • 2011
  • Formant structure of speech associated with five different emotions (anger, fear, happiness, neutral, sadness) was analysed. Acoustic separability of vowels (or emotions) associated with a specific emotion (or vowel) was estimated using F-ratio. According to the results, neutral showed the highest separability of vowels followed by anger, happiness, fear, and sadness in descending order. Vowel /A/ showed the highest separability of emotions followed by /U/, /O/, /I/ and /E/ in descending order. The acoustic results were interpreted and explained in the context of previous articulatory and perceptual studies. Suggestions for the performance improvement of an automatic emotion recognition system and automatic speech recognition system were made.
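
The separability measure is reported only as an F-ratio, and the abstract does not give its exact normalization, so the sketch below uses a common variance-ratio form (between-emotion variance of formant means over mean within-emotion variance). The formant values are made up for illustration.

```python
import numpy as np

def f_ratio(groups):
    """F-ratio = between-group variance / within-group variance.
    `groups` is a list of 1-D arrays, e.g. F1 values of one vowel,
    one array per emotion. Higher values mean better separability."""
    means = np.array([g.mean() for g in groups])
    grand_mean = np.concatenate(groups).mean()
    between = np.mean((means - grand_mean) ** 2)
    within = np.mean([g.var() for g in groups])
    return between / within

# Toy example: F1 (Hz) of vowel /A/ under three emotions (made-up numbers).
anger   = np.array([780., 800., 790.])
neutral = np.array([700., 710., 695.])
sadness = np.array([660., 670., 665.])
print(f_ratio([anger, neutral, sadness]))
```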


인간과 감정적 상호작용을 위한 '감정 엔진' (Engine of computational Emotion model for emotional interaction with human)

  • 이연곤
    • 감성과학
    • /
    • Vol. 15, No. 4
    • /
    • pp.503-516
    • /
    • 2012
  • In robots and software agents developed so far, the emotion model exists as a dependent internal component, so it is not easy to separate the emotion model and reuse it in a new system. We therefore introduce an Engine of computational Emotion model (hereafter EE) that can be coupled to any robot or agent. So that it is not tied to any particular input and can interoperate with the internals of any robot or agent, the EE excludes the input stage (perception) and the output stage (expression) and isolates only the intermediate stage of emotion generation, which purely produces and processes emotion; it thus exists as software independent of the input and output ends, that is, as an engine. The EE can interact with any input or output stage, uses the other party's emotions as well as its own, and computes a composite emotion by taking personality into account. It can be embedded inside a robot or agent as a library, or run as a separate system that communicates with it. The emotions used are the basic emotions Joy, Surprise, Disgust, Fear, Sadness, and Anger; the EE receives string-coefficient pairs as input signals through its input interface and emits them as output signals through its output interface. Internally, the EE keeps a linked list of emotional experiences for each emotion, and uses the information composed of pairs of their coefficients as the emotion-state list for generating and processing emotions. This list of emotional experiences is built from an emotion-expression vocabulary intended to promote understanding of the various emotions humans experience in everyday life. The EE could be used in interactive products that detect human emotions and respond appropriately. Since this work aims to build a system that leads people to feel that a product empathizes with them, we expect it to help products related to HRI (human-robot interaction) and HCI (human-computer interaction) provide effective emotional empathy services.
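
As a rough illustration of the interface the abstract describes, here is a hypothetical Python sketch: string-coefficient pairs in and out, a per-emotion list of experienced coefficients as the emotion state, and personality weighting in the composite output. The class name, the aggregation rule (personality-weighted mean), and the data layout are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

BASIC_EMOTIONS = ["Joy", "Surprise", "Disgust", "Fear", "Sadness", "Anger"]

@dataclass
class EmotionEngine:
    """Hypothetical sketch of the EE interface: independent of perception and
    expression, it only generates and processes emotion state."""
    personality: dict = field(default_factory=lambda: {e: 1.0 for e in BASIC_EMOTIONS})
    state: dict = field(default_factory=lambda: {e: [] for e in BASIC_EMOTIONS})

    def input(self, pairs):
        # pairs: string-coefficient pairs from any perception front-end,
        # e.g. {"Joy": 0.6, "Fear": 0.2}
        for emotion, coeff in pairs.items():
            if emotion in self.state:
                self.state[emotion].append(coeff)

    def output(self):
        # Composite emotion: mean of recorded coefficients weighted by
        # personality (an assumed aggregation rule).
        result = {}
        for emotion, coeffs in self.state.items():
            mean = sum(coeffs) / len(coeffs) if coeffs else 0.0
            result[emotion] = self.personality[emotion] * mean
        return result

ee = EmotionEngine()
ee.input({"Joy": 0.6, "Anger": 0.1})
print(ee.output())   # string-coefficient pairs on the output interface
```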


얼굴 영상을 이용한 감정 인식 시스템 개발 (Development of Emotion Recognition System Using Facial Image)

  • 김문환;주영훈;박진배;이재연;조용조
    • 한국지능시스템학회논문지
    • /
    • Vol. 15, No. 2
    • /
    • pp.191-196
    • /
    • 2005
  • Although the need for emotion recognition from color images has been growing in many areas of society, it remains an unsolved problem because of the difficulty of the recognition process. In particular, emotion recognition from facial images has many possible applications, so the need for its development keeps increasing. A system that recognizes emotions from facial images is a composite system in which a wide variety of techniques are used; designing it therefore requires research on facial image analysis, feature vector extraction, pattern recognition, and related techniques. In this paper, we propose a new emotion recognition system based on previously studied facial image techniques. The proposed system recognizes emotions with a fuzzy classifier based on fuzzy theory, which is well suited to emotion analysis. To evaluate the proposed system, an evaluation database was constructed and used to assess its performance.
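
The abstract does not detail the fuzzy classifier, so the sketch below shows only the general idea in Python: fuzzy membership functions over a facial feature, with the emotion taken as the label of maximum membership. The feature, the membership shapes, and the rule set are invented for illustration.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership function with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Assumed, illustrative rules: one facial feature (normalized mouth-corner lift
# in [-1, 1]) mapped to fuzzy memberships of three emotion labels.
def classify(mouth_lift):
    memberships = {
        "sadness":   triangular(mouth_lift, -1.0, -0.6, -0.1),
        "neutral":   triangular(mouth_lift, -0.4,  0.0,  0.4),
        "happiness": triangular(mouth_lift,  0.1,  0.6,  1.0),
    }
    return max(memberships, key=memberships.get), memberships

print(classify(0.5))   # -> ('happiness', {...})
```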

음성의 감성요소 추출을 통한 감성 인식 시스템 (The Emotion Recognition System through The Extraction of Emotional Components from Speech)

  • 박창현;심귀보
    • 제어로봇시스템학회논문지
    • /
    • Vol. 10, No. 9
    • /
    • pp.763-770
    • /
    • 2004
  • The important issues in emotion recognition from speech are feature extraction and pattern classification. Features should carry the information essential for classifying emotions, and feature selection is needed to decompose the components of speech and analyze the relation between features and emotions. In particular, the pitch component of speech carries much emotional information. Accordingly, this paper examines the relation of emotion to features such as loudness and pitch, and classifies emotions using statistics of the collected data. The paper deals with recognizing emotion from sound, whose most important emotional component is tone; the inference ability of the brain also takes part in emotion recognition. We empirically identify the emotional components of speech, run emotion recognition experiments, and propose a recognition method that uses these emotional components together with a transition probability.
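
As a toy illustration of the kind of prosodic statistics the abstract points to, here is a short Python sketch that computes pitch-contour features. The exact feature set, and the paper's transition-probability model in particular, are not reproduced; the frame-to-frame delta below is only a crude stand-in.

```python
import numpy as np

def pitch_features(f0_contour):
    """Prosodic statistics of an F0 (pitch) contour: mean, standard deviation,
    range, and mean frame-to-frame change."""
    f0 = np.asarray([f for f in f0_contour if f > 0])   # keep voiced frames only
    delta = np.diff(f0)
    return {
        "mean": f0.mean(),
        "std": f0.std(),
        "range": f0.max() - f0.min(),
        "mean_abs_delta": np.abs(delta).mean(),          # crude stand-in for pitch transitions
    }

# Toy F0 contour in Hz (made-up values); 0 marks an unvoiced frame.
print(pitch_features([0, 210, 215, 230, 0, 250, 240, 220]))
```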

감정 인식을 위한 음성의 특징 파라메터 비교 (The Comparison of Speech Feature Parameters for Emotion Recognition)

  • 김원구
    • 한국지능시스템학회:학술대회논문집
    • /
    • 한국퍼지및지능시스템학회 2004년도 춘계학술대회 학술발표 논문집 제14권 제1호
    • /
    • pp.470-473
    • /
    • 2004
  • In this paper, speech feature parameters are compared for emotion recognition using the speech signal. For this purpose, a corpus of emotional speech, recorded and classified by emotion through subjective evaluation, was used to build statistical feature vectors such as the average, standard deviation, and maximum value of pitch and energy. MFCC parameters and their derivatives, with or without cepstral mean subtraction, were also used to evaluate the performance of conventional pattern matching algorithms. Pitch and energy parameters served as prosodic information and MFCC parameters as phonetic information. In the experiments, a vector quantization based emotion recognition system was used for speaker- and context-independent emotion recognition. Experimental results showed that the vector quantization based emotion recognizer using MFCC parameters performed better than the one using pitch and energy parameters, achieving a recognition rate of 73.3% for speaker- and context-independent classification.
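
A compact sketch of the vector-quantization approach described here, assuming librosa for MFCC extraction and scikit-learn k-means as the codebook trainer; the codebook size, the use or omission of cepstral mean subtraction, and the file layout are placeholders rather than the paper's settings.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def mfcc_frames(path, n_mfcc=13):
    """MFCC feature vectors (one per frame) for an utterance."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_codebooks(files_by_emotion, codebook_size=16):
    """One VQ codebook (k-means centroids) per emotion."""
    return {emo: KMeans(n_clusters=codebook_size, n_init=10)
                    .fit(np.vstack([mfcc_frames(f) for f in files]))
            for emo, files in files_by_emotion.items()}

def classify(path, codebooks):
    """Pick the emotion whose codebook gives the lowest quantization distortion."""
    frames = mfcc_frames(path)
    def distortion(km):
        d = np.linalg.norm(frames[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
        return d.min(axis=1).mean()
    return min(codebooks, key=lambda emo: distortion(codebooks[emo]))

# Hypothetical usage with placeholder file paths:
# codebooks = train_codebooks({"anger": ["ang1.wav"], "neutral": ["neu1.wav"]})
# print(classify("test.wav", codebooks))
```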


얼굴 특징 변화에 따른 휴먼 감성 인식 (Human Emotion Recognition based on Variance of Facial Features)

  • 이용환;김영섭
    • 반도체디스플레이기술학회지
    • /
    • Vol. 16, No. 4
    • /
    • pp.79-85
    • /
    • 2017
  • Understanding human emotion is highly important in interaction between humans and machine communication systems. The most expressive and valuable way to extract and recognize human emotion is facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expression and emotion from still images. The method has three main steps: (1) detection of facial areas with a skin-color method and feature maps, (2) creation of Bezier curves on the eye map and mouth map, and (3) classification of the emotion using the Hausdorff distance. To estimate the performance of the implemented system, we evaluate the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying facial expressions and emotions.
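
A minimal Python sketch of step (3): matching a contour (for example, points sampled from the mouth Bezier curve) to emotion templates by Hausdorff distance. The template shapes and point sets are made up, and the paper's actual eye-map/mouth-map construction is not reproduced.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def classify(curve_points, templates):
    """Assign the emotion whose template curve is nearest in Hausdorff distance."""
    return min(templates, key=lambda emo: hausdorff(curve_points, templates[emo]))

# Toy example with made-up mouth-contour points and two templates.
mouth = np.array([[0, 0], [1, 0.4], [2, 0.5], [3, 0.4], [4, 0]])
templates = {
    "happiness": np.array([[0, 0], [1, 0.5], [2, 0.6], [3, 0.5], [4, 0]]),
    "sadness":   np.array([[0, 0], [1, -0.4], [2, -0.5], [3, -0.4], [4, 0]]),
}
print(classify(mouth, templates))
```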


Ahn Min-young's Jade-like Sijo, Emotion Coding by Orchid

  • Park, Inkwa
    • International Journal of Advanced Culture Technology
    • /
    • Vol. 7, No. 1
    • /
    • pp.199-208
    • /
    • 2019
  • Today, people are under serious stress, and various fields such as psychology, philosophy, and medicine try to heal them. In recent years, interest has also turned to literature therapy as a way to heal the human spirit. In the future, we may be able to treat the spirit with AI Emotion. The treatment process can be induced by a system of emotion coding in which AI Emotion delivers emotion signals to the human body. For this study, we used Ahn Min-young's Sijo, because his Sijo is well suited to this study of emotion coding. As a result, Ahn Min-young's emotion coding produced codes resembling amino acid codes. This research should be continued so that literature therapy can grow and contribute to human well-being.

30대 미혼성인자녀가 지각한 부모-자녀분화, 표현된 정서가 자녀의 심리적 우울에 미치는 영향 (A Study of the Effects on Premarital Adult Children Aged Thirties Psychological Depression by Parents-Children Differentiation and Expressed Emotion)

  • 권미애;김태현
    • 가정과삶의질연구
    • /
    • Vol. 22, No. 5
    • /
    • pp.197-210
    • /
    • 2004
  • The purpose of this study was to explore the effects of differentiation, emotional over-involvement (expressed emotion), and criticism between middle- or old-aged parents and their children, as relations of the emotional system, on the children's psychological depression. The subjects of this study were premarital adult children over 30 years old. The major findings were as follows. First, perceived mother-child differentiation was higher than father-child differentiation, and psychological depression, expressed emotion within the family, and criticism all showed average scores below the midpoint. Second, among demographic characteristics, there were significant differences according to the premarital adult children's sex, education, income, family type, father's education, and parents' marital status. Third, regression analysis showed that psychological depression was higher when parent-child differentiation was lower and when expressed emotional over-involvement and criticism within the family were higher. Based on these findings, the relations of the emotional system are very important, so therapeutic intervention and relationship improvement programs should be considered in individual and family counseling concerning parents and children.