• Title/Summary/Keyword: emotion recognition (감정인식)

Emotional States Recognition of Text Data Using Hidden Markov Models (HMM을 이용한 채팅 텍스트로부터의 화자 감정상태 분석)

  • 문현구;장병탁
    • Proceedings of the Korean Information Science Society Conference / 2001.10b / pp.127-129 / 2001
  • We propose an emotion recognition system that analyzes an input sentence and outputs the transitions of its emotional state according to predefined categories. Unlike our previous approach, which used the Naive Bayes algorithm, the new system uses a Hidden Markov Model (HMM). An HMM is well suited to uncovering the hidden state transitions behind observations generated from a particular distribution, and under the assumption that a single sentence can express several emotions, it is a natural algorithm for emotion recognition. This paper outlines the HMM-based emotion recognition system and presents improved experimental results over the previous version.
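
The abstract does not include an implementation, but its core idea, decoding a hidden sequence of emotional states from observed sentence cues, maps directly onto Viterbi decoding over an HMM. Below is a minimal self-contained sketch; the state names, observation symbols, and all probability values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Toy HMM for sentence-level emotion decoding (all values are illustrative
# assumptions). Hidden states are emotions; observations are coarse
# word-level cues extracted from each chat sentence.
states = ["neutral", "happy", "angry"]
obs_symbols = ["praise", "insult", "smalltalk"]

start_p = np.array([0.6, 0.2, 0.2])              # P(state at t=0)
trans_p = np.array([[0.7, 0.2, 0.1],             # P(next state | state)
                    [0.3, 0.6, 0.1],
                    [0.3, 0.1, 0.6]])
emit_p = np.array([[0.1, 0.1, 0.8],              # P(observation | state)
                   [0.7, 0.05, 0.25],
                   [0.05, 0.8, 0.15]])

def viterbi(obs):
    """Return the most likely hidden emotion sequence for observed cues."""
    T, N = len(obs), len(states)
    delta = np.zeros((T, N))                     # best log-prob per state
    psi = np.zeros((T, N), dtype=int)            # backpointers
    delta[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(trans_p)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(emit_p[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return [states[s] for s in reversed(path)]

# A chat turn seen as a cue sequence: smalltalk -> praise -> insult
print(viterbi([2, 0, 1]))   # ['neutral', 'happy', 'angry']
```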

Development of Context Awareness and Service Reasoning Technique for Handicapped People (멀티 모달 감정인식 시스템 기반 상황인식 서비스 추론 기술 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.1 / pp.34-39 / 2009
  • Human emotion is a subjective response with an impulsive character that unconsciously expresses intentions and needs, and it carries rich contextual information about the users of ubiquitous computing environments and intelligent robot systems. Indicators from which a user's emotion can be inferred include facial images, voice signals, and biological signal spectra. In this paper, we produce separate facial and voice emotion recognition results from facial images and speech to improve the convenience and efficiency of emotion recognition. We then extract the best-fitting features from the image and sound data to raise the recognition rate, and implement a multi-modal emotion recognition system based on feature fusion. Finally, using the recognition results, we demonstrate the feasibility of a service reasoning method for the ubiquitous computing environment based on a Bayesian network and a ubiquitous context scenario.
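
As a rough sketch of the feature-fusion step described above, the code below concatenates a per-sample facial feature vector and voice feature vector and trains a single classifier on the fused representation. The feature dimensions, random stand-in data, and choice of classifier are assumptions for illustration; the paper's actual features and its Bayesian-network service reasoning stage are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for per-sample modality features (dimensions are assumed):
# e.g. 64-D facial-appearance features and 32-D voice features.
face_feats = rng.normal(size=(200, 64))
voice_feats = rng.normal(size=(200, 32))
labels = rng.integers(0, 4, size=200)        # 4 emotion classes (assumed)

# Feature-level fusion: concatenate the modality vectors per sample.
fused = np.concatenate([face_feats, voice_feats], axis=1)

clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print("train accuracy:", clf.score(fused, labels))
```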

Posture features and emotion predictive models for affective postures recognition (감정 자세 인식을 위한 자세특징과 감정예측 모델)

  • Kim, Jin-Ok
    • Journal of Internet Computing and Services / v.12 no.6 / pp.83-94 / 2011
  • A main research issue in affective computing is to give a machine the ability to recognize a person's emotion and to react to it properly. Efforts in that direction have mainly focused on facial and oral cues for reading emotions; postures have only recently been considered as well. This paper aims to discriminate emotions from posture by identifying and measuring the saliency of the posture features that play a role in affective expression. To do so, affective postures from human subjects are first collected using a motion capture system, and the emotional features of each posture are then described with spatial features. Using standard statistical techniques, we verified that there is a statistically significant correlation between the emotion intended by the acting subjects and the emotion perceived by the observers. Discriminant analysis is used to build affective posture predictive models and to measure the saliency of the proposed set of posture features in discriminating between six basic emotional states. The proposed features and models are evaluated using the correlation between the actor and observer posture sets. Quantitative experimental results show that the proposed feature set discriminates well between emotions and that the resulting predictive models perform well.
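
A minimal sketch of the discriminant-analysis step, assuming random stand-in data: LDA is fitted to posture feature vectors labeled with six emotions, and the magnitude of its coefficients is read as a rough per-feature saliency measure. The feature count and data below are illustrative, not the paper's motion-capture features.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Stand-ins for spatial posture features (assumed): e.g. joint angles and
# inter-joint distances flattened into one vector per posture.
X = rng.normal(size=(300, 24))               # 24 posture features (assumed)
y = rng.integers(0, 6, size=300)             # 6 basic emotional states

lda = LinearDiscriminantAnalysis().fit(X, y)

# The absolute LDA coefficients give a rough per-feature saliency measure
# for discriminating between the emotion classes.
saliency = np.abs(lda.coef_).mean(axis=0)
print("most salient feature index:", int(saliency.argmax()))
print("train accuracy:", lda.score(X, y))
```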

Multi-Emotion Regression Model for Recognizing Inherent Emotions in Speech Data (음성 데이터의 내재된 감정인식을 위한 다중 감정 회귀 모델)

  • Moung Ho Yi;Myung Jin Lim;Ju Hyun Shin
    • Smart Media Journal / v.12 no.9 / pp.81-88 / 2023
  • Recently, online communication has increased with the spread of non-face-to-face services during COVID-19. In non-face-to-face situations, the other person's opinions and emotions are recognized through modalities such as text, speech, and images. Research on multimodal emotion recognition, which combines multiple modalities, is currently very active. Among these modalities, emotion recognition from speech data is attracting attention as a means of understanding emotions through acoustic and linguistic information, but emotions are usually recognized from a single speech feature value. Because a variety of emotions coexist in a conversation in complex ways, a method for recognizing multiple emotions is needed. In this paper, we therefore propose a multi-emotion regression model that preprocesses speech data, extracts feature vectors, and accounts for the passage of time in order to recognize the complex emotions inherent in speech.
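
A minimal sketch of multi-emotion regression, assuming stand-in features and targets: one regressor predicts continuous intensities for several emotions at once, and the previous segment's features are appended to each row as a crude nod to the passage of time. None of this reproduces the paper's actual preprocessing or model.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(2)

# Stand-ins for preprocessed speech feature vectors (assumed): one row per
# utterance segment, ordered in time.
X = rng.normal(size=(500, 40))
# Multi-emotion targets: continuous intensities for four emotions at once
# (assumed label set: happiness, sadness, anger, neutrality).
Y = rng.uniform(0.0, 1.0, size=(500, 4))

# Account for the passage of time (crude, assumed): append the previous
# segment's features to each row before regression.
X_time = np.hstack([X, np.vstack([X[:1], X[:-1]])])

model = MultiOutputRegressor(Ridge(alpha=1.0)).fit(X_time, Y)
print("predicted intensities:", np.round(model.predict(X_time[:1]), 2))
```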

Emotion Recognition and Expression using Facial Expression (얼굴표정을 이용한 감정인식 및 표현 기법)

  • Ju, Jong-Tae;Park, Gyeong-Jin;Go, Gwang-Eun;Yang, Hyeon-Chang;Sim, Gwi-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.295-298 / 2007
  • In this paper, features for four basic emotions (joy, sadness, anger, surprise) are extracted from human facial expressions and recognized, and the results are used to implement an emotion expression system. High-dimensional image feature data are first converted into low-dimensional feature data using Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA) is then applied to extract more discriminative feature vectors, from which the emotion is recognized. The recognized result is fed to a facial expression system that expresses the emotion.
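
The PCA-then-LDA pipeline described above is straightforward to sketch with scikit-learn; the image size, component count, and random stand-in data below are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(3)

# Stand-ins for flattened grayscale face images (assumed size 32x32).
X = rng.normal(size=(400, 32 * 32))
y = rng.integers(0, 4, size=400)   # joy, sadness, anger, surprise

# PCA reduces the high-dimensional image space; LDA then finds directions
# that best separate the four emotion classes.
model = Pipeline([
    ("pca", PCA(n_components=50)),
    ("lda", LinearDiscriminantAnalysis()),
]).fit(X, y)

print("train accuracy:", model.score(X, y))
```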

Feature Comparison of Emotion Recognition Models using Face Images (얼굴사진 기반 감정인식 모델의 특성 분석)

  • Kim, MinGeyung;Yang, Jiyoon;Choi, Yoo-Joo
    • Annual Conference of KIPS / 2022.11a / pp.615-617 / 2022
  • As preliminary work toward an ensemble network that combines a deep network for face-image-based emotion recognition with one based on voice sound, this paper classifies existing deep neural network models for recognizing emotion from face images according to how they process the input data and analyzes the characteristics of each approach. We also build emotion recognition networks based on facial appearance features in several configurations, compare their performance, and select the best-performing network for later use as a component of the ensemble network.
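
As a point of reference for the kind of network being compared, here is a minimal face-image emotion classifier in PyTorch; the architecture, input size, and emotion count are assumptions for illustration, not one of the models evaluated in the paper.

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Minimal CNN over grayscale face crops (architecture is assumed)."""
    def __init__(self, n_emotions: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 48x48 -> 24x24
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 24x24 -> 12x12
        )
        self.classifier = nn.Linear(32 * 12 * 12, n_emotions)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = EmotionCNN()
dummy = torch.randn(8, 1, 48, 48)   # batch of 8 grayscale 48x48 faces
print(model(dummy).shape)           # torch.Size([8, 7])
```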

Development of Vision based Emotion Recognition Robot (비전 기반의 감정인식 로봇 개발)

  • Park, Sang-Sung;Kim, Jung-Nyun;An, Dong-Kyu;Kim, Jae-Yeon;Jang, Dong-Sik
    • Proceedings of the Korean Information Science Society Conference / 2005.07b / pp.670-672 / 2005
  • This paper describes a vision-based emotion recognition robot. We propose face detection and emotion recognition algorithms that use skin color and the geometric information of the face, and describe the robot system we developed. For face detection, the RGB color space is converted to the CIELab color space, skin-color candidate regions are extracted, and the face is detected with a face filter that exploits the geometric relationships of the face. The geometric features are used to locate the eyes, nose, and mouth, which serve as the basic data for expression recognition. Emotion recognition windows are applied to the eyebrow and mouth regions, and feature values for emotion recognition are extracted from the changes in pixel values and region size within each window. The extracted values are compared with samples obtained in advance by experiment to determine the expressed emotion; the result is sent to the robot via serial communication, and the robot renders the expression with motors mounted on its face.
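
A minimal sketch of the skin-color candidate extraction step, assuming OpenCV: the image is converted from BGR to CIELab and thresholded. The threshold ranges below are rough illustrative assumptions, not the values used in the paper, and the geometric face filter is left out.

```python
import cv2
import numpy as np

def skin_candidate_mask(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of skin-color candidate regions."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2Lab)
    # OpenCV stores Lab channels in 0-255; a/b near 128 are neutral.
    lower = np.array([40, 135, 130], dtype=np.uint8)   # assumed bounds
    upper = np.array([230, 175, 180], dtype=np.uint8)
    mask = cv2.inRange(lab, lower, upper)
    # Remove small speckles before geometric face filtering.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Usage: mask = skin_candidate_mask(cv2.imread("frame.png"))
```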

Facial Expression Recognition using ICA-Factorial Representation Method (ICA-factorial 표현법을 이용한 얼굴감정인식)

  • Han, Su-Jeong;Kwak, Keun-Chang;Go, Hyoun-Joo;Kim, Sung-Suk;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.3 / pp.371-376 / 2003
  • In this paper, we propose a method for recognizing facial expressions using the ICA (Independent Component Analysis)-factorial representation method. Facial expression recognition consists of two stages. First, in the feature extraction stage, the high-dimensional face space is transformed into a low-dimensional feature space using PCA (Principal Component Analysis), and the feature vectors are then extracted with the ICA-factorial representation method. The second stage, recognition, is performed with a KNN (K-Nearest Neighbor) algorithm based on the Euclidean distance measure. We constructed a facial expression database for six basic expressions (happiness, sadness, anger, surprise, fear, dislike) and obtained better performance than previous works.
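
The two-stage pipeline above is easy to sketch with scikit-learn: PCA for dimensionality reduction, FastICA as a simplified stand-in for the ICA-factorial representation, and Euclidean-distance KNN for recognition. The dimensions and random stand-in data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)

# Stand-ins for flattened face images (assumed 32x32), six expressions.
X = rng.normal(size=(360, 32 * 32))
y = np.repeat(np.arange(6), 60)    # happiness, sadness, anger, ...

# Stage 1: PCA to a low-dimensional space, then an ICA representation of
# the reduced data (a simplified stand-in for ICA-factorial).
X_pca = PCA(n_components=40).fit_transform(X)
X_ica = FastICA(n_components=20, random_state=0,
                max_iter=1000).fit_transform(X_pca)

# Stage 2: Euclidean-distance KNN on the ICA features.
knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean").fit(X_ica, y)
print("train accuracy:", knn.score(X_ica, y))
```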

Robust Speech Recognition using Vocal Tract Normalization for Emotional Variation (성도 정규화를 이용한 감정 변화에 강인한 음성 인식)

  • Kim, Weon-Goo;Bang, Hyun-Jin
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.6 / pp.773-778 / 2009
  • This paper studies training methods that are less affected by emotional variation, for developing a robust speech recognition system. For this purpose, the effect of emotional variation on the speech signal was examined using a speech database containing various emotions. The performance of a speech recognition system trained on emotionless speech deteriorates when the test speech contains emotion, because of the emotional mismatch between the test and training data. In this study, we observe that the speaker's vocal tract length is affected by emotional variation, and that this effect is one of the reasons the recognition performance degrades. We therefore use a vocal tract normalization method to build a speech recognition system that is robust to emotional variation. Experimental results on isolated word recognition using HMMs show that vocal tract normalization reduced the error rate of the conventional recognition system by 41.9% on emotional test data.
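
A bare-bones illustration of vocal tract normalization as frequency warping: each frame's magnitude spectrum is resampled along the frequency axis by a warp factor before feature extraction. Practical systems estimate the factor per speaker and often use piecewise-linear or bilinear warps; the simple linear warp and alpha value below are assumptions.

```python
import numpy as np

def warp_spectrum(spectrum: np.ndarray, alpha: float) -> np.ndarray:
    """Linearly warp the frequency axis of a magnitude spectrum.

    The warped spectrum reads bin j from original bin j / alpha, i.e. the
    spectrum is stretched by alpha to compensate vocal tract length.
    """
    n = len(spectrum)
    src_bins = np.arange(n) / alpha
    return np.interp(src_bins, np.arange(n), spectrum, right=0.0)

# Usage sketch: warp one frame's spectrum before computing features.
frame = np.random.randn(400)                   # stand-in speech frame
spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
normalized = warp_spectrum(spec, alpha=1.05)   # alpha estimated per speaker
```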

An Emotion Recognition Technique using Speech Signals (음성신호를 이용한 감정인식)

  • Jung, Byung-Wook;Cheun, Seung-Pyo;Kim, Youn-Tae;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.4 / pp.494-500 / 2008
  • In the field of human interface technology, the interactions between human and machine are important, and research on emotion recognition supports these interactions. This paper presents an algorithm for emotion recognition based on personalized speech signals. The proposed approach extracts speech-signal characteristics for emotion recognition using PLP (perceptual linear prediction) analysis. The PLP analysis technique was originally designed to suppress speaker-dependent components in features used for automatic speech recognition, but later experiments demonstrated its usefulness for speaker recognition tasks. This paper therefore proposes an algorithm that can easily evaluate a speaker's emotion from speech signals in real time, using personalized emotion patterns built by PLP analysis. The experimental results show that the maximum recognition rate for the speaker-dependent system is above 90%, while the average recognition rate is 75%. The proposed system has a simple structure but is efficient enough to be used in real time.
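
To make the front-end concrete, the sketch below implements a heavily simplified PLP-style analysis of a single frame: power spectrum, cube-root intensity-loudness compression, and an all-pole model fitted by Levinson-Durbin recursion. Real PLP also applies Bark-scale critical-band integration and equal-loudness pre-emphasis, which are omitted here; the frame length and model order are assumptions.

```python
import numpy as np

def simplified_plp(frame: np.ndarray, order: int = 12) -> np.ndarray:
    """Heavily simplified PLP-style analysis of one speech frame."""
    # Power spectrum of the windowed frame
    power = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    # Intensity-loudness power law (cube-root compression)
    loudness = power ** (1.0 / 3.0)
    # Autocorrelation of the compressed spectrum via inverse FFT
    r = np.fft.irfft(loudness)[: order + 1]
    # Levinson-Durbin recursion -> all-pole (LPC) coefficients
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[: i + 1] = a[: i + 1] + k * a[: i + 1][::-1]
        err *= 1.0 - k * k
    return a[1:]

# Usage sketch: personalized emotion patterns could be average feature
# vectors per emotion, with recognition by nearest pattern.
features = simplified_plp(np.random.randn(400))
print(features.shape)   # (12,)
```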