• Title/Abstract/Keyword: Emotion machine

Search results: 174 items (processing time: 0.03 sec)

개인별 평균차를 이용한 최대 엔트로피 기반 감성 인식 모델 (Maximum Entropy-based Emotion Recognition Model using Individual Average Difference)

  • 박소영;김동근;황민철
    • 한국정보통신학회논문지 / Vol. 14, No. 7 / pp.1557-1564 / 2010
  • Because the patterns of emotional physiological signals vary greatly from person to person, this paper proposes a maximum entropy-based emotion recognition model that takes each individual's signal characteristics into account. To recognize the user's emotion more accurately, the proposed model does not simply use the given input signal value; it compares the input value against the averages of the signal values measured under positive and negative emotions and uses that comparison. In addition, so that an emotion recognition model can be built easily without expert knowledge of emotion recognition, the proposed model uses the maximum entropy model, a machine learning technique well known for its high performance. Considering that using the raw numeric signal values makes it difficult to collect enough training patterns for machine learning, the proposed model expresses each average difference simply as + (positive) or - (negative) instead of a numeric value, and divides the 10-second emotional response into one-second segments to increase the amount of training data.
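
A minimal sketch of the idea described above, assuming per-person positive/negative averages are already known. The sign-based encoding and the use of multinomial logistic regression (a standard realization of a maximum entropy classifier) are illustrative; the variable names and toy data are assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sign_features(values, pos_mean, neg_mean):
    """Encode each one-second value as +/- relative to the per-person
    positive and negative averages (names and encoding are assumptions)."""
    feats = []
    for v in values:
        feats.append(1 if v - pos_mean >= 0 else 0)  # sign of difference from positive average
        feats.append(1 if v - neg_mean >= 0 else 0)  # sign of difference from negative average
    return feats

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(40, 10))          # 40 responses x 10 one-second samples (toy data)
y = rng.integers(0, 2, size=40)            # 0 = negative emotion, 1 = positive emotion

pos_mean = X_raw[y == 1].mean()
neg_mean = X_raw[y == 0].mean()
X = np.array([sign_features(row, pos_mean, neg_mean) for row in X_raw])

# multinomial logistic regression is one standard realization of a
# maximum entropy classifier
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```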

Discrimination of Three Emotions using Parameters of Autonomic Nervous System Response

  • Jang, Eun-Hye;Park, Byoung-Jun;Eum, Yeong-Ji;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • 대한인간공학회지 / Vol. 30, No. 6 / pp.705-713 / 2011
  • Objective: The aim of this study is to compare the results of emotion recognition by several algorithms that classify three emotional states (happiness, neutral, and surprise) using physiological features. Background: Recent emotion recognition studies have tried to detect human emotion using physiological signals; such recognition is important for applying emotion detection in human-computer interaction systems. Method: 217 students participated in this experiment. While three kinds of emotional stimuli were presented to the participants, ANS responses (EDA, SKT, ECG, RESP, and PPG) were measured as physiological signals twice: first for 60 seconds as the baseline, and then from 60 to 90 seconds during the emotional states. The signals obtained from the baseline and emotional-state sessions were each analyzed over 30 seconds. Participants rated their own feelings toward the emotional stimuli on an emotional assessment scale after the stimuli were presented. Emotion classification was performed with Linear Discriminant Analysis (LDA, SPSS 15.0), Support Vector Machine (SVM), and Multilayer Perceptron (MLP), using the difference values obtained by subtracting the baseline from the emotional state. Results: The emotional stimuli had 96% validity and 5.8-point efficiency on average. Statistical analysis showed significant differences in ANS responses among the three emotions. LDA classified the three emotions with an accuracy of 83.4%, while SVM reached 75.5% and MLP 55.6%. Conclusion: This study confirmed that the three emotions can be classified better by LDA using various physiological features than by SVM or MLP. Further studies should compare these results with the accuracy of emotion classification by other algorithms to obtain more stability and reliability. Application: These findings could improve the chances of recognizing various human emotions using physiological signals and could be applied to human-computer interaction systems for recognizing human emotions.
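
As a rough illustration of the classification step (baseline-subtracted features fed to LDA, SVM, and MLP), the sketch below uses scikit-learn with randomly generated stand-in data; the five-value feature layout, labels, and hyperparameters are assumptions, not the study's protocol.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 217                                    # number of participants, as in the study
# hypothetical per-participant features: one summary value per signal
baseline = rng.normal(size=(n, 5))         # EDA, SKT, ECG, RESP, PPG at rest
emotion = baseline + rng.normal(scale=0.5, size=(n, 5))
X = emotion - baseline                     # difference value: emotional state minus baseline
y = rng.integers(0, 3, size=n)             # 0=happiness, 1=neutral, 2=surprise (toy labels)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf")),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```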

Detecting Data which Represent Emotion Features from the Speech Signal

  • Park, Chang-Hyun;Sim, Kwee-Bo;Lee, Dong-Wook;Joo, Young-Hoon
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2001년도 ICCAS / pp.138.1-138 / 2001
  • Usually, when we have a conversation with another person, we can perceive his or her emotion as well as his or her ideas. Recently, some applications using speech recognition have come out; however, they can recognize only the content of the information the speaker provides. In the future, machines that are attuned to humans will be required for a more convenient life, so we need to obtain emotion features. In this paper, we collect a variety of reference data that represent emotion features in the speech signal. Since our final target is to recognize emotion from a stream of speech, we must be able to understand the features that represent emotion. Humans can show many emotions, and the delicate differences among them make this recognition problem difficult.
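
For illustration only, the sketch below extracts a few prosodic descriptors (pitch and energy statistics) that are commonly treated as emotion-related features of a speech signal; the file path and the specific feature set are assumptions, since the abstract does not specify the data or features used.

```python
import numpy as np
import librosa

# 'sample.wav' is a placeholder path; the paper does not name its corpus
y, sr = librosa.load("sample.wav", sr=16000)

# prosodic descriptors often associated with emotional speech
f0, voiced_flag, voiced_probs = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
rms = librosa.feature.rms(y=y)[0]

features = {
    "pitch_mean": np.nanmean(f0),              # average fundamental frequency
    "pitch_range": np.nanmax(f0) - np.nanmin(f0),
    "energy_mean": rms.mean(),
    "energy_std": rms.std(),
}
print(features)
```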


감정 인지를 위한 음성 및 텍스트 데이터 퓨전: 다중 모달 딥 러닝 접근법 (Speech and Textual Data Fusion for Emotion Detection: A Multimodal Deep Learning Approach)

  • 에드워드 카야디;송미화
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2023년도 추계학술발표대회 / pp.526-527 / 2023
  • Speech emotion recognition (SER) is one of the interesting topics in the machine learning field. Developing a multimodal speech emotion recognition system can bring numerous benefits. This paper explains how to fuse BERT as the text recognizer with a CNN as the speech recognizer to build a multimodal SER system.
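
A minimal late-fusion sketch along the lines described (BERT for text, a CNN for speech), written in PyTorch with Hugging Face Transformers; the fusion strategy, layer sizes, and four-class output are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class MultimodalSER(nn.Module):
    """Late fusion: BERT [CLS] embedding + CNN over a mel-spectrogram (illustrative)."""
    def __init__(self, n_emotions=4):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.audio_cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 128), nn.ReLU(),
        )
        self.classifier = nn.Linear(self.bert.config.hidden_size + 128, n_emotions)

    def forward(self, input_ids, attention_mask, mel_spec):
        text_emb = self.bert(input_ids=input_ids,
                             attention_mask=attention_mask).last_hidden_state[:, 0]
        audio_emb = self.audio_cnn(mel_spec)       # mel_spec: (batch, 1, n_mels, frames)
        return self.classifier(torch.cat([text_emb, audio_emb], dim=-1))

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer(["I am so happy today"], return_tensors="pt")
mel = torch.randn(1, 1, 64, 128)                   # dummy mel-spectrogram
model = MultimodalSER()
print(model(enc["input_ids"], enc["attention_mask"], mel).shape)  # -> (1, 4)
```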

바이오센서 기반 특징 추출 기법 및 감정 인식 모델 개발 (Development of Bio-sensor-Based Feature Extraction and Emotion Recognition Model)

  • 조예리;배동성;이윤규;안우진;임묘택;강태구
    • 전기학회논문지 / Vol. 67, No. 11 / pp.1496-1505 / 2018
  • Emotion recognition technology is necessary for human-computer interaction and communication. There are many cases where one cannot communicate without considering the other person's emotion; as such, emotion recognition technology is an essential element in the field of communication and is highly utilized in various fields. Various bio-sensors are used for human emotion recognition and can be used to measure emotions. This paper proposes a system for recognizing human emotions using two physiological sensors. For emotion classification, Russell's two-dimensional emotion model was used, and a classification method based on personality was proposed by extracting sensor-specific characteristics. In addition, the emotion model was divided into four emotions using the Support Vector Machine classification algorithm. Finally, the proposed emotion recognition system was evaluated through a practical experiment.
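
A toy sketch of classifying the four quadrants of Russell's two-dimensional model with an SVM; the mapping from sensor features to valence/arousal scores and the quadrant labels are assumptions made for illustration, not the paper's method.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# hypothetical features from two physiological sensors, already mapped onto
# Russell's valence (x) and arousal (y) axes
X = rng.uniform(-1, 1, size=(200, 2))

def quadrant(valence, arousal):
    """Four emotions as the quadrants of Russell's circumplex (labels assumed)."""
    if valence >= 0 and arousal >= 0: return 0   # e.g. excited / happy
    if valence < 0 and arousal >= 0:  return 1   # e.g. angry / stressed
    if valence < 0 and arousal < 0:   return 2   # e.g. sad / bored
    return 3                                     # e.g. calm / relaxed

y = np.array([quadrant(v, a) for v, a in X])
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```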

Development of Milking Machine for Human with Reference to human Babies' Behavior

  • Kawamura, Takashi;Morichika, Masayuki;Nakazawa, Masaru
    • 한국감성과학회:학술대회논문집 / 한국감성과학회 2000년도 춘계 학술대회 및 국제 감성공학 심포지움 논문집 Proceeding of the 2000 Spring Conference of KOSES and International Sensibility Ergonomics Symposium / pp.203-205 / 2000
  • This paper deals with human nursing and milking. We have clarified the principle of milking in human babies, which consists of biting force, sucking pressure, and movement of the tongue. Based on this observation, a tongue mechanism was proposed. A new type of milking machine was then developed, and effective milking was realized by controlling it with reference to the behavior of human babies. The experimental results of milking from nursing bottles and from mothers' breasts are shown in this paper. According to the results, the machine has almost the same ability as babies.


EEG 신호 기반 경사도 방법을 통한 감정인식에 대한 연구 (A Novel Method for Emotion Recognition based on the EEG Signal using Gradients)

  • 한의환;차형태
    • 전자공학회논문지 / Vol. 54, No. 7 / pp.71-78 / 2017
  • Representative algorithms for classifying emotions include the Support Vector Machine (SVM) and the Bayesian decision rule. However, previous researchers have pointed out problems with these methods, and to compensate for them another researcher proposed a new pattern recognition algorithm based on gradients. In this paper, we propose a new EEG-based emotion recognition algorithm using this gradient approach and compare it with previous work. To obtain reliable data, we used DEAP (a database for emotion analysis using physiological signals), which has been used in many papers. For objective verification, the PSD (Power Spectral Density) of the four EEG channels used in previous studies (Fz, Fp2, F3, F4) was used as the feature set to classify the two emotional dimensions (Arousal, Valence). With the 4-fold cross-validation performed in this paper, accuracies of 85% on the Valence axis and 87.5% on the Arousal axis were obtained.
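
A rough sketch of the feature side of this pipeline (Welch PSD band power on four EEG channels, evaluated with 4-fold cross-validation); the synthetic data, band limits, and the SVM used as a stand-in classifier are assumptions, and the paper's gradient-based classifier is not reproduced here.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 128                                    # DEAP preprocessed sampling rate
rng = np.random.default_rng(3)
# toy stand-in for DEAP trials: (trials, channels, samples) for Fz, Fp2, F3, F4
eeg = rng.normal(size=(40, 4, fs * 60))
valence_high = rng.integers(0, 2, size=40)  # hypothetical binary Valence labels

def psd_features(trial):
    """Log band power (4-45 Hz) per channel via Welch's PSD."""
    feats = []
    for ch in trial:
        f, pxx = welch(ch, fs=fs, nperseg=fs * 2)
        band = (f >= 4) & (f <= 45)
        feats.append(np.log(pxx[band].sum()))
    return feats

X = np.array([psd_features(t) for t in eeg])
# SVM shown here only as a stand-in; the paper proposes a gradient-based classifier
print("4-fold CV accuracy:", cross_val_score(SVC(), X, valence_high, cv=4).mean())
```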

고감성 패턴 제조를 위한 반자동 검단기의 개발 (Development of a Semi-automatic Cloth Inspection Machine for High-quality Fabric Patterns)

  • 김주용;김기태
    • 감성과학 / Vol. 11, No. 2 / pp.207-214 / 2008
  • Fabric defects increase fabric loss, so defective areas are removed during the cloth inspection process. In actual factories, the inspection process adopts either visual inspection or a fully automatic method, each of which has its own advantages and disadvantages. In this study, a semi-automatic cloth inspection machine was developed that combines the advantages of both: a human detects the defects, while a computer provides defect position information and records the defect data. The laser grid developed in this study helps the inspector easily locate a defect, and the yard meter automatically measures the defect's position. The computer receives and stores the fabric length measured by the yard meter and the positions of the defects detected by the operator, and displays the fabric's defect information at a glance. The performance of the developed system was evaluated objectively by calculating the loss rate when actual fabrics were cut into specific patterns.
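
A small sketch of the bookkeeping role the computer plays here (storing the yard-meter length and operator-detected defect positions, then estimating a loss rate for a given cutting length); the field names and the loss-rate definition are assumptions for illustration, not the paper's actual data format or metric.

```python
from dataclasses import dataclass, field

@dataclass
class InspectionLog:
    """Minimal record kept during semi-automatic inspection (field names assumed)."""
    fabric_length_m: float = 0.0
    defects: list = field(default_factory=list)   # (position_m, description)

    def record_defect(self, position_m, description):
        self.defects.append((position_m, description))

    def loss_rate(self, cut_length_m):
        """Fraction of equal-length cut pieces containing at least one defect."""
        n_pieces = int(self.fabric_length_m // cut_length_m)
        bad = {int(pos // cut_length_m) for pos, _ in self.defects
               if pos < n_pieces * cut_length_m}
        return len(bad) / n_pieces if n_pieces else 0.0

log = InspectionLog(fabric_length_m=50.0)
log.record_defect(12.3, "slub")
log.record_defect(31.8, "oil stain")
print(f"loss rate for 2 m pattern pieces: {log.loss_rate(cut_length_m=2.0):.1%}")
```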


음성신호기반의 감정인식의 특징 벡터 비교 (A Comparison of Effective Feature Vectors for Speech Emotion Recognition)

  • 신보라;이석필
    • 전기학회논문지 / Vol. 67, No. 10 / pp.1364-1369 / 2018
  • Speech emotion recognition, which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Vocal expression is one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of the lack of effective emotion-related features. This paper surveys the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for emotion classification. The main aim of this paper is to discuss and compare the various approaches used for feature extraction and to propose a basis for extracting useful features in order to improve SER performance.
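
To make the compared feature families concrete, the sketch below extracts a few commonly used SER feature vectors with librosa; the file path and the particular feature set are illustrative choices, not the paper's list.

```python
import numpy as np
import librosa

def feature_vectors(path, sr=16000):
    """A few feature families often compared for SER (chosen for illustration)."""
    y, sr = librosa.load(path, sr=sr)
    return {
        "mfcc": librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1),
        "chroma": librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1),
        "spectral_centroid": librosa.feature.spectral_centroid(y=y, sr=sr).mean(axis=1),
        "zero_crossing_rate": librosa.feature.zero_crossing_rate(y).mean(axis=1),
    }

# 'utterance.wav' is a placeholder path
feats = feature_vectors("utterance.wav")
for name, vec in feats.items():
    print(name, np.round(vec, 3))
```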