• Title/Summary/Keyword: emotion pattern


Speaker and Context Independent Emotion Recognition System using Gaussian Mixture Model (GMM을 이용한 화자 및 문장 독립적 감정 인식 시스템 구현)

  • 강면구;김원구
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.2463-2466
    • /
    • 2003
  • This paper studied pattern recognition algorithms and feature parameters for emotion recognition. The KNN algorithm was used as the pattern matching technique for comparison, and VQ and GMM were used for speaker and context independent recognition. The speech parameters used as features are pitch, energy, MFCC and their first and second derivatives. Experimental results showed that the emotion recognizer using MFCC and its derivatives as features performed better than the one using the pitch and energy parameters. Among the pattern recognition algorithms, the GMM-based emotion recognizer was superior to the KNN- and VQ-based recognizers.

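The GMM-based recognizer described above can be sketched roughly as follows: one Gaussian mixture per emotion class, trained on frame-level feature vectors (e.g., MFCCs with deltas), with classification by maximum average log-likelihood. This is a minimal illustration under assumed data shapes, mixture count, and emotion labels, not the authors' implementation.

```python
# Minimal sketch of a GMM-based, speaker/context-independent emotion recognizer.
# One GaussianMixture is trained per emotion on frame-level feature vectors
# (e.g., MFCC + delta + delta-delta); an utterance is assigned to the emotion
# whose model gives the highest average log-likelihood over its frames.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_emotion_gmms(train_data, n_components=16):
    """train_data: dict mapping emotion label -> (n_frames, n_features) array."""
    models = {}
    for emotion, frames in train_data.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=0)
        gmm.fit(frames)
        models[emotion] = gmm
    return models

def classify_utterance(models, utterance_frames):
    """utterance_frames: (n_frames, n_features) array for one utterance."""
    scores = {emotion: gmm.score(utterance_frames)  # mean log-likelihood per frame
              for emotion, gmm in models.items()}
    return max(scores, key=scores.get)

# Toy usage with random features standing in for real MFCC matrices.
rng = np.random.default_rng(0)
train = {"neutral": rng.normal(0, 1, (500, 39)),
         "anger":   rng.normal(1, 1, (500, 39))}
models = train_emotion_gmms(train)
print(classify_utterance(models, rng.normal(1, 1, (120, 39))))
```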

Formative Properties of Sensibility and Emotion in Fashion (패션에 나타난 감성과 감정의 조형적 특성 연구)

  • 김유진;이경희
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.28 no.1
    • /
    • pp.34-44
    • /
    • 2004
  • The purpose of this study was to provide an effective design strategy and differentiated products for consumers' emotional satisfaction by analyzing the formative properties of fashion sensibility and emotion. 54 photos of contemporary costume were selected that represented Izard's DES. A questionnaire consisting of a 25-pair bipolar adjective scale of fashion sensibility and an 18-noun scale of emotion was distributed to 970 males and females living in the Pusan area. The data were analyzed by GLM using the SPSS statistical package. The major findings of this research were as follows. 1. Among the clothing formative properties associated with fashion sensibilities, aestheticism shows significant differences in silhouette and texture, maturity in silhouette and color, character in texture and decoration, and femininity in pattern and color. 2. Among the clothing formative properties associated with emotions, negative emotion shows significant differences in pattern and silhouette, distressㆍfear in silhouette and pattern, arousal in texture and color, shame in color and texture, and enjoyment in silhouette and pattern. 3. For fashion sensibility and emotion as a function of clothing formative properties, each formative property shows differences in fashion sensibility and emotion. These results can be utilized in clothing design development for special uses such as theatrical costume, differentiated display, and advertising strategy.
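
The GLM analysis reported above (sensibility and emotion ratings tested against clothing formative properties) could be reproduced in outline with a general linear model. The sketch below uses an ordinary least squares formula with hypothetical column names (silhouette, texture, aestheticism) and placeholder data, since the paper's SPSS setup is not given here.

```python
# Rough sketch of the kind of GLM/ANOVA used in the study: does a sensibility
# rating (e.g., "aestheticism") differ across clothing formative properties
# (silhouette, texture, ...)? Column names and data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "silhouette":   ["H-line", "A-line", "H-line", "A-line"] * 25,
    "texture":      ["soft", "stiff", "stiff", "soft"] * 25,
    "aestheticism": pd.Series(range(100)) % 7 + 1,   # placeholder 7-point ratings
})

model = smf.ols("aestheticism ~ C(silhouette) + C(texture)", data=df).fit()
print(anova_lm(model, typ=2))   # F-tests for each formative property
```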

Personalized Service Based on Context Awareness through User Emotional Perception in Mobile Environment (모바일 환경에서의 상황인식 기반 사용자 감성인지를 통한 개인화 서비스)

  • Kwon, Il-Kyoung;Lee, Sang-Yong
    • Journal of Digital Convergence
    • /
    • v.10 no.2
    • /
    • pp.287-292
    • /
    • 2012
  • In this paper, location-based sensing data preprocessing and emotion data preprocessing techniques are studied for building and preprocessing users' emotion data in the V-A (valence-arousal) emotion model, as required for personalized services based on emotion perception. For this purpose, a granular context tree and string-matching-based emotion pattern matching techniques are used. In addition, a context-aware and personalized recommendation service technique using probabilistic reasoning is studied to provide personalized services based on context awareness.
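
The paper names string-matching-based emotion pattern matching over a granular context tree but the details are not reproduced here, so the sketch below only illustrates the general idea: encode a user's recent V-A states as a symbol string and compare it against stored patterns with a similarity ratio. The symbols, patterns, and threshold are assumptions for illustration.

```python
# Illustrative sketch (not the paper's algorithm): encode recent valence-arousal
# states as single-character symbols and match them against stored emotion
# patterns with a simple similarity ratio.
from difflib import SequenceMatcher

# Quadrants of the V-A model mapped to symbols (assumed for illustration).
def va_to_symbol(valence, arousal):
    if valence >= 0 and arousal >= 0: return "E"  # excited / joyful
    if valence >= 0 and arousal < 0:  return "C"  # calm / content
    if valence < 0 and arousal >= 0:  return "T"  # tense / angry
    return "D"                                    # depressed / sad

stored_patterns = {"stress_buildup": "CCTTT", "recovery": "TTCCE"}

def match_emotion_pattern(recent_va, threshold=0.6):
    observed = "".join(va_to_symbol(v, a) for v, a in recent_va)
    name, pattern = max(stored_patterns.items(),
                        key=lambda kv: SequenceMatcher(None, observed, kv[1]).ratio())
    score = SequenceMatcher(None, observed, pattern).ratio()
    return (name, score) if score >= threshold else (None, score)

print(match_emotion_pattern([(0.2, -0.3), (0.1, -0.2), (-0.4, 0.5), (-0.5, 0.6), (-0.3, 0.4)]))
```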

Research on Classification of 2 dimension Emotion by Pattern analysis of Autonomic response (자율신경계 반응 패턴 분석을 통한 2차원 감성 분류에 대한 연구)

  • 황민철;임평규;김혜진;김세영
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2002.11a
    • /
    • pp.279-282
    • /
    • 2002
  • Autonomic nervous system responses can serve as a variable for measuring human arousal (Hwang et al., 2001). This study examined whether human emotion can be classified along two dimensions using autonomic nervous system responses alone. Emotions were elicited in five subjects by presenting various auditory stimuli, such as popular songs and sound effects, and their autonomic responses were measured through three physiological signals (GSR, SKT, PPG) and the response patterns were analyzed. As a result, the autonomic response patterns showed the potential to distinguish not only arousal/relaxation but also pleasantness/unpleasantness.

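As a rough illustration of how autonomic responses (GSR, SKT, PPG) might be mapped onto the arousal and valence axes, the sketch below computes a few simple features and applies a nearest-centroid rule. The feature choices, centroids, and sampling rate are assumptions, not the study's method.

```python
# Illustrative sketch only: derive simple features from GSR, SKT, and PPG
# segments and place them in the 2-D (arousal, valence) space with a
# nearest-centroid rule.
import numpy as np

def autonomic_features(gsr, skt, ppg, fs=100):
    """Each input is a 1-D numpy array sampled at fs Hz."""
    gsr_level = gsr.mean()                                       # tonic skin conductance
    skt_slope = np.polyfit(np.arange(len(skt)) / fs, skt, 1)[0]  # skin temperature trend
    ppg_var   = ppg.std()                                        # crude pulse variability
    return np.array([gsr_level, skt_slope, ppg_var])

# Hypothetical class centroids in feature space for the four V-A quadrants.
centroids = {
    "high arousal / pleasant":   np.array([0.8,  0.01, 0.9]),
    "high arousal / unpleasant": np.array([0.9, -0.02, 1.1]),
    "low arousal / pleasant":    np.array([0.3,  0.02, 0.5]),
    "low arousal / unpleasant":  np.array([0.4, -0.01, 0.6]),
}

def classify_quadrant(features):
    return min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))

rng = np.random.default_rng(1)
feats = autonomic_features(rng.normal(0.8, 0.1, 3000),
                           36.5 + rng.normal(0, 0.05, 3000),
                           rng.normal(0, 1.0, 3000))
print(classify_quadrant(feats))
```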

Human emotional elements and external stimulus information-based Artificial Emotion Expression System for HRI (HRI를 위한 사람의 내적 요소 기반의 인공 정서 표현 시스템)

  • Oh, Seung-Won;Hahn, Min-Soo
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.7-12
    • /
    • 2008
  • In human-robot interaction, the role of emotion is becoming more important; therefore, robots need an emotion expression mechanism similar to that of humans. In this paper, we suggest a new emotion expression system based on psychological studies, consisting of five affective elements: the emotion, the mood, the personality, the tendency, and the machine rhythm. Each element influences the emotion expression pattern in its own way according to its characteristics. As a result, even when robots are exposed to the same external stimuli, each robot can show a different emotion expression pattern. The proposed system may contribute to more natural and human-friendly human-robot interaction and promote more intimate relationships between people and robots.

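The five affective elements named in the abstract (emotion, mood, personality, tendency, machine rhythm) suggest some form of weighted combination. The sketch below is one hypothetical way to let each element modulate a final expression intensity; the weights and blending rules are illustrative only and are not taken from the paper.

```python
# Hypothetical sketch of combining the five affective elements into an
# expression intensity; constants and update rules are assumed.
from dataclasses import dataclass

@dataclass
class AffectiveState:
    emotion: float      # fast-changing reaction to the current stimulus, [-1, 1]
    mood: float         # slowly drifting background affect, [-1, 1]
    personality: float  # fixed trait bias (e.g., expressiveness), [0, 1]
    tendency: float     # learned bias toward showing/hiding emotion, [0, 1]
    rhythm: float       # machine "biorhythm" modulation, [0, 1]

def expression_intensity(s: AffectiveState) -> float:
    """Blend the elements; each robot's constants make its pattern distinct."""
    base = 0.6 * s.emotion + 0.4 * s.mood            # what the robot currently feels
    gain = s.personality * (0.5 + 0.5 * s.rhythm)    # how strongly it expresses it
    return max(-1.0, min(1.0, base * gain * s.tendency))

# Two robots exposed to the same stimulus but with different traits.
stimulus_emotion = 0.8
print(expression_intensity(AffectiveState(stimulus_emotion, 0.2, 0.9, 0.8, 0.7)))
print(expression_intensity(AffectiveState(stimulus_emotion, 0.2, 0.3, 0.5, 0.4)))
```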

The Pattern Recognition Methods for Emotion Recognition with Speech Signal (음성신호를 이용한 감성인식에서의 패턴인식 방법)

  • Park Chang-Hyun;Sim Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.3
    • /
    • pp.284-288
    • /
    • 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system using speech signals and compare the results. First, emotional speech databases are required, and the speech features for emotion recognition are determined in the database analysis step. Second, the recognition algorithms are applied to these speech features. The algorithms we evaluate are an artificial neural network, Bayesian learning, Principal Component Analysis, and the LBG algorithm. The performance gap among these methods is presented in the experimental results section. Emotion recognition technology is not yet mature; the selection of emotion features and of a suitable classification method are still open questions, and we hope this paper serves as a reference in those discussions.
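
A rough equivalent of the comparison described above can be sketched with off-the-shelf components. The feature matrix here is random placeholder data, a PCA-plus-nearest-centroid pipeline stands in for the PCA step, and the LBG/VQ method is sketched separately further below; none of this reflects the paper's actual databases or settings.

```python
# Sketch of comparing several pattern recognition methods on utterance-level
# speech-emotion features. Random data stands in for a real emotional speech
# database.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 24))              # placeholder feature vectors
y = rng.integers(0, 4, size=400)            # four emotion classes

candidates = {
    "neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(hidden_layer_sizes=(32,),
                                                  max_iter=500, random_state=0)),
    "Bayesian (naive Bayes)": GaussianNB(),
    "PCA + nearest centroid": make_pipeline(StandardScaler(), PCA(n_components=8),
                                            NearestCentroid()),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.3f}")
```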

The Pattern Recognition Methods for Emotion Recognition with Speech Signal (음성신호를 이용한 감성인식에서의 패턴인식 방법)

  • Park Chang-Hyeon;Sim Gwi-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2006.05a
    • /
    • pp.347-350
    • /
    • 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system using speech signals and compare the results. First, emotional speech databases are required, and the speech features for emotion recognition are determined in the database analysis step. Second, the recognition algorithms are applied to these speech features. The algorithms we evaluate are an artificial neural network, Bayesian learning, Principal Component Analysis, and the LBG algorithm. The performance gap among these methods is presented in the experimental results section. Emotion recognition technology is not yet mature; the selection of emotion features and of a suitable classification method are still open questions, and we hope this paper serves as a reference in those discussions.

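Since the LBG algorithm is listed among the compared methods, a compact sketch of LBG codebook training (splitting an initial centroid and refining with k-means-style updates) is given below. The split perturbation and iteration counts are common textbook choices, not necessarily those used by the authors.

```python
# Minimal LBG (Linde-Buzo-Gray) codebook training: start from the global mean,
# repeatedly split each codeword, then refine with k-means-style iterations.
import numpy as np

def lbg_codebook(data, size, eps=0.01, iters=20):
    """data: (n_vectors, dim); size: target codebook size (power of two)."""
    codebook = data.mean(axis=0, keepdims=True)
    while codebook.shape[0] < size:
        # Split every codeword into two slightly perturbed copies.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            # Assign each vector to its nearest codeword, then recompute means.
            dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
            nearest = dists.argmin(axis=1)
            for k in range(codebook.shape[0]):
                members = data[nearest == k]
                if len(members) > 0:
                    codebook[k] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(0)
cb = lbg_codebook(rng.normal(size=(1000, 12)), size=8)
print(cb.shape)   # (8, 12)
```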

Analytic Framework of Emotion Factors for Gameplay Capability (게임플레이 가능성을 위한 감정요소 분석 프레임워크)

  • Kim, Mi-Jin;Kim, Jae-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.6
    • /
    • pp.188-196
    • /
    • 2010
  • This study aims at empirically designing a method to increase players' play capability by determining the relations between players' behavior patterns and emotion factors during gameplay experiences in MMORPGs. For modeling emotion factors in gameplay, our preliminary studies are considered within the framework of rule-based systems from a cognitive science perspective. The study established a game architecture process that applies players' cognitive emotional reactions in specific situations of role-playing games, which show the most noticeable interaction with players through the quest system. This approach is expected to extend instant responses, which can be controlled across both main and anti-main behavior patterns in gameplay. Therefore, this study is meaningful in that it proposes an analysis framework of players' emotion factors for designing gameplay capability in game interactions.
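
The framework above is described only at a high level. As one hypothetical illustration of a rule-based mapping from quest-related player behavior patterns to emotion factors, a small lookup-style rule set might look like the following; every rule, field name, and threshold here is an assumption for illustration.

```python
# Hypothetical illustration only: a tiny rule-based mapping from observed
# player behavior patterns (around a quest system) to candidate emotion factors.
RULES = [
    # (behavior-pattern predicate, emotion factor)
    (lambda e: e["quest_failures"] >= 3 and e["retry"],        "frustration"),
    (lambda e: e["quest_completed"] and e["time_ratio"] < 0.5, "pride"),
    (lambda e: e["idle_seconds"] > 120,                        "boredom"),
    (lambda e: e["new_area_entered"],                          "curiosity"),
]

def infer_emotion_factors(event):
    """Return all emotion factors whose behavior-pattern rule fires."""
    return [factor for predicate, factor in RULES if predicate(event)]

event = {"quest_failures": 3, "retry": True, "quest_completed": False,
         "time_ratio": 1.2, "idle_seconds": 10, "new_area_entered": False}
print(infer_emotion_factors(event))   # ['frustration']
```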

GMM-based Emotion Recognition Using Speech Signal (음성 신호를 사용한 GMM기반의 감정 인식)

  • 서정태;김원구;강면구
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.3
    • /
    • pp.235-241
    • /
    • 2004
  • This paper studied pattern recognition algorithms and feature parameters for speaker and context independent emotion recognition. The KNN algorithm was used as the pattern matching technique for comparison, and VQ and GMM were used for speaker and context independent recognition. The speech parameters used as features are pitch, energy, MFCC and their first and second derivatives. Experimental results showed that the emotion recognizer using MFCC and its derivatives performed better than the one using the pitch and energy parameters. Among the pattern recognition algorithms, the GMM-based emotion recognizer was superior to the KNN- and VQ-based recognizers.
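
The features named above (pitch, energy, MFCCs and their first and second derivatives) can be extracted with standard tooling. The sketch below uses librosa with its default frame settings, which are assumptions rather than the paper's configuration.

```python
# Sketch of extracting the features named above with librosa: 13 MFCCs plus
# first/second derivatives, a pitch contour, and frame energy.
import numpy as np
import librosa

def emotion_features(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)
    mfcc   = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    d1     = librosa.feature.delta(mfcc)               # first derivative
    d2     = librosa.feature.delta(mfcc, order=2)      # second derivative
    pitch  = librosa.yin(y, fmin=60, fmax=400, sr=sr)  # F0 contour (Hz)
    energy = librosa.feature.rms(y=y)[0]               # frame energy
    n = min(mfcc.shape[1], len(pitch), len(energy))    # align frame counts
    feats = np.vstack([mfcc[:, :n], d1[:, :n], d2[:, :n],
                       pitch[None, :n], energy[None, :n]])
    return feats.T   # (n_frames, 41) feature matrix

# Usage (the path is a placeholder for a real emotional speech file):
# frames = emotion_features("sample_utterance.wav")
```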

Speaker and Context Independent Emotion Recognition using Speech Signal (음성을 이용한 화자 및 문장독립 감정인식)

  • 강면구;김원구
    • Proceedings of the IEEK Conference
    • /
    • 2002.06d
    • /
    • pp.377-380
    • /
    • 2002
  • In this paper, speaker and context independent emotion recognition using speech signals is studied. For this purpose, a corpus of emotional speech data, recorded and classified according to emotion by subjective evaluation, was used to build statistical feature vectors such as the average, standard deviation and maximum value of pitch and energy, and to evaluate the performance of conventional pattern matching algorithms. A vector quantization based emotion recognition system is proposed for speaker and context independent emotion recognition. Experimental results showed that the vector quantization based emotion recognizer using MFCC parameters performed better than the one using the pitch and energy parameters.

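The statistical feature vectors mentioned above (mean, standard deviation, and maximum of the pitch and energy contours per utterance) and a VQ-based decision can be sketched as follows. The codebook size and the use of scikit-learn's KMeans in place of the paper's VQ training are assumptions.

```python
# Sketch: per-utterance statistical features classified by minimum distortion
# against per-emotion codebooks. KMeans stands in for the VQ codebook training.
import numpy as np
from sklearn.cluster import KMeans

def statistical_features(pitch, energy):
    """pitch, energy: 1-D contours for one utterance -> 6-D feature vector."""
    return np.array([pitch.mean(), pitch.std(), pitch.max(),
                     energy.mean(), energy.std(), energy.max()])

def train_codebooks(train_vectors, codebook_size=4):
    """train_vectors: dict emotion -> (n_utterances, 6) array of feature vectors."""
    return {emo: KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(v)
            for emo, v in train_vectors.items()}

def classify(codebooks, feature_vector):
    """Pick the emotion whose codebook gives the smallest quantization distortion."""
    def distortion(km):
        return np.min(np.linalg.norm(km.cluster_centers_ - feature_vector, axis=1))
    return min(codebooks, key=lambda emo: distortion(codebooks[emo]))

# Toy usage with random vectors standing in for real utterance-level features.
rng = np.random.default_rng(0)
books = train_codebooks({"neutral": rng.normal(0, 1, (50, 6)),
                         "joy":     rng.normal(2, 1, (50, 6))})
print(classify(books, rng.normal(2, 1, 6)))
```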