http://dx.doi.org/10.5391/IJFIS.2006.6.1.027

Design of Model to Recognize Emotional States in a Speech  

Kim Yi-Gon (Division of Electronic Communication and Electrical Engineering, Chonnam National University)
Bae Young-Chul (Division of Electronic Communication and Electrical Engineering, Chonnam National University)
Publication Information
International Journal of Fuzzy Logic and Intelligent Systems / v.6, no.1, 2006, pp. 27-32
Abstract
Verbal communication is the most commonly used means of communication. A spoken word carries a great deal of information about speakers and their emotional states. In this paper we design a model to recognize emotional states in speech, the first of two phases in developing a toy machine that recognizes emotional states in speech. We conducted an experiment to extract and analyze the emotional state of a speaker from speech. To analyze the signal output, we used three characteristics of sound as vector inputs: the frequency, intensity, and period of tones. We also used eight basic emotional parameters: surprise, anger, sadness, expectancy, acceptance, joy, hate, and fear, which were portrayed by five selected students. To facilitate the differentiation of the spectral features, we used wavelet transform analysis. We applied ANFIS (Adaptive Neuro-Fuzzy Inference System) in designing the emotion recognition model. In our findings, the inference error was about 10%. The results of our experiment show that the model is about 85% effective and reliable.
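The feature-extraction step the abstract describes (wavelet analysis of the speech signal to separate spectral features) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a one-level Haar DWT applied recursively to obtain per-band detail energies, assuming the input is a mono speech signal held in a NumPy array; the function names are hypothetical.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficient arrays."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                       # pad to even length if needed
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return approx, detail

def wavelet_band_energies(signal, levels=4):
    """Energy of the detail coefficients at each decomposition level,
    plus the residual approximation energy: a simple feature vector
    that separates low- and high-frequency content of the signal."""
    energies = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(float(np.sum(detail ** 2)))
    energies.append(float(np.sum(approx ** 2)))
    return energies
```

Because the Haar transform is orthonormal, the band energies sum to the total signal energy (for power-of-two lengths), so the vector is a lossless energy decomposition by frequency band; such a vector could then serve as input to an ANFIS-style classifier.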
Keywords
ANFIS; emotional parameters; emotional state; spectrum features; wavelet; FFT (Fast Fourier Transform)