• Title/Summary/Keyword: Speech Emotion Recognition (음성 감성 인식)


A Study on The Improvement of Emotion Recognition by Gender Discrimination (성별 구분을 통한 음성 감성인식 성능 향상에 대한 연구)

  • Cho, Youn-Ho; Park, Kyu-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.4 / pp.107-114 / 2008
  • In this paper, we constructed a speech emotion recognition system that classifies four emotions from speech - neutral, happy, sad, and anger - based on male/female gender discrimination. The proposed system first distinguishes whether a queried speech is male or female; system performance can then be improved by using separately optimized feature vectors for each gender in the emotion classification. As an emotion feature vector, this paper adopts ZCPA (Zero Crossings with Peak Amplitudes), well known in the speech recognition area for its noise-robust characteristics, and the features are optimized using the SFS method. For pattern classification of emotion, k-NN and SVM classifiers are compared experimentally. The computer simulation results show that the proposed system is highly effective for speech emotion classification, achieving about 85.3% accuracy over the four emotion states. This suggests the proposed system could be used in various applications such as call centers, humanoid robots, and ubiquitous computing.
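The two-stage pipeline this abstract describes - classify the speaker's gender first, then route the query to a gender-specific emotion model - can be sketched as follows. This is a minimal illustration with toy 2-D features and a plain 1-NN classifier standing in for the paper's SFS-optimized k-NN/SVM models; all data, labels, and function names here are hypothetical.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=1):
    """Minimal k-NN: majority label among the k samples nearest to the query."""
    nearest = sorted(train, key=lambda s: euclidean(s[0], query))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

def classify_emotion(query, gender_train, emotion_train_by_gender):
    """Stage 1: decide the speaker's gender; stage 2: route the query to
    the emotion model trained (and feature-optimized) for that gender."""
    gender = knn_predict(gender_train, query)
    return gender, knn_predict(emotion_train_by_gender[gender], query)

# Toy 2-D "feature vectors" (hypothetical, for illustration only).
gender_train = [((0.0, 0.0), "male"), ((10.0, 10.0), "female")]
emotion_train = {
    "male":   [((0.0, 0.0), "neutral"), ((3.0, 3.0), "anger")],
    "female": [((10.0, 10.0), "happy"), ((7.0, 7.0), "sad")],
}
result = classify_emotion((0.5, 0.2), gender_train, emotion_train)
```

In the real system each gender branch would use its own SFS-selected subset of the ZCPA features rather than the shared toy features shown here.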

Emotion Recognition Using Output Data of Image and Speech (영상과 음성의 출력 데이터를 이용한 감성 인식)

  • Joo, Young-Hoon; Oh, Jae-Heung; Park, Chang-Hyun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.3 / pp.275-280 / 2003
  • In this paper, we propose a method for recognizing human emotion using the output data of image and speech recognizers. The proposed method is based on the recognition rate of each modality. When only one modality - image or speech - is used, a recognition error easily leads to an incorrect result. To solve this problem, we propose a new method that reduces the effect of wrong recognition by multiplying the emotion status from the modality with the higher recognition rate by a higher weight value. To evaluate the proposed method, we suggest a simple recognition scheme using image and speech. Finally, we demonstrate its potential through the experiment.
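The fusion rule sketched in this abstract - weight each modality's per-emotion scores by that modality's overall recognition rate, then pick the best combined emotion - might look like the following. The score values, rates, and emotion labels are invented for illustration, not taken from the paper.

```python
def fuse_emotions(image_scores, speech_scores, image_rate, speech_rate):
    """Weight each modality's per-emotion scores by that modality's
    overall recognition rate, then pick the best combined emotion."""
    total = image_rate + speech_rate
    w_img, w_sp = image_rate / total, speech_rate / total
    fused = {e: w_img * image_scores.get(e, 0.0) + w_sp * speech_scores.get(e, 0.0)
             for e in set(image_scores) | set(speech_scores)}
    return max(fused, key=fused.get), fused

best, fused = fuse_emotions(
    {"happy": 0.7, "sad": 0.3},   # image recognizer output (hypothetical)
    {"happy": 0.4, "sad": 0.6},   # speech recognizer output (hypothetical)
    image_rate=0.9, speech_rate=0.6)
```

Here the image recognizer's higher recognition rate (0.9 vs. 0.6) gives its opinion more weight, so its "happy" verdict wins even though the speech recognizer favors "sad".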

A Study on Robust Speech Emotion Feature Extraction Under the Mobile Communication Environment (이동통신 환경에서 강인한 음성 감성특징 추출에 대한 연구)

  • Cho, Youn-Ho; Park, Kyu-Sik
    • The Journal of the Acoustical Society of Korea / v.25 no.6 / pp.269-276 / 2006
  • In this paper, we propose an emotion recognition system that can discriminate a human emotional state as neutral or anger, in real time, from speech captured by a cellular phone. In general, speech passing through the mobile network contains environmental noise and network noise, which can cause serious system performance degradation due to distortion of the emotional features of the query speech. In order to minimize the effect of this noise and thereby improve system performance, we adopt a simple MA (Moving Average) filter, which has a relatively simple structure and low computational complexity, to alleviate the distortion in the emotional feature vector. An SFS (Sequential Forward Selection) feature optimization method is then applied to further improve and stabilize the system performance. Two pattern recognition methods, k-NN and SVM, are compared for emotional state classification. The experimental results indicate that the proposed method provides very stable and successful emotional classification performance of about 86.5%, so that it should be very useful in application areas such as customer call centers.
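The MA filter mentioned here is just a sliding-window average over a feature trajectory. A minimal sketch (window size and data are illustrative, not the paper's settings):

```python
def moving_average(seq, window=3):
    """Smooth a 1-D feature trajectory with a simple MA filter; the window
    is clipped at the edges so the output keeps the input length."""
    out = []
    for i in range(len(seq)):
        lo = max(0, i - window // 2)
        hi = min(len(seq), i + window // 2 + 1)
        out.append(sum(seq[lo:hi]) / (hi - lo))
    return out

# Noise spikes in a feature trajectory are flattened toward their neighbors.
smoothed = moving_average([0, 3, 0, 3, 0], window=3)
```

The appeal of the MA filter in this setting is exactly what the abstract claims: it needs no training and costs only one addition per sample per window position, so it is cheap enough for real-time use on a query coming off the mobile network.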

Implementation of the Speech Emotion Recognition System in the ARM Platform (ARM 플랫폼 기반의 음성 감성인식 시스템 구현)

  • Oh, Sang-Heon; Park, Kyu-Sik
    • Journal of Korea Multimedia Society / v.10 no.11 / pp.1530-1537 / 2007
  • In this paper, we implemented a speech emotion recognition system that can distinguish human emotional states from recorded speech captured by a single microphone and classify them into four categories: neutrality, happiness, sadness, and anger. In general, speech recorded with a microphone contains background noise due to the speaker's environment and the microphone's characteristics, which can result in serious system performance degradation. In order to minimize the effect of this noise and to improve the system performance, an MA (Moving Average) filter with a relatively simple structure and low computational complexity was adopted. An SFS (Sequential Forward Selection) feature optimization method was then implemented to further improve and stabilize the system performance. For speech emotion classification, an SVM pattern classifier is used. The experimental results indicate emotional classification performance of around 65% in computer simulation and 62% on the ARM platform.


The Subjective Evaluation System Implementation Using Speech Recognition (음성인식을 이용한 주관평가 시스템 구현)

  • 한화영; 고한우; 윤용현; 조택동
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2001.05a / pp.276-279 / 2001
  • Subjective evaluation by questionnaire is widely used as a psychophysical method for assessing affective states such as fatigue, stress, and pleasantness or unpleasantness caused by the environment or by workload. We automated the conventional manual questionnaire procedure, developing a PC-based program that automatically generates questionnaire forms and accepts responses by voice. The automated subjective-evaluation system can process evaluation data efficiently, reduces the subject's mental burden by using speech, and makes it possible to effectively evaluate changes over time between physiological signals and subjective evaluations. Five-point and seven-point scales were chosen for the questionnaire format, with rating words ranging from "strongly disagree" (매우 아니다) to "strongly agree" (매우 그렇다). To obtain a good recognition rate for the rating words, recognition experiments were conducted on the dimensionality of the feature vector and the number of base frames.


A Basic Study on Automation of the Subjective Evaluation using Speech Recognition (음성인식을 이용한 주관평가의 자동화에 관한 기초연구)

  • 한화영; 고한우; 윤용현; 조택동
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.11a / pp.113-117 / 2000
  • This paper discusses methods for automating the subjective evaluation of mental and physical fatigue caused by environmental or work factors, which is currently carried out by hand. We studied a subjective evaluation method that applies speech recognition technology, using rating words - the most natural form of human communication - as the evaluation scale. To automate the evaluation, speech recognition is first performed on the rating words; the recognized results are then used to automatically generate the questionnaire, which is simultaneously saved as a file. The DTW (Dynamic Time Warping) algorithm was used for speech recognition, and a concentration-level assessment was used as the questionnaire content. The recognition experiments targeted the rating words needed to answer the questionnaire.
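DTW-based recognition of a small vocabulary of rating words amounts to comparing a query against stored templates and picking the closest one. A textbook sketch over 1-D feature sequences (the sequences and word list below are invented for illustration; a real system would compare frame-level acoustic feature vectors):

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences:
    fill a cumulative-cost table where each cell extends the cheapest of
    the three admissible predecessor alignments."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def recognize(query, templates):
    """Pick the rating word whose stored template is DTW-closest to the query."""
    return min(templates, key=lambda w: dtw_distance(query, templates[w]))

# Hypothetical templates for two rating words.
templates = {"yes": [1.0, 2.0, 3.0], "no": [5.0, 5.0, 5.0]}
```

DTW's strength here is that it absorbs differences in speaking rate: a query that lingers on a sound (e.g. `[1.0, 1.0, 2.0, 3.0]`) still aligns perfectly with the shorter template `[1.0, 2.0, 3.0]`.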


Analyzing the Acoustic Elements and Emotion Recognition from Speech Signal Based on DRNN (음향적 요소분석과 DRNN을 이용한 음성신호의 감성 인식)

  • Sim, Kwee-Bo; Park, Chang-Hyun; Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.1 / pp.45-50 / 2003
  • Recently, robot technology has advanced remarkably, and emotion recognition is necessary to build robots that interact intimately with people. This paper presents a simulator, and simulation results, that recognize and classify emotions by learning pitch patterns. Because pitch alone is not sufficient for recognizing emotion, we also added other acoustic elements; accordingly, we analyze the relation between emotion and these acoustic elements. The simulator is composed of a DRNN (Dynamic Recurrent Neural Network) and a feature extraction module; the DRNN is the learning algorithm for the pitch patterns.
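The role a recurrent network plays here - folding a pitch contour, sample by sample, into a state that a downstream classifier can use - can be illustrated with a single Elman-style recurrent unit. The fixed, hand-picked weights below are a toy stand-in for the paper's trained DRNN, not its actual architecture:

```python
import math

def rnn_encode(pitch, w_in=0.5, w_rec=0.9):
    """Minimal Elman-style recurrent unit: the hidden state h accumulates
    the pitch contour over time, so the final state summarizes the whole
    pattern for a downstream emotion classifier."""
    h = 0.0
    for x in pitch:
        h = math.tanh(w_in * x + w_rec * h)
    return h

# The same pitch values in a different temporal order yield a different
# encoding, which is the property that makes pitch *patterns* learnable.
rising = rnn_encode([0.1, 0.3, 0.5])
falling = rnn_encode([0.5, 0.3, 0.1])
```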

A Study on the Automatic Monitoring System for the Contact Center Using Emotion Recognition and Keyword Spotting Method (감성인식과 핵심어인식 기술을 이용한 고객센터 자동 모니터링 시스템에 대한 연구)

  • Yoon, Won-Jung; Kim, Tae-Hong; Park, Kyu-Sik
    • Journal of Internet Computing and Services / v.13 no.3 / pp.107-114 / 2012
  • In this paper, we propose an automatic monitoring system for contact centers, designed to manage customer complaints and agent quality. The proposed system allows more accurate monitoring by combining neutral/anger voice emotion recognition with a keyword spotting method. The system can provide professional consultation and management for customers who use verbally abusive language, such as profanity and sexual harassment. We also developed a way to build an algorithm that is robust across heterogeneous speech databases of many unspecified customers. Experimental results confirm stable and improved performance on real contact center speech data.

Robot Emotion Technology (로봇 감성 기술)

  • Park, C.S.; Ryu, J.W.; Sohn, J.C.
    • Electronics and Telecommunications Trends / v.22 no.2 s.104 / pp.1-9 / 2007
  • Research on emotional exchange through human-robot interaction is actively under way in a variety of fields such as public services, home services, entertainment, robot-mediated therapy, and personal care. Development is expected to move gradually from user-centered emotion recognition based on vision and speech toward robots that generate emotion through touch-based interaction and express it in diverse forms. This article surveys development trends in emotional-context recognition technology, which senses the factors affecting emotion through internal and external sensors, and in technologies for expressing robot emotion and behavior.

A Study on Robust Emotion Classification Structure Between Heterogeneous Speech Databases (이종 음성 DB 환경에 강인한 감성 분류 체계에 대한 연구)

  • Yoon, Won-Jung; Park, Kyu-Sik
    • The Journal of the Acoustical Society of Korea / v.28 no.5 / pp.477-482 / 2009
  • An emotion recognition system in commercial environments such as call centers undergoes severe performance degradation and instability because the speech characteristics of the system's training database differ from those of the input speech of unspecified customers. In order to alleviate these problems, this paper extends the traditional neutral/anger emotion recognition method into a two-step hierarchical structure that exploits the differences in emotional characteristics between male and female speech. The experimental results indicate that the proposed method provides very stable and successful emotional classification, improving performance by about 25% over the traditional method.