• Title/Summary/Keyword: 감정음성

Search Results: 223

A Study on Implementation of Emotional Speech Synthesis System using Variable Prosody Model (가변 운율 모델링을 이용한 고음질 감정 음성합성기 구현에 관한 연구)

  • Min, So-Yeon;Na, Deok-Su
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.8
    • /
    • pp.3992-3998
    • /
    • 2013
  • This paper describes a method for adding an emotional speech corpus to a high-quality, large-corpus-based speech synthesizer so that it can produce a variety of synthesized speech. We built the emotional speech corpus in a form usable by a waveform-concatenation synthesizer and implemented a synthesizer that generates emotional speech through the same unit-selection process used for normal speech. A markup language is used to mark emotion in the input text. Emotional speech is generated when the input text matches an intonation phrase of the same length in the emotional speech corpus; otherwise, normal speech is generated. Because the Break Indices (BIs) of emotional speech are more irregular than those of normal speech, the BIs predicted by the synthesizer cannot be used as they are. To solve this problem we applied Variable Break [3] modeling. Experiments were conducted with a Japanese speech synthesizer, and natural emotional synthesized speech was obtained using the break-prediction module of the normal speech synthesizer.
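
The fallback behaviour described in the abstract (emotional units only when the input intonation phrase is covered by the emotional corpus, normal synthesis otherwise) can be pictured with a minimal sketch. The corpus contents, phrase representation, and unit identifiers below are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the fallback logic: an emotion-tagged
# intonation phrase is synthesized from the emotional corpus only when a matching
# phrase exists there; otherwise the normal corpus is used.

from dataclasses import dataclass


@dataclass
class Phrase:
    text: str
    emotion: str  # "neutral", "happy", "sad", ...


# Hypothetical emotional corpus indexed by (emotion, phrase text).
EMOTIONAL_CORPUS = {
    ("happy", "ありがとうございます"): "emotional-unit-sequence-0421",
    ("sad", "ごめんなさい"): "emotional-unit-sequence-0817",
}


def select_units(phrase: Phrase) -> str:
    """Return emotional units if the intonation phrase is covered by the
    emotional corpus, otherwise fall back to normal unit selection."""
    key = (phrase.emotion, phrase.text)
    if phrase.emotion != "neutral" and key in EMOTIONAL_CORPUS:
        return EMOTIONAL_CORPUS[key]            # emotional speech
    return f"normal-units({phrase.text})"        # normal speech fallback


if __name__ == "__main__":
    print(select_units(Phrase("ありがとうございます", "happy")))
    print(select_units(Phrase("こんにちは", "happy")))  # not covered -> normal
```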

Adaptive Speech Emotion Recognition Framework Using Prompted Labeling Technique (프롬프트 레이블링을 이용한 적응형 음성기반 감정인식 프레임워크)

  • Bang, Jae Hun;Lee, Sungyoung
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.2
    • /
    • pp.160-165
    • /
    • 2015
  • Traditional speech emotion recognition techniques recognize emotions with a general model trained on the voices of many speakers. Such models cannot capture an individual user's speech characteristics, so recognition results vary widely from person to person. This paper proposes an adaptive speech emotion recognition framework that builds a personalized recognition model from the user's immediate feedback, collected with a prompted labeling technique, and applies it to each user in a mobile-device environment. The proposed framework recognizes emotions with this personalized model. In three comparative experiments it performed better than conventional techniques, and it can be applied to healthcare, emotion monitoring, and personalized services.
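
A minimal sketch of the prompted-labeling idea, assuming a simple feature-vector front end and an off-the-shelf scikit-learn classifier in place of the paper's recognition model; the class names, feature dimensionality, and prompting policy are illustrative only.

```python
# Sketch: the device prompts the user to confirm the emotion of a recent utterance,
# and the confirmed (features, label) pairs are used to adapt a personal model.

import numpy as np
from sklearn.linear_model import LogisticRegression


class AdaptiveEmotionRecognizer:
    """Personal model adapted from labels the user confirms when prompted."""

    def __init__(self):
        self.features = []   # feature vectors of prompted-label samples
        self.labels = []     # user-confirmed emotion labels
        self.model = None

    def add_prompted_label(self, feature_vec, user_label):
        """Store a sample whose emotion the user confirmed via a prompt."""
        self.features.append(feature_vec)
        self.labels.append(user_label)

    def adapt(self):
        """Re-train the personal model on all prompted-label data so far."""
        if len(set(self.labels)) >= 2:          # need at least two emotion classes
            self.model = LogisticRegression(max_iter=1000)
            self.model.fit(np.vstack(self.features), self.labels)

    def predict(self, feature_vec):
        if self.model is None:
            return "neutral"                    # before any adaptation
        return self.model.predict(feature_vec.reshape(1, -1))[0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rec = AdaptiveEmotionRecognizer()
    rec.add_prompted_label(rng.normal(0, 1, 13), "happy")
    rec.add_prompted_label(rng.normal(3, 1, 13), "angry")
    rec.adapt()
    print(rec.predict(rng.normal(3, 1, 13)))
```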

Transformer-based transfer learning and multi-task learning for improving the performance of speech emotion recognition (음성감정인식 성능 향상을 위한 트랜스포머 기반 전이학습 및 다중작업학습)

  • Park, Sunchan;Kim, Hyung Soon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.5
    • /
    • pp.515-522
    • /
    • 2021
  • It is hard to prepare sufficient training data for speech emotion recognition because emotion labeling is difficult. In this paper, we apply transfer learning from large-scale speech recognition training data to a transformer-based model to improve speech emotion recognition. In addition, we propose a method that exploits context information without decoding, through multi-task learning with speech recognition. In speech emotion recognition experiments on the IEMOCAP dataset, our model achieves a weighted accuracy of 70.6 % and an unweighted accuracy of 71.6 %, showing that the proposed method is effective in improving speech emotion recognition performance.
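
A minimal PyTorch sketch of the multi-task setup described above, with an emotion head and a CTC-based recognition head sharing a transformer encoder; the layer sizes, mean pooling, and loss weighting are assumptions for illustration, not the authors' configuration.

```python
# Sketch of multi-task learning: a shared transformer encoder feeds both an
# utterance-level emotion head and a frame-level CTC head for speech recognition,
# and the two losses are combined with a weight alpha.

import torch
import torch.nn as nn


class MultiTaskSER(nn.Module):
    def __init__(self, feat_dim=80, d_model=256, n_emotions=4, vocab_size=30):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.emotion_head = nn.Linear(d_model, n_emotions)   # utterance-level
        self.asr_head = nn.Linear(d_model, vocab_size)        # frame-level (CTC)

    def forward(self, feats):
        h = self.encoder(self.proj(feats))              # (B, T, d_model)
        emo_logits = self.emotion_head(h.mean(dim=1))   # pool over time
        asr_logits = self.asr_head(h)                    # per-frame token logits
        return emo_logits, asr_logits


def multitask_loss(emo_logits, emo_targets, asr_logits, asr_targets,
                   input_lengths, target_lengths, alpha=0.5):
    """alpha * emotion cross-entropy + (1 - alpha) * CTC loss for the ASR task."""
    ce = nn.functional.cross_entropy(emo_logits, emo_targets)
    log_probs = asr_logits.log_softmax(-1).transpose(0, 1)   # (T, B, vocab)
    ctc = nn.functional.ctc_loss(log_probs, asr_targets,
                                 input_lengths, target_lengths, blank=0)
    return alpha * ce + (1 - alpha) * ctc


if __name__ == "__main__":
    model = MultiTaskSER()
    feats = torch.randn(2, 120, 80)                  # 2 utterances, 120 frames
    emo_logits, asr_logits = model(feats)
    print(emo_logits.shape, asr_logits.shape)        # (2, 4) (2, 120, 30)
```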

Emotion Robust Speech Recognition using Speech Transformation (음성 변환을 사용한 감정 변화에 강인한 음성 인식)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.5
    • /
    • pp.683-687
    • /
    • 2010
  • This paper studies methods that use frequency warping, one of several speech transformation techniques, to build a speech recognition system that is robust to emotional variation. The effect of emotion on the speech signal was first examined with a speech database containing various emotions; the speech spectrum is affected by emotional variation, and this effect is one of the reasons recognition performance degrades. A new training method that applies frequency warping during training is presented to reduce the effect of emotional variation, and a recognition system based on vocal tract length normalization (VTLN) is built for comparison. Isolated-word recognition experiments with HMMs showed that the new training method reduced the error rate of the conventional recognition system on speech containing various emotions.
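
A toy sketch of the kind of frequency warping used in VTLN-style processing, assuming a simple linear rescaling of the magnitude-spectrum bins before feature extraction; the warping factors and interpolation scheme are illustrative, not the paper's exact procedure.

```python
# Sketch: the frequency axis of a magnitude spectrum is stretched or compressed
# by a warping factor, so the same utterance can be presented with several warps
# during training to reduce sensitivity to spectral shifts.

import numpy as np


def warp_spectrum(mag_spec, alpha=1.0):
    """Warp the frequency axis of a magnitude spectrum (n_bins,) by factor alpha.
    alpha > 1 shifts spectral content toward lower frequencies, alpha < 1 upward."""
    n_bins = mag_spec.shape[0]
    src_bins = np.arange(n_bins)
    # Position in the original spectrum that each warped output bin reads from.
    warped_positions = np.clip(src_bins * alpha, 0, n_bins - 1)
    return np.interp(warped_positions, src_bins, mag_spec)


if __name__ == "__main__":
    spec = np.abs(np.fft.rfft(np.random.randn(512)))
    for alpha in (0.9, 1.0, 1.1):        # several warps per training utterance
        print(alpha, warp_spectrum(spec, alpha)[:5])
```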

A study on the vocal characteristics of spoken emotional expressions (구어체 정서표현에 있어서의 음성 특성 연구)

  • 이수정;김명재;김정수
    • Science of Emotion and Sensibility
    • /
    • v.2 no.2
    • /
    • pp.53-66
    • /
    • 1999
  • This study attempted to identify the vocal parameters of conversational emotional expressions as basic data for speech synthesis. To this end, the most frequently used conversational emotional expressions were first collected, and the vocal features that speakers attend to most when uttering them were explored. To build a valid database of spoken emotional expressions, data were collected and analyzed separately for speakers in their twenties and thirties. The results showed that voice intensity, intensity variation, and timbre serve as the main criteria distinguishing the utterance characteristics of the various emotional expressions. The maps of the two age groups' vocal expressions produced by multidimensional analysis showed that the individual emotions have quite consistent characteristics in the latent vocal dimensions.
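
The parameters the study highlights (intensity, intensity variation, timbre) can be estimated with standard tooling; the sketch below uses librosa with a hypothetical file path, and the spectral centroid stands in as a crude timbre proxy, which is an assumption rather than the study's method.

```python
# Rough sketch of extracting frame-level intensity, its variation, and a simple
# spectral proxy for timbre from one utterance.

import librosa
import numpy as np


def vocal_parameters(path="utterance.wav"):
    y, sr = librosa.load(path, sr=None)
    rms = librosa.feature.rms(y=y)[0]                            # frame intensity
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]  # timbre proxy
    return {
        "mean_intensity": float(rms.mean()),
        "intensity_variation": float(rms.std()),   # how much loudness moves
        "mean_spectral_centroid": float(centroid.mean()),
    }
```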


A study on the vocal characteristics of spoken emotional expressions (구어체 정서표현에 있어서의 음성 특성 연구)

  • 이수정
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 1999.11a
    • /
    • pp.277-291
    • /
    • 1999
  • This study attempted to identify the vocal parameters of conversational emotional expressions as basic data for speech synthesis. To this end, the most frequently used conversational emotional expressions were first collected, and the vocal features that speakers attend to most when uttering them were explored. To build a valid database of spoken emotional expressions, data were collected and analyzed separately for speakers in their twenties and thirties. The results showed that voice intensity, intensity variation, and timbre serve as the main criteria distinguishing the utterance characteristics of the various emotional expressions. The maps of the two age groups' vocal expressions produced by multidimensional analysis showed that the individual emotions have quite consistent characteristics in the latent vocal dimensions.


Speech emotion recognition based on CNN - LSTM Model (CNN - LSTM 모델 기반 음성 감정인식)

  • Yoon, SangHyeuk;Jeon, Dayun;Park, Neungsoo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.11a
    • /
    • pp.939-941
    • /
    • 2021
  • People express emotion through facial expressions, voice, and words. This paper proposes a method for classifying emotion using only the speaker's voice data. The speech data are converted into a time-frequency representation using the mel spectrogram. A CNN turns the mel spectrogram into feature vectors, and a bidirectional LSTM then analyzes how the emotion changes over the course of the utterance. Finally, a fully connected network classifies the overall emotion. Six emotion classes were used: anger, excitement, fear, happiness, sadness, and neutral, and the Korean emotional speech database built by a research team at Sangmyung University was used. In the experiments, the proposed CNN-LSTM model achieved an accuracy of 88.89 %.
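
A minimal PyTorch sketch of the mel-spectrogram → CNN → bidirectional LSTM → fully connected pipeline the paper describes; the layer sizes, pooling, and the use of the final time step for classification are assumptions, not the authors' reported configuration.

```python
# Sketch: a mel spectrogram passes through convolutional layers, the time axis is
# kept as a sequence for a BiLSTM, and a fully connected layer predicts one of
# six emotions.

import torch
import torch.nn as nn


class CNNBiLSTM(nn.Module):
    def __init__(self, n_mels=64, n_classes=6, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
        )
        self.lstm = nn.LSTM(input_size=32 * (n_mels // 4), hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, mel):                  # mel: (B, 1, n_mels, T)
        h = self.cnn(mel)                    # (B, 32, n_mels/4, T)
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)   # (B, T, features)
        out, _ = self.lstm(h)                # BiLSTM over the time axis
        return self.fc(out[:, -1])           # last time step -> emotion logits


if __name__ == "__main__":
    model = CNNBiLSTM()
    dummy = torch.randn(2, 1, 64, 100)       # batch of 2 mel spectrograms
    print(model(dummy).shape)                # torch.Size([2, 6])
```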

An emotional speech synthesis markup language processor for multi-speaker and emotional text-to-speech applications (다음색 감정 음성합성 응용을 위한 감정 SSML 처리기)

  • Ryu, Se-Hui;Cho, Hee;Lee, Ju-Hyun;Hong, Ki-Hyung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.5
    • /
    • pp.523-529
    • /
    • 2021
  • In this paper, we design and implement an Emotional Speech Synthesis Markup Language (SSML) processor. Multi-speaker emotional speech synthesis technology that can express multiple voice colors and emotions has been developed, and we designed Emotional SSML by extending SSML to cover multiple voice colors and emotional expressions. The Emotional SSML processor has a graphical user interface and consists of four components: a multi-speaker emotional text editor for easily marking specific voice colors and emotions at the desired positions; an Emotional SSML document generator that automatically creates an Emotional SSML document from the editor's output; an Emotional SSML parser that parses the document; and a sequencer that controls a multi-speaker emotional Text-to-Speech (TTS) engine based on the parser's output. Because it is based on SSML, a programming-language- and platform-independent open standard, the Emotional SSML processor can be easily integrated with various speech synthesis engines and facilitates the development of multi-speaker emotional text-to-speech applications.
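
A small sketch of how such an SSML extension might be parsed into commands for a sequencer; the emotion attribute on <voice> and the overall tag layout are hypothetical, since the paper's Emotional SSML schema is not reproduced in the abstract.

```python
# Sketch: parse an SSML document extended with per-speaker emotion markup into a
# sequence of (speaker, emotion, text) commands for a multi-speaker emotional TTS engine.

import xml.etree.ElementTree as ET

EMOTIONAL_SSML = """\
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis">
  <voice name="female_1" emotion="happy">오늘 날씨가 정말 좋네요.</voice>
  <voice name="male_2" emotion="sad">하지만 내일은 비가 온대요.</voice>
</speak>"""

NS = "{http://www.w3.org/2001/10/synthesis}"


def parse_emotional_ssml(doc):
    """Yield (speaker, emotion, text) tuples in document order."""
    root = ET.fromstring(doc)
    for voice in root.iter(f"{NS}voice"):
        yield voice.get("name"), voice.get("emotion", "neutral"), voice.text


if __name__ == "__main__":
    for speaker, emotion, text in parse_emotional_ssml(EMOTIONAL_SSML):
        print(speaker, emotion, text)
```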

Emotion Recognition Method from Speech Signal Using the Wavelet Transform (웨이블렛 변환을 이용한 음성에서의 감정 추출 및 인식 기법)

  • Go, Hyoun-Joo;Lee, Dae-Jong;Park, Jang-Hwan;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.2
    • /
    • pp.150-155
    • /
    • 2004
  • This paper presents an emotion recognition method based on the speech signal. Six basic human emotions are investigated: happiness, sadness, anger, surprise, fear, and dislike. The proposed recognizer constructs a codebook for each emotional state using the wavelet transform. The emotional state is first verified in each filter bank, and the final recognition is obtained by a multi-decision scheme. The database consists of 360 emotional utterances from twenty speakers, each of whom spoke one sentence three times for each of the six emotional states. The proposed method improved the recognition rate by more than 5 % over previous work.
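
A rough sketch of the wavelet-codebook idea under stated assumptions: subband log-energies from a discrete wavelet decomposition form the feature vector, and recognition picks the emotion whose codebook gives the smallest distortion. The wavelet choice, feature definition, and codebook sizes are illustrative, not the paper's.

```python
# Sketch: wavelet subband features plus per-emotion codebooks, with recognition
# by minimum codebook distortion.

import numpy as np
import pywt


def wavelet_features(frame, wavelet="db4", level=3):
    """Subband log-energies from a multi-level discrete wavelet decomposition."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-10) for c in coeffs])


def codebook_distortion(features, codebook):
    """Distance from the feature vector to the nearest codeword."""
    return np.min(np.linalg.norm(codebook - features, axis=1))


def recognize(frame, codebooks):
    feats = wavelet_features(frame)
    return min(codebooks, key=lambda emo: codebook_distortion(feats, codebooks[emo]))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy codebooks: 8 codewords of dimension 4 (one level-3 decomposition -> 4 subbands).
    codebooks = {e: rng.normal(size=(8, 4)) for e in ("happy", "sad", "angry")}
    print(recognize(rng.normal(size=256), codebooks))
```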

Korean Emotional Speech and Facial Expression Database for Emotional Audio-Visual Speech Generation (대화 영상 생성을 위한 한국어 감정음성 및 얼굴 표정 데이터베이스)

  • Baek, Ji-Young;Kim, Sera;Lee, Seok-Pil
    • Journal of Internet Computing and Services
    • /
    • v.23 no.2
    • /
    • pp.71-77
    • /
    • 2022
  • In this paper, a database is collected for extending a speech synthesis model to one that synthesizes speech according to emotion and generates facial expressions. The database is divided into male and female data and consists of emotional speech and facial expressions. Two professional actors of different genders spoke sentences in Korean. The sentences are divided into four emotions: happiness, sadness, anger, and neutrality, and each actor performed about 3,300 sentences per emotion. The 26,468 sentences collected by filming do not overlap, and their expressions match the intended emotion. Since a high-quality database is important for the performance of future research, the database was assessed for emotional category, intensity, and genuineness. To determine accuracy by data modality, the database is divided into audio-video, audio-only, and video-only data.