• Title/Summary/Keyword: spontaneous speech


Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) is a technique used to analyze a speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has grown, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing research has attained impressive results mainly by using acted speech recorded by skilled actors in controlled environments for various scenarios. In particular, there is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expressions than spontaneous speech. For this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to conduct emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using the VGG (Visual Geometry Group) network after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, consisting of 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. Using a 2-dimensional time-frequency spectrogram, we achieved average accuracies of 83.5% and 73.0% for adults and young people, respectively. In conclusion, our findings demonstrate that the suggested framework outperforms current state-of-the-art techniques for spontaneous speech and shows promising performance despite the difficulty of quantifying emotional expression in spontaneous speech.
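
Not from the paper: a minimal sketch of the pipeline the abstract describes, converting a 1-D waveform into a 2-D log-mel spectrogram and classifying it with a VGG backbone. The library choices (librosa, torchvision) and all hyperparameters are assumptions.

```python
# Sketch: 1-D audio -> 2-D spectrogram -> VGG classifier over 7 emotions.
# librosa/torchvision and every parameter below are assumptions, not the
# paper's actual configuration.
import librosa
import numpy as np
import torch
import torchvision.models as models

EMOTIONS = ["joy", "love", "anger", "fear", "sadness", "surprise", "neutral"]

def wav_to_spectrogram(path, sr=16000, n_mels=128):
    """Convert a 1-D waveform into a 2-D log-mel spectrogram 'image'."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel, ref=np.max)        # (n_mels, frames)
    x = torch.from_numpy(logmel).float().unsqueeze(0)    # one channel
    return x.repeat(3, 1, 1).unsqueeze(0)                # fake-RGB batch

model = models.vgg16(weights=None)                       # VGG backbone
model.classifier[6] = torch.nn.Linear(4096, len(EMOTIONS))  # 7-way head

with torch.no_grad():
    logits = model(wav_to_spectrogram("utterance.wav"))
    print(EMOTIONS[logits.argmax(dim=1).item()])
```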

Affixation effects on word-final coda deletion in spontaneous Seoul Korean speech

  • Kim, Jungsun
    • Phonetics and Speech Sciences / v.8 no.4 / pp.9-14 / 2016
  • This study investigated the patterns of coda deletion in spontaneous Seoul Korean speech. More specifically, it focused on three factors promoting coda deletion: word position, consonant type, and morpheme type. The results revealed that, first, coda deletion frequently occurred when affixes were attached to the ends of words, rather than in word-internal affixes or in roots. Second, the alveolar consonants [n] and [l] in the coda positions of the high-frequency affixes [nɨn] and [lɨl] were the most likely to be deleted. Additionally, regarding affix reduction in word-final position, all subjects seemed to depend on this articulatory strategy to a similar degree. In sum, the study found that affixes without primary semantic content tend to undergo reduction in spontaneous speech, favoring the occurrence of specific pronunciation variants.
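
Not part of the abstract: a small sketch of how deletion rates by the study's three factors could be tabulated from an annotated corpus. The DataFrame columns and example rows are hypothetical.

```python
# Sketch: deletion proportion per level of each of the three factors.
# The annotation format (columns and values) is a hypothetical stand-in.
import pandas as pd

tokens = pd.DataFrame(
    [
        ("word-final",    "n", "affix", 1),   # deleted = 1: coda deleted
        ("word-final",    "l", "affix", 1),
        ("word-internal", "n", "affix", 0),
        ("word-final",    "k", "root",  0),
    ],
    columns=["word_position", "consonant", "morpheme_type", "deleted"],
)

# Mirror the three-factor analysis: deletion rate per factor level.
for factor in ["word_position", "consonant", "morpheme_type"]:
    print(tokens.groupby(factor)["deleted"].mean())
```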

Patterns of consonant deletion in the word-internal onset position: Evidence from spontaneous Seoul Korean speech

  • Kim, Jungsun;Yun, Weonhee;Kang, Ducksoo
    • Phonetics and Speech Sciences / v.8 no.1 / pp.45-51 / 2016
  • This study examined the deletion of onset consonants in word-internal position in spontaneous Seoul Korean speech. It used data from speakers in their 20s extracted from the Korean Corpus of Spontaneous Speech (Yun et al., 2015). The proportion of word-internal onset-consonant deletion was analyzed using a linear mixed-effects regression model. The factors that promoted onset deletion were primarily consonant type and phonetic context. The results showed that onset deletion was more likely to occur for the lenis velar stop [k] than for the other consonants, and, among phonetic contexts, when the preceding vowel was the low central vowel [a]. Moreover, some speakers tended to delete onset consonants (e.g., [k] and [n]) more frequently than others, reflecting individual differences. This study implies that word-internal onsets undergo a process of gradient reduction within individuals' articulatory strategies.
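
A hedged sketch of the reported analysis: a linear mixed-effects model of onset deletion with a per-speaker random intercept, using statsmodels. The file name, column names, and model formula are assumptions, not the study's actual specification.

```python
# Sketch: mixed-effects regression of onset deletion; the per-speaker
# random intercept captures the individual differences the study reports.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("onset_tokens.csv")   # hypothetical annotated token file

# deleted ~ consonant type + preceding vowel, random intercept by speaker.
model = smf.mixedlm("deleted ~ C(consonant) + C(prev_vowel)",
                    data=df, groups=df["speaker"])
result = model.fit()
print(result.summary())
```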

A Study on the Male Vowel Formants of the Korean Corpus of Spontaneous Speech (한국어 자연발화 음성코퍼스의 남성 모음 포먼트 연구)

  • Kim, Soonok;Yoon, Kyuchul
    • Phonetics and Speech Sciences / v.7 no.2 / pp.95-102 / 2015
  • The purpose of this paper is to extract the vowel formants of the ten adult male speakers in their twenties and thirties from the Korean Corpus of Spontaneous Speech [4], also known as the Seoul corpus, and to analyze them by comparing them to earlier work on the Buckeye Corpus of Conversational Speech [1] in terms of the various linguistic factors expected to affect the formant distribution. The vowels extracted from the Korean corpus were also compared to those of read Korean speech. The results showed that the distribution of the vowel formants from the Korean corpus was very different from that of read Korean speech; the comparison between the English corpus and read English speech showed similar patterns. The factors affecting the Korean vowel formants were the interviewer's sex, the location of the target vowel (or the syllable containing it) within the phrasal word or utterance, and the speech rate of the surrounding words.
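
Not the paper's measurement script: a minimal sketch of midpoint F1/F2 extraction with Praat via the parselmouth package. The file name, segment times, and Burg-analysis settings are assumptions.

```python
# Sketch: extract F1/F2 at a vowel's temporal midpoint with parselmouth.
import parselmouth

snd = parselmouth.Sound("speaker_m20_utt001.wav")        # hypothetical file
formant = snd.to_formant_burg(max_number_of_formants=5,
                              maximum_formant=5000.0)    # typical male ceiling

t_mid = 0.5 * (0.231 + 0.312)                # hypothetical vowel start/end (s)
f1 = formant.get_value_at_time(1, t_mid)
f2 = formant.get_value_at_time(2, t_mid)
print(f"F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
```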

Spontaneous Speech Language Modeling using N-gram based Similarity (N-gram 기반의 유사도를 이용한 대화체 연속 음성 언어 모델링)

  • Park Young-Hee;Chung Minhwa
    • MALSORI / no.46 / pp.117-126 / 2003
  • This paper presents our language model adaptation for Korean spontaneous speech recognition. Korean spontaneous speech exhibits various characteristics of content and style, such as filled pauses, word omission, and contraction, compared with written text corpora. Our approach focuses on improving the estimation of domain-dependent n-gram models by relevance-weighting out-of-domain text data, where style is represented by n-gram based tf*idf similarity. In addition to relevance weighting, we use disfluencies as predictors of neighboring words. The best result reduces the word error rate by 9.7% relative, showing that n-gram based relevance weighting captures style differences well and that disfluencies are also good predictors.
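
A rough sketch, not the paper's implementation: scoring out-of-domain sentences by n-gram tf*idf similarity to an in-domain (spontaneous-style) sample, with scikit-learn standing in for the original tooling. The sentences and n-gram range are illustrative assumptions.

```python
# Sketch: relevance weights from n-gram tf*idf cosine similarity to the
# in-domain corpus; high-scoring out-of-domain text gets more LM weight.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

in_domain = ["uh I mean we could go tomorrow", "well it's it's fine"]
out_domain = ["The committee approved the annual budget.",
              "yeah so um let's just meet later"]

vec = TfidfVectorizer(ngram_range=(1, 2))     # word 1- and 2-grams
vec.fit(in_domain + out_domain)
sim = cosine_similarity(vec.transform(out_domain),
                        vec.transform([" ".join(in_domain)]))

for sent, w in zip(out_domain, sim.ravel()):
    print(f"relevance weight {w:.3f}  {sent}")
```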

The Korean Corpus of Spontaneous Speech

  • Yun, Weonhee;Yoon, Kyuchul;Park, Sunwoo;Lee, Juhee;Cho, Sungmoon;Kang, Ducksoo;Byun, Koonhyuk;Hahn, Hyeseung;Kim, Jungsun
    • Phonetics and Speech Sciences / v.7 no.2 / pp.103-109 / 2015
  • This paper describes the development of the Korean Corpus of Spontaneous Speech, also called the Seoul corpus. The corpus contains audio recordings of interview-style spontaneous speech from 40 native speakers of Seoul Korean. The talkers are divided into four age groups: talkers in their teens, twenties, thirties, and forties. Each age group has ten talkers, five male and five female. The method used to elicit and record the speech is described. The corpus, containing around 220,000 phrasal words, was phonemically labeled along with information on the boundaries of Korean phrasal words and utterances, which were additionally romanized. In a test of labeling consistency, the inter-labeler agreement on phoneme identification was 98.1% and the mean deviation in boundary placement was 9.04 msec. The corpus will be made available for free to the research community in March 2015.
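
As an illustration only, the two consistency figures reported above could be computed from aligned annotations roughly as follows; the input format is hypothetical.

```python
# Sketch: inter-labeler phoneme agreement and mean boundary deviation,
# computed from two labelers' aligned annotations (toy data below).
import numpy as np

# Aligned phoneme labels from labelers A and B for the same segments.
a = ["k", "a", "n", "s", "a"]
b = ["k", "a", "n", "c", "a"]
agreement = np.mean([x == y for x, y in zip(a, b)])   # phoneme identification

# Corresponding boundary times (seconds) from the two labelers.
ta = np.array([0.102, 0.180, 0.251, 0.330])
tb = np.array([0.100, 0.188, 0.250, 0.341])
mean_dev = np.mean(np.abs(ta - tb)) * 1000            # mean deviation, msec

print(f"agreement = {agreement:.1%}, mean deviation = {mean_dev:.2f} msec")
```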

A User-friendly Remote Speech Input Method in Spontaneous Speech Recognition System

  • Suh, Young-Joo;Park, Jun;Lee, Young-Jik
    • The Journal of the Acoustical Society of Korea / v.17 no.2E / pp.38-46 / 1998
  • In this paper, we propose a remote speech input device, a new method of user-friendly speech input for spontaneous speech recognition systems. We focus user-friendliness on hands-free operation and microphone independence in speech recognition applications. Our method adopts two algorithms: automatic speech detection and microphone-array delay-and-sum beamforming (DSBF)-based speech enhancement. The automatic speech detection algorithm is composed of two stages: the detection of speech portion candidates and the classification of speech and nonspeech, using pitch information, for each detected candidate. The DSBF algorithm adopts the time-domain cross-correlation method for its time delay estimation. In the performance evaluation, the speech detection algorithm shows within-200 ms start-point accuracies of 93% to 99% under 15 dB, 20 dB, and 25 dB signal-to-noise ratio (SNR) environments, and the corresponding end-point accuracies are 72%, 89%, and 93%, respectively. The classification of speech and nonspeech for the detected start-point region of the input signal is performed by the pitch-information-based method; the rates of correct classification for speech and nonspeech input are 99% and 90%, respectively. The eight-microphone array-based speech enhancement using the DSBF algorithm shows a maximum SNR gain of 6 dB over a single microphone and an error reduction of more than 15% in the spontaneous speech recognition domain.
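
Not the authors' code: a toy sketch of delay-and-sum beamforming with time-domain cross-correlation delay estimation, as the abstract describes, using synthetic signals for an eight-microphone array.

```python
# Sketch: DSBF with cross-correlation time-delay estimation (toy example).
import numpy as np

def estimate_delay(ref, sig):
    """Integer-sample delay of sig relative to ref via cross-correlation."""
    corr = np.correlate(sig, ref, mode="full")
    return np.argmax(corr) - (len(ref) - 1)

def dsbf(channels):
    """Align each channel to channel 0 and average (delay-and-sum)."""
    ref = channels[0]
    out = np.zeros_like(ref, dtype=float)
    for ch in channels:
        d = estimate_delay(ref, ch)
        out += np.roll(ch, -d)        # compensate the estimated delay
    return out / len(channels)

# Eight microphones: the same tone arriving with different delays plus noise.
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 200 * np.arange(1600) / 16000)
mics = [np.roll(clean, d) + 0.3 * rng.standard_normal(1600)
        for d in range(8)]
enhanced = dsbf(mics)                 # SNR improves over any single channel
```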

Developing a Korean Standard Speech DB (한국인 표준 음성 DB 구축)

  • Shin, Jiyoung;Jang, Hyejin;Kang, Younmin;Kim, Kyung-Wha
    • Phonetics and Speech Sciences / v.7 no.1 / pp.139-150 / 2015
  • The purpose of this study is to develop a speech corpus for standard Korean speech. For the samples to viably represent the state of spoken Korean, demographic factors were considered to ensure a balanced spread of age, gender, and dialect: nine regional dialects were categorized, and five age groups were established, from individuals in their 20s to those in their 60s. A speech-sample collection protocol was developed for this study in which each speaker performs five tasks: two reading tasks, two semi-spontaneous speech tasks, and one spontaneous speech task. This configuration accommodates the gathering of rich and well-balanced speech samples across various speech types and is expected to improve the utility of the corpus. Samples from 639 individuals were collected using the protocol; speech samples were also collected from other sources, for a combined total of samples from 1,012 individuals. The data accumulated in this database will be used to develop a speaker identification system, and may also be applied to, among other fields, phonetic studies, sociolinguistics, and language pathology. We plan to supplement this large-scale speech corpus next year, in terms of research methodology and content, to better answer the needs of diverse fields.
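
A small illustrative sketch, not part of the study: checking demographic balance across the dialect, age, and gender cells described above. The metadata file and column names are hypothetical.

```python
# Sketch: cross-tabulate speakers per demographic cell; a balanced corpus
# should show similar counts in every cell.
import pandas as pd

speakers = pd.read_csv("speaker_metadata.csv")  # id, gender, age_group, dialect

print(pd.crosstab([speakers["dialect"], speakers["age_group"]],
                  speakers["gender"]))
```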

A Comparative Study on the Male and Female Vowel Formants of the Korean Corpus of Spontaneous Speech (한국어 자연발화 음성코퍼스의 남녀 모음 포먼트 비교 연구)

  • Yoon, Kyuchul;Kim, Soonok
    • Phonetics and Speech Sciences / v.7 no.2 / pp.131-138 / 2015
  • The aim of this work is to compare the vowel formants of the ten adult female speakers in their twenties and thirties from the Seoul corpus [7] with those of the corresponding Korean male speakers from the same corpus and of American female speakers from the Buckeye corpus [4]. In addition, various linguistic factors expected to affect the formant frequencies were examined to account for the distribution of the vowel formants. Formant frequencies extracted from the Seoul corpus were also compared to those from read speech. The results showed that the formant distribution of the spontaneous speech was very different from that of the read speech, while the female-to-male comparison showed similar patterns in both languages. To a greater or lesser degree, the potential linguistic factors influenced the formant frequencies of the vowels.
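
The paper does not specify its comparison procedure here; one common way to put female and male formants on a comparable scale is speaker-wise z-scoring (Lobanov normalization), sketched below with hypothetical file and column names.

```python
# Sketch: Lobanov normalization (per-speaker z-scores) so vowel formants
# can be compared across speakers and sexes. Columns are hypothetical.
import pandas as pd

df = pd.read_csv("seoul_formants.csv")   # speaker, sex, vowel, F1, F2

for f in ["F1", "F2"]:
    df[f + "_z"] = df.groupby("speaker")[f].transform(
        lambda x: (x - x.mean()) / x.std())

# Mean normalized formants per vowel and sex, comparable across speakers.
print(df.groupby(["sex", "vowel"])[["F1_z", "F2_z"]].mean())
```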

Performance of the Phoneme Segmenter in Speech Recognition System (음성인식 시스템에서의 음소분할기의 성능)

  • Lee, Gwang-seok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.10a / pp.705-708 / 2009
  • This research describes a neural network-based phoneme segmenter for recognizing spontaneous speech. The inputs to the phoneme segmenter are a 16th-order mel-scaled FFT, normalized frame energy, and the ratio of energy in the 0-3 kHz band to that above 3 kHz. All features are differences between two consecutive 10 msec frames. The main body of the segmenter is a single-hidden-layer MLP (multi-layer perceptron) with 72 inputs, 20 hidden nodes, and one output node. The segmentation accuracy is 78% with an insertion rate of 7.8%.
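
A minimal sketch, assuming PyTorch, of the described single-hidden-layer MLP (72 inputs, 20 hidden nodes, one output); the activation choices and everything about training are assumptions not stated in the abstract.

```python
# Sketch: single-hidden-layer MLP phoneme segmenter (72-20-1).
import torch
import torch.nn as nn

segmenter = nn.Sequential(
    nn.Linear(72, 20),    # 72 delta features per consecutive frame pair
    nn.Sigmoid(),
    nn.Linear(20, 1),
    nn.Sigmoid(),         # output ~ probability of a phoneme boundary
)

frame_features = torch.randn(1, 72)        # stand-in for real features
boundary_prob = segmenter(frame_features)  # threshold to mark a boundary
print(float(boundary_prob))
```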
