• Title/Summary/Keyword: Formant frequencies

Speech Emotion Recognition by Speech Signals on a Simulated Intelligent Robot (모의 지능로봇에서 음성신호에 의한 감정인식)

  • Jang, Kwang-Dong;Kwon, Oh-Wook
    • Proceedings of the KSPS conference
    • /
    • 2005.11a
    • /
    • pp.163-166
    • /
    • 2005
  • We propose a speech emotion recognition method for a natural human-robot interface. In the proposed method, emotion is classified into 6 classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from microphones in 5 different directions. Experimental results show that the proposed method yields 59% classification accuracy while human classifiers give about 50% accuracy, which confirms that the proposed method achieves performance comparable to a human.

  • PDF
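The entry above classifies utterances into six emotion classes with a Gaussian-kernel support vector machine over statistics of phonetic and prosodic features. Below is a minimal sketch of that classification stage in Python, assuming the per-utterance feature statistics have already been extracted; the random placeholder data and the specific hyperparameters are illustrative, not taken from the paper.

```python
# Illustrative sketch: Gaussian (RBF) kernel SVM over utterance-level features.
# Feature extraction (log energy, shimmer, formants, pitch, jitter, ...) is
# assumed to have been done elsewhere; X here is only placeholder data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["angry", "bored", "happy", "neutral", "sad", "surprised"]

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))           # 300 utterances x 16 feature statistics
y = rng.integers(0, len(EMOTIONS), 300)  # placeholder emotion labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X[:240], y[:240])                # train on the first 240 utterances
print(f"held-out accuracy: {clf.score(X[240:], y[240:]):.2f}")
```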

Contrastive Analysis of Mongolian and Korean Monophthongs Based on Acoustic Experiment (음향 실험을 기초로 한 몽골어와 한국어의 단모음 대조분석)

  • Yi, Joong-Jin
    • Phonetics and Speech Sciences
    • /
    • v.2 no.2
    • /
    • pp.3-16
    • /
    • 2010
  • This study aims to set the hierarchy of difficulty of the 7 Korean monophthongs for Mongolian learners of Korean according to Prator's theory based on the Contrastive Analysis Hypothesis. In addition, it is shown that the difficulties and errors of Mongolian learners of Korean as a second or foreign language proceed directly from this hierarchy of difficulty. The study began by examining the Mongolian monophthongs produced by 60 Mongolian speakers; the data were analyzed for the formant frequencies F1 and F2 of each vowel. Then, the 7 Korean monophthongs were compared with the resulting Mongolian formant values and assigned to one of 3 levels: 'same', 'similar', or 'different sound'. The findings in assessing the differences of the 8 nearest equivalents of Korean and Mongolian vowels are as follows: First, Korean /a/ and /ʌ/ turned out to be a 'same sound' with their counterparts, Mongolian /a/ and /ɔ/. Second, Korean /i/, /e/, /o/, /u/ turned out to be a 'similar sound' with their respective Mongolian counterparts /i/, /e/, /o/, /u/. Third, Korean /ɨ/, which is nearest to Mongolian /i/ in terms of phonetic features, differs from it seriously and is thus assigned to 'different sound'. And lastly, Mongolian /ʊ/ turned out to be a 'different sound' with its nearest counterpart, Korean /u/. Based on these findings the hierarchy of difficulty was constructed. Firstly, the 4 Korean monophthongs /a/, /ʌ/, /i/, /e/ would be Level 0 (Transfer); they would be transferred positively from their Mongolian counterparts when Mongolians learn Korean. Secondly, Korean /o/, /u/ would be Level 5 (Split); they would require the Mongolian learner to make a new distinction and cause interference in learning Korean, because Mongolian /o/ and /u/ each have 2 similar counterpart sounds, Korean /o, u/ and /u, o/ respectively. Thirdly, Korean /ɨ/, which is not in the Mongolian vowel system, would be Level 4 (Overdifferentiation); the new vowel /ɨ/, which bears little similarity to Mongolian /i/, must be learned entirely anew and will cause much difficulty for Mongolian learners in speaking and writing Korean. And lastly, Mongolian /ʊ/ would be Level 2 (Underdifferentiation); it is absent in the Korean language and doesn't cause interference in learning Korean as long as Mongolian learners avoid using it.

  • PDF
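The study above rests on measuring F1 and F2 for each vowel token. A minimal sketch of such a measurement with the praat-parselmouth package follows; the file name and the choice of the vowel midpoint as the measurement point are assumptions for illustration, not the paper's procedure.

```python
# Illustrative sketch: F1/F2 at a vowel's midpoint with Praat via praat-parselmouth.
import parselmouth

snd = parselmouth.Sound("speaker01_vowel_a.wav")  # hypothetical recording of one vowel
formant = snd.to_formant_burg(max_number_of_formants=5, maximum_formant=5500.0)

t_mid = snd.duration / 2                          # midpoint of the file (toy assumption)
f1 = formant.get_value_at_time(1, t_mid)
f2 = formant.get_value_at_time(2, t_mid)
print(f"F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
```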

The Effectiveness of Explicit Form-Focused Instruction in Teaching the Schwa /ə/ (영어 약모음 /ə/ 교수에 있어서 명시적 Form-Focused Instruction의 효과 연구)

  • Lee, Yunhyun
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.8
    • /
    • pp.101-113
    • /
    • 2020
  • This study aimed to explore how effective explicit form-focused instruction (FFI) is in teaching the schwa vowel /ə/ to EFL students in a classroom setting. The participants were 25 female high school students, who were divided into an experimental group (n=13) and a control group (n=12). One female American also participated in the study to provide a reference speech sample. The treatment, which involved shadowing model pronunciations by the researcher and free text-to-speech software, together with the researcher's feedback in private sessions, was given to the experimental group over a month and a half. The speech samples, for which the participants read 14 polysyllabic stimulus words followed by sentences containing the words, were collected before and after the treatment. The paired-samples t-test and the non-parametric Wilcoxon signed-rank test were used for analysis. The results showed that the participants of the experimental group in the post-test reduced the duration of the schwa by around 40 percent compared to the pre-test. However, little effect was found in approximating the participants' distribution patterns of /ə/, measured by the F1/F2 formant frequencies, to the reference point, which was 539 Hz (F1) by 1797 Hz (F2). The findings of this study suggest that explicit FFI with multiple repetitions and corrective feedback is partly effective in teaching pronunciation.
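The analysis above compares pre- and post-treatment measurements with a paired-samples t-test and a Wilcoxon signed-rank test. A minimal sketch of those two tests with SciPy follows; the duration values are invented placeholders, not the study's data.

```python
# Illustrative sketch: paired t-test and Wilcoxon signed-rank test on
# pre- vs. post-treatment schwa durations (all values in ms are made up).
import numpy as np
from scipy import stats

pre_ms  = np.array([92, 105, 88, 110, 97, 101, 95, 108, 90, 99, 103, 94, 100])
post_ms = np.array([61,  70, 58,  74, 66,  69, 60,  73, 59, 65,  68, 62,  67])

t_stat, t_p = stats.ttest_rel(pre_ms, post_ms)
w_stat, w_p = stats.wilcoxon(pre_ms, post_ms)
print(f"paired t-test:        t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.4f}")
```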

Influence of standard Korean and Gyeongsang regional dialect on the pronunciation of English vowels (표준어와 경상 지역 방언의 한국어 모음 발음에 따른 영어 모음 발음의 영향에 대한 연구)

  • Jang, Soo-Yeon
    • Phonetics and Speech Sciences
    • /
    • v.13 no.4
    • /
    • pp.1-7
    • /
    • 2021
  • This study aims to enhance English pronunciation education for Korean students by examining the impact of standard Korean and the Gyeongsang regional dialect on the articulation of English vowels. Data were obtained through the Korean-Spoken English Corpus (K-SEC). Seven Korean words and ten English monosyllabic words were uttered by adult male speakers of standard Korean and the Gyeongsang regional dialect; in particular, speakers with little to no experience living abroad were selected. Formant frequencies of the recorded corpus data were measured using spectrograms provided by the speech analysis program Praat. The recorded data were analyzed using the articulatory graph for formants. The results show that, in comparison with speakers of standard Korean, those using the Gyeongsang regional dialect articulated both Korean and English vowels further back. Moreover, the contrast between standard Korean and the Gyeongsang regional dialect in the pronunciation of the Korean vowels (/으/, /어/) affected how the corresponding English vowels (/ə/, /ʊ/) were articulated. Regardless of the use of regional dialect, a general feature of vowel pronunciation among Korean speakers is that they show narrower articulatory movements compared with those of native English speakers. Korean speakers generally experience difficulties with discriminating tense and lax vowels, whereas native English speakers make clear distinctions in vowel articulation.
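The comparison above is read off an articulatory (F1-F2) graph of the measured formants. A small sketch of such a plot with matplotlib is given below, with both axes reversed so that close, front vowels appear at the top left; the formant values are rough textbook-style placeholders, not measurements from K-SEC.

```python
# Illustrative sketch: plotting vowels on an F1-F2 "articulatory graph".
import matplotlib.pyplot as plt

# Placeholder (F1, F2) values in Hz for a few monophthongs.
vowels = {"i": (300, 2300), "e": (450, 2000), "a": (750, 1300),
          "ʌ": (600, 1100), "o": (450, 800), "u": (350, 750), "ɨ": (350, 1400)}

fig, ax = plt.subplots()
for label, (f1, f2) in vowels.items():
    ax.scatter(f2, f1)
    ax.annotate(label, (f2, f1))
ax.invert_xaxis()            # front vowels (high F2) on the left
ax.invert_yaxis()            # close vowels (low F1) at the top
ax.set_xlabel("F2 (Hz)")
ax.set_ylabel("F1 (Hz)")
plt.show()
```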

Relationship between roar sound characteristics and body size of Steller sea lion

  • Park, Tae-Geon;Iida, Kohji;Mukai, Tohru
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.46 no.4
    • /
    • pp.458-465
    • /
    • 2010
  • Hundreds of Steller sea lions, Eumetopias jubatus, migrate from Sakhalin and the northern Kuril Islands to Hokkaido every winter. During this migration, they may use their roaring sounds to navigate and to maintain their groups. We recorded the roars of wild Steller sea lions that had landed on reefs on the west coast of Hokkaido, and those of captive sea lions, while making video recordings. A total of 300 roars of wild sea lions and 870 roars of captive sea lions were sampled. The fundamental frequency (F0), formant frequency (F1), pulse repetition rate (PRR), and duration of syllables (T) were analyzed using a sonagraph. F0, F1, and PRR of the roars emitted by captive sea lions increased in the order male, female, and juvenile. By contrast, the F1 of wild males was lower than that of females, while the F0 and PRR of wild males and females did not differ statistically. Moreover, the F0 and F1 frequencies for captive sea lions were higher than those of wild sea lions, while PRR in captive sea lions was lower than in wild sea lions. Since there was a linear relationship between body length and the F0 and F1 frequencies in captive sea lions, the body length distribution of wild sea lions could be estimated from the F0 and F1 frequency distribution using a regression equation. These results roughly agree with the body length distribution derived from photographic geometry. As the volume of the oral cavity and the length of the vocal cords are generally proportional to body length, sampled roars can provide useful information about a population, such as the body length distribution and sex ratio.
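The body-length estimate above relies on a linear regression between body length and the F0/F1 of captive animals, applied to wild animals. A minimal sketch of that step follows; all numbers are invented, and the paper's actual regression coefficients are not reproduced.

```python
# Illustrative sketch: fit body length against roar F0 for captive animals,
# then predict body length for wild animals from their F0 values.
import numpy as np

f0_captive = np.array([210., 180., 160., 150., 140., 130.])   # Hz (placeholder)
length_captive = np.array([1.4, 1.7, 2.0, 2.2, 2.4, 2.6])     # m  (placeholder)

slope, intercept = np.polyfit(f0_captive, length_captive, deg=1)

f0_wild = np.array([145., 170., 135.])                          # measured wild F0s
estimated_length = slope * f0_wild + intercept
print("estimated body lengths (m):", np.round(estimated_length, 2))
```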

The interlanguage Speech Intelligibility Benefit for Korean Learners of English: Production of English Front Vowels

  • Han, Jeong-Im;Choi, Tae-Hwan;Lim, In-Jae;Lee, Joo-Kyeong
    • Phonetics and Speech Sciences
    • /
    • v.3 no.2
    • /
    • pp.53-61
    • /
    • 2011
  • The present work is a follow-up study to that of Han, Choi, Lim and Lee (2011), where an asymmetry in the source segments eliciting the interlanguage speech intelligibility benefit (ISIB) was found, such that the vowels which did not match any vowel of the Korean language were likely to elicit more ISIB than matched vowels. In order to identify the source of the stronger ISIB in non-matched vowels, acoustic analyses of the stimuli were performed. Two pairs of English front vowels, [i] vs. [ɪ] and [ɛ] vs. [æ], were recorded by native English talkers and by two groups of Korean learners grouped according to their English proficiency, and then their vowel duration and the frequencies of the first two formants (F1, F2) were measured. The results demonstrated that the non-matched vowels such as [ɪ] and [æ] produced by Korean talkers showed acoustic characteristics more deviated from those of the natives, with longer duration and with formant values closer to the matched vowels [i] and [ɛ] than those of the English natives. Combining the results of the acoustic measurements in the present study and those of word identification in Han et al. (2011), we suggest that the relatively better performance in word identification by Korean talkers/listeners than by the native English talkers/listeners is associated with the shared interlanguage of Korean talkers and listeners.

  • PDF
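The acoustic argument above is that learners' non-matched vowels drift toward the matched vowels in formant space. One common way to summarize such drift, sketched below with invented values, is the Euclidean distance of each token from a native reference in the F1-F2 plane; this is an illustration, not the paper's analysis.

```python
# Illustrative sketch: distance of learner vowel tokens from a native F1-F2 reference.
import numpy as np

native_ref = {"ɪ": (400, 1900), "i": (300, 2300)}          # (F1, F2) in Hz, placeholders
learner_tokens = {"ɪ": [(330, 2150), (350, 2100)],          # learner [ɪ] drifting toward [i]
                  "i": [(310, 2280), (305, 2320)]}

for vowel, tokens in learner_tokens.items():
    ref = np.array(native_ref[vowel])
    dists = [np.linalg.norm(np.array(t) - ref) for t in tokens]
    print(f"{vowel}: mean distance from native reference = {np.mean(dists):.0f} Hz")
```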

A Study on Correcting Korean Pronunciation Error of Foreign Learners by Using Supporting Vector Machine Algorithm

  • Jang, Kyungnam;You, Kwang-Bock;Park, Hyungwoo
    • International Journal of Advanced Culture Technology
    • /
    • v.8 no.3
    • /
    • pp.316-324
    • /
    • 2020
  • Anyone who has learned a foreign language has experienced how difficult it is to pronounce a new language different from the native language. The goal of the many foreigners who want to learn Korean is to speak Korean as well as their native language so as to communicate smoothly. However, the vocal habits of each native language also appear in Korean pronunciation, which prevents accurate information transmission. In this paper, the pronunciation of Chinese learners was compared with that of Koreans. For the comparison, the fundamental frequency and its variation in the speech signal were examined and the spectrogram was analyzed. The formant frequencies, known as the resonant frequencies of the vocal tract, were calculated. Based on these characteristic parameters, a support vector machine classifier was found to classify the pronunciation of Koreans and the pronunciation of Chinese learners. In particular, the linguistic proposition that Chinese speakers have difficulty pronouncing the Korean /ㄹ/ was examined and supported by these measurements.
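The comparison above examines the fundamental frequency and its variation in the speech signal. A minimal sketch of extracting an F0 contour and summarizing its variability with praat-parselmouth follows; the file name is hypothetical.

```python
# Illustrative sketch: F0 contour and its variability with Praat via praat-parselmouth.
import numpy as np
import parselmouth

snd = parselmouth.Sound("learner_sentence.wav")   # hypothetical recording
pitch = snd.to_pitch()                            # default pitch analysis settings
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                                   # drop unvoiced frames (F0 == 0)

print(f"mean F0 = {np.mean(f0):.1f} Hz, std = {np.std(f0):.1f} Hz")
```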

Korean and English affricates in bilingual children

  • Yu, Hye Jeong
    • Phonetics and Speech Sciences
    • /
    • v.9 no.3
    • /
    • pp.1-6
    • /
    • 2017
  • This study examined how early bilingual children produce sounds in their two languages that are articulated with the same manner of articulation but at different places of articulation. English affricates are palato-alveolar, and Korean affricates are alveolar. This study analyzed the frequencies of the center of gravity (COG), spectral peak (SP), and second formant (F2) of word-initial affricates in English and Korean produced by twenty-four early Korean-English bilingual children (aged 4 to 7), and compared them with those of monolingual counterparts in the two languages. If early Korean-English bilingual children produce palato-alveolar affricates in English and alveolar affricates in Korean, they may produce Korean affricates with higher COGs, SPs, and F2s than English affricates. The early Korean-English bilingual children at the age of 4 produced English and Korean affricates with similar COGs, SPs, and F2s, and the COGs, SPs, and F2s of their Korean affricates were similar to those of their Korean monolingual counterparts. However, the early bilingual children aged 5 to 7 produced English affricates with lower COGs and SPs and higher F2s compared to Korean affricates, and the COGs, SPs, and F2s of their English affricates were similar to those of their English monolingual counterparts.
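Two of the measures above, COG and SP, are computed from the spectrum of the affricate's frication noise. A minimal sketch of both from a magnitude spectrum is given below, using synthetic noise in place of a real affricate token; it is not the study's measurement script.

```python
# Illustrative sketch: spectral center of gravity (COG) and spectral peak (SP)
# of a short frication segment, computed from a windowed magnitude spectrum.
import numpy as np

sr = 22050
segment = np.random.default_rng(1).normal(size=2048)       # placeholder frication noise

spectrum = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))
freqs = np.fft.rfftfreq(len(segment), d=1.0 / sr)

cog = np.sum(freqs * spectrum) / np.sum(spectrum)           # amplitude-weighted mean frequency
sp = freqs[np.argmax(spectrum)]                             # frequency of the strongest component
print(f"COG = {cog:.0f} Hz, spectral peak = {sp:.0f} Hz")
```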

A Study on Voice Color Control Rules for Speech Synthesis System (음성합성시스템을 위한 음색제어규칙 연구)

  • Kim, Jin-Young;Eom, Ki-Wan
    • Speech Sciences
    • /
    • v.2
    • /
    • pp.25-44
    • /
    • 1997
  • When listening to the various speech synthesis systems developed and in use in Korea, we find that though the quality of these systems has improved, they lack naturalness. Moreover, since the voice color of these systems is limited to that of a single recorded speech DB, it is necessary to record another speech DB to create different voice colors. 'Voice color' is an abstract concept that characterizes voice personality, so speech synthesis systems need a voice color control function to create various voices. The aim of this study is to examine several factors of voice color control rules for a text-to-speech system that produces natural and varied voice types in synthetic speech. In order to find such rules from natural speech, glottal source parameters and frequency characteristics of the vocal tract for several voice colors were studied. In this paper, voice colors were catalogued as: deep, sonorous, thick, soft, harsh, high tone, shrill, and weak. For the voice source model, the LF model was used, and for the frequency characteristics of the vocal tract, the formant frequencies, bandwidths, and amplitudes were used. These acoustic parameters were tested through multiple regression analysis to derive the general relation between these parameters and voice colors.

  • PDF
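The rules above come from multiple regression analysis relating acoustic parameters to voice colors. A minimal sketch of such a regression follows; the choice of predictors and the rating values are invented placeholders, not the paper's parameters.

```python
# Illustrative sketch: multiple linear regression of a perceptual voice-color
# rating on a few acoustic predictors (all data here is synthetic).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))      # columns: e.g. F1 frequency, F1 bandwidth, a source parameter
rating = 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.2, size=40)

model = LinearRegression().fit(X, rating)
print("coefficients:", np.round(model.coef_, 2), " R^2:", round(model.score(X, rating), 3))
```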

A Method of Learning and Recognition of Vowels by Using Neural Network (신경망을 이용한 모음의 학습 및 인식 방법)

  • Shim, Jae-Hyoung;Lee, Jong-Hyeok;Yoon, Tae-Hoon;Kim, Jae-Chang;Lee, Yang-Sung
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.27 no.11
    • /
    • pp.144-151
    • /
    • 1990
  • In this work, the neural network model of Ohotomo et al. for learning and recognizing vowels is modified in order to reduce the learning time and the possibility of incorrect recognition. In this modification, the finite bandwidths of the formant frequencies of vowels are taken into consideration in coding the input patterns. Computer simulations show that the modification reduces not only the possibility of incorrect recognition, by about 30%, but also the learning time, by about 7%.

  • PDF
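The key modification above is coding the input patterns with the finite bandwidths of the vowel formant frequencies before they are fed to a neural network. The sketch below illustrates one possible coarse band coding of F1/F2 and a small network trained on it; the band edges, data, and network size are assumptions for illustration and do not reproduce the paper's design.

```python
# Illustrative sketch: coarse-code F1/F2 into frequency bands (a rough analogue
# of bandwidth-aware input coding) and train a small neural network on it.
import numpy as np
from sklearn.neural_network import MLPClassifier

BANDS = np.arange(200, 3001, 200)                  # 200 Hz-wide bands up to 3 kHz

def band_code(f1, f2, width=100.0):
    """Activate every band overlapping [f - width, f + width] for F1 and F2."""
    vec = np.zeros(len(BANDS) - 1)
    for f in (f1, f2):
        vec += ((BANDS[:-1] < f + width) & (BANDS[1:] > f - width)).astype(float)
    return vec

rng = np.random.default_rng(3)
centers = {"a": (750, 1300), "i": (300, 2300), "u": (350, 800)}   # placeholder vowel targets
X, y = [], []
for label, (f1, f2) in centers.items():
    for _ in range(50):                             # jittered tokens around each target
        X.append(band_code(f1 + rng.normal(0, 40), f2 + rng.normal(0, 80)))
        y.append(label)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(np.array(X), y)
print("training accuracy:", net.score(np.array(X), y))
```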