• Title/Summary/Keyword: formant frequencies

Search results: 75

Relationship between Formants and Constriction Areas of Vocal Tract in 9 Korean Standard Vowels (우리말 모음의 발음시 음형대와 조음위치의 관계에 대한 연구)

  • 서경식;김재영;김영기
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.5 no.1
    • /
    • pp.44-58
    • /
    • 1994
  • The formants of the 9 Korean standard vowels (as spoken by average speakers of Seoul, the central area of the Korean peninsula) were measured with linear predictive coding (LPC) and fast Fourier transform (FFT) analysis. The author had previously reported the constriction area for each Korean standard vowel; combining those data with videovelopharyngograms and lateral roentgenograms of the vocal tract, the distance from the glottis to the constriction area was newly measured for each vowel. We correlated the formant frequencies with this distance, and also with the position of the tongue in the vocal tract, divided into two categories: the position in the oral cavity (the distance from an imaginary palatal line to the highest point of the tongue) and the position in the pharyngeal cavity (the distance from the back of the tongue to the posterior pharyngeal wall). The study was performed with 10 adults (5 male, 5 female) who spoke the 9 primary Korean standard vowels. In a previous article we had reported that the Korean vowels [i], [e], [ɛ] are articulated at the hard palate level, [ɨ] and [u] at the soft palate level, [o] at the upper pharynx level, and [ʌ], [ə], [a] at the lower pharynx level; we had also noted the significance of the pharyngeal cavity in vowel articulation. From this study we conclude that: 1) F1 is related to the oral-cavity-articulated vowels [i, e, ɛ, ɨ, u]. 2) Within the oral-cavity-articulated vowels [i, e, ɛ, ɨ, u] and the upper-pharynx-articulated vowel [o], F2 rises as the distance from the glottis to the constriction area increases; within the lower-pharynx-articulated vowels [ə, ʌ, a], F2 rises as that distance decreases. 3) The stronger the back-vowel tendency, the higher the F1 and F2 frequencies. 4) F3 and F4 showed no correlation with the constriction area or with tongue position in the vocal tract. 5) The parameter F2−F1, the difference between the F2 and F1 frequencies, proved an excellent indicator for differentiating oral-cavity-articulated vowels from pharyngeal-cavity-articulated vowels: an F2−F1 below about 600 Hz indicates that the vowel is articulated in the pharyngeal cavity, and above about 600 Hz, in the oral cavity.
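The LPC-based formant measurement used in studies like this one can be sketched in a few lines: fit an all-pole model to the signal and read formant candidates off the pole angles. This is a minimal illustration, not the authors' implementation; the synthetic one-resonance test signal and the order-2 model are assumptions for demonstration.

```python
import numpy as np

def lpc_formants(x, order, fs):
    """Estimate formant candidates via autocorrelation-method LPC:
    solve the Yule-Walker equations, then convert pole angles to Hz."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])        # predictor coefficients
    poly = np.concatenate(([1.0], -a))            # A(z) = 1 - sum a_k z^-k
    roots = np.roots(poly)
    roots = roots[np.imag(roots) > 0]             # one root per conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)    # pole angle -> frequency (Hz)
    return np.sort(freqs)

# Synthetic check: a single damped resonance at 700 Hz, sampled at 8 kHz
fs = 8000.0
n = np.arange(2000)
x = np.exp(-0.002 * n) * np.cos(2 * np.pi * 700.0 * n / fs)
f = lpc_formants(x, order=2, fs=fs)
```

For real vowels one would use a higher model order (roughly fs in kHz plus 2) and keep only poles with plausible bandwidths.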


A preliminary study of acoustic measures in male musical theater students by laryngeal height (뮤지컬 전공 남학생에서 후두 높이에 따른 음향학적 측정치에 대한 예비 연구)

  • Lee, Kwang Yong;Lee, Seung Jin
    • Phonetics and Speech Sciences
    • /
    • v.14 no.2
    • /
    • pp.55-65
    • /
    • 2022
  • This study aimed to compare acoustic measurements by the high, middle, and low laryngeal heights of male musical theater students. Furthermore, the correlation between the relative height of the larynx and the acoustic measurements was examined, along with the predictability of the relative height (vertical position) of the larynx from acoustic measurements. The participants included five male students majoring in musical theater singing, and acoustic analysis was performed by having them produce the /a/ vowel 10 times each at the laryngeal positions of high, middle, and low. The relative vertical positions of the laryngeal prominence in each position were measured based on the resting position. Results indicated that the relative position of the larynx varied significantly according to laryngeal height, such that as the larynx descended, the first three formant frequencies decreased while the spectral energy at the same frequencies increased. Formant frequencies showed a weak to moderate positive correlation with the relative height of the larynx, while the spectral energy showed a moderate negative correlation. The relative height of the larynx was predicted by eight acoustic measures (adjusted R2 = .829). In conclusion, the predictability of the relative height of the larynx was partially confirmed in a non-invasive manner.
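The prediction step reported above (relative laryngeal height regressed on acoustic measures, with an adjusted R²) can be sketched with ordinary least squares. The data below are synthetic placeholders, not the study's measurements, and the two predictors stand in for the eight acoustic measures.

```python
import numpy as np

def ols_adjusted_r2(X, y):
    """Fit y = b0 + X b by least squares; return coefficients and adjusted R^2."""
    n, k = X.shape
    A = np.column_stack([np.ones(n), X])       # prepend an intercept column
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ b
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
    return b, adj

# Synthetic example: "height" depends exactly on two acoustic predictors
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = 1.5 - 0.8 * X[:, 0] + 0.3 * X[:, 1]        # noiseless linear relation
b, adj_r2 = ols_adjusted_r2(X, y)
```

With noiseless data the fit is exact and adjusted R² is 1; real acoustic data would give values below 1, as in the study's .829.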

The Vowel System of American English and Its Regional Variation (미국 영어 모음 체계의 몇 가지 지역 방언적 차이)

  • Oh, Eun-Jin
    • Speech Sciences
    • /
    • v.13 no.4
    • /
    • pp.69-87
    • /
    • 2006
  • This study aims to describe the vowel system of present-day American English and to discuss some of its phonetic variation across regions. Fifteen speakers of American English from various regions of the United States produced the monophthongs of English. Vowel duration and the frequencies of the first and second formants were measured. The results indicate that the distinction between the vowels [ɔ] and [ɑ] has merged in most parts of the U.S., except in some speakers from the eastern and southeastern parts, resulting in a general loss of the phonemic distinction between the two vowels. This merger can be interpreted as the result of the relatively small functional load of the [ɔ]-[ɑ] contrast and of the back vowel space being smaller than the front vowel space. The study also shows that the F2 frequencies of the high back vowel [u] were extremely high in most speakers from the eastern region of the U.S., resulting in an overall reduction of their acoustic space for high vowels. From the viewpoint of the Adaptive Dispersion Theory proposed by Liljencrants & Lindblom (1972) and Lindblom (1986), the high back vowel [u] appears to have been fronted to satisfy economy of articulatory gesture to some extent without blurring the contrast between [i] and [u] in the high vowel region.
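The degree of a vowel merger of this kind is often quantified as the distance between category means in the F1×F2 plane. The formant values below are illustrative placeholders, not the paper's measurements; the 50 Hz cutoff is likewise an assumption for the sketch.

```python
import math

def formant_distance(v1, v2):
    """Euclidean distance between two vowels given as (F1, F2) in Hz."""
    return math.hypot(v1[0] - v2[0], v1[1] - v2[1])

# Illustrative mean formants (Hz): a merged speaker vs one keeping the contrast
merged_caught, merged_cot = (640, 1000), (650, 1010)      # [ɔ] vs [ɑ], merged
distinct_caught, distinct_cot = (590, 880), (730, 1090)   # [ɔ] vs [ɑ], distinct

d_merged = formant_distance(merged_caught, merged_cot)
d_distinct = formant_distance(distinct_caught, distinct_cot)
```

A small inter-category distance (relative to within-category spread) is the acoustic signature of the merger described in the abstract.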


A Study on English Reduced Vowels Produced by Korean Learners and Native Speakers of English (한국인 영어학습자와 영어원어민이 발화한 영어 약화모음에 관한 연구)

  • Shin, Seung-Hoon;Yoon, Nam-Hee;Yoon, Kyu-Chul
    • Phonetics and Speech Sciences
    • /
    • v.3 no.4
    • /
    • pp.45-53
    • /
    • 2011
  • Flemming and Johnson (2007) claim that there is a fundamental distinction between the mid central vowel [ə] and the high central vowel [ɨ], in that [ə] occurs in unstressed word-final position while [ɨ] appears elsewhere. Compared to their English counterparts, Korean [ə] and [ɨ] are full vowels and contrast phonemically. The purpose of this paper is to explore the acoustic quality of the two English reduced vowels produced by Korean learners and native speakers of English in terms of their first two formant frequencies. Sixteen Korean learners of English and six native speakers of English produced four types of English words and two types of Korean words with different phonological and morphological patterns. The results show that Korean learners of English produced the two reduced vowels of English and their Korean counterparts differently in Korean and English words.


Speech Emotion Recognition on a Simulated Intelligent Robot (모의 지능로봇에서의 음성 감정인식)

  • Jang Kwang-Dong;Kim Nam;Kwon Oh-Wook
    • MALSORI
    • /
    • no.56
    • /
    • pp.173-183
    • /
    • 2005
  • We propose a speech emotion recognition method for an affective human-robot interface. In the proposed method, emotion is classified into 6 classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian-kernel support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from microphones in 5 different directions. Experimental results show that the proposed method yields 48% classification accuracy, while human classifiers achieve 71%.
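The pipeline shape (per-utterance feature statistics in, emotion label out) can be illustrated with a nearest-centroid classifier as a deliberately simplified stand-in for the paper's Gaussian-kernel SVM. The feature vectors and values below are invented for illustration; a real system would normalize feature scales and use far richer statistics.

```python
import numpy as np

# Toy utterance features: [mean log energy, mean pitch (Hz), speech rate]
# (invented values; real features per the paper include shimmer, jitter, etc.)
train = {
    "angry":   np.array([[0.9, 260.0, 6.0], [0.8, 250.0, 5.8]]),
    "sad":     np.array([[0.3, 140.0, 3.2], [0.2, 150.0, 3.0]]),
    "neutral": np.array([[0.5, 180.0, 4.5], [0.6, 190.0, 4.4]]),
}

# One centroid per emotion class, averaged over its training utterances
centroids = {label: feats.mean(axis=0) for label, feats in train.items()}

def classify(x):
    """Return the emotion whose centroid is nearest to feature vector x."""
    return min(centroids, key=lambda lbl: np.linalg.norm(x - centroids[lbl]))

label = classify(np.array([0.85, 255.0, 5.9]))
```

The SVM in the paper replaces this distance rule with a maximum-margin decision over a Gaussian kernel, which handles classes that are not separable by centroids alone.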


A Study on the Relation Between the LSF's and Spectral Distribution of Speech Signals (Line Spectral Frequency와 음성신호의 주파수 분포에 관한 연구)

  • 이동수;김영화
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.25 no.4
    • /
    • pp.430-436
    • /
    • 1988
  • The LSF (Line Spectral Frequency) representation derived from LPC is known to be a very useful transmission parameterization for speech signals, as it has good linear interpolation characteristics and low spectral distortion in low-bit-rate coding. This paper shows that the formant frequencies of speech signals can be extracted directly from the LSF parameters, without applying an FFT algorithm, by comparing the distribution of the LSF parameters with the frequency distribution of the analysis filter. The paper also proposes an improved algorithm that speeds up convergence of the analytic solution method. Finally, for flexibility of the parameters, a procedure for converting LSF back to LPC is presented.
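The LSF construction referred to above follows the textbook formulation: form the symmetric and antisymmetric polynomials from A(z) and take the angles of their unit-circle roots. This sketch uses a direct root-finder rather than the paper's analytic solution method.

```python
import numpy as np

def lpc_to_lsf(a):
    """Convert LPC coefficients a = [1, a1, ..., ap] into line spectral
    frequencies (radians in (0, pi)) via the roots of
    P(z) = A(z) + z^-(p+1) A(1/z) and Q(z) = A(z) - z^-(p+1) A(1/z)."""
    a_ext = np.concatenate((np.asarray(a, dtype=float), [0.0]))
    P = a_ext + a_ext[::-1]          # symmetric: roots on the unit circle
    Q = a_ext - a_ext[::-1]          # antisymmetric: roots interleave with P's
    lsf = []
    for poly in (P, Q):
        for r in np.roots(poly):
            ang = np.angle(r)
            if 1e-6 < ang < np.pi - 1e-6:   # drop trivial roots at z = +/-1
                lsf.append(ang)             # and the conjugate duplicates
    return np.sort(np.array(lsf))

# Order-1 example: A(z) = 1 - 0.9 z^-1 has the single LSF arccos(0.9)
lsf = lpc_to_lsf([1.0, -0.9])
```

For a stable predictor the P and Q roots alternate around the unit circle, which is what makes the sorted angles a well-behaved, interpolation-friendly parameter set.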


Correlation between Physical Fatigue and Speech Signals (육체피로와 음성신호와의 상관관계)

  • Kim, Taehun;Kwon, Chulhong
    • Phonetics and Speech Sciences
    • /
    • v.7 no.1
    • /
    • pp.11-17
    • /
    • 2015
  • This paper deals with the correlation between physical fatigue and speech signals. A treadmill task to induce fatigue and a subjective questionnaire for rating tiredness were designed. The questionnaire results and the collected bio-signals showed that the designed task does impose physical fatigue. A paired-samples t-test between the speech signals and fatigue showed that the parameters statistically significant for fatigue are fundamental frequency, first and second formant frequencies, long-term average spectral slope, smoothed pitch perturbation quotient, relative average perturbation, pitch perturbation quotient, cepstral peak prominence, and harmonics-to-noise ratio. According to the experimental results, the mouth opens less and the voice becomes breathier as physical fatigue accumulates.
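The paired-samples t-test used to compare speech parameters before and after the fatigue task reduces to a one-sample test on the per-speaker differences. The F1 values below are invented for illustration, not the study's data.

```python
import math

def paired_t(before, after):
    """Paired-samples t statistic: mean difference over its standard error."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Invented example: F1 (Hz) for 5 speakers before vs after the treadmill task
f1_before = [520.0, 540.0, 510.0, 530.0, 525.0]
f1_after  = [512.0, 529.0, 503.0, 518.0, 521.0]
t = paired_t(f1_before, f1_after)
```

The resulting t is compared against the t distribution with n−1 degrees of freedom to decide significance, as the study does for each acoustic parameter.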

Geophysics of Vowel Space in Bahasa Malaysia and Bahasa Indonesia (말레이시아어와 인도네시아어 모음 공간의 지형도)

  • Park Jeong-Sook;Chun Taihyun;Park Han-Sang
    • Proceedings of the KSPS conference
    • /
    • 2006.05a
    • /
    • pp.63-66
    • /
    • 2006
  • This study investigates the vowels of Bahasa Malaysia and Bahasa Indonesia in terms of the first two formant frequencies. We recruited 30 male native speakers of Bahasa Malaysia and Bahasa Indonesia (15 each), who produced the 6 vowels (i, e, ə, a, o, u) in various contexts. The study provides a three-dimensional vowel space by plotting F1, F2, and the frequency of data points, and is significant in that this "geophysics" of the vowel space presents yet another view of vowel distribution.


SPEECH TRAINING TOOLS BASED ON VOWEL SWITCH/VOLUME CONTROL AND ITS VISUALIZATION

  • Ueda, Yuichi;Sakata, Tadashi
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.441-445
    • /
    • 2009
  • We have developed a real-time software tool that extracts a speech feature vector whose time sequences consist of three groups of components: phonetic/acoustic features such as formant frequencies, phonemic features given as neural-network outputs, and distances to Japanese phonemes. Since the phoneme distances for the five Japanese vowels can express vowel articulation, we have designed a switch, a volume control, and a color representation that are operated by pronouncing vowel sounds. As examples of this vowel interface, we have developed speech training tools that display an image character or a rolling colored ball and that control a cursor's movement, intended for aurally or vocally handicapped children. In this paper, we introduce the functions and principles of these systems.


A Study on the Synthesis of Korean Speech by Formant VOCODER (포르만트 VOCODER에 의한 한국어 음성합성에 관한 연구)

  • 허강인;이대영
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.14 no.6
    • /
    • pp.699-712
    • /
    • 1989
  • This paper describes a method of Korean speech synthesis using a formant VOCODER. The synthesis parameters are as follows: 1) formants F1, F2, and F3 obtained by the spectrum moment method, and F4 and F5 from the length of the vocal tract; 2) pitch frequencies obtained by the optimum comb method using the AMDF; 3) short-time average energy and short-time mean amplitude; 4) bandwidths determined by the decision method reported by Fant; 5) voiced/unvoiced discrimination using zero-crossings; 6) the excitation wave reported by Rosenberg; 7) Gaussian white noise. The synthesis results are in fairly good agreement with the original speech.
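The AMDF pitch estimation mentioned in item 2 can be sketched as a plain minimum search over candidate lags; this is the generic AMDF, assumed for illustration, not the paper's optimum-comb variant. The search band of 150–300 Hz is likewise an assumption chosen for the test signal.

```python
import numpy as np

def amdf_pitch(x, fs, f_lo=150.0, f_hi=300.0):
    """Estimate pitch as fs over the lag minimizing the average magnitude
    difference function (AMDF) within the [f_lo, f_hi] search band."""
    lag_min = int(fs / f_hi)
    lag_max = int(fs / f_lo)
    amdf = [np.mean(np.abs(x[:-lag] - x[lag:]))
            for lag in range(lag_min, lag_max + 1)]
    best = lag_min + int(np.argmin(amdf))     # lag of deepest AMDF valley
    return fs / best

# Synthetic check: a 200 Hz sine at 8 kHz has an exact period of 40 samples
fs = 8000.0
n = np.arange(1600)
x = np.sin(2 * np.pi * 200.0 * n / fs)
f0 = amdf_pitch(x, fs)
```

On real voiced speech the valley at the true period is shallow rather than zero, which is why refinements such as the comb method over the AMDF are used.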
