• Title/Summary/Keyword: Low Vowel

105 search results

Electroglottographic Spectral Tilt in Frequency Ranges of Vowel Sound (모음 주파수 범위에 따른 성문전도 스펙트럼 기울기)

  • Kim, Ji-Hye;Jang, Ae-Lan;Jung, Dong-Keun
    • Journal of Sensor Science and Technology, v.24 no.4, pp.247-251, 2015
  • In this study, electroglottographic spectral tilt (EST) was investigated as a way to characterize vocal cord vibration. EST was analyzed from the power spectrum of electroglottographic signals by dividing the frequency analysis range into a full range (0~4 octaves), a low range (0~2 octaves), and a high range (2~4 octaves). EST values in all ranges were greater in females than in males. In both the female and male groups, EST of the high range was higher than that of the low range. These results suggest that EST has at least two components and that dividing the frequency range is effective for investigating the characteristics of vocal cord vibration.
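The octave-band tilt analysis described above can be sketched as follows. This is illustrative only: the least-squares-slope definition of the tilt, the 125 Hz base frequency for the octave ranges, and the toy square-wave "EGG" signal are assumptions, not details from the paper.

```python
# Sketch of octave-band spectral tilt from an EGG-like power spectrum.
# Assumptions (not from the paper): tilt = least-squares slope of log power
# vs. log frequency, octaves counted from a hypothetical 125 Hz base.
import numpy as np

def spectral_tilt(power, freqs, f_lo, f_hi):
    """Slope (dB/octave) of the power spectrum between f_lo and f_hi."""
    band = (freqs >= f_lo) & (freqs <= f_hi) & (power > power.max() * 1e-9)
    octaves = np.log2(freqs[band] / f_lo)   # x: octaves above f_lo
    level_db = 10 * np.log10(power[band])   # y: level in dB
    slope, _ = np.polyfit(octaves, level_db, 1)
    return slope

fs = 8000
t = np.arange(fs) / fs
egg = np.sign(np.sin(2 * np.pi * 125 * t))  # toy glottal-like pulse waveform
power = np.abs(np.fft.rfft(egg)) ** 2
freqs = np.fft.rfftfreq(egg.size, 1 / fs)

base = 125.0
full = spectral_tilt(power, freqs, base, base * 2**4)         # 0-4 octaves
low = spectral_tilt(power, freqs, base, base * 2**2)          # 0-2 octaves
high = spectral_tilt(power, freqs, base * 2**2, base * 2**4)  # 2-4 octaves
```

For this harmonic-rich toy source all three tilts come out negative, and the split lets the low- and high-range slopes be compared separately, as in the study.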

Korean Single-Vowel Recognition Using Cumulants in Color Noisy Environment (유색 잡음 환경하에서 Cumulant를 이용한 한국어 단모음 인식)

  • Lee, Hyung-Gun;Yang, Won-Young;Cho, Yong-Soo
    • The Journal of the Acoustical Society of Korea, v.13 no.2, pp.50-59, 1994
  • This paper presents a speech recognition method that uses third-order cumulants as a feature vector and a neural network for recognition. Higher-order cumulants provide a desirable decoupling between Gaussian noise and speech, which makes it possible to estimate the AR model coefficients without bias. Unlike the conventional method based on second-order statistics, the proposed one exhibits low bias even at an SNR as low as 0 dB, at the expense of higher variance. Computer simulation confirms that the recognition rate for Korean single vowels with the cumulant-based method is much higher than with the conventional method, even at low SNR.
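A minimal sketch of the third-order-cumulant property this method relies on: for a zero-mean process, c3(t1, t2) = E[x(n)x(n+t1)x(n+t2)] vanishes for Gaussian noise but not for skewed signals. The AR modeling and the neural-network recognizer from the paper are not reproduced; the signals below are synthetic illustrations.

```python
# Third-order cumulants are (asymptotically) blind to Gaussian noise,
# which is why they decouple speech from Gaussian background noise.
import numpy as np

def third_order_cumulant(x, t1, t2):
    """Sample estimate of c3(t1, t2) for non-negative lags t1, t2."""
    x = x - x.mean()
    n = len(x) - max(t1, t2)
    return float(np.mean(x[:n] * x[t1:t1 + n] * x[t2:t2 + n]))

rng = np.random.default_rng(0)
gauss = rng.normal(size=200_000)        # Gaussian noise: cumulant near 0
skewed = rng.exponential(size=200_000)  # skewed signal: cumulant far from 0

c_gauss = third_order_cumulant(gauss, 1, 2)
c_skew = third_order_cumulant(skewed, 0, 0)  # = sample third central moment
```

For the exponential(1) example the third central moment is 2, while the Gaussian estimate stays near zero, illustrating the noise-robustness the paper exploits.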

Effects of phonological and phonetic information of vowels on perception of prosodic prominence in English

  • Suyeon Im
    • Phonetics and Speech Sciences, v.15 no.3, pp.1-7, 2023
  • This study investigates how the phonological and phonetic information of vowels influences prosodic prominence among linguistically untrained listeners, using public speech in American English. We first examined the phonetic realization of vowels in the speech material (i.e., maximum F0, F0 range, phone rate [a measure of duration that takes the speech rate of the utterance into account], and mean intensity). Results showed that the high vowels /i/ and /u/ tended to have the highest maximum F0, while the low vowels /æ/ and /ɑ/ tended to have the highest mean intensity. Both high and low vowels had similarly high phone rates. Next, we examined the effects of the vowels' phonological and phonetic information on listeners' perception of prosodic prominence. The results showed that vowels significantly affected the likelihood of perceived prominence independently of acoustic cues. The high and low vowels affected the probability of perceived prominence less than the mid vowels /ɛ/ and /ʌ/, although the former were more likely to be phonetically enhanced in the speech than the latter. Overall, these results suggest that the perception of prosodic prominence in English is not directly driven by signal-based factors (i.e., vowels' acoustic information) but is mediated by expectation-driven factors (e.g., vowels' phonological information).

A Study on Pitch Extraction Method using FIR-STREAK Digital Filter (FIR-STREAK 디지털 필터를 사용한 피치추출 방법에 관한 연구)

  • Lee, Si-U
    • The Transactions of the Korea Information Processing Society, v.6 no.1, pp.247-252, 1999
  • Pitch information is a useful parameter for realizing speech coding at low bit rates. When an average pitch is extracted from continuous speech, pitch errors appear in frames where a consonant and a vowel coexist, at the boundary between adjoining frames, and at the beginning or end of a sentence. In this paper, I propose an Individual Pitch (IP) extraction method that uses the residual signals of the FIR-STREAK digital filter to restrict these pitch extraction errors. The method does not average pitch intervals, so it can accommodate the changes in each individual pitch interval. As a result, with the IP extraction method using the FIR-STREAK digital filter, no pitch errors were found in frames where a consonant and a vowel coexist, at the boundary between adjoining frames, or at the beginning or end of a sentence. The method can be applied to many fields, such as speech coding, speech analysis, speech synthesis, and speech recognition.
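The FIR-STREAK filter itself is not specified in this listing, so the sketch below only illustrates the individual-pitch (IP) idea: measure every interval between successive residual peaks rather than one averaged pitch per frame. The simple threshold peak picker and the synthetic pulse-train "residual" are illustrative assumptions.

```python
# Per-period ("individual") pitch extraction from a residual-like signal:
# each inter-peak interval is kept, instead of averaging over a frame.
import numpy as np

def individual_pitch_periods(residual, fs, max_f0=400.0):
    """Intervals (in samples) between successive residual peaks."""
    min_gap = int(fs / max_f0)       # refractory gap between accepted peaks
    thresh = 0.5 * residual.max()
    peaks, last = [], -min_gap
    for i, v in enumerate(residual):
        if v >= thresh and i - last >= min_gap:
            peaks.append(i)
            last = i
    return np.diff(peaks)

fs = 8000
residual = np.zeros(fs)
residual[::64] = 1.0                 # one pulse every 8 ms (125 Hz)
periods = individual_pitch_periods(residual, fs)
```

Because each interval is reported separately, a changing pitch would show up period by period instead of being smoothed into one frame average.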

Automatic Phonetic Segmentation of Korean Speech Signal Using Phonetic-acoustic Transition Information (음소 음향학적 변화 정보를 이용한 한국어 음성신호의 자동 음소 분할)

  • 박창목;왕지남
    • The Journal of the Acoustical Society of Korea, v.20 no.8, pp.24-30, 2001
  • This article is concerned with automatic segmentation of Korean speech signals. All transition cases between phonetic units are classified into three types, and a different strategy is applied to each. Type 1 is the discrimination of silence, voiced speech, and unvoiced speech; histogram analysis of indicators consisting of wavelet coefficients and the SVF (Spectral Variation Function) of the wavelet coefficients is used for this segmentation. Type 2 is the discrimination of adjacent vowels, whose transitions can be characterized by the spectrogram: given the phonetic transcription and transition-pattern spectrograms, speech signals containing consecutive vowels are automatically segmented by template matching. Type 3 is the discrimination of vowels and voiced consonants, for which the smoothed short-time RMS energy of the wavelet low-pass component and the SVF of the cepstral coefficients are adopted. The experiment was performed on a 342-word utterance set gathered from six speakers, and the results show the validity of the method.
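A minimal Spectral Variation Function (SVF) sketch: the distance between successive frame spectra, whose peaks suggest candidate boundaries. Log-magnitude FFT spectra stand in for the paper's wavelet and cepstral features, and the two-tone test signal is invented for illustration.

```python
# SVF as framewise spectral distance; a peak marks a spectral change point.
import numpy as np

def svf(signal, frame=256, hop=128):
    window = np.hanning(frame)
    feats = [np.log(np.abs(np.fft.rfft(signal[i:i + frame] * window)) + 1e-9)
             for i in range(0, len(signal) - frame, hop)]
    return np.array([np.linalg.norm(b - a) for a, b in zip(feats, feats[1:])])

fs = 8000
t = np.arange(fs) / fs
# Two spectrally different halves; the SVF should peak near the join.
sig = np.where(t < 0.5, np.sin(2 * np.pi * 440 * t),
               np.sin(2 * np.pi * 1760 * t))
curve = svf(sig)
boundary_frame = int(np.argmax(curve))   # hop*boundary_frame ~ sample 4000
```

In the stationary halves consecutive spectra are nearly identical, so the curve is flat there and peaks only around the spectral change, which is what makes the SVF usable as a boundary indicator.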

Tonal development and voice quality in the stops of Seoul Korean

  • Yu, Hye Jeong
    • Phonetics and Speech Sciences, v.10 no.4, pp.91-99, 2018
  • Korean stops are currently undergoing a tonogenetic sound change: in the Seoul dialect, the merged VOT of the aspirated and lax stops has made F0 the primary cue for distinguishing the two series, with the lax stops having lower F0 than the aspirated stops. In tonal languages, low tone is produced with a breathy voice. This study investigated whether voice quality is changing along with the tonogenetic sound change of Korean stops. Two age groups of Seoul-dialect speakers participated: five females and six males born in the 1940s and 1950s, and nine females and eight males born in the 1980s and 1990s. The study replicated previous findings on VOT and F0 and further examined H1-H2, H1-A1, and H1-A2 to see how they correlate with the sound change. In both the older and younger generations, H1-H2, H1-A1, and H1-A2 were significantly lower after the tense stops than after the aspirated and lax stops, but did not differ significantly between the aspirated and lax stops. However, the younger females showed somewhat different results for H1-H2 and H1-A2 than the older generation: their H1-H2 mean was higher after the aspirated stops than after the lax stops at the vowel onset, and the H1-H2 difference increased at the vowel midpoint. Although there was inter-speaker variation in H1-H2 and H1-A1, analyses of individual speakers showed that both measures were higher after the lax stops than after the aspirated stops in the younger female speakers. These results indicate that lax stops tend to be breathier than aspirated stops for younger female speakers, and that changes in voice quality accompany the tonal sound change in Korean stops but are still developing.
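H1-H2, one of the voice-quality measures above, is the dB difference between the amplitudes of the first and second harmonics; higher values are associated with breathier voice. A minimal sketch, assuming F0 is known and omitting the formant corrections needed for H1-A1 and H1-A2; the toy two-harmonic "voice" is invented for illustration.

```python
# H1-H2: level of harmonic 1 minus level of harmonic 2, in dB.
import numpy as np

def h1_h2(signal, fs, f0):
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)

    def harmonic_db(k):              # peak level near the k-th harmonic
        band = (freqs > 0.8 * k * f0) & (freqs < 1.2 * k * f0)
        return 20 * np.log10(spec[band].max() + 1e-12)

    return harmonic_db(1) - harmonic_db(2)

fs, f0 = 16000, 125.0
t = np.arange(4096) / fs
# Toy breathy-like source: H1 amplitude 1.0, H2 amplitude 0.3.
voice = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
val = h1_h2(voice, fs, f0)           # about 20*log10(1/0.3) = 10.5 dB
```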

The Error Pattern Analysis of the HMM-Based Automatic Phoneme Segmentation (HMM기반 자동음소분할기의 음소분할 오류 유형 분석)

  • Kim Min-Je;Lee Jung-Chul;Kim Jong-Jin
    • The Journal of the Acoustical Society of Korea, v.25 no.5, pp.213-221, 2006
  • Phone segmentation of speech waveforms is especially important for concatenative text-to-speech synthesis, which builds synthetic units from segmented corpora, because the quality of the synthesized speech depends critically on the accuracy of the segmentation. Initially, phone segmentation was performed manually, but this required enormous effort and long delays. HMM-based approaches adopted from automatic speech recognition are now the most widely used for automatic segmentation in speech synthesis, providing a consistent and accurate phone labeling scheme. Although the HMM-based approach has been successful, it may locate a phone boundary at a different position than expected. In this paper, we categorized adjacent phoneme pairs, analyzed the mismatches between hand-labeled transcriptions and HMM-based labels, and describe the dominant error patterns that must be improved for speech synthesis. For the experiment, a hand-labeled standard Korean speech DB from ETRI was used as the reference. A time difference larger than 20 ms between a hand-labeled phoneme boundary and an auto-aligned boundary is treated as an automatic segmentation error. Results for a female speaker revealed that plosive-vowel, affricate-vowel, and vowel-liquid pairs showed high accuracies of 99%, 99.5%, and 99%, respectively, whereas stop-nasal, stop-liquid, and nasal-liquid pairs showed very low accuracies of 45%, 50%, and 55%. Results for a male speaker showed a similar tendency.
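The 20 ms criterion described above can be sketched directly: an auto-aligned boundary counts as correct if it lies within 20 ms of the hand-labeled one. The boundary times below are invented for illustration, not values from the ETRI database.

```python
# Boundary accuracy under a fixed tolerance, as used to score the aligner.
def boundary_accuracy(hand_ms, auto_ms, tol_ms=20.0):
    """Fraction of auto boundaries within tol_ms of the hand labels."""
    hits = sum(1 for h, a in zip(hand_ms, auto_ms) if abs(h - a) <= tol_ms)
    return hits / len(hand_ms)

hand = [120.0, 250.0, 410.0, 633.0]   # hand-labeled boundaries (ms)
auto = [112.0, 255.0, 460.0, 640.0]   # auto-aligned; third is 50 ms off
acc = boundary_accuracy(hand, auto)   # 3 of 4 within 20 ms -> 0.75
```

Computing this per phoneme-pair category (plosive-vowel, stop-nasal, and so on) yields the per-pair accuracy tables the paper reports.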

Comparison of vowel lengths of articles and monosyllabic nouns in Korean EFL learners' noun phrase production in relation to their English proficiency (한국인 영어학습자의 명사구 발화에서 영어 능숙도에 따른 관사와 단음절 명사 모음 길이 비교)

  • Park, Woojim;Mo, Ranm;Rhee, Seok-Chae
    • Phonetics and Speech Sciences, v.12 no.3, pp.33-40, 2020
  • The purpose of this research was to find the relation between Korean learners' English proficiency and the ratio of the length of the stressed vowel in a monosyllabic noun to that of the unstressed vowel in the article of a noun phrase (e.g., "a cup", "the bus", etc.). Generally, the vowels in monosyllabic content words are phonetically more prominent than those in monosyllabic function words: the former carry phrasal stress, making the vowels in content words longer, higher in pitch, and louder. This study, based on speech samples from the Korean-Spoken English Corpus (K-SEC) and the Rated Korean-Spoken English Corpus (Rated K-SEC), examined 879 English noun phrases, each composed of an article and a monosyllabic noun, drawn from sentences rated at four levels of proficiency. The lengths of the vowels in these 879 target NPs were measured, and the ratio of the vowel length in the noun to that in the article was calculated. The higher the proficiency level, the greater the mean ratio, confirming the research hypothesis. The study thus concluded that the higher their English proficiency, the more conspicuous the length difference Korean English learners produce between stressed and unstressed vowels.
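The measure above is just a duration ratio per noun phrase. A tiny sketch with invented durations (not corpus values) shows the direction of the effect: a larger ratio means a clearer stressed/unstressed contrast.

```python
# Noun-to-article vowel length ratio; durations below are illustrative only.
def vowel_length_ratio(noun_vowel_ms, article_vowel_ms):
    return noun_vowel_ms / article_vowel_ms

low_prof = vowel_length_ratio(140.0, 110.0)   # weak stress contrast
high_prof = vowel_length_ratio(180.0, 70.0)   # clearer stress contrast
```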

Thai Phoneme Segmentation using Dual-Band Energy Contour

  • Ratsameewichai, S.;Theera-Umpon, N.;Vilasdechanon, J.;Uatrongjit, S.;Likit-Anurucks, K.
    • Proceedings of the IEEK Conference, 2002.07a, pp.110-112, 2002
  • In this paper, a new technique for phoneme segmentation of isolated Thai speech is proposed. Based on Thai speech features, the isolated speech is first divided into low- and high-frequency components using wavelet decomposition. The energy contour of each decomposed signal is then computed and used to locate phoneme boundaries. To verify the proposed scheme, experiments were performed on 1,000 syllables recorded from 10 speakers. The accuracy rates are 96.0%, 89.9%, 92.7%, and 98.9% for the initial consonant, vowel, final consonant, and silence, respectively.
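The paper's wavelet details are not given in this listing, so in the sketch below a one-level Haar split (pairwise averages and differences) stands in for the decomposition, followed by framewise energy of each band; the vowel-then-fricative test signal is invented for illustration.

```python
# Dual-band energy contours from a one-level Haar-style band split.
import numpy as np

def dual_band_energy(signal, frame=200):
    x = signal[: len(signal) // 2 * 2]
    low = (x[0::2] + x[1::2]) / 2.0    # approximation (low-frequency) band
    high = (x[0::2] - x[1::2]) / 2.0   # detail (high-frequency) band

    def contour(band):
        n = len(band) // frame * frame
        return (band[:n].reshape(-1, frame) ** 2).sum(axis=1)

    return contour(low), contour(high)

fs = 8000
t = np.arange(fs) / fs
# Vowel-like low tone for the first half, fricative-like high tone after.
sig = np.where(t < 0.5, np.sin(2 * np.pi * 200 * t),
               0.3 * np.sin(2 * np.pi * 3500 * t))
e_low, e_high = dual_band_energy(sig)
```

The low-band contour dominates during the vowel-like segment and the high-band contour during the fricative-like segment, which is the cue the segmentation uses to place phoneme boundaries.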

Against Phonological Ambisyllabicity (음운적 양음절성의 허상)

  • 김영석
    • Korean Journal of English Language and Linguistics, v.1 no.1, pp.19-38, 2001
  • The question of how /...VCV.../ sequences should be syllabified is a much discussed, yet unresolved, issue in English phonology. While most researchers recognize an overall universal tendency towards open syllables, there are at least two different views on the analysis of /...VCV.../ when the second vowel is unstressed: ambisyllabicity (e.g., Kahn 1976) and resyllabification (e.g., Borowsky 1986). We essentially adopt the latter view and present further evidence in its favor. This does not exclude low-level "phonetic" ambisyllabification, however. Following Nespor and Vogel (1986), we also assume that the domain of syllabification or resyllabification is the phonological word. With this new conception of English syllable structure, we attempt a reanalysis of Aitken's Law as well as /æ/-tensing in New York City and Philadelphia.
