• Title/Summary/Keyword: phonetic data


Locus equation as a phonetic descriptor for place of articulation in Arabic.

  • Kassem Wahba
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.206-206
    • /
    • 1996
  • Previous studies of American English CVC coarticulation (e.g. Sussman 1991, 1993, 1994), with initial consonants representing the labial, alveolar, and velar places, showed a linear relationship that fits the data points formed by plotting onsets of the F2 transition along the y-axis against their corresponding midvowel points along the x-axis. The present study extends the locus equation metric to the following places of articulation: uvular, pharyngeal, laryngeal, and emphatic. The question of interest is whether the locus equation can serve as a phonetic descriptor for place of articulation in Arabic. Five male native speakers of Colloquial Egyptian Arabic (CEA) read a list of 204 CVC and CVCC words containing eight different places of articulation and eight vowels. Averages of formant-pattern (F1, F2, F3) onsets, midpoints, and offsets were calculated from wide-band spectrograms obtained with a Kay spectrograph (model 7029) and plotted as locus equations. A summary of the acoustic properties of the places of articulation of CEA is presented in the frames bVC and CVb. Strong linear regression relationships were found for every place of articulation.

  • PDF
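The locus-equation metric in the abstract above reduces to an ordinary least-squares fit of F2 transition onsets against F2 vowel midpoints. A minimal sketch in Python; the formant values here are made-up placeholders standing in for real measurements:

```python
import numpy as np

# Hypothetical F2 measurements (Hz) for one place of articulation:
# paired (F2 at vowel midpoint, F2 at CV transition onset) per token.
f2_mid = np.array([800.0, 1200.0, 1600.0, 2000.0, 2400.0])
f2_onset = np.array([1100.0, 1350.0, 1600.0, 1850.0, 2100.0])

# A locus equation is the least-squares line F2_onset = slope * F2_mid + intercept.
slope, intercept = np.polyfit(f2_mid, f2_onset, 1)

# Coefficient of determination (R^2) quantifies how linear the relation is.
pred = slope * f2_mid + intercept
ss_res = np.sum((f2_onset - pred) ** 2)
ss_tot = np.sum((f2_onset - np.mean(f2_onset)) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

The fitted slope and intercept are the locus-equation coefficients for that consonant place; a high R² corresponds to the "strong linear regression relationships" the abstract reports.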

A Phonetic Study of Vowel Raising: A Closer Look at the Realization of the Suffix {-go} (모음 상승 현상의 음성적 고찰: 어미 {-고}의 실현을 중심으로)

  • LEE, HYANG WON;Shin, Jiyoung
    • Korean Linguistics
    • /
    • v.81
    • /
    • pp.267-297
    • /
    • 2018
  • Vowel raising in Korean has been primarily treated as a phonological, categorical change. This study aims to show how the Korean connective suffix {-go} is realized in various environments, and propose a principle of vowel raising based on both acoustic and perceptual data. To that end, we used a corpus of spoken Korean to analyze the types of syntactic constructions, the realization of prosodic boundaries (IP and PP), and the types of boundary tone associated with {-go}. It was found that the vowel tends to be raised most frequently in utterance-final position, while in utterance-medial position the vowel was raised more when the syntactic and prosodic distance between {-go} and the following constituent was smaller. The results for boundary tone also showed a correlation between vowel raising and the discourse function of the boundary tone. In conclusion, we propose that vowel raising is not simply an optional phenomenon, but rather a type of phonetic reduction related to the comprehension of the following constituent.

An Experimental Study on the Degree of Phonetic Similarity between Korean and Japanese Vowels (한국어와 일본어 단모음의 유사성 분석을 위한 실험음성학적 연구)

  • Kwon, Sung-Mi
    • MALSORI
    • /
    • no.63
    • /
    • pp.47-66
    • /
    • 2007
  • This study explores the degree of phonetic similarity between Korean and Japanese vowels in terms of acoustic features, using a speech production test with Korean and Japanese speakers. For this purpose, speech from 16 Japanese speakers (Japanese data) and 16 Korean speakers (Korean data) was used. The findings in assessing the similarity of the 7 nearest equivalents among the Korean and Japanese vowels are as follows. First, Korean /i/ and /e/ displayed no significant differences in F1 or F2 from their counterparts, Japanese /i/ and /e/, and the distributions of F1 and F2 of Korean /i/ and /e/ completely overlapped with those of Japanese /i/ and /e/; accordingly, Korean /i/ and /e/ were judged "identical." Second, Korean /a/, /o/, and /ɨ/ displayed a significant difference in either F1 or F2, but showed a great similarity in the distribution of F1 and F2 with Japanese /a/, /o/, and /ɯ/ respectively; Korean /a/, /o/, and /ɨ/ were therefore categorized as very similar to their Japanese counterparts. Third, Korean /u/, whose nearest counterpart is Japanese /ɯ/, showed a significant difference in both F1 and F2, and only half of the distribution overlapped; thus Korean /u/ was analyzed as only moderately similar. Fourth, Korean /ʌ/ did not have a close counterpart in Japanese and was classified as "the least similar vowel."

  • PDF
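The cross-language comparison above can be crudely sketched as a distance between vowel categories in the (F1, F2) plane. The formant values below are invented placeholders, not the study's data:

```python
import math

# Hypothetical mean formant values (Hz): (F1, F2) per vowel category.
korean = {"i": (300, 2300), "e": (500, 2100), "a": (800, 1300)}
japanese = {"i": (310, 2280), "e": (480, 2050), "a": (820, 1360)}

def formant_distance(v1, v2):
    """Euclidean distance between two vowels in the (F1, F2) plane, in Hz."""
    return math.hypot(v1[0] - v2[0], v1[1] - v2[1])

# Rank the Korean-Japanese vowel pairs from most to least similar.
pairs = sorted(korean, key=lambda v: formant_distance(korean[v], japanese[v]))
```

A real similarity assessment, as in the paper, would also test the formant differences statistically and compare the overlap of token distributions rather than category means alone.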

On the Merger of Korean Mid Front Vowels: Phonetic and Phonological Evidence

  • Eychenne, Julien;Jang, Tae-Yeoub
    • Phonetics and Speech Sciences
    • /
    • v.7 no.2
    • /
    • pp.119-129
    • /
    • 2015
  • This paper investigates the status of the merger between the mid front unrounded vowels ㅔ [e] and ㅐ [ɛ] in contemporary Korean. Our analysis is based on a balanced corpus of production and perception data from young subjects from three dialectal areas (Seoul, Daegu and Gwangju). Except for expected gender differences, the production data display no difference in the realization of these vowels, in any of the dialects. The perception data, while mostly in line with the production results, show that Seoul females tend to better discriminate the two vowels in terms of perceived height: vowels with a lower F1 are more likely to be categorized as ㅔ by this group. We then investigate the possible causes of this merger: based on an empirical study of transcribed spoken Korean, we show that the pair of vowels ㅔ/ㅐ has a very low functional load. We argue that this factor, together with the phonetic similarity of the two vowels, may have been responsible for the observed merger.
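The functional-load argument above rests on counting how much work a contrast does, for instance how many minimal pairs hinge on the e/ɛ distinction. A toy sketch; the lexicon is invented, and real functional-load measures also weight pairs by word frequency:

```python
# Toy phonemic lexicon (hypothetical romanized forms, not the paper's corpus).
lexicon = {"ke", "kɛ", "ne", "pɛ", "se", "sɛ", "mo"}

def minimal_pairs(words, a, b):
    """Return pairs of words identical except for one a/b substitution."""
    pairs = set()
    for w in words:
        for i, ch in enumerate(w):
            if ch == a:
                other = w[:i] + b + w[i + 1:]
                if other in words:
                    pairs.add(frozenset((w, other)))
    return pairs

# Only ke/kɛ and se/sɛ are distinguished solely by the e/ɛ contrast here,
# so the contrast carries a low functional load in this toy lexicon.
pairs = minimal_pairs(lexicon, "e", "ɛ")
```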

Voice Conversion using Generative Adversarial Nets conditioned by Phonetic Posterior Grams (Phonetic Posterior Grams에 의해 조건화된 적대적 생성 신경망을 사용한 음성 변환 시스템)

  • Lim, Jin-su;Kang, Cheon-seong;Kim, Dong-Ha;Kim, Kyung-sup
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.369-372
    • /
    • 2018
  • This paper proposes a non-parallel voice conversion network that converts between an unmapped pair of source and target voices. Conventional voice conversion research has used learning methods that minimize the spectrogram distance error; these methods not only lose spectrogram resolution by averaging pixels, but also depend on parallel data, which is hard to collect. This work instead uses PPGs, which encode the phonetic content of the input voice, together with a GAN learning method to generate clearer voices. To evaluate the proposed method, we conducted a MOS test against a GMM-based model and found that performance improved over the conventional methods.

  • PDF

Electromyographic evidence for a gestural-overlap analysis of vowel devoicing in Korean

  • Jun, Sun-A;Beckman, M.;Niimi, Seiji;Tiede, Mark
    • Speech Sciences
    • /
    • v.1
    • /
    • pp.153-200
    • /
    • 1997
  • In languages such as Japanese, it is very common to observe that short peripheral vowels become completely voiceless when surrounded by voiceless consonants. The same phenomenon has been reported in Montreal French, Shanghai Chinese, Greek, and Korean. Traditionally it has been described as a phonological rule that either categorically deletes the vowel or changes its [+voice] feature to [-voice]. This analysis was supported by Sawashima's (1971) and Hirose's (1971) observation that there are two distinct EMG patterns for voiced and devoiced vowels in Japanese. Close examination of the phonetic evidence based on acoustic data, however, shows that these phonological characterizations are not tenable (Jun & Beckman 1993, 1994). In this paper, we examine vowel devoicing in Korean using EMG, fiberscopic, and acoustic recordings of 100 sentences produced by one Korean speaker. The results show variability in the 'degree of devoicing' in both the acoustic and EMG signals, and in the patterns of glottal closing and opening across different devoiced tokens. There seems to be no categorical difference between devoiced and voiced tokens, for either EMG activity events or glottal patterns. All of these observations support the notion that vowel devoicing in Korean cannot be described as the result of applying a phonological rule. Rather, devoicing seems to be a highly variable 'phonetic' process, a more or less subtle variation in the specification of such phonetic metrics as the degree and timing of glottal opening, or of the subglottal pressure or intra-oral airflow associated with concurrent tone and stricture specifications. Some of the token-pair comparisons are amenable to an explanation in terms of gestural overlap and undershoot. However, the effect of gestural timing on vocal fold state seems to be a highly nonlinear function of the interaction among specifications for the relative timing of glottal adduction and abduction gestures, the amplitudes of the overlapped gestures, the aerodynamic conditions created by concurrent oral and tonal gestures, and so on. In summary, to understand devoicing it will be necessary to examine its effects on the phonetic representation of events in many parts of the vocal tract, and at many stages of the speech chain between the motor intent and the acoustic signal that reaches the hearer's ear.

  • PDF

Automatic Conversion of English Pronunciation Using Sequence-to-Sequence Model (Sequence-to-Sequence Model을 이용한 영어 발음 기호 자동 변환)

  • Lee, Kong Joo;Choi, Yong Seok
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.5
    • /
    • pp.267-278
    • /
    • 2017
  • As the same letter can be pronounced differently depending on word context, one should refer to a lexicon in order to pronounce a word correctly. The phonetic alphabets that lexicons adopt, as well as the pronunciations they give for the same word, can differ from lexicon to lexicon. In this paper, we use a sequence-to-sequence model, widely used in deep learning research, to convert automatically from one pronunciation to another. Twelve seq2seq models are implemented based on pronunciation training data collected from 4 different lexicons. The exact accuracy of the models ranges from 74.5% to 89.6%. The aims of this study are twofold: to understand the properties of the phonetic alphabets and pronunciations used in various lexicons, and to understand the characteristics of seq2seq models by analyzing their errors.
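The "exact accuracy" reported above can be read as sequence-level exact match: a converted pronunciation counts as correct only if every phone agrees with the reference. A minimal sketch with invented phone sequences:

```python
def exact_accuracy(predictions, references):
    """Fraction of predicted phone sequences that match the reference exactly."""
    correct = sum(1 for p, r in zip(predictions, references) if p == r)
    return correct / len(references)

# Hypothetical converted vs. reference pronunciations (ARPAbet-like phones).
preds = [["K", "AE", "T"], ["D", "AO", "G"], ["B", "ER", "D"]]
refs = [["K", "AE", "T"], ["D", "AO", "G"], ["B", "ER", "DD"]]
acc = exact_accuracy(preds, refs)  # 2 of the 3 sequences match
```

Per-phone error rates (edit distance over phones) would give partial credit; exact match, as here, is the stricter metric.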

Acoustic Characteristics of Korean Alveolar Sibilant 's', 's'' according to Phonetic Contexts of Children with Cerebral Palsy (뇌성마비 아동의 음성 환경에 따른 치경마찰음 'ㅅ', 'ㅆ'의 음향학적 특성)

  • Kim, Sookhee;Kim, Hyungi
    • Phonetics and Speech Sciences
    • /
    • v.5 no.2
    • /
    • pp.3-10
    • /
    • 2013
  • The purpose of this study is to analyze the acoustic characteristics of the Korean alveolar sibilants produced by children with cerebral palsy. Thirteen children with spastic cerebral palsy, aged 6 to 10 years, were selected by an articulation test and compared with a control group of thirty children. Meaningless monosyllables (CV), disyllables (VCV, /asa/) and frame sentences including the target CV syllables were measured. C was drawn from /s, s'/, and V from the set /a, i, u, ɛ, o, ɯ, ʌ/. Multi-Speech was used for data recording and analysis. As a result, the frication duration of the lenis and glottalized alveolar sibilants of children with cerebral palsy was significantly shorter than that of the control group in CV, VCV and frame sentences, and the duration of the following vowel was significantly longer. The children with cerebral palsy also showed frequency and intensity of the frication intervals that were significantly lower than those of the control group in CV, VCV and frame sentences. Comparing the cerebral palsy group's phonation types, frication duration showed a significant difference between phonation types in CV and VCV and between phonetic contexts; the glottalized sibilant was longer than the lenis sibilant in all phonetic contexts. The following-vowel duration showed a significant difference between phonation types in VCV and between phonetic contexts (p<.05); the vowel following the glottalized sibilant was longer than the vowel following the lenis sibilant in all phonetic contexts. Frequency differed significantly between phonation types in CV, and intensity differed significantly between phonation types in CV and VCV. The children with spastic cerebral palsy had difficulty articulating the alveolar sibilants because of poor control of the laryngeal, respiratory and articulatory movements that require fine motor coordination. This study quantitatively analyzes the acoustic parameters of the alveolar sibilants in various phonetic contexts, and the results are expected to provide fundamental data for articulation treatment for children with cerebral palsy.

A Study on the Pitch Contour Generator with Neural Network in the Isolated Words (신경망을 이용한 고립단어에서의 피치변화곡선 발생기에 관한 연구)

  • Lim Unchun;Kwak Jingu;Chang Sokwang
    • Proceedings of the KSPS conference
    • /
    • 1996.02a
    • /
    • pp.137-155
    • /
    • 1996
  • The purpose of this paper is to generate, using a neural network, a pitch contour that is affected by the phonetic environment and the number of syllables in each Korean isolated word. To do this, we analyzed a set of 513 Korean isolated words of 1-4 syllables and extracted the pitch contour and the duration of each phoneme in all the words; the total number of phonemes analyzed is about 3800. We then approximated each pitch contour with a 1st-order polynomial by regression analysis, obtaining the slope, the initial pitch and the duration of each phoneme. We used these 3 parameters as the target pattern of the neural network and let the network learn the rule of variation of pitch and duration as affected by the phonetic environment of each phoneme. Strings of 7 consecutive phonemes served as the input pattern, so that the network could learn the effect of the phonetic environment around the center phoneme. In the learning phase, we used 3545 items (463 words) as target patterns, covering the phonetic environment of the 3 preceding and 3 following phonemes, and the neural network showed correctness rates of 98.43%, 98.59% and 97.7% in estimating the duration, the slope and the initial pitch. In the recall phase, we tested the network with 251 items (50 words) that were not used as learning data and obtained good correctness rates of 97.34%, 95.45% and 96.3% in generating the duration, the slope and the initial pitch of each phoneme.

  • PDF
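The per-phoneme parameterization described above, a first-order polynomial fit yielding a slope and an initial pitch alongside the duration, can be sketched with numpy. The F0 samples below are synthetic:

```python
import numpy as np

# Synthetic pitch track for one phoneme: 9 F0 samples over 80 ms.
t = np.linspace(0.0, 0.08, 9)   # time points (s)
f0 = 120.0 + 250.0 * t          # a rising pitch contour (Hz)

# First-order fit: np.polyfit returns the highest-degree coefficient first,
# so the line is f0 ≈ slope * t + initial_pitch.
slope, initial_pitch = np.polyfit(t, f0, 1)
duration = t[-1] - t[0]

# (slope, initial_pitch, duration) is the 3-parameter target pattern
# the abstract describes feeding to the neural network.
```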

Implementation of Korean TTS System based on Natural Language Processing (자연어 처리 기반 한국어 TTS 시스템 구현)

  • Kim Byeongchang;Lee Gary Geunbae
    • MALSORI
    • /
    • no.46
    • /
    • pp.51-64
    • /
    • 2003
  • In order to produce high-quality synthesized speech, it is very important to obtain accurate grapheme-to-phoneme conversion and an accurate prosody model from texts using natural language processing. Robust preprocessing for non-Korean characters is also required. In this paper, we analyze Korean texts using a morphological analyzer, part-of-speech tagger and syntactic chunker. We present a new grapheme-to-phoneme conversion method for Korean that uses a hybrid of a phonetic pattern dictionary and CCV (consonant vowel) LTS (letter-to-sound) rules, for unlimited-vocabulary Korean TTS. We construct a prosody model using a probabilistic method and a decision-tree-based method. The probabilistic method alone usually suffers from performance degradation due to inherent data sparseness problems, so we adopted tree-based error correction to overcome these training-data limitations.

  • PDF
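The hybrid grapheme-to-phoneme strategy described above, dictionary lookup first with letter-to-sound rules as fallback, can be sketched as follows. The dictionary entry, the rule table and the phone names are invented for illustration, and real LTS rules are context-sensitive rather than one letter per phone:

```python
# Hypothetical phonetic pattern dictionary and letter-to-sound rule table.
PATTERN_DICT = {"hangul": "HH AA NG UH L"}
LTS_RULES = {"a": "AA", "e": "EH", "g": "G", "h": "HH", "l": "L", "n": "N", "u": "UH"}

def g2p(word):
    """Dictionary lookup with rule-based fallback for out-of-dictionary words."""
    if word in PATTERN_DICT:
        return PATTERN_DICT[word]
    # Fallback: naive one-letter-per-phone rules for unknown words.
    return " ".join(LTS_RULES.get(ch, ch.upper()) for ch in word)

g2p("hangul")  # dictionary hit
g2p("hal")     # rule fallback -> "HH AA L"
```

The design point is coverage: the dictionary handles known irregular items, while the rules keep the converter total over an unlimited vocabulary, matching the "unlimited vocabulary Korean TTS" goal in the abstract.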