• Title/Abstract/Keywords: phonetic data

Search results: 200 items (processing time: 0.021 s)

Locus equation as a phonetic descriptor for place of articulation in Arabic

  • Kassem Wahba
    • 대한음성학회:학술대회논문집 / 대한음성학회 October 1996 Conference Proceedings / pp. 206-206 / 1996
  • Previous studies of American English CVC coarticulation (e.g. Sussman 1991, 1993, 1994), with initial consonants representing the labial, alveolar, and velar places, showed a linear relationship fitted to data points formed by plotting F2 transition onsets along the y-axis against their corresponding midvowel F2 values along the x-axis. The present study extends the locus equation metric to include the following places of articulation: uvular, pharyngeal, laryngeal, and emphatic. The question of interest is whether the locus equation could serve as a phonetic descriptor for place of articulation in Arabic. Five male native speakers of Colloquial Egyptian Arabic (CEA) read a list of 204 CVC and CVCC words containing eight different places of articulation and eight vowels. Averages of formant (F1, F2, F3) onsets, midpoints, and offsets were calculated from wide-band spectrograms obtained with the Kay spectrograph (model 7029) and plotted as locus equations. A summary of the acoustic properties of place of articulation in CEA is presented in the frames bVC and CVb. Strong linear regression relationships were found for every place of articulation. (A brief sketch of a locus-equation fit follows below.)

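The regression described in this abstract is straightforward to reproduce. The following minimal sketch fits a locus equation for a single place of articulation; the formant values are hypothetical placeholders, not measurements from the study.

```python
# Minimal sketch of a locus-equation fit: regress F2 at vowel onset on F2 at
# the vowel midpoint for CV tokens sharing one place of articulation.
# The token values below are illustrative placeholders, not data from the study.
import numpy as np

# (F2 midvowel, F2 onset) in Hz for hypothetical /bV/ tokens across vowels
f2_mid   = np.array([2300, 2000, 1700, 1400, 1100,  900,  800,  750], float)
f2_onset = np.array([1900, 1700, 1500, 1300, 1100, 1000,  950,  900], float)

# Least-squares fit: F2_onset = slope * F2_mid + intercept
slope, intercept = np.polyfit(f2_mid, f2_onset, 1)

# R^2 as a measure of the linearity reported in the abstract
pred = slope * f2_mid + intercept
ss_res = np.sum((f2_onset - pred) ** 2)
ss_tot = np.sum((f2_onset - f2_onset.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"slope={slope:.2f}, intercept={intercept:.0f} Hz, R^2={r_squared:.3f}")
```

In locus-equation studies the slope and y-intercept of this fit are the descriptors compared across places of articulation: slopes near 1 indicate strong vowel-dependent coarticulation of the F2 onset, slopes near 0 a relatively fixed locus.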

모음 상승 현상의 음성적 고찰: 어미 {-고}의 실현을 중심으로 (A Phonetic Study of Vowel Raising: A Closer Look at the Realization of the Suffix {-go})

  • 이향원;신지영
    • 한국어학 / Vol. 81 / pp. 267-297 / 2018
  • Vowel raising in Korean has been primarily treated as a phonological, categorical change. This study aims to show how the Korean connective suffix {-go} is realized in various environments, and propose a principle of vowel raising based on both acoustic and perceptual data. To that end, we used a corpus of spoken Korean to analyze the types of syntactic constructions, the realization of prosodic boundaries (IP and PP), and the types of boundary tone associated with {-go}. It was found that the vowel tends to be raised most frequently in utterance-final position, while in utterance-medial position the vowel was raised more when the syntactic and prosodic distance between {-go} and the following constituent was smaller. The results for boundary tone also showed a correlation between vowel raising and the discourse function of the boundary tone. In conclusion, we propose that vowel raising is not simply an optional phenomenon, but rather a type of phonetic reduction related to the comprehension of the following constituent.

한국어와 일본어 단모음의 유사성 분석을 위한 실험음성학적 연구 (An Experimental Study on the Degree of Phonetic Similarity between Korean and Japanese Vowels)

  • 권성미
    • 대한음성학회지:말소리 / No. 63 / pp. 47-66 / 2007
  • This study explores the degree of phonetic similarity between Korean and Japanese vowels in terms of acoustic features, based on a speech production test with Korean and Japanese speakers. For this purpose, speech from 16 Japanese speakers was used for the Japanese data and speech from 16 Korean speakers for the Korean data. The findings in assessing the degree of similarity of the 7 nearest equivalents of the Korean and Japanese vowels are as follows. First, Korean /i/ and /e/ showed no significant differences in F1 or F2 from their counterparts, Japanese /i/ and /e/, and the distributions of F1 and F2 of Korean /i/ and /e/ completely overlapped with those of Japanese /i/ and /e/ in the distributional map; accordingly, Korean /i/ and /e/ were judged to be "identical." Second, Korean /a/, /o/, and /ɨ/ displayed a significant difference in either F1 or F2, but showed a great similarity in the distribution of F1 and F2 with Japanese /a/, /o/, and /ɯ/ respectively; Korean /a/, /o/, and /ɨ/ were therefore categorized as very similar to their Japanese counterparts. Third, Korean /u/, whose nearest Japanese counterpart is /ɯ/, showed a significant difference in both F1 and F2, and only half of the distribution overlapped; thus, Korean /u/ was analyzed as moderately similar to its Japanese counterpart. Fourth, Korean /ʌ/ did not have a close counterpart in Japanese and was classified as "the least similar vowel." (A sketch of this kind of formant comparison follows below.)

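The comparison described above amounts to per-formant significance tests between speaker groups plus an inspection of distributional overlap. A minimal sketch of the test step, using randomly generated stand-in formant values rather than the study's data:

```python
# Sketch of the kind of F1/F2 comparison described above: an independent
# t-test per formant for one Korean/Japanese vowel pair. The formant values
# are hypothetical placeholders, not measurements from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
korean_a_f1   = rng.normal(800, 60, 16)   # 16 hypothetical Korean speakers
japanese_a_f1 = rng.normal(780, 60, 16)   # 16 hypothetical Japanese speakers

t, p = stats.ttest_ind(korean_a_f1, japanese_a_f1, equal_var=False)
print(f"F1 of /a/: t={t:.2f}, p={p:.3f} -> "
      f"{'significant' if p < 0.05 else 'no significant'} difference")
```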

On the Merger of Korean Mid Front Vowels: Phonetic and Phonological Evidence

  • Eychenne, Julien;Jang, Tae-Yeoub
    • 말소리와 음성과학 / Vol. 7, No. 2 / pp. 119-129 / 2015
  • This paper investigates the status of the merger between the mid front unrounded vowels ㅔ [e] and ㅐ [ɛ] in contemporary Korean. Our analysis is based on a balanced corpus of production and perception data from young subjects from three dialectal areas (Seoul, Daegu and Gwangju). Except for expected gender differences, the production data display no difference in the realization of these vowels in any of the dialects. The perception data, while mostly in line with the production results, show that Seoul females tend to better discriminate the two vowels in terms of perceived height: vowels with a lower F1 are more likely to be categorized as ㅔ by this group. We then investigate the possible causes of this merger: based on an empirical study of transcribed spoken Korean, we show that the pair of vowels ㅔ/ㅐ has a very low functional load. We argue that this factor, together with the phonetic similarity of the two vowels, may have been responsible for the observed merger. (A toy functional-load count is sketched below.)
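
One simple way to operationalize the "very low functional load" mentioned above is to count minimal pairs in a transcribed lexicon that are distinguished only by the two vowels. The sketch below uses a toy word list; it is not the corpus or the exact measure used by the authors.

```python
# Toy sketch of a minimal-pair count for the /e/-/ɛ/ contrast, one simple
# proxy for functional load. The word list is illustrative, not the corpus
# used in the paper, and the authors' actual measure may differ.
lexicon = {"kes", "kɛs", "pe", "pɛ", "mul", "san"}   # toy phonemic transcriptions

def minimal_pairs(lexicon, v1="e", v2="ɛ"):
    pairs = set()
    for word in lexicon:
        # swap v1 for v2 and check whether the result is also a word
        swapped = word.replace(v1, v2) if v1 in word else None
        if swapped and swapped != word and swapped in lexicon:
            pairs.add(frozenset((word, swapped)))
    return pairs

pairs = minimal_pairs(lexicon)
print(f"{len(pairs)} minimal pair(s) distinguished only by e/ɛ: {pairs}")
```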

Phonetic Posterior Grams에 의해 조건화된 적대적 생성 신경망을 사용한 음성 변환 시스템 (Voice Conversion using Generative Adversarial Nets conditioned by Phonetic Posterior Grams)

  • 임진수;강천성;김동하;김경섭
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2018 Fall Conference / pp. 369-372 / 2018
  • This paper proposes a non-parallel voice conversion network that converts speech between an unmapped input voice and a target voice. Previous voice conversion work has mainly trained models by minimizing the distance error between the spectrograms before and after conversion. Because MSE averages over the spectrogram image, this approach degrades the resolution of the generated spectrograms. In addition, such studies relied on parallel data, which is difficult to collect. In this paper, training is carried out on non-parallel data by using phonetic posteriorgrams (PPGs) of the input speech, and adversarial (GAN) training is used to generate clearer speech. To verify the effectiveness of the proposed method, MOS tests were conducted against a GMM-based model commonly used in existing voice conversion systems, and the proposed model showed improved performance. (A minimal sketch of PPG-conditioned adversarial training follows below.)

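The core idea in the abstract above is to condition generation on phonetic posteriorgrams (PPGs) of the input speech so that non-parallel data can be used, and to replace a plain MSE spectrogram loss with an adversarial one. The following PyTorch sketch illustrates only that conditioning and loss structure; the layer sizes, dimensions, and dummy batches are assumptions, not the authors' architecture.

```python
# Minimal PyTorch sketch of PPG-conditioned spectrogram generation with an
# adversarial loss. Illustration of the conditioning idea only; the paper's
# actual network sizes and training details are not reproduced here.
import torch
import torch.nn as nn

PPG_DIM, MEL_DIM = 144, 80          # assumed feature dimensions

generator = nn.Sequential(          # PPG frames -> mel-spectrogram frames
    nn.Linear(PPG_DIM, 256), nn.ReLU(),
    nn.Linear(256, MEL_DIM),
)
discriminator = nn.Sequential(      # mel frame -> real/fake score (logit)
    nn.Linear(MEL_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

bce = nn.BCEWithLogitsLoss()
ppg = torch.rand(32, PPG_DIM)        # dummy batch of source-speech PPG frames
real_mel = torch.rand(32, MEL_DIM)   # dummy batch of target-speaker frames

fake_mel = generator(ppg)
d_loss = bce(discriminator(real_mel), torch.ones(32, 1)) + \
         bce(discriminator(fake_mel.detach()), torch.zeros(32, 1))
g_loss = bce(discriminator(fake_mel), torch.ones(32, 1))   # adversarial term
print(d_loss.item(), g_loss.item())
```

In an actual GAN setup the discriminator and generator updates alternate, and the generator loss is usually combined with a reconstruction or feature-matching term; the sketch only shows how the PPG conditioning enters.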

Electromyographic evidence for a gestural-overlap analysis of vowel devoicing in Korean

  • Jun, Sun-A;Beckman, M.;Niimi, Seiji;Tiede, Mark
    • 음성과학 / Vol. 1 / pp. 153-200 / 1997
  • In languages such as Japanese, it is very common to observe that short peripheral vowels are completely voiceless when surrounded by voiceless consonants. This phenomenon has also been reported in languages such as Montreal French, Shanghai Chinese, Greek, and Korean. Traditionally it has been described by a phonological rule that either categorically deletes the vowel or changes its [+voice] feature to [-voice]. This analysis was supported by the observation of Sawashima (1971) and Hirose (1971) that there are two distinct EMG patterns for voiced and devoiced vowels in Japanese. Close examination of the phonetic evidence based on acoustic data, however, shows that these phonological characterizations are not tenable (Jun & Beckman 1993, 1994). In this paper, we examined the vowel devoicing phenomenon in Korean using EMG, fiberscopic, and acoustic recordings of 100 sentences produced by one Korean speaker. The results show that there is variability in the 'degree of devoicing' in both the acoustic and EMG signals, and in the patterns of glottal closing and opening across different devoiced tokens. There seems to be no categorical difference between devoiced and voiced tokens, either in EMG activity events or in glottal patterns. All of these observations support the notion that vowel devoicing in Korean cannot be described as the result of the application of a phonological rule. Rather, devoicing seems to be a highly variable 'phonetic' process, a more or less subtle variation in the specification of such phonetic metrics as the degree and timing of glottal opening, or of the associated subglottal pressure or intra-oral airflow associated with concurrent tone and stricture specifications. Some of the token-pair comparisons are amenable to an explanation in terms of gestural overlap and undershoot. However, the effect of gestural timing on vocal fold state seems to be a highly nonlinear function of the interaction among specifications for the relative timing of glottal adduction and abduction gestures, the amplitudes of the overlapped gestures, the aerodynamic conditions created by concurrent oral and tonal gestures, and so on. In summary, to understand devoicing, it will be necessary to examine its effects on the phonetic representation of events in many parts of the vocal tract, and at many stages of the speech chain between the motor intent and the acoustic signal that reaches the hearer's ear.


Sequence-to-Sequence Model을 이용한 영어 발음 기호 자동 변환 (Automatic Conversion of English Pronunciation Using Sequence-to-Sequence Model)

  • 이공주;최용석
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 6, No. 5 / pp. 267-278 / 2017
  • Because English is a language in which the same spelling can be pronounced in many different ways, the exact pronunciation of a word can only be known by reading the phonetic transcription given in a dictionary. Each English dictionary uses a different phonetic alphabet system, and the pronunciations given for the same word also differ. In this study, we use the sequence-to-sequence (seq2seq) model, widely used in recent deep learning research, to automatically convert between the pronunciations used by different dictionaries. Using pronunciation data extracted from four different dictionaries, we implemented a total of 12 seq2seq models, and the exact-match accuracy of the automatic pronunciation conversion modules ranged from 74.5% to 89.6%. The main goals of this study are twofold: first, to examine English phonetic alphabet systems and the characteristics of each dictionary's pronunciation data; and second, to examine the characteristics of the seq2seq model through automatic conversion of pronunciation information and analysis of its errors. (A toy character-level encoder-decoder sketch follows below.)
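
The conversion task above is a character-level sequence-to-sequence mapping from one dictionary's phonetic symbols to another's. Below is a toy encoder-decoder sketch of that framing; the transcription pair, symbol inventory, and hyperparameters are illustrative assumptions rather than the paper's setup.

```python
# Tiny character-level encoder-decoder sketch of the kind of seq2seq mapping
# described above: one dictionary's phonetic symbols in, another's out.
# The example pair and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

src = list("fəˈnetɪk")          # transcription from dictionary A (hypothetical)
tgt = list("fō-ˈne-tik")        # transcription from dictionary B (hypothetical)

symbols = sorted(set(src + tgt)) + ["<sos>", "<eos>"]
idx = {s: i for i, s in enumerate(symbols)}
V, H = len(symbols), 64

embed = nn.Embedding(V, H)
encoder = nn.GRU(H, H, batch_first=True)
decoder = nn.GRU(H, H, batch_first=True)
out_proj = nn.Linear(H, V)

src_ids = torch.tensor([[idx[s] for s in src]])
tgt_ids = torch.tensor([[idx["<sos>"]] + [idx[s] for s in tgt]])
gold    = torch.tensor([[idx[s] for s in tgt] + [idx["<eos>"]]])

_, h = encoder(embed(src_ids))            # encode the source transcription
logits, _ = decoder(embed(tgt_ids), h)    # teacher-forced decoding
loss = nn.CrossEntropyLoss()(out_proj(logits).squeeze(0), gold.squeeze(0))
loss.backward()                           # one illustrative training step
print(float(loss))
```

The 12 models mentioned in the abstract presumably correspond to the 4 × 3 = 12 ordered source-target dictionary pairs.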

뇌성마비 아동의 음성 환경에 따른 치경마찰음 'ㅅ', 'ㅆ'의 음향학적 특성 (Acoustic Characteristics of Korean Alveolar Sibilant 's', 's'' according to Phonetic Contexts of Children with Cerebral Palsy)

  • 김숙희;김현기
    • 말소리와 음성과학 / Vol. 5, No. 2 / pp. 3-10 / 2013
  • The purpose of this study is to analyze the acoustic characteristics of the Korean alveolar sibilants produced by children with cerebral palsy. Thirteen children with spastic cerebral palsy, aged 6 to 10 years, were selected by an articulation test and compared with a control group of thirty children. Meaningless monosyllables (CV), disyllables (VCV, /asa/), and frame sentences containing the target CV syllables were measured; C was drawn from /s, s'/ and V from the set /a, i, u, ɛ, o, ɯ, ʌ/. Multi-Speech was used for data recording and analysis. As a result, the frication duration of the lenis and glottalized alveolar sibilants of children with cerebral palsy was significantly shorter than that of the control group in CV, VCV and the frame sentence. The duration of the vowel following the lenis and glottalized alveolar sibilants was significantly longer for the children with cerebral palsy than for the control group in CV, VCV and the frame sentence. The children with cerebral palsy also showed significantly lower frequency and intensity of the frication intervals than the control group in CV, VCV and the frame sentence. When the alveolar sibilants were compared across phonation types within the cerebral palsy group, frication duration differed significantly between the phonation types in CV and VCV and between the phonetic contexts: the glottalized sibilant was longer than the lenis sibilant in all the phonetic contexts. The duration of the following vowel differed significantly between the phonation types in VCV and between the phonetic contexts (p<.05): the vowel following the glottalized sibilant was longer than the vowel following the lenis sibilant in all the phonetic contexts. Frequency differed significantly between the phonation types in CV, and intensity differed significantly between the phonation types in CV and VCV. The children with spastic cerebral palsy had difficulty articulating the alveolar sibilants due to poor control of the laryngeal, respiratory and articulatory movements that require fine motor coordination. This study quantitatively analyzes the acoustic parameters of the alveolar sibilants in various phonetic contexts, and the results are expected to provide fundamental data for articulation treatment of children with cerebral palsy. (A sketch of these acoustic measures follows below.)
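
The parameters compared above (frication duration, frequency, and intensity of the frication interval) can be computed from a labeled interval of the waveform. The sketch below uses a synthetic noise signal and hypothetical interval boundaries, not the study's recordings, and uses the spectral centroid as a stand-in frequency measure.

```python
# Sketch of the acoustic measures named above, computed over a labeled
# frication interval. The signal and interval boundaries are synthetic
# placeholders, not study data.
import numpy as np

sr = 16000
noise = np.random.randn(sr)                      # 1 s of stand-in "speech"
fric_start, fric_end = 0.30, 0.42                # hypothetical /s/ interval (s)

segment = noise[int(fric_start * sr):int(fric_end * sr)]
duration_ms = 1000 * len(segment) / sr           # frication duration

spectrum = np.abs(np.fft.rfft(segment))
freqs = np.fft.rfftfreq(len(segment), 1 / sr)
centroid_hz = np.sum(freqs * spectrum) / np.sum(spectrum)   # spectral centroid

intensity_db = 20 * np.log10(np.sqrt(np.mean(segment ** 2)) + 1e-12)

print(f"{duration_ms:.0f} ms, centroid {centroid_hz:.0f} Hz, {intensity_db:.1f} dB")
```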

신경망을 이용한 고립단어에서의 피치변화곡선 발생기에 관한 연구 (A Study on the Pitch Contour Generator with Neural Network in the Isolated Words)

  • 임운천;곽진구;장석왕
    • 대한음성학회:학술대회논문집 / 대한음성학회 February 1996 Conference Proceedings / pp. 137-155 / 1996
  • The purpose of this paper is to generate a pitch contour that reflects the phonetic environment and the number of syllables in Korean isolated words, using a neural network. To do this, we analyzed a set of 513 Korean isolated words of 1-4 syllables and extracted the pitch contour and the duration of each phoneme in all the words; the total number of phonemes analyzed is about 3800. We then approximated each pitch contour with a first-order polynomial by regression analysis, which gives the slope, the initial pitch, and the duration of each phoneme. We used these 3 parameters as the target pattern of the neural network and let the network learn how pitch and duration vary with the phonetic environment of each phoneme. A string of 7 consecutive phonemes was used as the input pattern, so that the network could learn the effect of the phonetic environment around the center phoneme. In the learning phase, we used 3545 items (463 words) as target patterns, covering the phonetic environment of the three preceding and three following phonemes, and the neural network showed accuracy rates of 98.43%, 98.59%, and 97.7% in the estimation of the duration, the slope, and the initial pitch, respectively. In the recall phase, we tested the network on 251 items (50 words) that were not used as learning data and obtained accuracy rates of 97.34%, 95.45%, and 96.3% in the generation of the duration, the slope, and the initial pitch of each phoneme. (A sketch of the first-order pitch fit follows below.)

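The target parameters described above (slope, initial pitch, and duration per phoneme) come from a first-order polynomial fit to each phoneme's pitch track. A minimal sketch with a hypothetical F0 contour, not the study's data:

```python
# Sketch of the target-parameter extraction described above: fit a first-order
# polynomial to a phoneme's pitch samples to obtain slope and initial pitch,
# plus its duration. The F0 samples are hypothetical.
import numpy as np

t  = np.linspace(0.0, 0.12, 12)                   # 120 ms phoneme, 10 ms frames
f0 = 180 - 150 * t + np.random.randn(12) * 2      # hypothetical F0 track (Hz)

slope, intercept = np.polyfit(t, f0, 1)           # first-order regression
duration = t[-1] - t[0]
initial_pitch = intercept                         # F0 at phoneme onset (t = 0)

print(f"slope={slope:.1f} Hz/s, initial pitch={initial_pitch:.1f} Hz, "
      f"duration={duration * 1000:.0f} ms")
```

These three values per phoneme are the target pattern the abstract describes; the network's input is the surrounding window of seven phonemes (the center phoneme plus three on each side).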

자연어 처리 기반 한국어 TTS 시스템 구현 (Implementation of Korean TTS System based on Natural Language Processing)

  • 김병창;이근배
    • 대한음성학회지:말소리 / No. 46 / pp. 51-64 / 2003
  • In order to produce high-quality synthesized speech, it is very important to obtain accurate grapheme-to-phoneme conversion and a prosody model from texts using natural language processing; robust preprocessing of non-Korean characters is also required. In this paper, we analyze Korean texts using a morphological analyzer, a part-of-speech tagger and a syntactic chunker. We present a new grapheme-to-phoneme conversion method for unlimited-vocabulary Korean TTS, a hybrid of a phonetic pattern dictionary and CCV (consonant-consonant-vowel) letter-to-sound (LTS) rules. We constructed a prosody model using a probabilistic method and a decision-tree-based method; the probabilistic method alone usually suffers from performance degradation due to inherent data sparseness, so we adopted tree-based error correction to overcome these training-data limitations. (A toy sketch of the hybrid lookup follows below.)

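The hybrid grapheme-to-phoneme strategy described above consults a phonetic pattern dictionary first and falls back to letter-to-sound rules. The sketch below illustrates only that control flow; the romanized entries and rules are toy placeholders, not the dictionary or CCV rule set used in the paper.

```python
# Minimal sketch of a hybrid grapheme-to-phoneme lookup: consult a phonetic
# pattern dictionary first, then fall back to letter-to-sound rules.
# All entries and rules below are toy placeholders.
pattern_dict = {"hanguk": "h a n g u k"}              # hypothetical exception entries
lts_rules = [("ch", "tɕʰ"), ("ng", "ŋ"), ("a", "a")]  # hypothetical LTS rules

def grapheme_to_phoneme(word: str) -> str:
    if word in pattern_dict:                          # 1) dictionary lookup
        return pattern_dict[word]
    pron = word                                       # 2) rule-based fallback
    for pattern, replacement in lts_rules:
        pron = pron.replace(pattern, replacement)
    return pron

print(grapheme_to_phoneme("hanguk"))   # resolved by the dictionary
print(grapheme_to_phoneme("chang"))    # resolved by the LTS rules
```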