• Title/Abstract/Keywords: phonetic variation


의미의 강조에 의한 운율특징 -음향음성학적 관점에 의한 분석- (The Variation of Prosody by Focus)

  • 김선희
    • 대한음성학회지:말소리 / No. 40 / pp.51-63 / 2000
  • There are sentences in which sentence stress is imposed on a specific word; these are called 'focused sentences'. The purpose of this paper is to investigate the variation of pitch, duration, and amplitude in focused words. It is noted that the pitch of a focused word is higher than that of unfocused words irrespective of the accentual pattern, and that contour tones such as HL or LH are realized longer when they appear in focused words. Not only the noun but also a following particle like '-boda' is higher when these words are in focus. Hence pitch proves to be the most salient prosodic feature of the focused sentence.

신경망을 이용한 고립단어에서의 피치변화곡선 발생기에 관한 연구 (A Study on the Pitch Contour Generator with Neural Network in the Isolated Words)

  • 임운천;곽진구;장석왕
    • 대한음성학회:학술대회논문집 / 대한음성학회 1996년도 2월 학술대회지 / pp.137-155 / 1996
  • The purpose of this paper is to generate pitch contours, which are affected by the phonetic environment and the number of syllables, for Korean isolated words using a neural network. To do this, we analyzed a set of 513 Korean isolated words consisting of 1-4 syllables and extracted the pitch contour and the duration of each phoneme in all the words. The total number of phonemes analyzed is about 3800. We then approximated the pitch contour with a 1st-order polynomial by regression analysis, obtaining the slope, the initial pitch, and the duration of each phoneme. We used these 3 parameters as the target pattern of the neural network and let the network learn the rule of the variation of pitch and duration as affected by the phonetic environment of each phoneme. We used 7 consecutive phoneme strings as an input pattern so that the network could learn the effect of the phonetic environment around the center phoneme. In the learning phase, we used 3545 items (463 words) as target patterns, which contained the phonetic environment of the 3 preceding and following phonemes, and the neural network showed correctness rates of 98.43%, 98.59%, and 97.7% in the estimation of the duration, the slope, and the initial pitch. In the recall phase, we tested the performance of the neural network with 251 items (50 words) that weren't used as learning data and obtained good correctness rates of 97.34%, 95.45%, and 96.3% in the generation of the duration, the slope, and the initial pitch of each phoneme.

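
The regression step described in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's code: it assumes a per-phoneme sequence of F0 samples at a fixed frame shift, and fits the 1st-order polynomial with `numpy.polyfit` to recover the slope, initial pitch, and duration parameters that serve as the network's targets.

```python
import numpy as np

def pitch_parameters(f0, frame_shift=0.01):
    """Approximate one phoneme's pitch contour with a 1st-order polynomial.

    f0: sequence of F0 samples (Hz) for the phoneme, one per frame.
    Returns (slope in Hz/s, initial pitch in Hz, duration in s).
    """
    f0 = np.asarray(f0, dtype=float)
    t = np.arange(len(f0)) * frame_shift       # frame times in seconds
    slope, intercept = np.polyfit(t, f0, 1)    # least-squares line fit
    duration = len(f0) * frame_shift
    return slope, intercept, duration

# Example: a contour rising from 120 Hz to 130 Hz over 5 frames (50 ms).
slope, init, dur = pitch_parameters([120, 122.5, 125, 127.5, 130])
```

For this perfectly linear contour the fit recovers a slope of 250 Hz/s, an initial pitch of 120 Hz, and a duration of 50 ms.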
자동 음성분할 및 레이블링 시스템의 성능향상 (Performance Improvement of Automatic Speech Segmentation and Labeling System)

  • 홍성태;김제우;김형순
    • 대한음성학회지:말소리 / No. 35-36 / pp.175-188 / 1998
  • A database segmented and labeled to the phoneme level plays an important role in phonetic research and speech engineering. However, it usually requires manual segmentation and labeling, which is time-consuming and may also lead to inconsistent results. Automatic segmentation and labeling can be introduced to solve these problems. In this paper, we investigate methods to improve the performance of an automatic segmentation and labeling system, considering the Spectral Variation Function (SVF), modification of the silence model, and use of energy variations in a postprocessing stage. SVF is applied in three ways: (1) addition to the feature parameters, (2) postprocessing of phoneme boundaries, and (3) restricting the Viterbi path so that the resulting phoneme boundaries are located in frames around SVF peaks. In the postprocessing stage, the positions with the greatest energy variation during the transitional period between silence and other phonemes were used to modify boundaries. To evaluate the performance of the system, we used a 452 phonetically balanced word (PBW) database for training phoneme models and a phonetically balanced sentence (PBS) database for testing. According to our experiments, 83.1% (a 6.2% improvement) and 95.8% (a 0.9% improvement) of phoneme boundaries were within 20 ms and 40 ms of the manually segmented boundaries, respectively.

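
A Spectral Variation Function of the kind used above can be sketched as below. The abstract does not give the exact formula, so this sketch assumes one common formulation: one minus the cosine similarity between adjacent feature frames, whose peaks mark frames of rapid spectral change and hence likely phoneme boundaries.

```python
import numpy as np

def spectral_variation(frames):
    """Spectral Variation Function, computed here (as an assumption) as
    one minus the cosine similarity between adjacent feature frames.

    frames: 2-D array-like, one feature vector (e.g. MFCCs) per row.
    Returns an array of length len(frames) - 1; peaks suggest boundaries.
    """
    x = np.asarray(frames, dtype=float)
    a, b = x[:-1], x[1:]                       # adjacent frame pairs
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return 1.0 - cos

# Identical adjacent frames give SVF ~0; orthogonal frames give SVF ~1.
svf = spectral_variation([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```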
ToBI and beyond: Phonetic intonation of Seoul Korean ani in Korean Intonation Corpus (KICo)

  • Ji-eun Kim
    • 말소리와 음성과학 / Vol. 16, No. 1 / pp.1-9 / 2024
  • This study investigated the variation in the intonation of Seoul Korean interjection ani across different meanings ("no" and "really?") and speech levels (Intimate and Polite) using data from Korean Intonation Corpus (KICo). The investigation was conducted in two stages. First, IP-final tones in the dataset were categorized according to the K-ToBI convention (Jun, 2000). While significant relationships were observed between the meaning of ani and its IP-final tones, substantial overlap between groups was notable. Second, the F0 characteristics of the final syllable of ani were analyzed to elucidate the apparent many-to-many relationships between intonation and meaning/speech level. Results indicated that these seemingly overlapping relationships could be significantly distinguished. Overall, this study advocates for a deeper analysis of phonetic intonation beyond ToBI-based categorical labels. By examining the F0 characteristics of the IP-final syllable, previously unclear connections between meaning/speech level and intonation become more comprehensible. Although ToBI remains a valuable tool and framework for studying intonation, it is imperative to explore beyond these categories to grasp the "distinctiveness" of intonation, thereby enriching our understanding of prosody.

한국어의 변이음 규칙과 변이음의 결정 요인들 (Allophonic Rules and Determining Factors of Allophones in Korean)

  • 이호영
    • 대한음성학회지:말소리 / No. 21-24 / pp.144-175 / 1992
  • This paper aims to discuss the determining factors of Korean allophones and to formulate and classify Korean allophonic rules systematically. The relationship between allophones and coarticulation, the most influential factor in allophonic variation, is thoroughly investigated. Other factors -- speech tempo and style, dialect, and social factors such as age, sex, class, etc. -- are also briefly discussed. Allophonic rules are classified into two groups -- 1) those relevant to coarticulation and 2) those irrelevant to coarticulation. Rules of the first group are further classified into four subgroups according to the directionality of the coarticulation. Each allophonic rule formulation is explained and discussed in detail. The allophonic rules formulated and classified in this paper are 1) Devoicing of Voiced Consonants, 2) Devoicing of Vowels, 3) Nasal Approach and Lateral Approach, 4) Uvularization, 5) Palatalization, 6) Voicing of Voiceless Lax Consonants, 7) Frication, 8) Labialization, 9) Nasalization, 10) Release Withholding and Release Masking, 11) Glottalization, 12) Flap Rule, and 13) Vowel Weakening, plus 14) Allophones of /ㅚ, ㅟ, ㅢ/ (which are realized as diphthongs or as monophthongs depending on phonetic context).

Explaining Phonetic Variation of Consonants in Vocalic Context

  • Oh, Eu-Jin
    • 음성과학 / Vol. 8, No. 3 / pp.31-41 / 2001
  • This paper aims to provide preliminary evidence that (at least part of) phonetic phenomena are not simply automatic or arbitrary, but are explained by the functional guidelines of ease of articulation and maintenance of contrasts. The first study shows that languages with more high vowels (e.g., French) allow larger consonantal deviation from the target than languages with fewer high vowels (e.g., English). This is interpreted as achieving economy of articulation to a certain extent, in order to avoid the otherwise extreme articulatory movements that would be required in CV syllables by the strict demand on maintaining vocalic contrasts. The second study shows that the Russian plain bilabial consonant allows a smaller amount of undershoot due to neighboring vowels than does the English bilabial consonant. This is probably due to the stricter demand on maintaining the consonantal contrast, plain vs. palatalized, which exists only in Russian.

표준 중국어의 경계억양에 관한 연구 (Study of Boundary Tone in Mandarin Chinese)

  • 손남호
    • 대한음성학회:학술대회논문집 / 대한음성학회 2003년도 5월 학술대회지 / pp.43-47 / 2003
  • This paper is a phonetic study of the $F_{0}$ range and boundary tones in Mandarin Chinese. Production data from 6 Chinese speakers show declination, pitch resetting, and tonal variation of the boundary tone. In declarative sentences, $F_{0}$ declines gradually over the utterance, but a mid-sentence boundary prevents the $F_{0}$ of the following syllable from declining because of pitch resetting. The $F_{0}$ range of a syllable is expanded before mid- and final-sentence boundaries. In interrogative sentences, $F_{0}$ ascends gradually over the utterance, and a mid-sentence boundary makes the $F_{0}$ of the following syllable rise further. The $F_{0}$ range of the sentence-final syllable is expanded and the $F_{0}$ contour shows a rising curve.

Electromyographic evidence for a gestural-overlap analysis of vowel devoicing in Korean

  • Jun, Sun-A;Beckman, M.;Niimi, Seiji;Tiede, Mark
    • 음성과학 / Vol. 1 / pp.153-200 / 1997
  • In languages such as Japanese, it is very common to observe that short peripheral vowels are completely voiceless when surrounded by voiceless consonants. This phenomenon has also been reported in Montreal French, Shanghai Chinese, Greek, and Korean. Traditionally it has been described as a phonological rule that either categorically deletes the vowel or changes the [+voice] feature of the vowel to [-voice]. This analysis was supported by Sawashima's (1971) and Hirose's (1971) observation that there are two distinct EMG patterns for voiced and devoiced vowels in Japanese. Close examination of the phonetic evidence based on acoustic data, however, shows that these phonological characterizations are not tenable (Jun & Beckman 1993, 1994). In this paper, we examined the vowel devoicing phenomenon in Korean using EMG, fiberscopic, and acoustic recordings of 100 sentences produced by one Korean speaker. The results show that there is variability in the 'degree of devoicing' in both acoustic and EMG signals, and in the patterns of glottal closing and opening across different devoiced tokens. There seems to be no categorical difference between devoiced and voiced tokens, for either EMG activity events or glottal patterns. All of these observations support the notion that vowel devoicing in Korean cannot be described as the result of the application of a phonological rule. Rather, devoicing seems to be a highly variable 'phonetic' process, a more or less subtle variation in the specification of such phonetic metrics as the degree and timing of glottal opening, or of the associated subglottal pressure or intra-oral airflow associated with concurrent tone and stricture specifications. Some of the token-pair comparisons are amenable to an explanation in terms of gestural overlap and undershoot. However, the effect of gestural timing on vocal fold state seems to be a highly nonlinear function of the interaction among specifications for the relative timing of glottal adduction and abduction gestures, the amplitudes of the overlapped gestures, the aerodynamic conditions created by concurrent oral and tonal gestures, and so on. In summary, to understand devoicing, it will be necessary to examine its effects on the phonetic representation of events in many parts of the vocal tract, and at many stages of the speech chain between the motor intent and the acoustic signal that reaches the hearer's ear.

한국어 연속음성 인식을 위한 발음열 자동 생성 (Automatic Generation of Pronunciation Variants for Korean Continuous Speech Recognition)

  • 이경님;전재훈;정민화
    • 한국음향학회지 / Vol. 20, No. 2 / pp.35-43 / 2001
  • Manually generating the pronunciation sequences needed for speech recognition or synthesis requires expert linguistic knowledge of phonological change phenomena as well as considerable time and effort, and consistency is hard to maintain. Moreover, Korean phonological changes apply differently within a single morpheme, at morpheme boundaries joined in compound words, at morpheme boundaries within an eojeol formed from several morphemes, and at the boundaries between eojeols when several eojeols combine into one. To address these problems, this paper proposes a pronunciation generation system that automatically converts text strings into pronunciation sequences based on morphophonological analysis. The system generates all possible pronunciation sequences by applying, in multiple stages, phoneme-alteration rules and allophone rules defined through analysis of phonological changes that occur frequently in Korean. The stability of the system was verified using a representative list of eojeols covering each phonological rule, and the usefulness of the resulting pronunciation dictionary and training pronunciation sequences was evaluated through recognition experiments. A pronunciation dictionary reflecting phonological changes between entries yielded a word recognition rate about 5-6% higher, and using the generated pronunciation sequences for training also improved results.

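
The multi-stage rule application described above can be sketched as follows. The rule set here is hypothetical (romanized stand-ins, not the paper's actual phoneme-alteration and allophone rules); only the mechanism is illustrated: each stage maps every current form to the set of alternatives it licenses, so obligatory rules rewrite a form while optional rules multiply the variants.

```python
def apply_stage(forms, rule):
    """Apply one rule stage to every form, keeping all alternatives."""
    out = set()
    for form in forms:
        out.update(rule(form))
    return out

def nasalization(form):
    # Obligatory rule (hypothetical notation): a stop assimilates to a
    # following nasal, e.g. romanized 'km' -> 'ngm' (국물 -> [궁물]).
    return {form.replace("km", "ngm").replace("kn", "ngn")}

def coda_neutralization(form):
    # Optional rule in this sketch: emit both the base form and the
    # variant with 's' neutralized to 't' before the boundary marker '#'.
    return {form, form.replace("s#", "t#")}

def generate_variants(grapheme_form, stages):
    """Apply the rule stages in order and return all pronunciations."""
    forms = {grapheme_form}
    for stage in stages:
        forms = apply_stage(forms, stage)
    return sorted(forms)

variants = generate_variants("kukmul", [nasalization, coda_neutralization])
variants2 = generate_variants("mas#", [nasalization, coda_neutralization])
```

Here "kukmul" yields the single obligatory variant "kungmul", while the optional coda rule gives "mas#" two variants, "mas#" and "mat#".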