• Title/Summary/Keyword: Articulatory

Search Results: 153

Language Specific Variations of Domain-initial Strengthening and its Implications on the Phonology-Phonetics Interface: with Particular Reference to English and Hamkyeong Korean

  • Kim, Sung-A
    • Speech Sciences
    • /
    • v.11 no.3
    • /
    • pp.7-21
    • /
    • 2004
  • The present study investigates the domain-initial strengthening phenomenon, which refers to the strengthening of articulatory gestures at the initial positions of prosodic domains. More specifically, this paper presents the results of an experimental study of vowels in initial syllables with onset consonants (henceforth, initial-syllable vowels) across various prosodic domains in English and Hamkyeong Korean, a pitch-accent dialect spoken in the northern part of North Korea. The durations of initial-syllable vowels are compared to those of second vowels in real-word tokens for both languages, controlling for both stress and segmental environment. Hamkyeong Korean, like English, turned out to strengthen domain-initial consonants. With regard to vowel durations, no significant prosodic effect was found in English; Hamkyeong Korean, on the other hand, showed significant differences between the durations of initial and non-initial vowels in the higher prosodic domains. The theoretical implications of the findings are as follows: the potentially universal phenomenon of initial strengthening is shown to be subject to language-specific variation in its implementation, and, more importantly, the distinct phonetics-phonology model (Pierrehumbert & Beckman, 1998; Keating, 1990; Cohn, 1993) is better equipped to account for the facts in the present study. (A hedged sketch of the kind of paired duration comparison used here follows this entry.)

  • PDF
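
The duration comparison described in the abstract above is, at its core, a within-token paired comparison of initial- and second-vowel durations across prosodic domains. A minimal sketch under that reading, with invented placeholder numbers and domain labels (none taken from the paper):

```python
# Paired comparison of initial- vs. second-vowel durations (ms) per prosodic
# domain. All values and labels below are illustrative placeholders.
from scipy.stats import ttest_rel

durations = {
    "IP-initial":   ([112, 105, 118, 121, 109], [96, 92, 101, 99, 95]),
    "Word-initial": ([98, 101, 95, 104, 100],   [97, 99, 94, 102, 98]),
}

for domain, (initial_v, second_v) in durations.items():
    t, p = ttest_rel(initial_v, second_v)   # same tokens, paired measurements
    print(f"{domain}: t = {t:.2f}, p = {p:.3f}")
```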

The Influence of L1 on L2 -Perception of Korean Monophthongs by Polish Speakers- (외국어 습득에 모국어가 미치는 영향에 대하여 -폴란드어 화자의 한국어 단순 모음 청취에 대한 연구-)

  • Paradowska, Anna Izabella
    • MALSORI
    • /
    • no.39
    • /
    • pp.73-86
    • /
    • 2000
  • This paper investigates the influence of the mother tongue (Polish) on the perception of a foreign language (Korean): how vowel sounds that are totally unfamiliar to the listeners are perceived, how similar sounds are perceived, and whether perception differs according to the phonetic values of the neighbouring sounds. The degree of the influence of L1 on the vowels of L2 turns out to differ from case to case and depends mostly on the familiarity of the vowel in question and on the articulatory similarities between the vowels in the two languages. The results are as follows: the best perception was observed for Korean /i/ and /a/ (very similar places of articulation in both languages), while the worst was for the Korean vowel /(symbol omitted in source)/, which is very unfamiliar to the Polish subjects. Vowels that are not so different from the L1 sounds were perceived fairly well. Another important result is that Polish listeners seem to be more sensitive to lip rounding than to tongue height. The role of the neighbouring sounds appears to be of considerable importance: depending on the preceding vowel, a sudden drop or rise in perception accuracy was observed.

  • PDF

Design & Implementation of Lipreading System using the Articulatory Controls Analysis of the Korean 5 Vowels (<<한국어 5모음의 조음적 제어 분석을 이용한 자동 독화에 관한 연구>>)

  • Lee, Kyong-Ho;Kum, Jong-Ju;Rhee, Sang-Bum
    • Journal of the Korea Computer Industry Society
    • /
    • v.8 no.4
    • /
    • pp.281-288
    • /
    • 2007
  • In this paper, we define six points of interest around the lips and analyze how the distances between these six points change as speakers pronounce the five Korean vowels; 450 data samples were gathered and analyzed, and the characteristic patterns were extracted. Based on this analysis, a recognition system was constructed and recognition experiments were performed. The system uses a camera connected to a computer to measure the distance vector between the six points of interest. In the experiment, 80 normal subjects were sampled, and the observational error between subjects was corrected using a normalization method; data from 30 subjects were used for analysis and data from 50 subjects for the recognition experiments. Three recognition systems were constructed, and among them the neural network system gave the best recognition rate, 87.44%. (A hedged sketch of such normalized lip-distance features follows this entry.)

  • PDF
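
The feature extraction in the entry above reduces six lip landmarks to a vector of inter-point distances. A minimal sketch of that idea, assuming (x, y) landmarks from the camera and normalizing by an assumed mouth-corner distance (the paper's exact point layout and normalization are not specified here):

```python
# Six (x, y) lip landmarks per video frame -> 15 pairwise distances,
# normalized by an assumed corner-to-corner mouth width so that differing
# face sizes and camera distances remain comparable.
import itertools
import numpy as np

def lip_distance_vector(points):
    """points: sequence of six (x, y) lip landmarks."""
    pts = np.asarray(points, dtype=float)
    dists = np.array([np.linalg.norm(pts[i] - pts[j])
                      for i, j in itertools.combinations(range(6), 2)])
    mouth_width = np.linalg.norm(pts[0] - pts[3])   # assumed corner pair
    return dists / mouth_width

frame = [(10, 50), (25, 40), (40, 38), (55, 50), (40, 62), (25, 60)]  # pixels
print(lip_distance_vector(frame))
```

A classifier (the paper's best result came from a neural network) would then be trained on such vectors, one per frame or per vowel token, to separate the five vowels.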

Use of Acoustic Analysis for Individualised Therapeutic Planning and Assessment of Treatment Effect in Dysarthric Children (조음장애 환아에서 개별화된 치료계획 수립과 효과 판정을 위한 음향음성학적 분석방법의 활용)

  • Kim, Yun-Hee;Yu, Hee;Shin, Seung-Hun;Kim, Hyun-Gi
    • Speech Sciences
    • /
    • v.7 no.2
    • /
    • pp.19-35
    • /
    • 2000
  • Speech evaluation and treatment planning for patients with articulation disorders have traditionally been based on perceptual judgement by speech pathologists. Recently, various computerized speech analysis systems have been developed and are commonly used in clinical settings to obtain objective and quantitative data and specific treatment strategies. Ten dysarthric children (6 with neurogenic and 4 with functional dysarthria) participated in this experiment. Speech evaluation of dysarthria was performed in two ways: first, acoustic analysis with Visi-Pitch and a Computerized Speech Lab, and second, perceptual scoring of phonetic error rates in a 100-word test. The results of the initial evaluation served as the primary guidelines for individualized treatment planning for each patient's speech problems. After a mean treatment period of 5 months, the follow-up data of both dysarthric groups showed increased maximum phonation time, an increased alternate motion rate, and a decreased occurrence of articulatory deviation. The changes in acoustic data and the therapeutic effects were more prominent in children with dysarthria due to neurologic causes than in those with functional dysarthria. Three cases, including their pre- and post-treatment data, are illustrated in detail. (A hedged sketch of an alternate-motion-rate measurement follows this entry.)

  • PDF
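
One of the measures reported above, the alternate motion (DDK) rate, can be approximated from a recording of repeated /pa/ syllables by peak-picking the amplitude envelope. A rough sketch, assuming a hypothetical WAV file and illustrative thresholds (not the study's Visi-Pitch/CSL procedure):

```python
# Estimate DDK (alternate motion) rate: count /pa/ bursts in the RMS envelope
# and divide by the recording duration. Thresholds and the file name are
# illustrative assumptions.
import numpy as np
import soundfile as sf
from scipy.signal import find_peaks

def ddk_rate(wav_path):
    x, sr = sf.read(wav_path)
    if x.ndim > 1:
        x = x.mean(axis=1)                       # mix down to mono
    hop = int(0.010 * sr)                        # 10 ms analysis frames
    env = np.array([np.sqrt(np.mean(x[i:i + hop] ** 2))
                    for i in range(0, len(x) - hop, hop)])
    peaks, _ = find_peaks(env, height=0.3 * env.max(), distance=10)
    return len(peaks) / (len(x) / sr)            # syllables per second

# print(ddk_rate("pa_repetitions.wav"))          # placeholder file name
```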

Phonetic Factors Conditioning the Release of English Sentence-Final Stops (영어 문장 말 폐쇄음의 파열 양상)

  • Kim, Da-Hee
    • MALSORI
    • /
    • no.53
    • /
    • pp.1-16
    • /
    • 2005
  • This experimental study tests the hypothesis that the occurrence of English sentence-final stop release is, at least in part, predictable from its phonetic context. Ten native speakers of American English (5 male and 5 female) recorded, in a sound-proof booth, sentences excerpted from novels and from natural documents on the World Wide Web. Based on the waveforms and spectrograms of the recorded sentences, judgements of the release of each sentence-final stop were made: if the aperiodic energy of a given final stop lasted more than 0.015 seconds, it was considered "released." The results reveal that English sentence-final stops tend to be released when they are 1) velar consonants, 2) preceded by tense vowels, and 3) coda consonants of content words. The phonetic environments in which final stops are often released can be characterized by articulatory comfort and by the need for release burst noise, without which the final stops might not be correctly perceived. By examining the release of English final stops, it is concluded that these phonological events, which had been considered to occur rather "randomly," in fact reflect a universal tendency of human speech: to minimize the speakers' and hearers' effort. (A hedged sketch of the 15 ms release criterion follows this entry.)

  • PDF
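
The release decision in the entry above rests on a single threshold: a final stop counts as released if its aperiodic (burst) energy lasts more than 0.015 s. A trivial sketch of that criterion, with invented time marks standing in for hand-labelled burst onsets and offsets:

```python
# Apply the 15 ms release criterion to hand-labelled burst intervals.
RELEASE_THRESHOLD_S = 0.015

def is_released(burst_onset_s, burst_offset_s):
    """True if the aperiodic burst noise lasts longer than 15 ms."""
    return (burst_offset_s - burst_onset_s) > RELEASE_THRESHOLD_S

print(is_released(1.204, 1.232))   # 28 ms of burst noise -> released
print(is_released(2.310, 2.318))   #  8 ms of burst noise -> unreleased
```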

A perceptual study of the three-way contrast in Korean stops with cross-spliced syllables

  • Kim, Mi-Ryoung
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.343-348
    • /
    • 1996
  • This paper examines the contribution of vocalic information (after the onset of voicing) to the perception of the Korean alveolar stops: the aspirated /tʰ/, the lenis /t/, and the fortis /t*/. These stops have been analyzed as differing in VOT (Abramson & Lisker, 1964), glottal width or aspiration (Kim, 1970), and F0 and intensity build-up (Han & Weitzman, 1970). These studies focused on the articulatory and acoustic qualities of the consonants and often assumed that the consonantal portion before the onset of voicing plays the main role in maintaining the three-way distinction; the role of the following vowels was given less attention. In order to investigate the contribution of the following vowels, a perceptual study was conducted using stimuli cross-spliced from three naturally produced syllables: [tʰal] 'mask', [tal] 'moon', and [t*al] 'daughter'. The stimuli were presented to 12 Korean listeners for identification, and each subject responded to a total of 486 tokens. The results show that the vowels play the primary role when the cut occurs at the start of voicing; even with cuts at 10 ms and 40 ms into voicing, the following vowel still plays a clear role. This suggests that vowels carry important information for distinguishing the three stops. (A hedged sketch of the cross-splicing manipulation follows this entry.)

  • PDF
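
The stimuli in the entry above were built by cross-splicing: the portion of one syllable up to a cut point (the onset of voicing, or 10/40 ms into it) is joined to the remainder of another syllable. A minimal sketch under the simplifying assumption that both tokens share a sample rate and that a single cut time is used for both files (in practice each token's own voicing onset would be marked); file names and times are placeholders:

```python
# Cross-splice two syllable recordings at a cut point relative to voicing onset.
import numpy as np
import soundfile as sf

def cross_splice(src_path, tgt_path, voicing_onset_s, offset_ms=0.0):
    src, sr = sf.read(src_path)    # supplies everything before the cut
    tgt, sr2 = sf.read(tgt_path)   # supplies everything after the cut
    assert sr == sr2, "both recordings must share a sample rate"
    cut = int((voicing_onset_s + offset_ms / 1000.0) * sr)
    return np.concatenate([src[:cut], tgt[cut:]]), sr

# spliced, sr = cross_splice("thal.wav", "ttal.wav", 0.085, offset_ms=10)
# sf.write("spliced.wav", spliced, sr)
```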

The change of vowel characteristics for the dysarthric speech along with speaking style (경도 마비말장애 환자의 발화 유형에 따른 모음 특성 비교)

  • Kim, Jiyoun;Seong, Cheoljae
    • Phonetics and Speech Sciences
    • /
    • v.8 no.3
    • /
    • pp.51-59
    • /
    • 2016
  • The purpose of the present study is to examine differences between habitual speech (HS) and clear speech (CS) in individuals with mild dysarthria. Twelve speakers with mild dysarthria and twelve healthy control speakers read sentences in the two speaking styles. Formant- and intensity-related values, the triangular vowel space area, and the center of gravity of /a/, /i/, and /u/ were measured. In addition, formant-ratio variables such as the vowel space area (VSA), the vowel articulatory index (VAI), the formant centralization ratio (FCR), and the F2i/F1u ratio (F2 ratio) were calculated (the first three measures are sketched after this entry). The results of repeated-measures ANOVA showed a significant difference between groups in the F2 of vowel /i/ and the F2 energy of vowel /a/. Regarding formant energy, the F2 energy of vowel /a/ was a meaningful variable between speaking styles, and there was a significant speaking-style-by-group interaction for the F2 energy of vowel /a/. These findings indicate that the current parameters can meaningfully discriminate the healthy group from the mild dysarthria group and that speakers with dysarthria gain a larger clear-speech benefit than healthy talkers. We also claim that the various acoustic changes of clear speech may contribute to improving vowel intelligibility.
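
For reference, the three formant-ratio measures named in this abstract are commonly computed from the corner vowels /a/, /i/, /u/ as below (Sapir-style VAI/FCR and the triangular vowel space area); the formant values in the example are placeholders, and the paper's exact procedure may differ:

```python
# VSA, VAI and FCR from corner-vowel formants (all frequencies in Hz).
def vowel_metrics(F1a, F2a, F1i, F2i, F1u, F2u):
    # Triangular vowel space area (shoelace formula over /i/, /a/, /u/), Hz^2
    vsa = 0.5 * abs(F1i * (F2a - F2u) + F1a * (F2u - F2i) + F1u * (F2i - F2a))
    # Vowel articulatory index and its reciprocal, the formant centralization ratio
    vai = (F2i + F1a) / (F1i + F1u + F2u + F2a)
    fcr = 1.0 / vai
    return vsa, vai, fcr

print(vowel_metrics(F1a=750, F2a=1300, F1i=300, F2i=2300, F1u=350, F2u=800))
```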

A Study on Combining Bimodal Sensors for Robust Speech Recognition (강인한 음성인식을 위한 이중모드 센서의 결합방식에 관한 연구)

  • 이철우;계영철;고인선
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.6
    • /
    • pp.51-56
    • /
    • 2001
  • Recent research has focused on jointly using lip motions and speech for reliable speech recognition in noisy environments. To this end, this paper proposes a method of combining a visual speech recognizer and a conventional speech recognizer, with each output properly weighted. In particular, we propose a method of autonomously determining the weights depending on the amount of noise in the speech; the correlations between adjacent speech samples and the residual errors of LPC analysis are used for this determination. Simulation results show that a speech recognizer combined in this way achieves a recognition rate of 83% even in severely noisy environments. (A hedged sketch of such weighted late fusion follows this entry.)

  • PDF
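
The fusion described in the entry above combines the two recognizers' outputs with noise-dependent weights. A hedged sketch of that idea, where the logistic mapping from a noise estimate to the audio weight is an invented stand-in for the paper's rule based on adjacent-sample correlations and LPC residual errors:

```python
# Late fusion of audio and visual recognizer scores with a noise-dependent
# audio weight. Scores, words, and the weighting curve are illustrative.
import math

def combine_scores(audio_scores, visual_scores, noise_estimate, k=5.0):
    # As the noise estimate (0..1) grows, trust the audio recognizer less.
    w_audio = 1.0 / (1.0 + math.exp(k * (noise_estimate - 0.5)))
    return {w: w_audio * audio_scores[w] + (1.0 - w_audio) * visual_scores[w]
            for w in audio_scores}

audio  = {"yes": 0.20, "no": 0.70}   # placeholder per-word scores
visual = {"yes": 0.60, "no": 0.30}
combined = combine_scores(audio, visual, noise_estimate=0.8)
print(max(combined, key=combined.get))   # visual evidence dominates in heavy noise
```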

Characteristics of Phoniatrics in Patients with Spastic Dysarthria (경직형 마비말장애의 음성언어의학적 특성)

  • Kim, Sook-Hee;Kim, Hyun-Gi
    • Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.159-170
    • /
    • 2008
  • The purpose of this study was to examine, by means of acoustic analysis, the coordination of the articulatory motor system and the control of respiration and the larynx in spastic dysarthria. The sustained vowel /a/ and repetitions of the syllable /pa/ were measured in 15 normal speakers and 10 speakers with spastic dysarthria; Multi-Speech, MDVP, and MSP were used for data recording and analysis. As a result, the mean DDK rate in the spastic group was significantly slower than in the normal group. The maximum phonation time in the spastic group (4.80 ± 1.94 s) was shorter than in the normal group (11.20 ± 3.72 s). The DDKjit was significantly higher, and the DDKsla reduced, in the spastic group. The mean syllable duration in the spastic group (146.2 ms) was significantly longer than in the normal group (75.8 ms), and the mean energy was reduced in the spastic group. The range of F0 was greater than in the normal group, the frequency perturbation (jitter, vF0) and amplitude perturbation (shimmer, vAm) were higher than in the normal group, and the NHR was higher than in the normal group; these parameters differed significantly between the spastic dysarthria group and the normal group (p < 0.05). In summary, spastic dysarthria involves short respiration, a slow speech rate, and voice quality problems. These results will help to establish treatment plans and interventions. (A hedged sketch of the jitter and shimmer measures follows this entry.)

  • PDF
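
The perturbation measures reported above (jitter, shimmer) have simple textbook "local" forms: the mean absolute difference between consecutive pitch periods (or cycle amplitudes) relative to their mean. A sketch with placeholder values; MDVP's own parameter definitions (vF0, vAm, etc.) differ in detail:

```python
# Local jitter (%) from consecutive pitch periods and local shimmer (%) from
# consecutive cycle peak amplitudes. All values below are placeholders.
def local_perturbation(values):
    diffs = [abs(a - b) for a, b in zip(values[1:], values[:-1])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(values) / len(values))

periods_ms = [7.9, 8.1, 8.0, 8.3, 7.8, 8.2]        # pitch periods of /a/
amplitudes = [0.61, 0.64, 0.60, 0.66, 0.62, 0.63]  # cycle peak amplitudes
print("jitter  (%):", round(local_perturbation(periods_ms), 2))
print("shimmer (%):", round(local_perturbation(amplitudes), 2))
```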

The Vowel System of American English and Its Regional Variation (미국 영어 모음 체계의 몇 가지 지역 방언적 차이)

  • Oh, Eun-Jin
    • Speech Sciences
    • /
    • v.13 no.4
    • /
    • pp.69-87
    • /
    • 2006
  • This study describes the vowel system of present-day American English and discusses some of its phonetic variation due to regional differences. Fifteen speakers of American English from various regions of the United States produced the monophthongs of English; vowel duration and the frequencies of the first and second formants were measured. The results indicate that the distinction between the vowels [ɔ] and [ɑ] has merged in most parts of the U.S., except for some speakers from the eastern and southeastern parts of the country, resulting in a general loss of the phonemic distinction between the two vowels. This phonemic merger can be interpreted as the result of the relatively small functional load of the [ɔ]-[ɑ] contrast and of the back vowel space being smaller than the front vowel space. The study also shows that the F2 frequencies of the high back vowel [u] were extremely high in most of the speakers from the eastern region of the U.S., resulting in an overall reduction of their acoustic space for high vowels. From the viewpoint of the Adaptive Dispersion Theory proposed by Liljencrants & Lindblom (1972) and Lindblom (1986), the high back vowel [u] appears to have been fronted in order to satisfy the economy of articulatory gesture to some extent without blurring the contrast between [i] and [u] in the high vowel region.

  • PDF