• Title/Summary/Keyword: syllable length

Search results: 50

The Korean Word Length Effect on Auditory Word Recognition (청각 단어 재인에서 나타난 한국어 단어길이 효과)

  • Choi Wonil;Nam Kichun
    • Proceedings of the KSPS conference
    • /
    • 2002.11a
    • /
    • pp.137-140
    • /
    • 2002
  • This study examined Korean word length effects on auditory word recognition. Linguistically, word length can be defined by several sublexical units, such as letters, phonemes, and syllables. To investigate which units are used in auditory word recognition, a lexical decision task was used. Experiments 1 and 2 showed that syllable length affected response time and that syllable length interacted with word frequency. Thus, syllable length was an important variable in auditory word recognition.

The Role of Pitch and Length in Spoken Word Recognition: Differences between Seoul and Daegu Dialects (말소리 단어 재인 시 높낮이와 장단의 역할: 서울 방언과 대구 방언의 비교)

  • Lee, Yoon-Hyoung;Pak, Hyen-Sou
    • Phonetics and Speech Sciences
    • /
    • v.1 no.2
    • /
    • pp.85-94
    • /
    • 2009
  • The purpose of this study was to see the effects of pitch and length patterns on spoken word recognition. In Experiment 1, a syllable monitoring task was used to see the effects of pitch and length on the pre-lexical level of spoken word recognition. For both Seoul dialect speakers and Daegu dialect speakers, pitch and length did not affect the syllable detection processes. This result implies that there is little effect of pitch and length in pre-lexical processing. In Experiment 2, a lexical decision task was used to see the effect of pitch and length on the lexical access level of spoken word recognition. In this experiment, word frequency (low and high) as well as pitch and length was manipulated. The results showed that pitch and length information did not play an important role for Seoul dialect speakers, but that it did affect lexical decision processing for Daegu dialect speakers. Pitch and length seem to affect lexical access during the word recognition process of Daegu dialect speakers.

Effects of Korean Syllable Structure on English Pronunciation

  • Lee, Mi-Hyun;Ryu, Hee-Kwan
    • Proceedings of the KSPS conference
    • /
    • 2000.07a
    • /
    • pp.364-364
    • /
    • 2000
  • It has been widely discussed in phonology that the syllable structure of one's mother tongue influences the acquisition of a foreign language, but the topic has rarely been examined experimentally. We therefore investigated the effects of Korean syllable structure on Korean speakers' pronunciation of English words, focusing on consonant strings that are not allowed in Korean. In the experiment, the subjects were divided into 3 groups: native, experienced, and inexperienced speakers. The native group consisted of 1 male English native speaker; the experienced and inexperienced groups were each composed of 3 male Korean speakers, distinguished by their length of residence in an English-speaking country. 41 monosyllabic words were prepared, varying the position (onset vs. coda), type (stops, affricates, fricatives), and number of consonants. The length of each consonant cluster was then measured. To eliminate tempo effects, the measured length was normalized by the length of the word 'say' in the carrier sentence. The consonant-cluster measurement is the relative time period between the initiation of energy (onset / coda) that acoustically represents noise (the consonant portion) and the voicing bar (the vowel portion) in a syllable. Statistical methods were used to estimate the differences among the 3 groups: for each word, an analysis of variance (ANOVA) and post hoc tests were carried out.

Effects of vowel context, stimulus length, and age on nasalance scores (검사어의 모음 환경과 길이 및 연령에 따른 비음치)

  • Shin, Il San;Ha, Seunghee
    • Phonetics and Speech Sciences
    • /
    • v.8 no.3
    • /
    • pp.111-116
    • /
    • 2016
  • The Nasometer is most commonly used to assess the presence and degree of resonance problems in clinical settings, and it provides nasalance scores to identify the acoustic correlates of nasality. Nasalance scores are influenced by factors related to speakers and speech stimuli. This study examines the effects of vowel context, stimulus length, and age on nasalance scores. The participants were 20 adults and 45 children ranging in age from 3 to 5 years. The stimuli consisted of 12 sentences containing no nasal consonants: sentences of 4, 8, 16, and 31 syllables in each of three vowel contexts (low, high, and mixed). Speakers were asked to repeat each stimulus after the examiner. The results indicated significant effects of vowel context and stimulus length on nasalance scores. The nasalance scores for the high vowel contexts were significantly higher than those for the mixed and low vowel contexts, and the scores for the mixed vowel contexts were significantly higher than those for the low vowel contexts. Speakers had higher nasalance scores for 4-syllable and 31-syllable sentences than for 16-syllable sentences. The effect of age on nasalance scores was not significant. The results suggest that the vowel context and length of speech stimuli should be carefully considered when interpreting nasalance scores.

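The nasalance score this abstract refers to is conventionally defined as nasal acoustic energy divided by total (nasal plus oral) energy, times 100. A minimal sketch of that computation with hypothetical microphone signals (not the Nasometer's actual implementation):

```python
import numpy as np

def nasalance_percent(nasal, oral):
    """Conventional nasalance: nasal acoustic energy over total (nasal + oral)
    energy, times 100. Inputs are amplitude arrays from the nasal and oral
    microphone channels (hypothetical signals here)."""
    nasal_rms = np.sqrt(np.mean(np.square(nasal)))
    oral_rms = np.sqrt(np.mean(np.square(oral)))
    return 100.0 * nasal_rms / (nasal_rms + oral_rms)

# Toy check: identical energy in both channels yields a nasalance of 50.
t = np.linspace(0.0, 1.0, 8000)
signal = np.sin(2 * np.pi * 220 * t)
score = nasalance_percent(signal, signal)
```

A channel balance like this is why high-vowel stimuli, which carry relatively more nasal energy, push the score upward.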
Analysis of sequential motion rate in dysarthric speakers using a software (소프트웨어를 이용한 마비말장애 화자의 일련운동속도 분석)

  • Park, Heejune;An, Sinwook;Shin, Bumjoo
    • Phonetics and Speech Sciences
    • /
    • v.10 no.4
    • /
    • pp.173-177
    • /
    • 2018
  • Purpose: The primary goal of this study was to determine whether the articulatory diadochokinesis (sequential motion rate, SMR) data collected using the Motor Speech Disorder Assessment (MSDA) software module can diagnose dysarthria and determine its severity. Methods: Two subject groups were set up: one with spastic dysarthria (n=26) and a control group of speakers (n=30) without neurological disease. The SMR was collected from both groups with MSDA and then analyzed using descriptive statistics. Results: For the parameters of syllable rate, jitter, and mean syllable length (MSL) at the front and back, the control group displayed better results than the dysarthria patients. Conclusions: At the level of articulatory diadochokinesis, the results showed that the MSDA software is generally suitable in clinical practice for quickly recording syllable rate, jitter, and mean syllable length.

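The SMR parameters named in this abstract (syllable rate, jitter, mean syllable length) can be illustrated from a list of syllable onset times. The jitter formula below (mean absolute successive difference over mean duration) is one common variant and is only an assumption about what MSDA computes:

```python
def ddk_measures(onsets):
    """Syllable rate, mean syllable length, and a jitter-like measure from
    syllable onset times (seconds) in a repetition task such as /pataka/.
    The jitter definition here is an assumption, not MSDA's documented one."""
    durs = [b - a for a, b in zip(onsets, onsets[1:])]   # syllable durations
    msl = sum(durs) / len(durs)                          # mean syllable length (s)
    rate = 1.0 / msl                                     # syllables per second
    diffs = [abs(b - a) for a, b in zip(durs, durs[1:])]
    jitter = (sum(diffs) / len(diffs)) / msl             # relative perturbation
    return rate, msl, jitter

# Hypothetical onsets for five syllables of a DDK run.
rate, msl, jitter = ddk_measures([0.0, 0.20, 0.41, 0.60, 0.81])
```

Dysarthric speakers would typically show a lower rate, a longer mean syllable length, and higher jitter than controls.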
Conditional Probability of a 'Choseong', a 'Jungseong', and a 'Jongseong' Between Syllables in Multi-Syllable Korean Words (한국어 다음절 단어의 초성, 중성, 종성단위의 음절간 조건부 확률)

  • 이재홍;이재학
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.28B no.9
    • /
    • pp.692-703
    • /
    • 1991
  • A Korean word is composed of syllables. A Korean syllable can be regarded as a random variable according to its probabilistic properties of occurrence, and it is divided into a 'choseong' (initial), a 'jungseong' (medial), and a 'jongseong' (final), each of which is also regarded as a random variable. The conditional probability of one syllable given another can serve as an index of the occurrence correlation between syllables in Korean words, but since the number of distinct syllables is enormous, we instead use the conditional probabilities of a 'choseong', a 'jungseong', and a 'jongseong' between syllables as that index. The length distribution of Korean words is computed by frequency and by kind. From the cumulative frequencies of Korean syllables computed from multi-syllable Korean words, all probabilities and conditional probabilities are computed for the three random variables. The conditional probabilities of 'choseong'-'choseong', 'jungseong'-'jungseong', 'jongseong'-'jongseong', and 'jongseong'-'choseong' pairs between adjacent syllables in multi-syllable Korean words are computed.

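The decomposition of a precomposed Hangul syllable into choseong, jungseong, and jongseong indices follows fixed Unicode arithmetic, so the conditional probabilities the paper describes can be sketched as follows (a minimal illustration on a toy word list, not the authors' code or corpus):

```python
from collections import Counter

S_BASE, N_JUNG, N_JONG = 0xAC00, 21, 28  # Unicode Hangul syllable constants

def decompose(syl):
    """Split a precomposed Hangul syllable into
    (choseong, jungseong, jongseong) indices."""
    idx = ord(syl) - S_BASE
    return idx // (N_JUNG * N_JONG), (idx % (N_JUNG * N_JONG)) // N_JONG, idx % N_JONG

def choseong_cond_prob(words):
    """P(next choseong | current choseong) over adjacent syllables
    in multi-syllable words."""
    pair, single = Counter(), Counter()
    for w in words:
        cho = [decompose(s)[0] for s in w]
        for a, b in zip(cho, cho[1:]):
            pair[(a, b)] += 1
            single[a] += 1
    return {k: v / single[k[0]] for k, v in pair.items()}

# Toy corpus: '한국', '한글', '한복'. The choseong of '한' is ㅎ (index 18);
# '국' and '글' begin with ㄱ (index 0), '복' with ㅂ (index 7).
probs = choseong_cond_prob(["한국", "한글", "한복"])
```

On this toy list, P(ㄱ | ㅎ) = 2/3 and P(ㅂ | ㅎ) = 1/3; the paper computes the analogous tables from cumulative corpus frequencies.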
Pronunciation Variation Modeling for Korean Point-of-Interest Data Using Prosodic Information (운율 정보를 이용한 한국어 위치 정보 데이타의 발음 모델링)

  • Kim, Sun-He;Park, Jeon-Gue;Na, Min-Soo;Jeon, Je-Hun;Chung, Min-Wha
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.2
    • /
    • pp.104-111
    • /
    • 2007
  • This paper examines how the performance of an automatic speech recognizer for Korean Point-of-Interest (POI) data was improved by modeling pronunciation variation using structural prosodic information such as prosodic words and syllable length. First, multiple pronunciation variants are generated using prosodic words, given that each POI word can be broken down into prosodic words. Cross-prosodic-word variations are then modeled considering the syllable length of each word. A total of 81 experiments were conducted using 9 test sets (3 baseline and 6 proposed) on 9 training sets (3 baseline, 6 proposed). The results show that: (i) performance improved when the pronunciation lexica were generated using prosodic words; (ii) the best performance was achieved when the maximum number of variants was constrained to 3 based on syllable length; and (iii) compared to the baseline word error rate (WER) of 4.63%, a maximum 8.4% WER reduction was achieved when both prosodic words and syllable length were considered.

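Since an absolute drop of 8.4 points from a 4.63% baseline is impossible, the reported 8.4% reduction must be relative. A quick sanity check of what that implies for the absolute WER:

```python
def relative_wer_reduction(baseline_wer, improved_wer):
    """Relative (not absolute) word error rate reduction, in percent."""
    return 100.0 * (baseline_wer - improved_wer) / baseline_wer

# An 8.4% relative reduction from the 4.63% baseline corresponds to an
# absolute WER of about 4.63 * (1 - 0.084) ≈ 4.24%.
improved = 4.63 * (1 - 0.084)
reduction = relative_wer_reduction(4.63, improved)
```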
On the Rising Tone of Intermediate Phrase in Standard Korean (한국어의 중간구 오름조 현상에 대하여)

  • Kwack Dong-gi
    • MALSORI
    • /
    • no.40
    • /
    • pp.13-27
    • /
    • 2000
  • It is generally accepted that a rising tone appears at the end of the intermediate phrase in standard Korean. There has been discussion about whether the syllable carrying the rising tone, even if it is a particle or an ending, is accented. The accented syllable is the most prominent one in a given phonological string; it is determined by nondistinctive stress, which falls on the first or second syllable of a lexical word according to vowel length and syllable weight. Pitch therefore has no close relationship with accent. The intermediate-phrase-final rising tone is thus not associated with accent but is used to convey other pragmatic meanings: i) the speech style is friendlier, ii) the speaker tries to deliver the information so that the hearer can hear it more clearly, and iii) the speaker wants the hearer to keep listening because the utterance is not yet complete.

An Improvement of Korean Speech Recognition Using a Compensation of the Speaking Rate by the Ratio of a Vowel length (모음길이 비율에 따른 발화속도 보상을 이용한 한국어 음성인식 성능향상)

  • 박준배;김태준;최성용;이정현
    • Proceedings of the IEEK Conference
    • /
    • 2003.11b
    • /
    • pp.195-198
    • /
    • 2003
  • The accuracy of an automatic speech recognition system depends on the presence of background noise and on speaker variability such as sex, intonation, and speaking rate. In particular, speaking rate variation, both inter-speaker and intra-speaker, is a serious cause of misrecognition. In this paper, we propose a compensation method for speaking rate based on the ratio of each vowel's length in a phrase. First, the number of feature vectors in a phrase is estimated from the speaking-rate information. Second, the estimated number of feature vectors is assigned to each syllable of the phrase according to the ratio of its vowel length. Finally, feature vector extraction is performed using the number assigned to each syllable. As a result, recognition accuracy was improved by the proposed speaking-rate compensation method.

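The second step the abstract describes (splitting a phrase's estimated frame budget across syllables by vowel-length ratio) can be sketched as follows; the largest-remainder rounding is an assumption, since the paper does not state its rounding rule:

```python
def assign_frames(total_frames, vowel_lengths):
    """Distribute an estimated number of feature vectors across the syllables
    of a phrase in proportion to each syllable's vowel length. Largest-remainder
    rounding keeps the total exact (an assumption about the paper's scheme)."""
    total_len = sum(vowel_lengths)
    shares = [total_frames * l / total_len for l in vowel_lengths]
    frames = [int(s) for s in shares]           # floor of each proportional share
    # Hand out the leftover frames to the largest fractional remainders.
    order = sorted(range(len(shares)), key=lambda i: shares[i] - frames[i], reverse=True)
    for i in order[: total_frames - sum(frames)]:
        frames[i] += 1
    return frames

# Hypothetical vowel lengths in ms for a 3-syllable phrase, 100-frame budget.
frames = assign_frames(100, [120, 80, 200])
```

With these toy lengths, the syllables receive 30, 20, and 50 frames respectively, so a long-voweled (slowly spoken) syllable gets proportionally more feature vectors.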
Early Vocalization and Phonological Developments of Typically Developing Children: A longitudinal study (일반 영유아의 초기 발성과 음운 발달에 관한 종단 연구)

  • Ha, Seunghee;Park, Bora
    • Phonetics and Speech Sciences
    • /
    • v.7 no.2
    • /
    • pp.63-73
    • /
    • 2015
  • This study longitudinally investigated the early vocalization and phonological development of typically developing children. Ten typically developing children participated in the study from 9 to 18 months of age. Spontaneous utterance samples were collected at 9, 12, 15, and 18 months of age and were phonetically transcribed and analyzed. Utterance samples were classified into 5 levels using the Stark Assessment of Early Vocal Development-Revised (SAEVD-R). The analysis focused on level 4 and 5 vocalizations and word productions. The percentage of each vocalization level, vocalization length, syllable structures, and consonant inventory were obtained. The results showed that the percentages of level 4 and 5 vocalizations and words significantly increased with age, and the production of syllable structures containing consonants significantly increased around 12 and 15 months of age. On average, the children produced 4 types of syllable structure and 5.4 consonants at 9 months, and 5 types of syllable structure and 9.8 consonants at 18 months. The phonological development patterns in this study were consistent with those found in children's meaningful utterances in previous studies, supporting the perspective of continuity between babbling and early speech. This study has clinical implications for early identification and speech-language intervention for young children with speech delays or at risk of delay.