• Title/Abstract/Keyword: syllable length

Search results: 50

청각 단어 재인에서 나타난 한국어 단어길이 효과 (The Korean Word Length Effect on Auditory Word Recognition)

  • 최원일;남기춘
    • 대한음성학회:학술대회논문집 / 대한음성학회 2002년도 11월 학술대회지 / pp.137-140 / 2002
  • This study was conducted to examine the Korean word length effect on auditory word recognition. Linguistically, word length can be defined by several sublexical units, such as letters, phonemes, and syllables. In order to investigate which units are used in auditory word recognition, a lexical decision task was used. Experiments 1 and 2 showed that syllable length affected response time and interacted with word frequency. Syllable length is therefore an important variable in auditory word recognition.


말소리 단어 재인 시 높낮이와 장단의 역할: 서울 방언과 대구 방언의 비교 (The Role of Pitch and Length in Spoken Word Recognition: Differences between Seoul and Daegu Dialects)

  • 이윤형;박현수
    • 말소리와 음성과학 / Vol.1 No.2 / pp.85-94 / 2009
  • The purpose of this study was to see the effects of pitch and length patterns on spoken word recognition. In Experiment 1, a syllable monitoring task was used to see the effects of pitch and length on the pre-lexical level of spoken word recognition. For both Seoul dialect speakers and Daegu dialect speakers, pitch and length did not affect the syllable detection processes. This result implies that there is little effect of pitch and length in pre-lexical processing. In Experiment 2, a lexical decision task was used to see the effect of pitch and length on the lexical access level of spoken word recognition. In this experiment, word frequency (low and high) as well as pitch and length was manipulated. The results showed that pitch and length information did not play an important role for Seoul dialect speakers, but that it did affect lexical decision processing for Daegu dialect speakers. Pitch and length seem to affect lexical access during the word recognition process of Daegu dialect speakers.


Effects of Korean Syllable Structure on English Pronunciation

  • Lee, Mi-Hyun;Ryu, Hee-Kwan
    • 대한음성학회:학술대회논문집 / 대한음성학회 2000년도 7월 학술대회지 / pp.364-364 / 2000
  • It has been widely discussed in phonology that the syllable structure of the mother tongue influences one's acquisition of a foreign language. However, the topic has rarely been examined experimentally. We therefore investigated the effects of Korean syllable structure when Korean speakers pronounce English words, focusing in particular on consonant clusters that are not allowed in Korean. In the experiment, the subjects were divided into 3 groups: native, experienced, and inexperienced speakers. The native group consisted of 1 male native English speaker; the experienced and inexperienced groups each consisted of 3 male Korean speakers, divided by length of residence in a country where English is spoken natively. 41 monosyllabic words were prepared, varying the position (onset vs. coda), type (stops, affricates, fricatives), and number of consonants. The length of each consonant cluster was then measured. To eliminate the tempo effect, the measured length was normalized by the length of the word 'say' in the carrier sentence (see the sketch below). The consonant cluster measurement is the relative time interval in a syllable between the initiation of noise energy (the consonant portion, at onset or coda) and the voicing bar (the vowel portion). Statistical methods were used to estimate the differences among the 3 groups: for each word, analysis of variance (ANOVA) and post hoc tests were carried out.
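
The tempo normalization described above reduces to a single ratio. Here is a minimal sketch, assuming durations measured in milliseconds; the function and variable names are illustrative and not taken from the paper.

```python
def normalized_cluster_length(cluster_ms: float, say_ms: float) -> float:
    """Normalize a measured consonant-cluster duration by the duration of the
    carrier word 'say' from the same utterance, removing overall tempo effects."""
    return cluster_ms / say_ms

# Example: a 95 ms cluster in an utterance where 'say' lasted 310 ms
# yields a dimensionless, tempo-independent score of about 0.31.
print(normalized_cluster_length(95.0, 310.0))
```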


검사어의 모음 환경과 길이 및 연령에 따른 비음치 (Effects of vowel context, stimulus length, and age on nasalance scores)

  • 신일산;하승희
    • 말소리와 음성과학 / Vol.8 No.3 / pp.111-116 / 2016
  • The Nasometer is the instrument most commonly used to assess the presence and degree of resonance problems in clinical settings, and it provides nasalance scores as acoustic correlates of nasality. Nasalance scores are influenced by factors related to speakers and to speech stimuli. This study examines the effects of vowel context, stimulus length, and age on nasalance scores. The participants were 20 adults and 45 children aged 3 to 5 years. The stimuli consisted of 12 sentences containing no nasal consonants; for each of the three vowel contexts (low, high, and mixed), the sentences were 4, 8, 16, and 31 syllables long. Speakers were asked to repeat each stimulus after the examiner. The results indicated significant effects of vowel context and stimulus length on nasalance scores. The nasalance scores for the high vowel contexts were significantly higher than those for the mixed and low vowel contexts, and the scores for the mixed vowel contexts were significantly higher than those for the low vowel contexts. Speakers had higher nasalance scores for the 4-syllable and 31-syllable sentences than for the 16-syllable sentences. The effect of age on nasalance scores was not significant. The results suggest that the vowel context and the length of speech stimuli should be carefully considered when interpreting nasalance scores.

소프트웨어를 이용한 마비말장애 화자의 일련운동속도 분석 (Analysis of sequential motion rate in dysarthric speakers using a software)

  • 박희준;안신욱;신범주
    • 말소리와 음성과학 / Vol.10 No.4 / pp.173-177 / 2018
  • Purpose: The primary goal of this study was to determine whether the articulatory diadochokinesis (sequential motion rate, SMR) data collected using the Motor Speech Disorder Assessment (MSDA) software module can be used to diagnose dysarthria and determine its severity. Methods: Two subject groups were set up: speakers with spastic dysarthria (n=26) and a control group of speakers without neurological disease (n=30). The SMR was collected from both groups with MSDA and then analyzed using descriptive statistics. Results: For the parameters of syllable rate, jitter, and mean syllable length (MSL) at the front and back, the control group displayed better results than the dysarthria patients. Conclusions: At the level of articulatory diadochokinesis, the results showed that the MSDA software is generally suitable in clinical practice for quickly recording syllable rate, jitter, and mean syllable length.

한국어 다음절 단어의 초성, 중성, 종성단위의 음절간 조건부 확률 (Conditional Probability of a 'Choseong', a 'Jungseong', and a 'Jongseong' Between Syllables in Multi-Syllable Korean Words)

  • 이재홍;이재학
    • 전자공학회논문지B / Vol.28B No.9 / pp.692-703 / 1991
  • A Korean word is composed of syllables, and a Korean syllable can be regarded as a random variable according to its probabilistic properties of occurrence. A syllable is divided into a 'choseong' (initial), a 'jungseong' (medial), and a 'jongseong' (final), each of which is also regarded as a random variable. The conditional probability between syllables can serve as an index of the occurrence correlation between syllables in Korean words, but since the number of distinct syllables is enormous, we instead use the conditional probabilities of a 'choseong', a 'jungseong', and a 'jongseong' between syllables as that index. The length distribution of Korean words is computed by frequency and by type. From the cumulative syllable frequencies computed from multi-syllable Korean words, all probabilities and conditional probabilities are computed for the three random variables. The conditional probabilities of 'choseong'-'choseong', 'jungseong'-'jungseong', 'jongseong'-'jongseong', and 'jongseong'-'choseong' pairs between adjacent syllables in multi-syllable Korean words are computed (a counting sketch follows below).
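
As a rough illustration of the counting procedure, the sketch below decomposes precomposed Hangul syllables using their standard Unicode layout (base U+AC00, 21 medials, 28 finals) and estimates P(right unit | left unit) over adjacent syllable pairs, weighted by word frequency. The input format and function names are assumptions for illustration, not the paper's implementation.

```python
from collections import Counter

HANGUL_BASE = 0xAC00              # first precomposed Hangul syllable (U+AC00)
NUM_JUNGSEONG, NUM_JONGSEONG = 21, 28

def decompose(syllable: str) -> tuple[int, int, int]:
    """Return (choseong, jungseong, jongseong) indices of one Hangul syllable."""
    offset = ord(syllable) - HANGUL_BASE
    cho, rest = divmod(offset, NUM_JUNGSEONG * NUM_JONGSEONG)
    jung, jong = divmod(rest, NUM_JONGSEONG)
    return cho, jung, jong

def conditional_probability(word_freq: dict[str, int],
                            left_part: int, right_part: int) -> dict:
    """Estimate P(right | left) between adjacent syllables for one pair of units
    (0=choseong, 1=jungseong, 2=jongseong), weighted by word frequency.
    Covers the choseong-choseong, ..., jongseong-choseong cases named above."""
    pair_counts, left_counts = Counter(), Counter()
    for word, freq in word_freq.items():
        for a, b in zip(word, word[1:]):          # adjacent syllable pairs
            left = decompose(a)[left_part]
            right = decompose(b)[right_part]
            pair_counts[(left, right)] += freq
            left_counts[left] += freq
    return {pair: n / left_counts[pair[0]] for pair, n in pair_counts.items()}

# Toy example (hypothetical frequencies): P(choseong of next | choseong of current).
print(conditional_probability({"한국어": 3, "음절": 2}, left_part=0, right_part=0))
```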


운율 정보를 이용한 한국어 위치 정보 데이타의 발음 모델링 (Pronunciation Variation Modeling for Korean Point-of-Interest Data Using Prosodic Information)

  • 김선희;박전규;나민수;전재훈;정민화
    • 한국정보과학회논문지:소프트웨어및응용 / Vol.34 No.2 / pp.104-111 / 2007
  • This paper aims to evaluate the performance of a speech recognizer when pronunciation modeling of Korean point-of-interest (POI) data is performed using two kinds of structural prosodic information: prosodic words and syllable count. First, on the premise that POI entries are composed of prosodic words, we generate all possible pronunciations of the POI data using prosodic words, and then control the number of pronunciation variants based on syllable count (see the sketch below). Recognition performance was evaluated with the proposed method in a total of 81 experiments over 9 test sets and 9 training sets. In every case where the pronunciation dictionary was built using prosodic words, performance improved over the baseline. When the number of pronunciation variants was adjusted by syllable count, the best recognition performance was obtained overall when the limit was set at three syllables, showing that controlling the number of variants by syllable count is effective. Using prosodic words and syllable count as proposed, the WER was reduced by up to 8.4% from the baseline WER of 4.63%.
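
The abstract is not explicit about how the variant count is controlled; one plausible reading is that full variant sets are kept only for prosodic words of at least three syllables. The sketch below illustrates that reading only; the function names and the external variant generator are assumptions, not the authors' implementation.

```python
from typing import Callable

def select_variants(prosodic_words: list[str],
                    variants_of: Callable[[str], list[str]],
                    min_syllables: int = 3) -> list[list[str]]:
    """For each prosodic word, keep all rule-generated pronunciation variants
    only if the word is at least min_syllables long; otherwise keep just the
    canonical (first) pronunciation. Precomposed Hangul syllables map 1:1 to
    characters, so len(word) is the syllable count."""
    selected = []
    for pw in prosodic_words:
        variants = variants_of(pw)
        selected.append(variants if len(pw) >= min_syllables else variants[:1])
    return selected
```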

한국어의 중간구 오름조 현상에 대하여 (On the Rising Tone of Intermediate Phrase in Standard Korean)

  • 곽동기
    • 대한음성학회지:말소리 / No.40 / pp.13-27 / 2000
  • It is generally accepted that a rising tone appears at the end of the intermediate phrase in standard Korean. There have been discussions about whether the syllable carrying this rising tone, even when it is a particle or an ending, is accented or not. The accented syllable is the most prominent one in a given phonological string. It is determined by the non-distinctive stress located on the first or second syllable of a lexical word, according to vowel length and syllable weight, so pitch does not have any close relationship with accent. The intermediate-phrase-final rising tone, therefore, is not associated with accent but is used to convey other pragmatic meanings: i) the speech style is more friendly, ii) the speaker tries to convey the information more clearly to the hearer, and iii) the speaker wants the hearer to keep listening because the utterance is not yet complete.


모음길이 비율에 따른 발화속도 보상을 이용한 한국어 음성인식 성능향상 (An Improvement of Korean Speech Recognition Using a Compensation of the Speaking Rate by the Ratio of a Vowel length)

  • 박준배;김태준;최성용;이정현
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 컴퓨터소사이어티 추계학술대회논문집 / pp.195-198 / 2003
  • The accuracy of an automatic speech recognition system depends on the presence of background noise and on speaker variability such as sex, intonation, and speaking rate. In particular, speaking-rate variation, both between and within speakers, is a serious cause of misrecognition. In this paper, we propose a method of compensating for speaking rate using the ratio of each vowel's length in a phrase. First, the number of feature vectors in the phrase is estimated from the speaking-rate information. Second, the estimated number of feature vectors is assigned to each syllable of the phrase according to the ratio of its vowel length (see the sketch below). Finally, feature vector extraction is carried out with the number assigned to each syllable. As a result, the accuracy of automatic speech recognition was improved by the proposed speaking-rate compensation method.
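
A minimal sketch of the allocation step (the second step above): distribute an estimated total number of feature vectors over the syllables of a phrase in proportion to each syllable's vowel length. The names and the rounding scheme are illustrative assumptions, not the authors' implementation.

```python
def allocate_frames(total_frames: int, vowel_lengths_ms: list[float]) -> list[int]:
    """Assign total_frames to syllables in proportion to their vowel lengths,
    giving the rounding leftovers to the largest fractional shares."""
    total_vowel = sum(vowel_lengths_ms)
    shares = [total_frames * v / total_vowel for v in vowel_lengths_ms]
    frames = [int(s) for s in shares]
    leftovers = total_frames - sum(frames)
    by_fraction = sorted(range(len(shares)),
                         key=lambda i: shares[i] - frames[i], reverse=True)
    for i in by_fraction[:leftovers]:
        frames[i] += 1
    return frames

# Example: 30 frames for a 3-syllable phrase whose vowels last 80, 120, and 100 ms.
print(allocate_frames(30, [80.0, 120.0, 100.0]))   # -> [8, 12, 10]
```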


일반 영유아의 초기 발성과 음운 발달에 관한 종단 연구 (Early Vocalization and Phonological Developments of Typically Developing Children: A longitudinal study)

  • 하승희;박보라
    • 말소리와 음성과학 / Vol.7 No.2 / pp.63-73 / 2015
  • This study longitudinally investigated the early vocalization and phonological development of typically developing children. Ten typically developing children participated in the study from 9 to 18 months of age. Spontaneous utterance samples were collected at 9, 12, 15, and 18 months of age, phonetically transcribed, and analyzed. Utterance samples were classified into 5 levels using the Stark Assessment of Early Vocal Development-Revised (SAEVD-R). The data analysis focused on level 4 and 5 vocalizations and on word productions. The percentage of each vocalization level, vocalization length, syllable structures, and consonant inventory were obtained. The results showed that the percentages of level 4 and 5 vocalizations and of words significantly increased with age, and that the production of syllable structures containing consonants increased significantly around 12 and 15 months of age. On average, the children produced 4 types of syllable structure and 5.4 consonants at 9 months, and 5 types of syllable structure and 9.8 consonants at 18 months. The phonological development patterns in this study were consistent with those analyzed from children's meaningful utterances in previous studies, supporting the view of continuity between babbling and early speech. This study has clinical implications for the early identification of, and speech-language intervention for, young children with speech delays or at risk of them.