• Title/Abstract/Keyword: Syllable Number


한국어 분류에 관한 음향음성학적 연구 (An acoustic study of word-timing with references to Korean)

  • 김대원 / 한국음향학회:학술대회논문집 / 한국음향학회 1994년도 제11회 음성통신 및 신호처리 워크샵 논문집 (SCAS 11권 1호) / pp.323-327 / 1994
  • There have been three contrastive claims over the classification of Korean. To answer the classification question, timing variables that would determine the durations of the syllable, the word and the foot were investigated with various words, either in isolation or in sentence contexts, using Soundcoup/16 on a Macintosh; a total of 284 utterances obtained from six Korean speakers were used. It was found 1) that the durational pattern for words tended to be maintained in utterances, regardless of position, subject and dialect, 2) that syllable duration was determined both by the types of phonemes and by the number of phonemes, word duration both by syllable complexity and by the number of syllables, and foot duration by word complexity, 3) that there was a contrastive relationship between foot length in syllables and foot duration, and 4) that foot duration varied generally with word complexity if the same word did not occur both in the first foot and in the second foot. On this basis, it was concluded that Korean is a word-timed language in which, all else being equal (including tempo, emphasis, etc.), the inherent durational pattern for words tends to be maintained in utterances. The main differences between stress timing, syllable timing and word timing were also discussed.
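
A hedged illustration of the word-timing claim above: if the inherent durational pattern of a word is maintained, the relative share of each syllable in the word's total duration should stay roughly constant between isolated and sentence-context productions. The durations and tolerance below are invented for illustration, not the paper's data.

```python
def durational_pattern(syllable_durations):
    """Normalize syllable durations so they sum to 1, giving the word-internal timing pattern."""
    total = sum(syllable_durations)
    return [d / total for d in syllable_durations]

def pattern_maintained(isolated, in_sentence, tolerance=0.05):
    """True if the relative pattern differs by less than `tolerance` per syllable."""
    p1, p2 = durational_pattern(isolated), durational_pattern(in_sentence)
    return all(abs(a - b) < tolerance for a, b in zip(p1, p2))

# Hypothetical measurements (seconds) for a three-syllable word.
isolated_word = [0.18, 0.22, 0.30]   # word spoken in isolation
sentence_word = [0.15, 0.19, 0.26]   # same word inside a sentence, at a faster tempo

print(durational_pattern(isolated_word))                  # ~[0.26, 0.31, 0.43]
print(pattern_maintained(isolated_word, sentence_word))   # True: pattern preserved despite tempo
```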


인공와우이식 아동의 운율 특성 - 조음속도와 쉼, 지속시간을 중심으로 - (The Prosodic Characteristics of Children with Cochlear Implant with Respect to the Articulation Rate, Pause, and Duration)

  • 오순영;성철재 / 말소리와 음성과학 / Vol. 4, No. 4 / pp.117-127 / 2012
  • This research reports the prosodic characteristics (articulation rate, pause characteristics, and duration) of children with cochlear implants with reference to those of children with normal hearing. The subjects were 8- to 10-year-old children, with equal numbers of each gender (24). The dialogue speech data comprise four types of sentence patterns. Results show that 1) there is a statistically significant difference in articulation rate between the two groups. 2) As for pauses, none were observed in the exclamatory and declarative sentences of the normal-hearing children; while imperative sentences show no statistical difference in the number of pauses between the two groups, interrogative sentences do. 3) Declarative, exclamatory, and interrogative sentences reveal a statistically significant difference between the two groups in the duration of the sentence-final two-syllable word, while imperative sentences show no difference. 4) For the RFP (the duration ratio of the sentence-final syllable to the penultimate syllable), no statistically significant difference between the two groups exists in any sentence type. 5) Lastly, the RWS (the ratio of the sentence-final two-syllable word duration to the whole-sentence duration) shows a statistically significant difference between the two groups in imperative sentences, but not in the remaining types.
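
The two ratios the abstract defines are simple duration quotients; a minimal sketch with hypothetical measurements (in seconds), not values from the study:

```python
def rfp(final_syllable_dur, penultimate_syllable_dur):
    """RFP: duration ratio of the sentence-final syllable to the penultimate syllable."""
    return final_syllable_dur / penultimate_syllable_dur

def rws(final_word_dur, sentence_dur):
    """RWS: ratio of the sentence-final two-syllable word duration to the whole-sentence duration."""
    return final_word_dur / sentence_dur

# Hypothetical measurements for one declarative utterance.
print(round(rfp(0.31, 0.22), 2))   # 1.41: the final syllable is lengthened relative to the penultimate one
print(round(rws(0.53, 2.80), 2))   # 0.19: the final word takes about 19% of the sentence
```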

붉은뺨멧새 Stereotyped song 내 Syllable의 개체내, 개체간 변이 비교 (Intraindividual and Interindividual Variations of Stereotyped Songs in Gray-headed Bunting (Emberiza fucata))

  • Kim, Kil-Won;Park, Shi-Ryong / 한국동물학회지 / Vol. 36, No. 4 / pp.476-486 / 1993
  • Acoustic behaviours of the Gray-headed Bunting (Emberiza fucata) were observed in a population in Kang-Nae, Cheong-won, Chung-Buk. The singing of males was classified into two major types, stereotyped song and squeaky song. The stereotyped songs of eight territorial males were recorded, and intraindividual and individually distinctive features were studied. Individuals produced their songs in distinctive ways in terms of song duration and the number of syllables. Gray-headed Buntings sang various syllable types. We found that a male produced more constant syllables in the anterior group than in the posterior group, and that males sang syllables that were distinctive among individuals, with some syllable types appearing frequently in the anterior group. From these analyses, we suggest that the anterior groups in the songs of the Gray-headed Bunting convey constant information, while the posterior groups contribute to situational communication.
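
One hedged way to quantify the intra- versus inter-individual variation described above (not necessarily the statistics the authors used) is to compare the spread of syllable counts within each male's songs against the spread of the per-male means:

```python
from statistics import mean, pstdev

# Hypothetical syllable counts per stereotyped song for three territorial males.
songs = {
    "male_1": [7, 7, 8, 7],
    "male_2": [10, 9, 10, 10],
    "male_3": [6, 6, 7, 6],
}

# Intra-individual variation: average standard deviation within each male's own songs.
intra = mean(pstdev(counts) for counts in songs.values())

# Inter-individual variation: standard deviation of the per-male mean syllable counts.
inter = pstdev(mean(counts) for counts in songs.values())

print(f"intra-individual sd = {intra:.2f}, inter-individual sd = {inter:.2f}")
# A much larger inter-individual spread is what makes the songs individually distinctive.
```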


Effects of Korean Syllable Structure on English Pronunciation

  • Lee, Mi-Hyun;Ryu, Hee-Kwan / 대한음성학회:학술대회논문집 / 대한음성학회 2000년도 7월 학술대회지 / pp.364-364 / 2000
  • It has been widely discussed in phonology that the syllable structure of the mother tongue influences one's acquisition of a foreign language. However, the topic has hardly been examined experimentally. We therefore investigated the effects of Korean syllable structure when Korean speakers pronounce English words, focusing especially on consonant strings that are not allowed in Korean. In the experiment, the subjects are divided into three groups: native, experienced, and inexperienced speakers. The native group consists of one male native English speaker; the experienced and inexperienced groups are each composed of three male Korean speakers, divided by their length of residence in a country where English is the native language. Forty-one monosyllabic words are prepared, considering the position (onset vs. coda), characteristic (stops, affricates, fricatives), and number of consonants. The length of the consonant cluster is then measured. To eliminate the tempo effect, the measured length is normalized using the length of the word 'say' in the carrier sentence. The measurement of the consonant cluster is the relative time period between the initiation of energy (onset/coda), which is acoustically representative of noise (the consonant portion), and the voicing bar (the vowel portion) in a syllable. Statistical methods are used to estimate the differences among the three groups: for each word, analysis of variance (ANOVA) and post hoc tests are carried out.
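
The tempo normalization described above reduces to a single ratio per token: the measured cluster length divided by the same speaker's duration of 'say' in the carrier sentence. A minimal sketch with hypothetical measurements:

```python
def normalized_cluster_length(cluster_dur_ms, say_dur_ms):
    """Normalize a consonant-cluster duration by the duration of 'say' in the
    carrier sentence, factoring out each speaker's overall tempo."""
    return cluster_dur_ms / say_dur_ms

# Hypothetical measurements (ms) for the onset cluster of one test word.
native        = normalized_cluster_length(cluster_dur_ms=95,  say_dur_ms=240)
inexperienced = normalized_cluster_length(cluster_dur_ms=150, say_dur_ms=250)

print(f"native = {native:.2f}, inexperienced = {inexperienced:.2f}")
# 0.40 vs. 0.60: the inexperienced speaker's cluster is relatively longer once tempo is removed.
```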


한국어 시각단어재인에서 나타나는 이웃효과 (The Neighborhood Effect in Korean Visual Word Recognition)

  • 권유안;조혜숙;김충명;남기춘 / 대한음성학회지:말소리 / No. 60 / pp.29-45 / 2006
  • We investigated whether the first syllable plays an important role in lexical access in Korean visual word recognition. To do so, one lexical decision task (LDT) and two form-primed LDT experiments examined the nature of the syllabic neighborhood effect. In Experiment 1, the syllabic neighborhood density and the syllabic neighborhood frequency were manipulated. The results showed that lexical decision latencies were influenced only by the syllabic neighborhood frequency. The purpose of Experiment 2 was to confirm the results of Experiment 1 with a form-primed LDT task. Lexical decision latencies were longer in the form-related condition than in the form-unrelated condition, and the effect of syllabic neighborhood density was significant only in the form-related condition. This means that the first syllable plays an important role in the sub-lexical process. In Experiment 3, we conducted another form-primed LDT task, manipulating the number of syllabic neighbors in words with higher-frequency neighbors. The interaction of syllabic neighborhood density and form relation was significant. This result confirmed that, at the lexical level, words with higher-frequency neighbors are more inhibited by neighbors sharing the first syllable than words without higher-frequency neighbors. These findings suggest that the first syllable is the unit of the neighborhood and that the unit of sub-lexical representation in Korean is the syllable.
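
A minimal sketch of the manipulated variables: here a word's syllabic neighbors are taken to be the other words that share its first syllable, so neighborhood density is a count over the lexicon and "higher-frequency neighborhood" asks whether any neighbor is more frequent than the word itself. The toy lexicon and frequencies are invented for illustration.

```python
# Toy lexicon: word -> frequency (invented values, per million words).
lexicon = {
    "가방": 120, "가을": 300, "가슴": 250, "가족": 500,
    "나무": 400, "나비": 90,
}

def first_syllable(word):
    return word[0]  # in Hangul, one character is one syllable block

def syllabic_neighbors(word, lexicon):
    """Other words in the lexicon sharing the first syllable."""
    return [w for w in lexicon if w != word and first_syllable(w) == first_syllable(word)]

def has_higher_frequency_neighbor(word, lexicon):
    return any(lexicon[n] > lexicon[word] for n in syllabic_neighbors(word, lexicon))

print(syllabic_neighbors("가방", lexicon))             # ['가을', '가슴', '가족'] -> density 3
print(has_higher_frequency_neighbor("가방", lexicon))  # True: e.g. 가족 (500) > 가방 (120)
```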


한글 낱말의 처리 단위 (The Processing Unit in Korean Words)

  • 이준석;김경린 / 인지과학 / Vol. 1, No. 2 / pp.221-239 / 1989
  • Three experiments were conducted to examine the processing unit of Korean (Hangul) words. The preliminary experiment and Experiment 1 dealt with one-syllable words, and Experiment 2 with words of two or more syllables. In the preliminary experiment, the consonant-type effect was not statistically significant, but the position effect within words was. Newman-Keuls tests showed that the difference between the onset condition and the nucleus condition was not significant, whereas the difference between the nucleus condition and the coda condition was. In Experiment 1, reaction time increased as the number of letters increased, and the position effect was the same as in the preliminary experiment. In Experiment 2, reaction time increased as the number of syllables increased, regardless of the presence of a coda. The implications of this study are as follows: (1) in one-syllable words, information is processed in units of syllables composed only of an onset and a nucleus, whereas (2) in words of two or more syllables, information is processed in units of syllables that include the coda.
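
For reference, the onset/nucleus/coda conditions above correspond to the three components of a Hangul syllable block, which can be recovered with standard Unicode arithmetic (precomposed syllables start at U+AC00 and are ordered as onset × 588 + nucleus × 28 + coda):

```python
ONSETS = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
NUCLEI = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
CODAS  = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def decompose(syllable):
    """Split one Hangul syllable block into (onset, nucleus, coda)."""
    index = ord(syllable) - 0xAC00           # syllable blocks form a regular grid from U+AC00
    onset, rest = divmod(index, 21 * 28)
    nucleus, coda = divmod(rest, 28)
    return ONSETS[onset], NUCLEI[nucleus], CODAS[coda]

print(decompose("한"))   # ('ㅎ', 'ㅏ', 'ㄴ') - a syllable with a coda
print(decompose("가"))   # ('ㄱ', 'ㅏ', '')   - an open syllable, onset and nucleus only
```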

모음길이 비율에 따른 발화속도 보상을 이용한 한국어 음성인식 성능향상 (An Improvement of Korean Speech Recognition Using a Compensation of the Speaking Rate by the Ratio of a Vowel length)

  • 박준배;김태준;최성용;이정현 / 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 컴퓨터소사이어티 추계학술대회논문집 / pp.195-198 / 2003
  • The accuracy of an automatic speech recognition system depends on the presence of background noise and on speaker variability such as sex, intonation, and speaking rate. In particular, both inter-speaker and intra-speaker variation in speaking rate is a serious cause of misrecognition. In this paper, we propose a method of compensating for the speaking rate using the ratio of each vowel's length in a phrase. First, the number of feature vectors in the phrase is estimated from the speaking-rate information. Second, the estimated feature vectors are assigned to each syllable of the phrase according to the ratio of its vowel length. Finally, feature vector extraction is performed with the number assigned to each syllable in the phrase. As a result, the accuracy of automatic speech recognition was improved using the proposed speaking-rate compensation method.
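
A hedged sketch of the allocation step described above: a total number of feature vectors estimated for the phrase is split across its syllables in proportion to each syllable's vowel length (the durations and totals below are invented; the paper's estimation of the total from the speaking rate is not reproduced here):

```python
def allocate_feature_vectors(total_vectors, vowel_lengths_ms):
    """Distribute `total_vectors` over the syllables of a phrase in proportion to
    each syllable's vowel length, rounding while preserving the total."""
    total_vowel = sum(vowel_lengths_ms)
    raw = [total_vectors * v / total_vowel for v in vowel_lengths_ms]
    counts = [int(r) for r in raw]
    # Hand the leftover vectors to the syllables with the largest fractional parts.
    leftovers = total_vectors - sum(counts)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - counts[i], reverse=True)
    for i in order[:leftovers]:
        counts[i] += 1
    return counts

# Hypothetical phrase of four syllables with measured vowel lengths (ms).
print(allocate_feature_vectors(30, [80, 120, 60, 140]))   # [6, 9, 5, 10] - sums to 30
```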


소프트컴퓨팅 기법을 이용한 다음절 단어의 음성인식 (Speech Recognition of Multi-Syllable Words Using Soft Computing Techniques)

  • 이종수;윤지원 / 정보저장시스템학회논문집 / Vol. 6, No. 1 / pp.18-24 / 2010
  • The performance of speech recognition depends mainly on uncertain factors such as the speaker's condition and environmental effects. The present study deals with the speech recognition of a number of multi-syllable isolated Korean words using soft computing techniques such as the back-propagation neural network, the fuzzy inference system, and the fuzzy neural network. Feature patterns for speech recognition are analyzed as thirty normalized frames of 12th-order coefficients derived from linear predictive coding and cepstrums. Using four speech recognizer models, experiments are conducted for both single speakers and multiple speakers. Through this study, the recognizers combining fuzzy logic with the back-propagation neural network, and the fuzzy neural network, show better performance in speech recognition.
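
The fixed-size input described above (thirty frames of 12th-order coefficients) implies some time normalization of variable-length utterances; one common way to obtain it, sketched here with NumPy (the paper's exact procedure may differ), is to resample each utterance's frame sequence to exactly thirty frames:

```python
import numpy as np

def normalize_frame_count(features, target_frames=30):
    """Linearly resample a (num_frames, 12) feature matrix to (target_frames, 12),
    so every utterance yields the same fixed-size pattern for the recognizer."""
    num_frames, order = features.shape
    src = np.linspace(0.0, 1.0, num_frames)
    dst = np.linspace(0.0, 1.0, target_frames)
    return np.stack([np.interp(dst, src, features[:, k]) for k in range(order)], axis=1)

# A hypothetical utterance: 47 frames of 12th-order cepstral coefficients.
utterance = np.random.randn(47, 12)
print(normalize_frame_count(utterance).shape)   # (30, 12) regardless of the original length
```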

현대일본어의 회화문에 나타난 축약형의 음운론적 분석 (Analysis of Phonological Reduction in Conversational Japanese)

  • 최영숙;좌등자;박희태 / 대한음성학회:학술대회논문집 / 대한음성학회 1996년도 10월 학술대회지 / pp.198-206 / 1996
  • Using eighteen text materials from various genres of present-day Japanese, we collected phonologically reduced forms frequently observed in conversational Japanese and classified them in search of a unified explanation of phonological reduction phenomena. We found 7,516 cases of reduced forms, which we divided into 43 categories according to the types of phonological changes they have undergone. The general tendency is that deletion and fusion of a phoneme or an entire syllable take place frequently, resulting in a decrease in the number of syllables. Typical examples frequently observed throughout the materials are ~/noda/ → ~/nda/, ~/teiru/ → ~/teru/, ~/dewa/ → ~/zja/, and ~/tesimau/ → ~/cjau/. From a morphosyntactic point of view, phonological reduction often occurs at NP and VP morpheme boundaries. The following findings are drawn from phonological observations of reduction. (1) Vowels are more easily deleted than consonants. (2) Bilabials (/m/, /b/, and /w/) are the most likely candidates for deletion. (3) In a concatenation of vowels, close vowels are absorbed into open vowels, or two adjacent vowels come to create another vowel, in which case reconstruction of the original sequence is not always predictable. (4) Alveolars are palatalized under the influence of front vowels. (5) Regressive assimilation takes place in a syllable starting with /r/, changing the entire syllable into a choked sound or a syllabic nasal, depending on the voicing of the following phoneme.
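
A toy illustration of the classification the study reports: a handful of the quoted reduction patterns expressed as string rewrites over romanized forms (only the four examples above, not the 43 categories):

```python
# Reduction patterns quoted in the abstract, as romanized rewrites.
REDUCTION_RULES = [
    ("noda", "nda"),        # vowel deletion
    ("teiru", "teru"),      # vowel deletion
    ("dewa", "zja"),        # fusion of two syllables
    ("tesimau", "cjau"),    # fusion with loss of an entire syllable
]

def apply_reductions(utterance):
    """Apply each reduction rewrite wherever its full form occurs."""
    for full, reduced in REDUCTION_RULES:
        utterance = utterance.replace(full, reduced)
    return utterance

print(apply_reductions("tabeteiru noda"))   # 'tabeteru nda' - both rewrites apply, shortening the phrase
```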


가상현실을 위한 합성얼굴 동영상과 합성음성의 동기구현 (Synchronization of Synthetic Facial Image Sequences and Synthetic Speech for Virtual Reality)

  • 최장석;이기영 / 전자공학회논문지S / Vol. 35S, No. 7 / pp.95-102 / 1998
  • This paper proposes a synchronization method for synthetic facial image sequences and synthetic speech. The LP-PSOLA synthesizes the speech for each demi-syllable; we provide 3,040 demi-syllables for unlimited synthesis of Korean speech. For synthesis of the facial image sequences, the paper defines a total of 11 fundamental patterns for the lip shapes of the Korean consonants and vowels. The fundamental lip shapes allow us to pronounce all Korean sentences. The image synthesis method assigns the fundamental lip shapes to key frames according to the initial, medial and final sound of each syllable in the Korean input text, and interpolates the naturally changing lip shapes in the in-between frames. The number of in-between frames is estimated from the duration of each syllable of the synthetic speech, and this estimation accomplishes the synchronization of the facial image sequences and the speech. In speech synthesis, disk memory is required to store the 3,040 demi-syllables; in synthesis of the facial image sequences, however, disk memory is required to store only one image, because all frames are synthesized from the neutral face. The above method realizes a synchronized system that can read Korean sentences with synthetic speech and synthetic facial image sequences.
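
The synchronization step above comes down to simple arithmetic: the number of in-between frames interpolated for a syllable follows from that syllable's synthetic-speech duration and the video frame rate. A minimal sketch (the frame rate and durations are hypothetical):

```python
def inbetween_frames(syllable_duration_s, frame_rate=30):
    """Frames available for interpolating between two key lip shapes,
    excluding the key frame that starts the syllable."""
    return max(int(round(syllable_duration_s * frame_rate)) - 1, 0)

# Hypothetical syllable durations (seconds) taken from the synthetic speech.
for syllable, dur in [("안", 0.21), ("녕", 0.18), ("하", 0.14)]:
    print(syllable, inbetween_frames(dur))
# 안 5, 녕 4, 하 3 - longer syllables get more interpolated lip-shape frames
```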
