• Title/Summary/Keyword: Syllable Number

An acoustic study of word-timing with references to Korean (한국어 분류에 관한 음향음성학적 연구)

  • 김대원
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1994.06c
    • /
    • pp.323-327
    • /
    • 1994
  • There have been three contrastive claims over the classification of Korean. To answer the classification question, the timing variables that determine the durations of syllables, words and feet were investigated with various words, either in isolation or in sentence contexts, using SoundScope/16 on a Macintosh PC; a total of 284 utterances, obtained from six Korean speakers, were used. It was found 1) that the durational pattern for words tended to be maintained in utterances, regardless of position, subjects and dialects, 2) that syllable duration was determined both by the types of phonemes and by the number of phonemes, word duration both by syllable complexity and by the number of syllables, and foot duration by word complexity, 3) that there was a contrastive relationship between foot length in syllables and foot duration, and 4) that foot duration varied generally with word complexity if the same word did not occur both in the first foot and in the second foot. On this basis, it was concluded that Korean is a word-timed language where, all else being equal (including tempo, emphasis, etc.), the inherent durational pattern for words tends to be maintained in utterances. The main differences between stress timing, syllable timing and word timing were also discussed.

The Prosodic Characteristics of Children with Cochlear Implant with Respect to the Articulation Rate, Pause, and Duration (인공와우이식 아동의 운율 특성 - 조음속도와 쉼, 지속시간을 중심으로 -)

  • Oh, Soonyoung;Seong, Cheoljae
    • Phonetics and Speech Sciences
    • /
    • v.4 no.4
    • /
    • pp.117-127
    • /
    • 2012
  • This research reports the prosodic characteristics (articulation rate, pause characteristics, and duration) of children with cochlear implants with reference to those of children with normal hearing. Subjects were twenty-four 8- to 10-year-old children, balanced for gender. Dialogue speech data comprised four types of sentence patterns. Results show that 1) there is a statistically meaningful difference in articulation rate between the two groups. 2) Regarding pauses, none were observed in exclamatory and declarative sentences in normal children; while imperative sentences show no statistical difference in the number of pauses between the two groups, interrogative sentences do. 3) Declarative, exclamatory, and interrogative sentences reveal a statistical difference between the two groups in the duration of the sentence-final two-syllable word, with no difference in imperative sentences. 4) For the RFP (duration ratio of the sentence-final syllable to the penultimate syllable), no statistically meaningful difference between the two groups exists in any sentence type. 5) Lastly, the RWS (ratio of the sentence-final two-syllable word duration to the whole-sentence duration) shows a statistical difference between the two groups in imperative sentences, but not in the remaining types.
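The two ratio measures named in the abstract, RFP and RWS, are simple duration quotients. A minimal sketch, with hypothetical durations in seconds (the function names and inputs are illustrative, not from the paper):

```python
def rfp(final_syllable_dur, penult_syllable_dur):
    """RFP: duration ratio of the sentence-final syllable
    to the penultimate syllable."""
    return final_syllable_dur / penult_syllable_dur

def rws(final_word_dur, sentence_dur):
    """RWS: ratio of the sentence-final two-syllable word's
    duration to the whole-sentence duration."""
    return final_word_dur / sentence_dur

# Hypothetical measurements: 0.30 s final syllable vs. 0.24 s penultimate
print(rfp(0.30, 0.24))  # ratio > 1 means sentence-final lengthening
```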

Intraindividual and Interindividual Variations of Stereotyped Songs in Gray-headed Bunting (Emberiza fucata) (붉은뺨멧새 Stereotyped song 내 Syllable의 개체내, 개체간 변이 비교)

  • Kim, Kil-Won;Park, Shi-Rvons
    • The Korean Journal of Zoology
    • /
    • v.36 no.4
    • /
    • pp.476-486
    • /
    • 1993
  • Acoustic behaviours of the Gray-headed Bunting (Emberiza fucata) were observed in a population in Kang-Nae, Cheong-won, Chung-Buk. The singing of males was classified into two major types, stereotyped song and squeaky song. The stereotyped songs of eight territorial males were recorded, and intraindividual and individually distinctive features were studied. Individuals produced their songs in distinctive ways in terms of song duration and the number of syllables. Gray-headed Buntings sang various syllable types. We found that a male produced more constant syllables in the anterior group than in the posterior group. Males sang syllables distinctive among them, and some syllable types appeared frequently in the anterior group. From these analyses, we suggest that the anterior groups in the songs of a Gray-headed Bunting express constant information, while the posterior groups contribute to situational communication.

Effects of Korean Syllable Structure on English Pronunciation

  • Lee, Mi-Hyun;Ryu, Hee-Kwan
    • Proceedings of the KSPS conference
    • /
    • 2000.07a
    • /
    • pp.364-364
    • /
    • 2000
  • It has been widely discussed in phonology that the syllable structure of the mother tongue influences one's acquisition of a foreign language. However, the topic has hardly been examined experimentally. So, we investigated the effects of Korean syllable structure when Korean speakers pronounce English words, focusing especially on consonant strings that are not allowed in Korean. In the experiment, the subjects are divided into three groups: native, experienced, and inexperienced speakers. The native group consists of one male English native speaker; the experienced and inexperienced groups are each composed of three male Korean speakers, divided by the length of residence in a country where English is spoken natively. Forty-one monosyllabic words are prepared considering the position (onset vs. coda), characteristics (stops, affricates, fricatives), and number of consonants. Then, the length of the consonant cluster is measured. To eliminate the tempo effect, the measured length is normalized using the length of the word 'say' in the carrier sentence. The measurement of the consonant cluster is the relative time period between the initiation of energy (onset / coda), which is acoustically representative of noise (consonant portion), and the voicing bar (vowel portion) in a syllable. Statistical methods are used to estimate the differences among the three groups. For each word, analysis of variance (ANOVA) and post hoc tests are carried out.
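The tempo normalization described above is a plain division of the measured cluster length by the duration of 'say' in the same carrier sentence. A minimal sketch, with hypothetical durations in seconds:

```python
def normalized_cluster_duration(cluster_dur, say_dur):
    """Normalize a measured consonant-cluster duration by the duration
    of the word 'say' in the carrier sentence, so that overall speaking
    tempo cancels out when comparing speakers."""
    return cluster_dur / say_dur

# Hypothetical: a 0.12 s cluster in a sentence where 'say' took 0.30 s
print(normalized_cluster_duration(0.12, 0.30))
```

A faster speaker shortens both the cluster and 'say', so the ratio stays comparable across speakers.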

The Neighborhood Effect in Korean Visual Word Recognition (한국어 시각단어재인에서 나타나는 이웃효과)

  • Kwon, You-An;Cho, Hyae-Suk;Kim, Choong-Myung;Nam, Ki-Chun
    • MALSORI
    • /
    • no.60
    • /
    • pp.29-45
    • /
    • 2006
  • We investigated whether the first syllable plays an important role in lexical access in Korean visual word recognition. To do so, one lexical decision task (LDT) and two form-primed LDT experiments examined the nature of the syllabic neighborhood effect. In Experiment 1, the syllabic neighborhood density and the syllabic neighborhood frequency were manipulated. The results showed that lexical decision latencies were influenced only by the syllabic neighborhood frequency. The purpose of Experiment 2 was to confirm the results of Experiment 1 with a form-primed LDT task. The lexical decision latency was slower in the form-related condition than in the form-unrelated condition. The effect of syllabic neighborhood density was significant only in the form-related condition. This means that the first syllable plays an important role in the sub-lexical process. In Experiment 3, we conducted another form-primed LDT task manipulating the number of syllabic neighbors in words with a higher-frequency neighborhood. The interaction of syllabic neighborhood density and form relation was significant. This result confirmed that words with a higher-frequency neighborhood are more inhibited at the lexical level by neighbors sharing the first syllable than words with no higher-frequency neighborhood. These findings suggest that the first syllable is the unit of neighborhood and that the unit of sub-lexical representation is the syllable in Korean.

The Processing Unit in Korean Words (한글 낱말의 처리 단위)

  • 이준석;김경린
    • Korean Journal of Cognitive Science
    • /
    • v.1 no.2
    • /
    • pp.221-239
    • /
    • 1989
  • The purpose of this study was to explore the processing unit in Korean words. Three experiments were conducted to examine this question. A preliminary experiment and Experiment 1 were executed to delineate the processing unit in single-syllable words, and Experiment 2 for words of two or more syllables. The major finding of the preliminary experiment was that the effect of consonant type was not significant but that of letter position was: reaction time increased as the position of the letter increased. The difference in reaction time between the first and the second position was not significant; however, the difference between the second and third was. In Experiment 1, the effect of the number of letters was significant: reaction time increased as the number of letters increased. The size of the position effect in the preliminary experiment and in Experiment 1 was comparable. The result of Experiment 2 was that, regardless of the presence of the final consonant(s), reaction time increased linearly as the number of syllables increased from two to four. The findings of the present study suggest that: (1) the processing unit in single-syllable Korean words is a syllable without the final consonant(s); (2) but in words of two or more syllables, the unit is likely to be a syllable with the final consonant(s).

An Improvement of Korean Speech Recognition Using a Compensation of the Speaking Rate by the Ratio of a Vowel length (모음길이 비율에 따른 발화속도 보상을 이용한 한국어 음성인식 성능향상)

  • 박준배;김태준;최성용;이정현
    • Proceedings of the IEEK Conference
    • /
    • 2003.11b
    • /
    • pp.195-198
    • /
    • 2003
  • The accuracy of an automatic speech recognition system depends on the presence of background noise and on speaker variability such as sex, intonation, and speaking rate. In particular, the speaking rate, both inter-speaker and intra-speaker, is a serious cause of misrecognition. In this paper, we propose a compensation method for the speaking rate based on the ratio of each vowel's length in a phrase. First, the number of feature vectors in a phrase is estimated from the speaking-rate information. Second, the estimated number of feature vectors is assigned to each syllable of the phrase according to the ratio of its vowel length. Finally, feature vector extraction is performed with the number assigned to each syllable in the phrase. As a result, the accuracy of automatic speech recognition was improved using the proposed compensation method.
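The second step above, assigning the estimated feature-vector count to syllables in proportion to vowel length, can be sketched as a proportional allocation. This is only an illustration of the idea; the rounding scheme and function name are assumptions, not taken from the paper:

```python
def allocate_frames(total_frames, vowel_lengths):
    """Distribute an estimated number of feature vectors over the
    syllables of a phrase, proportionally to each syllable's vowel
    length. Leftover frames from truncation go to the syllables with
    the largest rounding remainders."""
    total = sum(vowel_lengths)
    raw = [total_frames * v / total for v in vowel_lengths]
    frames = [int(r) for r in raw]  # truncate toward zero
    leftover = total_frames - sum(frames)
    # hand out remaining frames by descending fractional remainder
    order = sorted(range(len(raw)), key=lambda i: raw[i] - frames[i],
                   reverse=True)
    for i in order[:leftover]:
        frames[i] += 1
    return frames

# Hypothetical phrase: three syllables with vowel lengths 1.0, 1.0, 2.0
print(allocate_frames(10, [1.0, 1.0, 2.0]))
```

The allocation always sums to the estimated total, so the overall speaking rate is preserved while longer vowels receive more frames.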

Speech Recognition of Multi-Syllable Words Using Soft Computing Techniques (소프트컴퓨팅 기법을 이용한 다음절 단어의 음성인식)

  • Lee, Jong-Soo;Yoon, Ji-Won
    • Transactions of the Society of Information Storage Systems
    • /
    • v.6 no.1
    • /
    • pp.18-24
    • /
    • 2010
  • The performance of speech recognition mainly depends on uncertain factors such as the speaker's condition and environmental effects. The present study deals with the speech recognition of a number of multi-syllable isolated Korean words using soft computing techniques such as a back-propagation neural network, a fuzzy inference system, and a fuzzy neural network. Feature patterns for speech recognition are analyzed with thirty frames of 12th-order coefficients normalized by linear predictive coding and cepstrums. Using four models of speech recognizers, actual experiments for both single speakers and multiple speakers are conducted. Through this study, the recognizer combining fuzzy logic with a back-propagation neural network and the fuzzy neural network show better performance in speech recognition.

Analysis of Phonological Reduction in Conversational Japanese (현대일본어의 회화문에 나타난 축약형의 음운론적 분석)

  • Choi Young-sook;Sato Shigeru;Pahk Hy-tay
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.198-206
    • /
    • 1996
  • Using eighteen text materials from various genres of present-day Japanese, we collected phonologically reduced forms frequently observed in conversational Japanese and classified them in search of a unified explanation of phonological reduction phenomena. We found 7,516 cases of reduced forms, which we divided into 43 categories according to the types of phonological changes they have undergone. The general tendencies are that deletion and fusion of a phoneme or an entire syllable take place frequently, resulting in a decrease in the number of syllables. Typical examples frequently observed throughout the materials are: /noda/ → /nda/, /teiru/ → /teru/, /dewa/ → /zja/, /tesimau/ → /cjau/. From a morphosyntactic point of view, phonological reduction often occurs at NP and VP morpheme boundaries. The following findings are drawn from phonological observations of reduction. (1) Vowels are more easily deleted than consonants. (2) Bilabials (/m/, /b/, and /w/) are the most likely candidates for deletion. (3) In a concatenation of vowels, closed vowels are absorbed into open vowels, or two adjacent vowels come to create another vowel, in which case reconstruction of the original sequence is not always predictable. (4) Alveolars are palatalized under the influence of front vowels. (5) Regressive assimilation takes place in certain syllables, changing the entire syllable into a choked sound or a syllabic nasal, depending on the voicing of the following phoneme.

Synchronization of Synthetic Facial Image Sequences and Synthetic Speech for Virtual Reality (가상현실을 위한 합성얼굴 동영상과 합성음성의 동기구현)

  • 최장석;이기영
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.7
    • /
    • pp.95-102
    • /
    • 1998
  • This paper proposes a synchronization method for synthetic facial image sequences and synthetic speech. The LP-PSOLA synthesizes the speech for each demi-syllable. We provide 3,040 demi-syllables for unlimited synthesis of Korean speech. For synthesis of the facial image sequences, the paper defines a total of 11 fundamental patterns for the lip shapes of the Korean consonants and vowels. The fundamental lip shapes allow us to pronounce all Korean sentences. The image synthesis method assigns the fundamental lip shapes to key frames according to the initial, the middle and the final sound of each syllable in the Korean input text. The method interpolates the naturally changing lip shapes in the in-between frames. The number of in-between frames is estimated from the duration of each syllable of the synthetic speech. This estimation accomplishes synchronization of the facial image sequences and speech. In speech synthesis, disk memory is required to store the 3,040 demi-syllables. In synthesis of the facial image sequences, however, disk memory is required to store only one image, because all frames are synthesized from the neutral face. The above method realizes a synchronized system that can read Korean sentences with synthetic speech and synthetic facial image sequences.
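The core of the synchronization idea is estimating how many image frames a syllable spans from its duration in the synthetic speech. A minimal sketch; the 30 fps frame rate and the function name are assumptions for illustration, not values from the paper:

```python
def frames_for_syllable(duration_s, frame_rate_hz=30.0):
    """Estimate the number of image frames a syllable spans, from its
    duration in the synthetic speech. Key frames carry the fundamental
    lip shapes; the remaining frames are interpolated between them."""
    return round(duration_s * frame_rate_hz)

# Hypothetical 0.2 s syllable at an assumed 30 fps video rate
print(frames_for_syllable(0.2))
```

Because the frame count is derived from the speech duration itself, the lip-shape animation and the synthesized audio stay aligned syllable by syllable.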
