• Title/Summary/Keyword: Korean consonants

A Historical Review of Sorigal (Korean Phonetics) in the early 20th Century (우리말 소리갈(國語音聲學)에 대한 연구 - 주시경, 김두봉, 최현배, 이극로를 중심으로 -)

  • Lee, Suk-Hui; Ko, Do-Heung / Speech Sciences / v.7 no.4 / pp.149-167 / 2000
  • The purpose of this paper is to review the contributions made by phoneticians including Si-gyung Chu, Du-bong Kim, Hyun-bae Choi, and Geuk-ro Lee in the early 20th century. The period can be characterized by the coexistence of traditional phonetics and modern phonetics. Si-gyung Chu clearly recognized the physical nature of speech sounds from a physiological point of view. Although Du-bong Kim adopted Chu's approach in some ways, he made more detailed modifications in explaining the vocal organs. Hyun-bae Choi tried to explain the consonants and vowels systematically based on Western theories of phonetics. Finally, Geuk-ro Lee made the most significant contribution by introducing experimental phonetics.

The Basic Study on making mono-phone for Korean Speech Recognition (한국어 음성 인식을 위한 mono-phone 구성의 기초 연구)

  • Hwang, YoungSoo; Song, Minsuck / Proceedings of the Acoustical Society of Korea Conference / autumn / pp.45-48 / 2000
  • When building a large-vocabulary speech recognition system, it is better to use the segment than the syllable or the word as the recognition unit. In this paper, we study the basis for constructing mono-phones for Korean speech recognition. For the experiments, we use the OGI speech toolkit from the U.S.A. The results show that the recognition rate when a diphthong is treated as a single unit is superior to that when it is treated as two units, i.e. a glide plus a vowel. The recognition rate also varies slightly with the number of consonant units.
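
The unit choice this abstract compares can be sketched as two alternative phone tables; the unit names below are romanized placeholders, not the authors' actual symbol inventory:

```python
# Option A: treat each diphthong as one recognition unit.
units_single = {"ya": ["ya"], "wi": ["wi"], "we": ["we"]}

# Option B: decompose each diphthong into glide + vowel.
units_split = {"ya": ["y", "a"], "wi": ["w", "i"], "we": ["w", "e"]}

def to_monophones(vowels, table):
    """Flatten a vowel sequence into mono-phone units via the table."""
    return [u for v in vowels for u in table[v]]

print(to_monophones(["ya", "wi"], units_single))  # ['ya', 'wi']
print(to_monophones(["ya", "wi"], units_split))   # ['y', 'a', 'w', 'i']
```

Option A yields a larger inventory but fewer units per word, which is the trade-off the recognition experiments evaluate.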

A Study of the Speaking-Centered Chinese Pronunciation Teaching Method for Basic Chinese Learners. (초급 중국어 학습자를 위한 발음교육 개선방안 - 말하기 중심 발음 교수법 -)

  • Lim, Seung Kyu / Cross-Cultural Studies / v.35 / pp.339-368 / 2014
  • In Teaching Chinese as a Foreign Language, phoneme-based pronunciation teaching covering tones, consonants, and vowels is the most common teaching method. Based on the main characteristic of Chinese grammar, the 'lack of morphological change' in a narrow sense as proposed by Lv Shuxiang and Zhu Dexi, I designed a 'communication-oriented Chinese pronunciation teaching method'. This method is composed of seven elements: the 'structural elements' (phoneme, word, phrase, sentence) and the 'functional elements' (listening, speaking, and translation). It includes four kinds of practice: 1) phoneme learning; 2) word-based pronunciation practice; 3) phrase-based pronunciation practice; 4) sentence-based pronunciation practice. When teachers use these practice methods, they can use dialogue and Korean-Chinese translation. In particular, when teachers use the phoneme learning method, they must draw on the results of Korean-Chinese phonetic comparison. When correcting learners' errors, they must first consider speech communication.

A Study on Korean Lip-Sync for Animation Characters - Based on Lip-Sync Technique in English-Speaking Animations (애니메이션 캐릭터의 한국어 립싱크 연구 : 영어권 애니메이션의 립싱크 기법을 기반으로)

  • Kim, Tak-Hoon / Cartoon and Animation Studies / s.13 / pp.97-114 / 2008
  • This study aims to identify mouth shapes suited to Korean consonants and vowels for Korean animations by analyzing the lip-sync process of English-speaking animations based on pre-recording in the United States. The research was conducted to help character animators understand the concept of Korean lip-sync, which is done after recording, and to introduce the minimum basic mouth shapes required for Korean expressions, applicable to various characters. The introduction notes the necessity of Korean lip-sync in local animations and introduces research methods for Korean lip-sync data based on English lip-sync data, taking an American production as an example. The main body demonstrates the characteristics and roles of the 8 basic mouth shapes required for English pronunciation, leaves out the mouth shapes required for English but not for Korean, and, in contrast, adds the mouth shapes required for Korean but not for English. Based on these results, this study provides a diagram of mouth shapes for Korean expressions, taking various examples, and examines how mouth shapes vary when used for consonants, vowels, and batchim (syllable-final consonants). In addition, the case study proposes a method to transfer lines to the exposure sheet and a method to arrange mouth shapes according to the lip-sync for practical animation production. However, lines from a Korean movie had to be used as an example because there is no precedent in Korea for animation production with systematic Korean lip-sync data.
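
The mapping the study describes, a small set of basic mouth shapes shared by many phonemes, can be sketched as a lookup table; the shape names and phoneme groupings below are invented for illustration, not the study's actual diagram:

```python
# Hypothetical mouth-shape classes keyed to romanized Korean phonemes.
MOUTH_SHAPES = {
    "closed":  ["m", "b", "p"],   # bilabials (e.g. ㅁ, ㅂ, ㅍ)
    "open_a":  ["a"],             # ㅏ
    "rounded": ["o", "u"],        # ㅗ, ㅜ
    "spread":  ["i", "e"],        # ㅣ, ㅔ
}

# Invert to a phoneme -> shape lookup for frame-by-frame assignment.
SHAPE_OF = {ph: shape for shape, phs in MOUTH_SHAPES.items() for ph in phs}

def shapes_for(phonemes):
    """Assign a mouth shape per phoneme; fall back to a rest pose."""
    return [SHAPE_OF.get(p, "rest") for p in phonemes]

print(shapes_for(["m", "a", "u"]))  # ['closed', 'open_a', 'rounded']
```

Keeping the shape set minimal is what lets one table serve many characters, as the abstract argues.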

Consonant Inventories of the Better Cochlear Implant Children in Korea (말지각 능력이 우수한 인공와우 착용 아동들의 조음 특성 : 정밀전사 분석 방법을 중심으로)

  • Chang, Son-A; Kim, Soo-Jin; Shin, Ji-Young / MALSORI / no.62 / pp.33-49 / 2007
  • The purpose of this study is 1) to investigate the phoneme inventories and phonological processes of cochlear implant (CI) children and 2) to describe their utterances using a narrow phonetic transcription method. All ten subjects had more than two years' experience with CI and showed more than 85% open-set sentence perception ability. Average consonant accuracy was 81.36%, and it improved to 87.41% when distortion errors were not counted. The subjects showed phonological processing patterns that were in some ways similar to, and in other ways different from, those of hearing aid users or normal-hearing children. The most prominent distortion error pattern was weakening of consonants. Every subject had his or her own idiosyncratic error pattern demanding an individualized therapy program.
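
The rescoring step behind the two accuracy figures, counting distortions as errors versus as acceptable, amounts to a simple recomputation; the counts below are invented for illustration, only the arithmetic mirrors the reported 81.36% → 87.41% pattern:

```python
def pcc(correct, total):
    """Percent consonants correct."""
    return 100.0 * correct / total

total_consonants = 1000
correct_strict = 814        # distortions counted as errors
distortion_errors = 60      # re-scored as acceptable productions

print(round(pcc(correct_strict, total_consonants), 2))                      # 81.4
print(round(pcc(correct_strict + distortion_errors, total_consonants), 2))  # 87.4
```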

A Use of Songs for Teaching Pronunciations in Elementary School

  • Hong, Kyung-Suk / MALSORI / no.41 / pp.61-71 / 2001
  • How to teach intelligible, communicative pronunciation is a continuing question in English education. Without good input, we cannot expect good output. In an EFL situation, however, it is very difficult to provide good English pronunciation input, so we have to find efficient and effective materials for teaching pronunciation. One such material is the song, because songs contain the linguistic and cultural traits of a language. The purpose of this paper is to clarify why songs are good for teaching pronunciation. Koreans, as users of a syllable-timed language, have difficulties with English stress, rhythm, consonant clusters, and linking or blending in connected speech. 134 songs from Wee Sing were analyzed for how these traits appear in songs. The results show that the traits can be acquired easily and naturally through songs. A lesson plan is offered as an example for teaching with songs.

An Experimental Study of Korean Intervocalic Lax and Tense Stop Consonants - With Respect to Stop Closure Duration - (모음사이의 예사소리와 된소리의 구분에 대한 실험음성학적 연구 -파열음의 폐쇄지속시간을 중심으로-)

  • 김효숙 / Proceedings of the Acoustical Society of Korea Conference / 1998.06e / pp.93-96 / 1998
  • This paper compares the acoustic characteristics of intervocalic lax and tense stops, selects some of the characteristics that clearly distinguish the two, and examines their effect on perception when used as variables. The acoustic characteristics of intervocalic lax and tense stops are as follows: first, the stop closure duration is longer for tense stops than for lax stops; second, the vowel preceding a lax stop is longer than the vowel preceding a tense stop; third, VOT shows little difference between lax and tense stops. Among these characteristics, the stop closure duration influenced the perceptual distinction between lax and tense stops, whereas the length of the preceding vowel did not.

Consonant Inventories of the Better Cochlear Implant Children in Korea (말지각 능력이 우수한 인공와우 착용 아동들의 조음 능력;음소의 정밀 전사)

  • Chang, Son-A; Kim, Su-Jin; Sin, Ji-Yeong / Proceedings of the KSPS conference / 2007.05a / pp.274-277 / 2007
  • The purpose of this study is 1) to describe the phoneme inventories of cochlear implant (CI) children and 2) to describe their utterances using a narrow phonetic transcription method. All the subjects had more than two years' experience with CI and showed more than 87% open-set sentence perception ability. Average consonant accuracy was 81.36%, and it improved to 87.41% when distortion errors were not counted. The subjects showed error patterns different from those of hearing aid users. The most prominent error pattern was weakening of consonants.

Speech Production Characteristics of Congenitally Deaf Children with Cochlear Implant (선천성심도 청각장애 아동의 와우이식 후 말산출 특성)

  • Yoon, Mi-Sun / Proceedings of the KSPS conference / 2007.05a / pp.302-304 / 2007
  • The purpose of this study was to evaluate the speech production ability of congenitally deaf children with cochlear implants. Forty children participated in the study. The results are as follows: (1) the mean speech intelligibility score was 3.05 on a 5-point scale; (2) the mean percentage of correct vowels was 86.19%, and the mean percentage of correct consonants was 74.89%; and (3) voice profiles showed that their voices were high-pitched, hypernasal, and breathy, although 12.5% of the children were evaluated as having normal voice quality. Overall, the speech production abilities of children with cochlear implants were superior to the results for deaf children reported in the literature, but still not equal to those of children with normal hearing.

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok; Kim, Yejin / KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.8 / pp.3473-3487 / 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two kinds of example models, called key visemes and key expressions, are used for lip synchronization and facial expressions, respectively. The key visemes represent the lip shapes of phonemes such as vowels and consonants, while the key expressions represent basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on the phonetic transcript, a sequence of speech animation is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During synthesis, an importance-based scheme combines both lip synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation with high accuracy (over 90%) in speech recognition.
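
The viseme-interpolation step this abstract describes can be illustrated with a minimal linear blend between two key shapes; the vertex data and blend weight below are invented, and the paper's actual interpolation over key-viseme sequences is more elaborate:

```python
def lerp(a, b, t):
    """Linearly interpolate two same-length vertex vectors at weight t."""
    return [(1.0 - t) * x + t * y for x, y in zip(a, b)]

viseme_a = [0.0, 1.0, 2.0]   # mouth shape for one phoneme
viseme_b = [2.0, 1.0, 0.0]   # mouth shape for the next phoneme

# An in-between animation frame, halfway between the two key visemes:
print(lerp(viseme_a, viseme_b, 0.5))  # [1.0, 1.0, 1.0]
```

Sampling t from the phoneme timing given by the TTS transcript would yield one such in-between frame per animation tick.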