Title/Summary/Keyword: initial 2 syllables

Characteristics of AP Tonal Patterns & Slopes Produced by Chinese Learners of Korean (중국인 학습자의 한국어 강세구 성조패턴과 기울기 특성)

  • In, Jiyoung; Seong, Cheoljae
    • Phonetics and Speech Sciences / v.5 no.3 / pp.47-54 / 2013
  • The purpose of this study is to analyse the prosodic characteristics of accentual phrases (AP, hereafter) produced by Chinese learners of Korean in Korean text reading. The study is restricted to the initial APs only. The subjects were students who had been studying Korean at a beginner level. The results showed that the Chinese learners of Korean tended to make errors in realizing the tonal patterns of the initial two syllables of the initial APs, and that they also produced different F0 slopes across the first and second syllables of those APs. Chinese learners of Korean therefore need to focus on the prosodic characteristics of the initial two syllables of Korean APs in order to realize fluent Korean intonation.
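The F0 slope measure described in this abstract can be approximated from a pitch track. Below is a minimal sketch, assuming the span of the first two AP syllables is known from an annotation; the file name, time values, and the use of parselmouth (a Python interface to Praat) are illustrative, not taken from the study.

```python
# Estimate an F0 slope over the first two syllables of an AP.
# Assumptions: "ap_initial.wav" and the syllable span are placeholders.
import numpy as np
import parselmouth

snd = parselmouth.Sound("ap_initial.wav")      # placeholder file
pitch = snd.to_pitch(time_step=0.01)           # Praat pitch tracking
times = pitch.xs()                             # frame times (s)
f0 = pitch.selected_array['frequency']         # F0 in Hz, 0 where unvoiced

t_start, t_end = 0.10, 0.45                    # assumed span of the initial 2 syllables
voiced = (times >= t_start) & (times <= t_end) & (f0 > 0)

# Least-squares line through the voiced frames of the span: slope in Hz/s
slope, intercept = np.polyfit(times[voiced], f0[voiced], 1)
print(f"F0 slope over the initial two syllables: {slope:.1f} Hz/s")
```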

The Effect of Strong Syllables on Lexical Segmentation in English Continuous Speech by Korean Speakers (강음절이 한국어 화자의 영어 연속 음성의 어휘 분절에 미치는 영향)

  • Kim, Sunmi; Nam, Kichun
    • Phonetics and Speech Sciences / v.5 no.2 / pp.43-51 / 2013
  • English native listeners tend to treat strong syllables in a speech stream as the potential initial syllables of new words, since the majority of lexical words in English have word-initial stress. The current study investigates whether Korean (L1) - English (L2) late bilinguals perceive strong syllables in English continuous speech as word onsets, as English native listeners do. In Experiment 1, word-spotting was slower when the word-initial syllable was strong, indicating that Korean listeners do not perceive strong syllables as word onsets. Experiment 2 was conducted to rule out the possibility that the results of Experiment 1 were simply due to the strong-initial targets used in Experiment 1 being slower to recognize than the weak-initial targets. We employed the gating paradigm in Experiment 2 and measured the Isolation Point (IP, the point at which participants correctly identify a word without subsequently changing their minds) and the Recognition Point (RP, the point at which participants correctly identify the target with 85% or greater confidence) for the targets excised from the non-words in the two conditions of Experiment 1. Both the mean IPs and the mean RPs were significantly earlier for the strong-initial targets, which means that the results of Experiment 1 reflect the difficulty of segmentation when the initial syllable of a word is strong. These results are consistent with Kim & Nam (2011), indicating that strong syllables are not perceived as word onsets by Korean listeners and interfere with lexical segmentation in English running speech.
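As a concrete illustration of the two gating measures defined above, the sketch below computes an IP and an RP from a list of per-gate responses. The data structure and the example responses are hypothetical; only the definitions (first stable correct identification, and first correct identification at 85% confidence or higher) follow the abstract.

```python
# Isolation Point (IP) and Recognition Point (RP) from gating responses.
# Each response: (gate duration in ms, word reported, confidence 0-100).
# The target word and the responses below are made-up examples.
TARGET = "formal"
responses = [
    (100, "four",   20),
    (150, "formal", 40),
    (200, "formal", 70),
    (250, "formal", 90),
    (300, "formal", 95),
]

def isolation_point(responses, target):
    """First gate at which the target is reported and never changed afterwards."""
    for i, (gate, word, _) in enumerate(responses):
        if word == target and all(w == target for _, w, _ in responses[i:]):
            return gate
    return None

def recognition_point(responses, target, threshold=85):
    """First gate at which the target is reported with >= threshold% confidence."""
    for gate, word, confidence in responses:
        if word == target and confidence >= threshold:
            return gate
    return None

print(isolation_point(responses, TARGET))    # 150
print(recognition_point(responses, TARGET))  # 250
```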

Strong (stressed) syllables in English and lexical segmentation by Koreans (영어의 강음절(강세 음절)과 한국어 화자의 단어 분절)

  • Kim, Sun-Mi; Nam, Ki-Chun
    • Phonetics and Speech Sciences / v.3 no.1 / pp.3-14 / 2011
  • It has been posited that in English, native listeners use the Metrical Segmentation Strategy (MSS) for the segmentation of continuous speech. Strong syllables tend to be perceived as potential word onsets by English native speakers, owing to the high proportion of strong syllables in word-initial position in the English vocabulary. This study investigates whether Koreans employ the same strategy when segmenting speech input in English. Word-spotting experiments were conducted using vowel-initial and consonant-initial bisyllabic targets embedded in nonsense trisyllables in Experiments 1 and 2, respectively. The effect of the strong syllable was significant in the reaction time (RT) analysis but not in the error analysis. In both experiments, Korean listeners detected words more slowly when the word-initial syllable was strong (stressed) than when it was weak (unstressed). However, the error analysis showed no effect of initial stress in Experiment 1 or in the item (F2) analysis of Experiment 2; only the subject (F1) analysis in Experiment 2 showed that the participants made more errors when the word started with a strong syllable. These findings suggest that Korean listeners do not use the Metrical Segmentation Strategy for segmenting English speech. They do not treat strong syllables as word beginnings, but rather have difficulty recognizing words when the word starts with a strong syllable. These results are discussed in terms of the intonational properties of Korean prosodic phrases, which have been found to serve as lexical segmentation cues in the Korean language.
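The subject (F1) and item (F2) analyses mentioned above rest on two complementary aggregations of the same reaction-time table. The sketch below shows that aggregation under stated assumptions: a long-format table with subject, item, condition ('strong'/'weak') and rt columns; a stress manipulation that varies within subjects but between items; and two-sample t-tests standing in for the ANOVAs, since the factor has only two levels. All column names are illustrative.

```python
# By-subject (F1) and by-item (F2) aggregation of word-spotting RTs.
# Assumptions: columns subject, item, condition ('strong'/'weak'), rt;
# condition varies within subjects but between items.
import pandas as pd
from scipy import stats

def f1_f2(df: pd.DataFrame):
    # F1: collapse over items -> one mean RT per subject and condition (paired)
    by_subject = (df.groupby(["subject", "condition"])["rt"]
                    .mean().unstack("condition"))
    f1 = stats.ttest_rel(by_subject["strong"], by_subject["weak"])

    # F2: collapse over subjects -> one mean RT per item (unpaired across items)
    by_item = df.groupby(["item", "condition"])["rt"].mean().reset_index()
    f2 = stats.ttest_ind(by_item.loc[by_item["condition"] == "strong", "rt"],
                         by_item.loc[by_item["condition"] == "weak", "rt"])
    return f1, f2
```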

Sentence design for speech recognition database

  • Zu Yiqing
    • Proceedings of the KSPS conference / 1996.10a / pp.472-472 / 1996
  • The material of a database for speech recognition should include as many phonetic phenomena as possible. At the same time, such material should be phonetically compact with low redundancy [1, 2]. The phonetic phenomena of continuous speech are the key problem in speech recognition. This paper describes the processing of a set of sentences collected from the 1993 and 1994 database of the "People's Daily" (a Chinese newspaper), which consists of news, politics, economics, arts, sports, etc. In those sentences, both phonetic phenomena and sentence patterns are included. In continuous speech, phonemes always appear in the form of allophones, which gives rise to co-articulatory effects. The task of designing a speech database should therefore be concerned with both intra-syllabic and inter-syllabic allophone structures. In our experiments, there are 404 syllables, 415 inter-syllabic diphones, 3050 merged inter-syllabic triphones and 2161 merged final-initial structures in read speech. Statistics computed on the "People's Daily" database provide an evaluation of all of the possible phonetic structures. In this sentence set, we first consider the phonetic balance among syllables, inter-syllabic diphones, inter-syllabic triphones and semi-syllables with their junctures. The syllabic balance covers intra-syllabic phenomena such as phonemes, initials/finals and consonants/vowels; the rest describe the inter-syllabic juncture. The 1560 sentences cover 96% of the syllables without tones (the absent syllables are only used in spoken language), 100% of the inter-syllabic diphones, and 67% of the inter-syllabic triphones (87% of which appear in the "People's Daily"). Roughly 17 kinds of sentence patterns appear in our sentence set. By taking the transitions between syllables into account, Chinese speech recognition systems have achieved significantly higher recognition rates [3, 4]. The process of collecting sentences is as follows: [People's Daily database] -> [segmentation of sentences] -> [segmentation of word groups] -> [transliteration of the text into Pinyin] -> [statistics on phonetic phenomena & selection of useful paragraphs] -> [manual modification of the selected sentences] -> [phonetically compact sentence set].
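The abstract reports coverage figures for a compact sentence set but does not spell out the selection procedure. A common way to obtain such a set is a greedy coverage pass over candidate sentences, sketched below for inter-syllabic diphones only; the syllable-list representation and the toy Pinyin data are assumptions for illustration, not the paper's method or data.

```python
# Greedy selection of a phonetically compact sentence set (diphone coverage).
# Sentences are assumed to be available as lists of syllables (e.g., toneless Pinyin).
def intersyllabic_diphones(syllables):
    """Units spanning adjacent syllables, e.g. ('ren2', 'min2')."""
    return {(a, b) for a, b in zip(syllables, syllables[1:])}

def greedy_select(sentences, target_units):
    """Repeatedly pick the sentence that adds the most uncovered units."""
    covered, chosen, remaining = set(), [], list(sentences)
    while covered != target_units and remaining:
        best = max(remaining, key=lambda s: len(intersyllabic_diphones(s) - covered))
        gain = intersyllabic_diphones(best) - covered
        if not gain:                      # no remaining sentence adds anything new
            break
        covered |= gain
        chosen.append(best)
        remaining.remove(best)
    return chosen, covered

# Toy example (made-up syllable sequences)
sentences = [["ren2", "min2", "ri4", "bao4"],
             ["ri4", "bao4", "ren2"],
             ["min2", "ren2", "ri4"]]
target = set().union(*(intersyllabic_diphones(s) for s in sentences))
subset, covered = greedy_select(sentences, target)
print(len(subset), "sentences cover", len(covered), "diphones")
```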

Treatment Effect of a Modified Melodic Intonation Therapy (MMIT) in Korean Aphasics

  • Ko, Do-Heung; Jeong, Ok-Ran
    • Speech Sciences / v.4 no.2 / pp.91-102 / 1998
  • The present study attempted to modify conventional Melodic Intonation Therapy (MIT) in three aspects: the number of syllables of adjacent target utterances (ATU), the melody patterns of ATU, and initial listening to the melody and intoned speech with the eyes closed. The Modified Melodic Intonation Therapy (MMIT) was applied to two severe Korean aphasics. The patients exhibited severely nonfluent aphasia resulting from a left CVA (cerebrovascular accident). The purpose of the modification was to avoid perseveration and improve reflective listening skills. First, the treatment program avoided ATU with the same number of syllables. Second, four different patterns of melody were developed: rising type, falling type, V-type, and inverted V-type; one type of prosodic pattern was preceded and followed by another type of melody. These two variations were intended to decrease perseverative behaviors. Finally, the patients kept their eyes closed while the clinician played and hummed a target melody at the initial stage of the program, in order to improve reflective listening skills. A single-subject alternating treatment design was used, and the effects of MMIT were compared to those of conventional MIT. Varying the number of syllables and the type of melodic pattern decreased perseverative behaviors and produced more correct names. The initial listening to the target melody with the patients' eyes closed seemed to increase their attentiveness and result in more fluent production of target utterances. Probable reasons for the effectiveness of MMIT are discussed.

Korean Speech Recognition Based on Syllable (음절을 기반으로한 한국어 음성인식)

  • Lee, Young-Ho; Jeong, Hong
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.1 / pp.11-22 / 1994
  • For conventional systems based on words, it is very difficult to enlarge the vocabulary. To cope with this problem, we must use more fundamental units of speech, such as syllables and phonemes. Korean speech consists of initial consonants, middle vowels and final consonants, and it has the characteristic that syllables can be obtained from speech easily. In this paper, we present a speech recognition system that takes advantage of the syllable characteristics peculiar to Korean speech. The recognition algorithm is the Time Delay Neural Network (TDNN). To handle many recognition units, the system consists of initial-consonant, middle-vowel, and final-consonant recognition networks. The system first recognizes initial consonants, middle vowels and final consonants; then, using these results, it recognizes isolated words. In experiments, we obtained recognition rates of 85.12% for 2735 initial-consonant tokens, 86.95% for 3110 middle-vowel tokens, 90.58% for 1615 final-consonant tokens, and 71.2% for 250 isolated words.
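The system above treats a Korean syllable as an initial consonant, a middle vowel, and a final consonant, and then combines the three recognition results. As a text-side illustration of that three-part structure, the sketch below composes jamo into a precomposed Hangul syllable using the standard Unicode formula for the U+AC00 block; the example jamo stand in for hypothetical recognizer outputs.

```python
# Compose (initial, vowel, final) jamo into one Hangul syllable (U+AC00 block).
CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")              # 19 initial consonants
JUNGSEONG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")         # 21 middle vowels
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 finals + "no final"

def compose(cho: str, jung: str, jong: str = "") -> str:
    """Map an initial consonant, a middle vowel and an (optional) final consonant
    to the corresponding precomposed syllable character."""
    return chr(0xAC00
               + (CHOSEONG.index(cho) * 21 + JUNGSEONG.index(jung)) * 28
               + JONGSEONG.index(jong))

print(compose("ㅎ", "ㅏ", "ㄴ"))   # '한'  (hypothetical recognizer outputs)
print(compose("ㄱ", "ㅡ", "ㄹ"))   # '글'
```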

The Development of Phonological Awareness in Children (아동의 음운인식 발달)

  • Park, Hyang Ah
    • Korean Journal of Child Studies / v.21 no.1 / pp.35-44 / 2000
  • This study examined the development of phonological awareness in 3-, 5-, and 7-year-old children, with 20 subjects at each age level. The 3-year-olds were given 2 phoneme detection tasks and the 5- and 7-year-olds were given 5 phoneme detection tasks. In each task, the children first heard a target syllable together with 2 other syllables and were asked to tell which of the 2 syllables sounded similar to the target. Children were able to detect relatively large segments (consonant + vowel or vowel + consonant: C1V or VC2) at the age of 3 and gradually progressed to smaller sound segments (e.g., phonemes). This study indicated that Korean children detect C1V segments better than VC2 segments and detect the initial consonant better than the middle vowel and the final consonant.

Acoustic analysis of Korean trisyllabic words produced by English and Korean speakers

  • Lee, Jeong-Hwa; Rhee, Seok-Chae
    • Phonetics and Speech Sciences / v.10 no.2 / pp.1-6 / 2018
  • The current study aimed to investigate the transfer of English word stress rules to the production of Korean trisyllabic words by L1 English learners of Korean. It compared English and Korean speakers' productions of seven Korean words from the corpus L2KSC (Rhee et al., 2005). To this end, it analyzed the syllable duration, intensity, and pitch. The results showed that English and Korean speakers' pronunciations differed markedly in duration and intensity. English learners produced word-initial syllables of greater intensity than Korean speakers, while Korean speakers produced word-final syllables of longer duration than English learners. However, these differences between the two speaker groups were not related to the expected L1 transfer. The tonal patterns produced by English and Korean speakers were similar, reflecting L1 English speakers' learning of the L2 Korean prosodic system.
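The three acoustic parameters compared above (syllable duration, intensity, and pitch) can be measured per syllable from an annotated recording. The sketch below shows one way to do so with parselmouth (a Python interface to Praat); the file name and syllable boundaries are placeholders, and in practice the boundaries would come from a TextGrid or manual annotation.

```python
# Per-syllable duration, mean intensity and mean F0 with parselmouth/Praat.
# "trisyllabic_word.wav" and the boundary times are placeholders.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("trisyllabic_word.wav")
pitch = snd.to_pitch()
intensity = snd.to_intensity()

# (start, end) in seconds for each syllable, e.g. read from a TextGrid
syllables = [(0.05, 0.21), (0.21, 0.38), (0.38, 0.60)]

for i, (t1, t2) in enumerate(syllables, start=1):
    duration_ms = (t2 - t1) * 1000.0
    mean_f0 = call(pitch, "Get mean", t1, t2, "Hertz")        # Hz (nan if unvoiced)
    mean_db = call(intensity, "Get mean", t1, t2, "energy")   # dB, energy-averaged
    print(f"syllable {i}: {duration_ms:.0f} ms, {mean_f0:.1f} Hz, {mean_db:.1f} dB")
```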

A Study on the Intelligibility of Esophageal Speech (식도발성 발화의 명료도에 대한 연구)

  • Pyo, Hwa-Young
    • The Journal of the Acoustical Society of Korea / v.26 no.5 / pp.182-187 / 2007
  • The present study investigated the speech intelligibility of esophageal speech, the way in which laryngectomized people who have lost their voices through total laryngectomy can phonate by using airstream driven into the esophagus rather than the trachea. Three normal listeners transcribed the CVV and VCV syllables produced by 10 esophageal speakers. The overall intelligibility of esophageal speech was 27%. Affricates showed the highest intelligibility and fricatives the lowest. In terms of place of articulation, palatals were the most intelligible and alveolars the least. Most of the aspirated consonants showed low intelligibility. The consonants in VCV syllables were more intelligible than those in CVV syllables. The low intelligibility of esophageal speakers is due to insufficient airflow intake into the esophagus; therefore, training to increase airflow intake, as well as correct articulation training, will improve their intelligibility.
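The intelligibility figures above are proportions of correctly transcribed consonants, broken down by consonant class. A minimal sketch of that kind of scoring is given below; the data structures, class labels, and example responses are illustrative rather than the study's materials.

```python
# Percent-correct intelligibility by consonant class from listener transcriptions.
from collections import defaultdict

def intelligibility(responses, class_of):
    """responses: iterable of (target consonant, transcribed consonant);
    class_of: dict mapping a consonant to a class label (e.g., manner)."""
    correct, total = defaultdict(int), defaultdict(int)
    for target, answer in responses:
        label = class_of.get(target, "other")
        total[label] += 1
        correct[label] += int(answer == target)
    return {label: 100.0 * correct[label] / total[label] for label in total}

# Made-up example: two affricate and two fricative targets
class_of = {"ㅈ": "affricate", "ㅊ": "affricate", "ㅅ": "fricative", "ㅎ": "fricative"}
responses = [("ㅈ", "ㅈ"), ("ㅊ", "ㅈ"), ("ㅅ", "ㅎ"), ("ㅎ", "ㅎ")]
print(intelligibility(responses, class_of))   # {'affricate': 50.0, 'fricative': 50.0}
```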

Some Characteristics of Hanmal and Hangul from the viewpoint of Processing Hangul Information on Computers

  • Kim, Kyong-Sok
    • Proceedings of the KSPS conference / 1996.10a / pp.456-463 / 1996
  • In this paper, we discussed three cases to see the effects of the characteristics of the Hangul writing system. In applications such as computer Hangul shorthand for ordinary people and pushbuttons with Hangul characters engraved on them, we found that there is much advantage in using Hangul. In the case of Hangul transliteration, we discussed some problems related to the characteristics of the Hangul writing system. Shorthand systems use 3-set keyboards in England, America, and Korea. We showed how ordinary people can do computer Hangul shorthand, whereas only experts can do computer shorthand in other countries. Specifically, the facts that 1) Hangul characters are grouped into syllables (syllabic blocks) and 2) there is already a 3-set Hangul keyboard for ordinary people allow ordinary people to do computer Hangul shorthand without the special training required for English shorthand. This study was done by the author under the codename 'Sejong 89'. In contrast, a 2-set Hangul keyboard, like QWERTY or DVORAK, cannot be used for shorthand. In the case of English pushbuttons, one digit is associated with only one character. However, by engraving only syllable-initial characters on phone pushbuttons, we can associate one Hangul "syllable" with one digit. Therefore, for a given number of digits, we can associate longer or more meaningful words in Hangul than in English. We also discussed the problems of the Hangul transliteration system proposed by South Korea and suggested solutions where available: 1) the framework of transcription is incorrectly being used for transliteration; to solve this, the author suggests that a) all complex characters be included in the transliteration table and b) syllable-initial and syllable-final characters be specified separately in the table; 2) the proposed system cannot represent independent characters and incomplete syllables; and 3) the proposed system cannot distinguish between syllable-initial and syllable-final characters.
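The pushbutton argument above relies on being able to read off the syllable-initial character of any precomposed Hangul syllable. The sketch below does this with the standard Unicode decomposition and maps the result to a digit; the digit layout itself is purely hypothetical, invented for illustration and not the engraving discussed in the paper.

```python
# Syllable-initial characters of Hangul syllables, mapped to pushbutton digits.
CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")   # 19 syllable-initial characters

# Hypothetical digit layout (not a real or proposed standard)
DIGIT_OF = {"ㄱ": "1", "ㄴ": "2", "ㄷ": "3", "ㄹ": "4", "ㅁ": "5",
            "ㅂ": "6", "ㅅ": "7", "ㅇ": "8", "ㅈ": "9", "ㅎ": "0"}

def initial_of(syllable: str) -> str:
    """Syllable-initial consonant of a precomposed Hangul syllable (U+AC00 block)."""
    return CHOSEONG[(ord(syllable) - 0xAC00) // (21 * 28)]

word = "한글"
initials = [initial_of(s) for s in word]                   # ['ㅎ', 'ㄱ']
digits = "".join(DIGIT_OF.get(c, "?") for c in initials)   # '01'
print(initials, digits)
```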
