• Title/Abstract/Keyword: sentence production

A Study on the Efficacy of Teaching English Discourse Intonation: Blended Learning (담화속 영어 억양교육의 효율성에 대한 실험연구: 혼합교수모듈을 중심으로)

  • Kim, He-Kyung
    • Speech Sciences / v.14 no.3 / pp.31-46 / 2007
  • This study investigates whether training in pitch manipulation helps Korean speakers reduce intonation errors, based on a review of previous studies on Korean speakers' phonetic realization of intonation. Those studies have indicated that Korean speakers have problems with pitch manipulation in their production of English word stress, sentence stress, and ultimately intonation. To train Korean speakers to phonetically realize English pitch patterns, a blended learning module was run for two weeks: six hours of face-to-face instruction and three hours of e-learning instruction in total. The module was designed to help Korean speakers realize pitch as a distinctive phoneme. An acoustic assessment of five Korean female speakers of English shows that the pitch-manipulation training helps Korean speakers of English reduce the intonation errors identified in the studies reviewed.

Korean speakers hyperarticulate vowels in polite speech

  • Oh, Eunhae;Winter, Bodo;Idemaru, Kaori
    • Phonetics and Speech Sciences / v.13 no.3 / pp.15-20 / 2021
  • In line with recent attention to the multimodal expression of politeness, the present study examined the association between polite speech and acoustic features through the analysis of vowels produced in casual and polite speech contexts in Korean. Fourteen adult native speakers of Seoul Korean produced the utterances in two social conditions to elicit polite (professor) and casual (friend) speech. Vowel duration and the first (F1) and second (F2) formants of seven sentence- and phrase-initial monophthongs were measured. The results showed that polite speech shares acoustic similarities with vowel production in clear speech: speakers showed greater vowel space expansion in polite than in casual speech in an effort to enhance perceptual intelligibility. In particular, female speakers hyperarticulated (front) vowels in polite speech, independent of speech rate. The implications for the acoustic encoding of social stance in polite speech are further discussed.
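
The vowel space expansion reported in this abstract is commonly quantified as the area enclosed by the F1/F2 category means. The sketch below illustrates one such computation; the formant values and the convex-hull choice are illustrative assumptions, not the authors' procedure or data.

```python
# Minimal sketch: quantifying vowel space expansion from F1/F2 measurements.
# Vowel categories and formant values below are illustrative, not the study's data.
import numpy as np
from scipy.spatial import ConvexHull

def vowel_space_area(formants):
    """Area of the convex hull spanned by (F1, F2) category means, in Hz^2."""
    points = np.array(formants)          # rows: one (F1, F2) mean per vowel category
    return ConvexHull(points).volume     # in 2-D, .volume is the hull area

# Hypothetical per-condition category means (F1, F2) in Hz for /i, a, u, e, o/.
casual = [(310, 2200), (800, 1350), (340, 800), (480, 1900), (450, 900)]
polite = [(290, 2350), (850, 1300), (320, 750), (460, 2000), (430, 850)]

expansion = vowel_space_area(polite) / vowel_space_area(casual)
print(f"Polite/casual vowel space ratio: {expansion:.2f}")  # >1 suggests expansion
```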

A study of an effective teaching of listening comprehension (영어 청해력 향상을 위한 효율적인 학습 지도 방안)

  • Park, Chan-Shik
    • English Language & Literature Teaching / no.1 / pp.69-108 / 1995
  • Listening comprehension can be defined as an integrative, positive, and creative process through which listeners recover the message of a speaker's production using linguistic and non-linguistic redundancy as well as linguistic and non-linguistic knowledge. Compared with reading comprehension, it poses many difficulties, especially for foreign learners, although it transfers to the other skills: speaking, reading, and writing. With this in mind, listening comprehension can be taught effectively using the following strategies. First, systematic and intensive instruction in segmental phonemes, suprasegmental phonemes, and sound changes must be given to remove the difficulties of listening comprehension related to the identification of sounds. Second, vocabulary drill through various games and other activities is absolutely needed until words can be recognized unconsciously; without this, comprehension is almost impossible. Third, instruction in sentence structure is considered essential, given that grammar supports both listening and reading comprehension for academic purposes, so grammar translation drills, mechanical drills, meaningful drills, and communicative drills should be performed in succession with common or frequently used structures. Fourth, listening activities for overall comprehension should teach learners how to grasp the overall meaning of intended messages intact. The literature lists specific activities such as Total Physical Response, dictation, role playing, singing songs, selective listening, picture recognition, list activities, completion, prediction, true/false choice, multiple choice, seeking specific information, summarizing, problem solving and decision making, recognition of relationships between speakers, and recognition of speakers' mood, attitude, and behavior.

Cross-Generational Differences of /o/ and /u/ in Informal Text Reading (편지글 읽기에 나타난 한국어 모음 /오/-/우/의 세대간 차이)

  • Han, Jeong-Im;Kang, Hyunsook;Kim, Joo-Yeon
    • Phonetics and Speech Sciences / v.5 no.4 / pp.201-207 / 2013
  • This study is a follow-up to Han and Kang (2013) and Kang and Han (2013), which examined cross-generational changes in the Korean vowels /o/ and /u/ using acoustic analyses of the vowel formants, their Euclidean distances, and the overlap fraction values generated in SOAM 2D (Wassink, 2006). Their results showed an ongoing approximation of /o/ and /u/, more evident in female speakers and in non-initial vowels. However, those studies employed non-words in a frame sentence. To see the extent to which these two vowels are merged in real words in spontaneous speech, we conducted an acoustic analysis of the formants of /o/ and /u/ produced by two age groups of female speakers while reading a letter sample. The results demonstrate that 1) the younger speakers relied mostly on F2, not F1, differences in the production of /o/ and /u/; 2) the Euclidean distance between the two vowels was shorter in non-initial than in initial position, but there was no difference in Euclidean distance between the two age groups (20s vs. 40-50s); 3) overall, /o/ and /u/ overlapped more in non-initial than in initial position, and in non-initial position the younger speakers showed a more congested distribution of the vowels than the older speakers.
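
The Euclidean distance measure mentioned above is simply the straight-line distance between the /o/ and /u/ category means in the F1-F2 plane. The sketch below shows the computation with placeholder formant values; it is not the study's analysis script.

```python
# Minimal sketch: Euclidean distance between /o/ and /u/ category means in F1/F2 space.
# The formant values are placeholders, not measurements from the study.
import math

def euclidean_distance(mean_o, mean_u):
    """Distance between two (F1, F2) means in Hz."""
    return math.hypot(mean_o[0] - mean_u[0], mean_o[1] - mean_u[1])

# Hypothetical category means (F1, F2) in Hz for one speaker group.
o_initial, u_initial = (430, 850), (350, 780)
o_noninitial, u_noninitial = (420, 830), (390, 810)

print(euclidean_distance(o_initial, u_initial))        # larger: vowels kept apart
print(euclidean_distance(o_noninitial, u_noninitial))  # smaller: vowels approaching
```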

Effects of pitch accent and prosodic boundary on English vowel production by native versus nonnative (Korean) speakers. (영어의 강세와 운율경계가 모음 발화에 미치는 영향에 관한 음향 연구;원어민과 한국인을 대상으로)

  • Hur, Yu-Na;Kim, Sa-Hyang;Cho, Tae-Hong
    • Proceedings of the KSPS conference / 2007.05a / pp.240-242 / 2007
  • The goal of this paper is to investigate the effects of three prosodic factors, namely phrasal accent (accented vs. unaccented), prosodic boundary (IP-initial vs. IP-medial), and coda voicing (e.g., bed vs. bet), on the acoustic realization of English vowels (/i, ɪ, ɛ, æ/) as produced by native (Canadian) and nonnative (Korean) speakers. The speech corpus included 16 minimal pairs (e.g., bet-bat, bet-bed) embedded in a sentence. Results show that the phonological contrast between vowels is maximized when they are accented, though the contrast maximization pattern was not the same for the English and Korean speakers. However, domain-initial position does not affect the phonetic manifestation of the vowels. Results also show that the phonological contrast due to coda voicing is maximized only when the vowels are accented. These results suggest that the phonetic realization of the vowels is affected by phrasal accent only, and not by position within the prosodic domain.

An Analysis of H* Production by Korean Learners of English according to the Focus of English Sentences in Comparison with Native Speakers of English and Its Pedagogical Implications (영어 원어민과 비교한 한국인 학습자의 영어 문장 초점에 따른 영어 고성조 구현의 분석과 억양교육에 대한 시사점)

  • Yi, So-Pae
    • Phonetics and Speech Sciences / v.3 no.3 / pp.57-62 / 2011
  • Focused items in English sentences are usually accompanied by changes in acoustic manifestation. This paper investigates the acoustic characteristics of H* in English utterances produced by native speakers of English and Korean learners of English. To obtain more reliable results, the acoustic feature values (F0, intensity, syllable duration) were normalized by the median value and the whole duration of each utterance. Acoustic values of sentences with no focused words were compared with those of sentences with focused words within each group (Americans vs. Koreans), and sentences with focused words were also compared between the two groups. Where a significant Group x Focus Location (sentence-initial, -medial, and -final) interaction was obtained, further analysis testing the effect of Group at each Focus Location was conducted. The analysis revealed that Korean learners of English produced focused words with lower F0, lower intensity, and shorter syllable duration than native speakers of English. However, the effect of focus on intensity was not significant within either group. Further analysis of the Group x Focus Location interaction showed that the change in F0 produced by the Korean group was significantly smaller in the middle and final positions of sentences than that produced by the American group. Implications for intonation training are also discussed.
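
The normalization described above (feature values expressed relative to the utterance median and the whole utterance duration) can be sketched as follows. The data structures and values are hypothetical placeholders, not the study's measurements.

```python
# Minimal sketch of the normalization idea described above: word-level F0 and intensity
# are expressed relative to the utterance median, and syllable/word duration as a
# proportion of the whole utterance. All values are illustrative, not data from the study.
import statistics

def normalize_focus_word(word, utterance):
    """word/utterance are dicts with hypothetical keys; returns normalized measures."""
    return {
        "f0_ratio": word["f0"] / statistics.median(utterance["f0_track"]),
        "intensity_ratio": word["intensity"] / statistics.median(utterance["intensity_track"]),
        "duration_prop": word["duration"] / utterance["duration"],
    }

utterance = {"f0_track": [190, 210, 230, 205, 185],
             "intensity_track": [62, 68, 71, 66, 60],
             "duration": 1.8}
focus_word = {"f0": 240, "intensity": 72, "duration": 0.45}
print(normalize_focus_word(focus_word, utterance))
```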

Korean Text to Gloss: Self-Supervised Learning approach

  • Thanh-Vu Dang;Gwang-hyun Yu;Ji-yong Kim;Young-hwan Park;Chil-woo Lee;Jin-Young Kim
    • Smart Media Journal / v.12 no.1 / pp.32-46 / 2023
  • Natural Language Processing (NLP) has grown tremendously in recent years. Bilingual and multilingual translation models have been widely deployed in machine translation and have gained vast attention from the research community. In contrast, few studies have focused on translating between spoken and sign languages, especially for non-English languages. Prior work on Sign Language Translation (SLT) has shown that a mid-level sign gloss representation enhances translation performance. Therefore, this study presents a new large-scale Korean sign language dataset, the Museum-Commentary Korean Sign Gloss (MCKSG) dataset, which includes 3828 pairs of Korean sentences and their corresponding sign glosses used in museum-commentary contexts. In addition, we propose a translation framework based on self-supervised learning, in which the pretext task is text-to-text translation from a Korean sentence to its back-translated versions; the pre-trained network is then fine-tuned on the MCKSG dataset. Using self-supervised learning helps to overcome the shortage of sign language data. In our experiments, the proposed model outperforms a baseline BERT model by 6.22%.
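
The two-stage setup described above (a back-translation pretext task followed by fine-tuning on sentence-gloss pairs) can be sketched as a data-preparation step. Everything below, including back_translate() and the example sentences, is an illustrative placeholder rather than the MCKSG data or the authors' pipeline.

```python
# Minimal sketch of the two-stage training data setup: the pretext task pairs each
# Korean sentence with a back-translated version (no gloss labels needed), and the
# fine-tuning stage pairs sentences with sign gloss sequences.
def back_translate(sentence):
    # Placeholder: in practice this would go Korean -> pivot language -> Korean
    # through a pretrained machine translation model.
    return sentence + " (back-translated)"

korean_sentences = ["이 유물은 조선 시대의 것입니다.", "사진 촬영은 금지되어 있습니다."]
glosses = [["이", "유물", "조선", "시대"], ["사진", "촬영", "금지"]]

# Stage 1: self-supervised pretext pairs.
pretext_pairs = [(s, back_translate(s)) for s in korean_sentences]

# Stage 2: supervised fine-tuning pairs (sentence -> gloss sequence).
finetune_pairs = list(zip(korean_sentences, glosses))

print(pretext_pairs[0])
print(finetune_pairs[0])
```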

Comparison of Speech Rate and Long-Term Average Speech Spectrum between Korean Clear Speech and Conversational Speech

  • Yoo, Jeeun;Oh, Hongyeop;Jeong, Seungyeop;Jin, In-Ki
    • Korean Journal of Audiology / v.23 no.4 / pp.187-192 / 2019
  • Background and Objectives: Clear speech is an effective communication strategy used in difficult listening situations that draws on techniques such as accurate articulation, a slow speech rate, and the inclusion of pauses. Although excessively slow speech and improperly amplified spectral information can degrade overall speech intelligibility, moderate amplitude increments in the mid-frequency bands (1 to 3 dB) and speech rates around 50% slower than those of conversational speech have been reported to improve speech intelligibility. The purpose of this study was to identify whether the amplitude increments in mid-frequency regions and the slower speech rates evident in English clear speech also appear in Korean clear speech. Subjects and Methods: To compare the acoustic characteristics of the two speech styles, the voices of 60 participants were recorded during conversational speech and then again during clear speech using a standardized sentence material. Results: The speech rate and long-term average speech spectrum (LTASS) were analyzed and compared. Speech rates for clear speech were slower than those for conversational speech, and increased amplitudes in the mid-frequency bands were evident in the LTASS of clear speech. Conclusions: The observed differences in acoustic characteristics between the two types of speech production suggest that Korean clear speech can be an effective communication strategy for improving speech intelligibility.
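
A long-term average speech spectrum (LTASS) comparison of the kind described above can be approximated with Welch averaging over a whole recording. The sketch below uses toy signals and arbitrary analysis settings; it is not the study's procedure.

```python
# Minimal sketch of an LTASS comparison between two recordings using Welch averaging;
# window and band choices are illustrative, not the study's analysis settings.
import numpy as np
from scipy.signal import welch

def ltass_db(signal, fs, nperseg=2048):
    """Long-term average spectrum in dB (arbitrary reference)."""
    freqs, psd = welch(signal, fs=fs, nperseg=nperseg)
    return freqs, 10 * np.log10(psd + 1e-12)

fs = 16000
t = np.arange(fs * 2) / fs
clear = np.random.randn(len(t)) * 0.1 + 0.3 * np.sin(2 * np.pi * 2000 * t)  # toy "clear" speech
conversational = np.random.randn(len(t)) * 0.1                              # toy "conversational"

freqs, clear_db = ltass_db(clear, fs)
_, conv_db = ltass_db(conversational, fs)
mid = (freqs >= 1000) & (freqs <= 3000)
print(f"Mid-band (1-3 kHz) difference: {np.mean(clear_db[mid] - conv_db[mid]):.1f} dB")
```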

Effects of a singing program using self-voice monitoring on the intonation and pitch production change for children with cochlear implants (자가음성 모니터링을 응용한 가창 프로그램이 인공와우이식 아동의 억양과 음고 변화에 미치는 영향)

  • Kim, Sung Keong;Kim, Soo Ji
    • Phonetics and Speech Sciences / v.12 no.1 / pp.75-83 / 2020
  • The purpose of this study was to examine how a singing program using self-voice monitoring influences intonation and pitch production accuracy in children with cochlear implants (CI). To evaluate its effectiveness, the program was conducted with 7 prelingual CI users aged between 4 and 7 years. The program adopted three self-voice monitoring stages: Listen, Explore, and Reproduce (LER). All participants received 8 singing sessions over 8 weeks, including a pre-test, the intervention, and a post-test. For the pre- and post-tests, participants' singing of an excerpt of the song "Happy Birthday" and their production of three assertive and three interrogative sentences were recorded and analyzed in terms of the intonation slopes at the ends of the sentences and the melodic contour. For the spoken sentences, the intonation slopes of the interrogative sentences improved significantly, showing patterns similar to those of the average normal-hearing group. For singing, the melodic contour improved and the range of pitch production expanded. These positive results indicate that the singing program was effective in helping children with CI develop intonation skills and pitch production accuracy.
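
The sentence-final intonation slope analyzed above can be estimated by fitting a line to the F0 contour over the final portion of an utterance. The sketch below uses a hypothetical F0 track and an assumed 0.5 s analysis window; neither reflects the study's extraction settings.

```python
# Minimal sketch: estimating the intonation slope over the final portion of an utterance
# by fitting a line to the F0 contour. Values and window length are illustrative.
import numpy as np

def final_intonation_slope(times_s, f0_hz, final_window_s=0.5):
    """Slope (Hz per second) of F0 over the last `final_window_s` seconds."""
    times_s, f0_hz = np.asarray(times_s), np.asarray(f0_hz)
    mask = times_s >= times_s[-1] - final_window_s
    slope, _ = np.polyfit(times_s[mask], f0_hz[mask], 1)
    return slope

# Hypothetical F0 track for an interrogative sentence (rising at the end).
times = np.linspace(0.0, 1.5, 30)
f0 = np.concatenate([np.full(20, 220.0), np.linspace(220, 300, 10)])
print(f"Final slope: {final_intonation_slope(times, f0):.1f} Hz/s")  # positive = rising
```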

Characteristics of speech rate and pause in children with spastic cerebral palsy and their relationships with speech intelligibility (경직형 뇌성마비 아동의 하위그룹별 말속도와 쉼의 특성 및 말명료도와의 관계)

  • Jeong, Pil Yeon;Sim, Hyun Sub
    • Phonetics and Speech Sciences / v.12 no.3 / pp.95-103 / 2020
  • The current study aimed to identify the characteristics of speech rate and pause in children with spastic cerebral palsy (CP) and their relationships with speech intelligibility. In all, 26 children with CP participated: 4 with no speech motor involvement and age-appropriate language ability (NSMI-LCT), 6 with no speech motor involvement and impaired language ability (NSMI-LCI), 6 with speech motor involvement and age-appropriate language ability (SMI-LCT), and 10 with speech motor involvement and impaired language ability (SMI-LCI). Speech samples for the speech rate and pause analysis were extracted using a sentence repetition task, and acoustic analyses were performed in Praat. First, regardless of the presence of language impairment, significant differences between the NSMI and SMI groups were found in speech rate and articulation rate. Second, the SMI groups showed a higher ratio of pause time to sentence production time, more frequent pauses, and longer pause durations than the NSMI groups. Lastly, there were significant correlations among speech rate, articulation rate, and intelligibility. These findings suggest that slow speech rate is the main feature of the SMI groups, and that both speech rate and articulation rate play important roles in the intelligibility of children with spastic CP.
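
The rate and pause measures used above are conventionally defined as follows: speech rate includes pause time, articulation rate excludes it, and the pause ratio is pause time divided by total sentence production time. The sketch below illustrates these definitions with made-up values; it is not the study's analysis.

```python
# Minimal sketch of conventional rate and pause measures from a sentence repetition task.
# Syllable counts and interval durations are illustrative values only.
def rate_and_pause_measures(n_syllables, total_time_s, pause_intervals_s):
    pause_time = sum(pause_intervals_s)
    return {
        "speech_rate": n_syllables / total_time_s,                       # syll/s, pauses included
        "articulation_rate": n_syllables / (total_time_s - pause_time),  # syll/s, pauses excluded
        "pause_ratio": pause_time / total_time_s,
        "pause_count": len(pause_intervals_s),
    }

# Hypothetical sentence repetition: 12 syllables produced in 4.0 s with two pauses.
print(rate_and_pause_measures(12, 4.0, [0.45, 0.30]))
```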