• Title/Summary/Keyword: Talking

Korean Students' Perceptions of Free-talking and International Professors' Role Recognition

  • Kim, Nahk-Bohk
    • English Language & Literature Teaching / v.17 no.3 / pp.119-139 / 2011
  • Free-talking in Korea has recently been emphasized as a way of improving students' speaking ability outside the classroom. The purpose of this study is to examine perceptions of free-talking, to understand what roles were played by or allotted between Korean students and international professors (IPs), and to identify effective speaking strategies for utilizing free-talking. Participants were 68 university students and 23 IPs. The data, collected through a survey questionnaire, were analyzed by the researcher; the main findings indicate that students and IPs hold somewhat different concepts of free-talking. Students expressed varying viewpoints depending on their experience and class year. Regarding the benefits, usefulness, and satisfaction of free-talking, students and IPs were largely in agreement, although the two groups held conflicting perceptions about its operation, especially preparation and feedback. Students stated that they feel anxious and nervous and struggle with peer pressure while free-talking, yet through free-talking they build confidence and improve their speaking ability. Regarding roles, most professors act helpfully as a guide or facilitator, while students want professors to provide more suitable materials and to tutor them with appropriate feedback and strategies, acting as well-prepared prompters, participants, or tutors in a timely manner. Finally, this paper offers practical suggestions for activating free-talking and discusses the pedagogical implications.

A Study on the Durational Characteristics of Korean Distant-Talking Speech (한국어 원거리 음성의 지속시간 연구)

  • Kim, Sun-Hee
    • MALSORI / no.54 / pp.1-14 / 2005
  • This paper presents durational characteristics of Korean distant-talking speech using speech data consisting of 500 distant-talking utterances and 500 normal utterances from 10 speakers (5 males and 5 females). Each file was segmented and labeled manually, and the duration of each segment and each word was extracted. Using statistical methods, the durational changes of distant-talking speech relative to normal speech were analyzed. The results show that word duration increases in distant-talking speech compared to the normal style, and that the average unvoiced consonant duration is reduced while the average vocalic duration is increased. Female speakers show a stronger tendency toward lengthening duration in distant-talking speech. Finally, this study also shows that speakers of distant-talking speech could be classified according to their duration change rates.

An Analysis of Acoustic Features Caused by Articulatory Changes for Korean Distant-Talking Speech

  • Kim Sunhee; Park Soyoung; Yoo Chang D.
    • The Journal of the Acoustical Society of Korea / v.24 no.2E / pp.71-76 / 2005
  • Compared to normal speech, distant-talking speech is characterized by acoustic effects due to interfering sounds and echoes, as well as by articulatory changes resulting from the speaker's effort to be more intelligible. In this paper, the acoustic features of distant-talking speech due to articulatory changes are analyzed and compared with those of the Lombard effect. To examine the effect of different distances and articulatory changes, speech recognition experiments were conducted using HTK for normal speech as well as for distant-talking speech at different distances. The speech data used in this study consist of 4500 distant-talking utterances and 4500 normal utterances from 90 speakers (56 males and 34 females). The acoustic features selected for analysis were duration, formants (F1 and F2), fundamental frequency, total energy, and energy distribution. The results show that the acoustic-phonetic features of distant-talking speech correspond mostly to those of Lombard speech: the main acoustic changes between normal and distant-talking speech are an increase in vowel duration, shifts in the first and second formants, an increase in fundamental frequency, an increase in total energy, and a shift in energy from the low-frequency band to the middle or high bands.
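  The features compared in this abstract (duration, formants, F0, energy) are standard short-time measurements. As a rough illustration of how two of them, fundamental frequency and total energy, can be measured on a single frame (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def frame_features(frame, sr):
    """Estimate fundamental frequency (autocorrelation peak) and total
    energy for one speech frame -- an illustrative sketch of two of the
    acoustic features compared across normal and distant-talking speech."""
    energy = float(np.sum(frame ** 2))
    # Autocorrelation-based F0: search lags covering roughly 60-400 Hz.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / 400), int(sr / 60)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag, energy

# Synthetic 200 Hz vowel-like frame at 16 kHz.
sr = 16000
t = np.arange(int(0.04 * sr)) / sr
frame = np.sin(2 * np.pi * 200 * t)
f0, energy = frame_features(frame, sr)
```

  On this synthetic frame the estimator recovers an F0 close to the 200 Hz used to generate it; on real speech, windowing and voicing decisions would also be needed.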

Acoustic Characteristics of Vowels in Korean Distant-Talking Speech (한국어 원거리 음성의 모음의 음향적 특성)

  • Lee Sook-hyang; Kim Sunhee
    • MALSORI / v.55 / pp.61-76 / 2005
  • This paper aims to analyze the acoustic characteristics of vowels produced in a distant-talking environment. The analysis was performed using statistical methods, and the influence of gender and individual speakers on the variation was also examined. The speech data used in this study consist of 500 distant-talking words and 500 normal words from 10 speakers (5 males and 5 females). The acoustic features selected for analysis were duration, formants (F1 and F2), fundamental frequency, and total energy. The results showed that duration, F0, F1, and total energy increased in distant-talking speech compared to normal speech; female speakers showed a greater increase in all features except total energy and fundamental frequency. In addition, speaker differences were observed.

Automatic Vowel Sequence Reproduction for a Talking Robot Based on PARCOR Coefficient Template Matching

  • Vo, Nhu Thanh; Sawada, Hideyuki
    • IEIE Transactions on Smart Processing and Computing / v.5 no.3 / pp.215-221 / 2016
  • This paper describes an automatic vowel sequence reproduction system for a talking robot built to reproduce the human voice based on the working behavior of the human articulatory system. A sound analysis system is developed to record a sentence spoken by a human (mainly vowel sequences in Japanese) and to analyze it into the command packets that let the talking robot repeat it. An algorithm based on a short-time energy method is developed to separate and count sound phonemes. Template matching using partial correlation (PARCOR) coefficients is applied to find the voice in the talking robot's database most similar to the spoken voice. By combining sound separation and phoneme counting with vowel detection, the talking robot can reproduce a vowel sequence similar to the one spoken by the human. Two tests verifying the robot's behavior indicate that it can repeat a sequence of vowels spoken by a human with an average success rate of more than 60%.
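  PARCOR (reflection) coefficients can be obtained from a frame's autocorrelations with the Levinson-Durbin recursion, after which template matching reduces to a distance between coefficient vectors. A minimal sketch under those assumptions — the paper's actual analysis order, distance measure, and database layout are not given here, so all names are hypothetical:

```python
import numpy as np

def parcor(frame, order=8):
    """PARCOR (reflection) coefficients via the Levinson-Durbin
    recursion on the frame's autocorrelation sequence."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
    a = np.zeros(order + 1)   # predictor coefficients
    k = np.zeros(order)       # reflection (PARCOR) coefficients
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] - np.dot(a[1:i], r[i - 1:0:-1])
        k[i - 1] = acc / err
        a_new = a.copy()
        a_new[i] = k[i - 1]
        a_new[1:i] = a[1:i] - k[i - 1] * a[i - 1:0:-1]
        a = a_new
        err *= 1.0 - k[i - 1] ** 2
    return k

def nearest_template(frame, templates):
    """Pick the template vowel whose PARCOR vector is closest
    (Euclidean distance) to that of the input frame."""
    p = parcor(frame)
    return min(templates, key=lambda v: np.linalg.norm(p - templates[v]))

# Two synthetic "vowels" with different formant-like components.
sr = 16000
t = np.arange(400) / sr
fa = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
fi = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2300 * t)
templates = {"a": parcor(fa), "i": parcor(fi)}
```

  Matching each frame against the template set then returns the label whose spectral envelope it shares; a real system would average coefficients over several frames per vowel.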

Avatar's Lip Synchronization in Talking Involved Virtual Reality (대화형 가상 현실에서 아바타의 립싱크)

  • Lee, Jae Hyun; Park, Kyoungju
    • Journal of the Korea Computer Graphics Society / v.26 no.4 / pp.9-15 / 2020
  • Having a virtual talking face along with a virtual body increases immersion in VR applications. As virtual reality (VR) techniques develop, applications that involve talking avatars, such as multi-user social networking and education, are increasing. Due to the lack of sensory information for full face and body motion capture in consumer-grade VR, most VR applications do not show a talking face synced with the body. We propose a novel method, targeted at VR applications, for a talking face synced with audio, combined with upper-body inverse kinematics. Our system presents a mirrored avatar of the user in single-user applications, and visualizes a synced conversational partner in multi-user environments. We found that a realistic talking-face avatar is more influential than an un-synced talking avatar or an invisible avatar.

The Impact on Growth in Childhood and Adolescence Based on Sleeping Symptoms (수면 시 동반되는 증상이 소아·청소년 성장에 미치는 영향)

  • Hong, Hyo Shin; Kim, Deog Gon; Lee, Jin Yong
    • The Journal of Pediatrics of Korean Medicine / v.27 no.2 / pp.20-30 / 2013
  • Objectives: Sleep is closely related to children's and adolescents' growth. The purpose of this cross-sectional study is to examine the frequency of symptoms associated with sleep in childhood and adolescence and their impact on growth. Methods: This study used a questionnaire targeting 1001 children and adolescents. 532 of them had visited the Department of Pediatrics of the Oriental Medicine Hospital of ◯◯ University, located in Dongdaemun-gu, Seoul, between May and September 2012; 469 were students in the lower grades at ◯◯ Elementary School, located in Gangnam-gu, Seoul, during June 2012. We used PASW Statistics 18.0 to analyze the relation between growth and sleep-associated symptoms with the independent-samples t-test, one-way ANOVA, and ANCOVA. Results: Snoring (54.9%), sleep bruxism (34.2%), sleep talking (31.5%), and sleep terror (17.1%) were the most frequently observed symptoms associated with sleep. The habitual snoring (p=0.008**) and sleep terror (p=0.016*) groups had lower height percentiles than the other groups. The group with sleep talking (p=0.022*) had a lower weight percentile than the group without it. Groups with sleep talking (p=0.018*) or sleep walking (p=0.045*) had lower BMI percentiles, and the group with habitual sleep apnea (p=0.039*) had a higher BMI percentile. Conclusions: Symptoms during sleep such as snoring, sleep bruxism, sleep talking, and sleep terror occur frequently among children and adolescents. More importantly, snoring, sleep terror, and sleep talking may be associated with the growth of children and adolescents.

MLLR-Based Environment Adaptation for Distant-Talking Speech Recognition (원거리 음성인식을 위한 MLLR적응기법 적용)

  • Kwon, Suk-Bong; Ji, Mi-Kyong; Kim, Hoi-Rin; Lee, Yong-Ju
    • MALSORI / no.53 / pp.119-127 / 2005
  • Speech recognition is one of the user interface technologies for commanding and controlling terminals such as TVs, PCs, and cellular phones in a ubiquitous environment. In controlling a terminal, a mismatch between training and testing conditions causes rapid performance degradation: the mismatch decreases not only the performance of the recognition system but also its reliability. Therefore, the degradation caused by environmental change must be compensated. Whenever the environment changes, environment adaptation is performed using the user's speech and the background noise of the changed environment, and performance is improved by employing models appropriately transformed to the new environment. Research on environment compensation has been active, but a compensation method for the effect of distant-talking speech has not yet been developed. In this paper, we therefore apply MLLR-based environment adaptation to compensate for the effect of distant-talking speech, and the performance is improved.
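  MLLR adapts the Gaussian means of an acoustic model with an affine transform W = [A b] estimated from adaptation data. A simplified single-regression-class sketch, using least squares in place of the full maximum-likelihood re-estimation used in HTK (identity covariances assumed; all names are illustrative):

```python
import numpy as np

def mllr_mean_transform(obs, means, assign):
    """Estimate one global MLLR mean transform W = [A b] by least
    squares: adapted_mean = W @ [mu; 1].  A simplification of the
    ML re-estimation (it ignores mixture covariances and posteriors)."""
    X = np.hstack([means[assign], np.ones((len(obs), 1))])  # extended means
    W_t, *_ = np.linalg.lstsq(X, obs, rcond=None)           # (d+1, d)
    return W_t.T                                            # (d, d+1)

def adapt_means(means, W):
    """Apply W = [A b] to every Gaussian mean."""
    ext = np.hstack([means, np.ones((len(means), 1))])
    return ext @ W.T

# Toy check: means pushed through a known affine map are recovered.
means = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
A = np.array([[1.1, 0.2], [0.0, 0.9]])
b = np.array([0.5, -0.3])
assign = np.array([0, 1, 2, 0, 1, 2])   # which Gaussian emitted each frame
obs = means[assign] @ A.T + b           # noise-free adaptation frames
W = mllr_mean_transform(obs, means, assign)
adapted = adapt_means(means, W)
```

  With noise-free data the estimated transform reproduces the true affine map exactly; on real adaptation data the same machinery shifts all model means toward the new acoustic environment with only a few utterances.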

Prosodic Characteristics of Korean Distance Speech (한국어 원거리 음성의 운율적 특성)

  • Lee, Sook-hyang; Kim, Sun-Hee; Kim, Jong-Jin
    • Proceedings of the KSPS conference / 2005.11a / pp.87-90 / 2005
  • The aim of this paper is to investigate the prosodic characteristics of Korean distant speech. Thirty-six 2-syllable words from 4 speakers (2 males and 2 females), produced in both distant-talking and normal environments, were used. The results showed that the ratios of the second syllable to the first syllable in vowel duration and vowel energy were significantly larger in the distant-talking environment than in the normal environment, and the f0 range was also larger in the distant-talking environment. In addition, an 'HL%' contour boundary tone on the second syllable and/or an 'L+H' contour tone on the first syllable were used in the distant-talking environment.

The Research on Convergence Education Method of Architecture and Communication Using Film - Focused on Talking Architect, a Documentary Film - (영화를 활용한 건축 및 의사소통의 융합 교육 방법 - 다큐멘터리 <말하는 건축가>를 중심으로 -)

  • Nam, Jin-Sook; Byun, Na-Hyang
    • Journal of Engineering Education Research / v.18 no.6 / pp.70-79 / 2015
  • This paper proposes a convergence education method that combines architecture and communication through a documentary film entitled Talking Architect. The purpose is to propose a new teaching and learning method for architecture education and to investigate what would be an effective method of communication for architects. The paper proposes a teaching method and a model applicable to actual classes based on Talking Architect, and shows that the method can be used for various types of classes and fields, such as architectural expression, architectural planning and design, housing theory, building structure, building materials, and other subjects. In addition, the paper explores how architects communicate as depicted in the film. The findings point to integrity, negotiating capability, and a convergent method of thinking and communication between the humanities and architecture as a positive communication model for architects. This paper opens up possibilities for convergence education in engineering education through three keywords: film, architecture, and communication, and it is valuable as a method for developing convergence courses and team-teaching courses.