• Title/Summary/Keyword: Part of speech

A Study on Estimation of Formant and Articulatory Motion using RLSL Adaptive Linear Prediction Filter (RLSL 적응선형예측필터를 이용한 형성음 및 조음운동 궤적 추정에 관한 연구)

  • Kim, Dong-Jun; Song, Young-Soo; Yoon, Tae-Sung; Park, Sang-Hui
    • Proceedings of the KOSOMBE Conference / v.1992 no.05 / pp.163-166 / 1992
  • In this study, formant and articulatory motion trajectories are extracted from Korean diphthongs using the RLSL adaptive linear prediction filter, which makes it possible to track spectral transitions of the speech signal accurately. The study shows that the RLSL algorithm is superior to the Levinson algorithm, especially in the transition parts of speech. (A brief linear-prediction formant sketch follows this entry.)

  • PDF
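
The paper's RLSL (recursive least-squares lattice) filter is not reproduced here, but the conventional block-wise route it is compared against, autocorrelation LPC via the Levinson(-Durbin) recursion followed by root-solving of the predictor polynomial, can be sketched briefly. The Python function below is only an illustration; the predictor order, frequency limits, and bandwidth threshold are assumptions, not values from the paper.

```python
import numpy as np

def lpc_formants(frame, sr, order=12):
    """Estimate formant frequencies of one speech frame via block-wise
    autocorrelation LPC (Levinson-Durbin) and the roots of the LPC polynomial."""
    x = frame * np.hamming(len(frame))                       # taper the frame
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    if r[0] == 0:                                            # silent frame
        return []
    # Levinson-Durbin recursion for A(z) = 1 + a1*z^-1 + ... + ap*z^-p
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= (1.0 - k * k)
    # Poles of 1/A(z): keep upper-half-plane roots with narrow bandwidth
    formants = []
    for rt in (z for z in np.roots(a) if z.imag > 0):
        freq = np.angle(rt) * sr / (2 * np.pi)
        bw = -(sr / np.pi) * np.log(abs(rt))                 # pole bandwidth in Hz
        if 90 < freq < sr / 2 - 50 and bw < 400:             # heuristic thresholds
            formants.append(freq)
    return sorted(formants)

# Example (placeholder frame): formants of one 25 ms frame at 16 kHz.
# frame = signal[start:start + 400]
# print(lpc_formants(frame, sr=16000))
```

Tracking such estimates frame by frame yields formant trajectories; the abstract's point is that a recursive, sample-by-sample RLSL update follows rapid spectral transitions, as in diphthongs, more closely than this block-wise analysis.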

Production and Perception of English /r/ and /l/ by Korean Learners of English: An Experimental Study

  • Kang, Hyeon-Seok
    • Speech Sciences / v.6 / pp.7-24 / 1999
  • Eleven Korean learners of English took part in an experiment in which the production and perception of English /r/ and /l/ in four different word positions were investigated. Overall, the subjects made more errors on /l/ in both the production and identification tests. The frequency of the subjects' errors was also sensitive to the word position in which the two English liquids occur; in particular, the subjects made noticeably fewer errors in intervocalic medial position. It is suggested that the Korean subjects' acquisitional pattern in this particular case of foreign phone learning can be explained more by language-particular 'interference' effects than by 'universal' acoustic arguments such as those given in Dissosway et al. (1982) and Sheldon and Strange (1982). The results of the experiment also support the minority position among second-language educators that, in some cases of non-native phone acquisition, learners' production abilities can develop earlier than their perceptual abilities.

  • PDF

Investigation about Japanese perception of Korean Tense Consonants (일본어 모국어 화자의 한국어 경음 지각)

  • Kwon, Yeonjoo
    • Phonetics and Speech Sciences / v.7 no.2 / pp.77-83 / 2015
  • The aim of this paper is to investigate Japanese speakers' perception of Korean tense consonants. In a series of perceptual experiments, Japanese participants were directed to label Korean stimuli using Japanese katakana characters. The analysis of the results showed a strong influence of Japanese phonology on the responses. Japanese perception of sokuon increased (1) when the tense consonant was in word-medial position, (2) when the tense consonant was other than /s/, (3) when the tense consonant followed a voiceless consonant, (4) when the consonants were part of a cluster sharing their point of articulation, (5) when the preceding vowel was other than /u/, and (6) when the following vowel was /u/. This result, showing a preference for phonology, is in harmony with previous research on Japanese sokuon perception using Japanese (Takeyasu 2009, Matsui 2011) and Italian (Tanaka & Kubozono 2008) stimuli.

Auditory Images of Japanese /p/ by Koreans (일본어 /p/의 청각인상 연구)

  • Lee, Jae-Kang
    • Speech Sciences / v.11 no.3 / pp.83-93 / 2004
  • The objectives of this study are to analyze Korean speakers' pronunciations of various Japanese /p/ patterns and to provide desirable pronunciation models. This is part of ongoing research that aims to propose a useful method of teaching the Japanese pronunciation of /p/ to Koreans. The experimental data consist of /p/ phonemes in word-initial, word-medial, and 'yoon' positions. (Yoon is written in small size after a letter and forms a syllable only together with the preceding letter in Japanese.) There were 22 different phoneme positions, pronounced by 48 students majoring in Japanese (24 female, 24 male), who were in their twenties and were raised in Daejeon and its vicinity. The individual pronunciations were collected and digitized into 528 files. The results show that Koreans pronounced the Japanese phoneme /p/ in a variety of ways, depending on the auditory environment in which the phoneme was tested: as [ph] in word-initial position, [pp] or [ph] in word-medial position, and [ph] in 'yoon', unlike native speakers, who pronounced Japanese /p/ as [ph] in word-initial position, [pp] in word-medial position, and [pp] or [ph] in 'yoon'.

  • PDF

Learning French Intonation with a Base of the Visualization of Melody (억양의 시각화를 통한 프랑스어의 억양학습)

  • Lee, Jung-Won
    • Speech Sciences / v.10 no.4 / pp.63-71 / 2003
  • This study experiments with learning French intonation based on the visualization of melody, a technique first employed in the early sixties to re-educate people with communication disorders. Here, however, the visualization of melody is applied to foreign language learning, where it produced successful results in many respects, especially in learning foreign intonation. We used PitchWorks to visualize French intonation samples and carried out intonation learning based on bitmap pictures projected on a screen, so that the students could see the melody curve while listening to the sentences. As the results of this experiment verify, the students achieved a great deal in learning intonation: they were much more motivated and showed greater improvement in recognizing intonation contours than with learning by ear alone. However, the lack of animation in the bitmap files could reduce the experiment to nothing but boring pattern practice. It would be better to use a pitch analyser such as PitchWorks itself, since the students could then see their own fluctuating intonation visualized on the screen. (A minimal pitch-contour sketch follows this entry.)

  • PDF
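
PitchWorks itself is a commercial analyzer, so the melody visualization described above is only approximated here. A minimal sketch, assuming librosa's pYIN pitch tracker and matplotlib, with a placeholder file name:

```python
import librosa
import matplotlib.pyplot as plt

# Load a recorded French sentence (file name is a placeholder).
y, sr = librosa.load("phrase_francaise.wav", sr=16000)

# Track the fundamental frequency (F0) with probabilistic YIN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)
times = librosa.times_like(f0, sr=sr)

# Plot the melody curve the students would watch while listening.
plt.figure(figsize=(8, 3))
plt.plot(times, f0, ".", markersize=3)
plt.xlabel("Time (s)")
plt.ylabel("F0 (Hz)")
plt.title("Intonation contour (melody curve)")
plt.tight_layout()
plt.show()
```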

A study on the Suprasegmental Parameters Exerting an Effect on the Judgment of Goodness or Badness on Korean-spoken English (한국인 영어 발음의 좋음과 나쁨 인지 평가에 영향을 미치는 초분절 매개변수 연구)

  • Kang, Seok-Han; Rhee, Seok-Chae
    • Phonetics and Speech Sciences / v.3 no.2 / pp.3-10 / 2011
  • This study investigates the role of suprasegmental features in the intelligibility of Korean-spoken English as judged to be good or bad by Korean and English raters. It was hypothesized that Korean raters would evaluate differently from native English raters and that the effect might vary with the type of suprasegmental factor. Four Korean and four native English raters took part in evaluating the English speech of 14 Korean subjects, who each read a given paragraph. The results show that the evaluation of 'intelligibility' differs between the two groups and that the difference comes from their perception of L2 English suprasegmentals.

  • PDF

Effect of Music Therapy on Stroke Patients

  • Lee, Su-Kyung; Cho, Hye-Jin
    • Journal of Physiology & Pathology in Korean Medicine / v.20 no.2 / pp.498-502 / 2006
  • Neurological impairment produces cognitive, communicational, physical, and social deficits. Music has the power to help stroke patients regain speech and overcome other deficits; rhythm and melody help to rehabilitate memory, muscles, breathing, and more. This article introduces how music therapy approaches stroke patients and helps them, focusing particularly on speech, although music affects not just one part of the body but the whole body. Through cases in which music therapy was used, we can see how music helps stroke patients and how these goals can be achieved.

Inference Ability Based Emotion Recognition From Speech (추론 능력에 기반한 음성으로부터의 감성 인식)

  • Park, Chang-Hyun; Sim, Kwee-Bo
    • Proceedings of the KIEE Conference / 2004.05a / pp.123-125 / 2004
  • Recently, interest in user-friendly machines has been growing, and emotion is one of the most important conditions for a machine to feel familiar to people. A machine can use sound or images to express or recognize emotion; this paper deals with a method of recognizing emotion from sound. The most important emotional component of sound is tone, and the inference ability of the brain also takes part in emotion recognition. This paper empirically identifies the emotional components of speech, carries out emotion recognition experiments, and proposes a recognition method that uses these emotional components together with transition probabilities. (An illustrative tone-feature sketch follows this entry.)

  • PDF
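
The abstract does not spell out the feature set or the transition-probability model, so the sketch below only illustrates the general idea of summarizing the 'tone' of an utterance (pitch level, spread, range, slope, and energy) and feeding it to a classifier. The file names, labels, and the SVM classifier are assumptions, not the paper's method.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def tone_features(path, sr=16000):
    """Summarize the 'tone' of an utterance: pitch statistics plus
    energy statistics (an assumed feature set)."""
    y, sr = librosa.load(path, sr=sr)
    f0, _, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]                 # keep voiced frames only
    if f0.size < 2:                        # guard against unvoiced clips
        f0 = np.zeros(2)
    rms = librosa.feature.rms(y=y)[0]
    slope = np.polyfit(np.arange(f0.size), f0, 1)[0]   # overall pitch trend
    return [f0.mean(), f0.std(), f0.max() - f0.min(), slope, rms.mean(), rms.std()]

# Hypothetical labelled utterances (paths and labels are placeholders).
paths = ["happy_01.wav", "sad_01.wav", "angry_01.wav", "neutral_01.wav"]
labels = ["happy", "sad", "angry", "neutral"]

X = np.array([tone_features(p) for p in paths])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.predict([tone_features("test_utterance.wav")]))
```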

Keyphrase Extraction Using Active Learning and Clustering (Active Learning과 군집화를 이용한 고정키어구 추출)

  • Lee, Hyun-Woo; Cha, Jeong-Won
    • MALSORI / no.66 / pp.87-103 / 2008
  • We describe a new active learning method in the conditional random fields (CRF) framework for keyphrase extraction. To reduce annotation effort, we use diversity and representativeness measures: we select high-diversity training candidates by sentence confidence value, and high-representativeness candidates by clustering the part-of-speech patterns of their contexts. In experiments on a dialog corpus, our method achieves 86.80% while using 88% less training corpus than the supervised method. These results show that the proposed method outperforms previous methods. In addition, the proposed method can easily be applied to other tasks, since its implementation is independent of the application. (A selection-strategy sketch follows this entry.)

  • PDF
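
The exact CRF confidence measure and clustering setup are not given in the abstract, so the following only sketches the selection step it describes: keep the least-confident sentences (diversity), then cluster their part-of-speech patterns and annotate one sentence per cluster (representativeness). The scikit-learn components, the toy data, and the confidence values are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import pairwise_distances_argmin_min

def select_queries(sentences, pos_tags, confidences, n_queries=5, pool_size=50):
    """Pick sentences to annotate next: lowest model confidence first (diversity),
    then one sentence per cluster of part-of-speech patterns (representativeness)."""
    # 1. Diversity: pool the least-confident sentences.
    pool = np.argsort(confidences)[:pool_size]

    # 2. Representativeness: cluster POS-tag n-gram patterns of the pooled sentences.
    pos_docs = [" ".join(pos_tags[i]) for i in pool]
    X = CountVectorizer(ngram_range=(1, 2)).fit_transform(pos_docs)
    km = KMeans(n_clusters=n_queries, n_init=10, random_state=0).fit(X)

    # 3. Take the sentence closest to each cluster centroid.
    closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)
    return [sentences[pool[i]] for i in closest]

# Toy usage with made-up confidences and POS sequences.
sents = ["book a table", "cancel my order", "what time is it", "play some jazz", "stop"]
tags = [["VB", "DT", "NN"], ["VB", "PRP", "NN"], ["WP", "NN", "VBZ", "PRP"],
        ["VB", "DT", "NN"], ["VB"]]
conf = [0.42, 0.55, 0.91, 0.38, 0.77]
print(select_queries(sents, tags, conf, n_queries=2, pool_size=4))
```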

Teaching Pronunciation Using Sound Visualization Technology to EFL Learners

  • Min, Su-Jung; Pak, Hubert H.
    • English Language & Literature Teaching / v.13 no.2 / pp.129-153 / 2007
  • When English language teachers are deciding on their priorities for teaching pronunciation, it is imperative to know what kinds of differences and errors are most likely to interfere with communication, and what special problems speakers of particular first languages will have with English pronunciation. In other words, phoneme discrimination skill is an integral part of speech processing for EFL learners learning to converse in English. Training using sound visualization techniques can be effective in improving second language learners' perception and production of segmental and suprasegmental speech contrasts. This study assessed the efficacy of pronunciation training that provided visual feedback on pitch and durational contrasts to help EFL learners produce and perceive English phonemic distinctions. The subjects' ability to produce and to perceive novel English words was tested in two contexts before and after training: words in isolation and words in sentences. In comparison with an untrained control group, trainees showed improved perceptual and productive performance, transferred their knowledge to new contexts, and maintained their improvement three months after training. These findings support the feasibility of learner-centered programs using sound visualization techniques for English pronunciation instruction. (A small pitch-and-duration comparison sketch follows this entry.)

  • PDF
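
As with the French intonation study above, this kind of visual feedback can be approximated with open tools. A minimal sketch, assuming librosa, matplotlib, and placeholder file names, that overlays a learner's and a model speaker's pitch contours on a normalized time axis and reports each utterance's duration (the durational contrast):

```python
import librosa
import matplotlib.pyplot as plt
import numpy as np

def contour(path):
    """Return a time-normalized F0 contour and the utterance duration in seconds."""
    y, sr = librosa.load(path, sr=16000)
    f0, _, _ = librosa.pyin(y, fmin=75, fmax=500, sr=sr)
    t = np.linspace(0, 1, num=len(f0))      # normalize time to [0, 1]
    return t, f0, len(y) / sr

# Overlay the model speaker's and the learner's contours (placeholder files).
for path, label in [("model_speaker.wav", "model"), ("learner.wav", "learner")]:
    t, f0, dur = contour(path)
    plt.plot(t, f0, label=f"{label} ({dur:.2f} s)")

plt.xlabel("Normalized time")
plt.ylabel("F0 (Hz)")
plt.legend()
plt.title("Pitch contours: learner vs. model (durations in legend)")
plt.show()
```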