• Title/Summary/Keyword: Speech analysis


How Korean Learner's English Proficiency Level Affects English Speech Production Variations

  • Hong, Hye-Jin; Kim, Sun-Hee; Chung, Min-Hwa
    • Phonetics and Speech Sciences, v.3 no.3, pp.115-121, 2011
  • This paper examines how L2 speech production varies according to the learner's L2 proficiency level. L2 speech production variations are analyzed with quantitative measures at the word and phone levels using a Korean learners' English corpus. Word-level variations are analyzed using correctness, which captures how spoken realizations differ from the canonical forms, while phone-level variations are analyzed using accuracy, which reflects phone insertions and deletions together with substitutions. The results show that the speech production of learners at different L2 proficiency levels differs considerably in terms of performance and individual realizations at both the word and phone levels. These results confirm that the speech production of non-native speakers varies with their L2 proficiency level even when they share the same L1 background. Furthermore, the findings can help improve the non-native speech recognition performance of ASR-based English language education systems for Korean learners of English.

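The abstract does not spell out how correctness and accuracy are computed, so the sketch below assumes the usual ASR-style scoring: correctness counts substitutions and deletions against the canonical (reference) string, while accuracy additionally penalizes insertions, with counts taken from a minimal edit-distance alignment. The phone symbols in the example are hypothetical.

```python
# Sketch: HTK-style correctness and accuracy from an edit-distance alignment.
# Assumption (not stated in the abstract): correctness = (N - S - D) / N and
# accuracy = (N - S - D - I) / N, where N is the reference length.

def align_counts(reference, hypothesis):
    """Return (substitutions, deletions, insertions) of a minimal edit alignment."""
    n, m = len(reference), len(hypothesis)
    # dp[i][j] = (cost, S, D, I) for reference[:i] vs hypothesis[:j]
    dp = [[None] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = (0, 0, 0, 0)
    for i in range(1, n + 1):
        c = dp[i - 1][0]
        dp[i][0] = (c[0] + 1, c[1], c[2] + 1, c[3])          # deletions only
    for j in range(1, m + 1):
        c = dp[0][j - 1]
        dp[0][j] = (c[0] + 1, c[1], c[2], c[3] + 1)          # insertions only
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cand = [
                (dp[i - 1][j - 1][0] + sub,
                 dp[i - 1][j - 1][1] + sub, dp[i - 1][j - 1][2], dp[i - 1][j - 1][3]),
                (dp[i - 1][j][0] + 1,
                 dp[i - 1][j][1], dp[i - 1][j][2] + 1, dp[i - 1][j][3]),
                (dp[i][j - 1][0] + 1,
                 dp[i][j - 1][1], dp[i][j - 1][2], dp[i][j - 1][3] + 1),
            ]
            dp[i][j] = min(cand, key=lambda t: t[0])
    _, s, d, ins = dp[n][m]
    return s, d, ins

def correctness_and_accuracy(reference, hypothesis):
    s, d, ins = align_counts(reference, hypothesis)
    n = len(reference)
    return (n - s - d) / n, (n - s - d - ins) / n

# Example: canonical phone string vs. a learner realization with vowel epenthesis
# (hypothetical symbols, a pattern often reported for Korean L1 speakers).
ref = ["s", "t", "r", "ay", "k"]
hyp = ["s", "eu", "t", "eu", "r", "ay", "k"]
print(correctness_and_accuracy(ref, hyp))   # correctness 1.0, accuracy 0.6
```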

Effects of Phonetic Complexity and Articulatory Severity on Percentage of Correct Consonant and Speech Intelligibility in Adults with Dysarthria (조음복잡성 및 조음중증도에 따른 마비말장애인의 자음정확도와 말명료도)

  • Song, HanNae; Lee, Youngmee; Sim, HyunSub; Sung, JeeEun
    • Phonetics and Speech Sciences, v.5 no.1, pp.39-46, 2013
  • This study examined the effects of phonetic complexity and articulatory severity on the Percentage of Correct Consonants (PCC) and speech intelligibility in adults with dysarthria. Speech samples of thirty-two words from the APAC (Assessment of Phonology and Articulation of Children) were collected from 38 dysarthric speakers at one of two levels of articulatory severity (mild or mild-moderate). PCC and speech intelligibility scores were calculated at each of the four levels of phonetic complexity. A two-way mixed ANOVA revealed that (1) the mild-severity group showed significantly higher PCC and speech intelligibility scores than the mild-moderate group, (2) PCC at phonetic complexity level 4 was significantly lower than at the other levels, and (3) an interaction effect of articulatory severity and phonetic complexity was observed only for PCC. Pearson correlation analysis demonstrated that the degree of correlation between PCC and speech intelligibility varied depending on the level of articulatory severity and phonetic complexity. The clinical implications of these findings are discussed.
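
As a rough illustration of the two measures related in this study, the sketch below computes PCC as the proportion of correctly produced consonants and correlates it with intelligibility scores using `scipy.stats.pearsonr`. The per-speaker counts and intelligibility values are invented for illustration and do not come from the study's APAC data.

```python
# Sketch: Percentage of Correct Consonants (PCC) and its correlation with
# intelligibility. All numbers below are hypothetical.
from scipy.stats import pearsonr

def pcc(correct_consonants, total_consonants):
    """PCC = correctly produced consonants / consonants attempted * 100."""
    return 100.0 * correct_consonants / total_consonants

# Hypothetical per-speaker (correct, total, intelligibility %) triples.
speakers = [(52, 60, 88.0), (40, 60, 71.5), (33, 60, 64.0), (25, 60, 49.5)]
pcc_scores = [pcc(c, t) for c, t, _ in speakers]
intelligibility = [i for _, _, i in speakers]

r, p = pearsonr(pcc_scores, intelligibility)
print(f"PCC scores: {pcc_scores}")
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```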

A study on speech analysis of person with presbycusis (노인성 난청인의 음성특성에 관한 연구)

  • Lee, S.M.; Song, C.G.; Woo, H.C.; Lee, Y.M.; Kim, W.K.
    • Proceedings of the KOSOMBE Conference, v.1997 no.11, pp.67-70, 1997
  • In this paper, we evaluated the speech characteristics of hearing-impaired persons (HIP) who acquired their hearing loss after adolescence. Severely hearing-impaired persons usually show deterioration not only in speech perception but also in vocalization, so sensitive and quantitative measures of their speech are needed for both diagnostic and prognostic purposes. Seven HIP and twelve normal-hearing persons (NHP) were studied with a pure-tone test and a speaking test using a word/sentence table consisting of the vowel /a:/, one- and two-syllable words, and a sentence. We analyzed the formant frequencies, pitch, sound intensity, and speech duration of the HIP and NHP speech. The results show that in the HIP speech the formant frequencies were shifted, the first-formant prominence was reduced, the dynamic range of sound intensity was decreased, and the speech duration was prolonged. In future work, we expect the correlation between hearing and the speech characteristics of HIP to be clarified through the analysis of more acoustic parameters and a more precise selection of the HIP group.

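A minimal sketch of extracting the kinds of acoustic measures listed above (duration, fundamental frequency, intensity range, and rough formant estimates) with librosa. The file name, F0 search range, LPC order, and analysis window are assumptions for illustration, not the settings used in the paper.

```python
# Sketch: duration, F0, intensity dynamic range, and LPC-based formant candidates.
# "speech.wav" is a placeholder recording; parameter choices are assumptions.
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=16000)

# Utterance duration (s).
duration = len(y) / sr

# Fundamental frequency via pYIN; NaNs mark unvoiced frames.
f0, voiced, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
mean_f0 = np.nanmean(f0)

# Intensity proxy: dynamic range of frame RMS energy in dB.
rms = librosa.feature.rms(y=y)[0]
rms_db = 20 * np.log10(rms + 1e-10)
dynamic_range_db = rms_db.max() - rms_db.min()

# Rough formant candidates from LPC roots over a short (assumed vowel) segment.
segment = y[int(0.10 * sr):int(0.15 * sr)] * np.hamming(int(0.05 * sr))
a = librosa.lpc(segment, order=int(2 + sr / 1000))
roots = [r for r in np.roots(a) if np.imag(r) > 0]
formants = sorted(np.angle(roots) * sr / (2 * np.pi))

print(f"duration = {duration:.2f} s, mean F0 = {mean_f0:.1f} Hz")
print(f"RMS dynamic range = {dynamic_range_db:.1f} dB")
print(f"formant candidates (Hz): {[round(f) for f in formants[:3]]}")
```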

The Effects of Pitch Increasing Training (PIT) on Voice and Speech of a Patient with Parkinson's Disease: A Pilot Study

  • Lee, Ok-Bun; Jeong, Ok-Ran; Shim, Hong-Im; Jeong, Han-Jin
    • Speech Sciences, v.13 no.1, pp.95-105, 2006
  • The primary goal of therapeutic intervention for dysarthric speakers is to increase speech intelligibility. Deciding which critical features to target in order to increase intelligibility is very important in speech therapy. The purpose of this study is to examine the effects of pitch increasing training (PIT) on the speech of a subject with Parkinson's disease (PD). The PIT program focuses on increasing pitch while a vowel is sustained at the same loudness, with the loudness level somewhat higher than habitual loudness. A 67-year-old female with PD participated in the study. Speech therapy was conducted for 4 sessions (200 minutes) over one week. Before and after the treatment, acoustic, perceptual, and speech naturalness evaluations were performed for data analysis. A speech and voice satisfaction index (SVSI) was obtained after the treatment. Results showed improvements in voice quality and speech naturalness. In addition, the patient's satisfaction ratings (SVSI) indicated a positive relationship between improved speech production and the satisfaction of the patient and her caregivers.

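A small sketch of the kind of before/after acoustic comparison described above, summarizing mean F0 and pitch range for sustained-vowel recordings. The file names and the 100 Hz semitone reference are placeholders, not the study's protocol.

```python
# Sketch: pre/post comparison of mean F0 and pitch range for a sustained vowel.
# "vowel_pre.wav"/"vowel_post.wav" are placeholder recordings.
import numpy as np
import librosa

def pitch_summary(path, fmin=75, fmax=500):
    y, sr = librosa.load(path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    f0 = f0[~np.isnan(f0)]                         # keep voiced frames only
    semitones = 12 * np.log2(f0 / 100.0)           # re 100 Hz (arbitrary reference)
    return f0.mean(), semitones.max() - semitones.min()

for label, path in [("pre", "vowel_pre.wav"), ("post", "vowel_post.wav")]:
    mean_f0, st_range = pitch_summary(path)
    print(f"{label}: mean F0 = {mean_f0:.1f} Hz, range = {st_range:.1f} semitones")
```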

An Acoustic Analysis of Speech in Patients with Nonfluent Aphasia (비 유창성 실어증 환자 말소리의 음향학적 분석)

  • Kim, Hyun-Gi; Kang, Eun-Young; Kim, Yun-Hee
    • Speech Sciences, v.9 no.3, pp.87-97, 2002
  • The purpose of this study is to analyze speech durations in Korean-speaking aphasics. Five patients with nonfluent aphasia (2 with traumatic brain injury and 3 with strokes) and five normal adults participated in this experiment. The mean age was 45.8 ± 2.3 years for the patients with nonfluent aphasia and 47.4 ± 2.3 years for the normal adults. The Computerized Speech Lab was used to evaluate the acoustic characteristics of the subjects. Voice onset time, vowel duration, total duration, hold duration, and consonant duration were measured for monosyllabic and polysyllabic words. In the intervocalic position, the patients with nonfluent aphasia did not show a voicing bar in the hold (closure) region, whereas it was seen in the normal speakers. The explosion duration of glottalized stops in the intervocalic position was significantly prolonged in the nonfluent aphasics in comparison with the normal speakers, which suggests that laryngeal adjustment is disturbed in these patients. The consonant duration, vowel duration, and total duration of polysyllabic words were significantly longer in the patients with nonfluent aphasia than in the normal speakers. These results demonstrate disturbances in controlling the articulatory muscles during sound production in patients with nonfluent aphasia. An objective and quantitative analysis based on the acoustic characteristics of nonfluent aphasics will be very useful in therapy planning and in evaluating the effects of speech therapy.

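A brief sketch of how duration measures such as VOT, vowel duration, and total duration can be computed from labelled segment boundaries and compared across groups. The segment labels, times, and group samples below are hypothetical; the study's actual measurements were made with the Computerized Speech Lab.

```python
# Sketch: duration measures from hand-labelled segment boundaries plus a simple
# group comparison. All labels, times, and samples are illustrative only.
from scipy.stats import ttest_ind

def durations(labels):
    """labels: list of (segment_name, start_s, end_s) for one word token."""
    vot = next(e - s for name, s, e in labels if name == "VOT")
    vowel = sum(e - s for name, s, e in labels if name in ("V1", "V2"))
    total = labels[-1][2] - labels[0][1]
    return vot, vowel, total

token = [("burst", 0.000, 0.010), ("VOT", 0.010, 0.055),
         ("V1", 0.055, 0.210), ("C2", 0.210, 0.290), ("V2", 0.290, 0.470)]
print(durations(token))   # (VOT, vowel duration, total duration) in seconds

# Hypothetical total-duration samples (s) for aphasic vs. control speakers.
aphasic = [0.92, 1.10, 1.05, 0.98, 1.21]
control = [0.61, 0.58, 0.66, 0.70, 0.63]
t, p = ttest_ind(aphasic, control)
print(f"t = {t:.2f}, p = {p:.4f}")
```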

Optimum MVF Estimation-Based Two-Band Excitation for HMM-Based Speech Synthesis

  • Han, Seung-Ho; Jeong, Sang-Bae; Hahn, Min-Soo
    • ETRI Journal, v.31 no.4, pp.457-459, 2009
  • An optimum maximum voiced frequency (MVF) estimation-based two-band excitation for hidden-Markov-model-based speech synthesis is presented. An analysis-by-synthesis scheme is adopted for the MVF estimation, which leads to the minimum spectral distortion of the synthesized speech. Experimental results show that the proposed method significantly improves synthetic speech quality.
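
The abstract gives no implementation details, so the sketch below only shows the basic two-band excitation construction for a fixed MVF: a pulse train low-passed at the MVF plus high-passed noise. The MVF value, F0, and filter order are illustrative assumptions, and the paper's analysis-by-synthesis search for the optimum MVF is not reproduced.

```python
# Sketch: two-band excitation for a given maximum voiced frequency (MVF):
# periodic pulses below the MVF, white noise above it. Values are illustrative.
import numpy as np
from scipy.signal import butter, lfilter

sr, f0, mvf = 16000, 120.0, 4000.0       # sample rate, pitch, assumed MVF (Hz)
n = sr // 2                              # 0.5 s of excitation

# Pulse train at F0 for the voiced (low) band.
pulses = np.zeros(n)
pulses[::int(sr / f0)] = 1.0

# Gaussian noise for the unvoiced (high) band.
noise = np.random.randn(n)

b_lo, a_lo = butter(4, mvf / (sr / 2), btype="low")
b_hi, a_hi = butter(4, mvf / (sr / 2), btype="high")
excitation = lfilter(b_lo, a_lo, pulses) + lfilter(b_hi, a_hi, noise)
print(excitation.shape)
```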

Continuous Speech Recognition using Syntactic Analysis and One-Stage DMS/DP (구문 분석과 One-Stage DMS/DP를 이용한 연속음 인식)

  • 안태옥
    • Journal of the Institute of Electronics Engineers of Korea SP, v.41 no.3, pp.201-207, 2004
  • This paper is a study of continuous speech recognition using syntactic analysis and One-Stage DMS/DP. To perform the recognition, we first build DMS models with a section-division algorithm and then recognize continuous speech data with the One-Stage DMS/DP method using syntactic analysis. In addition to the recognition experiments with the proposed method, we ran the conventional one-stage DP method on the same data under the same conditions. The recognition experiments show that One-Stage DMS/DP using syntactic analysis is superior to the conventional method.
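
As a rough illustration of the decoding framework named above, the sketch below implements a bare-bones One-Stage DP search over word templates, with a toy allowed-successor table standing in for the syntactic constraint. The paper's DMS modelling and section-division algorithm are not reproduced; the feature frames, templates, and grammar are invented.

```python
# Sketch: a minimal One-Stage DP decoder for connected words with a toy grammar.
import numpy as np

def one_stage_dp(frames, templates, allowed_next):
    """frames: (T, d) array; templates: {word: (N_w, d) array};
    allowed_next: {word: set of words that may follow it}. Returns best distance."""
    INF = np.inf
    words = list(templates)
    D = {w: np.full((len(frames), len(templates[w])), INF) for w in words}

    for t, x in enumerate(frames):
        if t > 0:
            # Best score of any legal predecessor word ending at frame t-1.
            best_end = {w: min((D[v][t - 1, -1] for v in words if w in allowed_next[v]),
                               default=INF)
                        for w in words}
        for w in words:
            ref = templates[w]
            for j in range(len(ref)):
                local = np.linalg.norm(x - ref[j])
                if t == 0:
                    prev = 0.0 if j == 0 else INF    # any word may start at frame 0
                else:
                    cands = [D[w][t - 1, j]]              # stay in state j
                    if j > 0:
                        cands.append(D[w][t - 1, j - 1])  # advance one state
                    if j > 1:
                        cands.append(D[w][t - 1, j - 2])  # skip one state
                    if j == 0:
                        cands.append(best_end[w])         # cross-word transition
                    prev = min(cands)
                D[w][t, j] = local + prev
    return min(D[w][-1, -1] for w in words)

# Toy example: 1-D "features", two word templates, grammar where only B may follow A.
frames = np.array([[0.1], [0.2], [0.9], [1.1], [1.0]])
templates = {"A": np.array([[0.0], [0.2]]), "B": np.array([[1.0], [1.0]])}
allowed_next = {"A": {"B"}, "B": {"A", "B"}}
print(one_stage_dp(frames, templates, allowed_next))
```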

The Acoustic Analysis of Korean Read Speech - with respect to the prosodic phrasing - (한국어 낭독체 문장의 음향분석 -바람과 햇님의 운율구 생성을 중심으로-)

  • Sung Chuljae
    • Proceedings of the KSPS conference, 1996.02a, pp.157-172, 1996
  • This study aims to suggest a theoretical methodology for analyzing the prosodic patterns of Korean read speech. Engineering work relevant to phonetic studies has highlighted the importance of prosodic phrasing, which can play a major role in analyzing phonetic databases. Before establishing the prosodic phrase as a prosodic unit, we should describe the features of the boundary signals in a target sentence. With this in mind, the general characteristics of read speech and the ToBI (Tones and Break Indices) system, currently the most widely used prosodic labelling convention, are presented as a first step. The concrete analysis was carried out on the Korean version of the fable 'The North Wind and the Sun', in which about 25 prosodic units were discriminated perceptually by 5 subjects. Once the various cues that can be used to decide boundary positions are established systematically, we can proceed to the next step, the acoustic analysis of prosodic units. For improving the naturalness of synthetic speech, the primary task is to detect the boundary signals in the speech file and, accordingly, to re-establish them in the raw text.

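A minimal sketch of one way to locate candidate prosodic-phrase boundaries automatically, using pause (low-energy) stretches as boundary signals. The energy threshold, minimum pause length, and file name are assumptions; the paper's own boundaries were established perceptually with break indices.

```python
# Sketch: pause-based candidate boundaries from frame RMS energy.
# "north_wind_and_sun.wav" is a placeholder recording; thresholds are assumptions.
import numpy as np
import librosa

y, sr = librosa.load("north_wind_and_sun.wav", sr=16000)
frame, hop = 400, 160                        # 25 ms frames, 10 ms hop
rms = librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)[0]
silence = rms < 0.05 * rms.max()             # crude energy threshold

# Pauses = runs of silent frames longer than 200 ms; report their start times (s).
min_pause = int(0.2 * sr / hop)
boundaries, run_start = [], None
for i, s in enumerate(np.append(silence, False)):
    if s and run_start is None:
        run_start = i
    elif not s and run_start is not None:
        if i - run_start >= min_pause:
            boundaries.append(run_start * hop / sr)
        run_start = None
print([round(t, 2) for t in boundaries])
```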

An acoustical analysis of emotional speech using close-copy stylization of intonation curve (억양의 근접복사 유형화를 이용한 감정음성의 음향분석)

  • Yi, So Pae
    • Phonetics and Speech Sciences, v.6 no.3, pp.131-138, 2014
  • A close-copy stylization of the intonation curve was used for an acoustical analysis of emotional speech. For the analysis, 408 utterances of five emotions (happiness, anger, fear, neutral, and sadness) were processed to extract acoustical feature values. The results show that certain pitch point features (pitch point movement time and pitch point distance within a sentence) and sentence-level features (pitch range of the final pitch point, pitch range of a sentence, and pitch slope of a sentence) are affected by emotion. Pitch point movement time, pitch point distance within a sentence, and pitch slope of a sentence show no significant difference between male and female participants. The emotions with high arousal (happiness and anger) are consistently distinguished from the emotion with low arousal (sadness) in terms of these acoustical features: emotions with higher arousal show a steeper sentence-level pitch slope, a steeper pitch slope at the end of a sentence, and a wider sentence-level pitch range. The acoustical analysis in this study suggests that measurements of these acoustical features can be used to cluster and identify the emotions of speech.
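
A small sketch of a close-copy-style stylization step: reducing an F0 contour to a few pitch points by recursive splitting at the point of maximum deviation (a Douglas-Peucker-style procedure). The 1.5-semitone tolerance and the synthetic contour are assumptions for illustration, not the stylization settings or data of the paper.

```python
# Sketch: piecewise-linear stylization of an F0 contour into pitch points.
import numpy as np

def stylize(times, f0_st, tol):
    """Return indices of pitch points keeping deviation within `tol` semitones."""
    def recurse(lo, hi):
        t0, t1 = times[lo], times[hi]
        interp = f0_st[lo] + (f0_st[hi] - f0_st[lo]) * (times[lo:hi + 1] - t0) / (t1 - t0)
        dev = np.abs(f0_st[lo:hi + 1] - interp)
        k = int(np.argmax(dev))
        if dev[k] <= tol or hi - lo < 2:
            return [lo, hi]
        mid = lo + k
        return recurse(lo, mid)[:-1] + recurse(mid, hi)
    return recurse(0, len(times) - 1)

# Synthetic voiced contour: 2 s of F0 in Hz, converted to semitones re 100 Hz.
times = np.linspace(0, 2, 200)
f0_hz = 180 + 40 * np.sin(2 * np.pi * 0.8 * times) - 20 * times
f0_st = 12 * np.log2(f0_hz / 100.0)
points = stylize(times, f0_st, tol=1.5)
print(f"{len(points)} pitch points at t = {[round(times[i], 2) for i in points]}")
```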

Analysis of Transient Features in Speech Signal by Estimating the Short-term Energy and Inflection points (변곡점 및 단구간 에너지평가에 의한 음성의 천이구간 특징분석)

  • Choi, I.H.; Jang, S.K.; Cha, T.H.; Choi, U.S.; Kim, C.S.
    • Speech Sciences, v.3, pp.156-166, 1998
  • In this paper, we propose a method for segmenting speech by estimating inflection points and the short-term average-magnitude energy of the signal. The proposed method not only addresses the problems of segmentation based on the zero-crossing rate, but also makes it possible to characterize the transient period after separating the starting point and the transient period that precede the steady state of the speech signal. Experiments on monosyllabic speech showed that, even for speech samples with a DC offset, the starting and ending points of the speech signals were divided exactly by the method. In addition, features such as the length of the transient period, the short-term energy, and the frequency characteristics could be compared for each speech signal.

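A brief sketch of the two quantities this paper combines, the short-term average-magnitude energy and the inflection points of the (smoothed) energy curve, applied to a synthetic monosyllable-like signal. The frame sizes, smoothing, and thresholds are illustrative assumptions, not the paper's settings.

```python
# Sketch: short-term average-magnitude energy and inflection-point detection.
import numpy as np

def short_term_energy(x, frame=256, hop=128):
    """Average-magnitude energy per frame."""
    return np.array([np.mean(np.abs(x[i:i + frame]))
                     for i in range(0, len(x) - frame, hop)])

# Synthetic signal: leading silence, a noisy transient onset, then a steady vowel.
sr = 8000
sig = np.concatenate([
    0.005 * np.random.randn(2000),                               # leading silence
    0.3 * np.random.randn(800) * np.linspace(0, 1, 800),         # transient onset
    0.8 * np.sin(2 * np.pi * 150 * np.arange(6000) / sr),        # steady vowel
])

energy = short_term_energy(sig)
smooth = np.convolve(energy, np.ones(5) / 5, mode="same")

# Inflection points: sign changes of the second difference of the smoothed energy.
d2 = np.diff(smooth, n=2)
inflections = np.where(np.diff(np.sign(d2)) != 0)[0] + 1

start = int(np.argmax(smooth > 0.1 * smooth.max()))   # crude starting-point estimate
print(f"starting frame ~ {start}, inflection frames: {inflections[:10].tolist()}")
```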