• Title/Summary/Keyword: speech rates


A Study of Speech Rate and Fluency in Normal Speakers (정상 성인의 말속도 및 유창성 연구)

  • Shin, Moon-Ja; Han, Sook-Ja
    • Speech Sciences / v.10 no.2 / pp.159-168 / 2003
  • The purpose of this study was to assess the speech rate, fluency, and disfluency types of normal adults in order to provide baseline data on normal speech. The subjects were 30 adults (14 females and 16 males) aged 17 to 36. Rate was measured in syllables per minute (SPM). Speech rates ranged from 273 to 426 SPM (mean = 348) in reading and from 118 to 409 SPM (mean = 265) in speaking. Average fluency was 99.1% in reading and 96.9% in speaking. The rater reliability of speech rate assessed from video was very high (r = 0.98), and the rater reliability of speech fluency was moderately high (r = 0.67). Disfluency types were also analyzed from 150 disfluency episodes; syllable repetition and word interjection were the most common.

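The rate and fluency measures reported above reduce to simple ratios; a minimal sketch (function names and sample counts are my own illustration, not the authors' code):

```python
def speech_rate_spm(syllable_count, duration_seconds):
    """Speech rate in syllables per minute (SPM)."""
    return 60.0 * syllable_count / duration_seconds

def percent_fluent(total_syllables, disfluent_syllables):
    """Fluency as the percentage of syllables produced without disfluency."""
    return 100.0 * (total_syllables - disfluent_syllables) / total_syllables

# 348 syllables read in 60 s gives 348 SPM, the study's mean reading rate.
rate = speech_rate_spm(348, 60.0)
# 9 disfluent syllables out of 1000 gives roughly 99.1% fluency.
fluency = percent_fluent(1000, 9)
```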

The Prosodic Characteristics of Children with Cochlear Implants with Respect to Speech Rate and Intonation Slope (인공와우이식 아동의 운율 특성 - 발화속도와 억양기울기를 중심으로 -)

  • Oh, Soon-Young; Seong, Cheol-Jae; Choi, Eun-Ah
    • Phonetics and Speech Sciences / v.3 no.3 / pp.157-165 / 2011
  • This study investigated speech rate and intonation slope (estimated by the least-squares method on F0 and quarter-tone scales) in the utterances of normal-hearing children and children with cochlear implants (CI). Participants were divided into three groups of 12: children who received CI before age 3;00, children who received CI after 3;00, and normal-hearing children. The materials consisted of four types of dialogue sentences in a non-honorific style. With group as the independent variable and speech rate and intonation slope as dependent variables, one-way ANOVAs showed that normal-hearing children had faster speech rates and steeper intonation slopes than the CI groups. More specifically, speech rate differed significantly between normal-hearing and CI children in all sentential patterns except the imperative form (p<.01). Additionally, the F0 and quarter-tone slopes of the sentence-final word differed significantly between normal-hearing and CI children in the imperative form (F0: p<.01; q-tone: p<.05).

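An intonation slope of the kind measured above can be obtained by least-squares regression of F0 against time; a sketch under my own assumptions (1-D lists, Hz scale; the paper also uses a quarter-tone scale):

```python
def f0_slope(times, f0_values):
    """Least-squares regression slope of F0 (Hz) against time (s)."""
    n = len(times)
    mean_t = sum(times) / n
    mean_f = sum(f0_values) / n
    num = sum((t - mean_t) * (f - mean_f) for t, f in zip(times, f0_values))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

# A fall from 220 Hz to 180 Hz over 0.4 s yields a slope of -100 Hz/s.
slope = f0_slope([0.0, 0.1, 0.2, 0.3, 0.4], [220, 210, 200, 190, 180])
```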

Optimization of State-Based Real-Time Speech Endpoint Detection Algorithm (상태변수 기반의 실시간 음성검출 알고리즘의 최적화)

  • Kim, Su-Hwan; Lee, Young-Jae; Kim, Young-Il; Jeong, Sang-Bae
    • Phonetics and Speech Sciences / v.2 no.4 / pp.137-143 / 2010
  • In this paper, a speech endpoint detection algorithm is proposed. The proposed algorithm is a state-transition-based method for speech detection. To reject short acoustic pulses that can be regarded as noise, it uses the duration information of all detected pulses. To optimize the parameters related to pulse length and the energy threshold for detecting speech intervals, an exhaustive search is adopted, with speech recognition rate as the performance index. Experimental results show that the proposed algorithm outperforms the baseline state-based endpoint detection algorithm. At 5 dB input SNR with beamformed input, the word recognition accuracies of its outputs were 78.5% for human voice noise and 81.1% for music noise.

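The pulse-duration idea described above can be sketched as follows; the threshold and minimum frame count are illustrative, not the optimized values from the paper's exhaustive search:

```python
def detect_speech_intervals(energies, threshold, min_frames):
    """Frames above an energy threshold form pulses; pulses shorter than
    min_frames are rejected as noise, the rest become speech intervals."""
    intervals, start = [], None
    for i, e in enumerate(energies):
        if e >= threshold and start is None:
            start = i
        elif e < threshold and start is not None:
            if i - start >= min_frames:   # keep only long-enough pulses
                intervals.append((start, i))
            start = None
    if start is not None and len(energies) - start >= min_frames:
        intervals.append((start, len(energies)))
    return intervals

# The 2-frame click is rejected; the 4-frame pulse survives.
e = [0.1, 0.9, 0.8, 0.1, 0.7, 0.9, 0.8, 0.9, 0.1]
print(detect_speech_intervals(e, 0.5, 3))  # [(4, 8)]
```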

Statistical Model-Based Voice Activity Detection Using Spatial Cues for Dual-Channel Noisy Speech Recognition (이중채널 잡음음성인식을 위한 공간정보를 이용한 통계모델 기반 음성구간 검출)

  • Shin, Min-Hwa; Park, Ji-Hun; Kim, Hong-Kook; Lee, Yeon-Woo; Lee, Seong-Ro
    • Phonetics and Speech Sciences / v.2 no.3 / pp.141-148 / 2010
  • In this paper, voice activity detection (VAD) employing spatial cues is proposed for dual-channel noisy speech recognition. In the proposed method, a probability model for speech presence/absence is constructed from spatial cues obtained from the dual-channel input signal, and speech activity intervals are detected through this model. In particular, the spatial cues consist of the interaural time and level differences of the dual-channel speech signals, and the probability model is based on a Gaussian kernel density. To evaluate the proposed VAD method, speech recognition is performed on segments that include only the speech intervals it detects. Its performance is compared with several methods: an SNR-based method, a direction-of-arrival (DOA) based method, and a phase-vector-based method. The speech recognition experiments show that the proposed method outperforms the conventional methods, providing relative word error rate reductions of 11.68%, 41.92%, and 10.15% over the SNR-based, DOA-based, and phase-vector-based methods, respectively.

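A Gaussian kernel density over spatial cues, as used above, can be sketched like this; the cue values and bandwidth are invented for illustration, and the decision rule is a plain likelihood comparison rather than the paper's statistical model:

```python
import math

def gaussian_kde(samples, bandwidth):
    """2-D Gaussian kernel density estimate with a shared scalar bandwidth."""
    def density(x):
        s = 0.0
        for itd, ild in samples:
            d2 = ((x[0] - itd) ** 2 + (x[1] - ild) ** 2) / bandwidth ** 2
            s += math.exp(-0.5 * d2)
        return s / (len(samples) * 2 * math.pi * bandwidth ** 2)
    return density

speech_frames = [(0.0, 0.0), (0.1, 0.2), (-0.1, 0.1)]  # (ITD, ILD) near talker
noise_frames = [(0.8, 2.0), (0.9, 1.8), (0.7, 2.2)]    # off-axis cues
p_speech = gaussian_kde(speech_frames, 0.3)
p_noise = gaussian_kde(noise_frames, 0.3)

frame = (0.05, 0.1)
is_speech = p_speech(frame) > p_noise(frame)  # likelihood comparison
```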

Google speech recognition of an English paragraph produced by college students in clear or casual speech styles (대학생들이 또렷한 음성과 대화체로 발화한 영어문단의 구글음성인식)

  • Yang, Byunggon
    • Phonetics and Speech Sciences / v.9 no.4 / pp.43-50 / 2017
  • These days, the voice models of speech recognition software are sophisticated enough to process natural speech without any prior training. However, little research has examined the use of speech recognition tools in pronunciation education. This paper examined Google speech recognition of a short English paragraph produced by Korean college students in clear and casual speech styles, in order to diagnose and resolve students' pronunciation problems. Thirty-three Korean college students participated in the recording of the English paragraph. The Google soundwriter was employed to collect data on the word recognition rates for the paragraph. Results showed that the total word recognition rate was 73%, with a standard deviation of 11.5%. The word recognition rate for clear speech was around 77.3%, while that for casual speech amounted to 68.7%. The low recognition rate of casual speech was attributed both to individual pronunciation errors and to the software itself, as shown in its fricative recognition. Various distributions of unrecognized words were observed depending on the participant and proficiency group. From the results, the author concludes that speech recognition software is useful for diagnosing individual or group pronunciation problems. Further studies on the progressive improvement of learners' erroneous pronunciations would be desirable.
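Word recognition rates like those above can be estimated by aligning the recognizer's transcript against the reference paragraph; a sketch using word-level edit distance (my own illustration of the scoring idea, not the study's procedure):

```python
def word_recognition_rate(reference, hypothesis):
    """Percentage of reference words recovered, via word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return 100.0 * (1 - d[len(ref)][len(hyp)] / len(ref))

# One substituted word out of three -> about 66.7%.
rate = word_recognition_rate("please call stella", "please tall stella")
```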

Voice Features Extraction of Lung Diseases Based on the Analysis of Speech Rates and Intensity (발화속도 및 강도 분석에 기반한 폐질환의 음성적 특징 추출)

  • Kim, Bong-Hyun; Cho, Dong-Uk
    • The KIPS Transactions: Part B / v.16B no.6 / pp.471-478 / 2009
  • Lung diseases, classified among the six major incurable diseases of modern times, are caused mostly by smoking and air pollution. These causes damage lung function and impair the exchange of carbon dioxide and oxygen in the alveoli, and interest in such diseases is growing as life expectancy increases. Accordingly, in this paper we propose a method for diagnosing lung diseases by applying voice analysis parameters to extract vocal features. First, we sampled voice data from patients and from normal persons of the same age and sex, forming two sample groups. We then analyzed the collected voice data using various voice analysis parameters. Among the analyzed parameters, speech rate and intensity showed a significant difference between the patient and normal groups: the patient group exhibited slower speech rates and greater intensity than the normal group. On this basis, we propose a method of vocal feature extraction for lung diseases.
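Two of the parameters compared above, speech rate and intensity, can be sketched as simple measurements; the sample values are invented, not the study's data:

```python
import math

def rms_db(samples):
    """RMS intensity of a waveform in dBFS (samples scaled to [-1, 1])."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def speech_rate(syllables, seconds):
    """Speech rate in syllables per second."""
    return syllables / seconds

loud = rms_db([0.5, -0.5, 0.5, -0.5])        # about -6.02 dBFS
quiet = rms_db([0.05, -0.05, 0.05, -0.05])   # 20 dB lower
```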

Coding Method of Variable Threshold Dual Rate ADPCM Speech Considering the Background Noise (배경 잡음환경에서 가변 임계값에 의한 Dual Rate ADPCM 음성 부호화 기법)

  • 한경호
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.17 no.6 / pp.154-159 / 2003
  • In this paper, we propose a variable-threshold dual-rate ADPCM coding method that adopts two coding rates of the standard ITU-T G.726 ADPCM to improve speech quality at a comparably low coding rate. The zero-crossing rate (ZCR) is computed for the speech data; under noisy conditions, noise-dominant regions show a higher ZCR and speech-dominant regions a lower ZCR. Data with a higher ZCR are encoded at the low coding rate to reduce the amount of coded data, while data with a lower ZCR are encoded at the high coding rate to improve speech quality. For the coded data, 2 bits are assigned at the low coding rate of 16 kbps and 5 bits at the high coding rate of 40 kbps. Simulations show that the proposed variable dual-rate ADPCM coding technique achieves good speech quality at a low coding rate.
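The ZCR-based rate selection described above can be sketched as follows; the threshold is illustrative, not the paper's variable threshold:

```python
def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

def select_rate_kbps(frame, zcr_threshold=0.5):
    """High-ZCR (noise-dominant) frames -> 16 kbps; low-ZCR -> 40 kbps."""
    return 16 if zero_crossing_rate(frame) > zcr_threshold else 40

noisy = [1, -1, 1, -1, 1, -1, 1, -1]   # alternating signs: ZCR = 1.0
voiced = [1, 2, 3, 2, 1, -1, -2, -1]   # one sign change: ZCR = 1/7
print(select_rate_kbps(noisy), select_rate_kbps(voiced))  # 16 40
```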

The Study of Speech Rate in Normal-Speaking Adults and Children (정상 성인 및 아동의 구어속도에 관한 연구)

  • Ahn, Jong-Bok; Shin, Myung-Sun; Kwon, Do-Ha
    • Speech Sciences / v.9 no.4 / pp.93-103 / 2002
  • The purpose of this study was to establish preliminary data on speech rates in groups of normal-speaking adults and children. The results are intended to serve as clinical measurement guidelines for the diagnosis, assessment, treatment planning, and therapy progress of stuttering. Thirty-one adults (16 females, 15 males) aged 18-30 years and thirty normally developing children (15 females, 15 males) aged 8-10 participated in the study. The subjects' reading of the Stroll (Jeong, 1994) passage and a 1-minute portion of talking about the daily routine were sampled. The adults had rates of 308.29 ± 22.57 syllables per minute (SPM) or 108.06 ± 6.17 words per minute (WPM) during reading, and 252.87 ± 40.86 SPM and 92.26 ± 17.12 WPM during talking. The children had rates of 176.67 ± 33.65 SPM or 64.07 ± 12.62 WPM during reading, and 149.30 ± 33.14 SPM and 56.60 ± 11.36 WPM during talking. T-tests for the adults showed that SPM in reading (t=2.211, p < .05) and WPM in talking (t=-2.284, p < .05) differed significantly by gender. To determine whether rate differs across children's gender and age, a two-way ANOVA was performed. Both SPM and WPM in the reading task differed significantly between the children aged 8 and 10 (p < .01). In the speaking task, both SPM and WPM differed significantly between the children aged 8 and 10, and between those aged 9 and 10.


Robust Feature Extraction for Voice Activity Detection in Nonstationary Noisy Environments (음성구간검출을 위한 비정상성 잡음에 강인한 특징 추출)

  • Hong, Jungpyo; Park, Sangjun; Jeong, Sangbae; Hahn, Minsoo
    • Phonetics and Speech Sciences / v.5 no.1 / pp.11-16 / 2013
  • This paper proposes a robust feature extraction method for accurate voice activity detection (VAD). VAD is one of the principal modules in speech signal processing systems such as speech codecs, speech enhancement, and speech recognition. Noisy environments contain nonstationary noises that drastically degrade VAD accuracy, because the fluctuation of features in noise intervals increases the false alarm rate. To improve VAD performance, a harmonic-weighted energy is proposed. This feature extraction method focuses on voiced speech intervals and uses weighted harmonic-to-noise ratios to determine the contribution of harmonicity to the frame energy. For performance evaluation, receiver operating characteristic curves and the equal error rate are measured.
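One way to realize a harmonic-weighted energy of the kind proposed above is to scale each frame's energy by an autocorrelation-based harmonicity estimate; this is my reading of the idea, not the authors' formula, and the lag range is illustrative:

```python
def harmonicity(frame, min_lag, max_lag):
    """Peak normalized autocorrelation over a pitch-lag range."""
    energy = sum(s * s for s in frame)
    if energy == 0:
        return 0.0
    best = 0.0
    for lag in range(min_lag, max_lag + 1):
        r = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        best = max(best, r / energy)
    return best

def harmonic_weighted_energy(frame, min_lag=2, max_lag=8):
    """Frame energy scaled by harmonicity, so voiced frames dominate."""
    return harmonicity(frame, min_lag, max_lag) * sum(s * s for s in frame)

periodic = [1, 0, -1, 0] * 4       # period-4 "voiced" frame
h = harmonicity(periodic, 2, 8)    # autocorrelation peaks at lag 4
```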

Speech Feature Extraction Based on the Human Hearing Model

  • Chung, Kwang-Woo; Kim, Paul; Hong, Kwang-Seok
    • Proceedings of the KSPS conference / 1996.10a / pp.435-447 / 1996
  • In this paper, we propose a method that extracts speech features using a hearing model through signal processing techniques. The proposed method includes the following procedure: normalization of the short-time speech block by its maximum value, multi-resolution analysis using the discrete wavelet transform and re-synthesis using the discrete inverse wavelet transform, differentiation after analysis and synthesis, and full-wave rectification and integration. To verify the performance of the proposed speech feature in speech recognition tasks, Korean digit recognition experiments were carried out using both DTW and VQ-HMM. The results showed that, with DTW, the recognition rates were 99.79% and 90.33% for the speaker-dependent and speaker-independent tasks, respectively, and, with VQ-HMM, the rates were 96.5% and 81.5%, respectively. This indicates that the proposed speech feature has potential as a simple and efficient feature for recognition tasks.

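The DTW matching used in the digit experiments above can be sketched in its standard form (1-D feature sequences for brevity; real recognizers compare frame-wise feature vectors):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    inf = float("inf")
    d = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match/substitution
    return d[len(a)][len(b)]

# A time-stretched copy aligns perfectly; an unrelated sequence does not.
template = [0.0, 1.0, 2.0, 1.0, 0.0]
stretched = [0.0, 0.0, 1.0, 2.0, 2.0, 1.0, 0.0]
other = [3.0, 3.0, 3.0, 3.0, 3.0]
```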