• Title/Summary/Keywords: speech rates

Search results: 271

정상 성인의 말속도 및 유창성 연구 (A Study of Speech Rate and Fluency in Normal Speakers)

  • 신문자;한숙자
    • 음성과학 / Vol. 10 No. 2 / pp.159-168 / 2003
  • The purpose of this study was to assess the speech rate, fluency, and types of disfluency of normal adults in order to provide basic data on normal speaking. The subjects were 30 adults (14 females and 16 males) whose ages ranged from 17 to 36. Rate was measured in syllables per minute (SPM). Speech rates in reading ranged from 273 to 426 SPM with a mean of 348 SPM, and in speaking from 118 to 409 SPM (mean = 265). Average fluency was 99.1% in reading and 96.9% in speaking. The rater reliability of speech rate assessed from video was very high (r=0.98), and the rater reliability of speech fluency was moderately high (r=0.67). Disfluency types were also analysed from 150 disfluency episodes; syllable repetition and word interjection were the most common types.

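The entry above reports rate in syllables per minute (SPM) and fluency as the percentage of fluent syllables. The sketch below shows one way such measures could be computed from a syllable-level annotation; the input layout (syllable/disfluency-flag pairs plus a speaking duration) is an assumption for illustration, not the authors' procedure.

```python
# Minimal sketch: speech rate in syllables per minute (SPM) and percent fluent
# syllables. The input layout is assumed for illustration only.

def speech_rate_spm(num_syllables: int, duration_sec: float) -> float:
    """Syllables per minute."""
    return num_syllables / duration_sec * 60.0

def percent_fluent(syllables):
    """Share of syllables not flagged as disfluent, in percent."""
    fluent = sum(1 for _syl, disfluent in syllables if not disfluent)
    return 100.0 * fluent / len(syllables)

# Example: 290 syllables read in 50 seconds, 3 of them disfluent.
syllables = [("syl", False)] * 287 + [("syl", True)] * 3
print(speech_rate_spm(len(syllables), 50.0))   # 348.0 SPM
print(round(percent_fluent(syllables), 1))     # 99.0 percent fluent
```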

인공와우이식 아동의 운율 특성 - 발화속도와 억양기울기를 중심으로 - (The Prosodic Characteristics of Children with Cochlear Implants with Respect to Speech Rate and Intonation Slope)

  • 오순영;성철재;최은아
    • 말소리와 음성과학 / Vol. 3 No. 3 / pp.157-165 / 2011
  • This study investigated speech rate and intonation slope (least-squares method; F0 and quarter-tone scale) in the utterances of normal children and children with cochlear implants (CI). The subjects were divided into three groups of 12: children implanted before age 3;00, children implanted after age 3;00, and normal children. The materials consisted of four sentence types of informal (non-honorific) dialogue. With group as the independent variable and speech rate and intonation slope as dependent variables, a one-way ANOVA showed that normal children had faster speech rates and steeper intonation slopes than the CI groups. More specifically, there was a statistically significant speech rate difference between normal and CI children in all sentential patterns except the imperative form (p<.01). Additionally, the F0 and quarter-tone slopes of the sentence-final word showed a statistically significant difference between normal and CI children in the imperative form (F0: p<.01; quarter-tone: p<.05).

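The study above fits intonation slope by least squares over F0 contours expressed in Hz and on a quarter-tone scale. A rough sketch of that computation follows; the reference frequency and the exact quarter-tone conversion are illustrative assumptions, not the study's settings.

```python
import numpy as np

def quarter_tone(f0_hz, ref_hz=100.0):
    """Convert F0 (Hz) to a quarter-tone scale relative to an assumed reference."""
    return 24.0 * np.log2(np.asarray(f0_hz) / ref_hz)

def lsq_slope(times_sec, values):
    """Least-squares regression slope of an F0 contour (units per second)."""
    slope, _intercept = np.polyfit(times_sec, values, deg=1)
    return slope

# Example: a falling contour over a 0.4 s sentence-final word.
t = np.linspace(0.0, 0.4, 40)
f0 = np.linspace(220.0, 180.0, 40)       # Hz
print(lsq_slope(t, f0))                  # slope in Hz per second (about -100 here)
print(lsq_slope(t, quarter_tone(f0)))    # slope in quarter-tones per second
```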

상태변수 기반의 실시간 음성검출 알고리즘의 최적화 (Optimization of State-Based Real-Time Speech Endpoint Detection Algorithm)

  • 김수환;이영재;김영일;정상배
    • 말소리와 음성과학 / Vol. 2 No. 4 / pp.137-143 / 2010
  • In this paper, a speech endpoint detection algorithm is proposed. The proposed algorithm is a state transition-based method for speech detection. To reject short-duration acoustic pulses that can be regarded as noise, it uses the duration information of all detected pulses. The parameters related to pulse length and the energy threshold for detecting speech intervals are optimized by an exhaustive search, with speech recognition rate used as the performance index. Experimental results show that the proposed algorithm outperforms the baseline state-based endpoint detection algorithm. At 5 dB input SNR with beamformed input, the word recognition accuracies of its outputs were 78.5% for human voice noise and 81.1% for music noise.

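The abstract above describes a state-transition endpoint detector that thresholds frame energy and rejects detected pulses too short to be speech. A compressed sketch of that idea is shown below; the threshold, the minimum-duration value, and the single-threshold state machine are illustrative assumptions, and the paper's exhaustive parameter search is not reproduced.

```python
import numpy as np

def detect_endpoints(frame_energy_db, energy_thresh=-40.0, min_speech_frames=10):
    """State-based endpoint detection sketch.

    Frames above the energy threshold open a candidate pulse; pulses shorter
    than min_speech_frames are rejected as noise bursts. Values are illustrative.
    """
    segments, start = [], None
    for i, e in enumerate(frame_energy_db):
        if e > energy_thresh and start is None:
            start = i                            # SILENCE -> IN_SPEECH
        elif e <= energy_thresh and start is not None:
            if i - start >= min_speech_frames:   # keep only long-enough pulses
                segments.append((start, i))
            start = None                         # IN_SPEECH -> SILENCE
    if start is not None and len(frame_energy_db) - start >= min_speech_frames:
        segments.append((start, len(frame_energy_db)))
    return segments

# Example: a 5-frame click is rejected, a 30-frame burst is kept.
energy = np.full(100, -60.0)
energy[10:15] = -20.0
energy[40:70] = -20.0
print(detect_endpoints(energy))   # [(40, 70)]
```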

이중채널 잡음음성인식을 위한 공간정보를 이용한 통계모델 기반 음성구간 검출 (Statistical Model-Based Voice Activity Detection Using Spatial Cues for Dual-Channel Noisy Speech Recognition)

  • 신민화;박지훈;김홍국;이연우;이성로
    • 말소리와 음성과학 / Vol. 2 No. 3 / pp.141-148 / 2010
  • In this paper, voice activity detection (VAD) employing spatial cues is proposed for dual-channel noisy speech recognition. In the proposed method, a probability model for speech presence/absence is constructed using spatial cues obtained from the dual-channel input signal, and speech activity intervals are detected with this probability model. In particular, the spatial cues consist of the interaural time differences and interaural level differences of the dual-channel speech signals, and the probability model for speech presence/absence is based on a Gaussian kernel density. To evaluate the proposed VAD method, speech recognition is performed on segments containing only the speech intervals detected by the proposed method. Its performance is compared with that of an SNR-based method, a direction-of-arrival (DOA) based method, and a phase vector based method. The speech recognition experiments show that the proposed method outperforms the conventional methods, providing relative word error rate reductions of 11.68%, 41.92%, and 10.15% over the SNR-based, DOA-based, and phase vector based methods, respectively.

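The method above models speech presence/absence from interaural time and level differences with a Gaussian kernel density. The sketch below illustrates one way per-frame spatial cues and kernel-density scores could be computed; the cross-correlation ITD, the frame settings, and the use of SciPy's gaussian_kde are assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def spatial_cues(left, right, frame_len=400, hop=160):
    """Per-frame ITD (cross-correlation lag, in samples) and ILD (in dB)."""
    cues = []
    for s in range(0, len(left) - frame_len, hop):
        l, r = left[s:s + frame_len], right[s:s + frame_len]
        xcorr = np.correlate(l, r, mode="full")
        itd = np.argmax(xcorr) - (frame_len - 1)   # lag in samples
        ild = 10.0 * np.log10((np.sum(l**2) + 1e-12) / (np.sum(r**2) + 1e-12))
        cues.append((itd, ild))
    return np.array(cues, dtype=float)

def train_kde(cues):
    """Fit one Gaussian kernel density per class from labelled (N, 2) cue arrays."""
    return gaussian_kde(cues.T)

def speech_presence(frame_cue, kde_speech, kde_nonspeech):
    """Likelihood ratio of the two class densities; >1 favours speech presence."""
    return kde_speech(frame_cue)[0] / (kde_nonspeech(frame_cue)[0] + 1e-12)
```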

대학생들이 또렷한 음성과 대화체로 발화한 영어문단의 구글음성인식 (Google speech recognition of an English paragraph produced by college students in clear or casual speech styles)

  • 양병곤
    • 말소리와 음성과학 / Vol. 9 No. 4 / pp.43-50 / 2017
  • These days, the voice models of speech recognition software are sophisticated enough to process people's natural speech without any prior training. However, little research has reported on the use of speech recognition tools in pronunciation education. This paper examined Google speech recognition of a short English paragraph produced by Korean college students in clear and casual speech styles in order to diagnose and resolve the students' pronunciation problems. Thirty-three Korean college students participated in the recording of the English paragraph. The Google soundwriter was employed to collect word recognition rates for the paragraph. Results showed that the total word recognition rate was 73% with a standard deviation of 11.5%. The word recognition rate of clear speech was around 77.3%, while that of casual speech amounted to 68.7%. The lower recognition rate for casual speech was attributed both to individual pronunciation errors and to the software itself, as shown in its fricative recognition. The distribution of unrecognized words varied with each participant and proficiency group. From the results, the author concludes that speech recognition software is useful for diagnosing the pronunciation problems of individuals or groups. Further studies on the progressive improvement of learners' erroneous pronunciations would be desirable.
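
The study above scores recognition as the percentage of reference words recognized. Below is a simple sketch of such a word recognition rate using a standard sequence alignment from Python's difflib; this scoring choice is an assumption, not necessarily the author's method.

```python
from difflib import SequenceMatcher

def word_recognition_rate(reference: str, recognized: str) -> float:
    """Percentage of reference words matched in the recognized transcript."""
    ref = reference.lower().split()
    hyp = recognized.lower().split()
    matcher = SequenceMatcher(None, ref, hyp)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return 100.0 * matched / len(ref)

# Example with generic text (not the paragraph used in the study).
reference = "please call stella ask her to bring these things with her from the store"
recognized = "please call stella ask her to bring this things with her from the store"
print(round(word_recognition_rate(reference, recognized), 1))   # 92.9
```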

발화속도 및 강도 분석에 기반한 폐질환의 음성적 특징 추출 (Voice Features Extraction of Lung Diseases Based on the Analysis of Speech Rates and Intensity)

  • 김봉현;조동욱
    • 정보처리학회논문지B / Vol. 16B No. 6 / pp.471-478 / 2009
  • Lung disease, classified as one of the six major intractable diseases of modern life, is caused mostly by smoking and air pollution. When lung function is damaged, the exchange of carbon dioxide and oxygen in the alveoli no longer proceeds normally, so interest in lung disease as a life-threatening condition is growing. In this paper, we propose a method for diagnosing lung disease by applying speech analysis parameters, with the aim of extracting the voice features of lung disease. First, a subject group was formed of patients with lung disease and of normal speakers of the same age and gender, and their voices were collected. Various speech analysis parameters were then applied to the collected voices, and significant differences between the lung disease group and the normal group were found for speech rate and intensity. In conclusion, the lung disease group showed slower speech rates and higher intensity than the normal group, and on this basis a method for extracting the voice features of lung disease is presented.
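
The abstract above reports group differences in speech rate and intensity between patients and controls. The sketch below shows how a mean frame intensity in dB and an independent-samples comparison of two groups might be computed; the RMS intensity definition, the SciPy t-test, and the example numbers are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_ind

def mean_intensity_db(signal, frame_len=400):
    """Mean RMS frame intensity in dB (relative, illustrative definition)."""
    frames = [signal[i:i + frame_len] for i in range(0, len(signal) - frame_len, frame_len)]
    rms = np.array([np.sqrt(np.mean(f**2)) for f in frames])
    return float(np.mean(20.0 * np.log10(rms + 1e-12)))

# Example intensity of a synthetic vowel-like signal (illustrative only).
t = np.arange(16000) / 16000.0
print(round(mean_intensity_db(0.1 * np.sin(2 * np.pi * 150 * t)), 1))   # about -23.0 dB

# Example group comparison of speech rates in SPM (placeholder numbers).
patients = np.array([210.0, 198.0, 225.0, 205.0, 190.0])
controls = np.array([265.0, 250.0, 270.0, 258.0, 248.0])
t_stat, p_value = ttest_ind(patients, controls)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # significant if p < .05
```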

배경 잡음환경에서 가변 임계값에 의한 Dual Rate ADPCM 음성 부호화 기법 (Coding Method of Variable Threshold Dual Rate ADPCM Speech Considering the Background Noise)

  • 한경호
    • 조명전기설비학회논문지 / Vol. 17 No. 6 / pp.154-159 / 2003
  • This paper proposes a coding method that uses a standard ADPCM coder conforming to the ITU G.726 recommendation and switches between two speech coding rates according to the level of background noise, yielding better speech quality at a low data rate than a single coding rate. To this end, speech signals that are louder than the background noise are compressed at 40 kbps to improve quality even at the cost of more data, while quieter signals are compressed at 16 kbps to reduce the amount of data; overall, this reduces the total amount of compressed data while improving speech quality. The zero-crossing rate (ZCR) is used to choose between the two compression rates for the input speech signal, which keeps the processing fast.
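
The coder above picks between 40 kbps and 16 kbps per segment using a zero-crossing-rate decision. The sketch below shows a ZCR computation and a two-way rate selection; the fixed threshold and the mapping from ZCR to rate are simplifying assumptions standing in for the paper's variable, noise-dependent threshold, and the G.726 coder itself is not implemented.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    return float(np.mean(np.abs(np.diff(np.signbit(frame).astype(int)))))

def select_rate_kbps(frame, zcr_threshold=0.25):
    """Pick 40 kbps for frames classified as speech-dominant, 16 kbps otherwise.

    Assumption: high ZCR is treated as noise-like and coded at the lower rate.
    """
    return 16 if zero_crossing_rate(frame) > zcr_threshold else 40

# Example: a low-frequency voiced-like frame vs. a noise-like frame (20 ms at 8 kHz).
t = np.linspace(0, 0.02, 160, endpoint=False)
voiced = np.sin(2 * np.pi * 200 * t)
noise = np.random.default_rng(0).standard_normal(160)
print(select_rate_kbps(voiced), select_rate_kbps(noise))   # 40 16
```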

정상 성인 및 아동의 구어속도에 관한 연구 (The Study of Speech Rate in Normal-Speaking Adults and Children)

  • 안종복;신명선;권도하
    • 음성과학 / Vol. 9 No. 4 / pp.93-103 / 2002
  • The purpose of this study was to establish preliminary data on the speech rates of normal-speaking adults and children. The results are intended to serve as clinical measurement guidelines for the diagnosis, assessment, treatment planning, and therapy progress of stuttering. Thirty-one adults (16 females, 15 males) aged 18-30 years and thirty normally developing children (15 females, 15 males) aged 8-10 participated in the study. The subjects' reading of the Stroll (Jeong, 1994) passage and a 1-minute sample of talking about their daily routine were collected. The adult speakers had rates of 308.29 ± 22.57 syllables per minute (SPM) or 108.06 ± 6.17 words per minute (WPM) during reading, and 252.87 ± 40.86 SPM or 92.26 ± 17.12 WPM during talking. The children had rates of 176.67 ± 33.65 SPM or 64.07 ± 12.62 WPM during reading, and 149.30 ± 33.14 SPM or 56.60 ± 11.36 WPM during talking. t-tests for the reading and talking tasks in adults showed that SPM in reading (t=2.211, p<.05) and WPM in talking (t=-2.284, p<.05) differed significantly by gender. To determine whether rate differs across children's gender and age, a two-way ANOVA was performed. Both SPM and WPM in the reading task differed significantly between the groups of children aged 8 and 10 (p<.01). In the speaking task, both SPM and WPM differed significantly between the groups aged 8 and 10, and between those aged 9 and 10.

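The study above compares SPM and WPM by gender with t-tests and, for the children, by gender and age with a two-way ANOVA. A minimal sketch of that two-way design with pandas/statsmodels follows; the column names and numbers are placeholders for illustration, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Placeholder data: one SPM value per child, with gender and age group as factors.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "spm": np.concatenate([rng.normal(150, 25, 10),    # age 8
                           rng.normal(165, 25, 10),    # age 9
                           rng.normal(190, 25, 10)]),  # age 10
    "age": [8] * 10 + [9] * 10 + [10] * 10,
    "gender": ["F", "M"] * 15,
})

# Two-way ANOVA: SPM by gender, age group, and their interaction.
model = ols("spm ~ C(gender) * C(age)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```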

음성구간검출을 위한 비정상성 잡음에 강인한 특징 추출 (Robust Feature Extraction for Voice Activity Detection in Nonstationary Noisy Environments)

  • 홍정표;박상준;정상배;한민수
    • 말소리와 음성과학 / Vol. 5 No. 1 / pp.11-16 / 2013
  • This paper proposes a robust feature extraction method for accurate voice activity detection (VAD). VAD is one of the principal modules of speech signal processing systems such as speech codecs, speech enhancement, and speech recognition. Noisy environments contain nonstationary noise that drastically degrades VAD accuracy, because the fluctuation of features in noise intervals increases the false alarm rate. To improve VAD performance, a harmonic-weighted energy feature is proposed: each frame's energy is weighted by its harmonic-to-noise ratio, so that the feature emphasizes voiced speech intervals. For performance evaluation, receiver operating characteristic curves and the equal error rate are measured.
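
The feature above weights frame energy by harmonicity so that voiced speech dominates. The sketch below uses a normalized autocorrelation peak as a crude stand-in for the harmonic-to-noise-ratio weighting; this simplification, the frame length, and the pitch range are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def harmonic_weighted_energy(frame, sr=16000, f0_min=60.0, f0_max=400.0):
    """Frame energy weighted by an autocorrelation-based harmonicity estimate.

    The normalized autocorrelation peak in the plausible pitch-lag range serves
    as a rough harmonicity weight; the frame must be longer than sr / f0_min samples.
    """
    frame = frame - np.mean(frame)
    energy = np.sum(frame ** 2)
    if energy == 0.0:
        return 0.0
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min, lag_max = int(sr / f0_max), int(sr / f0_min)
    harmonicity = np.max(ac[lag_min:lag_max]) / ac[0]
    return float(max(harmonicity, 0.0) * energy)

# Example: 32 ms frames at 16 kHz; a 200 Hz tone typically scores higher than noise.
sr = 16000
t = np.arange(512) / sr
print(harmonic_weighted_energy(np.sin(2 * np.pi * 200 * t), sr))
print(harmonic_weighted_energy(np.random.default_rng(0).standard_normal(512), sr))
```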

Speech Feature Extraction Based on the Human Hearing Model

  • Chung, Kwang-Woo;Kim, Paul;Hong, Kwang-Seok
    • 대한음성학회:학술대회논문집 / 대한음성학회 1996년도 10월 학술대회지 / pp.435-447 / 1996
  • In this paper, we propose a method that extracts speech features using a hearing model through signal processing techniques. The proposed method consists of the following steps: normalization of the short-time speech block by its maximum value, multi-resolution analysis using the discrete wavelet transform and re-synthesis using the inverse discrete wavelet transform, differentiation after analysis and synthesis, full-wave rectification, and integration. To verify the performance of the proposed speech feature in a speech recognition task, Korean digit recognition experiments were carried out using both DTW and a VQ-HMM. With DTW, the recognition rates were 99.79% and 90.33% for the speaker-dependent and speaker-independent tasks, respectively; with the VQ-HMM, the rates were 96.5% and 81.5%, respectively. This indicates that the proposed speech feature has potential as a simple and efficient feature for recognition tasks.

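The abstract above lists the processing steps explicitly: normalization by the block maximum, wavelet analysis and re-synthesis, differentiation, full-wave rectification, and integration. The sketch below follows those steps with PyWavelets; the wavelet family, decomposition level, and per-band reconstruction are assumptions the abstract does not fix.

```python
import numpy as np
import pywt   # PyWavelets

def hearing_model_features(block, wavelet="db4", level=3):
    """Sketch of the described pipeline: normalize, DWT analysis, per-band
    re-synthesis, differentiation, full-wave rectification, integration."""
    block = block / (np.max(np.abs(block)) + 1e-12)           # normalize by max value
    coeffs = pywt.wavedec(block, wavelet, level=level)        # multi-resolution analysis
    features = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        band = pywt.waverec(kept, wavelet)                    # re-synthesize one band
        band = np.diff(band)                                  # differentiation
        band = np.abs(band)                                   # full-wave rectification
        features.append(np.sum(band))                         # integration
    return np.array(features)

# Example: features from a 32 ms block of a synthetic signal.
sr = 16000
t = np.arange(512) / sr
block = np.sin(2 * np.pi * 300 * t) + 0.1 * np.random.default_rng(0).standard_normal(512)
print(hearing_model_features(block))
```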