• Title/Summary/Keyword: Speech level

How Korean Learner's English Proficiency Level Affects English Speech Production Variations

  • Hong, Hye-Jin; Kim, Sun-Hee; Chung, Min-Hwa
    • Phonetics and Speech Sciences / v.3 no.3 / pp.115-121 / 2011
  • This paper examines how L2 speech production varies with a learner's L2 proficiency level. L2 speech production variations are analyzed with quantitative measures at the word and phone levels using a Korean learners' English corpus. Word-level variations are analyzed using correctness, which captures how spoken realizations differ from the canonical forms, while accuracy is used at the phone level to reflect phone insertions and deletions together with substitutions. The results show that the speech production of learners at different L2 proficiency levels differs considerably, both in overall performance and in individual realizations at the word and phone levels. These results confirm that the speech production of non-native speakers varies with their L2 proficiency level even when the speakers share the same L1 background. Furthermore, the findings can contribute to improving the non-native speech recognition performance of ASR-based English language educational systems for Korean learners of English.
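
The word-level correctness and phone-level accuracy measures used above follow the standard alignment-based definitions from speech recognition scoring: correctness = (N - S - D) / N ignores insertions, while accuracy = (N - S - D - I) / N penalizes them, where N is the number of reference units and S, D, I count substitutions, deletions, and insertions. A minimal sketch of that computation, assuming phone strings are available as symbol lists (the example sequences below are made up, not data from the paper):

```python
def edit_ops(ref, hyp):
    """Count substitutions, deletions, and insertions from a Levenshtein
    alignment of reference and hypothesis symbol sequences."""
    n, m = len(ref), len(hyp)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i
    for j in range(1, m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),
                           dp[i - 1][j] + 1,   # deletion
                           dp[i][j - 1] + 1)   # insertion
    i, j, S, D, I = n, m, 0, 0, 0
    while i > 0 or j > 0:  # backtrack to classify each edit
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            S += ref[i - 1] != hyp[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            D += 1
            i -= 1
        else:
            I += 1
            j -= 1
    return S, D, I

ref = "dh ih s ih z".split()    # hypothetical canonical phone string
hyp = "d ih s ih z ah".split()  # hypothetical learner realization
S, D, I = edit_ops(ref, hyp)
N = len(ref)
correctness = (N - S - D) / N   # ignores the inserted phone
accuracy = (N - S - D - I) / N  # counts insertions as errors too
```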

A Study on the Noise-Level Measurement Using the Energy and Relation of Closed Pitch (에너지와 인근 피치간에 유사도를 이용한 잡음레벨 검출에 관한 연구)

  • Kang, In-Gyu; Lee, Ki-Young; Bae, Myung-Jin
    • Speech Sciences / v.11 no.3 / pp.157-164 / 2004
  • Humans speak at a characteristic average pitch when speaking naturally, known as the 'habitual pitch level'. When noise is added to speech, however, the pitch waveform varies irregularly, and this property can be used to estimate the noise level of the speech. This paper computes the energy level of the input speech, extracts the pitch period from segments above an energy threshold using the NAMDF (Normalized Average Magnitude Difference Function) method, cuts the signal into frames of one pitch period each, and proposes a method that estimates the noise level from the similarity between adjacent pitch periods of the input speech.
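
NAMDF-based pitch extraction and the adjacent-period similarity idea can be sketched as follows; the frame length, the normalization, and the similarity measure are assumptions, since the abstract does not give exact definitions:

```python
import numpy as np

def namdf(frame, lag):
    """Normalized average magnitude difference at one lag (one common
    normalization; the paper's exact definition is an assumption)."""
    n = len(frame) - lag
    diff = np.abs(frame[:n] - frame[lag:lag + n]).mean()
    norm = np.abs(frame[:n]).mean() + np.abs(frame[lag:lag + n]).mean()
    return diff / (norm + 1e-12)

def pitch_period(frame, fs, fmin=60.0, fmax=400.0):
    """Pitch period in samples: the lag with the deepest NAMDF valley
    within a plausible F0 range."""
    lags = range(int(fs / fmax), int(fs / fmin) + 1)
    return min(lags, key=lambda k: namdf(frame, k))

def noise_level_estimate(x, fs):
    """Clean voiced speech repeats almost exactly from one pitch period
    to the next, so the average difference between adjacent pitch-period
    segments grows with the noise level. Assumes roughly constant pitch
    over x, a simplification for this sketch."""
    T = pitch_period(x, fs)
    periods = [x[i:i + T] for i in range(0, len(x) - T, T)]
    return float(np.mean([np.abs(a - b).mean()
                          for a, b in zip(periods, periods[1:])]))
```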

Automatic Speech Database Verification Method Based on Confidence Measure

  • Kang, Jeomja; Jung, Hoyoung; Kim, Sanghun
    • MALSORI / no.51 / pp.71-84 / 2004
  • In this paper, we propose an automatic speech database verification method (automatic verification) based on a confidence measure for large speech databases. The method verifies the consistency between a given transcription and the speech using the confidence measure. The automatic verification process consists of two stages: a word-level likelihood computation stage and a multi-level likelihood ratio computation stage. In the word-level likelihood computation stage, we calculate the word-level likelihood using the Viterbi decoding algorithm and produce segment information. In the multi-level likelihood ratio computation stage, we calculate word-level and phone-level likelihood ratios based on the confidence measure with an anti-phone model. Automatic verification achieved about 61% error reduction and cut the verification time from one month of manual work to one or two days.
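
A confidence measure against an anti-phone model is conventionally a log-likelihood ratio between the hypothesized phone model and a generic anti-model over the aligned segment. A minimal sketch of such a verification decision (the threshold value and the segment format are assumptions, not the paper's settings):

```python
def phone_confidence(loglik_phone, loglik_anti):
    """Log-likelihood ratio: how much better the hypothesized phone model
    explains the aligned segment than the generic anti-phone model."""
    return loglik_phone - loglik_anti

def verify_utterance(segments, threshold=-2.0):
    """Accept the transcription when every aligned segment scores above
    the threshold; otherwise flag it for manual checking.
    segments: (phone_label, loglik_phone, loglik_anti) triples obtained
    by Viterbi forced alignment; the threshold is an assumption."""
    scores = [phone_confidence(lp, la) for _, lp, la in segments]
    return all(s >= threshold for s in scores), scores
```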

Implementation of Quad Variable Rates ADPCM Speech CODEC on C6000 DSP considering the Environmental Noise (배경잡음을 고려한 4배 가변 압축률을 갖는 ADPCM의 C6000 DSP 실시간 구현)

  • Kim, Dae-Sung; Han, Kyong-ho
    • Proceedings of the KIPE Conference / 2002.07a / pp.727-729 / 2002
  • In this paper, we propose a quad variable-rate ADPCM coding method, implemented on a C6000 DSP, which modifies the standard ITU G.726 ADPCM to improve speech quality in the presence of environmental noise. Four coding rates (16 kbps, 24 kbps, 32 kbps, and 40 kbps) are applied to windows of speech samples, and the rate-decision threshold is set by the environmental noise level. The goal is to reduce the coding rate while retaining speech quality: under environmental noise, the proposed coder achieves quality close to that of a 40 kbps single-rate coder at an average rate close to that of a 16 kbps single-rate coder. The noise level, which drives the rate decision, is computed for every window of speech samples. At high noise levels, more samples are coded at higher rates to preserve quality; at low noise levels, only the large speech signals are coded at higher rates and more samples are coded at lower rates, reducing the overall bit rate. The influence of noise on the speech signal is considerably higher for small signals, which also have a higher zero-crossing rate (ZCR). The method was simulated on a PC and is to be implemented on a C6000 floating-point DSP board for real-time operation.
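
A sketch of the per-window rate decision described above; the thresholds and the noise measurement are illustrative assumptions rather than the paper's actual parameters, and the ADPCM core itself would be a standard G.726 quantizer run at the selected rate:

```python
import numpy as np

RATES_KBPS = (16, 24, 32, 40)  # G.726 rates: 2, 3, 4, 5 bits/sample at 8 kHz

def pick_rate(window, noise_level, noisy_env_threshold=1e-3):
    """Choose the ADPCM rate for one window of (float) speech samples.
    High ambient noise -> most windows get the expensive rates to protect
    quality; low noise -> only loud segments earn them. All thresholds
    here are illustrative assumptions."""
    energy = float(np.mean(np.square(window)))
    snr_db = 10.0 * np.log10(energy / (noise_level + 1e-12))
    if noise_level > noisy_env_threshold:  # noisy environment
        return 40 if snr_db < 20.0 else 32
    # quiet environment: small signals can be coded cheaply
    if snr_db > 30.0:
        return 40
    if snr_db > 20.0:
        return 32
    if snr_db > 10.0:
        return 24
    return 16
```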

Effects of Talker Sidetone and Room Noise on the Speech Level of a Talker (송화측음 및 실내소음이 송화 음성레벨에 미치는 영향)

  • Kang, Kyeong-Ok; Kang, Seong-Hoon
    • The Journal of the Acoustical Society of Korea / v.11 no.1 / pp.52-59 / 1992
  • In order to quantify the effect of talker sidetone on a talker's speech level during telephone conversation, we reviewed the algorithm for measuring speech level and assessed the variation of speech level with the sidetone masking rating (STMR). We also measured the effect of room noise on speech level as the STMR value was varied. Taking the effects of talker sidetone and room noise together, the experimental results suggest that a talker continuously tries to keep the psychological loudness of his own speech, as heard through the telephone handset, at a constant and comfortable level by adjusting his speaking level as the STMR value and the room noise change. That is, because the amount of his speech masked by the talker sidetone and room noise differs as the STMR value and room noise change, the talker tends to control his speaking level so as to keep the perceived loudness of his own speech constant.
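
A speech level meter of the kind reviewed here is conventionally an active-level measurement: average the power only over frames where the talker is actually speaking. A minimal sketch in that spirit (the frame size and activity margin are assumptions loosely modeled on ITU-T P.56-style measurement; the paper's actual algorithm is not specified in the abstract):

```python
import numpy as np

def active_speech_level_db(x, fs, frame_ms=20, margin_db=15.9):
    """Active speech level: mean power over frames whose level lies
    within margin_db of the long-term level. A rough stand-in for a
    P.56-style activity criterion; details are assumptions."""
    n = int(fs * frame_ms / 1000)
    frames = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
    powers = np.array([np.mean(np.square(f)) for f in frames])
    long_term_db = 10 * np.log10(np.mean(powers) + 1e-12)
    active = powers[10 * np.log10(powers + 1e-12) > long_term_db - margin_db]
    return 10 * np.log10(np.mean(active) + 1e-12)
```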

Weighted Finite State Transducer-Based Endpoint Detection Using Probabilistic Decision Logic

  • Chung, Hoon; Lee, Sung Joo; Lee, Yun Keun
    • ETRI Journal / v.36 no.5 / pp.714-720 / 2014
  • In this paper, we propose the use of data-driven probabilistic utterance-level decision logic to improve Weighted Finite State Transducer (WFST)-based endpoint detection. In general, endpoint detection is dealt with using two cascaded decision processes. The first process is frame-level speech/non-speech classification based on statistical hypothesis testing, and the second process is a heuristic-knowledge-based utterance-level speech boundary decision. To handle these two processes within a unified framework, we propose a WFST-based approach. However, a WFST-based approach has the same limitations as conventional approaches in that the utterance-level decision is based on heuristic knowledge and the decision parameters are tuned sequentially. Therefore, to obtain decision knowledge from a speech corpus and optimize the parameters at the same time, we propose the use of data-driven probabilistic utterance-level decision logic. The proposed method reduces the average detection failure rate by about 14% for various noisy-speech corpora collected for an endpoint detection evaluation.
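
The first stage, frame-level speech/non-speech classification by statistical hypothesis testing, reduces to a per-frame likelihood-ratio test; the utterance-level logic then operates on the resulting label sequence. A minimal sketch of the frame-level test, with single Gaussians over log energy standing in for the real acoustic models (an assumption):

```python
import numpy as np
from scipy.stats import norm

def frame_labels(frames, speech_model, noise_model, threshold=0.0):
    """Label each frame speech (True) or non-speech (False) with a
    log-likelihood ratio test. speech_model / noise_model are (mean, std)
    of log frame energy -- simple stand-ins for real acoustic models."""
    log_e = np.log(np.mean(np.square(frames), axis=1) + 1e-12)
    llr = norm.logpdf(log_e, *speech_model) - norm.logpdf(log_e, *noise_model)
    return llr > threshold
```

In the WFST formulation, this label (or score) sequence is composed with a transducer encoding the utterance-level boundary constraints; the paper's contribution is to learn that decision logic from a corpus rather than hand-tuning it.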

Speech Evaluation Variables Related to Speech Intelligibility in Children with Spastic Cerebral Palsy (경직형 뇌성마비아동의 말명료도 및 말명료도와 관련된 말 평가 변인)

  • Park, Ji-Eun; Kim, Hyang-Hee; Shin, Ji-Cheol; Choi, Hong-Shik; Sim, Hyun-Sub; Park, Eun-Sook
    • Phonetics and Speech Sciences / v.2 no.4 / pp.193-212 / 2010
  • The purpose of our study was to provide effective speech evaluation items by examining the speech variables that successfully predict speech intelligibility in children with cerebral palsy (CP). The subjects were 55 children with spastic-type cerebral palsy. For the speech evaluation, we administered a speech subsystem evaluation and a speech intelligibility test. The results are as follows. First, the speech subsystem evaluation consisted of 48 task items across an observational evaluation stage and three severity levels, and the levels correlated with gross motor function, fine motor function, and age. Second, the speech subsystem evaluation items were regrouped into seven factors. Third, 34 of the 48 task items correlated positively with the syllable intelligibility rating: four items in the observational evaluation stage and, among the nonverbal articulatory function evaluation items, 11 items in level one, 12 items in level two, and eight items in level three. Fourth, 23 of the 48 evaluation tasks correlated with the sentence intelligibility rating: one item in the observational evaluation stage (in the articulatory structure evaluation task), six items in level one, eight items in level two, and eight items in level three. Fifth, a total of 14 items influenced the syllable intelligibility rating. Sixth, a total of 13 items influenced the sentence intelligibility rating. According to these results, the variables that influenced the speech intelligibility of children with CP among the articulatory function tasks were in the respiratory function task, the phonatory function task, and the lip- and chin-related tasks; we found no correlation for tongue function. Our results can be applied to speech evaluation, setting therapy goals, and tracking progress in children with CP. We studied only children with the spastic type of cerebral palsy, and severe cases were few compared with moderate cases, so their characteristics should be taken into account when evaluating children of other severities. Further study of speech evaluation variables in relation to the severity of the speech intelligibility deficit and to different types of cerebral palsy may be necessary.

The Lombard effect on the speech of children with intellectual disability (지적장애 아동의 롬바드 효과에 따른 말산출 특성)

  • Lee, Hyunju; Lee, Jiyun; Kim, Yukyung
    • Phonetics and Speech Sciences / v.8 no.4 / pp.115-122 / 2016
  • This study investigates the acoustic-phonetic features and speech intelligibility of Lombard speech in children with intellectual disability by examining the effect of Lombard speech at three noise levels: no noise, 55 dB, and 65 dB. Eight children with intellectual disability read sentences and played speaking games, and their speech was analyzed in terms of intensity, pitch, the vowel space of /a/, /i/, and /u/, VAI(3), articulation rate, and speech intelligibility. Results showed, first, that intensity and pitch increased as the noise level increased; second, that VAI(3) increased as the noise level increased; third, that articulation rate decreased as noise intensity increased; and finally, that speech intelligibility increased as noise intensity increased. Lombard speech thus changed the VAI(3), vowel space, articulation rate, and speech intelligibility of the children with intellectual disability. This study suggests that Lombard speech can be clinically useful for persons who have intellectual disability and difficulties with self-control.
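
VAI(3), the vowel articulation index over the three corner vowels /a/, /i/, /u/, is commonly defined from the first two formants as (F2/i/ + F1/a/) / (F1/i/ + F1/u/ + F2/u/ + F2/a/), so it rises as the vowel space expands. A worked sketch (the formant values are invented for illustration, not data from this study):

```python
def vai3(f1a, f2a, f1i, f2i, f1u, f2u):
    """Vowel articulation index over corner vowels /a/, /i/, /u/:
    larger values indicate a more expanded (clearer) vowel space."""
    return (f2i + f1a) / (f1i + f1u + f2u + f2a)

def vowel_space_area(f1a, f2a, f1i, f2i, f1u, f2u):
    """Area of the /a/-/i/-/u/ triangle in the F1-F2 plane (Hz^2)."""
    return 0.5 * abs(f1a * (f2i - f2u) + f1i * (f2u - f2a)
                     + f1u * (f2a - f2i))

# Hypothetical formant values (Hz) for one child, quiet vs. 65 dB noise:
quiet = dict(f1a=850, f2a=1300, f1i=350, f2i=2300, f1u=380, f2u=950)
noisy = dict(f1a=900, f2a=1350, f1i=330, f2i=2450, f1u=360, f2u=900)
print(f"VAI(3): {vai3(**quiet):.3f} -> {vai3(**noisy):.3f}")
print(f"area:   {vowel_space_area(**quiet):.0f} -> {vowel_space_area(**noisy):.0f}")
```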

A Study on the Speech Intelligibility of Voice Disordered Patients according to the Severity and Utterance Level (음성장애의 중증도와 발화 수준에 따른 말 명료도의 변화 연구)

  • Pyo, Hwa-Young
    • Speech Sciences / v.15 no.2 / pp.101-110 / 2008
  • The purpose of this study was to investigate the speech intelligibility of voice-disordered patients, with severity and utterance level as the variables. Based on severity, 12 patients were divided into three groups: G1, G2, and G3. Words, phrases, and sentences produced by the speakers were judged by four listeners with normal hearing, and we compared the intelligibility scores of the three groups. As a result, speech intelligibility decreased as the severity level increased, and the difference was statistically significant. However, the mean difference among words, phrases, and sentences was not significant, and the variation of intelligibility with utterance level did not follow a regular pattern.

Integrated Visual and Speech Parameters in Korean Numeral Speech Recognition

  • Lee, Sang-won; Park, In-Jung; Lee, Chun-Woo; Kim, Hyung-Bae
    • Proceedings of the IEEK Conference / 2000.07b / pp.685-688 / 2000
  • In this paper, we used image information to enhance Korean numeral speech recognition. First, a noisy environment was created with a Gaussian noise generator at each 10 dB level, and the generated noise was added to the original Korean numeral speech; the resulting speech was then analyzed for recognition. Speech captured through a microphone was pre-emphasized with a coefficient of 0.95, and Hamming windowing, autocorrelation, and LPC analysis were applied. Second, the image obtained by a camera was converted to gray level, autocorrelated, and analyzed with the same LPC algorithm used in the speech analysis. Finally, Korean numeral speech recognition with image information outperformed speech-only recognition, especially for '3', '5', and '9'. Because the same LPC algorithm and simple image handling were used, no additional computation such as filtering was required, and the overall recognition algorithm remained simple.
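
The front end described (0.95 pre-emphasis, Hamming window, autocorrelation, LPC) is the textbook pipeline; a minimal sketch using the Levinson-Durbin recursion (the frame size and LPC order are assumptions, and applying the same steps to a gray-level image row would only change the input vector):

```python
import numpy as np

def lpc_coeffs(frame, order=10, preemph=0.95):
    """Pre-emphasis -> Hamming window -> autocorrelation -> Levinson-Durbin.
    Returns a[1..order] of the all-pole model A(z) = 1 - sum a_k z^-k."""
    x = np.append(frame[0], frame[1:] - preemph * frame[:-1])  # pre-emphasis
    x = x * np.hamming(len(x))                                 # windowing
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = np.zeros(order)
    err = r[0] + 1e-12
    for i in range(order):                                     # Levinson-Durbin
        k = (r[i + 1] - np.dot(a[:i], r[i:0:-1])) / err
        a[:i + 1] = np.append(a[:i] - k * a[:i][::-1], k)
        err *= 1.0 - k * k
    return a
```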
