• Title/Summary/Keyword: Speech level


How Korean Learner's English Proficiency Level Affects English Speech Production Variations

  • Hong, Hye-Jin;Kim, Sun-Hee;Chung, Min-Hwa
    • 말소리와 음성과학
    • /
    • Vol. 3, No. 3
    • /
    • pp.115-121
    • /
    • 2011
  • This paper examines how L2 speech production varies with a learner's L2 proficiency level. L2 speech production variations are analyzed with quantitative measures at the word and phone levels using a corpus of Korean learners' English. Word-level variations are analyzed using correctness, which captures how realizations differ from canonical forms, while phone-level variations are analyzed using accuracy, which reflects phone insertions and deletions as well as substitutions. The results show that the speech production of learners at different L2 proficiency levels differs considerably, both in overall performance and in individual realizations at the word and phone levels. These results confirm that the speech production of non-native speakers varies with their L2 proficiency level even when they share the same L1 background. Furthermore, the findings can contribute to improving the non-native speech recognition performance of ASR-based English language education systems for Korean learners of English.

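The word-level correctness and phone-level accuracy measures mentioned in this abstract are not spelled out there; the sketch below shows the standard ASR-style definitions (correctness ignores insertions, accuracy counts them), computed from a Levenshtein alignment. It is an illustration under those assumed definitions, not the paper's code.

```python
# Minimal sketch (not the paper's code): correctness and accuracy between a
# canonical sequence and a realized/recognized sequence, as commonly defined
# in ASR scoring. Correctness ignores insertions; accuracy penalizes them.

def align_counts(reference, hypothesis):
    """Return (substitutions, deletions, insertions) from a Levenshtein alignment."""
    n, m = len(reference), len(hypothesis)
    # dp[i][j] = (cost, subs, dels, ins) for reference[:i] vs hypothesis[:j]
    dp = [[None] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = (0, 0, 0, 0)
    for i in range(1, n + 1):
        dp[i][0] = (i, 0, i, 0)          # delete all reference tokens
    for j in range(1, m + 1):
        dp[0][j] = (j, 0, 0, j)          # insert all hypothesis tokens
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            c_diag = (dp[i-1][j-1][0] + sub, dp[i-1][j-1][1] + sub, dp[i-1][j-1][2], dp[i-1][j-1][3])
            c_del  = (dp[i-1][j][0] + 1,     dp[i-1][j][1],         dp[i-1][j][2] + 1, dp[i-1][j][3])
            c_ins  = (dp[i][j-1][0] + 1,     dp[i][j-1][1],         dp[i][j-1][2],     dp[i][j-1][3] + 1)
            dp[i][j] = min((c_diag, c_del, c_ins), key=lambda t: t[0])
    _, subs, dels, ins = dp[n][m]
    return subs, dels, ins

def correctness_and_accuracy(reference, hypothesis):
    subs, dels, ins = align_counts(reference, hypothesis)
    n = len(reference)
    correctness = (n - subs - dels) / n        # word-level style: insertions ignored
    accuracy = (n - subs - dels - ins) / n     # phone-level style: insertions counted
    return correctness, accuracy

# Example: canonical phone string vs. a hypothetical learner realization
print(correctness_and_accuracy(list("strike"), list("sutraik")))
```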

에너지와 인근 피치간에 유사도를 이용한 잡음레벨 검출에 관한 연구 (A Study on the Noise-Level Measurement Using the Energy and Relation of Closed Pitch)

  • 강인규;이기영;배명진
    • 음성과학
    • /
    • Vol. 11, No. 3
    • /
    • pp.157-164
    • /
    • 2004
  • When people speak naturally, they speak around an average pitch level known as the 'habitual pitch level'. When noise is added to speech, however, the pitch waveform changes irregularly, and this fact can be used to estimate the noise level of the speech. This paper computes the energy level of the input speech, estimates the pitch period of frames above an energy threshold with the NAMDF (Normalized Average Magnitude Difference Function) method, segments the signal into pitch-period units, and proposes a method that estimates the noise level from the similarity between adjacent pitch periods of the input speech.

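The abstract combines an energy gate, NAMDF-based pitch period estimation, and a similarity measure between adjacent pitch periods. The sketch below shows one way those pieces can fit together; the frame length, lag range, energy quantile, and the "1 - similarity" noise indicator are assumptions for illustration, not values from the paper.

```python
# Rough sketch of the ingredients described above (not the authors' code):
# AMDF-based pitch estimation on high-energy frames, then similarity between
# adjacent pitch-length segments. Lower similarity between neighbouring
# periods is taken here as an indicator of a higher noise level.

import numpy as np

def amdf_pitch_period(frame, min_lag=40, max_lag=320):
    """Estimate the pitch period (in samples) as the lag minimizing the AMDF."""
    n = len(frame)
    amdf = np.array([np.mean(np.abs(frame[:n - lag] - frame[lag:]))
                     for lag in range(min_lag, max_lag)])
    amdf = amdf / (amdf.max() + 1e-12)          # simple normalization (NAMDF-style)
    return min_lag + int(np.argmin(amdf))

def adjacent_period_similarity(frame, period):
    """Mean normalized correlation between consecutive pitch-period segments."""
    segments = [frame[i:i + period] for i in range(0, len(frame) - period, period)]
    sims = [np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            for a, b in zip(segments, segments[1:])]
    return float(np.mean(sims)) if sims else 0.0

def estimate_noise_indicator(signal, frame_len=800, energy_quantile=0.7):
    """signal: 1-D float array. Use only high-energy frames; return a crude noise score."""
    frames = [signal[i:i + frame_len] for i in range(0, len(signal) - frame_len, frame_len)]
    energies = np.array([np.sum(f ** 2) for f in frames])
    threshold = np.quantile(energies, energy_quantile)   # keep the louder (voiced) frames
    scores = [1.0 - adjacent_period_similarity(f, amdf_pitch_period(f))
              for f, e in zip(frames, energies) if e >= threshold]
    return float(np.mean(scores)) if scores else None
```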

Automatic Speech Database Verification Method Based on Confidence Measure

  • Kang Jeomja;Jung Hoyoung;Kim Sanghun
    • 대한음성학회지:말소리
    • /
    • No. 51
    • /
    • pp.71-84
    • /
    • 2004
  • In this paper, we propose an automatic speech database verification method (automatic verification) based on a confidence measure for large speech databases. The method verifies the consistency between a given transcription and the speech using the confidence measure. The automatic verification process consists of two stages: a word-level likelihood computation stage and a multi-level likelihood ratio computation stage. In the word-level likelihood computation stage, we calculate the word-level likelihood using the Viterbi decoding algorithm and generate the segment information. In the multi-level likelihood ratio computation stage, we calculate the word-level and phone-level likelihood ratios based on a confidence measure with an anti-phone model. With automatic verification, we achieved about a 61% error reduction and reduced the verification time from one month of manual work to one or two days.

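A hedged sketch of the confidence-measure step this abstract describes: once forced alignment has produced, for each phone segment, per-frame log-likelihoods under the target phone model and an anti-phone model, a duration-normalized log-likelihood ratio can flag words whose transcription may not match the speech. The combination rule, data layout, and thresholds below are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of a likelihood-ratio confidence measure with an anti-phone
# model. Assumes forced alignment already produced the per-frame scores.

import numpy as np

def phone_confidence(target_loglik, anti_loglik):
    """Duration-normalized log-likelihood ratio for one phone segment."""
    return float(np.mean(np.asarray(target_loglik) - np.asarray(anti_loglik)))

def word_confidence(phone_scores):
    """Combine phone-level scores into a word-level score (here: the mean)."""
    return float(np.mean(phone_scores))

def verify_utterance(aligned_words, phone_threshold=-0.5, word_threshold=-0.2):
    """
    aligned_words: list of (word, [(phone, target_ll_frames, anti_ll_frames), ...]).
    Returns words whose confidence falls below a threshold, i.e. candidates for
    a transcription/speech mismatch that a human should re-check.
    """
    suspicious = []
    for word, phones in aligned_words:
        phone_scores = [phone_confidence(t, a) for _, t, a in phones]
        if word_confidence(phone_scores) < word_threshold or \
           any(s < phone_threshold for s in phone_scores):
            suspicious.append(word)
    return suspicious
```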

배경잡음을 고려한 4배 가변 압축률을 갖는 ADPCM의 C6000 DSP 실시간 구현 (Implementation of Quad Variable Rates ADPCM Speech CODEC on C6000 DSP considering the Environmental Noise)

  • 김대성;한경호
    • 전력전자학회:학술대회논문집
    • /
    • 전력전자학회 2002년도 전력전자학술대회 논문집
    • /
    • pp.727-729
    • /
    • 2002
  • In this paper, we propose a quad-variable-rate ADPCM coding method and its implementation on a C6000 DSP. The coder is modified from the standard ITU-T G.726 ADPCM to improve speech quality in the presence of environmental noise. Four coding rates, 16 kbps, 24 kbps, 32 kbps, and 40 kbps, are used for speech windows, and the rate-decision threshold is determined by the environmental noise level. The goal is to reduce the coding rate while retaining speech quality: under environmental noise, the speech quality stays close to that of the 40 kbps single-rate coder while the average coding rate stays close to that of the 16 kbps single-rate coder. The environmental noise level affects the coding rate and is computed for every window of speech samples. At high noise levels, more samples are coded at higher rates to preserve quality; at low noise levels, only strong speech signals are coded at higher rates and more samples are coded at lower rates to reduce the average rate. The influence of noise on the speech signal is considerably higher for small signals, and small signals have higher zero-crossing rates (ZCR). The method was simulated on a PC and implemented on a C6000 floating-point DSP board for real-time operation.

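The core of the method is the per-window rate decision driven by the estimated noise level. The sketch below illustrates only that decision logic, not the G.726 quantizer itself; the dB thresholds, the noise-floor tracker, and the ZCR cutoff are assumptions for the example.

```python
# Illustrative sketch (not the paper's DSP code) of a per-window rate decision:
# each speech window gets one of the four G.726 rates depending on the tracked
# background noise level and the window's own level and zero-crossing rate.

import numpy as np

RATES_KBPS = (16, 24, 32, 40)

def window_features(window):
    """window: 1-D float array. Return (energy in dB, zero-crossing rate)."""
    energy_db = 10 * np.log10(np.mean(window ** 2) + 1e-12)
    zcr = np.mean(np.abs(np.diff(np.sign(window)))) / 2
    return energy_db, zcr

def update_noise_level(noise_db, energy_db, alpha=0.95):
    """Track the noise floor by smoothing over low-level windows only."""
    return alpha * noise_db + (1 - alpha) * energy_db if energy_db < noise_db + 3 else noise_db

def choose_rate(energy_db, zcr, noise_db):
    snr = energy_db - noise_db
    if noise_db > -40:                        # noisy environment: spend more bits overall
        return 40 if snr > 10 else 32
    if snr > 20 and zcr < 0.3:                # quiet environment: only strong voiced speech
        return 40
    if snr > 10:
        return 32
    return 24 if snr > 3 else 16
```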

송화측음 및 실내소음이 송화 음성레벨에 미치는 영향 (Effects of Talker Sidetone and Room Noise on the Speech Level of a Talker)

  • 강경옥;강성훈
    • 한국음향학회지
    • /
    • Vol. 11, No. 1
    • /
    • pp.52-59
    • /
    • 1992
  • To quantify how a talker's speech level changes with talker sidetone and room noise during telephone calls, a speech level measurement algorithm was examined and the transmitted speech level was measured as a function of sidetone and room noise. The results show a monitoring behavior: as the sidetone changes, talkers adjust their speech level in proportion to how much their own voice is masked by the sidetone, keeping the perceived loudness of the voice returning to their own ears constant. The effect of room noise picked up through the transmitter was also examined as the sidetone varied; as the room noise increases, subjects unconsciously raise their voice by the amount that their own voice through the telephone handset is masked by the noise, again tending to keep the perceived loudness of the voice returning to their ears through the receiver constant.

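The study depends on a speech level measurement algorithm. A minimal version is sketched below: frame levels in dB averaged over "active" frames only, so that pauses do not pull the level down. Standardized active speech level methods are more elaborate; the frame length and activity margin used here are assumptions, not the authors' algorithm.

```python
# Minimal sketch of a speech-level measurement: per-frame RMS level in dB,
# averaged over frames above a crude speech-activity gate.

import numpy as np

def speech_level_db(signal, fs, frame_ms=20, activity_margin_db=16.0):
    """signal: 1-D float array; fs: sample rate in Hz. Return active speech level in dB."""
    frame_len = int(fs * frame_ms / 1000)
    frames = [signal[i:i + frame_len] for i in range(0, len(signal) - frame_len, frame_len)]
    levels = np.array([10 * np.log10(np.mean(f ** 2) + 1e-12) for f in frames])
    threshold = levels.max() - activity_margin_db     # keep frames near the loudest speech
    return float(np.mean(levels[levels >= threshold]))
```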

Weighted Finite State Transducer-Based Endpoint Detection Using Probabilistic Decision Logic

  • Chung, Hoon;Lee, Sung Joo;Lee, Yun Keun
    • ETRI Journal
    • /
    • Vol. 36, No. 5
    • /
    • pp.714-720
    • /
    • 2014
  • In this paper, we propose the use of data-driven probabilistic utterance-level decision logic to improve Weighted Finite State Transducer (WFST)-based endpoint detection. In general, endpoint detection is dealt with using two cascaded decision processes. The first process is frame-level speech/non-speech classification based on statistical hypothesis testing, and the second process is a heuristic-knowledge-based utterance-level speech boundary decision. To handle these two processes within a unified framework, we propose a WFST-based approach. However, a WFST-based approach has the same limitations as conventional approaches in that the utterance-level decision is based on heuristic knowledge and the decision parameters are tuned sequentially. Therefore, to obtain decision knowledge from a speech corpus and optimize the parameters at the same time, we propose the use of data-driven probabilistic utterance-level decision logic. The proposed method reduces the average detection failure rate by about 14% for various noisy-speech corpora collected for an endpoint detection evaluation.

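The abstract describes two cascaded decisions: frame-level speech/non-speech classification by statistical hypothesis testing, then an utterance-level speech boundary decision. The sketch below shows that two-stage structure in its conventional heuristic form (minimum-duration and hangover parameters); in the paper the second stage is replaced by data-driven probabilistic logic, which is not reproduced here.

```python
# Sketch of the two cascaded decisions described above (not ETRI's code): a
# frame-level likelihood-ratio test followed by a simple utterance-level
# boundary decision with minimum-duration / hangover parameters.

def frame_decisions(frame_llrs, llr_threshold=0.0):
    """Per-frame speech (1) / non-speech (0) from log-likelihood ratios."""
    return [1 if llr > llr_threshold else 0 for llr in frame_llrs]

def detect_endpoints(frame_llrs, min_speech=10, max_silence=30):
    """Return (start_frame, end_frame) of the utterance, or None if no speech found."""
    labels = frame_decisions(frame_llrs)
    start, end, run_speech, run_silence = None, None, 0, 0
    for i, s in enumerate(labels):
        if start is None:
            run_speech = run_speech + 1 if s else 0
            if run_speech >= min_speech:             # enough consecutive speech: begin
                start = i - min_speech + 1
        else:
            run_silence = run_silence + 1 if not s else 0
            if run_silence >= max_silence:           # long enough silence: end
                end = i - max_silence + 1
                break
    if start is not None and end is None:
        end = len(labels)
    return (start, end) if start is not None else None
```
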
경직형 뇌성마비아동의 말명료도 및 말명료도와 관련된 말 평가 변인 (Speech Evaluation Variables Related to Speech Intelligibility in Children with Spastic Cerebral Palsy)

  • 박지은;김향희;신지철;최홍식;심현섭;박은숙
    • 말소리와 음성과학
    • /
    • Vol. 2, No. 4
    • /
    • pp.193-212
    • /
    • 2010
  • The purpose of this study was to identify effective speech evaluation items by examining which speech variables predict speech intelligibility in children with cerebral palsy (CP). The subjects were 55 children with spastic-type cerebral palsy. For the speech evaluation, we administered a speech subsystem evaluation and a speech intelligibility test. The results are as follows. First, the speech subsystem evaluation consisted of 48 task items organized into an observational evaluation stage and three levels of severity, and these levels correlated with gross motor function, fine motor function, and age. Second, the evaluation items for the speech subsystems were grouped into seven factors. Third, 34 of the 48 task items correlated positively with the syllable intelligibility rating: four items in the observational evaluation stage and, among the nonverbal articulatory function items, 11 items in level one, 12 in level two, and eight in level three. Fourth, 23 of the 48 items correlated with the sentence intelligibility rating: one item in the observational evaluation stage (in the articulatory structure evaluation task), six items in level one, eight in level two, and eight in level three. Fifth, a total of 14 items influenced the syllable intelligibility rating. Sixth, a total of 13 items influenced the sentence intelligibility rating. According to these results, the articulatory function variables that influenced the speech intelligibility of the CP children were in the respiratory function task, the phonatory function task, and the lip- and chin-related tasks; no correlation was found for tongue function. These results can be applied to speech evaluation, to setting therapy goals, and to evaluating progress in children with CP. We studied only children with the spastic type of cerebral palsy, and there were fewer children with severe CP than with moderate CP, so the characteristics of children with other degrees of severity may need to be taken into account when evaluating them. Further study of speech evaluation variables in relation to the severity of speech intelligibility and to different types of cerebral palsy may be necessary.

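As a rough illustration of the item-selection analysis described above (relating each evaluation-task item to the intelligibility ratings), the sketch below rank-correlates item scores with a rating and keeps the significant items. The data layout and column names are hypothetical, and the study's own statistical procedure may differ.

```python
# Illustrative sketch: select evaluation items whose scores correlate with an
# intelligibility rating across children. Column names are hypothetical.

import pandas as pd
from scipy.stats import spearmanr

def items_related_to_intelligibility(df, item_columns, rating_column, alpha=0.05):
    """Return (item, rho, p) for items significantly and positively correlated with the rating."""
    selected = []
    for item in item_columns:
        rho, p = spearmanr(df[item], df[rating_column])
        if p < alpha and rho > 0:
            selected.append((item, rho, p))
    return sorted(selected, key=lambda t: -t[1])
```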

지적장애 아동의 롬바드 효과에 따른 말산출 특성 (The Lombard effect on the speech of children with intellectual disability)

  • 이현주;이지윤;김유경
    • 말소리와 음성과학
    • /
    • Vol. 8, No. 4
    • /
    • pp.115-122
    • /
    • 2016
  • This study investigates the acoustic-phonetic features and speech intelligibility of Lombard speech in children with intellectual disability, examining the effect of noise at three levels: no noise, 55 dB, and 65 dB. Eight children with intellectual disability read sentences and played speaking games, and their speech was analyzed in terms of intensity, pitch, the vowel space of /a/, /i/, and /u/, VAI(3), articulation rate, and speech intelligibility. The results showed, first, that intensity and pitch increased as the noise level increased; second, that VAI(3) increased as the noise level increased; third, that articulation rate decreased as noise intensity increased; and finally, that speech intelligibility increased as noise intensity increased. Lombard speech thus changed the VAI(3), vowel space, articulation rate, and speech intelligibility of these children as well. The study suggests that Lombard speech may be clinically useful for persons who have intellectual disability and difficulties with self-control.

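Two of the acoustic measures named above, VAI(3) and the vowel space, can be computed from the first two formants of the corner vowels /a/, /i/, /u/. The sketch below uses the commonly cited VAI definition and the triangle-area formula for the vowel space; treat both as assumptions rather than the study's exact computation, and the formant values in the example are invented for illustration.

```python
# Sketch of two acoustic measures from corner-vowel formants (Hz): the Vowel
# Articulation Index VAI(3) and the area of the /a/-/i/-/u/ vowel triangle.

def vai3(f1_a, f2_a, f1_i, f2_i, f1_u, f2_u):
    """Vowel Articulation Index for the three corner vowels (commonly used form)."""
    return (f2_i + f1_a) / (f1_i + f1_u + f2_u + f2_a)

def vowel_space_area(f1_a, f2_a, f1_i, f2_i, f1_u, f2_u):
    """Area of the /a/-/i/-/u/ triangle in the F1-F2 plane (Hz^2, shoelace formula)."""
    return abs(f1_a * (f2_i - f2_u) + f1_i * (f2_u - f2_a) + f1_u * (f2_a - f2_i)) / 2

# Example with invented formant values: larger VAI / area is read as less
# centralized (clearer) vowels.
print(vai3(850, 1220, 310, 2300, 330, 870))
print(vowel_space_area(850, 1220, 310, 2300, 330, 870))
```
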
음성장애의 중증도와 발화 수준에 따른 말 명료도의 변화 연구 (A Study on the Speech Intelligibility of Voice Disordered Patients according to the Severity and Utterance Level)

  • 표화영
    • 음성과학
    • /
    • Vol. 15, No. 2
    • /
    • pp.101-110
    • /
    • 2008
  • The purpose of this study was to investigate the speech intelligibility of patients with voice disorders when severity and utterance level are considered as variables. Based on severity, 12 patients were divided into three groups: G1, G2, and G3. Words, phrases, and sentences produced by the speakers were judged by four listeners with normal hearing, and the intelligibility scores of the three groups were compared. Speech intelligibility decreased as severity increased, and the difference was statistically significant. However, the mean differences among words, phrases, and sentences were not significant, and the variation of intelligibility with utterance level did not follow a regular pattern.

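For context, one common way to obtain an intelligibility score of the kind compared above is the percentage of target words that listeners transcribe correctly, averaged over listeners. The sketch below shows that generic scheme; it is not necessarily the exact procedure used in this study, and the example words are invented.

```python
# Generic word-identification intelligibility score: percent of target words
# each listener transcribes correctly, averaged over listeners.

def intelligibility_percent(target_words, listener_transcripts):
    """target_words: list of words; listener_transcripts: list of word lists."""
    scores = []
    for transcript in listener_transcripts:
        correct = sum(1 for t, h in zip(target_words, transcript) if t == h)
        scores.append(100.0 * correct / len(target_words))
    return sum(scores) / len(scores)

# Example with invented responses from two listeners
print(intelligibility_percent(
    ["우리", "집에", "손님이", "왔어요"],
    [["우리", "집에", "손님이", "왔어요"], ["우리", "집에", "선물이", "왔어요"]],
))
```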

Integrated Visual and Speech Parameters in Korean Numeral Speech Recognition

  • Lee, Sang-won;Park, In-Jung;Lee, Chun-Woo;Kim, Hyung-Bae
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2000년도 ITC-CSCC -2
    • /
    • pp.685-688
    • /
    • 2000
  • In this paper, image information is used to enhance Korean numeral speech recognition. First, a noisy environment was created with a Gaussian noise generator at 10 dB steps, and the generated noise was added to the original Korean numeral speech, which was then analyzed for recognition. The speech captured through the microphone was pre-emphasized with a coefficient of 0.95, and Hamming windowing, autocorrelation, and LPC analysis were applied. Second, the image obtained by the camera was converted to gray level, autocorrelated, and analyzed with the same LPC algorithm used for the speech analysis. Finally, Korean numeral speech recognition with image information outperformed speech-only recognition, especially for '3', '5', and '9'. Because the same LPC algorithm and simple image processing were used, no additional computation such as filtering was required, and the overall recognition algorithm remained simple.

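The speech front end listed in this abstract (pre-emphasis with 0.95, Hamming window, autocorrelation, LPC) corresponds to a textbook LPC analysis; a compact sketch using the Levinson-Durbin recursion is given below. The frame handling and LPC order are assumptions for illustration, not details taken from the paper.

```python
# Textbook LPC analysis sketch: pre-emphasis, Hamming window, autocorrelation,
# and the Levinson-Durbin recursion to obtain the predictor coefficients.

import numpy as np

def lpc_coefficients(frame, order=12, pre_emphasis=0.95):
    """frame: 1-D float array. Return LPC coefficients a[0..order] with a[0] = 1."""
    x = np.append(frame[0], frame[1:] - pre_emphasis * frame[:-1])   # pre-emphasis
    x = x * np.hamming(len(x))                                       # Hamming window
    r = [float(np.dot(x[:len(x) - k], x[k:])) for k in range(order + 1)]  # autocorrelation
    a = [1.0]
    err = r[0]
    for i in range(1, order + 1):                                    # Levinson-Durbin
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        a = [1.0] + [a[j] + k * a[i - j] for j in range(1, i)] + [k]
        err *= (1.0 - k * k)
    return np.array(a)
```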