• Title/Summary/Keyword: Speech discrimination

Search results: 156

Performance Comparison of Feature Parameters and Classifiers for Speech/Music Discrimination (음성과 음악 분류를 위한 특징 파라미터와 분류 방법의 성능비교)

  • Kim Su Mi; Kim Hyung Soon
    • Proceedings of the KSPS conference / 2003.05a / pp.149-152 / 2003
  • In this paper, we present a performance comparison of feature parameters and classifiers for speech/music discrimination. Experiments were carried out with six feature parameters and three classifiers. It turns out that the three classifiers show similar performance. The feature set that captures the temporal and spectral structure of the signal yields good performance, while the phone-based feature set shows relatively inferior performance.

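The abstract does not name the six feature parameters or the three classifiers, so the following is only a minimal sketch of the general pipeline: frame-level temporal/spectral features (zero-crossing rate and spectral centroid as stand-ins) feeding one statistical model per class. The GMM classifier, feature choice, and all settings here are assumptions for illustration.

```python
# Minimal speech/music discrimination sketch (hypothetical features and
# classifier; the paper's six parameters and three classifiers are not named).
import numpy as np
from sklearn.mixture import GaussianMixture

def frame_features(signal, sr, frame_len=1024, hop=512):
    """Zero-crossing rate and spectral centroid per frame (stand-in features)."""
    feats = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
        spec = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
        feats.append([zcr, centroid])
    return np.array(feats)

def train(speech_feats, music_feats, n_mix=4):
    """Fit one GMM per class on frame features."""
    return {label: GaussianMixture(n_components=n_mix).fit(X)
            for label, X in [("speech", speech_feats), ("music", music_feats)]}

def classify(models, feats):
    """Label a clip by whichever class model gives higher average log-likelihood."""
    return max(models, key=lambda lbl: models[lbl].score(feats))
```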

Korean Auditory Discrimination Test (한국어 청취 판별 검사)

  • Lee Hyun Bok; Kim Sun Hee
    • MALSORI / no.33_34 / pp.91-98 / 1997
  • Auditory discrimination, a very basic and important perceptual skill in children, is a necessary condition for effective learning. It is therefore necessary to devise a standardized test tool for reliable assessment of children's auditory discrimination ability. The Korean Auditory Discrimination Test (KADT) is a tentative test tool that the authors have devised to meet this demand, i.e., to test the auditory discrimination ability of Korean children, both normal and hearing- or speech-impaired, between the ages of 4 and 8. The KADT consists of 40 pairs of words arranged in a systematic manner, of which thirty are 'minimal pairs' of words and the rest homophonous synonyms. The 30 minimal pairs are composed so that the major phonological contrasts involving consonants and vowels in initial, medial and final positions are duly represented. The test score is determined by the number of correct responses made by the child. Further attempts will be made to refine and improve the KADT in the future.

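As a small illustration of the scoring rule described above (one point per correctly judged pair), here is a hypothetical sketch; the actual 40 KADT items are not listed in the abstract, so the items below are invented placeholders.

```python
# KADT-style scoring sketch: one point per correctly judged word pair
# (the real test items are not given in the abstract; these are illustrative).
def score_kadt(responses, answer_key):
    """responses/answer_key: dict mapping item id -> 'same' or 'different'."""
    assert set(responses) == set(answer_key), "every item must be answered"
    return sum(responses[item] == answer_key[item] for item in answer_key)

answer_key = {1: "different", 2: "same", 3: "different"}   # placeholder items
responses  = {1: "different", 2: "different", 3: "different"}
print(score_kadt(responses, answer_key), "/", len(answer_key))  # -> 2 / 3
```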

F-ratio of Speaker Variability in Emotional Speech

  • Yi, So-Pae
    • Speech Sciences / v.15 no.1 / pp.63-72 / 2008
  • Various acoustic features were extracted and analyzed to estimate the inter- and intra-speaker variability of emotional speech. Tokens of the vowel /a/ from sentences spoken in different emotional modes (sadness, neutral, happiness, fear and anger) were analyzed. All of the acoustic features (fundamental frequency, spectral slope, HNR, H1-A1 and formant frequency) contributed more to inter- than to intra-speaker variability across all emotions. Each acoustic feature of the speech signal showed a different degree of contribution to speaker discrimination in different emotional modes. Sadness and neutral yielded greater speaker discrimination than the other emotional modes (happiness, fear and anger, in descending order of F-ratio). In other words, speaker specificity was better represented in sadness and neutral than in happiness, fear and anger for all of the acoustic features.

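The F-ratio in this line of work is the ratio of between-speaker variance to within-speaker variance of a feature: higher values mean the feature separates speakers better. A minimal sketch, assuming feature values are grouped per speaker (the data layout and toy numbers are illustrative, not the paper's):

```python
# F-ratio sketch: between-speaker variance over within-speaker variance for one
# acoustic feature (e.g. F0 of /a/ tokens).
import numpy as np

def f_ratio(tokens_by_speaker):
    """tokens_by_speaker: list of 1-D arrays, one array of feature values per speaker."""
    speaker_means = np.array([np.mean(t) for t in tokens_by_speaker])
    grand_mean = np.mean(np.concatenate(tokens_by_speaker))
    between = np.mean((speaker_means - grand_mean) ** 2)   # variance of speaker means
    within = np.mean([np.var(t) for t in tokens_by_speaker])  # mean per-speaker variance
    return between / within

# Higher F-ratio -> the feature carries more speaker specificity; the paper
# compares this across emotions (sadness, neutral, happiness, fear, anger).
rng = np.random.default_rng(0)
f0 = [rng.normal(mu, 10.0, size=20) for mu in (110, 135, 160)]  # 3 toy speakers
print(f_ratio(f0))
```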

A Speech-Phonetic Study on the Pronunciation of the Openbite Patients (개교환자의 발성에 관한 언어 음성학적 연구)

  • Kim, Ki-Dal; Yang, Won Sik
    • The Korean Journal of Orthodontics / v.21 no.2 s.34 / pp.287-307 / 1991
  • This study aimed at examining the speech defects of openbite patients, which were analyzed in terms of formant frequencies for vowels and word pronunciation length for consonants. In addition, upper and lower lip (perioral muscle) activity was tested by EMG, tongue force was measured with a strain gauge, and a speech discrimination test was carried out. One experimental group and one control group were used for this study, composed respectively of six female openbite patients and six normal-occlusion females. Eight monophthongs, two fricatives and two affricates were chosen for speech analysis. Speech from the two groups was recorded and then analyzed with the ILS/PC-1 software. The four hundred most frequently used monosyllables were also chosen for the discrimination score. Openbite patients showed the following characteristics: 1. Abnormal $F_2$ for /a/, $/\varepsilon/$, /e/ and /i/, and abnormal $F_1$ for /e/ and /a/. 2. Significantly elongated pronunciation of /h/ and $/C^h/$, and somewhat elongated pronunciation of /s/ and /c/. 3. Significantly greater upper lip activity in the EMG test during pronunciation of the bilabial consonants. 4. Relatively weak tongue force in the strain gauge measurement. 5. In the speech discrimination test, a high rate of misarticulation for (a) initial /p/, /s'/ and /ts'/, (b) /a/, $/\varepsilon/$, /e/, /je/, /o/, $/\phi/$, /jo/, /u/, /we/ and /i/, and (c) final (equation omitted).

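The paper's formant measurements were made with the ILS/PC-1 package; as a rough modern stand-in, the sketch below estimates formants by the standard LPC root-solving method. The LPC order, pre-emphasis coefficient, and 90 Hz cutoff are conventional assumptions, not values from the paper.

```python
# Formant-estimation sketch via LPC root-solving (a standard method, not the
# paper's ILS/PC-1 pipeline).
import numpy as np
import librosa

def formants(frame, sr, order=10):
    frame = np.asarray(frame, dtype=float)
    frame = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])  # pre-emphasis
    frame = frame * np.hamming(len(frame))                      # taper the frame
    a = librosa.lpc(frame, order=order)          # LPC polynomial coefficients
    roots = [r for r in np.roots(a) if np.imag(r) > 0]  # keep upper half-plane
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    return [f for f in freqs if f > 90]          # drop near-DC roots; F1, F2, ...
```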

Speech Perception and Production of English Postvocalic Voicing by Korean and English Speakers

  • Chang, Woo-Hyeok
    • Speech Sciences / v.13 no.2 / pp.107-120 / 2006
  • The main purpose of this study is to investigate whether Korean learners can use the vowel duration cue to distinguish voicing contrasts in word-final consonants in English. Given that the Korean group's performance on the auditory task was much better than their performance on the identification task or on the production task, we conclude that the AX discrimination task makes contact with a different layer of perception. In particular, the AX discrimination task can be done at the auditory or phonetic level, where differences in vowel length are still encoded in the representation. In contrast, the identification and production tasks probe the mental representation of vowel length and voicing. It was also found that Korean speakers stored neither vowel length nor voicing in memorized representations and did not internalize lengthening of the preceding vowel as a rule for differentiating the voicing contrasts of final consonants, even though they were able to detect the acoustic differences in vowel duration provided that they were tested with an appropriate task.

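The abstract does not say how the AX discrimination task was scored; a standard choice in perception studies is the signal-detection sensitivity measure d', sketched below with a log-linear correction for extreme rates (the trial counts in the example are invented).

```python
# Generic d' scoring for an AX discrimination task (the paper's exact scoring
# is not given; this is the standard signal-detection measure).
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores at 0% or 100% rates.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g. "different" trials: 38 hits, 2 misses; "same" trials: 5 FAs, 35 CRs
print(d_prime(38, 2, 5, 35))
```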

The Evaluation of the Fuzzy-Chaos Dimension and the Fuzzy-Lyapunov Dimension (화자인식을 위한 퍼지-상관차원과 퍼지-리아프노프차원의 평가)

  • Yoo, Byong-Wook; Park, Hyun-Sook; Kim, Chang-Seok
    • Speech Sciences / v.7 no.3 / pp.167-183 / 2000
  • In this paper, we propose two kinds of chaos dimensions, the fuzzy correlation dimension and the fuzzy Lyapunov dimension, for speaker recognition. The proposal is based on the observation that chaos analysis enables us to exploit the non-linear information contained in an individual's speech signal and thereby to obtain superior discrimination capability. We confirm that the proposed fuzzy chaos dimensions play an important role in enhancing the speaker recognition ratio by absorbing the variations of the reference and test pattern attractors. In order to evaluate the proposed fuzzy chaos dimensions, we apply them to speaker recognition. In other words, we investigate the validity of the speaker recognition parameters by estimating the recognition error according to the discrimination error of an individual speaker from the reference pattern.

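The paper's fuzzy weighting is not specified in the abstract, but the plain correlation dimension it builds on can be sketched: delay-embed the signal, compute the correlation integral C(r), and fit the slope of log C(r) against log r (Grassberger-Procaccia). The embedding dimension, delay, and radius range below are illustrative, and the O(N²) pairwise distances limit this to short signals.

```python
# Correlation-dimension sketch (plain Grassberger-Procaccia, without the fuzzy
# weighting the paper proposes).
import numpy as np

def embed(x, dim, tau):
    """Delay-embed a 1-D series into points of dimension `dim` with lag `tau`."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau:i * tau + n] for i in range(dim)], axis=1)

def correlation_dimension(x, dim=4, tau=5, n_radii=10):
    pts = embed(np.asarray(x, dtype=float), dim, tau)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d = d[np.triu_indices(len(pts), k=1)]            # all pairwise distances
    radii = np.logspace(np.log10(np.percentile(d, 5)),
                        np.log10(np.percentile(d, 50)), n_radii)
    c = np.array([np.mean(d < r) for r in radii])    # correlation integral C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope                                     # estimate of D2
```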

A Study On The Automatic Discrimination Of The Korean Alveolar Stops (한국어 파열음의 자동 인식에 대한 연구 : 한국어 치경 파열음의 자동 분류에 관한 연구)

  • Choi, Yun-Seok; Kim, Ki-Seok; Hwang, Hee-Yeung
    • Proceedings of the KIEE Conference / 1987.11a / pp.330-333 / 1987
  • This paper is a study on the automatic discrimination of the Korean alveolar stops. For automatic speech recognition of Korean, it is necessary to discriminate aspirated and tense plosives, since Korean distinguishes aspirated and tense plosives from lax ones. In order to find acoustic cues for automatic recognition of [ㄲ, ㄸ, ㅃ], we experimented with the discrimination of [ㄷ, ㄸ, ㅌ]. As acoustic parameters we used temporal cues such as VOT and silence duration, and energy cues such as the ratio of high-frequency to low-frequency energy. VCV speech data, where V is one of the 8 simple vowels and C is one of the 3 alveolar stops, were used in the experiments. Experiments on the 192 speech tokens yielded recognition rates of about 82%-95%.

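Of the cues named above, the energy cue can be sketched directly: the ratio of high- to low-frequency energy in the burst spectrum. The 2 kHz band edge and the VOT/silence thresholds in the toy decision rule are assumptions for illustration; the abstract gives no concrete values.

```python
# Sketch of one energy cue from the paper: high- over low-frequency energy in
# the stop burst. VOT and silence duration are taken as given, not detected.
import numpy as np

def band_energy_ratio(burst, sr, split_hz=2000):
    spec = np.abs(np.fft.rfft(burst)) ** 2
    freqs = np.fft.rfftfreq(len(burst), d=1.0 / sr)
    low = spec[freqs < split_hz].sum()
    high = spec[freqs >= split_hz].sum()
    return high / (low + 1e-12)

# A toy rule combining temporal cues, with illustrative thresholds: long VOT
# suggests the aspirated stop; short VOT after a long silence, the tense stop.
def classify_stop(vot_ms, silence_ms):
    if vot_ms > 50:
        return "ㅌ (aspirated)"
    return "ㄸ (tense)" if silence_ms > 100 else "ㄷ (lax)"
```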

The Study for Advancing the Performance of Speaker Verification Algorithm Using Individual Voice Information (개별 음향 정보를 이용한 화자 확인 알고리즘 성능향상 연구)

  • Lee, Je-Young; Kang, Sun-Mee
    • Speech Sciences / v.9 no.4 / pp.253-263 / 2002
  • In this paper, we propose a new speaker recognition algorithm which identifies the speaker, for candidates with a high misrecognition rate under the existing algorithm, using information obtained by intensive analysis of speech features such as pitch, intensity, duration and formants, which are crucial parameters of an individual voice. DTW (Dynamic Time Warping) is used to test the discriminative power of each parameter. We newly set the threshold range that affects discriminative power in speaker verification, such that candidates falling within the new threshold range are finally discriminated in a subsequent stage of acoustic parameter analysis. In a speaker verification test on a voice DB consisting of secret words spoken by 25 males and 25 females, recorded at 8 kHz with 16-bit resolution, the proposed algorithm shows about a 1% performance improvement over the existing algorithm.

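DTW, the matching method named in the abstract, can be sketched in a few lines: dynamic programming over a local-distance matrix with the usual three-way recurrence. The verification threshold below is illustrative; the paper's contribution is precisely in tuning that threshold range.

```python
# Minimal DTW sketch, the distance measure the paper uses to test each
# parameter's discriminative power (pitch, intensity, duration, formant tracks).
import numpy as np

def dtw_distance(a, b):
    """a, b: (T,) or (T, D) feature sequences; returns accumulated alignment cost."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    if a.ndim == 1:
        a = a[:, None]
    if b.ndim == 1:
        b = b[:, None]
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def verify(test_seq, reference_seq, threshold=50.0):
    # Accept the identity claim if the alignment cost to the claimed speaker's
    # reference template is below the threshold (value here is illustrative).
    return dtw_distance(test_seq, reference_seq) < threshold
```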

A Study on the Context-dependent Speaker Recognition Adopting the Method of Weighting the Frame-based Likelihood Using SNR (SNR을 이용한 프레임별 유사도 가중방법을 적용한 문맥종속 화자인식에 관한 연구)

  • Choi, Hong-Sub
    • MALSORI / no.61 / pp.113-123 / 2007
  • Environmental differences between the training and testing modes are generally considered the critical factor in the performance degradation of speaker recognition systems. In particular, speaker recognition systems generally try to obtain speech that is as clean as possible for training the speaker model, but this does not hold in the real testing phase because of environmental and channel noise. In this paper, a new method of weighting the frame-based likelihood according to the frame SNR is proposed to cope with this problem, exploiting the strong correlation between speech SNR and speaker discrimination rate. To verify the usefulness of the proposed method, it is applied to a context-dependent speaker identification system. Experimental results on the cellular phone speech DB designed by ETRI for Korean speaker recognition show that the proposed method is effective, improving identification accuracy by up to 11%.

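The abstract does not give the paper's exact SNR-to-weight mapping, so the sketch below uses one plausible choice: a sigmoid that down-weights low-SNR frames before summing per-frame log-likelihoods across speaker models. The midpoint and slope values are assumptions.

```python
# Sketch of SNR-weighted frame likelihoods for speaker identification
# (the weighting function is an assumed sigmoid, not the paper's formula).
import numpy as np

def frame_snr_db(frames, noise_power):
    """frames: (N, frame_len) array; noise_power: estimated noise power."""
    power = np.mean(frames ** 2, axis=1)
    return 10.0 * np.log10(power / (noise_power + 1e-12) + 1e-12)

def weighted_log_likelihood(frame_loglikes, snr_db, midpoint=10.0, slope=0.5):
    w = 1.0 / (1.0 + np.exp(-slope * (snr_db - midpoint)))   # weight in (0, 1)
    return np.sum(w * frame_loglikes)

def identify(models_loglikes, snr_db):
    """models_loglikes: dict speaker -> per-frame log-likelihood array.
    Pick the speaker whose SNR-weighted total likelihood is highest."""
    return max(models_loglikes,
               key=lambda s: weighted_log_likelihood(models_loglikes[s], snr_db))
```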