• Title/Summary/Keyword: Distorted speech


Distorted Speech Rejection For Automatic Speech Recognition under CDMA Wireless Communication (CDMA이동통신환경에서의 음성인식을 위한 왜곡음성신호 거부방법)

  • Kim Nam Soo;Chang Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.23 no.8 / pp.597-601 / 2004
  • This paper introduces a pre-rejection technique for wireless-channel-distorted speech, with application to automatic speech recognition (ASR). Based on an analysis of speech signals distorted over a wireless communication channel, we propose a method that rejects channel-distorted speech with a small computational load. From a number of simulation results, we find that the pre-rejection algorithm enhances the robustness of speech recognition.

Fast Speaker Adaptation Based on Eigenspace-based MLLR Using Artificially Distorted Speech in Car Noise Environment (차량 잡음 환경에서 인위적 왜곡 음성을 이용한 Eigenspace-based MLLR에 기반한 고속 화자 적응)

  • Song, Hwa-Jeon;Jeon, Hyung-Bae;Kim, Hyung-Soon
    • Phonetics and Speech Sciences / v.1 no.4 / pp.119-125 / 2009
  • This paper proposes a fast speaker adaptation method based on eigenspace-based maximum likelihood linear regression (ES-MLLR), using artificially distorted speech for a telematics terminal in a car noise environment. The artificially distorted speech is built by adding various car noise signals, collected from a driving car, to speech signals collected from an idling car. For each environment, a transformation matrix is then estimated by ES-MLLR from the artificially distorted speech corresponding to that noise environment. In test mode, an online model is built as a weighted sum of the environment transformation matrices, depending on the driving condition. On a 3k-word recognition task on the telematics terminal, we achieve performance superior to ES-MLLR even when the latter uses adaptation data collected in the driving condition.
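
The online-combination step described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names, the dimensions, and the fixed environment weights are all hypothetical (the paper derives its weights from the driving condition). An MLLR transform is a d × (d+1) matrix applied to a bias-augmented mean vector.

```python
import numpy as np

def combine_transforms(env_transforms, env_weights):
    """Weighted sum of per-environment MLLR transforms (hypothetical helper)."""
    w = np.asarray(env_weights, dtype=float)
    w = w / w.sum()                      # normalize the weights
    return np.tensordot(w, env_transforms, axes=1)

def adapt_mean(W, mu):
    """Apply an MLLR transform W (d x (d+1)) to an extended mean [1; mu]."""
    ext = np.concatenate(([1.0], mu))    # bias-augmented mean vector
    return W @ ext

# toy example: 3 noise environments, 2-dimensional features
d = 2
rng = np.random.default_rng(0)
transforms = rng.standard_normal((3, d, d + 1))   # one transform per environment
weights = [0.2, 0.5, 0.3]                # stand-in for driving-condition weights
W_online = combine_transforms(transforms, weights)
mu_adapted = adapt_mean(W_online, np.zeros(d))
print(W_online.shape, mu_adapted.shape)  # → (2, 3) (2,)
```

Because the combination is a simple weighted sum, the online model can be rebuilt cheaply whenever the driving condition changes, which is what makes the adaptation "fast".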


A Study on Noise-Robust Methods for Broadcast News Speech Recognition (방송뉴스 인식에서의 잡음 처리 기법에 대한 고찰)

  • Chung Yong-joo
    • MALSORI / no.50 / pp.71-83 / 2004
  • Recently, broadcast news speech recognition has become one of the most attractive research areas. If we can automatically transcribe broadcast news and store its contents in text form instead of as the video or audio signal itself, it becomes much easier to search multimedia databases for what we need. However, the desired speech signal in broadcast news is usually affected by interfering signals such as background noise and/or music. Also, the speech of a reporter speaking over the telephone or with an ill-conditioned microphone is severely distorted by channel effects. Such interfered or distorted speech may be the main reason for poor performance in broadcast news speech recognition. In this paper, we investigate methods to cope with these problems and observe performance improvements in noisy broadcast news speech recognition.


Robust Histogram Equalization Using Compensated Probability Distribution

  • Kim, Sung-Tak;Kim, Hoi-Rin
    • MALSORI / v.55 / pp.131-142 / 2005
  • A mismatch between training and test conditions often causes a drastic decrease in the performance of speech recognition systems. In this paper, non-linear transformation techniques based on histogram equalization in the acoustic feature space are studied for reducing this mismatch. The purpose of histogram equalization (HEQ) is to convert the probability distribution of test speech into the probability distribution of training speech. Conventional histogram equalization methods consider only the probability distribution of the test speech; however, for noise-corrupted test speech, that probability distribution is itself distorted. A transformation function obtained from this distorted probability distribution may mis-transform the feature vectors, degrading the performance of histogram equalization. Therefore, this paper proposes a new method of calculating a noise-removed probability distribution, using the assumption that the CDF of noisy speech feature vectors consists of a speech-feature component and a noise-feature component; this compensated probability distribution is then used in the HEQ process. In the AURORA-2 framework, the proposed method reduced the error rate by over 44% in the clean training condition compared to the baseline system. For multi-condition training, the proposed methods are also better than the baseline system.
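
The core HEQ operation (before the paper's noise compensation) can be sketched as CDF matching: each test feature value is mapped to the training-distribution quantile at its empirical test-CDF position. The sketch below is a conventional, order-statistics-based HEQ, not the compensated variant the paper proposes; the function name and dimensions are illustrative.

```python
import numpy as np

def histogram_equalize(test_feat, train_feat):
    """Map each test feature dimension onto the training distribution
    by matching empirical CDFs (conventional HEQ sketch)."""
    out = np.empty_like(test_feat, dtype=float)
    n = test_feat.shape[0]
    # empirical CDF positions of the test frames in each dimension
    ranks = np.argsort(np.argsort(test_feat, axis=0), axis=0)
    cdf = (ranks + 0.5) / n
    for d in range(test_feat.shape[1]):
        # inverse training CDF evaluated at the test CDF values
        out[:, d] = np.quantile(train_feat[:, d], cdf[:, d])
    return out

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, size=(2000, 2))   # "clean" training features
test = rng.normal(3.0, 2.0, size=(500, 2))     # shifted/scaled "noisy" features
eq = histogram_equalize(test, train)
print(eq.mean(axis=0))                         # near the training mean (≈ 0)
```

The paper's contribution is precisely that this mapping misfires when the test CDF includes a noise component, which motivates estimating and removing that component first.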


Analysis of Feature Extraction Methods for Distinguishing the Speech of Cleft Palate Patients (구개열 환자 발음 판별을 위한 특징 추출 방법 분석)

  • Kim, Sung Min;Kim, Wooil;Kwon, Tack-Kyun;Sung, Myung-Whun;Sung, Mee Young
    • Journal of KIISE / v.42 no.11 / pp.1372-1379 / 2015
  • This paper presents an analysis of feature extraction methods used for distinguishing the speech of patients with cleft palates from that of people with normal palates. This research is a basic study toward the development of a software system for automatic recognition and restoration of speech disorders, in pursuit of improving the welfare of speech-disabled persons. Monosyllabic voice data for the experiments were collected from three groups: normal speech, cleft palate speech, and simulated cleft palate speech. The data consist of 14 basic Korean consonants, 5 complex consonants, and 7 vowels. Feature extraction is performed using three well-known methods: LPC, MFCC, and PLP. The pattern recognition process uses the acoustic model GMM. From our experiments, we conclude that the MFCC method is generally the most effective way to identify speech distortions. These results may contribute to the automatic detection and correction of the distorted speech of cleft palate patients, along with the development of a tool for identifying levels of speech distortion.
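
The classification scheme described here (per-class GMMs scored on extracted features) can be sketched in its simplest form: one diagonal-covariance Gaussian per class, with the class chosen by maximum total log-likelihood over an utterance's frames. All names and the random stand-in features are hypothetical; the paper uses real LPC/MFCC/PLP features and full GMMs.

```python
import numpy as np

def diag_gauss_loglik(X, mean, var):
    """Frame-wise log-likelihood under a diagonal-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mean) ** 2 / var, axis=1)

def train_class(X):
    """Single-component 'GMM' per class: just a mean and a variance floor."""
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def classify(X, models):
    """Pick the class whose model gives the highest total log-likelihood."""
    scores = [diag_gauss_loglik(X, m, v).sum() for m, v in models]
    return int(np.argmax(scores))

rng = np.random.default_rng(2)
normal_feats = rng.normal(0.0, 1.0, size=(300, 13))  # stand-in for MFCC frames
cleft_feats = rng.normal(1.5, 1.0, size=(300, 13))   # stand-in, shifted class
models = [train_class(normal_feats), train_class(cleft_feats)]
test = rng.normal(1.5, 1.0, size=(50, 13))           # utterance from class 1
print(classify(test, models))                        # → 1
```

Comparing LPC, MFCC, and PLP then amounts to running the same classifier over each feature type and measuring which separates the groups best, which is how the paper arrives at its MFCC conclusion.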

Speech Quality Measure in a Mobile Communication System Using PLP Cepstral Distance with CMS (심리 음향 켑스트럼 평균 차감법을 이용한 이동 전화망에서의 음질 평가)

  • Yun, J.J.;Park, S.W.;Park, Y.C.;Youn, D.H.;Cha, I.H.
    • Speech Sciences / v.6 / pp.163-179 / 1999
  • For the setup, management, and repair of a mobile communication system, continuous estimation of speech quality is required. Speech quality can be measured by listeners' judgements in a subjective test such as the MOS (Mean Opinion Score) test. However, since this method is laborious, expensive, and time-consuming, it is advisable to predict subjective speech quality via objective measures. This paper presents a robust objective speech quality measure, PLP-CMS (Perceptual Linear Predictive-Cepstral Mean Subtraction), which can predict subjective speech quality in mobile communication systems. PLP-CMS has a high correlation with subjective quality owing to PLP (Perceptual Linear Predictive) analysis, and its performance is robust to PSTN (Public Switched Telephone Network) channel effects thanks to CMS (Cepstral Mean Subtraction). To prove the performance of the proposed algorithm, we carried out subjective and objective quality estimation on speech samples distorted in various ways in a real mobile communication system. As a result, we demonstrated that PLP-CMS has a higher correlation with subjective quality than PSQM (Perceptual Speech Quality Measure) and PLP-CD (Perceptual Linear Predictive-Cepstral Distance).
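
The CMS step that makes this measure channel-robust can be sketched directly: a stationary channel adds a constant offset to the cepstra, and subtracting each utterance's cepstral mean removes it before the distance is computed. This is an illustrative sketch of the distance-with-CMS idea, not the paper's full PLP front end; the cepstral order and data are stand-ins.

```python
import numpy as np

def cms(cep):
    """Cepstral mean subtraction: removes a stationary channel offset."""
    return cep - cep.mean(axis=0, keepdims=True)

def cepstral_distance(ref, deg):
    """Mean frame-wise Euclidean distance between mean-subtracted cepstra."""
    r, d = cms(ref), cms(deg)
    return float(np.mean(np.linalg.norm(r - d, axis=1)))

rng = np.random.default_rng(3)
ref = rng.standard_normal((200, 9))     # stand-in reference cepstra
channel = np.full(9, 0.7)               # constant (stationary) channel offset
deg_channel_only = ref + channel        # channel effect, no real degradation
print(round(cepstral_distance(ref, deg_channel_only), 6))  # → 0.0
```

A pure channel offset yields zero distance after CMS, so the measure responds only to genuine degradation, which is why PLP-CMS is insensitive to PSTN channel effects.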


Discomfort caused by the circumferential comfortable retainer (CCR) as a removable maxillary retainer (상악 가철식 보정장치인 circumferential comfortable retainer (CCR)에 대한 불편감 평가)

  • Choi, Jin-Hugh;Moon, Cheol-Hyun
    • The korean journal of orthodontics / v.40 no.5 / pp.325-333 / 2010
  • Objective: The aim of this study was to describe the circumferential comfortable retainer (CCR) as a removable maxillary retainer with good potential for patient compliance and to evaluate the discomfort of the retainers, including distorted speech, gagging sensation, and appliance discomfort. Methods: Sixty-six orthodontic patients (23 male, 43 female; mean age, 23.42 ± 10.19 years) who received orthodontic treatment with fixed appliances were randomly assigned after debonding to two groups: a conventional wraparound retainer (CWR) group, whose retainer fully covers the palate with an acrylic plate and a highly polished surface, and a circumferential comfortable retainer (CCR) group, whose retainer has a horseshoe-shaped base plate with three folds in the anterior region. A questionnaire with a visual analog scale (VAS), consisting of a 100-mm horizontal line with the end-points labeled "no discomfort" on the left and "worst discomfort" on the right, was administered for distorted speech, gagging sensation, and discomfort after 4 weeks of retainer wear. The Mann-Whitney test was used to test the hypothesis that there was no difference between the two retainers. Results: For distorted speech and discomfort, the CCR group had significantly lower values than the CWR group (p < 0.05). For gagging sensation, the CCR group had lower values than the CWR group, but the difference was not statistically significant (p = 0.146). Conclusions: The results suggest that the CCR might facilitate patient compliance and thereby improve maintenance of the fixed orthodontic treatment outcome.
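
The Mann-Whitney test the study uses compares the two groups' VAS scores by ranks rather than means. A minimal sketch of the U statistic, with average ranks for ties and without the p-value computation, is below; the VAS values are invented for illustration and are not the study's data.

```python
import numpy as np

def mann_whitney_u(a, b):
    """Rank-sum computation of the Mann-Whitney U statistic
    (average ranks for ties; no p-value; sketch only)."""
    combined = np.concatenate([a, b])
    order = combined.argsort()
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(combined) + 1)
    for v in np.unique(combined):         # average ranks over tied values
        mask = combined == v
        ranks[mask] = ranks[mask].mean()
    r1 = ranks[: len(a)].sum()
    u1 = r1 - len(a) * (len(a) + 1) / 2
    return u1, len(a) * len(b) - u1

# hypothetical VAS scores (mm) for distorted speech, CCR vs CWR
ccr = np.array([12, 20, 15, 8, 25], dtype=float)
cwr = np.array([40, 55, 33, 47, 60], dtype=float)
print(mann_whitney_u(ccr, cwr))  # → (0.0, 25.0)
```

A U of 0 for one group (as in this contrived example, where every CCR score is below every CWR score) is the most extreme separation possible; real data land somewhere between 0 and n1·n2.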

Perceptual Characteristics of Korean Vowels Distorted by the Frequency Band Limitation (주파수 대역 제한에 의한 한국어 모음의 지각 특성 분석)

  • Kim, YeonWhoa;Choi, DaeLim;Lee, Sook-Hyang;Lee, YongJu
    • Phonetics and Speech Sciences / v.6 no.1 / pp.85-93 / 2014
  • This paper investigated the effects of frequency band limitation on the perceptual characteristics of Korean vowels. Monosyllabic speech (144 syllables of CV type, 56 syllables of VC type, 8 syllables of V type) produced by two announcers was low- and high-pass filtered with cutoff frequencies ranging from 300 to 5000 Hz. Six listeners with normal hearing performed perception tests for each filter type and cutoff frequency. We report phoneme recognition rates and the types of perception errors for band-limited Korean vowels, to examine how frequency distortion in the process of speech transmission affects listeners' perception.
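
The band-limitation stimulus manipulation described here can be sketched as an idealized brick-wall filter in the frequency domain: transform, zero the bins beyond the cutoff, and transform back. This is an illustrative sketch only; the study's actual filter design is not specified in the abstract, and the test signal below is synthetic.

```python
import numpy as np

def lowpass_fft(x, fs, cutoff):
    """Brick-wall low-pass filter via FFT (idealized band limitation)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[freqs > cutoff] = 0.0               # discard everything above the cutoff
    return np.fft.irfft(X, n=len(x))

fs = 16000
t = np.arange(fs) / fs                    # 1 second of audio at 16 kHz
x = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 4000 * t)
y = lowpass_fft(x, fs, cutoff=1000)       # keeps only the 300 Hz component
print(round(float(np.abs(np.fft.rfft(y))[4000]), 3))  # 4 kHz bin ≈ 0
```

A high-pass version follows by flipping the comparison to `freqs < cutoff`, covering both filter types used in the study.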

Relationship between Speech Perception in Noise and Phonemic Restoration of Speech in Noise in Individuals with Normal Hearing

  • Vijayasarathy, Srikar;Barman, Animesh
    • Journal of Audiology & Otology / v.24 no.4 / pp.167-173 / 2020
  • Background and Objectives: Top-down restoration of distorted speech, tapped as phonemic restoration of speech in noise, may be a useful tool for understanding the robustness of perception in adverse listening situations. However, the relationship between phonemic restoration and speech perception in noise is not empirically clear. Subjects and Methods: Twenty adults (40-55 years) with normal audiometric findings took part in the study. Sentence perception in noise was studied at various signal-to-noise ratios (SNRs) to estimate the SNR yielding a 50% score. Performance was also measured for sentences interrupted by silence and for sentences interrupted by speech noise at -10, -5, 0, and 5 dB SNR. The score in the silence-interruption condition was subtracted from the score in the noise-interruption condition to determine the phonemic restoration magnitude. Results: Fairly robust improvements in speech intelligibility were found when the sentences were interrupted by speech noise instead of silence. The improvement with increasing noise level was non-monotonic and reached a maximum at -10 dB SNR. A significant correlation was found between speech perception in noise and phonemic restoration of sentences interrupted by speech noise at -10 dB SNR. Conclusions: It is possible that perception of speech in noise is associated with top-down processing of speech, tapped as phonemic restoration of interrupted speech. More research with a larger sample size is indicated, since restoration is affected by the type of speech material and noise used, age, working memory, and linguistic proficiency, and shows large individual variability.
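
The restoration-magnitude computation in this abstract is a simple difference score, taken per noise SNR. The numbers below are invented for illustration (the study reports group data, not these values); only the arithmetic follows the abstract's definition.

```python
import numpy as np

# hypothetical intelligibility scores (%) for one listener
snrs = [-10, -5, 0, 5]                       # dB SNR of the interrupting noise
noise_scores = np.array([68.0, 61.0, 55.0, 50.0])  # noise-interruption scores
silence_score = 42.0                         # silence-interruption score

# restoration magnitude = noise-interruption score minus silence-interruption score
restoration = noise_scores - silence_score
best_snr = snrs[int(np.argmax(restoration))]
print(best_snr, restoration.max())           # → -10 26.0
```

With these made-up values the restoration peaks at -10 dB SNR, the same condition at which the study found the maximum restoration and the significant correlation with speech perception in noise.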
