• Title/Summary/Keyword: Speech Parameter


Improving Speech/Music Discrimination Parameter Using Time-Averaged MFCC (MFCC의 단구간 시간 평균을 이용한 음성/음악 판별 파라미터 성능 향상)

  • Choi, Mu-Yeol;Kim, Hyung-Soon
    • MALSORI
    • /
    • no.64
    • /
    • pp.155-169
    • /
    • 2007
  • Discrimination between speech and music is important in many multimedia applications. In our previous work, focusing on the spectral change characteristics of speech and music, we presented a method using the mean of minimum cepstral distances (MMCD), and it showed a very high discrimination performance. In this paper, to further improve the performance, we propose to employ time-averaged MFCC in computing the MMCD. Our experimental results show that the proposed method enhances the discrimination between speech and music. Moreover, the proposed method overcomes the weakness of the conventional MMCD method whose performance is relatively sensitive to the choice of the frame interval to compute the MMCD.

  • PDF
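The MMCD idea described above can be sketched in a few lines. This is not the authors' code; it is a minimal illustration assuming a Euclidean cepstral distance, a symmetric search window of `interval` frames, and a simple moving average for the time-averaged MFCC:

```python
import numpy as np

def time_averaged_mfcc(mfcc, win=3):
    """Smooth each MFCC coefficient with a short moving average over time.
    mfcc: (n_frames, n_coeffs) array."""
    kernel = np.ones(win) / win
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, mfcc)

def mmcd(mfcc, interval=5):
    """Mean of minimum cepstral distances: for each frame, the minimum
    distance to any other frame within +/- `interval`, averaged over frames.
    Speech changes spectrally faster than music, so it tends to score higher."""
    n = len(mfcc)
    mins = []
    for i in range(n):
        lo, hi = max(0, i - interval), min(n, i + interval + 1)
        dists = [np.linalg.norm(mfcc[i] - mfcc[j])
                 for j in range(lo, hi) if j != i]
        mins.append(min(dists))
    return float(np.mean(mins))
```

Smoothing the MFCC trajectory before the distance computation reduces frame-to-frame fluctuation, which is the effect the paper exploits.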

A Study on the Performance of TDNN-Based Speech Recognizer with Network Parameters

  • Nam, Hojung;Kwon, Y.;Paek, Inchan;Lee, K.S.;Yang, Sung-Il
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.2E
    • /
    • pp.32-37
    • /
    • 1997
  • This paper proposes an isolated speech recognition method for Korean digits using a TDNN (Time Delay Neural Network), which is able to recognize time-varying speech properties. We also investigate the effect of the TDNN's network parameters: the number of hidden layers and the time delays. The TDNNs in our experiments have 2 or 3 hidden layers and several time-delay settings. The experimental results show that the TDNN structure with 2 hidden layers gives good results for speech recognition of Korean digits. Misrecognition caused by time delays can be reduced by changing the TDNN structure, and misrecognition unrelated to time delays can be reduced by changing the input patterns.

  • PDF
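The time-delay structure investigated above amounts to a 1-D convolution over frames: each output frame sees several consecutive input frames. A minimal sketch, not the original network; the weights are random and the layer sizes and delays are assumed purely for illustration:

```python
import numpy as np

def tdnn_layer(x, w, b):
    """One TDNN layer as a convolution over time.
    x: (T, d_in) frames; w: (delay, d_in, d_out); b: (d_out,).
    `delay` is the number of consecutive frames each unit looks at."""
    delay, d_in, d_out = w.shape
    T = x.shape[0] - delay + 1
    out = np.empty((T, d_out))
    for t in range(T):
        window = x[t:t + delay]                                # (delay, d_in)
        out[t] = np.tanh(np.tensordot(window, w,
                                      axes=([0, 1], [0, 1])) + b)
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=(20, 16))                # 20 frames of 16 features
h1 = tdnn_layer(x, 0.1 * rng.normal(size=(3, 16, 8)), np.zeros(8))  # delay 3
h2 = tdnn_layer(h1, 0.1 * rng.normal(size=(5, 8, 4)), np.zeros(4))  # delay 5
scores = h2.mean(axis=0)                     # time-averaged class scores
```

Stacking two such layers gives the 2-hidden-layer variant the paper found to work well; increasing the `delay` dimension of each weight tensor changes the time-delays under study.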

SPATIAL EXPLANATIONS OF SPEECH PERCEPTION: A STUDY OF FRICATIVES

  • Choo, Won;Mark Huckvale
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.399-403
    • /
    • 1996
  • This paper addresses issues of perceptual constancy in speech perception through the use of a spatial metaphor for speech sound identity, as opposed to the more conventional characterisation with multiple interacting acoustic cues. This spatial representation leads to a correlation between phonetic, acoustic and auditory analyses of speech sounds, which can serve as the basis for a model of speech perception based on the general auditory characteristics of sounds. The correlations between the phonetic, perceptual and auditory spaces of the set of English voiceless fricatives /f $\theta$ s $\int$ h/ are investigated. The results show that the perception of fricative segments may be explained in terms of a 2-dimensional auditory space in which each segment occupies a region. The dimensions of the space were found to be the frequency of the main spectral peak and the 'peakiness' of the spectrum. These results support the view that perception of a segment is based on its occupancy of a multi-dimensional parameter space. In this way, final perceptual decisions on segments can be postponed until higher-level constraints can also be met.

  • PDF
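The two dimensions reported for the fricative space can be approximated from a power spectrum. A rough sketch, assuming "peakiness" is measured as the peak-to-mean power ratio (the paper's exact measure may differ):

```python
import numpy as np

def fricative_features(signal, sr):
    """Return (main spectral peak frequency in Hz, peakiness) for a frame.
    Peakiness here is the assumed peak-to-mean ratio of the power spectrum."""
    windowed = signal * np.hanning(len(signal))
    spec = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    peak_hz = freqs[np.argmax(spec)]
    peakiness = spec.max() / spec.mean()
    return float(peak_hz), float(peakiness)
```

A strongly sibilant sound like /s/ should land at a high peak frequency with high peakiness, while a diffuse fricative like /h/ or /f/ should show a much flatter spectrum.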

Performance Comparison between the PMC and VTS Method for the Isolated Speech Recognition in Car Noise Environments (자동차 잡음환경 고립단어 음성인식에서의 VTS와 PMC의 성능비교)

  • Chung, Yong-Joo;Lee, Seung-Wook
    • Speech Sciences
    • /
    • v.10 no.3
    • /
    • pp.251-261
    • /
    • 2003
  • There have been many research efforts to overcome the problems of speech recognition in noisy conditions. Among noise-robust speech recognition methods, model-based adaptation approaches have been shown to be quite effective. In particular, the PMC (parallel model combination) method is very popular and has been shown to give considerably improved recognition results compared with conventional methods. In this paper, we experimented with the VTS (vector Taylor series) algorithm, which is also based on model parameter transformation but has not attracted much interest from researchers in this area. To verify its effectiveness, we employed the algorithm in a continuous-density HMM (hidden Markov model). We compared the performance of the VTS algorithm with the PMC method and found that it gave better results than the PMC method.

  • PDF
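Both methods build on the same log-spectral mismatch function, y = log(exp(x) + exp(n)), which describes how noise corrupts clean-speech features. A toy sketch of the mean combination only, ignoring variances, channel terms, and the cepstral-to-log-spectral rotation that full PMC and VTS perform:

```python
import numpy as np

def pmc_lognormal_mean(mu_speech, mu_noise, g=1.0):
    """Log-add approximation used in PMC: map log-spectral means to the
    linear domain, add speech (with gain g) and noise, and map back."""
    return np.log(g * np.exp(mu_speech) + np.exp(mu_noise))

def vts_mean(mu_speech, mu_noise):
    """First-order VTS expands y = log(exp(x) + exp(n)) around the model
    means; at the expansion point the mean estimate is the log-add value."""
    return np.logaddexp(mu_speech, mu_noise)
```

At the expansion point the two mean estimates coincide; the methods differ in how variances are transformed and in the higher-order behaviour, which is where the paper's performance comparison comes in.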

A Study on the Optimal Mahalanobis Distance for Speech Recognition

  • Lee, Chang-Young
    • Speech Sciences
    • /
    • v.13 no.4
    • /
    • pp.177-186
    • /
    • 2006
  • In an effort to enhance the quality of feature vector classification and thereby reduce the recognition error rate of speaker-independent speech recognition, we employ the Mahalanobis distance in the calculation of the similarity measure between feature vectors. The metric matrix of the Mahalanobis distance is assumed to be diagonal to reduce memory and computation costs. We propose that the diagonal elements be given in terms of the variances of the feature vector components. Geometrically, this prescription tends to redistribute the set of data into the shape of a hypersphere in the feature vector space. The idea is applied to speech recognition by hidden Markov models with fuzzy vector quantization. The result shows that recognition is improved by an appropriate choice of the relevant adjustable parameter. The Viterbi score difference of the two winners in the recognition test shows that the general behavior is in accord with that of the recognition error rate.

  • PDF
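A diagonal Mahalanobis distance with variance-derived weights can be written directly. The exponent `alpha` below is an assumed stand-in for the paper's adjustable parameter: `alpha = 0` recovers the plain Euclidean distance and `alpha = 1` gives full per-component variance normalization.

```python
import numpy as np

def diagonal_mahalanobis(x, y, var, alpha=1.0):
    """Distance between feature vectors x and y with a diagonal metric.
    var: per-component variances of the training features.
    alpha: assumed tuning exponent controlling how strongly high-variance
    components are down-weighted (sphering the data for alpha = 1)."""
    w = 1.0 / np.power(var, alpha)
    return float(np.sqrt(np.sum(w * (x - y) ** 2)))
```

Because the metric is diagonal, only one weight per component must be stored, which is the memory and computation saving the paper relies on.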

Multi-Channel Speech Enhancement Algorithm Using DOA-based Learning Rate Control (DOA 기반 학습률 조절을 이용한 다채널 음성개선 알고리즘)

  • Kim, Su-Hwan;Lee, Young-Jae;Kim, Young-Il;Jeong, Sang-Bae
    • Phonetics and Speech Sciences
    • /
    • v.3 no.3
    • /
    • pp.91-98
    • /
    • 2011
  • In this paper, a multi-channel speech enhancement method using the linearly constrained minimum variance (LCMV) algorithm and a variable learning rate control is proposed. To control the learning rate for adaptive filters of the LCMV algorithm, the direction of arrival (DOA) is measured for each short-time input signal and the likelihood function of the target speech presence is estimated to control the filter learning rate. Using the likelihood measure, the learning rate is increased during the pure noise interval and decreased during the target speech interval. To optimize the parameter of the mapping function between the likelihood value and the corresponding learning rate, an exhaustive search is performed using the Bark's scale distortion (BSD) as the performance index. Experimental results show that the proposed algorithm outperforms the conventional LCMV with fixed learning rate in the BSD by around 1.5 dB.

  • PDF

A new acoustical parameter for speech intelligibility with regard to early vertical reflections (초기 수직반사음의 역할을 고려한 새로운 명료도 지표)

  • Park, Jong Young;Han, Myung Ho;Jeong, Dae Up;Oh, Yang Ki
    • KIEAE Journal
    • /
    • v.7 no.3
    • /
    • pp.63-70
    • /
    • 2007
  • It is known that early reflections, their energy, and their delay times after the arrival of the direct sound are important factors for speech intelligibility. On this basis, acoustical parameters such as D50 and C80 have been proposed and are widely used for assessing the listening conditions of rooms. These parameters focus on the fraction of early energy relative to the total, regardless of the spatial characteristics of the early reflections. This means that all early reflections arriving within a certain time window, whether from the front, behind, below, or above, are assumed to have the same impact on speech intelligibility. Questioning this simplicity, this study examines the influence of the direction of early reflections on speech intelligibility. A speech intelligibility test using computer-simulated sound fields, conducted with 22 university students using the paired-comparison method, indicated that early reflections from the vertical direction produced a visible increase in preference (a score of 0.746).
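For reference, the D50 parameter discussed above (Deutlichkeit) is the ratio of the energy arriving within the first 50 ms of a room impulse response to its total energy:

```python
import numpy as np

def d50(impulse_response, sr):
    """Early-to-total energy ratio of a room impulse response with a
    50 ms boundary, assuming the response starts at the direct sound."""
    h2 = np.asarray(impulse_response, dtype=float) ** 2
    n50 = int(0.050 * sr)
    return float(h2[:n50].sum() / h2.sum())
```

The paper's point is that this scalar is blind to the direction the early energy arrives from, which motivates its proposed direction-aware index.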

A Study on the Voice-Controlled Wheelchair using Spatio-Temporal Pattern Recognition Neural Network (Spatio-Temporal Pattern Recognition Neural Network를 이용한 전동 휠체어의 음성 제어에 관한 연구)

  • Baek, S.W.;Kim, S.B.;Kwon, J.W.;Lee, E.H.;Hong, S.H.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1993 no.05
    • /
    • pp.90-93
    • /
    • 1993
  • In this study, Korean speech was recognized using a spatio-temporal pattern recognition neural network. The speech material consists of the digits from zero to nine and basic commands that might be used for the motorized wheelchair developed in our own laboratory. Rabiner and Sambur's speech detection method was used to determine the end-points of speech, and speech parameters were extracted using 16th-order LPC. The recognition rate was over 90%.

  • PDF
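A stripped-down version of the energy-based end-point detection used above can be sketched as follows. This follows the spirit of Rabiner and Sambur (1975) but is simplified: their full method also uses zero-crossing rates and dual thresholds to catch weak fricatives at the edges of the utterance.

```python
import numpy as np

def endpoints(signal, sr, frame_ms=10, energy_ratio=0.1):
    """Return (start, end) sample indices of the utterance: the span of
    frames whose short-time energy exceeds a fraction of the peak energy."""
    flen = int(sr * frame_ms / 1000)
    n = len(signal) // flen
    energy = np.array([np.sum(signal[i * flen:(i + 1) * flen] ** 2)
                       for i in range(n)])
    thr = energy_ratio * energy.max()
    active = np.flatnonzero(energy > thr)
    return int(active[0] * flen), int((active[-1] + 1) * flen)
```

The frames between the detected end-points would then be passed to the LPC analysis and the recognition network.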

Designing of efficient super-wide bandwidth extension system using enhanced parameter estimation in time domain (시간 영역에서 개선된 파라미터 추론을 통한 효율적인 초광대역 확장 시스템 설계)

  • Jeon, Jong-jeon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.431-433
    • /
    • 2018
  • This paper proposes a system that offers super-wideband speech produced by an artificial bandwidth extension technique from a wideband speech signal in the time domain. A wideband excitation signal and line spectral pair (LSP) parameters are extracted based on the source-filter model in the time domain. The two parameters are extended by separate bandwidth extension algorithms, and the super-wideband speech parameters are then estimated and the speech is synthesized. A subjective test shows that the super-wideband speech has better quality than the wideband speech signal.

  • PDF
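One classic way to generate the high-band excitation in such source-filter systems is spectral folding: inserting zeros between samples doubles the sampling rate and mirrors the low band into the new high band. This is a generic sketch of that technique, not necessarily the extension algorithm used in the paper:

```python
import numpy as np

def extend_excitation(excitation):
    """Spectral folding: zero insertion doubles the sample rate and
    creates a mirror image of the low band in the new high band.
    A component at f Hz (rate sr) reappears at f and sr - f (rate 2*sr)."""
    out = np.zeros(2 * len(excitation))
    out[::2] = excitation
    return out
```

The folded excitation is then shaped by the extended LSP/LPC envelope to produce the synthesized super-wideband signal.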

A Study on Combining Bimodal Sensors for Robust Speech Recognition (강인한 음성인식을 위한 이중모드 센서의 결합방식에 관한 연구)

  • 이철우;계영철;고인선
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.6
    • /
    • pp.51-56
    • /
    • 2001
  • Recent research has focused on jointly using lip motion and speech for reliable speech recognition in noisy environments. To this end, this paper proposes a method of combining the visual speech recognizer and the conventional speech recognizer, with each output properly weighted. In particular, we propose a method of automatically determining the weights depending on the amount of noise in the speech. The correlations between adjacent speech samples and the residual errors of the LPC analysis are used for this determination. Simulation results show that a speech recognizer combined in this way provides a recognition performance of 83% even in severely noisy environments.

  • PDF
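The noise-dependent weighting idea above can be sketched with a simple correlation-based indicator. The lag-1 autocorrelation used below is an assumed proxy; the paper combines adjacent-sample correlations with LPC residual errors:

```python
import numpy as np

def audio_weight(signal):
    """Assumed noisiness indicator: the normalized lag-1 autocorrelation.
    Clean speech is strongly correlated between adjacent samples, while
    wideband noise is not, so the audio recognizer's weight shrinks as
    the signal gets noisier."""
    x = signal - signal.mean()
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)
    return float(np.clip(r1, 0.0, 1.0))

def combine(audio_scores, visual_scores, w):
    """Weighted combination of the two recognizers' per-class scores."""
    return w * np.asarray(audio_scores) + (1 - w) * np.asarray(visual_scores)
```

In clean conditions the audio scores dominate; as the weight drops toward zero, the decision is effectively handed over to the lip-reading recognizer.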