• Title/Summary/Keyword: speech parameter

Search Result 373, Processing Time 0.025 seconds

Verification and estimation of a posterior probability and probability density function using vector quantization and neural network (신경회로망과 벡터양자화에 의한 사후확률과 확률 밀도함수 추정 및 검증)

  • 고희석;김현덕;이광석
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.45 no.2
    • /
    • pp.325-328
    • /
    • 1996
  • In this paper, we propose a method for estimating a posterior probability and PDF (probability density function) using a feed-forward neural network and VQ (vector quantization) codebooks. We estimate the posterior probability and probability density function, combine them with the well-known Mel cepstrum into a new parameter, and verify performance on five vowels taken from syllables using an NN (neural network) and a PNN (probabilistic neural network). With the new parameter, the probabilistic neural network showed the best result, with an average recognition rate of 83.02%.

  • PDF
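The Parzen-window idea behind a PNN posterior estimate can be sketched as follows. This is an illustrative sketch only, not the paper's network, codebooks, or data; the kernel width `sigma` is a hypothetical parameter:

```python
import numpy as np

def pnn_posteriors(x, train_X, train_y, n_classes, sigma=1.0):
    """Estimate class posterior probabilities with a probabilistic
    neural network: a Gaussian Parzen-window density per class,
    normalized across classes (Bayes' rule with equal priors)."""
    posts = np.zeros(n_classes)
    for c in range(n_classes):
        Xc = train_X[train_y == c]
        # squared distances from x to every training vector of class c
        d2 = np.sum((Xc - x) ** 2, axis=1)
        # class-conditional density estimate (kernel average)
        posts[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    total = posts.sum()
    return posts / total if total > 0 else np.full(n_classes, 1.0 / n_classes)
```

The returned vector sums to one, so its components can be read directly as posterior probabilities for classification.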

A Study on Performance Evaluation of Hidden Markov Network Speech Recognition System (Hidden Markov Network 음성인식 시스템의 성능평가에 관한 연구)

  • 오세진;김광동;노덕규;위석오;송민규;정현열
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.4 no.4
    • /
    • pp.30-39
    • /
    • 2003
  • In this paper, we carry out a performance evaluation of an HM-Net (Hidden Markov Network) speech recognition system on Korean speech databases. Acoustic models are constructed using HM-Nets, a modification of HMMs (Hidden Markov Models), which are widely used for statistical modeling. HM-Nets perform state splitting in the contextual and temporal domains with the PDT-SSS (Phonetic Decision Tree-based Successive State Splitting) algorithm, a modification of the original SSS algorithm. In particular, it adopts a phonetic decision tree to effectively express contextual information that does not appear in the training speech data during contextual-domain state splitting. In temporal-domain state splitting, the duration information of each phoneme is effectively represented, and an optimal network of triphone-type models is then constructed. Speech recognition was performed using a one-pass Viterbi beam search with phone-pair/word-pair grammars for phoneme/word recognition, respectively, and a multi-pass search with n-gram language models for sentence recognition. A tree-structured lexicon was used to decrease the number of nodes by sharing common prefixes among words. The performance of the HM-Net speech recognition system is evaluated under various recognition conditions. Through the experiments, we verified that it has superior recognition performance compared with previously introduced recognition systems.

  • PDF
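The one-pass Viterbi beam search mentioned in the abstract can be sketched over a generic HMM trellis. This is a minimal illustration, not the HM-Net system's decoder; the matrices and beam width are assumed inputs:

```python
def viterbi_beam(obs_loglik, log_trans, beam_width):
    """One-pass Viterbi search with beam pruning.
    obs_loglik: T x N per-frame, per-state observation log-likelihoods.
    log_trans:  N x N transition log-probabilities.
    After each frame, hypotheses scoring more than beam_width below
    the frame's best are discarded. Returns (best score, state path)."""
    T, N = len(obs_loglik), len(obs_loglik[0])
    # active hypotheses: state -> (cumulative score, back-path)
    active = {s: (obs_loglik[0][s], [s]) for s in range(N)}
    for t in range(1, T):
        new = {}
        for s_prev, (score, path) in active.items():
            for s in range(N):
                cand = score + log_trans[s_prev][s] + obs_loglik[t][s]
                if s not in new or cand > new[s][0]:
                    new[s] = (cand, path + [s])
        best = max(v[0] for v in new.values())
        active = {s: v for s, v in new.items() if v[0] >= best - beam_width}
    return max(active.values(), key=lambda v: v[0])
```

A narrower beam prunes more hypotheses per frame, trading search accuracy for speed, which is the usual reason beam search is preferred over exhaustive Viterbi in large-vocabulary decoding.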

A Performance Improvement Method using Variable Break in Corpus Based Japanese Text-to-Speech System (가변 Break를 이용한 코퍼스 기반 일본어 음성 합성기의 성능 향상 방법)

  • Na, Deok-Su;Min, So-Yeon;Lee, Jong-Seok;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.2
    • /
    • pp.155-163
    • /
    • 2009
  • In text-to-speech systems, the conversion of text into prosodic parameters consists of three steps: the placement of prosodic boundaries, the determination of segmental durations, and the specification of fundamental frequency contours. Prosodic boundaries, the most important and basic parameters, affect the estimation of durations and fundamental frequency. Break prediction is an important step in text-to-speech systems, as break indices (BIs) strongly influence how correctly prosodic phrase boundaries are represented. However, accurate prediction is difficult, since BIs are often chosen according to the meaning of a sentence or the reading style of the speaker. In Japanese, the prediction of an accentual phrase boundary (APB) and a major phrase boundary (MPB) is particularly difficult. Thus, this paper presents a method to compensate for APB and MPB prediction errors. First, we define a subtle BI, for which it is difficult to decide clearly between an APB and an MPB, as a variable break (VB), and an explicit BI as a fixed break (FB). VBs are identified using a classification and regression tree, and multiple prosodic targets for pitch and duration are then generated. Finally, unit selection is conducted using the multiple prosodic targets. In the MOS test, the original speech scored 4.99, while the proposed method scored 4.25 and the conventional method 4.01. The experimental results show that the proposed method improves the naturalness of synthesized speech.
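The core idea of selecting a unit against multiple prosodic targets (so that a variable break can resolve to either phrasing) can be sketched as below. This is an illustrative simplification, not the paper's corpus-based synthesizer; the cost weights and (pitch, duration) tuples are assumptions:

```python
def select_unit(candidates, targets, w_pitch=1.0, w_dur=1.0):
    """Unit selection against multiple prosodic targets: each candidate
    unit is scored by its best match over all targets (e.g., one target
    generated under an APB reading and one under an MPB reading), so a
    variable break resolves to whichever phrasing the corpus covers best.
    candidates, targets: lists of (pitch_hz, duration_ms) tuples."""
    def cost(unit, tgt):
        # weighted L1 distance between a unit and one prosodic target
        return w_pitch * abs(unit[0] - tgt[0]) + w_dur * abs(unit[1] - tgt[1])
    return min(candidates, key=lambda u: min(cost(u, t) for t in targets))
```

Taking the minimum over targets, rather than a single fixed target, is what lets the selection tolerate an uncertain APB/MPB label.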

Decision Tree State Tying Modeling Using Parameter Estimation of Bayesian Method (Bayesian 기법의 모수 추정을 이용한 결정트리 상태 공유 모델링)

  • Oh, SangYeob
    • Journal of Digital Convergence
    • /
    • v.13 no.1
    • /
    • pp.243-248
    • /
    • 2015
  • When a recognition model is not defined at model-building time and is added after the models have been built, the clustering caused by the missing recognition models degrades the recognition rate. To address this, we improve decision-tree state-tying modeling using Bayesian parameter estimation. The proposed method applies Bayesian estimation to the parameters, searching the models produced by decision-tree-based state tying and determining the recognition model according to the maximum-probability criterion. In experiments on simulation data generated by adding noise to clean speech, the proposed clustering method reduced the error rate by 1.29% compared with the baseline model, slightly outperforming the existing approach.
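The Bayesian flavor of parameter estimation for a tied state can be sketched as a MAP estimate of a Gaussian mean, where a prior pulls the estimate toward a shared value when data is scarce. This is a generic sketch, not the paper's method; `tau` (the prior pseudo-count) is an assumed hyperparameter:

```python
import numpy as np

def map_mean(data, prior_mean, tau=10.0):
    """MAP (Bayesian) estimate of a tied state's Gaussian mean:
    interpolates the sample mean with a prior mean, weighted by the
    amount of observed data. With little data the prior dominates;
    with much data the estimate approaches the sample mean."""
    n = len(data)
    sample_mean = np.mean(data, axis=0)
    return (tau * prior_mean + n * sample_mean) / (tau + n)
```

This smoothing is what makes Bayesian estimates of sparsely observed tied states more robust than plain maximum-likelihood means.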

Syllable Recognition of HMM using Segment Dimension Compression (세그먼트 차원압축을 이용한 HMM의 음절인식)

  • Kim, Joo-Sung;Lee, Yang-Woo;Hur, Kang-In;Ahn, Jum-Young
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.2
    • /
    • pp.40-48
    • /
    • 1996
  • In this paper, 40-dimensional segment vectors with 4-frame and 7-frame widths in each monosyllable interval were compressed into 10-, 14-, and 20-dimensional vectors using K-L expansion and neural networks, and these were used as speech recognition feature parameters for a CHMM. We also compared them with CHMMs augmented with discrete duration time, regression coefficients, and mixture distributions as feature parameters. In recognition tests on 100 monosyllables, the recognition rates of CHMM+ΔMCEP, CHMM+MIX, and CHMM+DD improved by 1.4%, 2.36%, and 2.78%, respectively, over the 85.19% of the baseline CHMM. Rates using vectors compressed by K-L expansion were lower than those of MCEP+ΔMCEP, but those using K-L+MCEP and K-L+ΔMCEP were almost the same. Neural networks reflect the dynamic variety of speech better than K-L expansion because they use the sigmoid function as a non-linear transform. Recognition rates using vectors compressed by neural networks were higher than those using K-L expansion and the other methods.

  • PDF
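The K-L expansion used for dimension compression is, in practice, a projection onto the top eigenvectors of the data covariance (PCA). A minimal sketch, not the paper's exact pipeline or data:

```python
import numpy as np

def kl_compress(X, k):
    """Compress feature vectors with the K-L expansion: project the
    mean-centered data onto the top-k eigenvectors of its covariance
    matrix. Returns (compressed vectors, basis, mean)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]        # top-k by variance
    basis = eigvecs[:, order]
    return Xc @ basis, basis, mean
```

The transform is linear, which is exactly the limitation the abstract contrasts with the neural network's sigmoid non-linearity.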

Overlap and Add Sinusoidal Synthesis Method of Speech Signal Using the Damping Harmonic Magnitude Parameter (감쇄(damping) 하모닉 크기 파라미터를 이용한 음성의 중첩합산 정현파 합성 방법)

  • Park, Jong-Bae;Kim, Young-Joon;Lee, In-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.3C
    • /
    • pp.251-256
    • /
    • 2009
  • In this paper, we propose a new overlap-and-add speech synthesis method with improved continuity, using a damping harmonic amplitude parameter. The existing method uses the average of the past and current parameters for the sinusoidal amplitude that serves as the weight of the phase error function. The proposed method instead extracts a more accurate sinusoidal amplitude by using the correlation between the original and synthesized signals for the sinusoidal amplitudes used as the weights. To verify the performance of the proposed method, we observed the average differential error between the synthesized signals.
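The basic overlap-and-add sinusoidal synthesis that the paper builds on can be sketched as follows. This is the textbook scheme, not the paper's damping-amplitude method; the triangular window and frame layout are assumptions:

```python
import numpy as np

def ola_sine_synthesis(frames, frame_len, hop, fs=8000):
    """Overlap-and-add sinusoidal synthesis: each frame is a list of
    (freq_hz, amplitude, phase) harmonics; frames are windowed with a
    triangular (Bartlett) window and summed at hop-sample offsets."""
    out = np.zeros(hop * (len(frames) - 1) + frame_len)
    n = np.arange(frame_len)
    win = np.bartlett(frame_len)
    for i, harmonics in enumerate(frames):
        frame = np.zeros(frame_len)
        for f, a, ph in harmonics:
            frame += a * np.cos(2 * np.pi * f * n / fs + ph)
        out[i * hop : i * hop + frame_len] += win * frame
    return out
```

With 50% overlap, adjacent triangular windows sum to (nearly) one, which is what smooths frame-boundary discontinuities; the paper's contribution concerns how the per-frame amplitudes themselves are estimated.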

The Performance Improvement of G.729 PLC in Situation of Consecutive Frame Loss (연속적인 프레임 손실 상황에서의 G.729 PLC 성능개선)

  • Hong, Seong-Hoon;Kim, Jin-Woo;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.1
    • /
    • pp.34-40
    • /
    • 2010
  • As the Internet has spread widely, various services that use it have been provided; one of them is the Internet phone. Its use is growing because of its cost advantage, but its speech quality suffers, because it uses packet switching while the existing telephone network uses circuit switching. Although vocoders use a PLC (Packet Loss Concealment) algorithm, it is weak against consecutive packet loss. In this paper, we propose methods to mitigate the degradation of speech quality under consecutive packet loss, building on the PLC algorithms used in G.729 and G.711. The proposed methods are LP (Linear Prediction) parameter interpolation, excitation signal reconstruction, and excitation signal gain reconstruction. As a result, the proposed method shows a performance improvement of about 11%.
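The LP parameter interpolation idea can be sketched as a linear crossfade of the LP parameters (e.g., as LSF vectors) across a run of lost frames. This is a generic illustration, not the paper's G.729-specific scheme:

```python
def interpolate_lsf(last_good, next_good, n_lost):
    """Linear interpolation of LP parameters across a gap of lost
    frames: lost frame i of n_lost receives a weighted mix of the last
    good parameter vector before the gap and the first one after it,
    avoiding the spectral 'freeze' of simply repeating the last frame."""
    frames = []
    for i in range(1, n_lost + 1):
        w = i / (n_lost + 1.0)
        frames.append([(1 - w) * a + w * b
                       for a, b in zip(last_good, next_good)])
    return frames
```

Interpolating in the LSF domain (rather than on raw LP coefficients) is the usual choice because interpolated LSFs remain stable filters.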

A Study on ACFBD-MPC in 8kbps (8kbps에 있어서 ACFBD-MPC에 관한 연구)

  • Lee, See-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.7
    • /
    • pp.49-53
    • /
    • 2016
  • Recently, the use of signal compression methods to improve the efficiency of wireless networks has increased. In particular, MPC systems have used pitch-extraction methods and voiced/unvoiced excitation sources to reduce the bit rate. In general, an MPC system using a voiced/unvoiced excitation source distorts the synthesized speech waveform when voiced and unvoiced consonants occur in the same frame. This is caused by normalization of the synthesized speech waveform in the process of restoring the multi-pulses of the representative segment. This paper presents ACFBD-MPC (Amplitude Compensation Frequency Band Division-Multi Pulse Coding), which applies amplitude compensation to the multi-pulses in each pitch interval and specific frequency bands to reduce distortion of the synthesized speech waveform. The experiments were performed with 16 sentences of male and female voices; the voice signal was A/D converted at 10 kHz with 12-bit resolution. The ACFBD-MPC system was implemented, and its SNR was estimated under an 8 kbps coding condition. As a result, the SNR of ACFBD-MPC was 13.6 dB for the female voice and 14.2 dB for the male voice, improvements of 0.9 dB and 1 dB, respectively, over the traditional MPC. This method is expected to be used in cellular telephones and smartphones that use a low-bit-rate excitation source.
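The amplitude-compensation idea can be sketched as rescaling the multi-pulse excitation inside each pitch interval so its energy matches the target signal's energy there. This is an illustrative sketch, not the ACFBD-MPC coder; the pitch-mark representation is an assumption:

```python
import numpy as np

def compensate_pulse_amplitudes(pulses, target, pitch_marks):
    """Per-pitch-interval amplitude compensation: scale the multi-pulse
    excitation inside each pitch interval so its energy matches the
    target signal's energy there, countering the distortion that a
    single global normalization introduces.
    pulses, target: 1-D arrays; pitch_marks: interval start indices."""
    out = pulses.astype(float).copy()
    bounds = list(pitch_marks) + [len(target)]
    for s, e in zip(bounds[:-1], bounds[1:]):
        e_p = np.sum(out[s:e] ** 2)
        e_t = np.sum(target[s:e] ** 2)
        if e_p > 0:
            out[s:e] *= np.sqrt(e_t / e_p)  # match interval energy
    return out
```

Scaling per interval, rather than once per frame, is what keeps a strong voiced interval from forcing down the amplitude of a weaker unvoiced one in the same frame.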

Coordinative movement of articulators in bilabial stop /p/

  • Son, Minjung
    • Phonetics and Speech Sciences
    • /
    • v.10 no.4
    • /
    • pp.77-89
    • /
    • 2018
  • Speech articulators are coordinated for the purpose of segmental constriction in terms of a task. In particular, vertical jaw movements repeatedly contribute to consonantal as well as vocalic constriction. The current study explores vertical jaw movements in conjunction with bilabial constriction in the bilabial stop /p/ in the context /a/-to-/a/. Revisiting kinematic data of /p/ collected with the electromagnetic midsagittal articulometer (EMMA) method from seven (four female and three male) speakers of Seoul Korean, we examined maximum vertical jaw position, its relative timing with respect to the upper and lower lips, and lip aperture minima. The results for these dependent variables are recapitulated in terms of linguistic (different word boundaries) and paralinguistic (different speech rates) factors as follows. Firstly, maximum jaw height was lower in the across-word boundary condition (across-word < within-word), but it did not differ as a function of speech rate (comfortable = fast). Secondly, more reduction of the lip aperture (LA) gesture occurred at the fast rate, while word-boundary effects were absent. Thirdly, jaw raising was still in progress after the lips' positional extrema were achieved in the within-word condition, while the former was completed before the latter in the across-word condition. Lastly, relative temporal lags between the jaw and the lips (UL and LL) were more synchronous at the fast rate than at the comfortable rate. Taken together, these results suggest that speakers are not tolerant of lenition to the extent that it would be realized as a labial approximant in either word-boundary condition, while jaw height still manifested a lower jaw position in the across-word boundary condition. Early termination of vertical jaw maxima before vertical lower lip maxima in the across-word condition may be partly responsible for the spatial reduction of jaw raising movements.
    This may come about as a consequence of an excessive number of factors (e.g., upper lip height (UH), lower lip height (LH), jaw angle (JA)) for the representation of a vector with two degrees of freedom (x, y) engaged in a gesture-based task (e.g., lip aperture (LA)). In the task-dynamic application toolkit, the jaw angle parameter can be assigned a greater numerical weight in the across-word boundary condition, which in turn gives rise to a lower jaw position. Rate-dependent spatial reduction in lip aperture may be resolved by manipulating the activation time of an active tract variable at the gestural score level.

A Study on the Automatic Recognition of Korean Basic Spoken Digit Using Energy of Special Bandwidth (특정 대역 에너지를 이용한 한국어 기본 수자 음성의 자동 인식에 관한 연구)

  • Han, Hee;Kim, Soon-Hyob;Park, Kyu-Tae
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.19 no.3
    • /
    • pp.5-12
    • /
    • 1982
  • Recognition of Korean basic spoken digits is performed using the energy ratio of specific frequency bands of the basic vowels, in logical combination with a zero-crossing rate and an energy parameter. In the recognition experiments, the speech signal of the spoken digits is filtered by a lowpass filter with a 10 kHz cutoff frequency and then sampled at a 20 kHz sampling rate. For the speech signal processing, we used four FIR digital filters with lengths of 61, 120, 25, and 25, respectively, designed using the Remez exchange algorithm.[13],[14] As a result, a recognition rate of 92% was obtained for the three speakers.

  • PDF
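The two coarse features the paper combines, zero-crossing rate and band-energy ratio, can be sketched as follows. This is a generic illustration, not the paper's filter bank; the FFT-based band energy stands in for the FIR filters described in the abstract:

```python
import numpy as np

def zcr_and_band_energy(x, fs, band):
    """Zero-crossing rate and the ratio of one frequency band's energy
    (computed via FFT) to the total energy -- the kind of coarse
    features combined logically for spoken-digit recognition.
    x: 1-D signal; fs: sampling rate in Hz; band: (low_hz, high_hz)."""
    # fraction of adjacent-sample pairs whose sign changes
    zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs < band[1])
    ratio = spec[in_band].sum() / spec.sum()
    return zcr, ratio
```

A high zero-crossing rate suggests fricative-like noise, while the band-energy ratio distinguishes vowels by where their formant energy sits, which is why a simple logical combination of the two can already separate a small digit vocabulary.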