• Title/Summary/Keyword: Art of speech

A Novel Approach for Blind Estimation of Reverberation Time using Gamma Distribution Model

  • Hamza, Amad;Jan, Tariqullah;Jehangir, Asiya;Shah, Waqar;Zafar, Haseeb;Asif, M.
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.2
    • /
    • pp.529-536
    • /
    • 2016
  • In this paper we propose an unsupervised algorithm to estimate the reverberation time (RT) directly from the reverberant speech signal. The estimation uses maximum likelihood estimation (MLE), a well-known, state-of-the-art estimation method in signal processing. Existing RT estimation methods are based on the distribution of decay rates, where the decay rate can be obtained either from the energy envelope decay curve of a noise source after it is switched off or from the decay curve of the impulse response of an enclosure. The proposed method builds on the analysis of a pre-existing RT estimation method in which the reverberation decay is modeled as a Laplacian distribution. In this paper, the reverberation decay is instead modeled as a Gamma distribution, combined with an effective technique for spotting free decays in reverberant speech; MLE is then used to estimate the RT from the free decays. The method was motivated by our observation that when the RT of a reverberant signal falls within a specific range, the decay rate of the signal follows a Gamma distribution. Experiments on different reverberant speech signals show that the proposed method is more accurate than the state-of-the-art method.
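
The core of such an approach can be sketched in a few lines: fit a Gamma distribution to the detected decay rates by maximum likelihood and map a representative decay rate to RT60. The sketch below is a generic illustration, not the authors' exact formulation; free-decay spotting is assumed to have already produced the decay rates, and using the distribution mode as the representative rate is an assumption.

```python
# Minimal sketch of Gamma-MLE reverberation-time estimation.
# Free-decay detection is assumed to have produced `decay_rates`;
# taking the distribution mode as the representative rate is an
# illustrative assumption, not necessarily the paper's choice.
import numpy as np
from scipy.stats import gamma

def estimate_rt60(decay_rates):
    # MLE fit of a Gamma distribution, with the location pinned
    # at zero since decay rates are positive.
    shape, _, scale = gamma.fit(decay_rates, floc=0)
    mode = (shape - 1) * scale if shape > 1 else np.mean(decay_rates)
    # A decay exp(-delta * t) reaches -60 dB when delta * t = 3 ln 10.
    return 3 * np.log(10) / mode

rng = np.random.default_rng(0)
rates = rng.gamma(shape=9.0, scale=1.5, size=200)  # synthetic rates (nepers/s)
print(f"estimated RT60 = {estimate_rt60(rates):.2f} s")
```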

An Utterance Verification using Vowel String (모음 열을 이용한 발화 검증)

  • 유일수;노용완;홍광석
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2003.06a
    • /
    • pp.46-49
    • /
    • 2003
  • The use of confidence measures for word/utterance verification has become an essential component of any speech input application. Confidence measures apply to a number of problems, such as rejecting incorrect hypotheses, speaker adaptation, or adaptively modifying the hypothesis score during search in continuous speech recognition. In this paper, we present a new utterance verification method using vowel strings: using subword HMMs of VCCV units, we create anti-models that include the vowel string of each hypothesis word. The experimental results show that the utterance verification rate of the proposed method is about 79.5%.
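
As a rough illustration of this kind of confidence measure, the sketch below accepts or rejects a hypothesis by a frame-normalized log-likelihood ratio between a target model and its anti-model; the scores are assumed to come from alignment against the hypothesis HMM and its vowel-string anti-model, and the threshold is an illustrative placeholder.

```python
# Minimal sketch of anti-model-based utterance verification.
# log_p_target / log_p_anti are assumed log-likelihoods of the
# observation sequence under the hypothesis model and its
# vowel-string anti-model; the threshold is illustrative.

def verify_utterance(log_p_target, log_p_anti, n_frames, threshold=0.0):
    """Accept iff the frame-normalized log-likelihood ratio
    exceeds the decision threshold."""
    llr = (log_p_target - log_p_anti) / n_frames
    return llr > threshold

# Usage: reject a hypothesis the anti-model explains almost as well.
print(verify_utterance(-1520.0, -1540.0, 200))        # True  (LLR = 0.10)
print(verify_utterance(-1520.0, -1522.0, 200, 0.05))  # False (LLR = 0.01)
```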

Estimation of Articulatory Characteristics of Vowels Using 'ArtSim' ('Artsim'을 이용한 모음의 조음점 추정에 관한 연구)

  • Kim Dae-Ryun;Cho Cheol-Woo
    • MALSORI
    • /
    • no.35_36
    • /
    • pp.121-129
    • /
    • 1998
  • In this paper, the articulatory simulator 'Artsim' is used as a tool in experiments examining the articulatory characteristics of six different vowels. Each vowel is defined by a set of articulatory points derived from its vocal tract area function and tongue shape. Each point is varied systematically to synthesize vowels, and the synthesized sounds are evaluated by human listeners. Finally, the distribution of each vowel within the vowel space is obtained. The experimental results verify that our articulatory simulator can be used effectively to investigate the articulatory characteristics of speech.
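
To make the area-function-to-vowel relationship concrete, the sketch below computes formant frequencies from a lossless concatenated-tube area function via reflection coefficients and an LPC polynomial. This is a generic textbook construction under an assumed sampling rate, not the internals of 'Artsim'.

```python
# Generic sketch: vocal-tract area function -> formants via a
# lossless concatenated-tube model (not the 'Artsim' internals).
import numpy as np

def formants_from_areas(areas, fs=10000.0):
    """areas: tube section areas from glottis to lips (cm^2)."""
    areas = np.asarray(areas, dtype=float)
    # Reflection coefficient at each junction between tube sections.
    k = (areas[1:] - areas[:-1]) / (areas[1:] + areas[:-1])
    # Step-up (Levinson) recursion: reflection coeffs -> LPC polynomial.
    a = np.array([1.0])
    for km in k:
        a = np.append(a, 0.0) + km * np.append(0.0, a[::-1])
    # Resonances are the complex roots of A(z) in the upper half-plane.
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]
    return np.sort(np.angle(roots) * fs / (2 * np.pi))

# A crude /a/-like shape: narrow pharyngeal cavity, wide oral cavity.
print(formants_from_areas([1.0, 0.8, 0.7, 1.0, 2.0, 4.0, 6.0, 7.0]))
```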

Fast offline transformer-based end-to-end automatic speech recognition for real-world applications

  • Oh, Yoo Rhee;Park, Kiyoung;Park, Jeon Gue
    • ETRI Journal
    • /
    • v.44 no.3
    • /
    • pp.476-490
    • /
    • 2022
  • With the recent advances in technology, automatic speech recognition (ASR) has been widely used in real-world applications. The efficiency of converting large amounts of speech into text accurately with limited resources has become more vital than ever. In this study, we propose a method to rapidly recognize a large speech database via a transformer-based end-to-end model. Transformers have improved the state-of-the-art performance in many fields but are not easy to use for long sequences. In this study, various techniques to accelerate the recognition of real-world speech are proposed and tested, including decoding via multiple-utterance-batched beam search, detecting the end of speech based on connectionist temporal classification (CTC), restricting the CTC-prefix score, and splitting long speech into short segments. Experiments are conducted on the Librispeech dataset and on real-world Korean ASR tasks to verify the proposed methods. The proposed system converts 8 h of speech recorded at real-world meetings into text in less than 3 min with a 10.73% character error rate, which is 27.1% relatively lower than that of conventional systems.
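
Of the acceleration techniques listed, CTC-based end-of-speech detection is the easiest to sketch: decoding can stop once the tail of the CTC posterior matrix is dominated by the blank label. The rule and thresholds below are illustrative assumptions, not the paper's exact criterion.

```python
# Minimal sketch of CTC-based end-of-speech detection. The
# blank-dominance rule and the thresholds are illustrative.
import numpy as np

def end_of_speech(ctc_posteriors, blank_id=0, blank_thresh=0.95,
                  trailing_frames=16):
    """ctc_posteriors: (frames, labels) softmax outputs. Declare end
    of speech when the last `trailing_frames` frames all assign the
    blank label a probability above `blank_thresh`."""
    if len(ctc_posteriors) < trailing_frames:
        return False
    tail = ctc_posteriors[-trailing_frames:, blank_id]
    return bool(np.all(tail > blank_thresh))

# Usage: a posterior matrix whose tail is essentially all blank.
post = np.full((100, 30), 1e-3)
post[:, 0] = 0.97
print(end_of_speech(post))  # True
```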

Feature Parameter Extraction and Speech Recognition Using Matrix Factorization (Matrix Factorization을 이용한 음성 특징 파라미터 추출 및 인식)

  • Lee Kwang-Seok;Hur Kang-In
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.7
    • /
    • pp.1307-1311
    • /
    • 2006
  • In this paper, we propose a new speech feature parameter that uses matrix factorization to capture part-based features of the speech spectrum. The proposed parameter represents effectively dimension-reduced data, obtained from multi-dimensional feature data through a matrix factorization procedure under the constraint that all matrix elements are non-negative; the reduced feature data represent part-based features of the input data. We verify the usefulness of the NMF (Non-negative Matrix Factorization) algorithm for speech feature extraction by applying feature parameters obtained with NMF to Mel-scaled filter bank outputs. The recognition experiments confirm that the proposed feature parameter outperforms the commonly used MFCC (Mel-Frequency Cepstral Coefficient) features.
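
A minimal version of this pipeline, using scikit-learn's NMF on non-negative Mel filter bank outputs, might look as follows; the matrix shapes and the factorization rank are illustrative choices.

```python
# Minimal sketch: NMF feature extraction from Mel filter bank outputs.
# Shapes and the factorization rank are illustrative.
import numpy as np
from sklearn.decomposition import NMF

# V: non-negative Mel filter bank outputs, shape (n_mels, n_frames);
# a random stand-in is used here in place of real speech features.
rng = np.random.default_rng(0)
V = np.abs(rng.normal(size=(40, 200)))

# Factorize V ~ W @ H with every entry non-negative:
# W holds part-based spectral bases, H the reduced activations.
nmf = NMF(n_components=12, init="nndsvda", max_iter=500)
W = nmf.fit_transform(V)   # (40, 12) basis vectors
H = nmf.components_        # (12, 200) activations
features = H.T             # one 12-dim feature vector per frame
print(features.shape)      # (200, 12)
```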

An Experimental Clinical Phonetic Study on Patients of Dysarthria, Tonsilhypertrophy, Nasal Obstruction, and Cleft Palate (마비성조음장애, 편도 비대, 비폐쇄 및 구개열 환자의 실험 임상 음성학적 연구)

  • Kim, H.G.;Ko, D.H.;Shin, H.K.;Hong, K.H.;Seo, J.H.
    • Speech Sciences
    • /
    • v.2
    • /
    • pp.67-88
    • /
    • 1997
  • The aim of this study is to develop an assessment program of speech rehabilitation for children with language and speech disorders. Patients with dysarthria, tonsillectomy, tonsil hypertrophy, and nasal obstruction were selected for this experimental clinical phonetic study. Formant variations ($F_1$ and $F_2$) show pre- and post-operation differences in tonsillectomy and cleft palate patients. Nasal formants ($NF_1$ and $NF_2$) show pre- and post-operation differences in nasal obstruction. The articulation reaction time (ART) was used as a parameter to assess voice onset time (VOT); it was longer for hypokinetic dysarthria and shorter for ataxic dysarthria. The diadochokinetic rate, measured with Visi-pitch, was lower for the spastic and ataxic dysarthria patients than for the control group. The nasalance of tonsil hypertrophy, nasal obstruction, and cleft palate patients was seen to increase after operation. In addition, nasality can be assessed with only simple vowels such as /a/ and /i/.
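
Among the measures above, nasalance has a particularly simple definition; assuming separate nasal- and oral-channel recordings (as from a nasometer), it can be computed as below. The RMS-based energy estimate is an illustrative choice.

```python
# Minimal sketch of the nasalance measure, assuming separate nasal
# and oral channel recordings; RMS energy is an illustrative choice.
import numpy as np

def nasalance(nasal, oral):
    """Nasalance (%): nasal energy over total (nasal + oral) energy."""
    n = np.sqrt(np.mean(np.square(nasal)))
    o = np.sqrt(np.mean(np.square(oral)))
    return 100.0 * n / (n + o)

# Usage with synthetic channels: stronger nasal energy (e.g. after
# an operation that opens the nasal tract) pushes the value up.
rng = np.random.default_rng(0)
nasal_ch = 0.8 * rng.normal(size=16000)
oral_ch = 0.5 * rng.normal(size=16000)
print(f"{nasalance(nasal_ch, oral_ch):.1f} %")
```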

Research on Noise Reduction Algorithm Based on Combination of LMS Filter and Spectral Subtraction

  • Cao, Danyang;Chen, Zhixin;Gao, Xue
    • Journal of Information Processing Systems
    • /
    • v.15 no.4
    • /
    • pp.748-764
    • /
    • 2019
  • In order to deal with the filtering-delay problem of the least mean square (LMS) adaptive filter noise reduction algorithm and the musical-noise problem of the spectral subtraction algorithm in speech signal processing, we combine the two algorithms and propose a novel noise reduction method whose performance is on par with, or better than, that of state-of-the-art methods. We first use the LMS algorithm to reduce the average intensity of the noise, and then apply spectral subtraction to reduce the remaining noise. Experiments show that applying spectral subtraction after the LMS adaptive filter overcomes the shortcomings of both constituent algorithms, increases the signal-to-noise ratio of the original speech data, and improves the final noise reduction performance.
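
A compact sketch of the two-stage scheme is given below: a standard LMS adaptive filter first tracks and removes correlated noise, and spectral subtraction then suppresses the residual. The step size, FFT size, over-subtraction factor, and spectral floor are illustrative choices, not the paper's tuned values.

```python
# Sketch of the combined scheme: LMS adaptive filtering followed by
# spectral subtraction. All tuning constants are illustrative.
import numpy as np
from scipy.signal import stft, istft

def lms_filter(noisy, noise_ref, order=32, mu=0.01):
    """LMS stage: adapt w so the filtered noise reference tracks the
    noise in `noisy`; the error signal e is the cleaned output."""
    w = np.zeros(order)
    out = np.zeros_like(noisy)
    for n in range(order, len(noisy)):
        x = noise_ref[n - order:n][::-1]
        e = noisy[n] - w @ x
        w += 2 * mu * e * x
        out[n] = e
    return out

def spectral_subtract(x, fs, noise_secs=0.25, alpha=2.0, floor=0.02):
    """Second stage: over-subtract a noise magnitude estimate taken
    from the leading (assumed speech-free) frames, with a small
    spectral floor to limit musical noise."""
    _, _, X = stft(x, fs=fs, nperseg=512)
    n_frames = max(1, int(noise_secs * fs / 256))  # hop = nperseg // 2
    noise_mag = np.abs(X[:, :n_frames]).mean(axis=1, keepdims=True)
    mag = np.maximum(np.abs(X) - alpha * noise_mag, floor * np.abs(X))
    _, y = istft(mag * np.exp(1j * np.angle(X)), fs=fs, nperseg=512)
    return y

# Usage: enhanced = spectral_subtract(lms_filter(noisy, ref), fs=16000)
```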

Prediction of Domain Action Using a Neural Network (신경망을 이용한 영역 행위 예측)

  • Lee, Hyun-Jung;Seo, Jung-Yun;Kim, Hark-Soo
    • Korean Journal of Cognitive Science
    • /
    • v.18 no.2
    • /
    • pp.179-191
    • /
    • 2007
  • In a goal-oriented dialogue, speakers' intentions can be represented by domain actions that consist of pairs of a speech act and a concept sequence. Predicting the domain action of the user's next utterance is useful for correcting errors that occur in the speech recognition process, and predicting the domain action of the system's next utterance is useful for generating flexible responses. In this paper, we propose a model that predicts the domain action of the next utterance using a neural network. The proposed model predicts the next domain action by using a dialogue history vector and the current domain action as inputs to the neural network. In the experiment, the proposed model showed a precision of 80.02% in speech act prediction and 82.09% in concept sequence prediction.
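
A toy version of this setup is sketched below with scikit-learn: the input concatenates a dialogue-history vector with an encoding of the current domain action, and two small networks predict the next speech act and concept sequence. All encodings, sizes, and labels are illustrative stand-ins for the paper's features.

```python
# Toy sketch of next-domain-action prediction; feature encodings,
# network sizes, and labels are illustrative stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = np.hstack([
    rng.random((500, 20)),                        # dialogue-history vector
    rng.integers(0, 2, (500, 10)).astype(float),  # current domain action
])
y_act = rng.integers(0, 5, 500)  # next speech act (e.g. ask/inform/...)
y_con = rng.integers(0, 8, 500)  # next concept-sequence class

act_net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, y_act)
con_net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, y_con)
print(act_net.predict(X[:1]), con_net.predict(X[:1]))
```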

Speech Recognition by Integrating Audio, Visual and Contextual Features Based on Neural Networks (신경망 기반 음성, 영상 및 문맥 통합 음성인식)

  • 김명원;한문성;이순신;류정우
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.3
    • /
    • pp.67-77
    • /
    • 2004
  • Recent research has focused on the fusion of audio and visual features for reliable speech recognition in noisy environments. In this paper, we propose a neural network based model of robust speech recognition that integrates audio, visual, and contextual information. The Bimodal Neural Network (BMNN) is a four-layer multi-layer perceptron in which each layer performs a certain level of abstraction of the input features; the third layer combines the audio and visual features of speech to compensate for the loss of audio information caused by noise. To improve the accuracy of speech recognition in noisy environments, we also propose a post-processing step based on contextual information, namely the sequential patterns of words spoken by a user. Our experimental results show that the model outperforms any single-modality model. In particular, when the contextual information is used, we obtain over 90% recognition accuracy even in noisy environments, a significant improvement over the state of the art in speech recognition. Our research demonstrates that diverse sources of information need to be integrated to improve the accuracy of speech recognition, particularly in noisy environments.
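
The contextual post-processing step can be illustrated independently of the network: rescore the recognizer's word hypotheses with a simple model of the user's past word sequences. A bigram model and a linear interpolation weight are assumed here purely for illustration.

```python
# Illustrative sketch of contextual post-processing: rescoring word
# hypotheses with a bigram model of the user's past word patterns.
# The smoothing and interpolation weight are assumptions.
from collections import Counter

def bigram_model(histories, vocab_size=50):
    counts, totals = Counter(), Counter()
    for words in histories:
        for prev, curr in zip(words, words[1:]):
            counts[(prev, curr)] += 1
            totals[prev] += 1
    # Add-one smoothed conditional probability P(curr | prev).
    return lambda p, c: (counts[(p, c)] + 1) / (totals[p] + vocab_size)

def rescore(hypotheses, prev_word, bigram, weight=0.6):
    """hypotheses: (word, audio-visual score) pairs; pick the word
    maximizing an interpolation of recognizer score and context."""
    return max(hypotheses,
               key=lambda h: (1 - weight) * h[1] + weight * bigram(prev_word, h[0]))

bg = bigram_model([["turn", "on", "light"], ["turn", "off", "tv"]])
# Context flips a near-tie from "in" to the habitual "on".
print(rescore([("on", 0.58), ("in", 0.60)], "turn", bg))
```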

Applying feature normalization based on pole filtering to short-utterance speech recognition using deep neural network (심층신경망을 이용한 짧은 발화 음성인식에서 극점 필터링 기반의 특징 정규화 적용)

  • Han, Jaemin;Kim, Min Sik;Kim, Hyung Soon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.1
    • /
    • pp.64-68
    • /
    • 2020
  • In conventional speech recognition systems using the Gaussian Mixture Model-Hidden Markov Model (GMM-HMM), the cepstral feature normalization method based on pole filtering was effective in improving recognition performance for short utterances in noisy environments. In this paper, the usefulness of this method is examined for a state-of-the-art speech recognition system using a Deep Neural Network (DNN). Experimental results on the AURORA 2 DB show that cepstral mean and variance normalization based on pole filtering improves the recognition performance of very short utterances compared to normalization without pole filtering, especially when there is a large mismatch between the training and test conditions.
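
A simplified sketch of pole-filtered cepstral mean normalization is given below: each frame's LPC-cepstrum is converted back to LPC coefficients, the pole bandwidths are broadened by the standard gamma^k scaling, and the mean used for normalization is computed from the broadened cepstra. Real pole filtering broadens only narrow-bandwidth poles, so this uniform version is only an approximation of the cited technique.

```python
# Simplified sketch of pole-filtered cepstral mean normalization.
# Uniform gamma^k bandwidth broadening is applied to every pole;
# the cited method is more selective, so this is an approximation.
import numpy as np

def cepstrum_to_lpc(c, order):
    """Invert the standard LPC-to-cepstrum recursion (c[0] unused)."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    for n in range(1, order + 1):
        acc = sum(k * c[k] * a[n - k] for k in range(1, n))
        a[n] = -c[n] - acc / n
    return a

def lpc_to_cepstrum(a, n_ceps):
    """Standard LPC-to-cepstrum recursion for H(z) = 1/A(z)."""
    c = np.zeros(n_ceps + 1)
    p = len(a) - 1
    for n in range(1, n_ceps + 1):
        acc = sum(k * c[k] * a[n - k] for k in range(max(1, n - p), n))
        c[n] = -(a[n] if n <= p else 0.0) - acc / n
    return c

def pole_filtered_cmn(cepstra, order=12, gamma=0.9):
    """cepstra: (n_frames, n_ceps) LPC-cepstral features c1..cn."""
    broadened = np.empty_like(cepstra)
    for i, frame in enumerate(cepstra):
        c = np.concatenate([[0.0], frame])
        a = cepstrum_to_lpc(c, order)
        a = a * gamma ** np.arange(order + 1)  # pull poles toward origin
        broadened[i] = lpc_to_cepstrum(a, cepstra.shape[1])[1:]
    # Subtract the pole-filtered mean from the original features.
    return cepstra - broadened.mean(axis=0)
```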