• Title/Abstract/Keywords: speech rates

A Study on the Speaker Adaptation of a Continuous Speech Recognition using HMM (HMM을 이용한 연속 음성 인식의 화자적응화에 관한 연구)

  • Kim, Sang-Bum;Lee, Young-Jae;Koh, Si-Young;Hur, Kang-In
    • The Journal of the Acoustical Society of Korea / v.15 no.4 / pp.5-11 / 1996
  • In this study, a method of speaker adaptation for uttered sentences using syllable-unit HMMs is proposed. Syllable-unit segmentation of a sentence is performed automatically by concatenating syllable-unit HMMs and applying Viterbi segmentation. Speaker adaptation is performed using MAPE (Maximum A Posteriori Probability Estimation), which can adapt to even a small amount of adaptation speech data and incorporate additional data sequentially. For newspaper editorial continuous speech, the recognition rate of the adapted HMM was 71.8%, approximately a 37% improvement over that of the unadapted HMM.
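
As a rough illustration of the MAPE step described above, the sketch below adapts HMM Gaussian means toward a small amount of speaker-specific data; the function name, shapes, and the prior weight `tau` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def map_adapt_means(prior_means, frames, posteriors, tau=10.0):
    """MAP adaptation of HMM Gaussian means (illustrative sketch).

    prior_means : (S, D) speaker-independent state means
    frames      : (T, D) adaptation feature vectors
    posteriors  : (T, S) state occupation probabilities, e.g. from
                  Viterbi segmentation of concatenated syllable HMMs
    tau         : prior weight; with little data the prior dominates
    """
    occ = posteriors.sum(axis=0)          # (S,) soft frame counts per state
    weighted = posteriors.T @ frames      # (S, D) weighted feature sums
    # Interpolate between the prior mean and the sample mean; more
    # adaptation data pulls the estimate toward the new speaker.
    return (tau * prior_means + weighted) / (tau + occ)[:, None]
```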

Analyzing vowel variation in Korean dialects using phone recognition

  • Jooyoung Lee;Sunhee Kim;Minhwa Chung
    • Phonetics and Speech Sciences / v.15 no.4 / pp.101-107 / 2023
  • This study proposes an automatic method for detecting vowel variation in the Korean dialects of Gyeong-sang and Jeol-la. The method is based on error patterns extracted using phone recognition: canonical and recognized phone sequences are compared, and statistical analyses distinguish, among the vowels appearing in both dialects, the dialect-common vowels and the vowels with high mismatch rates for each dialect. The dialect-common vowels show monophthongization of diphthongs. The vowels unique to each dialect are /we/ realized as [e] and /ʌ/ as [ɰ] in the Gyeong-sang dialect, and /ɰi/ as [ɯ] in the Jeol-la dialect. These results corroborate previous dialectology reports on the phonetic realization of the Korean dialects, and the method offers a way to explain dialect patterns automatically.
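
The comparison of canonical and recognized phone sequences can be illustrated with a minimal sketch like the following, which assumes the two sequences have already been aligned pair-by-pair (the paper's alignment and statistical analysis are more involved):

```python
from collections import Counter

def vowel_mismatch_rates(aligned_pairs, vowels):
    """Per-vowel substitution rates from (canonical, recognized) pairs.

    `aligned_pairs` is assumed to be the output of a DP alignment of
    the canonical and recognized phone sequences (alignment omitted).
    """
    subs = Counter((c, r) for c, r in aligned_pairs
                   if c in vowels and c != r)
    totals = Counter(c for c, _ in aligned_pairs if c in vowels)
    return {(c, r): n / totals[c] for (c, r), n in subs.items()}

# Toy example: /we/ realized as [e] in half of its occurrences.
pairs = [("we", "e"), ("we", "we"), ("a", "a")]
print(vowel_mismatch_rates(pairs, vowels={"we", "e", "a"}))
# {('we', 'e'): 0.5}
```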

Development of Speech Recognition System based on User Context Information in Smart Home Environment (스마트 홈 환경에서 사용자 상황정보 기반의 음성 인식 시스템 개발)

  • Kim, Jong-Hun;Sim, Jae-Ho;Song, Chang-Woo;Lee, Jung-Hyun
    • The Journal of the Korea Contents Association / v.8 no.1 / pp.328-338 / 2008
  • Most speech recognition systems with large capacity and high recognition rates are isolated-word recognition systems. Extending the scope of recognition requires increasing the number of words to be searched, but system performance degrades as the word count grows. To solve this problem, this paper defines the context information that affects speech recognition in a ubiquitous environment and develops a user localization method using an inertial sensor and RFID. We also develop a new speech recognition system that outperforms the existing one by restricting the recognizer's word-model domain according to the context information; the system operates without a drop in recognition rate in a smart home environment.
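
A minimal sketch of the idea of narrowing the word-model domain by context; the location labels and command words below are hypothetical, and in the paper the context comes from RFID and inertial-sensor localization:

```python
# Hypothetical context-to-vocabulary map for a smart home.
WORD_DOMAINS = {
    "kitchen":     ["light on", "light off", "oven on", "set timer"],
    "living_room": ["light on", "light off", "tv on", "volume up"],
}

def active_vocabulary(user_location):
    """Return the word-model domain for the current context, so the
    decoder searches a small word set instead of the full vocabulary."""
    full = sorted({w for ws in WORD_DOMAINS.values() for w in ws})
    return WORD_DOMAINS.get(user_location, full)

print(active_vocabulary("kitchen"))   # kitchen commands only
```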

Comparison of ICA Methods for the Recognition of Corrupted Korean Speech (잡음 섞인 한국어 인식을 위한 ICA 비교 연구)

  • Kim, Seon-Il
    • 전자공학회논문지 IE / v.45 no.3 / pp.20-26 / 2008
  • Two independent component analysis (ICA) algorithms were applied to the recognition of speech signals corrupted by car engine noise. Speech recognition was performed with hidden Markov models (HMM) on the estimated signals, and recognition rates were compared with those of the original, uncorrupted speech signals. Two different ICA methods were applied to estimate the speech signals: the FastICA algorithm, which maximizes negentropy, and the information-maximization approach, which maximizes the mutual information between inputs and outputs to yield maximum independence among the outputs. The word recognition rate for Korean news sentences spoken by a male anchor was 87.85%; averaged over various signal-to-noise ratios (SNR), performance dropped by 1.65% for the signals estimated by FastICA and by 2.02% for information-maximization. There is little difference between the two methods.
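
For the negentropy-maximizing branch, a FastICA separation can be sketched with scikit-learn as below; the synthetic sinusoid and square wave merely stand in for speech and engine noise, and the mixing matrix is arbitrary:

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 16000, endpoint=False)
speech = np.sin(2 * np.pi * 220 * t)                  # stand-in for speech
engine = 0.5 * np.sign(np.sin(2 * np.pi * 45 * t))    # stand-in engine noise

# Two corrupted observations: ICA needs at least as many mixture
# channels as sources to separate them.
X = np.c_[speech + 0.6 * engine,
          0.8 * speech + engine]

ica = FastICA(n_components=2, random_state=0)   # negentropy-based ICA
estimated = ica.fit_transform(X)                # columns ~ recovered sources
# `estimated` would then be fed to the HMM recognizer in place of X.
```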

An On-line Speech and Character Combined Recognition System for Multimodal Interfaces (멀티모달 인터페이스를 위한 음성 및 문자 공용 인식시스템의 구현)

  • 석수영;김민정;김광수;정호열;정현열
    • Journal of Korea Multimedia Society / v.6 no.2 / pp.216-223 / 2003
  • In this paper, we present SCCRS (Speech and Character Combined Recognition System) for speaker/writer-independent, on-line multimodal interfaces. In general, the CHMM (Continuous Hidden Markov Model) is known to be very useful for both speech recognition and on-line character recognition. In the proposed method, the same CHMM is applied to both speech and character recognition so as to construct a combined system. For this purpose, 115 CHMMs having 3 states and 9 transitions are constructed using the MLE (Maximum Likelihood Estimation) algorithm. Different features are extracted for speech and character recognition: MFCC (Mel Frequency Cepstrum Coefficient) is used for speech in the preprocessing, while position parameters are used for cursive characters. At the recognition step, the proposed SCCRS employs OPDP (One Pass Dynamic Programming) so as to be a practical combined recognition system. Experimental results show that the recognition rates for voice phonemes, voice words, cursive character graphemes, and cursive character words are 51.65%, 88.6%, 85.3%, and 85.6%, respectively, without any language model, demonstrating the efficiency of the proposed system.
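
The dynamic-programming backbone shared by both modalities can be illustrated with a single-model Viterbi pass, the building block that OPDP chains across connected units; this is a generic sketch, not the paper's implementation:

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Best state path through one CHMM (log domain).

    log_emit  : (T, S) per-frame log-likelihoods
    log_trans : (S, S) log transition matrix (left-to-right for the
                3-state models described above)
    log_init  : (S,) log initial-state probabilities
    """
    T, S = log_emit.shape
    delta = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans     # predecessor scores
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]                # trace back the best path
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1], float(delta.max())
```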

A Basic Performance Evaluation of the Speech Recognition APP of Standard Language and Dialect using Google, Naver, and Daum KAKAO APIs (구글, 네이버, 다음 카카오 API 활용앱의 표준어 및 방언 음성인식 기초 성능평가)

  • Roh, Hee-Kyung;Lee, Kang-Hee
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.12 / pp.819-829 / 2017
  • In this paper, we describe the current state of speech recognition technology, identify basic speech recognition technologies and algorithms, and then explain the code flow of the APIs needed for speech recognition. We use the application programming interfaces (APIs) of Google, Naver, and Daum Kakao, which operate the most famous search engines among speech recognition API providers, to create a voice recognition app in Android Studio. We then perform speech recognition experiments on standard language and dialects spoken by people of different genders, ages, and regions, and organize the recognition rates into a table. Experiments were conducted on the Gyeongsang-do, Chungcheong-do, and Jeolla-do provinces, where dialects are pronounced, and comparative experiments were conducted on the standard language. Based on the resulting sentences, the accuracy of each sentence is checked with respect to word spacing, final consonants, postpositions, and word choice, and the number of each type of error is tallied. From these results we aim to identify the advantages of each API according to its speech recognition rate and to establish a basic framework for the most efficient use.
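
Turning an API's result sentence into error counts starts from a word-level alignment against the reference; a minimal edit-distance sketch follows (the Korean-specific spacing, final-consonant, and postposition breakdown would need language rules on top of this):

```python
def word_edit_distance(reference, hypothesis):
    """Word-level Levenshtein distance between a reference sentence
    and an API's result sentence."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edits to turn the first i ref words into j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))
    return dp[-1][-1]

# A spacing error surfaces as one substitution plus one deletion:
print(word_edit_distance("음성 인식 기초 성능", "음성인식 기초 성능"))  # 2
```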

Improvements on Speech Recognition for Fast Speech (고속 발화음에 대한 음성 인식 향상)

  • Lee Ki-Seung
    • The Journal of the Acoustical Society of Korea / v.25 no.2 / pp.88-95 / 2006
  • In this paper, a method for improving the performance of an automatic speech recognition (ASR) system for conversational speech is proposed, focusing mainly on robustness against rapidly spoken utterances. The proposed method does not require an additional speech recognition pass to quantify speaking rate. The energy distribution over specific bands is employed to detect vowel regions, and the number of vowels per second is then computed as the speaking rate. To improve performance for fast speech, previous methods expand the feature vector sequence by a scaling factor computed as the ratio between the standard phoneme duration and the measured one. In the method proposed herein, however, utterances are classified by their speaking rates, and the scaling factor is determined individually for each class using a maximum likelihood criterion. ASR experiments on 10-digit mobile phone numbers confirm that the overall error rate was reduced by 17.8% when the proposed method is employed.
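
Two pieces of this pipeline lend themselves to short sketches: counting vowel onsets per second as the speaking rate, and time-scaling the feature sequence by the chosen factor. Both functions below are illustrative assumptions about the details, not the paper's exact procedure:

```python
import numpy as np

def speaking_rate(vowel_flags, frame_shift_s=0.01):
    """Vowels per second from per-frame vowel decisions (the paper
    derives these decisions from band-energy distributions)."""
    flags = np.asarray(vowel_flags, dtype=int)
    onsets = np.count_nonzero(np.diff(flags, prepend=0) == 1)
    return onsets / (len(flags) * frame_shift_s)

def stretch_features(feats, scale):
    """Expand a (T, D) feature sequence by `scale` with linear
    interpolation, approximating duration normalization of fast speech."""
    T = feats.shape[0]
    src = np.linspace(0, T - 1, int(round(T * scale)))
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    w = (src - lo)[:, None]
    return (1 - w) * feats[lo] + w * feats[hi]
```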

A Study on Performance Improvement Method for the Multi-Model Speech Recognition System in the DSR Environment (DSR 환경에서의 다 모델 음성 인식시스템의 성능 향상 방법에 관한 연구)

  • Jang, Hyun-Baek;Chung, Yong-Joo
    • Journal of the Institute of Convergence Signal Processing / v.11 no.2 / pp.137-142 / 2010
  • Although the multi-model speech recognizer has been shown to be quite successful in noisy speech recognition, previous results were based on generic speech front-ends that do not take noise adaptation techniques into account. In this paper, for an accurate evaluation of the multi-model based speech recognizer, we adopted a noise-robust speech front-end, AFE, proposed by ETSI for the noisy DSR environment. For performance comparison, MTR, which is known to give good results in the DSR environment, was used. We also modified the structure of the multi-model based speech recognizer to improve recognition performance: the N reference HMMs most similar to the input noisy speech are used as the acoustic models for recognition, to cope with errors in reference HMM selection and with noise signal variability. In addition, multiple SNR levels are used to train each reference HMM to improve the robustness of the acoustic models. Experimental results on the Aurora 2 database show better recognition rates with the modified multi-model based speech recognizer than with the previous method.
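
One plausible reading of the N-reference-model selection is choosing the models whose training conditions sit closest to the estimated condition of the incoming utterance; the SNR-distance criterion below is an assumption for illustration, not the paper's similarity measure:

```python
import numpy as np

def select_reference_models(est_snr_db, model_snrs_db, n=3):
    """Indices of the N reference models trained nearest to the
    estimated SNR of the incoming utterance. Recognition would then
    run against all N models and keep the best-scoring hypothesis,
    softening the effect of a wrong single selection."""
    dists = np.abs(np.asarray(model_snrs_db, dtype=float) - est_snr_db)
    return np.argsort(dists)[:n]

print(select_reference_models(12.0, [0, 5, 10, 15, 20], n=2))  # [2 3]
```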

Perception of Transplanted English Prosody by American and Korean Listeners

  • Yi, So-Pae
    • Speech Sciences / v.14 no.1 / pp.73-89 / 2007
  • This study explored the perception of transplanted English prosody by thirty American and Korean listeners, male and female. English utterances of various sentence types produced by Korean and American male speakers were used to transplant American prosody contours onto Korean English utterances. The thirty subjects were then instructed to rate the transplanted prosodic components. Results showed that the interactions between the three factors (rater groups and transplantation types; transplantation types and sentence types; rater groups, transplantation types, and sentence types) were significant. Both Americans and Koreans perceived the effectiveness of the combined transplantation of duration and pitch, or of duration, pitch, and intensity. However, when perceiving individual prosodic components, Americans and Koreans gave different ratings. As for the overall prosody change, Americans perceived the change of intensity significantly, but Koreans did not, because intensity is not a crucial semantic factor in Korean. Americans rated the transplantation of duration alone as ineffective, while Koreans rated otherwise; this was explained by differences between English and Korean. The difference in perception was also significant across sentence types, especially for the three sentence types with speech rates slower than the others: a slower speech rate intensified the mismatch between the transplanted duration and the original pitch, creating a negative impression on American listeners, whereas it did not affect Korean listeners. Pedagogical implications of the findings are discussed.

CELP speech coder by the structure of multi-codebook (다중 코드북 구조를 이용한 CELP형 음성부호화기)

  • 박규정;한승조
    • Journal of the Korea Institute of Information and Communication Engineering / v.5 no.1 / pp.23-33 / 2001
  • In this paper, we propose a multi-codebook structure that can synthesize high-quality speech without increasing the computation of a CELP coder, and we design a 4.8 kbps CELP speech coder with the proposed codebook structure. The proposed multi-codebook structure is made up of a basic codebook and a second codebook formed to strengthen spectrum and pitch. The multi-codebook structure can represent gains accurately, since it represents excitation signals as the sum of two codebooks and uses a separate gain for each; it can therefore provide better speech quality than conventional structures. In a computer simulation of the 4.8 kbps CELP coder designed with the proposed codebook structure, its segSNR was 0.81 dB higher than that of the DoD CELP coder at the same transmission rate.
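
The two-codebook excitation with separate gains can be sketched as a joint least-squares search over entry pairs; this toy version omits the perceptual weighting filter and fast search used in real CELP coders:

```python
import numpy as np

def search_two_codebooks(target, cb_a, cb_b):
    """Exhaustive joint search over a basic and a spectrum/pitch
    codebook, solving for the two gains jointly by least squares."""
    best_err, best_idx, best_gains = np.inf, None, None
    for i, ca in enumerate(cb_a):
        for j, cb in enumerate(cb_b):
            C = np.stack([ca, cb], axis=1)            # (L, 2) basis
            gains, *_ = np.linalg.lstsq(C, target, rcond=None)
            err = float(np.sum((target - C @ gains) ** 2))
            if err < best_err:
                best_err, best_idx, best_gains = err, (i, j), gains
    return best_idx, best_gains, best_err
```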
