• Title/Summary/Keyword: 음소결합확률 (phonotactic probability)

Search results: 10

A Study of Development for Korean Phonotactic Probability Calculator (한국어 음소결합확률 계산기 개발연구)

  • Lee, Chan-Jong;Lee, Hyun-Bok;Choi, Hun-Young
    • The Journal of the Acoustical Society of Korea / v.28 no.3 / pp.239-244 / 2009
  • This paper describes the development of the Korean Phonotactic Probability Calculator (KPPC), which estimates phonotactic probability in Korean. KPPC calculates positional segment frequency, position-specific biphone frequency, and position-specific triphone frequency. KPPC also calculates neighborhood density, the number of words that sound similar to a target word. The Phonotactic Probability Calculator developed at the University of Kansas accepts computer-readable phonemic transcription and calculates positional frequency and position-specific biphone frequency derived from 20,000 dictionary words. KPPC, by contrast, calculates positional frequency, positional biphone frequency, positional triphone frequency, and neighborhood density, and accepts input either in the Korean alphabet or in computer-readable phonemic transcription. KPPC can thus identify high and low phonotactic probability as well as high and low neighborhood density.
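The positional counts and neighborhood density that KPPC computes can be sketched as follows; the mini-lexicon and its transcription scheme are illustrative stand-ins, not KPPC's actual dictionary data.

```python
from collections import Counter

# Hypothetical mini-lexicon in computer-readable phonemic transcription;
# real KPPC counts are derived from a dictionary, not this toy list.
lexicon = ["kam", "kan", "tam", "tan", "pam"]

def positional_counts(words):
    """Count how often each segment and biphone occurs at each position."""
    seg, biphone = Counter(), Counter()
    for w in words:
        for i, p in enumerate(w):
            seg[(i, p)] += 1
        for i in range(len(w) - 1):
            biphone[(i, w[i:i + 2])] += 1
    return seg, biphone

def neighborhood_density(target, words):
    """Number of words whose edit distance to the target is exactly 1."""
    def edit1(a, b):
        if abs(len(a) - len(b)) > 1:
            return False
        if len(a) == len(b):                       # one substitution
            return sum(x != y for x, y in zip(a, b)) == 1
        s, l = (a, b) if len(a) < len(b) else (b, a)
        return any(l[:i] + l[i + 1:] == s for i in range(len(l)))  # one deletion
    return sum(edit1(target, w) for w in words if w != target)

seg, bi = positional_counts(lexicon)
print(seg[(0, "k")])                       # 'k' in initial position: 2
print(bi[(0, "ka")])                       # biphone 'ka' at position 0: 2
print(neighborhood_density("kam", lexicon))  # neighbors: kan, tam, pam -> 3
```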

Language Identification System using phoneme recognizer and phonotactic language model (음소인식기와 음소결합확률모델을 이용한 언어식별시스템)

  • Lee Dae-Seong;Kim Se-Hyun;Oh Yung-Hwan
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.73-76 / 2001
  • In this paper, we implemented and evaluated a language identification system that determines the language of input telephone speech using a phoneme recognizer and phonotactic language models. The system consists of three stages: a phoneme recognizer decodes the input speech into a phoneme sequence; a training stage builds a phonotactic model for each target language from the recognized phoneme sequences; and an identification stage computes likelihood values from the trained phonotactic models and outputs the identification result. We also propose a weighting scheme, based on information theory (Shannon and Weaver, 1949), applied when computing likelihoods from the phonotactic models. The OGI 11-language telephone speech corpus (OGI-TS) was used for training and evaluation; the phoneme recognizer was implemented with HTK and trained on the NTIMIT telephone speech database. On the 11-language task, the system achieved average identification rates of 74.1% for 45-second utterances and 57.1% for 10-second utterances.
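The identification stage described above, scoring a decoded phoneme string against per-language phonotactic models, can be sketched with toy bigram models; the training strings, the add-one smoothing, and the absence of the paper's information-theoretic weighting are all simplifications.

```python
import math
from collections import Counter

# Toy phoneme strings per language; a real system would use sequences
# decoded by an HMM phoneme recognizer (HTK in the paper).
train = {
    "ko": ["abcab", "abcba", "bcaab"],
    "en": ["cbacb", "cbbca", "acbcb"],
}

def bigram_model(strings):
    """Add-one-smoothed phoneme bigram probabilities P(y | x)."""
    pair, uni = Counter(), Counter()
    for s in strings:
        for x, y in zip(s, s[1:]):
            pair[(x, y)] += 1
            uni[x] += 1
    V = len(set("".join(strings)))
    return lambda x, y: (pair[(x, y)] + 1) / (uni[x] + V)

models = {lang: bigram_model(ss) for lang, ss in train.items()}

def identify(phonemes):
    """Pick the language whose phonotactic model gives the highest log-likelihood."""
    score = lambda p: sum(math.log(p(x, y)) for x, y in zip(phonemes, phonemes[1:]))
    return max(models, key=lambda lang: score(models[lang]))

print(identify("abcab"))   # -> "ko"
```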


A Study on the Neural Networks for Korean Phoneme Recognition (한국어 음소 인식을 위한 신경회로망에 관한 연구)

  • Choi, Young-Bae;Yang, Jin-Woo;Lee, Hyung-Jun;Kim, Soon-Hyob
    • The Journal of the Acoustical Society of Korea / v.13 no.1 / pp.5-13 / 1994
  • This paper presents a study of neural networks for phoneme recognition, using the Time Delay Neural Network (TDNN). Because phoneme recognition is indispensable for continuous speech recognition, TDNN is used to obtain accurate phoneme recognition results. The paper also proposes a new training algorithm, suited to large-scale TDNNs, that converges the network to an optimal state regardless of the number of phonemes to be recognized. The recognition experiment was performed with this new training algorithm, which combines backpropagation with the Cauchy algorithm in a stochastic approach. For three phoneme classes and two speakers, the experiment yielded a recognition rate of 98.1%, indicating that the proposed algorithm achieves higher recognition performance and shorter convergence time than standard TDNN training.
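A minimal sketch of the hybrid idea, a deterministic gradient step combined with annealed Cauchy-distributed jumps accepted only when they improve the objective, run on a toy quadratic; the annealing schedule and acceptance rule are assumptions, not the paper's exact algorithm.

```python
import math
import random

random.seed(0)

def hybrid_descent(grad, w=5.0, lr=0.1, T=1.0, steps=200):
    """Gradient descent plus annealed Cauchy jumps on a toy loss x**2."""
    loss = lambda x: x * x
    for k in range(1, steps + 1):
        w -= lr * grad(w)                        # backpropagation-style step
        T_k = T / k                              # annealing schedule (assumed)
        jump = T_k * math.tan(math.pi * (random.random() - 0.5))  # Cauchy sample
        if loss(w + jump) < loss(w):             # accept only improving jumps
            w += jump
    return w

w = hybrid_descent(grad=lambda x: 2 * x)
print(w)   # converges close to the minimum at 0
```

The Cauchy distribution's heavy tails occasionally propose long jumps, which is what lets annealing-style methods escape local minima that plain gradient descent would stay in.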


The Study on the Speaker Adaptation Using Speaker Characteristics of Phoneme (음소에 따른 화자특성을 이용한 화자적응방법에 관한 연구)

  • 채나영;황영수
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2003.06a / pp.6-9 / 2003
  • In this paper, we study how speaker adaptation for Korean speech recognition differs according to phoneme class. To examine adaptation with phoneme-dependent weighting at the recognition-unit level, we used an SCHMM-based recognition system. The speaker adaptation methods used were MAPE (Maximum A Posteriori Probability Estimation) and linear spectral estimation. To evaluate these methods, we used ten Korean isolated digits as experimental data. Both methods support unsupervised learning and can be used in an on-line system. The first method outperformed the second, and a hybrid adaptation gave better recognition results than either method alone. Moreover, speaker adaptation using a variable weight per phoneme performed better than adaptation using a fixed weight.
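The MAPE-style interpolation with a per-phoneme weight can be illustrated for a single Gaussian mean: the adapted mean interpolates the speaker-independent prior with the new speaker's sample mean. The weight values below are illustrative, not the paper's.

```python
def map_adapt(prior_mean, samples, tau):
    """MAP update of a Gaussian mean: blend prior mean with sample mean.

    tau plays the role of a per-phoneme weight: large tau keeps more of
    the speaker-independent prior, small tau trusts the adaptation data.
    """
    n = len(samples)
    sample_mean = sum(samples) / n
    return (tau * prior_mean + n * sample_mean) / (tau + n)

# A poorly observed phoneme keeps more of the prior (large tau), while a
# well-observed phoneme moves toward the speaker's own statistics.
print(map_adapt(0.0, [1.0, 1.0, 1.0, 1.0], tau=4.0))   # -> 0.5
print(map_adapt(0.0, [1.0, 1.0, 1.0, 1.0], tau=12.0))  # -> 0.25
```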


A Study on the Diphone Recognition of Korean Connected Words and Eojeol Reconstruction (한국어 연결단어의 이음소 인식과 어절 형성에 관한 연구)

  • Jeong, Hong
    • The Journal of the Acoustical Society of Korea / v.14 no.4 / pp.46-63 / 1995
  • This thesis describes an unlimited-vocabulary connected speech recognition system using the Time Delay Neural Network (TDNN). The recognition unit is the diphone, which includes the transition between two phonemes; there are 329 diphone units. Recognition of Korean connected speech consists of three parts: feature extraction from the input speech signal, diphone recognition, and post-processing. In feature extraction, diphone intervals are located in the input signal, and feature vectors of 16th-order filter-bank coefficients are computed for each frame in the interval. Diphone recognition uses a three-stage hierarchical structure with 30 Time Delay Neural Networks; in particular, the TDNN structure is modified to increase the recognition rate. In post-processing, misrecognized diphone strings are corrected using phoneme transition probabilities and phoneme confusion probabilities, and eojeols (Korean words or phrases) are formed by combining the recognized diphones.


An On-line Speech and Character Combined Recognition System for Multimodal Interfaces (멀티모달 인터페이스를 위한 음성 및 문자 공용 인식시스템의 구현)

  • 석수영;김민정;김광수;정호열;정현열
    • Journal of Korea Multimedia Society / v.6 no.2 / pp.216-223 / 2003
  • In this paper, we present SCCRS (Speech and Character Combined Recognition System) for speaker/writer-independent, on-line multimodal interfaces. The CHMM (Continuous Hidden Markov Model) is known to be very useful for both speech recognition and on-line character recognition. In the proposed method, the same CHMM framework is applied to both speech and character recognition to construct a combined system. For this purpose, 115 CHMMs with 3 states and 9 transitions are built using the MLE (Maximum Likelihood Estimation) algorithm. Different features are extracted for the two modalities: MFCCs (Mel Frequency Cepstrum Coefficients) are used for speech in preprocessing, while position parameters are used for cursive characters. At the recognition step, SCCRS employs OPDP (One Pass Dynamic Programming) so as to be a practical combined recognition system. Experimental results show recognition rates of 51.65% for voice phonemes, 88.6% for voice words, 85.3% for cursive character graphemes, and 85.6% for cursive character words, without any language model, demonstrating the efficiency of the proposed system.


Automatic Generation of Domain-Dependent Pronunciation Lexicon with Data-Driven Rules and Rule Adaptation (학습을 통한 발음 변이 규칙 유도 및 적응을 이용한 영역 의존 발음 사전 자동 생성)

  • Jeon, Je-Hun;Chung, Min-Hwa
    • Proceedings of the Korean Society for Cognitive Science Conference / 2005.05a / pp.233-238 / 2005
  • This paper presents a method for automatically generating a pronunciation lexicon optimized for a specific domain through data-driven modeling of pronunciation variation. To minimize the errors of data-driven pronunciation variation modeling, we introduce a rule adaptation technique: pronunciation variation rules are first derived from a large speech corpus and then combined with rules derived from a relatively small domain speech corpus. The lexicon used here reflects the phonological phenomena of the phoneme contexts preceding and following each morpheme, and the pronunciations of each morpheme are generated automatically by applying the learned variation rules to a large text corpus. The average number of pronunciations per entry is controlled by adjusting a threshold on the probabilities of the applied variation rules. Compared with a conventional knowledge-based lexicon, the lexicon built with this method reduced the word error rate (WER) by 0.8% in a conversational speech recognition experiment, and it achieved its best performance with 5.6% fewer pronunciation variants per morpheme than the conventional approach.
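The thresholding step, keeping only variants generated by rules whose probability exceeds a cutoff, can be sketched as follows; the rules, probabilities, and base form are invented for illustration.

```python
# Hypothetical learned variation rules: (pattern, replacement, probability).
rules = [
    ("ti", "chi", 0.9),
    ("np", "mp", 0.6),
    ("kk", "k", 0.2),
]

def variants(base, rules, threshold):
    """Generate pronunciation variants from rules above a probability cutoff.

    Raising the threshold prunes low-probability rules and so shrinks the
    average number of variants per lexicon entry.
    """
    out = {base}
    for pat, rep, p in rules:
        if p >= threshold:
            out |= {v.replace(pat, rep) for v in list(out) if pat in v}
    return sorted(out)

print(variants("tinpakk", rules, threshold=0.5))   # 4 variants
print(variants("tinpakk", rules, threshold=0.95))  # base form only
```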


A Study on the Korean Broadcasting Speech Recognition (한국어 방송 음성 인식에 관한 연구)

  • 김석동;송도선;이행세
    • The Journal of the Acoustical Society of Korea / v.18 no.1 / pp.53-60 / 1999
  • This paper is a study of Korean broadcast speech recognition, presenting methods for large-vocabulary continuous speech recognition. Our main concerns are language modeling and the search algorithm. The acoustic model is a uniphone semi-continuous hidden Markov model, and the linguistic model is an N-gram model. The search algorithm consists of three phases, designed to exploit all available acoustic and linguistic information. First, a forward Viterbi beam search finds word-end frames and estimates the associated scores. Second, a backward Viterbi beam search finds word-begin frames and estimates the associated scores. Finally, an A* search combines the two results with the N-gram language model to produce the recognition output. With these methods, a maximum word recognition rate of 96.0% and a syllable recognition rate of 99.2% were achieved on a speaker-independent continuous speech recognition task with a vocabulary of about 12,000 words.
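The third phase, an A*-style best-first search guided by backward-pass scores over a word graph, can be sketched on a toy lattice; the scores below are made up, and a real system would draw them from the acoustic model and the N-gram language model.

```python
import heapq

# Toy word lattice: node -> [(next_node, word, log-domain score)].
# In the paper, the forward and backward Viterbi passes supply the word
# boundaries and scores; here both are invented for illustration.
lattice = {
    "start": [("a", "나는", -1.0), ("b", "난", -1.5)],
    "a": [("end", "간다", -1.2)],
    "b": [("end", "간다", -0.8)],
}
backward = {"start": -2.2, "a": -1.2, "b": -0.8, "end": 0.0}  # backward estimates

def a_star(lattice, backward):
    """Best-first search ordered by forward score + backward estimate."""
    heap = [(-backward["start"], 0.0, "start", [])]
    while heap:
        _, g, node, words = heapq.heappop(heap)
        if node == "end":                  # first goal popped is the best path
            return words, g
        for nxt, word, s in lattice.get(node, []):
            heapq.heappush(heap, (-(g + s + backward[nxt]), g + s, nxt, words + [word]))
    return [], float("-inf")

words, score = a_star(lattice, backward)
print(words, score)   # best path through the toy lattice
```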


Similar Question Search System for online Q&A for the Korean Language Based on Topic Classification (온라인가나다를 위한 주제 분류 기반 유사 질문 검색 시스템)

  • Mun, Jung-Min;Song, Yeong-Ho;Jin, Ji-Hwan;Lee, Hyun-Seob;Lee, Hyun Ah
    • Korean Journal of Cognitive Science / v.26 no.3 / pp.263-278 / 2015
  • The online Q&A service of the National Institute of the Korean Language provides expert answers to questions about the Korean language, and, as on other Q&A boards, many similar questions are posted repeatedly. If a system can automatically find questions similar to a user's question, it can immediately provide recommendable answers and spare experts from answering similar questions over and over. In this paper, we define five frequently asked topic classes and propose classifying questions into them. Our system searches for similar questions by combining topic similarity, vector similarity, and sequence similarity. Experiments show that topic classification improves search correctness: the system's Mean Reciprocal Rank (MRR) is 0.756, precision at the first result is 68.31%, and precision within the top five results is 87.32%.
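The combined scoring idea, filtering candidates by predicted topic class and then ranking them by a weighted mix of vector and sequence similarity, can be sketched as follows; the weights, whitespace tokenization, and toy English archive are assumptions, not the paper's setup.

```python
import math
from collections import Counter
from difflib import SequenceMatcher

def cosine(a, b):
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def combined_score(query, cand, w_vec=0.6, w_seq=0.4):
    """Weighted mix of vector similarity and surface sequence similarity."""
    return w_vec * cosine(query, cand) + w_seq * SequenceMatcher(None, query, cand).ratio()

def search(query, topic, archive):
    pool = [q for q, t in archive if t == topic]   # topic-class filter
    return max(pool, key=lambda q: combined_score(query, q))

archive = [
    ("what is the correct spacing rule", "spacing"),
    ("how do I spell this word", "spelling"),
    ("is the spacing in this sentence correct", "spacing"),
]
best = search("is this spacing correct", "spacing", archive)
print(best)
```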

Visualization of Korean Speech Based on the Distance of Acoustic Features (음성특징의 거리에 기반한 한국어 발음의 시각화)

  • Pok, Gou-Chol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.3 / pp.197-205 / 2020
  • Korean has the characteristic that the pronunciations of phoneme units such as vowels and consonants are fixed, and the pronunciation associated with a written form does not change, so foreign learners can approach the language rather easily. When words, phrases, or sentences are pronounced, however, pronunciation changes in varied and complex ways at syllable boundaries, and the correspondence between notation and pronunciation no longer holds; consequently, it is very difficult for foreign learners to master standard Korean pronunciation. Despite these difficulties, systematic analysis of pronunciation errors in Korean words is feasible, because, unlike in other languages including English, the relationship between Korean notation and pronunciation can be described by a set of firm rules without exceptions. In this paper, we propose a visualization framework that displays the differences between standard and erroneous pronunciations as quantitative measures on the computer screen. Previous research offers only color representations and 3D graphics of speech properties, or animated views of the changing shapes of the lips and mouth cavity; moreover, the features used in such analyses are point data, such as the average over a speech range. In this study, we propose a method that uses time-series data directly instead of summarized or distorted data. This is realized with a deep-learning-based technique combining a self-organizing map, a variational autoencoder, and a Markov model, and it achieves a substantial performance improvement over the point-data-based method.