• Title/Summary/Keyword: Pronunciation Training (발음 훈련)


Improvements of an English Pronunciation Dictionary Generator Using DP-based Lexicon Pre-processing and Context-dependent Grapheme-to-phoneme MLP (DP 알고리즘에 의한 발음사전 전처리와 문맥종속 자소별 MLP를 이용한 영어 발음사전 생성기의 개선)

  • 김회린;문광식;이영직;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.5
    • /
    • pp.21-27
    • /
    • 1999
  • In this paper, we propose an improved MLP-based English pronunciation dictionary generator for a variable-vocabulary word recognizer. The variable-vocabulary word recognizer can process any word in a Korean word lexicon that is determined dynamically by the current recognition task. To extend the system to English-word tasks, a pronunciation dictionary generator is needed that can handle words not included in a predefined lexicon, such as proper nouns. To build the English pronunciation dictionary generator, we use a context-dependent grapheme-to-phoneme multi-layer perceptron (MLP) for each grapheme. Training each MLP requires grapheme-to-phoneme training data extracted from a general pronunciation dictionary; to automate this extraction, we use a dynamic programming (DP) alignment algorithm with suitable distance metrics. For training and testing the grapheme-to-phoneme MLPs, we use a general English pronunciation dictionary of about 110,000 words. With 26 MLPs of 30 to 50 hidden nodes each, plus an exception-grapheme lexicon, we obtained a word accuracy of 72.8% on the 110,000 words, far above the 24.0% of a rule-based method.
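The DP pre-processing step aligns each dictionary entry's spelling with its phoneme string so that per-grapheme training targets can be read off. A minimal sketch of such an alignment (illustrative move costs, not the paper's distance metrics) might look like this:

```python
def g2p_align(word, phones):
    """DP alignment of graphemes to phonemes for building MLP training pairs.

    Each grapheme maps to 0, 1, or 2 phonemes: "_" marks a silent letter,
    and a pair such as ("k", "s") covers letters like 'x'. All costs are
    illustrative placeholders.
    """
    G, P = len(word), len(phones)
    INF = float("inf")
    dp = [[INF] * (P + 1) for _ in range(G + 1)]
    back = [[None] * (P + 1) for _ in range(G + 1)]
    dp[0][0] = 0.0

    def sub(g, p):                      # crude grapheme/phoneme similarity
        return 0.0 if p.lower().startswith(g.lower()) else 1.0

    for i in range(G + 1):
        for j in range(P + 1):
            c = dp[i][j]
            if c == INF:
                continue
            moves = []
            if i < G and j < P:         # grapheme emits one phoneme
                moves.append((i + 1, j + 1, c + sub(word[i], phones[j])))
            if i < G and j + 1 < P:     # grapheme emits two phonemes
                moves.append((i + 1, j + 2, c + 1.5))
            if i < G:                   # silent grapheme
                moves.append((i + 1, j, c + 1.2))
            for ni, nj, nc in moves:
                if nc < dp[ni][nj]:
                    dp[ni][nj] = nc
                    back[ni][nj] = (i, j)

    # backtrack into (grapheme, phoneme-tuple) training pairs
    pairs, i, j = [], G, P
    while i > 0:
        pi, pj = back[i][j]
        ph = tuple(phones[pj:j]) or ("_",)
        pairs.append((word[pi:i], ph))
        i, j = pi, pj
    return pairs[::-1]

print(g2p_align("box", ["b", "aa", "k", "s"]))
```

Each resulting pair gives one grapheme with its aligned phoneme label, which is exactly the input/target format a per-grapheme classifier needs.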


Analysis of Feature Extraction Methods for Distinguishing the Speech of Cleft Palate Patients (구개열 환자 발음 판별을 위한 특징 추출 방법 분석)

  • Kim, Sung Min;Kim, Wooil;Kwon, Tack-Kyun;Sung, Myung-Whun;Sung, Mee Young
    • Journal of KIISE
    • /
    • v.42 no.11
    • /
    • pp.1372-1379
    • /
    • 2015
  • This paper presents an analysis of feature extraction methods for distinguishing the speech of patients with cleft palates from that of people with normal palates. It is a basic study toward a software system for automatic recognition and restoration of disordered speech, in pursuit of improving the welfare of speech-disabled persons. Monosyllabic voice data were collected for three groups: normal speech, cleft palate speech, and simulated cleft palate speech. The data consist of 14 basic Korean consonants, 5 complex consonants, and 7 vowels. Features are extracted using three well-known methods, LPC, MFCC, and PLP, and pattern recognition is performed with a GMM acoustic model. From our experiments, we conclude that MFCC is generally the most effective method for identifying speech distortions. These results may contribute to the automatic detection and correction of the distorted speech of cleft palate patients, along with the development of a tool for rating levels of speech distortion.
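All three front ends are standard; as an illustration of the MFCC pipeline the paper found most effective, here is a minimal numpy sketch (the window length, hop, and filter counts are common defaults, not necessarily the paper's settings):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_filt=26, n_ceps=13, n_fft=512):
    """Minimal MFCC front end: pre-emphasis, framing, windowing,
    power spectrum, triangular mel filterbank, log, DCT-II."""
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])  # pre-emphasis
    n_frames = 1 + max(0, (len(sig) - frame_len) // hop)
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = sig[idx] * np.hamming(frame_len)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # triangular mel-spaced filterbank
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filt + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filt, n_fft // 2 + 1))
    for i in range(1, n_filt + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    feat = np.log(power @ fbank.T + 1e-10)
    # DCT-II decorrelates the log filterbank energies; keep n_ceps coefficients
    n = feat.shape[1]
    basis = np.cos(np.pi / n * (np.arange(n)[:, None] + 0.5)
                   * np.arange(n_ceps)[None, :])
    return feat @ basis
```

The resulting per-frame coefficient vectors are what a GMM classifier, as used in the paper, would be trained on.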

Development of Speech Training Aids Using Vocal Tract Profile (조음도를 이용한 발음훈련기기의 개발)

  • 박상희;김동준;이재혁;윤태성
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.41 no.2
    • /
    • pp.209-216
    • /
    • 1992
  • Deaf people train articulation by observing a tutor's mouth, tactually sensing the motions of the vocal organs, or using speech training aids. Existing speech training aids for the deaf can measure only a single speech parameter, or display only frequency spectra as histograms or pseudo-color maps. In this study, we develop a speech training aid that displays the subject's articulation as a cross-section of the vocal organs together with other speech parameters in a single system, so that the subject can see where to correct. To this end, the speech production mechanism is first modeled as an AR process so that articulatory motions of the vocal organs can be estimated from the speech signal. A vocal tract profile model based on LP analysis is then constructed, and with this model the articulatory motions for Korean vowels are estimated and displayed in the vocal tract profile graphics.
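Going from an AR (LP) model of one speech frame to a tube-area profile can be sketched with the classical relation between LPC reflection coefficients and acoustic-tube section areas. The sketch below (numpy; the order, names, and area-ratio convention are illustrative assumptions, not the paper's implementation) estimates an area profile from a signal frame:

```python
import numpy as np

def levinson(r, order):
    """Levinson-Durbin recursion; returns the LPC reflection coefficients."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    ks = []
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k = -acc / e
        a[1:m] += k * a[m - 1:0:-1]   # RHS is copied before the in-place add
        a[m] = k
        ks.append(k)
        e *= (1.0 - k * k)
    return np.array(ks)

def area_function(ks, lips_area=1.0):
    """Tube areas from reflection coefficients:
    A[i+1] / A[i] = (1 - k[i]) / (1 + k[i])."""
    areas = [lips_area]
    for k in ks:
        areas.append(areas[-1] * (1.0 - k) / (1.0 + k))
    return np.array(areas)

def lpc_area_profile(signal, order=10):
    """Windowed autocorrelation -> reflection coefficients -> area profile."""
    x = signal * np.hamming(len(signal))
    r = np.array([np.dot(x[:len(x) - i], x[i:]) for i in range(order + 1)])
    ks = levinson(r, order)
    return area_function(ks)
```

Plotting successive area profiles over time would give the kind of moving vocal-tract cross-section display the paper describes.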


A Study on the Implementation of a Vocabulary-Independent Korean Speech Recognizer (가변어휘 음성인식기 구현에 관한 연구)

  • 황병한
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.06d
    • /
    • pp.60-63
    • /
    • 1998
  • This paper describes a variable-vocabulary recognition system in which the user can add or change the recognition vocabulary without a separate training pass. In variable-vocabulary recognition, once the target vocabulary is fixed, word models are built by concatenating pre-trained phone models according to a pronunciation dictionary. The phone models are triphones, context-dependent models conditioned on the preceding and following phones, and an isolated-word recognition system based on continuous-density Hidden Markov Models (HMM) was implemented. For comparison, recognition experiments were also run with context-independent monophone models. The system uses MFCC (Mel Frequency Cepstrum Coefficient) feature vectors, and to handle unseen triphones not covered by the training data, tree-based clustering, a state-tying method grounded in phonetic knowledge, was introduced. The phone models were trained on the POW (Phonetically Optimized Words) speech database (DB) built by ETRI [1]; vocabulary-independent recognition experiments used an isolated-word department-name DB [2] of 1,100 utterances, consisting of 22 department names unrelated to the POW DB spoken by 50 speakers. The context-dependent phone models achieved 96.2% accuracy, outperforming the 88.6% of the context-independent models.
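The word-model construction described above can be sketched as follows. The 'l-p+r' triphone naming scheme and the backoff to monophones for unseen triphones are illustrative simplifications; the paper instead resolves unseen triphones with tree-based state clustering:

```python
def word_model(word, lexicon, trained_triphones):
    """Build a word model as a chain of context-dependent phone model names.

    Hypothetical naming: 'l-p+r' for a triphone of phone p with left context l
    and right context r ('sil' at word edges), plain 'p' for a monophone.
    Unseen triphones back off to monophones in this sketch.
    """
    phones = lexicon[word]                      # pronunciation dictionary lookup
    chain = []
    for i, p in enumerate(phones):
        left = phones[i - 1] if i > 0 else "sil"
        right = phones[i + 1] if i < len(phones) - 1 else "sil"
        tri = f"{left}-{p}+{right}"
        chain.append(tri if tri in trained_triphones else p)
    return chain

lexicon = {"cat": ["k", "ae", "t"]}             # one dictionary entry
seen = {"sil-k+ae", "k-ae+t"}                   # triphones covered by training
print(word_model("cat", lexicon, seen))
# the word-final triphone 'ae-t+sil' is unseen, so it backs off to 't'
```

Because only the dictionary lookup changes, adding a new word to the vocabulary needs no retraining, which is the point of the variable-vocabulary design.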


A study on effective diction training in choral communication (합창 커뮤니케이션에서 효과적인 딕션 훈련을 위한 연구)

  • Kim, Hyung-il
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.3
    • /
    • pp.237-245
    • /
    • 2021
  • The purpose of this study is to propose an effective diction training technique that conductors can use in choral communication. In chorus, the phonology of the language of the lyrics influences diction; therefore, Korean lyrics must be pronounced according to Korean phonology. In spoken language, accuracy of pronunciation is important, but when expressing lyrics in song, both vocalization and diction matter. In particular, a chorus is sung by many people, so if the diction is not accurate, the lyrics will not be delivered properly. In this study, the diction of lyrics frequently used in actual Korean choral songs was systematically analyzed according to Korean phonological rules. The study found that the main factor making choral diction difficult is phonological alternation in Korean, which occurred especially when pronouncing final sounds and when consonants meet other consonants. A follow-up study intends to contribute to the development of choral communication by presenting a systematic choral diction based on Korean phonology.

A Study on the Variable Vocabulary Speech Recognition in the Vocabulary-Independent Environments (어휘독립 환경에서의 가변어휘 음성인식에 관한 연구)

  • 황병한
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.06e
    • /
    • pp.369-372
    • /
    • 1998
  • This paper studies variable-vocabulary speech recognition, in which the target vocabulary can be added to or changed without a separate training pass, in a vocabulary-independent environment. In variable-vocabulary recognition, phone models are first trained on a large speech database (DB); once the target vocabulary is fixed, word models are built by concatenating the phone models according to a pronunciation dictionary, so the vocabulary can be changed or extended without retraining. Recognition experiments used context-dependent triphone models, and a vocabulary-dependent model was built separately for performance comparison. To counter the unseen-triphone problem and the loss of model-parameter reliability caused by insufficient training data, tree-based clustering (TBC) [1], a state-tying method grounded in phonetic knowledge, was introduced. Experiments were run with three speech feature vectors based on MFCC (Mel Frequency Cepstrum Coefficients) and log energy, and an isolated-word recognition system based on continuous-density Hidden Markov Models (HMM) was implemented. The recognition experiments used a 22-department-name DB [3]. In the vocabulary-independent environment the best recognition rate was 98.4%, close to the 99.7% obtained in the vocabulary-dependent environment.
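Tree-based clustering splits pooled triphone states with phonetic yes/no questions, keeping the split that most improves the model fit, and unseen triphones are later answered down the same tree to find a tied state. A toy sketch (illustrative question set and 1-D state statistics, not the cited TBC implementation, with squared deviation standing in for the likelihood criterion):

```python
# Greedy tree-based state clustering over triphone names of the form 'l-p+r'.
VOWELS = {"a", "e", "i", "o", "u"}
QUESTIONS = {                               # hypothetical phonetic questions
    "L_vowel": lambda tri: tri.split("-")[0] in VOWELS,
    "R_vowel": lambda tri: tri.split("+")[1] in VOWELS,
    "L_nasal": lambda tri: tri.split("-")[0] in {"n", "m", "ng"},
}

def sse(states):
    """Summed squared deviation of (name, mean) states from the cluster mean."""
    if not states:
        return 0.0
    means = [m for _, m in states]
    mu = sum(means) / len(means)
    return sum((m - mu) ** 2 for m in means)

def split(states, min_gain=0.1):
    """Recursively split states by the best question; leaves are tied states."""
    base = sse(states)
    best = None
    for name, q in QUESTIONS.items():
        yes = [s for s in states if q(s[0])]
        no = [s for s in states if not q(s[0])]
        gain = base - sse(yes) - sse(no)
        if yes and no and (best is None or gain > best[0]):
            best = (gain, yes, no)
    if best is None or best[0] < min_gain:
        return [states]                     # leaf: one tied state
    _, yes, no = best
    return split(yes, min_gain) + split(no, min_gain)
```

In a real system the split criterion is the log-likelihood gain of Gaussian state distributions, and the question set comes from phonetic knowledge, as in the TBC method the paper cites.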


A Study on the Implementation of an Automatic Segmentation System of Korean Speech based on the Hidden Markov Model (HMM에 의한 한국어음성의 자동분할 시스템의 구현에 관한 연구)

  • 김윤중;김미경;이인동
    • Journal of Information Technology Application
    • /
    • v.1 no.3_4
    • /
    • pp.1-23
    • /
    • 1999
  • In this study, an automatic speech segmentation system was implemented using HMMs (Hidden Markov Models) and the Level Building algorithm, taking as input a sample set (training-pattern set) of the target phone sequences. The system generates reference phone models from naturally pronounced connected speech. It consists of an initialization step, an HMM training step, and a segmentation-and-clustering step based on Level Building. In the initialization step, an initial family of phone sets is generated from the training patterns using control information. In the Level Building segmentation-and-clustering step, the training patterns are segmented into phone units using the phone models and control information, and the segmented candidate phones are clustered to produce a new family of phone sets. This procedure is repeated until the phone-model configuration no longer changes, yielding the final phone models. The experiments used connected speech patterns consisting of up to three digit words. To analyze the automatic segmentation of the training patterns, the most critical step in generating reference phone models for connected words, the segmentation obtained at each iteration was visualized in graphs.
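Level Building is at heart a dynamic-programming segmentation over levels. As a rough stand-in, the sketch below segments a 1-D feature sequence into k contiguous pieces by minimizing within-segment squared error; the real system scores segments with HMM likelihoods instead of this toy cost:

```python
def segment(seq, k):
    """Split seq into k contiguous segments minimizing within-segment SSE,
    via level-by-level dynamic programming. Returns (start, end) index pairs."""
    n = len(seq)
    pre, pre2 = [0.0], [0.0]
    for v in seq:                               # prefix sums for O(1) costs
        pre.append(pre[-1] + v)
        pre2.append(pre2[-1] + v * v)

    def cost(i, j):                             # SSE of seq[i:j]
        s, s2, m = pre[j] - pre[i], pre2[j] - pre2[i], j - i
        return s2 - s * s / m

    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for lvl in range(1, k + 1):                 # one DP pass per level
        for j in range(lvl, n + 1):
            for i in range(lvl - 1, j):
                if dp[lvl - 1][i] == INF:
                    continue
                c = dp[lvl - 1][i] + cost(i, j)
                if c < dp[lvl][j]:
                    dp[lvl][j], cut[lvl][j] = c, i

    segs, j = [], n                             # backtrack the segment bounds
    for lvl in range(k, 0, -1):
        i = cut[lvl][j]
        segs.append((i, j))
        j = i
    return segs[::-1]
```

In the full system, the segments produced at each iteration are clustered into phone sets and the HMMs retrained, repeating until the configuration stabilizes, as the abstract describes.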


Comparing the effects of letter-based and syllable-based speaking rates on the pronunciation assessment of Korean speakers of English (철자 기반과 음절 기반 속도가 한국인 영어 학습자의 발음 평가에 미치는 영향 비교)

  • Hyunsong Chung
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.1-10
    • /
    • 2023
  • This study investigated the relative effectiveness of letter-based versus syllable-based measures of speech rate and articulation rate in predicting the articulation score, prosody fluency, and rating sum using "English speech data of Koreans for education" from AI Hub. We extracted and analyzed 900 utterances from the training data, including three balanced age groups (13, 19, and 26 years old). The study built three models that best predicted the pronunciation assessment scores using linear mixed-effects regression and compared the predicted scores with the actual scores from the validation data (n=180). The correlation coefficients between them were also calculated. The findings revealed that syllable-based measures of speech and articulation rates were more effective than letter-based measures in all three pronunciation assessment categories. The correlation coefficients between the predicted and actual scores ranged from .65 to .68, indicating the models' good predictive power. However, it remains inconclusive whether speech rate or articulation rate is more effective.
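The letter- and syllable-based measures compared in the study can be illustrated with a small helper. The syllable counter below is a crude orthographic heuristic (vowel-letter runs, with a silent final 'e' dropped), not the study's method, and the function names are illustrative:

```python
import re

VOWELS = "aeiouy"

def count_syllables(word):
    """Approximate syllable count from spelling: runs of vowel letters
    ('y' treated as a vowel), dropping a silent final 'e'."""
    w = word.lower()
    if w.endswith("e") and len(w) > 2 and w[-2] not in VOWELS:
        w = w[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", w)))

def rates(text, dur_s, pause_s=0.0):
    """Letter- vs syllable-based speech and articulation rates (units/second).
    Speech rate divides by total duration; articulation rate excludes pauses."""
    words = re.findall(r"[a-zA-Z']+", text)
    letters = sum(len(w) for w in words)
    syllables = sum(count_syllables(w) for w in words)
    artic = dur_s - pause_s
    return {
        "letters_per_s_speech": letters / dur_s,
        "syll_per_s_speech": syllables / dur_s,
        "letters_per_s_artic": letters / artic,
        "syll_per_s_artic": syllables / artic,
    }
```

Feeding such per-utterance rates into a regression against assessment scores is the kind of modeling the study performed (with linear mixed-effects models).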

An Enhancement of Japanese Acoustic Model using Korean Speech Database (한국어 음성데이터를 이용한 일본어 음향모델 성능 개선)

  • Lee, Minkyu;Kim, Sanghun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.32 no.5
    • /
    • pp.438-445
    • /
    • 2013
  • In this paper, we propose an enhancement of a Japanese acoustic model trained with the help of a Korean speech database, using several combination strategies. We describe strategies for training with two or more languages combined: Cross-Language Transfer, Cross-Language Adaptation, and the Data Pooling Approach. We simulated these strategies and identified a suitable method for our current Japanese database. Existing combination strategies have generally been validated in under-resourced language environments, but we confirmed that they are inappropriate when the speech database is not truly under-resourced. In the Data Pooling training process, we built the tied list from the target language only. As a result, the acoustic model achieved an ERR of 12.8%.

Training Effect on the Perception and Production of English Grapheme by Korean Learners of English (한국 학생들의 영어 철자 인지와 발화에 대한 훈련효과)

  • Cho, Mi-Hui
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.11
    • /
    • pp.226-233
    • /
    • 2019
  • Given that the English grapheme <u> is realized as five different American English vowels [ʌ, ju, ʊ, u, ə], the purpose of the current study is to examine Korean learners' perception and production of words with <u> and the effect of training on them. The study conducted a pretest, training, and a posttest with 31 Korean university students on 24 English words containing <u>. Overall, the participants' perception and production accuracy improved significantly in the posttest, indicating a training effect on both perception and production. However, not all five vowels showed the effect. In perception, the accuracy rates of [ʌ], [ju], and [ə] improved after training whereas those of [ʊ] and [u] did not; in production, [ʌ], [ʊ], and [u] showed no training effect. These results indicate that the Korean participants had difficulty distinguishing tense [u] from lax [ʊ] in both perception and production. In particular, they tended to replace lax [ʊ] with tense [u] in production, because tense [u] is acoustically the closest match to Korean [u] and is therefore easier for them to pronounce than lax [ʊ]. English [ʌ] also tended to be mispronounced as [u]-quality vowels such as [u] and [ju], owing to the spelling <u>. The participants further showed errors inserting [j] after the alveolars [t, d, n, s], which runs against yod-dropping in American English, and deleted [j] after labials and velars, which is due to the absence of orthography in the target words. Finally, pedagogical implications are discussed based on these findings.

