• Title/Abstract/Keywords: Korean-English speech recognition

Search results: 62 items

ON THE USE OF SPEECH RECOGNITION TECHNOLOGY FOR FOREIGN LANGUAGE PRONUNCIATION TEACHING

  • Keikichi Hirose;Carlos T. Ishi;Goh Kawai
    • 대한음성학회 학술대회논문집 / Proceedings of the July 2000 conference / pp. 17-28 / 2000
  • Speech technologies have recently shown notable advances and now play major roles in computer-aided language learning systems. In this paper, the use of speech recognition technologies is reviewed in the context of our system for teaching English pronunciation to Japanese speakers.


한국어 음성 인식을 위한 mono-phone 구성의 기초 연구 (The Basic Study on making mono-phone for Korean Speech Recognition)

  • 황영수;송민석
    • 한국음향학회 학술대회논문집 / Proceedings of the 2000 conference, Vol. 19, No. 2 / pp. 45-48 / 2000
  • When building a large-vocabulary speech recognition system, it is better to use the segment (phone) rather than the syllable or the word as the recognition unit. This paper presents a basic study on constructing a mono-phone set for Korean speech recognition. The experiments use the OGI speech toolkit (Oregon Graduate Institute, U.S.A.). The results show that the recognition rate when a diphthong is modeled as a single unit is superior to that when a diphthong is split into two units, i.e., a glide plus a vowel. The recognition rate also varies slightly with the number of consonant units.

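A minimal sketch of the two diphthong modeling choices compared in the entry above: treating a diphthong as one mono-phone versus decomposing it into a glide plus a vowel. The phone symbols and the lexicon lookup are illustrative assumptions, not the paper's actual phone inventory.

```python
# Illustrative only: two hypothetical mono-phone inventories for Korean diphthongs.
# Option A keeps each diphthong as a single unit; option B splits it into glide + vowel.

DIPHTHONG_AS_SINGLE_UNIT = {
    "야": ["ya"],
    "위": ["wi"],
}

DIPHTHONG_AS_GLIDE_PLUS_VOWEL = {
    "야": ["j", "a"],
    "위": ["w", "i"],
}

def to_phone_sequence(syllables, lexicon):
    """Map syllables to a flat mono-phone sequence using the chosen inventory."""
    phones = []
    for syllable in syllables:
        phones.extend(lexicon.get(syllable, [syllable]))  # fall back to the syllable itself
    return phones

if __name__ == "__main__":
    word = ["위", "야"]
    print(to_phone_sequence(word, DIPHTHONG_AS_SINGLE_UNIT))       # ['wi', 'ya']
    print(to_phone_sequence(word, DIPHTHONG_AS_GLIDE_PLUS_VOWEL))  # ['w', 'i', 'j', 'a']
```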

Hyperparameter experiments on end-to-end automatic speech recognition

  • Yang, Hyungwon;Nam, Hosung
    • 말소리와 음성과학 (Phonetics and Speech Sciences) / Vol. 13, No. 1 / pp. 45-51 / 2021
  • End-to-end (E2E) automatic speech recognition (ASR) has achieved promising performance gains with the introduction of the self-attention network, Transformer. However, because of the training time and the number of hyperparameters, finding the optimal hyperparameter set is computationally expensive. This paper investigates the impact of hyperparameters in the Transformer network to answer two questions: which hyperparameters play a critical role in task performance, and which in training speed. The Transformer network used for training consists of encoder and decoder networks combined with Connectionist Temporal Classification (CTC). We trained the model on Wall Street Journal (WSJ) SI-284 and tested it on dev93 and eval92. Seventeen hyperparameters were selected from the ESPnet training configuration, and varying ranges of values were used in the experiments. The results show that the "num blocks" and "linear units" hyperparameters in the encoder and decoder networks reduce the Word Error Rate (WER) significantly, and the performance gain is more prominent when they are altered in the encoder network. Training duration also increased linearly as the values of "num blocks" and "linear units" grew. Based on the experimental results, we combined the optimal value of each hyperparameter and reduced the WER to as low as 2.9/1.9 on dev93 and eval92, respectively.
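
As a rough sketch of the kind of sweep the abstract describes, the snippet below varies "num_blocks" and "linear_units" for the encoder or decoder side of a Transformer configuration. The key names mirror the ESPnet-style configuration mentioned in the abstract, but the candidate values, the CTC weight, and the bookkeeping code are assumptions for illustration, not the paper's actual setup.

```python
from itertools import product

# Baseline Transformer configuration (illustrative values, not the paper's).
base_config = {
    "encoder": {"num_blocks": 12, "linear_units": 2048, "attention_heads": 4},
    "decoder": {"num_blocks": 6, "linear_units": 2048, "attention_heads": 4},
    "ctc_weight": 0.3,  # joint CTC/attention training, as in the abstract
}

# Candidate values for the two hyperparameters the paper found most influential.
NUM_BLOCKS = [6, 12, 18]
LINEAR_UNITS = [1024, 2048, 4096]

def trial_configs(side):
    """Yield one configuration per (num_blocks, linear_units) pair for one side."""
    for nb, lu in product(NUM_BLOCKS, LINEAR_UNITS):
        cfg = {k: (dict(v) if isinstance(v, dict) else v) for k, v in base_config.items()}
        cfg[side]["num_blocks"] = nb
        cfg[side]["linear_units"] = lu
        yield cfg

if __name__ == "__main__":
    # The abstract reports larger gains from encoder-side changes,
    # so the encoder sweep would be the first grid to run.
    for cfg in trial_configs("encoder"):
        print(cfg["encoder"])
```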

한국어-영어 이중언어사용아동의 음운인식능력 (Phonological Awareness in Korean-English Bilingual Children)

  • 박민영;고도흥;이윤경
    • 음성과학 (Speech Sciences) / Vol. 13, No. 2 / pp. 35-46 / 2006
  • This study investigated whether there are differences between Korean-English bilingual and Korean monolingual children in phonological awareness skills. Participants were 11 Korean-English bilingual children and 12 Korean monolingual children aged 6 to 7 years. The results were as follows. First, the bilingual children significantly outperformed the monolingual children on the overall phonological awareness tasks, scoring significantly higher on all three task types (segmentation, deletion, and blending). Second, there was a significant difference between the groups with respect to the phonological unit of the tasks: the bilinguals performed significantly better than the monolinguals on the phonemic-unit tasks, but the two groups did not differ significantly on the syllabic-unit tasks, and there was an interaction effect between unit size (syllables and phonemes) and group (bilinguals and monolinguals). Third, for both bilingual and monolingual children, overall phonological awareness skills correlated with word recognition skills.


음절의 시작과 단어 시작의 불일치가 영어 단어 인지에 미치는 영향 (The Effects of Misalignment between Syllable and Word Onsets on Word Recognition in English)

  • 김선미;남기춘
    • 말소리와 음성과학 (Phonetics and Speech Sciences) / Vol. 1, No. 4 / pp. 61-71 / 2009
  • This study aims to investigate whether the misalignment between syllable and word onsets caused by resyllabification affects Korean-English late bilinguals' perception of English continuous speech. Two word-spotting experiments were conducted. In Experiment 1, misalignment (resyllabified) conditions were created by adding CVC contexts at the beginning of vowel-initial words, and alignment (non-resyllabified) conditions were made by putting the same CVC contexts at the beginning of consonant-initial words. The results of Experiment 1 showed that targets in the alignment conditions were detected faster and more accurately than those in the misalignment conditions. Experiment 2 was conducted to rule out the possibility that the results of Experiment 1 arose because consonant-initial words are easier to recognize than vowel-initial words; for this reason, all the experimental stimuli of Experiment 2 were vowel-initial words preceded by CVC or CV contexts. Experiment 2 also showed a misalignment cost when recognizing words in the resyllabified conditions. These results indicate that Korean listeners are influenced by the misalignment between syllable and word onsets triggered by resyllabification when recognizing words in English connected speech.


MFCC와 DTW에 알고리즘을 기반으로 한 디지털 고립단어 인식 시스템 (Digital Isolated Word Recognition System based on MFCC and DTW Algorithm)

  • 장한;정길도
    • 대한전기학회 학술대회논문집 / Proceedings of the 2008 KIEE conference, Information and Control Division / pp. 290-291 / 2008
  • The most popular speech feature used in speech recognition today is Mel-Frequency Cepstral Coefficients (MFCC), which reflect the perceptual characteristics of the human ear more accurately than other parameters. This paper adopts MFCC and its first-order difference, which captures the dynamic character of the speech signal, as a combined parametric representation. Furthermore, the Dynamic Time Warping (DTW) algorithm is used to search for matching paths in the pattern recognition process. English digits were recorded with the software "GoldWave" in a laboratory environment, and the simulation results indicate that the algorithm achieves higher recognition accuracy than approaches using LPCC and other feature parameters in the experiments on the Digital Isolated Word Recognition (DIWR) system.

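A minimal sketch of the MFCC-plus-delta front end and DTW template matching described in the entry above, assuming librosa is available; the file names are placeholders, and the plain Euclidean DTW below is a generic textbook version, not the paper's exact implementation.

```python
import numpy as np
import librosa

def mfcc_with_delta(path, sr=16000, n_mfcc=13):
    """Return a (frames, 2*n_mfcc) matrix of MFCCs stacked with their first-order deltas."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    delta = librosa.feature.delta(mfcc)                     # first-order difference
    return np.vstack([mfcc, delta]).T

def dtw_distance(a, b):
    """Accumulated DTW alignment cost with a Euclidean local distance."""
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

if __name__ == "__main__":
    template = mfcc_with_delta("digit_template.wav")  # placeholder reference recording
    test = mfcc_with_delta("digit_test.wav")          # placeholder unknown utterance
    print("DTW cost:", dtw_distance(template, test))  # smaller cost = better match
```

A full recognizer of this kind would keep one or more templates per digit and label the test utterance with the digit whose template yields the lowest DTW cost.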

응급의료 영역 한국어 음성대화 데이터베이스 구축 (Building a Korean conversational speech database in the emergency medical domain)

  • 김선희;이주영;최서경;지승훈;강지민;김종인;김도희;김보령;조은기;김호정;장정민;김준형;구본혁;박형민;정민화
    • 말소리와 음성과학 (Phonetics and Speech Sciences) / Vol. 12, No. 4 / pp. 81-90 / 2020
  • This paper defines a method for collecting data in real-world emergency medical settings and proposes a method for transcribing the collected data in order to improve speech recognition performance in this domain. Baseline speech recognition experiments using the data collected and transcribed with the proposed methods are then conducted to evaluate the collection and transcription procedures and to suggest directions for future research. All speech was recorded at 16-bit resolution with 16 kHz sampling. The collected data comprise 166 conversations totaling 8 hours and 35 minutes. Using Praat, the data were transcribed at the orthographic, phonemic, dialect, noise, and medical-code levels, producing text data enriched with these layers of information. Baseline experiments with these data revealed the actual difficulties of speech recognition in the emergency medical domain. The data presented in this paper constitute a first-stage dataset for the emergency medical domain and are expected to serve as training data for speech recognition models in the medical field and, further, to contribute to the development of speech-based systems in this area.
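
A sketch of the kind of multi-tier record the entry above describes (orthographic, phonemic, dialect, noise, and medical-code tiers produced in Praat). The field names, class layout, and toy example are assumptions for illustration, not the database's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    start: float            # seconds from the start of the recording
    end: float
    orthographic: str       # spelling-level transcription
    phonemic: str = ""      # phoneme-level transcription
    dialect: str = ""       # dialect annotation, if any
    noise: str = ""         # noise label (e.g., alarms, overlapping speech)
    medical_code: str = ""  # emergency-medicine code assigned to the utterance

@dataclass
class Conversation:
    conversation_id: str
    sample_rate: int = 16000  # 16 kHz, 16-bit, as stated in the abstract
    segments: List[Segment] = field(default_factory=list)

    def total_duration(self) -> float:
        return sum(s.end - s.start for s in self.segments)

if __name__ == "__main__":
    conv = Conversation("ER_0001")  # hypothetical identifier
    conv.segments.append(
        Segment(0.00, 2.35, orthographic="환자가 숨을 잘 못 쉬어요",
                noise="background_alarm", medical_code="R06.0")  # illustrative code value
    )
    print(f"{conv.conversation_id}: {conv.total_duration():.2f} s transcribed")
```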

인공와우 시뮬레이션에서 나타난 건청인 영어학습자의 영어 말소리 지각 (Korean ESL Learners' Perception of English Segments: a Cochlear Implant Simulation Study)

  • 임애리;김다히;이석재
    • 말소리와 음성과학 (Phonetics and Speech Sciences) / Vol. 6, No. 3 / pp. 91-99 / 2014
  • Although it is well documented that cochlear implant patients experience hearing difficulties when processing their first language, little is known about whether, and to what extent, they can recognize segments in a second language. This preliminary study examines how Korean learners of English identify English segments under normal-hearing and cochlear-implant simulation conditions. Participants heard English vowels and consonants in three conditions: normal hearing, 12-channel noise vocoding with 0-mm spectral shift, and 12-channel noise vocoding with 3-mm spectral shift. The results confirmed that nonnative listeners can also retrieve spectral information from a vocoded speech signal, as they recognized vowel features fairly accurately despite the vocoding. In contrast, the intelligibility of the manner and place features of consonants was significantly decreased by vocoding. In addition, spectral shift affected listeners' vowel recognition, probably because information about F1 is diminished by spectral shifting. The results suggest that cochlear implant patients and normal-hearing second language learners would show different patterns of listening errors when processing their second language(s).
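
A rough sketch of a noise vocoder of the kind used in the simulation above: split the signal into 12 bands, extract each band's temporal envelope, and use it to modulate band-limited noise. The band edges, filter order, and normalization are assumptions, and the spectral-shift manipulation from the study is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass between lo and hi (Hz)."""
    sos = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
    return sosfiltfilt(sos, x)

def noise_vocode(x, fs, n_channels=12, f_lo=100.0, f_hi=7000.0):
    """Replace each band's fine structure with envelope-modulated noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(x, lo, hi, fs)
        envelope = np.abs(hilbert(band))              # temporal envelope of this band
        carrier = bandpass(rng.standard_normal(len(x)), lo, hi, fs)
        out += envelope * carrier                     # envelope-modulated noise band
    return out / (np.max(np.abs(out)) + 1e-9)         # simple peak normalization

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    demo = np.sin(2 * np.pi * 220 * t)                # stand-in for a speech signal
    print(noise_vocode(demo, fs).shape)               # (16000,)
```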

신경회로망 이용한 한국어 음소 인식 (Korean Phoneme Recognition Using Neural Networks)

  • 김동국;정차균;정홍
    • 대한전기학회논문지 (Transactions of the Korean Institute of Electrical Engineers) / Vol. 40, No. 4 / pp. 360-373 / 1991
  • Since the 1970s, efficient speech recognition methods such as HMM and DTW have been introduced, primarily for speaker-dependent isolated words. These methods, however, have run into difficulties in recognizing continuous speech. Since the early 1980s, there has been a growing awareness that neural networks might be more appropriate for phoneme recognition, as demonstrated for English and Japanese. Korean phoneme recognition, which has dealt with only part of the vowel or consonant set, still remains at an elementary level. In this light, we develop a neural-network-based system that can recognize the major Korean phonemes. Experiments with two neural networks, SOFM and TDNN, produced remarkable results; with TDNN in particular, the recognition rate was estimated at about 93.78% for the training data and 89.83% for the test data.
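
A minimal sketch of a TDNN of the kind the entry above applies to phoneme recognition: 1-D convolutions over a short context of acoustic feature frames followed by a phoneme classifier. The layer sizes, the 16-dimensional input features, and the 30 output classes are assumptions, not the original 1991 architecture.

```python
import torch
import torch.nn as nn

class TinyTDNN(nn.Module):
    """Time-delay network: each Conv1d layer looks at a small window of frames."""
    def __init__(self, n_features=16, n_phonemes=30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3),  # 3-frame temporal context
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5),          # widens the effective context
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                   # pool over time
        )
        self.classifier = nn.Linear(32, n_phonemes)

    def forward(self, x):                  # x: (batch, n_features, frames)
        h = self.net(x).squeeze(-1)        # (batch, 32)
        return self.classifier(h)          # phoneme logits

if __name__ == "__main__":
    model = TinyTDNN()
    frames = torch.randn(4, 16, 20)        # 4 tokens, 16-dim features, 20 frames each
    print(model(frames).shape)             # torch.Size([4, 30])
```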