• Title/Summary/Keyword: 음소 인식 (phoneme recognition)


The Error Pattern Analysis of the HMM-Based Automatic Phoneme Segmentation (HMM기반 자동음소분할기의 음소분할 오류 유형 분석)

  • Kim Min-Je;Lee Jung-Chul;Kim Jong-Jin
    • The Journal of the Acoustical Society of Korea / v.25 no.5 / pp.213-221 / 2006
  • Phone segmentation of speech waveforms is especially important for concatenative text-to-speech synthesis, which uses segmented corpora for the construction of synthesis units, because the quality of synthesized speech depends critically on the accuracy of the segmentation. In the beginning, phone segmentation was performed manually, but this required huge effort and long time delays. HMM-based approaches adopted from automatic speech recognition are most widely used for automatic segmentation in speech synthesis, providing a consistent and accurate phone labeling scheme. Even though the HMM-based approach has been successful, it may locate a phone boundary at a different position than expected. In this paper, we categorize adjacent phoneme pairs and analyze the mismatches between hand-labeled transcriptions and HMM-based labels. We then describe the dominant error patterns that must be improved for speech synthesis. For the experiment, the hand-labeled standard Korean speech DB from ETRI was used as a reference DB. A time difference larger than 20 ms between a hand-labeled phoneme boundary and an auto-aligned boundary is treated as an automatic segmentation error. Our experimental results from a female speaker revealed that plosive-vowel, affricate-vowel and vowel-liquid pairs showed high accuracies of 99%, 99.5% and 99% respectively, but stop-nasal, stop-liquid and nasal-liquid pairs showed very low accuracies of 45%, 50% and 55%. Results from a male speaker revealed a similar tendency.
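The 20 ms error criterion above can be sketched directly. The phone-pair labels and boundary times below are illustrative, not taken from the ETRI DB:

```python
# Sketch of the 20 ms boundary-error criterion: a boundary counts as correct
# when the auto-aligned time is within 20 ms of the hand-labeled time,
# and accuracy is tallied per adjacent phone-pair category.

TOLERANCE_S = 0.020  # 20 ms

def pair_accuracy(hand_boundaries, auto_boundaries, pairs):
    """hand_boundaries / auto_boundaries: boundary times in seconds.
    pairs: adjacent phone-pair label for each boundary, e.g. 'plosive-vowel'."""
    hits, totals = {}, {}
    for hand_t, auto_t, pair in zip(hand_boundaries, auto_boundaries, pairs):
        totals[pair] = totals.get(pair, 0) + 1
        if abs(hand_t - auto_t) <= TOLERANCE_S:
            hits[pair] = hits.get(pair, 0) + 1
    return {p: hits.get(p, 0) / totals[p] for p in totals}

acc = pair_accuracy(
    hand_boundaries=[0.120, 0.310, 0.500],
    auto_boundaries=[0.125, 0.340, 0.505],   # middle boundary is off by 30 ms
    pairs=["plosive-vowel", "stop-nasal", "plosive-vowel"],
)
# acc["plosive-vowel"] == 1.0, acc["stop-nasal"] == 0.0
```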

Development of Embedded Fast/Light Phoneme Recognizer for Distributed Speech Recognition (분산음성인식을 위한 내장형 고속/경량 음소인식기 개발)

  • Kim, Seung-Hi;Hwang, Kyu-Woong;Jeon, Hyun-Bae;Jeong, Hoon;Park, Jun
    • Proceedings of the Korea Information Processing Society Conference / 2007.05a / pp.395-396 / 2007
  • The Speech/Language Information Research Center at ETRI is developing a phoneme recognizer for distributed speech recognition that uses little memory and runs fast. Fixed information such as the acoustic model, language model, and search network is built into binary form before the recognizer runs and stored as ROM, which greatly reduces the amount of RAM actually required. In the tied-state triphone model, using only unique HMMs greatly reduced both recognition time and memory usage. The monophone recognizer used 179 KB of RAM, while the triphone recognizer showed 435 KB of RAM usage and an RTF (Real Time Factor) of 0.02.
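The reported RTF of 0.02 follows the usual definition of real time factor; the timing figures in the example below are illustrative, not the paper's measurements:

```python
def real_time_factor(processing_time_s, audio_duration_s):
    """RTF = decoding time / audio duration; RTF < 1 means faster than real time."""
    return processing_time_s / audio_duration_s

# e.g. decoding 50 s of audio in 1 s gives the RTF 0.02 reported above
rtf = real_time_factor(1.0, 50.0)
```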

An Experiment of a Spoken Digits-Recognition System (숫자음성 자동 인식에 관한 일실험)

  • ;安居院猛
    • Journal of the Korean Institute of Telematics and Electronics / v.15 no.6 / pp.23-28 / 1978
  • This paper describes a speech recognition system for ten isolated spoken digits. In this system, acoustic parameters such as zero-crossing rate, log energy, and three formant frequencies estimated by the linear prediction method were extracted for classification and recognition purposes. The former two parameters were used for the classification of unvoiced consonants, and the latter for the recognition of vowels and voiced consonants. Promising recognition results were obtained in this experiment for ten digit utterances spoken by a male speaker.
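Two of the acoustic parameters above are simple enough to sketch on a toy sample frame; the frame values below are illustrative:

```python
# Minimal sketch of zero-crossing rate and log energy over one frame of samples.
import math

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    return crossings / (len(frame) - 1)

def log_energy(frame):
    """Log of the frame's total energy (sum of squared samples)."""
    return math.log(sum(x * x for x in frame))

frame = [0.1, -0.2, 0.3, -0.4]   # toy frame with alternating signs
# zero_crossing_rate(frame) == 1.0
```

A high zero-crossing rate combined with low energy is the classic cue for unvoiced consonants, which matches how the paper uses these two parameters.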


Automatic segmentation for continuous spoken Korean language recognition based on phonemic TDNN (음소단위 TDNN에 기반한 한국어 연속 음성 인식을 위한 데이타 자동분할)

  • Baac, Coo-Phong;Lee, Geun-Bae;Lee, Jong-Hyeok
    • Annual Conference on Human and Language Technology / 1995.10a / pp.30-34 / 1995
  • In continuous speech recognition with neural networks, training has predominantly been carried out on manually segmented speech data. However, preparing segmented speech data requires considerable time, effort, and expertise, and it is itself one of the factors that makes it difficult to change or extend the recognition domain. Therefore, neural-network training methods have appeared that avoid the use of segmented speech data as much as possible while not degrading performance. In this paper, we introduce an iterative process in which a trained recognizer automatically segments Korean speech data, and the segmented data is then used to retrain the recognizer. A TDNN is used as the recognizer, and the recognition unit is the phoneme. Training is controlled using a cross-validation technique.
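The train-segment-retrain loop described above can be sketched as a generic skeleton; `train` and `segment` here are hypothetical stand-ins for the TDNN training and alignment steps, not the authors' code:

```python
# Skeleton of the iterate-and-relabel process: train a recognizer on the
# current segmentation, use it to re-segment the data, and repeat until
# the segmentation stops changing (or an iteration budget runs out).

def iterative_segmentation(speech, initial_labels, train, segment, max_iters=10):
    labels = initial_labels
    for _ in range(max_iters):
        model = train(speech, labels)         # e.g. TDNN training
        new_labels = segment(model, speech)   # automatic re-segmentation
        if new_labels == labels:              # converged
            break
        labels = new_labels
    return labels
```

In the paper the stopping control is cross-validation rather than exact label convergence, but the overall loop has this shape.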


Comparison of Feature Extraction Methods for the Telephone Speech Recognition (전화 음성 인식을 위한 특징 추출 방법 비교)

  • 전원석;신원호;김원구;이충용;윤대희
    • The Journal of the Acoustical Society of Korea / v.17 no.7 / pp.42-49 / 1998
  • In this paper, we study feature-vector extraction methods for improving speech recognition performance in a telephone-network environment. First, in an isolated-word recognition system, channel-distortion compensation methods were evaluated on word models and context-independent phoneme models. Cepstral mean subtraction, RASTA processing, and the cepstrum-time matrix were tested, and the performance of each algorithm was compared across recognition models. Second, to improve the performance of the recognition system based on context-independent phoneme models, linear transformations such as principal component analysis (PCA) and linear discriminant analysis (LDA) were applied to the static feature vectors to map them into a more discriminative vector space, improving recognition performance. Combining the linear transformations with cepstral mean subtraction yielded even better performance.
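Cepstral mean subtraction, the first compensation method listed above, is simple enough to sketch in pure Python; the toy vectors below stand in for real telephone-speech cepstra:

```python
# Cepstral mean subtraction (CMS): subtract the per-utterance mean vector from
# every cepstral frame, cancelling a stationary (channel) component that adds
# a constant offset in the cepstral domain.

def cepstral_mean_subtraction(frames):
    """frames: list of equal-length cepstral vectors for one utterance."""
    dim = len(frames[0])
    mean = [sum(f[i] for f in frames) / len(frames) for i in range(dim)]
    return [[f[i] - mean[i] for i in range(dim)] for f in frames]

normalized = cepstral_mean_subtraction([[1.0, 2.0], [3.0, 4.0]])
# normalized == [[-1.0, -1.0], [1.0, 1.0]]
```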


The Study of Korean Speech Recognition for Various Continue HMM (연속 HMM에 따른 우리말 음성인식 조사)

  • Lim Changwug;Shin Chwacheul;Kim Sukdong
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.49-52 / 2004
  • This paper studies Korean continuous speech recognition with continuous-density HMMs. We present the most efficient method for continuous speech recognition among continuous HMM models with 2 to 44 density functions. For the acoustic models, we used a CI model based on 36 basic phonemes and a CD model based on 3,000 extended phonemes; the language model was handled with an N-gram. Using this method, in speaker-independent tests on 500 sentences and 6,486 words, the CI model achieved a best word recognition rate of 94.4% and a sentence recognition rate of 64.6%, while the CD model stably achieved a word recognition rate of 98.2% and a sentence recognition rate of 73.6%.


The Study on the Speaker Adaptation Using Speaker Characteristics of Phoneme (음소에 따른 화자특성을 이용한 화자적응방법에 관한 연구)

  • 채나영;황영수
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2003.06a / pp.6-9 / 2003
  • In this paper, we study the differences in speaker adaptation according to phoneme class for Korean speech recognition. To study speaker adaptation with weights that differ by phoneme as the recognition unit, we used an SCHMM as the recognition system. The speaker adaptation methods used in this paper were MAPE (Maximum A Posteriori Probability Estimation) and linear spectral estimation. To evaluate the performance of these methods, we used 10 Korean isolated digits as the experimental data. Both methods can be carried out with unsupervised learning and used in an on-line system. The first method showed a performance improvement over the second, and hybrid adaptation showed better recognition results than either method performed alone. Moreover, speaker adaptation using weights that vary by phoneme gave better results than using a fixed weight.
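One way to read "weights that vary by phoneme" is a per-phoneme-class interpolation between speaker-independent and speaker-specific statistics. The sketch below uses hypothetical weights and means, not values from the paper:

```python
# Per-phoneme adaptation weights: each phoneme class gets its own weight when
# mixing a speaker-independent (SI) model mean with the speaker's sample mean.

def adapt_mean(si_mean, speaker_mean, weight):
    """Interpolate an SI model mean toward the speaker's sample mean."""
    return [(1 - weight) * si + weight * sp for si, sp in zip(si_mean, speaker_mean)]

PHONEME_WEIGHTS = {"vowel": 0.8, "nasal": 0.5}  # hypothetical per-class weights

adapted = adapt_mean([0.0, 0.0], [1.0, 2.0], PHONEME_WEIGHTS["vowel"])
# adapted == [0.8, 1.6]
```

A fixed-weight scheme would use the same `weight` for every phoneme class; the paper's result is that class-dependent weights recognize better.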


A Study on the Continuous Speech Recognition for the Automatic Creation of International Phonetics (국제 음소의 자동 생성을 활용한 연속음성인식에 관한 연구)

  • Kim, Suk-Dong;Hong, Seong-Soo;Shin, Chwa-Cheul;Woo, In-Sung;Kang, Heung-Soon
    • Journal of Korea Game Society / v.7 no.2 / pp.83-90 / 2007
  • One result of the trend toward globalization is an increased number of projects that focus on natural language processing. Automatic speech recognition (ASR) technologies, for example, hold great promise in facilitating global communication and collaboration. Unfortunately, to date, most research projects focus on single, widely spoken languages, so the cost of adapting a particular ASR tool for use with other languages is often prohibitive. This work takes a more general approach. We propose an International Phoneticizing Engine (IPE) that interprets input files supplied in our Phonetic Language Identity (PLI) format to build a dictionary. IPE is language independent and rule based. It operates by decomposing the dictionary creation process into a set of well-defined steps. These steps reduce rule conflicts, allow for rule creation by people without linguistics training, and optimize run-time efficiency. Dictionaries created by IPE can be used with the speech recognition system. IPE defines an easy-to-use, systematic approach that achieved recognition rates of 92.55% for Korean speech and 89.93% for English.


Experiments on Extraction of Non-Parametric Warping Functions for Speaker Normalization (화자 정규화를 위한 비정형 워핑함수 도출에 관한 실험)

  • Shin, Ok-Keun
    • The Journal of the Acoustical Society of Korea / v.24 no.5 / pp.255-261 / 2005
  • In this paper, experiments are conducted to extract a set of non-parametric warping functions to examine the characteristics of the warping among speakers' utterances. For this purpose, we made use of the MFCC and LP spectra of vowels in choosing a reference spectrum for each vowel as well as representative spectra for each speaker. These spectra are compared by DTW to give the warping function of each speaker. A set of warping functions is then defined by clustering the warping functions of all the speakers. Noting that the male and female warping functions have shapes similar to a piecewise linear function and a power function respectively, a new hybrid set of warping functions is defined. The effectiveness of the extracted warping functions is evaluated by conducting phone-level recognition experiments, and improvements in accuracy are observed for both warping function sets.
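The DTW comparison that yields a speaker's warping function can be sketched with a compact dynamic-programming alignment; the toy 1-D sequences below stand in for the MFCC/LP spectra compared in the paper:

```python
# Compact DTW between a speaker spectrum and a reference spectrum. The
# minimal-cost alignment path it returns is the kind of non-parametric
# frequency warping function the experiments above cluster per speaker.

def dtw_path(a, b):
    """Return the minimal-cost alignment path [(i, j), ...] between a and b."""
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])  # local distance between spectrum bins
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    # backtrack from (n, m) along the cheapest predecessors
    path, i, j = [], n, m
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        i, j = min(((i - 1, j - 1), (i - 1, j), (i, j - 1)),
                   key=lambda p: cost[p[0]][p[1]])
    path.reverse()
    return path

path = dtw_path([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
# identical sequences align on the diagonal: [(0, 0), (1, 1), (2, 2)]
```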

A study on the phoneme recognition using radial basis function network (RBFN을 이용한 음소인식에 관한 연구)

  • 김주성;김수훈;허강인
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.5 / pp.1026-1035 / 1997
  • In this paper, we study phoneme recognition using GPFN and PNN, two kinds of RBFN. The structure of an RBFN is similar to a feedforward network, but it differs in the choice of activation function, reference vectors, and learning algorithm in the hidden layer. In particular, the sigmoid function in the PNN is replaced by an exponential function for each category, and the total computation cost is low because the PNN performs pattern classification without learning. In phoneme recognition experiments with 5 vowels and 12 consonants, the recognition rates of GPFN and PNN, as RBFNs reflecting the statistical characteristics of speech, were higher than those of an MLP when using test data and data quantized by VQ and LVQ.
