• Title/Summary/Keyword: 음소 (phoneme)


Korean Continuous Speech Recognition using Phone Models for Function words (기능어용 음소 모델을 적용한 한국어 연속음성 인식)

  • 명주현;정민화
    • Proceedings of the Korean Information Science Society Conference / 2000.04b / pp.354-356 / 2000
  • In Korean continuous speech recognition that uses pseudo-morphemes as the decoding unit, words such as particles, endings, affixes, and the stems of short predicates cause a large share of the recognition errors. These words have very short durations, are frequently elided, and show severe pronunciation variation depending on the morphemes they combine with. In this paper, we define such words as Korean function words and, through recognition experiments on actual pseudo-morpheme units, specify function word sets 1 and 2. We then propose applying dedicated phone models to the Korean function words independently. To handle the acoustic variation introduced by separating the function-word phones, the number of Gaussian mixtures was increased for more robust training, and a fixed penalty was applied in the language model to curb the rise in insertion errors caused by the higher acoustic-model scores of the function words. Applying the phone models to function word set 1 improved the overall sentence recognition rate by 0.8%, and applying the function-word phone models to set 2 improved it by 1.4%. These results confirm that applying new phones to Korean function words and training them independently is effective for recognition.


Regression Tree based Modeling of Segmental Durations For Text-to-Speech Conversion System (Text-to-Speech 변환 시스템을 위한 회귀 트리 기반의 음소 지속 시간 모델링)

  • Pyo, Kyung-Ran;Kim, Hyung-Soon
    • Annual Conference on Human and Language Technology / 1999.10e / pp.191-195 / 1999
  • Controlling phoneme duration is very important for a natural and intelligible Korean Text-to-Speech system. Since phoneme duration varies with many kinds of contextual information, the current trend is to find the factors that affect it statistically from a large database rather than to rely on hand-written control rules. In this study we likewise use CART (classification and regression tree), one of the tree-based modeling methods, to build regression trees, and based on them we propose a model for predicting phoneme duration and a model for predicting pause duration for natural phrasing. The speech corpus used in the experiments consists of 550 sentences; 428 sentences were used to train the regression trees and the remaining 122 for testing. To evaluate the models, we computed the correlation between actual and predicted values: the correlation coefficient was 0.84 for the regression tree predicting phoneme duration and 0.63 for the tree predicting pause duration at phrase breaks.

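A CART regression tree of the kind described above grows by repeatedly choosing the split that most reduces squared error. The following minimal sketch shows that split-selection step only; the single context feature, its values, and the durations are invented toy data, not the paper's corpus or feature set.

```python
# Minimal sketch of one CART-style regression split for phone-duration
# prediction. Toy setup: each phone has one numeric context feature
# (here, position in phrase) and a duration in ms. Hypothetical values.

def best_split(xs, ys):
    """Choose the threshold on xs that minimizes the summed squared
    error of predicting each side's mean duration (the CART criterion)."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best = (None, float("inf"))
    for t in sorted(set(xs))[:-1]:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        cost = sse(left) + sse(right)
        if cost < best[1]:
            best = (t, cost)
    return best  # (threshold, summed squared error)

# Phrase-final phones tend to be longer (final lengthening).
pos = [1, 2, 3, 4, 5, 6, 7, 8]             # position in phrase
dur = [80, 82, 78, 85, 83, 120, 130, 140]  # duration in ms
t, cost = best_split(pos, dur)
print(t)  # → 5: separates the short phones from the lengthened final ones
```

A full tree would apply this search recursively to each side and over all context features; the leaf means would then serve as the predicted durations.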

Speech Recognition in Noise Environments Using SPLICE with Phonetic Information (음성학적인 정보를 포함한 SPLICE를 이용한 잡음환경에서의 음성인식)

  • Kim Doo Hee;Kim Hyung Soon
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.83-86 / 2002
  • Mismatches in ambient noise and channel characteristics between training and recognition sharply degrade speech recognition performance. Various preprocessing methods in the cepstral domain have been tried to compensate for this mismatch, and recently SPLICE, which obtains compensation vectors using stereo data and a Gaussian Mixture Model (GMM) of noisy speech, has shown good results (1). Whereas conventional SPLICE builds its Gaussian models from acoustic information alone over the whole utterance, in this paper we use the phonetic information of the utterance to partition the acoustic space by phoneme, and train a per-phoneme compensation vector using each phoneme's Gaussian model and only the speech data belonging to that phoneme. The compensation vectors then capture in more detail how noise affects each phoneme. Experiments on the Aurora 2 database show that the proposed method outperforms conventional SPLICE.

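The per-phoneme compensation idea can be pictured with a toy stereo-data sketch. Everything below is an invented simplification: real SPLICE weights GMM-component biases by posterior probabilities, while this sketch learns a single additive bias per phoneme label from (clean, noisy) cepstral pairs.

```python
# Hedged sketch of phoneme-conditioned bias compensation: from stereo
# (clean, noisy) pairs, learn one compensation vector per phoneme and
# subtract it at test time. 2-dim "cepstra" and labels are toy data.

from collections import defaultdict

def train_bias(stereo_pairs):
    """stereo_pairs: list of (phoneme, clean_vec, noisy_vec).
    Returns phoneme -> mean(noisy - clean), the additive distortion."""
    acc = defaultdict(lambda: [0.0, 0.0])
    cnt = defaultdict(int)
    for ph, clean, noisy in stereo_pairs:
        for i in range(2):
            acc[ph][i] += noisy[i] - clean[i]
        cnt[ph] += 1
    return {ph: [v / cnt[ph] for v in acc[ph]] for ph in acc}

def compensate(noisy, bias):
    """Remove the learned per-phoneme distortion from a noisy vector."""
    return [n - b for n, b in zip(noisy, bias)]

pairs = [("a", [1.0, 2.0], [1.5, 2.2]),
         ("a", [0.0, 1.0], [0.5, 1.2]),
         ("s", [2.0, 0.0], [3.0, 0.4])]
bias = train_bias(pairs)
print(compensate([1.5, 2.2], bias["a"]))  # → roughly [1.0, 2.0]
```

Splitting the bias by phoneme, as in the paper, lets the compensation reflect that noise distorts a fricative like "s" differently from a vowel like "a".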

Natural 3D Lip-Synch Animation Based on Korean Phonemic Data (한국어 음소를 이용한 자연스러운 3D 립싱크 애니메이션)

  • Jung, Il-Hong;Kim, Eun-Ji
    • Journal of Digital Contents Society / v.9 no.2 / pp.331-339 / 2008
  • This paper presents the development of a highly efficient and accurate system for producing animation key data for 3D lip-synch animation. The system extracts Korean phonemes from sound and text data automatically and then computes animation key data from the segmented phonemes. This animation key data is used both by the 3D lip-synch animation system developed herein and by commercial 3D facial animation systems. Conventional 3D lip-synch animation systems segment the sound data into phonemes based on the English phonemic system and produce lip-synch animation key data from those phonemes. One drawback of this approach is that it produces unnatural animation for Korean content; another is that it requires supplementary manual work. In this paper, we propose a 3D lip-synch animation system that segments sound and text data into phonemes automatically based on the Korean phonemic system and produces natural lip-synch animation from the segmented phonemes.


Speech Recognition on Korean Monosyllable using Phoneme Discriminant Filters (음소판별필터를 이용한 한국어 단음절 음성인식)

  • Hur, Sung-Phil;Chung, Hyun-Yeol;Kim, Kyung-Tae
    • The Journal of the Acoustical Society of Korea / v.14 no.1 / pp.31-39 / 1995
  • In this paper, we construct phoneme discriminant filters (PDFs) based on the linear discriminant function. These discriminant filters do not follow heuristic rules drawn up by experts but are obtained by mathematical methods through iterative learning. The proposed system is based on a piecewise linear classifier and an error-correction learning method. Speech segmentation and phoneme classification are carried out simultaneously by the PDFs. Because each filter operates independently, some speech intervals may produce multiple outputs, so we introduce unified coefficients through an output-unification process. Sometimes, however, the output contains a region with no response, or one that is insensitive, so we propose time windows and median filters to remove such problems. We trained the system on 549 monosyllables uttered three times by three male speakers. After detecting the endpoints of the speech signal using a threshold value and the zero-crossing rate, the vowels and consonants are separated by the PDFs, and the selected phoneme then passes through the following PDF. Finally, the system unifies the outputs for competitive or insensitive regions using the time window and median filter.

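The error-correction learning described above is, at its core, a perceptron-style update on a linear discriminant: adjust the weights only when a frame is misclassified. The sketch below shows that rule on invented two-dimensional frames, not the paper's 549-syllable data or its actual filter structure.

```python
# Hedged sketch of error-correction learning for one linear phoneme
# discriminant filter. Toy features and labels; real PDFs operate on
# spectral feature vectors.

def train_discriminant(frames, labels, epochs=20, lr=0.1):
    """labels: +1 (target phoneme) / -1 (other). Returns weights
    including a trailing bias term."""
    w = [0.0] * (len(frames[0]) + 1)
    for _ in range(epochs):
        for x, y in zip(frames, labels):
            xb = x + [1.0]  # append constant bias input
            out = 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else -1
            if out != y:  # error-correction update, only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, xb)]
    return w

frames = [[2.0, 0.1], [1.8, 0.2], [0.2, 1.9], [0.1, 2.2]]
labels = [1, 1, -1, -1]  # first two frames belong to the target phoneme
w = train_discriminant(frames, labels)
pred = 1 if w[0] * 1.9 + w[1] * 0.15 + w[2] > 0 else -1
print(pred)  # → 1: a held-out target-like frame is accepted
```

A piecewise linear classifier, as in the paper, would combine several such filters, one per phoneme or phoneme region, and unify their outputs.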

Utilization of Syllabic Nuclei Location in Korean Speech Segmentation into Phonemic Units (음절핵의 위치정보를 이용한 우리말의 음소경계 추출)

  • 신옥근
    • The Journal of the Acoustical Society of Korea / v.19 no.5 / pp.13-19 / 2000
  • The blind segmentation method, which segments input speech data into recognition units without any prior knowledge, plays an important role in continuous speech recognition systems and corpus generation. As no prior knowledge is required, this method is rather simple to implement, but in general it performs worse than knowledge-based segmentation. In this paper, we introduce a method to improve blind segmentation of Korean continuous speech by postprocessing the segment boundaries it produces. First, candidate boundaries are extracted by a clustering technique based on the GLR (generalized likelihood ratio) distance measure. In the postprocessing stage, the final phoneme boundaries are selected from the candidates using simple a priori knowledge of Korean syllabic structure, namely that the number of phonemes between any two consecutive nuclei is bounded. The experimental results are promising: the proposed method yields a 25% reduction in insertion error rate compared with blind segmentation alone.

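The GLR distance that drives the candidate-boundary extraction compares fitting one Gaussian to a whole span against fitting separate Gaussians to its two halves. The sketch below does this for a 1-D feature track; real systems use multidimensional cepstral features, so the track and the resulting boundary are purely illustrative.

```python
# Hedged sketch of GLR-based change detection for phone boundaries.
import math

def glr(seg1, seg2):
    """Generalized likelihood ratio distance between two 1-D segments,
    each modeled as a single Gaussian: large when one Gaussian fits
    their union much worse than two separate Gaussians do."""
    def loglik(seg):
        n = len(seg)
        m = sum(seg) / n
        var = sum((x - m) ** 2 for x in seg) / n + 1e-9  # avoid log(0)
        return -0.5 * n * (math.log(2 * math.pi * var) + 1)
    return loglik(seg1) + loglik(seg2) - loglik(seg1 + seg2)

# Toy 1-D "feature" track: a flat region followed by a jump, as at a
# phone boundary. The GLR score peaks at the true change point.
track = [0.1, 0.0, 0.2, 0.1, 2.0, 2.1, 1.9, 2.0]
scores = [glr(track[:i], track[i:]) for i in range(2, len(track) - 1)]
best = 2 + max(range(len(scores)), key=scores.__getitem__)
print(best)  # → 4, the index where the jump begins
```

Clustering peaks of such a score yields the candidate boundaries; the syllabic-nucleus constraint then prunes spurious ones in postprocessing.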

Phonetic Transcription based Speech Recognition using Stochastic Matching Method (확률적 매칭 방법을 사용한 음소열 기반 음성 인식)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.5 / pp.696-700 / 2007
  • A new method that improves the performance of phonetic-transcription-based speech recognition with a speaker-independent (SI) phonetic recognizer is presented. Since an SI phoneme HMM based speech recognition system stores only the phonetic transcription of the input sentence, the storage space can be reduced greatly. However, its performance is worse than that of a speaker-dependent system because of the phoneme recognition errors introduced by using SI models. A new training method that iteratively estimates the phonetic transcription and the transformation vectors is presented to reduce the mismatch between the training utterances and the set of SI models using speaker adaptation techniques. For speaker adaptation, stochastic matching methods are used to estimate the transformation vectors. Experiments performed over actual telephone lines show that a reduction of about 45% in error rate can be achieved compared with the conventional method.

Speech Feature Extraction based on Spikegram for Phoneme Recognition (음소 인식을 위한 스파이크그램 기반의 음성 특성 추출 기술)

  • Han, Seokhyeon;Kim, Jaewon;An, Soonho;Shin, Seonghyeon;Park, Hochong
    • Journal of Broadcast Engineering / v.24 no.5 / pp.735-742 / 2019
  • In this paper, we propose a method of extracting speech features for phoneme recognition based on the spikegram. Fourier-transform-based features are widely used in phoneme recognition, but they are not extracted in a biologically plausible way and cannot achieve high temporal resolution because of their frame-based operation. For better phoneme recognition, it is therefore desirable to have a new method of extracting speech features that analyzes the speech signal at high temporal resolution, following the model of the human auditory system. We analyze the speech signal with a spikegram, which models feature extraction and transmission in the auditory system, and then propose a method of extracting features from the spikegram for phoneme recognition. We evaluate the proposed features with a DNN-based phoneme recognizer and confirm that they outperform Fourier-transform-based features on short phonemes. This result supports the feasibility of new speech features extracted from an auditory model for phoneme recognition.

Real-time Phoneme Recognition System Using Max Flow Matching (최대 흐름 정합을 이용한 실시간 음소인식 시스템 구현)

  • Lee, Sang-Yeob;Park, Seong-Won
    • Journal of Korea Game Society / v.12 no.1 / pp.123-132 / 2012
  • Many games now run on smart devices, and voice recognition can be a useful input method for them. In a game, speech must be recognized quickly and acted on just as promptly. In this study, we developed an optimized real-time phoneme recognition system using max flow matching that can be used efficiently in the game field. First, the voice waveform is transformed by the FFT; second, the transformed values are plotted as a graph in the Z plane; third, data is extracted from a specific area and stored in a database. Finally, the values are recognized using weighted bipartite max flow matching. This method should be useful in the game and robotics fields whenever fast voice recognition is needed.
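The matching step in the abstract can be pictured as a bipartite graph: extracted feature points on one side, stored phoneme templates on the other, with edges between pairs whose distance is acceptable. The sketch below uses Kuhn's augmenting-path algorithm for maximum cardinality matching as a stand-in for the paper's weighted max-flow formulation; the adjacency data is invented.

```python
# Hedged sketch of bipartite matching between observed feature points
# and stored phoneme templates (unweighted stand-in for the paper's
# weighted max-flow matching). Toy adjacency lists.

def max_matching(adj, n_right):
    """adj[u] = list of right-side vertices u may match.
    Returns the size of a maximum matching."""
    match_r = [-1] * n_right  # which left vertex each right vertex got

    def try_kuhn(u, seen):
        # Try to match u, evicting earlier matches along an augmenting path.
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_r[v] == -1 or try_kuhn(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(try_kuhn(u, set()) for u in range(len(adj)))

# 3 observed feature points vs. 3 phoneme templates.
adj = [[0, 1],   # point 0 is close to templates 0 and 1
       [0],      # point 1 only matches template 0
       [2]]      # point 2 matches template 2
print(max_matching(adj, 3))  # → 3: all points matched
```

Note how point 1's only option forces point 0 onto template 1 via an augmenting path; a weighted variant would additionally prefer the lowest-distance assignment among all maximum matchings.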

Speech Recognition Error Compensation using MFCC and LPC Feature Extraction Method (MFCC와 LPC 특징 추출 방법을 이용한 음성 인식 오류 보정)

  • Oh, Sang-Yeob
    • Journal of Digital Convergence / v.11 no.6 / pp.137-142 / 2013
  • When inaccurate vocabulary is fed to a speech recognition system, feature extraction during recognition can produce unrecognized output or recognize a similar phoneme instead. In this paper, we therefore propose a speech recognition error correction method that uses phoneme similarity rates and reliability measures based on the characteristics of the phonemes. The phoneme similarity rates were obtained from a trained model using the MFCC and LPC feature extraction methods and measured together with reliability rates. Measuring the similarity and reliability of confusable phonemes minimizes errors that would otherwise go unrecognized, and error compensation is performed on speech found to be misrecognized during recognition. Applying the proposed system yielded a recognition rate of 98.3% and an error compensation rate of 95.5%.
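One way to picture a similarity-and-reliability check of this kind is a nearest-prototype comparison with a confidence threshold: accept the closest phoneme only when its similarity clears the threshold, otherwise flag the segment for compensation. The prototypes, dimensions, and threshold below are illustrative, not the MFCC/LPC models or values from the paper.

```python
# Hedged sketch of phoneme correction via similarity and a reliability
# threshold. 3-dim "MFCC" prototypes per phoneme are hypothetical.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def correct(observed, models, threshold=0.9):
    """Pick the model phoneme most similar to the observed features;
    accept it only if the similarity clears the reliability threshold,
    else return None to signal that compensation is needed."""
    best = max(models, key=lambda p: cosine(observed, models[p]))
    score = cosine(observed, models[best])
    return (best, score) if score >= threshold else (None, score)

models = {"k": [1.0, 0.2, 0.0],
          "g": [0.5, 0.8, 0.2],
          "s": [0.1, 1.0, 0.8]}
ph, score = correct([0.95, 0.25, 0.05], models)
print(ph)  # → 'k', accepted above the 0.9 threshold
```

A fuller version in the paper's spirit would combine similarity scores from both MFCC- and LPC-derived models before applying the reliability check.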