• Title/Summary/Keyword: Phoneme

The Study on Korean Phoneme for Korean Speech Recognition

  • Hwang, Young-Soo
    • Proceedings of the IEEK Conference / 2000.07b / pp.629-632 / 2000
  • In this paper, we studied phoneme classification for Korean speech recognition. When building a large-vocabulary speech recognition system, it is better to use the phoneme than the syllable or word as the recognition unit. To study how recognition performance changes with the number of phonemes used as recognition units, we used the OGI speech toolkit (U.S.A.) as the recognition system. The results showed that treating each diphthong as a single unified unit performed better than separating diphthongs into component phonemes, and that biphone units gave better results than monophone units.
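
Illustrative of the monophone-versus-biphone comparison above, the following Python sketch (not from the paper; the phoneme labels and boundary symbol are assumptions) expands a monophone sequence into left-context biphone units:

```python
def to_biphones(phonemes, boundary="sil"):
    """Expand a monophone sequence into left-context biphone units.

    Each unit is written as "left-current"; a boundary symbol is used
    for the first phoneme, which has no left context.
    """
    units = []
    prev = boundary
    for ph in phonemes:
        units.append(f"{prev}-{ph}")
        prev = ph
    return units

# Hypothetical romanized Korean phoneme sequence, for illustration only.
print(to_biphones(["k", "a", "m", "s", "a"]))
# ['sil-k', 'k-a', 'a-m', 'm-s', 's-a']
```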

CRNN-Based Korean Phoneme Recognition Model with CTC Algorithm (CTC를 적용한 CRNN 기반 한국어 음소인식 모델 연구)

  • Hong, Yoonseok;Ki, Kyungseo;Gweon, Gahgene
    • KIPS Transactions on Software and Data Engineering / v.8 no.3 / pp.115-122 / 2019
  • For Korean phoneme recognition, hidden Markov model-Gaussian mixture model (HMM-GMM) systems or hybrid models that combine artificial neural networks with HMMs have mainly been used. However, such models have the limitation that they require force-aligned corpus training data that is manually annotated by experts. Recently, researchers have used neural-network-based phoneme recognition models that combine a recurrent neural network (RNN) structure with the connectionist temporal classification (CTC) algorithm to overcome the problem of obtaining manually annotated training data. Yet, in terms of implementation, these RNN-based models have another difficulty: the amount of data required grows as the structure becomes more sophisticated. This is particularly problematic for Korean, which lacks refined corpora. In this study, we apply the CTC algorithm, which does not require force-alignment, to create a Korean phoneme recognition model. Specifically, the phoneme recognition model is based on a convolutional neural network (CNN), which requires a relatively small amount of data and can be trained faster than RNN-based models. We present the results of two experiments and a best-performing phoneme recognition model that distinguishes 49 Korean phonemes. The best-performing model combines a CNN with a 3-hop bidirectional LSTM and achieves a final phoneme error rate (PER) of 3.26. This is a considerable improvement over existing Korean phoneme recognition models, which report PERs ranging from 10 to 12.
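
As a rough sketch of the kind of CNN-plus-bidirectional-LSTM model trained with CTC that the abstract describes (not the authors' implementation; layer sizes, the feature dimension, and the blank-index convention are assumptions, and only the 49-phoneme inventory comes from the abstract), a minimal PyTorch version might look like this:

```python
import torch
import torch.nn as nn

NUM_PHONEMES = 49          # per the abstract; index 0 reserved for the CTC blank
FEAT_DIM = 40              # assumed mel-filterbank dimension

class CRNNCTC(nn.Module):
    """Minimal CNN + bidirectional LSTM acoustic model trained with CTC."""
    def __init__(self, feat_dim=FEAT_DIM, hidden=256, num_classes=NUM_PHONEMES + 1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(feat_dim, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.rnn = nn.LSTM(128, hidden, num_layers=3,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):          # x: (batch, time, feat_dim)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.rnn(h)
        return self.out(h).log_softmax(dim=-1)   # (batch, time, classes)

model = CRNNCTC()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

feats = torch.randn(4, 200, FEAT_DIM)             # dummy batch of 4 utterances
log_probs = model(feats).transpose(0, 1)           # CTCLoss expects (time, batch, classes)
targets = torch.randint(1, NUM_PHONEMES + 1, (4, 30))
input_lens = torch.full((4,), 200, dtype=torch.long)
target_lens = torch.full((4,), 30, dtype=torch.long)
loss = ctc(log_probs, targets, input_lens, target_lens)
loss.backward()
```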

The Effect of the Number of Phoneme Clusters on Speech Recognition (음성 인식에서 음소 클러스터 수의 효과)

  • Lee, Chang-Young
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.9 no.11 / pp.1221-1226 / 2014
  • In an effort to improve the efficiency of speech recognition, we investigate the effect of the number of phoneme clusters. For this purpose, codebooks with varying numbers of phoneme clusters are prepared by a modified k-means clustering algorithm. The subsequent processing applies fuzzy vector quantization (FVQ) and a hidden Markov model (HMM) for the speech recognition test. The results show two distinct regimes. For a large number of phoneme clusters, recognition performance is roughly independent of the cluster count. For a small number of phoneme clusters, however, the recognition error rate increases nonlinearly as the count decreases. Numerical calculation shows that this nonlinear regime can be modeled by a power-law function. The results also show that about 166 phoneme clusters would be optimal for recognizing 300 isolated words, which amounts to roughly 3 variations per phoneme.
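
A minimal sketch of the codebook-size sweep idea, assuming frame-level MFCC-like features and scikit-learn's KMeans in place of the paper's modified k-means; the FVQ and HMM recognition stages are omitted:

```python
import numpy as np
from sklearn.cluster import KMeans

# Dummy frame-level feature vectors standing in for MFCC frames
# extracted from training speech (assumed 13-dimensional here).
rng = np.random.default_rng(0)
frames = rng.normal(size=(5000, 13))

# Prepare codebooks with varying numbers of phoneme clusters and record
# the quantization distortion (sum of squared distances to centroids).
for n_clusters in (32, 64, 128, 166, 256):
    km = KMeans(n_clusters=n_clusters, n_init=5, random_state=0).fit(frames)
    print(n_clusters, km.inertia_)
```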

Speech Feature Extraction based on Spikegram for Phoneme Recognition (음소 인식을 위한 스파이크그램 기반의 음성 특성 추출 기술)

  • Han, Seokhyeon;Kim, Jaewon;An, Soonho;Shin, Seonghyeon;Park, Hochong
    • Journal of Broadcast Engineering / v.24 no.5 / pp.735-742 / 2019
  • In this paper, we propose a method of extracting speech features for phoneme recognition based on the spikegram. Fourier-transform-based features are widely used in phoneme recognition, but they are not extracted in a biologically plausible way and cannot provide high temporal resolution because of their frame-based operation. For better phoneme recognition, it is therefore desirable to have a new method of extracting speech features that analyzes the speech signal at high temporal resolution, following a model of the human auditory system. In this paper, we analyze the speech signal using a spikegram, which models feature extraction and transmission in the auditory system, and then propose a method of extracting features from the spikegram for phoneme recognition. We evaluate the proposed features using a DNN-based phoneme recognizer and confirm that they outperform Fourier-transform-based features for short phonemes. This result verifies the feasibility of new speech features extracted from an auditory model for phoneme recognition.
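
Spikegrams are commonly obtained by greedy matching pursuit with auditory (e.g., gammatone) kernels; the toy numpy sketch below illustrates that decomposition with a random unit-norm kernel dictionary and is not the authors' feature extractor:

```python
import numpy as np

def matching_pursuit(signal, kernels, n_spikes=100):
    """Greedy matching pursuit: decompose a signal into (time, kernel, amplitude)
    'spikes', the representation underlying a spikegram."""
    residual = signal.copy()
    spikes = []
    for _ in range(n_spikes):
        best = None
        for k, kernel in enumerate(kernels):
            # Cross-correlate the kernel with the residual at every shift.
            corr = np.correlate(residual, kernel, mode="valid")
            t = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[t]) > abs(best[2]):
                best = (k, t, corr[t])
        k, t, amp = best
        # Kernels are unit-norm, so the correlation is the projection coefficient.
        residual[t:t + len(kernels[k])] -= amp * kernels[k]
        spikes.append((t, k, amp))
    return spikes

rng = np.random.default_rng(0)
kernels = [k / np.linalg.norm(k) for k in rng.normal(size=(8, 64))]  # toy dictionary
signal = rng.normal(size=2048)                                       # stand-in for speech
print(matching_pursuit(signal, kernels, n_spikes=5)[:3])
```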

A Study on Korean Allophone Recognition Using Hierarchical Time-Delay Neural Network (계층구조 시간지연 신경망을 이용한 한국어 변이음 인식에 관한 연구)

  • 김수일;임해창
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.1 / pp.171-179 / 1995
  • In many continuous speech recognition systems, the phoneme is used as the basic recognition unit. However, the coarticulation that occurs between neighboring phonemes makes it difficult to recognize phonemes consistently. This paper proposes the allophone as an alternative recognition unit. We classify each phoneme into three allophone groups according to the location of the phoneme within a syllable. As the recognition algorithm, a time-delay neural network (TDNN) is designed. To recognize all Korean allophones, TDNNs are constructed in a modular fashion according to acoustic-phonetic features (e.g., voiced/unvoiced, the location of the phoneme within a word). Each TDNN is trained independently, and the modules are then integrated hierarchically into a whole speech recognition system. In this study, we experimented on Korean plosives with both a phoneme-based and an allophone-based recognition system. Experimental results show that allophone-based recognition is much less affected by coarticulation.
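
A TDNN realizes time-delayed connections as 1-D convolutions over the frame axis. The following minimal PyTorch sketch (layer sizes, feature dimension, and the 9-class output are assumptions, not the paper's configuration) shows one such module of the kind that could be trained per acoustic-phonetic group and combined hierarchically:

```python
import torch
import torch.nn as nn

class TDNN(nn.Module):
    """Minimal time-delay neural network: 1-D convolutions over time
    implement the delayed (context-window) connections."""
    def __init__(self, feat_dim=16, num_classes=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(feat_dim, 32, kernel_size=3),   # +/-1 frame of context
            nn.Tanh(),
            nn.Conv1d(32, 32, kernel_size=5),         # wider temporal context
            nn.Tanh(),
        )
        self.out = nn.Linear(32, num_classes)

    def forward(self, x):                # x: (batch, time, feat_dim)
        h = self.net(x.transpose(1, 2))  # (batch, channels, time')
        h = h.mean(dim=2)                # integrate evidence over time
        return self.out(h)

# One hypothetical module, e.g. for a plosive-allophone group; in the paper's
# design several such modules are trained separately and combined hierarchically.
tdnn = TDNN()
scores = tdnn(torch.randn(2, 15, 16))    # 2 tokens, 15 frames, 16-dim features
print(scores.shape)                      # torch.Size([2, 9])
```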

Phoneme Separation and Learning Using a Neural Network in an On-Line Character Recognition System (신경회로망을 이용한 온라인 문자 인식 시스템의 자소 분리에 관한 연구)

  • Hong, Bong-Hwa
    • The Journal of Information Technology / v.9 no.1 / pp.55-63 / 2006
  • In this paper, a Hangul recognition system that uses a Kohonen network for phoneme separation and learning is proposed. A Hangul character consists of phonemes, which in turn consist of strokes. Phoneme separation and recognition are therefore very important in character recognition. Phonemes for which mismatching has occurred are correctly separated through neural network learning. In addition, the learning rate ($\alpha$) is adjusted adaptively according to the error, which reduces the number of iterations and mitigates the local-minimum problem.
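
A minimal sketch of a Kohonen (self-organizing map) codebook with an error-dependent learning-rate adjustment, assuming generic stroke-feature vectors; the adaptation rule below is illustrative and not the paper's exact schedule:

```python
import numpy as np

def train_som(data, n_units=16, epochs=20, alpha0=0.5):
    """Minimal 1-D Kohonen self-organizing map whose learning rate is
    adjusted each epoch as a function of the quantization error."""
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(n_units, data.shape[1]))
    alpha = alpha0
    for epoch in range(epochs):
        total_err = 0.0
        for x in data:
            dists = np.linalg.norm(weights - x, axis=1)
            winner = int(np.argmin(dists))
            total_err += dists[winner]
            # Move the winner and its immediate neighbors toward the input.
            for j in range(max(0, winner - 1), min(n_units, winner + 2)):
                weights[j] += alpha * (x - weights[j])
        avg_err = total_err / len(data)
        # Hypothetical adaptive schedule: decay alpha, faster as the error shrinks.
        alpha = max(0.01, min(alpha0, alpha0 * avg_err / (avg_err + epoch + 1.0)))
    return weights

features = np.random.default_rng(1).normal(size=(200, 8))  # stand-in stroke features
codebook = train_som(features)
print(codebook.shape)  # (16, 8)
```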

The Effect of Focus Representation and Intonational Manipulation in Phoneme Detecting (초점 실현과 운율 조작에 대한 음소지각)

  • Kim, Hee-Seung;Shin, Ji-Young;Kim, Kee-Ho
    • MALSORI / no.60 / pp.97-108 / 2006
  • The purpose of this study is to observe how Korean listeners detect a target phoneme under 'focus', realized as prosodic prominence or question-induced semantic emphasis, and under intonational manipulation. In an automated phoneme detection task using E-Prime, the Korean listeners detected target phonemes more rapidly when the target-bearing words were in a prominence position or in a question-induced position. However, the presence of question-induced semantic emphasis reduced the prominence effect, so the two effects interacted: when question-induced emphasis was given as the primary cue, prominence given as a secondary cue had less effect on finding the new information. In addition, listeners responded faster to manipulated intonation than to unmanipulated intonation.

Sentiment Analysis Using Deep Learning Model based on Phoneme-level Korean (한글 음소 단위 딥러닝 모형을 이용한 감성분석)

  • Lee, Jae Jun;Kwon, Suhn Beom;Ahn, Sung Mahn
    • Journal of Information Technology Services / v.17 no.1 / pp.79-89 / 2018
  • Sentiment analysis is a text mining technique that extracts the feelings of the person who wrote a sentence, such as a movie review. Early sentiment analysis research identified sentiments using dictionaries of negative and positive words collected in advance. As research on deep learning has progressed, sentiment analysis using deep learning models with morpheme or word units has also been carried out. However, such models have the disadvantage that the word dictionary varies by domain and the number of morphemes or words is much larger than the number of phonemes, so the dictionary becomes large and the model complexity grows accordingly. We construct a sentiment analysis model using a recurrent neural network by dividing the input data into phoneme-level units, which are smaller than morpheme-level units. To verify the performance, we use 30,000 movie reviews from Naver, Korea's biggest portal. A morpheme-level sentiment analysis model is also implemented for comparison. As a result, the phoneme-level sentiment analysis model is superior to the morpheme-level model, and in particular, the phoneme-level model using LSTM performs better than the one using GRU. We expect that Korean text processing based on phoneme-level models can be applied to various text mining and language models.
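
Phoneme (jamo)-level input for Korean text models is usually produced by decomposing precomposed Hangul syllables with Unicode arithmetic; the sketch below shows that decomposition (the downstream embedding plus LSTM/GRU classifier is an assumed, separate component and is not reproduced here):

```python
# Jamo inventories for the precomposed Hangul block (U+AC00..U+D7A3).
CHO = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
JUNG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def to_jamo(text):
    """Split precomposed Hangul syllables into phoneme-level jamo symbols."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code <= 11171:
            out += [CHO[code // 588], JUNG[(code % 588) // 28]]
            if code % 28:
                out.append(JONG[code % 28])
        else:
            out.append(ch)          # keep spaces, digits, punctuation as-is
    return out

print(to_jamo("감사"))  # ['ㄱ', 'ㅏ', 'ㅁ', 'ㅅ', 'ㅏ']
```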

Comparison Research of Non-Target Sentence Rejection on Phoneme-Based Recognition Networks (음소기반 인식 네트워크에서의 비인식 대상 문장 거부 기능의 비교 연구)

  • Kim, Hyung-Tai;Ha, Jin-Young
    • MALSORI / no.59 / pp.27-51 / 2006
  • For speech recognition systems, a rejection function as well as a decoding function is necessary to improve reliability. There have been many research efforts on out-of-vocabulary word rejection; however, little attention has been paid to non-target sentence rejection. Recently, pronunciation approaches using speech recognition have increased the need for non-target sentence rejection to provide more accurate and robust results. In this paper, we propose a filler model method and a word/phoneme detection ratio method to implement a non-target sentence rejection system. We evaluated the filler model approach with word-level, phoneme-level, and sentence-level filler models, and performed similar experiments with word-level and phoneme-level detection ratio methods. For the performance evaluation, the minimized average of the false acceptance rate (FAR) and false rejection rate (FRR) is used to compare the effectiveness of each method with respect to the number of words in the given sentences. The experimental results show that the word-level methods outperform the others, and that the word-level filler model gives slightly better results than the word detection ratio method.
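
The evaluation criterion mentioned above, the minimized average of FAR and FRR, can be computed by sweeping a decision threshold over verification scores. A small numpy sketch, with hypothetical scores standing in for likelihood ratios against a filler model:

```python
import numpy as np

def best_threshold(target_scores, nontarget_scores):
    """Find the decision threshold that minimizes the average of the
    false-acceptance rate (FAR) and false-rejection rate (FRR)."""
    candidates = np.unique(np.concatenate([target_scores, nontarget_scores]))
    best = (None, 1.0)
    for th in candidates:
        far = np.mean(nontarget_scores >= th)   # non-target sentences accepted
        frr = np.mean(target_scores < th)       # target sentences rejected
        avg = (far + frr) / 2.0
        if avg < best[1]:
            best = (th, avg)
    return best

# Hypothetical verification scores (higher = more likely a target sentence),
# e.g. log-likelihood ratios against a phoneme-loop filler model.
rng = np.random.default_rng(0)
target = rng.normal(1.0, 1.0, 500)
nontarget = rng.normal(-1.0, 1.0, 500)
print(best_threshold(target, nontarget))
```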

A Study on Korean Phoneme Classification using Recursive Least-Square Algorithm (Recursive Least-Square 알고리즘을 이용한 한국어 음소분류에 관한 연구)

  • Kim, Hoe-Rin;Lee, Hwang-Su;Un, Jong-Gwan
    • The Journal of the Acoustical Society of Korea / v.6 no.3 / pp.60-67 / 1987
  • In this paper, a phoneme classification method for Korean speech recognition is proposed and its performance is studied. Phoneme classification is performed based on phonemic features extracted by the prewindowed recursive least-square (PRLS) algorithm, a kind of adaptive filtering algorithm. By applying the PRLS algorithm to the input speech signal, precise detection of phoneme boundaries is obtained. Reference patterns of Korean phonemes are generated by ordinary vector quantization (VQ) of feature vectors obtained manually from prototype regions of each phoneme. To evaluate the proposed phoneme classification method, it was tested on spoken names of seven Korean cities containing eleven different consonants and eight different vowels. In speaker-dependent phoneme classification, the accuracy is about 85% when simple phonemic rules of the Korean language are considered, while the accuracy in the speaker-independent case is far lower.
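
For reference, the standard (non-prewindowed) recursive least-squares recursion that the PRLS algorithm builds on can be sketched in a few lines of numpy; the filter order, forgetting factor, and the boundary-detection use are assumptions, not the paper's settings:

```python
import numpy as np

def rls(x, d, order=8, lam=0.99, delta=1e2):
    """Standard recursive least-squares adaptive filter (not the paper's exact
    prewindowed variant): predicts d[n] from the last `order` samples of x."""
    w = np.zeros(order)
    P = np.eye(order) * delta
    errors = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]            # most recent samples first
        k = P @ u / (lam + u @ P @ u)       # gain vector
        errors[n] = d[n] - w @ u            # a-priori prediction error
        w += k * errors[n]
        P = (P - np.outer(k, u @ P)) / lam
    return w, errors

# Toy use: linear prediction of a speech-like signal from its own past;
# frame-wise prediction-error energy could then flag phoneme boundaries.
rng = np.random.default_rng(0)
sig = np.sin(0.1 * np.arange(4000)) + 0.05 * rng.normal(size=4000)
w, err = rls(sig, sig)
print(np.round(w, 3))
```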
