• Title/Summary/Keyword: 음소

529 search results

Phoneme Separation and Establishment of Time-Frequency Discriminative Pattern on Korean Syllables (음절신호의 음소 분리와 시간-주파수 판별 패턴의 설정)

  • 류광열
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.16 no.12
    • /
    • pp.1324-1335
    • /
    • 1991
  • In this paper, phoneme separation and the establishment of discriminative patterns for Korean phonemes are studied experimentally. The separation uses parameters such as pitch extraction, the glottal peak pulse width of each pitch, speech duration, envelope, and amplitude bias. The first pitch is extracted from deviations of glottal peak and width, energy, and normalization of a bias on the top of the vowel envelope; adjacent pitches are then traced across the whole vowel. For vowels, a method that reduces gliding patterns and allows vowel distinction using only the second formant is proposed, and a shrunken pitch waveform that is independent of pitch length is estimated. Patterns of envelope, spectrum, and shrunken waveform are detected for consonants, along with a method of analysis based on the mutual relations among phonemes and manners of articulation. Experimental results show discrimination rates of 90% for vowel phonemes, and 80% and 60% for initial and final consonants, respectively.

  • PDF
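The abstract above extracts pitch from glottal peaks and pulse widths; as a rough illustration of pitch extraction in general (not the paper's glottal-peak method), the sketch below estimates fundamental frequency by autocorrelation. All names and parameter values are illustrative assumptions.

```python
import math

def estimate_pitch(samples, sample_rate, fmin=60.0, fmax=400.0):
    """Estimate fundamental frequency by finding the autocorrelation
    peak within the plausible pitch-period range (illustrative only)."""
    lo = int(sample_rate / fmax)          # shortest period in samples
    hi = int(sample_rate / fmin)          # longest period in samples
    best_lag, best_corr = lo, float("-inf")
    for lag in range(lo, min(hi, len(samples) - 1) + 1):
        # Unnormalized autocorrelation at this lag
        corr = sum(samples[i] * samples[i - lag] for i in range(lag, len(samples)))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# A 200 Hz sine sampled at 8 kHz should yield an estimate near 200 Hz.
sr = 8000
signal = [math.sin(2 * math.pi * 200 * t / sr) for t in range(800)]
```

Real glottal-based extraction as in the paper would additionally track peak widths and envelope bias per pitch period; this sketch only shows the periodicity-detection core.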

CRNN-Based Korean Phoneme Recognition Model with CTC Algorithm (CTC를 적용한 CRNN 기반 한국어 음소인식 모델 연구)

  • Hong, Yoonseok;Ki, Kyungseo;Gweon, Gahgene
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.3
    • /
    • pp.115-122
    • /
    • 2019
  • For Korean phoneme recognition, hidden Markov model-Gaussian mixture models (HMM-GMM) or hybrid models that combine artificial neural networks with HMMs have mainly been used. However, these approaches are limited in that they require force-aligned corpus training data manually annotated by experts. Recently, researchers have used neural-network-based phoneme recognition models that combine a recurrent neural network (RNN) structure with the connectionist temporal classification (CTC) algorithm to overcome the problem of obtaining manually annotated training data. In terms of implementation, however, these RNN-based models have another difficulty: the amount of data required grows as the structure becomes more sophisticated. This is particularly problematic for Korean, which lacks refined corpora. In this study, we apply the CTC algorithm, which does not require force alignment, to create a Korean phoneme recognition model. Specifically, the model is based on a convolutional neural network (CNN), which requires relatively little data and can be trained faster than RNN-based models. We present results from two experiments and the resulting best-performing phoneme recognition model, which distinguishes 49 Korean phonemes. The best model combines a CNN with a 3-hop bidirectional LSTM, achieving a final phoneme error rate (PER) of 3.26. This is a considerable improvement over existing Korean phoneme recognition models, which report PERs ranging from 10 to 12.
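CTC removes the need for frame-level alignment by letting the network emit a blank symbol and by collapsing repeated labels at decoding time. A minimal sketch of the standard CTC collapsing rule (using a few hypothetical labels, not the paper's 49-phoneme inventory):

```python
def ctc_collapse(frame_labels, blank="-"):
    """Collapse a per-frame CTC label sequence: merge consecutive
    repeats, then drop blanks (the standard CTC decoding rule)."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev:          # merge runs of identical labels
            if lab != blank:     # drop the blank symbol
                out.append(lab)
        prev = lab
    return out

# Per-frame output ㅎ ㅎ - - ㅏ ㅏ - ㄴ collapses to ㅎ, ㅏ, ㄴ
result = ctc_collapse(["ㅎ", "ㅎ", "-", "-", "ㅏ", "ㅏ", "-", "ㄴ"])
```

Note that the blank also makes genuinely repeated phonemes representable: `["a", "-", "a"]` collapses to two `a`s, while `["a", "a"]` collapses to one.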

An English-to-Korean Transliteration Model based on Grapheme and Phoneme (자소 및 음소 정보를 이용한 영어-한국어 음차표기 모델)

  • Oh Jong-Hoon;Choi Key-Sun
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.4
    • /
    • pp.312-326
    • /
    • 2005
  • There has recently been increasing interest in English-to-Korean transliteration. Previous work covers a direct method, <English graphemes $\rightarrow$ Korean graphemes>, and a pivot method, <English graphemes $\rightarrow$ English phonemes $\rightarrow$ Korean graphemes>. Most previous work focuses on the direct method; transliteration, however, is a phonetic process rather than an orthographic one. From this point of view, we present an English-to-Korean transliteration model that uses both grapheme and phoneme information. Unlike previous work, our method uses phonetic information such as phonemes and their context, and also uses the graphemes corresponding to those phonemes. Our method achieves about 60% word accuracy.

Segmentation of Continuous Speech based on PCA of Feature Vectors (주요고유성분분석을 이용한 연속음성의 세그멘테이션)

  • 신옥근
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.2
    • /
    • pp.40-45
    • /
    • 2000
  • In speech corpus generation and speech recognition, it is sometimes necessary to segment the input speech data without any prior knowledge. One way to accomplish this kind of segmentation, often called blind segmentation or acoustic segmentation, is to find boundaries that minimize the Euclidean distances among the feature vectors of each segment. However, using this metric alone is prone to errors because of fluctuations or variations of the feature vectors within a segment. In this paper, we introduce principal component analysis to take the trend of the feature vectors into consideration, so that the proposed distance measure is the distance between feature vectors and their projections onto the principal components. The proposed distance measure is applied in the LBDP (level building dynamic programming) algorithm in an experiment on continuous speech segmentation. The results are promising, showing a 3-6% reduction in deletion rate compared to the pure Euclidean measure.

  • PDF
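The proposed measure is the distance between feature vectors and their projections onto the segment's principal components. A minimal NumPy sketch of that residual distance (assuming NumPy; the function name and the toy 2-D features are hypothetical, and the paper's LBDP search is not shown):

```python
import numpy as np

def pca_residual_distance(frames, n_components=1):
    """Mean distance from each feature vector to its projection onto
    the leading principal components of the segment (illustrative)."""
    X = np.asarray(frames, dtype=float)
    Xc = X - X.mean(axis=0)                    # center the segment
    # Principal axes from the SVD of the centered data
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    basis = vt[:n_components]                  # top principal directions
    proj = Xc @ basis.T @ basis                # projection onto the subspace
    residual = Xc - proj                       # component off the trend line
    return float(np.linalg.norm(residual, axis=1).mean())

# Frames lying exactly on a line follow their first PC perfectly,
# so the residual distance is zero; scattered frames do not.
line = [[t, 2.0 * t] for t in range(5)]
square = [[0, 0], [1, 0], [0, 1], [1, 1]]
```

A segment whose frames drift along a consistent trend thus scores low even though plain Euclidean spread within the segment is large, which is the point of the measure.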

Automatic Inter-Phoneme Similarity Calculation Method Using PAM Matrix Model (PAM 행렬 모델을 이용한 음소 간 유사도 자동 계산 기법)

  • Kim, Sung-Hwan;Cho, Hwan-Gue
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.3
    • /
    • pp.34-43
    • /
    • 2012
  • Determining the similarity between two strings has applications in various areas such as information retrieval, spell checking, and spam filtering. Similarity calculation between Korean strings based on dynamic programming first requires a definition of the similarity between phonemes. However, existing methods are limited in that they use manually set similarity scores. In this paper, we propose a method to automatically calculate inter-phoneme similarity from a given set of variant words using a PAM-like probabilistic model. Our method first finds pairs of similar words in a given word set and derives derivation rules from text alignments among the similar word pairs. Similarity scores are then calculated from the frequencies of variations between different phonemes. Experimentally, we show improvements of 10.1%~14.1% and 8.1%~11.8% in sensitivity compared with a simple match-mismatch scoring scheme and a manually set inter-phoneme similarity scheme, respectively, at a specificity of 77.2%~80.4%.
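PAM-style matrices score a substitution by the log-odds of its observed frequency against what chance would predict. A toy sketch of that scoring step, with entirely hypothetical alignment counts and background frequencies (the paper's rule derivation and alignment stages are not shown):

```python
import math
from collections import Counter

def log_odds_scores(pair_counts, background):
    """PAM-style log-odds: log of the observed substitution probability
    over the probability expected by chance (toy illustration)."""
    total = sum(pair_counts.values())
    scores = {}
    for (a, b), n in pair_counts.items():
        observed = n / total                    # empirical pair probability
        expected = background[a] * background[b]  # chance co-occurrence
        scores[(a, b)] = math.log(observed / expected)
    return scores

# Hypothetical counts of aligned phoneme pairs from variant-word pairs.
pairs = Counter({("ㅅ", "ㅅ"): 90, ("ㅅ", "ㅆ"): 8, ("ㅅ", "ㅈ"): 2})
freq = {"ㅅ": 0.5, "ㅆ": 0.3, "ㅈ": 0.2}
scores = log_odds_scores(pairs, freq)
```

Identity pairs come out positive and rare substitutions negative, which is exactly the shape of score a dynamic-programming string aligner needs.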

Speech Recognition of Korean Phonemes 'ㅅ', 'ㅈ', 'ㅊ' based on Volatility and Turning Points (변동성과 전환점에 기반한 한국어 음소 'ㅅ', 'ㅈ', 'ㅊ' 음성 인식)

  • Lee, Jae Won
    • KIISE Transactions on Computing Practices
    • /
    • v.20 no.11
    • /
    • pp.579-585
    • /
    • 2014
  • A phoneme is the minimal unit of speech, and it plays a very important role in speech recognition. This paper proposes a novel method for recognizing 'ㅅ', 'ㅈ', and 'ㅊ' among Korean phonemes. The proposed method is based on a volatility indicator and a turning point indicator calculated for each constituent block of the input speech signal. The volatility indicator is the sum of the differences between each pair of adjacent samples in a block, and the turning point indicator is the number of extremal points at which the direction of change of the sample values is inverted in a block. The phoneme recognition algorithm combines the two indicators to determine the positions at which the three target phonemes are recognized, using optimized thresholds for those indicators. The experimental results show that the proposed method markedly reduces the error rate of existing methods in terms of both the false reject rate and the false accept rate.
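The two indicators defined above are simple per-block statistics and can be sketched directly (the block below is a toy sample list; the paper's thresholds and block sizes are not reproduced):

```python
def volatility(block):
    """Sum of absolute differences between adjacent samples in a block."""
    return sum(abs(block[i] - block[i - 1]) for i in range(1, len(block)))

def turning_points(block):
    """Number of extremal points where the direction of change inverts."""
    count = 0
    for i in range(1, len(block) - 1):
        left = block[i] - block[i - 1]
        right = block[i + 1] - block[i]
        if left * right < 0:      # rise followed by fall, or vice versa
            count += 1
    return count

# A zigzag block: high volatility and a turn at every interior sample.
block = [0, 3, 1, 4, 2, 5]
```

Fricatives and affricates like 'ㅅ', 'ㅈ', 'ㅊ' are noise-like, so both indicators run high in their blocks; thresholding the pair is what localizes them.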

A Study on the Prosody Generation of Korean Sentences using Neural Networks (신경망을 이용한 한국어 운율 발생에 관한 연구)

  • Lee Il-Goo;Min Kyoung-Joong;Kang Chan-Koo;Lim Un-Cheon
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.65-69
    • /
    • 1999
  • There are various speech synthesis systems depending on the synthesis unit, synthesizer, and synthesis method, but concatenative systems, which generate synthetic speech by connecting basic synthesis units rather than by pure rule-based synthesis, suffer from reduced naturalness because they fail to realize smooth changes in the synthesis parameters between connected units. Accurately realizing the prosodic rules present in natural speech can improve the naturalness of synthetic speech, but extracting all existing prosodic rules requires building an enormous amount of language material. Although it would be desirable to extract prosodic rules from general meaningful sentences, a corpus containing all prosodic phenomena would require an extremely large number of sentences and be difficult to process, so it is important to construct a set of sentences that covers diverse prosodic phenomena while keeping the number of sentences as small as possible. In this paper, meaningful sentences were constructed based on 412 phonetically balanced isolated words. The words were divided into groups, and words drawn from each group were combined to form meaningful sentences; words not in the word list were added as needed to complete them. To examine prosodic variation according to a word's relative position within a sentence, variants of each sentence were created and included in the language material. Based on the constructed material, a speech database was built, and target patterns for training a neural network were produced through prosodic analysis. A neural network that takes the phoneme sequence of a sentence as input and generates the prosodic information of a specific phoneme was constructed and trained on these target patterns. The input pattern of the network consists of 11 phonemes of the sentence's phoneme sequence, and the prosodic information of the middle phoneme appears as the output. To account for segmental effects, the five phonemes before and after are input simultaneously, and to account for syntactic effects, the phoneme's position within the sentence and information about the prosodic phrase are also included in the input pattern. After extracting prosodic information from speech samples of the language material uttered by a specific speaker and training the network, the network generated synthetic-speech prosody similar to that of natural speech.

  • PDF
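The network input described above is an 11-phoneme window: the target phoneme plus five neighbors on each side. A sketch of building such windows, padding sentence edges with a hypothetical boundary symbol `#` (the prosodic-phrase and position features are omitted):

```python
def context_windows(phonemes, half=5, pad="#"):
    """Slide a (2*half + 1)-phoneme window over the sequence; the
    center phoneme is the one whose prosody is predicted."""
    padded = [pad] * half + list(phonemes) + [pad] * half
    windows = []
    for i in range(len(phonemes)):
        win = padded[i:i + 2 * half + 1]
        windows.append((win, win[half]))   # (input window, target phoneme)
    return windows

# Romanized toy phoneme sequence; each phoneme gets one 11-wide window.
wins = context_windows(["h", "a", "n"], half=5)
```

In the paper's setup each window row would be encoded numerically and augmented with position and prosodic-phrase features before being fed to the network.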

Automatic Phonetic Segmentation of Korean Speech Signal Using Phonetic-acoustic Transition Information (음소 음향학적 변화 정보를 이용한 한국어 음성신호의 자동 음소 분할)

  • 박창목;왕지남
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.8
    • /
    • pp.24-30
    • /
    • 2001
  • This article is concerned with automatic segmentation of Korean speech signals. All transition cases between phonetic units are classified into three types, and a different strategy is applied to each. Type 1 is the discrimination of silence, voiced speech, and unvoiced speech; histogram analysis of indicators consisting of wavelet coefficients and the SVF (spectral variation function) of the wavelet coefficients is used for this type. Type 2 is the discrimination of adjacent vowels. Vowel transitions can be characterized by the spectrogram: given the phonetic transcription and transition-pattern spectrograms, speech signals containing consecutive vowels are automatically segmented by template matching. Type 3 is the discrimination of vowels and voiced consonants; the smoothed short-time RMS energy of the wavelet low-pass component and the SVF of cepstral coefficients are adopted for this type. The experiment is performed on a 342-word utterance set gathered from 6 speakers. The results show the validity of the method.

  • PDF
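A spectral variation function measures how much the signal's spectral content changes from frame to frame, with peaks marking candidate phone boundaries. A simplified sketch of the idea (frame-to-frame Euclidean distance between feature vectors; not the paper's wavelet- or cepstrum-based formulation):

```python
import math

def spectral_variation(frames):
    """Frame-to-frame variation: Euclidean distance between consecutive
    feature vectors; peaks are candidate segment boundaries."""
    return [math.dist(prev, cur) for prev, cur in zip(frames, frames[1:])]

# Two steady regions with an abrupt change between frames 2 and 3,
# mimicking a phone transition in a toy 2-D feature space.
feats = [[1, 1], [1, 1], [1, 1], [5, 5], [5, 5]]
svf = spectral_variation(feats)
```

The paper applies this kind of curve to wavelet and cepstral features and thresholds its peaks per transition type.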

Implementation of the Automatic Segmentation and Labeling System (자동 음성분할 및 레이블링 시스템의 구현)

  • Sung, Jong-Mo;Kim, Hyung-Soon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.5
    • /
    • pp.50-59
    • /
    • 1997
  • In this paper, we implement an automatic speech segmentation and labeling system that marks phone boundaries automatically for constructing a Korean speech database. We specify and implement the system based on conventional speech segmentation and labeling techniques, and also develop a graphical user interface (GUI) in the Hangul $Motif^{TM}$ environment so that users can examine the automatically aligned boundaries and refine them easily. The developed system is applied to 16 kHz sampled speech, and the labeling unit consists of 46 phoneme-like units (PLUs) and silence. The system accepts both phonetic and orthographic transcriptions as input for linguistic information. For pattern matching, hidden Markov models (HMMs) are employed. Each phoneme model is trained on a manually segmented database of 445 phonetically balanced words (PBWs). To evaluate the performance of the system, we test it on another database consisting of sentence-type speech. In our experiment, 74.7% of phoneme boundaries are within 20 ms of the true boundary and 92.8% are within 40 ms.

  • PDF
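The 20 ms / 40 ms figures above come from the standard boundary-tolerance evaluation: the fraction of automatic boundaries falling within a tolerance of the hand-labeled ones. A sketch with hypothetical boundary times (real evaluations must also pair boundaries carefully; here they are simply paired by index):

```python
def within_tolerance(auto_bounds, ref_bounds, tol):
    """Fraction of automatic boundaries lying within `tol` seconds of
    the corresponding hand-labeled boundary (paired by index)."""
    hits = sum(1 for a, r in zip(auto_bounds, ref_bounds) if abs(a - r) <= tol)
    return hits / len(ref_bounds)

# Hypothetical boundary times (seconds) for one utterance.
auto = [0.10, 0.31, 0.52, 0.75]
ref  = [0.11, 0.30, 0.55, 0.70]
```

Reporting the curve at several tolerances (as the paper does at 20 ms and 40 ms) shows how forgiving the accuracy claim is.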

Phoneme Segmentation in Consideration of Speech feature in Korean Speech Recognition (한국어 음성인식에서 음성의 특성을 고려한 음소 경계 검출)

  • 서영완;송점동;이정현
    • Journal of Internet Computing and Services
    • /
    • v.2 no.1
    • /
    • pp.31-38
    • /
    • 2001
  • A speech database segmented into phonemes is important in studies of speech recognition, synthesis, and analysis. Phonemes consist of voiced and unvoiced sounds. Although voiced and unvoiced sounds differ in many features, traditional algorithms for detecting the boundaries between phonemes do not reflect these differences; they determine the boundaries by comparing the parameters of the current frame with those of the previous frame in the time domain. In this paper, we propose the assort algorithm, which is block-based and reflects the feature differences between voiced and unvoiced sounds for phoneme segmentation. The assort algorithm uses a distance measure based on MFCCs (mel-frequency cepstral coefficients) to compare spectra, and uses energy, zero crossing rate, spectral energy ratio, and formant frequencies to separate voiced from unvoiced sounds. In our experiment, the proposed system showed about 79% precision on isolated words of 3-4 syllables, an improvement of about 8% in precision over an existing phoneme segmentation system.

  • PDF
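Of the voiced/unvoiced cues listed above, the zero crossing rate is the simplest to illustrate: noise-like unvoiced sounds change sign far more often than low-frequency voiced sounds. A minimal sketch with synthetic stand-ins for the two classes (real speech frames would be used in practice):

```python
import math

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ; high values
    suggest unvoiced (noise-like) speech such as fricatives."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

# Low-frequency sine as a voiced-like frame; a rapidly alternating
# signal as a crude unvoiced-like stand-in.
voiced = [math.sin(2 * math.pi * 2 * t / 100) for t in range(100)]
unvoiced = [(-1) ** t for t in range(100)]
```

The assort algorithm combines this cue with energy, spectral energy ratio, and formant frequencies before applying its MFCC-based boundary comparison.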