• Title/Summary/Keyword: Phoneme

Search Results: 458

Sign Language Transformation System based on a Morpheme Analysis (형태소분석에 기초한 수화영상변환시스템에 관한 연구)

  • Lee, Yong-Dong;Kim, Hyoung-Geun;Jeong, Woon-Dal
    • The Journal of the Acoustical Society of Korea / v.15 no.6 / pp.90-98 / 1996
  • In this paper we propose a sign language transformation system for the deaf based on morpheme analysis. The proposed system extracts the phoneme components and connection information of the input character sequence by morpheme analysis, and the corresponding sign image obtained by component analysis is then generated correctly and automatically from a sign image database. For effective sign language transformation, a language description dictionary is constructed, consisting of a morpheme analysis part that analyzes the input character sequence and a sign language description part that is referenced for sign language patterns. To avoid duplicated sign language patterns, each pattern is classified as a basic, compound, or similar sign word. Computer simulation shows the usefulness of the proposed system. (A rough sketch of this lookup pipeline follows this entry.)

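The lookup pipeline described in the abstract above (morpheme analysis of the input text, then retrieval of sign patterns from a description dictionary) can be illustrated with a toy sketch. Everything below, including the morpheme-analyzer stand-in, the dictionary entries, and the sign-image keys, is a hypothetical simplification, not the authors' system, which used a full Korean morpheme analyzer and a sign image database.

```python
# Minimal sketch of the dictionary-lookup idea (illustrative only).
# The sign language description dictionary maps each morpheme to a pattern
# class ("basic", "compound", "similar") and a sign-image key.
SIGN_DICTIONARY = {
    "학교": {"class": "basic", "sign_key": "SIGN_SCHOOL"},
    "에":   {"class": "basic", "sign_key": "SIGN_TO"},
    "가다": {"class": "basic", "sign_key": "SIGN_GO"},
}

def analyze_morphemes(sentence: str) -> list[str]:
    """Stand-in for a real Korean morpheme analyzer: greedy longest match
    against the dictionary keys."""
    morphemes, i = [], 0
    while i < len(sentence):
        for j in range(len(sentence), i, -1):
            if sentence[i:j] in SIGN_DICTIONARY:
                morphemes.append(sentence[i:j])
                i = j
                break
        else:
            i += 1  # skip characters with no dictionary entry
    return morphemes

def to_sign_sequence(sentence: str) -> list[str]:
    """Map the input character sequence to a sequence of sign-image keys."""
    return [SIGN_DICTIONARY[m]["sign_key"] for m in analyze_morphemes(sentence)]

if __name__ == "__main__":
    print(to_sign_sequence("학교에가다"))  # ['SIGN_SCHOOL', 'SIGN_TO', 'SIGN_GO']
```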

A Study on the Speech Recognition of Korean Phonemes Using Recurrent Neural Network Models (순환 신경망 모델을 이용한 한국어 음소의 음성인식에 대한 연구)

  • 김기석;황희영
    • The Transactions of the Korean Institute of Electrical Engineers / v.40 no.8 / pp.782-791 / 1991
  • In pattern recognition fields such as speech recognition, several new techniques using artificial neural network models have been proposed and implemented. In particular, the multilayer perceptron model has been shown to be effective for static speech pattern recognition. Speech, however, has dynamic, temporal characteristics, and the most important issue in implementing continuous speech recognition systems with artificial neural network models is learning these dynamics together with the distributed cues and contextual effects that result from them. The recurrent multilayer perceptron model is known to be able to learn sequences of patterns. In this paper, the results of applying this recurrent model, which can learn the temporal characteristics of speech, to phoneme recognition are presented. The test data consist of 144 vowel + consonant + vowel speech chains made up of 4 Korean monophthongs and 9 Korean plosive consonants. The input parameters of the network are FFT coefficients, residual error, and zero-crossing rates. The baseline model showed recognition rates of 91% for vowels and 71% for plosive consonants for one male speaker. Various further experiments gave better recognition rates than the existing multilayer perceptron model, showing the recurrent model to be better suited to speech recognition, and the possibility of using recurrent models for speech recognition was explored by changing the configuration of this baseline model. (A minimal sketch of such a recurrent layer follows below.)
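
As a rough illustration of the recurrent architecture the abstract refers to, the sketch below implements a minimal Elman-style recurrent layer in numpy for frame-wise phoneme scoring. The feature dimensionality, phoneme count, and random weights are assumptions made for illustration; the paper's actual network configuration and training procedure are not reproduced.

```python
# Minimal numpy sketch of an Elman-style recurrent layer for frame-wise
# phoneme classification (illustrative dimensions and random weights).
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, n_phonemes = 16, 32, 13   # assumed sizes

W_in  = rng.normal(scale=0.1, size=(n_hidden, n_features))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_phonemes, n_hidden))

def forward(frames: np.ndarray) -> np.ndarray:
    """frames: (T, n_features) acoustic frames -> (T, n_phonemes) scores."""
    h = np.zeros(n_hidden)
    outputs = []
    for x in frames:
        # the hidden state depends on the current frame AND the previous
        # state, which is what lets the model capture temporal structure
        h = np.tanh(W_in @ x + W_rec @ h)
        outputs.append(W_out @ h)
    return np.stack(outputs)

scores = forward(rng.normal(size=(50, n_features)))   # 50 dummy frames
print(scores.shape, scores.argmax(axis=1)[:10])       # per-frame phoneme ids
```

The point of the recurrence is visible in the loop: each hidden state mixes the current frame with the previous state, which is how the dynamic characteristics of speech enter the model.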

A Study on the Dynamic Feature of Phoneme for Word Recognition (단어인식을 위한 음소의 동적 특징에 관한 검토)

  • 김주곤
    • Proceedings of the Acoustical Society of Korea Conference / 1997.06a / pp.35-39 / 1997
  • In this study, to improve the recognition accuracy of a Korean word recognition system that uses the phoneme as the basic recognition unit, phoneme and word recognition experiments were performed using dynamic features that carry the temporal information of each phoneme: regression coefficients and feature parameters obtained by the K-L (Karhunen-Loeve) transform (hereafter, K-L coefficients), and their effectiveness was confirmed. To this end, phoneme recognition experiments were first carried out on plosives using the mel-cepstrum as a static feature parameter and the regression coefficients and K-L coefficients as dynamic feature parameters. Recognition rates of 39.84% with the mel-cepstrum, 48.52% with the regression coefficients, and 52.40% with the K-L coefficients were obtained. Combining the feature parameters gave 47.17% for mel-cepstrum plus K-L coefficients, 60.11% for mel-cepstrum plus regression coefficients, 60.35% for K-L coefficients plus regression coefficients, and 58.13% for mel-cepstrum plus K-L coefficients plus regression coefficients; the combination of the two dynamic features (K-L coefficients and regression coefficients) and the combination of the mel-cepstrum with the regression coefficients showed the highest recognition rates. Extending the experiments to word recognition yielded higher recognition rates than the conventional feature parameters, confirming the effectiveness of the dynamic parameters. (A rough sketch of these two dynamic features follows this entry.)

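The two dynamic features named in the abstract above, regression coefficients and K-L (Karhunen-Loeve) coefficients, can be sketched roughly as follows. The window width, feature dimensions, and random placeholder cepstra are assumptions; the paper's exact parameterization is not reproduced.

```python
# Rough sketch of regression (delta) coefficients over a window of cepstral
# frames and a K-L (PCA) transform of the frame sequence; placeholder data.
import numpy as np

def regression_coefficients(frames: np.ndarray, width: int = 2) -> np.ndarray:
    """Least-squares slope of each coefficient over a (2*width+1)-frame window."""
    T, D = frames.shape
    k = np.arange(-width, width + 1)
    denom = np.sum(k ** 2)
    padded = np.pad(frames, ((width, width), (0, 0)), mode="edge")
    deltas = np.empty_like(frames)
    for t in range(T):
        window = padded[t:t + 2 * width + 1]          # (2*width+1, D)
        deltas[t] = (k[:, None] * window).sum(axis=0) / denom
    return deltas

def kl_transform(frames: np.ndarray, n_components: int = 8) -> np.ndarray:
    """Project frames onto the leading eigenvectors of their covariance
    (Karhunen-Loeve / PCA)."""
    centered = frames - frames.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_components]
    return centered @ eigvecs[:, order]

cepstra = np.random.default_rng(1).normal(size=(100, 12))  # dummy mel-cepstra
print(regression_coefficients(cepstra).shape, kl_transform(cepstra).shape)
```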

A Study on the Technique of Spectrum Flattening for Improved Pitch Detection (개선된 피치검출을 위한 스펙트럼 평탄화 기법에 관한 연구)

  • 강은영;배명진;민소연
    • The Journal of the Acoustical Society of Korea / v.21 no.3 / pp.310-314 / 2002
  • Exact pitch (fundamental frequency) extraction is important in speech signal processing such as speech recognition, analysis, and synthesis. However, exact pitch extraction from the speech signal is very difficult because of the influence of the formants and of transitional amplitude. In this paper, the pitch is therefore detected after the formant components have been removed by flattening the spectrum in the frequency domain, where the effect of transitions and phoneme changes is small. We propose a new method of flattening the log spectrum, and its performance is compared with the LPC and cepstrum methods. The results show that the proposed method performs better than the conventional methods. (A loose sketch of the flattening idea follows below.)
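
A loose sketch of the general idea (flatten the log spectrum so the formant envelope no longer masks the harmonic structure, then look for the pitch periodicity) is given below. The synthetic test frame, the moving-average envelope, and the cepstrum-style peak picking are illustrative assumptions and not the authors' specific flattening method.

```python
# Sketch: flatten the log spectrum, then locate the pitch periodicity.
import numpy as np

fs = 16000
rng = np.random.default_rng(0)
t = np.arange(0, 0.04, 1 / fs)                       # one 40 ms frame
f0 = 120.0                                           # known pitch of the test signal
signal = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 60))
signal += 0.001 * rng.normal(size=t.size)            # small noise floor for the log

log_mag = np.log(np.abs(np.fft.rfft(signal * np.hanning(t.size))) + 1e-10)

# crude spectral envelope: moving average over the log spectrum, standing in
# for the formant structure that the flattening step is meant to remove
win = 15
envelope = np.convolve(log_mag, np.ones(win) / win, mode="same")
flattened = log_mag - envelope                       # harmonic ripple without the envelope

# the periodicity of the flattened log spectrum reveals the fundamental
quefrency = np.fft.irfft(flattened)
min_lag, max_lag = int(fs / 400), int(fs / 60)       # search 60-400 Hz
lag = min_lag + np.argmax(quefrency[min_lag:max_lag])
print(f"estimated pitch: {fs / lag:.1f} Hz (true {f0:.1f} Hz)")
```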

Speech Recognition Optimization Learning Model using HMM Feature Extraction In the Bhattacharyya Algorithm (바타차랴 알고리즘에서 HMM 특징 추출을 이용한 음성 인식 최적 학습 모델)

  • Oh, Sang-Yeob
    • Journal of Digital Convergence / v.11 no.6 / pp.199-204 / 2013
  • A speech recognition system must build its learning models from inaccurate input speech, and similar phoneme models lead to a decrease in the recognition rate. In this paper, we therefore propose a method for configuring an optimal learning model for speech recognition using the Bhattacharyya algorithm. Based on the features of the phonemes, an HMM feature extraction method was applied to the phonemes in the training data, and similar learning models were resolved into exactly trained models using the Bhattacharyya algorithm. The recognition performance of the optimal learning model configured with the Bhattacharyya algorithm was evaluated, and applying the proposed system yielded a speech recognition rate of 98.7%. (A minimal sketch of the Bhattacharyya distance between two models follows below.)
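
The Bhattacharyya distance at the core of the abstract above can be sketched for the simplest case of single Gaussian models. The per-phoneme Gaussians, the phoneme labels, and the similarity threshold below are made-up placeholders; the paper works with HMM-based models rather than single Gaussians.

```python
# Minimal sketch: flag confusable (similar) phoneme models by Bhattacharyya
# distance between single Gaussians (placeholder feature statistics).
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    cov = (cov1 + cov2) / 2
    diff = mu1 - mu2
    term1 = diff @ np.linalg.solve(cov, diff) / 8
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

rng = np.random.default_rng(2)
# hypothetical per-phoneme Gaussians over a 12-dimensional feature vector
models = {p: (rng.normal(size=12), np.diag(rng.uniform(0.5, 1.5, size=12)))
          for p in ["g", "k", "kk", "d", "t"]}
models["k"] = (models["g"][0] + 0.1, models["g"][1])   # make /g/ and /k/ similar

for a in models:
    for b in models:
        if a < b:
            d = bhattacharyya(*models[a], *models[b])
            if d < 1.0:     # arbitrary similarity threshold for illustration
                print(f"models {a!r} and {b!r} look confusable (distance {d:.2f})")
```

Pairs of models falling under the threshold are the ones that would need more careful, exact training to keep the recognition rate from dropping.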

A Study on the Pitch Detection of Speech Harmonics by the Peak-Fitting (음성 하모닉스 스펙트럼의 피크-피팅을 이용한 피치검출에 관한 연구)

  • Kim, Jong-Kuk;Jo, Wang-Rae;Bae, Myung-Jin
    • Speech Sciences / v.10 no.2 / pp.85-95 / 2003
  • In speech signal processing, exact pitch detection is very important for speech recognition, synthesis, and analysis. If the pitch is detected exactly, it can be used in analysis to obtain the vocal tract parameters properly, in synthesis to change or maintain the naturalness and intelligibility of speech quality easily, and in recognition to remove speaker-dependent characteristics for speaker independence. In this paper, we propose a new pitch detection algorithm. First, positive center clipping based on the slope of the speech signal is performed in the time domain to emphasize the pitch period of the glottal component with the vocal tract characteristics removed. A rough formant envelope is then computed by peak-fitting the spectrum of the original speech signal in the frequency domain, and a smoothed formant envelope is obtained from it by linear interpolation. A flattened harmonic spectrum is obtained as the algebraic difference between the spectrum of the original speech signal and the smoothed formant envelope, and the inverse fast Fourier transform (IFFT) of this flattened harmonic spectrum yields a residual signal from which the vocal tract component has been removed. The performance was compared with the LPC, cepstrum, and ACF methods. With this algorithm, we obtained pitch information with improved detection accuracy, and the gross error rate was reduced in voiced speech regions and in transition regions where the phoneme changes. (A sketch of the center-clipping step follows this entry.)

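Of the processing chain described above, only the first step, center clipping to emphasize the pitch period, is sketched below; the peak-fitted formant envelope, spectral subtraction, and residual computation are not reproduced. The synthetic frame, the clipping ratio, and the autocorrelation-based pitch pick are assumptions for illustration.

```python
# Sketch of center clipping followed by an autocorrelation pitch estimate.
import numpy as np

fs = 16000
t = np.arange(0, 0.04, 1 / fs)
f0 = 150.0
frame = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 8))

def center_clip(x: np.ndarray, ratio: float = 0.6) -> np.ndarray:
    """Zero out samples below a clipping level set relative to the frame peak,
    which strips fine vocal-tract detail and emphasizes the pitch pulses."""
    level = ratio * np.max(np.abs(x))
    y = np.zeros_like(x)
    y[x >  level] = x[x >  level] - level
    y[x < -level] = x[x < -level] + level
    return y

clipped = center_clip(frame)
acf = np.correlate(clipped, clipped, mode="full")[len(clipped) - 1:]
min_lag, max_lag = int(fs / 400), int(fs / 60)       # search 60-400 Hz
lag = min_lag + np.argmax(acf[min_lag:max_lag])
print(f"estimated pitch: {fs / lag:.1f} Hz (true {f0:.1f} Hz)")
```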

Perceptual Boundary on a Synthesized Korean Vowel /o/-/u/ Continuum by Chinese Learners of Korean Language (/오/-/우/ 합성모음 연속체에 대한 중국인 한국어 학습자의 청지각적 경계)

  • Yun, Jihyeon;Kim, EunKyung;Seong, Cheoljae
    • Phonetics and Speech Sciences / v.7 no.4 / pp.111-121 / 2015
  • The present study examines the auditory boundary between Korean /o/ and /u/ on a synthesized vowel continuum as perceived by Chinese learners of Korean. Previous research has reported that Chinese learners have difficulty pronouncing the Korean monophthongs /o/ and /u/. In this experiment, a nine-step continuum was resynthesized using Praat from a vowel token recorded in isolation by a male announcer. F1 and F2 were shifted synchronously in equal quarter-tone (qtone) steps, while the F3 and F4 values were held constant across all stimuli. A forced-choice identification task was performed by advanced learners whose native language is Mandarin Chinese, and their data were compared with those of a native Korean group. ROC (receiver operating characteristic) analysis and logistic regression were performed to estimate the perceptual boundary. The results indicate that the learner group has a different auditory criterion on the continuum from the native Korean group, suggesting that more emphasis should be placed on hearing and listening training in order to acquire the phoneme categories of these two vowels. (A toy sketch of the boundary estimation follows below.)
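
A toy sketch of estimating a category boundary on a nine-step continuum with logistic regression, in the spirit of the analysis described above, is given below. The response counts and trial numbers are invented placeholders, and the study's ROC analysis is not reproduced.

```python
# Sketch: fit P(/u/ | step) with logistic regression and read off the
# 50% crossover as the perceptual boundary (invented response data).
import numpy as np

steps = np.arange(1, 10)                           # 9 resynthesized stimuli
n_trials = 20                                      # presentations per step (assumed)
u_responses = np.array([1, 2, 3, 6, 10, 15, 18, 19, 20])   # invented /u/ counts

centered = steps - steps.mean()                    # centering helps convergence
a, b = 0.0, 0.0
for _ in range(20000):                             # plain gradient ascent on the
    p = 1 / (1 + np.exp(-(a + b * centered)))      # binomial log-likelihood
    a += 1e-3 * np.sum(u_responses - n_trials * p)
    b += 1e-3 * np.sum((u_responses - n_trials * p) * centered)

boundary = steps.mean() - a / b                    # step where P(/u/) = 0.5
print(f"estimated perceptual boundary: step {boundary:.2f}")
```

Fitting the same model separately to the learner and native groups and comparing the two boundary estimates mirrors the comparison reported in the abstract.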

A Study on the Generation of Multi-syllable Nonsense Wordset for the Assessment of Synthetic Speech (합성음성평가를 위한 다음절 무의미단어 생성과 이용에 관한 연구)

  • Jo, Cheol-Woo;Kim, Kyung-Tae;Lee, Yong-Ju
    • The Journal of the Acoustical Society of Korea / v.13 no.5 / pp.51-58 / 1994
  • Nowadays many kinds of man-machine interfaces using speech signals, such as speech recognizers and speech synthesizers, are being proposed and used in practice. Speech synthesis systems in particular are widely used in daily life, but their assessment methods are still at an early stage. In this paper we propose a method for generating a multi-syllable nonsense wordset for the purpose of synthetic speech assessment and apply the wordset to a commercial text-to-speech system. Results of the experiment are presented, and it is verified that the nonsense wordset generation method can be used to assess the intelligibility of the synthesizer at the phoneme level or at the level of the phonemic environment. (A small generation sketch follows this entry.)

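A small sketch of generating multi-syllable nonsense test words by randomly combining syllables from a phoneme inventory is shown below. The onset and vowel inventories, the CV syllable template, and the word length are assumptions, not the wordset design used in the paper.

```python
# Sketch: build nonsense words from random CV syllables for intelligibility tests.
import itertools
import random

ONSETS = ["g", "n", "d", "m", "b", "s", "j", "k", "p", "t"]   # assumed onsets
VOWELS = ["a", "e", "i", "o", "u"]                            # assumed vowels

def nonsense_words(n_words: int, n_syllables: int = 3, seed: int = 0) -> list[str]:
    """Draw CV syllables at random and join them into nonsense words."""
    rng = random.Random(seed)
    syllables = ["".join(p) for p in itertools.product(ONSETS, VOWELS)]
    return ["".join(rng.choice(syllables) for _ in range(n_syllables))
            for _ in range(n_words)]

for word in nonsense_words(5):
    print(word)
```

In use, such a list would be synthesized by the text-to-speech system under test and transcribed by listeners, so that per-phoneme error rates indicate intelligibility at the phoneme or phonemic-environment level.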

Phonological Process of Children with Cleft Palate (구개파열 아동의 음음변동에 관한 연구)

  • Choi, Jae-Nam;Sung, Soo-Jin;Nam, Do-Hyun;Choi, Hong-Shik
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.16 no.1 / pp.49-52 / 2005
  • Background and Objectives : Children with cleft palate may be impaired in articulation and resonance. This study examined the phonological process usage of 3-, 4-, and 5-year-old children with cleft palate. Materials and Method : Twenty-seven children with cleft palate, aged 3, 4, and 5 years, participated. The authors performed speech evaluation using a picture consonant test for children with cleft palate. The percentage of consonants correct (PCC) and the mean value for each phoneme by place and manner of articulation were evaluated. Results : By place of articulation, omission of velar consonants was the most frequent; by manner of articulation, omission of nasal consonants was the most frequent. Backing to glottal stops was the most prominent phonological process in children with cleft palate. Conclusion : These results indicate that the differences between the articulation disorder associated with cleft palate and other articulation disorders should be considered in the interpretation of speech evaluations.


Basic Phonetic Problems Encountered by Poles Studying Korean. (폴란드인이 한국어 학습에 나타난 발음상의 음성학적 문제)

  • Paradowska Anna Isabella
    • Proceedings of the KSPS conference / 1996.10a / pp.247-251 / 1996
  • This paper is intended as a preliminary study of the phonetic and phonological differences between the Polish and Korean languages. An attempt is made to examine the most conspicuous difficulties encountered by Polish learners who begin to speak Korean (and in doing so, I hope it may be of help to future learners of both languages). Since the phoneme inventories and general phonetic rules of the two languages are very different, teaching and learning accurate pronunciation is extremely difficult for both Poles and Koreans without any previous phonetic training. In the case of Polish and Korean we can see how strong and persistent the influence of the mother tongue is on the target language. As an example I would like to discuss the basic differences between Polish and Korean consonants. The most important consonantal opposition in Polish is voicing (e.g. 〔b〕 / 〔p〕, 〔g〕 / 〔k〕), while in Korean the voiced/voiceless opposition is of secondary importance; Korean speakers therefore do not perceive the difference between Polish voiced and voiceless consonants. On the other hand, Polish speakers cannot distinguish the Korean lenis / fortis / aspirated opposition (e.g. ㅂ 〔b〕 / ㅃ 〔p〕 / ㅍ 〔ph〕, ㄱ 〔g〕 / ㄲ 〔k〕 / ㅋ 〔kh〕). The other very important factor is palatalization, which is of vital importance in Polish, and because of this Polish speakers are extremely sensitive to it; in Korean, palatalization is not phonologically important and Korean speakers do not distinguish between palatalized and non-palatalized consonants. The transcription used here is based on 'The Principles of the International Phonetic Association and the Korean Phonetic Alphabet' (1981) by Hyun Bok Lee.
