• Title/Summary/Keyword: phonemes

Search Results: 226

A Study on the Phoneme Based Analysis of Korean Initial Plosives Using Statistical Method and Perception Tests (통계적 방법과 인지실험을 통한 한국어 초성파열음의 음소단위 분석에 관한 연구)

  • Jo Cheol-Woo; Lee Woo-Sun; Lee Cyu-Ho; Kim Jong-Ahn; Lim Gwang-Il; Lee Tae-Won
    • The Journal of the Acoustical Society of Korea / v.8 no.5 / pp.78-85 / 1989
  • This paper describes the statistical methods and perception tests used to extract parameters for the synthesis-by-rule of Korean plosives. A formant synthesizer is chosen for synthesizing the phonemes. The speech material for the analysis consists of 72 CV monosyllables from a single male speaker. The analysis focuses mainly on the variation of parameters in the time and frequency domains; perception tests are then carried out to estimate the effects of variations in the formant transitions.
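
The abstract names formant synthesis but not the synthesizer itself. As a minimal sketch of the cascade formant synthesis idea (not the authors' system), an impulse train at F0 can be passed through one two-pole resonator per formant; the formant and bandwidth values below are illustrative placeholders.

```python
# Minimal cascade formant synthesis sketch; all values are placeholders.
import numpy as np
from scipy.signal import lfilter

def resonator_coeffs(f, bw, fs):
    """Two-pole resonator at centre frequency f (Hz) with bandwidth bw (Hz)."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * f / fs
    a = [1.0, -2 * r * np.cos(theta), r * r]
    b = [1 - 2 * r * np.cos(theta) + r * r]   # roughly unity gain at DC
    return b, a

def synthesize_vowel(formants, bandwidths, f0=120, dur=0.3, fs=16000):
    n = int(dur * fs)
    src = np.zeros(n)
    src[::int(fs / f0)] = 1.0                 # glottal impulse train at F0
    y = src
    for f, bw in zip(formants, bandwidths):   # cascade the resonators
        b, a = resonator_coeffs(f, bw, fs)
        y = lfilter(b, a, y)
    return y / np.max(np.abs(y))

# e.g. a rough /a/-like vowel (hypothetical formant values)
wave = synthesize_vowel([700, 1200, 2600], [80, 90, 120])
```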


A Study on Duration Length and Place of Feature Extraction for Phoneme Recognition (음소 인식을 위한 특징 추출의 위치와 지속 시간 길이에 관한 연구)

  • Kim, Bum-Koog; Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / v.13 no.4 / pp.32-39 / 1994
  • As basic research toward a Korean speech recognition system, phoneme recognition experiments were carried out to determine 1) the position within a phoneme that best represents its characteristics, and 2) the duration length needed to obtain the best recognition rates. For the experiments, multi-speaker-dependent recognition with a Bayesian decision rule was adopted, using 21st-order cepstral coefficients as the feature parameters. The best positions for feature extraction were 10~50 ms for vowels, 40~100 ms for fricatives and affricates, 10~50 ms for nasals and liquids, and 10~50 ms for plosives, and a duration of about 70 ms was sufficient for recognizing all 35 phonemes.
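
A Bayesian decision rule over fixed-length cepstral vectors can be sketched as a Gaussian maximum-likelihood classifier. This is a hedged illustration of the setup the abstract describes, not the paper's exact model; `features_by_class` is a hypothetical input holding cepstral vectors extracted at the chosen position and duration.

```python
# Gaussian Bayesian decision rule sketch over 21-dim cepstral features.
import numpy as np

class GaussianPhonemeClassifier:
    def fit(self, features_by_class):
        """features_by_class: dict phoneme -> (n_samples, 21) cepstral array."""
        self.models = {}
        for phoneme, X in features_by_class.items():
            mean = X.mean(axis=0)
            cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
            self.models[phoneme] = (mean, np.linalg.inv(cov),
                                    np.linalg.slogdet(cov)[1])
        return self

    def predict(self, x):
        def loglik(m):
            mean, inv_cov, logdet = m
            d = x - mean
            return -0.5 * (d @ inv_cov @ d + logdet)  # Gaussian log-likelihood
        return max(self.models, key=lambda p: loglik(self.models[p]))
```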


Optimizing Multiple Pronunciation Dictionary Based on a Confusability Measure for Non-native Speech Recognition (타언어권 화자 음성 인식을 위한 혼잡도에 기반한 다중발음사전의 최적화 기법)

  • Kim, Min-A; Oh, Yoo-Rhee; Kim, Hong-Kook; Lee, Yeon-Woo; Cho, Sung-Eui; Lee, Seong-Ro
    • MALSORI / no.65 / pp.93-103 / 2008
  • In this paper, we propose a method for optimizing a multiple pronunciation dictionary used for modeling pronunciation variations of non-native speech. The proposed method removes some confusable pronunciation variants in the dictionary, resulting in a reduced dictionary size and less decoding time for automatic speech recognition (ASR). To this end, a confusability measure is first defined based on the Levenshtein distance between two different pronunciation variants. Then, the number of phonemes for each pronunciation variant is incorporated into the confusability measure to compensate for ASR errors due to words of a shorter length. We investigate the effect of the proposed method on ASR performance, where Korean is selected as the target language and Korean utterances spoken by Chinese native speakers are considered as non-native speech. It is shown from the experiments that an ASR system using the multiple pronunciation dictionary optimized by the proposed method can provide a relative average word error rate reduction of 6.25%, with 11.67% less ASR decoding time, as compared with that using a multiple pronunciation dictionary without the optimization.
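
The Levenshtein-based confusability idea lends itself to a short sketch. The length normalization below (dividing by the longer variant's phoneme count) is an assumption standing in for the paper's exact compensation for shorter words.

```python
# Confusability sketch: Levenshtein distance over phoneme sequences,
# normalised by length. Threshold choice and weighting are assumptions.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def confusability(variant_a, variant_b):
    """variant_a/b: phoneme sequences, e.g. ['k', 'a', 'N']."""
    dist = levenshtein(variant_a, variant_b)
    return 1.0 - dist / max(len(variant_a), len(variant_b))  # 1.0 = identical

# Variants scoring above some threshold could then be pruned from the dictionary.
```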


Decoding Brain States during Auditory Perception by Supervising Unsupervised Learning

  • Porbadnigk, Anne K.; Görnitz, Nico; Kloft, Marius; Müller, Klaus-Robert
    • Journal of Computing Science and Engineering / v.7 no.2 / pp.112-121 / 2013
  • Recent years have seen a rise of interest in using electroencephalography-based brain-computer interfacing methodology to investigate non-medical questions, beyond the purposes of communication and control. One of these novel applications is examining how signal quality is processed neurally, which is of particular interest for industry, besides providing neuroscientific insights. As for most behavioral experiments in the neurosciences, the assessment of a given stimulus by a subject is required. Based on an EEG study on the speech quality of phonemes, we first discuss the information contained in the neural correlate of this judgement. Typically, this is done by analyzing the data along behavioral responses/labels. However, participants in such complex experiments often guess at the threshold of perception. This leads to labels that are only partly correct, and oftentimes random, which is a problematic scenario for supervised learning. Therefore, we propose a novel supervised-unsupervised learning scheme, which aims to differentiate true labels from random ones in a data-driven way. We show that this approach provides a crisper view of the brain states that experimenters are looking for, besides discovering additional brain states to which the classical analysis is blind.

Improved First-Phoneme Searches Using an Extended Burrows-Wheeler Transform (확장된 버로우즈-휠러 변환을 이용한 개선된 한글 초성 탐색)

  • Kim, Sung-Hwan; Cho, Hwan-Gue
    • KIISE Transactions on Computing Practices / v.20 no.12 / pp.682-687 / 2014
  • First-phoneme queries are an important functionality for improving the usability of interfaces that frequently produce input errors due to a restricted input environment, such as navigation systems and mobile devices. In this paper, we propose a time- and space-efficient data structure for Korean first-phoneme queries that disassembles Korean strings phoneme by phoneme, rearranges them into circular strings, and indexes them using the extended Burrows-Wheeler Transform. We demonstrate that the proposed method can process more types of query using less space than previous methods, and that it improves the search time when the query is shorter and the proportion of first phonemes is higher.
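
Two of the building blocks here are easy to sketch: disassembling precomposed Hangul syllables into phonemes via Unicode arithmetic, and a plain Burrows-Wheeler Transform computed from sorted rotations. The paper's extended-BWT index is more involved; this only illustrates the ingredients.

```python
# Hangul first-phoneme extraction + naive BWT demo.
CHOSEONG = [chr(0x1100 + i) for i in range(19)]    # 19 initial consonants (conjoining jamo)

def first_phoneme(syllable):
    """Return the initial consonant of a precomposed Hangul syllable."""
    idx = ord(syllable) - 0xAC00
    if not 0 <= idx < 11172:                        # 11172 = 19 * 21 * 28 syllables
        return syllable                             # not a Hangul syllable
    return CHOSEONG[idx // 588]                     # 588 = 21 medials * 28 finals

def bwt(s):
    """Plain BWT via sorted rotations (O(n^2 log n); fine for a demo)."""
    s += "\0"                                       # unique end sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

print([first_phoneme(c) for c in "한국어"])          # initial consonants of each syllable
```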

A Parallel Speech Recognition Model on Distributed Memory Multiprocessors (분산 메모리 다중프로세서 환경에서의 병렬 음성인식 모델)

  • 정상화; 김형순; 박민욱; 황병한
    • The Journal of the Acoustical Society of Korea / v.18 no.5 / pp.44-51 / 1999
  • This paper presents a massively parallel computational model for the efficient integration of speech and natural language understanding. The phoneme model is based on continuous hidden Markov models with context-dependent phonemes, and the language model is based on a knowledge-base approach. To construct the knowledge base, we adopt a hierarchically structured semantic network and a memory-based parsing technique that employs parallel marker-passing as an inference mechanism. Our parallel speech recognition algorithm is implemented on a multi-Transputer system using distributed-memory MIMD multiprocessors. Experimental results show that the parallel speech recognition system achieves better recognition accuracy than a word-network-based speech recognition system, and that accuracy improves further when code-phoneme statistics are applied. Speedup experiments also demonstrate the feasibility of building a real-time parallel speech recognition system.


A Study on the Speech Recognition of Korean Phonemes Using Recurrent Neural Network Models (순환 신경망 모델을 이용한 한국어 음소의 음성인식에 대한 연구)

  • 김기석; 황희영
    • The Transactions of the Korean Institute of Electrical Engineers / v.40 no.8 / pp.782-791 / 1991
  • In pattern recognition fields such as speech recognition, several new techniques using artificial neural network models have been proposed and implemented. In particular, the multilayer perceptron model has been shown to be effective for static speech pattern recognition. However, speech has dynamic, temporal characteristics, and the most important issue in implementing continuous speech recognition with artificial neural network models is learning the dynamic characteristics, distributed cues, and contextual effects that arise from those temporal characteristics. The recurrent multilayer perceptron model is known to be able to learn sequences of patterns. This paper presents the results of applying the recurrent model, which can learn the temporal characteristics of speech, to phoneme recognition. The test data consist of 144 Vowel+Consonant+Vowel speech chains made up of 4 Korean monophthongs and 9 Korean plosive consonants. The input parameters of the neural network model are FFT coefficients, residual error, and zero-crossing rates. The baseline model showed a recognition rate of 91% for vowels and 71% for plosive consonants of one male speaker. Various further experiments yielded better recognition rates than the existing multilayer perceptron model, showing the recurrent model to be better suited to speech recognition, and the possibility of using recurrent models for speech recognition was explored by changing the configuration of this baseline model.
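
The key mechanism, feeding the hidden state back as input at the next frame, can be sketched as an Elman-style forward pass. This is a hedged illustration of the model family the paper evaluates; the dimensions, initialization, and feature layout below are assumptions, not the paper's configuration.

```python
# Elman-style recurrent network forward pass over per-frame speech features.
import numpy as np

rng = np.random.default_rng(0)

class ElmanRNN:
    def __init__(self, n_in, n_hidden, n_phonemes):
        self.W_xh = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))  # recurrent weights
        self.W_hy = rng.normal(0, 0.1, (n_phonemes, n_hidden))

    def forward(self, frames):
        """frames: (T, n_in) sequence of per-frame feature vectors."""
        h = np.zeros(self.W_hh.shape[0])
        outputs = []
        for x in frames:
            h = np.tanh(self.W_xh @ x + self.W_hh @ h)   # hidden state feedback
            outputs.append(self.W_hy @ h)                # per-frame phoneme scores
        return np.array(outputs)

# e.g. 64 FFT coefficients + residual error + zero-crossing rate per frame
net = ElmanRNN(n_in=66, n_hidden=32, n_phonemes=13)      # 4 vowels + 9 plosives
scores = net.forward(rng.normal(size=(50, 66)))          # dummy 50-frame utterance
```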

Comparison of Acoustic Characteristics of Vowel and Stops in 3, 4 year-old Normal Hearing Children According to Parents' Deafness: Preliminary Study (부모의 청각장애 유무에 따른 3, 4세 건청 자녀의 모음 및 파열음 조음의 음향음성학적 특성 비교: 예비연구)

  • Hong, Jisook; Kang, Youngae; Kim, Jaeock
    • Phonetics and Speech Sciences / v.7 no.1 / pp.67-77 / 2015
  • The purpose of this study was to investigate how deaf parents influence the speech sounds of their normal-hearing children. Twenty-four normal-hearing children aged 3 to 4, either of deaf adults (CODA) or of normal-hearing parents (NORMAL), participated in the study. F1, F2, and the vowel triangle area were measured for 7 vowels, and voice onset times (VOTs) and closure durations were measured for 9 stops. The results are as follows. First, F1 and F2 for all vowels were higher and the vowel triangle area was larger in CODA than in NORMAL, although the differences were not statistically significant. Second, VOTs in CstopV (stop+vowel) position for /t*/ and in VCstopV (vowel+stop+vowel) position for /t*/, /tʰ/, and /kʰ/ were longer in CODA than in NORMAL; most stops in CODA showed longer VOTs for most phonemes. Third, the manner and place of articulation of the stops made no difference between CODA and NORMAL in VOTs or closure durations. CODA do not exhibit the speech characteristics of deaf people; however, they seem to speak differently from NORMAL, which suggests that CODA may be influenced in some way by the different linguistic environment created by deaf parents.
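
The vowel triangle area is a standard measure: take the mean (F1, F2) points of the corner vowels as triangle vertices and apply the shoelace formula. A small sketch follows; the formant values are hypothetical, and the paper's exact vowel set and averaging are not reproduced.

```python
# Vowel triangle area from corner-vowel formant means (shoelace formula).
def vowel_triangle_area(corners):
    """corners: [(F1, F2), (F1, F2), (F1, F2)] in Hz, e.g. for /i/, /a/, /u/."""
    (x1, y1), (x2, y2), (x3, y3) = corners
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

# Hypothetical child formant means (Hz) for /i/, /a/, /u/
area = vowel_triangle_area([(400, 3000), (1100, 1700), (450, 1200)])
print(f"{area:.0f} Hz^2")
```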

An ERP Study of the Perception of English High Front Vowels by Native Speakers of Korean and English (영어전설고모음 인식에 대한 ERP 실험연구: 한국인과 영어원어민을 대상으로)

  • Yun, Yungdo
    • Phonetics and Speech Sciences / v.5 no.3 / pp.21-29 / 2013
  • The mismatch negativity (MMN) is known to be a fronto-centrally negative component of the auditory event-related potential (ERP). Näätänen et al. (1997) and Winkler et al. (1999) argue that the MMN acts as a cue to phoneme perception in the ERP paradigm. In this study, a perception experiment based on an ERP paradigm was conducted to examine how Korean and American English speakers perceive the American English high front vowels. The study found that the MMN appeared at around the same time in both Korean and American English speakers after they heard the F1s of English high front vowels. However, when the same groups heard English words containing these vowels, the American English listeners' MMN appeared slightly earlier than the Korean listeners' MMN. These findings suggest that non-speech sounds, such as the F1s of vowels, may be processed similarly across speakers of different languages, whereas phonemes are processed differently: a native-language phoneme is processed faster than a non-native one.
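
The MMN in such paradigms is typically quantified as a difference wave: average the epochs for standard and deviant stimuli, subtract, and find the negative peak in a post-stimulus window. A generic sketch follows; the epoch arrays, channel choice, and time window are assumptions, not this study's parameters.

```python
# MMN difference-wave sketch for one fronto-central EEG channel.
import numpy as np

def mmn_peak(standard_epochs, deviant_epochs, times, window=(0.10, 0.25)):
    """standard/deviant_epochs: (n_trials, n_samples) arrays;
    times: (n_samples,) in seconds relative to stimulus onset."""
    diff = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    i = np.argmin(diff[mask])                    # most negative point = MMN peak
    return times[mask][i], diff[mask][i]         # (latency in s, amplitude in uV)
```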

ETRI small-sized dialog style TTS system (ETRI 소용량 대화체 음성합성시스템)

  • Kim, Jong-Jin; Kim, Jeong-Se; Kim, Sang-Hun; Park, Jun; Lee, Yun-Keun; Hahn, Min-Soo
    • Proceedings of the KSPS conference / 2007.05a / pp.217-220 / 2007
  • This study outlines ETRI's small-sized dialog-style Korean TTS system, which applies HMM-based speech synthesis techniques. To build the VoiceFont, 500 dialog-style sentences were used to train the HMMs, and context information about phonemes, syllables, words, phrases, and sentences was extracted fully automatically to build context-dependent HMMs. For the acoustic model, features such as mel-cepstra and log F0, together with their delta and delta-delta coefficients, were used. The VoiceFont built through this training is 0.93 MB. The developed HMM-based TTS system was installed on an ARM720T processor running at 60 MHz. To reduce computation time, the MLSA inverse filtering module was implemented in assembly language. The fully implemented system runs 1.73 times faster than real time.
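
The delta / delta-delta features mentioned above are conventionally computed with a regression formula over neighbouring frames. A short sketch under that standard formula follows; the window width and the dummy mel-cepstrum sequence are assumptions, not ETRI's settings.

```python
# Standard delta-coefficient computation over a +/- width frame window.
import numpy as np

def deltas(features, width=2):
    """features: (T, D) frame sequence; returns (T, D) delta coefficients."""
    T = len(features)
    denom = 2 * sum(k * k for k in range(1, width + 1))
    padded = np.pad(features, ((width, width), (0, 0)), mode="edge")
    return np.stack([
        sum(k * (padded[t + width + k] - padded[t + width - k])
            for k in range(1, width + 1)) / denom
        for t in range(T)
    ])

mcep = np.random.randn(100, 25)               # dummy 25-dim mel-cepstrum track
obs = np.hstack([mcep, deltas(mcep), deltas(deltas(mcep))])  # static + delta + delta-delta
```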
