• Title/Summary/Keyword: phoneme frequency

Search Results: 52

Application of Preemphasis FIR Filtering To Speech Detection and Phoneme Segmentation (프리엠퍼시스 FIR 필터링의 음성 검출 및 음소 분할에의 응용)

  • Lee, Chang-Young
    • The Journal of the Korea Institute of Electronic Communication Sciences
    • /
    • v.8 no.5
    • /
    • pp.665-670
    • /
    • 2013
  • In this paper, we propose a new method of speech detection and phoneme segmentation. We investigate the effect of applying preemphasis FIR filtering to the speech signal before the usual speech detection, which uses the energy profile to discriminate speech from background noise. Through this procedure, speech sections of low energy and low frequency become distinct in the energy profile. It is verified experimentally that the silence/speech boundary becomes sharper with the filtering than with the conventional method. Applying this procedure is also found to greatly facilitate phoneme segmentation.
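The preemphasis step described above is a first-order FIR high-pass filter, y[n] = x[n] - a*x[n-1]. The sketch below is illustrative only (the coefficient 0.97 and the frame sizes are common defaults, not values from the paper); it shows how preemphasis sharpens the energy contrast between a low-frequency and a higher-frequency segment:

```python
import numpy as np

def preemphasis(x, alpha=0.97):
    """First-order FIR high-pass filter: y[n] = x[n] - alpha * x[n-1]."""
    return np.append(x[0], x[1:] - alpha * x[:-1])

def frame_energies(x, frame_len=400, hop=160):
    """Short-time energy profile used to separate speech from background noise."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.array([np.sum(x[i*hop : i*hop + frame_len] ** 2) for i in range(n)])

# Toy signal: a low-frequency segment followed by a higher-frequency one.
t = np.arange(4000) / 8000.0
sig = np.concatenate([0.5 * np.sin(2 * np.pi * 60 * t),
                      0.5 * np.sin(2 * np.pi * 1500 * t)])
e_raw = frame_energies(sig)
e_pre = frame_energies(preemphasis(sig))
# Preemphasis attenuates the low-frequency segment far more strongly, so the
# energy contrast between the two segments becomes much sharper.
```

With the filtering applied, a simple energy threshold separates the two segments far more cleanly, which is the effect the paper exploits for speech detection.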

Phoneme distribution and phonological processes of orthographic and pronounced phrasal words in light of syllable structure in the Seoul Corpus (음절구조로 본 서울코퍼스의 글 어절과 말 어절의 음소분포와 음운변동)

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.8 no.3
    • /
    • pp.1-9
    • /
    • 2016
  • This paper investigated the phoneme distribution and phonological processes of orthographic and pronounced phrasal words in light of syllable structure in the Seoul Corpus, in order to provide linguists and phoneticians with a clearer understanding of the Korean language system. To achieve this goal, the phrasal words were extracted from the transcribed label scripts of the Seoul Corpus using Praat. Following this, the onsets, peaks, codas, and syllable types of the phrasal words were analyzed using an R script. Results revealed that k0 was the most frequently used onset in both orthographic and pronounced phrasal words. Also, aa was the most favored vowel in the Korean syllable peak, with fewer phonological processes in its pronounced form. The total proportion of diphthongs among the peaks of the orthographic phrasal words was 8.8%, almost double that found in the pronounced phrasal words. For the codas, nn accounted for 34.4% of the total in pronounced phrasal words and was the most frequent coda. From the syllable type classification of the Corpus, CV appeared to be the most frequent type, followed by CVC, V, and VC in the orthographic forms. Overall, onsets were more prevalent in pronunciation than codas. From these results, this paper concludes that an analysis of phoneme distribution and phonological processes in light of syllable structure can contribute greatly to the understanding of the phonology of spoken Korean.
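The onset/peak/coda counting the authors perform with an R script can be sketched in Python. The tuple representation and the toy syllables below are illustrative assumptions (reusing labels such as k0, aa, and nn from the abstract), not the Seoul Corpus format:

```python
from collections import Counter

def phoneme_distribution(syllables):
    """Count onset, peak (nucleus), coda, and syllable-type frequencies over
    (onset, peak, coda) tuples; an empty string marks an absent slot."""
    onsets, peaks, codas, types = Counter(), Counter(), Counter(), Counter()
    for onset, peak, coda in syllables:
        if onset:
            onsets[onset] += 1
        peaks[peak] += 1
        if coda:
            codas[coda] += 1
        types[('C' if onset else '') + 'V' + ('C' if coda else '')] += 1
    return onsets, peaks, codas, types

# Toy syllabified input reusing label conventions from the abstract (k0, aa, nn).
toy = [('k0', 'aa', ''), ('k0', 'aa', 'nn'), ('', 'aa', ''), ('s0', 'ii', 'nn')]
onsets, peaks, codas, types = phoneme_distribution(toy)
```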

Phoneme segmentation and Recognition using Support Vector Machines (Support Vector Machines에 의한 음소 분할 및 인식)

  • Lee, Gwang-Seok;Kim, Deok-Hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.981-984
    • /
    • 2010
  • In this paper, we used Support Vector Machines (SVMs), a machine learning method, to segment continuous speech into phonemes (initial, medial, and final sounds), and then performed continuous speech recognition on the result. The decision boundary of a phoneme is determined by an algorithm based on maximum frequency within a short interval. Speech recognition is performed by a Continuous Hidden Markov Model (CHMM), and the result is compared with phoneme boundaries obtained by visual inspection. The simulation results confirm that the proposed SVM-based method is more effective for initial sounds than Gaussian Mixture Models (GMMs).

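The idea of treating segmentation as classification of short-interval frame features can be sketched as follows. This is a minimal stand-in, not the authors' system: the Pegasos-style linear SVM, the two synthetic 2-D feature clusters, and all parameters are assumptions for illustration:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=1):
    """Minimal linear SVM trained with Pegasos-style sub-gradient descent.
    Labels must be in {-1, +1}; a constant column is appended so the bias
    is learned (and lightly regularized) along with the weights."""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xa.shape[1])
    rng = np.random.default_rng(seed)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xa)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * (Xa[i] @ w)
            w *= 1.0 - eta * lam
            if margin < 1:
                w += eta * y[i] * Xa[i]
    return w

def predict(w, X):
    Xa = np.hstack([X, np.ones((len(X), 1))])
    return np.where(Xa @ w >= 0, 1, -1)

rng = np.random.default_rng(0)
# Synthetic 2-D frame features (think energy and zero-crossing rate) for two classes.
A = rng.normal([0.0, 1.0], 0.02, size=(100, 2))
B = rng.normal([1.0, 0.0], 0.02, size=(100, 2))
X = np.vstack([A, B])
y = np.array([-1] * 100 + [1] * 100)
w = train_linear_svm(X, y)

# A frame sequence crossing from one phoneme to the next: the first label
# flip marks the candidate phoneme boundary.
seq = np.array([[0.0, 1.0]] * 5 + [[1.0, 0.0]] * 5)
labels = predict(w, seq)
boundary = int(np.argmax(labels != labels[0]))
```

In the paper's setting the features would come from real speech frames and the flip position would be compared against manually placed boundaries.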

Phoneme Segmentation in Consideration of Speech feature in Korean Speech Recognition (한국어 음성인식에서 음성의 특성을 고려한 음소 경계 검출)

  • 서영완;송점동;이정현
    • Journal of Internet Computing and Services
    • /
    • v.2 no.1
    • /
    • pp.31-38
    • /
    • 2001
  • A speech database built up from phonemes is important in studies of speech recognition, synthesis, and analysis. Phonemes consist of voiced and unvoiced sounds. Although voiced and unvoiced sounds differ in many features, traditional algorithms for detecting the boundary between phonemes do not reflect these differences; they determine the boundary by comparing parameters of the current frame with those of the previous frame in the time domain. In this paper, we propose the assort algorithm, a block-based method for phoneme segmentation that reflects the feature differences between voiced and unvoiced sounds. The assort algorithm uses a distance measure based on MFCC (Mel-Frequency Cepstral Coefficients) for spectral comparison, and uses energy, zero-crossing rate, spectral energy ratio, and formant frequency to separate voiced sounds from unvoiced sounds. In our experiments, the proposed system showed about 79% precision on isolated words of three or four syllables, an improvement of about 8 percentage points over the existing phoneme segmentation system.

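An MFCC-based frame distance of the kind the assort algorithm uses for spectral comparison can be computed self-containedly; the filterbank size, FFT length, and frame parameters below are common defaults rather than the paper's settings:

```python
import numpy as np

def mel_filterbank(n_filters=20, n_fft=512, sr=8000):
    """Triangular filters spaced evenly on the mel scale."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for j in range(l, c):
            fb[i - 1, j] = (j - l) / max(c - l, 1)
        for j in range(c, r):
            fb[i - 1, j] = (r - j) / max(r - c, 1)
    return fb

def mfcc_frames(x, frame_len=256, hop=128, n_fft=512, n_coef=12, sr=8000):
    """MFCC vectors per frame: power spectrum -> mel filterbank -> log -> DCT-II."""
    fb = mel_filterbank(n_fft=n_fft, sr=sr)
    n = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i*hop : i*hop + frame_len] * np.hamming(frame_len)
                       for i in range(n)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    logmel = np.log(power @ fb.T + 1e-10)
    k = np.arange(n_coef)[:, None]
    m = np.arange(fb.shape[0])[None, :]
    dct = np.cos(np.pi * k * (2 * m + 1) / (2 * fb.shape[0]))  # DCT-II basis
    return logmel @ dct.T

# Two concatenated tones stand in for adjacent phonemes; the largest
# frame-to-frame cepstral distance falls at the transition.
t = np.arange(4000) / 8000.0
sig = np.concatenate([np.sin(2 * np.pi * 300 * t), np.sin(2 * np.pi * 2000 * t)])
c = mfcc_frames(sig)
dist = np.linalg.norm(np.diff(c, axis=0), axis=1)
boundary_frame = int(np.argmax(dist))
```

In the full algorithm this spectral distance would be combined with energy, zero-crossing rate, and formant cues before a boundary is declared.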

Phoneme Recognition and Error in Postlingually Deafened Adults with Cochlear Implantation (언어습득 이후 난청 성인 인공와우이식자의 음소 지각과 오류)

  • Choi, A.H.;Heo, S.D.
    • Journal of Rehabilitation Welfare Engineering & Assistive Technology
    • /
    • v.8 no.3
    • /
    • pp.227-232
    • /
    • 2014
  • The aim of this study was to investigate phoneme recognition in postlingually deafened adults with cochlear implants. Twenty-one cochlear implantees who had used their implants for more than one year participated. To measure consonant recognition, subjects were asked to identify 18 Korean consonants in an /aCa/ context by audition alone. Scores ranged from 11% to 86% (60±17%). Consonant recognition correlated significantly with the implanted hearing threshold level (p<.046). These results suggest that consonant recognition in postlingually deafened adult implantees depends on implanted hearing. Subjects scored higher on fricatives and affricates, which occupy distinctive frequency bands, than on plosives, liquids, and nasals, which share the same or adjacent frequency bands. All subjects showed confusion patterns among consonants with the same manner of articulation. These confusions arose because the subjects could not distinguish the differing intensities and durations of consonants within the same or adjacent frequency bands.


Phoneme distribution and syllable structure of entry words in the CMU English Pronouncing Dictionary

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.8 no.2
    • /
    • pp.11-16
    • /
    • 2016
  • This study explores the phoneme distribution and syllable structure of entry words in the CMU English Pronouncing Dictionary to provide phoneticians and linguists with fundamental phonetic data on English word components. Entry words in the dictionary file were syllabified using an R script and examined to obtain the following results: First, English words preferred consonants to vowels in their word components. In addition, monophthongs occurred much more frequently than diphthongs. When all consonants were categorized by manner and place, the distribution indicated the frequency order of stops, fricatives, and nasals according to manner and that of alveolars, bilabials and velars according to place. These results were comparable to the results obtained from the Buckeye Corpus (Yang, 2012). Second, from the analysis of syllable structure, two-syllable words were most favored, followed by three- and one-syllable words. Of the words in the dictionary, 92.7% consisted of one, two or three syllables. This result may be related to human memory or decoding time. Third, the English words tended to exhibit discord between onset and coda consonants and between adjacent vowels. Dissimilarity between the last onset and the first coda was found in 93.3% of the syllables, while 91.6% of the adjacent vowels were different. From the results above, the author concludes that an analysis of the phonetic symbols in a dictionary may lead to a deeper understanding of English word structures and components.
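Because every vowel symbol in the CMU dictionary carries a stress digit (0, 1, or 2), the syllable count of an entry equals its number of vowel phones, which is the basis of syllable statistics like those above. A minimal sketch, with three toy entries standing in for the dictionary file:

```python
from collections import Counter

VOWEL_MARKS = set("012")  # CMU dictionary vowels end in a stress digit

def syllable_count(phones):
    """Syllables per entry = number of vowel (peak) phones in the transcription."""
    return sum(1 for p in phones if p[-1] in VOWEL_MARKS)

# Toy entries in CMU ARPABET notation (a tiny stand-in for the dictionary file).
entries = {
    "cat":      ["K", "AE1", "T"],
    "phoneme":  ["F", "OW1", "N", "IY2", "M"],
    "syllable": ["S", "IH1", "L", "AH0", "B", "AH0", "L"],
}
dist = Counter(syllable_count(p) for p in entries.values())
```

Full syllabification (assigning each consonant to an onset or coda) additionally needs phonotactic rules, which is what the study's R script implements.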

Phoneme Separation and Establishment of Time-Frequency Discriminative Pattern on Korean Syllables (음절신호의 음소 분리와 시간-주파수 판별 패턴의 설정)

  • 류광열
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.16 no.12
    • /
    • pp.1324-1335
    • /
    • 1991
  • In this paper, phoneme separation and the establishment of discriminative patterns for Korean phonemes are studied experimentally. The separation uses parameters such as pitch, the glottal peak pulse width of each pitch period, speech duration, envelope, and amplitude bias. The first pitch period is extracted using deviations of glottal peak and width, energy, and normalization of a bias on the top of the vowel envelope; adjacent pitch periods are then traced through the whole vowel. For vowels, a method is proposed that reduces gliding patterns and makes vowel distinction possible using only the second formant, and a shrunken pitch waveform independent of pitch length is estimated. For consonants, patterns of envelope, spectrum, and shrunken waveform are detected, together with a method of analysis based on the mutual relations among phonemes and the manners of articulation. Experimental results show discrimination rates of 90% for vowel phonemes, and 80% and 60% for initial and final consonants, respectively.

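Pitch extraction, the first step of the separation above, can be illustrated with a standard autocorrelation estimator; this generic method, with an assumed sampling rate and search range, stands in for the paper's glottal-peak-based extraction:

```python
import numpy as np

def estimate_pitch(frame, sr=8000, fmin=60.0, fmax=400.0):
    """Autocorrelation pitch estimate: pick the lag with the strongest
    self-similarity inside the plausible pitch-period range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

# A 125 Hz synthetic tone stands in for a voiced frame.
t = np.arange(2048) / 8000.0
f0 = estimate_pitch(np.sin(2 * np.pi * 125.0 * t))
```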

A Study on the Spectrum Variation of Korean Speech (한국어 음성의 스펙트럼 변화에 관한 연구)

  • Lee Sou-Kil;Song Jeong-Young
    • Journal of Internet Computing and Services
    • /
    • v.6 no.6
    • /
    • pp.179-186
    • /
    • 2005
  • We can extract and analyze the spectra of voiced sounds by exploiting their frequency characteristics. In the spectrum, monophthongs are considered stable, but when a consonant meets a vowel in a syllable or word, many spectral changes occur; this is the biggest obstacle to phoneme-level speech recognition. In this study, using the Mel Cepstrum and Mel Band, which take the frequency band and auditory information into account, we analyze the spectrum of every consonant and vowel, systematize the spectral changes that reflect auditory features, and finally present a basis for segmenting speech by phoneme units.


Acoustic Characteristics of Patients' Speech Before and After Orthognathic Surgery (부정교합환자의 수술전.후 발음변화에 관한 음향학적 특성)

  • Jeon, Gyeong-Sook;Kim, Dong-Chil;Hwang, Sang-Joon;Shin, Hyo-Keun;Kim, Hyun-Gi
    • Speech Sciences
    • /
    • v.14 no.3
    • /
    • pp.93-109
    • /
    • 2007
  • It is reported that orthognathic patients suffer not only from aesthetic problems but also from resonance and articulation disorders because of abnormalities of the oral cavity. This study was designed to investigate the nasality and speech intelligibility of patients before and after orthognathic surgery. Eight orthognathic patients participated. The nasality of words containing Korean consonants, as well as the frequency and intensity of the fricative /s/, were measured using a Nasometer and CSL (Computerized Speech Lab). Results were as follows: First, the nasality of patients after orthognathic surgery decreased in spontaneous speech, with a significant difference in nasality for all words between pre- and post-surgery. Second, the nasality of each Korean consonant phoneme decreased after surgery, also with a significant difference between pre- and post-surgery. Third, the decrease in nasality appeared in plosives, affricates, fricatives, liquids, and nasals after surgery, but the difference was significant only for plosives and fricatives. Finally, the frequency and intensity of the fricative /s/ increased after surgery.


Feature Extraction Based on Speech Attractors in the Reconstructed Phase Space for Automatic Speech Recognition Systems

  • Shekofteh, Yasser;Almasganj, Farshad
    • ETRI Journal
    • /
    • v.35 no.1
    • /
    • pp.100-108
    • /
    • 2013
  • In this paper, a feature extraction (FE) method is proposed that is comparable to the traditional FE methods used in automatic speech recognition systems. Unlike the conventional spectral-based FE methods, the proposed method evaluates the similarities between an embedded speech signal and a set of predefined speech attractor models in the reconstructed phase space (RPS) domain. In the first step, a set of Gaussian mixture models is trained to represent the speech attractors in the RPS. Next, for a new input speech frame, a posterior-probability-based feature vector is evaluated, which represents the similarity between the embedded frame and the learned speech attractors. We conduct experiments for a speech recognition task utilizing a toolkit based on hidden Markov models, over FARSDAT, a well-known Persian speech corpus. Through the proposed FE method, we gain 3.11% absolute phoneme error rate improvement in comparison to the baseline system, which exploits the mel-frequency cepstral coefficient FE method.
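The pipeline (time-delay embedding into the RPS, attractor models, posterior-probability features) can be sketched as follows; a single full-covariance Gaussian per class stands in for the paper's Gaussian mixtures, and the two synthetic tones are assumed stand-ins for phoneme classes:

```python
import numpy as np

def delay_embed(x, dim=3, tau=2):
    """Time-delay (Takens) embedding of a 1-D signal into the reconstructed phase space."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i : i + n] for i in range(0, dim * tau, tau)], axis=1)

def fit_attractor(points):
    """One full-covariance Gaussian as a simplified attractor model (the paper uses GMMs)."""
    mu = points.mean(0)
    cov = np.cov(points.T) + 1e-6 * np.eye(points.shape[1])
    return mu, cov

def log_lik(points, mu, cov):
    d = points - mu
    _, logdet = np.linalg.slogdet(cov)
    maha = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)
    return -0.5 * (points.shape[1] * np.log(2 * np.pi) + logdet + maha)

def posterior_features(frame, models):
    """Posterior-probability feature vector: similarity of the embedded frame
    to each learned attractor, averaged over the frame's phase-space points."""
    ll = np.array([log_lik(delay_embed(frame), mu, cov).mean() for mu, cov in models])
    p = np.exp(ll - ll.max())
    return p / p.sum()

# Two synthetic tones stand in for two phoneme classes.
t = np.arange(1024) / 8000.0
tone_a = np.sin(2 * np.pi * 200 * t)
tone_b = np.sin(2 * np.pi * 1200 * t)
models = [fit_attractor(delay_embed(tone_a)), fit_attractor(delay_embed(tone_b))]
feat = posterior_features(tone_a, models)   # a new frame resembling class A
```

The resulting posterior vector plays the role the MFCC vector plays in the baseline: it is what gets handed to the HMM recognizer.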