• Title/Summary/Keyword: formant frequencies

Search results: 75

The effect of palatal height on the Korean vowels (구개의 높이가 한국어 모음 발음에 미치는 효과에 관한 연구)

  • Chung, Bo-Yoon;Lim, Young-Jun;Kim, Myung-Joo;Nam, Shin-Eun;Lee, Seung-Pyo;Kwon, Ho-Beom
    • The Journal of Korean Academy of Prosthodontics / v.48 no.1 / pp.69-74 / 2010
  • Purpose: The purpose of this study was to analyze the influence of palatal height on Korean vowels and speech intelligibility in Korean adults and to produce baseline data for future prosthodontic treatment. Materials and methods: Forty-one healthy Korean men and women who had no problems with pronunciation, hearing, or communication and no history of airway disease participated in this study. Subjects were classified into H, M, and L groups after clinical determination of palatal height on study casts. Seven Korean vowels were used as sample vowels, and the subjects' clear speech sounds were recorded on a computer with the Multispeech software program. The F1 and F2 values of the 3 groups were obtained and compared. In addition, the vowel working spaces of the 3 groups, defined by the corner vowels /a/, /i/, and /u/, were constructed and their areas were compared. The Kruskal-Wallis test and the Mann-Whitney U test were used for statistical analysis, and P < .05 was considered statistically significant. Results: There were no significant differences in formant frequencies among the 3 groups except for the F2 frequency between the H and L groups (P = .003). The vowel working spaces of the 3 groups were similar in shape, and no significant differences in their areas were found. Conclusion: Palatal height did not affect vowel formant frequencies for most of the vowels, nor did it affect speech intelligibility. The dynamics of tongue activity appear to compensate for the morphological differences.
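
The vowel working space area compared in this study can be computed from the (F1, F2) coordinates of the three corner vowels /a/, /i/, and /u/ using the triangle (shoelace) formula. A minimal Python sketch; the formant values below are hypothetical placeholders, not data from the study:

```python
# Area of the /a/-/i/-/u/ vowel working space from (F1, F2) corner coordinates.
# Triangle (shoelace) formula: |x1(y2-y3) + x2(y3-y1) + x3(y1-y2)| / 2.

def vowel_space_area(corners):
    """corners: three (F1, F2) pairs in Hz, one per corner vowel /a/, /i/, /u/."""
    (x1, y1), (x2, y2), (x3, y3) = corners
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

# Hypothetical formant values (Hz), for illustration only.
corners = [(800, 1300), (300, 2300), (350, 800)]  # /a/, /i/, /u/
print(f"Vowel working space area: {vowel_space_area(corners):.0f} Hz^2")
```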

A Signal Processing Technique for Predictive Fault Detection based on Vibration Data (진동 데이터 기반 설비고장예지를 위한 신호처리기법)

  • Song, Ye Won;Lee, Hong Seong;Park, Hoonseok;Kim, Young Jin;Jung, Jae-Yoon
    • The Journal of Society for e-Business Studies / v.23 no.2 / pp.111-121 / 2018
  • Many problems in rotating machinery such as aircraft engines, wind turbines, and motors are caused by bearing defects. Bearing abnormalities can be detected by analyzing signal data such as vibration or noise, but proper pre-processing with a few signal processing techniques is required to analyze their frequency content. In this paper, we introduce a condition monitoring method for diagnosing failures of rotating machines by analyzing the vibration signal of the bearing. From the collected signal data, the normal state is learned, and new data are then classified as normal or abnormal relative to that trained normal state. For preprocessing, a Hamming window is applied to suppress the spectral leakage generated in this process, and cepstrum analysis is performed to recover the underlying spectral structure of the signal, referred to as the formant. From the vibration data of the IMS bearing dataset, we extracted 6 statistical indicators from the cepstral coefficients and showed that a Mahalanobis distance classifier can monitor the bearing status and detect failures in advance.
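
A minimal sketch of the pre-processing chain described above: Hamming-windowed frames, the real cepstrum, simple statistics over the cepstral coefficients, and a Mahalanobis distance score against a trained normal state. The frame length, the six statistics, and the random placeholder data are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def real_cepstrum(frame):
    """Real cepstrum of one vibration frame."""
    windowed = frame * np.hamming(len(frame))    # Hamming window to reduce spectral leakage
    log_mag = np.log(np.abs(np.fft.fft(windowed)) + 1e-12)
    return np.real(np.fft.ifft(log_mag))         # inverse FFT of log spectrum -> cepstrum

def feature_vector(frame, n_coeffs=20):
    """Six simple statistical indicators over the low-quefrency cepstral coefficients."""
    c = real_cepstrum(frame)[:n_coeffs]
    return np.array([c.mean(), c.std(), c.max(), c.min(),
                     np.sqrt(np.mean(c ** 2)), np.mean(np.abs(c))])

def mahalanobis_score(x, mean, cov_inv):
    """Distance of a feature vector from the trained normal state."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# "Train" the normal state on frames assumed healthy, then score a new frame.
rng = np.random.default_rng(0)
normal_frames = rng.standard_normal((200, 1024))   # placeholder for healthy vibration frames
X = np.vstack([feature_vector(f) for f in normal_frames])
mean, cov_inv = X.mean(axis=0), np.linalg.pinv(np.cov(X, rowvar=False))

test_frame = rng.standard_normal(1024)              # placeholder for a newly acquired frame
print("Mahalanobis distance:", mahalanobis_score(feature_vector(test_frame), mean, cov_inv))
```

A distance that drifts above the range observed on healthy data would flag a deteriorating bearing.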

Laryngeal Findings and Phonetic Characteristics in Prelingually Deaf Patients (언어습득기 이전 청각장애인의 후두소견 및 음성학적 특성)

  • Kim, Seong-Tae;Yoon, Tae-Hyun;Kim, Sang-Yoon;Choi, Seung-Ho;Nam, Soon-Yuhl
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.20 no.1 / pp.57-62 / 2009
  • Background and Objectives: Few reported studies specifically examine laryngeal function in patients with profound hearing loss or deafness. This study was designed to examine videostroboscopic findings and phonetic characteristics in prelingually deaf adult patients. Materials and Method: Sixteen patients (seven males, nine females) diagnosed as prelingually deaf, aged 19 to 54 years, were compared with a control group of 20 subjects with no laryngeal pathology and normal hearing. Videostroboscopic evaluations were rated by experienced judges on various parameters describing the structure and function of the laryngeal mechanism during phonation at comfortable pitch and loudness. Acoustic analysis was performed, and a nasalance test was administered using the rabbit, baby, and mother passages. The CSL was used to measure the first and second formant frequencies of the vowels /a/, /i/, and /u/. Statistical analysis was done using the Mann-Whitney U test or the Wilcoxon signed-rank test. Results: Videostroboscopic findings showed phase symmetry but significantly more occurrences of decreased amplitude of vibration, decreased mucosal wave, irregularity of vibration, and increased glottal gap size during the closed phase of phonation. In addition, the prelingually deaf group was observed to have significantly more occurrences of abnormal supraglottic activity during phonation. The shimmer percentage in the prelingually deaf group was higher than in the control group. Regarding vowel characteristics, the second formant of the vowel /i/ was lower. Nasalance in prelingually deaf patients showed normal nasality for all passages. Conclusion: Prelingually deaf patients show abnormal stroboscopic findings without any mucosal lesion, suggesting that they have a considerable functional voice disorder. We suggest that prelingually deaf adults undergo vocal training to normalize laryngeal function after cochlear implantation.

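The shimmer percentage reported in this study is a cycle-to-cycle amplitude perturbation measure. A minimal Python sketch of local shimmer computed from per-cycle peak amplitudes of a sustained vowel; the amplitude values are hypothetical placeholders, not data from the study:

```python
import numpy as np

def local_shimmer_percent(peak_amplitudes):
    """Local shimmer (%): mean absolute amplitude difference between consecutive
    glottal cycles, divided by the mean cycle amplitude."""
    a = np.asarray(peak_amplitudes, dtype=float)
    return 100.0 * np.abs(np.diff(a)).mean() / a.mean()

# Hypothetical per-cycle peak amplitudes from a sustained /a/ (arbitrary units).
amplitudes = [0.82, 0.80, 0.85, 0.79, 0.83, 0.81, 0.84]
print(f"Local shimmer: {local_shimmer_percent(amplitudes):.2f} %")
```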

Production of English Vowels by Korean Learners (한국인 학습자의 영어 모음 발화 연구)

  • Lee, Kye-Youn;Cho, Mi-Hui
    • The Journal of the Korea Contents Association / v.13 no.9 / pp.495-503 / 2013
  • The purpose of this study was to investigate how Korean speakers produce English vowels. Twenty-one Korean learners produced the vowels [i, ɪ, eɪ, ɛ, æ, ɑ, ʌ, ɔ, oʊ, ʊ, u] in bVt or pVt real-word frames. Acoustic measurements were conducted for the vowel formant frequencies (F1, F2) and duration. Results showed that the Korean learners tended to produce longer vowel durations than native English speakers. The front vowels produced by the Korean participants also tended to be articulated with a more fronted tongue position. In addition, the Korean participants distinguished tense-lax pairs not by quality (F1, F2) but by vowel duration, unlike native English speakers, who differentiate tense-lax pairs by quality (F1, F2) as well as duration. Based on these results, pedagogical implications are discussed.
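
Formant measurements like the F1 and F2 values in this study are commonly estimated from a vowel segment by LPC root-finding. A minimal sketch assuming a mono 16 kHz NumPy signal (synthetic here); the LPC order, root-magnitude threshold, and frequency limits are illustrative choices, not the study's measurement settings:

```python
import numpy as np

def lpc_coefficients(signal, order):
    """LPC coefficients via the autocorrelation (Yule-Walker) method."""
    windowed = signal * np.hamming(len(signal))
    r = np.correlate(windowed, windowed, mode="full")[len(windowed) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))           # prediction-error polynomial 1 - sum(a_k z^-k)

def estimate_formants(signal, fs, order=8):
    """Estimate formant frequencies (Hz) from the roots of the LPC polynomial."""
    roots = np.roots(lpc_coefficients(signal, order))
    roots = roots[(np.imag(roots) > 0) & (np.abs(roots) > 0.9)]   # keep narrow-bandwidth resonances
    freqs = np.angle(roots) * fs / (2 * np.pi)
    return sorted(f for f in freqs if 90 < f < fs / 2 - 50)

# Synthetic vowel-like signal with resonances near 300 Hz and 2300 Hz (roughly an /i/-like pattern).
fs = 16000
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(1)
vowel = (np.exp(-60 * t) * np.sin(2 * np.pi * 300 * t)
         + 0.5 * np.exp(-80 * t) * np.sin(2 * np.pi * 2300 * t)
         + 1e-3 * rng.standard_normal(t.size))
f1, f2 = estimate_formants(vowel, fs)[:2]
print(f"Estimated F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
```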

Effective Feature Vector for Isolated-Word Recognizer using Vocal Cord Signal (성대신호 기반의 명령어인식기를 위한 특징벡터 연구)

  • Jung, Young-Giu;Han, Mun-Sung;Lee, Sang-Jo
    • Journal of KIISE: Software and Applications / v.34 no.3 / pp.226-234 / 2007
  • In this paper, we develop a speech recognition system using a throat microphone. This kind of microphone minimizes the impact of environmental noise. However, because of the absence of high frequencies and the partial loss of formant frequencies, previous systems developed with such devices have shown lower recognition rates than systems that use standard microphone signals. This problem has led researchers to use throat microphone signals as supplementary data sources supporting standard microphone signals. In this paper, we present a high-performance ASR system developed using only a throat microphone, taking advantage of Korean Phonological Feature Theory and a detailed analysis of the throat signal. Analyzing the spectrum and FFT results of the throat microphone signal, we find that the conventional MFCC feature vector, which uses a critical-band filter bank, does not characterize throat microphone signals well. We also describe the conditions a feature extraction algorithm must meet to be well suited to throat microphone signal analysis: (1) a sensitive band-pass filter and (2) a feature vector suitable for voice/non-voice classification. We experimentally show that the ZCPA algorithm, designed to meet these conditions, improves the recognizer's performance by approximately 16%, and that an additional noise-canceling algorithm such as RASTA yields a further 2% improvement.
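
The ZCPA (Zero-Crossings with Peak Amplitudes) feature referred to above accumulates, for each band of a filter bank, a histogram of frequencies estimated from the intervals between upward zero crossings, weighted by the logarithm of the peak amplitude within each interval. A simplified single-band sketch; the Butterworth band-pass filter, band edges, and histogram bins are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from scipy.signal import butter, lfilter

def zcpa_band_histogram(signal, fs, band, freq_bins):
    """ZCPA-style histogram for one band: inverse zero-crossing intervals (frequency
    estimates) weighted by the log peak amplitude between successive upward crossings."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = lfilter(b, a, signal)
    up = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]     # upward zero-crossing indices
    hist = np.zeros(len(freq_bins) - 1)
    for i0, i1 in zip(up[:-1], up[1:]):
        freq = fs / (i1 - i0)                          # interval length -> frequency estimate
        peak = np.max(np.abs(x[i0:i1]))                # peak amplitude within the interval
        k = np.searchsorted(freq_bins, freq) - 1
        if 0 <= k < len(hist):
            hist[k] += np.log1p(peak)                  # log-compressed peak weighting
    return hist

# Example: a 500 Hz tone analyzed in a 300-800 Hz band at 16 kHz sampling (illustration only).
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
tone = np.sin(2 * np.pi * 500 * t)
bins = np.linspace(100, 2000, 20)
print(zcpa_band_histogram(tone, fs, (300, 800), bins))
```

A full ZCPA front end would repeat this over a bank of auditory-spaced bands and stack the per-band histograms into the frame-level feature vector.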