• Title/Summary/Keyword: Articulatory Organs

12 search results

Development of Speech Training Aids Using Vocal Tract Profile (조음도를 이용한 발음훈련기기의 개발)

  • 박상희;김동준;이재혁;윤태성
    • The Transactions of the Korean Institute of Electrical Engineers / v.41 no.2 / pp.209-216 / 1992
  • Deaf people train articulation by observing a tutor's mouth, by tactually sensing the motions of the vocal organs, or by using speech training aids. Present speech training aids for the deaf can measure only a single speech parameter, or display only frequency spectra as histograms or in pseudo-color. In this study, a speech training aid is developed that displays the subject's articulation as a cross section of the vocal organs together with other speech parameters in a single system, so that the subject knows where to correct. To this end, the speech production mechanism is first assumed to be an AR model in order to estimate articulatory motions of the vocal organs from the speech signal. Next, a vocal tract profile model using LP analysis is built. Using this model, articulatory motions for Korean vowels are estimated and displayed as vocal tract profile graphics.

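The AR-model/LP step described in this abstract, from speech signal to a tube-model profile, can be sketched in code. This is a minimal illustration under standard textbook assumptions (autocorrelation-method LPC via Levinson-Durbin, and the classic reflection-coefficient-to-area mapping), not the authors' system; the function names and the synthetic signal are invented:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion on autocorrelation r[0..order].
    Returns prediction coefficients a (a[0] = 1) and reflection
    (PARCOR) coefficients k."""
    a = [1.0]
    k = []
    err = r[0]
    for i in range(1, order + 1):
        acc = sum(a[j] * r[i - j] for j in range(i))
        ki = -acc / err
        k.append(ki)
        # a_new[j] = a[j] + ki * a[i - j]  (j = 1..i-1), plus a_new[i] = ki
        a = [a[j] + ki * (a[i - j] if 0 < i - j < len(a) else 0.0)
             for j in range(i)] + [ki]
        err *= 1.0 - ki * ki
    return np.array(a), np.array(k)

def tube_areas(k, lip_area=1.0):
    """Lossless-tube mapping: successive section areas from the
    reflection coefficients, A_{m+1} = A_m * (1 - k_m) / (1 + k_m)."""
    areas = [lip_area]
    for ki in k:
        areas.append(areas[-1] * (1.0 - ki) / (1.0 + ki))
    return np.array(areas)

# Demo on a synthetic AR(2) "speech" signal with known coefficients.
rng = np.random.default_rng(0)
x = np.zeros(4000)
for n in range(2, len(x)):
    x[n] = 1.3 * x[n - 1] - 0.6 * x[n - 2] + rng.standard_normal()

order = 10
r = np.array([x[: len(x) - i] @ x[i:] for i in range(order + 1)])
a, k = levinson_durbin(r, order)
areas = tube_areas(k)
```

With real speech the frame would be pre-emphasized and windowed first; the resulting `areas` array is the kind of section profile a vocal-tract display would render.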

On the Implementation of Articulatory Speech Simulator Using MRI (MRI를 이용한 조음모델시뮬레이터 구현에 관하여)

  • Jo, Cheol-Woo
    • Speech Sciences / v.2 / pp.45-55 / 1997
  • This paper describes the procedure of implementing an articulatory speech simulator, in order to model the human articulatory organs and then synthesize speech from the model. The images required to construct the vocal tract model were obtained from MRI and used to construct 2D and 3D vocal tract shapes. The 3D vocal tract shapes were constructed by spatially concatenating and interpolating sectional MRI images; the 2D vocal tract shapes were constructed and converted automatically into a digital filter model, and the speech sounds corresponding to the model were then synthesized from the filter. All procedures in this study were carried out in MATLAB.

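The "2D vocal tract shape to digital filter" step that this paper automates can be illustrated with the classic lossless-tube model: adjacent section areas determine junction reflection coefficients, and a step-up recursion converts those into an all-pole synthesis filter. A sketch under these textbook assumptions (the area values are invented, and this is numpy rather than the paper's MATLAB):

```python
import numpy as np

def reflection_from_areas(areas):
    """Reflection coefficient at each tube junction:
    k_m = (A_m - A_{m+1}) / (A_m + A_{m+1})."""
    A = np.asarray(areas, dtype=float)
    return (A[:-1] - A[1:]) / (A[:-1] + A[1:])

def stepup(k):
    """Step-up recursion: reflection coefficients -> all-pole
    polynomial A(z); the synthesis filter is 1/A(z)."""
    a = [1.0]
    for i, ki in enumerate(k, start=1):
        a = [a[j] + ki * (a[i - j] if 0 < i - j < len(a) else 0.0)
             for j in range(i)] + [ki]
    return np.array(a)

# An invented 8-section area function (cm^2), glottis to lips.
areas = [0.8, 1.1, 2.0, 3.5, 4.0, 2.5, 1.5, 1.0]
k = reflection_from_areas(areas)
a = stepup(k)

# Synthesize a short vowel-like sound: an impulse-train glottal
# source driven through the all-pole model (direct-form recursion).
fs, f0, n = 8000, 100, 1600
src = np.zeros(n)
src[:: fs // f0] = 1.0
y = np.zeros(n)
for t in range(n):
    y[t] = src[t] - sum(a[j] * y[t - j]
                        for j in range(1, len(a)) if t - j >= 0)
```

Because all areas are positive, every |k| < 1, so A(z) is minimum-phase and the synthesis filter is stable.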

Collection of Korean Audio-video Speech Data

  • Jo, Cheol-Woo;Goecke, Roland;Millar, Bruce
    • Speech Sciences / v.7 no.1 / pp.5-15 / 2000
  • In this paper a detailed description of collecting Korean audio-video speech data is presented. The main aim of this experiment is to collect audio-video materials which can be used in later experiments to estimate and model the actions of the visible human articulatory organs, such as the mouth, lips, and jaw. Audio-video data were collected from seven directions separately, and twelve markers were used to trace the movements.

Rate and Regularity of Articulatory Diadochokinetic Performance in Healthy Korean Elderly via Acoustic Analysis (음향학적 분석을 통한 노년층 연령에 따른 조음교대운동의 속도 및 규칙성)

  • Cho, Yoonhee;Kim, Hyanghee
    • Phonetics and Speech Sciences / v.5 no.3 / pp.95-101 / 2013
  • Aging is related to anatomical and physiological changes in the respiratory and phonatory organs. These changes influence articulation, leading to inaccurate speech and slow articulatory diadochokinesis (DDK). DDK indicates the range, rate, regularity, accuracy, and agility of articulation, reflecting motor speech function. The purpose of this study is to investigate the rates and regularities of DDK in healthy Korean elderly through acoustic analysis (Praat). Thirty subjects between the ages of 65 and 94 participated. Rate was measured over 5 seconds, while regularity was calculated from the standard deviation of 1) the syllable duration of each task and 2) the gap duration between syllables. Simple regression analysis was then conducted to examine the effect of age on performance. The results showed that slowing of rate was not significant with advancing age. Regularity, however, showed significant differences in 1) syllable duration for /pʌ/, /kʌ/ and /pʌtʌkʌ/ and 2) the gap duration between syllables for /kʌ/. In conclusion, articulatory coordination is reduced with aging, and /kʌ/ in particular appears to be a sensitive task for articulatory coordination.
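
The two measures used in this study are simple to state concretely: rate is syllables per second over the 5-second window, and regularity can be taken as the standard deviation of the durations in question. A sketch with invented onset times (the study's actual Praat segmentation is not reproduced here):

```python
import numpy as np

def ddk_measures(onsets, window=5.0):
    """DDK rate (syllables/s over the analysis window) and
    regularity (standard deviation of inter-syllable intervals, s)."""
    onsets = np.asarray(onsets, dtype=float)
    rate = len(onsets) / window
    intervals = np.diff(onsets)
    regularity = float(np.std(intervals))
    return rate, regularity

# Perfectly regular /pʌ/ repetitions vs. an irregular sequence.
regular = np.arange(0.0, 5.0, 0.2)                      # 25 onsets, 5 syll/s
irregular = np.cumsum([0.15, 0.3, 0.18, 0.4, 0.2, 0.35])
r1, sd1 = ddk_measures(regular)
r2, sd2 = ddk_measures(irregular)
```

A lower standard deviation means more regular alternation; the same computation applies to gap durations between syllables.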

Speech training aids for deafs (청각 장애자용 발음 훈련 기기의 개발)

  • 김동준;윤태성;박상희
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1991.10a / pp.746-751 / 1991
  • Deaf people train articulation by observing a tutor's mouth, by tactually sensing the motions of the vocal organs, or by using speech training aids. Present speech training aids for the deaf can measure only a single speech parameter, or display only frequency spectra as histograms or in pseudo-color. This study aims to develop a speech training aid that displays the subject's articulation as a cross section of the vocal organs together with other speech parameters in a single system, so that the subject knows where to correct. To this end, the speech production mechanism is first assumed to be an AR model in order to estimate articulatory motions of the vocal tract from the speech signal. Next, a vocal tract profile model using LPC analysis is built. Using this model, articulatory motions for Korean vowels are estimated and displayed as vocal tract profile graphics.

Remote Articulation Training System for the Deafs (청각장애자를 위한 원격조음훈련시스템의 개발)

  • Shin, T.K.;Shin, C.H.;Lee, J.H.;Yoo, S.K.;Park, S.H.
    • Proceedings of the KOSOMBE Conference / v.1996 no.11 / pp.114-117 / 1996
  • In this study, a remote articulation training system which connects the hearing-disabled trainee and the speech therapist via B-ISDN is introduced. The hearing-disabled trainee has no auditory feedback of his own pronunciation, so the chance to watch the movement trajectory of his speech organs offers him self-training of articulation. The system thus has two purposes: self articulation training, and the trainer's on-line checking from a remote place. We estimate the vocal tract articulatory movements from the speech signal using inverse modelling and display the movement trajectory graphically on a side view of the human face. The trainee's articulation trajectories are displayed along with reference trajectories, so the trainee can adjust his articulation to make the two trajectories overlap. For on-line communication and checking of training records, the system provides video conferencing and transfer of articulatory data.

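The feedback loop described here, making the trainee's trajectory overlap the reference, implies some distance score between the two curves. The paper does not specify one; a plausible, purely illustrative choice is the RMS Euclidean distance over frames:

```python
import numpy as np

def trajectory_rms_error(trainee, reference):
    """RMS Euclidean distance between two (frames x 2) trajectories
    of an articulator point on the face side view."""
    t = np.asarray(trainee, dtype=float)
    r = np.asarray(reference, dtype=float)
    assert t.shape == r.shape
    return float(np.sqrt(np.mean(np.sum((t - r) ** 2, axis=1))))

# Invented reference trajectory and two attempts at matching it.
ref = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
good = ref + 0.1                                   # small uniform offset
bad = ref + np.array([[0.0, 1.0], [0.5, 1.0], [1.0, 1.0]])
e_good = trajectory_rms_error(good, ref)
e_bad = trajectory_rms_error(bad, ref)
```

In a training loop the score would be recomputed each utterance so the trainee can watch it shrink as the displayed trajectories converge.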

Remote Articulation Training System for the Deafs (청각장애자를 위한 원격조음훈련시스템의 개발)

  • 이재혁;유선국;박상희
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.7 no.1 / pp.43-49 / 1996
  • In this study, a remote articulation training system which connects the hearing-disabled trainee and the speech therapist via B-ISDN is introduced. The hearing-disabled trainee has no auditory feedback of his own pronunciation, so the chance to watch the movement trajectory of his speech organs offers him self-training of articulation. The system thus has two purposes: self articulation training, and the trainer's on-line checking from a remote place. We estimate the vocal tract articulatory movements from the speech signal using inverse modelling and display the movement trajectory graphically on a side view of the human face. The trainee's articulation trajectories are displayed along with reference trajectories, so the trainee can adjust his articulation to make the two trajectories overlap. For on-line communication and checking of training records, the system provides video conferencing and transfer of articulatory data.

Implementation of Continuous Utterance Using Buffer Rearrangement for Articulatory Synthesizer (조음 음성 합성기에서 버퍼 재정렬을 이용한 연속음 구현)

  • Lee, Hui-Sung;Chung, Myung-Jin
    • Proceedings of the KIEE Conference / 2002.07d / pp.2454-2456 / 2002
  • Since articulatory synthesis models the human vocal organs as precisely as possible, it is potentially the most desirable method for producing various words and languages. This paper proposes a new type of articulatory synthesizer using the Mermelstein vocal tract model and a Kelly-Lochbaum digital filter. Previous research has assumed that the length of the vocal tract and the number of its cross sections do not vary during an utterance; under this assumption, however, continuous utterance cannot easily be implemented. In this paper the limitation is overcome by "buffer rearrangement" for a dynamic vocal tract.

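One way to read "buffer rearrangement" (an interpretation, not the paper's published algorithm): when the tract length, and hence the number of filter sections, changes between frames, the per-section buffer is resampled onto the new section grid instead of being reset, so the spatial profile carries over across the change:

```python
import numpy as np

def rearrange_buffer(buf, new_len):
    """Resample a per-section buffer (e.g. tube areas or travelling-
    wave states) onto a grid with a different number of sections,
    preserving the spatial profile by linear interpolation."""
    buf = np.asarray(buf, dtype=float)
    old_x = np.linspace(0.0, 1.0, len(buf))    # normalized tract position
    new_x = np.linspace(0.0, 1.0, new_len)
    return np.interp(new_x, old_x, buf)

areas = np.array([1.0, 2.0, 4.0, 2.0, 1.0])    # 5-section tract
longer = rearrange_buffer(areas, 9)            # tract lengthens
shorter = rearrange_buffer(areas, 3)           # tract shortens
```

Because the endpoints and interior shape are preserved, consecutive frames of a Kelly-Lochbaum-style lattice can change section count without an audible discontinuity from a buffer reset.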

Diagnosing Vocal Disorders using Cobweb Clustering of the Jitter, Shimmer, and Harmonics-to-Noise Ratio

  • Lee, Keonsoo;Moon, Chanki;Nam, Yunyoung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.11 / pp.5541-5554 / 2018
  • The voice is one of the most significant non-verbal elements of communication. Disorders of the vocal organs, or habitual muscular settings for articulation, cause vocal disorders; by analyzing these disorders it is therefore possible to predict vocal diseases. In this paper, a method of predicting vocal disorders using jitter, shimmer, and the harmonics-to-noise ratio (HNR) extracted from voice recordings is proposed. To extract jitter, shimmer, and HNR, one-second voice signals are recorded at 44.1 kHz. In the experiment, 151 voice recordings were collected and clustered using the Cobweb clustering method, which produced 21 classes with 12 leaves. According to the semantics of jitter, shimmer, and HNR, the class whose centroid has the lowest jitter and shimmer and the highest HNR becomes the normal vocal group. The risk of vocal disorders can then be predicted by measuring the distance and direction between the centroids.
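
Jitter and shimmer have standard local definitions that can be written down directly: the mean absolute difference of consecutive pitch periods (or peak amplitudes) divided by the mean. A sketch with invented period and amplitude sequences (HNR extraction and the Cobweb clustering step itself are omitted):

```python
import numpy as np

def local_jitter(periods):
    """Mean |T_i - T_{i+1}| divided by the mean period (relative)."""
    p = np.asarray(periods, dtype=float)
    return float(np.mean(np.abs(np.diff(p))) / np.mean(p))

def local_shimmer(amps):
    """Mean |A_i - A_{i+1}| divided by the mean peak amplitude."""
    a = np.asarray(amps, dtype=float)
    return float(np.mean(np.abs(np.diff(a))) / np.mean(a))

# Steady phonation vs. a perturbed one (periods in s, amplitudes
# in arbitrary units; values invented for illustration).
steady_T = [0.005, 0.005, 0.005, 0.005]
rough_T = [0.005, 0.006, 0.004, 0.006]
j_steady = local_jitter(steady_T)
j_rough = local_jitter(rough_T)
s_rough = local_shimmer([1.0, 0.7, 1.1, 0.8])
```

The feature vector (jitter, shimmer, HNR) per recording is what would then be fed to the clustering stage.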

Spectrum Feature Analysis of Crying Sounds of Infant Cold and Pneumonia (소아감기와 소아폐렴간의 울음소리 스펙트럼 특징 분석)

  • Kim, Bong-Hyun;Lee, Se-Hwan;Cho, Dong-Uk
    • The KIPS Transactions: Part B / v.15B no.4 / pp.301-306 / 2008
  • Recently, various health care methods for infants have been suggested in an era of low birth rates. In this context, we propose an early diagnosis method for common infant respiratory diseases, specifically infant cold and infant pneumonia. First, crying sounds, the only means of expression for infants, from the infant cold group and the infant pneumonia group are compared with those of the healthy infant group to find the differences. For this, the link between the infected organs and the articulatory organs is investigated. The resulting waveforms and frequency bandwidths of each group are then compared and analyzed using the voice spectrum in order to diagnose infant cold and pneumonia. Finally, the effectiveness of the method is verified through experiments.
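
The kind of frequency-bandwidth comparison described above reduces to comparing band energies of magnitude spectra between groups. A sketch on synthetic signals (pure tones standing in for crying segments; the band edges are invented, not the study's):

```python
import numpy as np

def band_energies(x, fs, edges):
    """Energy of the magnitude spectrum summed in each band
    [edges[i], edges[i+1])."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

fs = 8000
t = np.arange(fs) / fs
low_cry = np.sin(2 * np.pi * 400 * t)      # energy concentrated near 400 Hz
high_cry = np.sin(2 * np.pi * 1200 * t)    # energy concentrated near 1200 Hz
edges = [0, 800, 1600, 4000]
e_low = band_energies(low_cry, fs, edges)
e_high = band_energies(high_cry, fs, edges)
```

On real recordings one would average such band-energy profiles per group (healthy, cold, pneumonia) and compare the distributions.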