• Title/Summary/Keyword: Vowel classification


Application of Machine Learning on Voice Signals to Classify Body Mass Index - Based on Korean Adults in the Korean Medicine Data Center (머신러닝 기반 음성분석을 통한 체질량지수 분류 예측 - 한국 성인을 중심으로)

  • Kim, Junho;Park, Ki-Hyun;Kim, Ho-Seok;Lee, Siwoo;Kim, Sang-Hyuk
    • Journal of Sasang Constitutional Medicine
    • /
    • v.33 no.4
    • /
    • pp.1-9
    • /
    • 2021
  • Objectives The purpose of this study was to examine whether an individual's Body Mass Index (BMI) class could be predicted by analyzing the voice data constructed at the Korean Medicine Data Center (KDC) using machine learning. Methods In this study, we proposed a convolutional neural network (CNN)-based BMI classification model. The subjects of this study were Korean adults who had completed voice recording and BMI measurement in 2006-2015 among the data established at the Korean Medicine Data Center. Of these, 2,825 samples were used for training the model, and 566 samples were used to assess its performance. Mel-frequency cepstral coefficients (MFCC) extracted from vowel utterances were used as the input features of the CNN. The model was constructed to predict a total of four groups according to gender and BMI criteria: overweight male, normal male, overweight female, and normal female. Results & Conclusions Performance evaluation was conducted using F1-score and accuracy. For the four-group prediction, the average accuracy was 0.6016 and the average F1-score was 0.5922. Although the model performed well at gender discrimination, distinguishing BMI within gender will require performance improvement through follow-up studies. As research on deep learning is active, performance improvement is expected in future work.
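
As a rough illustration of the pipeline this abstract describes, the sketch below extracts MFCCs from a vowel recording and feeds them to a small CNN with four output classes (gender x BMI). It is a minimal sketch, not the authors' implementation; the 16 kHz sampling rate, 13 coefficients, 200-frame padding, and network shape are all assumptions.

```python
# Minimal sketch of MFCC-based CNN classification (assumed parameters,
# not the authors' implementation).
import librosa
import numpy as np
import torch
import torch.nn as nn

def extract_mfcc(wav_path, n_mfcc=13, max_frames=200):
    """Load a vowel recording and return a fixed-size MFCC matrix."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Pad or truncate along the time axis so every sample has the same shape.
    if mfcc.shape[1] < max_frames:
        mfcc = np.pad(mfcc, ((0, 0), (0, max_frames - mfcc.shape[1])))
    return mfcc[:, :max_frames]

class VowelCNN(nn.Module):
    """Small 2-D CNN over the (coefficient x time) MFCC matrix."""
    def __init__(self, n_classes=4):  # overweight/normal x male/female
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 3 * 50, n_classes)  # 13x200 -> 3x50 after pooling

    def forward(self, x):  # x: (batch, 1, 13, 200)
        return self.classifier(self.features(x).flatten(1))

# e.g. logits = VowelCNN()(torch.randn(1, 1, 13, 200))
```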

Classification of Signals Segregated using ICA (ICA로 분리한 신호의 분류)

  • Kim, Seon-Il
    • Journal of the Institute of Electronics Engineers of Korea - IE
    • /
    • v.47 no.4
    • /
    • pp.10-17
    • /
    • 2010
  • There is no general method to determine which of the channel outputs of ICA (Independent Component Analysis) is the desired signal. Assuming speech signals contaminated with the sound from the muffler of a car, this paper presents a method for identifying the desired one. Speech signals are expected to show larger correlation coefficients with other speech signals than with non-speech signals. Batch, maximum, and average methods were proposed using the vowels 'ah', 'oh', and 'woo' spoken by the same person who produced the speech signals, and the same vowels spoken by another person. Voting and summation methods were added on top of the correlation coefficients calculated for each vowel. This paper shows which of the several methods tried performs best.
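
The selection idea, correlating each ICA output with reference vowel recordings and then averaging or taking the maximum, can be sketched as follows. The use of scikit-learn's FastICA and the equal-length truncation are assumptions, not the paper's code.

```python
# Sketch of the channel-selection idea: after ICA, pick the output channel
# most correlated with reference vowel recordings (assumed details).
import numpy as np
from sklearn.decomposition import FastICA

def pick_speech_channel(mixed, reference_vowels):
    """mixed: (n_samples, n_channels) mixtures; reference_vowels: list of 1-D arrays."""
    sources = FastICA(n_components=mixed.shape[1], random_state=0).fit_transform(mixed)
    scores = []
    for ch in range(sources.shape[1]):
        s = sources[:, ch]
        # |correlation| against each reference vowel, truncated to equal length.
        cs = [abs(np.corrcoef(s[:len(v)], v[:len(s)])[0, 1]) for v in reference_vowels]
        scores.append(np.mean(cs))  # 'average' method; max(cs) would be the 'maximum' method
    return int(np.argmax(scores))   # index of the presumed speech channel
```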

Phoneme distribution and phonological processes of orthographic and pronounced phrasal words in light of syllable structure in the Seoul Corpus (음절구조로 본 서울코퍼스의 글 어절과 말 어절의 음소분포와 음운변동)

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.8 no.3
    • /
    • pp.1-9
    • /
    • 2016
  • This paper investigated the phoneme distribution and phonological processes of orthographic and pronounced phrasal words in light of syllable structure in the Seoul Corpus, in order to provide linguists and phoneticians with a clearer understanding of the Korean language system. To achieve this goal, the phrasal words were extracted from the transcribed label scripts of the Seoul Corpus using Praat. Following this, the onsets, peaks, codas, and syllable types of the phrasal words were analyzed using an R script. Results revealed that k0 was most frequently used as an onset in both orthographic and pronounced phrasal words. Also, aa was the most favored vowel in the Korean syllable peak, with fewer phonological processes in its pronounced form. The total proportion of all diphthongs according to the frequency of the peaks in the orthographic phrasal words was 8.8%, almost double that found in the pronounced phrasal words. For the codas, nn was the most frequent form, accounting for 34.4% of the total in the pronounced phrasal words. In the syllable-type classification of the corpus, CV was the most frequent type in the orthographic forms, followed by CVC, V, and VC. Overall, onsets were more prevalent than codas in the pronounced forms. From these results, this paper concluded that an analysis of phoneme distribution and phonological processes in light of syllable structure can contribute greatly to the understanding of the phonology of spoken Korean.
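
For illustration, the tallying logic could look like the sketch below, assuming the phrasal words have already been parsed into (onset, peak, coda) syllable tuples; the author's actual analysis used Praat label scripts and an R script, so this only illustrates the counting step.

```python
# Illustrative tally of onsets, peaks, codas, and syllable types
# (assumed input format; the study used Praat and an R script).
from collections import Counter

def tally(syllables):
    """syllables: iterable of (onset, peak, coda) tuples; '' marks an empty slot."""
    onsets, peaks, codas, types = Counter(), Counter(), Counter(), Counter()
    for onset, peak, coda in syllables:
        if onset: onsets[onset] += 1
        peaks[peak] += 1
        if coda: codas[coda] += 1
        types[("C" if onset else "") + "V" + ("C" if coda else "")] += 1
    return onsets, peaks, codas, types

# e.g. tally([("k0", "aa", ""), ("", "aa", "nn")]) counts one CV and one VC syllable
```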

Spectral and Cepstral Analyses of Esophageal Speakers (식도발성화자 음성의 spectral & cepstral 분석)

  • Shim, Hee-Jeong;Jang, Hyo-Ryung;Shin, Hee-Baek;Ko, Do-Heung
    • Phonetics and Speech Sciences
    • /
    • v.6 no.2
    • /
    • pp.47-54
    • /
    • 2014
  • The purpose of this study was to analyze spectral versus cepstral measurements in esophageal speakers. Measurements from thirteen male esophageal speakers were compared with those from a control group of thirteen normal speakers using the sustained vowel /a/. The main results can be summarized as follows: (a) the CPP and L/H ratio of the esophageal group were significantly lower than those of the control group, (b) the CPP was significantly correlated with spectral parameters such as jitter, shimmer, NHR, and VTI, and (c) the ROC analysis showed that a threshold of 10.25 dB for the CPP achieved good classification of esophageal speakers, with 100% sensitivity and specificity. Thus, cepstral-based acoustic measures such as the CPP may be more reliable predictors than spectral-based acoustic measures such as jitter and shimmer, and cepstral-based measures were found to be effective in distinguishing esophageal voice quality from normal voice quality. This research will contribute to establishing a baseline for speech characteristics in voice rehabilitation of laryngectomees.
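
The ROC step in (c) amounts to scanning CPP cutoffs for the best separation. A minimal sketch follows; using Youden's J to pick the cutoff, and the sign convention that lower CPP indicates esophageal voice, are assumptions rather than the study's exact procedure.

```python
# Sketch of ROC-based threshold selection for CPP (illustrative, not the study's data).
import numpy as np
from sklearn.metrics import roc_curve

def best_cpp_threshold(cpp_values, labels):
    """labels: 1 = esophageal, 0 = normal. Lower CPP suggests esophageal voice,
    so negate CPP to use it as a 'higher = more positive' score."""
    fpr, tpr, thresholds = roc_curve(labels, -np.asarray(cpp_values))
    j = tpr - fpr                     # Youden's J statistic at each cutoff
    return -thresholds[np.argmax(j)]  # undo the negation for reporting (in dB)
```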

A study on the phoneme recognition using radial basis function network (RBFN을 이용한 음소인식에 관한 연구)

  • 김주성;김수훈;허강인
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.5
    • /
    • pp.1026-1035
    • /
    • 1997
  • In this paper, we studied phoneme recognition using GPFN and PNN, which are kinds of RBFN. The structure of an RBFN is similar to that of a feedforward network, but it differs in the choice of activation function, reference vectors, and learning algorithm in the hidden layer. In particular, the sigmoid function of the PNN is replaced by a per-category exponential function, and the overall computational cost is low because the PNN performs pattern classification without iterative learning. In phoneme recognition experiments with 5 vowels and 12 consonants, the recognition rates of GPFN and PNN, which reflect the statistical characteristics of speech, were higher than those of an MLP, both for test data and for data quantized by VQ and LVQ.
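
A PNN in this sense stores the training patterns and classifies by summing exponential (Gaussian) kernel activations per class, with no iterative training. A minimal sketch, with the smoothing width sigma as a free parameter:

```python
# Minimal PNN sketch: exponential activations over stored patterns,
# classification without iterative training (sigma is a free parameter).
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    """train_X: (n, d) stored patterns; train_y: (n,) class labels; x: (d,) input."""
    scores = {}
    for c in np.unique(train_y):
        diffs = train_X[train_y == c] - x
        # Exponential kernel replaces the sigmoid activation of an MLP.
        scores[c] = np.mean(np.exp(-np.sum(diffs**2, axis=1) / (2 * sigma**2)))
    return max(scores, key=scores.get)  # class with the largest kernel-density score
```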


Classification of nasal places of articulation based on the spectra of adjacent vowels (모음 스펙트럼에 기반한 전후 비자음 조음위치 판별)

  • Jihyeon Yun;Cheoljae Seong
    • Phonetics and Speech Sciences
    • /
    • v.15 no.1
    • /
    • pp.25-34
    • /
    • 2023
  • This study examined the utility of the acoustic features of vowels as cues for the place of articulation of Korean nasal consonants. In the acoustic analysis, spectral and temporal parameters were measured at the 25%, 50%, and 75% time points in the vowels neighboring nasal consonants in samples extracted from a spontaneous Korean speech corpus. Using these measurements, linear discriminant analyses were performed and classification accuracies for the nasal place of articulation were estimated. The analyses were applied separately for vowels following and preceding a nasal consonant to compare the effects of progressive and regressive coarticulation in terms of place of articulation. The classification accuracies ranged between approximately 50% and 60%, implying that acoustic measurements of vowel intervals alone are not sufficient to predict or classify the place of articulation of adjacent nasal consonants. However, given that these results were obtained for measurements at the temporal midpoint of vowels, where they are expected to be the least influenced by coarticulation, the present results also suggest the potential of utilizing acoustic measurements of vowels to improve the recognition accuracy of nasal place. Moreover, the classification accuracy for nasal place was higher for vowels preceding the nasal sounds, suggesting the possibility of higher anticipatory coarticulation reflecting the nasal place.
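
The discriminant step corresponds to a standard linear discriminant analysis over the vowel measurements. A minimal sketch, where the feature layout and the 5-fold cross-validation are assumptions rather than the paper's exact protocol:

```python
# Sketch of LDA classification of nasal place from vowel acoustics
# (feature layout and cross-validation scheme are assumptions).
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def place_classification_accuracy(X, y):
    """X: rows of acoustic measurements at the 25/50/75% points of each vowel;
    y: place of articulation of the adjacent nasal (e.g., labial/alveolar/velar)."""
    lda = LinearDiscriminantAnalysis()
    return cross_val_score(lda, X, y, cv=5).mean()
```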

A Basic Study on the Differential Diagnostic System of Laryngeal Diseases using Hierarchical Neural Networks (다단계 신경회로망을 이용한 후두질환 감별진단 시스템의 개발)

  • 전계록;김기련;권순복;예수영;이승진;왕수건
    • Journal of Biomedical Engineering Research
    • /
    • v.23 no.3
    • /
    • pp.197-205
    • /
    • 2002
  • The objective of this paper is to implement a classifier for the differential diagnosis of laryngeal diseases from acoustic signals acquired in a noisy room. For this purpose, voice signals of the vowel /a/ were collected from patients in a soundproof chamber and mixed with noise. Then, the acoustic parameters were analyzed, and hierarchical neural networks were applied to the data classification. The classifier had a structure of five-step hierarchical neural networks. The first neural network separated normal and benign cases from malignant laryngeal disease cases. The second network classified the remaining group into normal or benign laryngeal disease cases. The third network distinguished polyp, nodule, and palsy among the benign laryngeal cases. Glottic cancer cases were staged into T1, T2, T3, and T4 by the fourth and fifth networks. All the neural networks were based on the multilayer perceptron model, which classifies non-linear patterns effectively, and were trained with an error back-propagation algorithm. We chose acoustic parameters for classification by investigating the distribution of laryngeal diseases and pilot classification results for the parameters derived from MDVP. The classifier was tested with the chosen parameters to find the optimum ones. The networks were then improved by including pre-processing steps such as linear and z-score transformation. Results showed that 90% of T1 and 100% of T2-4 cases were correctly distinguished. In addition, 88.23% of vocal polyps and 100% of normal cases, vocal nodules, and vocal cord paralysis cases were correctly classified from the data collected in a noisy room.
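
The five-step cascade can be sketched as a chain of classifiers routed by each stage's decision; generic scikit-learn MLPs, string labels, and z-score scaling stand in for the study's tuned networks.

```python
# Sketch of the hierarchical decision cascade (generic stand-ins for the
# study's tuned networks; labels are illustrative strings).
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def make_stage():
    # MLP trained by back-propagation, preceded by z-score normalization.
    return make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000))

def diagnose(x, stage1, stage2, stage3, stage45):
    """Route one acoustic-parameter vector through fitted stage classifiers."""
    if stage1.predict([x])[0] == "malignant":
        return stage45.predict([x])[0]        # glottic cancer staging: T1-T4
    if stage2.predict([x])[0] == "normal":
        return "normal"
    return stage3.predict([x])[0]             # polyp / nodule / palsy
```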

A Study on Printed Hangeul Recognition with Dynamic Jaso Segmentation and Neural Network (동적자소분할과 신경망을 이용한 인쇄체 한글 문자인식기에 관한 연구)

  • 이판호;장희돈;남궁재찬
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.19 no.11
    • /
    • pp.2133-2146
    • /
    • 1994
  • In this paper, we present a method for dynamic Jaso segmentation and Hangeul recognition using neural networks. It uses feature vectors extracted from a mesh depending on the segmentation result. First, each character is converted into a 256-dimensional feature vector by four-direction contributivity and an $8\times8$ mesh. The character is then classified into one of 6 classes by a neural network and segmented into Jaso using the classification result, statistical vowel-location information, and structural information. After Jaso segmentation, Hangeul recognition using a neural network is performed. We experimented on four fonts, three of which were used for training the neural network and the remaining one for testing. Each font contains the 2,350 characters defined in KS C 5601. The overall recognition rates for the training data and the testing data were 97.4% and approximately 94%, respectively. This result shows the effectiveness of the proposed method.
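
The 256-dimensional feature (an 8x8 mesh with four direction contributions per cell, 8*8*4 = 256) can be illustrated as below; the direction measure here is a simplified gradient-orientation stand-in, not the paper's exact contributivity formula.

```python
# Illustrative 8x8-mesh, 4-direction feature (8*8*4 = 256 dimensions).
# The direction measure is a simplified stand-in for four-direction contributivity.
import numpy as np

def mesh_features(img):
    """img: 2-D binary character image (e.g., 64x64); returns a 256-dim vector."""
    gy, gx = np.gradient(img.astype(float))
    ang = (np.arctan2(gy, gx) + np.pi) / (2 * np.pi)   # orientation mapped to [0, 1]
    dir_bin = np.minimum((ang * 4).astype(int), 3)     # quantize into 4 directions
    h, w = img.shape
    feats = np.zeros((8, 8, 4))
    for i in range(h):
        for j in range(w):
            if img[i, j]:
                feats[i * 8 // h, j * 8 // w, dir_bin[i, j]] += 1
    return feats.flatten()
```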


The maximum phonation time and temporal aspects in Korean stops in children with spastic cerebral palsy (경직형 뇌성마비 아동의 최대 발성지속시간과 파열음 산출 시 조음시간 특성 비교)

  • Jeong, Jin-Ok;Kim, Deog-Yong;Sim, Hyun-Sub;Park, Eun-Sook
    • Phonetics and Speech Sciences
    • /
    • v.3 no.1
    • /
    • pp.135-143
    • /
    • 2011
  • This study evaluated the respiratory capacity of children with spastic cerebral palsy who were grouped by GMFCS (Gross Motor Function Classification System) levels, and identified the acoustic characteristics of three different types of Korean stops (stop consonants), which require temporal coordination of the larynx and supra-larynx, in these children. Thirty-two children with dysarthria due to spastic cerebral palsy were divided into two subgroups: 14 children classified at GMFCS levels I~III were placed in Group I and 18 classified at GMFCS levels IV~V were placed in Group II; 18 children with normal speech were selected as the control group. Prolonged phonation of /a/ (sustained vowel /a/) and nine Korean VCV syllables were used. The examined acoustic characteristics were maximum phonation time (MPT), closure duration, and aspiration duration. The results were as follows: 1) The MPTs of the cerebral palsy (CP) groups, both Group I and Group II, were significantly shorter than those of the normal group. 2) The closure durations of the two CP groups were longer than those of the normal group for all 9 target syllables. 3) The aspiration durations of the two CP groups were longer than those of the normal group. 4) For the normal group and CP Group I, closure duration differed significantly among tense, aspirated, and lax stops; however, CP Group II differed from the normal group. 5) For the normal group and CP Group I, aspiration duration differed significantly among aspirated, tense, and lax stops; however, CP Group II differed from the normal group. 6) The place of articulation influenced closure and aspiration durations less than the manner of articulation did.
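
MPT is the duration a speaker can sustain phonation; a minimal energy-threshold measurement sketch follows. The abstract does not specify the measurement protocol, so the 30 dB threshold and frame sizes here are assumptions.

```python
# Illustrative MPT measurement via an energy threshold (assumed protocol).
import numpy as np
import librosa

def maximum_phonation_time(wav_path, frame_ms=25, hop_ms=10, rel_db=30):
    """Return seconds of frames within rel_db of the recording's peak energy."""
    y, sr = librosa.load(wav_path, sr=None)
    frame, hop = int(sr * frame_ms / 1000), int(sr * hop_ms / 1000)
    rms = librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)[0]
    db = librosa.amplitude_to_db(rms, ref=np.max)
    voiced = db > -rel_db            # frames within 30 dB of the peak
    return voiced.sum() * hop / sr   # approximate sustained-phonation duration
```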


Lip-reading System based on Bayesian Classifier (베이지안 분류를 이용한 립 리딩 시스템)

  • Kim, Seong-Woo;Cha, Kyung-Ae;Park, Se-Hyun
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.25 no.4
    • /
    • pp.9-16
    • /
    • 2020
  • Pronunciation recognition systems that use only video information and ignore voice information can be applied to various customized services. In this paper, we develop a system that applies a Bayesian classifier to distinguish Korean vowels via lip shapes in images. We extract feature vectors from the lip shapes of facial images and apply them to the designed machine learning model. Our experiments show that the system's recognition rate is 94% for the pronunciation of 'A', and the system's average recognition rate is approximately 84%, which is higher than that of the CNN tested for comparison. Our results show that our Bayesian classification method with feature values from lip region landmarks is efficient on a small training set. Therefore, it can be used for application development on limited hardware such as mobile devices.
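
The classification step pairs naturally with a Gaussian naive Bayes model over normalized lip-landmark coordinates. A minimal sketch; the landmark extractor is stubbed out because the paper's extractor is not specified here.

```python
# Sketch of Bayesian vowel classification from lip landmarks
# (landmark extraction is assumed to happen upstream, e.g., via a face-landmark library).
import numpy as np
from sklearn.naive_bayes import GaussianNB

def lip_feature(landmarks):
    """landmarks: (n_points, 2) lip-region coordinates; normalize out position and scale."""
    pts = np.asarray(landmarks, dtype=float)
    pts -= pts.mean(axis=0)
    pts /= np.linalg.norm(pts) + 1e-9
    return pts.flatten()

# Usage (illustrative): X = [lip_feature(lm) for lm in train_landmarks]; y = vowel labels
# clf = GaussianNB().fit(X, y); pred = clf.predict([lip_feature(test_lm)])
```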