• Title/Abstract/Keyword: Speech Data Classification


Performance of GMM and ANN as a Classifier for Pathological Voice

  • Wang, Jianglin;Jo, Cheol-Woo
    • Speech Sciences / Vol. 14, No. 1 / pp. 151-162 / 2007
  • This study focuses on the classification of pathological voice using a GMM (Gaussian Mixture Model) and compares the results to previous work done with an ANN (Artificial Neural Network). Speech data from normal speakers and patients were collected, diagnosed, and classified into two categories. Six characteristic parameters (jitter, shimmer, NHR, SPI, APQ, and RAP) were chosen, and classifiers based on the artificial neural network and the Gaussian mixture model were used to discriminate normal from pathological speech. The GMM attained an average correct classification rate of 98.4% on training data and 95.2% on test data. Different numbers of mixtures (3 to 15) were tried in order to find an optimal configuration for classification. We also compared the average classification rates of GMM, ANN, and HMM. The proper number of mixtures for the Gaussian model needs to be investigated in future work.
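  • A minimal sketch of the two-GMM decision rule described above, using scikit-learn. The six acoustic parameters (jitter, shimmer, NHR, SPI, APQ, RAP) are assumed to be precomputed per utterance; the feature values below are synthetic placeholders, not data from the paper.

```python
# One GMM per class; classify an utterance by maximum log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(100, 6))   # placeholder feature vectors
X_pathol = rng.normal(1.5, 1.2, size=(100, 6))

# Fit one mixture per class (the paper varies the mixture count from 3 to 15).
gmm_normal = GaussianMixture(n_components=3, random_state=0).fit(X_normal)
gmm_pathol = GaussianMixture(n_components=3, random_state=0).fit(X_pathol)

def classify(x):
    """Compare per-class log-likelihoods of one 6-dim feature vector."""
    ll_n = gmm_normal.score_samples(x.reshape(1, -1))[0]
    ll_p = gmm_pathol.score_samples(x.reshape(1, -1))[0]
    return "normal" if ll_n > ll_p else "pathological"

print(classify(X_normal[0]), classify(X_pathol[0]))
```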


Automated Speech Analysis Applied to Sasang Constitution Classification

  • 강재환;유종향;이혜정;김종열
    • Phonetics and Speech Sciences / Vol. 1, No. 3 / pp. 155-163 / 2009
  • This paper introduces an automatic voice classification system for diagnosing individual constitution based on Sasang Constitutional Medicine (SCM) in Traditional Korean Medicine (TKM). To develop the algorithm, we used the voices of 473 speakers and extracted a total of 144 speech features from speech data consisting of five sustained vowels and one sentence. The classification system, based on a rule-based algorithm derived from a nonparametric statistical method, outputs binary negative decisions. In conclusion, 55.7% of the speech data were diagnosed by this system, of which 72.8% were correct negative decisions.
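  • The abstract does not give the rule base itself; the sketch below only illustrates the general shape of a rule-based "negative decision" classifier. The feature names, thresholds, and which constitution each rule excludes are hypothetical.

```python
# Hypothetical rule table: each rule, when it fires, rules OUT one constitution.
# In the paper the rules are derived nonparametrically from 144 features
# measured on 473 speakers; everything below is illustrative only.
RULES = [
    ("mean_f0",     lambda v: v > 180.0,  "Taeeumin"),   # hypothetical rule
    ("speech_rate", lambda v: v < 3.5,    "Soyangin"),   # hypothetical rule
    ("vowel_a_f1",  lambda v: v > 850.0,  "Soeumin"),    # hypothetical rule
]

def negative_decisions(features):
    """Return the set of constitution types ruled out for this speaker."""
    return {label for name, pred, label in RULES if pred(features[name])}

print(negative_decisions({"mean_f0": 190.0, "speech_rate": 4.0, "vowel_a_f1": 700.0}))
```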


Efficient Emotion Classification Method Based on Multimodal Approach Using Limited Speech and Text Data

  • 신미르;신유현
    • The Transactions of the Korea Information Processing Society / Vol. 13, No. 4 / pp. 174-180 / 2024
  • In this paper, we explore an emotion classification method that combines the wav2vec 2.0 and KcELECTRA models in a multimodal setting. It is known that multimodal learning, which uses speech and text data together, can significantly improve emotion classification performance over speech-only approaches. To extract effective features from the text data, we compare BERT and several BERT-derived models that have performed well in natural language processing and select the best one as the text-processing model; KcELECTRA proved to perform best on the emotion classification task. Experiments on a data set publicly available on AI-Hub also show that using text data together with speech achieves better performance with less data than using speech data alone. The KcELECTRA-based configuration achieved the best performance, with an accuracy of 96.57%. This demonstrates that multimodal learning can provide meaningful performance gains on complex natural language processing tasks such as emotion classification.
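  • A minimal PyTorch sketch of this kind of speech-text fusion, assuming the Hugging Face checkpoints facebook/wav2vec2-base and beomi/KcELECTRA-base. The mean-pooled concatenation, linear head, and label count are assumptions for illustration, not the authors' exact architecture.

```python
# Late fusion of wav2vec 2.0 (speech) and KcELECTRA (text) embeddings.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel, Wav2Vec2Model

class MultimodalEmotionClassifier(nn.Module):
    def __init__(self, num_labels=7):                     # label count assumed
        super().__init__()
        self.speech = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
        self.text = AutoModel.from_pretrained("beomi/KcELECTRA-base")
        hidden = self.speech.config.hidden_size + self.text.config.hidden_size
        self.head = nn.Linear(hidden, num_labels)

    def forward(self, waveform, input_ids, attention_mask):
        # Mean-pool each encoder's token sequence, concatenate, classify.
        s = self.speech(waveform).last_hidden_state.mean(dim=1)
        t = self.text(input_ids, attention_mask=attention_mask).last_hidden_state.mean(dim=1)
        return self.head(torch.cat([s, t], dim=-1))

tok = AutoTokenizer.from_pretrained("beomi/KcELECTRA-base")
model = MultimodalEmotionClassifier()
batch = tok(["정말 기쁜 소식이야!"], return_tensors="pt")
wave = torch.randn(1, 16000)          # one second of 16 kHz audio (placeholder)
logits = model(wave, batch["input_ids"], batch["attention_mask"])
```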

Vowel Classification of Imagined Speech in an Electroencephalogram using the Deep Belief Network

  • 이태주;심귀보
    • Journal of Institute of Control, Robotics and Systems / Vol. 21, No. 1 / pp. 59-64 / 2015
  • In this paper, we demonstrate the usefulness of the deep belief network (DBN) in the field of brain-computer interfaces (BCI), especially in relation to imagined speech. In recent years, growing interest in BCI has led to a number of useful applications, such as robot control, game interfaces, and exoskeleton limbs. Imagined speech, which could be used for communication or military-purpose devices, is one of the most exciting BCI applications, but implementing such a system raises several problems. In a previous paper, we addressed some of the issues of imagined speech using the International Phonetic Alphabet (IPA), although the approach needed to be extended to multi-class classification problems. In view of this, this paper provides a suitable solution for vowel classification of imagined speech. We used the DBN, a deep learning algorithm, for multi-class vowel classification and selected four vowels from the IPA: /a/, /i/, /o/, /u/. For the experiment, we recorded 32-channel raw electroencephalogram (EEG) data from three male subjects, with electrodes placed on the scalp over the frontal lobe and both temporal lobes, which are related to thinking and verbal function. The eigenvalues of the covariance matrix of the EEG data were used as the feature vector for each vowel. For comparison, we also report the classification results of a back-propagation artificial neural network (BP-ANN). The BP-ANN achieved 52.04% while the DBN achieved 87.96%, i.e., the DBN was 35.92 percentage points better on multi-class imagined speech classification, and it also required much less total computation time. In conclusion, the DBN algorithm is an efficient choice for BCI system implementation.
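  • The feature extraction step is simple to sketch: per trial, the sorted eigenvalues of the channel covariance matrix form the feature vector. For the classifier, a scikit-learn BernoulliRBM stacked with logistic regression stands in as a rough single-layer surrogate for the paper's DBN; the synthetic EEG epochs are placeholders.

```python
# Covariance-eigenvalue features from 32-channel EEG epochs, plus a
# RBM + logistic-regression stack as a stand-in for the DBN.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

def covariance_eigen_features(trial):
    """trial: (n_channels, n_samples) epoch -> sorted covariance eigenvalues."""
    cov = np.cov(trial)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

rng = np.random.default_rng(0)
trials = rng.normal(size=(80, 32, 512))       # 80 synthetic 32-channel epochs
X = np.array([covariance_eigen_features(t) for t in trials])
y = rng.integers(0, 4, size=80)               # four vowel classes /a i o u/

clf = Pipeline([
    ("scale", MinMaxScaler()),                # RBMs expect inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=64, random_state=0)),
    ("logreg", LogisticRegression(max_iter=1000)),
]).fit(X, y)
print(clf.score(X, y))
```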

Development of Age Classification Deep Learning Algorithm Using Korean Speech

  • 소순원;유승민;김주영;안현준;조백환;육순현;김인영
    • Journal of Biomedical Engineering Research / Vol. 39, No. 2 / pp. 63-68 / 2018
  • In modern society, speech recognition is emerging as an important technology for identification in electronic commerce, forensics, law enforcement, and other systems. In this study, we develop an age classification algorithm that extracts only MFCCs (Mel Frequency Cepstral Coefficients), which express the characteristics of Korean speech, and applies deep learning to them. The algorithm extracts 13th-order MFCCs from Korean speech data to construct a data set and uses a deep artificial neural network to classify males in their 20s, 30s, and 50s, and females in their 20s, 40s, and 50s. Our model achieved classification accuracies of 78.6% and 71.9% for males and females, respectively.
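  • A minimal sketch of this pipeline: 13th-order MFCC extraction with librosa, averaged per utterance, fed to a small dense network. The network shape is illustrative (the abstract does not give the exact architecture), and the random arrays stand in for the Korean speech corpus.

```python
# 13 MFCCs per utterance -> dense network age-group classifier.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_features(path):
    """Load a speech file and return the mean of 13 MFCCs over time."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, frames)
    return mfcc.mean(axis=1)                             # utterance-level vector

# X: one 13-dim vector per utterance; y: age-group labels.
# Placeholder data stands in for the corpus used in the paper.
X = np.random.randn(300, 13)
y = np.random.choice(["20s", "30s", "50s"], size=300)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)
```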

Construction of Customer Appeal Classification Model Based on Speech Recognition

  • Sheng Cao;Yaling Zhang;Shengping Yan;Xiaoxuan Qi;Yuling Li
    • Journal of Information Processing Systems / Vol. 19, No. 2 / pp. 258-266 / 2023
  • Aiming at the problems of poor customer satisfaction and poor accuracy of customer classification, this paper proposes a customer classification model based on speech recognition. First, we analyze the temporal characteristics of customer demand data, identify the factors influencing customer demand behavior, and define the process for extracting features from customer voice signals. Then, emotional association rules for customer demands are designed, and a classification model of customer demands is constructed through cluster analysis. Next, the Euclidean distance method is used to preprocess customer behavior data, and the fuzzy clustering characteristics of customer demands are obtained with the fuzzy clustering method. Finally, on the basis of the naive Bayes algorithm, a customer demand classification model based on speech recognition is completed. Experimental results show that the proposed method raises the accuracy of customer demand classification above 80% and customer satisfaction above 90%, solving the problems of poor customer satisfaction and low classification accuracy in existing methods, and it has practical application value.
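  • A rough sketch of the fuzzy-clustering-then-naive-Bayes stage, with a hand-rolled fuzzy-c-means-style membership step built on Euclidean distances to cluster centers; the features, labels, and cluster count are synthetic placeholders, not the paper's setup.

```python
# Fuzzy cluster memberships over demand features, fed to naive Bayes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

def fuzzy_memberships(X, centers, m=2.0):
    """Fuzzy c-means style memberships from Euclidean distances to centers."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # customer demand feature vectors
y = rng.integers(0, 3, size=200)              # demand categories (placeholder)

centers = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X).cluster_centers_
U = fuzzy_memberships(X, centers)             # (200, 4) membership matrix
clf = GaussianNB().fit(U, y)                  # naive Bayes on fuzzy features
print(clf.score(U, y))
```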

Improvement of Speech/Music Classification Based on RNN in EVS Codec for Hearing Aids

  • 강상익;이상민
    • Journal of Rehabilitation Welfare Engineering &amp; Assistive Technology / Vol. 11, No. 2 / pp. 143-146 / 2017
  • This paper presents a method for improving the speech/music classification performance of the 3GPP enhanced voice services (EVS) codec for hearing-aid systems using a recurrent neural network (RNN). Specifically, we propose a classifier that builds an effective RNN using only the feature vectors already employed by the EVS speech/music classification algorithm. Evaluations over various music genres and noise conditions show that the RNN-based method outperforms the existing EVS classifier.
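  • A minimal PyTorch sketch of an RNN over per-frame feature vectors of the kind described above. The feature dimension, GRU size, and two-way output are assumptions; the paper reuses the EVS codec's own speech/music feature set, which is not reproduced here.

```python
# GRU classifier over per-frame feature vectors: speech vs. music.
import torch
import torch.nn as nn

class SpeechMusicRNN(nn.Module):
    def __init__(self, n_features=12, hidden=32):   # sizes assumed
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)              # 0 = speech, 1 = music

    def forward(self, frames):                       # frames: (batch, time, features)
        _, h = self.gru(frames)
        return self.out(h[-1])                       # classify from last hidden state

model = SpeechMusicRNN()
frames = torch.randn(4, 100, 12)                     # 4 clips x 100 frames (placeholder)
logits = model(frames)                               # (4, 2) speech/music scores
```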

A Study on the Classification of the Korean Consonants in the VCV Speech Chain

  • 최윤석;김기석;김원준;황희영
    • The Transactions of the Korean Institute of Electrical Engineers / Vol. 39, No. 6 / pp. 607-615 / 1990
  • In this paper, we propose experimental models for classifying the consonants in the Vowel-Consonant-Vowel (VCV) speech chain into four phonemic groups: nasals, liquids, plosives, and the others. To classify fuzzy patterns such as speech, it is necessary to analyze the distribution of acoustic features over many training data. The classification rules are polynomial functions of up to fourth order obtained by regression analysis, which contribute collectively to the result. The final result shows a success rate of about 87% on data spoken by a single male speaker.
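  • A sketch of the shape of this approach: one fourth-order polynomial discriminant per phoneme group, fit by regression on one-vs-rest targets, with the winning discriminant deciding the class. The acoustic features and data are synthetic placeholders.

```python
# Fourth-order polynomial discriminants fit by regression, one per group.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

GROUPS = ["nasal", "liquid", "plosive", "other"]

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))        # acoustic features per consonant (placeholder)
y = rng.integers(0, 4, size=400)     # group labels (placeholder)

poly = PolynomialFeatures(degree=4)
Xp = poly.fit_transform(X)

# One regression-based discriminant per group (one-vs-rest targets).
models = [LinearRegression().fit(Xp, (y == k).astype(float)) for k in range(4)]

def classify(x):
    """Assign the group whose polynomial discriminant scores highest."""
    xp = poly.transform(x.reshape(1, -1))
    scores = [m.predict(xp)[0] for m in models]
    return GROUPS[int(np.argmax(scores))]

print(classify(X[0]))
```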

Japanese Vowel Sound Classification Using Fuzzy Inference System

  • Phitakwinai, Suwannee;Sawada, Hideyuki;Auephanwiriyakul, Sansanee;Theera-Umpon, Nipon
    • Journal of the Korea Convergence Society / Vol. 5, No. 1 / pp. 35-41 / 2014
  • Automatic speech recognition is a popular research problem, and many groups work on it for different languages, including Japanese. Japanese vowel recognition is an important part of a Japanese speech recognition system. In this research, we developed a vowel classification system based on the Mamdani fuzzy inference system. We tested the system on a blind test set collected from one male native Japanese speaker and four male non-native Japanese speakers; none of the subjects in the blind test set appeared in the training set. The classification rate on the training set was 95.0%. In the speaker-independent experiments, the classification rate was around 70.0% for the native speaker and around 80.5% for the non-native speakers.
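  • A toy sketch of Mamdani-style fuzzy rules for vowels: triangular membership functions over formant features, fuzzy AND via min, and the strongest-firing rule picks the vowel (argmax selection stands in for full inference and defuzzification). The F1/F2 supports and the use of formants at all are illustrative assumptions; the paper's actual rule base and features are not given in the abstract.

```python
# Mamdani-style fuzzy vowel classification over (F1, F2) formants.
def tri(x, a, b, c):
    """Triangular membership function rising a->b, falling b->c."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Hypothetical (F1, F2) triangular supports for three Japanese vowels.
VOWEL_RULES = {
    "a": {"f1": (600, 800, 1000), "f2": (1000, 1300, 1600)},
    "i": {"f1": (200, 300, 450),  "f2": (2000, 2400, 2800)},
    "u": {"f1": (250, 350, 500),  "f2": (1000, 1300, 1700)},
}

def classify(f1, f2):
    """Fire each rule with min (fuzzy AND); return the strongest vowel."""
    strengths = {
        v: min(tri(f1, *r["f1"]), tri(f2, *r["f2"]))
        for v, r in VOWEL_RULES.items()
    }
    return max(strengths, key=strengths.get), strengths

print(classify(700, 1200))   # should favor /a/ under these toy rules
```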

Modality-Based Sentence-Final Intonation Prediction for Korean Conversational-Style Text-to-Speech Systems

  • Oh, Seung-Shin;Kim, Sang-Hun
    • ETRI Journal / Vol. 28, No. 6 / pp. 807-810 / 2006
  • This letter presents a prediction model of sentence-final intonation for Korean conversational-style text-to-speech systems, introducing the linguistic feature of 'modality' as a new parameter. Based on their function and meaning, we classify the tonal forms in speech data into tone types meaningful for speech synthesis and use the result of this classification to build a prediction model with a tree-structured classification algorithm. To show that modality is more effective for the prediction model than features such as sentence type or speech act, an experiment was performed on a test set of 970 utterances with a training set of 3,883 utterances. The results show that modality contributes more to the determination of sentence-final intonation than sentence type or speech act, and that prediction accuracy improves by up to 25% when the modality feature is introduced.
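  • A minimal sketch of a tree-structured classifier over categorical linguistic features including modality, as described above. The feature values, the tiny training table, and the tone-label inventory are hypothetical placeholders, not the paper's data.

```python
# Decision tree predicting sentence-final tone type from categorical features.
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

# Each utterance: (sentence_type, speech_act, modality) -> tone type.
X = np.array([
    ["declarative",   "inform",  "assertion"],
    ["interrogative", "ask",     "question"],
    ["declarative",   "request", "obligation"],
    ["imperative",    "order",   "obligation"],
], dtype=object)
y = np.array(["L%", "H%", "HL%", "L%"])   # tone labels (placeholder inventory)

clf = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                    DecisionTreeClassifier()).fit(X, y)
print(clf.predict([["interrogative", "ask", "question"]]))
```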
