• Title/Summary/Keyword: Gaussian mixture models


New Data Extraction Method using the Difference in Speaker Recognition (화자인식에서 차분을 이용한 새로운 데이터 추출 방법)

  • Seo, Chang-Woo;Ko, Hee-Ae;Lim, Yong-Hwan;Choi, Min-Jung;Lee, Youn-Jeong
    • Speech Sciences
    • /
    • v.15 no.3
    • /
    • pp.7-15
    • /
    • 2008
  • This paper proposes a method for extracting new feature vectors in speaker recognition (SR) from the difference between the cepstrum, which captures static characteristics, and the delta cepstrum, which captures dynamic characteristics. The proposed difference vector (DV) is an intermediate feature that contains static and dynamic information simultaneously and can therefore serve as a new feature vector. Compared with the conventional method, the proposed approach yields a new feature vector without introducing additional parameters; it only requires computing the difference between the cepstrum and the delta cepstrum. Experimental results show that the proposed method improves performance by more than 2.03% on average over the conventional method in speaker identification (SI).
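
A minimal sketch of the difference-vector idea, assuming librosa is used as the cepstral front end (the paper does not specify its exact feature extraction, so all settings below are illustrative):

```python
# Hypothetical sketch: difference vector (DV) = cepstrum - delta cepstrum.
# Assumes librosa for feature extraction; the paper's exact front end may differ.
import numpy as np
import librosa

def difference_vector_features(wav_path, n_ceps=13):
    y, sr = librosa.load(wav_path, sr=None)
    cep = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_ceps)  # static cepstrum, shape (n_ceps, frames)
    delta = librosa.feature.delta(cep)                     # dynamic (delta) cepstrum, same shape
    dv = cep - delta                                        # difference vector: no extra model parameters
    return dv.T                                             # (frames, n_ceps) feature matrix
```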


Detection of Abnormal Signals in Gas Pipes Using Neural Networks

  • Min, Hwang-Ki;Park, Cheol-Hoon
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.669-670
    • /
    • 2008
  • In this paper, we present a real-time system to detect abnormal events on gas pipes, based on the signals observed through audio sensors attached to them. First, features are extracted from these signals so that they are robust to noise and invariant to the distance between a sensor and the spot at which an abnormal event, such as an attack on the gas pipes, occurs. Then, a classifier is constructed to detect abnormal events. It is a combination of two models, a Gaussian mixture model and a multi-layer perceptron, used to reduce missed alarms and false alarms: the former prevents missed alarms and the latter prevents false alarms. Experimental results with real data from an actual gas system show that the proposed system is effective in detecting dangerous events in real time, with an accuracy of 92.9%.
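
A rough sketch of the two-stage idea described above: a GMM trained on normal data flags low-likelihood frames (to avoid misses), and an MLP then classifies the flagged frames (to suppress false alarms). The thresholding rule and all parameter values are assumptions, not the paper's implementation:

```python
# Illustrative two-stage detector: GMM flags low-likelihood frames, MLP confirms them.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

def train(normal_feats, labeled_feats, labels, n_mix=16, quantile=0.01):
    gmm = GaussianMixture(n_components=n_mix, covariance_type='diag').fit(normal_feats)
    # Threshold chosen so that only the least likely "normal" frames are passed on.
    thr = np.quantile(gmm.score_samples(normal_feats), quantile)
    mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(labeled_feats, labels)
    return gmm, thr, mlp

def detect(gmm, thr, mlp, feats):
    candidate = gmm.score_samples(feats) < thr                     # stage 1: miss-alarm prevention
    alarms = np.zeros(len(feats), dtype=bool)
    if candidate.any():
        alarms[candidate] = mlp.predict(feats[candidate]) == 1     # stage 2: false-alarm prevention
    return alarms
```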


Performance Comparison of Automatic Detection of Laryngeal Diseases by Voice (후두질환 음성의 자동 식별 성능 비교)

  • Kang Hyun Min;Kim Soo Mi;Kim Yoo Shin;Kim Hyung Soon;Jo Cheol-Woo;Yang Byunggon;Wang Soo-Geun
    • MALSORI
    • /
    • no.45
    • /
    • pp.35-45
    • /
    • 2003
  • Laryngeal diseases cause significant changes in the quality of speech production. Automatic detection of laryngeal diseases by voice is attractive because of its nonintrusive nature. In this paper, we apply speech recognition techniques to the detection of laryngeal cancer and investigate which feature parameters and classification methods are appropriate for this purpose. Linear Predictive Cepstral Coefficients (LPCC) and Mel-Frequency Cepstral Coefficients (MFCC) are examined as feature parameters, and parameters reflecting the periodicity of speech and its perturbation are also considered. As for classifiers, multilayer perceptron neural networks and Gaussian Mixture Models (GMM) are employed. According to our experiments, higher-order LPCC combined with the periodicity parameters yields the best performance.
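
As a sketch of the LPCC feature extraction compared above, one common recipe converts per-frame LPC coefficients into cepstral coefficients with the standard LPC-to-cepstrum recursion. The analysis settings below are assumptions, and sign conventions for the LPC polynomial vary between libraries:

```python
# Sketch: LPCC extraction for one frame. librosa estimates LPC; the recursion
# converts predictor coefficients to cepstra (Rabiner-style convention assumed).
import numpy as np
import librosa

def lpcc(frame, lpc_order=14, n_ceps=20):
    frame = np.asarray(frame, dtype=float)
    a = librosa.lpc(frame, order=lpc_order)   # [1, a1, ..., ap] of A(z) = 1 + a1*z^-1 + ...
    alpha = -a[1:]                            # predictor coefficients of the all-pole model 1/A(z)
    c = np.zeros(n_ceps + 1)
    for m in range(1, n_ceps + 1):
        acc = alpha[m - 1] if m <= lpc_order else 0.0
        for k in range(max(1, m - lpc_order), m):
            acc += (k / m) * c[k] * alpha[m - k - 1]
        c[m] = acc
    return c[1:]                              # LPCC coefficients c1..c_n for this frame
```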


GMM-Based Maghreb Dialect Identification System

  • Nour-Eddine, Lachachi;Abdelkader, Adla
    • Journal of Information Processing Systems
    • /
    • v.11 no.1
    • /
    • pp.22-38
    • /
    • 2015
  • While Modern Standard Arabic is the formal spoken and written language of the Arab world, dialects are the major mode of communication in everyday life. Therefore, identifying a speaker's dialect is critical in the Arabic-speaking world for speech processing tasks such as automatic speech recognition or identification. In this paper, we examine two approaches that reduce the Universal Background Model (UBM) in an automatic dialect identification system across the following five Arabic Maghreb dialects: Moroccan, Tunisian, and three dialects from the western (Oranian), central (Algiersian), and eastern (Constantinian) regions of Algeria. We applied our approaches to a Maghreb dialect detection domain containing a collection of 10-second utterances, and we compared the identification precision obtained on the dialect samples by a baseline GMM-UBM system and by our improved GMM-UBM system that uses a Reduced UBM algorithm. Our experiments show that our approaches significantly improve identification performance over purely acoustic features, with an identification rate of 80.49%.
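
A simplified sketch of GMM-UBM scoring for dialect identification. The paper's Reduced UBM algorithm is not reproduced here, and MAP adaptation is approximated by warm-starting each dialect GMM from the UBM parameters, which is an assumption:

```python
# Simplified GMM-UBM sketch: a universal background model plus one GMM per dialect;
# utterances are scored by the dialect-vs-UBM average log-likelihood ratio.
from sklearn.mixture import GaussianMixture

def train_gmm_ubm(all_frames, frames_by_dialect, n_mix=64):
    ubm = GaussianMixture(n_components=n_mix, covariance_type='diag').fit(all_frames)
    dialect_models = {}
    for name, frames in frames_by_dialect.items():
        g = GaussianMixture(n_components=n_mix, covariance_type='diag',
                            weights_init=ubm.weights_, means_init=ubm.means_)
        dialect_models[name] = g.fit(frames)   # crude stand-in for MAP adaptation
    return ubm, dialect_models

def identify(ubm, dialect_models, utterance_frames):
    # Pick the dialect with the largest log-likelihood ratio against the UBM.
    llr = {name: g.score(utterance_frames) - ubm.score(utterance_frames)
           for name, g in dialect_models.items()}
    return max(llr, key=llr.get)
```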

Korean Continuous Speech Recognition using Phone Models for Function words (기능어용 음소 모델을 적용한 한국어 연속음성 인식)

  • 명주현;정민화
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2000.04b
    • /
    • pp.354-356
    • /
    • 2000
  • In Korean continuous speech recognition that uses pseudo-morphemes as the decoding unit, words such as postpositions, endings, affixes, and the stems of short predicates cause a considerable number of recognition errors. These words have very short durations, are frequently omitted, and show severe pronunciation variation depending on the form of the morphemes they combine with. In this paper, we define such words as Korean function words and specify function word sets 1 and 2 through recognition experiments at the pseudo-morpheme level. We then propose applying dedicated function-word phones to Korean function words independently. In addition, to handle the acoustic variation introduced by separating out the function-word phones, we increased the number of Gaussian mixtures for more robust training, and to curb the increase in insertion errors caused by the higher acoustic model scores of function words, we applied a fixed penalty to the language model. When the phone models for function word set 1 were applied, the overall sentence recognition rate improved by 0.8%, and when the function-word phone models for set 2 were applied, it increased by 1.4%. These experimental results confirm that applying new phones to Korean function words and training them independently is an effective approach for recognition.
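
As a rough illustration of the fixed penalty mentioned above, a decoding hypothesis is typically scored as the acoustic log-likelihood plus a weighted language model score minus a per-word insertion penalty; the actual weights used in the paper are not given here, so the values below are assumptions:

```python
# Illustrative combination of acoustic score, LM score, and a fixed word-insertion
# penalty used to curb insertion errors when acoustic scores rise.
def hypothesis_score(acoustic_logprob, lm_logprob, n_words,
                     lm_weight=10.0, insertion_penalty=0.5):
    return acoustic_logprob + lm_weight * lm_logprob - insertion_penalty * n_words
```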


Performance Enhancement of Speaker Identification System Based on GMM Using the Modified EM Algorithm (수정된 EM알고리즘을 이용한 GMM 화자식별 시스템의 성능향상)

  • Kim, Seong-Jong;Chung, Ik-Joo
    • Speech Sciences
    • /
    • v.12 no.4
    • /
    • pp.31-42
    • /
    • 2005
  • Recently, the Gaussian Mixture Model (GMM), a special form of CHMM, has been applied to speaker identification, and it has been shown that GMM performs better than CHMM. Therefore, in this paper speaker models based on GMM and on a new GMM using the modified EM algorithm are introduced and evaluated for text-independent speaker identification. Various experiments were performed to evaluate the identification performance of the two algorithms. As a result, the GMM speaker model attained 94.6% identification accuracy using 40 seconds of training data and 32 mixtures, and 97.8% accuracy using 80 seconds of training data and 64 mixtures. On the other hand, the new GMM speaker model achieved 95.0% identification accuracy using 40 seconds of training data and 32 mixtures, and 98.2% accuracy using 80 seconds of training data and 64 mixtures. This shows that the new GMM speaker identification outperforms the conventional GMM speaker identification.
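
A baseline sketch of the standard GMM speaker identification setup described above. The paper's modified EM algorithm is not reproduced; scikit-learn's standard EM training is used as a stand-in:

```python
# Baseline GMM speaker identification: one GMM per speaker trained with standard EM;
# identification picks the speaker whose model gives the highest average log-likelihood.
from sklearn.mixture import GaussianMixture

def train_speaker_models(train_frames_by_speaker, n_mix=32):
    return {spk: GaussianMixture(n_components=n_mix, covariance_type='diag',
                                 max_iter=200).fit(frames)
            for spk, frames in train_frames_by_speaker.items()}

def identify_speaker(models, test_frames):
    scores = {spk: gmm.score(test_frames) for spk, gmm in models.items()}
    return max(scores, key=scores.get)
```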


Quantitative Assessment Technology of Small Animal Myocardial Infarction PET Image Using Gaussian Mixture Model (다중가우시안혼합모델을 이용한 소동물 심근경색 PET 영상의 정량적 평가 기술)

  • Woo, Sang-Keun;Lee, Yong-Jin;Lee, Won-Ho;Kim, Min-Hwan;Park, Ji-Ae;Kim, Jin-Su;Kim, Jong-Guk;Kang, Joo-Hyun;Ji, Young-Hoon;Choi, Chang-Woon;Lim, Sang-Moo;Kim, Kyeong-Min
    • Progress in Medical Physics
    • /
    • v.22 no.1
    • /
    • pp.42-51
    • /
    • 2011
  • Nuclear medicine images (SPECT, PET) are widely used tools for assessing myocardial viability and perfusion. However, it is difficult to define the myocardial infarct region accurately. The purpose of this study was to investigate a methodological approach for the automatic measurement of rat myocardial infarct size using a polar map with an adaptive threshold. A rat myocardial infarction model was induced by ligation of the left circumflex artery. PET images were obtained after intravenous injection of 37 MBq of ¹⁸F-FDG. After 60 min of uptake, each animal was scanned for 20 min with ECG gating. PET data were reconstructed using 2D ordered-subsets expectation maximization (OSEM). To automatically delineate the myocardial contour and generate the polar map, we used QGS software (Cedars-Sinai Medical Center). The reference infarct size was defined as the infarction area percentage of the total left myocardium using TTC staining. We used three threshold methods: a predefined threshold, Otsu's method, and a multi-Gaussian mixture model (MGMM). The predefined threshold method is commonly used in other studies; we applied threshold values from 10% to 90% in steps of 10%. The Otsu algorithm calculates the threshold that maximizes the between-class variance. The MGMM method estimates the distribution of image intensities using multiple Gaussian mixture models (MGMM2, ..., MGMM5) and calculates an adaptive threshold. The infarct size in the polar map was calculated as the percentage of the polar map area below the threshold relative to the total polar map area. The infarct sizes measured with the different threshold methods were evaluated by comparison with the reference infarct size. The mean differences between the polar map defect size obtained with predefined thresholds (20%, 30%, and 40%) and the reference infarct size were 7.04±3.44%, 3.87±2.09%, and 2.15±2.07%, respectively; for Otsu's method the difference was 3.56±4.16%, and for the MGMM methods it was 2.29±1.94%. The predefined threshold (30%) showed the smallest mean difference from the reference infarct size. However, MGMM was more accurate than the predefined threshold when the reference infarct size was under 10% (MGMM: 0.006%, predefined threshold: 0.59%). In this study, we evaluated myocardial infarct size in the polar map using a multi-Gaussian mixture model. The MGMM method provides an adaptive threshold for each subject and will be useful for the automatic measurement of infarct size.
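
A minimal sketch of the two automatic thresholding ideas compared above, applied to a polar map's intensity values: Otsu's threshold via scikit-image, and a GMM-derived threshold taken, as an assumption, where the lowest-mean (defect) component stops dominating the posterior. The paper's exact thresholding rule may differ:

```python
# Sketch: adaptive thresholds on polar map intensities and infarct size measurement.
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.mixture import GaussianMixture

def otsu_threshold(polar_map):
    return threshold_otsu(polar_map.ravel())

def mgmm_threshold(polar_map, n_components=2):
    x = polar_map.ravel().reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components).fit(x)
    low = np.argmin(gmm.means_.ravel())               # component modelling the low-uptake defect
    grid = np.linspace(x.min(), x.max(), 1000).reshape(-1, 1)
    post = gmm.predict_proba(grid)[:, low]
    above = np.where(post < 0.5)[0]                   # first intensity where defect no longer dominates
    return float(grid[above[0]]) if above.size else float(x.max())

def infarct_size_percent(polar_map, threshold):
    return 100.0 * np.mean(polar_map < threshold)     # % of polar map area below the threshold
```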

Implementation of HMM Based Speech Recognizer with Medium Vocabulary Size Using TMS320C6201 DSP (TMS320C6201 DSP를 이용한 HMM 기반의 음성인식기 구현)

  • Jung, Sung-Yun;Son, Jong-Mok;Bae, Keun-Sung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.1E
    • /
    • pp.20-24
    • /
    • 2006
  • In this paper, we focus on the real-time implementation of a medium-vocabulary speech recognition system, considering its application to mobile phones. First, we developed a PC-based variable-vocabulary word recognizer whose program memory and total acoustic model size are kept as small as possible. To reduce the memory size of the acoustic models, linear discriminant analysis and phonetic tied mixtures were applied in the feature selection process and in training the HMMs, respectively. In addition, a state-based Gaussian selection method with real-time cepstral normalization was used to reduce the computational load and to achieve robust recognition. We then verified the real-time operation of the implemented recognition system on a TMS320C6201 EVM board. The implemented system uses about 610 kbytes of memory, including both program and data memory. The recognition rate was 95.86% for the ETRI 445DB, and 96.4%, 97.92%, and 87.04% for three name databases collected through mobile phones.
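
A small sketch of the real-time cepstral normalization mentioned above, implemented as an exponentially weighted running mean subtracted per frame so that no look-ahead over the whole utterance is needed; the smoothing factor is an assumed value:

```python
# Sketch: streaming cepstral mean normalization with an exponential running mean.
import numpy as np

class RunningCMN:
    def __init__(self, dim, alpha=0.995):
        self.mean = np.zeros(dim)
        self.alpha = alpha

    def normalize(self, frame):
        self.mean = self.alpha * self.mean + (1.0 - self.alpha) * frame
        return frame - self.mean
```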

Performance Improvement of EMG-Pattern Recognition Using MFCC-HMM-GMM (MFCC-HMM-GMM을 이용한 근전도(EMG)신호 패턴인식의 성능 개선)

  • Choi, Heung-Ho;Kim, Jung-Ho;Kwon, Jang-Woo
    • Journal of Biomedical Engineering Research
    • /
    • v.27 no.5
    • /
    • pp.237-244
    • /
    • 2006
  • This study proposes an approach to improving the performance of EMG (electromyogram) pattern recognition. The MFCC (Mel-Frequency Cepstral Coefficients) approach is modeled after the characteristics of the human auditory system. While it supplies the most typical features in the frequency domain, it should be reorganized to capture the features of the EMG signal. Moreover, the dynamic aspects of EMG are important for tasks such as continuous prosthetic control or recognition of EMG signals of varying lengths, which most approaches have not handled successfully. Thus, this paper proposes a reorganized MFCC and an HMM-GMM that adapt to the dynamic features of the signal. This also requires an analysis of the most suitable system settings for EMG pattern recognition; to meet this requirement, this study balanced the recognition rate against the error rates produced by various settings when learning from the EMG data for each motion.
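
A rough sketch of an MFCC-HMM-GMM pipeline under stated assumptions: librosa computes MFCC-style features from the EMG signal at its (much lower) sampling rate with shortened analysis windows, and hmmlearn's GMMHMM models each motion class. All parameter values are illustrative, not the paper's reorganized settings:

```python
# Illustrative EMG pattern recogniser: MFCC-style features plus one GMM-HMM per motion.
import numpy as np
import librosa
from hmmlearn.hmm import GMMHMM

def emg_features(signal, fs=1000, n_ceps=12):
    # Short analysis windows suited to EMG rather than speech defaults.
    feats = librosa.feature.mfcc(y=np.asarray(signal, dtype=float), sr=fs,
                                 n_mfcc=n_ceps, n_fft=256, hop_length=64)
    return feats.T                                    # (frames, n_ceps)

def train_motion_models(signals_by_motion, n_states=5, n_mix=3):
    models = {}
    for motion, signals in signals_by_motion.items():
        seqs = [emg_features(s) for s in signals]
        X = np.vstack(seqs)                           # concatenated sequences
        lengths = [len(s) for s in seqs]              # per-sequence frame counts
        m = GMMHMM(n_components=n_states, n_mix=n_mix, covariance_type='diag', n_iter=20)
        models[motion] = m.fit(X, lengths)
    return models

def recognise(models, signal):
    feats = emg_features(signal)
    return max(models, key=lambda motion: models[motion].score(feats))
```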

A Study on the Performance Variations of the Mobile Phone Speaker Verification System According to the Various Background Speaker Properties (휴대폰음성을 이용한 화자인증시스템에서 배경화자에 따른 성능변화에 관한 연구)

  • Choi, Hong-Sub
    • Speech Sciences
    • /
    • v.12 no.3
    • /
    • pp.105-114
    • /
    • 2005
  • It has been verified that a speaker verification system improves its EER performance by normalizing the log likelihood ratio using background speaker models. Recently, wireless mobile phones have become more dominant communication terminals than wired phones, so the need to build a speaker verification system for mobile phones is increasing rapidly. Therefore, in this paper we conducted experiments to examine the performance of speaker verification based on mobile phone speech. In particular, we focus on variations in EER (Equal Error Rate) according to several background speaker properties, such as the selection method (MSC, MIX), the number of background speakers, and the aging of the speech database. For this, we constructed a speaker verification system based on GMM (Gaussian Mixture Model) and found that the MIX method is generally superior to the other method by about 1.0% EER. With respect to the number of background speakers, the EER decreases as the background speaker population grows: as the number increases to 6, 10, and 16, the EERs are 13.0%, 12.2%, and 11.6%, respectively. An unexpected result occurred for the effect of database aging on performance: EERs were measured as 4%, 12%, and 19% for the seasonally recorded databases from session 1 to session 3, respectively, where the gap between sessions was three months. Although each session's database has only 10 speakers and 10 sentences each, which gives the results less statistical confidence, we confirmed that enrolled speaker models in a speaker verification system should be regularly updated using the claimant's ongoing utterances.
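
A minimal sketch of the log-likelihood-ratio normalization described above, assuming one GMM per enrolled speaker and a cohort of background speaker GMMs; the MSC/MIX cohort selection strategies themselves are not reproduced here:

```python
# Sketch: verification score = claimed speaker's average log-likelihood minus the
# mean average log-likelihood over a cohort of background speaker GMMs.
import numpy as np

def verify(claimed_gmm, background_gmms, test_frames, threshold=0.0):
    target = claimed_gmm.score(test_frames)                        # avg log-likelihood per frame
    cohort = np.mean([g.score(test_frames) for g in background_gmms])
    llr = target - cohort                                          # normalized score
    return llr > threshold, llr
```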
