• Title/Summary/Keyword: GMM parameters

Improved Generalized Method of Moment Estimators to Estimate Diffusion Models (확산모형에 대한 일반화적률추정법의 개선)

  • Choi, Youngsoo;Lee, Yoon-Dong
    • The Korean Journal of Applied Statistics / v.26 no.5 / pp.767-783 / 2013
  • Generalized Method of Moments (GMM) is a popular method for estimating model parameters in empirical financial studies. GMM is frequently applied to estimate diffusion models, which are basic tools of modern financial engineering. However, recent research has shown that GMM has poor properties when estimating the parameters that pertain to the diffusion coefficient of a diffusion model. This research corrects this weakness of GMM and suggests alternatives that improve the statistical properties of GMM estimators. A simulation study is used to compare the estimation methods. Among the compared alternatives, NGMM-Y, a version of improved GMM that adopts the NLL idea of Shoji and Ozaki (1998), showed the best properties. In particular, the NGMM-Y estimator is superior to the other versions of the GMM estimator for estimating diffusion coefficient parameters.
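
The abstract does not fix a particular diffusion, so the sketch below illustrates one common setting: one-step GMM for a Vasicek-type model dX = kappa*(theta - X) dt + sigma dW, with Euler-discretized moment conditions in the spirit of CKLS (1992) and an identity weighting matrix. The model choice, moment set, and optimizer are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch: one-step GMM for a Vasicek-type diffusion using
# Euler-discretized moment conditions (illustrative, not the paper's setup).
import numpy as np
from scipy.optimize import minimize

def moment_conditions(params, x, dt):
    kappa, theta, sigma = params
    x_t, x_next = x[:-1], x[1:]
    eps = x_next - x_t - kappa * (theta - x_t) * dt      # Euler residual
    g = np.column_stack([
        eps,                                             # E[eps] = 0
        eps * x_t,                                       # E[eps * X_t] = 0
        eps**2 - sigma**2 * dt,                          # E[eps^2] = sigma^2 * dt
        (eps**2 - sigma**2 * dt) * x_t,                  # orthogonal to X_t
    ])
    return g.mean(axis=0)

def gmm_objective(params, x, dt):
    g = moment_conditions(params, x, dt)
    return g @ g                                         # identity weight (one-step GMM)

# Simulate a sample path just to exercise the estimator.
rng = np.random.default_rng(0)
dt, n = 1 / 252, 2000
kappa0, theta0, sigma0 = 2.0, 0.05, 0.1
x = np.empty(n)
x[0] = theta0
for i in range(1, n):
    x[i] = x[i-1] + kappa0 * (theta0 - x[i-1]) * dt + sigma0 * np.sqrt(dt) * rng.standard_normal()

res = minimize(gmm_objective, x0=[1.0, 0.1, 0.2], args=(x, dt), method="Nelder-Mead")
print("GMM estimates (kappa, theta, sigma):", res.x)
```

A two-step version would re-estimate the weighting matrix from the first-stage moment covariance; the identity weight keeps the sketch short.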

Comparison Study on the Performances of NLL and GMM for Estimating Diffusion Processes (NLL과 GMM을 중심으로 한 확산모형 추정법 비교)

  • Kim, Dae-Gyun;Lee, Yoon-Dong
    • The Korean Journal of Applied Statistics / v.24 no.6 / pp.1007-1020 / 2011
  • Since the work of Black and Scholes (1973), modeling methods using diffusion processes have played principal roles in financial engineering. In modern financial theory, various types of diffusion processes have been suggested and applied in real situations. Estimation of the model parameters is an indispensable step in analyzing financial data with diffusion process models. Many estimation methods have been suggested and their properties investigated. This paper reviews the statistical properties of the Euler approximation method, the New Local Linearization (NLL) method, and the Generalized Method of Moments (GMM), which are known as the most practical methods. From the simulation study, we found that the NLL and Euler methods performed better than GMM. GMM is frequently used to estimate the parameters because of its simplicity; however, this paper shows that its performance is poorer than that of the Euler approximation method or the NLL method, which are even simpler than GMM. The performance of GMM is especially poor when the parameters in the diffusion coefficient are to be estimated.
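
For comparison with the GMM sketch above, here is a minimal version of the Euler approximation method the paper evaluates: the transition density is approximated by a Gaussian whose mean and variance come from the Euler scheme, and the parameters maximize this pseudo-likelihood. The Vasicek specification, parameter names, and optimizer are again illustrative assumptions.

```python
# Minimal sketch: Euler pseudo-likelihood for dX = kappa*(theta - X) dt + sigma dW.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def euler_neg_loglik(params, x, dt):
    kappa, theta, sigma = params
    if sigma <= 0:
        return np.inf
    x_t, x_next = x[:-1], x[1:]
    mean = x_t + kappa * (theta - x_t) * dt    # Euler transition mean
    std = sigma * np.sqrt(dt)                  # Euler transition std
    return -norm.logpdf(x_next, loc=mean, scale=std).sum()

# Exercise the estimator on a simulated path (same model as the previous sketch).
rng = np.random.default_rng(1)
dt, n = 1 / 252, 2000
kappa0, theta0, sigma0 = 2.0, 0.05, 0.1
x = np.empty(n)
x[0] = theta0
for i in range(1, n):
    x[i] = x[i-1] + kappa0 * (theta0 - x[i-1]) * dt + sigma0 * np.sqrt(dt) * rng.standard_normal()

res = minimize(euler_neg_loglik, x0=[1.0, 0.1, 0.2], args=(x, dt), method="Nelder-Mead")
print("Euler pseudo-ML estimates (kappa, theta, sigma):", res.x)
```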

Speech/Mixed Content Signal Classification Based on GMM Using MFCC (MFCC를 이용한 GMM 기반의 음성/혼합 신호 분류)

  • Kim, Ji-Eun;Lee, In-Sung
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.2 / pp.185-192 / 2013
  • In this paper, we propose a method to improve the performance of speech and mixed-content signal classification using MFCC features and a GMM probability model, in the context of the MPEG USAC (Unified Speech and Audio Coding) standard. For effective pattern recognition, the Gaussian mixture model (GMM) is used, and the optimal GMM parameters are extracted with the expectation-maximization (EM) algorithm. The proposed classification algorithm is divided into two parts: the first extracts the optimal parameters for the GMM, and the second distinguishes between speech and mixed-content signals using MFCC feature parameters. The proposed classification algorithm shows better performance than the scheme conventionally implemented in USAC.
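
A minimal sketch of the two-model decision rule described above: one EM-trained GMM per class over MFCC frames, with a segment labeled by the higher average log-likelihood. scikit-learn's GaussianMixture, the mixture count, and the random placeholder features are assumptions standing in for the paper's USAC implementation.

```python
# Minimal sketch: two-class GMM classification on MFCC frames.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_mfcc = 13
speech_mfcc = rng.normal(0.0, 1.0, size=(5000, n_mfcc))   # placeholder features
mixed_mfcc = rng.normal(0.5, 1.2, size=(5000, n_mfcc))    # placeholder features

# EM-trained GMMs, one per class (mixture count is an illustrative choice).
gmm_speech = GaussianMixture(n_components=16, covariance_type="diag").fit(speech_mfcc)
gmm_mixed = GaussianMixture(n_components=16, covariance_type="diag").fit(mixed_mfcc)

def classify(frames):
    """Label a segment 'speech' or 'mixed' by average frame log-likelihood."""
    ll_speech = gmm_speech.score(frames)   # mean log-likelihood per frame
    ll_mixed = gmm_mixed.score(frames)
    return "speech" if ll_speech > ll_mixed else "mixed"

print(classify(rng.normal(0.0, 1.0, size=(200, n_mfcc))))
```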

GMM-based Emotion Recognition Using Speech Signal (음성 신호를 사용한 GMM기반의 감정 인식)

  • 서정태;김원구;강면구
    • The Journal of the Acoustical Society of Korea / v.23 no.3 / pp.235-241 / 2004
  • This paper studied pattern recognition algorithms and feature parameters for speaker- and context-independent emotion recognition. The KNN algorithm was used as the pattern matching technique for comparison, and VQ and GMM were used for speaker- and context-independent recognition. The speech parameters used as features are pitch, energy, MFCC, and their first and second derivatives. Experimental results showed that the emotion recognizer using MFCC and its derivatives performed better than the one using the pitch and energy parameters. Among the pattern recognition algorithms, the GMM-based emotion recognizer was superior to the KNN- and VQ-based recognizers.
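
A minimal sketch of how the feature vector described above can be assembled: frame-level pitch, energy, and MFCC, augmented with first and second time derivatives. The np.gradient differences are a simple stand-in for the regression-based deltas usually used in speech front ends, and the placeholder arrays are not real speech features.

```python
# Minimal sketch: static features plus delta and delta-delta.
import numpy as np

def add_deltas(features):
    """features: (n_frames, n_dims). Returns static + delta + delta-delta."""
    delta = np.gradient(features, axis=0)
    delta2 = np.gradient(delta, axis=0)
    return np.hstack([features, delta, delta2])

# Placeholder frame-level features (real ones would come from a speech front end).
n_frames = 300
mfcc = np.random.randn(n_frames, 13)
pitch = np.random.rand(n_frames, 1) * 200.0   # Hz, placeholder
energy = np.random.rand(n_frames, 1)          # placeholder

full = add_deltas(np.hstack([pitch, energy, mfcc]))
print(full.shape)   # (300, 45): 15 static dimensions x 3
```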

Speaker and Context Independent Emotion Recognition System using Gaussian Mixture Model (GMM을 이용한 화자 및 문장 독립적 감정 인식 시스템 구현)

  • 강면구;김원구
    • Proceedings of the IEEK Conference / 2003.07e / pp.2463-2466 / 2003
  • This paper studied pattern recognition algorithms and feature parameters for emotion recognition. The KNN algorithm was used as the pattern matching technique for comparison, and VQ and GMM were used for speaker- and context-independent recognition. The speech parameters used as features are pitch, energy, MFCC, and their first and second derivatives. Experimental results showed that the emotion recognizer using MFCC and its derivatives performed better than the one using the pitch and energy parameters. Among the pattern recognition algorithms, the GMM-based emotion recognizer was superior to the KNN- and VQ-based recognizers.

Speaker Verification Using SVM Kernel with GMM-Supervector Based on the Mahalanobis Distance (Mahalanobis 거리측정 방법 기반의 GMM-Supervector SVM 커널을 이용한 화자인증 방법)

  • Kim, Hyoung-Gook;Shin, Dong
    • The Journal of the Acoustical Society of Korea / v.29 no.3 / pp.216-221 / 2010
  • In this paper, we propose a speaker verification method using a Support Vector Machine (SVM) kernel with Gaussian Mixture Model (GMM) supervectors based on the Mahalanobis distance. The proposed GMM-supervector SVM kernel method combines GMM with SVM. The GMM supervectors are generated from GMM parameters estimated on the target speaker's and other speakers' utterances. The speaker verification threshold over the GMM supervectors is decided by an SVM kernel based on the Mahalanobis distance to improve verification accuracy. Experimental results for text-independent speaker verification with 20 speakers demonstrate the performance of the proposed method compared to GMM, SVM, the GMM-supervector SVM kernel based on Kullback-Leibler (KL) divergence, and the GMM-supervector SVM kernel based on the Bhattacharyya distance.
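
A heavily simplified sketch of the idea above: per-utterance GMM supervectors (here just the stacked component means, without the UBM/MAP adaptation a full system would use) fed to an SVM whose kernel uses a diagonal Mahalanobis distance. Every concrete choice below (mixture count, component ordering, kernel form, placeholder data) is an assumption, not the paper's method.

```python
# Minimal sketch: GMM supervectors with a Mahalanobis-style SVM kernel.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

N_COMP, N_DIM = 8, 13

def supervector(frames):
    """Stack the GMM component means of one utterance into a single vector."""
    gmm = GaussianMixture(n_components=N_COMP, covariance_type="diag",
                          random_state=0).fit(frames)
    order = np.argsort(gmm.means_[:, 0])   # crude ordering for comparability
    return gmm.means_[order].ravel()

def mahalanobis_kernel(A, B, inv_var):
    """RBF-style kernel from a diagonal Mahalanobis distance between supervectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2 * inv_var).sum(axis=-1)
    return np.exp(-0.5 * d2 / A.shape[1])  # scale by dimension to keep values usable

rng = np.random.default_rng(1)
# Placeholder utterances: target speaker vs. impostors.
target = [rng.normal(0.0, 1.0, (400, N_DIM)) for _ in range(10)]
others = [rng.normal(0.8, 1.3, (400, N_DIM)) for _ in range(10)]

X = np.array([supervector(u) for u in target + others])
y = np.array([1] * len(target) + [0] * len(others))
inv_var = 1.0 / (X.var(axis=0) + 1e-6)

svm = SVC(kernel=lambda A, B: mahalanobis_kernel(A, B, inv_var)).fit(X, y)
print(svm.predict(X[:3]))
```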

Performance comparison of Text-Independent Speaker Recognizer Using VQ and GMM (VQ와 GMM을 이용한 문맥독립 화자인식기의 성능 비교)

  • Kim, Seong-Jong;Chung, Hoon;Chung, Ik-Joo
    • Speech Sciences / v.7 no.2 / pp.235-244 / 2000
  • This paper focused on realizing text-independent speaker recognizers using the VQ and GMM algorithms and on studying the characteristics of speaker recognizers that adopt these two algorithms. Because it was difficult to ascertain theoretically the effect the two algorithms have on the speaker recognizer, we performed recognition experiments using various parameters. The experiments showed that the GMM algorithm had better recognition performance than the VQ algorithm, as follows: GMM performed better with small amounts of training data, and its recognition rate varied only slightly with the type of feature vector and the length of the input data. On the whole, GMM showed better recognition performance than VQ.
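
A minimal sketch of the two scoring rules being compared: a VQ recognizer picks the speaker whose k-means codebook gives the lowest average distortion, while a GMM recognizer picks the speaker whose model gives the highest average log-likelihood. scikit-learn, the codebook/mixture sizes, and the synthetic features are illustrative assumptions.

```python
# Minimal sketch: VQ (codebook distortion) vs. GMM (log-likelihood) speaker identification.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {spk: rng.normal(spk, 1.0, (2000, 12)) for spk in range(3)}  # per-speaker features

vq_books = {s: KMeans(n_clusters=32, n_init=4, random_state=0).fit(f)
            for s, f in train.items()}
gmms = {s: GaussianMixture(n_components=16, covariance_type="diag",
                           random_state=0).fit(f) for s, f in train.items()}

def vq_identify(frames):
    # Lower average distortion to a speaker's codebook means a better match.
    def distortion(km):
        d = np.linalg.norm(frames[:, None, :] - km.cluster_centers_[None], axis=-1)
        return d.min(axis=1).mean()
    return min(vq_books, key=lambda s: distortion(vq_books[s]))

def gmm_identify(frames):
    # Higher average log-likelihood means a better match.
    return max(gmms, key=lambda s: gmms[s].score(frames))

test = rng.normal(1, 1.0, (300, 12))   # frames drawn near speaker 1
print(vq_identify(test), gmm_identify(test))
```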

Estimation of Mixture Numbers of GMM for Speaker Identification (화자 식별을 위한 GMM의 혼합 성분의 개수 추정)

  • Lee, Youn-Jeong;Lee, Ki-Yong
    • Speech Sciences / v.11 no.2 / pp.237-245 / 2004
  • In general, the Gaussian mixture model (GMM) is used to estimate the speaker model for speaker identification. The parameters of the GMM are obtained with the expectation-maximization (EM) algorithm for maximum likelihood (ML) estimation. However, if the number of mixtures is not chosen well, those parameters are estimated inappropriately; finding the number of components is therefore essential for estimating the optimal parameters of a mixture model. In this paper, to estimate the optimal number of mixtures, we propose a method that starts from a sufficiently large number of mixtures and then reduces it by investigating the mutual information between the mixture components of the GMM. As a result, we can estimate the optimal number of mixtures. The effectiveness of the proposed method is shown by an experiment using artificial data. We also performed speaker identification with the proposed method and compared it with other approaches.
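
The paper prunes an initially large GMM using the mutual information between mixture components; that criterion is not reproduced here. As a simpler, commonly used alternative, the sketch below selects the mixture count by sweeping candidate model orders and minimizing the BIC on artificial data.

```python
# Minimal sketch: choosing the number of GMM components by a BIC sweep
# (a common alternative to the paper's mutual-information criterion).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Artificial data drawn from 3 well-separated Gaussians.
data = np.vstack([rng.normal(m, 0.5, (500, 2)) for m in (-4.0, 0.0, 4.0)])

candidates = range(1, 9)
bics = [GaussianMixture(n_components=k, random_state=0).fit(data).bic(data)
        for k in candidates]
best = candidates[int(np.argmin(bics))]
print("selected number of mixtures:", best)   # expected: 3
```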

Speech Emotion Recognition Based on GMM Using FFT and MFB Spectral Entropy (FFT와 MFB Spectral Entropy를 이용한 GMM 기반의 감정인식)

  • Lee, Woo-Seok;Roh, Yong-Wan;Hong, Hwang-Seok
    • Proceedings of the KIEE Conference / 2008.04a / pp.99-100 / 2008
  • This paper proposes Gaussian Mixture Model (GMM)-based speech emotion recognition methods using four feature parameters: 1) Fast Fourier Transform (FFT) spectral entropy, 2) delta FFT spectral entropy, 3) Mel-frequency Filter Bank (MFB) spectral entropy, and 4) delta MFB spectral entropy. We use a speech database with four emotions: anger, sadness, happiness, and neutrality. We perform speech emotion recognition experiments for each pre-defined emotion and gender. The experimental results show that the proposed emotion recognition using FFT spectral entropy and MFB spectral entropy performs better than existing GMM-based emotion recognition using energy, Zero Crossing Rate (ZCR), Linear Prediction Coefficient (LPC), and pitch parameters. We attained a maximum recognition rate of 75.1% when using MFB spectral entropy and delta MFB spectral entropy.
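
A minimal sketch of the FFT spectral entropy feature: each frame's power spectrum is normalized to a probability distribution and its entropy is taken; the MFB variant applies the same formula to mel filter-bank band energies (not implemented here). Frame length, hop size, and the test signals are illustrative assumptions.

```python
# Minimal sketch: per-frame FFT spectral entropy.
import numpy as np

def fft_spectral_entropy(signal, frame_len=512, hop=256, eps=1e-12):
    """Per-frame entropy of the normalized magnitude-squared FFT spectrum."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    ent = np.empty(n_frames)
    win = np.hanning(frame_len)
    for i in range(n_frames):
        frame = signal[i * hop:i * hop + frame_len] * win
        power = np.abs(np.fft.rfft(frame)) ** 2
        p = power / (power.sum() + eps)           # normalize to a distribution
        ent[i] = -(p * np.log2(p + eps)).sum()    # entropy in bits
    return ent

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)                      # peaky spectrum, low entropy
noise = np.random.default_rng(0).standard_normal(sr)    # flat spectrum, high entropy
print(fft_spectral_entropy(tone).mean(), fft_spectral_entropy(noise).mean())
# The delta spectral entropy is just the frame-to-frame difference, e.g. np.diff(ent).
```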

Performance of GMM and ANN as a Classifier for Pathological Voice

  • Wang, Jianglin;Jo, Cheol-Woo
    • Speech Sciences / v.14 no.1 / pp.151-162 / 2007
  • This study focuses on the classification of pathological voice using a GMM (Gaussian Mixture Model) and compares the results with previous work done with an ANN (Artificial Neural Network). Speech data from normal people and patients were collected, diagnosed, and classified into two categories. Six characteristic parameters (jitter, shimmer, NHR, SPI, APQ, and RAP) were chosen. Classification methods based on the artificial neural network and the Gaussian mixture model were then employed to discriminate the data into normal and pathological speech. The GMM method attained a 98.4% average correct classification rate on training data and 95.2% on test data. Different mixture numbers (3 to 15) were used in the GMM in order to obtain the optimal condition for classification. We also compared the average classification rates of GMM, ANN, and HMM. The proper number of mixtures in the Gaussian model needs to be investigated in future work.
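
A minimal sketch of the classifier described above: one GMM per class over the six acoustic measures, a decision by the higher log-likelihood, and the mixture count swept from 3 to 15. The random placeholder data stands in for real jitter/shimmer/NHR/SPI/APQ/RAP measurements, so the printed rates are not meaningful.

```python
# Minimal sketch: normal vs. pathological voice classification with per-class GMMs.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, (300, 6))        # placeholder 6-dim feature vectors
pathological = rng.normal(1.0, 1.5, (300, 6))  # placeholder 6-dim feature vectors
X = np.vstack([normal, pathological])
y = np.array([0] * 300 + [1] * 300)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for k in range(3, 16):
    g0 = GaussianMixture(n_components=k, covariance_type="diag", random_state=0).fit(Xtr[ytr == 0])
    g1 = GaussianMixture(n_components=k, covariance_type="diag", random_state=0).fit(Xtr[ytr == 1])
    pred = (g1.score_samples(Xte) > g0.score_samples(Xte)).astype(int)
    print(k, "test classification rate:", (pred == yte).mean())
```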
