• Title/Summary/Keyword: Speech feature

Search Results: 712

Feature Compensation Method Based on Parallel Combined Mixture Model (병렬 결합된 혼합 모델 기반의 특징 보상 기술)

  • 김우일;이흥규;권오일;고한석
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.7
    • /
    • pp.603-611
    • /
    • 2003
  • This paper proposes an effective feature compensation scheme based on a speech model for achieving robust speech recognition. Conventional model-based methods require off-line training with a noisy speech database and are not suitable for online adaptation. In the proposed scheme, we relax the need for off-line training with a noisy speech database by employing the parallel model combination technique to estimate correction factors. Applying the model combination process to the mixture model alone, as opposed to the entire HMM, makes online model combination possible. Exploiting the availability of a noise model from off-line sources, we accomplish online adaptation via MAP (Maximum A Posteriori) estimation. In addition, an online channel estimation procedure is incorporated within the proposed framework. For more efficient implementation, we propose a selective model combination that reduces the computational complexity. Representative experimental results indicate that the suggested algorithm is effective in realizing robust speech recognition under the combined adverse conditions of additive background noise and channel distortion.
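As a rough illustration of the parallel model combination idea above, the following sketch applies the well-known log-add approximation, which combines clean-speech and noise Gaussian means in the log-spectral domain. The gain value and means are illustrative assumptions, not values from the paper:

```python
import numpy as np

def pmc_log_add(mu_speech, mu_noise, gain=1.0):
    """Log-add approximation of parallel model combination (PMC):
    combine clean-speech and noise means in the log-spectral domain."""
    return np.log(np.exp(mu_speech) + gain * np.exp(mu_noise))

# Hypothetical log-spectral means for one Gaussian component.
mu_s = np.array([2.0, 1.5, 0.5])
mu_n = np.array([0.0, 0.5, 1.0])
mu_noisy = pmc_log_add(mu_s, mu_n)
```

The combined mean always dominates both inputs in each dimension, reflecting that additive noise can only raise log-spectral energy.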

Multi-Emotion Regression Model for Recognizing Inherent Emotions in Speech Data (음성 데이터의 내재된 감정인식을 위한 다중 감정 회귀 모델)

  • Moung Ho Yi;Myung Jin Lim;Ju Hyun Shin
    • Smart Media Journal
    • /
    • v.12 no.9
    • /
    • pp.81-88
    • /
    • 2023
  • Recently, online communication has been increasing with the spread of non-face-to-face services during COVID-19. In non-face-to-face situations, the other person's opinions and emotions are recognized through modalities such as text, speech, and images. Research on multimodal emotion recognition that combines various modalities is currently very active. Among these approaches, emotion recognition using speech data is attracting attention as a means of understanding emotions through sound and language information, but most recognize emotions using a single speech feature value. Because multiple emotions coexist in complex ways in a conversation, however, a method for recognizing multiple emotions is needed. Therefore, in this paper, we propose a multi-emotion regression model that extracts feature vectors after preprocessing speech data to recognize complex, inherent emotions while taking the passage of time into account.

Speaker Identification Using GMM Based on Local Fuzzy PCA (국부 퍼지 클러스터링 PCA를 갖는 GMM을 이용한 화자 식별)

  • Lee, Ki-Yong
    • Speech Sciences
    • /
    • v.10 no.4
    • /
    • pp.159-166
    • /
    • 2003
  • To reduce the high dimensionality required for training feature vectors in speaker identification, we propose an efficient GMM based on local PCA with fuzzy clustering. The proposed method first partitions the data space into several disjoint clusters by fuzzy clustering, and then performs PCA using the fuzzy covariance matrix of each cluster. Finally, the GMM for each speaker is obtained from the transformed feature vectors with reduced dimension in each cluster. Compared to the conventional GMM with diagonal covariance matrices, the proposed method needs less storage and runs faster while achieving the same performance.
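The two-step procedure described in this abstract (fuzzy memberships, then per-cluster PCA on a membership-weighted covariance) can be sketched as follows. The fuzzifier `m`, the use of fuzzy c-means-style memberships, and the choice of two components are assumptions for illustration, not details from the paper:

```python
import numpy as np

def fuzzy_local_pca(X, centers, m=2.0, n_components=2):
    """Sketch: compute fuzzy memberships to given cluster centers,
    then perform PCA per cluster using the fuzzy (membership-weighted)
    covariance matrix, returning reduced-dimension projections."""
    # Fuzzy c-means-style memberships from distances to centers.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    u = d ** (-2.0 / (m - 1.0))
    u /= u.sum(axis=1, keepdims=True)            # (N, C) memberships
    projections = []
    for c in range(centers.shape[0]):
        w = u[:, c] ** m                          # fuzzified weights
        mean = (w[:, None] * X).sum(0) / w.sum()
        diff = X - mean
        cov = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(0) / w.sum()
        vals, vecs = np.linalg.eigh(cov)
        basis = vecs[:, ::-1][:, :n_components]   # top principal directions
        projections.append(diff @ basis)          # reduced features for this cluster
    return u, projections

X = np.random.RandomState(0).randn(50, 4)
memberships, reduced = fuzzy_local_pca(X, centers=X[:2])
```

In the paper's pipeline, a GMM would then be trained on these reduced per-cluster features.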


A study on Effective Feature Parameters Comparison for Speaker Recognition (화자인식에 효과적인 특징벡터에 관한 비교연구)

  • Park TaeSun;Kim Sang-Jin;Kwang Moon;Hahn Minsoo
    • Proceedings of the KSPS conference
    • /
    • 2003.05a
    • /
    • pp.145-148
    • /
    • 2003
  • In this paper, we carried out a comparative study of various feature parameters for effective speaker recognition, such as LPC, LPCC, MFCC, log area ratios, reflection coefficients, inverse sine coefficients, and delta parameters. We also adopted cepstral liftering and cepstral mean subtraction to check their usefulness. Our recognition system is an HMM-based one using a connected four-digit Korean speech database. The various experimental results help in selecting the most effective parameter for speaker recognition.
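The two cepstral post-processing steps this abstract examines are standard and easy to sketch: cepstral mean subtraction removes a per-utterance channel bias, and sinusoidal liftering rescales cepstral coefficients. The lifter parameter `L=22` is the common default from HTK-style front ends, an assumption rather than a value from the paper:

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Subtract the per-utterance mean of each cepstral coefficient
    (a simple convolutional-channel compensation)."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

def sinusoidal_lifter(cepstra, L=22):
    """Standard sinusoidal liftering: w[n] = 1 + (L/2) * sin(pi * n / L)."""
    n = np.arange(cepstra.shape[1])
    weights = 1.0 + (L / 2.0) * np.sin(np.pi * n / L)
    return cepstra * weights

frames = np.random.RandomState(1).randn(100, 13)   # e.g. 13 MFCCs per frame
normed = cepstral_mean_subtraction(frames)
liftered = sinusoidal_lifter(normed)
```

Note that the lifter weight for coefficient 0 is exactly 1, so c0 passes through unchanged.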


Speaker Adaptation using ICA-based Feature Transformation (ICA 기반의 특징변환을 이용한 화자적응)

  • Park ManSoo;Kim Hoi-Rin
    • MALSORI
    • /
    • no.43
    • /
    • pp.127-136
    • /
    • 2002
  • The speaker adaptation technique is generally used to reduce inter-speaker differences in speech recognition. In this work, we focus on features suited to linear regression-based speaker adaptation. These are obtained by feature transformation based on independent component analysis (ICA), where the transformation matrix is learned from speaker-independent training data. When the amount of adaptation data is small, however, it is necessary to adjust the ICA-based transformation matrix estimated from a new speaker's utterances. To cope with this problem, we propose a smoothing method based on linear interpolation between the speaker-independent (SI) feature transformation matrix and the speaker-dependent (SD) feature transformation matrix. We observed that the proposed technique is effective in improving adaptation performance.
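The smoothing step described here is a plain convex combination of the two transformation matrices. The interpolation weight `alpha` (higher values trusting the speaker-dependent estimate more) is an illustrative parameter; how the paper sets it is not stated in the abstract:

```python
import numpy as np

def smooth_transform(W_si, W_sd, alpha):
    """Interpolate between the speaker-independent (SI) and the
    speaker-dependent (SD) ICA feature-transformation matrices.
    alpha in [0, 1] weights the SD matrix."""
    return alpha * W_sd + (1.0 - alpha) * W_si

W_si = np.eye(3)          # placeholder SI transformation
W_sd = 2.0 * np.eye(3)    # placeholder SD transformation from sparse data
W_smoothed = smooth_transform(W_si, W_sd, alpha=0.5)
```

With little adaptation data one would choose a small `alpha`, falling back toward the robust SI matrix.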


A Study on the Characteristics of Segmental-Feature HMM (분절특징 HMM의 특성에 관한 연구)

  • Yun Young-Sun;Jung Ho-Young
    • MALSORI
    • /
    • no.43
    • /
    • pp.163-178
    • /
    • 2002
  • In this paper, we discuss the characteristics of the segmental-feature HMM (SFHMM) and summarize previous studies of it. Several approaches in those studies reduce the number of parameters; however, as the number of parameters decreased, system performance also fell. Therefore, we consider a fast computation approach that preserves the same number of parameters. We present a new segment comparison method that speeds up the computation of the SFHMM without loss of performance. The proposed method uses a three-frame calculation rather than all five frames in a given segment. The experimental results show that the performance of the proposed system is better than that of the previous studies.
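The speed-up idea, comparing only three of a segment's five frames, can be sketched as below. The choice of first, middle, and last frames and the Euclidean frame distance are plausible assumptions; the abstract does not specify which three frames or which distance the paper uses:

```python
import numpy as np

def segment_distance_3frame(seg_a, seg_b):
    """Compare two equal-length segments (frames x dims) using only the
    first, middle, and last frames instead of all frames (speed-up sketch)."""
    idx = [0, seg_a.shape[0] // 2, seg_a.shape[0] - 1]
    return sum(np.linalg.norm(seg_a[i] - seg_b[i]) for i in idx)

seg = np.arange(15.0).reshape(5, 3)   # a toy 5-frame segment, 3 dims per frame
```

This cuts the per-segment frame comparisons from five to three, roughly a 40% reduction in that inner loop.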


Speaker Identification Using an Ensemble of Feature Enhancement Methods (특징 강화 방법의 앙상블을 이용한 화자 식별)

  • Yang, IL-Ho;Kim, Min-Seok;So, Byung-Min;Kim, Myung-Jae;Yu, Ha-Jin
    • Phonetics and Speech Sciences
    • /
    • v.3 no.2
    • /
    • pp.71-78
    • /
    • 2011
  • In this paper, we propose an approach that constructs classifier ensembles of various channel compensation and feature enhancement methods. CMN and CMVN are used as channel compensation methods. PCA, kernel PCA, greedy kernel PCA, and kernel multimodal discriminant analysis are used as feature enhancement methods. The proposed ensemble system is constructed from the combination of 15 classifiers spanning three channel compensation methods (including 'without compensation') and five feature enhancement methods (including 'without enhancement'). Experimental results show that the proposed ensemble system gives the highest average speaker identification rate in various environments (channels, noises, and sessions).
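The 15-classifier grid (3 compensation options x 5 enhancement options) and one simple way to fuse their decisions can be sketched as below. Fusion by summing per-speaker scores is an assumption for illustration; the abstract does not say how the ensemble combines its members (e.g. voting vs. score fusion):

```python
from itertools import product

# The 3 x 5 grid of front-end pipelines described in the abstract.
compensation = ["none", "CMN", "CMVN"]
enhancement = ["none", "PCA", "kernelPCA", "greedyKPCA", "KMDA"]
classifiers = list(product(compensation, enhancement))  # 15 pipelines

def ensemble_identify(scores_per_classifier):
    """Fuse per-speaker scores from each classifier by summation
    (an assumed fusion rule) and return the best-scoring speaker."""
    totals = {}
    for scores in scores_per_classifier:
        for speaker, s in scores.items():
            totals[speaker] = totals.get(speaker, 0.0) + s
    return max(totals, key=totals.get)
```

For example, two classifiers scoring `{"A": 1, "B": 2}` and `{"A": 3, "B": 1}` would jointly identify speaker "A".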


Global Covariance based Principal Component Analysis for Speaker Identification (화자식별을 위한 전역 공분산에 기반한 주성분분석)

  • Seo, Chang-Woo;Lim, Young-Hwan
    • Phonetics and Speech Sciences
    • /
    • v.1 no.1
    • /
    • pp.69-73
    • /
    • 2009
  • This paper proposes an efficient global covariance-based principal component analysis (GCPCA) for speaker identification. Principal component analysis (PCA) is a feature extraction method that reduces the dimension of the feature vectors and the correlation among them by projecting the original feature space onto a small subspace through a transformation. However, it requires a large amount of training data when the eigenvalue and eigenvector matrices are found from a full covariance matrix for each speaker. The proposed method first calculates a global covariance matrix using the training data of all speakers, and then finds the eigenvalue matrix and the corresponding eigenvector matrix from this global covariance matrix. Compared to conventional PCA and Gaussian mixture model (GMM) methods, the proposed method shows better performance while requiring less storage space and lower complexity in speaker identification.
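The core of the GCPCA idea, pooling all speakers' frames into one covariance estimate and eigendecomposing it once, can be sketched as follows (a minimal illustration, with made-up data shapes rather than the paper's features):

```python
import numpy as np

def gcpca_basis(all_speaker_frames, n_components):
    """Pool feature frames from every speaker, compute a single global
    covariance matrix, and return its top principal directions."""
    X = np.vstack(all_speaker_frames)        # stack all speakers' frames
    X = X - X.mean(axis=0)                   # center once, globally
    cov = (X.T @ X) / (len(X) - 1)           # one shared covariance
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]           # sort eigenvalues descending
    return vecs[:, order[:n_components]]

# Toy data: three "speakers", 30 frames each, 8-dimensional features.
speakers = [np.random.RandomState(s).randn(30, 8) for s in range(3)]
basis = gcpca_basis(speakers, n_components=3)
projected = speakers[0] @ basis              # reduced-dimension features
```

Because only one covariance matrix is estimated and stored instead of one per speaker, both the data requirement and the storage drop, which is the abstract's efficiency argument.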


Phonological Error Patterns: Clinical Aspects on Coronal Feature (음운 오류 패턴: 설정성 자질의 임상적 고찰)

  • Kim, Min-Jung;Lee, Sung-Eun
    • Phonetics and Speech Sciences
    • /
    • v.2 no.4
    • /
    • pp.239-244
    • /
    • 2010
  • The purpose of this study is to investigate two phonological error patterns involving the coronal feature in children with functional articulation disorders and to compare them with those of typically developing children. We tested 120 children with functional articulation disorders and 100 typically developing children from 2 to 4 years of age with the 'Assessment of Phonology & Articulation for Children (APAC)'. The results were as follows: (1) 37 disordered children substituted [+coronal] consonants for [-coronal] consonants (fronting of velars), and 9 disordered children substituted [-coronal] consonants for [+coronal] consonants (backing to velars). (2) These two phonological patterns were affected by the articulatory place of the following phoneme. (3) The fronting pattern of children with articulation disorders was similar to that of typically developing children, but their backing pattern differed. These results show the clinical usefulness of the coronal feature in phonological pattern analysis, the need for articulatory assessment across various phonetic contexts, and the importance of error contexts in clinical judgment.


Rank-weighted reconstruction feature for a robust deep neural network-based acoustic model

  • Chung, Hoon;Park, Jeon Gue;Jung, Ho-Young
    • ETRI Journal
    • /
    • v.41 no.2
    • /
    • pp.235-241
    • /
    • 2019
  • In this paper, we propose a rank-weighted reconstruction feature to improve the robustness of a feed-forward deep neural network (FFDNN)-based acoustic model. In the FFDNN-based acoustic model, an input feature is constructed by vectorizing a submatrix created by slicing the feature vectors of frames within a context window. In this type of feature construction, the context window size is important because it determines the amount of trivial or discriminative information, such as redundancy or temporal context, in the input features. However, it is questionable whether a single parameter can sufficiently control this quantity of information. Therefore, we investigated input feature construction from the perspectives of rank and nullity, and propose a rank-weighted reconstruction feature that retains the speech information components while reducing the trivial components. The proposed method was evaluated on the TIMIT phone recognition and Wall Street Journal (WSJ) tasks. It reduced the phone error rate on TIMIT from 18.4% to 18.0%, and the word error rate on WSJ from 4.70% to 4.43%.
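One generic way to realize "rank-weighted reconstruction" of the context-window submatrix is via SVD: decompose the frames-by-dimensions submatrix, scale each singular component by a rank-dependent weight, and reconstruct before vectorizing. The decaying weight schedule below is an illustrative assumption; the paper's actual weighting scheme is not given in the abstract:

```python
import numpy as np

def rank_weighted_reconstruction(window, weights):
    """SVD the context-window submatrix (frames x dims), scale each
    singular component by a rank-dependent weight, and reconstruct."""
    U, s, Vt = np.linalg.svd(window, full_matrices=False)
    # (U * w) @ Vt is equivalent to U @ diag(w) @ Vt for vector w.
    return (U * (s * weights)) @ Vt

# Toy context window: 11 frames of 40-dimensional features.
window = np.random.RandomState(0).randn(11, 40)
# Assumed schedule: de-emphasize lower-rank (smaller-singular-value) components.
weights = 1.0 / (1.0 + np.arange(11))
feature = rank_weighted_reconstruction(window, weights).ravel()  # vectorized input
```

With all weights equal to one, the reconstruction is exact, so the scheme strictly generalizes the plain stacked-frame input.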