• Title/Summary/Keyword: 특징 보상 기법 (feature compensation technique)


Speech enhancement method based on feature compensation gain for effective speech recognition in noisy environments (잡음 환경에 효과적인 음성인식을 위한 특징 보상 이득 기반의 음성 향상 기법)

  • Bae, Ara;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.38 no.1
    • /
    • pp.51-55
    • /
    • 2019
  • This paper proposes a speech enhancement method that utilizes the feature compensation gain for robust speech recognition in noisy environments. The gain is obtained from the PCGMM (Parallel Combined Gaussian Mixture Model)-based feature compensation method employing variational model composition. The experimental results show that the proposed method significantly outperforms the conventional front-end algorithms and our previous work over various background noise types and SNR (Signal to Noise Ratio) conditions in a mismatched ASR (Automatic Speech Recognition) system condition. The computational complexity is significantly reduced by employing a noise model selection technique while maintaining speech recognition performance at a similar level.
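The gain-based enhancement idea in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `compensation_gain` and `enhance` are hypothetical names, and the clean-power estimate is assumed to be supplied by a separate feature compensation step (e.g. the PCGMM-based method).

```python
import numpy as np

def compensation_gain(noisy_power, clean_power_est, floor=1e-3):
    """Per-bin gain derived from a feature-compensated clean-power estimate.
    Clipped to [floor, 1] so the enhancer attenuates but never amplifies."""
    g = np.sqrt(np.clip(clean_power_est / np.maximum(noisy_power, 1e-12), 0.0, 1.0))
    return np.maximum(g, floor)

def enhance(noisy_spectrum, gain):
    """Apply the gain to the complex noisy spectrum (Wiener-style filtering)."""
    return gain * noisy_spectrum
```

The gain floor is a common practical choice that limits musical-noise artifacts at the cost of residual noise.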

Speech Enhancement Based on Feature Compensation for Independently Applying to Different Types of Speech Recognition Systems (이기종 음성 인식 시스템에 독립적으로 적용 가능한 특징 보상 기반의 음성 향상 기법)

  • Kim, Wooil
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.10
    • /
    • pp.2367-2374
    • /
    • 2014
  • This paper proposes a speech enhancement method that can be independently applied to different types of speech recognition systems. Feature compensation methods are well known to be effective front-end algorithms for robust speech recognition in noisy environments. To be effective, however, the feature types and speech model employed by a feature compensation method must match those of the speech recognition system, so such methods cannot be successfully employed by a speech recognition system with an "unknown" specification, such as a commercial speech recognition engine. In this paper, a speech enhancement method based on the PCGMM-based feature compensation method is proposed. The experimental results show that the proposed method significantly outperforms the conventional front-end algorithms for unknown speech recognition systems over various background noise conditions.

Incorporation of IMM-based Feature Compensation and Uncertainty Decoding (IMM 기반 특징 보상 기법과 불확실성 디코딩의 결합)

  • Kang, Shin-Jae;Han, Chang-Woo;Kwon, Ki-Soo;Kim, Nam-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.6C
    • /
    • pp.492-496
    • /
    • 2012
  • This paper presents a decoding technique for speech recognition that uses uncertainty information from a feature compensation method to improve speech recognition performance in low SNR conditions. Traditional feature compensation algorithms have difficulty estimating clean feature parameters in adverse environments because they focus on point estimation of the desired features, and such point estimates degrade speech recognition performance when incorrectly estimated features enter the decoder. In this paper, we pass the uncertainty information from a well-known feature compensation method, such as IMM, to the recognition engine. The applied technique shows better performance on the Aurora-2 database.
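A common formulation of uncertainty decoding, consistent with the abstract above, treats the compensated feature as a Gaussian posterior and adds its error variance to the acoustic-model variance when scoring. This is a hedged sketch with hypothetical function names, not the paper's exact derivation.

```python
import numpy as np

def gaussian_loglik(x, mean, var):
    """Diagonal-covariance Gaussian log-likelihood."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def uncertain_loglik(x_hat, feat_var, mean, var):
    """Uncertainty decoding: the front-end supplies not only the point
    estimate x_hat but also its error variance feat_var, which is added
    to the model variance before evaluating the state likelihood."""
    return gaussian_loglik(x_hat, mean, var + feat_var)
```

With zero feature uncertainty this reduces to conventional point-estimate decoding; with large uncertainty, unreliable frames are penalized less in the likelihood.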

Minimum Classification Error Training to Improve Discriminability of PCMM-Based Feature Compensation (PCMM 기반 특징 보상 기법에서 변별력 향상을 위한 Minimum Classification Error 훈련의 적용)

  • Kim Wooil;Ko Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.1
    • /
    • pp.58-68
    • /
    • 2005
  • In this paper, we propose a scheme to improve the discriminative property of the feature compensation method for robust speech recognition under noisy environments. The estimation of the noisy speech model used in existing feature compensation methods does not guarantee the computation of posterior probabilities that reliably discriminate among the Gaussian components. Estimation of posterior probabilities is a crucial step in determining the discriminative factor of the Gaussian models, which in turn determines the intelligibility of the restored speech signals. The proposed scheme employs minimum classification error (MCE) training for estimating the parameters of the noisy speech model. To apply MCE training, we propose to identify and determine the 'competing components' that are expected to affect the discriminative ability. The proposed method is applied to feature compensation based on the parallel combined mixture model (PCMM). The performance is examined on the Aurora 2.0 database and on speech recorded inside a car under real driving conditions. The experimental results show improved recognition performance in both simulated environments and real-life conditions, verifying the effectiveness of the proposed scheme for increasing the performance of robust speech recognition systems.
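The MCE criterion referenced above can be sketched in its standard form: a misclassification measure comparing the correct class against its strongest competitor, smoothed by a sigmoid. Function names and the single-competitor simplification are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(d, gamma=1.0):
    return 1.0 / (1.0 + np.exp(-gamma * d))

def mce_loss(scores, correct_idx, gamma=1.0):
    """Smoothed classification-error loss for one training sample.

    scores: discriminant scores (e.g. log-likelihoods) for each class.
    The misclassification measure d compares the strongest competitor
    against the correct class; sigmoid(d) approximates the 0/1 error."""
    competitors = np.delete(scores, correct_idx)
    d = np.max(competitors) - scores[correct_idx]  # d > 0 means an error
    return sigmoid(d, gamma)

def competing_components(scores, correct_idx, top_n=1):
    """Indices of the top-N competitors most likely to hurt discrimination."""
    order = [i for i in np.argsort(scores)[::-1] if i != correct_idx]
    return order[:top_n]
```

In MCE training, gradients of this loss with respect to the model parameters (here, the noisy speech model) push the correct class's score up and the competitors' scores down.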

Luminance Compensation using Feature Points and Histogram for VR Video Sequence (특징점과 히스토그램을 이용한 360 VR 영상용 밝기 보상 기법)

  • Lee, Geon-Won;Han, Jong-Ki
    • Journal of Broadcast Engineering
    • /
    • v.22 no.6
    • /
    • pp.808-816
    • /
    • 2017
  • 360 VR video systems have become important for providing an immersive experience to viewers. Such a system consists of stitching, projection, compression, inverse projection, and viewport extraction. In this paper, an efficient luminance compensation technique for 360 VR video sequences is proposed, in which feature extraction and histogram equalization algorithms are utilized. The proposed luminance compensation algorithm enhances the performance of stitching in a 360 VR system. The simulation results show that the proposed technique increases the quality of the displayed image.
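The histogram-based part of the compensation above can be illustrated with classic histogram matching, which maps one view's luminance distribution onto another's so overlapping stitch regions agree in brightness. This is a generic sketch under that assumption; `match_histogram` is a hypothetical name, and the paper additionally uses feature points, which are omitted here.

```python
import numpy as np

def match_histogram(source, reference):
    """Map source pixel values so their empirical CDF matches the
    reference's (classic histogram matching between two views)."""
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)  # target value per source level
    lut = dict(zip(s_vals.tolist(), mapped.tolist()))
    return np.array([lut[v] for v in source.ravel().tolist()]).reshape(source.shape)
```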

Feature Compensation Method Based on Parallel Combined Mixture Model (병렬 결합된 혼합 모델 기반의 특징 보상 기술)

  • Kim, Wooil;Lee, Heungkyu;Kwon, Ohil;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.7
    • /
    • pp.603-611
    • /
    • 2003
  • This paper proposes an effective feature compensation scheme based on a speech model for achieving robust speech recognition. The conventional model-based method requires off-line training with a noisy speech database and is not suitable for online adaptation. In the proposed scheme, we relax the need for off-line training with a noisy speech database by employing the parallel model combination technique to estimate the correction factors. Applying the model combination process to the mixture model alone, as opposed to the entire HMM, makes online model combination possible. Exploiting the availability of a noise model from off-line sources, we accomplish online adaptation via MAP (Maximum A Posteriori) estimation. In addition, an online channel estimation procedure is included within the proposed framework. For more efficient implementation, we propose a selective model combination, which reduces the computational complexity. The representative experimental results indicate that the suggested algorithm is effective in realizing robust speech recognition under the combined adverse conditions of additive background noise and channel distortion.
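The model combination step above rests on the standard PMC observation that speech and noise add in the linear spectral domain, so log-domain means combine via a log-add. A minimal sketch of that mismatch function (the function name and the simple gain term are illustrative assumptions):

```python
import numpy as np

def pmc_combine_mean(clean_log_mean, noise_log_mean, gain_db=0.0):
    """Log-add approximation at the heart of parallel model combination:
    the combined (noisy-speech) mean in the log-spectral domain is
    log(g * exp(mu_speech) + exp(mu_noise)), with g a level-matching gain."""
    g = 10.0 ** (gain_db / 10.0)
    return np.log(g * np.exp(clean_log_mean) + np.exp(noise_log_mean))
```

Applying this per mixture component to a GMM (rather than to every HMM state) is what keeps the combination cheap enough for online use, as the abstract notes.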

PCA-based Variational Model Composition Method for Robust Speech Recognition with Time-Varying Background Noise (시변 잡음에 강인한 음성 인식을 위한 PCA 기반의 Variational 모델 생성 기법)

  • Kim, Wooil
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.12
    • /
    • pp.2793-2799
    • /
    • 2013
  • This paper proposes an effective feature compensation method to improve speech recognition performance under time-varying background noise. The proposed method employs principal component analysis to improve the variational model composition method, and is used to generate multiple environmental models for the PCGMM-based feature compensation scheme. Experimental results show that the proposed scheme is more effective at improving speech recognition accuracy in various SNR conditions of background music than the conventional front-end methods, with a 12.14 % average relative improvement in WER over the previous variational model composition method.
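One plausible reading of the PCA-based composition above is to generate noise-model variants by shifting the noise mean along the principal axes of observed noise frames, so the model set covers the directions in which the noise actually varies. The sketch below is an assumption-laden illustration (`pca_variants` is a hypothetical name), not the paper's algorithm.

```python
import numpy as np

def pca_variants(noise_frames, n_components=2, scales=(-1.0, 0.0, 1.0)):
    """Generate multiple noise-mean variants by shifting the sample mean
    along the principal axes of the observed noise frames (rows = frames),
    scaled by the per-axis standard deviation."""
    mean = noise_frames.mean(axis=0)
    centered = noise_frames - mean
    # principal axes and singular values from the SVD of the centered data
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    std = s[:n_components] / np.sqrt(len(noise_frames))
    variants = [mean + a * std[k] * vt[k]
                for k in range(n_components) for a in scales]
    return np.array(variants)
```

Each variant mean would then seed one environmental model in the multiple-model compensation scheme.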

PCMM-Based Feature Compensation Method Using Multiple Model to Cope with Time-Varying Noise (시변 잡음에 대처하기 위한 다중 모델을 이용한 PCMM 기반 특징 보상 기법)

  • Kim, Wooil;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.6
    • /
    • pp.473-480
    • /
    • 2004
  • In this paper we propose an effective feature compensation scheme based on the speech model in order to achieve robust speech recognition. The proposed feature compensation method is based on the parallel combined mixture model (PCMM). Previous PCMM work requires a highly sophisticated procedure to estimate the combined mixture model so as to reflect the time-varying noisy conditions at every utterance. The proposed schemes cope with time-varying background noise by interpolating multiple mixture models. We apply the 'data-driven' method to PCMM for more reliable model combination and introduce a frame-synched version for a posteriori estimation of the environment. In order to reduce the computational complexity due to the multiple models, we propose a mixture sharing technique: statistically similar Gaussian components are selected and smoothed versions are generated for sharing. The performance is examined on Aurora 2.0 and on a speech corpus recorded while driving a car. The experimental results indicate that the proposed schemes are effective in realizing robust speech recognition and in reducing the computational complexity under both simulated environments and real-life conditions.
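The two devices in the abstract above, model interpolation and mixture sharing, can be sketched generically. Function names, the simple symmetric distance (a stand-in for the paper's statistical similarity measure), and the equal-weight merge are all illustrative assumptions.

```python
import numpy as np

def interpolate_models(means_per_model, env_posteriors):
    """Interpolate the mean sets of multiple environment models, weighted
    by the posterior probability of each environment for the current frame
    or utterance."""
    w = np.asarray(env_posteriors) / np.sum(env_posteriors)
    return np.tensordot(w, np.asarray(means_per_model), axes=1)

def share_similar(mean_a, var_a, mean_b, var_b, threshold=0.5):
    """Merge two diagonal Gaussians into one smoothed shared component when
    a symmetric variance-weighted distance falls below the threshold;
    return None when they are too dissimilar to share."""
    d = 0.5 * np.sum((mean_a - mean_b) ** 2 * (1.0 / var_a + 1.0 / var_b))
    if d < threshold:
        return 0.5 * (mean_a + mean_b), 0.5 * (var_a + var_b)
    return None
```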

Seam Carving based Occlusion Region Compensation Algorithm (심카빙 기반 가려짐 영역 보상 기법)

  • An, Jae-Woo;Yoo, Ji-Sang
    • Journal of Broadcast Engineering
    • /
    • v.16 no.4
    • /
    • pp.573-583
    • /
    • 2011
  • In this paper, we propose an occlusion compensation algorithm for virtual view generation. In general, since an occlusion region is recovered from neighboring pixels by taking their mean or median value, the visual characteristics of the given image are not considered and consequently the accuracy of the compensated occlusion regions is not guaranteed. To solve this problem, we propose an algorithm that considers the primary visual characteristics of the given image and compensates the occluded regions using seam carving. In the proposed algorithm, we first use a Sobel mask to obtain the edge map of the image, binarize it to 0 or 1, and finally apply a thinning process. Then, the energy patterns of the original and thinned edge maps, obtained by the modified seam carving method, are used to compensate the occlusion regions. Through experiments with many test images, we verify that the proposed algorithm performs better than conventional algorithms.
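The building blocks named above, a Sobel-style edge-energy map and a lowest-energy seam, can be sketched in their textbook form. This shows standard seam carving machinery only; the paper's modified version and the occlusion-filling step are not reproduced, and the function names are hypothetical.

```python
import numpy as np

def sobel_energy(img):
    """Gradient-magnitude energy map from central differences (edge-like
    structure gets high energy, flat regions low energy)."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.abs(gx) + np.abs(gy)

def min_vertical_seam(energy):
    """Dynamic-programming seam: one column index per row tracing the
    lowest-energy 8-connected vertical path through the energy map."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for r in range(1, h):
        left = np.r_[np.inf, cost[r - 1, :-1]]
        right = np.r_[cost[r - 1, 1:], np.inf]
        cost[r] += np.minimum(np.minimum(left, cost[r - 1]), right)
    seam = [int(np.argmin(cost[-1]))]
    for r in range(h - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(0, c - 1), min(w, c + 2)
        seam.append(lo + int(np.argmin(cost[r, lo:hi])))
    return seam[::-1]
```

A low-energy seam avoids edges, which is why seam-guided filling preserves the image's dominant visual structure better than mean or median filling.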

A study on Gaussian mixture model deep neural network hybrid-based feature compensation for robust speech recognition in noisy environments (잡음 환경에 효과적인 음성 인식을 위한 Gaussian mixture model deep neural network 하이브리드 기반의 특징 보상)

  • Yoon, Ki-mu;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.37 no.6
    • /
    • pp.506-511
    • /
    • 2018
  • This paper proposes a GMM (Gaussian Mixture Model)-DNN (Deep Neural Network) hybrid-based feature compensation method for effective speech recognition in noisy environments. In the proposed algorithm, the posterior probability for the conventional GMM-based feature compensation method is calculated using a DNN. The experimental results using the Aurora 2.0 framework and database demonstrate that the proposed GMM-DNN hybrid-based feature compensation method is more effective in known and unknown noisy environments than the GMM-based method. In particular, the experiments in the unknown environments show a 9.13 % relative improvement in average WER (Word Error Rate) and considerable improvements in lower SNR (Signal to Noise Ratio) conditions such as 0 dB and 5 dB.
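The hybrid idea above can be sketched as the usual MMSE-style compensation in which a softmax output replaces the GMM component posterior. This is a minimal sketch under that assumption; `compensate` is a hypothetical name and the actual DNN architecture is omitted.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis (stand-in for the
    DNN's output layer producing component posteriors)."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def compensate(y, posteriors, noisy_means, clean_means):
    """MMSE-style feature compensation: subtract the posterior-weighted
    bias between noisy-model and clean-model mixture means from the
    observed feature y. In the hybrid method, `posteriors` comes from a
    DNN softmax instead of GMM component posteriors."""
    bias = posteriors @ (noisy_means - clean_means)
    return y - bias
```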