• Title/Summary/Keyword: Speech Enhancement

Objective Evaluation of Beamforming Techniques for Hearing Devices with Two-channel Microphone (2채널 마이크로폰을 이용한 청각 기기에서의 빔포밍에 대한 객관적 검증)

  • Cho, Kyeong-Won;Han, Jong-Hee;Hong, Sung-Hwa;Lee, Sang-Min;Kim, Dong-Wook;Kim, In-Young;Kim, Sun-I.
    • Journal of Biomedical Engineering Research
    • /
    • v.32 no.3
    • /
    • pp.198-206
    • /
    • 2011
  • Hearing devices such as cochlear implants and the Vibrant Soundbridge aim to deliver better sound to their users. In hearing devices, several beamformers, including the conventional directional microphone, can be applied for noise reduction. Each beamformer has a different directional response, which can change sound intelligibility or quality for listeners. We therefore investigated the performance of three beamformers, the first- and second-order directional microphones and a broadband beamformer (BBF), in a computer simulation assuming a hearing-device microphone configuration. We also calculated objective measurements that have been used to evaluate speech enhancement algorithms. In the simulation, a single speech signal and a single babble noise were propagated from the front and from $135^{\circ}$ azimuth, respectively. The microphones were configured as an end-fire array, and the spacing was varied for comparison. With 3 cm spacing, the BBF achieved about 3 dB higher enhanced SNR than the directional microphones. However, the enhancement in segmental SNR and frequency-weighted segmental SNR was similar between the first-order directional microphone and the broadband beamformer. In addition, when steady-state noise was used, the broadband beamformer showed increased performance and had the highest enhanced SNR and segmental SNR.
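The first-order directional microphone evaluated above can be illustrated with a short sketch. This is a generic delay-and-subtract model assuming the 3 cm end-fire spacing mentioned in the abstract; the frequency and delay choices are illustrative, not the paper's exact configuration:

```python
import cmath
import math

def first_order_dm_response(theta_deg, f=1000.0, d=0.03, c=343.0):
    """Magnitude response of a two-mic first-order differential microphone
    (end-fire, delay-and-subtract).  Choosing the internal delay T = d/c
    places the null at 180 degrees, giving a cardioid pattern."""
    w = 2.0 * math.pi * f
    tau = d * math.cos(math.radians(theta_deg)) / c  # inter-mic propagation delay
    T = d / c                                        # internal delay before subtraction
    return abs(1.0 - cmath.exp(-1j * w * (tau + T)))

# demo: strong response toward the front target, a null toward the rear
front = first_order_dm_response(0.0)
rear = first_order_dm_response(180.0)
```

A second-order directional microphone would cascade two such stages, sharpening the pattern at the cost of low-frequency sensitivity, which is one reason the abstract compares the orders separately.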

Part-Of-Speech Tagging using multiple sources of statistical data (이종의 통계정보를 이용한 품사 부착 기법)

  • Cho, Seh-Yeong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.4
    • /
    • pp.501-506
    • /
    • 2008
  • Statistical POS tagging is prone to error because of the inherent limitations of statistical data, especially when only a single source of data is used. It is therefore widely agreed that further improvement lies in exploiting various knowledge sources. However, these data sources are bound to be inconsistent with each other. This paper shows that a maximum entropy model can be applied to Korean POS tagging. We use n-gram data and trigger-pair data as the knowledge sources, and we show how the perplexity measure varies when the two knowledge sources are combined by the maximum entropy method. In the experiment, a trigram model achieved 94.9% accuracy with a Hidden Markov Model, which rose to 95.6% when combined with trigger-pair data using the maximum entropy method. This clearly shows the possibility of further improvement as more knowledge sources are developed and combined by the ME method.
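The idea of combining two inconsistent probability sources and checking the effect on perplexity can be sketched as follows. The weighted geometric mean used here is a toy stand-in for the paper's maximum-entropy combination, and the tag distributions are invented for illustration:

```python
import math

def log_linear_combine(p1, p2, lam=0.5):
    """Combine two tag distributions by a weighted geometric mean and
    renormalize -- a toy stand-in for maximum-entropy combination of
    inconsistent knowledge sources (e.g. n-gram vs. trigger-pair data)."""
    unnorm = {t: (p1[t] ** lam) * (p2[t] ** (1.0 - lam)) for t in p1}
    z = sum(unnorm.values())
    return {t: v / z for t, v in unnorm.items()}

def perplexity(model, tags):
    """Perplexity of a tag sequence under a (toy, context-free) model."""
    logp = sum(math.log2(model[t]) for t in tags)
    return 2.0 ** (-logp / len(tags))

# hypothetical n-gram and trigger-pair estimates over three tags
p_ngram = {"N": 0.6, "V": 0.3, "J": 0.1}
p_trigger = {"N": 0.5, "V": 0.4, "J": 0.1}
combined = log_linear_combine(p_ngram, p_trigger)
ppl = perplexity(combined, ["N", "N", "V"])
```

A real ME tagger would learn the feature weights from data rather than fix them at 0.5, but the mechanics of blending and renormalizing are the same.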

An Improvement of Stochastic Feature Extraction for Robust Speech Recognition (강인한 음성인식을 위한 통계적 특징벡터 추출방법의 개선)

  • 김회린;고진석
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.2
    • /
    • pp.180-186
    • /
    • 2004
  • The presence of noise in speech signals degrades the performance of recognition systems in which there are mismatches between the training and test environments. To make a speech recognizer robust, it is necessary to compensate for these mismatches. In this paper, we studied an improvement of stochastic feature extraction based on band SNR for robust speech recognition. First, we proposed a modified version of the multi-band spectral subtraction (MSS) method that adjusts the subtraction level of the noise spectrum according to the band SNR. In the proposed method, referred to as M-MSS, a noise normalization factor is newly introduced to finely control the over-estimation factor depending on the band SNR. We also modified the architecture of the stochastic feature extraction (SFE) method, obtaining better performance when the spectral subtraction is applied in the power-spectrum domain rather than in the mel-scale domain; this method is denoted M-SFE. Finally, we applied the M-MSS method to the modified stochastic feature extraction structure, which is denoted the MMSS-MSFE method. The proposed methods were evaluated on isolated-word recognition under various noise environments. The average error rates of the M-MSS, M-SFE and MMSS-MSFE methods were reduced by 18.6%, 15.1% and 33.9%, respectively, relative to the ordinary spectral subtraction (SS) method. From these results, we conclude that the proposed methods are good candidates for robust feature extraction in noisy speech recognition.
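To make the band-SNR-dependent subtraction concrete, here is a minimal sketch in the spirit of M-MSS. The taper shape, thresholds, and the 1% spectral floor are illustrative assumptions (Berouti-style), not the paper's actual parameters or its noise normalization factor:

```python
import math

def band_oversub_factor(band_snr_db, alpha0=4.0):
    # Hypothetical taper: subtract more aggressively in low-SNR bands,
    # back off toward plain subtraction in high-SNR bands.
    if band_snr_db < -5.0:
        return alpha0 + 0.75
    if band_snr_db > 20.0:
        return 1.0
    return alpha0 - 0.15 * band_snr_db

def mss_subtract(power_bands, noise_bands):
    """Per-band power spectral subtraction with an SNR-dependent
    over-subtraction factor and a -20 dB spectral floor."""
    out = []
    for p, n in zip(power_bands, noise_bands):
        snr_db = 10.0 * math.log10(max(p, 1e-12) / max(n, 1e-12))
        alpha = band_oversub_factor(snr_db)
        out.append(max(p - alpha * n, 0.01 * p))
    return out

# demo: a high-SNR band keeps most of its power, a low-SNR band hits the floor
enhanced = mss_subtract([10.0, 1.0], [1.0, 0.5])
```

The floor prevents negative powers and limits musical noise; M-MSS additionally scales the estimated noise per band, which this sketch omits.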

Cepstral Distance and Log-Energy Based Silence Feature Normalization for Robust Speech Recognition (강인한 음성인식을 위한 켑스트럼 거리와 로그 에너지 기반 묵음 특징 정규화)

  • Shen, Guang-Hu;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.4
    • /
    • pp.278-285
    • /
    • 2010
  • The difference between training and test environments is one of the major causes of performance degradation in noisy speech recognition, and many silence feature normalization methods have been proposed to resolve this mismatch. The conventional silence feature normalization method achieves high classification performance at high SNR, but its performance degrades at low SNR because of the low accuracy of speech/silence classification. On the other hand, the cepstral distance represents the characteristic distribution of speech and silence (or noise) well at low SNR. In this paper, we propose a cepstral distance and log-energy based silence feature normalization (CLSFN) method that uses both the log energy and the cepstral Euclidean distance to classify speech and silence for better performance. Because the proposed method combines the merit of the log energy, which is less affected by noise at high SNR, with the merit of the cepstral distance, which discriminates speech from silence accurately at low SNR, the classification accuracy is expected to improve. Experimental results showed that the proposed CLSFN improved recognition performance compared with the conventional SFN-I/II and CSFN methods in all noisy environments.
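A hypothetical two-cue decision rule in the spirit of CLSFN can be sketched as below. The thresholds, the two-coefficient cepstra, and the OR-combination are illustrative assumptions, not the paper's exact formulation:

```python
import math

def frame_log_energy(frame):
    """Average log power of a frame, in dB."""
    return 10.0 * math.log10(sum(x * x for x in frame) / len(frame) + 1e-12)

def cepstral_distance(c1, c2):
    """Euclidean distance between two cepstral vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def is_speech(frame, cep, silence_cep, energy_thresh=-30.0, dist_thresh=1.0):
    # Declare speech when EITHER cue fires: log energy (reliable at high
    # SNR) or cepstral distance to a silence template (reliable at low SNR).
    return (frame_log_energy(frame) > energy_thresh or
            cepstral_distance(cep, silence_cep) > dist_thresh)

# demo frames: a loud frame, and a near-silent frame close to the template
speech_flag = is_speech([0.5] * 160, [2.0, 1.5], [0.1, 0.1])
silence_flag = is_speech([0.001] * 160, [0.11, 0.09], [0.1, 0.1])
```

In the actual method, frames classified as silence then have their features normalized (e.g. flooded with small values) so that training and test silence regions match.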

Speech Enhancement using Spectral Subtraction and a Two-Channel Beamformer (Spectral Subtraction과 Two Channel Beamfomer를 이용한 음성 강조 기법)

  • 김학윤
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.1
    • /
    • pp.38-44
    • /
    • 1999
  • In this paper, a new spectral subtraction technique with two microphone inputs is proposed. In conventional spectral subtraction using a single microphone, the averaged noise spectrum is subtracted from the observed short-time input spectrum. This reduces only the mean value of the noise spectrum; the component varying around the mean remains intact. In the proposed method, the short-time noise spectrum excluding the speech component is estimated by introducing the blocking matrix used in the Griffiths-Jim-type adaptive beamformer with two microphone inputs, combined with a spectral compensation technique. A simulation was conducted to verify the effectiveness of the method.
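The blocking-matrix idea can be shown in a few lines. In this toy sketch (a simplification of the Griffiths-Jim structure, with an invented two-tone demo signal), the difference channel cancels the time-aligned target and keeps the noise, yielding a short-time noise spectrum to subtract from the sum channel:

```python
import cmath
import math

def dft(x):
    """Naive DFT (adequate for a short demo frame)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def two_channel_subtract(x1, x2):
    """Subtract the magnitude of the blocking-matrix (difference) channel
    from the magnitude of the sum channel, bin by bin."""
    X = dft([(a + b) / 2.0 for a, b in zip(x1, x2)])     # target + noise
    Nref = dft([(a - b) / 2.0 for a, b in zip(x1, x2)])  # noise reference
    return [max(abs(Xk) - abs(Nk), 0.0) for Xk, Nk in zip(X, Nref)]

# demo: target at bin 1 arrives in phase at both mics; noise at bin 2 arrives
# out of phase, so it survives the blocking matrix and gets subtracted
s = [math.cos(2.0 * math.pi * n / 8.0) for n in range(8)]
v = [0.5 * math.cos(2.0 * math.pi * 2.0 * n / 8.0) for n in range(8)]
mag = two_channel_subtract([a + b for a, b in zip(s, v)],
                           [a - b for a, b in zip(s, v)])
```

Unlike the single-channel average, this noise estimate is updated every frame, so it tracks the fluctuating component around the noise mean as well.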

A Study on Improving Voice Quality and Pitch Searching of the VSELP Coder (VSELP 부호화기의 음질 및 주기탐색 개선에 관한 연구)

  • 성기철;문상재
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.19 no.4
    • /
    • pp.740-749
    • /
    • 1994
  • This paper presents methods for improving the performance of the VSELP speech coder. A hybrid method is employed for pitch-period searching; it reduces the pitch searching time as well as the pitch detection errors caused by the quantization error of the encoder's excitation signal in the VSELP coder. The paper also adopts a pitch-period enhancement filter and an adaptive first-order filter. As a result, the pitch-period searching time is reduced to 26%, and the MOS of the reconstructed speech increases from 3.19 to 4.04.
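For readers unfamiliar with pitch-period searching, here is a generic open-loop sketch based on normalized autocorrelation. This illustrates the search being sped up, not VSELP's own procedure, which is a closed-loop adaptive-codebook search; the frame length and pitch range are illustrative:

```python
import math

def autocorr_pitch_period(x, fs=8000, f_min=60, f_max=400):
    """Open-loop pitch-period search: pick the lag (within a plausible
    pitch range) that maximizes the normalized autocorrelation."""
    lag_min, lag_max = fs // f_max, fs // f_min
    energy = sum(v * v for v in x) + 1e-12
    best_lag, best_r = lag_min, -1.0
    for lag in range(lag_min, min(lag_max, len(x) - 1) + 1):
        r = sum(x[n] * x[n - lag] for n in range(lag, len(x))) / energy
        if r > best_r:
            best_r, best_lag = r, lag
    return best_lag

# demo: a 200 Hz tone sampled at 8 kHz has a 40-sample pitch period
tone = [math.sin(2.0 * math.pi * 200.0 * n / 8000.0) for n in range(320)]
period = autocorr_pitch_period(tone)
```

A hybrid scheme like the paper's typically uses such an open-loop estimate to narrow the closed-loop search window, which is where the time saving comes from.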

Performance Enhancement of SBC for Voice Signal Using Adaptive Postfiltering at the Medium Bit Rate (중간 전송율에서 적응 포스트 필터링을 이용한 음성용 SBC의 성능 향상)

  • 김원구;이남걸;윤대희;차일환
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.17 no.2
    • /
    • pp.121-131
    • /
    • 1992
  • In this paper, three methods are studied to enhance the performance of SBC (sub-band coding) schemes for voice signals at medium bit rates between 12 kbps and 16 kbps, and adaptive postfiltering using human auditory characteristics is performed at the decoder output. First, a GQMF (generalized quadrature mirror filter) is used instead of a QMF (quadrature mirror filter) for better performance. Second, speech quality is enhanced by adaptive bit allocation to each sub-band, which also makes variable-rate coding possible. Third, a comparison of coder performance using APCM (adaptive pulse code modulation) and ADPCM (adaptive differential pulse code modulation) indicates that SB-APCM performs better than the other. Adaptive postfiltering at the decoder output enhances the quality of the coded speech; the two proposed postfiltering methods reduce the noise sufficiently at a low computational cost.
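The adaptive bit allocation mentioned above can be sketched as a greedy rule that repeatedly gives the next bit to the sub-band with the largest remaining quantization noise. The 6 dB-per-bit noise model and the example band energies are illustrative assumptions, not the paper's allocation rule:

```python
def allocate_bits(band_energies, total_bits):
    """Greedy bit allocation: each assigned bit cuts a band's quantization
    noise by a factor of 4 (~6 dB); always feed the noisiest band next."""
    bits = [0] * len(band_energies)
    for _ in range(total_bits):
        noise = [e / (4 ** b) for e, b in zip(band_energies, bits)]
        bits[noise.index(max(noise))] += 1
    return bits

# demo: strongly tilted band energies get a correspondingly tilted allocation
bits = allocate_bits([100.0, 10.0, 1.0, 0.1], 8)
```

Because the allocation follows the short-time band energies, the same machinery supports variable-rate coding: frames with concentrated energy can be coded with fewer total bits at the same quality.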

Distinct Segmental Implementations in English and Spanish Prosody

  • Lee, Joo-Kyeong
    • Speech Sciences
    • /
    • v.11 no.4
    • /
    • pp.199-206
    • /
    • 2004
  • This paper attempts to provide a substantial explanation of the different prosodic implementations on segments in English and Spanish, arguing that the phonetic modification invoked by prosody may effectively reflect phonological structure. In English, a high front vowel in accented syllables is acoustically realized with higher F1 and F2 frequencies than in unaccented syllables, due to its more peripheral and sonorous articulation (Harrington et al. 1999). In this paper, an acoustic experiment was conducted to see whether this manner of segmental modification invoked by prosody in English extends to other languages such as Spanish. Results show that relatively more prominent syllables entail higher F1 values as a result of their more sonorous articulation in Spanish, but neither front nor back vowels show a higher or lower F2 frequency. This is interpreted as an indication that a prosodically prominent syllable entails vocalic enhancement in both the horizontal and vertical dimensions of articulation in English, whereas in Spanish only the vertical dimension is maximized, resulting in a higher F1. I suggest that this difference may be attributed to the different phonological vowel structures of English and Spanish, and that sonority expansion alone is sufficient in the articulation of prosodic prominence as long as the phonological distinction of vowels is well retained.

Study on the Optimal Number of Latent Sources in Speech Enhancement Based on Bayesian Nonnegative Matrix Factorization (베이지안 비음수 행렬 인수분해 기반의 음성 강화 기법에서 최적의 latent source 개수에 대한 연구)

  • Lee, Hye In;Seo, Ji Hun;Lee, Young Han;Kim, Je Woo;Lee, Seok Pil
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2015.07a
    • /
    • pp.418-420
    • /
    • 2015
  • This paper describes the enhancement performance of a Bayesian nonnegative matrix factorization (BNMF) based speech enhancement method as a function of the number of latent sources for the speech and noise components. BNMF-based speech enhancement decomposes the input signal into a sum of sub-signals and then removes the noise components, and is known to outperform conventional NMF-based methods. However, it has the drawbacks of a heavy computational load and performance that varies with the number of latent sources. To address this, we conducted experiments to find the optimal number of latent sources in BNMF-based speech enhancement. The experiments varied the noise type, the speech material, the number of latent sources for speech and noise, and the SNR, and performance was evaluated with PESQ (perceptual evaluation of speech quality). The results showed that the number of speech latent sources does not affect performance, whereas a larger number of noise latent sources yields better performance.
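What "decomposing a spectrogram into K latent sources" means can be shown with plain (non-Bayesian) NMF using Euclidean multiplicative updates. This is a deliberate simplification of the BNMF in the paper, and the tiny demo matrix is invented:

```python
import random

def nmf(V, K, iters=300, seed=0):
    """Plain NMF with Lee-Seung multiplicative updates (Euclidean cost).
    V is a small nonnegative matrix (list of rows); K is the number of
    latent sources.  A simplification of Bayesian NMF, for illustration."""
    rng = random.Random(seed)
    F, T = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(K)] for _ in range(F)]
    H = [[rng.random() + 0.1 for _ in range(T)] for _ in range(K)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def transpose(A):
        return [list(row) for row in zip(*A)]

    for _ in range(iters):
        WH = matmul(W, H)
        num, den = matmul(transpose(W), V), matmul(transpose(W), WH)
        H = [[H[k][j] * num[k][j] / (den[k][j] + 1e-9) for j in range(T)]
             for k in range(K)]
        WH = matmul(W, H)
        num, den = matmul(V, transpose(H)), matmul(WH, transpose(H))
        W = [[W[i][k] * num[i][k] / (den[i][k] + 1e-9) for k in range(K)]
             for i in range(F)]
    return W, H

# demo: a tiny "spectrogram" whose columns alternate between two patterns
V = [[1.0, 0.1, 1.0],
     [0.1, 1.0, 0.1]]
W, H = nmf(V, K=2)
approx = [[sum(W[i][k] * H[k][j] for k in range(2)) for j in range(3)]
          for i in range(2)]
err = sum((V[i][j] - approx[i][j]) ** 2 for i in range(2) for j in range(3))
```

In enhancement, separate dictionaries W are trained for speech and for noise, and the noise components are discarded at synthesis; the paper's question is how many columns (latent sources) each dictionary should have.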

Speech enhancement based on reinforcement learning (강화학습 기반의 음성향상기법)

  • Park, Tae-Jun;Chang, Joon-Hyuk
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.05a
    • /
    • pp.335-337
    • /
    • 2018
  • Speech enhancement removes noise and reverberation from speech; since a speech signal captured by a microphone is distorted by noise and reverberation, enhancement is a core component of speech signal processing technologies such as speech recognition and speech communication. Previously, statistical-model-based methods exploiting the statistical relationship between the speech and noise signals were dominant, but their performance degrades severely in non-stationary noise environments, in contrast to stationary ones. Recently, deep neural networks (DNNs), a machine learning technique, have been introduced and achieve excellent speech enhancement performance: through multiple hidden layers and nodes, a DNN models well the nonlinear relationship between noisy speech and clean speech. We improve on the DNN-based approach by applying reinforcement learning, one of the ways such systems can be further enhanced. Reinforcement learning, famously applied in Google's AlphaGo, trains an agent, over many trials, to select the optimal action (via a policy) in a given state so as to maximize the reward obtained when moving to the next state. In this paper, we design the reward from a composite measure and achieve higher speech recognition performance than a previous design that based the reward on PESQ (Perceptual Evaluation of Speech Quality).
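The state/action/reward loop described above can be illustrated with a toy epsilon-greedy bandit. Here each "action" stands in for a candidate enhancement setting and the noisy scalar reward plays the role of the composite quality score; this illustrates reward-driven learning only, not the paper's DNN-based agent:

```python
import random

def bandit_learn(true_rewards, episodes=2000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: estimate each action's value from noisy
    rewards, exploring with probability eps and exploiting otherwise."""
    rng = random.Random(seed)
    q = [0.0] * len(true_rewards)   # estimated action values
    n = [0] * len(true_rewards)     # pull counts
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(len(q))                   # explore
        else:
            a = max(range(len(q)), key=q.__getitem__)   # exploit
        r = true_rewards[a] + rng.gauss(0.0, 0.1)       # noisy quality score
        n[a] += 1
        q[a] += (r - q[a]) / n[a]                       # incremental mean
    return q

# action 2 has the highest true reward; the agent should learn to prefer it
q = bandit_learn([0.2, 0.5, 0.9])
best_action = q.index(max(q))
```

The paper's contribution lies in what the reward measures: replacing a PESQ-based reward with a composite measure changed which "actions" the trained agent prefers, improving downstream recognition.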