• Title/Summary/Keyword: speech distortion weights

Search Results: 4

SNR-based Weight Control for the Spatially Preprocessed Speech Distortion Weighted Multi-channel Wiener Filtering (공간 필터와 결합된 음성 왜곡 가중 다채널 위너 필터에서의 신호 대 잡음 비에 의한 가중치 결정 방법)

  • Kim, Gibak
    • Journal of Broadcast Engineering / v.18 no.3 / pp.455-462 / 2013
  • This paper introduces the Spatially Preprocessed Speech Distortion Weighted Multi-channel Wiener Filter (SP-SDW-MWF) for multi-microphone noise reduction and proposes a method for determining the speech distortion weights. The SP-SDW-MWF is known to be robust against errors caused by microphone mismatch. It uses weights that trade off the amount of noise reduction against the speech distortion introduced into the noise-suppressed speech. In this paper, the power spectral density error between the estimated and desired signals is used as the evaluation measure, and the a priori SNR is used to control the speech distortion weights in the frequency domain. In the experiments, the proposed method yields better results in terms of MFCC distortion than the conventional method. A hedged sketch of this SNR-controlled trade-off appears after this entry.
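
A minimal single-channel sketch of the SNR-controlled trade-off described in this abstract, assuming a parametric Wiener-style gain ξ/(ξ + μ), a decision-directed estimate of the a priori SNR ξ, and a hypothetical linear-in-dB mapping from ξ to the trade-off parameter μ; the paper's actual multi-channel SP-SDW-MWF and its weight mapping are not reproduced here.

```python
import numpy as np

def decision_directed_snr(noisy_psd, noise_psd, prev_clean_psd, alpha=0.98):
    """Decision-directed a priori SNR estimate per frequency bin (a common choice,
    assumed here rather than taken from the paper)."""
    ml_snr = np.maximum(noisy_psd / noise_psd - 1.0, 0.0)      # instantaneous part
    return alpha * prev_clean_psd / noise_psd + (1.0 - alpha) * ml_snr

def snr_controlled_mu(a_priori_snr, mu_min=1.0, mu_max=5.0,
                      snr_low_db=-10.0, snr_high_db=20.0):
    """Hypothetical mapping: a large trade-off mu (more noise reduction, more speech
    distortion) at low SNR, and a small mu at high SNR."""
    snr_db = 10.0 * np.log10(np.maximum(a_priori_snr, 1e-10))
    t = np.clip((snr_db - snr_low_db) / (snr_high_db - snr_low_db), 0.0, 1.0)
    return mu_max - t * (mu_max - mu_min)

def sdw_gain(a_priori_snr, mu):
    """Speech-distortion-weighted (parametric Wiener) gain applied per frequency bin."""
    return a_priori_snr / (a_priori_snr + mu)
```

With μ = 1 this reduces to the ordinary Wiener gain; a larger μ suppresses more noise at the cost of more speech distortion, which is exactly the trade-off the speech distortion weights control.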

Online Blind Channel Normalization Using BPF-Based Modulation Frequency Filtering

  • Lee, Yun-Kyung;Jung, Ho-Young;Park, Jeon Gue
    • ETRI Journal / v.38 no.6 / pp.1190-1196 / 2016
  • We propose a new bandpass filter (BPF)-based online channel normalization method to dynamically suppress channel distortion when the speech and channel noise components are unknown. In this method, an adaptive modulation-frequency filter performs channel normalization, whereas conventional modulation filtering methods apply the same filter form to every utterance. We normalize only the two mel-frequency cepstral coefficients with large dynamic ranges (C0 and C1); the computational complexity is thus decreased and channel normalization accuracy is improved. Additionally, to update the filter weights dynamically, we normalize the learning rates using the dimensional power of each frame. Our speech recognition experiments with the proposed BPF-based blind channel normalization method show that it effectively removes channel distortion and causes only a minor decline in accuracy when online channel normalization is used instead of batch processing. A hedged sketch of such an online, power-normalized update appears after this entry.
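
A hedged sketch of online adaptation over a single cepstral trajectory (e.g. C0 or C1): an NLMS-style FIR update whose learning rate is normalized by the power of the current input buffer, adapting toward a high-pass (slow-component-removed) target. The filter length, the slow-mean target, and all constants are assumptions for illustration, not the paper's exact BPF design.

```python
import numpy as np

class OnlineCepstralNormalizer:
    """Adaptive FIR filter over one cepstral coefficient trajectory (a sketch)."""

    def __init__(self, num_taps=11, base_lr=0.05, eps=1e-8):
        self.w = np.zeros(num_taps)
        self.w[num_taps // 2] = 1.0      # start from an identity (pass-through) filter
        self.buf = np.zeros(num_taps)    # recent frames of this cepstral coefficient
        self.base_lr = base_lr
        self.eps = eps
        self.channel_track = 0.0         # slowly varying (assumed channel) component

    def process(self, c):
        """Filter one frame's coefficient and update the weights online."""
        self.buf = np.roll(self.buf, 1)
        self.buf[0] = c
        y = float(self.w @ self.buf)
        # assumed target: the coefficient with its slowly varying component removed
        self.channel_track = 0.995 * self.channel_track + 0.005 * c
        error = (c - self.channel_track) - y
        # learning rate normalized by the power of the current frame buffer
        lr = self.base_lr / (self.eps + float(self.buf @ self.buf))
        self.w += lr * error * self.buf
        return y
```

One normalizer instance would be run per coefficient (one for C0, one for C1), frame by frame, so no utterance-level batch statistics are required.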

Improvement of Synthetic Speech Quality using a New Spectral Smoothing Technique (새로운 스펙트럼 완만화에 의한 합성 음질 개선)

  • Jang, Hyo-Jong;Choi, Hyung-Il
    • Journal of KIISE:Software and Applications / v.30 no.11 / pp.1037-1043 / 2003
  • This paper describes a speech synthesis technique that uses the diphone as the unit phoneme. Speech synthesis is basically accomplished by concatenating unit phonemes, and its major problem is the discontinuity at the connection between unit phonemes. To solve this problem, this paper proposes a new spectral smoothing technique that reflects not only formant trajectories but also the distribution characteristics of the spectrum and human auditory characteristics. That is, the proposed technique decides the amount and extent of smoothing by considering human auditory characteristics at the connection between unit phonemes, and then performs spectral smoothing using weights calculated along the time axis at the border of two diphones. The proposed technique reduces the discontinuity and minimizes the distortion caused by spectral smoothing. For performance evaluation, we tested five hundred diphones extracted from twenty sentences drawn from ETRI Voice DB samples and individually self-recorded samples. A minimal sketch of boundary smoothing with time-axis weights appears after this entry.
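
A minimal sketch of time-axis weighted smoothing at a diphone junction, assuming magnitude spectrograms of shape (frames, bins) and a linear weight ramp that is strongest at the boundary frame; the formant-trajectory and perceptual criteria the paper uses to decide how much and how far to smooth are omitted.

```python
import numpy as np

def smooth_junction(spec_left, spec_right, num_smooth_frames=8):
    """Pull frames near the junction of two diphones toward a shared boundary spectrum.
    spec_left, spec_right: (frames, bins) magnitude spectrograms; each must have at
    least num_smooth_frames frames."""
    n = num_smooth_frames
    target = 0.5 * (spec_left[-1] + spec_right[0])       # shared boundary spectrum
    fade = np.linspace(1.0, 0.0, n + 1)[:-1][:, None]    # 1.0 at the junction, fading out
    out_left, out_right = spec_left.copy(), spec_right.copy()
    # time-axis weights: smoothing is strongest at the boundary frame of each unit
    out_left[-n:] = (1.0 - fade[::-1]) * spec_left[-n:] + fade[::-1] * target
    out_right[:n] = (1.0 - fade) * spec_right[:n] + fade * target
    return out_left, out_right
```

Both boundary frames end up equal to the shared target, so the junction itself is continuous, while the correction fades out toward the interior of each diphone.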

Speech Synthesis using Diphone Clustering and Improved Spectral Smoothing (다이폰 군집화와 개선된 스펙트럼 완만화에 의한 음성합성)

  • Jang, Hyo-Jong;Kim, Kwan-Jung;Kim, Gye-Young;Choi, Hyung-Il
    • The KIPS Transactions:PartB / v.10B no.6 / pp.665-672 / 2003
  • This paper describes a speech synthesis technique based on concatenating unit phonemes. A major problem with concatenation is the discontinuity that occurs at the connection between unit phonemes, especially between unit phonemes recorded by different speakers. To solve this problem, this paper uses clustered diphones and proposes a spectral smoothing technique that uses not only formant trajectories and the distribution characteristics of the spectrum but also human auditory characteristics. That is, the proposed technique clusters unit phonemes using the spectral distribution at their connection parts, decides the amount and scope of smoothing by considering human auditory characteristics at the connection, and then performs spectral smoothing using weights calculated along the time axis at the border of two diphones. The proposed technique removes the discontinuity and minimizes the distortion that spectral smoothing can introduce. For performance evaluation, we test five hundred diphones extracted from twenty sentences recorded by five speakers and report the experimental results. A hedged sketch of the boundary-spectrum clustering step appears after this entry.
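
A hedged sketch of the clustering step: each diphone instance is summarized by the spectrum near its connection parts and grouped with plain k-means, so units from different speakers with similar junction spectra fall into the same cluster. The boundary feature (mean of a few head/tail frames) and the cluster count are hypothetical choices, not the paper's.

```python
import numpy as np

def boundary_features(diphone_specs, num_frames=3):
    """Summarize each diphone by the mean spectrum of its first and last frames.
    diphone_specs: list of (frames, bins) arrays."""
    feats = []
    for spec in diphone_specs:
        head = spec[:num_frames].mean(axis=0)
        tail = spec[-num_frames:].mean(axis=0)
        feats.append(np.concatenate([head, tail]))
    return np.stack(feats)

def kmeans_cluster(feats, k=4, iters=20, seed=0):
    """Plain k-means over boundary features."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each diphone to its nearest cluster center
        dists = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # recompute centers; keep the old center if a cluster is empty
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return labels, centers
```

Concatenating units drawn from the same cluster keeps the junction spectra similar before any smoothing is applied, which is why clustering and smoothing are combined in this paper.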