• Title/Summary/Keyword: Speech spectrum

A study on loss combination in time and frequency for effective speech enhancement based on complex-valued spectrum (효과적인 복소 스펙트럼 기반 음성 향상을 위한 시간과 주파수 영역 손실함수 조합에 관한 연구)

  • Jung, Jaehee;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.1
    • /
    • pp.38-44
    • /
    • 2022
  • Speech enhancement is performed to improve the intelligibility and quality of noise-corrupted speech. In this paper, speech enhancement performance was compared using different loss functions in the time and frequency domains. This study proposes a combination of loss functions that exploits the advantages of each domain by considering both the details of the spectrum and the speech waveform. In our study, the Scale-Invariant Source-to-Noise Ratio (SI-SNR) is used as the time-domain loss function, and the Mean Squared Error (MSE) is used in the frequency domain, calculated over the complex-valued spectrum and the magnitude spectrum. The phase loss is obtained using the sine function. The speech enhancement results are evaluated using the Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI). To confirm the results of speech enhancement, the resulting spectrograms are also compared. The experimental results on the TIMIT database show the highest performance when using the combination of the SI-SNR and magnitude loss functions.
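Of the losses above, the time-domain SI-SNR is the least standard; a minimal NumPy sketch of it and of a combined time-frequency loss follows. The function names and the weighting factor `alpha` are illustrative, not from the paper, which trains a network with these terms rather than evaluating them standalone.

```python
import numpy as np

def si_snr(estimate, target, eps=1e-8):
    """Scale-Invariant Source-to-Noise Ratio in dB (higher is better)."""
    # Zero-mean both signals
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target to remove any scaling
    s_target = np.dot(estimate, target) / (np.dot(target, target) + eps) * target
    e_noise = estimate - s_target
    return 10 * np.log10(np.sum(s_target ** 2) / (np.sum(e_noise ** 2) + eps))

def combined_loss(estimate, target, est_mag, tgt_mag, alpha=0.5):
    """Negative SI-SNR (time domain) plus MSE over magnitude spectra."""
    return -si_snr(estimate, target) + alpha * np.mean((est_mag - tgt_mag) ** 2)
```

Because of the projection step, scaling the estimate leaves the loss unchanged, which is the point of using SI-SNR rather than plain SNR.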

The Speech Recognition Using the Diffusion Network (확산망을 이용한 음성인식)

  • 허만택
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1996.10a
    • /
    • pp.70-75
    • /
    • 1996
  • In this paper, a preprocessing method for the recognition of single vowels using the spectrum envelope is presented. We use a new method of extracting the spectrum envelope with a diffusion filter bank. We reduced the total processing time and obtained higher discrimination. By obtaining an average recognition rate of 88.3% for single vowels of real voices through computer simulation, we confirmed the method to be useful for speech recognition, which applies spectrum analysis to voice signals containing many frequency components.
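The abstract gives no implementation details of the diffusion filter bank. Assuming the diffusion filter acts as iterative heat-equation smoothing of the magnitude spectrum (an assumption on my part, not the authors' specification), a spectrum-envelope extractor can be sketched as:

```python
import numpy as np

def diffusion_envelope(mag_spec, n_steps=50, rate=0.25):
    """Spectrum envelope by iterative diffusion (discrete heat-equation
    smoothing) of a magnitude spectrum. rate <= 0.5 keeps the scheme stable."""
    env = mag_spec.astype(float).copy()
    for _ in range(n_steps):
        # Each step mixes a bin with its neighbours (reflecting boundaries)
        padded = np.pad(env, 1, mode="edge")
        env += rate * (padded[:-2] - 2 * env + padded[2:])
    return env
```

More diffusion steps yield a smoother envelope; the reflecting boundary keeps the total spectral energy of the envelope constant.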

Speech and Noise Recognition System by Neural Network (신경회로망에 의한 음성 및 잡음 인식 시스템)

  • Choi, Jae-Sung
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.5 no.4
    • /
    • pp.357-362
    • /
    • 2010
  • This paper proposes a speech and noise recognition system that uses a neural network to detect speech and noise sections in each frame. The proposed neural network is a layered network trained by the back-propagation algorithm. First, the power spectrum obtained by the fast Fourier transform and the linear predictive coefficients of each frame are used as inputs to the neural network, and the network is trained on these features. The proposed neural network can therefore be trained on clean speech and noise. The performance of the proposed recognition system was evaluated in terms of the recognition rate using various speech signals and white, printer, road, and car noises. In this experiment, the recognition rates were 92% or more for such speech and noise even when the training data and the evaluation data were different.
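The per-frame input described above (FFT power spectrum concatenated with LPC coefficients) can be sketched as follows; the function name and the LPC order of 10 are illustrative, and the LPC coefficients are obtained here by the autocorrelation method:

```python
import numpy as np

def frame_features(frame, lpc_order=10):
    """Per-frame network input: FFT power spectrum + LPC coefficients."""
    # Power spectrum via the FFT (real-input FFT keeps the first half)
    spec = np.abs(np.fft.rfft(frame)) ** 2
    # LPC via the autocorrelation method: solve the normal equations
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(lpc_order)]
                  for i in range(lpc_order)])
    a = np.linalg.solve(R, r[1:lpc_order + 1])
    return np.concatenate([spec, a])
```

The resulting vector (spectrum bins followed by predictor coefficients) would feed one input layer of the back-propagation network.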

Speech Recognition in Noisy Environments using the Noise Spectrum Estimation based on the Histogram Technique (히스토그램 처리방법에 의한 잡음 스펙트럼 추정을 이용한 잡음환경에서의 음성인식)

  • Kwon, Young-Uk;Kim, Hyung-Soon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.5
    • /
    • pp.68-75
    • /
    • 1997
  • Spectral subtraction is a widely used preprocessing technique for speech recognition in additive-noise environments, but it requires a good estimate of the noise power spectrum. In this paper, we employ the histogram technique for the estimation of the noise spectrum. This technique has advantages over other noise estimation methods in that it does not require speech/non-speech detection and can estimate slowly varying noise spectra. According to speaker-independent isolated-word recognition experiments in both colored Gaussian and car noise environments under various SNR conditions, the histogram-based spectral subtraction method yields superior performance to the one with the conventional noise estimation method using the spectral average of the initial frames of the non-speech period.
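The core idea, taking the most frequent power level per frequency bin as the noise estimate, can be sketched in a few lines. Function names, the bin count, and the spectral floor are illustrative choices, not values from the paper:

```python
import numpy as np

def histogram_noise_estimate(power_spec, n_bins=40):
    """Estimate noise power per frequency bin as the mode of its histogram
    over time; no explicit speech/non-speech detection is needed because
    noise dominates the most frequently occurring power level."""
    noise = np.empty(power_spec.shape[0])
    for k, bin_track in enumerate(power_spec):      # rows: frequency bins
        counts, edges = np.histogram(bin_track, bins=n_bins)
        m = np.argmax(counts)                       # most frequent power level
        noise[k] = 0.5 * (edges[m] + edges[m + 1])  # bin centre
    return noise

def spectral_subtraction(power_spec, noise, floor=0.01):
    """Subtract the noise estimate, flooring to avoid negative power."""
    return np.maximum(power_spec - noise[:, None], floor * power_spec)
```

Re-estimating the histogram over a sliding window of frames would track the slowly varying noise the abstract mentions.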

Parts-Based Feature Extraction of Spectrum of Speech Signal Using Non-Negative Matrix Factorization

  • Park, Jeong-Won;Kim, Chang-Keun;Lee, Kwang-Seok;Koh, Si-Young;Hur, Kang-In
    • Journal of information and communication convergence engineering
    • /
    • v.1 no.4
    • /
    • pp.209-212
    • /
    • 2003
  • In this paper, we propose a new speech feature parameter based on parts-based feature extraction from the speech spectrum using Non-Negative Matrix Factorization (NMF). NMF can effectively reduce the dimension of multi-dimensional data through matrix factorization under non-negativity constraints, and the dimensionally reduced data represent parts-based features of the input data. For speech feature extraction, we applied Mel-scaled filter bank outputs as inputs to NMF, then used the NMF outputs as inputs to a speech recognizer. From the recognition experiment results, we confirmed that the proposed feature parameter is superior in recognition performance to the mel-frequency cepstral coefficients (MFCC) that are generally used.
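The factorization step can be sketched with the standard Lee-Seung multiplicative updates for the Frobenius objective; the iteration count and rank below are illustrative, not the paper's settings:

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Factor a non-negative matrix V (mel filter bank outputs: bands x
    frames) into W (parts/basis) and H (activations), V ~= W @ H, using
    multiplicative updates that preserve non-negativity."""
    rng = np.random.default_rng(0)
    n_bands, n_frames = V.shape
    W = rng.random((n_bands, rank)) + eps
    H = rng.random((rank, n_frames)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis vectors
    return W, H
```

The columns of W act as spectral "parts"; the columns of H are the dimension-reduced per-frame features that would be passed to the recognizer in place of MFCCs.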

On a Pitch Alteration Method Compensated with the Spectrum for High Quality Speech Synthesis (스펙트럼 보상된 고음질 합성용 피치 변경법)

  • 문효정
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1995.06a
    • /
    • pp.123-126
    • /
    • 1995
  • Waveform coding is concerned with simply preserving the wave shape of the speech signal through a redundancy reduction process. In speech synthesis, high-quality waveform coding is mainly used for synthesis-by-analysis. However, because the parameters of this coding are not separated into excitation and vocal tract parameters, it is difficult to apply waveform coding to synthesis-by-rule. In this paper, we propose a new pitch alteration method that can change the pitch period in waveform coding by scaling the time axis and compensating the spectrum. This is a time-frequency domain method that preserves the phase components of the waveform and introduces little spectral distortion, 2.5% or less for a 50% pitch change.
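Only the time-axis scaling step is easy to reproduce from the abstract; the spectrum-compensation step, which is the paper's contribution, is not sketched here. A minimal resampler for one analysis frame (the function name is illustrative) might look like:

```python
import numpy as np

def scale_time_axis(frame, ratio):
    """Resample one analysis frame by linear interpolation. Shrinking the
    time axis (ratio < 1) shortens the pitch period and so raises the pitch;
    stretching (ratio > 1) lowers it. This also shifts the spectrum envelope,
    which is why a compensation step is needed afterwards."""
    n_out = int(len(frame) * ratio)
    src = np.linspace(0, len(frame) - 1, n_out)
    return np.interp(src, np.arange(len(frame)), frame)
```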

Introduction to the Spectrum and Spectrogram (스팩트럼과 스팩트로그램의 이해)

  • Jin, Sung-Min
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.19 no.2
    • /
    • pp.101-106
    • /
    • 2008
  • Once the speech signal has been put into a form suitable for storage and analysis by computer, several different operations can be performed. Filtering, sampling, and quantization are the basic operations in digitizing a speech signal. The waveform can be displayed, measured, and even edited, and spectra can be computed using methods such as the Fast Fourier Transform (FFT), Linear Predictive Coding (LPC), the cepstrum, and filtering. The digitized signal can also be used to generate spectrograms. The spectrograph provides major advantages to the study of speech. The author therefore introduces the basic techniques for acoustic recording and digital signal processing, and the principles of the spectrum and spectrogram.
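The FFT-based spectrogram the abstract refers to is a short-time Fourier magnitude: window the signal, FFT each frame, and stack the spectra as columns. A minimal sketch (frame length and hop size are illustrative defaults):

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Short-time Fourier magnitude: window each frame, FFT it, and stack
    the spectra as columns of a (frequency bins x time frames) image."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T
```

Plotting the result (usually in dB) gives the familiar spectrogram image; a longer `frame_len` trades time resolution for frequency resolution, the narrowband/wideband distinction used in voice analysis.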

Pseudo-Cepstral Representation of Speech Signal and Its Application to Speech Recognition (음성 신호의 의사 켑스트럼 표현 및 음성 인식에의 응용)

  • Kim, Hong-Kook;Lee, Hwang-Soo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.1E
    • /
    • pp.71-81
    • /
    • 1994
  • In this paper, we propose a pseudo-cepstral representation of line spectrum pair (LSP) frequencies and evaluate speech recognition performance with cepstral liftering using the pseudo-cepstrum. The pseudo-cepstrum corresponding to the LSP frequencies is derived by approximating the relationship between the LPC cepstrum and the LSP frequencies. Three cepstral liftering procedures are applied to the pseudo-cepstrum to improve speech recognition performance: the root-power-sums lifter, the general exponential lifter, and the bandpass lifter. The liftered pseudo-cepstra are then warped onto a mel-frequency scale to obtain feature vectors for speech recognition. Among the three lifters, the general exponential lifter results in the best speech recognition performance. When we use the proposed pseudo-cepstrum feature vectors for recognizing noisy speech, a signal-to-noise ratio (SNR) improvement of about 5~10 dB is obtained.
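The paper's LSP-based approximation itself is not reproduced here, but the quantity it approximates, the LPC cepstrum, follows a standard recursion from the predictor coefficients (with the convention A(z) = 1 + sum a_k z^-k):

```python
import numpy as np

def lpc_to_cepstrum(a, n_ceps):
    """Standard recursion from LPC coefficients to the cepstrum of the
    all-pole model 1/A(z): c_n = -(a_n + sum_{k<n} (k/n) c_k a_{n-k})."""
    p = len(a)
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= p else 0.0
        for k in range(1, n):
            if n - k <= p:
                acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = -acc
    return c
```

For a single pole at 0.5 (a = [-0.5]) the recursion reproduces the analytic cepstrum c_n = 0.5^n / n, which is a convenient correctness check.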

Method for Spectral Enhancement by Binary Mask for Speech Recognition Enhancement Under Noise Environment (잡음환경에서 음성인식 성능향상을 위한 바이너리 마스크를 이용한 스펙트럼 향상 방법)

  • Choi, Gab-Keun;Kim, Soon-Hyob
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.7
    • /
    • pp.468-474
    • /
    • 2010
  • The major factor that hinders practical use of speech recognition is distortion by ambient and channel noises. Generally, ambient noise degrades performance and restricts where recognition can be used. DSR (Distributed Speech Recognition) based speech recognition also has this problem. Various noise cancelling algorithms have been applied to solve this problem, but loss of spectrum and residual noise caused by incorrect noise estimation in low-SNR environments cause a drop in the recognition rate. This paper proposes a speech enhancement method that uses MMSE-STSA for noise cancelling and an ideal binary mask to compensate the damaged spectrum. In experiments in noisy environments (SNR 15 dB ~ 0 dB), the proposed method showed better spectral results and recognition performance.
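The ideal binary mask is the simpler of the two components: a time-frequency cell is kept when its local SNR exceeds a criterion and zeroed otherwise. A minimal sketch (the function name and the 0 dB default criterion are illustrative):

```python
import numpy as np

def ideal_binary_mask(speech_power, noise_power, lc_db=0.0):
    """1 where the local SNR of a time-frequency cell exceeds the local
    criterion lc_db, else 0; 'ideal' because it uses the true speech and
    noise power, available only in oracle experiments."""
    snr_db = 10 * np.log10(speech_power / (noise_power + 1e-12) + 1e-12)
    return (snr_db > lc_db).astype(float)
```

Multiplying an enhanced spectrum by this mask discards the cells where noise dominates, which is how the paper compensates the spectrum damaged by MMSE-STSA noise cancelling.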

A study on skip-connection with time-frequency self-attention for improving speech enhancement based on complex-valued spectrum (복소 스펙트럼 기반 음성 향상의 성능 향상을 위한 time-frequency self-attention 기반 skip-connection 기법 연구)

  • Jaehee Jung;Wooil Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.2
    • /
    • pp.94-101
    • /
    • 2023
  • Deep neural networks composed of encoders and decoders, such as U-Net, used for speech enhancement concatenate the encoder output to the decoder through skip-connections. Skip-connections help reconstruct the enhanced spectrum and complement lost information, but the encoder and decoder features joined by a skip-connection are not fully compatible with each other. In this paper, for complex-valued spectrum based speech enhancement, a Self-Attention (SA) method is applied to the skip-connection to transform the encoder features to be compatible with the decoder features. SA is a technique in which, when generating an output sequence in sequence-to-sequence tasks, a weighted average of the input is used to attend to subsets of the input; applied to speech enhancement, it has been shown to eliminate noise effectively. Three models that use the encoder and decoder features to apply SA to the skip-connection are studied. In experiments on the TIMIT database, the proposed methods show improvements in all evaluation metrics compared to the Deep Complex U-Net (DCUNET) with skip-connections only.
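One plausible variant of an attentive skip-connection, with decoder features as queries and encoder features as keys and values, can be sketched in NumPy. The function names and the flat 2-D feature shape are illustrative; the paper studies three model variants whose exact wiring is not reproduced here:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_skip(encoder_feat, decoder_feat):
    """Scaled dot-product attention over a skip-connection: decoder features
    act as queries, encoder features as keys/values, so the encoder output
    is transformed to be compatible before concatenation."""
    d = encoder_feat.shape[-1]
    scores = decoder_feat @ encoder_feat.T / np.sqrt(d)   # (T_dec, T_enc)
    weights = softmax(scores, axis=-1)                    # rows sum to 1
    attended = weights @ encoder_feat                     # (T_dec, d)
    return np.concatenate([attended, decoder_feat], axis=-1)
```

Each row of the attended output is a convex combination of encoder features, so the decoder receives a skip signal re-weighted toward the encoder frames most relevant to its current state, rather than a raw copy.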