• Title/Summary/Keyword: 음성신호 (speech signal)

Search Results: 1,513

Adaptive Noise Reduction using Standard Deviation of Wavelet Coefficients in Speech Signal (웨이브렛 계수의 표준편차를 이용한 음성신호의 적응 잡음 제거)

  • 황향자;정광일;이상태;김종교
    • Science of Emotion and Sensibility, v.7 no.2, pp.141-148, 2004
  • This paper proposes a new time-adapted threshold that uses the standard deviations of wavelet coefficients after a frame-by-frame wavelet transform. The time-adapted threshold is set using the sum of the standard deviation of the cA3 coefficients and a weighted standard deviation of the cD1 coefficients; the cA3 coefficients represent voiced sound with low frequency, and the cD1 coefficients represent unvoiced sound with high frequency. Simulation results demonstrate that the proposed algorithm improves SNR and MSE performance more than the wavelet transform and the wavelet packet transform do. Moreover, the signals reconstructed by the proposed algorithm preserve the plosive, fricative, and affricate sounds of the original signal, whereas the wavelet transform and the wavelet packet transform attenuate those sounds severely.

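The frame-adaptive thresholding idea above can be sketched in NumPy. This is a minimal single-level Haar illustration, not the authors' level-3 decomposition; the function names and the `weight` parameter are assumptions.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: approximation (cA) and detail (cD) coefficients."""
    x = np.asarray(x, dtype=float)[: len(x) // 2 * 2]  # truncate to even length
    cA = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    cD = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return cA, cD

def haar_idwt(cA, cD):
    """Inverse of haar_dwt (perfect reconstruction)."""
    x = np.empty(2 * len(cA))
    x[0::2] = (cA + cD) / np.sqrt(2.0)
    x[1::2] = (cA - cD) / np.sqrt(2.0)
    return x

def denoise_frame(frame, weight=0.5):
    """Soft-threshold the detail coefficients of one frame. The threshold is the
    sum of the approximation-coefficient std and a weighted detail-coefficient
    std, loosely following the paper's cA3 + weighted cD1 rule."""
    cA, cD = haar_dwt(frame)
    thr = np.std(cA) + weight * np.std(cD)                # frame-adaptive threshold
    cD = np.sign(cD) * np.maximum(np.abs(cD) - thr, 0.0)  # soft thresholding
    return haar_idwt(cA, cD)
```

In the paper the threshold is recomputed per frame, which is what makes it "time-adapted": frames dominated by unvoiced energy get a different threshold than voiced frames.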

Spectrum Based Excitation Extraction for HMM Based Speech Synthesis System (스펙트럼 기반 여기신호 추출을 통한 HMM기반 음성합성기의 음질 개선 방법)

  • Lee, Bong-Jin;Kim, Seong-Woo;Baek, Soon-Ho;Kim, Jong-Jin;Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea, v.29 no.1, pp.82-90, 2010
  • This paper proposes an efficient method to enhance the quality of synthesized speech in an HMM-based speech synthesis system. The proposed method jointly trains spectral parameters and excitation signals using a Gaussian mixture model, and estimates appropriate excitation signals from the spectral parameters during the synthesis stage. Both WB-PESQ and MUSHRA results show that the proposed method provides better speech quality than the conventional HMM-based speech synthesis system.
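The synthesis-stage estimation step can be illustrated with a single joint Gaussian in place of the paper's mixture: given a joint model of spectral parameters s and excitation features e, the estimate is the conditional mean E[e | s]. A hedged NumPy sketch; the function and variable names are assumptions, and a one-component model stands in for the full GMM.

```python
import numpy as np

def fit_joint_gaussian(S, E):
    """Fit one joint Gaussian over stacked [spectral; excitation] vectors.
    (The paper uses a Gaussian mixture; one component keeps the sketch short.)"""
    Z = np.hstack([S, E])
    mu = Z.mean(axis=0)
    C = np.cov(Z, rowvar=False)
    return mu, C

def estimate_excitation(s, mu, C, d):
    """Conditional mean E[e | s] of a joint Gaussian, split at dimension d:
    mu_e + C_es C_ss^{-1} (s - mu_s)."""
    mu_s, mu_e = mu[:d], mu[d:]
    C_ss, C_es = C[:d, :d], C[d:, :d]
    return mu_e + C_es @ np.linalg.solve(C_ss, s - mu_s)
```

With a mixture, the same conditional mean is computed per component and blended by the posterior component probabilities.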

Preprocessing method for enhancing digital audio quality in speech communication system (음성통신망에서 디지털 오디오 신호 음질개선을 위한 전처리방법)

  • Song Geun-Bae;Ahn Chul-Yong;Kim Jae-Bum;Park Ho-Chong;Kim Austin
    • Journal of Broadcast Engineering, v.11 no.2 s.31, pp.200-206, 2006
  • This paper presents a preprocessing method that modifies the input audio signals of a speech coder so that enhanced signals are finally obtained at the decoder. For this purpose, we introduce a noise suppression (NS) scheme and adaptive gain control (AGC), in which an audio input and its coding error are treated as a noisy signal and a noise, respectively. The coding error is suppressed from the input, and the suppressed input is then level-aligned to the original input by the following AGC operation. Consequently, this preprocessing redistributes the spectral energy of a music input over the spectral domain so that the preprocessed music can be coded more effectively by the following coder. As a drawback, the procedure needs an additional encoding pass to calculate the coding error. However, it provides a generalized formulation applicable to many existing speech coders. Preference listening tests indicate that the proposed approach produces significant enhancements in perceived music quality.
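The two stages described above can be sketched as magnitude-spectral subtraction of the coding error followed by an RMS gain match. This is an illustrative sketch under assumed function names and a simple spectral-floor rule, not the codec's actual NS/AGC.

```python
import numpy as np

def suppress_coding_error(x, err, floor=0.05):
    """NS step: subtract the coding-error magnitude spectrum from the input
    spectrum, keeping a small spectral floor, and resynthesize with the
    original phase."""
    X, Err = np.fft.rfft(x), np.fft.rfft(err)
    mag = np.maximum(np.abs(X) - np.abs(Err), floor * np.abs(X))
    return np.fft.irfft(mag * np.exp(1j * np.angle(X)), n=len(x))

def agc_match_level(processed, reference, eps=1e-12):
    """AGC step: scale `processed` so its RMS level matches `reference`."""
    gain = np.sqrt(np.mean(reference ** 2) / (np.mean(processed ** 2) + eps))
    return gain * processed
```

The AGC is what restores the level lost by the suppression, so the coder downstream sees an input at the original loudness but with its energy redistributed.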

Reduction Algorithm of Environmental Noise by Multi-band Filter (멀티밴드필터에 의한 환경잡음억압 알고리즘)

  • Choi, Jae-Seung
    • Journal of the Korea Society of Computer and Information, v.17 no.8, pp.91-97, 2012
  • This paper first proposes a speech recognition algorithm that detects speech and noise sections in each frame, and then proposes a multi-band-filter noise reduction algorithm that removes background noise in each frame according to the detected speech and noise sections. The proposed algorithm reduces background noise in the filter-bank sub-band domain after extracting features from the speech data. The experimental results of the proposed noise reduction algorithm are demonstrated frame by frame using speech and noise data. Based on spectral-distortion measurements, the experiments confirm that the proposed algorithm is effective for speech corrupted by noise.
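The per-frame decision plus sub-band attenuation described above can be sketched as follows. A crude energy threshold stands in for the paper's speech/noise detector, and the band edges and gain values are assumptions.

```python
import numpy as np

def is_speech_frame(frame, energy_thr):
    """Crude frame classifier: speech if the frame energy exceeds a threshold."""
    return np.mean(frame ** 2) > energy_thr

def multiband_attenuate(frame, band_gains):
    """Apply one gain per sub-band to a frame in the FFT sub-band domain."""
    X = np.fft.rfft(frame)
    edges = np.linspace(0, len(X), len(band_gains) + 1).astype(int)
    for g, lo, hi in zip(band_gains, edges[:-1], edges[1:]):
        X[lo:hi] *= g  # attenuate (or pass) this band
    return np.fft.irfft(X, n=len(frame))
```

In a full system the gains would be chosen per band from the noise estimate of the most recent noise-only frames, rather than fixed.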

On a Split Model for Analysis Techniques of Wideband Speech Signal (광대역 음성신호의 분할모델 분석기법에 관한 연구)

  • Park, Young-Ho;Ham, Myung-Kyu;You, Kwang-Bock;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea, v.18 no.7, pp.80-84, 1999
  • In this paper, a split-model analysis algorithm that can generate a wideband speech signal from the spectral information of a narrowband signal is developed. The split-model analysis algorithm separates the 10th-order LPC model into five cascade-connected 2nd-order models. Using the less complex 2nd-order models avoids the complicated nonlinear relationships between the model parameters and the poles of the full LPC model. The relationship between the model parameters and the corresponding analog poles is derived and applied to each 2nd-order model. The wideband speech signal is obtained by changing only the sampling rate.

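The split of a 10th-order LPC polynomial into five cascaded 2nd-order sections can be sketched by pairing complex-conjugate roots. This sketch assumes all five pole pairs are complex (typical for voiced speech); the function names are assumptions.

```python
import numpy as np
from functools import reduce

def split_into_sections(a):
    """Factor an LPC polynomial A(z) = 1 + a1 z^-1 + ... + a10 z^-10 into
    2nd-order sections [1, -2 Re(r), |r|^2], one per conjugate pole pair."""
    roots = np.roots(a)
    pairs = roots[np.imag(roots) > 0]  # keep one root of each conjugate pair
    return [np.array([1.0, -2.0 * r.real, abs(r) ** 2]) for r in pairs]

def cascade(sections):
    """Multiply 2nd-order sections back into one polynomial (convolution of
    coefficient vectors)."""
    return reduce(np.convolve, sections, np.array([1.0]))
```

Each 2nd-order section corresponds to one formant resonance, which is what makes the pole-to-parameter relationship tractable per section.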

A Study on the Segmentation of Speech Signal into Phonemic Units (음성 신호의 음소 단위 구분화에 관한 연구)

  • Lee, Yeui-Cheon;Lee, Gang-Sung;Kim, Soon-Hyon
    • The Journal of the Acoustical Society of Korea, v.10 no.4, pp.5-11, 1991
  • This paper suggests a method for segmenting a speech signal into phonemic units. The suggested segmentation system is speaker-independent and operates without any prior information about the speech signal. In the segmentation process, the input speech signal is first divided into pure voiced regions and non-pure-voiced regions. A second algorithm then segments each region into detailed phonemic units using the voiced-detection parameters, i.e., the time variation of the 0th LPC cepstrum coefficient and the zero-crossing rate (ZCR). The vocabulary used to verify the suggested segmentation algorithm consists of isolated words and continuous words. According to the experiments, the segmentation success rate for the 507 phonemic units in the vocabulary is 91.7%.

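The ZCR parameter used above is straightforward to compute per frame. A minimal sketch (the cepstral-variation parameter and the region logic are omitted; the function name is an assumption):

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ. High ZCR suggests
    unvoiced speech; low ZCR with high energy suggests voiced speech."""
    s = np.sign(frame)
    s[s == 0] = 1  # treat exact zeros as positive to avoid spurious crossings
    return np.mean(s[:-1] != s[1:])
```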

Voice Conversion Using Linear Multivariate Regression Model and LP-PSOLA Synthesis Method (선형다변회귀모델과 LP-PSOLA 합성방식을 이용한 음성변환)

  • 권홍석;배건성
    • The Journal of the Acoustical Society of Korea, v.20 no.3, pp.15-23, 2001
  • This paper presents a voice conversion technique that modifies the utterance of a source speaker as if it were spoken by a target speaker. Feature-parameter conversion methods that transform the vocal tract and prosodic characteristics between the source and target speakers are described. The transformation of vocal tract characteristics is achieved by modifying the LPC cepstral coefficients using Linear Multivariate Regression (LMR). Prosodic transformation is done by changing the average pitch period between speakers, and it is applied to the residual signal using the LP-PSOLA scheme. Experimental results show that speech transformed by the LMR and LP-PSOLA synthesis method captures many of the characteristics of the target speaker.

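The LMR mapping of source cepstra to target cepstra is, in essence, an affine least-squares transform. A hedged sketch with assumed names; the paper's frame alignment and LP-PSOLA synthesis stages are not shown.

```python
import numpy as np

def train_lmr(X_src, X_tgt):
    """Least-squares affine map W such that [x, 1] @ W approximates y, fitted
    over paired source/target cepstral vectors (one row per aligned frame)."""
    A = np.hstack([X_src, np.ones((len(X_src), 1))])  # append bias column
    W, *_ = np.linalg.lstsq(A, X_tgt, rcond=None)
    return W

def apply_lmr(x, W):
    """Convert one source cepstral vector toward the target speaker."""
    return np.append(x, 1.0) @ W
```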

Speech Enhancement Based on Mixture Hidden Filter Model (HFM) Under Nonstationary Noise (혼합 은닉필터모델 (HFM)을 이용한 비정상 잡음에 오염된 음성신호의 향상)

  • 강상기;백성준;이기용;성굉모
    • The Journal of the Acoustical Society of Korea, v.21 no.4, pp.387-393, 2002
  • An enhancement technique for noisy speech using a mixture HFM (Hidden Filter Model) is proposed. Given the parameters of the clean signal and the noise, the noisy signal is modeled by a linear state-space model with Markov switching parameters. Estimating the state vector is required to estimate the original signal. The estimation procedure is based on the mixture interacting multiple model (MIMM), and the speech estimator is given by the weighted sum of parallel Kalman filters operating interactively. Simulation results show that the proposed method offers performance gains over previous results with only slightly increased complexity.
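Each branch of the mixture runs a standard Kalman filter. A scalar sketch of one predict/update step for an AR(1) speech state in additive observation noise (illustrative parameter names, not the paper's model):

```python
def kalman_step(x_est, p_est, y, a, q, r):
    """One Kalman step for the model x_t = a x_{t-1} + w (var q),
    y_t = x_t + v (var r). Returns the updated state estimate and variance."""
    x_pred = a * x_est                 # predict state
    p_pred = a * a * p_est + q         # predict variance
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (y - x_pred)  # correct with the observation
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

In the MIMM scheme, one such filter runs per Markov regime, and the speech estimate is the posterior-probability-weighted sum of the branch estimates.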

Voice-to-voice conversion using transformer network (Transformer 네트워크를 이용한 음성신호 변환)

  • Kim, June-Woo;Jung, Ho-Young
    • Phonetics and Speech Sciences, v.12 no.3, pp.55-63, 2020
  • Voice conversion can be applied to various voice processing applications. It can also play an important role in data augmentation for speech recognition. The conventional method uses the architecture of voice conversion with speech synthesis, with the Mel filter bank as the main parameter. The Mel filter bank is well-suited for the quick computation of neural networks but cannot be converted into a high-quality waveform without the aid of a vocoder. Further, it is not effective in terms of obtaining data for speech recognition. In this paper, we focus on performing voice-to-voice conversion using only the raw spectrum. We propose a deep learning model based on the transformer network, which quickly learns the voice conversion properties using an attention mechanism between source and target spectral components. The experiments were performed on TIDIGITS data, a series of numbers spoken by an English speaker. The converted voices were evaluated for naturalness and similarity using the mean opinion score (MOS) obtained from 30 participants. Our final results yielded 3.52±0.22 for naturalness and 3.89±0.19 for similarity.
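The attention mechanism between source and target spectral components is, at its core, scaled dot-product attention. A minimal NumPy sketch; the shapes, names, and single-head form are assumptions, not the paper's full transformer.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V: each row of Q attends over the rows of K
    and returns a weighted combination of the rows of V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```

In the voice-conversion setting, the queries come from target-side spectral frames and the keys/values from source-side frames, so each output frame is a soft selection over source spectral components.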

Mixed Noise Cancellation by Independent Vector Analysis and Frequency Band Beamforming Algorithm in 4-channel Environments (4채널 환경에서 독립벡터분석 및 주파수대역 빔형성 알고리즘에 의한 혼합잡음제거)

  • Choi, Jae-Seung
    • The Journal of the Korea institute of electronic communication sciences, v.14 no.5, pp.811-816, 2019
  • This paper first proposes a technique that separates clean speech signals and mixed noise signals from four-channel noisy speech sources using a frequency-band independent vector analysis algorithm. An improved output speech signal is then obtained using the cross-correlation between the outputs of frequency-domain delay-sum beamforming and the signals separated by the proposed independent vector analysis algorithm. In the experiments, for input noisy speech at 0 dB and -5 dB SNR including white noise, the proposed algorithm improves the maximum SNR by 10.90 dB and the segmental SNR by 10.02 dB compared with the frequency-domain delay-sum beamforming algorithm. These experiments and considerations show that the speech quality of the proposed algorithm is improved over the frequency-domain delay-sum beamforming algorithm.
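The frequency-domain delay-sum beamformer used as the baseline above compensates each channel's delay with a phase shift before averaging. A sketch with known integer sample delays (the names are assumptions, and the IVA separation stage is not shown):

```python
import numpy as np

def delay_sum_beamform(channels, delays):
    """Align each channel by multiplying its spectrum with exp(+j 2 pi f tau)
    (undoing a delay of tau samples), then average across channels."""
    n = channels.shape[1]
    freqs = np.fft.rfftfreq(n)  # normalized frequency, cycles per sample
    out = np.zeros(n)
    for ch, tau in zip(channels, delays):
        X = np.fft.rfft(ch) * np.exp(2j * np.pi * freqs * tau)
        out += np.fft.irfft(X, n=n)
    return out / len(channels)
```

Averaging after alignment keeps the coherent speech at full level while incoherent noise partially cancels, which is what the IVA stage then improves on.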