• Title/Summary/Keyword: Speech Signal


High-Band Codec for Bandwidth Scalable Wideband Speech Codec (대역폭 계층 구조의 광대역 음성 부호화기를 위한 상위 대역 부호화기 연구)

  • Kim Youngvo;Jeong Byounghak;Son Chang-Yong;Sung Ho-Sang;Park Hochong
    • The Journal of the Acoustical Society of Korea / v.24 no.7 / pp.395-401 / 2005
  • In this paper, a high-band codec for a bandwidth-scalable wideband speech codec is proposed. The wideband input speech signal is separated into a low-band signal and a high-band signal; the low-band signal is encoded by a standard narrow-band speech codec, and the high-band signal is encoded by the proposed codec. In the high-band codec, the signal is transformed into the frequency domain by the modulated lapped transform (MLT) on a subframe basis, and the MLT coefficients are split into magnitude and sign for quantization. The magnitudes of the MLT coefficients are arranged into several time-frequency bands, and each band is quantized in the 2D-DCT domain, where the low-band information is utilized for better performance. The sign of each MLT coefficient is quantized based on a priority selection process with a weighting measure. The objective and subjective performance of the wideband speech codec including the proposed high-band codec is measured, and it is confirmed that the proposed codec outperforms G.722.1 at 32 kbps.
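
To make the coding steps concrete, here is a minimal sketch of the magnitude/sign split described above, with the MLT realized as a plain MDCT with a sine window. The subframe length and everything downstream of the split (band grouping, 2D-DCT coding) are assumptions; none of the constants below come from the paper.

```python
# Sketch: MLT (here: MDCT) analysis of one subframe, then the
# magnitude/sign separation the high-band codec quantizes.
import numpy as np

def mdct(x):
    """MDCT of one subframe x of even length 2N -> N coefficients."""
    N = len(x) // 2
    n = np.arange(2 * N)
    w = np.sin(np.pi * (n + 0.5) / (2 * N))   # sine window
    k = np.arange(N)[:, None]
    basis = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    return basis @ (w * x)

rng = np.random.default_rng(0)
subframe = rng.standard_normal(160)           # assumed 10 ms at 16 kHz
coeffs = mdct(subframe)

magnitude = np.abs(coeffs)   # grouped into time-frequency bands, 2D-DCT coded
sign = np.sign(coeffs)       # coded via a priority selection process
print(magnitude[:4], sign[:4])
```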

16kbps Wideband Speech Codec (16kbps 광대역 음성 압축기 개발)

  • Park Hochong;Song Jaejong
    • The Journal of the Acoustical Society of Korea / v.21 no.1 / pp.5-10 / 2002
  • This paper proposes a new 16 kbps wideband speech codec with a bandwidth of 7 kHz. The proposed codec decomposes the input speech signal into low-band and high-band signals using a QMF (Quadrature Mirror Filter); the AMR (Adaptive Multi-Rate) speech codec then processes the low-band signal, and a new transform-domain codec based on the G.722.1 wideband codec compresses the high-band signal. The proposed codec allocates a different number of bits to each band adaptively according to the properties of the input signal, which provides better performance than a codec with a fixed bit allocation scheme. In addition, the proposed codec processes the high-band signal using the wavelet transform for better performance. The performance of the proposed codec is measured by a subjective method, and simulations with various speech data show that the proposed codec outperforms G.722 48 kbps SB-ADPCM.
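
As a sketch of the band-splitting front end the abstract describes, the code below performs a two-band QMF analysis with a generic half-band FIR prototype; the paper's actual filter is not given in the abstract, so the design here is only illustrative.

```python
# Sketch: two-band QMF analysis of a 16 kHz wideband signal.
import numpy as np
from scipy.signal import firwin, lfilter

def qmf_split(x, num_taps=16):
    """Split x into low-band and high-band signals at half rate."""
    h = firwin(num_taps, 0.5)               # half-band lowpass prototype
    g = h * (-1.0) ** np.arange(num_taps)   # mirrored highpass
    low = lfilter(h, 1.0, x)[::2]           # filter, then decimate by 2
    high = lfilter(g, 1.0, x)[::2]
    return low, high

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 6000 * t)
low, high = qmf_split(x)   # 500 Hz lands in low, 6 kHz in high
```

The low-band output would then feed the AMR codec and the high-band output the transform-domain codec, with the bit budget shared adaptively between the two.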

A New Hearing Aid Algorithm for Speech Discrimination using ICA and Multi-band Loudness Compensation

  • Lee Sangmin;Won Jong Ho;Park Hyung Min;Hong Sung Hwa;Kim In Young;Kim Sun I.
    • Journal of Biomedical Engineering Research / v.26 no.3 / pp.177-184 / 2005
  • In this paper, we propose a new hearing aid algorithm to improve the SNR (signal-to-noise ratio) of a noisy speech signal and speech perception. The proposed algorithm performs multi-band loudness compensation based on independent component analysis (ICA). It was compared with a conventional spectral subtraction algorithm on a behind-the-ear type hearing aid. The proposed algorithm successfully separated a target speech signal from background noise and from a mixture of speech signals. The algorithms were compared in terms of SNR: the average improvement by the ICA-based algorithm was 16.64 dB, whereas that by the spectral subtraction algorithm was 8.67 dB. From the clinical tests, we conclude that the proposed algorithm would help hearing aid users hear a target speech clearly in noisy conditions.
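
The reported figures are plain SNR improvements against a clean reference; a minimal sketch of that measurement follows, with synthetic stand-ins in place of the clinical recordings.

```python
# Sketch: SNR improvement of an enhanced signal over the noisy input.
import numpy as np

def snr_db(clean, estimate):
    """SNR of an estimate against the clean reference, in dB."""
    error = clean - estimate
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(error ** 2))

rng = np.random.default_rng(1)
clean = rng.standard_normal(16000)                    # stand-in target speech
noisy = clean + 0.5 * rng.standard_normal(16000)      # stand-in aid input
enhanced = clean + 0.2 * rng.standard_normal(16000)   # stand-in algorithm output
print(f"improvement: {snr_db(clean, enhanced) - snr_db(clean, noisy):.2f} dB")
```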

Intelligibility Analysis on the Eavesdropping Sound of Glass Windows Using MTF-STI (MTF-STI를 이용한 유리창 도청음의 명료도 분석)

  • Kim, Hee-Dong;Kim, Yoon-Ho;Kim, Seock-Hyun
    • The Journal of the Acoustical Society of Korea / v.26 no.1 / pp.8-15 / 2007
  • The speech intelligibility of the eavesdropping sound is investigated on an acoustic cavity-glass window coupled system. Using an MLS (Maximum Length Sequence) signal as a sound source, acceleration and velocity responses of the glass window are measured by an accelerometer and a laser Doppler vibrometer. The MTF (Modulation Transfer Function) is used to identify the speech transmission characteristics of the cavity and window system. The STI (Speech Transmission Index) based upon the MTF is calculated, and the speech intelligibility of the vibration sound of the glass window is estimated. Speech intelligibilities derived from the acceleration signal and the velocity signal are compared. Finally, the intelligibility of the conversation sound is confirmed by a subjective test.
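
A minimal sketch of the standard MTF-to-STI mapping the analysis relies on: each MTF value becomes an apparent SNR, which is clipped, rescaled to a transmission index, and averaged. The uniform band weights and the placeholder m-values are assumptions; IEC 60268-16 specifies octave-band-specific weights.

```python
# Sketch: Speech Transmission Index from a matrix of MTF values.
import numpy as np

def sti_from_mtf(m, band_weights=None):
    """m: (octave bands x modulation frequencies) MTF values in (0, 1)."""
    snr_app = 10.0 * np.log10(m / (1.0 - m))   # apparent SNR per cell
    snr_app = np.clip(snr_app, -15.0, 15.0)    # limit to +/- 15 dB
    ti = (snr_app + 15.0) / 30.0               # transmission index in [0, 1]
    mti = ti.mean(axis=1)                      # average over modulation freqs
    if band_weights is None:                   # uniform placeholder weighting
        band_weights = np.full(m.shape[0], 1.0 / m.shape[0])
    return float(band_weights @ mti)

m = np.full((7, 14), 0.7)                      # placeholder 7x14 MTF matrix
print(f"STI = {sti_from_mtf(m):.2f}")
```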

Feature Parameter Extraction and Analysis in the Wavelet Domain for Discrimination of Music and Speech (음악과 음성 판별을 위한 웨이브렛 영역에서의 특징 파라미터)

  • Kim, Jung-Min;Bae, Keun-Sung
    • MALSORI / no.61 / pp.63-74 / 2007
  • Discrimination of music and speech from a multimedia signal is an important task in audio coding and broadcast monitoring systems. This paper deals with the problem of feature parameter extraction for discrimination of music and speech. The wavelet transform is a multi-resolution analysis method that is useful for analyzing the temporal and spectral properties of non-stationary signals such as speech and audio signals. We propose new feature parameters, extracted from the wavelet-transformed signal, for discrimination of music and speech. First, wavelet coefficients are obtained on a frame-by-frame basis, with the analysis frame size set to 20 ms. A parameter $E_{sum}$ is then defined by adding the differences in magnitude between adjacent wavelet coefficients in each scale. The maximum and minimum values of $E_{sum}$ over a period of 2 seconds, which corresponds to the discrimination duration, are used as feature parameters for discrimination of music and speech. To evaluate the performance of the proposed feature parameters, the accuracy of music and speech discrimination is measured for various types of music and speech signals. In the experiment, every 2-second segment is discriminated as music or speech, and about 93% of the music and speech segments were successfully detected.
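
A minimal sketch of the $E_{sum}$ feature as defined above, assuming PyWavelets for the frame-wise decomposition; the wavelet family and decomposition depth are guesses, since the abstract does not specify them.

```python
# Sketch: max/min of E_sum over a 2-second discrimination window.
import numpy as np
import pywt

def e_sum(frame, wavelet="db4", level=4):
    """Sum over scales of |difference| between adjacent coefficients."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return sum(np.sum(np.abs(np.diff(c))) for c in coeffs)

fs = 16000                                # assumed sampling rate
frame_len = int(0.020 * fs)               # 20 ms analysis frames
window_len = int(2.0 * fs)                # 2 s discrimination duration

rng = np.random.default_rng(2)
x = rng.standard_normal(window_len)       # stand-in for real audio

frames = x[: (window_len // frame_len) * frame_len].reshape(-1, frame_len)
values = np.array([e_sum(f) for f in frames])
features = (values.max(), values.min())   # the two proposed parameters
print(features)
```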


Real-Time Implementation of AMR Speech Codec Using TMS320VC5510 DSP (TMS320VC5510 DSP를 이용한 AMR 음성부호화기의 실시간 구현)

  • Kim, Jun;Bae, Keun-Sung
    • MALSORI / no.65 / pp.143-152 / 2008
  • This paper focuses on the real-time implementation of the adaptive multi-rate (AMR) speech codec, a standard speech codec for IMT-2000, using the TMS320VC5510. The TMS320VC55x series is a low-power 16-bit fixed-point digital signal processor (DSP) from Texas Instruments (TI), intended for mobile communications. After analyzing the AMR algorithm and source code as well as the structure and I/O of the TMS320VC55x, we optimize the programs for real-time implementation. The implemented AMR speech codec uses 55.2 kbytes of program memory and 98.3 kbytes of data memory, and it requires 709,878 clock cycles, i.e. about 3.5 ms, to process one 20 ms frame of the speech signal.
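
The quoted cycle count is consistent with a core clock of about 200 MHz, a rate the VC5510 supports; the arithmetic, as a sketch (the clock rate is an assumption, not stated in the abstract):

```python
# Sketch: cycles-per-frame to real-time-factor arithmetic.
clock_hz = 200e6            # assumed TMS320VC5510 core clock
cycles_per_frame = 709_878  # figure quoted in the abstract
frame_ms = 20.0             # AMR frame length

exec_ms = cycles_per_frame / clock_hz * 1e3
print(f"{exec_ms:.2f} ms per {frame_ms:.0f} ms frame "
      f"(real-time factor {exec_ms / frame_ms:.2f})")
# -> about 3.55 ms, i.e. roughly 18% of real time, matching the paper.
```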


Application of Speech Recognition with Closed Caption for Content-Based Video Segmentations

  • Son, Jong-Mok;Bae, Keun-Sung
    • Speech Sciences / v.12 no.1 / pp.135-142 / 2005
  • An important aspect of video indexing is the ability to segment video into meaningful segments, i.e., content-based video segmentation. Since the audio signal in the sound track is synchronized with the image sequences in the video program, a speech signal in the sound track can be used to segment video into meaningful segments. In this paper, we propose a new approach to content-based video segmentation that uses the closed caption to construct a recognition network for speech recognition. Accurate time information for video segmentation is then obtained from the speech recognition process. In a video segmentation experiment on TV news programs, 56 video summaries were successfully produced from 57 TV news stories, demonstrating that the proposed scheme is very promising for content-based video segmentation.
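
The core idea is that the caption text fixes the word sequence, so recognition reduces to forced alignment over a linear network. A hedged sketch of that network construction (the arc structure and caption text are illustrative, not the paper's actual decoder format):

```python
# Sketch: a linear recognition network built from a closed caption.
caption = "anchor opens the first news story tonight"   # hypothetical text

def linear_network(words):
    """Linear FSA: arc i -> i+1 consumes the i-th caption word."""
    return [(i, i + 1, w) for i, w in enumerate(words)]

for src, dst, word in linear_network(caption.split()):
    print(f"{src} -> {dst} : {word}")
# A forced-alignment pass over this network yields per-word time stamps,
# which in turn mark the video segment boundaries.
```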


Speech Enhancement Using Multiple Kalman Filter (다중칼만필터를 이용한 음성향상)

  • Lee Ki-Yong
    • Proceedings of the Acoustical Society of Korea Conference / 1998.08a / pp.225-230 / 1998
  • In this paper, a Kalman filter approach for enhancing speech signals degraded by statistically independent, additive, nonstationary noise is developed. An autoregressive hidden Markov model is used to model the statistical characteristics of both the clean speech signal and the nonstationary noise process. In this case, the speech enhancement comprises a weighted sum of conditional mean estimators for the composite states of the models for the speech and noise, where the weights equal the posterior probabilities of the composite states given the noisy speech. The conditional mean estimators use a smoothing approach based on two Kalman filters with Markovian switching coefficients, where one of the filters propagates in the forward-time direction over one frame. The proposed method is tested on noisy speech signals degraded by Gaussian colored noise or nonstationary noise at various input signal-to-noise ratios. An approximate improvement of 4.7-5.2 dB in SNR is achieved at input SNRs of 10 and 15 dB. Also, in a comparison of the conventional and proposed methods, an improvement of about 0.3 dB in SNR is obtained with the proposed method.
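
For intuition, the sketch below strips the method down to a single Kalman filter with an AR(1) speech model in additive white noise; the paper's estimator is richer (composite AR-HMM states, Markovian switching coefficients, forward/backward smoothing), so treat this only as the underlying building block.

```python
# Sketch: causal Kalman filtering of AR(1) "speech" in white noise.
import numpy as np

def kalman_enhance(y, a=0.95, q=1.0, r=0.5):
    """y: noisy samples; a: AR(1) coefficient; q, r: process/obs. variances."""
    x_hat, p = 0.0, 1.0
    out = np.empty_like(y)
    for t, obs in enumerate(y):
        x_pred = a * x_hat                   # time update
        p_pred = a * a * p + q
        k = p_pred / (p_pred + r)            # Kalman gain
        x_hat = x_pred + k * (obs - x_pred)  # measurement update
        p = (1.0 - k) * p_pred
        out[t] = x_hat
    return out

rng = np.random.default_rng(3)
n = 16000
clean = np.zeros(n)
for t in range(1, n):                        # synthetic AR(1) source
    clean[t] = 0.95 * clean[t - 1] + rng.standard_normal()
noisy = clean + 0.7 * rng.standard_normal(n)
enhanced = kalman_enhance(noisy, r=0.49)     # r matches the noise variance
```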


Enhancement of speech with time-variant and colored noise

  • Mine, Katsutoshi;Kitazaki, Masato;Wakabayashi, Katsuyoshi;Morimoto, Yuji
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1990.10b / pp.1098-1102 / 1990
  • We consider a method for enhancement of a speech signal degraded by additive random noise with time-variant and/or colored characteristics. For enhancement of speech corrupted by such noise, it is effective to utilize the characteristics of both the speech and the noise. The objective is to improve the overall quality and the articulation of speech degraded by time-variant and/or colored random noise. In the proposed method, a distribution model of the speech spectrum is supplied as prior information to the noise reduction system. The proposed system improves the SNR by about 10 dB when the input SNR is 0 dB.
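
The abstract leaves the noise reduction stage itself unspecified. As one plausible reading, the sketch below applies a Wiener-like spectral gain whose noise estimate is kept per frequency bin (hence colored) and would be re-estimated frame to frame (hence time-variant); it is illustrative only, not the authors' method.

```python
# Sketch: per-bin spectral gain against a colored-noise PSD estimate.
import numpy as np

def enhance_frame(noisy_fft, noise_psd, snr_floor=0.1):
    """Wiener-like gain from a running noise PSD estimate."""
    snr = np.maximum(np.abs(noisy_fft) ** 2 / noise_psd - 1.0, snr_floor)
    gain = snr / (snr + 1.0)
    return gain * noisy_fft

rng = np.random.default_rng(4)
frame = rng.standard_normal(256)            # stand-in analysis frame
spec = np.fft.rfft(frame)
noise_psd = np.full(spec.shape, 0.5)        # updated frame-to-frame in use
out = np.fft.irfft(enhance_frame(spec, noise_psd), n=256)
```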


Constructing a Noise-Robust Speech Recognition System using Acoustic and Visual Information (청각 및 시각 정보를 이용한 강인한 음성 인식 시스템의 구현)

  • Lee, Jong-Seok;Park, Cheol-Hoon
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.719-725 / 2007
  • In this paper, we present an audio-visual speech recognition system for noise-robust human-computer interaction. Unlike usual speech recognition systems, our system utilizes the visual signal containing the speaker's lip movements along with the acoustic signal to obtain robust recognition performance against environmental noise. The procedures of acoustic speech processing, visual speech processing, and audio-visual integration are described in detail. Experimental results demonstrate that, by exploiting the complementary nature of the two signals, the constructed system significantly enhances recognition performance in noisy circumstances compared to acoustic-only recognition.
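
Audio-visual integration of this kind is often realized as decision-level fusion of the two single-modality scores; a hedged sketch under that assumption (the weight and the per-hypothesis scores are made up, and the paper may well integrate the streams differently):

```python
# Sketch: weighted log-likelihood fusion of audio and visual streams.
import numpy as np

def fuse(audio_loglik, visual_loglik, lam=0.7):
    """Convex combination of the two modality scores."""
    return lam * audio_loglik + (1.0 - lam) * visual_loglik

# Hypothetical per-hypothesis log-likelihoods from the two recognizers.
audio = np.array([-12.0, -15.5, -11.2])   # degraded by acoustic noise
visual = np.array([-9.0, -14.0, -13.5])   # unaffected by acoustic noise
scores = fuse(audio, visual, lam=0.4)     # lean on the visual stream in noise
print("chosen hypothesis:", int(np.argmax(scores)))
```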