• Title/Summary/Keyword: speech coder

166 results found

Speech Feature Extraction Using Auditory Model (청각모델을 이용한 음성신호의 특징 추출 방법에 관한 연구)

  • Park, Kyu-Hong;Kim, Young-Ho;Jung, Sang-Kuk;Rho, Seung-Yong
    • Proceedings of the KIEE Conference
    • /
    • 1998.07g
    • /
    • pp.2259-2261
    • /
    • 1998
  • Auditory models capable of achieving human performance would provide a basis for realizing effective speech processing systems. Perceptual invariance to adverse signal conditions (noise, microphone and channel distortions, room reverberation) may provide a basis for robust speech recognition and highly efficient speech coders. An auditory model that simulates the auditory periphery up through the auditory-nerve level, together with a new distance measure defined as the angle between feature vectors, is described.

  • PDF
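The distance measure described in the abstract above (the angle between feature vectors) can be sketched as follows; the function name and the clamping detail are illustrative choices, not taken from the paper:

```python
import math

def angle_distance(x, y):
    # Distance between two feature vectors defined as the angle between
    # them; unlike Euclidean distance it is insensitive to overall gain,
    # since scaling a vector does not change its direction.
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    # Clamp against floating-point overshoot before acos.
    cos_angle = max(-1.0, min(1.0, dot / (norm_x * norm_y)))
    return math.acos(cos_angle)
```

Gain invariance is the point: a vector and any positive multiple of it are at angle zero, which suits features corrupted by channel or microphone gain.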

Transcoding Algorithm for SMV and G.723.1 Vocoders via Direct Parameter Transformation (SMV와 G.723.1 음성부호화기를 위한 파라미터 직접 변환 방식의 상호부호화 알고리듬)

  • 서성호;장달원;이선일;유창동
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.6
    • /
    • pp.61-70
    • /
    • 2003
  • In this paper, a transcoding algorithm for the Selectable Mode Vocoder (SMV) and the G.723.1 speech coder via direct parameter transformation is proposed. In contrast to the conventional tandem transcoding algorithm, the proposed algorithm converts the parameters of one coder to the other without going through the full decoding and encoding process. The proposed algorithm comprises parameter decoding, line spectral pair (LSP) conversion, pitch-period conversion, excitation conversion, and rate selection. The evaluation results show that the proposed algorithm achieves speech quality equivalent to that of tandem transcoding with reduced computational complexity and delay.
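The LSP conversion step can be illustrated generically. SMV uses 20 ms frames while G.723.1 uses 30 ms frames, so the LSP track must be re-sampled at the destination coder's frame positions; below is a minimal sketch using linear interpolation (the paper's exact conversion scheme may differ):

```python
def interpolate_lsp(lsp_a, lsp_b, alpha):
    # Linearly interpolate between two LSP vectors; since LSP
    # coefficients are ordered frequencies, a convex combination of two
    # valid LSP vectors preserves the ordering and remains valid.
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(lsp_a, lsp_b)]

def resample_lsp_track(lsps, src_frame_ms, dst_frame_ms):
    # Re-sample a per-frame LSP track from the source coder's frame
    # rate to the destination coder's frame rate by interpolating
    # between the two nearest source frames.
    out = []
    n_dst = int(len(lsps) * src_frame_ms / dst_frame_ms)
    for m in range(n_dst):
        t = m * dst_frame_ms / src_frame_ms   # position in source frames
        i = min(int(t), len(lsps) - 1)
        j = min(i + 1, len(lsps) - 1)
        out.append(interpolate_lsp(lsps[i], lsps[j], t - i))
    return out
```

Interpolating in the LSP domain rather than decoding to speech and re-encoding is what saves the tandem delay and complexity.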

Improving LD-CELP using frame classification and modified synthesis filter (프레임 분류와 합성필터의 변형을 이용한 적은 지연을 갖는 음성 부호화기의 성능)

  • 임은희;이주호;김형명
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.6
    • /
    • pp.1430-1437
    • /
    • 1996
  • A low-delay code-excited linear predictive speech coder (LD-CELP) at bit rates under 8 kbps is considered. We try to improve the performance of the speech coder with frame-type-dependent modification of the synthesis filter. We first classify frames into three groups: voiced, unvoiced, and onset. For voiced and unvoiced frames, the spectral envelope of the synthesis filter is adapted to the phonetic characteristics. For transition frames from unvoiced to voiced, a synthesis filter interpolated with the bias filter is used. The proposed vocoder produces clearer sound than other pre-existing LD-CELP vocoders at a similar delay.

  • PDF
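The voiced/unvoiced/onset classification described above can be sketched with simple frame energy and zero-crossing-rate features; the features and thresholds here are illustrative assumptions, not the paper's classifier:

```python
def classify_frame(frame, prev_voiced, energy_thresh=0.01, zcr_thresh=0.3):
    # Classify a frame as voiced, unvoiced, or onset (an unvoiced-to-voiced
    # transition) using frame energy and zero-crossing rate (ZCR):
    # voiced speech has high energy and low ZCR; unvoiced has the reverse.
    n = len(frame)
    energy = sum(s * s for s in frame) / n
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (n - 1)
    voiced = energy > energy_thresh and zcr < zcr_thresh
    if voiced and not prev_voiced:
        return "onset"
    return "voiced" if voiced else "unvoiced"
```

The onset class matters because it selects the interpolated synthesis filter in the scheme above.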

Wavelet-based Pitch Detector for 2.4 kbps Harmonic-CELP Coder (2.4 kbps 하모닉-CELP 코더를 위한 웨이블렛 피치 검출기)

  • 방상운;이인성;권오주
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.8
    • /
    • pp.717-726
    • /
    • 2003
  • This paper presents the design of a wavelet-based pitch detector for a 2.4 kbps Harmonic-CELP coder, and a method for effective waveform interpolation by deciding the window shape in transition regions. A waveform interpolation coder operates by encoding one pitch-period-sized segment of speech (a prototype segment) for each frame and generating a smooth interpolation between the prototype segments of voiced frames. However, harmonic synthesis of the prototype waveforms of the previous and current frames causes not only waveform errors but also discontinuities at frame boundaries in cases of pitch halving or doubling. In addition, since the waveform interpolation coder synthesizes the excitation waveform in transition regions by overlap-add with a triangular window, the Harmonic-CELP coder fails to model instantaneously increasing speech, and the synthesized waveform increases only linearly. To detect the precise pitch period, we first use a hybrid pitch detector and then increase the precision with a second, ACF-based pitch detector. To modify the excitation window, we detect the onset and offset of a frame by the glottal closure instant (GCI). As a result, pitch doubling is removed, the pitch error rate is decreased by 5.4% compared with the ACF and by 2.66% compared with the wavelet detector, and the MOS test score improves by 0.13 in transition regions.
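The ACF-based refinement stage of such a pitch detector can be sketched as follows; the lag bounds assume 8 kHz sampling (a 50-400 Hz pitch range) and the code is a generic illustration, not the paper's hybrid detector:

```python
def acf_pitch(frame, min_lag=20, max_lag=160):
    # Autocorrelation (ACF) pitch detector: the lag with the largest
    # autocorrelation within the plausible pitch range is taken as the
    # pitch period in samples (20..160 samples ~ 400..50 Hz at 8 kHz).
    best_lag, best_r = min_lag, -1.0
    for lag in range(min_lag, max_lag + 1):
        r = sum(frame[i] * frame[i - lag] for i in range(lag, len(frame)))
        if r > best_r:
            best_r, best_lag = r, lag
    return best_lag
```

Plain ACF is exactly where halving/doubling errors arise, since multiples of the true period also score highly; that is what the wavelet stage and GCI detection are meant to correct.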

Spectrum Representation Based on LPC Cepstral VQ for Low Bit Rate CELP Coder (LPC Cepstral 벡터 양자화에 의한 저 전송율 CELP 음성부호기의 스펙트럼 표기)

  • 정재호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.19 no.4
    • /
    • pp.761-771
    • /
    • 1994
  • This paper focuses on how spectrum information can be represented efficiently in a very low bit rate CELP speech coder. To achieve this goal, an LPC cepstral coefficient VQ scheme representing the spectrum information in a CELP coder is proposed. To represent the spectrum information using LPC cepstra, three different cepstral distance measures having different spectral meanings in the frequency domain are considered, and their performances are compared and analyzed. The experimental results show that spectrum information in low bit rate CELP coders can be represented very efficiently using the proposed LPC cepstral vector quantization scheme.

  • PDF
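The LPC-to-cepstrum conversion underlying such a scheme follows a standard recursion; a minimal sketch, assuming the coefficient sign convention A(z) = 1 + Σ a_k z^{-k}:

```python
def lpc_to_cepstrum(a, n_cep):
    # Convert LPC coefficients a[0..p-1] (for A(z) = 1 + sum a_k z^-k)
    # into the first n_cep cepstral coefficients of 1/A(z) using the
    # standard recursion:
    #   c_n = -a_n - sum_{k=1}^{n-1} (k/n) c_k a_{n-k}
    # where a_n = 0 for n > p.
    p = len(a)
    c = [0.0] * n_cep
    for n in range(1, n_cep + 1):
        acc = -(a[n - 1] if n <= p else 0.0)
        for k in range(max(1, n - p), n):
            acc -= (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c
```

As a sanity check, a single pole A(z) = 1 - r z^{-1} must yield c_n = r^n / n.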

Speech/Music Signal Classification Based on Spectrum Flux and MFCC For Audio Coder (오디오 부호화기를 위한 스펙트럼 변화 및 MFCC 기반 음성/음악 신호 분류)

  • Sangkil Lee;In-Sung Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.5
    • /
    • pp.239-246
    • /
    • 2023
  • In this paper, we propose an open-loop algorithm to classify speech and music signals using spectral flux parameters and Mel-Frequency Cepstral Coefficients (MFCC) for an audio coder. To increase responsiveness, the MFCCs were used as short-term feature parameters, and spectral fluxes were used as long-term feature parameters to improve accuracy. The overall speech/music classification decision is made by combining the short-term and long-term classification methods. A Gaussian Mixture Model (GMM) was used for pattern recognition, and the optimal GMM parameters were extracted using the Expectation-Maximization (EM) algorithm. The proposed combined long-term and short-term speech/music classification method showed an average classification error rate of 1.5% on various audio sources, improving the classification error rate by 0.9% compared with the short-term-only method and by 0.6% compared with the long-term-only method. Compared with the Unified Speech and Audio Coding (USAC) audio classification method, the proposed method improved the classification error rate by 9.1% on percussive music signals with attacks and by 5.8% on speech signals.
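The spectral-flux feature used above measures the frame-to-frame change of the magnitude spectrum; a minimal sketch (the L2 form shown is one common variant, not necessarily the paper's exact definition):

```python
import math

def spectral_flux(mag_prev, mag_curr):
    # Spectral flux: L2 norm of the change in the magnitude spectrum
    # between consecutive frames. Music with percussive attacks and
    # speech transitions produce large flux; steady tones produce ~0.
    return math.sqrt(sum((c - p) ** 2 for p, c in zip(mag_prev, mag_curr)))
```

A long-term average of this value over many frames is what makes it useful as the slow, accurate half of the combined decision.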

VQ Codebook Index Interpolation Method for Frame Erasure Recovery of CELP Coders in VoIP

  • Lim Jeongseok;Yang Hae Yong;Lee Kyung Hoon;Park Sang Kyu
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.9C
    • /
    • pp.877-886
    • /
    • 2005
  • Various frame recovery algorithms have been suggested to overcome the communication quality degradation caused by Internet-typical impairments in Voice over IP (VoIP) communications. In this paper, we propose a new receiver-based recovery method that enhances recovered speech quality at almost no computational cost and without additional delay or bandwidth consumption. Most conventional recovery algorithms try to recover lost or erroneous speech frames by reconstructing missing coefficients or the speech signal during the decoding process, so they eventually require modifying the decoder software. The proposed frame recovery algorithm reconstructs the missing frame itself and does not require the burden of modifying the decoder. In the proposed scheme, the Vector Quantization (VQ) codebook indices of the erased frame are directly estimated by referring to pre-computed VQ Codebook Index Interpolation Tables (VCIIT) using the VQ indices of the adjacent (previous and next) frames. We applied the proposed scheme to the ITU-T G.723.1 speech coder and found that it improves reconstructed speech quality and outperforms the conventional G.723.1 loss recovery algorithm. Moreover, the suggested scheme can easily be applied to practical VoIP systems because it requires only a very small amount of additional computation and memory.
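The VCIIT idea can be sketched generically: pre-compute, for each pair of codebook indices, the index whose codeword best represents their interpolation, then recover an erased frame's index by a table lookup on its neighbors' indices. The midpoint criterion below is an illustrative assumption, not necessarily the paper's exact table construction:

```python
def build_vciit(codebook):
    # For every (prev, next) index pair, store the codebook index whose
    # codeword is nearest (squared error) to the midpoint of the two
    # codewords. At runtime, recovering an erased frame is then a single
    # table lookup: est = table[prev_idx][next_idx].
    def nearest(v):
        return min(range(len(codebook)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(codebook[i], v)))
    n = len(codebook)
    return [[nearest([(a + b) / 2.0
                      for a, b in zip(codebook[i], codebook[j])])
             for j in range(n)] for i in range(n)]
```

All the interpolation cost is paid offline, which is why the runtime overhead is essentially a memory access.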

Design of Multi Rate Wideband Speech Coder Using the AMR(Adaptive Multi-Rate) Coder (AMR 부호화기와 결합된 다전송률 광대역 음성부호화기 설계)

  • 김은주;이호창;이인성
    • Proceedings of the IEEK Conference
    • /
    • 2000.09a
    • /
    • pp.755-758
    • /
    • 2000
  • In this paper, a wideband speech coder is designed using the AMR (Adaptive Multi-Rate) coder. The input signal, sampled at 16 kHz, is split into two bands by a QMF filter; each band is decimated into an 8 kHz sampled signal, and the low-band (0 Hz-3400 Hz) and high-band (3400 Hz-7000 Hz) signals are encoded separately. The two narrowband signals are encoded and transmitted using AMR (Adaptive Multi-Rate) and ATC (Adaptive Transform Coding), respectively. The encoded information from the two bands has bit rates from 20.2 kbps down to 12.75 kbps; at the receiver, each band is decoded by the AMR and ATC methods and the speech signal is synthesized. To evaluate the performance of the designed wideband speech coder, a MOS test was conducted, including the ITU-T standard G.722.

  • PDF

Design of Multi Rate Wideband Speech Coder Using the AMR(Adaptive Multi-Rate) Coder (AMR 부호화기와 결합된 다전송률 광대역 음성부호화기 설계)

  • 김은주;이인성
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.26 no.5B
    • /
    • pp.632-638
    • /
    • 2001
  • In this paper, a wideband speech coder is designed using the AMR (Adaptive Multi-Rate) coder. The input signal, sampled at 16 kHz, is split into two bands by a QMF filter; each band is decimated into an 8 kHz sampled signal, and the low-band (0 Hz-3400 Hz) and high-band (3400 Hz-7000 Hz) signals are encoded separately. The two narrowband signals are encoded and transmitted using AMR (Adaptive Multi-Rate) and ATC (Adaptive Transform Coding), respectively. The encoded information from the two bands has bit rates from 20.2 kbps down to 12.75 kbps; at the receiver, each band is decoded by the AMR and ATC methods and the speech signal is synthesized. To evaluate the performance of the designed wideband speech coder, a MOS test was conducted, including the ITU-T standard G.722.

  • PDF
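The QMF analysis stage shared by both versions of this design can be sketched as follows; the two-tap Haar-like filter is purely illustrative (practical coders such as G.722 use much longer QMF banks):

```python
def qmf_split(x, h):
    # Two-band QMF analysis: lowpass filter h and its mirror highpass
    # g[n] = (-1)^n * h[n]; each filtered output is decimated by 2,
    # so the total sample count is preserved across the two bands.
    g = [((-1) ** n) * c for n, c in enumerate(h)]
    def fir(sig, taps):
        return [sum(taps[k] * sig[i - k]
                    for k in range(len(taps)) if i - k >= 0)
                for i in range(len(sig))]
    return fir(x, h)[::2], fir(x, g)[::2]
```

After the split, each half-rate band can be handed to a narrowband coder (AMR for the low band, ATC for the high band in the design above).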

Modified Generic Mode Coding Scheme for Enhanced Sound Quality of G.718 SWB (G.718 초광대역 코덱의 음질 향상을 위한 개선된 Generic Mode Coding 방법)

  • Cho, Keun-Seok;Jeong, Sang-Bae
    • Phonetics and Speech Sciences
    • /
    • v.4 no.3
    • /
    • pp.119-125
    • /
    • 2012
  • This paper describes a new algorithm for encoding spectral shape and envelope in the generic mode of G.718 super-wideband (SWB). In the G.718 SWB coder, generic mode coding and sinusoidal enhancement are used for the quantization of modified discrete cosine transform (MDCT)-based parameters in the high-frequency band. In the generic mode, the high-frequency band is divided into sub-bands and, for every sub-band, the most similar match according to the selected similarity criterion is searched for in the coded, envelope-normalized wideband content. To improve the quantization scheme in the high-frequency region of speech/audio signals, a modified generic mode that improves on the generic mode of G.718 SWB is proposed. In the proposed generic mode, perceptual vector quantization of spectral envelopes and an increased resolution for spectral copying are used. The performance of the proposed algorithm is evaluated in terms of objective quality. Experimental results show that the proposed algorithm significantly improves sound quality.
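The per-sub-band similarity search in the generic mode can be sketched generically as a normalized cross-correlation search over the coded wideband spectrum; the criterion and names here are illustrative assumptions, not the G.718 specification:

```python
import math

def best_match_offset(wideband, subband):
    # Slide the high-band sub-band over the coded, envelope-normalized
    # wideband spectrum and return the offset maximizing normalized
    # cross-correlation; the matched segment is then copied (with its
    # own envelope) into the high band at the decoder.
    n = len(subband)
    e_sb = math.sqrt(sum(s * s for s in subband))
    best_off, best_score = 0, -1.0
    for off in range(len(wideband) - n + 1):
        seg = wideband[off:off + n]
        e_seg = math.sqrt(sum(s * s for s in seg))
        if e_seg == 0.0:
            continue  # skip silent segments to avoid division by zero
        score = sum(a * b for a, b in zip(seg, subband)) / (e_seg * e_sb)
        if score > best_score:
            best_score, best_off = score, off
    return best_off
```

Only the offset (and a gain) needs to be transmitted per sub-band, which is what makes spectral copying so cheap at high frequencies.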