• Title/Summary/Keyword: CODEBOOK


Design of a 4kb/s ACELP Codec Using the Generalized AbS Principle (Generalized AbS 구조를 이용한 4kb/s ACELP 음성 부호화기의 설계)

  • 성호상;강상원
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.7
    • /
    • pp.33-38
    • /
    • 1999
  • In this paper, we combine a generalized analysis-by-synthesis (AbS) structure and an algebraic excitation scheme to propose a new 4 kb/s speech codec. This codec partly reuses the structure of G.729. We design a line spectrum pair (LSP) quantizer, an adaptive codebook, and an excitation codebook to fit the 4 kb/s bit rate. The codec has a 25 ms algorithmic delay, corresponding to a 20 ms frame size and a 5 ms lookahead. At bit rates below 4 kb/s, most CELP speech codecs based on the conventional AbS principle suffer a rapid degradation of speech quality. To overcome this drawback we use the generalized AbS structure, which is efficient for low bit rate speech codecs. The LP coefficients are converted to LSPs and quantized with a predictive two-stage VQ. A low-complexity algebraic codebook using a shifting method provides the fixed codebook excitation, and the gains of the adaptive and fixed codebooks are vector quantized. To evaluate the performance of the proposed codec, A-B preference tests were carried out against the fixed-rate 8 kb/s QCELP. The test results show that the performance of the proposed codec is similar to that of the fixed-rate 8 kb/s QCELP.
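
As an illustration of the analysis-by-synthesis principle the codec is built on, the sketch below passes each candidate excitation through a synthesis filter and keeps the codevector and gain that minimize the error against a target signal. The first-order filter, the random codebook, and the omission of perceptual weighting and the adaptive codebook are simplifying assumptions for illustration, not the paper's G.729-derived 4 kb/s design.

```python
import numpy as np
from scipy.signal import lfilter

def abs_search(target, codebook, lpc):
    """Return the codevector index and gain whose synthesized output best matches the target."""
    best_idx, best_gain, best_err = -1, 0.0, np.inf
    for idx, c in enumerate(codebook):
        synth = lfilter([1.0], lpc, c)                 # pass excitation through synthesis filter 1/A(z)
        gain = np.dot(target, synth) / (np.dot(synth, synth) + 1e-12)
        err = np.sum((target - gain * synth) ** 2)
        if err < best_err:
            best_idx, best_gain, best_err = idx, gain, err
    return best_idx, best_gain

rng = np.random.default_rng(0)
lpc = np.array([1.0, -0.9])                            # toy first-order synthesis filter A(z)
codebook = rng.standard_normal((64, 40))               # 64 random 40-sample excitation candidates
target = lfilter([1.0], lpc, rng.standard_normal(40))  # a synthetic target signal
print(abs_search(target, codebook, lpc))
```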


Image Coding Using LOT and FSVQ with Two-Channel Conjugate Codebooks (LOT와 2-채널 결합 코드북을 갖은 FSVQ를 이용한 영상 부호화)

  • 채종길;황찬식
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.19 no.4
    • /
    • pp.772-780
    • /
    • 1994
  • Vector quantization with two-channel conjugate codebooks has been studied as an efficient coding technique that reduces computational complexity and codebook storage. This paper proposes an FSVQ using two-channel conjugate codebooks in order to reduce the number of state codebooks. In the two-channel conjugate FSVQ, an input vector is coded with the state codebook of a separate state for each channel codebook. In addition, the LOT is adopted to obtain a high coding gain and to reduce the blocking effect that appears in block coding. Although FSVQ can achieve a higher compression ratio than ordinary vector quantization, it has the disadvantage of requiring a very large number of state codebooks. FSVQ with two-channel conjugate codebooks, however, can use a significantly reduced number of state codebooks, at the cost of a small loss in PSNR compared with conventional FSVQ using a single codebook. Moreover, FSVQ in the LOT domain reduces the blocking effect and yields a higher coding gain than FSVQ in the spatial domain.
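
The storage argument behind two-channel conjugate codebooks can be seen in a small sketch: reconstruction is the average of one codevector from each of two small codebooks, so two 16-entry codebooks jointly address 256 reconstructions while storing only 32 codevectors. The random codebooks and exhaustive joint search below are illustrative assumptions; the paper's finite-state machinery and LOT stage are omitted.

```python
import numpy as np

def conjugate_vq_encode(x, cb_a, cb_b):
    """Exhaustive joint search: reconstruction is the average of one codevector per channel."""
    best, best_err = (0, 0), np.inf
    for i, a in enumerate(cb_a):
        for j, b in enumerate(cb_b):
            err = np.sum((x - 0.5 * (a + b)) ** 2)
            if err < best_err:
                best, best_err = (i, j), err
    return best, best_err

rng = np.random.default_rng(1)
cb_a = rng.standard_normal((16, 16))   # channel-A codebook: 16 codevectors of dimension 16
cb_b = rng.standard_normal((16, 16))   # channel-B codebook
x = rng.standard_normal(16)
print(conjugate_vq_encode(x, cb_a, cb_b)[0])   # pair of indices, one per channel
```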


Image Compression Using DCT Map FSVQ and Single-side Distribution Huffman Tree (DCT 맵 FSVQ와 단방향 분포 허프만 트리를 이용한 영상 압축)

  • Cho, Seong-Hwan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.10
    • /
    • pp.2615-2628
    • /
    • 1997
  • In this paper, a new codebook design algorithm is proposed that uses a DCT map based on the two-dimensional discrete cosine transform (2-D DCT) and a finite state vector quantizer (FSVQ) designed for image transmission. The map is built by dividing the input image according to edge quantity, and, guided by the map, the significant features of the training images are extracted using the 2-D DCT. A master codebook of the FSVQ is generated by partitioning the training set with a tree-structured binary partition. The state codebooks are constructed from the master codebook, and the index of an input vector is then searched in the state codebook rather than in the master codebook. Because index coding is an important part of high-speed digital transmission, the fixed-length codes are converted to variable-length codes according to an entropy coding rule: Huffman coding assigns transmission codes to the codebook indices. To speed up the Huffman code generation process, this paper proposes a single-side growing Huffman tree. Compared with the pairwise nearest neighbor (PNN) and classified VQ (CVQ) algorithms on the Einstein and Bridge images, the new algorithm shows better picture quality, with gains of 2.04 dB and 2.48 dB over PNN and 1.75 dB and 0.99 dB over CVQ, respectively.
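
Because the entropy-coding step assigns variable-length codes to the VQ indices, a small Huffman example may be helpful. The paper's single-side growing tree is a variant aimed at faster code generation; the sketch below builds an ordinary Huffman code over a toy index stream with a heap, as a baseline for comparison.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Return a dict mapping each symbol to its Huffman codeword (classic heap construction)."""
    freq = Counter(symbols)
    heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]       # left branch gets prefix 0
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]       # right branch gets prefix 1
        heapq.heappush(heap, [lo[0] + hi[0], next_id] + lo[2:] + hi[2:])
        next_id += 1
    return dict(heapq.heappop(heap)[2:])

indices = [3, 3, 3, 7, 7, 1, 0, 3, 7, 2]  # toy stream of VQ indices
print(huffman_codes(indices))             # frequent indices receive shorter codes
```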


A Classified Space VQ Design for Text-Independent Speaker Recognition (문맥 독립 화자인식을 위한 공간 분할 벡터 양자기 설계)

  • Lim, Dong-Chul;Lee, Hanig-Sei
    • The KIPS Transactions:PartB
    • /
    • v.10B no.6
    • /
    • pp.673-680
    • /
    • 2003
  • In this paper, we study the enhancement of VQ (vector quantization) design for text-independent speaker recognition. Specifically, we present a non-iterative method for building a vector quantization codebook; because learning is non-iterative, the computational complexity is dramatically reduced. The proposed Classified Space VQ (CSVQ) design method for text-independent speaker recognition generalizes the semi-noniterative VQ design method for text-dependent speaker recognition, and it contrasts with existing design methods that run an iterative learning algorithm for every training speaker. The characteristics of the CSVQ design are as follows. First, the proposed method performs non-iterative learning by using a Classified Space Codebook. Second, the quantization regions of each speaker are identical to the quantization regions of the Classified Space Codebook, and the quantization point of each speaker is the optimal point for that speaker's statistical distribution within each region of the Classified Space Codebook. Third, the Classified Space Codebook (CSC) is constructed through a sample vector formation method (CSVQ1, 2) and a hyper-lattice formation method (CSVQ3). In the numerical experiments, we use 12th-order mel-cepstrum feature vectors of 10 speakers and compare against the existing method, varying the codebook size from 16 to 128 for each Classified Space Codebook. The recognition rate of the proposed method is 100% for CSVQ1 and 2, equal to that of the existing method. The proposed CSVQ design method is therefore a new alternative that reduces computational complexity while maintaining the recognition rate, and CSVQ with a CSC can be applied to general-purpose recognition.
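
A minimal sketch of the non-iterative idea, under the assumption that the shared Classified Space Codebook is already given (random codevectors stand in for the paper's sample-vector or hyper-lattice construction): each speaker's codebook reuses the shared quantization regions, and the speaker's codevector in a region is simply the mean of that speaker's vectors falling in it, computed in a single pass.

```python
import numpy as np

def nearest_region(csc, x):
    """Index of the shared-codebook region that vector x falls in."""
    return int(np.argmin(np.sum((csc - x) ** 2, axis=1)))

def speaker_codebook(csc, speaker_vectors):
    """One pass over a speaker's data: the codevector per shared region is that speaker's mean there."""
    regions = np.array([nearest_region(csc, v) for v in speaker_vectors])
    cb = csc.copy()                                   # regions with no data keep the shared center
    for r in np.unique(regions):
        cb[r] = speaker_vectors[regions == r].mean(axis=0)
    return cb

rng = np.random.default_rng(2)
csc = rng.standard_normal((8, 12))                    # shared 8-region partition of a 12-dim feature space
train = rng.standard_normal((200, 12)) + 0.3          # one speaker's training vectors
print(speaker_codebook(csc, train).shape)             # (8, 12): one codevector per shared region
```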

Improvement of Overlapped Codebook Search in QCELP (QCELP에서 중첩된 코드북 검색의 개선)

  • 박광철;한승진;이정현
    • The KIPS Transactions:PartC
    • /
    • v.8C no.1
    • /
    • pp.105-112
    • /
    • 2001
  • In this paper, we present an advanced QCELP codebook search that improves speech quality, making the QCELP vocoder usable in noise-robust systems. While conventional QCELP searches the stochastic codebook once, after experimenting with two to five search passes we found that searching twice is the most suitable for improving speech quality. The advanced QCELP vocoder therefore represents the excitation signal in more detail through two passes of precise quantization, and thereby improves speech quality. In our experiments, we use speech recorded in everyday environments (lecture room, house, street, laboratory, etc.) without noise removal as input data and measure speech quality using SNR and segSNR. The experimental results show that the advanced QCELP improves SNR and segSNR by 38.35% and 65.51%, respectively, compared with the conventional QCELP.
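
A sketch of the two-pass idea, with a random codebook and plain least-squares matching standing in for QCELP's overlapped stochastic codebook and perceptually weighted search: the first pass picks a codevector and gain, the second pass searches again on the residual, and the excitation is the sum of the two scaled codevectors.

```python
import numpy as np

def search_once(target, codebook):
    """Best codevector index and least-squares gain for the given target."""
    gains = codebook @ target / (np.sum(codebook ** 2, axis=1) + 1e-12)
    errs = np.sum((target[None, :] - gains[:, None] * codebook) ** 2, axis=1)
    i = int(np.argmin(errs))
    return i, gains[i]

def two_pass_excitation(target, codebook):
    """First pass on the target, second pass on the residual; excitation is the scaled sum."""
    i1, g1 = search_once(target, codebook)
    residual = target - g1 * codebook[i1]
    i2, g2 = search_once(residual, codebook)
    return g1 * codebook[i1] + g2 * codebook[i2], (i1, i2)

rng = np.random.default_rng(3)
cb = rng.standard_normal((128, 40))
target = rng.standard_normal(40)
exc, idx = two_pass_excitation(target, cb)
print(idx, np.sum((target - exc) ** 2) < np.sum(target ** 2))   # residual energy is reduced
```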


VQ Codebook Index Interpolation Method for Frame Erasure Recovery of CELP Coders in VoIP

  • Lim Jeongseok;Yang Hae Yong;Lee Kyung Hoon;Park Sang Kyu
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.9C
    • /
    • pp.877-886
    • /
    • 2005
  • Various frame recovery algorithms have been suggested to overcome the quality degradation caused by Internet-typical impairments in Voice over IP (VoIP) communications. In this paper, we propose a new receiver-based recovery method that enhances recovered speech quality at almost no computational cost and without additional delay or bandwidth consumption. Most conventional recovery algorithms try to recover lost or erroneous speech frames by reconstructing missing coefficients or the speech signal during the decoding process, so they eventually require modifying the decoder software. The proposed frame recovery algorithm reconstructs the missing frame itself and does not impose the burden of modifying the decoder. In the proposed scheme, the vector quantization (VQ) codebook indices of the erased frame are directly estimated by referring to pre-computed VQ Codebook Index Interpolation Tables (VCIIT) using the VQ indices of the adjacent (previous and next) frames. We applied the proposed scheme to the ITU-T G.723.1 speech coder and found that it improves reconstructed speech quality and outperforms the conventional G.723.1 loss recovery algorithm. Moreover, the scheme is easily applicable to practical VoIP systems because it requires only a very small amount of additional computation and memory.
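
One plausible reading of the interpolation-table idea, sketched below under the assumption that entry (i, j) of the table holds the codebook index whose codevector is closest to the midpoint of codevectors i and j; the table is built offline, so recovering an erased frame's index is a single lookup. The toy codebook is not G.723.1's actual LSP codebook.

```python
import numpy as np

def build_vciit(codebook):
    """Offline table: entry (i, j) is the index of the codevector nearest the midpoint of i and j."""
    n = len(codebook)
    table = np.empty((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            mid = 0.5 * (codebook[i] + codebook[j])
            table[i, j] = np.argmin(np.sum((codebook - mid) ** 2, axis=1))
    return table

rng = np.random.default_rng(4)
cb = rng.standard_normal((32, 10))          # toy 32-entry codebook
vciit = build_vciit(cb)
prev_idx, next_idx = 5, 17                  # indices decoded from the adjacent good frames
print("estimated index for the erased frame:", vciit[prev_idx, next_idx])
```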

Fast Motion Estimation Algorithm Using Motion Vector Prediction and Neural Network (움직임 예측과 신경 회로망을 이용한 고속 움직임 추정 알고리즘)

  • 최정현;이경환;이법기;정원식;김경규;김덕규
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.9A
    • /
    • pp.1411-1418
    • /
    • 1999
  • In this paper, we propose a fast motion estimation algorithm using motion vector prediction and a neural network. Considering that motion vectors have high spatial correlation, the motion vector of the current block is predicted from those of neighboring blocks. The motion vector codebook is designed with the Kohonen self-organizing feature map (KSFM) learning algorithm, which has a fast learning speed and 2-D adaptive characteristics. Since similar codevectors are located close together in the 2-D codebook, the motion is progressively estimated starting from the predicted codevector in the codebook. Computer simulation results show that the proposed method performs well with reduced computational complexity.
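
A sketch of the progressive search, with a regular lattice of candidate motion vectors standing in for a trained Kohonen map (the only property exploited is that neighbouring cells hold similar vectors): the search starts at the cell nearest the predicted motion vector and hill-climbs over neighbouring cells until the SAD stops improving. Block size, search range, and the synthetic frames are illustrative assumptions.

```python
import numpy as np

def block_sad(cur, ref, x, y, mv, bs=8):
    """Sum of absolute differences for the block at (x, y) displaced by mv = (dx, dy) in the reference."""
    dx, dy = int(mv[0]), int(mv[1])
    return int(np.abs(cur[y:y + bs, x:x + bs].astype(int)
                      - ref[y + dy:y + dy + bs, x + dx:x + dx + bs].astype(int)).sum())

def progressive_search(cur, ref, x, y, grid, predicted):
    """Start from the codevector nearest the predicted MV and hill-climb over neighbouring grid cells."""
    d2 = np.sum((grid - np.asarray(predicted)) ** 2, axis=2)
    pos = tuple(int(v) for v in np.unravel_index(np.argmin(d2), d2.shape))
    while True:
        cands = [(i, j)
                 for i in range(max(0, pos[0] - 1), min(grid.shape[0], pos[0] + 2))
                 for j in range(max(0, pos[1] - 1), min(grid.shape[1], pos[1] + 2))]
        best = min(cands, key=lambda p: block_sad(cur, ref, x, y, grid[p]))
        if best == pos:
            return grid[pos]
        pos = best

# Regular lattice of motion vectors in place of a trained self-organizing map.
dy_vals, dx_vals = np.meshgrid(np.arange(-4, 5), np.arange(-4, 5), indexing="ij")
grid = np.stack([dx_vals, dy_vals], axis=2)           # grid[i, j] = (dx, dy)

rng = np.random.default_rng(5)
ref = rng.integers(0, 256, (64, 64))
cur = np.roll(ref, (2, 1), axis=(0, 1))               # current frame: true displacement is (dx, dy) = (-1, -2)
print(progressive_search(cur, ref, 24, 24, grid, predicted=(0, -1)))   # expected: [-1 -2]
```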


An optimal codebook design for multistage gain-shape vector quantizer using genetic algorithms (유전알고리즘에 의한 다단 gain-shape 양자화기의 최적 코드북 설계)

  • 김대진;안선하
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.34S no.1
    • /
    • pp.80-93
    • /
    • 1997
  • This paper proposes a new technique for optimal codebook design in multistage gain-shape vector quantization (MS-GS VQ) for wireless image communication. The original image is divided into as many blocks as possible in order to obtain strong robustness to channel transmission errors: the image is decomposed into a number of subband images, each of which contains separate spatial-frequency information and is obtained by the biorthogonal wavelet transform; each subband is quantized in several consecutive VQ stages, where each stage carries the residual information of the previous stage; and each vector in each stage is divided into two components, gain and shape. This decomposition, however, generates so many blocks that determining the optimal codebooks becomes difficult. We overcome this difficulty by evolving each block's codebook independently with a separate genetic algorithm that uses that stage's own training vectors. The impact of the proposed VQ technique on channel transmission errors is compared with that of other VQ techniques. Simulation results show that the proposed technique (MS-GS VQ) with codebooks designed by genetic algorithms is very robust to channel transmission errors, even under bursty, high-BER conditions.
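
A toy genetic-algorithm codebook design, evolving a population of candidate codebooks to minimize mean quantization distortion on a training set. Population size, truncation selection, uniform crossover, and Gaussian mutation are arbitrary illustrative choices, not the paper's per-stage GA for multistage gain-shape VQ over wavelet subbands.

```python
import numpy as np

rng = np.random.default_rng(6)

def distortion(codebook, train):
    """Mean squared error of nearest-neighbour quantization of the training set."""
    d = np.sum((train[:, None, :] - codebook[None, :, :]) ** 2, axis=2)
    return d.min(axis=1).mean()

def evolve_codebook(train, n_codes=8, pop=20, gens=50):
    population = [train[rng.choice(len(train), n_codes, replace=False)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda cb: distortion(cb, train))
        parents = population[: pop // 2]                            # truncation selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.choice(len(parents), 2, replace=False)
            mask = rng.random((n_codes, train.shape[1])) < 0.5      # uniform crossover
            child = np.where(mask, parents[a], parents[b])
            child = child + 0.05 * rng.standard_normal(child.shape)  # Gaussian mutation
            children.append(child)
        population = parents + children
    return min(population, key=lambda cb: distortion(cb, train))

train = rng.standard_normal((256, 4))
best = evolve_codebook(train)
print(round(float(distortion(best, train)), 3))
```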


Complexity Reduction Algorithm for Quantized EGT Codebook Searching in Multiple Antenna Systems (다중 안테나 시스템에서 양자화된 동 이득 전송 기법의 코드북 검색 복잡도 감쇄 기법)

  • Park, Noe-Yoon;Kim, Young-Ju
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.22 no.1
    • /
    • pp.98-105
    • /
    • 2011
  • A reduced-complexity codebook search for quantized equal gain transmission (QEGT) in MIMO-OFDM systems is proposed. The QEGT codebook is divided into M groups of Q indices each, and every group has a representative index. In the first stage, only the representative indices are searched and the best one is selected. In the second stage, the optimum index is determined only within the group of the selected representative index. This strategy reduces the overall index search complexity compared with conventional methods. Monte-Carlo simulations show that the search complexity is reduced while the link-level performance remains almost the same as that of the conventional methods when the number of transmit antennas is 3 to 7.
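
The two-stage grouped search reduces the number of metric evaluations from M x Q to roughly M + Q, as the sketch below shows for a toy equal-gain codebook; the random per-antenna phases, the |h^H w| beamforming-gain metric, and taking the first member of each group as its representative are illustrative assumptions, not the paper's actual codebook.

```python
import numpy as np

rng = np.random.default_rng(7)
Nt, M, Q = 4, 8, 8                                   # antennas, groups, members per group
phases = rng.uniform(0, 2 * np.pi, (M * Q, Nt))
codebook = np.exp(1j * phases) / np.sqrt(Nt)         # equal-gain (unit-modulus-per-antenna) codewords
groups = codebook.reshape(M, Q, Nt)
h = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)   # toy channel

def metric(w):
    return np.abs(np.vdot(h, w))                     # effective beamforming gain |h^H w|

# Stage 1: search only the M representatives (member 0 of each group).
best_group = max(range(M), key=lambda m: metric(groups[m, 0]))
# Stage 2: search the Q members of the selected group.
best_member = max(range(Q), key=lambda q: metric(groups[best_group, q]))

full_best = max(codebook, key=metric)                # exhaustive search for comparison
print(metric(groups[best_group, best_member]), metric(full_best), "searches:", M + Q, "vs", M * Q)
```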

The Convergence Characteristics of the Time-Averaged Distortion in Vector Quantization: Part II. Applications to Testing Trained Codebooks (벡터 양자화에서 시간 평균 왜곡치의 수렴 특성: II. 훈련된 부호책의 검사 기법)

  • Dong Sik Kim
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.5
    • /
    • pp.747-755
    • /
    • 1995
  • When codebooks are designed by a clustering algorithm using training sets, a time-averaged distortion, called the inside-training-set distortion (ITSD), is usually calculated at each iteration of the algorithm, since the input probability function is generally unknown. The algorithm stops when the ITSD no longer decreases significantly. To test the trained codebook, the outside-training-set distortion (OTSD) is then calculated as a time-averaged approximation using a test set. Codebooks that yield small values of the OTSD are regarded as good codebooks; in other words, calculating the OTSD is a criterion for testing a trained codebook. Such an argument, however, does not always hold unless certain conditions are satisfied. Moreover, approximating the OTSD is known to require a large test set in general, which entails heavy computational complexity. In this paper, based on the analyses in [16], it is shown that a test set only as large as the codebook is sufficient when the codebook size is large. A simple method for testing trained codebooks is then presented. Experimental results on synthetic data and real images that support the analysis are also provided and discussed.
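
A sketch of the two time-averaged distortions discussed above: the ITSD is measured on the training set used to design the codebook, and the OTSD on an independent test set, here chosen only as large as the codebook, in line with the stated result for large codebooks. Plain k-means stands in for the clustering algorithm, and the Gaussian data are illustrative.

```python
import numpy as np

def design_codebook(train, n_codes, iters=20):
    """Plain Lloyd/k-means codebook design as a stand-in for the clustering algorithm."""
    cb = train[np.random.default_rng(8).choice(len(train), n_codes, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((train[:, None] - cb[None]) ** 2).sum(-1), axis=1)
        for k in range(n_codes):
            if np.any(labels == k):
                cb[k] = train[labels == k].mean(axis=0)
    return cb

def time_averaged_distortion(data, cb):
    """Average nearest-codevector squared error over the data set."""
    return ((data[:, None] - cb[None]) ** 2).sum(-1).min(axis=1).mean()

rng = np.random.default_rng(9)
train = rng.standard_normal((4096, 2))
codebook = design_codebook(train, n_codes=256)
itsd = time_averaged_distortion(train, codebook)                            # inside-training-set distortion
otsd = time_averaged_distortion(rng.standard_normal((256, 2)), codebook)    # test set as large as the codebook
print(round(float(itsd), 4), round(float(otsd), 4))
```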
