• Title/Summary/Keyword: encoding complexity

Complexity Reduction Algorithm of Speech Coder(EVRC) for CDMA Digital Cellular System

  • Min, So-Yeon
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.12
    • /
    • pp.1551-1558
    • /
    • 2007
  • The main criteria for evaluating a speech coder for mobile telecommunication are channel capacity, noise immunity, encryption, complexity, and encoding delay. This study presents a complexity-reduction algorithm for the CDMA (Code Division Multiple Access) mobile telecommunication system that preserves the existing advantages of speech quality and low transmission rate. The objective is to reduce computational complexity by making the frequency search band non-uniform during the conversion of LPC (Linear Predictive Coding) coefficients into LSP (Line Spectrum Pairs) parameters in the EVRC (Enhanced Variable-Rate Coder, IS-127) speech coder. Experimental results showed that, compared with the existing EVRC speech coder, the coder using the proposed algorithm reduces this computation by 45% on average, while the LSP parameters, synthesized speech signal, and spectrogram test results were identical to those of the existing method.

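The core idea above, converting LPC coefficients to LSPs while restricting the root search to a non-uniform frequency grid, can be illustrated with a minimal Python sketch. This is not the EVRC (IS-127) reference implementation; the grid construction and the `lpc_to_lsp` helper below are illustrative assumptions showing only how a sparser grid in selected bands reduces the number of polynomial evaluations.

```python
import numpy as np

def lpc_to_lsp(a, grid):
    """Locate LSP frequencies as unit-circle roots of the symmetric (P) and
    antisymmetric (Q) polynomials formed from LPC coefficients a = [1, a1, ..., ap].
    Only the frequencies in `grid` (radians, ascending) are evaluated, so a
    sparser grid means fewer operations."""
    p = len(a) - 1
    az = np.append(a, 0.0)                    # A(z) padded to degree p+1
    ar = np.append(0.0, a[::-1])              # z^-(p+1) * A(1/z)
    P, Q = az + ar, az - ar                   # symmetric / antisymmetric parts
    k = np.arange(p + 2)
    lsps = []
    for poly, part in ((P, np.real), (Q, np.imag)):
        # e^{jw(p+1)/2} * poly(e^{jw}) is purely real for P and purely
        # imaginary for Q, so its sign changes mark roots on the unit circle.
        vals = np.array([part(np.exp(1j * w * (p + 1) / 2)
                              * np.sum(poly * np.exp(-1j * w * k))) for w in grid])
        for i in np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:])):
            # linear interpolation between the grid points bracketing the root
            w0, w1, v0, v1 = grid[i], grid[i + 1], vals[i], vals[i + 1]
            lsps.append(w0 + (w1 - w0) * v0 / (v0 - v1))
    return np.sort(lsps)

# Hypothetical non-uniform grid: dense below pi/4 (about 1 kHz at 8 kHz
# sampling), coarse above; endpoints avoid the trivial roots at 0 and pi.
fs = 8000.0
grid = np.concatenate([np.linspace(0.02, np.pi / 4, 96, endpoint=False),
                       np.linspace(np.pi / 4, np.pi - 0.02, 64)])
a = np.array([1.0, -1.2, 0.8, -0.3])           # toy 3rd-order LPC vector
print(lpc_to_lsp(a, grid) * fs / (2 * np.pi))  # LSPs in Hz
```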

New filter design to replace the post and perceptual weighting filter of transcoder and performance evaluation (상호부호화기의 후처리 필터와 인지가중 필터를 대신하는 새로운 필터 설계 및 성능 평가)

  • 최진규;윤성완;강홍구;윤대희
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.2232-2235
    • /
    • 2003
  • In speech communication systems where two different speech codecs must interoperate, a transcoding algorithm is a good approach because of its low complexity and improved synthesized speech quality. This paper proposes an efficient method to further improve the performance of transcoding algorithms while also reducing their complexity. In conventional transcoding algorithms, a post-filter and a perceptual weighting filter must be operated sequentially because both decoding and encoding processes are needed, which results in redundant processing in terms of both complexity and perceptual quality. Using the fact that the two filter structures are similar, we replace them with a single filter. The proposed algorithm requires 72.8% lower complexity than the conventional transcoding algorithm when only the filtering processes are compared. The results of both objective and subjective tests verify that the proposed algorithm yields slightly better quality than the conventional one.

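Both the perceptual weighting filter and the short-term post-filter in CELP-type coders are commonly built from the same LPC polynomial with bandwidth-expansion factors, a form like A(z/g1)/A(z/g2). The sketch below is only a rough illustration, under that assumed pole-zero form, of how a transcoder could run one combined filter instead of cascading the decoder's post-filter and the re-encoder's weighting filter; the expansion factors chosen here are illustrative, not those derived in the paper.

```python
import numpy as np
from scipy.signal import lfilter

def bw_expand(a, gamma):
    """Scale LPC coefficients a = [1, a1, ..., ap] to those of A(z/gamma)."""
    return a * (gamma ** np.arange(len(a)))

def cascaded(speech, a, g_post=(0.75, 0.9), g_wgt=(0.9, 0.6)):
    """Conventional path: post-filter at the decoder, then perceptual
    weighting at the re-encoder (two separate filtering passes)."""
    y = lfilter(bw_expand(a, g_post[0]), bw_expand(a, g_post[1]), speech)
    return lfilter(bw_expand(a, g_wgt[0]), bw_expand(a, g_wgt[1]), y)

def combined(speech, a, g_num=0.7, g_den=0.55):
    """Hypothetical single filter of the same A(z/g1)/A(z/g2) form that
    stands in for the cascade: one filtering pass instead of two."""
    return lfilter(bw_expand(a, g_num), bw_expand(a, g_den), speech)

# Toy usage with a stable LPC vector and white-noise "speech".
rng = np.random.default_rng(0)
a = np.array([1.0, -1.3, 0.9, -0.4])
x = rng.standard_normal(160)                  # one 20 ms frame at 8 kHz
print(cascaded(x, a)[:5])
print(combined(x, a)[:5])
```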

Fixed-Complexity Sphere Encoder for Multi-User MIMO Systems

  • Mohaisen, Manar;Chang, Kyung-Hi
    • Journal of Communications and Networks
    • /
    • v.13 no.1
    • /
    • pp.63-69
    • /
    • 2011
  • In this paper, we propose a fixed-complexity sphere encoder (FSE) for multi-user multi-input multi-output (MU-MIMO) systems. The proposed FSE accomplishes a scalable tradeoff between performance and complexity. Also, because it has a parallel tree-search structure, the proposed encoder can be easily pipelined, leading to a tremendous reduction in the precoding latency. The complexity of the proposed encoder is also analyzed, and we propose two techniques that reduce it. Simulation and analytical results demonstrate that in a 4×4 MU-MIMO system, the proposed FSE requires only 11.5% of the computational complexity needed by the conventional QR decomposition with M-algorithm encoder (QRDM-E). Also, the encoding throughput of the proposed encoder is 7.5 times that of the QRDM-E with tolerable degradation in the BER performance, while achieving the optimum diversity order.

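The fixed-complexity search described above can be sketched generically: fully expand a small candidate set at the first tree level and extend each branch by simple rounding at all remaining levels, so every branch has the same data-independent cost and the branches can run in parallel. The sketch below applies that structure to a vector-perturbation-style integer search, minimizing ||R(t + tau*l)||^2 over integer vectors l with R upper triangular; it illustrates the fixed-complexity tree search only and is not the authors' encoder.

```python
import numpy as np

def fse_perturbation(R, t, tau=4, n_cand=3):
    """Fixed-complexity search for an integer perturbation vector l making
    ||R (t + tau*l)||^2 small.  R is upper triangular (K x K).  The last
    level is fully expanded with n_cand integer candidates; every lower
    level is extended by a single rounding step, so the number of visited
    nodes is fixed at n_cand * K regardless of the data."""
    K = len(t)
    center = -t[K - 1] / tau                  # full expansion at the root level
    base = int(np.round(center))
    roots = base + np.arange(n_cand) - n_cand // 2
    best_l, best_cost = None, np.inf
    for cand in roots:                        # independent branches: parallelizable
        l = np.zeros(K)
        l[K - 1] = cand
        for i in range(K - 2, -1, -1):        # single (rounding) expansion below
            interference = R[i, i + 1:] @ (t[i + 1:] + tau * l[i + 1:])
            l[i] = np.round(-(R[i, i] * t[i] + interference) / (tau * R[i, i]))
        cost = np.sum((R @ (t + tau * l)) ** 2)
        if cost < best_cost:
            best_l, best_cost = l, cost
    return best_l, best_cost

# Toy 4x4 example.
rng = np.random.default_rng(1)
R = np.triu(rng.standard_normal((4, 4)))
t = rng.standard_normal(4)
print(fse_perturbation(R, t))
```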
Fast PU Decision Method Using Coding Information of Co-Located Sub-CU in Upper Depth for HEVC (상위깊이의 Sub-CU 부호화 정보를 이용한 HEVC의 고속 PU 결정 기법)

  • Jang, Jae-Kyu;Choi, Ho-Youl;Kim, Jae-Gon
    • Journal of Broadcast Engineering
    • /
    • v.20 no.2
    • /
    • pp.340-347
    • /
    • 2015
  • HEVC (High Efficiency Video Coding) achieves high coding efficiency by employing a quadtree-based coding unit (CU) partitioning structure and various prediction units (PUs), with the best CU partition structure and PU mode determined by rate-distortion (R-D) cost. However, the computational complexity of encoding also increases dramatically. In this paper, to reduce this encoding complexity, we propose three fast PU mode decision methods based on the encoding information of the upper depth. In the first method, the PU mode search of the current CU is terminated early based on the sub-CBF (Coded Block Flag) of the upper depth. In the second method, the search of intra prediction modes of the PUs in the current CU is skipped based on the sub-intra R-D cost of the upper depth. In the last method, the search of intra prediction modes of the PUs in the lower-depth CUs is skipped based on the sub-CBF of the current depth's CU. Experimental results show that the three proposed methods reduce the computational complexity of HM 14.0 by 31.4%, 2.5%, and 23.4% with BD-rate increases of 1.2%, 0.11%, and 0.9%, respectively. The three methods can also be combined and applied to both inter and intra prediction, which results in a complexity reduction of 34.2% with a 1.9% BD-rate increase.

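The early-termination rules above reduce, roughly, to checks on coding information already produced at the upper depth. A minimal sketch of that control flow follows; the `cu` fields, mode names, and decision conditions are hypothetical placeholders standing in for HM 14.0 internals, and the comparisons are simplified relative to the paper's criteria.

```python
def decide_pu_modes(cu):
    """Illustrative fast PU decision driven by upper-depth information.

    cu.upper_sub_cbf      : CBF of the co-located sub-CU at the upper depth
    cu.upper_sub_intra_rd : intra R-D cost of that sub-CU
    cu.upper_sub_inter_rd : inter R-D cost of that sub-CU
    Returns the list of PU modes that still have to be searched.
    """
    modes = ["SKIP", "2Nx2N", "2NxN", "Nx2N", "NxN", "INTRA"]

    # Rule 1: if the upper-depth sub-CU produced no coded coefficients
    # (sub-CBF == 0), the block is already well predicted, so terminate the
    # PU-mode search early and keep only the cheap merge/skip candidates.
    if cu.upper_sub_cbf == 0:
        return ["SKIP", "2Nx2N"]

    # Rule 2: if inter prediction clearly beat intra at the upper depth,
    # skip the intra PU search for the current CU.
    if cu.upper_sub_intra_rd > cu.upper_sub_inter_rd:
        modes.remove("INTRA")

    return modes


class _CU:                       # tiny stand-in object for the example
    upper_sub_cbf = 1
    upper_sub_intra_rd = 120.0
    upper_sub_inter_rd = 85.0

print(decide_pu_modes(_CU()))    # intra search skipped in this toy case
```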
High Bit-Rates Quantization of the First-Order Markov Process Based on a Codebook-Constrained Sample-Adaptive Product Quantizers (부호책 제한을 가지는 표본 적응 프로덕트 양자기를 이용한 1차 마르코프 과정의 고 전송률 양자화)

  • Kim, Dong-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.1
    • /
    • pp.19-30
    • /
    • 2012
  • For digital data compression, quantization is the main part of lossy source coding. To improve quantization performance, a vector quantizer (VQ) can be employed; however, the encoding complexity increases exponentially as the vector dimension or bit rate grows. Much research has been conducted to alleviate these problems of VQ. Especially for high bit rates, a constrained VQ called the sample-adaptive product quantizer (SAPQ) has been proposed to reduce the huge encoding complexity of regular VQs. SAPQ has a structure very similar to the product VQ (PQ), but its quantizer performance can be better than the PQ case, and its encoding complexity and codebook memory requirement are lower than those of a regular full-search VQ. Among SAPQs, 1-SAPQ has a simple quantizer structure in which each product codebook is symmetric with respect to the diagonal line of the underlying vector space, and it is known to perform well for i.i.d. sources. This paper studies the design of 1-SAPQ for the first-order Markov process. For an efficient design of 1-SAPQ, an algorithm for the initial codebook is proposed, and numerical analysis shows that 1-SAPQ yields lower quantizer distortion than a VQ of similar encoding complexity and achieves distortion close to that of the DPCM (differential pulse code modulation) scheme with the Lloyd-Max quantizer.

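To make the complexity trade-off above concrete, the sketch below implements a generic sample-adaptive product quantizer: the encoder keeps m small product codebooks, tries each one on the input vector, and transmits the index of the best codebook together with the per-component indices. It is a simplified illustration of the SAPQ idea and of why its search cost sits between a product quantizer and a full-search VQ; it is not the 1-SAPQ design procedure from the paper.

```python
import numpy as np

def sapq_encode(x, codebooks):
    """Sample-adaptive product quantization of one k-dimensional vector.

    codebooks : array of shape (m, k, n), i.e. m candidate product codebooks,
                each holding n scalar reproduction levels per dimension.
    Returns (best codebook index, per-dimension level indices, distortion).
    """
    best = (None, None, np.inf)
    for j, cb in enumerate(codebooks):            # adaptive part: try each set
        # product-quantizer part: each dimension is quantized independently
        idx = np.array([np.argmin((cb[d] - x[d]) ** 2) for d in range(len(x))])
        dist = np.sum((cb[np.arange(len(x)), idx] - x) ** 2)
        if dist < best[2]:
            best = (j, idx, dist)
    return best

# Toy example: m = 4 codebooks, k = 4 dimensions, n = 8 levels each.
rng = np.random.default_rng(2)
codebooks = rng.standard_normal((4, 4, 8))
x = rng.standard_normal(4)
j, idx, dist = sapq_encode(x, codebooks)
print(j, idx, round(dist, 3))
```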
Distortion Variation Minimization in low-bit-rate Video Communication

  • Park, Sang-Hyun
    • Journal of information and communication convergence engineering
    • /
    • v.5 no.1
    • /
    • pp.54-58
    • /
    • 2007
  • A real-time frame-layer rate control algorithm with a token-bucket traffic shaper is proposed for distortion-variation minimization. The proposed rate control method uses a non-iterative optimization method for low computational complexity and performs bit allocation at the frame level to minimize both the average distortion over an entire sequence and the variation in distortion between frames. The proposed algorithm introduces no additional encoding delay and is suitable for real-time, low-complexity video encoders. Experimental results indicate that the proposed method provides better visual and PSNR performance than the existing rate control method.

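As a rough illustration of the mechanism described above, the sketch below combines a token-bucket traffic shaper with a non-iterative frame-level bit allocation: each frame's target is the average per-frame budget, nudged toward the running complexity mean to damp frame-to-frame distortion variation, and then clipped by the tokens available. The update rule and constants are illustrative assumptions, not the paper's derivation.

```python
def allocate_bits(frames, rate_bps, fps, bucket_size, alpha=0.5):
    """Token-bucket constrained frame-level bit allocation (illustrative).

    frames      : list of per-frame complexity estimates (e.g. MAD values)
    rate_bps    : channel rate feeding the token bucket
    bucket_size : maximum burst size in bits
    alpha       : how strongly allocation follows frame complexity; a smaller
                  alpha flattens the allocation and hence distortion variation
    """
    tokens = bucket_size / 2.0
    fill_per_frame = rate_bps / fps
    mean_c = sum(frames) / len(frames)
    targets = []
    for c in frames:
        tokens = min(tokens + fill_per_frame, bucket_size)   # token arrival
        # Non-iterative target: average budget scaled by damped relative complexity.
        target = fill_per_frame * (1.0 + alpha * (c - mean_c) / mean_c)
        target = max(0.0, min(target, tokens))               # shaper constraint
        tokens -= target                                      # tokens consumed
        targets.append(target)
    return targets

# 10 frames of varying complexity at 64 kbps, 10 fps, 16 kbit bucket.
print([round(t) for t in allocate_bits(
    [3.0, 3.2, 2.8, 6.0, 5.5, 3.0, 2.9, 3.1, 4.0, 3.0], 64000, 10, 16000)])
```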
Distributed Video Coding Based on Selective Block Encoding Using Feedback of Motion Information (움직임 정보의 피드백을 갖는 선택적 블록 부호화에 기초한 분산 비디오 부호화 기법)

  • Kim, Jin-Soo;Kim, Jae-Gon;Seo, Kwang-Deok;Lee, Myeong-Jin
    • Journal of Broadcast Engineering
    • /
    • v.15 no.5
    • /
    • pp.642-652
    • /
    • 2010
  • Recently, DVC (Distributed Video Coding) techniques have drawn a lot of interest as a future research direction for achieving low-complexity encoding in various applications. However, because of the limited computational resources at the encoder, the performance of DVC algorithms is inferior to that of conventional standardized video coders, which use zig-zag scan, run-length coding, entropy coding, and skipped macroblocks. In this paper, to overcome this performance limit of the DVC system, the distortion of every block is estimated when side information is generated at the decoder, and we propose a new selective block encoding scheme that feeds the motion information of the highly distorted blocks back to the encoder so that the sender can encode the motion-compensated frame difference signal for those blocks. Computer simulations show that the coding efficiency of the proposed scheme approaches that of the conventional inter-frame coding scheme.

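A toy sketch of the feedback loop described above: the decoder measures how poor its side information is per block, feeds back requests (with motion information) for the worst blocks, and the encoder then codes the motion-compensated difference only for those blocks. The block size, the distortion proxy, the threshold, and the helper names below are all illustrative assumptions.

```python
import numpy as np

BLOCK = 8

def blocks(frame):
    """Yield (row, col, 8x8 tile) blocks of a frame."""
    for r in range(0, frame.shape[0], BLOCK):
        for c in range(0, frame.shape[1], BLOCK):
            yield r, c, frame[r:r + BLOCK, c:c + BLOCK]

def decoder_feedback(side_info, decoded_key, threshold=100.0):
    """Decoder side: estimate per-block side-information quality (here simply
    the MSE against the previous decoded key frame) and request the blocks
    whose estimated distortion exceeds the threshold."""
    requests = []
    for r, c, blk in blocks(side_info):
        mse = np.mean((blk - decoded_key[r:r + BLOCK, c:c + BLOCK]) ** 2)
        if mse > threshold:
            requests.append((r, c))           # fed back to the encoder
    return requests

def encoder_residuals(current, motion_compensated, requests):
    """Encoder side: for the requested blocks only, form the
    motion-compensated frame-difference signal to be coded and sent."""
    return {(r, c): current[r:r + BLOCK, c:c + BLOCK]
                    - motion_compensated[r:r + BLOCK, c:c + BLOCK]
            for (r, c) in requests}

rng = np.random.default_rng(3)
key = rng.integers(0, 255, (32, 32)).astype(float)
cur = key + rng.normal(0, 5, key.shape)       # toy "current" frame
side = key + rng.normal(0, 12, key.shape)     # toy side information
req = decoder_feedback(side, key)
res = encoder_residuals(cur, key, req)
print(len(req), "blocks requested; residual energy:",
      round(sum(float(np.sum(v ** 2)) for v in res.values()), 1))
```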
A Fast Encoding Algorithm for Image Vector Quantization Based on Prior Test of Multiple Features (복수 특징의 사전 검사에 의한 영상 벡터양자화의 고속 부호화 기법)

  • Ryu Chul-hyung;Ra Sung-woong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.12C
    • /
    • pp.1231-1238
    • /
    • 2005
  • This paper presents a new fast encoding algorithm for image vector quantization that incorporates the partial distances of multiple features into a multidimensional look-up table (LUT). Although earlier methods also use multiple features, they handle them step by step in the search order and the distance calculation, whereas the proposed algorithm exploits these features simultaneously through the LUT. The paper describes how to build the LUT with consideration of the boundary effect for a feasible memory cost, and how to terminate the current search using the partial distances stored in the LUT. Simulation results confirm the effectiveness of the proposed algorithm. When the codebook size is 256, the computational complexity of the proposed algorithm can be reduced by up to 70% of the operations required by recently proposed alternatives such as the ordered Hadamard transform partial distance search (OHTPDS) and the modified L2-norm pyramid (M-L2NP). With feasible preprocessing time and memory cost, the proposed algorithm reduces the computational complexity to below 2.2% of that required by the exhaustive full search (EFS) algorithm while preserving the same encoding quality as the EFS algorithm.

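The key mechanism above, rejecting codewords early from partial distances of precomputed features, can be shown with a small sketch: each codeword's features (here, mean and L2 norm) are tabulated in advance, and during the search a codeword is discarded as soon as a lower bound on its distance exceeds the best distance found so far, with a partial distance search on the survivors. The feature choice and the simple per-codeword table below are illustrative; the paper's multidimensional LUT indexed by quantized features is more elaborate.

```python
import numpy as np

def build_feature_table(codebook):
    """Precompute per-codeword features used for early rejection."""
    return {"mean": codebook.mean(axis=1),
            "norm": np.linalg.norm(codebook, axis=1),
            "k": codebook.shape[1]}

def fast_vq_encode(x, codebook, table):
    """Full-search-equivalent VQ encoding with feature-based early rejection
    followed by partial distance search (PDS) on the surviving codewords."""
    k = table["k"]
    x_mean, x_norm = x.mean(), np.linalg.norm(x)
    best_i, best_d = -1, np.inf
    for i, c in enumerate(codebook):
        # Test 1: k*(mean difference)^2 is a lower bound on ||x - c||^2.
        if k * (x_mean - table["mean"][i]) ** 2 >= best_d:
            continue
        # Test 2: (norm difference)^2 is another lower bound on ||x - c||^2.
        if (x_norm - table["norm"][i]) ** 2 >= best_d:
            continue
        # Partial distance search: abandon the sum once it passes best_d.
        d = 0.0
        for j in range(k):
            d += (x[j] - c[j]) ** 2
            if d >= best_d:
                break
        else:
            best_i, best_d = i, d
    return best_i, best_d

rng = np.random.default_rng(4)
codebook = rng.standard_normal((256, 16))     # 256 codewords, 16-dim vectors
x = rng.standard_normal(16)
print(fast_vq_encode(x, codebook, build_feature_table(codebook)))
```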
Efficient Partial Parallel Encoders for IRA Codes in DVB-S2 (DVB-S2 IRA Code를 위한 최적 부호화 방법)

  • Hwang, Sung-Oh;Lee, Jai-Yong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.11C
    • /
    • pp.901-906
    • /
    • 2010
  • Low-density parity-check (LDPC) codes, first introduced by Gallager and rediscovered by MacKay et al., have attracted researchers' interest mainly because of their performance and low decoding complexity. Remarkably, their performance approaches the Shannon capacity limit when the codeword length is long and an iterative decoder is used. However, compared with the turbo codes widely used in current mobile communication, the encoding complexity of LDPC codes has been regarded as a drawback. This paper proposes a solution for the DVB-S2 LDPC encoder that reduces encoder latency. We use a fast IRA encoder that transforms the parity-check matrix into a block-wise form and applies partial parallel processing to reduce the number of system clock cycles required for IRA encoding. We compare the proposed encoder with the current DVB-S2 encoder and show that the proposed scheme performs better.

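The step that partial parallel processing targets is, at its core, the accumulator of an IRA code: after the information bits are combined according to the parity-check matrix, each parity bit is the running XOR of the previous parity bit and the corresponding check sum. The sketch below shows that serial recursion and a simple block-wise (partial parallel) variant that computes independent prefix-XORs per block and then propagates a single carry bit between blocks; the matrix here is random, not the DVB-S2 structured matrix, and the block split is an assumption.

```python
import numpy as np

def ira_encode_serial(info_bits, H_info):
    """IRA encoding: check sums s = H_info @ u (mod 2), then the accumulator
    p[i] = p[i-1] XOR s[i] produces the parity bits one after another."""
    s = (H_info @ info_bits) % 2
    p = np.zeros_like(s)
    acc = 0
    for i, si in enumerate(s):
        acc ^= int(si)
        p[i] = acc
    return p

def ira_encode_partial_parallel(info_bits, H_info, n_blocks=4):
    """Same result, but the accumulation is split into blocks: each block's
    internal prefix-XOR can run in parallel, and only a single carry bit
    per block is propagated serially afterwards."""
    s = (H_info @ info_bits) % 2
    parts = np.array_split(s, n_blocks)
    prefixes = [np.bitwise_xor.accumulate(b) for b in parts]   # parallelizable
    carry, out = 0, []
    for pref in prefixes:                      # short serial fix-up pass
        out.append(pref ^ carry)
        carry ^= int(pref[-1])
    return np.concatenate(out)

rng = np.random.default_rng(5)
H_info = rng.integers(0, 2, (16, 8))           # toy information part of H
u = rng.integers(0, 2, 8)
assert np.array_equal(ira_encode_serial(u, H_info),
                      ira_encode_partial_parallel(u, H_info))
print(ira_encode_partial_parallel(u, H_info))
```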
Fast Distributed Video Coding using Parallel LDPCA Encoding (병렬 LDPCA 채널코드 부호화 방법을 사용한 고속 분산비디오부호화)

  • Park, Jong-Bin;Jeon, Byeung-Woo
    • Journal of Broadcast Engineering
    • /
    • v.16 no.1
    • /
    • pp.144-154
    • /
    • 2011
  • In this paper, we propose a parallel LDPCA encoding method for fast transform-domain Wyner-Ziv video encoding, which is suitable for ultra-fast, low-power video encoding. Conventional transform-domain Wyner-Ziv video encoding performs LDPCA channel coding of the quantized transform coefficients bitplane by bitplane, which takes about 60% of the total encoding time, and this computational burden becomes more severe as the bitrate increases. The proposed method binds several bitplanes into one packed message and carries out the LDPCA encoding in parallel, improving the encoding speed by 8 to 55 times. In the experiments, the proposed Wyner-Ziv encoder can encode 700 to 2,300 QCIF frames per second with GOP = 64. The method can also be applied to pixel-domain Wyner-Ziv encoders using LDPCA, and it has a wide scope of application.
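The speed-up described above comes from processing several bitplanes with one pass of XOR operations: if each symbol's bitplanes are packed into a single integer, the syndrome accumulation for all bitplanes happens simultaneously in the word-level XOR. The sketch below shows that packing trick for a generic sparse parity-check matrix; it illustrates the bitwise-parallel idea only and is not the LDPCA accumulator used in the codec.

```python
import numpy as np

def pack_bitplanes(bitplanes):
    """Pack a (num_bitplanes, n) 0/1 array into one integer per position,
    with bit b of word i holding bitplane b of symbol i."""
    packed = np.zeros(bitplanes.shape[1], dtype=np.int64)
    for b, plane in enumerate(bitplanes):
        packed |= plane.astype(np.int64) << b
    return packed

def syndromes_packed(H_rows, packed):
    """Syndrome s = H x (mod 2), computed for every bitplane at once:
    XOR-ing packed words XORs all bitplanes in parallel."""
    out = np.zeros(len(H_rows), dtype=np.int64)
    for i, row in enumerate(H_rows):           # row = column indices of the 1s
        acc = 0
        for j in row:
            acc ^= int(packed[j])
        out[i] = acc
    return out

def unpack(words, num_bitplanes):
    """Split packed syndrome words back into per-bitplane 0/1 arrays."""
    return np.array([(words >> b) & 1 for b in range(num_bitplanes)])

rng = np.random.default_rng(6)
n, m, planes = 32, 16, 6
bitplanes = rng.integers(0, 2, (planes, n))    # 6 bitplanes of 32 symbols
H_rows = [sorted(rng.choice(n, 4, replace=False)) for _ in range(m)]
packed = pack_bitplanes(bitplanes)
par = syndromes_packed(H_rows, packed)
serial = np.array([[np.bitwise_xor.reduce(bp[row]) for row in H_rows]
                   for bp in bitplanes])
assert np.array_equal(unpack(par, planes), serial)   # same result, one pass
print(unpack(par, planes))
```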