• Title/Summary/Keyword: complexity reduction


Two New Types of Candidate Symbol Sorting Schemes for Complexity Reduction of a Sphere Decoder

  • Jeon, Eun-Sung;Kim, Yo-Han;Kim, Dong-Ku
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.9C
    • /
    • pp.888-894
    • /
    • 2007
  • The computational complexity of a sphere decoder (SD) is conventionally reduced by a decoding-order scheme that sorts candidate symbols in ascending order of their Euclidean distance from the output of a zero-forcing (ZF) receiver. However, since the ZF output may not be a reliable sorting reference, we propose two sorting schemes that allow faster decoding. The first uses the lattice points newly found in the previous search round instead of the ZF output (Type I). Since these lattice points are closer to the received signal than the ZF output, they serve as a more reliable sorting reference for finding the maximum likelihood (ML) solution. The second scheme sorts candidate symbols in descending order of the number of candidate symbols in the following layer, which are called child symbols (Type II). Both proposed sorting schemes can be combined with layer sorting for further complexity reduction. Through simulation, the Type I and Type II sorting schemes were found to provide 12% and 20% complexity reduction, respectively, over conventional sorting schemes. When combined with layer sorting, Type I and Type II provide an additional 10-15% complexity reduction while maintaining detection performance.
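As a rough illustration of the role the sorting reference plays, the sketch below (my own minimal example, not the paper's implementation) sorts a candidate set in ascending order of squared Euclidean distance from a given reference point. The conventional scheme would pass the ZF output as the reference; the Type I scheme would instead pass the latest lattice point found in the previous search round.

```python
import numpy as np

def sort_candidates(candidates, reference):
    """Sort candidate symbols in ascending order of squared
    Euclidean distance from the given reference point."""
    dist = [abs(c - reference) ** 2 for c in candidates]
    order = np.argsort(dist)
    return [candidates[i] for i in order]

# 4-QAM candidate set and a hypothetical reference point
cands = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
print(sort_candidates(cands, 0.8 + 0.9j))  # nearest candidate first
```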

Complexity Reduction Scheme for Lattice Reduction-based MIMO Receiver under Time-Varying Fading Environments

  • Kim, Han-Nah;Choi, Kwon-Hue
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.11A
    • /
    • pp.852-861
    • /
    • 2009
  • We propose a complexity-reduced lattice reduction (LR) scheme for MIMO detection under time-varying fading environments. It is found that, over successive MIMO transmission instances, the integer matrix P from the LR decomposition remains the same or only a few of its elements change slightly. Based on this feature, we perform LR by initializing the P matrix for the decomposition with the one obtained in the previous instance, rather than starting from the identity matrix. Simulation results reveal that the proposed scheme drastically reduces the overall complexity of LR compared to the conventional scheme for various system parameters under time-varying channels. We also show that the proposed scheme can be applied to Seysen's LR as well as LLL (Lenstra, Lenstra, and Lovász) LR.
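The warm-start idea can be sketched with a textbook LLL reduction that tracks the unimodular transform P. The function names and the composition in `warm_start_lll` are my own illustration of the abstract's idea, not the authors' code:

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """Minimal textbook LLL reduction of the columns of B.
    Returns the reduced basis and the unimodular integer matrix P
    with B_reduced = B @ P (naive, for illustration only)."""
    B = B.astype(float)
    n = B.shape[1]
    P = np.eye(n, dtype=int)

    def gso(B):  # Gram-Schmidt orthogonalization
        Bs = B.copy()
        mu = np.zeros((n, n))
        for i in range(n):
            for j in range(i):
                mu[i, j] = B[:, i] @ Bs[:, j] / (Bs[:, j] @ Bs[:, j])
                Bs[:, i] = Bs[:, i] - mu[i, j] * Bs[:, j]
        return Bs, mu

    k = 1
    while k < n:
        Bs, mu = gso(B)
        for j in range(k - 1, -1, -1):          # size reduction
            q = int(round(mu[k, j]))
            if q:
                B[:, k] -= q * B[:, j]
                P[:, k] -= q * P[:, j]
                Bs, mu = gso(B)
        if Bs[:, k] @ Bs[:, k] >= (delta - mu[k, k - 1] ** 2) * (Bs[:, k - 1] @ Bs[:, k - 1]):
            k += 1                              # Lovász condition holds
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]  # swap columns, step back
            P[:, [k - 1, k]] = P[:, [k, k - 1]]
            k = max(k - 1, 1)
    return B, P

def warm_start_lll(H_new, P_prev):
    """Warm start: reduce H_new @ P_prev instead of H_new itself and
    compose the transforms, exploiting the observation that P changes
    little between successive transmission instances."""
    B, P_inc = lll_reduce(H_new @ P_prev)
    return B, P_prev @ P_inc
```

When the channel changes only slightly, `H_new @ P_prev` is already close to reduced, so the warm-started call typically exits after few (or no) column swaps.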

STRUCTURED CODEWORD SEARCH FOR VECTOR QUANTIZATION

  • 우홍체
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2000.11a
    • /
    • pp.467-470
    • /
    • 2000
  • Vector quantization (VQ) is widely used in many high-quality, high-rate data compression applications such as speech, audio, image, and video coding. When the size of a VQ codebook is large, the computational complexity of the full codeword search method is a significant problem for many applications. A number of complexity reduction algorithms have been proposed and investigated, using properties of the codebook such as the triangle inequality. This paper proposes a new structured VQ search algorithm based on a multi-stage structure for finding the best codeword. Even with only two stages, a significant complexity reduction can be obtained without any loss of quality.
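A minimal two-stage search sketch, under my own assumptions (a randomly filled codebook partitioned into fixed groups; the paper's structure is designed, not arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: 256 codewords of dimension 8, organized
# into 16 groups of 16 codewords (a two-stage structure).
codebook = rng.standard_normal((256, 8))
groups = codebook.reshape(16, 16, 8)
coarse = groups.mean(axis=1)          # stage 1: one centroid per group

def full_search(x):
    """Exhaustive search: 256 distance computations."""
    return int(np.argmin(((codebook - x) ** 2).sum(axis=1)))

def two_stage_search(x):
    """Stage 1 picks the nearest group centroid (16 distances),
    stage 2 searches only inside that group (16 distances):
    32 distance computations instead of 256."""
    g = int(np.argmin(((coarse - x) ** 2).sum(axis=1)))
    j = int(np.argmin(((groups[g] - x) ** 2).sum(axis=1)))
    return g * 16 + j
```

With an arbitrary codebook the two-stage result can differ from the exhaustive one; the paper's point is that a codebook built around the multi-stage structure avoids this quality loss.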


A Penalized Likelihood Method for Model Complexity

  • Ahn, Sung M.
    • Communications for Statistical Applications and Methods
    • /
    • v.8 no.1
    • /
    • pp.173-184
    • /
    • 2001
  • We present an algorithm for the complexity reduction of a general Gaussian mixture model using a penalized likelihood method. One of our important assumptions is that we begin with a model that is overfitted in terms of the number of components, so our main goal is to eliminate the redundant components of the overfitted model. As shown in the simulation results section, the algorithm works well with the selected densities.
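A minimal sketch of the component-elimination idea, under my own choice of penalty (a Dirichlet-type penalty that subtracts a fixed number of "virtual samples" per component, in the spirit of penalized mixture fitting; not necessarily the paper's exact formulation):

```python
import numpy as np

def prune_weights(effective_counts, c):
    """One penalized M-step for the mixture weights: the penalty
    subtracts c 'virtual samples' from each component's effective
    count; components whose count is driven to zero are eliminated
    and the remaining weights are renormalized."""
    w = np.maximum(effective_counts - c, 0.0)
    keep = w > 0
    w = w[keep] / w[keep].sum()
    return w, keep

# Overfitted 4-component model in which two components capture
# almost no data:
counts = np.array([480.0, 460.0, 30.0, 30.0])
w, keep = prune_weights(counts, c=50.0)
print(keep)   # the two redundant components are eliminated
```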


A Modified PTS Algorithm for PAPR Reduction in OFDM Signal

  • Kim, Jeong-Goo;Wu, Xiaojun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.3C
    • /
    • pp.163-169
    • /
    • 2011
  • The partial transmit sequence (PTS) algorithm is known as one of the most efficient ways to reduce the peak-to-average power ratio (PAPR) in orthogonal frequency division multiplexing (OFDM) systems. The PTS algorithm, however, requires a large amount of computation to implement, so there is a trade-off between PAPR reduction performance and computational complexity. In this paper, the PAPR reduction performance and computational complexity of PTS algorithms are analyzed and compared through computer simulations. Subsequently, a new PTS algorithm is proposed which is a reasonable method for reducing the PAPR of OFDM when both PAPR reduction performance and computational complexity are considered simultaneously.
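For reference, the baseline PTS scheme the paper modifies can be sketched as follows (a minimal version with contiguous subblocks and binary phase factors; parameter choices are mine):

```python
import numpy as np
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts(X, V=4):
    """Split the N subcarriers into V contiguous subblocks, IFFT each
    partial sequence once, then search the phase factors in {+1, -1}
    for the combination with the lowest PAPR."""
    N = len(X)
    parts = []
    for v in range(V):
        Xv = np.zeros(N, dtype=complex)
        Xv[v * N // V:(v + 1) * N // V] = X[v * N // V:(v + 1) * N // V]
        parts.append(np.fft.ifft(Xv))
    candidates = (sum(b * p for b, p in zip(phases, parts))
                  for phases in product([1, -1], repeat=V))
    return min(candidates, key=papr_db)

rng = np.random.default_rng(1)
X = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=64)  # QPSK
x_plain = np.fft.ifft(X)
x_pts = pts(X)   # never worse than x_plain: all-(+1) phases reproduce it
```

The 2^V candidate search is exactly the computational burden the abstract refers to; the modified algorithm trades some of that search effort against PAPR performance.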

A Study of HOQ Complexity Reduction by Quantification Method of Type III

  • 이형규;이상복
    • Journal of Korean Society for Quality Management
    • /
    • v.31 no.2
    • /
    • pp.131-142
    • /
    • 2003
  • QFD (Quality Function Deployment) is a design method focused on guaranteeing the quality and functions that satisfy customer needs. QFD is used throughout manufacturing, especially in new product development and design. HOQ (House of Quality) is an important tool of QFD, which implements complex functions and customer communications. In practice, implementation of the HOQ is difficult because of its size; it is well known that the complexity of the HOQ increases exponentially with its size. In this paper, we study HOQ complexity reduction by Quantification Method of Type III. This method is efficient and minimizes the loss of information when reducing the HOQ. We give an example and show that our suggested method performs better than other methods.
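Quantification Method of Type III is closely related to correspondence analysis, so the reduction idea can be sketched as scoring the HOQ columns by the leading singular vector of the standardized relationship matrix: columns with similar response patterns get similar scores and are candidates for merging. The matrix below is a hypothetical example, and this is only one common formulation, not necessarily the paper's exact procedure.

```python
import numpy as np

# Hypothetical HOQ relationship matrix: rows = customer needs,
# columns = technical characteristics, entries = 0/1/3/9 strengths.
R = np.array([
    [9.0, 9.0, 0.0, 0.0, 1.0],
    [9.0, 3.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 9.0, 9.0, 3.0],
    [1.0, 0.0, 9.0, 3.0, 0.0],
])

Pm = R / R.sum()                  # correspondence matrix
r = Pm.sum(axis=1)                # row masses
c = Pm.sum(axis=0)                # column masses
S = (Pm - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, s, Vt = np.linalg.svd(S, full_matrices=False)
col_scores = Vt[0] / np.sqrt(c)   # 1-D scores for the columns
print(np.round(col_scores, 2))    # similar columns -> similar scores
```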

Complexity Reduction Method for BSAC Decoder

  • Jeong, Gyu-Hyeok;Ahn, Yeong-Uk;Lee, In-Sung
    • ETRI Journal
    • /
    • v.31 no.3
    • /
    • pp.336-338
    • /
    • 2009
  • This letter proposes a complexity reduction method to speed up the noiseless decoding of a bit-sliced arithmetic coding (BSAC) decoder. The scheme fully utilizes the group of consecutive arithmetic-coded symbols known as the decoding band and a significance tree structure sorted in order of significance at every decoding band. With the same audio quality, the proposed method reduces the number of calculations performed during BSAC noiseless decoding to about 22% of that of the conventional full-search method.


Computationally Efficient Lattice Reduction Aided Detection for MIMO-OFDM Systems under Correlated Fading Channels

  • Liu, Wei;Choi, Kwonhue;Liu, Huaping
    • ETRI Journal
    • /
    • v.34 no.4
    • /
    • pp.503-510
    • /
    • 2012
  • We analyze the relationship between channel coherence bandwidth and two complexity-reduced lattice reduction aided detection (LRAD) algorithms for multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems in correlated fading channels. In both the adaptive LR algorithm and the fixed interval LR algorithm, we exploit the inherent feature of unimodular transformation matrix P that remains the same for the adjacent highly correlated subcarriers. Complexity simulations demonstrate that the adaptive LR algorithm could eliminate up to approximately 90 percent of the multiplications and 95 percent of the divisions of the brute-force LR algorithm with large coherence bandwidth. The results also show that the adaptive algorithm with both optimum and globally suboptimum initial interval settings could significantly reduce the LR complexity, compared with the brute-force LR and fixed interval LR algorithms, while maintaining the system performance.
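The reuse idea behind the adaptive algorithm can be sketched as follows. Here `lr_fn` stands for any full LR routine, and the orthogonality-defect test with a fixed threshold is my own illustrative reuse criterion; the paper's algorithms adapt the decision to the channel coherence bandwidth rather than using this check.

```python
import numpy as np

def orth_defect(B):
    """Orthogonality defect: 1.0 for an orthogonal basis, larger as
    the basis gets more skewed."""
    return np.prod(np.linalg.norm(B, axis=0)) / abs(np.linalg.det(B))

def adaptive_lr(H_sc, P_prev, lr_fn, threshold=1.5):
    """Per-subcarrier adaptive LR: keep the neighbouring subcarrier's
    transform P while it still yields a near-orthogonal basis, and
    re-run the full LR routine lr_fn only when the defect exceeds
    the threshold."""
    if orth_defect(H_sc @ P_prev) <= threshold:
        return P_prev, False      # reused: LR skipped for this subcarrier
    B, P = lr_fn(H_sc)
    return P, True                # LR actually executed
```

Within a coherence bandwidth, adjacent subcarriers share nearly the same channel matrix, so the branch that skips `lr_fn` fires for most subcarriers; this is the source of the reported multiplication and division savings.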

Search Range Reduction Algorithm with Motion Vectors of Upper Blocks for HEVC

  • Lee, Kyujoong
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.1
    • /
    • pp.18-25
    • /
    • 2018
  • In High Efficiency Video Coding (HEVC), integer motion estimation (IME) requires a large amount of computation because HEVC adopts highly flexible and hierarchical coding structures. In order to reduce the computational complexity of IME, this paper proposes a search range reduction algorithm that takes advantage of the similarity of motion vectors between different layers. It requires only a few modifications to the HEVC reference software. Based on the experimental results, the proposed algorithm reduces the processing time of IME by 28.1% on average, while its Bjøntegaard delta bitrate (BD-BR) increase is a negligible 0.15%.
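The core of the idea can be sketched as centering a smaller search window on the motion vector of the co-located upper (larger) block. The range constants below are illustrative placeholders, not the paper's tuned values:

```python
def reduced_search_range(upper_mv, default_range=64, reduced_range=8):
    """Center the IME search window on the upper block's motion
    vector and shrink the range; fall back to a full window around
    (0, 0) when no upper-layer MV is available."""
    if upper_mv is None:
        return (0, 0), default_range
    return upper_mv, reduced_range

def search_points(center, r):
    """Enumerate the (2r+1)^2 candidate positions of the window."""
    cx, cy = center
    return [(cx + dx, cy + dy)
            for dx in range(-r, r + 1) for dy in range(-r, r + 1)]

center, r = reduced_search_range((12, -5))   # upper block's MV is known
# 17*17 = 289 candidates instead of 129*129 = 16641
```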

Reduction Method of Computational Complexity for Image Filtering Utilizing the Factorization Theorem

  • Jung, Chan-sung;Lee, Jaesung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.05a
    • /
    • pp.354-357
    • /
    • 2013
  • The filtering algorithm is used very frequently in the preprocessing stage of many image processing algorithms in computer vision. Because video signals are two-dimensional, the computational complexity is very high. To reduce this complexity, separable filters and the factorization theorem are applied to the filtering operation. As a result, a significant reduction in computational complexity is shown to be achieved, although the experimental results may differ slightly depending on the condition of the image.
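The separable-filter part of the abstract can be sketched as follows (my own minimal example): a 2-D kernel that factors as an outer product of two 1-D kernels can be applied as two 1-D passes, cutting the per-pixel multiplications from k² to 2k.

```python
import numpy as np

def filter2d_direct(img, K):
    """Direct 2-D filtering (correlation): k*k multiplies per pixel."""
    k = K.shape[0]
    H, W = img.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + k, j:j + k] * K).sum()
    return out

def filter2d_separable(img, v, h):
    """Separable filtering for K = outer(v, h): filter the rows with h,
    then the columns with v -- 2k multiplies per pixel. Kernels are
    reversed because np.convolve flips its second argument."""
    tmp = np.array([np.convolve(row, h[::-1], mode='valid') for row in img])
    return np.array([np.convolve(col, v[::-1], mode='valid')
                     for col in tmp.T]).T

# Example: a 3x3 smoothing/derivative kernel factors into 1-D parts
v = np.array([1.0, 2.0, 1.0])
h = np.array([1.0, 0.0, -1.0])
K = np.outer(v, h)
img = np.random.default_rng(3).standard_normal((6, 6))
assert np.allclose(filter2d_direct(img, K), filter2d_separable(img, v, h))
```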
