• Title/Abstract/Keyword: computational complexity reduction

Matrix Decomposition for Low Computational Complexity in Orthogonal Precoding of N-continuous Schemes for Sidelobe Suppression of OFDM Signals

  • Kawasaki, Hikaru;Matsui, Takahiro;Ohta, Masaya;Yamashita, Katsumi
    • IEIE Transactions on Smart Processing and Computing / v.6 no.2 / pp.117-123 / 2017
  • N-continuous orthogonal frequency division multiplexing (OFDM) is a precoding method for sidelobe suppression of OFDM signals: it connects successive OFDM symbols seamlessly up to a high-order derivative, which makes it well suited to suppressing out-of-band radiation. However, the error rate degrades severely as the continuous derivative order increases. Two orthogonal precoding schemes for N-continuous OFDM have been proposed that achieve an ideal error rate while maintaining sidelobe suppression performance; however, the large precoder matrices in both schemes make precoding and decoding computationally very expensive. This paper proposes decomposing these large precoder matrices in order to reduce the computational complexity of the orthogonal precoding schemes. Numerical experiments show that the proposed method drastically reduces computational complexity without any performance degradation.
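
A rough sketch of why factoring a large precoder pays off (a generic low-rank factorization in NumPy; the dimensions, the factorization P = AB, and all names are this sketch's assumptions, not the paper's decomposition):

```python
import numpy as np

# Illustrative sketch (not the paper's exact scheme): precoding with a
# large K x N matrix P costs K*N multiplications per OFDM symbol. If P
# admits a factorization P = A @ B with inner size r << N, the cost
# drops to r*N + K*r multiplications with identical output.

rng = np.random.default_rng(0)
K, N, r = 64, 60, 8                # hypothetical precoder dimensions

A = rng.standard_normal((K, r))    # small left factor
B = rng.standard_normal((r, N))    # small right factor
P = A @ B                          # the equivalent large precoder

x = rng.standard_normal(N)         # data symbols (real-valued for brevity)

y_direct = P @ x                   # direct precoding: K*N mults
y_factored = A @ (B @ x)           # factored precoding: r*N + K*r mults

assert np.allclose(y_direct, y_factored)
print(f"direct: {K * N} mults, factored: {r * N + K * r} mults")
```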

Complexity-Reduced Algorithms for LDPC Decoder for DVB-S2 Systems

  • Choi, Eun-A;Jung, Ji-Won;Kim, Nae-Soo;Oh, Deock-Gil
    • ETRI Journal / v.27 no.5 / pp.639-642 / 2005
  • This paper proposes two complexity-reduced algorithms for a low-density parity-check (LDPC) decoder. First, sequential decoding using a partial group is proposed; it has the same hardware complexity and requires fewer iterations, with little performance loss. The amount of performance loss can be chosen by the designer as a trade-off against the desired reduction in complexity. Second, an early detection method for reducing the computational complexity is proposed: using a confidence criterion, some bit nodes and check-node edges are detected early during decoding. Once detected, these edges require no further iteration, so early detection reduces the computational complexity.
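
A hedged sketch of the early-detection idea on a toy code (a generic min-sum decoder, not the paper's DVB-S2 implementation; the parity-check matrix, threshold, and names are this sketch's assumptions):

```python
import numpy as np

# Bit nodes whose posterior LLR magnitude exceeds a confidence threshold
# are marked as detected, and the check-to-bit messages feeding them are
# no longer recomputed -- that skipped work is the computational saving.

H = np.array([[1, 1, 0, 1, 0, 0],       # toy parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])

def decode(llr_in, max_iter=20, conf=8.0):
    m, n = H.shape
    msg = np.zeros((m, n))              # check-to-bit messages
    frozen = np.zeros(n, dtype=bool)    # early-detected bit nodes
    llr = llr_in.astype(float).copy()
    for _ in range(max_iter):
        for i in range(m):              # min-sum check-node update
            cols = np.flatnonzero(H[i])
            ext = {k: llr[k] - msg[i, k] for k in cols}  # extrinsic inputs
            for j in cols:
                if frozen[j]:
                    continue            # detected edge: message stays fixed
                others = [ext[k] for k in cols if k != j]
                msg[i, j] = np.prod(np.sign(others)) * min(abs(v) for v in others)
        llr = llr_in + msg.sum(axis=0)  # bit-node (posterior) update
        frozen |= np.abs(llr) > conf    # confidence criterion
        hard = (llr < 0).astype(int)
        if not ((H @ hard) % 2).any():  # all parity checks satisfied: stop
            break
    return hard

print(decode(np.array([2.5, -1.2, 3.0, 0.4, -2.8, 1.9])))
```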

Modified Cubic Convolution Interpolation for Low Computational Complexity

  • Jun, Young-Hyun;Yun, Jong-Ho;Choi, Myung-Ryul
    • Proceedings of the Korean Information Display Society Conference / 2006.08a / pp.1259-1262 / 2006
  • In this paper, we propose a modified cubic convolution interpolation for the enlargement or reduction of digital images based on pixel difference values. The proposed method has low complexity: computing one pixel of the scaled image requires seven weight multiplications, compared with sixteen for conventional cubic convolution interpolation. The pixel difference value is used to choose between the linear part of the cubic convolution and the full interpolation. We compare the proposed method with the conventional one in terms of computational complexity and image quality, and the simulation results show that the proposed method has lower computational complexity than cubic convolution interpolation.
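
A minimal 1-D sketch of the switching idea (the kernel parameter, threshold, and selection rule are illustrative assumptions, not the paper's exact weights):

```python
import numpy as np

# Where neighboring pixels are similar, a cheap 2-tap linear interpolation
# is used; only across strong edges is the full 4-tap cubic convolution
# kernel applied, reducing the average number of multiplications.

def cubic_kernel(t, a=-0.5):
    t = abs(t)
    if t < 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def interp(row, x, diff_thresh=10.0):
    i = int(np.floor(x))
    frac = x - i
    # pixel-difference test: flat region -> 2-tap linear (cheap)
    if abs(row[i + 1] - row[i]) < diff_thresh:
        return (1 - frac) * row[i] + frac * row[i + 1]
    # edge region -> 4-tap cubic convolution (accurate)
    return sum(row[i + k] * cubic_kernel(frac - k) for k in (-1, 0, 1, 2))

row = np.array([10., 12., 11., 80., 82., 81.])  # flat run, then an edge
print(interp(row, 1.5), interp(row, 2.5))       # linear, then cubic
```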

A Hierarchical Mode Decision Method for H.264 Intra Image Coding

  • Liu, Jiantan;Yoo, Kook-Yeol
    • Proceedings of the Korea Information Processing Society Conference / 2007.05a / pp.297-300 / 2007
  • Thanks to its impressive compression performance, the H.264 video coder has attracted strong interest in the video communications industry, for applications such as DMB (Digital Multimedia Broadcasting) and PMPs (Portable Multimedia Players). The main obstacle to using the H.264 coder lies in its computational complexity: it is roughly five times more complex than the market-leading MPEG-4 Simple Profile codec. In this paper, we propose a hierarchical mode decision method for intraframe coding to reduce the computational complexity of the encoder. By determining the mode group early, the proposed algorithm skips the computationally demanding evaluations in the mode decision. The algorithm consists of three steps: a 16×16 mode decision, a 4×4 mode-group decision, and a final mode decision among the modes of the selected group. Simulation results show that the proposed algorithm reduces computational complexity by 20% to 50% compared with the conventional algorithm.
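
A hedged sketch of the three-step hierarchy (H.264 defines 4 intra 16×16 modes and 9 intra 4×4 modes; the grouping of the 4×4 modes and the cost function below are this sketch's stand-ins, not the paper's):

```python
import zlib

# By committing to a mode *group* early, most 4x4 candidates are never
# evaluated, which is where the complexity saving comes from.

MODES_16x16 = ["V", "H", "DC", "PLANE"]
MODE_GROUPS_4x4 = {                      # illustrative grouping of modes 0..8
    "vertical-ish":   [0, 3, 7],
    "horizontal-ish": [1, 8],
    "DC/diagonal":    [2, 4, 5, 6],
}

def cost(block, mode):                   # deterministic stand-in for SAD/RD cost
    return zlib.crc32(bytes(block) + str(mode).encode()) % 1000

def decide(block):
    # Step 1: cheap 16x16 mode decision
    best16 = min(MODES_16x16, key=lambda m: cost(block, m))
    # Step 2: choose a 4x4 mode *group* early (here: by its first member)
    group = min(MODE_GROUPS_4x4, key=lambda g: cost(block, MODE_GROUPS_4x4[g][0]))
    # Step 3: full decision only among the selected group's members
    best4 = min(MODE_GROUPS_4x4[group], key=lambda m: cost(block, m))
    return best16, group, best4

print(decide(bytearray(range(16))))
```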

Study on the Fast Nearest-Neighbor Searching Classifier Using Distance Approximation

  • 이일완;채수익
    • Journal of the Korean Institute of Telematics and Electronics C / v.34C no.2 / pp.71-79 / 1997
  • In this paper, we propose a new nearest-neighbor classifier with reduced computational complexity in the search process. In the proposed classifier, the classes are divided into two sets: reference and non-reference classes. The classifier reduces the computational requirement by approximating the distance between the input and a non-reference class from the precomputed distances among the classes, so that exact distances are calculated only between the input and the reference classes. A given classifier is thus converted into an RCC (reduced computational complexity) classifier with a small increase in misclassification probability. We designed RCC classifiers for the recognition of digits from the NIST database and obtained a classifier with a 60% reduction in computational complexity at the cost of a 0.5% increase in misclassification probability.
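
A sketch of the distance-approximation idea using a triangle-inequality bound (the reference/non-reference split follows the abstract; the exact approximation rule, data, and names are this sketch's assumptions):

```python
import numpy as np

# Only distances to reference classes are computed exactly; distances to
# the remaining classes are lower-bounded from precomputed class-to-class
# distances and pruned when the bound already exceeds the current best.

rng = np.random.default_rng(1)
centroids = rng.standard_normal((10, 8))           # 10 classes, 8-dim
ref = [0, 3, 6]                                    # reference classes
cc = np.linalg.norm(centroids[:, None] - centroids[None], axis=2)

def classify(x):
    d_ref = {r: np.linalg.norm(x - centroids[r]) for r in ref}
    best_c, best_d = min(d_ref.items(), key=lambda kv: kv[1])
    for c in range(len(centroids)):
        if c in d_ref:
            continue
        # triangle inequality: d(x, c) >= max_r |d(x, r) - d(r, c)|
        lb = max(abs(d_ref[r] - cc[r, c]) for r in ref)
        if lb >= best_d:
            continue                               # pruned: no exact distance
        d = np.linalg.norm(x - centroids[c])
        if d < best_d:
            best_c, best_d = c, d
    return best_c

print(classify(rng.standard_normal(8)))
```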

Analysis of the Upper Bound on the Complexity of LLL Algorithm

  • PARK, YUNJU;PARK, JAEHYUN
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.20 no.2 / pp.107-121 / 2016
  • We analyze the complexity of the LLL algorithm, invented by Lenstra, Lenstra, and Lovász, a well-known lattice reduction (LR) algorithm previously known to have a complexity of $O(N^4 \log B)$ multiplications (or $O(N^5 (\log B)^2)$ bit operations) for a lattice basis matrix $H \in \mathbb{R}^{M \times N}$, where $B$ is the maximum squared norm over the columns of $H$. This implies that the complexity of the lattice reduction algorithm depends only on the matrix size and the lattice basis norm. However, the structure of a given lattice matrix (i.e., the correlation among its columns), usually measured by its condition number or determinant, can also affect the computational complexity of the LR algorithm. In this paper, to see how the matrix structure affects the LLL algorithm's complexity, we derive a tighter upper bound on its complexity in terms of the condition number and determinant of the given lattice matrix. We also analyze the complexities of LLL updating/downdating schemes using the proposed upper bound.
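
The classical $O(N^4 \log B)$ figure quoted above comes from a standard counting argument, sketched below; this restates the well-known baseline analysis, not the paper's tighter condition-number-dependent bound:

```latex
% Standard counting argument behind the classical LLL complexity bound.
\begin{align*}
D &= \prod_{i=1}^{N} \lVert \hat{h}_i \rVert^{2(N-i+1)}
     && \text{potential over Gram--Schmidt vectors } \hat{h}_i \\
D &\leq B^{N(N+1)/2}, \qquad D \geq 1
     && \text{for integer lattice bases} \\
\#\text{swaps} &= O\bigl(\log_{1/\delta} D\bigr) = O(N^2 \log B)
     && \text{each swap shrinks } D \text{ by a factor } \delta < 1 \\
\text{total} &= O(N^2 \log B) \times O(N^2) = O(N^4 \log B)
     && O(N^2) \text{ multiplications per iteration}
\end{align*}
```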

Two New Types of Candidate Symbol Sorting Schemes for Complexity Reduction of a Sphere Decoder

  • Jeon, Eun-Sung;Kim, Yo-Han;Kim, Dong-Ku
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.9C / pp.888-894 / 2007
  • The computational complexity of a sphere decoder (SD) is conventionally reduced by a decoding-order scheme that sorts candidate symbols in ascending order of their Euclidean distance from the output of a zero-forcing (ZF) receiver. However, since the ZF output may not be a reliable sorting reference, we propose two sorting schemes that allow faster decoding. The first uses the lattice points newly found in the previous search round instead of the ZF output (Type I); since these lattice points are closer to the received signal than the ZF output, they serve as a more reliable sorting reference for finding the maximum-likelihood (ML) solution. The second sorts candidate symbols in descending order of the number of candidate symbols in the following layer, called child symbols (Type II). Both sorting schemes can be combined with layer sorting for further complexity reduction. In simulations, the Type I and Type II schemes provided 12% and 20% complexity reduction, respectively, over conventional sorting; combined with layer sorting, they provide an additional 10-15% complexity reduction while maintaining detection performance.
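
A small sketch of the two orderings (assuming a 4-PAM alphabet per layer; the function names and example data are illustrative, not the paper's notation):

```python
# Type I: sort a layer's candidates by distance to the corresponding
# coordinate of the most recently found lattice point (instead of the
# ZF output). Type II: sort by the number of admissible child symbols
# in the next layer, descending.

CONSTELLATION = [-3, -1, 1, 3]                     # 4-PAM symbols per layer

def sort_type1(last_found_coord):
    # closer to the lattice point found in the previous round -> searched first
    return sorted(CONSTELLATION, key=lambda s: abs(s - last_found_coord))

def sort_type2(child_counts):
    # child_counts[s]: admissible symbols in the next layer under s
    return sorted(CONSTELLATION, key=lambda s: -child_counts[s])

print(sort_type1(0.7))                             # [1, -1, 3, -3]
print(sort_type2({-3: 1, -1: 4, 1: 3, 3: 2}))      # [-1, 1, 3, -3]
```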

On a Reduction of Pitch Search Time for the IMBE Vocoder by Using the Spectral AMDF

  • 홍성훈
    • Proceedings of the Acoustical Society of Korea Conference / 1998.06c / pp.155-158 / 1998
  • IMBE (Improved Multi-Band Excitation) vocoders exhibit good performance at low data rates, but their major drawback is a large computational requirement. In this paper, we propose a new pitch search method that preserves the quality of the IMBE vocoder at reduced complexity. The basic idea is to reduce the computational complexity of pitch search by using the spectral AMDF (SAMDF). Applying the proposed method to the IMBE vocoder reduces the pitch search time by approximately 52.02%, with no difference in voice quality between the conventional and the proposed IMBE.
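
A hedged sketch of how an AMDF-style pre-search can shrink the lag range (a plain time-domain AMDF stands in for the spectral AMDF, and the window width and all parameters are this sketch's assumptions):

```python
import numpy as np

# A cheap AMDF gives a coarse pitch estimate, and the expensive IMBE
# analysis-by-synthesis pitch search then runs only in a small window
# around it instead of over the full lag range.

def amdf(x, lags):
    n = len(x)
    return np.array([np.mean(np.abs(x[:n - t] - x[t:])) for t in lags])

fs = 8000
t = np.arange(400) / fs
x = np.sin(2 * np.pi * 100 * t) + 0.1 * np.random.randn(len(t))

full_range = np.arange(20, 147)                # lags covering ~54..400 Hz
coarse = full_range[np.argmin(amdf(x, full_range))]
window = np.arange(max(20, coarse - 4), min(146, coarse + 4) + 1)
print(f"search {len(window)} lags near {coarse} instead of {len(full_range)}")
```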

On a Reduction of Codebook Searching Time by Using the RPE Searching Technique in the CELP Vocoder

  • 김대식
    • Proceedings of the Acoustical Society of Korea Conference / 1995.06a / pp.141-145 / 1995
  • Code-excited linear prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback of CELP-type coders is their large computational requirement. In this paper, we propose a new codebook search method that preserves the quality of the CELP vocoder at reduced complexity. The basic idea is to restrict the search range of the random codebook by using a regular pulse excitation (RPE) search technique. Applying the proposed method to the CELP vocoder yields approximately 48% complexity reduction in the codebook search.
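
A sketch of the search-range restriction idea (using RPE pulse positions as the restriction criterion is the paper's contribution; the correlation pre-score, codebook sizes, and names below are this sketch's simplifications):

```python
import numpy as np

# A cheap criterion ranks the random codebook first, and the full
# weighted-error search runs only over the top fraction of entries.

rng = np.random.default_rng(2)
codebook = rng.standard_normal((512, 40))      # 512 vectors, 40 samples each
target = rng.standard_normal(40)               # perceptually weighted target

def full_cost(c):                              # stand-in for the full
    return np.sum((target - c) ** 2)           # analysis-by-synthesis error

prescore = codebook @ target                   # cheap correlation pre-score
shortlist = np.argsort(-prescore)[:64]         # search ~12% of the codebook
best = min(shortlist, key=lambda i: full_cost(codebook[i]))
print(f"best index {best}, searched {len(shortlist)}/512 entries")
```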

Design of M-Channel IIR Uniform DFT Filter Banks Using Recursive Digital Filters

  • Dehghani, M.J.;Aravind, R.;Prabhu, K.M.M.
    • ETRI Journal / v.25 no.5 / pp.345-355 / 2003
  • In this paper, we propose a method for designing a class of M-channel, causal, stable, perfect-reconstruction, infinite impulse response (IIR), parallel uniform discrete Fourier transform (DFT) filter banks. It is based on a structure previously proposed by Martinez et al. [1] for IIR digital filter design for sampling-rate reduction. The proposed filter bank has a modular structure and is therefore well suited to VLSI implementation. Moreover, it is more efficient in terms of computational complexity than the most general IIR DFT filter bank, reducing computational complexity by more than 50% in both the critically sampled and oversampled cases. In the polyphase oversampled DFT filter bank case, the proposed algorithm additionally provides flexible stop-band attenuation.
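
A small sketch of the polyphase-plus-FFT structure that makes uniform DFT filter banks cheap (FIR polyphase branches stand in for the paper's recursive IIR ones; M, the prototype filter, and all names are illustrative):

```python
import numpy as np

# All M channels come from one prototype filter's polyphase components
# plus a single M-point (I)FFT per block of M input samples, instead of
# M separately implemented bandpass filters.

M = 4
h = np.hanning(32)                        # illustrative lowpass prototype
E = [h[k::M] for k in range(M)]           # polyphase components E_k

def analysis(x):
    x = x[: len(x) - len(x) % M]
    blocks = x.reshape(-1, M)[:, ::-1]    # M-fold blocking (commutator order)
    v = np.stack([np.convolve(blocks[:, k], E[k])[: blocks.shape[0]]
                  for k in range(M)])     # one cheap filter per branch
    return np.fft.ifft(v, axis=0) * M     # IFFT fans branches out to M channels

x = np.exp(2j * np.pi * (1 / M) * np.arange(256))   # tone centered in channel 1
y = analysis(x)
print(np.argmax(np.abs(y).mean(axis=1)))            # -> 1
```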
