• Title/Summary/Keyword: Algorithm Complexity


N-Step Sliding Recursion Formula of Variance and Its Implementation

  • Yu, Lang; He, Gang; Mutahir, Ahmad Khwaja
    • Journal of Information Processing Systems / v.16 no.4 / pp.832-844 / 2020
  • The degree of dispersion of a random variable can be described by its variance, which reflects the distance of the random variable from its mean. However, the traditional variance calculation algorithm has O(n) time complexity, since it requires a full pass over all samples. When the number of samples increases, or in high-speed signal processing, algorithms with O(n) time complexity cost a huge amount of time and may degrade the performance of the whole system. This paper proposes a novel multi-step recursive algorithm that calculates the variance of a time-varying data series with O(1) (constant) time complexity. Numerical simulations and experiments are presented, and the results demonstrate that the proposed multi-step recursive algorithm effectively decreases computing time and hence significantly improves variance calculation efficiency for time-varying data, showing its potential value for time-consuming data analysis and high-speed signal processing.
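
The paper's N-step recursion itself is not reproduced in the abstract, but the standard way to reach O(1) per update is to maintain running sums of the samples and of their squares over a sliding window. A minimal Python sketch under that assumption (window size, class and variable names are illustrative, not the paper's N-step formulation):

```python
from collections import deque

class SlidingVariance:
    """O(1)-per-update variance over a fixed-size sliding window.

    Maintains sum(x) and sum(x^2) incrementally instead of re-scanning
    all n samples, so each slide costs constant time.
    """

    def __init__(self, window):
        self.window = window
        self.buf = deque()
        self.s1 = 0.0  # running sum of samples
        self.s2 = 0.0  # running sum of squared samples

    def push(self, x):
        self.buf.append(x)
        self.s1 += x
        self.s2 += x * x
        if len(self.buf) > self.window:  # slide: retire the oldest sample
            old = self.buf.popleft()
            self.s1 -= old
            self.s2 -= old * old

    def variance(self):
        n = len(self.buf)
        if n == 0:
            return 0.0
        mean = self.s1 / n
        return self.s2 / n - mean * mean  # Var(x) = E[x^2] - (E[x])^2
```

For very long streams the subtraction in `variance()` can lose precision; Welford-style updates are the usual remedy when that matters.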

High Speed Motion Match Utilizing A Multi-Resolution Algorithm (다중해상도 알고리즘을 이용한 고속 움직임 정합)

  • Joo, Heon-Sik
    • Journal of the Korea Society of Computer and Information / v.12 no.2 s.46 / pp.131-139 / 2007
  • This paper proposes a multi-resolution algorithm. Its search points and complexity were compared with those of the block match algorithm, along with the resulting speed-up. The proposed multi-resolution NTSS-3 Level algorithm was then compared with its targets, the TSS-3 Level algorithm and the NTSS algorithm, and proved superior in search points and speed-up. Specifically, the proposed NTSS-3 Level algorithm required two to three times fewer search points and two to four times less complexity computation than the block match algorithm, with a two-fold speed-up. Accordingly, the proposed multi-resolution NTSS-3 Level algorithm showed excellent search-point and speed-up performance at a comparable PSNR.
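
For reference on the search-point counts compared above, this is a minimal sketch of the classic three-step search (TSS) that NTSS refines; the SAD cost, the 9-point probe pattern, and the border handling are standard textbook choices, not taken from this paper:

```python
import numpy as np

def sad(block, ref, y, x, bs):
    """Sum of absolute differences between a block and a reference patch."""
    patch = ref[y:y + bs, x:x + bs]
    if patch.shape != block.shape:  # candidate falls outside the frame
        return np.inf
    return np.abs(block.astype(int) - patch.astype(int)).sum()

def three_step_search(cur, ref, by, bx, bs=16, step=4):
    """Classic TSS: probe 9 points around the centre, recentre on the
    best match, halve the step, and repeat until the step reaches 1."""
    block = cur[by:by + bs, bx:bx + bs]
    cy, cx = by, bx
    best = sad(block, ref, cy, cx, bs)
    while step >= 1:
        ny, nx = cy, cx
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                cost = sad(block, ref, cy + dy, cx + dx, bs)
                if cost < best:
                    best, ny, nx = cost, cy + dy, cx + dx
        cy, cx = ny, nx  # move the search centre
        step //= 2
    return cy - by, cx - bx  # motion vector (dy, dx)
```

NTSS adds extra probe points near the centre in the first step, which is where its search-point advantage for small motions comes from.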


Fast Motion Estimation Algorithm Using Limited Sub-blocks (제한된 서브블록을 이용한 고속 움직임 추정 알고리즘)

  • Kim, Seong-Hee; Oh, Jeong-Su
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.3C / pp.258-263 / 2006
  • Each pixel in a matching block does not contribute equally to block matching, and the matching error is greatly affected by image complexity. On the basis of these facts, this paper proposes a fast motion estimation algorithm that uses only sub-blocks selected by image complexity. The proposed algorithm divides a matching block into 16 sub-blocks, computes the image complexity of every sub-block, performs partial block matching using the sub-blocks with large complexity, and detects a motion vector. The simulation results show that the proposed algorithm causes negligible image degradation while greatly reducing computation compared with conventional algorithms.
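
The abstract does not pin down the complexity measure; a common choice is per-sub-block variance. A minimal sketch under that assumption (the 4x4 grid of sub-blocks matches the abstract, while the top-k selection rule and names are mine):

```python
import numpy as np

def select_subblocks(block, grid=4, keep=6):
    """Rank the grid x grid sub-blocks of a matching block by variance
    (standing in for image complexity) and keep the most complex ones."""
    bs = block.shape[0] // grid
    scored = []
    for i in range(grid):
        for j in range(grid):
            sub = block[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs]
            scored.append((sub.var(), i * bs, j * bs))
    scored.sort(reverse=True)  # highest complexity first
    return [(y, x) for _, y, x in scored[:keep]], bs

def partial_sad(block, candidate, offsets, bs):
    """Matching error accumulated only over the selected sub-blocks."""
    total = 0
    for y, x in offsets:
        a = block[y:y + bs, x:x + bs].astype(int)
        b = candidate[y:y + bs, x:x + bs].astype(int)
        total += np.abs(a - b).sum()
    return total
```

With 6 of 16 sub-blocks matched, each candidate costs roughly 6/16 of a full SAD, which is the kind of saving the abstract refers to.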

A COMPLEXITY-REDUCED INTERPOLATION ALGORITHM FOR SOFT-DECISION DECODING OF REED-SOLOMON CODES

  • Lee, Kwankyu
    • Journal of applied mathematics & informatics / v.31 no.5_6 / pp.785-794 / 2013
  • Soon after Lee and O'Sullivan proposed a new interpolation algorithm for algebraic soft-decision decoding of Reed-Solomon codes, there were several attempts to apply a coordinate transformation technique to the new algorithm, with a remarkable complexity-reducing effect. In this paper, a conceptually simple way of applying the transformation technique to the interpolation algorithm is proposed.

A fast running FIR Filter structure reducing computational complexity

  • Lee, Jae-Kyun; Lee, Chae-Wook
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2005.11a / pp.45-48 / 2005
  • In this paper, we propose a new fast running FIR filter structure that improves the convergence speed of adaptive signal processing and reduces the computational complexity. The proposed filter is applied to a wavelet-based adaptive algorithm. We compared the performance of the proposed algorithm with an existing algorithm through computer simulation of an adaptive noise canceler operating on synthesized speech. The results show that the proposed algorithm outperforms the existing one.
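
The filter structure itself is not described in the abstract; for context, this is a minimal LMS adaptive noise canceler of the kind used in the reported simulation (filter length, step size, and the time-domain adaptation are illustrative assumptions; the paper's wavelet-based adaptation is not shown):

```python
import numpy as np

def lms_noise_canceler(primary, reference, taps=32, mu=0.01):
    """Adaptive noise canceler: `primary` carries speech plus noise and
    `reference` carries correlated noise only. The FIR weights adapt by
    LMS so the filter output tracks the noise component; the error
    signal is the enhanced speech."""
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]  # most recent sample first
        y = w @ x                        # FIR estimate of the noise
        e = primary[n] - y               # error = cleaned speech sample
        w += 2 * mu * e * x              # LMS weight update
        out[n] = e
    return out
```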


DISTRIBUTED ALGORITHMS SOLVING THE UPDATING PROBLEMS

  • Park, Jung-Ho; Park, Yoon-Young; Choi, Sung-Hee
    • Journal of applied mathematics & informatics / v.9 no.2 / pp.607-620 / 2002
  • In this paper, we consider the updating problems of reconstructing the biconnected components and the weighted shortest paths in response to topology changes of the network. We propose two distributed algorithms. The first solves the updating problem of reconstructing the biconnected components after several processors and links are added and deleted. Its bit complexity is $O((n'+a+d)\log n')$, its message complexity is $O(n'+a+d)$, its ideal time complexity is $O(n')$, and its space complexity is $O(e \log n + e' \log n')$. The second solves the updating problem of reconstructing the weighted shortest paths; its message complexity and ideal time complexity are both $O(u^2+a+n')$.
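
The paper's algorithms are distributed, and the abstract gives only their complexities; as a sequential illustration of the underlying updating idea (repair only what a topology change affects, rather than recompute from scratch), here is a sketch of updating single-source shortest paths after one edge insertion; the graph representation and names are mine:

```python
import heapq

def insert_edge_and_update(adj, dist, u, v, w):
    """After inserting edge (u, v) of weight w, repair the distance map by
    relaxing outward from v only. `adj` maps node -> list of (neighbor,
    weight); `dist` holds current shortest distances from a fixed source."""
    adj.setdefault(u, []).append((v, w))
    if dist.get(u, float("inf")) + w >= dist.get(v, float("inf")):
        return dist  # the new edge changes no shortest path
    dist[v] = dist[u] + w
    heap = [(dist[v], v)]  # only affected vertices ever enter the heap
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist.get(x, float("inf")):
            continue  # stale heap entry
        for y, wxy in adj.get(x, []):
            if d + wxy < dist.get(y, float("inf")):
                dist[y] = d + wxy
                heapq.heappush(heap, (dist[y], y))
    return dist
```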

Low-Complexity Triple-Error-Correcting Parallel BCH Decoder

  • Yeon, Jaewoong; Yang, Seung-Jun; Kim, Cheolho; Lee, Hanho
    • JSTS: Journal of Semiconductor Technology and Science / v.13 no.5 / pp.465-472 / 2013
  • This paper presents a low-complexity triple-error-correcting parallel Bose-Chaudhuri-Hocquenghem (BCH) decoder architecture and its efficient design techniques. A novel modified step-by-step (m-SBS) decoding algorithm, which significantly reduces computational complexity, is proposed for the parallel BCH decoder. In addition, a determinant calculator and an error locator are proposed to reduce hardware complexity; specifically, a shared syndrome factor calculator and a self-error detection scheme are introduced. The multi-channel multi-parallel BCH decoder using the proposed m-SBS algorithm and design techniques has considerably less hardware complexity and latency than one using conventional algorithms. For a 16-channel 4-parallel (1020, 990) BCH decoder over GF($2^{12}$), the proposed design leads to a complexity reduction of at least 23% compared to conventional architectures.
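
Step-by-step decoding works from the syndromes of the received word, so as background here is a minimal syndrome computation for a triple-error-correcting BCH code over GF(2^4) with primitive polynomial x^4 + x + 1; the paper's decoder operates over GF(2^12), and the small field is used here only to keep the tables readable:

```python
# Antilog/log tables for GF(2^4), primitive polynomial x^4 + x + 1 (0b10011).
EXP, LOG = [0] * 30, [0] * 16
a = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = a  # doubled table avoids explicit mod-15 steps
    LOG[a] = i
    a <<= 1
    if a & 0x10:
        a ^= 0x13

def syndromes(received_bits, t=3):
    """Evaluate the received polynomial r(x) at alpha^1 .. alpha^(2t).
    All syndromes zero means no detectable error."""
    syn = []
    for j in range(1, 2 * t + 1):
        s = 0
        for i, bit in enumerate(received_bits):  # r(x) = sum_i bit_i * x^i
            if bit:
                s ^= EXP[(i * j) % 15]           # add alpha^(i*j)
        syn.append(s)
    return syn
```

The m-SBS and hardware-sharing techniques in the paper concern what happens after this point, i.e., how the error locations are found from these syndromes.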

A Complexity Reduced PNFS Algorithm for the OFDM System with Frequency Offset and Phase Noise (주파수 오프셋과 위상 잡음이 있는 OFDM 시스템에서 PNFS 알고리즘 간소화를 통한 복잡도 개선)

  • Kim, Do-Hoon; Ryu, Heung-Gyoon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.23 no.4 / pp.499-506 / 2012
  • In this paper, we analyze the effects of phase noise and frequency offset, which cause performance degradation in OFDM systems, and propose a reduced PNFS (Phase Noise and Frequency offset Suppression) algorithm. The OFDM system is seriously affected by ICI components such as phase noise, frequency offset, and Doppler effects, and compensating them has conventionally required complicated processing algorithms with high complexity. We therefore propose a PNFS algorithm that decreases complexity while still compensating the ICI components. Complexity is reduced by approximating parameters that only slightly affect performance, and we compare the computational cost of the conventional and revised PNFS algorithms. Simulation also shows that the BER performance of the revised PNFS algorithm is slightly improved.
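
The PNFS algorithm itself is not given in the abstract; the standard building block in this problem is common phase error (CPE) estimation from pilot subcarriers, sketched below (the pilot layout, subcarrier count, and signal model are illustrative assumptions):

```python
import numpy as np

def correct_cpe(rx_symbols, pilot_idx, pilot_vals):
    """Estimate the common phase error of one OFDM symbol from its pilots
    and derotate all subcarriers. Phase noise and residual frequency offset
    both contribute this common rotation; the random ICI part is what
    suppression schemes such as PNFS must additionally handle."""
    # Each Y_p * conj(X_p) points along e^{j*phi}; averaging denoises phi.
    cpe = np.angle(np.sum(rx_symbols[pilot_idx] * np.conj(pilot_vals)))
    return rx_symbols * np.exp(-1j * cpe)

# Toy example: 64 subcarriers, 4 pilots, a 0.1 rad common rotation plus noise.
rng = np.random.default_rng(0)
tx = (1 - 2 * rng.integers(0, 2, 64)).astype(complex)  # BPSK symbols
pilot_idx = np.array([7, 21, 43, 57])
noise = 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
rx = tx * np.exp(1j * 0.1) + noise
corrected = correct_cpe(rx, pilot_idx, tx[pilot_idx])
```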

Computationally Efficient Lattice Reduction Aided Detection for MIMO-OFDM Systems under Correlated Fading Channels

  • Liu, Wei; Choi, Kwonhue; Liu, Huaping
    • ETRI Journal / v.34 no.4 / pp.503-510 / 2012
  • We analyze the relationship between channel coherence bandwidth and two complexity-reduced lattice reduction aided detection (LRAD) algorithms for multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems in correlated fading channels. In both the adaptive LR algorithm and the fixed interval LR algorithm, we exploit the inherent feature that the unimodular transformation matrix P remains the same for adjacent, highly correlated subcarriers. Complexity simulations demonstrate that the adaptive LR algorithm can eliminate up to approximately 90 percent of the multiplications and 95 percent of the divisions of the brute-force LR algorithm when the coherence bandwidth is large. The results also show that the adaptive algorithm, with both optimum and globally suboptimum initial interval settings, significantly reduces LR complexity compared with the brute-force LR and fixed interval LR algorithms, while maintaining the system performance.
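
A sketch of the reuse idea: run lattice reduction on one subcarrier, keep the unimodular transformation, and reapply it to neighbouring subcarriers while the channel stays close to the last reduced one. The textbook LLL below and the reuse threshold are my own illustrative choices, using real-valued matrices for simplicity (MIMO detectors typically work on a real-valued decomposition of the complex channel):

```python
import numpy as np

def lll_with_transform(B, delta=0.75):
    """Textbook LLL on the rows of B. Returns (reduced, T) where
    reduced = T @ B and T is unimodular (integer entries, det = +/-1)."""
    n = B.shape[0]
    R = B.astype(float).copy()
    T = np.eye(n, dtype=int)

    def gso(M):  # Gram-Schmidt vectors Q and coefficients mu
        Q = M.astype(float).copy()
        mu = np.zeros((n, n))
        for i in range(n):
            for j in range(i):
                mu[i, j] = M[i] @ Q[j] / (Q[j] @ Q[j])
                Q[i] = Q[i] - mu[i, j] * Q[j]
        return Q, mu

    k = 1
    while k < n:
        Q, mu = gso(R)
        for j in range(k - 1, -1, -1):  # size-reduce row k
            q = round(mu[k, j])
            if q:
                R[k] -= q * R[j]
                T[k] -= q * T[j]
                Q, mu = gso(R)
        if Q[k] @ Q[k] >= (delta - mu[k, k - 1] ** 2) * (Q[k - 1] @ Q[k - 1]):
            k += 1  # Lovasz condition holds
        else:
            R[[k - 1, k]] = R[[k, k - 1]]  # swap and step back
            T[[k - 1, k]] = T[[k, k - 1]]
            k = max(k - 1, 1)
    return R, T

def reduce_subcarriers(channels, tau=0.05):
    """Re-run LLL only when the channel drifts more than a relative
    distance tau from the last reduced subcarrier; otherwise reuse T."""
    reduced, T, ref = [], None, None
    for H in channels:
        if T is None or np.linalg.norm(H - ref) > tau * np.linalg.norm(ref):
            _, T = lll_with_transform(H)  # fresh reduction on this subcarrier
            ref = H
        reduced.append(T @ H)  # cheap reuse on correlated neighbours
    return reduced
```

The larger the coherence bandwidth, the longer each T survives, which is exactly the regime where the abstract reports eliminating most of the brute-force LR operations.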

Reconfiguration method for array structures using spare element lines (여분소자 라인을 이용한 배열구조의 재구성 방법)

  • 김형석; 최상방
    • Journal of the Korean Institute of Telematics and Electronics C / v.34C no.2 / pp.50-60 / 1997
  • Reconfiguration of a memory array using spare rows and columns is a well-known technique for improving yield. When the numbers of spare rows and spare columns are limited, the repair problem is known to be NP-complete. In this paper, we propose a reconfiguration algorithm for an array of memory cells that uses faulty-cell clustering to remove rows and columns. The greedy algorithm is the simplest reconfiguration method, with a time complexity of $O(n^2)$, where n is the number of faulty cells, but its repair rate is very low; the exhaustive search algorithm has a high repair rate, but its time complexity is $O(2^n)$. The proposed algorithm provides the same repair rate as the exhaustive search algorithm in almost all cases and runs as fast as the greedy method, with a worst-case time complexity of $O(n^3)$. We show through simulations that the proposed algorithm provides more efficient solutions than other algorithms.
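
For contrast with the paper's clustering approach, this is the simple greedy baseline it is compared against: repeatedly repair the row or column that currently contains the most faulty cells (the fault-map representation and tie-breaking are my assumptions):

```python
from collections import Counter

def greedy_repair(faults, spare_rows, spare_cols):
    """Greedy spare allocation over a fault map {(row, col), ...}:
    always spend a spare on the line with the most remaining faults.
    Fast, but it can fail on instances that exhaustive search repairs,
    which is the repair-rate gap the paper targets."""
    faults = set(faults)
    while faults and (spare_rows or spare_cols):
        row_hits = Counter(r for r, _ in faults)
        col_hits = Counter(c for _, c in faults)
        br, brn = row_hits.most_common(1)[0]
        bc, bcn = col_hits.most_common(1)[0]
        if spare_rows and (brn >= bcn or not spare_cols):
            faults = {(r, c) for r, c in faults if r != br}  # repair row br
            spare_rows -= 1
        else:
            faults = {(r, c) for r, c in faults if c != bc}  # repair col bc
            spare_cols -= 1
    return not faults  # True if every faulty cell is covered
```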
