• Title/Summary/Keyword: Time Complexity


SOC Verification Based on WGL

  • Du, Zhen-Jun;Li, Min
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.12
    • /
    • pp.1607-1616
    • /
    • 2006
  • The growing market for multimedia and digital signal processing requires SoCs with significant data-path portions. However, the common verification models are not well suited to SoCs. A novel model, the WGL (Weighted Generalized List), is proposed; it is based on the general-list decomposition of polynomials, with three different weights and manipulation rules introduced to achieve node sharing and canonicity. Timing parameters and operations on them are also considered. Examples show that the word-level WGL is the only model that linearly represents the common word-level functions, and that the bit-level WGL is especially suitable for arithmetic-intensive circuits. The model is proved to be a uniform and efficient model for both bit-level and word-level functions. Based on the WGL model, a backward-construction logic-verification approach is then presented, which reduces the time and space complexity for multipliers to polynomial complexity (time complexity less than $O(n^{3.6})$ and space complexity less than $O(n^{1.5})$) without hierarchical partitioning. Finally, a construction methodology for word-level polynomials is presented to support complex high-level verification; it combines order computation and coefficient solving and adopts an efficient backward approach. Its construction complexity is much lower than that of existing methods, e.g., the construction time for multipliers grows as a power of less than 1.6 in the input word size without increasing the maximal space required. The WGL model and the verification methods based on it are of both theoretical and practical significance in SoC design.


A Study on VLSI-Oriented 2-D Systolic Array Processor Design for APP (Algebraic Path Problem) (VLSI 지향적인 APP용 2-D SYSTOLIC ARRAY PROCESSOR 설계에 관한 연구)

  • 이현수;방정희
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.30B no.7
    • /
    • pp.1-13
    • /
    • 1993
  • In this paper, the problems of conventional special-purpose array processors, such as their lack of flexibility, are investigated. A modified design methodology is then proposed and applied to obtain a common solution for three typical APP algorithms that are solved by similar methods, namely SP (Shortest Path), TC (Transitive Closure), and MST (Minimum Spanning Tree). The newly proposed parallel APP algorithm allows real-time processing without structural enhancement or functional restriction. In addition, a two-dimensional bit-parallel lower-triangular systolic array processor and a single PE are designed in detail. For evaluation, the computational complexity is analyzed according to the bit-processing method, and the relationship between total chip size and execution time is described. With large data sets input in real time, the proposed processor achieves an execution time of 3n-4, which is optimal $O(n)$ time complexity, with $O(n^{2})$ space complexity (the total gate count) and a pipeline period rate of one.

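As background for the entry above, the APP instances it names (SP and TC in particular) are classically obtained from one Floyd-Warshall-style sweep instantiated over different semirings; the sequential reference below shows only that underlying recurrence, which the paper's systolic array parallelizes to reach the 3n-4 execution time quoted above. The operator choices shown are standard textbook instantiations, not taken from the paper.

```python
# Sequential reference for the algebraic path problem (APP): the same O(n^3)
# triple loop solves shortest paths, transitive closure, etc., by swapping the
# semiring operations. The 2-D systolic array in the paper parallelizes this
# recurrence; the array itself is not reproduced here.

def app_closure(a, add, mul):
    """Floyd-Warshall-style sweep: a[i][j] = add(a[i][j], mul(a[i][k], a[k][j]))."""
    n = len(a)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                a[i][j] = add(a[i][j], mul(a[i][k], a[k][j]))
    return a

INF = float("inf")
# SP (shortest paths): the (min, +) semiring on an edge-weight matrix.
dist = [[0, 3, INF], [INF, 0, 1], [2, INF, 0]]
app_closure(dist, min, lambda x, y: x + y)
# TC (transitive closure): the (or, and) semiring on a boolean adjacency matrix.
reach = [[True, True, False], [False, True, True], [False, False, True]]
app_closure(reach, lambda x, y: x or y, lambda x, y: x and y)
```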

Efficient Bit-Parallel Multiplier for Binary Field Defined by Equally-Spaced Irreducible Polynomials (Equally Spaced 기약다항식 기반의 효율적인 이진체 비트-병렬 곱셈기)

  • Lee, Ok-Suk;Chang, Nam-Su;Kim, Chang-Han;Hong, Seok-Hie
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.18 no.2
    • /
    • pp.3-10
    • /
    • 2008
  • The choice of basis for representing elements of $GF(2^m)$ affects the efficiency of a multiplier. Among the available representations, a multiplier based on a redundant representation supports an efficient trade-off between area complexity and time complexity, since it can carry out modular reduction quickly; previous multipliers using a redundant representation are therefore faster than multipliers using other bases, but their weakness is a higher space complexity. In this paper, we propose a new efficient multiplier, taking into account that polynomial exponentiation is frequently used in cryptographic hardware. The proposed multiplier is suitable for left-to-right exponentiation and offers a good balance between time and area complexity: it has a time delay of $T_A+({\lceil}{\log}_2m{\rceil})T_x$ and an area complexity of (2m-1)(m+s). As a result, the proposed multiplier reduces the area complexity by $2(ms+s^2)$ compared with the previous multiplier based on equally-spaced polynomials, and reduces the time delay from $T_A+({\lceil}{\log}_2m+s{\rceil})T_x$ to $T_A+({\lceil}{\log}_2m{\rceil})T_x$. ($T_A$: time delay of one AND gate, $T_x$: time delay of one XOR gate, m: degree of the equally-spaced irreducible polynomial, s: spacing factor)
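
The arithmetic that such a multiplier implements is multiplication in $GF(2^m)$: a carry-free polynomial product followed by reduction modulo the irreducible polynomial. The sketch below is a plain software reference for that operation only; the paper's bit-parallel hardware structure, redundant representation, and equally-spaced-polynomial optimization are not reproduced, and the field and polynomial in the example are chosen arbitrarily.

```python
# Software reference for GF(2^m) multiplication. Field elements are Python
# integers whose bit i is the coefficient of x^i.

def gf2m_mul(a, b, f, m):
    """Multiply a and b in GF(2^m) defined by the irreducible polynomial f of degree m."""
    # Carry-less (XOR) schoolbook multiplication.
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    # Reduce p modulo f; the product has degree at most 2m - 2.
    for i in range(2 * m - 2, m - 1, -1):
        if p & (1 << i):
            p ^= f << (i - m)
    return p

# Example in GF(2^4) with f(x) = x^4 + x + 1 (0b10011): x * x^3 = x^4 = x + 1.
assert gf2m_mul(0b0010, 0b1000, 0b10011, 4) == 0b0011
```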

A Polynomial-Time Algorithm for Breaking the McEliece Public-Key Cryptosystem (McEliece 공개키 암호체계의 암호해독을 위한 Polynomial-Time 알고리즘)

  • Park, Chang-Seop
    • Proceedings of the Korea Institute of Information Security and Cryptology Conference
    • /
    • 1991.11a
    • /
    • pp.40-48
    • /
    • 1991
  • A new cryptanalytic attack on the McEliece public-key cryptosystem is presented. While existing cryptanalytic algorithms have exponential-time complexity, the algorithm presented here has polynomial-time complexity. The work is motivated by the fact that every linear code has a systematic generator matrix. From the public generator matrix, a new trapdoor generator matrix that can be used for cryptanalysis is derived through a sequence of transformation-matrix multiplications that play the role of Gauss-Jordan elimination. The computational complexity of the proposed algorithm is dominated by the binary matrix multiplications used to derive the systematic trapdoor generator matrix. Decoding to correct the deliberately introduced errors is then performed using the parity-check matrix, which is easily obtained from the systematic generator matrix.

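The attack sketched above rests on bringing the public generator matrix into systematic form over GF(2). Below is a minimal sketch of that Gauss-Jordan step only, on a hypothetical toy matrix; the surrounding cryptanalysis (deriving the trapdoor matrix and decoding the deliberate errors) is not reproduced.

```python
# Gauss-Jordan elimination over GF(2): row-reduce a k x n binary generator
# matrix toward systematic form [I | P]. Rows are lists of 0/1.

def systematize_gf2(g):
    g = [row[:] for row in g]
    k, n = len(g), len(g[0])
    col = 0
    for r in range(k):
        # Find a pivot row with a 1 in the current column.
        pivot = None
        while col < n:
            pivot = next((i for i in range(r, k) if g[i][col]), None)
            if pivot is not None:
                break
            col += 1
        if pivot is None:
            break
        g[r], g[pivot] = g[pivot], g[r]
        # Eliminate the pivot column from every other row (Gauss-Jordan).
        for i in range(k):
            if i != r and g[i][col]:
                g[i] = [x ^ y for x, y in zip(g[i], g[r])]
        col += 1
    return g

# Toy 2 x 4 generator matrix reduced to [I | P].
print(systematize_gf2([[1, 1, 0, 1], [1, 0, 1, 1]]))   # [[1, 0, 1, 1], [0, 1, 1, 0]]
```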

Reduced Complexity Signal Detection for OFDM Systems with Transmit Diversity

  • Kim, Jae-Kwon;Heath Jr. Robert W.;Powers Edward J.
    • Journal of Communications and Networks
    • /
    • v.9 no.1
    • /
    • pp.75-83
    • /
    • 2007
  • Orthogonal frequency division multiplexing (OFDM) systems with multiple transmit antennas can exploit space-time block coding on each subchannel for reliable data transmission. Space-time coded OFDM systems, however, are very sensitive to time-variant channels because the channels need to be static over multiple OFDM symbol periods. In this paper, we propose to mitigate the channel variations with a frequency-domain linear filter that exploits the sparse structure of the system matrix in the frequency domain. Our approach has reduced complexity compared with alternative approaches based on time-domain block-linear filters. Simulation results demonstrate that the proposed frequency-domain block-linear filter reduces the computational complexity by more than a factor of ten at the cost of a small performance degradation, compared with a time-domain block-linear filter.
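
In the common two-transmit-antenna case, the per-subchannel space-time block code referred to above is the Alamouti scheme, whose detection assumes the channel is constant over two consecutive OFDM symbol periods; this is exactly the assumption that time-variant channels break and that the paper's frequency-domain filter compensates for. The sketch below shows only the textbook static-channel encoding and combining on a single subcarrier (the two-antenna case is an assumption here; the paper's filter is not reproduced).

```python
# Alamouti space-time block coding on one flat subcarrier, static channel.
import numpy as np

def alamouti_encode(s1, s2):
    # Rows: two symbol periods, columns: two transmit antennas.
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    # Linear combining for a channel (h1, h2) that is static over both periods.
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    gain = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / gain, s2_hat / gain

h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j        # channel gains from the two antennas
s1, s2 = 1 + 1j, -1 + 1j                # two transmitted symbols
x = alamouti_encode(s1, s2)
r1 = h1 * x[0, 0] + h2 * x[0, 1]        # received in symbol period 1 (noise-free)
r2 = h1 * x[1, 0] + h2 * x[1, 1]        # received in symbol period 2
print(alamouti_combine(r1, r2, h1, h2)) # recovers (s1, s2)
```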

A SYN flooding attack detection approach with hierarchical policies based on self-information

  • Sun, Jia-Rong;Huang, Chin-Tser;Hwang, Min-Shiang
    • ETRI Journal
    • /
    • v.44 no.2
    • /
    • pp.346-354
    • /
    • 2022
  • The SYN flooding attack is widely used in cyberattacks because it paralyzes the network by exhausting system and bandwidth resources. This paper proposes a self-information approach for detecting SYN flooding attacks and provides a detection algorithm with a hierarchical policy over the detection time domain. Compared with other detection methods based on entropy measurement, the proposed approach detects SYN flooding attacks more efficiently, offering a low misjudgment rate, a hierarchical detection policy, and low time complexity. Furthermore, we propose a detection algorithm that operates under limited system resources. The time complexity of our approach is only O(log n), lower than that of other approaches, and its misjudgment rate is lower as well. The approach can therefore detect denial-of-service/distributed denial-of-service attacks and prevent SYN flooding attacks.
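
The statistic named above is self-information, I(x) = -log2 p(x): an interval whose traffic pattern is very improbable under the normal baseline carries high self-information and can be flagged. The paper's exact statistic, thresholds, and hierarchical policy are not given in the abstract, so the sketch below only illustrates the quantity itself, with made-up probabilities.

```python
# Self-information of an observed event, in bits.
import math

def self_information(p):
    """I(x) = -log2 p(x) for an event with probability p under the baseline model."""
    return -math.log2(p) if p > 0 else float("inf")

# Hypothetical baseline probabilities of the per-interval traffic pattern
# (e.g., the observed ratio of half-open SYN_RECV connections to all connections).
print(self_information(0.4))    # ~1.3 bits: a typical interval
print(self_information(0.001))  # ~10 bits: a highly anomalous interval, raise an alarm
```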

On the Complex-Valued Recursive Least Squares Escalator Algorithm with Reduced Computational Complexity

  • Kim, Nam-Yong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.5C
    • /
    • pp.521-526
    • /
    • 2009
  • In this paper, a complex-valued recursive least squares escalator (RLS-ESC) filter algorithm with reduced computational complexity for complex-valued signal processing applications is presented. The local tap weight of the RLS-ESC algorithm is updated by incrementing its old value by an amount equal to the local estimation error times the local gain scalar, where the gain scalar is obtained from the local input autocorrelation calculated at the previous time step. By deriving a new gain scalar that can be calculated from the current local input autocorrelation, the computational complexity is reduced. Compared with the complex-valued version of the RLS-ESC algorithm, the computational complexity of the proposed method is reduced by 50% without performance degradation, and is even lower than that of the LMS-ESC. Simulation results for complex channel equalization with 64-QAM modulation demonstrate that the proposed algorithm has superior convergence and constellation performance.
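
The update described above has the form: new weight = old weight + (local estimation error) x (local gain scalar), with the gain obtained from a running estimate of the local input autocorrelation. The real-valued sketch below shows one such scalar recursive least squares stage with exponential forgetting; it follows that description only and is neither the paper's complex-valued escalator structure nor its reduced-complexity gain.

```python
# One scalar RLS stage: exact recursive least squares for a single weight,
# with forgetting factor lam.

def esc_stage_update(w, r, x, d, lam=0.99, eps=1e-8):
    """Return updated (weight, autocorrelation estimate, local error)."""
    e = d - w * x            # local estimation error
    r = lam * r + x * x      # local input autocorrelation (power) estimate
    g = x / (r + eps)        # local gain scalar
    w = w + g * e            # weight update: old value + error * gain
    return w, r, e

w, r = 0.0, 0.0
for x, d in [(1.0, 0.5), (0.8, 0.4), (-1.2, -0.6)]:   # toy data with d = 0.5 * x
    w, r, e = esc_stage_update(w, r, x, d)
print(w)   # converges toward 0.5
```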

A General Analysis and Complexity Reduction for the Lattice Transversal Joint Adaptive Filter

  • Yoo, Jae-Ha
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.2035-2038
    • /
    • 2002
  • The necessity of filter-coefficient compensation for the lattice transversal joint (LTJ) adaptive filter is explained in a general and straightforward way by analyzing the filter as a time-varying transform-domain adaptive filter. A method for reducing the computational complexity of the coefficient compensation is also proposed, and its effectiveness is verified through experiments using artificial and real speech signals. The proposed adaptive filter reduces the computational complexity of the coefficient compensation by 95%, and when the filter is applied to an acoustic echo canceller with 1000 taps, the total complexity is reduced by 82%.


Task Complexity of Movement Skills for Robots (로봇 운동솜씨의 작업 복잡도)

  • Kwon, Woo-Young;Suh, Il-Hong;Lee, Jun-Goo;You, Bum-Jae;Oh, Sang-Rok
    • The Journal of Korea Robotics Society
    • /
    • v.7 no.3
    • /
    • pp.194-204
    • /
    • 2012
  • Measuring the task complexity of a movement skill is important for evaluating how difficult a task is for an autonomous robot to learn and/or imitate. Although many complexity measures have been proposed in research areas such as neuroscience, physics, computer science, and biology, little attention has been paid to robotic tasks. To measure the complexity of robotic tasks, we propose an information-theoretic measure of the task complexity of movement skills. By modeling proprioceptive as well as exteroceptive sensor data as a multivariate Gaussian distribution, the movements of a task can be described by a probabilistic model. In addition, the complexity of temporal variations is modeled by sampling in time and treating the samples as individual random variables. To evaluate the proposed complexity measure, several experiments are performed on real robotic movement tasks.
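
Since the abstract models the sensor data at each sampled instant as a multivariate Gaussian, a natural information-theoretic building block is the differential entropy H = (1/2) log((2*pi*e)^k det(Sigma)) of the fitted Gaussian. The paper's actual complexity measure is not specified in the abstract; the sketch below only computes this assumed ingredient from synthetic data.

```python
# Differential entropy of a multivariate Gaussian fitted to sensor samples.
import numpy as np

def gaussian_entropy(samples):
    """Entropy in nats of a Gaussian fitted to the rows of `samples` (one row per time step)."""
    sigma = np.cov(samples, rowvar=False)      # k x k sample covariance
    k = sigma.shape[0]
    _, logdet = np.linalg.slogdet(sigma)
    return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)

# Hypothetical joint proprioceptive/exteroceptive samples (200 time steps, 3 channels).
data = np.random.default_rng(0).normal(size=(200, 3))
print(gaussian_entropy(data))
```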

Rule of Combination Using Expanded Approximation Algorithm (확장된 근사 알고리즘을 이용한 조합 방법)

  • Moon, Won Sik
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.9 no.3
    • /
    • pp.21-30
    • /
    • 2013
  • Powell-Miller theory is a good method for expressing and handling incorrect information, but it has the limitation that it takes too much time to apply to real situations, because its computational complexity grows exponentially. Accordingly, there have been several attempts to reduce the computational complexity, but they have the side effect of lowering the certainty factor. This study proposes an expanded approximation algorithm, which considers both the smallest supersets and the largest subsets, expanding the basic space into a space that includes the inverse set in order to reduce the approximation error. Using the proposed algorithm, the basic probability assignment value of each subset is allotted and added to the basic probability assignment values of the sets related to that subset, so that the newly created subsets are approximated more efficiently. As a result, the certainty value based on the basic probability assignment comes close to the actual optimal result, confidence in the correctness of the result is obtained, and the computational complexity is reduced. With the proposed algorithm, the exact information a system needs can thus be obtained.
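
The abstract above manipulates basic probability assignments over subsets and combines them with a rule of combination whose cost grows exponentially in the size of the underlying set. As assumed background, the sketch below shows a classical, exact (unapproximated) rule of combination of this kind, Dempster's rule, for two such assignments; the Powell-Miller framework and the expanded approximation algorithm themselves are not reproduced.

```python
# Dempster's rule of combination for two basic probability assignments (BPAs),
# each given as a dict {frozenset of hypotheses: mass}.

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb      # mass assigned to contradictory pairs
    # Renormalize by the non-conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.3, frozenset({"a", "b"}): 0.2}
print(combine(m1, m2))   # {'a'}: ~0.756, {'b'}: ~0.146, {'a','b'}: ~0.098
```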