• Title/Summary/Keyword: Computational complexity

Fast Detection of Distributed Global Scale Network Attack Symptoms and Patterns in High-speed Backbone Networks

  • Kim, Sun-Ho;Roh, Byeong-Hee
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.2 no.3
    • /
    • pp.135-149
    • /
    • 2008
  • Traditional attack detection schemes based on packets or flows have very high computational complexity. Network-based anomaly detection schemes can reduce this complexity, but they are limited in identifying the patterns of distributed global-scale network attacks. In this paper, we propose an efficient and fast method for detecting the symptoms of distributed global-scale network attacks in high-speed backbone networks. The proposed method operates at the aggregate traffic level, so it has much lower computational complexity and can be deployed in very high-speed backbone networks. In addition, it can identify attack patterns, such as attacks whose target is a certain host or the backbone infrastructure itself, through collaboration among the edge routers of the backbone network. The effectiveness of the proposed method is demonstrated via simulation.
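
The abstract's key point is that working on aggregate traffic statistics, rather than per-packet or per-flow state, keeps the detector cheap enough for backbone speeds. As a rough, generic illustration of aggregate-level detection (not the scheme proposed by the authors), the sketch below flags volume anomalies from per-interval aggregate byte counts using an exponentially weighted moving average baseline; the function name, the EWMA statistic, the 3-sigma rule, and the warm-up period are all assumptions of this sketch.

```python
# Illustrative only: a minimal aggregate-level volume-anomaly detector.
# The EWMA baseline and the 3-sigma rule are assumptions of this sketch,
# not the detection method proposed in the paper.

def detect_aggregate_anomalies(byte_counts, alpha=0.1, k=3.0, warmup=5):
    """Return indices of intervals whose aggregate byte count deviates
    from an EWMA baseline by more than k estimated standard deviations."""
    mean, var = float(byte_counts[0]), 0.0
    alarms = []
    for i, x in enumerate(byte_counts[1:], start=1):
        std = var ** 0.5
        if i >= warmup and std > 0 and abs(x - mean) > k * std:
            alarms.append(i)                     # symptom: abnormal aggregate volume
        diff = x - mean
        mean += alpha * diff                     # EWMA of the mean
        var = (1 - alpha) * (var + alpha * diff * diff)  # EWMA of the variance
    return alarms

# Example: the surge in interval 6 is flagged; ordinary fluctuations are not.
print(detect_aggregate_anomalies([100, 101, 99, 102, 98, 100, 500, 101]))  # -> [6]
```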

Advanced Block Matching Algorithm for Motion Estimation and Motion Compensation

  • Cho, Hyo-Moon;Cho, Sang-Bock
    • Proceedings of the KIEE Conference
    • /
    • 2007.04a
    • /
    • pp.23-25
    • /
    • 2007
  • The partial distortion elimination (PDE) scheme is used to reduce the computational complexity of the sum of absolute differences (SAD), since SAD calculation takes up a large portion of video compression time. In PDE-based motion estimation (ME), it is ideal for the partial SAD to grow large as early as possible in the summation. Traditional scan-order methods require long operation times and have high operational complexity because they rely on division or multiplication. In this paper, we introduce a new scan order and search order that use only adders. We define an average measure called the rough average value (RAVR), which reduces computational complexity and increases operational speed, thereby improving SAD performance. The RAVR is also used to decide the search-order sequence: when the difference between the RAVR of the current block and that of a candidate block is small, the candidate block has a high probability of being a suitable match. Our proposed algorithm combines these two concepts and provides improved SAD performance together with easy hardware implementation.
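
Partial distortion elimination itself is a standard early-termination trick: the SAD of a candidate block is accumulated a row at a time and the candidate is abandoned as soon as the running sum reaches the best SAD found so far, which is why a scan order that accumulates large differences first terminates bad candidates earlier. The sketch below shows plain row-wise PDE inside a full-search loop in Python/NumPy; the fixed row-wise scan and the function names are assumptions of this illustration, and the RAVR-based scan and search ordering proposed in the paper is not reproduced here.

```python
import numpy as np

def sad_with_pde(cur_block, cand_block, best_sad):
    """Row-wise SAD with partial distortion elimination (PDE):
    stop as soon as the partial sum already reaches best_sad."""
    partial = 0
    for row in range(cur_block.shape[0]):
        partial += np.abs(cur_block[row].astype(int) -
                          cand_block[row].astype(int)).sum()
        if partial >= best_sad:          # candidate cannot win: abandon early
            return None
    return partial

def full_search_pde(cur_block, ref_frame, top, left, search_range=8):
    """Full-search motion estimation around (top, left) using PDE."""
    n = cur_block.shape[0]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue
            sad = sad_with_pde(cur_block, ref_frame[y:y + n, x:x + n], best_sad)
            if sad is not None:          # reached only when the full SAD beats best_sad
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```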

A Modified PTS Algorithm for PAPR Reduction in OFDM Signal

  • Kim, Jeong-Goo;Wu, Xiaojun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.3C
    • /
    • pp.163-169
    • /
    • 2011
  • The partial transmit sequence (PTS) algorithm is known as one of the most efficient ways to reduce the peak-to-average power ratio (PAPR) in orthogonal frequency division multiplexing (OFDM) systems. The PTS algorithm, however, requires a large amount of computation to implement, so there is a trade-off between PAPR-reduction performance and computational complexity. In this paper, the PAPR-reduction performance and computational complexity of PTS algorithms are analyzed and compared through computer simulations. Subsequently, a new PTS algorithm is proposed that offers a reasonable compromise when both PAPR-reduction performance and computational complexity are considered simultaneously.
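
In the standard PTS formulation, the input symbol vector is partitioned into disjoint sub-blocks, each sub-block is transformed by an IFFT, a phase factor from a small set is applied to each sub-block, and the phase combination giving the lowest PAPR is transmitted; the search over phase combinations is exactly where the computational burden comes from. The sketch below shows that exhaustive-search baseline, which is the cost the paper's modified algorithm aims to reduce; the interleaved partition, the phase set {+1, -1}, the oversampling factor, and the function names are assumptions of this illustration.

```python
import numpy as np
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

def pts_exhaustive(symbols, n_subblocks=4, phase_set=(1, -1), oversample=4):
    """Baseline PTS: interleaved partition, per-sub-block IFFT,
    exhaustive search over all phase-factor combinations."""
    n = len(symbols)
    partial = []
    for v in range(n_subblocks):
        sub = np.zeros(n, dtype=complex)
        sub[v::n_subblocks] = symbols[v::n_subblocks]   # interleaved sub-block
        partial.append(np.fft.ifft(sub, n * oversample))
    best_papr, best_signal, best_phases = np.inf, None, None
    for phases in product(phase_set, repeat=n_subblocks):  # |phase_set|**V candidates
        candidate = sum(p * s for p, s in zip(phases, partial))
        papr = papr_db(candidate)
        if papr < best_papr:
            best_papr, best_signal, best_phases = papr, candidate, phases
    return best_signal, best_phases, best_papr

# Example: random QPSK symbols on 64 subcarriers.
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
_, phases, papr = pts_exhaustive(qpsk)
print(phases, round(papr, 2), "dB")
```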

A Practical Implementation of the LTJ Adaptive Filter and Its Application to the Adaptive Echo Canceller (LTJ 적응필터의 실용적 구현과 적응반향제거기에 대한 적용)

  • Yoo, Jae-Ha
    • Speech Sciences
    • /
    • v.11 no.2
    • /
    • pp.227-235
    • /
    • 2004
  • In this paper, we propose a new practical implementation of the lattice transversal joint (LTJ) adaptive filter that exploits the speech codec's information, and we apply it to adaptive echo cancellation to verify its efficiency. Real-time implementation of the LTJ adaptive filter is very difficult because compensating the filter coefficients is computationally expensive. When a speech codec is used, however, the complexity can be reduced, since linear predictive coding (LPC) coefficients are updated every frame or sub-frame instead of every sample. Furthermore, the LPC coefficients can be obtained from the speech decoder and transformed into reflection coefficients, so the computational complexity of updating the reflection coefficients is reduced. The effectiveness of the proposed LTJ adaptive filter is verified by experiments on the convergence and tracking performance of the adaptive echo canceller.
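
The complexity saving hinges on reusing the decoder's LPC analysis: LPC coefficients arrive once per frame or sub-frame and can be converted into the reflection coefficients that the lattice section of the LTJ filter needs. One textbook way to perform that conversion is the step-down (backward Levinson) recursion, sketched below; this is a standard transformation and not necessarily the exact procedure used in the paper, and the function name is illustrative.

```python
def lpc_to_reflection(a):
    """Step-down (backward Levinson) recursion.

    `a` = [a_1, ..., a_p] for a predictor polynomial
    A(z) = 1 + a_1 z^-1 + ... + a_p z^-p (leading 1 omitted).
    Returns the reflection coefficients [k_1, ..., k_p].
    """
    a = list(a)
    k = [0.0] * len(a)
    for m in range(len(a), 0, -1):
        km = a[m - 1]                             # k_m = a_m of the order-m predictor
        k[m - 1] = km
        if abs(km) >= 1.0:
            raise ValueError("unstable predictor: |k_m| >= 1")
        a = [(a[i] - km * a[m - 2 - i]) / (1.0 - km * km)
             for i in range(m - 1)]               # coefficients of the order-(m-1) predictor
    return k

# Round-trip check: reflection coefficients [0.5, -0.3] give LPC [0.35, -0.3]
# via the step-up recursion, and the step-down recovers [0.5, -0.3].
print(lpc_to_reflection([0.35, -0.3]))
```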

Complexity Results for the Design Problem of Content Distribution Networks

  • Choi, Byung-Cheon;Chung, Jibok
    • Management Science and Financial Engineering
    • /
    • v.20 no.2
    • /
    • pp.7-12
    • /
    • 2014
  • Content Delivery Networks (CDNs) have evolved to overcome network bottlenecks and improve user-perceived Quality of Service (QoS). A CDN replicates contents from the origin server to replica servers to reduce the load on the origin server. CDN providers aim to achieve acceptable performance at the least cost, including storage space and processing power. In this paper, we introduce a new optimization model for the CDN design problem that considers user-perceived QoS and single-path (non-bifurcated) routing constraints, and we analyze its computational complexity for some special cases.

Frequency-Domain RLS Algorithm Based on the Block Processing Technique (블록 프로세싱 기법을 이용한 주파수 영역에서의 회귀 최소 자승 알고리듬)

  • 박부견;김동규;박원석
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.240-240
    • /
    • 2000
  • This paper presents two algorithms based on the concept of the frequency-domain adaptive filter (FDAF). First, the frequency-domain recursive least squares (FRLS) algorithm with the overlap-save filtering technique is introduced; it minimizes the sum of exponentially weighted squared errors in the frequency domain. The overlap-save method is used to eliminate the discrepancy between linear convolution and circular convolution. Second, a sliding method for the data blocks is studied to overcome the processing delay and computational load of the FRLS algorithm. The extended data block is twice as long as the filter tap length, and the block can be slid by varying amounts through an adjustable hopping index. By selecting the hopping index appropriately, we can trade off convergence rate against computational complexity. When the input signal is highly correlated and the target FIR filter is long, the FRLS algorithm based on the block processing technique performs well in both convergence rate and computational complexity.
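
Overlap-save is what reconciles the FFT's circular convolution with the linear convolution a FIR filter actually performs: each FFT block of length 2N (twice the tap length N) overlaps the previous one, and only the tail samples that are free of circular wrap-around are kept. The sketch below illustrates these block and hop mechanics for a fixed-coefficient filter; the recursive least-squares weight update of the FRLS algorithm itself is not reproduced, and the function name and hop limit are assumptions of this illustration.

```python
import numpy as np

def overlap_save_fir(x, h, hop=None):
    """Frequency-domain FIR filtering with overlap-save blocks.
    The FFT block is twice the tap length; `hop` new output samples are
    produced per block (hop <= len(h) + 1 keeps them free of circular wrap)."""
    n = len(h)
    L = 2 * n                                   # block size: twice the tap length
    hop = n if hop is None else hop
    assert 1 <= hop <= n + 1
    H = np.fft.fft(h, L)                        # fixed here; adapted recursively in FRLS
    xp = np.concatenate([np.zeros(L - hop), np.asarray(x, dtype=float)])
    out = []
    for start in range(0, len(x), hop):
        block = xp[start:start + L]
        if len(block) < L:                      # zero-pad the final partial block
            block = np.concatenate([block, np.zeros(L - len(block))])
        yb = np.fft.ifft(np.fft.fft(block) * H).real
        out.append(yb[L - hop:])                # keep only the uncorrupted tail
    return np.concatenate(out)[:len(x)]

# Sanity check against direct linear convolution.
rng = np.random.default_rng(1)
x, h = rng.standard_normal(1000), rng.standard_normal(32)
assert np.allclose(overlap_save_fir(x, h, hop=16), np.convolve(x, h)[:len(x)])
```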

Improved Method for the Macroblock-Level Deblocking Scheme

  • Le, Thanh Ha;Jung, Seung-Won;Baek, Seung-Jin;Ko, Sung-Jea
    • ETRI Journal
    • /
    • v.33 no.2
    • /
    • pp.194-200
    • /
    • 2011
  • This paper presents a deblocking method for video compression in which blocking artifacts are effectively extracted and eliminated using both spatial-domain and frequency-domain operations. First, we use a probabilistic approach to analyze the performance of the conventional macroblock-level deblocking scheme. Then, based on the results of this analysis, an algorithm that reduces the computational complexity is introduced. Experimental results show that the proposed algorithm outperforms conventional video coding methods in terms of computational complexity while maintaining coding efficiency.

Algorithm for Efficient D-Class Computation (효율적인 D-클래스 계산을 위한 알고리즘)

  • Han, Jae-Il
    • Journal of Information Technology Services
    • /
    • v.6 no.1
    • /
    • pp.151-158
    • /
    • 2007
  • D-class computation requires multiplying three Boolean matrices for every possible triple of $n{\times}n$ Boolean matrices and searching for equivalent $n{\times}n$ Boolean matrices according to a specific equivalence relation. Even multiplying all $n{\times}n$ Boolean matrices by themselves has exponential time complexity, and D-class computation has remained an unsolved problem because of this computational burden. Vector-based multiplication theory shows that the multiplication of three Boolean matrices for every possible triple of $n{\times}n$ Boolean matrices can be performed much more efficiently. However, D-class computation also requires computing the equivalence classes in addition to the efficient multiplication. This paper discusses a theory and an algorithm for efficient D-class computation and shows execution results of the algorithm.
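
A vector-based view of Boolean matrix multiplication can be illustrated with bitmask rows: an $n{\times}n$ Boolean matrix is stored as n integers whose bits mark the 1-entries of each row, so a Boolean row-times-matrix product becomes an OR of the rows of the second matrix selected by that row's bits. The sketch below shows this representation and the resulting Boolean product; it illustrates the general bit-parallel technique only, not the specific vector-based multiplication theory or the D-class equivalence search developed in the paper.

```python
def to_rowmasks(matrix):
    """Encode an n x n Boolean matrix as one integer bitmask per row."""
    return [sum(1 << j for j, bit in enumerate(row) if bit) for row in matrix]

def bool_matmul(a_rows, b_rows):
    """Boolean product C = A * B on the bitmask representation:
    row i of C is the OR of the rows of B selected by the 1-bits of row i of A."""
    out = []
    for row in a_rows:
        acc, j = 0, 0
        while row:
            if row & 1:
                acc |= b_rows[j]
            row >>= 1
            j += 1
        out.append(acc)
    return out

# Example: A relates 0 -> 1 and B relates 1 -> 2, so A*B relates 0 -> 2.
A = to_rowmasks([[0, 1, 0], [0, 0, 0], [0, 0, 0]])
B = to_rowmasks([[0, 0, 0], [0, 0, 1], [0, 0, 0]])
print(bool_matmul(A, B))   # [4, 0, 0] -> only entry (0, 2) is 1
```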

IEM-based Tone Injection for Peak-to-Average Power Ratio Reduction of Multi-carrier Modulation

  • Zhang, Yang;Zhao, Xiangmo;Hou, Jun;An, Yisheng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.9
    • /
    • pp.4502-4517
    • /
    • 2019
  • The tone injection (TI) scheme significantly reduces the peak-to-average power ratio (PAPR) of multicarrier modulation (MCM). However, the computational complexity of the TI scheme rises exponentially with the number of extra-freedom constellation points. Therefore, a novel immune-evolutionary-mechanism (IEM) based TI scheme is proposed in this paper to reduce the computational complexity. By restraining undesirable degeneracy during processing, the IEM scheme can dramatically increase the population fitness. Monte Carlo results show that the proposed IEM-based TI scheme achieves significant PAPR and BER improvements with low complexity.
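
In classical tone injection, an M-QAM constellation is expanded so that each symbol has equivalent representatives offset by multiples of $D = d\sqrt{M}$ (with d the minimum constellation spacing); moving a few symbols to an equivalent point can cancel a time-domain peak without affecting the receiver decision after a modulo operation, and the search over which tones to move and by which offset is what the immune evolutionary mechanism is meant to accelerate. The sketch below is a simple greedy baseline for that search, not the IEM procedure of the paper; the 16-QAM parameters, the offset set, and the greedy acceptance rule are assumptions of this illustration.

```python
import numpy as np

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def tone_injection_greedy(X, d=2.0, M=16):
    """Greedy baseline for tone injection: for each subcarrier in turn, try
    shifting its QAM symbol by +-D or +-jD, where D = d*sqrt(M), and keep
    any shift that lowers the PAPR of the time-domain signal."""
    D = d * np.sqrt(M)
    offsets = (D, -D, 1j * D, -1j * D)
    X = X.astype(complex).copy()
    best = papr_db(np.fft.ifft(X))
    for k in range(len(X)):
        for off in offsets:
            trial = X.copy()
            trial[k] += off
            papr = papr_db(np.fft.ifft(trial))
            if papr < best:              # keep the move only if the peak improves
                best, X = papr, trial
    return X, best

# Example with random 16-QAM symbols on 64 subcarriers (levels {-3,-1,1,3}, d = 2).
rng = np.random.default_rng(2)
levels = np.array([-3, -1, 1, 3], dtype=float)
X0 = rng.choice(levels, 64) + 1j * rng.choice(levels, 64)
print(round(papr_db(np.fft.ifft(X0)), 2), "->", round(tone_injection_greedy(X0)[1], 2), "dB")
```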

A Hierarchical Mode Decision Method for H.264 Intra Image Coding

  • Liu, Jiantan;Yoo, Kook-Yeol
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2007.05a
    • /
    • pp.297-300
    • /
    • 2007
  • Due to its impressive compression performance, the H.264 video coder is prominent in the video communications industry, for example in DMB (Digital Multimedia Broadcasting) and PMP (Portable Multimedia Player) devices. The main bottleneck in using the H.264 coder lies in its computational complexity, which is about five times that of the market-leading MPEG-4 Simple Profile codec. In this paper, we propose a hierarchical mode decision method for intraframe coding that reduces the computational complexity of the encoder. By determining the mode group early, the proposed algorithm can skip computationally demanding operations in the mode decision. The proposed algorithm consists of three steps: $16{\times}16$ mode decision, $4{\times}4$ mode-group decision, and a final mode decision among the modes of the selected group. Simulation results show that the proposed algorithm achieves a 20% to 50% reduction in computational complexity compared with the conventional algorithm.
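
The complexity saving of a hierarchical decision comes from pruning: instead of exhaustively evaluating the rate-distortion cost of every Intra16x16 and Intra4x4 mode, a cheap measure on the macroblock selects a mode group first, and only the modes in that group are examined further. The sketch below illustrates that early-grouping idea with a simple variance test (smooth macroblocks go to the 16x16 group, detailed ones to the 4x4 group); the variance criterion, the threshold, the SAD cost, and the assumed predictor callables `predict_16x16` and `predict_4x4` are illustrative assumptions, not the three-step decision procedure of the paper.

```python
import numpy as np

def sad(a, b):
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def choose_intra_mode(mb, predict_16x16, predict_4x4, var_threshold=100.0):
    """Hierarchical-style intra mode decision (illustrative): pick a mode
    group from a cheap smoothness measure, then evaluate only that group.

    `predict_16x16(mode)` and `predict_4x4(mode, by, bx)` are hypothetical
    callables returning the predicted 16x16 macroblock / 4x4 block."""
    if np.var(mb) < var_threshold:
        # Smooth macroblock: evaluate only the four Intra16x16 modes.
        costs = [sad(mb, predict_16x16(m)) for m in range(4)]
        return ("16x16", int(np.argmin(costs)), min(costs))
    # Detailed macroblock: evaluate the nine Intra4x4 modes per 4x4 block.
    total, modes = 0, []
    for by in range(0, 16, 4):
        for bx in range(0, 16, 4):
            blk = mb[by:by + 4, bx:bx + 4]
            costs = [sad(blk, predict_4x4(m, by, bx)) for m in range(9)]
            modes.append(int(np.argmin(costs)))
            total += min(costs)
    return ("4x4", modes, total)
```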