• Title/Summary/Keyword: Complexity analysis


Scalability Analysis of MANET IPv6 Address Auto-configuration Protocols based on Link Error Modeling (링크 에러 모델링을 이용한 MANET 환경에서의 IPv6 자동주소 설정 방식의 확장성 분석)

  • Kim, Sang-Chul
    • Journal of KIISE:Information Networking
    • /
    • v.35 no.4
    • /
    • pp.282-291
    • /
    • 2008
  • This paper focuses on the message complexity analysis of MANET AAPs with respect to link errors generated by mobile wireless nodes. To obtain the message complexity of the AAPs as a function of the link error probability ($P_e$), an enhancement is proposed in which a retransmission limit (S) is computed for error recovery from the link error probability, and the corresponding retransmission control procedures are incorporated into each AAP. O-notation is used to analyze the upper bound on the number of messages generated by a MANET group of N nodes. Over a link error probability range of $P_e = 0$ to 0.8, the AAPs investigated are Strong DAD, Weak DAD with a proactive routing protocol (WDP), Weak DAD with an on-demand routing protocol (WDO), and MANETconf. The simulation results and message complexity analysis show that, under nominal conditions, WDP has the lowest message complexity, closely followed by WDO; MANETconf is higher than WDO, and Strong DAD is the most complex of the four AAPs.
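
A minimal sketch (Python, not from the paper) of how a retransmission limit S might be derived from a link error probability $P_e$: choose the smallest S for which the probability that all S attempts fail stays below a target residual failure rate. The target delta and the independent-error assumption are mine, not the paper's.

```python
import math

def retransmission_limit(p_e: float, delta: float = 1e-3) -> int:
    """Smallest number of attempts S such that the chance that every
    attempt is lost (p_e ** S) falls below the residual target delta.
    Assumes independent link errors; p_e in [0, 1)."""
    if p_e <= 0.0:
        return 1                      # error-free link: one transmission suffices
    return max(1, math.ceil(math.log(delta) / math.log(p_e)))

# Example over the paper's range P_e = 0 .. 0.8
for p_e in (0.0, 0.2, 0.4, 0.6, 0.8):
    print(p_e, retransmission_limit(p_e))
```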

DESIGN-ORIENTED AERODYNAMIC ANALYSES OF HELICOPTER ROTOR IN HOVER (정지비행 헬리콥터 로터의 설계를 위한 공력해석)

  • Jung H.J.;Kim T.S.;Son C.H.;Joh C.Y.
    • Journal of computational fluids engineering
    • /
    • v.11 no.3 s.34
    • /
    • pp.1-7
    • /
    • 2006
  • Euler and Navier-Stokes flow analyses of a helicopter rotor in hover were performed as low- and high-fidelity analysis models, respectively, for future multidisciplinary design optimization (MDO). These design-oriented analyses possess several attributes recommended for analysis models used in MDO, such as variable complexity, sensitivity-computation capability, and modularity. To make PC-based analyses feasible for both fidelity levels, the flow domain was reduced by applying a farfield boundary condition based on a three-dimensional point sink derived from simple momentum theory, together with a periodic boundary condition in the azimuthal direction. Thrust, torque, and their sensitivities were correlated between the low- and high-complexity models to evaluate the applicability of these analysis models in an MDO process. The low-fidelity Euler analysis model was found to predict inaccurate sensitivity derivatives at relatively high angles of attack.
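
A minimal worked example (Python, not the paper's code) of the simple-momentum-theory quantity behind such a point-sink farfield condition: the hover induced velocity v_i = sqrt(T / (2·rho·A)) obtained from rotor thrust, air density, and disk area. The numbers below are illustrative only.

```python
import math

def hover_induced_velocity(thrust_n: float, rho: float, rotor_radius_m: float) -> float:
    """Simple momentum theory: v_i = sqrt(T / (2 * rho * A)),
    where A = pi * R^2 is the rotor disk area."""
    disk_area = math.pi * rotor_radius_m ** 2
    return math.sqrt(thrust_n / (2.0 * rho * disk_area))

# Illustrative values (not from the paper): 20 kN thrust, sea-level air, 5 m radius
print(hover_induced_velocity(20e3, 1.225, 5.0))   # induced velocity in m/s
```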

Dynamic Control Algorithm of GOP Structure based on Picture Complexity (영상 복잡도에 기반한 GOP구조의 동적 제어 알고리즘)

  • 문영득;최금수
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.53 no.4
    • /
    • pp.258-264
    • /
    • 2004
  • This paper proposes a method that adapts the GOP structure in real time according to picture complexity, without pre-analysis or additional delay. The proposed algorithm first calculates the complexity of the pictures and then the complexity ratio between the P picture and the I picture ($X_p/X_i$). A suitable M value is selected by comparing this ratio with predetermined thresholds, and the bits used and vbv_delay for the GOP are calculated according to the selected M. Experimental results show that the prediction error is reduced compared with a fixed GOP structure. Since the complexity distribution differs from sequence to sequence, the applied threshold limits are adjusted as well.
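
A minimal sketch (Python) of the kind of threshold-based selection the abstract describes: compute the P-to-I complexity ratio X_p/X_i and map it to an M value through fixed thresholds. The specific thresholds and candidate M values are illustrative assumptions, not taken from the paper.

```python
def select_gop_m(x_p: float, x_i: float,
                 thresholds=(0.3, 0.6),   # illustrative cut points, not the paper's
                 m_values=(1, 2, 3)) -> int:
    """Map the picture-complexity ratio X_p / X_i to an M value.
    A low ratio (P pictures predict well) allows a larger M."""
    ratio = x_p / x_i
    if ratio < thresholds[0]:
        return m_values[2]   # easy-to-predict content: widest anchor spacing
    if ratio < thresholds[1]:
        return m_values[1]
    return m_values[0]       # complex motion: anchor pictures every frame

print(select_gop_m(x_p=1200.0, x_i=5000.0))  # ratio 0.24 -> M = 3
```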

Computational Complexity Analysis of Cascade AOA Estimation Algorithm Based on FMCCA Antenna

  • Kim, Tae-yun;Hwang, Suk-seung
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.11 no.2
    • /
    • pp.91-98
    • /
    • 2022
  • In next-generation wireless communication systems, beamforming based on a massive antenna is one of the core technologies for transmitting and receiving huge amounts of data efficiently and accurately. For high-performance, highly reliable beamforming, the Angle of Arrival (AOA) of the desired signal incident on the antenna must be estimated accurately. Although employing a massive antenna with a large number of elements enhances AOA estimation accuracy, it increases the computational complexity so dramatically that real-time communication becomes difficult. To address this problem, AOA estimation algorithms with low computational complexity for massive antennas have been actively studied. In this paper, we compute and analyze the computational complexity of the cascade AOA estimation algorithm based on the Flexible Massive Concentric Circular Array (FMCCA). In addition, its computational complexity is compared with that of conventional AOA estimation techniques such as the high-resolution Multiple Signal Classification (MUSIC) algorithm and the Only Beamspace MUSIC (OBM) algorithm.
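
A minimal sketch (Python) of the kind of operation counting such a comparison rests on, using the standard element-space MUSIC steps (covariance estimation, eigendecomposition, spectrum search). The coarse cost model below is a textbook-style assumption; it does not reproduce the paper's cascade or OBM counts.

```python
def music_complexity(num_elements: int, num_snapshots: int, grid_points: int,
                     num_sources: int) -> int:
    """Rough count of complex multiplications for element-space MUSIC:
    covariance estimate ~ M^2 * K, eigendecomposition ~ M^3,
    pseudospectrum search ~ G * M * (M - D) over a G-point angular grid."""
    m, k, g, d = num_elements, num_snapshots, grid_points, num_sources
    covariance = m * m * k
    evd = m ** 3
    spectrum_search = g * m * (m - d)
    return covariance + evd + spectrum_search

# Complexity grows steeply with the number of antenna elements
for m in (16, 64, 256):
    print(m, music_complexity(m, num_snapshots=100, grid_points=1800, num_sources=2))
```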

A complexity analysis of a "pragmatic" relaxation method for the combinatorial optimization with a side constraint (단일 추가제약을 갖는 조합최적화문제를 위한 실용적 완화해법의 계산시간 분석)

  • 홍성필
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.25 no.1
    • /
    • pp.27-36
    • /
    • 2000
  • We perform a computational complexity analysis of a heuristic algorithm proposed in the literature for combinatorial optimization problems extended with a single side constraint. Although this view was not taken in the original work, the algorithm is a disguised version of an optimal Lagrangian dual solution technique. It has also been observed in experiments to be a very efficient heuristic, producing near-optimal solutions to the primal problems. In particular, the number of iterations grows sublinearly in the network node size, so the heuristic appears especially suitable for applications such as routing with semi-real-time requirements. The goal of this paper is to establish a polynomial worst-case complexity for the algorithm; the obtained complexity bound supports the observed sublinear growth in the number of iterations.
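
A minimal sketch (Python) of Lagrangian relaxation for a problem with one side constraint, the general setting the abstract refers to: bisection on a single multiplier lambda, with an oracle that minimizes cost + lambda*weight. The bisection scheme and tolerance are generic assumptions; the paper's specific update rule is not reproduced here.

```python
from typing import Callable, Tuple

def lagrangian_bisection(oracle: Callable[[float], Tuple[float, float]],
                         budget: float, lam_hi: float = 1e6,
                         tol: float = 1e-6, max_iter: int = 100) -> float:
    """Search the multiplier lam >= 0 of the relaxed side constraint by bisection.
    `oracle(lam)` returns (cost, weight) of a minimizer of cost + lam * weight;
    its weight is non-increasing in lam, so bisection locates the crossing point."""
    lam_lo = 0.0
    for _ in range(max_iter):
        lam = 0.5 * (lam_lo + lam_hi)
        _, weight = oracle(lam)
        if weight > budget:         # side constraint violated: penalize more
            lam_lo = lam
        else:                       # feasible: try a smaller penalty
            lam_hi = lam
        if lam_hi - lam_lo < tol:
            break
    return lam_hi

# Toy oracle: choose between two "paths" (cost, weight) = (10, 8) or (14, 4), budget 5
paths = [(10.0, 8.0), (14.0, 4.0)]
oracle = lambda lam: min(paths, key=lambda p: p[0] + lam * p[1])
print(lagrangian_bisection(oracle, budget=5.0))   # converges near lam = 1
```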


Complexity Analysis for Implementation of the ISO/IEEE 11073 PHD Standards (ISO/IEEE 11073 PHD 표준 구현을 통한 복잡도 분석)

  • Kim, Sang-Kon;Yoo, Done-Sik;Kim, Tae-Kon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4C
    • /
    • pp.307-312
    • /
    • 2012
  • In this paper, we perform a complexity analysis of an implementation of the ISO/IEEE 11073 Personal Health Device (PHD) standards in order to determine the system resources required when the standards are implemented on an embedded system. Based on implemented programs complying with the PHD standards for a weighing scale, a blood pressure monitor, and a glucose meter, selected from among the various personal health devices, we derive pseudo-code and then build a complexity analysis model from two points of view: program memory space and data memory space. Because system resources and capabilities are strongly restricted in personal health devices, this work is useful for estimating the required system resources.
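
A minimal sketch (Python) of the two-view memory accounting the abstract describes: sum per-module program-memory (code) and data-memory (buffers/state) estimates for a device agent. The module names and byte counts are purely illustrative assumptions, not measurements from the paper.

```python
# Illustrative per-module estimates in bytes: (program memory, data memory).
# Module names and numbers are hypothetical, not taken from the paper.
MODULES = {
    "association_state_machine": (2048, 256),
    "apdu_encoder_decoder":      (4096, 512),
    "measurement_reporting":     (1536, 384),
    "transport_binding":         (1024, 128),
}

def total_footprint(modules: dict) -> tuple:
    """Return (total program memory, total data memory) over all modules."""
    program = sum(p for p, _ in modules.values())
    data = sum(d for _, d in modules.values())
    return program, data

prog, data = total_footprint(MODULES)
print(f"program memory: {prog} B, data memory: {data} B")
```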

NEW COMPLEXITY ANALYSIS OF IPM FOR $P_*({\kappa})$ LCP BASED ON KERNEL FUNCTIONS

  • Cho, Gyeong-Mi;Kim, Min-Kyung;Lee, Yong-Hoon
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.12 no.4
    • /
    • pp.227-238
    • /
    • 2008
  • In this paper we extend a primal-dual interior point algorithm for linear optimization (LO) problems to $P_*(\kappa)$ linear complementarity problems (LCPs) ([1]). We define proximity functions and search directions based on kernel functions $\psi(t)=\frac{t^{p+1}-1}{p+1}-\log t$, $p\in[0,1]$, which generalize the kernel function in [16]. This is the first use of this class of kernel functions in the complexity analysis of interior point methods (IPMs) for $P_*(\kappa)$ LCPs. We show that if a strictly feasible starting point is available, the new large-update primal-dual interior point algorithms for $P_*(\kappa)$ LCPs have $O((1+2\kappa)n\log\frac{n}{\varepsilon})$ complexity, similar to that in [16]. For small-update methods, we obtain $O((1+2\kappa)\sqrt{n}\log\frac{n}{\varepsilon})$, the best known complexity so far.
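
A minimal sketch (Python) of the kernel function quoted in the abstract, psi(t) = (t^(p+1) - 1)/(p+1) - log t, and the induced proximity measure Psi(v) = sum_i psi(v_i). The proximity-measure form is the standard kernel-based IPM construction, assumed here rather than quoted from the paper.

```python
import math

def psi(t: float, p: float) -> float:
    """Kernel function psi(t) = (t**(p+1) - 1)/(p + 1) - log(t), p in [0, 1], t > 0.
    psi(1) = 0 and psi is minimized at t = 1."""
    return (t ** (p + 1) - 1.0) / (p + 1.0) - math.log(t)

def psi_prime(t: float, p: float) -> float:
    """First derivative psi'(t) = t**p - 1/t, used to define the search direction."""
    return t ** p - 1.0 / t

def proximity(v, p: float) -> float:
    """Barrier/proximity measure Psi(v) = sum_i psi(v_i) for the scaled iterate v."""
    return sum(psi(v_i, p) for v_i in v)

print(psi(1.0, 0.5), psi_prime(1.0, 0.5))   # both 0 at the center t = 1
print(proximity([0.8, 1.0, 1.3], p=0.5))
```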


Korean Maintainability Prediction Methodology Reflecting System Complexity (시스템 복잡도를 반영한 한국형 정비도 예측 방법론)

  • Kwon, Jae-Eon;Hur, Jang-Wook
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.20 no.4
    • /
    • pp.119-126
    • /
    • 2021
  • During the development of a weapon system, the concept of maintainability is used to quantitatively predict and analyze maintenance time. However, owing to the complexity of a weapon system, the standard maintenance time predicted during development differs significantly from the time measured while operating the equipment after development. According to the analysis presented in this paper, the maintenance time can be predicted by considering the system's complexity on the basis of the military specifications, using Part B of Procedure II and Method B of Procedure V. The maintenance work elements affected by system complexity were identified with the analytic hierarchy process (AHP), and the system-complexity weights of those work elements were calculated with the Delphi method of expert surveys. Based on MIL-HDBK-470A and MIL-HDBK-472, this paper presents a Korean-style maintainability prediction method that reflects the system complexity of weapon systems.
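
A minimal sketch (Python) of the standard AHP weighting step mentioned in the abstract: derive priority weights from a pairwise comparison matrix via its principal eigenvector. The comparison matrix below is an illustrative placeholder for maintenance work elements, not data from the paper.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Return AHP priority weights: the principal eigenvector of the
    (reciprocal) pairwise comparison matrix, normalized to sum to 1."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    weights = np.abs(principal)
    return weights / weights.sum()

# Hypothetical 3x3 comparison of maintenance work elements (Saaty 1-9 scale)
comparison = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
print(ahp_weights(comparison))   # roughly [0.65, 0.23, 0.12]
```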

An Analysis of Effective Throughput in Distributed Wireless Scheduling

  • Radwan, Amr
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.2
    • /
    • pp.155-162
    • /
    • 2016
  • Several distributed scheduling policies have been proposed with the objective of attaining the maximum throughput region or a guaranteed fraction of it. These policies consider only the theoretical throughput and do not account for the throughput lost to the time complexity of implementing an algorithm in practice. We therefore propose a novel concept called effective throughput to characterize the actual throughput while taking time complexity into account. Effective throughput can be viewed as the data actually transmitted, excluding the control message overhead. Numerical results demonstrate that in practical scheduling, time complexity significantly affects throughput; throughput degrades when the time complexity is high.
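
A minimal sketch (Python) of one plausible reading of effective throughput: scale the theoretical throughput by the fraction of each scheduling frame left after the control/computation overhead that grows with the algorithm's time complexity. The linear overhead model is my assumption, not the paper's definition.

```python
def effective_throughput(theoretical_rate: float, frame_time: float,
                         overhead_per_frame: float) -> float:
    """Effective throughput = theoretical rate scaled by the fraction of the
    frame that actually carries data, i.e. (frame - overhead) / frame.
    Returns 0 if the overhead consumes the whole frame."""
    useful = max(0.0, frame_time - overhead_per_frame)
    return theoretical_rate * useful / frame_time

# Higher time complexity -> larger per-frame overhead -> lower effective throughput
for overhead_ms in (0.1, 1.0, 5.0):
    print(overhead_ms, effective_throughput(10e6, frame_time=10.0,
                                            overhead_per_frame=overhead_ms))
```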

Low Complexity Decoder for Space-Time Turbo Codes

  • Lee Chang-Woo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.4C
    • /
    • pp.303-309
    • /
    • 2006
  • By combining space-time diversity with iterative turbo codes, space-time turbo codes (STTCs) provide powerful error correction capability. However, the multi-path transmission and iterative decoding structure of STTCs make the decoder very complex. In this paper, we propose a low-complexity decoder that can be used to decode STTCs as well as general iterative codes such as turbo codes. Efficient implementation of the backward recursion and the log-likelihood ratio (LLR) update in the proposed algorithm improves computational efficiency, and approximating the joint LLR calculation with the approximate ratio (AR) algorithm reduces the complexity even further. A complexity analysis and computer simulations over the Rayleigh fading channel show that the proposed algorithm requires less than 40% of the additions needed by the conventional Max-Log-MAP algorithm while providing the same overall performance.
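
A minimal sketch (Python) of the Max-Log approximation that the comparison baseline relies on: the exact LLR uses a log-sum-exp over branch metrics, while Max-Log replaces it with a plain maximum. The proposed decoder's AR approximation itself is not reproduced here, and the metric lists are illustrative.

```python
import math

def llr_exact(metrics_bit1, metrics_bit0):
    """Exact LLR: log-sum-exp over branch metrics for bit=1 minus bit=0."""
    lse = lambda xs: math.log(sum(math.exp(x) for x in xs))
    return lse(metrics_bit1) - lse(metrics_bit0)

def llr_max_log(metrics_bit1, metrics_bit0):
    """Max-Log approximation: replace log-sum-exp with a plain maximum,
    trading a small accuracy loss for far fewer operations."""
    return max(metrics_bit1) - max(metrics_bit0)

# Illustrative branch metrics (e.g. alpha + gamma + beta sums for each trellis branch)
m1, m0 = [1.2, 0.4, -0.3], [0.9, -1.1, 0.2]
print(llr_exact(m1, m0), llr_max_log(m1, m0))   # ~0.32 vs 0.30
```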