• Title/Summary/Keyword: Time Complexity

MODELS AND SOLUTION METHODS FOR SHORTEST PATHS IN A NETWORK WITH TIME-DEPENDENT FLOW SPEEDS

  • Sung, Ki-Seok;Bell, Michael G-H
    • Management Science and Financial Engineering
    • /
    • v.4 no.2
    • /
    • pp.1-13
    • /
    • 1998
  • The shortest path problem in time-dependent networks, where the travel time of each link depends on the time interval, is not realistic, since the model and its solution violate the Non-passing Property (NPP, often referred to as FIFO) of real phenomena. Furthermore, solving the problem requires much more computation and memory than the general shortest path problem. A new model for time-dependent networks, in which the flow speed of each link depends on the time interval, is suggested. The model is more realistic since its solution maintains the NPP. Solving the problem requires only a little more computation, and the same memory, as the general shortest path problem. A solution algorithm modified from Dijkstra's label-setting algorithm is presented. We extend this model to the minimum expected time path problem in time-dependent stochastic networks, where the flow speed of each link changes statistically in each time interval. A solution method using a Kth-shortest path algorithm is presented.

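The label-setting approach described in the abstract can be illustrated with a minimal time-dependent Dijkstra sketch. This is a generic illustration, not the authors' algorithm: the graph layout, the `travel_time` callback, and the FIFO assumption on links are all ours.

```python
import heapq

def time_dependent_dijkstra(graph, source, t0, travel_time):
    """Earliest-arrival search on a time-dependent network.

    graph: {node: [(neighbor, link_id), ...]}
    travel_time(link_id, t): travel time when entering the link at time t.
    Label setting stays valid only if links satisfy the non-passing
    (FIFO) property: departing later never means arriving earlier.
    """
    arrival = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if t > arrival.get(u, float("inf")):
            continue  # stale queue entry
        for v, link in graph.get(u, ()):
            ta = t + travel_time(link, t)
            if ta < arrival.get(v, float("inf")):
                arrival[v] = ta
                heapq.heappush(pq, (ta, v))
    return arrival
```

Because each node is still settled exactly once, the only overhead relative to ordinary Dijkstra is the per-edge travel-time evaluation, which matches the abstract's claim of near-identical complexity.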

Time Complexity Analysis of SPIHT(Set Partitioning in Hierarchy Trees) Image Coding Algorithm (SPIHT 영상코딩 알고리즘의 시간복잡도 해석)

  • 박영석
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.4 no.1
    • /
    • pp.36-40
    • /
    • 2003
  • A number of embedded wavelet image coding methods have been proposed since the introduction of the EZW (Embedded Zerotree Wavelet) algorithm. A common characteristic of these methods is that they use fundamental ideas found in the EZW algorithm. One of these methods is the SPIHT (Set Partitioning in Hierarchical Trees) algorithm, which became very popular since it achieves equal or better performance than EZW without using an arithmetic encoder. The SPIHT algorithm is computationally very simple, yet it provides excellent numerical and visual results. However, evaluations of its time complexity have so far been limited to relative experimental comparisons, and a strict analysis had not been carried out until now. In this paper, we strictly analyze the processing time complexity of the SPIHT algorithm and prove that the time complexity for processing one bit-plane is $O(n\log_2 n)$ in the worst case.

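The per-bit-plane processing that the abstract analyzes can be sketched, in much simplified form, as a significance pass over wavelet coefficients. This toy sketch omits SPIHT's actual set-partitioning lists (LIP/LIS/LSP) and the refinement pass; it only illustrates what "processing one bit-plane" means.

```python
import math

def significance_passes(coeffs):
    """Toy embedded bit-plane coder (significance bits only).

    Scans from the most significant bit-plane downward; a real SPIHT
    coder would replace the linear scan with set partitioning and add
    a refinement pass for already-significant coefficients.
    """
    peak = max(abs(c) for c in coeffs)
    k = int(math.floor(math.log2(peak))) if peak >= 1 else 0
    bits = []
    significant = set()
    while k >= 0:
        threshold = 1 << k
        for i, c in enumerate(coeffs):
            if i in significant:
                continue  # refinement bits omitted in this sketch
            if abs(c) >= threshold:
                significant.add(i)
                bits.append((i, 1, c > 0))  # newly significant + sign
            else:
                bits.append((i, 0))
        k -= 1
    return bits
```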

Fuzzy Linguistic Approach for Evaluating Task Complexity in Nuclear Power Plant (원자력발전소에서의 작업복잡도를 평가하기 위한 퍼지기반 작업복잡도 지수의 개발)

  • Jung Kwang-Tae;Jung Won-dea;Park Jin-Kyun
    • Journal of the Korean Society of Safety
    • /
    • v.20 no.1 s.69
    • /
    • pp.126-132
    • /
    • 2005
  • The purpose of this study is to propose a method for evaluating task complexity using CIFs (Complexity Influencing Factors). We developed a method in which CIFs are used to evaluate task complexity through a fuzzy linguistic approach; that is, a fuzzy linguistic multi-criteria method to assess task complexity in a specific task situation was proposed. The CIF ratings were assessed in linguistic terms, which are described by fuzzy numbers with triangular and trapezoidal membership functions. A fuzzy weighted average algorithm, based on the extension principle, was employed to aggregate these fuzzy numbers. Finally, the method was validated experimentally. The results show that TCIM (Task Complexity Index Method) is an efficient method for evaluating task complexity, because the correlation coefficient between task performance time and TCI (Task Complexity Index) was 0.699.
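The fuzzy weighted average step can be sketched for its simplest case: triangular fuzzy ratings with crisp weights, where the average is again triangular and can be computed vertex-wise. The paper's method is more general (fuzzy weights aggregated via the extension principle over alpha-cuts), and the ratings and weights below are invented for illustration.

```python
def fuzzy_weighted_average(ratings, weights):
    """Weighted average of triangular fuzzy numbers (a, b, c).

    Assumes crisp (ordinary) weights, so the result can be computed
    vertex by vertex; with fuzzy weights, an extension-principle
    algorithm over alpha-cuts is needed instead.
    """
    total = sum(weights)
    return tuple(
        sum(w * r[j] for r, w in zip(ratings, weights)) / total
        for j in range(3)
    )
```

For example, averaging hypothetical ratings (1, 2, 3) and (3, 4, 5) with equal weights gives the triangular number (2, 3, 4).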

Related-key Impossible Boomerang Cryptanalysis on LBlock-s

  • Xie, Min;Zeng, Qiya
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.11
    • /
    • pp.5717-5730
    • /
    • 2019
  • LBlock-s is the core block cipher of the authenticated encryption algorithm LAC; it uses the same structure as LBlock together with an improved key schedule algorithm that has better diffusion. Using the differential properties of the key schedule algorithm and a cryptanalytic technique that combines impossible boomerang attacks with related-key attacks, a 15-round related-key impossible boomerang distinguisher is constructed for the first time. Based on this distinguisher, an attack on 22-round LBlock-s is proposed by adding 4 rounds on the top and 3 rounds at the bottom. The time complexity is only about $2^{68.76}$ 22-round encryptions and the data complexity is about $2^{58}$ chosen plaintexts. Compared with published cryptanalysis results on LBlock-s, this represents a sharp decrease in time complexity with an ideal data complexity.

Efficient Detection of Space-Time Block Codes Based on Parallel Detection

  • Kim, Jeong-Chang;Cheun, Kyung-Whoon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.2A
    • /
    • pp.100-107
    • /
    • 2011
  • Algorithms based on the QR decomposition of the equivalent space-time channel matrix have proved useful in the detection of V-BLAST systems. In particular, the parallel detection (PD) algorithm offers ML-approaching performance for up to 4 transmit antennas with reasonable complexity. We show that when directly applied to STBCs, the PD algorithm may suffer a rather significant SNR degradation relative to ML detection, especially at high SNRs. Simply extending the PD algorithm to allow $p \geq 2$ candidate layers, i.e., p-PD, regains almost all of the loss, but at a significant increase in complexity. Here, we propose a simplification of the p-PD algorithm specific to STBCs without a corresponding sacrifice in performance. The proposed algorithm yields significant complexity reductions for moderate- to high-order modulations.
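The QR-based detection that the PD algorithm builds on reduces to successive interference cancellation through an upper-triangular system. The sketch below shows only that building block, not the PD/p-PD candidate-layer machinery; the channel matrix and constellation used in it are arbitrary examples.

```python
import numpy as np

def qr_sic_detect(H, y, constellation):
    """QR-based successive interference cancellation (V-BLAST style).

    With H = QR, rotate y by Q^H and back-substitute through the
    upper-triangular R, slicing each layer to the nearest
    constellation point before cancelling it from the layers above.
    """
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    n = H.shape[1]
    s = np.zeros(n, dtype=complex)
    for k in range(n - 1, -1, -1):
        residual = z[k] - R[k, k + 1:] @ s[k + 1:]
        estimate = residual / R[k, k]
        s[k] = min(constellation, key=lambda c: abs(c - estimate))
    return s
```

In a noiseless test the detector recovers the transmitted layers exactly; the PD algorithm improves on this scheme by re-running the cancellation with multiple candidates for the first detected layer.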

The Effects of Task Complexity for Text Summarization by Korean Adult EFL Learners

  • Lee, Haemoon;Park, Heesoo
    • Journal of English Language & Literature
    • /
    • v.57 no.6
    • /
    • pp.911-938
    • /
    • 2011
  • The present study examined the effect of two variables of task complexity, reasoning demand and time pressure, drawn respectively from the resource-directing and resource-dispersing dimensions of Robinson's (2001) framework of task classification. Reasoning demand was operationalized as the two types of text to read and summarize, expository and argumentative. Time pressure was operationalized as the two modes of performance, oral and written. Six university students summarized the two types of text orally, and twenty-four students from the same school summarized them in written form. Results from t tests and ANCOVA showed that in the oral mode, reasoning demand tended to heighten the complexity of the language used in the summary, in competition with accuracy, but this effect disappeared in the written mode. We interpret this to mean that the degree of time pressure is not the only difference between the oral and written modes; rather, the two modes may be fundamentally different cognitive tasks. Robinson's (2001) and Skehan's (1998) models were differentially supported by the oral mode of the tasks but not by the written mode.

A Study on the Type and the Facilities in Compositeness of the Domestic Discount Store (국내 대형할인점의 복합화에 따른 유형과 시설에 관한 연구)

  • 문선욱;양정필
    • Korean Institute of Interior Design Journal
    • /
    • no.41
    • /
    • pp.137-145
    • /
    • 2003
  • This research analyzed space planning in connection with complexity, one of the new changes in discount stores, with the goal of predicting the direction of space planning in the upcoming complexity era. The research was conducted as follows. First, we identified what changes were socially and economically required of the overall distribution industry. Second, the characteristics and situation of discount stores were scrutinized. Third, the complexity status of domestic stores was classified and its types were derived. Fourth, the time-series changes and uses were analyzed. The analysis reveals that the types of complexity can be divided by location and by adjustment to environmental changes. The time-series analysis shows that total operating area, the number of parked cars, and the tenant ratio increased dramatically in 2000 and 2003, and a correlation analysis among these factors shows that the tenant ratio has a strong correlation with the other two. Self-complexity takes the basic form of living facilities; complexity with other facilities combines cultural, sales, educational, and administrative ones; and mass complexity is merged with stadiums, parks, or station sites. The concept of the complex shopping mall, realizing one-stop shopping and convenience, will continue in the days to come, and continued study of large-scale shopping spaces is desirable in preparation for future lifestyles.

ESTIMATING THE OPERATOR'S PERFORMANCE TIME OF EMERGENCY PROCEDURAL TASKS BASED ON A TASK COMPLEXITY MEASURE

  • Jung, Won-Dea;Park, Jin-Kyun
    • Nuclear Engineering and Technology
    • /
    • v.44 no.4
    • /
    • pp.415-420
    • /
    • 2012
  • For managing human performance under emergencies in a nuclear power plant, it is important to understand the amount of time required to execute an emergency procedural task in a high-stress situation. However, the time to execute an emergency procedural task is highly dependent upon expert judgment due to the lack of actual data. This paper proposes an analytical method to estimate the operator's performance time (OPT) for a procedural task, based on a measure of task complexity (TACOM). The proposed method is an equation that uses the TACOM score as a variable, so the OPT of a procedural task can be calculated whenever its TACOM score is available. The validity of the proposed equation is demonstrated by comparing estimated OPTs with observed OPTs for emergency procedural tasks in a steam generator tube rupture scenario.

A QUALITATIVE METHOD TO ESTIMATE HSI DISPLAY COMPLEXITY

  • Hugo, Jacques;Gertman, David
    • Nuclear Engineering and Technology
    • /
    • v.45 no.2
    • /
    • pp.141-150
    • /
    • 2013
  • There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches that address display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity, and that adding these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

Iterative Multiple Symbol Differential Detection for Turbo Coded Differential Unitary Space-Time Modulation

  • Vanichchanunt, Pisit;Sangwongngam, Paramin;Nakpeerayuth, Suvit;Wuttisittikulkij, Lunchakorn
    • Journal of Communications and Networks
    • /
    • v.10 no.1
    • /
    • pp.44-54
    • /
    • 2008
  • In this paper, iterative multiple-symbol differential detection for turbo-coded differential unitary space-time modulation using an a posteriori probability (APP) demodulator is investigated. Two approaches of different complexity, based on linear prediction, are presented to exploit the temporal correlation of fading in the APP demodulator. The first approach takes account of all possible previous symbols for linear prediction, and thus requires an increase in the number of trellis states of the APP demodulator. In contrast, the second approach applies the Viterbi algorithm to assist the APP demodulator in estimating the previous symbols, allowing much lower decoding complexity. These two approaches provide a trade-off between performance and complexity. Simulations show that both approaches offer significant BER performance improvement over conventional differential detection under both correlated slow and fast Rayleigh flat-fading channels. In addition, when the first approach is compared to a modified bit-interleaved turbo-coded differential space-time modulation counterpart of comparable decoding complexity, the proposed decoding structure offers a performance gain of over 3 dB at a BER of $10^{-5}$.
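The linear-prediction step shared by both approaches amounts to solving the normal (Wiener-Hopf) equations built from the fading autocorrelation. The sketch below is generic: the autocorrelation function passed in is an assumed AR(1)-style example, whereas a land-mobile channel model would typically use a Jakes-type autocorrelation.

```python
import numpy as np

def predictor_coefficients(autocorr, order):
    """One-step linear predictor for a stationary fading process.

    Solves R a = r, where R[i][j] = autocorr(|i - j|) is the
    autocorrelation matrix of the past samples and r[k] = autocorr(k + 1)
    correlates those samples with the one being predicted.
    """
    R = np.array([[autocorr(abs(i - j)) for j in range(order)]
                  for i in range(order)])
    r = np.array([autocorr(k + 1) for k in range(order)])
    return np.linalg.solve(R, r)
```

For an AR(1)-like autocorrelation rho**k, the order-2 predictor collapses to [rho, 0]: the most recent sample carries all the predictable information, which is why truncating the symbol history, as the second approach effectively does, can cost so little performance.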