Title/Summary/Keyword: Complexity analysis

NEW COMPLEXITY ANALYSIS OF PRIMAL-DUAL IPMS FOR P* LCPS BASED ON LARGE UPDATES

  • Cho, Gyeong-Mi;Kim, Min-Kyung
    • Bulletin of the Korean Mathematical Society / v.46 no.3 / pp.521-534 / 2009
  • In this paper we present new large-update primal-dual interior point algorithms for $P_*$ linear complementarity problems (LCPs) based on a class of kernel functions, $\psi(t)=\frac{t^{p+1}-1}{p+1}+\frac{1}{\sigma}(e^{\sigma(1-t)}-1)$, $p \in [0,1]$, $\sigma \geq 1$. This is the first use of this class of kernel functions in the complexity analysis of interior point methods (IPMs) for $P_*$ LCPs. We show that if a strictly feasible starting point is available, then the new large-update primal-dual interior point algorithms for $P_*$ LCPs have an $O((1+2\kappa)n^{\frac{1}{p+1}}\log n\log\frac{n}{\varepsilon})$ complexity bound. When $p=1$, we obtain $O((1+2\kappa)\sqrt{n}\log n\log\frac{n}{\varepsilon})$ complexity, which is so far the best known complexity for large-update methods. The kernel function and the $p=1$ specialization are restated below.
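
Restated in display form, using only what the abstract itself gives, with the $p = 1$ substitution made explicit:

    \psi(t) = \frac{t^{p+1}-1}{p+1} + \frac{1}{\sigma}\left(e^{\sigma(1-t)}-1\right),
        \qquad p \in [0,1], \quad \sigma \ge 1,

    O\!\left((1+2\kappa)\, n^{\frac{1}{p+1}} \log n \log\frac{n}{\varepsilon}\right)
        \;\stackrel{p=1}{=}\; O\!\left((1+2\kappa)\, \sqrt{n}\, \log n \log\frac{n}{\varepsilon}\right).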

Computational Complexity Analysis of Cascade AOA Estimation Algorithm Based on Massive Array Antenna Configuration

  • Tae-yun Kim;Suk-seung Hwang
    • Journal of Positioning, Navigation, and Timing / v.13 no.3 / pp.277-287 / 2024
  • In satellite systems, efficient communication and observation require identifying the specific arrival points of signals using onboard antenna systems. When a massive array antenna is used to estimate the angle of arrival (AOA) of signals, traditional high-performance AOA estimation algorithms such as Multiple Signal Classification (MUSIC) incur extremely high complexity because of the large number of individual antenna elements. Although a cascade AOA estimation algorithm combining CAPON and beamspace-MUSIC was recently proposed to reduce this computational burden, its computational complexity has not yet been compared across different massive array antenna configurations. In this paper, we analyze the computational complexity of the cascade algorithm for various massive array antennas and determine an optimal antenna configuration for efficient AOA estimation in satellite systems. A rough sketch of the coarse-then-fine search idea follows.
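
The abstract gives no cost formulas, but the cascade idea can be illustrated with a toy count of angle-spectrum evaluations: a coarse CAPON scan over the whole field of view first narrows the sector in which the fine beamspace-MUSIC scan runs. All grid and sector sizes below are invented for illustration, not values from the paper:

    # Illustrative-only count of angle-spectrum evaluations for a
    # coarse-then-fine (cascade) search versus one fine full-field scan.
    fov_deg = 180.0      # field of view (assumed)
    fine_step = 0.1      # fine search resolution in degrees (assumed)
    coarse_step = 2.0    # coarse CAPON resolution in degrees (assumed)
    sector_deg = 4.0     # sector kept for the fine stage (assumed)

    full_scan = fov_deg / fine_step
    cascade = fov_deg / coarse_step + sector_deg / fine_step
    print(f"spectrum evaluations: full {full_scan:.0f}, cascade {cascade:.0f}")
    # -> full 1800, cascade 130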

ANALYSIS OF THE UPPER BOUND ON THE COMPLEXITY OF LLL ALGORITHM

  • PARK, YUNJU;PARK, JAEHYUN
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.20 no.2 / pp.107-121 / 2016
  • We analyze the complexity of the LLL algorithm invented by Lenstra, Lenstra, and Lovász, a well-known lattice reduction (LR) algorithm previously known to require $O(N^4\log B)$ multiplications (or $O(N^5(\log B)^2)$ bit operations) for a lattice basis matrix $H \in \mathbb{R}^{M\times N}$, where $B$ is the maximum squared norm among the columns of $H$. This implies that the complexity of the lattice reduction algorithm depends only on the matrix size and the lattice basis norm. However, the structure of a given lattice matrix (i.e., the correlation among its columns), which is usually measured by its condition number or determinant, can also affect the computational complexity of the LR algorithm. In this paper, to see how the matrix structure affects the LLL algorithm's complexity, we derive a tighter upper bound on the complexity of the LLL algorithm in terms of the condition number and determinant of a given lattice matrix. We also analyze the complexities of the LLL updating/downdating schemes using the proposed upper bound. A minimal sketch of the classical algorithm appears below.
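
For intuition about what such bounds are counting, here is a minimal textbook LLL sketch in exact rational arithmetic with $\delta = 3/4$; it follows the classical Lenstra-Lenstra-Lovász description, not the updating/downdating variants analyzed in the paper, and it favors clarity over operation count:

    from fractions import Fraction

    def gram_schmidt(basis):
        # Gram-Schmidt orthogonalization with mu coefficients, in exact
        # rational arithmetic so the Lovasz test below is decided exactly.
        n = len(basis)
        gs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = [Fraction(x) for x in basis[i]]
            for j in range(i):
                denom = sum(x * x for x in gs[j])
                mu[i][j] = sum(Fraction(a) * b
                               for a, b in zip(basis[i], gs[j])) / denom
                v = [a - mu[i][j] * b for a, b in zip(v, gs[j])]
            gs.append(v)
        return gs, mu

    def lll(basis, delta=Fraction(3, 4)):
        basis = [list(b) for b in basis]
        n, k = len(basis), 1
        sq = lambda v: sum(x * x for x in v)
        while k < n:
            for j in range(k - 1, -1, -1):          # size-reduce b_k
                _, mu = gram_schmidt(basis)
                q = round(mu[k][j])
                if q:
                    basis[k] = [a - q * b for a, b in zip(basis[k], basis[j])]
            gs, mu = gram_schmidt(basis)
            # Lovasz condition: advance if it holds, otherwise swap back.
            if sq(gs[k]) >= (delta - mu[k][k - 1] ** 2) * sq(gs[k - 1]):
                k += 1
            else:
                basis[k], basis[k - 1] = basis[k - 1], basis[k]
                k = max(k - 1, 1)
        return basis

    print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))  # classic toy basis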

Predicting CEFR Levels in L2 Oral Speech, Based on Lexical and Syntactic Complexity

  • Hu, Xiaolin
    • Asia Pacific Journal of Corpus Research / v.2 no.1 / pp.35-45 / 2021
  • With the widespread adoption of the Common European Framework of Reference (CEFR) scales, many studies attempt to apply them in routine teaching and rater training, while more evidence regarding criterial features at different CEFR levels is still urgently needed. The current study explores complexity features that distinguish and predict CEFR proficiency levels in oral performance. Using a quantitative, corpus-based approach, this research analyzed lexical and syntactic complexity features over 80 transcriptions (covering the A1, A2, and B1 CEFR levels, plus native speakers) based on an interview test, the Standard Speaking Test (SST). ANOVA and correlation analyses were conducted to exclude insignificant complexity indices before the discriminant analysis. As a result, distinctive differences in complexity between CEFR speaking levels were observed, and with a combination of six major complexity features as predictors, 78.8% of the oral transcriptions were classified into the appropriate CEFR proficiency levels. This further confirms the possibility of predicting the CEFR level of L2 learners from objective linguistic features. The study can serve as an empirical reference in language pedagogy, especially for L2 learners' self-assessment and teachers' prediction of students' proficiency levels, and it offers implications for validating rating criteria and improving rating systems. A sketch of the classification step follows.
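
The abstract does not name the software used; as a minimal sketch of the classification step (discriminant analysis predicting a proficiency level from complexity indices), here is a scikit-learn version on placeholder data, with the feature matrix and labels invented rather than taken from the study:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 6))        # 80 transcriptions x 6 indices (fake)
    y = rng.integers(0, 4, size=80)     # levels A1, A2, B1, native -> 0..3

    lda = LinearDiscriminantAnalysis()
    # On real complexity indices this is where a classification rate
    # such as the study's 78.8% would come from; on noise it is ~chance.
    print(cross_val_score(lda, X, y, cv=5).mean())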

Complexity Analysis of Internet Video Coding (IVC) Decoding

  • Park, Sang-hyo;Dong, Tianyu;Jang, Euee S.
    • Journal of Multimedia Information System / v.4 no.4 / pp.179-188 / 2017
  • The Internet Video Coding (IVC) standard is due to be published by the Moving Picture Experts Group (MPEG) for various Internet applications such as Internet broadcast streaming. IVC fundamentally aims at three things: 1) forming IVC patents under a free-of-charge license, 2) reaching compression performance comparable to the AVC/H.264 constrained Baseline Profile (cBP), and 3) keeping computational complexity low enough for feasible real-time encoding and decoding. MPEG experts have worked diligently on the intellectual property rights issues for IVC, and they reported that IVC has already achieved the second goal (compression performance), even showing performance comparable to the AVC/H.264 High Profile (HP). On the complexity side, however, there has been no thorough analysis of the IVC decoder. In this paper, we analyze the time complexity of the IVC decoder by evaluating its running time. The experimental results show that IVC is 3.6 times and 3.1 times more complex than AVC/H.264 cBP under constrained set (CS) 1 and CS2, respectively. Compared to AVC/H.264 HP, IVC is 2.8 times and 2.9 times slower in decoding time under CS1 and CS2, respectively. The most critical tool to improve for a lightweight IVC decoder is the motion compensation process, which contains a resolution-adaptive interpolation filtering process. A sketch of this kind of running-time measurement follows.
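
As a sketch of the kind of running-time evaluation the abstract describes: time each decoder binary over the same bitstream several times and compare medians. The binary names and file paths below are placeholders, not artifacts from the paper:

    import statistics
    import subprocess
    import time

    def decode_time(cmd, runs=5):
        # Median wall-clock time of running a decoder command to completion.
        samples = []
        for _ in range(runs):
            t0 = time.perf_counter()
            subprocess.run(cmd, check=True, capture_output=True)
            samples.append(time.perf_counter() - t0)
        return statistics.median(samples)

    ivc = decode_time(["./ivc_decoder", "stream.ivc"])   # hypothetical binary
    avc = decode_time(["./avc_decoder", "stream.264"])   # hypothetical binary
    print(f"IVC / AVC decoding-time ratio: {ivc / avc:.1f}")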

The Cognitive Complexity of Clothing Attributes -Focused on Clothing Involvement-

  • Park Sung-Eun
    • Journal of the Korean Society of Clothing and Textiles / v.30 no.4 s.152 / pp.497-506 / 2006
  • The purpose of this study is to clarify the cognitive complexity of clothing attributes, which influences preference and purchase intentions. The subjects were 434 female college students, and a formal survey methodology was used to collect data. The data were analyzed with the SAS program using factor analysis, cluster analysis, conjoint analysis, and LISREL path analysis. The results were as follows: 1) Clothing involvement consists of an affective factor and a cognitive factor. 2) The consumers were divided into three groups according to the degree of their clothing involvement. 3) Significant differences in the cognitive complexity of clothing attributes were found among these groups.

WHAT CAN WE SAY ABOUT THE TIME COMPLEXITY OF ALGORITHMS?

  • Park, Chin-Hong
    • Journal of Applied Mathematics & Informatics / v.8 no.3 / pp.959-973 / 2001
  • We discuss one of the techniques needed to analyze algorithms, the big-O function technique. There are two measures of the efficiency of an algorithm. One is the time a computer takes to solve the problem using the algorithm when the input values are of a specified size. The other is the amount of computer memory required to implement the algorithm when the input values are of a specified size. We mainly restrict our attention to time complexity. Determining the time complexity of nonlinear problems in numerical analysis seems to be almost impossible. A small example of the operation counting behind big-O estimates follows.
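
As a small worked example of the operation counting behind big-O estimates: finding a maximum scans the input once (order $n$ comparisons), while comparing all pairs costs $n(n-1)/2$ (order $n^2$), and the gap widens quickly with input size:

    # Count basic comparisons for an O(n) scan vs an O(n^2) pairwise pass.
    def linear_ops(n):      # e.g., finding a maximum: n - 1 comparisons
        return max(n - 1, 0)

    def quadratic_ops(n):   # e.g., comparing all pairs: n(n-1)/2
        return n * (n - 1) // 2

    for n in (10, 100, 1000):
        print(n, linear_ops(n), quadratic_ops(n))
    # -> 10: 9 vs 45; 100: 99 vs 4950; 1000: 999 vs 499500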

Effects of Phonetic Complexity and Articulatory Severity on Percentage of Correct Consonants and Speech Intelligibility in Adults with Dysarthria

  • Song, HanNae;Lee, Youngmee;Sim, HyunSub;Sung, JeeEun
    • Phonetics and Speech Sciences / v.5 no.1 / pp.39-46 / 2013
  • This study examined the effects of phonetic complexity and articulatory severity on the Percentage of Correct Consonants (PCC) and speech intelligibility in adults with dysarthria. Speech samples of thirty-two words from the APAC (Assessment of Phonology and Articulation of Children) were collected from 38 dysarthric speakers at one of two levels of articulatory severity (mild or mild-moderate). PCC and speech intelligibility scores were calculated for each of the four levels of phonetic complexity. A two-way mixed ANOVA revealed that: (1) the mild-severity group showed significantly higher PCC and speech intelligibility scores than the mild-moderate group, (2) PCC at phonetic complexity level 4 was significantly lower than at the other levels, and (3) an interaction effect of articulatory severity and phonetic complexity was observed only for PCC. Pearson correlation analysis demonstrated that the degree of correlation between PCC and speech intelligibility varied depending on the level of articulatory severity and phonetic complexity. The clinical implications of these findings are discussed. A minimal statement of the PCC measure follows.
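
For reference, the PCC measure itself is a simple ratio (correctly produced target consonants over consonants attempted, times 100); the tallies in this sketch are invented:

    def pcc(correct: int, attempted: int) -> float:
        # Percentage of Correct Consonants: correct / attempted * 100.
        return 100.0 * correct / attempted

    print(f"PCC = {pcc(37, 45):.1f}%")   # e.g., 37 of 45 targets -> 82.2%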

Message Complexity Analysis of MANET Address Autoconfiguration-Single Node Joining Case

  • Kim, Sang-Chul
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.5B / pp.257-269 / 2007
  • This paper proposes a novel method for quantitative analysis of message complexity and applies it to compare the message complexity of mobile ad hoc network (MANET) address autoconfiguration protocols (AAPs). To obtain upper bounds on the message complexity of the protocols, O-notation over a MANET group of N nodes is applied. For the single-node-joining case, the message complexity of Strong DAD, Weak DAD with proactive routing protocols (WDP), Weak DAD with on-demand routing protocols (WDO), and MANETconf is derived as $n(mO(N)+O(t))$, $n(O(N)+O(t))$, $n(O(N)+2O(t))$, and $nO((t+1)N)+O(N)+O(2)$, respectively. To verify the bounds, analytical simulations quantifying the message complexity of the address autoconfiguration process under different conflict probabilities are conducted. The bounds are tabulated numerically in the sketch below.
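
The abstract leaves n, m, and t undefined; treating each O(.) term as its bare argument (constant factors dropped) and plugging in arbitrary example values gives a feel for how the four single-node-joining bounds grow with the network size N. This is purely illustrative:

    # Evaluate the four bounds quoted above with O(x) read as x.
    # The parameter values are arbitrary examples, not from the paper.
    def strong_dad(n, m, t, N): return n * (m * N + t)
    def wdp(n, m, t, N):        return n * (N + t)
    def wdo(n, m, t, N):        return n * (N + 2 * t)
    def manetconf(n, m, t, N):  return n * (t + 1) * N + N + 2

    n, m, t = 3, 2, 4           # assumed small protocol parameters
    for N in (10, 50, 100):
        row = [f(n, m, t, N) for f in (strong_dad, wdp, wdo, manetconf)]
        print(N, row)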

Identification and Organization of Task Complexity Factors Based on a Model Combining Task Design Aspects and Complexity Dimensions

  • Ham, Dong-Han
    • Journal of the Ergonomics Society of Korea / v.32 no.1 / pp.59-68 / 2013
  • Objective: The purpose of this paper is to introduce a task complexity model combining task design aspects and complexity dimensions, and to explain an approach to identifying and organizing task complexity factors based on the model. Background: Task complexity is a critical concept in describing and predicting human performance in complex systems such as nuclear power plants (NPPs). To understand the nature of task complexity, task complexity factors need to be identified and organized in a systematic manner. Although several methods have been suggested for identifying and organizing task complexity factors, it is rare to find an analytical approach based on a theoretically sound model. Method: This study regarded a task as a system to be designed. Three levels of design abstraction, namely the functional, behavioral, and structural levels of a task, characterize its design aspects. The behavioral aspect is further classified into five cognitive processing activity types (information collection, information analysis, decision and action selection, action implementation, and action feedback). The complexity dimensions describe task complexity from different perspectives: size, variety, and order/organization. Combining the design aspects and complexity dimensions of a task, we developed a model from which meaningful task complexity factors can be identified and organized in an analytic way. Results: A model consisting of two facets, concerned respectively with design aspects and complexity dimensions, was proposed, and twenty-one task complexity factors were identified and organized based on the model. Conclusion: The model and approach introduced in this paper can be effectively used for examining human performance and human-system interface design issues in NPPs. Application: The model and approach could be used for several human factors problems in NPPs, including task allocation and the design of information aiding, and could be extended to other types of complex systems such as air traffic control systems. The factor-cell count is checked in the sketch below.
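
One way to read the counts in the abstract: one functional and one structural aspect plus the five behavioral activity types give seven design-aspect cells, and crossing them with the three complexity dimensions yields exactly the twenty-one factors mentioned. That reading is an inference, not spelled out in the abstract; the sketch below just enumerates the cross product with labels of my own:

    from itertools import product

    # Design aspects: functional and structural levels, plus the five
    # behavioral (cognitive processing) activity types.
    aspects = (["functional", "structural"] +
               ["behavioral: " + a for a in (
                   "information collection", "information analysis",
                   "decision and action selection",
                   "action implementation", "action feedback")])
    dimensions = ["size", "variety", "order/organization"]

    cells = list(product(aspects, dimensions))
    print(len(cells))   # -> 21 candidate task complexity factor cells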