• Title/Summary/Keyword: complexity analysis

NEW COMPLEXITY ANALYSIS OF PRIMAL-DUAL IPMS FOR P* LCPS BASED ON LARGE UPDATES

  • Cho, Gyeong-Mi;Kim, Min-Kyung
    • Bulletin of the Korean Mathematical Society
    • /
    • v.46 no.3
    • /
    • pp.521-534
    • /
    • 2009
  • In this paper we present new large-update primal-dual interior-point algorithms for $P_*$ linear complementarity problems (LCPs) based on a class of kernel functions, ${\psi}(t)=\frac{t^{p+1}-1}{p+1}+\frac{1}{\sigma}(e^{\sigma(1-t)}-1)$, $p\in[0,1]$, $\sigma\geq 1$. This is the first use of this class of kernel functions in the complexity analysis of interior-point methods (IPMs) for $P_*$ LCPs. We show that if a strictly feasible starting point is available, then the new large-update primal-dual interior-point algorithms for $P_*$ LCPs have an $O((1+2\kappa)n^{\frac{1}{p+1}}\log n\log\frac{n}{\varepsilon})$ complexity bound. When $p=1$, we obtain $O((1+2\kappa)\sqrt{n}\log n\log\frac{n}{\varepsilon})$ complexity, which is so far the best known complexity bound for large-update methods.
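
As a rough illustration (not part of the paper), a minimal sketch of the kernel function and the iteration bound above with all hidden constants dropped; the function names are invented for this example:

```python
import math

def kernel(t: float, p: float = 1.0, sigma: float = 1.0) -> float:
    """psi(t) = (t^(p+1) - 1)/(p+1) + (exp(sigma*(1-t)) - 1)/sigma."""
    return (t ** (p + 1) - 1) / (p + 1) + (math.exp(sigma * (1 - t)) - 1) / sigma

def iteration_bound(n: int, eps: float, kappa: float, p: float = 1.0) -> float:
    """(1 + 2*kappa) * n^(1/(p+1)) * log(n) * log(n/eps), constants dropped."""
    return (1 + 2 * kappa) * n ** (1 / (p + 1)) * math.log(n) * math.log(n / eps)

print(kernel(1.0))  # 0.0: the kernel attains its minimum at the center t = 1
# p = 1 recovers the best known large-update bound O((1+2k) sqrt(n) log n log(n/eps)).
print(iteration_bound(n=10**4, eps=1e-6, kappa=0.0, p=1.0))
```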

ANALYSIS OF THE UPPER BOUND ON THE COMPLEXITY OF LLL ALGORITHM

  • PARK, YUNJU;PARK, JAEHYUN
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.20 no.2
    • /
    • pp.107-121
    • /
    • 2016
  • We analyze the complexity of the LLL algorithm, invented by Lenstra, Lenstra, and Lovász, a well-known lattice reduction (LR) algorithm previously known to have a complexity of $O(N^4\log B)$ multiplications (or $O(N^5(\log B)^2)$ bit operations) for a lattice basis matrix $H\in\mathbb{R}^{M\times N}$, where B is the maximum value among the squared norms of the columns of H. This implies that the complexity of the lattice reduction algorithm depends only on the matrix size and the lattice basis norm. However, the matrix structure (i.e., the correlation among the columns) of a given lattice matrix, which is usually measured by its condition number or determinant, can affect the computational complexity of the LR algorithm. In this paper, to see how the matrix structure can affect the LLL algorithm's complexity, we derive a tighter upper bound on the complexity of the LLL algorithm in terms of the condition number and determinant of a given lattice matrix. We also analyze the complexities of the LLL updating/downdating schemes using the proposed upper bound.
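
To make the object of the analysis concrete, here is a didactic sketch of the textbook LLL reduction with the usual parameter delta = 3/4, using exact rational arithmetic and recomputing Gram-Schmidt at each step for clarity; it is deliberately simple rather than efficient, and it is not the updating/downdating variant the paper analyzes:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Return orthogonalized vectors and the mu coefficients."""
    n = len(basis)
    ortho, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = list(basis[i])
        for j in range(i):
            mu[i][j] = dot(basis[i], ortho[j]) / dot(ortho[j], ortho[j])
            v = [vi - mu[i][j] * oj for vi, oj in zip(v, ortho[j])]
        ortho.append(v)
    return ortho, mu

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL; exact rationals, Gram-Schmidt recomputed (slow but clear)."""
    b = [[Fraction(x) for x in vec] for vec in basis]
    n, k = len(b), 1
    while k < n:
        for j in range(k - 1, -1, -1):       # size-reduce b_k against b_j
            _, mu = gram_schmidt(b)          # recomputed for clarity, not speed
            q = round(mu[k][j])
            if q != 0:
                b[k] = [bk - q * bj for bk, bj in zip(b[k], b[j])]
        ortho, mu = gram_schmidt(b)
        # Lovasz condition: ||b*_k||^2 >= (delta - mu_{k,k-1}^2) ||b*_{k-1}||^2
        if dot(ortho[k], ortho[k]) >= (delta - mu[k][k - 1] ** 2) * dot(ortho[k - 1], ortho[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return [[int(x) for x in vec] for vec in b]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
# Classic worked example; expected reduced basis: [[0, 1, 0], [1, 0, 1], [-1, 0, 2]]
```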

Predicting CEFR Levels in L2 Oral Speech, Based on Lexical and Syntactic Complexity

  • Hu, Xiaolin
    • Asia Pacific Journal of Corpus Research
    • /
    • v.2 no.1
    • /
    • pp.35-45
    • /
    • 2021
  • With the wide spread of the Common European Framework of Reference (CEFR) scales, many studies attempt to apply them in routine teaching and rater training, while more evidence regarding criterial features at different CEFR levels is still urgently needed. The current study aims to explore complexity features that distinguish and predict CEFR proficiency levels in oral performance. Using a quantitative, corpus-based approach, this research analyzed lexical and syntactic complexity features over 80 transcriptions (covering the A1, A2, and B1 CEFR levels, plus native speakers) based on an interview test, the Standard Speaking Test (SST). ANOVA and correlation analyses were conducted to exclude insignificant complexity indices before the discriminant analysis. As a result, distinctive differences in complexity between CEFR speaking levels were observed, and with a combination of six major complexity features as predictors, 78.8% of the oral transcriptions were classified into the appropriate CEFR proficiency levels. This further confirms the possibility of predicting the CEFR level of L2 learners from their objective linguistic features. This study can serve as an empirical reference in language pedagogy, especially for L2 learners' self-assessment and teachers' prediction of students' proficiency levels. It also offers implications for the validation of rating criteria and the improvement of rating systems.
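
The classification step can be sketched with scikit-learn's LinearDiscriminantAnalysis; the feature matrix below is synthetic placeholder data, not the SST corpus, and the six-feature setup merely mirrors the abstract:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Placeholder data: rows = transcriptions, columns = complexity indices
# (e.g., lexical diversity, mean length of clause); labels = CEFR levels.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 6))               # 80 transcriptions, 6 complexity features
y = rng.choice(["A1", "A2", "B1", "NS"], size=80)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)  # classification accuracy per fold
print(f"mean accuracy: {scores.mean():.3f}")
```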

Complexity Analysis of Internet Video Coding (IVC) Decoding

  • Park, Sang-hyo;Dong, Tianyu;Jang, Euee S.
    • Journal of Multimedia Information System
    • /
    • v.4 no.4
    • /
    • pp.179-188
    • /
    • 2017
  • The Internet Video Coding (IVC) standard is due to be published by the Moving Picture Experts Group (MPEG) for various Internet applications such as Internet broadcast streaming. IVC fundamentally aims at three things: 1) forming IVC patents under a free-of-charge license, 2) reaching compression performance comparable to the AVC/H.264 constrained Baseline Profile (cBP), and 3) maintaining computational complexity that allows feasible implementation of real-time encoding and decoding. MPEG experts have worked diligently on the intellectual property rights issues for IVC, and they reported that IVC has already achieved the second goal (compression performance), even showing performance comparable to the AVC/H.264 High Profile (HP). On the complexity issue, however, there has been no thorough analysis of the IVC decoder. In this paper, we analyze the IVC decoder in terms of time complexity by evaluating its running time. The experimental results show that IVC decoding is 3.6 times and 3.1 times more complex than AVC/H.264 cBP under constrained set (CS) 1 and CS2, respectively. Compared to AVC/H.264 HP, IVC is 2.8 times and 2.9 times slower in decoding time under CS1 and CS2, respectively. The most critical tool to improve for a lightweight IVC decoder is the motion compensation process, which contains a resolution-adaptive interpolation filtering process.
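
The measurement methodology can be illustrated with a small wall-clock timing harness; the decoder binaries, flags, and file names below are placeholders, not the actual reference software:

```python
import subprocess, time, statistics

def decode_time(cmd: list[str], runs: int = 5) -> float:
    """Median wall-clock seconds over several runs of a decoder command."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical decoder invocations; substitute the real IVC/AVC decoder binaries.
ivc = decode_time(["ivc_decoder", "-i", "stream.ivc", "-o", "out.yuv"])
avc = decode_time(["avc_decoder", "-i", "stream.264", "-o", "out.yuv"])
print(f"IVC/AVC decoding-time ratio: {ivc / avc:.2f}")
```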

The Cognitive Complexity of Clothing Attributes -Focused on Clothing Involvement-

  • Park Sung-Eun
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.30 no.4 s.152
    • /
    • pp.497-506
    • /
    • 2006
  • The purpose of this study is to clarify the cognitive complexity of clothing attributes, which influences preference and purchase intentions. The subjects of this study were 434 female college students, and formal survey methodologies were used for collecting data. The data were analyzed with the SAS program using methods such as factor analysis, cluster analysis, conjoint analysis, and path analysis with LISREL. The results of this study were as follows: 1) Clothing involvement consists of an affective factor and a cognitive factor. 2) The consumers were divided into three groups with regard to the degree of their clothing involvement. 3) Significant differences were found in the cognitive complexity of clothing attributes among these groups.

WHAT CAN WE SAY ABOUT THE TIME COMPLEXITY OF ALGORITHMS?

  • Park, Chin-Hong
    • Journal of Applied Mathematics & Informatics
    • /
    • v.8 no.3
    • /
    • pp.959-973
    • /
    • 2001
  • We shall discuss one of the techniques needed to analyze algorithms, called the big-O function technique. The efficiency of an algorithm can be measured in two ways. One is the time used by a computer to solve the problem using the algorithm when the input values are of a specified size. The other is the amount of computer memory required to implement the algorithm when the input values are of a specified size. We restrict our attention mainly to time complexity. Determining the time complexity of nonlinear problems in numerical analysis appears to be nearly impossible.
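
As a concrete illustration of the time-complexity measure (not from the paper), a sketch that counts basic operations of an O(n) and an O(n^2) routine on the same input:

```python
def linear_sum(xs):
    """O(n): one addition per element."""
    total, ops = 0, 0
    for x in xs:
        total += x
        ops += 1
    return total, ops

def pair_sums(xs):
    """O(n^2): one addition per ordered pair of elements."""
    ops = 0
    for x in xs:
        for y in xs:
            _ = x + y
            ops += 1
    return ops

n = 1000
_, linear_ops = linear_sum(range(n))
quadratic_ops = pair_sums(range(n))
print(linear_ops, quadratic_ops)   # 1000 vs 1000000: quadratic growth dominates
```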

Effects of Phonetic Complexity and Articulatory Severity on Percentage of Correct Consonant and Speech Intelligibility in Adults with Dysarthria

  • Song, HanNae;Lee, Youngmee;Sim, HyunSub;Sung, JeeEun
    • Phonetics and Speech Sciences
    • /
    • v.5 no.1
    • /
    • pp.39-46
    • /
    • 2013
  • This study examined the effects of phonetic complexity and articulatory severity on the percentage of correct consonants (PCC) and speech intelligibility in adults with dysarthria. Speech samples of thirty-two words from the APAC (Assessment of Phonology and Articulation for Children) were collected from 38 dysarthric speakers at one of two levels of articulatory severity (mild or mild-moderate). PCC and speech intelligibility scores were calculated at each of the four levels of phonetic complexity. A two-way mixed ANOVA revealed that: (1) the group with mild severity showed significantly higher PCC and speech intelligibility scores than the mild-moderate articulatory severity group, (2) PCC at phonetic complexity level 4 was significantly lower than at the other levels, and (3) an interaction effect of articulatory severity and phonetic complexity was observed only for the PCC. Pearson correlation analysis demonstrated that the degree of correlation between PCC and speech intelligibility varied with the level of articulatory severity and phonetic complexity. The clinical implications of these findings are discussed.
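
For reference, PCC is a simple ratio. A minimal sketch under the usual definition (correctly produced consonants over consonants in the target, times 100), with made-up scoring data:

```python
def percent_correct_consonants(target: list[str], produced: list[str]) -> float:
    """PCC = correctly produced consonants / consonants in the target * 100."""
    correct = sum(t == p for t, p in zip(target, produced))
    return 100.0 * correct / len(target)

# Hypothetical consonant-by-consonant scoring for one word.
target_consonants   = ["k", "t", "s", "n"]
produced_consonants = ["k", "d", "s", "n"]   # one substitution: t -> d
print(percent_correct_consonants(target_consonants, produced_consonants))  # 75.0
```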

Message Complexity Analysis of MANET Address Autoconfiguration - Single Node Joining Case

  • Kim, Sang-Chul
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.5B
    • /
    • pp.257-269
    • /
    • 2007
  • This paper proposes a novel method to perform a quantitative analysis of message complexity and applies this method to compare the message complexity among mobile ad hoc network (MANET) address autoconfiguration protocols (AAPs). To obtain the upper bound on the message complexity of the protocols, O-notation for a MANET group of N nodes has been applied. The message complexity of the single node joining case has been derived as n(mO(N)+O(t)) for Strong DAD, n(O(N)+O(t)) for Weak DAD with proactive routing protocols (WDP), n(O(N)+2O(t)) for Weak DAD with on-demand routing protocols (WDO), and nO((t+1)N)+O(N)+O(2) for MANETconf. In order to verify the bounds, analytical simulations that quantify the message complexity of the address autoconfiguration process are conducted based on different conflict probabilities.
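
To compare these bounds numerically, a small sketch that evaluates the four expressions for concrete parameter values, treating each O(x) term as x itself (an assumption: hidden constants are dropped, and the variable names n, m, N, t follow the abstract):

```python
# Upper-bound expressions from the abstract, with O(x) read as x.
def strong_dad(n, m, N, t): return n * (m * N + t)
def wdp(n, N, t):           return n * (N + t)
def wdo(n, N, t):           return n * (N + 2 * t)
def manetconf(n, N, t):     return n * (t + 1) * N + N + 2

# Example parameters (illustrative only): a 50-node MANET, one joining node.
N, n, m, t = 50, 1, 3, 2
print(strong_dad(n, m, N, t), wdp(n, N, t), wdo(n, N, t), manetconf(n, N, t))
```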

Identification and Organization of Task Complexity Factors Based on a Model Combining Task Design Aspects and Complexity Dimensions

  • Ham, Dong-Han
    • Journal of the Ergonomics Society of Korea
    • /
    • v.32 no.1
    • /
    • pp.59-68
    • /
    • 2013
  • Objective: The purpose of this paper is to introduce a task complexity model combining task design aspects and complexity dimensions, and to explain an approach to identifying and organizing task complexity factors based on the model. Background: Task complexity is a critical concept in describing and predicting human performance in complex systems such as nuclear power plants (NPPs). In order to understand the nature of task complexity, task complexity factors need to be identified and organized in a systematic manner. Although several methods have been suggested for identifying and organizing task complexity factors, it is rare to find an analytical approach based on a theoretically sound model. Method: This study regarded a task as a system to be designed. Three levels of design abstraction, which are the functional, behavioral, and structural levels of a task, characterize the design aspects of a task. The behavioral aspect is further classified into five cognitive processing activity types (information collection, information analysis, decision and action selection, action implementation, and action feedback). The complexity dimensions describe task complexity from different perspectives: size, variety, and order/organization. Combining the design aspects and complexity dimensions of a task, we developed a model from which meaningful task complexity factors can be identified and organized in an analytic way. Results: A model consisting of two facets, respectively concerned with design aspects and complexity dimensions, was proposed. Additionally, twenty-one task complexity factors were identified and organized based on the model. Conclusion: The model and approach introduced in this paper can be effectively used for examining human performance and human-system interface design issues in NPPs. Application: The model and approach introduced in this paper could be used for several human factors problems in NPPs, including task allocation and the design of information aiding, and could be extended to other types of complex systems, such as air traffic control systems.
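
One plausible way to read the two-facet model (an inference from the abstract, not the paper's own table): the seven design aspects (the functional level, the five behavioral activity types, and the structural level) crossed with the three complexity dimensions give 7 x 3 = 21 slots, matching the twenty-one factors reported. A sketch of that grid:

```python
from itertools import product

# Facet values taken from the abstract; the pairing into named factors is the paper's.
design_aspects = [
    "functional level",
    "information collection", "information analysis",
    "decision and action selection", "action implementation", "action feedback",
    "structural level",
]
complexity_dimensions = ["size", "variety", "order/organization"]

# Enumerate the 21 candidate factor slots.
for i, (aspect, dim) in enumerate(product(design_aspects, complexity_dimensions), 1):
    print(f"{i:2d}. {aspect} x {dim}")
```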

A study on convergence and complexity of reproducing kernel collocation method

  • Hu, Hsin-Yun;Lai, Chiu-Kai;Chen, Jiun-Shyan
    • Interaction and Multiscale Mechanics
    • /
    • v.2 no.3
    • /
    • pp.295-319
    • /
    • 2009
  • In this work, we discuss a reproducing kernel collocation method (RKCM) for solving second-order PDEs based on the strong formulation, where reproducing kernel shape functions with compact support are used as the approximation functions. The method, based on strong-form collocation, avoids domain integration and leads to a well-conditioned discrete system of equations. We investigate the convergence and the computational complexity of the proposed method. An important result obtained from the analysis is that the degree of the basis in the reproducing kernel approximation has to be greater than one for the method to converge. Some numerical experiments are provided to validate the error analysis. The complexity of RKCM is also analyzed, and a complexity comparison with the weak formulation using the reproducing kernel approximation is presented.
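
The strong-form collocation idea can be sketched in one dimension. The following is an illustrative stand-in, not the paper's method: it uses globally supported Gaussian RBFs rather than compactly supported reproducing kernel shape functions, and solves u'' = f on [0, 1] with homogeneous Dirichlet conditions by enforcing the PDE at interior collocation points and the boundary values directly:

```python
import numpy as np

eps = 3.0                                    # RBF shape parameter (assumption)
centers = np.linspace(0, 1, 21)
nodes = np.linspace(0, 1, 21)                # collocation points

def phi(x, c):    return np.exp(-(eps * (x - c)) ** 2)
def phi_xx(x, c): return (4 * eps**4 * (x - c)**2 - 2 * eps**2) * phi(x, c)

f = lambda x: -np.pi**2 * np.sin(np.pi * x)      # manufactured source, exact u = sin(pi x)
A = phi_xx(nodes[1:-1, None], centers[None, :])  # interior rows: enforce u'' = f
B = phi(nodes[[0, -1], None], centers[None, :])  # boundary rows: enforce u = 0
M = np.vstack([A, B])
rhs = np.concatenate([f(nodes[1:-1]), [0.0, 0.0]])
coef, *_ = np.linalg.lstsq(M, rhs, rcond=None)   # least-squares collocation solve

u = phi(nodes[:, None], centers[None, :]) @ coef
print(np.max(np.abs(u - np.sin(np.pi * nodes))))  # max error against the exact solution
```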