• Title/Summary/Keyword: Complexity of use


The Relationship between Syntactic Complexity Indices and Scores on Language Use in the Analytic Rating Scale (통사적 복잡성과 분석적 척도의 언어 사용 점수간의 관계 탐색)

  • Young-Ju Lee
    • The Journal of the Convergence on Culture Technology / v.9 no.5 / pp.229-235 / 2023
  • This study investigates the relationship between syntactic complexity indices and scores on language use in Jacobs et al.'s (1981) analytic rating scale. Syntactic complexity indices obtained from the TAASSC program were analyzed for 440 essays written by EFL students from the ICNALE corpus. Specifically, the study explores the relationship between scores on language use and Lu's (2011) traditional syntactic complexity indices, phrasal complexity indices, and clausal complexity indices, respectively. Results of a stepwise regression analysis showed that phrasal complexity indices were the best predictor of scores on language use, although the explained variance in language-use scores was relatively small compared with the previous study. Implications of the findings for writing instruction (i.e., syntactic structures at the phrase level) are also discussed.
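The stepwise regression described above selects, one index at a time, the syntactic complexity measures that best predict language-use scores. A minimal sketch of such a forward stepwise procedure is shown below; the file name and column names are illustrative assumptions, not the variables used in the paper.

```python
# Hedged sketch: forward stepwise regression of language-use scores on
# syntactic complexity indices. File name and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df, target, candidates, alpha=0.05):
    """Greedily add the predictor with the smallest p-value below alpha."""
    selected = []
    remaining = list(candidates)
    while remaining:
        pvals = {}
        for col in remaining:
            X = sm.add_constant(df[selected + [col]])
            model = sm.OLS(df[target], X).fit()
            pvals[col] = model.pvalues[col]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    final = sm.OLS(df[target], sm.add_constant(df[selected])).fit()
    return selected, final

# Illustrative usage (columns are assumptions, not the paper's variable names):
# df = pd.read_csv("icnale_taassc_indices.csv")
# indices = ["mean_length_clause", "complex_nominals_per_clause",
#            "dependent_clauses_per_clause"]
# chosen, model = forward_stepwise(df, "language_use_score", indices)
# print(chosen, model.rsquared_adj)
```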

Software Effort Estimation based on Use Case Transaction (유스케이스 트랜잭션 기반의 소프트웨어 공수 예측 기법)

  • Lee, Sun-Kyung; Kang, Dong-Won; Bae, Doo-Hwan
    • Journal of KIISE: Computing Practices and Letters / v.16 no.5 / pp.566-570 / 2010
  • Use Case Point (UCP) is a measure of software project size for effort estimation based on use cases. Because UCP is derived directly from the use case model, it is intuitive, easy to obtain, and requires no extra artifacts. On the other hand, UCP has some problems: it assumes every transaction has the same complexity, even though the number and complexity of operations may affect the complexity of a transaction, and its simple rating scale of complexity may be inadequate for detailed estimates. To address these problems, we suggest "Transaction Point (TP)", a size measure based on use case transactions. TP considers the actors and operations in each transaction, and the complexity of a transaction is based on the number and complexity of its operations, so it can support more detailed estimation.
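The Transaction Point idea above replaces UCP's uniform transaction complexity with a weight derived from each transaction's actors and operations. The sketch below only illustrates that weighting scheme under assumed weights; it is not the TP formula defined in the paper.

```python
# Hedged sketch of a transaction-level size measure in the spirit of
# Transaction Point (TP). The weights and the aggregation rule are
# illustrative assumptions, not the formula defined in the paper.
from dataclasses import dataclass
from typing import List

# Assumed operation-complexity weights (hypothetical).
OPERATION_WEIGHTS = {"simple": 1.0, "average": 2.0, "complex": 3.0}
# Assumed actor weights (hypothetical), loosely following UCP's actor classes.
ACTOR_WEIGHTS = {"system": 1.0, "protocol": 2.0, "human": 3.0}

@dataclass
class Transaction:
    actors: List[str]       # actor kinds participating in the transaction
    operations: List[str]   # complexity class of each operation

    def point(self) -> float:
        actor_part = sum(ACTOR_WEIGHTS[a] for a in self.actors)
        op_part = sum(OPERATION_WEIGHTS[o] for o in self.operations)
        return actor_part + op_part

def transaction_points(transactions: List[Transaction]) -> float:
    """Sum transaction-level sizes instead of assuming uniform complexity."""
    return sum(t.point() for t in transactions)

# Illustrative usage:
# withdraw_cash = Transaction(actors=["human", "system"],
#                             operations=["average", "complex", "simple"])
# print(transaction_points([withdraw_cash]))
```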

DPCM-Based Image Pre-Analyzer and Quantization Method for Controlling the JPEG File Size (JPEG 파일 크기를 제어하기 위한 DPCM 기반의 영상 사전 분석기와 양자화 방법)

  • Shin, Sun-Young; Go, Hyuk-Jin; Park, Hyun-Sang; Jeon, Byeung-Woo
    • Proceedings of the IEEK Conference / 2005.11a / pp.561-564 / 2005
  • In this paper, we present a new JPEG (Joint Photographic Experts Group) compression architecture that compresses a still image into a bitstream of fixed size so that restricted system memory can be used efficiently. The size of the bitstream is determined by the complexity of the image and by the quantization table; since the quantization table is set in advance, the complexity of the image is the essential factor. The bitstream for a high-complexity image is therefore large and that for a low-complexity image is small, which makes managing restricted system memory difficult. The proposed JPEG encoder estimates the size of the bitstream using the correlation between consecutive frames and selects the quantization table suited to the complexity of the image, making efficient use of system memory.
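The encoder described above selects a quantization table from a prediction of how many bits the image will need, so that the bitstream fits a fixed memory budget. The sketch below illustrates that control loop in a simplified form; the trial-encoding quality ladder is an assumption, whereas the paper instead predicts the size beforehand from DPCM statistics and inter-frame correlation.

```python
# Hedged sketch: choose a JPEG quality setting so the encoded size stays under
# a memory budget. The quality ladder is illustrative; the paper's pre-analyzer
# predicts the size without trial encoding.
import io
from PIL import Image

def encode_under_budget(img: Image.Image, budget_bytes: int,
                        qualities=(90, 80, 70, 60, 50, 40)) -> bytes:
    """Try progressively coarser quantization until the bitstream fits."""
    data = b""
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        data = buf.getvalue()
        if len(data) <= budget_bytes:
            return data
    return data  # coarsest attempt; may still exceed the budget

# Illustrative usage:
# frame = Image.open("frame.png").convert("RGB")
# jpeg_bytes = encode_under_budget(frame, budget_bytes=64 * 1024)
```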

Low-Complexity Network Coding Algorithms for Energy Efficient Information Exchange

  • Wang, Yu; Henning, Ian D.
    • Journal of Communications and Networks / v.10 no.4 / pp.396-402 / 2008
  • The use of network coding in wireless networks has been proposed in the literature for energy-efficient broadcast. However, the decoding complexity of existing algorithms is too high for low-complexity devices. In this work we formalize the all-to-all information exchange problem and show how to optimize the transmission scheme in terms of energy efficiency. Furthermore, we prove by construction that there exist O(1)-complexity network coding algorithms for grid networks which can achieve such optimality. We also present low-complexity heuristics for random-topology networks. Simulation results show that the network coding algorithms outperform forwarding algorithms in most cases.
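Network coding of the kind referenced above lets a node transmit a combination of packets (for example an XOR) so that a single transmission serves several receivers, each of which decodes with packets it already holds. The sketch below shows only this basic XOR exchange with fixed-length packets; it is not the grid-network construction from the paper.

```python
# Hedged sketch of XOR-based network coding for a two-node exchange through a
# relay: A and B each know their own packet, the relay broadcasts A XOR B, and
# each side recovers the other's packet with a single XOR (O(1) decoding).
def xor_bytes(a: bytes, b: bytes) -> bytes:
    assert len(a) == len(b), "sketch assumes fixed, equal-length packets"
    return bytes(x ^ y for x, y in zip(a, b))

packet_a = b"temperature=21.5"
packet_b = b"humidity=0.47\x00\x00\x00"     # padded to the same length

coded = xor_bytes(packet_a, packet_b)        # one broadcast instead of two unicasts

recovered_at_a = xor_bytes(coded, packet_a)  # node A already holds packet_a
recovered_at_b = xor_bytes(coded, packet_b)  # node B already holds packet_b
assert recovered_at_a == packet_b and recovered_at_b == packet_a
```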

NEW COMPLEXITY ANALYSIS OF PRIMAL-DUAL IPMS FOR P* LCPS BASED ON LARGE UPDATES

  • Cho, Gyeong-Mi; Kim, Min-Kyung
    • Bulletin of the Korean Mathematical Society / v.46 no.3 / pp.521-534 / 2009
  • In this paper we present new large-update primal-dual interior point algorithms for $P_*$ linear complementarity problems (LCPs) based on a class of kernel functions, ${\psi}(t)=\frac{t^{p+1}-1}{p+1}+\frac{1}{\sigma}(e^{\sigma(1-t)}-1)$, $p\in[0,1]$, $\sigma\geq 1$. This is the first use of this class of kernel functions in the complexity analysis of interior point methods (IPMs) for $P_*$ LCPs. We show that if a strictly feasible starting point is available, then the new large-update primal-dual interior point algorithms for $P_*$ LCPs have an $O((1+2\kappa)n^{\frac{1}{p+1}}\log n\log\frac{n}{\varepsilon})$ complexity bound. When $p=1$, we have $O((1+2\kappa)\sqrt{n}\log n\log\frac{n}{\varepsilon})$ complexity, which is so far the best known complexity for large-update methods.
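For background, in the usual kernel-function framework that such analyses build on, the kernel $\psi$ induces a barrier (proximity) function on the scaled centering variable; the display below records that standard construction with conventional notation, which may differ from the paper's exact symbols.

```latex
% Standard kernel-function barrier construction (background sketch, not quoted
% from the paper): x, s are the complementarity variables, \mu the barrier
% parameter, and v the scaled centering vector.
\[
  v_i=\sqrt{\frac{x_i s_i}{\mu}},\qquad
  \Psi(v)=\sum_{i=1}^{n}\psi(v_i),\qquad
  \psi(t)=\frac{t^{p+1}-1}{p+1}+\frac{1}{\sigma}\bigl(e^{\sigma(1-t)}-1\bigr),
  \quad p\in[0,1],\ \sigma\ge 1.
\]
```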

The Use of VFG for Measuring the Slice Complexity (슬라이스 복잡도 측정을 위한 VFG의 사용)

  • 문유미; 최완규; 이성주
    • Journal of the Korea Institute of Information and Communication Engineering / v.5 no.1 / pp.183-191 / 2001
  • We develop a new data slice representation, called the data value flow graph (VFG), for modeling the information flow on data slices. We then define a slice complexity measure by using an existing flow complexity measure in order to measure the complexity of the information flow on the VFG. We show the relation between the slice complexity of a slice and the slice complexity of a procedure. We also demonstrate the measurement scale factors through a set of atomic modifications and a concatenation operator on the VFG.
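The abstract above defines its slice complexity through a flow complexity measure over a value-flow graph. As a loose illustration only, the sketch below computes a classic information-flow style complexity (fan-in times fan-out, squared) on a small directed graph of value flows; the paper's actual VFG construction and measure are not reproduced here.

```python
# Hedged sketch: an information-flow style complexity on a small value-flow
# graph, in the spirit of Henry-Kafura fan-in/fan-out measures. This is NOT
# the slice complexity measure defined in the paper, only an illustration of
# measuring complexity over a graph of value flows.
from collections import defaultdict

def flow_complexity(edges):
    """edges: iterable of (src, dst) value-flow pairs between slice nodes."""
    fan_in = defaultdict(int)
    fan_out = defaultdict(int)
    nodes = set()
    for src, dst in edges:
        fan_out[src] += 1
        fan_in[dst] += 1
        nodes.update((src, dst))
    # Sum of squared fan-in * fan-out over all nodes of the graph.
    return sum((fan_in[n] * fan_out[n]) ** 2 for n in nodes)

# Illustrative value flows for a tiny slice on the variable `total`:
example_edges = [("input", "total=0"), ("total=0", "total+=x"),
                 ("input", "total+=x"), ("total+=x", "print(total)")]
print(flow_complexity(example_edges))
```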

The Influence of a New Product's Innovative Attributes and Planned Obsolescence on Consumer Purchase Intention (신제품의 혁신 속성과 계획적 진부화가 소비자의 구매의도에 미치는 영향)

  • Park, Chul-Ju
    • Journal of Distribution Science / v.13 no.8 / pp.81-90 / 2015
  • Purpose - To vitalize a market or develop a new one, companies frequently release new products, often by shortening the time to market, called the release period. This research investigates consumers' intention to purchase new products at the time of release, based on the release speed. Research Design, Data, and Methodology - The research reviews the influence of relative advantage, complexity, and compatibility among the innovative attributes of new products proposed by Rogers, and examines how the speed of obsolescence of old products moderates the effect of these attributes on consumer purchase behavior. The research hypotheses are tested through empirical analysis. Results - The analysis demonstrated that the relative advantage (H1) and compatibility (H3) of new products had a statistically significant positive influence on new product purchase intention. The complexity (H2) of new products also had a statistically significant positive influence on new product purchase intention, in contrast to its predicted sign (-). The results for the moderating effects of old product use period and use frequency were as follows. H4-1 was not supported, since the difference in path coefficients between the low and high old-product-use-period groups for the relationship between relative advantage and new product purchase intention was not statistically significant. H5-1 was likewise not supported, since the corresponding difference for the relationship between complexity and new product purchase intention was not statistically significant. However, H4-2 was supported, since the difference in path coefficients between the low and high old-product-use-frequency groups for the relationship between relative advantage and new product purchase intention was statistically significant. H5-2 was not supported, since the corresponding difference for the relationship between complexity and new product purchase intention was not statistically significant. H6-2 was also not supported, since the corresponding difference for the relationship between compatibility and new product purchase intention was not statistically significant. Conclusion - According to the results, only H4-2 among the hypotheses on the moderating effects of old product use period and use frequency was statistically significant. Future research should carry out a detailed review of these moderating-effect hypotheses, identify the cause, and connect this to new research.

An XPDL-Based Workflow Control-Structure and Data-Sequence Analyzer

  • Kim, Kwanghoon Pio
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.3 / pp.1702-1721 / 2019
  • A workflow process (or business process) management system helps to define, execute, monitor, and manage workflow models deployed in a workflow-supported enterprise, and the system is in general compartmentalized into a modeling subsystem and an enacting subsystem. The modeling subsystem's functionality is to discover and analyze workflow models via a theoretical modeling methodology like ICN, to define them graphically via a notation like BPMN, and to deploy those graphically defined models onto the enacting subsystem by transforming them into textual models represented in a standardized workflow process definition language like XPDL. Before deploying the defined workflow models, it is very important to inspect their syntactical correctness as well as their structural properness in order to minimize the loss of effectiveness and efficiency in managing them. In this paper, we are particularly interested in verifying very large-scale and massively parallel workflow models, and so we need a sophisticated analyzer that automatically analyzes these specialized and complex styles of workflow models. The analyzer devised in this paper analyzes not only the structural complexity but also the data-sequence complexity. The structural complexity is based upon combinational usages of control-structure constructs such as subprocess, exclusive-OR, parallel-AND, and iterative-LOOP primitives while preserving the matched-pairing and proper-nesting properties, whereas the data-sequence complexity is based upon combinational usages of the relevant data repositories, such as data definition sequences and data use sequences. Through the analyzer devised and implemented in this paper, we achieve systematic verification of the syntactical correctness as well as effective validation of the structural properness of such complicated and large-scale workflow models. As an experimental study, we apply the implemented analyzer to an exemplary large-scale and massively parallel workflow process model, the Large Bank Transaction Workflow Process Model, and show the structural complexity analysis results via a series of operational screens captured from the implemented analyzer.
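The structural check described above hinges on matched pairing and proper nesting of split/join constructs (exclusive-OR, parallel-AND, iterative-LOOP, subprocess). A minimal stack-based sketch of that check is shown below; the token names and the simplified linear encoding of a model are assumptions for illustration, not the analyzer's actual XPDL parsing.

```python
# Hedged sketch: verify matched pairing and proper nesting of control-structure
# constructs in a workflow model encoded as a flat token sequence. The token
# vocabulary and encoding are illustrative assumptions, not XPDL itself.
OPENERS = {"xor_split": "xor_join",
           "and_split": "and_join",
           "loop_begin": "loop_end",
           "subprocess_begin": "subprocess_end"}
CLOSERS = {v: k for k, v in OPENERS.items()}

def is_properly_nested(tokens):
    """Return True if every split/begin is closed by its matching join/end
    in last-opened, first-closed order (proper nesting)."""
    stack = []
    for tok in tokens:
        if tok in OPENERS:
            stack.append(tok)
        elif tok in CLOSERS:
            if not stack or stack.pop() != CLOSERS[tok]:
                return False          # unmatched or interleaved pairing
        # plain activities are ignored by the structural check
    return not stack                  # every opened construct must be closed

# Illustrative usage:
# ok  = is_properly_nested(["and_split", "task_A", "xor_split", "task_B",
#                           "xor_join", "task_C", "and_join"])   # True
# bad = is_properly_nested(["and_split", "xor_split",
#                           "and_join", "xor_join"])             # False
```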

TOEPLITZ SEQUENCES OF INTERMEDIATE COMPLEXITY

  • Kim, Hyoung-Keun; Park, Seung-Seol
    • Journal of the Korean Mathematical Society / v.48 no.2 / pp.383-395 / 2011
  • We present two constructions of Toeplitz sequences with an intermediate complexity function by using the generalized Oxtoby sequence. In the first construction, we use blocks from the infinite sequence, which has entropy dimension $\frac{1}{2}$. The second construction provides Toeplitz sequences that have various entropy dimensions.

A FAST TEMPLATE MATCHING METHOD USING VECTOR SUMMATION OF SUBIMAGE PROJECTION

  • Kim, Whoi-Yul; Park, Yong-Sup
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06a / pp.171-176 / 1999
  • Template matching is one of the most frequently used techniques in machine vision applications for finding a template (subimage) of size M$\times$M in a scene image of size N$\times$N. Most template matching methods, however, require pixel operations between the template and the image under analysis, resulting in a high computational cost of $O(M^2N^2)$. In this paper, we present a two-stage template matching method. In the first stage, we use a novel low-cost feature, whose complexity approaches $O(N^2)$, to select matching candidates. In the second stage, we use a conventional template matching method to find the exact matching point. We compare the results with other methods in terms of complexity, efficiency, and performance. The proposed method was shown to have constant time complexity and to be quite invariant to noise.
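The two-stage idea above uses a cheap feature to prune candidate locations before running exact matching. The sketch below illustrates that structure with a window-sum feature computed from a summed-area table; the pruning rule and the exact-stage metric (SSD) are assumptions, not the projection-vector feature defined in the paper.

```python
# Hedged sketch of two-stage template matching: a cheap window-sum feature
# (computed from an integral image) prunes candidates, then SSD is evaluated
# only at the survivors. The feature and pruning rule are illustrative, not
# the projection-summation feature defined in the paper.
import numpy as np

def match_two_stage(scene: np.ndarray, template: np.ndarray, keep: int = 50):
    N1, N2 = scene.shape
    M1, M2 = template.shape
    # Stage 1: window sums for every candidate position via an integral image.
    ii = np.zeros((N1 + 1, N2 + 1))
    ii[1:, 1:] = scene.cumsum(0).cumsum(1)
    win = ii[M1:, M2:] - ii[:-M1, M2:] - ii[M1:, :-M2] + ii[:-M1, :-M2]
    t_sum = template.sum()
    # Keep the `keep` positions whose window sum is closest to the template sum.
    flat = np.abs(win - t_sum).ravel()
    candidates = np.argsort(flat)[:keep]
    # Stage 2: exact SSD matching, but only at the surviving candidates.
    best, best_ssd = None, np.inf
    for idx in candidates:
        y, x = divmod(int(idx), win.shape[1])
        ssd = np.sum((scene[y:y + M1, x:x + M2] - template) ** 2)
        if ssd < best_ssd:
            best, best_ssd = (y, x), ssd
    return best, best_ssd

# Illustrative usage:
# scene = np.random.rand(128, 128); template = scene[40:56, 30:46].copy()
# print(match_two_stage(scene, template))
```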