• Title/Summary/Keyword: Complexity analysis

Impact Factors Analysis on AR Shopping Service's Immersion

  • SHIN, Myoung-Ho;LEE, Young-Min;KIM, Jin-Hwan
    • Journal of Distribution Science
    • /
    • v.17 no.12
    • /
    • pp.13-21
    • /
    • 2019
  • Purpose - Examining customer behavior toward AR shopping is important both practically and academically, so it is worthwhile to discuss in more detail the utility of AR, which is still at an early stage in the distribution industry. Research design, data, and methodology - This study was designed with customer flow as the dependent variable, AR characteristics and technology readiness as independent variables, and the moderating effect of perceived complexity. Data were collected from questionnaires completed by male and female respondents in their 20s and 30s after they had used the AR shopping service directly; 167 questionnaires were analyzed. Hypotheses were tested using hierarchical regression analysis. Results - The hypothesis tests showed positive influences of sensory immersion, manipulation, and optimism, whereas the hypotheses concerning navigation and innovativeness were rejected. No moderating effect of perceived complexity appeared. Conclusions - The implications of this study are as follows. First, an AR shopping service has to provide informational value. Second, marketing activities that provide the AR service to customer groups will be effective. Third, perceived complexity has no significant moderating effect on customers' immersion in the service.

A New Analysis and a Reduction Method of Computational Complexity for the Lattice Transversal Joint (LTJ) Adaptive Filter (격자 트랜스버설 결합 (LTJ) 적응필터의 새로운 해석과 계산량 감소 방법)

  • 유재하
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.5
    • /
    • pp.438-445
    • /
    • 2002
  • In this paper, the necessity of filter-coefficient compensation for the lattice transversal joint (LTJ) adaptive filter was explained simply and in general terms by analyzing the filter as a time-varying transform-domain adaptive filter. A method for reducing the computational complexity of the coefficient compensation was also proposed, exploiting the property that a speech signal is stationary over a short time period, and its effectiveness was verified through experiments using artificial and real speech signals. The proposed adaptive filter reduces the computational complexity of the coefficient compensation by 95%, and when the filter is applied to an acoustic echo canceller with 1000 taps, the total complexity is reduced by 82%.

Low Complexity Zero-Forcing Beamforming for Distributed Massive MIMO Systems in Large Public Venues

  • Li, Haoming;Leung, Victor C.M.
    • Journal of Communications and Networks
    • /
    • v.15 no.4
    • /
    • pp.370-382
    • /
    • 2013
  • Distributed massive MIMO systems, which have high bandwidth efficiency and can accommodate a tremendous amount of traffic using algorithms such as zero-forcing beamforming (ZFBF), may be deployed in large public venues with the antennas mounted under-floor. In this case, the channel gain matrix H can be modeled as a multi-banded matrix, in which off-diagonal entries decay both exponentially due to heavy human penetration loss and polynomially due to free-space propagation loss. To enable practical implementation of such systems, we present a multi-banded matrix inversion algorithm that substantially reduces the complexity of ZFBF by keeping only the most significant entries in H and the precoding matrix W. We introduce a parameter p to control the sparsity of H and W and thus achieve a trade-off between computational complexity and system throughput. The proposed algorithm includes dense and sparse precoding versions, providing quadratic and linear complexity, respectively, in the number of antennas. We present analysis and numerical evaluations to show that the signal-to-interference ratio (SIR) increases linearly with p in dense precoding. In sparse precoding, we demonstrate the necessity of using directional antennas by both analysis and simulations. When the directional antenna gain increases, the resulting SIR increment in sparse precoding increases linearly with p, while the SIR of dense precoding is much less sensitive to changes in p.
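The banded-truncation idea can be illustrated with a small numpy sketch. Only the bandwidth parameter p and the exponential off-diagonal decay come from the abstract; the decay factor, matrix size, and the use of a plain matrix inverse as the ZF precoder are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def banded(H, p):
    """Keep only entries within p diagonals of the main diagonal."""
    n = H.shape[0]
    i, j = np.indices((n, n))
    return np.where(np.abs(i - j) <= p, H, 0.0)

# Toy channel: unit diagonal with exponential off-diagonal decay,
# mimicking heavy penetration loss between distant antennas.
n = 16
i, j = np.indices((n, n))
H = 0.3 ** np.abs(i - j)

# Zero-forcing precoder computed from the truncated (multi-banded) channel.
p = 3
W = np.linalg.inv(banded(H, p))

# Residual inter-user interference left by the approximate precoder.
residual = np.linalg.norm(H @ W - np.eye(n))
```

Increasing p keeps more diagonals and shrinks the residual interference at higher inversion cost, which is the complexity/throughput trade-off the abstract describes.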

Security Analysis of AES for Related-Key Rectangle Attacks (AES의 연관키 렉탱글 공격에 대한 안전성 분석)

  • Kim, Jong-Sung;Hong, Seok-Hie;Lee, Chang-Hoon
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.19 no.2
    • /
    • pp.39-48
    • /
    • 2009
  • In this paper we improve previous related-key rectangle attacks on AES from 9 rounds to 10 rounds: our attacks break the first 10 rounds of 12-round AES-192 with 256 related keys, a data complexity of $2^{124}$, and a time complexity of $2^{183}$, and also break the first 10 rounds of 12-round AES-192 with 64 related keys, a data complexity of $2^{122}$, and a time complexity of $2^{183.6}$. These are the best known attacks on AES-192.

Developing efficient model updating approaches for different structural complexity - an ensemble learning and uncertainty quantifications

  • Lin, Guangwei;Zhang, Yi;Liao, Qinzhuo
    • Smart Structures and Systems
    • /
    • v.29 no.2
    • /
    • pp.321-336
    • /
    • 2022
  • Model uncertainty is a key factor that could influence the accuracy and reliability of numerical model-based analysis. An appropriate updating approach is therefore needed that can search for and determine realistic model parameter values from measurements. In this paper, Bayesian model updating theory combined with the transitional Markov chain Monte Carlo (TMCMC) method and K-means cluster analysis is utilized to update the structural model parameters. Kriging and polynomial chaos expansion (PCE) are employed to generate surrogate models that reduce the computational burden of TMCMC. The selected updating approaches are applied to three structural examples of different complexity: a two-storey frame, a ten-storey frame, and the national stadium model. These models represent a low-dimensional linear model, a high-dimensional linear model, and a nonlinear model, respectively. The updating performance in these three models is assessed in terms of prediction uncertainty, numerical effort, and prior information. This study also investigates updating scenarios using the analytical approach and the surrogate models. The uncertainty quantification in the Bayesian approach is further discussed to verify the validity and accuracy of the surrogate models. Finally, the advantages and limitations of the surrogate model-based updating approaches are discussed for different structural complexity. The possibility of utilizing a boosting algorithm as an ensemble learning method for improving the surrogate models is also presented.
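A drastically simplified sketch of the surrogate idea: a polynomial surrogate (a stand-in for the paper's PCE/Kriging models) is fitted to a toy one-parameter forward model, then used inside a plain Metropolis sampler (a stand-in for the multi-stage TMCMC). The forward model, noise level, and all numeric values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta):
    """Stand-in 'expensive' forward model (the paper uses FE structural models)."""
    return np.sin(theta) + 0.5 * theta

# Cheap polynomial surrogate fitted on a few forward-model runs,
# in the spirit of using PCE/Kriging to lighten the MCMC stage.
x_train = np.linspace(-2, 2, 9)
surrogate = np.poly1d(np.polyfit(x_train, model(x_train), deg=5))

# Gaussian likelihood around a synthetic observation at theta = 0.8;
# the sampler only ever calls the surrogate, never the full model.
y_obs, sigma = model(0.8), 0.1
def log_post(theta):
    return -0.5 * ((y_obs - surrogate(theta)) / sigma) ** 2

theta, chain = 0.0, []
for _ in range(2000):
    prop = theta + 0.3 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
posterior_mean = np.mean(chain[500:])
```

The posterior mean recovered from the surrogate-driven chain should land near the true parameter 0.8, at a fraction of the forward-model evaluations a direct chain would need.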

The Software Complexity Estimation Method in Algorithm Level by Analysis of Source code (소스코드의 분석을 통한 알고리즘 레벨에서의 소프트웨어 복잡도 측정 방법)

  • Lim, Woong;Nam, Jung-Hak;Sim, Dong-Gyu;Cho, Dae-Sung;Choi, Woong-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.5
    • /
    • pp.153-164
    • /
    • 2010
  • A program consumes energy by executing its instructions. The amount of consumed power is mainly proportional to algorithm complexity and can be calculated from complexity information. Generally, the complexity of software is estimated with a microprocessor simulator, but simulation takes a long time because the simulator is software that models the hardware, and it only provides quantitative information about computational complexity. In this paper, we propose a complexity estimation method that analyzes software at the source-code level and derives the complexity metric mathematically. The function-wise complexity metrics give detailed information about where computation is concentrated within a function. The performance of the proposed method is compared with the results of the gate-level microprocessor simulator SimpleScalar. The software used for the performance test comprises the $4{\times}4$ integer transform, intra-prediction, and motion estimation of the latest video codec, H.264/AVC. The number of executed instructions is used as the quantitative estimate, showing errors of about 11.6%, 9.6%, and 3.5%, respectively, relative to the SimpleScalar results.
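The paper analyzes C source of H.264/AVC kernels; as an illustrative analogue only, the sketch below assigns hypothetical weights to operation nodes in a parsed Python function and reports a per-function count. The weight table and the example kernel are invented for illustration.

```python
import ast

# Rough operation-count weights per AST node type; the paper derives its
# weights for C source, so these values are purely illustrative.
WEIGHTS = {ast.BinOp: 1, ast.Compare: 1, ast.Call: 2, ast.Subscript: 1}

def function_complexity(source):
    """Return a {function_name: weighted operation count} map for `source`."""
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            scores[node.name] = sum(
                WEIGHTS.get(type(sub), 0) for sub in ast.walk(node))
    return scores

# Toy stand-in for a video-codec kernel: 4x4 sum of absolute differences.
code = """
def sad_4x4(cur, ref):
    total = 0
    for i in range(4):
        for j in range(4):
            total += abs(cur[i][j] - ref[i][j])
    return total
"""
scores = function_complexity(code)
```

A static count like this exposes where computation concentrates per function, which is the kind of function-wise detail the abstract says a simulator's aggregate instruction count cannot give.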

ON THE COMPUTATION OF THE NON-PERIODIC AUTOCORRELATION FUNCTION OF TWO TERNARY SEQUENCES AND ITS RELATED COMPLEXITY ANALYSIS

  • Koukouvinos, Christos;Simos, Dimitris E.
    • Journal of applied mathematics & informatics
    • /
    • v.29 no.3_4
    • /
    • pp.547-562
    • /
    • 2011
  • We establish a new formalism for the non-periodic autocorrelation function (NPAF) of two sequences, which is suitable for computing the NPAF of any two sequences. It is shown that this encoding of the NPAF is efficient for sequences of small weight. In particular, whether two sequences of length $n$ and weight $w$ have zero NPAF can be decided in $O(n+w^2\log w)$. For $n > w^2\log w$, the complexity is $O(n)$, so we cannot expect asymptotically faster algorithms.
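As a concrete reading of the definition, here is a direct $O(n^2)$ Python sketch of the joint NPAF and the zero-NPAF check; it does not implement the paper's small-weight encoding. The length-4 Golay complementary pair used below is a standard example, not taken from the paper; ternary sequences simply allow entries from {-1, 0, 1}.

```python
def npaf_pair(a, b):
    """Joint NPAF: N(s) = sum_i a_i*a_{i+s} + sum_i b_i*b_{i+s}, s = 1..n-1."""
    n = len(a)
    return [sum(a[i] * a[i + s] for i in range(n - s)) +
            sum(b[i] * b[i + s] for i in range(n - s))
            for s in range(1, n)]

def has_zero_npaf(a, b):
    """True iff the joint NPAF vanishes at every nonzero shift."""
    return all(v == 0 for v in npaf_pair(a, b))

# A length-4 Golay complementary pair: its joint NPAF is identically zero.
a = [1, 1, 1, -1]
b = [1, 1, -1, 1]
```

For sequences of weight w, most products above are zero, which is the observation the paper's $O(n+w^2\log w)$ encoding exploits.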

Performance Optimization of Tandem Source-Channel Coding Systems Employing Unequal Error Protection Under Complexity Constraints (복잡도 제한 하에서 비균등 오류 보호 기법을 사용하는 탠덤 소스-채널 코딩 시스템의 성능 최적화)

  • Lim, Jongtae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.10
    • /
    • pp.2537-2543
    • /
    • 2014
  • It has been known that, in complexity versus performance, there is a complexity threshold between tandem source-channel coding systems and joint source-channel coding systems. In this paper, by extending the previous analysis of equal error protection systems, we analyze and compare the performance under complexity constraints of tandem source-channel coding systems that employ unequal error protection. Under a given complexity constraint, optimization is performed to minimize the end-to-end distortion of each representative tandem and joint source-channel coding system. The results show that as the system complexity grows, the complexity threshold for unequal error protection systems becomes smaller, and the performance advantage of unequal error protection over equal error protection also shrinks.

A Software Complexity Measurement Technique for Object-Oriented Reverse Engineering (객체지향 역공학을 위한 소프트웨어 복잡도 측정 기법)

  • Kim Jongwan;Hwang Chong-Sun
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.9
    • /
    • pp.847-852
    • /
    • 2005
  • Over the last decade, numerous complexity measurement techniques for object-oriented (OO) software systems have been proposed to manage the effects of OO code. These techniques may be based on source-code analysis, such as WMC (Weighted Methods per Class) and LCOM (Lack of Cohesion in Methods), but they are limited to counting the number of functions (in C++). We suggest a new weighting method that also considers the number of parameters, the return value, and its data type. We then present an effective complexity measurement technique based on the weight of class interfaces to provide guidelines for measuring the class complexity of OO code in reverse engineering. The results of this research show that the proposed complexity measurement technique, ECC (Enhanced Class Complexity), is consistent and accurate in a C++ environment.
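The abstract does not spell out ECC's actual weighting scheme, so the following is a hypothetical interface-weight metric in the same spirit: each method contributes one point, plus one per parameter, plus one if a return type is declared. It is sketched over Python source rather than the paper's C++.

```python
import ast

def interface_weight(class_source):
    """Hypothetical class-interface complexity: weights each method by its
    parameter count and declared return type (a stand-in for the
    parameter/return-type weighting the abstract describes)."""
    tree = ast.parse(class_source)
    total = 0
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            total += 1 + len(node.args.args) + (1 if node.returns else 0)
    return total

cls = """
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def norm(self) -> float:
        return (self.x ** 2 + self.y ** 2) ** 0.5
"""
```

Here `__init__` scores 1 + 3 parameters = 4 and `norm` scores 1 + 1 parameter + 1 return annotation = 3, so the class weighs 7; unlike a bare method count, the metric distinguishes wide interfaces from narrow ones.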

The Relationship between Syntactic Complexity Indices and Scores on Language Use in the Analytic Rating Scale (통사적 복잡성과 분석적 척도의 언어 사용 점수간의 관계 탐색)

  • Young-Ju Lee
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.5
    • /
    • pp.229-235
    • /
    • 2023
  • This study investigates the relationship between syntactic complexity indices and scores on language use in Jacobs et al.'s (1981) analytic rating scale. Syntactic complexity indices obtained from the TAASSC program and 440 essays written by EFL students from the ICNALE corpus were analyzed. Specifically, this study explores the relationship between scores on language use and Lu's (2011) traditional syntactic complexity indices, phrasal complexity indices, and clausal complexity indices, respectively. Stepwise regression analysis showed that the phrasal complexity indices were the best predictor of scores on language use, although the variance in language-use scores they explained was relatively small compared with the previous study. Implications of the findings for writing instruction (i.e., syntactic structures at the phrase level) were also discussed.