• Title/Abstract/Keyword: complexity function

823 search results (processing time 0.031 s)

The Relationship of Complexity and Order in Determining Aesthetic Preference in Architectural Form

  • Whang, Hee-Joon
    • Architectural research
    • /
    • Vol. 13, No. 4
    • /
    • pp.19-30
    • /
    • 2011
  • This investigation, based on empirical research, examined the role of complexity and order in the aesthetic experience of architectural forms. The basic assumption of this study was that the perception of architectural form is a process of interpreting a pattern in a reductive way. Thus, perceptual arousal is not determined by the absolute complexity of a configuration; rather, the actual perceived complexity is a function of the organization of the system (order). In addition, complexity and order were defined and categorized into four variables according to their significant characteristics: simple order, complex order, random complexity, and lawful complexity. The series of experiments confirmed that there is an optimal point on the psychological complexity dimension. By demonstrating that consensual and individual aesthetic preference can be measured as a unimodal function of complexity, the results of the experiments indicated that complexity and orderliness are effective design factors for enhancing the aesthetics of a building facade. This investigation offered a conceptual framework that relates the physical factors (architectural form) and psychological factors (complexity and order) operating in the aesthetic experience of building facades.

WHAT CAN WE SAY ABOUT THE TIME COMPLEXITY OF ALGORITHMS?

  • Park, Chin-Hong
    • Journal of applied mathematics & informatics
    • /
    • Vol. 8, No. 3
    • /
    • pp.959-973
    • /
    • 2001
  • We discuss one of the techniques needed to analyze algorithms, the big-O notation technique. The efficiency of an algorithm is measured in two ways. One is the time used by a computer to solve the problem with this algorithm when the input values are of a specified size. The other is the amount of computer memory required to implement the algorithm when the input values are of a specified size. We restrict our attention mainly to time complexity. Determining the time complexity of nonlinear problems in numerical analysis appears to be almost impossible.
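As a point of reference (not from the paper), big-O growth can be illustrated by counting the dominant operations of two search strategies; the functions below are an illustrative sketch:

```python
# Illustrative sketch: counting dominant operations to contrast
# O(n) linear search with O(log n) binary search in the worst case.

def linear_search_steps(n):
    # Worst case: every one of the n elements is inspected once.
    return n

def binary_search_steps(n):
    # Worst case: the search interval halves until one element remains.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

for n in [8, 1024, 1_000_000]:
    print(n, linear_search_steps(n), binary_search_steps(n))
```

Doubling the input adds only one step to the logarithmic count but doubles the linear count, which is the behavior the big-O classes capture.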

ON COMPLEXITY ANALYSIS OF THE PRIMAL-DUAL INTERIOR-POINT METHOD FOR SECOND-ORDER CONE OPTIMIZATION PROBLEM

  • Choi, Bo-Kyung;Lee, Gue-Myung
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • Vol. 14, No. 2
    • /
    • pp.93-111
    • /
    • 2010
  • The purpose of this paper is to obtain new complexity results for a second-order cone optimization (SOCO) problem. We define a proximity function for the SOCO by a kernel function. Furthermore, we formulate an algorithm for a large-update primal-dual interior-point method (IPM) for the SOCO by using the proximity function and give its complexity analysis; we then show that the new worst-case iteration bound for the IPM is $O(q\sqrt{N}({\log}N)^{\frac{q+1}{q}}{\log}\frac{N}{\epsilon})$, where $q{\geqq}1$.
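As an illustrative aside (hidden constants dropped, parameter values assumed), the abstract's worst-case bound can be evaluated numerically to see how it scales with the kernel parameter q and problem size N:

```python
import math

# Illustrative sketch: evaluating the worst-case iteration bound
# O(q * sqrt(N) * (log N)^((q+1)/q) * log(N/eps)) from the abstract,
# with the hidden constant omitted and assumed values for q, N, eps.

def iteration_bound(q, N, eps):
    return q * math.sqrt(N) * math.log(N) ** ((q + 1) / q) * math.log(N / eps)

for q in (1, 2, 4):
    print(q, iteration_bound(q, N=10_000, eps=1e-6))
```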

확장된 근사 알고리즘을 이용한 조합 방법 (Rule of Combination Using Expanded Approximation Algorithm)

  • 문원식
    • 디지털산업정보학회논문지
    • /
    • Vol. 9, No. 3
    • /
    • pp.21-30
    • /
    • 2013
  • Powell-Miller theory is a good method for expressing and treating incorrect information, but it is limited in that it takes too much time to apply to actual situations, because its computational complexity grows as an exponential function. Accordingly, there have been several attempts to reduce the computational complexity, but with the side effect that the certainty factor falls. This study proposes an expanded approximation algorithm: a method that considers both the smallest supersets and the largest subsets, expanding the basic space into a space that includes the inverse set so as to reduce the approximation error. Using the proposed algorithm, the basic probability assignment value of each subset is allotted and added to the basic probability assignment values of the sets related to that subset, so that newly created subsets become approximations more efficiently. As a result, the certainty value based on the basic probability assignment function comes close to the actual optimal result, certainty in correctness is obtained while the computational complexity is reduced, and the exact information a system needs can be obtained.
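For reference, the standard unapproximated rule of combination over basic probability assignments, whose cost grows exponentially with the number of focal elements, can be sketched as follows. (Dempster's classical rule is assumed here as the baseline; the paper's expanded approximation algorithm itself is not reproduced.)

```python
from itertools import product

# Sketch of the classical (exponential-cost) rule of combination for
# basic probability assignments: masses are dicts mapping frozenset
# focal elements to probabilities; conflicting mass is normalized away.

def combine(m1, m2):
    raw = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass landing on the empty set
    # Normalize so the combined masses again sum to 1.
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

m1 = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
m2 = {frozenset({'b'}): 0.5, frozenset({'a', 'b'}): 0.5}
print(combine(m1, m2))
```

Every pair of focal elements must be intersected, which is the exponential blow-up the approximation methods discussed above try to avoid.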

AN ELIGIBLE KERNEL BASED PRIMAL-DUAL INTERIOR-POINT METHOD FOR LINEAR OPTIMIZATION

  • Cho, Gyeong-Mi
    • 호남수학학술지
    • /
    • Vol. 35, No. 2
    • /
    • pp.235-249
    • /
    • 2013
  • It is well known that each kernel function defines a primal-dual interior-point method (IPM). Most polynomial-time interior-point algorithms for linear optimization (LO) are based on the logarithmic kernel function ([9]). In this paper we define a new eligible kernel function and propose a new search direction and proximity function based on this function for LO problems. We show that the new algorithm has $\mathcal{O}(({\log}\;p)^{\frac{5}{2}}\sqrt{n}{\log}\;n\;{\log}\frac{n}{\epsilon})$ and $\mathcal{O}(q^{\frac{3}{2}}({\log}\;p)^3\sqrt{n}{\log}\;\frac{n}{\epsilon})$ iteration complexity for large- and small-update methods, respectively. These are currently the best known complexity results for such methods.
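For context, the classical logarithmic kernel function referenced in the abstract, and the proximity measure it induces, can be sketched as follows (the paper's new eligible kernel is not reproduced here):

```python
import math

# Sketch of the classical logarithmic kernel function,
# psi(t) = (t^2 - 1)/2 - log t, and the induced proximity measure
# Psi(v) = sum_i psi(v_i) used by kernel-based IPMs.

def psi_log(t):
    return (t * t - 1.0) / 2.0 - math.log(t)

def proximity(v):
    return sum(psi_log(t) for t in v)

# psi attains its minimum value 0 at t = 1, i.e. on the central path;
# the proximity grows as the scaled iterate v moves away from all-ones.
print(psi_log(1.0))
print(proximity([0.5, 1.0, 2.0]))
```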

함수복잡도를 이용한 큐브선택과 이단계 리드뮬러표현의 최소화 (Cube Selection Using Function Complexity and Minimization of Two-Level Reed-Muller Expressions)

  • Lee, Gueesang
    • 전자공학회논문지A
    • /
    • Vol. 32A, No. 6
    • /
    • pp.104-110
    • /
    • 1995
  • In this paper, an effective method for the minimization of two-level Reed-Muller expressions by cube selection that considers functional complexity is presented. In contrast to previous methods, which use Xlinking operations to join two cubes for minimization, the cube-selection method selects cubes one at a time until they cover the ON-set of the given function. This method works for most benchmark circuits, but for parity-type functions it shows poor performance. To solve this problem, a cost function that computes the functional complexity, instead of only the size of the ON-set of the function, is used. The optimization is therefore performed considering how the true minterms are grouped together so that they can be realized by only a small number of cubes; in other words, it considers how the function is changed and how the change affects the next optimization step. Experimental results show better performance in many cases, including parity-type functions, compared to previous results.

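As a related, well-known building block (not the paper's cube-selection algorithm), the positive-polarity Reed-Muller coefficients of a Boolean function can be computed from its truth table by a butterfly of XOR operations; a parity function, the abstract's hard case, is the natural test input:

```python
# Sketch: positive-polarity Reed-Muller transform, converting a truth
# table of length 2^n into the coefficients of the XOR-of-AND-cubes
# (Reed-Muller) expression by repeated halving with XOR butterflies.

def reed_muller_coeffs(truth_table):
    # truth_table: list of 0/1 of length 2^n, indexed by the input bits.
    c = list(truth_table)
    n = len(c)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                c[j + step] ^= c[j]  # XOR lower half into upper half
        step *= 2
    return c

# Parity of two variables, f = x1 XOR x0: exactly the two single-literal
# cubes x0 and x1 appear in the Reed-Muller expression.
print(reed_muller_coeffs([0, 1, 1, 0]))  # → [0, 1, 1, 0]
```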

Node Monitoring 알고리듬과 NP 방법을 사용한 효율적인 LDPC 복호방법 (Node Monitoring Algorithm with Piecewise Linear Function Approximation for Efficient LDPC Decoding)

  • 서희종
    • 한국전자통신학회논문지
    • /
    • Vol. 6, No. 1
    • /
    • pp.20-26
    • /
    • 2011
  • In this paper, we propose an efficient algorithm for reducing the complexity of LDPC code decoding, using a node monitoring (NM) algorithm and piecewise linear function approximation. The NM algorithm is based on a new node-threshold method and the message-passing algorithm; applying piecewise linear function approximation reduces the complexity of the algorithm further. Simulations were carried out to verify the efficiency of the proposed algorithm. The simulation results show that it is about 20% more efficient than well-known existing methods.
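A sketch of the idea, under the assumption that the piecewise linear approximation targets the check-node function phi(x) = -ln tanh(x/2) of sum-product decoding (the segment breakpoints and slopes below are illustrative, not the paper's):

```python
import math

# phi(x) = -ln(tanh(x/2)): the costly nonlinearity in sum-product
# check-node updates; note that phi is its own inverse.
def phi(x):
    return -math.log(math.tanh(x / 2.0))

# Crude two-segment linear approximation, continuous at x = 1.
# (Illustrative coefficients, assumed rather than taken from the paper.)
def phi_pwl(x):
    if x < 1.0:
        return 3.244 - 2.472 * x          # steep segment near the origin
    return max(0.0, 0.772 - 0.2451 * (x - 1.0))  # gentle tail segment

for x in (0.5, 1.0, 2.0, 4.0):
    print(x, phi(x), phi_pwl(x))
```

Replacing the transcendental evaluation with a table of linear segments is what removes most of the per-message cost in such decoders.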

NEW INTERIOR POINT METHODS FOR SOLVING $P_*(\kappa)$ LINEAR COMPLEMENTARITY PROBLEMS

  • Cho, You-Young;Cho, Gyeong-Mi
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • Vol. 13, No. 3
    • /
    • pp.189-202
    • /
    • 2009
  • In this paper we propose new primal-dual interior point algorithms for $P_*(\kappa)$ linear complementarity problems based on a new class of kernel functions which contains the kernel function in [8] as a special case. We show that the iteration bounds are $O((1+2\kappa)n^{\frac{9}{14}}\;{\log}\;\frac{n{\mu}^0}{\epsilon})$ for large-update and $O((1+2\kappa)\sqrt{n}\;{\log}\;\frac{n{\mu}^0}{\epsilon})$ for small-update methods, respectively. The iteration complexity for large-update methods improves on that of the method based on the classical logarithmic kernel function by a factor of $n^{\frac{5}{14}}$. For small-update methods, the iteration complexity is the best known bound for such methods.

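Constants aside, the claimed improvement factor can be checked directly: a classical large-update bound carrying a factor n, divided by the new bound carrying n^(9/14), leaves exactly n^(5/14). A sketch with hypothetical parameter values:

```python
import math

# Illustrative check (hidden constants omitted, values hypothetical):
# the new large-update bound O((1+2k) n^(9/14) log(n mu0/eps)) versus a
# classical-kernel bound O((1+2k) n log(n mu0/eps)); their ratio is the
# n^(5/14) improvement factor quoted in the abstract.

def new_bound(kappa, n, mu0, eps):
    return (1 + 2 * kappa) * n ** (9 / 14) * math.log(n * mu0 / eps)

def classical_bound(kappa, n, mu0, eps):
    return (1 + 2 * kappa) * n * math.log(n * mu0 / eps)

n = 10_000
ratio = classical_bound(0.5, n, 1.0, 1e-8) / new_bound(0.5, n, 1.0, 1e-8)
print(ratio, n ** (5 / 14))
```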

모형의 복잡성, 구조 및 목적함수가 모형 검정에 미치는 영향 (Effects of Model Complexity, Structure and Objective Function on Calibration Process)

  • Choi, Kyung Sook
    • 한국농공학회지
    • /
    • Vol. 45, No. 4
    • /
    • pp.89-97
    • /
    • 2003
  • Using inference models developed for estimating the parameters necessary to implement the Runoff Block of the Stormwater Management Model (SWMM), a number of alternative inference scenarios were developed to assess the influence of inference-model complexity and structure on the calibration of the catchment modelling system. These inference models varied from the assumption of a spatially invariant value (catchment average) to spatially variable values, with each subcatchment having its own unique values. Furthermore, the influence of different measures of deviation between the recorded information and the simulation predictions was considered. The results of these investigations indicate that model performance is influenced more by model structure than by complexity, and that control parameter values depend strongly on the objective function selected, as this factor was the most influential for both the initial estimates and the final results.
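An illustrative toy (entirely hypothetical data and model, far simpler than SWMM) of how the choice of objective function alone shifts the calibrated parameter:

```python
# Sketch: calibrating a single scale parameter p against recorded data
# under two different objective functions (sum of squared errors versus
# mean absolute error); the selected objective changes the "best" p.
# All numbers are hypothetical, chosen only to illustrate the effect.

recorded = [1.0, 2.0, 3.0, 10.0]                  # observations (with an outlier)
def simulated(p):
    return [p * x for x in [1.0, 2.0, 3.0, 4.0]]  # trivial stand-in model

def sse(p):
    return sum((r - s) ** 2 for r, s in zip(recorded, simulated(p)))

def mae(p):
    return sum(abs(r - s) for r, s in zip(recorded, simulated(p))) / len(recorded)

grid = [i / 100 for i in range(50, 301)]  # brute-force parameter grid
best_sse = min(grid, key=sse)
best_mae = min(grid, key=mae)
print(best_sse, best_mae)  # the two objectives prefer different p
```

The squared-error objective is pulled toward the outlier while the absolute-error objective is not, which is one concrete way the deviation measure dominates the calibrated values.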

가중치를 적용한 FFP 소프트웨어 규모 측정 (A Software Size Estimation Using Weighted FFP)

  • 박주석
    • 인터넷정보학회논문지
    • /
    • Vol. 6, No. 2
    • /
    • pp.37-47
    • /
    • 2005
  • Most software size estimation techniques are based on the functionality to be delivered to the user, and complexity is considered when assigning a score to each function. The Full Function Point (FFP) technique has the advantage of being applicable to a wide range of domains, including data processing, real-time systems, and algorithmic software, but it has the drawback of not assigning weights to the functional components used to estimate size. This paper proposes a method for estimating software size by applying complexity-based weights to each functional component in the FFP calculation, for both newly developed and maintenance projects. The validity of the proposed method was verified using measured function-point data. The verification showed that applying different weights to the functional components used in size estimation yields better size estimates.

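The weighting idea can be sketched in a few lines (the component names and weight values below are hypothetical, chosen only to illustrate the proposal, not measured FFP weights):

```python
# Sketch: software size as a weighted sum of counted functional
# components, instead of the unweighted sum of plain FFP. Component
# names and weights are illustrative assumptions, not calibrated values.

counts = {'entry': 10, 'exit': 8, 'read': 12, 'write': 5, 'control': 4}
weights = {'entry': 1.0, 'exit': 1.2, 'read': 0.8, 'write': 1.5, 'control': 2.0}

unweighted = sum(counts.values())                       # plain FFP total
weighted = sum(counts[k] * weights[k] for k in counts)  # weighted total
print(unweighted, weighted)
```

With all weights equal to 1 the two totals coincide, so the weighted form strictly generalizes the original calculation.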