• Title/Summary/Keyword: complexity function

The Relationship of Complexity and Order in Determining Aesthetic Preference in Architectural Form

  • Whang, Hee-Joon
    • Architectural research
    • /
    • v.13 no.4
    • /
    • pp.19-30
    • /
    • 2011
  • This investigation, based on empirical research, examined the role of complexity and order in the aesthetic experience of architectural forms. The basic assumption of this study was that the perception of architectural form is a process of interpreting a pattern in a reductive way. Thus, perceptual arousal is not determined by the absolute complexity of a configuration; rather, the actual perceived complexity is a function of the organization of the system (order). In addition, complexity and order were defined and categorized into four variables according to their significant characteristics: simple order, complex order, random complexity, and lawful complexity. The series of experiments confirmed that there is an optimal point on the psychological complexity dimension. By demonstrating that consensual and individual aesthetic preference can be measured as a unimodal function of complexity, the results of the experiments indicated that complexity and orderliness are effective design factors for enhancing the aesthetics of a building facade. This investigation offered a conceptual framework relating the physical factor (architectural form) and the psychological factors (complexity and order) operating in the aesthetic experience of building facades.
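
The unimodal preference curve reported in this abstract is conventionally pictured as an inverted-U (in the spirit of Berlyne's arousal theory). A minimal illustrative model is sketched below; the quadratic form and the symbols $P$, $C$, $C^{*}$, $k$ are assumptions for illustration, not the paper's own formulation.

```latex
% Illustrative inverted-U (unimodal) preference model; the quadratic
% form and symbols are assumptions, not taken from the paper:
%   P  = aesthetic preference, C = perceived complexity,
%   C* = optimal complexity level, k > 0 a sensitivity constant.
P(C) \;=\; P_{\max} \;-\; k\,\bigl(C - C^{*}\bigr)^{2}
```

Preference rises with perceived complexity up to the optimum $C^{*}$ and falls beyond it, which is the unimodal shape the experiments report.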

WHAT CAN WE SAY ABOUT THE TIME COMPLEXITY OF ALGORITHMS?

  • Park, Chin-Hong
    • Journal of applied mathematics & informatics
    • /
    • v.8 no.3
    • /
    • pp.959-973
    • /
    • 2001
  • We discuss one of the techniques needed to analyze algorithms: the big-O function technique. There are two measures of the efficiency of an algorithm. One is the time used by a computer to solve the problem using the algorithm when the input values are of a specified size. The other is the amount of computer memory required to implement the algorithm when the input values are of a specified size. We mainly restrict our attention to time complexity. Determining the time complexity of nonlinear problems in numerical analysis appears to be almost impossible.
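
As a hedged illustration of the big-O technique described above, the sketch below compares operation counts of two simple algorithms at several input sizes; the functions and counters are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: compare operation counts of two algorithms at
# several input sizes to read off their big-O growth rates.
# The functions and counters are illustrative assumptions,
# not taken from the paper.

def selection_sort_ops(n: int) -> int:
    """Comparisons selection sort performs on n items: Theta(n^2)."""
    return n * (n - 1) // 2

def binary_search_ops(n: int) -> int:
    """Worst-case comparisons of binary search on n items: Theta(log n)."""
    ops = 0
    while n > 1:
        n //= 2
        ops += 1
    return ops

for n in (10, 100, 1000, 10000):
    print(n, selection_sort_ops(n), binary_search_ops(n))
# The quadratic count grows ~100x per 10x increase in n, i.e. O(n^2);
# the logarithmic count grows by a constant additive step, i.e. O(log n).
```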

ON COMPLEXITY ANALYSIS OF THE PRIMAL-DUAL INTERIOR-POINT METHOD FOR SECOND-ORDER CONE OPTIMIZATION PROBLEM

  • Choi, Bo-Kyung;Lee, Gue-Myung
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.14 no.2
    • /
    • pp.93-111
    • /
    • 2010
  • The purpose of this paper is to obtain new complexity results for a second-order cone optimization (SOCO) problem. We define a proximity function for the SOCO problem by a kernel function. Furthermore, we formulate an algorithm for a large-update primal-dual interior-point method (IPM) for the SOCO problem using the proximity function, give its complexity analysis, and then show that the new worst-case iteration bound for the IPM is $O(q\sqrt{N}(\log N)^{\frac{q+1}{q}}\log\frac{N}{\epsilon})$, where $q \geq 1$.
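
For context, kernel-based IPM analyses of this kind measure closeness to the central path with a proximity function built from a univariate kernel. The sketch below shows the generic construction, with the classical logarithmic kernel as a common example; the paper's specific kernel is not given in the abstract.

```latex
% Generic kernel-based proximity function (a common construction;
% the paper's specific kernel is not reproduced here):
%   v = scaled primal-dual variable vector with N components.
\Psi(v) \;=\; \sum_{i=1}^{N} \psi(v_i),
\qquad
\psi(t) \;=\; \frac{t^{2}-1}{2} - \log t
\quad \text{(classical logarithmic kernel)}
```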

Rule of Combination Using Expanded Approximation Algorithm (확장된 근사 알고리즘을 이용한 조합 방법)

  • Moon, Won Sik
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.9 no.3
    • /
    • pp.21-30
    • /
    • 2013
  • Powell-Miller theory is a good method for expressing and handling imprecise information, but it is limited in practice because its computational complexity grows exponentially. Accordingly, there have been several attempts to reduce the computational complexity, but with a side effect: the certainty factor fell. This study proposes an expanded approximation algorithm, a method that considers both the smallest supersets and the largest subsets, expanding the basic space into a space that includes the inverse set and reducing the approximation error. Using the proposed algorithm, the basic probability assignment values of subsets are allotted and added to the basic probability assignment values of the sets related to those subsets, so that the newly created subsets approximate more efficiently. As a result, the certainty values based on the basic probability assignment function come close to the actual optimal result, certainty in correctness is obtained, and the computational complexity is reduced, so that the exact information a system needs can be obtained.
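
The abstract works with basic probability assignments (bpa's) over sets and a rule of combination. Below is a minimal sketch of the classical, unapproximated rule of combination, which illustrates the exponential blow-up the paper's approximation targets; the frame, sets, and masses are illustrative, and this is not the paper's expanded algorithm.

```python
# Hedged sketch: classical rule of combination for two basic
# probability assignments (bpa's) over subsets of a frame of
# discernment. This is the standard, unapproximated rule whose
# exponential cost motivates approximation schemes like the one
# in the paper; it is not the paper's expanded algorithm.
from itertools import product

def combine(m1: dict[frozenset, float],
            m2: dict[frozenset, float]) -> dict[frozenset, float]:
    combined: dict[frozenset, float] = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        c = a & b
        if c:
            combined[c] = combined.get(c, 0.0) + wa * wb
        else:
            conflict += wa * wb        # mass falling on the empty set
    scale = 1.0 - conflict             # renormalize by non-conflicting mass
    return {s: w / scale for s, w in combined.items()}

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.5, frozenset({"a", "b"}): 0.5}
print(combine(m1, m2))   # masses on {a}, {b}, {a, b}
```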

AN ELIGIBLE KERNEL BASED PRIMAL-DUAL INTERIOR-POINT METHOD FOR LINEAR OPTIMIZATION

  • Cho, Gyeong-Mi
    • Honam Mathematical Journal
    • /
    • v.35 no.2
    • /
    • pp.235-249
    • /
    • 2013
  • It is well known that each kernel function defines a primal-dual interior-point method (IPM). Most polynomial-time interior-point algorithms for linear optimization (LO) are based on the logarithmic kernel function ([9]). In this paper we define a new eligible kernel function and propose a new search direction and proximity function based on this function for LO problems. We show that the new algorithm has $\mathcal{O}((\log p)^{\frac{5}{2}}\sqrt{n}\log n\log\frac{n}{\epsilon})$ and $\mathcal{O}(q^{\frac{3}{2}}(\log p)^{3}\sqrt{n}\log\frac{n}{\epsilon})$ iteration complexity for large- and small-update methods, respectively. These are currently the best known complexity results for such methods.
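
For context, in kernel-based IPMs the kernel induces both the proximity measure and the search direction. A generic sketch of the standard scaled Newton system is given below; the notation is assumed for illustration, and the paper's eligible kernel is not reproduced.

```latex
% Standard kernel-based search direction (generic scheme, notation
% assumed for illustration): the kernel psi induces the barrier
% Psi(v) = sum_i psi(v_i), and the scaled direction (d_x, d_s) solves
\bar{A}\,d_x = 0,
\qquad
\bar{A}^{T}\Delta y + d_s = 0,
\qquad
d_x + d_s = -\nabla\Psi(v).
% Replacing the logarithmic kernel by an eligible kernel changes the
% direction and the proximity measure, and hence the iteration bounds.
```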

Cube selection using function complexity and minimization of two-level Reed-Muller expressions (함수복잡도를 이용한 큐브선택과 이단계 리드뮬러표현의 최소화)

  • Lee, Gueesang
    • Journal of the Korean Institute of Telematics and Electronics A
    • /
    • v.32A no.6
    • /
    • pp.104-110
    • /
    • 1995
  • In this paper, an effective method for the minimization of two-level Reed-Muller expressions by cube selection which considers functional complexity is presented. In contrast to previous methods, which use Xlinking operations to join two cubes for minimization, the cube selection method selects cubes one at a time until they cover the ON-set of the given function. This method works for most benchmark circuits, but for parity-type functions it shows poor performance. To solve this problem, a cost function that computes the functional complexity, instead of only the size of the ON-set of the function, is used. The optimization is therefore performed considering how the true minterms are grouped together so that they can be realized by only a small number of cubes; in other words, it considers how the function changes and how that change affects the next optimization step. Experimental results show better performance in many cases, including parity-type functions, compared to previous results.
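
As a hedged illustration of cube selection over the ON-set, the sketch below greedily picks cubes for an XOR-of-products (Reed-Muller style) cover using the naive |ON-set| cost; the cubes and function are illustrative, and the paper's functional-complexity cost is only summarized in the abstract, not reproduced here.

```python
# Hedged sketch: greedy cube selection for a two-level Reed-Muller
# (XOR-of-products) expression, using the naive |ON-set| cost that
# the paper improves on. Cubes and minterms are illustrative.

def cube_minterms(cube: str) -> set[int]:
    """All minterms (as integers) covered by a cube like '1-0'."""
    mts = [0]
    for bit in cube:
        if bit == '-':
            mts = [m << 1 for m in mts] + [(m << 1) | 1 for m in mts]
        else:
            mts = [(m << 1) | int(bit) for m in mts]
    return set(mts)

def greedy_esop(on_set: set[int], candidate_cubes: list[str]):
    """Greedily pick cubes; adding a cube XORs its minterms in."""
    remaining, chosen = set(on_set), []
    while remaining:
        best = min(candidate_cubes,
                   key=lambda c: len(remaining ^ cube_minterms(c)))
        nxt = remaining ^ cube_minterms(best)
        if len(nxt) >= len(remaining):
            break                      # no cube makes progress
        remaining, chosen = nxt, chosen + [best]
    return chosen, remaining

# 3-variable parity function: ON-set = minterms with an odd bit count.
on = {1, 2, 4, 7}
cubes = ['1--', '-1-', '--1', '11-', '1-1', '-11', '111']
print(greedy_esop(on, cubes))
```

Notably, on the 3-variable parity function this naive cost stalls after selecting one cube, leaving minterms uncovered; that is exactly the failure mode that motivates the paper's complexity-based cost function.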

Node Monitoring Algorithm with Piecewise Linear Function Approximation for Efficient LDPC Decoding (Node Monitoring 알고리듬과 NP 방법을 사용한 효율적인 LDPC 복호방법)

  • Suh, Hee-Jong
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.6 no.1
    • /
    • pp.20-26
    • /
    • 2011
  • In this paper, we propose an efficient algorithm for reducing the complexity of LDPC code decoding by using node monitoring (NM) and piecewise linear function approximation (NP). The NM algorithm is based on a new node-threshold method and on the message-passing algorithm. Piecewise linear function approximation is used to reduce the complexity further. The algorithm was simulated in order to verify its efficiency. Simulation results show that the complexity of the NM algorithm is reduced to about 20% of that of well-known methods.
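
As a hedged illustration of the piecewise linear approximation, the sketch below replaces the transcendental check-node function $\phi(x) = -\ln(\tanh(x/2))$ of sum-product decoding with table lookup and linear interpolation; the breakpoints are illustrative assumptions, and the paper's node-monitoring thresholds are not given in the abstract.

```python
# Hedged sketch: piecewise linear approximation of
# phi(x) = -ln(tanh(x/2)), the transcendental function in the
# check-node update of sum-product LDPC decoding. The breakpoints
# are illustrative; the paper's exact segmentation and
# node-monitoring thresholds are not stated in the abstract.
import numpy as np

xs = np.array([0.05, 0.5, 1.0, 2.0, 4.0, 7.0])   # breakpoints
ys = -np.log(np.tanh(xs / 2.0))                  # exact values there

def phi_approx(x: np.ndarray) -> np.ndarray:
    """Linear interpolation between precomputed samples of phi."""
    return np.interp(x, xs, ys)

x = np.linspace(0.05, 7.0, 5)
print(np.c_[x, phi_approx(x), -np.log(np.tanh(x / 2.0))])
# Table lookup + linear interpolation removes the log/tanh
# evaluations from each check-node update, which is where the
# complexity reduction comes from.
```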

NEW INTERIOR POINT METHODS FOR SOLVING $P_*(\kappa)$ LINEAR COMPLEMENTARITY PROBLEMS

  • Cho, You-Young;Cho, Gyeong-Mi
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.13 no.3
    • /
    • pp.189-202
    • /
    • 2009
  • In this paper we propose new primal-dual interior point algorithms for $P_*(\kappa)$ linear complementarity problems based on a new class of kernel functions which contains the kernel function in [8] as a special case. We show that the iteration bounds are $O((1+2\kappa)n^{\frac{9}{14}}\log\frac{n\mu^0}{\epsilon})$ for large-update and $O((1+2\kappa)\sqrt{n}\log\frac{n\mu^0}{\epsilon})$ for small-update methods, respectively. This iteration complexity for large-update methods improves on that of the method based on the classical logarithmic kernel function by a factor of $n^{\frac{5}{14}}$. For small-update methods, the iteration complexity is the best known bound for such methods.
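
For context, the problem class can be stated as follows; this is the standard $P_*(\kappa)$ formulation, not anything specific to this paper.

```latex
% Standard P_*(kappa) linear complementarity problem: find (x, s) with
s = Mx + q, \qquad x \ge 0, \quad s \ge 0, \quad x^{T}s = 0,
% where M is a P_*(kappa) matrix, i.e. for all x in R^n
(1 + 4\kappa)\sum_{i \in I_{+}(x)} x_i (Mx)_i
  \;+\; \sum_{i \in I_{-}(x)} x_i (Mx)_i \;\ge\; 0,
% with I_+(x) = { i : x_i(Mx)_i > 0 } and I_-(x) = { i : x_i(Mx)_i < 0 }.
```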

Effects of Model Complexity, Structure and Objective Function on Calibration Process (모형의 복잡성, 구조 및 목적함수가 모형 검정에 미치는 영향)

  • Choi, Kyung Sook
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.45 no.4
    • /
    • pp.89-97
    • /
    • 2003
  • Using inference models developed for estimating the parameters necessary to implement the Runoff Block of the Stormwater Management Model (SWMM), a number of alternative inference scenarios were developed to assess the influence of inference model complexity and structure on the calibration of the catchment modelling system. These inference models varied from the assumption of a spatially invariant value (a catchment average) to spatially variable values, with each subcatchment having its own unique values. Furthermore, the influence of different measures of deviation between the recorded information and the simulation predictions was considered. The results of these investigations indicate that model performance is influenced more by model structure than by complexity, and that control parameter values depend strongly on the objective function selected, as this factor was the most influential for both the initial estimates and the final results.
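
As a hedged illustration of how the choice of objective function shifts calibrated parameter values, the sketch below calibrates a toy one-parameter model under two deviation measures; the model, synthetic data, and objectives are illustrative assumptions, not the SWMM Runoff Block or the paper's scenarios.

```python
# Hedged sketch: the choice of objective function shifts a calibrated
# parameter. The linear-reservoir model, synthetic data, and the two
# objectives below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 2.0, size=100)            # synthetic rainfall

def simulate(k: float) -> np.ndarray:
    """Toy linear reservoir: q[t] = (1 - k) * q[t-1] + k * rain[t]."""
    q = np.zeros_like(rain)
    for t in range(1, len(rain)):
        q[t] = (1 - k) * q[t - 1] + k * rain[t]
    return q

observed = simulate(0.3) + rng.normal(0, 0.4, size=100)

def rmse(sim, obs):                 # weights all flows equally
    return np.sqrt(np.mean((sim - obs) ** 2))

def peak_error(sim, obs):           # emphasizes the high flows
    top = obs > np.percentile(obs, 90)
    return np.sqrt(np.mean((sim[top] - obs[top]) ** 2))

ks = np.linspace(0.05, 0.95, 181)
best_rmse = ks[np.argmin([rmse(simulate(k), observed) for k in ks])]
best_peak = ks[np.argmin([peak_error(simulate(k), observed) for k in ks])]
print(best_rmse, best_peak)         # the two criteria favor different k
```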

A Software Size Estimation Using Weighted FFP (가중치를 적용한 FFP 소프트웨어 규모 측정)

  • Park Juseok
    • Journal of Internet Computing and Services
    • /
    • v.6 no.2
    • /
    • pp.37-47
    • /
    • 2005
  • Most methods of estimating the size of software are based on the functions provided to customers, and the complexity of each function is considered when a score is assigned to it. The FFP technique has the advantage of being applicable to a broad range of domains, such as data management, real-time systems, and algorithmic software, but it has the disadvantage that size estimation does not weight the necessary function elements. This paper proposes a method of estimating software size that considers the complexity of each function element in the full function point counting method, applicable both to newly developed projects and to maintenance projects. The validity of the proposed method was verified against function point counts derived from surveyed data. The result was that weighting the function elements, the attributes used in software size estimation, yielded better size estimates than applying other weights.
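
As a hedged illustration of a weighted FFP-style count, the sketch below applies per-element weights to counted data movements; the weights are hypothetical placeholders, since the abstract does not state the weights the paper derives.

```python
# Hedged sketch: a weighted full function point (FFP) style count.
# Standard FFP/COSMIC counting assigns one unit per data movement
# (entry, exit, read, write); the weights below are hypothetical
# placeholders for the complexity weights the paper derives.

WEIGHTS = {"entry": 1.0, "exit": 1.2, "read": 0.8, "write": 1.5}

def weighted_ffp(processes: list[dict[str, int]]) -> float:
    """Sum weighted data movements over all functional processes."""
    return sum(WEIGHTS[kind] * count
               for proc in processes
               for kind, count in proc.items())

# Two illustrative functional processes with their movement counts.
app = [
    {"entry": 2, "read": 3, "exit": 1},              # e.g. a query screen
    {"entry": 1, "read": 1, "write": 2, "exit": 1},  # e.g. an update
]
print(weighted_ffp(app))   # weighted size in function-point units
# With all weights set to 1.0 this reduces to the unweighted FFP count.
```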
