• Title/Summary/Keyword: Global Approximate Optimization

Utilizing Soft Computing Techniques in Global Approximate Optimization (전역근사최적화를 위한 소프트컴퓨팅기술의 활용)

  • 이종수;장민성;김승진;김도영
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2000.04b
    • /
    • pp.449-457
    • /
    • 2000
  • The paper describes a study of global approximate optimization utilizing soft computing techniques such as genetic algorithms (GA's), neural networks (NN's), and fuzzy inference systems (FIS). GA's provide an increased probability of locating a global optimum over an entire design space characterized by multimodality and nonlinearity. NN's can be used as a tool for function approximation, providing a rapid reanalysis model for subsequent use in design optimization. FIS facilitates handling quantitative design information in cases where training data samples are not sufficiently provided or uncertain information is included in design modeling. Properties of the soft computing techniques affect the quality of the global approximate model. Evolutionary fuzzy modeling (EFM) and the adaptive neuro-fuzzy inference system (ANFIS) are briefly introduced for structural optimization problems in this context. The paper shows that the success of EFM depends on how optimally the fuzzy membership parameters are selected and how the fuzzy rules are generated.

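The genetic-algorithm component that the abstract above credits with locating global optima in multimodal design spaces can be illustrated with a minimal real-coded GA. The sketch below is not the authors' implementation; the Rastrigin test function, the operator choices, and all parameter values are assumptions made only to show the idea.

```python
# A minimal real-coded genetic algorithm on a multimodal test function.
# Illustrative sketch only; the test function and parameters are assumptions.
import numpy as np

def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def ga_minimize(f, dim=2, bounds=(-5.12, 5.12), pop_size=50,
                generations=200, mutation_rate=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([f(ind) for ind in pop])
        # Tournament selection: keep the better of two random individuals.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fitness[i] < fitness[j], i, j)]
        # Blend (arithmetic) crossover between consecutive parents.
        alpha = rng.uniform(size=(pop_size, dim))
        children = alpha * parents + (1 - alpha) * parents[::-1]
        # Gaussian mutation keeps diversity over the whole design space.
        mask = rng.uniform(size=children.shape) < mutation_rate
        children = np.clip(children + mask * rng.normal(0, 0.3, children.shape), lo, hi)
        # Elitism: carry the best individual into the next generation.
        children[0] = pop[np.argmin(fitness)]
        pop = children
    best = min(pop, key=f)
    return best, f(best)

x_best, f_best = ga_minimize(rastrigin)
print(x_best, f_best)   # should approach the global optimum at the origin
```

Tournament selection and blend crossover are only one of many operator combinations; the point is that the population searches the whole bounded design space rather than a single local neighborhood.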

A B-spline based Branch & Bound Algorithm for Global Optimization (전역 최적화를 위한 B-스플라인 기반의 Branch & Bound알고리즘)

  • Park, Sang-Kun
    • Korean Journal of Computational Design and Engineering
    • /
    • v.15 no.1
    • /
    • pp.24-32
    • /
    • 2010
  • This paper introduces a B-spline based branch & bound algorithm for global optimization. Branch & bound is a well-known algorithm paradigm for global optimization, whose key components are the subdivision scheme and the bound calculation scheme. For this, we consider the B-spline hypervolume to approximate an objective function defined in a design space. This model enables us to subdivide the design space and to compute the upper and lower bounds of each subspace, where the bound calculation is based on LHS sampling points. We also describe a search tree to represent the search process for the optimal solution, and explain the iteration steps and the conditions necessary to carry out the algorithm. Finally, the performance of the proposed algorithm is examined on test problems that cover most difficulties faced in the global optimization area. The results show that the proposed algorithm is a complete algorithm that does not rely on heuristics, provides an approximate global solution within prescribed tolerances, and shows good potential for large-scale NP-hard optimization.
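The subdivide-bound-prune skeleton described in the abstract above can be sketched as follows. The paper derives its bounds from a B-spline hypervolume model of the objective; in this simplified sketch, each box is instead bounded by Latin hypercube samples plus a Lipschitz-style margin, so the bound rule, the Lipschitz constant, and the test function are all assumptions made for illustration.

```python
# Skeleton of a branch & bound search over axis-aligned boxes.
# The bound rule below (sample minimum minus a Lipschitz-style margin) is a
# stand-in for the paper's B-spline hypervolume bounds.
import heapq
import numpy as np

def lhs(box, n, rng):
    """Latin hypercube samples inside a box given as a list of (lo, hi) tuples."""
    dim = len(box)
    u = np.column_stack([rng.permutation(n) for _ in range(dim)])
    u = (u + rng.uniform(size=(n, dim))) / n          # one point per stratum
    lo = np.array([b[0] for b in box]); hi = np.array([b[1] for b in box])
    return lo + u * (hi - lo)

def branch_and_bound(f, bounds, lipschitz=8.0, tol=1e-2, n_samples=20, seed=0):
    rng = np.random.default_rng(seed)

    def bound(box):
        vals = np.array([f(p) for p in lhs(box, n_samples, rng)])
        radius = 0.5 * np.linalg.norm([hi - lo for lo, hi in box])
        return vals.min() - lipschitz * radius, vals.min()   # (lower, upper)

    best_upper = np.inf
    heap = [(bound(bounds)[0], bounds)]
    while heap:
        lower, box = heapq.heappop(heap)
        if lower > best_upper - tol:          # prune: cannot improve the incumbent
            continue
        # Subdivide along the longest edge of the box.
        k = int(np.argmax([hi - lo for lo, hi in box]))
        lo_k, hi_k = box[k]
        mid = 0.5 * (lo_k + hi_k)
        for half in ([(lo_k, mid)], [(mid, hi_k)]):
            child = box[:k] + half + box[k + 1:]
            child_lower, child_upper = bound(child)
            best_upper = min(best_upper, child_upper)
            if child_lower < best_upper - tol:
                heapq.heappush(heap, (child_lower, child))
    return best_upper   # approximate global minimum value (argmin omitted for brevity)

# Illustrative 1-D run on a multimodal function.
print(branch_and_bound(lambda x: float(x[0]**2 - np.cos(3 * x[0])), [(-2.0, 2.0)]))
```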

Efficient Adaptive Global Optimization for Constrained Problems (구속조건이 있는 문제의 적응 전역최적화 효율 향상에 대한 연구)

  • Ahn, Joong-Ki;Lee, Ho-Il;Lee, Sung-Mhan
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.38 no.6
    • /
    • pp.557-563
    • /
    • 2010
  • This paper addresses the issue of adaptive global optimization using the Kriging metamodel, known as EGO (Efficient Global Optimization). The algorithm adaptively chooses where to generate subsequent samples based on an explicit trade-off between reduction of global uncertainty and exploration of the region of interest. A strategy is proposed that saves computational cost by using expectations derived from the probabilistic nature of the approximate model. At every iteration, a candidate test point that appears to be feasible/inactive or has little possibility of improving the minimum is identified and excluded from the update of the approximate models. In this way, computational cost is saved without loss of accuracy.
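EGO-style methods typically quantify the trade-off mentioned above through the expected improvement of the objective, weighted by the probability that a constraint is satisfied. The sketch below shows that standard acquisition function as a generic illustration; it is not the paper's specific point-exclusion strategy, and it assumes the Kriging posterior mean and standard deviation are already available at the candidate point.

```python
# Constrained expected improvement as commonly used with Kriging/EGO.
# mu, sigma: Kriging posterior mean and standard deviation of the objective;
# mu_g, sigma_g: the same for a constraint g(x) <= 0.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    if sigma <= 0.0:
        return 0.0
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def probability_of_feasibility(mu_g, sigma_g):
    # P[g(x) <= 0] under the Gaussian posterior of the constraint metamodel.
    if sigma_g <= 0.0:
        return float(mu_g <= 0.0)
    return norm.cdf(-mu_g / sigma_g)

def constrained_ei(mu, sigma, f_min, mu_g, sigma_g):
    return expected_improvement(mu, sigma, f_min) * probability_of_feasibility(mu_g, sigma_g)
```

A candidate whose constrained criterion falls below a small threshold could then be skipped when updating the metamodels, echoing the exclusion idea described in the abstract.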

Dynamic response optimization using approximate line search (근사 선탐색을 이용한 동적 반응 최적화)

  • Kim, Min-Soo;Choi, Dong-hoon
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.22 no.4
    • /
    • pp.811-825
    • /
    • 1998
  • An approximate line search is presented for dynamic response optimization with the Augmented Lagrange Multiplier (ALM) method. This study employs an approximate augmented Lagrangian, which can improve the efficiency of the ALM method while maintaining its global convergence. Although the approximate augmented Lagrangian is composed of only the linearized cost and constraint functions, the quality of this approximation should be good, since the approximate penalty term is found to have almost second-order accuracy near the optimum. Typical unconstrained optimization algorithms such as quasi-Newton and conjugate gradient methods are directly used to find exact search directions, and a golden section method followed by a cubic polynomial approximation is employed for the approximate line search, since the approximate augmented Lagrangian is a nonlinear function of the design variable vector. The numerical performance of the proposed approach is investigated by solving three typical dynamic response optimization problems and comparing the results with those in the literature. This comparison shows that the suggested approach is robust and efficient.
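The abstract above mentions a golden section method followed by a cubic polynomial refinement for the approximate line search. The following is a minimal golden section search along one direction; the cubic polynomial step is omitted, and the sample merit function in the example is made up for illustration.

```python
# Golden section search for the minimizer of a unimodal function on [a, b].
import math

def golden_section(f, a, b, tol=1e-6):
    gr = (math.sqrt(5.0) - 1.0) / 2.0          # inverse golden ratio ~0.618
    c = b - gr * (b - a)
    d = a + gr * (b - a)
    fc, fd = f(c), f(d)
    while abs(b - a) > tol:
        if fc < fd:                             # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - gr * (b - a)
            fc = f(c)
        else:                                   # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + gr * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# Example: step length minimizing a 1-D slice of an (assumed) merit function.
print(golden_section(lambda t: (t - 0.3) ** 2 + 0.1 * t ** 4, 0.0, 1.0))
```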

Applications of Soft Computing Techniques in Response Surface Based Approximate Optimization

  • Lee, Jongsoo;Kim, Seungjin
    • Journal of Mechanical Science and Technology
    • /
    • v.15 no.8
    • /
    • pp.1132-1142
    • /
    • 2001
  • The paper describes the construction of global function approximation models for use in design optimization via global search techniques such as genetic algorithms. Two different approximation methods, referred to as evolutionary fuzzy modeling (EFM) and neuro-fuzzy modeling (NFM), are implemented in the context of global approximate optimization. EFM and NFM are based on soft computing paradigms utilizing fuzzy systems, neural networks, and evolutionary computing techniques. Such approximation methods have promising characteristics in cases where the training data is not sufficiently provided or uncertain information is included in the design process. The fuzzy inference system is the central mechanism for identifying the input/output relationship in both methods. The paper introduces the general procedures, including fuzzy rule generation, membership function selection, and the inference process for EFM and NFM, and presents their generalization capabilities in terms of the number of fuzzy rules and training data, with application to a three-bar truss optimization problem.

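Both EFM and NFM center on a fuzzy inference system that maps design inputs to a response. The sketch below is a minimal zero-order Sugeno-type system with Gaussian membership functions evaluated by weighted averaging of rule consequents; in EFM or NFM the centers, widths, and consequents would be identified by evolutionary search or neuro-adaptive learning, whereas the values here are placeholders.

```python
# Minimal zero-order Sugeno fuzzy inference system with Gaussian membership
# functions. Rule parameters below are illustrative placeholders only.
import numpy as np

def gaussian_mf(x, center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def sugeno_predict(x, rules):
    """rules: list of (centers, widths, consequent) tuples for input vector x."""
    firing = np.array([np.prod(gaussian_mf(x, c, w)) for c, w, _ in rules])
    consequents = np.array([q for _, _, q in rules])
    return float(np.dot(firing, consequents) / (firing.sum() + 1e-12))

# Illustrative two-rule system for a scalar response of two design variables.
rules = [
    (np.array([0.0, 0.0]), np.array([1.0, 1.0]), 1.0),   # IF x near (0,0) THEN y = 1.0
    (np.array([2.0, 2.0]), np.array([1.0, 1.0]), 4.0),   # IF x near (2,2) THEN y = 4.0
]
print(sugeno_predict(np.array([1.0, 1.0]), rules))
```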

Optimal Design of a Heat Sink Using the Kriging Method (크리깅 방법에 의한 방열판 최적설계)

  • Ryu Je-Seon;Rew Keun-Ho;Park Kyoungwoo
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.29 no.10 s.241
    • /
    • pp.1139-1147
    • /
    • 2005
  • The shape optimal design of a plate-fin type heat sink with a vortex generator is performed numerically to minimize the pressure loss subject to a desired maximum temperature. Evaluation of the performance function, in general, requires much computational cost in fluid/thermal systems. Thus, global approximate optimization techniques have been introduced into the optimization of fluid/thermal systems. In this study, the Kriging method is used to obtain the optimal solutions in conjunction with computational fluid dynamics (CFD). The results show that when the temperature rise is less than 40 K, the optimal design variables are $B_1=2.44$ mm, $B_2=2.09$ mm, and $t=7.58$ mm. The Kriging method can dramatically reduce the computational time to 1/6 of that of the SQP method, which validates its efficiency.
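The Kriging metamodel used above replaces expensive CFD evaluations with an interpolating surrogate. A minimal ordinary kriging predictor with a Gaussian correlation model is sketched below; the correlation parameter is fixed rather than fit by maximum likelihood, and the sample data are made up, so this is an illustration of the predictor only, not the study's model.

```python
# Ordinary kriging (BLUP) prediction with a Gaussian correlation model.
import numpy as np

def kriging_predict(X, y, x_new, theta=1.0):
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-theta * d2)

    R = corr(X, X) + 1e-10 * np.eye(len(X))              # nugget for stability
    r = corr(X, x_new[None, :]).ravel()
    ones = np.ones(len(X))
    R_inv = np.linalg.inv(R)
    beta = (ones @ R_inv @ y) / (ones @ R_inv @ ones)    # generalized least squares mean
    return beta + r @ R_inv @ (y - beta * ones)           # BLUP at x_new

# Illustrative use with a handful of made-up samples of a 2-D response.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
y = np.array([1.0, 2.0, 2.0, 5.0, 2.2])
print(kriging_predict(X, y, np.array([0.4, 0.6])))
```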

One Dimensional Optimization using Learning Network

  • Chung, Taishn;Bien, Zeungnam
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1995.10b
    • /
    • pp.33-39
    • /
    • 1995
  • A one-dimensional optimization problem is considered. We propose a method to find the global minimum of a one-dimensional function with no gradient information, using only a finite number of input-output samples. We construct a learning network which has good learning capability and whose global maximum (or minimum) can be calculated with a simple computation. By teaching this network to approximate the given function with minimal samples, we can obtain the global minimum of the function. We verify this method using some typical examples.


A Robust Optimization Using the Statistics Based on Kriging Metamodel

  • Lee Kwon-Hee;Kang Dong-Heon
    • Journal of Mechanical Science and Technology
    • /
    • v.20 no.8
    • /
    • pp.1169-1182
    • /
    • 2006
  • Robust design technology has been applied to versatile engineering problems to ensure consistency in product performance. Since the 1980s, the concept of robust design has been introduced to the numerical optimization field, which is called robust optimization. The robustness in robust optimization is determined by a measure of insensitivity with respect to the variation of a response. However, there are significant difficulties associated with the calculation of the variations, represented by the mean and variance. To overcome this limitation, this research presents an implementation of the approximate statistical moment method based on a kriging metamodel. Two sampling methods are simultaneously utilized to obtain the sequential surrogate model of a response. Statistics such as the mean and variance are obtained from the reliable kriging model using the second-order statistical approximation method. Then, the simulated annealing algorithm, a global optimization method, is adopted to find the global robust optimum. A mathematical problem and a two-bar design problem are investigated to show the validity of the proposed method.
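The mean and variance used as robustness measures above are commonly estimated by a Taylor-series statistical moment approximation of the response around the nominal design. The sketch below shows that standard approximation with central finite differences and independent input scatter; in the paper the response would be the kriging surrogate, whereas here any callable and the example expression stand in as assumptions.

```python
# Second-order approximation of the mean and first-order approximation of the
# variance of a response f(x) under independent input scatter (sigma_x).
import numpy as np

def approximate_moments(f, mu_x, sigma_x, h=1e-4):
    mu_x = np.asarray(mu_x, dtype=float)
    sigma_x = np.asarray(sigma_x, dtype=float)
    n = len(mu_x)
    grad = np.zeros(n)
    hess_diag = np.zeros(n)
    f0 = f(mu_x)
    for i in range(n):
        e = np.zeros(n); e[i] = h
        f_plus, f_minus = f(mu_x + e), f(mu_x - e)
        grad[i] = (f_plus - f_minus) / (2 * h)            # central difference
        hess_diag[i] = (f_plus - 2 * f0 + f_minus) / h**2
    mean = f0 + 0.5 * np.sum(hess_diag * sigma_x**2)      # second-order mean
    variance = np.sum(grad**2 * sigma_x**2)               # first-order variance
    return mean, variance

# Example on a made-up response expression (illustrative only).
mean, var = approximate_moments(lambda x: x[0]**2 + 3 * x[1], [1.0, 2.0], [0.1, 0.2])
print(mean, var)
```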

Improving the Training Performance of Multilayer Neural Network by Using Stochastic Approximation and Backpropagation Algorithm (확률적 근사법과 후형질과 알고리즘을 이용한 다층 신경망의 학습성능 개선)

  • 조용현;최흥문
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.4
    • /
    • pp.145-154
    • /
    • 1994
  • This paper proposes an efficient method for improving the training performance of a neural network by using a hybrid of stochastic approximation and the backpropagation algorithm. The approximate initial point for fast global optimization is estimated first by applying stochastic approximation, and then the backpropagation algorithm, which is a fast gradient descent method, is applied for high-speed global optimization. Further speed-up of training is made possible by adjusting the training parameters of the output and hidden layers adaptively to the standard deviation of the neuron output of each layer. The proposed method has been applied to parity checking and pattern classification, and the simulation results show that its performance is superior to that of backpropagation, Baba's MROM, and Sun's method with randomized initial point settings. The results of adaptively adjusting the training parameters show that the proposed method further improves the convergence speed by about 20% in training.

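The two-phase idea in the abstract above, estimating a good initial point globally and then refining it with fast gradient descent, can be sketched schematically. The code below is not the authors' algorithm: a decaying random-perturbation search stands in for the stochastic approximation phase, an analytic gradient stands in for backpropagation, and the loss, step sizes, and iteration counts are all assumptions.

```python
# Schematic two-phase optimizer: stochastic search for an initial point,
# followed by gradient descent refinement. Illustrative sketch only.
import numpy as np

def hybrid_minimize(loss, grad, w0, sa_iters=200, gd_iters=500,
                    sa_step=1.0, gd_step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)

    # Phase 1: random perturbations with a decaying step, keeping improvements,
    # to approximate a good global initial point.
    best_w, best_l = w.copy(), loss(w)
    for k in range(1, sa_iters + 1):
        cand = best_w + (sa_step / np.sqrt(k)) * rng.normal(size=w.shape)
        l = loss(cand)
        if l < best_l:
            best_w, best_l = cand, l

    # Phase 2: fast local refinement by gradient descent from that point.
    w = best_w
    for _ in range(gd_iters):
        w = w - gd_step * grad(w)
    return w

# Example on a multimodal 1-D loss with many local minima (made up).
loss = lambda w: float(np.sum(w**2 + 2.0 * np.sin(5.0 * w)))
grad = lambda w: 2.0 * w + 10.0 * np.cos(5.0 * w)
print(hybrid_minimize(loss, grad, w0=[3.0]))
```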

Simultaneous Optimization of Structure and Control Systems Based on Convex Optimization - An Approximate Approach - (볼록최적화에 의거한 구조계와 제어계의 동시최적화 - 근사적 어프로치 -)

  • Son, Hoe-Soo
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.27 no.8
    • /
    • pp.1353-1362
    • /
    • 2003
  • This paper considers a simultaneous optimization problem of structure and control systems. The problem is generally formulated as a non-convex optimization problem in the design parameters of the mechanical structure and the controller. Therefore, it is not easy to obtain global solutions for practical problems. In this paper, we parameterize all design parameters of the mechanical structure such that the parameters act in the control system as decentralized static output feedback gains. Using this parameterization, we formulate a simultaneous optimization problem in which the design specification is defined by the $H_2$ and $H_\infty$ norms of the closed-loop transfer function. To arrive at a convex problem, we approximate the nonlinear terms in the design parameters by linear terms. Then, we propose a convex optimization method based on linear matrix inequalities (LMI). Using this method, we can reliably obtain a suboptimal solution for the design specification. A numerical example is given to illustrate the effectiveness of the proposed method.
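The LMI formulation mentioned above is solved by semidefinite programming. As a minimal illustration of the mechanics of posing and solving an LMI, the sketch below solves a simple Lyapunov stability inequality with CVXPY; it is not the paper's $H_2$/$H_\infty$ synthesis LMIs, and the system matrix is an assumed example.

```python
# Minimal LMI example with CVXPY: find P > 0 with A^T P + P A < 0
# (a Lyapunov stability condition), solved as a semidefinite program.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])           # example stable system matrix (assumption)
eps = 1e-6

P = cp.Variable((2, 2), symmetric=True)
constraints = [
    P >> eps * np.eye(2),                        # P positive definite
    A.T @ P + P @ A << -eps * np.eye(2),         # Lyapunov inequality
]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print(prob.status, P.value)
```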