• Title/Summary/Keyword: decomposition optimization


An Empirical Data Driven Optimization Approach By Simulating Human Learning Processes (인간의 학습과정 시뮬레이션에 의한 경험적 데이터를 이용한 최적화 방법)

  • Kim Jinhwa
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.29 no.4
    • /
    • pp.117-134
    • /
    • 2004
  • This study suggests a data-driven optimization approach that simulates models of human learning processes from cognitive science. It shows how human learning processes can be simulated and applied to solving combinatorial optimization problems. The main advantage of this method is its applicability to problems that are very difficult to simulate; 'undecidable' problems are considered the best application areas for the suggested approach, and the concept of an 'undecidable' problem is redefined. Learning models of human learning and decision-making related to combinatorial optimization, drawn from the cognitive and neural sciences, are designed, simulated, and implemented to solve an optimization problem. We call this approach 'SLO: simulated learning for optimization.' Two versions of SLO have been designed: SLO with a position & link matrix, and SLO with a decomposition algorithm. The methods are tested on traveling salesperson problems to show how these approaches derive new solutions empirically. The tests show that simulated learning for optimization empirically produces new solutions with better performance. Its performance, compared to other hill-climbing-type methods, is relatively good.
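
The abstract benchmarks SLO against hill-climbing-type methods on the traveling salesperson problem but gives no implementation details. As a point of reference only, the sketch below is a minimal 2-opt hill-climbing baseline of the kind such methods are compared against; the function and parameter names (two_opt_hill_climb, max_no_improve) are illustrative assumptions, not the paper's code.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_hill_climb(dist, max_no_improve=1000, seed=0):
    """Simple 2-opt hill climbing for the TSP.

    Repeatedly reverses a random segment of the tour and keeps the change
    only if it shortens the tour; stops after max_no_improve rejected moves.
    """
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best = tour_length(tour, dist)
    stall = 0
    while stall < max_no_improve:
        i, j = sorted(rng.sample(range(n), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        length = tour_length(candidate, dist)
        if length < best:
            tour, best, stall = candidate, length, 0
        else:
            stall += 1
    return tour, best
```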

A SUPERLINEAR $\mathcal{VU}$ SPACE-DECOMPOSITION ALGORITHM FOR SEMI-INFINITE CONSTRAINED PROGRAMMING

  • Huang, Ming;Pang, Li-Ping;Lu, Yuan;Xia, Zun-Quan
    • Journal of applied mathematics & informatics
    • /
    • v.30 no.5_6
    • /
    • pp.759-772
    • /
    • 2012
  • In this paper, semi-infinite constrained programs, a class of constrained nonsmooth optimization problems, are transformed into unconstrained nonsmooth convex programs with the help of an exact penalty function. The unconstrained objective function, which has a primal-dual gradient structure, is connected with the $\mathcal{VU}$-space decomposition, so a $\mathcal{VU}$-space decomposition method can be applied to solve the unconstrained programs. Finally, superlinear convergence of the algorithm is proved under certain assumptions.
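
For readers unfamiliar with the construction, the following is a sketch of a standard max-type exact penalty reformulation of a semi-infinite program; the notation ($f$, $g$, $T$, $\rho$, $F_\rho$) is assumed for illustration, and the paper's exact penalty function may differ in form.

```latex
% Assumed notation: f is the objective, g(x,t) the semi-infinite constraint
% indexed by t in a compact set T, and rho > 0 a sufficiently large penalty
% parameter.
\min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad g(x,t) \le 0 \ \ \forall t \in T
\quad\Longrightarrow\quad
\min_{x \in \mathbb{R}^n} F_\rho(x) := f(x) + \rho \max\Bigl\{0,\ \max_{t \in T} g(x,t)\Bigr\}.
```

Under suitable assumptions and for sufficiently large $\rho$, minimizers of the nonsmooth convex function $F_\rho$ coincide with solutions of the constrained problem, which is what allows the $\mathcal{VU}$-space decomposition machinery for unconstrained convex functions to be applied.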

Integrating Multiple Mathematical Models for Supply Chain Optimization (공급사슬 최적화를 위한 다중의 수리적 모델 활용 구조)

  • 한현수
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2001.10a
    • /
    • pp.97-100
    • /
    • 2001
  • Strategic and operational decision-making problems for optimizing a manufacturing firm's value chain can be solved through the effective use of DSSs based on mathematical models. Because the decision-making process inevitably involves trade-offs and interactions among the supply chain's multiple performance goals and the organizations involved, multiple DSSs are required. In this context, this paper defines multiple strategic goals, decision-making processes, and the associated mathematical models for optimizing the entire supply chain, and presents DSS logic for achieving global optimality rather than local optimality by individual organization or performance indicator, through the decomposition and integration of the mathematical models, with steel industry processes as the target.


Complexity Estimation Based Work Load Balancing for a Parallel Lidar Waveform Decomposition Algorithm

  • Jung, Jin-Ha;Crawford, Melba M.;Lee, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.25 no.6
    • /
    • pp.547-557
    • /
    • 2009
  • LIDAR (LIght Detection And Ranging) is an active remote sensing technology that provides 3D coordinates of the Earth's surface by performing range measurements from the sensor. Early small-footprint LIDAR systems recorded multiple discrete returns from the back-scattered energy. Recent advances in LIDAR hardware now make it possible to record full digital waveforms of the returned energy. LIDAR waveform decomposition involves separating the return waveform into a mixture of components which are then used to characterize the original data. The most common statistical mixture model used for this process is the Gaussian mixture. Waveform decomposition plays an important role in LIDAR waveform processing, since the resulting components are expected to represent reflection surfaces within waveform footprints; hence the decomposition results ultimately affect the interpretation of LIDAR waveform data. Computational requirements in the waveform decomposition process result from two factors: (1) estimation of the number of components in a mixture and the resulting parameter estimates, which are inter-related and cannot be solved separately, and (2) parameter optimization, which does not have a closed-form solution and thus must be solved iteratively. A current state-of-the-art airborne LIDAR system acquires more than 50,000 waveforms per second, so decomposing the enormous number of waveforms is challenging on a traditional single-processor architecture. To tackle this issue, four parallel LIDAR waveform decomposition algorithms with different workload balancing schemes - (1) no weighting, (2) decomposition-results-based linear weighting, (3) decomposition-results-based squared weighting, and (4) decomposition-time-based linear weighting - were developed and tested with varying numbers of processors (8-256), and the results were compared in terms of efficiency. Overall, the decomposition-time-based linear weighting approach yielded the best performance among the four approaches.
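
The paper's time-based weighting scheme is described only at a high level. Below is a minimal sketch, under assumed names (balance_by_time, est_times), of one way estimated per-waveform decomposition times could drive a load-balanced assignment, here via a greedy longest-processing-time heuristic; it is not the paper's implementation.

```python
import heapq

def balance_by_time(waveform_ids, est_times, n_workers):
    """Greedy longest-processing-time assignment.

    Waveforms with the largest estimated decomposition time are placed first,
    each onto the currently least-loaded worker, so the predicted per-worker
    totals stay roughly equal.
    """
    # Min-heap of (current_load, worker_index, assigned_ids)
    heap = [(0.0, w, []) for w in range(n_workers)]
    heapq.heapify(heap)
    for wid in sorted(waveform_ids, key=lambda i: est_times[i], reverse=True):
        load, w, assigned = heapq.heappop(heap)
        assigned.append(wid)
        heapq.heappush(heap, (load + est_times[wid], w, assigned))
    return {w: assigned for _, w, assigned in heap}
```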

Reliability-based Design Optimization using Multiplicative Decomposition Method (곱분해기법을 이용한 신뢰성 기반 최적설계)

  • Kim, Tae-Kyun;Lee, Tae-Hee
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.22 no.4
    • /
    • pp.299-306
    • /
    • 2009
  • Design optimization is a method for finding the optimum point that minimizes the objective function while satisfying the design constraints. Conventional optimization does not consider the uncertainty originating from modeling or manufacturing processes, so the optimum point often lies on the boundaries of the constraints. Reliability-based design optimization (RBDO) combines an optimization technique with a reliability analysis that calculates the reliability of the system. Reliability analysis methods can be classified into simulation methods, fast probability integration methods, and moment-based reliability methods. In the most widely used MPP-based reliability analysis, one of the fast probability integration methods, cost and numerical error can increase when many MPP points exist, because the constraints must be transformed into standard normal distribution space. In this paper, the multiplicative decomposition method is used as the reliability analysis for RBDO, and sensitivity analysis is performed so that a gradient-based optimization algorithm can be applied. Mathematical and engineering examples illustrate the whole RBDO process.
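
As background, the generic RBDO formulation the abstract builds on can be written as follows; the notation ($d$, $X$, $g_i$, $R_i^t$) and the sign convention ($g_i \le 0$ as the safe state) are assumed here and may differ from the paper's.

```latex
% Generic RBDO statement: minimize the objective over the design variables d
% while each probabilistic constraint meets its target reliability.
\min_{d} \; f(d)
\quad \text{s.t.} \quad
P\bigl[\, g_i(X, d) \le 0 \,\bigr] \ge R_i^{t}, \qquad i = 1, \dots, m,
```

where $X$ collects the random variables that model the uncertainty, $g_i$ are the limit-state functions, and $R_i^{t}$ are the target reliabilities. The reliability analysis (here, the multiplicative decomposition method) supplies the probabilities and their sensitivities to the gradient-based optimizer.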

Overlapping Decentralized Robust EA Control Design for an Active Suspension System of a Full Car Model (전차량의 능동 현가 장치 제어를 위한 중복 분산형 견실 고유구조지정 제어기 설계)

  • 정용하;최재원;김영호
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2000.10a
    • /
    • pp.217-217
    • /
    • 2000
  • A decentralized robust EA (eigenstructure assignment) controller is designed for the active suspension system of a vehicle based on a full car model with 7 degrees of freedom. Using overlapping decomposition, the full car model is decomposed into two half car models. For each half car model, a robust eigenstructure assignment controller is obtained using an optimization approach. The performance of the decentralized robust EA controller is compared with that of a conventional centralized EA controller through computer simulations.
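
The abstract mentions overlapping decomposition without detail. The display below is a generic sketch of the expansion step used in overlapping (inclusion-principle) decompositions, in assumed notation where $x_2$ collects the states shared by the two half-car subsystems; the decentralized controllers are designed for the expanded subsystems and then contracted back to the original model.

```latex
% Overlapping decomposition: duplicate the shared states x_2 so the model
% splits into two subsystems that each carry a full copy of them.
x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
\;\longrightarrow\;
\tilde{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_2 \\ x_3 \end{pmatrix},
\qquad
\tilde{x}_a = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},\quad
\tilde{x}_b = \begin{pmatrix} x_2 \\ x_3 \end{pmatrix}.
```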


Mathematical Modeling for Traffic Flow (교통흐름의 수학적 모형)

  • Lee, Seong-Cheol
    • Journal of the Korea Safety Management & Science
    • /
    • v.13 no.1
    • /
    • pp.127-131
    • /
    • 2011
  • Even when there are no triggering factors such as car crashes or road works, traffic congestion arises from traffic growth on the road. In this case, estimating the traffic flow helps in finding a solution to the congestion problem. In this paper, we present an optimization model used for the traffic equilibrium problem and study the problem of inverting shortest path sets for a complex traffic system. We also develop a pivotal decomposition algorithm for the reliability function of a complex traffic system. Several examples are illustrated.
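
The pivotal decomposition the abstract refers to is, in its standard textbook form (notation assumed here), the factoring of a coherent system's reliability function on a single component:

```latex
% Pivotal (factoring) decomposition of the system reliability function h(p):
% condition on component i working (1_i) or failing (0_i).
h(\mathbf{p}) = p_i \, h(1_i, \mathbf{p}) + (1 - p_i) \, h(0_i, \mathbf{p}),
```

where $p_i$ is the reliability of component $i$ (e.g. a road link), and $h(1_i,\mathbf{p})$ and $h(0_i,\mathbf{p})$ are the system reliabilities with component $i$ forced to work or to fail, respectively. Applying the decomposition recursively reduces a complex network to series-parallel structures that can be evaluated directly.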

Estimation of the Properties for a Charring Material Using the RPSO Algorithm (RPSO 알고리즘을 이용한 탄화 재료의 열분해 물성치 추정)

  • Chang, Hee-Chul;Park, Won-Hee;Yoon, Kyung-Beom;Kim, Tae-Kuk
    • The KSFM Journal of Fluid Machinery
    • /
    • v.14 no.1
    • /
    • pp.34-41
    • /
    • 2011
  • Fire characteristics can be analyzed more realistically by using more accurate properties related to the fire dynamics, and one way to acquire these properties is through inverse property estimation techniques. In this study, two optimization algorithms frequently applied to inverse heat transfer problems are selected to demonstrate the procedure for obtaining the pyrolysis properties of a charring material with relatively simple thermal decomposition. Thermal decomposition occurs at the surface of the charring material, which is heated by radiative energy from external heat sources; in this process, heat transfer through the charring material is simplified to an unsteady one-dimensional problem. The basic genetic algorithm (GA) and the repulsive particle swarm optimization (RPSO) algorithm are used to find eight properties of a charring material (thermal conductivity of virgin and char, specific heat of virgin and char, char density, heat of pyrolysis, pre-exponential factor, and activation energy) using surface temperature and mass loss rate histories obtained from computed experiments. Results show that the RPSO algorithm performs better than the basic GA in estimating the eight pyrolysis properties for the problems considered in this study.
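
The abstract names the repulsive particle swarm optimization (RPSO) algorithm without giving its update rules. As context only, below is a minimal sketch of a standard PSO loop for such an inverse fit; the repulsive variant additionally includes a repulsion term in the velocity update, which is omitted here, and all names and default parameters are illustrative assumptions.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal standard particle swarm optimization sketch.

    `objective` maps a parameter vector to a scalar misfit (e.g. the error
    between measured and simulated surface-temperature / mass-loss histories).
    `bounds` is a sequence of (low, high) pairs, one per parameter.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```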

Convergence and Stability of the Damped Least-Squares Method Using SVD in Photographic Lens Design (사진렌즈 설계에서 SVD에 의한 감쇠최소자승법의 수렴성과 안정성)

  • 김태희;김경찬
    • Korean Journal of Optics and Photonics
    • /
    • v.6 no.3
    • /
    • pp.178-187
    • /
    • 1995
  • A method for determining an appropriate damping factor is studied for lens design. When a suitable damping factor is applied to the additive damped least-squares (DLS) method, the convergence and stability of the optimization process are examined for a triplet-type photographic lens design. We calculate the eigenvalues of the product of the Jacobian matrix of the error functions using the singular value decomposition (SVD) and adopt the median of these eigenvalues as the damping factor. Choosing an adequate damping factor improves the convergence and stability of the optimization of a photographic lens. It is known that numerical inaccuracy in forming the normal equations can be overcome by using orthogonal transformations of the Jacobian matrix. Therefore, combining the proposed method for setting the damping factor with orthogonal transformations of the Jacobian matrix is well suited to the design of an aspheric lens with high-order terms.
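
As an illustration of the damping scheme described above, the following is a minimal sketch, under assumed names (dls_step) and conventions, of one additive DLS update computed through the SVD, with the damping factor taken as the median eigenvalue of $J^{T}J$; whether the median is taken over the singular values or their squares is an assumption made here.

```python
import numpy as np

def dls_step(J, f):
    """One additive damped least-squares (DLS) update via SVD.

    Solves (J^T J + lam * I) dx = -J^T f, with the damping factor `lam`
    set to the median eigenvalue of J^T J, i.e. the median squared
    singular value of the Jacobian J of the error functions f.
    """
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    lam = np.median(s**2)                       # damping factor
    # Filter factors s_i / (s_i^2 + lam) applied in the SVD basis
    dx = -Vt.T @ ((s / (s**2 + lam)) * (U.T @ f))
    return dx, lam
```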


A STUDY ON THE EFFICIENCY OF AERODYNAMIC DESIGN OPTIMIZATION USING DISTRIBUTED COMPUTATION (분산컴퓨팅 환경에서 공력 설계최적화의 효율성 연구)

  • Kim Y.-J.;Jung H.-J.;Kim T.-S.;Joh C.-Y.
    • 한국전산유체공학회:학술대회논문집
    • /
    • 2005.10a
    • /
    • pp.163-167
    • /
    • 2005
  • A study was performed to evaluate the efficiency of design optimization for an aerodynamic design optimization problem in a distributed computing environment. The aerodynamic analyses, which take up most of the computational work during design optimization, were divided into several jobs and allocated to the associated PC clients over the network. This is not a parallel process based on domain decomposition but rather a process of simultaneous distributed analyses using network-distributed computers. A GBOM (gradient-based optimization method), SAO (Sequential Approximate Optimization), and RSM (Response Surface Method) were implemented to perform design optimization of a transonic airfoil and to evaluate their efficiencies. The sequential one-dimensional minimization and direction search involved in the GBOM were found to be an obstacle to improving the efficiency of the design process in a distributed computing environment. The SAO was found quite suitable for the distributed computing environment even though it has the handicap of local search. The RSM is apparently the best fit for a distributed computing environment, but the additional trial-and-error work needed to enhance the reliability of the approximation model is tedious and time-consuming, so it often impairs the automation of design optimization and also deteriorates efficiency from a practical point of view.
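
To make the contrast concrete, here is a minimal sketch of the kind of simultaneous distributed analysis the abstract describes: mutually independent analysis jobs dispatched to workers at once. The names (run_aero_analysis, evaluate_designs) and the use of a local process pool in place of networked PC clients are illustrative assumptions, not the study's setup.

```python
from concurrent.futures import ProcessPoolExecutor

def run_aero_analysis(design):
    """Stand-in for one aerodynamic analysis of a candidate airfoil design.

    In the setting described above this would launch a flow solver on a
    remote PC client; here a cheap dummy objective is returned so the
    sketch runs end to end.
    """
    return sum(x * x for x in design)

def evaluate_designs(designs, max_workers=8):
    """Evaluate mutually independent candidate designs concurrently.

    DOE points for a response surface, or the samples of one sequential
    approximate optimization step, do not depend on each other, so they can
    be dispatched to distributed workers at once; a gradient-based method's
    line search, by contrast, is inherently sequential.
    """
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_aero_analysis, designs))

if __name__ == "__main__":
    candidates = [(0.1 * i, 0.05 * i) for i in range(16)]
    print(evaluate_designs(candidates, max_workers=4))
```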
