• Title/Summary/Keyword: second order optimization

ON SECOND ORDER NECESSARY OPTIMALITY CONDITIONS FOR VECTOR OPTIMIZATION PROBLEMS

  • Lee, Gue-Myung; Kim, Moon-Hee
    • Journal of the Korean Mathematical Society / v.40 no.2 / pp.287-305 / 2003
  • A second-order necessary optimality condition for properly efficient solutions of a twice differentiable vector optimization problem is given. We obtain a nonsmooth version of this second-order necessary optimality condition for properly efficient solutions of a nondifferentiable vector optimization problem. Furthermore, we prove a second-order necessary optimality condition for weakly efficient solutions of a nondifferentiable vector optimization problem.
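
As background for this entry, the classical scalar prototype of such a condition (unconstrained, twice differentiable objective) can be written as below; the paper extends this idea to properly and weakly efficient solutions of vector problems, and its exact statements are more involved.

```latex
% Prototype second-order necessary condition at a local minimizer \bar{x}
% of a twice differentiable scalar function f (not the paper's vector-valued statement):
\nabla f(\bar{x}) = 0
\qquad\text{and}\qquad
d^{\top}\,\nabla^{2} f(\bar{x})\, d \;\ge\; 0
\quad\text{for all } d \in \mathbb{R}^{n}.
```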

SECOND ORDER DUALITY IN VECTOR OPTIMIZATION OVER CONES

  • Suneja, S.K.; Sharma, Sunila; Vani, Vani
    • Journal of applied mathematics & informatics / v.26 no.1_2 / pp.251-261 / 2008
  • In this paper, second-order cone convex, second-order cone pseudoconvex, second-order strongly cone pseudoconvex, and second-order cone quasiconvex functions are introduced and their interrelations are discussed. Further, a Mond-Weir type second-order dual is associated with the vector minimization problem, and weak and strong duality theorems are established under these new generalized convexity assumptions.
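
For orientation, a schematic scalar Mond-Weir type second-order dual for min f(x) subject to g(x) ≤ 0 is sketched below; the paper's dual is stated over cones with a vector objective, so this only indicates the general construction and is not the formulation used there.

```latex
\begin{aligned}
\max_{u,\,y,\,p}\quad & f(u)\\
\text{s.t.}\quad
& \nabla f(u) + \nabla\big(y^{\top} g\big)(u)
  + \big[\nabla^{2} f(u) + \nabla^{2}\big(y^{\top} g\big)(u)\big]\,p = 0,\\
& y^{\top} g(u) \;-\; \tfrac{1}{2}\, p^{\top}\,\nabla^{2}\big(y^{\top} g\big)(u)\,p \;\ge\; 0,
\qquad y \ge 0 .
\end{aligned}
```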

A Study for Robustness of Objective Function and Constraints in Robust Design Optimization

  • Lee Tae-Won
    • Journal of Mechanical Science and Technology / v.20 no.10 / pp.1662-1669 / 2006
  • Since randomness and uncertainty in design parameters are inherent, robust design has gained ever increasing importance in mechanical engineering. Robustness is assessed by a measure of performance variability around the mean value, namely the standard deviation. Hence, constraints in a robust optimization problem can be treated as probability constraints, as in reliability-based optimization. The FOSM (first-order second-moment) method or the AFOSM (advanced first-order second-moment) method can then be used to calculate the mean values and standard deviations of the functions describing the constraints and the objective. Of the two, the AFOSM method has an advantage over the FOSM method in evaluating probability. Nevertheless, it is difficult to obtain the mean value and standard deviation of the objective function using the AFOSM method, because it requires the mean value of the function to be always positive. This paper presents a special technique to overcome this weakness of the AFOSM method. As shown in the examples, the mean value and standard deviation of the objective function obtained by the proposed method are reliable when compared with results from the FOSM method.
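
To make the FOSM approximation mentioned above concrete, the sketch below propagates the means and standard deviations of independent design parameters through a function via a first-order Taylor expansion. The function name, the finite-difference gradient, and the example data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fosm_moments(g, mu, sigma, h=1e-6):
    """First-order second-moment (FOSM) estimate of the mean and standard
    deviation of g(X) for independent inputs with means mu and std devs sigma.

    The gradient is approximated by central finite differences; this is a
    minimal sketch, not the formulation used in the cited paper."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    grad = np.zeros_like(mu)
    for i in range(mu.size):
        e = np.zeros_like(mu)
        e[i] = h
        grad[i] = (g(mu + e) - g(mu - e)) / (2.0 * h)
    mean_g = g(mu)                                # first-order mean estimate
    std_g = np.sqrt(np.sum((grad * sigma) ** 2))  # first-order variance propagation
    return mean_g, std_g

# Example: g(x) = x0**2 + 3*x1 with independent inputs (hypothetical data).
mean, std = fosm_moments(lambda x: x[0] ** 2 + 3 * x[1],
                         mu=[1.0, 2.0], sigma=[0.1, 0.2])
print(mean, std)
```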

SOLUTION SETS OF SECOND-ORDER CONE LINEAR FRACTIONAL OPTIMIZATION PROBLEMS

  • Kim, Gwi Soo; Kim, Moon Hee; Lee, Gue Myung
    • Nonlinear Functional Analysis and Applications / v.26 no.1 / pp.65-70 / 2021
  • We characterize the solution set for a second-order cone linear fractional optimization problem (P). We present sequential Lagrange multiplier characterizations of the solution set for the problem (P) in terms of sequential Lagrange multipliers of a known solution of (P).
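
A generic problem of the type denoted (P), a linear fractional objective minimized over a second-order cone constraint, can be written as below; the specific data and regularity assumptions of the paper are not reproduced here.

```latex
\min_{x \in \mathbb{R}^{n}}\ \frac{c^{\top} x + \alpha}{d^{\top} x + \beta}
\quad\text{s.t.}\quad A x - b \in K,
\qquad
K = \big\{ (z_{0}, \bar{z}) \in \mathbb{R} \times \mathbb{R}^{m-1} : \|\bar{z}\| \le z_{0} \big\},
% with d^{\top} x + \beta > 0 assumed on the feasible set.
```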

Implementation of CNN in the view of mini-batch DNN training for efficient second order optimization (효과적인 2차 최적화 적용을 위한 Minibatch 단위 DNN 훈련 관점에서의 CNN 구현)

  • Song, Hwa Jeon; Jung, Ho Young; Park, Jeon Gue
    • Phonetics and Speech Sciences / v.8 no.2 / pp.23-30 / 2016
  • This paper describes implementation schemes for CNNs from the viewpoint of mini-batch DNN training for efficient second-order optimization. The same procedure used to update DNN parameters is applied to train CNN parameters by simply arranging an input image as a sequence of local patches, which is effectively equivalent to mini-batch DNN training. Through this conversion, second-order optimization, which provides higher performance, can be applied straightforwardly to train the CNN parameters. In both image recognition on the MNIST DB and syllable-level automatic speech recognition, the proposed CNN implementation scheme shows better performance than a DNN-based one.
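
The patch-based conversion described above is essentially an im2col rearrangement, in which convolution becomes a matrix product over a "mini-batch" of local patches. The sketch below is a minimal single-channel, stride-1 illustration; the function name and the example data are assumptions, not the paper's code.

```python
import numpy as np

def im2col(image, k):
    """Rearrange an image into a matrix of k x k local patches (one patch per
    row), so a convolution becomes an ordinary matrix multiplication and the
    patches can be treated like a mini-batch of DNN inputs.

    Minimal single-channel sketch, stride 1, no padding."""
    H, W = image.shape
    rows = []
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            rows.append(image[i:i + k, j:j + k].ravel())
    return np.stack(rows)                      # shape: (num_patches, k*k)

# Convolution with one k x k filter as a matrix-vector product over the patches.
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.ones((3, 3)) / 9.0                 # simple averaging filter
patches = im2col(image, 3)                     # (16, 9) "mini-batch" of patches
feature_map = (patches @ kernel.ravel()).reshape(4, 4)
print(feature_map)
```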

Optimizing Food Processing through a New Approach to Response Surface Methodology

  • Sungsue Rheem
    • Food Science of Animal Resources / v.43 no.2 / pp.374-381 / 2023
  • In a previous study, 'response surface methodology (RSM) using a fullest balanced model' was proposed to improve the optimization of food processing when a standard second-order model has a significant lack of fit. However, that methodology can only be used when each factor of the experimental design has five levels. In response surface experiments for optimization, three-level designs are used as well as five-level designs. Therefore, the present study aimed to improve the optimization of food processing when the experimental factors have three levels, through a new approach to RSM. This approach employs three-step modeling based on a second-order model, a balanced higher-order model, and a balanced highest-order model. A dataset from a three-level, two-factor central composite design in previous research was used to illustrate the three-step modeling and the subsequent optimization. The proposed approach to RSM predicted improved optimization results, which differ from those predicted in the previous research.
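
As a minimal illustration of the standard second-order response surface model that the three-step approach builds on, the sketch below fits a full quadratic to hypothetical three-level, two-factor data (coded -1/0/+1) and locates its predicted optimum. The data and the grid search are assumptions for illustration and do not reproduce the paper's balanced higher-order modeling.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Design matrix for a standard second-order response surface:
    intercept, linear, two-factor interaction, and pure quadratic terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# Hypothetical two-factor, three-level design and responses (not from the paper).
X = np.array([[-1, -1], [-1, 0], [-1, 1],
              [0, -1], [0, 0], [0, 1],
              [1, -1], [1, 0], [1, 1]], dtype=float)
y = np.array([5.2, 6.1, 5.8, 6.9, 8.0, 7.4, 6.3, 7.1, 6.5])

beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)

# Evaluate the fitted surface on a grid and pick the maximizing factor setting.
grid = np.array([[a, b] for a in np.linspace(-1, 1, 41)
                         for b in np.linspace(-1, 1, 41)])
pred = quadratic_design_matrix(grid) @ beta
print("optimum (coded units):", grid[np.argmax(pred)], "predicted response:", pred.max())
```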

An Evaluation of the Second-order Approximation Method for Engineering Optimization (최적설계시 이차근사법의 수치성능 평가에 관한 연구)

  • 박영선; 박경진; 이완익
    • Transactions of the Korean Society of Mechanical Engineers / v.16 no.2 / pp.236-247 / 1992
  • Optimization has been developed to minimize a cost function while satisfying constraints, and nonlinear programming methods are used as tools for it. Cost and constraint function evaluations are usually required in engineering applications, but these calculations are extremely expensive. In particular, function and sensitivity analyses cause a bottleneck in structural optimization that utilizes the finite element method. Moreover, when the functions are quite noisy, the information they provide does not play its proper role in the optimization process. An algorithm called the "Second-order Approximation Method" has recently been proposed to overcome these difficulties. In this algorithm, the cost and constraint functions are approximated by second-order Taylor series expansions about a nominal point. An optimal design problem is defined with the approximated functions, and the approximated problem is solved by a nonlinear programming algorithm. The solution is included in a candidate point set, which is evaluated to select a new nominal point. Since the functions are approximated using only function values, sensitivity information is not needed. One-dimensional line search is unnecessary because the nonlinear programming algorithm handles the approximated functions. In this research, the method is analyzed and its performance is evaluated. Several mathematical problems are created and some standard engineering problems are selected for the evaluation. Numerical results demonstrate the applicability of the algorithm to large-scale and complex problems.
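
A loose sketch of the core idea, approximating the cost by a quadratic built from function values only (no sensitivities) around a nominal point and re-solving, is given below. The sampling scheme, the test function, and the SciPy solver call are illustrative assumptions; the actual Second-order Approximation Method also manages a candidate point set and constraints, which are not shown.

```python
import numpy as np
from scipy.optimize import minimize

def quad_features(x):
    """Monomials of a full quadratic in 2 variables: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def fit_quadratic(f, center, radius=0.5, n_samples=15, rng=np.random.default_rng(0)):
    """Fit a quadratic model to f from function values sampled around the
    current nominal point; no gradient information is used."""
    X = center + rng.uniform(-radius, radius, size=(n_samples, center.size))
    A = np.array([quad_features(x) for x in X])
    coeffs, *_ = np.linalg.lstsq(A, np.array([f(x) for x in X]), rcond=None)
    return lambda x: quad_features(x) @ coeffs

# Hypothetical cost function; the nominal point is refined iteratively.
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
nominal = np.array([3.0, 2.0])
for _ in range(5):
    model = fit_quadratic(f, nominal)
    res = minimize(model, nominal)   # solve the approximated problem
    nominal = res.x                  # candidate point becomes the new nominal point
print(nominal)                       # should approach (1.0, -0.5)
```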

SOLUTIONS OF NONCONVEX QUADRATIC OPTIMIZATION PROBLEMS VIA DIAGONALIZATION

  • Yu, Moonsook; Kim, Sunyoung
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.5 no.2 / pp.137-147 / 2001
  • Nonconvex quadratic optimization problems (QOPs) are solved approximately by SDP (semidefinite programming) relaxation and SOCP (second-order cone programming) relaxation. Nonconvex QOPs with special structures can be solved exactly by SDP and SOCP. We propose a method to reformulate general nonconvex QOPs into this special form, which provides a way to find more accurate solutions. Numerical results are shown to illustrate the advantages of the proposed method.
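
The standard SDP relaxation of a nonconvex quadratic objective x^T Q x + 2 c^T x, referred to above, replaces the rank-one matrix x x^T by a positive semidefinite matrix variable; the paper's diagonalization-based reformulation into a special QOP form is not reproduced here.

```latex
\begin{aligned}
\min_{x,\,X}\quad & \langle Q, X\rangle + 2\,c^{\top} x\\
\text{s.t.}\quad &
\begin{pmatrix} 1 & x^{\top}\\ x & X \end{pmatrix} \succeq 0,
\end{aligned}
\qquad\text{which relaxes the nonconvex condition } X = x x^{\top}.
```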

ON A SECOND ORDER PARALLEL VARIABLE TRANSFORMATION APPROACH

  • Pang, Li-Ping; Xia, Zun-Quan; Zhang, Li-Wei
    • Journal of applied mathematics & informatics / v.11 no.1_2 / pp.201-213 / 2003
  • In this paper we present a second-order PVT (parallel variable transformation) algorithm that converges to second-order stationary points when minimizing smooth functions, based on the first-order PVT algorithm due to Fukushima (1998). The corresponding stopping criterion, descent condition, and descent step for the second-order PVT algorithm are given.