• Title/Summary/Keyword: KKT condition

Search results: 7

THE KARUSH-KUHN-TUCKER OPTIMALITY CONDITIONS IN INTERVAL-VALUED MULTIOBJECTIVE PROGRAMMING PROBLEMS

  • Hosseinzade, Elham; Hassanpour, Hassan
    • Journal of applied mathematics & informatics, v.29 no.5_6, pp.1157-1165, 2011
  • The Karush-Kuhn-Tucker (KKT) necessary optimality conditions for nonlinear differentiable programming problems are also sufficient under suitable convexity assumptions. In this paper, the KKT conditions are derived for multiobjective programming problems with interval-valued objective and constraint functions. The main contribution is to obtain Pareto optimal solutions by applying the sufficient optimality condition.
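
For reference, the scalar-valued KKT conditions that the paper generalizes to the interval-valued multiobjective setting can be verified numerically on a toy problem; since the objective is convex and the constraint affine, the conditions are also sufficient here. A minimal sketch with illustrative data (not taken from the paper):

```python
# Numeric check of the (scalar-valued) KKT conditions on a toy convex problem:
#   minimize f(x) = x1^2 + x2^2   subject to   g(x) = 1 - x1 - x2 <= 0.
# The problem, candidate point, and multiplier are illustrative only.
import numpy as np

grad_f = lambda x: 2 * x                          # gradient of the objective
grad_g = lambda x: np.array([-1.0, -1.0])         # gradient of the constraint

x_star = np.array([0.5, 0.5])                     # candidate optimum
mu = 1.0                                          # candidate KKT multiplier
g_val = 1.0 - x_star.sum()

assert np.allclose(grad_f(x_star) + mu * grad_g(x_star), 0)  # stationarity
assert g_val <= 1e-12                             # primal feasibility
assert mu >= 0                                    # dual feasibility
assert abs(mu * g_val) <= 1e-12                   # complementary slackness
print("KKT conditions hold at", x_star)           # f convex, g affine => optimal
```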

ON LINEARIZED VECTOR OPTIMIZATION PROBLEMS WITH PROPER EFFICIENCY

  • Kim, Moon-Hee
    • Journal of applied mathematics & informatics, v.27 no.3_4, pp.685-692, 2009
  • We consider the linearized (approximated) problem for a differentiable vector optimization problem, and we establish equivalence results between a differentiable vector optimization problem and its associated linearized problem under proper efficiency.

PROXIMAL AUGMENTED LAGRANGIAN AND APPROXIMATE OPTIMAL SOLUTIONS IN NONLINEAR PROGRAMMING

  • Chen, Zhe; Huang, Hai Qiao; Zhao, Ke Quan
    • Journal of applied mathematics & informatics, v.27 no.1_2, pp.149-159, 2009
  • In this paper, we introduce approximate optimal solutions and an augmented Lagrangian function for nonlinear programming, and we establish the dual function and dual problem based on this augmented Lagrangian. We discuss the relationship between the approximate optimal solutions of the augmented Lagrangian problem and those of the primal problem, obtain an approximate KKT necessary optimality condition for the augmented Lagrangian problem, and prove that the approximate stationary points of the augmented Lagrangian problem converge to those of the original problem. Our results improve and generalize some known results.
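
To make the machinery concrete, the classical (non-proximal) augmented Lagrangian iteration can be sketched on a toy equality-constrained problem; the problem data, penalty parameter, and stopping rule are illustrative and do not come from the paper:

```python
# Toy equality-constrained problem:
#   minimize f(x) = x1^2 + 2*x2^2   subject to   h(x) = x1 + x2 - 1 = 0.
# Classical augmented Lagrangian: L_A(x, lam) = f + lam*h + (rho/2)*h^2.
import numpy as np

h = lambda x: x[0] + x[1] - 1.0
lam, rho = 0.0, 10.0                  # multiplier estimate, penalty parameter

for _ in range(30):
    # Inner step: L_A is quadratic in x here, so its minimizer solves
    # the 2x2 linear system grad_x L_A = 0.
    H = np.array([[2 + rho, rho], [rho, 4 + rho]])
    x = np.linalg.solve(H, (rho - lam) * np.ones(2))
    lam += rho * h(x)                 # first-order multiplier update
    if abs(h(x)) < 1e-12:             # stop once (nearly) feasible
        break

print("x* ≈", x, " lambda* ≈", lam)   # exact: x* = (2/3, 1/3), lambda* = -4/3
```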

ON THE GLOBAL CONVERGENCE OF A MODIFIED SEQUENTIAL QUADRATIC PROGRAMMING ALGORITHM FOR NONLINEAR PROGRAMMING PROBLEMS WITH INEQUALITY CONSTRAINTS

  • Liu, Bingzhuang
    • Journal of applied mathematics & informatics, v.29 no.5_6, pp.1395-1407, 2011
  • When a Sequential Quadratic Programming (SQP) method is used to solve a nonlinear programming problem, one of the main difficulties is that the Quadratic Programming (QP) subproblem may be incompatible. In this paper, an SQP algorithm is given by modifying the traditional QP subproblem and applying a class of $l_{\infty}$ penalty functions whose penalty parameters can be adjusted automatically. The new QP subproblem is compatible. Under the extended Mangasarian-Fromovitz constraint qualification and the boundedness of the iterates, the algorithm is shown to be globally convergent to a KKT point of the nonlinear programming problem.
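
For context, a bare-bones version of the underlying SQP iteration is sketched below on an illustrative problem (the paper's modified subproblem and $l_{\infty}$ penalty are not reproduced). Each step solves a QP built from the current gradients and a Hessian; SLSQP is used here merely as a convenient QP solver:

```python
# Basic SQP on a toy problem (illustrative data, exact Hessian, full steps):
#   minimize f(x) = (x1-2)^2 + (x2-1)^2   subject to  g(x) = x1 + x2 - 1 <= 0.
import numpy as np
from scipy.optimize import minimize

grad_f = lambda x: np.array([2 * (x[0] - 2), 2 * (x[1] - 1)])
g      = lambda x: x[0] + x[1] - 1.0
grad_g = lambda x: np.array([1.0, 1.0])
B = 2 * np.eye(2)                                # exact Hessian of f

x = np.zeros(2)
for _ in range(20):
    gf, gg, gv = grad_f(x), grad_g(x), g(x)
    # QP subproblem: min gf.d + 0.5 d'Bd  s.t.  gv + gg.d <= 0
    qp = minimize(lambda d: gf @ d + 0.5 * d @ B @ d, np.zeros(2),
                  constraints=[{"type": "ineq",
                                "fun": lambda d: -(gv + gg @ d)}],
                  method="SLSQP")
    d = qp.x
    if np.linalg.norm(d) < 1e-6:                 # d = 0 at a KKT point
        break
    x = x + d                                    # full step, no line search

print("x* ≈", x)                                 # exact optimum: (1, 0)
```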

Optimum Sensitivity of Objective Function Using Equality Constraint (등제한조건을 이용한 목적함수에 대한 최적민감도)

  • Shin Jung-Kyu; Lee Sang-Il; Park Gyung-Jin
    • Transactions of the Korean Society of Mechanical Engineers A, v.29 no.12 s.243, pp.1629-1637, 2005
  • Optimum sensitivity analysis (OSA) is the process of finding the sensitivity of the optimum solution with respect to a parameter of the optimization problem. Prevalent OSA methods calculate the optimum sensitivity as a post-processing step. In this research, a simple technique is proposed that yields the optimum sensitivity as a byproduct of solving the original optimization problem, provided that only the optimum sensitivity of the objective function is required. The parameters are treated as additional design variables in the original optimization problem, and equality constraints are imposed to fix these additional variables at their nominal values. When the optimization problem is solved, the optimum sensitivity of the objective function is obtained simultaneously as the Lagrange multiplier of the corresponding equality constraint. Several mathematical and engineering examples are solved to show the applicability and efficiency of the method compared with other OSA methods.

Optimum Sensitivity of Objective Function using Equality Constraint (등제한조건을 이용한 목적함수에 대한 최적민감도)

  • Yi S.I.; Shin J.K.; Park G.J.
    • Proceedings of the Korean Society of Precision Engineering Conference, 2005.10a, pp.464-469, 2005
  • Optimum sensitivity analysis (OSA) is the process of finding the sensitivity of the optimum solution with respect to a parameter of the optimization problem. Prevalent OSA methods calculate the optimum sensitivity as a post-processing step. In this research, a simple technique is proposed that yields the optimum sensitivity as a byproduct of solving the original optimization problem, provided that only the optimum sensitivity of the objective function is required. The parameters are treated as additional design variables in the original optimization problem, and equality constraints are imposed to fix these additional variables at their nominal values. When the optimization problem is solved, the optimum sensitivity of the objective function is obtained simultaneously as the Lagrange multiplier of the corresponding equality constraint. Several mathematical and engineering examples are solved to show the applicability and efficiency of the method compared with other OSA methods.
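
The technique described in the two entries above can be sketched on a one-variable toy problem (none of the data is from the papers): the parameter becomes an extra design variable, an equality constraint pins it to its nominal value, and the multiplier of that constraint equals the optimum sensitivity of the objective, up to the sign convention of the Lagrangian:

```python
# Toy illustration: f(x; p) = x^2 + p*x with nominal parameter value p0.
# Treat p as a design variable, add the equality constraint p - p0 = 0,
# and read df*/dp off its Lagrange multiplier.
import sympy as sp

x, p, lam, p0 = sp.symbols("x p lam p0", real=True)

f = x**2 + p * x                       # objective with parameter p
L = f - lam * (p - p0)                 # Lagrangian; sign chosen so lam = df*/dp

# KKT system: stationarity in x and p, plus the equality constraint
sol = sp.solve([sp.diff(L, x), sp.diff(L, p), p - p0],
               [x, p, lam], dict=True)[0]
print("multiplier lam =", sol[lam])    # -> -p0/2

# Cross-check by differentiating the optimal value directly
fstar = f.subs({x: sol[x], p: p0})     # f*(p0) = -p0^2/4
print("d f*/d p0    =", sp.diff(fstar, p0))  # -> -p0/2, matching the multiplier
```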

A Study on Teaching the Method of Lagrange Multipliers in the Era of Digital Transformation (라그랑주 승수법의 교수·학습에 대한 소고: 라그랑주 승수법을 활용한 주성분 분석 사례)

  • Lee, Sang-Gu; Nam, Yun; Lee, Jae Hwa
    • Communications of Mathematical Education, v.37 no.1, pp.65-84, 2023
  • The method of Lagrange multipliers, one of the most fundamental algorithms for solving equality-constrained optimization problems, has been widely used in basic mathematics for artificial intelligence (AI), linear algebra, optimization theory, and control theory. The method is an important bridge between calculus and linear algebra, and it is actively used in artificial intelligence algorithms, including principal component analysis (PCA). It is therefore desirable that instructors motivate students who first encounter this method in college calculus. In this paper, we provide an integrated perspective for instructors to teach the method of Lagrange multipliers effectively. First, we provide visualization materials and Python-based code that help explain the principle of the method. Second, we give a full explanation of the relation between the Lagrange multiplier and the eigenvalues of a matrix. Third, we prove the first-order optimality condition, which is the foundation of the method of Lagrange multipliers, and briefly introduce its generalized version in optimization. Finally, we give an example of PCA on a real data set. These materials can be used in class for teaching the method of Lagrange multipliers.
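
The multiplier-eigenvalue connection discussed in this entry can be sketched in a few lines (the data below are random and illustrative; the paper's real-data example is not reproduced). Maximizing the variance $x^T S x$ subject to $x^T x = 1$ gives the stationarity condition $Sx = \lambda x$, so the maximizer is the leading eigenvector of $S$ and the Lagrange multiplier is its largest eigenvalue:

```python
# First principal component via the method of Lagrange multipliers:
# maximize x'Sx subject to x'x = 1  =>  S x = lam x  (eigenvalue problem).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * np.array([3.0, 1.0, 0.2])  # toy data
S = np.cov(X, rowvar=False)                 # sample covariance matrix

eigvals, eigvecs = np.linalg.eigh(S)        # eigenvalues in ascending order
pc1, lam1 = eigvecs[:, -1], eigvals[-1]     # leading eigenpair

assert np.allclose(S @ pc1, lam1 * pc1)     # stationarity: S x = lam x
print("first principal component:", pc1)
print("multiplier = largest eigenvalue =", lam1)  # variance along pc1
```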