• Title/Abstract/Keywords: Hessian

Search results: 110

Optimal Active-Control & Development of Optimization Algorithm for Reduction of Drag in Flow Problems (3) - Construction of the Formulation for True Newton Method and Application to Viscous Drag Reduction of Three-Dimensional Flow

  • 박재형
    • 한국전산구조공학회논문집 / Vol. 20, No. 6 / pp.751-759 / 2007
  • In previous work, the author proposed several powerful techniques for carrying out optimization of large-scale, highly nonlinear flow problems. Specifically, a step-by-step technique was used to improve convergence of the optimization process; in addition, to accelerate convergence, a method was proposed that uses sensitivity information obtained during the optimization iterations to supply good initial values for solving the system equilibrium equations, together with a method, motivated by the simultaneous technique that imposes the equilibrium equations as constraints, of adjusting the convergence tolerances of the analysis and the optimization. However, those techniques are fundamentally based on the quasi-Newton method. To date, when the SQP technique is used in optimization, the quasi-Newton method has been employed because deriving the exact Hessian matrix is very intricate and laborious. For even larger problems such as three-dimensional flows, however, it becomes necessary to use the Newton method in the true sense, i.e., the true Newton method. In this study, the procedure for obtaining the exact Hessian matrix required by the true Newton method is derived, and an optimization routine based on the true Newton method is built on that basis. The routine is then applied to a three-dimensional problem to verify its effectiveness.
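
The contrast drawn here between quasi-Newton iterations and a true Newton iteration built on the exact Hessian can be illustrated on a generic smooth objective. The sketch below is not the paper's flow-control formulation; it only shows, for an assumed analytic test function (the Rosenbrock function), how a true Newton step uses the exact Hessian where a quasi-Newton step would substitute an approximation.

```python
import numpy as np

# Illustrative smooth objective (Rosenbrock) with analytic gradient and Hessian.
def f(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                      200.0 * (x[1] - x[0]**2)])

def hess(x):  # the exact Hessian -- this is what a "true Newton" iteration requires
    return np.array([[1200.0 * x[0]**2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
                     [-400.0 * x[0],                          200.0]])

x = np.array([-1.2, 1.0])
for _ in range(20):
    # True Newton step: solve H(x) p = -g(x) with the exact Hessian.
    # A quasi-Newton method would replace hess(x) by an approximation (e.g. BFGS).
    p = np.linalg.solve(hess(x), -grad(x))
    x = x + p
print(x)  # -> approximately [1. 1.]
```

Near the solution the exact-Hessian iteration converges quadratically, which is the payoff sought for large three-dimensional problems at the cost of deriving and assembling the exact Hessian.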

A DUAL ALGORITHM FOR MINIMAX PROBLEMS

  • HE SUXIANG
    • Journal of applied mathematics & informatics / Vol. 17, No. 1-3 / pp.401-418 / 2005
  • In this paper, a dual algorithm, based on a smoothing function of Bertsekas (1982), is established for solving unconstrained minimax problems. It is proven that the sequence of points generated by solving a sequence of unconstrained minimizations of the smoothing function with a changing parameter t converges locally, at a Q-superlinear rate, to a Kuhn-Tucker point under some mild conditions. The relationship between the condition number of the Hessian matrix of the smoothing function and the parameter is studied, which also validates the convergence theory. Finally, numerical results are reported to show the effectiveness of this algorithm.
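
A common smoothing of a finite minimax objective is the exponential (log-sum-exp) function; whether this is exactly the Bertsekas (1982) function used in the paper is an assumption, but it illustrates the role of the parameter t and why the Hessian's conditioning degrades as t shrinks. The component functions below are an arbitrary small test problem, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Component functions of a small illustrative minimax problem: min_x max_i f_i(x).
def f_components(x):
    return np.array([x[0]**2 + x[1]**4,
                     (2.0 - x[0])**2 + (2.0 - x[1])**2,
                     2.0 * np.exp(x[1] - x[0])])

def smoothed_max(x, t):
    # Exponential (log-sum-exp) smoothing: t*log(sum_i exp(f_i(x)/t)) -> max_i f_i(x) as t -> 0+.
    f = f_components(x)
    m = f.max()                               # shift for numerical stability
    return m + t * np.log(np.exp((f - m) / t).sum())

x0 = np.array([1.0, 1.0])
for t in [1.0, 0.1, 0.01, 0.001]:
    res = minimize(lambda x: smoothed_max(x, t), x0, method='BFGS')
    x0 = res.x                                # warm start the next, stiffer subproblem
    print(t, np.round(res.x, 4), round(f_components(res.x).max(), 4))
# As t decreases, the smoothed problem tracks the minimax solution more closely,
# but the Hessian of the smoothing function becomes increasingly ill-conditioned --
# the accuracy-versus-conditioning trade-off that the paper analyzes.
```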

A SELF SCALING MULTI-STEP RANK ONE PATTERN SEARCH ALGORITHM

  • Moghrabi, Issam A.R.
    • Journal of the Korean Society for Industrial and Applied Mathematics / Vol. 15, No. 4 / pp.267-275 / 2011
  • This paper proposes a new, rapidly convergent pattern search quasi-Newton algorithm that employs the multi-step version of the Symmetric Rank One (SR1) update. The new algorithm works on factorizations of the inverse Hessian approximations to make available a sequence of convergent positive bases required by the pattern search process. The algorithm, in principle, resembles that developed in [1], with multi-step methods dominating the derivation and with the numerical improvements shown by the results presented herein.
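
For reference, the single-step SR1 update of an inverse Hessian approximation looks as follows; the multi-step and factorized variants the paper builds on are not reproduced here, and the safeguard threshold is the usual textbook choice rather than anything specified in the abstract.

```python
import numpy as np

def sr1_inverse_update(H, s, y, r=1e-8):
    """Single-step SR1 update of an inverse Hessian approximation H.

    s = x_{k+1} - x_k and y = grad f(x_{k+1}) - grad f(x_k).  The multi-step
    variants discussed in the paper replace (s, y) with combinations formed
    from several previous iterates; that refinement is not reproduced here.
    """
    v = s - H @ y
    denom = v @ y
    # Usual SR1 safeguard: skip the update when the denominator is too small,
    # otherwise the approximation can blow up.
    if abs(denom) <= r * np.linalg.norm(v) * np.linalg.norm(y):
        return H
    return H + np.outer(v, v) / denom

# Tiny check on a quadratic with known Hessian A: for exact data y = A s,
# the SR1 approximation reaches A^{-1} after n independent steps.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
H = np.eye(2)
rng = np.random.default_rng(0)
for _ in range(5):
    s = rng.standard_normal(2)
    H = sr1_inverse_update(H, s, A @ s)
print(np.round(H @ A, 3))  # -> identity matrix
```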

Diagnostics for Regression with Finite-Order Autoregressive Disturbances

  • Lee, Young-Hoon;Jeong, Dong-Bin;Kim, Soon-Kwi
    • Journal of the Korean Statistical Society / Vol. 31, No. 2 / pp.237-250 / 2002
  • Motivated by Cook's (1986) assessment of local influence, which investigates the curvature of a surface associated with an overall discrepancy measure, this paper extends the idea to the linear regression model with AR(p) disturbances. Diagnostics for linear regression models with AR(p) disturbances are discussed when simultaneous perturbations of the response vector are allowed. Numerical studies demonstrate routine application of the derived criterion.

SCALING METHODS FOR QUASI-NEWTON METHODS

  • MOGHRABI, ISSAM A.R.
    • Journal of the Korean Society for Industrial and Applied Mathematics / Vol. 6, No. 1 / pp.91-107 / 2002
  • This paper presents two new self-scaling variable-metric algorithms. The first is based on a known two-parameter family of rank-two updating formulae; the second employs an initial scaling of the estimated inverse Hessian that modifies the first self-scaling algorithm. The algorithms are compared with similar published algorithms, notably those due to Oren, Shanno and Phua, and Biggs, and with BFGS (the best-known quasi-Newton method). The best of these new and published algorithms are also modified to employ inexact line searches, with marginal effect. The new algorithms are superior, especially as the problem dimension increases.
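
The simplest self-scaling device of the Oren / Shanno-Phua type rescales the initial inverse Hessian estimate by (y's)/(y'y) before the first update. The sketch below combines that scaling with a plain BFGS update on an assumed small quadratic with exact line searches; it only shows where the scaling enters and does not reproduce the two-parameter family studied in the paper.

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    # Standard BFGS update of the inverse Hessian approximation.
    rho = 1.0 / (y @ s)
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def scaled_initial_inverse_hessian(s, y):
    # Oren / Shanno-Phua style initial scaling: H0 = (y's / y'y) I.
    return (y @ s) / (y @ y) * np.eye(len(s))

# Minimal driver on a convex quadratic f(x) = 0.5 x'Ax - b'x (gradient A x - b),
# with exact line searches, just to show where the initial scaling enters.
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x = np.zeros(3)
g = A @ x - b
H = None
for _ in range(20):
    if np.linalg.norm(g) < 1e-10:
        break                                     # converged
    p = -g if H is None else -(H @ g)
    alpha = -(g @ p) / (p @ (A @ p))              # exact step length for a quadratic
    s = alpha * p
    x, g_old = x + s, g
    g = A @ x - b
    y = g - g_old
    if H is None:
        H = scaled_initial_inverse_hessian(s, y)  # rescale before the first update
    H = bfgs_inverse_update(H, s, y)
print(np.round(x - np.linalg.solve(A, b), 8))     # -> ~[0. 0. 0.]
```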


AN UNCONDITIONALLY GRADIENT STABLE NUMERICAL METHOD FOR THE OHTA-KAWASAKI MODEL

  • Kim, Junseok;Shin, Jaemin
    • 대한수학회보 / Vol. 54, No. 1 / pp.145-158 / 2017
  • We present a finite difference method for solving the Ohta-Kawasaki model, a model of mesoscopic phase separation in block copolymers. Numerical methods for the Ohta-Kawasaki model need to inherit its mass conservation and energy dissipation properties. We prove these characteristic properties, together with the solvability and unconditional gradient stability of the scheme, by using Hessian matrices of a discrete functional. We present numerical results that validate the mass conservation, energy dissipation, and unconditional stability of the method.

Confidence Interval Estimation Using SV in LS-SVM

  • Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society / Vol. 14, No. 3 / pp.451-459 / 2003
  • This paper suggests a method to estimate confidence intervals using the SVs (support vectors) in LS-SVM (least-squares support vector machine). The proposed method rests on the fact that the values of the Hessian matrix obtained from the full data set and from the SVs alone do not differ significantly. Since the suggested method uses only the SVs, a part of the full data, it saves computing time and memory space. A simulation study justifies the proposed method.
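
For context, the sketch below sets up the standard LS-SVM regression problem in its dual (linear-system) form, where the block K + I/γ is the kernel Hessian the abstract refers to. The kernel choice, hyperparameters, and toy data are assumptions for illustration; the paper's confidence-interval construction from the SV subset is only described in the comments, not implemented.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Standard LS-SVM regression in dual form (Suykens-type formulation).

    Solves  [[0, 1'], [1, K + I/gamma]] [b; alpha] = [0; y].
    The block K + I/gamma plays the role of the Hessian the abstract refers to;
    the paper's point is that this matrix computed from the full data set and
    from the support vectors alone differ little, so confidence intervals can
    be estimated from the SV subset.  Only the full-data system is shown here.
    """
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_train, alpha, b, X_test, sigma=1.0):
    return rbf_kernel(X_test, X_train, sigma) @ alpha + b

# Toy usage: noisy sine data.
rng = np.random.default_rng(1)
X = np.linspace(0.0, 2.0 * np.pi, 40)[:, None]
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)
alpha, b = lssvm_train(X, y)
print(np.round(lssvm_predict(X, alpha, b, X[:5]), 3))
```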


THE CONVERGENCE OF A DUAL ALGORITHM FOR NONLINEAR PROGRAMMING

  • Zhang, Li-Wei;He, Su-Xiang
    • Journal of applied mathematics & informatics / Vol. 7, No. 3 / pp.719-738 / 2000
  • A dual algorithm based on the smooth function proposed by Polyak (1988) is constructed for solving nonlinear programming problems with inequality constraints. It generates a sequence of points converging locally to a Kuhn-Tucker point by solving a sequence of unconstrained minimizations of a smooth potential function with a parameter. We study the relationship between the eigenvalues of the Hessian of this smooth potential function and the parameter, which is useful for analyzing the effectiveness of the dual algorithm.

ITERATIVE METHODS FOR LARGE-SCALE CONVEX QUADRATIC AND CONCAVE PROGRAMS

  • Oh, Se-Young
    • 대한수학회논문집 / Vol. 9, No. 3 / pp.753-765 / 1994
  • The linearly constrained quadratic program (QP) considered is $$\min f(x) = c^T x + \frac{1}{2}x^T H x \quad \text{subject to } A^T x \geq b, \tag{1}$$ where $c, x \in R^n$, $b \in R^m$, $H \in R^{n \times n}$ is symmetric, and $A \in R^{n \times m}$. If there are bounds on $x$, these are included in the matrix $A^T$. The Hessian matrix $H$ may be positive definite or negative semi-definite. For large problems, $H$ and the constraint matrix $A$ are assumed to be sparse.
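
As a concrete instance of problem (1), the sketch below solves a small dense convex case (positive definite H) with an off-the-shelf solver. This is only meant to make the notation tangible; the toy data are assumptions, and the paper itself concerns iterative methods that exploit sparsity in H and A at large scale, which a dense example does not capture.

```python
import numpy as np
from scipy.optimize import minimize

# Small dense instance of problem (1): min c'x + 0.5 x'Hx  subject to  A'x >= b.
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])              # positive definite -> convex case
c = np.array([-8.0, -6.0])
A = np.array([[1.0, -1.0],              # columns of A are the constraint normals
              [1.0,  0.0]])
b = np.array([4.0, -3.0])               # constraints: x1 + x2 >= 4,  x1 <= 3

obj = lambda x: c @ x + 0.5 * x @ H @ x
jac = lambda x: c + H @ x
cons = {'type': 'ineq', 'fun': lambda x: A.T @ x - b, 'jac': lambda x: A.T}

res = minimize(obj, x0=np.zeros(2), jac=jac, constraints=[cons], method='SLSQP')
print(np.round(res.x, 4), round(res.fun, 4))   # -> [2. 2.] -10.0
```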


GLOBAL CONVERGENCE PROPERTIES OF THE MODIFIED BFGS METHOD ASSOCIATING WITH GENERAL LINE SEARCH MODEL

  • Liu, Jian-Guo;Guo, Qiang
    • Journal of applied mathematics & informatics / Vol. 16, No. 1-2 / pp.195-205 / 2004
  • For unconstrained programming with a non-convex objective function, this article gives a modified BFGS algorithm. The idea of the algorithm is to modify the approximate Hessian matrix so as to obtain a descent direction and to guarantee that the quasi-Newton iteration pattern remains effective. We prove the global convergence properties of the algorithm associated with the general form of line search, and prove the quadratic convergence rate of the algorithm under some conditions.
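
One well-known way to "modify the approximate Hessian" so that quasi-Newton directions remain descent directions on non-convex problems is Powell's damped BFGS update; it is shown below purely as an illustration of this class of modification and is not claimed to be the formula analyzed in the paper.

```python
import numpy as np

def damped_bfgs_update(B, s, y, theta_min=0.2):
    """BFGS update of the (direct) Hessian approximation B with Powell damping.

    For non-convex objectives the curvature condition s'y > 0 can fail, which
    would destroy positive definiteness of B.  Powell's damping replaces y by
    r = theta*y + (1 - theta)*B s so that s'r stays safely positive and the
    quasi-Newton direction -B^{-1} grad f remains a descent direction.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    theta = 1.0 if sy >= theta_min * sBs else (1.0 - theta_min) * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)

# Toy check: even with negative curvature (s'y < 0) the updated B stays positive definite.
B = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([-0.5, 0.3])               # s'y = -0.5 < 0
B = damped_bfgs_update(B, s, y)
print(np.linalg.eigvalsh(B))            # both eigenvalues remain positive
```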