• Title/Summary/Keyword: Conjugate Gradient Methods


Interior Point Methods for Network Problems (An Efficient Conjugate Gradient Method for Interior Point Methods) (네트워크 문제에서 내부점 방법의 활용 (내부점 선형계획법에서 효율적인 공액경사법))

  • 설동렬
    • Journal of the military operations research society of Korea / v.24 no.1 / pp.146-156 / 1998
  • Cholesky factorization is known to be inefficient in interior point methods for problems with dense columns and for network problems. We instead use the conjugate gradient method, together with preconditioners that improve its convergence rate. Several preconditioners were applied within LPABO 5.1 and the results were compared with those of CPLEX 3.0. The conjugate gradient method proves more efficient than Cholesky factorization on problems with dense columns and on network problems, and the incomplete Cholesky factorization preconditioner is the most efficient of the preconditioners tested. A minimal preconditioned conjugate gradient sketch follows this entry.

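The entry above replaces Cholesky factorization with preconditioned conjugate gradient (PCG) iterations inside an interior point method. Below is a minimal, self-contained PCG sketch for a symmetric positive definite system Ax = b; the paper uses an incomplete Cholesky preconditioner inside LPABO, whereas this sketch substitutes a simple Jacobi (diagonal) preconditioner and a random toy system purely for illustration.

    import numpy as np

    def preconditioned_cg(A, b, apply_M_inv, x0=None, tol=1e-10, max_iter=1000):
        """Solve A x = b for SPD A with preconditioned conjugate gradients.
        apply_M_inv(r) applies the inverse of the preconditioner to a residual."""
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - A @ x                    # residual
        z = apply_M_inv(r)               # preconditioned residual
        p = z.copy()                     # search direction
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)        # step length along p
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = apply_M_inv(r)
            rz_new = r @ z
            beta = rz_new / rz           # update coefficient for the next direction
            p = z + beta * p
            rz = rz_new
        return x

    # Toy SPD system; a Jacobi preconditioner stands in for incomplete Cholesky.
    rng = np.random.default_rng(0)
    B = rng.standard_normal((50, 50))
    A = B @ B.T + 50.0 * np.eye(50)
    b = rng.standard_normal(50)
    diag = np.diag(A)
    x = preconditioned_cg(A, b, apply_M_inv=lambda r: r / diag)
    print(np.linalg.norm(A @ x - b))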

AN AFFINE SCALING INTERIOR ALGORITHM VIA CONJUGATE GRADIENT AND LANCZOS METHODS FOR BOUND-CONSTRAINED NONLINEAR OPTIMIZATION

  • Jia, Chunxia;Zhu, Detong
    • Journal of applied mathematics & informatics / v.29 no.1_2 / pp.173-190 / 2011
  • In this paper, we construct a new affine scaling interior algorithm that uses affine scaling conjugate gradient and Lanczos methods for bound-constrained nonlinear optimization. The iterative direction is obtained by solving a quadratic model via the affine scaling conjugate gradient and Lanczos methods. Using a backtracking line search, we find an acceptable trial step length along this direction that keeps the iterate strictly feasible and decreases the objective function nonmonotonically. Global convergence and a local superlinear convergence rate of the proposed algorithm are established under reasonable conditions. Finally, numerical results illustrate the effectiveness of the proposed algorithm. A sketch of a strictly feasible backtracking line search follows this entry.
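
The algorithm above combines affine scaling directions with a backtracking line search that keeps every iterate strictly feasible with respect to the bounds. The sketch below shows only that line search ingredient for simple bounds with a standard Armijo acceptance test; the paper's nonmonotone acceptance rule and its affine scaling conjugate gradient/Lanczos directions are not reproduced, and the quadratic example objective is hypothetical.

    import numpy as np

    def feasible_backtracking(f, grad, x, d, lower, upper,
                              alpha0=1.0, shrink=0.5, c1=1e-4, max_backtracks=50):
        """Backtrack along direction d until the trial point is strictly inside
        the bounds and satisfies an Armijo sufficient-decrease condition."""
        fx, gx = f(x), grad(x)
        alpha = alpha0
        for _ in range(max_backtracks):
            trial = x + alpha * d
            strictly_feasible = np.all(trial > lower) and np.all(trial < upper)
            if strictly_feasible and f(trial) <= fx + c1 * alpha * (gx @ d):
                return trial, alpha
            alpha *= shrink                    # shrink the step and try again
        return x, 0.0                          # no acceptable step found

    # Tiny example: one step on a quadratic subject to 0 < x < 2, along -gradient.
    f = lambda x: 0.5 * np.sum((x - 3.0) ** 2)
    grad = lambda x: x - 3.0
    x = np.array([1.0, 1.5])
    x_new, step = feasible_backtracking(f, grad, x, -grad(x),
                                        lower=np.zeros(2), upper=2.0 * np.ones(2))
    print(x_new, step)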

Solving a Matrix Polynomial by Conjugate Gradient Methods

  • Ko, Hyun-Ji;Kim, Hyun-Min
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.11 no.4 / pp.39-46 / 2007
  • One of the well known and much studied nonlinear matrix equations is the matrix polynomial $G(X)=A_0X^m+A_1X^{m-1}+\cdots+A_m$, where $A_0, A_1, \ldots, A_m$ and $X$ are $n\times n$ real matrices. We show how minimization methods can be used to solve the matrix polynomial equation $G(X)=0$ and give some numerical experiments. We also compare the Polak-Ribière and Fletcher-Reeves versions of the conjugate gradient method. A minimal nonlinear conjugate gradient sketch comparing the two update formulas follows this entry.

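The entry above solves the matrix polynomial equation by minimizing a residual with nonlinear conjugate gradients and compares the Fletcher-Reeves and Polak-Ribière update formulas. The sketch below illustrates that comparison for the quadratic case $G(X)=A_0X^2+A_1X+A_2$, minimizing $f(X)=\|G(X)\|_F^2$ with a simple backtracking step; the gradient expression is a standard Fréchet-derivative calculation for this special case and is not taken from the paper, and the test matrices are randomly generated.

    import numpy as np

    def G(X, A0, A1, A2):
        return A0 @ X @ X + A1 @ X + A2

    def grad_f(X, A0, A1, A2):
        # Gradient of f(X) = ||G(X)||_F^2 for the quadratic (m = 2) matrix
        # polynomial, obtained from the Frechet derivative of G at X.
        R = G(X, A0, A1, A2)
        return 2.0 * (A0.T @ R @ X.T + X.T @ A0.T @ R + A1.T @ R)

    def nonlinear_cg(A0, A1, A2, X0, beta_rule="PR", iters=200):
        f = lambda X: np.linalg.norm(G(X, A0, A1, A2), "fro") ** 2
        X = X0.copy()
        g = grad_f(X, A0, A1, A2)
        d = -g
        for _ in range(iters):
            if np.sum(g * d) >= 0:          # safeguard: restart along steepest descent
                d = -g
            gd, fX, t = np.sum(g * d), f(X), 1.0
            while f(X + t * d) > fX + 1e-4 * t * gd and t > 1e-12:   # backtracking
                t *= 0.5
            X = X + t * d
            g_new = grad_f(X, A0, A1, A2)
            if np.linalg.norm(g_new) < 1e-10:
                break
            if beta_rule == "FR":           # Fletcher-Reeves
                beta = np.sum(g_new * g_new) / np.sum(g * g)
            else:                           # Polak-Ribiere (non-negative variant)
                beta = max(0.0, np.sum(g_new * (g_new - g)) / np.sum(g * g))
            d = -g_new + beta * d
            g = g_new
        return X, f(X)

    # Build a quadratic matrix polynomial with a known solvent S, so G(S) = 0.
    rng = np.random.default_rng(1)
    n = 4
    A0, A1, S = (rng.standard_normal((n, n)) for _ in range(3))
    A2 = -(A0 @ S @ S + A1 @ S)
    for rule in ("FR", "PR"):
        _, residual = nonlinear_cg(A0, A1, A2, X0=np.zeros((n, n)), beta_rule=rule)
        print(rule, residual)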

An Inverse Analysis of Two-Dimensional Heat Conduction Problem Using Regular and Modified Conjugate Gradient Method (표준공액구배법과 수정공액구배법을 이용한 2차원 열전도 문제의 역해석)

  • Choi, Eui-Rak;Kim, Woo-Seung
    • Transactions of the Korean Society of Mechanical Engineers B / v.22 no.12 / pp.1715-1725 / 1998
  • A two-dimensional transient inverse heat conduction problem is solved, involving the simultaneous estimation of the unknown location $(X^*, Y^*)$ and the timewise varying unknown strength $G(\tau)$ of a line heat source embedded inside a rectangular bar with insulated boundaries. The regular conjugate gradient method (RCGM) and the modified conjugate gradient method (MCGM) with adjoint equation are used alternately to estimate the unknown strength $G(\tau)$ of the source term, while a parameter estimation approach is used to estimate the unknown location $(X^*, Y^*)$ of the line heat source. The alternate use of the regular and modified conjugate gradient methods alleviates the convergence difficulties encountered at the initial and final times (i.e., $\tau=0$ and $\tau=\tau_f$), which stabilizes the computation and speeds up the convergence of the solution. To examine the effectiveness of this approach under severe test conditions, the unknown strength $G(\tau)$ is chosen in the form of rectangular, triangular, and sinusoidal functions.

Regularized iterative image restoration by using the conjugate gradient method with constraints (구속 조건을 사용한 공액 경사법에 의한 정칙화 반복 복원 처리)

  • 김승묵;홍성용;이태홍
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.9 / pp.1985-1997 / 1997
  • This paper proposes a regularized iterative image restoration using the conjugate gradient method. Compared with conventional iterative methods, the conjugate gradient method has the merit of converging toward a solution at a superlinear rate. Because of this very property, however, several artifacts such as ringing effects and partial amplification of the noise appear in the course of restoring images degraded by defocusing blur and additive noise. We therefore propose a regularized conjugate gradient method with constraints. By applying a projection constraint and a regularization parameter, the amplification of the additive noise can be suppressed. Experimental results show the superior convergence ratio of the proposed method compared with conventional regularized iterative methods. A minimal sketch of regularized CG restoration with a projection constraint follows this entry.

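The restoration scheme above runs conjugate gradient iterations on a regularized least-squares objective and constrains the iterates to suppress noise amplification. Below is a minimal one-dimensional sketch of that idea, assuming a known blur operator H, Tikhonov (identity) regularization with parameter lam, and a simple box projection as the constraint; the blur model, constraint set, and data here are toy stand-ins, not the paper's.

    import numpy as np

    def restore(H, y, lam=1e-2, lower=0.0, upper=1.0, iters=100):
        """Minimize ||H x - y||^2 + lam ||x||^2 with CG-style updates,
        projecting every iterate onto the box [lower, upper]."""
        A = H.T @ H + lam * np.eye(H.shape[1])     # normal-equations operator
        b = H.T @ y
        x = np.zeros(H.shape[1])
        r = b - A @ x
        d = r.copy()
        for _ in range(iters):
            Ad = A @ d
            alpha = (r @ r) / (d @ Ad)
            x = np.clip(x + alpha * d, lower, upper)   # projection constraint
            r_new = b - A @ x                          # recompute after projection
            if np.linalg.norm(r_new) < 1e-10:
                break
            beta = (r_new @ r_new) / (r @ r)
            d = r_new + beta * d
            r = r_new
        return x

    # Toy 1-D deblurring: Gaussian-like blur plus additive noise.
    rng = np.random.default_rng(2)
    n = 64
    i = np.arange(n)
    H = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
    H /= H.sum(axis=1, keepdims=True)
    x_true = (np.abs(i - n // 2) < 10).astype(float)   # box signal in [0, 1]
    y = H @ x_true + 0.01 * rng.standard_normal(n)
    x_hat = restore(H, y)
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))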

Comparison of Regularization Techniques For an Inverse Radiation Boundary Analysis (역복사경계해석을 위한 다양한 조정기법 비교)

  • Kim, Ki-Wan;Baek, Seung-Wook
    • Proceedings of the KSME Conference / 2004.11a / pp.1288-1293 / 2004
  • Inverse radiation problems are solved to estimate boundary conditions, such as the temperature distribution and the wall emissivity, in an axisymmetric absorbing, emitting, and scattering medium, given measured incident radiative heat fluxes. Various regularization methods, such as a hybrid genetic algorithm, the conjugate gradient method, and the Newton method, were adopted to solve the inverse problem, and their features are discussed in terms of estimation accuracy and computational efficiency. Additionally, we propose a new combined approach that adopts the genetic algorithm as an initial value selector and then uses the conjugate gradient method and the Newton method, reducing their dependence on the initial value. A minimal two-stage (global search, then gradient refinement) sketch follows this entry.

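The combined approach proposed above uses a population-based global search only to pick a starting point and then hands off to gradient-based refinement. The sketch below shows that two-stage pattern with SciPy, using differential evolution as a stand-in for the paper's hybrid genetic algorithm and the conjugate gradient method for refinement; the Rastrigin-type objective is a toy replacement for the inverse-radiation misfit.

    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    # Toy multimodal objective standing in for the inverse-radiation misfit.
    def objective(x):
        return np.sum(x ** 2) + 10.0 * np.sum(1.0 - np.cos(2.0 * np.pi * x))

    bounds = [(-5.0, 5.0)] * 4

    # Stage 1: coarse global search (stand-in for the hybrid genetic algorithm).
    coarse = differential_evolution(objective, bounds, maxiter=30, seed=0, tol=1e-2)

    # Stage 2: local refinement starting from the global candidate.
    refined = minimize(objective, coarse.x, method="CG")

    print("coarse :", coarse.x, coarse.fun)
    print("refined:", refined.x, refined.fun)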

A Study on Numerical Optimization Method for Aerodynamic Design (공력설계를 위한 수치최적설계기법의 연구)

  • Jin, Xue-Song;Choi, Jae-Ho;Kim, Kwang-Yong
    • The KSFM Journal of Fluid Machinery / v.2 no.1 s.2 / pp.29-34 / 1999
  • To develop an efficient numerical optimization method for airfoil design, an evaluation of various methods coupled with a two-dimensional Navier-Stokes analysis is presented. The simplex method and the Hooke-Jeeves method are used as direct search methods, and the steepest descent method, the conjugate gradient method, and the DFP method are used as indirect search methods and are tested for determining the search direction. To determine the moving distance, the golden section method and the cubic interpolation method are tested. The finite volume method is used to discretize the two-dimensional Navier-Stokes equations, and the SIMPLEC algorithm is used for velocity-pressure correction. For the optimal design of a two-dimensional airfoil, the maximum thickness, the maximum ordinate of the camber line, and the chordwise position of the maximum ordinate are chosen as design variables, and the ratio of drag coefficient to lift coefficient is selected as the objective function. The results show that the conjugate gradient method and the cubic interpolation method are the most efficient for determining the search direction and the moving distance, respectively. A minimal golden section line search sketch follows this entry.

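Among the step-length rules compared above, the golden section method brackets a one-dimensional minimum and shrinks the bracket by the golden ratio at each step. Below is a minimal sketch of that line search along a fixed descent direction; it is generic and not tied to the Navier-Stokes-based objective of the paper, and a production version would reuse one function evaluation per iteration instead of recomputing both.

    import numpy as np

    def golden_section(phi, a, b, tol=1e-6):
        """Minimize a unimodal 1-D function phi on [a, b] by golden section search."""
        inv_phi = (np.sqrt(5.0) - 1.0) / 2.0      # about 0.618
        c = b - inv_phi * (b - a)
        d = a + inv_phi * (b - a)
        while (b - a) > tol:
            if phi(c) < phi(d):
                b, d = d, c                       # keep [a, d]; old c becomes new d
                c = b - inv_phi * (b - a)
            else:
                a, c = c, d                       # keep [c, b]; old d becomes new c
                d = a + inv_phi * (b - a)
        return 0.5 * (a + b)

    # Example: step length along the negative gradient of a quadratic objective.
    f = lambda x: (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2
    x0 = np.array([3.0, 0.0])
    direction = -np.array([2.0 * (x0[0] - 1.0), 8.0 * (x0[1] + 2.0)])  # -grad f(x0)
    t_star = golden_section(lambda t: f(x0 + t * direction), 0.0, 1.0)
    print(t_star, f(x0 + t_star * direction))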

Algorithm for Stochastic Neighbor Embedding: Conjugate Gradient, Newton, and Trust-Region

  • Hongmo, Je;Kijoeng, Nam;Seungjin, Choi
    • Proceedings of the Korean Information Science Society Conference / 2004.10b / pp.697-699 / 2004
  • Stochastic Neighbor Embedding (SNE) is a probabilistic method for mapping high-dimensional data into a low-dimensional representation while preserving neighbor identities. Although SNE has several useful properties, the gradient-based naive SNE algorithm has a critical limitation: it converges very slowly. To overcome this limitation, faster optimization methods should be considered; using a trust region method, we obtain what we call fast TR-SNE. Moreover, this paper presents two further optimization methods (the conjugate gradient method and Newton's method) to embody fast SNE algorithms. We compare the three methods and conclude that TR-SNE is the best algorithm among them in terms of speed and stability. Finally, several visualization experiments with TR-SNE confirm its stability. A short sketch of comparing such optimizers with SciPy follows this entry.

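The comparison above pits conjugate gradient, Newton, and trust-region optimizers against one another on the SNE objective. The sketch below shows the mechanics of such a comparison with scipy.optimize.minimize on the Rosenbrock test function, since the full SNE cost with its pairwise probabilities is too long to reproduce here; 'CG', 'Newton-CG', and 'trust-ncg' are standard SciPy method names, not the authors' implementations.

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

    x0 = np.full(10, -1.0)

    # Conjugate gradient needs only the gradient; the other two also use the Hessian.
    for method in ("CG", "Newton-CG", "trust-ncg"):
        kwargs = {"jac": rosen_der}
        if method != "CG":
            kwargs["hess"] = rosen_hess
        result = minimize(rosen, x0, method=method, **kwargs)
        print(f"{method:10s} f* = {result.fun:.3e}  function evaluations = {result.nfev}")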

Comparison of Two Gradient Methods through Application to the Vector Linear Predictor (두가지 gradient 방법의 벡터 선형 예측기에 대한 적용 비교)

  • Shin, Kwang-Kyun;Yang, Seung-In
    • Proceedings of the KIEE Conference / 1987.07b / pp.1595-1597 / 1987
  • Two gradient methods, the steepest descent method and the conjugate gradient method, are compared through application to vector linear predictors. It is found that the convergence rate of the conjugate gradient method is much faster than that of the steepest descent method. A minimal sketch contrasting the two methods on a linear prediction problem follows this entry.

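The comparison above, between steepest descent and conjugate gradients for a (vector) linear predictor, amounts to solving the normal equations R w = p, with R the autocorrelation matrix and p the cross-correlation vector. The sketch below contrasts the two iterations on such a system built from synthetic data; it illustrates the general behavior rather than reconstructing the 1987 experiment.

    import numpy as np

    def steepest_descent(R, p, iters):
        w = np.zeros_like(p)
        for _ in range(iters):
            r = p - R @ w
            w += (r @ r) / (r @ (R @ r)) * r      # exact line search for a quadratic
        return w

    def conjugate_gradient(R, p, iters):
        w = np.zeros_like(p)
        r = p - R @ w
        d = r.copy()
        for _ in range(iters):
            Rd = R @ d
            alpha = (r @ r) / (d @ Rd)
            w += alpha * d
            r_new = r - alpha * Rd
            if np.linalg.norm(r_new) < 1e-12:
                break
            d = r_new + (r_new @ r_new) / (r @ r) * d
            r = r_new
        return w

    # Synthetic linear prediction: predict x[n] from the previous 8 samples.
    rng = np.random.default_rng(3)
    x = np.convolve(rng.standard_normal(5000), np.ones(5) / 5.0, mode="same")
    order = 8
    X = np.array([x[n - order:n][::-1] for n in range(order, len(x))])
    y = x[order:]
    R, p = X.T @ X / len(y), X.T @ y / len(y)     # autocorrelation, cross-correlation

    for k in (5, 10, 20):
        for name, solver in (("steepest descent", steepest_descent),
                             ("conjugate gradient", conjugate_gradient)):
            w = solver(R, p, k)
            print(f"{name:18s} after {k:2d} iterations: "
                  f"residual {np.linalg.norm(R @ w - p):.2e}")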

Induced Charge Distribution Using Accelerated Uzawa Method (가속 Uzawa 방법을 이용한 유도전하계산법)

  • Kim, Jae-Hyun;Jo, Gwanghyun;Ha, Youn Doh
    • Journal of the Computational Structural Engineering Institute of Korea / v.34 no.4 / pp.191-197 / 2021
  • To calculate the induced charges of atoms in molecular dynamics, linear equations for the induced charges need to be solved. Because the induced charges are determined at each time step, this process involves considerable computational cost, so an efficient method for calculating the induced charge distribution is required when analyzing large systems. This paper introduces the Uzawa method for solving the saddle point problems that arise in the linear systems from the Lagrange equation with constraints. We apply the accelerated Uzawa algorithm, which noticeably reduces computational cost by using the Schur complement and preconditioned conjugate gradient methods, in order to overcome the drawback of the Uzawa parameter affecting the convergence speed and to increase the efficiency of the matrix operations. Numerical models of molecular dynamics in which two gold nanoparticles are placed under external electric fields show that the proposed method improves both convergence and efficiency: the computational cost was reduced to approximately 1/10 of that of the Gaussian elimination method, and faster convergence of the conjugate gradient iteration, compared to the basic Uzawa method, was verified. A minimal Uzawa-iteration sketch follows this entry.
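
The accelerated Uzawa scheme above treats the induced-charge equations as a saddle point system, eliminates the primal unknowns with inner conjugate gradient solves, and updates the Lagrange multipliers through the Schur complement. Below is a minimal sketch of the basic (unaccelerated) structure for a system with blocks A and B, using SciPy's conjugate gradient for the inner solves; the step tau is picked from an explicitly formed Schur complement for illustration, and the acceleration and preconditioning of the paper are not reproduced.

    import numpy as np
    from scipy.sparse.linalg import cg

    def uzawa(A, B, f, g, tau, outer_iters=500, tol=1e-4):
        """Basic Uzawa iteration for the saddle point system
            A x + B^T y = f,   B x = g.
        Each outer step solves A x = f - B^T y with conjugate gradients and then
        moves the multiplier y along the constraint residual, i.e. a Richardson
        step on the Schur complement system."""
        x = np.zeros(A.shape[0])
        y = np.zeros(B.shape[0])
        for _ in range(outer_iters):
            x, _ = cg(A, f - B.T @ y)          # inner CG solve (SciPy defaults)
            residual = B @ x - g
            if np.linalg.norm(residual) < tol:
                break
            y = y + tau * residual
        return x, y

    # Toy saddle point problem with an SPD block A and full-rank constraints B.
    rng = np.random.default_rng(4)
    n, m = 30, 5
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)
    B = rng.standard_normal((m, n))
    f, g = rng.standard_normal(n), rng.standard_normal(m)

    # Hand-picked step from the explicit Schur complement (cheap at this size);
    # the paper instead accelerates this update and preconditions the inner CG.
    S = B @ np.linalg.solve(A, B.T)
    tau = 1.0 / np.linalg.norm(S, 2)
    x, y = uzawa(A, B, f, g, tau)
    print(np.linalg.norm(A @ x + B.T @ y - f), np.linalg.norm(B @ x - g))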