• Title/Summary/Keyword: conjugate gradient


NUMERICAL SOLUTIONS FOR MODELS OF LINEAR ELASTICITY USING FIRST-ORDER SYSTEM LEAST SQUARES

  • Lee, Chang-Ock
    • Korean Journal of Mathematics / v.7 no.2 / pp.245-269 / 1999
  • A multigrid method and its acceleration by the conjugate gradient method are developed for first-order system least squares (FOSLS) with bilinear finite elements, for various boundary value problems of planar linear elasticity. These are two-stage algorithms that first solve for the displacement flux variable and then for the displacement itself. This paper focuses on solving for the displacement flux variable only. Numerical results show that convergence remains uniform even as the material becomes nearly incompressible. Computations of convergence factors and discretization errors are included. Heuristic arguments for improving convergence are also discussed.

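
The acceleration step named in the abstract is the classical conjugate gradient iteration for a symmetric positive definite system. A minimal pure-Python sketch of that iteration (the 2x2 system here is illustrative, not taken from the paper):

```python
# Classical conjugate gradient for A x = b, A symmetric positive definite.
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                        # residual r = b - A x (x starts at 0)
    p = r[:]                        # first search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

On an n-by-n SPD system the iteration terminates in at most n steps in exact arithmetic, which is why it pairs well with multigrid as an accelerator.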

A New Approach to Optimal Power Flow using Conjugate Gradient Method (공액 경사법을 사용한 최적조류계산에 대한 새로운 접근법)

  • Jo, Han-Hyung;Kim, Weon-Kyum;Kim, Kern-Joong
    • Proceedings of the KIEE Conference / 1990.07a / pp.139-142 / 1990
  • This paper presents a new approach to the optimal power flow (OPF) problem using the conjugate gradient method. With this method we can obtain an initial feasible solution and the Lagrangian multipliers without computing a matrix inverse. Test experiments show a desirable result and a stable convergence characteristic.

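
For a non-quadratic objective such as an OPF cost, the relevant variant is nonlinear conjugate gradient. A hedged sketch of the Fletcher-Reeves form on a toy quadratic cost standing in for a generation-cost objective (the cost, its Hessian H, and the two-variable setup are illustrative assumptions, not the paper's model; since the toy cost is quadratic, the line search can be done exactly):

```python
# Toy cost: f(x) = (x0 - 1)^2 + 2*(x1 + 0.5)^2, minimized at [1.0, -0.5].
H = [[2.0, 0.0], [0.0, 4.0]]                 # Hessian of the toy cost

def grad(x):
    return [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)]

def fr_cg(x, iters=2):
    g = grad(x)
    d = [-gi for gi in g]                    # first direction: steepest descent
    for _ in range(iters):
        Hd = [sum(H[i][j] * d[j] for j in range(2)) for i in range(2)]
        # exact line search for a quadratic: t = -(g.d) / (d.Hd)
        t = -sum(g[i] * d[i] for i in range(2)) / sum(d[i] * Hd[i] for i in range(2))
        x = [x[i] + t * d[i] for i in range(2)]
        g_new = grad(x)
        beta = sum(a * a for a in g_new) / sum(a * a for a in g)  # Fletcher-Reeves
        d = [-g_new[i] + beta * d[i] for i in range(2)]
        g = g_new
    return x

x = fr_cg([0.0, 0.0])   # two conjugate steps reach the minimizer exactly
```

The real problem would add power-flow equality constraints via the Lagrangian the abstract mentions; this sketch shows only the unconstrained CG core.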

Improving the Training Performance of Neural Networks by using Hybrid Algorithm (하이브리드 알고리즘을 이용한 신경망의 학습성능 개선)

  • Kim, Weon-Ook;Cho, Yong-Hyun;Kim, Young-Il;Kang, In-Ku
    • The Transactions of the Korea Information Processing Society / v.4 no.11 / pp.2769-2779 / 1997
  • This paper proposes an efficient method for improving the training performance of neural networks using a hybrid of the conjugate gradient backpropagation algorithm and the dynamic tunneling backpropagation algorithm. The conjugate gradient backpropagation algorithm, a fast gradient algorithm, is applied for high-speed optimization. The dynamic tunneling backpropagation algorithm, a deterministic method based on the tunneling phenomenon, is applied for global optimization. After converging to a local minimum with the conjugate gradient backpropagation algorithm, a new initial point for escaping the local minimum is estimated by the dynamic tunneling backpropagation algorithm. The proposed method has been applied to the parity check and pattern classification problems. The simulation results show that the performance of the proposed method is superior to that of gradient descent backpropagation and to a hybrid of gradient descent and dynamic tunneling backpropagation, and that the new algorithm converges to the global minimum more often than gradient descent backpropagation.


Iterative Least-Squares Method for Velocity Stack Inversion - Part B: CGG Method (속도중합역산을 위한 반복적 최소자승법 - Part B: CGG 방법)

  • Ji Jun
    • Geophysics and Geophysical Exploration / v.8 no.2 / pp.170-176 / 2005
  • Recently, velocity stack inversion has attracted much attention as a useful way to perform various kinds of seismic data processing. To be useful in such processing, the inversion method should be robust to noise and produce a parsimonious velocity stack result. The IRLS (Iteratively Reweighted Least-Squares) method, which minimizes the L1-norm, is the one used most commonly. This paper introduces another method, the CGG (Conjugate Guided Gradient) method, which can achieve the same goal as the IRLS method. The CGG method is a modified CG (Conjugate Gradient) method that minimizes the L1-norm. This paper explains the CGG method and compares its results with those of the IRLS method. Tests on synthetic and real data demonstrate that the CGG method, like IRLS, can be used as an inversion method for minimizing various residual/model norms.
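
The IRLS baseline the paper compares against can be shown in its smallest form: reweighted least squares where each residual is weighted by the inverse of its magnitude, which drives the solution toward the L1 fit. Fitting a single constant this way converges to (approximately) the median, the classic illustration of L1 robustness; the data below are illustrative, not from the paper:

```python
# IRLS for min_m sum_i |d_i - m|: reweighted least squares with w_i = 1/|d_i - m|.
def irls_constant(data, iters=50, eps=1e-8):
    m = sum(data) / len(data)          # start from the L2 answer (the mean)
    for _ in range(iters):
        w = [1.0 / max(abs(d - m), eps) for d in data]   # eps guards division by 0
        m = sum(wi * di for wi, di in zip(w, data)) / sum(w)
    return m

m = irls_constant([1.0, 2.0, 3.0, 4.0, 100.0])  # outlier-resistant: near the median 3
```

The mean of this data is 22; the L1 fit ignores the outlier. CGG reaches the same kind of solution by guiding the CG residual instead of re-solving a weighted system each iteration.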

Conjugate Gradient Least-Squares Algorithm for Three-Dimensional Magnetotelluric Inversion (3차원 MT 역산에서 CG 법의 효율적 적용)

  • Kim, Hee-Joon;Han, Nu-Ree;Choi, Ji-Hyang;Nam, Myung-Jin;Song, Yoon-Ho;Suh, Jung-Hee
    • Geophysics and Geophysical Exploration / v.10 no.2 / pp.147-153 / 2007
  • The conjugate gradient (CG) method is one of the most efficient algorithms for solving a linear system of equations, and besides serving as a linear equation solver it can be applied to least-squares problems. When the CG method is applied to large-scale three-dimensional inversion of magnetotelluric data, two approaches have been pursued: linear CG inversion, in which each step of the Gauss-Newton iteration is solved incompletely using a truncated CG technique, and nonlinear CG inversion, in which CG is applied directly to the minimization of the objective functional for the nonlinear inverse problem. In either procedure we only need to compute the product of the sensitivity matrix, or its transpose, with an arbitrary vector, which significantly reduces the computational requirements of large-scale inversion.
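
The matrix-free point in the abstract is worth making concrete: CG touches the system matrix only through matrix-vector products, so the normal equations JᵀJ m = Jᵀd can be solved with two closures (J·v and Jᵀ·v) and JᵀJ never formed. The tiny J and d below are illustrative placeholders for a sensitivity matrix and a data vector:

```python
J = [[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]]   # stand-in sensitivity matrix (3 data, 2 model)
d = [1.0, 3.0, 4.0]                        # stand-in data vector

def Jv(v):   # forward product J v
    return [sum(J[i][j] * v[j] for j in range(2)) for i in range(3)]

def JTv(u):  # adjoint product J^T u
    return [sum(J[i][j] * u[i] for i in range(3)) for j in range(2)]

def cg_normal(b, iters=10):
    # CG on J^T J m = b, using only the two closures above
    x = [0.0, 0.0]
    r = b[:]
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = JTv(Jv(p))                    # (J^T J) p without forming J^T J
        alpha = rs / sum(p[i] * Ap[i] for i in range(2))
        x = [x[i] + alpha * p[i] for i in range(2)]
        r = [r[i] - alpha * Ap[i] for i in range(2)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < 1e-20:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(2)]
        rs = rs_new
    return x

m = cg_normal(JTv(d))
```

In the magnetotelluric setting each closure is one forward (or adjoint) modeling run, which is exactly why the explicit sensitivity matrix is never needed.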

Improvement of the Convergence Capability of a Single Loop Single Vector Approach Using Conjugate Gradient for a Concave Function (오목한 성능함수에서 공액경사도법을 이용한 단일루프 단일벡터 방법의 수렴성 개선)

  • Jeong, Seong-Beom;Lee, Se-Jung;Park, Gyung-Jin
    • Transactions of the Korean Society of Mechanical Engineers A / v.36 no.7 / pp.805-811 / 2012
  • The reliability-based design optimization (RBDO) approach requires a high computing cost to account for uncertainties. To reduce the design cost, the single loop single vector (SLSV) approach has been developed for RBDO. This method reduces the cost of calculating the design sensitivity by eliminating the nested optimization process. However, this elimination can make the method unstable or inaccurate depending on the problem characteristics, so the method may not give an accurate solution and the robustness of the solution is not guaranteed. In particular, when the performance function is concave, the process frequently diverges. In this research, the concept of the conjugate gradient method for unconstrained optimization is utilized to develop a new single loop single vector method. The conjugate gradient is calculated from the gradient directions at the most probable points (MPP) of previous cycles. Mathematical examples are solved to verify the proposed method, and the numerical performance of the results is compared with that of other RBDO methods. The SLSV approach using the conjugate gradient is not greatly influenced by the problem characteristics and improves convergence capability.
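
The direction update the paper borrows from CG can be shown in isolation: the new search vector combines the current gradient with the previous direction through a beta coefficient (a Fletcher-Reeves style ratio is used here as an illustrative assumption; the gradients stand in for those evaluated at the MPPs of successive cycles, and the final normalization reflects that the single-vector approach works with a unit direction):

```python
def conjugate_direction(g_new, g_old, d_old):
    # beta from the ratio of squared gradient norms (Fletcher-Reeves form)
    beta = sum(a * a for a in g_new) / sum(a * a for a in g_old)
    d = [-g + beta * p for g, p in zip(g_new, d_old)]
    n = sum(a * a for a in d) ** 0.5
    return [a / n for a in d]      # return a unit search direction

d = conjugate_direction([0.0, 1.0], [1.0, 0.0], [1.0, 0.0])
```

Because the previous cycle's direction is blended in, the search vector changes more smoothly between cycles, which is the mechanism behind the improved stability on concave performance functions.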

AN OPTIMAL BOOSTING ALGORITHM BASED ON NONLINEAR CONJUGATE GRADIENT METHOD

  • CHOI, JOOYEON;JEONG, BORA;PARK, YESOM;SEO, JIWON;MIN, CHOHONG
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.22 no.1 / pp.1-13 / 2018
  • Boosting, one of the most successful algorithms for supervised learning, searches for the most accurate weighted sum of weak classifiers. The search corresponds to convex programming with non-negativity and affine constraints. In this article, we propose a novel conjugate gradient algorithm with a modified Polak-Ribière-Polyak conjugate direction. The convergence of the algorithm is proved, and we report its successful applications to boosting.
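
For reference, the standard Polak-Ribière-Polyak coefficient in its common "plus" variant (clipped at zero, which guarantees a restart when progress stalls) looks like this; the paper's modified version for the constrained boosting problem differs in details not reproduced here:

```python
def beta_prp_plus(g_new, g_old):
    # PRP+: beta = max(0, g_new . (g_new - g_old) / ||g_old||^2)
    y = [a - b for a, b in zip(g_new, g_old)]
    num = sum(a * b for a, b in zip(g_new, y))
    den = sum(a * a for a in g_old)
    return max(0.0, num / den)
```

When consecutive gradients point the same way, the numerator shrinks and beta drops toward zero, so PRP-type rules automatically restart with steepest descent.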

BACKPROPAGATION BASED ON THE CONJUGATE GRADIENT METHOD WITH THE LINEAR SEARCH BY ORDER STATISTICS AND GOLDEN SECTION

  • Choe, Sang-Woong;Lee, Jin-Choon
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.06a / pp.107-112 / 1998
  • In this paper, we propose a new paradigm (NEW_BP) capable of overcoming the limitations of traditional backpropagation (OLD_BP). NEW_BP is based on the method of conjugate gradients with normalized direction vectors and computes the step size through a linear search characterized by order statistics and golden section. Simulation results showed that NEW_BP was definitely superior to both the stochastic and the deterministic OLD_BP in terms of accuracy and rate of convergence, and that it may surmount the problem of local minima. Furthermore, the results confirmed that the stagnation of training in OLD_BP stems from limitations of the algorithm itself, and that superficial workarounds will never cure this phenomenon.

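
The golden-section part of the step-size rule is a standard derivative-free bracketing search; a minimal sketch (the 1-D objective is illustrative, and the order-statistics component of the paper's line search is not reproduced):

```python
def golden_section(f, a, b, tol=1e-8):
    # shrink [a, b] around the minimum of a unimodal f by the golden ratio
    inv_phi = (5 ** 0.5 - 1) / 2          # 1/phi ~ 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                   # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                             # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

t = golden_section(lambda t: (t - 0.3) ** 2, 0.0, 1.0)
```

Each iteration discards a fixed fraction of the interval, so the cost is predictable regardless of how ill-scaled the loss along the search direction is.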

Hybrid of SA and CG Methods for Designing the Ka-Band Group-Delay Equalized Filter (Ka-대역 군지연-등화 여파기용 SA 기법과 CG 기법의 하이브리드 설계 기법)

  • Kahng, Sungtek
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.15 no.8 / pp.775-780 / 2004
  • This paper describes the realization of a Ka-band group-delay equalized filter designed with the help of a new hybrid method of Simulated Annealing (SA) and Conjugate Gradient (CG), to be employed in the multi-channel input multiplexer of a satellite, each channel of which comprises a channel filter and a group-delay equalizer. The SA and CG stages find the circuit parameters of an 8th-order elliptic function filter and a 2-pole equalizer, respectively. Measurement results demonstrate that the performance of the designed component meets the specifications and validates the design method.
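
The division of labor in such hybrids is generic: a stochastic global stage explores, and a deterministic local stage polishes. A hedged sketch of that strategy only; the one-dimensional cost, the crude global-proposal annealing loop, and the gradient polish (standing in for the CG stage) are all illustrative assumptions, not the paper's filter model:

```python
import math, random

def cost(x):      # toy cost with two basins; the +x tilt makes x ~ -2.03 global
    return (x * x - 4.0) ** 2 + x

def sa(temp=5.0, cooling=0.97, steps=300, seed=1):
    # annealing-style global search with uniform proposals (a simplification)
    rng = random.Random(seed)
    x = best = 0.0
    for _ in range(steps):
        cand = rng.uniform(-3.0, 3.0)
        delta = cost(cand) - cost(x)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand                      # Metropolis acceptance
        if cost(x) < cost(best):
            best = x
        temp *= cooling
    return best

def refine(x, lr=0.005, iters=2000):
    # deterministic local polish (stand-in for the CG stage)
    for _ in range(iters):
        g = 4.0 * x * (x * x - 4.0) + 1.0
        x -= lr * g
    return x

x = refine(sa())
```

SA alone lands somewhere in the right basin's vicinity; the refinement stage then pins the parameters to local-optimum precision, mirroring how the paper lets SA set the filter topology region and CG tune the equalizer.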

Regularized Iterative Image Restoration by using Method of Conjugate Gradient (공액경사법을 이용한 정칙화 반복 복원 방법)

  • 홍성용
    • Journal of the Korea Society of Computer and Information / v.3 no.2 / pp.139-146 / 1998
  • This paper proposes a regularized iterative image restoration method using the conjugate gradient method with a priori information. Compared with the conventional regularized conjugate gradient method, this method has the merit of preventing ringing artifacts and the partial amplification of noise while restoring an image degraded by blur and additive noise. The proposed method applies constraints to accelerate the convergence rate near edge regions, and the regularization parameter suppresses the amplification of noise. Experimental results show the superior convergence rate and artifact suppression of the proposed method compared with conventional methods.

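
The core computation behind such restoration is Tikhonov-regularized deconvolution solved with CG: (HᵀH + λI) f = Hᵀg, where H is the blur operator and λ damps noise amplification. A minimal sketch with a toy 1-D circulant blur (the matrix, signal, and λ value are illustrative; the paper's edge-adaptive constraints are not reproduced):

```python
H = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5]]   # toy 1-D circulant blur operator
g = [1.0, 2.0, 1.5]     # observed (blurred) signal
lam = 0.01              # regularization parameter

def matvec(x):          # apply (H^T H + lam*I) without forming it
    Hx = [sum(H[i][j] * x[j] for j in range(3)) for i in range(3)]
    HtHx = [sum(H[i][j] * Hx[i] for i in range(3)) for j in range(3)]
    return [HtHx[j] + lam * x[j] for j in range(3)]

def cg(b, iters=50):
    x = [0.0] * 3
    r = b[:]
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = matvec(p)
        a = rs / sum(p[i] * Ap[i] for i in range(3))
        x = [x[i] + a * p[i] for i in range(3)]
        r = [r[i] - a * Ap[i] for i in range(3)]
        rs2 = sum(v * v for v in r)
        if rs2 < 1e-24:
            break
        p = [r[i] + (rs2 / rs) * p[i] for i in range(3)]
        rs = rs2
    return x

Htg = [sum(H[i][j] * g[i] for i in range(3)) for j in range(3)]
f = cg(Htg)             # recovers approximately [0.5, 1.5, 2.5]
```

With λ = 0 this system is the plain least-squares deblur, which amplifies noise along the blur operator's small singular values; the λI term is what the abstract's regularization parameter suppresses.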