• Title/Summary/Keyword: conjugate-gradient method

Regularized Iterative Image Restoration by using Method of Conjugate Gradient (공액경사법을 이용한 정칙화 반복 복원 방법)

  • 홍성용
    • Journal of the Korea Society of Computer and Information
    • /
    • v.3 no.2
    • /
    • pp.139-146
    • /
    • 1998
  • This paper proposes a regularized iterative image restoration method using the conjugate gradient method and a priori information. Compared with the conventional regularized conjugate gradient method, this method prevents ringing artifacts and the partial amplification of noise while restoring an image degraded by blur and additive noise. The proposed method applies constraints to accelerate convergence near edge regions, and the regularization parameter suppresses noise amplification. Experimental results show that the proposed method achieves a faster convergence rate and better artifact suppression than conventional methods.
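
At its core, this kind of regularized restoration amounts to solving the normal equations (HᵀH + λCᵀC)x = Hᵀy with conjugate gradients, where H is the blur operator and C a roughness penalty. Below is a minimal 1-D sketch in that spirit; the blur operator, first-difference regularizer, and value of λ are illustrative assumptions, not taken from the paper (which additionally uses edge-adaptive constraints).

```python
import numpy as np

def cg(apply_A, b, x0, iters=200, tol=1e-8):
    """Conjugate gradient for A x = b with A symmetric positive definite."""
    x = x0.copy()
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 64
H = np.eye(n)                                 # toy blur: 5-tap moving average
for k in (1, 2):
    H += np.eye(n, k=k) + np.eye(n, k=-k)
H /= H.sum(axis=1, keepdims=True)
C = np.diff(np.eye(n), axis=0)                # first-difference regularizer
lam = 0.05                                    # regularization parameter (assumed)

x_true = np.zeros(n)
x_true[20:40] = 1.0                           # piecewise-constant test signal
y = H @ x_true + 0.01 * np.random.randn(n)    # blurred, noisy observation

# Normal equations of min ||y - Hx||^2 + lam * ||Cx||^2:
apply_A = lambda v: H.T @ (H @ v) + lam * (C.T @ (C @ v))
x_restored = cg(apply_A, H.T @ y, np.zeros(n))
```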

Improving the Training Performance of Neural Networks by using Hybrid Algorithm (하이브리드 알고리즘을 이용한 신경망의 학습성능 개선)

  • Kim, Weon-Ook;Cho, Yong-Hyun;Kim, Young-Il;Kang, In-Ku
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.11
    • /
    • pp.2769-2779
    • /
    • 1997
  • This paper proposes an efficient method for improving the training performance of neural networks using a hybrid of the conjugate gradient backpropagation algorithm and the dynamic tunneling backpropagation algorithm. The conjugate gradient backpropagation algorithm, a fast gradient algorithm, is applied for high-speed optimization. The dynamic tunneling backpropagation algorithm, a deterministic method with a tunneling phenomenon, is applied for global optimization. After converging to a local minimum with the conjugate gradient backpropagation algorithm, a new initial point for escaping the local minimum is estimated by the dynamic tunneling backpropagation algorithm. The proposed method has been applied to parity checking and pattern classification. The simulation results show that the performance of the proposed method is superior to those of the gradient descent backpropagation algorithm and a hybrid of gradient descent and dynamic tunneling backpropagation, and that the new algorithm converges to the global minimum more often than the gradient descent backpropagation algorithm.
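
Stripped of neural-network detail, the hybrid has a simple structure: run conjugate gradients to a local minimum, then "tunnel" to a lower point and descend again. The sketch below uses scipy's CG minimizer on a toy multimodal function; the random-perturbation tunneling rule is a deliberate simplification of the paper's deterministic dynamic tunneling system.

```python
import numpy as np
from scipy.optimize import minimize

def f(w):
    """Toy multimodal objective standing in for a training loss."""
    return float(np.sum(w**4 - 8.0 * w**2 + 3.0 * w))

rng = np.random.default_rng(0)
best = minimize(f, np.array([4.0, -4.0]), method='CG')  # phase 1: CG descent
for _ in range(50):                                     # phase 2: tunneling
    cand = best.x + rng.standard_normal(best.x.shape)   # probe a new region
    if f(cand) < best.fun:                              # found a lower point:
        best = minimize(f, cand, method='CG')           # descend again from it
print(best.x, best.fun)
```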

A LOGARITHMIC CONJUGATE GRADIENT METHOD INVARIANT TO NONLINEAR SCALING

  • Moghrabi, I.A.
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.8 no.2
    • /
    • pp.15-21
    • /
    • 2004
  • A conjugate gradient (CG) method is proposed for unconstrained optimization which is invariant to a nonlinear scaling of a strictly convex quadratic function. The technique has the same properties as the classical CG method when applied to a quadratic function. The algorithm derived here is based on a logarithmic model and is compared to the standard CG method of Fletcher and Reeves [3]. Numerical results are encouraging and indicate that nonlinear scaling is promising and deserves further investigation.
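
For reference, the Fletcher-Reeves baseline the abstract compares against looks as follows; the Rosenbrock test function and the restart rule are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import line_search

def f(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2   # Rosenbrock function

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                     200 * (x[1] - x[0]**2)])

x = np.array([-1.2, 1.0])
g = grad(x)
d = -g
for _ in range(200):
    alpha = line_search(f, grad, x, d)[0]
    if alpha is None:                  # line search failed: restart steepest
        d, alpha = -g, 1e-3
    x = x + alpha * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-6:
        break
    beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
    d = -g_new + beta * d
    g = g_new
print(x, f(x))
```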

Improvement of the Convergence Capability of a Single Loop Single Vector Approach Using Conjugate Gradient for a Concave Function (오목한 성능함수에서 공액경사도법을 이용한 단일루프 단일벡터 방법의 수렴성 개선)

  • Jeong, Seong-Beom;Lee, Se-Jung;Park, Gyung-Jin
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.36 no.7
    • /
    • pp.805-811
    • /
    • 2012
  • The reliability based design optimization (RBDO) approach requires high computing cost to account for uncertainties. In order to reduce the design cost, the single loop single vector (SLSV) approach has been developed for RBDO. This method reduces the cost of calculating design sensitivity by eliminating the nested optimization process. However, it can become unstable or inaccurate depending on the problem characteristics; the method may not give an accurate solution, and the robustness of the solution is not guaranteed. In particular, when the performance function is concave, the process frequently diverges. In this research, the concept of the conjugate gradient method for unconstrained optimization is utilized to develop a new single loop single vector method. The conjugate gradient is calculated from the gradient directions at the most probable points (MPP) of previous cycles. Mathematical examples are solved to verify the proposed method, and the numerical performance of the results is compared with that of other RBDO methods. The SLSV approach using the conjugate gradient is not greatly influenced by the problem characteristics and improves convergence capability.
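
A sketch of the direction update only: instead of using the normalized gradient of the performance function at the current MPP, combine it with the previous cycle's direction using a Fletcher-Reeves-type coefficient. The surrounding single-loop RBDO machinery is omitted, and this update rule is a plausible reading of the abstract, not the paper's exact formulation.

```python
import numpy as np

def conjugate_mpp_direction(grad_now, grad_prev, dir_prev):
    """Conjugate search direction for the next MPP estimate."""
    if dir_prev is None:                      # first cycle: gradient direction
        d = grad_now.copy()
    else:                                     # Fletcher-Reeves-type coefficient
        beta = (grad_now @ grad_now) / (grad_prev @ grad_prev)
        d = grad_now + beta * dir_prev
    return d / np.linalg.norm(d)              # SLSV works with unit vectors

# gradients of the performance function at the MPPs of successive cycles
cycle_gradients = [np.array([1.0, 0.2]), np.array([0.8, 0.5]),
                   np.array([0.7, 0.6])]
d, g_prev = None, None
for g in cycle_gradients:
    d = conjugate_mpp_direction(g, g_prev, d)
    g_prev = g
print(d)
```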

Comparison with two Gradient Methods through the application to the Vector Linear Predictor (두가지 gradient 방법의 벡터 선형 예측기에 대한 적용 비교)

  • Shin, Kwang-Kyun;Yang, Seung-In
    • Proceedings of the KIEE Conference
    • /
    • 1987.07b
    • /
    • pp.1595-1597
    • /
    • 1987
  • Two gradient methods, the steepest descent method and the conjugate gradient descent method, are compared through application to vector linear predictors. It is found that the convergence rate of the conjugate gradient descent method is much faster than that of the steepest descent method.
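
A small numerical experiment in the same spirit: solve the linear-predictor normal equations R w = r with both methods and count iterations to a fixed tolerance. The synthetic AR(2) signal, predictor order, and tolerance are our own choices, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 5000, 4
s = rng.standard_normal(N)
for t in range(2, N):                         # synthetic AR(2) signal
    s[t] += 0.6 * s[t-1] - 0.3 * s[t-2]

acf = np.array([s[:N-k] @ s[k:] for k in range(p + 1)]) / N
R = np.array([[acf[abs(i - j)] for j in range(p)] for i in range(p)])
r = acf[1:]                                   # normal equations: R w = r

def steepest_descent(R, r, tol=1e-8, max_it=10000):
    w = np.zeros(len(r))
    for it in range(max_it):
        g = r - R @ w                         # negative gradient
        if np.linalg.norm(g) < tol:
            return w, it
        w += ((g @ g) / (g @ (R @ g))) * g    # exact line search step
    return w, max_it

def conjugate_gradient(R, r, tol=1e-8, max_it=10000):
    w = np.zeros(len(r))
    g = r - R @ w
    d = g.copy()
    gg = g @ g
    for it in range(max_it):
        if np.sqrt(gg) < tol:
            return w, it
        Rd = R @ d
        alpha = gg / (d @ Rd)
        w += alpha * d
        g -= alpha * Rd
        gg_new = g @ g
        d = g + (gg_new / gg) * d
        gg = gg_new
    return w, max_it

print(steepest_descent(R, r)[1], conjugate_gradient(R, r)[1])
```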

Algorithm for Stochastic Neighbor Embedding: Conjugate Gradient, Newton, and Trust-Region

  • Je, Hongmo;Nam, Kijoeng;Choi, Seungjin
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.10b
    • /
    • pp.697-699
    • /
    • 2004
  • Stochastic Neighbor Embedding (SNE) is a probabilistic method of mapping a high-dimensional data space into a low-dimensional representation while preserving neighbor identities. Even though SNE has several useful properties, the gradient-based naive SNE algorithm has a critical limitation: it is very slow to converge. To overcome this limitation, faster optimization methods should be considered; we use a trust-region method and call the result fast TR-SNE. Moreover, this paper presents two further optimization methods (the conjugate gradient method and Newton's method) to embody a fast SNE algorithm. We compared the three methods and conclude that TR-SNE is the best algorithm among them considering speed and stability. Finally, we present several visualization experiments with TR-SNE to confirm its stability.
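
A compact sketch of the optimization comparison: the SNE cost (a sum of KL divergences between neighborhood distributions) is minimized with scipy's conjugate-gradient routine and with a trust-region Newton-type routine standing in for the Newton and trust-region variants. The toy data, bandwidths, and the finite-difference Hessian-vector product are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

def neighbor_probs(Z, sigma2):
    """Row-stochastic matrix of p(j|i) under a Gaussian kernel."""
    W = np.exp(-squareform(pdist(Z, 'sqeuclidean')) / (2.0 * sigma2))
    np.fill_diagonal(W, 0.0)
    return W / W.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 10))             # toy high-dimensional data
n, d = X.shape[0], 2
P = neighbor_probs(X, sigma2=1.0)

def cost(y):
    Q = neighbor_probs(y.reshape(n, d), sigma2=0.5)
    return float(np.sum(P * np.log((P + 1e-12) / (Q + 1e-12))))

def grad(y):
    Y = y.reshape(n, d)
    Q = neighbor_probs(Y, sigma2=0.5)
    M = P - Q + P.T - Q.T                     # classic SNE gradient weights
    return (2.0 * ((np.diag(M.sum(axis=1)) - M) @ Y)).ravel()

y0 = 1e-2 * rng.standard_normal(n * d)
res_cg = minimize(cost, y0, jac=grad, method='CG')
hessp = lambda y, v: (grad(y + 1e-6 * v) - grad(y)) / 1e-6  # FD Hessian-vector
res_tr = minimize(cost, y0, jac=grad, method='trust-ncg', hessp=hessp)
print(res_cg.fun, res_tr.fun)
```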

Iris Recognition using Multi-Resolution Frequency Analysis and Levenberg-Marquardt Back-Propagation

  • Jeong Yu-Jeong;Choi Gwang-Mi
    • Journal of information and communication convergence engineering
    • /
    • v.2 no.3
    • /
    • pp.177-181
    • /
    • 2004
  • In this paper, we propose an iris recognition system with an excellent recognition rate and high confidence as an alternative biometric technique that overcomes the limits of existing methods of individual identification. For its implementation, we extracted coefficient feature values with the wavelet transform widely used in signal processing, and we used a neural network to measure the recognition rate. However, the Scaled Conjugate Gradient method, a nonlinear optimization method commonly used for neural networks, is not well suited to this optimization problem because of its slow convergence. We therefore enhance the recognition rate by using Levenberg-Marquardt backpropagation, which supplements the existing Scaled Conjugate Gradient method, to implement the iris recognition system. We improved convergence speed, efficiency, and stability by properly adapting the step size according to both the convergence rate of the solution and the variation rate of the variable vector.
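
The Levenberg-Marquardt step the abstract refers to blends Gauss-Newton with gradient descent through an adaptive damping term, which is also where the adaptive step-size behavior comes from. Below is a generic sketch on a small curve-fitting problem; the exponential model and schedule constants are assumptions, not the paper's wavelet-feature network.

```python
import numpy as np

def residuals(w, t, y):
    """Model: y ≈ w0 * exp(-w1 * t)."""
    return w[0] * np.exp(-w[1] * t) - y

def jacobian(w, t):
    e = np.exp(-w[1] * t)
    return np.stack([e, -w[0] * t * e], axis=1)

t = np.linspace(0, 2, 50)
y = 3.0 * np.exp(-1.5 * t) + 0.01 * np.random.randn(50)
w, mu = np.array([1.0, 1.0]), 1e-2
for _ in range(100):
    r = residuals(w, t, y)
    J = jacobian(w, t)
    # Levenberg-Marquardt step: (J^T J + mu I) s = -J^T r
    step = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ r)
    if np.sum(residuals(w + step, t, y)**2) < np.sum(r**2):
        w, mu = w + step, mu * 0.5     # success: accept step, trust more
    else:
        mu *= 2.0                      # failure: increase damping
print(w)
```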

BACKPROPAGATION BASED ON THE CONJUGATE GRADIENT METHOD WITH THE LINEAR SEARCH BY ORDER STATISTICS AND GOLDEN SECTION

  • Choe, Sang-Woong;Lee, Jin-Choon
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.06a
    • /
    • pp.107-112
    • /
    • 1998
  • In this paper, we propose a new paradigm (NEW_BP) capable of overcoming the limitations of traditional backpropagation (OLD_BP). NEW_BP is based on the method of conjugate gradients with normalized direction vectors and computes the step size through a line search characterized by order statistics and the golden section. Simulation results showed that NEW_BP was definitely superior to both the stochastic OLD_BP and the deterministic OLD_BP in terms of accuracy and rate of convergence, and that it may surmount the problem of local minima. Furthermore, they confirmed that the stagnation of training in OLD_BP results from limitations inherent in the algorithm itself, and that superficial modifications cannot cure this phenomenon.
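
Golden-section search assumes the step-size function is unimodal on a bracket. The sketch below implements the search plus a crude sample-based bracketing step standing in for the order-statistics part, which we have simplified; the step grid in the usage comment is likewise an assumption.

```python
import numpy as np

def golden_section(phi, a, b, tol=1e-5):
    """Minimize a 1-D function phi on [a, b] by golden-section search."""
    invphi = (np.sqrt(5) - 1) / 2
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

def bracket_by_samples(phi, steps):
    """Bracket the minimizer by the neighbors of the best sampled step."""
    vals = [phi(s) for s in steps]
    i = int(np.argmin(vals))
    return steps[max(i - 1, 0)], steps[min(i + 1, len(steps) - 1)]

# usage along a unit-norm conjugate direction d from weights w:
#   phi = lambda s: loss(w + s * d)
#   lo, hi = bracket_by_samples(phi, np.geomspace(1e-4, 1.0, 8))
#   s_star = golden_section(phi, lo, hi)
```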

An Efficient Training of Multilayer Neural Networks Using Stochastic Approximation and Conjugate Gradient Method (확률적 근사법과 공액기울기법을 이용한 다층신경망의 효율적인 학습)

  • 조용현
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.8 no.5
    • /
    • pp.98-106
    • /
    • 1998
  • This paper proposes an efficient learning algorithm for improving the training performance of neural networks. The proposed method improves training performance by applying to the backpropagation algorithm a global optimization method that is a hybrid of stochastic approximation and the conjugate gradient method. An approximate initial point for global optimization is estimated first by stochastic approximation, and then the conjugate gradient method, a fast gradient method, is applied for high-speed optimization. The proposed method has been applied to parity checking and pattern classification, and the simulation results show that its performance is superior to those of the conventional backpropagation algorithm and of a backpropagation algorithm that hybridizes stochastic approximation with the steepest descent method.
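
The two-phase structure reads as: a stochastic-approximation phase with decaying gain to locate a promising basin, then conjugate gradients for fast local refinement. The acceptance rule, gain schedule, and Rosenbrock test function below are our own simplifications, not the paper's training setup.

```python
import numpy as np
from scipy.optimize import minimize, rosen

rng = np.random.default_rng(0)
w = rng.uniform(-2, 2, size=2)
for k in range(1, 201):                   # phase 1: stochastic approximation
    a_k = 0.5 / k                         # decaying gain (Robbins-Monro style)
    cand = w + a_k * rng.standard_normal(2)
    if rosen(cand) < rosen(w):            # keep perturbations that improve
        w = cand
res = minimize(rosen, w, method='CG')     # phase 2: fast CG refinement
print(res.x, res.fun)
```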

Conjugate Gradient Least-Squares Algorithm for Three-Dimensional Magnetotelluric Inversion (3차원 MT 역산에서 CG 법의 효율적 적용)

  • Kim, Hee-Joon;Han, Nu-Ree;Choi, Ji-Hyang;Nam, Myung-Jin;Song, Yoon-Ho;Suh, Jung-Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.10 no.2
    • /
    • pp.147-153
    • /
    • 2007
  • The conjugate gradient (CG) method is one of the most efficient algorithms for solving a linear system of equations. In addition to being used as a linear equation solver, it can be applied to a least-squares problem. When the CG method is applied to large-scale three-dimensional inversion of magnetotelluric data, two approaches have been pursued: one is linear CG inversion, in which each step of the Gauss-Newton iteration is solved incompletely using a truncated CG technique; the other is nonlinear CG inversion, in which CG is applied directly to the minimization of the objective functional of the nonlinear inverse problem. In each procedure we only need to compute the effect of the sensitivity matrix, or its transpose, multiplying an arbitrary vector, significantly reducing the computational requirements of large-scale inversion.
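
The matrix-free property the authors exploit, needing only products with the sensitivity matrix J and its transpose, is exactly what the CGLS algorithm requires. A minimal sketch with a stand-in dense operator follows; in the magnetotelluric inversion, J would be applied implicitly rather than formed.

```python
import numpy as np

def cgls(apply_J, apply_Jt, b, n, iters=100, tol=1e-10):
    """Solve min ||J x - b||_2 using only J·v and J^T·v products."""
    x = np.zeros(n)
    r = b.copy()              # residual b - J x
    s = apply_Jt(r)           # negative gradient of 0.5 * ||J x - b||^2
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = apply_J(p)
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = apply_Jt(r)
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(0)
J = rng.standard_normal((120, 40))        # stand-in for the sensitivity matrix
x_true = rng.standard_normal(40)
b = J @ x_true + 0.01 * rng.standard_normal(120)
x = cgls(lambda v: J @ v, lambda v: J.T @ v, b, n=40)
print(np.linalg.norm(x - x_true))
```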