• Title/Abstract/Keyword: conjugate gradient algorithm

Search results: 100 (processing time: 0.019 seconds)

Iris Recognition using Multi-Resolution Frequency Analysis and Levenberg-Marquardt Back-Propagation

  • Jeong Yu-Jeong;Choi Gwang-Mi
    • Journal of information and communication convergence engineering, v.2 no.3, pp.177-181, 2004
  • In this paper, we propose an iris recognition system with a high recognition rate and confidence as an alternative biometric technique that overcomes the limitations of existing methods of individual identification. For its implementation, we extract coefficient feature values with the wavelet transform widely used in signal processing, and we use a neural network to evaluate the recognition rate. However, the Scaled Conjugate Gradient method, a nonlinear optimization method commonly used for neural network training, is not well suited to this optimization problem because of its slow convergence. We therefore enhance the recognition rate by using Levenberg-Marquardt back-propagation, which supplements the Scaled Conjugate Gradient method, in the implementation of the iris recognition system. We improve convergence speed, efficiency, and stability by properly adapting the step size according to both the convergence rate of the solution and the variation rate of the variable vector in the implemented algorithm.
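The adaptive damping that drives Levenberg-Marquardt training can be illustrated with a minimal sketch of a generic damped least-squares solver. This is not the authors' iris-recognition code: `residual` and `jacobian` are hypothetical callbacks supplied by the caller, and the halving/doubling schedule for the damping parameter `mu` is just one common choice.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, w0, mu=1e-2, max_iter=100, tol=1e-8):
    """Minimize 0.5*||r(w)||^2 with an adaptively damped Gauss-Newton step."""
    w = np.asarray(w0, dtype=float)
    r = residual(w)
    cost = 0.5 * r @ r
    for _ in range(max_iter):
        J = jacobian(w)                     # m x n Jacobian of the residuals
        g = J.T @ r                         # gradient of the cost
        if np.linalg.norm(g) < tol:
            break
        H = J.T @ J                         # Gauss-Newton Hessian approximation
        step = np.linalg.solve(H + mu * np.eye(H.shape[0]), -g)
        w_new = w + step
        r_new = residual(w_new)
        cost_new = 0.5 * r_new @ r_new
        if cost_new < cost:                 # successful step: accept and relax damping
            w, r, cost = w_new, r_new, cost_new
            mu *= 0.5                       # behave more like Gauss-Newton
        else:                               # failed step: increase damping
            mu *= 2.0                       # behave more like gradient descent
    return w

# Example: fit y = a * exp(b * x) to synthetic data.
x = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * x)
res = lambda w: w[0] * np.exp(w[1] * x) - y
jac = lambda w: np.column_stack([np.exp(w[1] * x), w[0] * x * np.exp(w[1] * x)])
print(levenberg_marquardt(res, jac, [1.0, 1.0]))    # approaches [2.0, 1.5]
```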

MINIMIZATION OF EXTENDED QUADRATIC FUNCTIONS WITH INEXACT LINE SEARCHES

  • Moghrabi, Issam A.R.
    • Journal of the Korean Society for Industrial and Applied Mathematics, v.9 no.1, pp.55-61, 2005
  • A Conjugate Gradient algorithm for unconstrained minimization is proposed which is invariant to a nonlinear scaling of a strictly convex quadratic function and which generates mutually conjugate directions for extended quadratic functions. It is derived for inexact line searches and for general functions. It compares favourably in numerical tests (over eight test functions and dimensionality up to 1000) with the Dixon (1975) algorithm on which this new algorithm is based.
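The paper's extended-quadratic direction update is not reproduced here, but the general framework it builds on, nonlinear conjugate gradients driven by an inexact line search, can be sketched as follows. This is a minimal illustration with an Armijo backtracking search; the Polak-Ribiere+ beta is only a placeholder for the paper's own formula.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, max_iter=200, tol=1e-8):
    """Nonlinear CG with an inexact (Armijo backtracking) line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                    # safeguard: restart with steepest descent
            d = -g
        # Inexact line search: backtrack until the Armijo sufficient-decrease condition holds.
        alpha, fx, slope = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere+ (placeholder choice)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: a strictly convex quadratic, the setting in which CG directions are mutually conjugate.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(nonlinear_cg(f, grad, np.zeros(2)))   # approaches the solution of A x = b
```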

Comparison of Regularization Techniques for an Inverse Radiation Boundary Analysis (역복사경계해석을 위한 다양한 조정법 비교)

  • Kim, Ki-Wan;Shin, Byeong-Seon;Kil, Jeong-Ki;Yeo, Gwon-Koo;Baek, Seung-Wook
    • Transactions of the Korean Society of Mechanical Engineers B, v.29 no.8 s.239, pp.903-910, 2005
  • Inverse radiation problems are solved for estimating the boundary conditions such as temperature distribution and wall emissivity in axisymmetric absorbing, emitting and scattering medium, given the measured incident radiative heat fluxes. Various regularization methods, such as hybrid genetic algorithm, conjugate-gradient method and finite-difference Newton method, were adopted to solve the inverse problem, while discussing their features in terms of estimation accuracy and computational efficiency. Additionally, we propose a new combined approach that adopts the hybrid genetic algorithm as an initial value selector and uses the finite-difference Newton method as an optimization procedure.
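A minimal sketch of the combined strategy described above: a crude GA-style global search used only to select an initial point, followed by finite-difference Newton refinement. Everything here is illustrative, the objective `f` stands in for the inverse-radiation misfit functional, and the selection/mutation scheme is a deliberately simple placeholder for the authors' hybrid genetic algorithm.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Central finite-difference gradient."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def fd_hessian(f, x, h=1e-4):
    """Finite-difference Hessian built from gradient differences."""
    n = x.size
    H = np.zeros((n, n))
    g0 = fd_gradient(f, x)
    for i in range(n):
        e = np.zeros_like(x); e[i] = h
        H[:, i] = (fd_gradient(f, x + e) - g0) / h
    return 0.5 * (H + H.T)

def ga_then_newton(f, bounds, pop=30, generations=50, newton_iters=20, rng=None):
    """GA-style global search for an initial point, refined by finite-difference Newton."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, lo.size))
    for _ in range(generations):
        scores = np.array([f(x) for x in X])
        parents = X[np.argsort(scores)[: pop // 2]]          # selection: keep the best half
        children = parents + rng.normal(0, 0.1 * (hi - lo), parents.shape)  # mutation
        X = np.vstack([parents, np.clip(children, lo, hi)])
    x = X[np.argmin([f(x) for x in X])]                      # initial value selector
    for _ in range(newton_iters):                            # local Newton refinement
        g = fd_gradient(f, x)
        H = fd_hessian(f, x)
        x = x - np.linalg.solve(H + 1e-8 * np.eye(x.size), g)
    return x

# Example: recover the minimizer of a simple bowl-shaped stand-in objective.
lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
print(ga_then_newton(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2, (lo, hi), rng=0))  # close to (1, -2)
```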

ON BI-POINTWISE CONTROL OF A WAVE EQUATION AND ALGORITHM

  • Kim, Hong-Chul;Lee, Young-Il
    • Journal of applied mathematics & informatics, v.7 no.3, pp.739-763, 2000
  • We are concerned with the mathematical analysis related to bi-pointwise control for a mixed type of wave equation. In particular, we are interested in the systematic build-up of bi-pointwise control actuators: one at the boundary and the other at an interior point, acting simultaneously. The main purpose is to examine the Hilbert Uniqueness Method (HUM) in the setting of bi-pointwise control actuators and to establish a relevant algorithm based on our analysis. After discussing the weak solution of the state equation, we investigate the bi-pointwise control mechanism and the relevant mathematical analysis based on HUM. We then set up an algorithm based on the conjugate gradient method to construct bi-pointwise control actuators that halt the system.
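The conjugate gradient step in HUM-type constructions needs the controllability operator only through its action on a vector, where each application involves a forward wave solve followed by an adjoint (backward) solve. The sketch below shows only that usage pattern, with a simple symmetric positive definite stand-in operator in place of the actual wave/adjoint solver; it is not the paper's algorithm.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 200                                    # size of the discretized initial data

def apply_Lambda(v):
    # Stand-in SPD operator (a 1D discrete Laplacian). In a HUM algorithm this step
    # would instead solve the controlled wave equation driven by v and then the
    # adjoint problem, returning Lambda(v).
    out = 2.0 * v
    out[1:] -= v[:-1]
    out[:-1] -= v[1:]
    return out

Lambda = LinearOperator((n, n), matvec=apply_Lambda)
b = np.ones(n)                             # stand-in for the observation/target data
v, info = cg(Lambda, b)                    # CG needs only matrix-vector products
print("converged:", info == 0)
```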

A LOGARITHMIC CONJUGATE GRADIENT METHOD INVARIANT TO NONLINEAR SCALING

  • Moghrabi, I.A.
    • Journal of the Korean Society for Industrial and Applied Mathematics, v.8 no.2, pp.15-21, 2004
  • A Conjugate Gradient (CG) method is proposed for unconstrained optimization which is invariant to a nonlinear scaling of a strictly convex quadratic function. The technique has the same properties as the classical CG method when applied to a quadratic function. The algorithm derived here is based on a logarithmic model and is compared to the standard CG method of Fletcher and Reeves [3]. Numerical results are encouraging and indicate that nonlinear scaling is promising and deserves further investigation.
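For reference, the Fletcher-Reeves baseline that the proposed logarithmic variant is compared against uses the standard iteration (the paper's logarithm-based beta is not reproduced here):

$$ d_0 = -g_0, \qquad x_{k+1} = x_k + \alpha_k d_k, \qquad \beta_k^{\mathrm{FR}} = \frac{\lVert g_{k+1}\rVert^2}{\lVert g_k\rVert^2}, \qquad d_{k+1} = -g_{k+1} + \beta_k^{\mathrm{FR}} d_k, $$

where $g_k = \nabla f(x_k)$ and the step length $\alpha_k$ is obtained by a line search.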

An Efficient Training of Multilayer Neural Networks Using Stochastic Approximation and Conjugate Gradient Method (확률적 근사법과 공액기울기법을 이용한 다층신경망의 효율적인 학습)

  • 조용현
    • Journal of the Korean Institute of Intelligent Systems, v.8 no.5, pp.98-106, 1998
  • This paper proposes an efficient learning algorithm for improving the training performance of neural networks. The proposed method improves training performance by applying a backpropagation algorithm based on a global optimization method that hybridizes stochastic approximation and the conjugate gradient method. An approximate initial point for global optimization is first estimated by stochastic approximation, and then the conjugate gradient method, a fast gradient-based method, is applied for high-speed optimization. The proposed method has been applied to parity checking and pattern classification, and the simulation results show that its performance is superior to that of conventional backpropagation and of a backpropagation algorithm that hybridizes stochastic approximation with steepest descent.
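A compact sketch of the two-phase idea on a toy parity (XOR) task: a stochastic-approximation phase with decaying gains locates an approximate initial point, and conjugate gradients then refine it. The SPSA-style gradient estimate, the gain sequences, and the tiny 2-2-1 network are illustrative stand-ins, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Tiny 2-2-1 network on the XOR/parity problem (stand-in for the paper's tasks).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def loss(w):
    W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ W2 + b2 - y) ** 2)

rng = np.random.default_rng(0)
w = rng.normal(0, 0.5, size=9)

# Phase 1: stochastic approximation -- noisy two-sided estimates with decaying gains,
# used only to locate a promising region of the weight space.
for k in range(1, 501):
    a_k, c_k = 0.5 / k**0.602, 0.1 / k**0.101          # SPSA-style gain sequences
    delta = rng.choice([-1.0, 1.0], size=w.size)       # simultaneous random perturbation
    ghat = (loss(w + c_k * delta) - loss(w - c_k * delta)) / (2 * c_k) * delta
    w = w - a_k * ghat

# Phase 2: conjugate gradient refinement from the estimated initial point.
result = minimize(loss, w, method="CG")
print("final loss:", result.fun)
```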

Architectural Analysis of Type-2 Interval pRBF Neural Networks Using Space Search Evolutionary Algorithm (공간탐색 진화알고리즘을 이용한 Interval Type-2 pRBF 뉴럴 네트워크의 구조적 해석)

  • Oh, Sung-Kwun;Kim, Wook-Dong;Park, Ho-Sung;Lee, Young-Il
    • Journal of the Korean Institute of Intelligent Systems, v.21 no.1, pp.12-18, 2011
  • In this paper, we propose Interval Type-2 polynomial Radial Basis Function Neural Networks. Interval type-2 fuzzy sets are used in the receptive fields of the hidden layer. An interval type-2 fuzzy set has a Footprint Of Uncertainty (FOU), which provides a certain level of robustness in the presence of unknown information compared with a type-1 fuzzy set. In order to improve the performance of the proposed model, we use linear polynomial functions as the connection weights of the network. Parameters such as the center values of the receptive fields, the spreads (deviations), and the connection weights between the hidden and output layers are optimized by the Conjugate Gradient Method (CGM) and the Space Search Evolutionary Algorithm (SSEA). The proposed model is applied to the gas furnace dataset, and its results are compared with those reported in previous studies.
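A type-1 sketch of the network skeleton described above: Gaussian receptive fields with first-order (linear polynomial) consequents, all parameters fitted by a conjugate gradient routine. The interval type-2 footprint of uncertainty and the SSEA structural search are not reproduced; `n_rules`, the activation normalization, and the parameter packing are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def predict(params, X, n_rules):
    """Polynomial RBF network: normalized Gaussian receptive fields with linear consequents."""
    d = X.shape[1]
    centers = params[: n_rules * d].reshape(n_rules, d)
    sigmas  = np.abs(params[n_rules * d : n_rules * (d + 1)]) + 1e-6
    coefs   = params[n_rules * (d + 1):].reshape(n_rules, d + 1)     # linear consequents
    dist2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-dist2 / (2 * sigmas**2))                            # receptive-field activations
    phi = phi / phi.sum(axis=1, keepdims=True)
    local = coefs[:, 0] + X @ coefs[:, 1:].T                          # rule-wise linear outputs
    return (phi * local).sum(axis=1)

def fit(X, y, n_rules=4, seed=0):
    """Fit centers, spreads, and consequent weights with a CG optimizer (the CGM step)."""
    d = X.shape[1]
    p0 = np.random.default_rng(seed).normal(0, 0.5, n_rules * (2 * d + 2))
    obj = lambda p: np.mean((predict(p, X, n_rules) - y) ** 2)
    return minimize(obj, p0, method="CG").x
```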

The Estimation of an Origin-Destination Matrix from Traffic Counts using Conjugate Gradient Method in Nationwide Networks (관측교통량 기반 기종점 OD행렬 추정모형의 대규모 가로망에 적용(CG모형 적용을 중심으로))

  • Lee, Heon-Ju;Lee, Seung-Jae
    • Journal of Korean Society of Transportation, v.23 no.3 s.81, pp.61-71, 2005
  • We evaluated the applicability of origin-destination (O-D) matrix estimation from traffic counts using the conjugate gradient method to large-scale networks by applying it to a nationwide network with 246 zones. The analysis of the model's consistency on the nationwide network showed that the upper and lower levels of the model are systematically related. From the analysis of the model's estimation power with respect to the number of traffic-counting links, the error in traffic volume remained within the permissible range. In addition, the estimation of the O-D matrix was more satisfactory than with existing methods. We conclude that the conjugate gradient method can be applied to nationwide networks, provided the reliability of the developed algorithm is confirmed through further experiments.
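A toy sketch of the underlying computation: if an assignment matrix P maps O-D flows q onto the counted links, matching the observed counts can be posed as a regularized least-squares system and solved with conjugate gradients. The matrix sizes, the fixed assignment proportions, and the prior O-D vector `q0` are all illustrative; the paper's bi-level formulation for the 246-zone national network is not reproduced.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)
n_od, n_links = 50, 20
P = rng.random((n_links, n_od)) * (rng.random((n_links, n_od)) < 0.2)  # sparse assignment map
q_true = rng.uniform(10, 100, n_od)
v = P @ q_true                                  # synthetic "traffic counts"
q0 = np.full(n_od, 50.0)                        # prior (target) O-D matrix
lam = 0.1                                       # weight on staying close to the prior

# Normal equations of min_q ||P q - v||^2 + lam ||q - q0||^2, solved with CG.
A = LinearOperator((n_od, n_od), matvec=lambda q: P.T @ (P @ q) + lam * q)
b = P.T @ v + lam * q0
q_hat, info = cg(A, b)
print("CG converged:", info == 0)
```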

BACKPROPAGATION BASED ON THE CONJUGATE GRADIENT METHOD WITH THE LINEAR SEARCH BY ORDER STATISTICS AND GOLDEN SECTION

  • Choe, Sang-Woong;Lee, Jin-Choon
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 1998.06a, pp.107-112, 1998
  • In this paper, we propose a new paradigm (NEW_BP) capable of overcoming the limitations of traditional backpropagation (OLD_BP). NEW_BP is based on the method of conjugate gradients with normalized direction vectors and computes the step size through a line search characterized by order statistics and the golden section. Simulation results showed that NEW_BP was clearly superior to both the stochastic OLD_BP and the deterministic OLD_BP in terms of accuracy and rate of convergence, and that it may surmount the problem of local minima. Furthermore, the results confirmed that the stagnation of training in OLD_BP results from limitations of the algorithm itself and that superficial modifications cannot cure this phenomenon.
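The golden-section part of such a step-size search can be sketched as a simple one-dimensional minimizer; the order-statistics component of the paper is not reproduced. In CG-based backpropagation, the bracketed function would be the network error along the normalized search direction.

```python
import numpy as np

GOLDEN = (np.sqrt(5) - 1) / 2    # ~0.618

def golden_section_search(phi, a=0.0, b=1.0, tol=1e-6):
    """Minimize a one-dimensional function phi on [a, b] by golden-section search.

    For CG backpropagation, phi(alpha) would be E(w + alpha * d) along the
    normalized direction d.
    """
    c = b - GOLDEN * (b - a)
    d = a + GOLDEN * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):       # minimum lies in [a, d]
            b, d = d, c
            c = b - GOLDEN * (b - a)
        else:                     # minimum lies in [c, b]
            a, c = c, d
            d = a + GOLDEN * (b - a)
    return (a + b) / 2

# Example: step size for a quadratic error profile along a direction.
alpha = golden_section_search(lambda t: (t - 0.3) ** 2)
print(round(alpha, 4))            # ~0.3
```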

A Parallel Algorithm for Large DOF Structural Analysis Problems (대규모 자유도 문제의 구조해석을 위한 병렬 알고리즘)

  • Kim, Min-Seok;Lee, Jee-Ho
    • Journal of the Computational Structural Engineering Institute of Korea, v.23 no.5, pp.475-482, 2010
  • In this paper, an efficient two-level parallel domain decomposition algorithm is suggested to solve large-DOF structural problems. Each subdomain is composed of a coarse problem and a local problem. In the coarse problem, displacements at coarse nodes are computed by an iterative method that does not need to assemble a stiffness matrix for the whole coarse problem. Displacements at local nodes are then computed by a multifrontal sparse solver. A parallel version of the PCG (Preconditioned Conjugate Gradient) method is developed to solve the coarse problem iteratively; it minimizes the amount of data communication between processors so that the feasible problem size (DOF) can grow while computational efficiency is maintained. The test results show that the suggested algorithm provides scalable computing performance and an efficient approach to solving large-DOF structural problems.
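The serial kernel at the heart of such a coarse-problem solver is a matrix-free preconditioned conjugate gradient routine, sketched below. The two-level domain decomposition, the inter-processor communication, and the multifrontal local solver are not reproduced, and the Jacobi (diagonal) preconditioner is only a simple stand-in.

```python
import numpy as np

def pcg(matvec, b, precond, x0=None, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradient for SPD systems given only as callbacks."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - matvec(x)
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example on a small SPD stiffness-like matrix with a Jacobi preconditioner.
n = 200
A = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
b = np.ones(n)
x = pcg(lambda v: A @ v, b, lambda r: r / np.diag(A))
print(np.linalg.norm(A @ x - b))   # residual norm of the solved system
```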