• Title/Summary/Keyword: Gradient descent

Nonlinear optimization algorithm using monotonically increasing quantization resolution

  • Jinwuk Seok;Jeong-Si Kim
    • ETRI Journal / v.45 no.1 / pp.119-130 / 2023
  • We propose a quantized gradient search algorithm that can achieve global optimization by monotonically reducing the quantization step over time, when the quantization consists of integer or fixed-point fractional values applied to an optimization algorithm. According to the white noise hypothesis, if the quantization step is sufficiently small and the quantization is well defined, the round-off error caused by quantization can be regarded as an independent and identically distributed random variable. We therefore rewrite the gradient-descent search equation as a stochastic differential equation and, by stochastic analysis of the objective function, derive the monotonically decreasing rate of the quantization step that enables global optimization. Consequently, when the search equation is quantized with a monotonically decreasing quantization step that suitably reduces the round-off error, we obtain a global search algorithm that evolves from the underlying optimization algorithm. Numerical simulations indicate that, owing to this quantization-based global optimization property, the proposed algorithm outperforms the conventional algorithm at each iteration over the search space, with a higher success rate and fewer iterations (a hedged sketch of the idea follows).
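
The core idea above, a quantization grid that coarsens the search early and refines it over time, can be sketched in a few lines. The decay schedule, learning rate, and toy objective below are illustrative assumptions; the paper derives the specific monotone rate of the quantization step from a stochastic differential equation analysis, which this sketch does not reproduce.

```python
import numpy as np

def quantize(x, step):
    """Round each coordinate to the nearest multiple of the quantization step."""
    return step * np.round(x / step)

def quantized_gradient_search(grad, x0, lr=0.1, q0=0.5, decay=0.995, iters=2000):
    """Gradient descent whose iterate is quantized with a monotonically
    shrinking step q_t: early round-off acts like exploration noise, and
    as q_t -> 0 the iteration approaches plain gradient descent."""
    x, q = np.asarray(x0, dtype=float), q0
    for _ in range(iters):
        x = quantize(x - lr * grad(x), q)
        q *= decay            # monotonically decreasing quantization step
    return x

# Toy nonconvex example: f(x) = x^2 + 3 sin(3x), so grad f(x) = 2x + 9 cos(3x).
x_star = quantized_gradient_search(lambda x: 2 * x + 9 * np.cos(3 * x), np.array([3.0]))
```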

GLOBAL CONVERGENCE OF A NEW SPECTRAL PRP CONJUGATE GRADIENT METHOD

  • Liu, Jinkui
    • Journal of applied mathematics & informatics / v.29 no.5_6 / pp.1303-1309 / 2011
  • Based on the PRP method, a new spectral PRP conjugate gradient method is proposed for general unconstrained optimization problems; it produces a sufficient descent direction at every iteration without any line search. Under the Wolfe line search, we prove the global convergence of the new method for general nonconvex functions. Numerical results show that the new method is efficient on the given test problems (see the sketch below).
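
As a rough illustration of a spectral PRP scheme, the sketch below combines the PRP+ parameter with a Barzilai-Borwein-style spectral scaling and SciPy's Wolfe line search. The exact spectral parameter and safeguards in the paper may differ; `theta`, the fallback step, and the tolerances are assumptions.

```python
import numpy as np
from scipy.optimize import line_search

def spectral_prp_cg(f, grad, x0, tol=1e-6, max_iter=200):
    """Nonlinear CG with a PRP+ beta, a spectral scaling of the gradient
    term (assumed Barzilai-Borwein form), and a Wolfe line search."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:                 # line search failed: steepest-descent fallback
            alpha, d = 1e-3, -g
        x_new = x + alpha * d
        g_new = grad(x_new)
        y, s = g_new - g, x_new - x
        beta = max(0.0, g_new @ y / (g @ g))              # PRP+ parameter
        theta = (s @ s) / (s @ y) if s @ y > 0 else 1.0   # spectral scaling (assumed)
        d = -theta * g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage on a standard test function:
from scipy.optimize import rosen, rosen_der
x_star = spectral_prp_cg(rosen, rosen_der, np.array([-1.2, 1.0]))
```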

Target Prioritization for Multi-Function Radar Using Artificial Neural Network Based on Steepest Descent Method (최급 강하법 기반 인공 신경망을 이용한 다기능 레이다 표적 우선순위 할당에 대한 연구)

  • Jeong, Nam-Hoon;Lee, Seong-Hyeon;Kang, Min-Seok;Gu, Chang-Woo;Kim, Cheol-Ho;Kim, Kyung-Tae
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.29 no.1 / pp.68-76 / 2018
  • Target prioritization is necessary for a multifunction radar (MFR) to track important targets and to manage the radar platform's resources efficiently. In this paper, we consider an artificial neural network (ANN) model that calculates target priority, and we propose a steepest-descent-based learning algorithm that extends the conventional gradient descent method and is better suited to target prioritization. Simulation results show that the proposed scheme is clearly superior to the traditional neural network model in terms of training-data accuracy and the relevance of the output priorities in the test scenarios (a toy training sketch follows).
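
A minimal sketch of the kind of network involved: a one-hidden-layer model mapping target features to a priority score in (0, 1), trained by full-batch steepest descent on squared error. The feature layout, architecture sizes, and learning rate are illustrative assumptions; the paper's specific steepest-descent variant is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_priority_net(X, y, hidden=8, lr=0.05, epochs=2000):
    """Train a tanh-hidden, sigmoid-output network by steepest descent:
    every step moves each weight along its negative loss gradient."""
    n, d = X.shape
    W1, b1 = rng.normal(0, 0.5, (d, hidden)), np.zeros(hidden)
    W2, b2 = rng.normal(0, 0.5, hidden), 0.0
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # hidden activations
        p = sig(H @ W2 + b2)              # predicted priority in (0, 1)
        gp = (p - y) * p * (1 - p) / n    # d(mean squared error)/d(output pre-activation)
        gH = np.outer(gp, W2) * (1 - H**2)
        W2 -= lr * (H.T @ gp); b2 -= lr * gp.sum()
        W1 -= lr * (X.T @ gH); b1 -= lr * gH.sum(axis=0)
    return W1, b1, W2, b2

# Usage on synthetic features (e.g., stand-ins for range, speed, RCS, heading):
X = rng.normal(size=(40, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
params = train_priority_net(X, y)
```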

Novel steepest descent adaptive filters derived from new performance function (새로운 성능지수 함수에 대한 직강하 적응필터)

  • 전병을;박동조
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1992.10a / pp.823-828 / 1992
  • A novel steepest descent adaptive filter algorithm, which uses the instantaneous stochastic gradient as the steepest descent direction, is derived from a newly devised performance index function. The new performance function improves on that of the LMS, taking into account that the stochastic steepest descent method minimizes the performance index iteratively. Mathematical analysis and computer simulations verify substantial improvements in convergence and misadjustment, while the computational simplicity and robustness of the LMS algorithm are hardly sacrificed. The new algorithm can also be interpreted as a variable step size adaptive filter, and in this respect a heuristic method is proposed to reduce the noise caused by step size fluctuation (a generic variable-step sketch follows).
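
The variable-step-size reading of the algorithm can be sketched as an LMS-style filter whose step grows with the instantaneous error energy and is smoothed to damp fluctuation noise. The paper's actual performance index is different; `rho`, `mu0`, and the normalization below are assumptions, not the proposed update.

```python
import numpy as np

def vs_lms(x, d, taps=8, mu0=0.05, rho=0.9, eps=1e-8):
    """Adaptive FIR filter driven by the instantaneous stochastic gradient,
    with an error-driven, exponentially smoothed step size (the smoothing
    is one heuristic for reducing step-size fluctuation noise)."""
    w = np.zeros(taps)
    mu = mu0
    y, e = np.zeros(len(d)), np.zeros(len(d))
    for n in range(taps, len(d)):
        u = x[n - taps:n][::-1]                      # most recent input vector
        y[n] = w @ u
        e[n] = d[n] - y[n]
        mu = rho * mu + (1 - rho) * mu0 * e[n]**2    # smoothed, error-driven step
        w += mu * e[n] * u / (u @ u + eps)           # normalized stochastic gradient step
    return w, e
```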

Selecting Fuzzy Rules for Pattern Classification Systems

  • Lee, Sang-Bum;Lee, Sung-joo;Lee, Mai-Rey
    • International Journal of Fuzzy Logic and Intelligent Systems / v.2 no.2 / pp.159-165 / 2002
  • This paper proposes a method based on genetic algorithms (GA) and the gradient descent method for choosing an appropriate set of fuzzy rules for classification problems. The aim is to find a minimal set of fuzzy rules that can correctly classify all training patterns. The number of inference rules and the shapes of the membership functions in the antecedent parts of the fuzzy rules are determined by the genetic algorithm, while the real numbers in the consequent parts are obtained by gradient descent. A fitness function maximizes the number of correctly classified patterns while minimizing the number of fuzzy rules; a solution obtained by the genetic algorithm is a set of fuzzy rules whose fitness is determined by these two objectives in a combinatorial optimization problem. Computer simulation results demonstrate the effectiveness of the proposed method (a sketch of the two ingredients follows).
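
A compact sketch of the two ingredients, under the assumption of a zero-order Takagi-Sugeno-style classifier (the paper's exact rule form may differ): a GA fitness that trades accuracy off against rule count, and one gradient-descent step on the real-valued consequents. The weights and the model form are illustrative.

```python
import numpy as np

def fitness(n_correct, n_rules, w_correct=10.0, w_rules=1.0):
    """Combined GA fitness: reward correctly classified patterns and
    penalize rule-set size. The weights are illustrative assumptions."""
    return w_correct * n_correct - w_rules * n_rules

def update_consequents(b, mu, y, lr=0.1):
    """One gradient-descent step on the real-valued consequents b.
    mu: (n_patterns, n_rules) rule firing strengths (assumed all-positive
    rows); y: target values."""
    w = mu / mu.sum(axis=1, keepdims=True)    # normalized firing strengths
    pred = w @ b                              # weighted-consequent output
    grad = w.T @ (pred - y) / len(y)          # d(mean squared error)/d(b)
    return b - lr * grad
```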

A NONLINEAR CONJUGATE GRADIENT METHOD AND ITS GLOBAL CONVERGENCE ANALYSIS

  • CHU, AJIE;SU, YIXIAO;DU, SHOUQIANG
    • Journal of applied mathematics & informatics / v.34 no.1_2 / pp.157-165 / 2016
  • In this paper, we develop a new hybrid conjugate gradient method for solving unconstrained optimization problems. Under mild assumptions, we establish the sufficient descent property of the method. Global convergence is also proved under the Wolfe-type line search and the general Wolfe line search. Numerical results show that the method is efficient (one classic hybridization rule is sketched below).
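
For orientation, one classic way to hybridize conjugate gradient parameters is to clip the PRP beta into [0, beta_FR], keeping the Fletcher-Reeves convergence safeguard while retaining PRP's automatic restart behavior. The paper's exact hybridization rule is not given in the abstract, so this should be read as an example of the general construction rather than the proposed method.

```python
import numpy as np

def hybrid_beta(g_new, g_old, d_old):
    """Hybrid CG parameter: beta = max(0, min(beta_PRP, beta_FR)).
    The next search direction is d_new = -g_new + beta * d_old."""
    denom = g_old @ g_old
    beta_fr = (g_new @ g_new) / denom                # Fletcher-Reeves
    beta_prp = g_new @ (g_new - g_old) / denom       # Polak-Ribiere-Polyak
    return max(0.0, min(beta_prp, beta_fr))
```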

Adaptive stochastic gradient method under two mixing heterogenous models (두 이종 혼합 모형에서의 수정된 경사 하강법)

  • Moon, Sang Jun;Jeon, Jong-June
    • Journal of the Korean Data and Information Science Society / v.28 no.6 / pp.1245-1255 / 2017
  • Online learning is the process of obtaining the solution to a given objective function as data accumulate in real time or in batch units. The stochastic gradient descent method is one of the most widely used methods for online learning: it is easy to implement, and its solution has good properties under the assumption that the data-generating model is homogeneous. However, stochastic gradient descent can severely mislead online learning when this homogeneity is violated. We assume that the observations come from two heterogeneous generating models and propose a new stochastic gradient method that mitigates the resulting problem. We introduce a robust mini-batch optimization method using statistical tests and investigate the convergence radius of the solution under the proposed method. The theoretical results are confirmed by numerical simulations (a simplified screening sketch follows).
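
A simplified sketch of the screening pattern: each mini-batch is tested against running loss statistics, and batches flagged as likely coming from the contaminating model are skipped. The z-score test, threshold, and running-variance estimate below are assumptions; the paper's statistical test is more principled than this.

```python
import numpy as np

def robust_sgd(grad_fn, loss_fn, batches, w0, lr=0.05, z_crit=2.0):
    """SGD over an iterable of (X, y) mini-batches, rejecting batches whose
    loss is a z-score outlier against running statistics (a crude stand-in
    for a proper statistical test of batch homogeneity)."""
    w = np.asarray(w0, float)
    mean, var, n = 0.0, 1.0, 0
    for X, y in batches:
        L = loss_fn(w, X, y)
        n += 1
        mean += (L - mean) / n                     # running mean of batch losses
        var += ((L - mean)**2 - var) / n           # crude running variance
        if n > 5 and (L - mean) / np.sqrt(var + 1e-12) > z_crit:
            continue                               # reject suspected heterogeneous batch
        w -= lr * grad_fn(w, X, y)                 # ordinary SGD step otherwise
    return w
```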

An Efficient Fault-diagnosis of Digital Circuits Using Multilayer Neural Networks (다층신경망을 이용한 디지털회로의 효율적인 결함진단)

  • 조용현;박용수
    • Proceedings of the IEEK Conference / 1999.06a / pp.1033-1036 / 1999
  • This paper proposes an efficient fault diagnosis method for digital circuits using multilayer neural networks. An efficient learning algorithm is also proposed for the multilayer network, combining the steepest descent method for high-speed optimization with dynamic tunneling for global optimization. The fault-diagnosis system based on the proposed algorithm is applied to a parity generator circuit. Simulation results show that the proposed system achieves a higher convergence speed and rate than a system using the backpropagation algorithm based on plain gradient descent (a simplified sketch of the two-phase idea follows).
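
A simplified sketch of the two-phase idea: steepest descent runs to a local minimum, then a tunneling move tries to escape to a lower basin. Dynamic tunneling proper integrates an escape trajectory out of the current basin; the random-perturbation version below is a stand-in, and `sigma`, `inner`, and `restarts` are assumed values.

```python
import numpy as np

def descent_with_tunneling(f, grad, x0, lr=0.05, inner=200, restarts=10, sigma=0.5):
    """Alternate (i) steepest descent toward a local minimum with (ii) a
    tunneling move that perturbs the iterate and keeps the result only
    if it settles into a basin with a lower function value."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, float)
    for _ in range(restarts):
        for _ in range(inner):                      # descent phase
            x = x - lr * grad(x)
        x_try = x + rng.normal(0, sigma, x.shape)   # tunneling phase (simplified)
        for _ in range(inner):
            x_try = x_try - lr * grad(x_try)
        if f(x_try) < f(x):                         # accept only genuine escapes
            x = x_try
    return x
```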

Identification of Dynamic Systems Using a Self Recurrent Wavelet Neural Network: Convergence Analysis Via Adaptive Learning Rates (자기 회귀 웨이블릿 신경 회로망을 이용한 다이나믹 시스템의 동정: 적응 학습률 기반 수렴성 분석)

  • Yoo, Sung-Jin;Choi, Yoon-Ho;Park, Jin-Bae
    • Journal of Institute of Control, Robotics and Systems / v.11 no.9 / pp.781-788 / 2005
  • This paper proposes an identification method for dynamic systems using a self recurrent wavelet neural network (SRWNN). The architecture of the proposed SRWNN is a modified wavelet neural network (WNN); unlike the WNN, its mother wavelet layer is composed of self-feedback neurons, so the SRWNN can store past wavelet information. The SRWNN is therefore used to identify nonlinear dynamic systems. The gradient descent method with adaptive learning rates (ALRs) is applied to learn the parameters of the SRWNN identifier (SRWNNI). The ALRs are derived from the discrete Lyapunov stability theorem and guarantee the convergence of the SRWNNI. Finally, computer simulations demonstrate the effectiveness of the proposed SRWNNI (the shape of such a stability-bounded step is sketched below).
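
Lyapunov-derived adaptive learning rates typically take the form of a stability bound that shrinks the step when gradients are large. The sketch below shows only that mechanism with an assumed bound of the form eta < c/||g||^2; the paper's actual ALR expressions for the SRWNN parameters differ.

```python
import numpy as np

def adaptive_lr_step(w, g, eta_max=0.5, eps=1e-8):
    """One gradient step with the learning rate clipped to a stability
    bound (assumed form): large gradients force a proportionally smaller
    step, the mechanism a discrete Lyapunov argument enforces."""
    eta = min(eta_max, 1.0 / (g @ g + eps))   # shrink the step when gradients are large
    return w - eta * g
```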

Performance Comparison of Logistic Regression Algorithms on RHadoop

  • Jung, Byung Ho;Lim, Dong Hoon
    • Journal of the Korea Society of Computer and Information / v.22 no.4 / pp.9-16 / 2017
  • Machine learning has found widespread implementation and application in many domains. Logistic regression is a type of classification in machine learning and is used widely in many fields, including medicine, economics, marketing, and the social sciences. In this paper, we present MapReduce implementations of three existing algorithms for logistic regression, namely the Gradient Descent, Cost Minimization, and Newton-Raphson algorithms, on RHadoop, which integrates R with a Hadoop environment applicable to large-scale data. We compare the performance of these algorithms in estimating logistic regression coefficients on real and simulated data sets, and we also compare our RHadoop and RHIPE platforms. The experiments showed that the Newton-Raphson algorithm outperformed the Gradient Descent and Cost Minimization algorithms on all data tested, and that RHadoop outperformed RHIPE on real data, while the opposite held on simulated data (the two single-machine updates are sketched below).
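
The single-machine versions of two of the compared updates, in plain NumPy (the MapReduce decomposition on RHadoop is not shown): gradient descent takes many cheap steps, while Newton-Raphson takes few expensive ones, which is the trade-off behind the timing comparison above. The iteration counts and the small ridge term are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logreg_gd(X, y, lr=0.1, iters=5000):
    """Gradient descent on the average negative log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def logreg_newton(X, y, iters=10):
    """Newton-Raphson: far fewer iterations, but each step solves a
    d x d linear system built from the Hessian X' diag(p(1-p)) X."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)
        H = X.T @ (X * (p * (1 - p))[:, None]) + 1e-8 * np.eye(X.shape[1])
        w -= np.linalg.solve(H, X.T @ (p - y))
    return w

# Usage on synthetic data; the two estimates should roughly agree.
rng = np.random.default_rng(1)
X = np.hstack([np.ones((200, 1)), rng.normal(size=(200, 2))])
y = (sigmoid(X @ np.array([0.5, 1.0, -1.0])) > rng.uniform(size=200)).astype(float)
w_gd, w_nr = logreg_gd(X, y), logreg_newton(X, y)
```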