• Title/Summary/Keyword: gradient descent algorithm


Tuning Method of the Membership Function for FLC using a Gradient Descent Algorithm (Gradient Descent 알고리즘을 이용한 퍼지제어기의 멤버십함수 동조 방법)

  • Choi, Hansoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.12
    • /
    • pp.7277-7282
    • /
    • 2014
  • In this study, the gradient descent algorithm was used for FLC analysis, and it was used to represent the effects of the nonlinear parameters that alter the antecedent and consequent fuzzy variables of the FLC. The controller parameters are tuned iteratively by the gradient descent algorithm. The FLC consists of 7 membership functions and 49 rules, with a two-input, one-output structure. The system adopted the Min-Max inference method and triangular membership functions with 13 quantization levels.
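
The tuning idea described in this abstract can be illustrated with a small sketch. The following is a hypothetical, minimal example (not the paper's controller): a one-input, zero-order Sugeno-style fuzzy model whose triangular membership-function centers and rule consequents are adjusted by gradient descent, with numerical gradients for brevity; all names, settings, and the target curve are illustrative assumptions.

```python
# Minimal sketch: gradient-descent tuning of triangular membership-function
# centers and rule consequents for a one-input fuzzy model (illustrative only).
import numpy as np

def tri_mf(x, left, center, right):
    """Triangular membership degree of x."""
    return np.maximum(np.minimum((x - left) / (center - left + 1e-12),
                                 (right - x) / (right - center + 1e-12)), 0.0)

def fuzzy_out(x, centers, consequents, width=0.5):
    mu = np.array([tri_mf(x, c - width, c, c + width) for c in centers])
    w = mu / (mu.sum(axis=0) + 1e-12)              # normalized firing strengths
    return (w * consequents[:, None]).sum(axis=0)

def loss(params, x, y, n_rules):
    centers, consequents = params[:n_rules], params[n_rules:]
    return np.mean((fuzzy_out(x, centers, consequents) - y) ** 2)

# target: approximate sin(x) on [0, pi] with 7 rules
x = np.linspace(0, np.pi, 100)
y = np.sin(x)
n_rules = 7
params = np.concatenate([np.linspace(0, np.pi, n_rules), np.zeros(n_rules)])

lr, eps = 0.1, 1e-5
for _ in range(500):                               # gradient descent iterations
    grad = np.zeros_like(params)
    for i in range(len(params)):                   # numerical (central-difference) gradient
        d = np.zeros_like(params); d[i] = eps
        grad[i] = (loss(params + d, x, y, n_rules)
                   - loss(params - d, x, y, n_rules)) / (2 * eps)
    params -= lr * grad

print("final MSE:", loss(params, x, y, n_rules))
```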

Learning algorithms for big data logistic regression on RHIPE platform (RHIPE 플랫폼에서 빅데이터 로지스틱 회귀를 위한 학습 알고리즘)

  • Jung, Byung Ho;Lim, Dong Hoon
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.4
    • /
    • pp.911-923
    • /
    • 2016
  • Machine learning becomes increasingly important in the big data era. Logistic regression is a type of classification in machine learning and has been widely used in various fields, including medicine, economics, marketing, and the social sciences. RHIPE, which integrates R with the Hadoop environment, has not been discussed by many researchers owing to the difficulty of its installation and of MapReduce implementation. In this paper, we present MapReduce implementations of the gradient descent algorithm and the Newton-Raphson algorithm for logistic regression using RHIPE. The Newton-Raphson algorithm does not require a learning rate, while the gradient descent algorithm needs a learning rate to be picked manually. We choose the learning rate by a mixed procedure of grid search and binary search in order to process big data efficiently. In the performance study, our Newton-Raphson algorithm outperforms the gradient descent algorithm on all the tested data.
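
As a rough illustration of the two update rules compared in this abstract, the sketch below fits a logistic regression with plain NumPy. It is not the RHIPE/MapReduce implementation; the synthetic data, learning rate, and iteration counts are assumptions for demonstration only.

```python
# Minimal sketch: gradient descent (needs a learning rate) versus
# Newton-Raphson (no learning rate) for logistic regression.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.c_[np.ones(200), rng.normal(size=(200, 2))]      # design matrix with intercept
true_beta = np.array([0.5, -1.0, 2.0])
y = rng.binomial(1, sigmoid(X @ true_beta))

# Gradient descent on the negative log-likelihood: beta <- beta + lr * X^T (y - p)
beta_gd, lr = np.zeros(3), 0.01
for _ in range(5000):
    p = sigmoid(X @ beta_gd)
    beta_gd += lr * X.T @ (y - p)

# Newton-Raphson: beta <- beta + (X^T W X)^{-1} X^T (y - p), W = diag(p(1-p))
beta_nr = np.zeros(3)
for _ in range(10):
    p = sigmoid(X @ beta_nr)
    W = np.diag(p * (1 - p))
    beta_nr += np.linalg.solve(X.T @ W @ X, X.T @ (y - p))

print("gradient descent:", beta_gd)
print("Newton-Raphson:  ", beta_nr)
```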

Fuzzy Modeling based on FCM Clustering Algorithm (FCM 클러스터링 알고리즘에 기초한 퍼지 모델링)

  • 윤기찬;오성권
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.373-373
    • /
    • 2000
  • In this paper, we propose a fuzzy modeling algorithm which divides the input space more efficiently than conventional methods by taking into consideration the correlations between components of the sample data. The proposed fuzzy modeling algorithm consists of two steps: coarse tuning, which determines the consequent parameters approximately using the FCRM clustering method, and fine tuning, which adjusts the premise and consequent parameters more precisely by a gradient descent algorithm. To evaluate the performance of the proposed fuzzy model, we use numerical data from a nonlinear function.
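
For reference, the coarse-tuning step builds on fuzzy clustering. The sketch below shows the standard fuzzy C-means update equations only; the FCRM (regression-model) variant used in the paper and the gradient-descent fine tuning are omitted, and the synthetic data are an assumption.

```python
# Minimal sketch: standard fuzzy C-means, alternating membership and center updates.
import numpy as np

def fcm(X, n_clusters, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships sum to 1 per sample
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * (d ** -p).sum(axis=1, keepdims=True))
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, size=(50, 2)) for loc in ([0, 0], [3, 3])])
centers, U = fcm(X, n_clusters=2)
print(centers)
```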


Improving the Training Performance of Neural Networks by using Hybrid Algorithm (하이브리드 알고리즘을 이용한 신경망의 학습성능 개선)

  • Kim, Weon-Ook;Cho, Yong-Hyun;Kim, Young-Il;Kang, In-Ku
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.11
    • /
    • pp.2769-2779
    • /
    • 1997
  • This paper proposes an efficient method for improving the training performance of neural networks using a hybrid of the conjugate gradient backpropagation algorithm and the dynamic tunneling backpropagation algorithm. The conjugate gradient backpropagation algorithm, a fast gradient algorithm, is applied for high-speed optimization. The dynamic tunneling backpropagation algorithm, a deterministic method with a tunneling phenomenon, is applied for global optimization. After converging to a local minimum with the conjugate gradient backpropagation algorithm, a new initial point for escaping the local minimum is estimated by the dynamic tunneling backpropagation algorithm. The proposed method has been applied to parity check and pattern classification problems. The simulation results show that the proposed method outperforms both the gradient descent backpropagation algorithm and a hybrid of gradient descent and dynamic tunneling backpropagation, and that the new algorithm converges to the global minima more often than the gradient descent backpropagation algorithm.
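
The hybrid idea can be caricatured as "fast local descent, then an escape move". The sketch below is only a schematic stand-in: it uses SciPy's conjugate gradient minimizer for the local phase and approximates the tunneling phase by a random perturbation around the current minimum, which is not the paper's dynamic tunneling dynamics.

```python
# Schematic sketch: local conjugate-gradient descent plus a perturbation-based
# escape step, iterated to look for lower basins of a multimodal function.
import numpy as np
from scipy.optimize import minimize

def f(x):                                  # multimodal test function (Rastrigin)
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, size=2)
best_x, best_f = x, f(x)

for _ in range(20):
    res = minimize(f, x, method="CG")      # local phase: conjugate gradient
    if res.fun < best_f:
        best_x, best_f = res.x, res.fun
    # "tunneling-like" phase: jump away from the local minimum to seek a lower basin
    x = res.x + rng.normal(scale=1.0, size=res.x.shape)

print("best value found:", best_f, "at", best_x)
```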


Parameter Learning of Dynamic Bayesian Networks using Constrained Least Square Estimation and Steepest Descent Algorithm (제약조건을 갖는 최소자승 추정기법과 최급강하 알고리즘을 이용한 동적 베이시안 네트워크의 파라미터 학습기법)

  • Cho, Hyun-Cheol;Lee, Kwon-Soon;Koo, Kyung-Wan
    • The Transactions of the Korean Institute of Electrical Engineers P
    • /
    • v.58 no.2
    • /
    • pp.164-171
    • /
    • 2009
  • This paper presents a new learning algorithm for dynamic Bayesian networks (DBN) by means of a constrained least squares (LS) estimation algorithm and a gradient descent method. First, we propose constrained LS-based parameter estimation for a Markov chain (MC) model given observation data sets. Next, a gradient descent optimization is utilized for online estimation of a hidden Markov model (HMM), which is constructed bilinearly by adding an observation variable to the MC model. We carry out numerical simulations to demonstrate its reliability and superiority, in which a series of nonstationary random signals is applied to the respective DBN models.
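
As a loose illustration of combining constrained least squares with gradient descent for a Markov chain, the sketch below fits a row-stochastic transition matrix by projected gradient descent on a least-squares objective. The projection (clip and renormalize) and the simulated data are simplifying assumptions, not the paper's estimator, and the HMM part is not reproduced.

```python
# Minimal sketch: projected gradient descent on a least-squares objective for a
# 2-state Markov chain transition matrix, keeping rows on the probability simplex.
import numpy as np

rng = np.random.default_rng(0)
P_true = np.array([[0.8, 0.2], [0.3, 0.7]])

states = [0]                              # simulate an observed state sequence
for _ in range(2000):
    states.append(rng.choice(2, p=P_true[states[-1]]))
S = np.eye(2)[states]                     # one-hot encoding, shape (T+1, 2)
X, Y = S[:-1], S[1:]                      # x_t and x_{t+1}

P = np.full((2, 2), 0.5)                  # start from the uniform chain
lr = 0.001
for _ in range(500):
    grad = X.T @ (X @ P - Y)              # gradient of 0.5 * ||X P - Y||_F^2
    P -= lr * grad
    P = np.clip(P, 0, None)
    P /= P.sum(axis=1, keepdims=True)     # approximate projection onto row-stochastic matrices

print(P)
```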

Nonlinear optimization algorithm using monotonically increasing quantization resolution

  • Jinwuk Seok;Jeong-Si Kim
    • ETRI Journal
    • /
    • v.45 no.1
    • /
    • pp.119-130
    • /
    • 2023
  • We propose a quantized gradient search algorithm that can achieve global optimization by monotonically reducing the quantization step with respect to time when the quantization is composed of integer or fixed-point fractional values applied to an optimization algorithm. According to the white noise hypothesis, when the quantization step is sufficiently small and the quantization is well defined, the round-off error caused by quantization can be regarded as a random variable with an independent and identical distribution. Thus, we rewrite the search equation based on gradient descent as a stochastic differential equation and, by stochastic analysis of the objective function, obtain the monotonically decreasing rate of the quantization step that enables global optimization. Consequently, when the search equation is quantized by a monotonically decreasing quantization step, which suitably reduces the round-off error, we can derive a search algorithm evolving from the optimization algorithm. Numerical simulations indicate that, owing to the property of quantization-based global optimization, the proposed algorithm shows better optimization performance over the search space at each iteration than the conventional algorithm, with a higher success rate and fewer iterations.
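
The mechanism can be sketched in a few lines: quantize each gradient-descent iterate onto a grid whose step shrinks monotonically with time, so early iterations move coarsely and later iterations refine. The test function and the 1/t schedule below are assumptions for illustration; the paper's stochastic-differential-equation analysis is not reproduced.

```python
# Minimal sketch: gradient descent whose iterates are rounded onto a grid
# with a monotonically decreasing quantization step.
import numpy as np

def f(x):                                   # multimodal 1-D test function
    return x ** 2 + 2 * np.sin(5 * x)

def grad_f(x):
    return 2 * x + 10 * np.cos(5 * x)

x, lr = 3.0, 0.05
for t in range(1, 201):
    q = 1.0 / t                             # monotonically decreasing quantization step
    x = q * np.round((x - lr * grad_f(x)) / q)   # quantize the iterate onto the grid q*Z

print("final x:", x, "f(x):", f(x))
```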

A New Block-based Gradient Descent Search Algorithm for a Fast Block Matching (고속 블록 정합을 위한 새로운 블록 기반 경사 하강 탐색 알고리즘)

  • 곽성근
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.10
    • /
    • pp.731-740
    • /
    • 2003
  • Since motion estimation removes redundant data by exploiting the temporal correlations between adjacent frames in a video sequence, it plays an important role in digital video coding. In block matching algorithms, search patterns of different shapes or sizes and the distribution of motion vectors have a large impact on both the search speed and the image quality. In this paper, we propose a new fast block matching algorithm using a small-cross search pattern and a block-based gradient descent search pattern. Our algorithm first finds the motion vectors that are close to the center of the search window using the small-cross search pattern, and then quickly finds the remaining motion vectors using the block-based gradient descent search pattern. Experiments show that, compared with the block-based gradient descent search algorithm (BBGDS), the proposed search algorithm improves the average number of search points per motion vector estimation by as much as 26-40%.
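
The two building blocks named in this abstract can be sketched as follows: a small-cross check around the search-window centre, then a block-based gradient descent search that repeatedly moves to the best point of a 3x3 neighbourhood until the centre wins. The frames, block size, and bounds below are illustrative assumptions rather than the paper's experimental setup.

```python
# Minimal sketch: small-cross step followed by block-based gradient descent
# search (BBGDS) on the sum of absolute differences (SAD).
import numpy as np

def sad(cur, ref, bx, by, dx, dy, B=8):
    """SAD between the current block and a displaced reference block."""
    y0, x0 = by + dy, bx + dx
    if y0 < 0 or x0 < 0 or y0 + B > ref.shape[0] or x0 + B > ref.shape[1]:
        return np.inf                              # candidate block falls outside the frame
    return np.abs(cur[by:by + B, bx:bx + B] - ref[y0:y0 + B, x0:x0 + B]).sum()

def motion_vector(cur, ref, bx, by):
    dx, dy = 0, 0
    # small-cross pattern: favour motion vectors close to the search-window centre
    cross = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    step = min(cross, key=lambda o: sad(cur, ref, bx, by, dx + o[0], dy + o[1]))
    dx, dy = dx + step[0], dy + step[1]
    # block-based gradient descent: move to the best point of the 3x3 neighbourhood
    for _ in range(32):                            # safety bound on the number of moves
        nbrs = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)]
        best = min(nbrs, key=lambda o: sad(cur, ref, bx, by, dx + o[0], dy + o[1]))
        if best == (0, 0):
            break
        dx, dy = dx + best[0], dy + best[1]
    return dx, dy

# synthetic frames: the current frame is the reference shifted by 2 rows and 3 columns
y, x = np.mgrid[0:64, 0:64].astype(float)
ref = (x - 32.0) ** 2 + (y - 32.0) ** 2            # smooth, bowl-shaped intensity
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(motion_vector(cur, ref, bx=24, by=24))       # expected motion vector near (-3, -2)
```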


A survey on parallel training algorithms for deep neural networks (심층 신경망 병렬 학습 방법 연구 동향)

  • Yook, Dongsuk;Lee, Hyowon;Yoo, In-Chul
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.6
    • /
    • pp.505-514
    • /
    • 2020
  • Since a large amount of training data is typically needed to train Deep Neural Networks (DNNs), a parallel training approach is required. The Stochastic Gradient Descent (SGD) algorithm is one of the most widely used methods to train DNNs. However, since SGD is an inherently sequential process, some form of approximation scheme is required to parallelize it. In this paper, we review various efforts to parallelize the SGD algorithm, and analyze the computational overhead, the communication overhead, and the effects of the approximations.
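
As background for such surveys, the most common baseline scheme, synchronous data-parallel SGD with gradient averaging, can be sketched as below. The workers are simulated sequentially and the model (linear regression) is an assumption for illustration; real systems would run the inner loop in parallel and all-reduce the gradients.

```python
# Minimal sketch: synchronous data-parallel SGD, with each worker computing a
# mini-batch gradient on its own data shard and the averaged gradient applied
# to one shared model (workers simulated sequentially here).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=4000)

n_workers, batch, lr = 4, 32, 0.05
shards = np.array_split(np.arange(len(X)), n_workers)   # one data shard per worker
w = np.zeros(5)

for step in range(300):
    grads = []
    for shard in shards:                                 # would run in parallel
        idx = rng.choice(shard, size=batch, replace=False)
        err = X[idx] @ w - y[idx]
        grads.append(X[idx].T @ err / batch)             # local mini-batch gradient
    w -= lr * np.mean(grads, axis=0)                     # all-reduce (average), then update

print("parameter error:", np.linalg.norm(w - w_true))
```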

Self-Organizing Fuzzy Modeling Based on Hyperplane-Shaped Clusters (다차원 평면 클러스터를 이용한 자기 구성 퍼지 모델링)

  • Koh, Taek-Beom
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.12
    • /
    • pp.985-992
    • /
    • 2001
  • This paper proposes a self-organizing fuzzy modeling (SOFUM) method which can create new hyperplane-shaped clusters and adjust the parameters of the fuzzy model iteratively. The suggested algorithm, SOFUM, is composed of four steps: coarse tuning, fine tuning, cluster creation, and optimization of learning rates. In the coarse tuning, fuzzy C-regression model (FCRM) clustering and the weighted recursive least squares (WRLS) algorithm are used, and in the fine tuning, a gradient descent algorithm is used to adjust the parameters of the fuzzy model precisely. In the cluster creation, a new hyperplane-shaped cluster is created by applying multiple regression to input/output data with relatively large fuzzy entropy, based on the parameter tunings of the fuzzy model. The learning rates are optimized by utilizing a meiosis-genetic algorithm. To check the effectiveness of the suggested algorithm, two examples are examined and the performance of the identified fuzzy model is demonstrated via computer simulation.
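
The last step, optimizing learning rates with an evolutionary search, can be caricatured by a plain genetic algorithm (not the meiosis-genetic variant used in the paper). The toy objective, population size, and mutation settings below are assumptions for illustration only.

```python
# Minimal sketch: a generic genetic algorithm that searches for a learning rate
# which makes a short run of gradient descent end at the lowest loss.
import numpy as np

def descend(lr, steps=30):
    """Run a few gradient-descent steps on f(x) = x^2 and return the final loss."""
    x = 5.0
    for _ in range(steps):
        x -= lr * 2 * x
    return x ** 2

rng = np.random.default_rng(0)
pop = rng.uniform(0.0, 1.0, size=20)               # candidate learning rates
for _ in range(15):                                # generations
    fitness = np.array([descend(lr) for lr in pop])
    parents = pop[np.argsort(fitness)[:10]]        # select the better half
    children = parents + rng.normal(scale=0.05, size=parents.shape)   # mutate
    pop = np.concatenate([parents, np.clip(children, 1e-4, 1.0)])

print("best learning rate:", pop[np.argmin([descend(lr) for lr in pop])])
```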


Model Reference Adaptive Control Using Non-Euclidean Gradient Descent

  • Lee, Sang-Heon;Robert Mahony;Kim, Il-Soo
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.4 no.4
    • /
    • pp.330-340
    • /
    • 2002
  • In this paper, a non-linear approach to the design of model reference adaptive control is presented. The approach is demonstrated by a case study of a simple single-pole, no-zero, linear, discrete-time plant. The essence of the idea is to generate a full non-linear model of the plant dynamics and the parameter adaptation dynamics as a gradient descent algorithm with respect to a Riemannian metric. It is shown how the Riemannian metric can be chosen so that the modelled plant dynamics do in fact match the true plant dynamics. The performance of the proposed scheme is compared to a traditional model reference adaptive control scheme using the classical sensitivity derivatives (Euclidean gradients) for the descent algorithm.
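
The central idea, descending with respect to a non-Euclidean (Riemannian) metric rather than the Euclidean one, can be sketched on a toy quadratic: the update theta <- theta - lr * grad is replaced by theta <- theta - lr * G^{-1} grad for a metric G. The metric choice and step sizes below are illustrative assumptions, not the paper's adaptive-control design.

```python
# Minimal sketch: Euclidean gradient descent versus gradient descent with
# respect to a metric G on an ill-conditioned quadratic f(x) = 0.5 x^T A x.
import numpy as np

A = np.diag([1.0, 100.0])                  # badly conditioned quadratic
def grad(x):
    return A @ x

x_euc = np.array([1.0, 1.0])
x_rie = np.array([1.0, 1.0])
G = A                                      # choose the metric to reflect the curvature
lr = 0.009                                 # must stay below 2/100 for the Euclidean update

for _ in range(200):
    x_euc = x_euc - lr * grad(x_euc)                        # Euclidean gradient step
    x_rie = x_rie - 0.5 * np.linalg.solve(G, grad(x_rie))   # metric (natural) gradient step

print("Euclidean :", x_euc)
print("Riemannian:", x_rie)
```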