• Title/Summary/Keyword: Backpropagation Learning Algorithm

Performance Improvement of Backpropagation Algorithm by Automatic Tuning of Learning Rate using Fuzzy Logic System

  • Jung, Kyung-Kwon; Lim, Joong-Kyu; Chung, Sung-Boo; Eom, Ki-Hwan
    • Journal of Information and Communication Convergence Engineering / v.1 no.3 / pp.157-162 / 2003
  • We propose a learning method for improving the performance of the backpropagation algorithm. The proposed method uses a fuzzy logic system to automatically tune the learning rate of each weight. Instead of a fixed learning rate, the fuzzy logic system dynamically adjusts the learning rate; its inputs are delta and delta bar, and its output is the learning rate. In order to verify the effectiveness of the proposed method, we performed simulations on the XOR problem, character classification, and function approximation. The results show that the proposed method considerably improves the performance compared to the general backpropagation, the backpropagation with momentum, and the Jacobs' delta-bar-delta algorithm.
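
As a rough illustration of the mechanism described above, the sketch below keeps a per-weight running average of past gradients (delta bar) and maps its agreement with the current gradient (delta) to a learning rate. The fuzzy rule base itself is not given in the abstract, so fuzzy_rate() is a hypothetical stand-in, not the authors' controller.

```python
import numpy as np

def fuzzy_rate(delta, delta_bar, lo=0.01, hi=0.5):
    """Hypothetical fuzzy-style mapping: agreement between the current
    gradient and its running average is squashed to [-1, 1] and then
    defuzzified to a per-weight rate in [lo, hi]."""
    agreement = np.tanh(delta * delta_bar * 100.0)  # scaled so small gradients register
    return lo + (hi - lo) * (agreement + 1.0) / 2.0

def step(w, grad, delta_bar, theta=0.7):
    """One update: tune each weight's rate, descend, refresh delta bar
    (theta is the averaging factor from the delta-bar-delta literature)."""
    rate = fuzzy_rate(grad, delta_bar)
    w = w - rate * grad
    delta_bar = (1 - theta) * grad + theta * delta_bar
    return w, delta_bar

w = np.array([0.5, -0.3])
delta_bar = np.zeros_like(w)
grad = np.array([0.2, -0.1])          # toy gradient for one step
w, delta_bar = step(w, grad, delta_bar)
print(w, delta_bar)
```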

Improved Error Backpropagation Algorithm using Modified Activation Function Derivative (수정된 Activation Function Derivative를 이용한 오류 역전파 알고리즘의 개선)

  • 권희용; 황희영
    • The Transactions of the Korean Institute of Electrical Engineers / v.41 no.3 / pp.274-280 / 1992
  • In this paper, an improved error backpropagation algorithm is introduced which avoids network paralysis, one of the problems of the error backpropagation learning rule. For this purpose, we analyzed the cause of network paralysis and modified the activation function derivative of the standard error backpropagation algorithm, which is regarded as the source of the phenomenon. The characteristics of the modified activation function derivative are analyzed. Various experiments show that the modified error backpropagation algorithm performs better than the standard one.
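
The derivative modification itself is not spelled out in the abstract, but the paralysis mechanism it names is the sigmoid derivative y(1 - y) vanishing at saturated units, which freezes the weights feeding them. A minimal sketch of the standard remedy, adding a small constant so the derivative stays bounded away from zero, is shown below; the constant c = 0.1 is an assumption, not the paper's value.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def standard_derivative(y):
    return y * (1.0 - y)            # -> 0 as y -> 0 or 1 (paralysis)

def modified_derivative(y, c=0.1):  # c is an assumed constant
    return y * (1.0 - y) + c        # bounded away from zero

y = sigmoid(np.array([-10.0, 0.0, 10.0]))   # saturated, linear, saturated
print(standard_derivative(y))   # ~[0, 0.25, 0]: gradient dies at the ends
print(modified_derivative(y))   # ~[0.1, 0.35, 0.1]: learning continues
```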

A new learning algorithm for multilayer neural networks (새로운 다층 신경망 학습 알고리즘)

  • 고진욱; 이철희
    • Proceedings of the IEEK Conference / 1998.10a / pp.1285-1288 / 1998
  • In this paper, we propose a new learning algorithm for multilayer neural networks. In the error backpropagation algorithm that is widely used for training multilayer neural networks, weights are adjusted to reduce an error function defined as the sum of squared errors over all the neurons in the output layer. In the proposed learning algorithm, we consider each output of the output layer as a function of the weights and adjust the weights directly so that the output neurons produce the desired outputs. Experiments show that the proposed algorithm outperforms the backpropagation learning algorithm.
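
The exact update is not given in the abstract; one hypothetical reading of adjusting the weights directly so the output neurons produce the desired outputs, sketched below for the output layer only, is to invert the output activation to obtain a target net input per unit and then solve for the output weights by least squares over a batch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def inverse_sigmoid(t, eps=1e-6):
    t = np.clip(t, eps, 1.0 - eps)   # keep the inverse finite
    return np.log(t / (1.0 - t))

rng = np.random.default_rng(0)
H = rng.uniform(size=(20, 5))            # hidden activations, 20 patterns
T = rng.uniform(0.1, 0.9, size=(20, 2))  # desired outputs of 2 output units

net_target = inverse_sigmoid(T)          # net input each output unit should see
W, *_ = np.linalg.lstsq(H, net_target, rcond=None)  # direct solve for weights

print(np.abs(sigmoid(H @ W) - T).max())  # outputs now close to the targets
```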

A Study on the Learning Efficiency of Multilayered Neural Networks using Variable Slope (기울기 조정에 의한 다층 신경회로망의 학습효율 개선방법에 대한 연구)

  • 이형일; 남재현; 지선수
    • Journal of Korean Society of Industrial and Systems Engineering / v.20 no.42 / pp.161-169 / 1997
  • A variety of learning methods are used for neural networks. Among them, the backpropagation algorithm is the most widely used in areas such as image processing, speech recognition, and pattern recognition. Despite its popularity in these applications, its main problem is the running time: too much time is spent on learning. This paper suggests a method which maximizes the convergence speed of learning. Such a reduction in the learning time of the backpropagation algorithm is achieved by adaptively adjusting the slope of the activation function depending on the total error, which is named the variable slope algorithm. Experimental results compare the variable slope algorithm against the conventional backpropagation algorithm and other variations, and show an improvement in performance over previous algorithms.
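
A minimal sketch of the variable-slope idea follows: the sigmoid gains a slope parameter that is re-tuned from the current total error. The specific schedule slope_from_error() is a hypothetical monotone rule, not the paper's.

```python
import numpy as np

def sigmoid(x, slope):
    return 1.0 / (1.0 + np.exp(-slope * x))

def slope_from_error(total_error, base=1.0, gain=2.0):
    # Hypothetical rule: large error -> gentler slope (larger gradients far
    # from the targets); small error -> steeper slope (sharper decisions).
    return base + gain / (1.0 + total_error)

x = np.linspace(-3, 3, 7)
for err in (4.0, 1.0, 0.1):                 # total error shrinking over training
    s = slope_from_error(err)
    print(f"error={err:4.1f} slope={s:.2f} y={sigmoid(x, s).round(2)}")
```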

A Backpropagation Learning Algorithm for pRAM Networks (pRAM회로망을 위한 역전파 학습 알고리즘)

  • 완재희; 채수익
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.1 / pp.107-114 / 1994
  • Hardware implementation of on-chip learning in artificial neural networks is important for real-time processing. The pRAM model is based on the probabilistic firing of a biological neuron and can be implemented in a VLSI circuit with learning capability. We derive a backpropagation learning algorithm for pRAM networks and present its circuit implementation with stochastic computation. The simulation results confirm the good convergence of the learning algorithm for pRAM networks.
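
For orientation, the sketch below shows the pRAM neuron model the paper builds on: the binary input vector addresses a memory location whose stored value is the neuron's firing probability. The nudge() update is an illustrative probability adjustment, not the paper's derived backpropagation rule.

```python
import numpy as np

rng = np.random.default_rng(0)

class PRAMNeuron:
    def __init__(self, n_inputs):
        # one stored firing probability per binary input address
        self.alpha = rng.uniform(size=2 ** n_inputs)

    def address(self, x):
        return int("".join(str(b) for b in x), 2)

    def fire(self, x):
        # probabilistic firing: output 1 with the addressed probability
        return rng.random() < self.alpha[self.address(x)]

    def nudge(self, x, target, rate=0.1):
        # illustrative update: move the addressed probability toward the target
        u = self.address(x)
        self.alpha[u] = np.clip(self.alpha[u] + rate * (target - self.alpha[u]), 0, 1)

n = PRAMNeuron(2)
for _ in range(200):
    n.nudge([1, 0], target=1.0)      # train address '10' to fire reliably
print(n.alpha.round(2))
```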

Auto-Tuning Method of Learning Rate for Performance Improvement of Backpropagation Algorithm (역전파 알고리즘의 성능개선을 위한 학습율 자동 조정 방식)

  • Kim, Joo-Woong; Jung, Kyung-Kwon; Eom, Ki-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI / v.39 no.4 / pp.19-27 / 2002
  • We propose an auto-tuning method of the learning rate for performance improvement of the backpropagation algorithm. The proposed method uses a fuzzy logic system to automatically tune the learning rate. Instead of a fixed learning rate, the fuzzy logic system dynamically adjusts the learning rate; its inputs are $\Delta$ and $\bar{\Delta}$, and its output is the learning rate. In order to verify the effectiveness of the proposed method, we performed simulations on an N-parity problem, function approximation, and Arabic numeral classification. The results show that the proposed method considerably improves the performance compared to the backpropagation, the backpropagation with momentum, and the Jacobs' delta-bar-delta algorithm.
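
For reference, the Jacobs' delta-bar-delta rule that this method (and the fuzzy tuner above) is compared against can be written compactly: each weight keeps its own rate, grown additively while the gradient's sign agrees with its running average and shrunk multiplicatively on a sign change. The constants below are typical choices, not the paper's settings.

```python
import numpy as np

def delta_bar_delta(grad, delta_bar, rate, kappa=0.01, phi=0.1, theta=0.7):
    """One delta-bar-delta rate update per weight."""
    agree = grad * delta_bar
    rate = np.where(agree > 0, rate + kappa,       # consistent sign: grow
           np.where(agree < 0, rate * (1 - phi),   # sign flip: shrink
                    rate))
    delta_bar = (1 - theta) * grad + theta * delta_bar
    return rate, delta_bar

rate = np.full(3, 0.05)
delta_bar = np.zeros(3)
for grad in (np.array([0.2, -0.1, 0.3]), np.array([0.1, 0.2, -0.3])):
    rate, delta_bar = delta_bar_delta(grad, delta_bar, rate)
print(rate.round(4))   # first weight's rate grew; the others shrank
```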

The Comparison of Neural Network Learning Paradigms: Backpropagation, Simulated Annealing, Genetic Algorithm, and Tabu Search

  • Chen, Ming-Kuen
    • Proceedings of the Korean Society for Quality Management Conference / 1998.11a / pp.696-704 / 1998
  • Artificial neural networks (ANNs) have been successfully applied in various areas, but how to construct a network effectively remains one of the critical problems. This study focuses on this problem. First, ANNs using four different learning algorithms were constructed: backpropagation, simulated annealing, genetic algorithm, and tabu search. The experimental results of the four learning algorithms were then compared by statistical analysis, using training RMS, training time, and testing RMS as the comparison criteria.
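
As a sketch of one non-gradient paradigm in this comparison, the snippet below trains a toy network's weights by simulated annealing, with RMS error as the acceptance criterion; the network, data, and cooling schedule are illustrative, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = np.sin(X[:, 0]) + X[:, 1]                  # toy target function

def rms(w):
    """RMS error of a tiny 2-3-1 tanh network with flattened weights w."""
    hidden = np.tanh(X @ w[:6].reshape(2, 3))
    pred = hidden @ w[6:9]
    return np.sqrt(np.mean((pred - y) ** 2))

w = rng.normal(size=9)
cur, temp = rms(w), 1.0
for _ in range(2000):
    cand = w + rng.normal(scale=0.1, size=9)   # random perturbation of weights
    e = rms(cand)
    # accept improvements always, worse moves with temperature-controlled odds
    if e < cur or rng.random() < np.exp((cur - e) / temp):
        w, cur = cand, e
    temp *= 0.999                              # geometric cooling schedule
print(round(cur, 4))
```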

A Learning Algorithm for a Recurrent Neural Network Based on Dual Extended Kalman Filter (두개의 Extended Kalman Filter를 이용한 Recurrent Neural Network 학습 알고리듬)

  • Song, Myung-Geun; Kim, Sang-Hee; Park, Won-Woo
    • Proceedings of the KIEE Conference / 2004.11c / pp.349-351 / 2004
  • The classical dynamic backpropagation learning algorithm has problems with learning speed and the determination of learning parameters. The Extended Kalman Filter (EKF) is an effective state estimation method for nonlinear dynamic systems. This paper presents a learning algorithm using a Dual Extended Kalman Filter (DEKF) for a Fully Recurrent Neural Network (FRNN). The DEKF learning algorithm gives the minimum-variance estimate of the weights and the hidden outputs. The proposed DEKF learning algorithm is applied to the system identification of a nonlinear SISO system and compared with the dynamic backpropagation learning algorithm.
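
The weight-estimation half of the dual filter can be sketched as a standard EKF whose state is the weight vector and whose measurement is the network output (the paired filter for the recurrent hidden state is omitted here). The toy linear "network" below keeps the Jacobian trivial; the noise levels R and Q are assumptions.

```python
import numpy as np

def ekf_weight_update(w, P, H, d, y, R=0.01, Q=1e-5):
    """One EKF step treating the weights as the state: w weights (n,),
    P covariance (n,n), H output Jacobian w.r.t. weights (1,n),
    d desired output, y actual output."""
    S = H @ P @ H.T + R                      # innovation covariance (1,1)
    K = P @ H.T / S                          # Kalman gain (n,1)
    w = w + (K * (d - y)).ravel()            # minimum-variance weight update
    P = P - K @ H @ P + Q * np.eye(len(w))   # covariance update
    return w, P

# Toy linear "network" y = w.x, so the Jacobian is simply x.
rng = np.random.default_rng(0)
w_true = np.array([0.8, -0.5])
w, P = np.zeros(2), np.eye(2)
for _ in range(100):
    x = rng.normal(size=2)
    H = x.reshape(1, 2)
    w, P = ekf_weight_update(w, P, H, d=w_true @ x, y=w @ x)
print(w.round(3))                            # converges toward w_true
```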

Enhanced Backpropagation Algorithm by Auto-Tuning Method of Learning Rate using Fuzzy Control System (퍼지 제어 시스템을 이용한 학습률 자동 조정 방법에 의한 개선된 역전파 알고리즘)

  • 김광백; 박충식
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.2 / pp.464-470 / 2004
  • We propose an enhanced backpropagation algorithm with auto-tuning of the learning rate using a fuzzy control system, for performance improvement of the backpropagation algorithm. We propose two methods, which mitigate the local minima and learning time problems. First, if the absolute value of the difference between the target and the actual output value is smaller than or equal to $\varepsilon$, we count the output as correct; if it is bigger than $\varepsilon$, we count it as incorrect. Second, instead of a fixed learning rate, the proposed method dynamically adjusts the learning rate using the fuzzy control system. The inputs of the fuzzy control system are the numbers of correct and incorrect outputs, and the output is the learning rate. For the evaluation of the performance of the proposed method, we applied it to the XOR problem and numeral pattern classification. The experimental results show that the proposed method improves the performance compared to the conventional backpropagation, the backpropagation with momentum, and the Jacobs' delta-bar-delta method.
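
A minimal sketch of the correctness bookkeeping described above follows: an output counts as correct when |target - actual| <= eps, and the two counts drive the learning rate. Since the fuzzy rule base is not in the abstract, rate_from_counts() is a hypothetical stand-in that raises the rate while many outputs are still wrong and lowers it as the net converges.

```python
import numpy as np

def count_correct(targets, outputs, eps=0.1):
    """Split outputs into correct/incorrect by the epsilon threshold."""
    wrong = np.abs(targets - outputs) > eps
    return int((~wrong).sum()), int(wrong.sum())

def rate_from_counts(n_correct, n_incorrect, lo=0.05, hi=0.9):
    # hypothetical stand-in for the fuzzy controller
    frac_wrong = n_incorrect / max(n_correct + n_incorrect, 1)
    return lo + (hi - lo) * frac_wrong       # more errors -> larger rate

targets = np.array([0.0, 1.0, 1.0, 0.0])     # XOR targets, as in the paper
outputs = np.array([0.05, 0.62, 0.94, 0.31]) # some mid-training outputs
n_ok, n_bad = count_correct(targets, outputs)
print(n_ok, n_bad, round(rate_from_counts(n_ok, n_bad), 3))
```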

Constructing Neural Networks Using Genetic Algorithm and Learning Neural Networks Using Various Learning Algorithms (유전알고리즘을 이용한 신경망의 구성 및 다양한 학습 알고리즘을 이용한 신경망의 학습)

  • 양영순; 한상민
    • Proceedings of the Computational Structural Engineering Institute Conference / 1998.04a / pp.216-225 / 1998
  • Although an artificial neural network based on the backpropagation algorithm is an excellent system simulator, it still has the unsolved problems of structure decision and learning method. That is, there is no general approach to deciding the structure of a neural network, and training often fails because of the local optima it frequently falls into. In addition, although there are many successful applications of the backpropagation learning algorithm, there are few efforts to improve the learning algorithm itself. In this study, we suggest a general way to construct the hidden layer of a neural network using a binary genetic algorithm, and also propose various learning methods by which the global minimum of the learning error can be obtained. An XOR problem and line heating problems are investigated as examples.
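
The binary-GA construction idea can be sketched as follows: a bit string switches candidate hidden units on or off, and the GA searches for the string whose network fits the data best. The fitness function, operators, and the least-squares "training" of the output layer are illustrative stand-ins for the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))
y = np.sin(X[:, 0] * 3) * X[:, 1]                  # toy regression target
W_in = rng.normal(size=(2, 8))                     # 8 candidate hidden units

def fitness(bits):
    """Negative training error of the network using only the active units."""
    if not bits.any():
        return -np.inf                             # empty hidden layer is invalid
    H = np.tanh(X @ W_in[:, bits])                 # active hidden units only
    w_out, *_ = np.linalg.lstsq(H, y, rcond=None)  # train the output layer
    return -np.mean((H @ w_out - y) ** 2)

pop = rng.random((12, 8)) < 0.5                    # 12 random bit strings
for gen in range(30):
    scores = np.array([fitness(b) for b in pop])
    parents = pop[np.argsort(scores)[-6:]]         # truncation selection
    kids = parents[rng.integers(6, size=12)].copy()
    kids ^= rng.random(kids.shape) < 0.05          # bit-flip mutation
    pop = kids
print(pop[np.argmax([fitness(b) for b in pop])].astype(int))
```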
