• Title/Summary/Keyword: backpropagation algorithm


Improving the Training Performance of Neural Networks by using Hybrid Algorithm (하이브리드 알고리즘을 이용한 신경망의 학습성능 개선)

  • Kim, Weon-Ook; Cho, Yong-Hyun; Kim, Young-Il; Kang, In-Ku
    • The Transactions of the Korea Information Processing Society / v.4 no.11 / pp.2769-2779 / 1997
  • This paper proposes an efficient method for improving the training performance of neural networks using a hybrid of the conjugate gradient backpropagation algorithm and the dynamic tunneling backpropagation algorithm. The conjugate gradient backpropagation algorithm, a fast gradient algorithm, is applied for high-speed optimization. The dynamic tunneling backpropagation algorithm, a deterministic method with a tunneling phenomenon, is applied for global optimization. After converging to a local minimum with the conjugate gradient backpropagation algorithm, a new initial point for escaping the local minimum is estimated by the dynamic tunneling backpropagation algorithm. The proposed method has been applied to parity checking and pattern classification. The simulation results show that the proposed method outperforms both the gradient descent backpropagation algorithm and a hybrid of gradient descent and dynamic tunneling backpropagation, and that the new algorithm converges to the global minimum more often than the gradient descent backpropagation algorithm.

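The hybrid in this abstract alternates a fast local optimizer with a deterministic tunneling phase that searches for a lower basin once a local minimum is reached. A minimal sketch of that loop, assuming plain gradient steps stand in for conjugate-gradient backpropagation and a cube-root tunneling ODE (both illustrative choices, not the authors' code):

```python
import numpy as np

def local_minimize(loss, grad, w, lr=0.1, steps=500, tol=1e-6):
    """Descend until the gradient is (numerically) flat: a local minimum."""
    for _ in range(steps):
        g = grad(w)
        if np.linalg.norm(g) < tol:
            break
        w = w - lr * g
    return w

def tunnel(loss, w_star, rho=0.05, steps=200):
    """Deterministic tunneling: integrate dw/dt = rho*(w - w*)**(1/3) away
    from the local minimum w* until a point with lower error is found."""
    f_star = loss(w_star)
    for sign in (+1.0, -1.0):                  # try both sides of the minimum
        w = w_star + sign * 1e-3
        for _ in range(steps):
            w = w + rho * np.cbrt(w - w_star)  # Euler step of the tunneling ODE
            if loss(w) < f_star:               # found a lower basin: escape
                return w, True
    return w_star, False

def hybrid_train(loss, grad, w0, rounds=10):
    w = np.asarray(w0, dtype=float)
    for _ in range(rounds):
        w = local_minimize(loss, grad, w)
        w_new, escaped = tunnel(loss, w)
        if not escaped:        # no lower basin found: accept current minimum
            return w
        w = w_new
    return w

# toy multimodal objective standing in for a network error surface
loss = lambda w: np.sum(np.sin(3 * w) + 0.1 * w**2)
grad = lambda w: 3 * np.cos(3 * w) + 0.2 * w
print(hybrid_train(loss, grad, np.array([2.0])))   # reaches the lower basin
```

The tunneling phase accepts a point only if its error drops below the current local minimum, so each round is monotone in the best error found.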

A new training method of multilayer neural networks using a hybrid of backpropagation algorithm and dynamic tunneling system (후향전파 알고리즘과 동적터널링 시스템을 조합한 다층신경망의 새로운 학습방법)

  • 조용현
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.4 / pp.201-208 / 1996
  • This paper proposes an efficient method for improving the training performance of neural networks using a hybrid of the backpropagation algorithm and a dynamic tunneling system. The backpropagation algorithm, a fast gradient descent method, is applied for high-speed optimization. The dynamic tunneling system, a deterministic method with a tunneling phenomenon, is applied for global optimization. After converging to a local minimum with the backpropagation algorithm, an approximate initial point for escaping the local minimum is estimated by the dynamic tunneling system. The proposed method has been applied to pattern classification, and the simulation results show that its performance is superior to that of the backpropagation algorithm with randomized initial point settings.

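The local-search half of this hybrid is plain backpropagation. A self-contained sketch on a one-hidden-layer sigmoid network with XOR-style data (layer sizes, learning rate, and epoch count are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0,0],[0,1],[1,0],[1,1]], float)   # XOR-style toy data
T = np.array([[0],[1],[1],[0]], float)

W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 0.5

for epoch in range(5000):
    H = sigmoid(X @ W1 + b1)          # forward pass
    Y = sigmoid(H @ W2 + b2)
    dY = (Y - T) * Y * (1 - Y)        # output delta (squared-error loss)
    dH = (dY @ W2.T) * H * (1 - H)    # hidden delta, propagated backward
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

print(np.round(Y.ravel(), 2))   # should approach [0, 1, 1, 0]
```

The tunneling system would restart this loop from a new initial point whenever it stalls in a local minimum.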

Improved Error Backpropagation Algorithm using Modified Activation Function Derivative (수정된 Activation Function Derivative를 이용한 오류 역전파 알고리즘의 개선)

  • 권희용; 황희영
    • The Transactions of the Korean Institute of Electrical Engineers / v.41 no.3 / pp.274-280 / 1992
  • In this paper, an improved error backpropagation algorithm is introduced, which avoids network paralysis, one of the problems of the error backpropagation learning rule. For this purpose, we analyzed the cause of network paralysis and modified the activation function derivative of the standard error backpropagation algorithm, which is regarded as the source of the phenomenon. The characteristics of the modified activation function derivative are analyzed. Various experiments show that the modified error backpropagation algorithm performs better than the standard error backpropagation algorithm.

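The paralysis the paper targets comes from the sigmoid derivative y(1-y) vanishing for saturated neurons. A small illustration, assuming the common fix of adding a constant floor to the derivative; the 0.1 offset is an assumption, not the paper's exact modification:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def standard_derivative(y):
    return y * (1.0 - y)          # -> 0 as y -> 0 or 1: weight updates freeze

def modified_derivative(y, floor=0.1):
    return y * (1.0 - y) + floor  # never vanishes, so saturated units keep learning

y = sigmoid(np.array([-8.0, 0.0, 8.0]))
print(standard_derivative(y))   # ~[0.0003, 0.25, 0.0003]: saturated ends stall
print(modified_derivative(y))   # ~[0.1003, 0.35, 0.1003]: gradient still flows
```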

Forecasting algorithm using an improved genetic algorithm based on backpropagation neural network model (개선된 유전자 역전파 신경망에 기반한 예측 알고리즘)

  • Yoon, YeoChang; Jo, Na Rae; Lee, Sung Duck
    • Journal of the Korean Data and Information Science Society / v.28 no.6 / pp.1327-1336 / 2017
  • In this study, the problems of short-term stock market forecasting are analyzed, and the feasibility of the ARIMA method and the backpropagation neural network is discussed. The use of neural networks and genetic algorithms in short-term stock forecasting is also examined. Since the backpropagation algorithm often falls into local minima, we optimized the backpropagation neural network and built a forecasting model based on a genetic algorithm combined with a backpropagation neural network in order to achieve high forecasting accuracy. The experiments used the Korea Composite Stock Price Index (KOSPI) series for prediction and provide the corresponding error analysis. The results show that the genetic-algorithm-based backpropagation neural network model proposed in this study significantly improves the forecasting accuracy for the stock price index series.
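
A hedged sketch of the combination the abstract describes: a genetic algorithm searches for a low-error initial weight vector, and gradient descent then fine-tunes the best individual. The linear stand-in model, GA operators, and rates are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def mse(w, X, t):
    """Error of a linear single-output model standing in for the network."""
    return np.mean((X @ w - t) ** 2)

def ga_init(X, t, dim, pop=30, gens=50, mut=0.1):
    P = rng.normal(0, 1, (pop, dim))
    for _ in range(gens):
        fit = np.array([mse(w, X, t) for w in P])
        parents = P[np.argsort(fit)[: pop // 2]]           # truncation selection
        cut = rng.integers(1, dim, pop // 2)
        kids = np.array([np.concatenate([a[:c], b[c:]])    # one-point crossover
                         for a, b, c in zip(parents, np.roll(parents, 1, 0), cut)])
        kids += mut * rng.normal(0, 1, kids.shape)         # Gaussian mutation
        P = np.vstack([parents, kids])
    return P[np.argmin([mse(w, X, t) for w in P])]

X = rng.normal(size=(100, 5)); t = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])
w = ga_init(X, t, dim=5)                # GA supplies the starting point
for _ in range(200):                    # gradient fine-tuning of the stand-in
    w -= 0.01 * 2 * X.T @ (X @ w - t) / len(X)
print(mse(w, X, t))                     # small residual error
```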

Improving the Training Performance of Multilayer Neural Network by Using Stochastic Approximation and Backpropagation Algorithm (확률적 근사법과 후향전파 알고리즘을 이용한 다층 신경망의 학습성능 개선)

  • 조용현; 최흥문
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.4 / pp.145-154 / 1994
  • This paper proposes an efficient method for improving the training performance of neural networks by using a hybrid of stochastic approximation and the backpropagation algorithm. The proposed method improves training performance by applying a global optimization method that combines the two: an approximate initial point for fast global optimization is first estimated by stochastic approximation, and then the backpropagation algorithm, a fast gradient descent method, is applied for high-speed global optimization. Training is sped up further by adjusting the training parameters of the output and hidden layers adaptively to the standard deviation of the neuron outputs of each layer. The proposed method has been applied to parity checking and pattern classification, and the simulation results show that its performance is superior to that of backpropagation, Baba's MROM, and Sun's method with randomized initial point settings. The results of adaptively adjusting the training parameters show that the proposed method further improves the convergence speed by about 20% in training.

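The adaptive part of this method ties each layer's training parameters to the standard deviation of that layer's neuron outputs. A tiny sketch under an assumed inverse-std scaling (the direction and form of the scaling are a reading of the abstract, not the paper's rule):

```python
import numpy as np

def layer_lr(base_lr, layer_output, eps=1e-3):
    """Per-layer learning rate adapted to the spread of the layer's outputs:
    layers whose activations have collapsed to similar values get larger steps."""
    return base_lr / (np.std(layer_output) + eps)

H = np.array([[0.49, 0.51], [0.50, 0.50]])   # nearly collapsed hidden outputs
Y = np.array([[0.10, 0.90], [0.85, 0.20]])   # well-spread output activations
print(layer_lr(0.1, H))   # large step: hidden activations barely vary
print(layer_lr(0.1, Y))   # smaller step: output activations already spread
```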

Performance Improvement of Backpropagation Algorithm by Automatic Tuning of Learning Rate using Fuzzy Logic System

  • Jung, Kyung-Kwon; Lim, Joong-Kyu; Chung, Sung-Boo; Eom, Ki-Hwan
    • Journal of information and communication convergence engineering / v.1 no.3 / pp.157-162 / 2003
  • We propose a learning method for improving the performance of the backpropagation algorithm. The proposed method uses a fuzzy logic system to automatically tune the learning rate of each weight. Instead of choosing a fixed learning rate, the fuzzy logic system dynamically adjusts it. The inputs of the fuzzy logic system are delta and delta-bar, and its output is the learning rate. To verify the effectiveness of the proposed method, we performed simulations on the XOR problem, character classification, and function approximation. The results show that the proposed method considerably improves performance compared to standard backpropagation, backpropagation with momentum, and Jacobs' delta-bar-delta algorithm.
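
A minimal sketch of the fuzzy tuner with delta (current gradient) and delta-bar (its exponential average) as inputs and the learning rate as output; the triangular memberships and rule surface below are assumptions, not the paper's actual fuzzy system:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_lr(delta, delta_bar, lr_small=0.01, lr_mid=0.1, lr_large=0.5):
    """Fuzzy rules: consistent gradient direction -> speed up, oscillation -> slow down."""
    p = delta * delta_bar
    agree = tri(p, 0.0, 1.0, 2.0)            # delta and delta-bar share a sign
    flip = tri(-p, 0.0, 1.0, 2.0)            # delta and delta-bar oppose
    neutral = max(0.0, 1.0 - agree - flip)   # weak evidence either way
    return (agree * lr_large + flip * lr_small + neutral * lr_mid) / (agree + flip + neutral)

delta_bar = 0.0
for step, delta in enumerate([0.8, 0.7, 0.6, -0.5, 0.4]):
    lr = fuzzy_lr(delta, delta_bar)
    delta_bar = 0.7 * delta_bar + 0.3 * delta   # exponential average of delta
    print(f"step {step}: delta={delta:+.1f}  lr={lr:.3f}")
```

The rate rises while the gradient keeps its direction and falls when it flips sign, which is the same intuition behind the delta-bar-delta baseline the paper compares against.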

A Study on the Learning Efficiency of Multilayered Neural Networks using Variable Slope (기울기 조정에 의한 다층 신경회로망의 학습효율 개선방법에 대한 연구)

  • 이형일; 남재현; 지선수
    • Journal of Korean Society of Industrial and Systems Engineering / v.20 no.42 / pp.161-169 / 1997
  • A variety of learning methods are used for neural networks. Among them, the backpropagation algorithm is the most widely used in areas such as image processing, speech recognition, and pattern recognition. Despite its popularity in these applications, its main problem is the running time: too much time is spent on learning. This paper suggests a method that maximizes the convergence speed of learning. Such a reduction in the learning time of the backpropagation algorithm is achieved by adaptively adjusting the slope of the activation function according to the total error, which we name the variable slope algorithm. Experimental results using the variable slope algorithm are compared against the conventional backpropagation algorithm and other variants, showing an improvement in performance over previous algorithms.

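A short sketch of the variable slope idea, assuming a linear schedule from total error to sigmoid slope; only the slope-adaptation principle comes from the abstract, the schedule itself is illustrative:

```python
import numpy as np

def sigmoid(x, k=1.0):
    """Sigmoid with adjustable slope k; its derivative scales with k."""
    return 1.0 / (1.0 + np.exp(-k * x))

def slope_from_error(total_error, k_min=0.5, k_max=4.0, e_max=1.0):
    """Map the current total error onto a slope: large error -> gentle slope
    (stable steps), small error -> steep slope (sharper decisions)."""
    e = min(max(total_error / e_max, 0.0), 1.0)
    return k_max - e * (k_max - k_min)

for err in [1.0, 0.5, 0.1, 0.01]:
    k = slope_from_error(err)
    print(f"total error {err:4.2f} -> slope {k:.2f}, f'(0) = {0.25*k:.2f}")
```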

A new learning algorithm for multilayer neural networks (새로운 다층 신경망 학습 알고리즘)

  • 고진욱; 이철희
    • Proceedings of the IEEK Conference / 1998.10a / pp.1285-1288 / 1998
  • In this paper, we propose a new learning algorithm for multilayer neural networks. In error backpropagation, which is widely used for training multilayer neural networks, the weights are adjusted to reduce an error function defined as the sum of squared errors over all neurons in the output layer. In the proposed learning algorithm, we instead consider each output of the output layer as a function of the weights and adjust the weights directly so that the output neurons produce the desired outputs. Experiments show that the proposed algorithm outperforms the backpropagation learning algorithm.

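One plausible reading of adjusting the weights "directly" is to invert the output activation, turning each desired output into a target net input, and fit each output neuron's weights against those linear targets. A sketch under that assumption (the fixed hidden activations and all sizes are hypothetical):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def inv_sigmoid(y, eps=1e-6):
    y = np.clip(y, eps, 1 - eps)
    return np.log(y / (1 - y))      # logit: the net input that yields y

rng = np.random.default_rng(0)
H = rng.uniform(size=(8, 3))                      # hidden activations, held fixed
T = np.where(rng.random((8, 2)) < 0.5, 0.1, 0.9)  # desired outputs per neuron
W = rng.normal(0, 0.1, (3, 2))

net_target = inv_sigmoid(T)                       # per-output linear targets
for _ in range(300):
    W -= 0.1 * H.T @ (H @ W - net_target) / len(H)   # direct delta-rule fit

print(np.round(sigmoid(H @ W), 2))   # outputs move toward T where the fit allows
```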

A Separate Learning Algorithm of Two-Layered Networks with Target Values of Hidden Nodes (은닉노드 목표 값을 가진 2개 층 신경망의 분리학습 알고리즘)

  • Choi, Bum-Ghi; Lee, Ju-Hong; Park, Tae-Su
    • Journal of KIISE: Software and Applications / v.33 no.12 / pp.999-1007 / 2006
  • The backpropagation learning algorithm is known to suffer from slow and false convergence arising from plateaus and local minima. The many substitutes for backpropagation announced so far appear to trade off convergence speed against stability of convergence with respect to parameters. Here, a new algorithm is proposed that avoids some of the problems associated with conventional backpropagation, especially local minima, and gives relatively stable and fast convergence with low storage requirements. This is a separate learning algorithm in which the upper connections (hidden-to-output) and the lower connections (input-to-hidden) are trained separately. The algorithm requires less computational work than conventional backpropagation and other improved algorithms, and it is shown on various classification problems to be relatively reliable in overall performance.
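
A hedged sketch of the two-phase separate learning: the hidden-to-output connections are trained first, a target value for each hidden node is then derived from the output error, and the input-to-hidden connections are trained toward those targets. The rule used to form hidden targets below is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0,0],[0,1],[1,0],[1,1]], float)
T = np.array([[0],[1],[1],[0]], float)
W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(5000):
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # phase 1: train the upper (hidden-to-output) connections alone
    dY = (Y - T) * Y * (1 - Y)
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
    # phase 2: derive hidden-node targets from the output error, then
    # train the lower (input-to-hidden) connections toward those targets
    H_target = np.clip(H - dY @ W2.T, 0.0, 1.0)   # assumed hidden-target rule
    dH = (H - H_target) * H * (1 - H)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

print(np.round(Y.ravel(), 2))   # should approach [0, 1, 1, 0]
```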

The Performance Advancement of Test Algorithm for Inner Defects in Semiconductor Packages (반도체 패키지의 내부 결함 검사용 알고리즘 성능 향상)

  • 김재열; 윤성운; 한재호; 김창현; 양동조; 송경석
    • Proceedings of the Korean Society of Precision Engineering Conference / 2002.10a / pp.345-350 / 2002
  • In this study, artificial flaws in semiconductor packages are classified using pattern recognition techniques. For this purpose, an image pattern recognition package including custom software was developed, and the overall procedure, including ultrasonic image acquisition, equalization filtering, binarization, edge detection, and classifier design, is handled by a backpropagation neural network. In particular, various weight settings of the backpropagation neural network are compared, as are threshold levels for edge detection in the preprocessing stage before input to the multilayer perceptron (backpropagation neural network). The pattern recognition technique is applied to classifying defects in semiconductor packages as normal, crack, or delamination. The results show that a recognition rate of 100% is achievable with the backpropagation neural network.

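A compact sketch of the inspection pipeline the abstract names, with numpy stand-ins for each stage (equalization filtering, binarization, edge detection, MLP classification); the image size, network shape, and randomly weighted classifier are assumptions:

```python
import numpy as np

def equalize(img):
    """Histogram equalization on an integer grayscale image in [0, 255]."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size
    return (cdf[img] * 255).astype(np.uint8)

def binarize(img, threshold=128):
    return (img >= threshold).astype(float)

def edges(img):
    """Crude edge map: magnitude of horizontal and vertical differences."""
    gx = np.abs(np.diff(img, axis=1, prepend=0))
    gy = np.abs(np.diff(img, axis=0, prepend=0))
    return np.clip(gx + gy, 0, 1)

def classify(feature_vec, W1, W2):
    """Forward pass of a small MLP scoring the three defect classes."""
    h = np.tanh(feature_vec @ W1)
    return ["normal", "crack", "delamination"][int(np.argmax(h @ W2))]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (16, 16))              # stand-in ultrasonic image
x = edges(binarize(equalize(img))).ravel()        # preprocessing pipeline
W1 = rng.normal(0, 0.1, (256, 32))                # untrained weights, for shape only
W2 = rng.normal(0, 0.1, (32, 3))
print(classify(x, W1, W2))
```

In the paper the network is trained by backpropagation on labeled ultrasonic images; the random weights here only demonstrate the data flow from raw image to class label.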