• Title/Summary/Keyword: Backpropagation learning rule

Improved Error Backpropagation Algorithm using Modified Activation Function Derivative (수정된 Activation Function Derivative를 이용한 오류 역전파 알고리즘의 개선)

  • 권희용;황희영
    • The Transactions of the Korean Institute of Electrical Engineers / v.41 no.3 / pp.274-280 / 1992
  • In this paper, an improved error backpropagation algorithm is introduced that avoids network paralysis, one of the known problems of the error backpropagation learning rule. The cause of network paralysis is analyzed, and the activation function derivative of the standard algorithm, regarded as the source of the phenomenon, is modified accordingly. The characteristics of the modified activation function derivative are analyzed, and various experiments show that the modified algorithm performs better than the standard error backpropagation algorithm. (A rough sketch of the general idea appears below.)

  • PDF
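
The abstract does not give the exact modification, but a common way to avoid network paralysis is to keep the activation derivative from vanishing when a unit saturates. The following is a minimal sketch of that general idea only; the constant offset (0.1), the single-unit setup, and all names are illustrative assumptions, not the authors' formula.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def standard_derivative(y):
    # y = sigmoid(x); this vanishes as y -> 0 or 1, which can stall learning
    return y * (1.0 - y)

def modified_derivative(y, offset=0.1):
    # Assumed modification: keep a floor on the derivative so saturated units
    # still receive a usable error signal (one classic remedy for network
    # paralysis; the paper's actual modification may differ).
    return y * (1.0 - y) + offset

# One gradient step on a single saturated sigmoid unit.
w = np.array([2.0, 2.0, 2.0])
x, target = np.array([5.0, 5.0, 5.0]), 0.0   # large net input -> saturation
y = sigmoid(x @ w)
error = y - target
for name, deriv in [("standard", standard_derivative(y)),
                    ("modified", modified_derivative(y))]:
    grad_w = error * deriv * x
    print(name, "gradient norm:", np.linalg.norm(grad_w))
```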

Fuzzy Neural Network Using a Learning Rule utilizing Selective Learning Rate (선택적 학습률을 활용한 학습법칙을 사용한 신경회로망)

  • Baek, Young-Sun;Kim, Yong-Soo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.5 / pp.672-676 / 2010
  • This paper presents a learning rule that weights data near the decision boundary more heavily. The rule generates a better decision boundary by reducing the effect of outliers on it. The proposed learning rule is integrated into the IAFC neural network, which is stable enough to maintain previous learning results and plastic enough to learn new data. The performance of the proposed fuzzy neural network is compared with that of an LVQ neural network and a backpropagation neural network; the results show that the proposed fuzzy neural network performs better than both.
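
The abstract does not spell out the selective learning rate or the IAFC internals; below is only a rough prototype-update sketch of the idea of scaling the learning rate by how close a sample lies to the decision boundary. The proximity measure, the base rate, and all names are assumptions.

```python
import numpy as np

def selective_update(prototypes, proto_labels, x, y, base_lr=0.1):
    """LVQ-style update whose learning rate grows for samples near the
    decision boundary (nearly equidistant from the two closest prototypes)
    and shrinks for likely outliers far from the boundary."""
    d = np.linalg.norm(prototypes - x, axis=1)
    nearest, second = np.argsort(d)[:2]

    proximity = d[nearest] / (d[second] + 1e-12)   # ~1 near the boundary, ~0 far away
    lr = base_lr * proximity                        # assumed selective learning rate

    sign = 1.0 if proto_labels[nearest] == y else -1.0
    prototypes[nearest] += sign * lr * (x - prototypes[nearest])
    return prototypes

# Example: two prototypes, one sample lying near the boundary between them.
protos = np.array([[0.0, 0.0], [2.0, 0.0]])
print(selective_update(protos, [0, 1], np.array([0.9, 0.1]), y=0))
```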

A Study on the Neuro-Fuzzy Control for an Inverted Pendulum System (도립진자 시스템의 뉴로-퍼지 제어에 관한 연구)

  • 소명옥;류길수
    • Journal of Advanced Marine Engineering and Technology / v.20 no.4 / pp.11-19 / 1996
  • Recently, fuzzy and neural network techniques have been successfully applied to the control of complex and ill-defined systems in a wide variety of areas, such as robots, water purification, automatic train operation, and automatic container crane operation. In this paper, we present a neuro-fuzzy controller that unifies fuzzy logic and multi-layered feedforward neural networks. Fuzzy logic provides a means for converting linguistic control knowledge into control actions, while feedforward neural networks provide salient features such as learning and parallelism. In the proposed neuro-fuzzy controller, the parameters of the membership functions in the antecedent part of the fuzzy inference rules are identified using the error backpropagation algorithm as the learning rule, while the coefficients of the linear combination of input variables in the consequent part are determined by least-squares estimation. Finally, the effectiveness of the proposed controller is verified through computer simulation of an inverted pendulum system. (A rough sketch of this hybrid estimation scheme appears below.)

  • PDF
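
The controller itself (and the pendulum model) is not reproduced here; the sketch below only illustrates the hybrid estimation scheme the abstract describes, in an ANFIS-like TSK form: consequent coefficients solved by least squares, antecedent Gaussian membership parameters nudged by a gradient step (a finite-difference gradient stands in for the paper's analytic backpropagation). The rule count, learning rate, and toy data are assumptions.

```python
import numpy as np

def norm_firing(X, centers, widths):
    # Gaussian memberships; a rule's firing strength is the product over inputs.
    diff = (X[:, None, :] - centers[None, :, :]) / widths[None, :, :]
    w = np.exp(-0.5 * diff ** 2).prod(axis=2)
    return w / w.sum(axis=1, keepdims=True)            # normalized strengths

def design(X, centers, widths):
    # Design matrix for TSK consequents y_r = a_r . x + b_r.
    w = norm_firing(X, centers, widths)                 # (n, R)
    Xb = np.hstack([X, np.ones((len(X), 1))])           # (n, d+1)
    return (w[:, :, None] * Xb[:, None, :]).reshape(len(X), -1)

def train(X, y, n_rules=3, epochs=50, lr=0.05, h=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_rules, replace=False)].astype(float)
    widths = np.tile(X.std(axis=0), (n_rules, 1))
    for _ in range(epochs):
        # (1) consequent part: linear coefficients by least-squares estimation
        theta, *_ = np.linalg.lstsq(design(X, centers, widths), y, rcond=None)
        # (2) antecedent part: gradient step on the membership parameters
        for params in (centers, widths):
            grad = np.zeros_like(params)
            for idx in np.ndindex(params.shape):
                orig = params[idx]
                params[idx] = orig + h
                lp = np.mean((design(X, centers, widths) @ theta - y) ** 2)
                params[idx] = orig - h
                lm = np.mean((design(X, centers, widths) @ theta - y) ** 2)
                params[idx] = orig
                grad[idx] = (lp - lm) / (2 * h)
            params -= lr * grad
    return centers, widths, theta

# Toy usage on a 1-D function (a stand-in for the pendulum state/control data).
X = np.linspace(-2, 2, 80)[:, None]
y = np.tanh(2 * X[:, 0])
centers, widths, theta = train(X, y)
print("training MSE:", np.mean((design(X, centers, widths) @ theta - y) ** 2))
```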

Improved Learning Algorithm with Variable Activating Functions

  • Pak, Ro-Jin
    • Journal of the Korean Data and Information Science Society / v.16 no.4 / pp.815-821 / 2005
  • Among the various artificial neural networks, the backpropagation network (BPN) has become a standard one. One of the components of a neural network is the activation (transfer) function, of which the sigmoid is a representative example. We have found that updating the slope parameter of the sigmoid function simultaneously with the weights can improve the performance of a BPN. (A rough sketch of this idea appears below.)

  • PDF
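
A minimal sketch of training the sigmoid slope together with the weights, on a single unit and a toy task; the squared-error loss, learning rate, and data are assumptions, not the paper's experimental setup.

```python
import numpy as np

def sigmoid(z, a):
    # Sigmoid with an explicit slope parameter a: s(z) = 1 / (1 + exp(-a z))
    return 1.0 / (1.0 + np.exp(-a * z))

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
t = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b, a = np.zeros(2), 0.0, 1.0
lr = 0.5
for _ in range(300):
    z = X @ w + b
    y = sigmoid(z, a)
    err = y - t                        # gradient of 0.5*(y - t)^2 w.r.t. y
    dy_dz = a * y * (1 - y)            # d s(a z) / d z
    dy_da = z * y * (1 - y)            # d s(a z) / d a
    w -= lr * ((err * dy_dz) @ X) / len(X)
    b -= lr * np.mean(err * dy_dz)
    a -= lr * np.mean(err * dy_da)     # slope updated simultaneously with weights

print("accuracy:", np.mean((sigmoid(X @ w + b, a) > 0.5) == (t > 0.5)),
      "learned slope:", round(a, 3))
```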

Multi-layer Neural Network with Hybrid Learning Rules for Improved Robust Capability (Robustness를 형성시키기 위한 Hybrid 학습법칙을 갖는 다층구조 신경회로망)

  • 정동규;이수영
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.8 / pp.211-218 / 1994
  • In this paper we develop a hybrid learning rule to improve the robustness of multi-layer perceptrons. In most neural networks, the activation of a neuron is determined by a nonlinear transformation of the weighted sum of its inputs. By investigating the behaviour of hidden-layer activations, a new learning algorithm is developed that improves the robustness of multi-layer perceptrons. Unlike other methods, which reduce network complexity by placing restrictions on the synaptic weights, our error-backpropagation-based method increases the complexity of the underlying problem by imposing a saturation requirement on the hidden-layer neurons. We also find that the additional gradient-descent term for this requirement corresponds to the Hebbian rule, so our algorithm incorporates Hebbian learning into the error backpropagation rule. Computer simulation demonstrates fast learning convergence as well as improved robustness for classification and hetero-association of patterns. (A rough sketch of the hybrid update appears below.)

  • PDF
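
The exact saturation requirement is not given in the abstract; the sketch below only shows the flavor of the hybrid update, adding a Hebbian term (input times a centered hidden activation) to the ordinary error-backpropagation gradient for the input-to-hidden weights. The weighting factor, the centering at 0.5, and the toy task are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))
T = (X.sum(axis=1, keepdims=True) > 0).astype(float)

W1 = rng.normal(scale=0.3, size=(4, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.3, size=(6, 1)); b2 = np.zeros(1)
lr, lam = 0.5, 0.01                    # lam weights the Hebbian term (assumed)

for _ in range(1000):
    H = sigmoid(X @ W1 + b1)           # hidden activations
    Y = sigmoid(H @ W2 + b2)           # output
    dY = (Y - T) * Y * (1 - Y)         # ordinary backprop deltas
    dH = (dY @ W2.T) * H * (1 - H)

    W2 -= lr * (H.T @ dY) / len(X)
    b2 -= lr * dY.mean(axis=0)
    # Hybrid rule: backprop gradient plus a Hebbian term that pushes hidden
    # activations away from 0.5, i.e. toward saturation.
    hebb = X.T @ (H - 0.5) / len(X)
    W1 -= lr * ((X.T @ dH) / len(X) - lam * hebb)
    b1 -= lr * (dH.mean(axis=0) - lam * (H - 0.5).mean(axis=0))

print("train accuracy:", np.mean((Y > 0.5) == (T > 0.5)))
```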

A New Evolutionary Programming Algorithm using the Learning Rule of a Neural Network for Mutation of Individuals (신경회로망의 학습 알고리듬을 이용하여 돌연변이를 수행하는 새로운 진화 프로그래밍 알고리듬)

  • 임종화;최두현;황찬식
    • Journal of the Korean Institute of Telematics and Electronics C / v.36C no.3 / pp.58-64 / 1999
  • Evolutionary programming is mainly characterized by two factors: the selection strategy and the mutation rule. In this paper, a new mutation rule that takes the same form as the well-known backpropagation learning rule of neural networks is presented. The proposed mutation rule adopts the best individual's value as the target value for the generation. The temporal error improves exploration by guiding the direction of evolution, and the momentum term speeds up convergence. The efficiency and robustness of the proposed algorithm are verified on benchmark test functions. (A rough sketch of the mutation step appears below.)

  • PDF
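
A toy sketch of the described mutation, using the generation's best individual as the target plus a momentum term; the sphere benchmark, the (eta, momentum) values, the Gaussian noise, and the survivor selection are placeholders, not the paper's settings.

```python
import numpy as np

def sphere(x):                         # placeholder benchmark function
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(30, 10))
velocity = np.zeros_like(pop)          # momentum memory
eta, momentum = 0.5, 0.9               # assumed step size and momentum

for gen in range(200):
    fitness = np.array([sphere(ind) for ind in pop])
    best = pop[np.argmin(fitness)]

    # Mutation in backpropagation form: move each individual toward the
    # generation's best ("target"), with momentum, plus noise to keep exploring.
    error = best - pop
    velocity = momentum * velocity + eta * error
    offspring = pop + velocity + rng.normal(scale=0.05, size=pop.shape)

    # Keep the better of each parent/offspring pair (a simple selection stand-in).
    off_fitness = np.array([sphere(ind) for ind in offspring])
    improved = off_fitness < fitness
    pop[improved] = offspring[improved]
    velocity[~improved] = 0.0

print("best fitness after evolution:", min(sphere(ind) for ind in pop))
```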

Fuzzy Neural Network Model Using Asymmetric Fuzzy Learning Rates (비대칭 퍼지 학습률을 이용한 퍼지 신경회로망 모델)

  • Kim Yong-Soo
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.7 / pp.800-804 / 2005
  • This paper presents a fuzzy learning rule that is a fuzzified version of LVQ (Learning Vector Quantization). This fuzzy learning rule (fuzzy learning rule 3) uses fuzzy learning rates instead of the traditional learning rates. LVQ uses the same learning rate regardless of whether a sample is classified correctly, whereas the new fuzzy learning rule uses different learning rates depending on whether the classification is correct or not. The new fuzzy learning rule is integrated into the improved IAFC (Integrated Adaptive Fuzzy Clustering) neural network, which is both stable and plastic. The iris data set is used to compare the performance of the supervised IAFC neural network 3 with that of a backpropagation neural network; the results show that the supervised IAFC neural network 3 performs better.
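
The paper's rule fuzzifies the learning rates with membership values that the abstract does not specify; the sketch below only shows the asymmetric part of the idea on a plain LVQ update, with fixed (assumed) rates for correct and incorrect classification.

```python
import numpy as np

def asymmetric_lvq_step(prototypes, proto_labels, x, y,
                        lr_correct=0.05, lr_incorrect=0.15):
    """LVQ update with different learning rates depending on whether the
    winning prototype classifies the sample correctly (the paper replaces
    these fixed rates with fuzzy, membership-based ones)."""
    winner = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    if proto_labels[winner] == y:
        prototypes[winner] += lr_correct * (x - prototypes[winner])     # attract
    else:
        prototypes[winner] -= lr_incorrect * (x - prototypes[winner])   # repel harder
    return prototypes

# Example: a misclassified sample pushes its (wrong) winner away faster.
protos = np.array([[0.0, 0.0], [3.0, 0.0]])
print(asymmetric_lvq_step(protos, [0, 1], np.array([1.0, 0.0]), y=1))
```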

Piece-wise linear estimation of mechanical properties of materials with neural networks

  • Shin, Inho
    • 제어로봇시스템학회:학술대회논문집 / 1992.10b / pp.181-186 / 1992
  • Many real-world problems are concerned with estimation rather than classification. This paper presents an adaptive technique for estimating the mechanical properties of materials from acousto-ultrasonic waveforms, by adapting a piece-wise linear approximation technique to a multi-layered neural network architecture. The piece-wise linear approximation network (PWLAN) finds a set of connected hyperplanes that fit the input vectors as closely as possible; the corresponding architecture requires only one hidden layer to estimate any curve as an output pattern. A learning rule for the PWLAN is developed and applied to the acousto-ultrasonic data, and its efficiency is compared with that of a classical backpropagation network that uses the generalized delta rule as its learning algorithm. (A rough sketch of a one-hidden-layer piecewise-linear fit appears below.)

  • PDF
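
The PWLAN learning rule itself is not described in the abstract; the sketch below only illustrates the general notion of a one-hidden-layer network whose output is a continuous piecewise-linear fit to a curve, using hinge (ReLU) units trained by gradient descent. The target curve, unit count, and learning rate are assumptions, and this is not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)[:, None]
y = np.sin(x[:, 0])                       # stand-in curve; the paper estimates
                                          # material properties from waveforms

H = 8                                     # number of hinge (hidden) units
W1 = rng.normal(size=(1, H)); b1 = rng.normal(size=H)
W2 = rng.normal(scale=0.1, size=H); b2 = 0.0
lr = 0.01

for _ in range(5000):
    z = x @ W1 + b1
    h = np.maximum(z, 0.0)                # ReLU hinges -> piecewise-linear output
    pred = h @ W2 + b2
    err = pred - y

    # Plain gradient descent on the squared error.
    dh = np.outer(err, W2) * (z > 0)
    W2 -= lr * (h.T @ err) / len(x)
    b2 -= lr * err.mean()
    W1 -= lr * (x.T @ dh) / len(x)
    b1 -= lr * dh.mean(axis=0)

final = np.maximum(x @ W1 + b1, 0.0) @ W2 + b2
print("piecewise-linear fit MSE:", round(float(np.mean((final - y) ** 2)), 4))
```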

A Coevolution of Artificial-Organism Using Classification Rule And Enhanced Backpropagation Neural Network (분류규칙과 강화 역전파 신경망을 이용한 이종 인공유기체의 공진화)

  • Cho Nam-Deok;Kim Ki-Tae
    • The KIPS Transactions: Part B / v.12B no.3 s.99 / pp.349-356 / 2005
  • Application areas for artificial organisms are expanding rapidly as a way to get things done in dynamic and informal environments. Representing the behavior knowledge of artificial organisms in these areas with general-purpose programming or traditional AI methods leads to frequent modifications and poor responses in unpredictable situations. Machine-learning strategies aimed at solving these problems include genetic programming and evolving neural networks, but the learning methods of artificial organisms are still limited and cannot adequately represent life in the environment. With this in mind, this research proposes a new behavior evolution model. The model represents behavior knowledge with classification rules and enhanced backpropagation neural networks and distinguishes between the different species of artificial organisms. To evaluate the model, we applied it to competition problems between artificial organisms in a simulator and compared it with other systems; the results show that the model is superior in both the speed and the quality of learning. The model is characterized by the simultaneous learning of classification rules and neural networks represented on chromosomes with the help of a genetic algorithm, and by the consolidation of learning ability through the hybrid processing of the classification rules and the enhanced backpropagation neural network.
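
The model itself (environment, species, rule language, enhanced backpropagation) is far richer than the abstract conveys; the toy sketch below only shows the single idea of a chromosome that carries both a simple classification rule and a small network's weights, evolved together by a genetic algorithm. Every detail here (the toy task, rule form, network size, GA operators) is an assumption, and the paper's backpropagation refinement step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in environment: a 2-D classification task.  A chromosome encodes
# one threshold rule (threshold + class) followed by the weights of a tiny
# 2-4-1 network; both parts are evolved together by a genetic algorithm.
X = rng.normal(size=(200, 2))
labels = (X[:, 0] * X[:, 1] > 0).astype(int)
N_W = 2 * 4 + 4                                   # network weights (biases omitted)

def decide(chrom, x):
    thr, rule_class, w = chrom[0], int(chrom[1] > 0), chrom[2:]
    if x[0] > thr:                                # the evolved rule fires
        return rule_class
    W1, W2 = w[:8].reshape(2, 4), w[8:]           # otherwise ask the network
    return int(np.tanh(x @ W1) @ W2 > 0)

def fitness(chrom):
    return np.mean([decide(chrom, x) == t for x, t in zip(X, labels)])

pop = rng.normal(size=(30, 2 + N_W))
for gen in range(40):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]                       # elitist selection
    children = parents[rng.integers(0, 10, size=20)] \
               + rng.normal(scale=0.2, size=(20, 2 + N_W))        # Gaussian mutation
    pop = np.vstack([parents, children])

print("best behavior accuracy:", round(max(fitness(c) for c in pop), 3))
```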

A Study of Initial Weight Determination for Performance Enhancement in Backpropagation (에러 역전파 학습 성능 향상을 위한 초기 가중치 결정에 관한 연구)

  • 김웅명;이현수
    • Proceedings of the Korean Information Science Society Conference / 1998.10c / pp.333-335 / 1998
  • In an error backpropagation neural network, the learning speed and convergence rate are strongly affected by the distribution of the initial weights. In this study, we propose a new method for determining the initial weights using an unsupervised learning network (Hebbian learning rule). The unsupervised network is trained using the norm of the weights connected to each hidden-layer neuron so that it suits error backpropagation training. Simulations comparing the proposed method with standard error backpropagation training show that the proposed initial weight representation is superior in learning speed and convergence ability. (A rough sketch of a Hebbian initialization pass appears below.)

  • PDF
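
The exact way the weight norms are used is only hinted at in the abstract; the sketch below is one plausible reading: an Oja-style Hebbian pass (Hebbian learning with a norm-limiting decay) over the inputs to obtain initial input-to-hidden weights, which are then handed to an ordinary backpropagation trainer. Epochs, learning rate, and data are assumptions.

```python
import numpy as np

def hebbian_init(X, n_hidden, epochs=10, lr=0.01, seed=0):
    """Unsupervised (Hebbian) pass over the inputs to pick initial
    input-to-hidden weights for backpropagation.  Oja-style normalization
    keeps the norm of each hidden unit's weight vector bounded; the paper's
    exact use of the weight norms may differ."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    for _ in range(epochs):
        for x in X:
            h = x @ W                                  # linear hidden response
            W += lr * (np.outer(x, h) - W * (h ** 2))  # Oja's rule (Hebb + decay)
    return W

# Usage: initialize, then hand W off to an ordinary backpropagation trainer.
X = np.random.default_rng(1).normal(size=(500, 8))
W_init = hebbian_init(X, n_hidden=5)
print("per-unit weight norms:", np.linalg.norm(W_init, axis=0).round(2))
```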