• Title/Summary/Keyword: Backpropagation (BP)

Pattern Recognition Using BP Learning Algorithm of Multiple Valued Logic Neural Network (다치 신경망의 BP 학습 알고리즘을 이용한 패턴 인식)

  • 김두완;정환묵
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2002.12a / pp.502-505 / 2002
  • This paper proposes a method for pattern recognition using the BP (backpropagation) learning algorithm of an MVL (multiple-valued logic) neural network. Using the MVL neural network for pattern recognition minimizes the time and memory the network requires and offers the possibility of adapting to environmental changes. The MVL neural network is built from multiple-valued logic functions: the inputs are transformed by literal functions, the outputs are computed with MIN and MAX operations, and partial derivatives of the multiple-valued logic expressions are used for learning.
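
The abstract gives only the high-level recipe (literal-function inputs, MIN/MAX outputs, learning via partial derivatives). As a rough illustration of the evaluation half of that recipe, here is a minimal Python sketch of an MVL forward pass; the window-literal definition, the radix, and the term wiring are assumptions, not the paper's formulation, and the derivative-based learning step is omitted.

```python
import numpy as np

# Minimal sketch of an MVL forward pass (assumed formulation, not the
# paper's exact network). Values live on the discrete set {0, 1, ..., R-1}.
R = 4  # radix of the multiple-valued logic

def literal(x, a, b):
    """Window literal: R-1 when a <= x <= b, else 0 (a common MVL
    literal definition; an assumption here)."""
    return np.where((a <= x) & (x <= b), R - 1, 0)

def mvl_term(x, windows):
    """MIN of literals over the inputs: a product term of an MVL
    sum-of-products expression."""
    lits = np.array([literal(xi, a, b) for xi, (a, b) in zip(x, windows)])
    return lits.min()

def mvl_output(x, terms):
    """MAX over product terms: the MVL analogue of an OR of ANDs."""
    return max(mvl_term(x, windows) for windows in terms)

# Two product terms over a 2-input pattern, e.g. for a toy recognizer.
terms = [
    [(1, 2), (0, 1)],   # term 1: x0 in [1,2] AND x1 in [0,1]
    [(3, 3), (2, 3)],   # term 2: x0 == 3 AND x1 in [2,3]
]
print(mvl_output(np.array([1, 0]), terms))  # -> 3 (R-1): pattern accepted
print(mvl_output(np.array([0, 3]), terms))  # -> 0: pattern rejected
```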

BACKPROPAGATION BASED ON THE CONJUGATE GRADIENT METHOD WITH THE LINEAR SEARCH BY ORDER STATISTICS AND GOLDEN SECTION

  • Choe, Sang-Woong;Lee, Jin-Choon
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.06a / pp.107-112 / 1998
  • In this paper, we propose a new paradigm (NEW_BP) capable of overcoming the limitations of traditional backpropagation (OLD_BP). NEW_BP is based on the method of conjugate gradients with normalized direction vectors and computes the step size through a line search characterized by order statistics and golden section. Simulation results showed that NEW_BP was definitely superior to both the stochastic and the deterministic OLD_BP in terms of accuracy and rate of convergence, and might surmount the problem of local minima. Furthermore, they confirmed that the stagnation of training in OLD_BP results from limitations of the algorithm itself and that superficial remedies can never cure it.
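
The two ingredients the abstract names, conjugate-gradient directions with normalized vectors and a golden-section line search for the step size, can be sketched generically. The sketch below applies them to a stand-in smooth loss (in the paper, the loss is the network error as a function of the weights); the Polak-Ribière beta, the bracketing interval [0, 1], and the restart rule are assumptions, and the order-statistics characterization of the search is not reproduced.

```python
import numpy as np

PHI = (np.sqrt(5) - 1) / 2  # golden-ratio conjugate, ~0.618

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal 1-D function on [a, b] by golden-section search."""
    c, d = b - PHI * (b - a), a + PHI * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - PHI * (b - a)
        else:
            a, c = c, d
            d = a + PHI * (b - a)
    return (a + b) / 2

def cg_train(loss, grad, w, iters=200):
    """Conjugate-gradient descent with normalized directions and a
    golden-section line search for the step size."""
    g = grad(w)
    d = -g / (np.linalg.norm(g) + 1e-12)          # normalized direction
    for _ in range(iters):
        step = golden_section(lambda s: loss(w + s * d), 0.0, 1.0)
        w = w + step * d
        g_new = grad(w)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))  # PR+
        d = -g_new + beta * d
        d = d / (np.linalg.norm(d) + 1e-12)       # keep directions normalized
        g = g_new
    return w

# Toy check on the Rosenbrock function, standing in for a network loss.
loss = lambda w: (1 - w[0])**2 + 100 * (w[1] - w[0]**2)**2
grad = lambda w: np.array([
    -2 * (1 - w[0]) - 400 * w[0] * (w[1] - w[0]**2),
    200 * (w[1] - w[0]**2),
])
print(cg_train(loss, grad, np.array([-1.2, 1.0])))  # approaches [1, 1]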

Adaptive Error Constrained Backpropagation Algorithm (적응 오류 제약 Backpropagation 알고리즘)

  • 최수용;고균병;홍대식
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.10C / pp.1007-1012 / 2003
  • In order to accelerate the convergence speed of the conventional BP algorithm, constrained optimization techniques are applied to the BP algorithm. First, the noise-constrained least mean square algorithm and the zero noise-constrained LMS algorithm are applied (designated the NCBP and ZNCBP algorithms, respectively). These methods involve an important assumption: the filter or the receiver in the NCBP algorithm must know the noise variance. By means of extension and generalization of these algorithms, the authors derive an adaptive error-constrained BP algorithm, in which the error variance is estimated. This is achieved by modifying the error function of the conventional BP algorithm using Lagrangian multipliers. The convergence speeds of the proposed algorithms are 20 to 30 times faster than those of the conventional BP algorithm, and are faster than or almost the same as that achieved with a conventional linear adaptive filter using an LMS algorithm.
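
A minimal sketch of the general idea: an LMS-style update whose step size is scaled by a Lagrange-multiplier term driven by the gap between the squared error and a variance estimate, with the variance estimated online rather than assumed known, as in the adaptive variant. The recursions and constants below are illustrative assumptions, not the authors' published ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_ec_lms(x, d, n_taps=4, mu=0.01, gamma=1.0, beta=0.01, alpha=0.01):
    w = np.zeros(n_taps)
    lam = 0.0        # Lagrange-multiplier term
    var_hat = 0.0    # running estimate of the error variance
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor, most recent first
        e = d[n] - w @ u                    # a-priori error
        var_hat = (1 - alpha) * var_hat + alpha * e**2   # estimate variance
        lam = (1 - beta) * lam + beta * (e**2 - var_hat) # constraint term
        w = w + mu * (1 + gamma * lam) * e * u   # constraint-scaled LMS step
    return w

# Toy system identification: recover a 4-tap FIR filter from noisy data.
true_w = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, true_w)[:len(x)] + 0.05 * rng.standard_normal(len(x))
print(adaptive_ec_lms(x, d))  # should come out close to true_w
```

The step size grows while the squared error exceeds the estimated variance and relaxes back to plain LMS as the error floor is reached, which is the mechanism behind the reported convergence speedup.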

Enhanced Backpropagation : Algorithm and Numeric Examples (개선된 역전파법 : 알고리즘과 수치예제)

  • Han Hong-Su;Choi Sang-Ung;Jeong Hyeon-Sik;No Jeong-Gu
    • Management & Information Systems Review / v.2 / pp.75-93 / 1998
  • In this paper, we propose a new algorithm (N_BP) capable of overcoming the limitations of traditional backpropagation (O_BP). The N_BP is based on the method of conjugate gradients and calculates learning parameters through a line search characterized by order statistics and golden section. Experimental results showed that the N_BP was definitely superior to the O_BP, with and without a stochastic term, in terms of accuracy and rate of convergence, and might surmount the problem of local minima. Furthermore, they confirmed that the stagnation of learning in the O_BP results from limitations of the algorithm itself and that superficial remedies can never cure it.

A Study on Methodology of Soil Resistivity Estimation Using the BP (역전파 알고리즘(BP)을 이용한 대지저항률 추정 방법에 관한 연구)

  • Ryu, Bo-Hyeok;Wi, Won-Seok;Kim, Jeong-Hun
    • The Transactions of the Korean Institute of Electrical Engineers A / v.51 no.2 / pp.76-82 / 2002
  • This paper presents a method of soil-resistivity estimation using the backpropagation (BP) neural network. Existing estimation programs are expensive, their estimation methods require complex techniques and much time, and they are not yet widely used in Korea. For these reasons, a soil-resistivity estimation method using the BP algorithm has been studied. This paper suggests a method that differs from expensive programs or graphical techniques requiring many input stages, complicated calculations, and professional knowledge. The equivalent earth resistivity is presented immediately after the apparent resistivity is entered on a personal computer running a simplified program, without many processing stages. The program has the advantages of reasonable accuracy, rapid processing, and end-user confidence.
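
The abstract does not spell out the network or the data, so the following is a generic sketch of the kind of BP regression it describes: a small network trained to map several apparent-resistivity readings to one equivalent resistivity. The architecture, the scaling, and the synthetic data (standing in for field measurements) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: 4 apparent-resistivity readings -> 1 equivalent
# value (here a weighted mean plus noise, standing in for real curves).
X = rng.uniform(10.0, 1000.0, size=(200, 4))        # ohm-m readings
y = (X @ np.array([0.4, 0.3, 0.2, 0.1]))[:, None] \
    + rng.normal(0.0, 5.0, size=(200, 1))           # fake "equivalent" target
X_s, y_s = X / 1000.0, y / 1000.0                   # scale to roughly [0, 1]

# One hidden layer, tanh activation, plain gradient-descent BP.
W1 = rng.standard_normal((4, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.05

for epoch in range(2000):
    h = np.tanh(X_s @ W1 + b1)          # forward pass
    pred = h @ W2 + b2
    err = pred - y_s
    # backward pass (mean-squared-error gradients)
    gW2 = h.T @ err / len(X_s)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X_s.T @ dh / len(X_s)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

test = np.array([[100.0, 150.0, 200.0, 250.0]]) / 1000.0
print((np.tanh(test @ W1 + b1) @ W2 + b2) * 1000.0)  # estimated ohm-m
```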

Genetic Algorithm with the Local Fine-Tuning Mechanism (유전자 알고리즘을 위한 지역적 미세 조정 메카니즘)

  • 임영희
    • Korean Journal of Cognitive Science / v.4 no.2 / pp.181-200 / 1994
  • In the learning phase of a multilayer feedforward neural network, the backpropagation algorithm suffers from problems such as local minima, learning paralysis, and slow learning speed. To overcome these problems, the genetic algorithm has been used as the learning method in multilayer feedforward neural networks instead of the backpropagation algorithm. However, because the genetic algorithm has no mechanism for the fine-tuned local search used in the backpropagation method, it takes more time for the genetic algorithm to converge to a globally optimal solution. In this paper, we suggest a new GA-BP method that provides fine-tuned local search to the genetic algorithm. The GA-BP method uses the gradient descent method as one of the genetic algorithm's operators, alongside mutation and crossover. To show the efficiency of the developed method, we applied it to the 3-bit parity problem with analysis.
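
A minimal sketch of the GA-BP idea: alongside crossover and mutation, a gradient-descent step is applied to a few individuals each generation as an extra fine-tuning operator. The problem (XOR rather than the paper's 3-bit parity), the encoding, and the rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def loss(w):
    """2-2-1 network, weights packed into a flat vector of length 9."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y)**2)

def num_grad(w, eps=1e-5):
    """Central-difference gradient (keeps the sketch short)."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

pop = rng.standard_normal((30, 9))
for gen in range(200):
    fit = np.array([loss(ind) for ind in pop])
    pop = pop[np.argsort(fit)]             # best individuals first
    for i in range(15, 30):                # offspring replace the worst half
        a, b = pop[rng.integers(15)], pop[rng.integers(15)]
        child = np.where(rng.random(9) < 0.5, a, b)      # uniform crossover
        child += rng.normal(0, 0.1, 9) * (rng.random(9) < 0.2)  # mutation
        pop[i] = child
    # BP operator: fine-tune a few random individuals by gradient descent.
    for i in rng.integers(0, 30, size=5):
        pop[i] = pop[i] - 0.5 * num_grad(pop[i])

best = min(pop, key=loss)
print(loss(best))  # should be near 0 if XOR was learned
```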

Learning an Artificial Neural Network Using Dynamic Particle Swarm Optimization-Backpropagation: Empirical Evaluation and Comparison

  • Devi, Swagatika;Jagadev, Alok Kumar;Patnaik, Srikanta
    • Journal of information and communication convergence engineering / v.13 no.2 / pp.123-131 / 2015
  • Training neural networks is a complex task of great importance in the field of supervised learning. In the training process, a set of input-output patterns is repeatedly presented to an artificial neural network (ANN), and from those patterns the weights of all the interconnections between neurons are adjusted until the specified input yields the desired output. In this paper, a new hybrid algorithm is proposed for the global optimization of connection weights in an ANN. Dynamic swarms are shown to converge rapidly during the initial stages of a global search, but around the global optimum the search process becomes very slow. In contrast, the gradient descent method achieves faster convergence around the global optimum, and at the same time its convergence accuracy can be relatively high. Therefore, the proposed hybrid algorithm combines the dynamic particle swarm optimization (DPSO) algorithm with the backpropagation (BP) algorithm, also referred to as the DPSO-BP algorithm, to train the weights of an ANN. In this paper, we intend to show the superiority (time performance and quality of solution) of the proposed hybrid algorithm (DPSO-BP) over other, more standard algorithms in neural network training. The algorithms are compared using two different datasets, and the results are simulated.
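
A minimal sketch of the hybrid's two stages: PSO explores the weight space globally, then gradient descent (BP-style) refines the best particle near the optimum. A linearly decaying inertia weight stands in for the paper's dynamic-swarm law; the task, coefficients, and budgets are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.linspace(-np.pi, np.pi, 40)[:, None]
yv = np.sin(X).ravel()

def unpack(w):
    return w[:6].reshape(1, 6), w[6:12], w[12:18], w[18]

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    return np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - yv)**2)

def grad(w):
    """Analytic BP gradient of the MSE for the 1-6-1 tanh network."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                 # (40, 6)
    e = 2 * (h @ W2 + b2 - yv) / len(X)      # (40,)
    dh = np.outer(e, W2) * (1 - h**2)        # (40, 6)
    return np.concatenate([(X.T @ dh).ravel(), dh.sum(0), h.T @ e, [e.sum()]])

# Stage 1: particle swarm search over the 19 weights.
n, dim = 25, 19
pos = rng.standard_normal((n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for t in range(300):
    inertia = 0.9 - 0.5 * t / 300            # decaying inertia ("dynamic")
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = inertia * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    vel = np.clip(vel, -1.0, 1.0)
    pos = pos + vel
    f = np.array([loss(p) for p in pos])
    better = f < pbest_f
    pbest[better] = pos[better]
    pbest_f[better] = f[better]
    gbest = pbest[pbest_f.argmin()].copy()

# Stage 2: BP refinement from the swarm's best weights.
w = gbest.copy()
for _ in range(500):
    w -= 0.1 * grad(w)
print(loss(gbest), "->", loss(w))  # refinement should lower the error
```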

Input-Output Linearization of Nonlinear Systems via Dynamic Feedback (비선형 시스템의 동적 궤환 입출력 선형화)

  • Cho, Hyun-Seob
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.6 no.4 / pp.238-242 / 2013
  • We consider the problem of constructing observers for nonlinear systems with unknown inputs. Connectionist networks, also called neural networks, have been broadly applied to many different problems since McCulloch and Pitts mathematically demonstrated their information-processing ability in 1943. In this paper, we present a genetic neuro-control scheme for nonlinear systems. Our method differs from those using supervised learning algorithms, such as the backpropagation (BP) algorithm, which need training information at each step. The contributions of this paper are a new approach to constructing the neural network architecture and its training.

A study on Performance Improvement of Neural Networks Using Genetic algorithms (유전자 알고리즘을 이용한 신경 회로망 성능향상에 관한 연구)

  • Lim, Jung-Eun;Kim, Hae-Jin;Chang, Byung-Chan;Seo, Bo-Hyeok
    • Proceedings of the KIEE Conference / 2006.07d / pp.2075-2076 / 2006
  • In this paper, we propose a new architecture for genetic algorithm (GA)-based backpropagation (BP). Conventional BP does not guarantee that the network produced by learning has the optimal architecture, but the proposed GA-based BP makes the architecture a structurally more optimized network that is much more flexible and preferable than conventional BP. The experimental results on BP neural-network optimization show that this algorithm can effectively keep the BP network from converging to a local optimum. Comparison shows that the improved genetic algorithm can almost always avoid the trap of local optima and effectively improves convergence speed.
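
A minimal sketch of the GA-optimizes-the-architecture idea: each chromosome encodes a structure (here only the hidden-layer width, as a 4-bit string), and its fitness is the loss after a short BP training run. The encoding, dataset, and budgets are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.linspace(-1, 1, 50)[:, None]
y = X.ravel()**2

def train_loss(hidden, epochs=300, lr=0.1):
    """Train a 1-hidden-layer tanh net with plain BP; return final MSE."""
    W1 = rng.standard_normal((1, hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal(hidden) * 0.5
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        e = 2 * (h @ W2 + b2 - y) / len(X)
        W2 -= lr * h.T @ e; b2 -= lr * e.sum()
        dh = np.outer(e, W2) * (1 - h**2)
        W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(axis=0)
    return np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y)**2)

def decode(bits):
    """4-bit chromosome -> hidden width in 1..16."""
    return 1 + int("".join(map(str, bits)), 2)

pop = rng.integers(0, 2, size=(8, 4))
for gen in range(5):
    fit = np.array([train_loss(decode(c)) for c in pop])
    pop = pop[np.argsort(fit)]              # keep the best half
    for i in range(4, 8):                   # offspring: crossover + mutation
        a, b = pop[rng.integers(4)], pop[rng.integers(4)]
        cut = rng.integers(1, 4)
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(4) < 0.1
        child[flip] ^= 1
        pop[i] = child
print("best hidden width:", decode(pop[0]))
```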

Construction of coordinate transformation map using neural network

  • Lee, Wonchang;Nam, Kwanghee
    • Institute of Control, Robotics and Systems Conference Proceedings / 1991.10b / pp.1845-1847 / 1991
  • In general, it is not easy to find the linearizing coordinate-transformation map for the class of systems that are state-equivalent to linear systems, because a set of partial differential equations must be solved. However, an arbitrary nonlinear function can be constructed with a backpropagation (BP) net. Utilizing this property of the BP neural net, we construct a desired linearizing coordinate-transformation map; that is, we implement an unknown coordinate-transformation map through the training of neural weights. We show an example that supports this idea.
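
A minimal sketch of the idea, assuming the transformation is learned from sample pairs (x, T(x)): a BP net is trained to reproduce a simple polynomial map of the shape that appears in linearizing transformations. The target map and the architecture below are illustrative assumptions, not the paper's example.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(300, 2))
# The "unknown" map T: z1 = x1, z2 = x2 + x1^2 (assumed target).
Z = np.column_stack([X[:, 0], X[:, 1] + X[:, 0]**2])

W1 = rng.standard_normal((2, 10)) * 0.5
b1 = np.zeros(10)
W2 = rng.standard_normal((10, 2)) * 0.5
b2 = np.zeros(2)
lr = 0.1

for epoch in range(3000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    out = h @ W2 + b2
    e = 2 * (out - Z) / len(X)          # MSE gradient at the output
    gW2, gb2 = h.T @ e, e.sum(axis=0)
    dh = (e @ W2.T) * (1 - h**2)        # backpropagate through tanh
    gW1, gb1 = X.T @ dh, dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

x = np.array([[0.5, -0.2]])
print(np.tanh(x @ W1 + b1) @ W2 + b2)   # should approach [0.5, 0.05]
```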
