• Title/Summary/Keyword: Backpropagation neural network (BP)

Search Result 47, Processing Time 0.022 seconds

Pattern Recognition Using BP Learning Algorithm of Multiple Valued Logic Neural Network (다치 신경 망의 BP 학습 알고리즘을 이용한 패턴 인식)

  • 김두완;정환묵
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2002.12a
    • /
    • pp.502-505
    • /
    • 2002
  • This paper proposes a method for pattern recognition using the backpropagation (BP) learning algorithm of a multiple-valued logic (MVL) neural network. Applying the MVL neural network to pattern recognition minimizes the time and memory the network requires and offers the possibility of adapting to environmental changes. The MVL neural network is built on multiple-valued logic functions: inputs are transformed by literal functions, outputs are obtained with MIN and MAX operations, and learning uses the partial derivatives of the multiple-valued logic expressions.
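The literal/MIN/MAX machinery this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's network: the radix and the literal bounds are assumed values chosen only for the example.

```python
# A minimal sketch (not the paper's implementation) of the multiple-valued
# logic (MVL) operations the abstract describes: inputs pass through literal
# (window) functions, and node outputs combine via MIN and MAX.
# Assumes 4-valued logic (values 0..3); the literal bounds are illustrative.

R = 4  # radix of the multiple-valued logic (assumed)

def literal(x, a, b):
    """MVL literal: maximal truth value if a <= x <= b, else 0."""
    return R - 1 if a <= x <= b else 0

def mvl_node(inputs, bounds):
    """MIN over the literals of one product term (MVL analogue of AND)."""
    return min(literal(x, a, b) for x, (a, b) in zip(inputs, bounds))

def mvl_output(inputs, terms):
    """MAX over product terms (MVL analogue of OR)."""
    return max(mvl_node(inputs, bounds) for bounds in terms)

# Two product terms over a 2-input, 4-valued pattern.
terms = [[(0, 1), (2, 3)], [(3, 3), (0, 0)]]
print(mvl_output([1, 2], terms))  # first term fires: 3
```

In the paper the learning step would adjust the literal parameters via partial derivatives of the MVL expressions; the sketch above covers only the forward pass.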

A Study on Methodology of Soil Resistivity Estimation Using the BP (역전파 알고리즘(BP)을 이용한 대지저항률 추정 방법에 관한 연구)

  • Ryu, Bo-Hyeok;Wi, Won-Seok;Kim, Jeong-Hun
    • The Transactions of the Korean Institute of Electrical Engineers A
    • /
    • v.51 no.2
    • /
    • pp.76-82
    • /
    • 2002
  • This paper presents a method of soil-resistivity estimation using a backpropagation (BP) neural network. Existing estimation programs are expensive, their estimation methods require complex techniques and take much time, and such programs are not yet widely used in Korea. For these reasons, a soil-resistivity estimation method using the BP algorithm was studied. Unlike expensive programs or graphical techniques that require many input stages, complicated calculations, and professional knowledge, the proposed method presents the equivalent earth resistivity immediately after apparent-resistivity values are entered into a simplified program on a personal computer, without many processing stages. The program offers reasonable accuracy, rapid processing time, and convenience for end users.

A Study on Performance Improvement of Neural Networks Using Genetic Algorithms (유전자 알고리즘을 이용한 신경 회로망 성능향상에 관한 연구)

  • Lim, Jung-Eun;Kim, Hae-Jin;Chang, Byung-Chan;Seo, Bo-Hyeok
    • Proceedings of the KIEE Conference
    • /
    • 2006.07d
    • /
    • pp.2075-2076
    • /
    • 2006
  • In this paper, we propose a new architecture for genetic algorithm (GA)-based backpropagation (BP). Conventional BP does not guarantee that the network generated through learning has the optimal architecture. The proposed GA-based BP enables a structurally more optimized network that is more flexible and preferable than a conventional BP network. Experimental results on BP neural network optimization show that this algorithm can effectively prevent the BP network from converging to a local optimum. Comparison shows that the improved genetic algorithm almost always avoids the trap of local optima and effectively improves convergence speed.

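The GA-over-architectures idea in the abstract above can be sketched as a toy: each genome encodes hidden-layer sizes, and fitness would normally be the validation error after brief BP training. Here a placeholder fitness (penalizing both under- and over-sized layers, with an assumed sweet spot of 8 units) stands in for training so the sketch stays self-contained; only the GA machinery reflects the paper's approach.

```python
# Toy GA over network architectures: genomes are lists of hidden-layer sizes.
# The fitness function is a stand-in for "train briefly with BP, return
# -validation_error"; the sweet spot of 8 units per layer is an assumption.
import random

random.seed(0)

def fitness(genome):
    # Placeholder for BP-based evaluation (illustrative only).
    return -sum((units - 8) ** 2 for units in genome)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.2):
    return [max(1, g + random.choice((-1, 1))) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=20, layers=2, generations=30):
    pop = [[random.randint(1, 16) for _ in range(layers)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # converges toward [8, 8] under the placeholder fitness
```

Because the elite half of the population survives each generation, the best fitness never decreases, which is what lets this scheme avoid the purely local search of BP alone.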

Learning an Artificial Neural Network Using Dynamic Particle Swarm Optimization-Backpropagation: Empirical Evaluation and Comparison

  • Devi, Swagatika;Jagadev, Alok Kumar;Patnaik, Srikanta
    • Journal of information and communication convergence engineering
    • /
    • v.13 no.2
    • /
    • pp.123-131
    • /
    • 2015
  • Training neural networks is a complex task of great importance in the field of supervised learning. In the training process, a set of input-output patterns is repeatedly presented to an artificial neural network (ANN), and from those patterns the weights of all the interconnections between neurons are adjusted until the specified input yields the desired output. In this paper, a new hybrid algorithm is proposed for global optimization of the connection weights in an ANN. Dynamic swarms are shown to converge rapidly during the initial stages of a global search, but around the global optimum the search process becomes very slow. In contrast, the gradient descent method achieves faster convergence around the global optimum, and at the same time its convergence accuracy can be relatively high. The proposed hybrid algorithm therefore combines the dynamic particle swarm optimization (DPSO) algorithm with the backpropagation (BP) algorithm to train the weights of an ANN; the combination is referred to as the DPSO-BP algorithm. We show the superiority, in time performance and quality of solution, of the proposed hybrid algorithm over more standard neural network training algorithms. The algorithms are compared on two different datasets, and simulation results are reported.
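The two-phase idea in this abstract (swarm for the global search, gradient descent for the final refinement) can be sketched on a simple 2-D loss with a known gradient, which stands in for the ANN's error surface; the swarm parameters are conventional PSO defaults, not values from the paper.

```python
# A sketch of the PSO-then-BP hybrid: a particle swarm explores globally,
# then gradient descent refines the swarm's best point. The quadratic loss
# and its gradient stand in for the network error and backpropagated
# gradients; the real DPSO-BP trains connection weights the same way.
import random

random.seed(1)

def loss(w):                      # stand-in for the network error surface
    x, y = w
    return (x - 3) ** 2 + (y + 1) ** 2

def grad(w):                      # stand-in for backpropagated gradients
    x, y = w
    return [2 * (x - 3), 2 * (y + 1)]

def pso(n=15, iters=40, inertia=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-10, 10) for _ in range(2)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=loss)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if loss(pos[i]) < loss(pbest[i]):
                pbest[i] = pos[i][:]
            if loss(pos[i]) < loss(gbest):
                gbest = pos[i][:]
    return gbest

def refine(w, lr=0.1, steps=50):  # the "BP" phase: plain gradient descent
    for _ in range(steps):
        g = grad(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

w = refine(pso())
print(round(loss(w), 6))  # near 0: global search plus local refinement
```

The division of labor matches the abstract: the swarm gets within the right basin quickly, and the gradient phase supplies the convergence accuracy the swarm lacks near the optimum.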

Forecasting algorithm using an improved genetic algorithm based on backpropagation neural network model (개선된 유전자 역전파 신경망에 기반한 예측 알고리즘)

  • Yoon, YeoChang;Jo, Na Rae;Lee, Sung Duck
    • Journal of the Korean Data and Information Science Society
    • /
    • v.28 no.6
    • /
    • pp.1327-1336
    • /
    • 2017
  • In this study, the problems in short-term stock market forecasting are analyzed, and the feasibility of the ARIMA method and the backpropagation neural network is discussed. The use of neural networks and genetic algorithms in short-term stock forecasting is also examined. Since the backpropagation algorithm often falls into the local-minima trap, we optimized the backpropagation neural network and established a genetic algorithm based on a backpropagation neural network as the forecasting model in order to achieve high forecasting accuracy. The experiments used the Korea Composite Stock Price Index series for prediction and provide the corresponding error analysis. The results show that the genetic algorithm based on the backpropagation neural network model proposed in this study significantly improves forecasting accuracy for the stock price index series.

Trajectory Control for a Robot Manipulator by Using Multilayer Neural Network (다층 신경회로망을 사용한 로봇 매니퓰레이터의 궤적제어)

  • 안덕환;이상효
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.16 no.11
    • /
    • pp.1186-1193
    • /
    • 1991
  • This paper proposes a trajectory control method for a robot manipulator using neural networks. The total torque for the manipulator is the sum of the linear feedback controller torque and the neural network feedforward controller torque. The proposed neural network is a multilayer network with time-delay elements, and it learns the inverse dynamics of the manipulator by means of the PD (proportional-derivative) controller's error torque. The error backpropagation (BP) learning neural network controller does not directly require manipulator dynamics information; instead, it acquires this information through training and stores it in the connection weights. The control effects of the proposed system are verified by computer simulation.

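The control structure in the abstract above (feedforward network plus PD feedback, with the PD torque itself serving as the learning signal) can be sketched on a one-joint toy plant. The plant parameters, gains, learning rate, and the linear-in-features "net" are illustrative assumptions, not values from the paper.

```python
# Toy sketch of feedback-error learning: total torque is PD feedback plus a
# neural feedforward term, and the feedforward model learns the inverse
# dynamics using the PD torque as its error signal. A one-joint plant
# (tau = m*qdd + b*qd) stands in for the manipulator, and a linear model
# over [qdd_d, qd_d] stands in for the multilayer network.
m, b = 2.0, 0.5            # assumed plant parameters
kp, kd = 40.0, 10.0        # assumed PD gains
w = [0.0, 0.0]             # feedforward weights for features [qdd_d, qd_d]
lr = 1e-4                  # assumed learning rate
dt = 0.001

def simulate(repeats=50):
    peaks = []
    for _ in range(repeats):           # repeat the same trajectory (training)
        q = qd = 0.0
        peak = 0.0
        for k in range(1000):
            t = k * dt
            q_d, qd_d, qdd_d = t * t, 2 * t, 2.0   # desired: q_d = t^2
            tau_pd = kp * (q_d - q) + kd * (qd_d - qd)
            tau_ff = w[0] * qdd_d + w[1] * qd_d
            # learn: the PD torque is the residual the net should absorb
            w[0] += lr * tau_pd * qdd_d
            w[1] += lr * tau_pd * qd_d
            qdd = (tau_pd + tau_ff - b * qd) / m   # plant dynamics
            qd += qdd * dt
            q += qd * dt
            peak = max(peak, abs(tau_pd))
        peaks.append(peak)
    return peaks

peaks = simulate()
print(peaks[0] > peaks[-1])  # feedback effort shrinks as the net learns
```

As the feedforward term converges to the true inverse dynamics (here w should approach [m, b] = [2.0, 0.5]), the PD controller has less and less residual torque to supply, which is exactly the training signal the abstract describes.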

Dynamic Control of Robot Manipulators Using Multilayer Neural Networks and Error Backpropagation (다층 신경회로 및 역전달 학습방법에 의한 로보트 팔의 다이나믹 제어)

  • 오세영;류연식
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.39 no.12
    • /
    • pp.1306-1316
    • /
    • 1990
  • A controller using a multilayer neural network is proposed for the dynamic control of a PUMA 560 robot arm. The controller is developed based on an error backpropagation (BP) neural network. Since a neural network can model an arbitrary nonlinear mapping, it is used as a commanded feedforward torque generator. A proportional-derivative (PD) feedback controller is used in parallel with the feedforward neural network to train the system. The neural network is trained on the current state of the manipulator as well as the PD feedback error torque. No a priori knowledge of the system dynamics is needed; this information is instead implicitly stored in the interconnection weights of the neural network. In another experiment, the neural network was trained with the current, past, and future positions only, without any use of velocity sensors. From this time window of position values, the BP network implicitly filters out the velocity and acceleration components for each joint. Computer simulation demonstrates such powerful characteristics of the neurocontroller as adaptation to changing environments, robustness to sensor noise, and continuous performance improvement through self-learning.


Input-Output Linearization of Nonlinear Systems via Dynamic Feedback (비선형 시스템의 동적 궤환 입출력 선형화)

  • Cho, Hyun-Seob
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.6 no.4
    • /
    • pp.238-242
    • /
    • 2013
  • We consider the problem of constructing observers for nonlinear systems with unknown inputs. Connectionist networks, also called neural networks, have been broadly applied to many different problems since McCulloch and Pitts mathematically demonstrated their information-processing ability in 1943. In this paper, we present a genetic neuro-control scheme for nonlinear systems. Our method differs from those using supervised learning algorithms, such as the backpropagation (BP) algorithm, which need training information at each step. The contributions of this work are a new approach to constructing the neural network architecture and its training.

Genetic Algorithm with the Local Fine-Tuning Mechanism (유전자 알고리즘을 위한 지역적 미세 조정 메카니즘)

  • 임영희
    • Korean Journal of Cognitive Science
    • /
    • v.4 no.2
    • /
    • pp.181-200
    • /
    • 1994
  • In the learning phase of a multilayer feedforward neural network, problems such as local minima, learning paralysis, and slow learning speed arise when the backpropagation algorithm is used. To overcome these problems, the genetic algorithm has been used as the learning method in multilayer feedforward neural networks instead of the backpropagation algorithm. However, because the genetic algorithm has no mechanism for the fine-tuned local search that the backpropagation method performs, it takes longer to converge to a globally optimal solution. In this paper, we suggest a new GA-BP method that provides fine-tuned local search to the genetic algorithm. The GA-BP method uses the gradient descent method as one of the genetic algorithm's operators, alongside mutation and crossover. To show the efficiency of the developed method, we applied it to the 3-parity bit problem and analyzed the results.
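The GA-BP idea in this abstract can be sketched as a genetic algorithm whose operator set includes a gradient-descent step, giving the population the fine-tuned local search that a plain GA lacks. A multimodal 1-D function stands in for the network's error surface (the paper uses 3-parity training error); the function and all parameters are assumptions for illustration.

```python
# GA with a gradient-descent operator: crossover and mutation explore
# globally, while a short gradient descent run fine-tunes each child
# locally. Pure gradient descent on f would get stuck in whichever local
# basin it starts in; the GA supplies the global search.
import math, random

random.seed(2)

def f(x):                       # stand-in error surface with many local minima
    return x * x + 10 * math.sin(x) ** 2

def df(x):
    return 2 * x + 20 * math.sin(x) * math.cos(x)

def gd_step(x, lr=0.02, steps=10):   # the "BP" operator: local fine-tuning
    for _ in range(steps):
        x -= lr * df(x)
    return x

def ga(pop_size=20, generations=40):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                      # crossover (blend)
            if random.random() < 0.3:
                child += random.gauss(0, 1.0)        # mutation
            children.append(gd_step(child))          # gradient fine-tuning
        pop = parents + children
    return min(pop, key=f)

best = ga()
print(round(f(best), 4))  # the global minimum of f is 0, at x = 0
```

The gradient operator plays the same role here as the backpropagation step in GA-BP: it snaps each candidate to the bottom of its local basin, so the GA only has to find the right basin.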

Construction of coordinate transformation map using neural network

  • Lee, Wonchang;Nam, Kwanghee
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1991.10b
    • /
    • pp.1845-1847
    • /
    • 1991
  • In general, it is not easy to find the linearizing coordinate transformation map for the class of systems that are state-equivalent to linear systems, because doing so requires solving a set of partial differential equations. It is possible to construct an arbitrary nonlinear function with a backpropagation (BP) net. Utilizing this property of the BP neural net, we construct the desired linearizing coordinate transformation map; that is, we implement an unknown coordinate transformation map through the training of the neural weights. We show an example that supports this idea.

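The BP-as-universal-approximator idea used throughout these papers, and in the last abstract in particular, can be sketched minimally: one hidden tanh layer trained by backpropagation to fit an unknown nonlinear map. The target function, network size, and learning rate are illustrative assumptions, not details from any of the papers above.

```python
# Minimal backpropagation: a one-hidden-layer tanh network trained by
# stochastic gradient descent to approximate the map y = sin(x) on [-2, 2].
import math, random

random.seed(3)
H = 12                                           # hidden units (assumed)
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def train(samples, epochs=2000, lr=0.05):
    global b2
    for _ in range(epochs):
        for x, y in samples:
            h, out = forward(x)
            err = out - y                            # dE/dout for E = err^2/2
            for j in range(H):
                dh = err * w2[j] * (1 - h[j] ** 2)   # backpropagated delta
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * dh * x
                b1[j] -= lr * dh
            b2 -= lr * err

data = [(x / 10.0, math.sin(x / 10.0)) for x in range(-20, 21)]
train(data)
mse = sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)
print(mse < 1e-3)
```

This is the mechanism the coordinate-transformation paper relies on: rather than solving partial differential equations for the map analytically, the map is encoded implicitly in the trained weights.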