• Title/Abstract/Keyword: Backpropagation Neural Network


수정된 Activation Function Derivative를 이용한 오류 역전파 알고리즘의 개선 (Improved Error Backpropagation Algorithm using Modified Activation Function Derivative)

  • 권희용;황희영
    • 대한전기학회논문지
    • /
    • 제41권3호
    • /
    • pp.274-280
    • /
    • 1992
  • In this paper, an Improved Error Backpropagation Algorithm is introduced which avoids Network Paralysis, one of the problems of the Error Backpropagation learning rule. For this purpose, we analyzed the cause of Network Paralysis and modified the Activation Function Derivative of the standard Error Backpropagation Algorithm, which is regarded as the cause of the phenomenon. The characteristics of the modified Activation Function Derivative are analyzed, and various experiments show that the modified algorithm performs better than the standard Error Backpropagation Algorithm.

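The entry above attributes network paralysis to the vanishing of the activation function derivative in saturated units. The paper's exact modification is not reproduced here; the sketch below only illustrates the general idea with an assumed additive offset (the offset value and the NumPy setup are illustrative assumptions, not the authors' formulation).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def standard_derivative(y):
    # Standard sigmoid derivative f'(x) = y(1 - y); it vanishes as y -> 0 or 1,
    # which is the usual source of "network paralysis" in saturated units.
    return y * (1.0 - y)

def modified_derivative(y, offset=0.1):
    # Illustrative modification (an assumption, not the paper's exact form):
    # a small positive offset keeps the error signal nonzero even when the
    # unit output saturates, so learning does not stall.
    return y * (1.0 - y) + offset

y = sigmoid(np.array([-8.0, 0.0, 8.0]))   # near-saturated, mid-range, near-saturated
print(standard_derivative(y))             # ~[0.0003, 0.25, 0.0003]
print(modified_derivative(y))             # ~[0.1003, 0.35, 0.1003]
```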

신경망을 이용한 선물가 예측 (Futures Price Prediction Using Backpropagation Neural Network)

  • 김성환;이상훈;김기태
    • 한국정보과학회:학술대회논문집
    • /
    • 한국정보과학회 2003년도 봄 학술발표논문집 Vol.30 No.1 (B)
    • /
    • pp.467-469
    • /
    • 2003
  • This paper describes a system for predicting KOSPI 200 futures prices. Using historical data, the system learns trading patterns and their changes as well as market price and volume patterns, and predicts future futures prices. Experiments with the L2K system, which uses a backpropagation neural network as its learning algorithm, together with tests over various input data and training data, were used to construct an optimal network and improve prediction accuracy.

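As a rough illustration of the kind of backpropagation regressor described in the entry above, the sketch below trains a one-hidden-layer network on lagged values of a synthetic price series. The window length, layer sizes, learning rate, and data are assumptions; they are not the L2K system's configuration or the KOSPI 200 data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a price series (the real system used KOSPI 200 futures).
prices = np.cumsum(rng.normal(size=500)) + 100.0
window = 10
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

# Standardize inputs and target so the tanh units do not saturate.
X = (X - X.mean()) / X.std()
y = (y - y.mean()) / y.std()

W1 = rng.normal(scale=0.5, size=(window, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))        # hidden -> output weights
lr = 0.05

for epoch in range(500):                       # plain batch backpropagation
    h = np.tanh(X @ W1)                        # hidden activations
    pred = (h @ W2).ravel()                    # linear output for regression
    err = pred - y
    gW2 = h.T @ err[:, None] / len(y)          # backpropagated gradients
    gW1 = X.T @ ((err[:, None] @ W2.T) * (1.0 - h ** 2)) / len(y)
    W2 -= lr * gW2
    W1 -= lr * gW1

print("training MSE:", np.mean((pred - y) ** 2))
```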

다치 신경 망의 BP 학습 알고리즘을 이용한 패턴 인식 (Pattern Recognition Using BP Learning Algorithm of Multiple Valued Logic Neural Network)

  • 김두완;정환묵
    • 한국지능시스템학회:학술대회논문집
    • /
    • 한국퍼지및지능시스템학회 2002년도 추계학술대회 및 정기총회
    • /
    • pp.502-505
    • /
    • 2002
  • This paper proposes a method of applying the BP (backpropagation) learning algorithm of a multiple-valued logic (MVL) neural network to pattern recognition. By using the MVL neural network for pattern recognition, the time and memory space required by the network can be minimized, and the possibility of adapting to environmental changes is presented. The MVL neural network is constructed on the basis of multiple-valued logic functions: the inputs are transformed by literal functions, the outputs are obtained using MIN and MAX operations, and partial derivatives of the multiple-valued logic expressions are used for learning.
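The network used in the paper is not reproduced here; the sketch below only illustrates the building blocks the abstract names, namely a window-style literal function on multi-valued inputs and MIN/MAX aggregation. The number of truth values and the example expression are assumptions.

```python
import numpy as np

R = 4  # assumed number of truth values: 0..3 (quaternary logic)

def literal(x, a, b):
    # Window literal: maximal truth value when a <= x <= b, otherwise 0.
    return np.where((a <= x) & (x <= b), R - 1, 0)

def mvl_min(*terms):   # MIN plays the role of logical AND in MVL
    return np.minimum.reduce(terms)

def mvl_max(*terms):   # MAX plays the role of logical OR in MVL
    return np.maximum.reduce(terms)

x1 = np.array([0, 1, 2, 3])
x2 = np.array([3, 2, 1, 0])

# Example MVL expression: MAX( MIN(1<=x1<=2, x2), MIN(x1, 0<=x2<=1) )
out = mvl_max(mvl_min(literal(x1, 1, 2), x2),
              mvl_min(x1, literal(x2, 0, 1)))
print(out)
```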

Configuration design of the trainset of a high-speed train using neural networks

  • Lee, Jangyong;Soonhung Han
    • 한국지능정보시스템학회:학술대회논문집
    • /
    • 한국지능정보시스템학회 2001년도 The Pacific Asian Conference On Intelligent Systems 2001
    • /
    • pp.116-121
    • /
    • 2001
  • Prediction of the top (service) speed of high-speed trains and configuration design of their trainsets have been studied using a neural network. The traction system of a high-speed train is composed of transformers, motor blocks, and traction motors, whose locations and numbers in the trainset formation should be determined in the early stage of conceptual train design. The components of the traction system are the heaviest parts of a train, so they strongly influence the top speed. Prediction of the top speed has been performed mainly with data associated with the traction system, using the widely used backpropagation neural network. The network has been trained with data from high-speed trains such as the TGV, ICE, and Shinkansen. Configuration design of the trainset determines the number of motor cars and traction motors and the weight and power of the train. The configuration results from the neural network are more accurate if the network is trained with data from the same type of train as the one being designed.


보정신경망을 이용한 냉연 압하력 적중율 향상 (Improvement of roll force precalculation accuracy in cold mill using a corrective neural network)

  • 이종영;조형석;조성준;조용중;윤성철
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 1996년도 한국자동제어학술회의논문집(국내학술편); 포항공과대학교, 포항; 24-26 Oct. 1996
    • /
    • pp.1083-1086
    • /
    • 1996
  • The cold rolling mill process in steel works uses stands of rolls to flatten a strip to a desired thickness. In this process, precalculation determines the mill settings before a strip actually enters the mill and is currently done by an outdated mathematical model. A corrective neural network model is proposed to improve the accuracy of the roll force prediction. Additional variables fed to the network include the chemical composition of the coil, its coiling temperature, and the aggregated amount of strip processed by each roll. The network was trained with standard backpropagation on 4,944 process data records collected from the no. 1 cold rolling mill from March 1995 through December 1995, and then tested on 1,586 unseen records from January 1996 through April 1996. The combined model reduced the prediction error by 32.8% on average.

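The corrective scheme described above can be read as a residual model: the existing analytic precalculation is kept, and a network is trained only on its error. The sketch below shows that structure under assumed stand-ins; `analytic_model`, the feature set, and the synthetic data are placeholders, not the plant model or process data from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Placeholder for the existing analytic roll-force precalculation.
def analytic_model(X):
    return 50.0 + 3.0 * X[:, 0] + 2.0 * X[:, 1]

X = rng.normal(size=(300, 4))    # e.g. composition, coiling temperature, roll wear, gauge
measured = analytic_model(X) + 5.0 * np.sin(X[:, 2]) + rng.normal(scale=0.5, size=300)

# The corrective network learns only the error of the analytic model,
# so the combined prediction is "analytic model + learned correction".
residual = measured - analytic_model(X)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, residual)

corrected = analytic_model(X) + net.predict(X)
print("analytic-only MSE:", np.mean((analytic_model(X) - measured) ** 2))
print("corrected MSE    :", np.mean((corrected - measured) ** 2))
```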

수정된 하니발 구조를 이용한 신경회로망의 하드웨어 구현 (A hardware implementation of neural network with modified HANNIBAL architecture)

  • 이범엽;정덕진
    • 대한전기학회논문지
    • /
    • 제45권3호
    • /
    • pp.444-450
    • /
    • 1996
  • A digital hardware architecture for an artificial neural network with learning capability is described in this paper. It is a modified version of the hardware architecture known as HANNIBAL (Hardware Architecture for Neural Networks Implementing Backpropagation Algorithm Learning). To implement an efficient neural network hardware, we analyzed various types of multipliers, which are the major functional blocks of the neuro-processor cell. Based on this result, we designed an efficient digital neural network hardware using a serial/parallel multiplier and tested its operation. We also analyzed the hardware efficiency with logic-level simulation.

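The abstract above centers on the choice of multiplier for the neuro-processor cell. As a purely behavioral illustration (not the paper's hardware design), the sketch below mimics a serial/parallel shift-and-add multiplier in which one operand is consumed bit-serially while the other is applied in parallel each cycle.

```python
def serial_parallel_multiply(a, b, n_bits=8):
    # Shift-and-add multiplication: the multiplier b is consumed one bit per
    # cycle (serially) while the multiplicand a is applied in parallel and
    # added, shifted, into the accumulator.
    acc = 0
    for cycle in range(n_bits):
        if (b >> cycle) & 1:          # current serial bit of b
            acc += a << cycle         # add the shifted multiplicand
    return acc

assert serial_parallel_multiply(23, 11) == 23 * 11
```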

확률적 근사법과 역전파 알고리즘을 이용한 다층 신경망의 학습성능 개선 (Improving the Training Performance of Multilayer Neural Network by Using Stochastic Approximation and Backpropagation Algorithm)

  • 조용현;최흥문
    • 전자공학회논문지B
    • /
    • 제31B권4호
    • /
    • pp.145-154
    • /
    • 1994
  • This paper proposes an efficient method for improving the training performance of a neural network by using a hybrid of stochastic approximation and the backpropagation algorithm. The proposed method improves training by applying a global optimization scheme that combines the two: an approximate initial point for fast global optimization is first estimated by stochastic approximation, and then the backpropagation algorithm, a fast gradient descent method, is applied from that point for high-speed optimization. Training is further accelerated by adjusting the training parameters of the output and hidden layers adaptively to the standard deviation of the neuron outputs of each layer. The proposed method has been applied to parity checking and pattern classification, and the simulation results show that its performance is superior to that of backpropagation, Baba's MROM, and Sun's method with randomized initial point settings. The results of adaptively adjusting the training parameters show that the proposed method further improves the convergence speed by about 20% in training.

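The two-stage idea in the abstract above — a stochastic search for a good initial point followed by gradient descent with layer-wise adaptive step sizes — can be sketched as below. The network size, perturbation schedule, and the exact adaptive-rate rule are assumptions, not the authors' formulation; only the overall structure follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-bit parity data (parity checking is one of the paper's benchmarks).
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
y = X.sum(axis=1) % 2

def forward(W1, W2):
    h = np.tanh(X @ W1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))
    return h, out

def loss(W1, W2):
    return np.mean((forward(W1, W2)[1] - y) ** 2)

# Stage 1: crude stochastic approximation -- random perturbations with a
# decreasing gain, keeping any move that lowers the loss, to estimate a
# good initial point for gradient descent.
W1 = rng.normal(scale=0.5, size=(3, 6))
W2 = rng.normal(scale=0.5, size=6)
for k in range(1, 201):
    gain = 1.0 / np.sqrt(k)
    dW1, dW2 = rng.normal(size=W1.shape), rng.normal(size=W2.shape)
    if loss(W1 + gain * dW1, W2 + gain * dW2) < loss(W1, W2):
        W1, W2 = W1 + gain * dW1, W2 + gain * dW2

# Stage 2: backpropagation from the estimated initial point, with each
# layer's step size tied to the spread of that layer's outputs (assumed form).
for _ in range(3000):
    h, out = forward(W1, W2)
    delta = (out - y) * out * (1.0 - out)          # output-layer delta (sigmoid)
    lr_out = 0.5 * (out.std() + 0.1)
    lr_hid = 0.5 * (h.std() + 0.1)
    W2 -= lr_out * (h.T @ delta) / len(y)
    W1 -= lr_hid * (X.T @ (np.outer(delta, W2) * (1.0 - h ** 2))) / len(y)

print("final mean squared error:", loss(W1, W2))
```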

두개의 Extended Kalman Filter를 이용한 Recurrent Neural Network 학습 알고리듬 (A Learning Algorithm for a Recurrent Neural Network Based on Dual Extended Kalman Filter)

  • 송명근;김상희;박원우
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2004년도 학술대회 논문집 정보 및 제어부문
    • /
    • pp.349-351
    • /
    • 2004
  • The classical dynamic backpropagation learning algorithm has problems with learning speed and the selection of learning parameters. The Extended Kalman Filter (EKF) is used effectively as a state estimation method for nonlinear dynamic systems. This paper presents a learning algorithm using a Dual Extended Kalman Filter (DEKF) for a Fully Recurrent Neural Network (FRNN). The DEKF learning algorithm gives minimum-variance estimates of the weights and the hidden outputs. The proposed DEKF learning algorithm is applied to the system identification of a nonlinear SISO system and compared with the dynamic backpropagation learning algorithm.

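To make the filtering view of training concrete, the sketch below runs an EKF over the weights of a single recurrent tanh neuron on a toy SISO identification task. Only the weight-filter half of the DEKF is shown; the full method also filters the hidden outputs with a second EKF. The plant, noise levels, and network size are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear SISO plant to identify.
u = rng.uniform(-1.0, 1.0, size=300)
d = np.zeros(300)
for k in range(1, 300):
    d[k] = np.tanh(0.5 * u[k] + 0.6 * d[k - 1])

w = np.zeros(3)              # [w_u, w_h, bias] of one recurrent tanh neuron
P = 100.0 * np.eye(3)        # weight-estimate covariance
Q = 1e-5 * np.eye(3)         # process noise on the weights
R = 0.1                      # measurement noise variance
h_prev = 0.0

for k in range(300):
    a = w[0] * u[k] + w[1] * h_prev + w[2]
    yhat = np.tanh(a)
    # Output Jacobian w.r.t. the weights (recurrent dependence truncated).
    H = (1.0 - yhat ** 2) * np.array([u[k], h_prev, 1.0])
    # EKF measurement update on the weight state.
    S = H @ P @ H + R
    K = P @ H / S
    w = w + K * (d[k] - yhat)
    P = P - np.outer(K, H) @ P + Q
    h_prev = yhat

print("estimated weights [w_u, w_h, bias]:", w)
```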

자기조직화 지도를 이용한 반도체 패키지 내부결함의 패턴분류 알고리즘 개발 (The Development of Pattern Classification for Inner Defects in Semiconductor Packages by Self-Organizing Map)

  • 김재열;윤성운;김훈조;김창현;양동조;송경석
    • 한국공작기계학회논문집
    • /
    • 제12권2호
    • /
    • pp.65-70
    • /
    • 2003
  • In this study, the researchers developed an estimation algorithm for artificial defects in semiconductor packages and applied it through pattern recognition technology. For this purpose, the estimation algorithm was implemented in software written in MATLAB. The software consists of several procedures, including ultrasonic image acquisition, equalization filtering, a Self-Organizing Map, and a Backpropagation Neural Network; the Self-Organizing Map and the Backpropagation Neural Network are both neural network methods. The pattern recognition technology was applied to classify three kinds of defective patterns in semiconductor packages: Crack, Delamination, and Normal. According to the results, the estimation algorithm achieved recognition rates of 75.7% (Crack), 83.4% (Delamination), and 87.2% (Normal).
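The processing chain named in the abstract — equalization filtering, an unsupervised SOM stage, and a backpropagation classifier — is sketched below on synthetic stand-ins for ultrasonic image patches. The patch sizes, SOM size, class patterns, and classifier settings are assumptions, not the MATLAB software from the study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for ultrasonic patches of the three classes.
X = rng.normal(size=(150, 64))                  # 150 patches, 64 pixels each
labels = np.repeat([0, 1, 2], 50)               # Crack / Delamination / Normal
X[labels == 1, :32] += 1.5                      # give each class its own pattern
X[labels == 2, 32:] += 1.5

def histogram_equalize(img, bins=32):
    # Spread the intensity distribution before feature extraction.
    hist, edges = np.histogram(img, bins=bins)
    cdf = np.cumsum(hist) / img.size
    return np.interp(img, edges[:-1], cdf)

X = np.apply_along_axis(histogram_equalize, 1, X)

# Minimal 1-D self-organizing map as an unsupervised feature stage:
# each patch is represented by its distances to the learned prototypes.
nodes = rng.normal(size=(6, 64))
for t in range(2000):
    x = X[rng.integers(len(X))]
    winner = np.argmin(((nodes - x) ** 2).sum(axis=1))
    lr = 0.5 * (1 - t / 2000)
    for j in range(len(nodes)):                 # neighborhood shrinks with distance
        nodes[j] += lr * np.exp(-abs(j - winner)) * (x - nodes[j])

features = ((X[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)

# Backpropagation classifier on the SOM features.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```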

Scanning Acoustic Tomograph 방식을 이용한 지능형 반도체 평가 알고리즘 (The Intelligence Algorithm of Semiconductor Package Evaluation by using Scanning Acoustic Tomograph)

  • 김재열;김창현;송경석;양동조;장종훈
    • 한국공작기계학회:학술대회논문집
    • /
    • 한국공작기계학회 2005년도 춘계학술대회 논문집
    • /
    • pp.91-96
    • /
    • 2005
  • In this study, the researchers developed an estimation algorithm for artificial defects in semiconductor packages and applied it through pattern recognition technology. For this purpose, the estimation algorithm was implemented in software written in MATLAB. The software consists of several procedures, including ultrasonic image acquisition, equalization filtering, a Self-Organizing Map, and a Backpropagation Neural Network; the Self-Organizing Map and the Backpropagation Neural Network are both neural network methods. The pattern recognition technology was applied to classify three kinds of defective patterns in semiconductor packages: Crack, Delamination, and Normal. According to the results, the estimation algorithm achieved recognition rates of 75.7% (Crack), 83.4% (Delamination), and 87.2% (Normal).
