• Title/Summary/Keyword: Error Backpropagation Algorithm

Airline In-flight Meal Demand Forecasting with Neural Networks and Time Series Models

  • Lee, Young-Chan
    • Proceedings of the Korea Association of Information Systems Conference
    • /
    • 2000.11a
    • /
    • pp.36-44
    • /
    • 2000
  • The purpose of this study is to introduce a more efficient forecasting technique that could help reduce costs by eliminating the waste of airline in-flight meals. We use a neural network approach known to many researchers as the "Outstanding Forecasting Technique": a multi-layer perceptron trained with the backpropagation algorithm. We also suggest using other related information to improve the forecasting performance of the neural networks. We divided the data into three sets: a training set, a cross-validation set, and a test set. Time-lag variables are employed in our model, following the usual practice in time series forecasting, and accuracy is measured by mean squared error (MSE); a minimal sketch of this setup follows below. The suggested model performed best for economy-class in-flight meals. Forecasting the exact number of meals needed could reduce meal waste and therefore cost; better yet, it could enhance each airline's cost competitiveness, keep schedules on time, and lead to better service.
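
The setup the abstract describes (lagged inputs, a chronological training/cross-validation/test split, an MLP fitted by backpropagation, MSE scoring) can be sketched as below. This is a minimal illustration on a synthetic demand series; the lag count, split ratios, and network size are assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's code): an MLP forecaster on time-lag features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
raw = 100 + 10 * np.sin(np.arange(300) / 7) + rng.normal(0, 2, 300)  # synthetic demand
meals = (raw - raw.mean()) / raw.std()      # standardize for stable training

LAGS = 7                                    # time-lag variables, per the abstract
X = np.column_stack([meals[i:len(meals) - LAGS + i] for i in range(LAGS)])
y = meals[LAGS:]

# Chronological three-way split: training / cross-validation / test.
n = len(y)
i1, i2 = int(0.6 * n), int(0.8 * n)
X_tr, y_tr, X_cv, y_cv, X_te, y_te = X[:i1], y[:i1], X[i1:i2], y[i1:i2], X[i2:], y[i2:]

# Multi-layer perceptron; scikit-learn trains it with backpropagated gradients.
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
mlp.fit(X_tr, y_tr)

print("CV   MSE:", mean_squared_error(y_cv, mlp.predict(X_cv)))
print("Test MSE:", mean_squared_error(y_te, mlp.predict(X_te)))
```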

Design and Implementation of the Quality Performance Improvement for Process System Using Neural Network (가공시스템에서 신경회로망을 이용한 품질의 성능 개선에 관한 설계 및 구현)

  • 문희근;김영탁;김수정;김관형;탁한호;이상배
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2002.12a
    • /
    • pp.179-182
    • /
    • 2002
  • In this paper, the system uses an analog sensor and, with a CPU (80C196KC) operating the sensor, extracts features from the analog fish signal. After signal processing, these features are classified into the distinctive features and the outline of the fish using a neural network, an artificial intelligence scheme. This neural network classifies fish patterns with a very simple and short calculation: it has a linear activation function, and error backpropagation is used as the learning algorithm. The network is trained in an off-line process; because the adaptation period of the network is too long when random initial weights are used, off-line learning is introduced to reduce the training time (a toy sketch follows below). We confirmed that this method performs better than existing, somewhat outdated machines.
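
A single-layer network with a linear activation trained off-line by error backpropagation reduces to the delta rule on a fixed batch, as sketched below. The feature dimensions and class labels are made up for illustration, not taken from the paper's fish data.

```python
# Toy sketch: linear-activation network, off-line (batch) backpropagation.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))              # stand-in for extracted signal features
Y = (X @ rng.normal(size=(4, 2)) > 0).astype(float)   # two made-up pattern classes

W = rng.normal(scale=0.1, size=(4, 2))     # random initial weights
lr = 0.05
for epoch in range(500):                   # off-line learning loop
    out = X @ W                            # linear activation: output = XW
    W -= lr * X.T @ (out - Y) / len(X)     # delta rule: gradient of squared error

pred = (X @ W > 0.5).astype(float)
print("training accuracy:", (pred == Y).mean())
```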

The Optimal Bidding Strategy based on Error Backpropagation Algorithm in a Two-Way Bidding Pool Applying Cournot Model (쿠르노 모형을 적용한 양방향입찰 풀시장에서 오차 역전파 알고리즘을 이용한 최적 입찰전략수립)

  • Kwon, Byeong-Gook;Lee, Seung-Chul;Kim, Jong-Hwan
    • Proceedings of the KIEE Conference
    • /
    • 2003.11a
    • /
    • pp.475-478
    • /
    • 2003
  • This paper studies a technique by which a generator participating in a two-way bidding power pool market under the Cournot model establishes its optimal bid quantity and bid price, using the error backpropagation algorithm of a neural network as the bidding strategy for maximizing profit. The electricity market is modeled as a non-cooperative, incomplete-information market in which n generators participate; Bayesian conditional probability theory is applied to estimate the rival generators' cost functions and the market demand function, and the optimal bid quantity forming the Cournot-Nash equilibrium among the generators is predicted. Then, to maximize profit, the error backpropagation algorithm adjusts the connection weights on the market price elasticity and the Cournot market equilibrium price so that the bid price approaches the system marginal price, thereby establishing the optimal bidding strategy (an illustrative gradient sketch follows below).
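
The final step can be pictured as a tiny gradient descent: the bid price is a weighted combination of the price elasticity and the Cournot equilibrium price, and the squared gap to the system marginal price (SMP) is driven down by adjusting the two connection weights. All numbers below are hypothetical; the paper's market model is considerably richer.

```python
# Illustrative only: backpropagation-style weight adjustment of the bid price.
import numpy as np

elasticity, cournot_price, smp = 0.8, 42.0, 45.0   # assumed market quantities
w = np.array([0.5, 0.9])                           # connection weights
lr = 1e-4

for step in range(2000):
    bid = w[0] * elasticity + w[1] * cournot_price  # bid from weighted inputs
    err = bid - smp                                 # deviation from the SMP
    w -= lr * err * np.array([elasticity, cournot_price])  # d(err^2/2)/dw

print("final bid price:", w[0] * elasticity + w[1] * cournot_price)
```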

The Speed Control of Vector controlled Induction Motor Based on Neural Networks (뉴럴 네트워크 방식의 벡터제어에 의한 유도전동기의 속도 제어)

  • Lee, Dong-Bin;Ryu, Chang-Wan;Hong, Dae-Seung;Yim, Wha-Yeong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.9 no.5
    • /
    • pp.463-471
    • /
    • 1999
  • This paper presents a vector-controlled induction motor whose speed control is implemented with a neural network system and compared against a PI controller. The design employs a training strategy with a Neural Network Controller (NNC) and a Neural Network Emulator (NNE) for speed control. To update the weights of the controller, the emulator first updates its own parameters by identifying the motor's input and output; it then supplies the error path to the output stage of the controller using the backpropagation algorithm (a conceptual sketch of this loop follows below). Because the controller produces an adequate output for the system thanks to the learning capability of neural networks, the actual motor speed under the neural-network-based vector control follows the reference speed better than that of the linear PI speed controller.
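
The NNE/NNC loop described here can be sketched conceptually: the emulator learns the motor's input-output map, then the speed error is backpropagated through the emulator to reach the controller's weights. The toy plant, network sizes, and learning rates below are assumptions for illustration, not the paper's motor model.

```python
# Conceptual NNC/NNE sketch with a toy first-order plant in place of the motor.
import numpy as np

rng = np.random.default_rng(2)

def plant(speed, u):
    return 0.9 * speed + 0.1 * u           # toy dynamics, not a real motor

# Emulator (NNE): (speed, u) -> next speed. Controller (NNC): (ref, speed) -> u.
W1, W2 = rng.normal(scale=0.3, size=(2, 8)), rng.normal(scale=0.3, size=8)
V1, V2 = rng.normal(scale=0.3, size=(2, 8)), rng.normal(scale=0.3, size=8)

lr_e, lr_c, speed, ref = 0.01, 0.002, 0.0, 1.0
for t in range(5000):
    hc = np.tanh(np.array([ref, speed]) @ V1)      # controller forward pass
    u = hc @ V2
    new_speed = plant(speed, u)

    xe = np.array([speed, u])                      # 1) emulator identifies motor
    he = np.tanh(xe @ W1)
    e = he @ W2 - new_speed                        # emulator prediction error
    W2 -= lr_e * e * he
    W1 -= lr_e * np.outer(xe, (W2 * e) * (1 - he**2))

    dy_du = ((1 - he**2) * W1[1]) @ W2             # 2) error path through the NNE
    delta_u = (new_speed - ref) * dy_du            # speed error routed to controller
    V2 -= lr_c * delta_u * hc
    V1 -= lr_c * np.outer(np.array([ref, speed]), (V2 * delta_u) * (1 - hc**2))
    speed = new_speed

print(f"speed after training: {speed:.3f} (reference {ref})")
```

In practice the two learning rates must be tuned against each other: the emulator has to track the plant closely enough that its gradient dy/du points the controller update in the right direction.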

Neural Network-based Time Series Modeling of Optical Emission Spectroscopy Data for Fault Prediction in Reactive Ion Etching

  • Sang Jeen Hong
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.4
    • /
    • pp.131-135
    • /
    • 2023
  • Neural network-based time series models called time series neural networks (TSNNs) are trained by the error backpropagation algorithm and used to predict process shifts of parameters such as gas flow, RF power, and chamber pressure in reactive ion etching (RIE). The training data consist of process conditions as well as principal components (PCs) of optical emission spectroscopy (OES) data collected in-situ. Data are generated during the etching of benzocyclobutene (BCB) in an SF6/O2 plasma. Combinations of baseline and faulty responses for each process parameter are simulated, and a moving average of TSNN predictions successfully identifies process shifts in the recipe parameters for various degrees of faults (a sketch of this flow follows below).
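
The modeling flow (PCs of OES data in, a windowed neural network trained by backpropagation, a moving average over prediction errors for shift detection) might look like the sketch below. The data are synthetic, and the PC count, window length, and threshold are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: PCA features -> time series neural network -> moving average.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
spectra = rng.normal(size=(400, 128))      # stand-in for in-situ OES scans
spectra[250:] += 0.5                       # simulated fault: shifted response
pcs = PCA(n_components=3).fit_transform(spectra)

WIN = 5                                    # past-PC window fed to the TSNN
X = np.stack([pcs[i:i + WIN].ravel() for i in range(len(pcs) - WIN)])
y = pcs[WIN:, 0]                           # predict the next first PC

tsnn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
tsnn.fit(X[:200], y[:200])                 # train on the baseline region only

resid = np.abs(tsnn.predict(X) - y)        # prediction error over the whole run
ma = np.convolve(resid, np.ones(10) / 10, mode="valid")   # moving average
print("first window flagged:", int(np.argmax(ma > 3 * ma[:100].mean())))
```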

A piecewise affine approximation of sigmoid activation functions in multi-layered perceptrons and a comparison with a quantization scheme (다중계층 퍼셉트론 내 Sigmoid 활성함수의 구간 선형 근사와 양자화 근사와의 비교)

  • 윤병문;신요안
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.2
    • /
    • pp.56-64
    • /
    • 1998
  • Multi-layered perceptrons, a nonlinear neural network model, have been widely used for various applications, mainly thanks to their good approximation capability for nonlinear functions. However, for digital hardware implementation of multi-layered perceptrons, a quantization scheme using "look-up tables (LUTs)" is commonly employed to handle the nonlinear sigmoid activation functions in the networks, and it requires a large amount of storage to keep quantization errors acceptable. This paper is concerned with a new, effective methodology for digital hardware implementation of multi-layered perceptrons and proposes a "piecewise affine approximation" method in which the input domain is divided into a small number of sub-intervals and the nonlinear sigmoid function is linearly approximated within each sub-interval. Using the proposed method, we develop an expression and an error-backpropagation-type learning algorithm for a multi-layered perceptron, and compare the performance with the quantization method through Monte Carlo simulations on XOR problems (the two approximations are contrasted in the sketch below). Simulation results show that, in terms of learning convergence, the proposed method with a small number of sub-intervals significantly outperforms the quantization method with a very large storage requirement. We expect from these results that the proposed method can be utilized in digital system implementations to significantly reduce the storage requirement, quantization error, and learning time of the quantization method.
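
The contrast between the two schemes can be demonstrated directly: a piecewise affine sigmoid interpolates linearly between a few breakpoints, while LUT quantization snaps the input onto a dense grid. The breakpoint set and table size below are illustrative choices, not the paper's exact settings.

```python
# Piecewise affine sigmoid vs. look-up-table quantization.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-8, 8, 10001)

# Piecewise affine: few breakpoints, linear interpolation inside each sub-interval.
bp = np.array([-8.0, -4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0, 8.0])
pwa = np.interp(x, bp, sigmoid(bp))

# LUT quantization: round the input onto a 256-entry table and look the value up.
table_x = np.linspace(-8, 8, 256)
idx = np.clip(np.round((x + 8) / 16 * 255).astype(int), 0, 255)
lut = sigmoid(table_x)[idx]

print("max |error|, piecewise affine (9 breakpoints):", np.abs(pwa - sigmoid(x)).max())
print("max |error|, LUT quantization (256 entries): ", np.abs(lut - sigmoid(x)).max())
```

On this toy comparison, nine breakpoints already reach an error of the same order as a 256-entry table, which mirrors the storage-saving argument made in the abstract.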

The Speed Control of an Induction Motor Based on Neural Networks (뉴럴 네트워크를 이용한 유도 전동기의 속도 제어)

  • Lee, Dong-Bin;Ryu, Chang-Wan;Hong, Dae-Seung;Ko, Jae-Ho;Yim, Wha-Yeong
    • Proceedings of the KIEE Conference
    • /
    • 1999.07b
    • /
    • pp.516-518
    • /
    • 1999
  • This paper presents a feed-forward neural network design in place of a PI controller for the speed control of an induction motor. The design employs a training strategy with a Neural Network Controller (NNC) and a Neural Network Emulator (NNE). The emulator identifies the motor by simulating its input-output map; to update the weights of the controller, the emulator supplies the error path to the output stage of the controller using the backpropagation algorithm, and the controller then produces an adequate output for the system thanks to the learning capability of neural networks. It therefore adapts to changes in system characteristics caused by the load. The neural-network-based speed control is implemented on a vector-controlled induction motor. The simulation results demonstrate that the actual motor speed under the neural network system follows the reference speed closely, minimizing the error, and that the scheme can be implemented within vector control theory.

Detection of High Impedance Fault Using Adaptive Neuro-Fuzzy Inference System (적응 뉴로 퍼지 추론 시스템을 이용한 고임피던스 고장검출)

  • 유창완
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.9 no.4
    • /
    • pp.426-435
    • /
    • 1999
  • A high impedance fault (HIF) is one of the serious problems facing the electric utility industry today. Because of the high impedance of a downed conductor under some conditions, these faults are not easily detected by overcurrent-based protection devices and can cause fires and personal hazards. In this paper, a new method for HIF detection using an adaptive neuro-fuzzy inference system (ANFIS) is proposed. Since the arcing fault current changes differently during the high- and low-voltage portions of the conductor voltage waveform, we first divide one cycle of the fault current into four equally spanned data windows according to the magnitude of the conductor voltage. The fast Fourier transform (FFT) is applied to each data window, and the frequency spectra of the current waveform are chosen as inputs to the ANFIS after an input selection step (a simplified preprocessing sketch follows below). Using staged-fault and normal data, the ANFIS is trained to discriminate between normal and HIF conditions by a hybrid learning algorithm; this algorithm combines gradient descent with the least-squares method and shows rapid convergence and improved convergence error. The proposed method shows good performance when applied to staged-fault data and to high-impedance-like loads (HIFLL) such as arc welders.
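
The preprocessing step might be sketched as follows: one cycle of current is partitioned into four equal-sized windows by conductor-voltage magnitude, and each window is FFT'd into candidate features. The waveform, sampling rate, and grouping rule below are simplified assumptions, not the paper's exact procedure.

```python
# Simplified sketch of the windowing and FFT feature extraction.
import numpy as np

fs, f0 = 7680, 60                          # assumed sampling rate and fundamental
n = fs // f0                               # samples in one cycle (128)
t = np.arange(n) / fs
voltage = np.sin(2 * np.pi * f0 * t)
current = (np.sin(2 * np.pi * f0 * t)      # toy arcing current: odd harmonic + noise
           + 0.1 * np.sin(2 * np.pi * 3 * f0 * t)
           + 0.05 * np.random.default_rng(4).normal(size=n))

# Group the cycle's samples into four equal windows by |voltage| magnitude.
order = np.argsort(np.abs(voltage))
windows = np.array_split(current[order], 4)

# Low-order FFT magnitudes of each window become candidate ANFIS inputs.
features = np.concatenate([np.abs(np.fft.rfft(w))[:8] for w in windows])
print("feature vector length:", features.size)
```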

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among those architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years these supervised models have gained more popularity than unsupervised learning models such as deep belief networks, because they have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function (a bare-bones example follows below). Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields; this means that all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers; what the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, i.e., vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
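
The backpropagation procedure summarized in this abstract (the gradient of an error function with respect to every weight, handed to gradient descent) fits in a few lines. The network below is a generic two-layer example with arbitrary sizes and target, not any model from the paper.

```python
# Bare-bones backpropagation: gradients of squared error for all weights.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(64, 3))
Y = np.sin(X.sum(axis=1, keepdims=True))   # arbitrary target function
W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

lr = 0.1
for epoch in range(500):
    H = np.tanh(X @ W1)                    # forward pass: hidden activations
    err = H @ W2 - Y                       # output error
    dW2 = H.T @ err / len(X)               # gradient w.r.t. output weights
    dH = (err @ W2.T) * (1 - H**2)         # error propagated back through tanh
    dW1 = X.T @ dH / len(X)                # gradient w.r.t. input weights
    W1 -= lr * dW1                         # gradient descent consumes the gradients
    W2 -= lr * dW2

print("final MSE:", float((err**2).mean()))
```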

Algorithm for Predicting Functionally Equivalent Proteins from BLAST and HMMER Searches

  • Yu, Dong Su;Lee, Dae-Hee;Kim, Seong Keun;Lee, Choong Hoon;Song, Ju Yeon;Kong, Eun Bae;Kim, Jihyun F.
    • Journal of Microbiology and Biotechnology
    • /
    • v.22 no.8
    • /
    • pp.1054-1058
    • /
    • 2012
  • In order to predict biologically significant attributes such as function from protein sequences, searching against large databases for homologous proteins is a common practice. In particular, BLAST and HMMER are widely used in a variety of biological fields. However, sequence-homologous proteins determined by BLAST and proteins having the same domains predicted by HMMER are not always functionally equivalent, even though their sequences align with high similarity. Thus, accurate assignment of functionally equivalent proteins from aligned sequences remains a challenge in bioinformatics. We have developed the FEP-BH algorithm to predict functionally equivalent proteins from protein-protein pairs identified by BLAST and from protein-domain pairs predicted by HMMER. When examined against domain classes of the Pfam-A seed database, FEP-BH showed 71.53% accuracy, whereas BLAST and HMMER showed 57.72% and 36.62%, respectively. We expect that the FEP-BH algorithm will be effective in predicting functionally equivalent proteins from BLAST and HMMER outputs and will also suit biologists who want to search out functionally equivalent proteins from among sequence-homologous proteins.