• Title/Summary/Keyword: Feedforward Neural Network Model

Search results: 70

The Prediction and Analysis of the Power Energy Time Series by Using the Elman Recurrent Neural Network (엘만 순환 신경망을 사용한 전력 에너지 시계열의 예측 및 분석)

  • Lee, Chang-Yong; Kim, Jinho
    • Journal of Korean Society of Industrial and Systems Engineering / v.41 no.1 / pp.84-93 / 2018
  • In this paper, we propose an Elman recurrent neural network to predict and analyze a time series of power energy consumption. To this end, we consider the volatility of the time series and apply sample-variance and detrended fluctuation analyses to the volatilities. We demonstrate that the time series of volatilities is correlated, which suggests that the power consumption time series contains a non-negligible amount of non-linear correlation. Based on this finding, we adopt the Elman recurrent neural network as the model for predicting power consumption. As the simplest form of recurrent network, the Elman network is designed to learn sequential or time-varying patterns and can predict learned series of values. The Elman network adds a layer of "context units" to a standard feedforward network. By adjusting two parameters in the model and performing cross validation, we demonstrate that the proposed model predicts power consumption with relative errors of 2%~5% and average errors of 3 kWh~8 kWh. To further confirm the experimental results, we performed two types of cross validation designed for time series data. We also support the validity of the model by analyzing multi-step forecasting, where we found that the prediction errors, although they increase with the prediction time step, tend to saturate. The results of this study can be used in energy management systems for the effective control of the combined use of electric and gas energy.
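The context-unit recurrence described in this abstract can be sketched as follows; the layer sizes, tanh/linear activations, and random weights are illustrative assumptions, and no training loop is shown:

```python
import numpy as np

# Minimal sketch of an Elman network: a feedforward net plus "context units"
# that feed the previous hidden state back into the hidden layer.
rng = np.random.default_rng(0)

n_in, n_hid, n_out = 1, 8, 1
W_xh = rng.normal(scale=0.5, size=(n_hid, n_in))   # input  -> hidden
W_ch = rng.normal(scale=0.5, size=(n_hid, n_hid))  # context -> hidden (recurrence)
W_hy = rng.normal(scale=0.5, size=(n_out, n_hid))  # hidden -> output

def elman_forward(xs):
    """Run a sequence through the network; the context starts at zero."""
    context = np.zeros(n_hid)
    outputs = []
    for x in xs:
        hidden = np.tanh(W_xh @ np.atleast_1d(x) + W_ch @ context)
        outputs.append(W_hy @ hidden)
        context = hidden  # context units store the current hidden state
    return np.array(outputs)

# A toy stand-in for a consumption series: the output at each step depends
# on the whole history through the context state, unlike a pure feedforward net.
series = np.sin(np.linspace(0, 4 * np.pi, 50))
preds = elman_forward(series)
```

Because of the context units, two sequences that end with the same value but differ earlier produce different predictions, which is exactly what distinguishes this model from the feedforward networks in the other entries.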

A New Approach to Short-term Price Forecast Strategy with an Artificial Neural Network Approach: Application to the Nord Pool

  • Kim, Mun-Kyeom
    • Journal of Electrical Engineering and Technology / v.10 no.4 / pp.1480-1491 / 2015
  • In the newly deregulated electricity market, short-term price forecasting is key information for all market players. A better forecast of the market-clearing price (MCP) helps market participants strategically set up their bidding strategies for energy markets in the short term. This paper presents a new prediction strategy that addresses the need for a more accurate short-term price forecasting tool in the spot market using artificial neural networks (ANNs). To build the forecasting ANN model, a three-layered feedforward neural network trained by an improved Levenberg-Marquardt (LM) algorithm is used to forecast locational marginal prices (LMPs). To accurately predict LMPs, actual power generation and load are taken as the input sets, and their difference is used to predict price differences in the spot market. The proposed ANN model generalizes the relationship between the LMP in each area and the unconstrained MCP during the same period of time. The LMP calculation is iterated so that the transfer capacity between the areas is maximized, a mechanism that itself helps to relieve grid congestion. The addition of flow between the areas gives the LMPs a new equilibrium point that is balanced when the transfer capacity is taken into account, and LMP forecasting then becomes possible. The proposed forecasting strategy is tested on the spot market of the Nord Pool. The validity, efficiency, and effectiveness of the proposed approach are shown by comparison with time-series models.
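The Levenberg-Marquardt trainer named in this abstract is a damped Gauss-Newton update on a sum-of-squared-errors objective. A minimal sketch follows; the two-parameter toy model and data are invented for illustration and have nothing to do with the Nord Pool setup:

```python
import numpy as np

# One LM iteration: p_new = p - (J^T J + lam*I)^-1 J^T r, with the damping
# factor lam adapted depending on whether the step reduced the error.
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 20)
y = np.tanh(2.0 * x) + 0.3            # targets generated with known parameters

def residuals(p):
    a, b = p
    return np.tanh(a * x) + b - y     # model error vector over all samples

def numerical_jacobian(p, eps=1e-6):
    J = np.zeros((x.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (residuals(p + dp) - residuals(p - dp)) / (2 * eps)
    return J

def lm_step(p, lam):
    r = residuals(p)
    J = numerical_jacobian(p)
    H = J.T @ J + lam * np.eye(p.size)   # damped normal equations
    return p - np.linalg.solve(H, J.T @ r)

p, lam = np.array([0.5, 0.0]), 1e-2
for _ in range(50):
    p_new = lm_step(p, lam)
    # standard LM schedule: accept and relax damping, or reject and stiffen
    if np.sum(residuals(p_new) ** 2) < np.sum(residuals(p) ** 2):
        p, lam = p_new, lam * 0.5
    else:
        lam *= 2.0
```

In a real three-layer network the Jacobian is computed by backpropagation rather than finite differences, but the damped update is the same.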

Genetic Algorithms based Optimal Polynomial Neural Network Model (유전자 알고리즘 기반 최적 다항식 뉴럴네트워크 모델)

  • Kim, Wan-Su; Kim, Hyun-Ki; Oh, Sung-Kwun
    • Proceedings of the KIEE Conference / 2005.07d / pp.2876-2878 / 2005
  • In this paper, we propose Genetic Algorithms (GAs)-based Optimal Polynomial Neural Networks (PNN). The proposed algorithm is based on the Group Method of Data Handling (GMDH), and its structure is similar to that of feedforward neural networks. Unlike conventional neural networks, however, the structure of a PNN is not fixed and can be generated. Each node of the PNN structure uses one of several types of high-order polynomial, such as linear, quadratic, and modified quadratic, and is connected to various combinations of multi-variable inputs. A conventional PNN depends on the designer's experience to select the number of input variables, the input variables themselves, and the polynomial type, so organizing an optimized network is very difficult. The proposed algorithm identifies and selects the number of input variables, the input variables, and the polynomial type by using Genetic Algorithms (GAs). As a result, the proposed model shows not only superior results compared to existing models but also flexibility in organizing an optimal network. The study is illustrated with the ACI Distance Relay Data for application to power systems.
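A single PNN/GMDH node of the kind this abstract describes is a low-order polynomial in a small set of chosen inputs, fitted by least squares. The sketch below shows the quadratic node type on synthetic data; the coefficients and input pair are illustrative assumptions:

```python
import numpy as np

# One quadratic GMDH/PNN node: fit
#   y ~ c0 + c1*x1 + c2*x2 + c3*x1*x2 + c4*x1^2 + c5*x2^2
# by ordinary least squares over the training samples.
rng = np.random.default_rng(2)
x1 = rng.uniform(-1, 1, 100)
x2 = rng.uniform(-1, 1, 100)
y = 1.0 + 2.0 * x1 - 1.0 * x2 + 0.5 * x1 * x2 + 1.0 * x1**2  # "unknown" process

def quadratic_node(x1, x2, y):
    """Fit one quadratic polynomial node and return coefficients and fit."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs, A @ coeffs

coeffs, y_hat = quadratic_node(x1, x2, y)
```

The GA proposed in the paper would then search over which input pair and which polynomial type each node uses; the least-squares fit above is only the inner node-fitting step.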

A Study on a Recurrent Multilayer Feedforward Neural Network (자체반복구조를 갖는 다층신경망에 관한 연구)

  • Lee, Ji-Hong
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.10 / pp.149-157 / 1994
  • A method of applying a recurrent backpropagation network to identifying or modelling a dynamic system is proposed. After the recurrent backpropagation network, which has the characteristics of both an interpolative network and an associative network, is applied to the XOR problem, a new model of recurrent backpropagation network is proposed and compared with the original by applying both to the XOR problem. Based on the observation that a function can be approximated with polynomials to arbitrary accuracy, the new model is developed so that it may generate higher-order terms in its internal states. Moreover, it is shown that the new network is successfully applied to recognizing noisy patterns of numbers.

A Study on Optimal Polynomial Neural Network for Nonlinear Process (비선형 공정을 위한 최적 다항식 뉴럴네트워크에 관한 연구)

  • Kim, Wan-Su; Oh, Sung-Kwun; Kim, Hyun-Ki
    • Proceedings of the KIEE Conference / 2005.10b / pp.149-151 / 2005
  • In this paper, we propose an Optimal Polynomial Neural Network (PNN) for nonlinear processes. The PNN is based on the Group Method of Data Handling (GMDH), and its structure is similar to that of feedforward neural networks. Unlike conventional neural networks, however, the structure of a PNN is not fixed and can be generated. Each node of the PNN structure uses one of several types of high-order polynomial, such as linear, quadratic, and modified quadratic, and is connected to various combinations of multi-variable inputs. A conventional PNN depends on the designer's experience to select the number of input variables, the input variables themselves, and the polynomial type, so organizing an optimized network is very difficult. The proposed algorithm identifies and selects the number of input variables, the input variables, and the polynomial type by using Genetic Algorithms (GAs). As a result, the proposed model shows not only superior results compared to existing models but also flexibility in organizing an optimal network. Medical Imaging System (MIS) data is simulated in order to confirm the efficiency and feasibility of the proposed approach.

Design of RFNN Controller for high performance Control of SynRM Drive (SynRM 드라이브의 고성능 제어를 위한 RFNN 제어기 설계)

  • Ko, Jae-Sub; Chung, Dong-Hwa
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.25 no.9 / pp.33-43 / 2011
  • Since fuzzy neural networks (FNNs) are universal approximators, the development of FNN control systems has grown rapidly to deal with non-linearities and uncertainties. However, the major drawback of existing FNNs is that, because of their feedforward network structure, they are limited to static problems. This paper proposes a recurrent FNN (RFNN) for high-performance and robust control of a SynRM. The RFNN is applied to the speed controller of the SynRM drive, and a model reference adaptive fuzzy controller (MFC), which combines an adaptive fuzzy learning controller (AFLC) and fuzzy logic control (FLC), is applied to the current controller. This paper also proposes a speed estimation algorithm using an artificial neural network (ANN). The proposed method is analyzed and compared to conventional PI and FNN controllers under various operating conditions, such as parameter variation and steady and transient states.

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • Deep learning models are a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. 
By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, such as vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients are propagated backward not just through layers but also through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from. 
It has since become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
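The backpropagation-plus-gradient-descent procedure described in this abstract can be sketched end to end on a tiny network. The task (XOR), network size, and learning rate below are illustrative choices, not from any of the cited papers:

```python
import numpy as np

# "Backward propagation of errors": compute the gradient of a squared-error
# loss with respect to every weight via the chain rule, then take a
# gradient-descent step.
rng = np.random.default_rng(3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def mse():
    y = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((y - t) ** 2))

mse_before = mse()
lr = 1.0
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # backward pass: chain rule gives a delta per layer
    dy = (y - t) * y * (1 - y)          # output-layer delta
    dh = (dy @ W2.T) * h * (1 - h)      # hidden-layer delta
    # gradient-descent update on all weights and biases
    W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(axis=0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(axis=0)
mse_after = mse()
```

The two deltas are exactly the "gradient of an error function with respect to all the weights" from the text; stacking more layers just repeats the `dh` step, which is where vanishing gradients arise.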

Prediction of Ammonia Emission Rate from Field-applied Animal Manure using the Artificial Neural Network (인공신경망을 이용한 시비된 분뇨로부터의 암모니아 방출량 예측)

  • Moon, Young-Sil; Lim, Youngil; Kim, Tae-Wan
    • Korean Chemical Engineering Research / v.45 no.2 / pp.133-142 / 2007
  • As the environmental pollution caused by excessive use of chemical fertilizers and pesticides worsens, organic farming using pasture and livestock manure is becoming increasingly necessary. The application rate of organic farming materials to the field is determined as a function of the crop and soil types, the weather, and the cultivation surroundings. When livestock manure is used as an organic farming material, the volatilization of ammonia from field-spread animal manure is a major source of atmospheric pollution and leads to a significant reduction in the fertilizer value of the manure. Therefore, an ammonia emission model is needed to reduce ammonia emission and to determine the appropriate application rate of manure. In this study, the ammonia emission rate from field-applied pig manure is predicted using an artificial neural network (ANN) method, where the Michaelis-Menten equation is employed as the ammonia emission rate model. Two parameters of the model (the total loss of ammonia and the time to reach half of the total emission) are predicted using a feedforward-backpropagation ANN on the basis of the ALFAM (Ammonia Loss from Field-applied Animal Manure) database in Europe. The relative importance of the 15 input variables influencing ammonia loss is identified using the weight partitioning method. As a result, the ammonia emission is found to be influenced mainly by the weather and the state of the manure.
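The Michaelis-Menten form commonly used for cumulative ammonia loss is N(t) = N_total · t / (t + t_half), where N_total is the total loss and t_half is the time at which half of N_total has been emitted; these are the two parameters the ANN predicts. The sketch below evaluates this curve with invented values (the numbers and units are illustrative assumptions, not ALFAM data):

```python
import numpy as np

# Cumulative ammonia emission under the Michaelis-Menten model:
#   N(t) = N_total * t / (t + t_half)
# N(0) = 0, N(t_half) = N_total / 2, and N(t) -> N_total as t grows.
def cumulative_emission(t, n_total, t_half):
    t = np.asarray(t, dtype=float)
    return n_total * t / (t + t_half)

n_total, t_half = 40.0, 12.0                 # e.g. kg N/ha and hours (assumed)
t = np.array([0.0, 12.0, 120.0])
n = cumulative_emission(t, n_total, t_half)  # at t = t_half, half is emitted
```

Predicting (N_total, t_half) from weather and manure variables, as the paper does, fixes the whole emission curve for a field application.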

Robust Tracking Control Based on Intelligent Sliding-Mode Model-Following Position Controllers for PMSM Servo Drives

  • El-Sousy, Fayez F.M.
    • Journal of Power Electronics / v.7 no.2 / pp.159-173 / 2007
  • In this paper, an intelligent sliding-mode position controller (ISMC) for achieving favorable decoupling control and high-precision position tracking performance of permanent-magnet synchronous motor (PMSM) servo drives is proposed. The intelligent position controller consists of a sliding-mode position controller (SMC) in the position feedback loop in addition to an on-line trained fuzzy-neural-network model-following controller (FNNMFC) in the feedforward loop. The intelligent position controller combines the merits of the SMC, with its robust characteristics, and the FNNMFC, with its on-line learning ability, for periodic command tracking of a PMSM servo drive. The theoretical analysis of the sliding-mode position controller is described with a second-order (PID-type) switching surface which is insensitive to parameter uncertainties and external load disturbances. To realize high dynamic performance in disturbance rejection and tracking characteristics, an on-line trained FNNMFC is proposed. The connective weights and membership functions of the FNNMFC are trained on-line according to the model-following error between the outputs of the reference model and the PMSM servo drive system. The FNNMFC generates an adaptive control signal which is added to the SMC output to attain robust model-following characteristics under different operating conditions regardless of parameter uncertainties and load disturbances. A computer simulation is developed to demonstrate the effectiveness of the proposed intelligent sliding-mode position controller. The results confirm that the proposed ISMC grants robust performance and precise response to the reference model regardless of load disturbances and PMSM parameter uncertainties.
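A sliding-mode position loop with a PID-type switching surface, in the spirit of the SMC this abstract describes, can be sketched on a toy plant. The plant below is a plain double integrator rather than a PMSM servo model, and all gains are illustrative assumptions:

```python
import numpy as np

# Sliding-mode position control with switching surface
#   s = de/dt + k1*e + k2*int(e)   (a PID-type surface)
# and control u = -K * sign(s), smoothed with tanh to limit chattering.
k1, k2, K = 2.0, 1.0, 20.0      # surface gains and switching gain (assumed)
dt, ref = 1e-3, 1.0             # time step [s] and position reference
pos, vel, e_int = 0.0, 0.0, 0.0

for _ in range(20000):          # simulate 20 s
    e = pos - ref
    e_int += e * dt
    s = vel + k1 * e + k2 * e_int      # switching surface (vel = de/dt here)
    u = -K * np.tanh(s / 0.05)         # smoothed sign(s)
    vel += u * dt                      # double-integrator plant: acc = u
    pos += vel * dt
```

Once the state reaches s = 0, the error obeys the chosen surface dynamics regardless of matched disturbances, which is the insensitivity property the abstract refers to; the FNNMFC term of the paper would be added on top of `u`.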

Whole learning algorithm of the neural network for modeling nonlinear and dynamic behavior of RC members

  • Satoh, Kayo; Yoshikawa, Nobuhiro; Nakano, Yoshiaki; Yang, Won-Jik
    • Structural Engineering and Mechanics / v.12 no.5 / pp.527-540 / 2001
  • A new sort of learning algorithm, named the whole learning algorithm, is proposed to simulate the nonlinear and dynamic behavior of RC members for the estimation of structural integrity. A mathematical technique for solving the multi-objective optimization problem is applied to the learning of the feedforward neural network, which is formulated so as to minimize the Euclidean norm of the error vector, defined as the difference between the outputs and the target values over all the learning data sets. The change of the outputs is approximated to first order with respect to the amount of weight modification of the network. The governing equation for the weight modification that makes the error vector null is constituted with consideration of the approximated outputs for all the learning data sets. The solution is neatly determined by means of the Moore-Penrose generalized inverse after the governing equation is summarized into linear simultaneous equations with a rectangular coefficient matrix. The learning efficiency of the proposed algorithm, from the viewpoint of computational cost, is verified on three types of problems: learning the truth table of exclusive OR, the stress-strain relationship described by the Ramberg-Osgood model, and the nonlinear and dynamic behavior of RC members observed under an earthquake.
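The core step this abstract describes can be sketched compactly: linearize the network outputs in the weights, stack the first-order equations for every learning pattern into one rectangular system, and solve it with the Moore-Penrose generalized inverse. A linear-in-weights model stands in for the feedforward network below so the linearization is exact; this is an illustrative reduction, not the paper's full algorithm:

```python
import numpy as np

# Whole-learning update: solve J @ dw = t - y over ALL patterns at once,
# where J stacks d(output)/d(weight) rows for each learning pattern, using
# the Moore-Penrose pseudoinverse of the rectangular coefficient matrix.
rng = np.random.default_rng(4)
X = rng.normal(size=(12, 3))      # 12 learning patterns, 3 inputs
w_true = np.array([1.5, -2.0, 0.5])
t = X @ w_true                    # target values for all patterns

w = np.zeros(3)                   # initial weights
y = X @ w                         # current outputs
J = X                             # per-pattern output sensitivities
dw = np.linalg.pinv(J) @ (t - y)  # minimum-norm solution of J @ dw = t - y
w = w + dw                        # one whole-learning weight modification
```

For a genuinely nonlinear network the sensitivities change with the weights, so this pseudoinverse step would be iterated; here the model is linear and a single update already drives the error vector to null.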