• Title/Summary/Keyword: dynamic recurrent neural network


Dynamic forecasts of bankruptcy with Recurrent Neural Network model (RNN(Recurrent Neural Network)을 이용한 기업부도예측모형에서 회계정보의 동적 변화 연구)

  • Kwon, Hyukkun;Lee, Dongkyu;Shin, Minsoo
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.139-153 / 2017
  • Corporate bankruptcy causes great losses not only to stakeholders but also to many related sectors of society. Bankruptcies have increased through successive economic crises, and bankruptcy prediction models have become more and more important; corporate bankruptcy is therefore regarded as one of the major research topics in business management, and many studies in industry are also in progress. Previous studies attempted various methodologies, such as Multivariate Discriminant Analysis (MDA) and the Generalized Linear Model (GLM), to improve prediction accuracy and resolve the overfitting problem. These methods are based on statistics. More recently, researchers have used machine learning methodologies such as the Support Vector Machine (SVM) and the Artificial Neural Network (ANN), as well as fuzzy theory and genetic algorithms. As a result, many bankruptcy models have been developed and performance has improved. In general, a company's financial and accounting information changes over time, as does the market situation, so it is difficult to predict bankruptcy from information at a single point in time. Nevertheless, although traditional research fails to account for this time effect, dynamic models have not been studied much. Ignoring the time effect biases the results, so a static model may be unsuitable for predicting bankruptcy; a dynamic model may therefore improve the prediction. In this paper, we propose the Recurrent Neural Network (RNN), one of the deep learning methodologies. The RNN learns time-series data and is known to perform well. For estimating the bankruptcy prediction model and comparing forecasting performance, we selected non-financial firms listed on the KOSPI, KOSDAQ, and KONEX markets from 2010 to 2016.
To avoid predicting bankruptcy from financial information that already reflects the deterioration of a company's financial condition, the financial information was collected with a lag of two years, and the default period was defined as January to December of the year. We defined bankruptcy as delisting due to sluggish earnings, which we confirmed at KIND, a corporate stock information website. We then selected variables from previous papers: the first set comprises the Z-score variables, which have become traditional in bankruptcy prediction, and the second is a dynamic variable set. For the first variable set we selected 240 normal companies and 226 bankrupt companies; for the second, 229 normal and 226 bankrupt companies. We built a model that reflects dynamic changes in time-series financial data and, by comparing it with existing bankruptcy prediction models, found that the suggested model could help improve the accuracy of bankruptcy predictions. We used financial data from KIS Value (a financial database) and selected MDA, the GLM (logistic regression), SVM, and ANN models as benchmarks. The experiment showed that the RNN outperformed the comparative models: its accuracy was high for both variable sets, its Area Under the Curve (AUC) was also high, and in the hit-ratio table the proportion of distressed companies that the RNN predicted as bankrupt was higher than for the other models. A limitation of this paper is that an overfitting problem occurs during RNN training.
We expect, however, that the overfitting problem can be solved by selecting more training data and appropriate variables. From these results, we expect this research to contribute to the development of bankruptcy prediction by proposing a new dynamic model.
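The core idea above — feeding each firm's lagged, year-by-year accounting ratios into an RNN whose final hidden state yields a bankruptcy probability — can be sketched as follows. This is a minimal illustration with synthetic data and assumed dimensions, not the authors' implementation; training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one firm's lagged accounting ratios:
# T years x D financial variables (sizes are assumptions).
T, D, H = 5, 4, 8
x = rng.standard_normal((T, D))

# Elman-style RNN parameters, randomly initialized (training omitted).
W_xh = rng.standard_normal((D, H)) * 0.1
W_hh = rng.standard_normal((H, H)) * 0.1
b_h = np.zeros(H)
w_out = rng.standard_normal(H) * 0.1

def rnn_bankruptcy_score(x):
    """Run one RNN step per year; map the final hidden state to a probability."""
    h = np.zeros(H)
    for x_t in x:  # iterate over the years in chronological order
        h = np.tanh(x_t @ W_xh + h @ W_hh + b_h)
    logit = h @ w_out
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> probability in (0, 1)

p = rnn_bankruptcy_score(x)
```

Because the hidden state is carried across years, the score depends on the whole trajectory of the ratios rather than a single point in time, which is the paper's motivation for a dynamic model.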

Robot Trajectory Control using Prefilter Type Chaotic Neural Networks Compensator (Prefilter 형태의 카오틱 신경망을 이용한 로봇 경로 제어)

  • 강원기;최운하;김상희
    • Proceedings of the IEEK Conference / 1998.06a / pp.263-266 / 1998
  • This paper proposes a prefilter-type inverse control algorithm using chaotic neural networks. Since chaotic neural networks show robust characteristics in approximation and adaptive learning for nonlinear dynamic systems, they are suitable for controlling robotic manipulators. The proposed prefilter-type controller compensates the velocity of the PD controller. To evaluate the proposed controller, we applied it to Cartesian-space control of a three-axis PUMA robot and compared the results with a recurrent neural network (RNN) controller.


Nonlinear Prediction using Gamma Multilayered Neural Network (Gamma 다층 신경망을 이용한 비선형 적응예측)

  • Kim Jong-In;Go Il-Hwan;Choi Han-Go
    • Journal of the Institute of Convergence Signal Processing / v.7 no.2 / pp.53-59 / 2006
  • Dynamic neural networks have been applied to diverse fields requiring temporal signal processing, such as system identification and signal prediction. This paper proposes the gamma neural network (GAM), which uses a gamma memory kernel in the hidden layer of a feedforward multilayered network to improve the network's dynamics, and then describes nonlinear adaptive prediction using the proposed network as an adaptive filter. The proposed network is evaluated on nonlinear signal prediction and compared with feedforward (FNN) and recurrent neural networks (RNN) for a relative comparison of prediction performance. Simulation results show that the GAM network performs better in both convergence speed and prediction accuracy, indicating that it can be a more effective prediction model than conventional multilayered networks for nonlinear prediction of nonstationary signals.
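The gamma memory kernel mentioned above is a cascade of leaky delay taps: tap 0 passes the input through, and each deeper tap is a low-pass, delayed copy of the previous one, x_k(t) = (1 − μ)·x_k(t−1) + μ·x_{k−1}(t−1). A minimal sketch (tap count K and μ are illustrative choices, not values from the paper):

```python
import numpy as np

def gamma_memory(signal, K=3, mu=0.5):
    """Gamma memory taps: x_k(t) = (1-mu)*x_k(t-1) + mu*x_{k-1}(t-1), x_0 = input.
    Returns an array of shape (len(signal), K+1) with all tap outputs per step."""
    taps = np.zeros(K + 1)
    out = np.zeros((len(signal), K + 1))
    for t, u in enumerate(signal):
        prev = taps.copy()          # values from the previous time step
        taps[0] = u                 # tap 0: the raw input
        for k in range(1, K + 1):   # deeper taps: leaky, delayed copies
            taps[k] = (1 - mu) * prev[k] + mu * prev[k - 1]
        out[t] = taps
    return out

mem = gamma_memory(np.ones(50))
```

For μ = 1 the kernel reduces to an ordinary tapped delay line; for μ < 1 each tap trades temporal resolution for longer memory depth, which is what gives the hidden layer its improved dynamics.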


A Backstepping Control of LSM Drive Systems Using Adaptive Modified Recurrent Laguerre OPNNUO

  • Lin, Chih-Hong
    • Journal of Power Electronics / v.16 no.2 / pp.598-609 / 2016
  • The good control performance of permanent magnet linear synchronous motor (LSM) drive systems is difficult to achieve using linear controllers because of uncertainty effects, such as fictitious forces. A backstepping control system using adaptive modified recurrent Laguerre orthogonal polynomial neural network uncertainty observer (OPNNUO) is proposed to increase the robustness of LSM drive systems. First, a field-oriented mechanism is applied to formulate a dynamic equation for an LSM drive system. Second, a backstepping approach is proposed to control the motion of the LSM drive system. With the proposed backstepping control system, the mover position of the LSM drive achieves good transient control performance and robustness. As the LSM drive system is prone to nonlinear and time-varying uncertainties, an adaptive modified recurrent Laguerre OPNNUO is proposed to estimate lumped uncertainties and thereby enhance the robustness of the LSM drive system. The on-line parameter training methodology of the modified recurrent Laguerre OPNN is based on the Lyapunov stability theorem. Furthermore, two optimal learning rates of the modified recurrent Laguerre OPNN are derived to accelerate parameter convergence. Finally, the effectiveness of the proposed control system is verified by experimental results.
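The Laguerre orthogonal polynomials underlying the OPNN's basis functions satisfy the standard three-term recurrence (k+1)·L_{k+1}(x) = (2k+1−x)·L_k(x) − k·L_{k−1}(x). The observer itself is beyond a short sketch, but the basis evaluation it relies on can be illustrated as follows (a generic implementation of the recurrence, not code from the paper):

```python
def laguerre(n, x):
    """Evaluate the Laguerre polynomial L_n(x) via the three-term recurrence:
    (k+1) L_{k+1}(x) = (2k + 1 - x) L_k(x) - k L_{k-1}(x),
    with L_0(x) = 1 and L_1(x) = 1 - x."""
    if n == 0:
        return 1.0
    l_prev, l_cur = 1.0, 1.0 - x  # L_0 and L_1
    for k in range(1, n):
        l_prev, l_cur = l_cur, ((2 * k + 1 - x) * l_cur - k * l_prev) / (k + 1)
    return l_cur
```

In an OPNN these polynomial values, evaluated on the (normalized) network inputs, serve as the fixed nonlinear basis whose linear combination weights are the parameters adapted on-line.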

Neural Network-based FMCW Radar System for Detecting a Drone (소형 무인 항공기 탐지를 위한 인공 신경망 기반 FMCW 레이다 시스템)

  • Jang, Myeongjae;Kim, Soontae
    • IEMEK Journal of Embedded Systems and Applications / v.13 no.6 / pp.289-296 / 2018
  • Drone detection in an FMCW radar system requires complex techniques because a drone's beat frequency is highly dynamic and unpredictable, so current static signal-processing algorithms cannot achieve adequate detection accuracy. With dynamic signal fluctuation and environmental clutter, they can fail to detect a drone or produce false detections, which affects the radar system's integrity and safety. Constant false alarm rate (CFAR), a well-known static signal-processing algorithm, is effective in static environments but shows low accuracy for drone detection. In this paper, we propose a neural-network-based FMCW radar system for detecting a drone. We use a recurrent neural network (RNN) because it is effective for signal processing. In our FMCW radar system, one transmitter emits the FMCW signal and four fixed receivers detect the reflected drone beat frequency; the drone's coordinates can then be calculated from the four receivers' information by triangulation. The RNN therefore only learns and infers the reflected drone beat frequency, which improves learning and detection accuracy. Over several drone flight experiments, the RNN achieved a false detection rate of 21.1 % and a detection accuracy of 96.4 %.
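The position fix from four receivers described above can be sketched as a linearized least-squares trilateration: each FMCW beat frequency gives a range, and subtracting the first range equation from the others removes the quadratic terms. The receiver layout and drone position below are invented for illustration, not the paper's geometry.

```python
import numpy as np

# Assumed receiver layout (metres) and a hypothetical true drone position.
receivers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
ranges = np.linalg.norm(receivers - true_pos, axis=1)  # ideal FMCW-derived ranges

def trilaterate(receivers, ranges):
    """Least-squares position from range measurements.
    From ||p - p_i||^2 = r_i^2, subtracting the i=0 equation yields the
    linear system 2 (p_i - p_0) . p = r_0^2 - r_i^2 + |p_i|^2 - |p_0|^2."""
    p0, r0 = receivers[0], ranges[0]
    A = 2 * (receivers[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(receivers[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

est = trilaterate(receivers, ranges)
```

With noisy ranges the least-squares solution averages the inconsistencies across receivers; four receivers give one redundant equation in 2-D, which is what makes the overdetermined solve worthwhile.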

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have produced state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years these supervised models have gained more popularity than unsupervised models such as deep belief networks, because they have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, an abbreviation of "backward propagation of errors," is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent: it calculates the gradient of an error function with respect to all the weights in the network, and the gradient is fed to the optimization method, which uses it to update the weights so as to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling.
By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each local receptive field, so that all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers; their job is to simplify the information in the output from the convolutional layer. Recent convolutional architectures have 10 to 20 hidden layers and billions of connections between units. Several years ago, training such deep networks took weeks, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior, and unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks, because of unstable gradient problems such as vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through the layers, making learning in early layers extremely slow. The problem is actually worse in RNNs, since gradients are propagated backward not only through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from.
It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
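The vanishing-gradient effect described above can be made concrete with a scalar RNN h_t = tanh(w·h_{t−1}): the gradient of h_T with respect to h_0 is a product of per-step chain-rule factors w·tanh′, each of magnitude at most |w|, so with |w| < 1 it shrinks geometrically with sequence length. A minimal numerical sketch (the values of w and h0 are arbitrary):

```python
import numpy as np

def grad_through_time(w, h0, steps):
    """Gradient of the final state of h_t = tanh(w * h_{t-1}) w.r.t. h_0,
    accumulated as a product of per-step factors w * tanh'(w * h_{t-1})."""
    h, grad = h0, 1.0
    for _ in range(steps):
        pre = w * h
        h = np.tanh(pre)
        grad *= w * (1.0 - np.tanh(pre) ** 2)  # one chain-rule factor per step
    return grad

g_short = grad_through_time(0.9, 0.5, 5)
g_long = grad_through_time(0.9, 0.5, 100)   # orders of magnitude smaller
```

With |w| > 1 the same product can instead blow up, which is the exploding-gradient side of the problem; LSTM cells mitigate both by routing the gradient through an additive cell state.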

Speech Recognition Using MSVQ/TDRNN (MSVQ/TDRNN을 이용한 음성인식)

  • Kim, Sung-Suk
    • The Journal of the Acoustical Society of Korea / v.33 no.4 / pp.268-272 / 2014
  • This paper presents a method for speech recognition using multi-section vector quantization (MSVQ) and a time-delay recurrent neural network (TDRNN). The MSVQ generates the codebook from normalized uniform sections of the voice signal, and the TDRNN performs speech recognition using the MSVQ codebook. The TDRNN is a time-delay recurrent neural network classifier with two different representations of dynamic context: the time-delayed input nodes represent local dynamic context, while the recursive nodes represent the long-term dynamic context of the voice signal. Cepstral PLP coefficients were used as speech features. In the speech recognition experiments, the MSVQ/TDRNN recognizer shows a 97.9 % word recognition rate for speaker-independent recognition.
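The TDRNN's two context mechanisms — a tapped-delay window of recent input frames for local context and a recurrent hidden state for long-term context — can be sketched in one update rule. Sizes and weight names below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def tdrnn_step(h, x_window, W_d, W_r, b):
    """One TDRNN hidden update: flatten the delay window of recent input
    frames (local context) and add a recurrent term (long-term context)."""
    return np.tanh(W_d @ x_window.ravel() + W_r @ h + b)

rng = np.random.default_rng(3)
D, delays, H = 9, 3, 5  # e.g. 9 feature coefficients, a 3-frame delay window
W_d = rng.standard_normal((H, D * delays)) * 0.1
W_r = rng.standard_normal((H, H)) * 0.1
b = np.zeros(H)

frames = rng.standard_normal((20, D))  # synthetic feature frames
h = np.zeros(H)
for t in range(delays, 20):
    h = tdrnn_step(h, frames[t - delays:t], W_d, W_r, b)
```

The delay window alone would make this a time-delay network (finite memory); the W_r term is what lets the state summarize arbitrarily old frames.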

Design of Neural Network Controller Using RTDNN and FLC (RTDNN과 FLC를 사용한 신경망제어기 설계)

  • Shin, Wee-Jae
    • Journal of the Institute of Convergence Signal Processing / v.13 no.4 / pp.233-237 / 2012
  • In this paper, we propose a control system that compensates the output of a main neural network using an RTDNN (Recurrent Time-Delayed Neural Network) with an FLC (Fuzzy Logic Controller). After the main neural network has been trained, overshoot or undershoot can still occur due to a disturbance or load variations; to handle such cases, we use the fuzzy compensator to obtain the expected results. The weights of the main neural network can also be updated by learning an inverse-model neural network of the plant, so the expected dynamic characteristics of the plant can be obtained. Simulation results confirm the good response characteristics of the proposed neural network controller.

Recurrent Neural Network Modeling of Etch Tool Data: a Preliminary for Fault Inference via Bayesian Networks

  • Nawaz, Javeria;Arshad, Muhammad Zeeshan;Park, Jin-Su;Shin, Sung-Won;Hong, Sang-Jeen
    • Proceedings of the Korean Vacuum Society Conference / 2012.02a / pp.239-240 / 2012
  • With advancements in semiconductor device technologies, manufacturing processes are becoming more complex, and it has become more difficult to maintain tight process control. As the number of processing steps for fabricating complex chip structures has increased, potential fault-inducing factors are prevalent and their allowable margins are continuously reduced. Therefore, one of the keys to success in semiconductor manufacturing is highly accurate and fast fault detection and classification at each stage, to reduce any undesired variation and identify the cause of a fault. Sensors in the equipment are used to monitor the state of the process; the idea is that whenever there is a fault in the process, it appears as some variation in the output of one of the sensors monitoring the process. These sensors may provide information about pressure, RF power, gas flow, and so on. By relating the data from these sensors to the process condition, any abnormality in the process can be identified, though still with some degree of uncertainty. Our hypothesis in this research is that we can capture the features of equipment condition data from a healthy-process library and use the healthy data as a reference for upcoming processes; this is made possible by mathematically modeling the acquired data. In this work we demonstrate the use of a recurrent neural network (RNN), a dynamic neural network whose output is a function of previous inputs. In our case we have an etch equipment tool data set consisting of 22 parameters and 9 runs. The data were first synchronized using the Dynamic Time Warping (DTW) algorithm. The synchronized sensor data, in the form of time series, are then provided to the RNN, which trains and restructures itself according to the input and predicts a value one step ahead in time that depends on the past values of the data.
Eight runs of process data were used to train the network, while one run was used as a test input to check the network's performance. Next, a mean-squared-error-based probability-generating function was used to assign a probability of fault to each parameter by comparing the predicted and actual values of the data. In the future we will use Bayesian networks to classify the detected faults. Bayesian networks use directed acyclic graphs that relate different parameters through their conditional dependencies in order to find inferences among them. The relationships between parameters in the data will be used to generate the structure of the Bayesian network, and the posterior probability of different faults will then be calculated using inference algorithms.
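The final step — turning per-parameter prediction residuals into fault probabilities — can be sketched as below. The paper does not specify its probability-generating function, so the monotone mapping 1 − exp(−MSE/scale), the median normalization, and the synthetic data are all assumptions for illustration.

```python
import numpy as np

def fault_probability(actual, predicted):
    """Map each parameter's mean squared prediction error to a (0, 1] score.
    The mapping 1 - exp(-mse/scale) is an illustrative choice, not the
    paper's function; the median of the MSEs is an assumed normalizer."""
    mse = np.mean((actual - predicted) ** 2, axis=0)  # one value per parameter
    scale = np.median(mse) + 1e-12
    return 1.0 - np.exp(-mse / scale)

rng = np.random.default_rng(1)
actual = rng.standard_normal((100, 22))       # 22 sensor parameters, as in the data
predicted = actual + 0.01 * rng.standard_normal((100, 22))  # good predictions
predicted[:, 5] += 1.0                        # inject a large residual in parameter 5
probs = fault_probability(actual, predicted)  # parameter 5 scores highest
```

Any monotone map from residual magnitude to [0, 1] would serve the same ranking purpose; the exponential form simply saturates smoothly for large errors.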


Short-term Electric Load Forecasting in Winter and Summer Seasons using a NARX Neural Network (NARX 신경망을 이용한 동·하계 단기부하예측에 관한 연구)

  • Jeong, Hee-Myung;Park, June Ho
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.7 / pp.1001-1006 / 2017
  • In this study, the NARX network was proposed as a novel approach to forecast electric load more accurately. The NARX model is a recurrent dynamic network. The ISO New England dataset was employed to evaluate and validate the proposed approach, and the results were compared with a NAR network and other popular statistical methods. This study showed that the proposed approach can be applied to electric load forecasting and that NARX has high potential for effectively modeling dynamic systems.
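A NARX model predicts y(t) from delayed outputs and delayed exogenous inputs, y(t) = f(y(t−1),…,y(t−d_y), x(t−1),…,x(t−d_x)). The sketch below rolls such a model forward with a fixed random single-hidden-layer network standing in for the trained f; the delay orders, sizes, and the sine input (e.g. a temperature-like driver for load forecasting) are assumptions.

```python
import numpy as np

def narx_step(f, y_hist, x_hist):
    """One NARX update: y(t) = f(y(t-1..t-dy), x(t-1..t-dx))."""
    return f(np.concatenate([y_hist, x_hist]))

rng = np.random.default_rng(2)
dy, dx, H = 3, 2, 6               # output delays, input delays, hidden units
W1 = rng.standard_normal((H, dy + dx)) * 0.3
W2 = rng.standard_normal(H) * 0.3
f = lambda z: W2 @ np.tanh(W1 @ z)  # untrained stand-in for the learned map

# Roll the model forward over an exogenous driver signal.
x = np.sin(np.linspace(0, 6, 50))
y = np.zeros(50)
for t in range(max(dy, dx), 50):
    y[t] = narx_step(f, y[t - dy:t], x[t - dx:t])
```

Feeding back predicted (rather than measured) outputs, as here, is the closed-loop mode used for multi-step forecasting; during training the measured series would be used instead (series-parallel mode).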