• Title/Summary/Keyword: Recurrent Neural Networks


Generalized Binary Second-order Recurrent Neural Networks Equivalent to Regular Grammars (정규문법과 동등한 일반화된 이진 이차 재귀 신경망)

  • Jung Soon-Ho
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.1
    • /
    • pp.107-123
    • /
    • 2006
  • We propose the Generalized Binary Second-order Recurrent Neural Networks (GBSRNN), which are equivalent to regular grammars, and show the implementation of a lexical analyzer recognizing regular languages by using them. All equivalent representations of regular grammars can be implemented in circuits by using GBSRNN, since it has binary-valued components and exhibits the structural relationship of a regular grammar. For a regular grammar with m symbols, p terminals, q nonterminals, and an input string of length k, the size of the corresponding GBSRNN is $O(m(p+q)^2)$, its parallel processing time is O(k), and its sequential processing time is $O(k(p+q)^2)$.
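The kind of second-order recurrent unit the abstract describes combines the previous state and the current input multiplicatively. A minimal sketch, assuming an illustrative DFA encoding rather than the paper's actual GBSRNN construction (the automaton, names, and sizes here are all assumptions):

```python
import numpy as np

# Second-order units: s'_j = step(sum_{i,k} W[j,i,k] * s[i] * x[k]).
# Example DFA over {0,1} accepting strings with an even number of 1s.
n_states, n_symbols = 2, 2
W = np.zeros((n_states, n_states, n_symbols))
# delta(q0,'0')=q0, delta(q0,'1')=q1, delta(q1,'0')=q1, delta(q1,'1')=q0
W[0, 0, 0] = 1; W[1, 0, 1] = 1; W[1, 1, 0] = 1; W[0, 1, 1] = 1

def run(bits):
    s = np.array([1.0, 0.0])            # start in q0 (one-hot state)
    for b in bits:
        x = np.eye(n_symbols)[b]        # one-hot input symbol
        s = (np.einsum('jik,i,k->j', W, s, x) >= 0.5).astype(float)
    return bool(s[0])                   # q0 is the accepting state

print(run([1, 0, 1]))  # True: two 1s (even)
print(run([1, 1, 1]))  # False: three 1s
```

Because the weights and activations are binary-valued, each step amounts to a table lookup, which is what makes a direct circuit implementation possible.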


Exploring process prediction based on deep learning: Focusing on dynamic recurrent neural networks (딥러닝 기반의 프로세스 예측에 관한 연구: 동적 순환신경망을 중심으로)

  • Kim, Jung-Yeon;Yoon, Seok-Joon;Lee, Bo-Kyoung
    • The Journal of Information Systems
    • /
    • v.27 no.4
    • /
    • pp.115-128
    • /
    • 2018
  • Purpose The purpose of this study is to predict the future behavior of a business process. Specifically, this study tried to predict the last activities of process instances. It contributes to overcoming the limitations of existing approaches: they do not accurately reflect the actual behavior of business processes, and they require a lot of effort and time every time they are applied to specific processes. Design/methodology/approach This study proposed a novel approach based on deep learning in the form of dynamic recurrent neural networks. To improve the accuracy of our prediction model, we adopted the latest techniques, including new initialization functions (Xavier and He initializations). The proposed approach has been verified using real-life data from a domestic small and medium-sized business. Findings According to the experimental results, our approach achieves better prediction accuracy than the latest approach based on static recurrent neural networks. It is also shown that much less effort and time are required to predict the behavior of business processes.
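The Xavier and He initializations mentioned in the abstract have standard textbook forms; a brief sketch (the layer sizes are arbitrary, and the paper's exact network layout is not given here):

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(fan_in, fan_out):
    # Glorot/Xavier: variance 2/(fan_in + fan_out), suited to tanh/sigmoid units
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def he_init(fan_in, fan_out):
    # He: variance 2/fan_in, suited to ReLU units
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

W = he_init(256, 128)
print(W.shape)  # (256, 128)
```

Both keep the activation variance roughly constant across layers, which is what stabilizes training in deeper or longer-unrolled networks.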

A New Recurrent Neural Network Architecture for Pattern Recognition and Its Convergence Results

  • Lee, Seong-Whan;Kim, Young-Joon;Song, Hee-Heon
    • Journal of Electrical Engineering and Information Science
    • /
    • v.1 no.1
    • /
    • pp.108-117
    • /
    • 1996
  • In this paper, we propose a new type of recurrent neural network architecture in which each output unit is connected with itself and fully connected with the other output units and all hidden units. The proposed recurrent network differs from Jordan's and Elman's recurrent networks in both function and architecture, because it was originally extended from the multilayer feedforward neural network to improve discrimination and generalization power. We also prove the convergence property of the learning algorithm of the proposed recurrent neural network and analyze its performance by performing recognition experiments with the totally unconstrained handwritten numeral database of Concordia University, Canada. Experimental results confirmed that the proposed recurrent neural network improves the discrimination and generalization power in recognizing spatial patterns.
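The connectivity described, each output unit fed by its own and the other output units' previous values plus all hidden activations, can be sketched as follows (sizes, activations, and weight names are illustrative assumptions, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, n_out = 4, 8, 3
Wxh = rng.normal(0, 0.1, (n_hid, n_in))   # input  -> hidden (feedforward)
Why = rng.normal(0, 0.1, (n_out, n_hid))  # hidden -> output
Wyy = rng.normal(0, 0.1, (n_out, n_out))  # output -> output (self + cross recurrence)

def forward(xs):
    y = np.zeros(n_out)
    for x in xs:
        h = sigmoid(Wxh @ x)              # ordinary feedforward hidden layer
        y = sigmoid(Why @ h + Wyy @ y)    # outputs recur onto themselves
    return y

out = forward(rng.normal(size=(5, n_in)))
print(out.shape)  # (3,)
```

Setting `Wyy` to zero recovers a plain feedforward network, which matches the abstract's point that the architecture is an extension of one.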


Fuzzy Inference-based Reinforcement Learning of Dynamic Recurrent Neural Networks

  • Jun, Hyo-Byung;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.7 no.5
    • /
    • pp.60-66
    • /
    • 1997
  • This paper presents a fuzzy inference-based reinforcement learning algorithm for dynamic recurrent neural networks, which is very similar to the psychological learning method of higher animals. By using the fuzzy inference technique, linguistic and conceptual expressions influence the controller's action indirectly, as is seen in human behavior. The intervals of the fuzzy membership functions are found optimally by genetic algorithms. And by using recurrent neural networks composed of dynamic neurons as action-generation networks, past states as well as the current state are considered when generating an action in a dynamic environment. We show the validity of the proposed learning algorithm by applying it to the inverted pendulum control problem.
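The membership-function intervals that the genetic algorithm tunes are the breakpoints of ordinary fuzzy sets. A minimal triangular membership function as one illustrative example (the paper's actual membership shapes are not specified in the abstract):

```python
def tri_mf(x, a, b, c):
    # Triangular membership: rises from a to the peak at b, falls to c.
    # The endpoints (a, b, c) are the kind of interval parameters a
    # genetic algorithm would optimize.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

print(tri_mf(0.25, 0.0, 0.5, 1.0))  # 0.5
```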


Parameter Estimation of Recurrent Neural Equalizers Using the Derivative-Free Kalman Filter

  • Kwon, Oh-Shin
    • Journal of information and communication convergence engineering
    • /
    • v.8 no.3
    • /
    • pp.267-272
    • /
    • 2010
  • For the last decade, recurrent neural networks (RNNs) have been commonly applied to communications channel equalization. The major problems of the gradient-based learning techniques employed to train recurrent neural networks are slow convergence rates and long training sequences. In high-speed communications systems, short training symbols and fast convergence speed are essential. In this paper, the derivative-free Kalman filter, the so-called unscented Kalman filter (UKF), for training a fully connected RNN is presented in a state-space formulation of the system. The main features of the proposed recurrent neural equalizer are fast convergence speed and good performance using relatively short training symbols, without derivative computation. Through experiments on nonlinear channel equalization, the performance of the RNN with a derivative-free Kalman filter is evaluated.
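At the core of the UKF is the unscented transform, which propagates a mean and covariance through a nonlinearity using sigma points instead of derivatives. A minimal sketch of that transform alone, in its standard form (not the paper's full RNN-equalizer state-space model):

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=2.0):
    # Build 2n+1 sigma points around the mean, push each through f,
    # and recombine with the standard weights -- no derivatives of f needed.
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigma])
    y_mean = w @ ys
    y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean)
                for wi, yi in zip(w, ys))
    return y_mean, y_cov

m, C = np.array([0.5]), np.array([[0.04]])
ym, yc = unscented_transform(m, C, np.tanh)
print(ym.shape, yc.shape)  # (1,) (1, 1)
```

In the equalizer setting, the RNN weights play the role of the state and the network output is the nonlinear measurement, so no gradient of the network is ever computed.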

A Study on Development of Embedded System for Speech Recognition using Multi-layer Recurrent Neural Prediction Models & HMM (다층회귀신경예측 모델 및 HMM 를 이용한 임베디드 음성인식 시스템 개발에 관한 연구)

  • Kim, Jung hoon;Jang, Won il;Kim, Young tak;Lee, Sang bae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.3
    • /
    • pp.273-278
    • /
    • 2004
  • In this paper, recurrent neural networks (RNN) are applied to compensate for the HMM recognition algorithm, which is commonly used as the main recognizer. Among these recurrent neural networks, the multi-layer recurrent neural prediction model (MRNPM), which allows operation in real time, is used to implement learning and recognition, and HMM and MRNPM are combined to design a hybrid-type main recognizer. After testing the designed speech recognition algorithm with Korean number pronunciations (13 words), which are hard to distinguish, for its speaker-independent recognition rate, an improvement of about 5% was obtained compared with existing HMM recognizers. Based on this result, only optimal (recognition) codes were extracted in the actual DSP (TMS320C6711) environment, and the embedded speech recognition system was implemented. The embedded system likewise showed improved recognition over existing HMM-only recognition systems.

Control of Chaos Dynamics in Jordan Recurrent Neural Networks

  • Jin, Sang-Ho;Kenichi, Abe
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.43.1-43
    • /
    • 2001
  • We propose two control methods for the Lyapunov exponents of Jordan-type recurrent neural networks. Both methods are formulated as gradient-based learning methods. The first method is derived strictly from the definition of the Lyapunov exponents, which are represented by the state transitions of the recurrent networks. The first method can control the complete set of exponents, called the Lyapunov spectrum; however, it is computationally expensive because of its inherently recursive way of calculating the changes of the network parameters. This recursive calculation also causes unstable control when at least one of the exponents is positive, such as the largest Lyapunov exponent in recurrent networks with chaotic dynamics. To improve stability in the chaotic situation, we propose a non-recursive formulation by approximating ...
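The definition the first method starts from, the Lyapunov exponent as the long-run average log expansion rate along a trajectory, can be illustrated on a one-dimensional map (the logistic map here stands in for the network's state transition; all constants are assumptions):

```python
import numpy as np

def largest_lyapunov(r=4.0, x0=0.3, n=100_000, burn=1000):
    # Average log |f'(x)| along the trajectory of f(x) = r * x * (1 - x).
    x, acc = x0, 0.0
    for i in range(n + burn):
        if i >= burn:
            acc += np.log(abs(r * (1.0 - 2.0 * x)))  # |f'(x)|
        x = r * x * (1.0 - x)
    return acc / n

lam = largest_lyapunov()
print(round(lam, 2))  # close to ln 2 ≈ 0.693 for the fully chaotic logistic map
```

A positive value like this is exactly the regime where the abstract notes the recursive update becomes unstable: errors in the parameters are amplified at the same exponential rate as the state.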


The Precision Position Control of the Pneumatic Rodless Cylinder Using Recurrent Neural Networks (리커런트 신경회로망을 이용한 공압 로드레스 실린더의 정밀위치제어)

  • 노철하;김영식;김상희
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.20 no.7
    • /
    • pp.84-90
    • /
    • 2003
  • This paper develops a control method composed of a proportional control algorithm and a learning algorithm based on recurrent neural networks (RNN) for the position control of a pneumatic rodless cylinder. The proportional control algorithm is suggested for the modeled pneumatic system, which is obtained easily by simplifying the system, and the RNN is suggested for compensating the modeling errors and uncertainties of the pneumatic system. In the proportional control, two zones are defined in the phase plane: one is the transient zone for smooth tracking, and the other is the small-movement zone for accurate position control that eliminates the stick-slip phenomenon. The RNN is connected in parallel with the proportional controller to compensate for modeling errors, friction, compressibility, and parameter uncertainties in the pneumatic control system. This paper experimentally verifies the feasibility of the proposed control algorithm for such pneumatic systems.
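The two-zone proportional law with the RNN correction added in parallel can be sketched in a few lines (the gains, zone width, and names are illustrative, not the paper's values):

```python
def control(error, rnn_comp=0.0, kp_transient=2.0, kp_fine=0.5, zone=0.01):
    # Two zones: a higher gain for transient tracking, a lower gain for
    # small movements near the target to avoid stick-slip oscillation.
    kp = kp_fine if abs(error) < zone else kp_transient
    return kp * error + rnn_comp  # RNN compensation enters in parallel

u = control(0.1, rnn_comp=0.05)
print(u)
```

The RNN term would be trained online to absorb the friction and compressibility effects that the simplified proportional model misses.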

GRADIENT EXPLOSION FREE ALGORITHM FOR TRAINING RECURRENT NEURAL NETWORKS

  • HONG, SEOYOUNG;JEON, HYERIN;LEE, BYUNGJOON;MIN, CHOHONG
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.24 no.4
    • /
    • pp.331-350
    • /
    • 2020
  • The exploding gradient is a widely known problem in training recurrent neural networks. The explosion problem has often been coped with by cutting off the gradient norm at some fixed value. However, this strategy, commonly referred to as norm clipping, is an ad hoc approach to attenuating the explosion. In this research, we opt to view the problem from a different perspective, namely discrete-time optimal control with infinite horizon, for a better understanding of the problem. Through this perspective, we characterize the region in which gradient explosion occurs. Based on the analysis, we introduce a gradient-explosion-free algorithm that keeps the training process away from that region. Numerical tests show that this algorithm is at least three times faster than the clipping strategy.
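For contrast, the norm-clipping baseline the abstract critiques fits in a few lines (the threshold value is arbitrary):

```python
import numpy as np

def clip_by_norm(grad, max_norm=1.0):
    # Rescale the gradient whenever its norm exceeds a fixed threshold;
    # the direction is preserved, only the magnitude is cut off.
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        return grad * (max_norm / norm)
    return grad

g = np.array([3.0, 4.0])     # norm 5
print(clip_by_norm(g, 1.0))  # rescaled to norm 1, direction unchanged
```

The fixed threshold is exactly the ad hoc element the paper objects to: it is chosen by hand rather than derived from where explosion actually occurs.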

A Study on Speech Recognition using Recurrent Neural Networks (회귀신경망을 이용한 음성인식에 관한 연구)

  • 한학용;김주성;허강인
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.3
    • /
    • pp.62-67
    • /
    • 1999
  • In this paper, we investigate a reliable model of the predictive recurrent neural network for speech recognition. Predictive neural networks are modeled by syllable units. For a given input syllable, the model which gives the minimum prediction error is taken as the recognition result. The predictive neural network was given the structure of a recurrent network so that the dynamic features of the speech pattern enter the network. We compared the recognition abilities of the recurrent networks proposed by Elman and Jordan. ETRI's SAMDORI was used as the speech DB. In order to find a reliable neural network model, the changes in the two recognition rates were compared with one another under the following conditions: (1) changing the prediction order and the number of hidden units; and (2) accumulating previous values with a self-loop coefficient in the context layer. The results show that the optimum prediction order, the number of hidden units, and the self-loop coefficient responded differently according to the structure of the neural network used. In general, however, Jordan's recurrent network shows a relatively higher recognition rate than Elman's. The effect of the self-loop coefficient on the recognition rate varied according to the structure of the neural network and its values.
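The recognition rule the abstract describes, one predictive model per syllable with the minimum prediction error deciding the result, can be sketched as follows (simple linear predictors stand in for the paper's recurrent predictive networks; the names and toy data are assumptions):

```python
import numpy as np

def recognize(frames, models):
    # Each model predicts the next feature frame from the current one;
    # the syllable whose model predicts best (lowest MSE) wins.
    errors = {}
    for name, W in models.items():
        pred = frames[:-1] @ W
        errors[name] = float(np.mean((pred - frames[1:]) ** 2))
    return min(errors, key=errors.get)

# Toy check: constant frames are predicted perfectly by the identity model.
frames = np.ones((5, 3))
models = {"ga": np.eye(3), "na": np.zeros((3, 3))}
print(recognize(frames, models))  # ga
```

Swapping the linear predictors for Elman- or Jordan-style recurrent predictors is what lets the comparison in the abstract be made under one common decision rule.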
