• Title/Abstract/Keyword: Recurrent neural networks

286 search results (processing time 0.026 s)

Generalized Binary Second-order Recurrent Neural Networks Equivalent to Regular Grammars

  • 정순호
    • 지능정보연구 / Vol. 12 No. 1 / pp.107-123 / 2006
  • This paper proposes the structure and learning method of Generalized Binary Second-order Recurrent Neural Networks (GBSRNN), which are semantically equivalent to regular grammars, and introduces a lexical analyzer built with them for recognizing regular languages. Because its components are represented with binary values, the GBSRNN provides a way to realize in hardware any representation equivalent to a regular grammar, and it exhibits a direct structural correspondence to the grammar. Given a regular grammar with m symbols, p nonterminal symbols, and q terminal symbols, and an input string of length k, the size of the GBSRNN is $O(m(p+q)^2)$, its parallel processing time is $O(k)$, and its sequential processing time is $O(k(p+q)^2)$.

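As a rough illustration of the second-order transition described in the abstract above, the sketch below implements a generic binary second-order recurrent step in Python with NumPy. The tensor name `W`, the state and symbol sizes, and the hard threshold are illustrative assumptions, not the paper's exact GBSRNN construction.

```python
import numpy as np

def second_order_step(W, state, symbol_onehot):
    """One second-order recurrent transition.

    W has shape (n_states, n_states, n_symbols): W[i, j, k] couples the
    previous state unit j with the current input symbol k to drive unit i.
    The product state_j * symbol_k is the second-order (multiplicative)
    interaction; a hard threshold keeps the state binary.
    """
    pre_activation = np.einsum("ijk,j,k->i", W, state, symbol_onehot)
    return (pre_activation > 0.5).astype(float)

# Toy example: 3 state units, 2 input symbols, random binary weights.
rng = np.random.default_rng(0)
W = rng.integers(0, 2, size=(3, 3, 2)).astype(float)

state = np.array([1.0, 0.0, 0.0])        # start state (unit 0 active)
for sym in [0, 1, 1, 0]:                  # an input string over {0, 1}
    onehot = np.eye(2)[sym]
    state = second_order_step(W, state, onehot)

accepted = bool(state[2])                 # e.g. unit 2 designated as accepting
print(state, accepted)
```

Processing one symbol touches every (state, state, symbol) triple once, which is where the quadratic factor in the sequential processing time comes from.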

Exploring Process Prediction Based on Deep Learning: Focusing on Dynamic Recurrent Neural Networks

  • 김정연;윤석준;이보경
    • 한국정보시스템학회지:정보시스템연구 / Vol. 27 No. 4 / pp.115-128 / 2018
  • Purpose The purpose of this study is to predict the future behavior of business processes; specifically, it tries to predict the last activity of process instances. It contributes to overcoming the limitations of existing approaches, which do not accurately reflect the actual behavior of business processes and which require considerable effort and time every time they are applied to a specific process. Design/methodology/approach This study proposes a novel approach based on deep learning in the form of dynamic recurrent neural networks. To improve the accuracy of the prediction model, we adopted recent techniques, including new initialization functions (Xavier and He initialization). The proposed approach was verified using real-life data from a domestic small and medium-sized business. Findings According to the experimental results, our approach achieves better prediction accuracy than the latest approach based on static recurrent neural networks. It also requires much less effort and time to predict the behavior of business processes.
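The sketch below shows one plausible way to set up a dynamic (variable-length) recurrent predictor of the last activity of a trace in PyTorch, with Xavier initialization on the recurrent weights and He initialization on the output layer, as the abstract above mentions. The model class, layer sizes, and toy data are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

class LastActivityPredictor(nn.Module):
    """Dynamic (variable-length) RNN that predicts the last activity of a trace."""

    def __init__(self, n_activities, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_activities, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_activities)
        # Xavier init for the recurrent weight matrices, He (Kaiming) init for the output layer.
        for name, p in self.rnn.named_parameters():
            if "weight" in name:
                nn.init.xavier_uniform_(p)
        nn.init.kaiming_uniform_(self.out.weight, nonlinearity="relu")

    def forward(self, traces, lengths):
        packed = pack_padded_sequence(self.embed(traces), lengths,
                                      batch_first=True, enforce_sorted=False)
        _, h_n = self.rnn(packed)          # h_n: (1, batch, hidden_dim)
        return self.out(h_n[-1])           # logits over possible last activities

# Toy usage: two padded traces of activity ids, true lengths 4 and 2.
model = LastActivityPredictor(n_activities=10)
traces = torch.tensor([[1, 3, 2, 5], [4, 2, 0, 0]])
lengths = torch.tensor([4, 2])
logits = model(traces, lengths)
loss = nn.functional.cross_entropy(logits, torch.tensor([7, 9]))
loss.backward()
```

Packing the padded traces is what makes the network "dynamic": each trace is unrolled only up to its true length instead of a fixed window.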

A New Recurrent Neural Network Architecture for Pattern Recognition and Its Convergence Results

  • Lee, Seong-Whan;Kim, Young-Joon;Song, Hee-Heon
    • Journal of Electrical Engineering and Information Science / Vol. 1 No. 1 / pp.108-117 / 1996
  • In this paper, we propose a new type of recurrent neural network architecture in which each output unit is connected with itself and fully connected with the other output units and all hidden units. The proposed recurrent network differs from Jordan's and Elman's recurrent networks in both function and architecture, because it was originally extended from the multilayer feedforward neural network to improve discrimination and generalization power. We also prove the convergence property of the learning algorithm of the proposed recurrent neural network and analyze its performance through recognition experiments with the totally unconstrained handwritten numeral database of Concordia University, Canada. Experimental results confirm that the proposed recurrent neural network improves discrimination and generalization power in recognizing spatial patterns.

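A minimal NumPy sketch of the kind of connectivity the abstract above describes: each output unit receives its own previous value, the previous values of the other output units, and the current hidden activations. The layer sizes, sigmoid activation, and weight names are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x_seq, W_xh, W_hy, W_yy):
    """Run a sequence through a network whose output layer is recurrent.

    W_xh: input -> hidden, W_hy: hidden -> output,
    W_yy: previous outputs -> outputs (self-connections on the diagonal).
    """
    n_out = W_hy.shape[0]
    y_prev = np.zeros(n_out)
    for x in x_seq:
        h = sigmoid(W_xh @ x)                        # hidden activations from the current input
        y_prev = sigmoid(W_hy @ h + W_yy @ y_prev)   # outputs fed back to all output units
    return y_prev

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 16, 8, 10                       # e.g. 10 digit classes
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))
W_hy = rng.normal(scale=0.1, size=(n_out, n_hid))
W_yy = rng.normal(scale=0.1, size=(n_out, n_out))

x_seq = rng.normal(size=(5, n_in))                   # a short sequence of feature frames
scores = forward(x_seq, W_xh, W_hy, W_yy)
print(int(np.argmax(scores)))                        # predicted class
```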

Fuzzy Inference-based Reinforcement Learning of Dynamic Recurrent Neural Networks

  • Jun, Hyo-Byung;Sim, Kwee-Bo
    • 한국지능시스템학회논문지 / Vol. 7 No. 5 / pp.60-66 / 1997
  • This paper presents a fuzzy inference-based reinforcement learning algorithm for dynamic recurrent neural networks, which closely resembles the psychological learning methods of higher animals. By using the fuzzy inference technique, linguistic and conceptual expressions influence the controller's action indirectly, as is observed in human behavior. The intervals of the fuzzy membership functions are optimized by genetic algorithms. Using recurrent neural networks composed of dynamic neurons as action-generation networks, past states as well as the current state are considered when generating an action in a dynamic environment. We show the validity of the proposed learning algorithm by applying it to the inverted pendulum control problem.

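As a small illustration of the fuzzy inference ingredient mentioned in the abstract above, the sketch below evaluates triangular membership functions over the pole angle of an inverted pendulum and turns a rule table into a scalar reinforcement signal. The interval boundaries, the rule table, and the centroid defuzzification are illustrative assumptions; in the paper the membership intervals are tuned by a genetic algorithm.

```python
import numpy as np

def triangular(x, left, center, right):
    """Triangular membership degree of x for the fuzzy set (left, center, right)."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

# Fuzzy sets over the pole angle (radians); boundaries would be evolved by a GA.
angle_sets = {
    "negative": (-0.5, -0.25, 0.0),
    "zero":     (-0.25, 0.0, 0.25),
    "positive": (0.0, 0.25, 0.5),
}
# Rule consequents: centroid of the reinforcement assigned to each linguistic state.
reinforcement_centroid = {"negative": -1.0, "zero": 1.0, "positive": -1.0}

def fuzzy_reinforcement(angle):
    """Centroid defuzzification of the rule firing strengths into a scalar reward."""
    degrees = {name: triangular(angle, *params) for name, params in angle_sets.items()}
    total = sum(degrees.values())
    if total == 0.0:
        return -1.0  # far outside all sets: treat as failure
    return sum(degrees[n] * reinforcement_centroid[n] for n in degrees) / total

print(fuzzy_reinforcement(0.05))   # near upright -> close to +1
print(fuzzy_reinforcement(0.2))    # leaning      -> negative reinforcement
```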

Parameter Estimation of Recurrent Neural Equalizers Using the Derivative-Free Kalman Filter

  • Kwon, Oh-Shin
    • Journal of Information and Communication Convergence Engineering / Vol. 8 No. 3 / pp.267-272 / 2010
  • For the last decade, recurrent neural networks (RNNs) have been commonly applied to communication channel equalization. The major problems of the gradient-based learning techniques employed to train recurrent neural networks are slow convergence rates and long training sequences. In high-speed communication systems, short training symbols and fast convergence are essential. In this paper, the derivative-free Kalman filter, the so-called unscented Kalman filter (UKF), for training a fully connected RNN is presented in a state-space formulation of the system. The main features of the proposed recurrent neural equalizer are fast convergence and good performance using relatively short training symbols, without derivative computation. The performance of the RNN with the derivative-free Kalman filter is evaluated through experiments on nonlinear channel equalization.
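The sketch below illustrates the state-space idea in the abstract above: the weights of a tiny recurrent equalizer are treated as the state of an unscented Kalman filter and updated from received symbols, with no gradient computation. It leans on the filterpy library as one off-the-shelf UKF implementation and invents a toy nonlinear channel; the network size, noise levels, and channel model are assumptions for illustration only, not the paper's setup.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

N_W = 4  # weights of a tiny recurrent equalizer: [w_in, w_rec, w_out, bias]

def equalizer_output(w, x, y_prev):
    """One step of a one-neuron recurrent equalizer with weight vector w."""
    s = np.tanh(w[0] * x + w[1] * y_prev + w[3])
    return w[2] * s

# State transition: weights follow a random walk (identity dynamics).
fx = lambda w, dt: w
# Measurement: equalizer output for the current received sample (set each step).
current = {"x": 0.0, "y_prev": 0.0}
hx = lambda w: np.array([equalizer_output(w, current["x"], current["y_prev"])])

points = MerweScaledSigmaPoints(n=N_W, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=N_W, dim_z=1, dt=1.0, hx=hx, fx=fx, points=points)
ukf.x = np.zeros(N_W)
ukf.P *= 0.1
ukf.Q = 1e-4 * np.eye(N_W)   # slow drift of the weights
ukf.R = np.array([[0.01]])   # measurement noise

rng = np.random.default_rng(0)
sent = rng.choice([-1.0, 1.0], size=200)                          # training symbols
received = np.tanh(0.8 * sent) + 0.05 * rng.normal(size=200)      # toy nonlinear channel

y_prev = 0.0
for x, d in zip(received, sent):
    current["x"], current["y_prev"] = x, y_prev
    ukf.predict()
    ukf.update(np.array([d]))                # desired symbol acts as the measurement
    y_prev = equalizer_output(ukf.x, x, y_prev)
```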

A Study on Development of Embedded System for Speech Recognition using Multi-layer Recurrent Neural Prediction Models & HMM

  • 김정훈;장원일;김영탁;이상배
    • 한국지능시스템학회논문지 / Vol. 14 No. 3 / pp.273-278 / 2004
  • In this paper, recurrent neural networks (RNN) are applied to complement the HMM recognition algorithm commonly used as a main recognizer. Among recurrent neural networks, the Multi-layer Recurrent Neural Prediction Model (MRNPM), which allows real-time operation, is used to implement learning and recognition, and a hybrid main recognizer is designed by combining HMM and MRNPM. In speaker-independent recognition tests on hard-to-distinguish Korean digit sounds (13 words), the designed speech recognition algorithm improved the recognition rate by about 5% over the conventional HMM recognizer. Based on this result, an embedded speech recognition system was implemented in an actual DSP (TMS320C6711) environment by extracting only the optimal recognition code. The embedded implementation likewise yielded a recognition system superior to the existing HMM-only system.
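A small sketch of the prediction-error scoring idea that underlies neural prediction models such as the one in the abstract above: each word has its own recurrent predictor, an unknown utterance is scored by each predictor's accumulated prediction error, and the word with the smallest error wins, optionally blended with an HMM score in a hybrid. The predictor form, the combination weight, and all array shapes are illustrative assumptions, not the MRNPM itself.

```python
import numpy as np

def prediction_error(frames, W_rec, W_out):
    """Accumulated one-step prediction error of a simple recurrent predictor."""
    h = np.zeros(W_rec.shape[0])
    err = 0.0
    for t in range(len(frames) - 1):
        h = np.tanh(W_rec @ h + frames[t])       # recurrent state from the current frame
        pred = W_out @ h                         # predict the next frame
        err += np.sum((frames[t + 1] - pred) ** 2)
    return err

def recognize(frames, word_models, hmm_loglik=None, alpha=0.5):
    """Pick the word whose predictor explains the utterance best.

    If HMM log-likelihoods are given, blend them with the (negated) prediction
    error to mimic a hybrid HMM + neural-prediction recognizer.
    """
    scores = {}
    for word, (W_rec, W_out) in word_models.items():
        score = -prediction_error(frames, W_rec, W_out)
        if hmm_loglik is not None:
            score = alpha * score + (1 - alpha) * hmm_loglik[word]
        scores[word] = score
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
dim = 12                                          # e.g. 12 cepstral coefficients per frame
word_models = {w: (rng.normal(scale=0.2, size=(dim, dim)),
                   rng.normal(scale=0.2, size=(dim, dim))) for w in ["il", "i", "sam"]}
frames = rng.normal(size=(30, dim))               # feature frames of an unknown utterance
print(recognize(frames, word_models))
```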

Control of Chaos Dynamics in Jordan Recurrent Neural Networks

  • Jin, Sang-Ho;Kenichi, Abe
    • 제어로봇시스템학회:학술대회논문집 / ICCAS 2001 / pp.43.1-43 / 2001
  • We propose two methods for controlling the Lyapunov exponents of Jordan-type recurrent neural networks. Both methods are formulated as gradient-based learning. The first method is derived strictly from the definition of the Lyapunov exponents as represented by the state transitions of the recurrent network. It can control the complete set of exponents, called the Lyapunov spectrum; however, it is computationally expensive because of the inherently recursive way in which the changes of the network parameters are calculated. This recursive calculation also causes unstable control when at least one of the exponents is positive, such as the largest Lyapunov exponent in recurrent networks with chaotic dynamics. To improve stability in the chaotic situation, we propose a non-recursive formulation by approximating ...

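For orientation, the sketch below estimates the largest Lyapunov exponent of a small Jordan-style recurrent network from its state transitions, using the standard Jacobian product with periodic renormalization. It only illustrates how such an exponent is measured; the paper's control and learning rules are not reproduced, and the network size, weights, and activation are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5                                        # number of output/context units
W_c = rng.normal(scale=1.5, size=(n, n))     # context -> hidden weights
W_y = rng.normal(scale=1.5, size=(n, n))     # hidden -> output weights

def step(y):
    """Autonomous Jordan-style update: the output is fed back as context, no external input."""
    h = np.tanh(W_c @ y)
    return np.tanh(W_y @ h), h

def largest_lyapunov(y0, T=2000):
    """Estimate the largest Lyapunov exponent along the trajectory from y0."""
    y, v = y0, rng.normal(size=n)
    v /= np.linalg.norm(v)
    acc = 0.0
    for _ in range(T):
        y_next, h = step(y)
        # Jacobian of the map y -> y_next, via the chain rule through both tanh layers.
        J = np.diag(1 - y_next**2) @ W_y @ np.diag(1 - h**2) @ W_c
        v = J @ v
        norm = np.linalg.norm(v)
        acc += np.log(norm)
        v /= norm                            # renormalize to avoid overflow/underflow
        y = y_next
    return acc / T

print(largest_lyapunov(rng.normal(size=n)))  # > 0 suggests chaotic dynamics
```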

The Precision Position Control of the Pneumatic Rodless Cylinder Using Recurrent Neural Networks

  • 노철하;김영식;김상희
    • 한국정밀공학회지 / Vol. 20 No. 7 / pp.84-90 / 2003
  • This paper develops a control method composed of a proportional control algorithm and a learning algorithm based on recurrent neural networks (RNN) for the position control of a pneumatic rodless cylinder. The proportional control algorithm is designed for a model of the pneumatic system obtained by simplifying the system, and the RNN compensates for the modeling errors and uncertainties of the pneumatic system. In the proportional control, two zones are defined in the phase plane: a transient zone for smooth tracking and a small-movement zone for accurate positioning that eliminates the stick-slip phenomenon. The RNN is connected in parallel with the proportional controller to compensate for modeling errors, friction, compressibility, and parameter uncertainties in the pneumatic control system. The feasibility of the proposed control algorithm for such pneumatic systems is verified experimentally.
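A minimal sketch of the parallel structure the abstract above describes: a proportional action whose gain is scheduled by a transient versus small-movement zone, with a tiny recurrent compensator added in parallel and adapted online. The gains, zone threshold, compensator size, and adaptation rule are illustrative assumptions; the paper's plant model and learning algorithm are not reproduced.

```python
import numpy as np

class RecurrentCompensator:
    """Tiny recurrent network that learns a corrective term from the tracking error."""

    def __init__(self, n_hidden=8, lr=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.1, size=(n_hidden, 2))   # inputs: [error, error_rate]
        self.W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
        self.w_out = rng.normal(scale=0.1, size=n_hidden)
        self.h = np.zeros(n_hidden)
        self.lr = lr

    def forward(self, error, error_rate):
        self.h = np.tanh(self.W_in @ np.array([error, error_rate]) + self.W_rec @ self.h)
        return self.w_out @ self.h

    def adapt(self, error):
        # Simple gradient step on the output weights so the correction shrinks the error.
        self.w_out += self.lr * error * self.h

def control(error, error_rate, compensator, kp_transient=2.0, kp_fine=6.0, zone=0.01):
    """Proportional action (gain scheduled by zone) plus the RNN correction in parallel."""
    kp = kp_fine if abs(error) < zone else kp_transient   # small-movement vs transient zone
    u = kp * error + compensator.forward(error, error_rate)
    compensator.adapt(error)
    return u

comp = RecurrentCompensator()
print(control(error=0.05, error_rate=-0.1, compensator=comp))    # transient zone
print(control(error=0.004, error_rate=-0.01, compensator=comp))  # small-movement zone
```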

GRADIENT EXPLOSION FREE ALGORITHM FOR TRAINING RECURRENT NEURAL NETWORKS

  • HONG, SEOYOUNG;JEON, HYERIN;LEE, BYUNGJOON;MIN, CHOHONG
    • Journal of the Korean Society for Industrial and Applied Mathematics / Vol. 24 No. 4 / pp.331-350 / 2020
  • Exploding gradients are a widely known problem in training recurrent neural networks. The explosion problem has often been handled by cutting the gradient norm off at some fixed value. However, this strategy, commonly referred to as norm clipping, is an ad hoc way of attenuating the explosion. In this research, we view the problem from a different perspective, discrete-time optimal control with infinite horizon, for a better understanding of the problem. Through this perspective, we characterize the region in which gradient explosion occurs. Based on the analysis, we introduce a gradient-explosion-free algorithm that keeps the training process away from that region. Numerical tests show that this algorithm is at least three times faster than the clipping strategy.
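For context, the snippet below shows the norm-clipping baseline that the abstract above argues against, using PyTorch's torch.nn.utils.clip_grad_norm_ inside an ordinary RNN training step. The model, toy data, and clipping threshold are placeholders, and the paper's explosion-free algorithm itself is not reproduced here.

```python
import torch
import torch.nn as nn

# Placeholder sequence model and toy batch; sizes are arbitrary.
model = nn.RNN(input_size=8, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)
params = list(model.parameters()) + list(head.parameters())
optimizer = torch.optim.SGD(params, lr=0.1)

x = torch.randn(16, 50, 8)     # (batch, time, features)
target = torch.randn(16, 1)

for _ in range(5):
    optimizer.zero_grad()
    out, _ = model(x)
    loss = nn.functional.mse_loss(head(out[:, -1]), target)
    loss.backward()
    # Norm clipping: rescale all gradients so their global norm is at most 1.0.
    torch.nn.utils.clip_grad_norm_(params, max_norm=1.0)
    optimizer.step()
```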

A Study on Speech Recognition using Recurrent Neural Networks

  • 한학용;김주성;허강인
    • 한국음향학회지 / Vol. 18 No. 3 / pp.62-67 / 1999
  • This paper is a study of speech recognition using recurrent neural networks. Each syllable is modeled with a predictive neural network, and for an unknown input utterance the model with the minimum prediction error is taken as the recognition result. To absorb the time-varying nature of speech inside the network, recurrent predictive neural networks with a dynamic, recurrent structure were constructed, and recognition performance was compared between the recurrent structures proposed by Elman and by Jordan. The ETRI 샘돌이 speech data were used as the speech database. To find the optimal network model, the recognition rate was examined while varying the prediction order and the number of hidden units, and also for the case in which an autoregressive coefficient is placed on the context layer so that previous values accumulate there. The experiments showed that the optimal prediction order, number of hidden units, and autoregressive coefficient differed with the network structure; overall, the Jordan network achieved higher recognition rates than the Elman network; and the effect of the autoregressive coefficient was irregular, depending on the network structure and the coefficient value.

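To make the Elman/Jordan comparison in the abstract above concrete, the sketch below writes out both context-layer updates, including the optional self-recurrent (autoregressive) coefficient on the context units. The layer sizes, the coefficient value, and the tanh nonlinearity are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def rnn_step(x, context, W_x, W_c, W_o):
    """Shared body: hidden from input + context, then output from hidden."""
    h = np.tanh(W_x @ x + W_c @ context)
    return np.tanh(W_o @ h), h

def run(frames, kind, alpha=0.0, n_hid=8, n_out=12, seed=0):
    """Run a predictive recurrent net with an Elman or Jordan context layer.

    Elman: the context holds the previous hidden activations.
    Jordan: the context holds the previous outputs.
    alpha is the autoregressive coefficient that lets old context values accumulate.
    """
    rng = np.random.default_rng(seed)
    n_in = frames.shape[1]
    n_ctx = n_hid if kind == "elman" else n_out
    W_x = rng.normal(scale=0.2, size=(n_hid, n_in))
    W_c = rng.normal(scale=0.2, size=(n_hid, n_ctx))
    W_o = rng.normal(scale=0.2, size=(n_out, n_hid))

    context = np.zeros(n_ctx)
    outputs = []
    for x in frames:
        y, h = rnn_step(x, context, W_x, W_c, W_o)
        fed_back = h if kind == "elman" else y
        context = alpha * context + fed_back     # alpha = 0 recovers the plain context layer
        outputs.append(y)
    return np.array(outputs)

frames = np.random.default_rng(1).normal(size=(20, 12))   # toy feature frames
print(run(frames, "elman").shape, run(frames, "jordan", alpha=0.3).shape)
```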