• Title/Summary/Keyword: RNN (recurrent neural networks)


System Identification Using Hybrid Recurrent Neural Networks (Hybrid 리커런트 신경망을 이용한 시스템 식별)

  • Choi Han-Go;Go Il-Whan;Kim Jong-In
    • Journal of the Institute of Convergence Signal Processing / v.6 no.1 / pp.45-52 / 2005
  • Dynamic neural networks have been applied to diverse fields requiring temporal signal processing. This paper describes system identification using a hybrid neural network composed of locally recurrent (LRNN) and globally recurrent (GRNN) neural networks to improve the dynamics of multilayered recurrent networks (RNNs). The hybrid structure combines an IIR-MLP as the LRNN and an Elman RNN as the GRNN. The hybrid network is evaluated on linear and nonlinear system identification and compared with the Elman RNN and IIR-MLP networks. Simulation results show that the hybrid network performs better with respect to convergence and accuracy, indicating that it can be a more effective network than conventional multilayered recurrent networks for system identification.
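
The following is a minimal NumPy sketch of the hybrid idea, not the paper's exact model: an Elman-style hidden layer provides the global recurrence, and a first-order IIR feedback term on the output stands in for the IIR-MLP's locally recurrent synapses. All sizes and coefficients are assumptions.

```python
# Minimal sketch of a hybrid LRNN/GRNN forward pass (assumed shapes and coefficients).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 1, 8, 1

W_in  = rng.normal(scale=0.3, size=(n_hid, n_in))   # input  -> hidden
W_rec = rng.normal(scale=0.3, size=(n_hid, n_hid))  # hidden -> hidden (Elman context)
W_out = rng.normal(scale=0.3, size=(n_out, n_hid))  # hidden -> output
a = 0.5                                             # first-order IIR feedback (local recurrence)

def hybrid_forward(u_seq):
    """Run the hybrid network over an input sequence u_seq of shape (T, n_in)."""
    h = np.zeros(n_hid)
    y_prev = np.zeros(n_out)
    outputs = []
    for u in u_seq:
        h = np.tanh(W_in @ u + W_rec @ h)   # global (Elman) recurrence
        y = W_out @ h + a * y_prev          # local (IIR) recurrence on the output
        y_prev = y
        outputs.append(y)
    return np.array(outputs)

# For identification, the weights would be trained so that the network output matches
# the plant's recorded output; here we only run the untrained network to check shapes.
u_seq = rng.normal(size=(100, 1))
print(hybrid_forward(u_seq).shape)   # (100, 1)
```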

Nonlinear Adaptive Prediction using Locally and Globally Recurrent Neural Networks (지역 및 광역 리커런트 신경망을 이용한 비선형 적응예측)

  • 최한고
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.1 / pp.139-147 / 2003
  • Dynamic neural networks have been applied to diverse fields requiring temporal signal processing, such as signal prediction. This paper proposes a hybrid network composed of locally recurrent (LRNN) and globally recurrent (GRNN) neural networks to improve the dynamics of multilayered recurrent networks (RNNs), and then describes nonlinear adaptive prediction using the proposed network as an adaptive filter. The hybrid network consists of an IIR-MLP and an Elman RNN as the LRNN and GRNN, respectively. The proposed network is evaluated on nonlinear signal prediction and compared with the Elman RNN and IIR-MLP networks. Experimental results show that the hybrid network performs better with respect to convergence speed and accuracy, indicating that the proposed network can be a more effective prediction model than conventional multilayered recurrent networks for nonlinear prediction of nonstationary signals.
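
A minimal sketch of nonlinear adaptive one-step-ahead prediction under assumed simplifications: an Elman-style RNN produces recurrent features of the signal, and only a linear readout is adapted online with an LMS-style update on the prediction error, whereas the paper adapts the full hybrid network with gradient-based learning.

```python
# Online one-step-ahead prediction with a recurrent feature map and an LMS readout
# (assumed hidden size, step size, and toy signal).
import numpy as np

rng = np.random.default_rng(1)
n_hid, mu = 16, 0.05                                 # hidden size and LMS step size (assumed)
W_in  = rng.normal(scale=0.3, size=(n_hid, 1))
W_rec = rng.normal(scale=0.3, size=(n_hid, n_hid))
w_out = np.zeros(n_hid)                              # adaptive readout weights

x = np.sin(0.05 * np.arange(500)) + 0.1 * rng.normal(size=500)  # toy signal
h = np.zeros(n_hid)
errors = []
for k in range(len(x) - 1):
    h = np.tanh(W_in @ np.array([x[k]]) + W_rec @ h)  # recurrent feature of the past signal
    x_hat = w_out @ h                                 # predicted x[k+1]
    e = x[k + 1] - x_hat                              # prediction error
    w_out += mu * e * h                               # LMS update of the readout only
    errors.append(e * e)

print("mean squared prediction error (last 100 steps):", np.mean(errors[-100:]))
```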

Understanding recurrent neural network for texts using English-Korean corpora

  • Lee, Hagyeong;Song, Jongwoo
    • Communications for Statistical Applications and Methods / v.27 no.3 / pp.313-326 / 2020
  • Deep learning is the most important key to the development of Artificial Intelligence (AI). There are several distinguishable neural network architectures, such as the MLP, CNN, and RNN. Among them, we try to understand one of the main architectures, the Recurrent Neural Network (RNN), which differs from other networks in handling sequential data, including time series and texts. As one of the main recent tasks in Natural Language Processing (NLP), we consider Neural Machine Translation (NMT) using RNNs. We also summarize the fundamental structures of recurrent networks and some approaches for representing natural words as reasonable numeric vectors. We organize topics to understand the estimation procedure from representing input source sequences to predicting target translated sequences. In addition, we apply multiple translation models with Gated Recurrent Units (GRUs) in Keras to English-Korean sentences containing about 26,000 pairwise sequences in total from two different corpora, colloquialism and news. We verified some crucial factors that influence the quality of training. We found that loss decreases with more recurrent dimensions and with a bidirectional RNN in the encoder when dealing with short sequences. We also computed BLEU scores, the main measure of translation performance, and compared them with the scores from Google Translate using the same test sentences. We sum up some difficulties in training a proper translation model as well as in dealing with the Korean language. The use of Keras in Python for the overall tasks, from processing raw texts to evaluating the translation model, also allows us to include some useful functions and vocabulary libraries.
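
A minimal Keras sketch of a GRU encoder-decoder of the kind described above, with a bidirectional encoder; the vocabulary sizes, embedding and hidden dimensions are assumptions, and the paper's exact configuration (and its attention-free details) is not reproduced.

```python
# GRU encoder-decoder for translation in Keras (assumed sizes, no attention).
import tensorflow as tf
from tensorflow.keras import layers, Model

src_vocab, tgt_vocab, emb_dim, units = 8000, 8000, 128, 256   # assumed sizes

# Encoder: embeddings -> bidirectional GRU; the two directions' final states are concatenated.
enc_in = layers.Input(shape=(None,), name="source_tokens")
enc_emb = layers.Embedding(src_vocab, emb_dim, mask_zero=True)(enc_in)
enc_out, fwd_state, bwd_state = layers.Bidirectional(
    layers.GRU(units, return_state=True))(enc_emb)
enc_state = layers.Concatenate()([fwd_state, bwd_state])

# Decoder: embeddings -> GRU initialized with the encoder state -> softmax over target vocab.
dec_in = layers.Input(shape=(None,), name="target_tokens_shifted")
dec_emb = layers.Embedding(tgt_vocab, emb_dim, mask_zero=True)(dec_in)
dec_out = layers.GRU(2 * units, return_sequences=True)(dec_emb, initial_state=enc_state)
logits = layers.Dense(tgt_vocab, activation="softmax")(dec_out)

model = Model([enc_in, dec_in], logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```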

Time-Series Prediction of Baltic Dry Index (BDI) Using an Application of Recurrent Neural Networks (Recurrent Neural Networks를 활용한 Baltic Dry Index (BDI) 예측)

  • Han, Min-Soo;Yu, Song-Jin
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2017.11a / pp.50-53 / 2017
  • Understanding economic trends has grown in importance, and prediction is needed to cope with the uncertainty of the prolonged maritime recession. This paper discusses the prediction of the BDI with artificial neural networks (ANNs), one of the emerging approaches for problems that are difficult to solve otherwise. We propose a prediction based on neural networks with recurrent architectures, namely a Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM). For comparison, a Multi-Layer Perceptron (MLP) was trained on data from 2009.04.01 to 2017.07.31, and a conventional statistical prediction tool, ARIMA, was also compared. As a result, the recurrent networks, especially the RNN, outperformed the others, and the applicability of LSTM to this specific time series (BDI) was also demonstrated.
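
A minimal sketch of windowed one-step-ahead forecasting with an LSTM in Keras, as in the comparison above; the window length, layer sizes, and the synthetic series standing in for the BDI data are assumptions.

```python
# Sliding-window time-series forecasting with an LSTM (assumed window and sizes).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Sequential

def make_windows(series, window=20):
    """Turn a 1-D series into (samples, window, 1) inputs and next-step targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y

series = np.cumsum(np.random.default_rng(2).normal(size=2000))  # synthetic stand-in for the index
X, y = make_windows(series)

model = Sequential([
    layers.Input(shape=(X.shape[1], 1)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print("one-step forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```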

Water Level Forecasting based on Deep Learning: A Use Case of Trinity River-Texas-The United States (딥러닝 기반 침수 수위 예측: 미국 텍사스 트리니티강 사례연구)

  • Tran, Quang-Khai;Song, Sa-kwang
    • Journal of KIISE / v.44 no.6 / pp.607-612 / 2017
  • This paper presents an attempt to apply deep learning technology to the problem of forecasting floods in urban areas. We employ Recurrent Neural Networks (RNNs), which are suitable for analyzing time-series data, to learn observed river data and to predict the water level. To test the model, we use water observation data from a station on the Trinity River, Texas, USA, with data from 2013 to 2015 for training and data from 2016 for testing. The input to the neural networks is a 16-record sequence of 15-minute-interval time-series data, and the output is the predicted water level 30 and 60 minutes ahead. In the experiment, we compare three deep learning models: a standard RNN, an RNN trained with Back Propagation Through Time (RNN-BPTT), and Long Short-Term Memory (LSTM). The LSTM achieves a Nash efficiency exceeding 0.98, while the standard RNN and RNN-BPTT also provide very high accuracy.
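
A minimal Keras sketch of the setup described above: a 16-step input window of 15-minute readings and a two-value output for the 30- and 60-minute-ahead levels, evaluated with the Nash-Sutcliffe efficiency. Layer sizes are assumptions and synthetic data stands in for the observation records.

```python
# 16-step window in, two forecast horizons out, scored with Nash-Sutcliffe efficiency.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Sequential

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

model = Sequential([
    layers.Input(shape=(16, 1)),      # 16 consecutive 15-minute water-level readings
    layers.LSTM(32),
    layers.Dense(2),                  # predictions 30 and 60 minutes ahead
])
model.compile(optimizer="adam", loss="mse")

# Toy usage with synthetic data, only to exercise the shapes and the metric.
rng = np.random.default_rng(3)
X = rng.normal(size=(256, 16, 1))
y = rng.normal(size=(256, 2))
model.fit(X, y, epochs=1, verbose=0)
print("NSE (30 min):", nash_sutcliffe(y[:, 0], model.predict(X, verbose=0)[:, 0]))
```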

Estimating speech parameters for ultrasonic Doppler signal using LSTM recurrent neural networks (LSTM 순환 신경망을 이용한 초음파 도플러 신호의 음성 패러미터 추정)

  • Joo, Hyeong-Kil;Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea / v.38 no.4 / pp.433-441 / 2019
  • In this paper, a method of estimating speech parameters from ultrasonic Doppler signals reflected from the articulatory muscles using an LSTM (Long Short-Term Memory) RNN (Recurrent Neural Network) is introduced and compared with a method using MLPs (Multi-Layer Perceptrons). The LSTM RNN is used to estimate the Fourier transform coefficients of the speech signal from the ultrasonic Doppler signal. The log energy values of the Mel frequency bands and the Fourier transform coefficients, extracted from the ultrasonic Doppler signal and the speech signal respectively, are used as the input and reference for training the LSTM RNN. The performance of the LSTM RNN and the MLP was evaluated and compared in experiments on test data, with the RMSE (Root Mean Squared Error) as the measure. The RMSEs were 0.5810 and 0.7380, respectively, a difference of about 0.1570, confirming that the LSTM RNN-based method performs better.
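
A minimal Keras sketch of the frame-wise mapping described above, from Mel-band log energies of the Doppler signal to spectral coefficients of speech, evaluated with RMSE; the feature dimensions and layer sizes are assumptions, not the paper's values.

```python
# Sequence-to-sequence regression from Doppler Mel log energies to spectral coefficients.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Sequential

n_mel, n_fft_coef = 26, 64                      # assumed input/output dimensions per frame

model = Sequential([
    layers.Input(shape=(None, n_mel)),          # sequence of Doppler-signal Mel log energies
    layers.LSTM(128, return_sequences=True),
    layers.TimeDistributed(layers.Dense(n_fft_coef)),   # per-frame spectral coefficients
])
model.compile(optimizer="adam", loss="mse")

def rmse(ref, est):
    """Root mean squared error, the measure used in the comparison above."""
    return float(np.sqrt(np.mean((np.asarray(ref) - np.asarray(est)) ** 2)))

# Toy usage with random frames, only to show shapes and the metric.
rng = np.random.default_rng(4)
X = rng.normal(size=(8, 100, n_mel))
Y = rng.normal(size=(8, 100, n_fft_coef))
model.fit(X, Y, epochs=1, verbose=0)
print("RMSE:", rmse(Y, model.predict(X, verbose=0)))
```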

The Precision Position Control of the Pneumatic Rodless Cylinder Using Recurrent Neural Networks (리커런트 신경회로망을 이용한 공압 로드레스 실린더의 정밀위치제어)

  • 노철하;김영식;김상희
    • Journal of the Korean Society for Precision Engineering / v.20 no.7 / pp.84-90 / 2003
  • This paper develops a control method composed of a proportional control algorithm and a learning algorithm based on recurrent neural networks (RNNs) for the position control of a pneumatic rodless cylinder. The proportional control algorithm is designed for the modeled pneumatic system, which is easily obtained by simplifying the system, and the RNN is used to compensate for the modeling errors and uncertainties of the pneumatic system. In the proportional control, two zones are defined in the phase plane: a transient zone for smooth tracking, and a small-movement zone for accurate position control that eliminates the stick-slip phenomenon. The RNN is connected in parallel with the proportional controller to compensate for modeling errors, friction, compressibility, and parameter uncertainties in the pneumatic control system. The feasibility of the proposed control algorithm for such pneumatic systems is verified experimentally.
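
A minimal NumPy sketch of the parallel structure described above: the control input is the proportional term plus a correction from an Elman-style RNN. The plant is a toy first-order system rather than a cylinder model, the compensator is untrained, and the two-zone logic is omitted; all gains and sizes are assumptions.

```python
# Proportional control with an RNN compensator connected in parallel (toy plant, assumed gains).
import numpy as np

rng = np.random.default_rng(5)
n_hid = 8
W_in  = rng.normal(scale=0.1, size=(n_hid, 2))   # inputs: position error and its change
W_rec = rng.normal(scale=0.1, size=(n_hid, n_hid))
w_out = rng.normal(scale=0.1, size=n_hid)

Kp, dt = 2.0, 0.01                               # proportional gain and time step (assumed)
x, h, e_prev, target = 0.0, np.zeros(n_hid), 0.0, 1.0

for _ in range(500):
    e = target - x
    h = np.tanh(W_in @ np.array([e, e - e_prev]) + W_rec @ h)
    u = Kp * e + w_out @ h                       # proportional term + RNN compensation
    x += dt * (-x + u)                           # toy first-order plant, not a cylinder model
    e_prev = e

print("final position error:", target - x)
```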

Sequence-Based Travel Route Recommendation Systems Using Deep Learning - A Case of Jeju Island - (딥러닝을 이용한 시퀀스 기반의 여행경로 추천시스템 -제주도 사례-)

  • Lee, Hee Jun;Lee, Won Sok;Choi, In Hyeok;Lee, Choong Kwon
    • Smart Media Journal / v.9 no.1 / pp.45-50 / 2020
  • With the development of deep learning, studies using artificial neural networks based on deep learning in recommendation systems are being actively conducted. In particular, recommendation systems based on RNNs (Recurrent Neural Networks) show good performance because they consider the sequential characteristics of the data. This study proposes a travel route recommendation system using a GRU (Gated Recurrent Unit) and session-based parallel mini-batches, which are RNN-based techniques. The study improved the recommendation performance through an ensemble of the TOP1 and BPR (Bayesian personalized ranking) error functions. In addition, it was confirmed that an RNN-based recommendation system that considers the sequential characteristics of the data makes recommendations reflecting the meaning of the travel destinations inherent in the travel route.
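
A minimal TensorFlow sketch of the two pairwise ranking losses mentioned above, as used in GRU4Rec-style session models; the score layout (first column holds the positive item, the rest are sampled negatives) is an assumption for illustration.

```python
# TOP1 and BPR pairwise ranking losses over per-session item scores.
import tensorflow as tf

def bpr_loss(scores):
    """BPR: -log sigmoid(positive score - negative score), averaged over negatives."""
    pos = scores[:, :1]                 # (batch, 1) positive-item scores
    neg = scores[:, 1:]                 # (batch, n_neg) sampled negative scores
    return -tf.reduce_mean(tf.math.log(tf.sigmoid(pos - neg) + 1e-12))

def top1_loss(scores):
    """TOP1: sigmoid(neg - pos) plus a sigmoid(neg^2) regularization term, averaged."""
    pos = scores[:, :1]
    neg = scores[:, 1:]
    return tf.reduce_mean(tf.sigmoid(neg - pos) + tf.sigmoid(tf.square(neg)))

# Toy usage: 4 sessions, 1 positive and 5 sampled negative item scores each.
scores = tf.random.normal((4, 6))
print(float(bpr_loss(scores)), float(top1_loss(scores)))
```

An ensemble in the spirit of the study above could simply combine the two, e.g. a weighted sum of the losses during training.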

Parameter Estimation of Recurrent Neural Networks Using A Unscented Kalman Filter Training Algorithm and Its Applications to Nonlinear Channel Equalization (언센티드 칼만필터 훈련 알고리즘에 의한 순환신경망의 파라미터 추정 및 비선형 채널 등화에의 응용)

  • Kwon Oh-Shin
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.5 / pp.552-559 / 2005
  • Recurrent neural networks (RNNs) trained with gradient-based algorithms such as real-time recurrent learning (RTRL) suffer from slow convergence rates. These algorithms also require derivative calculations, which are not trivial in the error back-propagation process. In this paper, a derivative-free Kalman filter, the so-called unscented Kalman filter (UKF), for training a fully connected RNN is presented in a state-space formulation of the system. The derivative-free Kalman filter learning algorithm gives the RNN fast convergence speed and good tracking performance without derivative computation. The performance of RNNs with the derivative-free Kalman filter learning algorithm is evaluated through experiments on nonlinear channel equalization.
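
A minimal NumPy sketch of the state-space idea described above: the RNN weights are the state with a random-walk process model, and the network output is the scalar measurement updated by a bare-bones unscented Kalman step. The tiny network, noise levels, and scaling parameter are assumptions, not the paper's settings.

```python
# Derivative-free (unscented) Kalman filter estimation of RNN weights (assumed sizes/noise).
import numpy as np

rng = np.random.default_rng(6)
n_hid = 3
n_w = n_hid + n_hid * n_hid + n_hid        # W_in, W_rec, w_out for a 1-input/1-output RNN

def rnn_output(w, u_seq):
    """Run a tiny fully recurrent network defined by the weight vector w over u_seq."""
    W_in  = w[:n_hid].reshape(n_hid, 1)
    W_rec = w[n_hid:n_hid + n_hid * n_hid].reshape(n_hid, n_hid)
    w_out = w[-n_hid:]
    h = np.zeros(n_hid)
    for u in u_seq:
        h = np.tanh(W_in @ np.array([u]) + W_rec @ h)
    return w_out @ h                        # scalar output after the sequence

def ukf_step(w, P, u_seq, y_obs, q=1e-4, r=1e-2, kappa=0.0):
    """One unscented Kalman update of the weight estimate w and its covariance P."""
    n = len(w)
    P = P + q * np.eye(n)                                   # predict: random-walk weights
    S = np.linalg.cholesky((n + kappa) * P)
    sigma = np.vstack([w, w + S.T, w - S.T])                # 2n+1 sigma points
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    Wm[0] = kappa / (n + kappa)
    y_sig = np.array([rnn_output(s, u_seq) for s in sigma]) # propagate through the RNN
    y_hat = Wm @ y_sig
    P_yy = Wm @ (y_sig - y_hat) ** 2 + r                    # innovation variance
    P_wy = (Wm * (y_sig - y_hat)) @ (sigma - w)             # cross-covariance
    K = P_wy / P_yy                                         # Kalman gain (vector)
    return w + K * (y_obs - y_hat), P - np.outer(K, K) * P_yy

# Toy usage: nudge the weights so the RNN maps a short input sequence to a target value.
w, P = 0.1 * rng.normal(size=n_w), 0.1 * np.eye(n_w)
for _ in range(20):
    w, P = ukf_step(w, P, u_seq=[1.0, 0.5, -0.3], y_obs=0.7)
print("output after UKF training:", rnn_output(w, [1.0, 0.5, -0.3]))
```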

Parameter Estimation of Recurrent Neural Equalizers Using the Derivative-Free Kalman Filter

  • Kwon, Oh-Shin
    • Journal of information and communication convergence engineering / v.8 no.3 / pp.267-272 / 2010
  • For the last decade, recurrent neural networks (RNNs) have been commonly applied to communications channel equalization. The major problems of the gradient-based learning techniques employed to train recurrent neural networks are slow convergence rates and long training sequences. In high-speed communications systems, short training symbols and fast convergence are essential. In this paper, the derivative-free Kalman filter, the so-called unscented Kalman filter (UKF), for training a fully connected RNN is presented in a state-space formulation of the system. The main features of the proposed recurrent neural equalizer are fast convergence and good performance using relatively short training symbols, without derivative computation. The performance of the RNN with a derivative-free Kalman filter is evaluated through experiments on nonlinear channel equalization.
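
As a companion to the equalization experiments mentioned above, the following is a sketch of a nonlinear channel model of the kind typically used in such studies; the FIR taps, the cubic distortion, and the noise level are assumptions, not taken from the paper.

```python
# Synthetic nonlinear channel: short FIR filter, memoryless cubic distortion, Gaussian noise.
import numpy as np

rng = np.random.default_rng(7)
symbols = rng.choice([-1.0, 1.0], size=1000)           # BPSK training symbols
h = np.array([0.26, 0.93, 0.26])                        # assumed linear FIR part of the channel
linear = np.convolve(symbols, h, mode="same")
received = linear + 0.2 * linear ** 3 + 0.05 * rng.normal(size=len(symbols))

# A recurrent equalizer would be trained to map `received` back to `symbols`;
# short training sequences and fast convergence are the stated requirements.
print(received[:5], symbols[:5])
```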