• Title/Summary/Keyword: recurrent neural network

Search Results: 571

Comparative Analysis of PM10 Prediction Performance between Neural Network Models

  • Jung, Yong-Jin;Oh, Chang-Heon
    • Journal of Information and Communication Convergence Engineering / v.19 no.4 / pp.241-247 / 2021
  • Particulate matter has emerged as a serious global problem, necessitating highly reliable prediction information. Therefore, various algorithms have been used in studies to predict particulate matter. In this study, we compared the prediction performance of neural network models that have been actively studied for particulate matter prediction. Among the neural network algorithms, a deep neural network (DNN), a recurrent neural network, and long short-term memory were used to design the optimal prediction model via a hyper-parameter search. In the comparison using root mean square error (RMSE) and accuracy as evaluation metrics, the DNN model showed a lower RMSE than the other algorithms, while the recurrent neural network achieved higher accuracy despite slightly lower stability than the other algorithms.
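
The RMSE metric used to rank the models above can be sketched as follows. The observations, predictions, and the resulting ranking here are hypothetical placeholders, not values from the study.

```python
import math

def rmse(actual, predicted):
    """Root mean square error between two equal-length sequences."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical PM10 observations and per-model predictions (ug/m^3).
observed = [35.0, 42.0, 51.0, 38.0, 47.0]
predictions = {
    "DNN":  [36.0, 41.0, 50.0, 39.0, 46.0],
    "RNN":  [33.0, 45.0, 48.0, 40.0, 49.0],
    "LSTM": [34.0, 44.0, 49.0, 36.0, 45.0],
}

# Rank the models by RMSE, as in the comparative analysis.
scores = {name: rmse(observed, pred) for name, pred in predictions.items()}
best = min(scores, key=scores.get)
```

With these toy numbers the DNN attains the lowest RMSE, mirroring the ranking reported in the abstract.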

A Recurrent Neural Network Training and Equalization of Channels using Sigma-point Kalman Filter (시그마포인트 칼만필터를 이용한 순환신경망 학습 및 채널등화)

  • Kwon, Oh-Shin
    • Proceedings of the KIEE Conference / 2007.04a / pp.3-5 / 2007
  • This paper presents decision feedback equalizers based on a recurrent neural network trained with the extended Kalman filter (EKF) and the sigma-point Kalman filter (SPKF). The EKF propagates the state analytically through a first-order linearization of the nonlinear system, which can introduce large errors in the true posterior mean and covariance of the Gaussian random variable. The SPKF addresses this problem by using a deterministic sampling approach. We describe the features of the proposed recurrent neural equalizer and compare the bit error rate (BER) of the EKF- and SPKF-trained equalizers.
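
The contrast between first-order linearization and deterministic sampling can be illustrated in the scalar case. The nonlinearity `f`, the moments, and the `kappa` scaling here are illustrative assumptions, not the paper's equalizer setup.

```python
import math

def sigma_point_mean(m, var, f, kappa=2.0):
    """Estimate E[f(X)] for X ~ N(m, var) with scalar sigma points,
    the deterministic-sampling idea behind the SPKF."""
    c = math.sqrt((1.0 + kappa) * var)
    points  = [m, m + c, m - c]
    weights = [kappa / (1.0 + kappa),
               0.5 / (1.0 + kappa),
               0.5 / (1.0 + kappa)]
    return sum(w * f(x) for w, x in zip(weights, points))

def linearized_mean(m, f):
    """First-order (EKF-style) approximation: E[f(X)] ~= f(E[X])."""
    return f(m)

# Hypothetical nonlinearity; for f(x) = x**2 the exact mean is m**2 + var.
f = lambda x: x * x
m, var = 1.0, 0.5
spkf_est = sigma_point_mean(m, var, f)   # captures the variance contribution
ekf_est  = linearized_mean(m, f)         # misses it entirely
```

For this quadratic nonlinearity the sigma-point estimate recovers the exact posterior mean, while the linearized estimate drops the variance term, which is the error source the abstract describes.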

Design of Recurrent Time Delayed Neural Network Controller Using Fuzzy Compensator (퍼지 보상기를 사용한 리커런트 시간지연 신경망 제어기 설계)

  • 이상윤;한성현;신위재
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2002.04a / pp.463-468 / 2002
  • In this paper, we propose a recurrent time-delayed neural network controller that compensates the output of a neural network controller. Even after the neural network controller has been trained, poor results can occur due to disturbances or load variations. To handle such cases, we use a fuzzy compensator to obtain the expected results. In addition, the weights of the main neural network are updated using the result of learning an inverse-model neural network of the plant, so that the expected dynamic characteristics of the plant can be obtained. Simulation results on a second-order plant confirm that the proposed recurrent time-delayed neural network controller achieves a better response than a time-delayed neural network controller.
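
The idea of adding a fuzzy correction on top of a neural controller's output can be sketched as below. The membership shapes, rule consequents, gain, and the stand-in neural output are all hypothetical; the paper's actual rule base is not given in the abstract.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership with support (a, c) and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_compensator(error, gain=0.5):
    """Sketch of a fuzzy compensator: weight a corrective term by
    'negative'/'zero'/'positive' memberships of the tracking error."""
    neg  = tri(error, -2.0, -1.0, 0.0)
    zero = tri(error, -1.0, 0.0, 1.0)
    pos  = tri(error, 0.0, 1.0, 2.0)
    total = neg + zero + pos
    if total == 0.0:
        return 0.0
    # Weighted-average (defuzzified) correction.
    return gain * (-1.0 * neg + 0.0 * zero + 1.0 * pos) / total

u_nn = 0.8                         # hypothetical neural controller output
u = u_nn + fuzzy_compensator(0.5)  # compensated control signal
```

The compensator contributes nothing when the error is zero and pushes the control signal in the direction that reduces a nonzero error, which is the compensation role described above.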

Sources separation of passive sonar array signal using recurrent neural network-based deep neural network with 3-D tensor (3-D 텐서와 recurrent neural network기반 심층신경망을 활용한 수동소나 다중 채널 신호분리 기술 개발)

  • Sangheon Lee;Dongku Jung;Jaesok Yu
    • The Journal of the Acoustical Society of Korea / v.42 no.4 / pp.357-363 / 2023
  • In underwater signal processing, separating individual signals from mixed signals has long been a challenge due to low signal quality. The common approach, spectrogram analysis based on the short-time Fourier transform, has been criticized for its complex parameter optimization and its loss of phase data. We propose a Triple-path Recurrent Neural Network, building on the success of the Dual-path Recurrent Neural Network in long time-series signal processing, to handle three-dimensional tensors formed from multi-channel sensor input signals. By dividing the input signals into short chunks and arranging them into a 3-D tensor, the method accounts for relationships within chunks, between chunks, and between channels, enabling both local and global feature learning. The proposed technique demonstrates improved root mean square error and scale-invariant signal-to-noise ratio compared with the existing method.
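
The chunking step that forms the 3-D tensor can be sketched as follows; the chunk length, hop size, and toy signal values are illustrative assumptions, not the paper's settings.

```python
def to_3d_tensor(signals, chunk_len, hop):
    """Split each channel of a multi-channel signal into short chunks,
    producing a (channels x chunks x chunk_len) nested-list tensor."""
    tensor = []
    for channel in signals:
        chunks = [channel[i:i + chunk_len]
                  for i in range(0, len(channel) - chunk_len + 1, hop)]
        tensor.append(chunks)
    return tensor

# Hypothetical 2-channel sensor signal of 8 samples per channel.
signals = [[0, 1, 2, 3, 4, 5, 6, 7],
           [7, 6, 5, 4, 3, 2, 1, 0]]
tensor = to_3d_tensor(signals, chunk_len=4, hop=2)  # 2 channels x 3 chunks x 4 samples
```

A recurrent model can then run along each of the three axes in turn (within chunks, across chunks, across channels), which is the triple-path structure the abstract describes.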

Sliding Mode Control based on Recurrent Neural Network (회귀신경망을 이용한 슬라이딩 모드 제어)

  • 홍경수;이건복
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2000.10a / pp.135-139 / 2000
  • This research proposes a nonlinear sliding mode control scheme. The sliding mode controller is designed according to a Lyapunov function, and the equivalent control term is estimated by a neural network. To estimate the unknown part of the control law online, a recurrent neural network serves as an online estimator, and the stability of the control system is guaranteed by its online learning ability. Simulation results for a nonlinear system confirm that the function approximation and the proposed control scheme are very effective.
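
The structure of such a control law can be sketched in one line: an estimated equivalent control plus a switching term on the sliding surface. The surface gain, switching gain, and the fixed estimate standing in for the recurrent-network output are all hypothetical.

```python
import math

def sliding_mode_control(e, e_dot, u_eq_hat, lam=2.0, k=5.0):
    """Sliding mode control law u = u_eq_hat - k * sign(s) with
    sliding surface s = e_dot + lam * e.  Here u_eq_hat stands in for
    the recurrent network's online estimate of the equivalent control."""
    s = e_dot + lam * e
    sign = 0.0 if s == 0 else math.copysign(1.0, s)
    return u_eq_hat - k * sign

u = sliding_mode_control(e=0.5, e_dot=-0.2, u_eq_hat=1.0)
```

On the sliding surface (s = 0) the switching term vanishes and the estimated equivalent control alone drives the plant, which is why the quality of the online estimator matters.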

A Study on the Recognition of Korean Numerals Using Recurrent Neural Predictive HMM (회귀신경망 예측 HMM을 이용한 숫자음 인식에 관한 연구)

  • 김수훈;고시영;허강인
    • The Journal of the Acoustical Society of Korea / v.20 no.8 / pp.12-18 / 2001
  • In this paper, we propose the Recurrent Neural Predictive HMM (RNPHMM), a hybrid of a recurrent neural network and an HMM. A predictive recurrent neural network, trained to predict the next feature vector from several previous feature vectors, defines each state of the HMM. Instead of a fixed average vector, each state therefore uses a prediction that changes dynamically with the preceding feature vectors. The RNPHMM variants considered are an Elman-network predictive HMM and a Jordan-network predictive HMM. In experiments on isolated digits, we compared the recognition performance of the RNPHMM while varying the number of states, the prediction order, and the number of hidden nodes. Both the Elman- and Jordan-network predictive HMMs achieved good recognition performance, each reaching 98.5% on the test data.
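
The state-scoring idea can be sketched as follows: each state scores a frame by how well its predictor anticipates it from the recent history. The linear lambda predictors here are hypothetical stand-ins for the trained Elman/Jordan networks.

```python
def state_score(predict, history, observed):
    """Score a frame by the negative squared error between the state's
    prediction from past feature vectors and the observed vector."""
    pred = predict(history)
    return -sum((p - o) ** 2 for p, o in zip(pred, observed))

# Hypothetical per-state predictors standing in for trained recurrent nets;
# each predicts the next vector from the last one (prediction order 1).
predictors = {
    "state_a": lambda h: [x + 1.0 for x in h[-1]],  # rising trend
    "state_b": lambda h: list(h[-1]),               # flat trend
}

history = [[1.0, 2.0]]
observed = [2.0, 3.0]
best_state = max(predictors, key=lambda s: state_score(predictors[s], history, observed))
```

Because the prediction depends on the preceding frames, the effective "emission" of each state changes over time, unlike the static mean vector of a conventional HMM state.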

Isolated Digit Recognition Combined with Recurrent Neural Prediction Models and Chaotic Neural Networks (회귀예측 신경모델과 카오스 신경회로망을 결합한 고립 숫자음 인식)

  • Kim, Seok-Hyun;Ryeo, Ji-Hwan
    • Journal of the Korean Institute of Intelligent Systems / v.8 no.6 / pp.129-135 / 1998
  • In this paper, the recognition rate of isolated digits is improved using multiple neural networks that combine chaotic recurrent neural networks with an MLP. In general, the recognition rate increased by 1.2% to 2.5%. The experiments show that the improvement arises because the MLP and the chaotic recurrent neural network (CRNN) compensate for each other, and the chaotic dynamic properties further aid speech recognition. The best recognition rate was obtained by the algorithm combining the MLP with a chaotic multiple recurrent neural network. However, in terms of algorithmic simplicity and reliability, the multiple neural networks combining the MLP with chaotic single recurrent neural networks have better properties. Broadly, the MLP achieves very good recognition on the Korean digits "il" and "oh", while the chaotic recurrent neural network performs best on "young", "sam", and "chil".

Differential Geometric Conditions for the state Observation using a Recurrent Neural Network in a Stochastic Nonlinear System

  • Seok, Jin-Wuk;Mah, Pyeong-Soo
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2003.10a / pp.592-597 / 2003
  • In this paper, differential geometric conditions for an observer using a recurrent neural network are provided for stochastic nonlinear system control. In a stochastic nonlinear system, an additional condition, called the perfect filtering condition, is necessary for state observation. In addition, we provide an observer using a recurrent neural network that satisfies the proposed observation conditions. Computer simulation shows that control of the stochastic nonlinear system with the proposed recurrent-neural-network observer is more efficient than with a conventional observer such as the Kalman filter.

EEG Signal Prediction by using State Feedback Real-Time Recurrent Neural Network (상태피드백 실시간 회귀 신경회망을 이용한 EEG 신호 예측)

  • Kim, Taek-Soo
    • The Transactions of the Korean Institute of Electrical Engineers D / v.51 no.1 / pp.39-42 / 2002
  • To model EEG signals, which have nonstationary and nonlinear dynamic characteristics, this paper proposes a state feedback real-time recurrent neural network model. The network has a memory structure in the states of its hidden layers, so it exhibits arbitrary dynamics and can deal with time-varying input through its own temporal operation. For the model test, the Mackey-Glass time series is used as a nonlinear dynamic system, and the model is applied to the prediction of three types of EEG: alpha wave, beta wave, and epileptic EEG. Experimental results show that the proposed model outperforms the other neural network models compared in this paper in terms of convergence speed during learning and normalized mean square error on the test data set.
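
The Mackey-Glass benchmark used for the model test can be generated with a simple unit-step Euler scheme. The parameters below (tau=17, beta=0.2, gamma=0.1, n=10) are the commonly used chaotic setting; the step size and constant initial history are assumptions of this sketch.

```python
def mackey_glass(length, tau=17, beta=0.2, gamma=0.1, n=10, x0=1.2):
    """Generate the Mackey-Glass series with a unit-step Euler scheme:
    x[t+1] = x[t] + beta*x[t-tau]/(1 + x[t-tau]**n) - gamma*x[t]."""
    x = [x0] * (tau + 1)               # constant history for the delay term
    for t in range(tau, tau + length - 1):
        x_tau = x[t - tau]
        x.append(x[t] + beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])
    return x[tau:]                      # drop the synthetic history

series = mackey_glass(200)
```

The delayed feedback term is what makes the series chaotic for tau = 17, providing a standard nonlinear test signal before moving to real EEG data.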

Forecasting realized volatility using data normalization and recurrent neural network

  • Yoonjoo Lee;Dong Wan Shin;Ji Eun Choi
    • Communications for Statistical Applications and Methods / v.31 no.1 / pp.105-127 / 2024
  • We propose recurrent neural network (RNN) methods for forecasting realized volatility (RV). The data are RVs of ten major stock price indices, four from the US and six from the EU. Forecasts are made for the ratio of adjacent RVs instead of the RV itself in order to avoid the out-of-scale issue; forecasts of the RV ratio distribution are constructed first, and forecasts of the RVs are then computed from them, which are shown to be better than forecasts constructed directly from RV. The apparent asymmetry of the RV ratio is addressed by piecewise min-max (PM) normalization. The serial dependence of the ratio data leads us to consider two architectures, long short-term memory (LSTM) and gated recurrent unit (GRU), whose hyperparameters are tuned by nested cross-validation. The RNN forecast with the PM normalization and ratio transformation is shown to outperform forecasts by other RNN models and by the benchmark models: the AR model, the support vector machine (SVM), the deep neural network (DNN), and the convolutional neural network (CNN).
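
The ratio transformation and a piecewise min-max scaling can be sketched as below. The exact PM normalization in the paper is not specified in the abstract; splitting at a pivot of 1 and scaling each side to [-1, 0] and [0, 1] is an assumed form, and the RV values are toy numbers.

```python
def rv_ratios(rv):
    """Transform an RV series into ratios of adjacent values, avoiding
    the out-of-scale issue of modeling RV levels directly."""
    return [rv[t] / rv[t - 1] for t in range(1, len(rv))]

def piecewise_minmax(ratios, pivot=1.0):
    """Sketch of piecewise min-max normalization: scale ratios below and
    above the pivot separately, so the asymmetric sides fill [-1, 0]
    and [0, 1] respectively (the paper's exact form may differ)."""
    lo, hi = min(ratios), max(ratios)
    out = []
    for r in ratios:
        if r < pivot:
            out.append((r - pivot) / (pivot - lo) if pivot > lo else 0.0)
        else:
            out.append((r - pivot) / (hi - pivot) if hi > pivot else 0.0)
    return out

rv = [1.0, 1.2, 0.9, 1.8]
ratios = rv_ratios(rv)          # [1.2, 0.75, 2.0]
normed = piecewise_minmax(ratios)
```

Scaling each side separately keeps the typically long right tail of RV ratios from compressing the sub-unity ratios into a narrow band, which is the asymmetry issue the abstract raises.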