• Title/Summary/Keyword: LSTM-RNN


Performance Comparison of PM10 Prediction Models Based on RNN and LSTM (RNN과 LSTM 기반의 PM10 예측 모델 성능 비교)

  • Jung, Yong-jin; Lee, Jong-sung; Oh, Chang-heon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.280-282 / 2021
  • A particulate matter (PM10) prediction model was designed using deep learning to address the subjective judgment involved in conventional particulate matter forecasting. RNN and LSTM were used among the deep learning algorithms, and each model was configured with optimal parameters found through a hyperparameter search. The predictive performance of the two models was evaluated in terms of RMSE and forecast accuracy. The assessment confirmed that there was no significant difference in RMSE or overall accuracy, but there was a difference in the detailed forecast accuracy.
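
Purely as an illustration of the kind of comparison the abstract describes, and not the authors' code, the sketch below trains a plain RNN and an LSTM on sliding windows of a PM10 series and reports RMSE; the window length, hidden size, and synthetic data are assumptions.

```python
import torch
import torch.nn as nn

class RecurrentForecaster(nn.Module):
    """One-step PM10 forecaster built on either a plain RNN or an LSTM."""
    def __init__(self, cell="lstm", hidden=32):
        super().__init__()
        rnn_cls = nn.LSTM if cell == "lstm" else nn.RNN
        self.rnn = rnn_cls(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.rnn(x)               # out: (batch, window, hidden)
        return self.head(out[:, -1, :])    # next-hour PM10 value

# illustrative data: 24-hour windows of PM10 -> next-hour value
x, y = torch.randn(256, 24, 1), torch.randn(256, 1)

for name in ("rnn", "lstm"):
    model = RecurrentForecaster(cell=name)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(50):                    # tiny training loop for illustration
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        opt.step()
    with torch.no_grad():
        rmse = nn.functional.mse_loss(model(x), y).sqrt().item()
    print(name, "RMSE:", rmse)
```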


Automated Vehicle Research by Recognizing Maneuvering Modes using LSTM Model (LSTM 모델 기반 주행 모드 인식을 통한 자율 주행에 관한 연구)

  • Kim, Eunhui; Oh, Alice
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.16 no.4 / pp.153-163 / 2017
  • This research builds on earlier findings that personally preferred safe distance, turning angle, and speed differ from driver to driver. We therefore train machine learning models that recognize maneuvering modes per individual or per group of similar driving patterns, and evaluate automated driving according to those modes. Using driving-domain knowledge, we subdivide driving into 8 longitudinal modes and 4 lateral modes, and by combining them we define 21 maneuvering modes. We train on the per-timestamp labeled data of each driver's trips with RNN, LSTM, and Bi-LSTM models, all supervised deep learning models, and evaluate the recognized maneuvering modes on the test set. The evaluation data come from naturalistic driving trips of 3,000 participants collected by VTTI in the USA over 3 years; we use 1,500 trips from 22 drivers, split into training, validation, and test sets at a ratio of 80%, 10%, and 10%, respectively. For recognizing the 8 longitudinal maneuvering modes, the RNN achieves better accuracy than the LSTM and Bi-LSTM. However, for recognizing all 21 longitudinal and lateral maneuvering modes, the Bi-LSTM improves accuracy over the RNN and LSTM by 1.54% and 0.47%, respectively.
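
A minimal sketch of the modeling step described above, assuming per-timestamp labels and a hand-picked feature set; the feature count, hidden size, and sequence length are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

NUM_MODES = 21       # combined longitudinal/lateral maneuvering modes (from the paper)
NUM_FEATURES = 6     # e.g., speed, acceleration, yaw rate, ... (assumed)

class BiLSTMTagger(nn.Module):
    """Per-timestep classification of maneuvering modes with a bidirectional LSTM."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(NUM_FEATURES, hidden, batch_first=True,
                            bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, NUM_MODES)

    def forward(self, x):                    # x: (batch, time, features)
        out, _ = self.lstm(x)                # (batch, time, 2*hidden)
        return self.classifier(out)          # mode logits per timestep

model = BiLSTMTagger()
x = torch.randn(8, 100, NUM_FEATURES)        # 8 trips, 100 timestamps each (toy data)
labels = torch.randint(0, NUM_MODES, (8, 100))
logits = model(x)
loss = nn.functional.cross_entropy(logits.reshape(-1, NUM_MODES), labels.reshape(-1))
loss.backward()
```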

LSTM RNN-based Korean Speech Recognition System Using CTC (CTC를 이용한 LSTM RNN 기반 한국어 음성인식 시스템)

  • Lee, Donghyun; Lim, Minkyu; Park, Hosung; Kim, Ji-Hwan
    • Journal of Digital Contents Society / v.18 no.1 / pp.93-99 / 2017
  • A hybrid approach using a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) has shown great improvement in speech recognition accuracy. Training an acoustic model with the hybrid approach, however, requires a forced alignment of the HMM state sequence obtained from a Gaussian Mixture Model-Hidden Markov Model (GMM-HMM), and training the GMM-HMM itself takes a large amount of computation time. This paper proposes an end-to-end approach for LSTM RNN-based Korean speech recognition to improve training speed, implemented with the Connectionist Temporal Classification (CTC) algorithm. The proposed method showed nearly equal recognition performance while training 1.27 times faster.
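
For orientation, a CTC-trained LSTM acoustic model can be sketched as below; this is an assumed setup (the feature and label dimensions are invented), not the paper's Korean ASR system, but it shows why no GMM-HMM forced alignment is needed: the CTC loss marginalizes over alignments itself.

```python
import torch
import torch.nn as nn

NUM_LABELS = 50      # e.g., Korean graphemes plus blank at index 0 (assumed)
FEAT_DIM = 40        # e.g., filter-bank features per frame (assumed)

class CTCAcousticModel(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(FEAT_DIM, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, NUM_LABELS)

    def forward(self, x):                        # x: (batch, time, FEAT_DIM)
        out, _ = self.lstm(x)
        return self.proj(out).log_softmax(-1)    # (batch, time, NUM_LABELS)

model = CTCAcousticModel()
ctc = nn.CTCLoss(blank=0)

x = torch.randn(4, 200, FEAT_DIM)                # 4 utterances, 200 frames each
log_probs = model(x).transpose(0, 1)             # CTCLoss expects (time, batch, labels)
targets = torch.randint(1, NUM_LABELS, (4, 30))  # label sequences (no blanks)
input_lengths = torch.full((4,), 200, dtype=torch.long)
target_lengths = torch.full((4,), 30, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```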

Analysis of Accuracy and Loss Performance According to Hyperparameter in RNN Model (RNN모델에서 하이퍼파라미터 변화에 따른 정확도와 손실 성능 분석)

  • Kim, Joon-Yong; Park, Koo-Rack
    • Journal of Convergence for Information Technology / v.11 no.7 / pp.31-38 / 2021
  • In this paper, to optimize an RNN model used for sentiment analysis, the behavior of each model configuration was studied by observing trends in loss and accuracy under hyperparameter tuning. As the research method, a hidden layer based on LSTM and an embedding layer, both well suited to sequential data, were configured, and the loss and accuracy of each model were measured while tuning the LSTM units, the batch size, and the embedding size. The measured loss was 41.9% and the accuracy 11.4%, and the optimized model showed a consistently stable trend, confirming that hyperparameter tuning has a profound effect on the model. Among the three hyperparameters, the choice of embedding size was found to have the greatest influence. Future work will continue this research, including an algorithm that lets the model find optimal hyperparameters on its own.
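
A hedged sketch of the kind of sweep the paper describes, covering the same three hyperparameters (LSTM units, batch size, embedding size); the vocabulary size, sequence length, grid values, and toy data are assumptions.

```python
import itertools
import torch
import torch.nn as nn

VOCAB = 10_000       # vocabulary size (assumed)
SEQ_LEN = 60         # padded review length (assumed)

class SentimentLSTM(nn.Module):
    """Embedding + LSTM sentiment classifier, parameterized by the swept hyperparameters."""
    def __init__(self, embed_size, units):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, embed_size)
        self.lstm = nn.LSTM(embed_size, units, batch_first=True)
        self.out = nn.Linear(units, 2)                   # positive / negative

    def forward(self, x):                                # x: (batch, SEQ_LEN) token ids
        out, _ = self.lstm(self.embed(x))
        return self.out(out[:, -1, :])

x = torch.randint(0, VOCAB, (512, SEQ_LEN))              # toy corpus
y = torch.randint(0, 2, (512,))

for units, batch_size, embed_size in itertools.product([32, 64], [32, 64], [50, 100]):
    model = SentimentLSTM(embed_size, units)
    opt = torch.optim.Adam(model.parameters())
    for i in range(0, len(x), batch_size):               # one epoch, for illustration
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x[i:i + batch_size]), y[i:i + batch_size])
        loss.backward()
        opt.step()
    with torch.no_grad():
        acc = (model(x).argmax(1) == y).float().mean().item()
    print(f"units={units} batch={batch_size} embed={embed_size} "
          f"loss={loss.item():.3f} acc={acc:.3f}")
```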

Long Short Term Memory based Political Polarity Analysis in Cyber Public Sphere

  • Kang, Hyeon; Kang, Dae-Ki
    • International Journal of Advanced Culture Technology / v.5 no.4 / pp.57-62 / 2017
  • In this paper, we apply long short-term memory (LSTM) to classify political polarity in the cyber public sphere. Data collected from the cyber public sphere are transformed into a word corpus through word embedding. Based on this word corpus, we train a recurrent neural network (RNN) composed of LSTM units, with a softmax function applied at the output of the RNN. We evaluated the proposed system experimentally and plan to enhance it by refining the LSTM component.
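
The pipeline above (word embedding, LSTM-based RNN, softmax output) can be sketched roughly as follows; the vocabulary size, dimensions, and the two-class polarity setup are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

VOCAB, EMBED, HIDDEN, CLASSES = 20_000, 100, 64, 2   # polarity assumed binary

embed = nn.Embedding(VOCAB, EMBED)
lstm = nn.LSTM(EMBED, HIDDEN, batch_first=True)
head = nn.Linear(HIDDEN, CLASSES)

tokens = torch.randint(0, VOCAB, (16, 40))           # 16 posts, 40 tokens each (toy input)
_, (h_n, _) = lstm(embed(tokens))                    # final hidden state summarizes the post
polarity = torch.softmax(head(h_n[-1]), dim=-1)      # softmax at the RNN output
print(polarity.shape)                                # torch.Size([16, 2])
```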

Procedure for monitoring autocorrelated processes using LSTM Autoencoder (LSTM Autoencoder를 이용한 자기상관 공정의 모니터링 절차)

  • Pyoungjin Ji; Jaeheon Lee
    • The Korean Journal of Applied Statistics / v.37 no.2 / pp.191-207 / 2024
  • Many studies have been conducted to quickly detect out-of-control situations in autocorrelated processes. The most traditional method is a residual control chart, which uses residuals calculated from a fitted time series model. Recently, however, many procedures for monitoring autocorrelated processes using statistical learning methods have been proposed. In this paper, we propose a monitoring procedure based on the latent vector of an LSTM Autoencoder, a deep-learning-based unsupervised learning method. Through simulation studies, we compare its performance with an LSTM Autoencoder procedure based on reconstruction error, an RNN classification procedure, and a residual control chart procedure. The simulation results show that the proposed procedure and the RNN classification procedure perform similarly, but the proposed procedure does not require out-of-control data for training, which makes it useful in processes where sufficient out-of-control data cannot be obtained.
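
A rough sketch of the general idea, not the paper's exact procedure: the encoder's final hidden state serves as the latent vector, and a T²-style statistic on that vector, calibrated on in-control data only, is compared against a control limit. The dimensions, the repeated-latent decoding scheme, and the choice of statistic are assumptions.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, latent=8, window=20):
        super().__init__()
        self.window = window
        self.encoder = nn.LSTM(1, latent, batch_first=True)
        self.decoder = nn.LSTM(latent, 1, batch_first=True)   # simplistic decoder

    def encode(self, x):                          # x: (batch, window, 1)
        _, (h, _) = self.encoder(x)
        return h[-1]                              # latent vector: (batch, latent)

    def forward(self, x):
        z = self.encode(x)
        z_rep = z.unsqueeze(1).repeat(1, self.window, 1)
        recon, _ = self.decoder(z_rep)
        return recon

model = LSTMAutoencoder()
in_control = torch.randn(500, 20, 1)              # windows from the in-control process
# ... train `model` on `in_control` with a reconstruction (MSE) loss ...

with torch.no_grad():
    z = model.encode(in_control)
    mu, cov = z.mean(0), torch.cov(z.T)           # in-control latent distribution
    new_window = torch.randn(1, 20, 1)            # current window to monitor
    d = model.encode(new_window) - mu
    t2 = (d @ torch.linalg.inv(cov) @ d.T).item() # signal if t2 exceeds a control limit
```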

EEG Dimensional Reduction with Stack AutoEncoder for Emotional Recognition using LSTM/RNN (LSTM/RNN을 사용한 감정인식을 위한 스택 오토 인코더로 EEG 차원 감소)

  • Aliyu, Ibrahim; Lim, Chang-Gyoon
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.15 no.4 / pp.717-724 / 2020
  • Because of the important role emotion plays in human interaction, affective computing seeks to understand and regulate emotion through human-aware artificial intelligence. With a better understanding of emotion, mental conditions associated with it, such as depression, autism, attention deficit hyperactivity disorder, and game addiction, can be managed more effectively, and various emotion recognition studies have been conducted to address these problems. When applying machine learning to emotion recognition, effort is needed both to reduce the complexity of the algorithm and to improve accuracy. In this paper, we investigate electroencephalogram (EEG) feature reduction with a Stacked AutoEncoder (SAE) and emotion classification with Long Short-Term Memory/Recurrent Neural Network (LSTM/RNN) models. The proposed method reduced the complexity of the model and significantly enhanced classifier performance.
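
A minimal sketch of the two-stage idea (SAE for feature reduction, then LSTM classification); all dimensions, the number of emotion classes, and the layer sizes are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

EEG_DIM, REDUCED, CLASSES = 160, 32, 4      # raw features, latent size, emotion classes (assumed)

sae = nn.Sequential(                        # encoder half of the stacked autoencoder
    nn.Linear(EEG_DIM, 64), nn.ReLU(),
    nn.Linear(64, REDUCED), nn.ReLU(),
)
decoder = nn.Sequential(                    # decoder used only for SAE pretraining
    nn.Linear(REDUCED, 64), nn.ReLU(),
    nn.Linear(64, EEG_DIM),
)

eeg = torch.randn(32, 50, EEG_DIM)          # 32 trials, 50 time windows each (toy data)
recon_loss = nn.functional.mse_loss(decoder(sae(eeg)), eeg)   # pretrain the SAE this way
recon_loss.backward()

class EmotionLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(REDUCED, hidden, batch_first=True)
        self.head = nn.Linear(hidden, CLASSES)

    def forward(self, x):                   # x: (batch, time, REDUCED)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])

clf = EmotionLSTM()
logits = clf(sae(eeg).detach())             # classify the reduced EEG features
```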

A Study on the Forecasting of Bunker Price Using Recurrent Neural Network

  • Kim, Kyung-Hwan
    • Journal of the Korea Society of Computer and Information / v.26 no.10 / pp.179-184 / 2021
  • In this paper, we propose deep-learning-based neural network models to predict bunker prices. In the shipping industry, fuel oil accounts for the largest portion of ship operating costs and its price is highly volatile, so companies can secure market competitiveness by making fuel purchasing decisions on a rational and scientific basis. Short-term predictive analysis of HSFO 380CST prices in Singapore is conducted using three recurrent neural network models: RNN, LSTM, and GRU. The results show, first, that the plain RNN outperforms the LSTM and GRU, which use long-term memory, indicating that the predictive contribution of long-term information is low. Second, since the predictive performance of the recurrent models is superior to that of previous studies using econometric models, it is confirmed that the nonlinear properties of bunker prices should be taken into account. The results of this paper can help improve the quality of bunker purchasing decisions.
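
For illustration only, the three recurrent forecasters compared in the paper can be set up as below on synthetic data; the window length, hidden size, and data are assumptions.

```python
import torch
import torch.nn as nn

class PriceForecaster(nn.Module):
    """One-step bunker price forecaster with a selectable recurrent cell."""
    CELLS = {"rnn": nn.RNN, "lstm": nn.LSTM, "gru": nn.GRU}

    def __init__(self, cell, hidden=16):
        super().__init__()
        self.rnn = self.CELLS[cell](input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1) of past prices
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])   # next-period price

x, y = torch.randn(128, 30, 1), torch.randn(128, 1)   # 30-day windows (assumed)
for cell in ("rnn", "lstm", "gru"):
    model = PriceForecaster(cell)
    with torch.no_grad():
        rmse = nn.functional.mse_loss(model(x), y).sqrt().item()
    print(cell, "untrained RMSE:", rmse)  # in practice, train each model with Adam + MSE first
```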

Performance of Exercise Posture Correction System Based on Deep Learning (딥러닝 기반 운동 자세 교정 시스템의 성능)

  • Hwang, Byungsun; Kim, Jeongho; Lee, Ye-Ram; Kyeong, Chanuk; Seon, Joonho; Sun, Young-Ghyu; Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.5 / pp.177-183 / 2022
  • Recently, interest in home training has grown due to COVID-19, and research on applying HAR (human activity recognition) technology to home training has followed. However, existing HAR studies have addressed static activities rather than dynamic ones. In this paper, a deep learning model is proposed that can analyze dynamic exercise postures and report the accuracy of the user's posture. Fitness images from AI-Hub are analyzed with BlazePose, and the experiment compares three types of deep learning model: RNN (recurrent neural network), LSTM (long short-term memory), and CNN (convolutional neural network). In the simulation results, the F1-scores of the RNN, LSTM, and CNN were 0.49, 0.87, and 0.98, respectively, confirming that the CNN is more suitable for this human activity recognition task than the other models. More exercise postures could be analyzed with a wider variety of training data.
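
A small sketch of how BlazePose keypoint sequences could feed either an LSTM or a 1D CNN posture classifier; the clip length, number of posture classes, and network sizes are assumptions, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

FRAMES, FEATS, POSES = 60, 33 * 3, 5    # frames per clip, 33 landmarks x (x, y, z), classes (assumed)

class LSTMPoseClassifier(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(FEATS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, POSES)

    def forward(self, x):                # x: (batch, FRAMES, FEATS)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])

cnn_classifier = nn.Sequential(          # 1D convolution over the time axis
    nn.Conv1d(FEATS, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, POSES),
)

clips = torch.randn(16, FRAMES, FEATS)   # keypoint sequences extracted by BlazePose
print(LSTMPoseClassifier()(clips).shape)             # torch.Size([16, 5])
print(cnn_classifier(clips.transpose(1, 2)).shape)   # Conv1d expects (batch, FEATS, FRAMES)
```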