• Title/Summary/Keyword: LSTM/GRU learning models

LSTM based sequence-to-sequence Model for Korean Automatic Word-spacing (LSTM 기반의 sequence-to-sequence 모델을 이용한 한글 자동 띄어쓰기)

  • Lee, Tae Seok;Kang, Seung Shik
    • Smart Media Journal
    • /
    • v.7 no.4
    • /
    • pp.17-23
    • /
    • 2018
  • We propose an LSTM-based RNN model that effectively performs automatic word spacing. For long or noisy sentences, which are known to be difficult to handle in neural network learning, we defined suitable input and decoding data formats and added dropout, a bidirectional multi-layer LSTM, layer normalization, and an attention mechanism to improve performance. Although the Sejong corpus contains some spacing errors, the noise-robust model developed in this study, which avoids overfitting through dropout, trained well and produced meaningful Korean word-spacing results and patterns. The experiments show that the LSTM sequence-to-sequence model reaches an F1-measure of 0.94, which is better than the GRU-CRF method.
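As a rough illustration of the components named in this abstract (dropout, bidirectional multi-layer LSTM, layer normalization), the following Keras sketch tags each syllable position of a sentence with a space/no-space label. It is not the authors' exact sequence-to-sequence architecture with attention; the vocabulary size, layer widths, and toy data are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAX_LEN = 2000, 128   # assumed syllable vocabulary and maximum sentence length

# Toy data: label each syllable position 1 if a space should follow it, else 0.
x = np.random.randint(1, VOCAB, size=(256, MAX_LEN))
y = np.random.randint(0, 2, size=(256, MAX_LEN, 1)).astype("float32")

model = tf.keras.Sequential([
    layers.Embedding(VOCAB, 64),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),  # bidirectional multi-layer LSTM
    layers.LayerNormalization(),
    layers.Dropout(0.3),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.LayerNormalization(),
    layers.Dropout(0.3),
    layers.TimeDistributed(layers.Dense(1, activation="sigmoid")),  # space / no-space per syllable
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```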

Comparison of Deep Learning Models Using Protein Sequence Data (단백질 기능 예측 모델의 주요 딥러닝 모델 비교 실험)

  • Lee, Jeung Min;Lee, Hyun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.6
    • /
    • pp.245-254
    • /
    • 2022
  • Proteins are the basic units of all life activities, and understanding them is essential for studying life phenomena. Since the emergence of machine learning methods based on artificial neural networks, many researchers have tried to predict protein function from sequence alone. Many combinations of deep learning models have been reported, but the methods differ, there is no standard methodology, and each study is tailored to different data, so there has been no direct comparative analysis of which algorithms are more suitable for protein data. In this paper, the single-model performance of each algorithm is compared and evaluated in terms of accuracy and speed by applying the same data to CNN, LSTM, and GRU models, the representative algorithms most frequently used in protein function prediction research, and the final evaluation is reported as micro-averaged precision, recall, and F1-score. The combined CNN-LSTM and CNN-GRU models were evaluated in the same way. The study confirms that LSTM performs well as a single model on simple classification problems, a stacked CNN is more suitable as a single model on complex classification problems, and CNN-LSTM is relatively better among the combined models.
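A minimal sketch of one of the combined models compared in the paper (a CNN front-end feeding a recurrent layer) together with the micro-averaged metrics used for evaluation; the amino-acid alphabet size, number of function labels, and random data are assumptions, not the paper's dataset.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.metrics import precision_score, recall_score, f1_score

AA, MAX_LEN, N_CLASSES = 21, 400, 10   # assumed alphabet size, sequence length, function labels

x = np.random.randint(0, AA, size=(128, MAX_LEN))
y = np.random.randint(0, 2, size=(128, N_CLASSES)).astype("float32")  # multi-label targets

# CNN-GRU hybrid: the convolution extracts local motifs, the GRU models their order.
model = tf.keras.Sequential([
    layers.Embedding(AA, 32),
    layers.Conv1D(64, 7, padding="same", activation="relu"),
    layers.MaxPooling1D(2),
    layers.GRU(64),
    layers.Dense(N_CLASSES, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=1, batch_size=32, verbose=0)

pred = (model.predict(x, verbose=0) > 0.5).astype(int)
print("micro P/R/F1:",
      precision_score(y, pred, average="micro", zero_division=0),
      recall_score(y, pred, average="micro", zero_division=0),
      f1_score(y, pred, average="micro", zero_division=0))
```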

A Fuzzy-AHP-based Movie Recommendation System using the GRU Language Model (GRU 언어 모델을 이용한 Fuzzy-AHP 기반 영화 추천 시스템)

  • Oh, Jae-Taek;Lee, Sang-Yong
    • Journal of Digital Convergence
    • /
    • v.19 no.8
    • /
    • pp.319-325
    • /
    • 2021
  • With the advancement of wireless technology and the rapid growth of mobile communication infrastructure, systems built on AI-based platforms are drawing attention from users. In particular, systems that understand users' tastes and interests and recommend preferred items are applied to advanced customized e-commerce services and smart homes. However, such recommendation systems have difficulty reflecting the preferences of diverse users in real time. To address this problem, we propose a Fuzzy-AHP-based movie recommendation system that uses a Gated Recurrent Unit (GRU) language model. The system applies Fuzzy-AHP to reflect users' tastes and interests in real time, and applies GRU language-model-based models to analyze public interest and film content in order to recommend movies similar to the user's preferred factors. To validate the system, we measured the suitability of the learning model on scraped data used in the learning module and compared learning performance with a Long Short-Term Memory (LSTM) language model in terms of training time per epoch. The results show that the average cross-validation index of the learning model is a suitable 94.8% and that its learning performance outperforms the LSTM language model.
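A minimal sketch of the GRU language-model component described here, scoring tokenized review text for an interest/preference value; the tokenization, vocabulary size, layer sizes, and data are assumptions, and the Fuzzy-AHP weighting step is not shown.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAX_LEN = 5000, 200   # assumed vocabulary and review length

reviews = np.random.randint(1, VOCAB, size=(64, MAX_LEN))   # toy tokenized review text
interest = np.random.rand(64, 1).astype("float32")          # toy public-interest score target

# GRU encoder over review tokens, regressing a preference/interest score.
model = tf.keras.Sequential([
    layers.Embedding(VOCAB, 64),
    layers.GRU(128),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(reviews, interest, epochs=1, verbose=0)
```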

Prediction of Sea Water Temperature by Using Deep Learning Technology Based on Ocean Buoy (해양관측부위 자료 기반 딥러닝 기술을 활용한 해양 혼합층 수온 예측)

  • Ko, Kwan-Seob;Byeon, Seong-Hyeon;Kim, Young-Won
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.3
    • /
    • pp.299-309
    • /
    • 2022
  • The sea water temperature around the Korean Peninsula has been steadily increasing. Water temperature changes not only affect the fishing ecosystem but are also closely related to military operations at sea. The purpose of this study is to suggest which model is most suitable for water temperature prediction by attempting short-term prediction with various deep-learning-based models. The data used are water temperature records from the East Sea (Goseong, Yangyang, Gangneung, and Yeongdeok) from 2016 to 2020, collected through marine observations by the National Fisheries Research Institute. As prediction models we use Long Short-Term Memory (LSTM), Bidirectional LSTM, and Gated Recurrent Unit (GRU) networks, which perform well on time series data. Whereas a previous study used only LSTM, this study compares the prediction accuracy and runtime of several techniques in addition to LSTM. The results confirm that, for 1-hour-ahead prediction, Bidirectional LSTM and GRU had the smallest error between actual and predicted values at all observation points, and GRU trained fastest. This indicates that Bidirectional LSTM is the method of choice when the goal is to improve accuracy and reduce prediction error, whereas in areas that also require real-time prediction, such as anti-submarine operations, the GRU technique is judged to be more appropriate.
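The comparison described here, training LSTM, Bidirectional LSTM, and GRU models on sliding windows of a temperature series and timing each, can be sketched as follows; the window length, layer sizes, and synthetic series are assumptions.

```python
import time
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Toy hourly temperature series; a 72-hour input window predicting 1 hour ahead is assumed.
series = np.sin(np.linspace(0, 60, 3000)) + 0.1 * np.random.randn(3000)
W = 72
X = np.stack([series[i:i + W] for i in range(len(series) - W)])[..., None]
y = series[W:]

builders = {
    "LSTM":    lambda: layers.LSTM(64),
    "Bi-LSTM": lambda: layers.Bidirectional(layers.LSTM(64)),
    "GRU":     lambda: layers.GRU(64),
}
for name, rnn in builders.items():
    model = tf.keras.Sequential([rnn(), layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    t0 = time.perf_counter()
    hist = model.fit(X, y, epochs=2, batch_size=64, verbose=0)
    print(f"{name}: MSE={hist.history['loss'][-1]:.4f}  train time={time.perf_counter() - t0:.1f}s")
```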

Real-Time Streaming Traffic Prediction Using Deep Learning Models Based on Recurrent Neural Network (순환 신경망 기반 딥러닝 모델들을 활용한 실시간 스트리밍 트래픽 예측)

  • Kim, Jinho;An, Donghyeok
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.12 no.2
    • /
    • pp.53-60
    • /
    • 2023
  • Recently, the demand and traffic volume for various multimedia contents have been rapidly increasing on real-time streaming platforms. In this paper, we predict real-time streaming traffic in order to improve quality of service (QoS). Statistical models have traditionally been used to predict network traffic; however, since real-time streaming traffic changes dynamically, we use recurrent-neural-network-based deep learning models instead. After collecting and preprocessing real-time streaming data, we apply vanilla RNN, LSTM, GRU, Bi-LSTM, and Bi-GRU models to predict the traffic. In the evaluation, the training time and accuracy of each model are measured and compared.
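A minimal sketch of the preprocessing (scaling and windowing) and of one of the five compared recurrent variants; the window length, layer width, and synthetic traffic series are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.preprocessing import MinMaxScaler

# Toy streaming traffic volume series; the window length and layer width are assumptions.
traffic = np.abs(np.random.randn(5000).cumsum())
scaled = MinMaxScaler().fit_transform(traffic.reshape(-1, 1)).ravel()

W = 30
X = np.stack([scaled[i:i + W] for i in range(len(scaled) - W)])[..., None]
y = scaled[W:]

# One of the five variants compared (Bi-GRU); replacing the first layer with SimpleRNN,
# LSTM, GRU, or Bidirectional(LSTM) gives the other four.
model = tf.keras.Sequential([
    layers.Bidirectional(layers.GRU(32)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```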

A Comparative Study on Smoothing Techniques for Performance Improvement of the LSTM Learning Model

  • Tae-Jin, Park;Gab-Sig, Sim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.1
    • /
    • pp.17-26
    • /
    • 2023
  • In this paper, several smoothing techniques are compared and applied to improve the effectiveness of an LSTM-based learning model. The smoothing techniques applied are the Savitzky-Golay filter, exponential smoothing, and a weighted moving average. In this study, the LSTM model with the Savitzky-Golay filter applied in the preprocessing step showed significantly better prediction performance on Bitcoin data than the plain LSTM model. To confirm the prediction results, the training loss and validation loss of the Savitzky-Golay LSTM model were compared with those of the LSTM used to remove complex factors from Bitcoin price prediction, with each experiment averaged over 20 runs to increase reliability; the resulting values were (3.0556, 0.00005) and (1.4659, 0.00002), respectively. Since cryptocurrencies such as Bitcoin are more volatile than stocks, removing noise with the Savitzky-Golay filter in the data preprocessing step produced the most significant improvement in Bitcoin prediction through LSTM neural network training.
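The three smoothing techniques compared in the paper can be applied to a price series as follows before windowing it for LSTM training; the filter window lengths, polynomial order, and weights below are assumptions, not the paper's settings.

```python
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter

# Toy daily closing prices standing in for Bitcoin; filter windows and weights are assumptions.
price = pd.Series(30000 + 100 * np.random.randn(500).cumsum())

sg = savgol_filter(price.values, window_length=21, polyorder=3)          # Savitzky-Golay filter
es = price.ewm(alpha=0.3).mean().values                                  # exponential smoothing
w = np.arange(1, 8, dtype=float); w /= w.sum()
wma = price.rolling(7).apply(lambda v: np.dot(v, w), raw=True).values    # weighted moving average

# Any of sg / es / wma can then be windowed and fed to the LSTM exactly like the raw series,
# which is how the preprocessing choices can be compared.
```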

Implementation of real-time water level prediction system using LSTM-GRU model (LSTM-GRU 모델을 활용한 실시간 수위 예측 시스템 구현)

  • Cho, Minwoo;Jeong, HanGyeol;Park, Bumjin;Im, Haran;Lim, Ine;Jung, Heokyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.216-218
    • /
    • 2022
  • Natural disasters caused by abnormal climate are continuously increasing, and the type of natural disaster that causes the most damage is flooding caused by heavy rains and typhoons. Therefore, to reduce flood damage, this paper proposes a system that predicts the water level, a key flood parameter, in real time using LSTM and GRU. The input data used for prediction are upstream and downstream water levels, temperature, humidity, and precipitation, and real-time prediction is performed with a pre-trained LSTM-GRU model, which uses data from the past 20 hours to predict the water level for the next 3 hours. If a risk-assessment function were added to the proposed system so that evacuation orders could be issued to people exposed to flooding, much of the damage caused by floods could be reduced.
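A minimal sketch of the windowing described here (20 hours of multivariate input predicting 3 hours of water level) with a stacked LSTM-GRU network; the exact LSTM-GRU architecture is not specified in the abstract, so the stacking order, layer sizes, target column, and toy data are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Toy multivariate input: upstream level, downstream level, temperature, humidity, precipitation.
HOURS, FEATURES, IN_W, OUT_W = 2000, 5, 20, 3
data = np.random.rand(HOURS, FEATURES).astype("float32")
level = data[:, 0]   # target: water level (assumed to be the first column)

X = np.stack([data[i:i + IN_W] for i in range(HOURS - IN_W - OUT_W + 1)])
y = np.stack([level[i + IN_W:i + IN_W + OUT_W] for i in range(HOURS - IN_W - OUT_W + 1)])

# "LSTM-GRU" interpreted here as an LSTM layer stacked on a GRU layer (an assumption).
model = tf.keras.Sequential([
    layers.LSTM(64, return_sequences=True),
    layers.GRU(32),
    layers.Dense(OUT_W),   # 3-hour-ahead water level
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```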

Utility of Deep Learning Model for Improving Dam and Reservoir Operation: A Case Study of Seonjin River Dam (섬진강 댐의 수문학적 예측을 위한 딥러닝 모델 활용)

  • Lee, Eunmi;Kam, Jonghun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.483-483
    • /
    • 2022
  • Hydrological forecasting for optimizing dam and reservoir operations is currently underused because dam operation is still largely manual. Dam operation methods that allow proactive response and preparation are indispensable for minimizing adverse impacts on society under uncertain climate change and climate disasters, while rainfall forecasting technology remains limited under a changing climate. For example, in August 2020 the dam on the Seomjin River was overwhelmed by extreme concentrated rainfall, causing enormous economic damage to the local community. Proactive dam discharge operation technology is also needed to mitigate the impact of environmental change. To overcome the limitations of weather forecasting, studies have begun to explore artificial intelligence models such as deep learning and reinforcement learning. This study therefore developed deep learning models for dam operation using hourly hydrological data from the Seomjin River dam and evaluated their utility. Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) algorithms, which are suitable for time series prediction, were built and used to predict the dam water level. The analysis used hourly data from 2000 to 2021 provided by WAMIS, with hourly inflow, rainfall, and discharge as input data and the hourly water level as output data; prediction performance was evaluated with the coefficient of determination (R2 score). To improve the predicted water level, hyperparameter optimization, which narrows the range in which the optimal hyperparameter values lie, was carried out in two ways. The first was a manual search, setting the sequence length to 24, 48, and 72 hours and the number of hidden layers to 1, 3, and 5, to evaluate the sensitivity of LSTM and GRU to the hyperparameter combinations. The second was a grid search for the optimal hyperparameters. In both methods, GRU showed higher prediction accuracy than LSTM for the same hyperparameters, and accuracy tended to increase with longer sequence lengths. The manual search achieved a maximum R2 of 0.72, and the grid search an R2 of 0.79. These results are expected to help improve dam operation so that water disasters such as droughts and floods can be addressed in advance and climate change adaptation becomes possible.
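The manual search over sequence length (24/48/72 hours) and number of hidden layers (1/3/5) described above can be sketched as a simple loop evaluating R2; the GRU layer width, train/test split, and toy data are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from itertools import product
from sklearn.metrics import r2_score

# Toy hourly inflow, rainfall, and discharge (inputs) and dam water level (output).
N = 3000
inputs = np.random.rand(N, 3).astype("float32")
level = np.random.rand(N).astype("float32")

def make_windows(seq_len):
    X = np.stack([inputs[i:i + seq_len] for i in range(N - seq_len)])
    y = level[seq_len:]
    return X, y

# Grid over the ranges reported in the abstract: sequence length 24/48/72 h, 1/3/5 hidden layers.
for seq_len, n_layers in product([24, 48, 72], [1, 3, 5]):
    X, y = make_windows(seq_len)
    split = int(len(X) * 0.8)
    stack = [layers.GRU(32, return_sequences=(i < n_layers - 1)) for i in range(n_layers)]
    model = tf.keras.Sequential(stack + [layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X[:split], y[:split], epochs=1, batch_size=64, verbose=0)
    r2 = r2_score(y[split:], model.predict(X[split:], verbose=0).ravel())
    print(f"seq_len={seq_len}, layers={n_layers}: R2={r2:.3f}")
```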

Prediction of Power Consumptions Based on Gated Recurrent Unit for Internet of Energy (에너지 인터넷을 위한 GRU기반 전력사용량 예측)

  • Lee, Dong-gu;Sun, Young-Ghyu;Sim, Is-sac;Hwang, Yu-Min;Kim, Sooh-wan;Kim, Jin-Young
    • Journal of IKEEE
    • /
    • v.23 no.1
    • /
    • pp.120-126
    • /
    • 2019
  • Recently, accurate prediction of power consumption based on machine learning techniques in the Internet of Energy (IoE) has been actively studied using the large amount of electricity data acquired from advanced metering infrastructure (AMI). In this paper, we propose a deep learning model based on the Gated Recurrent Unit (GRU) as an artificial intelligence (AI) network that can effectively recognize patterns in time series data such as power consumption, and we analyze its prediction performance on real household power usage data. The analysis compares the proposed GRU-based learning model against a conventional Long Short-Term Memory (LSTM) learning model. In the simulation results, mean squared error (MSE), mean absolute error (MAE), forecast skill score, normalized root mean square error (RMSE), and normalized mean bias error (NMBE) are used as performance evaluation indexes, and we confirm that the prediction performance of the proposed GRU-based learning model is greatly improved.
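A minimal sketch of a GRU-based load forecaster and of the normalized error metrics mentioned (normalized RMSE, NMBE); the window size, the choice of normalizing by the mean load, and the synthetic data are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Toy hourly household consumption series; window size and layer width are assumptions.
load = np.abs(np.random.randn(4000)).astype("float32")
W = 24
X = np.stack([load[i:i + W] for i in range(len(load) - W)])[..., None]
y = load[W:]
split = int(len(X) * 0.8)

model = tf.keras.Sequential([layers.GRU(64), layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=2, batch_size=64, verbose=0)

pred = model.predict(X[split:], verbose=0).ravel()
true = y[split:]
mse   = np.mean((true - pred) ** 2)
mae   = np.mean(np.abs(true - pred))
nrmse = np.sqrt(mse) / true.mean()           # RMSE normalized by the mean load (one common choice)
nmbe  = (pred - true).mean() / true.mean()   # normalized mean bias error
print(mse, mae, nrmse, nmbe)
```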

A Comparative Study of Machine Learning Algorithms Based on Tensorflow for Data Prediction (데이터 예측을 위한 텐서플로우 기반 기계학습 알고리즘 비교 연구)

  • Abbas, Qalab E.;Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.3
    • /
    • pp.71-80
    • /
    • 2021
  • The selection of an appropriate neural network algorithm is an important step for accurate data prediction in machine learning. Many algorithms based on basic artificial neural networks have been devised to efficiently predict future data, including deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) networks. Developers face difficulties when choosing among these networks because sufficient information on their performance is unavailable. To alleviate this difficulty, we evaluated the performance of each algorithm by comparing their errors and processing times. Each neural network model was trained on a tax dataset, and the trained model was used for data prediction to compare accuracies across the algorithms. The effects of activation functions and various optimizers on model performance were also analyzed. The experimental results show that the GRU and LSTM algorithms yield the lowest prediction errors, with an average RMSE of 0.12 and average R2 scores of 0.78 and 0.75, respectively, while the basic DNN model achieves the lowest processing time but the highest average RMSE of 0.163. Furthermore, the Adam optimizer yields the best performance (with DNN, GRU, and LSTM) in terms of error and the worst in terms of processing time. The findings of this study are expected to be useful for scientists and developers.
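The optimizer comparison reported here (error versus processing time) can be sketched by training the same GRU model with different optimizers; the data shapes, epoch count, and layer size are assumptions standing in for the tax dataset used in the paper.

```python
import time
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.metrics import r2_score

# Toy regression data standing in for the tax dataset (shapes assumed).
X = np.random.rand(2000, 12, 1).astype("float32")
y = X.sum(axis=(1, 2)) + 0.1 * np.random.randn(2000).astype("float32")
split = int(len(X) * 0.8)

# Train the same GRU model under different optimizers and compare error and processing time.
for opt_name in ["adam", "rmsprop", "sgd"]:
    model = tf.keras.Sequential([layers.GRU(32), layers.Dense(1)])
    model.compile(optimizer=opt_name, loss="mse")
    t0 = time.perf_counter()
    model.fit(X[:split], y[:split], epochs=3, batch_size=64, verbose=0)
    pred = model.predict(X[split:], verbose=0).ravel()
    rmse = float(np.sqrt(np.mean((y[split:] - pred) ** 2)))
    r2 = r2_score(y[split:], pred)
    print(f"{opt_name}: RMSE={rmse:.3f}, R2={r2:.3f}, time={time.perf_counter() - t0:.1f}s")
```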