• Title/Summary/Keyword: GRU Model


Gated Recurrent Unit based Prefetching for Graph Processing (그래프 프로세싱을 위한 GRU 기반 프리페칭)

  • Shivani Jadhav;Farman Ullah;Jeong Eun Nah;Su-Kyung Yoon
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.2
    • /
    • pp.6-10
    • /
    • 2023
  • Data likely to be accessed can be predicted and stored in the cache in advance to prevent cache misses, reducing the processor's request and wait times; the processor can then work without stalling, hiding memory latency. Exploiting the temporal and spatial locality of memory accesses, a prefetcher predicts which memory address will be accessed next. We propose a prefetcher that applies the GRU model, which is well suited to handling time-series data. The currently accessed address is represented in binary and used as training data, and the Gated Recurrent Unit model is trained on the difference (delta) between consecutive memory accesses. Using the learned memory access patterns, the proposed data prefetcher then predicts the memory address to be accessed next. Compared against a multi-layer perceptron, our prefetcher showed better results.
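As a rough illustration of the data preparation this abstract describes, the sketch below computes the deltas between consecutive memory accesses and encodes them as binary feature vectors for a GRU; the trace addresses and bit width are assumptions for illustration, not values from the paper.

```python
def deltas(addresses):
    """Differences (deltas) between consecutive memory addresses."""
    return [b - a for a, b in zip(addresses, addresses[1:])]

def to_bits(value, width=8):
    """Encode a non-negative value as a fixed-width binary feature vector."""
    return [(value >> i) & 1 for i in reversed(range(width))]

trace = [0x1000, 0x1004, 0x1008, 0x1010]           # hypothetical access trace
features = [to_bits(d) for d in deltas(trace)]     # GRU training inputs
```

Each binary vector would then be one time step of the GRU's input sequence, with the next delta as the prediction target.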


Comparison of Anomaly Detection Performance Based on GRU Model Applying Various Data Preprocessing Techniques and Data Oversampling (다양한 데이터 전처리 기법과 데이터 오버샘플링을 적용한 GRU 모델 기반 이상 탐지 성능 비교)

  • Yoo, Seung-Tae;Kim, Kangseok
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.2
    • /
    • pp.201-211
    • /
    • 2022
  • Following the recent change in the cybersecurity paradigm, research on anomaly detection methods using machine learning and deep learning, which are AI implementation technologies, is increasing. In this study, we conducted a comparative study of data preprocessing techniques that can improve the anomaly detection performance of a GRU (Gated Recurrent Unit) neural-network-based intrusion detection model, using the open NGIDS-DS (Next Generation IDS Dataset). In addition, to address the class imbalance between normal and attack data, detection performance was compared and analyzed across oversampling ratios using an oversampling technique based on DCGAN (Deep Convolutional Generative Adversarial Networks). Experiments showed that preprocessing the system-call and process-execution-path features with the Doc2Vec algorithm performed well, and that DCGAN-based oversampling improved detection performance.
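To make the oversampling-ratio idea concrete, here is a minimal sketch in plain Python; random duplication stands in for DCGAN-generated samples, and the record values are hypothetical, not from NGIDS-DS.

```python
import random

def oversample(minority, ratio, seed=0):
    """Grow the minority (attack) class to `ratio` times its original size.
    Random duplication here is a stand-in for DCGAN-generated samples."""
    rng = random.Random(seed)
    target = int(len(minority) * ratio)
    extra = [rng.choice(minority) for _ in range(max(target - len(minority), 0))]
    return minority + extra

attacks = list(range(10))            # hypothetical attack-class records
balanced = oversample(attacks, 3.0)  # 30 records after oversampling
```

In the paper's setup, the detection performance of the GRU model would be compared across several values of `ratio`.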

Flow rate prediction at Paldang Bridge using deep learning models (딥러닝 모형을 이용한 팔당대교 지점에서의 유량 예측)

  • Seong, Yeongjeong;Park, Kidoo;Jung, Younghun
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.8
    • /
    • pp.565-575
    • /
    • 2022
  • Recently, in the field of water resources engineering, interest in predicting time-series water levels and flow rates with deep learning, which has developed rapidly alongside the Fourth Industrial Revolution, is increasing. Water-level and flow-rate prediction have been performed with the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models, but the accuracy of flow-rate prediction in rivers with rapid temporal fluctuations was found to be very low compared with that of water-level prediction. In this study, the Paldang Bridge station on the Han River was selected, a site with large flow-rate fluctuations and little tidal influence from the estuary. Water-level and flow-rate data covering 2 years and 7 months, a relatively short record, were collected for use as training and prediction data for the LSTM and GRU models. When the two models learned highly fluctuating time-series water levels, the predicted water levels achieved adequate accuracy against observations; however, when the rapidly fluctuating flow rates were trained directly, the predicted flow rates deteriorated significantly. Therefore, to predict the rapidly changing flow rate accurately, the water levels predicted by the two models were used as input to a rating curve, which significantly improved the prediction accuracy of the flow rates. The results of this study are expected to be useful for flood warning systems in urban rivers where hydrological records are relatively short and the flow rate changes rapidly.
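The two-stage idea above can be sketched with a standard power-law rating curve mapping predicted stage to discharge; the coefficients below are illustrative assumptions, not values fitted to the Paldang Bridge station.

```python
def rating_curve(stage, a=85.0, h0=0.3, b=2.2):
    """Power-law stage-discharge relation Q = a * (h - h0)**b.
    Coefficients a, h0, b are illustrative, not fitted to real data."""
    return a * max(stage - h0, 0.0) ** b

# Water levels predicted by an LSTM/GRU model, mapped to flow rates
predicted_levels = [0.8, 1.5, 2.4]
flows = [rating_curve(h) for h in predicted_levels]
```

Because the curve is monotone in stage, accurate water-level predictions translate directly into consistent flow-rate estimates.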

Fall Detection Based on 2-Stacked Bi-LSTM and Human-Skeleton Keypoints of RGBD Camera (RGBD 카메라 기반의 Human-Skeleton Keypoints와 2-Stacked Bi-LSTM 모델을 이용한 낙상 탐지)

  • Shin, Byung Geun;Kim, Uung Ho;Lee, Sang Woo;Yang, Jae Young;Kim, Wongyum
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.491-500
    • /
    • 2021
  • In this study, we propose a method for detecting fall behavior using MS Kinect v2 RGBD camera-based human-skeleton keypoints and a 2-Stacked Bi-LSTM model. In previous studies, skeletal information was extracted from RGB images with a deep learning model such as OpenPose, and recognition was then performed with a recurrent neural network such as an LSTM or GRU. The proposed method receives skeletal information directly from the camera, extracts two time-series features, acceleration and distance, and then recognizes fall behavior with the 2-Stacked Bi-LSTM model. The central joint was computed from major skeletal joints such as the shoulder, spine, and pelvis, and its movement acceleration and distance from the floor were proposed as features. The extracted features were compared across models such as Stacked LSTM and Bi-LSTM, and experiments demonstrated improved detection performance compared with existing GRU- and LSTM-based studies.
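The two features described can be sketched as follows; the joint coordinates and frame rate are hypothetical, and the finite-difference acceleration is one plausible reading of "movement acceleration", not the paper's exact formula.

```python
def central_joint(shoulder, spine, pelvis):
    """Mean position of the major joints, used as the 'central joint'."""
    return tuple(sum(c) / 3.0 for c in zip(shoulder, spine, pelvis))

def acceleration(p_prev, p_now, p_next, dt=1 / 30):
    """Per-axis second finite difference of position over three frames
    (assumed 30 fps), approximating movement acceleration."""
    return tuple((p0 - 2 * p1 + p2) / dt ** 2
                 for p0, p1, p2 in zip(p_prev, p_now, p_next))
```

The distance-from-floor feature would then be the central joint's height coordinate relative to the detected floor plane, tracked over time alongside the acceleration.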

Prediction of Sea Water Temperature by Using Deep Learning Technology Based on Ocean Buoy (해양관측부위 자료 기반 딥러닝 기술을 활용한 해양 혼합층 수온 예측)

  • Ko, Kwan-Seob;Byeon, Seong-Hyeon;Kim, Young-Won
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.3
    • /
    • pp.299-309
    • /
    • 2022
  • Recently, the sea water temperature around the Korean Peninsula has been steadily increasing. Water temperature changes not only affect the fishing ecosystem but are also closely related to military operations at sea. The purpose of this study is to suggest which model is more suitable for water temperature prediction by attempting short-term prediction with various deep-learning-based models. The data used are water temperature records from the East Sea (Goseong, Yangyang, Gangneung, and Yeongdeok) from 2016 to 2020, observed through marine observation by the National Fisheries Research Institute. Long Short-Term Memory (LSTM), Bidirectional LSTM, and Gated Recurrent Unit (GRU) models, which perform well on time-series data, were used for prediction. Whereas a previous study used only LSTM, this study compared the prediction accuracy and runtime of each technique. Based on 1-hour-ahead prediction, Bidirectional LSTM and GRU had the smallest error between actual and predicted values at all observation points, and GRU had the fastest training time. Bidirectional LSTM is therefore recommended where accuracy is the priority, while the GRU technique is judged more appropriate in areas that also require real-time prediction, such as anti-submarine operations.
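Since the GRU recurs throughout these papers, a minimal single-step NumPy sketch of its gate equations may help; weight matrices are passed in explicitly (a real model would learn them), and the zero-weight example below is purely illustrative.

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step: update gate z, reset gate r, candidate state."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h)             # update gate
    r = sigmoid(Wr @ x + Ur @ h)             # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))  # candidate hidden state
    return (1.0 - z) * h + z * h_cand        # interpolate old and candidate

# With all-zero weights, z = 0.5 and the candidate is 0, so h is halved.
h_new = gru_step(np.ones(2), np.array([0.4, -0.2]),
                 *(np.zeros((2, 2)) for _ in range(6)))
```

Running this step over a window of past water temperatures, with the final hidden state feeding a linear output layer, is the basic structure all three compared models share.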

Comparative characteristic of ensemble machine learning and deep learning models for turbidity prediction in a river (딥러닝과 앙상블 머신러닝 모형의 하천 탁도 예측 특성 비교 연구)

  • Park, Jungsu
    • Journal of Korean Society of Water and Wastewater
    • /
    • v.35 no.1
    • /
    • pp.83-91
    • /
    • 2021
  • The increased turbidity in rivers during flood events has various effects on water environmental management, including drinking water supply systems; prediction of turbid water is therefore essential. Recently, advanced machine learning algorithms have been increasingly used in water environmental management. Ensemble algorithms such as random forest (RF) and gradient boosting decision tree (GBDT) are among the most popular, along with deep learning algorithms such as recurrent neural networks. In this study, GBDT, an ensemble machine learning algorithm, and the gated recurrent unit (GRU), a recurrent neural network algorithm, were used to develop models for predicting turbidity in a river. The observation frequencies of the input data were 2, 4, 8, 24, 48, 120 and 168 h. The root-mean-square error-observations standard deviation ratio (RSR) ranged over 0.182~0.766 for GRU and 0.400~0.683 for GBDT. Both models show similar overall prediction accuracy, with RSR of 0.682 for GRU and 0.683 for GBDT. GRU shows better accuracy when the observation frequency is relatively short (i.e., 2, 4, and 8 h), whereas GBDT shows better accuracy when it is relatively long (i.e., 48, 120, and 168 h). The results suggest that the characteristics of the input data should be considered when developing an appropriate turbidity prediction model.
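The RSR metric used above is the RMSE divided by the standard deviation of the observations; a short self-contained implementation, with made-up example values:

```python
import math

def rsr(observed, predicted):
    """RMSE-observations standard deviation ratio (RSR); lower is better.
    A perfect prediction gives 0; values near 1 mean errors as large as
    the natural variability of the observations."""
    n = len(observed)
    mean = sum(observed) / n
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    std = math.sqrt(sum((o - mean) ** 2 for o in observed) / n)
    return rmse / std
```

Normalizing by the observations' spread is what makes RSR values comparable across the different observation frequencies tested in the study.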

A New Distributed Log Anomaly Detection Method based on Message Middleware and ATT-GRU

  • Wei Fang;Xuelei Jia;Wen Zhang;Victor S. Sheng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.2
    • /
    • pp.486-503
    • /
    • 2023
  • Logs play an important role in monitoring the health of a system; an experienced operations and maintenance engineer can judge which part of the system has a problem by checking the logs. In recent years, many system architectures have changed from single applications to distributed applications, which produces an enormous volume of logs and makes manually checking them for system errors impractical. To solve this problem, we propose a method based on message middleware and ATT-GRU (Attention Gated Recurrent Unit) to detect log anomalies in distributed systems. The work of this paper covers two aspects: (1) we design a high-performance distributed log collection architecture to collect the logs of the distributed system; (2) we improve the existing GRU by introducing an attention mechanism that weights the key parts of the log sequence, which improves the training efficiency and recognition accuracy of the model to a certain extent. Experimental results show that our method offers better performance and reliability.
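A minimal sketch of the attention weighting over GRU hidden states, assuming the common dot-product scoring with a learned vector `w` (the paper's exact attention form is not specified in the abstract):

```python
import numpy as np

def attention_pool(H, w):
    """Softmax-weighted sum of hidden states H (T x d), scores = H @ w.
    Positions with higher scores (key parts of the log sequence)
    contribute more to the pooled representation."""
    scores = H @ w
    alpha = np.exp(scores - scores.max())   # stable softmax
    alpha = alpha / alpha.sum()
    return alpha @ H
```

The pooled vector would then feed a classifier head that labels the log sequence as normal or anomalous.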

Estimation of Frost Occurrence using Multi-Input Deep Learning (다중 입력 딥러닝을 이용한 서리 발생 추정)

  • Yongseok Kim;Jina Hur;Eung-Sup Kim;Kyo-Moon Shim;Sera Jo;Min-Gu Kang
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.26 no.1
    • /
    • pp.53-62
    • /
    • 2024
  • In this study, we built models to estimate frost occurrence in South Korea using single-input and multi-input deep learning. The meteorological factors used as training data were minimum temperature, wind speed, relative humidity, cloud cover, and precipitation. Statistical analysis of each factor on days with and without frost showed significant differences. When the single-input and multi-input frost occurrence models were evaluated, the model combining GRU and MLP achieved the highest average accuracy, 0.8774. The results indicate that a multi-input frost occurrence model improves performance over single-input models using MLP, LSTM, or GRU alone.
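One common way to combine a GRU branch with an MLP branch is to concatenate their outputs before a classifier head; the sketch below assumes that design and uses placeholder weights, since the abstract does not specify the fusion architecture.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def frost_probability(seq_summary, static_feats, w_seq, w_static, bias):
    """Multi-input head: concatenate the GRU branch's sequence summary with
    the MLP branch's static weather features, then apply a linear classifier.
    All weights are placeholders, not the trained model's parameters."""
    fused = np.concatenate([seq_summary, static_feats])
    w = np.concatenate([w_seq, w_static])
    return sigmoid(fused @ w + bias)
```

With zero weights and bias the output is the uninformative prior 0.5; training would fit `w_seq`, `w_static`, and `bias` to the frost/no-frost labels.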

Android Malware Detection using Machine Learning Techniques KNN-SVM, DBN and GRU

  • Sk Heena Kauser;V.Maria Anu
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.7
    • /
    • pp.202-209
    • /
    • 2023
  • Android malware is on the rise because of the growing interest in the Android operating system. Machine learning models can classify unknown Android malware using characteristics gathered from dynamic and static analysis of Android applications. Anti-virus software, by contrast, simply scans a program for the signatures of known virus instances, maintaining these in large databases and checking each file against all existing virus and malware signatures. The proposed model aims to provide a machine learning method that addresses the inability of such signature-based detection to catch new malware apps, improving phone users' security and privacy. The system tracks numerous permission-based characteristics and events collected from Android apps and analyzes them with a classifier model to determine whether a program is goodware or malware. The method applies the machine learning techniques KNN-SVM, DBN, and GRU; in the reported accuracies, KNN gives 87.20 percent, SVM 91.40, Naive Bayes 85.10, and DBN-GRU 97.90. This paper employs standard machine learning techniques; in future work, we will attempt to improve these algorithms to develop a better detection algorithm.
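The permission-based characteristics mentioned above are typically encoded as a binary feature vector over a permission vocabulary; a small sketch with an illustrative (not exhaustive) vocabulary:

```python
def permission_vector(app_permissions, vocabulary):
    """Binary feature vector: 1 if the app requests each permission."""
    requested = set(app_permissions)
    return [1 if p in requested else 0 for p in vocabulary]

VOCAB = ["INTERNET", "READ_SMS", "SEND_SMS", "CAMERA"]   # illustrative subset
vec = permission_vector(["INTERNET", "SEND_SMS"], VOCAB)  # [1, 0, 1, 0]
```

Vectors like `vec` would be the input rows fed to the KNN-SVM, DBN, and GRU classifiers compared in the paper.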

A Study on the Korean Interest Rate Spread Prediction Model Using the US Interest Rate Spread : SVR-Ensemble (RNN, LSTM, GRU) Model based (미국 금리 스프레드를 이용한 한국 금리 스프레드 예측 모델에 관한 연구 : SVR-앙상블(RNN, LSTM, GRU) 모델 기반)

  • Jeong, Sun-Ho;Kim, Young-Hoo;Song, Myung-Jin;Chung, Yun-Jae;Ko, Sung-Seok
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.43 no.3
    • /
    • pp.1-9
    • /
    • 2020
  • Interest rate spreads indicate the condition of the economy and serve as an indicator of recession. The purpose of this study is to predict Korea's interest rate spread using US data with long-term continuity. To this end, 27 US economic indicators were used, reduced to 5 dimensions through principal component analysis to build the dataset needed for prediction. In the prediction model of this study, three RNN variants (basic RNN, LSTM, and GRU) predict the US interest rate spread, and their outputs feed an SVR ensemble model that predicts the Korean interest rate spread. The SVR ensemble model predicted Korea's interest rate spread with an RMSE of 0.0658, more accurate than the general ensemble model's RMSE of 0.0905, and showed excellent responsiveness to fluctuations. Improved prediction performance was also confirmed by dividing the study period according to policy changes. This study presents a new way to predict interest rates and yields better results. Follow-up studies using refined data representing the global economic situation are expected to achieve higher prediction accuracy and help forecast economic conditions in Korea as well as other countries.
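The dimensionality-reduction step can be sketched with an SVD-based PCA in NumPy; the data matrix below is synthetic, standing in for the 27 US indicators reduced to 5 components in the study (here k=2 for brevity).

```python
import numpy as np

def pca_reduce(X, k=5):
    """Project mean-centered data onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                       # center each indicator
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # scores in component space

data = np.arange(24, dtype=float).reshape(8, 3)   # 8 samples, 3 indicators
reduced = pca_reduce(data, k=2)                   # shape (8, 2)
```

The reduced components would then serve as inputs to the RNN, LSTM, and GRU models, whose spread predictions the SVR ensemble combines.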