• Title/Summary/Keyword: LSTM model

Comparative Study of Performance of Deep Learning Algorithms in Particulate Matter Concentration Prediction (미세먼지 농도 예측을 위한 딥러닝 알고리즘별 성능 비교)

  • Cho, Kyoung-Woo;Jung, Yong-jin;Oh, Chang-Heon
    • Journal of Advanced Navigation Technology
    • /
    • v.25 no.5
    • /
    • pp.409-414
    • /
    • 2021
  • The growing concern over particulate matter emissions has prompted demand for highly reliable particulate matter forecasting. Several current studies on particulate matter prediction use various deep learning algorithms. In this study, we compared the predictive performance of typical neural networks used for particulate matter prediction. We used deep neural network (DNN), recurrent neural network (RNN), and long short-term memory (LSTM) algorithms to design an optimal predictive model on the basis of a hyperparameter search. A comparative analysis of the models' predictive performance indicates that the variation trends of the actual and predicted values generally agreed well. In the analysis based on root mean square error (RMSE) and accuracy, the DNN-based prediction model showed higher reliability for prediction errors than the other prediction models.
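The comparison above ranks models by root mean square error. As a reference, here is a minimal sketch of how RMSE between an observed and a predicted PM concentration series can be computed; the series values are illustrative, not from the paper:

```python
import math

def rmse(actual, predicted):
    """Root mean square error between two equal-length series."""
    if len(actual) != len(predicted):
        raise ValueError("series must have the same length")
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )

# Illustrative hourly PM2.5 concentrations (ug/m3): observed vs. model output.
observed = [35.0, 42.0, 38.0, 55.0]
predicted = [33.0, 45.0, 40.0, 50.0]
error = rmse(observed, predicted)
```

A lower RMSE means the predicted curve tracks the observed concentrations more closely, which is how the DNN model's advantage is quantified above.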

Improving prediction performance of network traffic using dense sampling technique (밀집 샘플링 기법을 이용한 네트워크 트래픽 예측 성능 향상)

  • Jin-Seon Lee;Il-Seok Oh
    • Smart Media Journal
    • /
    • v.13 no.6
    • /
    • pp.24-34
    • /
    • 2024
  • If the future can be predicted from network traffic data, which is a time series, effects such as efficient resource allocation, prevention of malicious attacks, and energy saving can be achieved. Many models based on statistical and deep learning techniques have been proposed, and most of these studies have focused on improving model structures and learning algorithms. Another approach to improving a model's prediction performance is to obtain good-quality data. With this aim, this paper applies a dense sampling technique that augments time series data to network traffic prediction and analyzes the resulting performance improvement. As the dataset, UNSW-NB15, which is widely used for network traffic analysis, is used. Performance is analyzed using RMSE, MAE, and MAPE. To increase the objectivity of the performance measurement, the experiment is performed independently 10 times, and the performance of existing sparse sampling and of dense sampling is compared with box plots. Comparing performance while varying the window size and the horizon factor, dense sampling consistently showed better performance.
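The dense sampling idea above can be illustrated with sliding windows: sparse sampling advances the input window by its full length, while dense sampling advances it one step at a time, extracting many more training pairs from the same series. A minimal sketch, with illustrative window and horizon sizes:

```python
def make_windows(series, window, horizon, stride):
    """Cut (input window, target) pairs from a 1-D series.

    stride=window -> sparse (non-overlapping) sampling;
    stride=1      -> dense (maximally overlapping) sampling.
    """
    samples = []
    i = 0
    while i + window + horizon <= len(series):
        x = series[i:i + window]
        y = series[i + window:i + window + horizon]
        samples.append((x, y))
        i += stride
    return samples

traffic = list(range(20))  # stand-in for a network-traffic series
sparse = make_windows(traffic, window=4, horizon=1, stride=4)
dense = make_windows(traffic, window=4, horizon=1, stride=1)
# Dense sampling yields far more training pairs than sparse sampling.
```

From a 20-point series with window 4 and horizon 1, sparse sampling yields 4 pairs while dense sampling yields 16, which is the data-augmentation effect the paper exploits.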

Short- and Mid-term Power Consumption Forecasting using Prophet and GRU (Prophet와 GRU을 이용하여 단중기 전력소비량 예측)

  • Nam Rye Son;Eun Ju Kang
    • Smart Media Journal
    • /
    • v.12 no.11
    • /
    • pp.18-26
    • /
    • 2023
  • The building energy management system (BEMS) is designed to manage energy production and consumption efficiently. Because power consumption within buildings varies with their physical characteristics, a stable power supply is required, and accurate prediction of building energy consumption becomes crucial for ensuring reliable power delivery. Recent research has explored various approaches, including time series analysis, statistical analysis, and artificial intelligence, to predict power consumption. This paper analyzes the strengths and weaknesses of the Prophet model, utilizing its advantages such as growth, seasonality, and holiday patterns, while also addressing its limitations related to data complexity and external variables such as climatic data. To overcome these challenges, the paper proposes an algorithm that combines the Prophet model's strengths with the gated recurrent unit (GRU) to forecast short-term (2 days) and medium-term (7, 15, and 30 days) building energy consumption. Experimental results demonstrate the superior performance of the proposed approach compared to conventional GRU and Prophet models.
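The abstract does not specify how Prophet and the GRU are combined; one common hybrid pattern is residual correction, where a decomposition-style model captures trend and seasonality and a second model predicts what remains. A toy sketch of that pattern, with a crude seasonal baseline standing in for Prophet and a trivial function standing in for the GRU (all names and values are illustrative):

```python
def seasonal_baseline(series, period):
    """Average value at each position within the period: a crude stand-in
    for the trend/seasonality component a Prophet-style model captures."""
    buckets = [[] for _ in range(period)]
    for i, v in enumerate(series):
        buckets[i % period].append(v)
    return [sum(b) / len(b) for b in buckets]

def hybrid_forecast(series, period, horizon, residual_model):
    """Baseline forecast plus a correction from a residual model
    (a GRU would play the residual-model role in the paper's setting)."""
    base = seasonal_baseline(series, period)
    residuals = [v - base[i % period] for i, v in enumerate(series)]
    corrections = residual_model(residuals, horizon)
    return [base[(len(series) + h) % period] + corrections[h]
            for h in range(horizon)]

def mean_residual(residuals, horizon):
    """Trivial residual model: persist the mean residual (illustrative only)."""
    m = sum(residuals) / len(residuals)
    return [m] * horizon

consumption = [10, 20, 30, 10, 20, 30, 12, 22, 32]  # toy daily kWh values
forecast = hybrid_forecast(consumption, period=3, horizon=3,
                           residual_model=mean_residual)
```

Swapping `mean_residual` for a trained recurrent network is what turns this sketch into the kind of Prophet-plus-GRU hybrid the paper evaluates.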

Learning Data Model Definition and Machine Learning Analysis for Data-Based Li-Ion Battery Performance Prediction (데이터 기반 리튬 이온 배터리 성능 예측을 위한 학습 데이터 모델 정의 및 기계학습 분석)

  • Byoungwook Kim;Ji Su Park;Hong-Jun Jang
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.3
    • /
    • pp.133-140
    • /
    • 2023
  • The performance of lithium-ion batteries depends on the usage environment and the combination ratio of cathode materials. To develop a high-performance lithium-ion battery, it is necessary to manufacture batteries and measure their performance while varying the cathode material ratio. However, directly developing batteries and measuring their performance for all combinations of variables takes a great deal of time and money. Therefore, research on predicting battery performance with artificial intelligence models has been actively conducted. However, because the measurement experiments in existing published battery datasets were conducted with the same battery, the cathode material combination ratio was fixed and was not included as a data attribute. In this paper, we define the training data model required to develop an artificial intelligence model that can predict battery performance according to the combination ratio of cathode materials. We analyzed the factors that can affect the performance of lithium-ion batteries and defined the mass of each cathode material and the battery usage environment (cycle, current, temperature, time) as input data, and the battery power and capacity as target data. Across battery data from different experimental environments, each battery's data maintained a unique pattern, and the battery classification model classified each battery with an error of about 2%.
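The data model above pairs cathode material masses and usage-environment variables as inputs with power and capacity as targets. A minimal sketch of assembling one such training record; the field names and values are illustrative, not the paper's schema:

```python
def make_training_record(cathode_masses, cycle, current, temperature, time_s,
                         power, capacity):
    """Assemble one (input, target) pair for the battery data model:
    inputs are cathode material masses plus the usage environment,
    targets are power and capacity."""
    features = list(cathode_masses) + [cycle, current, temperature, time_s]
    targets = [power, capacity]
    return features, targets

x, y = make_training_record(
    cathode_masses=[0.5, 0.3, 0.2],  # e.g. masses of three cathode materials
    cycle=100, current=1.5, temperature=25.0, time_s=3600.0,
    power=3.7, capacity=2.6)
```

Including the cathode masses in the feature vector is what lets a model trained on such records generalize across material combination ratios, which fixed-composition public datasets cannot support.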

A Study on the traffic flow prediction through Catboost algorithm (Catboost 알고리즘을 통한 교통흐름 예측에 관한 연구)

  • Cheon, Min Jong;Choi, Hye Jin;Park, Ji Woong;Choi, HaYoung;Lee, Dong Hee;Lee, Ook
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.3
    • /
    • pp.58-64
    • /
    • 2021
  • As the number of registered vehicles increases, traffic congestion will worsen, which may inhibit urban social and economic development. To prevent traffic congestion through accurate traffic flow prediction, various AI techniques have been used. This paper uses data from a VDS (Vehicle Detection System) as input variables. This study predicted traffic flow in five levels (free flow, somewhat delayed, delayed, somewhat congested, and congested), rather than in two levels (free flow and congested). The CatBoost model, a machine learning algorithm, was used in this study. The model predicts traffic flow in five levels, and its accuracy is compared with and analyzed against other algorithms. In addition, a preprocessed model that went through RandomizedSearchCV and one-hot encoding was compared with the naive one. As a result, the CatBoost model without any hyperparameter tuning showed the highest accuracy, 93%. Overall, the CatBoost model analyzes and predicts large amounts of categorical traffic data better than the other machine learning and deep learning models, and its initial parameter settings are already well optimized.
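The five-level labeling above can be sketched as a simple thresholding of a VDS speed reading; the cut-off speeds here are illustrative assumptions, since the paper does not state its thresholds:

```python
LEVELS = ["free flow", "somewhat delayed", "delayed",
          "somewhat congested", "congested"]

def congestion_level(speed_kmh, thresholds=(80, 60, 40, 20)):
    """Map a VDS speed reading to one of five congestion levels.
    Thresholds are illustrative, not the paper's cut-offs."""
    for level, cutoff in enumerate(thresholds):
        if speed_kmh >= cutoff:
            return LEVELS[level]
    return LEVELS[-1]

label = congestion_level(55.0)
```

Labels produced this way are categorical, which is the kind of target CatBoost's handling of categorical data is designed for.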

A Non-annotated Recurrent Neural Network Ensemble-based Model for Near-real Time Detection of Erroneous Sea Level Anomaly in Coastal Tide Gauge Observation (비주석 재귀신경망 앙상블 모델을 기반으로 한 조위관측소 해수위의 준실시간 이상값 탐지)

  • LEE, EUN-JOO;KIM, YOUNG-TAEG;KIM, SONG-HAK;JU, HO-JEONG;PARK, JAE-HUN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.26 no.4
    • /
    • pp.307-326
    • /
    • 2021
  • Real-time sea level observations from tide gauges include missing and erroneous values; the latter can be classified as abnormal values by a quality control procedure. Although the 3σ (three standard deviations) rule has generally been applied to eliminate them, it is difficult to apply to sea-level data, where extreme values can exist due to weather events and where erroneous values can exist even within the 3σ range. The artificial intelligence model set designed in this study consists of non-annotated recurrent neural networks and ensemble techniques that do not require pre-labeling of the abnormal values. The developed model can identify an erroneous value within 20 minutes of the tide gauge recording an abnormal sea level. The validated model separates normal and abnormal values well, both during normal times and during weather events. It was also confirmed that abnormal values can be detected even in years whose sea level data were not used for training. The artificial neural network algorithm utilized in this study is not limited to the coastal sea level, and hence it can be extended to models for detecting erroneous values in various oceanic and atmospheric data.
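The 3σ rule that the abstract contrasts against can be sketched as follows: it flags any value more than three standard deviations from the mean, which is exactly what misfires on legitimate weather-driven extremes. The sea-level values are illustrative:

```python
import math

def three_sigma_outliers(values):
    """Indices of values lying outside mean +/- 3 standard deviations."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [i for i, v in enumerate(values)
            if std > 0 and abs(v - mean) > 3 * std]

# A mostly flat sea-level series (cm) with one spurious spike.
levels = [120.0] * 30 + [400.0] + [120.0] * 30
flagged = three_sigma_outliers(levels)
```

The rule flags the isolated spike here, but a genuine storm surge of similar magnitude would be flagged just as readily, which motivates the learned detector above.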

Application of spatiotemporal transformer model to improve prediction performance of particulate matter concentration (미세먼지 예측 성능 개선을 위한 시공간 트랜스포머 모델의 적용)

  • Kim, Youngkwang;Kim, Bokju;Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.329-352
    • /
    • 2022
  • Particulate matter (PM) is reported to penetrate the lungs and blood vessels and cause various heart diseases and respiratory diseases such as lung cancer. The subway is a means of transportation used by an average of 10 million people a day, and although it is important to create a clean and comfortable environment, the level of particulate matter pollution is high. This is because the subways run through underground tunnels, and the particulate matter trapped in a tunnel moves into the underground station with the train wind. The Ministry of Environment and the Seoul Metropolitan Government are making various efforts to reduce PM concentration by establishing measures to improve air quality at underground stations. The smart air quality management system manages air quality proactively by collecting air quality data and analyzing and predicting the PM concentration; the PM concentration prediction model is an important component of this system. Various studies on time series data prediction are being conducted, but for PM prediction in subway stations, research has been limited to statistical or recurrent neural network-based deep learning models. Therefore, in this study, we propose four transformer-based models, including spatiotemporal transformers. PM concentration prediction experiments in the waiting rooms of subway stations in Seoul confirmed that the performance of the transformer-based models was superior to that of the existing ARIMA, LSTM, and Seq2Seq models. Among the transformer-based models, the spatiotemporal transformers performed best. The smart air quality management system operated through data-based prediction becomes more effective and energy-efficient as the accuracy of PM prediction improves. The results of this study are expected to contribute to the efficient operation of the smart air quality management system.
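Unlike the recurrent models above, transformers have no built-in notion of time step order and instead inject it through positional encodings. A minimal sketch of the standard sinusoidal encoding from the original Transformer; the paper's spatiotemporal variant may differ:

```python
import math

def positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional encoding (Vaswani et al. style):
    pe[pos][2i] = sin(pos / 10000^(2i/d)), pe[pos][2i+1] = cos(...)."""
    pe = []
    for pos in range(seq_len):
        row = []
        for i in range(d_model):
            angle = pos / (10000 ** ((i // 2 * 2) / d_model))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        pe.append(row)
    return pe

pe = positional_encoding(seq_len=4, d_model=8)
```

Each time step of the PM series gets a distinct vector added to its embedding, letting self-attention distinguish earlier from later measurements.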

Study on data preprocessing methods for considering snow accumulation and snow melt in dam inflow prediction using machine learning & deep learning models (머신러닝&딥러닝 모델을 활용한 댐 일유입량 예측시 융적설을 고려하기 위한 데이터 전처리에 대한 방법 연구)

  • Jo, Youngsik;Jung, Kwansue
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.1
    • /
    • pp.35-44
    • /
    • 2024
  • Research on dam inflow prediction has actively explored the use of data-driven machine learning and deep learning (ML&DL) tools across diverse domains. For precise dam inflow prediction, it is crucial not only to improve the inherent model performance but also to account for model characteristics and to preprocess the data. In particular, in dam basins influenced by snow accumulation, such as Soyang Dam, existing rainfall data, in which snowfall is recorded as rainfall through heating facilities, distort the correlation between snow accumulation and rainfall. This study focuses on the preprocessing of rainfall data essential for applying ML&DL models to predict dam inflow in basins affected by snow accumulation. This is vital to address physically driven phenomena such as reduced outflow during winter, when precipitation is stored as snow, and increased outflow during spring despite little or no rain. Three machine learning models (SVM, RF, LGBM) and two deep learning models (LSTM, TCN) were built by combining rainfall and inflow series. With optimal hyperparameter tuning, the appropriate model was selected, resulting in a high level of predictive performance with NSE ranging from 0.842 to 0.894. Moreover, to generate rainfall correction data considering snow accumulation, a simulated snow accumulation algorithm was developed. Applying this correction to the machine learning and deep learning models yielded NSE values ranging from 0.841 to 0.896, a similarly high level of predictive performance. Notably, adjusting rainfall for the snow accumulation period during the training phase led to a more accurate simulation of observed inflow at prediction time. This underscores the importance of thoughtful data preprocessing that takes physical factors such as snowfall and snowmelt into account when constructing data models.
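The NSE values reported above (0.84-0.90) are Nash-Sutcliffe efficiencies, where 1 is a perfect fit and 0 means the model is no better than predicting the observed mean. A minimal sketch of the computation, with illustrative inflow values:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency:
    1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_o = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_o) ** 2 for o in observed)
    return 1.0 - num / den

inflow_obs = [10.0, 50.0, 120.0, 30.0]  # illustrative daily inflows (m3/s)
inflow_sim = [12.0, 48.0, 110.0, 35.0]
score = nse(inflow_obs, inflow_sim)
```

Because the denominator is the variance of the observations, NSE rewards models that reproduce high-flow events rather than merely tracking the average, which suits dam inflow evaluation.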

A Research about Time Domain Estimation Method for Greenhouse Environmental Factors based on Artificial Intelligence (인공지능 기반 온실 환경인자의 시간영역 추정)

  • Lee, JungKyu;Oh, JongWoo;Cho, YongJin;Lee, Donghoon
    • Journal of Bio-Environment Control
    • /
    • v.29 no.3
    • /
    • pp.277-284
    • /
    • 2020
  • To increase the utilization of intelligent methodologies in smart farm management, estimation modeling techniques are required for real-time advance assessment of crop and environmental changes. For a key environmental factor such as CO2, it is challenging to establish a reliable time-domain estimation model in indoor agricultural facilities, where various correlated variables are highly coupled. Thus, this study developed an artificial neural network that reduces time complexity by using environmental information from adjacent time periods as input variables and CO2 as the output variable. The environmental factors in the smart farm were continuously measured through experiments using measuring devices with integrated sensors. To predict CO2, Model 1, trained on the mean data over the experimental period, and Model 2, trained on day-by-day data, were constructed. Model 2, trained on the previous day's data, performed better than Model 1, trained on the 60-day average. Up to 30 days, most cases showed a coefficient of determination between 0.70 and 0.88, with Model 2 about 0.05 higher. After 30 days, however, the coefficients of determination of both models dropped below 0.50. Comparing the coefficients of determination across modeling approaches showed that data from adjacent time periods yielded relatively high performance at the points requiring prediction, compared with a fixed neural network model.
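The idea of using environmental readings from adjacent time steps as inputs and CO2 as the output can be sketched as lagged feature construction; the variable set (temperature, humidity) and the lag count here are illustrative assumptions:

```python
def make_lagged_samples(env_rows, co2, lags):
    """Build (input, target) pairs: the input concatenates the environmental
    readings of the previous `lags` time steps, the target is current CO2."""
    samples = []
    for t in range(lags, len(co2)):
        features = []
        for k in range(t - lags, t):
            features.extend(env_rows[k])
        samples.append((features, co2[t]))
    return samples

# Illustrative readings: [temperature, humidity] per time step, plus CO2 (ppm).
env = [[24.0, 60.0], [24.5, 61.0], [25.0, 62.0], [25.5, 63.0]]
co2 = [410.0, 415.0, 420.0, 425.0]
pairs = make_lagged_samples(env, co2, lags=2)
```

Feeding such pairs to a neural network trains it to estimate the current CO2 level from recent nearby measurements, which is the time-domain estimation the study describes.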

Leased Line Traffic Prediction Using a Recurrent Deep Neural Network Model (순환 심층 신경망 모델을 이용한 전용회선 트래픽 예측)

  • Lee, In-Gyu;Song, Mi-Hwa
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.10
    • /
    • pp.391-398
    • /
    • 2021
  • Since a leased line exclusively connects two areas for data transmission, it ensures a stable quality level and security, and despite the rapid increase in switched lines, it remains widely used by companies. However, because its cost is relatively high, one of the important roles of the network operator in an enterprise is to maintain an optimal state by properly allocating and utilizing leased-line resources. In other words, to properly support business service requirements, it is essential to manage the bandwidth resources of leased lines from the viewpoint of data transmission, and properly predicting and managing leased-line usage becomes a key factor. Therefore, in this study, various prediction models were applied and their performance was evaluated based on actual usage-rate data of leased lines used in corporate networks. The performance of each prediction was measured and compared by applying the smoothing and ARIMA models, which are widely used statistical methods, and representative deep learning models based on artificial neural networks, which are being studied extensively these days. In addition, based on the experimental results, we propose the items to be considered for each model to achieve good prediction performance from the viewpoint of effective operation of leased-line resources.
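Of the statistical baselines mentioned, the smoothing model is the simplest to sketch. Here is simple exponential smoothing, where each smoothed value blends the newest observation with the running estimate; the usage-rate values and the alpha setting are illustrative:

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}.
    Returns the smoothed series; the last value is the one-step forecast."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

usage = [40.0, 42.0, 55.0, 50.0, 47.0]  # illustrative line utilization (%)
forecast_next = exponential_smoothing(usage, alpha=0.3)[-1]
```

A larger alpha makes the forecast react faster to recent spikes in line usage at the cost of more noise, which is the main tuning trade-off for this baseline.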