• Title/Summary/Keyword: LSTM-RNN


Development of radar-based nowcasting method using Generative Adversarial Network (적대적 생성 신경망을 이용한 레이더 기반 초단시간 강우예측 기법 개발)

  • Yoon, Seong Sim; Shin, Hongjoon
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.64-64 / 2022
  • As abnormal climate conditions increase the frequency of sudden, localized heavy rainfall, very-short-range rainfall forecasts, which are more accurate than numerical weather prediction over short lead times (up to ~3 hours), are widely used for early warning of flash floods and urban floods. Such nowcasts are generally produced from radar data using extrapolation and motion-vector-based prediction techniques. Recently, with long-term archives of radar observations and sufficient computing resources available, deep-learning-based rainfall prediction using radar data (RNN (Recurrent Neural Network), CNN (Convolutional Neural Network), ConvLSTM, etc.) has been expanding abroad, and studies using ConvLSTM have also been conducted in Korea. CNN-based nowcasting models generally outperform extrapolation-based prediction, but as the lead time grows they tend to smooth the forecast spatially, making it difficult to predict distinct high-intensity precipitation and distorting the small-scale weather phenomena that are important for improving accuracy. To overcome this limitation, this study applies a nowcasting technique based on a Generative Adversarial Network (GAN). A GAN consists of two neural networks, a generator and a discriminator, that learn through adversarial competition; it learns the probability distribution of the data and can readily generate samples from the learned distribution. In this study, composite fields from the Ministry of Environment's large rainfall radars from 2017 to 2021 are collected, and the network is optimized by training on rainfall events. Forecasts from the trained network are then quantitatively compared against the radar nowcasts produced by the Korea Meteorological Administration and the Ministry of Environment.
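The adversarial competition between generator and discriminator described above can be sketched with the two standard binary cross-entropy objectives. This is a minimal illustration of the losses only, not the paper's training setup; the probe values are illustrative.

```python
import math

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy for the discriminator: it should score
    real radar fields near 1 and generated fields near 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: the generator is rewarded when
    the discriminator scores its samples near 1."""
    return -math.log(d_fake)

# At the equilibrium point the discriminator is fooled half the time.
d_loss = discriminator_loss(d_real=0.5, d_fake=0.5)
g_loss = generator_loss(d_fake=0.5)
```

Training alternates gradient steps that lower `d_loss` and `g_loss` in turn, which is what drives the generated rainfall fields toward the learned data distribution.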


MAGRU: Multi-layer Attention with GRU for Logistics Warehousing Demand Prediction

  • Ran Tian; Bo Wang; Chu Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.3 / pp.528-550 / 2024
  • Warehousing demand prediction is an essential part of the supply chain, providing a fundamental basis for product manufacturing, replenishment, warehouse planning, etc. Existing forecasting methods struggle to produce accurate forecasts because warehouse demand is affected by external factors such as holidays and seasons, some aspects, such as consumer psychology and producer reputation, are challenging to quantify, and the data can fluctuate widely or show no obvious trend cycles. We introduce a new model for warehouse demand prediction called MAGRU, which stands for Multi-layer Attention with GRU. In the model, we first perform an embedding operation on the input sequence to quantify the external influences; after that, we implement an encoder using a GRU and the attention mechanism, with the GRU hidden states capturing the essential time-series information. In the decoder, we use attention again to select the key hidden states among all time slices as the input to the GRU network. Experimental results show that this model is more accurate than RNN, LSTM, GRU, Prophet, XGBoost, and DARNN. Evaluated with mean absolute error (MAE) and symmetric mean absolute percentage error (SMAPE), MAGRU's MAE, RMSE, and SMAPE decreased by 7.65%, 10.03%, and 8.87% relative to GRU-LSTM, the current best model for this type of problem.
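The decoder step above, selecting key hidden states among all time slices, can be sketched as plain dot-product attention. This is a generic illustration under assumed shapes, not MAGRU's exact scoring function.

```python
import math

def attention(hidden_states, query):
    """Dot-product attention over encoder hidden states: a softmax over
    alignment scores weights each time slice, and the context vector
    (their weighted sum) is what gets fed to the decoder network."""
    scores = [sum(h * q for h, q in zip(state, query)) for state in hidden_states]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]          # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    context = [sum(w * state[j] for w, state in zip(weights, hidden_states))
               for j in range(len(query))]
    return weights, context

# Three hidden states of size 2; the query aligns most with the last one.
H = [[1.0, 0.0], [0.0, 1.0], [0.0, 2.0]]
w, ctx = attention(H, [0.0, 1.0])
```

The weights sum to one, so the context vector stays on the same scale as the hidden states regardless of sequence length.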

LSTM-based Deep Learning for Time Series Forecasting: The Case of Corporate Credit Score Prediction (시계열 예측을 위한 LSTM 기반 딥러닝: 기업 신용평점 예측 사례)

  • Lee, Hyun-Sang; Oh, Sehwan
    • The Journal of Information Systems / v.29 no.1 / pp.241-265 / 2020
  • Purpose Various machine learning techniques have been used to predict corporate credit. However, previous research does not utilize time-series input features and supports only a limited prediction timing. Furthermore, in the case of corporate bond credit rating forecasts, the corporate sample is small because only large companies receive corporate bond credit ratings. To address these limitations, this study implements a predictive model with a larger sample of companies that can adjust the forecasting point by using credit-score and corporate information as time series. Design/methodology/approach The model is built on a sample of 2,191 companies with KIS credit scores over the 18 years from 2000 to 2017. To improve performance, various financial and non-financial features are supplied as time-series input variables through a sliding-window technique. The study also tests the machine learning techniques traditionally used to increase the validity of analysis results, alongside deep learning techniques that have recently been actively researched. Findings An RNN-based stateful LSTM model shows good performance in credit rating prediction. By extending the forecasting time point, we observe how the performance of the predictive model changes over time and evaluate the feature groups over the short and long terms. Compared with other studies, the 5-class prediction results obtained through label reclassification show relatively good performance. In addition, about 90% accuracy is achieved in forecasting bad credit.
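The sliding-window technique mentioned above can be sketched as follows: each training sample is a fixed-length run of past observations paired with the value that follows it. The window length and the toy score series are illustrative, not the paper's settings.

```python
def sliding_windows(series, window):
    """Turn a time series into (features, target) pairs: each sample is
    `window` consecutive observations, and the target is the next value."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return X, y

# Hypothetical yearly credit scores for one company, windowed
# into sequences a stateful LSTM could consume.
scores = [700, 690, 710, 720, 715]
X, y = sliding_windows(scores, window=3)
```

Sliding the window by one step at a time multiplies the number of training samples obtainable from each company's history, which matters when the sample of rated companies is small.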

Development of Demand Forecasting Algorithm in Smart Factory using Hybrid-Time Series Models (Hybrid 시계열 모델을 활용한 스마트 공장 내 수요예측 알고리즘 개발)

  • Kim, Myungsoo; Jeong, Jongpil
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.5 / pp.187-194 / 2019
  • Traditional demand forecasting methods have difficulty meeting the needs of companies due to rapid changes in the market and the diversification of individual consumer needs. In a diversified production environment, an accurate demand forecast is an important factor for smooth yield management. Many of the predictive models commonly used in industry today are limited when applied individually. The proposed model is designed to overcome these limitations by exploiting the areas in which each model performs best. In this paper, variables are extracted through Gray Relational Analysis, which is suitable for dynamic process analysis, and a statistical forecast that captures the characteristics of historical demand data is generated through ARIMA. Combined with an LSTM model, demand forecasts can then be calculated that reflect the many factors affecting demand, through an architecture structured to avoid the long-term dependency problems of neural network models.
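The variable-extraction step via Gray Relational Analysis can be sketched as computing a relational grade between the demand series and each candidate influencing variable, then keeping the highest-graded ones. A minimal version, assuming the series are already normalized to a comparable scale and using the conventional distinguishing coefficient of 0.5:

```python
def grey_relational_grade(reference, candidate, rho=0.5):
    """Gray relational grade between a reference series (e.g. demand)
    and a candidate variable; values near 1 indicate a strong relation.
    rho is the standard distinguishing coefficient."""
    deltas = [abs(r - c) for r, c in zip(reference, candidate)]
    d_min, d_max = min(deltas), max(deltas)
    if d_max == 0:                     # identical series: perfect relation
        return 1.0
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)

g_same = grey_relational_grade([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
g_other = grey_relational_grade([1.0, 2.0, 3.0], [3.0, 1.0, 2.0])
```

Variables whose grade falls below a chosen threshold would be dropped before the ARIMA/LSTM stage.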

Performance comparison of various deep neural network architectures using Merlin toolkit for a Korean TTS system (Merlin 툴킷을 이용한 한국어 TTS 시스템의 심층 신경망 구조 성능 비교)

  • Hong, Junyoung; Kwon, Chulhong
    • Phonetics and Speech Sciences / v.11 no.2 / pp.57-64 / 2019
  • In this paper, we construct a Korean text-to-speech system using the Merlin toolkit, an open-source system for speech synthesis. In text-to-speech systems, the HMM-based statistical parametric speech synthesis method is widely used, but the quality of the synthesized speech is known to be degraded by limitations of the acoustic modeling scheme, which must include context factors. In this paper, we propose an acoustic modeling architecture that uses deep neural network techniques, which show excellent performance in various fields. Fully connected deep feedforward neural networks (DNN), recurrent neural networks (RNN), gated recurrent units (GRU), long short-term memory (LSTM), and bidirectional LSTM (BLSTM) are included in the architecture. Experimental results show that performance improves when sequence modeling is included in the architecture, and that the architectures with LSTM or BLSTM perform best. Including delta and delta-delta components in the acoustic feature parameters is also found to be advantageous for performance.
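The delta and delta-delta components mentioned in the conclusion can be sketched as simple frame-to-frame differences appended to each static feature frame. Note this simple-difference form is an illustrative stand-in: toolkits like Merlin typically compute deltas with a regression window rather than a plain difference.

```python
def add_deltas(frames):
    """Append delta (frame-to-frame difference) and delta-delta
    (difference of the deltas) values to each static feature frame,
    so the model sees the features' rate of change as well."""
    deltas, prev = [], frames[0]
    for f in frames:
        deltas.append([a - b for a, b in zip(f, prev)])
        prev = f
    ddeltas, prev = [], deltas[0]
    for d in deltas:
        ddeltas.append([a - b for a, b in zip(d, prev)])
        prev = d
    return [f + d + dd for f, d, dd in zip(frames, deltas, ddeltas)]

frames = [[1.0], [2.0], [4.0]]     # one static acoustic feature per frame
stacked = add_deltas(frames)       # static + delta + delta-delta per frame
```

The feature dimension triples, which is the cost paid for giving the acoustic model explicit trajectory information.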

Non-Intrusive Load Monitoring Method based on Long-Short Term Memory to classify Power Usage of Appliances (가전제품 전력 사용 분류를 위한 장단기 메모리 기반 비침입 부하 모니터링 기법)

  • Kyeong, Chanuk; Seon, Joonho; Sun, Young-Ghyu; Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.4 / pp.109-116 / 2021
  • In this paper, we propose a non-intrusive load monitoring (NILM) system that can recover the power consumption of each home appliance from the aggregated total power, motivated by the activation of distributed-resource trading markets and the increasing importance of energy management. In preprocessing, we transform each appliance's power consumption into a binary on-off state, and we use an LSTM as the model for predicting states from these data. Accuracy is measured by comparing the predicted states with the real ones after postprocessing. The accuracy is evaluated for different numbers of appliances, data postprocessing methods, and time-step sizes. The maximum accuracy is obtained when the number of appliances is six, the postprocessing method based on the Round function is used, and the time-step size is set to 6.
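The preprocessing, Round-function postprocessing, and accuracy steps above can be sketched for a single appliance. The 5 W on/off threshold and the sample readings are illustrative assumptions, not values from the paper.

```python
def to_on_off(powers, threshold=5.0):
    """Preprocessing: map raw power readings (watts) to binary on/off
    states; the 5 W threshold here is an illustrative choice."""
    return [1 if p > threshold else 0 for p in powers]

def postprocess_round(raw_outputs):
    """Postprocessing with the Round function: snap the model's
    continuous outputs back to clean 0/1 states."""
    return [int(round(o)) for o in raw_outputs]

def accuracy(pred, true):
    """Fraction of time steps where predicted state matches the real one."""
    return sum(p == t for p, t in zip(pred, true)) / len(true)

true_states = to_on_off([0.0, 2.0, 60.0, 80.0, 1.0])   # ground-truth power
pred_states = postprocess_round([0.1, 0.4, 0.9, 0.8, 0.6])  # model outputs
acc = accuracy(pred_states, true_states)
```

In the full system, an LSTM would produce the continuous outputs that `postprocess_round` converts back to states.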

A Systems Engineering Approach for Predicting NPP Response under Steam Generator Tube Rupture Conditions using Machine Learning

  • Tran Canh Hai, Nguyen; Aya, Diab
    • Journal of the Korean Society of Systems Engineering / v.18 no.2 / pp.94-107 / 2022
  • Accident prevention and mitigation are the highest priority of nuclear power plant (NPP) operation, particularly in the aftermath of the Fukushima Daiichi accident, which reignited public anxiety and skepticism regarding the use of nuclear energy. To deal with accident scenarios more effectively, operators must have ample and precise information about key safety parameters as well as their future trajectories. This work investigates the potential of machine learning to forecast NPP response in real time, providing an additional validation method and helping reduce human error, especially in accident situations where operators are under great stress. First, a base-case SGTR simulation is carried out with the best-estimate code RELAP5/MOD3.4 to confirm the validity of the model against results reported in the APR1400 Design Control Document (DCD). Then, uncertainty quantification is performed by coupling RELAP5/MOD3.4 with the statistical tool DAKOTA to generate a dataset large enough for constructing and training neural machine learning (ML) models, namely LSTM, GRU, and a hybrid CNN-LSTM. Finally, the accuracy and reliability of these models in forecasting system response are tested by their performance on fresh data. To facilitate and oversee the development of the ML models, a Systems Engineering (SE) methodology is used to ensure that the work remains consistent with the originating mission statement and that the findings obtained at each phase are valid.

A Study On The Classification Of Driver's Sleep State While Driving Through BCG Signal Optimization (BCG 신호 최적화를 통한 주행중 운전자 수면 상태 분류에 관한 연구)

  • Park, Jin Su; Jeong, Ji Seong; Yang, Chul Seung; Lee, Jeong Gi
    • The Journal of the Convergence on Culture Technology / v.8 no.6 / pp.905-910 / 2022
  • Drowsy driving demands considerable social attention because it raises the incidence of traffic accidents and leads to fatal crashes, and the number of accidents caused by drowsy driving increases every year. To address this problem, research on measuring various biosignals is being conducted worldwide; this paper focuses on non-contact biosignal analysis. A moving vehicle generates various noises, such as engine, tire, and body vibrations. To measure the driver's heart rate and respiration rate in a moving vehicle with a piezoelectric sensor, we designed a sensor plate that cushions vehicle vibrations, reducing the noise generated by the vehicle. In addition, we developed a system that classifies whether the driver is asleep by training a model with a CNN-LSTM ensemble learning technique on the piezoelectric sensor signal. To learn the sleep state, the subjects' biosignals were acquired every 30 seconds, and 797 data samples were comparatively analyzed.

A Study on the Data Driven Neural Network Model for the Prediction of Time Series Data: Application of Water Surface Elevation Forecasting in Hangang River Bridge (시계열 자료의 예측을 위한 자료 기반 신경망 모델에 관한 연구: 한강대교 수위예측 적용)

  • Yoo, Hyungju; Lee, Seung Oh; Choi, Seohye; Park, Moonhyung
    • Journal of Korean Society of Disaster and Security / v.12 no.2 / pp.73-82 / 2019
  • Recently, as the frequency of sudden floods due to climate change has increased, flood damage to riverside social infrastructure has grown, along with the threat of overflow; administrators therefore need rapid predictions of potential flooding at riverside infrastructure. However, most current flood forecasting models, including hydraulic models, produce highly accurate numerical results but require long simulation times. To alleviate this limitation, data-driven models using artificial neural networks have been widely used, yet existing models cannot consider time-series parameters. In this study the water surface elevation at the Hangang River bridge was predicted using a NARX model that considers time-series parameters, and the results of ANN and RNN models were compared with the NARX model to assess its suitability. Using 10 years of hydrological data from 2009 to 2018, 70% of the data were used for learning and 15% each for testing and evaluation. When predicting the water surface elevation at the Hangang River bridge 3 hours ahead for 2018, the ANN, RNN, and NARX models gave RMSEs of 0.20 m, 0.11 m, and 0.09 m; MAEs of 0.12 m, 0.06 m, and 0.05 m; and peak errors of 1.56 m, 0.55 m, and 0.10 m, respectively. Analysis of these errors shows that the NARX model is the most suitable for predicting water surface elevation, because it can learn the trend of the time-series data and derives accurate predictions even at high water surface elevations by using the hyperbolic tangent and Rectified Linear Unit functions as activation functions. However, the NARX model suffers from vanishing gradients as the sequence length grows; in future work, the accuracy of water surface elevation prediction will be examined using an LSTM model.
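The three error measures used to rank the models above (RMSE, MAE, and peak error) can be sketched directly; the stage values below are illustrative, not the study's data.

```python
import math

def rmse(pred, obs):
    """Root mean square error: penalizes large misses more heavily."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def mae(pred, obs):
    """Mean absolute error: average miss in the same units as the data."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

def peak_error(pred, obs):
    """Absolute error at the observed peak, the moment that matters
    most for flood warning at high water surface elevations."""
    i = max(range(len(obs)), key=obs.__getitem__)
    return abs(pred[i] - obs[i])

obs  = [1.0, 2.0, 5.0, 3.0]   # observed stage (m), illustrative
pred = [1.1, 1.9, 4.6, 3.2]   # forecast stage (m), illustrative
errors = (rmse(pred, obs), mae(pred, obs), peak_error(pred, obs))
```

Reporting peak error separately is what exposes the large gap between models (1.56 m vs. 0.10 m) that the averaged metrics understate.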

A Comparative Study of Machine Learning Algorithms Based on Tensorflow for Data Prediction (데이터 예측을 위한 텐서플로우 기반 기계학습 알고리즘 비교 연구)

  • Abbas, Qalab E.; Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems / v.10 no.3 / pp.71-80 / 2021
  • Selecting an appropriate neural network algorithm is an important step for accurate data prediction in machine learning. Many algorithms based on basic artificial neural networks have been devised to efficiently predict future data, including deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) networks. Developers face difficulties when choosing among these networks because sufficient information on their performance is unavailable. To alleviate this difficulty, we evaluated the performance of each algorithm by comparing errors and processing times. Each neural network model was trained on a tax dataset, and the trained model was used for data prediction to compare accuracies across the algorithms. The effects of activation functions and various optimizers on model performance were also analyzed. The experimental results show that the GRU and LSTM algorithms yield the lowest prediction error, with an average RMSE of 0.12 and average R2 scores of 0.78 and 0.75, respectively, while the basic DNN model achieves the shortest processing time but the highest average RMSE of 0.163. Furthermore, the Adam optimizer yields the best performance (with DNN, GRU, and LSTM) in terms of error and the worst in terms of processing time. The findings of this study are expected to be useful for scientists and developers.
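The R2 score used to compare the models above can be sketched as the standard coefficient of determination; the toy series below is illustrative.

```python
def r2_score(pred, obs):
    """Coefficient of determination: 1 - SS_res / SS_tot.
    1.0 is a perfect fit; 0.0 is no better than predicting the mean."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [1.0, 2.0, 3.0, 4.0]
r2_perfect = r2_score(obs, obs)          # model reproduces the data exactly
r2_mean = r2_score([2.5] * 4, obs)       # model that always predicts the mean
```

Unlike RMSE, R2 is scale-free, which is why the study reports both: RMSE for absolute error in the data's units and R2 for explained variance.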