• Title/Summary/Keyword: LSTM (Long Short-Term Memory)


Linkage of Numerical Analysis Model and Machine Learning for Real-time Flood Risk Prediction (도시홍수 위험도 실시간 표출을 위한 수치해석 모형과 기계학습의 연계)

  • Kim, Hyun Il; Han, Kun Yeun; Kim, Tae Hyung; Choi, Kyu Hyun; Cho, Hyo Seop
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.332-332 / 2021
  • At a time when urbanization has progressed considerably and sudden localized downpours occur unpredictably, the risk of inland (pluvial) flooding, which can cause property damage and casualties, is increasing. To predict inland flooding, studies have long referenced observed rainfall or probable-rainfall scenarios and performed one- and two-dimensional hydraulic analyses of the study area; however, numerical analysis models require diverse hydro-geomorphological and monitoring data, and their computation-intensive procedures make short-term prediction difficult. To resolve these problems, this study developed a real-time flood risk map prediction model for a single urban drainage subcatchment, using observed rainfall data, 1-D and 2-D numerical analysis models, and machine learning and deep learning techniques. The LSTM (Long Short-Term Memory) technique was applied so that flood discharge can be predicted from rainfall data in real time, with prediction performed by training on a variety of 1-D urban runoff analysis results for nationwide rainfall. For the spatial distribution of inundation depth, logistic regression was used to make predictions against each threshold inundation depth. Flood risk grades were computed with a formula that considers inundation depth, flow velocity, and a debris factor, and these results were learned by a Random Forest so that real-time prediction is possible. Predictions of inundation extent and flood risk grade were made on a grid basis; because validation data were scarce, predictive power was evaluated by comparison with 2-D inundation analysis results verified against inundation trace maps. Given specific observed or forecast rainfall as input, this technique is expected to enable real-time prediction and management, at the urban watershed scale, of sections that become impassable and must be closed off.

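A minimal sketch of the rainfall-to-discharge LSTM step described above, assuming hourly gauge data; the window length, layer sizes, and the random arrays standing in for the 1-D runoff training set are illustrative, not the authors' configuration:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW = 24  # assumed: 24 hourly rainfall readings per sample

# One rainfall gauge in, one discharge value out.
model = Sequential([
    LSTM(64, input_shape=(WINDOW, 1)),
    Dense(1),  # predicted flood discharge
])
model.compile(optimizer="adam", loss="mse")

# Random arrays stand in for the nationwide 1-D urban runoff
# analysis results the paper trains on.
X = np.random.rand(500, WINDOW, 1)  # rainfall windows
y = np.random.rand(500, 1)          # simulated discharge
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```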

Prediction of rainfall abstraction based on deep learning considering watershed and rainfall characteristic factors (유역 및 강우 특성인자를 고려한 딥러닝 기반의 강우손실 예측)

  • Jeong, Minyeob; Kim, Dae-Hong; Kim, Seokgyun
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.37-37 / 2022
  • The model mainly used in Korea for computing effective rainfall is the NRCS-CN (Natural Resources Conservation Service curve number) model, whose parameters, such as the runoff curve number (CN) representing a watershed's runoff capacity, are determined per watershed using observed rainfall-runoff data or soil and land-cover maps. However, a watershed's CN can vary with environmental conditions such as the soil moisture state; to reflect this, the antecedent moisture condition (AMC) is used to adjust the CN, but abrupt changes in CN across AMC classes can produce extreme changes in runoff. Alongside NRCS-CN, the Green-Ampt model is widely used for estimating rainfall loss. The Green-Ampt model has the advantage of representing the physical process of infiltration in a watershed, but estimating its various physical parameters requires extensive prior investigation of the watershed, and the estimated parameters carry uncertainties related to soil and vegetation conditions, making practical application difficult. This study therefore proposes a method for estimating the parameters of the rainfall loss models currently in use. The proposed method is based on deep learning, using a Long Short-Term Memory (LSTM) model. The inputs to the deep learning model are factors representing rainfall characteristics, soil moisture, evapotranspiration, and vegetation in the watershed; the output is the total runoff from the watershed, from which the parameter values of the rainfall loss models can be derived. The estimated parameters were applied to the rainfall loss models to compute effective rainfall for actual watersheds, and runoff was predicted using a dynamic-wave-based rainfall-runoff model. The predicted runoff hydrographs showed NSE ≥ 0.5 against observations, confirming that runoff was predicted adequately.

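For reference, the NRCS-CN effective-rainfall relation discussed above is compact enough to show directly; this is the standard SI form, with the CN value and storm depth chosen for illustration only:

```python
def nrcs_cn_runoff(p_mm: float, cn: float) -> float:
    """Direct runoff Q (mm) from event rainfall P (mm) and curve number CN."""
    s = 25400.0 / cn - 254.0  # potential maximum retention S (mm)
    ia = 0.2 * s              # initial abstraction, the standard 0.2*S
    if p_mm <= ia:
        return 0.0            # all rainfall lost before runoff begins
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(nrcs_cn_runoff(80.0, 75.0))  # e.g., an 80 mm storm on a CN = 75 watershed
```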

Vision-Based Activity Recognition Monitoring Based on Human-Object Interaction at Construction Sites

  • Chae, Yeon; Lee, Hoonyong; Ahn, Changbum R.; Jung, Minhyuk; Park, Moonseo
    • International conference on construction engineering and project management / 2022.06a / pp.877-885 / 2022
  • Vision-based activity recognition has been widely attempted at construction sites to estimate productivity and enhance workers' health and safety. Previous studies focused on extracting an individual worker's postural information from sequential image frames for activity recognition. However, workers of different trades perform different tasks with similar postural patterns, which degrades the performance of activity recognition based on postural information alone. To this end, this research exploited the concept of human-object interaction, the interaction between a worker and the surrounding objects, based on the fact that trade workers interact with specific objects (e.g., working tools or construction materials) relevant to their trades. This research developed an approach to understand the context of sequential image frames from four feature types: posture, object, spatial, and temporal. The posture and object features were used to analyze the interaction between the worker and the target object, while the other two were used to detect movement across the entire image frame in the temporal and spatial domains. The approach used convolutional neural networks (CNNs) as feature extractors and activity classifiers, with long short-term memory (LSTM) also used as an activity classifier. The approach achieved an average accuracy of 85.96% in classifying 12 target construction tasks performed by workers of two trades, higher than that of two benchmark models. This result indicates that integrating the concept of human-object interaction offers great benefits for activity recognition when workers of various trades coexist in a scene.

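A rough sketch of the CNN-feature-extractor-plus-LSTM-classifier pattern described above, not the authors' architecture; clip length, frame size, and layer widths are assumptions:

```python
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 16, 112, 112, 3  # assumed clip length and frame size
NUM_CLASSES = 12                    # the 12 target construction tasks

# Per-frame CNN feature extractor.
frame_cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, C)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

# The LSTM aggregates per-frame features over time before classification.
model = models.Sequential([
    layers.TimeDistributed(frame_cnn, input_shape=(SEQ_LEN, H, W, C)),
    layers.LSTM(128),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```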

Development of a Framework for Improvement of Sensor Data Quality from Weather Buoys (해양기상부표의 센서 데이터 품질 향상을 위한 프레임워크 개발)

  • Ju-Yong Lee; Jae-Young Lee; Jiwoo Lee; Sangmun Shin; Jun-hyuk Jang; Jun-Hee Han
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.3 / pp.186-197 / 2023
  • In this study, we focus on improving the quality of data transmitted from a weather buoy that guides ships along a route. The buoy has an Internet-of-Things (IoT) device with sensors that collect meteorological data and the buoy's status, together with a wireless communication device that sends them to the central database at a ground control center and to nearby ships. The time interval of the data collected by the sensors is irregular, and fault data are often detected. This study therefore provides a framework for improving data quality using machine learning models. The normal data pattern is learned by the models, which then detect fault data in the sensor's collected data set and adjust them. To determine fault data, an interquartile range (IQR) rule removes values outside the outlier bounds, and an NGBoost model removes data above the upper bound and below the lower bound. The removed data are interpolated using an NGBoost or long short-term memory (LSTM) algorithm. The performance of the suggested process is evaluated on actual weather buoy data from Korea, improving the quality of the 'AIR_TEMPERATURE' data by using other data from the same buoy. The proposed framework has been validated through computational experiments on real-world data, confirming its suitability for practical applications.
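
A minimal sketch of the IQR screen in the framework above; the readings and the 1.5 multiplier are illustrative, and in the paper the removed values would then be interpolated by NGBoost or LSTM:

```python
import numpy as np

def iqr_bounds(x: np.ndarray, k: float = 1.5):
    """Lower/upper outlier bounds from the interquartile range."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

# Fabricated air-temperature readings with two obvious faults.
temps = np.array([18.2, 18.4, 18.3, 45.0, 18.5, -9.9, 18.6])
lo, hi = iqr_bounds(temps)
fault = (temps < lo) | (temps > hi)
cleaned = np.where(fault, np.nan, temps)  # NaNs later filled by NGBoost/LSTM
print(fault, cleaned)
```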

Developing Cryptocurrency Trading Strategies with Time Series Forecasting Model (시계열 예측 모델을 활용한 암호화폐 투자 전략 개발)

  • Hyun-Sun Kim; Jae Joon Ahn
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.4 / pp.152-159 / 2023
  • This study endeavors to improve investment prospects in cryptocurrency by establishing a rationale for investment decisions. The primary objective is to evaluate the predictability of four prominent cryptocurrencies (Bitcoin, Ethereum, Litecoin, and EOS) and to scrutinize the efficacy of trading strategies built on the prediction models. To identify the most effective prediction model for each cryptocurrency in each year, we employed three methodologies representing traditional statistics and artificial intelligence: AutoRegressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM), and Prophet, applied across diverse periods and time intervals. The results suggested that Prophet trained on the previous 28 days' price history at 15-minute intervals generally yielded the highest performance. The results were validated on a random selection of 100 days (20 target dates per year) spanning January 1st, 2018 to December 31st, 2022. The trading strategies were formulated on the best-performing prediction model, following the simple principle of assigning greater weight to more predictable assets: when the forecasting model indicates an upward trend, the cryptocurrency is acquired, with the investment amount determined by the model's performance. Experimental results consistently demonstrated that the proposed trading strategy yields higher returns than an equal-weight portfolio under a buy-and-hold strategy. The cryptocurrency trading model introduced in this paper carries two significant implications. First, it facilitates the evolution of cryptocurrencies from speculative assets into investment instruments. Second, it advances deep-learning-based investment strategies by providing sound evidence for portfolio allocation, addressing the black-box issue, a notable weakness of deep learning, and offering increased model transparency.
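
A minimal sketch of the best-performing setup reported above, Prophet fitted on the previous 28 days of 15-minute prices; the price path is synthetic, and the import name varies by Prophet release:

```python
import pandas as pd
from prophet import Prophet  # older releases: from fbprophet import Prophet

# 28 days of 15-minute observations; a placeholder price path, not real data.
ds = pd.date_range("2022-01-01", periods=28 * 24 * 4, freq="15min")
df = pd.DataFrame({"ds": ds, "y": range(len(ds))})

m = Prophet(daily_seasonality=True)
m.fit(df)

# Forecast the next 15-minute interval; an upward yhat would trigger a buy.
future = m.make_future_dataframe(periods=1, freq="15min")
print(m.predict(future)[["ds", "yhat"]].tail(1))
```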

A Method for Generating Malware Countermeasure Samples Based on Pixel Attention Mechanism

  • Xiangyu Ma; Yuntao Zhao; Yongxin Feng; Yutao Hu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.2 / pp.456-477 / 2024
  • With the rapid development of information technology, the Internet faces serious security problems. Studies have shown that malware has become a primary means of attacking the Internet, so adversarial samples have become a vital entry point for studying malware. By studying adversarial samples, we can gain insight into the behavior and characteristics of malware, evaluate how existing detectors perform against deceptive samples, and help discover vulnerabilities and improve detection methods. However, existing adversarial sample generation methods still fall short in evasion effectiveness and transferability. For instance, researchers have incorporated perturbation methods such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) into adversarial samples to obfuscate detectors, but these methods are effective only in specific environments and yield limited evasion. To solve these problems, this paper proposes a malware adversarial sample generation method (PixGAN) based on a pixel attention mechanism, aiming to improve the evasion effect and transferability of adversarial samples. The method transforms malware into grey-scale images and introduces a pixel attention mechanism into the Deep Convolutional Generative Adversarial Network (DCGAN) model to weight the critical pixels in the grey-scale map, which improves the modeling ability of the generator and discriminator and thus enhances the evasion effect and transferability of the adversarial samples. The attack success rate (ASR, or escape rate) is used as an index of adversarial sample quality. The experimental results show that the adversarial samples generated by PixGAN achieve escape rates of 97%, 94%, 35%, 39%, and 43% against Random Forest (RF), Support Vector Machine (SVM), Convolutional Neural Network (CNN), CNN with Recurrent Neural Network (CNN_RNN), and CNN with Long Short-Term Memory (CNN_LSTM) detectors, respectively.
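
A minimal sketch of the malware-to-grey-scale-image step the method builds on; the file path and row width are hypothetical, and the attention-weighted DCGAN itself is omitted:

```python
import numpy as np

def malware_to_image(path: str, width: int = 256) -> np.ndarray:
    """Map a binary's raw bytes to a 2-D grey-scale array, one byte per pixel."""
    data = np.fromfile(path, dtype=np.uint8)
    rows = len(data) // width
    return data[: rows * width].reshape(rows, width)

# img = malware_to_image("sample.bin")  # hypothetical sample file
```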

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae; Lee, Bomi; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, Google DeepMind's Go-playing artificial intelligence program, won a landmark victory against Lee Sedol. Many people had thought machines could not beat a human at Go because, unlike in chess, the number of possible moves exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was spotlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. By contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether deep learning techniques, studied so far mainly for recognizing high-dimensional data, can also be used for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning to binary classification, we compared the performance of models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network. Since not every network design alternative can be tested, the experiment used restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the conditions for applying dropout. The F1 score was used to evaluate performance, showing how well the models classify the class of interest rather than overall accuracy. The deep learning techniques were applied as follows. The CNN algorithm recognizes features by reading adjacent values around a specific value, but since business data fields are usually independent, the distance between fields does not matter; we therefore set the CNN filter size to the number of fields so the model learns the characteristics of the whole record at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first to reduce the influence of each field's position. For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. Several findings emerged. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models, which is interesting because CNNs performed well on binary classification problems to which they have rarely been applied, as well as in fields where their effectiveness is proven. Third, the LSTM algorithm appears unsuitable for these binary classification problems because the training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to business binary classification problems.
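
A rough sketch of the experiment's best model as described above, a 1-D CNN whose filter spans all input fields at once, followed by dropout at 0.5; the field count and layer widths are assumptions:

```python
from tensorflow.keras import layers, models

N_FIELDS = 16  # assumed number of input variables in the bank data

model = models.Sequential([
    layers.Reshape((N_FIELDS, 1), input_shape=(N_FIELDS,)),
    layers.Conv1D(32, kernel_size=N_FIELDS, activation="relu"),  # filter spans every field
    layers.Flatten(),
    layers.Dense(32, activation="relu"),     # extra hidden layer for the decision
    layers.Dropout(0.5),                     # neurons dropped with probability 0.5
    layers.Dense(1, activation="sigmoid"),   # binary target: opens an account or not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```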

A Study on Performance Improvement of Recurrent Neural Networks Algorithm using Word Group Expansion Technique (단어그룹 확장 기법을 활용한 순환신경망 알고리즘 성능개선 연구)

  • Park, Dae Seung; Sung, Yeol Woo; Kim, Cheong Ghil
    • Journal of Industrial Convergence / v.20 no.4 / pp.23-30 / 2022
  • Recently, with the development of artificial intelligence (AI) and deep learning, the importance of conversational AI chatbots has been highlighted, and chatbot research is being conducted in various fields. Chatbots are typically built on open-source or commercial platforms for ease of development. These chatbot platforms mainly use RNNs and their derived algorithms, which have the advantages of fast learning, easy monitoring and verification, and good inference performance. This paper studies a method for improving the inference performance of RNNs and their derived algorithms. The proposed method applies a word-group expansion learning technique to the key words of each sentence when training RNNs and derived algorithms. As a result, the three recurrent algorithms, RNN, GRU, and LSTM, achieved inference performance improvements of between 0.37% and 1.25%. These results can accelerate the adoption of AI chatbots in related industries and contribute to the use of various RNN-derived algorithms. Future research should study the effect of various activation functions on the performance of artificial neural network algorithms.
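
A minimal sketch of the three recurrent classifiers compared above, set up so the SimpleRNN/GRU/LSTM swap is the only difference between runs; vocabulary and layer sizes are assumptions:

```python
from tensorflow.keras import layers, models

VOCAB, EMB = 10000, 64  # illustrative vocabulary and embedding sizes

def build(cell):
    """Same classifier with the recurrent cell swapped in."""
    return models.Sequential([
        layers.Embedding(VOCAB, EMB),
        cell(64),                        # SimpleRNN, GRU, or LSTM
        layers.Dense(1, activation="sigmoid"),
    ])

for cell in (layers.SimpleRNN, layers.GRU, layers.LSTM):
    model = build(cell)
    model.compile(optimizer="adam", loss="binary_crossentropy")
```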

Research on the Application of AI Techniques to Advance Dam Operation (댐 운영 고도화를 위한 AI 기법 적용 연구)

  • Choi, Hyun Gu; Jeong, Seok Il; Park, Jin Yong; Kwon, E Jae; Lee, Jun Yeol
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.387-387 / 2022
  • In conventional flood-season dam operation, a dam operation model is run using forecast rainfall and real-time observed rainfall, and decisions and dam operations follow the prediction results. This process, however, requires repeated analyses, and the predictions vary with the operator's experience, so the repetitive work needs to be automated and the predictions generalized so they do not depend on who runs the model. This study therefore applied AI techniques to the dam operation model to implement automatic prediction for various rainfall situations and generalization of model results. The applicability of the deep learning techniques used in 129 domestic and international studies in the water resources field was analyzed; among the various AI applications, none had addressed dam operation prediction, but related areas included long-term reservoir operation prediction and prediction of water levels and discharges upstream and downstream of dams. For time series data in water resources, the Long Short-Term Memory (LSTM) technique was found to be highly applicable. AI was applied to the dam operation model in two areas: rainfall pattern analysis using observed rainfall from existing gauges, and parameter optimization when computing dam inflow from rainfall. For rainfall pattern analysis, the K-means clustering algorithm, which groups similar samples, was combined with Dynamic Time Warping, a similarity measure for time series data. The pattern analysis yields, for each station, the rainfall patterns most frequently observed by month and during typhoon and monsoon periods, organized so the model can use them directly. For the parameter optimization used in computing dam inflow from rainfall, a three-layer multi-layer LSTM and gradient descent were applied. Eight parameters per mid-sized watershed are optimized, and the output of the optimization is the discharge (inflow) with the smallest error against observations. Applying AI to the dam operation model automated the previously repetitive work, and adding a function that displays upstream/downstream constraints under dam operation greatly reduced decision-making time. However, the parameter optimization takes longer than the classical estimation techniques in the existing dam operation model, and the estimated parameters have not been generalized, so further research on this is needed.

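A minimal sketch of the rainfall pattern analysis described above, K-means clustering with Dynamic Time Warping as the similarity measure; tslearn is used here as a stand-in (the study does not name a library), and the event data are fabricated:

```python
import numpy as np
from tslearn.clustering import TimeSeriesKMeans

# 120 fabricated rainfall events, each a 72-step hourly hyetograph.
rain = np.random.rand(120, 72, 1)

km = TimeSeriesKMeans(n_clusters=4, metric="dtw", random_state=0)
labels = km.fit_predict(rain)  # each event assigned to a rainfall pattern
print(np.bincount(labels))     # events per pattern
```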

Comparative analysis of activation functions of artificial neural network for prediction of optimal groundwater level in the middle mountainous area of Pyoseon watershed in Jeju Island (제주도 표선유역 중산간지역의 최적 지하수위 예측을 위한 인공신경망의 활성화함수 비교분석)

  • Shin, Mun-Ju; Kim, Jin-Woo; Moon, Duk-Chul; Lee, Jeong-Han; Kang, Kyung Goo
    • Journal of Korea Water Resources Association / v.54 no.spc1 / pp.1143-1154 / 2021
  • The selection of the activation function has a great influence on the groundwater-level prediction performance of an artificial neural network (ANN) model. In this study, five activation functions were applied to ANN models for two groundwater-level observation wells in the middle mountainous area of the Pyoseon watershed in Jeju Island. The groundwater-level predictions were compared and analyzed, and the optimal activation function was derived. In addition, the results of an LSTM model, a widely used recurrent neural network model, were compared with those of the ANN models with each activation function. As a result, the ELU and Leaky ReLU functions were derived as the optimal activation functions for the observation well with relatively large groundwater-level fluctuations and the well with relatively small fluctuations, respectively. The sigmoid function, on the other hand, had the lowest predictive performance among the five activation functions for the training period and produced inappropriate results when predicting peak and minimum groundwater levels. The ANN-ELU and ANN-Leaky-ReLU models showed prediction performance comparable to that of the LSTM model and thus have sufficient potential for application. The methods and results of this study can be useful in other studies.
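
A minimal sketch of the activation comparison described above; the network architecture and input size are assumptions, and only ELU, Leaky ReLU, and sigmoid among the five candidates are confirmed by the abstract:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_ann(activation):
    """Same feed-forward ANN, rebuilt with a different activation."""
    return models.Sequential([
        layers.Dense(32, activation=activation, input_shape=(10,)),  # 10 assumed inputs
        layers.Dense(32, activation=activation),
        layers.Dense(1),  # predicted groundwater level
    ])

# ELU, Leaky ReLU, and sigmoid are named in the abstract; the other two
# candidates here are guesses.
for act in ("elu", tf.nn.leaky_relu, "relu", "tanh", "sigmoid"):
    model = build_ann(act)
    model.compile(optimizer="adam", loss="mse")
```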