• Title/Summary/Keyword: LSTM algorithm

Search Result 201

Flood prediction in the Namgang Dam basin using a long short-term memory (LSTM) algorithm

  • Lee, Seungsoo;An, Hyunuk;Hur, Youngteck;Kim, Yeonsu;Byun, Jisun
    • Korean Journal of Agricultural Science / v.47 no.3 / pp.471-483 / 2020
  • Flood prediction is an important issue for preventing damage from flood inundation, which is becoming more frequent as climate change increases high-intensity rainfall. In recent years, machine learning algorithms have received attention in many scientific fields, including hydrology, water resources, and natural hazards. This study investigated the performance of a machine learning algorithm for predicting the water elevation of a river. The aim was to develop a new method for securing a sufficient lead time for flood defense by predicting river water elevation with the long short-term memory (LSTM) technique. The water elevation data at the Oisong gauging station were selected to evaluate its applicability. The test data were water elevation measurements collected by K-water from 15 February 2013 to 26 August 2018 (approximately 5 years and 6 months) at 1-hour intervals. To investigate predictability in terms of data characteristics and prediction lead time, the data were divided into a same-interval set (group-A) and a time-averaged set (group-B), and predictability was evaluated across a total of 36 cases. Based on the results, group-A showed more stable water elevation prediction skill than group-B for lead times of 1 to 6 h. Thus, the LSTM technique, using only measured water elevation data, can be used to secure an appropriate lead time for flood defense in a river.
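The paper's two input configurations (group-A: consecutive hourly readings; group-B: time-averaged readings) can be sketched as a sliding-window dataset builder. The window length, lead time, and averaging factor below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def make_windows(series, n_in, lead):
    """Build (input window, target) pairs: predict the level `lead` steps ahead."""
    X, y = [], []
    for t in range(len(series) - n_in - lead + 1):
        X.append(series[t:t + n_in])           # n_in consecutive hourly readings
        y.append(series[t + n_in + lead - 1])  # water level `lead` hours ahead
    return np.array(X), np.array(y)

def time_average(series, k):
    """Group-B style input: average every k consecutive readings."""
    n = len(series) // k
    return series[:n * k].reshape(n, k).mean(axis=1)

levels = np.sin(np.linspace(0, 10, 200))      # stand-in for hourly water-level data
X, y = make_windows(levels, n_in=24, lead=6)  # 24 h of input, 6 h lead time
print(X.shape, y.shape)
```

Pairs built this way can be fed directly to an LSTM after adding a feature axis.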

Development of leakage detection model in water distribution networks applying LSTM-based deep learning algorithm (LSTM 기반 딥러닝 알고리즘을 적용한 상수도시스템 누수인지 모델 개발)

  • Lee, Chan Wook;Yoo, Do Guen
    • Journal of Korea Water Resources Association / v.54 no.8 / pp.599-606 / 2021
  • Water distribution networks, one of the social infrastructures buried underground, transport and supply purified water to customers. In recent years, as measurement capabilities have improved, a number of studies on leak recognition and detection applying deep learning techniques to flow rate data have been conducted. In this study, a recognition model for leak occurrence was developed using an LSTM-based deep learning algorithm that had not previously been applied to the waterworks field. The model was verified on simulated data, and all cases of leaks of 2% or more were recognized. Based on the proposed model, it is believed that more precise results can be derived for flow data prediction in the future.
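The detection idea, flagging flow readings that deviate from a model's expectation by more than the 2% level the abstract reports, can be sketched as follows. The predicted series here is a constant stand-in for the LSTM output, an assumption for illustration:

```python
import numpy as np

def flag_leaks(observed, predicted, threshold=0.02):
    """Flag time steps where observed flow deviates from the model's
    prediction by more than `threshold` (2%, per the paper's finding)."""
    rel_dev = np.abs(observed - predicted) / np.maximum(predicted, 1e-9)
    return rel_dev > threshold

predicted = np.full(6, 100.0)  # stand-in for LSTM-predicted flow rate
observed = np.array([100.5, 99.0, 103.0, 100.0, 97.0, 100.4])
print(flag_leaks(observed, predicted))
```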

Diagnosis of Sarcopenia in the Elderly and Development of Deep Learning Algorithm Exploiting Smart Devices (스마트 디바이스를 활용한 노약자 근감소증 진단과 딥러닝 알고리즘)

  • Yun, Younguk;Sohn, Jung-woo
    • Journal of the Society of Disaster Information / v.18 no.3 / pp.433-443 / 2022
  • Purpose: In this paper, we propose a study of deep learning algorithms that estimate and predict sarcopenia by exploiting the high penetration rate of smart devices. Method: To apply deep learning techniques, experimental data were collected using the inertial sensor embedded in a smart device, for which we implemented a data-collection application. The data were labeled as normal and abnormal gait and five states including running, falling, and squat posture. Result: Accuracy was analyzed by comparing LSTM, CNN, and RNN models; a binary classification accuracy of 99.87% and a multi-class classification accuracy of 92.30% were obtained using a CNN-LSTM fusion algorithm. Conclusion: The study used a smart sensing device, focusing on the fact that gait abnormalities occur in people with sarcopenia. It is expected that this study can contribute to addressing the safety issues caused by sarcopenia.
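In a CNN-LSTM fusion model of this kind, a 1-D convolution stage typically extracts local features from each inertial-sensor window before an LSTM consumes the resulting feature sequence. A minimal NumPy sketch of that convolution stage (filter count and width are assumptions, not the paper's architecture):

```python
import numpy as np

def conv1d_valid(x, kernels):
    """1-D 'valid' convolution stage of a CNN-LSTM model: each kernel slides
    over the sensor window, producing one feature channel for the LSTM."""
    n, k = len(x), kernels.shape[1]
    out = np.empty((kernels.shape[0], n - k + 1))
    for c, w in enumerate(kernels):
        for t in range(n - k + 1):
            out[c, t] = np.dot(x[t:t + k], w)
    return out

window = np.random.default_rng(0).standard_normal(50)       # one accelerometer axis
kernels = np.random.default_rng(1).standard_normal((4, 5))  # 4 filters, width 5
features = conv1d_valid(window, kernels)
print(features.shape)  # 4 channels over 46 time steps, fed to the LSTM next
```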

The Study of Failure Mode Data Development and Feature Parameter's Reliability Verification Using LSTM Algorithm for 2-Stroke Low Speed Engine for Ship's Propulsion (선박 추진용 2행정 저속엔진의 고장모드 데이터 개발 및 LSTM 알고리즘을 활용한 특성인자 신뢰성 검증연구)

  • Jae-Cheul Park;Hyuk-Chan Kwon;Chul-Hwan Kim;Hwa-Sup Jang
    • Journal of the Society of Naval Architects of Korea / v.60 no.2 / pp.95-109 / 2023
  • In the 4th industrial revolution, changes in the technological paradigm have had a direct impact on ship maintenance systems. The 2-stroke low-speed engine system integrates the core equipment required for propulsive power. Condition Based Management (CBM) is a technology that replaces existing calendar-based or running-time-based maintenance with predictive maintenance by monitoring the condition of machinery and diagnosing/prognosing failures. In this study, we established a framework for CBM technology development on our own, carried out engineering-based failure analysis, data development and management, and data feature analysis and pre-processing, and verified the reliability of the failure mode DB using LSTM algorithms. We developed various simulated failure mode scenarios for a 2-stroke low-speed engine and produced data on onshore test beds. For the analysis and pre-processing of the normal and abnormal status data acquired through the failure mode simulation experiments, various Exploratory Data Analysis (EDA) techniques were used to extract not only performance and efficiency data of the 2-stroke low-speed engine but also key feature data via multivariate statistical analysis. In addition, by developing an LSTM classification algorithm, we verified the reliability of the various failure mode data with time-series characteristics.
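One common multivariate-statistical route to the "key feature data" the abstract mentions is principal component analysis over standardized sensor channels. The sketch below uses PCA via SVD as an illustrative stand-in; the paper does not specify which multivariate method was used:

```python
import numpy as np

def key_features(X, n_components=2):
    """PCA sketch via SVD: standardize sensor channels, then project
    onto the top principal components as reduced key features."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))   # 100 samples x 6 engine sensor channels
F = key_features(X, n_components=2)
print(F.shape)
```

The reduced feature sequence would then be windowed and passed to an LSTM classifier.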

Comparison of Stock Price Prediction Using Time Series and Non-Time Series Data

  • Min-Seob Song;Junghye Min
    • Journal of the Korea Society of Computer and Information / v.28 no.8 / pp.67-75 / 2023
  • Stock price prediction is an important topic extensively discussed in the financial market, but it is considered challenging due to the numerous factors that can influence it. In this research, performance was compared and analyzed by applying time series prediction models (LSTM, GRU) and non-time series prediction models (RF, SVR, KNN, LGBM), which do not take into account the temporal dependence of the data, to stock price prediction. In addition, various data such as stock price data, technical indicators, financial statement indicators, buy/sell indicators, short selling, and foreign indicators were combined to find optimal predictors and to analyze the major factors affecting stock price prediction by industry. Through hyperparameter optimization, the prediction performance of each algorithm was improved and the factors affecting performance were analyzed. As a result of feature selection and hyperparameter optimization, the forecast accuracy of the time series prediction algorithms GRU and LSTM+GRU was the highest.
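When comparing time-series models against non-time-series models, a key methodological detail is that the evaluation split must be chronological for the former: train on the past, test on the future, with no shuffling. A minimal sketch (the 80/20 fraction is an illustrative assumption):

```python
import numpy as np

def temporal_split(X, y, train_frac=0.8):
    """Chronological split for time-series evaluation: the test set is
    strictly later than the training set (no shuffling)."""
    cut = int(len(X) * train_frac)
    return X[:cut], X[cut:], y[:cut], y[cut:]

X = np.arange(100).reshape(-1, 1)   # stand-in feature matrix, time-ordered
y = np.arange(100, dtype=float)     # stand-in target (e.g. next-day price)
X_tr, X_te, y_tr, y_te = temporal_split(X, y)
print(len(X_tr), len(X_te))
```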

Prediction of Dissolved Oxygen in Jindong Bay Using Time Series Analysis (시계열 분석을 이용한 진동만의 용존산소량 예측)

  • Han, Myeong-Soo;Park, Sung-Eun;Choi, Youngjin;Kim, Youngmin;Hwang, Jae-Dong
    • Journal of the Korean Society of Marine Environment & Safety / v.26 no.4 / pp.382-391 / 2020
  • In this study, we used artificial intelligence algorithms to predict dissolved oxygen in Jindong Bay. The Bidirectional Recurrent Imputation for Time Series (BRITS) deep learning algorithm was used to fill missing values in the observational data, and the Auto-Regressive Integrated Moving Average (ARIMA) method, a widely used time series analysis technique, and the Long Short-Term Memory (LSTM) deep learning method were used to predict dissolved oxygen; the accuracies of ARIMA and LSTM were then compared. The missing values were imputed with high accuracy by BRITS in the surface layer, but the accuracy was low in the lower layers, and BRITS was unstable in the middle layer due to the experimental conditions. In the middle and bottom layers, the LSTM model showed higher accuracy than the ARIMA model, whereas the ARIMA model showed superior performance in the surface layer.
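BRITS learns to impute a gap using context from both directions of the series. As a far simpler stand-in for that bidirectional idea (not the BRITS algorithm itself), linear interpolation also fills each gap from neighbors on both sides:

```python
import numpy as np

def impute_linear(series):
    """Fill NaN gaps using observed neighbors on both sides (linear
    interpolation), a simple stand-in for the bidirectional idea in BRITS."""
    s = series.copy()
    idx = np.arange(len(s))
    mask = np.isnan(s)
    s[mask] = np.interp(idx[mask], idx[~mask], s[~mask])
    return s

do = np.array([8.0, np.nan, 7.0, np.nan, np.nan, 4.0])  # dissolved oxygen, mg/L
print(impute_linear(do))
```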

LSTM RNN-based Korean Speech Recognition System Using CTC (CTC를 이용한 LSTM RNN 기반 한국어 음성인식 시스템)

  • Lee, Donghyun;Lim, Minkyu;Park, Hosung;Kim, Ji-Hwan
    • Journal of Digital Contents Society / v.18 no.1 / pp.93-99 / 2017
  • A hybrid approach using a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) has shown great improvement in speech recognition accuracy. Training an acoustic model with the hybrid approach requires a forced alignment of the HMM state sequence from a Gaussian Mixture Model (GMM)-Hidden Markov Model (HMM), but training the GMM-HMM demands high computation time. This paper proposes an end-to-end approach for LSTM RNN-based Korean speech recognition to improve learning speed, implemented with a Connectionist Temporal Classification (CTC) algorithm. The proposed method showed almost equal recognition performance while learning 1.27 times faster.
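The core of why CTC removes the need for frame-level forced alignment is its collapse rule: the network emits a label (or blank) per frame, and decoding merges repeats and drops blanks, so any frame alignment that collapses to the target transcription is valid. The rule itself is small:

```python
def ctc_collapse(path, blank=0):
    """CTC decoding rule: merge consecutive repeated labels, then drop blanks.
    This is what lets CTC train without frame-level forced alignment."""
    out, prev = [], None
    for p in path:
        if p != prev and p != blank:
            out.append(p)
        prev = p
    return out

# frame-level path (0 = blank): labels 1 1 - 2 2 - 2 collapse to 1 2 2,
# the blank between the last two 2s keeping them as distinct labels
print(ctc_collapse([1, 1, 0, 2, 2, 0, 2]))
```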

Comparison of Learning Techniques of LSTM Network for State of Charge Estimation in Lithium-Ion Batteries (리튬 이온 배터리의 충전 상태 추정을 위한 LSTM 네트워크 학습 방법 비교)

  • Hong, Seon-Ri;Kang, Moses;Kim, Gun-Woo;Jeong, Hak-Geun;Beak, Jong-Bok;Kim, Jong-Hoon
    • Journal of IKEEE / v.23 no.4 / pp.1328-1336 / 2019
  • To maintain safe and optimal battery performance, accurate estimation of the state of charge (SOC) is critical. In this paper, a long short-term memory (LSTM) network, an artificial intelligence algorithm, is applied to address the problems of the conventional coulomb-counting method. Different discharge cycles were concatenated to form the dataset for training and verification, and preprocessing was performed to improve the quality of the input data. In addition, we compared learning ability and SOC estimation performance according to the structure of the LSTM model and the hyperparameter setup. The trained model was verified with a UDDS profile and achieved an estimation accuracy of 0.82% RMSE and 2.54% maximum error.
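The conventional coulomb-counting baseline the paper improves on simply integrates measured current relative to capacity, so sensor bias accumulates as drift over time. A minimal sketch (battery capacity and current profile are illustrative assumptions):

```python
import numpy as np

def soc_coulomb(soc0, current_a, dt_s, capacity_ah):
    """Coulomb-counting SOC: integrate discharge current over time relative
    to capacity. Any current-sensor bias accumulates as unbounded drift."""
    ah = np.cumsum(current_a) * dt_s / 3600.0  # amp-seconds -> amp-hours
    return soc0 - ah / capacity_ah

current = np.full(3600, 2.0)  # 2 A discharge for one hour, 1 s steps
soc = soc_coulomb(1.0, current, dt_s=1.0, capacity_ah=4.0)
print(round(soc[-1], 3))  # 2 Ah drawn from a 4 Ah cell -> SOC 0.5
```

An LSTM estimator instead maps measured voltage/current/temperature sequences directly to SOC, avoiding this drift.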

An Encrypted Speech Retrieval Scheme Based on Long Short-Term Memory Neural Network and Deep Hashing

  • Zhang, Qiu-yu;Li, Yu-zhou;Hu, Ying-jie
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.6 / pp.2612-2633 / 2020
  • Due to the explosive growth of multimedia speech data, how to protect the privacy of speech data and how to retrieve it efficiently have become hot topics for researchers in recent years. In this paper, we propose an encrypted speech retrieval scheme based on a long short-term memory (LSTM) neural network and deep hashing. The scheme not only achieves efficient retrieval of massive speech in a cloud environment but also effectively avoids the risk of sensitive information leakage. Firstly, a novel speech encryption algorithm based on a 4D quadratic autonomous hyperchaotic system is proposed to ensure the privacy and security of speech data in the cloud. Secondly, an integrated LSTM network model and deep hashing algorithm are used to extract high-level features of speech data, addressing the high dimensionality and temporality of speech data and increasing the retrieval efficiency and accuracy of the proposed scheme. Finally, the normalized Hamming distance algorithm is used for matching. Compared with existing algorithms, the proposed scheme has good discrimination and robustness, with high recall, precision, and retrieval efficiency under various content-preserving operations. Meanwhile, the proposed speech encryption algorithm has a large key space and can effectively resist exhaustive attacks.
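The final matching step, normalized Hamming distance over binary hash codes, can be sketched directly; the hash length and distance threshold below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def normalized_hamming(h1, h2):
    """Fraction of differing bits between two binary hash codes."""
    return float(np.mean(h1 != h2))

def retrieve(query, index, max_dist=0.25):
    """Return indices of stored speech hashes within the distance threshold."""
    return [i for i, h in enumerate(index)
            if normalized_hamming(query, h) <= max_dist]

q = np.array([1, 0, 1, 1, 0, 0, 1, 0])           # query hash from the LSTM
index = [np.array([1, 0, 1, 1, 0, 0, 1, 0]),     # exact match (distance 0)
         np.array([1, 0, 1, 0, 0, 0, 1, 0]),     # 1 of 8 bits differs (0.125)
         np.array([0, 1, 0, 0, 1, 1, 0, 1])]     # all bits differ (1.0)
print(retrieve(q, index))
```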

A Comparative Study of Machine Learning Algorithms Based on Tensorflow for Data Prediction (데이터 예측을 위한 텐서플로우 기반 기계학습 알고리즘 비교 연구)

  • Abbas, Qalab E.;Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems / v.10 no.3 / pp.71-80 / 2021
  • Selecting an appropriate neural network algorithm is an important step for accurate data prediction in machine learning. Many algorithms based on basic artificial neural networks have been devised to efficiently predict future data, including deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) networks. Developers face difficulty choosing among these networks because sufficient information on their performance is unavailable. To alleviate this difficulty, we evaluated each algorithm by comparing errors and processing times. Each neural network model was trained on a tax dataset, and the trained models were used for data prediction to compare accuracies. Furthermore, the effects of activation functions and various optimizers on model performance were analyzed. The experimental results show that the GRU and LSTM algorithms yield the lowest prediction error, with an average RMSE of 0.12 and average R2 scores of 0.78 and 0.75, respectively, while the basic DNN model achieves the lowest processing time but the highest average RMSE of 0.163. Furthermore, the Adam optimizer yields the best performance (with DNN, GRU, and LSTM) in terms of error and the worst in terms of processing time. The findings of this study are expected to be useful for scientists and developers.
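The two comparison metrics reported across these studies, RMSE and the R2 score, are straightforward to compute; the toy target/prediction arrays below are illustrative, not data from any of the papers:

```python
import numpy as np

def rmse(y, yhat):
    """Root mean squared error: penalizes large errors quadratically."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def r2(y, yhat):
    """R2 score: 1 minus residual variance over total variance
    (1.0 is a perfect fit; 0.0 is no better than predicting the mean)."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

y    = np.array([1.0, 2.0, 3.0, 4.0])   # toy targets
yhat = np.array([1.1, 1.9, 3.2, 3.8])   # toy model predictions
print(rmse(y, yhat), r2(y, yhat))
```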