• Title/Summary/Keyword: LSTM Algorithm

Search Results: 192

Servo control strategy for uni-axial shake tables using long short-term memory networks

  • Pei-Ching Chen;Kui-Xing Lai
    • Smart Structures and Systems / v.32 no.6 / pp.359-369 / 2023
  • Servo-motor driven uniaxial shake tables have been widely used for education and research purposes in earthquake engineering. These shake tables are mostly displacement-controlled by a digital proportional-integral-derivative (PID) controller; however, accurate reproduction of acceleration time histories is not guaranteed. In this study, a control strategy is proposed and verified for uniaxial shake tables driven by a servo-motor. This strategy incorporates a deep-learning algorithm, the Long Short-Term Memory (LSTM) network, into a displacement PID feedback controller. The LSTM controller is trained on a large amount of experimental data from a self-made servo-motor driven uniaxial shake table. After training is completed, the LSTM controller directly generates the command voltage for the servo motor that drives the shake table. Meanwhile, a displacement PID controller is tuned and implemented alongside the LSTM controller to prevent the shake table from permanent drift. The control strategy is named the LSTM-PID control scheme. Experimental results demonstrate that the proposed LSTM-PID scheme improves the acceleration tracking performance of the uniaxial shake table in both the bare condition and the loaded condition with a slender specimen.
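
To make the LSTM-PID scheme concrete, here is a minimal Python sketch of how an LSTM feedforward term and a displacement PID correction might be combined; the window length, network size, gains, and time step are assumptions, not the authors' implementation.

```python
# Sketch only: an offline-trained LSTM maps a window of reference
# accelerations to a feedforward command voltage, while a displacement PID
# adds a correction that prevents permanent drift. All sizes/gains assumed.
import numpy as np
import tensorflow as tf

WINDOW = 50                      # assumed length of the reference window
lstm = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),    # feedforward command voltage
])

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.5, kd=0.01, dt=0.001)   # assumed gains and time step

def command_voltage(accel_ref_window, disp_ref, disp_meas):
    """Combine the LSTM feedforward term with the PID drift correction."""
    x = np.asarray(accel_ref_window, dtype=np.float32).reshape(1, WINDOW, 1)
    feedforward = float(lstm.predict(x, verbose=0))
    return feedforward + pid.step(disp_ref - disp_meas)
```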

Prediction of the Stress-Strain Curve of Materials under Uniaxial Compression by Using LSTM Recurrent Neural Network (LSTM 순환 신경망을 이용한 재료의 단축하중 하에서의 응력-변형률 곡선 예측 연구)

  • Byun, Hoon;Song, Jae-Joon
    • Tunnel and Underground Space / v.28 no.3 / pp.277-291 / 2018
  • The LSTM (Long Short-Term Memory) algorithm, a kind of recurrent neural network, was used to establish a model that predicts the stress-strain curve of a material under uniaxial compression. The model was built from stress-strain data obtained in uniaxial compression tests of silica-gypsum specimens. After training, the model can predict the behavior of the material up to the failure state from only an early, low-stress portion of the stress-strain curve. Because the LSTM network predicts each value from the previous state of the data and proceeds forward step by step, higher error was found when predicting higher stress states due to the accumulation of error. Nevertheless, the model generally predicts the stress-strain curve with high accuracy. The accuracy of both the LSTM and tangential prediction models increased with the length of the input data, while the difference in performance between them decreased as the amount of input data increased. The LSTM model showed relatively superior performance to the tangential prediction when only a few input data points were given, which strengthens the case for applying the model.
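
The step-by-step forward prediction the abstract describes is an autoregressive roll-out. A minimal sketch, assuming a Keras-style model already trained to map a window of stress values to the next one:

```python
# Sketch of the roll-out: each prediction is fed back as input, so errors
# accumulate toward the high-stress end of the curve, as the abstract notes.
import numpy as np

def rollout(model, early_curve, n_steps, window):
    """Autoregressively extend an early stress record toward failure."""
    seq = list(early_curve)
    for _ in range(n_steps):
        x = np.asarray(seq[-window:], dtype=np.float32).reshape(1, window, 1)
        seq.append(float(model.predict(x, verbose=0)))  # feed prediction back
    return np.asarray(seq)
```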

Traffic-based reinforcement learning with neural network algorithm in fog computing environment

  • Jung, Tae-Won;Lee, Jong-Yong;Jung, Kye-Dong
    • International Journal of Internet, Broadcasting and Communication / v.12 no.1 / pp.144-150 / 2020
  • Reinforcement learning is a technology that can produce successful and creative solutions in many areas. Here, reinforcement learning is used to deploy containers from cloud servers to fog servers, learning to maximize the reward obtained from reduced traffic. The goal is to predict traffic in the network and to optimize a traffic-based fog computing network environment for the cloud, fog, and clients. The reinforcement learning system collects network traffic data from the fog server and IoT devices. The reinforcement learning neural network, which takes the collected traffic data as input, can be built from Long Short-Term Memory (LSTM) layers in network environments that support fog computing, in order to learn the time-series data and predict optimized traffic. We describe the input and output values of the traffic-based reinforcement-learning LSTM neural network, the node composition, the activation and error functions of the hidden layer, the method used to counter overfitting, and the optimization algorithm.
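
As a rough illustration of the forecasting component only (the shapes, layer sizes, and dropout rate are assumptions), a traffic-prediction LSTM of the kind described might look like this:

```python
# Hypothetical sketch: an LSTM maps a day of observed traffic volumes to
# next-step traffic, whose output a container-placement policy could use.
import tensorflow as tf

STEPS, FEATURES = 24, 3   # assumed: 24 hourly samples of (cloud, fog, IoT) traffic
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(STEPS, FEATURES)),
    tf.keras.layers.LSTM(32),                 # hidden layer, tanh activation
    tf.keras.layers.Dropout(0.2),             # one common way to counter overfitting
    tf.keras.layers.Dense(FEATURES),          # predicted traffic at the next step
])
model.compile(optimizer="adam", loss="mse")   # MSE as the error function
```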

PREDICTING KOREAN FRUIT PRICES USING LSTM ALGORITHM

  • PARK, TAE-SU;KEUM, JONGHAE;KIM, HOISUB;KIM, YOUNG ROCK;MIN, YOUNGHO
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.26 no.1 / pp.23-48 / 2022
  • In this paper, we provide predictive models for the market price of fruits and analyze the performance of each model. The data used to create the models are fruit price data, weather data, and Korea Composite Stock Price Index (KOSPI) data, collected through Open-APIs over the 10-year period from 2011 to 2020. Six fruit price predictive models are constructed using the LSTM algorithm, a special form of the deep-learning RNN algorithm, and performance is measured using the root mean square error. For each model, the data from 2011 to 2018 are used for training to predict the fruit price in 2019, and the data from 2011 to 2019 are used for training to predict the fruit price in 2020. By comparing the 2019 predictive models with the 2020 models, the most efficient model is identified and the best model for providing the service is selected. The models can also be applied in other countries and regions.
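
A minimal sketch of the year-based split and RMSE evaluation described above; the file name, column names, window length, and model size are illustrative assumptions:

```python
# Sketch: train on 2011-2018 prices, predict 2019, score with RMSE.
import numpy as np
import pandas as pd
import tensorflow as tf

def make_windows(series, window=30):
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X[..., None].astype(np.float32), series[window:].astype(np.float32)

df = pd.read_csv("fruit_prices.csv", parse_dates=["date"])   # hypothetical file
train = df[df["date"].dt.year <= 2018]["price"].to_numpy()
test = df[df["date"].dt.year == 2019]["price"].to_numpy()

X_tr, y_tr = make_windows(train)
X_te, y_te = make_windows(test)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(30, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_tr, y_tr, epochs=50, verbose=0)

rmse = float(np.sqrt(np.mean((model.predict(X_te, verbose=0).ravel() - y_te) ** 2)))
```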

Comparison of High Concentration Prediction Performance of Particulate Matter by Deep Learning Algorithm (딥러닝 알고리즘별 미세먼지 고농도 예측 성능 비교)

  • Lee, Jong-sung;Jung, Yong-jin;Oh, Chang-heon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.348-350 / 2021
  • When predicting the concentration of fine dust using deep learning, the characteristics of high concentrations of 81 ㎍/m³ or more are not well reflected in the prediction model. In this paper, predictive performance was compared to confirm how well each deep-learning algorithm reflects the characteristics of fine dust in the high-concentration region. Overall, the performance evaluation showed similar levels of results, but the RNN model achieved higher accuracy than the other models at concentrations rated "very bad" on the AQI scale. This confirmed that the RNN algorithm reflects the characteristics of high concentrations better than the DNN and LSTM algorithms.
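
One way to perform the band-wise comparison implied here is to evaluate only on samples whose true concentration exceeds the high-concentration threshold; the helper below is illustrative, using the 81 ㎍/m³ figure from the abstract:

```python
# Sketch: restrict evaluation to truly high-concentration samples and
# measure how often the model also predicts a high concentration.
import numpy as np

HIGH = 81.0   # ㎍/m³, high-concentration threshold from the abstract

def high_band_hit_rate(y_true, y_pred, thresh=HIGH):
    """Fraction of truly high-concentration samples predicted as high."""
    mask = y_true >= thresh
    return float(np.mean(y_pred[mask] >= thresh))
```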

A study on real-time internet comment system through sentiment analysis and deep learning application

  • Hae-Jong Joo;Ho-Bin Song
    • Journal of Platform Technology / v.11 no.2 / pp.3-14 / 2023
  • This paper proposes a big-data sentiment analysis method and a deep-learning implementation that provide a webtoon comment-analysis web page, so that webtoon writers can conveniently check and respond to comments, supporting the development of the cartoon industry in the video animation field. To overcome the difficulty of automatic analysis caused by the nature of Internet comments and to provide a variety of sentiment-analysis information, the LSTM (Long Short-Term Memory) algorithm, a ranking algorithm, and the word2vec algorithm are applied in parallel, and actual popular works are used to verify validity. The proposed analysis method is easy to extend to other domestic and overseas platforms, and it is expected to be usable in various video animation content fields beyond webtoons.
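
A hypothetical sketch of the deep-learning leg of this parallel pipeline, with word2vec vectors initializing the embedding layer of an LSTM sentiment classifier (the toy corpus, sizes, and labels are placeholders, not the paper's data):

```python
# Sketch: word2vec embeddings learned from tokenized comments feed an
# LSTM that scores each comment as positive/negative.
import numpy as np
import tensorflow as tf
from gensim.models import Word2Vec

comments = [["best", "episode", "yet"], ["art", "looks", "rushed"]]  # toy corpus
w2v = Word2Vec(sentences=comments, vector_size=100, window=5, min_count=1)

vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}  # 0 = padding
emb = np.zeros((len(vocab) + 1, 100), dtype=np.float32)
for w, i in vocab.items():
    emb[i] = w2v.wv[w]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),        # padded token-id sequences
    tf.keras.layers.Embedding(
        len(vocab) + 1, 100,
        embeddings_initializer=tf.keras.initializers.Constant(emb)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # sentiment score
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```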

Long Short-Term Memory Network for INS Positioning During GNSS Outages: A Preliminary Study on Simple Trajectories

  • Yujin Shin;Cheolmin Lee;Doyeon Jung;Euiho Kim
    • Journal of Positioning, Navigation, and Timing / v.13 no.2 / pp.137-147 / 2024
  • This paper presents a novel Long Short-Term Memory (LSTM) network architecture for the integration of an Inertial Measurement Unit (IMU) and Global Navigation Satellite Systems (GNSS). The proposed algorithm consists of two independent LSTM networks, which are trained to predict attitudes and velocities from sequences of IMU measurements and mechanization solutions. Three GNSS receivers are used to provide Real-Time Kinematic (RTK) GNSS attitude and position information for a vehicle, and this information is used as the target output while training the networks. The performance of the proposed method was evaluated with both experimental and simulation data using a low-cost IMU and three RTK-GNSS receivers. The test results showed that the proposed LSTM network could improve positioning accuracy by more than 90% compared to the position solutions obtained with a conventional Kalman-filter-based IMU/GNSS integration for GNSS outages longer than 30 seconds.
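
The two-network layout could be sketched as follows; the window length, feature count, and layer width are assumptions, not the paper's architecture details:

```python
# Sketch: independent LSTMs regress attitude and velocity from windows of
# IMU measurements and mechanization output; RTK-GNSS provides the targets.
import tensorflow as tf

def make_net(out_dim, steps=100, feats=12):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(steps, feats)),  # IMU + mechanization window
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(out_dim),
    ])

attitude_net = make_net(3)   # roll, pitch, yaw targets from RTK-GNSS attitude
velocity_net = make_net(3)   # north/east/down velocity targets
for net in (attitude_net, velocity_net):
    net.compile(optimizer="adam", loss="mse")
```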

A Comparative study on smoothing techniques for performance improvement of LSTM learning model

  • Park, Tae-Jin;Sim, Gab-Sig
    • Journal of the Korea Society of Computer and Information / v.28 no.1 / pp.17-26 / 2023
  • In this paper, several smoothing techniques are compared and applied to broaden the applicability of an LSTM-based learning model and to improve its effectiveness. The applied smoothing techniques are the Savitzky-Golay filter, exponential smoothing, and the weighted moving average. The LSTM model with the Savitzky-Golay filter applied in the preprocessing step showed clearly the best prediction performance among the models applied to Bitcoin data. To confirm the predictive performance, the training loss and validation loss of the Savitzky-Golay LSTM model were compared with those of the plain LSTM used to remove complex factors from Bitcoin price prediction, and each experiment was averaged over 20 runs to increase reliability; the resulting values were (3.0556, 0.00005) and (1.4659, 0.00002), respectively. Since cryptocurrencies such as Bitcoin are more volatile than stocks, removing noise by applying the Savitzky-Golay filter in the data preprocessing step yielded the most significant improvement in the Bitcoin prediction rate through LSTM neural network learning.
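
A minimal sketch of the Savitzky-Golay preprocessing step, assuming a plain file of closing prices; the filter window and polynomial order are illustrative, not the paper's tuned values:

```python
# Sketch: smooth the raw price series with a Savitzky-Golay filter, then
# window the smoothed series (not the raw one) for LSTM training.
import numpy as np
from scipy.signal import savgol_filter

prices = np.loadtxt("btc_close.txt")                     # hypothetical series
smoothed = savgol_filter(prices, window_length=21, polyorder=3)

def make_windows(series, window=60):
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X[..., None].astype(np.float32), series[window:].astype(np.float32)

X_train, y_train = make_windows(smoothed)
```

Exponential smoothing or a weighted moving average could be swapped in at the same point in the pipeline for the comparison the paper describes.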

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep-learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect the dependency between objects that enter the model sequentially (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large and model complexity increases. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that makes up Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was done with Old Testament texts using the deep-learning package Keras on top of Theano. After preprocessing, the dataset included 74 unique characters, comprising vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as output. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not significantly improved and even worsened under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence-generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing in the fields of language processing and speech recognition, which are the basis of artificial intelligence systems.
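
The 3-layer variant can be sketched in modern Keras as follows (the paper used Keras on Theano); the 74-symbol vocabulary and 20-character window come from the abstract, while the layer width is an assumption. Test perplexity is then recovered as exp of the cross-entropy loss:

```python
# Sketch: stacked LSTM language model over one-hot phoneme windows,
# predicting the 21st symbol from the preceding 20.
import tensorflow as tf

VOCAB, WINDOW = 74, 20
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, VOCAB)),        # one-hot phoneme window
    tf.keras.layers.LSTM(256, return_sequences=True),
    tf.keras.layers.LSTM(256, return_sequences=True),
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),  # next-phoneme distribution
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```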

Comparison of Deep Learning Models Using Protein Sequence Data (단백질 기능 예측 모델의 주요 딥러닝 모델 비교 실험)

  • Lee, Jeung Min;Lee, Hyun
    • KIPS Transactions on Software and Data Engineering / v.11 no.6 / pp.245-254 / 2022
  • Proteins are the basic units of all life activities, and understanding them is essential for studying life phenomena. Since the emergence of machine-learning methodologies using artificial neural networks, many researchers have tried to predict the function of proteins using only protein sequences. Many combinations of deep-learning models have been reported in the literature, but the methods differ, there is no standard methodology, and each is tailored to different data, so there has been no direct comparative analysis of which algorithms are more suitable for handling protein data. In this paper, the single-model performance of each algorithm is compared and evaluated in terms of accuracy and speed by applying the same data to CNN, LSTM, and GRU models, the representative algorithms most frequently used in protein-function prediction research, with micro precision, recall, and F1-score as the final evaluation metrics. The combined CNN-LSTM and CNN-GRU models were also evaluated in the same way. This study confirmed that the LSTM performs well as a single model on simple classification problems, that the overlapping CNN is suitable as a single model on complex classification problems, and that the CNN-LSTM is relatively better as a combined model.
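
A hypothetical sketch of the CNN-LSTM combination evaluated here, with all sizes assumed: a 1-D convolution over embedded amino-acid tokens feeds an LSTM and a multi-label sigmoid head, the setup scored with micro precision, recall, and F1:

```python
# Sketch: convolutional feature extraction over the sequence, LSTM over the
# resulting feature maps, multi-label output for protein function classes.
import tensorflow as tf

SEQ_LEN, N_AA, N_FUNC = 1000, 21, 256   # padded length, amino acids, labels
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(N_AA, 32),
    tf.keras.layers.Conv1D(64, kernel_size=9, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_FUNC, activation="sigmoid"),  # multi-label output
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```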