• Title/Summary/Keyword: LSTM algorithm

Estimation of reaction forces at the seabed anchor of the submerged floating tunnel using structural pattern recognition

  • Seongi Min;Kiwon Jeong;Yunwoo Lee;Donghwi Jung;Seungjun Kim
    • Computers and Concrete
    • /
    • v.31 no.5
    • /
    • pp.405-417
    • /
    • 2023
  • The submerged floating tunnel (SFT) is tethered by mooring lines anchored to the seabed, so the structural integrity of the anchors must be carefully managed. Despite their importance, the reaction forces cannot simply be measured by attaching sensors or load cells, because of the structural and environmental characteristics of the submerged structure. Therefore, we propose an effective method for estimating the reaction forces at the seabed anchor of a submerged floating tunnel using a structural pattern model. First, a structural pattern model is established to capture the correlation between tunnel motion and anchor reactions via a deep learning algorithm. Once the pattern model is established, it is used directly to estimate the reaction forces from tunnel motion data, which can be measured inside the tunnel. Because the sequential characteristics of the responses in the time domain must be considered, the long short-term memory (LSTM) algorithm is mainly used to recognize structural behavioral patterns. Using hydrodynamics-based simulations, big data on the structural behavior of the SFT under various waves was generated, and the prepared datasets were used to validate the proposed method. The simulation-based validation results clearly show that the proposed method can precisely estimate time-series reactions using only acceleration data. In addition to real-time structural health monitoring, the proposed method can be useful for forensics when an unexpected accident or failure is related to the seabed anchors of the SFT.
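
A minimal sketch of the kind of model the abstract describes, assuming a Keras LSTM regressor that maps a window of measured accelerations to time-series anchor reactions; the window length, channel counts, and layer sizes below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
import tensorflow as tf

WINDOW, N_ACCEL_CH, N_REACTIONS = 200, 6, 4   # assumed window length, acceleration channels, reaction components

model = tf.keras.Sequential([
    # return_sequences=True keeps the full sequence so a reaction is predicted per time step
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(WINDOW, N_ACCEL_CH)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(N_REACTIONS)),
])
model.compile(optimizer="adam", loss="mse")

# Placeholder arrays standing in for the hydrodynamics-based simulation data described in the abstract.
x = np.random.randn(32, WINDOW, N_ACCEL_CH).astype("float32")
y = np.random.randn(32, WINDOW, N_REACTIONS).astype("float32")
model.fit(x, y, epochs=2, batch_size=8, verbose=0)
reactions_hat = model.predict(x[:1])          # estimated time-series reactions from acceleration input alone
```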

Cryptocurrency Auto-trading Program Development Using Prophet Algorithm (Prophet 알고리즘을 활용한 가상화폐의 자동 매매 프로그램 개발)

  • Hyun-Sun Kim;Jae Joon Ahn
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.1
    • /
    • pp.105-111
    • /
    • 2023
  • Recently, research on prediction algorithms using deep learning has been actively conducted. In addition, algorithmic trading (auto-trading) based on the predictive power of artificial intelligence is becoming one of the main investment methods in the stock trading field, building its own track record. Since the possibility of human error is eliminated at the source and trades are executed mechanically according to preset conditions, it is likely to be more profitable than human trading in the long run. In particular, in the virtual currency market, at least for now, it is not possible to evaluate the intrinsic value of each cryptocurrency, unlike stocks. It is therefore far more effective to approach cryptocurrencies with technical analysis, and the cryptocurrency market may be the field in which the performance of algorithmic trading can be maximized. Currently, the most commonly used artificial intelligence method for financial time series analysis and forecasting is long short-term memory (LSTM). However, LSTM also has deficiencies that constrain its widespread use, so many improvements are needed in the design of forecasting and investment algorithms to increase its utilization in actual investment situations. Meanwhile, Prophet, a forecasting algorithm released by Facebook (Meta) in 2017, is used to predict stock and cryptocurrency prices with high prediction accuracy. In particular, Prophet is reported to predict the prices of virtual currencies better than those of stocks. In this study, we aim to show that Prophet's virtual currency price prediction accuracy is higher than that of existing deep-learning-based time series prediction methods. In addition, we execute a mock investment using the values predicted by Prophet. Evaluating the final value at the end of the investment period, most of the tested coins exceeded the initial investment, recording a positive profit. In future research, we will continue to test other coins to determine whether predictive power differs significantly by coin and, if so, to establish corresponding investment strategies.
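
A minimal sketch of the forecasting-plus-mock-trading idea, assuming the open-source Prophet package and a hypothetical CSV of daily closing prices; the trading rule is a deliberately naive illustration, not the paper's strategy.

```python
import pandas as pd
from prophet import Prophet   # pip install prophet

price_df = pd.read_csv("btc_daily.csv")                         # hypothetical file with 'date' and 'close' columns
df = pd.DataFrame({"ds": pd.to_datetime(price_df["date"]),      # Prophet expects columns named ds and y
                   "y": price_df["close"]})

m = Prophet(daily_seasonality=True)
m.fit(df)
future = m.make_future_dataframe(periods=1)                     # forecast one step ahead
forecast = m.predict(future)

next_hat = forecast["yhat"].iloc[-1]
last_price = df["y"].iloc[-1]
signal = "BUY" if next_hat > last_price else "SELL"             # naive rule: trade on the predicted direction
print(signal, round(next_hat, 2))
```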

Machine learning model for residual chlorine prediction in sediment basin to control pre-chlorination in water treatment plant (정수장 전염소 공정제어를 위한 침전지 잔류염소농도 예측 머신러닝 모형)

  • Kim, Juhwan;Lee, Kyunghyuk;Kim, Soojun;Kim, Kyunghun
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.spc1
    • /
    • pp.1283-1293
    • /
    • 2022
  • The purpose of this study is to predict residual chlorine in order to maintain a stable residual chlorine concentration in the sedimentation basin, using artificial intelligence algorithms in a water treatment process employing pre-chlorination. Available water quantity and quality data are collected, analyzed statistically, and applied to a mathematical multiple regression model and to artificial intelligence models including multi-layer perceptron neural network, random forest, and long short-term memory (LSTM) algorithms. Water temperature, turbidity, pH, conductivity, flow rate, alkalinity, and pre-chlorination dosage data are used as the input parameters to develop the prediction models. The results show that the random forest algorithm gives the most adequate predictions among the four cases (long short-term memory, multi-layer perceptron, multiple regression, and random forest). In particular, the multiple regression model cannot represent residual chlorine with input parameters that vary independently with seasonal change and that differ in numerical scale and dimension between quantity and quality. For this reason, the random forest model, a decision-tree-type algorithm, is more appropriate for predicting water quality than the other algorithms. It is also expected that real-time prediction by artificial intelligence models can support stable control of residual chlorine in water treatment plants that include a pre-chlorination process.
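
A minimal sketch of the best-performing case reported above, assuming a scikit-learn random forest and hypothetical column names for the listed inputs; the actual operational dataset and preprocessing are not shown in the abstract.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

data = pd.read_csv("wtp_operation.csv")                          # hypothetical plant operation records
features = ["water_temp", "turbidity", "pH", "conductivity",
            "flow_rate", "alkalinity", "pre_chlorine_dose"]      # the inputs named in the abstract
X, y = data[features], data["residual_chlorine"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)  # keep chronological order
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X_tr, y_tr)
print("test R2:", r2_score(y_te, rf.predict(X_te)))
```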

Electromyography Pattern Recognition and Classification using Circular Structure Algorithm (원형 구조 알고리즘을 이용한 근전도 패턴 인식 및 분류)

  • Choi, Yuna;Sung, Minchang;Lee, Seulah;Choi, Youngjin
    • The Journal of Korea Robotics Society
    • /
    • v.15 no.1
    • /
    • pp.62-69
    • /
    • 2020
  • This paper proposes a pattern recognition and classification algorithm based on a circular structure that can reflect the characteristics of the sEMG (surface electromyogram) signal measured on the arm without imposing limitations on electrode placement. In order to recognize the same pattern regardless of electrode location, a circular-structure data acquisition scheme is proposed so that all sEMG channels are connected to one another. Several experiments are conducted to verify the performance of sEMG pattern recognition and classification using the developed algorithm. First, although there are no differences in the sEMG signals themselves, similar patterns are identified much better by the circular structure algorithm than by conventional linear ones. Second, a comparative analysis is performed with supervised learning schemes such as MLP, CNN, and LSTM. In the results, the classification accuracy of the circular structure is above 98% for all postures, much higher than the results obtained with the linear structure. The difference in recognition between the circular and linear structures was largest, about 4%, when the MLP network was used.
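
A minimal sketch of one way the circular structure can be read (an assumption about the method, not the authors' code): because the electrodes form a ring around the arm, every circular rotation of the channel order is treated as the same gesture, which removes the dependence on where the electrodes happen to sit.

```python
import numpy as np

def circular_expand(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, n_channels) sEMG segment -> stack of all channel rotations."""
    n_ch = window.shape[1]
    return np.stack([np.roll(window, shift=k, axis=1) for k in range(n_ch)])

segment = np.random.randn(400, 8)        # e.g. 400 samples from an 8-channel electrode ring (placeholder)
rotations = circular_expand(segment)     # 8 placement-equivalent versions of the same pattern
print(rotations.shape)                   # (8, 400, 8) -> fed to the classifier so any rotation maps to one class
```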

River Flow Prediction Algorithm Using Deep Learning (딥러닝을 이용한 하천 유량 예측 알고리즘)

  • Bak, Gwi-Man;Oh, Se-Rang;Park, Geun-Ho;Bae, Young-Chul
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.16 no.6
    • /
    • pp.1239-1248
    • /
    • 2021
  • In this paper, we present an FDNN algorithm that performs prediction based on academic understanding. To bring prediction based on academic understanding, rather than purely data-dependent prediction, into deep learning, we construct the algorithm on mathematical and hydrological grounds. We build a model that predicts the flow rate of a river from precipitation input and measure the model's performance through K-fold cross validation.
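
A minimal sketch of the K-fold evaluation mentioned above, with a generic scikit-learn regressor standing in for the paper's FDNN (the FDNN structure itself is not specified in the abstract) and placeholder precipitation/flow arrays.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

X = np.random.rand(500, 3)      # placeholder precipitation-derived inputs
y = np.random.rand(500)         # placeholder river flow rate

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    scores.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))
print("5-fold mean MSE:", np.mean(scores))
```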

A Comparative Study of Machine Learning Algorithms Using LID-DS DataSet (LID-DS 데이터 세트를 사용한 기계학습 알고리즘 비교 연구)

  • Park, DaeKyeong;Ryu, KyungJoon;Shin, DongIl;Shin, DongKyoo;Park, JeongChan;Kim, JinGoog
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.3
    • /
    • pp.91-98
    • /
    • 2021
  • Today's information and communication technology is developing rapidly, the security of IT infrastructure is becoming more important, and at the same time, cyber attacks of various forms are becoming more advanced and sophisticated, such as advanced persistent threats (APT). Early defense against, or prediction of, increasingly sophisticated cyber attacks is extremely important, and in many cases the analysis of network-based intrusion detection system (NIDS) data alone cannot prevent rapidly changing cyber attacks. Therefore, data generated by host-based intrusion detection systems (HIDS) is now also analyzed to protect against the cyber attacks described above. In this paper, we conducted a comparative study of machine learning algorithms using LID-DS (Leipzig Intrusion Detection - Data Set), a host-based intrusion detection dataset that includes thread information, metadata, and buffer data missing from previously used datasets. The algorithms used were Decision Tree, Naive Bayes, MLP (multi-layer perceptron), Logistic Regression, LSTM (long short-term memory), and RNN (recurrent neural network). Accuracy, precision, recall, F1-score, and error rate were measured for evaluation. As a result, the LSTM algorithm showed the highest accuracy.
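
A minimal sketch of the comparison setup for the non-recurrent models named in the abstract, assuming a generic feature matrix; the actual LID-DS preprocessing (thread information, metadata, buffer data) is not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X = np.random.rand(1000, 20)                    # placeholder HIDS-derived features
y = np.random.randint(0, 2, size=1000)          # 0 = normal behavior, 1 = attack
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
    "LogisticRegression": LogisticRegression(max_iter=500),
}
for name, clf in models.items():
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          accuracy_score(y_te, y_hat), precision_score(y_te, y_hat),
          recall_score(y_te, y_hat), f1_score(y_te, y_hat))
```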

Time Series Data Analysis and Prediction System Using PCA (주성분 분석 기법을 활용한 시계열 데이터 분석 및 예측 시스템)

  • Jin, Young-Hoon;Ji, Se-Hyun;Han, Kun-Hee
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.11
    • /
    • pp.99-107
    • /
    • 2021
  • We live amid a myriad of data. Various data are created in every situation in which we work, and we discover the meaning of data through big data technology; many efforts are under way to find meaningful data. This paper introduces a principal component analysis technique that enables people to make better choices through trend analysis and prediction of time series data. Principal component analysis constructs a covariance matrix from the input data and yields eigenvectors and eigenvalues from which the direction of the data can be inferred. The proposed method computes a reference axis for a time series dataset having similar directionality, and it predicts the directionality of the data in the next section through the angle between the reference axis and the direction of each time series in the dataset. In this paper, we compare and verify the accuracy of the proposed algorithm against LSTM (long short-term memory) on cryptocurrency trends. In the comparative verification, the proposed method recorded relatively few transactions and a high return (112%) compared with LSTM on highly volatile data. This suggests that the signal was analyzed and predicted relatively accurately, and better results are expected through more accurate threshold settings.
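
A minimal sketch of the reference-axis idea as read from the abstract (an illustration under that assumption, not the paper's implementation): the leading eigenvector of the covariance of a set of similar segments serves as the reference axis, and a new segment's direction is compared to it via the angle between them.

```python
import numpy as np

def principal_axis(segments: np.ndarray) -> np.ndarray:
    """segments: (n_series, window) -> leading eigenvector of their covariance matrix."""
    cov = np.cov(segments, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, np.argmax(eigvals)]        # eigenvector with the largest eigenvalue

def angle_to_axis(segment: np.ndarray, axis: np.ndarray) -> float:
    cos = np.dot(segment, axis) / (np.linalg.norm(segment) * np.linalg.norm(axis))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

series_set = np.cumsum(np.random.randn(20, 50), axis=1)   # placeholder: 20 similar series over a 50-step window
axis = principal_axis(series_set)
print(angle_to_axis(series_set[0], axis))                  # a small angle suggests the same directionality continues
```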

CRNN-Based Korean Phoneme Recognition Model with CTC Algorithm (CTC를 적용한 CRNN 기반 한국어 음소인식 모델 연구)

  • Hong, Yoonseok;Ki, Kyungseo;Gweon, Gahgene
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.3
    • /
    • pp.115-122
    • /
    • 2019
  • For Korean phoneme recognition, hidden Markov-Gaussian mixture models (HMM-GMM) or hybrid models that combine artificial neural networks with HMMs have mainly been used. However, these approaches have the limitation that they require force-aligned corpus training data manually annotated by experts. Recently, researchers have used neural-network-based phoneme recognition models that combine a recurrent neural network (RNN) structure with the connectionist temporal classification (CTC) algorithm to overcome the problem of obtaining manually annotated training data. Yet, in terms of implementation, these RNN-based models have another difficulty: the amount of data required grows as the structure becomes more sophisticated. This is particularly problematic for Korean, which lacks refined corpora. In this study, we use the CTC algorithm, which does not require forced alignment, to create a Korean phoneme recognition model. Specifically, the phoneme recognition model is based on a convolutional neural network (CNN), which requires a relatively small amount of data and can be trained faster than RNN-based models. We present the results of two experiments and the resulting best-performing phoneme recognition model, which distinguishes 49 Korean phonemes. The best-performing model combines a CNN with a 3-hop bidirectional LSTM, achieving a final phoneme error rate (PER) of 3.26. This PER is a considerable improvement over existing Korean phoneme recognition models, which report PERs ranging from 10 to 12.
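
A minimal sketch of the model family described above, assuming PyTorch, log-mel input features, and illustrative layer sizes: a CNN front end, a 3-layer bidirectional LSTM, and CTC loss over 49 phonemes plus a blank symbol.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_mels=40, n_phonemes=49):
        super().__init__()
        self.conv = nn.Sequential(                                   # frame-level feature extractor
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.rnn = nn.LSTM(128, 128, num_layers=3, bidirectional=True, batch_first=True)
        self.out = nn.Linear(256, n_phonemes + 1)                    # +1 for the CTC blank label

    def forward(self, x):                                            # x: (batch, time, n_mels)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.rnn(h)
        return self.out(h).log_softmax(dim=-1)                       # log-probs per frame, as CTC expects

model, ctc = CRNN(), nn.CTCLoss(blank=49)
feats = torch.randn(4, 100, 40)                                      # placeholder log-mel features
targets = torch.randint(0, 49, (4, 20))                              # placeholder phoneme label sequences
log_probs = model(feats).transpose(0, 1)                             # CTCLoss wants (time, batch, classes)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 100), target_lengths=torch.full((4,), 20))
loss.backward()
```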

Improved Convolutional Neural Network Based Cooperative Spectrum Sensing For Cognitive Radio

  • Uppala, Appala Raju;Narasimhulu C, Venkata;Prasad K, Satya
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.6
    • /
    • pp.2128-2147
    • /
    • 2021
  • Cognitive radio systems have recently been implemented to tackle spectrum underutilization and support efficient data traffic. Spectrum sensing is the crucial step in cognitive applications, in which the cognitive user detects the presence of a primary user (PU) in a particular channel and switches to another channel for continuous transmission. In cognitive radio systems, the ability to precisely identify the primary user's signal is essential for the secondary user to make use of idle licensed spectrum. Based on this capability, a new spectrum sensing technique is proposed in this paper to identify all types of primary user signals in a cognitive radio environment. A spectrum sensing algorithm using an improved convolutional neural network and long short-term memory (CNN-LSTM) is presented. The approach uses simulated annealing to discover a reasonable number of neurons for each layer of a fully connected deep neural network, tackling the optimization problem. The probability of detection is taken as the determining parameter for evaluating the efficiency of the proposed algorithm. Experiments are carried out under different signal-to-noise ratios to demonstrate the better performance of the proposed algorithm. The PU signal has an associated modulation format, and hence identifying the presence of a modulation format itself establishes the presence of a PU signal.
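
A minimal sketch of a CNN-LSTM sensing classifier of the kind the abstract names, assuming raw I/Q sample windows as input and illustrative layer widths; the simulated-annealing neuron search described above is not reproduced here.

```python
import numpy as np
import tensorflow as tf

N_SAMPLES = 256                                          # assumed I/Q window length
model = tf.keras.Sequential([
    # two input channels: in-phase and quadrature components of the received signal
    tf.keras.layers.Conv1D(32, 8, activation="relu", input_shape=(N_SAMPLES, 2)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 8, activation="relu"),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # output: probability that a primary user is present
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = np.random.randn(64, N_SAMPLES, 2).astype("float32")  # placeholder received-signal windows
y = np.random.randint(0, 2, size=(64, 1))                # 1 = PU present, 0 = idle channel
model.fit(x, y, epochs=2, batch_size=16, verbose=0)
p_detect = model.predict(x[:1])                          # detection probability for one window
```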

A Study on Performance Improvement of Recurrent Neural Networks Algorithm using Word Group Expansion Technique (단어그룹 확장 기법을 활용한 순환신경망 알고리즘 성능개선 연구)

  • Park, Dae Seung;Sung, Yeol Woo;Kim, Cheong Ghil
    • Journal of Industrial Convergence
    • /
    • v.20 no.4
    • /
    • pp.23-30
    • /
    • 2022
  • Recently, with the development of artificial intelligence (AI) and deep learning, the importance of conversational AI chatbots has been highlighted, and chatbot research is being conducted in various fields. Chatbots are usually built on an open-source or commercial platform for ease of development, and these platforms mainly use RNN and RNN-derived algorithms. The RNN algorithm has the advantages of fast learning, ease of monitoring and verification, and good inference performance. In this paper, a method for improving the inference performance of RNNs and their derived algorithms is studied. The proposed method applies a word-group expansion learning technique to the key words of each sentence when the RNN and its derived algorithms are trained. As a result, the three algorithms with a recurrent structure, RNN, GRU, and LSTM, achieved inference performance improvements of between 0.37% and 1.25%. These results can accelerate the adoption of AI chatbots in related industries and contribute to the use of various RNN-derived algorithms. Future research will need to study the effect of various activation functions on the performance of artificial neural network algorithms.
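
A minimal sketch of the word-group expansion step as described (the word groups, key words, and sentence below are invented for illustration; the authors' groups and downstream RNN/GRU/LSTM training are not reproduced).

```python
# Hypothetical word groups keyed by a key word; each training sentence containing a key word
# is expanded with variants that substitute the other members of its group.
word_groups = {
    "refund": ["refund", "money back", "reimbursement"],
    "delivery": ["delivery", "shipping", "shipment"],
}

def expand_sentence(sentence: str) -> list:
    variants = [sentence]
    for key, group in word_groups.items():
        if key in sentence:
            variants += [sentence.replace(key, w) for w in group if w != key]
    return variants

print(expand_sentence("when will my delivery arrive"))
# -> the original sentence plus variants using "shipping" and "shipment",
#    all added to the training set before the recurrent model is trained
```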