• Title/Summary/Keyword: Attention LSTM

105 search results

Prediction of dam inflow based on LSTM-s2s model using Luong attention (Attention 기법을 적용한 LSTM-s2s 모델 기반 댐유입량 예측 연구)

  • Lee, Jonghyeok;Choi, Suyeon;Kim, Yeonjoo
    • Journal of Korea Water Resources Association / v.55 no.7 / pp.495-504 / 2022
  • With the recent development of artificial intelligence, the Long Short-Term Memory (LSTM) model, which is efficient for time-series analysis, is being used to increase the accuracy of dam inflow prediction. In this study, we predict the inflow of the Soyang River dam using an LSTM model with a Sequence-to-Sequence structure (LSTM-s2s) and an attention mechanism (LSTM-s2s with attention) that can further improve LSTM performance. Hourly inflow, temperature, and precipitation data from 2013 to 2020 were used to train, validate, and test the models. As a result, the LSTM-s2s with attention outperformed the LSTM-s2s both in general and in predicting peak values. Both models captured the inflow pattern during peaks, but detailed hourly variability was only partially simulated. We conclude that the proposed LSTM-s2s with attention can improve inflow forecasting despite its limits in hourly prediction.
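The Luong (dot-product) attention used in this study can be sketched minimally in NumPy: each encoder hidden state is scored against the current decoder state, the scores are normalized with softmax, and a context vector is built as the weighted sum. The function names and toy dimensions below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def luong_attention(decoder_state, encoder_states):
    """Luong (dot) attention: score each encoder time step against the
    current decoder state, then build a context vector."""
    # decoder_state: (H,), encoder_states: (T, H)
    scores = encoder_states @ decoder_state   # (T,) dot-product scores
    weights = softmax(scores)                 # attention over T steps
    context = weights @ encoder_states        # (H,) weighted sum
    return context, weights

# Toy example: 5 encoder steps, hidden size 4
rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 4))
dec = rng.normal(size=4)
context, weights = luong_attention(dec, enc)
```

In the full LSTM-s2s model, the context vector would be concatenated with the decoder state before the output projection at each forecast step.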

Stride Length Estimation Using LSTM-Attention (LSTM-Attention을 이용한 보폭 추정)

  • Tae, Min-Woo;Kang, Kyung-Hoon;Choi, Sang-Il
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.331-332 / 2022
  • This paper proposes a method for estimating stride length from gait data collected with a smart insole that contains an inertial measurement unit (IMU), composed of a 3-axis accelerometer and a 3-axis gyroscope, together with built-in pressure sensors. The pressure-sensor signal is first used to segment the data into single-step cycles, and the segmented accelerometer and gyroscope data are then fed into a deep learning model that combines LSTM and attention layers to estimate stride length. The LSTM-Attention model improved performance by about 1.14% over the conventional LSTM model.

Improving dam inflow prediction in LSTM-s2s model with Luong attention (Attention 기법을 통한 LSTM-s2s 모델의 댐유입량 예측 개선)

  • Jonghyeok Lee;Yeonjoo Kim
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.226-226 / 2023
  • Various Long Short-Term Memory (LSTM) methods are being actively applied and developed to predict river discharge and dam inflow. Recent studies suggest that LSTM performance can be improved with sequence-to-sequence (s2s) structures and attention mechanisms. Accordingly, this study built an LSTM-s2s model and an LSTM-s2s model with attention added, and performed inflow prediction using hourly data to assess the applicability of these models to actual dam operations. For the Soyang River dam basin, hourly inflow data and synoptic weather station temperature and precipitation data from 2013 to 2020 were divided into training, validation, and test sets, and model performance was then evaluated. R2, RRMSE, CC, NSE, and PBIAS were used to determine the optimal sequence length. The results show that the model with attention generally outperformed the LSTM-s2s model and also achieved higher accuracy in predicting peak values. Both models reproduced the flow pattern well during peaks but were limited in simulating detailed hourly variability. Despite this limitation in hourly prediction, we conclude that the LSTM-s2s model with attention can be used for future dam inflow prediction.

Two-Dimensional Attention-Based LSTM Model for Stock Index Prediction

  • Yu, Yeonguk;Kim, Yoon-Joong
    • Journal of Information Processing Systems / v.15 no.5 / pp.1231-1242 / 2019
  • This paper presents a two-dimensional attention-based long short-term memory (2D-ALSTM) model for stock index prediction, incorporating input attention and temporal attention mechanisms for weighting of important stocks and important time steps, respectively. The proposed model is designed to overcome the long-term dependency, stock selection, and stock volatility delay problems that negatively affect existing models. The 2D-ALSTM model is validated in a comparative experiment involving two attention-based models, multi-input LSTM (MI-LSTM) and the dual-stage attention-based recurrent neural network (DARNN), with real stock data being used for training and evaluation. The model achieves superior performance compared to MI-LSTM and DARNN for stock index prediction on a KOSPI100 dataset.
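The two attention dimensions described above can be illustrated with a minimal NumPy sketch: input attention assigns a weight to each stock (column) and temporal attention a weight to each time step (row) of a (T, N) window. The query vectors and pooling here are simplified assumptions for illustration; the paper's model learns its attention scores inside the LSTM.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def two_d_attention(X, q_time, q_input):
    """Weight important input series (columns) and important time
    steps (rows) of a (T, N) window, then pool to one feature vector."""
    alpha = softmax(q_input @ X)            # (N,) input attention per stock
    beta = softmax(X @ q_time)              # (T,) temporal attention per step
    weighted = (X * alpha) * beta[:, None]  # apply both weightings
    return weighted.sum(axis=0), alpha, beta

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 6))                # 10 time steps, 6 stocks
feat, alpha, beta = two_d_attention(X, rng.normal(size=6), rng.normal(size=10))
```

The pooled vector would then feed the recurrent prediction layer, with the two weight vectors indicating which stocks and which time steps drove the forecast.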

Speaker verification system combining attention-long short term memory based speaker embedding and I-vector in far-field and noisy environments (Attention-long short term memory 기반의 화자 임베딩과 I-vector를 결합한 원거리 및 잡음 환경에서의 화자 검증 알고리즘)

  • Bae, Ara;Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.39 no.2 / pp.137-142 / 2020
  • Many studies based on I-vector have been conducted in a variety of environments, from text-dependent short utterances to text-independent long utterances. In this paper, we propose a speaker verification system that combines I-vector with Probabilistic Linear Discriminant Analysis (PLDA) and a speaker embedding from a Long Short Term Memory (LSTM) network with an attention mechanism, for far-field and noisy environments. The Equal Error Rate (EER) of the LSTM model is 15.52% and that of the Attention-LSTM model is 8.46%, an improvement of 7.06 percentage points. We show that the proposed method addresses the problem of the existing extraction process, which defines the embedding heuristically. The EER of the I-vector/PLDA system alone is 6.18%, the best single-system performance. Combined with the attention-LSTM based embedding, the EER is 2.57%, which is 3.61 percentage points lower than the baseline system, a relative improvement of 58.41%.

Malicious URL Recognition and Detection Using Attention-Based CNN-LSTM

  • Peng, Yongfang;Tian, Shengwei;Yu, Long;Lv, Yalong;Wang, Ruijin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5580-5593 / 2019
  • A malicious Uniform Resource Locator (URL) recognition and detection method based on the combination of an attention mechanism with a Convolutional Neural Network and a Long Short-Term Memory network (Attention-Based CNN-LSTM) is proposed. Firstly, the WHOIS check method is used to extract and filter features, including URL texture information, statistical attributes of the URL string, and WHOIS information; the features are then encoded, pre-processed, and input to the convolution layer of the constructed Convolutional Neural Network (CNN) to extract local features. Secondly, weighted by the attention mechanism, the generated local features are input into the Long Short-Term Memory (LSTM) model and subsequently pooled to compute the global features of the URLs. Finally, the URLs are detected and classified by a SoftMax function using the global features. The results demonstrate that, compared with existing methods, the Attention-based CNN-LSTM mechanism achieves higher accuracy for malicious URL detection.
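The CNN front end and attention-weighted pooling of local URL features can be sketched as follows: a 1D convolution slides over character embeddings of the URL, and an attention query weights the resulting local feature maps before pooling. All shapes, the random embeddings, and the single query vector are illustrative assumptions; the paper's pipeline additionally uses WHOIS features and an LSTM before classification.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv1d(emb, kernels):
    """Valid 1D convolution over a (L, D) character embedding with
    (K, W, D) kernels, returning (L-W+1, K) local feature maps."""
    L, D = emb.shape
    K, W, _ = kernels.shape
    out = np.empty((L - W + 1, K))
    for t in range(L - W + 1):
        window = emb[t:t + W]  # (W, D) slice of the embedding
        out[t] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return np.maximum(out, 0)  # ReLU activation

def attention_pool(feats, query):
    """Weight the local CNN features by attention before pooling."""
    w = softmax(feats @ query)  # one weight per URL position
    return w @ feats, w

rng = np.random.default_rng(2)
url = "http://example.com/login"
emb = rng.normal(size=(len(url), 8))     # toy character embeddings
feats = conv1d(emb, rng.normal(size=(16, 3, 8)))  # 16 kernels, width 3
pooled, w = attention_pool(feats, rng.normal(size=16))
```

The pooled global feature would then be passed to a softmax classifier to label the URL as benign or malicious.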

Image Captioning with Synergy-Gated Attention and Recurrent Fusion LSTM

  • Yang, You;Chen, Lizhi;Pan, Longyue;Hu, Juntao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.10 / pp.3390-3405 / 2022
  • Long Short-Term Memory (LSTM) combined with an attention mechanism is extensively used to generate semantic sentences for images in image captioning models. However, features of salient regions and spatial information are not utilized sufficiently in most related works. Meanwhile, the LSTM also suffers from underutilized information within a single time step. In this paper, two innovative approaches are proposed to solve these problems. First, the Synergy-Gated Attention (SGA) method is proposed, which can process the spatial features and the salient-region features of given images simultaneously. SGA establishes a gated mechanism through the global features to guide the interaction of information between these two feature streams. Then, the Recurrent Fusion LSTM (RF-LSTM) mechanism is proposed, which can predict the next hidden vectors in one time step and improve linguistic coherence by fusing future information. Experimental results on the benchmark MSCOCO dataset show that, compared with state-of-the-art methods, the proposed method improves the performance of the image captioning model and achieves competitive results on multiple evaluation indicators.
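The gating idea behind SGA can be illustrated with a generic gated-fusion sketch: a sigmoid gate computed from the global image feature decides, per dimension, how much of the salient-region stream versus the spatial stream passes through. The exact SGA formulation is defined in the paper; this sketch, including the single weight matrix `Wg`, is only an assumed minimal form of gated fusion.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def synergy_gate(region_feats, spatial_feats, global_feat, Wg):
    """A gate derived from the global image feature blends the
    salient-region stream and the spatial stream per dimension."""
    g = sigmoid(Wg @ global_feat)            # (D,) gate values in (0, 1)
    return g * region_feats + (1.0 - g) * spatial_feats

rng = np.random.default_rng(4)
D = 16
fused = synergy_gate(rng.normal(size=D), rng.normal(size=D),
                     rng.normal(size=D), rng.normal(size=(D, D)))
```

A gate of this kind lets the caption decoder rely on region features for object words and on spatial features for relational words, guided by the global context.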

Statistical Method and Deep Learning Model for Sea Surface Temperature Prediction (수온 데이터 예측 연구를 위한 통계적 방법과 딥러닝 모델 적용 연구)

  • Moon-Won Cho;Heung-Bae Choi;Myeong-Soo Han;Eun-Song Jung;Tae-Soon Kang
    • Journal of the Korean Society of Marine Environment &amp; Safety / v.29 no.6 / pp.543-551 / 2023
  • As climate change continues to prompt an increasing demand for advancements in disaster and safety management technologies to address abnormally high water temperatures, typhoons, floods, and droughts, sea surface temperature has emerged as a pivotal factor for swiftly assessing the impacts of summer harmful algal blooms in the seas surrounding the Korean Peninsula and the formation and dissipation of cold water along the East Coast of Korea. Therefore, this study sought to gauge predictive performance by leveraging statistical methods and deep learning algorithms to harness sea surface temperature data effectively for marine anomaly research. The sea surface temperature data employed in the predictions span from 2018 to 2022 and originate from the Heuksando Tidal Observatory. Both the traditional statistical ARIMA method and advanced deep learning models, including long short-term memory (LSTM) and gated recurrent unit (GRU), were employed. Furthermore, prediction performance was evaluated using the attention LSTM technique, which integrates an attention mechanism into the sequence-to-sequence (s2s) structure to further augment LSTM performance. The results showed that the attention LSTM model outperformed the other models, signifying its superior predictive performance. Additionally, fine-tuning hyperparameters can further improve sea surface temperature prediction performance.

DG-based SPO tuple recognition using self-attention M-Bi-LSTM

  • Jung, Joon-young
    • ETRI Journal / v.44 no.3 / pp.438-449 / 2022
  • This study proposes a dependency grammar-based self-attention multilayered bidirectional long short-term memory (DG-M-Bi-LSTM) model for subject-predicate-object (SPO) tuple recognition from natural language (NL) sentences. To add recent knowledge to the knowledge base autonomously, it is essential to extract knowledge from numerous NL data. Therefore, this study proposes a high-accuracy SPO tuple recognition model that requires a small amount of learning data to extract knowledge from NL sentences. The accuracy of SPO tuple recognition using DG-M-Bi-LSTM is compared with that using NL-based self-attention multilayered bidirectional LSTM, DG-based bidirectional encoder representations from transformers (BERT), and NL-based BERT to evaluate its effectiveness. The DG-M-Bi-LSTM model achieves the best recognition accuracy for extracting SPO tuples from NL sentences even though it has fewer deep neural network (DNN) parameters than BERT. In particular, its accuracy is better than that of BERT when the learning data are limited. Additionally, its pretrained DNN parameters can be applied to other domains because it learns the structural relations in NL sentences.
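Self-attention over the hidden states of a (Bi-)LSTM, as used in models like the one above, can be sketched with scaled dot-product attention in NumPy: every token representation attends to every other token. The random projection matrices and toy sequence length are illustrative assumptions, not the DG-M-Bi-LSTM's actual parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H, Wq, Wk, Wv):
    """Scaled dot-product self-attention over the (T, D) hidden
    states of a (Bi-)LSTM: every token attends to every token."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])   # (T, T) similarity matrix
    A = softmax(scores, axis=-1)             # each row sums to 1
    return A @ V, A

rng = np.random.default_rng(3)
H = rng.normal(size=(7, 12))                 # 7 tokens, hidden size 12
Wq, Wk, Wv = (rng.normal(size=(12, 12)) for _ in range(3))
out, A = self_attention(H, Wq, Wk, Wv)
```

In the SPO setting, the attention matrix lets each word weigh the dependency-related words that matter for deciding whether it belongs to the subject, predicate, or object span.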

Video Compression Standard Prediction using Attention-based Bidirectional LSTM (어텐션 알고리듬 기반 양방향성 LSTM을 이용한 동영상의 압축 표준 예측)

  • Kim, Sangmin;Park, Bumjun;Jeong, Jechang
    • Journal of Broadcast Engineering / v.24 no.5 / pp.870-878 / 2019
  • In this paper, we propose an attention-based BLSTM for predicting the video compression standard of a video. Recently, in NLP, much research has used the RNN structure to predict the next word of a sentence and to classify and translate sentences by their semantics, and such models have been commercialized as chatbots, AI speakers, translator applications, etc. LSTM was designed to solve the gradient vanishing problem of RNNs and is widely used in NLP. The proposed algorithm makes video compression standard prediction possible by applying a BLSTM and an attention algorithm, which focuses on the most important words in a sentence, to the bitstream of a video rather than to a natural-language sentence.