• Title/Summary/Keyword: Attention LSTM


Prediction of Sea Surface Temperature and Detection of Ocean Heat Wave in the South Sea of Korea Using Time-series Deep-learning Approaches (시계열 기계학습을 이용한 한반도 남해 해수면 온도 예측 및 고수온 탐지)

  • Jung, Sihun;Kim, Young Jun;Park, Sumin;Im, Jungho
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1077-1093 / 2020
  • Sea Surface Temperature (SST) is an important environmental indicator that affects climate coupling systems around the world. In particular, coastal regions suffer from abnormal SST, resulting in huge socio-economic damage. This study used Long Short-Term Memory (LSTM) and Convolutional Long Short-Term Memory (ConvLSTM) to predict SST up to seven days ahead in the south sea region of South Korea. The results showed that the ConvLSTM model outperformed the LSTM model, with a root mean square error (RMSE) of 0.33℃ and a mean difference of -0.0098℃. Seasonal comparison also showed the superiority of ConvLSTM over LSTM for all seasons. However, in summer, the prediction accuracy of both models dramatically decreased across all lead times, with RMSEs of 0.48℃ and 0.27℃ for LSTM and ConvLSTM, respectively. This study also examined the prediction of abnormally high SST based on three ocean heatwave categories (i.e., warning, caution, and attention) with lead times from one to seven days for an ocean heatwave case in summer 2017. ConvLSTM was able to successfully predict the ocean heatwave five days in advance.
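
A minimal sketch of the ConvLSTM idea described above, in which the LSTM gates are computed with 2-D convolutions so the spatial structure of an SST grid is preserved across time steps; the layer sizes, grid dimensions, and the 1x1 output convolution are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: all four LSTM gates are produced by one 2-D convolution."""
    def __init__(self, in_ch, hidden_ch, kernel_size=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g          # cell state update
        h = o * torch.tanh(c)      # hidden state keeps spatial layout
        return h, c

# Toy usage: roll the cell over 7 past days of SST grids (batch, time, channel, H, W).
seq = torch.randn(2, 7, 1, 32, 32)
cell = ConvLSTMCell(in_ch=1, hidden_ch=16)
h = torch.zeros(2, 16, 32, 32); c = torch.zeros_like(h)
for t in range(seq.size(1)):
    h, c = cell(seq[:, t], (h, c))
pred = nn.Conv2d(16, 1, 1)(h)      # map hidden state to a next-day SST grid
```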

A study on data augmentation methods for sound data classification (소리 데이터 분류에 대한 데이터 증대 방법 연구)

  • Chang, Il-Sik;Park, Goo-man
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.1308-1310 / 2022
  • Sound data classification covers a range of research, from classifying simple sounds to emotion recognition. Data augmentation is important as a way to mitigate data scarcity and overfitting in deep neural networks. In this paper, three sound datasets (UrbanSound8K, RAVDESS, IRMAS) were used; the sound data were converted into mel spectrograms before being fed into the networks. The input signals were trained with various neural network architectures (Bidirectional LSTM, Bidirectional LSTM with Attention, Multi-Head Attention, CNN), and the classification accuracy before and after data augmentation was measured for each network. Comparing the results of data augmentation methods across multiple datasets and network architectures is expected to provide useful insights.
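
The sketch below illustrates one plausible reading of this pipeline: a mel-spectrogram front end feeding a bidirectional LSTM with additive attention pooling. Layer sizes are assumptions, and the 10-class head simply mirrors UrbanSound8K's label set.

```python
import torch
import torch.nn as nn
import torchaudio

class BiLSTMAttention(nn.Module):
    """Bidirectional LSTM over mel-spectrogram frames with additive attention pooling."""
    def __init__(self, n_mels=64, hidden=128, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, mel):                      # mel: (B, n_mels, T)
        x = mel.transpose(1, 2)                  # (B, T, n_mels)
        h, _ = self.lstm(x)                      # (B, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over time frames
        ctx = (w * h).sum(dim=1)                 # weighted temporal pooling
        return self.fc(ctx)

wave, sr = torch.randn(1, 16000), 16000          # 1 s of dummy audio
mel = torchaudio.transforms.MelSpectrogram(sample_rate=sr, n_mels=64)(wave)
logits = BiLSTMAttention()(mel)                  # (1, 10) class scores
```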


Flight State Prediction Techniques Using a Hybrid CNN-LSTM Model (CNN-LSTM 혼합모델을 이용한 비행상태 예측 기법)

  • Park, Jinsang;Song, Min jae;Choi, Eun ju;Kim, Byoung soo;Moon, Young ho
    • Journal of Aerospace System Engineering / v.16 no.4 / pp.45-52 / 2022
  • In the field of UAM, which is attracting attention as a next-generation transportation system, technologies for using UAVs have been actively developed in recent years. Since UAVs adopting these technologies are mainly operated in urban areas, it is imperative that accidents be prevented. However, it is not easy to predict the abnormal flight state of a UAV that leads to a crash, because of its strong non-linearity. In this paper, we propose a method for predicting the flight state of a UAV based on a CNN-LSTM hybrid model. To predict flight state variables at a specific point in the future, the proposed model combines a CNN, which extracts temporal and spatial features from the flight data, with an LSTM, which captures the short- and long-term temporal dependencies of the extracted features. Simulation results show that the proposed method outperforms prediction methods based on existing artificial neural network models.
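
As an illustration of the CNN-LSTM hybrid pattern described above (not the authors' exact architecture), the sketch below uses a 1-D CNN over the flight-state channels followed by an LSTM and a linear head that predicts the state vector at a future step; the feature count and layer sizes are assumed.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Hypothetical CNN-LSTM hybrid: the CNN extracts local patterns in the flight
    data window, and the LSTM models their temporal dependence."""
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)   # predict the future state vector

    def forward(self, x):                            # x: (B, T, n_features) past data
        z = self.cnn(x.transpose(1, 2))              # (B, 32, T) convolutional features
        out, _ = self.lstm(z.transpose(1, 2))        # (B, T, hidden)
        return self.head(out[:, -1])                 # state at the future time step

pred = CNNLSTM()(torch.randn(4, 50, 8))              # (4, 8)
```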

Sketch Recognition Using LSTM with Attention Mechanism and Minimum Cost Flow Algorithm

  • Nguyen-Xuan, Bac;Lee, Guee-Sang
    • International Journal of Contents / v.15 no.4 / pp.8-15 / 2019
  • This paper presents a solution to the 'Quick, Draw! Doodle Recognition Challenge' hosted by Google. Doodles are drawings that convey concrete representational meaning or abstract lines creatively expressed by individuals. In this challenge, a doodle is presented as a sequence of sketches. From the view at the sketch level, to learn the pattern of strokes representing a doodle, we propose a sequential model stacked with multiple convolution layers and Long Short-Term Memory (LSTM) cells followed by an attention mechanism [15]. From the view at the image level, we use multiple models pre-trained on ImageNet to recognize the doodle. Finally, an ensemble and a post-processing method using the minimum cost flow algorithm are introduced to combine multiple models and achieve better results. In this challenge, our solution garnered 11th place among 1,316 teams. Our performance was 0.95037 MAP@3, only 0.4% lower than the winner's, demonstrating that our method is very competitive. The source code for this competition is published at: https://github.com/ngxbac/Kaggle-QuickDraw.
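
For reference, the MAP@3 score quoted above can be computed as follows; the function below uses the standard Kaggle formulation and is assumed (not confirmed from the paper) to match the competition's metric.

```python
def map_at_3(y_true, top3_preds):
    """Mean Average Precision at 3: credit 1/rank for the first correct prediction."""
    score = 0.0
    for label, preds in zip(y_true, top3_preds):
        for rank, p in enumerate(preds[:3], start=1):
            if p == label:
                score += 1.0 / rank
                break
    return score / len(y_true)

print(map_at_3([2, 5], [[2, 1, 0], [7, 5, 3]]))   # (1.0 + 0.5) / 2 = 0.75
```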

Video Saliency Detection Using Bi-directional LSTM

  • Chi, Yang;Li, Jinjiang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.6 / pp.2444-2463 / 2020
  • Saliency detection in video allows computing resources to be allocated more rationally and reduces the amount of computation while improving accuracy. Deep learning can extract the edge features of images, providing technical support for video saliency. This paper proposes a new detection method: we combine a Convolutional Neural Network (CNN) and a Deep Bidirectional LSTM Network (DB-LSTM) to learn spatio-temporal features by exploiting object motion information, generating a continuous sequence of saliency maps over the video frames. We also analyzed the sample database and found that human attention and saliency transitions are time-dependent, so we also considered cross-frame saliency detection. Finally, experiments show that our method is superior to other state-of-the-art methods.
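
A rough sketch of the CNN + bidirectional LSTM pattern described above: per-frame spatial features from a small CNN are aggregated over time by a BiLSTM that scores each frame. All layer sizes are assumptions, and the paper's actual saliency decoder is not reproduced.

```python
import torch
import torch.nn as nn

class FrameCNNBiLSTM(nn.Module):
    """Per-frame CNN features aggregated across time by a bidirectional LSTM."""
    def __init__(self, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),   # (B*T, 16*8*8)
        )
        self.lstm = nn.LSTM(16 * 8 * 8, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)         # per-frame saliency score

    def forward(self, clip):                         # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)                    # both temporal directions
        return self.head(out)                        # (B, T, 1)

scores = FrameCNNBiLSTM()(torch.randn(2, 8, 3, 64, 64))
```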

Two-dimensional attention-based multi-input LSTM for time series prediction

  • Kim, Eun Been;Park, Jung Hoon;Lee, Yung-Seop;Lim, Changwon
    • Communications for Statistical Applications and Methods / v.28 no.1 / pp.39-57 / 2021
  • Time series prediction is an area of great interest to many people. Algorithms for time series prediction are widely used in many fields such as stock price, temperature, energy, and weather forecasting; classical models as well as recurrent neural networks (RNNs) have been actively developed. Since the attention mechanism was introduced into neural network models, many new models with improved performance have been developed; in addition, models using attention twice have recently been proposed, resulting in further performance improvements. In this paper, we consider time series prediction by introducing attention twice into an RNN model. The proposed model introduces H-attention over output values and T-attention over time-step information to select useful information. We conduct experiments on stock price, temperature, and energy data and confirm that the proposed model outperforms existing models.
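
The sketch below is one possible reading of the double-attention idea: T-attention weights time steps and H-attention gates hidden units of the LSTM output. The exact formulation belongs to the paper; this is only an assumed illustration with made-up sizes.

```python
import torch
import torch.nn as nn

class TwoDimAttentionLSTM(nn.Module):
    """Illustrative double-attention RNN for one-step-ahead time series prediction."""
    def __init__(self, n_inputs, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.h_attn = nn.Linear(hidden, hidden)   # gates each hidden unit (H-attention)
        self.t_attn = nn.Linear(hidden, 1)        # scores each time step (T-attention)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                         # x: (B, T, n_inputs)
        h, _ = self.lstm(x)                       # (B, T, hidden)
        h = torch.sigmoid(self.h_attn(h)) * h     # H-attention over hidden units
        w = torch.softmax(self.t_attn(h), dim=1)  # T-attention over time steps
        ctx = (w * h).sum(dim=1)
        return self.out(ctx)                      # one-step-ahead prediction

y = TwoDimAttentionLSTM(n_inputs=5)(torch.randn(8, 30, 5))   # (8, 1)
```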

Performance Assessment of Two-stream Convolutional Long- and Short-term Memory Model for September Arctic Sea Ice Prediction from 2001 to 2021 (Two-stream Convolutional Long- and Short-term Memory 모델의 2001-2021년 9월 북극 해빙 예측 성능 평가)

  • Chi, Junhwa
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1047-1056 / 2022
  • Sea ice, frozen seawater in the Arctic, is a primary indicator of global warming. Due to its importance to the climate system, shipping-route navigation, and fisheries, Arctic sea ice prediction has gained increased attention in various disciplines. Recent advances in artificial intelligence (AI), motivated by a desire to develop more autonomous and efficient future predictions, have led to the development of new sea ice prediction models as alternatives to conventional numerical and statistical prediction models. This study aims to evaluate the performance of the two-stream convolutional long- and short-term memory (TS-ConvLSTM) AI model, which is designed to learn both the global and local characteristics of Arctic sea ice change, for the September Arctic sea ice minimum from 2001 to 2021, and to show its potential for an operational prediction system. Although the TS-ConvLSTM model generally improved prediction performance as training data increased, predictability for the marginal ice zone (5-50% concentration) showed a negative trend due to increasing first-year sea ice and warming. Additionally, the sea ice extent predicted by the TS-ConvLSTM was compared with the median Sea Ice Outlooks (SIOs) submitted to the Sea Ice Prediction Network. Unlike the TS-ConvLSTM, the median SIOs did not show notable improvements as time passed (i.e., as the amount of training data increased). Although the TS-ConvLSTM model has shown potential for an operational sea ice prediction system, learning more spatio-temporal patterns of the difficult-to-predict natural environment should be considered in future work to build a robust prediction system.
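
As a side note on the extent comparison mentioned above, sea ice extent is conventionally computed from a concentration grid using a 15% threshold; the grid shape and cell areas below are placeholder values, not the study's data.

```python
import numpy as np

# Sea-ice extent from a predicted concentration grid: count every grid cell with at
# least 15% ice concentration (dummy 25 km x 25 km cells on a toy polar grid).
concentration = np.random.rand(448, 304)            # predicted September concentration (0-1)
cell_area_km2 = np.full_like(concentration, 625.0)  # area of each grid cell in km^2
extent_km2 = cell_area_km2[concentration >= 0.15].sum()
print(f"Predicted extent: {extent_km2 / 1e6:.2f} million km^2")
```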

Semantic Role Labeling using Biaffine Average Attention Model (Biaffine Average Attention 모델을 이용한 의미역 결정)

  • Nam, Chung-Hyeon;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.5 / pp.662-667 / 2022
  • The semantic role labeling (SRL) task extracts the predicate and its arguments, such as agent, patient, place, and time. Previous SRL studies proposed pipeline methods that extract linguistic features of a sentence, but errors at each extraction step in the pipeline degrade labeling performance. Therefore, methods using end-to-end neural network models have recently been proposed. In this paper, we propose a neural network model using the Biaffine Average Attention model for the SRL task. Instead of the LSTM models of previous studies, which rely on local context to predict a specific token, the proposed model can attend to the entire sentence regardless of the distance between the predicate and its arguments. For evaluation, we compared the proposed model against BERT-based models from existing studies using F1 scores and achieved 76.21%, higher than the comparison models.
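
A minimal sketch of a biaffine attention scorer of the kind referred to above: every (predicate token, argument token) pair receives a score per role label, so prediction does not depend on the distance between predicate and argument. The dimensions and label count are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class BiaffineScorer(nn.Module):
    """Biaffine scorer over all predicate-argument token pairs."""
    def __init__(self, dim=256, n_labels=20):
        super().__init__()
        # +1 to append a bias term to each token representation
        self.U = nn.Parameter(torch.randn(n_labels, dim + 1, dim + 1) * 0.01)

    def forward(self, pred_repr, arg_repr):
        # pred_repr, arg_repr: (B, T, dim) token representations (e.g. from BERT)
        ones = torch.ones(*pred_repr.shape[:2], 1)
        p = torch.cat([pred_repr, ones], dim=-1)
        a = torch.cat([arg_repr, ones], dim=-1)
        # scores: (B, n_labels, T_pred, T_arg)
        return torch.einsum('bid,ldk,bjk->blij', p, self.U, a)

scores = BiaffineScorer()(torch.randn(2, 12, 256), torch.randn(2, 12, 256))
```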

Attention-based Next Utterance Classification in Dialogue System (Attention 기반의 대화 발화 예측 모델)

  • Whang, Taesun;Lee, Dongyub;Lim, Hueiseok
    • Annual Conference on Human and Language Technology / 2018.10a / pp.40-43 / 2018
  • Next Utterance Classification is the task of predicting, among candidate answers, the utterance that comes next in a multi-turn dialogue. The previously proposed LSTM-based dual-encoder model has two problems: it does not consider the relationship between the dialogue and the answer utterance, and it loses intermediate information when the dialogue is very long. To address both problems, this work proposes word-level attention through an ESIM structure and sentence-level attention over each dialogue turn. Experiments on 5,000 validation dialogues yielded a 1-in-100 Recall@1 of 37.64%, roughly twice the performance of the existing model.
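
The snippet below sketches ESIM-style word-level cross attention between a dialogue context and a candidate response, the first of the two attention levels mentioned above; the turn-level sentence attention is not shown, and all tensor sizes are illustrative.

```python
import torch

def cross_attention(context, response):
    """ESIM-style soft alignment: each word attends over the words of the other side."""
    # context: (B, Tc, d), response: (B, Tr, d)
    scores = torch.bmm(context, response.transpose(1, 2))                  # (B, Tc, Tr)
    ctx_aligned = torch.bmm(torch.softmax(scores, dim=2), response)        # context words attend over response
    rsp_aligned = torch.bmm(torch.softmax(scores, dim=1).transpose(1, 2), context)
    return ctx_aligned, rsp_aligned

c, r = torch.randn(4, 50, 300), torch.randn(4, 20, 300)
ca, ra = cross_attention(c, r)    # (4, 50, 300), (4, 20, 300)
```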


A Study on the Classification of Fault Motors using Sound Data (소리 데이터를 이용한 불량 모터 분류에 관한 연구)

  • Il-Sik, Chang;Gooman, Park
    • Journal of Broadcast Engineering / v.27 no.6 / pp.885-896 / 2022
  • Motor failure in manufacturing has a major impact on after-sales service and reliability. Motor failure is detected by measuring sound, current, and vibration. The data used in this paper are sounds from a car side-mirror motor gearbox, and the motor sounds consist of three classes. The sound data are converted into mel spectrograms and fed into the network model. In this paper, various methods were applied to improve fault-motor classification performance: data augmentation, and several remedies for class imbalance, namely resampling, reweighting, changing the loss function, and a two-stage approach of representation learning followed by classification. In addition, curriculum learning and self-paced learning were compared across five network models (Bidirectional LSTM Attention, Convolutional Recurrent Neural Network, Multi-Head Attention, Bidirectional Temporal Convolution Network, and Convolutional Neural Network), and the optimal configuration for motor sound classification was found.
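
One of the class-imbalance remedies listed above, loss reweighting, can be sketched as follows with inverse-frequency class weights; the class counts are made-up placeholders, not the paper's data.

```python
import torch
import torch.nn as nn

# Inverse-frequency weights for the three motor-sound classes (placeholder counts).
counts = torch.tensor([800.0, 120.0, 80.0])
weights = counts.sum() / (len(counts) * counts)     # rarer classes get larger weights
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(16, 3)                         # model outputs for a batch
labels = torch.randint(0, 3, (16,))
loss = criterion(logits, labels)                    # reweighted classification loss
```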