• Title/Abstract/Keywords: attention mechanism

Search results: 771 items (processing time: 0.021 s)

Time-Series Forecasting Based on Multi-Layer Attention Architecture

  • Na Wang;Xianglian Zhao
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 18, No. 1 / pp. 1-14 / 2024
  • Time-series forecasting is used extensively in the real world. Recent research has shown that Transformers, with a self-attention mechanism at their core, perform well on such problems. However, most existing Transformer models for time-series prediction use the traditional encoder-decoder architecture, which is complex, lowers processing efficiency, and limits the ability to mine deep temporal dependencies by increasing model depth. Second, the quadratic computational complexity of the self-attention mechanism further increases computational overhead and reduces processing efficiency. To address these issues, this paper designs an efficient multi-layer attention-based time-series forecasting model with the following characteristics: (i) it abandons the traditional encoder-decoder Transformer architecture and instead builds the model on a multi-layer attention mechanism, improving its ability to mine deep temporal dependencies; (ii) a cross-attention module is designed to enhance information exchange between the historical and predictive sequences; and (iii) a recently proposed sparse attention mechanism is applied to reduce computational overhead and improve processing efficiency. Experiments on multiple datasets show that our model significantly outperforms current advanced Transformer-based methods for time-series forecasting, including LogTrans, Reformer, and Informer.
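The cross-attention module described in (ii) can be sketched as plain scaled dot-product attention in which queries come from the predictive sequence and keys/values come from the historical sequence. A minimal NumPy illustration (shapes and names are illustrative, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # Scaled dot-product attention: queries from the predictive sequence
    # attend over keys/values derived from the historical sequence.
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)    # (L_pred, L_hist)
    return softmax(scores, axis=-1) @ values  # (L_pred, d)

rng = np.random.default_rng(0)
hist = rng.normal(size=(48, 16))   # encoded historical sequence
pred = rng.normal(size=(24, 16))   # predictive-sequence embeddings
out = cross_attention(pred, hist, hist)
print(out.shape)  # (24, 16)
```

Each predicted time step thus aggregates information from the whole history, weighted by feature similarity.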

Linear-Time Korean Morphological Analysis Using an Action-based Local Monotonic Attention Mechanism

  • Hwang, Hyunsun;Lee, Changki
    • ETRI Journal / Vol. 42, No. 1 / pp. 101-107 / 2020
  • For Korean language processing, morphological analysis is a critical component that requires extensive work. Morphological analysis can be conducted end to end, without complicated feature engineering, using a sequence-to-sequence model. However, a sequence-to-sequence model with the attention mechanism used for high performance has a time complexity of O(n²) for an input of length n. In this study, we propose a linear-time Korean morphological analysis model using a local monotonic attention mechanism that relies on monotonic alignment, a characteristic of Korean morphological analysis. The proposed model shows a substantial speed improvement in a single-threaded environment and achieves a high morphological F1-measure even as a hard-attention model that eliminates the attention weight computation.
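The idea of exploiting monotonic alignment can be sketched as follows: each decoding step attends only to a small window around a monotonically advancing source position, giving linear total cost. A minimal NumPy illustration (the window size, shapes, and position rule are assumptions, not the paper's model):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def local_monotonic_attention(enc, dec_queries, window=2):
    # Each decoder step attends only to a fixed-size window centred on a
    # monotonically advancing source position, so the total cost is
    # O(n * window) instead of the O(n^2) of full soft attention.
    n = enc.shape[0]
    contexts = []
    for t, q in enumerate(dec_queries):
        center = min(t, n - 1)  # monotonic alignment assumption
        lo, hi = max(0, center - window), min(n, center + window + 1)
        weights = softmax(enc[lo:hi] @ q)
        contexts.append(weights @ enc[lo:hi])
    return np.stack(contexts)

rng = np.random.default_rng(0)
enc = rng.normal(size=(10, 8))           # encoded source (e.g. syllables)
ctx = local_monotonic_attention(enc, rng.normal(size=(10, 8)))
print(ctx.shape)  # (10, 8)
```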

Simultaneous neural machine translation with a reinforced attention mechanism

  • Lee, YoHan;Shin, JongHun;Kim, YoungKil
    • ETRI Journal / Vol. 43, No. 5 / pp. 775-786 / 2021
  • To translate in real time, a simultaneous translation system must determine when to stop reading source tokens and generate target tokens for the partial source sentence read up to that point. However, conventional attention-based neural machine translation (NMT) models cannot produce translations with adequate latency in online scenarios because they wait until a source sentence is complete before computing the alignment between source and target tokens. To address this issue, we propose a reinforcement learning (RL)-based attention mechanism, the reinforced attention mechanism, which allows a neural translation model to jointly train the stopping criterion and a partial translation model. The proposed attention mechanism comprises two modules, one to ensure translation quality and the other to address latency. Unlike previous RL-based simultaneous translation systems, which learn the stopping criterion from a fixed NMT model, the two modules can be trained jointly with a novel reward function. In our experiments, the proposed model achieves better translation quality with latency comparable to that of previous models.
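The stopping criterion that the RL policy learns can be illustrated with a toy greedy READ/WRITE loop. Here the per-token confidences are stubbed constants and the decision rule is a fixed threshold; the paper instead learns this decision jointly with the NMT model. A purely hypothetical sketch:

```python
def read_write_schedule(conf, threshold=0.5):
    # conf[i][j]: stubbed model confidence for target token j after reading
    # i+1 source tokens (a real system would compute this from the NMT
    # decoder; the paper's RL policy *learns* the stopping decision).
    actions, read, written = [], 0, 0
    n_src, n_tgt = len(conf), len(conf[0])
    while written < n_tgt:
        ready = read > 0 and conf[read - 1][written] >= threshold
        if ready or read == n_src:
            actions.append("WRITE")
            written += 1
        else:
            actions.append("READ")
            read += 1
    return actions

conf = [[0.2, 0.1], [0.7, 0.3], [0.9, 0.8]]
schedule = read_write_schedule(conf)
print(schedule)  # ['READ', 'READ', 'WRITE', 'READ', 'WRITE']
```

Latency corresponds to how many READ actions precede each WRITE; the RL reward trades this off against translation quality.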

CG/VR Image Super-Resolution Using Balanced Attention Mechanism

  • 김소원;박한훈
    • Journal of the Institute of Convergence Signal Processing / Vol. 22, No. 4 / pp. 156-163 / 2021
  • The attention mechanism is used in a variety of deep-learning-based computer vision systems, and it has also been applied to deep learning models for super-resolution. However, most attention-based super-resolution methods have focused only on the super-resolution of real images, so it is unclear whether attention-based super-resolution is also effective for CG or VR images. In this paper, we apply BAM (Balanced Attention Mechanism), a recently proposed attention module, to twelve super-resolution deep learning models and conduct experiments to verify whether it also improves performance on CG and VR images. The experimental results show that the BAM module contributes to the super-resolution performance of CG and VR images only in limited cases, and that the degree of improvement varies with the characteristics and size of the data and the type of network.

Super-Resolution Using a Non-Local Sparse Attention (NLSA) Mechanism

  • 김소원;박한훈
    • Journal of the Institute of Convergence Signal Processing / Vol. 23, No. 1 / pp. 8-14 / 2022
  • With advances in deep learning, super-resolution has moved beyond simple interpolation and is now driven by deep learning. Deep-learning-based super-resolution has generally been studied with convolutional neural networks (CNNs), but recently, super-resolution research using attention mechanisms has been actively pursued. In this paper, we propose a method for improving super-resolution performance using Non-Local Sparse Attention (NLSA), one such attention mechanism. Experiments confirm that adding NLSA improves the performance of the existing super-resolution networks IMDN, CARN, and OISR-LF-s.
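The core idea of sparse non-local attention can be sketched as hashing feature vectors into buckets and running full attention only within each bucket. This LSH-style grouping is one common way to sparsify non-local attention and is illustrative only, not necessarily NLSA's exact bucketing scheme:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_nonlocal_attention(feats, n_bits=2, seed=0):
    # Hash feature vectors into buckets by the sign pattern of random
    # projections, then compute full (non-local) attention only within
    # each bucket, avoiding the quadratic cost over all positions.
    rng = np.random.default_rng(seed)
    n, d = feats.shape
    proj = rng.normal(size=(d, n_bits))
    codes = (feats @ proj > 0).astype(int) @ (2 ** np.arange(n_bits))
    out = np.zeros_like(feats)
    for b in np.unique(codes):
        idx = np.where(codes == b)[0]
        sub = feats[idx]
        out[idx] = softmax(sub @ sub.T / np.sqrt(d), axis=-1) @ sub
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=(64, 8))   # e.g. flattened feature-map positions
y = sparse_nonlocal_attention(x)
print(y.shape)  # (64, 8)
```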

Recovery of underwater images based on the attention mechanism and SOS mechanism

  • Li, Shiwen;Liu, Feng;Wei, Jian
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 8 / pp. 2552-2570 / 2022
  • Underwater images usually suffer from various problems: color cast due to the differing attenuation of light wavelengths in water, darkness caused by the lack of light underwater, and haze caused by the scattering of light. To address these problems, this paper introduces a channel attention mechanism, a strengthen-operate-subtract (SOS) boosting mechanism, and a gated fusion module, on which an underwater image recovery network is built. First, for the color cast problem, the channel attention mechanism incorporated in our model effectively alleviates the color cast of underwater images. Second, to address darkness, the loss function is defined as the similarity between the model's output and the target underwater image after dehazing and color correction, so as to increase image brightness. Finally, the SOS boosting module is employed to eliminate the haze effect. Experiments were carried out to evaluate the model: the qualitative analysis shows that our method effectively recovers underwater images, and in the quantitative analysis it outperforms most comparison methods on various criteria.
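The SOS boosting rule, J_{k+1} = g(I + J_k) - J_k (strengthen the current estimate with the degraded input I, operate on it with a refinement unit g, subtract the previous estimate), can be sketched with a stub refinement unit standing in for the learned dehazing module:

```python
import numpy as np

def sos_boost(degraded, refine, n_iter=3):
    # Strengthen-Operate-Subtract boosting:
    #   J_{k+1} = g(I + J_k) - J_k
    # where I is the degraded input and g the refinement unit.
    est = refine(degraded)
    for _ in range(n_iter - 1):
        est = refine(degraded + est) - est
    return est

def box_blur(img):
    # Stub "refinement unit": a 3-tap box blur along each row. A real
    # network would use a learned dehazing module here.
    padded = np.pad(img, ((0, 0), (1, 1)), mode="edge")
    return (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3.0

img = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # toy single-channel image
out = sos_boost(img, box_blur)
print(out.shape)  # (4, 4)
```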

Attention-based for Multiscale Fusion Underwater Image Enhancement

  • Huang, Zhixiong;Li, Jinjiang;Hua, Zhen
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 2 / pp. 544-564 / 2022
  • Underwater images often suffer from color distortion, blurring, and low contrast because light propagating underwater is affected by two processes: absorption and scattering. To cope with the poor quality of underwater images, this paper proposes a multiscale fusion underwater image enhancement method based on a channel attention mechanism and local binary patterns (LBP). The network consists of three modules: feature aggregation, image reconstruction, and LBP enhancement. The feature aggregation module aggregates feature information at different scales of the image, and the image reconstruction module restores the output features to a high-quality underwater image. The network also introduces a channel attention mechanism so that it pays more attention to channels containing important information, and detail information is preserved by superimposing it on the feature information in real time. Experimental results demonstrate that the proposed method produces results with correct colors and complete details and outperforms existing methods on quantitative metrics.
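The channel attention described here follows the familiar squeeze-and-excitation pattern: global average pooling squeezes each channel to a scalar, a small bottleneck MLP scores the channels, and a sigmoid gate rescales the feature map channel-wise. A minimal sketch with random weights standing in for learned ones:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feats, w1, w2):
    # Squeeze: global average pooling per channel.
    squeeze = feats.mean(axis=(1, 2))                   # (C,)
    # Excite: bottleneck MLP + sigmoid gate in (0, 1).
    gate = sigmoid(w2 @ np.maximum(0.0, w1 @ squeeze))  # (C,)
    # Rescale each channel, emphasising informative ones.
    return feats * gate[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 6, 6, 2
feats = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(C // r, C))   # illustrative bottleneck weights
w2 = rng.normal(size=(C, C // r))
out = channel_attention(feats, w1, w2)
print(out.shape)  # (8, 6, 6)
```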

Crack detection based on ResNet with spatial attention

  • Yang, Qiaoning;Jiang, Si;Chen, Juan;Lin, Weiguo
    • Computers and Concrete / Vol. 26, No. 5 / pp. 411-420 / 2020
  • Deep convolutional neural networks (DCNNs) have been widely used in the health maintenance of civil infrastructure, and using DCNNs to improve crack detection performance has attracted many researchers' attention. In this paper, a lightweight spatial attention network module is proposed to strengthen the representation capability of ResNet and improve crack detection performance. It uses an attention mechanism to strengthen objects of interest within the global receptive field of the ResNet convolution layers. Spatial information globally averaged over all channels is used to construct an attention scalar, which is combined with an adaptively weighted sigmoid function to activate the output of each channel's feature maps, refining the salient objects in the feature maps. The proposed spatial attention module is stacked in ResNet50 to detect cracks. Experimental results show that the proposed module achieves a significant performance improvement in crack detection.
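One plausible reading of this construction (spatial information averaged over all channels forms an attention map, which a weighted sigmoid turns into gates applied to every channel) can be sketched as follows; the fixed weight and bias stand in for the adaptive weighting the paper learns:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feats, weight=1.0, bias=0.0):
    # Average feature responses across all channels to get one saliency
    # value per spatial location, pass it through a (here fixed-parameter)
    # weighted sigmoid, and gate every channel with the resulting map.
    attn = sigmoid(weight * feats.mean(axis=0) + bias)  # (H, W)
    return feats * attn[None, :, :]

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 7, 7))   # (channels, H, W)
refined = spatial_attention(feats)
print(refined.shape)  # (16, 7, 7)
```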

Intra Prediction Method for Depth Picture Using CNN and Attention Mechanism

  • 윤재혁;이동석;윤병주;권순각
    • Journal of the Korea Industrial Information Systems Research / Vol. 29, No. 2 / pp. 35-45 / 2024
  • This paper proposes an intra prediction method for depth pictures using a CNN and an attention mechanism. The proposed method selects reference pixels for each pixel in the block to be predicted. A CNN detects spatial features in the vertical and horizontal directions from the top and left sides of the prediction block, respectively. The two sets of spatial features are merged along the feature dimension and the spatial dimension to predict features for the prediction block and the reference pixels. The attention mechanism then predicts the correlation between the prediction block and the reference pixels from the input spatial features. The correlations predicted by the attention mechanism are restored to the pixel domain through CNN layers to predict the pixel values within the block. When the proposed method is added as an intra mode of VVC, the prediction error decreases by an average of 5.8%.
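The attention step, predicting correlations between block pixels and reference pixels and then reconstructing pixel values, can be sketched as follows, with random feature vectors standing in for the CNN-extracted vertical/horizontal features:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_intra_predict(ref_pixels, ref_feats, block_feats, n):
    # Each pixel of the n x n block attends over the reference pixels
    # (top row and left column): the attention weights model the
    # block/reference correlation, and the predicted pixel is the
    # weighted sum of the reference pixel values.
    d = ref_feats.shape[1]
    w = softmax(block_feats @ ref_feats.T / np.sqrt(d), axis=-1)  # (n*n, R)
    return (w @ ref_pixels).reshape(n, n)

rng = np.random.default_rng(0)
n, d = 4, 8
refs = rng.uniform(0, 255, size=2 * n + 1)    # top row + left column values
ref_feats = rng.normal(size=(2 * n + 1, d))   # stand-ins for CNN features
block_feats = rng.normal(size=(n * n, d))
pred = attention_intra_predict(refs, ref_feats, block_feats, n)
print(pred.shape)  # (4, 4)
```

Because the softmax weights are a convex combination, every predicted pixel lies within the range of the reference pixel values.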

Speech emotion recognition using attention mechanism-based deep neural networks

  • 고상선;조혜승;김형국
    • The Journal of the Acoustical Society of Korea / Vol. 36, No. 6 / pp. 407-412 / 2017
  • This paper proposes a speech emotion recognition method using an attention-mechanism-based deep neural network. The proposed method consists of a deep neural network combining CNN (Convolutional Neural Networks), GRU (Gated Recurrent Unit), and DNN (Deep Neural Networks), together with an attention mechanism. Since the spectrogram of speech contains characteristic patterns that depend on emotion, the proposed method models these patterns effectively using a GCNN (Gabor CNN), which replaces the convolution filters of a standard CNN with tuned Gabor filters. In addition, an attention mechanism based on CNN and FC (fully connected) layers computes attention weights that reflect the context of the extracted features, and these weights are used for emotion recognition. To validate the proposed method, recognition experiments were conducted on six emotions. The experimental results show that the proposed method achieves higher speech emotion recognition performance than existing methods.
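The attention-weighting step (context-aware weights over frame-level features before classification) can be sketched as standard frame-level attention pooling; the scoring parameters here are random stand-ins for the CNN/FC attention layers in the paper:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(frames, w, v):
    # A small scoring network assigns each time frame a context-dependent
    # weight; the utterance-level feature fed to the emotion classifier is
    # the weighted sum of the frame features.
    scores = np.tanh(frames @ w) @ v   # (T,)
    alpha = softmax(scores)            # attention weights, sum to 1
    return alpha @ frames              # (d,)

rng = np.random.default_rng(0)
T, d, h = 20, 12, 6
frames = rng.normal(size=(T, d))   # e.g. GCNN/GRU frame features
w = rng.normal(size=(d, h))        # illustrative scoring parameters
v = rng.normal(size=h)
utt = attention_pool(frames, w, v)
print(utt.shape)  # (12,)
```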