• Title/Summary/Keyword: local attention mechanism


Linear-Time Korean Morphological Analysis Using an Action-based Local Monotonic Attention Mechanism

  • Hwang, Hyunsun;Lee, Changki
    • ETRI Journal / v.42 no.1 / pp.101-107 / 2020
  • For Korean language processing, morphological analysis is a critical component that requires extensive work. Morphological analysis can be performed end to end, without complicated feature engineering, using a sequence-to-sequence model. However, a sequence-to-sequence model with an attention mechanism has a time complexity of O(n²) for an input of length n. In this study, we propose a linear-time Korean morphological analysis model using a local monotonic attention mechanism that exploits monotonic alignment, a characteristic of Korean morphological analysis. The proposed model achieves a substantial speed improvement in a single-threaded environment and a high morpheme-level F1-measure, even for a hard-attention variant that eliminates the attention-weight computation.
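The core idea of the abstract above can be sketched in a few lines: attend only to a small window around a monotonically advancing source position, so each decoding step costs O(window) rather than O(n). This is a minimal numpy illustration under assumed names (`local_monotonic_attention`, `window`), not the authors' implementation.

```python
import numpy as np

def local_monotonic_attention(encoder_states, query, position, window=3):
    """Attend only to a small window around a monotonically advancing
    source position, so each decoding step costs O(window) instead of O(n)."""
    n = encoder_states.shape[0]
    lo = max(0, position - window // 2)
    hi = min(n, lo + window)
    scores = encoder_states[lo:hi] @ query        # local dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over the window only
    context = weights @ encoder_states[lo:hi]     # weighted local context
    return context, weights

# toy run: 6 encoder states of dimension 4, decoder aligned to source position 2
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))
q = rng.normal(size=4)
context, weights = local_monotonic_attention(H, q, position=2)
```

Because the window size is a constant, a full pass over an n-character input touches O(n) score entries in total, which is the linear-time property the paper targets.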

Industrial Process Monitoring and Fault Diagnosis Based on Temporal Attention Augmented Deep Network

  • Mu, Ke;Luo, Lin;Wang, Qiao;Mao, Fushun
    • Journal of Information Processing Systems / v.17 no.2 / pp.242-252 / 2021
  • Following the intuition that local information at individual time instances is hardly incorporated into the posterior sequence in long short-term memory (LSTM), this paper proposes an attention-augmented mechanism for fault diagnosis of complex chemical process data. Unlike conventional fault diagnosis and classification methods, an attention-mechanism layer is introduced to detect and focus on local temporal information. The augmented deep network preserves each local instance's importance and contribution, allowing interpretable feature representation and classification simultaneously. Comprehensive comparative analyses demonstrate that the developed model achieves a high-quality fault classification rate of 95.49% on average, comparable to results obtained using various other techniques on the Tennessee Eastman benchmark process.
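The temporal-attention layer described above can be reduced to a small sketch: score each time step, softmax over time, and pool, so each local instance's contribution stays explicit in the weights. All names here (`temporal_attention_pool`, the random scoring vector `v`) are illustrative stand-ins, not the paper's architecture.

```python
import numpy as np

def temporal_attention_pool(hidden_states, score_vector):
    """Score each time step, softmax over time, and pool the sequence so
    each local instance's contribution is kept explicit in the weights."""
    scores = hidden_states @ score_vector          # one score per time step
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                           # attention over time
    pooled = alpha @ hidden_states                 # weighted sequence summary
    return pooled, alpha

T, D = 8, 5
rng = np.random.default_rng(1)
H = rng.normal(size=(T, D))    # stand-in for LSTM hidden states h_1..h_T
v = rng.normal(size=D)         # scoring vector (learned in practice, random here)
pooled, alpha = temporal_attention_pool(H, v)
```

The vector `alpha` is what makes the model interpretable: inspecting it shows which time instances drove a given fault classification.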

Attention-based Multiscale Fusion for Underwater Image Enhancement

  • Huang, Zhixiong;Li, Jinjiang;Hua, Zhen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.544-564 / 2022
  • Underwater images often suffer from color distortion, blurring, and low contrast, caused by the two processes that affect light propagating underwater: absorption and scattering. To cope with the poor quality of underwater images, this paper proposes a multiscale-fusion underwater image enhancement method based on a channel attention mechanism and the local binary pattern (LBP). The network consists of three modules: feature aggregation, image reconstruction, and LBP enhancement. The feature aggregation module aggregates feature information at different scales of the image, and the image reconstruction module restores the output features to high-quality underwater images. The network also introduces a channel attention mechanism so that it pays more attention to channels containing important information, and detail information is protected by superimposing it on the feature information in real time. Experimental results demonstrate that the method produces results with correct colors and complete details, and outperforms existing methods on quantitative metrics.
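One ingredient named above, the local binary pattern, is simple enough to sketch exactly: each interior pixel is encoded as an 8-bit code by thresholding its 8 neighbours against the centre. This is the textbook 3×3 LBP, not the paper's enhancement module.

```python
import numpy as np

def lbp(img):
    """8-neighbour local binary pattern for a 2-D uint8 image: each interior
    pixel is encoded by thresholding its 8 neighbours against the centre."""
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    out = np.zeros_like(center, dtype=np.uint8)
    # clockwise neighbour offsets; each contributes one bit of the code
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        out |= ((neigh >= center).astype(np.uint8) << bit)
    return out

flat = np.full((4, 4), 7, dtype=np.uint8)
codes = lbp(flat)   # every neighbour ties the centre, so every bit is set (255)
```

Because the code depends only on local intensity ordering, the texture map it yields is robust to the global color and contrast shifts typical of underwater imagery, which is why it suits the detail-protection module.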

Super-Resolution Using a Non-Local Sparse Attention (NLSA) Mechanism

  • Kim, Sowon;Park, Hanhoon
    • Journal of the Institute of Convergence Signal Processing / v.23 no.1 / pp.8-14 / 2022
  • With the development of deep learning, super-resolution (SR) methods have moved from simple interpolation to deep learning mechanisms. Deep learning-based SR methods are generally built on convolutional neural networks (CNNs), but recently, SR research using attention mechanisms has been conducted actively. In this paper, we propose an approach to improving SR performance using one such attention mechanism, non-local sparse attention (NLSA). Through experiments, we confirmed that the performance of the existing SR models IMDN, CARN, and OISR-LF-s can be improved by using NLSA.
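The gist of non-local sparse attention is to keep the global receptive field of non-local attention while restricting each query to a small support set of positions. NLSA proper selects that support with locality-sensitive hashing; the sketch below substitutes a simpler top-k-by-similarity rule, so it is an illustration of the sparsity idea, not the published operator.

```python
import numpy as np

def sparse_nonlocal_attention(feats, query_idx, k=3):
    """Non-local attention restricted to a sparse support set: score every
    position, keep only the k most similar, softmax over those alone."""
    q = feats[query_idx]
    scores = feats @ q                     # similarity to every position
    support = np.argsort(scores)[-k:]      # sparse support (top-k stand-in)
    s = scores[support]
    w = np.exp(s - s.max())
    w /= w.sum()                           # softmax over the support only
    return w @ feats[support], support

rng = np.random.default_rng(2)
F = rng.normal(size=(10, 4))               # 10 spatial positions, 4 channels
out, support = sparse_nonlocal_attention(F, query_idx=0, k=3)
```

The aggregation step then costs O(k) per query instead of O(n), which is what makes the operator cheap enough to drop into lightweight SR backbones such as IMDN or CARN.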

Malicious URL Recognition and Detection Using Attention-based CNN-LSTM

  • Peng, Yongfang;Tian, Shengwei;Yu, Long;Lv, Yalong;Wang, Ruijin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5580-5593 / 2019
  • A malicious Uniform Resource Locator (URL) recognition and detection method is proposed that combines an attention mechanism with a convolutional neural network and a long short-term memory network (Attention-based CNN-LSTM). First, the WHOIS check method is used to extract and filter features, including URL texture information, URL string attribute statistics, and WHOIS information; the features are then encoded, pre-processed, and fed to the convolution layer of the constructed Convolutional Neural Network (CNN) to extract local features. Second, weighted by the attention mechanism, the local features are input into the Long Short-Term Memory (LSTM) model and then pooled to compute the global features of the URLs. Finally, the URLs are detected and classified from the global features by the softmax function. The results demonstrate that, compared with existing methods, the Attention-based CNN-LSTM mechanism achieves higher accuracy for malicious URL detection.

Deep Learning-based Deraining: Performance Comparison and Trends

  • Cho, Minji;Park, Ye-In;Cho, Yubin;Kang, Suk-Ju
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.5 / pp.225-232 / 2021
  • Deraining is an image restoration task that must balance local details against broad contextual information while recovering images. Current studies adopt attention mechanisms, which have been actively researched in natural language processing, to handle both global and local features. This paper classifies existing deraining methods and provides a comparative analysis and performance comparison across several datasets in terms of generalization.

Attention Capsule Network for Aspect-Level Sentiment Classification

  • Deng, Yu;Lei, Hang;Li, Xiaoyu;Lin, Yiou;Cheng, Wangchi;Yang, Shan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1275-1292 / 2021
  • As a fine-grained classification problem, aspect-level sentiment classification predicts the sentiment polarity for different aspects in context. To address this issue, researchers have widely used attention mechanisms to model the relationship between context and aspects. Still, it is difficult to obtain a sufficiently deep semantic representation, and the strong correlation between local context features and aspect-based sentiment is rarely considered. In this paper, a hybrid attention capsule network for aspect-level sentiment classification (ABASCap) is proposed. In this model, multi-head self-attention is improved, and a context mask mechanism based on an adjustable context window is proposed to capture the internal association between aspects and context. Moreover, the dynamic routing algorithm and activation function in the capsule network are optimized to meet the task requirements. Finally, extensive experiments were conducted on three benchmark datasets from different domains. ABASCap achieved better classification results than other baseline models and, after incorporating pre-trained BERT, outperformed the state-of-the-art methods on this task.
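The "context mask with an adjustable window" can be sketched as a boolean mask applied to self-attention scores before the softmax: tokens outside the window around the aspect term receive -inf and thus zero weight. This is a generic single-head sketch under assumed names (`window_mask`, `masked_self_attention`), not the ABASCap layer.

```python
import numpy as np

def window_mask(n, aspect_pos, window):
    """Keep only tokens within `window` positions of the aspect term."""
    return np.abs(np.arange(n) - aspect_pos) <= window

def masked_self_attention(X, mask):
    """Single-head self-attention where masked-out key positions get -inf
    scores before the softmax, confining attention to the local context."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    scores = np.where(mask[None, :], scores, -np.inf)
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)      # rows sum to 1 over kept positions
    return a @ X

rng = np.random.default_rng(3)
X = rng.normal(size=(7, 4))                  # 7 tokens, 4-dim features
m = window_mask(7, aspect_pos=3, window=2)   # keeps positions 1..5
out = masked_self_attention(X, m)
```

Making `window` a tunable hyperparameter is what lets the model trade off how much surrounding context each aspect is allowed to attend to.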

Passive Sonar Signal Classification Using an Attention-based Gated Recurrent Unit

  • Kibae Lee;Guhn Hyeok Ko;Chong Hyun Lee
    • The Journal of the Acoustical Society of Korea / v.42 no.4 / pp.345-356 / 2023
  • The target signal of passive sonar shows narrowband harmonic characteristics, with intensity variations within a few seconds and long-term frequency variation due to the Lloyd's mirror effect. We propose a signal classification algorithm based on the Gated Recurrent Unit (GRU) that learns local and global time-series features. The proposed algorithm implements a multi-layer GRU network and extracts local and global time-series features via dilated connections. An attention mechanism is learned to weight the time-series features and classify passive sonar signals. In experiments using public underwater acoustic data, the proposed network showed a superior classification accuracy of 96.50%, which is 4.17% higher than that of an existing skip-connected GRU network.
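The role of dilation in separating local from global time-series behaviour can be shown with a much simpler stand-in than a dilated GRU: d-step differences of a series at several dilation rates, where small d captures short-term variation and large d the long-term trend. This is only a loose illustration of the principle, with names of my own choosing; the paper's dilated connections operate inside the recurrent network itself.

```python
import numpy as np

def dilated_difference_features(x, dilations=(1, 2, 4)):
    """d-step differences of a 1-D series at several dilation rates: small
    rates capture local variation, large rates the long-term trend."""
    feats = [x[d:] - x[:-d] for d in dilations]
    L = min(len(f) for f in feats)           # align lengths on the right
    return np.stack([f[-L:] for f in feats], axis=1)

x = np.arange(10.0)                          # toy series with constant slope 1
F = dilated_difference_features(x)           # column d holds the d-step change
```

On this constant-slope toy series the three columns are simply 1, 2, and 4 everywhere; on a real tonal track they would diverge, and the attention layer then learns which scale to weight for classification.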

Generative Adversarial Networks for High-Quality Single-Image Generation

  • Zhao, Liquan;Zhang, Yupeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.12 / pp.4326-4344 / 2021
  • SinGAN is a generative adversarial network that can be trained on a single natural image. However, it has a poor ability to learn global features from that image, and it loses much local detail when generating samples of arbitrary size. To solve this problem, a non-linear function is first proposed to control the downsampling ratio, i.e., the ratio between the size of the current image and that of the next downsampled image, increasing the ratio as the number of downsampling steps grows. This raises the proportion of low-resolution images among all downsampled images; since low-resolution images usually contain much global information, the model can learn more global feature information from them. Second, an attention mechanism is introduced into the generative network to increase the weight of effective image information, so the network learns more local detail. Besides, to make the output image more natural, the TVLoss function is added to the SinGAN loss function to reduce differences between adjacent pixels and smearing in the output image. Extensive experimental results show that the proposed model outperforms other methods at generating random samples of fixed and arbitrary size, image harmonization, and editing.
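The TVLoss term mentioned above has a standard form that is easy to state concretely: sum the differences between adjacent pixels, so minimizing it penalizes pixel-to-pixel jumps and smearing. Total variation is written with either squared or absolute differences; the sketch below uses the absolute (anisotropic) form, and is a generic definition rather than this paper's exact loss.

```python
import numpy as np

def tv_loss(img):
    """Anisotropic total variation: summed absolute differences between
    vertically and horizontally adjacent pixels."""
    dh = np.abs(img[1:, :] - img[:-1, :]).sum()   # vertical neighbours
    dw = np.abs(img[:, 1:] - img[:, :-1]).sum()   # horizontal neighbours
    return dh + dw

flat = np.ones((4, 4))                # perfectly smooth image: zero TV
checker = np.array([[0.0, 1.0],
                    [1.0, 0.0]])      # maximal pixel-to-pixel jumps
```

In training, a small multiple of this term is added to the adversarial loss, so smoothness is encouraged without overriding the generator's main objective.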

A Facial Expression Recognition Method Using Two-Stream Convolutional Networks in Natural Scenes

  • Zhao, Lixin
    • Journal of Information Processing Systems / v.17 no.2 / pp.399-410 / 2021
  • Aiming at the problem that complex external variables in natural scenes strongly affect facial expression recognition results, a facial expression recognition method based on a two-stream convolutional neural network is proposed. The model introduces exponentially enhanced shared input weights before each level of convolution input and applies soft-attention modules to the spatio-temporal features of the combined static and dynamic streams. This enables the network to autonomously find areas more relevant to the expression category and pay more attention to them, suppressing information from irrelevant interference areas. To address the poor local robustness caused by lighting and expression changes, this paper also applies the lighting preprocessing chain algorithm to eliminate most lighting effects. Experimental results on the AFEW6.0 and Multi-PIE datasets show recognition rates of 95.05% and 61.40%, respectively, which are better than those of the other compared methods.