• Title/Abstract/Keywords: attention module

245 search results

초고해상도 복원에서 성능 향상을 위한 다양한 Attention 연구 (A Study on Various Attention for Improving Performance in Single Image Super Resolution)

  • 문환복;윤상민
    • 방송공학회논문지
    • /
    • Vol. 25, No. 6
    • /
    • pp.898-910
    • /
    • 2020
  • Owing to the importance and broad applicability of single-image super-resolution in computer vision, the field has been studied extensively, and with the recent surge of interest in deep learning, deep-learning-based single-image super-resolution research is now very active. Most of this work has focused on network architecture, loss functions, and training methods to improve restoration performance. Meanwhile, attention modules, which emphasize extracted feature maps to improve super-resolution performance without deepening the network, have been applied across many fields. An attention module emphasizes and scales the feature information that suits the network's objective from various perspectives. In this paper, we design channel attention and spatial attention of various structures on top of a super-resolution network, design multi-attention-module structures to emphasize feature maps from multiple perspectives, and analyze and compare their performance.
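As a rough illustration of the two attention types the paper compares, the following NumPy sketch gates a C×H×W feature map first per channel (squeeze-and-excitation style) and then per spatial location. It is a minimal sketch of the general mechanism, not the paper's architecture; the random MLP weights stand in for learned parameters.

```python
import numpy as np

def channel_attention(feat, reduction=4):
    """Channel attention: squeeze spatial dims to one descriptor per
    channel, run a small bottleneck MLP, and rescale each channel.
    The MLP weights are random placeholders for learned parameters."""
    c = feat.shape[0]
    z = feat.mean(axis=(1, 2))                                  # (C,)
    rng = np.random.default_rng(0)
    w1 = 0.1 * rng.standard_normal((c // reduction, c))
    w2 = 0.1 * rng.standard_normal((c, c // reduction))
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))   # sigmoid gate, (C,)
    return feat * s[:, None, None]

def spatial_attention(feat):
    """Spatial attention: average over channels and gate each (h, w)
    location with a sigmoid of that channel-pooled map."""
    a = 1.0 / (1.0 + np.exp(-feat.mean(axis=0)))                # (H, W)
    return feat * a[None, :, :]

feat = np.random.default_rng(1).standard_normal((8, 4, 4))
out = spatial_attention(channel_attention(feat))
print(out.shape)  # (8, 4, 4)
```

Stacking the two gates, as here, is one of the multi-attention arrangements such a study can compare against channel-only or spatial-only variants.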

Crack detection based on ResNet with spatial attention

  • Yang, Qiaoning;Jiang, Si;Chen, Juan;Lin, Weiguo
    • Computers and Concrete
    • /
    • Vol. 26, No. 5
    • /
    • pp.411-420
    • /
    • 2020
  • Deep convolutional neural networks (DCNNs) have been widely used in the health maintenance of civil infrastructure, and using them to improve crack detection performance has attracted many researchers' attention. In this paper, a light-weight spatial attention network module is proposed to strengthen the representation capability of ResNet and improve crack detection performance. It uses an attention mechanism to strengthen the objects of interest within the global receptive field of the ResNet convolution layers. Global average spatial information over all channels is used to construct an attention scalar, which is combined with an adaptively weighted sigmoid function to activate the output of each channel's feature maps, refining the salient objects in those maps. The proposed spatial attention module is stacked into ResNet50 to detect cracks. Experimental results show that the proposed module achieves a significant performance improvement in crack detection.
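The gating described above (a per-channel attention scalar from globally averaged spatial information, passed through a weighted sigmoid) can be sketched roughly as follows; `alpha` and `beta` are hypothetical stand-ins for the adaptive weights, which are learned in the paper.

```python
import numpy as np

def scalar_spatial_attention(feat, alpha=1.0, beta=0.0):
    """Build one attention scalar per channel from globally averaged
    spatial information, then gate that channel's feature map through
    a weighted sigmoid. alpha/beta stand in for learned weights."""
    g = feat.mean(axis=(1, 2))                       # global average, (C,)
    s = 1.0 / (1.0 + np.exp(-(alpha * g + beta)))    # weighted sigmoid gate
    return feat * s[:, None, None]

feat = np.random.default_rng(0).standard_normal((16, 8, 8))
refined = scalar_spatial_attention(feat)
print(refined.shape)  # (16, 8, 8)
```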

MLSE-Net: Multi-level Semantic Enriched Network for Medical Image Segmentation

  • Di Gai;Heng Luo;Jing He;Pengxiang Su;Zheng Huang;Song Zhang;Zhijun Tu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 17, No. 9
    • /
    • pp.2458-2482
    • /
    • 2023
  • Medical image segmentation techniques based on convolutional neural networks tend to extract redundant parameters and localize targets poorly, which results in segmentations too inaccurate to assist doctors in diagnosis. In this paper, we propose a multi-level semantic-rich encoder-decoder network consisting of a Pooling-Conv-Former (PCFormer) module and a Cbam-Dilated-Transformer (CDT) module. The PCFormer module tackles the parameter explosion of the conventional transformer and compensates for the feature loss of the down-sampling process. In the CDT module, the Cbam attention module highlights feature regions by implicitly blending the intersection of attention mechanisms, and the Dilated convolution-Concat (DCC) module is designed as a parallel concatenation of multiple atrous convolution blocks to explicitly expand the perceptual field. In addition, a MultiHead Attention-DwConv-Transformer (MDTransformer) module is used to clearly distinguish the target region from the background. Extensive experiments on medical image segmentation with the Glas, SIIM-ACR, ISIC, and LGG datasets demonstrate that the proposed network outperforms existing advanced methods in both objective evaluation and subjective visual quality.
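The DCC idea of running several atrous (dilated) convolutions in parallel and concatenating them can be sketched as below; the naive single-channel convolution and the uniform 3×3 kernel are illustrative simplifications, not the paper's implementation.

```python
import numpy as np

def dilated_conv3x3(x, kernel, dilation):
    """'Same'-padded 3x3 convolution of a single-channel map at a given
    dilation rate (naive loops for clarity, not speed)."""
    h, w = x.shape
    xp = np.pad(x, dilation)
    out = np.zeros_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            for ki in range(3):
                for kj in range(3):
                    out[i, j] += kernel[ki, kj] * xp[i + ki * dilation,
                                                     j + kj * dilation]
    return out

def dcc_block(x, rates=(1, 2, 4)):
    """Parallel atrous branches stacked along a new channel axis:
    each larger rate sees a wider receptive field over the same input."""
    kernel = np.full((3, 3), 1.0 / 9.0)   # uniform kernel as a placeholder
    return np.stack([dilated_conv3x3(x, kernel, r) for r in rates])

x = np.random.default_rng(0).standard_normal((10, 10))
y = dcc_block(x)
print(y.shape)  # (3, 10, 10)
```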

An Efficient Monocular Depth Prediction Network Using Coordinate Attention and Feature Fusion

  • Xu, Huihui;Li, Fei
    • Journal of Information Processing Systems
    • /
    • Vol. 18, No. 6
    • /
    • pp.794-802
    • /
    • 2022
  • The recovery of reasonable depth information from different scenes is a popular topic in the field of computer vision. To generate depth maps with better details, we present an effective monocular depth prediction framework with coordinate attention and feature fusion. Specifically, the proposed framework contains attention, multi-scale, and feature fusion modules. The attention module improves features based on coordinate attention to enhance the prediction, whereas the multi-scale module integrates useful low- and high-level contextual features at higher resolution. Moreover, we developed a feature fusion module to combine the heterogeneous features into high-quality depth outputs. We also designed a hybrid loss function that measures prediction errors from the perspective of depth and scale-invariant gradients, which helps preserve rich details. We conducted experiments on public RGBD datasets, and the evaluation results show that the proposed scheme considerably improves depth prediction accuracy, achieving 0.051 for log10 and 0.992 for δ < 1.25³ on the NYUv2 dataset.
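Coordinate attention, loosely, factorizes the pooling along the two spatial axes so each gate retains positional information along one axis. A minimal sketch, omitting the learned transforms of the real module:

```python
import numpy as np

def coordinate_attention(feat):
    """Pool over width to get a (C, H) gate and over height to get a
    (C, W) gate, then rescale the map with both, so each gate keeps
    position information along one axis."""
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    a_h = sigmoid(feat.mean(axis=2))     # (C, H): aware of height positions
    a_w = sigmoid(feat.mean(axis=1))     # (C, W): aware of width positions
    return feat * a_h[:, :, None] * a_w[:, None, :]

feat = np.random.default_rng(0).standard_normal((8, 6, 5))
out = coordinate_attention(feat)
print(out.shape)  # (8, 6, 5)
```

The axis-wise pooling is what distinguishes this from plain channel attention, which collapses both spatial dimensions into a single scalar per channel.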

Attention 기법에 기반한 적대적 공격의 강건성 향상 연구 (Improving Adversarial Robustness via Attention)

  • 김재욱;오명교;박래현;권태경
    • 정보보호학회논문지
    • /
    • Vol. 33, No. 4
    • /
    • pp.621-631
    • /
    • 2023
  • Adversarial training improves the robustness of deep learning models against adversarial examples. However, existing adversarial training techniques focus only on the adversarial loss function, overlooking the fact that even a small perturbation at the input stage causes large changes in the hidden-layer features. As a result, accuracy drops in various situations the model was not trained for, such as clean samples or other attack techniques. Addressing this requires analyzing model architectures that improve the feature representation capability. In this paper, we apply an attention module that generates an attention map of the input image to a standard model and perform PGD adversarial training. On the CIFAR-10 dataset, the proposed technique achieved higher accuracy on adversarial examples than an adversarially trained baseline, regardless of network architecture. In particular, our approach was more robust against various attacks such as PGD, FGSM, and BIM, and against stronger adversaries. Furthermore, by visualizing attention maps, we confirmed that the attention module extracts features of the correct class even for adversarial examples.
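For reference, the PGD attack used in such adversarial training is an iterated signed-gradient step with projection back into an L∞ ball; `grad_fn` and the toy quadratic loss below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.03, alpha=0.01, steps=10):
    """Projected gradient descent: repeatedly step along the sign of the
    input gradient, then clip back into the eps-ball around x and into
    the valid pixel range [0, 1]."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # L-infinity projection
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep a valid image
    return x_adv

# Toy loss gradient: pointing toward a target w, so the attack ascends
# toward w but never leaves the eps-ball around the clean input.
x = np.full(4, 0.5)
w = np.array([1.0, 0.0, 1.0, 0.0])
adv = pgd_attack(x, lambda z: w - z)
print(np.abs(adv - x).max() <= 0.03)  # True
```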

Dual Attention Based Image Pyramid Network for Object Detection

  • Dong, Xiang;Li, Feng;Bai, Huihui;Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 15, No. 12
    • /
    • pp.4439-4455
    • /
    • 2021
  • Compared with two-stage object detection algorithms, one-stage algorithms provide a better trade-off between real-time performance and accuracy. However, these methods treat intermediate features equally, lacking the flexibility to emphasize information that is meaningful for classification and localization, and they ignore the interaction of contextual information across scales, which is important for detecting medium and small objects. To tackle these problems, we propose an image pyramid network based on a dual attention mechanism (DAIPNet), which builds an image pyramid to enrich spatial information while emphasizing multi-scale informative features for one-stage object detection. Our framework uses a pre-trained backbone as the standard detection network, with the designed image pyramid network (IPN) serving as an auxiliary network that provides complementary information. The dual attention mechanism is composed of the adaptive feature fusion module (AFFM) and the progressive attention fusion module (PAFM). AFFM automatically attends to feature maps of different importance from the backbone and auxiliary network, while PAFM adaptively learns channel-attentive information in the context transfer process. Furthermore, in the IPN, we build an image pyramid to extract scale-wise features from downsampled images of different scales; these features are fused at different stages to enrich scale-wise information and learn more comprehensive feature representations. Experimental results are reported on the MS COCO dataset: our detector with a 300 × 300 input achieves 32.6% mAP on MS COCO test-dev, superior to state-of-the-art methods.
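The AFFM idea of weighting feature maps from the backbone and the auxiliary network by importance can be sketched as a softmax-weighted sum; the scalar `logits` below are placeholders for what the paper learns.

```python
import numpy as np

def adaptive_feature_fusion(feats, logits):
    """Fuse same-shaped feature maps with softmax-normalized scalar
    weights; the logits are placeholders for learned importances."""
    w = np.exp(logits - logits.max())   # stable softmax
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, feats))

backbone = np.ones((4, 8, 8))
auxiliary = 2.0 * np.ones((4, 8, 8))
fused = adaptive_feature_fusion([backbone, auxiliary], np.array([0.0, 0.0]))
print(fused[0, 0, 0])  # 1.5 (equal weights average the two sources)
```

Skewing the logits toward one source lets the fusion suppress the less informative map, which is the behavior an adaptive fusion module learns end to end.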

DA-Res2Net: a novel Densely connected residual Attention network for image semantic segmentation

  • Zhao, Xiaopin;Liu, Weibin;Xing, Weiwei;Wei, Xiang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 11
    • /
    • pp.4426-4442
    • /
    • 2020
  • As scene segmentation becomes a hot topic in autonomous driving and medical image analysis, researchers are actively trying new methods to improve segmentation accuracy. At present, the main issues in image semantic segmentation are intra-class inconsistency and inter-class indistinction; from our analysis, the two main causes are the lack of global information and of macroscopic discrimination of the object. In this paper, we propose a Densely connected residual Attention network (DA-Res2Net), consisting of a dense residual network and a channel attention guidance module, to address these problems and improve segmentation accuracy. Specifically, to equip the extracted features with stronger multi-scale characteristics, a densely connected residual network is proposed as the feature extractor. Furthermore, to improve the representativeness of each channel feature, we design a Channel-Attention-Guide module that makes the model focus on high-level semantic features and low-level location features simultaneously. Experimental results show that the method performs strongly on various datasets: compared with other state-of-the-art methods, it reaches a mean IoU of 83.2% on PASCAL VOC 2012 and 79.7% on Cityscapes.

특징기반 주의 모듈을 사용하는 CMOS 디지털 이미지 센서 (A CMOS Digital Image Sensor with a Feature-Driven Attention Module)

  • 박민철;최경주
    • 정보처리학회논문지B
    • /
    • Vol. 15B, No. 3
    • /
    • pp.189-196
    • /
    • 2008
  • This paper introduces a CMOS digital image sensor composed of an A/D converter, a motion-prediction circuit, and an attention module for region-of-interest (ROI) detection. In the presented sensor, the A/D converter and motion prediction are implemented in hardware as a 0.6 μm CMOS processing circuit, while ROI detection is implemented in software as the attention module. Because the sensor responds to intensity changes and uses temporal information to predict motion, its applications are limited. To extend those applications while preserving its essential character as a sensor, we endow the image sensor with a cognitive capability by employing a feature-driven attention module for still images and video. With this approach, the sensor can perform additional functions even when no motion is predicted or no intensity change is detected. Experimental results confirm the effectiveness of the implemented sensor and its extensibility to various fields.

Balanced Attention Mechanism을 활용한 CG/VR 영상의 초해상화 (CG/VR Image Super-Resolution Using Balanced Attention Mechanism)

  • 김소원;박한훈
    • 융합신호처리학회논문지
    • /
    • Vol. 22, No. 4
    • /
    • pp.156-163
    • /
    • 2021
  • Attention mechanisms are used in various deep-learning-based computer vision systems and have also been applied to deep learning models for super-resolution. However, most attention-based super-resolution methods have been studied only on real images, so it is unclear whether attention-based super-resolution is also effective for CG or VR images. In this paper, we apply the recently proposed BAM (Balanced Attention Mechanism) module to twelve super-resolution deep learning models and examine whether it also improves performance on CG and VR images. The experiments show that the BAM module contributed to super-resolution performance gains on CG and VR images only in limited cases, and that the degree of improvement varies with the data characteristics, data size, and network type.

Multi-level Cross-attention Siamese Network For Visual Object Tracking

  • Zhang, Jianwei;Wang, Jingchao;Zhang, Huanlong;Miao, Mengen;Cai, Zengyu;Chen, Fuguo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 16, No. 12
    • /
    • pp.3976-3990
    • /
    • 2022
  • Currently, cross-attention is widely used in Siamese trackers to replace traditional correlation operations for feature fusion between the template and the search region, since it establishes the similarity relationship between the target and the search region better and thus enables more robust visual object tracking. But existing trackers using cross-attention focus only on the rich semantic information of high-level features while ignoring the appearance information contained in low-level features, which makes them vulnerable to interference from similar objects. In this paper, we propose a Multi-level Cross-attention Siamese network (MCSiam) to aggregate semantic and appearance information at the same time. Specifically, a multi-level cross-attention module is designed to fuse the multi-layer features extracted from the backbone, integrating template and search-region features at different levels so that rich appearance and semantic information can be used for tracking simultaneously. In addition, before cross-attention, a target-aware module is introduced to enhance the target feature and alleviate interference, which lets the multi-level cross-attention module fuse the information of the target and the search region more efficiently. We test MCSiam on four tracking benchmarks, and the results show that the proposed tracker achieves performance comparable to state-of-the-art trackers.
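Stripped of the learned projections, the cross-attention at the heart of such trackers is scaled dot-product attention in which search-region tokens query the template tokens; a minimal NumPy sketch:

```python
import numpy as np

def cross_attention(template, search):
    """Search tokens act as queries over template tokens (keys/values):
    similarity scores -> row-wise softmax -> weighted sum of template
    features. Learned Q/K/V projections are omitted for brevity."""
    d = template.shape[-1]
    scores = search @ template.T / np.sqrt(d)          # (Ns, Nt)
    scores -= scores.max(axis=-1, keepdims=True)       # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)           # rows sum to 1
    return attn @ template                             # (Ns, d)

template = np.random.default_rng(0).standard_normal((5, 16))  # Nt=5 tokens
search = np.random.default_rng(1).standard_normal((7, 16))    # Ns=7 tokens
out = cross_attention(template, search)
print(out.shape)  # (7, 16)
```

A multi-level variant, as in the paper, would run this fusion on features taken from several backbone depths and combine the results.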