• Title/Abstract/Keyword: Attention Model

Search results: 2,864 items (processing time: 0.025 seconds)

Using similarity based image caption to aid visual question answering

  • 강준서;임창원
    • 응용통계연구 / Vol. 34, No. 2 / pp.191-204 / 2021
  • Visual question answering (VQA) and image captioning are tasks that require understanding both the visual features of an image and the linguistic features of a sentence. Accordingly, co-attention, which connects the image and the text, is key to both tasks. In this paper, we propose a model that improves VQA performance by generating captions with a transformer model pre-trained on the MSCOCO dataset and then exploiting them. Because captions unrelated to the question can actually hinder answering, only a subset of captions similar to the question, selected based on their similarity to the question, is used. In addition, stopwords in the captions, which contribute little to answering or may even interfere with it, were removed before the experiments. We conducted experiments on the VQA-v2 data using the deep modular co-attention network (MCAN), which has shown strong performance by exploiting co-attention between image and text, together with the similarity-selected captions. The results confirm a performance improvement over the baseline MCAN model when the similarity-selected captions are used.
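
A minimal Python sketch of the caption-filtering idea described in this abstract: each generated caption is scored by its similarity to the question, and only the most similar ones are kept after stopword removal. The paper generates captions with a pretrained transformer and feeds them into MCAN; here TF-IDF cosine similarity merely stands in for the similarity measure, and the names, threshold, and example sentences are illustrative assumptions.

```python
# Sketch of similarity-based caption selection with stopword removal.
# TF-IDF cosine similarity is an assumed stand-in for the paper's measure.
from sklearn.feature_extraction.text import TfidfVectorizer, ENGLISH_STOP_WORDS
from sklearn.metrics.pairwise import cosine_similarity


def remove_stopwords(text: str) -> str:
    """Drop stopwords that carry little signal for answering the question."""
    return " ".join(w for w in text.lower().split() if w not in ENGLISH_STOP_WORDS)


def select_captions(question: str, captions: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k captions most similar to the question."""
    cleaned = [remove_stopwords(c) for c in captions]
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([remove_stopwords(question)] + cleaned)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    ranked = sorted(zip(scores, captions), key=lambda p: p[0], reverse=True)
    return [c for _, c in ranked[:top_k]]


if __name__ == "__main__":
    question = "What color is the dog on the sofa?"
    captions = [
        "A brown dog is sleeping on a sofa.",
        "A kitchen with a white refrigerator.",
        "A dog lying next to a person.",
    ]
    print(select_captions(question, captions))
```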

Semi-Supervised Spatial Attention Method for Facial Attribute Editing

  • Yang, Hyeon Seok;Han, Jeong Hoon;Moon, Young Shik
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 10 / pp.3685-3707 / 2021
  • In recent years, facial attribute editing based on generative adversarial networks and encoder-decoder models has been used successfully to change various attributes of face images. However, existing models have a limitation in that they may change an unintended part in the process of changing an attribute or may generate an unnatural result. In this paper, we propose a model that improves the learning of the attention mask by adding a spatial attention mechanism based on the unified selective transfer network (referred to as STGAN) using semi-supervised learning. The proposed model can edit multiple attributes while preserving details independent of the attributes being edited. This study makes two main contributions to the literature. First, we propose an encoder-decoder model structure that learns and edits multiple facial attributes and suppresses distortion using an attention mask. Second, we define guide masks and propose a method and an objective function that use the guide masks for multiple facial attribute editing through semi-supervised learning. Through qualitative and quantitative evaluations of the experimental results, the proposed method was shown to yield better results than existing methods, preserving image details by suppressing unintended changes.
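
A minimal PyTorch sketch of the spatial-attention blending idea mentioned above, assuming a decoder that predicts both an edited image and a single-channel attention mask: the output keeps the original pixels wherever the mask is near zero, which is how attribute-independent details can be preserved. This is not the STGAN-based architecture or the semi-supervised guide-mask objective from the paper; the toy layers and names are assumptions.

```python
# Toy illustration of attention-mask blending for attribute editing.
import torch
import torch.nn as nn


class MaskedEditor(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        # Toy "decoder": one conv produces the edited image, another the mask.
        self.edit_head = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.mask_head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        edited = torch.tanh(self.edit_head(x))
        mask = torch.sigmoid(self.mask_head(x))        # values in (0, 1)
        # Blend: attended regions take the edited pixels, the rest stays intact.
        output = mask * edited + (1.0 - mask) * x
        return output, mask


if __name__ == "__main__":
    model = MaskedEditor()
    image = torch.rand(1, 3, 64, 64) * 2 - 1           # fake input in [-1, 1]
    out, attn = model(image)
    print(out.shape, attn.shape)                       # (1,3,64,64) (1,1,64,64)
```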

Adaptive Attention Annotation Model: Optimizing the Prediction Path through Dependency Fusion

  • Wang, Fangxin;Liu, Jie;Zhang, Shuwu;Zhang, Guixuan;Zheng, Yang;Li, Xiaoqian;Liang, Wei;Li, Yuejun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 9 / pp.4665-4683 / 2019
  • Previous methods build image annotation models by leveraging three basic dependencies: relations between image and label (image/label), between images (image/image), and between labels (label/label). Even though plenty of research shows that multiple dependencies can work jointly to improve annotation performance, different dependencies do not actually "work jointly" in their diagrams, whose performance largely depends on the result predicted by the image/label section. To address this problem, we propose the adaptive attention annotation model (AAAM) to associate these dependencies with the prediction path, which is composed of a series of labels (tags) in the order they are detected. In particular, we optimize the prediction path by detecting the relevant labels from the easy-to-detect to the hard-to-detect, which are found using the Binary Cross-Entropy (BCE) and Triplet Margin (TM) losses, respectively. Besides, in order to capture the information of each label, instead of explicitly extracting regional features, we propose a self-attention mechanism to implicitly enhance the relevant regions and restrain the irrelevant ones. To validate the effectiveness of the model, we conduct experiments on three well-known public datasets, COCO 2014, IAPR TC-12 and NUS-WIDE, and achieve better performance than the state-of-the-art methods.
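
The following hedged sketch shows how the two losses named in the abstract, binary cross-entropy for multi-label prediction and a triplet-margin term for harder labels, can be combined in PyTorch. How AAAM actually schedules and weights the two terms is not stated here, so the balancing factor and the embedding shapes are assumptions.

```python
# Sketch of combining BCE and Triplet Margin losses; weighting is assumed.
import torch
import torch.nn as nn

bce_loss = nn.BCEWithLogitsLoss()
triplet_loss = nn.TripletMarginLoss(margin=1.0)

logits = torch.randn(4, 10)                  # predicted scores for 10 labels
targets = torch.randint(0, 2, (4, 10)).float()

anchor = torch.randn(4, 64)                  # image embedding
positive = torch.randn(4, 64)                # embedding of a relevant label
negative = torch.randn(4, 64)                # embedding of an irrelevant label

lambda_tm = 0.5                              # assumed balancing weight
total = bce_loss(logits, targets) + lambda_tm * triplet_loss(anchor, positive, negative)
print(total.item())
```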

Hybrid-Domain High-Frequency Attention Network for Arbitrary Magnification Super-Resolution

  • 윤준석;이성진;유석봉;한승회
    • 한국정보통신학회논문지 / Vol. 25, No. 11 / pp.1477-1485 / 2021
  • Recent image super-resolution research has focused mostly on models that support only integer scale factors. However, in representative application areas of super-resolution such as object-of-interest recognition and display quality enhancement, there is a growing need for arbitrary magnification, including fractional scale factors. In this paper, we propose a model that performs arbitrary-magnification super-resolution by reusing the weights of an existing integer-scale model. The model transforms the high-quality super-resolved result produced at an integer scale into the DCT spectral domain and expands the space for arbitrary magnification. To reduce the loss of high-frequency image information caused by the expansion in the DCT spectral domain, we propose a high-frequency attention network that can properly restore high-frequency spectral information. To generate the high-frequency information correctly, the proposed network employs channel attention, a layer that learns correlations between the RGB channels, and improves performance by deepening the model with a residual learning structure.
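
A minimal PyTorch sketch of a channel-attention layer of the kind referred to above: global average pooling summarizes each channel, a small bottleneck learns inter-channel correlations, and the resulting per-channel weights rescale the feature map. The DCT-domain expansion and the residual network depth used in the paper are not reproduced; the layer sizes are assumptions.

```python
# Squeeze-and-excitation style channel attention; sizes are illustrative.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.fc(self.pool(x))      # one weight per channel
        return x * weights                   # rescale channels, shape preserved


if __name__ == "__main__":
    features = torch.randn(1, 64, 32, 32)
    print(ChannelAttention(64)(features).shape)   # torch.Size([1, 64, 32, 32])
```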

Aspect-Based Sentiment Analysis with Position Embedding Interactive Attention Network

  • Xiang, Yan;Zhang, Jiqun;Zhang, Zhoubin;Yu, Zhengtao;Xian, Yantuan
    • Journal of Information Processing Systems / Vol. 18, No. 5 / pp.614-627 / 2022
  • Aspect-based sentiment analysis aims to discover the sentiment polarity towards an aspect from user-generated natural language. So far, most methods only use the implicit position information of the aspect in the context, instead of directly utilizing the positional relationship between the aspect and the sentiment terms. In fact, words neighboring the aspect terms should be given more attention than other words in the context. This paper studies the influence of different position embedding methods on the sentiment polarities of given aspects, and proposes a position embedding interactive attention network based on a long short-term memory network. Firstly, it uses the position information of the context simultaneously in the input layer and the attention layer. Secondly, it mines the importance of different context words for the aspect with an interactive attention mechanism. Finally, it generates a valid representation of the aspect and the context for sentiment classification. The proposed model was evaluated on the Semantic Evaluation (SemEval) 2014 datasets. Compared with other baseline models, the accuracy of our model increases by about 2% on the restaurant dataset and 1% on the laptop dataset.
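
A small NumPy sketch of the position-weighting idea: context words closer to the aspect term receive larger weights before attention scores are computed. The exact position-embedding function and the LSTM-based interactive attention of the paper are not specified here, so the linear decay, the toy hidden states, and the example sentence are assumptions.

```python
# Toy position-weighted attention over context words around an aspect term.
import numpy as np

tokens = ["the", "battery", "life", "is", "great", "but", "the", "screen", "is", "dim"]
aspect_index = 1                                   # position of "battery"

distances = np.abs(np.arange(len(tokens)) - aspect_index)
position_weights = 1.0 - distances / len(tokens)   # linear decay with distance

# Toy hidden states (e.g., from an LSTM); position weights scale them before
# attention scores are computed against an aspect representation.
hidden = np.random.randn(len(tokens), 8)
aspect_vec = np.random.randn(8)
scores = (position_weights[:, None] * hidden) @ aspect_vec
attention = np.exp(scores) / np.exp(scores).sum()
context_vec = attention @ hidden                   # context representation
print(np.round(attention, 3))
```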

Boundary and Reverse Attention Module for Lung Nodule Segmentation in CT Images

  • 황경연;지예원;윤학영;이상준
    • 대한임베디드공학회논문지 / Vol. 17, No. 5 / pp.265-272 / 2022
  • As the risk of lung cancer has increased, early-stage detection and treatment of cancers have received a lot of attention. Among various medical imaging approaches, computed tomography (CT) has been widely utilized to examine the size and growth rate of lung nodules. However, manual examination is a time-consuming task, and it causes physical and mental fatigue for medical professionals. Recently, many computer-aided diagnostic methods have been proposed to reduce this workload. In recent studies, encoder-decoder architectures have shown reliable performance in medical image segmentation and have been adopted to predict lesion candidates. However, localizing nodules in lung CT images is a challenging problem due to the extremely small sizes and unstructured shapes of nodules. To solve these problems, we utilize atrous spatial pyramid pooling (ASPP) in a general U-Net baseline model to minimize the loss of information and extract rich representations from various receptive fields. Moreover, we propose a mixed attention mechanism that combines reverse attention, boundary attention, and the convolutional block attention module (CBAM) to improve the segmentation accuracy for small nodules of various shapes. The performance of the proposed model is compared with several previous attention mechanisms on the LIDC-IDRI dataset, and experimental results demonstrate that reverse, boundary, and CBAM (RB-CBAM) attention is effective in the segmentation of small nodules.
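
A minimal sketch of the reverse-attention idea in the mixed attention module: the complement of a coarse sigmoid prediction highlights regions the current prediction has missed, often the nodule boundaries, and this complement reweights the encoder features. The full RB-CBAM module, ASPP, and the U-Net baseline are not reproduced; the tensor shapes are assumptions.

```python
# Reverse attention: re-focus features on regions the coarse prediction missed.
import torch

features = torch.randn(1, 32, 64, 64)           # encoder features
coarse_logits = torch.randn(1, 1, 64, 64)       # coarse segmentation prediction

reverse_attention = 1.0 - torch.sigmoid(coarse_logits)   # high where prediction is low
refined = features * reverse_attention                   # re-weighted features
print(refined.shape)                                     # torch.Size([1, 32, 64, 64])
```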

A Neural Network Model for Visual Selection: Top-down mechanism of Feature Gate model

  • 김민식
    • 인지과학 / Vol. 10, No. 3 / pp.1-15 / 1999
  • Based on earlier psychophysical and neurophysiological findings on visual selection, we propose a neural network model called FeatureGate. The model comprises a hierarchy of spatial maps, and the flow of information from each level of the hierarchy to the next is controlled by attentional gates. The gates are modulated both by a bottom-up system that responds to locations containing distinctive features and by a top-down mechanism that responds to locations containing target features. This paper focuses on the top-down mechanism of the FeatureGate model and shows how the model accounts for the findings of Moran and Desimone (1985), which other current models fail to explain. The FeatureGate model provides a single coherent account of many phenomena in visual attention research, including parallel feature searches, serial conjunction searches, gradual attenuation of attention around a cued location, feature-driven spatial selection, divided attention, inhibition of distractor locations, and surround inhibition. With further extension, by adding higher-level units that respond to conjunctions of feature arrangements, the model could be developed into a pattern-recognition model that incorporates visual selection.
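
A toy numerical sketch of the gating idea described in this abstract: each location's attentional gate is driven both by a bottom-up term (how distinctive the local feature is relative to its neighbors) and a top-down term (how well the local feature matches the target). The actual FeatureGate model is a hierarchical neural network; the formulas below are illustrative assumptions only.

```python
# Toy gating: bottom-up uniqueness times top-down target match, per location.
import numpy as np

feature_map = np.array([0.1, 0.9, 0.1, 0.1, 0.1])    # one feature value per location
target_feature = 0.9                                  # feature the observer searches for

# Bottom-up: uniqueness = distance from the mean of the other locations.
mean_others = (feature_map.sum() - feature_map) / (len(feature_map) - 1)
bottom_up = np.abs(feature_map - mean_others)

# Top-down: similarity to the target feature.
top_down = 1.0 - np.abs(feature_map - target_feature)

gate = bottom_up * top_down
gate /= gate.sum()                                    # normalized gate openings
print(np.round(gate, 3))                              # location 1 dominates
```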

Visual Explanation of Black-box Models Using Layer-wise Class Activation Maps from Approximating Neural Networks

  • 강준규;전민경;이현석;김성찬
    • 대한임베디드공학회논문지 / Vol. 16, No. 4 / pp.145-151 / 2021
  • In this paper, we propose a novel visualization technique to explain the predictions of deep neural networks. We use knowledge distillation (KD) to identify the interior of a black-box model for which we know only the inputs and outputs. First, the information of the black-box model is transferred through KD to a white-box model, which learns the representation of the black-box model. Second, the white-box model generates attention maps for each of its layers using Grad-CAM. We then combine the attention maps of different layers using pixel-wise summation to generate a final saliency map that contains information from all layers of the model. The experiments show that the proposed technique finds important layers and explains which parts of the input are important. Saliency maps generated by the proposed technique performed better than those of Grad-CAM in the deletion game.
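
A minimal sketch of the map-fusion step described above: given a class activation map per layer (already computed, e.g., with Grad-CAM), each map is resized to the input resolution, min-max normalized, and summed pixel-wise into one saliency map. The knowledge-distillation stage that builds the white-box model is not shown; the helper name and the toy map sizes are assumptions.

```python
# Pixel-wise fusion of per-layer CAMs into one saliency map.
import numpy as np
from PIL import Image


def fuse_layer_cams(layer_cams: list[np.ndarray], size: tuple[int, int]) -> np.ndarray:
    """Pixel-wise sum of per-layer CAMs after resizing and normalization."""
    fused = np.zeros(size, dtype=np.float32)
    for cam in layer_cams:
        resized = np.array(Image.fromarray(cam).resize(size[::-1], Image.BILINEAR))
        resized = (resized - resized.min()) / (resized.max() - resized.min() + 1e-8)
        fused += resized
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)


if __name__ == "__main__":
    cams = [np.random.rand(7, 7).astype(np.float32),
            np.random.rand(14, 14).astype(np.float32)]
    print(fuse_layer_cams(cams, (224, 224)).shape)    # (224, 224)
```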

Explaining the Translation Error Factors of Machine Translation Services Using Self-Attention Visualization

  • 장청롱;안현철
    • 한국IT서비스학회지 / Vol. 21, No. 2 / pp.85-95 / 2022
  • This study analyzed the translation error factors of machine translation services such as Naver Papago and Google Translate through self-attention path visualization. Self-attention is a key mechanism of the Transformer and BERT NLP models and has recently been widely used in machine translation. We propose a method to explain the translation error factors of machine translation algorithms by comparing the self-attention paths of an ST (source text) and an ST' (a transformed ST whose meaning is unchanged but whose translation output is more accurate). This method makes it possible to gain explainability into a machine translation algorithm's internal process, which is otherwise invisible like a black box. In our experiment, it was possible to explore the factors that caused translation errors by analyzing differences in the attention paths of key words. The study used the XLM-RoBERTa multilingual NLP model provided by exBERT for self-attention visualization, and applied it to two examples of Korean-Chinese and Korean-English translation.
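
A hedged sketch of pulling self-attention weights out of XLM-RoBERTa for inspection, using the Hugging Face transformers library rather than the exBERT tool the study used. It shows how attention matrices for a source sentence and a lightly rephrased version can be extracted so their attention paths can be compared; the example sentences and the choice of layer are assumptions.

```python
# Extract self-attention matrices from XLM-RoBERTa for two sentence variants.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base", output_attentions=True)
model.eval()


def attention_for(text: str) -> torch.Tensor:
    """Return last-layer attention averaged over heads: shape (seq, seq)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.attentions[-1][0].mean(dim=0)     # last layer, mean over heads

# ST and a rephrased ST' whose meaning is unchanged (illustrative examples).
st_attn = attention_for("The bank raised interest rates yesterday.")
st2_attn = attention_for("Yesterday the bank raised interest rates.")
print(st_attn.shape, st2_attn.shape)
```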

Stereo Image Quality Assessment Using Visual Attention and Distortion Predictors

  • Hwang, Jae-Jeong;Wu, Hong Ren
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 5, No. 9 / pp.1613-1631 / 2011
  • Several metrics have been reported in the literature to assess stereo image quality, mostly based on visual attention or on human-visual-sensitivity-based distortion prediction with the help of disparity information; these do not consider the combined aspects of human visual processing. In this paper, a visual attention and depth assisted stereo image quality assessment model (VAD-SIQAM) is devised that consists of three main components, i.e., a stereo attention predictor (SAP), depth variation (DV), and a stereo distortion predictor (SDP). Visual attention is modeled based on entropy and inverse contrast to detect regions or objects of interest/attention. Depth variation is fused into the attention probability to account for the amount of changed depth in distorted stereo images. Finally, the stereo distortion predictor is designed by integrating distortion probability, which is based on low-level human visual system (HVS) responses, into the actual attention probabilities. The results show that regions of attention are detected among the visually significant distortions in the stereo image pair. Drawbacks of human-visual-sensitivity-based picture quality metrics are alleviated by integrating visual attention and depth information. We also show that the positive correlations with ground-truth attention and depth maps increase to up to 0.949 and 0.936 in terms of the Pearson and Spearman correlation coefficients, respectively.
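
A small sketch of the entropy part of the attention predictor described above: the Shannon entropy of gray levels is computed in a sliding window, so textured, information-rich regions score higher. Inverse contrast, depth variation, and the distortion predictor from the paper are not shown; the window size and bin count are assumptions.

```python
# Local-entropy attention map over a grayscale image in [0, 1].
import numpy as np


def local_entropy(gray: np.ndarray, window: int = 7, bins: int = 32) -> np.ndarray:
    """Shannon entropy of the gray-level histogram inside each window."""
    pad = window // 2
    padded = np.pad(gray, pad, mode="reflect")
    out = np.zeros_like(gray, dtype=np.float32)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            patch = padded[i:i + window, j:j + window]
            hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log2(p))
    return out


if __name__ == "__main__":
    image = np.random.rand(64, 64).astype(np.float32)   # stand-in gray image
    attention_map = local_entropy(image)
    print(attention_map.min(), attention_map.max())
```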