• Title/Summary/Keyword: attention and information

Search results: 4,537 items (processing time: 0.036 seconds)

Attentional Mechanisms for Video Retargeting and 3D Compressive Processing

  • 황재정
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 15, No. 4 / pp.943-950 / 2011
  • This paper presents retargeting and compressive-processing techniques for still and moving images based on measuring the attention level of 2D and 3D images. 2D attention considers three main components of the image, namely intensity, color, and orientation, while 3D images additionally use depth information. Visual attention is obtained by a technique that quantifies rarity in order to detect interesting regions or objects. A stereo distortion predictor is designed by matching the changed depth information of distorted stereo images to the attention probability, and finally combining the low-level HVS response with the actual attention probability. The result is an attention technique more effective than existing models, and its performance is demonstrated by applying it to video retargeting.

Two-Dimensional Attention-Based LSTM Model for Stock Index Prediction

  • Yu, Yeonguk;Kim, Yoon-Joong
    • Journal of Information Processing Systems / Vol. 15, No. 5 / pp.1231-1242 / 2019
  • This paper presents a two-dimensional attention-based long short-term memory (2D-ALSTM) model for stock index prediction, incorporating input attention and temporal attention mechanisms that weight important stocks and important time steps, respectively. The proposed model is designed to overcome the long-term dependency, stock selection, and stock volatility delay problems that negatively affect existing models. The 2D-ALSTM model is validated in a comparative experiment against two attention-based models, multi-input LSTM (MI-LSTM) and the dual-stage attention-based recurrent neural network (DARNN), with real stock data used for training and evaluation. The model achieves superior performance compared to MI-LSTM and DARNN for stock index prediction on a KOSPI100 dataset.
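The abstract does not give the exact formulation of the two attention stages, but the idea of weighting along both the stock axis and the time axis can be sketched in a few lines. The score vectors below stand in for outputs of learned layers (hypothetical inputs, not the authors' parameterization):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def two_d_attention(series, input_scores, temporal_scores):
    """Weight a (time x stock) matrix along both dimensions.

    series[t][i]      : value of stock i at time step t
    input_scores[i]   : relevance score for stock i (assumed learned)
    temporal_scores[t]: relevance score for time step t (assumed learned)
    Returns a single context value: sum_t beta_t * sum_i alpha_i * x[t][i].
    """
    alpha = softmax(input_scores)    # input attention over stocks
    beta = softmax(temporal_scores)  # temporal attention over time steps
    context = 0.0
    for t, row in enumerate(series):
        weighted_row = sum(a * x for a, x in zip(alpha, row))
        context += beta[t] * weighted_row
    return context
```

With zero scores both softmaxes degenerate to uniform averaging, which makes the two weighting stages easy to verify by hand; in the full model the context vector would feed the LSTM's prediction layer.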

Conceptual understanding of the relationship between consciousness, memory, and attention

  • 김은숙;신현정
    • Proceedings of the Korean Society for Cognitive Science 2010 Spring Conference / pp.13-17 / 2010
  • Consciousness is often regarded as too ambiguous a concept to be understood and accepted as a mental construct unless memory and attention are included in its conceptualization. We nevertheless need a criterion for what counts as a satisfactory explanation of consciousness in information-processing terms. An operational working definition of consciousness can be made by comparison with memory and attention: consciousness would be a subjective awareness of momentary experience that also has the characteristics of an operating system performing control and consolidation of information processing. This could be called cognitive consciousness. Some distinctions between consciousness, memory, and attention can thus be made conceptually and functionally from the perspective of information processing.


Industrial Process Monitoring and Fault Diagnosis Based on Temporal Attention Augmented Deep Network

  • Mu, Ke;Luo, Lin;Wang, Qiao;Mao, Fushun
    • Journal of Information Processing Systems / Vol. 17, No. 2 / pp.242-252 / 2021
  • Following the intuition that local information at individual time instances is hardly incorporated into the posterior sequence in long short-term memory (LSTM) networks, this paper proposes an attention-augmented mechanism for fault diagnosis of complex chemical process data. Unlike conventional fault diagnosis and classification methods, an attention-mechanism layer is introduced to detect and focus on local temporal information. The resulting augmented deep network preserves the importance and contribution of each local instance and allows interpretable feature representation and classification simultaneously. Comprehensive comparative analyses demonstrate that the developed model achieves a high-quality fault classification rate of 95.49% on average, comparable to results obtained using various other techniques on the Tennessee Eastman benchmark process.

DA-Res2Net: a novel Densely connected residual Attention network for image semantic segmentation

  • Zhao, Xiaopin;Liu, Weibin;Xing, Weiwei;Wei, Xiang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 11 / pp.4426-4442 / 2020
  • Since scene segmentation is becoming a hot topic in autonomous driving and medical image analysis, researchers are actively trying new methods to improve segmentation accuracy. At present, the main issues in image semantic segmentation are intra-class inconsistency and inter-class indistinction. From our analysis, the lack of global information and of macroscopic discrimination of objects are the two main causes. In this paper, we propose a Densely connected residual Attention network (DA-Res2Net), consisting of a dense residual network and a channel attention guidance module, to address these problems and improve segmentation accuracy. Specifically, to equip the extracted features with stronger multi-scale characteristics, a densely connected residual network is proposed as the feature extractor. Furthermore, to improve the representativeness of each channel feature, we design a Channel-Attention-Guide module that makes the model focus on high-level semantic features and low-level location features simultaneously. Experimental results show that the method achieves strong performance on various datasets. Compared to other state-of-the-art methods, the proposed method reaches a mean IoU of 83.2% on PASCAL VOC 2012 and 79.7% on the Cityscapes dataset.
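The paper does not publish the exact Channel-Attention-Guide layout here, but the general squeeze-and-excite pattern behind channel attention, pool each channel to a descriptor, gate it, and rescale the channel, can be sketched as follows. The scalar gate weights stand in for the small learned layers a real module would use (an assumption, not the authors' design):

```python
import math

def sigmoid(x):
    """Logistic squashing function, maps any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps, gate_weights):
    """Rescale each channel by a learned gate (squeeze-and-excite style).

    feature_maps[c] : flattened activations of channel c
    gate_weights[c] : scalar weight per channel (assumed learned; a real
                      module would derive it via small FC layers)
    """
    # Squeeze: global average pooling per channel
    descriptors = [sum(ch) / len(ch) for ch in feature_maps]
    # Excite: gate each channel descriptor, squash to (0, 1)
    gates = [sigmoid(w * d) for w, d in zip(gate_weights, descriptors)]
    # Rescale: channel-wise multiplication of the original activations
    return [[g * x for x in ch] for g, ch in zip(gates, feature_maps)]
```

Because the gate is per-channel, informative channels can be amplified while noisy ones are suppressed, which is the effect the abstract attributes to its guidance module.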

3D Dual-Fusion Attention Network for Brain Tumor Segmentation

  • ;;;김수형
    • Proceedings of the Korea Information Processing Society 2023 Spring Conference / pp.496-498 / 2023
  • Brain tumor segmentation is challenging because of the diversity of tumor location, class imbalance, and morphology. Attention mechanisms have recently been widely used to tackle medical segmentation problems efficiently by focusing on essential regions, while fusion approaches enhance performance by merging the mutual benefits of multiple models. In this study, we propose a 3D dual-fusion attention network that combines the advantages of fusion approaches and attention mechanisms through residual self-attention and local blocks. Compared to fusion approaches and related works, our proposed method shows promising results on the BraTS 2018 dataset.

Real Scene Text Image Super-Resolution Based on Multi-Scale and Attention Fusion

  • Xinhua Lu;Haihai Wei;Li Ma;Qingji Xue;Yonghui Fu
    • Journal of Information Processing Systems / Vol. 19, No. 4 / pp.427-438 / 2023
  • Plenty of work has indicated that single-image super-resolution (SISR) models trained on synthetic datasets are difficult to apply to real scene text image super-resolution (STISR) because of its more complex degradation. The most recent dataset for realistic STISR is TextZoom, but current methods trained on it have not considered the effect of multi-scale features of text images. In this paper, a multi-scale and attention fusion model for realistic STISR is proposed. A multi-scale learning mechanism is introduced to acquire sophisticated feature representations of text images; spatial and channel attention are introduced to capture local information and inter-channel interaction information; finally, a multi-scale residual attention module is designed by fusing multi-scale learning and attention mechanisms. Experiments on TextZoom demonstrate that the proposed model increases the average recognition accuracy of the scene text recognizer ASTER by 1.2% compared to the text super-resolution network baseline.
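The combination of spatial and channel attention the abstract mentions can be illustrated on a toy (channel x position) map. This sketch derives both weight vectors from raw channel and position statistics; a real module would learn these projections, so the statistics here are only stand-ins:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def spatial_channel_fusion(channels):
    """Fuse channel and spatial attention on a (channel x position) map.

    channels[c][p]: activation of channel c at spatial position p.
    Channel weights come from per-channel means; spatial weights from
    per-position means across channels (both softmax-normalized).
    """
    n_pos = len(channels[0])
    chan_w = softmax([sum(ch) / n_pos for ch in channels])
    pos_means = [sum(ch[p] for ch in channels) / len(channels)
                 for p in range(n_pos)]
    spat_w = softmax(pos_means)
    # Elementwise fusion: each activation is scaled by both attention maps
    return [[cw * sw * x for sw, x in zip(spat_w, ch)]
            for cw, ch in zip(chan_w, channels)]
```

The point of fusing the two maps is that a position is emphasized only when both its spatial neighborhood and its channel are judged informative.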

Two-dimensional attention-based multi-input LSTM for time series prediction

  • Kim, Eun Been;Park, Jung Hoon;Lee, Yung-Seop;Lim, Changwon
    • Communications for Statistical Applications and Methods / Vol. 28, No. 1 / pp.39-57 / 2021
  • Time series prediction is an area of great interest. Algorithms for time series prediction are widely used in fields such as stock prices, temperature, energy, and weather forecasting; both classical models and recurrent neural networks (RNNs) have been actively developed. After the attention mechanism was introduced to neural network models, many new models with improved performance were developed; models applying attention twice have also recently been proposed, yielding further performance improvements. In this paper, we consider time series prediction by introducing attention twice into an RNN model. The proposed model introduces H-attention and T-attention over the output values and time-step information, respectively, to select useful information. We conduct experiments on stock price, temperature, and energy data and confirm that the proposed model outperforms existing models.

Attention Capsule Network for Aspect-Level Sentiment Classification

  • Deng, Yu;Lei, Hang;Li, Xiaoyu;Lin, Yiou;Cheng, Wangchi;Yang, Shan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 4 / pp.1275-1292 / 2021
  • As a fine-grained classification problem, aspect-level sentiment classification predicts the sentiment polarity of different aspects in a context. Researchers have widely used attention mechanisms to abstract the relationship between context and aspects, but it remains difficult to obtain a deep semantic representation, and the strong correlation between local context features and aspect-based sentiment is rarely considered. In this paper, a hybrid attention capsule network for aspect-level sentiment classification (ABASCap) is proposed. In this model, multi-head self-attention is improved, and a context mask mechanism based on an adjustable context window is proposed to effectively capture the internal association between aspects and context. Moreover, the dynamic routing algorithm and activation function of the capsule network are optimized to meet the task requirements. Finally, extensive experiments are conducted on three benchmark datasets from different domains. Compared with other baseline models, ABASCap achieves better classification results and outperforms state-of-the-art methods on this task after incorporating pre-trained BERT.
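The context mask with an adjustable window can be sketched independently of the capsule architecture: tokens outside a window around the aspect term receive a score of negative infinity before the softmax, so they get exactly zero attention weight. The function names and the single-score-vector form are illustrative assumptions, not ABASCap's actual interface:

```python
import math

def softmax(xs):
    """Numerically stable softmax; exp(-inf) evaluates to 0.0."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def masked_attention(scores, aspect_pos, window):
    """Attention weights restricted to a local context window.

    scores    : raw attention scores over the token sequence
    aspect_pos: index of the aspect term
    window    : half-width of the adjustable context window
    Tokens outside [aspect_pos - window, aspect_pos + window] are
    masked with -inf, so the softmax assigns them zero weight.
    """
    masked = [s if abs(i - aspect_pos) <= window else float("-inf")
              for i, s in enumerate(scores)]
    return softmax(masked)
```

Widening `window` trades locality for coverage, which is presumably why the paper makes it adjustable per task.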

A Study on Visual Behavior for Presenting Consumer-Oriented Information on an Online Fashion Store

  • Kim, Dahyun;Lee, Seunghee
    • Journal of the Korean Society of Clothing and Textiles / Vol. 44, No. 5 / pp.789-809 / 2020
  • Growth in online channels has created fierce competition; consequently, retailers must invest increasing effort into attracting consumers. In this study, eye-tracking technology was used to examine consumers' visual behavior in order to understand how they search product information for fashion products. Product attribute information was classified into two image-based elements (model image information and detail image information) and two text-based elements (basic text information and detail text information), after which consumers' visual behavior toward each information element was analyzed. Whether involvement affects consumers' information search behavior was also investigated. The results demonstrated that model image information attracted visual attention the quickest, while detail text information and model image information received the most visual attention. Additionally, high-involvement consumers tended to pay more attention to detailed information, while low-involvement consumers tended to pay more attention to image-based and basic information. This study is expected to broaden the understanding of consumer behavior and provide implications for efficiently organizing product information in online fashion stores.