• Title/Abstract/Keyword: attention method

Search results: 3,894

A New Performance Evaluation Method for Visual Attention System (시각주의 탐색 시스템을 위한 새로운 성능 평가 기법)

  • Cheoi, Kyungjoo
    • Journal of Information Technology Services, v.16 no.1, pp.55-72, 2017
  • Many current studies of visual attention seek to build application systems that can be used in practice, and they have obtained good results not only on simulated images but also on real-world images. However, although previous models of selective visual attention are intended to reproduce human vision, few experiments have verified them against actual human observers, and there is neither standardized data nor a standardized experimental protocol for real images. In this paper, we therefore propose a new performance evaluation technique for visual attention systems. The method evaluates a visual attention system by comparing its output with the results of human attention experiments, in which observers instinctively and unconsciously fixate on salient regions when shown an image; such experiments are therefore well suited to evaluating bottom-up attention systems. We also propose a new selective attention system that detects regions of interest effectively by using spatial and temporal features adaptively selected according to the input image. We evaluated the proposed system with the developed evaluation method and confirmed that its results are similar to those of human visual attention.
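
The abstract above does not spell out the comparison metric; as a minimal illustration of how a bottom-up saliency map could be scored against human attention data, the sketch below computes a Pearson correlation between a model map and a human fixation density map. The metric, array shapes, and function name are assumptions, not the authors' protocol.

```python
# Hypothetical sketch: comparing a model saliency map with a human fixation
# density map using the linear correlation coefficient (CC). The metric and
# array shapes are assumptions; the paper's actual protocol may differ.
import numpy as np

def saliency_correlation(model_map: np.ndarray, human_map: np.ndarray) -> float:
    """Pearson correlation between a model saliency map and a human map."""
    m = (model_map - model_map.mean()) / (model_map.std() + 1e-8)
    h = (human_map - human_map.mean()) / (human_map.std() + 1e-8)
    return float((m * h).mean())

# Example with random maps standing in for real data.
rng = np.random.default_rng(0)
model_map = rng.random((240, 320))
human_map = rng.random((240, 320))
print(saliency_correlation(model_map, human_map))
```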

Explaining the Translation Error Factors of Machine Translation Services Using Self-Attention Visualization (Self-Attention 시각화를 사용한 기계번역 서비스의 번역 오류 요인 설명)

  • Zhang, Chenglong;Ahn, Hyunchul
    • Journal of Information Technology Services, v.21 no.2, pp.85-95, 2022
  • This study analyzed the translation error factors of machine translation services such as Naver Papago and Google Translate through Self-Attention path visualization. Self-Attention is a key mechanism of the Transformer and BERT NLP models and is now widely used in machine translation. We propose a method to explain the translation error factors of machine translation algorithms by comparing the Self-Attention paths of a source text (ST) and a transformed source text (ST') whose meaning is unchanged but whose translation output is more accurate. This approach provides explainability for analyzing the internal workings of a machine translation algorithm, which are otherwise invisible, like a black box. In our experiment, we could explore the factors that caused translation errors by analyzing differences in the attention paths of key words. The study used the XLM-RoBERTa multilingual NLP model provided by exBERT for Self-Attention visualization and applied it to two examples of Korean-Chinese and Korean-English translation.
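
As a rough illustration of the kind of Self-Attention inspection described above, the sketch below extracts per-layer, per-head attention matrices for a source text so that the paths of ST and a modified ST' can be compared. It uses the Hugging Face transformers API rather than the exBERT tool named in the paper; the model checkpoint, example sentences, and comparison step are assumptions.

```python
# Minimal sketch of extracting Self-Attention weights from a multilingual
# model so that attention paths of ST and ST' can be compared. Uses the
# Hugging Face `transformers` API; model name and inputs are illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base", output_attentions=True)
model.eval()

def attention_maps(text: str):
    """Return a (layers, heads, tokens, tokens) attention tensor and the tokens."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    att = torch.stack(outputs.attentions).squeeze(1)  # layers x heads x T x T
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return att, tokens

att_st, tok_st = attention_maps("나는 학교에 간다")      # hypothetical ST
att_st2, tok_st2 = attention_maps("나는 학교에 갑니다")   # hypothetical ST'
print(att_st.shape, att_st2.shape)  # compare key words' attention paths here
```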

Deep Learning-based Super Resolution Method Using Combination of Channel Attention and Spatial Attention (채널 강조와 공간 강조의 결합을 이용한 딥 러닝 기반의 초해상도 방법)

  • Lee, Dong-Woo;Lee, Sang-Hun;Han, Hyun Ho
    • Journal of the Korea Convergence Society, v.11 no.12, pp.15-22, 2020
  • In this paper, we propose a deep learning based super-resolution method that combines Channel Attention and Spatial Attention feature enhancement. In super-resolution it is important to restore high-frequency components, such as texture and fine features, whose values change sharply between neighboring pixels. Existing CNN (Convolutional Neural Network) based super-resolution methods are difficult to train deeply and place little emphasis on high-frequency components, resulting in blurry contours and distortion. To address this, we use an enhancement block that combines Channel Attention and Spatial Attention with a Skip Connection, together with a Residual Block. The enhanced feature map extracted in this way is upscaled through Sub-pixel Convolution to obtain the super-resolved image. As a result, PSNR improved by about 5% and SSIM by about 3% compared with the conventional SRCNN, and PSNR improved by about 2% and SSIM by about 1% compared with VDSR.
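
A minimal PyTorch sketch of the kind of block described above, combining channel attention, spatial attention, a skip connection, and a sub-pixel convolution upscaling head, is given below. Channel counts, kernel sizes, and the exact wiring are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a channel + spatial attention enhancement block with a skip
# connection, followed by sub-pixel convolution upscaling. Sizes are assumed.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatially, excite per channel.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: one weight map over H x W locations.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        out = x * self.channel(x)       # emphasize informative channels
        out = out * self.spatial(out)   # emphasize informative locations
        return out + x                  # skip connection

class UpsampleHead(nn.Module):
    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * scale * scale, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)  # sub-pixel convolution

    def forward(self, x):
        return self.shuffle(self.conv(x))

feat = torch.randn(1, 64, 32, 32)
up = UpsampleHead(64)(ChannelSpatialAttention(64)(feat))
print(up.shape)  # torch.Size([1, 64, 64, 64])
```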

Attention Deep Neural Networks Learning based on Multiple Loss functions for Video Face Recognition (비디오 얼굴인식을 위한 다중 손실 함수 기반 어텐션 심층신경망 학습 제안)

  • Kim, Kyeong Tae;You, Wonsang;Choi, Jae Young
    • Journal of Korea Multimedia Society, v.24 no.10, pp.1380-1390, 2021
  • Video face recognition (FR) is one of the most popular research topics in computer vision due to its wide range of applications, and research using the attention mechanism is being actively conducted. In video face recognition, attention indicates where to focus within the whole input or a specific region, or which frame to focus on when many frames are available. In this paper, we propose a novel attention-based deep learning method. The main novelties of our method are (1) the combination of two loss functions, a weighted Softmax loss and a Triplet loss, and (2) end-to-end learning that includes both the feature embedding network and the attention weight computation. The combined loss function and end-to-end learning have a positive effect on the attention weight computation through the feature embedding network. To demonstrate the effectiveness of the proposed method, extensive comparative experiments were carried out on the IJB-A dataset with its standard evaluation protocols. The proposed method achieved recognition rates better than or comparable to other state-of-the-art video FR methods.
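
The following sketch illustrates, under assumed sizes and weights, how a softmax classification loss and a triplet loss can be combined into one objective for end-to-end training of an embedding network; it is not the authors' exact formulation (the attention module and the specific softmax weighting are omitted).

```python
# Sketch of combining a softmax classification loss with a triplet loss.
# Weighting factor, margin, and network sizes are illustrative assumptions.
import torch
import torch.nn as nn

embed_net = nn.Sequential(nn.Flatten(), nn.Linear(112 * 112 * 3, 128))
classifier = nn.Linear(128, 100)           # 100 hypothetical identities
ce_loss = nn.CrossEntropyLoss()
triplet_loss = nn.TripletMarginLoss(margin=0.2)
alpha = 0.5                                # relative weight of the two terms

def combined_loss(anchor, positive, negative, labels):
    emb_a = embed_net(anchor)
    emb_p = embed_net(positive)
    emb_n = embed_net(negative)
    cls = ce_loss(classifier(emb_a), labels)   # softmax-based classification term
    tri = triplet_loss(emb_a, emb_p, emb_n)    # metric-learning term
    return alpha * cls + (1.0 - alpha) * tri

x = lambda: torch.randn(4, 3, 112, 112)        # dummy face crops
labels = torch.randint(0, 100, (4,))
print(combined_loss(x(), x(), x(), labels).item())
```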

A Pilot Selection Method Using Divided Attention Test (주의 분배력 분석을 통한 조종사 선발 방법에 관한 연구)

  • Lee, Dal-Ho
    • Journal of the Military Operations Research Society of Korea, v.11 no.1, pp.33-46, 1985
  • This study develops a scientific method for pilot selection by analyzing the divided attention performance of successful pilots and of trainees who failed a flight training course. To measure divided attention performance, the Dual Task Method is used, in which the primary task is a tracking task and the secondary tasks are (1) a short-term memory task, (2) a choice reaction task, and (3) a judgement task. The results show that the divided attention performance of the pilots is significantly better (p < 0.1) than that of the failed trainees. In addition, the difference between the two groups increases in proportion to the difficulty of the task, and the increase is most dramatic for the short-term memory task.
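
As a purely illustrative sketch of the group comparison described above (not the study's data), the snippet below tests whether hypothetical divided-attention scores of two groups differ at the 0.1 significance level using an independent-samples t-test.

```python
# Illustrative only: comparing divided-attention scores of two groups with an
# independent-samples t-test at the 0.1 level. All numbers are made up.
from scipy import stats

pilots   = [0.82, 0.78, 0.85, 0.80, 0.79, 0.84]   # hypothetical dual-task scores
failures = [0.71, 0.75, 0.69, 0.73, 0.74, 0.70]

t_stat, p_value = stats.ttest_ind(pilots, failures)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant at 0.1: {p_value < 0.1}")
```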


An Attention Method-based Deep Learning Encoder for the Sentiment Classification of Documents (문서의 감정 분류를 위한 주목 방법 기반의 딥러닝 인코더)

  • Kwon, Sunjae;Kim, Juae;Kang, Sangwoo;Seo, Jungyun
    • KIISE Transactions on Computing Practices, v.23 no.4, pp.268-273, 2017
  • Recently, deep learning encoder-based approaches have been actively applied to sentiment classification. However, the commonly used Long Short-Term Memory network encoder produces poor vector representations when documents become long. In this study, for effective classification of sentiment documents, we suggest an attention method-based deep learning encoder that generates a document vector representation as a weighted sum of the outputs of a Long Short-Term Memory network, weighted by importance. In addition, we propose two modifications that adapt the attention-based encoder to sentiment classification: a window attention method and an attention weight adjustment step. In the window attention method, weights are computed over windows of words so that sentiment features consisting of more than one word can be recognized effectively. In the attention weight adjustment step, the learned weights are smoothed. Experimental results show that the proposed method outperforms the Long Short-Term Memory network encoder, achieving 89.67% accuracy.
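
A minimal PyTorch sketch of an attention-based LSTM encoder, where the document vector is an importance-weighted sum of the per-step LSTM outputs, is shown below. The window attention and weight-smoothing steps described above are omitted, and all sizes and names are illustrative assumptions.

```python
# Sketch of an LSTM encoder with an attention-weighted sum over time steps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLSTMEncoder(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.att = nn.Linear(hidden_dim, 1)      # scores each time step
        self.out = nn.Linear(hidden_dim, classes)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))           # (B, T, H)
        scores = self.att(h).squeeze(-1)                   # (B, T)
        weights = F.softmax(scores, dim=-1)                # importance per step
        doc_vec = torch.bmm(weights.unsqueeze(1), h).squeeze(1)  # weighted sum
        return self.out(doc_vec)

tokens = torch.randint(0, 10000, (2, 50))   # two dummy documents of 50 tokens
print(AttentionLSTMEncoder()(tokens).shape)  # torch.Size([2, 2])
```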

Intra Prediction Method for Depth Picture Using CNN and Attention Mechanism (CNN과 Attention을 통한 깊이 화면 내 예측 방법)

  • Jae-hyuk Yoon;Dong-seok Lee;Byoung-ju Yun;Soon-kak Kwon
    • Journal of Korea Society of Industrial Information Systems, v.29 no.2, pp.35-45, 2024
  • In this paper, we propose an intra prediction method for depth pictures using a CNN and an attention mechanism. The proposed method allows each pixel in a block to be predicted from selected pixels in the reference area. Spatial features of the reference pixels in the vertical and horizontal directions are extracted from the areas above and to the left of the block, respectively, through CNN layers. The two spatial features are merged along the feature and spatial dimensions to produce features for the prediction block and for the reference pixels. The correlation between the prediction block and the reference pixels is then estimated through an attention mechanism, and the predicted correlations are mapped back to the pixel domain through CNN layers to predict the pixels in the block. When the proposed method is added to the VVC intra modes, the average intra prediction error is reduced by 5.8%.
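
The sketch below gives a loose, simplified picture of the idea described above: attention scores between block positions (queries) and reference pixels (keys) are normalized with a softmax and used to predict the block from the reference samples. The CNN feature extraction and the mapping back to the pixel domain are heavily simplified, and all dimensions are assumptions.

```python
# Simplified sketch: predict an 8x8 block from top/left reference depth
# samples via softmax attention. Sizes and the query/key setup are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReferenceAttentionPredictor(nn.Module):
    def __init__(self, block_pixels: int = 64, dim: int = 32):
        super().__init__()
        self.query = nn.Parameter(torch.randn(block_pixels, dim))  # one query per block pixel
        self.key = nn.Linear(1, dim)      # embed each reference depth value
        self.scale = dim ** 0.5

    def forward(self, ref_values):        # ref_values: (num_reference_pixels,)
        k = self.key(ref_values.unsqueeze(-1))                     # (ref, dim)
        att = F.softmax(self.query @ k.t() / self.scale, dim=-1)   # (block, ref)
        return att @ ref_values            # predicted block pixels, (block,)

ref = torch.rand(33) * 255.0               # top + left reference depth samples
pred = ReferenceAttentionPredictor()(ref)
print(pred.shape)                           # torch.Size([64]) -> reshape to 8x8
```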

A New Covert Visual Attention System by Object-based Spatiotemporal Cues and Their Dynamic Fusioned Saliency Map (객체기반의 시공간 단서와 이들의 동적결합 된돌출맵에 의한 상향식 인공시각주의 시스템)

  • Cheoi, Kyungjoo
    • Journal of Korea Multimedia Society, v.18 no.4, pp.460-472, 2015
  • Most previous visual attention systems find attention regions from a saliency map that combines multiple extracted features; the systems differ mainly in how the features are extracted and combined. This paper presents a new system that improves the feature extraction of color and motion and the weighting of spatial and temporal features. Our system dynamically extracts the one color with the strongest response between two opponent colors and detects moving objects rather than moving pixels. To combine spatial and temporal features, the proposed system sets the weights dynamically according to each feature's relative activity. Comparative results show that the suggested feature extraction and integration method improves the detection rate of attention regions.
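
As a simple illustration of the dynamic weighting described above, the sketch below fuses spatial and temporal saliency maps with weights proportional to each map's relative activity (here measured as mean energy, an assumption made for illustration).

```python
# Sketch of fusing spatial and temporal saliency maps with weights set
# dynamically from each map's relative activity. Activity measure is assumed.
import numpy as np

def fuse_saliency(spatial: np.ndarray, temporal: np.ndarray) -> np.ndarray:
    act_s = spatial.mean()
    act_t = temporal.mean()
    total = act_s + act_t + 1e-8
    w_s, w_t = act_s / total, act_t / total   # weights follow relative activity
    return w_s * spatial + w_t * temporal

rng = np.random.default_rng(1)
spatial_map = rng.random((120, 160))
temporal_map = rng.random((120, 160)) * 0.2   # e.g. little motion in the scene
fused = fuse_saliency(spatial_map, temporal_map)
print(fused.shape, round(float(fused.max()), 3))
```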

Computer Vision System using the mechanisms of human visual attention (인간의 시각적 주의 능력을 이용한 컴퓨터 시각 시스템)

  • 최경주;이일병
    • Proceedings of the IEEK Conference, 2001.06d, pp.239-242, 2001
  • As real-time computer vision systems are confronted with prodigious amounts of visual information, it has become a priority to locate and analyze just the information essential to the task at hand while ignoring the vast flow of irrelevant detail. One way to achieve this is to use the human visual attention mechanism. This paper gives a short review of human visual attention mechanisms and some computational models of visual attention, and can serve as basic material for research on developing visual attention systems that perform various complex tasks more efficiently.


A Pilot Selection Method using Divided Attention Test (주의력 배분능력 분석을 통한 조종사 선발방법에 관한 연구)

  • Lee, Dal-Ho;Lee, Myeon-U
    • Journal of Korean Institute of Industrial Engineers, v.10 no.2, pp.3-16, 1984
  • This study develops a scientific method for pilot selection by analyzing the divided attention performance of successful pilots and of trainees who failed a flight training course. To measure divided attention performance, the Dual Task Method is used, in which the primary task is a tracking task and the secondary tasks are (1) a short-term memory task, (2) a choice reaction task, and (3) a judgement task. The results show that the dual-task performance of the pilots is significantly better (P < 0.1) than that of the failed trainees. In addition, the difference in divided attention performance between the two groups increases in proportion to the difficulty of the task, and the increase is most dramatic for the short-term memory task.
