• Title/Summary/Keyword: Human Attention

A New Performance Evaluation Method for Visual Attention System (시각주의 탐색 시스템을 위한 새로운 성능 평가 기법)

  • Cheoi, Kyungjoo
    • Journal of Information Technology Services / v.16 no.1 / pp.55-72 / 2017
  • Many current studies of visual attention seek to build application systems that can be used in practice, and they have obtained good results not only on simulated images but also on real-world images. However, although previous models of selective visual attention are intended to implement human vision, few experiments have verified these models against actual humans, and there is no standardized data set or standardized experimental method for real images. In this paper, we therefore propose a new performance evaluation technique for visual attention systems. We developed an evaluation method that assesses a visual attention system by comparing its results with those of human attention experiments, in which the regions that people attend to instinctively and unconsciously are recorded when they are shown images; this makes the method useful for evaluating bottom-up attention systems. We also propose a new selective attention system that effectively detects regions of interest (ROI) by using spatial and temporal features adaptively selected according to the input image. We evaluated the proposed visual attention system with the developed evaluation method and confirmed that its results are similar to those of human visual attention.
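
As a rough illustration of the kind of comparison this abstract describes, the sketch below scores a model saliency map against a human attention map with a simple Pearson correlation over pixels. The choice of metric, the array shapes, and the function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def correlation_score(model_saliency: np.ndarray, human_map: np.ndarray) -> float:
    """Pearson correlation between a model saliency map and a human attention map.

    Both inputs are 2-D arrays of the same shape; higher values mean the model
    attends to roughly the same regions that humans did.
    """
    m = (model_saliency - model_saliency.mean()) / (model_saliency.std() + 1e-8)
    h = (human_map - human_map.mean()) / (human_map.std() + 1e-8)
    return float((m * h).mean())

# Hypothetical usage with random stand-ins for the two maps.
model_map = np.random.rand(48, 64)   # bottom-up saliency map from the attention system
human_map = np.random.rand(48, 64)   # aggregated human attention map from the experiment
print(correlation_score(model_map, human_map))
```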

Stereo Image Quality Assessment Using Visual Attention and Distortion Predictors

  • Hwang, Jae-Jeong;Wu, Hong Ren
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.9 / pp.1613-1631 / 2011
  • Several metrics have been reported in the literature to assess stereo image quality, mostly based on visual attention or on human-visual-sensitivity-based distortion prediction with the help of disparity information, and they do not consider these aspects of human visual processing in combination. In this paper, a visual attention and depth assisted stereo image quality assessment model (VAD-SIQAM) is devised that consists of three main components, i.e., a stereo attention predictor (SAP), depth variation (DV), and a stereo distortion predictor (SDP). Visual attention is modeled based on entropy and inverse contrast to detect regions or objects of interest/attention. Depth variation is fused into the attention probability to account for the amount of depth change in distorted stereo images. Finally, the stereo distortion predictor is designed by integrating distortion probability, which is based on low-level human visual system (HVS) responses, into the actual attention probabilities. The results show that regions of attention are detected among the visually significant distortions in the stereo image pair. Drawbacks of human-visual-sensitivity-based picture quality metrics are alleviated by integrating visual attention and depth information. We also show that the positive correlations with ground-truth attention and depth maps increase up to 0.949 and 0.936 in terms of the Pearson and Spearman correlation coefficients, respectively.
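
The abstract states that attention is modeled from entropy and inverse contrast; the sketch below computes a block-wise map from those two cues. How the paper actually combines them, as well as the block size, histogram bins, and normalization used here, is assumed purely for illustration.

```python
import numpy as np

def entropy_inverse_contrast_map(gray: np.ndarray, block: int = 8) -> np.ndarray:
    """Block-wise attention map built from local entropy and inverse contrast.

    gray: 2-D array of 8-bit luminance values. The product of the two cues used
    here is only one plausible combination, not the one from the cited model.
    """
    rows, cols = gray.shape[0] // block, gray.shape[1] // block
    att = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = gray[i * block:(i + 1) * block, j * block:(j + 1) * block]
            hist, _ = np.histogram(patch, bins=16, range=(0, 256))
            p = hist[hist > 0] / hist.sum()
            entropy = -(p * np.log2(p)).sum()                # local information content
            inverse_contrast = 1.0 / (patch.std() + 1e-6)    # inverse of local contrast
            att[i, j] = entropy * inverse_contrast
    return att / (att.max() + 1e-8)                          # normalize to [0, 1]

print(entropy_inverse_contrast_map(np.random.randint(0, 256, (64, 96)).astype(float)).shape)
```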

Computer Vision System using the mechanisms of human visual attention (인간의 시각적 주의 능력을 이용한 컴퓨터 시각 시스템)

  • 최경주;이일병
    • Proceedings of the IEEK Conference / 2001.06d / pp.239-242 / 2001
  • As systems for real-time computer vision are confronted with prodigious amounts of visual information, it has become a priority to locate and analyze just the information essential to the task at hand, while ignoring the vast flow of irrelevant detail. One way of achieving this is to use the human visual attention mechanism. In this paper, we give a short review of human visual attention mechanisms and of some computational models of visual attention. This paper can serve as basic reference material for research on developing visual attention systems that can perform various complex tasks more efficiently.

ADD-Net: Attention Based 3D Dense Network for Action Recognition

  • Man, Qiaoyue;Cho, Young Im
    • Journal of the Korea Society of Computer and Information / v.24 no.6 / pp.21-28 / 2019
  • In recent years, with the development of artificial intelligence and the success of deep models, deep networks have been deployed in all fields of computer vision. Action recognition, an important branch of human perception and computer vision research, has attracted more and more attention. Action recognition is a challenging task due to the special complexity of human movement: the same movement can look different across multiple individuals. A human action exists as a sequence of continuous image frames in a video, so action recognition requires more computational power than processing static images, and a plain CNN alone cannot achieve the desired results. Recently, attention models have achieved good results in computer vision and natural language processing. In particular, for video action classification, adding an attention model makes it more effective to focus on motion features and improves performance. It also intuitively explains which part the model attends to when making a particular decision, which is very helpful in real applications. In this paper, we propose a 3D dense convolutional network based on an attention mechanism (ADD-Net) for recognizing human motion behavior in video.
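
To make the attention-over-3D-features idea concrete, here is a minimal sketch of a gate that re-weights 3D convolutional feature maps with a learned attention map. It does not reproduce the ADD-Net architecture; the layer sizes and the single-channel sigmoid gate are assumptions.

```python
import torch
import torch.nn as nn

class Attention3DBlock(nn.Module):
    """Toy spatio-temporal attention gate over 3D convolutional features.

    Illustrates re-weighting (frames x height x width) feature maps with a
    learned single-channel attention map; not the ADD-Net design itself.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.attn_conv = nn.Conv3d(channels, 1, kernel_size=1)  # attention logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        attn = torch.sigmoid(self.attn_conv(x))   # (batch, 1, frames, height, width)
        return x * attn                           # emphasize attended positions

features = torch.randn(2, 64, 16, 28, 28)         # stand-in for 3D dense-block features
print(Attention3DBlock(64)(features).shape)       # torch.Size([2, 64, 16, 28, 28])
```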

A New Residual Attention Network based on Attention Models for Human Action Recognition in Video

  • Kim, Jee-Hyun;Cho, Young-Im
    • Journal of the Korea Society of Computer and Information / v.25 no.1 / pp.55-61 / 2020
  • With the development of deep learning technology and advances in computing power, video-based research is gaining more and more attention. Video data contains a large amount of temporal and spatial information, which is the biggest difference from image data, and it comes in much larger volumes; it has attracted intense attention in computer vision, and action recognition is one of the research focuses. However, recognizing human actions in video is an extremely complex and challenging subject. Research on human cognition has found that attention mechanisms of the kind used in artificial intelligence are an efficient model of cognition, and such a model is well suited to processing image information and complex, continuous video information. We introduce this attention mechanism into video action recognition, attending to human actions in video and effectively improving recognition efficiency. In this paper, we propose a new 3D residual attention network, a convolutional neural network based on two attention models, to identify human action behavior in video. Evaluation of our model showed up to 90.7% accuracy.
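
As a generic illustration of the residual-attention idea on video features, the sketch below applies the common out = (1 + mask) * features composition to a 3D feature map. The two-attention-model design of the cited network is not reproduced; the mask shape and layer choices are assumptions.

```python
import torch
import torch.nn as nn

class ResidualAttention3D(nn.Module):
    """Toy residual attention block for video features: out = (1 + mask) * x."""
    def __init__(self, channels: int):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=1),  # per-position mask logits
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        return (1.0 + self.mask(x)) * x  # attention amplifies features without erasing them

clip_features = torch.randn(2, 32, 8, 14, 14)
print(ResidualAttention3D(32)(clip_features).shape)  # torch.Size([2, 32, 8, 14, 14])
```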

Joint CTC/Attention Korean ASR with CTC Ratio Scheduling (CTC Ratio Scheduling을 이용한 Joint CTC/Attention 한국어 음성인식)

  • Moon, YoungKi;Jo, YongRae;Cho, WonIk;Jo, GeunSik
    • Annual Conference on Human and Language Technology / 2020.10a / pp.37-41 / 2020
  • In this paper, we study end-to-end Korean speech recognition using a Joint CTC/Attention model with CTC ratio scheduling. Joint CTC/Attention combines the advantages of CTC and attention and outperforms either single model, but as training progresses the CTC branch becomes a factor that hinders the learning of the attention branch. To address this problem, we propose a CTC ratio scheduling method that gradually reduces the CTC ratio as training proceeds. We confirmed that a model trained with CTC ratio scheduling outperforms the existing Joint CTC/Attention model and a single attention model.
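
The hybrid objective behind this approach is a weighted sum of the CTC and attention losses, with the CTC weight reduced over training. A minimal sketch follows; the linear schedule and the 0.5 to 0.1 endpoints are illustrative assumptions, not values from the paper.

```python
def joint_ctc_attention_loss(ctc_loss: float, att_loss: float,
                             epoch: int, total_epochs: int,
                             ratio_start: float = 0.5, ratio_end: float = 0.1) -> float:
    """Hybrid loss ratio * L_CTC + (1 - ratio) * L_attention with a decaying CTC ratio."""
    progress = epoch / max(1, total_epochs - 1)
    ratio = ratio_start + (ratio_end - ratio_start) * progress
    return ratio * ctc_loss + (1.0 - ratio) * att_loss

# Hypothetical per-epoch weighting over a 10-epoch run.
for epoch in range(0, 10, 3):
    print(epoch, joint_ctc_attention_loss(2.0, 1.5, epoch, 10))
```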

Analysis of Effect by Duration of Cryotherapy in the Posterior region of Neck for College Students

  • Ji Hong Chang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.5 / pp.301-306 / 2023
  • Attention is a fundamental aspect of human cognition. The human cognitive system must focus on selected information among the vast amount of information arriving from the sensory organs. It has been widely studied that various environmental factors affect the level of attention; however, few studies have examined the effect of direct cryotherapy. In this research, the level of attention was studied by comparing the sub-indexes of the FAIR test between groups receiving different durations of direct cryotherapy to the back of the neck. The FAIR test is an evaluation tool for visual attention consisting of three sub-indexes: selective attention, accuracy of attention, and persistence of attention, which can be analyzed independently. In the analysis of selective attention, cryotherapy for 5 to 20 minutes showed higher results than cryotherapy for 40 minutes. In the analysis of persistence of attention, cryotherapy for 5 to 15 minutes showed higher results than cryotherapy for 40 minutes. Overall, selective attention and persistence of attention turn out to be maximized between 5 and 20 minutes of cryotherapy and tend to decrease afterwards, whereas accuracy of attention does not seem to be affected by the duration of cryotherapy. The correlation between selective attention and the skin temperature under cryotherapy tends to be negative, supporting the findings of the ANOVA and post-hoc tests, and the correlation between persistence of attention and skin temperature showed similar results.

Comparison of Pointer Network-based Dependency Parsers Depending on Attention Mechanisms (Attention Mechanism에 따른 포인터 네트워크 기반 의존 구문 분석 모델 비교)

  • Han, Mirae;Park, Seongsik;Kim, Harksoo
    • Annual Conference on Human and Language Technology / 2021.10a / pp.274-277 / 2021
  • Dependency parsing is a natural language processing task that analyzes sentence structure by predicting the relations between dependents and heads in a sentence. Recent deep learning based dependency parsing research mainly uses pointer networks. The performance of a pointer network can vary depending on the attention mechanism it uses internally. In this paper, we therefore compare and analyze the attention mechanisms applied to pointer network models and select the attention mechanism that is most effective for a Korean dependency parsing model. In experiments on the KLUE dataset, UAS was highest at 95.14% with biaffine attention, while LAS was highest at 92.85% with multi-head attention.
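
The comparison in this paper hinges on how the pointer scores candidate heads. The sketch below contrasts two generic scoring functions, a scaled dot product and a bilinear ("biaffine-style") score, for one dependent against all candidate heads. The tensor shapes and the simplified biaffine form (bilinear term only, no linear terms) are assumptions for illustration.

```python
import torch

def dot_scores(dep: torch.Tensor, heads: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product scores of one dependent token against all candidate heads."""
    return heads @ dep / dep.size(0) ** 0.5          # (n,)

def biaffine_scores(dep: torch.Tensor, heads: torch.Tensor, U: torch.Tensor) -> torch.Tensor:
    """Bilinear scores dep^T U head_j for all candidate heads."""
    return heads @ (U @ dep)                         # (n,)

# The pointer attaches the dependent to the highest-scoring head position.
d, n = 8, 6
dep, heads, U = torch.randn(d), torch.randn(n, d), torch.randn(d, d)
print(torch.softmax(biaffine_scores(dep, heads, U), dim=0).argmax().item())
```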

STAGCN-based Human Action Recognition System for Immersive Large-Scale Signage Content (몰입형 대형 사이니지 콘텐츠를 위한 STAGCN 기반 인간 행동 인식 시스템)

  • Jeongho Kim;Byungsun Hwang;Jinwook Kim;Joonho Seon;Young Ghyu Sun;Jin Young Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.6 / pp.89-95 / 2023
  • In recent decades, human action recognition (HAR) has demonstrated potential applications in sports analysis, human-robot interaction, and large-scale signage content. In this paper, a spatial temporal attention graph convolutional network (STAGCN)-based HAR system is proposed. Spatial-temporal features of skeleton sequences are assigned different weights by STAGCN, enabling key joints and viewpoints to be taken into account. Simulation results show that the proposed model improves classification accuracy on the NTU RGB+D dataset.
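
As a rough illustration of weighting key joints in a skeleton graph, the sketch below aggregates neighboring joints through an adjacency matrix and re-weights joints with a learned attention vector. This is a simplified stand-in under assumed shapes, not the STAGCN architecture from the paper.

```python
import torch
import torch.nn as nn

class JointAttentionGCN(nn.Module):
    """Toy skeleton graph convolution with learned per-joint attention weights."""
    def __init__(self, in_dim: int, out_dim: int, num_joints: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)
        self.joint_logits = nn.Parameter(torch.zeros(num_joints))

    def forward(self, x: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, joints, in_dim); adjacency: (joints, joints), row-normalized
        h = self.proj(adjacency @ x)                         # aggregate neighbors, then project
        joint_attn = torch.softmax(self.joint_logits, dim=0)
        return h * joint_attn.view(1, 1, -1, 1)              # emphasize key joints

skeletons = torch.randn(2, 16, 25, 3)   # 2 clips, 16 frames, 25 joints, (x, y, z)
adjacency = torch.eye(25)               # placeholder adjacency over joints
print(JointAttentionGCN(3, 64, 25)(skeletons, adjacency).shape)  # torch.Size([2, 16, 25, 64])
```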

Improved Deep Biaffine Attention for Korean Dependency Parsing (한국어 의존 구문 분석을 위한 개선된 Deep Biaffine Attention)

  • O, Dongsuk;Woo, Jongseong;Lee, Byungwoo;Kim, Kyungsun
    • Annual Conference on Human and Language Technology / 2018.10a / pp.608-610 / 2018
  • Korean dependency parsing is a natural language analysis method that expresses the dependency relations between the heads and modifiers of the eojeols (word phrases) in a sentence. Recently, models that combine an attention mechanism with LSTM (Long Short Term Memory) have shown high performance in representing these dependency relations. In this paper, we propose an improved Biaffine Attention dependency parsing model. The proposed model improves the way the existing Biaffine Attention determines dependencies and dependency relations, and by extending the morpheme representation of the input sequence for Korean dependency parsing, it achieves a UAS (Unlabeled Attachment Score) 0.15%p higher than the existing model.
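
For context, the biaffine arc score that this family of parsers builds on can be written as below; the notation is the generic deep-biaffine form rather than the exact parameterization used in this paper.

```latex
% Generic biaffine arc score between dependent i and candidate head j (notation assumed).
s_{\mathrm{arc}}(i, j) =
  \mathbf{h}^{\mathrm{dep}\top}_{i} \, U \, \mathbf{h}^{\mathrm{head}}_{j}
  + \mathbf{w}^{\top} \left[ \mathbf{h}^{\mathrm{dep}}_{i} ; \mathbf{h}^{\mathrm{head}}_{j} \right]
  + b
```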
