• Title/Abstract/Keywords: Action recognition

Search results: 408

사람 행동 인식에서 반복 감소를 위한 저수준 사람 행동 변화 감지 방법 (Detection of Low-Level Human Action Change for Reducing Repetitive Tasks in Human Action Recognition)

  • 노요환;김민정;이도훈
    • 한국멀티미디어학회논문지
    • /
    • Vol.22 No.4
    • /
    • pp.432-442
    • /
    • 2019
  • Most current human action recognition methods are based on deep learning, which incurs a very high computational cost. In this paper, we propose an action change detection method that reduces repetitive human action recognition tasks. In practice, simple actions are often repeated, and applying costly recognition methods to every repetition is time consuming. The proposed method decides whether the action has changed, and full action recognition is executed only when a change is detected. The detection process is as follows. First, the number of non-zero pixels is extracted from the motion history image to generate one-dimensional time-series data. Second, an action change is detected by comparing the difference between the current trend and the local extremum of the time-series data against a threshold. In experiments, the proposed method achieved 89% balanced accuracy on action change data and reduced action recognition repetition by 61%.
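The two-stage detection above (an MHI pixel-count time series, then a comparison against the last local extremum) can be sketched as follows; the MHI formulation, thresholds, and function names are illustrative assumptions rather than the paper's actual code.

```python
import numpy as np

def motion_history_image(frames, tau=10, thresh=30):
    """Accumulate a simple motion history image over grayscale frames.

    Pixels with recent motion get value tau; older motion decays by 1
    per frame (a standard MHI formulation, not the paper's exact one).
    """
    mhi = np.zeros(frames[0].shape, dtype=np.float32)
    prev = frames[0].astype(np.int16)
    for f in frames[1:]:
        cur = f.astype(np.int16)
        moving = np.abs(cur - prev) > thresh
        mhi = np.where(moving, tau, np.maximum(mhi - 1, 0))
        prev = cur
    return mhi

def detect_change(series, delta):
    """Flag an action change when the current value of the non-zero
    pixel-count series departs from its last local extremum by more
    than delta."""
    changes = []
    extremum = series[0]
    for t in range(1, len(series) - 1):
        # track the most recent local extremum (peak or valley)
        if (series[t] >= series[t - 1] and series[t] >= series[t + 1]) or \
           (series[t] <= series[t - 1] and series[t] <= series[t + 1]):
            extremum = series[t]
        if abs(series[t] - extremum) > delta:
            changes.append(t)
            extremum = series[t]
    return changes
```

Only frames flagged by `detect_change` would then be passed to the expensive recognition network.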

시공간 템플릿과 컨볼루션 신경망을 사용한 깊이 영상 기반의 사람 행동 인식 (Depth Image-Based Human Action Recognition Using Convolution Neural Network and Spatio-Temporal Templates)

  • 음혁민;윤창용
    • 전기학회논문지
    • /
    • Vol.65 No.10
    • /
    • pp.1731-1737
    • /
    • 2016
  • In this paper, a method is proposed to recognize human actions as nonverbal expressions; the method is composed of two steps, action representation and action recognition. First, an MHI (Motion History Image) is used in the action representation step. This step includes segmentation based on depth information and generates spatio-temporal templates that describe actions. Second, a CNN (Convolutional Neural Network), which performs feature extraction and classification, is employed in the action recognition step. It extracts convolutional feature vectors and then uses a classifier to recognize actions. The recognition performance of the proposed method is demonstrated by comparison with other action recognition methods in the experimental results.
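A minimal sketch of the action representation step, depth-based segmentation followed by an MHI-style spatio-temporal template, is given below; the depth band, decay constant, and function names are assumptions, and the CNN classification stage is omitted.

```python
import numpy as np

def segment_by_depth(depth, near, far):
    """Foreground mask for pixels within a depth band (assumed to
    contain the person); a stand-in for the paper's segmentation."""
    return ((depth >= near) & (depth <= far)).astype(np.uint8)

def spatio_temporal_template(depth_frames, near, far, tau=255, decay=32):
    """Build an MHI-style template from segmented depth frames:
    the newest motion is brightest, older motion fades by `decay`."""
    mhi = np.zeros(depth_frames[0].shape, dtype=np.float32)
    prev = segment_by_depth(depth_frames[0], near, far)
    for d in depth_frames[1:]:
        mask = segment_by_depth(d, near, far)
        changed = mask != prev                 # silhouette change
        mhi = np.where(changed, tau, np.maximum(mhi - decay, 0))
        prev = mask
    return mhi.astype(np.uint8)
```

The resulting single-channel template image is what a CNN would consume for classification.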

ADD-Net: Attention Based 3D Dense Network for Action Recognition

  • Man, Qiaoyue;Cho, Young Im
    • 한국컴퓨터정보학회논문지
    • /
    • Vol.24 No.6
    • /
    • pp.21-28
    • /
    • 2019
  • In recent years, with the development of artificial intelligence and the success of deep models, deep learning has been deployed in all fields of computer vision. Action recognition, an important branch of human perception and computer vision research, has attracted more and more attention. It is a challenging task due to the special complexity of human movement: the same movement may vary across individuals. Human actions exist as continuous image frames in video, so action recognition requires more computational power than processing static images, and simple use of a CNN cannot achieve the desired results. Recently, attention models have achieved good results in computer vision and natural language processing. In particular, for video action classification, adding an attention model makes it more effective to focus on motion features and improves performance. It also intuitively explains which part the model attends to when making a particular decision, which is very helpful in real applications. In this paper, we propose a 3D dense convolutional network based on an attention mechanism (ADD-Net) for recognizing human motion in video.
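As a rough illustration of attention over 3D convolutional features, the sketch below applies squeeze-and-excitation style channel attention to a (C, T, H, W) feature map; this is a generic attention block for intuition, not the ADD-Net architecture itself, and the weights `w1`/`w2` are placeholders for learned parameters.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention over a 3D
    feature map of shape (C, T, H, W)."""
    c = feat.shape[0]
    # squeeze: global average pool over the spatio-temporal axes
    z = feat.reshape(c, -1).mean(axis=1)           # (C,)
    # excitation: bottleneck MLP with a sigmoid gate
    h = np.maximum(w1 @ z, 0)                      # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))         # sigmoid, (C,)
    # reweight channels so motion-relevant ones dominate
    return feat * gate[:, None, None, None]
```

In a full network this gate would be learned end to end and inserted between dense 3D convolution blocks.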

Action Recognition Method in Sports Video Shear Based on Fish Swarm Algorithm

  • Jie Sun;Lin Lu
    • Journal of Information Processing Systems
    • /
    • Vol.19 No.4
    • /
    • pp.554-562
    • /
    • 2023
  • This research offers a sports video action recognition approach based on the fish swarm algorithm, in light of the low accuracy of existing sports video action recognition methods. A modified fish swarm algorithm is proposed to construct invariant features and reduce their dimensionality; based on this algorithm, local and global features can be classified. Experimental findings on a typical sports action data set demonstrate that the key details of sports actions are successfully retained by the dimensionality-reduced fused invariant features. According to this research, the average recognition time of the proposed method for walking, running, squatting, sitting, and bending is less than 326 seconds, and the average recognition rate is higher than 94%. This shows that the method can significantly improve the performance and efficiency of online sports video motion recognition.
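The sketch below shows the flavor of a fish-swarm search for a reduced feature set: each "fish" is a candidate feature subset improved by prey (local swaps) and follow (move toward the best fish) behaviors. The fitness function, behaviors, and all parameters are simplified assumptions, not the paper's modified algorithm.

```python
import numpy as np

def afsa_select(features, labels, n_keep, n_fish=8, steps=30, visual=2, seed=0):
    """Toy artificial-fish-swarm search for a low-dimensional feature
    subset. Fitness is a simple between-class separation score."""
    rng = np.random.default_rng(seed)
    d = features.shape[1]

    def fitness(idx):
        f = features[:, idx]
        m0 = f[labels == 0].mean(axis=0)
        m1 = f[labels == 1].mean(axis=0)
        return float(np.linalg.norm(m0 - m1))

    fish = [rng.choice(d, n_keep, replace=False) for _ in range(n_fish)]
    for _ in range(steps):
        scores = [fitness(f) for f in fish]
        best = fish[int(np.argmax(scores))].copy()
        best_score = max(scores)
        for i in range(n_fish):
            cand = fish[i].copy()
            for _ in range(visual):              # prey: try local swaps
                slot, new = rng.integers(n_keep), rng.integers(d)
                if new not in cand:
                    cand[slot] = new
            if fitness(cand) > scores[i]:
                fish[i] = cand
            elif best_score > scores[i]:         # follow the best fish
                fish[i] = best.copy()
    scores = [fitness(f) for f in fish]
    return np.sort(fish[int(np.argmax(scores))])
```

With a discriminative feature planted in one dimension, the swarm reliably converges on it.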

로컬푸드 체험관광이 행동의도에 미치는 관계에서 소비자 인식의 매개효과 (Mediated Effects of Consumer Recognition in Relationship of Local Food Tour Experience and Intention of Action)

  • 김희동
    • 한국유기농업학회지
    • /
    • Vol.22 No.1
    • /
    • pp.81-96
    • /
    • 2014
  • This study aims to examine the mediated effect of consumer recognition on the relationship between local food tour experience and intention of action, in the context of revitalizing local food. The questionnaire survey targeted women in their 30s and 40s. Local food tour experience is the independent variable, intention of action the dependent variable, and consumer recognition is analyzed as the mediating variable. As a result, consumer recognition, the mediating variable, has two subordinate components: direct affect and indirect affect. Between local food tour experience and intention of action there was a partial mediating effect. Thus, through the tour experience, consumers can form a positive recognition of freshness, safety, health, taste, price, job creation, and relationships, which in turn affects intention of action. Based on these results, it is necessary to learn from successful marketing revitalization cases and to develop and operate experiential tour education programs for continuous customer management.

Real-Time Cattle Action Recognition for Estrus Detection

  • Heo, Eui-Ju;Ahn, Sung-Jin;Choi, Kang-Sun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol.13 No.4
    • /
    • pp.2148-2161
    • /
    • 2019
  • In this paper, we present a real-time cattle action recognition algorithm that detects the estrus phase of cattle from a live video stream. In order to classify cattle movement, specifically, to detect the mounting action, the most observable sign of the estrus phase, a simple yet effective feature description exploiting motion history images (MHI) is designed. By learning the proposed features within the support vector machine framework, various representative cattle actions, such as mounting, walking, tail wagging, and foot stamping, can be recognized robustly in complex scenes. Thanks to the low complexity of the proposed action recognition algorithm, multiple cattle in three enclosures can be monitored simultaneously using a single fisheye camera. Through extensive experiments with real video streams, we confirmed that the proposed algorithm outperforms a conventional human action recognition algorithm by 18% in terms of recognition accuracy, even with a much lower-dimensional feature description.
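As a sketch of the learning stage, the block below trains a minimal linear SVM by sub-gradient descent on the hinge loss; the paper's actual SVM framework and MHI-based feature description are abstracted into generic feature vectors, and all names and hyperparameters are illustrative.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM trained by batch sub-gradient descent on the
    regularized hinge loss. X: (n, d) features, y: labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                     # hinge-loss violators
        grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, X):
    """Sign of the decision function, e.g. +1 mounting, -1 otherwise."""
    return np.sign(X @ w + b)
```

A multi-class action set (walking, tail wagging, foot stamping, ...) would use one such classifier per action in a one-vs-rest scheme.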

A Proposal of Shuffle Graph Convolutional Network for Skeleton-based Action Recognition

  • Jang, Sungjun;Bae, Han Byeol;Lee, HeanSung;Lee, Sangyoun
    • 한국정보전자통신기술학회논문지
    • /
    • Vol.14 No.4
    • /
    • pp.314-322
    • /
    • 2021
  • Skeleton-based action recognition has attracted considerable attention in human action recognition. Recent methods for skeleton-based action recognition employ spatio-temporal graph convolutional networks (GCNs) and achieve remarkable performance. However, most of them are computationally too heavy for robust action recognition. To solve this problem, we propose a shuffle graph convolutional network (SGCN), a lightweight graph convolutional network that uses pointwise group convolution rather than pointwise convolution to reduce computational cost. Our SGCN is composed of spatial and temporal GCNs. The spatial shuffle GCN contains pointwise group convolution and a part shuffle module that enhances the exchange of local and global information between correlated joints. In addition, the temporal shuffle GCN contains depthwise convolution to maintain a large receptive field. Our model achieves comparable performance at the lowest computational cost and exceeds the performance of the baseline by 0.3% and 1.2% on the NTU RGB+D and NTU RGB+D 120 datasets, respectively.
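The two ingredients named above, pointwise group convolution and channel shuffle, can be illustrated on a (channels, joints) feature map as follows; shapes and names are simplified assumptions (the temporal dimension and the graph adjacency are omitted).

```python
import numpy as np

def pointwise_group_conv(x, w, groups):
    """Pointwise (1x1) group convolution on x of shape (C_in, V), where
    V is the number of skeleton joints. w: (groups, C_out_per_g, C_in_per_g).
    Each group only mixes its own slice of channels, cutting FLOPs by
    a factor of `groups` versus a full pointwise convolution."""
    c_in = x.shape[0]
    per_g = c_in // groups
    outs = []
    for g in range(groups):
        xg = x[g * per_g:(g + 1) * per_g]      # this group's channels
        outs.append(w[g] @ xg)                 # 1x1 conv == matmul
    return np.concatenate(outs, axis=0)

def channel_shuffle(x, groups):
    """Interleave channels across groups so information still flows
    between them (the 'shuffle' in ShuffleNet-style blocks)."""
    c, v = x.shape
    return x.reshape(groups, c // groups, v).transpose(1, 0, 2).reshape(c, v)
```

Stacking group convolutions without the shuffle would keep the groups permanently isolated, which is why the shuffle step is essential.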

다중 시점 영상 시퀀스를 이용한 강인한 행동 인식 (Robust Action Recognition Using Multiple View Image Sequences)

  • 아마드;이성환
    • 한국정보과학회:학술대회논문집
    • /
    • 한국정보과학회 2006 Fall Conference Proceedings Vol.33 No.2 (B)
    • /
    • pp.509-514
    • /
    • 2006
  • Human action recognition is an active research area in computer vision. In this paper, we present a robust method for human action recognition that combines human body shape and motion information from multiple-view image sequences. Principal component analysis is used to extract the shape features of the human body, and multiple-block motion of the human body is used to extract its motion features. This combined information across multiple view sequences enhances the recognition of human actions. We represent each action using a set of hidden Markov models, modeling each action from multiple views, which enables human action recognition from arbitrary view information. Several daily actions of elderly persons are modeled and tested using this approach and are correctly classified, which indicates the robustness of our method.
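The classification step, scoring the observation sequence under one HMM per action and picking the best, can be sketched with a discrete-observation forward algorithm; the model parameters below are toy values, and the PCA shape and block-motion features are abstracted into discrete symbols.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial probs, A: transitions, B: emissions), computed with
    the scaled forward algorithm for numerical stability."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

def classify(obs, models):
    """Pick the action whose HMM best explains the sequence."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))
```

With one HMM per (action, view) pair, the same maximum-likelihood comparison yields view-robust recognition.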


Human Action Recognition Bases on Local Action Attributes

  • Zhang, Jing;Lin, Hong;Nie, Weizhi;Chaisorn, Lekha;Wong, Yongkang;Kankanhalli, Mohan S
    • Journal of Electrical Engineering and Technology
    • /
    • Vol.10 No.3
    • /
    • pp.1264-1274
    • /
    • 2015
  • Human action recognition has received much interest in the computer vision community. Most existing methods focus either on constructing a robust descriptor in the temporal domain or on computational methods that exploit the discriminative power of the descriptor. In this paper we explore the idea of using local action attributes to form an action descriptor, where an action is no longer characterized by motion changes in the temporal domain but by a local semantic description of the action. We propose a novel framework that introduces local action attributes to represent an action for the final human action categorization. The local action attributes are defined for each body part and are independent of the global action. The resulting attribute descriptor is used to jointly model human actions and achieves robust performance. In addition, we study the impact of using local and global low-level body features for the aforementioned attributes. Experiments on the KTH dataset and the MV-TJU dataset show that our local-action-attribute-based descriptor improves action recognition performance.
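A minimal sketch of assembling per-body-part attribute scores into one action descriptor is given below; the linear scorers and part names are illustrative stand-ins for the paper's learned attribute classifiers.

```python
import numpy as np

def attribute_descriptor(part_features, attribute_classifiers):
    """Concatenate per-body-part attribute scores into one action
    descriptor: each part's low-level feature vector is scored against
    that part's attribute classifiers (here, simple linear scorers)."""
    scores = []
    for part, feat in part_features.items():
        for w in attribute_classifiers[part]:
            scores.append(float(w @ feat))     # attribute confidence
    return np.array(scores)
```

The final categorization would then train any standard classifier on this semantic descriptor instead of on raw temporal features.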

모션 히스토리 영상 및 기울기 방향성 히스토그램과 적출 모델을 사용한 깊이 정보 기반의 연속적인 사람 행동 인식 시스템 (Depth-Based Recognition System for Continuous Human Action Using Motion History Image and Histogram of Oriented Gradient with Spotter Model)

  • 음혁민;이희진;윤창용
    • 한국지능시스템학회논문지
    • /
    • Vol.26 No.6
    • /
    • pp.471-476
    • /
    • 2016
  • This paper describes a system that recognizes continuous human actions from depth information using motion history images, histograms of oriented gradients, and a spotter model, and proposes a spotter model that performs action spotting to improve recognition performance in a continuous action recognition system. The system consists of preprocessing, human action and spotter modeling, and continuous human action recognition. In preprocessing, the Depth-MHI-HOG method is used for image segmentation and for extracting spatio-temporal template-based features; the extracted features are converted into sequences through the human action and spotter modeling step. From these sequences and hidden Markov models, a human action model for each defined action and the proposed spotter model are generated. Continuous human action recognition then proceeds in two stages: action spotting, in which the spotter model separates meaningful from meaningless actions within a continuous action sequence, and recognition, in which the model probabilities of each meaningful action sequence are compared. Experimental results verify that the proposed model effectively improves recognition performance in a continuous action recognition system.
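The action spotting idea, marking frames where an action model outscores the spotter (non-meaningful) model and taking contiguous runs as candidate segments, can be sketched as follows; the per-frame scores are assumed to come from HMM likelihoods, and names are illustrative.

```python
import numpy as np

def spot_actions(action_scores, spotter_scores):
    """Given per-frame log-scores from the best action model and from
    the spotter (non-action) model, mark frames where an action model
    beats the spotter; contiguous runs become candidate action
    segments, returned as half-open (start, end) frame intervals."""
    meaningful = action_scores > spotter_scores
    segments, start = [], None
    for t, m in enumerate(meaningful):
        if m and start is None:
            start = t                      # segment opens
        elif not m and start is not None:
            segments.append((start, t))    # segment closes
            start = None
    if start is not None:
        segments.append((start, len(meaningful)))
    return segments
```

Each spotted segment would then be classified by comparing the action models' probabilities over that segment alone.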