• Title/Abstract/Keywords: Human action recognition


Spatial-Temporal Scale-Invariant Human Action Recognition using Motion Gradient Histogram

  • 김광수;김태형;곽수영;변혜란
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 34, No. 12 / pp.1075-1082 / 2007
  • This paper proposes a method that detects the actions of multiple people appearing in a video and recognizes each detected action individually. To make recognition robust to variations in the speed or scale at which an action is performed, a spatial-temporal pyramid scheme is applied. The motion gradient histogram (MGH), a statistics-based representation, is chosen as the action descriptor to minimize the complexity of the recognition stage. To detect multiple actions, the motion energy image (MEI) method, which accumulates binary difference images, is applied to efficiently obtain the region of each individual action. Each region is represented by an MGH, the pyramid scheme is applied for scale invariance, and the final recognition result is obtained by comparing similarity against learned template MGHs. For evaluation, experiments on ten videos covering single subjects, multiple subjects, speed and scale variations, comparisons with existing methods, and other additional tests confirmed good recognition results under various conditions.
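The MEI step summarized above, accumulating binary difference images to mark where any motion occurred, can be sketched roughly as follows. The toy frames, threshold, and window length are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def motion_energy_image(frames, threshold=15):
    """Accumulate thresholded frame differences into a single binary
    motion energy image (MEI) marking where any motion occurred."""
    frames = np.asarray(frames, dtype=np.int16)
    mei = np.zeros(frames.shape[1:], dtype=bool)
    for prev, curr in zip(frames[:-1], frames[1:]):
        mei |= np.abs(curr - prev) > threshold
    return mei

# Toy example: a bright 2x2 block moves one pixel to the right each frame.
frames = np.zeros((4, 8, 8), dtype=np.uint8)
for t in range(4):
    frames[t, 3:5, t:t + 2] = 255

mei = motion_energy_image(frames)
```

Connected regions of the resulting binary map would then be taken as individual action regions, each described by its own MGH.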

A Tree Regularized Classifier: Exploiting Hierarchical Structure Information in Feature Vector for Human Action Recognition

  • Luo, Huiwu;Zhao, Fei;Chen, Shangfeng;Lu, Huanzhang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 3 / pp.1614-1632 / 2017
  • Bag of visual words is a popular model in human action recognition, but it usually suffers from the loss of spatial and temporal configuration information of local features and from large quantization error in its feature-coding procedure. In this paper, to overcome these two deficiencies, we combine sparse coding with a spatio-temporal pyramid for human action recognition, and regard this method as the baseline. More importantly, and this is the focus of this paper, we find that there is a hierarchical structure in the feature vector constructed by the baseline method. To exploit this hierarchical structure information for better recognition accuracy, we propose a tree regularized classifier that conveys the hierarchical structure information. The main contributions of this paper can be summarized as follows: first, we introduce a tree regularized classifier to encode the hierarchical structure information in the feature vector for human action recognition. Second, we present an optimization algorithm to learn the parameters of the proposed classifier. Third, the performance of the proposed classifier is evaluated on the YouTube, Hollywood2, and UCF50 datasets; the experimental results show that the proposed tree regularized classifier obtains better performance than SVM and other popular classifiers and achieves promising results on all three datasets.

Real-Time Cattle Action Recognition for Estrus Detection

  • Heo, Eui-Ju;Ahn, Sung-Jin;Choi, Kang-Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 4 / pp.2148-2161 / 2019
  • In this paper, we present a real-time cattle action recognition algorithm to detect the estrus phase of cattle from a live video stream. In order to classify cattle movement, specifically to detect the mounting action, the most observable sign of the estrus phase, a simple yet effective feature description exploiting motion history images (MHI) is designed. By learning the proposed features within the support vector machine framework, various representative cattle actions, such as mounting, walking, tail wagging, and foot stamping, can be recognized robustly in complex scenes. Thanks to the low complexity of the proposed action recognition algorithm, multiple cattle in three enclosures can be monitored simultaneously using a single fisheye camera. Through extensive experiments with real video streams, we confirmed that the proposed algorithm outperforms a conventional human action recognition algorithm by 18% in recognition accuracy, even with a much lower-dimensional feature description.
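A motion history image, unlike the binary MEI, also records how recently each pixel moved. A minimal sketch of one MHI update step, with an illustrative decay constant and threshold rather than the paper's actual descriptor design, might look like:

```python
import numpy as np

def update_mhi(mhi, prev, curr, tau=5, threshold=15):
    """One motion-history-image update step: pixels that moved are set to
    tau, all others decay by 1 toward zero."""
    moving = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > threshold
    return np.where(moving, tau, np.maximum(mhi - 1, 0))

# Toy sequence: a single bright pixel sweeps rightward along one row.
frames = np.zeros((4, 6, 6), dtype=np.uint8)
for t in range(4):
    frames[t, 2, t] = 255

mhi = np.zeros((6, 6), dtype=np.int16)
for prev, curr in zip(frames[:-1], frames[1:]):
    mhi = update_mhi(mhi, prev, curr)
```

The fading trail in `mhi` encodes motion direction and recency, which is what makes MHI-based features useful for distinguishing actions such as mounting from walking.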

A Proposal of Shuffle Graph Convolutional Network for Skeleton-based Action Recognition

  • Jang, Sungjun;Bae, Han Byeol;Lee, HeanSung;Lee, Sangyoun
    • 한국정보전자통신기술학회논문지 / Vol. 14, No. 4 / pp.314-322 / 2021
  • Skeleton-based action recognition has attracted considerable attention in human action recognition. Recent methods for skeleton-based action recognition employ spatio-temporal graph convolutional networks (GCNs) and achieve remarkable performance. However, most of them have heavy computational complexity. To solve this problem, we propose a shuffle graph convolutional network (SGCN), a lightweight graph convolutional network that uses pointwise group convolution rather than pointwise convolution to reduce computational cost. Our SGCN is composed of a spatial and a temporal GCN. The spatial shuffle GCN contains pointwise group convolution and a part-shuffle module that enhances local and global information exchange between correlated joints. In addition, the temporal shuffle GCN contains depthwise convolution to maintain a large receptive field. Our model achieves comparable performance with the lowest computational cost and exceeds the baseline by 0.3% and 1.2% on the NTU RGB+D and NTU RGB+D 120 datasets, respectively.
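Pointwise group convolution alone blocks information flow between channel groups, so shuffle-style networks interleave channels between groups after each grouped layer. A minimal numpy sketch of that channel-shuffle operation on a skeleton tensor (batch, channels, frames, joints), as an illustration of the general trick rather than the paper's exact module:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups so information can flow between
    groups after a pointwise *group* convolution."""
    n, c, t, v = x.shape            # batch, channels, frames, joints
    assert c % groups == 0
    x = x.reshape(n, groups, c // groups, t, v)
    x = x.transpose(0, 2, 1, 3, 4)  # swap the group and per-group axes
    return x.reshape(n, c, t, v)

x = np.arange(6).reshape(1, 6, 1, 1)   # channels tagged 0..5
y = channel_shuffle(x, groups=2)       # channel order becomes 0,3,1,4,2,5
```

Because the shuffle is a pure reshape/transpose, it adds no parameters or multiply-adds, which is why it pairs well with cheap grouped convolutions.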

Human Action Recognition by Inference of Stochastic Regular Grammars

  • 조경은;조형제
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터 / Vol. 7, No. 3 / pp.248-259 / 2001
  • Aiming at the automatic analysis of nonverbal human behavior, this paper proposes a method for recognizing 60 basic upper-body actions. Stochastic grammar inference is used to recognize human body actions, and arbitrary actions are recognized by analyzing the movements of all joints. The real-world 3D coordinates of each joint, used as system input, are quantized at regular intervals, projected onto the xy and zy planes, and then 4-direction coded to convert them into an input format suitable for stochastic grammar inference. In addition, considering that proximity information (the relationship between the hands and other body parts) is an important cue for distinguishing actions in nonverbal behavior analysis, the stochastic grammar inference method is extended, and the experimental results confirm that the recognition rate improves compared with the standard stochastic grammar inference method.
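The 4-direction coding step described above turns a projected joint trajectory into a symbol string that a grammar-inference algorithm can consume. A minimal sketch, with an illustrative trajectory and an assumed nearest-axis rule for ties:

```python
import numpy as np

def four_direction_code(points):
    """Chain-code a 2-D trajectory: each step becomes the nearest of the
    four axis directions (0:+x, 1:+y, 2:-x, 3:-y)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            codes.append(0 if dx >= 0 else 2)
        else:
            codes.append(1 if dy > 0 else 3)
    return codes

# A hand moving right, then up, then left in the projected xy plane.
traj = [(0, 0), (2, 0), (2, 3), (0, 3)]
codes = four_direction_code(traj)   # symbol string fed to grammar inference
```

One such string per joint per projection plane would then serve as a training sample for inferring a stochastic regular grammar.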


Human Action Recognition Using Pyramid Histograms of Oriented Gradients and Collaborative Multi-task Learning

  • Gao, Zan;Zhang, Hua;Liu, An-An;Xue, Yan-Bing;Xu, Guang-Ping
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 2 / pp.483-503 / 2014
  • In this paper, human action recognition using pyramid histograms of oriented gradients and collaborative multi-task learning is proposed. First, we accumulate global activities and construct motion history images (MHI) for the RGB and depth channels respectively to encode the dynamics of an action in different modalities, and then different action descriptors are extracted from the depth and RGB MHIs to represent the global textural and structural characteristics of these actions. Specifically, average value in hierarchical blocks, GIST, and pyramid histograms of oriented gradients descriptors are employed to represent human motion. To demonstrate the superiority of the proposed method, we evaluate the descriptors with KNN, SVM with linear and RBF kernels, SRC, and CRC models on the DHA dataset, a well-known dataset for human action recognition. Large-scale experimental results show that our descriptors are robust, stable, and efficient, and outperform the state-of-the-art methods. In addition, we investigate the performance of our descriptors further by combining them on the DHA dataset, and observe that the combined descriptors perform much better than any single descriptor alone. With multimodal features, we also propose a collaborative multi-task learning method for model learning and inference based on transfer learning theory. The main contributions lie in four aspects: 1) the proposed encoding scheme can filter out the stationary parts of the human body and reduce noise interference; 2) different kinds of features and models are assessed, and the neighbor gradient information and pyramid layers are very helpful for representing these actions; 3) the proposed model can fuse features from different modalities regardless of the sensor types, the ranges of the values, and the dimensions of the different features; 4) the latent common knowledge among different modalities can be discovered by transfer learning to boost performance.
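A pyramid histogram of oriented gradients concatenates orientation histograms computed over successively finer spatial grids (1x1, 2x2, 4x4, ...). A rough numpy sketch of that idea applied to a single MHI-like image; the bin count, level count, and unsigned-orientation choice are illustrative assumptions:

```python
import numpy as np

def phog(image, levels=2, bins=8):
    """Pyramid histogram of oriented gradients: magnitude-weighted
    orientation histograms over a 1x1, 2x2, 4x4 grid, concatenated."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    h, w = image.shape
    feats = []
    for lvl in range(levels + 1):
        n = 2 ** lvl
        for i in range(n):
            for j in range(n):
                cell = (slice(i * h // n, (i + 1) * h // n),
                        slice(j * w // n, (j + 1) * w // n))
                hist, _ = np.histogram(ang[cell], bins=bins,
                                       range=(0, np.pi), weights=mag[cell])
                feats.append(hist)
    return np.concatenate(feats)

vec = phog(np.random.rand(16, 16))   # (1 + 4 + 16) cells x 8 bins = 168 dims
```

Coarse levels capture the overall motion shape while fine levels add spatial layout, which is the structural information a single global histogram would lose.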

A Deep Learning Algorithm for Fusing Action Recognition and Psychological Characteristics of Wrestlers

  • Yuan Yuan;Yuan Yuan;Jun Liu
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 3 / pp.754-774 / 2023
  • Wrestling is a popular modern sport, but a wrestling match between athletes is difficult to describe quantitatively, and deep learning can assist wrestling training through human recognition techniques. Based on the characteristics of the latest wrestling competition rules and human recognition technologies, a wrestling competition video analysis and retrieval system is proposed. The system uses a combination of literature review, observation, interviews, and mathematical statistics to collect statistics on, analyze, and discuss the application of the technology, and it is applied to targeted movement techniques. A deep learning-based facial recognition method for analyzing the psychological characteristics of classical wrestlers in training and competition after the implementation of the new rules is proposed. The experimental results show that the proportion of natural emotions among male and female wrestlers was about 50%, indicating that the wrestlers' mentality was relatively stable before the intense physical confrontation, and testing also demonstrated the stability of the system.

Human Action Recognition in Still Images Using Weighted Bag-of-Features and Ensemble Decision Trees

  • 홍준혁;고병철;남재열
    • 한국통신학회논문지 / Vol. 38A, No. 1 / pp.1-9 / 2013
  • This paper proposes an algorithm that recognizes human actions by generating a bag of features (BoF) using center-symmetric local binary pattern (CS-LBP) features and a spatial pyramid, and applying it to a random forest classifier. To build the BoF, the image is divided into uniform patches and CS-LBP features are extracted from each patch. To improve action classification performance, K-means clustering is applied to the feature vectors extracted from the patches to generate a codebook. To account for the local characteristics of the image, the spatial pyramid method is applied, and the BoF extracted at each spatial level is weighted and combined into a single final feature vector. For action classification, a random forest, an ensemble of decision trees, builds a classification model for each action class during training. The random forest with the weighted BoF successfully classified the Stanford 40 Actions dataset, which contains diverse human action images. Compared with existing methods, the classification performance was similar or better, with fast recognition speed per image.
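CS-LBP halves the code length of standard LBP by comparing only the four center-symmetric pixel pairs of each 8-neighbourhood, giving a 4-bit code per pixel. A rough vectorized numpy sketch; the threshold and patch size are illustrative, not the paper's settings:

```python
import numpy as np

def cs_lbp(patch, threshold=0.01):
    """Center-symmetric LBP: compare the 4 center-symmetric pixel pairs
    around each interior pixel, yielding a 4-bit code (16 patterns)."""
    p = patch.astype(float)
    c = p[1:-1, 1:-1]                          # interior pixels
    pairs = [(p[:-2, 1:-1], p[2:, 1:-1]),      # N  vs S
             (p[:-2, 2:],   p[2:, :-2]),       # NE vs SW
             (p[1:-1, 2:],  p[1:-1, :-2]),     # E  vs W
             (p[2:, 2:],    p[:-2, :-2])]      # SE vs NW
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= (a - b > threshold).astype(np.uint8) << bit
    return code

codes = cs_lbp(np.random.rand(8, 8))               # 6x6 map of 4-bit codes
hist = np.bincount(codes.ravel(), minlength=16)    # per-patch 16-bin histogram
```

Per-patch histograms like `hist` are what K-means would then quantize into codebook words for the weighted BoF.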

User Customizable Hit Action Recognition Method using Kinect

  • 최윤연;탕 지아메이;장승은;김상욱
    • 한국멀티미디어학회논문지 / Vol. 18, No. 4 / pp.557-564 / 2015
  • Many prior studies have pursued more natural human-computer interaction, and efforts to recognize motions in various directions continue. In this paper, we propose a user-customizable hit recognition method using a Kinect camera and human body proportions. The algorithm first recognizes the user's body and extracts a user-specific valid recognition range. It then corrects the difference in horizontal position between the user and the Kinect, so that a user's action can be estimated by matching a cursor to a target using only one frame. The proposed method is shown to enable efficient hand recognition in games.

Multiscale Spatial Position Coding under Locality Constraint for Action Recognition

  • Yang, Jiang-feng;Ma, Zheng;Xie, Mei
    • Journal of Electrical Engineering and Technology / Vol. 10, No. 4 / pp.1851-1863 / 2015
  • In this paper, to handle the problem that the traditional bag-of-features model ignores the spatial relationships of local features in human action recognition, we propose a multiscale spatial position coding under locality constraint method. Specifically, to describe this spatial relationship, we propose a mixed feature combining a motion feature with a multi-spatial-scale configuration. To utilize temporal information between features, sub spatio-temporal volumes (sub-STVs) are built. Next, the pooled features of the sub-STVs are obtained via a max-pooling method. In the classification stage, locality-constrained group sparse representation is adopted to utilize the intrinsic group information of the sub-STV features. The experimental results on the KTH, Weizmann, and UCF Sports datasets show that our action recognition system outperforms recently published recognition systems based on classical local spatio-temporal features.
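The max-pooling step over sub-STVs described above can be sketched in a few lines: coded local features are grouped by the sub-volume they fall into, max-pooled per group, and concatenated. The feature dimensions and group assignments below are toy assumptions:

```python
import numpy as np

def max_pool_sub_stvs(coded_features, stv_ids):
    """Max-pool coded local features within each sub spatio-temporal
    volume (sub-STV), then concatenate the pooled vectors."""
    pooled = [coded_features[stv_ids == k].max(axis=0)
              for k in np.unique(stv_ids)]
    return np.concatenate(pooled)

# Five local features with 4-dim sparse codes, split across two sub-STVs.
codes = np.array([[0.1, 0.9, 0.0, 0.2],
                  [0.5, 0.2, 0.3, 0.0],
                  [0.0, 0.0, 0.8, 0.1],
                  [0.4, 0.1, 0.2, 0.6],
                  [0.2, 0.3, 0.1, 0.5]])
ids = np.array([0, 0, 0, 1, 1])
vec = max_pool_sub_stvs(codes, ids)   # one pooled vector per sub-STV
```

Keeping one pooled vector per sub-STV is what preserves the coarse spatio-temporal layout that a single global pooling step would discard.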