• Title/Abstract/Keyword: Skeleton-based action recognition

Search results: 11

Dual-Stream Fusion and Graph Convolutional Network for Skeleton-Based Action Recognition

  • Hu, Zeyuan;Feng, Yiran;Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol. 24, No. 3 / pp. 423-430 / 2021
  • Graph convolutional networks (GCNs) have achieved outstanding performance on skeleton-based action recognition. However, several problems remain in existing GCN-based methods; in particular, the low recognition rate caused by relying on a single input modality has not been effectively addressed. In this article, we propose a dual-stream fusion method that combines video data and skeleton data. Two networks separately recognize actions from the skeleton data and the video data, and the output probabilities of the two streams are fused to achieve information fusion. Experiments on two large datasets, Kinetics and the NTU RGB+D human action dataset, show that the proposed method achieves state-of-the-art performance, with higher recognition accuracy than traditional methods.
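The late-fusion step described in this abstract (fusing the class probabilities of a skeleton network and a video network) can be sketched as follows. The equal stream weighting and the three-class example are illustrative assumptions, not details from the paper.

```python
import numpy as np

def late_fusion(p_skeleton, p_video, alpha=0.5):
    """Fuse two class-probability vectors by weighted averaging.

    alpha weights the skeleton stream; (1 - alpha) weights the video
    stream. Both inputs are assumed to be softmax outputs summing to 1.
    """
    p_skeleton = np.asarray(p_skeleton, dtype=float)
    p_video = np.asarray(p_video, dtype=float)
    fused = alpha * p_skeleton + (1.0 - alpha) * p_video
    return fused / fused.sum()  # renormalise against rounding error

# The two streams disagree; fusion picks the jointly supported class.
p_s = [0.6, 0.3, 0.1]   # skeleton stream favours class 0
p_v = [0.2, 0.7, 0.1]   # video stream favours class 1
fused = late_fusion(p_s, p_v)
predicted = int(np.argmax(fused))
```

Averaging probabilities rather than logits keeps each stream's confidence calibrated independently, which is why this simple scheme is a common baseline for two-stream fusion.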

A Proposal of Shuffle Graph Convolutional Network for Skeleton-based Action Recognition

  • Jang, Sungjun;Bae, Han Byeol;Lee, HeanSung;Lee, Sangyoun
    • 한국정보전자통신기술학회논문지 / Vol. 14, No. 4 / pp. 314-322 / 2021
  • Skeleton-based action recognition has attracted considerable attention in human action recognition. Recent methods employ spatiotemporal graph convolutional networks (GCNs) and achieve remarkable performance. However, most of them incur heavy computational complexity for robust action recognition. To solve this problem, we propose a shuffle graph convolutional network (SGCN), a lightweight graph convolutional network that uses pointwise group convolution rather than pointwise convolution to reduce computational cost. Our SGCN is composed of spatial and temporal GCNs. The spatial shuffle GCN contains pointwise group convolution and a part-shuffle module that enhances local and global information between correlated joints. In addition, the temporal shuffle GCN contains depthwise convolution to maintain a large receptive field. Our model achieves comparable performance with the lowest computational cost and exceeds the baseline by 0.3% and 1.2% on the NTU RGB+D and NTU RGB+D 120 datasets, respectively.
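The pointwise group convolution and channel shuffle that the SGCN builds on can be sketched in NumPy. The tensor shapes and group count below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def pointwise_group_conv(x, w, groups):
    """1x1 group convolution: x is (C_in, N) features over N graph nodes,
    w is (C_out, C_in // groups). Each group of output channels sees only
    its own group of input channels, cutting the cost by `groups`."""
    c_in, n = x.shape
    c_out = w.shape[0]
    gi, go = c_in // groups, c_out // groups
    out = np.empty((c_out, n))
    for g in range(groups):
        out[g*go:(g+1)*go] = w[g*go:(g+1)*go] @ x[g*gi:(g+1)*gi]
    return out

def channel_shuffle(x, groups):
    """Interleave channels across groups so information mixes between
    groups in the next grouped layer."""
    c, n = x.shape
    return x.reshape(groups, c // groups, n).transpose(1, 0, 2).reshape(c, n)

x = np.random.rand(8, 25)          # 8 channels over 25 skeleton joints
w = np.random.rand(8, 4)           # 2 groups -> each filter sees 4 channels
y = channel_shuffle(pointwise_group_conv(x, w, groups=2), groups=2)
# shuffle permutation on 4 channels with 2 groups: [0, 2, 1, 3]
perm = channel_shuffle(np.arange(4.0).reshape(4, 1), groups=2).ravel()
```

Without the shuffle, grouped 1x1 convolutions stacked in sequence would never exchange information across groups; the shuffle restores cross-group mixing at zero parameter cost.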

Human Action Recognition Using Deep Data: A Fine-Grained Study

  • Rao, D. Surendra;Potturu, Sudharsana Rao;Bhagyaraju, V
    • International Journal of Computer Science & Network Security / Vol. 22, No. 6 / pp. 97-108 / 2022
  • Video-assisted human action recognition [1] is one of the most active fields in computer vision research. Since the depth data [2] obtained by Kinect cameras offer more benefits than traditional RGB data, research on human action recognition using the Kinect camera has recently increased. In this article, we conduct a systematic study of strategies for recognizing human activity based on depth data. All methods are grouped into depth-map tactics and skeleton tactics, and a comparison with some of the more traditional strategies is also covered. We then examine the specifics of different depth behavior databases and draw a clear distinction between them. Finally, we address the advantages and disadvantages of depth- and skeleton-based techniques.

Deep Learning-based Action Recognition using Skeleton Joints Mapping

  • 타스님;백중환
    • 한국항행학회논문지 / Vol. 24, No. 2 / pp. 155-162 / 2020
  • Recently, with advances in computer vision and deep learning technology, research on human action recognition has been actively conducted for applications such as video analysis, video surveillance, interactive multimedia, and human-machine interaction. Various techniques have been introduced by many researchers for human action recognition and classification using RGB images, depth images, skeleton data, and inertial data. However, skeleton-based action recognition remains a challenging research topic in the field of human-machine interaction. In this paper, we propose an end-to-end skeleton joints mapping technique that generates spatio-temporal images, called dynamic images, from actions. An efficient deep convolutional neural network is devised to perform classification among the action classes. To evaluate the performance of the proposed technique, the publicly accessible UTD-MHAD skeleton dataset was used. Experimental results show that the proposed system outperforms existing methods with a high accuracy of 97.45%.
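The joint-mapping idea above, turning a skeleton sequence into a 2D pseudo-image that a CNN can classify, can be sketched as follows. The (joints x coordinates) row layout and min-max scaling are simplifying assumptions; the paper's actual mapping may differ.

```python
import numpy as np

def joints_to_dynamic_image(sequence):
    """Map a skeleton sequence of shape (T, J, 3) to a (J*3, T)
    pseudo-image: rows are joint coordinates, columns are frames,
    values scaled to [0, 255] so a 2D CNN can consume it."""
    seq = np.asarray(sequence, dtype=float)
    t = seq.shape[0]
    img = seq.reshape(t, -1).T                  # (J*3, T)
    lo, hi = img.min(), img.max()
    return np.uint8(255 * (img - lo) / (hi - lo + 1e-8))

clip = np.random.rand(40, 20, 3)   # 40 frames, 20 joints, (x, y, z)
image = joints_to_dynamic_image(clip)
```

The appeal of this encoding is that temporal dynamics become spatial texture, so standard image CNNs apply without any recurrent machinery.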

A Survey of Human Action Recognition Approaches that use an RGB-D Sensor

  • Farooq, Adnan;Won, Chee Sun
    • IEIE Transactions on Smart Processing and Computing / Vol. 4, No. 4 / pp. 281-290 / 2015
  • Human action recognition from a video scene has remained a challenging problem in the area of computer vision and pattern recognition. The development of the low-cost RGB depth camera (RGB-D) allows new opportunities to solve the problem of human action recognition. In this paper, we present a comprehensive review of recent approaches to human action recognition based on depth maps, skeleton joints, and other hybrid approaches. In particular, we focus on the advantages and limitations of the existing approaches and on future directions.

Optimised ML-based System Model for Adult-Child Actions Recognition

  • Alhammami, Muhammad;Hammami, Samir Marwan;Ooi, Chee-Pun;Tan, Wooi-Haw
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 2 / pp. 929-944 / 2019
  • Many critical applications require accurate real-time human action recognition. However, there are many hurdles associated with capturing and pre-processing image data, calculating features, and classification, because these steps consume significant storage and computation resources. To circumvent these hurdles, this paper presents a machine learning (ML)-based recognition system model that uses reduced-structure features obtained by projecting the real 3D skeleton modality onto a virtual 2D space. The MMU VAAC dataset is used to test the proposed ML model. The results show a high accuracy rate of 97.88%, only slightly lower than the accuracy of the original 3D modality-based features, while reducing the data by 75% compared with the RGB modality. These results motivate implementing the proposed recognition model on an embedded system platform in the future.
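A minimal sketch of the core idea above, projecting 3D skeleton joints onto a virtual 2D plane to shrink the feature vector. The orthographic (depth-dropping) projection and the 20-joint skeleton are illustrative assumptions, not the paper's exact mapping.

```python
import numpy as np

def project_skeleton_2d(joints_3d):
    """Orthographic projection of (J, 3) joint coordinates onto the
    x-y plane, dropping depth: feature length falls from 3J to 2J."""
    joints_3d = np.asarray(joints_3d, dtype=float)
    return joints_3d[:, :2].ravel()

skeleton = np.random.rand(20, 3)               # 20 joints, (x, y, z) each
features = project_skeleton_2d(skeleton)
reduction = 1 - features.size / skeleton.size  # one third fewer values
```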

Human Activities Recognition Based on Skeleton Information via Sparse Representation

  • Liu, Suolan;Kong, Lizhi;Wang, Hongyuan
    • Journal of Computing Science and Engineering / Vol. 12, No. 1 / pp. 1-11 / 2018
  • Human activity recognition is a challenging task due to the complexity of human movements and the variation among different subjects performing the same action. This paper presents a recognition algorithm that uses skeleton information generated from depth maps. A feature vector is produced by concatenating motion features and a temporal constraint feature. An improved fast classifier based on sparse representation is proposed by reducing the dictionary scale. The developed method is shown to be effective by recognizing different activities on the UTD-MHAD dataset. Comparison results indicate the superior performance of our method over some existing methods.
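The sparse-representation classifier underlying this approach can be sketched as follows: a test feature is coded over per-class dictionaries of training features, and the class with the smallest reconstruction residual wins. The least-squares coding below (standing in for true L1-regularised sparse coding) and the toy dictionaries are simplifying assumptions.

```python
import numpy as np

def src_classify(x, dictionaries):
    """Classify feature vector x by reconstruction residual.
    dictionaries: list of (d, n_i) arrays, one per class; the class
    whose atoms reconstruct x with the least error is returned."""
    residuals = []
    for D in dictionaries:
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals.append(np.linalg.norm(x - D @ coef))
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
D0 = rng.normal(size=(16, 5))      # class-0 training features
D1 = rng.normal(size=(16, 5))      # class-1 training features
x = D1 @ rng.normal(size=5)        # test sample lying in class 1's span
label = src_classify(x, [D0, D1])
```

Shrinking each dictionary, as the abstract describes, directly shrinks the per-class solve, which is where this classifier spends its time.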

Multi-Region based Radial GCN algorithm for Human action Recognition

  • 장한별;이칠우
    • 스마트미디어저널 / Vol. 11, No. 1 / pp. 46-57 / 2022
  • This paper describes a multi-region based radial graph convolutional network (MRGCN) algorithm that performs end-to-end action recognition using the optical flow and gradient of the input video, based on deep learning. Because this method does not use skeleton information, which is difficult to acquire and computationally expensive, it can also be applied in ordinary CCTV environments that rely only on cameras. The distinguishing features of MRGCN are that it represents the optical flow and gradient of the input video as directional histograms and then converts them into six feature vectors to reduce the computational load, and that it uses a newly devised radial network structure to hierarchically propagate human motion and shape changes in the spatio-temporal domain. Another important feature is that the data input regions are arranged to overlap one another, so that spatially continuous information between nodes is used as input. In a performance evaluation on 30 actions, the method achieved a Top-1 accuracy of 84.78%, on par with existing GCN-based action recognition that uses skeleton data as input. This result shows that MRGCN, which does not require hard-to-acquire skeleton information, is a more practical method in real situations that demand complex action recognition.
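The directional-histogram representation mentioned above, binning optical-flow or gradient vectors by orientation, can be sketched as follows. The bin count and magnitude weighting are illustrative assumptions, not MRGCN's exact configuration.

```python
import numpy as np

def orientation_histogram(dx, dy, n_bins=8):
    """Bin per-pixel vectors (dx, dy) into n_bins orientation bins,
    weighting each vote by the vector's magnitude, then L1-normalise."""
    angles = np.arctan2(dy, dx) % (2 * np.pi)
    mags = np.hypot(dx, dy)
    bins = (angles / (2 * np.pi / n_bins)).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mags.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-8)

# a flow field pointing uniformly right puts all mass in bin 0
dx = np.ones((4, 4))
dy = np.zeros((4, 4))
hist = orientation_histogram(dx, dy)
```

Collapsing dense flow and gradient fields to a handful of such histograms is what lets the network replace per-joint skeleton input with camera-only input at a modest computational cost.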

Three-dimensional human activity recognition by forming a movement polygon using posture skeletal data from depth sensor

  • Vishwakarma, Dinesh Kumar;Jain, Konark
    • ETRI Journal / Vol. 44, No. 2 / pp. 286-299 / 2022
  • Human activity recognition in real time is a challenging task. Recently, a plethora of studies have been proposed using deep learning architectures. The implementation of these architectures requires high machine computing power and a massive database. However, handcrafted-feature-based machine learning models need less computing power and are very accurate when features are effectively extracted. In this study, we propose a handcrafted model based on three-dimensional sequential skeleton data. The human body skeleton movement over a frame is computed through the joint positions in that frame. The joints of these skeletal frames are projected into two-dimensional space, forming a "movement polygon." These polygons are further transformed into a one-dimensional space by computing amplitudes at different angles from the centroid of the polygons. The feature vector is formed by sampling these amplitudes at different angles. The performance of the algorithm is evaluated using a support vector machine on four public datasets: MSR Action3D, Berkeley MHAD, TST Fall Detection, and NTU-RGB+D; the highest accuracies achieved on these datasets are 94.13%, 93.34%, 95.7%, and 86.8%, respectively. These accuracies are compared with similar state-of-the-art methods and show superior performance.
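The amplitude-sampling step described above can be sketched as follows: the projected joints form a polygon, and centroid-to-vertex distances are sampled at fixed angles to give a 1D feature vector. The angle count and the nearest-vertex sampling rule are assumptions made for illustration.

```python
import numpy as np

def polygon_amplitudes(points_2d, n_angles=8):
    """Sample centroid-to-vertex distances at n_angles evenly spaced
    directions; for each direction, take the vertex whose angle from
    the centroid is closest. Returns a length-n_angles feature vector."""
    pts = np.asarray(points_2d, dtype=float)
    centroid = pts.mean(axis=0)
    rel = pts - centroid
    radii = np.hypot(rel[:, 0], rel[:, 1])
    angles = np.arctan2(rel[:, 1], rel[:, 0])
    targets = np.linspace(-np.pi, np.pi, n_angles, endpoint=False)
    # circular angular distance from every target direction to every vertex
    diff = np.abs((angles[None, :] - targets[:, None] + np.pi)
                  % (2 * np.pi) - np.pi)
    return radii[np.argmin(diff, axis=1)]

# unit square: every sampled amplitude is a centroid-to-corner distance
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
feat = polygon_amplitudes(square, n_angles=4)
```

Because the amplitudes are measured from the centroid, the descriptor is invariant to translation of the skeleton, which suits a handcrafted SVM pipeline.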

STAGCN-based Human Action Recognition System for Immersive Large-Scale Signage Content

  • 김정호;황병선;김진욱;선준호;선영규;김진영
    • 한국인터넷방송통신학회논문지 / Vol. 23, No. 6 / pp. 89-95 / 2023
  • Human action recognition (HAR) is one of the key technologies used in applications such as sports analysis, human-robot interaction, and large-scale signage content. In this paper, we propose a spatial-temporal attention graph convolutional network (STAGCN)-based human action recognition system for immersive large-scale signage content. Through an attention mechanism, STAGCN assigns different weights to the spatio-temporal features of a skeleton sequence, allowing it to focus on the joints and time steps that are important for action recognition. Experimental results on the NTU RGB+D dataset confirm that the proposed system achieves higher classification accuracy than existing deep learning models.
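The attention weighting described above can be sketched as follows: a score per joint is softmax-normalised and used to reweight the joint features so that informative joints dominate. The scoring function here (a fixed random vector in place of STAGCN's learned projection) and the feature shapes are illustrative assumptions, not the actual architecture.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def joint_attention(features, w):
    """features: (J, C) per-joint features; w: (C,) scoring vector.
    Returns attention weights over joints and the reweighted features."""
    scores = features @ w                 # one scalar score per joint
    weights = softmax(scores)             # weights sum to 1 over joints
    return weights, features * weights[:, None]

rng = np.random.default_rng(1)
feats = rng.normal(size=(25, 8))          # 25 joints, 8 channels each
w = rng.normal(size=8)
weights, attended = joint_attention(feats, w)
```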