• Title/Summary/Keyword: Human action recognition


Deep Learning-based Action Recognition using Skeleton Joints Mapping

  • Tasnim;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • Vol. 24, No. 2
    • pp. 155-162
    • 2020
  • With recent advances in computer vision and deep learning, human action recognition has been actively studied for applications such as video analysis, video surveillance, interactive multimedia, and human-machine interaction. Many researchers have introduced a variety of techniques for recognizing and classifying human actions using RGB images, depth images, skeletons, and inertial data. However, skeleton-based action recognition remains a challenging research topic in human-machine interaction. In this paper, we propose an end-to-end skeleton joint mapping technique that encodes a motion as a spatio-temporal image called a dynamic image. An efficient deep convolutional neural network is then designed to classify the action classes. The publicly available UTD-MHAD skeleton dataset is used to evaluate the proposed technique. Experimental results show that the proposed system outperforms existing methods with a high accuracy of 97.45%.
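
As a rough illustration of the "skeleton sequence → spatio-temporal image → CNN" pipeline described above, here is a minimal sketch in PyTorch. It is not the paper's exact joint mapping or network; the frame/joint counts and the small CNN are assumptions, though the 27 output classes match UTD-MHAD.

```python
# Minimal sketch: encode a skeleton sequence (T frames x J joints x 3 coords)
# as a single pseudo-image, then classify it with a small CNN.
import numpy as np
import torch
import torch.nn as nn

def skeleton_to_image(seq: np.ndarray) -> torch.Tensor:
    """seq: (T, J, 3) joint coordinates -> (3, T, J) image-like tensor in [0, 1]."""
    lo, hi = seq.min(), seq.max()
    img = (seq - lo) / (hi - lo + 1e-8)                    # normalize coordinates
    return torch.from_numpy(img).float().permute(2, 0, 1)  # channels = x/y/z

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 27):             # UTD-MHAD has 27 action classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

seq = np.random.randn(60, 20, 3)                 # 60 frames, 20 joints (hypothetical)
logits = SmallCNN()(skeleton_to_image(seq).unsqueeze(0))
print(logits.shape)                              # torch.Size([1, 27])
```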

Dual-Stream Fusion and Graph Convolutional Network for Skeleton-Based Action Recognition

  • Hu, Zeyuan;Feng, Yiran;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • Vol. 24, No. 3
    • pp. 423-430
    • 2021
  • Graph convolutional networks (GCNs) have achieved outstanding performance on skeleton-based action recognition. However, several problems remain in existing GCN-based methods; in particular, the low recognition rate caused by relying on a single input modality has not been effectively addressed. In this article, we propose a dual-stream fusion method that combines video data and skeleton data. The two networks recognize actions from skeleton data and video data respectively, and the probabilities of the two outputs are fused to combine the information of both streams. Experiments on two large datasets, Kinetics and the NTU RGB+D Human Action Dataset, illustrate that our proposed method achieves state-of-the-art performance; compared with traditional methods, recognition accuracy is clearly improved.
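
The score-level fusion step is simple enough to sketch. The snippet below is an assumption-based illustration, not the authors' code: it averages the softmax outputs of the two streams with a hypothetical weight alpha.

```python
# Late (score-level) fusion: each stream outputs class scores, and the two
# probability distributions are combined by a weighted average.
import torch

def fuse_scores(video_logits, skeleton_logits, alpha=0.5):
    p_video = torch.softmax(video_logits, dim=-1)
    p_skel = torch.softmax(skeleton_logits, dim=-1)
    return alpha * p_video + (1.0 - alpha) * p_skel

video_logits = torch.randn(1, 60)        # e.g. the 60 NTU RGB+D action classes
skeleton_logits = torch.randn(1, 60)
pred = fuse_scores(video_logits, skeleton_logits).argmax(dim=-1)
print(pred)
```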

Optimised ML-based System Model for Adult-Child Actions Recognition

  • Alhammami, Muhammad;Hammami, Samir Marwan;Ooi, Chee-Pun;Tan, Wooi-Haw
    • KSII Transactions on Internet and Information Systems (TIIS)
    • Vol. 13, No. 2
    • pp. 929-944
    • 2019
  • Many critical applications require accurate real-time human action recognition. However, capturing and pre-processing image data, calculating features, and performing classification consume significant storage and computation resources. To circumvent these hurdles, this paper presents a machine learning (ML) based recognition system model that uses reduced-structure features obtained by projecting the real 3D skeleton modality onto a virtual 2D space. The MMU VAAC dataset is used to test the proposed ML model. The results show a high accuracy rate of 97.88%, only slightly lower than the accuracy obtained with the original 3D modality-based features, while achieving a 75% data reduction compared to the RGB modality. These results motivate implementing the proposed recognition model on an embedded system platform in the future.
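
The core dimensionality-reduction idea can be sketched as follows. This is a hedged illustration assuming a simple orthographic projection; the paper's actual projection and feature set are not reproduced here.

```python
# Project each 3D joint onto a virtual 2D plane, shrinking the per-frame
# feature vector before classification.
import numpy as np

def project_to_2d(skeleton_3d: np.ndarray) -> np.ndarray:
    """skeleton_3d: (J, 3) -> (J, 2); orthographic projection onto the x-y plane."""
    return skeleton_3d[:, :2]

joints = np.random.randn(25, 3)                  # 25 joints (hypothetical rig)
features_3d = joints.flatten()                   # 75 values per frame
features_2d = project_to_2d(joints).flatten()    # 50 values per frame
print(features_3d.size, features_2d.size)
```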

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • Vol. 13, No. 2
    • pp. 751-770
    • 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the high performance of deep convolutional neural networks further motivate action recognition in video. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make it challenging to address the multi-person action recognition task in video data. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a state-of-the-art region-based convolutional neural network for object detection). We extend a semi-supervised learning method combined with active learning to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments, simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment using an Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy increased to 95.6%, and in a complex environment, it reached 81%. Our method reduces data-labeling time on the ITLab dataset compared to supervised learning methods. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
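
The active semi-supervised loop can be illustrated schematically. This is not the authors' EHL code, and the confidence thresholds are hypothetical; it only shows the split between pseudo-labeling and annotation queries.

```python
# Confident predictions on unlabeled clips become pseudo-labels, while the
# least confident clips are queried for human annotation.
import torch

def split_unlabeled(probs: torch.Tensor, hi: float = 0.95, lo: float = 0.60):
    """probs: (N, C) softmax scores for N unlabeled clips."""
    conf, pseudo = probs.max(dim=-1)
    auto_idx = (conf >= hi).nonzero(as_tuple=True)[0]    # pseudo-label these
    query_idx = (conf <= lo).nonzero(as_tuple=True)[0]   # send these to annotators
    return auto_idx, pseudo[auto_idx], query_idx

probs = torch.softmax(torch.randn(100, 5), dim=-1)   # 5 interactions (e.g. hug, fight)
auto_idx, pseudo_labels, query_idx = split_unlabeled(probs)
print(len(auto_idx), len(query_idx))
```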

Effective Pose-based Approach with Pose Estimation for Emotional Action Recognition

  • Kim, Jin-Ok
    • KIPS Transactions on Software and Data Engineering
    • Vol. 2, No. 3
    • pp. 209-218
    • 2013
  • Previous research on human action recognition has mainly focused on tracking and classifying body movements represented as articulated bodies. Because these approaches require accurate labeling of body parts under real-world imaging conditions, which is difficult, recent work has shifted toward low-level, more abstract appearance features such as space-time interest points. With the progress of pose estimation in recent years, however, it is necessary to re-examine pose-based approaches. This study raises the question of whether training classifiers on low-level appearance features alone is sufficient, and proposes an effective pose-based action recognition method that uses pose estimation. To this end, we compare appearance-based features, pose-based features, and a combination of the two on action scenarios expressing various emotions. Experimental results show that the pose-based approach using pose estimation outperforms the low-level appearance-based approach in classifying and recognizing emotional actions, and that it remains effective for recognizing emotional actions in images severely corrupted by noise.
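
The three compared configurations reduce to training one classifier on different feature sets. A minimal sketch follows, with random stand-in features, since the study's actual descriptors are not reproduced here; the dimensions and class count are assumptions.

```python
# Compare appearance-only, pose-only, and concatenated features with the
# same classifier; all feature values here are random stand-ins.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
appearance = rng.normal(size=(200, 128))   # e.g. space-time interest point codes
pose = rng.normal(size=(200, 36))          # e.g. 18 estimated 2D joints
labels = rng.integers(0, 6, size=200)      # six emotion classes (hypothetical)

for name, X in [("appearance", appearance),
                ("pose", pose),
                ("combined", np.hstack([appearance, pose]))]:
    clf = SVC().fit(X[:150], labels[:150])
    print(name, clf.score(X[150:], labels[150:]))
```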

Multi-Region based Radial GCN algorithm for Human action Recognition

  • Jang, Han-Byul;Lee, Chil-Woo
    • Smart Media Journal
    • Vol. 11, No. 1
    • pp. 46-57
    • 2022
  • This paper describes a Multi-region based Radial Graph Convolutional Network (MRGCN), a deep learning algorithm that performs end-to-end action recognition using the optical flow and gradient of the input video. Because this method does not use skeleton information, which is difficult to acquire and computationally expensive, it can be applied to ordinary CCTV environments that rely only on cameras. MRGCN has two distinctive features: it represents the optical flow and gradient of the input image as orientation histograms and then compresses them into six feature vectors to reduce computation, and it uses a newly designed radial network structure to hierarchically propagate body movement and shape changes over the spatio-temporal domain. Another important characteristic is that the data input regions are arranged to overlap one another, so that spatially seamless information is fed to each node. A performance evaluation on 30 actions yielded a Top-1 accuracy of 84.78%, on par with existing GCN-based action recognition that takes skeleton data as input. These results indicate that MRGCN, which does not require hard-to-acquire skeleton information, is a more practical method for real-world situations requiring complex action recognition.
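
The node-feature construction can be sketched as follows. This is a hedged illustration: the six-bin histogram is an assumption matching the "six feature vectors" reduction, only the gradient cue is shown (optical flow would be treated the same way), and the radial GCN itself is not reproduced.

```python
# Per image region, build a magnitude-weighted orientation histogram of the
# gradient as a compact feature vector for one graph node.
import numpy as np

def orientation_histogram(dx: np.ndarray, dy: np.ndarray, bins: int = 6) -> np.ndarray:
    mag = np.hypot(dx, dy)
    ang = np.arctan2(dy, dx) % (2 * np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

region = np.random.rand(32, 32)               # one (overlapping) spatial region
dy, dx = np.gradient(region)                  # image gradient of the region
node_feature = orientation_histogram(dx, dy)  # 6-D vector for one graph node
print(node_feature)
```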

Computational Model of a Mirror Neuron System for Intent Recognition through Imitative Learning of Objective-directed Action

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • Vol. 20, No. 6
    • pp. 606-611
    • 2014
  • The understanding of another's behavior is a fundamental cognitive ability for primates, including humans. Recent neuro-physiological studies suggest that this ability rests on a direct matching of visual observations onto an individual's own motor repertoire. Mirror neurons are known as the core regions involved and are treated as providing intent-recognition functionality on the basis of imitative learning of an observed action, acquired from visual information about a goal-directed action. In this paper, we review previous work on modeling the function and mechanisms of mirror neurons and propose a computational model of a mirror neuron system that can be used in human-robot interaction environments. The major focus of the computational model is the reproduction of an individual's motor repertoire across different embodiments. The model aims at a continuous process that combines sensory evidence, prior task knowledge, and a goal-directed matching of action observation and execution. We also propose a biologically plausible equation model.

Decomposed "Spatial and Temporal" Convolution for Human Action Recognition in Videos

  • Sediqi, Khwaja Monib;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference
    • 2019 KIPS Spring Conference
    • pp. 455-457
    • 2019
  • In this paper we study the effect of decomposed spatio-temporal convolutions for action recognition in videos. Our motivation emerges from the empirical observation that spatial convolution applied to individual frames of a video provides good performance in action recognition. In this research we empirically show the accuracy of factorized convolution on individual video frames for action classification. We take 3D ResNet-18 as the baseline model for our experiments and factorize its 3D convolutions into 2D (spatial) and 1D (temporal) convolutions. We train the model from scratch on the Kinetics video dataset, then fine-tune it on the UCF-101 dataset and evaluate performance. Our results show accuracy comparable to that of state-of-the-art algorithms on the Kinetics and UCF-101 datasets.
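
The factorization itself is easy to show. The block below is a generic (2+1)D-style sketch with illustrative channel sizes, not the paper's exact configuration.

```python
# Replace a 3D (t x h x w) convolution with a 2D spatial convolution (1x3x3)
# followed by a 1D temporal convolution (3x1x1).
import torch
import torch.nn as nn

class SpatioTemporalConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, (1, 3, 3), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(out_ch, out_ch, (3, 1, 1), padding=(1, 0, 0))
        self.relu = nn.ReLU()

    def forward(self, x):                        # x: (N, C, T, H, W)
        return self.relu(self.temporal(self.relu(self.spatial(x))))

clip = torch.randn(1, 3, 16, 112, 112)           # a 16-frame RGB clip
print(SpatioTemporalConv(3, 64)(clip).shape)     # (1, 64, 16, 112, 112)
```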

Vision-based garbage dumping action detection for real-world surveillance platform

  • Yun, Kimin;Kwon, Yongjin;Oh, Sungchan;Moon, Jinyoung;Park, Jongyoul
    • ETRI Journal
    • Vol. 41, No. 4
    • pp. 494-505
    • 2019
  • In this paper, we propose a new framework for detecting the unauthorized dumping of garbage in real-world surveillance camera footage. Although several action/behavior recognition methods have been investigated, these studies are hardly applicable to real-world scenarios because they are mainly focused on well-refined datasets. Because dumping actions in the real world take a variety of forms, building a new method to detect such actions, instead of exploiting previous approaches, is a better strategy. We detect the dumping action from the change in the relation between a person and the object they are holding. To find the person-held object of indefinite form, we use a background subtraction algorithm and human joint estimation. The person-held object is then tracked, and a relation model between the joints and objects is built. Finally, the dumping action is detected through a voting-based decision module. In the experiments, we show the effectiveness of the proposed method by testing on real-world videos containing various dumping actions. In addition, the proposed framework is implemented in a real-time monitoring system through a fast online algorithm.
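
The relation-plus-voting decision can be illustrated schematically. The thresholds, window size, and single distance cue below are all assumptions; the paper's actual relation model is richer than this.

```python
# If a tracked object's distance from the carrier's hand stays large for
# enough recent frames, the frames vote for a "dumped" decision.
from collections import deque
import math

def dumping_vote(hand_xy, obj_xy, votes: deque,
                 dist_thresh: float = 80.0, vote_thresh: float = 0.8) -> bool:
    votes.append(math.dist(hand_xy, obj_xy) > dist_thresh)   # this frame's vote
    return len(votes) == votes.maxlen and sum(votes) / len(votes) >= vote_thresh

votes = deque(maxlen=30)                      # ~1 second of votes at 30 fps
for frame in range(60):
    hand, obj = (100, 200), (100 + 3 * frame, 200)   # object drifts from the hand
    if dumping_vote(hand, obj, votes):
        print("dumping detected at frame", frame)
        break
```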

Two-Stream Convolutional Neural Network for Video Action Recognition

  • Qiao, Han;Liu, Shuang;Xu, Qingzhen;Liu, Shouqiang;Yang, Wanggan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • Vol. 15, No. 10
    • pp. 3668-3684
    • 2021
  • Video action recognition is widely used in video surveillance, behavior detection, human-computer interaction, medically assisted diagnosis, and motion analysis. However, video action recognition can be disturbed by many factors, such as background and illumination. A two-stream convolutional neural network trains spatial and temporal models of the video separately and performs fusion at the output. The multi-segment two-stream convolutional neural network model trains on temporal and spatial information from the video, extracts and fuses their features, and then determines the category of the video action. The Google Xception model and transfer learning are adopted in this paper, with the Xception model pretrained on ImageNet used as the initial weights. This largely overcomes the model underfitting caused by insufficient video behavior data, effectively reduces the influence of disturbing factors in the video, improves accuracy, and reduces training time. Moreover, to compensate for the limited dataset, the Kinetics-400 dataset was used for pre-training, which further improved the accuracy of the model. Through these improvements, the expected goals are largely achieved and the design of the original two-stream model is refined.
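
The transfer-learning setup described above can be sketched as follows. torchvision ships no Xception model, so ResNet-18 stands in for the backbone here; the 400-class head matches Kinetics-400, and the frozen-features/new-head recipe is one common variant of the approach, not necessarily the authors' exact training scheme.

```python
# Start from an ImageNet-pretrained backbone, freeze its features, and train
# only a new classification head on the action classes.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False                              # keep pretrained features fixed
backbone.fc = nn.Linear(backbone.fc.in_features, 400)    # new head, e.g. Kinetics-400

head_params = [p for p in backbone.parameters() if p.requires_grad]
# optimizer = torch.optim.Adam(head_params, lr=1e-3)     # train the head only
```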