• Title/Summary/Keyword: Human Action Recognition

Object-Action and Risk-Situation Recognition Using Moment Change and Object Size's Ratio (모멘트 변화와 객체 크기 비율을 이용한 객체 행동 및 위험상황 인식)

  • Kwak, Nae-Joung; Song, Teuk-Seob
    • Journal of Korea Multimedia Society / v.17 no.5 / pp.556-565 / 2014
  • This paper proposes a method to track objects in real-time video captured by a single web camera and to recognize human actions and risk situations. The proposed method recognizes basic actions that people perform in daily life and detects risk situations such as fainting or falling down, distinguishing them from ordinary actions. It models the background, obtains the difference image between the input frame and the modeled background, extracts the human object, tracks the object's motion, and recognizes the action. Tracking uses the moment information of the extracted object, and recognition is based on the change in moments and the ratio of the object's size between frames. Four everyday actions are classified (walking, walking diagonally, sitting down, and standing up), while a sudden fall is classified as a risk situation. To test the proposed method, we applied it to web-camera videos of eight participants, classifying their actions and recognizing risk situations. The results showed a recognition rate of more than 97% for each action and 100% for risk situations.
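
As a rough illustration of the pipeline above, here is a minimal OpenCV sketch of the background-subtraction and moment-tracking steps; the morphology, the size-ratio rule, and its threshold are hypothetical stand-ins for the paper's actual parameters.

```python
import cv2

cap = cv2.VideoCapture(0)                        # single web camera
bg = cv2.createBackgroundSubtractorMOG2()        # background model
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
prev_ratio = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                       # difference vs. background
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        continue
    human = max(contours, key=cv2.contourArea)   # largest blob = person
    m = cv2.moments(human)                       # moment information
    if m["m00"] == 0:
        continue
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid track
    x, y, w, h = cv2.boundingRect(human)
    ratio = h / w                                # object size ratio
    # Hypothetical rule: a sharp drop in height/width between frames
    # suggests a sudden fall, i.e., a risk situation.
    if prev_ratio and ratio < 0.5 * prev_ratio:
        print("possible risk situation at", (cx, cy))
    prev_ratio = ratio
cap.release()
```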

Human Motion Recognition Based on Spatio-temporal Convolutional Neural Network

  • Hu, Zeyuan; Park, Sange-yun; Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.977-985 / 2020
  • To address the problems of complex feature extraction and low accuracy in human action recognition, this paper proposes a network structure that combines the batch normalization algorithm with the GoogLeNet network model. Carrying the batch normalization idea over from image classification to action recognition, the algorithm normalizes the network's training samples by mini-batch. For the convolutional network, RGB images are the spatial input and stacked optical flows are the temporal input; the spatial and temporal networks are then fused to obtain the final action recognition result. The architecture was trained and evaluated on the standard video action benchmarks UCF101 and HMDB51, achieving accuracies of 93.42% and 67.82%, respectively. The results show that the improved convolutional neural network significantly raises the recognition rate and has clear advantages in action recognition.
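
The two-stream design in this abstract (a spatial RGB stream, a temporal stacked-optical-flow stream, and score fusion) can be sketched in PyTorch as below; the tiny batch-normalized backbone is a stand-in for the paper's GoogLeNet, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

def make_stream(in_channels: int, num_classes: int) -> nn.Sequential:
    """A tiny CNN with batch normalization after each convolution."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, stride=2, padding=1),
        nn.BatchNorm2d(32), nn.ReLU(inplace=True),
        nn.Conv2d(32, 64, 3, stride=2, padding=1),
        nn.BatchNorm2d(64), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, num_classes),
    )

num_classes = 101                         # e.g., UCF101
spatial = make_stream(3, num_classes)     # one RGB frame
temporal = make_stream(20, num_classes)   # 10 stacked flow fields (x, y)

rgb = torch.randn(4, 3, 224, 224)         # batch of RGB frames
flow = torch.randn(4, 20, 224, 224)       # batch of stacked optical flow

# Late fusion: average the two streams' class scores.
scores = (spatial(rgb).softmax(-1) + temporal(flow).softmax(-1)) / 2
print(scores.shape)                       # torch.Size([4, 101])
```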

Human Action Recognition Based on 3D Human Modeling and Cyclic HMMs

  • Ke, Shian-Ru; Thuc, Hoang Le Uyen; Hwang, Jenq-Neng; Yoo, Jang-Hee; Choi, Kyoung-Ho
    • ETRI Journal / v.36 no.4 / pp.662-672 / 2014
  • Human action recognition is used in areas such as surveillance, entertainment, and healthcare. This paper proposes a system to recognize both single and continuous human actions from monocular video sequences, based on 3D human modeling and cyclic hidden Markov models (CHMMs). First, for each frame in a monocular video sequence, the 3D coordinates of the joints belonging to a human object, across multiple action cycles, are extracted using 3D human modeling techniques. The 3D coordinates are then converted into a set of geometrical relational features (GRFs) for dimensionality reduction and increased discrimination. For further dimensionality reduction, k-means clustering is applied to the GRFs to generate clustered feature vectors. These vectors are used to train CHMMs separately for different types of actions, based on the Baum-Welch re-estimation algorithm. For recognition of continuous actions concatenated from several distinct types of actions, a designed graphical model systematically concatenates the separately trained CHMMs. The experimental results show the effective performance of the proposed system on both single and continuous action recognition problems.
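
A minimal sketch of the quantization step described above: per-frame GRF vectors are clustered with k-means so that each action clip becomes a sequence of discrete symbols, which is the observation sequence a cyclic HMM would then be trained on via Baum-Welch. The feature dimension and cluster count are illustrative, and random vectors stand in for real GRFs.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
grfs = rng.normal(size=(5000, 30))    # stand-in for per-frame GRF vectors

kmeans = KMeans(n_clusters=32, n_init=10, random_state=0).fit(grfs)

# One action clip -> sequence of cluster symbols (HMM observations).
clip = rng.normal(size=(120, 30))     # 120 frames of GRFs
symbols = kmeans.predict(clip)
print(symbols[:10])                   # e.g., [ 3 17 17  5 ...]
```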

Spatio-temporal Semantic Features for Human Action Recognition

  • Liu, Jia; Wang, Xiaonian; Li, Tianyu; Yang, Jie
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.10 / pp.2632-2649 / 2012
  • Most approaches to human action recognition are limited by the use of simple action datasets recorded under controlled environments, or they focus on excessively localized features without sufficiently exploiting spatio-temporal information. This paper proposes a framework for recognizing realistic human actions. Specifically, a new action representation is built by computing a rich set of descriptors from keypoint trajectories. To obtain efficient and compact action representations, a feature fusion method combines spatio-temporal local motion descriptors according to the movement of the camera, which is detected from the distribution of spatio-temporal interest points in the clips. A new topic model, the Markov Semantic Model, is proposed for semantic feature selection; it exploits the different kinds of dependencies between words produced by "syntactic" and "semantic" constraints, and informative features are selected collaboratively based on the dependencies produced by short-range and long-range constraints. Built on nonlinear SVMs, the proposed hierarchical framework is validated on several realistic action datasets.
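
The sketch below shows one plausible way to build the keypoint trajectories underlying the descriptors above, using KLT tracking in OpenCV; the input file name is hypothetical, and the descriptor (normalized frame-to-frame displacements) is a simplified stand-in for the paper's rich descriptor set.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("clip.avi")            # hypothetical input clip
ok, frame = cap.read()
if not ok:
    raise SystemExit("no input video")
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                              qualityLevel=0.01, minDistance=5)
trajectories = [[p.ravel()] for p in pts]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
    for traj, p, s in zip(trajectories, nxt, status):
        if s:                                  # keypoint still tracked
            traj.append(p.ravel())
    pts, prev = nxt, gray

# A simple trajectory descriptor: frame-to-frame displacements of
# keypoints that survived for more than 15 frames.
descs = [np.diff(np.array(t), axis=0).ravel()
         for t in trajectories if len(t) > 15]
print(len(descs), "trajectory descriptors")
```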

Human Action Recognition Based on An Improved Combined Feature Representation

  • Zhang, Ning; Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1473-1480 / 2018
  • Extracting and recognizing human motion characteristics requires combining biometrics to judge human behavior during movement and to distinguish individual identities. Biometric technology, in concrete terms, authenticates individual identity using the body's inherent biological characteristics, whose most noteworthy properties are invariance and uniqueness. Behavior recognition techniques based on a single feature have proven too restrictive, so this paper proposes a mixed feature that combines a global silhouette feature with a local optical flow feature and uses this combined representation for human action recognition. The recognition system is trained and tested on the KTH database, and the experiments yield very promising results.
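
As a hedged sketch of the mixed feature, the function below concatenates a coarse global silhouette descriptor with a local optical-flow orientation histogram for each frame; the grid and histogram sizes are illustrative choices, not the paper's.

```python
import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2()

def frame_feature(prev_gray, gray, frame):
    # Global silhouette feature: coarse grid occupancy of the binary mask.
    sil = bg.apply(frame)
    sil = cv2.resize(sil, (8, 8)).ravel() / 255.0
    # Local motion feature: histogram of dense optical-flow orientations.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ang = np.arctan2(flow[..., 1], flow[..., 0])
    hist, _ = np.histogram(ang, bins=8, range=(-np.pi, np.pi))
    return np.concatenate([sil, hist / (hist.sum() + 1e-6)])

# Usage on a synthetic pair of frames:
f0 = np.zeros((120, 160, 3), np.uint8)
f1 = f0.copy(); f1[40:80, 60:100] = 200        # a "moving" patch
g0 = cv2.cvtColor(f0, cv2.COLOR_BGR2GRAY)
g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
print(frame_feature(g0, g1, f1).shape)         # (72,) = 64 + 8
```

Per-frame vectors like these would then be aggregated over a clip and fed to a classifier.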

Human Action Recognition Via Multi-modality Information

  • Gao, Zan; Song, Jian-Ming; Zhang, Hua; Liu, An-An; Xue, Yan-Bing; Xu, Guang-Ping
    • Journal of Electrical Engineering and Technology / v.9 no.2 / pp.739-748 / 2014
  • In this paper, we propose pyramid appearance and global structure action descriptors computed on both RGB and depth motion history images (MHIs), together with a model-free method for human action recognition. The algorithm first constructs motion history images for the RGB and depth channels, using the depth information to filter the RGB channel. Different action descriptors are then extracted from the depth and RGB MHIs to represent the actions. Finally, a multi-modality collaborative representation and recognition model is proposed to classify the actions; the multi-modality information enters the objective function naturally, so information fusion and action recognition are performed jointly. To demonstrate the superiority of the proposed method, we evaluate it on the MSR Action3D and DHA datasets, well-known benchmarks for human action recognition. Large-scale experiments show that our descriptors are robust, stable, and efficient, and that they outperform state-of-the-art algorithms; combining the descriptors performs much better than using any single descriptor, and the proposed model outperforms the state-of-the-art methods on both MSR Action3D and DHA.
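
The motion history image at the core of these descriptors can be sketched in a few lines of NumPy, as below; recent motion appears bright and older motion decays linearly. The decay duration and motion threshold are illustrative values rather than the paper's settings.

```python
import numpy as np

def update_mhi(mhi, prev_gray, gray, tau=30.0, thresh=25):
    """One MHI update step for a new grayscale frame."""
    motion = np.abs(gray.astype(np.float32)
                    - prev_gray.astype(np.float32)) > thresh
    # Pixels with motion jump to tau; all others decay toward zero.
    return np.where(motion, tau, np.maximum(mhi - 1.0, 0.0))

# Usage on a synthetic two-frame example:
h, w = 120, 160
mhi = np.zeros((h, w), np.float32)
f0 = np.zeros((h, w), np.uint8)
f1 = f0.copy(); f1[40:80, 60:100] = 255   # a "moving" block
mhi = update_mhi(mhi, f0, f1)
print(mhi.max())                          # 30.0 where motion occurred
```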

Optimization of Action Recognition based on Slowfast Deep Learning Model using RGB Video Data (RGB 비디오 데이터를 이용한 Slowfast 모델 기반 이상 행동 인식 최적화)

  • Jeong, Jae-Hyeok; Kim, Min-Suk
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.1049-1058 / 2022
  • Human action recognition (HAR), including anomaly and object detection, has become a research trend focused on using artificial intelligence (AI) methods to analyze patterns of human action in crime-prone areas, media services, and industrial facilities. In real-time systems that use streaming video in particular, HAR has become an important AI research field for application development, and many related research areas are currently being developed and improved. In this paper, we propose and analyze a deep-learning-based HAR scheme that works efficiently on RGB video streaming data without feature-extraction pre-processing, making it applicable to media services. For the method, we adopt SlowFast, a deep-neural-network (DNN) model, and train it on an open dataset (HMDB-51 or UCF101) to improve prediction accuracy.
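
For orientation, the snippet below loads a pretrained SlowFast model through PyTorchVideo's torch.hub entry point and runs it on a dummy clip; the hub name and the slow/fast pathway shapes follow the PyTorchVideo SlowFast example, and the real preprocessing (resizing, normalization, frame sampling) is omitted here.

```python
import torch

model = torch.hub.load("facebookresearch/pytorchvideo",
                       "slowfast_r50", pretrained=True).eval()

# SlowFast takes two pathways: a densely sampled "fast" clip and a
# sparsely sampled "slow" clip (speed ratio alpha = 4 here).
fast = torch.randn(1, 3, 32, 256, 256)          # B, C, T, H, W
slow = fast[:, :, ::4]                          # every 4th frame -> T = 8

with torch.no_grad():
    logits = model([slow, fast])                # Kinetics-400 classes
print(logits.shape)                             # torch.Size([1, 400])
```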

Video augmentation technique for human action recognition using genetic algorithm

  • Nida, Nudrat; Yousaf, Muhammad Haroon; Irtaza, Aun; Velastin, Sergio A.
    • ETRI Journal / v.44 no.2 / pp.327-338 / 2022
  • Classification models for human action recognition require robust features and large training sets for good generalization, so data augmentation methods are often employed on imbalanced training sets to achieve higher accuracy. However, samples generated by conventional augmentation only mirror existing samples in the training set; their feature representations are less diverse and hence lead to less precise classification. This paper presents new data augmentation and action representation approaches for growing training sets. The proposed approach rests on two fundamental ideas: virtual video generation for augmentation, and representation of the action videos through robust features. Virtual videos are generated from the motion history templates of action videos and convolved with a convolutional neural network to produce deep features. Guided by the objective function of a genetic algorithm, the spatiotemporal features of different samples are then combined to generate the representations of the virtual videos, which are classified with an extreme learning machine on the MuHAVi-Uncut, IXMAS, and IAVID-1 datasets.
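
A toy sketch of the genetic-algorithm recombination idea follows: binary chromosomes select, per dimension, which parent sample contributes to a "virtual" feature vector, and the population evolves under a placeholder fitness function (the paper's actual objective and the ELM classifier are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(1)
parent_a = rng.normal(size=64)        # deep features of sample A
parent_b = rng.normal(size=64)        # deep features of sample B
class_mean = rng.normal(size=64)      # stand-in class prototype

def fitness(vec):
    # Placeholder objective: stay close to the class prototype.
    return -np.linalg.norm(vec - class_mean)

pop = rng.integers(0, 2, size=(30, 64))          # 30 chromosomes
for _ in range(50):                              # generations
    virtual = np.where(pop == 1, parent_a, parent_b)
    scores = np.array([fitness(v) for v in virtual])
    best = pop[np.argsort(scores)[-10:]]         # selection
    children = best[rng.integers(0, 10, 30)]     # clone survivors
    flips = rng.random(children.shape) < 0.05    # mutation
    pop = np.where(flips, 1 - children, children)

virtual = np.where(pop == 1, parent_a, parent_b)
best_vec = virtual[np.argmax([fitness(v) for v in virtual])]
print(fitness(best_vec))
```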

Silhouette-Edge-Based Descriptor for Human Action Representation and Recognition

  • Odoyo, Wilfred O.; Choi, Jae-Ho; Moon, In-Kyu; Cho, Beom-Joon
    • Journal of Information and Communication Convergence Engineering / v.11 no.2 / pp.124-131 / 2013
  • The extraction and representation of postures and gestures from human activities in videos has been a focus of research in action recognition. With applications emerging across different fields, this paper seeks to improve the performance of action recognition systems by proposing a shape-based silhouette-edge descriptor for the human body. Information entropy, a measure of the randomness of a sequence of symbols, is used to guide the selection of vital key postures from video frames. Morphological operations are applied to extract and stack edges so that different actions are uniquely represented by shape. To classify an action from a new input video, a Hausdorff distance measure is applied between the gallery representations and the query images produced by the proposed procedure. The method is validated on well-known public databases. The result is an effective method of human action annotation and description.
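
The Hausdorff matching step can be sketched with SciPy as below: a query edge map is compared against gallery edge templates and takes the label of the nearest one. The random edge maps are stand-ins for the paper's stacked silhouette edges.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (N, 2)."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def edge_points(edge_img):
    """Coordinates of nonzero (edge) pixels as an (N, 2) array."""
    return np.argwhere(edge_img > 0)

rng = np.random.default_rng(2)
gallery = {label: (rng.random((64, 64)) > 0.97).astype(np.uint8)
           for label in ["walk", "wave", "bend"]}   # stacked-edge templates
query = (rng.random((64, 64)) > 0.97).astype(np.uint8)

q = edge_points(query)
pred = min(gallery, key=lambda lb: hausdorff(q, edge_points(gallery[lb])))
print("predicted action:", pred)
```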

Deep learning-based Human Action Recognition Technique Considering the Spatio-Temporal Relationship of Joints (관절의 시·공간적 관계를 고려한 딥러닝 기반의 행동인식 기법)

  • Choi, Inkyu; Song, Hyok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.413-415 / 2022
  • Since human joints, as components of the human body, provide useful information for analyzing human behavior, many studies have been conducted on human action recognition using joint information. However, recognizing human actions that change from moment to moment using only independent per-joint information is a very complex problem; a method of extracting additional information for learning, and an algorithm that considers the current state in light of past states, are both needed. In this paper, we propose a human action recognition technique that considers the positional relationship between connected joints and the change in each joint's position over time. Using a pre-trained joint extraction model, the position of each joint is obtained, and bone information is extracted as the difference vector between connected joints. A simplified neural network is constructed for the two types of input, and spatio-temporal features are extracted by adding an LSTM. In experiments on a dataset of nine actions, recognition accuracy measured with the temporal and spatial relationship features of the joints was superior to that obtained using single-joint information alone.
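
A hedged PyTorch sketch of the two-input design follows: joint positions and bone difference vectors are embedded separately, concatenated, and passed through an LSTM. The skeleton layout, parent indices, and layer sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class JointBoneLSTM(nn.Module):
    def __init__(self, num_joints=17, num_classes=9, hidden=128):
        super().__init__()
        feat = num_joints * 2                    # (x, y) per joint
        self.joint_fc = nn.Linear(feat, 64)      # joint-stream embedding
        self.bone_fc = nn.Linear(feat, 64)       # bone-stream embedding
        self.lstm = nn.LSTM(128, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, joints, parents):
        # joints: (B, T, J, 2); bone = joint minus its parent joint.
        bones = joints - joints[:, :, parents]
        b, t = joints.shape[:2]
        x = torch.cat([self.joint_fc(joints.reshape(b, t, -1)),
                       self.bone_fc(bones.reshape(b, t, -1))], dim=-1)
        out, _ = self.lstm(x)                    # temporal modeling
        return self.head(out[:, -1])             # last step -> class scores

parents = torch.arange(17).roll(1)               # toy parent indices
model = JointBoneLSTM()
clip = torch.randn(4, 30, 17, 2)                 # 4 clips of 30 frames
print(model(clip, parents).shape)                # torch.Size([4, 9])
```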
