• Title/Summary/Keyword: Human Action Recognition


Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems
    • /
    • v.19 no.6
    • /
    • pp.730-744
    • /
    • 2023
  • This paper addresses the recognition of human activities from egocentric vision, in particular video captured by body-worn cameras, which is useful for video surveillance, automatic search, and video indexing. It could also help assist elderly and frail persons, improving their daily lives. Human activity recognition remains a difficult task because of the large variations in how actions are executed. In assistive settings, recognition is realized through an external device, such as a robot acting as a personal assistant; the inferred information is used both online to assist the person and offline to support the personal assistant. The proposed method is robust to these sources of variability, and the main purpose of this paper is an efficient and simple recognition method using a convolutional neural network and deep learning on egocentric camera data only. In terms of accuracy, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric camera data together with several stationary cameras, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
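As a rough sketch of this kind of approach (the abstract does not give the authors' exact architecture, so the layers, input size, and class count below are invented for illustration), a small convolutional network classifying single egocentric frames could look like:

```python
# Hypothetical sketch: a tiny CNN that maps an egocentric RGB frame to
# activity-class logits. Not the paper's actual architecture.
import torch
import torch.nn as nn

class EgoCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),          # global average pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = EgoCNN(num_classes=10)
logits = model(torch.randn(4, 3, 64, 64))     # a batch of 4 RGB frames
print(logits.shape)                           # torch.Size([4, 10])
```

Per-frame predictions would then be aggregated over the video (e.g., by averaging logits) to produce a clip-level label.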

Haptic AR Sports Technologies for Indoor Virtual Matches (실내 가상 경기를 위한 햅틱 AR 스포츠 기술)

  • Kim, J.S.;Jang, S.H.;Yang, S.I.;Yoon, M.S.
    • Electronics and Telecommunications Trends
    • /
    • v.36 no.4
    • /
    • pp.92-102
    • /
    • 2021
  • Outdoor sports activities have been restricted by serious air pollution, such as fine dust and yellow dust, and abnormal meteorological changes, such as heatwaves and heavy snow. These environmental problems have rapidly increased the demand for indoor sports activities. Virtual sports, such as virtual golf, virtual baseball, and virtual soccer, allow people to play various sports games without going outdoors, and indoor sports industries and markets have seen rapid growth since the advent of virtual sports. Most virtual sports platforms use screen-based virtual reality techniques, which is why they are called screen sports. However, these platforms cannot support various sports games, especially virtual match games such as squash and boxing, because existing screen-based virtual reality sports techniques use real balls and players. This article presents screen-based haptic-augmented reality technologies for a new virtual sports platform that does not use real balls and players, thus overcoming the limitations of previous platforms. Various technologies, including human motion tracking, human action recognition, haptic feedback, screen-based augmented reality systems, and augmented reality sports content, are unified in the new platform. With these haptic-augmented reality technologies, the proposed platform supports sports games, including indoor virtual matches, that existing virtual sports platforms cannot.

A Study on Human-Robot Interface based on Imitative Learning using Computational Model of Mirror Neuron System (Mirror Neuron System 계산 모델을 이용한 모방학습 기반 인간-로봇 인터페이스에 관한 연구)

  • Ko, Kwang-Enu;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.6
    • /
    • pp.565-570
    • /
    • 2013
  • The mirror neuron regions distributed across the cortex handle intention recognition on the basis of imitative learning of an observed, goal-directed action acquired from visual information. In this paper, an automated intention recognition system is proposed by applying a computational model of the mirror neuron system to a human-robot interaction system. The computational model is designed using dynamic neural networks whose input is a sequential feature vector set derived from the behaviors of the target object and the actor, and whose output is motor data that can be used to perform the corresponding intentional action through the imitative learning and estimation procedures of the model. The intention recognition framework takes its input from a KINECT sensor and computes the corresponding motor data within a virtual robot simulation environment, on the basis of an intention-related scenario with a limited experimental space and a specified target object.

Image Recognition based on Adaptive Deep Learning (적응적 딥러닝 학습 기반 영상 인식)

  • Kim, Jin-Woo;Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.1
    • /
    • pp.113-117
    • /
    • 2018
  • Human emotions are revealed by various factors: words, actions, facial expressions, attire, and so on. However, people know how to hide their feelings, so emotion cannot easily be inferred from any single factor. To address this problem, we focus on behavior and facial expressions, which cannot easily be concealed without constant effort and training. In this paper, we propose an algorithm that estimates human emotion by combining two results, gradually learning human behavior and facial expression from little data through deep learning. With this algorithm, we can grasp human emotions more comprehensively.

RECOGNITION ALGORITHM OF DRIED OAK MUSHROOM GRADINGS USING GRAY LEVEL IMAGES

  • Lee, C.H.;Hwang, H.
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1996.06c
    • /
    • pp.773-779
    • /
    • 1996
  • Dried oak mushrooms have complex and varied visual features, and their grading and sorting has traditionally been done by human experts. Though the actions involved in human grading look simple, the underlying decision making results from complex neural processing of the visual image. Although the processing details involved in human visual recognition have not been fully investigated, humans appear to recognize objects either by extracting specific features, by using the image itself without extracting such features, or in a combined manner. In most cases, extracting special quantitative features from a camera image requires complex algorithms, and processing the gray-level image carries a heavy computing load. This is especially problematic for nonuniform, irregular, and fuzzily shaped agricultural products, where performance suffers because of sensitivity to the crisp criteria or specific rules set up by the algorithms. Real-time constraints also often force the use of binary segmentation, which can lose important information about the object. In this paper, a neural-network-based real-time recognition algorithm is proposed that extracts no visual features and instead uses only the directly captured raw gray images. A specially formatted, adaptable-size grid is proposed for the network input, and illumination compensation is performed to accommodate variable lighting environments. The proposed grading scheme showed very successful results.
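The grid-based input formatting described above can be illustrated with a hypothetical sketch: the raw gray-level image is average-pooled into an adaptable grid of cell values, with a crude mean-shift illumination compensation. The grid size and normalization here are assumptions for the example, not the paper's exact scheme.

```python
# Illustrative only: reduce a raw gray image to a rows x cols grid of cell
# averages that feeds the network directly, no feature extraction.
import numpy as np

def grid_input(gray, rows=16, cols=16, target_mean=128.0):
    """Average-pool a gray image into a rows x cols grid, then shift the
    global mean to target_mean as a simple illumination compensation."""
    h, w = gray.shape
    cells = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = gray[i*h//rows:(i+1)*h//rows, j*w//cols:(j+1)*w//cols]
            cells[i, j] = block.mean()
    cells += target_mean - cells.mean()      # illumination compensation
    return cells.ravel() / 255.0             # flattened network input

img = np.random.default_rng(1).integers(0, 256, (120, 160)).astype(float)
x = grid_input(img)
print(x.shape)  # (256,)
```

Because the grid adapts to the image dimensions, the network input length stays fixed regardless of the captured image size.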


Binary Hashing CNN Features for Action Recognition

  • Li, Weisheng;Feng, Chen;Xiao, Bin;Chen, Yanquan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.9
    • /
    • pp.4412-4428
    • /
    • 2018
  • The purpose of this work is to solve the problem of representing an entire video using Convolutional Neural Network (CNN) features for human action recognition. Recently, due to insufficient GPU memory, it has been difficult to take the whole video as the input of the CNN for end-to-end learning. A typical method is to use sampled video frames as inputs and corresponding labels as supervision. One major issue of this popular approach is that the local samples may not contain the information indicated by the global labels and sufficient motion information. To address this issue, we propose a binary hashing method to enhance the local feature extractors. First, we extract the local features and aggregate them into global features using maximum/minimum pooling. Second, we use the binary hashing method to capture the motion features. Finally, we concatenate the hashing features with global features using different normalization methods to train the classifier. Experimental results on the JHMDB and MPII-Cooking datasets show that, for these new local features, binary hashing mapping on the sparsely sampled features led to significant performance improvements.
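The three-step pipeline in this abstract (pool local features into global ones, binary-hash to capture motion, normalize and concatenate) can be sketched as follows; the feature dimensions and the sign-of-random-projection hash are illustrative assumptions, not necessarily the paper's exact construction.

```python
# Hypothetical sketch of the described pipeline over per-frame CNN features.
import numpy as np

rng = np.random.default_rng(0)
local = rng.standard_normal((25, 512))     # 25 sampled frames x 512-dim features

# 1) Aggregate local features into global features via max/min pooling.
global_feat = np.concatenate([local.max(axis=0), local.min(axis=0)])  # (1024,)

# 2) Binary hashing: sign of a random projection of frame-to-frame
#    differences, one simple way to encode motion.
motion = np.diff(local, axis=0)                     # (24, 512)
proj = rng.standard_normal((512, 128))              # random hash projection
hash_bits = (motion @ proj > 0).astype(np.float32)  # (24, 128) binary codes
hash_feat = hash_bits.mean(axis=0)                  # pool codes over time

# 3) L2-normalize each part and concatenate for the classifier.
def l2(v):
    return v / (np.linalg.norm(v) + 1e-8)

video_descriptor = np.concatenate([l2(global_feat), l2(hash_feat)])
print(video_descriptor.shape)  # (1152,)
```

The fixed-length `video_descriptor` then represents the whole video for a standard classifier, sidestepping end-to-end training on every frame.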

Recognition of Occupants' Cold Discomfort-Related Actions for Energy-Efficient Buildings

  • Song, Kwonsik;Kang, Kyubyung;Min, Byung-Cheol
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.426-432
    • /
    • 2022
  • HVAC systems play a critical role in reducing energy consumption in buildings. Integrating occupants' thermal comfort evaluation into HVAC control strategies is believed to reduce building energy consumption while minimizing their thermal discomfort. Advanced technologies, such as visual sensors and deep learning, enable the recognition of occupants' discomfort-related actions, thus making it possible to estimate their thermal discomfort. Unfortunately, it remains unclear how accurate a deep learning-based classifier is to recognize occupants' discomfort-related actions in a working environment. Therefore, this research evaluates the classification performance of occupants' discomfort-related actions while sitting at a computer desk. To achieve this objective, this study collected RGB video data on nine college students' cold discomfort-related actions and then trained a deep learning-based classifier using the collected data. The classification results are threefold. First, the trained classifier has an average accuracy of 93.9% for classifying six cold discomfort-related actions. Second, each discomfort-related action is recognized with more than 85% accuracy. Third, classification errors are mostly observed among similar discomfort-related actions. These results indicate that using human action data will enable facility managers to estimate occupants' thermal discomfort and, in turn, adjust the operational settings of HVAC systems to improve the energy efficiency of buildings in conjunction with their thermal comfort levels.


Agent's Activities based Intention Recognition Computing (에이전트 행동에 기반한 의도 인식 컴퓨팅)

  • Kim, Jin-Ok
    • Journal of Internet Computing and Services
    • /
    • v.13 no.2
    • /
    • pp.87-98
    • /
    • 2012
  • Understanding an agent's intent is an essential component of human-computer interaction in ubiquitous computing, because correctly inferring a subject's intention helps the system understand situations that involve collaboration among multiple agents or detect situations that indicate a particular activity. Inspired by the way people interpret one another's actions and infer the intentions and goals that underlie them, this paper proposes an approach that allows a computing system to quickly recognize the intent of agents based on experience data acquired through prior activity-recognition capabilities. The proposed method uses Hidden Markov Models (HMMs) to model the system's prior experience and the agents' action changes, letting the system infer intents before the agents' actions are finalized while taking the perspective of the agent whose intent is to be recognized. Quantitative validation of the experimental results, reporting an accuracy rate, an early detection rate, and a correct duration rate for detecting the intent of several people performing various activities, shows that the proposed research contributes to an effective intent recognition system.
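An HMM-based intent filter of the kind described can be sketched with the standard forward algorithm, where hidden states are intents and observations are recognized activities; the transition, emission, and prior probabilities below are invented for the example.

```python
# Illustrative forward-algorithm filter: online belief over 2 hidden intents
# given a stream of 3 possible observed activities. Numbers are made up.
import numpy as np

A = np.array([[0.9, 0.1],       # intent transition matrix
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],  # emission: P(observed activity | intent)
              [0.1, 0.3, 0.6]])
pi = np.array([0.5, 0.5])       # prior over intents

def filter_intent(obs):
    """Return P(intent_t | obs_1..t) at each step (an early, online estimate)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    beliefs = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then update with evidence
        alpha /= alpha.sum()
        beliefs.append(alpha)
    return np.array(beliefs)

beliefs = filter_intent([0, 0, 2, 2, 2])
print(beliefs[-1].argmax())  # → 1: the later observations favor intent 1
```

Because the belief is updated after every observation, the system can commit to an intent estimate before the action sequence is complete, which is what enables the early detection the abstract measures.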

Bio-mimetic Recognition of Action Sequence using Unsupervised Learning (비지도 학습을 이용한 생체 모방 동작 인지 기반의 동작 순서 인식)

  • Kim, Jin Ok
    • Journal of Internet Computing and Services
    • /
    • v.15 no.4
    • /
    • pp.9-20
    • /
    • 2014
  • Making good predictions about the outcome of one's actions is essential in the context of social interaction and decision making. This paper proposes a computational model for learning articulated motion patterns for action recognition that mimics the biologically inspired visual perception processing of the human brain. The model of cortical architecture for the unsupervised learning of motion sequences builds upon neurophysiological knowledge about cortical sites such as IT, MT, and STS and the specific neuronal representations that contribute to articulated motion perception. Experiments show how the model automatically selects significant motion patterns as well as meaningful static snapshot categories from continuous video input. Such key poses correspond to articulated postures, which are used to probe the trained network to impose implied motion perception from static views. We also show how sequence-selective representations are learned in STS by fusing snapshot and motion input, and how learned feedback connections enable predictions about future input sequences. Network simulations demonstrate the computational capacity of the proposed model for motion recognition.

Improving Performance of Human Action Recognition on Accelerometer Data (가속도 센서 데이터 기반의 행동 인식 모델 성능 향상 기법)

  • Nam, Jung-Woo;Kim, Jin-Heon
    • Journal of IKEEE
    • /
    • v.24 no.2
    • /
    • pp.523-528
    • /
    • 2020
  • With the widespread adoption of sensor-rich mobile devices, the analysis of human activities has become more general and simpler than ever before. In this paper, we propose two deep neural networks that efficiently and accurately perform human activity recognition (HAR) using tri-axial accelerometers. In combination with powerful modern deep learning techniques such as batch normalization and LSTM networks, our model outperforms baseline approaches and establishes state-of-the-art results on the WISDM dataset.
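A minimal sketch of an LSTM-based HAR model of this kind (not the paper's actual two networks) operates on fixed-length windows of tri-axial accelerometer samples, with batch normalization applied to the input channels; the window length, hidden size, and class count below are assumptions.

```python
# Hypothetical sketch: batch-normalized LSTM classifier over accelerometer windows.
import torch
import torch.nn as nn

class AccelLSTM(nn.Module):
    def __init__(self, num_classes: int = 6, hidden: int = 64):
        super().__init__()
        self.bn = nn.BatchNorm1d(3)                  # normalize x/y/z channels
        self.lstm = nn.LSTM(3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                            # x: (batch, time, 3)
        x = self.bn(x.transpose(1, 2)).transpose(1, 2)
        _, (h, _) = self.lstm(x)                     # h: (layers, batch, hidden)
        return self.head(h[-1])                      # classify from last state

model = AccelLSTM()
logits = model(torch.randn(8, 100, 3))               # 8 windows of 100 samples
print(logits.shape)  # torch.Size([8, 6])
```

The WISDM-style setup would segment the raw accelerometer stream into overlapping windows and train this classifier with cross-entropy over the activity labels.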