• Title/Summary/Keyword: Human motion detection


Motion Sensor Data Normalization Algorithm for Pedestrian Pattern Detection (보행 패턴 검출을 위한 동작센서 데이터 정규화 알고리즘)

  • Kim Nam-Jin; Hong Joo-Hyun; Lee Tae-Soo
    • The Journal of the Korea Contents Association / v.5 no.4 / pp.94-102 / 2005
  • In this paper, a three-axis accelerometer was used to develop a small sensor module that can be attached to the human body in any orientation to calculate the acceleration along the gravity direction caused by human motion. To measure the wearer's walking or running with the sensor module, the acquired sensor data were pre-processed for quantitative analysis: the raw digital values were transformed into three-dimensional orthogonal coordinates, reduced to a single scalar acceleration along the gravity direction, and normalized to physical units. The normalized sensor data were then used to detect walking patterns and count steps. The developed algorithm was implemented as a PDA application, and the step-count accuracy of the developed sensor was about 97% in laboratory experiments.

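The pipeline the abstract describes (normalization to physical units, reduction to a single gravity-direction magnitude, then step detection) can be illustrated with a minimal sketch. The ADC resolution, the ±2 g range, the 50 Hz sampling rate, and the peak threshold below are hypothetical values for illustration, not parameters from the paper.

```python
# Minimal sketch (not the authors' code): normalize 3-axis accelerometer
# samples to physical units, reduce them to a single orientation-free
# magnitude, and count steps by simple peak detection.
# Assumed: 10-bit ADC, +/-2 g range, 50 Hz sampling -- all hypothetical.
import numpy as np

def normalize_to_g(raw, adc_max=1023, g_range=2.0):
    """Map raw ADC counts (0..adc_max) to acceleration in g."""
    return (raw / adc_max - 0.5) * 2.0 * g_range

def count_steps(raw_xyz, fs=50, threshold=0.3, min_gap_s=0.3):
    """raw_xyz: (N, 3) array of raw samples; returns an estimated step count."""
    acc = normalize_to_g(np.asarray(raw_xyz, dtype=float))
    mag = np.linalg.norm(acc, axis=1)          # orientation-free magnitude
    mag -= mag.mean()                          # remove the static 1 g offset
    min_gap = int(min_gap_s * fs)
    steps, last = 0, -min_gap
    for i in range(1, len(mag) - 1):
        if mag[i] > threshold and mag[i] > mag[i - 1] and mag[i] >= mag[i + 1]:
            if i - last >= min_gap:            # debounce successive peaks
                steps += 1
                last = i
    return steps

# Example with a synthetic 2 Hz "walking" oscillation on the vertical channel.
t = np.arange(0, 10, 1 / 50)
z = 768 + 100 * np.sin(2 * np.pi * 2 * t)      # raw vertical channel (~1 g offset)
demo = np.stack([np.full_like(z, 512), np.full_like(z, 512), z], axis=1)
print(count_steps(demo))                        # ~20 steps over 10 s
```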

Implementation of Wireless Human Movement Detection System using Thermopile Array Sensor (서모파일 어레이 센서를 이용한 무선 인체 감지 시스템 설계)

  • Lee, Min Goo; Park, Yong Kuk; Jung, Kyung Kwon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.10a / pp.857-860 / 2014
  • This paper proposes a human movement detection system based on a thermopile array sensor. The sensor is mounted on the ceiling and acquires spatial temperatures, referred to as a thermal distribution. The system obtains 4×4-pixel thermal distributions from the sensor and analyzes them to extract human movement. In the experiments, the proposed system successfully detected human movements.

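A minimal sketch of the kind of processing the abstract describes: each 4×4 thermal frame is compared against a learned background, and pixels that are both warmer than the background and changing between frames are flagged as movement. The temperature thresholds and background update rate are assumptions, not values from the paper.

```python
# Minimal sketch (assumptions, not the paper's implementation): movement
# detection from a ceiling-mounted 4x4 thermopile array.
import numpy as np

class ThermalMotionDetector:
    def __init__(self, alpha=0.05, temp_delta=1.5, diff_delta=0.5):
        self.alpha = alpha            # background update rate
        self.temp_delta = temp_delta  # deg C above background = "warm"
        self.diff_delta = diff_delta  # frame-to-frame change = "moving"
        self.background = None
        self.prev = None

    def update(self, frame):
        """frame: (4, 4) array of temperatures in deg C. Returns True on movement."""
        frame = np.asarray(frame, dtype=float)
        if self.background is None:
            self.background = frame.copy()
            self.prev = frame.copy()
            return False
        warm = frame - self.background > self.temp_delta
        moving = np.abs(frame - self.prev) > self.diff_delta
        detected = bool(np.any(warm & moving))
        # Update the background only where nothing warm is present.
        self.background = np.where(
            warm, self.background,
            (1 - self.alpha) * self.background + self.alpha * frame)
        self.prev = frame
        return detected

det = ThermalMotionDetector()
room = np.full((4, 4), 22.0)
det.update(room)                                  # learn the empty room
person = room.copy(); person[1, 2] = 26.0         # a warm pixel appears
print(det.update(person))                         # True
```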

Movement Intention Detection of Human Body Based on Electromyographic Signal Analysis Using Fuzzy C-Means Clustering Algorithm (인체의 동작의도 판별을 위한 퍼지 C-평균 클러스터링 기반의 근전도 신호처리 알고리즘)

  • Park, Kiwon; Hwang, Gun-Young
    • Journal of Korea Multimedia Society / v.19 no.1 / pp.68-79 / 2016
  • Electromyographic (EMG) signals are widely used as motion commands for prosthetic arms. Although EMG signals contain meaningful information, including the movement intentions of the human body, it is difficult to predict a subject's motion by analyzing EMG signals in real time because of the difficulty of extracting motion information from signals that inherently contain a great deal of noise. In this paper, four Ag/AgCl electrodes are placed on the surface of the subject's major muscles responsible for four upper-arm movements (wrist flexion, wrist extension, ulnar deviation, finger flexion) to measure the EMG signals corresponding to those movements. The measured signals are sampled with a DAQ module and clustered sequentially. The Fuzzy C-Means (FCM) method calculates the center values of the clustered data groups, and the fuzzy system designed to detect movement intention, using those center values as input signals, classifies the movement intentions with about 90% success.
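
As a rough illustration of the clustering step, the sketch below runs Fuzzy C-Means on per-channel RMS features computed from synthetic 4-channel EMG windows. The feature choice and the synthetic data are simplifying assumptions, and the downstream fuzzy rule base from the paper is not reproduced.

```python
# Minimal sketch under stated assumptions (not the authors' code): Fuzzy
# C-Means clustering of per-channel RMS features from 4-channel EMG windows.
import numpy as np

def fcm(X, c=4, m=2.0, iters=100, seed=0):
    """Fuzzy C-Means: X is (N, d); returns (centers, membership matrix)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centers = Um.T @ X / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

def rms_features(windows):
    """windows: (N, channels, samples) -> per-channel RMS, shape (N, channels)."""
    return np.sqrt(np.mean(np.square(windows), axis=2))

# Synthetic stand-in for segmented 4-channel EMG: movement k excites channel k.
rng = np.random.default_rng(1)
groups = []
for k in range(4):
    scale = np.full(4, 0.1)
    scale[k] = 1.0
    groups.append(rng.normal(scale=scale[:, None], size=(50, 4, 256)))
windows = np.concatenate(groups)                 # (200, 4, 256)
centers, U = fcm(rms_features(windows), c=4)
print(centers.shape, U.argmax(axis=1)[:10])      # hardened labels for 10 windows
```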

Applying Hilbert-Huang Transform to Extract Essential Patterns from Hand Accelerometer Data (힐버트-황 변환에 통한 Hand Accelerometer 데이터의 핵심 패턴 추출)

  • Choe, Byeongseog; Suh, Jung-Yul
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.2 / pp.179-190 / 2017
  • Hand accelerometers are widely used to detect human motion patterns in real time. Reliably identifying which activity a subject is performing rests on having an accurate template of each activity. Many human activities are represented as sets of multiple time series from such sensors, which are mostly non-stationary and non-linear in nature, so a method is needed that can effectively extract patterns from non-stationary, non-linear data. To this end, we propose applying the Hilbert-Huang Transform, which is known to be effective at extracting non-stationary and non-linear components from time-series data, and apply it to samples of accelerometer data to assess its effectiveness.
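
The sketch below illustrates only the Hilbert spectral step of the Hilbert-Huang Transform, applied to stand-in intrinsic mode functions (IMFs); the empirical mode decomposition that would produce real IMFs from accelerometer data is assumed to come from an external package and is not implemented here.

```python
# Minimal sketch under stated assumptions: instantaneous amplitude and
# frequency of synthetic IMFs via the analytic signal (Hilbert transform).
import numpy as np
from scipy.signal import hilbert

def hilbert_spectrum(imf, fs):
    """Instantaneous amplitude and frequency (Hz) of one IMF."""
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    freq = np.diff(phase) / (2 * np.pi) * fs
    return amplitude[:-1], freq

fs = 100
t = np.arange(0, 5, 1 / fs)
# Stand-ins for two IMFs: a 2 Hz arm-swing component and a faint 10 Hz tremor.
imfs = [np.sin(2 * np.pi * 2 * t), 0.3 * np.sin(2 * np.pi * 10 * t)]
for k, imf in enumerate(imfs):
    amp, freq = hilbert_spectrum(imf, fs)
    print(f"IMF {k}: mean inst. freq = {freq.mean():.1f} Hz, "
          f"mean amplitude = {amp.mean():.2f}")
```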

Dynamic Bayesian Network-Based Gait Analysis (동적 베이스망 기반의 걸음걸이 분석)

  • Kim, Chan-Young; Sin, Bong-Kee
    • Journal of KIISE: Software and Applications / v.37 no.5 / pp.354-362 / 2010
  • This paper proposes a new method for hierarchical analysis of human gait that divides the motion into gait direction and gait posture using dynamic Bayesian networks (DBNs). Based on the Factorial HMM (FHMM), a type of DBN, we design a Gait Motion Decoder (GMD) with a circular state-space architecture that fits human walking behavior well. Most previous studies focused on human identification, were limited to certain viewing angles, and did not model the walking action itself; this work instead models pedestrian pose and posture explicitly and separately to recognize gait direction and detect orientation changes. Experimental results showed 96.5% accuracy in pose identification. This work is among the first efforts to decompose gait motion into gait pose and gait posture, and it could be applied to a broad class of human activities in a variety of situations.
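
A much-simplified stand-in for the circular state-space idea: a single HMM whose eight hidden states are walking directions arranged on a circle, decoded with Viterbi. The factorial structure and observation model of the paper are not reproduced, and the per-frame direction scores below are synthetic.

```python
# Minimal sketch, not the paper's FHMM-based Gait Motion Decoder: an HMM with
# a circular state space of 8 walking directions, decoded with Viterbi.
import numpy as np

S = 8                                            # 8 discrete gait directions
A = np.full((S, S), 1e-6)
for s in range(S):                               # stay, or step to a neighbour
    A[s, s] = 0.90
    A[s, (s - 1) % S] = A[s, (s + 1) % S] = 0.05
A /= A.sum(axis=1, keepdims=True)
log_A, log_pi = np.log(A), np.log(np.full(S, 1.0 / S))

def viterbi(obs_loglik):
    """obs_loglik: (T, S) per-frame log-likelihood of each direction."""
    T = len(obs_loglik)
    delta = log_pi + obs_loglik[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # prev state -> next state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + obs_loglik[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Noisy per-frame scores for a pedestrian turning from direction 2 to 3.
rng = np.random.default_rng(0)
truth = [2] * 10 + [3] * 10
obs = rng.normal(size=(20, S))
obs[np.arange(20), truth] += 3.0
print(viterbi(obs))
```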

A Study for Improved Human Action Recognition using Multi-classifiers (비디오 행동 인식을 위하여 다중 판별 결과 융합을 통한 성능 개선에 관한 연구)

  • Kim, Semin; Ro, Yong Man
    • Journal of Broadcast Engineering / v.19 no.2 / pp.166-173 / 2014
  • Recently, human action recognition has been developed for various broadcasting and video-processing applications. Since a video can contain many different scenes, keypoint approaches have attracted more attention than template-based methods for real applications. Keypoint approaches find regions of motion in a video and build 3-dimensional patches around them; histogram-based descriptors are then computed from the patches, and a machine-learning classifier is applied to detect actions. However, a single classifier has difficulty handling diverse human actions. To address this, multi-classifier approaches have been used for detection and recognition. We therefore propose a new human action recognition method using decision-level fusion of a support vector machine and sparse representation. The proposed method extracts keypoint-based descriptors from a video, obtains a result from each classifier, and fuses the two results using weights acquired in the training stage. The experimental results show better performance than a previous fusion method.
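
Decision-level fusion itself reduces to normalizing and weighting per-class scores. The sketch below assumes two hypothetical score matrices (one per classifier) and fusion weights obtained on a validation set; the keypoint descriptors, the SVM, and the sparse-representation classifier from the paper are not included.

```python
# Minimal sketch under stated assumptions: weighted decision-level fusion of
# per-class scores from two classifiers.
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse(svm_scores, src_scores, w_svm=0.6, w_src=0.4):
    """Weighted fusion of per-class score matrices; returns predicted class ids."""
    p = w_svm * softmax(svm_scores) + w_src * softmax(src_scores)
    return p.argmax(axis=-1)

# Hypothetical scores for 3 clips over 4 action classes.
svm_scores = np.array([[2.0, 0.1, -1.0, 0.3],
                       [0.2, 1.5, 0.1, -0.5],
                       [0.0, 0.4, 0.3, 0.2]])
src_scores = np.array([[1.2, 0.0, -0.2, 0.1],
                       [-0.3, 0.2, 1.1, 0.0],
                       [0.1, 0.0, 0.2, 1.4]])
print(fuse(svm_scores, src_scores))              # -> [0 1 3]
```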

Depth Images-based Human Detection, Tracking and Activity Recognition Using Spatiotemporal Features and Modified HMM

  • Kamal, Shaharyar; Jalal, Ahmad; Kim, Daijin
    • Journal of Electrical Engineering and Technology / v.11 no.6 / pp.1857-1862 / 2016
  • Human activity recognition using depth information is an emerging and challenging area of computer vision that has drawn considerable attention from practical applications such as smart home/office systems, personal health care, and 3D video games. This paper presents a novel framework for 3D human body detection, tracking, and recognition from depth video sequences using spatiotemporal features and a modified HMM. Human silhouettes are extracted from the raw depth data by considering spatial continuity and constraints on human motion, while frame differencing is used to track human movements. The feature extraction mechanism combines spatial depth shape features with temporal joint features to improve classification performance, and the fused features are used to recognize different activities with a modified hidden Markov model (M-HMM). The proposed approach is evaluated on two challenging depth video datasets. Moreover, the system handles body-part rotation and missing body parts, which is a major contribution to human activity recognition.
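
A minimal sketch of the silhouette-extraction and frame-differencing ideas on synthetic depth frames. The frame size and depth threshold are arbitrary assumptions, and the spatiotemporal features and M-HMM from the paper are not shown.

```python
# Minimal sketch under stated assumptions (not the paper's pipeline):
# silhouette from depth background subtraction, movement from frame differencing.
import numpy as np

def silhouette(depth, background, min_diff=150):
    """Pixels at least min_diff (depth units) closer than the background."""
    return (background - depth) > min_diff

def movement_mask(prev_sil, cur_sil):
    """Pixels whose silhouette membership changed between frames."""
    return prev_sil ^ cur_sil

# Synthetic 8x8 depth frames (larger value = farther away).
background = np.full((8, 8), 4000)
frame1 = background.copy(); frame1[2:6, 2:4] = 2000   # person on the left
frame2 = background.copy(); frame2[2:6, 3:5] = 2000   # person moved right
sil1, sil2 = silhouette(frame1, background), silhouette(frame2, background)
moved = movement_mask(sil1, sil2)
print(int(sil2.sum()), int(moved.sum()))              # silhouette size, changed pixels
```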

Real-Time Cattle Action Recognition for Estrus Detection

  • Heo, Eui-Ju; Ahn, Sung-Jin; Choi, Kang-Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.4 / pp.2148-2161 / 2019
  • In this paper, we present a real-time cattle action recognition algorithm that detects the estrus phase of cattle from a live video stream. To classify cattle movement, and specifically to detect the mounting action, the most observable sign of the estrus phase, a simple yet effective feature description exploiting motion history images (MHI) is designed. By learning the proposed features within a support vector machine framework, representative cattle actions such as mounting, walking, tail wagging, and foot stamping can be recognized robustly in complex scenes. Thanks to the low complexity of the proposed action recognition algorithm, multiple cattle in three enclosures can be monitored simultaneously with a single fisheye camera. Extensive experiments with real video streams confirmed that the proposed algorithm outperforms a conventional human action recognition algorithm by 18% in recognition accuracy, even with a much lower-dimensional feature description.
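
The motion history image at the core of such a feature description can be sketched as follows; the decay length, the coarse grid descriptor, and the synthetic motion masks are assumptions for illustration, and the SVM training on real cattle footage is not included.

```python
# Minimal sketch of a motion-history-image (MHI) feature under stated
# assumptions. Newly moving pixels are stamped with tau and older motion
# decays by one per frame, so the MHI encodes where and how recently motion occurred.
import numpy as np

def update_mhi(mhi, motion_mask, tau=15):
    """motion_mask: boolean (H, W) of pixels that moved this frame."""
    decayed = np.maximum(mhi - 1, 0)
    return np.where(motion_mask, tau, decayed)

def mhi_feature(mhi, grid=2):
    """Coarse descriptor: mean motion recency in a grid x grid layout."""
    h, w = mhi.shape
    cells = mhi[: h - h % grid, : w - w % grid].reshape(
        grid, h // grid, grid, w // grid)
    return cells.mean(axis=(1, 3)).ravel()

# A blob moving left-to-right across a 16x16 scene.
mhi = np.zeros((16, 16))
for x in range(4, 12):
    mask = np.zeros((16, 16), dtype=bool)
    mask[6:10, x:x + 2] = True
    mhi = update_mhi(mhi, mask)
print(mhi_feature(mhi))    # recent motion concentrated in the right half
```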

Hand Gesture Recognition using Optical Flow Field Segmentation and Boundary Complexity Comparison based on Hidden Markov Models

  • Park, Sang-Yun; Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.14 no.4 / pp.504-516 / 2011
  • In this paper, we present a method to detect the human hand and recognize hand gestures. To detect the hand region in the input image, we use human skin color together with a hand shape feature (boundary complexity), and we use an optical flow algorithm to track hand movement. Hand gesture recognition consists of two parts: posture recognition and motion recognition. To describe the hand posture we employ the Fourier descriptor method because it is rotation invariant, and we employ PCA to extract features across the sequence of gesture frames. Finally, an HMM is used to recognize these features and make the final decision about the hand gesture. Experiments show that the proposed method achieves a 99% recognition rate in environments with a simple background and no face region, falling to 89.5% in environments with a complex background and a face region present. These results suggest that the proposed algorithm could be applied in practice.
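
Of the components listed, the rotation-invariant Fourier descriptor is easy to illustrate in isolation. The sketch below builds it for a synthetic contour and checks the rotation invariance; skin-color segmentation, optical flow, PCA, and the HMM classifier are not included.

```python
# Minimal sketch under stated assumptions: a rotation/translation/scale-invariant
# Fourier descriptor of a (synthetic) hand contour.
import numpy as np

def fourier_descriptor(contour, n_coeffs=16):
    """contour: (N, 2) ordered boundary points -> invariant descriptor."""
    z = contour[:, 0] + 1j * contour[:, 1]       # boundary as complex numbers
    F = np.fft.fft(z - z.mean())                 # subtract mean: translation invariance
    mags = np.abs(F)                             # drop phase: rotation/start-point invariance
    mags /= mags[1]                              # divide by first harmonic: scale invariance
    return mags[1:n_coeffs + 1]

theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
hand = np.stack([(1 + 0.3 * np.cos(5 * theta)) * np.cos(theta),
                 (1 + 0.3 * np.cos(5 * theta)) * np.sin(theta)], axis=1)
rot = np.pi / 3                                   # same shape, rotated 60 degrees
R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
d1, d2 = fourier_descriptor(hand), fourier_descriptor(hand @ R.T)
print(np.allclose(d1, d2))                        # True: descriptor is rotation-invariant
```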

Deep Learning-Based Outlier Detection and Correction for 3D Pose Estimation (3차원 자세 추정을 위한 딥러닝 기반 이상치 검출 및 보정 기법)

  • Ju, Chan-Yang; Park, Ji-Sung; Lee, Dong-Ho
    • KIPS Transactions on Software and Data Engineering / v.11 no.10 / pp.419-426 / 2022
  • In this paper, we propose a method to improve the accuracy of 3D human pose estimation models across various movements. Existing pose estimation models suffer from jitter, inversion, swap, and miss errors that produce incorrect coordinates and lower the accuracy with which exact human pose coordinates are detected. We propose a method consisting of a detection step and a correction step to handle these problems: a deep learning-based outlier detection method effectively identifies outliers among the pose coordinates during movement, and a rule-based correction method corrects the outliers according to a simple rule. Experiments with 2D golf swing motion data show that the proposed method is effective across various motions and indicate that it can be extended from 2D to 3D coordinates.
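
The correction step can be sketched with a simple velocity-jump rule standing in for the learned outlier detector; the jump threshold and the synthetic trajectory below are assumptions for illustration only, not the paper's settings.

```python
# Minimal sketch under stated assumptions: flag jittery joint coordinates
# (frame-to-frame jumps above a threshold) and replace them by the midpoint
# of the neighbouring frames.
import numpy as np

def correct_jitter(coords, max_jump=0.5):
    """coords: (T, J, 2) joint trajectories; returns a corrected copy."""
    coords = np.array(coords, dtype=float)
    T = len(coords)
    for t in range(1, T - 1):
        jump = np.linalg.norm(coords[t] - coords[t - 1], axis=-1)   # (J,)
        outlier = jump > max_jump
        # Replace flagged joints by the midpoint of their neighbours.
        coords[t, outlier] = 0.5 * (coords[t - 1, outlier] + coords[t + 1, outlier])
    return coords

# One wrist joint drifting smoothly, with an inversion-like spike at frame 5.
t = np.linspace(0, 1, 11)
traj = np.stack([t, 0.1 * t], axis=1)[:, None, :]   # (11 frames, 1 joint, 2)
traj[5, 0] = [-1.0, 2.0]                            # corrupted coordinate
fixed = correct_jitter(traj)
print(np.round(fixed[5, 0], 3))                     # back near the smooth path
```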