• Title/Summary/Keyword: Human motion detection


Performance Comparison for Exercise Motion Classification using Deep Learning-based OpenPose (OpenPose기반 딥러닝을 이용한 운동동작분류 성능 비교)

  • Nam Rye Son;Min A Jung
    • Smart Media Journal / v.12 no.7 / pp.59-67 / 2023
  • Recently, research on behavior analysis that tracks human posture and movement has been actively conducted. In particular, OpenPose, open-source software released by CMU in 2017, is a representative method for estimating human pose and behavior. OpenPose can detect and estimate various body parts of a person, such as the body, face, and hands, in real time, making it applicable to fields such as smart healthcare, exercise training, security systems, and medicine. In this paper, we propose a method for classifying the four exercise movements most commonly performed by users in the gym - Squat, Walk, Wave, and Fall-down - using OpenPose-based deep learning models (DNN and CNN). The training data are collected by capturing the user's movements through recorded videos and real-time camera capture, and the collected dataset is preprocessed using OpenPose. The preprocessed dataset is then used to train the proposed DNN and CNN models for exercise movement classification. The errors of the proposed models are evaluated using MSE, RMSE, and MAE. The performance evaluation shows that the proposed DNN model outperforms the proposed CNN model.
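
The paper does not reproduce its network details here; as a rough illustration of the pipeline's final stage, the sketch below classifies a single OpenPose skeleton with a small DNN. The BODY_25 keypoint layout, layer sizes, and normalization are assumptions, not the authors' configuration.

    # Hypothetical sketch: classify one frame of OpenPose keypoints (PyTorch).
    # Assumes BODY_25 output (25 joints x (x, y, confidence)) already extracted.
    import torch
    import torch.nn as nn

    NUM_JOINTS, NUM_CLASSES = 25, 4              # classes: Squat, Walk, Wave, Fall-down

    class ExerciseDNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(NUM_JOINTS * 3, 128), nn.ReLU(),
                nn.Linear(128, 64), nn.ReLU(),
                nn.Linear(64, NUM_CLASSES))      # logits; train with CrossEntropyLoss
        def forward(self, x):                    # x: (batch, 75) normalized keypoints
            return self.net(x)

    model = ExerciseDNN()
    keypoints = torch.rand(1, NUM_JOINTS * 3)    # stand-in for one detected skeleton
    label = model(keypoints).argmax(dim=1)       # index into the four exercise classes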

Efficient Intermediate Joint Estimation using the UKF based on the Numerical Inverse Kinematics (수치적인 역운동학 기반 UKF를 이용한 효율적인 중간 관절 추정)

  • Seo, Yung-Ho;Lee, Jun-Sung;Lee, Chil-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.6 / pp.39-47 / 2010
  • Research on image-based articulated pose estimation faces several challenges, including human feature detection, precise pose estimation, and real-time performance. In particular, various methods have been presented for recovering the many joints of the human body. We propose a novel numerical inverse kinematics method improved with the UKF (unscented Kalman filter) to estimate human pose in real time. Existing numerical inverse kinematics requires many iterations to reach an optimal estimate and suffers from problems such as singularity of the Jacobian matrix and local minima. To solve these problems, we combine the UKF, as a tool for optimal state estimation, with numerical inverse kinematics; combining the numerical inverse kinematics solution with the sampling-based UKF provides stability and rapid convergence to the optimal estimate. To estimate the human pose, we extract the human body of interest using both background subtraction and a skin-color detection algorithm, and localize its 3D position using the camera geometry. Then, using the UKF-based numerical inverse kinematics, we generate the intermediate joints that are not detected in the images. The proposed method compensates for the weaknesses of numerical inverse kinematics, namely its computational complexity and estimation accuracy.
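
For context, the sketch below shows the kind of plain numerical-IK iteration the paper improves on, using a 2-link planar arm; the damping term is one classical workaround for the Jacobian-singularity problem, which the paper instead addresses by coupling the iteration with a UKF. Link lengths, damping, and step counts are illustrative assumptions.

    # Hypothetical sketch: damped least-squares numerical IK for a 2-link planar arm.
    import numpy as np

    LEN1, LEN2 = 1.0, 1.0                        # link lengths (assumed)

    def fk(q):                                   # forward kinematics: end-effector (x, y)
        return np.array([LEN1*np.cos(q[0]) + LEN2*np.cos(q[0]+q[1]),
                         LEN1*np.sin(q[0]) + LEN2*np.sin(q[0]+q[1])])

    def jacobian(q):
        return np.array([[-LEN1*np.sin(q[0]) - LEN2*np.sin(q[0]+q[1]), -LEN2*np.sin(q[0]+q[1])],
                         [ LEN1*np.cos(q[0]) + LEN2*np.cos(q[0]+q[1]),  LEN2*np.cos(q[0]+q[1])]])

    def ik_step(q, target, lam=0.1):
        J, err = jacobian(q), target - fk(q)
        # (J^T J + lam^2 I)^-1 J^T keeps the step finite near singular configurations
        dq = np.linalg.solve(J.T @ J + lam**2 * np.eye(2), J.T @ err)
        return q + dq

    q = np.array([0.3, 0.5])
    for _ in range(50):                          # plain iteration may need many steps
        q = ik_step(q, np.array([1.2, 0.8]))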

Object Detection Method on Vision Robot using Sensor Fusion (센서 융합을 이용한 이동 로봇의 물체 검출 방법)

  • Kim, Sang-Hoon
    • The KIPS Transactions: Part B / v.14B no.4 / pp.249-254 / 2007
  • A mobile robot with various types of sensors and a wireless camera is introduced. We show that this mobile robot can detect objects well by combining the results of active sensors with an image-processing algorithm. First, to detect objects, active sensors such as infrared and ultrasonic sensors are employed together, and the distance between the object and the robot is calculated in real time from the sensors' output; the difference between the measured and calculated values is less than 5%. We focus on detecting the object region well with an image-processing algorithm, because this gives robots the ability to work for humans. This paper suggests an effective visual detection system for moving objects using specified color and motion information. The proposed method includes an object extraction and definition process that uses color transformation and AWUPC computation to decide the existence of a moving object. Shape information and a signature algorithm are used to segment objects from the background regardless of shape changes. We assign weighting values to each result from the sensors and the camera, and the final results are combined into a single value representing the probability that an object is within the limited distance. The sensor fusion technique improves the detection rate by at least 7% over techniques using an individual sensor.
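
The abstract's weighted combination can be pictured as follows; the weights, sensor ranges, and mapping from distance to confidence are illustrative assumptions, not the paper's calibrated values.

    # Hypothetical sketch: fuse IR, ultrasonic, and camera evidence into one probability.
    def range_confidence(distance_m, max_range_m):
        """Map a distance reading to a [0, 1] confidence that an object is near."""
        return max(0.0, 1.0 - distance_m / max_range_m)

    def fuse(ir_dist, sonar_dist, camera_score,
             w_ir=0.3, w_sonar=0.3, w_cam=0.4):           # weights sum to 1
        return (w_ir * range_confidence(ir_dist, 2.0)     # IR: short range
                + w_sonar * range_confidence(sonar_dist, 5.0)
                + w_cam * camera_score)                   # score from the vision detector

    print(fuse(ir_dist=0.8, sonar_dist=1.1, camera_score=0.9))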

Design requirements of mediating device for total physical response - A protocol analysis of preschool children's behavioral patterns (체감형 학습을 위한 매개 디바이스의 디자인 요구사항 - 프로토콜 분석법을 통한 미취학 아동의 행동 패턴 분석)

  • Kim, Yun-Kyung;Kim, Hyun-Jeong;Kim, Myung-Suk
    • Science of Emotion and Sensibility / v.13 no.1 / pp.103-110 / 2010
  • TPR (Total Physical Response) is a representative new learning method for children's education. Today's approaches to TPR focus on signals from the user, which become input data in human-computer interaction, but the accuracy of sensing body signals (e.g., motion and voice) is not high enough to apply to an education system. To overcome these limits, we suggest a mediating interface device that can detect the user's motion through precise numerical values such as acceleration and angular speed. In addition, we derive new design requirements for the mediating device by analyzing children's behavior as human factors through ethnographic research and protocol analysis. As a result, we found that children are unskilled at physical control when they use objects, tend to lean on an object unconsciously through touch, and behave in more restricted ways while using objects. Therefore, a mediating device should satisfy new design requirements: it should compensate for unskilled handling and support familiar, natural physical activity.


Development of Driver's Emotion and Attention Recognition System using Multi-modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 운전자의 감정 및 주의력 인식 기술 개발)

  • Han, Cheol-Hun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.6 / pp.754-761 / 2008
  • As the automobile industry and its technologies develop, drivers tend to be more concerned about service matters than mechanical matters. For this reason, interest in recognizing human knowledge and emotion to create a safe and convenient driving environment is increasing. Recognition of human knowledge and emotion is an emotion-engineering technology, studied since the late 1980s, for providing people with human-friendly services. Emotion engineering analyzes people's emotions through their faces, voices, and gestures, so applying this technology to automobiles makes it possible to supply drivers with services suited to each driver's situation and to help them drive safely. Furthermore, accidents caused by careless driving or dozing off at the wheel can be prevented by recognizing the driver's gestures. The purpose of this paper is to develop a system that can recognize the state of a driver's emotion and attention for safe driving. First, we detect signals of the driver's emotion, sleepiness, and attention using bio-motion signals and build several databases. By analyzing these databases, we extract distinctive features of drivers' emotion, sleepiness, and attention, and fuse the results through a multi-modal method to build the system.
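
One common way to realize such multi-modal fusion is late fusion of per-modality classifier scores; the sketch below is an assumed illustration, with invented state labels and weights rather than the paper's databases or features.

    # Hypothetical sketch: weighted late fusion of per-modality driver-state scores.
    import numpy as np

    STATES = ["alert", "drowsy", "distracted"]           # illustrative labels

    def fuse_modalities(face_scores, voice_scores, motion_scores,
                        weights=(0.5, 0.2, 0.3)):
        stacked = np.vstack([face_scores, voice_scores, motion_scores])
        fused = np.average(stacked, axis=0, weights=weights)
        return STATES[int(fused.argmax())], fused

    state, scores = fuse_modalities([0.2, 0.7, 0.1],     # face classifier
                                    [0.3, 0.5, 0.2],     # voice classifier
                                    [0.1, 0.8, 0.1])     # body-motion classifier
    print(state)                                         # -> "drowsy"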

Real-time People Occupancy Detection by Camera Vision Sensor (카메라 비전 센서를 활용하는 실시간 사람 점유 검출)

  • Gil, Jong In;Kim, Manbae
    • Journal of Broadcast Engineering / v.22 no.6 / pp.774-784 / 2017
  • Occupancy sensors installed in buildings and households turn off the lights if the space is vacant. Currently, PIR (pyroelectric infrared) motion sensors are widely used. Recently, research using camera sensors has been carried out to overcome the drawback of PIR sensors, which cannot detect static people. If the trade-off between cost and performance is satisfied, camera sensors are expected to replace current PIRs. In this paper, we propose vision sensor-based occupancy detection composed of tracking, recognition, and detection. Our software is designed for real-time processing: in experiments, 14.5 fps is achieved on a 15 fps USB input, and the detection accuracy reaches 82.0%.
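
As a rough sketch of such a camera-based occupancy loop, the OpenCV fragment below flags motion with background subtraction; the threshold is invented, and a real system would add the paper's detection and recognition stages so that a motionless person still counts as occupancy.

    # Hypothetical sketch: motion-based occupancy flag from a USB camera (OpenCV).
    import cv2

    cap = cv2.VideoCapture(0)
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)                           # foreground (moving) pixels
        occupied = cv2.countNonZero(mask) > 500          # illustrative threshold
        cv2.imshow("foreground", mask)
        if cv2.waitKey(1) == 27:                         # ESC quits
            break
    cap.release()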

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • Kim, Tae-Woo;Kang, Yong-Seok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.3 / pp.53-60 / 2009
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is its primary technical issue. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to a variety of lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that each facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is accomplished in the Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among detected features. Finally, graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.
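
The Kalman-filter stage can be pictured with a standard constant-velocity model for one feature point; the noise values below are illustrative, and the paper's pupil constraint and Gabor-space search are not reproduced here.

    # Hypothetical sketch: constant-velocity Kalman filter for one facial feature.
    import numpy as np

    dt = 1.0                                             # one frame
    F = np.array([[1,0,dt,0],[0,1,0,dt],[0,0,1,0],[0,0,0,1]], float)  # x, y, vx, vy
    H = np.array([[1,0,0,0],[0,1,0,0]], float)           # only position is observed
    Q, R = np.eye(4)*1e-2, np.eye(2)*1.0                 # process / measurement noise
    x, P = np.zeros(4), np.eye(4)*10.0

    def step(x, P, z):
        x, P = F @ x, F @ P @ F.T + Q                    # predict next feature position
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
        x = x + K @ (z - H @ x)                          # correct with measurement z
        P = (np.eye(4) - K @ H) @ P
        return x, P

    for z in [np.array([100., 50.]), np.array([102., 51.]), np.array([104., 52.])]:
        x, P = step(x, P, z)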


Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • 박호식;정연숙;손동주;나상동;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2004.05b / pp.603-607 / 2004
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is its primary technical issue. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to a variety of lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that each facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is accomplished in the Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among detected features. Finally, graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.


Vest-type System on Machine Learning-based Algorithm to Detect and Predict Falls

  • Ho-Chul Kim;Ho-Seong Hwang;Kwon-Hee Lee;Min-Hee Kim
    • PNF and Movement / v.22 no.1 / pp.43-54 / 2024
  • Purpose: Falls among persons older than 65 years are a significant concern due to their frequency and severity. This study aimed to develop a vest-type embedded artificial intelligence (AI) system capable of detecting and predicting falls in various scenarios. Methods: We developed a vest-type embedded AI system to detect and predict falls in various directions and situations. To train the AI, we collected acceleration and gyroscope values from six-axis sensors attached at the seventh cervical and second sacral vertebrae of the user, positions chosen for accurate motion analysis of the human body. The model was constructed using a neural network-based prediction algorithm that anticipates the direction of falls from the collected pedestrian data. Results: We focused on developing a lightweight, efficient fall prediction model for integration into the embedded AI system, ensuring real-time operation. The accuracy of fall-occurrence and fall-direction prediction with the trained model was 89.0% and 78.8%, respectively; for the model quantized for embedded porting, it was 87.0% and 75.5%. Conclusion: The developed vest-type fall detection and prediction system with an embedded AI algorithm can provide real-time feedback to pedestrians in clinical settings and help proactively prepare for accidents.
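
As an assumed illustration of the model's shape (not the authors' architecture), the sketch below maps a window of six-axis IMU samples from the two sensor sites to separate fall-occurrence and fall-direction outputs, with a quantization step noted for embedded porting.

    # Hypothetical sketch: two-headed fall network over IMU windows (PyTorch).
    import torch
    import torch.nn as nn

    WINDOW, CHANNELS = 50, 12                 # 50 samples x (2 IMUs x 6 axes), assumed

    class FallNet(nn.Module):
        def __init__(self, directions=4):             # direction count assumed
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Flatten(), nn.Linear(WINDOW * CHANNELS, 64), nn.ReLU())
            self.fall_head = nn.Linear(64, 2)          # fall / no-fall
            self.dir_head = nn.Linear(64, directions)  # fall direction
        def forward(self, x):                          # x: (batch, WINDOW, CHANNELS)
            h = self.backbone(x)
            return self.fall_head(h), self.dir_head(h)

    model = FallNet()
    fall_logits, dir_logits = model(torch.rand(1, WINDOW, CHANNELS))
    # One way to quantize for an embedded port (dynamic int8 on the linear layers):
    # model_int8 = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)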

Eye Tracking Using Neural Network and Mean-shift (신경망과 Mean-shift를 이용한 눈 추적)

  • Kang, Sin-Kuk;Kim, Kyung-Tai;Shin, Yun-Hee;Kim, Na-Yeon;Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea CI / v.44 no.1 / pp.56-63 / 2007
  • In this paper, an eye-tracking method is presented that uses a neural network (NN) and the mean-shift algorithm to accurately detect and track a user's eyes against a cluttered background. In the proposed method, to deal with rigid head motion, the facial region is first obtained using a skin-color model and connected-component analysis. The eye regions are then localized using an NN-based texture classifier that discriminates the facial region into eye and non-eye classes, which enables the method to accurately detect users' eyes even if they wear glasses. Once localized, the eye regions are continuously and correctly tracked by the mean-shift algorithm. To assess the validity of the proposed method, it was applied to an interface system using eye movement and tested with a group of 25 users playing 'aligns' games. The results show that the system processes more than 30 frames/sec on a PC for 320×240 input images and provides user-friendly, convenient access to a computer in real time.
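
The mean-shift tracking stage can be sketched with OpenCV's cv2.meanShift over a color back-projection; the initial eye window below stands in for the output of the NN texture classifier, and all coordinates are invented.

    # Hypothetical sketch: mean-shift tracking of an eye window (OpenCV).
    import cv2

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    window = (140, 100, 40, 20)                      # (x, y, w, h) from the eye detector

    x, y, w, h = window
    hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.meanShift(backproj, window, crit)   # follow the eye region
        x, y, w, h = window
        cv2.imshow("eye", cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2))
        if cv2.waitKey(1) == 27:                            # ESC quits
            break
    cap.release()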