• Title/Summary/Keyword: Human Joint Behavior

Multi-camera-based 3D Human Pose Estimation for Close-Proximity Human-robot Collaboration in Construction

  • Sarkar, Sajib;Jang, Youjin;Jeong, Inbae
    • International conference on construction engineering and project management / 2022.06a / pp.328-335 / 2022
  • With the advance of robot capabilities and functionalities, construction robots that assist workers have been increasingly deployed on construction sites to improve safety, efficiency, and productivity. For close-proximity human-robot collaboration on construction sites, robots need to be aware of the context, especially construction workers' behavior, in real time to avoid collisions with workers. To recognize human behavior, most previous studies obtained 3D human poses using a single camera or an RGB-depth (RGB-D) camera. However, single-camera detection has limitations such as occlusions, detection failure, and sensor malfunction, and an RGB-D camera may suffer from interference from lighting conditions and surface materials. To address these issues, this study proposes a novel method of 3D human pose estimation that extracts the 2D location of each joint from multiple images captured at the same time from different viewpoints, fuses each joint's 2D locations, and estimates the 3D joint location. For higher accuracy, a probabilistic representation is used to extract the 2D location of the joints, treating each joint location extracted from an image as a noisy partial observation. The 3D human pose is then estimated by fusing the probabilistic 2D joint locations to maximize the likelihood. The proposed method was evaluated in both simulation and laboratory settings, and the results demonstrated the accuracy of the estimation and its feasibility in practice. This study contributes to ensuring human safety in close-proximity human-robot collaboration by providing a novel method of 3D human pose estimation.

  • PDF
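
The maximum-likelihood fusion step described in the abstract can be sketched as confidence-weighted linear triangulation: under a Gaussian noise assumption, each camera's 2D joint detection contributes two linear constraints on the 3D joint position, weighted by its detection confidence. This is a minimal illustration, not the authors' implementation; the calibrated projection matrices and per-view confidences are assumed inputs.

```python
import numpy as np

def triangulate_joint(proj_mats, points_2d, confidences):
    """Fuse 2D joint detections from multiple calibrated views into one
    3D point by confidence-weighted linear triangulation (DLT).

    proj_mats:   list of 3x4 camera projection matrices
    points_2d:   list of (u, v) pixel detections, one per view
    confidences: per-view detection confidence weights
    """
    rows = []
    for P, (u, v), w in zip(proj_mats, points_2d, confidences):
        # Each view contributes two linear constraints on the homogeneous
        # 3D point X: u * (P[2] @ X) = P[0] @ X, and similarly for v.
        rows.append(w * (u * P[2] - P[0]))
        rows.append(w * (v * P[2] - P[1]))
    A = np.stack(rows)
    # Least-squares solution: the right singular vector associated with
    # the smallest singular value, then dehomogenized.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

With noise-free detections from two or more views, the joint position is recovered exactly; with noisy detections, the confidence weights down-weight unreliable views.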

Deep learning-based Human Action Recognition Technique Considering the Spatio-Temporal Relationship of Joints (관절의 시·공간적 관계를 고려한 딥러닝 기반의 행동인식 기법)

  • Choi, Inkyu;Song, Hyok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.413-415 / 2022
  • Since human joints, as components of the human body, provide useful information for analyzing human behavior, many studies have been conducted on human action recognition using joint information. However, recognizing actions that change from moment to moment using only the information of each joint in isolation is a very complex problem. Therefore, an additional feature extraction method for learning and an algorithm that considers the current state in light of past states are needed. In this paper, we propose a human action recognition technique that considers the positional relationship between connected joints and the change in each joint's position over time. Using a pre-trained joint extraction model, the position information of each joint is obtained, and bone information is extracted using the difference vector between connected joints. In addition, a simplified neural network is constructed for the two types of inputs, and spatio-temporal features are extracted by adding an LSTM. Experiments on a dataset consisting of 9 actions confirmed that measuring action recognition accuracy with the temporal and spatial relationship features of each joint yields superior performance compared to using only single-joint information.

  • PDF
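
The bone-information extraction described in the abstract — difference vectors between connected joints, plus each joint's positional change over time — might look as follows. The five-joint skeleton and its connectivity are hypothetical placeholders, not the paper's skeleton definition, and the downstream network/LSTM stages are omitted.

```python
import numpy as np

# Hypothetical 5-joint skeleton: (parent, child) index pairs.
BONES = [(0, 1), (1, 2), (1, 3), (1, 4)]

def joint_and_bone_features(poses):
    """From a (T, J, 2) array of per-frame 2D joint positions, build the
    two input streams: raw joint positions and bone vectors (difference
    between each connected joint pair), plus the per-joint motion
    (difference of each joint's position between consecutive frames).
    """
    bones = np.stack([poses[:, c] - poses[:, p] for p, c in BONES], axis=1)
    motion = np.diff(poses, axis=0)  # (T-1, J, 2) temporal change per joint
    return poses, bones, motion
```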

Human Activity Recognition Using Spatiotemporal 3-D Body Joint Features with Hidden Markov Models

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.6 / pp.2767-2780 / 2016
  • Video-based human-activity recognition has become increasingly popular due to its prominent applications in a variety of fields such as computer vision, image processing, smart-home healthcare, and human-computer interaction. The essential goals of a video-based activity-recognition system include the provision of behavior-based information to enable functionality that proactively assists a person with his/her tasks. The target of this work is the development of a novel approach for human-activity recognition, whereby human-body-joint features extracted from depth videos are used. From silhouette images taken at every depth, the direction and magnitude features are first obtained from each connected body-joint pair so that they can later be augmented with the motion direction, as well as the magnitude features, of each joint in the next frame. A generalized discriminant analysis (GDA) is applied to make the spatiotemporal features more robust, followed by the feeding of the time-sequence features into a Hidden Markov Model (HMM) for the training of each activity. Lastly, all of the trained-activity HMMs are used for depth-video activity recognition.
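
The per-pair direction and magnitude features, augmented with each joint's next-frame motion, might be computed along these lines. The joint pairing and feature layout here are illustrative assumptions, and the GDA and HMM stages are omitted.

```python
import numpy as np

def pair_features(joints_t, joints_t1, pairs):
    """For each connected joint pair, compute the spatial direction
    (unit vector) and magnitude of the bone at frame t, augmented with
    the motion direction and magnitude of each joint into frame t+1.

    joints_t, joints_t1: (J, 3) arrays of 3D joint positions
    pairs: list of (i, j) connected joint indices
    """
    feats = []
    for i, j in pairs:
        bone = joints_t[j] - joints_t[i]
        mag = np.linalg.norm(bone)
        feats.append(np.concatenate([bone / mag, [mag]]))
    motion = joints_t1 - joints_t                  # per-joint displacement
    mmag = np.linalg.norm(motion, axis=1, keepdims=True)
    mdir = motion / np.where(mmag > 0, mmag, 1)    # avoid divide-by-zero
    return np.concatenate([np.concatenate(feats), mdir.ravel(), mmag.ravel()])
```

In the paper's pipeline such per-frame feature vectors would be made more discriminative with GDA and then quantized into observation sequences for the per-activity HMMs.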

A Robust Approach for Human Activity Recognition Using 3-D Body Joint Motion Features with Deep Belief Network

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.2 / pp.1118-1133 / 2017
  • Computer vision-based human activity recognition (HAR) has attracted considerable attention due to its applications in various fields such as smart-home healthcare for elderly people. A video-based activity recognition system aims, among other goals, to react to people's behavior so that it can proactively assist them with their tasks. A novel approach is proposed in this work for depth-video-based human activity recognition using joint-based motion features of depth body shapes and a Deep Belief Network (DBN). From depth video, the different body parts involved in human activities are first segmented by means of a trained random forest. Motion features representing the magnitude and direction of each joint in the next frame are then extracted. Finally, the features are used to train a DBN, which is later used for recognition. The proposed HAR approach showed superior performance over conventional approaches on private and public datasets, indicating a promising approach for practical applications in smartly controlled environments.

LSTM(Long Short-Term Memory)-Based Abnormal Behavior Recognition Using AlphaPose (AlphaPose를 활용한 LSTM(Long Short-Term Memory) 기반 이상행동인식)

  • Bae, Hyun-Jae;Jang, Gyu-Jin;Kim, Young-Hun;Kim, Jin-Pyung
    • KIPS Transactions on Software and Data Engineering / v.10 no.5 / pp.187-194 / 2021
  • Human behavioral recognition is the recognition of what a person is doing based on joint movements, and it draws on computer vision tasks used in image processing. Combined with deep learning and CCTV, human behavior recognition can serve as a safety-accident response service applied within safety management sites. Existing studies on behavioral recognition through deep-learning-based extraction of human joint keypoints are relatively scarce, and it has been difficult to monitor workers continuously and systematically at safety management sites. In this paper, to address these problems, we propose a method to recognize risk behavior using only joint keypoints and joint motion information. AlphaPose, one of the pose estimation methods, was used to extract the joint keypoints of body parts. The extracted joint keypoints were fed sequentially into a Long Short-Term Memory (LSTM) model to be learned as continuous data. After checking the behavioral recognition accuracy, it was confirmed that the accuracy for the "Lying Down" behavior was high.

Kinematic Modeling of Distal Radioulnar Joint for Human Forearm Rotation (인간의 전완 회전을 위한 원위 요척골 관절의 기구학적 모델링)

  • Yoon, Dukchan;Lee, Geon;Choi, Youngjin
    • The Journal of Korea Robotics Society / v.14 no.4 / pp.251-257 / 2019
  • This paper presents a kinematic model of human forearm rotation constructed from a spatial four-bar linkage. In particular, the circumduction of the distal ulna is modeled so that the position of the hand undergoes minimal displacement during forearm rotation from supination to pronation. To establish the model, the four joint types of the four-bar linkage are first assigned on reasonable grounds, and a spatial linkage with the URUU (Universal-Revolute-Universal-Universal) joint configuration is proposed. Kinematic analysis is conducted to show the behavior of the distal radio-ulna and to evaluate the angular displacements of all the joints. The simulation results finally reveal that the URUU spatial linkage can be substituted for the URUR (Universal-Revolute-Universal-Revolute) spatial linkage by means of a kinematic constraint.

An experimental study on the human upright standing posture exposed to longitudinal vibration

  • Shin, Young-Kyun;Arif Muhammad;Inooka Hikaru
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2002.10a / pp.77.2-77 / 2002
  • Human upright standing posture in the sagittal plane is studied when it is exposed to antero-posterior vibration. A two-link inverted pendulum model is considered, and its functional behavior is described in terms of the ankle and hip joints, according to which dominant joint provides the largest contribution to the corresponding human reactionary motion. The data are analyzed both in the time domain and in the frequency domain. Subjects behave as a non-rigid pendulum with a mass and a spring throughout the whole period of the platform motion. When vision was allowed, each body segment was more stabilized.

  • PDF
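
The two-link (ankle-hip) sagittal-plane model can be made concrete by computing the body's centre of mass from the two joint angles; the link lengths and masses below are illustrative placeholders, not values from the experiment.

```python
import numpy as np

def com_position(theta_ankle, theta_hip, l1=0.9, l2=0.6, m1=50.0, m2=30.0):
    """Sagittal-plane centre of mass of a two-link inverted pendulum
    (legs + trunk) pivoting at the ankle. Angles are measured from the
    vertical, in radians; lengths (m) and masses (kg) are placeholders.
    """
    # Hip position from the ankle; each link's CoM at half its length.
    p_hip = l1 * np.array([np.sin(theta_ankle), np.cos(theta_ankle)])
    c1 = 0.5 * p_hip
    th2 = theta_ankle + theta_hip                  # trunk angle from vertical
    c2 = p_hip + 0.5 * l2 * np.array([np.sin(th2), np.cos(th2)])
    return (m1 * c1 + m2 * c2) / (m1 + m2)
```

Tracking how this CoM moves relative to the ankle reveals whether the ankle or the hip strategy dominates the response to the platform motion.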

Body Segment Length and Joint Motion Range Restriction for Joint Errors Correction in FBX Type Motion Capture Animation based on Kinect Camera (키넥트 카메라 기반 FBX 형식 모션 캡쳐 애니메이션에서의 관절 오류 보정을 위한 인체 부위 길이와 관절 가동 범위 제한)

  • Jeong, Ju-heon;Kim, Sang-Joon;Yoon, Myeong-suk;Park, Goo-man
    • Journal of Broadcast Engineering / v.25 no.3 / pp.405-417 / 2020
  • Due to the popularization of Extended Reality, research is actively underway to reproduce human motion in real-time 3D animation. In particular, Microsoft developed the Kinect camera, with which 3D motion information can be obtained with simple operation and without the burden of dedicated facilities, and real-time animation can be generated by combining it with 3D formats such as FBX. Compared to marker-based motion capture systems, however, the Kinect has low accuracy due to its limited joint-estimation performance. In this paper, two algorithms are proposed to correct joint estimation errors in order to realize natural human motion in a Kinect-camera-based, FBX-format motion capture animation system. First, the position information of a person is obtained with the Kinect and a depth map is created; incorrect joint position values are corrected using human body-segment length constraints, and new rotation values are estimated. Second, pre-set joint motion range constraints are applied to the existing and estimated rotation values and implemented in FBX to eliminate abnormal behavior. The experiments showed improvements in the reproduced human motion, and errors were compared between the algorithms to demonstrate the superiority of the system.
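
The two corrections — enforcing known body-segment lengths and restricting joint values to a preset motion range — can be sketched as below. The skeleton hierarchy, segment lengths, and angle limits are hypothetical stand-ins, not the paper's calibrated values.

```python
import numpy as np

# Hypothetical skeleton: child joint -> (parent joint, segment length in m),
# listed parent-first so corrections propagate down the chain.
SEGMENTS = {1: (0, 0.25), 2: (1, 0.30)}

def enforce_segment_lengths(joints):
    """Correct noisy joint positions by moving each child joint along its
    current bone direction so the parent-child distance matches the known
    body-segment length.
    """
    joints = joints.copy()
    for child, (parent, length) in SEGMENTS.items():
        bone = joints[child] - joints[parent]
        norm = np.linalg.norm(bone)
        if norm > 0:
            joints[child] = joints[parent] + bone * (length / norm)
    return joints

def clamp_angle(angle, lo, hi):
    """Restrict an estimated joint rotation to its anatomical range."""
    return min(max(angle, lo), hi)
```

After the length correction, the bone directions are unchanged but every parent-child distance matches the constraint, and out-of-range rotation values are clipped before being written into the FBX animation.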

Motion-capture-based walking simulation of digital human adapted to laser-scanned 3D as-is environments for accessibility evaluation

  • Maruyama, Tsubasa;Kanai, Satoshi;Date, Hiroaki;Tada, Mitsunori
    • Journal of Computational Design and Engineering / v.3 no.3 / pp.250-265 / 2016
  • Owing to our rapidly aging society, accessibility evaluation to enhance the ease and safety of access to indoor and outdoor environments for the elderly and disabled is increasing in importance. Accessibility must be assessed not only from the general standard aspect but also in terms of physical and cognitive friendliness for users of different ages, genders, and abilities. Meanwhile, human behavior simulation has been progressing in the areas of crowd behavior analysis and emergency evacuation planning. However, in human behavior simulation, environment models represent only "as-planned" situations. In addition, a pedestrian model cannot generate the detailed articulated movements of various people of different ages and genders in the simulation. Therefore, the final goal of this research was to develop a virtual accessibility evaluation by combining realistic human behavior simulation using a digital human model (DHM) with "as-is" environment models. To achieve this goal, we developed an algorithm for generating human-like DHM walking motions, adapting their strides, turning angles, and footprints to laser-scanned 3D as-is environments including slopes and stairs. The DHM motion was generated based only on motion-capture (MoCap) data for flat walking. Our implementation constructed as-is 3D environment models from laser-scanned point clouds of real environments and enabled a DHM to walk autonomously in various environment models. The difference in joint angles between the DHM and MoCap data was evaluated. Demonstrations of our environment modeling and walking simulation in indoor and outdoor environments including corridors, slopes, and stairs are illustrated in this study.