• Title/Abstract/Keyword: Human Joint Behavior

Search results: 52 items (processing time: 0.025 s)

Multi-camera-based 3D Human Pose Estimation for Close-Proximity Human-robot Collaboration in Construction

  • Sarkar, Sajib;Jang, Youjin;Jeong, Inbae
    • International Conference Proceedings / The 9th International Conference on Construction Engineering and Project Management / pp.328-335 / 2022
  • With the advance of robot capabilities and functionalities, construction robots assisting construction workers have been increasingly deployed on construction sites to improve safety, efficiency and productivity. For close-proximity human-robot collaboration in construction sites, robots need to be aware of the context, especially construction worker's behavior, in real-time to avoid collision with workers. To recognize human behavior, most previous studies obtained 3D human poses using a single camera or an RGB-depth (RGB-D) camera. However, single-camera detection has limitations such as occlusions, detection failure, and sensor malfunction, and an RGB-D camera may suffer from interference from lighting conditions and surface material. To address these issues, this study proposes a novel method of 3D human pose estimation by extracting 2D location of each joint from multiple images captured at the same time from different viewpoints, fusing each joint's 2D locations, and estimating the 3D joint location. For higher accuracy, the probabilistic representation is used to extract the 2D location of the joints, considering each joint location extracted from images as a noisy partial observation. Then, this study estimates the 3D human pose by fusing the probabilistic 2D joint locations to maximize the likelihood. The proposed method was evaluated in both simulation and laboratory settings, and the results demonstrated the accuracy of estimation and the feasibility in practice. This study contributes to ensuring human safety in close-proximity human-robot collaboration by providing a novel method of 3D human pose estimation.
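The fusion step this abstract describes, treating each camera's 2D joint detection as a noisy partial observation and estimating the 3D joint location that maximizes the likelihood, can be sketched as follows. Under an assumed isotropic-Gaussian noise model with known projection matrices, the ML estimate reduces to a confidence-weighted linear (DLT) triangulation; the two-camera setup and the weights below are illustrative, a simplification of the paper's probabilistic formulation:

```python
import numpy as np

def triangulate_joint(P_list, uv_list, w_list):
    """Weighted linear (DLT) triangulation of one joint.

    P_list : list of 3x4 camera projection matrices
    uv_list: list of (u, v) pixel detections of the joint
    w_list : per-view confidence weights (inverse noise variance)
    Returns the 3D joint position minimizing the weighted algebraic
    reprojection error -- the ML estimate under isotropic Gaussian noise.
    """
    rows = []
    for P, (u, v), w in zip(P_list, uv_list, w_list):
        rows.append(np.sqrt(w) * (u * P[2] - P[0]))
        rows.append(np.sqrt(w) * (v * P[2] - P[1]))
    A = np.stack(rows)
    # Homogeneous least squares: the right singular vector with the
    # smallest singular value solves A X = 0.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy calibrated cameras observing the same joint.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # 1 m to the side
X_true = np.array([0.2, 0.1, 2.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
X_est = triangulate_joint([P1, P2], [uv1, uv2], [1.0, 1.0])
```

With noise-free observations the estimate recovers the true joint position exactly; with real detections, down-weighting occluded or uncertain views is what makes the multi-camera fusion robust.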


관절의 시·공간적 관계를 고려한 딥러닝 기반의 행동인식 기법 (Deep learning-based Human Action Recognition Technique Considering the Spatio-Temporal Relationship of Joints)

  • 최인규;송혁
    • Proceedings of the Korea Institute of Information and Communication Engineering Conference / 2022 Spring Conference / pp.413-415 / 2022
  • Human joints are the components that make up the human body, and joint information can serve as useful information for analyzing human behavior; accordingly, many studies on joint-based action recognition have been conducted. However, recognizing constantly changing human actions using only independent joint information is a very complex problem. Therefore, a method of extracting supplementary information for training and an algorithm that judges the current state based on past states are needed. This paper proposes an action recognition technique that considers the positional relationship between connected joints and the change of each joint's position over time. A pre-trained joint extraction model is used to obtain the position of each joint, and skeleton (bone) information is extracted as the difference vector between connected joints. A simplified neural network is then constructed for the two types of input, and an LSTM is added to extract spatio-temporal features. Experiments on a dataset of nine actions confirmed that measuring action recognition accuracy with the spatio-temporal relationship features of joints and bones yields superior performance compared to using single-joint information alone.
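The bone-extraction step above, computing a difference vector between each pair of connected joints and pairing it with per-joint motion over time, is straightforward to sketch. The 5-joint chain and edge list below are illustrative, not the dataset's actual skeleton topology:

```python
import numpy as np

# Illustrative skeleton: (parent, child) joint index pairs.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4)]

def bone_vectors(joints):
    """joints: (T, J, 2) array of 2D joint positions over T frames.
    Returns (T, len(EDGES), 2) difference vectors (child - parent),
    i.e. the 'bone' stream fed to the network alongside the joints."""
    return np.stack([joints[:, c] - joints[:, p] for p, c in EDGES], axis=1)

def joint_motion(joints):
    """Frame-to-frame displacement of every joint: the temporal cue
    the LSTM models.  Shape (T-1, J, 2)."""
    return joints[1:] - joints[:-1]

np.random.seed(0)
T, J = 8, 5
joints = np.random.rand(T, J, 2)   # stand-in for pose-model output
bones = bone_vectors(joints)       # spatial (bone) stream
motion = joint_motion(joints)      # temporal stream
```

The two streams correspond to the paper's two input types: the bone vectors encode the spatial relationship of connected joints, while the per-frame displacements carry the temporal change the LSTM consumes.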


Human Activity Recognition Using Spatiotemporal 3-D Body Joint Features with Hidden Markov Models

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 6 / pp.2767-2780 / 2016
  • Video-based human-activity recognition has become increasingly popular due to the prominent corresponding applications in a variety of fields such as computer vision, image processing, smart-home healthcare, and human-computer interaction. The essential goals of a video-based activity-recognition system include the provision of behavior-based information to enable functionality that proactively assists a person with his/her tasks. The target of this work is the development of a novel approach for human-activity recognition, whereby human-body-joint features extracted from depth videos are used. From the depth silhouette images, the direction and magnitude features are first obtained for each connected body-joint pair, and they are later augmented with the motion direction and magnitude features of each joint in the next frame. A generalized discriminant analysis (GDA) is applied to make the spatiotemporal features more robust, followed by the feeding of the time-sequence features into a Hidden Markov Model (HMM) for the training of each activity. Lastly, all of the trained-activity HMMs are used for depth-video activity recognition.
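The per-joint motion features described above, a direction and a magnitude computed from each joint's displacement into the next frame, can be sketched as follows. The joint count and toy trajectory are illustrative, and the GDA and HMM training stages are omitted:

```python
import numpy as np

def motion_features(joints):
    """joints: (T, J, 3) 3D body-joint positions over T frames.
    For each joint, the displacement into the next frame is split into
    a scalar magnitude and a unit direction vector, giving the
    spatiotemporal feature stream that (after GDA) feeds the HMMs."""
    disp = joints[1:] - joints[:-1]                   # (T-1, J, 3)
    mag = np.linalg.norm(disp, axis=-1)               # (T-1, J)
    # Guard against division by zero for stationary joints.
    direction = disp / np.maximum(mag[..., None], 1e-8)
    return mag, direction

np.random.seed(0)
T, J = 10, 15
joints = np.cumsum(np.random.rand(T, J, 3), axis=0)   # toy joint trajectory
mag, direction = motion_features(joints)
```

Separating magnitude from direction makes the representation insensitive to overall movement speed while still capturing where each joint is heading, which is what the time-sequence HMMs then model per activity.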

A Robust Approach for Human Activity Recognition Using 3-D Body Joint Motion Features with Deep Belief Network

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 2 / pp.1118-1133 / 2017
  • Computer vision-based human activity recognition (HAR) has become very popular due to its applications in various fields such as smart-home healthcare for elderly people. A video-based activity recognition system aims, among other goals, to react to people's behavior so that the system can proactively assist them with their tasks. A novel approach is proposed in this work for depth-video-based human activity recognition using joint-based motion features of depth body shapes and a Deep Belief Network (DBN). From depth video, the different body parts involved in human activities are first segmented by means of a trained random forest. Motion features representing the magnitude and direction of each joint in the next frame are then extracted. Finally, the features are used to train a DBN, which is later used for recognition. The proposed HAR approach showed superior performance over conventional approaches on private and public datasets, indicating a prominent approach for practical applications in smartly controlled environments.

AlphaPose를 활용한 LSTM(Long Short-Term Memory) 기반 이상행동인식 (LSTM(Long Short-Term Memory)-Based Abnormal Behavior Recognition Using AlphaPose)

  • 배현재;장규진;김영훈;김진평
    • KIPS Transactions on Software and Data Engineering / Vol. 10, No. 5 / pp.187-194 / 2021
  • Human action recognition is the task of recognizing what action a person is performing based on joint movements; computer vision techniques used in image processing were employed for this purpose. Action recognition can be applied within safety management sites as a safety-incident response service combining deep learning and CCTV. Previous research on action recognition through deep-learning-based extraction of human joint keypoints is relatively scarce, and continuously and systematically managing workers at safety management sites has also been difficult. To address these problems, this paper proposes a method that recognizes dangerous behavior using only joint keypoints and joint movement information. AlphaPose, a pose estimation method, is used to extract the joint keypoints of body parts. The extracted joint keypoints are fed sequentially into an LSTM (Long Short-Term Memory) model and trained as continuous data. Evaluation of action recognition accuracy confirmed that the "Lying Down" action was recognized with high accuracy.
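The sequence-construction step above, feeding extracted keypoints to an LSTM frame by frame, can be sketched as follows. The window length, the 17-keypoint COCO-style layout, and the sliding stride are assumptions for illustration; AlphaPose itself and the LSTM training are omitted:

```python
import numpy as np

N_KEYPOINTS = 17   # COCO-style keypoint layout (assumption)
WINDOW = 30        # frames per LSTM input sequence (assumption)

def to_sequences(keypoints):
    """keypoints: (T, N_KEYPOINTS, 2) per-frame 2D keypoints from a
    pose estimator such as AlphaPose.  Flattens each frame and slices
    the stream into overlapping fixed-length windows -- the
    (batch, time, features) layout an LSTM classifier expects."""
    T = keypoints.shape[0]
    flat = keypoints.reshape(T, -1)                      # (T, 34)
    windows = [flat[i:i + WINDOW] for i in range(T - WINDOW + 1)]
    return np.stack(windows)                             # (T-WINDOW+1, WINDOW, 34)

np.random.seed(0)
stream = np.random.rand(100, N_KEYPOINTS, 2)   # stand-in for AlphaPose output
batch = to_sequences(stream)
```

Each window is one training sample for the LSTM; labels such as "Lying Down" would be attached per window.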

인간의 전완 회전을 위한 원위 요척골 관절의 기구학적 모델링 (Kinematic Modeling of Distal Radioulnar Joint for Human Forearm Rotation)

  • 윤덕찬;이건;최영진
    • The Journal of Korea Robotics Society / Vol. 14, No. 4 / pp.251-257 / 2019
  • This paper presents a kinematic model of human forearm rotation constructed as a spatial four-bar linkage. In particular, the circumduction of the distal ulna is modeled so that the position of the hand undergoes minimal displacement during forearm rotation from supination to pronation. To establish the model, the four joint types of the four-bar linkage are first assigned on reasonable anatomical grounds, and a spatial linkage with a URUU (Universal-Revolute-Universal-Universal) joint configuration is proposed. A kinematic analysis is conducted to show the behavior of the distal radioulnar joint and to evaluate the angular displacements of all the joints. The simulation results reveal that, under a kinematic constraint, the URUU spatial linkage can be substituted for the URUR (Universal-Revolute-Universal-Revolute) spatial linkage.

An experimental study on the human upright standing posture exposed to longitudinal vibration

  • Shin, Young-Kyun;Arif, Muhammad;Inooka, Hikaru
    • Proceedings of the Institute of Control, Robotics and Systems Conference / ICCAS 2002 / pp.77.2-77 / 2002
  • Human upright standing posture in the sagittal plane is studied when the body is exposed to antero-posterior vibration. A two-link inverted pendulum model is considered, and its functional behavior is described in terms of the ankle and hip joints, according to which joint provides the largest contribution to the corresponding human reactionary motion. The data are analyzed in both the time domain and the frequency domain. Subjects behave as a non-rigid pendulum with a mass and a spring throughout the whole period of the platform motion. When vision was allowed, each body segment was more stabilized.


키넥트 카메라 기반 FBX 형식 모션 캡쳐 애니메이션에서의 관절 오류 보정을 위한 인체 부위 길이와 관절 가동 범위 제한 (Body Segment Length and Joint Motion Range Restriction for Joint Errors Correction in FBX Type Motion Capture Animation based on Kinect Camera)

  • 정주헌;김상준;윤명석;박구만
    • Journal of Broadcast Engineering / Vol. 25, No. 3 / pp.405-417 / 2020
  • With the popularization of extended reality, research on rendering human motion as real-time 3D animation is being actively conducted. In particular, since Microsoft developed the Kinect camera, 3D motion information can be acquired through simple operation without burdensome equipment, enabling real-time animation generation in combination with 3D formats such as FBX. However, Kinect's joint estimation performance falls short of marker-based motion capture systems, showing low accuracy. This paper therefore proposes two algorithms that correct joint estimation errors to produce natural human movement in a Kinect-based FBX motion capture animation system. First, human position information is acquired with Kinect and a depth map is generated; incorrect joint position values are corrected using body-segment length constraints, and new rotation values are estimated. Second, preset joint motion range constraints are applied to the existing and estimated rotation values and implemented in FBX to remove abnormal motions. Experiments confirmed that human motion is improved, and comparing errors between the algorithms demonstrated the superiority of the system.
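The two corrections described above can be sketched jointly: re-scale each estimated bone to its known segment length, and clamp each joint rotation to a preset motion range. The segment length, the range, and the 1-DoF angle representation below are simplifications for illustration; the paper operates on full FBX rotations:

```python
import numpy as np

def fix_bone_length(parent, child, length):
    """Move `child` along the parent->child direction so the segment
    has its known anatomical length (body-segment length constraint)."""
    v = child - parent
    n = np.linalg.norm(v)
    if n < 1e-8:
        return child  # degenerate estimate; leave unchanged
    return parent + v * (length / n)

def clamp_joint_angle(angle_deg, low, high):
    """Restrict a joint rotation to its preset motion range,
    removing anatomically impossible poses."""
    return float(np.clip(angle_deg, low, high))

# Noisy Kinect-style estimate: upper arm measured 20% too long.
shoulder = np.array([0.0, 1.4, 0.0])
elbow_est = np.array([0.36, 1.4, 0.0])   # assumed true length: 0.30 m
elbow = fix_bone_length(shoulder, elbow_est, 0.30)

# Elbow flexion clamped to an assumed 0..150 degree range.
flexion = clamp_joint_angle(170.0, 0.0, 150.0)
```

Applied along the kinematic chain from the root joint outward, the length fix keeps the skeleton's proportions constant across frames, while the range clamp discards the jitter that produces visibly impossible poses.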

Motion-capture-based walking simulation of digital human adapted to laser-scanned 3D as-is environments for accessibility evaluation

  • Maruyama, Tsubasa;Kanai, Satoshi;Date, Hiroaki;Tada, Mitsunori
    • Journal of Computational Design and Engineering / Vol. 3, No. 3 / pp.250-265 / 2016
  • Owing to our rapidly aging society, accessibility evaluation to enhance the ease and safety of access to indoor and outdoor environments for the elderly and disabled is increasing in importance. Accessibility must be assessed not only from the general standard aspect but also in terms of physical and cognitive friendliness for users of different ages, genders, and abilities. Meanwhile, human behavior simulation has been progressing in the areas of crowd behavior analysis and emergency evacuation planning. However, in human behavior simulation, environment models represent only "as-planned" situations. In addition, a pedestrian model cannot generate the detailed articulated movements of various people of different ages and genders in the simulation. Therefore, the final goal of this research was to develop a virtual accessibility evaluation by combining realistic human behavior simulation using a digital human model (DHM) with "as-is" environment models. To achieve this goal, we developed an algorithm for generating human-like DHM walking motions, adapting its strides, turning angles, and footprints to laser-scanned 3D as-is environments including slopes and stairs. The DHM motion was generated based only on motion-capture (MoCap) data for flat walking. Our implementation constructed as-is 3D environment models from laser-scanned point clouds of real environments and enabled a DHM to walk autonomously in various environment models. The difference in joint angles between the DHM and the MoCap data was evaluated. Demonstrations of our environment modeling and walking simulation in indoor and outdoor environments including corridors, slopes, and stairs are illustrated in this study.