• Title/Summary/Keyword: 3-D pose

Search Results: 337

Multi-View 3D Human Pose Estimation Based on Transformer (트랜스포머 기반의 다중 시점 3차원 인체자세추정)

  • Seoung Wook Choi;Jin Young Lee;Gye Young Kim
    • Smart Media Journal
    • /
    • v.12 no.11
    • /
    • pp.48-56
    • /
    • 2023
  • Three-dimensional human pose estimation is used in sports, motion recognition, and special effects for video media. Among the various approaches, multi-view 3D human pose estimation is essential for precise estimation in complex real-world environments. However, existing multi-view models rely on 3D feature maps and therefore suffer from high time complexity. This paper proposes a method that extends an existing Transformer-based single-view, multi-frame model with lower time complexity to multi-view 3D human pose estimation. To expand to multiple viewpoints, the proposed method first forms an 8-dimensional joint coordinate by concatenating the 2D coordinates of each of the 17 joints across four viewpoints, obtained with the 2D human pose detector CPN (Cascaded Pyramid Network). These coordinates are then converted into 17×32 data by patch embedding and fed into the Transformer model. The MLP (Multi-Layer Perceptron) block that outputs the 3D pose thus updates the estimates for all four viewpoints simultaneously at every iteration. Compared with Zheng[5]'s method, the proposed model uses 48.9% of the parameters, reduces MPJPE (Mean Per Joint Position Error) by 20.6 mm (43.8%), and trains more than 20 times faster per epoch. A data-shaping sketch follows this entry.

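  A rough, hypothetical sketch of the data shaping described in the abstract above (not the authors' code): 2D joint coordinates from four viewpoints are concatenated into 8-dimensional per-joint vectors and linearly embedded into a 17×32 token matrix for the Transformer. The layer names and the use of PyTorch are assumptions.

      # Sketch (not the authors' implementation): concatenate the 2D joints of
      # four viewpoints into 8-D per-joint vectors, then patch-embed to 17x32.
      import torch
      import torch.nn as nn

      NUM_JOINTS = 17   # joints per pose (stated in the abstract)
      NUM_VIEWS = 4     # viewpoints (stated in the abstract)
      EMBED_DIM = 32    # per-joint embedding width, giving 17x32 tokens

      class MultiViewJointEmbedding(nn.Module):
          def __init__(self):
              super().__init__()
              # Each joint carries 2 coordinates from each of 4 views -> 8 values.
              self.patch_embed = nn.Linear(2 * NUM_VIEWS, EMBED_DIM)

          def forward(self, joints_2d):
              # joints_2d: (batch, NUM_VIEWS, NUM_JOINTS, 2), e.g. from a 2D detector such as CPN.
              b = joints_2d.shape[0]
              # Reorder to (batch, joints, views, 2) and flatten the view axis -> 8-D per joint.
              x = joints_2d.permute(0, 2, 1, 3).reshape(b, NUM_JOINTS, 2 * NUM_VIEWS)
              # Linear patch embedding -> (batch, 17, 32) tokens for the Transformer backbone.
              return self.patch_embed(x)

      # Usage: tokens = MultiViewJointEmbedding()(torch.randn(1, NUM_VIEWS, NUM_JOINTS, 2))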

Human Motion Tracking based on 3D Depth Point Matching with Superellipsoid Body Model (타원체 모델과 깊이값 포인트 매칭 기법을 활용한 사람 움직임 추적 기술)

  • Kim, Nam-Gyu
    • Journal of Digital Contents Society
    • /
    • v.13 no.2
    • /
    • pp.255-262
    • /
    • 2012
  • Human motion tracking is receiving attention from many research areas, including human-computer interaction, video conferencing, surveillance analysis, and game or entertainment applications. Over the last decade, various tracking technologies have been demonstrated and refined for these applications, drawing on real-time computer vision and image processing, advanced man-machine interfaces, and so on. In this paper, we introduce cost-effective, real-time human motion tracking algorithms based on matching depth-image 3D points against a given superellipsoid body representation. The body model is built with a parametric volume modeling method based on superellipsoids and consists of 18 articulated joints. For more accurate estimation, we first compute an initial inverse kinematics solution from classified body-part information, and this initial pose is then refined with a 3D point matching algorithm. A sketch of the superellipsoid inside-outside function follows this entry.
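  For reference, the superellipsoid representation mentioned above is commonly written through its inside-outside function; the sketch below is the generic form with assumed parameter names (semi-axes a1, a2, a3 and shape exponents e1, e2), not the paper's code. A depth point with F < 1 lies inside the body part and F ≈ 1 lies on its surface, which is what a point-matching cost can exploit.

      # Generic superellipsoid inside-outside function (illustrative, assumed names).
      def superellipsoid_f(x, y, z, a1, a2, a3, e1, e2):
          """Return F(x, y, z): F < 1 inside, F == 1 on the surface, F > 1 outside."""
          xy = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
          return xy + abs(z / a3) ** (2.0 / e1)

      # A simple matching cost could sum |F - 1| over the depth points assigned to a
      # body part, evaluated in that part's local coordinate frame.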

Evidence gathering for line based recognition by real plane

  • Lee, Jae-Kyu;Ryu, Moon-Wook;Lee, Jang-Won
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.195-199
    • /
    • 2008
  • We present an approach to detecting real planes for line-based recognition and pose estimation. Given 3D line segments, we set up a reference plane for each line pair and measure the normal distance from each endpoint to that plane. Normal distances are then measured between the remaining line endpoints and the reference plane to decide whether these lines are coplanar with respect to it. After this coplanarity test, we run a visibility test using z-buffer values to prune ambiguous planes from the set of reference planes. We applied the algorithm to real images, and the results proved useful for evidence fusion and probabilistic verification supporting line-based recognition as well as 3D pose estimation. A point-to-plane distance sketch follows this entry.

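  A minimal sketch of the coplanarity check described above, assuming the reference plane is given by a point and a normal vector; the function names and the tolerance are illustrative, not the authors' code.

      # Normal (perpendicular) distance from line endpoints to a reference plane,
      # used as a coplanarity test. Names and the tolerance are assumptions.
      import numpy as np

      def point_plane_distance(p, plane_point, plane_normal):
          """Unsigned normal distance from 3D point p to the plane (point, normal)."""
          n = plane_normal / np.linalg.norm(plane_normal)
          return abs(np.dot(p - plane_point, n))

      def is_coplanar(endpoints, plane_point, plane_normal, tol=0.01):
          """True if every endpoint lies within tol of the reference plane."""
          return all(point_plane_distance(p, plane_point, plane_normal) <= tol
                     for p in endpoints)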

Development of a Robot arm capable of recognizing 3-D object using stereo vision

  • Kim, Sungjin;Park, Seungjun;Park, Hongphyo;Won, Sangchul
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2001.10a
    • /
    • pp.128.6-128
    • /
    • 2001
  • In this paper, we present a sensing and control methodology for a robot system designed to grasp an object and move it to a target point. A stereo vision system is employed to compute a depth map representing distance from the camera. In the stereo system we use a center-referenced projection to represent the discrete match space for stereo correspondence. This center-referenced disparity space contains occlusion points in addition to the match points, which we exploit to create a concise representation of correspondence and occlusion. From the depth map we then recover the target object's pose and position in 3D space using model-based recognition. A depth-from-disparity sketch follows this entry.

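  For context on how a stereo depth map encodes distance from the camera, the generic depth-from-disparity relation for a rectified pair is Z = f·B/d; the sketch below is this textbook relation only and does not reproduce the paper's center-referenced disparity space.

      # Generic depth-from-disparity for a rectified stereo pair (illustrative only).
      def depth_from_disparity(disparity_px, focal_px, baseline_m):
          """Z = f * B / d, with disparity d and focal length f in pixels, baseline B in metres."""
          if disparity_px <= 0:
              return float('inf')  # zero disparity corresponds to a point at infinity
          return focal_px * baseline_m / disparity_px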

Real-Time Hand Pose Tracking and Finger Action Recognition Based on 3D Hand Modeling (3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작 인식)

  • Suk, Heung-Il;Lee, Ji-Hong;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.12
    • /
    • pp.780-788
    • /
    • 2008
  • Modeling hand poses and tracking their movement is one of the challenging problems in computer vision. There are two typical approaches to reconstructing hand poses in 3D, depending on the number of cameras used: capturing images from multiple cameras or a stereo camera, or capturing images from a single camera. The former is relatively limited because of the environmental constraints of setting up multiple cameras. In this paper we propose a method for reconstructing 3D hand poses from a 2D image sequence captured by a single camera, using Belief Propagation in a graphical model, and for recognizing a finger-clicking motion using a hidden Markov model. We define a graphical model whose hidden nodes represent the joints of a hand and whose observable nodes carry features extracted from the 2D input image sequence. To track hand poses in 3D, we use a Belief Propagation algorithm, which provides a robust and unified framework for inference in a graphical model. From the estimated 3D hand pose we extract each finger's motion, which is then fed into a hidden Markov model; to recognize natural finger actions, the movements of all fingers are considered when classifying a single finger's action. We applied the proposed method to a virtual keypad system, achieving a recognition rate of 94.66% on 300 test samples. The generic message-update rule used in Belief Propagation is sketched after this entry.
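  For reference, the standard sum-product Belief Propagation message update used for inference in such a graphical model (generic form; the paper's specific potential functions are not reproduced here) is

      m_{i \to j}(x_j) = \sum_{x_i} \phi_i(x_i, y_i)\, \psi_{ij}(x_i, x_j) \prod_{k \in \mathcal{N}(i) \setminus \{j\}} m_{k \to i}(x_i),

  where \phi_i measures the compatibility of hidden joint state x_i with its image observation y_i, \psi_{ij} couples neighbouring joints, and the belief at node i is proportional to \phi_i(x_i, y_i) \prod_{k \in \mathcal{N}(i)} m_{k \to i}(x_i).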

A Study On Three-dimensional Optimized Face Recognition Model : Comparative Studies and Analysis of Model Architectures (3차원 얼굴인식 모델에 관한 연구: 모델 구조 비교연구 및 해석)

  • Park, Chan-Jun;Oh, Sung-Kwun;Kim, Jin-Yul
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.64 no.6
    • /
    • pp.900-911
    • /
    • 2015
  • In this paper, a 3D face recognition model is designed using a polynomial-based RBFNN (Radial Basis Function Neural Network) and a PNN (Polynomial Neural Network), and its recognition rate is evaluated. Existing 2D face recognition models can suffer degraded recognition rates under external conditions such as changes in image brightness that affect facial features, so 3D face recognition with a 3D scanner is used to overcome this weakness. In the preprocessing stage, 3D face images captured under varying poses are transformed into frontal images by pose compensation. The depth data of the face shape are extracted using multiple point signatures, and the depth information of the whole face is obtained using the tip of the nose as a reference point. Parameter optimization is carried out with both ABC (Artificial Bee Colony) and PSO (Particle Swarm Optimization) for effective training and recognition. The experimental face data set was built from images of students and researchers in the IC&CI Lab of Suwon University. Using these 3D face images, recognition performance is evaluated and compared for the two model types as well as for the point signature method on two kinds of depth data. A minimal PSO update sketch follows this entry.
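  A minimal sketch of the standard PSO update used for parameter optimization (generic textbook form; the inertia weight, acceleration coefficients, and fitness procedure below are assumptions, not the paper's settings).

      # Standard PSO velocity/position update (illustrative; not the paper's code).
      import numpy as np

      def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
          """One update of particle positions x and velocities v, both arrays of shape (n, d)."""
          r1 = np.random.rand(*x.shape)
          r2 = np.random.rand(*x.shape)
          v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          return x + v_new, v_new

      # After each step, re-evaluate the fitness (e.g. the recognition error of the
      # RBFNN/PNN model), update each particle's pbest and the swarm's gbest, and repeat.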

Design and Evaluation of Intelligent Helmet Display System (지능형 헬멧시현시스템 설계 및 시험평가)

  • Hwang, Sang-Hyun
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.45 no.5
    • /
    • pp.417-428
    • /
    • 2017
  • In this paper, we describe the architectural design, unit-component hardware design, and core software design (Helmet Pose Tracking Software and Terrain Elevation Data Correction Software) of the IHDS (Intelligent Helmet Display System), and report the results of unit and integration testing. Following the trend in recent helmet display systems, the design specification includes a 3D map display, an FLIR (Forward Looking Infra-Red) display, hybrid helmet pose tracking, a visor-reflection binocular optical system, an NVC (Night Vision Camera) display, and a lightweight composite helmet shell. In particular, we proposed unique design concepts such as automatic correction of altitude error in the 3D map data, high-precision image registration, a multi-color lighting optical system, a transmissive image-emitting surface using a diffractive optical element, a tracking camera that minimizes the latency of helmet pose estimation, and air pockets for fixing the helmet to the head. After completing prototypes of all system components, unit tests and system integration tests were performed to verify functionality and performance.

Motion Capture System using Integrated Pose Sensors (융합센서 기반의 모션캡처 시스템)

  • Kim, Byung-Yul;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.4
    • /
    • pp.65-74
    • /
    • 2010
  • To solve the problems of traditional optical motion capture systems, such as interference among multiple patches and the complexity of sensor and patch placement, this paper proposes a new motion capture system composed of a single camera and multiple motion sensors. Each motion sensor consists of an accelerometer and a gyroscope, which detect the motion of the patched body part and the orientation (roll, pitch, and yaw) of that motion, respectively. Although the image provides only the 2D positions of the patches, the orientation information acquired by the motion sensors yields the 3D pose of the patches through simple equations. Since the proposed system uses a minimal number of sensors to determine the relative pose of a patch, it is easy to install on a moving body and can be used economically in various applications. The performance and advantages of the proposed system were demonstrated experimentally. A roll-pitch-yaw rotation sketch follows this entry.
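  One reading of the "simple equations" mentioned above (a generic Z-Y-X Euler convention; the paper's exact convention is not stated): the roll, pitch, and yaw reported by a motion sensor define a rotation matrix which, together with the 2D patch position from the camera, fixes the patch's 3D orientation.

      # Generic roll-pitch-yaw (Z-Y-X Euler) rotation matrix; illustrative assumption.
      import numpy as np

      def rotation_from_rpy(roll, pitch, yaw):
          """Return R = Rz(yaw) @ Ry(pitch) @ Rx(roll); angles in radians."""
          cr, sr = np.cos(roll), np.sin(roll)
          cp, sp = np.cos(pitch), np.sin(pitch)
          cy, sy = np.cos(yaw), np.sin(yaw)
          Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
          Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
          Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
          return Rz @ Ry @ Rx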

An Efficient Camera Calibration Method for Head Pose Tracking (머리의 자세를 추적하기 위한 효율적인 카메라 보정 방법에 관한 연구)

  • Park, Gyeong-Su;Im, Chang-Ju;Lee, Gyeong-Tae
    • Journal of the Ergonomics Society of Korea
    • /
    • v.19 no.1
    • /
    • pp.77-90
    • /
    • 2000
  • The aim of this study is to develop and evaluate an efficient camera calibration method for vision-based head tracking. Tracking head movements is important in the design of an eye-controlled human/computer interface, and a vision-based head tracking system was proposed to accommodate the user's head movements in such an interface. We proposed an efficient camera calibration method for accurately tracking the 3D position and orientation of the user's head and evaluated its performance. Experimental error analysis showed that the proposed method provides a more accurate and stable camera pose (i.e., position and orientation) than the conventional direct linear transformation (DLT) method commonly used for camera calibration. The results of this study can be applied to head tracking for eye-controlled human/computer interfaces and to virtual reality technology. A DLT calibration sketch follows this entry.

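  For comparison, a minimal sketch of the conventional DLT calibration that the study benchmarks against (standard textbook form; the proposed method itself is not reproduced here): each 3D-2D correspondence contributes two rows to a homogeneous linear system, and the 3x4 projection matrix is recovered from the SVD null vector.

      # Conventional DLT estimation of a 3x4 projection matrix (textbook form);
      # this is the baseline named in the abstract, not the proposed method.
      import numpy as np

      def dlt_projection_matrix(points_3d, points_2d):
          """Estimate P (3x4) from at least six 3D-2D correspondences."""
          rows = []
          for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
              rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
              rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
          A = np.asarray(rows, dtype=float)
          # Solution: right singular vector of A with the smallest singular value.
          _, _, vt = np.linalg.svd(A)
          return vt[-1].reshape(3, 4)

      # Camera pose (position and orientation) then follows by decomposing P into
      # intrinsic and extrinsic parts, e.g. via an RQ factorization of its left 3x3 block.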

Dimensional Analysis for the Front Chassis Module in the Auto Industry (자동차 프런트 샤시 모듈의 좌표 해석)

  • 이동목;양승한
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.21 no.8
    • /
    • pp.50-56
    • /
    • 2004
  • The directional ability of an automobile has an influence on driver directly, and hence it must be given most priority. Alignment factors of automobile such as the camber, caster and toe directly affect the directional ability of a vehicle. The above mentioned factors are determined by the pose of interlinks in the assembly of an automobile front chassis module. Measuring the position of center point of ball joints in the front lower arm is very difficult. A method to determine this position is suggested in this paper. Pose estimation for front chassis module and dimensional evaluation to find the rotational characteristics of front lower arm were developed based on fundamental geometric techniques. To interpret the inspection data obtained for front chassis module, 3-D best fit method is needed. The best fit method determines the relationship between the nominal design coordinate system and the corresponding feature coordinate system. The least squares method based on singular value decomposition is used in this paper.