• Title/Summary/Keyword: 3-D pose


Real-time Human Pose Estimation using RGB-D images and Deep Learning

  • Rim, Beanbonyka;Sung, Nak-Jun;Ma, Jun;Choi, Yoo-Joo;Hong, Min
    • Journal of Internet Computing and Services / v.21 no.3 / pp.113-121 / 2020
  • Human Pose Estimation (HPE), which localizes the human body joints, has high potential for high-level applications in computer vision. The main challenges of real-time HPE are occlusion, illumination change, and diversity of pose appearance. A single RGB image can be fed into an HPE framework to reduce computation cost, using a depth-independent device such as a common camera, webcam, or phone camera. However, HPE based on a single RGB image cannot overcome the above challenges because of the inherent limitations of color and texture. Depth information, by contrast, lets an HPE framework detect human body parts in 3D coordinates and can help address these challenges, but it requires a depth-dependent device, which imposes space constraints and is costly. Moreover, the results of depth-based HPE are less reliable because pose initialization is required and frame tracking is less stable. This paper therefore proposes a new HPE method that is robust to self-occlusion. Many body parts can be occluded by other body parts; this paper focuses only on head self-occlusion. The new method combines an RGB image-based HPE framework with a depth information-based HPE framework. We evaluated the performance of the proposed method with the COCO Object Keypoint Similarity (OKS) metric. By taking advantage of both the RGB image-based and depth information-based HPE methods, our RGB-D HPE method achieved an mAP of 0.903 and an mAR of 0.938, outperforming both the RGB-based and the depth-based HPE.
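The COCO Object Keypoint Similarity score used for evaluation above can be sketched in a few lines; the function name and array layout here are illustrative, not the authors' implementation:

```python
import numpy as np

def oks(pred, gt, visibility, area, k):
    """Object Keypoint Similarity between predicted and ground-truth
    keypoints (N x 2 arrays). `k` holds per-keypoint falloff constants,
    `area` is the ground-truth object area, `visibility` flags labeled joints."""
    d2 = np.sum((pred - gt) ** 2, axis=1)      # squared pixel distances
    e = np.exp(-d2 / (2.0 * area * k ** 2))    # per-keypoint similarity in (0, 1]
    vis = visibility > 0
    return float(e[vis].sum() / vis.sum())     # average over labeled keypoints
```

mAP and mAR are then obtained by thresholding OKS over a range of values (0.50 to 0.95 in COCO) and averaging over detections.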

Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing (수직이착륙 무인항공기 자동 착륙을 위한 영상기반 항법)

  • Lee, Sang-Hoon;Song, Jin-Mo;Bae, Jong-Sue
    • Journal of the Korea Institute of Military Science and Technology / v.18 no.3 / pp.226-233 / 2015
  • Pose estimation is an important operation for many vision tasks. This paper presents a method for estimating the camera pose using a known landmark, for the purpose of autonomous vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) landing. The proposed method takes a distinctive approach to the pose estimation problem: we combine extrinsic parameters from known and unknown 3-D (three-dimensional) feature points and an inertial estimate of the camera's 6-DOF (degree-of-freedom) pose into one linear inhomogeneous equation. This allows us to use singular value decomposition (SVD) to neatly solve the resulting optimization problem. We present experimental results demonstrating that the proposed method can estimate the camera's 6-DOF pose with ease of implementation.
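The SVD step for a linear inhomogeneous system of the kind described above can be sketched as a generic minimum-norm least-squares solver (not the paper's exact formulation):

```python
import numpy as np

def svd_solve(A, b):
    """Minimum-norm least-squares solution of the inhomogeneous system
    A x = b via singular value decomposition (Moore-Penrose pseudo-inverse)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Invert only the significant singular values to stay numerically stable.
    tol = max(A.shape) * np.finfo(float).eps * s.max()
    s_inv = np.where(s > tol, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))
```

The same call handles overdetermined, exactly determined, and rank-deficient systems, which is why SVD is a convenient tool for stacking heterogeneous constraints into one equation.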

Camera Exterior Parameters Based on Vector Inner Production Application: Absolute Orientation (벡터내적 기반 카메라 외부 파라메터 응용 : 절대표정)

  • Chon, Jae-Choon;Sastry, Shankar
    • Journal of Institute of Control, Robotics and Systems / v.14 no.1 / pp.70-74 / 2008
  • In the field of camera motion research, it is widely held that the position (movement) and pose (rotation) of a camera are correlated and cannot be separated independently. A new equation based on the vector inner product is proposed here to separate position and pose independently. We prove that position and pose are not correlated and apply the equation to the estimation of camera exterior parameters using a real image and 3D data.

Object Recognition-based Global Localization for Mobile Robots (이동로봇의 물체인식 기반 전역적 자기위치 추정)

  • Park, Soon-Yong;Park, Mignon;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.3 no.1 / pp.33-41 / 2008
  • Based on object recognition technology, we present a new global localization method for robot navigation. To this end, we model an indoor environment with a stereo camera using the following visual cues: view-based image features for object recognition and their 3D positions for object pose estimation. We also use the depth information along the horizontal centerline of the image, through which the optical axis passes, which is similar to the data of a 2D laser range finder. We can therefore build a hybrid local node for a topological map, composed of a metric map of the indoor environment and an object location map. Based on such modeling, we suggest a coarse-to-fine strategy for estimating the global pose of a mobile robot: the coarse pose is obtained by object recognition and SVD-based least-squares fitting, and the refined pose is then estimated with a particle filtering algorithm. Real-world experiments show that the proposed method is an effective vision-based global localization algorithm.
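The coarse-to-fine refinement can be illustrated with a minimal particle filter step; the state layout (x, y, theta) and the Gaussian observation model are simplifying assumptions, not the paper's exact likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation, motion,
                         noise=0.1, obs_sigma=0.5):
    """One predict-weight-resample cycle refining a coarse (x, y, theta) pose.
    `particles` is N x 3; `observation` is a direct (noisy) pose measurement,
    standing in for the object-recognition likelihood of the paper."""
    # Predict: apply odometry motion plus diffusion noise.
    particles = particles + motion + rng.normal(0.0, noise, particles.shape)
    # Weight: Gaussian likelihood of the observation given each particle.
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-d2 / (2.0 * obs_sigma ** 2))
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Starting the particle set around the coarse SVD pose and iterating this cycle concentrates the particles near the true pose.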


Augmented Reality Service Based on Object Pose Prediction Using PnP Algorithm

  • Kim, In-Seon;Jung, Tae-Won;Jung, Kye-Dong
    • International Journal of Advanced Culture Technology / v.9 no.4 / pp.295-301 / 2021
  • Digital media technology is gradually developing alongside Fourth Industrial Revolution convergence technologies and mobile devices. The combination of deep learning and augmented reality can provide more convenient and lively services through the interaction of 3D virtual images with the real world. We combine deep learning-based pose prediction with augmented reality technology. We predict the eight vertices of the object's bounding box in the image. Using the predicted eight 2D vertices (x, y), the eight 3D mesh vertices (x, y, z), and the intrinsic parameters of the smartphone camera, we compute the extrinsic parameters of the camera with the PnP algorithm. We then calculate the distance to the object and its degree of rotation from the extrinsic parameters and apply them to AR content. Our method provides services in a web environment, making it highly accessible to users and easy to maintain. Because it provides augmented reality services using consumers' smartphone cameras, it can be applied to various business fields.
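The PnP step can be sketched with a plain Direct Linear Transform; a production system would more likely call an iterative solver such as OpenCV's solvePnP, so this is only a self-contained illustration:

```python
import numpy as np

def dlt_pnp(obj_pts, img_pts, K):
    """Direct Linear Transform estimate of camera extrinsics [R|t] from
    n >= 6 3D-2D correspondences and intrinsic matrix K."""
    # Normalize pixel coordinates with the intrinsics.
    pts_h = np.column_stack([img_pts, np.ones(len(img_pts))])
    xn = (np.linalg.inv(K) @ pts_h.T).T
    # Each correspondence contributes two rows of the homogeneous system.
    A = []
    for (X, Y, Z), (u, v, _) in zip(obj_pts, xn):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    # Flip sign so the first point lies in front of the camera.
    if P[2] @ np.append(obj_pts[0], 1.0) < 0:
        P = -P
    # P = scale * [R|t]; recover the scale by orthogonalizing the 3x3 block.
    U, s, Vt2 = np.linalg.svd(P[:, :3])
    R = U @ Vt2
    t = P[:, 3] / s.mean()
    return R, t
```

With the extrinsics in hand, the distance to the object is the norm of `t` and its rotation can be read off `R`, as the abstract describes.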

Head Pose Estimation by using Morphological Property of Disparity Map

  • Jun, Se-Woong;Park, Sung-Kee;Lee, Moon-Key
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.735-739 / 2005
  • This paper presents a new system for estimating the head pose of a human in an interactive indoor environment with dynamic illumination change and a large working space. The main idea is a new morphological feature for estimating the head angle from a stereo disparity map. When a disparity map is obtained from a stereo camera, a matching confidence value can be derived from the correlation of the stereo images. Applying a threshold to this confidence value yields the specific morphological shape of the disparity map, and through analysis of this morphological property the head pose can be estimated. The algorithm is simple and fast compared with others that apply facial templates, 2D or 3D models, or optical flow, and it can automatically segment and estimate head pose over a wide range of head motion without the manual initialization that optical-flow systems require. Experiments yielded reliable head-orientation data with real-time performance.


Pose-invariant Face Recognition using a Cylindrical Model and Stereo Camera (원통 모델과 스테레오 카메라를 이용한 포즈 변화에 강인한 얼굴인식)

  • 노진우;홍정화;고한석
    • Journal of KIISE: Software and Applications / v.31 no.7 / pp.929-938 / 2004
  • This paper proposes a pose-invariant face recognition method using a cylindrical model and a stereo camera. We consider two cases: a single input image and a stereo input image. In the single-image case, we normalize the face's yaw pose using the cylindrical model; in the stereo case, we normalize the face's pitch pose using the cylindrical model with the pitch angle previously estimated from the stereo geometry. Because two images are acquired at the same time, we can also increase overall recognition performance by decision-level fusion. In representative experiments, the yaw-pose transform raised the recognition rate from 61.43% to 94.76%, which is as good as that of a more complicated 3D face model. The stereo camera system raised the recognition rate by a further 5.24% for upper face poses, and decision-level fusion by another 3.34%.

Visual Servoing of Robotic Manipulators for Moving Objects (동적 물체에 대한 로봇 매니퓰레이터의 Visual Servoing)

  • Sim, Kwee-Bo;Oh, Seung-Wook
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.1
    • /
    • pp.15-24
    • /
    • 1996
  • This paper presents a new visual servoing method for controlling the pose (position and orientation) of a robotic manipulator so that it can grasp a 3-D moving object whose initial pose and motion information are unknown, using a stereo camera mounted on the manipulator's end-effector. To drive the manipulator from its current pose to the desired pose, we use the image Jacobian, described by the differential transform, which relates changes in the image feature points to changes in the object's pose with respect to the camera. A simple PD controller is adopted to track the desired pose. Finally, the effectiveness of the proposed method is confirmed by computer simulations.
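The image Jacobian (interaction matrix) of a point feature and the PD control law can be sketched as follows; the gains and the stacked pseudo-inverse formulation are illustrative assumptions, not the paper's exact controller:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian of a normalized point feature (x, y) at depth Z,
    mapping camera velocity (vx, vy, vz, wx, wy, wz) to feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def pd_servo(features, depths, desired, prev_err, kp=0.5, kd=0.1):
    """PD visual-servo law: stack the Jacobians of all features and map the
    image error (plus its difference) back to a camera velocity command."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    err = (features - desired).ravel()
    cmd = -np.linalg.pinv(L) @ (kp * err + kd * (err - prev_err))
    return cmd, err
```

At least three non-collinear point features are needed to constrain all six velocity components; the derivative term damps the response to the moving target.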


Point Clouds Compression Using Pose Deformation (포즈 변형을 이용한 포인트 클라우드 압축)

  • Lee, Sol;Park, Byung-Seo;Park, Jung-Tak;Seo, Young-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.47-48 / 2021
  • This paper compresses large 3D data sequences. For each frame of a 3D data sequence, a 3D skeleton is extracted via pose estimation, the point cloud is rigged (bound) to the skeleton, and the cloud is then deformed into the same posture as the next frame. The deformed point cloud is compared with the actual next-frame point cloud, and the points are classified into three groups: points present in both data sets, points present only in the actual next frame, and points present only in the deformed data. By storing only the latter two groups and discarding the points shared by both, the 3D sequence data can be compressed.
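The three-way classification at the heart of this delta-encoding scheme can be sketched with voxelized set operations; the voxel size and key layout are assumptions for illustration:

```python
import numpy as np

def classify_points(deformed, actual, voxel=0.01):
    """Split two point clouds into (shared, actual-only, deformed-only) sets
    by comparing voxelized coordinates: only the two difference sets need to
    be stored to reconstruct the next frame from the deformed prediction."""
    def keys(pts):
        # Quantize each point to an integer voxel index and hash it as a tuple.
        return {tuple(v) for v in np.floor(np.asarray(pts) / voxel).astype(int)}
    d, a = keys(deformed), keys(actual)
    return d & a, a - d, d - a
```

The compression ratio then depends on how well the skeleton-driven deformation predicts the next frame: the better the prediction, the larger the shared set that never has to be stored.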


A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System (센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식)

  • Jo, HyungGi;Cho, Hae Min;Lee, Seongwon;Kim, Euntai
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.87-93 / 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when the robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end on both RGB images and 3D point cloud information; we generate a new input that combines RGB and range information. After training, the relocalization system outputs the sensor pose corresponding to each new input. In most cases, however, a mobile robot navigation system receives successive sensor measurements, so to improve localization performance the CNN output is used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.