• Title/Summary/Keyword: camera pose estimation


Error Quantification of Photogrammetric 6DOF Pose Estimation (사진계측기반 6자유도 포즈 예측의 오차 정량화)

  • Kim, Sang-Jin; You, Heung-Cheol; Reu, Taekyu
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.41 no.5 / pp.350-356 / 2013
  • Photogrammetry has been widely used to measure important physical quantities in aerospace applications because it is a remote, non-contact measurement method. In this study, we analyzed the photogrammetric error that can occur in six-degrees-of-freedom (6DOF) analysis between coordinate systems with a single camera. An error analysis program was developed and validated using a geometric problem converted from the imaging process. We inferred that the statistics of the estimated camera pose needed for 6DOF analysis are normally distributed, and quantified the photogrammetric error using the estimated population standard deviation.
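The error-quantification idea in this abstract — treat repeated pose estimates as normally distributed and report the estimated population standard deviation — can be sketched as follows. This is not the paper's program; the noise level and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical repeated estimates of one pose parameter (e.g. camera
# yaw in degrees); in the paper these would come from the
# photogrammetric error-analysis program, not a simulation.
true_yaw = 30.0
measurement_noise = 0.05                    # assumed imaging noise (deg)
samples = true_yaw + rng.normal(0.0, measurement_noise, size=1000)

# Estimate of the population standard deviation (Bessel's correction,
# ddof=1), used to quantify the photogrammetric pose error.
pose_error_sigma = samples.std(ddof=1)

# A 95% interval for the yaw under the normality assumption
ci = (samples.mean() - 1.96 * pose_error_sigma,
      samples.mean() + 1.96 * pose_error_sigma)
print(pose_error_sigma, ci)
```

With real data, the normality assumption itself would first be checked (the abstract says it was inferred) before quoting the interval.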

High-quality Texture Extraction for Point Clouds Reconstructed from RGB-D Images (RGB-D 영상으로 복원한 점 집합을 위한 고화질 텍스쳐 추출)

  • Seo, Woong; Park, Sang Uk; Ihm, Insung
    • Journal of the Korea Computer Graphics Society / v.24 no.3 / pp.61-71 / 2018
  • When triangular meshes are generated from point clouds in global space, reconstructed through camera pose estimation against captured RGB-D streams, the quality of the resulting meshes improves as more triangles are used. However, 3D reconstructed models beyond a certain size begin to suffer from unsightly artifacts due to the insufficient precision of RGB-D sensors, as well as from significant memory requirements and rendering costs. In this paper, to generate 3D models appropriate for real-time applications, we propose an effective technique that extracts high-quality textures for moderate-sized meshes from the captured colors associated with the reconstructed point sets. In particular, we show that, via a simple method based on the mapping between the 3D global space resulting from camera pose estimation and the 2D texture space, textures can be generated effectively for 3D models reconstructed from captured RGB-D image streams.
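The core mapping this abstract relies on — from the reconstructed 3D global space, through an estimated camera pose, into a 2D image/texture space — is standard pinhole projection. A minimal sketch, with illustrative intrinsics (not taken from the paper):

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy, cx, cy) of an RGB-D sensor
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

def project_to_texture(point_world, R, t):
    """Map a 3D point from the reconstructed global space into the 2D
    image space of one captured frame (world -> camera -> pixel).
    R, t are the estimated world-to-camera pose for that frame."""
    p_cam = R @ point_world + t
    if p_cam[2] <= 0:                  # behind the camera: no texel here
        return None
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]            # (u, v) pixel coordinate

# Identity pose: a point straight ahead maps to the principal point
uv = project_to_texture(np.array([0.0, 0.0, 2.0]), np.eye(3), np.zeros(3))
print(uv)   # (319.5, 239.5)
```

Looking up the captured color at `(u, v)` for each texel is then the texture-extraction step; the paper's contribution is doing this well for moderate-sized meshes, which this sketch does not attempt.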

Registration System of 3D Footwear data by Foot Movements (발의 움직임 추적에 의한 3차원 신발모델 정합 시스템)

  • Jung, Da-Un; Seo, Yung-Ho; Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.6 / pp.24-34 / 2007
  • Driven by the growth of IT and changes in daily life, application systems that make information easy to access have been developed. In this paper, we propose an application system that registers a 3D footwear model using a monocular camera. In general, human motion analysis has focused on body movement; this system instead investigates a new method based on foot movement. This paper presents the system's processing pipeline and experimental results. The system is divided into a 2D image analysis stage and a 3D pose estimation stage. To project the 3D shoe model data onto the 2D foot plane, we construct a pipeline of foot tracking, a projection expression, and pose estimation. For foot tracking, we propose a method that finds fixed points from the characteristics of the foot, and we propose a geometric expression relating 2D and 3D coordinates using a monocular camera without camera calibration. We built the application system, measured the distance error, and confirmed that registration works well.

An Image-based Augmented Reality System for Multiple Users using Multiple Markers (다수 마커를 활용한 영상 기반 다중 사용자 증강현실 시스템)

  • Moon, Ji won; Park, Dong woo; Jung, Hyun suk; Kim, Young hun; Hwang, Sung Soo
    • Journal of Korea Multimedia Society / v.21 no.10 / pp.1162-1170 / 2018
  • This paper presents an augmented reality system for multiple users. The proposed system performs image-based pose estimation of each user, and each user's pose is shared with the others via a network server. For camera-based pose estimation, we install multiple markers in a predetermined space and select the marker with the best appearance. The marker is detected by corner-point detection, and for robust pose estimation, the marker's corner points are tracked by an optical-flow tracking algorithm. Experimental results show that the proposed system successfully provides an augmented reality application to multiple users even when users move rapidly and some of the markers are occluded.
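The corner-tracking step described here is classical Lucas-Kanade optical flow. A minimal single-window, single-level sketch (real marker trackers use the pyramidal variant, e.g. in OpenCV; the synthetic frames below are illustrative):

```python
import numpy as np

def lk_step(I, J, corner, win=7):
    """One Lucas-Kanade step: estimate the displacement of `corner`
    (x, y) from frame I to frame J using a (2*win+1)^2 window, by
    solving the 2x2 normal equations of the brightness-constancy
    constraint. A sketch of optical-flow tracking of marker corners."""
    x, y = corner
    ys = slice(y - win, y + win + 1)
    xs = slice(x - win, x + win + 1)
    Iy, Ix = np.gradient(I)            # spatial image gradients
    It = J - I                         # temporal difference
    ix, iy, it = Ix[ys, xs].ravel(), Iy[ys, xs].ravel(), It[ys, xs].ravel()
    A = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])
    b = -np.array([ix @ it, iy @ it])
    return np.linalg.solve(A, b)       # estimated (dx, dy)

# Synthetic frames: a smooth pattern translated by (+0.3, -0.2) pixels
yy, xx = np.mgrid[0:64, 0:64].astype(float)
def pattern(x, y):
    return np.sin(0.3 * x) + np.cos(0.25 * y)
I = pattern(xx, yy)
J = pattern(xx - 0.3, yy + 0.2)        # content moves by dx=+0.3, dy=-0.2
d = lk_step(I, J, corner=(32, 32))
print(d)   # approximately [0.3, -0.2]
```

Tracking the four corners this way between frames, instead of re-detecting the marker every frame, is what makes the pose estimate robust to motion blur and partial occlusion.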

A New Calibration of 3D Point Cloud using 3D Skeleton (3D 스켈레톤을 이용한 3D 포인트 클라우드의 캘리브레이션)

  • Park, Byung-Seo; Kang, Ji-Won; Lee, Sol; Park, Jung-Tak; Choi, Jang-Hwan; Kim, Dong-Wook; Seo, Young-Ho
    • Journal of Broadcast Engineering / v.26 no.3 / pp.247-257 / 2021
  • This paper proposes a new technique for calibrating a multi-view RGB-D camera system using a 3D (three-dimensional) skeleton. To calibrate a multi-view camera system, consistent feature points are required, and these must be accurate to obtain a high-accuracy calibration result. We use the human skeleton as the feature points, since it can be easily obtained with state-of-the-art pose estimation algorithms. We propose an RGB-D-based calibration algorithm that uses the joint coordinates of the 3D skeleton obtained through pose estimation as feature points. Because the body information captured by each camera may be incomplete, the skeletons predicted from it may also be incomplete. After efficiently integrating many incomplete skeletons into one skeleton, the multi-view cameras can be calibrated by using the integrated skeleton to obtain a camera transformation matrix. To increase calibration accuracy, multiple skeletons are used for optimization through temporal iteration. We demonstrate through experiments that a multi-view camera system can be calibrated using a large number of incomplete skeletons.
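The "camera transformation matrix from corresponding joints" step is, at its core, a rigid-alignment problem solvable in closed form with the Kabsch/SVD method. A sketch on synthetic joints (this stands in for the paper's calibration step; the skeleton data and pose below are made up):

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t with R @ src_i + t ~= dst_i
    from corresponding 3D joint positions (Kabsch algorithm via SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: keep det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic skeleton joints in camera A's frame, and the same joints in
# camera B's frame under a known ground-truth relative pose
rng = np.random.default_rng(1)
joints_a = rng.normal(size=(15, 3))                 # 15 joints
angle = np.deg2rad(40.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
joints_b = joints_a @ R_true.T + t_true

R_est, t_est = rigid_transform(joints_a, joints_b)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

The paper's additional machinery (integrating incomplete skeletons, temporal optimization) feeds better correspondences into exactly this kind of alignment.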

The Estimation of the Transform Parameters Using the Pattern Matching with 2D Images (2차원 영상에서 패턴매칭을 이용한 3차원 물체의 변환정보 추정)

  • Cho, Taek-Dong; Lee, Ho-Young; Yang, Sang-Min
    • Journal of the Korean Society for Precision Engineering / v.21 no.7 / pp.83-91 / 2004
  • The determination of camera position and orientation from known correspondences between 3D reference points and their images is known as pose estimation in computer vision, or space resection in photogrammetry. This paper discusses estimation of transform parameters using a pattern-matching method with 2D images only. In general, 3D reference points or lines are needed to find the 3D transform parameters, but this method is applied without them: it uses only two images to find the transform parameters between them. The algorithm was simulated using Visual C++ on Windows 98.
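Recovering a transform between two images from matched locations alone can be illustrated with a least-squares 2D affine fit. This is a generic sketch of the two-image idea, not the paper's pattern-matching algorithm; the point sets and transform are invented:

```python
import numpy as np

def estimate_affine(pts1, pts2):
    """Least-squares 2x3 affine transform A mapping pts1 -> pts2,
    i.e. transform parameters recovered from two images' matched
    locations only, with no 3D reference points or lines."""
    n = len(pts1)
    X = np.hstack([pts1, np.ones((n, 1))])      # homogeneous coordinates
    A, *_ = np.linalg.lstsq(X, pts2, rcond=None)
    return A.T                                  # [R_part | t_part]

# Matched points related by a known rotation + translation
theta = np.deg2rad(15.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([3.0, -1.0])
pts1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 0.5]])
pts2 = pts1 @ R.T + t
A = estimate_affine(pts1, pts2)
print(np.allclose(A[:, :2], R), np.allclose(A[:, 2], t))
```

With noisy matches the same least-squares fit averages the error over all correspondences, which is why more than the minimal three matches are normally used.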

Head Pose Estimation by using Morphological Property of Disparity Map

  • Jun, Se-Woong; Park, Sung-Kee; Lee, Moon-Key
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2005.06a / pp.735-739 / 2005
  • This paper presents a new system for estimating the head pose of a person in an interactive indoor environment with dynamic illumination changes and a large working space. The main idea is a new morphological feature for estimating head angle from a stereo disparity map. When a disparity map is obtained from a stereo camera, a matching confidence value can be derived from the correlation of the stereo images. Applying a threshold to this confidence value yields the specific morphological shape of the disparity map, and through analysis of this morphological property, the head pose can be estimated. The algorithm is simple and fast compared with approaches based on facial templates, 2D or 3D models, or optical flow. Our system can automatically segment the head and estimate its pose over a wide range of head motion, without the manual initialization that optical-flow systems require. Experiments produced reliable head-orientation data with real-time performance.
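One simple way to turn "shape of the thresholded disparity map" into an angle is the principal-axis orientation from second-order image moments. This is a rough stand-in for the paper's morphological analysis (its actual feature is not specified in the abstract); the synthetic blob below is illustrative:

```python
import numpy as np

def angle_from_disparity(disp, confidence, thresh=0.5):
    """Threshold the stereo-matching confidence, then estimate the
    in-plane orientation (radians) of the remaining disparity blob
    from its disparity-weighted second-order moments."""
    mask = confidence > thresh
    ys, xs = np.nonzero(mask)
    w = disp[mask]
    cx, cy = np.average(xs, weights=w), np.average(ys, weights=w)
    mu20 = np.average((xs - cx) ** 2, weights=w)
    mu02 = np.average((ys - cy) ** 2, weights=w)
    mu11 = np.average((xs - cx) * (ys - cy), weights=w)
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

# Synthetic blob: an ellipse tilted 30 degrees, uniform disparity
yy, xx = np.mgrid[0:100, 0:100].astype(float)
a = np.deg2rad(30.0)
u = (xx - 50) * np.cos(a) + (yy - 50) * np.sin(a)
v = -(xx - 50) * np.sin(a) + (yy - 50) * np.cos(a)
inside = (u / 30) ** 2 + (v / 10) ** 2 <= 1.0
disp = inside.astype(float)
conf = inside.astype(float)
angle = np.degrees(angle_from_disparity(disp, conf))
print(angle)   # close to 30
```

Because it needs only the blob statistics, this kind of estimator has the "simple and fast, no initialization" character the abstract claims for the morphological approach.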


Development of Human Following Method of Mobile Robot Using TRT Pose (TRT Pose를 이용한 모바일 로봇의 사람 추종 기법)

  • Choi, Jun-Hyeon; Joo, Kyeong-Jin; Yun, Sang-Seok; Kim, Jong-Wook
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.6 / pp.281-287 / 2020
  • In this paper, we propose a method by which a mobile robot estimates a person's walking direction and follows them using TRT (TensorRT) Pose, a deep-learning-based motion recognition model. The mobile robot measures individual movements by recognizing key points on the person's pelvis and determines the direction in which the person is trying to move. Using this information and the robot-to-person distance, the mobile robot can follow the person stably while keeping a safe distance. TRT Pose extracts only keypoint information, which prevents privacy issues while the robot's camera records video. To validate the proposed technique, experiments were carried out in which a person walks away from or toward the mobile robot in a zigzag pattern, and the robot successfully follows at the prescribed distance.
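The "follow while keeping a safe distance" behavior can be sketched as a proportional controller on the pelvis keypoint and the measured distance. The function, gains, and sign conventions below are illustrative assumptions, not the paper's controller:

```python
def follow_command(pelvis_px, depth_m, img_width=640,
                   target_dist=1.0, k_lin=0.8, k_ang=0.002):
    """Compute (linear, angular) velocity commands so a robot follows a
    person at `target_dist` metres. `pelvis_px` is the pelvis keypoint
    column (as a pose estimator like TRT Pose would report it) and
    `depth_m` the measured robot-person distance. Gains are made up."""
    # steer toward the keypoint: error from image centre, in pixels
    angular = -k_ang * (pelvis_px - img_width / 2)
    # drive to hold the prescribed following distance
    linear = k_lin * (depth_m - target_dist)
    return linear, angular

# person centred and at the target distance -> robot holds still
print(follow_command(pelvis_px=320, depth_m=1.0))
```

A person drifting right in the image produces a turn command, and a person walking away produces a forward command, which together yield the zigzag-following behavior described in the experiment.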

RGB Camera-based Real-time 21 DoF Hand Pose Tracking (RGB 카메라 기반 실시간 21 DoF 손 추적)

  • Choi, Junyeong; Park, Jong-Il
    • Journal of Broadcast Engineering / v.19 no.6 / pp.942-956 / 2014
  • This paper proposes a real-time hand pose tracking method using a monocular RGB camera. Hand tracking is highly ambiguous because a hand has many degrees of freedom. To reduce this ambiguity, the proposed method adopts a step-by-step estimation scheme: palm pose estimation, finger yaw motion estimation, and finger pitch motion estimation, performed in consecutive order. Assuming the hand to be a plane, the proposed method utilizes a planar hand model, which facilitates hand model regeneration: the model is modified to fit the current user's hand, improving the robustness and accuracy of tracking. The proposed method works in real time and does not require GPU-based processing, so it can be applied to various platforms, including mobile devices such as Google Glass. The effectiveness and performance of the proposed method are verified through various experiments.

B-snake Based Lane Detection with Feature Merging and Extrinsic Camera Parameter Estimation (특징점 병합과 카메라 외부 파라미터 추정 결과를 고려한 B-snake기반 차선 검출)

  • Ha, Sangheon; Kim, Gyeonghwan
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.1 / pp.215-224 / 2013
  • This paper proposes a lane detection algorithm robust to bumpy or slope-changing roads, achieved by estimating the extrinsic camera parameters that represent the pose of the camera mounted on the car. The proposed algorithm assumes that the two lanes are parallel with a predefined width. Lane detection and extrinsic camera parameter estimation are performed simultaneously by fitting a B-snake to a feature map that is motion-compensated and merged over consecutive frames. The experimental results show the robustness of the proposed algorithm in various road environments. Furthermore, the accuracy of the extrinsic camera parameter estimation is evaluated by computing the distance to a preceding car with the estimated parameters and comparing it to the radar-measured distance.
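The evaluation described at the end — distance to a preceding car from the estimated extrinsics — follows from flat-road geometry: a ground-plane point imaged at row v lies at distance h / tan(pitch + atan((v - cy)/fy)). The intrinsics and numbers below are illustrative, not the paper's:

```python
import numpy as np

def ground_distance(v_px, cam_height, pitch_rad, fy=800.0, cy=240.0):
    """Distance along the road to a ground-plane point imaged at row
    `v_px`, given the estimated extrinsics (camera height and downward
    pitch). This is the flat-road geometry used to compare a
    vision-derived distance against a radar measurement."""
    ray_angle = pitch_rad + np.arctan2(v_px - cy, fy)   # angle below horizon
    return cam_height / np.tan(ray_angle)

# Camera 1.2 m high, pitched 2 degrees down; a point 40 px below the
# principal point lies roughly 14 m ahead
d = ground_distance(v_px=280.0, cam_height=1.2, pitch_rad=np.deg2rad(2.0))
print(round(d, 1))
```

Because the distance is very sensitive to the pitch term on bumpy roads, errors in the estimated extrinsics translate directly into distance error, which is exactly why the radar comparison is a meaningful accuracy check.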