• Title/Summary/Keyword: Camera pose

Localization of a Monocular Camera using a Feature-based Probabilistic Map (특징점 기반 확률 맵을 이용한 단일 카메라의 위치 추정방법)

  • Kim, Hyungjin;Lee, Donghwa;Oh, Taekjun;Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems / v.21 no.4 / pp.367-371 / 2015
  • In this paper, a novel localization method for a monocular camera is proposed using a feature-based probabilistic map. Camera localization is generally estimated from 3D-to-2D correspondences between a 3D map and the image plane through the PnP algorithm. In the computer vision community, an accurate 3D map for camera pose estimation is generated by optimization over a large image dataset. In the robotics community, the camera pose is estimated by probabilistic approaches when features are scarce, which requires an extra sensor because the camera alone cannot estimate the full state of the robot pose. Therefore, we propose an accurate localization method for a monocular camera that uses a probabilistic approach when the image dataset is insufficient, without any extra system. In our system, features from a probabilistic map are projected onto the image plane using a linear approximation. The accurate pose of the monocular camera is then estimated from an initial pose obtained by the PnP algorithm, by minimizing the Mahalanobis distance between the projected map features and the features extracted from a query image. The proposed algorithm is demonstrated through simulations in 3D space.
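
As a rough illustration of the two-stage scheme described in this abstract, the sketch below obtains an initial pose with OpenCV's PnP solver and then refines it by minimizing per-feature Mahalanobis distances; the function names, the 2x2 image-plane covariances, and the use of `scipy.optimize.least_squares` are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def refine_pose(map_pts, map_covs, img_pts, K):
    """map_pts: (N,3) map feature means, map_covs: (N,2,2) image-plane covariances
    of the projected map features, img_pts: (N,2) detected features, K: 3x3 intrinsics.
    Assumes N >= 6 correspondences."""
    map_pts = np.asarray(map_pts, np.float64)
    img_pts = np.asarray(img_pts, np.float64)

    # Stage 1: initial pose from the PnP algorithm.
    _, rvec, tvec = cv2.solvePnP(map_pts, img_pts, K, None)

    # Stage 2: refine by minimizing per-feature Mahalanobis distances between the
    # projected map features and the features extracted from the query image.
    def residuals(x):
        proj, _ = cv2.projectPoints(map_pts, x[:3], x[3:], K, None)
        err = proj.reshape(-1, 2) - img_pts
        return np.array([np.sqrt(e @ np.linalg.solve(S, e))
                         for S, e in zip(map_covs, err)])

    x0 = np.hstack([rvec.ravel(), tvec.ravel()])
    sol = least_squares(residuals, x0)
    return sol.x[:3], sol.x[3:]  # refined rvec, tvec
```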

A Camera Pose Estimation Method for Rectangle Feature based Visual SLAM (사각형 특징 기반 Visual SLAM을 위한 자세 추정 방법)

  • Lee, Jae-Min;Kim, Gon-Woo
    • The Journal of Korea Robotics Society / v.11 no.1 / pp.33-40 / 2016
  • In this paper, we propose a method for estimating the camera pose using a rectangle feature employed in visual SLAM. A rectangle feature, warped into a quadrilateral in the image by the perspective transformation, is reconstructed by the Coupled Line Camera algorithm. To fully reconstruct the rectangle in real-world coordinates, the distance between the feature and the camera is needed; this distance can be measured with a stereo camera. Using the properties of the line camera, the physical size of the rectangle feature can be derived from the distance. From the correspondence between the quadrilateral in the image and the rectangle in real-world coordinates, the relative pose between the camera and the feature can be recovered by computing the homography. To evaluate the performance, we compared the results of the proposed method with the reference pose in the Gazebo robot simulator.
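
A minimal sketch of the final step described here, recovering the relative pose from the quadrilateral-rectangle correspondence via a homography, is given below using OpenCV; the corner ordering and the metric width/height (assumed to come from the stereo distance) are illustrative, and the Coupled Line Camera reconstruction itself is not reproduced.

```python
import numpy as np
import cv2

def pose_from_rectangle(quad_px, width_m, height_m, K):
    """quad_px: (4,2) corner pixels in a fixed order; width_m, height_m: physical
    rectangle size (here assumed derived from the stereo distance); K: 3x3 intrinsics."""
    # Rectangle corners in its own plane (Z = 0), in the same order as quad_px.
    rect = np.array([[0, 0], [width_m, 0], [width_m, height_m], [0, height_m]], np.float64)
    H, _ = cv2.findHomography(rect, np.asarray(quad_px, np.float64))
    # Decompose the homography into candidate (R, t, n) solutions; a real system
    # would disambiguate them with visibility and positive-depth checks.
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return rotations, translations, normals
```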

Camera Calibration and Pose Estimation for Tasks of a Mobile Manipulator (모바일 머니퓰레이터의 작업을 위한 카메라 보정 및 포즈 추정)

  • Choi, Ji-Hoon;Kim, Hae-Chang;Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.15 no.4 / pp.350-356 / 2020
  • Workers have been replaced by mobile manipulators for factory automation in recent years. One typical automation task is for a mobile manipulator to move to a target location and pick and place an object on a worktable. However, due to the pose estimation error of the mobile platform, the robot cannot reach the exact target position, which prevents the manipulator from accurately picking and placing the object on the worktable. In this study, we developed an automatic alignment system using a low-cost camera mounted on the end-effector of a collaborative robot, together with camera calibration and pose estimation methods for this system. The algorithm uses a marker board to calibrate the camera and then precisely estimate the camera pose. Experimental results demonstrate that the mobile manipulator can perform successful pick-and-place tasks under various conditions.
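
The sketch below illustrates the calibrate-then-estimate-pose workflow with OpenCV's core chessboard routines as a stand-in for the paper's marker board; the file names, board size, and square size are hypothetical.

```python
import glob
import numpy as np
import cv2

pattern = (9, 6)                                   # inner corners of the assumed board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025  # 25 mm squares

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):              # hypothetical calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsic calibration from the detected boards.
_, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)

# Pose of the board relative to the camera in a new view (the alignment step).
gray = cv2.imread("target_view.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(gray, pattern)
if found:
    _, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
```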

Landmark Initialization for Unscented Kalman Filter Sensor Fusion in Monocular Camera Localization

  • Hartmann, Gabriel;Huang, Fay;Klette, Reinhard
    • International Journal of Fuzzy Logic and Intelligent Systems / v.13 no.1 / pp.1-11 / 2013
  • Determining the pose of the imaging camera is a fundamental problem in computer vision. In the monocular case, the unknown scene scale and the restriction to bearing-only measurements make it difficult to estimate the camera pose accurately. Many mobile phones now contain inertial measurement devices, which may aid the task of determining camera pose. In this study, by means of simulation and real-world experimentation, we explore an approach to monocular camera localization that incorporates both observations of the environment and measurements from accelerometers and gyroscopes. The unscented Kalman filter was implemented for this task. Our main contribution is a novel approach to landmark initialization in a Kalman filter; we characterize the tolerance to noise that this approach allows.
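
The following is a minimal bearing-only UKF sketch in the spirit of the fusion described above, built on the filterpy library; the 2D constant-velocity state, the known landmark, and the noise values are illustrative assumptions rather than the paper's model or its landmark initialization scheme.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

landmark = np.array([5.0, 3.0])                    # assumed known landmark position

def fx(x, dt):                                     # constant-velocity motion model
    px, py, vx, vy = x
    return np.array([px + vx * dt, py + vy * dt, vx, vy])

def hx(x):                                         # bearing-only measurement model
    return np.array([np.arctan2(landmark[1] - x[1], landmark[0] - x[0])])

points = MerweScaledSigmaPoints(n=4, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=1, dt=0.1, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.0, 0.0, 1.0, 0.0])
ukf.P *= 0.5
ukf.R = np.array([[np.deg2rad(2.0) ** 2]])         # bearing measurement noise
ukf.Q = np.eye(4) * 1e-3

for z in [np.array([0.55]), np.array([0.57])]:     # dummy bearing measurements (rad)
    ukf.predict()
    ukf.update(z)                                  # angle wrapping omitted for brevity
print(ukf.x)
```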

Camera pose estimation framework for array-structured images

  • Shin, Min-Jung;Park, Woojune;Kim, Jung Hee;Kim, Joonsoo;Yun, Kuk-Jin;Kang, Suk-Ju
    • ETRI Journal / v.44 no.1 / pp.10-23 / 2022
  • Despite significant progress in camera pose estimation and structure-from-motion reconstruction from unstructured images, methods that exploit a priori information on camera arrangements have been overlooked. Conventional state-of-the-art methods do not exploit the geometric structure when recovering accurate camera poses from a set of patch images arranged in an array for mosaic-based imaging, which creates a wide field-of-view image by stitching together a collection of regular images. We propose a camera pose estimation framework that exploits the array-structured image setting in each incremental reconstruction step. It consists of two-way registration, 3D point outlier elimination, and bundle adjustment with a constraint term for consistent rotation vectors to reduce reprojection errors during optimization. We demonstrate that by using the images' connected structure at the different camera pose estimation steps, we can estimate camera poses more accurately from all structured mosaic-based image sets, including omnidirectional scenes.
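
To make the idea of the rotation-consistency constraint concrete, the sketch below adds a penalty on differences between neighboring cameras' rotation vectors to ordinary reprojection residuals; the pinhole projection, neighbor list, and weight are assumptions, and the 3D points are held fixed here for brevity (full bundle adjustment would also optimize them).

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def residuals(params, n_cams, pts3d, cam_idx, pt_idx, obs, K, neighbors, w_rot=1.0):
    """params: flattened per-camera [rvec, tvec]; obs: observed pixels per observation."""
    cams = params.reshape(n_cams, 6)
    # Reprojection residuals for every (camera, 3D point) observation.
    rep = []
    for c, p, uv in zip(cam_idx, pt_idx, obs):
        proj, _ = cv2.projectPoints(pts3d[p:p + 1], cams[c, :3], cams[c, 3:], K, None)
        rep.append(proj.ravel() - uv)
    # Rotation-consistency residuals: cameras adjacent in the array should have
    # similar rotation vectors.
    rot = [w_rot * (cams[i, :3] - cams[j, :3]) for i, j in neighbors]
    return np.concatenate(rep + rot)

# Usage: sol = least_squares(residuals, x0,
#                            args=(n_cams, pts3d, cam_idx, pt_idx, obs, K, neighbors))
```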

A Switched Visual Servoing Technique Robust to Camera Calibration Errors for Reaching the Desired Location Following a Straight Line in 3-D Space (카메라 교정 오차에 강인한 3차원 직선 경로 추종을 위한 전환 비주얼 서보잉 기법)

  • Kim, Do-Hyoung;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.1 no.2 / pp.125-134 / 2006
  • This paper considers the problem of designing a servo system that reaches the desired location while keeping all features in the field of view and following a straight line, together with robustness to camera calibration errors. The proposed approach is based on switching from position-based visual servoing (PBVS) to image-based visual servoing (IBVS) and allows the camera path to follow a straight line. To achieve this, a pose estimation method is required; the target pose of the camera is estimated from the obtained images without knowledge of the object. A switched control law first moves the camera, mounted on a robot end-effector, near the desired location along a straight line in Cartesian space and then positions it at the desired pose with robustness to camera calibration errors. Finally, simulation results show the feasibility of the proposed visual servoing technique.
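
A minimal sketch of a PBVS-to-IBVS switching rule is shown below; the gain, the switching threshold, and the classic point-feature interaction matrix are textbook choices used for illustration, not the paper's specific control law.

```python
import numpy as np

def interaction_matrix(pts_img, depths):
    """Classic point-feature interaction (image Jacobian) matrix for normalized points."""
    rows = []
    for (x, y), Z in zip(pts_img, depths):
        rows.append([-1 / Z, 0, x / Z, x * y, -(1 + x * x), y])
        rows.append([0, -1 / Z, y / Z, 1 + y * y, -x * y, -x])
    return np.array(rows)

def switched_control(pose_err, feat_err, pts_img, depths, lam=0.5, switch_dist=0.05):
    """pose_err: 6-vector [translation err, rotation err]; feat_err: stacked feature errors."""
    if np.linalg.norm(pose_err[:3]) > switch_dist:
        # PBVS phase: drive the Cartesian pose error to zero (straight-line camera path).
        return -lam * pose_err
    # IBVS phase near the goal: drive the image feature error to zero,
    # keeping the features in the field of view.
    L = interaction_matrix(pts_img, depths)
    return -lam * np.linalg.pinv(L) @ feat_err
```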

Camera Exterior Parameters Based on Vector Inner Production Application: Absolute Orientation (벡터내적 기반 카메라 외부 파라메터 응용 : 절대표정)

  • Chon, Jae-Choon;Sastry, Shankar
    • Journal of Institute of Control, Robotics and Systems / v.14 no.1 / pp.70-74 / 2008
  • In the field of camera motion research, it is widely held that the position (translation) and pose (rotation) of a camera are correlated and cannot be separated independently. A new equation based on the vector inner product is proposed here to separate the position and pose independently. It is proved that the position and pose are not correlated, and the equation is applied to the estimation of the camera exterior parameters using a real image and 3D data.
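
As a hedged illustration of solving rotation and translation separately, the sketch below uses the standard SVD-based absolute-orientation (Kabsch) solution on 3D-3D correspondences; this is a common textbook scheme shown for context, not the paper's inner-product equation.

```python
import numpy as np

def absolute_orientation(P, Q):
    """Find R, t with Q ~ R @ P_i + t for corresponding (N,3) point sets P and Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # The rotation is found from the centered points only, independently of translation.
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # keep a proper rotation
    R = Vt.T @ D @ U.T
    t = cq - R @ cp                                # translation follows from the rotation
    return R, t
```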

Head Pose Estimation Based on Perspective Projection Using PTZ Camera (원근투영법 기반의 PTZ 카메라를 이용한 머리자세 추정)

  • Kim, Jin Suh;Lee, Gyung Ju;Kim, Gye Young
    • KIPS Transactions on Software and Data Engineering / v.7 no.7 / pp.267-274 / 2018
  • This paper describes a head pose estimation method using a PTZ (Pan-Tilt-Zoom) camera. When the external parameters of the camera are changed by rotation and translation, the estimated pose of the same head also varies. We propose a new method that estimates the head pose independently of variations in the PTZ camera parameters. The proposed method consists of three steps: face detection, feature extraction, and pose estimation, using the MCT (Modified Census Transform) feature, the facial regression tree method, and the POSIT (Pose from Orthography and Scaling with ITeration) algorithm, respectively. The existing POSIT algorithm does not consider the rotation of the camera, but this paper improves POSIT based on perspective projection in order to estimate the head pose robustly even when the external parameters of the camera change. Through experiments, we confirmed that the RMSE (Root Mean Square Error) of the proposed method is improved by 0.6° compared with the conventional method.
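
The sketch below shows a common way to obtain head pose from 2D facial landmarks under a full perspective model via PnP, which is often used in place of the orthographic-scaling POSIT step; the six-point 3D face model and the rough intrinsics are illustrative values.

```python
import numpy as np
import cv2

# Generic 3D landmark model (nose tip, chin, eye corners, mouth corners), in mm.
model_3d = np.array([
    [0.0, 0.0, 0.0],          # nose tip
    [0.0, -330.0, -65.0],     # chin
    [-225.0, 170.0, -135.0],  # left eye outer corner
    [225.0, 170.0, -135.0],   # right eye outer corner
    [-150.0, -150.0, -125.0], # left mouth corner
    [150.0, -150.0, -125.0],  # right mouth corner
], dtype=np.float64)

def head_pose(landmarks_2d, frame_size):
    """landmarks_2d: (6,2) pixels in the same order as model_3d; frame_size: (w, h)."""
    w, h = frame_size
    # Rough pinhole intrinsics: focal length approximated by the image width.
    K = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    _, rvec, tvec = cv2.solvePnP(model_3d, landmarks_2d.astype(np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)                     # rotation matrix (yaw/pitch/roll follow)
    return R, tvec
```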

Artificial Landmark based Pose-Graph SLAM for AGVs in Factory Environments (공장환경에서 AGV를 위한 인공표식 기반의 포즈그래프 SLAM)

  • Heo, Hwan;Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.10 no.2 / pp.112-118 / 2015
  • This paper proposes a pose-graph-based SLAM method for AGVs in factory environments using an upward-looking camera and artificial landmarks. The proposed method provides a way to acquire the camera extrinsic matrix and improves the accuracy of feature observation with a low-cost camera. SLAM is conducted by optimizing the AGV's explored path using artificial landmarks installed at various locations on the ceiling. As the AGV explores, pose nodes are added at fixed odometry distance intervals, and landmark nodes are registered whenever the AGV recognizes a fiducial marker. The resulting graph network is optimized with the g2o optimization tool so that the accumulated error due to wheel slip is minimized. Experiments show that the proposed method is robust for SLAM in real factory environments.
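
The sketch below builds and optimizes a tiny 2D pose graph with odometry edges and a re-observation constraint; GTSAM is used purely as a convenient stand-in for the paper's g2o back end, and all poses, measurements, and noise values are made up.

```python
import numpy as np
import gtsam  # GTSAM 4.1+ Python API assumed

graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.02]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Anchor the first pose node and chain odometry edges between consecutive pose nodes.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), prior_noise))
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))

# A loop-closure-like constraint from re-observing the same ceiling landmark,
# modeled here as a relative pose measurement between non-consecutive nodes.
graph.add(gtsam.BetweenFactorPose2(0, 2, gtsam.Pose2(2.05, 0.02, 0.0), odom_noise))

initial = gtsam.Values()
initial.insert(0, gtsam.Pose2(0, 0, 0))
initial.insert(1, gtsam.Pose2(1.1, 0.1, 0.05))     # deliberately perturbed guesses
initial.insert(2, gtsam.Pose2(2.3, -0.1, -0.05))   # (as if corrupted by wheel slip)

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)
```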

Robust 2D human upper-body pose estimation with fully convolutional network

  • Lee, Seunghee;Koo, Jungmo;Kim, Jinki;Myung, Hyun
    • Advances in robotics research / v.2 no.2 / pp.129-140 / 2018
  • With the increasing demand for human pose estimation in applications such as human-computer interaction and human activity recognition, numerous approaches have been proposed to detect the 2D poses of people in images more efficiently. Despite many years of research, estimating human poses from images still struggles to produce satisfactory results. In this study, we propose a robust 2D human upper-body pose estimation method using an RGB camera sensor. The method is efficient and cost-effective, since an RGB camera is economical compared with the high-priced sensors more commonly used. For the estimation of upper-body joint positions, semantic segmentation with a fully convolutional network is exploited. From the acquired RGB images, joint heatmaps are used to estimate the coordinates of each joint. The network architecture was designed to learn and detect joint locations via a sequential prediction process. The proposed method was tested and validated for efficient estimation of the human upper-body pose, and the results reveal the potential of a simple RGB camera sensor for human pose estimation applications.
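
The sketch below captures the heatmap idea in PyTorch: a small fully convolutional network predicts one heatmap per upper-body joint, and the joint coordinate is read off as the heatmap argmax; the layer sizes and joint count are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

NUM_JOINTS = 8  # e.g. head, neck, shoulders, elbows, wrists (assumed joint set)

class HeatmapFCN(nn.Module):
    def __init__(self, num_joints=NUM_JOINTS):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, num_joints, 1)   # one heatmap channel per joint

    def forward(self, x):
        return self.head(self.backbone(x))         # (B, num_joints, H, W)

def decode_joints(heatmaps):
    """Return (B, num_joints, 2) pixel coordinates from the heatmap argmax."""
    b, j, h, w = heatmaps.shape
    flat = heatmaps.view(b, j, -1).argmax(dim=-1)
    return torch.stack([flat % w, flat // w], dim=-1)  # (x, y) per joint

model = HeatmapFCN()
coords = decode_joints(model(torch.randn(1, 3, 256, 256)))
print(coords.shape)  # torch.Size([1, 8, 2])
```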