• Title/Abstract/Keywords: Camera pose

Search results: 277 items (processing time: 0.025 s)

Pose Tracking of Moving Sensor using Monocular Camera and IMU Sensor

  • Jung, Sukwoo;Park, Seho;Lee, KyungTaek
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 8 / pp.3011-3024 / 2021
  • Pose estimation of a sensor is an important issue in many applications such as robotics, navigation, tracking, and augmented reality. This paper proposes a visual-inertial integration system suited to the dynamically moving condition of the sensor. The orientation estimated from an Inertial Measurement Unit (IMU) is used, together with the intrinsic parameters of the camera, to calculate the essential matrix. Using epipolar geometry, outliers among the feature-point matches in the image sequence are eliminated; the IMU orientation thus helps remove erroneous matches at an early stage in images of dynamic scenes. After the outliers are removed, the remaining feature-point matches are used to calculate a precise fundamental matrix, from which the pose of the sensor is finally estimated. The proposed procedure was implemented and tested in comparison with existing methods, and the experimental results show the effectiveness of the proposed technique.
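
A minimal sketch of the outlier-rejection step described above: form the essential matrix from the IMU-supplied rotation and a coarse translation direction, then score each match by its epipolar residual. This is our illustration, not the authors' code; the threshold and the source of `t_guess` are assumptions.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_inliers(pts1, pts2, R_imu, t_guess, K, thresh=1e-3):
    """Flag matches consistent with the IMU-predicted epipolar geometry.

    pts1, pts2 : (N, 2) matched pixel coordinates in frames 1 and 2
    R_imu      : (3, 3) inter-frame rotation taken from the IMU
    t_guess    : (3,) coarse translation direction (scale-free, assumed given)
    K          : (3, 3) camera intrinsic matrix
    """
    E = skew(t_guess) @ R_imu                    # essential matrix E = [t]x R
    Kinv = np.linalg.inv(K)
    x1 = Kinv @ np.column_stack([pts1, np.ones(len(pts1))]).T  # normalized rays
    x2 = Kinv @ np.column_stack([pts2, np.ones(len(pts2))]).T
    residual = np.abs(np.sum(x2 * (E @ x1), axis=0))           # |x2^T E x1|
    return residual < thresh                     # boolean inlier mask
```

The surviving matches would then feed the fundamental-matrix estimation (e.g. cv2.findFundamentalMat) and the final pose recovery, as in the paper.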

Particle Filter Based Robust Multi-Human 3D Pose Estimation for Vehicle Safety Control

  • 박준상;박형욱
    • Journal of Auto-vehicle Safety Association / Vol. 14, No. 3 / pp.71-76 / 2022
  • In autonomous cars, 3D pose estimation can be an effective way to enhance safety control for out-of-position (OOP) passengers. There have been many studies on camera-based human pose estimation, but previous methods have limitations in automotive applications: CNN methods are unreliable due to unexplainable failures, and other methods perform poorly. This paper proposes a robust, real-time, multi-human 3D pose estimation architecture for in-vehicle use with a monocular RGB camera. Using a particle filter, our approach integrates CNN 2D/3D pose measurements with the information available in the vehicle. Computer simulations were performed to confirm the accuracy and robustness of the proposed algorithm.
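
A rough sketch of the particle-filter fusion idea for a single 3D joint (our construction; the random-walk process model, Gaussian likelihood, and noise levels are assumptions, and the paper additionally folds in vehicle-side information):

```python
import numpy as np

def pf_update(particles, weights, z, meas_std=0.05, proc_std=0.02):
    """One predict/update/resample cycle for one 3D joint position.

    particles : (N, 3) candidate joint positions [m]
    weights   : (N,) normalized particle weights
    z         : (3,) noisy CNN measurement of the joint [m]
    """
    # Predict: random-walk process model (no vehicle motion model here).
    particles = particles + np.random.normal(0.0, proc_std, particles.shape)
    # Update: Gaussian likelihood of the CNN measurement.
    d2 = np.sum((particles - z) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size degenerates.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = np.random.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# The joint estimate is the weighted mean of the particles:
# estimate = (weights[:, None] * particles).sum(axis=0)
```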

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju;Kim, Jin-Suh;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / Vol. 21, No. 10 / pp.11-19 / 2016
  • Displays have become large and varied in form, so previous gaze-tracking methods no longer apply; placing the gaze-tracking camera above the display solves the problems caused by display size and height. However, this setup cannot use the corneal-reflection information from infrared illumination that previous methods rely on. This paper proposes a method that robustly detects the pupil under eye occlusion, locates the inner corner of the eye and the center of the pupil, and uses face-pose information to compute the gaze position on the display in a simple way. In the proposed method, the camera switches between wide- and narrow-angle modes according to the person's position: if a face is detected within the field of view (FOV) in wide mode, the camera switches to narrow mode aimed at the computed face position. The frame captured in narrow mode contains the gaze-direction information of a person at a long distance. Gaze computation consists of a face-pose estimation step and a gaze-direction calculation step. Face pose is estimated by mapping feature points of the detected face onto a 3D model. To calculate the gaze direction, an ellipse is first fitted to edge segments of the iris; if the pupil is occluded, its position is estimated with a deformable template. The gaze position on the display is then calculated from the pupil center, the inner eye corner, and the face-pose information. Experiments demonstrate that the proposed algorithm removes the constraints imposed by display form and effectively calculates the gaze direction of a person at long distance with a single camera, validated over varying distances.
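
Two ingredients of the pipeline above can be sketched compactly: pupil localization by ellipse fitting, and a gaze-point computation from the pupil-to-inner-corner vector plus head pose. The linear screen mapping, gains, and viewing distance are illustrative assumptions, not the paper's calibration.

```python
import cv2
import numpy as np

def pupil_center(edge_points):
    """Fit an ellipse to iris/pupil edge pixels (cv2.fitEllipse needs >= 5);
    its center approximates the pupil even under partial occlusion."""
    (cx, cy), _axes, _angle = cv2.fitEllipse(edge_points.astype(np.float32))
    return np.array([cx, cy])

def gaze_on_display(pupil, inner_corner, yaw, pitch, gain=(8.0, 8.0), dist=600.0):
    """Combine the pupil offset from the inner eye corner (pixels) with face
    yaw/pitch (radians) into display coordinates for a viewer ~dist mm away.
    The linear model, gains and distance are illustrative assumptions."""
    v = pupil - inner_corner
    return np.array([gain[0] * v[0] + dist * np.tan(yaw),
                     gain[1] * v[1] + dist * np.tan(pitch)])
```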

Active Calibration of the Robot/Camera Pose Using Cylindrical Objects

  • 한만용;김병화;김국헌;이장명
    • Journal of Institute of Control, Robotics and Systems / Vol. 5, No. 3 / pp.314-323 / 1999
  • This paper introduces a methodology for active calibration of a camera pose (orientation and position) using images of the cylindrical objects that are to be manipulated. This active calibration differs from passive calibration, where a specific pattern must be placed at a known position. In active calibration, a camera attached to the robot captures images of the objects to be manipulated; that is, the prespecified position and orientation data of the cylindrical object are transformed into the camera pose through two consecutive image frames. An ellipse can be extracted from each image frame and encoded as a circular-feature matrix, so two circular-feature matrices and the motion parameters between the two ellipses suffice for the active calibration process. This scheme is very effective for the precise control of a mobile/task robot that needs to be calibrated dynamically. To verify its effectiveness, fundamental experiments were performed.
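
The circular-feature matrix of an extracted ellipse is its 3x3 conic. A minimal sketch of building that matrix from fitted ellipse parameters, assuming the standard conic form rather than the paper's exact formulation:

```python
import numpy as np

def ellipse_to_conic(cx, cy, a, b, theta):
    """3x3 symmetric conic Q of an image ellipse with center (cx, cy),
    semi-axes a, b and rotation theta [rad], so that x^T Q x = 0 for
    homogeneous image points x on the ellipse."""
    c, s = np.cos(theta), np.sin(theta)
    A = (c / a) ** 2 + (s / b) ** 2
    B = 2.0 * c * s * (1.0 / a ** 2 - 1.0 / b ** 2)
    C = (s / a) ** 2 + (c / b) ** 2
    D = -2.0 * A * cx - B * cy
    E = -B * cx - 2.0 * C * cy
    F = A * cx ** 2 + B * cx * cy + C * cy ** 2 - 1.0
    return np.array([[A, B / 2, D / 2],
                     [B / 2, C, E / 2],
                     [D / 2, E / 2, F]])
```

If the ellipse comes from cv2.fitEllipse, note that it returns full axis lengths and an angle in degrees, so halve the axes and convert the angle to radians before building the conic.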

Control of a mobile robot supporting a task robot on the top

  • Lee, Jang M.
    • Institute of Control, Robotics and Systems: Proceedings of the Korea Automatic Control Conference, 11th (KACC); Pohang, Korea; 24-26 Oct. 1996 / pp.1-7 / 1996
  • This paper addresses the control problem of a mobile robot supporting a task robot that needs to be positioned precisely. The main difficulty in the precise control of such a system is providing an accurate and stable base for the task robot: the end-plate of the mobile robot, which serves as the base of the task robot, cannot be positioned accurately without external position sensors. This difficulty is resolved here through vision information obtained from a camera attached at the end of the task robot. First, the camera parameters were measured using images of a fixed object captured by the camera. The measured parameters include the rotation, position, scale factor, and focal length of the camera; they could be measured using the features of each vertex of a hexagonal object and the pin-hole camera model. Using the measured pose (position and orientation) of the camera and the given kinematics of the task robot, we calculate the pose of the mobile robot's end-plate, which is then used for its precise control. Experimental results for the pose estimation are shown.
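
In modern terms, recovering the camera pose from the hexagon's vertices is a PnP problem. A self-contained sketch with synthetic data (the geometry, intrinsics, and ground-truth pose are illustrative, not the paper's setup):

```python
import cv2
import numpy as np

# Six vertices of a hexagonal object in its own frame [m] -- illustrative.
ang = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
object_pts = np.stack([0.1 * np.cos(ang), 0.1 * np.sin(ang), np.zeros(6)], axis=1)

K = np.array([[800.0, 0.0, 320.0],      # assumed pin-hole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Ground-truth pose, used here only to synthesize pixel measurements.
rvec_gt = np.array([0.1, -0.2, 0.05])
tvec_gt = np.array([0.05, -0.02, 1.0])
image_pts, _ = cv2.projectPoints(object_pts, rvec_gt, tvec_gt, K, None)

# Recover the camera pose from the six vertex correspondences.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)              # rotation matrix of the estimated pose
print(ok, rvec.ravel(), tvec.ravel())   # should reproduce rvec_gt, tvec_gt
```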

A Framework for Real Time Vehicle Pose Estimation based on synthetic method of obtaining 2D-to-3D Point Correspondence

  • Yun, Sergey;Jeon, Moongu
    • Korea Information Processing Society: Proceedings of the 2014 KIPS Spring Conference / pp.904-907 / 2014
  • In this work we present a robust and fast approach to estimating the 3D pose of a vehicle under specific traffic-surveillance conditions: a single fixed CCTV camera located relatively high above the ground, with its pitch axis parallel to the reference plane and a known camera focal length. The benefit of our framework is that it does not require prior training or camera calibration and does not rely heavily on the shape of a 3D model, as most common techniques do. It also handles objects in poor shape condition, since we focus on low-resolution surveillance scenes. The pose-estimation task is cast as a PnP problem, which we solve with the well-known POSIT algorithm [1]. This algorithm requires at least four non-coplanar point correspondences; to find them, we propose a set of techniques based on model and scene geometry. Our framework can be applied to real-time video sequences, and estimated vehicle poses are shown on real image scenes.
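
POSIT itself survives only in OpenCV's legacy C API; cv2.solvePnP with the EPnP flag is a modern stand-in for the same at-least-four-non-coplanar-points step. A sketch with an assumed simplified vehicle model and hand-picked 2D detections (all values illustrative):

```python
import cv2
import numpy as np

# Four-plus non-coplanar points of a simplified vehicle model [m],
# e.g. bounding-box corners -- illustrative values only.
model_pts = np.array([[0.0, 0.0, 0.0],
                      [4.2, 0.0, 0.0],
                      [4.2, 1.8, 0.0],
                      [0.0, 1.8, 0.0],
                      [0.0, 0.0, 1.4],
                      [4.2, 1.8, 1.4]])

# Corresponding 2D detections in the surveillance frame (assumed).
image_pts = np.array([[412., 301.], [688., 314.], [690., 391.],
                      [405., 380.], [418., 240.], [702., 330.]])

K = np.array([[1000., 0., 640.],   # fixed CCTV intrinsics; focus assumed known
              [0., 1000., 360.],
              [0., 0., 1.]])

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)     # vehicle orientation relative to the camera
    print("pose:", R, tvec.ravel())
```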

Omni-directional Visual-LiDAR SLAM for Multi-Camera System

  • 지샨 자비드;김곤우
    • The Journal of Korea Robotics Society / Vol. 17, No. 3 / pp.353-358 / 2022
  • Due to the limited field of view of a pinhole camera, camera pose estimation applications such as visual SLAM lack stability and accuracy. Nowadays, multiple-camera setups and large field-of-view cameras are used to solve such issues; however, a multiple-camera system increases the computational complexity of the algorithm. Therefore, for multiple-camera-assisted visual simultaneous localization and mapping (vSLAM), a multi-view tracking algorithm is proposed that balances the feature budget between tracking and local mapping. The proposed algorithm is based on the PanoSLAM architecture with a panoramic camera model. To avoid the scale issue, a 3D LiDAR is fused with the omnidirectional camera setup: depth is estimated directly from the 3D LiDAR, and the remaining features are triangulated from pose information. To validate the method, we collected a dataset in an outdoor environment and performed extensive experiments. Accuracy was measured by the absolute trajectory error, which shows comparable robustness across various environments.
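
The two depth sources mentioned above can be sketched as follows: features with a LiDAR return are lifted directly, and the rest are triangulated from two estimated poses. This is a schematic of the idea only; the panoramic camera model and the multi-view tracking are omitted.

```python
import cv2
import numpy as np

def lift_with_lidar(u, v, depth, K):
    """Back-project a pixel whose depth comes directly from the 3D LiDAR."""
    return depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

def triangulate_rest(K, pose1, pose2, pts1, pts2):
    """Triangulate features without LiDAR returns from two estimated poses.

    pose1, pose2 : (R, t) world-to-camera rotation matrices / translations
    pts1, pts2   : (2, N) pixel coordinates of the same features in each view
    """
    P1 = K @ np.hstack([pose1[0], pose1[1].reshape(3, 1)])
    P2 = K @ np.hstack([pose2[0], pose2[1].reshape(3, 1)])
    Xh = cv2.triangulatePoints(P1, P2, pts1.astype(float), pts2.astype(float))
    return (Xh[:3] / Xh[3]).T          # (N, 3) Euclidean map points
```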

Three-Dimensional Pose Estimation of Neighbor Mobile Robots in Formation System Based on the Vision System

  • 권지욱;박문수;좌동경;홍석교
    • Journal of Institute of Control, Robotics and Systems / Vol. 15, No. 12 / pp.1223-1231 / 2009
  • We derive a systematic and iterative calibration algorithm, together with a position and pose estimation algorithm, for mobile robots in a formation system based on the vision system. In addition, we develop a coordinate-matching algorithm that computes the matched ordering of both the extracted image coordinates and the object coordinates, enabling non-interactive calibration and pose estimation. Based on the calibration results, we also develop a camera simulator to confirm the calibration and to compare simulation results with experimental results in position and pose estimation.
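
One hedged reading of the coordinate-matching step, pairing extracted image points with known object points without user interaction, is to put both point sets into a common cyclic order about their centroids (our illustration; the paper's algorithm may differ):

```python
import numpy as np

def order_ccw(pts):
    """Sort 2D points counter-clockwise around their centroid, giving both the
    image points and the (projected) object points one common sequence."""
    c = pts.mean(axis=0)
    return pts[np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))]

# Matched orderings for an illustrative triple of extracted/known points:
image_seq  = order_ccw(np.array([[320., 200.], [410., 260.], [300., 330.]]))
object_seq = order_ccw(np.array([[0., 0.], [1., 0.], [0., 1.]]))
```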

Pose-invariant Face Recognition Using Cylindrical Model and Stereo Camera

  • 노진우;안병두;고한석
    • The Institute of Electronics Engineers of Korea: Proceedings of the 2003 IEEK Summer Conference, Vol. IV / pp.2012-2015 / 2003
  • This paper proposes a pose-invariant face recognition method using a cylindrical model and a stereo camera, covering two cases: a single input image and a stereo input image. In the single-image case, we normalize the face's yaw pose using the cylindrical model; in the stereo case, we normalize the face's pitch pose using the cylindrical model together with the object's pitch estimated from stereo geometry. Moreover, since two images acquired at the same time are available, the overall recognition rate can be increased by decision-level fusion. Experiments confirmed that the recognition rate increases with the proposed methods.
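
A bare-bones sketch of cylindrical yaw normalization: treat the face image as texture wrapped on a vertical cylinder, rotate by the estimated yaw, and resample. This is heavily simplified relative to a full cylindrical head model (nearest-neighbour sampling, no shading or self-occlusion handling):

```python
import numpy as np

def cylinder_yaw_normalize(img, yaw, radius=None):
    """Resample a (H, W) grayscale face image as if wrapped on a vertical
    cylinder rotated by the estimated yaw [rad]."""
    h, w = img.shape
    r = radius if radius is not None else w / 2.0
    out = np.zeros_like(img)
    xs = np.arange(w) - w / 2.0                          # destination columns
    theta = np.arcsin(np.clip(xs / r, -1.0, 1.0)) + yaw  # rotated cylinder angle
    src = r * np.sin(theta) + w / 2.0                    # source column per angle
    valid = (np.abs(theta) <= np.pi / 2) & (src >= 0) & (src < w)
    out[:, np.where(valid)[0]] = img[:, src[valid].astype(int)]
    return out
```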

Multi-camera-based 3D Human Pose Estimation for Close-Proximity Human-robot Collaboration in Construction

  • Sarkar, Sajib;Jang, Youjin;Jeong, Inbae
    • International Conference Proceedings / The 9th International Conference on Construction Engineering and Project Management / pp.328-335 / 2022
  • With the advance of robot capabilities and functionalities, construction robots assisting construction workers have been increasingly deployed on construction sites to improve safety, efficiency and productivity. For close-proximity human-robot collaboration in construction sites, robots need to be aware of the context, especially construction worker's behavior, in real-time to avoid collision with workers. To recognize human behavior, most previous studies obtained 3D human poses using a single camera or an RGB-depth (RGB-D) camera. However, single-camera detection has limitations such as occlusions, detection failure, and sensor malfunction, and an RGB-D camera may suffer from interference from lighting conditions and surface material. To address these issues, this study proposes a novel method of 3D human pose estimation by extracting 2D location of each joint from multiple images captured at the same time from different viewpoints, fusing each joint's 2D locations, and estimating the 3D joint location. For higher accuracy, the probabilistic representation is used to extract the 2D location of the joints, considering each joint location extracted from images as a noisy partial observation. Then, this study estimates the 3D human pose by fusing the probabilistic 2D joint locations to maximize the likelihood. The proposed method was evaluated in both simulation and laboratory settings, and the results demonstrated the accuracy of estimation and the feasibility in practice. This study contributes to ensuring human safety in close-proximity human-robot collaboration by providing a novel method of 3D human pose estimation.
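
A compact sketch of the fusion step: under isotropic Gaussian noise on each 2D detection, confidence-weighted linear triangulation (DLT) approximates the maximum-likelihood 3D joint location described above (our simplification of the probabilistic formulation):

```python
import numpy as np

def fuse_joint_3d(projections, joints_2d, confidences):
    """Confidence-weighted DLT triangulation of one joint from many views.

    projections : list of (3, 4) camera projection matrices K [R|t]
    joints_2d   : list of (u, v) detections, one per view
    confidences : per-view weights, e.g. 2D heat-map peak values
    """
    rows = []
    for P, (u, v), w in zip(projections, joints_2d, confidences):
        rows.append(w * (u * P[2] - P[0]))   # u * p3^T X - p1^T X = 0
        rows.append(w * (v * P[2] - P[1]))   # v * p3^T X - p2^T X = 0
    _, _, Vt = np.linalg.svd(np.stack(rows))
    X = Vt[-1]                               # null-space solution
    return X[:3] / X[3]                      # Euclidean 3D joint location
```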
