• Title/Summary/Keyword: camera pose estimation

A Study on the Improvement of Pose Information of Objects by Using Trinocular Vision System (Trinocular Vision System을 이용한 물체 자세정보 인식 향상방안)

  • Kim, Jong Hyeong;Jang, Kyoungjae;Kwon, Hyuk-dong
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.26 no.2
    • /
    • pp.223-229
    • /
    • 2017
  • Recently, robotic bin-picking tasks have drawn considerable attention, because flexibility is required in robotic assembly tasks. Generally, stereo camera systems have been used widely for robotic bin-picking, but these have two limitations: first, the computational burden of solving the correspondence problem on stereo images increases calculation time; second, errors in image processing and camera calibration reduce accuracy. Moreover, errors in the robot kinematic parameters directly affect robot gripping. In this paper, we propose a method of correcting the bin-picking error by using a trinocular vision system which consists of two stereo cameras and one hand-eye camera. First, the two stereo cameras, with a wide viewing angle, measure the object's pose roughly. Then, the third hand-eye camera approaches the object and corrects the previous measurement of the stereo camera system. Experimental results show the usefulness of the proposed method.
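
The coarse-then-refine scheme described above can be illustrated with homogeneous transforms: a rough object pose from the wide-baseline stereo pair is composed with a small correction measured by the hand-eye camera. The following is a minimal sketch under that assumption; all transforms, values, and names are hypothetical and not taken from the paper.

```python
import numpy as np

def compose(T_a, T_b):
    """Compose two 4x4 homogeneous transforms."""
    return T_a @ T_b

# Hypothetical coarse object pose from the wide-baseline stereo pair
# (rotation kept as identity only to keep the sketch self-contained).
T_stereo = np.eye(4)
T_stereo[:3, 3] = [0.52, -0.10, 0.95]   # rough translation in metres

# Hypothetical correction measured by the hand-eye camera after approaching
# the object: a small rotation about z and a few-millimetre translation.
theta = np.deg2rad(2.0)
T_correction = np.eye(4)
T_correction[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]]
T_correction[:3, 3] = [0.004, -0.002, 0.001]

# Corrected pose used for gripping = coarse estimate composed with the correction.
T_corrected = compose(T_stereo, T_correction)
print(T_corrected)
```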

Face Pose Estimation using Stereo Image (스테레오 영상을 이용한 얼굴 포즈 추정)

  • So, In-Mi;Kang, Sun-Kyung;Kim, Young-Un;Lee, Chi-Geun;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.3
    • /
    • pp.151-159
    • /
    • 2006
  • In this paper, we present a method for estimating face pose by using two camera images. First, it finds corresponding facial feature points of the eyebrows, eyes, and lips in the two images. After that, it computes the three-dimensional locations of the facial feature points by using the triangulation method of stereo vision. Next, it makes a triangle from the extracted facial feature points and computes the surface normal vector of the triangle. The surface normal of the triangle represents the direction of the face. We applied the computed face pose to display a 3D face model. The experimental results show that the proposed method extracts the face pose correctly.
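
The core geometric step, computing the face direction as the surface normal of the triangle spanned by three facial feature points, can be sketched as follows; the 3D points and the yaw/pitch conversion are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical 3D facial feature points (e.g. two eye corners and the lip
# centre) already recovered by stereo triangulation, in camera coordinates.
p_left_eye  = np.array([-0.03,  0.02, 0.60])
p_right_eye = np.array([ 0.03,  0.02, 0.61])
p_lip       = np.array([ 0.00, -0.04, 0.62])

# Surface normal of the triangle spanned by the three points; its direction
# approximates the facing direction of the head.
normal = np.cross(p_right_eye - p_left_eye, p_lip - p_left_eye)
normal /= np.linalg.norm(normal)

# Yaw/pitch of the face relative to the camera's optical axis (z).
yaw   = np.degrees(np.arctan2(normal[0], normal[2]))
pitch = np.degrees(np.arctan2(normal[1], normal[2]))
print(normal, yaw, pitch)
```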

Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion (천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정)

  • Shin, Ok-Shik;Park, Chan-Gook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.1
    • /
    • pp.54-61
    • /
    • 2012
  • In this paper, a pose estimation method for the satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, used for verifying a satellite system on the ground, is similar to a mobile robot that has thrusters and a reaction wheel as actuators and floats on the floor on compressed air. An EKF (Extended Kalman Filter) is used for fusion of the MEMS IMU and the vision system, which consists of a single camera and infrared LEDs that serve as ceiling landmarks. The fusion filter generally uses the positions of feature points in the image as measurements. However, when the camera image is not available, this method can cause position error due to the bias of the MEMS IMU if the bias is not properly estimated by the filter. Therefore, a fusion method is proposed that uses both the positions of feature points and the camera velocity determined from the optical flow of the feature points. Experiments verify that the performance of the proposed method is robust to IMU bias compared to the method that uses only the positions of feature points.
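
A minimal sketch of how camera velocity can be derived from the optical flow of ceiling-landmark features and then offered to the fusion filter as an extra measurement; the feature positions, frame rate, focal length, and ceiling distance below are hypothetical, and the scaling assumes a simple pinhole model rather than the paper's exact formulation.

```python
import numpy as np

# Hypothetical pixel positions of ceiling-landmark features in two
# consecutive frames (N x 2 arrays), e.g. from an LK optical-flow tracker.
pts_prev = np.array([[320.0, 240.0], [400.0, 210.0], [280.0, 260.0]])
pts_curr = pts_prev + np.array([2.0, -1.0])          # apparent image motion
dt, f, depth = 1.0 / 30.0, 800.0, 2.5                # frame time [s], focal length [px], ceiling distance [m]

# Mean optical flow (pixels/frame) -> approximate metric camera velocity.
flow = (pts_curr - pts_prev).mean(axis=0)
v_cam = -flow * depth / (f * dt)                     # camera moves opposite to the image motion
print("camera velocity [m/s]:", v_cam)

# In the fusion filter this velocity, together with the feature positions,
# would form the measurement vector instead of the positions alone.
```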

Enhanced Sign Language Transcription System via Hand Tracking and Pose Estimation

  • Kim, Jung-Ho;Kim, Najoung;Park, Hancheol;Park, Jong C.
    • Journal of Computing Science and Engineering
    • /
    • v.10 no.3
    • /
    • pp.95-101
    • /
    • 2016
  • In this study, we propose a new system for constructing parallel corpora for sign languages, which are generally under-resourced in comparison to spoken languages. In order to achieve scalability and accessibility regarding data collection and corpus construction, our system utilizes deep learning-based techniques and predicts depth information to perform pose estimation on hand information obtainable from video recordings by a single RGB camera. These estimated poses are then transcribed into expressions in SignWriting. We evaluate the accuracy of hand tracking and hand pose estimation modules of our system quantitatively, using the American Sign Language Image Dataset and the American Sign Language Lexicon Video Dataset. The evaluation results show that our transcription system has a high potential to be successfully employed in constructing a sizable sign language corpus using various types of video resources.
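
One way to read the "predicted depth plus 2D hand keypoints" idea is as a pinhole back-projection of each keypoint into 3D. The sketch below assumes hypothetical keypoints, depths, and camera intrinsics and is not the authors' network or pipeline.

```python
import numpy as np

# Hypothetical 2D hand keypoints (pixels) from the tracker and per-keypoint
# depth predicted by a network; intrinsics fx, fy, cx, cy are assumed.
kp_2d = np.array([[310.0, 220.0], [330.0, 215.0], [350.0, 212.0]])
depth = np.array([0.62, 0.61, 0.60])                  # metres
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

# Back-project each keypoint to a 3D point in the camera frame.
x = (kp_2d[:, 0] - cx) / fx * depth
y = (kp_2d[:, 1] - cy) / fy * depth
kp_3d = np.stack([x, y, depth], axis=1)
print(kp_3d)
```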

A Parallel Implementation of Multiple Non-overlapping Cameras for Robot Pose Estimation

  • Ragab, Mohammad Ehab;Elkabbany, Ghada Farouk
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.11
    • /
    • pp.4103-4117
    • /
    • 2014
  • Image processing and computer vision algorithms are gaining increasing attention in a variety of application areas such as robotics and man-machine interaction. Vision allows the development of flexible, intelligent, and less intrusive approaches than most other sensor systems. In this work, we determine the location and orientation of a mobile robot, which is crucial for performing its tasks. In order to operate in real time, the different vision routines need to be sped up. Therefore, we present and evaluate a method for introducing parallelism into the multiple non-overlapping camera pose estimation algorithm proposed in [1]. In this algorithm the problem is solved in real time using multiple non-overlapping cameras and the Extended Kalman Filter (EKF). Four cameras arranged in two back-to-back pairs are mounted on the platform of a moving robot. An important benefit of using multiple cameras for robot pose estimation is the capability of resolving vision uncertainties such as the bas-relief ambiguity. The proposed method is based on algorithmic skeletons for low, medium, and high levels of parallelization. The analysis shows that the use of a multiprocessor system enhances the system performance by about 87%. In addition, the proposed design is scalable, which is necessary in this application, where the number of features changes repeatedly.
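
The split into per-camera image processing (parallelizable) and a sequential filter-fusion stage can be sketched roughly as below; the thread-pool layout and the placeholder update are illustrative assumptions and do not reproduce the algorithmic skeletons of the paper.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def extract_measurements(cam_id, image):
    """Per-camera image processing (feature detection, matching, etc.).
    Simulated here with the image mean so that the sketch runs standalone."""
    return cam_id, np.array([image.mean()])

# Hypothetical images from four non-overlapping cameras (two back-to-back pairs).
images = [np.random.rand(120, 160) for _ in range(4)]

# Low/medium-level parallelism: each camera's image is processed concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda a: extract_measurements(*a), enumerate(images)))

# The high level stays sequential: the filter folds in one camera at a time.
state = np.zeros(6)                      # simplified robot pose state
for cam_id, z in sorted(results):
    state[:1] += 0.1 * z                 # placeholder for the real EKF update
print(state)
```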

Robot Posture Estimation Using Inner-Pipe Image

  • Yoon, Ji-Sup;Kang, E-Sok
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.173.1-173
    • /
    • 2001
  • This paper proposes an image processing methodology that estimates the pose of a pipe-crawling robot. Pipe-crawling robots are usually equipped with a lighting device and a camera on the head for monitoring and inspection purposes. The proposed methodology uses these devices without introducing extra sensors, and is based on the fact that the position and intensity of the reflected light vary with the robot posture. The algorithm is divided into two parts: estimating the translation and rotation angle of the camera, followed by the actual pose estimation of the robot. To investigate its performance, the algorithm is applied to a sewage maintenance robot.
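
A rough sketch of the first part of such an algorithm, locating the bright reflection of the lighting device and mapping its offset from the image centre to camera pan/tilt; the synthetic image, threshold, and focal length are assumptions for illustration only.

```python
import numpy as np

# Synthetic inner-pipe image: a bright reflection spot whose offset from the
# image centre changes with the robot's tilt (purely illustrative).
h, w = 240, 320
img = np.zeros((h, w))
img[100:120, 180:205] = 1.0              # hypothetical bright reflection

# Centroid of the bright region relative to the image centre.
ys, xs = np.nonzero(img > 0.5)
offset = np.array([xs.mean() - w / 2, ys.mean() - h / 2])

# Map the pixel offset to approximate pan/tilt of the camera head, assuming
# a hypothetical focal length in pixels.
f = 400.0
pan, tilt = np.degrees(np.arctan2(offset, f))
print("pan/tilt [deg]:", pan, tilt)
```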

Real-time Monocular Camera Pose Estimation using a Particle Filter Integrated with UKF (UKF와 연동된 입자필터를 이용한 실시간 단안시 카메라 추적 기법)

  • Seok-Han Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.5
    • /
    • pp.315-324
    • /
    • 2023
  • In this paper, we propose a real-time pose estimation method for a monocular camera using a particle filter integrated with UKF (unscented Kalman filter). While conventional camera tracking techniques combine camera images with data from additional devices such as gyroscopes and accelerometers, the proposed method aims to use only two-dimensional visual information from the camera without additional sensors. This leads to a significant simplification in the hardware configuration. The proposed approach is based on a particle filter integrated with UKF. The pose of the camera is estimated using UKF, which is defined individually for each particle. Statistics regarding the camera state are derived from all particles of the particle filter, from which the real-time camera pose information is computed. The proposed method demonstrates robust tracking, even in the case of rapid camera shakes and severe scene occlusions. The experiments show that our method remains robust even when most of the feature points in the image are obscured. In addition, we verify that when the number of particles is 35, the processing time per frame is approximately 25ms, which confirms that there are no issues with real-time processing.
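
The particle-filter-with-per-particle-filter structure can be sketched as below. For brevity the per-particle filter is reduced to a scalar Kalman step rather than a full UKF over the 6-DoF camera state, so this is only a structural illustration of the method, with hypothetical numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 35                                    # number of particles (as in the paper)

# Each particle carries its own filter state: here a simplified 1-D camera
# coordinate with variance, standing in for the full per-particle UKF.
particles = [{"x": rng.normal(0, 1), "P": 1.0, "w": 1.0 / N} for _ in range(N)]

def kalman_update(p, z, R=0.1, Q=0.01):
    """Placeholder per-particle update (a scalar Kalman step, not a full UKF)."""
    P = p["P"] + Q                        # predict
    innov = z - p["x"]                    # innovation
    S = P + R                             # innovation variance
    K = P / S                             # gain
    p["x"] += K * innov                   # correct
    p["P"] = (1 - K) * P
    return np.exp(-0.5 * innov ** 2 / S)  # measurement likelihood

z = 0.3                                   # hypothetical image-based measurement
for p in particles:
    p["w"] *= kalman_update(p, z)
total = sum(p["w"] for p in particles)
for p in particles:
    p["w"] /= total

# Camera pose estimate = statistics over all particles (weighted mean here).
estimate = sum(p["w"] * p["x"] for p in particles)
print(estimate)
```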

Golf Green Slope Estimation Using a Cross Laser Structured Light System and an Accelerometer

  • Pham, Duy Duong;Dang, Quoc Khanh;Suh, Young Soo
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.2
    • /
    • pp.508-518
    • /
    • 2016
  • In this paper, we propose a method combining an accelerometer with a cross structured light system to estimate the golf green slope. The cross-line laser provides two laser planes whose functions are computed with respect to the camera coordinate frame using least-squares optimization. By capturing the projections of the cross-line laser on the golf slope in a static pose using a camera, two 3D curve functions are approximated as high-order polynomials in the camera coordinate frame. The curve functions are then expressed in the world coordinate frame using a rotation matrix estimated from the accelerometer's output. The curves provide important information about the green, such as the height and the slope angle. The curve estimation accuracy is verified via experiments that use an OptiTrack camera system as a ground-truth reference.
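
A minimal sketch of the accelerometer-based frame alignment followed by a polynomial fit of the slope profile; the axis conventions, accelerometer reading, curve points, and polynomial degree are assumptions, not the paper's calibration.

```python
import numpy as np

# Hypothetical accelerometer reading in a static pose, dominated by gravity.
a = np.array([0.6, 0.3, 9.7])
roll  = np.arctan2(a[1], a[2])
pitch = np.arctan2(-a[0], np.hypot(a[1], a[2]))

# Rotation from the camera frame to a gravity-aligned world frame (roll, then pitch).
cr, sr, cp, sp = np.cos(roll), np.sin(roll), np.cos(pitch), np.sin(pitch)
Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
R = Ry @ Rx

# Hypothetical 3D laser-curve points expressed in the camera frame.
curve_cam = np.array([[x, 0.02 * x**2, 1.0 + 0.1 * x] for x in np.linspace(-0.5, 0.5, 11)])
curve_world = curve_cam @ R.T

# Fit the slope profile (height vs. horizontal position) with a polynomial.
coeffs = np.polyfit(curve_world[:, 0], curve_world[:, 2], deg=3)
print("slope polynomial coefficients:", coeffs)
```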

Camera Motion and Structure Recovery Using Two-step Sampling (2단계 샘플링을 이용한 카메라 움직임 및 장면 구조 복원)

  • 서정국;조청운;홍현기
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.5
    • /
    • pp.347-356
    • /
    • 2003
  • Camera pose and scene geometry estimation from video sequences is widely used in various areas such as image composition. Structure and motion recovery based on an auto-calibration algorithm can insert synthetic 3D objects into real but unmodeled scenes and create their views from the camera positions. However, most previous methods require bundle adjustment or a non-linear minimization process for more precise results. This paper presents a new auto-calibration algorithm for video sequences based on two steps: the first selects key frames, and the second removes key frames with inaccurate camera matrices based on absolute quadric estimation by LMedS. In the experimental results, we have demonstrated that the proposed method can achieve precise camera pose estimation and scene geometry recovery without bundle adjustment. In addition, virtual objects have been inserted into the real images by using the recovered camera trajectories.
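
The LMedS idea, sampling minimal subsets and keeping the model with the least median of squared residuals, can be illustrated with a simple robust line fit standing in for the absolute quadric estimation; the data and model below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def lmeds_fit(xs, ys, n_trials=100):
    """Least-median-of-squares line fit: a stand-in for the paper's LMedS-based
    absolute quadric estimation, illustrating only the robust sampling loop."""
    best_model, best_med = None, np.inf
    for _ in range(n_trials):
        i, j = rng.choice(len(xs), size=2, replace=False)
        if xs[i] == xs[j]:
            continue
        slope = (ys[j] - ys[i]) / (xs[j] - xs[i])
        intercept = ys[i] - slope * xs[i]
        med = np.median((ys - (slope * xs + intercept)) ** 2)   # median, not sum, of residuals
        if med < best_med:
            best_model, best_med = (slope, intercept), med
    return best_model, best_med

xs = np.linspace(0, 10, 50)
ys = 2.0 * xs + 1.0 + rng.normal(0, 0.1, 50)
ys[::10] += 5.0                           # gross outliers (e.g. inaccurate key frames)
print(lmeds_fit(xs, ys))
```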

An Automatic Data Collection System for Human Pose using Edge Devices and Camera-Based Sensor Fusion (엣지 디바이스와 카메라 센서 퓨전을 활용한 사람 자세 데이터 자동 수집 시스템)

  • Young-Geun Kim;Seung-Hyeon Kim;Jung-Kon Kim;Won-Jung Kim
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.189-196
    • /
    • 2024
  • Frequent false alarms from the Intelligent Selective Control System have raised significant concerns. These persistent issues have led to declines in operational efficiency and market credibility among agents. Developing a new model or replacing the existing one to mitigate false alarms entails substantial opportunity costs; hence, improving the quality of the training dataset is pragmatic. However, smaller organizations face challenges with inadequate capabilities in dataset collection and refinement. This paper proposes an automatic human pose data collection system centered around a human pose estimation model, utilizing camera-based sensor fusion techniques and edge devices. The system facilitates the direct collection and real-time processing of field data at the network periphery, distributing the computational load that is typically centralized. Additionally, by directly labeling field data, it aids in constructing new training datasets.
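
A sketch of what the edge-side collection loop might look like: each captured frame is passed through the on-device pose model and the resulting keypoints are written out directly as labels. The model stub, confidence filter, and output format are hypothetical placeholders, not the system described in the paper.

```python
import json
import numpy as np

def pose_model(frame):
    """Placeholder for the on-device human pose estimation model; it returns
    hypothetical (x, y, confidence) keypoints so the sketch runs standalone."""
    return np.random.rand(17, 3).tolist()

def collect(frames, out_path="labels.jsonl", min_conf=0.2):
    """Edge-side loop: run pose estimation on each frame and write the keypoints
    out as labels, so training data is gathered where it is captured."""
    with open(out_path, "w") as f:
        for idx, frame in enumerate(frames):
            keypoints = pose_model(frame)
            mean_conf = float(np.mean([kp[2] for kp in keypoints]))
            if mean_conf >= min_conf:                 # keep reasonably confident samples
                f.write(json.dumps({"frame": idx, "keypoints": keypoints}) + "\n")

collect([np.zeros((480, 640, 3)) for _ in range(3)])
```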