• Title/Summary/Keyword: camera pose estimation

2.5D human pose estimation for shadow puppet animation

  • Liu, Shiguang;Hua, Guoguang;Li, Yang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.4 / pp.2042-2059 / 2019
  • Digital shadow puppetry has traditionally relied on expensive motion capture equipment and complex design. In this paper, a low-cost driving technique is presented that captures human pose data with a simple camera in real scenarios and uses it to drive a virtual Chinese shadow play in a 2.5D scene. We propose a dedicated method for extracting human pose data to drive the virtual shadow play, called 2.5D human pose estimation. First, we use a 3D human pose estimation method to obtain initial pose data. In the subsequent transformation, we treat the depth feature as an implicit feature and map the body joints to a constrained range; we call the resulting data 2.5D pose data. However, 2.5D pose data cannot directly control the shadow puppet well, owing to the differences in motion pattern and composition structure between a real pose and a shadow puppet. To this end, the 2.5D pose data are transformed in an implicit pose mapping space based on a self-designed network, and the final 2.5D pose expression data are produced for animating the shadow puppets. Experimental results demonstrate the effectiveness of the new method.
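
As a rough illustration of the 2.5D conversion the abstract describes (keeping 2D joint positions while folding depth in as an implicit, range-constrained feature), the sketch below shows one possible reading; the joint layout, normalization, and constraint factor are assumptions, not the authors' implementation.

```python
import numpy as np

def to_2_5d(joints_3d, depth_scale=0.25):
    """Map 3D joints (N, 3) to 2.5D: keep (x, y) and fold depth in as
    an implicit per-joint feature (illustrative assumption only)."""
    xy = joints_3d[:, :2]
    z = joints_3d[:, 2]
    # Normalize depth over the skeleton to [0, 1] (the implicit feature).
    z_norm = (z - z.min()) / (np.ptp(z) + 1e-8)
    # Constrain joints: pull deeper joints toward the root so the figure
    # stays within the flat 2.5D puppet plane.
    root = xy[0]  # assume joint 0 is the pelvis/root
    constrained = root + (xy - root) * (1.0 - depth_scale * z_norm)[:, None]
    return np.hstack([constrained, z_norm[:, None]])  # (N, 3): x, y, implicit z

pose_2_5d = to_2_5d(np.random.rand(17, 3))  # dummy 17-joint skeleton
print(pose_2_5d.shape)                      # (17, 3)
```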

Vision-based Small UAV Indoor Flight Test Environment Using Multi-Camera (멀티카메라를 이용한 영상정보 기반의 소형무인기 실내비행시험환경 연구)

  • Won, Dae-Yeon;Oh, Hyon-Dong;Huh, Sung-Sik;Park, Bong-Gyun;Ahn, Jong-Sun;Shim, Hyun-Chul;Tahk, Min-Jea
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.37 no.12 / pp.1209-1216 / 2009
  • This paper presents the pose estimation of a small UAV using visual information from low-cost cameras installed indoors. To overcome the limitations of outdoor flight experiments, an indoor flight test environment based on a multi-camera system is proposed. The computer vision algorithms for the proposed system include camera calibration, color marker detection, and pose estimation. The well-known extended Kalman filter is used to obtain accurate position and pose estimates for the small UAV. The paper concludes with several experimental results illustrating the performance and properties of the proposed vision-based indoor flight test environment.
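
To make the extended Kalman filter stage concrete, the sketch below runs one predict/update cycle with a constant-velocity motion model and a camera-derived position measurement; the state layout, sample rate, and noise levels are assumptions, and the real system would also fold in attitude states and the marker measurement model.

```python
import numpy as np

dt = 0.02                                     # assumed 50 Hz camera rate
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)     # constant-velocity model
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # cameras observe position only
Q = 1e-3 * np.eye(6)                          # assumed process noise
R = 1e-2 * np.eye(3)                          # assumed measurement noise

def ekf_step(x, P, z):
    """One predict/update cycle. The model here is linear, so the
    'extended' Jacobians reduce to the constant matrices F and H."""
    x = F @ x                                 # predict state
    P = F @ P @ F.T + Q                       # predict covariance
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)                   # correct with measurement z
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = np.zeros(6), np.eye(6)
x, P = ekf_step(x, P, z=np.array([0.1, 0.0, 1.5]))  # triangulated position
```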

2D - 3D Human Face Verification System based on Multiple RGB-D Camera using Head Pose Estimation (얼굴 포즈 추정을 이용한 다중 RGB-D 카메라 기반의 2D - 3D 얼굴 인증을 위한 시스템)

  • Kim, Jung-Min;Li, Shengzhe;Kim, Hak-Il
    • Journal of the Korea Institute of Information Security & Cryptology / v.24 no.4 / pp.607-616 / 2014
  • Face recognition is a big challenge in surveillance systems, since different rotation angles of the face make it difficult to recognize the same person. This paper proposes a novel method to recognize faces under different head poses by using 3D information of the face. First, head pose estimation (estimation of the head pose angles) is accomplished by the POSIT algorithm. Then, 3D face image data are constructed using the estimated head pose. After that, matching between the 2D image and the constructed 3D face is performed. Face verification is accomplished using a commercial face recognition SDK. Performance evaluation of the proposed method indicates that the head pose estimation error is below 10 degrees and the matching rate is about 95%.
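
POSIT itself survives only in OpenCV's legacy C API, so the sketch below uses cv2.solvePnP as a modern stand-in for the same model-to-image head pose computation; the 3D landmark model, 2D detections, and intrinsics are placeholder values, not the paper's data.

```python
import cv2
import numpy as np

# Generic 3D facial landmark model in mm (placeholder values):
# nose tip, chin, eye corners, mouth corners.
model_3d = np.array([
    [0.0, 0.0, 0.0], [0.0, -63.6, -12.5],
    [-43.3, 32.7, -26.0], [43.3, 32.7, -26.0],
    [-28.9, -28.9, -24.1], [28.9, -28.9, -24.1]], dtype=np.float64)

# Matching 2D detections in pixels (would come from a landmark detector).
image_2d = np.array([[359, 391], [399, 561], [337, 297],
                     [513, 301], [345, 465], [453, 469]], dtype=np.float64)

f, cx, cy = 800.0, 320.0, 240.0          # assumed camera intrinsics
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]])

ok, rvec, tvec = cv2.solvePnP(model_3d, image_2d, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)               # rotation vector -> matrix
print(cv2.RQDecomp3x3(R)[0])             # (pitch, yaw, roll) in degrees
```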

Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing (수직이착륙 무인항공기 자동 착륙을 위한 영상기반 항법)

  • Lee, Sang-Hoon;Song, Jin-Mo;Bae, Jong-Sue
    • Journal of the Korea Institute of Military Science and Technology / v.18 no.3 / pp.226-233 / 2015
  • Pose estimation is an important operation for many vision tasks. This paper presents a method of estimating the camera pose using a known landmark, for the purpose of autonomous vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) landing. The proposed method uses a distinctive methodology to solve the pose estimation problem: we propose to combine extrinsic parameters from known and unknown 3-D (three-dimensional) feature points, together with an inertial estimate of the camera's 6-DOF (degrees of freedom) pose, into one linear inhomogeneous equation. This allows us to use singular value decomposition (SVD) to neatly solve the given optimization problem. We present experimental results that demonstrate the ability of the proposed method to estimate the camera's 6-DOF pose with ease of implementation.
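
Stacking constraints into one linear inhomogeneous system Ax = b and solving it by SVD, as the abstract describes, corresponds to the standard pseudo-inverse least-squares solve sketched below; A and b are placeholders for the stacked feature and inertial constraints, not the paper's actual matrices.

```python
import numpy as np

def solve_pose_lstsq(A, b):
    """Minimize ||Ax - b|| via SVD (pseudo-inverse); x would hold the
    stacked 6-DOF camera unknowns in a formulation like the paper's."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Zero out tiny singular values for numerical stability.
    s_inv = np.where(s > 1e-10 * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Placeholder system: 12 constraint rows, 6 unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((12, 6))
x_true = rng.standard_normal(6)
b = A @ x_true
print(np.allclose(solve_pose_lstsq(A, b), x_true))  # True
```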

An Algorithm for a pose estimation of a robot using Scale-Invariant feature Transform

  • Lee, Jae-Kwang;Huh, Uk-Youl;Kim, Hak-Il
    • Proceedings of the KIEE Conference / 2004.11c / pp.517-519 / 2004
  • This paper describes an approach to estimating a robot's pose from an image. The pose estimation algorithm can be broken down into three stages: extracting scale-invariant features, matching these features, and computing an affine invariant. In the first step, the robot-mounted mono camera captures an image of the environment, and feature extraction is executed on the captured image; the extracted features are recorded in a database. In the matching stage, a Random Sample Consensus (RANSAC) method is employed to match these features. After matching, the robot pose is estimated from the positions of the features by computing the affine invariant. The algorithm is implemented and demonstrated in a Matlab program.
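
In OpenCV terms, the extract/match/verify pipeline the abstract outlines might look like the following sketch, with SIFT features, ratio-test matching, and RANSAC estimating an affine transform between the database view and the current view; the file names and thresholds are placeholders.

```python
import cv2
import numpy as np

# Placeholder image paths; any two overlapping environment views work.
img1 = cv2.imread("db_view.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("current_view.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # database features
kp2, des2 = sift.detectAndCompute(img2, None)   # live-image features

# Ratio-test matching, then RANSAC to reject outlier correspondences.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good])
dst = np.float32([kp2[m.trainIdx].pt for m in good])
M, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                  ransacReprojThreshold=3.0)
print("affine transform:\n", M)   # pose change encoded as a 2x3 affine
```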

Omni-directional Visual-LiDAR SLAM for Multi-Camera System (다중 카메라 시스템을 위한 전방위 Visual-LiDAR SLAM)

  • Javed, Zeeshan;Kim, Gon-Woo
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.353-358 / 2022
  • Due to the limited field of view of a pinhole camera, camera pose estimation applications such as visual SLAM lack stability and accuracy. Nowadays, multiple-camera setups and large field-of-view cameras are used to address these issues. However, a multiple-camera system increases the computational complexity of the algorithm. Therefore, a multi-view tracking algorithm is proposed for multiple-camera-assisted visual simultaneous localization and mapping (vSLAM) that balances the feature budget between tracking and local mapping. The proposed algorithm is based on the PanoSLAM architecture with a panoramic camera model. To avoid the scale issue, a 3D LiDAR is fused with the omnidirectional camera setup: depth is estimated directly from the 3D LiDAR, and the remaining features are triangulated from pose information. To validate the method, we collected a dataset in an outdoor environment and performed extensive experiments. Accuracy was measured by the absolute trajectory error, which shows comparable robustness in various environments.
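
One common way to realize the "depth directly from 3D LiDAR" step is to project the LiDAR cloud into each camera and hand every tracked feature the depth of its nearest projected point; the sketch below follows that pattern, with the extrinsics, intrinsics, and pixel search radius as assumptions rather than the paper's calibration.

```python
import numpy as np

def lidar_depth_for_features(pts_lidar, feats_px, K, T_cam_lidar, max_px=3.0):
    """pts_lidar: (N, 3) LiDAR points; feats_px: (M, 2) feature pixels.
    Returns per-feature depth, or NaN when no LiDAR point lands nearby."""
    # Transform LiDAR points into the camera frame.
    pts_h = np.hstack([pts_lidar, np.ones((len(pts_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]      # keep points in front
    # Project into pixel coordinates.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    depths = np.full(len(feats_px), np.nan)
    for i, f in enumerate(feats_px):
        d2 = np.sum((uv - f) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] < max_px ** 2:
            depths[i] = pts_cam[j, 2]
    return depths
```

Features that come back as NaN here would fall back to triangulation from pose information, as the abstract describes.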

Pedestrian Recognition of Crosswalks Using Foot Estimation Techniques Based on HigherHRNet (HigherHRNet 기반의 발추정 기법을 통한 횡단보도 보행자 인식)

  • Jung, Kyung-Min;Han, Joo-Hoon;Lee, Hyun
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.5 / pp.171-177 / 2021
  • It is difficult to accurately extract the features of a pedestrian, because the pedestrian at a crosswalk is photographed with a camera positioned higher than the pedestrian. It is even harder when part of the pedestrian's body is covered by an umbrella or parasol, or when the pedestrian is holding an object. Representative approaches to this problem include Object Detection, Instance Segmentation, and Pose Estimation; this study uses the Pose Estimation approach. In particular, we aim to increase the recognition rate of pedestrians in crosswalks by maintaining the image resolution through HigherHRNet and applying a foot estimation technique. Finally, we show the superiority of the proposed method by applying both the existing method and the proposed method to several datasets with occluded body parts and analyzing the results.
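
A minimal form of foot estimation on top of a COCO-style 17-keypoint skeleton (the format HigherHRNet is commonly trained on) extrapolates each foot along the knee-to-ankle direction; the extension ratio and keypoint indices below are illustrative heuristics, not the paper's trained model.

```python
import numpy as np

# COCO keypoint indices: 13/14 = knees, 15/16 = ankles.
KNEES, ANKLES = (13, 14), (15, 16)

def estimate_feet(kpts, extend=0.25):
    """kpts: (17, 3) array of (x, y, confidence) from a pose network
    such as HigherHRNet. Extrapolates each foot a fixed fraction of
    the shank length past the ankle (illustrative heuristic only)."""
    feet = []
    for k, a in zip(KNEES, ANKLES):
        knee, ankle = kpts[k, :2], kpts[a, :2]
        feet.append(ankle + extend * (ankle - knee))
    return np.array(feet)               # (2, 2): left foot, right foot

kpts = np.random.rand(17, 3)            # stand-in for network output
print(estimate_feet(kpts))
```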

Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks (다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망)

  • Ahn, Byungtae;Choi, Dong-Geol;Kweon, In So
    • The Journal of Korea Robotics Society / v.12 no.3 / pp.313-321 / 2017
  • One of the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems is face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation. In these applications, accurate head pose estimation is an important issue. However, conventional methods have lacked the accuracy, robustness, or processing speed needed in practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning using a small grayscale image. This network jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination change and large pose change. The proposed framework quantitatively and qualitatively outperforms the state of the art, with an average head pose error of less than 4.5° in real time.
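
The multi-task design described here (a shared trunk with one branch detecting faces and another regressing head pose) can be sketched in PyTorch as below; the layer sizes, input resolution, and three-angle regression target are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskHeadPoseNet(nn.Module):
    """Shared conv trunk with two heads: face detection (binary) and
    head pose regression (yaw, pitch, roll). Sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.detect = nn.Linear(32 * 16, 2)   # face / non-face logits
        self.pose = nn.Linear(32 * 16, 3)     # yaw, pitch, roll (degrees)

    def forward(self, x):
        h = self.trunk(x)
        return self.detect(h), self.pose(h)

net = MultiTaskHeadPoseNet()
logits, angles = net(torch.randn(1, 1, 32, 32))  # small grayscale input
print(logits.shape, angles.shape)  # torch.Size([1, 2]) torch.Size([1, 3])
```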

Online Face Pose Estimation based on A Planar Homography Between A User's Face and Its Image (사용자의 얼굴과 카메라 영상 간의 호모그래피를 이용한 실시간 얼굴 움직임 추정)

  • Koo, Deo-Olla;Lee, Seok-Han;Doo, Kyung-Soo;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.4 / pp.25-33 / 2010
  • In this paper, we propose a simple and efficient algorithm for head pose estimation using a single camera. First, four sub-images are obtained from the camera image for face feature extraction; these sub-images are used as feature templates. The templates are then tracked by Kalman filtering, and the camera projection matrix is computed from the projective mapping between the templates and their coordinates in the 3D coordinate system. The user's face pose is then estimated from the projective mapping (a planar homography) between the user's face and the image plane. The accuracy and robustness of our technique are verified by experimental results on several real video sequences.
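
Recovering pose from a face-plane homography can be sketched with OpenCV as follows: estimate the homography from the four tracked templates, then decompose it given the camera intrinsics; all coordinates and intrinsics below are placeholders, not the paper's data.

```python
import cv2
import numpy as np

# Four template points on the (planar) face in a face-plane frame (mm),
# and their tracked pixel positions (placeholder values).
face_pts = np.float32([[0, 0], [60, 0], [60, 80], [0, 80]])
img_pts = np.float32([[210, 140], [330, 150], [325, 300], [205, 290]])

H, _ = cv2.findHomography(face_pts, img_pts)

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
n, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
# Up to four (R, t) candidates; visibility constraints select the true one.
print(f"{n} candidate poses; first rotation:\n{Rs[0]}")
```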

An Implementation of QR Code based On-line Mobile Augmented Reality System (QR코드 기반의 온라인 모바일 증강현실 시스템의 구현)

  • Park, Min-Woo;Park, Jung-Pil;Jung, Soon-Ki
    • Journal of Korea Multimedia Society / v.15 no.8 / pp.1004-1016 / 2012
  • This paper proposes a mobile augmented reality system that provides detailed product information using the QR codes included on the products. In the proposed system, camera pose estimation is performed using both marker-based and markerless methods: when the camera can see the QR code, the camera pose is estimated from the set of rectangles in the QR code; when the QR code is out of sight, the camera pose is estimated from the homography between consecutive frames. Moreover, the augmented reality content in the proposed system is described by meta-data, so the user can create contents for various scenarios using only a meta-data file, without modifying the system. In particular, the on-line server keeps the contents up to date, which reduces unnecessary updates of the program.
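
The marker-based branch (pose from the QR code's square outline) might be sketched with OpenCV as below: detect the four QR corners and solve a planar PnP against the code's known physical size; the side length, intrinsics, and frame path are assumptions, not the system's actual values.

```python
import cv2
import numpy as np

img = cv2.imread("frame.png")            # placeholder frame path
ok, corners = cv2.QRCodeDetector().detect(img)

if ok:
    side = 50.0                          # assumed QR side length in mm
    # 3D corners of the QR square in its own plane (z = 0).
    obj = np.array([[0, 0, 0], [side, 0, 0],
                    [side, side, 0], [0, side, 0]], dtype=np.float64)
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    ok, rvec, tvec = cv2.solvePnP(obj,
                                  corners.reshape(-1, 2).astype(np.float64),
                                  K, None, flags=cv2.SOLVEPNP_ITERATIVE)
    print("camera pose:", rvec.ravel(), tvec.ravel())
```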