• Title/Summary/Keyword: Camera pose

277 search results

An Implementation of QR Code based On-line Mobile Augmented Reality System (QR코드 기반의 온라인 모바일 증강현실 시스템의 구현)

  • Park, Min-Woo; Park, Jung-Pil; Jung, Soon-Ki
    • Journal of Korea Multimedia Society / v.15 no.8 / pp.1004-1016 / 2012
  • This paper proposes a mobile augmented reality system that provides detailed product information through QR codes printed on the products. The proposed system estimates the camera pose with both marker-based and markerless methods: while the QR code is visible, the pose is estimated from the set of rectangles inside the QR code; once the code is out of sight, the pose is propagated through the homography between consecutive frames. The augmented-reality content is described by meta-data, so users can author content for various scenarios by editing only the meta-data file, without modifying the system itself. An on-line server keeps the content up to date, which avoids unnecessary updates of the program.
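The markerless fallback above rests on a frame-to-frame homography. A minimal sketch of homography estimation with the standard DLT method, using NumPy only; the point sets and function names are illustrative, not from the paper:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (N >= 4 point
    pairs) with the standard DLT formulation, solved by SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so H[2, 2] == 1

def project(H, pts):
    """Apply homography H to an (N, 2) array of 2D points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In a tracking loop, `src`/`dst` would be matched features between consecutive frames, and the recovered homography chains the last known QR-code pose forward.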

Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks (다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망)

  • Ahn, Byungtae; Choi, Dong-Geol; Kweon, In So
    • The Journal of Korea Robotics Society / v.12 no.3 / pp.313-321 / 2017
  • Face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation are among the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems, and all of them depend on accurate head pose estimation. Conventional methods, however, lack the accuracy, robustness, or processing speed required in practice. In this paper, we propose a novel method for estimating head pose with a monocular camera. The algorithm is a deep neural network for multi-task learning on small grayscale images: it jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination change and large pose variation. The proposed framework quantitatively and qualitatively outperforms the state of the art, with an average head pose mean error below $4.5^{\circ}$ in real time.
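Head pose errors like the mean error quoted above are usually reported over yaw/pitch/roll angles recovered from a rotation matrix. A small sketch of the ZYX Euler-angle conventions typically involved (the convention choice is an assumption, not stated in the abstract):

```python
import numpy as np

def euler_to_matrix(yaw, pitch, roll):
    """Compose R = Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def matrix_to_euler(R):
    """Recover (yaw, pitch, roll) from R, ZYX convention,
    away from gimbal lock (|pitch| < 90 degrees)."""
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return yaw, pitch, roll
```

A per-angle mean absolute error in degrees between predicted and ground-truth triples then gives the kind of metric the abstract reports.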

Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion (천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정)

  • Shin, Ok-Shik; Park, Chan-Gook
    • Journal of Institute of Control, Robotics and Systems / v.18 no.1 / pp.54-61 / 2012
  • In this paper, a pose estimation method for a satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, used to verify a satellite system on the ground, resembles a mobile robot: it has thrusters and a reaction wheel as actuators and floats above the floor on compressed air. An EKF (Extended Kalman Filter) fuses the MEMS IMU with a vision system consisting of a single camera and infrared LEDs that serve as ceiling landmarks. Such a fusion filter usually takes the image positions of feature points as its measurements; however, this can cause position error due to the MEMS IMU bias whenever the camera image is unavailable and the bias has not been properly estimated by the filter. We therefore propose a fusion method that uses both the positions of the feature points and the camera velocity determined from their optical flow. Experiments verify that the proposed method is more robust to IMU bias than the method that uses only the positions of the feature points.
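The benefit of adding a velocity measurement can be illustrated with a toy linear Kalman filter: a biased acceleration input drifts the prediction, and position-plus-velocity updates pull it back. This is a 1D sketch under assumed noise values, not the paper's EKF:

```python
import numpy as np

def kalman_step(x, P, u, z, dt, q, r):
    """One predict/update cycle of a linear Kalman filter on state
    [position, velocity].  u is the (possibly biased) IMU acceleration,
    z is a vision measurement of [position, velocity] (H = I)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    Q = q * np.eye(2)
    R = r * np.eye(2)
    # predict with the inertial input
    x = F @ x + B * u
    P = F @ P @ F.T + Q
    # update with the vision measurement
    K = P @ np.linalg.inv(P + R)
    x = x + K @ (z - x)
    P = (np.eye(2) - K) @ P
    return x, P
```

Feeding the filter an acceleration with a constant bias while measuring both position and velocity keeps the estimate bounded, which is the intuition behind the optical-flow velocity measurement above.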

Robust Vehicle Occupant Detection based on RGB-Depth-Thermal Camera (다양한 환경에서 강건한 RGB-Depth-Thermal 카메라 기반의 차량 탑승자 점유 검출)

  • Song, Changho; Kim, Seung-Hun
    • The Journal of Korea Robotics Society / v.13 no.1 / pp.31-37 / 2018
  • With the development of self-driving cars, in-vehicle safety has recently become a hot topic. Passive safety systems such as airbags and seat belts are evolving into active systems that monitor the state and behavior of the passengers, including the driver, to mitigate risk. Occupant information is also expected to enable customized services such as seat adjustment, air-conditioning control, and D.W.D (Distraction While Driving) warnings. In this paper, we propose a robust vehicle occupant detection algorithm based on an RGB-Depth-Thermal camera. The sensor system is configured to be robust across various environments, and OpenPose, a deep-learning pose estimation algorithm, is used for occupant detection; it works well not only on RGB images but also on thermal images, even with an existing pre-trained model. The algorithm will be extended to acquire higher-level information, such as passenger posture detection and face recognition mentioned in the introduction, and to provide customized active convenience services.
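Once a pose estimator returns per-person keypoints, seat occupancy reduces to a geometric test against per-seat regions. A hedged sketch of that final step (the majority-vote rule and the ROI format are assumptions for illustration, not the paper's method):

```python
def count_occupied_seats(keypoints_per_person, seat_rois):
    """Count occupied seats.  Each person is a list of (x, y) keypoints
    (e.g. from a pose estimator such as OpenPose); each seat ROI is a
    box (x_min, y_min, x_max, y_max).  A seat counts as occupied when
    some person has a majority of keypoints inside it."""
    occupied = set()
    for pts in keypoints_per_person:
        for i, (x0, y0, x1, y1) in enumerate(seat_rois):
            inside = sum(1 for (x, y) in pts if x0 <= x <= x1 and y0 <= y <= y1)
            if inside > len(pts) / 2:
                occupied.add(i)
    return len(occupied)
```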

Efficient Object Tracking System Using the Fusion of a CCD Camera and an Infrared Camera (CCD카메라와 적외선 카메라의 융합을 통한 효과적인 객체 추적 시스템)

  • Kim, Seung-Hun; Jung, Il-Kyun; Park, Chang-Woo; Hwang, Jung-Hoon
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.229-235 / 2011
  • To build a robust object tracking and identification system for an intelligent robot and/or home system, heterogeneous sensor fusion between a visible-ray system and an infrared-ray system is proposed. The system separates the object by combining the ROIs (Regions of Interest) estimated from the two images of a heterogeneous sensor that consolidates an ordinary CCD camera and an IR (Infrared) camera. The human body and face are detected in both images by different algorithms, such as histogram, optical-flow, skin-color, and Haar models, and the pose of the human body is estimated from the body detection in the IR image using PCA together with AdaBoost. The results of the individual detectors are then fused to extract the best detection. To verify the heterogeneous sensor fusion system, experiments were conducted in various environments; the results show good tracking and identification performance regardless of environmental changes. The application area of the proposed system is not limited to robots or home systems but extends to surveillance and military systems.
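Combining ROIs from two sensors typically means intersecting the boxes and scoring their agreement. A minimal sketch of such a fusion step (the IoU confidence score is an assumption for illustration; the paper does not specify its fusion rule):

```python
def fuse_rois(roi_a, roi_b):
    """Intersect two ROIs (x0, y0, x1, y1) reported by different
    sensors for the same object.  Returns the overlap box (or None if
    the boxes are disjoint) and the IoU as an agreement score."""
    x0 = max(roi_a[0], roi_b[0])
    y0 = max(roi_a[1], roi_b[1])
    x1 = min(roi_a[2], roi_b[2])
    y1 = min(roi_a[3], roi_b[3])
    if x1 <= x0 or y1 <= y0:
        return None, 0.0
    inter = (x1 - x0) * (y1 - y0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    iou = inter / (area(roi_a) + area(roi_b) - inter)
    return (x0, y0, x1, y1), iou
```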

Automatic Camera Pose Determination from a Single Face Image

  • Wei, Li; Lee, Eung-Joo; Ok, Soo-Yol; Bae, Sung-Ho; Lee, Suk-Hwan; Choo, Young-Yeol; Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.10 no.12 / pp.1566-1576 / 2007
  • Camera pose information derived from a 2D face image is essential for synchronizing a virtual 3D face model with the real face, and is also useful for human-computer interfaces, 3D object estimation, automatic camera control, and so on. In this paper, we present an algorithm that determines the camera position from a single 2D face image using the relationship between the mouth position and the face-region boundary. The algorithm first corrects color bias with a lighting compensation step, then nonlinearly transforms the image into the $YC_bC_r$ color space and uses the visible chrominance features of the face in this color space to detect the face region. For each face candidate, the nearly inverse relationship between the $C_b$ and $C_r$ clusters of facial features is used to locate the mouth. The geometric relationship between the mouth position and the face-region boundary then determines the camera rotation angles about the x- and y-axes, and the relationship between face-region size and camera-face distance determines the distance itself. Experimental results demonstrate the validity of the algorithm, with a correct determination rate sufficient for practical application.
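The two geometric relations in the abstract, rotation from the mouth offset and distance from apparent face size, can be sketched with a pinhole model. All pixel quantities and the small-angle mapping below are illustrative assumptions, not the paper's exact formulas:

```python
import math

def camera_face_geometry(mouth, face_box, face_width_mm, focal_px):
    """Rough camera-face geometry from a single image: rotation angles
    from the mouth offset inside the face box, distance from the
    apparent face width via the pinhole model Z = f * W / w."""
    x0, y0, x1, y1 = face_box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    w, h = x1 - x0, y1 - y0
    # normalized mouth offset -> yaw (about y) / pitch (about x)
    yaw = math.atan2(2 * (mouth[0] - cx), w)
    pitch = math.atan2(2 * (mouth[1] - cy), h)
    # pinhole model: distance = focal_px * real_width / pixel_width
    distance_mm = focal_px * face_width_mm / w
    return yaw, pitch, distance_mm
```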


Error Quantification of Photogrammetric 6DOF Pose Estimation (사진계측기반 6자유도 포즈 예측의 오차 정량화)

  • Kim, Sang-Jin; You, Heung-Cheol; Reu, Taekyu
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.41 no.5 / pp.350-356 / 2013
  • Photogrammetry has been widely used for measuring important physical quantities in aerospace because it is a remote, non-contact measurement method. In this study, we analyze the photogrammetric errors that can occur in six-degrees-of-freedom (6DOF) analysis among coordinate systems with a single camera. An error analysis program was developed and validated on a geometric problem converted from the imaging process. Assuming that the statistics of the estimated camera pose needed for 6DOF analysis are normally distributed, we quantify the photogrammetric error using the estimated population standard deviation.
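Estimating a population standard deviation from repeated pose measurements, and turning it into a normal-model error bound, can be sketched in a few lines (the 3-sigma bound is a standard convention, assumed here rather than taken from the paper):

```python
import math

def sample_std(xs):
    """Estimate the population standard deviation from repeated
    measurements using the unbiased-variance (ddof = 1) estimator."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return math.sqrt(var)

def error_bound(xs, k=3.0):
    """k-sigma bound: under a normal error model, about 99.7% of
    errors fall within +/- 3 sigma of the mean."""
    return k * sample_std(xs)
```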

Fast Camera Pose Estimation from a Single Frame for Augmented Reality Applications (증강현실 시스템 구현을 위한 단일 프레임에서의 고속 카메라 위치추정)

  • Lee, Bum-Jong; Park, Jong-Seung; Sung, Mee-Young; Noh, Sung-Ryul
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.7-14 / 2006
  • This paper proposes a fast single-frame method that accurately computes the camera pose and composites virtual objects into video without 3D reconstruction or camera calibration. The camera pose is computed from the object's local coordinates and the corresponding image coordinates in a single image. A structure computation based on a factorization method under the orthographic projection model enables fast estimation of the camera pose. Because the method relies on the orthographic model, its accuracy depends on the choice of reference points; we therefore propose selecting reference points per object to obtain an accurate camera pose. Both the camera pose and the object shape are computed from a single frame, so the pose estimate can be used immediately for video compositing. To validate the approach, we implemented an augmented reality system on real video, ran the entire pipeline of pose computation and video compositing on single frames, and demonstrated the practicality of the proposed method.
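The orthographic-model pose computation above can be sketched as a weak-perspective (scaled orthographic) fit: solve a linear least-squares problem for a 2x3 projection matrix and translation, which factorization-style methods then decompose into scale times rotation rows. A NumPy sketch under that assumption:

```python
import numpy as np

def weak_perspective_pose(pts3d, pts2d):
    """Least-squares scaled-orthographic pose: find the 2x3 matrix M
    and translation t with  [u, v]^T = M @ X + t  for 3D points X and
    image points (u, v).  For noise-free data, M = scale * [r1; r2]
    with r1, r2 the first two rows of the rotation matrix."""
    n = len(pts3d)
    A = np.hstack([pts3d, np.ones((n, 1))])            # [X | 1]
    sol_u, *_ = np.linalg.lstsq(A, pts2d[:, 0], rcond=None)
    sol_v, *_ = np.linalg.lstsq(A, pts2d[:, 1], rcond=None)
    M = np.vstack([sol_u[:3], sol_v[:3]])
    t = np.array([sol_u[3], sol_v[3]])
    scale = (np.linalg.norm(M[0]) + np.linalg.norm(M[1])) / 2
    return M, t, scale
```

At least four non-coplanar reference points are needed for a unique solution, which is one reason the choice of reference points matters for accuracy, as the abstract notes.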


Augmented Reality based Interactive Storyboard System (증강현실 기반의 인터랙티브 스토리보드 제작 시스템)

  • Park, Jun
    • Journal of the Korea Computer Graphics Society / v.13 no.2 / pp.17-22 / 2007
  • In the early stages of film or animation production, a storyboard is used to visually describe the outline of a story. Drawings or photographs, as well as texts, are employed to specify character/item placements and camera pose. However, commercially available storyboard tools are mainly drawing and editing tools and provide no functionality for item placement or camera control. In this paper, an Augmented Reality based storyboard tool is presented that provides an intuitive, easy-to-use interface for storyboard development. Using the presented tool, non-expert users can compose 3D scenes in their real environments through tangible building blocks, which are used to fetch the corresponding 3D models and their poses.


Robot Posture Estimation Using Inner-Pipe Image

  • Yoon, Ji-Sup; Kang, E-Sok
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2001.10a / pp.173.1-173 / 2001
  • This paper proposes an image-processing algorithm that estimates the pose of a pipe-crawling robot. Pipe-crawling robots are usually equipped with a lighting device and a camera on the head for monitoring and inspection. The proposed method uses these existing devices, without introducing extra sensors, and is based on the fact that the position and intensity of the reflected light vary with the robot's posture. The algorithm is divided into two parts: estimating the translation and rotation angle of the camera, followed by the actual pose estimation of the robot. To investigate its performance, the algorithm is applied to a sewage maintenance robot.
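Since the cue is the position of the reflected-light spot, the first step is typically an intensity centroid, whose offset from the image center maps to a rotation angle. A hedged stdlib-only sketch (the small-angle field-of-view mapping is an assumption for illustration, not the paper's formula):

```python
import math

def light_centroid(image):
    """Intensity-weighted centroid of a grayscale image given as a
    list of rows; locates the reflected-light spot."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            total += val
            sx += x * val
            sy += y * val
    return sx / total, sy / total

def offset_to_angle(centroid_x, width, fov_deg):
    """Map the horizontal spot offset from the image center to a
    camera rotation angle, assuming a pinhole model with a known
    horizontal field of view."""
    half = width / 2
    return math.degrees(math.atan(math.tan(math.radians(fov_deg / 2)) *
                                  (centroid_x - half) / half))
```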
