• Title/Summary/Keyword: Camera Position Estimation (카메라 위치 추정)


Robust Estimation of Camera Motion Using A Local Phase Based Affine Model (국소적 위상기반 어파인 모델을 이용한 강인한 카메라 움직임 추정)

  • Jang, Suk-Yoon;Yoon, Chang-Yong;Park, Mig-Non
    • Journal of the Institute of Electronics Engineers of Korea CI, v.46 no.1, pp.128-135, 2009
  • Techniques that track the same region of physical space across a temporal sequence of images by matching contours of constant phase are more robust and stable than tracking techniques that use or assume constant intensity. Using this property, we describe an algorithm for robustly estimating the motion parameters caused by global camera motion. First, we compute the optical flow based on the phase of spatially filtered sequential images, in the direction orthogonal to the orientation of each component of a Gabor filter bank. We then apply the least squares method to the optical flow to determine the affine motion parameters. We demonstrate that the proposed method can be applied to a vision-based pointing device that estimates its motion from images containing a display device, which causes varying lighting conditions and noise.
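  The second step of the abstract, fitting a 6-parameter affine motion model to optical-flow vectors by least squares, can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the point coordinates and flow values are synthetic.

```python
import numpy as np

def fit_affine_motion(points, flows):
    """Solve for (a1..a6) in u = a1*x + a2*y + a3, v = a4*x + a5*y + a6
    by linear least squares over all flow vectors."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])   # design matrix per component
    u_params, *_ = np.linalg.lstsq(A, flows[:, 0], rcond=None)
    v_params, *_ = np.linalg.lstsq(A, flows[:, 1], rcond=None)
    return np.concatenate([u_params, v_params])

# Synthetic check: flow generated by a known affine model is recovered.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(50, 2))
true = np.array([0.1, -0.05, 2.0, 0.03, 0.08, -1.0])
uv = np.column_stack([
    true[0] * pts[:, 0] + true[1] * pts[:, 1] + true[2],
    true[3] * pts[:, 0] + true[4] * pts[:, 1] + true[5],
])
est = fit_affine_motion(pts, uv)
```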

Performance Analysis of Vision-based Positioning Assistance Algorithm (비전 기반 측위 보조 알고리즘의 성능 분석)

  • Park, Jong Soo;Lee, Yong;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.37 no.3, pp.101-108, 2019
  • Due to recent improvements in computer processing speed and image processing technology, research is being actively carried out to combine camera information with existing GNSS (Global Navigation Satellite System) and dead reckoning. In this study, we developed a vision-based positioning assistance algorithm that estimates the distance to an object from stereo images. In addition, a GNSS/on-board vehicle sensor/vision-based positioning algorithm was developed by combining the vision-based positioning algorithm with an existing positioning algorithm. For the performance analysis, the velocity calculated from an actual driving test was used to correct the navigation solution, and simulation tests were performed to analyze the effect of velocity precision. The analysis confirmed that position accuracy improves by about 4% when vision information is added, compared to the existing GNSS/on-board based positioning algorithm.
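  Distance from stereo images, as described here, typically rests on the standard pinhole relation Z = f·B/d (focal length f in pixels, baseline B, disparity d in pixels). A minimal sketch, with illustrative numbers that are not from the paper:

```python
def stereo_distance(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: distance grows as disparity shrinks."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 8.4 px disparity -> 10 m.
z = stereo_distance(focal_px=700.0, baseline_m=0.12, disparity_px=8.4)
```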

Vision-based relative position estimation for object tracking (목표물 추적을 위한 비전 기반 상대 위치 추정)

  • Lee, Jong-Geol;Park, Jong-Hun;Kim, Jin-Hwan;Huh, Uk-Youl
    • Proceedings of the KIEE Conference, 2011.07a, pp.1880-1881, 2011
  • This paper proposes a method for measuring the relative position and heading angle of a mobile robot traveling on a 2D plane with respect to a target object. The sensor used for the measurement is a stereo camera; since the mobile robot has 3 DOF, we propose a method that measures the relative position and heading angle using two points. Position errors caused by disturbances arise while measuring the relative position, so a Kalman filter is applied as a countermeasure to obtain a more robust relative position estimate. Finally, MATLAB simulations confirm the performance of the proposed system in an environment with disturbances.
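  The disturbance-rejection idea can be illustrated with a minimal 1-D Kalman filter under a constant-state model. The process/measurement noise values are assumptions for illustration, not the paper's tuning.

```python
import random

def kalman_smooth(measurements, q=1e-4, r=0.25):
    """Scalar Kalman filter: state assumed constant, measurements noisy."""
    x, p = measurements[0], 1.0          # initial state and covariance
    out = []
    for z in measurements:
        p = p + q                        # predict step (adds process noise)
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)              # correct with the measurement
        p = (1.0 - k) * p
        out.append(x)
    return out

# Noisy observations of a fixed relative position of 5.0 m.
random.seed(1)
noisy = [5.0 + random.gauss(0, 0.5) for _ in range(200)]
est = kalman_smooth(noisy)
```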


An Eye Mouse System Using an Infrared Illumination Camera (적외선 조명 카메라를 이용한 동공 마우스 시스템)

  • Kim, Choong-Bum;Kim, Seong-Hoon;Han, Soo-Whan
    • Proceedings of the Korean Society of Computer Information Conference, 2009.01a, pp.73-76, 2009
  • As the number of people with disabilities continues to grow, so do welfare facilities for them. However, people with disabilities still face many inconveniences when using computers, largely because devices that let them use computers conveniently are lacking. This paper therefore proposes an eye-mouse system that uses an infrared illumination camera so that users with disabilities can control the mouse with only the movements of their pupils and eyelids, and demonstrates its feasibility through experiments. The proposed system uses an infrared illumination camera, the Canny algorithm, and fuzzy inference: the pupil is detected with the Canny algorithm in an image containing the eye to estimate the mouse position, and eye blinks are estimated with fuzzy inference to implement mouse pointer movement and click functions. The proposed techniques were tested and the results analyzed.
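  The fuzzy-inference step for blink detection could, for example, map the visible pupil area (as a fraction of its open-eye baseline) to an "eye closed" degree with a triangular membership function. This is a hypothetical sketch; the thresholds and the area-ratio feature are assumptions, not the paper's design.

```python
def closed_membership(area_ratio, lo=0.2, hi=0.6):
    """Degree to which the eye is closed: 1.0 when the pupil is barely
    visible, 0.0 when fully visible, linear in between."""
    if area_ratio <= lo:
        return 1.0
    if area_ratio >= hi:
        return 0.0
    return (hi - area_ratio) / (hi - lo)

def is_blink(area_ratios, threshold=0.8):
    """Declare a blink when the closed degree stays high in every frame."""
    return all(closed_membership(r) >= threshold for r in area_ratios)
```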


A Study on the Design and Implementation of a Camera-Based 6DoF Tracking and Pose Estimation System (카메라 기반 6DoF 추적 및 포즈 추정 시스템의 설계 및 구현에 관한 연구)

  • Do-Yoon Jeong;Hee-Ja Jeong;Nam-Ho Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.24 no.5, pp.53-59, 2024
  • This study presents the design and implementation of a camera-based 6DoF (6 Degrees of Freedom) tracking and pose estimation system. In particular, we propose a method for accurately estimating the positions and orientations of all fingers of a user utilizing a 6DoF robotic arm. The system is developed using the Python programming language, leveraging the Mediapipe and OpenCV libraries. Mediapipe is employed to extract keypoints of the fingers in real-time, allowing for precise recognition of the joint positions of each finger. OpenCV processes the image data collected from the camera to analyze the finger positions, thereby enabling pose estimation. This approach is designed to maintain high accuracy despite varying lighting conditions and changes in hand position. The proposed system's performance has been validated through experiments, evaluating the accuracy of hand gesture recognition and the control capabilities of the robotic arm. The experimental results demonstrate that the system can estimate finger positions in real-time, facilitating precise movements of the 6DoF robotic arm. This research is expected to make significant contributions to the fields of robotic control and human-robot interaction, opening up various possibilities for future applications. The findings of this study will aid in advancing robotic technology and promoting natural interactions between humans and robots.
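  One step in such a keypoint pipeline is computing a finger-joint angle from three landmarks (e.g. a Mediapipe MCP-PIP-TIP triplet). The sketch below is a plausible reconstruction with made-up coordinates, not Mediapipe output or the paper's code.

```python
import numpy as np

def joint_angle_deg(a, b, c):
    """Angle at keypoint b formed by the segments b->a and b->c, in degrees."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Collinear keypoints -> a straight (180 degree) finger.
straight = joint_angle_deg([0, 0, 0], [0, 1, 0], [0, 2, 0])
# A right-angle bend at the middle keypoint.
bent = joint_angle_deg([0, 0, 0], [0, 1, 0], [1, 1, 0])
```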

Stability Analysis of a Stereo-Camera for Close-range Photogrammetry (근거리 사진측량을 위한 스테레오 카메라의 안정성 분석)

  • Kim, Eui Myoung;Choi, In Ha
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.39 no.3, pp.123-132, 2021
  • To determine 3D (three-dimensional) positions using a stereo-camera in close-range photogrammetry, camera calibration, which determines not only the interior orientation parameters of each camera but also the relative orientation parameters between the cameras, must be performed first. As time passes after camera calibration, the interior and relative orientation parameters of non-metric cameras may change due to internal instability or external factors. In this study, to evaluate the stability of the stereo-camera, the stability of the two single cameras and of the stereo-camera was analyzed, and the three-dimensional position accuracy was evaluated using checkpoints. Evaluating the stability of the two single cameras through three camera calibration experiments over four months gave a root mean square error of ±0.001mm, while that of the stereo-camera ranged from ±0.012mm to ±0.025mm. In addition, as the distance accuracy at the checkpoints was about ±1mm, the interior and relative orientation parameters of the stereo-camera were considered stable over that period.
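  The stability measure used here, the root mean square error of a calibration parameter across repeated calibrations, can be sketched directly. The focal-length values below are invented for illustration, not the paper's data.

```python
import math

def rmse_about_mean(values):
    """RMS deviation of repeated calibration results about their mean."""
    m = sum(values) / len(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / len(values))

# Hypothetical focal-length estimates (mm) from three calibrations
# spread over four months; a small spread indicates a stable camera.
focal_mm = [4.132, 4.133, 4.131]
spread = rmse_about_mean(focal_mm)
```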

3D Analysis of Scene and Light Environment Reconstruction for Image Synthesis (영상합성을 위한 3D 공간 해석 및 조명환경의 재구성)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of Korea Game Society, v.6 no.2, pp.45-50, 2006
  • In order to generate a photo-realistic synthesized image, we should reconstruct the light environment by 3D analysis of the scene. This paper presents a novel method for identifying the positions and characteristics of the global and local lights in the real image, which are used to illuminate the synthetic objects. First, we generate a High Dynamic Range (HDR) radiance map from omni-directional images taken by a digital camera with a fisheye lens. Then, the positions of the camera and light sources in the scene are identified automatically from the correspondences between images, without a priori camera calibration. Light sources are classified according to whether they illuminate the whole scene, and the 3D illumination environment is then reconstructed. Experimental results showed that the proposed method, combined with distributed ray tracing, achieves photo-realistic image synthesis. Animators and lighting experts in the film and animation industry are expected to benefit highly from it.


A Study on Automatic Detection of Speed Bump by using Mathematical Morphology Image Filters while Driving (수학적 형태학 처리를 통한 주행 중 과속 방지턱 자동 탐지 방안)

  • Joo, Yong Jin;Hahm, Chang Hahk
    • Journal of Korean Society for Geospatial Information Science, v.21 no.3, pp.55-62, 2013
  • This paper aims to detect speed bumps using an omni-directional camera and to suggest a real-time update scheme for speed bumps through a vision-based approach. To detect speed bumps in a sequence of camera images, noise must be removed, and spots whose shape and pattern match a speed bump must first be detected. Since a speed bump has a regular pattern of white and yellow areas, we extracted speed bumps on the road by applying erosion and dilation morphological operations and by using the HSV color model. By collecting a large number of panoramic images from the camera, we are able to detect the target object and calculate the distance through GPS log data. Finally, we evaluated the accuracy of the obtained results and the detection algorithm by implementing SLAMS (Simultaneous Localization and Mapping System).
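  The erosion/dilation pair applied here (an "opening") removes small noise while preserving larger blobs. A toy NumPy implementation with an assumed 3x3 square structuring element, shown on a synthetic binary mask:

```python
import numpy as np

def erode(mask):
    """3x3 erosion: a pixel survives only if its whole neighborhood is 1."""
    p = np.pad(mask, 1, constant_values=0)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + mask.shape[0], 1 + dx:1 + dx + mask.shape[1]]
    return out

def dilate(mask):
    """3x3 dilation: a pixel fires if any neighbor is 1."""
    p = np.pad(mask, 1, constant_values=0)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + mask.shape[0], 1 + dx:1 + dx + mask.shape[1]]
    return out

mask = np.zeros((7, 7), dtype=np.uint8)
mask[2:5, 2:5] = 1          # a 3x3 blob: survives the opening
mask[0, 0] = 1              # a lone noise pixel: removed by the opening
opened = dilate(erode(mask))
```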

Semi-automatic Camera Calibration Using Quaternions (쿼터니언을 이용한 반자동 카메라 캘리브레이션)

  • Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.36 no.2, pp.43-50, 2018
  • The camera is a key element in image-based three-dimensional positioning, and camera calibration, which properly determines the internal characteristics of the camera, is a necessary process that must precede determining the three-dimensional coordinates of an object. In this study, a new methodology was proposed to determine the interior orientation parameters of a camera semi-automatically, without being influenced by the size and shape of the checkerboard used for camera calibration. The proposed method consists of exterior orientation parameter estimation using quaternions, recognition of the calibration target, and interior orientation parameter determination through bundle block adjustment. After determining the interior orientation parameters using the chessboard calibration target, the three-dimensional position of a small 3D model was determined. In addition, the horizontal and vertical position errors, evaluated using checkpoints, were about ±0.006m and ±0.007m, respectively.
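  The quaternion-based exterior orientation step relies on the standard conversion of a unit quaternion (w, x, y, z) to a rotation matrix; a minimal sketch:

```python
import math

def quat_to_matrix(w, x, y, z):
    """Rotation matrix of the unit quaternion (w, x, y, z); the input is
    normalized first so any nonzero quaternion is accepted."""
    n = math.sqrt(w * w + x * x + y * y + z * z)
    w, x, y, z = w / n, x / n, y / n, z / n
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ]

# Example: a 90-degree rotation about the z-axis.
R = quat_to_matrix(math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
```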

Object Detection based on Image Processing for Indoor Drone Localization (실내 드론의 위치 추정을 위한 영상처리 기반 객체 검출)

  • Beck, Jong-Hwan;Kim, Sang-Hoon
    • Annual Conference of KIPS, 2017.04a, pp.1003-1004, 2017
  • This study introduces marker recognition and detection techniques for indoor drone localization. Existing positioning technologies such as the Global Positioning System and Wi-Fi-based triangulation are difficult to use indoors because of their respective characteristics. This paper introduces a technique that acquires position information by detecting objects such as 2D barcodes and markers in video streamed in real time from the drone's camera. In the experiments, objects were detected with OpenCV V2.4.10 in the streamed video, and the detectability of 2D barcodes was examined as a function of camera-to-object distance and barcode size: a 15*15cm 2D barcode was recognized comparatively well, while a smaller 11*11cm 2D barcode became harder to recognize as the distance increased.
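  The size/distance trade-off observed in this experiment follows from the pinhole relation: the apparent pixel size of a marker shrinks with distance, so a smaller marker drops below a detectable size at a shorter range. A hedged sketch; the focal length and minimum detectable pixel size are illustrative assumptions, not measured values from the paper.

```python
def apparent_size_px(real_size_cm, distance_cm, focal_px=600.0):
    """Projected size of a marker on the image sensor, in pixels."""
    return focal_px * real_size_cm / distance_cm

def max_detectable_distance_cm(real_size_cm, min_px=30.0, focal_px=600.0):
    """Distance at which the marker shrinks to the minimum detectable size."""
    return focal_px * real_size_cm / min_px

big = max_detectable_distance_cm(15.0)    # the 15 cm barcode
small = max_detectable_distance_cm(11.0)  # the 11 cm barcode fails sooner
```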