• Title/Summary/Keyword: Camera Position Estimation (카메라 위치 추정)

Search Results: 291

Head Tracker System Using Two Infrared Cameras (두 대의 적외선 카메라를 이용한 헤드 트랙커 시스템)

  • 홍석기;박찬국
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.34 no.5 / pp.81-87 / 2006
  • This paper presents the design and construction of an experimental optical head tracker system. The system consists of infrared LEDs and two infrared CCD cameras, which filter out interference from other light sources in a confined environment such as a cockpit. The optical head tracking algorithm combines a feature detection algorithm with a 3D motion estimation algorithm. The feature detection algorithm, which obtains the 2D position coordinates of the features on the image plane, is implemented using thresholding and masking techniques. The 3D motion estimation algorithm, which estimates the motion of the pilot's head, is implemented using an extended Kalman filter (EKF). A precision rate table is used to verify the performance of the system, and its rotational performance is compared with that of an inertial sensor.
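The thresholding-and-masking feature detection step can be sketched as follows. This is a minimal illustration, not the paper's implementation; the threshold value and the flood-fill blob labeling are assumptions.

```python
import numpy as np

def detect_led_features(image, threshold=200):
    """Find bright-blob centroids in a grayscale IR image.

    A minimal sketch of the thresholding/masking step: pixels above the
    threshold are grouped into connected regions, and each region's
    centroid gives a 2D feature coordinate on the image plane.
    """
    mask = image >= threshold
    visited = np.zeros_like(mask)
    centroids = []
    for y, x in zip(*np.nonzero(mask)):
        if visited[y, x]:
            continue
        # Flood-fill one connected bright region (4-connectivity).
        stack, pixels = [(y, x)], []
        visited[y, x] = True
        while stack:
            cy, cx = stack.pop()
            pixels.append((cy, cx))
            for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not visited[ny, nx]):
                    visited[ny, nx] = True
                    stack.append((ny, nx))
        ys, xs = zip(*pixels)
        centroids.append((float(np.mean(xs)), float(np.mean(ys))))
    return centroids
```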

Object Position Estimation and Optimal Moving Planning of Mobile Manipulator based on Active Camera (능동카메라기반 이동매니퓰레이터의 물체위치추정 및 최적동작계획)

  • Jin, Tae-Seok;Lee, Jang-Myung
    • Journal of the Institute of Electronics Engineers of Korea SC / v.42 no.5 s.305 / pp.1-12 / 2005
  • A mobile manipulator, a serial connection of a mobile robot and a task robot, is a very useful system for performing various tasks in dangerous environments, because it offers a larger operational workspace than a fixed-base manipulator. Unfortunately, the use of a mobile robot introduces non-holonomic constraints, and the combination of a mobile robot and a manipulator generally introduces kinematic redundancy. This paper first proposes a method for estimating the position of an object in the Cartesian coordinate system from the geometrical relationship between the real object and the image captured by a 2-DOF active camera mounted on the mobile robot. Second, a method is proposed for determining an optimal path between the current position of the mobile manipulator, whose mobile base is non-holonomic, and the estimated object position, by expressing the global displacement of the system symbolically with homogeneous matrices. The corresponding joint parameters are then computed so that the desired displacement coincides with the computed symbolic displacement, and the object is captured through control of the manipulator. The effectiveness of the proposed method is demonstrated by simulation and by real experiments with the mobile manipulator.
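The composition of base and arm displacements with homogeneous matrices can be illustrated with planar transforms. The numeric values below are hypothetical, chosen only to show how the global displacement factors into a product of frame-to-frame transforms.

```python
import numpy as np

def hom(rot_z_deg, tx, ty):
    """Planar homogeneous transform: rotation about z plus translation."""
    th = np.radians(rot_z_deg)
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0, 1.0]])

# Global displacement of a mobile manipulator as a product of the base
# motion and the arm motion expressed in the base frame (illustrative
# values, not from the paper).
T_base = hom(90, 1.0, 0.0)   # base drives 1 m forward, turns 90 degrees
T_arm  = hom(0, 0.5, 0.2)    # end-effector offset in the base frame
T_total = T_base @ T_arm     # end-effector pose in the world frame
```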

Improving Precision of the Exterior Orientation and the Pixel Position of a Multispectral Camera onboard a Drone through the Simultaneous Utilization of a High Resolution Camera (고해상도 카메라와의 동시 운영을 통한 드론 다분광카메라의 외부표정 및 영상 위치 정밀도 개선 연구)

  • Baek, Seungil;Byun, Minsu;Kim, Wonkook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.541-548 / 2021
  • Recently, multispectral cameras have been actively utilized in various application fields such as agriculture, forest management, and coastal environment monitoring, particularly onboard UAVs. The resulting multispectral images are typically georeferenced using the onboard GPS (Global Positioning System) and IMU (Inertial Measurement Unit) for the positional information of the pixels, or integrated with ground control points (GCPs) measured directly on the ground. However, because establishing GCPs before georeferencing is costly, and some areas are inaccessible, positions often must be derived without such reference information. This study provides a means to improve the georeferencing of multispectral camera images without such ground reference points, using instead a high-resolution RGB camera operated simultaneously on the same drone. The exterior orientation parameters of the drone camera are first estimated through bundle adjustment and compared with reference values derived from GCPs. The results show that incorporating the images from the high-resolution RGB camera greatly improves both the exterior orientation estimation and the georeferencing of the multispectral camera. Additionally, an evaluation of the direction estimation from a ground point to the sensor shows that including RGB images reduces the angle errors by one order of magnitude.

Implementation of Multi-device Remote Control System using Gaze Estimation Algorithm (시선 방향 추정 알고리즘을 이용한 다중 사물 제어 시스템의 구현)

  • Yu, Hyemi;Lee, Jooyoung;Jeon, Surim;Nah, JeongEun
    • Annual Conference of KIPS / 2022.11a / pp.812-814 / 2022
  • To address a shortcoming of existing 'smart home' systems, which require several steps to select the device to be controlled, this paper proposes a system that estimates the direction of the user's gaze and controls the device located in that direction. An algorithm is implemented that estimates the gaze direction from the coordinates of landmarks extracted by pose estimation from an ordinary RGB camera. Compared with conventional gaze-tracking techniques based on near-infrared cameras and gaze-tracking models, it produces lighter data, imposes fewer constraints on the relative positions of user and sensor, and requires no additional equipment. Experiments demonstrate that the gaze-tracking accuracy of the algorithm is practical for use in a real residential environment, and the algorithm is ultimately applied to implement a gaze-directed device control system that works with infrared devices and Google Home products.
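A minimal sketch of how a horizontal gaze direction might be classified from 2D landmark coordinates. The landmark layout and the 0.15 offset threshold are assumptions for illustration, not the paper's algorithm.

```python
def estimate_gaze_direction(left_eye, right_eye, nose):
    """Rough horizontal gaze estimate from 2D landmark coordinates.

    Hypothetical simplification of a landmark-based approach: the nose
    position relative to the midpoint between the eyes, normalized by
    the eye span, indicates whether the user faces left, center, or
    right.
    """
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    eye_span = abs(right_eye[0] - left_eye[0])
    offset = (nose[0] - mid_x) / eye_span  # normalized horizontal offset
    if offset < -0.15:
        return "left"
    if offset > 0.15:
        return "right"
    return "center"
```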

Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera (영상정보만을 이용한 사람과 로봇간 실시간 상대위치 추정 알고리즘)

  • Lee, Jung Uk;Sun, Ju Young;Won, Mooncheol
    • Transactions of the Korean Society of Mechanical Engineers A / v.37 no.12 / pp.1445-1452 / 2013
  • In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradients) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used to calculate the relative distance and angle between the person and the camera on the robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm runs at about 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.
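The distance-from-size and angle-from-offset computation can be sketched with a pinhole camera model. The focal length, image width, and assumed head size below are illustrative values, not the paper's calibration.

```python
import math

def relative_position(box_height_px, box_center_x_px,
                      image_width_px=640, focal_px=500.0,
                      real_height_m=0.25):
    """Estimate distance and bearing to a detected head region.

    Pinhole-model sketch: distance follows from the apparent size of a
    known-height object, and the bearing angle from the horizontal
    pixel offset of the detection center.
    """
    distance = focal_px * real_height_m / box_height_px
    dx = box_center_x_px - image_width_px / 2.0
    angle = math.degrees(math.atan2(dx, focal_px))
    return distance, angle
```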

Uncertainty Analysis of Observation Matrix for 3D Reconstruction (3차원 복원을 위한 관측행렬의 불확실성 분석)

  • Koh, Sung-shik
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.3 / pp.527-535 / 2016
  • Statistical optimization algorithms have been developed in various forms to estimate 3D shape and motion. However, statistical approaches are limited in analyzing how sensitive SfM (Shape from Motion) is to the camera's geometrical position, viewing angles, and so on. This paper proposes a method for quantitatively estimating the uncertainty of an observation matrix from camera imaging configuration factors, in order to predict reconstruction ambiguities in SfM. This is a very efficient way to predict the final reconstruction performance of an SfM algorithm. More importantly, the method shows how to derive practical guidelines for setting camera imaging configurations that can be expected to lead to reasonable reconstruction results. The experimental results verify the quantitative estimates of the observation matrix obtained from camera imaging configurations and confirm the effectiveness of the algorithm.
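One simple way to quantify the conditioning of an SfM observation matrix is sketched below as an illustrative proxy. The rank-3 property of the centered matrix comes from the standard factorization method; the specific singular-value ratio used here is an assumption, not the paper's measure.

```python
import numpy as np

def observation_matrix_conditioning(W):
    """Quantify how well-conditioned an SfM observation matrix is.

    For noise-free rigid motion, the 2F x P observation matrix (after
    subtracting each row's centroid) has rank at most 3; the ratio of
    the 4th to the 3rd singular value is one simple indicator of how
    far real data departs from that ideal (near 0 means well behaved).
    """
    W = W - W.mean(axis=1, keepdims=True)   # register to the centroid
    s = np.linalg.svd(W, compute_uv=False)
    return s[3] / s[2] if len(s) > 3 and s[2] > 0 else 0.0
```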

Gaze Detection System by IR-LED based Camera (적외선 조명 카메라를 이용한 시선 위치 추적 시스템)

  • 박강령
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.4C / pp.494-504 / 2004
  • Research on gaze detection has advanced considerably, with many applications. Most previous approaches rely only on image processing algorithms, so they take much processing time and have many constraints. In this work, we implement gaze detection with a computer vision system using a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. The gaze position determined by facial movement is then computed from the normal vector of the plane defined by the computed 3D feature positions. In addition, a trained neural network is used to detect the gaze position from eye movement. Experimental results show that the facial and eye gaze position on a monitor can be obtained with an RMS error of about 4.2 cm between the computed and actual positions.
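The plane-normal computation at the core of the facial gaze estimate is a short cross-product calculation; a sketch with illustrative feature points:

```python
import numpy as np

def face_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D facial feature points.

    The gaze direction by facial movement follows this normal; the
    point values used in any call are illustrative, since the real
    inputs come from the 3D feature estimation stage.
    """
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)
```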

User Positioning Method Based on Image Similarity Comparison Using Single Camera (단일 카메라를 이용한 이미지 유사도 비교 기반의 사용자 위치추정)

  • Song, Jinseon;Hur, SooJung;Park, Yongwan;Choi, Jeonghee
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.8 / pp.1655-1666 / 2015
  • In this paper, a user-position estimation method using a single camera is proposed for both indoor and outdoor environments. Conventionally, GPS-based and RF-based estimation methods have been widely studied for outdoor and indoor environments, respectively, but each is useful only in its own environment. This study therefore adopts a vision-based approach that is applicable to both. Since distance or position cannot be extracted from a single still image alone, reference images pre-stored in an image database, each tagged with the position at which it was captured, are used to identify the current position from the still image captured by the single camera. To find the reference image most similar to the current image, the SURF algorithm is used for feature extraction, and outliers among the extracted features are discarded using the RANSAC algorithm. The performance of the proposed method is evaluated inside and outside two buildings, covering both indoor and outdoor environments.
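The RANSAC outlier-rejection step can be sketched on already-matched keypoint coordinates. A translation-only model is used here so the example stays self-contained; the paper's SURF matching (and the homography model typically paired with it) would require OpenCV.

```python
import numpy as np

def ransac_translation(src, dst, threshold=2.0, iterations=100, seed=0):
    """Estimate a 2D translation between matched keypoints with RANSAC.

    Repeatedly hypothesize a translation from one randomly chosen
    match, count how many matches agree within the threshold, and keep
    the hypothesis with the most inliers.
    """
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_t, best_inliers = None, 0
    for _ in range(iterations):
        i = rng.integers(len(src))          # minimal sample: one match
        t = dst[i] - src[i]
        residuals = np.linalg.norm(dst - (src + t), axis=1)
        inliers = int(np.sum(residuals < threshold))
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers
```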

3-D Model-Based Tracking for Mobile Augmented Reality (모바일 증강현실을 위한 3차원 모델기반 카메라 추적)

  • Park, Jungsik;Seo, Byung-Kuk;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.07a / pp.65-68 / 2011
  • This paper proposes a 3D model-based camera tracking technique for mobile augmented reality. 3D model-based tracking is applicable to non-planar objects and is especially useful in texture-less environments. The proposed approach finds correspondences between the 3D model of the target object and edges extracted from the image, and tracks the current camera pose (position and orientation) from the previous one by estimating the camera motion that minimizes the distances between the correspondences. Its usefulness is confirmed by tracking the camera pose and augmenting 3D virtual content with the proposed method on an Android smartphone.


Tracking High-speed Moving Object Using Infrared Stereo Camera and Regression Analysis Using Artificial Neural Network (적외선 스테레오 카메라를 이용한 고속 이동체 추적과 인공신경망을 이용한 회귀분석)

  • Kim, Chanran;Lee, Jaehoon;Lee, Sang Hwa;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2017.11a / pp.164-165 / 2017
  • In this paper, an infrared stereo camera system is used to track a high-speed moving object. An infrared camera capable of sensing heat sources is used to find the high-temperature propellant. To compute the 3D position of the high-speed object, the stereo camera system calculates the distance between the cameras and the object. Finally, the trajectory of the high-speed object is estimated by regression analysis using an artificial neural network. Experimental results show that the proposed system works.
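The stereo distance computation reduces to the standard disparity relation Z = fB/d. The focal length and baseline below are illustrative values, not the paper's calibration.

```python
def stereo_distance(disparity_px, focal_px=800.0, baseline_m=0.5):
    """Distance from stereo disparity: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: distance between the
    two cameras; disparity_px: horizontal pixel shift of the object
    between the left and right images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```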
