• Title/Abstract/Keywords: Camera Position


항공기 탑재용 카메라 위치출력오차 측정방안 연구 (A Study of Test Method for Position Reporting Accuracy of Airborne Camera)

  • 송대범;윤용은
    • 한국군사과학기술학회지
    • /
    • Vol. 16, No. 5
    • /
    • pp.646-652
    • /
    • 2013
  • Position reporting accuracy (PRA) of an EO/IR (electro-optic/infrared) airborne camera is an important factor in geo-pointing accuracy. Generally, a rate table is used to measure the PRA of a gimbal-actuated camera such as an EO/IR system. However, it is not always possible to fix an EUT (equipment under test) to a rate table, because the table's capacity limits the size and weight of the EUT; our EO/IR camera is too large and heavy to mount on one. We therefore propose a new method for verifying the PRA of an airborne camera and assess its validity. The method uses a collimator, an angle-measuring instrument, a 6-DOF motion simulator, an optical surface plate, a leveling laser, an inclinometer, and a poster (for alignment).
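
As a rough illustration of how a position-reporting error could be scored once reference angles are available from the angle-measuring instrument, here is a minimal sketch; the line-of-sight parameterization and the RMS metric are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def pra_error_deg(reported_az, reported_el, ref_az, ref_el):
    """Angular error between reported and reference lines of sight.

    Inputs are arrays of angles in degrees; returns the per-sample
    great-circle error. Illustrative metric, not the paper's exact
    definition of PRA.
    """
    az1, el1, az2, el2 = map(np.radians,
                             (reported_az, reported_el, ref_az, ref_el))
    # Unit line-of-sight vectors from azimuth/elevation
    v1 = np.stack([np.cos(el1) * np.cos(az1),
                   np.cos(el1) * np.sin(az1), np.sin(el1)])
    v2 = np.stack([np.cos(el2) * np.cos(az2),
                   np.cos(el2) * np.sin(az2), np.sin(el2)])
    cosang = np.clip((v1 * v2).sum(axis=0), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))

# RMS position-reporting error over a test run:
# rms = np.sqrt(np.mean(pra_error_deg(r_az, r_el, g_az, g_el) ** 2))
```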

도심 자율주행을 위한 라이다 정지 장애물 지도 기반 위치 보정 알고리즘 (LiDAR Static Obstacle Map based Position Correction Algorithm for Urban Autonomous Driving)

  • 노한석;이현성;이경수
    • 자동차안전학회지
    • /
    • Vol. 14, No. 2
    • /
    • pp.39-44
    • /
    • 2022
  • This paper presents a LiDAR static-obstacle-map-based vehicle position correction algorithm for urban autonomous driving. Real-time kinematic (RTK) GPS is commonly used in highway automated vehicle systems, but in urban systems RTK GPS has trouble in shaded areas. This paper therefore presents a method to estimate the position of the host vehicle using an AVM camera, a front camera, LiDAR, and low-cost GPS, based on an extended Kalman filter (EKF). A static obstacle map (STOM) is constructed from static objects only, using a Bayesian update rule. To run the algorithm, an HD map and a static obstacle reference map (STORM) must be prepared in advance; the STORM is constructed by accumulating and voxelizing the STOM. The algorithm consists of four main steps. First, sensor data are acquired from the low-cost GPS, AVM camera, front camera, and LiDAR. Second, the low-cost GPS data define the initial point. Third, the AVM camera, front camera, and LiDAR point clouds are matched to the HD map and STORM using the normal distributions transform (NDT) method. Fourth, the host vehicle position is corrected with the EKF. The proposed algorithm is implemented in the Linux Robot Operating System (ROS) environment and showed better performance than a lane-detection-only algorithm. It is expected to be more robust and accurate than raw LiDAR point-cloud matching in autonomous driving.
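
A minimal sketch of the final correction step under stated assumptions: a 2-D pose state [x, y, yaw], the NDT match result treated as a direct pose measurement, and hand-picked covariances (none of these choices are taken from the paper).

```python
import numpy as np

def ekf_pose_correct(x, P, z_ndt, R):
    """EKF measurement update: fuse an NDT scan-match pose estimate.

    x: predicted state [x, y, yaw]; P: 3x3 state covariance
    z_ndt: pose measured by matching LiDAR to the STORM/HD map
    R: 3x3 measurement noise covariance
    """
    H = np.eye(3)                                 # pose observed directly
    y = z_ndt - H @ x                             # innovation
    y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi   # wrap yaw residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```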

Control of a mobile robot supporting a task robot on the top

  • Lee, Jang M.
    • 제어로봇시스템학회: Conference Proceedings
    • /
    • 제어로봇시스템학회 1996 Proceedings of the Korea Automatic Control Conference, 11th (KACC); Pohang, Korea; 24-26 Oct. 1996
    • /
    • pp.1-7
    • /
    • 1996
  • This paper addresses the control problem of a mobile robot supporting a task robot that needs to be positioned precisely. The main difficulty in the precise control of a mobile robot supporting a task robot is providing an accurate and stable base for the task robot: the end-plate of the mobile robot, which is the base of the task robot, cannot be positioned accurately without external position sensors. This difficulty is resolved here through vision information obtained from a camera attached to the end of the task robot. First of all, the camera parameters were measured using images of a fixed object captured by the camera. The measured parameters include the rotation, position, scale factor, and focal length of the camera; they could be measured using the vertex features of a hexagonal object and the pin-hole camera model. Using the measured pose (position and orientation) of the camera and the given kinematics of the task robot, we calculate the pose of the end-plate of the mobile robot, which is used for precise control of the mobile robot. Experimental results for the pose estimation are shown.
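
A rough modern analogue of the camera-pose step, sketched with OpenCV's PnP solver: given known 3-D vertex coordinates of a hexagonal object and their detected image locations, recover the camera rotation and translation under the pin-hole model. The hexagon geometry, pixel coordinates, and intrinsics below are placeholder assumptions, not the paper's data.

```python
import numpy as np
import cv2

# Known 3-D vertices of a hexagonal target (object frame, metres) --
# placeholder geometry
r = 0.05
object_pts = np.array([[r * np.cos(a), r * np.sin(a), 0.0]
                       for a in np.linspace(0, 2 * np.pi, 7)[:-1]],
                      dtype=np.float64)

# Corresponding pixel coordinates detected in the image (placeholders)
image_pts = np.array([[412, 300], [370, 372], [286, 372],
                      [244, 300], [286, 228], [370, 228]], dtype=np.float64)

K = np.array([[800.0, 0, 320],
              [0, 800.0, 240],
              [0, 0, 1]])           # assumed camera intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, distCoeffs=None)
R, _ = cv2.Rodrigues(rvec)  # orientation; -R.T @ tvec gives camera position
```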


고해상도 3차원 상호상관 Volume PIV 시스템 개발 및 적용 (Development and Application of High-resolution 3-D Volume PIV System by Cross-Correlation)

  • 김미영;최장운;이현;이영호
    • 대한기계학회: Conference Proceedings
    • /
    • 대한기계학회 2002 Conference Proceedings
    • /
    • pp.507-510
    • /
    • 2002
  • An algorithm for three-dimensional particle image velocimetry (3D-PIV) was developed for measuring the 3-D velocity field of complex flows. The measurement system consists of two or three CCD cameras and one RGB image grabber. The flow field size is 1500 × 100 × 180 mm, the tracer particles are Nylon 12 (1 mm), and the illuminator is a halogen lamp (100 W). Stereo photogrammetry is adopted for three-dimensional geometric measurement of the tracer particles. For stereo-pair matching, the camera parameters must be determined in advance by camera calibration; they are computed from the collinearity equations. To calculate a particle's 3-D position by stereo photogrammetry, the eleven parameters of each camera must be obtained through calibration, and the epipolar line is used for stereo-pair matching. The 3-D position of a particle is then calculated from the camera parameters, the centers of projection of the cameras, and the photographic coordinates of the particle, based on the collinearity condition. Velocity vectors are obtained from the 3-D particle positions in the first and second frames, and erroneous vectors are removed by applying the continuity equation. This study also developed various 3D-PIV animation techniques.
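
The eleven parameters per camera correspond to the classic direct linear transformation (DLT) model. Assuming that model, a minimal sketch of the triangulation step: stack the collinearity equations from each calibrated view and solve for the particle's 3-D position by least squares.

```python
import numpy as np

def triangulate_dlt(projections, image_pts):
    """Recover a particle's 3-D position from >= 2 calibrated views.

    projections: list of 3x4 DLT projection matrices (one per camera)
    image_pts:   list of (u, v) particle coordinates in each view
    Solves the homogeneous system A X = 0 built from the collinearity
    condition u = (P0 . X)/(P2 . X), v = (P1 . X)/(P2 . X).
    """
    rows = []
    for P, (u, v) in zip(projections, image_pts):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)   # least-squares null vector
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenise
```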


필수 행렬을 이용한 카메라 이동 위치 추정 기술 연구 (A Study on Estimating Skill of Smartphone Camera Position using Essential Matrix)

  • 오종택;김호겸
    • 한국인터넷방송통신학회논문지
    • /
    • Vol. 22, No. 6
    • /
    • pp.143-148
    • /
    • 2022
  • Estimating the position of a camera by analyzing images captured consecutively by the monocular camera of a moving smartphone or robot is very important for the metaverse, mobile robots, and user location services. Until now, positions have been computed by applying PnP-related techniques. This paper newly proposes a method that obtains the camera's direction of motion from the essential matrix of the epipolar geometry of consecutive images and then estimates the camera's successive positions through geometric calculation; its accuracy is verified through simulation. Unlike existing approaches, this method can be applied even when only one or more matching feature points exist across two or more images.
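
A minimal sketch of the essential-matrix step using OpenCV (the paper's subsequent geometric position calculation is not reproduced): from matched feature points in two consecutive frames, estimate E and decompose it into a rotation and a unit translation direction. The translation from recoverPose is known only up to scale, which is why an additional geometric step is needed to obtain actual positions.

```python
import numpy as np
import cv2

def relative_motion(pts1, pts2, K):
    """Estimate camera rotation and translation *direction* between two
    frames from matched points (Nx2 float arrays) and intrinsics K."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # recoverPose returns R and a unit-norm t; metric scale is
    # unobservable from a single monocular pair
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```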

불확실한 환경에서 매니퓰레이터 위치제어를 위한 실시간 비젼제어기법에 관한 연구 (A Study on the Real-Time Vision Control Method for Manipulator's position Control in the Uncertain Circumstance)

  • 정완식;김경석;신광수;주철;김재확;윤현권
    • 한국정밀공학회지
    • /
    • Vol. 16, No. 12
    • /
    • pp.87-98
    • /
    • 1999
  • This study concentrates on the development of a real-time estimation model and vision control method, together with experimental tests. The proposed method permits a kind of adaptability not otherwise available, in that the relationship between the camera-space location of manipulable visual cues and the vector of manipulator joint coordinates is estimated in real time. This is done with an estimation model that generalizes known manipulator kinematics to accommodate unknown relative camera position and orientation as well as manipulator uncertainty. The vision control method is robust and reliable, overcoming the difficulties of conventional research such as precise calibration of the vision sensor, exact kinematic modeling of the manipulator, and correct knowledge of the position and orientation of the CCD camera with respect to the manipulator base. Finally, the ability of the real-time vision control method to control the manipulator's position is demonstrated by performing thin-rod placement in space with a two-cue test model, completed without prior knowledge of camera or manipulator positions. This feature opens the door to a range of manipulation applications, including a mobile manipulator with stationary cameras tracking and providing information for control of the manipulator.
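
The paper's estimation model generalizes the full manipulator kinematics; as a generic stand-in only, here is a minimal sketch of estimating a local linear map between joint increments and camera-space cue increments by recursive least squares (the linear form and forgetting factor are illustrative assumptions, not the paper's model).

```python
import numpy as np

def rls_update(J, P, dq, dc, lam=0.98):
    """Recursive least-squares refinement of the local linear map
    dc ~= J @ dq between joint increments dq and camera-space cue
    increments dc.

    J: 2xN current map estimate; P: NxN covariance; lam: forgetting
    factor. Returns updated (J, P).
    """
    Pdq = P @ dq
    g = Pdq / (lam + dq @ Pdq)          # gain vector
    J = J + np.outer(dc - J @ dq, g)    # correct prediction error
    P = (P - np.outer(g, Pdq)) / lam
    return J, P
```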


엘리트 유전 알고리즘을 이용한 비젼 기반 로봇의 위치 제어 (Vision Based Position Control of a Robot Manipulator Using an Elitist Genetic Algorithm)

  • 박광호;김동준;기석호;기창두
    • 한국정밀공학회지
    • /
    • Vol. 19, No. 1
    • /
    • pp.119-126
    • /
    • 2002
  • In this paper, we present a new approach based on an elitist genetic algorithm for the task of aligning the position of a robot gripper using CCD cameras. The vision-based control scheme for aligning the gripper with the desired position is implemented with image information. The relationship between the camera-space location and the robot joint coordinates is estimated using a camera-space parameter model that generalizes known manipulator kinematics to accommodate unknown relative camera position and orientation. To find the joint angles of the robot manipulator that reach the target position in image space, we apply an elitist genetic algorithm instead of a nonlinear least-squares error method. Since the GA employs a parallel search, it performs well on such optimization problems. To improve convergence speed, real-valued coding and geometric constraint conditions are used. Experiments are carried out to demonstrate the effectiveness of vision-based control using an elitist genetic algorithm with real-valued coding.
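
A minimal sketch of a real-coded elitist GA for the joint-angle search; the cost function would be the image-space error between the gripper cue's predicted and target camera-space positions. Population size, operators, and mutation scale below are illustrative choices, not the paper's settings.

```python
import numpy as np

def elitist_ga(cost, bounds, pop=60, gens=200, elite=2, mut_sigma=0.05,
               rng=np.random.default_rng(0)):
    """Minimise cost(q) over joint angles with a real-coded elitist GA.

    bounds: (N, 2) array of joint limits (the geometric constraints).
    The best `elite` individuals survive unchanged each generation.
    """
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = np.array([cost(x) for x in X])
        X = X[np.argsort(f)]                       # rank by fitness
        parents = X[:pop // 2]                     # truncation selection
        kids = []
        while len(kids) < pop - elite:
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random()
            child = w * a + (1 - w) * b            # arithmetic crossover
            child += rng.normal(0, mut_sigma, child.shape) * (hi - lo)
            kids.append(np.clip(child, lo, hi))    # enforce joint limits
        X = np.vstack([X[:elite], *kids])          # elitism
    f = np.array([cost(x) for x in X])
    return X[np.argmin(f)]
```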

후판 압연 공정에서 Edge Masking Device의 실시간 제어기술 개발 (Development of Real Time Control System of EMD Bracket in Plate Rolling Process)

  • 최일섭;박병현;최승갑
    • 제어로봇시스템학회: Conference Proceedings
    • /
    • 제어로봇시스템학회 2000 Proceedings of the 15th Annual Conference
    • /
    • pp.170-170
    • /
    • 2000
  • This paper deals with on-line detection of strip movement and the real-time positioning of the EMD brackets linked to it. Strip movement is detected by four line-scan CCD cameras, and the measured position correction is fed to a motor position controller that positions the brackets.
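
A minimal sketch of one plausible control cycle, assuming a pure feed-forward correction law; the paper does not specify the control law, and all names here are hypothetical.

```python
def emd_correction(edge_px, px_to_mm, nominal_mm, setpoint_mm):
    """One control cycle: convert a line-CCD edge reading to millimetres,
    compute strip drift relative to the nominal edge position, and return
    the corrected bracket motor command (illustrative only)."""
    drift_mm = edge_px * px_to_mm - nominal_mm
    return setpoint_mm + drift_mm
```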


Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju;Kim, Jin-Suh;Kim, Gye-Young
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 21, No. 10
    • /
    • pp.11-19
    • /
    • 2016
  • As displays become large and varied in form, previous gaze-tracking methods no longer apply; mounting the gaze-tracking camera above the display can solve the problem of display size and height. However, this setup cannot use the corneal-reflection information from infrared illumination that previous methods rely on. This paper proposes a pupil-detection method that is robust to eye occlusion, based on the inner eye corner and the pupil center, and a simple method for calculating the gaze position using face pose information. In the proposed method, the camera capturing frames for gaze tracking switches between wide-angle and narrow-angle modes according to the person's position: if a face is detected in the field of view (FOV) in wide-angle mode, the camera switches to narrow-angle mode after calculating the face position. The frame captured in narrow-angle mode contains the gaze-direction information of a person at long distance. The gaze calculation consists of a face pose estimation step and a gaze-direction calculation step. Face pose is estimated by mapping feature points of the detected face to a 3D model. To calculate the gaze direction, an ellipse is first fitted to the pupil using edge segments split from the iris boundary; if the pupil is occluded, its position is estimated with a deformable template. Then, using the pupil center, the inner eye corner, and the face pose information, the gaze position on the display is calculated. Experiments demonstrate that the proposed algorithm overcomes the display form-factor constraints and effectively calculates the gaze direction of a person at long distance with a single camera, verified over a range of distances.
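
As a rough stand-in for the pupil-detection step (the paper's edge splitting and deformable template, which handle occlusion explicitly, are not reproduced), a minimal OpenCV sketch that fits an ellipse to the darkest blob in an eye patch:

```python
import cv2

def pupil_center(eye_gray):
    """Fit an ellipse to the darkest blob in a grayscale eye patch and
    return its center, or None if no usable contour is found."""
    _, mask = cv2.threshold(eye_gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)   # assume pupil is largest blob
    if len(c) < 5:                           # fitEllipse needs >= 5 points
        return None
    (cx, cy), axes, angle = cv2.fitEllipse(c)
    return cx, cy
```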

원형 물체를 이용한 로봇/카메라 자세의 능동보정 (Active Calibration of the Robot/camera Pose using Cylindrical Objects)

  • 한만용;김병화;김국헌;이장명
    • 제어로봇시스템학회논문지
    • /
    • Vol. 5, No. 3
    • /
    • pp.314-323
    • /
    • 1999
  • This paper introduces a methodology for active calibration of the camera pose (orientation and position) using images of the cylindrical objects that are to be manipulated. This active calibration differs from passive calibration, where a specific pattern must be placed at a known position. In active calibration, a camera attached to the robot captures images of the objects to be manipulated; that is, the prespecified position and orientation data of the cylindrical object are transformed into the camera pose through two consecutive image frames. An ellipse extracted from each image frame is represented as a circular-feature matrix, so two circular-feature matrices and the motion parameters between the two ellipses are enough for the active calibration process. This scheme is very effective for the precise control of a mobile/task robot that needs to be calibrated dynamically. To verify its effectiveness, fundamental experiments are performed.
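
In conic form, the circular-feature matrix of an ellipse is the symmetric 3×3 matrix Q with x^T Q x = 0 for homogeneous image points x on the ellipse. A minimal sketch of building Q from fitted ellipse parameters (the paper's subsequent pose recovery is not reproduced):

```python
import numpy as np

def conic_matrix(cx, cy, a, b, theta):
    """Symmetric 3x3 conic (circular-feature) matrix of an ellipse with
    center (cx, cy), semi-axes a and b, and rotation theta, such that
    [x, y, 1] Q [x, y, 1]^T = 0 for points on the ellipse."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    D = np.diag([1.0 / a**2, 1.0 / b**2])
    M = R @ D @ R.T                 # quadratic-form block
    t = np.array([cx, cy])
    Q = np.zeros((3, 3))
    Q[:2, :2] = M
    Q[:2, 2] = -M @ t               # linear terms from the center offset
    Q[2, :2] = -M @ t
    Q[2, 2] = t @ M @ t - 1.0
    return Q
```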
