• Title/Abstract/Keyword: stereo vision sensor

Search results: 72 items (processing time: 0.028 s)

비전 기반 측위 보조 알고리즘의 성능 분석 (Performance Analysis of Vision-based Positioning Assistance Algorithm)

  • 박종수;이용;권재현
    • 한국측량학회지
    • /
    • Vol. 37, No. 3
    • /
    • pp.101-108
    • /
    • 2019
  • Recently, with faster computing and advances in image processing, research on combining information acquired from cameras with conventional GNSS (Global Navigation Satellite System) and dead-reckoning-based positioning to obtain a stable position has been actively pursued. Previous studies mainly used a monocular camera, which has the limitation that the absolute coordinates of the objects of interest must be available in advance. To overcome this limitation, this study develops and analyzes a vision-based positioning assistance algorithm that estimates the distance between the camera and an object of interest by applying triangulation to stereo images. In addition, a relative velocity is computed from the estimated distance and the image acquisition interval, and an integrated positioning algorithm is implemented by combining it with a previously developed GNSS/in-vehicle sensor based positioning algorithm. Performance analysis of the integrated algorithm on real driving data shows that using the velocity information to correct the navigation solution yields only a modest position accuracy improvement of about 4% over the existing GNSS/in-vehicle sensor based algorithm. This is attributed to the low precision of the velocity estimated from the images and to the inability to extract suitable information from images in environments such as tunnels; further research is needed to address these limitations.
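
The triangulation step described in this entry follows standard rectified stereo geometry: with focal length f (in pixels), baseline B, and disparity d, depth is Z = f·B/d, and a relative speed can be taken from the change in depth between consecutive frames. The sketch below only illustrates that relationship and is not the authors' implementation; the camera parameters and example values are assumptions.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth (m) of a point from its stereo disparity (px), assuming a rectified pair."""
    return focal_px * baseline_m / disparity_px

def relative_speed(depth_prev_m, depth_curr_m, frame_interval_s):
    """Relative speed (m/s) along the optical axis from two consecutive depth estimates."""
    return (depth_prev_m - depth_curr_m) / frame_interval_s

# Hypothetical example values (not from the paper):
f_px, B_m, dt_s = 1400.0, 0.30, 0.1          # focal length, baseline, frame interval
z1 = depth_from_disparity(21.0, f_px, B_m)   # object of interest, frame k-1
z2 = depth_from_disparity(22.5, f_px, B_m)   # same object, frame k
print(f"depth: {z2:.2f} m, closing speed: {relative_speed(z1, z2, dt_s):.2f} m/s")
```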

PSD 센서를 이용한 모션캡쳐센서의 정밀도 향상을 위한 보정에 관한 연구 (A Study on the Sensor Calibration of Motion Capture System using PSD Sensor to Improve the Accuracy)

  • 최훈일;조용준;유영기
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2004년도 학술대회 논문집 정보 및 제어부문
    • /
    • pp.583-585
    • /
    • 2004
  • In this paper we present a calibration method for a low-cost motion capture system using a PSD (position sensitive detector) optical sensor. The PSD is used to measure the incident direction of the light from an LED marker: the ratio of the output currents at the PSD electrodes is proportional to the position at which the light focused by the lens strikes the sensor. To detect the direction of the light, the current outputs are converted into digital voltage values by op-amp circuits, a peak detector, and an A/D converter, and the incident position is computed from these digital values. Unfortunately, the non-linearity of the circuit leads to poor position accuracy. To overcome this problem, we compensate for the non-linearity using a least-squares fitting method. After the non-linearity of the circuit is compensated, the system shows improved position accuracy.

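The compensation step in the entry above amounts to fitting a correction curve between the raw, non-linear sensor reading and the known true position. Below is a minimal sketch of such a least-squares calibration; the polynomial model, its order, and the sample data are illustrative assumptions, not the authors' circuit model.

```python
import numpy as np

# Hypothetical calibration data: raw PSD readings vs. known marker positions (mm).
raw_reading = np.array([-0.92, -0.55, -0.20, 0.00, 0.23, 0.58, 0.95])
true_pos_mm = np.array([-30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0])

# Least-squares fit of a cubic correction curve (order chosen arbitrarily here).
coeffs = np.polyfit(raw_reading, true_pos_mm, deg=3)
correct = np.poly1d(coeffs)

# Apply the correction to a new raw measurement.
raw = 0.41
print(f"raw reading {raw:.2f} -> corrected position {correct(raw):.1f} mm")
```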

차량정밀측위를 위한 복합측위 기술 동향 (Overview of sensor fusion techniques for vehicle positioning)

  • 박진원;최계원
    • 한국전자통신학회논문지
    • /
    • Vol. 11, No. 2
    • /
    • pp.139-144
    • /
    • 2016
  • This paper reviews recent trends in sensor fusion techniques for precise vehicle positioning. GNSS alone cannot meet the accuracy and reliability that precise positioning for autonomous driving requires. We introduce fused positioning techniques that combine GNSS with inertial navigation sensors such as odometers and gyroscopes, and we survey recent localization methods that match landmarks detected by LiDAR or stereo vision against the information stored in high-definition maps.
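
The GNSS/inertial combination surveyed here typically propagates the vehicle pose with odometer and gyroscope measurements between GNSS fixes and corrects it when a fix arrives. The fragment below is a deliberately simplified sketch of that idea (a fixed-gain blend rather than a full Kalman filter); the gain and all sensor values are assumptions.

```python
import math

class DeadReckoningWithGnss:
    """Toy 2D pose propagation from odometer speed and gyro yaw rate,
    with a simple blend toward GNSS position fixes when they arrive."""

    def __init__(self, x=0.0, y=0.0, heading=0.0, gnss_gain=0.2):
        self.x, self.y, self.heading = x, y, heading
        self.gnss_gain = gnss_gain  # 0..1, how strongly a GNSS fix pulls the estimate

    def predict(self, speed_mps, yaw_rate_rps, dt_s):
        self.heading += yaw_rate_rps * dt_s
        self.x += speed_mps * dt_s * math.cos(self.heading)
        self.y += speed_mps * dt_s * math.sin(self.heading)

    def correct(self, gnss_x, gnss_y):
        self.x += self.gnss_gain * (gnss_x - self.x)
        self.y += self.gnss_gain * (gnss_y - self.y)

est = DeadReckoningWithGnss()
est.predict(speed_mps=10.0, yaw_rate_rps=0.02, dt_s=0.1)  # odometer + gyro step
est.correct(gnss_x=1.05, gnss_y=0.02)                     # GNSS fix arrives
print(est.x, est.y, est.heading)
```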

3차원 시각 센서를 탑재한 로봇의 Hand/Eye 캘리브레이션 (Hand/Eye calibration of Robot arms with a 3D visual sensing system)

  • 김민영;노영준;조형석;김재훈
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2000년도 제15차 학술회의논문집
    • /
    • pp.76-76
    • /
    • 2000
  • The calibration of a robot system with a visual sensor consists of robot, hand-to-eye, and sensor calibration. This paper describes a new technique for computing the 3D position and orientation of a 3D sensor system relative to the end-effector of a robot manipulator in an eye-on-hand configuration. When the 3D coordinates of the feature points at each robot pose and the relative robot motion between two poses are known, a homogeneous equation of the form AX = XB is derived. To solve for X uniquely, two robot arm movements are required, forming a system of two equations: A₁X = XB₁ and A₂X = XB₂. A closed-form solution to this system is developed, and the constraints for the existence of a solution are described in detail. Test results from a series of simulations show that this technique is simple, efficient, and accurate for hand/eye calibration.

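The equation AX = XB above is the standard hand/eye formulation: A is the end-effector motion between two poses, B is the corresponding motion seen by the sensor, and X is the unknown sensor-to-end-effector transform. The paper's own closed-form solution is not reproduced here; the sketch below solves the same system with a commonly used least-squares approach (rotation from the rotation-axis correspondence, then translation from (R_A − I)·t_X = R_X·t_B − t_A), with all inputs assumed.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def solve_hand_eye(A_list, B_list):
    """Solve AX = XB for X given >= 2 motion pairs (4x4 homogeneous matrices).
    Rotation: the rotation axes satisfy alpha_i = R_X * beta_i, so R_X is the
    best-fit rotation between the two axis sets (SVD / Kabsch). Translation:
    stack (R_Ai - I) t_X = R_X t_Bi - t_Ai and solve by least squares."""
    alphas = [Rotation.from_matrix(A[:3, :3]).as_rotvec() for A in A_list]
    betas  = [Rotation.from_matrix(B[:3, :3]).as_rotvec() for B in B_list]
    M = sum(np.outer(b, a) for a, b in zip(alphas, betas))
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R_X = Vt.T @ D @ U.T
    lhs = np.vstack([A[:3, :3] - np.eye(3) for A in A_list])
    rhs = np.concatenate([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(A_list, B_list)])
    t_X, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, t_X
    return X
```

As the abstract notes, at least two motions are needed for a unique solution, and the usual existence constraint is that their rotation axes must not be parallel; OpenCV's cv2.calibrateHandEye provides ready-made implementations of several published solutions to the same problem.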

미지 환경에서의 깊이지도를 이용한 쿼드로터 착륙방식 성능 비교 (Performance Comparison of Depth Map Based Landing Methods for a Quadrotor in Unknown Environment)

  • 최종혁;박종호;임재성
    • 한국항공우주학회지
    • /
    • Vol. 50, No. 9
    • /
    • pp.639-646
    • /
    • 2022
  • This paper analyzes the performance of landing-site selection algorithms for a quadrotor based on information acquired from a depth map in an environment that is unknown in advance. The guidance and control system of the UAV consists of a trajectory-planning guidance system and position and attitude controllers. A landing site is selected using depth information acquired by a stereo vision sensor mounted on a downward-facing gimbal system. Flatness is computed from the maximum depth difference within a predefined region of the depth map and the distance from the UAV. Three landing methods are proposed and their performance is compared using several performance indices, including UAV travel distance, map accuracy, and obstacle response time.
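
The flatness criterion described here (maximum depth difference within a predefined window of the depth map, weighed against distance from the UAV) can be illustrated with a few lines of array code. The window size, weighting, and scoring below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def select_landing_cell(depth_map, win=15, distance_weight=0.01):
    """Scan square windows of a depth map and return the centre (row, col) of the
    window with the best score: small max-min depth spread (flat) and, as a weaker
    criterion, small mean depth (close to the UAV)."""
    h, w = depth_map.shape
    best, best_rc = np.inf, None
    for r in range(0, h - win, win):
        for c in range(0, w - win, win):
            patch = depth_map[r:r + win, c:c + win]
            spread = patch.max() - patch.min()        # flatness: depth variation in window
            score = spread + distance_weight * patch.mean()
            if score < best:
                best, best_rc = score, (r + win // 2, c + win // 2)
    return best_rc, best

# Hypothetical depth map: a gently sloped ground plane with a boulder-like bump.
depth = np.fromfunction(lambda r, c: 5.0 + 0.002 * r, (120, 160))
depth[40:60, 70:90] -= 0.8                            # obstacle sticking up toward the camera
print(select_landing_cell(depth))
```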

Unmanned Vehicle System Configuration using All Terrain Vehicle

  • Moon, Hee-Chang;Park, Eun-Young;Kim, Jung-Ha
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2004년도 ICCAS
    • /
    • pp.1550-1554
    • /
    • 2004
  • This paper deals with the configuration of an unmanned vehicle system based on an all-terrain vehicle (ATV). Many research institutes and universities study and develop unmanned vehicle systems and control algorithms, and such vehicles are now being applied to military devices and to space and deep-sea exploration, where they can carry out tasks that are difficult or dangerous for humans. In our previous research, we used a 1/10-scale radio-controlled (RC) vehicle and built an unmanned vehicle system using ultrasonic sensors, a CCD camera, and other sensors for motion control. We designed a lane detection algorithm using the vision system and an obstacle detection and avoidance algorithm using the ultrasonic and infrared sensors. As the system grew, it became difficult to fit everything on the 1/10-scale RC car, so a new vehicle was needed that is larger than the 1/10-scale RC car but smaller than a full-size vehicle. An ATV has a structure similar to a full-size vehicle while being smaller. In this research, we build an unmanned vehicle using an ATV and explain the control theory of each component.

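The lane detection mentioned in the entry above is not specified in detail; a common baseline for vision-based lane detection is edge detection followed by a probabilistic Hough line transform, sketched below with OpenCV. The thresholds, region of interest, and file names are assumptions.

```python
import cv2
import numpy as np

def detect_lane_lines(bgr_frame):
    """Baseline lane-line detector: grayscale -> Canny edges -> probabilistic Hough lines,
    restricted to the lower half of the frame where lane markings normally appear."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    mask = np.zeros_like(edges)
    mask[edges.shape[0] // 2:, :] = 255          # keep only the lower half of the image
    edges = cv2.bitwise_and(edges, mask)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [l[0] for l in lines]  # each as (x1, y1, x2, y2)

frame = cv2.imread("road_frame.jpg")             # hypothetical input frame
if frame is not None:
    for x1, y1, x2, y2 in detect_lane_lines(frame):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imwrite("road_frame_lanes.jpg", frame)
```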

A High Speed Vision Algorithms for Axial Motion Sensor

  • Mousset, Stephane;Miche, Pierre;Bensrhair, Abdelaziz;Lee, Sang-Goog
    • 센서학회지
    • /
    • Vol. 7, No. 6
    • /
    • pp.394-400
    • /
    • 1998
  • In this paper, we present a robust and fast method that enables real-time computation of the axial motion component of different points of a scene from a stereo image sequence. The aim of our method is to establish axial motion maps by computing a range of disparity maps. We propose a solution in two steps. In the first step, we estimate the motion of an image point with low-level computation using a detection-estimation structure. In the second step, we use the neighbourhood information of the image point with a morphological operation. The motion maps are established in constant computation time without spatio-temporal matching.

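The axial motion map described above can be related to the disparity maps of two successive stereo frames: with depth Z = f·B/d, the axial displacement of a pixel is the change in Z between frames. The array sketch below illustrates only this relationship; the paper's detection-estimation structure and morphological post-processing are not reproduced, and all parameters are assumed.

```python
import numpy as np

def axial_motion_map(disp_prev, disp_curr, focal_px, baseline_m, frame_interval_s):
    """Per-pixel axial velocity (m/s, positive = receding) from two disparity maps
    of a rectified stereo sequence. Pixels with invalid (non-positive) disparity get NaN."""
    valid = (disp_prev > 0) & (disp_curr > 0)
    z_prev = np.where(valid, focal_px * baseline_m / np.where(valid, disp_prev, 1), np.nan)
    z_curr = np.where(valid, focal_px * baseline_m / np.where(valid, disp_curr, 1), np.nan)
    return (z_curr - z_prev) / frame_interval_s

# Hypothetical disparity maps (px) for two frames of an approaching surface.
d0 = np.full((4, 4), 20.0)
d1 = np.full((4, 4), 22.0)
print(axial_motion_map(d0, d1, focal_px=800.0, baseline_m=0.2, frame_interval_s=0.04))
```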

차량 플랫폼에 최적화한 자차량 에고 모션 추정에 관한 연구 (A Study on Vehicle Ego-motion Estimation by Optimizing a Vehicle Platform)

  • 송문형;신동호
    • 제어로봇시스템학회논문지
    • /
    • Vol. 21, No. 9
    • /
    • pp.818-826
    • /
    • 2015
  • This paper presents a novel methodology for estimating vehicle ego-motion, i.e. the tri-axis linear and angular velocities, using a stereo vision sensor and a 2G1Y sensor (longitudinal acceleration, lateral acceleration, and yaw rate). The estimated ego-motion information can be used to predict the future ego-path and to improve the accuracy of the 3D coordinates of obstacles by compensating for disturbances caused by vehicle movement, notably for collision avoidance systems. To incorporate vehicle dynamic characteristics into the ego-motion estimation, the state evolution model of the Kalman filter is augmented with lateral vehicle dynamics, and vanishing point estimation is also taken into account, because the optical flow radiates from a vanishing point that may vary with vehicle pitch motion. Experimental results based on real-world data show the effectiveness of the proposed methodology in terms of accuracy.
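
The filter described in this entry (a Kalman filter whose state model is augmented with vehicle dynamics and vanishing-point estimation) is not reproduced here; the sketch below shows only the generic predict/update skeleton such an ego-motion filter builds on, with a one-dimensional longitudinal-velocity state driven by the measured acceleration and corrected by a stereo-vision velocity measurement. All noise values are assumptions.

```python
class VelocityKalman1D:
    """Minimal 1-state Kalman filter: longitudinal velocity predicted from measured
    acceleration (e.g. a 2G1Y sensor) and corrected with a stereo-vision velocity estimate."""

    def __init__(self, v0=0.0, p0=1.0, q_accel=0.5, r_vision=0.4):
        self.v, self.P = v0, p0
        self.q, self.r = q_accel, r_vision   # process / measurement noise variances

    def predict(self, accel_mps2, dt_s):
        self.v += accel_mps2 * dt_s          # constant-acceleration propagation
        self.P += self.q * dt_s

    def update(self, v_vision_mps):
        K = self.P / (self.P + self.r)       # Kalman gain (scalar case)
        self.v += K * (v_vision_mps - self.v)
        self.P *= (1.0 - K)

kf = VelocityKalman1D(v0=15.0)
kf.predict(accel_mps2=0.8, dt_s=0.05)        # IMU-driven prediction
kf.update(v_vision_mps=15.2)                 # stereo-vision velocity correction
print(round(kf.v, 3))
```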

IMU 센서와 비전 시스템을 활용한 달 탐사 로버의 위치추정 알고리즘 (Localization Algorithm for Lunar Rover using IMU Sensor and Vision System)

  • 강호선;안종우;임현수;황슬우;천유영;김은한;이장명
    • 로봇학회논문지
    • /
    • Vol. 14, No. 1
    • /
    • pp.65-73
    • /
    • 2019
  • In this paper, we propose an algorithm that estimates the location of a lunar rover using an IMU and a vision system, instead of the dead-reckoning method based on an IMU and wheel encoders, which has difficulty estimating the exact travelled distance due to accumulated error and slip. First, because the magnetic field in the lunar environment is not uniform, unlike on Earth, only the accelerometer and gyro data are used for localization. These data are fed to an extended Kalman filter to estimate the roll, pitch, and yaw Euler angles of the exploration rover. In addition, the lunar module has a distinctive color that does not otherwise appear in the lunar environment, so it can be reliably recognized by applying an HSV color filter to the stereo images taken by the rover. The distance between the rover and the lunar module is then estimated through a SIFT feature-point matching algorithm and geometry. Finally, the estimated Euler angles and distances are used to estimate the current position of the rover relative to the lunar module. The performance of the proposed algorithm is compared with a conventional algorithm to show its superiority.
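
The recognition and ranging steps in this entry (HSV color filtering of the stereo images, SIFT feature matching, then distance from stereo geometry) can be sketched with OpenCV as below. The color range, camera parameters, and matching settings are assumptions; the paper's exact pipeline and the EKF orientation filter are not reproduced.

```python
import cv2
import numpy as np

def module_distance(left_bgr, right_bgr, hsv_lo, hsv_hi, focal_px, baseline_m):
    """Estimate the distance to a distinctively colored target from a rectified stereo
    pair: mask the color in both images, match SIFT features inside the masks, and
    convert the median disparity of the matches to depth (Z = f*B/d)."""
    masks, grays = [], []
    for img in (left_bgr, right_bgr):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        masks.append(cv2.inRange(hsv, hsv_lo, hsv_hi))
        grays.append(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
    sift = cv2.SIFT_create()
    kps_l, des_l = sift.detectAndCompute(grays[0], masks[0])
    kps_r, des_r = sift.detectAndCompute(grays[1], masks[1])
    if des_l is None or des_r is None:
        return None
    matches = cv2.BFMatcher().knnMatch(des_l, des_r, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]   # Lowe ratio test
    disparities = [kps_l[m.queryIdx].pt[0] - kps_r[m.trainIdx].pt[0] for m in good]
    disparities = [d for d in disparities if d > 0]
    if not disparities:
        return None
    return focal_px * baseline_m / float(np.median(disparities))

# Hypothetical color range and calibration values (not from the paper):
lo, hi = np.array([100, 120, 80]), np.array([130, 255, 255])   # e.g. a blue module
# dist = module_distance(cv2.imread("left.png"), cv2.imread("right.png"),
#                        lo, hi, focal_px=700.0, baseline_m=0.12)
```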

초음파 센서와 카메라를 이용한 거리측정 시스템 설계 (Design of range measurement systems using a sonar and a camera)

  • 문창수;도용태
    • 센서학회지
    • /
    • Vol. 14, No. 2
    • /
    • pp.116-124
    • /
    • 2005
  • In this paper, range measurement systems are designed using an ultrasonic sensor and a camera. An ultrasonic sensor provides a range measurement to a target quickly and simply, but its low resolution is a disadvantage. We tackle this problem by employing a camera. Instead of using a stereoscopic sensor, which is widely used for 3D sensing but requires computationally intensive stereo matching, range is measured by focusing and by structured lighting. For focusing, a straightforward focus measure named MMDH (min-max difference in histogram) is proposed and compared with existing techniques. In the structured lighting method, light stripes projected by a beam projector are used; compared with systems using a laser beam projector, the designed system can be built easily on a low budget. The system equation is derived by analysing the sensor geometry. A sensing scenario using the designed systems proceeds in two steps. First, when better accuracy is required, the measurements from ultrasonic sensing and from camera focusing are fused by maximum likelihood estimation (MLE). Second, when the target is within a range of particular interest, a range map of the target scene is obtained using the structured lighting technique. In experiments, the designed systems showed measurement accuracy of approximately 0.3 mm.
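
The first step of the sensing scenario above, fusing the ultrasonic and focus-based range measurements by MLE, reduces to inverse-variance weighting when both measurements are modeled as independent Gaussians. The sketch below shows that fusion rule; the noise standard deviations and readings are assumed values, not those of the designed system.

```python
def mle_fuse(range_sonar_m, sigma_sonar_m, range_focus_m, sigma_focus_m):
    """Maximum likelihood fusion of two independent Gaussian range measurements:
    inverse-variance weighted mean, with the variance of the fused estimate."""
    w1, w2 = 1.0 / sigma_sonar_m ** 2, 1.0 / sigma_focus_m ** 2
    fused = (w1 * range_sonar_m + w2 * range_focus_m) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Hypothetical measurements: coarse sonar reading and finer focus-based reading.
fused_range, fused_var = mle_fuse(0.512, 0.005, 0.5065, 0.001)
print(f"fused range: {fused_range*1000:.2f} mm (std {fused_var**0.5*1000:.2f} mm)")
```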