• Title/Summary/Keyword: Stereo Vision Sensor

A Study on the Sensor Calibration of Motion Capture System using PSD Sensor to Improve the Accuracy (PSD 센서를 이용한 모션캡쳐센서의 정밀도 향상을 위한 보정에 관한 연구)

  • Choi, Hun-Il;Jo, Yong-Jun;Ryu, Young-Kee
    • Proceedings of the KIEE Conference / 2004.11c / pp.583-585 / 2004
  • In this paper we deal with a calibration method for a low-cost motion capture system using a PSD (position sensitive detector) optical sensor. The PSD is used to measure the incident direction of the light from an LED marker: the ratio of the output currents at the electrodes of the PSD is proportional to the position at which the light, focused by a lens, strikes the sensor. To detect the direction of the light, the current outputs are converted into digital voltage values by op-amp circuits, a peak detector, and an A/D converter, and the incident position is computed from these digital values. Unfortunately, the non-linearity of the circuit leads to poor position accuracy. To overcome this problem, we compensate for the non-linearity using a least-squares fitting method. After the circuit non-linearity is compensated, the system shows considerably enhanced position accuracy.
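
As a toy illustration of the compensation step this abstract describes, the sketch below fits a polynomial correction from raw PSD readings to known positions by least squares. The cubic distortion model, the calibration grid, and the polynomial degree are assumptions for illustration, not the paper's actual circuit data.

```python
import numpy as np

# Calibration: known marker positions vs. raw PSD readings (illustrative data).
true_pos = np.linspace(-10.0, 10.0, 21)        # ground-truth positions [mm]
raw_pos = true_pos + 0.02 * true_pos**3        # distorted PSD output (assumed model)

# Fit a cubic correction raw -> true by least squares (degree is an assumption).
coeffs = np.polyfit(raw_pos, true_pos, deg=3)

def compensate(raw):
    """Map a raw PSD position reading to a linearized position estimate."""
    return np.polyval(coeffs, raw)

print(compensate(raw_pos[5]), "vs true", true_pos[5])   # approximate agreement
```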

Overview of sensor fusion techniques for vehicle positioning (차량정밀측위를 위한 복합측위 기술 동향)

  • Park, Jin-Won;Choi, Kae-Won
    • The Journal of the Korea institute of electronic communication sciences / v.11 no.2 / pp.139-144 / 2016
  • This paper provides an overview of recent trends in sensor fusion technologies for vehicle positioning. GNSS by itself cannot satisfy the precision and reliability required for autonomous driving. We survey sensor fusion techniques that combine the outputs of the GNSS with inertial navigation sensors such as an odometer and a gyroscope. Moreover, we overview landmark-based positioning, which matches landmarks detected by a lidar or a stereo vision sensor against high-precision digital maps.
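
The loosely coupled GNSS/dead-reckoning fusion this survey covers is commonly built on a Kalman filter. Below is a minimal 1D sketch that predicts with odometer speed and corrects with GNSS position fixes; the scalar geometry and all noise levels are assumptions, not taken from any surveyed system.

```python
import numpy as np

class GnssInsFuser:
    """1D loosely coupled fusion sketch: predict with odometer speed,
    correct with GNSS position fixes. Noise values are assumptions."""
    def __init__(self, dt=0.1):
        self.dt = dt
        self.x = np.zeros(2)                 # state: [position, velocity]
        self.P = np.eye(2)
        self.Q = np.diag([0.01, 0.1])        # process noise (assumed)
        self.R = np.array([[4.0]])           # GNSS noise, ~2 m std (assumed)
        self.H = np.array([[1.0, 0.0]])      # GNSS observes position only

    def predict(self, odo_speed):
        """Dead-reckoning step: propagate with the odometer-derived speed."""
        self.x = np.array([self.x[0] + odo_speed * self.dt, odo_speed])
        F = np.array([[1.0, self.dt], [0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def correct(self, gnss_pos):
        """Bound the dead-reckoning drift with a GNSS position fix."""
        y = gnss_pos - self.H @ self.x
        K = self.P @ self.H.T @ np.linalg.inv(self.H @ self.P @ self.H.T + self.R)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
```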

Hand/Eye calibration of Robot arms with a 3D visual sensing system (3차원 시각 센서를 탑재한로봇의 Hand/Eye 캘리브레이션)

  • Kim, Min-Young;Roh, Young-Jun;Cho, Hyung-Suk;Kim, Jae-Hoon
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2000.10a / pp.76-76 / 2000
  • The calibration of a robot system with a visual sensor consists of robot, hand-to-eye, and sensor calibration. This paper describes a new technique for computing the 3D position and orientation of a 3D sensor system relative to the end effector of a robot manipulator in an eye-on-hand robot configuration. When the 3D coordinates of the feature points at each robot movement and the relative robot motion between two movements are known, a homogeneous equation of the form AX = XB is derived. To solve for X uniquely, it is necessary to make two robot arm movements and form a system of two equations of the form $A_1X = XB_1$ and $A_2X = XB_2$. A closed-form solution to this system of equations is developed, and the constraints for the existence of a solution are described in detail. Test results from a series of simulations show that this technique is simple, efficient, and accurate for hand/eye calibration.
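
The AX = XB formulation has well-known closed-form solutions; the sketch below follows a Park-and-Martin-style construction (rotation recovered from the axis-angle vectors of the motion pairs, translation by stacked linear least squares). It is a generic solver under those assumptions, not necessarily the closed form derived in this paper.

```python
import numpy as np
from scipy.linalg import sqrtm, inv, lstsq
from scipy.spatial.transform import Rotation

def solve_hand_eye(As, Bs):
    """Closed-form AX = XB solver in the style of Park & Martin.
    As, Bs: lists of 4x4 homogeneous robot / sensor motions; at least
    two pairs with non-parallel rotation axes are required."""
    alphas = [Rotation.from_matrix(A[:3, :3]).as_rotvec() for A in As]
    betas = [Rotation.from_matrix(B[:3, :3]).as_rotvec() for B in Bs]
    if len(As) == 2:  # with exactly two motions, add the cross-product pair
        alphas.append(np.cross(alphas[0], alphas[1]))
        betas.append(np.cross(betas[0], betas[1]))
    M = sum(np.outer(b, a) for a, b in zip(alphas, betas))
    Rx = np.real(inv(sqrtm(M.T @ M)) @ M.T)         # rotation part of X
    # Translation from (R_A - I) t_X = R_X t_B - t_A, stacked over pairs.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx = lstsq(C, d)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```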

Performance Comparison of Depth Map Based Landing Methods for a Quadrotor in Unknown Environment (미지 환경에서의 깊이지도를 이용한 쿼드로터 착륙방식 성능 비교)

  • Choi, Jong-Hyuck;Park, Jongho;Lim, Jaesung
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.50 no.9 / pp.639-646 / 2022
  • Landing site searching algorithms are developed for a quadrotor using a depth map in an unknown environment. The guidance and control system of the Unmanned Aerial Vehicle (UAV) consists of a trajectory planner, a position controller, and an attitude controller. The landing site is selected based on the information of a depth map acquired by a stereo vision sensor mounted on a gimbal system pointing downwards. Flatness information is obtained from the maximum depth difference within a predefined depth map region, and the distance from the UAV is also considered. This study proposes three landing methods and compares their performance using various indices such as UAV travel distance, map accuracy, and obstacle response time.
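
A minimal sketch of the flatness test described here: each candidate window of the depth map is scored by its max-min depth spread and accepted when the spread falls below a threshold. The window size, threshold, and synthetic depth map are assumptions.

```python
import numpy as np

def find_landing_sites(depth, win=32, max_diff=0.15):
    """Return (row, col) centers of windows whose max-min depth spread
    is below max_diff, i.e. regions flat enough for touchdown."""
    sites = []
    rows, cols = depth.shape
    for r in range(0, rows - win + 1, win):
        for c in range(0, cols - win + 1, win):
            patch = depth[r:r + win, c:c + win]
            if np.nanmax(patch) - np.nanmin(patch) < max_diff:
                sites.append((r + win // 2, c + win // 2))
    return sites

depth_map = 5.0 + 0.01 * np.random.randn(240, 320)   # synthetic flat-ish scene
print(len(find_landing_sites(depth_map)), "candidate sites")
```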

Unmanned Vehicle System Configuration using All Terrain Vehicle

  • Moon, Hee-Chang;Park, Eun-Young;Kim, Jung-Ha
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.1550-1554 / 2004
  • This paper deals with an unmanned vehicle system configuration using an all terrain vehicle. Many research institutes and universities study and develop unmanned vehicle systems and control algorithms. Nowadays, they try to apply unmanned vehicles to military devices and to the exploration of space and the deep sea. These unmanned vehicles can help us with tasks that are difficult or dangerous to approach. In our lab's previous research on unmanned vehicles, we used a 1/10-scale radio-controlled vehicle and composed the unmanned vehicle system using ultrasonic sensors, a CCD camera, and various sensors for the vehicle's motion control. We designed a lane detecting algorithm using the vision system, and an obstacle detecting and avoidance algorithm using the ultrasonic and infrared sensors. As the system grew, it became hard to fit it on the 1/10-scale RC car, so we had to choose a new vehicle that is bigger than a 1/10-scale RC car but smaller than a real vehicle. An ATV (all terrain vehicle) has a structure similar to a real vehicle but a smaller size. In this research, we build an unmanned vehicle using an ATV and explain the control theory of each component.
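
As a hedged sketch of the kind of ultrasonic/infrared avoidance rule the abstract mentions, the snippet below maps three range readings to a steering and throttle command; the three-sensor layout and the distance thresholds are assumptions for illustration.

```python
def avoidance_command(left_m, center_m, right_m, stop_dist=0.5, slow_dist=1.5):
    """Map three range readings [m] to a (steering, throttle) command.
    Steering: -1 full left .. +1 full right; throttle: 0 .. 1."""
    if center_m < stop_dist and left_m < stop_dist and right_m < stop_dist:
        return 0.0, 0.0                      # boxed in: stop
    if center_m < slow_dist:
        # steer toward the more open side, slow down near the obstacle
        steer = 1.0 if right_m > left_m else -1.0
        return steer, max(0.2, center_m / slow_dist)
    return 0.0, 1.0                          # path clear: go straight

print(avoidance_command(2.0, 0.8, 3.0))      # expect a right turn, reduced speed
```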

A High Speed Vision Algorithms for Axial Motion Sensor

  • Mousset, Stephane;Miche, Pierre;Bensrhair, Abdelaziz;Lee, Sang-Goog
    • Journal of Sensor Science and Technology / v.7 no.6 / pp.394-400 / 1998
  • In this paper, we present a robust and fast method that enables real-time computation of the axial motion component of different points of a scene from a stereo image sequence. The aim of our method is to establish axial motion maps by computing a sequence of disparity maps. We propose a solution in two steps. In the first step, we estimate the motion of an image point with low-level computation using a detection-estimation structure. In the second step, we use the neighbourhood information of the image point with morphological operations. The motion maps are established in constant computation time, without spatio-temporal matching.
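
One way to realize the axial-motion map the abstract describes: depth follows from the stereo relation Z = f·b/d, and the per-pixel depth change between two successive disparity maps gives the axial component. The focal length, baseline, frame rate, and the median filter standing in for the authors' morphological operation are all assumptions.

```python
import numpy as np
from scipy import ndimage

def axial_motion_map(disp_prev, disp_curr, f=700.0, b=0.12, dt=1/30):
    """Per-pixel axial velocity [m/s] from two disparity maps [px]."""
    valid = (disp_prev > 0) & (disp_curr > 0)
    z_prev = np.where(valid, f * b / np.maximum(disp_prev, 1e-6), np.nan)
    z_curr = np.where(valid, f * b / np.maximum(disp_curr, 1e-6), np.nan)
    vz = (z_curr - z_prev) / dt
    # neighbourhood regularization: median filter stands in for the
    # morphological step (an assumption, not the authors' exact operator)
    return ndimage.median_filter(np.nan_to_num(vz), size=5)
```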

A Study on Vehicle Ego-motion Estimation by Optimizing a Vehicle Platform (차량 플랫폼에 최적화한 자차량 에고 모션 추정에 관한 연구)

  • Song, Moon-Hyung;Shin, Dong-Ho
    • Journal of Institute of Control, Robotics and Systems / v.21 no.9 / pp.818-826 / 2015
  • This paper presents a novel methodology for estimating vehicle ego-motion, i.e., the tri-axis linear and angular velocities, using a stereo vision sensor and a 2G1Y sensor (longitudinal acceleration, lateral acceleration, and yaw rate). The estimated ego-motion information can be utilized to predict the future ego-path and to improve the accuracy of the 3D coordinates of obstacles by compensating for disturbances from vehicle movement, notably for collision avoidance systems. To incorporate vehicle dynamic characteristics into the ego-motion estimation, the state evolution model of the Kalman filter has been augmented with lateral vehicle dynamics, and vanishing point estimation has also been taken into account, because the optical flow radiates from a vanishing point that may vary due to vehicle pitch motion. Experimental results based on real-world data show the effectiveness of the proposed methodology in terms of accuracy.
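
A bare-bones sketch of the filtering loop such a method rests on: a linear Kalman filter over the tri-axis velocities and yaw rate, corrected by stereo visual odometry and the 2G1Y yaw rate. The paper's actual contributions, the lateral-dynamics augmentation and the vanishing-point model, are deliberately omitted here, and all matrices are assumptions.

```python
import numpy as np

# State x = [vx, vy, vz, wz]: tri-axis linear velocities and yaw rate.
F = np.eye(4)                                # random-walk evolution (assumed)
Q = np.diag([0.05, 0.05, 0.05, 0.01])        # process noise (assumed)
H = np.eye(4)                                # stereo VO gives vx,vy,vz; 2G1Y gives wz
R = np.diag([0.2, 0.2, 0.2, 0.02])           # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle; z = [vx_vo, vy_vo, vz_vo, wz_imu]."""
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P
```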

Localization Algorithm for Lunar Rover using IMU Sensor and Vision System (IMU 센서와 비전 시스템을 활용한 달 탐사 로버의 위치추정 알고리즘)

  • Kang, Hosun;An, Jongwoo;Lim, Hyunsoo;Hwang, Seulwoo;Cheon, Yuyeong;Kim, Eunhan;Lee, Jangmyung
    • The Journal of Korea Robotics Society / v.14 no.1 / pp.65-73 / 2019
  • In this paper, we propose an algorithm that estimates the location of a lunar rover using an IMU and a vision system, instead of the dead-reckoning method using an IMU and encoders, which has difficulty estimating the exact distance traveled due to accumulated error and slip. First, since magnetic fields in the lunar environment are not uniform, unlike on Earth, only the acceleration and gyro sensor data were used for localization. These data were applied to an extended Kalman filter to estimate the roll, pitch, and yaw Euler angles of the exploration rover. Also, the lunar module has a distinctive color that does not occur elsewhere in the lunar environment; therefore, the lunar module is reliably recognized by applying an HSV color filter to the stereo images taken by the rover. Then, the distance between the exploration rover and the lunar module is estimated through a SIFT feature-point matching algorithm and geometry. Finally, the estimated Euler angles and distances are used to estimate the current position of the rover relative to the lunar module. The performance of the proposed algorithm is compared to that of the conventional algorithm to show its superiority.
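
An OpenCV-flavored sketch of the recognition-and-ranging pipeline described above: an HSV mask isolates the distinctively colored module, SIFT matches are found between the stereo views, and depth follows from Z = f·b/d. The HSV bounds, camera parameters, and ratio-test threshold are assumptions.

```python
import cv2
import numpy as np

def module_distance(left, right, f_px=800.0, baseline_m=0.3):
    """Distance to the color-coded lunar module from a rectified stereo pair.
    HSV bounds, focal length and baseline are illustrative assumptions."""
    hsv = cv2.cvtColor(left, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (100, 120, 80), (130, 255, 255))   # assumed hue band

    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(left, mask)    # features on the module only
    kp_r, des_r = sift.detectAndCompute(right, None)
    if des_l is None or des_r is None:
        return None

    good = []
    for pair in cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_l, des_r, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])                       # Lowe ratio test

    # Depth from the median disparity of matched points: Z = f * b / d.
    disps = [kp_l[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0] for m in good]
    disps = [d for d in disps if d > 0]
    return f_px * baseline_m / np.median(disps) if disps else None
```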

Design of range measurement systems using a sonar and a camera (초음파 센서와 카메라를 이용한 거리측정 시스템 설계)

  • Moon, Chang-Soo;Do, Yong-Tae
    • Journal of Sensor Science and Technology / v.14 no.2 / pp.116-124 / 2005
  • In this paper, range measurement systems are designed using an ultrasonic sensor and a camera. An ultrasonic sensor provides the range to a target quickly and simply, but its low resolution is a disadvantage. We tackle this problem by employing a camera. Instead of using a stereoscopic sensor, which is widely used for 3D sensing but requires computationally intensive stereo matching, the range is measured by focusing and by structured lighting. For focusing, a straightforward focus measure named MMDH (min-max difference in histogram) is proposed and compared with existing techniques. In the structured lighting method, light stripes projected by a beam projector are used; compared to systems using a laser beam projector, the designed system can be constructed easily on a low budget. The system equation is derived by analysing the sensor geometry. A sensing scenario using the designed systems proceeds in two steps. First, when better accuracy is required, the measurements from ultrasonic sensing and camera focusing are fused by MLE (maximum likelihood estimation). Second, when the target is within a range of particular interest, a range map of the target scene is obtained using the structured lighting technique. In experiments, the designed systems showed measurement accuracy of approximately 0.3 mm.
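
The abstract names the focus measure only as MMDH (min-max difference in histogram). The sketch below implements one plausible reading of that name, the spread between the largest and smallest bin counts of a patch's intensity histogram; this interpretation, the bin count, and the demo patches are assumptions rather than the paper's exact definition.

```python
import numpy as np

def mmdh(patch, bins=64):
    """One reading of 'min-max difference in histogram': the spread between
    the largest and smallest bin counts of the patch's intensity histogram.
    A strongly defocused patch concentrates intensities into few bins,
    inflating this spread; comparing the measure across focus settings
    then picks out the best-focused frame."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    return int(hist.max() - hist.min())

sharp = np.random.randint(0, 256, (32, 32))   # intensity-rich patch
flat = np.full((32, 32), 128)                 # featureless (blur-like) patch
print(mmdh(sharp), mmdh(flat))                # flat patch scores far higher
```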

AprilTag and Stereo Visual Inertial Odometry (A-SVIO) based Mobile Assets Localization at Indoor Construction Sites

  • Khalid, Rabia;Khan, Muhammad;Anjum, Sharjeel;Park, Junsung;Lee, Doyeop;Park, Chansik
    • International conference on construction engineering and project management / 2022.06a / pp.344-352 / 2022
  • Accurate indoor localization of construction workers and mobile assets is essential in safety management. Existing positioning methods based on GPS, wireless, vision, or sensor-based RTLS are error-prone or expensive in large-scale indoor environments. Tightly coupled sensor fusion mitigates these limitations. This research paper proposes a state-of-the-art positioning methodology that addresses the existing limitations by integrating Stereo Visual Inertial Odometry (SVIO) with fiducial landmarks called AprilTags. SVIO determines the relative position of the moving assets or workers from the initial starting point. This relative position is transformed into an absolute position when an AprilTag placed at one of various entry points is decoded. The proposed solution is tested in the NVIDIA ISAAC SIM virtual environment, where the trajectory of an indoor moving forklift is estimated. The results show accurate localization of the moving asset within any indoor or underground environment. The system can be utilized in various use cases to increase productivity and improve safety at construction sites, contributing towards 1) indoor monitoring of man-machinery coactivity for collision avoidance and 2) precise real-time knowledge of who is doing what and where.
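
The coordinate handoff at the heart of this method reduces to a few 4x4 transform compositions: SVIO tracks the camera in an arbitrary start frame, and decoding a tag with a known map pose anchors that frame absolutely. The frame names below are assumptions for illustration.

```python
import numpy as np

def absolute_pose(T_map_tag, T_cam_tag, T_start_cam_now, T_start_cam_at_tag):
    """Turn SVIO's relative pose into a map-frame pose once a tag is decoded.

    T_map_tag         : known map pose of the AprilTag (e.g. from site drawings)
    T_cam_tag         : tag pose in the camera frame when it was decoded
    T_start_cam_*     : SVIO camera poses in its arbitrary start frame
    """
    # Anchor: where the SVIO start frame sits in the map frame.
    T_map_cam_at_tag = T_map_tag @ np.linalg.inv(T_cam_tag)
    T_map_start = T_map_cam_at_tag @ np.linalg.inv(T_start_cam_at_tag)
    # Any later relative pose now maps to an absolute one.
    return T_map_start @ T_start_cam_now
```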
