• Title/Summary/Keyword: vision sensor


Light-Adaptive Vision System for Remote Surveillance Using an Edge Detection Vision Chip

  • Choi, Kyung-Hwa;Jo, Sung-Hyun;Seo, Sang-Ho;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology / v.20 no.3 / pp.162-167 / 2011
  • In this paper, we propose a vision system using a field-programmable gate array (FPGA) and a smart vision chip. The output of the vision chip varies with illumination conditions, which makes the chip well suited to surveillance in dynamic environments. However, because the output swing of the smart vision chip is too small for the FPGA to reliably confirm the warning signal, a modification was needed to obtain a reliable signal. The proposed system is based on the transmission control protocol/internet protocol (TCP/IP), which enables monitoring from a remote location. The warning signal indicates that an object is too near.
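
The abstract gives no implementation details; the following is a minimal Python sketch of the TCP/IP remote-warning idea, where the port, bind address, and one-byte message format are assumptions rather than anything from the paper:

```python
import socket

# Hypothetical port and message format; the paper only states that the
# proximity warning is reported to a remote monitor over TCP/IP.
HOST, PORT = "0.0.0.0", 5000

def serve_warning(read_warning_flag):
    """Send a one-byte warning flag to a connected remote monitor."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            # b"\x01" = object too near, b"\x00" = clear
            conn.sendall(b"\x01" if read_warning_flag() else b"\x00")
```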

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.381-390 / 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited sensing capability is advantageous because the sensors complement and cooperate with each other to yield better information about the environment. In this paper, for robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, the environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using the monocular vision, the robot extracts image features consisting of vertical edge lines from the input camera images and uses them as natural landmark points. With the laser structured light sensor, it uses geometric features composed of corners and planes, extracted from range data at a constant height above the navigation floor, as natural landmark shapes. Although either feature group alone is sometimes sufficient to localize the robot, the features from both sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
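
As a rough illustration of reliability-weighted Bayesian fusion of two independent sensor estimates (a minimal sketch: the paper's per-sensor reliability functions are calibrated experimentally, and their exact form is not given in the abstract):

```python
import numpy as np

def bayes_fuse(z_cam, var_cam, z_laser, var_laser):
    """Fuse two independent Gaussian estimates of the same pose variable.

    The variances stand in for the per-sensor reliability functions the
    paper predefines through experiments (assumed Gaussian here).
    """
    w_cam, w_laser = 1.0 / var_cam, 1.0 / var_laser  # information weights
    var = 1.0 / (w_cam + w_laser)                    # fused variance
    z = var * (w_cam * z_cam + w_laser * z_laser)    # fused estimate
    return z, var

# Example: camera says x = 1.02 m (noisy), laser says x = 0.98 m (precise);
# the fused estimate leans toward the more reliable laser reading.
print(bayes_fuse(1.02, 0.04, 0.98, 0.01))
```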

Accurate Range-free Localization Based on Quantum Particle Swarm Optimization in Heterogeneous Wireless Sensor Networks

  • Wu, Wenlan;Wen, Xianbin;Xu, Haixia;Yuan, Liming;Meng, Qingxia
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.3 / pp.1083-1097 / 2018
  • This paper presents a novel range-free localization algorithm based on quantum particle swarm optimization (QPSO). The proposed algorithm estimates the distance between two non-neighboring sensors in multi-hop heterogeneous wireless sensor networks in which the nodes' communication ranges differ. First, we construct a new cumulative distribution function of expected hop progress for sensor nodes with different transmission capabilities. The distance between any two nodes can then be computed accurately and efficiently by deriving the mathematical expectation of the cumulative distribution function. Finally, the QPSO algorithm is used to improve the positioning accuracy. Simulation results show that the proposed algorithm is superior in localization accuracy and efficiency for both random and uniform node placements in heterogeneous wireless sensor networks.
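
A minimal QPSO sketch in Python, applied to a toy anchor-distance residual; the swarm parameters, bounds, and objective below are illustrative stand-ins, not the paper's expected-hop-progress formulation:

```python
import numpy as np

def qpso(cost, dim, n_particles=30, iters=200, beta=0.75, bounds=(0.0, 100.0)):
    """Minimal quantum particle swarm optimizer (QPSO) sketch."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        mbest = pbest.mean(axis=0)                       # mean best position
        phi = np.random.rand(n_particles, dim)
        attract = phi * pbest + (1 - phi) * g            # local attractors
        u = np.random.rand(n_particles, dim)
        sign = np.where(np.random.rand(n_particles, dim) < 0.5, -1.0, 1.0)
        x = np.clip(attract + sign * beta * np.abs(mbest - x) * np.log(1.0 / u),
                    lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g

# Example: recover an unknown node at (40, 60) from distances to 3 anchors.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
d = np.linalg.norm(anchors - np.array([40.0, 60.0]), axis=1)
est = qpso(lambda q: np.sum((np.linalg.norm(anchors - q, axis=1) - d) ** 2), dim=2)
print(est)
```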

Target Tracking Control of a Quadrotor UAV using Vision Sensor

  • Yoo, Min-Goo;Hong, Sung-Kyung
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.40 no.2 / pp.118-128 / 2012
  • The goal of this paper is to design a target tracking controller for a quadrotor micro UAV using a vision sensor. First, a mathematical model of the quadrotor was estimated with the prediction error method (PEM) from experimental input/output flight data, and the estimated model was then validated by comparison with new experimental flight data. Next, the target tracking controller was designed with the linear quadratic regulator (LQR) method based on the estimated model. The relative distance between the object and the quadrotor was obtained by a vision sensor, and the altitude by an ultrasonic sensor. Finally, the performance of the designed target tracking controller was evaluated through flight tests.
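
A minimal sketch of the LQR gain computation; the double-integrator plant below is an illustrative stand-in for one tracking axis, not the PEM-identified quadrotor model from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR gain K such that u = -K x minimizes the
    standard quadratic cost, via the algebraic Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Illustrative stand-in: position error and its rate for one axis.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[1.0]]))
print(K)
```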

RESEARCH ON AUTONOMOUS LAND VEHICLE FOR AGRICULTURE

  • Matsuo, Yosuke;Yukumoto, Isamu
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 1993.10a / pp.810-819 / 1993
  • An autonomous land vehicle for agriculture (ALVA-II) was developed. A prototype vehicle was made by modifying a commercial tractor. A navigation sensor system with a geomagnetic sensor performed the autonomous operations of ALVA-II, such as rotary tilling with headland turnings. A navigation sensor system with a machine vision system was also investigated to control ALVA-II so that it follows a work boundary.
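
As a sketch of geomagnetic heading-based steering (the abstract gives no controller details, so the proportional law, gain, and limits below are assumptions):

```python
def steer_to_heading(target_deg, compass_deg, gain=0.8, max_steer_deg=30.0):
    """Proportional steering command from geomagnetic heading error."""
    # Wrap the error into [-180, 180) so the vehicle turns the short way.
    err = (target_deg - compass_deg + 180.0) % 360.0 - 180.0
    return max(-max_steer_deg, min(max_steer_deg, gain * err))

print(steer_to_heading(90.0, 75.0))  # heading error 15 deg -> steer 12 deg
```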


Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International journal of advanced smart convergence / v.9 no.3 / pp.232-238 / 2020
  • In this paper, we study an aerial object detection and position estimation algorithm for the safety of UAVs that fly beyond visual line of sight (BVLOS). We use a vision sensor and a LiDAR to detect objects: a YOLOv2 CNN architecture detects objects in the 2D image, and a clustering method detects objects in the point cloud data acquired from the LiDAR. With a single sensor, the detection rate can degrade in specific situations depending on the sensor's characteristics, so when the detection result from one sensor is absent or false, the detection accuracy needs to be complemented. To do so, we use a Kalman filter and fuse the results of the individual sensors to improve detection accuracy. We estimate the 3D position of the object from the pixel position of the object and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm in simulation using the Gazebo simulator.
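
A minimal constant-velocity Kalman filter sketch of the fusion idea: each sensor's detection updates a common track, and prediction carries the track through frames where a detection is missing. The noise parameters are assumptions; the paper does not list its filter settings:

```python
import numpy as np

class ConstantVelocityKF:
    """1D constant-velocity Kalman filter for a single tracked coordinate."""

    def __init__(self, q=0.1, r=0.5, dt=0.05):
        self.x = np.zeros(2)                          # [position, velocity]
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])    # motion model
        self.Q = q * np.eye(2)                        # process noise
        self.H = np.array([[1.0, 0.0]])               # we measure position
        self.R = np.array([[r]])                      # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([z]) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P

# Vision and LiDAR detections update the same track; None = missed detection.
kf = ConstantVelocityKF()
for z_cam, z_lidar in [(10.0, 10.2), (None, 10.4), (10.6, None)]:
    kf.predict()
    for z in (z_cam, z_lidar):
        if z is not None:
            kf.update(z)
    print(round(kf.x[0], 2))
```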

A Study on the 3-dimensional feature measurement system for OMM using multiple-sensors

  • 권양훈;윤길상;조명우
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2002.10a / pp.158-163 / 2002
  • This paper presents a multiple-sensor system for rapid, high-precision coordinate data acquisition in the on-machine measurement (OMM) process. In this research, three sensors (touch probe, laser, and vision sensor) are integrated to obtain more accurate measuring results. The touch-type probe has high accuracy but is time-consuming. The vision sensor can rapidly acquire many point data over a spatial range, but its accuracy is lower than that of the other sensors, and it cannot acquire data for invisible areas. The laser sensor has intermediate accuracy and measuring speed and can acquire data for sharp or rounded edges and for features with very small holes and/or grooves; however, its system structure limits the range over which it can be used. In this research, a new optimal sensor integration method for OMM is proposed, integrating the multiple sensors to accomplish more effective inspection planning. To verify the effectiveness of the proposed method, simulation and experimental works are performed, and the results are analyzed.
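
A toy illustration of per-feature sensor selection under the accuracy/speed trade-offs the abstract describes; the numbers and the rule are assumptions, as the paper's actual planner is not detailed in the abstract:

```python
# Hypothetical sensor characteristics reflecting the abstract's ordering:
# touch probe most accurate but slowest, vision fastest but least accurate.
SENSORS = {
    "touch_probe": {"accuracy_um": 1,  "pts_per_s": 1},
    "laser":       {"accuracy_um": 10, "pts_per_s": 1000},
    "vision":      {"accuracy_um": 50, "pts_per_s": 100000},
}

def pick_sensor(required_accuracy_um):
    """Pick the fastest sensor that still meets the feature's tolerance."""
    ok = {n: s for n, s in SENSORS.items()
          if s["accuracy_um"] <= required_accuracy_um}
    return max(ok, key=lambda n: ok[n]["pts_per_s"]) if ok else None

print(pick_sensor(20))   # laser
print(pick_sensor(100))  # vision
```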


A Study on Real-time Control of Bead Height and Joint Tracking Using Laser Vision Sensor

  • Kim, H. K.;Park, H.
    • International Journal of Korean Welding Society / v.4 no.1 / pp.30-37 / 2004
  • There have been continuous efforts to automate welding processes. This automation falls into two categories: weld seam tracking and weld quality evaluation. Recently, attempts to achieve these two functions simultaneously have been increasing. For the study presented in this paper, a vision sensor was made, a vision system was constructed, and with it the 3-dimensional geometry of the bead is measured on-line. Because welding is a nonlinear process, a fuzzy controller is designed for this application. With it, an adaptive control system is proposed that acquires the bead height and the coordinates of points on the bead along the horizontal fillet joint, performs seam tracking with those data, and at the same time controls the bead geometry to a uniform shape. A communication system that enables communication with the industrial robot is designed to control the bead geometry and to track the weld seam. Experiments were made with varied offset angles from the pre-taught weld path, and the adaptive system showed favorable results.
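
A toy single-input fuzzy controller sketch of the bead-geometry idea, mapping bead-height error to a travel-speed correction; the membership functions, ranges, and rule outputs are assumptions, not the paper's rule base:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_speed_correction(height_err_mm):
    """Bead-height error (mm) -> travel-speed correction (%),
    defuzzified by a weighted average of the rule consequents."""
    mu_low  = tri(height_err_mm, -2.0, -1.0, 0.0)   # bead too low
    mu_ok   = tri(height_err_mm, -1.0,  0.0, 1.0)
    mu_high = tri(height_err_mm,  0.0,  1.0, 2.0)   # bead too high
    # Assumed rules: low bead -> slow down (more deposition per length),
    # high bead -> speed up.
    outs = np.array([-20.0, 0.0, +20.0])
    mus = np.array([mu_low, mu_ok, mu_high])
    return float(mus @ outs / mus.sum()) if mus.sum() > 0 else 0.0

print(fuzzy_speed_correction(0.5))  # +10% travel speed
```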


Deburring of Irregular Burr using Vision and Force Sensors

  • Choi, G.J.;Kim, Y.W.;Shin, S.W.;Ahn, D.S.
    • Journal of Power System Engineering / v.2 no.3 / pp.83-88 / 1998
  • This paper presents an efficient control algorithm that removes irregular burrs using vision and force sensors. In automated robotic deburring, the reference force should be adapted to the burr profile in order to prevent tool breakage. In this paper, (1) the burr profile is recognized by the vision sensor and the reference force is then calculated from it, and (2) a deburring expert's skill is transferred to the robot. Finally, the performance of the robot is evaluated through simulation and experiment.
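
A minimal sketch of a vision-adapted reference force; the linear mapping and its constants are assumptions, standing in for however the paper derives the reference force from the recognized burr profile:

```python
def reference_force(burr_height_mm, f_base=5.0, k=2.0, f_max=15.0):
    """Scale the deburring reference force (N) with the burr height
    measured by the vision sensor, capped to protect the tool."""
    return min(f_base + k * burr_height_mm, f_max)

# Larger burrs get a larger, but bounded, reference force.
print([reference_force(h) for h in (0.5, 2.0, 10.0)])  # [6.0, 9.0, 15.0]
```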


Recognition Performance of Vestibular-Ocular Reflex Based Vision Tracking System for Mobile Robot

  • Park, Jae-Hong;Bhan, Wook;Choi, Tae-Young;Kwon, Hyun-Il;Cho, Dong-Il;Kim, Kwang-Soo
    • Journal of Institute of Control, Robotics and Systems / v.15 no.5 / pp.496-504 / 2009
  • This paper presents the recognition performance of a vestibulo-ocular reflex (VOR) based vision tracking system for a mobile robot. The VOR is a reflex eye movement that, during head movements, produces an eye movement in the direction opposite to the head movement, keeping the image of the object of interest centered on the retina. We applied this physiological concept to a vision tracking system to achieve high recognition performance in mobile environments. The proposed method was implemented in a vision tracking system consisting of a motion sensor module and an actuation module with a vision sensor. We tested the developed system on an x/y stage for linear motion and on a rate table for angular motion. The experimental results show that the recognition rates of the VOR-based method are three times those of a conventional non-VOR vision system, mainly because the VOR-based tracking keeps the vision system's line of sight fixed on the object, reducing the blurring of images in a dynamic environment. This suggests that the VOR concept proposed in this paper can be applied efficiently to vision tracking systems for mobile robots.
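
A one-line sketch of the VOR idea: drive the camera actuator at the negative of the measured body rotation rate so the line of sight stays on the target. The unity gain is an assumption; the paper's actuation-module gains are not given in the abstract:

```python
def vor_pan_rate(gyro_yaw_rate_dps, gain=1.0):
    """Counter-rotate the camera against measured body rotation,
    emulating the vestibulo-ocular reflex."""
    return -gain * gyro_yaw_rate_dps

print(vor_pan_rate(12.5))  # robot turns +12.5 deg/s -> camera pans -12.5 deg/s
```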