• Title/Abstract/Keywords: Vision sensor

822 search results (processing time: 0.021 s)

Light-Adaptive Vision System for Remote Surveillance Using an Edge Detection Vision Chip

  • Choi, Kyung-Hwa;Jo, Sung-Hyun;Seo, Sang-Ho;Shin, Jang-Kyoo
    • 센서학회지 / Vol. 20, No. 3 / pp. 162-167 / 2011
  • In this paper, we propose a vision system using a field programmable gate array (FPGA) and a smart vision chip. The output of the vision chip varies with illumination conditions, which makes the chip suitable for surveillance in a dynamic environment. However, because the output swing of the smart vision chip is too small for the FPGA to reliably confirm the warning signal, a modification was needed to obtain a reliable signal. The proposed system is based on the transmission control protocol/internet protocol (TCP/IP), which enables monitoring from a remote place. The warning signal indicates that some object is too near.

이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합 (Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot)

  • 김민영;안상태;조형석
    • 제어로봇시스템학회논문지 / Vol. 16, No. 4 / pp. 381-390 / 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. A combination of sensors with different characteristics and limited sensing capabilities offers advantages in terms of complementarity and cooperation for obtaining better information about the environment. In this paper, for robust self-localization of a mobile robot with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot extracts vertical edge lines from the input camera images and uses them as natural landmark points. With the laser structured light sensor, the robot uses geometrical features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although either feature group alone is sometimes sufficient to localize the robot, the features from both sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the results are discussed in detail.
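The core of the fusion step above, weighting each sensor's estimate by a predefined reliability, can be sketched as inverse-variance fusion of two Gaussian estimates of the same quantity. This is a minimal illustration: the scalar formulation and the numbers are assumptions, not the paper's actual reliability functions.

```python
def fuse_gaussian(mu_a, var_a, mu_b, var_b):
    """Fuse two independent Gaussian estimates of the same quantity.
    Each sensor is weighted by its inverse variance (its reliability)."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    mu = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)
    return mu, var

# Hypothetical readings: vision says x = 2.0 m (variance 0.04),
# the laser structured light sensor says x = 2.2 m (variance 0.01).
mu, var = fuse_gaussian(2.0, 0.04, 2.2, 0.01)
# -> mu = 2.16, var = 0.008: the fused estimate leans toward the
#    more reliable sensor and is more certain than either alone.
```

The fused variance is always smaller than the smaller input variance, which is the formal sense in which the two sensors "complement" each other.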

Accurate Range-free Localization Based on Quantum Particle Swarm Optimization in Heterogeneous Wireless Sensor Networks

  • Wu, Wenlan;Wen, Xianbin;Xu, Haixia;Yuan, Liming;Meng, Qingxia
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 3 / pp. 1083-1097 / 2018
  • This paper presents a novel range-free localization algorithm based on quantum particle swarm optimization (QPSO). The proposed algorithm can estimate the distance between two non-neighboring sensors in multi-hop heterogeneous wireless sensor networks where the nodes' communication ranges differ. First, we construct a new cumulative distribution function of expected hop progress for sensor nodes with different transmission capabilities. The distance between any two nodes can then be computed accurately and efficiently by deriving the mathematical expectation of the cumulative distribution function. Finally, the QPSO algorithm is used to improve the positioning accuracy. Simulation results show that the proposed algorithm achieves superior localization accuracy and efficiency for both random and uniform node placement in heterogeneous wireless sensor networks.
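The final QPSO refinement stage can be sketched with a minimal quantum particle swarm optimizer applied to a toy two-anchor localization residual. The attractor/contraction update below is the standard QPSO formulation; the anchor positions, swarm parameters, and residual are illustrative assumptions, not the paper's setup.

```python
import math
import random

def qpso(cost, dim, n=20, iters=200, lo=-10.0, hi=10.0, seed=1):
    """Minimal quantum particle swarm optimizer (standard QPSO update)."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    pbest = [list(p) for p in x]
    pcost = [cost(p) for p in x]
    gi = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = list(pbest[gi]), pcost[gi]
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters  # contraction-expansion coefficient
        mbest = [sum(p[d] for p in pbest) / n for d in range(dim)]
        for i in range(n):
            for d in range(dim):
                phi = rng.random()
                attractor = phi * pbest[i][d] + (1.0 - phi) * gbest[d]
                u = rng.random()
                step = beta * abs(mbest[d] - x[i][d]) * math.log(1.0 / u)
                x[i][d] = attractor + step if rng.random() < 0.5 else attractor - step
            c = cost(x[i])
            if c < pcost[i]:
                pcost[i], pbest[i] = c, list(x[i])
                if c < gcost:
                    gcost, gbest = c, list(x[i])
    return gbest

# Toy residual: match estimated distances to two anchors; the true node
# sits at (2, 1) (mirror-symmetric solutions are equally valid here).
anchors = [(0.0, 0.0), (4.0, 0.0)]
dists = [math.hypot(2.0 - ax, 1.0 - ay) for ax, ay in anchors]

def residual(p):
    return sum((math.hypot(p[0] - ax, p[1] - ay) - d) ** 2
               for (ax, ay), d in zip(anchors, dists))

est = qpso(residual, 2)
```

In the paper the distances fed into the residual come from the expected-hop-progress model rather than being known exactly; the optimizer itself is unchanged.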

비전 센서를 이용한 쿼드로터형 무인비행체의 목표 추적 제어 (Target Tracking Control of a Quadrotor UAV using Vision Sensor)

  • 유민구;홍성경
    • 한국항공우주학회지 / Vol. 40, No. 2 / pp. 118-128 / 2012
  • In this paper, a target-tracking position controller for a quadrotor UAV was designed using a vision sensor, and it was verified through simulation and experiment. Before designing the controller, the quadrotor's dynamics were analyzed and a model was identified from experimental data; the model coefficients were estimated by the prediction error method (PEM) using actual flight data. Based on the identified model, a position controller that follows an arbitrary target was designed using the linear quadratic regulator (LQR) technique. The relative position between the quadrotor and the object was obtained from the vision sensor's color-tracking function, and altitude was measured with an ultrasonic sensor. Finally, tracking experiments on a moving object were performed to evaluate the performance of the LQR controller.
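The LQR position loop described above can be illustrated with a scalar discrete-time example. This is only a sketch: the first-order error dynamics, weights, and horizon are assumed for illustration, whereas the paper identifies the model from flight data with PEM.

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Solve the scalar discrete-time LQR problem by iterating the
    Riccati recursion P = Q + A*P*(A - B*K) to a fixed point."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * b * p)  # optimal state-feedback gain
        p = q + a * p * (a - b * k)        # Riccati recursion
    return (b * p * a) / (r + b * b * p)

# Assumed 1-D relative-position error seen by the camera:
# x[k+1] = x[k] + 0.1 * u[k]
k = dlqr_scalar(a=1.0, b=0.1, q=1.0, r=0.1)

x = 1.0                  # start 1 m off the target
for _ in range(50):
    x -= 0.1 * k * x     # apply u = -k*x through the plant
# x has decayed close to zero: the closed loop |1 - 0.1*k| < 1 is stable
```

For the quadrotor's actual fourth-order position dynamics the same recursion runs on matrices instead of scalars, but the structure of the gain computation is identical.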

RESEARCH ON AUTONOMOUS LAND VEHICLE FOR AGRICULTURE

  • Matsuo, Yosuke;Yukumoto, Isamu
    • 한국농업기계학회:학술대회논문집 / 1993 Proceedings of International Conference for Agricultural Machinery and Process Engineering / pp. 810-819 / 1993
  • An autonomous land vehicle for agriculture (ALVA-II) was developed. A prototype vehicle was made by modifying a commercial tractor. A navigation sensor system with a geo-magnetic sensor performed the autonomous operations of ALVA-II, such as rotary tilling with headland turnings. A navigation sensor system with a machine vision system was also investigated to control ALVA-II so that it follows a work boundary.


Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International journal of advanced smart convergence / Vol. 9, No. 3 / pp. 232-238 / 2020
  • In this paper, we study an aerial object detection and position estimation algorithm for the safety of UAVs that fly beyond visual line of sight (BVLOS). We use a vision sensor and a LiDAR to detect objects: a CNN-based YOLOv2 architecture detects objects in the 2D image, and a clustering method detects objects in the point cloud data acquired from the LiDAR. With a single sensor, the detection rate can degrade in specific situations depending on the sensor's characteristics, and when the detection result from a single sensor is missing or false, the detection accuracy must be complemented. To complement the accuracy of the single-sensor detection algorithms, we use a Kalman filter and fuse the results of the individual sensors to improve detection accuracy. We estimate the 3D position of the object from its pixel position and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm through simulation using the Gazebo simulator.
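The complementing step above, a Kalman filter combining the vision-based and LiDAR-based range to the object, can be sketched with scalar measurement updates. The prior, the measurements, and the noise variances below are illustrative assumptions, not values from the paper.

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update: blend the prediction (x, p)
    with a measurement z of variance r."""
    k = p / (p + r)                        # Kalman gain
    return x + k * (z - x), (1.0 - k) * p

# Predicted range to the object from the motion model (mean, variance).
x, p = 10.0, 4.0
x, p = kalman_update(x, p, z=11.0, r=1.0)    # vision-derived range (noisier)
x, p = kalman_update(x, p, z=10.4, r=0.25)   # LiDAR range (more precise)
# x now sits closest to the LiDAR reading, and p is below both
# sensor variances -- the fused track is better than either detector alone.
```

If one sensor produces no detection on a given frame, its update is simply skipped and the filter coasts on the prediction, which is what lets the fusion tolerate missing or false single-sensor results.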

멀티센서 시스템을 이용한 3차원 형상의 기상측정에 관한 연구 (A Study on the 3-dimensional feature measurement system for OMM using multiple-sensors)

  • 권양훈;윤길상;조명우
    • 한국공작기계학회:학술대회논문집 / 2002 Autumn Conference Proceedings / pp. 158-163 / 2002
  • This paper presents a multiple-sensor system for rapid, high-precision coordinate data acquisition in the OMM (on-machine measurement) process. In this research, three sensors (a touch probe, a laser sensor, and a vision sensor) are integrated to obtain more accurate measuring results. The touch-type probe has high accuracy but is time-consuming. The vision sensor can acquire many point data rapidly over a spatial range, but its accuracy is lower than that of the other sensors, and it cannot acquire data for invisible areas. The laser sensor has intermediate accuracy and measuring speed, and can acquire data for sharp or rounded edges and for features with very small holes and/or grooves; however, its system structure imposes range constraints on its use. In this research, a new optimum sensor-integration method for OMM is proposed, integrating the multiple sensors to accomplish more effective inspection planning. To verify the effectiveness of the proposed method, simulations and experiments are performed, and the results are analyzed.


A Study on Real-time Control of Bead Height and Joint Tracking Using Laser Vision Sensor

  • Kim, H. K.;Park, H.
    • International Journal of Korean Welding Society / Vol. 4, No. 1 / pp. 30-37 / 2004
  • There have been continuous efforts to automate welding processes. This automation falls into two categories: weld seam tracking and weld quality evaluation. Recently, attempts to achieve these two functions simultaneously have been increasing. For the study presented in this paper, a vision sensor was made and a vision system constructed, and with it the 3-dimensional geometry of the bead is measured on-line. Because welding is a characteristically nonlinear process, a fuzzy controller is designed. With it, an adaptive control system is proposed that acquires the bead height and the coordinates of points on the bead along the horizontal fillet joint, performs seam tracking with those data, and at the same time controls the bead geometry to a uniform shape. A communication system that enables communication with an industrial robot is designed to control the bead geometry and track the weld seam. Experiments with varied offset angles from the pre-taught weld path showed that the adaptive system produces favorable results.
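The fuzzy-controller idea above, mapping a bead-height error to a process correction through linguistic rules, can be sketched with three triangular membership functions and weighted-average defuzzification. The rule breakpoints and output values are assumptions for illustration, not the paper's tuned rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_correction(err):
    """Map bead-height error (mm) to a correction using three rules:
    Negative error -> raise deposition, Zero -> hold, Positive -> lower it."""
    rules = [
        (tri(err, -2.0, -1.0, 0.0),  0.5),   # bead too low  -> increase
        (tri(err, -1.0,  0.0, 1.0),  0.0),   # on target     -> no change
        (tri(err,  0.0,  1.0, 2.0), -0.5),   # bead too high -> decrease
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0
```

An error of -0.5 mm partially fires both the "negative" and "zero" rules and yields a correction of 0.25, i.e. the output varies smoothly with the error, which is what makes the scheme suitable for a nonlinear process like welding.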


비젼과 힘센서를 이용한 불균일 버의 디버링 가공 (Deburring of Irregular Burr using Vision and Force Sensors)

  • 최규종;김영원;신상운;안두성
    • 동력기계공학회지 / Vol. 2, No. 3 / pp. 83-88 / 1998
  • This paper presents an efficient control algorithm that removes irregular burrs using vision and force sensors. In automated robotic deburring, the reference force should be adapted to the burr profile in order to prevent tool breakage. In this paper, (1) the burr profile is recognized by the vision sensor and the reference force is then calculated, and (2) a deburring expert's skill is transferred to the robot. Finally, the performance of the robot is evaluated through simulation and experiment.


이동 로봇을 위한 전정안반사 기반 비젼 추적 시스템의 인식 성능 평가 (Recognition Performance of Vestibular-Ocular Reflex Based Vision Tracking System for Mobile Robot)

  • 박재홍;반욱;최태영;권현일;조동일;김광수
    • 제어로봇시스템학회논문지 / Vol. 15, No. 5 / pp. 496-504 / 2009
  • This paper presents the recognition performance of a VOR (vestibular-ocular reflex) based vision tracking system for a mobile robot. The VOR is a reflex eye movement that, during head movements, produces an eye movement in the direction opposite to the head movement, thus keeping the image of the object of interest centered on the retina. We applied this physiological concept to a vision tracking system for high recognition performance in mobile environments. The proposed method was implemented in a vision tracking system consisting of a motion sensor module and an actuation module with a vision sensor. We tested the developed system on an x/y stage and on a rate table for linear and angular motion, respectively. The experimental results show that the recognition rates of the VOR-based method are three times those of a conventional non-VOR vision system, mainly because the VOR-based system keeps the vision system's line of sight fixed on the object, reducing image blur in dynamic environments. This suggests that the VOR concept proposed in this paper can be applied efficiently to vision tracking systems for mobile robots.
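The VOR principle described above, a camera rotation equal and opposite to the body rotation, reduces to a simple counter-rotation command. This is a sketch; the angle convention and function name are assumptions, not the paper's interface.

```python
def vor_pan_command(gaze_world_deg, body_yaw_deg):
    """Counter-rotate the camera so its world-frame line of sight stays
    on the target: pan (in the body frame) = desired world gaze - body yaw."""
    pan = gaze_world_deg - body_yaw_deg
    return (pan + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)

# The robot yaws +30 deg while watching a target straight ahead in the
# world frame; the camera pans -30 deg relative to the body, so the
# world-frame line of sight is unchanged.
pan = vor_pan_command(0.0, 30.0)   # -> -30.0
```

Driving this command from the motion-sensor module at a high rate, rather than from (slower) image feedback, is what keeps the target centered and the image sharp during fast body motion.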