• Title/Summary/Keyword: Vision Based Sensor


Recognition Performance of Vestibular-Ocular Reflex Based Vision Tracking System for Mobile Robot (이동 로봇을 위한 전정안반사 기반 비젼 추적 시스템의 인식 성능 평가)

  • Park, Jae-Hong;Bhan, Wook;Choi, Tae-Young;Kwon, Hyun-Il;Cho, Dong-Il;Kim, Kwang-Soo
    • Journal of Institute of Control, Robotics and Systems / v.15 no.5 / pp.496-504 / 2009
  • This paper presents the recognition performance of a VOR (Vestibular-Ocular Reflex) based vision tracking system for a mobile robot. The VOR is a reflex eye movement that, during head movements, produces an eye movement in the direction opposite to the head movement, thus keeping the image of an object of interest centered on the retina. We applied this physiological concept to a vision tracking system to achieve high recognition performance in mobile environments. The proposed method was implemented in a vision tracking system consisting of a motion sensor module and an actuation module with a vision sensor. We tested the developed system on an x/y stage and a rate table for linear and angular motion, respectively. The experimental results show that the recognition rates of the VOR-based method are three times higher than those of a conventional non-VOR vision system, mainly because the VOR-based tracking system keeps the line of sight fixed on the object and thereby reduces image blur in dynamic environments. This suggests that the VOR concept proposed in this paper can be applied efficiently to vision tracking systems for mobile robots.
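
A minimal sketch of the compensation idea above, assuming a hypothetical gyro reading and a pan/tilt camera actuator interface (neither is from the paper): the camera is commanded to rotate opposite to the measured body rotation so the line of sight stays fixed on the object.

```python
def vor_compensation(gyro_rate_dps, dt, gain=1.0):
    """Return the pan/tilt correction (degrees) that counter-rotates the camera
    against the measured body rotation, as in a vestibulo-ocular reflex.

    gyro_rate_dps : (yaw_rate, pitch_rate) of the robot body in deg/s
    dt            : sample period in seconds
    gain          : VOR gain (1.0 = full compensation)
    """
    yaw_rate, pitch_rate = gyro_rate_dps
    # Rotate the camera by the opposite of the body rotation over this step.
    d_pan = -gain * yaw_rate * dt
    d_tilt = -gain * pitch_rate * dt
    return d_pan, d_tilt

# Example: the robot yaws at +30 deg/s, so the camera pans -0.3 deg per 10 ms step.
print(vor_compensation((30.0, 0.0), dt=0.01))
```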

Measurement of GMAW Bead Geometry Using Biprism Stereo Vision Sensor (바이프리즘 스테레오 시각 센서를 이용한 GMA 용접 비드의 3차원 형상 측정)

  • 이지혜;이두현;유중돈
    • Journal of Welding and Joining / v.19 no.2 / pp.200-207 / 2001
  • The three-dimensional bead profile in GMAW was measured using a biprism stereo vision sensor, which consists of an optical filter, a biprism, and a CCD camera. Since a single CCD camera is used, this system has various advantages over a conventional two-camera stereo vision system, such as finding the corresponding points along the same horizontal scanline. In this work, the biprism stereo vision sensor was designed for GMAW, and a linear calibration method was proposed to determine the prism and camera parameters. Image processing techniques were employed to find the corresponding points along the pool boundary. The iso-intensity contour corresponding to the pool boundary was found at pixel resolution, and a filter-based matching algorithm was used to refine the corresponding points to subpixel resolution. Predicted bead dimensions were in broad agreement with the measured results under spray-mode and humping-bead conditions.
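
A hedged sketch of the triangulation step implied by the abstract: once corresponding points are matched along a shared scanline in the two virtual views produced by the biprism, depth follows from the horizontal disparity as in ordinary stereo. The focal length and virtual baseline below are illustrative placeholders, not the paper's calibrated values.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_mm):
    """Estimate depth (mm) from the horizontal disparity between the two
    virtual views of a biprism stereo image (standard stereo triangulation)."""
    disparity = x_left - x_right  # pixels, measured along the shared scanline
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the camera")
    return focal_px * baseline_mm / disparity

# Illustrative values only: 8 px disparity, 1200 px focal length, 20 mm virtual baseline.
print(depth_from_disparity(652.0, 644.0, focal_px=1200.0, baseline_mm=20.0))
```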


Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International Journal of Advanced Smart Convergence / v.9 no.3 / pp.232-238 / 2020
  • In this paper, we study an aerial object detection and position estimation algorithm for the safety of UAVs flying BVLOS (beyond visual line of sight). We use a vision sensor and LiDAR to detect objects. We use the CNN-based YOLOv2 architecture to detect objects in the 2D image, and a clustering method to detect objects in the point cloud data acquired from the LiDAR. When a single sensor is used, the detection rate can degrade in specific situations depending on the characteristics of that sensor, so missing or false detections from one sensor need to be compensated. To do this, we use a Kalman filter and fuse the results of the two sensors to improve detection accuracy. We estimate the 3D position of the object using the pixel position of the object and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm in simulation using the Gazebo simulator.
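
A minimal sketch of the fusion idea above: a constant-velocity Kalman filter tracks the object's 3D position and accepts a position measurement from either the vision pipeline or the LiDAR clustering whenever one is available. The noise values and update rate are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

class ConstantVelocityKF:
    """Constant-velocity Kalman filter for fusing asynchronous 3D position
    measurements from two detectors (e.g., camera and LiDAR)."""

    def __init__(self, dt=0.05):
        self.x = np.zeros(6)                                # [px, py, pz, vx, vy, vz]
        self.P = np.eye(6) * 10.0
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3) * dt                     # position += velocity * dt
        self.Q = np.eye(6) * 0.01                           # process noise (illustrative)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])   # only position is measured

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, meas_std):
        R = np.eye(3) * meas_std**2
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

kf = ConstantVelocityKF()
kf.predict()
kf.update(np.array([10.0, 2.0, 30.0]), meas_std=0.5)   # LiDAR-derived position
kf.predict()
kf.update(np.array([10.1, 2.1, 29.8]), meas_std=1.5)   # vision-derived position
print(kf.x[:3])
```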

Development of A Vision-based Lane Detection System with Considering Sensor Configuration Aspect (센서 구성을 고려한 비전 기반 차선 감지 시스템 개발)

  • Park Jaehak;Hong Daegun;Huh Kunsoo;Park Jahnghyon;Cho Dongil
    • Transactions of the Korean Society of Automotive Engineers / v.13 no.4 / pp.97-104 / 2005
  • Vision-based lane sensing systems require accurate and robust lane detection. In addition, there is a trade-off between computational burden and processor cost, which should be considered when implementing such systems in passenger cars. In this paper, a stereo vision-based lane detection system is developed with sensor configuration aspects taken into account. An inverse perspective mapping method is formulated based on the relative correspondence between the left and right cameras so that the 3-dimensional road geometry can be reconstructed in a robust manner. A new monitoring model for estimating the road geometry parameters is constructed to reduce the number of measured signals. The selection of the sensor configuration and specifications is investigated by utilizing the characteristics of standard highways. Based on the sensor configuration, it is shown that an appropriate sensing region in the camera image coordinates can be determined. The proposed system is implemented on a passenger car and verified experimentally.
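
A hedged sketch of an inverse perspective mapping step of the kind described above, simplified to a single camera and a flat-road assumption: a pixel on the road is back-projected to road-plane coordinates from the camera height and pitch. The intrinsics and mounting values are illustrative, not the paper's stereo formulation.

```python
import numpy as np

def pixel_to_road_plane(u, v, fx, fy, cx, cy, cam_height_m, pitch_rad):
    """Back-project image pixel (u, v) onto a flat road plane for a forward-looking
    camera mounted cam_height_m above the road and pitched down by pitch_rad.
    Camera frame: x right, y down, z forward. Returns (forward_m, left_m)."""
    ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])   # pinhole viewing ray
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    R_cam_to_veh = np.array([[1.0, 0.0, 0.0],                  # undo the downward pitch
                             [0.0,   c,   s],
                             [0.0,  -s,   c]])
    ray_veh = R_cam_to_veh @ ray_cam
    if ray_veh[1] <= 0:
        raise ValueError("pixel is above the horizon; it does not hit the road")
    t = cam_height_m / ray_veh[1]        # scale so the ray reaches the road plane
    ground = t * ray_veh
    return ground[2], -ground[0]         # forward distance, lateral offset to the left

# Illustrative configuration: 800 px focal length, 640x480 image, 1.2 m height, 5 deg pitch.
print(pixel_to_road_plane(400, 300, 800, 800, 320, 240, 1.2, np.radians(5.0)))
```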

A Study on the Determination of 3-D Object's Position Based on Computer Vision Method (컴퓨터 비젼 방법을 이용한 3차원 물체 위치 결정에 관한 연구)

  • 김경석
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.8 no.6 / pp.26-34 / 1999
  • This study presents an alternative method for determining an object's position, based on a computer vision method. The approach develops a vision system model that defines the reciprocal relationship between the 3-D real space and the 2-D image plane. The model involves bilinear six-view parameters, which are estimated using the relationship between the camera space location and the real coordinates of known positions. Based on the parameters estimated independently for each camera, the position of an unknown object is determined using a sequential estimation scheme that incorporates data of the unknown points from the 2-D image plane of each camera. This vision control method is robust and reliable, overcoming difficulties of conventional research such as precise calibration of the vision sensor, exact kinematic modeling of the robot, and correct knowledge of the relative positions and orientations of the robot and the CCD camera. Finally, the developed vision control method is tested experimentally by determining object positions in space using the computer vision system. The results show that the presented method is precise and applicable.
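
The following is a generic affine-camera stand-in, not the paper's bilinear six-view parameterization, shown only to illustrate the workflow the abstract describes: fit per-camera parameters from known 3-D/2-D pairs, then estimate an unknown point from its observations in several cameras.

```python
import numpy as np

def fit_affine_camera(world_pts, image_pts):
    """Least-squares fit of an affine camera model  uv = C @ [X, Y, Z, 1].
    world_pts: (N, 3) known 3-D positions; image_pts: (N, 2) measured pixels."""
    Xh = np.hstack([world_pts, np.ones((len(world_pts), 1))])   # (N, 4)
    C_t, *_ = np.linalg.lstsq(Xh, image_pts, rcond=None)        # solves Xh @ C.T = uv
    return C_t.T                                                # (2, 4) camera model

def estimate_point(cameras, observations):
    """Least-squares estimate of an unknown 3-D point from its 2-D observations
    in several cameras, each described by a fitted 2x4 affine model."""
    M = np.vstack([C[:, :3] for C in cameras])
    b = np.hstack([uv - C[:, 3] for C, uv in zip(cameras, observations)])
    X, *_ = np.linalg.lstsq(M, b, rcond=None)
    return X

# Illustrative check with two synthetic cameras and six known calibration points.
rng = np.random.default_rng(0)
world = rng.uniform(0.0, 100.0, (6, 3))
C1 = np.array([[1.0, 0.1, 0.0, 5.0], [0.0, 1.0, 0.2, -3.0]])
C2 = np.array([[0.9, 0.0, 0.3, 1.0], [0.1, 1.1, 0.0,  2.0]])
cam1 = fit_affine_camera(world, world @ C1[:, :3].T + C1[:, 3])
cam2 = fit_affine_camera(world, world @ C2[:, :3].T + C2[:, 3])
unknown = np.array([40.0, 20.0, 60.0])
obs = [C1[:, :3] @ unknown + C1[:, 3], C2[:, :3] @ unknown + C2[:, 3]]
print(estimate_point([cam1, cam2], obs))   # should recover `unknown`
```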


Intelligent Rain Sensing Algorithm for Vision-based Smart Wiper System (비전 기반 스마트 와이퍼 시스템을 위한 지능형 레인 감지 알고리즘 개발)

  • Lee, Kyung-Chang;Kim, Man-Ho;Im, Hong-Jun;Lee, Seok
    • Proceedings of the Korean Society of Precision Engineering Conference / 2003.06a / pp.1727-1730 / 2003
  • A windshield wiper system plays a key role in ensuring driver safety during rainfall. However, because the amount of rain or snow varies irregularly with time and vehicle speed, the driver of a conventional windshield wiper system must repeatedly adjust the wiper speed and operation period to maintain a sufficient field of view. Because manually operating the wiper distracts the driver and leads to inattentive driving, it is a direct cause of traffic accidents. Therefore, this paper presents the basic architecture of a vision-based smart windshield wiper system and a rain sensing algorithm that automatically regulates the speed and operation period of the wiper according to the amount of rain or snow. This paper also introduces a fuzzy wiper control algorithm based on human expertise and evaluates the performance of the suggested algorithm on a simulator model. In particular, the vision sensor can observe a relatively wide area compared with an optical rain sensor, and therefore captures the rainfall state more accurately when disturbances occur.
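
A minimal sketch of a fuzzy wiper-speed rule base of the kind described above, assuming a hypothetical normalized rain-intensity score already produced by the vision-based rain sensing step; the membership functions and rule outputs are illustrative, not the paper's.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_wiper_speed(rain_intensity):
    """Map a normalized rain intensity (0..1) from the vision sensor to a wiper
    speed command (0..1) with three rules: light -> slow, moderate -> medium,
    heavy -> fast, using weighted-average defuzzification."""
    mu_light    = triangular(rain_intensity, -0.4, 0.0, 0.4)
    mu_moderate = triangular(rain_intensity,  0.2, 0.5, 0.8)
    mu_heavy    = triangular(rain_intensity,  0.6, 1.0, 1.4)
    rule_outputs = {0.2: mu_light, 0.5: mu_moderate, 0.9: mu_heavy}
    total = sum(rule_outputs.values())
    return sum(s * mu for s, mu in rule_outputs.items()) / total if total > 0 else 0.0

print(fuzzy_wiper_speed(0.7))   # moderately heavy rain -> fairly fast wiping
```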


Accurate Range-free Localization Based on Quantum Particle Swarm Optimization in Heterogeneous Wireless Sensor Networks

  • Wu, Wenlan;Wen, Xianbin;Xu, Haixia;Yuan, Liming;Meng, Qingxia
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.3 / pp.1083-1097 / 2018
  • This paper presents a novel range-free localization algorithm based on quantum particle swarm optimization. The proposed algorithm is capable of estimating the distance between two non-neighboring sensors in multi-hop heterogeneous wireless sensor networks where the nodes' communication ranges differ. First, we construct a new cumulative distribution function of expected hop progress for sensor nodes with different transmission capabilities. Then, the distance between any two nodes can be computed accurately and efficiently by deriving the mathematical expectation of the cumulative distribution function. Finally, a quantum particle swarm optimization algorithm is used to improve the positioning accuracy. Simulation results show that the proposed algorithm is superior in localization accuracy and efficiency for both random and uniform node placements in heterogeneous wireless sensor networks.
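
A hedged sketch of the final optimization step described above: given anchors and hop-based distance estimates, a basic quantum-behaved PSO searches for the node position that minimizes the distance residuals. The parameters and the distance-estimation input are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def qpso_localize(anchors, est_dists, iters=200, n_particles=30, beta=0.75, seed=0):
    """Estimate an unknown node position by minimizing the squared error between
    hop-based distance estimates and geometric distances to the anchors, using a
    basic quantum-behaved PSO. anchors: (M, 2); est_dists: (M,)."""
    rng = np.random.default_rng(seed)
    lo, hi = anchors.min(0) - 10.0, anchors.max(0) + 10.0

    def cost(p):
        return np.sum((np.linalg.norm(anchors - p, axis=1) - est_dists) ** 2)

    X = rng.uniform(lo, hi, (n_particles, 2))           # particle positions
    pbest = X.copy()
    pcost = np.array([cost(p) for p in pbest])
    gbest = pbest[pcost.argmin()].copy()

    for _ in range(iters):
        mbest = pbest.mean(axis=0)                       # mean of personal bests
        phi = rng.random((n_particles, 1))
        attractor = phi * pbest + (1 - phi) * gbest      # local attractors
        u = 1.0 - rng.random((n_particles, 2))           # in (0, 1]
        sign = np.where(rng.random((n_particles, 2)) < 0.5, 1.0, -1.0)
        X = attractor + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)
        c = np.array([cost(p) for p in X])
        improved = c < pcost
        pbest[improved], pcost[improved] = X[improved], c[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest

# Illustrative example: four anchors, noisy distance estimates to a node at (25, 40).
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([25.0, 40.0])
dists = np.linalg.norm(anchors - true_pos, axis=1) + np.random.default_rng(1).normal(0, 1.0, 4)
print(qpso_localize(anchors, dists))
```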

Traffic Light Detection Method in Image Using Geometric Analysis Between Traffic Light and Vision Sensor (교통 신호등과 비전 센서의 위치 관계 분석을 통한 이미지에서 교통 신호등 검출 방법)

  • Choi, Changhwan;Yoo, Kook-Yeol;Park, Yongwan
    • IEMEK Journal of Embedded Systems and Applications / v.10 no.2 / pp.101-108 / 2015
  • In this paper, a robust traffic light detection method is proposed that uses a vision sensor and DGPS (Differential Global Positioning System). Conventional vision-based detection methods are very sensitive to illumination changes, for instance, low visibility at night or strong reflections from bright light. To overcome these limitations of the vision sensor, DGPS is incorporated to determine the location and shape of traffic lights, which are available from a traffic light database. Furthermore, the geometric relationship between the traffic light and the vision sensor is used, together with the DGPS information, to locate the traffic light in the image. The empirical results show that the proposed method improves the detection rate by 51% at night, with a marginal improvement in daytime environments.
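
A hedged sketch of the geometric step described above: with the vehicle pose from DGPS and a traffic light's map position from the database, the light is transformed into the camera frame and projected with a pinhole model to predict where it should appear in the image. The frame conventions and intrinsics are illustrative assumptions.

```python
import numpy as np

def project_traffic_light(light_enu, cam_enu, heading_rad, fx, fy, cx, cy):
    """Predict the pixel (u, v) at which a traffic light should appear, given its
    map position from the database and the camera pose from DGPS.
    ENU = East-North-Up coordinates; heading is a compass bearing (rad, clockwise from North)."""
    d = np.asarray(light_enu, float) - np.asarray(cam_enu, float)   # relative ENU vector
    # Camera axes expressed in ENU: z forward, x right, y down.
    forward = np.array([np.sin(heading_rad), np.cos(heading_rad), 0.0])
    right   = np.array([np.cos(heading_rad), -np.sin(heading_rad), 0.0])
    down    = np.array([0.0, 0.0, -1.0])
    x_c, y_c, z_c = d @ right, d @ down, d @ forward
    if z_c <= 0:
        return None                      # light is behind the camera
    u = cx + fx * x_c / z_c
    v = cy + fy * y_c / z_c
    return u, v

# Illustrative example: light 40 m ahead, 2 m left, 5 m above the camera, vehicle heading due North.
print(project_traffic_light((-2.0, 40.0, 5.0), (0.0, 0.0, 0.0), heading_rad=0.0,
                            fx=900, fy=900, cx=640, cy=360))
```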

A Study of Inspection of Weld Bead Defects using Laser Vision Sensor (레이저 비전 센서를 이용한 용접비드의 외부결함 검출에 관한 연구)

  • 이정익;이세헌
    • Journal of Welding and Joining / v.17 no.2 / pp.53-60 / 1999
  • Conventionally, a CCD camera and a vision sensor using a projected pattern of light are used to inspect weld bead defects. However, this method requires a lot of time for image preprocessing, stripe extraction, thinning, and so on. In this study, a laser vision sensor using a scanning beam of light is used to shorten the time required for image preprocessing. Software is developed that decides in real time whether the weld bead has a proper shape. The criteria are based upon the classification of imperfections in metallic fusion welds (ISO 6520) and the limits for imperfections (ISO 5817).
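
A minimal sketch of a real-time pass/fail decision of the kind described above, assuming the laser vision sensor already provides a cross-sectional bead profile; the thresholds are illustrative placeholders, not the ISO 5817 limit values.

```python
import numpy as np

def check_bead_profile(profile_mm, min_width=4.0, max_width=10.0,
                       min_height=0.5, max_height=3.0, max_undercut=0.5):
    """Classify a single laser-scanned bead cross-section as acceptable or not.
    profile_mm: (N, 2) array of (y, z) points across the bead, z = surface height
    relative to the base plate. Thresholds are illustrative, not ISO 5817 values."""
    y, z = profile_mm[:, 0], profile_mm[:, 1]
    bead = z > 0.1                                   # points noticeably above the plate
    if not bead.any():
        return ["no bead detected"]
    defects = []
    width = y[bead].max() - y[bead].min()
    height = z.max()
    undercut = -z.min() if z.min() < 0 else 0.0      # groove below the plate surface
    if not (min_width <= width <= max_width):
        defects.append(f"bead width {width:.2f} mm out of range")
    if not (min_height <= height <= max_height):
        defects.append(f"reinforcement height {height:.2f} mm out of range")
    if undercut > max_undercut:
        defects.append(f"undercut {undercut:.2f} mm exceeds limit")
    return defects or ["acceptable"]

# Illustrative scan: a roughly 6 mm wide, 2 mm high bead with no undercut.
y = np.linspace(-5, 5, 101)
z = np.where(np.abs(y) < 3.0, 2.0 * np.cos(np.pi * y / 6.0), 0.0)
print(check_bead_profile(np.column_stack([y, z])))
```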


INS/Multi-Vision Integrated Navigation System Based on Landmark (다수의 비전 센서와 INS를 활용한 랜드마크 기반의 통합 항법시스템)

  • Kim, Jong-Myeong;Leeghim, Henzeh
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.45 no.8 / pp.671-677 / 2017
  • A new INS/vision integrated navigation system using multiple vision sensors is addressed in this paper. When the number of landmarks measured by the vision sensor is smaller than the required number, the navigation filter may diverge. To prevent this problem, a multi-vision concept is applied to extend the field of view so that a reliable number of landmarks is always guaranteed. In this work, the cameras are installed at orientations of 0, 120, and -120 degrees with respect to the body frame to improve observability. Finally, the proposed technique is verified through numerical simulation.
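
A brief sketch of the camera layout described above: three cameras yawed by 0 and ±120 degrees about the body z-axis, with a helper that checks which camera currently sees a given landmark direction. The field of view and frame conventions are illustrative assumptions.

```python
import numpy as np

def body_to_camera_rotations(yaw_angles_deg=(0.0, 120.0, -120.0)):
    """Rotation matrices from the body frame to each camera frame, for cameras
    yawed by 0, +120 and -120 degrees about the body z-axis."""
    Rs = []
    for a in np.radians(yaw_angles_deg):
        c, s = np.cos(a), np.sin(a)
        Rs.append(np.array([[  c,   s, 0.0],
                            [ -s,   c, 0.0],
                            [0.0, 0.0, 1.0]]))
    return Rs

def visible_cameras(landmark_body, half_fov_deg=60.0):
    """Indices of the cameras whose field of view contains the landmark direction
    (landmark position expressed in the body frame; camera x-axis is the boresight)."""
    d = landmark_body / np.linalg.norm(landmark_body)
    idx = []
    for i, R in enumerate(body_to_camera_rotations()):
        d_cam = R @ d
        if d_cam[0] > np.cos(np.radians(half_fov_deg)):   # within the cone about the boresight
            idx.append(i)
    return idx

# A landmark ahead and slightly to the side of the vehicle is seen by camera 0.
print(visible_cameras(np.array([10.0, 2.0, -1.0])))
```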