• Title/Summary/Keyword: Vision Based Sensor

Sensor Fusion-Based Semantic Map Building (센서융합을 통한 시맨틱 지도의 작성)

  • Park, Joong-Tae; Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.277-282 / 2011
  • This paper describes a sensor fusion-based semantic map building method that can improve the capabilities of a mobile robot in various domains, including localization, path planning, and mapping. To build a semantic map, various kinds of environmental information, such as doors and cliff areas, must be extracted autonomously. We therefore propose a method to detect doors, cliff areas, and robust visual features using a laser scanner and a vision sensor. Doors are detected through GHT (Generalized Hough Transform)-based recognition of door handles combined with the geometric features of a door. Cliff areas and robust visual features are detected using the tilting laser scanner and SIFT features, respectively. The proposed method was verified by various experiments, which showed that the robot could build a semantic map autonomously in various indoor environments.
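
The abstract names SIFT as the source of robust visual features; the following is a minimal sketch of SIFT landmark extraction and matching with OpenCV, not the authors' implementation (file paths and the 0.75 ratio threshold are illustrative):

```python
import cv2

# Load two grayscale views of the same indoor scene (placeholder paths).
img_a = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute descriptors in both images.
sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(img_a, None)
kp_b, des_b = sift.detectAndCompute(img_b, None)

# Match descriptors and keep only matches that pass Lowe's ratio test,
# which filters ambiguous correspondences before they enter the map.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_a, des_b, k=2)
landmarks = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(landmarks)} robust landmark correspondences")
```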

Obstacle Avoidance of Mobile Robot Based on Behavior Hierarchy by Fuzzy Logic

  • Jin, Tae-Seok
    • International Journal of Fuzzy Logic and Intelligent Systems / v.12 no.3 / pp.245-249 / 2012
  • In this paper, we propose a navigation algorithm for a mobile robot that intelligently searches for the goal location in unknown dynamic environments using ultrasonic sensors. Instead of a "sensor fusion" method, which generates the trajectory of a robot from an environment model and sensory data, a "command fusion" method is used to govern the robot's motions. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. To identify the environment, a command fusion technique is introduced in which the sensory data of the ultrasonic sensors and a vision sensor are fused into the identification process.
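
The rule base itself is not given in the abstract; the sketch below illustrates the command-fusion idea with a single triangular membership function blending two behavior headings (all function names and parameters are hypothetical):

```python
import numpy as np

def membership_near(d, d_safe=1.0):
    """Degree to which an obstacle at distance d (meters) counts as 'near'."""
    return float(np.clip(1.0 - d / d_safe, 0.0, 1.0))

def fuse_commands(goal_heading, avoid_heading, obstacle_dist):
    """Blend goal-approach and obstacle-avoidance headings (radians)
    by the fuzzy 'near' membership of the closest obstacle."""
    w_avoid = membership_near(obstacle_dist)
    return w_avoid * avoid_heading + (1.0 - w_avoid) * goal_heading

# Example: an obstacle 0.4 m away pulls the command toward the avoidance heading.
print(fuse_commands(goal_heading=0.0, avoid_heading=np.pi / 4, obstacle_dist=0.4))
```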

Structural performance monitoring of an urban footbridge

  • Xi, P.S.; Ye, X.W.; Jin, T.; Chen, B.
    • Structural Monitoring and Maintenance / v.5 no.1 / pp.129-150 / 2018
  • This paper presents the structural performance monitoring of an urban footbridge located in Hangzhou, China. A structural health monitoring (SHM) system is designed and implemented for the footbridge to monitor its structural responses and to ensure its structural safety during operation. The stress and displacement data measured by fiber Bragg grating (FBG)-based sensors installed at critical locations are used to analyze and assess the operational performance of the footbridge. A linear regression method is applied to separate the temperature effect from the stress data measured by the FBG-based strain sensors. In addition, the static vertical displacement of the footbridge measured by the FBG-based hydrostatic level gauges is presented and compared with the dynamic displacement remotely measured by a machine vision-based measurement system. Based on the examination of the monitored stress and displacement data, a structural safety evaluation is carried out in combination with the defined condition index.
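
As an illustration of the temperature-separation step, here is a minimal least-squares sketch with synthetic numbers (not the bridge's actual data): fit stress against temperature and keep the residual as the load-induced component.

```python
import numpy as np

# Synthetic example: measured stress = temperature effect + live-load effect.
temp = np.array([12.0, 15.5, 18.2, 21.0, 24.3, 27.1])    # deg C (hypothetical)
stress = np.array([30.1, 33.8, 36.9, 40.2, 43.5, 46.8])  # MPa  (hypothetical)

# Fit stress = a * temp + b by linear least squares.
a, b = np.polyfit(temp, stress, 1)

# The residual after removing the fitted temperature trend approximates
# the load-induced component of the stress monitoring data.
load_stress = stress - (a * temp + b)
print(f"slope {a:.2f} MPa/degC, residual std {load_stress.std():.3f} MPa")
```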

Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won; Choi, Kyung Sik; Choi, Jeong Won; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.1 / pp.70-77 / 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow on warped images obtained through fish-eye lenses mounted on the robot. The omnidirectional image sensor is desirable for real-time view-based recognition because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, which is acquired by a camera viewing a reflective mirror or by stitching multiple camera images, is essential because it is difficult to obtain information from the raw image. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through fish-eye lenses mounted facing downward. Second, we extract motion vectors from the preprocessed images using Lucas-Kanade optical flow. Third, we estimate the robot's position and angle with an ego-motion method that uses the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed localization algorithm by comparing the position and angle it produced in experiments with those measured by a Global Vision Localization System.
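
Step two of the pipeline, motion-vector extraction with pyramidal Lucas-Kanade optical flow, can be sketched in OpenCV as follows (frame paths and tracker parameters are placeholders; the warping, ego-motion, and RANSAC stages are omitted):

```python
import cv2

# Two consecutive preprocessed panoramic frames (placeholder paths).
prev = cv2.imread("pano_t0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("pano_t1.png", cv2.IMREAD_GRAYSCALE)

# Pick corner features in the previous frame, then track them with
# pyramidal Lucas-Kanade optical flow.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=300, qualityLevel=0.01, minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

# Keep successfully tracked points; the displacements p1 - p0 are the
# motion vectors fed to the ego-motion / vanishing-point estimation step.
good = status.ravel() == 1
vectors = (p1[good] - p0[good]).reshape(-1, 2)
print(f"{len(vectors)} motion vectors, mean flow {vectors.mean(axis=0)}")
```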

Vision-Based Indoor Object Tracking Using Mean-Shift Algorithm (평균 이동 알고리즘을 이용한 영상기반 실내 물체 추적)

  • Kim Jong-Hun; Cho Kyeum-Rae; Lee Dae-Woo
    • Journal of Institute of Control, Robotics and Systems / v.12 no.8 / pp.746-751 / 2006
  • In this paper, we present a tracking algorithm for indoor moving objects: a passive method using a camera and image processing. Dynamics-based estimators, such as the Kalman filter, the extended Kalman filter, and the particle filter, have been studied for tracking moving objects. These algorithms perform well in real-time tracking, but they have a limitation: if the shape of the object changes or the object sits against a complex background, they fail to track it. Overcoming this requires complicated image processing, so the final algorithm becomes a large integration of a dynamics-based estimator and image processing. To eliminate this inefficiency, an image-based estimator, the mean-shift algorithm, is suggested. The algorithm works on a color histogram; in other words, it determines the coordinates of the object's center from the probability density of the histogram in the image. Even when the shape changes, it is not disturbed by a complex background and can still track the object. This paper shows the results on a real camera system and determines the 3D coordinates of the object using the mean-shift output and the relationship between the world frame and the camera frame.
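
A minimal OpenCV sketch of the histogram-based mean-shift loop described above, using a hue histogram and back-projection (the video path and initial window are placeholders; the 3D coordinate step is omitted):

```python
import cv2

cap = cv2.VideoCapture("indoor.avi")   # placeholder video path
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 80          # initial object window (hypothetical)

# Build a hue histogram of the object region; it serves as the color model.
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Back-project the histogram to get a probability image, then let
    # mean shift climb to the density peak, i.e., the object's center.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, (x, y, w, h) = cv2.meanShift(prob, (x, y, w, h), term)
    print("object center:", (x + w // 2, y + h // 2))
```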

Vehicular Cooperative Navigation Based on H-SPAWN Using GNSS, Vision, and Radar Sensors (GNSS, 비전 및 레이더를 이용한 H-SPAWN 알고리즘 기반 자동차 협력 항법시스템)

  • Ko, Hyunwoo; Kong, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.11 / pp.2252-2260 / 2015
  • In this paper, we propose a vehicular cooperative navigation system using GNSS, a vision sensor, and a radar sensor, all of which are frequently used in mass-produced cars. The proposed system is a variant of the Hybrid Sum-Product Algorithm over Wireless Networks (H-SPAWN) in which vision and radar sensors are used instead of radio ranging (i.e., UWB). The performance is compared and analyzed with respect to the sensors; in particular, the position estimation error decreased by about fifty percent when using radar compared to vision or radio ranging. In conclusion, the proposed system built on these popular sensors can improve position accuracy compared to the conventional cooperative navigation system (i.e., H-SPAWN) and decrease implementation costs.
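
H-SPAWN itself is a message-passing algorithm over a factor graph, which the abstract does not detail; the sketch below reduces the idea to its elementary operation, a variance-weighted fusion of an absolute GNSS fix with the position implied by a radar range to a neighbor, in one dimension and with hypothetical numbers:

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Variance-weighted fusion of two independent Gaussian position estimates,
    the elementary operation repeated by message passing in SPAWN-style methods."""
    w = var_b / (var_a + var_b)
    return w * mean_a + (1.0 - w) * mean_b, (var_a * var_b) / (var_a + var_b)

# Ego GNSS fix (m) and the position implied by a radar range to a neighbor
# whose own position is known more precisely (all values hypothetical).
gnss_pos, gnss_var = 105.2, 9.0
neighbor_pos, radar_range, radar_var = 130.0, 24.1, 1.0
implied_pos = neighbor_pos - radar_range   # 1-D geometry

pos, var = fuse(gnss_pos, gnss_var, implied_pos, radar_var)
print(f"fused position {pos:.2f} m, variance {var:.2f}")
```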

Deep Learning Model Selection Platform for Object Detection (사물인식을 위한 딥러닝 모델 선정 플랫폼)

  • Lee, Hansol; Kim, Younggwan; Hong, Jiman
    • Smart Media Journal / v.8 no.2 / pp.66-73 / 2019
  • Recently, object recognition technology using computer vision has attracted attention as a replacement for sensor-based object recognition. Sensor-based object recognition is often difficult to commercialize because it requires expensive sensors, whereas computer vision can replace those sensors with inexpensive cameras. Moreover, real-time recognition has become viable thanks to the growth of CNNs, which are actively being introduced into other fields such as IoT and autonomous vehicles. However, because selecting and training an object recognition model demands expert knowledge of deep learning, such models are challenging for non-experts to use. Therefore, in this paper, we analyze the structure of deep-learning-based object recognition models and propose a platform that can automatically select a deep learning object recognition model according to the user's desired conditions. Through experiments on different models, we also show why a statistics-based selection of the object recognition model is needed.
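
The paper's selection criteria are not given in the abstract; the sketch below shows one plausible shape of the statistics-based selection step, choosing the most accurate model that meets a user latency constraint (model names and numbers are invented for illustration):

```python
# Hypothetical statistics table: measured accuracy and latency per model.
MODEL_STATS = [
    {"name": "model_a", "mAP": 0.78, "latency_ms": 95},
    {"name": "model_b", "mAP": 0.71, "latency_ms": 38},
    {"name": "model_c", "mAP": 0.64, "latency_ms": 17},
]

def select_model(max_latency_ms):
    """Return the most accurate model satisfying the user's latency condition."""
    candidates = [m for m in MODEL_STATS if m["latency_ms"] <= max_latency_ms]
    if not candidates:
        raise ValueError("no model satisfies the latency constraint")
    return max(candidates, key=lambda m: m["mAP"])

print(select_model(max_latency_ms=40))   # -> model_b
```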

Development of Multi-Laser Vision System For 3D Surface Scanning (3 차원 곡면 데이터 획득을 위한 멀티 레이져 비젼 시스템 개발)

  • Lee, J.H.; Kwon, K.Y.; Lee, H.C.; Doe, Y.C.; Choi, D.J.; Park, J.H.; Kim, D.K.; Park, Y.J.
    • Proceedings of the KSME Conference / 2008.11a / pp.768-772 / 2008
  • Various scanning systems have been studied in many industrial areas to acquire range data or to reconstruct explicit 3D models. Optical technology is currently used widely by virtue of being non-contact and highly accurate. In this paper, we describe a 3D laser scanning system developed to reconstruct the 3D surface of a large-scale object, such as a curved plate of a ship hull. Our scanning system consists of four parallel laser vision modules that use a triangulation technique. For the multi-laser vision setup, a calibration method based on the least-squares technique is applied. For global scanning, an effective method is presented that avoids the difficult problem of matching the scanning results of the individual cameras. A minimal image processing algorithm and a robot-based calibration technique are also applied. A prototype has been implemented for testing.
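
As an illustration of least-squares calibration in a laser vision module, the sketch below fits a laser plane to synthetic 3D stripe points; the actual calibration procedure of the paper is not described in the abstract:

```python
import numpy as np

# Hypothetical calibration: fit the laser plane z = a*x + b*y + c by least
# squares to 3-D points of the laser stripe observed on a known target.
pts = np.array([
    [0.10, 0.00, 0.501], [0.20, 0.05, 0.561], [0.30, 0.10, 0.618],
    [0.15, 0.20, 0.566], [0.25, 0.25, 0.624], [0.35, 0.30, 0.686],
])  # meters (synthetic data)

# Stack the design matrix [x, y, 1] and solve for the plane coefficients.
A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
(a, b, c), res, _, _ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
print(f"laser plane: z = {a:.3f}x + {b:.3f}y + {c:.3f}")
```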

VFH+ based Obstacle Avoidance using Monocular Vision of Unmanned Surface Vehicle (무인수상선의 단일 카메라를 이용한 VFH+ 기반 장애물 회피 기법)

  • Kim, Taejin; Choi, Jinwoo; Lee, Yeongjun; Choi, Hyun-Taek
    • Journal of Ocean Engineering and Technology / v.30 no.5 / pp.426-430 / 2016
  • Recently, many unmanned surface vehicles (USVs) have been developed and researched for various fields such as the military, the environment, and robotics. To perform purpose-specific tasks, common autonomous navigation technologies are needed, and obstacle avoidance is essential for safe autonomous navigation. This paper describes a vector field histogram+ (VFH+) based obstacle avoidance method that uses the monocular vision of an unmanned surface vehicle. After creating a polar histogram with VFH+, an open sector free of histogram peaks is selected as the moving direction. Instead of distance sensor data, monocular vision data, which include the obstacle information, are used to build the polar histogram. Because this method is intended for USVs, any object on the water is recognized as an obstacle. The results of a simulation with sea images verified that the moving direction changes according to the positions of the objects.
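
A much-simplified sketch of the histogram-and-select step (real VFH+ also smooths the histogram, weights obstacles by magnitude, and accounts for vehicle width):

```python
import numpy as np

def pick_heading(obstacle_bearings_deg, goal_deg, sector_deg=10, threshold=0):
    """Build a coarse polar histogram from obstacle bearings (degrees) and
    return the center of the free sector closest to the goal direction."""
    n = 360 // sector_deg
    hist = np.zeros(n)
    for b in obstacle_bearings_deg:
        hist[int(b % 360) // sector_deg] += 1
    free = [i for i in range(n) if hist[i] <= threshold]
    if not free:
        return None  # blocked in every direction
    centers = np.array([i * sector_deg + sector_deg / 2 for i in free])
    diff = (centers - goal_deg + 180) % 360 - 180   # wrapped angular distance
    return float(centers[np.argmin(np.abs(diff))])

# Obstacles detected near the bow (0 deg): steer to the free sector
# nearest the goal direction. Prints 345.0, i.e., just to port.
print(pick_heading([355, 2, 8, 14], goal_deg=0))
```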