Title/Summary/Keyword: Vision-based

Vision-based remote 6-DOF structural displacement monitoring system using a unique marker

  • Jeon, Haemin; Kim, Youngjae; Lee, Donghwa; Myung, Hyun
    • Smart Structures and Systems, v.13 no.6, pp.927-942, 2014
  • Structural displacement is an important indicator for assessing structural safety. For structural displacement monitoring, vision-based displacement measurement systems have been widely developed; however, most systems estimate only 1- or 2-DOF translational displacement. To monitor 6-DOF structural displacement with high accuracy, this paper proposes a vision-based displacement measurement system with a uniquely designed marker. The system is composed of the marker and a camera with zooming capability, and the relative translational and rotational displacement between the marker and the camera is estimated by finding a homography transformation. The novel marker is designed, based on a sensitivity analysis of the conventional marker, to make the system robust to measurement noise, and this robustness has been verified through Monte Carlo simulations. The displacement estimation performance has been verified through two kinds of experimental tests: one using a shaking table and the other using a motorized stage. The results show that the system estimates the 6-DOF structural displacement, especially the translational displacement along the Z-axis, with high accuracy in real time and is robust to measurement noise.
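
The homography-based estimation described in this abstract can be illustrated with a short sketch. The following is a minimal example (not the authors' implementation) using OpenCV's homography decomposition; the intrinsics, marker geometry, and detected corner pixels are hypothetical values.

```python
import cv2
import numpy as np

# Hypothetical camera intrinsics (pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])

# Marker corner positions on the marker plane (metres) and their detected
# pixel locations in the current frame (illustrative values).
marker_pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.1, 0.1], [0.0, 0.1]], np.float32)
image_pts = np.array([[310, 228], [402, 231], [399, 321], [307, 318]], np.float32)

# Homography mapping the marker plane into the image.
H, _ = cv2.findHomography(marker_pts, image_pts)

# Decompose H into candidate rotation/translation pairs; translations are
# recovered up to the marker-plane depth scale, and the physically valid
# solution is normally selected with a plane-normal constraint.
num, Rs, Ts, Ns = cv2.decomposeHomographyMat(H, K)
for R, t in zip(Rs, Ts):
    rvec = cv2.Rodrigues(R)[0].ravel()  # rotational displacement (rad)
    print("rotation (rad):", rvec, "translation (scaled):", t.ravel())
```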

Vision-based Small UAV Indoor Flight Test Environment Using Multi-Camera

  • Won, Dae-Yeon; Oh, Hyon-Dong; Huh, Sung-Sik; Park, Bong-Gyun; Ahn, Jong-Sun; Shim, Hyun-Chul; Tahk, Min-Jea
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.37 no.12, pp.1209-1216, 2009
  • This paper presents the pose estimation of a small UAV utilizing visual information from low-cost cameras installed indoors. To overcome the limitations of outdoor flight experiments, an indoor flight test environment based on a multi-camera system is proposed. The computer vision algorithms for the proposed system include camera calibration, color marker detection, and pose estimation. The well-known extended Kalman filter is used to obtain accurate position and pose estimates for the small UAV. The paper concludes with several experimental results illustrating the performance and properties of the proposed vision-based indoor flight test environment.
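
The Kalman filter stage mentioned in the abstract can be sketched as follows. This is a simplified illustration, assuming a constant-velocity state and a direct position measurement (e.g., from multi-camera triangulation); the sample rate and noise levels are placeholder values, not those of the paper.

```python
import numpy as np

dt = 0.02                                      # assumed 50 Hz camera rate
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                     # [position, velocity] transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # cameras observe position only
Q = 1e-3 * np.eye(6)                           # process noise (placeholder)
R = 1e-2 * np.eye(3)                           # measurement noise (placeholder)

def kf_step(x, P, z):
    """One predict/update cycle given a triangulated UAV position z."""
    x, P = F @ x, F @ P @ F.T + Q              # predict
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # correct with camera measurement
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = np.zeros(6), np.eye(6)                  # initial state and covariance
x, P = kf_step(x, P, np.array([0.1, 0.2, 1.0]))
```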

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won; Kwon, Kee-Koo; Lee, Soo-In; Choi, Jeong-Won; Lee, Suk-Gyu
    • ETRI Journal, v.36 no.6, pp.913-923, 2014
  • This paper proposes a global mapping algorithm for multiple robots based on omnidirectional-vision simultaneous localization and mapping (SLAM), using an object extraction method built on Lucas-Kanade optical flow motion detection and images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data of all of the individual robots. Global mapping takes a long time to process because map data are exchanged among the individual robots while all areas are searched. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all of the information around a robot simultaneously. The computational load of the correction algorithm is reduced compared with existing methods by correcting only the objects' feature points. The proposed algorithm has two steps: first, a local map is created for each robot based on omnidirectional-vision SLAM; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing maps produced by the algorithm against the real maps.
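
The Lucas-Kanade motion-detection step can be illustrated with a short OpenCV sketch. This is not the authors' code: the frame file names and displacement threshold are assumptions, and the SLAM and map-merging stages are omitted.

```python
import cv2
import numpy as np

# Two consecutive fisheye frames (hypothetical file names).
prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Track corner features from the previous frame into the current one.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

# Feature points with large displacement are treated as moving-object points.
flow = np.linalg.norm((p1 - p0).reshape(-1, 2), axis=1)
moving = p1[(status.ravel() == 1) & (flow > 2.0)]   # 2-pixel threshold (assumed)
print(f"{len(moving)} candidate object feature points")
```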

Particle Filters using Gaussian Mixture Models for Vision-Based Navigation

  • Hong, Kyungwoo; Kim, Sungjoong; Bang, Hyochoong; Kim, Jin-Won; Seo, Ilwon; Pak, Chang-Ho
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.47 no.4, pp.274-282, 2019
  • Vision-based navigation of unmanned aerial vehicles is a significant technology that can compensate for the vulnerability of the widely used GPS/INS integrated navigation system. However, existing image matching algorithms are not suitable for matching aerial images with a database. For this reason, this paper proposes particle filters using Gaussian mixture models to match aerial images against the database for vision-based navigation. The particle filters estimate the position of the aircraft by comparing correspondences between the aerial image and the database under a Gaussian mixture model assumption. Finally, a Monte Carlo simulation is presented to demonstrate the performance of the proposed method.
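
A minimal sketch of a particle filter with a Gaussian-mixture measurement likelihood, in the spirit of this abstract, is shown below. The motion model, mixture parameters, and candidate match positions are illustrative placeholders, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform([0, 0], [1000, 1000], size=(N, 2))  # aircraft x, y (m)
weights = np.full(N, 1.0 / N)

# Hypothetical candidate positions (with variances and mixture weights)
# produced by matching the aerial image against the database.
means = np.array([[400.0, 620.0], [410.0, 600.0], [800.0, 100.0]])
variances = np.array([25.0, 25.0, 100.0])
pis = np.array([0.5, 0.4, 0.1])

def gmm_likelihood(p):
    """Mixture of isotropic Gaussians; each component is one candidate match."""
    d2 = np.sum((means - p) ** 2, axis=1)
    return np.sum(pis * np.exp(-0.5 * d2 / variances) / (2 * np.pi * variances))

particles += rng.normal(0.0, 5.0, particles.shape)   # propagate (random walk)
weights *= np.array([gmm_likelihood(p) for p in particles])
weights /= weights.sum()
idx = rng.choice(N, size=N, p=weights)               # resample by weight
particles, weights = particles[idx], np.full(N, 1.0 / N)
print("position estimate:", particles.mean(axis=0))
```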

Experimental Study of Spacecraft Pose Estimation Algorithm Using Vision-based Sensor

  • Hyun, Jeonghoon; Eun, Youngho; Park, Sang-Young
    • Journal of Astronomy and Space Sciences, v.35 no.4, pp.263-277, 2018
  • This paper presents a vision-based relative pose estimation algorithm and its validation through both numerical and hardware experiments. The algorithm and the hardware system were designed together, taking actual experimental conditions into account. Two estimation techniques were utilized to estimate the relative pose: a nonlinear least-squares method for initial estimation, and an extended Kalman filter for subsequent on-line estimation. A measurement model of the vision sensor and equations of motion including nonlinear perturbations were utilized in the estimation process. Numerical simulations were performed and analyzed for both the autonomous docking and formation flying scenarios. A configuration of LED-based beacons was designed to avoid measurement singularity, and its structural information was incorporated into the estimation algorithm. The proposed algorithm was verified again in an experimental environment using the Autonomous Spacecraft Test Environment for Rendezvous In proXimity (ASTERIX) facility. Additionally, a laser distance meter was added to the estimation algorithm to improve the relative position estimation accuracy. Throughout this study, the performance required for autonomous docking was characterized by examining how the estimation accuracy changes with the level of measurement error. In addition, the hardware experiments confirmed the effectiveness of the suggested algorithm and its applicability to actual tasks in the real world.
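
The initial nonlinear least-squares stage can be sketched as follows, assuming a simple pinhole projection of known LED beacon positions and estimating only the relative translation (rotation omitted for brevity); the beacon geometry, focal length, and measurements are hypothetical, not the paper's configuration.

```python
import numpy as np
from scipy.optimize import least_squares

f = 800.0                                        # focal length in pixels (assumed)
beacons = np.array([[0.1, 0.0, 0.0], [-0.1, 0.0, 0.0],
                    [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])  # LED layout (illustrative)

def project(t):
    """Pinhole projection of the beacons at relative translation t."""
    p = beacons + t
    return np.column_stack([f * p[:, 0] / p[:, 2], f * p[:, 1] / p[:, 2]]).ravel()

# Synthetic measurements at a true offset of (0.05, -0.02, 2.0) m plus pixel noise.
z = project(np.array([0.05, -0.02, 2.0]))
z += np.random.default_rng(1).normal(0.0, 0.5, z.shape)

def residual(t):
    return project(t) - z

sol = least_squares(residual, x0=np.array([0.0, 0.0, 1.0]))
print("estimated relative position (m):", sol.x)
```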

Enhancing Occlusion Robustness for Vision-based Construction Worker Detection Using Data Augmentation

  • Kim, Yoojun; Kim, Hyunjun; Sim, Sunghan; Ham, Youngjib
    • International conference on construction engineering and project management, 2022.06a, pp.904-911, 2022
  • Occlusion is one of the most challenging problems for computer vision-based construction monitoring. Due to the intrinsic dynamics of construction scenes, vision-based technologies inevitably suffer from occlusions. Previous researchers have proposed occlusion handling methods that leverage prior information from sequential images; however, these methods cannot be employed for construction object detection in non-sequential images. As an alternative occlusion handling method, this study proposes a data augmentation-based framework that enhances detection performance under occlusions. The proposed approach is specially designed for rebar occlusions, a distinctive type of occlusion that frequently occurs during construction worker detection. In the proposed method, artificial rebars are synthetically generated to emulate possible rebar occlusions on construction sites. The method thereby enables the model to be trained on a variety of occluded images, improving the detection performance without requiring sequential information. The effectiveness of the proposed method is validated by showing that it outperforms a baseline model trained without augmentation. The outcomes demonstrate the great potential of data augmentation techniques for occlusion handling, which can be readily applied to typical object detectors without changing their model architecture.
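
The augmentation idea can be sketched in a few lines: synthetic vertical bars are drawn over training images to emulate rebar occlusions. The bar spacing, width, and colour below are illustrative choices, not the paper's parameters.

```python
import cv2
import numpy as np

def add_rebar_occlusion(image, spacing=40, width=6, color=(60, 60, 70)):
    """Overlay vertical 'rebar' bars at a fixed spacing across the image."""
    out = image.copy()
    h, w = out.shape[:2]
    offset = int(np.random.default_rng().integers(0, spacing))  # randomize bar positions
    for x in range(offset, w, spacing):
        cv2.rectangle(out, (x, 0), (x + width, h), color, thickness=-1)
    return out

img = cv2.imread("site_image.jpg")        # hypothetical training image
aug = add_rebar_occlusion(img)
cv2.imwrite("site_image_occluded.jpg", aug)
```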

A Hybrid Solar Tracking System using Weather Condition Estimates with a Vision Camera and GPS

  • Yoo, Jeongjae; Kang, Yeonsik
    • Journal of Institute of Control, Robotics and Systems, v.20 no.5, pp.557-562, 2014
  • It is well known that solar tracking systems can increase the efficiency of exiting solar panels significantly. In this paper, a hybrid solar tracking system has been developed by using both astronomical estimates from a GPS and the image processing results of a camera vision system. A decision making process is also proposed to distinguish current weather conditions using camera images. Based on the decision making results, the proposed hybrid tracking system switches two tracking control methods. The one control method is based on astronomical estimates of the current solar position. And the other control method is based on the solar image processing result. The developed hybrid solar tracking system is implemented on an experimental platform and the performance of the developed control methods are verified.