• Title/Abstract/Keyword: Vision-based


Vision-based remote 6-DOF structural displacement monitoring system using a unique marker

  • Jeon, Haemin;Kim, Youngjae;Lee, Donghwa;Myung, Hyun
    • Smart Structures and Systems / Vol. 13 No. 6 / pp.927-942 / 2014
  • Structural displacement is an important indicator for assessing structural safety. For structural displacement monitoring, vision-based displacement measurement systems have been widely developed; however, most systems estimate only 1- or 2-DOF translational displacement. To monitor the 6-DOF structural displacement with high accuracy, a vision-based displacement measurement system with a uniquely designed marker is proposed in this paper. The system is composed of a uniquely designed marker and a camera with a zooming capability, and the relative translational and rotational displacement between the marker and the camera is estimated by finding a homography transformation. The novel marker is designed to make the system robust to measurement noise based on a sensitivity analysis of the conventional marker, and this has been verified through Monte Carlo simulation results. The performance of the displacement estimation has been verified through two kinds of experimental tests: one using a shaking table and the other a motorized stage. The results show that the system estimates the structural 6-DOF displacement, especially the translational displacement along the Z-axis, with high accuracy in real time and is robust to measurement noise.
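The abstract above estimates displacement by finding a homography between the marker and the camera image. As a rough illustration only (not the authors' implementation), a minimal direct linear transform (DLT) homography estimate from four hypothetical marker-corner correspondences might look like:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (4+ point pairs) via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Hypothetical marker corners in the marker plane (metres) and their pixel observations.
marker = [(0.0, 0.0), (0.1, 0.0), (0.1, 0.1), (0.0, 0.1)]
pixels = [(320.0, 240.0), (420.0, 238.0), (424.0, 338.0), (318.0, 336.0)]
H = homography_dlt(marker, pixels)

reproj = H @ np.array([0.1, 0.0, 1.0])
reproj = reproj[:2] / reproj[2]   # with 4 exact correspondences, lands on (420, 238)
```

Decomposing H with the camera intrinsics would then give the relative rotation and translation; the paper's marker design additionally conditions this step against measurement noise.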

Vision-based Small UAV Indoor Flight Test Environment Using Multi-Camera

  • 원대연;오현동;허성식;박봉균;안종선;심현철;탁민제
    • Journal of the Korean Society for Aeronautical and Space Sciences / Vol. 37 No. 12 / pp.1209-1216 / 2009
  • This paper describes a system that uses image information acquired from multiple cameras installed in an indoor space for attitude estimation and control of a small UAV. The proposed system is intended to overcome the limitations of outdoor flight tests and to build an efficient flight test environment; since no additional onboard sensors are needed to measure the UAV's position and attitude, the test bed can be constructed with low-cost equipment. The camera calibration, marker detection, and attitude estimation techniques required to implement the system are introduced, and the validity and performance of the proposed method are demonstrated through experimental results on the test bed.
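The system above recovers pose from multiple calibrated cameras. As a sketch of the underlying geometry only (standard linear triangulation, not the paper's specific method), a marker's 3-D position can be recovered from two camera views; the camera matrices and point values below are made up for illustration:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3-D point from two camera projections."""
    (u1, v1), (u2, v2) = uv1, uv2
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Two hypothetical calibrated cameras one metre apart along x, both looking down +z.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])]) # camera at x = 1
X_true = np.array([0.3, 0.2, 4.0])                            # marker position
uv1 = X_true[:2] / X_true[2]                                  # normalized image points
uv2 = (X_true - np.array([1.0, 0.0, 0.0]))[:2] / X_true[2]
X_est = triangulate(P1, P2, uv1, uv2)
err = float(np.linalg.norm(X_est - X_true))
```

With noiseless synthetic measurements the linear solution is exact; the paper's setup would do this per marker across its multiple cameras and then estimate attitude from the recovered points.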

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal / Vol. 36 No. 6 / pp.913-923 / 2014
  • This paper proposes a global mapping algorithm for multiple robots from an omnidirectional-vision simultaneous localization and mapping (SLAM) approach based on an object extraction method using Lucas-Kanade optical flow motion detection and images obtained through fisheye lenses mounted on robots. The multi-robot mapping algorithm draws a global map by using map data obtained from all of the individual robots. Global mapping takes a long time to process because it exchanges map data from individual robots while searching all areas. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all information around a robot simultaneously. The computation required by the correction algorithm is reduced relative to existing methods by correcting only the object's feature points. The proposed algorithm has two steps: first, a local map is created based on an omnidirectional-vision SLAM approach for individual robots. Second, a global map is generated by merging the individual maps from multiple robots. The reliability of the proposed mapping algorithm is verified through a comparison of maps based on the proposed algorithm and real maps.
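As a toy sketch of the second step (global map merging), assuming each robot reports occupied grid cells in its own frame along with a known rigid transform to the global frame; the grid size, poses, and cell lists below are invented for illustration:

```python
import numpy as np

def merge_maps(local_maps, poses, size=20):
    """Merge per-robot local occupancy grids into one global boolean grid.

    local_maps: list of (N, 2) arrays of occupied cells in each robot's frame.
    poses: list of (theta, tx, ty) rigid transforms from robot frame to global frame.
    """
    global_map = np.zeros((size, size), dtype=bool)
    for cells, (theta, tx, ty) in zip(local_maps, poses):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        # Rotate, translate, and snap each cell into the global grid.
        world = np.rint(cells @ R.T + np.array([tx, ty])).astype(int)
        for x, y in world:
            if 0 <= x < size and 0 <= y < size:
                global_map[x, y] = True
    return global_map

robot_a = np.array([[1, 1], [2, 1]])   # obstacles seen by robot A (its own frame)
robot_b = np.array([[0, 1]])           # obstacle seen by robot B
gmap = merge_maps([robot_a, robot_b], [(0.0, 0, 0), (0.0, 10, 10)])
```

The real algorithm merges feature-based maps produced by omnidirectional-vision SLAM rather than raw grids, but the frame-alignment step has the same shape.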

Particle Filters using Gaussian Mixture Models for Vision-Based Navigation

  • 홍경우;김성중;방효충;김진원;서일원;박장호
    • Journal of the Korean Society for Aeronautical and Space Sciences / Vol. 47 No. 4 / pp.274-282 / 2019
  • Vision-based navigation for unmanned aerial vehicles is an important technology that can compensate for the vulnerabilities of the widely used GPS/INS integrated navigation system, and it is being actively researched. However, conventional image-matching techniques have the drawback of not properly accounting for actual aircraft flight conditions. This paper therefore proposes a Gaussian mixture model-based particle filter for vision-based navigation. The proposed particle filter models both the camera image and the database as Gaussian mixture models and estimates the vehicle's position from the similarity between the two. The position estimation performance is verified through Monte Carlo simulations.
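A heavily simplified 1-D sketch of the filter's predict/update/resample cycle, with the image-to-database similarity collapsed to a residual scored under a Gaussian mixture (all numbers and models here are illustrative, not from the paper):

```python
import math
import random

def gmm_pdf(x, components):
    """Density of x under a 1-D Gaussian mixture [(weight, mean, sigma), ...]."""
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, m, s in components)

def pf_step(particles, z, db_gmm, motion, rng):
    """One predict / GMM-update / resample cycle of a particle filter."""
    # Predict: propagate each particle with the motion model plus process noise.
    particles = [p + motion + rng.gauss(0, 0.2) for p in particles]
    # Update: weight = likelihood of the measurement residual under the mixture,
    # standing in for the image/database GMM similarity in the paper.
    w = [gmm_pdf(z - p, db_gmm) for p in particles]
    total = sum(w)
    # Resample with probability proportional to weight (inverse-CDF sampling).
    cdf, acc = [], 0.0
    for wi in w:
        acc += wi / total
        cdf.append(acc)
    return [next((p for p, c in zip(particles, cdf) if c >= rng.random()), particles[-1])
            for _ in particles]

rng = random.Random(0)
db_gmm = [(0.7, 0.0, 0.3), (0.3, 0.0, 1.0)]      # residual model: two-component mixture
particles = [4.5 + i / 50.0 for i in range(50)]  # prior spread around x = 5
particles = pf_step(particles, z=6.0, db_gmm=db_gmm, motion=1.0, rng=rng)
estimate = sum(particles) / len(particles)       # posterior mean, near the truth x = 6
```

The generator expression inside the resampler draws a fresh uniform sample per output particle, so each resampled particle is chosen independently in proportion to its weight.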

Experimental Study of Spacecraft Pose Estimation Algorithm Using Vision-based Sensor

  • Hyun, Jeonghoon;Eun, Youngho;Park, Sang-Young
    • Journal of Astronomy and Space Sciences / Vol. 35 No. 4 / pp.263-277 / 2018
  • This paper presents a vision-based relative pose estimation algorithm and its validation through both numerical and hardware experiments. The algorithm and the hardware system were simultaneously designed considering actual experimental conditions. Two estimation techniques were utilized to estimate relative pose: a nonlinear least squares method for initial estimation, and an extended Kalman filter for subsequent on-line estimation. A measurement model of the vision sensor and equations of motion including nonlinear perturbations were utilized in the estimation process. Numerical simulations were performed and analyzed for both the autonomous docking and formation flying scenarios. A configuration of LED-based beacons was designed to avoid measurement singularity, and its structural information was implemented in the estimation algorithm. The proposed algorithm was verified again in the experimental environment by using the Autonomous Spacecraft Test Environment for Rendezvous In proXimity (ASTERIX) facility. Additionally, a laser distance meter was added to the estimation algorithm to improve the relative position estimation accuracy. Throughout this study, the performance required for autonomous docking was characterized by examining how the estimation accuracy changes with the level of measurement error. In addition, hardware experiments confirmed the effectiveness of the suggested algorithm and its applicability to actual tasks in the real world.
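As an illustration of the nonlinear least squares idea described above, reduced to range-only measurements such as those a laser distance meter provides (the beacon geometry and values below are hypothetical, not the paper's configuration):

```python
import numpy as np

def gauss_newton_position(beacons, ranges, x0, iters=10):
    """Estimate a 3-D position from ranges to known beacons by Gauss-Newton."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diffs = x - beacons                   # (N, 3) beacon-to-estimate vectors
        pred = np.linalg.norm(diffs, axis=1)  # predicted ranges
        r = pred - ranges                     # residuals
        J = diffs / pred[:, None]             # Jacobian of each range w.r.t. x
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    return x

# Four non-coplanar hypothetical beacons and noiseless synthetic ranges.
beacons = np.array([[0.0, 0.0, 0.0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
truth = np.array([0.2, 0.3, 0.4])
ranges = np.linalg.norm(truth - beacons, axis=1)
est = gauss_newton_position(beacons, ranges, x0=[0.5, 0.5, 0.5])
pos_err = float(np.linalg.norm(est - truth))
```

The paper's initializer additionally estimates attitude from the LED beacon pattern in the camera image; the iteration structure (linearize, solve the normal equations, update) is the same.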

Enhancing Occlusion Robustness for Vision-based Construction Worker Detection Using Data Augmentation

  • Kim, Yoojun;Kim, Hyunjun;Sim, Sunghan;Ham, Youngjib
    • Proceedings of The 9th International Conference on Construction Engineering and Project Management / pp.904-911 / 2022
  • Occlusion is one of the most challenging problems for computer vision-based construction monitoring. Due to the intrinsic dynamics of construction scenes, vision-based technologies inevitably suffer from occlusions. Previous researchers have proposed occlusion handling methods that leverage prior information from sequential images. However, these methods cannot be employed for construction object detection in non-sequential images. As an alternative occlusion handling method, this study proposes a data augmentation-based framework that can enhance detection performance under occlusions. The proposed approach is specially designed for rebar occlusions, a distinctive type of occlusion that frequently occurs during construction worker detection. In the proposed method, artificial rebars are synthetically generated to emulate possible rebar occlusions on construction sites. The proposed method thus enables the model to be trained on a variety of occluded images, thereby improving detection performance without requiring sequential information. The effectiveness of the proposed method is validated by showing that it outperforms the baseline model trained without augmentation. The outcomes demonstrate the great potential of data augmentation techniques for occlusion handling, which can be readily applied to typical object detectors without changing their model architecture.
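A minimal sketch of the augmentation idea, overlaying synthetic rebar-like bars on a grayscale crop; the bar spacing, width, and intensity here are invented parameters, not values from the paper:

```python
import numpy as np

def add_rebar_occlusion(image, spacing=8, width=2, intensity=60):
    """Overlay a synthetic grid of dark rebar-like bars onto a grayscale image.

    Emulates rebar occlusion so a detector can be trained on occluded
    examples without collecting sequential footage.
    """
    out = image.copy()
    for c in range(0, out.shape[1], spacing):   # vertical bars
        out[:, c:c + width] = intensity
    for r in range(0, out.shape[0], spacing):   # horizontal bars
        out[r:r + width, :] = intensity
    return out

img = np.full((32, 32), 200, dtype=np.uint8)    # stand-in for a worker crop
aug = add_rebar_occlusion(img)
occluded_frac = float((aug != img).mean())      # fraction of pixels occluded
```

In practice such augmented crops would be mixed into the detector's training set alongside the originals, so no model architecture change is needed.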


A Hybrid Solar Tracking System using Weather Condition Estimates with a Vision Camera and GPS

  • 유정재;강연식
    • Journal of Institute of Control, Robotics and Systems / Vol. 20 No. 5 / pp.557-562 / 2014
  • It is well known that solar tracking systems can significantly increase the efficiency of existing solar panels. In this paper, a hybrid solar tracking system has been developed by using both astronomical estimates from a GPS and the image processing results of a camera vision system. A decision making process is also proposed to distinguish current weather conditions using camera images. Based on the decision making results, the proposed hybrid tracking system switches between two tracking control methods. One control method is based on astronomical estimates of the current solar position, and the other is based on the solar image processing result. The developed hybrid solar tracking system is implemented on an experimental platform, and the performance of the developed control methods is verified.
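A schematic sketch of the switching logic, with the weather decision reduced to a near-saturated-pixel count; the thresholds and angle values below are invented for illustration, not the paper's parameters:

```python
def classify_weather(pixels, sun_threshold=220, bright_frac=0.01):
    """Crude sunny/cloudy decision: sunny if enough near-saturated pixels
    (a visible solar disc) appear in the camera image."""
    frac = sum(1 for p in pixels if p >= sun_threshold) / len(pixels)
    return "sunny" if frac >= bright_frac else "cloudy"

def tracking_command(pixels, astronomical_angles, vision_angles):
    """Switch between the two controllers based on the weather decision."""
    if classify_weather(pixels) == "sunny":
        return vision_angles        # solar disc visible: trust image processing
    return astronomical_angles      # overcast: fall back to the GPS/astronomical model

sunny_img = [250] * 5 + [80] * 95   # 5% near-saturated pixels (solar disc visible)
cloudy_img = [120] * 100            # uniform overcast sky
cmd_sunny = tracking_command(sunny_img, (10.0, 45.0), (11.2, 44.1))
cmd_cloudy = tracking_command(cloudy_img, (10.0, 45.0), (11.2, 44.1))
```

The fallback direction matters: the astronomical estimate is always available from the GPS time and location, so the vision result is used only when the decision step judges it trustworthy.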