• Title/Summary/Keywords: Vision Based Sensor

가우시안 프로세스를 이용한 실내 환경에서 소형무인기에 적합한 SLAM 시스템 개발 (Development of a SLAM System for Small UAVs in Indoor Environments using Gaussian Processes)

  • 전영산;최종은;이정욱
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 20, No. 11
    • /
    • pp.1098-1102
    • /
    • 2014
  • Localization of aerial vehicles and map building of flight environments are key technologies for the autonomous flight of small UAVs. In outdoor environments, an unmanned aircraft can easily use GPS (Global Positioning System) for localization with acceptable accuracy. However, since GPS is not available in indoor environments, a SLAM (Simultaneous Localization and Mapping) system suitable for small UAVs is needed. In this paper, we propose a vision-based SLAM system that uses vision sensors and an AHRS (Attitude Heading Reference System) sensor. Feature points in images captured from the vision sensor are extracted using a GPU (Graphics Processing Unit) based SIFT (Scale-Invariant Feature Transform) algorithm. These feature points are then combined with attitude information from the AHRS to estimate the position of the small UAV. Based on the location information and color distribution, a Gaussian process model is generated, which serves as the map. The experimental results show that the position of a small unmanned aircraft is estimated properly and that a map of the environment is constructed by the proposed method. Finally, the reliability of the proposed method is verified by comparing the estimated values with the actual values.
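The abstract above builds the map as a Gaussian process over estimated positions and color distribution. As a minimal sketch of what plain GP regression computes in that spirit (the kernel, hyperparameters, and toy data below are illustrative, not the paper's implementation):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 2-D positions."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-3):
    """GP posterior mean/variance of an observed quantity (e.g. a color
    intensity) at unvisited map locations."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    Kss = rbf_kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)       # K^-1 y
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Toy map: an intensity observed at four estimated UAV positions; the GP
# interpolates it (with uncertainty) at an unvisited point.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([0.2, 0.8, 0.4, 1.0])
mean, var = gp_predict(X, y, np.array([[0.5, 0.5]]))
```

The predicted mean at the center lies between the observed values, and the predicted variance quantifies how much the map should be trusted there.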

3-D 비젼센서를 위한 고속 자동선택 알고리즘 (High Speed Self-Adaptive Algorithms for Implementation in a 3-D Vision Sensor)

  • P. Miche;A. Bensrhair;이상국
    • Journal of Sensor Science and Technology
    • /
    • Vol. 6, No. 2
    • /
    • pp.123-130
    • /
    • 1997
  • This paper describes an original stereo vision system composed of two elements: a self-adaptive image segmentation process based on a new concept called "declivity", and a fast stereo matching algorithm designed using self-adaptive decision parameters. Currently, completing a depth map of an indoor image takes 3 s on a SUN-IPX, but a combination of DSP chips under development is expected to reduce this time to under 1 s.
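The declivity-based segmentation and self-adaptive matching are specific to the paper; as background, a basic sum-of-absolute-differences (SAD) block matcher shows what any stereo matching stage fundamentally computes (synthetic images and illustrative window/disparity settings, not the paper's algorithm):

```python
import numpy as np

def sad_disparity(left, right, max_disp=5, win=1):
    """Brute-force disparity: for each left pixel, find the horizontal
    shift d minimizing the sum of absolute differences (SAD) over a
    (2*win+1)^2 window in the right image."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.abs(patch - right[y - win:y + win + 1,
                                          x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic pair: the right image is the left shifted by 3 pixels,
# so the true disparity is 3 everywhere in the overlap.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (16, 32))
right = np.zeros_like(left)
right[:, :-3] = left[:, 3:]
d = sad_disparity(left, right)
```

Depth then follows from disparity via the stereo baseline; the paper's contribution is making the matching decisions self-adaptive and fast rather than brute-force as here.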

Vision Sensor-Based Driving Algorithm for Indoor Automatic Guided Vehicles

  • Quan, Nguyen Van;Eum, Hyuk-Min;Lee, Jeisung;Hyun, Chang-Ho
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 13, No. 2
    • /
    • pp.140-146
    • /
    • 2013
  • In this paper, we describe a vision sensor-based driving algorithm for indoor automatic guided vehicles (AGVs) that facilitates a path tracking task using two mono cameras for navigation. One camera is mounted on the vehicle to observe the environment and to detect markers in front of the vehicle. The other camera is attached with its view perpendicular to the floor, which compensates for the distance between the wheels and the markers. The angle and distance from the center of the two wheels to the center of the marker are also obtained using these two cameras. We propose five movement patterns for AGVs to guarantee smooth performance during path tracking: starting, moving straight, pre-turning, left/right turning, and stopping. This driving algorithm based on two vision sensors gives greater flexibility to AGVs, including easy layout changes, autonomy, and economy. The algorithm was validated in an experiment using a two-wheeled mobile robot.
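The five movement patterns above amount to a small state selector driven by the marker observation. A minimal sketch of such a selector (the thresholds and decision order are illustrative assumptions, not taken from the paper):

```python
from enum import Enum, auto

class Pattern(Enum):
    """The five driving patterns named in the abstract."""
    START = auto()
    STRAIGHT = auto()
    PRE_TURN = auto()
    TURN = auto()
    STOP = auto()

def select_pattern(dist_to_marker, angle_deg, moving, marker_is_goal,
                   pre_turn_dist=0.5, angle_tol=5.0):
    """Pick a driving pattern from the current marker observation:
    distance (m) and angle (deg) from the wheel center to the marker."""
    if not moving:
        return Pattern.START
    if marker_is_goal and dist_to_marker < pre_turn_dist:
        return Pattern.STOP
    if abs(angle_deg) > angle_tol:       # heading error too large
        return Pattern.TURN
    if dist_to_marker < pre_turn_dist:   # approaching a turn marker
        return Pattern.PRE_TURN
    return Pattern.STRAIGHT
```

In a real AGV each pattern would map to a wheel-velocity profile; the floor-facing camera supplies the `dist_to_marker` and `angle_deg` inputs.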

다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선 (Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map)

  • 김시종;안광호;성창훈;정명진
    • The Journal of Korea Robotics Society
    • /
    • Vol. 4, No. 4
    • /
    • pp.298-304
    • /
    • 2009
  • This paper describes an algorithm that improves a 3D reconstruction result using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the extrinsic calibration matrices of the camera-LRF pair (${\Phi}$, ${\Delta}$) and the camera calibration matrix (K). An LRF disparity map is then generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions are compensated using the LRF disparity map; the disparity map resulting from this compensation is the multi-sensor fusion disparity map, with which the multi-sensor 3D reconstruction based on stereo vision and the LRF can be refined. The refinement algorithm is specified in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested with synchronized stereo image pairs and LRF 3D scan data.
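The projection step described above, mapping LRF 3D points into pixel coordinates via the extrinsics and the camera matrix K, can be sketched as follows (identity extrinsics and an illustrative K, not the paper's calibration values):

```python
import numpy as np

def project_lrf_points(points_lrf, R, t, K):
    """Project Nx3 LRF points into pixel coordinates using the
    extrinsic calibration (R, t) and intrinsic matrix K."""
    cam = R @ points_lrf.T + t[:, None]   # LRF frame -> camera frame
    uvw = K @ cam                         # camera frame -> homogeneous pixels
    return (uvw[:2] / uvw[2]).T           # perspective division -> (u, v)

# Illustrative pinhole intrinsics: 500 px focal length, 640x480 image.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)        # assume LRF and camera frames coincide
t = np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0]])   # one point 2 m straight ahead
px = project_lrf_points(pts, R, t, K)
```

A point on the optical axis lands at the principal point; interpolating many such projected points over the image grid yields the LRF disparity map the paper uses for compensation.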

AVM 카메라와 융합을 위한 다중 상용 레이더 데이터 획득 플랫폼 개발 (Development of Data Logging Platform of Multiple Commercial Radars for Sensor Fusion With AVM Cameras)

  • 진영석;전형철;신영남;현유진
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • Vol. 13, No. 4
    • /
    • pp.169-178
    • /
    • 2018
  • Currently, various sensors are used for advanced driver assistance systems. To overcome the limitations of individual sensors, sensor fusion has recently attracted attention in the field of intelligent vehicles, and vision-radar fusion in particular has become a popular approach. A typical fusion method uses a vision sensor to recognize targets within ROIs (Regions Of Interest) generated by radar sensors. Because AVM (Around View Monitor) cameras, due to their wide-angle lenses, have limited detection performance at close range and around the edges of the field of view, exact ROI extraction from the radar sensor is especially important for high-performance fusion of AVM cameras and radar sensors. To address this problem, we propose a sensor fusion scheme based on commercial radar modules from the vendor Delphi. First, we configured a multiple-radar data logging system together with AVM cameras. We also designed radar post-processing algorithms to extract exact ROIs. Finally, using the developed hardware and software platforms, we verified the post-processing algorithm in indoor and outdoor environments.
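The core of radar-to-camera ROI generation is mapping a radar detection (range, azimuth) to a rectangle in the image. A minimal flat-world sketch under a simple angular camera model (every parameter here — image size, field of view, nominal object size — is an illustrative assumption, not the paper's AVM geometry):

```python
import math

def radar_target_to_roi(rng_m, azimuth_deg, img_w=1280, img_h=720,
                        hfov_deg=100.0, obj_w=1.8, obj_h=1.5):
    """Map a radar detection (range in m, azimuth in deg) to an image
    ROI (x0, y0, x1, y1), assuming radar and camera share an origin and
    the target has a nominal physical width/height in meters."""
    # Focal length in pixels implied by the horizontal field of view.
    f = (img_w / 2) / math.tan(math.radians(hfov_deg / 2))
    cx = img_w / 2 + f * math.tan(math.radians(azimuth_deg))
    # Pinhole scaling: apparent size shrinks as 1 / range.
    half_w = f * obj_w / (2 * rng_m)
    half_h = f * obj_h / (2 * rng_m)
    cy = img_h / 2
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

roi = radar_target_to_roi(10.0, 0.0)   # target 10 m dead ahead
```

A detection straight ahead yields an ROI centered on the image, and closer targets yield larger ROIs; a real AVM pipeline would replace the angular model with the calibrated wide-angle distortion model.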

비전 센서를 이용한 쿼드로터형 무인비행체의 목표 추적 제어 (Target Tracking Control of a Quadrotor UAV using Vision Sensor)

  • 유민구;홍성경
    • Journal of the Korean Society for Aeronautical and Space Sciences
    • /
    • Vol. 40, No. 2
    • /
    • pp.118-128
    • /
    • 2012
  • This paper presents the design of a vision sensor-based target tracking position controller for a quadrotor UAV, verified through simulation and experiment. Before the controller design, the quadrotor dynamics were analyzed and a model was identified from experimental data; the model coefficients were obtained by applying the PEM (Prediction Error Method) to actual flight data. Based on the estimated model, a position controller that follows an arbitrary target was designed using the LQR (Linear Quadratic Regulator) technique. The relative position between the quadrotor and the object was obtained through color tracking based on the vision sensor's color information, and altitude was measured with an ultrasonic sensor. Finally, tracking experiments with an actually moving object were performed to evaluate the performance of the LQR controller.
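The LQR design step above computes a state-feedback gain from a model. A minimal sketch using fixed-point iteration of the discrete Riccati equation on a double-integrator stand-in for one position axis (the model, weights, and time step are illustrative, not the paper's identified quadrotor model):

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K (u = -K x) minimizing sum(x'Qx + u'Ru),
    found by iterating the Riccati recursion to a fixed point."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# One axis of relative position/velocity to the target, 10 Hz update.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, np.eye(2), np.array([[1.0]]))
```

Applying u = -K x to the relative position measured by the color tracker drives the tracking error to zero; stability can be checked by confirming the closed-loop eigenvalues of A - BK lie inside the unit circle.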

Identification of structural systems and excitations using vision-based displacement measurements and substructure approach

  • Lei, Ying;Qi, Chengkai
    • Smart Structures and Systems
    • /
    • Vol. 30, No. 3
    • /
    • pp.273-286
    • /
    • 2022
  • In recent years, vision-based monitoring has received great attention. However, structural identification using vision-based displacement measurements is far less established. In particular, simultaneous identification of structural systems and unknown excitations from vision-based displacement measurements remains a challenging task, since the unknown excitations do not appear directly in the observation equations. Moreover, measurement accuracy deteriorates over a wider field of view in vision-based monitoring, so with monocular vision only a portion of the structure is measured rather than the whole structure. In this paper, the identification of structural systems and excitations using vision-based displacement measurements is investigated. A substructure identification approach is adopted to address the limited field of view of vision-based monitoring. For the identification of a target substructure, the substructure interaction forces are treated as unknown inputs. A smoothing extended Kalman filter with unknown inputs without direct feedthrough is proposed for the simultaneous identification of the substructure and the unknown inputs from vision-based displacement measurements; the smoothing makes the identification robust to measurement noise. The proposed algorithm is first validated by the identification of a three-span continuous beam bridge under an impact load, and then examined on the more difficult identification of a frame under unknown wind excitation. Both examples validate the good performance of the proposed method.
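The paper's smoothing EKF handles unknown inputs that never appear in the observation equation; the core idea can be illustrated with a plain linear Kalman filter whose state is augmented with the unknown input under a random-walk model (toy double-integrator structure and noise levels are assumptions for illustration, not the paper's formulation):

```python
import numpy as np

def augmented_kf_step(x, P, z, A, B, H, Q, Rm):
    """One Kalman step on the augmented state [x; f]: the unknown input
    f is modeled as a random walk and estimated jointly with x from
    displacement measurements z."""
    n, m = A.shape[0], B.shape[1]
    Aa = np.block([[A, B], [np.zeros((m, n)), np.eye(m)]])
    Ha = np.hstack([H, np.zeros((H.shape[0], m))])
    x_pred = Aa @ x                      # predict augmented state
    P_pred = Aa @ P @ Aa.T + Q
    S = Ha @ P_pred @ Ha.T + Rm          # innovation covariance
    K = P_pred @ Ha.T @ np.linalg.inv(S)
    x_est = x_pred + K @ (z - Ha @ x_pred)
    P_est = (np.eye(n + m) - K @ Ha) @ P_pred
    return x_est, P_est

# Toy check: a double integrator measured by displacement only, driven
# by a constant unknown force f = 2.0; the filter should recover f.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-6, 1e-6, 1e-4])
Rm = np.array([[1e-4]])
x_true = np.zeros(2)
x_est, P = np.zeros(3), np.eye(3)
for _ in range(500):
    x_true = A @ x_true + B[:, 0] * 2.0
    z = np.array([x_true[0]])
    x_est, P = augmented_kf_step(x_est, P, z, A, B, H, Q, Rm)
```

Even though the force never appears in the measurement equation, it is observable through its effect on the measured displacement history, which is the mechanism the paper exploits for substructure interaction forces.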

k-근접 이웃 및 비전센서를 활용한 프리팹 강구조물 조립 성능 평가 기술 (Assembly Performance Evaluation for Prefabricated Steel Structures Using k-nearest Neighbor and Vision Sensor)

  • 방현태;유병준;전해민
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • Vol. 35, No. 5
    • /
    • pp.259-266
    • /
    • 2022
  • In this paper, a deep learning and vision sensor-based assembly performance evaluation model was developed for the quality control of prefabricated structures. For joint detection, a deep learning model combining an encoder-decoder network with receptive field block convolution modules was used. Bolt holes within the detected joint regions were then located, and their position values were computed and fed to a k-nearest neighbor-based model to evaluate assembly quality. To verify the proposed method, joint mock-ups were fabricated by 3D printing and used to validate the joint detection and assembly performance prediction models. The results showed that joints were detected with high precision, and that, based on the bolt-hole positions within the detected joints, the assembly performance of the prefabricated structure could be evaluated with a classification error below 5%.
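The final step above is a k-nearest neighbor vote on bolt-hole position features. A minimal sketch of that classifier (feature encoding, labels, and toy offsets in mm are illustrative assumptions, not the paper's training data):

```python
import numpy as np

def knn_assembly_quality(hole_offsets, labels, query, k=3):
    """Classify an assembly by majority vote of the k nearest training
    samples; features are bolt-hole position offsets (mm) from their
    nominal locations."""
    d = np.linalg.norm(hole_offsets - query, axis=1)
    nearest = labels[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Illustrative training set: small offsets -> "ok", large -> "bad".
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.3, 0.2],
              [2.1, 1.9], [1.8, 2.2], [2.0, 2.0]])
y = np.array(["ok", "ok", "ok", "bad", "bad", "bad"])
pred = knn_assembly_quality(X, y, np.array([0.2, 0.3]))
```

In the paper's pipeline, the query vector would come from the bolt-hole positions measured inside the joint regions detected by the encoder-decoder network.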

자동차 글라스 조립 자동화설비를 위한 FPGA기반 실러 도포검사 비전시스템 개발 (Development of an FPGA-based Sealer Coating Inspection Vision System for Automotive Glass Assembly Automation Equipment)

  • 김주영;박재률
    • 센서학회지
    • /
    • Vol. 32, No. 5
    • /
    • pp.320-327
    • /
    • 2023
  • In this study, an FPGA-based sealer inspection system was developed to inspect the sealer applied when mounting vehicle glass on a car body. Sealer is a liquid or paste-like material that provides adhesion, sealing, and waterproofing when vehicle parts are mounted and assembled on a car body. The system installed on the existing vehicle glass parts line fails to detect the sealer in the glass rotation section and takes a long time to process. To solve these problems, this study developed a line laser camera sensor and an FPGA vision signal processing module. The line laser camera sensor was designed so that the resolution and speed of the camera used for data acquisition can be adjusted according to the irradiation angle of the laser, and its mountability within the entire system was considered to prevent interference with the sealer ejection machine. In addition, a vision signal processing module based on the Zynq-7020 FPGA chip was developed to improve the processing speed of the algorithm that converts the sealer shape image acquired from the 2D camera into a profile and calculates the width and height of the sealer from the converted profile. The performance of the developed sealer application inspection system was verified in an experimental environment identical to an actual automobile production line, and the results confirmed sealer application inspection at a level satisfying the requirements of automotive field standards.
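The width/height calculation from a laser-line profile can be sketched in a few lines: subtract the glass baseline, threshold, and measure the bead's extent (sampling pitch, threshold, and the synthetic profile are illustrative assumptions; the actual computation runs on the FPGA):

```python
import numpy as np

def bead_width_height(profile, x_step_mm=0.1, base_window=10, thresh_mm=0.3):
    """Estimate sealer bead width and height from one laser-line height
    profile: subtract the glass baseline (taken from the scan edges),
    then threshold to find the bead extent."""
    base = np.mean(np.r_[profile[:base_window], profile[-base_window:]])
    h = profile - base                    # height above the glass surface
    on = np.where(h > thresh_mm)[0]       # samples belonging to the bead
    if on.size == 0:
        return 0.0, 0.0
    width = (on[-1] - on[0] + 1) * x_step_mm
    return width, float(h.max())

# Synthetic profile: flat glass with a 2 mm-high, 3 mm-wide bead.
x = np.arange(100)
profile = np.where((x >= 40) & (x < 70), 2.0, 0.0)
w, h = bead_width_height(profile)
```

Comparing the measured width and height per scanline against tolerance bands is then enough to flag under- or over-application of the sealer.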

레이더, 비전, 라이더 융합 기반 자율주행 환경 인지 센서 고장 진단 (Radar, Vision, Lidar Fusion-based Environment Sensor Fault Detection Algorithm for Automated Vehicles)

  • 최승리;정용환;이명수;이경수
    • Journal of Auto-vehicle Safety Association
    • /
    • Vol. 9, No. 4
    • /
    • pp.32-37
    • /
    • 2017
  • For automated vehicles, the integrity and fault tolerance of environment perception sensors are an important issue. This paper presents a radar, vision, and lidar (laser radar) fusion-based fault detection algorithm for autonomous vehicles. The characteristics of each sensor are described, and the error in the moving-target states estimated by each sensor is analyzed to derive a method for detecting environment sensor faults from the characteristics of this error. Each moving-target state is estimated using an EKF/IMM method. To guarantee the reliability of the fault detection algorithm, driving data collected on several types of roads are analyzed.
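One simple way to turn cross-sensor estimation errors into a fault decision, in the spirit of the abstract, is a majority-vote residual test against the per-sensor target estimates (the threshold and toy estimates are illustrative assumptions, not the paper's EKF/IMM-based criterion):

```python
import numpy as np

def detect_faulty_sensors(estimates, threshold=1.0):
    """Flag sensors whose target-state estimate deviates from the
    element-wise median of all sensors by more than a threshold."""
    est = np.asarray(estimates, dtype=float)
    ref = np.median(est, axis=0)                 # consensus estimate
    residual = np.linalg.norm(est - ref, axis=1)  # per-sensor deviation
    return [i for i, r in enumerate(residual) if r > threshold]

# Radar and lidar agree on the target position (m); vision has drifted.
radar  = [10.0, 2.0]
vision = [13.5, 2.2]
lidar  = [10.1, 2.1]
faulty = detect_faulty_sensors([radar, vision, lidar])
```

With three heterogeneous sensors, a single fault is outvoted by the other two; in the paper the compared quantities are EKF/IMM state estimates rather than raw positions.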