• Title/Abstract/Keywords: vision-based techniques


Real-time geometry identification of moving ships by computer vision techniques in bridge area

  • Li, Shunlong; Guo, Yapeng; Xu, Yang; Li, Zhonglong
    • Smart Structures and Systems / Vol. 23, No. 4 / pp. 359-371 / 2019
  • As part of a structural health monitoring system, the relative geometric relationship between a ship and a bridge has been recognized as important for bridge authorities and ship owners seeking to avoid ship-bridge collisions. This study proposes a novel computer vision method for the real-time geometric parameter identification of moving ships based on a single shot multibox detector (SSD), using transfer learning techniques and monocular vision. The identification framework consists of a ship detection module (coarse scale) and a geometric parameter calculation module (fine scale). For ship detection, the SSD, a deep learning algorithm, was employed and fine-tuned on ship image samples downloaded from the Internet to obtain rectangular regions of interest at the coarse scale. Subsequently, for the geometric parameter calculation, an accurate ship contour was created using morphological operations on the saturation channel of the hue-saturation-value (HSV) color space. Furthermore, a local coordinate system was constructed using a projective geometry transformation to calculate the geometric parameters of ships, such as width, length, height, localization, and velocity. The application of the proposed method to in situ video images, obtained from cameras mounted on the girder of the Wuhan Yangtze River Bridge above the shipping channel, confirmed the efficiency, accuracy, and effectiveness of the proposed method.
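
As an illustration of the fine-scale step described in this abstract, the sketch below extracts a ship contour from the HSV saturation channel of a detected region of interest using OpenCV morphological operations. It is a minimal sketch, not the authors' code; the Otsu threshold and the 5x5 elliptical kernel are assumptions.

```python
import cv2

def ship_contour(frame_bgr, roi):
    """Largest contour inside an SSD-detected ROI given as (x, y, w, h)."""
    x, y, w, h = roi
    patch = frame_bgr[y:y + h, x:x + w]
    # Work in the saturation channel of HSV space, as described above.
    sat = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)[:, :, 1]
    # Binarize (Otsu threshold is an assumption), then clean the mask with
    # morphological opening and closing to get a solid ship silhouette.
    _, mask = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```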

Path Finding via VRML and Vision Overlay for Autonomous Robots

  • 손은호; 박종호; 김영철; 정길도
    • The Korean Institute of Electrical Engineers / Proceedings of the KIEE 2006 Conference, Information and Control Division / pp. 527-529 / 2006
  • In this paper, we find a robot's path using the Virtual Reality Modeling Language (VRML) and vision overlay. To obtain a correct path, we describe a method for localizing a mobile robot in its working environment using a vision system and VRML. The robot identifies landmarks in the environment using image processing and neural network pattern matching techniques, and then performs self-positioning with a vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D scene from the vision system is overlaid with the VRML scene. This paper describes how to realize the self-positioning and shows the overlay of the 2-D and VRML scenes. The method successfully defines a robot's path.
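
The abstract does not spell out the localization algorithm, so the following is only a hedged sketch of one common self-positioning formulation: given two or more landmarks with known map (VRML) coordinates whose positions have been estimated in the robot frame, the robot pose is the 2-D rigid transform (Kabsch/Umeyama style) aligning the two point sets.

```python
import numpy as np

def estimate_pose_2d(robot_pts, map_pts):
    """2-D rigid transform (R, t) with map_pts ~= R @ robot_pts + t."""
    p = np.asarray(robot_pts, dtype=float)  # Nx2 landmarks in the robot frame
    q = np.asarray(map_pts, dtype=float)    # Nx2 same landmarks in map (VRML) frame
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    H = (p - pc).T @ (q - qc)               # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = qc - R @ pc
    return R, t  # t is the robot position in the map frame; heading from R
```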


Measurement of GMAW Bead Geometry Using a Biprism Stereo Vision Sensor

  • 이지혜; 이두현; 유중돈
    • Journal of Welding and Joining / Vol. 19, No. 2 / pp. 200-207 / 2001
  • A three-dimensional bead profile was measured in GMAW using a biprism stereo vision sensor, which consists of an optical filter, a biprism, and a CCD camera. Since a single CCD camera is used, this system has various advantages over a conventional stereo vision system using two cameras, such as finding the corresponding points along the same horizontal scanline. In this work, the biprism stereo vision sensor was designed for GMAW, and a linear calibration method was proposed to determine the prism and camera parameters. Image processing techniques were employed to find the corresponding points along the pool boundary. The iso-intensity contour corresponding to the pool boundary was found at pixel resolution, and a filter-based matching algorithm was used to refine the corresponding points to subpixel resolution. Predicted bead dimensions were in broad agreement with the measured results under the conditions of spray mode and humping bead.
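
The scanline-correspondence advantage noted in this abstract reduces, once corresponding points are found, to ordinary stereo triangulation. The sketch below shows only that last step under the usual pinhole assumptions (Z = fB/d); the focal length, effective virtual-camera baseline, and disparity values are invented for illustration, not taken from the paper's calibration.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_mm):
    """Depth (mm) of one matched point pair from horizontal disparity."""
    d = float(x_left - x_right)  # disparity (px) along the shared scanline
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front")
    return focal_px * baseline_mm / d

# Illustrative numbers only: f = 1200 px, virtual baseline 30 mm,
# disparity 18 px -> depth = 1200 * 30 / 18 = 2000 mm.
print(depth_from_disparity(340.0, 322.0, 1200.0, 30.0))
```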


Image Alignment Method Based on CUDA SURF for Multi-spectral Machine Vision Applications

  • 맹형열; 김진형; 고윤호
    • Journal of Korea Multimedia Society / Vol. 17, No. 9 / pp. 1041-1051 / 2014
  • In this paper, we propose a new image alignment technique based on CUDA SURF to solve the initial image alignment problem that frequently occurs in machine vision applications. Machine vision systems using multi-spectral images have recently become more common for solving various decision problems that cannot be performed by the human vision system. These machine vision systems mostly use markers for the initial image alignment. However, there are applications where markers cannot be used, and the alignment techniques have to be changed whenever the markers change. To solve these problems, we propose a new image alignment method for multi-spectral machine vision applications based on SURF, which extracts image features without depending on markers. The proposed method obtains a sufficient number of feature points from multi-spectral images using SURF and removes outliers iteratively based on a least squares method. We further propose an effective preliminary scheme for removing mismatched feature point pairs that may affect the overall performance of the alignment. In addition, we reduce the execution time by implementing the proposed method with CUDA on a GPGPU to guarantee real-time operation. Simulation results show that the proposed method is able to align images effectively in applications where markers cannot be used.
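
A hedged sketch of the pipeline this abstract outlines, with two substitutions: ORB stands in for SURF (SURF requires an opencv-contrib "nonfree" build) and a plain CPU path stands in for the CUDA implementation. The rejection threshold and iteration count are assumptions.

```python
import cv2
import numpy as np

def _fit_affine(src, dst):
    """Plain least-squares affine fit: [x y 1] @ M ~= [x' y'], with M 3x2."""
    X = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M

def align(src_gray, dst_gray, iters=5, reject_px=3.0):
    """Estimate an affine transform aligning src_gray to dst_gray."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(src_gray, None)
    k2, d2 = orb.detectAndCompute(dst_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.array([k1[m.queryIdx].pt for m in matches])
    dst = np.array([k2[m.trainIdx].pt for m in matches])
    # Re-fit by least squares, discarding pairs whose residual exceeds
    # reject_px, mirroring the iterative outlier removal in the abstract.
    for _ in range(iters):
        M = _fit_affine(src, dst)
        res = np.linalg.norm(
            np.hstack([src, np.ones((len(src), 1))]) @ M - dst, axis=1)
        keep = res < reject_px
        if keep.all() or keep.sum() < 4:
            break
        src, dst = src[keep], dst[keep]
    return M  # 3x2 matrix: pixel (x, y) maps to [x, y, 1] @ M
```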

Vision-based Localization for AUVs Using Weighted Template Matching in a Structured Environment

  • 김동훈; 이동화; 명현; 최현택
    • Journal of Institute of Control, Robotics and Systems / Vol. 19, No. 8 / pp. 667-675 / 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among them, a vision sensor is very useful for performing short-range tasks, in spite of harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique to be applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, weighted correlation coefficient-based template matching and a color-based image segmentation method are proposed to improve on the conventional approach. In the localization step, in order to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated by experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
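
A minimal sketch of the weighted template matching idea from the image processing step: a per-pixel weight mask emphasizes the reliable parts of the landmark template during normalized cross-correlation. OpenCV's matchTemplate accepts such a mask for the TM_CCORR_NORMED method; the paper's exact weighting scheme is not reproduced here, so the mask is left to the caller.

```python
import cv2

def detect_landmark(frame_gray, template_gray, weight_mask):
    """Best-match location and score for a weighted landmark template.

    weight_mask: float32 array, same size as template_gray; larger values
    give a pixel more influence on the correlation score.
    """
    scores = cv2.matchTemplate(frame_gray, template_gray,
                               cv2.TM_CCORR_NORMED, mask=weight_mask)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    return max_loc, max_val  # (x, y) of the top-left corner, correlation score
```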

Development and application of a vision-based displacement measurement system for structural health monitoring of civil structures

  • Lee, Jong Jae; Fukuda, Yoshio; Shinozuka, Masanobu; Cho, Soojin; Yun, Chung-Bang
    • Smart Structures and Systems / Vol. 3, No. 3 / pp. 373-384 / 2007
  • For structural health monitoring (SHM) of civil infrastructure, displacement is a good descriptor of structural behavior under all potential disturbances. However, it is not easy to measure the displacement of civil infrastructure, since conventional sensors need a reference point, and the reference point may be inaccessible owing to geographic conditions, such as a highway or river under a bridge, which makes installation of measuring devices time-consuming and costly, if not impossible. To resolve this issue, a vision-based real-time displacement measurement system using digital image processing techniques is developed. The effectiveness of the proposed system was verified by comparing the load carrying capacities of a steel plate girder bridge obtained from a conventional sensor and from the present system. Further, to measure multiple points simultaneously, a synchronized vision-based system was developed using a master/slave configuration with wireless data communication. For verification, the displacement measured by the synchronized vision-based system was compared with data measured by conventional contact-type sensors, linear variable differential transformers (LVDTs), in a laboratory test.
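
A hedged sketch of the core measurement idea: track a high-contrast target attached to the structure across frames and convert pixel motion into physical displacement through a scaling factor obtained from the known target geometry. The bright-blob centroid tracker and the threshold value are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def target_centroid(frame_gray, thresh=200):
    """Centroid (px) of a bright target, via thresholding and image moments."""
    _, mask = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        raise ValueError("target not found in frame")
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def displacement_mm(ref_frame, cur_frame, mm_per_px):
    """Target displacement (mm) between a reference frame and the current frame."""
    return (target_centroid(cur_frame) - target_centroid(ref_frame)) * mm_per_px
```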

Experimental Study of Spacecraft Pose Estimation Algorithm Using Vision-based Sensor

  • Hyun, Jeonghoon; Eun, Youngho; Park, Sang-Young
    • Journal of Astronomy and Space Sciences / Vol. 35, No. 4 / pp. 263-277 / 2018
  • This paper presents a vision-based relative pose estimation algorithm and its validation through both numerical and hardware experiments. The algorithm and the hardware system were designed together, considering actual experimental conditions. Two estimation techniques were utilized to estimate the relative pose: a nonlinear least squares method for initial estimation, and an extended Kalman filter for subsequent on-line estimation. A measurement model of the vision sensor and equations of motion including nonlinear perturbations were utilized in the estimation process. Numerical simulations were performed and analyzed for both autonomous docking and formation flying scenarios. A configuration of LED-based beacons was designed to avoid measurement singularity, and its structural information was implemented in the estimation algorithm. The proposed algorithm was then verified experimentally using the Autonomous Spacecraft Test Environment for Rendezvous In proXimity (ASTERIX) facility. Additionally, a laser distance meter was added to the estimation algorithm to improve the relative position estimation accuracy. Throughout this study, the performance required for autonomous docking was characterized by confirming how the estimation accuracy changes with the level of measurement error. In addition, the hardware experiments confirmed the effectiveness of the suggested algorithm and its applicability to actual tasks in the real world.
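
As a sketch of the measurement side only (the paper pairs a nonlinear least squares initializer with an EKF, neither of which is reproduced here), relative pose can be recovered from the known 3-D beacon geometry and the detected LED image coordinates with a PnP solver. The beacon layout and camera intrinsics below are invented for illustration; the asymmetric, non-coplanar layout echoes the abstract's point about avoiding measurement singularity.

```python
import cv2
import numpy as np

# Hypothetical beacon layout (meters) and camera intrinsics -- illustrative only.
BEACON_XYZ = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0],
                       [0.0, 0.2, 0.0], [0.1, 0.1, 0.05]])  # non-coplanar
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def relative_pose(led_image_pts):
    """4x2 detected LED centroids (px) -> (rotation vector, translation vector)."""
    ok, rvec, tvec = cv2.solvePnP(BEACON_XYZ,
                                  np.asarray(led_image_pts, dtype=np.float64),
                                  K, None, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("pose estimation failed")
    return rvec, tvec
```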

Enhancing Occlusion Robustness for Vision-based Construction Worker Detection Using Data Augmentation

  • Kim, Yoojun; Kim, Hyunjun; Sim, Sunghan; Ham, Youngjib
    • International Conference Proceedings / The 9th International Conference on Construction Engineering and Project Management / pp. 904-911 / 2022
  • Occlusion is one of the most challenging problems for computer vision-based construction monitoring. Due to the intrinsic dynamics of construction scenes, vision-based technologies inevitably suffer from occlusions. Previous researchers have proposed occlusion handling methods that leverage prior information from sequential images. However, these methods cannot be employed for construction object detection in non-sequential images. As an alternative, this study proposes a data augmentation-based framework that can enhance detection performance under occlusion. The proposed approach is specially designed for rebar occlusions, a distinctive type of occlusion that frequently occurs during construction worker detection. In the proposed method, artificial rebars are synthetically generated to emulate possible rebar occlusions on construction sites. The proposed method thus enables the model to be trained on a variety of occluded images, thereby improving detection performance without requiring sequential information. The effectiveness of the proposed method is validated by showing that it outperforms a baseline model trained without augmentation. The outcomes demonstrate the great potential of data augmentation techniques for occlusion handling, which can be readily applied to typical object detectors without changing their model architecture.
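
A hedged sketch of the augmentation idea: synthetic bar-shaped "rebar" occluders are drawn over training images so the detector learns to cope with rebar-like occlusions. Bar count, thickness, and color ranges are assumptions; the paper's actual generation procedure may differ.

```python
import cv2
import numpy as np

def add_rebar_occlusion(img, n_bars=6, thickness_range=(6, 14), rng=None):
    """Draw dark bar-shaped occluders over a training image (BGR, HxWx3)."""
    rng = rng if rng is not None else np.random.default_rng()
    out = img.copy()
    h, w = out.shape[:2]
    for _ in range(n_bars):
        t = int(rng.integers(*thickness_range))                  # bar thickness
        color = tuple(int(c) for c in rng.integers(40, 90, 3))   # dark steel tones
        if rng.random() < 0.5:  # vertical bar
            x = int(rng.integers(0, w))
            cv2.line(out, (x, 0), (x, h - 1), color, t)
        else:                   # horizontal bar
            y = int(rng.integers(0, h))
            cv2.line(out, (0, y), (w - 1, y), color, t)
    return out
```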


A review of ground camera-based computer vision techniques for flood management

  • Sanghoon Jun; Hyewoon Jang; Seungjun Kim; Jong-Sub Lee; Donghwi Jung
    • Computers and Concrete / Vol. 33, No. 4 / pp. 425-443 / 2024
  • Floods are among the most common natural hazards in urban areas. To mitigate the problems caused by flooding, unstructured data such as images and videos collected from closed-circuit television (CCTV) cameras or unmanned aerial vehicles (UAVs) have been examined for flood management (FM). Many computer vision (CV) techniques have been widely adopted to analyze such imagery. Although some papers have reviewed recent CV approaches that utilize UAV images or remote sensing data, less effort has been devoted to studies focused on CCTV data. In addition, few studies have distinguished between the main research objectives of CV techniques (e.g., flood depth and flooded area), which is needed for a comprehensive understanding of the current status and trends of CV applications for each FM research topic. Thus, this paper provides a comprehensive review of the literature that proposes CV techniques for FM using ground camera (e.g., CCTV) data. Research topics are classified into four categories: flood depth, flood detection, flooded area, and surface water velocity. These application areas are subdivided into three types: urban, river and stream, and experimental. The adopted CV techniques are summarized for each research topic and application area. The primary goal of this review is to provide guidance for researchers who plan to design a CV model for a specific purpose such as flood depth estimation. Researchers should be able to draw on this review to construct an appropriate CV model for any FM purpose.

Robust Vision-Based Autonomous Navigation Against Environment Changes

  • 김정호; 권인소
    • Journal of the Institute of Embedded Engineering of Korea / Vol. 3, No. 2 / pp. 57-65 / 2008
  • Recently, many studies on intelligent robots have been conducted. An intelligent robot is capable of recognizing environments or objects to autonomously perform specific tasks using sensor readings. One of the fundamental problems in vision-based robot applications is to recognize where the robot is and to decide on a safe path for autonomous navigation. However, previous approaches only consider well-organized environments in which there are no moving objects or environmental changes. In this paper, we introduce a novel navigation strategy that handles occlusions caused by moving objects using various computer vision techniques. Experimental results demonstrate the capability to overcome such difficulties in autonomous navigation.
