• Title/Summary/Keyword: Stereo vision sensor

Measurement of GMAW Bead Geometry Using Biprism Stereo Vision Sensor (바이프리즘 스테레오 시각 센서를 이용한 GMA 용접 비드의 3차원 형상 측정)

  • 이지혜;이두현;유중돈
    • Journal of Welding and Joining / v.19 no.2 / pp.200-207 / 2001
  • The three-dimensional bead profile in GMAW was measured using a biprism stereo vision sensor consisting of an optical filter, a biprism, and a CCD camera. Since a single CCD camera is used, the system has several advantages over a conventional two-camera stereo system, such as finding corresponding points along the same horizontal scanline. In this work, the biprism stereo vision sensor was designed for GMAW, and a linear calibration method was proposed to determine the prism and camera parameters. Image processing techniques were employed to find corresponding points along the pool boundary. The iso-intensity contour corresponding to the pool boundary was found at pixel accuracy, and a filter-based matching algorithm was used to refine the corresponding points to subpixel accuracy. Predicted bead dimensions were in broad agreement with the measured results under spray-mode and humping-bead conditions.
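
A minimal sketch of the single-camera biprism geometry described above, assuming the biprism behaves like two virtual cameras with a small baseline so that matches lie on the same horizontal scanline; the focal length and baseline are illustrative values, not the paper's calibration results:

```python
# Depth recovery from the two half-images produced by a biprism in front of a
# single CCD camera. The biprism acts like two virtual cameras with a small
# baseline, so corresponding points lie on the same scanline and depth follows
# the usual triangulation formula. F_PX and BASELINE_M are assumed values.
import numpy as np

F_PX = 1200.0      # focal length in pixels (assumed)
BASELINE_M = 0.02  # baseline of the two virtual cameras created by the biprism (assumed)

def depth_from_disparity(x_left: np.ndarray, x_right: np.ndarray) -> np.ndarray:
    """Triangulate depth Z = f*B/d for matched column coordinates on one scanline."""
    disparity = x_left - x_right
    disparity = np.where(disparity > 0, disparity, np.nan)  # reject invalid matches
    return F_PX * BASELINE_M / disparity

# Example: three matched points along the weld-pool boundary on one scanline.
x_l = np.array([412.3, 415.8, 420.1])   # subpixel columns in the left half-image
x_r = np.array([396.1, 398.9, 402.4])   # corresponding columns in the right half-image
print(depth_from_disparity(x_l, x_r))   # depth of each boundary point in metres
```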

Development of a Stereo Vision Sensor-based Volume Measurement and Cutting Location Estimation Algorithm for Portion Cutting (포션커팅을 위한 스테레오 비전 센서 기반 부피 측정 및 절단 위치 추정 알고리즘 개발)

  • Ho Jin Kim;Seung Hyun Jeong
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.5 / pp.219-225 / 2024
  • In this study, an algorithm was developed to measure the volume of meat products passing through the conveyor line of a portion cutter using a stereo vision sensor and to calculate the cutting positions that divide them into portions of equal weight. Previously, three or more laser profile sensors were used for this purpose; here, the volume was measured using four stereo vision sensors, and the accuracy of the developed algorithm was verified to confirm the applicability of the technique. The technique consists of stereo correction, scanning and outlier removal, and cutting-position calculation procedures. Comparing the volume measured by the developed algorithm with measurements from an accurate 3D scanner confirmed an accuracy of 91%. Additionally, for a 50g target weight, where the cutting-position calculation is most demanding, the cutting position was computed in about 2.98 seconds, further confirming the applicability of the developed technique.
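
A minimal sketch of the cutting-position idea described above (an assumed formulation, not the paper's algorithm): the cumulative volume along the conveyor is integrated from the scanned cross-sections and inverted at multiples of the target portion volume; the density and target weight below are illustrative:

```python
# Given a scanned cross-section area profile along the conveyor, the volume up to
# position x is the cumulative integral of the area. Cut positions are the x where
# that cumulative volume crosses multiples of the target portion volume
# (target weight / density).
import numpy as np

def cutting_positions(x, cross_section_area, target_weight_g, density_g_per_cm3):
    """x in cm, cross_section_area in cm^2; returns cut positions in cm."""
    target_volume = target_weight_g / density_g_per_cm3          # cm^3 per portion
    cum_volume = np.concatenate(([0.0], np.cumsum(
        0.5 * (cross_section_area[1:] + cross_section_area[:-1]) * np.diff(x))))
    n_portions = int(cum_volume[-1] // target_volume)
    targets = target_volume * np.arange(1, n_portions + 1)
    return np.interp(targets, cum_volume, x)                      # invert the cumulative curve

# Example: a 40 cm product with a roughly constant 25 cm^2 cross-section.
x = np.linspace(0.0, 40.0, 401)
area = 25.0 + 2.0 * np.sin(x / 5.0)
print(cutting_positions(x, area, target_weight_g=50.0, density_g_per_cm3=1.05))
```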

Implementation of a Stereo Vision Using Saliency Map Method

  • Choi, Hyeung-Sik;Kim, Hwan-Sung;Shin, Hee-Young;Lee, Min-Ho
    • Journal of Advanced Marine Engineering and Technology / v.36 no.5 / pp.674-682 / 2012
  • A new intelligent stereo vision sensor system was studied for the motion and depth control of unmanned vehicles. A new bottom-up saliency map model for a human-like active stereo vision system, based on the biological visual process, was developed to select a target object. If the left and right cameras successfully find the same target object, the implemented active vision system with two cameras focuses on the landmark and can detect depth and direction information. Using this information, the unmanned vehicle can approach the target autonomously. A number of tests of the proposed bottom-up saliency map were performed, and their results are presented.
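
A minimal sketch of a generic bottom-up saliency map (centre-surround contrast on the intensity channel only, in the spirit of Itti-Koch models; not the paper's biologically inspired model), whose peak would then be matched between the left and right images to fixate both cameras on the target:

```python
# Saliency as the absolute difference between a small centre mean and a larger
# surround mean, normalised to [0, 1]. Window sizes are illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

def intensity_saliency(gray: np.ndarray, center: int = 3, surround: int = 21) -> np.ndarray:
    c = uniform_filter(gray.astype(float), size=center)
    s = uniform_filter(gray.astype(float), size=surround)
    sal = np.abs(c - s)
    return sal / (sal.max() + 1e-9)

# Example: a bright blob on a dark background dominates the saliency map.
img = np.zeros((120, 160))
img[50:70, 80:100] = 255.0
sal = intensity_saliency(img)
print(np.unravel_index(np.argmax(sal), sal.shape))  # a pixel just inside the blob's border
```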

A study on map generation of autonomous Mobile Robot using stereo vision system (스테레오 비젼 시스템을 이용한 자율 이동 로봇의 지도 작성에 관한 연구)

  • Son, Young-Seop;Lee, Kwae-Hi
    • Proceedings of the KIEE Conference / 1998.07g / pp.2200-2202 / 1998
  • An autonomous mobile robot provides many functions such as sensing, processing, and driving. For more intelligent jobs, more intelligent functions must be added and existing functions may be updated. To execute a job, an autonomous mobile robot needs information about its surrounding environment, so the robot uses sonar sensors, vision sensors, and so on. The obtained sensor information is used for map generation. This paper focuses on map generation using a stereo vision system.
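
A minimal sketch, under assumed conventions rather than the paper's method, of how stereo-derived 3D points can be accumulated into a 2D occupancy grid map; the grid resolution and height limits are illustrative:

```python
# Each reconstructed 3D point from the stereo pair is dropped onto a ground-plane
# grid cell, which is marked occupied.
import numpy as np

GRID_RES_M = 0.10      # cell size (assumed)
GRID_SIZE = 100        # 10 m x 10 m map centred on the robot

def update_occupancy(points_xyz: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """points_xyz: Nx3 points in the robot frame (x forward, y left, z up), metres."""
    obstacles = points_xyz[(points_xyz[:, 2] > 0.05) & (points_xyz[:, 2] < 2.0)]  # drop floor/ceiling
    ix = (obstacles[:, 0] / GRID_RES_M).astype(int) + GRID_SIZE // 2
    iy = (obstacles[:, 1] / GRID_RES_M).astype(int) + GRID_SIZE // 2
    valid = (ix >= 0) & (ix < GRID_SIZE) & (iy >= 0) & (iy < GRID_SIZE)
    grid[iy[valid], ix[valid]] = 1
    return grid

grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.uint8)
pts = np.array([[1.0, 0.2, 0.5], [2.5, -1.0, 1.2], [0.5, 0.0, 0.01]])  # last one is floor
print(update_occupancy(pts, grid).sum())  # -> 2 occupied cells
```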

High Speed Self-Adaptive Algorithms for Implementation in a 3-D Vision Sensor (3-D 비젼센서를 위한 고속 자동선택 알고리즘)

  • Miche, Pierre;Bensrhair, Abdelaziz;Lee, Sang-Goog
    • Journal of Sensor Science and Technology / v.6 no.2 / pp.123-130 / 1997
  • In this paper, we present an original stereo vision system comprising two processes: 1. an image segmentation algorithm based on a new concept called declivity, which uses automatically selected thresholds; 2. a new stereo matching algorithm based on an optimal path search, where the path is obtained by a dynamic programming method that uses the threshold values computed during segmentation. At present, a complete depth map of an indoor scene takes only about 3 s on a Sun IPX workstation, and this time should be reduced to a few tenths of a second on a specialised architecture based on several DSPs that is currently under consideration.
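
A minimal sketch of scanline stereo matching by dynamic programming in its generic optimal-path form (without the paper's declivity segmentation or automatic thresholds):

```python
# The cost of matching left pixel i with right pixel j is their intensity
# difference, and an occlusion penalty is paid whenever a pixel is skipped.
import numpy as np

def dp_scanline_match(left: np.ndarray, right: np.ndarray, occ: float = 20.0) -> np.ndarray:
    """Returns, for each left pixel, the matched right index (-1 if occluded)."""
    n, m = len(left), len(right)
    cost = np.zeros((n + 1, m + 1))
    cost[0, :] = occ * np.arange(m + 1)
    cost[:, 0] = occ * np.arange(n + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = cost[i - 1, j - 1] + abs(float(left[i - 1]) - float(right[j - 1]))
            cost[i, j] = min(match, cost[i - 1, j] + occ, cost[i, j - 1] + occ)
    # Backtrack the optimal path to recover the matches.
    matches = np.full(n, -1, dtype=int)
    i, j = n, m
    while i > 0 and j > 0:
        diff = abs(float(left[i - 1]) - float(right[j - 1]))
        if np.isclose(cost[i, j], cost[i - 1, j - 1] + diff):
            matches[i - 1] = j - 1
            i, j = i - 1, j - 1
        elif np.isclose(cost[i, j], cost[i - 1, j] + occ):
            i -= 1
        else:
            j -= 1
    return matches

left = np.array([10, 10, 200, 10, 10])
right = np.array([10, 200, 10, 10, 10])   # same scene shifted by one pixel
print(dp_scanline_match(left, right))     # the bright pixel (index 2) matches right index 1
```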

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong;An, Kwang-Ho;Sung, Chang-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.4 no.4 / pp.298-304 / 2009
  • This paper describes an algorithm that improves 3D reconstruction results using a multi-sensor fusion disparity map. We can project LRF (Laser Range Finder) 3D points onto image pixel coordinates using the extrinsic calibration matrices of the camera-LRF pair (${\Phi}$, ${\Delta}$) and a camera calibration matrix (K). The LRF disparity map can be generated by interpolating the projected LRF points. In the stereo reconstruction, we can compensate invalid points caused by repeated patterns and textureless regions using the LRF disparity map. The disparity map resulting from this compensation process is the multi-sensor fusion disparity map, which we use to refine the multi-sensor 3D reconstruction based on stereo vision and the LRF. The refinement algorithm for multi-sensor based 3D reconstruction is specified in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested on synchronized stereo image pairs and LRF 3D scan data.
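
A minimal sketch, under assumed calibration values, of the fusion step described above: LRF points are transformed with an extrinsic rotation/translation, projected with the intrinsic matrix K, converted from depth to disparity d = fB/Z, and written into the invalid cells of the stereo disparity map (interpolation of the projected points is omitted here):

```python
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])                # camera intrinsics (assumed)
R, t = np.eye(3), np.array([0.0, -0.1, 0.0])   # camera-LRF extrinsics (assumed)
BASELINE_M = 0.12                              # stereo baseline (assumed)

def fuse_lrf_into_disparity(lrf_xyz: np.ndarray, stereo_disp: np.ndarray) -> np.ndarray:
    cam = lrf_xyz @ R.T + t                    # LRF points in the camera frame
    cam = cam[cam[:, 2] > 0.1]                 # keep points in front of the camera
    uv = (K @ cam.T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    d = K[0, 0] * BASELINE_M / cam[:, 2]       # depth -> disparity
    fused = stereo_disp.copy()
    h, w = fused.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    invalid = fused[v[ok], u[ok]] <= 0         # cells the stereo matcher failed on
    fused[v[ok][invalid], u[ok][invalid]] = d[ok][invalid]
    return fused

stereo_disp = np.zeros((480, 640))             # all-invalid stereo result for the demo
lrf_xyz = np.array([[0.0, 0.0, 2.0], [0.5, 0.2, 3.0]])
print(np.count_nonzero(fuse_lrf_into_disparity(lrf_xyz, stereo_disp)))  # -> 2 filled cells
```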

Fast Stereo Image Processing Method for Obstacle Detection of AGV System (AGV 시스템의 장애물 검출을 위한 고속 스테레오 영상처리 기법)

  • 전성재;조연상;박흥식
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.454-457 / 2004
  • An AGV for an FMS must detect obstacles. Many studies have addressed this task, and ultrasonic sensors have recently been used for it. However, a new method is needed because ultrasonic sensors suffer from factory noise, directional error, and difficulty in estimating obstacle size. We therefore study a fast stereo vision system that can provide richer obstacle information for an intelligent AGV system. A simulated AGV system was built with two CCD cameras mounted at the front to capture stereo images, and a thresholding process based on color information (intensity and chromaticity) and a structural stereo matching method were constructed.
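
A minimal sketch, not the paper's pipeline, of the two stages the abstract names: thresholding on intensity and chromaticity to isolate candidate obstacle pixels, followed by a coarse disparity estimate restricted to those pixels:

```python
import numpy as np

def obstacle_mask(rgb: np.ndarray, int_thresh=60.0, chroma_thresh=0.05) -> np.ndarray:
    """rgb: HxWx3 float image. Chromaticity r = R/(R+G+B); thresholds are illustrative."""
    intensity = rgb.sum(axis=2)
    r_chroma = rgb[:, :, 0] / (intensity + 1e-9)
    return (intensity > int_thresh) & (np.abs(r_chroma - 1.0 / 3.0) > chroma_thresh)

def blob_disparity(mask_left: np.ndarray, mask_right: np.ndarray) -> float:
    """Coarse disparity from the horizontal centroid shift of the obstacle blobs."""
    xl = np.mean(np.nonzero(mask_left)[1])
    xr = np.mean(np.nonzero(mask_right)[1])
    return float(xl - xr)

# Example: a reddish block seen 8 pixels further left in the right image.
left = np.zeros((60, 80, 3)); left[20:40, 40:50] = [120.0, 30.0, 30.0]
right = np.zeros((60, 80, 3)); right[20:40, 32:42] = [120.0, 30.0, 30.0]
print(blob_disparity(obstacle_mask(left), obstacle_mask(right)))  # -> 8.0
```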

Quantity Measurement by CAFFE Model and Distance and Width Measurement by Stereo Vision (CAFFE 모델을 이용한 수량 측정 및 스테레오 비전을 이용한 거리 및 너비측정)

  • Kim, Won-Seob;Kim, Eung-Kon
    • The Journal of the Korea institute of electronic communication sciences / v.14 no.4 / pp.679-684 / 2019
  • We propose a method to count objects of a specific class using a CAFFE model and a method to measure the length and width of an object using stereo vision. To obtain the width of an object, the coordinates of the object in the left and right sensor images are compared and the distance from the sensor to the object is obtained. The actual length of the object is then approximated from that distance and the length of the object in the image.
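
A minimal sketch of the pinhole-stereo formulas implied above (generic geometry, not the paper's code); the focal length and sensor baseline are illustrative values:

```python
# Distance Z = f*B/d from the disparity of the object centre, then real width
# W = w_pixels * Z / f from the object's width in the image.
F_PX = 800.0        # focal length in pixels (assumed)
BASELINE_M = 0.10   # distance between the left and right sensors (assumed)

def object_distance(x_left_px: float, x_right_px: float) -> float:
    return F_PX * BASELINE_M / (x_left_px - x_right_px)

def object_width(width_px: float, distance_m: float) -> float:
    return width_px * distance_m / F_PX

z = object_distance(350.0, 310.0)       # disparity of 40 px -> 2.0 m away
print(z, object_width(120.0, z))        # a 120 px wide object is 0.30 m wide
```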

An Optimal Position and Orientation of Stereo Camera (스테레오 카메라의 최적 위치 및 방향)

  • Choi, Hyeung-Sik;Kim, Hwan-Sung;Shin, Hee-Young;Jung, Sung-Hun
    • Journal of Advanced Navigation Technology / v.17 no.3 / pp.354-360 / 2013
  • A stereo vision analysis was performed for the motion and depth control of unmanned vehicles. In stereo vision, depth information in three-dimensional coordinates can be obtained by triangulation after identifying corresponding points between the stereo images. However, triangulation errors always arise for several reasons. Such errors can be alleviated by careful arrangement of the camera positions and orientations. In this paper, an approach to determining the optimal camera position and orientation for unmanned vehicles is presented.
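
A minimal sketch of a standard parallel-stereo error model (not the paper's analysis) showing why camera arrangement matters: for Z = fB/d, a disparity error of delta_d yields a depth error of roughly Z^2 * delta_d / (f * B), so a longer baseline B reduces the triangulation error:

```python
# Depth-error sensitivity for parallel stereo; f and the disparity error are
# illustrative values.
def depth_error(z_m: float, baseline_m: float, f_px: float = 800.0, disp_err_px: float = 0.5) -> float:
    return z_m ** 2 * disp_err_px / (f_px * baseline_m)

for b in (0.05, 0.10, 0.30):          # candidate baselines in metres
    print(b, depth_error(5.0, b))     # error at 5 m shrinks from ~0.31 m to ~0.05 m as B grows
```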

Collision Avoidance for Indoor Mobile Robotics using Stereo Vision Sensor (스테레오 비전 센서를 이용한 실내 모바일 로봇 충돌 회피)

  • Kwon, Ki-Hyeon;Nam, Si-Byung;Lee, Se-Hun
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.5 / pp.2400-2405 / 2013
  • We detect obstacles for a UGV (unmanned ground vehicle) from a compound image generated by a stereo vision sensor that masks the depth image with the color image. The stereo vision sensor gathers distance information through its stereo camera. The obstacle information from the depth compound image can be sent to the mobile robot, and the robot can localize itself in the indoor area. We test the performance of the mobile robot in terms of the distance between the obstacle and the robot's position, evaluating the color, depth, and compound images respectively, and also measure performance in terms of the number of frames per second processed by the computing machine. The results show that the compound image improves performance in both distance accuracy and frame rate.
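
A minimal sketch, under assumed depth thresholds, of the compound-image idea: colour pixels are kept only where the depth image reports something within an obstacle-relevant range, so later processing runs on a much smaller region of interest:

```python
import numpy as np

def compound_image(color: np.ndarray, depth_m: np.ndarray,
                   near: float = 0.3, far: float = 2.0) -> np.ndarray:
    """color: HxWx3, depth_m: HxW depth in metres; the range limits are illustrative."""
    mask = (depth_m > near) & (depth_m < far)          # candidate obstacle pixels
    return color * mask[:, :, None]                    # zero out everything else

color = np.full((4, 4, 3), 200, dtype=np.uint16)
depth = np.full((4, 4), 5.0); depth[1:3, 1:3] = 1.0    # a small object 1 m away
print(compound_image(color, depth)[:, :, 0])           # non-zero only on the object
```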