• Title/Abstract/Keyword: Vision Systems

Search results: 1,726 items (processing time: 0.026 seconds)

비전 센서를 이용한 유연한 로봇팔의 끝점 위치 측정 (The Tip Position Measurement of a Flexible Robot Arm Using a Vision Sensor)

  • 신효필;이종광;강이석
    • 제어로봇시스템학회논문지
    • /
    • Vol. 6, No. 8
    • /
    • pp.682-688
    • /
    • 2000
  • To improve the performance of a flexible robot arm, one of the important things is the measurement of the vibration displacement of the flexible arm. Many types of sensors have been used to measure it; the most popular has been the strain gauge, which measures the deflection of the beam. Photo sensors have also been used for detecting beam displacement, and accelerometers are often used to measure the beam vibration. However, the vibration displacement can only be obtained indirectly from these sensors. In this article, a vision sensor is used as a displacement sensor to measure the vibration displacement of a flexible robot arm. Several schemes are proposed to reduce the image processing time and increase its accuracy. The experimental results show that the vision sensor can be an alternative sensor for measuring the vibration displacement and has potential for on-line tip position control of flexible robot systems.

  • PDF
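The abstract treats the vision sensor as a displacement sensor for the arm tip but does not spell out the image-processing steps. Purely as an illustration of that idea, the sketch below tracks a bright marker assumed to be attached to the tip and converts its pixel motion to millimetres with an assumed scale factor; the marker, threshold, and scale values are not from the paper.

```python
import cv2
import numpy as np

# Assumed scale factor (mm per pixel); in practice it comes from calibrating
# the camera against a known length in the image plane.
MM_PER_PIXEL = 0.25

def marker_centroid(frame_gray, threshold=200):
    """Return the centroid (x, y) of a bright tip marker, or None if not found."""
    _, mask = cv2.threshold(frame_gray, threshold, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def tip_displacement(frame_gray, reference_xy):
    """Displacement of the tip marker (in mm) relative to a reference position."""
    c = marker_centroid(frame_gray)
    if c is None:
        return None
    return ((c[0] - reference_xy[0]) * MM_PER_PIXEL,
            (c[1] - reference_xy[1]) * MM_PER_PIXEL)

# Toy usage with two synthetic frames: the "marker" moves 8 pixels to the right.
frame0 = np.zeros((240, 320), np.uint8)
frame1 = np.zeros((240, 320), np.uint8)
cv2.circle(frame0, (160, 120), 5, 255, -1)
cv2.circle(frame1, (168, 120), 5, 255, -1)
ref = marker_centroid(frame0)
print(tip_displacement(frame1, ref))   # approximately (2.0 mm, 0.0 mm)
```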

비전 및 IMU 센서의 정보융합을 이용한 자율주행 자동차의 횡방향 제어시스템 개발 및 실차 실험 (Development of a Lateral Control System for Autonomous Vehicles Using Data Fusion of Vision and IMU Sensors with Field Tests)

  • 박은성;유창호;최재원
    • 제어로봇시스템학회논문지
    • /
    • Vol. 21, No. 3
    • /
    • pp.179-186
    • /
    • 2015
  • In this paper, a novel lateral control system is proposed for the purpose of improving lane-keeping performance independently of GPS signals. Lane keeping is a key function for the realization of unmanned driving systems. To achieve this objective, a vision-sensor-based real-time lane detection scheme is developed. Furthermore, we employ data fusion with the real-time steering angle of the test vehicle to improve its lane-keeping performance. The fused direction data are obtained from an IMU sensor and a vision sensor. The performance of the proposed system was verified by computer simulations along with field tests using MOHAVE, a commercial vehicle from Kia Motors of Korea.
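The abstract reports fusing direction information from the vision sensor and the IMU, but the exact fusion law is not given. As a rough illustration only, a complementary filter is one common way to blend a drift-free but noisy vision heading with a fast but drifting IMU yaw rate; the weights and signal names below are assumptions, not values from the paper.

```python
import numpy as np

def complementary_fusion(vision_heading, imu_yaw_rate, dt, alpha=0.98):
    """Blend an IMU-integrated heading (fast, drifting) with a vision heading (noisy, absolute).

    vision_heading : array of headings from lane detection [rad]
    imu_yaw_rate   : array of yaw rates from the IMU [rad/s]
    dt             : sample period [s]
    alpha          : weight on the IMU path (assumed, not from the paper)
    """
    fused = np.zeros_like(vision_heading)
    fused[0] = vision_heading[0]
    for k in range(1, len(fused)):
        imu_prediction = fused[k - 1] + imu_yaw_rate[k] * dt
        fused[k] = alpha * imu_prediction + (1.0 - alpha) * vision_heading[k]
    return fused

# Toy usage with synthetic signals.
t = np.arange(0, 10, 0.01)
true_heading = 0.05 * np.sin(0.5 * t)
vision = true_heading + 0.01 * np.random.randn(t.size)   # noisy but unbiased measurement
imu_rate = np.gradient(true_heading, t) + 0.002           # rate gyro with a small bias
print(complementary_fusion(vision, imu_rate, 0.01)[-5:])
```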

Vision-based remote 6-DOF structural displacement monitoring system using a unique marker

  • Jeon, Haemin;Kim, Youngjae;Lee, Donghwa;Myung, Hyun
    • Smart Structures and Systems
    • /
    • Vol. 13, No. 6
    • /
    • pp.927-942
    • /
    • 2014
  • Structural displacement is an important indicator for assessing structural safety. For structural displacement monitoring, vision-based displacement measurement systems have been widely developed; however, most systems estimate only 1- or 2-DOF translational displacement. To monitor 6-DOF structural displacement with high accuracy, a vision-based displacement measurement system with a uniquely designed marker is proposed in this paper. The system is composed of the uniquely designed marker and a camera with zooming capability, and the relative translational and rotational displacement between the marker and the camera is estimated by finding a homography transformation. The novel marker is designed, based on a sensitivity analysis of the conventional marker, to make the system robust to measurement noise, and this has been verified through Monte Carlo simulations. The performance of the displacement estimation has been verified through two kinds of experimental tests: one using a shaking table and the other a motorized stage. The results show that the system estimates the structural 6-DOF displacement, especially the translational displacement along the Z-axis, with high accuracy in real time and is robust to measurement noise.
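The core computation named in the abstract, recovering relative translation and rotation from a homography between a planar marker and the camera, can be sketched in a few lines with OpenCV. The marker corners, image points, and camera matrix below are placeholders; the paper's own marker design and noise-robust estimation are not reproduced here.

```python
import cv2
import numpy as np

# Assumed camera intrinsics (focal lengths in pixels, principal point).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

# Hypothetical planar marker corners in the marker plane (mm) and their
# detected image locations (pixels); these would come from marker detection.
marker_pts = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=np.float64)
image_pts = np.array([[612, 341], [705, 338], [710, 428], [615, 432]], dtype=np.float64)

# Homography mapping the marker plane to the image plane.
H, _ = cv2.findHomography(marker_pts, image_pts)

# Decompose into candidate rotations/translations of the camera w.r.t. the plane.
# OpenCV returns up to four solutions; picking the physically valid one needs
# extra constraints (e.g., all points in front of the camera).
num, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
for i in range(num):
    rvec, _ = cv2.Rodrigues(rotations[i])
    print("candidate", i,
          "rotation (rad):", rvec.ravel(),
          "translation (up to scale):", translations[i].ravel())
```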

영상처리를 이용한 Mark 판독 기법에 관한 연구 (A Study on the Mark Reader Using the Image Processing)

  • 김승호;김범진;이용구;노도환
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2000년도 제15차 학술회의논문집
    • /
    • pp.83-83
    • /
    • 2000
  • Recently, vision systems have come into use throughout industry. Conventional sensor systems used for mark readers, for example optical scanning with proximity sensors, have many disadvantages, such as a poor user interface and difficulty in storing the original specimens. In contrast, a vision system for a mark reader has many advantages, including easy conversion to other tasks, high accuracy, and high speed. In this work, we have researched the development of a mark reader using a vision system. The processing pipeline of this system consists of image pre-processing such as noise reduction, edge detection, and threshold processing. We then carry out camera calibration to calibrate the images acquired from the camera. After searching for reference points within the scanning area (60 pixel × 30 pixel), we calculate crossing points using line equations and determine each ROI (region of interest), which is defined by four points. Next, we convert absolute coordinates into relative coordinates to analyze the translation component. Finally, we carry out mark reading with images classified into six patterns. Experiments following the proposed algorithm give an error within 0.5% over the total image.

  • PDF
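The abstract walks through preprocessing, calibration, reference-point search, ROI construction from four crossing points, and finally reading each mark. The sketch below compresses just the last step, deciding whether an ROI is filled in; the Otsu binarization and the fill-ratio threshold are assumptions for illustration, not values from the paper.

```python
import cv2
import numpy as np

def read_marks(gray, rois, fill_ratio=0.4):
    """Decide which answer boxes are marked.

    gray : grayscale answer-sheet image
    rois : list of (x, y, w, h) regions, e.g. derived from crossing points of grid lines
    """
    # Binarize so that pencil marks become foreground (value 255).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    results = []
    for (x, y, w, h) in rois:
        cell = binary[y:y + h, x:x + w]
        ratio = cv2.countNonZero(cell) / float(w * h)
        results.append(ratio > fill_ratio)   # True means "marked"
    return results

# Toy usage: a synthetic sheet with one filled box out of two.
sheet = np.full((200, 200), 255, np.uint8)
cv2.rectangle(sheet, (20, 20), (60, 50), 0, -1)   # filled mark
rois = [(20, 20, 40, 30), (100, 20, 40, 30)]
print(read_marks(sheet, rois))                     # [True, False]
```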

조명과 해상도에 강인한 자동 결함 검사를 위한 향상된 히스토그램 정합 방법 (An Enhanced Histogram Matching Method for Automatic Visual Defect Inspection robust to Illumination and Resolution)

  • 강수민;박세혁;허경무
    • 제어로봇시스템학회논문지
    • /
    • Vol. 20, No. 10
    • /
    • pp.1030-1035
    • /
    • 2014
  • Machine vision inspection systems have replaced human inspectors in defect inspection fields for several decades. However, the inspection results of machine vision are often affected by small changes of illumination. When small changes of illumination appear in image histograms, the influence of illumination can be decreased by transformation of the histogram. In this paper, we propose an enhanced histogram matching algorithm which corrects histograms distorted by variations of illumination. We use a resolution resizing method for optimal matching of the input and reference histograms and for reduction of quantization errors from the digitizing process. The proposed algorithm aims not only at improving the accuracy of defect detection, but also at robustness against variations of illumination in machine vision inspection. The experimental results show that the proposed method maintains uniform inspection error rates under dramatic illumination changes, whereas the conventional inspection method gives inconsistent inspection results under the same illumination conditions.
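The abstract describes correcting illumination-distorted histograms against a reference histogram, with a resolution-resizing step to reduce quantization error. The snippet below shows only the generic histogram-matching part (via cumulative distribution functions); the resizing strategy and the defect-detection stage of the paper are not reproduced, and the function names are illustrative only.

```python
import numpy as np

def match_histogram(image, reference):
    """Map the gray levels of `image` so its histogram approximates that of `reference`.

    Both inputs are 8-bit grayscale arrays. The mapping is built by matching
    the cumulative distribution functions (CDFs) of the two images.
    """
    src_hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    ref_hist, _ = np.histogram(reference.ravel(), bins=256, range=(0, 256))
    src_cdf = np.cumsum(src_hist) / image.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # For each source gray level, find the reference level with the closest CDF value.
    lut = np.interp(src_cdf, ref_cdf, np.arange(256))
    return lut[image].astype(np.uint8)

# Toy usage: a dark image matched to a brighter reference.
rng = np.random.default_rng(0)
dark = rng.integers(0, 100, (64, 64), dtype=np.uint8)
bright = rng.integers(100, 256, (64, 64), dtype=np.uint8)
corrected = match_histogram(dark, bright)
print(dark.mean(), corrected.mean())   # the corrected mean moves toward the reference
```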

An embedded vision system based on an analog VLSI Optical Flow vision sensor

  • Becanovic, Vlatako;Matsuo, Takayuki;Stocker, Alan A.
    • 한국정보기술응용학회:학술대회논문집
    • /
    • 한국정보기술응용학회 2005년도 6th 2005 International Conference on Computers, Communications and System
    • /
    • pp.285-288
    • /
    • 2005
  • We propose a novel programmable miniature vision module based on a custom-designed analog VLSI (aVLSI) chip. The vision module consists of the optical flow vision sensor embedded with commercial off-the-shelf digital hardware, in our case an Intel XScale PXA270 processor combined with a programmable gate array device. The aVLSI sensor provides gray-scale image data as well as smooth optical flow estimates; thus each pixel gives a triplet of information that can be continuously read out as three independent images. The particular computational architecture of the custom-designed sensor, which is fully parallel and also analog, allows for efficient real-time estimation of the smooth optical flow. The Intel XScale PXA270 controls the sensor read-out and, together with the programmable gate array, allows for additional higher-level processing of the intensity image and optical flow data. It also provides the necessary standard interfaces such that the module can be easily programmed and integrated into different vision systems, or even form a complete stand-alone vision system itself. The low power consumption, small size, and flexible interface of the proposed vision module suggest that it could be particularly well suited as the vision system of an autonomous robotics platform and for educational projects in the robotic sciences.

  • PDF
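The abstract notes that each pixel delivers a triplet (gray-scale intensity plus the two optical-flow components) that can be read out as three independent images. Purely as an illustration of such a read-out on the host side, the snippet below de-interleaves a hypothetical raw frame buffer; the actual register interface, resolution, and data format of the PXA270-based module are not given in the abstract and are not assumed here.

```python
import numpy as np

# Hypothetical frame geometry; the real sensor resolution is not stated in the abstract.
WIDTH, HEIGHT = 30, 30

def split_triplet_frame(raw):
    """Split an interleaved (intensity, u, v) frame buffer into three images.

    raw : 1-D array of length WIDTH*HEIGHT*3, ordered pixel by pixel as
          [I0, u0, v0, I1, u1, v1, ...] -- an assumed layout for illustration.
    """
    frame = np.asarray(raw).reshape(HEIGHT, WIDTH, 3)
    intensity = frame[:, :, 0]
    flow_u = frame[:, :, 1]
    flow_v = frame[:, :, 2]
    return intensity, flow_u, flow_v

# Toy usage with random data standing in for one sensor read-out.
raw = np.random.rand(WIDTH * HEIGHT * 3)
intensity, u, v = split_triplet_frame(raw)
print(intensity.shape, u.shape, v.shape)   # (30, 30) each
```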

소형 머신 비전 검사 장비에 기반한 O링 치수 측정 (O-ring Size Measurement Based on a Small Machine Vision Inspection Equipment)

  • 정유수;박길흠
    • 한국산업정보학회논문지
    • /
    • Vol. 19, No. 4
    • /
    • pp.41-52
    • /
    • 2014
  • This paper proposes an algorithm for measuring the inner and outer diameters of O-ring parts, based on a small machine vision inspection system that can replace expensive medium- and large-scale machine vision equipment for O-ring size measurement. Images acquired from the measurement plane by the small machine vision inspection equipment, which uses a single CCD camera under backlight illumination, are processed with the proposed image processing algorithm to measure the outer and inner diameters of the O-ring. To increase the accuracy of the size measurement, lens distortion and perspective distortion are corrected in software, an ellipse-fitting model is applied in consideration of the O-ring shape, and the RANSAC algorithm is applied to further improve the reliability of the ellipse fitting.
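The abstract above names lens and perspective correction followed by ellipse fitting with RANSAC. The sketch below illustrates only the ellipse-fitting stage using scikit-image's EllipseModel with RANSAC; the distortion-correction step, the camera parameters, and the pixel-to-millimetre scale are omitted or assumed, and the thresholds are illustrative.

```python
import numpy as np
from skimage.measure import EllipseModel, ransac

def fit_oring_edge(edge_points, mm_per_pixel=0.05):
    """Fit an ellipse to edge points of an O-ring boundary and return its diameters in mm.

    edge_points  : (N, 2) array of (x, y) pixel coordinates on one boundary
    mm_per_pixel : assumed scale factor from camera calibration
    """
    model, inliers = ransac(edge_points, EllipseModel,
                            min_samples=5, residual_threshold=1.0, max_trials=500)
    xc, yc, a, b, theta = model.params
    return 2 * a * mm_per_pixel, 2 * b * mm_per_pixel, int(inliers.sum())

# Toy usage: noisy points on a circle of radius 100 px, plus a few outliers.
t = np.linspace(0, 2 * np.pi, 200)
pts = np.c_[100 * np.cos(t) + 320, 100 * np.sin(t) + 240] + np.random.randn(200, 2)
pts[:5] += 40                                   # simulated outlier points
print(fit_oring_edge(pts))                      # roughly (10 mm, 10 mm, inlier count)
```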

LabVIEW의 Machine Vision을 이용한 웨이블릿 기반 지능형 이미지 Watermarking (Wavelet Based Intelligence image Watermarking Using Machine Vision of LabVIEW)

  • 송윤재;강두영;김형권;안태천
    • 한국지능시스템학회:학술대회논문집
    • /
    • 한국퍼지및지능시스템학회 2004년도 추계학술대회 학술발표 논문집 제14권 제2호
    • /
    • pp.521-524
    • /
    • 2004
  • With the recent development of multimedia technology and the spread of the Internet, the ease with which digital data can be copied has made the protection of authors' ownership and authentication an important issue. Accordingly, research on embedding watermarks in digital data to protect ownership and guarantee data integrity is being actively conducted. In this paper, the digital image is transformed into the frequency domain, and for effective watermark embedding, frequency regions where human perception is weak as well as perceptually important frequency regions are selected. A method is then presented that embeds the watermark repeatedly over the whole image and adaptively according to its content. For the transformation into the frequency domain, the wavelet transform is chosen, which provides three directional components (vertical, horizontal, and diagonal) and multi-resolution characteristics. The wavelet-based image watermarking method is implemented as an intelligent watermark using the Machine Vision tools of LabVIEW.

  • PDF
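The abstract above describes embedding an adaptive watermark in selected wavelet sub-bands. The fragment below is a generic sketch of additive watermark embedding in the diagonal detail band of a one-level 2-D DWT using PyWavelets; the sub-band selection, the adaptivity, and the LabVIEW Machine Vision implementation from the paper are not reproduced, and the embedding strength is an assumed value.

```python
import numpy as np
import pywt

def embed_watermark(image, watermark_bits, strength=4.0):
    """Additively embed a bipolar watermark into the HH band of a 1-level DWT.

    image          : 2-D float array (grayscale)
    watermark_bits : 1-D array of 0/1 bits; mapped to -1/+1 before embedding
    strength       : embedding gain (assumed, not from the paper)
    """
    LL, (LH, HL, HH) = pywt.dwt2(image, "haar")
    flat = HH.ravel().copy()
    bipolar = 2.0 * np.asarray(watermark_bits, dtype=float) - 1.0
    flat[: bipolar.size] += strength * bipolar          # spread the bits over HH coefficients
    HH_marked = flat.reshape(HH.shape)
    return pywt.idwt2((LL, (LH, HL, HH_marked)), "haar")

# Toy usage on a random "image".
img = np.random.rand(64, 64) * 255
bits = np.random.randint(0, 2, 128)
marked = embed_watermark(img, bits)
print(np.abs(marked - img).mean())   # small average distortion
```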

Trinocular Vision System을 이용한 물체 자세정보 인식 향상방안 (A Study on the Improvement of Pose Information of Objects by Using Trinocular Vision System)

  • 김종형;장경재;권혁동
    • 한국생산제조학회지
    • /
    • Vol. 26, No. 2
    • /
    • pp.223-229
    • /
    • 2017
  • Recently, robotic bin-picking tasks have drawn considerable attention, because flexibility is required in robotic assembly tasks. Generally, stereo camera systems have been widely used for robotic bin-picking, but they have two limitations: First, the computational burden of solving the correspondence problem on stereo images increases the calculation time. Second, errors in image processing and camera calibration reduce accuracy. Moreover, errors in the robot kinematic parameters directly affect robot gripping. In this paper, we propose a method of correcting the bin-picking error by using a trinocular vision system which consists of two stereo cameras and one hand-eye camera. First, the two stereo cameras, with a wide viewing angle, measure the object's pose roughly. Then, the third hand-eye camera approaches the object and corrects the previous measurement of the stereo camera system. Experimental results show the usefulness of the proposed method.
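The abstract describes a coarse pose estimate from the wide-angle stereo pair being corrected by the hand-eye camera once the robot has approached the object. The snippet below only illustrates the composition step of such a coarse-to-fine scheme with 4x4 homogeneous transforms; the detection, calibration, and correction models of the paper are not shown, and all matrices here are placeholders.

```python
import numpy as np

def refine_pose(T_base_obj_coarse, T_obj_correction):
    """Apply a hand-eye correction (expressed in the coarse object frame) to a coarse pose.

    Both arguments are 4x4 homogeneous transforms; the result is the refined
    pose of the object in the robot base frame.
    """
    return T_base_obj_coarse @ T_obj_correction

def make_transform(rotation_z_rad, translation_xyz):
    """Small helper building a 4x4 transform from a z-rotation and a translation."""
    c, s = np.cos(rotation_z_rad), np.sin(rotation_z_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = translation_xyz
    return T

# Placeholder coarse pose from stereo and a small correction seen by the hand-eye camera.
T_coarse = make_transform(0.50, [400.0, 120.0, 30.0])   # mm, rad (illustrative values)
T_corr = make_transform(0.03, [2.0, -1.5, 0.5])         # small residual error
print(refine_pose(T_coarse, T_corr))
```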

Vision-based multipoint measurement systems for structural in-plane and out-of-plane movements including twisting rotation

  • Lee, Jong-Han;Jung, Chi-Young;Choi, Eunsoo;Cheung, Jin-Hwan
    • Smart Structures and Systems
    • /
    • Vol. 20, No. 5
    • /
    • pp.563-572
    • /
    • 2017
  • The safety of structures is closely associated with structural out-of-plane behavior. In particular, long and slender beam structures have been increasingly used in design and construction. Therefore, an evaluation of the lateral and torsional behavior of a structure is important for its safety during construction as well as under service conditions. The current contact measurement method using displacement meters cannot measure independent movements directly and also requires caution when installing the displacement meters. In this study, therefore, a vision-based system was used to measure the in-plane and out-of-plane displacements of a structure. The image processing algorithm was based on reference objects, including multiple targets in Lab color space. The captured targets were synchronized using a load indicator connected wirelessly to a data logger system in the server. A laboratory beam test was carried out to compare the displacements and rotation obtained from the proposed vision-based measurement system with those from the current measurement method using string potentiometers. The test results showed that the proposed vision-based measurement system could be applied successfully and easily to evaluating both the in-plane and out-of-plane movements of a beam, including twisting rotation.
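The abstract mentions detecting multiple colored targets in Lab color space and deriving in-plane and out-of-plane movements, including twist. The sketch below illustrates just the target-detection part (thresholding the chroma channel and taking centroids) plus an in-plane twist angle from two target centroids; the color threshold, target color, and the full 3-D reconstruction are assumptions, not details from the paper.

```python
import cv2
import numpy as np

def find_red_targets(bgr_image, min_area=20):
    """Return centroids of reddish targets detected in Lab color space."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    a_channel = lab[:, :, 1]
    # Red/magenta hues have high 'a' values; 160 is an assumed threshold.
    _, mask = cv2.threshold(a_channel, 160, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] >= min_area:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

def in_plane_twist(p0, p1):
    """Angle (degrees) of the line through two target centroids, used as an in-plane twist measure."""
    return np.degrees(np.arctan2(p1[1] - p0[1], p1[0] - p0[0]))

# Toy usage: two red dots on a white background.
img = np.full((240, 320, 3), 255, np.uint8)
cv2.circle(img, (100, 120), 8, (0, 0, 255), -1)
cv2.circle(img, (220, 140), 8, (0, 0, 255), -1)
targets = sorted(find_red_targets(img))
print(targets, in_plane_twist(*targets))
```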