• Title/Abstract/Keyword: Vision Systems

Search results: 1,716 items (processing time: 0.022 s)

비전시스템 기반 군집주행 이동로봇들의 삼차원 위치 및 자세 추정 (Three-Dimensional Pose Estimation of Neighbor Mobile Robots in Formation System Based on the Vision System)

  • 권지욱;박문수;좌동경;홍석교
    • 제어로봇시스템학회논문지 / Vol. 15, No. 12 / pp. 1223-1231 / 2009
  • We derive a systematic, iterative calibration algorithm and a position and pose estimation algorithm for mobile robots in a formation system based on the vision system. In addition, we develop a coordinate matching algorithm that computes the matching order between extracted image coordinates and object coordinates for non-interactive calibration and pose estimation. Based on the calibration results, we also develop a camera simulator to confirm the calibration and compare simulation results with experimental results for position and pose estimation.
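Calibration and pose estimation of this kind ultimately rest on the pinhole camera model. As a minimal illustration (the focal length and principal point below are hypothetical, not the paper's calibration results), projecting a 3D point in camera coordinates onto the image plane looks like:

```python
def project_point(point_3d, focal, cx, cy):
    """Project a 3D point in camera coordinates onto the image plane
    using a basic pinhole model (hypothetical parameters)."""
    X, Y, Z = point_3d
    u = focal * X / Z + cx
    v = focal * Y / Z + cy
    return u, v

# A point 2 m in front of a camera with f = 500 px, principal point (320, 240)
u, v = project_point((0.4, 0.1, 2.0), focal=500.0, cx=320.0, cy=240.0)
print(u, v)  # 420.0 265.0
```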

머신 비전을 이용한 ALC 블록 생산공정의 자동 측정 시스템 개발 (Development of Automatic ALC Block Measurement System Using Machine Vision)

  • 엄주진;허경무
    • 제어로봇시스템학회논문지 / Vol. 10, No. 6 / pp. 494-500 / 2004
  • This paper presents a machine vision system that inspects the measurements of ALC blocks in real time during the production process. The automatic measurement system was built from a CCD camera, an image grabber, and a personal computer, without pre-assembled measurement equipment. Images obtained by this system were processed by an algorithm specially designed for enhanced measurement accuracy. To realize the proposed algorithm, we developed a preprocessing method that overcomes uneven lighting, a boundary decision method, a unit-length decision method that remains valid with rocking objects under uneven conditions, and a region projection using pixel summation. Our experimental results show that the proposed method sufficiently satisfies the required measurement accuracy specification.
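The projection-by-pixel-summation step mentioned in the abstract can be sketched with a toy example; the image, threshold, and width rule here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def column_projection(binary_image):
    """Sum pixels down each column: columns covered by the object
    give high sums, background columns give low sums."""
    return binary_image.sum(axis=0)

def object_width(binary_image, threshold):
    """Estimate the object width in pixels as the span of columns
    whose projection exceeds the threshold."""
    proj = column_projection(binary_image)
    cols = np.where(proj > threshold)[0]
    if cols.size == 0:
        return 0
    return int(cols[-1] - cols[0] + 1)

# Toy 5x8 binary image with an object spanning columns 2..5
img = np.zeros((5, 8), dtype=int)
img[1:4, 2:6] = 1
print(object_width(img, threshold=1))  # 4
```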

3D Vision-based Security Monitoring for Railroad Stations

  • Park, Young-Tae;Lee, Dae-Ho
    • Journal of the Optical Society of Korea / Vol. 14, No. 4 / pp. 451-457 / 2010
  • Increasing demands on the safety of public train services have led to the development of various types of security monitoring systems. Most surveillance systems focus on estimating the crowd level on the platform, thereby yielding too many false alarms. In this paper, we present a novel security monitoring system that detects critically dangerous situations, such as a passenger falling from the station platform or walking on the rail tracks. The method detects dangerous situations in two stages: objects falling into the dangerous zone are detected by motion tracking, and 3D depth information retrieved by stereo vision is used to confirm fall events. Experimental results show virtually no false positives or false negatives, indicating highly reliable detection performance. Since stereo matching is performed on a local image only when a potentially dangerous situation is found, real-time operation is feasible without dedicated hardware.
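The depth-confirmation stage relies on standard stereo triangulation, Z = fB/d. A minimal sketch with hypothetical camera parameters and a hypothetical track-depth threshold (not values from the paper):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard stereo triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# An object seen at disparity 25 px with f = 700 px and a 0.5 m baseline
z = depth_from_disparity(700.0, 0.5, 25.0)
print(z)  # 14.0 (metres)

# Illustrative decision rule: flag a fall event if the object's depth
# matches the known depth of the track bed (hypothetical threshold)
TRACK_DEPTH_M = 14.0
is_on_tracks = abs(z - TRACK_DEPTH_M) < 0.5
```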

인서트 자동검사를 위한 시각인식 알고리즘 (A Machine Vision Algorithm for the Automatic Inspection of Inserts)

  • 이문규;신승호
    • 제어로봇시스템학회논문지 / Vol. 4, No. 6 / pp. 795-801 / 1998
  • In this paper, we propose a machine vision algorithm for inspecting inserts used in milling and turning operations. The major defects of inserts are breakages and cracks on their surfaces. Among these defects, breakages on the face of the inserts can be detected through the three stages of the algorithm developed in this paper. In the first stage, a multi-layer perceptron recognizes the insert being inspected. Edge detection of the insert image is performed in the second stage. Finally, in the third stage, breakages on the insert face are identified using the Hough transform. The overall algorithm was tested on real specimens, and the results show that it works fairly well.
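The Hough-transform stage can be illustrated in miniature: edge points vote for candidate lines in (theta, rho) space, and a strong peak indicates an intact straight edge. A self-contained toy version (not the paper's implementation):

```python
import math

def hough_peak(points, n_theta=180, rho_max=50.0):
    """Tiny Hough transform: each edge point votes for every line
    (theta, rho) passing through it; the strongest bin is the
    dominant straight edge."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.radians(t)
            rho = x * math.cos(theta) + y * math.sin(theta)
            r_bin = int(round(rho + rho_max))  # 1-pixel rho bins
            acc[(t, r_bin)] = acc.get((t, r_bin), 0) + 1
    (t_best, r_bin), votes = max(acc.items(), key=lambda kv: kv[1])
    return t_best, float(r_bin - rho_max), votes

# Edge points along the vertical line x = 10: peak at theta = 0, rho = 10
pts = [(10, y) for y in range(20)]
theta_deg, rho, votes = hough_peak(pts)
print(theta_deg, rho, votes)  # 0 10.0 20
```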


기하학적 패턴 매칭을 이용한 3차원 비전 검사 알고리즘 (3D Vision Inspection Algorithm using Geometrical Pattern Matching Method)

  • 정철진;허경무;김장기
    • 제어로봇시스템학회논문지 / Vol. 10, No. 1 / pp. 54-59 / 2004
  • We suggest a 3D vision inspection algorithm based on external shape features. Because many electronic parts have regular shapes, if we maintain a pattern database and can recognize an object using its pattern, we can inspect many types of electronic parts. Our proposed algorithm uses a geometrical pattern matching method and a 3D database of electronic parts. We applied the suggested algorithm to inspect several objects, including a typical IC and a capacitor. Through the experiments, we found that our algorithm is more effective and more robust to the inspection environment (rotation angle, light source, etc.) than conventional 2D inspection methods. We also compared the suggested algorithm with the feature space trajectory method.
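One way to see how a geometric signature makes matching rotation-invariant is to compare sorted side lengths of a part outline. This is a much-simplified stand-in for the paper's geometrical pattern matching, with hypothetical part outlines:

```python
import math

def shape_signature(corners):
    """Rotation-invariant signature: the sorted side lengths of the
    polygon outline (a toy substitute for geometrical pattern
    matching against a part database)."""
    n = len(corners)
    sides = []
    for i in range(n):
        (x1, y1), (x2, y2) = corners[i], corners[(i + 1) % n]
        sides.append(math.hypot(x2 - x1, y2 - y1))
    return sorted(round(s, 3) for s in sides)

ic_template = [(0, 0), (10, 0), (10, 4), (0, 4)]      # 10 x 4 package
rotated_part = [(0, 0), (0, 10), (-4, 10), (-4, 0)]   # same part, rotated 90 deg
print(shape_signature(ic_template) == shape_signature(rotated_part))  # True
```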

Real-time Omni-directional Distance Measurement with Active Panoramic Vision

  • Yi, Soo-Yeong;Choi, Byoung-Wook;Ahuja, Narendra
    • International Journal of Control, Automation, and Systems / Vol. 5, No. 2 / pp. 184-191 / 2007
  • Autonomous navigation of a mobile robot requires a ranging system to measure distances to environmental objects. Clearly, wider and faster distance measurement gives a mobile robot more freedom in trajectory planning and control. The active omni-directional ranging system proposed in this paper can obtain distances for all 360° of directions in real time, thanks to its omni-directional mirror and structured light. Distance computation, including a sensitivity analysis, and experiments on omni-directional ranging are presented to verify the effectiveness of the proposed system.
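The ranging principle combines a structured light plane with triangulation. A sketch of the basic geometry, using illustrative numbers rather than the paper's exact mirror model:

```python
import math

def range_from_elevation(h_laser_m, elevation_rad):
    """Structured-light triangulation sketch: a horizontal laser plane
    a vertical distance h below the viewpoint, seen at depression
    angle alpha, is intersected at horizontal range h / tan(alpha)."""
    return h_laser_m / math.tan(elevation_rad)

# A laser stripe seen at a 26.57 degree depression angle, plane 0.5 m below
r = range_from_elevation(0.5, math.radians(26.57))
print(round(r, 2))  # about 1.0 (metres)
```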

Vision-Based Indoor Localization Using Artificial Landmarks and Natural Features on the Ceiling with Optical Flow and a Kalman Filter

  • Rusdinar, Angga;Kim, Sungshin
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 13, No. 2 / pp. 133-139 / 2013
  • This paper proposes a vision-based indoor localization method for autonomous vehicles. A single upward-facing digital camera was mounted on an autonomous vehicle and used as a vision sensor to identify artificial landmarks and natural corner features on the ceiling. An interest point detector was used to find the natural features. Using an optical flow detection algorithm, information on the direction and translation of the vehicle was obtained and used to track its movements. Random noise caused by uneven lighting disrupted the calculation of the vehicle translation, so a Kalman filter was used to estimate the vehicle position. These algorithms were tested on a vehicle in a real environment. The image processing method recognized the landmarks precisely, while the Kalman filter estimated the vehicle's position accurately. The experimental results confirm that the proposed approaches can be implemented in practical situations.
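The Kalman filtering step for smoothing noisy translation estimates can be sketched with a minimal one-dimensional filter; the noise parameters and measurements below are made up for illustration and are not the paper's values:

```python
def kalman_1d(z_measurements, q=0.01, r=0.5, x0=0.0, p0=1.0):
    """Minimal 1D Kalman filter smoothing noisy translation
    measurements (process noise q, measurement noise r)."""
    x, p = x0, p0
    estimates = []
    for z in z_measurements:
        # Predict (constant-position model)
        p = p + q
        # Update with the new measurement
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

noisy = [1.2, 0.8, 1.1, 0.9, 1.05, 0.95]
est = kalman_1d(noisy)
print(round(est[-1], 2))  # converges toward the true value 1.0
```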

KUVE (KIST 무인 주행 전기 자동차)의 자율 주행 (Autonomous Navigation of KUVE (KIST Unmanned Vehicle Electric))

  • 전창묵;서승범;이상훈;노치원;강성철;강연식
    • 제어로봇시스템학회논문지 / Vol. 16, No. 7 / pp. 617-624 / 2010
  • This article describes the system architecture of KUVE (KIST Unmanned Vehicle Electric) and its autonomous navigation at KIST. KUVE, an electric light-duty vehicle, is equipped with two laser range finders, a vision camera, a differential GPS system, an inertial measurement unit, odometers, and control computers for autonomous navigation. KUVE estimates and tracks road boundaries such as curbs and lane lines using a laser range finder and a vision camera. When no road boundary is detectable, it follows a predetermined trajectory using the DGPS, IMU, and odometers. KUVE achieves an autonomous navigation success rate of over 80% at KIST.

레이더와 비전센서 융합을 통한 전방 차량 인식 알고리즘 개발 (Radar and Vision Sensor Fusion for Primary Vehicle Detection)

  • 양승한;송봉섭;엄재용
    • 제어로봇시스템학회논문지 / Vol. 16, No. 7 / pp. 639-645 / 2010
  • This paper presents a sensor fusion algorithm that recognizes the primary vehicle by fusing radar and monocular vision data. In general, most commercial radars may lose track of the primary vehicle, i.e., the closest preceding vehicle in the same lane, when it stops or travels alongside other preceding vehicles in an adjacent lane at similar velocity and range. To mitigate this performance degradation of the radar, vehicle detection information from the vision sensor and the path predicted from ego-vehicle sensors are combined for target classification. The target classification then works with probabilistic association filters to track the primary vehicle. Finally, the performance of the proposed sensor fusion algorithm is validated using field test data from a highway.
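The target-classification idea, picking the closest radar track that vision confirms to be in the ego lane, can be sketched as follows. The data layout and lane labels are hypothetical, and the paper's probabilistic association filtering is omitted:

```python
def select_primary(radar_tracks, vision_in_lane_ids):
    """Pick the primary vehicle: the closest radar track whose id the
    vision sensor classifies as being in the ego lane (hypothetical
    data layout, for illustration only)."""
    candidates = [t for t in radar_tracks if t["id"] in vision_in_lane_ids]
    if not candidates:
        return None
    return min(candidates, key=lambda t: t["range_m"])

radar_tracks = [
    {"id": 1, "range_m": 35.0},   # adjacent lane
    {"id": 2, "range_m": 42.0},   # ego lane
    {"id": 3, "range_m": 60.0},   # ego lane
]
primary = select_primary(radar_tracks, vision_in_lane_ids={2, 3})
print(primary["id"])  # 2
```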

반도체 웨이퍼 고속 검사를 위한 GPU 기반 병렬처리 알고리즘 (The GPU-based Parallel Processing Algorithm for Fast Inspection of Semiconductor Wafers)

  • 박영대;김준식;주효남
    • 제어로봇시스템학회논문지 / Vol. 19, No. 12 / pp. 1072-1080 / 2013
  • Today, many vision inspection techniques are used in industrial production. In the semiconductor industry in particular, the vision inspection system for wafers is very important, and inspection techniques for semiconductor wafer production must provide both high precision and high speed. To achieve these objectives, parallel processing of the inspection algorithm is essential. In this paper, we propose a GPU (Graphics Processing Unit)-based parallel processing algorithm for the fast inspection of semiconductor wafers. The proposed algorithm is implemented on NVIDIA GPU boards. The defect detection performance of the GPU implementation is the same as that of a single-CPU implementation, but it runs about 210 times faster.
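Wafer inspection parallelizes well because a per-pixel comparison against a golden reference image is independent for every pixel. A CPU-side NumPy sketch of that data-parallel pattern (illustrative only; the paper's algorithm and its GPU implementation are not reproduced here):

```python
import numpy as np

def defect_map(wafer, reference, tol):
    """Per-pixel compare against a golden reference image. Every pixel
    is independent, which is what makes this inspection a good fit for
    GPU data-parallel execution (sketched here with NumPy on the CPU)."""
    return np.abs(wafer.astype(int) - reference.astype(int)) > tol

ref = np.full((4, 4), 100, dtype=np.uint8)
wafer = ref.copy()
wafer[1, 2] = 160          # injected defect
defects = defect_map(wafer, ref, tol=30)
print(int(defects.sum()))  # 1
```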