• Title/Abstract/Keyword: real-time vision

Search results: 847 items

LRF 를 이용한 이동로봇의 실시간 차선 인식 및 자율주행 (A Real Time Lane Detection Algorithm Using LRF for Autonomous Navigation of a Mobile Robot)

  • 김현우;황요섭;김윤기;이동혁;이장명
    • 제어로봇시스템학회논문지, Vol. 19, No. 11, pp. 1029-1035, 2013
  • This paper proposes a real-time lane detection algorithm using an LRF (Laser Range Finder) for the autonomous navigation of a mobile robot. Many technologies exist for vehicle safety, such as airbags, ABS, and EPS, and real-time lane detection is a fundamental requirement for any automotive system that relies on information from outside the vehicle. Representative lane recognition methods are vision-based and LRF-based. A vision-based system recognizes the three-dimensional environment well only under good imaging conditions; unexpected obstacles such as poor illumination, occlusions, and vibrations prevent vision alone from satisfying this requirement. This paper therefore introduces a three-dimensional lane detection algorithm using an LRF, which is highly robust against illumination changes. For three-dimensional lane detection, the difference in laser reflection between the asphalt and the lane marking, which depends on color and distance, is used to extract feature points. A stable tracking algorithm is also introduced empirically. The performance of the proposed lane detection and tracking algorithm has been verified through real experiments.
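
The paper publishes no code; as a minimal sketch of the core idea (function names, window size, and the intensity threshold are assumptions), lane-marking candidates can be taken from an LRF scan wherever the reflectance rises sharply above the local asphalt level, and a line fitted through them:

```python
import numpy as np

def detect_lane_points(ranges, intensities, angle_min, angle_inc,
                       intensity_jump=0.3):
    """Pick LRF returns whose reflectance is markedly higher than the
    local asphalt baseline (lane paint reflects more strongly)."""
    angles = angle_min + angle_inc * np.arange(len(ranges))
    xs, ys = ranges * np.cos(angles), ranges * np.sin(angles)

    # Local asphalt baseline: moving median of the intensity signal.
    k = 15
    padded = np.pad(intensities, k // 2, mode="edge")
    baseline = np.array([np.median(padded[i:i + k]) for i in range(len(ranges))])

    mask = intensities > baseline * (1.0 + intensity_jump)
    return xs[mask], ys[mask]

def fit_lane_line(xs, ys):
    """Least-squares line y = a*x + b through the candidate points."""
    a, b = np.polyfit(xs, ys, 1)
    return a, b
```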

Robust Real-time Object Detection on Construction Sites Using Integral Channel Features

  • Kim, Jinwoo;Chi, Seokho
    • 국제학술발표논문집, The 6th International Conference on Construction Engineering and Project Management, pp. 304-309, 2015
  • On construction sites, it is important to monitor the performance of construction equipment and workers to achieve successful construction project management; in particular, vision-based detection methods have advantages for real-time site data collection for safety and productivity analyses. Although many researchers have developed vision-based detection methods with acceptable performance, limitations remain: 1) sensitivity to changes in the shape and appearance of moving objects in different working postures, and 2) high computation time. To address these limitations, this paper proposes a detection algorithm for construction equipment based on Integral Channel Features. For validation, 16,850 frames of video streams were recorded and analyzed. The results showed that the proposed method achieved high performance in terms of accuracy and processing time. In conclusion, the developed method can help extract useful site information, including working patterns, working time, and input manpower.
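
The Integral Channel Features pipeline the authors build on can be sketched as follows (a generic illustration, not the authors' detector; the channel set and bin count are assumptions): compute simple feature channels, take their integral images, and evaluate rectangular channel sums in constant time for a sliding-window classifier.

```python
import numpy as np
import cv2

def integral_channels(bgr):
    """Integral images over simple feature channels: grayscale, gradient
    magnitude, and six quantized-orientation channels."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)

    channels = [gray, mag]
    for b in range(6):                    # orientation bins of 30 degrees
        sel = ((ang % 180) >= 30 * b) & ((ang % 180) < 30 * (b + 1))
        channels.append(mag * sel)

    # cv2.integral returns an (H+1, W+1) summed-area table per channel.
    return [cv2.integral(c) for c in channels]

def channel_sum(ii, x, y, w, h):
    """Sum of one channel over rectangle (x, y, w, h) in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
```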

Real-Time Pipe Fault Detection System Using Computer Vision

  • Kim Hyoung-Seok;Lee Byung-Ryong
    • International Journal of Precision Engineering and Manufacturing, Vol. 7, No. 1, pp. 30-34, 2006
  • Recently, there has been increasing demand for computer-vision-based inspection and measurement systems as part of factory automation equipment. In general, it is almost impossible to inspect every part coming from a part-feeding system manually because of time limitations. Therefore, most manual inspection is applied to sampled parts rather than all incoming parts, and it neither guarantees consistent measuring accuracy nor reduces working time. Thus, to improve inspection speed and accuracy, a computer-aided measuring and analysis method is highly desirable. In this paper, a computer-vision-based pipe inspection system is proposed, in which the front- and side-view profiles of three different kinds of pipes coming from a forming line are acquired by computer vision. Edge detection is performed using the Laplace operator. To reduce the vision processing time, a modified Hough transform combined with a clustering method is used for straight-line detection. The center points and diameters of the inner and outer circles are then found to determine the eccentricity of the parts. An inspection system has also been built so that the data and images of faulty parts are stored as files and transferred to a server.
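
In OpenCV terms, the measurement chain can be sketched roughly as below; for brevity the sketch uses the built-in cv2.HoughCircles in place of the paper's Laplace edge detection and modified Hough transform with clustering, and all parameter values are illustrative.

```python
import cv2
import numpy as np

def pipe_eccentricity(gray):
    """Rough sketch: locate the inner and outer circles of a pipe end view
    and report the offset between their centers (eccentricity)."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=10, param1=100, param2=40,
                               minRadius=10, maxRadius=0)
    if circles is None or circles.shape[1] < 2:
        return None                      # fewer than two circles found

    # Keep the two strongest circles; the larger one is the outer contour.
    c1, c2 = circles[0][0], circles[0][1]
    outer, inner = (c1, c2) if c1[2] > c2[2] else (c2, c1)
    offset = float(np.hypot(outer[0] - inner[0], outer[1] - inner[1]))
    return {"outer_radius": float(outer[2]),
            "inner_radius": float(inner[2]),
            "center_offset_px": offset}
```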

산업용 비젼시스템을 위한 하드웨어 체인코더의 설계 (A Hardware Implementation of Chain-coding Algorithm for Industrial Vision Systems)

  • 이병일;신유식;임준홍;변증남
    • 대한전기학회:학술대회논문집, 대한전기학회 1987년도 전기.전자공학 학술대회 논문집(I), pp. 265-269, 1987
  • In an industrial vision system, a coding technique for binary images is essential to extract useful information. To reduce the processing time, a hardware implementation of the chain coding algorithm is attempted. For that purpose, the chain coding algorithm is modified so that it is more suitable for hardware implementation. A hardwired chain coder is developed and tested with the developed vision system. The results show that the processing time is greatly reduced and that the developed vision system appears feasible for real-time applications.
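
As a software reference for what the hardwired coder computes, here is a minimal 8-connected Freeman chain-coding sketch (the start pixel is assumed to be a boundary pixel, and the stopping condition is simplified):

```python
import numpy as np

# 8-connected Freeman directions, numbered counter-clockwise from "east"
# in image coordinates (row, col).
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(binary, start):
    """Trace the boundary of the blob containing `start` (a boundary pixel)
    and return its Freeman chain code."""
    code, cur, prev_dir = [], start, 0
    while True:
        for step in range(8):
            # Resume the search two directions back from the last move.
            d = (prev_dir + 6 + step) % 8
            r, c = cur[0] + DIRS[d][0], cur[1] + DIRS[d][1]
            if (0 <= r < binary.shape[0] and 0 <= c < binary.shape[1]
                    and binary[r, c]):
                code.append(d)
                cur, prev_dir = (r, c), d
                break
        else:
            break                        # isolated pixel, no neighbours
        if cur == start:                 # simplified stopping criterion
            break
    return code
```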

레이저 비전 센서를 이용한 용접비드의 외부결함 검출에 관한 연구 (A Study of Inspection of Weld Bead Defects using Laser Vision Sensor)

  • 이정익;이세헌
    • Journal of Welding and Joining, Vol. 17, No. 2, pp. 53-60, 1999
  • Conventionally, a CCD camera and a vision sensor using a projected pattern of light are used to inspect weld bead defects. With this method, however, considerable time is needed for image preprocessing, stripe extraction, thinning, etc. In this study, a laser vision sensor using a scanning beam of light is used to shorten the time required for image preprocessing. Software is developed to decide in real time whether the weld bead shape is acceptable. The criteria are based upon the classification of imperfections in metallic fusion welds (ISO 6520) and the limits for imperfections (ISO 5817).
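
As an illustration of the pass/fail decision, each laser-scanned cross section can be reduced to a few bead features and checked against per-feature limits; the profile format, feature definitions, and limit values below are assumptions for the sketch, not the ISO 5817 limits themselves.

```python
import numpy as np

def bead_features(profile_z, dx):
    """Simple bead features from one laser-scanned cross section:
    profile_z is the height (mm) at equally spaced points dx (mm) apart.
    The first samples are assumed to lie on the plate surface."""
    base = np.median(profile_z[:10])            # plate surface level
    height = profile_z.max() - base             # bead reinforcement height
    above = profile_z > base + 0.1              # points belonging to the bead
    width = above.sum() * dx
    undercut = base - profile_z[~above].min() if (~above).any() else 0.0
    return {"height": height, "width": width, "undercut": undercut}

def bead_ok(feat, limits={"height": (0.5, 3.0), "width": (4.0, 12.0),
                          "undercut": (0.0, 0.5)}):
    """Accept the cross section only if every feature lies inside its limits
    (illustrative limits; real criteria come from ISO 5817 quality levels)."""
    return all(lo <= feat[k] <= hi for k, (lo, hi) in limits.items())
```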

금속 표면의 결함 검출을 위한 영역 기반 CNN 기법 비교 (Comparison of Region-based CNN Methods for Defects Detection on Metal Surface)

  • 이민기;서기성
    • 전기학회논문지, Vol. 67, No. 7, pp. 865-870, 2018
  • Machine-vision-based industrial inspection includes defect detection and classification. Fast inspection is a fundamental requirement for many real-time vision applications: it must use little computation time while localizing defects robustly and with high accuracy. Deep learning techniques have often been considered unsuitable for real-time applications, but fast region-based CNN algorithms for object detection, such as Faster R-CNN and YOLOv2, have recently been introduced. We apply these methods to an industrial inspection problem. Three CNN-based detection algorithms, VOV based CNN, Faster R-CNN, and YOLOv2, are evaluated for defect detection on metal surfaces. The results for inspection time and various performance indices are compared and analysed.
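
The trained defect models from the comparison are not published; the sketch below only shows how such a region-based detector can be timed per frame, using torchvision's generic COCO-pretrained Faster R-CNN as a stand-in (the image file name is assumed).

```python
import time
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Generic COCO-pretrained Faster R-CNN; a real defect detector would be
# fine-tuned on labeled metal-surface images instead.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("surface.jpg").convert("RGB"))  # assumed file name

with torch.no_grad():
    start = time.perf_counter()
    output = model([image])[0]            # dict with boxes, labels, scores
    elapsed = time.perf_counter() - start

keep = output["scores"] > 0.5
print(f"{keep.sum().item()} detections in {elapsed * 1000:.1f} ms")
```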

비젼 시스템을 이용한 DGPS 데이터 보정에 관한 연구 (A study on the DGPS data errors correction through real-time coordinates conversion using the vision system)

  • 문성룡;채정수;박장훈;이호순;노도환
    • 대한전기학회:학술대회논문집, 대한전기학회 2003년도 하계학술대회 논문집 D, pp. 2310-2312, 2003
  • This paper describes a navigation system for an autonomous vehicle in outdoor environments. The vehicle uses a vision system to detect coordinates and DGPS information to determine the vehicle's initial position and orientation. The vision system detects coordinates in the environment by referring to an environment model. As the vehicle moves, it estimates its position from conventional DGPS data and matches the coordinates against the environment model in order to reduce the error in the vehicle's position estimate. The vehicle's initial position and orientation are calculated from the coordinate values of the first and second locations, which are acquired by DGPS, and subsequent positions and orientations are derived from them. Experimental results in real environments have shown the effectiveness of the proposed navigation and real-time methods.
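
A minimal sketch of the initialization and correction steps described above (local east/north coordinates, function names, and the blending weight are assumptions): the initial heading comes from the first two DGPS fixes, and each subsequent DGPS position is nudged toward the vision-derived coordinate.

```python
import math

def initial_pose(fix1, fix2):
    """Initial position and heading from the first two DGPS fixes,
    each given as a local (east, north) coordinate in meters."""
    (e1, n1), (e2, n2) = fix1, fix2
    heading = math.atan2(e2 - e1, n2 - n1)   # radians clockwise from north
    return (e2, n2, heading)

def correct_position(dgps_xy, vision_xy, weight=0.7):
    """Blend the DGPS estimate with the vision-derived coordinate
    (a simple weighted correction; the paper's matching against an
    environment model is more involved)."""
    return tuple(weight * v + (1.0 - weight) * d
                 for d, v in zip(dgps_xy, vision_xy))
```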

OnBoard Vision Based Object Tracking Control Stabilization Using PID Controller

  • Mariappan, Vinayagam;Lee, Minwoo;Cho, Juphil;Cha, Jaesang
    • International Journal of Advanced Culture Technology, Vol. 4, No. 4, pp. 81-86, 2016
  • In this paper, we propose a simple and effective vision-based tracking controller design for autonomous object tracking using a multicopter. A multicopter-based automatic tracking system is usually unstable when the object moves: the tracking process cannot determine the object's position exactly, so the system cannot follow the object promptly along its direction of movement and instead keeps searching for the object from its initial (home) position. In this paper, PID control is used to improve the stability of the tracking system, so that object tracking becomes more stable, as shown by the reduction in tracking error. A computer vision and control strategy is applied to detect a diverse set of moving objects on a Raspberry Pi based platform, and a software-defined PID controller is designed to control the yaw, throttle, and pitch of the multicopter in real time. Based on a series of experimental results, we conclude that the PID controller makes the tracking system more stable in real time.
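
A minimal sketch of a software-defined PID loop of this kind (gains, axis mapping, and limits are illustrative assumptions): horizontal pixel error drives yaw, vertical error drives throttle, and the deviation of the target's apparent size from a desired size drives pitch.

```python
class PID:
    """Textbook PID controller; gains and limits are illustrative."""
    def __init__(self, kp, ki, kd, out_limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(-self.out_limit, min(self.out_limit, out))

# One controller per axis.
yaw_pid = PID(0.004, 0.0005, 0.001)
throttle_pid = PID(0.004, 0.0005, 0.001)
pitch_pid = PID(0.8, 0.05, 0.1)

def track_step(target_cx, target_cy, target_area, frame_w, frame_h,
               desired_area, dt):
    """Map the detected target position/size to multicopter commands."""
    yaw_cmd = yaw_pid.update(target_cx - frame_w / 2, dt)
    throttle_cmd = throttle_pid.update(frame_h / 2 - target_cy, dt)
    pitch_cmd = pitch_pid.update(1.0 - target_area / desired_area, dt)
    return yaw_cmd, pitch_cmd, throttle_cmd
```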

비전 및 IMU 센서의 정보융합을 이용한 자율주행 자동차의 횡방향 제어시스템 개발 및 실차 실험 (Development of a Lateral Control System for Autonomous Vehicles Using Data Fusion of Vision and IMU Sensors with Field Tests)

  • 박은성;유창호;최재원
    • 제어로봇시스템학회논문지, Vol. 21, No. 3, pp. 179-186, 2015
  • In this paper, a novel lateral control system is proposed for the purpose of improving lane keeping performance independently of GPS signals. Lane keeping is a key function for the realization of unmanned driving systems. To achieve this objective, a vision-sensor-based real-time lane detection scheme is developed. Furthermore, we employ data fusion together with the real-time steering angle of the test vehicle to improve its lane keeping performance; the fused direction data are obtained from an IMU sensor and a vision sensor. The performance of the proposed system was verified by computer simulations and by field tests using a MOHAVE, a commercial vehicle from Kia Motors of Korea.
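
The paper does not publish its fusion equations; as one common illustration of fusing the two direction sources, a complementary filter can integrate the IMU yaw rate and correct the drift with the vision-derived lane heading (the blending factor is an assumption).

```python
class HeadingFusion:
    """Complementary filter: integrate the IMU yaw rate for smooth,
    high-rate updates and pull toward the vision heading to cancel drift."""
    def __init__(self, alpha=0.98, heading=0.0):
        self.alpha = alpha            # trust placed in the IMU integration
        self.heading = heading        # fused heading estimate (rad)

    def update(self, imu_yaw_rate, vision_heading, dt):
        predicted = self.heading + imu_yaw_rate * dt
        self.heading = self.alpha * predicted + (1.0 - self.alpha) * vision_heading
        return self.heading

# Example: 100 Hz IMU updates corrected by 20 Hz vision headings would call
# update() with the latest vision_heading held between camera frames.
```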

A Parallel Implementation of Multiple Non-overlapping Cameras for Robot Pose Estimation

  • Ragab, Mohammad Ehab;Elkabbany, Ghada Farouk
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 8, No. 11, pp. 4103-4117, 2014
  • Image processing and computer vision algorithms are attracting increasing attention in a variety of application areas such as robotics and man-machine interaction. Vision allows the development of more flexible, intelligent, and less intrusive approaches than most other sensor systems. In this work, we determine the location and orientation of a mobile robot, which is crucial for performing its tasks. To operate in real time, the various vision routines need to be sped up. Therefore, we present and evaluate a method for introducing parallelism into the multiple non-overlapping camera pose estimation algorithm proposed in [1]. In that algorithm, the problem is solved in real time using multiple non-overlapping cameras and the Extended Kalman Filter (EKF). Four cameras arranged in two back-to-back pairs are mounted on the platform of a moving robot. An important benefit of using multiple cameras for robot pose estimation is the capability of resolving vision uncertainties such as the bas-relief ambiguity. The proposed method is based on algorithmic skeletons for low, medium, and high levels of parallelization. The analysis shows that the use of a multiprocessor system enhances the system performance by about 87%. In addition, the proposed design is scalable, which is necessary in this application where the number of features changes repeatedly.
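
A rough sketch of the parallelization pattern (a Python stand-in, not the authors' skeleton-based implementation): the per-camera feature measurements are computed concurrently, while the shared EKF state is updated sequentially in the main process.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def camera_measurement(args):
    """Per-camera work that can run independently: process one camera's
    frame and return the measurement vector for the EKF update.
    (Placeholder body; the real routine would run feature tracking.)"""
    cam_id, frame = args
    features = np.asarray(frame, dtype=float).ravel()[:8]   # placeholder
    return cam_id, features

def parallel_update(frames, ekf_update):
    """Process the non-overlapping cameras in parallel, then apply the
    EKF measurement updates sequentially (the filter state is shared).
    On platforms that spawn worker processes, call this from under an
    `if __name__ == "__main__":` guard."""
    with ProcessPoolExecutor(max_workers=len(frames)) as pool:
        for cam_id, z in pool.map(camera_measurement, enumerate(frames)):
            ekf_update(cam_id, z)
```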