• Title/Summary/Keyword: Vision Camera

Search Results: 1,376

Moving object detection for biped walking robot platform (이족로봇 플랫폼을 위한 동체탐지)

  • Kang, Tae-Koo;Hwang, Sang-Hyun;Kim, Dong-Won;Park, Gui-Tae
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.570-572
    • /
    • 2006
  • This paper discusses a method of moving object detection for biped robot walking. Most research on vision-based object detection has focused on fixed-camera algorithms. However, developing vision systems for biped walking robots is an important and urgent issue, since biped walking robots are ultimately developed not only for research but to be utilized in real life. Methods for moving object detection have been developed for task assignment and execution by a biped robot as well as for human-robot interaction (HRI) systems, but those methods are not suitable for a biped walking robot. We therefore propose an advanced method suited to the biped walking robot platform. For carrying out certain tasks, an object detection system using a modified optical flow algorithm with a wireless vision camera is implemented on a biped walking robot.

  • PDF

Mobile Robot Navigation Using Vision Information (시각 정보를 이용한 이동 로보트의 항법)

  • Cho, Dong-Kwon;Kwon, Ho-Yeol;Suh, Il-Hong;Bien, Zeung-Nam
    • Proceedings of the KIEE Conference
    • /
    • 1989.07a
    • /
    • pp.689-692
    • /
    • 1989
  • In this paper, the navigation problem for a mobile robot is investigated. Specifically, it is proposed that simple guide-marks be introduced and that the navigation scheme be generated in conjunction with the guide-marks sensed through camera vision. For autonomous navigation, it is shown that a triple guide-mark system is more effective than a single guide-mark in estimating the position of the vehicle itself. The navigation system is tested on a mobile robot, 'Hero', equipped with a single-camera vision system.

  • PDF
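The core idea behind guide-mark navigation is fixing the vehicle position from bearings to marks at known positions. As a hedged illustration (not the paper's scheme), here is a minimal two-landmark triangulation in Python; the names and the two-mark simplification are this example's assumptions, and a triple guide-mark arrangement would add a third bearing for redundancy and heading recovery.

```python
import math

def locate_from_bearings(l1, th1, l2, th2):
    """Recover the observer position from absolute bearings th1, th2
    (radians, world frame) to landmarks at known positions l1, l2.
    The observer lies on the ray l_i - t_i * (cos th_i, sin th_i)."""
    d1 = (math.cos(th1), math.sin(th1))
    d2 = (math.cos(th2), math.sin(th2))
    # Solve l1 - t1*d1 = l2 - t2*d2, i.e. -t1*d1 + t2*d2 = l2 - l1
    a, b = -d1[0], d2[0]
    c, d = -d1[1], d2[1]
    rx, ry = l2[0] - l1[0], l2[1] - l1[1]
    det = a * d - b * c  # zero when the two bearing rays are parallel
    t1 = (rx * d - b * ry) / det
    return (l1[0] - t1 * d1[0], l1[1] - t1 * d1[1])
```

With two marks the solution degenerates when both bearings are nearly parallel, which is one reason more marks improve the position estimate.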

Three-Dimensional Pose Estimation of Neighbor Mobile Robots in Formation System Based on the Vision System (비전시스템 기반 군집주행 이동로봇들의 삼차원 위치 및 자세 추정)

  • Kwon, Ji-Wook;Park, Mun-Soo;Chwa, Dong-Kyoung;Hong, Suk-Kyo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.12
    • /
    • pp.1223-1231
    • /
    • 2009
  • We derive a systematic and iterative calibration algorithm, together with a position and pose estimation algorithm, for mobile robots in a vision-based formation system. In addition, we develop a coordinate matching algorithm that computes the matched ordering between extracted image coordinates and object coordinates for non-interactive calibration and pose estimation. Based on the calibration results, we also develop a camera simulator to confirm the calibration and to compare simulation results with experimental results in position and pose estimation.
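The coordinate matching step pairs extracted image points with known object points. The paper's algorithm is not specified in the abstract; as a simplified, hypothetical stand-in, a greedy nearest-neighbor assignment between projected object points and image detections looks like this:

```python
import numpy as np

def match_points(image_pts, proj_obj_pts):
    """Greedily pair each projected object point with its nearest
    still-unmatched image point; returns (obj_idx, img_idx) pairs."""
    img = np.asarray(image_pts, float)
    obj = np.asarray(proj_obj_pts, float)
    unmatched = set(range(len(img)))
    pairs = []
    for i, p in enumerate(obj):
        j = min(unmatched, key=lambda k: np.sum((img[k] - p) ** 2))
        pairs.append((i, j))
        unmatched.remove(j)
    return pairs
```

A globally optimal assignment (e.g. the Hungarian algorithm) would be more robust when detections are close together; the greedy version is only meant to show the idea.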

A Study on the Development of a Visual Inspection System for Length Measurement of Injection-Molded Products Using Digital Image Processing (디지탈 화상처리를 이용한 사출제품의 길이측정용 시각검사시스템 개발에 관한 연구)

  • 김재열;박환규;오보석
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1996.04a
    • /
    • pp.281-285
    • /
    • 1996
  • In this paper, a visual inspection system was developed using a vision board. It consists of an illuminator (a fluorescent lamp), an image input device (a CCD (Charge-Coupled Device) camera), an image processing system (Vision Board FARAMVB-02), image output devices (video monitor, printer), and a measuring instrument (TELMN1000). Instead of calculating the distance between the camera and the object, length measurement was calibrated with a 100 mm gauge block: horizontal and vertical length factors were measured at working distances from 400 mm to 650 mm in 50 mm increments. These length factors were then used to measure the length of injection-molded products, with the measuring instrument used for comparison. The maximum error in length between the two devices was 0.55 mm. The operating program was written in Borland C++ 3.1 and, with small changes, can be applied to various uses.

  • PDF
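The calibration idea, deriving a millimeters-per-pixel length factor from the 100 mm gauge block and applying it to the part image, can be sketched as follows. This is a simplified single-factor version with made-up pixel counts; the paper measures separate horizontal and vertical factors at each working distance.

```python
def length_factor(gauge_mm, gauge_pixels):
    """mm-per-pixel scale factor from a reference gauge block of known length."""
    return gauge_mm / gauge_pixels

def measure_mm(object_pixels, factor):
    """Convert a measured pixel span to millimeters using the factor."""
    return object_pixels * factor

# Calibrate with a 100 mm gauge block that spans 400 px in the image,
# then measure a molded part that spans 1220 px in the same setup.
f = length_factor(100.0, 400.0)   # 0.25 mm/px
length = measure_mm(1220, f)      # 305.0 mm
```

The factor is only valid at the working distance it was calibrated for, which is why the paper tabulates it from 400 mm to 650 mm.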

Robust Defect Size Measuring Method for an Automated Vision Inspection System (영상기반 자동결함 검사시스템에서 재현성 향상을 위한 결함 모델링 및 측정 기법)

  • Joo, Young-Bok;Huh, Kyung-Moo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.11
    • /
    • pp.974-978
    • /
    • 2013
  • AVI (Automatic Vision Inspection) systems automatically detect defect features and measure their sizes via camera vision. AVI systems usually report different measurements for the same defect under small variations in position or rotation, mainly because different images are provided. This is caused by variations in the image acquisition process, including optical factors, nonuniform illumination, random noise, and so on. For this reason, conventional area-based defect measuring methods have problems with robustness and consistency. In this paper, we propose a new defect size measuring method that overcomes this problem by utilizing volume information, which is completely ignored in area-based defect measuring. The results show that the proposed method dramatically improves the robustness and consistency of defect size measurement.
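The contrast the abstract draws, pixel counting versus a volume-style measure that also uses intensity, might be sketched like this. This is a schematic illustration of the two kinds of measure, not the authors' exact formulation:

```python
import numpy as np

def defect_area(patch, thresh):
    """Conventional area measure: count of pixels above the threshold.
    Pixels hovering near the threshold flip in and out between images."""
    return int((patch > thresh).sum())

def defect_volume(patch, thresh):
    """Volume-style measure: sum of intensity excess above the threshold.
    Near-threshold pixels contribute almost nothing, so small acquisition
    variations change the total only slightly."""
    excess = patch.astype(float) - thresh
    return float(np.clip(excess, 0, None).sum())
```

A pixel at intensity 101 versus 99 changes the area count by a whole pixel but the volume measure by only ~1 unit, which is the intuition behind the improved repeatability.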

Automation for Oyster Hinge Breaking System

  • So, J.D.;Wheaton, F.W.
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1996.06c
    • /
    • pp.658-667
    • /
    • 1996
  • A computer vision system was developed to automatically detect and locate the oyster hinge line, one step in shucking an oyster. The computer vision system consisted of a personal computer, a color frame grabber, a color CCD video camera with a zoom lens, two video monitors, a specially designed fixture to hold the oyster, a lighting system to illuminate the oyster, and the system software. The software combined commercially available programs with custom programs developed in Microsoft C. Test results showed that image resolution was the most important variable influencing hinge detection efficiency. Whether the trimmed-off flat white surface area was dry or wet, the oyster size relative to the selected image size, and the image processing methods used all influenced the hinge locating efficiency. The best combination of computer software and hardware successfully located 97% of the oyster hinge lines tested. This efficiency was achieved using a camera field of view of 1.9 by 1.5 cm, a 180 by 170 pixel image window, and a dry trimmed-off oyster hinge end surface.

  • PDF

The Background Segmentation of the Target Object for the Stereo Vision System (스테레오 비젼 시스템을 위한 표적물체의 배경 분리)

  • Ko, Jung Hwan
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.4 no.1
    • /
    • pp.25-31
    • /
    • 2008
  • In this paper, we propose a new method that separates background and foreground in stereo images. This method can improve an automatic target tracking system by using the disparity map of the stereo vision system and a background-separating mask, which can be obtained from the camera configuration parameters. We use the disparity map and camera configuration parameters to separate the object from the background. The disparity map is built with a block matching algorithm from the stereo images. A morphology filter is used to compensate for disparity errors that can be caused by occlusion areas. We could obtain an object separated from the background when the proposed method was applied to a real stereo camera system.
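A disparity map from block matching along rectified scanlines can be sketched as follows: for each pixel in the left row, search for the horizontal shift into the right row that minimizes SAD over a local window. This is a minimal assumed version; the paper's morphology filter and background-separating mask are omitted.

```python
import numpy as np

def scanline_disparity(left, right, win=3, max_d=8):
    """1-D block matching on rectified images: disp[y, x] is the shift d
    for which right[y, x-d] best matches left[y, x] under windowed SAD."""
    h, w = left.shape
    disp = np.zeros((h, w), np.int32)
    r = win // 2
    L = left.astype(np.int32)
    R = right.astype(np.int32)
    for y in range(h):
        for x in range(r, w - r):
            ref = L[y, x - r:x + r + 1]
            best, best_d = None, 0
            for d in range(0, max_d + 1):
                if x - d - r < 0:
                    break  # candidate window would leave the image
                cand = R[y, x - d - r:x - d + r + 1]
                sad = np.abs(ref - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

Foreground/background separation then amounts to thresholding: nearby objects have large disparity, distant background small disparity.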

Autonomous Navigation of KUVE (KIST Unmanned Vehicle Electric) (KUVE (KIST 무인 주행 전기 자동차)의 자율 주행)

  • Chun, Chang-Mook;Suh, Seung-Beum;Lee, Sang-Hoon;Roh, Chi-Won;Kang, Sung-Chul;Kang, Yeon-Sik
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.7
    • /
    • pp.617-624
    • /
    • 2010
  • This article describes the system architecture of KUVE (KIST Unmanned Vehicle Electric) and its unmanned autonomous navigation in KIST. KUVE, an electric light-duty vehicle, is equipped with two laser range finders, a vision camera, a differential GPS system, an inertial measurement unit, odometers, and control computers for autonomous navigation. KUVE estimates and tracks road boundaries such as curbs and lines using a laser range finder and a vision camera. When no road boundary is detectable, it follows a predetermined trajectory using the DGPS, IMU, and odometers. KUVE achieves an autonomous navigation success rate of over 80% in KIST.

A Novel Depth Measurement Technique for Collision Avoidance Mobile Robot (이동로봇의 장애물과의 충돌방지를 위한 새로운 3차원 거리 인식 방법)

  • 송재홍;나상익;김형석
    • Proceedings of the IEEK Conference
    • /
    • 2002.06d
    • /
    • pp.291-294
    • /
    • 2002
  • A simple computer vision technique to measure middle-range depth with a mono camera and a plane mirror is proposed. The proposed system places a rotating mirror in front of a fixed mono camera. In contrast to a conventional stereo vision system, in which the disparity of a closer object is larger than that of a distant object, in the proposed system the pixel movement caused by the rotating mirror is larger for pixels of distant objects. Inspired by this feature, the principle of depth measurement based on the relation between pixel movement and object distance has been investigated. The factors that influence measurement precision are also analyzed. The benefits of the proposed system are low cost and less chance of occlusion; robustness for practical use is an additional benefit of the proposed vision system.

  • PDF

Vision-Based Collision-Free Formation Control of Multi-UGVs using a Camera on UAV (무인비행로봇에 장착된 카메라를 이용한 다중 무인지상로봇의 충돌 없는 대형 제어기법)

  • Choi, Francis Byonghwa;Ha, Changsu;Lee, Dongjun
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.30 no.1
    • /
    • pp.53-58
    • /
    • 2013
  • In this paper, we present a framework for collision avoidance of UGVs using vision-based control. On the image plane created by a perspective camera rigidly attached to a stationarily hovering UAV, the image features of the UGVs are controlled by our framework so that they proceed to desired locations while avoiding collision. The UGVs are modeled as unicycle wheeled mobile robots with a nonholonomic constraint, and they follow the image features' movement on the ground plane with a low-level controller. We use the potential function method to guarantee collision prevention and show its stability. Simulation results are presented to validate the capability and stability of the proposed framework.
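The potential function method for collision prevention typically adds, for each neighbor, a repulsive term that activates only within an influence distance. Below is a minimal sketch of the classic repulsive gradient in that style; the gains, the influence distance, and the specific potential are this example's assumptions, not the paper's exact formulation.

```python
import math

def repulsive_velocity(p, q, d0=2.0, gain=1.0):
    """Velocity contribution pushing point p away from obstacle/robot q,
    derived from the gradient of a repulsive potential that is zero
    beyond the influence distance d0 (Khatib-style form)."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    d = math.hypot(dx, dy)
    if d >= d0 or d == 0.0:
        return (0.0, 0.0)  # outside influence region (or coincident)
    mag = gain * (1.0 / d - 1.0 / d0) / (d * d)
    return (mag * dx / d, mag * dy / d)  # push along the outward unit vector
```

In a full formation controller this repulsive term is summed over all neighbors and combined with an attractive term toward each robot's desired image-plane location; stability is then argued with the total potential as a Lyapunov-like function.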