• Title/Summary/Keyword: Vision-based Control

Development of an Embedded Vision Platform for Internet-based Robot Control

  • Kim, Tae-Hee;Jeon, Jae-Wook
• Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2002.10a
    • /
    • pp.116.4-116
    • /
    • 2002
  • When an overhead camera system is used, the mobile robot can operate only within a fixed working area. To work over a wide area, the robot must instead use an onboard camera system, and it must have a wireless LAN to remove restrictions on its movement; the onboard camera system therefore also requires a wireless LAN environment. We develop an embedded vision platform based on such an onboard camera.

  • PDF

Object Recognition Using Planar Surface Segmentation and Stereo Vision

  • Kim, Do-Wan;Kim, Sung-Il;Won, Sang-Chul
• Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1920-1925
    • /
    • 2004
  • This paper describes a new method for 3D object recognition based on surface-segment-based stereo vision. The position and orientation of objects are identified accurately enough for a robot to pick them up, even when the objects are multiple and partially occluded. Stereo vision provides the 3D sensing, and a CAD model with post-processing is used to build the object models. Matching is initially performed using the model and object features to roughly calculate the object's position and orientation. Through a fine-adjustment step, the accuracy of the position and orientation is then improved (see the sketch after this entry).

  • PDF
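
The abstract above does not give implementation details, so the following is only a minimal sketch of the coarse-to-fine idea: once tentative correspondences between stereo-measured 3D points and CAD-model points exist, the object pose can be refined with a closed-form SVD (Kabsch) fit. The point sets and the single rotation used below are hypothetical.

```python
# Sketch: estimate an object's pose by rigidly aligning 3D points measured
# by stereo to corresponding CAD-model points (Kabsch / SVD fit).
# Point sets and correspondences here are hypothetical placeholders.
import numpy as np

def rigid_fit(model_pts, scene_pts):
    """Return rotation R and translation t so that R @ model + t ~= scene."""
    cm = model_pts.mean(axis=0)
    cs = scene_pts.mean(axis=0)
    H = (model_pts - cm).T @ (scene_pts - cs)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cs - R @ cm
    return R, t

# Coarse matching gives tentative correspondences; the fine-adjustment step
# would iterate: re-associate nearest points, re-fit, until convergence.
model = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]])
scene = (model @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T) + [0.5, 0.2, 0.9]
R, t = rigid_fit(model, scene)
print("estimated rotation:\n", R, "\ntranslation:", t)
```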

Development of Stereo Vision Based Welding Quality Inspection System for RV Sinking Seat (스테레오 비전을 이용한 싱킹 시트의 용접 품질 검사 시스템 개발)

  • Yun, Sang-Hwan;Kim, Han-Jong;Kim, Sung-Gaun
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.17 no.3
    • /
    • pp.71-77
    • /
    • 2008
  • This paper presents a stereo-vision-based autonomous inspection system for welding quality control of an RV (Recreational Vehicle) sinking seat. The three-dimensional geometry of the welding bead, which is the welding quality criterion, is measured from the captured stereo images after a median filter is applied to them. The image-processing software was developed using NI LabVIEW with an NI vision system. In the manufacturing process of an RV sinking seat, the developed system can be used to overcome the precision errors that arise from visual inspection by an operator. The welding quality inspection system was verified through experiments.
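
As a rough illustration of the measurement pipeline described in this abstract (median filtering, stereo disparity, bead-height check), here is a minimal OpenCV sketch. The file names, camera parameters, region of interest, and tolerance band are assumptions; the actual system was implemented with NI LabVIEW and NI Vision.

```python
# Sketch: median-filter a stereo pair, compute disparity, convert to depth,
# and check the weld-bead height against an assumed tolerance.
import cv2
import numpy as np

FOCAL_PX = 1200.0      # focal length in pixels (assumed)
BASELINE_M = 0.06      # stereo baseline in metres (assumed)

left = cv2.imread("bead_left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
right = cv2.imread("bead_right.png", cv2.IMREAD_GRAYSCALE)

# Median filtering suppresses speckle on the shiny weld surface.
left_f = cv2.medianBlur(left, 5)
right_f = cv2.medianBlur(right, 5)

# Block-matching disparity (fixed-point result, divide by 16 to get pixels).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left_f, right_f).astype(np.float32) / 16.0

valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]      # Z = f * B / d

# Crude quality check: bead height = seat-surface depth minus bead-top depth.
roi = depth[200:260, 300:500]                                # hypothetical bead ROI
bead_height = np.median(depth[valid]) - roi[roi > 0].min()
print("bead height ~ %.4f m -> %s"
      % (bead_height, "OK" if 0.001 < bead_height < 0.004 else "REJECT"))
```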

Calibration for Color Measurement of Lean Tissue and Fat of the Beef

  • Lee, S.H.;Hwang, H.
    • Agricultural and Biosystems Engineering
    • /
    • v.4 no.1
    • /
    • pp.16-21
    • /
    • 2003
  • In the agricultural field, machine vision systems have been widely used to automate inspection processes, especially in quality grading. Although machine vision is very effective in quantifying geometric quality factors, it is deficient in quantifying color information. This study was conducted to evaluate the color of beef using a machine vision system. Although measuring beef color with machine vision has the advantage of covering the whole lean-tissue area at once, compared to a colorimeter, it revealed a sensitivity problem that depends on system components such as the type of camera, the lighting conditions, and so on. The effect of the camera's color-balancing control was investigated, and a multi-layer BP neural network based color calibration process was developed. The color calibration network was trained using reference color patches and showed high correlation with the L*a*b* coordinates of a colorimeter. The proposed calibration process adapted successfully to various measurement environments, such as different types of cameras and light sources. Results of the proposed calibration and of MLR-based calibration were also compared. The color calibration network was successfully applied to measure the color of the beef; however, the reflectance properties of the calibration reference materials and of the test materials should be considered to achieve more accurate color measurement (see the sketch after this entry).

  • PDF
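
A minimal sketch of the neural-network colour-calibration idea: learn a mapping from camera RGB values of reference colour patches to the colorimeter's CIELAB readings, then apply it to a measured beef colour. The patch values below are made-up placeholders, and scikit-learn's MLPRegressor stands in for the multi-layer BP network used in the paper.

```python
# Sketch: train an MLP on (camera RGB -> colorimeter L*a*b*) reference pairs,
# then calibrate a new camera reading. All numbers are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Camera RGB (0-255) of reference patches and matching colorimeter L*a*b* values
camera_rgb = np.array([[250, 250, 248], [128, 40, 38], [60, 120, 58], [30, 30, 90],
                       [200, 180, 40], [90, 90, 90]], dtype=float) / 255.0
colorimeter_lab = np.array([[96.0, 0.2, 1.1], [38.5, 45.2, 25.3], [46.1, -35.0, 28.7],
                            [20.4, 15.8, -42.0], [74.2, -5.6, 68.9], [38.9, 0.1, 0.3]])

net = MLPRegressor(hidden_layer_sizes=(16, 16), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(camera_rgb, colorimeter_lab)

# Calibrate a measured lean-tissue colour (hypothetical RGB reading).
lean_rgb = np.array([[150, 60, 55]], dtype=float) / 255.0
print("calibrated L*a*b*:", net.predict(lean_rgb)[0])
```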

Vision Based Map-Building Using Singular Value Decomposition Method for a Mobile Robot in Uncertain Environment

  • Park, Kwang-Ho;Kim, Hyung-O;Kee, Chang-Doo;Na, Seung-Yu
• Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.101.1-101
    • /
    • 2001
  • This paper describes grid mapping for a vision-based mobile robot in an uncertain indoor environment. Map building is a prerequisite for navigation of a mobile robot, and the problem of feature correspondence across two images is well known to be of crucial importance for vision-based mapping. We use a stereo matching algorithm obtained by singular value decomposition of an appropriate correspondence-strength matrix. This correspondence strength is a correlation weight over local measurements that quantifies the similarity between features (see the sketch after this entry). The visual range data from the reconstructed disparity image form an occupancy-grid representation. The occupancy map is a grid-based map in which each cell has a value indicating the probability at that location ...

  • PDF
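
A minimal sketch of the SVD-based correspondence step: build a correspondence-strength matrix from feature similarity, orthogonalise it via its SVD, and keep pairs that are mutual row/column maxima. The feature coordinates and the Gaussian proximity weighting below are assumptions, not the paper's exact formulation.

```python
# Sketch: SVD-based feature matching on a correspondence-strength matrix.
import numpy as np

def svd_matching(feats_a, feats_b, sigma=10.0):
    # Gaussian proximity/similarity weights between all feature pairs
    d2 = ((feats_a[:, None, :] - feats_b[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2.0 * sigma ** 2))
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    P = U @ Vt                              # singular values replaced by ones
    matches = []
    for i in range(P.shape[0]):
        j = int(np.argmax(P[i]))
        if i == int(np.argmax(P[:, j])):    # mutual row/column maximum
            matches.append((i, j))
    return matches

left_feats = np.array([[10.0, 12.0], [40.0, 42.0], [80.0, 15.0]])
right_feats = np.array([[38.0, 41.0], [9.0, 13.0], [79.0, 16.0]])
print(svd_matching(left_feats, right_feats))    # expect [(0, 1), (1, 0), (2, 2)]
```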

3D Orientation and Position Tracking System of Surgical Instrument with Optical Tracker and Internal Vision Sensor (광추적기와 내부 비전센서를 이용한 수술도구의 3차원 자세 및 위치 추적 시스템)

  • Joe, Young Jin;Oh, Hyun Min;Kim, Min Young
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.8
    • /
    • pp.579-584
    • /
    • 2016
  • When surgical instruments are tracked in an image-guided surgical navigation system, a high-accuracy stereo vision system called an optical tracker is generally used. However, this optical tracker has the disadvantage that a line of sight between the tracker and the surgical instrument must be maintained. Therefore, to complement this disadvantage of optical tracking systems, an internal vision sensor is attached to the surgical instrument in this paper. By monitoring the target marker pattern attached to the patient with this vision sensor, the surgical instrument can be tracked even when the line of sight of the optical tracker is occluded. To verify the system's effectiveness, a series of basic experiments is carried out, followed by an integration experiment. The experimental results show that the rotational error is bounded by a maximum of $1.32^{\circ}$ with a mean of $0.35^{\circ}$, and the translational error by a maximum of 1.72 mm with a mean of 0.58 mm. It is confirmed that the proposed tool-tracking method using an internal vision sensor is useful and effective in overcoming the occlusion problem of the optical tracker.
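
A minimal sketch, under assumed calibration data, of the transform chaining that lets an internal vision sensor take over when the optical tracker's line of sight is blocked: the tool pose in the patient-marker frame follows from the tool camera's view of the marker and a pre-calibrated camera-to-tool transform.

```python
# Sketch: recover the tool pose relative to the patient marker from the
# internal camera's marker measurement. All transforms are hypothetical
# 4x4 homogeneous matrices.
import numpy as np

def make_T(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Pre-calibrated: tool tip expressed in the internal camera frame.
T_cam_tool = make_T(np.eye(3), [0.0, 0.0, 0.12])

# Measured online by the internal vision sensor: patient marker in camera frame.
theta = np.deg2rad(15.0)
R_cam_marker = np.array([[np.cos(theta), -np.sin(theta), 0],
                         [np.sin(theta),  np.cos(theta), 0],
                         [0, 0, 1]])
T_cam_marker = make_T(R_cam_marker, [0.05, -0.02, 0.30])

# Tool pose in the patient-marker frame, available even during occlusion:
T_marker_tool = np.linalg.inv(T_cam_marker) @ T_cam_tool
print(np.round(T_marker_tool, 3))
```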

Development of Autonomous Loading and Unloading for Network-based Unmanned Forklift (네트워크 기반 무인지게차를 위한 팔레트 자율적재기술의 개발)

  • Park, Jee-Hun;Kim, Min-Hwan;Lee, Suk;Lee, Kyung-Chang
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.10
    • /
    • pp.1051-1058
    • /
    • 2011
  • Unmanned autonomous forklifts have great potential to enhance the productivity of material handling in various applications because they can pick up and deliver loads without an operator or any fixed guide. In particular, automating pallet loading and unloading is useful for enhancing logistics performance and reducing the cost of the automation system. There are, however, many technical difficulties in developing such forklifts, including localization, map building, sensor fusion, and control. This is because the system requires numerous sensors, actuators, and controllers that need to be connected with each other, and the number of connections grows very rapidly as the number of devices grows. This paper presents vision-sensor-based autonomous loading and unloading for a network-based unmanned forklift in which the system components are connected to a shared CAN network. Functions such as image processing and the control algorithm are divided into small tasks that are distributed over a number of microcontrollers with limited computing capacity. The experimental results show that the proposed architecture is an appropriate choice for autonomous loading in the unmanned forklift.
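
A minimal sketch, with hypothetical arbitration ID, scaling, and bus configuration, of how a vision task on a shared CAN network might publish a pallet pose estimate to the motion-control node. It uses the python-can package on a PC for illustration; in the paper the tasks run on microcontrollers.

```python
# Sketch: pack a pallet pose estimate into a CAN frame and send it on a
# shared bus (python-can 4.x style; the ID and scaling are assumptions).
import struct
import can

PALLET_POSE_ID = 0x210          # hypothetical arbitration ID

def pack_pallet_pose(x_mm, y_mm, yaw_centideg):
    # 2 bytes each for x, y (mm) and yaw (0.01 deg), little-endian, signed.
    return struct.pack("<hhh", int(x_mm), int(y_mm), int(yaw_centideg))

def publish_pose(bus, x_mm, y_mm, yaw_deg):
    msg = can.Message(arbitration_id=PALLET_POSE_ID,
                      data=pack_pallet_pose(x_mm, y_mm, yaw_deg * 100),
                      is_extended_id=False)
    bus.send(msg)

if __name__ == "__main__":
    # Virtual bus for testing; on the forklift this would be a real interface.
    bus = can.interface.Bus(interface="virtual", channel="test")
    publish_pose(bus, x_mm=1250, y_mm=-80, yaw_deg=2.5)
```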

Vision-Based Lane Change Maneuver using Sliding Mode Control for a Vehicle (슬라이딩 모드 제어를 이용한 시각센서 기반의 차선변경제어 시스템 설계)

  • 장승호;김상우
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.8 no.6
    • /
    • pp.194-207
    • /
    • 2000
  • In this paper, we suggest a vision-based lane-change control system that can be applied on a straight road without additional sensors such as a yaw-rate sensor or a lateral accelerometer. In order to reduce the image-processing time, we predict the reference-line position during the lane change using the lateral dynamics and the inverse perspective mapping. A sliding mode control algorithm with a boundary layer is adopted to overcome variations of the parameters that significantly affect the vehicle's lateral dynamics and to reduce the chattering phenomenon. However, when sliding mode control is applied to a system with a long sampling interval, the stability of the control system may be seriously affected by that interval. Therefore, in this paper, a look-ahead offset is used instead of a lateral offset to reduce the effect of the long sampling interval caused by the image-processing time (see the sketch after this entry). The control algorithm is developed to follow a desired trajectory designed in advance; in the design of the desired trajectory, we take account of constraints on lateral acceleration and lateral jerk for ride comfort. The performance of the suggested control system is evaluated in simulations as well as field tests.

  • PDF
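
A minimal sketch of a sliding-mode steering law with a boundary layer driven by a look-ahead offset, as discussed in the abstract above. The gains, boundary-layer width, and the crude lateral dynamics are assumptions, made only to show the saturated switching term that suppresses chattering.

```python
# Sketch: sliding-mode steering with a boundary layer on a look-ahead offset.
import numpy as np

def sat(x):
    return np.clip(x, -1.0, 1.0)

def smc_steering(y_la, y_la_dot, y_ref, y_ref_dot,
                 lam=1.5, k=0.8, phi=0.2):
    """Steering command from the look-ahead lateral offset tracking error."""
    e = y_la - y_ref                 # look-ahead offset error [m]
    e_dot = y_la_dot - y_ref_dot
    s = e_dot + lam * e              # sliding surface
    return -k * sat(s / phi)         # boundary layer instead of sign(s)

# Toy simulation: crude lateral response of the look-ahead offset to steering.
dt, y, y_dot = 0.05, 1.0, 0.0        # start 1 m off the desired trajectory
for step in range(100):
    delta = smc_steering(y, y_dot, y_ref=0.0, y_ref_dot=0.0)
    y_ddot = 4.0 * delta - 0.5 * y_dot   # assumed lateral dynamics
    y_dot += y_ddot * dt
    y += y_dot * dt
print("look-ahead offset after 5 s: %.3f m" % y)
```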

A Study on the Development of a Robot Vision Control Scheme Based on the Newton-Raphson Method for the Uncertainty of Circumstance (불확실한 환경에서 N-R방법을 이용한 로봇 비젼 제어기법 개발에 대한 연구)

  • Jang, Min Woo;Jang, Wan Shik;Hong, Sung Mun
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.40 no.3
    • /
    • pp.305-315
    • /
    • 2016
  • This study aims to develop a robot vision control scheme using the Newton-Raphson (N-R) method for the circumstance uncertainty caused by the appearance of obstacles during robot movement. The vision system model used in this study involves six camera parameters (C1-C6). First, an estimation scheme for the six camera parameters is developed. Then, based on the six parameters estimated for each of the three cameras, a scheme for the robot's joint angles is developed for placing a slender bar. For the placement of the slender bar under uncertain circumstances, the discontinuous robot trajectory caused by obstacles is divided into three obstacle regions: the beginning region, the middle region, and the near-target region. The effects of obstacles on the proposed robot vision control scheme are then investigated in each obstacle region through slender-bar placement experiments.
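
A minimal sketch of Newton-Raphson estimation of six camera parameters. The paper's actual C1-C6 vision-system model is not reproduced here; a simple affine projection u = C1 + C2 X + C3 Y, v = C4 + C5 X + C6 Y is assumed purely to illustrate the iterative least-squares update with a numerical Jacobian.

```python
# Sketch: fit six camera parameters by iterating a Gauss-Newton form of the
# Newton-Raphson update, C <- C - (J^T J)^-1 J^T r, on reprojection residuals.
import numpy as np

def project(C, pts):
    u = C[0] + C[1] * pts[:, 0] + C[2] * pts[:, 1]
    v = C[3] + C[4] * pts[:, 0] + C[5] * pts[:, 1]
    return np.column_stack([u, v])

def residual(C, pts, observed):
    return (project(C, pts) - observed).ravel()

def newton_raphson(C, pts, observed, iters=20, eps=1e-6):
    for _ in range(iters):
        r = residual(C, pts, observed)
        # Numerical Jacobian of the residual w.r.t. the six parameters.
        J = np.empty((r.size, 6))
        for k in range(6):
            dC = np.zeros(6)
            dC[k] = eps
            J[:, k] = (residual(C + dC, pts, observed) - r) / eps
        C = C - np.linalg.solve(J.T @ J, J.T @ r)
    return C

# Hypothetical cue positions (robot workspace) and their observed pixels.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [0.05, 0.02]])
true_C = np.array([320.0, 900.0, -15.0, 240.0, 20.0, 880.0])
observed = project(true_C, pts) + np.random.default_rng(0).normal(0, 0.3, (5, 2))
C0 = np.array([300.0, 800.0, 0.0, 200.0, 0.0, 800.0])
print(np.round(newton_raphson(C0, pts, observed), 2))
```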

Implementation of a Stereo Vision Using Saliency Map Method

  • Choi, Hyeung-Sik;Kim, Hwan-Sung;Shin, Hee-Young;Lee, Min-Ho
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.36 no.5
    • /
    • pp.674-682
    • /
    • 2012
  • A new intelligent stereo vision sensor system was studied for the motion and depth control of unmanned vehicles. A new bottom-up saliency map model for a human-like active stereo vision system, based on the biological visual process, was developed to select a target object. If the left and right cameras successfully find the same target object, the implemented active vision system with two cameras focuses on the landmark and can detect the depth and direction information. By using this information, the unmanned vehicle can approach the target autonomously. A number of tests of the proposed bottom-up saliency map were performed, and their results are presented.
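
A simplified bottom-up saliency sketch in the spirit of the model described above: centre-surround differences on intensity and colour-opponency channels are normalised and summed, and the maximum is taken as the target candidate. The scales, weights, and input file are assumptions; the paper's biologically motivated model is considerably richer (orientation channels, camera control, and so on).

```python
# Sketch: crude bottom-up saliency map from centre-surround differences.
import cv2
import numpy as np

def center_surround(channel, sigmas=(2, 8)):
    fine = cv2.GaussianBlur(channel, (0, 0), sigmas[0])
    coarse = cv2.GaussianBlur(channel, (0, 0), sigmas[1])
    return np.abs(fine - coarse)

def saliency_map(bgr):
    img = bgr.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)
    intensity = (b + g + r) / 3.0
    rg = r - g                       # red-green opponency
    by = b - (r + g) / 2.0           # blue-yellow opponency
    sal = sum(center_surround(c) for c in (intensity, rg, by))
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)

if __name__ == "__main__":
    frame = cv2.imread("left_camera.png")           # hypothetical input frame
    sal = saliency_map(frame)
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    print("most salient target candidate at pixel (%d, %d)" % (x, y))
```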