• Title/Abstract/Keywords: vision-based control

Search results: 683 (processing time: 0.03 s)

얇은막대 배치작업에 대한 N-R 과 EKF 방법을 이용하여 개발한 로봇 비젼 제어알고리즘의 평가 (Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKF Methods for Slender Bar Placement)

  • 손재경;장완식;홍성문
    • 대한기계학회논문집A / Vol. 37 No. 4 / pp.447-459 / 2013
  • Applying vision systems on real industrial sites still poses many problems to be solved, such as the accuracy of the kinematic model in the robot vision control algorithm, calibration of the camera focal length and orientation while the robot moves, and understanding of the mapping from three-dimensional physical coordinates to two-dimensional camera coordinates. The vision system model proposed in this paper allows control even when the relative position between the camera and the robot is unknown, and a vision system model with six camera parameters is presented to resolve the camera calibration problem; using this model, the N-R and EKF methods were applied to develop the robot vision control algorithms. Finally, the position accuracy and data processing time of the robot vision control algorithms developed with the N-R and EKF methods were compared by performing a slender-bar placement task.
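The abstract does not reproduce the estimator equations, so the following is only a minimal sketch of an extended Kalman filter update for a six-parameter camera model. The parameter layout (translation plus roll-pitch-yaw), the normalized pinhole projection, and all numerical values are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

# Hypothetical six-parameter camera model: theta = (tx, ty, tz, roll, pitch, yaw)
# projecting a known 3-D cue point on the robot into 2-D image coordinates.
def project(theta, point_3d):
    tx, ty, tz, r, p, y = theta
    cr, sr, cp, sp, cy, sy = np.cos(r), np.sin(r), np.cos(p), np.sin(p), np.cos(y), np.sin(y)
    R = np.array([[cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
                  [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
                  [-sp,   cp*sr,            cp*cr]])
    Xc, Yc, Zc = R @ point_3d + np.array([tx, ty, tz])
    return np.array([Xc / Zc, Yc / Zc])          # normalized pinhole projection

def numerical_jacobian(f, theta, eps=1e-6):
    J = np.zeros((2, len(theta)))
    for i in range(len(theta)):
        d = np.zeros_like(theta); d[i] = eps
        J[:, i] = (f(theta + d) - f(theta - d)) / (2 * eps)
    return J

def ekf_update(theta, P, z, point_3d, R_meas):
    """One EKF measurement update of the six camera parameters."""
    h = lambda th: project(th, point_3d)
    H = numerical_jacobian(h, theta)
    S = H @ P @ H.T + R_meas
    K = P @ H.T @ np.linalg.inv(S)
    theta = theta + K @ (z - h(theta))
    P = (np.eye(len(theta)) - K @ H) @ P
    return theta, P

# Example with illustrative values (not the paper's data).
theta0 = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
theta1, P1 = ekf_update(theta0, np.eye(6) * 0.1, np.array([0.05, -0.02]),
                        np.array([0.1, 0.1, 0.0]), np.eye(2) * 1e-4)
```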

Integrated System for Autonomous Proximity Operations and Docking

  • Lee, Dae-Ro;Pernicka, Henry
    • International Journal of Aeronautical and Space Sciences / Vol. 12 No. 1 / pp.43-56 / 2011
  • An integrated guidance, navigation, and control (GNC) system for autonomous proximity operations and docking of two spacecraft was developed. The position maneuvers were determined through the integration of the state-dependent Riccati equation, formulated from nonlinear relative motion dynamics, and relative navigation using a rendezvous laser vision (lidar) and a vision sensor system. In the vision sensor system, a switch between sensors was made along the approach phase in order to provide continuously effective navigation. As an extension of the rendezvous laser vision system, an automated terminal guidance scheme based on the Clohessy-Wiltshire state transition matrix was used to formulate a "V-bar hopping approach" reference trajectory. A proximity operations strategy was then adapted from the approach strategy used with the Automated Transfer Vehicle. The attitude maneuvers, determined from a linear quadratic Gaussian-type controller with quaternion-based attitude estimation using star trackers or a vision sensor system, provided precise attitude control and robustness under uncertainties in the moments of inertia and external disturbances. These functions were then integrated into an autonomous GNC system that can perform proximity operations and meet all conditions for successful docking. A six-degree-of-freedom simulation was used to demonstrate the effectiveness of the integrated system.
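The Clohessy-Wiltshire state transition matrix referred to above has a well-known closed form; the sketch below evaluates it for a state ordered as radial, along-track, and cross-track position and velocity. The mean motion and the example hop segment are illustrative values, not mission parameters from the paper.

```python
import numpy as np

def cw_stm(n, t):
    """Clohessy-Wiltshire state transition matrix for the state
    [x, y, z, vx, vy, vz] (x radial, y along-track, z cross-track)."""
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3*c,      0, 0,    s/n,          2*(1 - c)/n,     0],
        [6*(s - n*t),  1, 0,   -2*(1 - c)/n,  (4*s - 3*n*t)/n, 0],
        [0,            0, c,    0,            0,               s/n],
        [3*n*s,        0, 0,    c,            2*s,             0],
        [-6*n*(1 - c), 0, 0,   -2*s,          4*c - 3,         0],
        [0,            0, -n*s, 0,            0,               c],
    ])

# Example: propagate a relative state over one V-bar hop segment (illustrative values).
n = 0.00113  # rad/s, roughly an ISS-like mean motion (assumption)
state0 = np.array([0.0, -200.0, 0.0, 0.0, 0.0, 0.0])   # 200 m behind on the V-bar
state = cw_stm(n, 300.0) @ state0                       # state 300 s later
```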

컴퓨터 비젼시스템을 이용한 로봇시스템의 강체 배치 실험에 대한 연구 (A Study on the Rigid Body Placement Task of a Robot System Based on the Computer Vision System)

  • 장완식;유창규;신광수;김호윤
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 1995년도 추계학술대회 논문집 / pp.1114-1119 / 1995
  • This paper presents the development of an estimation model and a control method based on a new computer vision approach. The proposed control method is accomplished using a sequential estimation scheme that permits placement of the rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed based on a model that generalizes the known 4-axis SCARA robot kinematics to accommodate the unknown relative camera position and orientation. Based on the parameters estimated for each camera, the joint angles of the robot are estimated by an iterative method. The method is tested experimentally in two ways: an estimation model test and a three-dimensional rigid-body placement task. These results show that the control scheme used is precise and robust. This feature can open the door to a range of applications of multi-axis robots, such as assembly and welding.
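The abstract names "an iterative method" without giving its equations; a minimal Newton-Raphson/Gauss-Newton style sketch of refining joint angles against target image coordinates is shown below. The vision model and forward kinematics are left as a placeholder function `predict_image`, which is an assumption, not the paper's model.

```python
import numpy as np

def newton_raphson_joints(predict_image, q0, target_uv, iters=20, eps=1e-6, tol=1e-8):
    """Iteratively refine joint angles q so that the predicted image coordinates
    of the rigid-body cue points approach the target image coordinates.
    `predict_image(q)` stands in for the estimated vision model applied to the
    robot forward kinematics; it returns a flat array of (u, v) pairs."""
    q = np.asarray(q0, dtype=float)
    for _ in range(iters):
        r = target_uv - predict_image(q)               # image-space residual
        J = np.zeros((r.size, q.size))
        for i in range(q.size):                        # numerical Jacobian of the residual
            d = np.zeros_like(q); d[i] = eps
            J[:, i] = (predict_image(q + d) - predict_image(q - d)) / (2 * eps)
        dq, *_ = np.linalg.lstsq(J, r, rcond=None)     # Gauss-Newton step
        q = q + dq
        if np.linalg.norm(dq) < tol:
            break
    return q
```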

비전시스템 기반 군집주행 이동로봇들의 삼차원 위치 및 자세 추정 (Three-Dimensional Pose Estimation of Neighbor Mobile Robots in Formation System Based on the Vision System)

  • 권지욱;박문수;좌동경;홍석교
    • 제어로봇시스템학회논문지 / Vol. 15 No. 12 / pp.1223-1231 / 2009
  • We derive a systematic and iterative calibration algorithm, and a position and pose estimation algorithm, for mobile robots in a formation system based on a vision system. In addition, we develop a coordinate matching algorithm that calculates the matched order between the extracted image coordinates and the object coordinates for non-interactive calibration and pose estimation. Based on the results of calibration, we also develop a camera simulator to confirm the calibration results and to compare the simulation results with those of experiments in position and pose estimation.
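The camera simulator is not specified in detail in this abstract; a minimal pinhole-projection stand-in of the kind such a simulator would need is sketched below, with the intrinsic and extrinsic values chosen purely for illustration.

```python
import numpy as np

def simulate_camera(points_world, K, R, t):
    """Project 3-D object points into pixel coordinates with a pinhole model,
    as a simple stand-in for a calibration-checking camera simulator."""
    pts_cam = R @ points_world.T + t.reshape(3, 1)   # world frame -> camera frame
    uv = K @ pts_cam                                 # perspective projection
    return (uv[:2] / uv[2]).T                        # normalize by depth -> (N, 2) pixels

# Illustrative intrinsics/extrinsics (assumed values, not the paper's calibration).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
object_pts = np.array([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])
print(simulate_camera(object_pts, K, R, t))
```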

로봇 비젼시스템을 이용한 강체 배치 실험에 대한 연구 (A Study on the Rigid-Body Placement Task Based on a Robot Vision System)

  • 장완식;신광수;안철봉
    • 한국정밀공학회지 / Vol. 15 No. 11 / pp.100-107 / 1998
  • This paper presents the development of an estimation model and a control method based on a new robot vision approach. The proposed control method is accomplished using a sequential estimation scheme that permits placement of the rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed based on a model that generalizes the known 4-axis SCARA robot kinematics to accommodate the unknown relative camera position and orientation. Based on the parameters estimated for each camera, the joint angles of the robot are estimated by an iterative method. The method is experimentally tested in two ways: an estimation model test and a three-dimensional rigid-body placement task. These results show that the control scheme used is precise and robust. This feature can open the door to a range of applications of multi-axis robots, such as assembly and welding.

Object Recognition using Smart Tag and Stereo Vision System on Pan-Tilt Mechanism

  • Kim, Jin-Young;Im, Chang-Jun;Lee, Sang-Won;Lee, Ho-Gil
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2005년도 ICCAS / pp.2379-2384 / 2005
  • We propose a novel method for object recognition using a smart tag system with stereo vision on a pan-tilt mechanism. We developed a smart tag that includes an IRED device; the smart tag is attached to the object. We also developed a stereo vision system that pans and tilts so that the object image is centered in each camera's whole image view. The stereo vision system on the pan-tilt mechanism can map the position of the IRED to the robot coordinate system by using the pan-tilt angles. Then, to map the size and pose of the object into the robot coordinate system, we used a simple model-based vision algorithm. To increase the practicality of tag-based object recognition, we implemented our approach using techniques that are as easy and simple as possible.
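As a rough illustration of mapping the IRED position into the robot coordinate system with the pan-tilt angles, the sketch below rotates a stereo-triangulated point from the head frame into the robot base frame. The axis conventions, mounting offset, and numbers are assumptions, not the authors' hardware setup.

```python
import numpy as np

def rot_z(a):
    """Rotation about the vertical (pan) axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    """Rotation about the horizontal (tilt) axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def ired_in_robot_frame(point_head, pan, tilt, head_origin):
    """Map a stereo-triangulated IRED position, expressed in the pan-tilt head
    frame, into the robot coordinate system using the commanded pan and tilt
    angles and the (assumed) head mounting offset."""
    return rot_z(pan) @ rot_y(tilt) @ np.asarray(point_head) + np.asarray(head_origin)

# Example with illustrative values: tag triangulated 1.2 m ahead of the head,
# head panned 10 deg, tilted down 5 deg, mounted 0.3 m above the robot origin.
p = ired_in_robot_frame([0.0, 0.0, 1.2], np.deg2rad(10), np.deg2rad(-5), [0.0, 0.0, 0.3])
```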

캠시프트와 KLT특징 추적 알고리즘을 융합한 모바일 로봇의 영상기반 사람추적 및 추종 (A vision based people tracking and following for mobile robots using CAMSHIFT and KLT feature tracker)

  • 이상진;원문철
    • 한국멀티미디어학회논문지 / Vol. 17 No. 7 / pp.787-796 / 2014
  • Many mobile robot navigation methods utilize laser scanners, ultrasonic sensors, vision cameras, and so on for detecting obstacles and following paths. Humans, however, use only vision (i.e., the eyes) for navigation. In this paper, we study a mobile robot control method based only on camera vision. A Gaussian mixture model and a shadow removal technique are used to separate the foreground from the background in the camera image. The mobile robot uses a combination of the CAMSHIFT and KLT feature tracker algorithms, applied to the foreground information, to follow a person. The algorithm is verified by experiments in which a person is tracked and followed by a robot in a hallway.
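A rough OpenCV sketch of the pipeline described above follows: Gaussian-mixture background subtraction with shadow suppression, a hue-histogram CAMSHIFT window, and KLT feature tracking inside that window. The camera index, initial window, and thresholds are placeholders rather than the paper's implementation, and the robot velocity command is omitted.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                               # assumed camera index
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
track_window = (100, 100, 80, 160)                      # illustrative initial person window
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
hist, prev_gray, prev_pts = None, None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg.apply(frame)
    fg[fg == 127] = 0                                   # MOG2 marks shadow pixels as 127; drop them
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    if hist is None:                                    # build the target hue histogram once
        x, y, w, h = track_window
        hist = cv2.calcHist([hsv[y:y+h, x:x+w]], [0], fg[y:y+h, x:x+w], [16], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    prob = cv2.bitwise_and(prob, prob, mask=fg)         # restrict CAMSHIFT to foreground pixels
    _, track_window = cv2.CamShift(prob, track_window, term)

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_pts is not None and len(prev_pts) > 0:      # KLT step on previously detected features
        prev_pts, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
        prev_pts = prev_pts[st.flatten() == 1]
    else:                                               # (re)detect features inside the CAMSHIFT window
        x, y, w, h = track_window
        mask = np.zeros_like(gray)
        mask[y:y+h, x:x+w] = 255
        prev_pts = cv2.goodFeaturesToTrack(gray, 100, 0.01, 5, mask=mask)
    prev_gray = gray
    # (robot velocity command derived from the window centre omitted in this sketch)
```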

Feature Extraction for Vision Based Micromanipulation

  • Jang, Min-Soo;Lee, Seok-Joo;Park, Gwi-Tae
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2002년도 ICCAS / pp.41.5-41 / 2002
  • This paper presents a feature extraction algorithm for vision-based micromanipulation. To guarantee accurate micromanipulation, most micromanipulation systems use a vision sensor. Vision data from an optical microscope or a high-magnification lens contain a vast amount of information; however, characteristics of micro images such as emphasized contours, texture, and noise make it difficult to apply macro image processing algorithms to micro images. Grasping-point extraction is a very important task in micromanipulation because inaccurate grasping points can cause breakdown of the micro gripper or loss of the micro objects. To solve those problems and extract grasping points for micromanipulation...
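The abstract is truncated before the proposed method is stated, so the sketch below is only a generic illustration of grasping-point extraction from a noisy micro image (largest contour, minimum-area rectangle, midpoints of its short sides); it is not the authors' algorithm.

```python
import cv2
import numpy as np

def grasping_points(micro_image_gray):
    """Illustrative grasping-point extraction: smooth the noisy micro image,
    take the largest contour as the object, and return the midpoints of the
    two short sides of its minimum-area rectangle as candidate grasp points."""
    blur = cv2.GaussianBlur(micro_image_gray, (5, 5), 0)
    _, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    obj = max(contours, key=cv2.contourArea)
    box = cv2.boxPoints(cv2.minAreaRect(obj))           # 4 corners of the fitted rectangle
    sides = [(box[i], box[(i + 1) % 4]) for i in range(4)]
    sides.sort(key=lambda s: np.linalg.norm(s[0] - s[1]))
    short1, short2 = sides[0], sides[1]                 # grasp across the two short sides
    return (short1[0] + short1[1]) / 2, (short2[0] + short2[1]) / 2
```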

비전 센서를 갖는 이동 로봇의 복도 주행 시 직진 속도 제어 (Linear Velocity Control of the Mobile Robot with the Vision System at Corridor Navigation)

  • 권지욱;홍석교;좌동경
    • 제어로봇시스템학회논문지 / Vol. 13 No. 9 / pp.896-902 / 2007
  • This paper proposes a vision-based kinematic control method for mobile robots with an on-board camera. In the previous literature on the control of mobile robots using camera vision information, the forward velocity is set to a constant and only the rotational velocity of the robot is controlled. More efficient motion, however, can be obtained by also controlling the forward velocity depending on the position in the corridor. Thus, both forward and rotational velocities are controlled in the proposed method, so that the mobile robot can move faster when the corner of the corridor is far away and slows down as it approaches the dead end of the corridor. In this way, smooth turning motion along the corridor is possible. To this end, visual information from the camera is used to obtain the perspective lines and the distance from the current robot position to the dead end. Then, the vanishing point and a pseudo desired position are obtained, and the forward and rotational velocities are controlled by the LOS (Line Of Sight) guidance law. Both numerical and experimental results are included to demonstrate the validity of the proposed method.
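Neither the exact LOS guidance law nor its gains are given in the abstract; the sketch below shows one simple reading of the idea, intersecting two corridor edge lines to get the vanishing point and scaling the forward speed with the remaining distance to the corridor end. The gains, the speed limit, and the example lines are illustrative assumptions.

```python
import numpy as np

def vanishing_point(line_a, line_b):
    """Intersect two corridor edge lines given in homogeneous form (a, b, c),
    i.e. a*x + b*y + c = 0, and return the intersection in pixel coordinates."""
    p = np.cross(line_a, line_b)
    return p[:2] / p[2]

def los_velocities(vp_x, image_center_x, dist_to_end, v_max=0.5, k_w=0.002, k_v=0.2):
    """LOS-style guidance sketch: steer toward the vanishing point and scale the
    forward speed down as the distance to the corridor end shrinks."""
    w = -k_w * (vp_x - image_center_x)   # rotational velocity [rad/s], turn toward the vanishing point
    v = min(v_max, k_v * dist_to_end)    # forward velocity [m/s], slow down near the dead end
    return v, w

# Example: corridor edges y = 0.5x + 10 and y = -0.5x + 470 in a 640x480 image,
# 5 m remaining to the corridor end (illustrative values).
vp = vanishing_point(np.array([0.5, -1.0, 10.0]), np.array([-0.5, -1.0, 470.0]))
v, w = los_velocities(vp[0], 320.0, 5.0)
```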

스테레오 영상을 이용한 이동형 머니퓰레이터의 시각제어 (Visual Servoing of a Mobile Manipulator Based on Stereo Vision)

  • 이현정;박민규;이민철
    • 제어로봇시스템학회논문지 / Vol. 11 No. 5 / pp.411-417 / 2005
  • In this study, a stereo vision system is applied to a mobile manipulator for effective task execution. The robot can recognize a target and compute the position of the target using the stereo vision system. While a monocular vision system needs additional properties such as the geometric shape of a target, a stereo vision system enables the robot to find the position of a target without such information. Many algorithms have been studied and developed for object recognition; however, most of these approaches suffer from computational complexity and are inadequate for real-time visual servoing. Color information is useful for simple recognition in real-time visual servoing. This paper addresses object recognition using color, a stereo matching method that reduces calculation time, recovery of the 3D space, and the visual servoing itself.
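As a minimal illustration of the color-based recognition and depth recovery described above, the sketch below thresholds an assumed target color in a rectified stereo pair and triangulates depth from the centroid disparity; the color bounds and camera parameters are placeholders, not the paper's values.

```python
import cv2
import numpy as np

def locate_target(left_bgr, right_bgr, hsv_low, hsv_high, fx, fy, cx, cy, baseline):
    """Illustrative color-based recognition plus stereo depth recovery:
    threshold the target color in both rectified images, take the centroid of
    the color mask in each, and triangulate depth from the horizontal disparity."""
    def centroid(img):
        mask = cv2.inRange(cv2.cvtColor(img, cv2.COLOR_BGR2HSV), hsv_low, hsv_high)
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None
        return m["m10"] / m["m00"], m["m01"] / m["m00"]

    cl, cr = centroid(left_bgr), centroid(right_bgr)
    if cl is None or cr is None:
        return None                               # target color not found in one image
    disparity = cl[0] - cr[0]
    if disparity <= 0:
        return None                               # degenerate match
    Z = fx * baseline / disparity                 # depth from disparity
    X = (cl[0] - cx) * Z / fx
    Y = (cl[1] - cy) * Z / fy
    return np.array([X, Y, Z])                    # target position in the left camera frame
```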