• Title/Summary/Keyword: Robot Vision

Search results: 878 items (processing time: 0.027 s)

비전 센서를 갖는 이동 로봇의 복도 주행 시 직진 속도 제어 (Linear Velocity Control of the Mobile Robot with the Vision System at Corridor Navigation)

  • 권지욱;홍석교;좌동경
    • 제어로봇시스템학회논문지 / Vol. 13, No. 9 / pp.896-902 / 2007
  • This paper proposes a vision-based kinematic control method for mobile robots with an on-board camera. In the previous literature on the control of mobile robots using camera vision information, the forward velocity is set to a constant, and only the rotational velocity of the robot is controlled. More efficient motion, however, is possible by also controlling the forward velocity depending on the position in the corridor. Thus, both forward and rotational velocities are controlled in the proposed method so that the mobile robot moves faster when the corner of the corridor is far away and slows down as it approaches the dead end of the corridor. In this way, smooth turning motion along the corridor is possible. To this end, visual information from the camera is used to obtain the perspective lines and the distance from the current robot position to the dead end. Then, the vanishing point and the pseudo desired position are obtained, and the forward and rotational velocities are controlled by the LOS (Line Of Sight) guidance law. Both numerical and experimental results are included to demonstrate the validity of the proposed method.
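
A minimal sketch of a velocity law of the kind described above, assuming the vanishing point and the distance to the dead end have already been extracted from the image; the gains `v_max`, `k_w`, and `slow_down_radius` are illustrative values, not from the paper:

```python
def los_velocities(vanishing_x, image_center_x, dist_to_dead_end,
                   v_max=0.5, k_w=0.002, slow_down_radius=2.0):
    """LOS-style guidance sketch: rotation steers toward the vanishing
    point, forward speed scales with the remaining corridor length."""
    # Rotational velocity: proportional to the horizontal pixel offset
    # between the vanishing point and the image center.
    w = -k_w * (vanishing_x - image_center_x)
    # Forward velocity: full speed when the dead end is far away,
    # linear slow-down inside `slow_down_radius` meters.
    v = v_max * min(1.0, dist_to_dead_end / slow_down_radius)
    return v, w
```

With the vanishing point centered and the dead end far away, the robot drives straight at full speed; near the dead end the forward speed drops while the turn command remains active.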

용접 형상 측정용 시각 센서 모듈 개발 (Development of Vision Sensor Module for the Measurement of Welding Profile)

  • 김창현;최태용;이주장;서정;박경택;강회신
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2006년도 춘계학술대회 논문집 / pp.285-286 / 2006
  • The essential tasks in operating a welding robot are the acquisition of the position and/or shape of the parent metal. For seam tracking or robot automation, many kinds of contact and non-contact sensors are used; recently, the vision sensor has become the most popular. In this paper, the development of a system which measures the profile of the welding part is described. The total system is assembled into a compact module which can be attached to the head of a welding robot system. The system uses a line-type structured laser diode and a vision sensor, and implements the Direct Linear Transformation (DLT) for camera calibration as well as radial distortion correction. The three-dimensional shape of the parent metal is obtained after a simple linear transformation, and therefore the system operates in real time. Some experiments are carried out to evaluate the performance of the developed system.
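
The DLT calibration step mentioned above can be sketched as a standard least-squares estimate of the 3x4 projection matrix via SVD; the function names are mine, and the radial distortion correction the paper also performs is omitted here for brevity:

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Direct Linear Transformation: estimate the 3x4 projection matrix P
    from >= 6 non-coplanar world-to-image point correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The solution (up to scale) is the right singular vector belonging
    # to the smallest singular value of the stacked constraint matrix.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Apply P to a 3D point and dehomogenize to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```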

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal / Vol. 36, No. 6 / pp.913-923 / 2014
  • This paper proposes a global mapping algorithm for multiple robots, based on an omnidirectional-vision simultaneous localization and mapping (SLAM) approach that extracts objects using Lucas-Kanade optical flow motion detection from images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map using the map data obtained from all of the individual robots. Global mapping normally takes a long time to process because map data are exchanged between individual robots while searching all areas. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all information around a robot simultaneously. The computational cost of the correction algorithm is reduced compared with existing methods because only the objects' feature points are corrected. The proposed algorithm has two steps: first, a local map is created for each robot based on the omnidirectional-vision SLAM approach; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified through a comparison of the maps based on the proposed algorithm with real maps.
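
The map-merging step might look like the following sketch, assuming each robot's occupancy grid has already been transformed into a common global frame; the cell convention (-1 unknown, 0 free, 1 occupied) and the occupied-wins merge rule are my assumptions, not the paper's exact method:

```python
import numpy as np

def merge_local_maps(local_maps):
    """Merge per-robot occupancy grids (same shape, common frame) into one
    global map.  Cells: -1 unknown, 0 free, 1 occupied."""
    global_map = np.full(local_maps[0].shape, -1, dtype=int)
    for m in local_maps:
        observed = m >= 0
        # Take the maximum so a cell any robot observed overrides 'unknown',
        # and an 'occupied' report is never erased by a 'free' report.
        global_map[observed] = np.maximum(global_map[observed], m[observed])
    return global_map
```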

융합 센서 네트워크 정보로 보정된 관성항법센서를 이용한 추측항법의 위치추정 향상에 관한 연구 (Study on the Localization Improvement of the Dead Reckoning using the INS Calibrated by the Fusion Sensor Network Information)

  • 최재영;김성관
    • 제어로봇시스템학회논문지 / Vol. 18, No. 8 / pp.744-749 / 2012
  • In this paper, we suggest how to improve the accuracy of a mobile robot's localization by using sensor network information which fuses a machine vision camera, an encoder, and an IMU sensor. The heading value of the IMU sensor is measured by a geomagnetic sensor based on the magnetic field; however, this sensor is constantly affected by its surrounding environment. So, to increase the sensor's accuracy, we isolated a template of the ceiling using the vision camera, measured the angles by a pattern matching algorithm, and calibrated the IMU sensor by comparing the obtained values with the IMU sensor values and computing the offset. The values used to obtain the robot's position (from the encoder, the IMU sensor, and the angle measurement of the vision camera) are transferred to the host PC over a wireless network. The host PC then estimates the location of the robot using all these values. As a result, we were able to obtain more accurate position estimates than when using the IMU sensor alone.
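
The offset-style heading calibration and the dead-reckoning update described above can be sketched as follows; the class and function names are illustrative, and the ceiling-pattern angle is treated as ground truth whenever a match is available:

```python
import math

class HeadingCalibrator:
    """When the ceiling pattern is matched, store the difference between
    the camera angle and the raw IMU heading; between matches, correct
    the raw IMU value with that stored offset."""
    def __init__(self):
        self.offset = 0.0

    def update_offset(self, imu_heading, vision_heading):
        # Vision angle taken as reference; IMU drift becomes the offset.
        self.offset = vision_heading - imu_heading

    def corrected(self, imu_heading):
        return imu_heading + self.offset

def dead_reckon(x, y, heading, dist):
    """One dead-reckoning step: advance by the encoder distance along
    the (calibrated) heading, in radians."""
    return x + dist * math.cos(heading), y + dist * math.sin(heading)
```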

Implementation of Enhanced Vision for an Autonomous Map-based Robot Navigation

  • Roland, Cubahiro;Choi, Donggyu;Kim, Minyoung;Jang, Jongwook
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2021년도 추계학술대회 / pp.41-43 / 2021
  • Robot Operating System (ROS) has been a prominent and successful framework used in robotics business and academia. However, the framework has long been focused on and limited to navigation of robots and manipulation of objects in the environment. This focus leaves out other important fields such as speech recognition, vision abilities, etc. Our goal is to take advantage of ROS's capacity to integrate additional libraries of programming functions aimed at real-time computer vision with a depth-image camera. In this paper we focus on the implementation of upgraded vision with the help of a depth camera, which provides high-quality data for a much enhanced and more accurate understanding of the environment. The varied data from the cameras are then incorporated into the ROS communication structure for any potential use. For this particular case, the system uses OpenCV libraries to manipulate the data from the camera and provide face-detection capability to the robot while navigating an indoor environment. The whole system has been implemented and tested on the latest Turtlebot3 and Raspberry Pi 4 hardware.

비전 카메라 기반의 무논환경 자율주행 로봇을 위한 중심영역 추출 정보를 이용한 주행기준선 추출 알고리즘 (Guidance Line Extraction Algorithm using Central Region Data of Crop for Vision Camera based Autonomous Robot in Paddy Field)

  • 최근하;한상권;박광호;김경수;김수현
    • 로봇학회논문지 / Vol. 11, No. 1 / pp.1-8 / 2016
  • In this paper, we propose a new guidance line extraction algorithm for a vision camera based autonomous agricultural robot in a paddy field. Finding the central point or area of a rice row is an important step in guidance line extraction. We use the central region data of the crop, exploiting the fact that the rice leaves converge toward the central area of the rice row, in order to improve the accuracy of the guidance line. The guidance line is extracted from the intersection points of extended virtual lines using a modified robust regression. The extended virtual lines are obtained by extending each segmented straight line created on the edges of the rice plants in the image using the Hough transform. We have also verified the accuracy of the proposed algorithm by experiments in a real wet paddy.
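
The intersection-of-extended-lines step can be sketched as follows, with lines in slope-intercept form; the coordinate-wise median used here is a simple stand-in for the paper's modified robust regression:

```python
import numpy as np
from itertools import combinations

def intersection(l1, l2):
    """Intersection of two lines given as (slope, intercept); None if parallel."""
    (a1, b1), (a2, b2) = l1, l2
    if abs(a1 - a2) < 1e-12:
        return None
    x = (b2 - b1) / (a1 - a2)
    return x, a1 * x + b1

def guidance_point(lines):
    """Robust estimate of the crop-row convergence point: the coordinate-wise
    median of all pairwise intersections of the extended virtual lines."""
    pts = [p for l1, l2 in combinations(lines, 2)
           if (p := intersection(l1, l2)) is not None]
    return tuple(np.median(np.array(pts), axis=0))
```

Outliers from mis-detected leaf edges pull a least-squares intersection badly; the median keeps the estimate near the true convergence point.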

단일 카메라를 이용한 이동 로봇의 위치 추정과 주행 제어 (Position estimation and navigation control of mobile robot using mono vision)

  • 이기철;이성렬;박민용;김현태;고재원
    • 제어로봇시스템학회논문지 / Vol. 5, No. 5 / pp.529-539 / 1999
  • This paper suggests a new image analysis method and an indoor navigation control algorithm for mobile robots using a mono vision system. In order to reduce the positional uncertainty which is generated as the robot travels around the workspace, we propose a new visual landmark recognition algorithm with a 2-D graph world model which describes the workspace as only a rough plane figure. The suggested algorithm was implemented on our mobile robot and tested in a real corridor using an extended Kalman filter. The validity and performance of the proposed algorithm were verified by showing that the trajectory deviation error was maintained under 0.075 m and the position estimation error was sustained under 0.05 m in the resulting navigation trajectory.
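
A generic EKF measurement update of the kind used above might look like the following; the state and measurement models are placeholders, not the paper's landmark model:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update.
    x: state estimate, P: state covariance, z: measurement,
    h: measurement function h(x), H: its Jacobian, R: measurement noise."""
    y = z - h(x)                       # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y                  # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P  # reduced uncertainty
    return x_new, P_new
```

In the linear case (direct position measurement, identity Jacobian), this reduces to the ordinary Kalman update, which is a convenient sanity check.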

시각 장치를 사용한 조선 소조립 라인에서의 용접부재 위치 인식 (Position Estimation of the Welding Panels for Sub-assembly line in Shipbuilding by Vision System)

  • 노영준;고국원;조형석;윤재웅;전자롬
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 1997년도 춘계학술대회 논문집 / pp.719-723 / 1997
  • Welding automation in the ship manufacturing process, especially in the sub-assembly line, is considered a difficult job because the welding parts are too huge, varied, and unstructured for a welding robot to weld fully automatically. The welding process at the sub-assembly line for ship manufacturing is to join various stiffeners onto the base panel. In order to realize automatic robot welding in the sub-assembly line, the robot has to be equipped with a sensing system to recognize the position of the parts. In this research, we developed a vision system to detect the position of the base panel in the sub-assembly line of the shipbuilding process. The vision system is composed of one CCD camera attached to the base of the robot and two 500 W halogen lamps for active illumination. In the image processing algorithm, the base panel is represented by two sets of lines located at its two corners through the Hough transform. However, the captured image contains various noise lines caused by highlights, scratches, stiffeners, rollers in the conveyor, and so on; this noise can be eliminated by region segmentation and thresholding in the Hough transform domain. The matching process to recognize the position of the weld panel is executed by finding patterns in the Hough-transformed domain. A set of experiments performed in the sub-assembly line shows the effectiveness of the proposed algorithm.
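
A minimal Hough transform with vote thresholding, the mechanism used above to suppress spurious noise lines, can be sketched as follows; the resolution parameters are illustrative and the input is a list of edge-pixel coordinates rather than a full image:

```python
import numpy as np

def hough_lines(points, rho_res=1.0, n_theta=180, threshold=3):
    """Vote in (rho, theta) space for each edge point and keep only cells
    with at least `threshold` votes; short noise lines fail the threshold."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    votes = {}
    for x, y in points:
        for t, theta in enumerate(thetas):
            # Normal-form line equation: rho = x*cos(theta) + y*sin(theta)
            r = int(round((x * np.cos(theta) + y * np.sin(theta)) / rho_res))
            votes[(r, t)] = votes.get((r, t), 0) + 1
    return [(r * rho_res, float(thetas[t]))
            for (r, t), v in votes.items() if v >= threshold]
```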

IR 센서와 영상정보를 이용한 다 개체 로봇의 장애물 회피 방법 (Obstacle Avoidance Method for Multi-Agent Robots Using IR Sensor and Image Information)

  • 전병승;이도영;최인환;모영학;박정민;임묘택
    • 제어로봇시스템학회논문지 / Vol. 18, No. 12 / pp.1122-1131 / 2012
  • This paper presents an obstacle avoidance method for scout robots or industrial robots in unknown environments using an IR sensor and a vision system. In the proposed method, robots share information about where obstacles are located in real time, so that each robot can choose the best path for obstacle avoidance. Using the IR sensor and vision system, multiple robots efficiently evade obstacles by the proposed cooperation method. No landmarks are used on the wall or floor of the experiment environment, and the obstacles do not have a specific color or shape. To get information about an obstacle, the vision system extracts the obstacle coordinates using an image labeling method. The information obtained by the IR sensor is the obstacle range and the locomotion direction, used to decide the optimal path for avoiding the obstacle. The experiment was conducted in a 7 m × 7 m indoor environment with two-wheeled mobile robots. It is shown that multiple robots efficiently move along the optimal path in cooperation with each other in a space where obstacles are located.
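
The image labeling step that extracts obstacle coordinates can be sketched as a BFS connected-component labeling over a binary obstacle mask; the 4-connectivity and the choice of blob centroid as the obstacle coordinate are my assumptions:

```python
import numpy as np
from collections import deque

def label_components(binary):
    """Label each 4-connected blob of 1s in a binary image and return the
    label map plus the centroid of each blob (the obstacle coordinate)."""
    binary = np.asarray(binary)
    labels = np.zeros(binary.shape, dtype=int)
    centroids, current = [], 0
    for i, j in zip(*np.nonzero(binary)):
        if labels[i, j]:
            continue                     # already part of an earlier blob
        current += 1
        queue, cells = deque([(i, j)]), []
        labels[i, j] = current
        while queue:                     # BFS flood fill of one blob
            r, c = queue.popleft()
            cells.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < binary.shape[0] and 0 <= nc < binary.shape[1]
                        and binary[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
        centroids.append(tuple(np.mean(cells, axis=0)))
    return labels, centroids
```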

군집 로봇의 동시적 위치 추정 및 지도 작성 (Simultaneous Localization and Mapping For Swarm Robot)

  • 문현수;신상근;주영훈
    • 한국지능시스템학회논문지 / Vol. 21, No. 3 / pp.296-301 / 2011
  • This paper proposes a simultaneous localization and mapping (SLAM) system for swarm robots. The robots use ultrasonic sensors and a vision sensor to perceive the surrounding environment. The experimental environment was divided into three regions; in each region, a robot measured distance information about its surroundings with the ultrasonic sensors and recognized landmarks by matching the feature points of each landmark against the images from the vision sensor using the SURF algorithm. The proposed method is not sensitive to sensor errors, and its applicability was demonstrated by building a comparatively accurate map of the experimental environment.
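
The landmark matching step can be sketched as nearest-neighbor matching with a ratio test over descriptor vectors; real SURF descriptors would come from a vision library, so the plain numpy rows here are stand-ins:

```python
import numpy as np

def match_descriptors(query, landmark, ratio=0.7):
    """For each query descriptor, find its nearest landmark descriptor and
    accept the match only if it is clearly better than the second-nearest
    (Lowe-style ratio test), which rejects ambiguous feature points."""
    matches = []
    for i, d in enumerate(query):
        dists = np.linalg.norm(landmark - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

A landmark is then declared recognized when the number of accepted matches against its stored feature points exceeds some count threshold.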