• Title/Summary/Keyword: Robot Vision (로봇비젼)


A Study on the Construction of an Omnidirectional Vision System for the Mobile Robot's Autonomous Navigation (이동로봇의 자율주행을 위한 전방향 비젼 시스템의 구현에 관한 연구)

  • 고민수;한영환;이응혁;홍승홍
    • Proceedings of the IEEK Conference / 2001.06e / pp.17-20 / 2001
  • This study concerns the autonomous navigation of a mobile robot operating through sensors, using an omnidirectional vision system that retrieves in real time the movements of objects and walls approaching the robot from all directions while shortening the processing time. The field of view is extended with a reflection system so that the robot observes the full 2$\pi$ of directions around it at once; through simple image processing, a transform procedure, and constant monitoring of the angle and distance to peripheral obstacles, the robot recognizes the three-dimensional world. The study consists of three parts: Part 1 covers the design of the omnidirectional vision system, Part 2 the image processing, and Part 3 evaluates the implemented system through comparative studies and three-dimensional measurements.
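
The mirror-unwarping step that such a system performs before the simple image processing can be sketched as follows. This is a minimal illustration assuming a circular catadioptric image with a known center; the function name and parameters are hypothetical, not the authors' implementation:

```python
import numpy as np

def unwarp_omni(img, cx, cy, r_min, r_max, out_w=360, out_h=64):
    """Unwarp a circular omnidirectional mirror image into a panoramic strip.

    img         : 2-D grayscale array from the catadioptric camera
    (cx, cy)    : pixel center of the mirror in the image (assumed known)
    r_min/r_max : radial band of the mirror to sample
    Returns an out_h x out_w panorama covering the full 2*pi view.
    """
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(r_min, r_max, out_h)
    # Map each output pixel back to a source pixel on the mirror ring.
    xs = (cx + radii[:, None] * np.cos(thetas[None, :])).astype(int)
    ys = (cy + radii[:, None] * np.sin(thetas[None, :])).astype(int)
    xs = np.clip(xs, 0, img.shape[1] - 1)
    ys = np.clip(ys, 0, img.shape[0] - 1)
    return img[ys, xs]
```

Each output column corresponds to one viewing direction, so the full surroundings appear in a single panoramic strip that can be scanned with ordinary image-processing operations.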


Development of Intelligent Deburring System Based on Industrial Robot (산업용로봇을 이용하는 지능 버 제거 시스템 개발에 관한 연구)

  • Shin, Sang-Un;Choe, Gyu-Jong;Ahn, Du-Seong
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.34 no.1 / pp.1-5 / 1998
  • This study presents an intelligent deburring system that can transfer an expert's skill to a deburring robot through a neural network. The expert's skill is expressed as an associative mapping between the characteristics of the burr and the human expert's action. On the premise that the state of the deburring process can be extracted through human vision, a vision system is employed for the perception and identification of the changing burr. From demonstrations by human experts, force data are measured and fitted to an impedance model. Finally, the burr characteristics and the corresponding forces are associated by a neural network trained over many demonstrations. The proposed method is verified on the deburring of welding burrs.


Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.381-390 / 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited sensing capability is advantageous because they complement and cooperate with each other to yield better information about the environment. For robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, the environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization with the monocular camera, the robot uses image features consisting of vertical edge lines extracted from the camera images as natural landmark points. With the laser structured light sensor, it uses geometric features composed of corners and planes as natural landmark shapes, extracted from range data taken at a constant height above the navigation floor. Although either feature group alone can sometimes localize the robot, all features from the two sensors are used and fused simultaneously, in terms of information, for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the results are discussed in detail.
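
For a single scalar coordinate, Bayesian fusion of two sensor estimates weighted by their reliabilities can be sketched as the product of two Gaussian likelihoods. This is an illustrative simplification of the paper's map-based scheme, with reliability expressed as an inverse variance; the function name is hypothetical:

```python
def fuse_gaussian(mu_a, var_a, mu_b, var_b):
    """Fuse two independent Gaussian position estimates (e.g. from vision
    and from a laser structured light sensor) by inverse-variance
    weighting -- the Bayesian product of two Gaussian likelihoods.
    Each sensor's reliability enters as 1/variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    var_fused = 1.0 / (w_a + w_b)
    mu_fused = var_fused * (w_a * mu_a + w_b * mu_b)
    return mu_fused, var_fused
```

A more reliable sensor (smaller variance) pulls the fused estimate toward its own reading, and the fused variance is always smaller than either input, which is why fusing complementary sensors improves localization.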

A Study of Real Time Object Tracking using Reinforcement Learning (강화학습을 사용한 실시간 이동 물체 추적에 관한 연구)

  • 김상헌;이동명;정재영;운학수;박민욱;김관형
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09b / pp.87-90 / 2003
  • In earlier mobile robot systems, fully autonomous navigation was the main goal, and image information was used only as an auxiliary means of monitoring. Now, however, research that uses image information in diverse applications, such as moving-object tracking, object recognition and discrimination, and feature extraction, is actively under way. On the control side, intelligent control methods have solved many nonlinear control problems that were difficult to handle with traditional techniques, and neural networks are widely used in such intelligent control. Recently, reinforcement learning has become a popular way to train these networks. Reinforcement learning is a method of learning, through trial and error in a dynamic control setting, which action to take in each situation in order to achieve a goal; the corresponding state-action mapping is therefore learned over many trials. The control parameters under study include the possible states and actions, the state transitions, and reward algorithms that can yield an optimal solution. The system studied in this paper uses a vision system and a StrongARM board to identify the color and shape of a target object and then track it in real time; by handling the nonlinear tendency of the object's motion through reinforcement learning, it tracks the object more stably, quickly, and accurately, as demonstrated experimentally.
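
The trial-and-error state-action learning described above can be sketched as tabular Q-learning on a toy 1-D tracking task. This is a minimal illustration, not the authors' system; the state binning, reward, and function name are assumptions:

```python
import random

def train_tracker(episodes=2000, alpha=0.2, gamma=0.9, eps=0.1):
    """Toy tabular Q-learning for 1-D target tracking.
    State : target offset from image center, coarsely binned to {-2..2}.
    Action: move camera by -1, 0, or +1 bin.
    Reward: penalty proportional to the remaining offset."""
    random.seed(0)
    q = {(s, a): 0.0 for s in range(-2, 3) for a in (-1, 0, 1)}
    for _ in range(episodes):
        s = random.randint(-2, 2)
        for _ in range(10):
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice((-1, 0, 1))
            else:
                a = max((-1, 0, 1), key=lambda x: q[(s, x)])
            s2 = max(-2, min(2, s - a))   # moving toward the target shrinks the offset
            r = -abs(s2)                  # penalize remaining offset
            q[(s, a)] += alpha * (
                r + gamma * max(q[(s2, x)] for x in (-1, 0, 1)) - q[(s, a)]
            )
            s = s2
    return q
```

After training, the greedy policy moves the camera toward the target from either side, which is the learned state-action mapping the abstract refers to.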


Real-time Robotic Vision Control Scheme Using Optimal Weighting Matrix for Slender Bar Placement Task (얇은 막대 배치작업을 위한 최적의 가중치 행렬을 사용한 실시간 로봇 비젼 제어기법)

  • Jang, Min Woo;Kim, Jae Myung;Jang, Wan Shik
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.26 no.1 / pp.50-58 / 2017
  • This paper proposes a real-time robotic vision control scheme that uses a weighting matrix to efficiently process the vision data obtained while the robot moves toward a target. The scheme is based on a vision system model that, unlike previous studies, can actively accommodate changes in the camera parameters and the robot's position. The vision control algorithm comprises parameter estimation, joint angle estimation, and weighting matrix models. To demonstrate the effectiveness of the proposed scheme, the study compares two cases: applying and not applying the weighting matrix to the vision data obtained while the camera moves toward the target. Finally, the position accuracy of the two cases is compared experimentally on the slender bar placement task.
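
The role of a weighting matrix in parameter estimation can be illustrated with a standard weighted least-squares step, which is a common way to let more reliable (or more recent) vision data dominate the estimate. This is a generic sketch under that assumption, not the authors' exact formulation:

```python
import numpy as np

def weighted_lsq(A, b, w):
    """Weighted least-squares estimate of theta minimizing
    (A @ theta - b)^T W (A @ theta - b) with W = diag(w).
    Solves the normal equations (A^T W A) theta = A^T W b."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

With uniform weights this reduces to ordinary least squares; raising the weight of selected rows shifts the solution toward fitting those measurements, which is the lever a weighting matrix gives a vision control scheme.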

Offshore Structures and Equipment Technology as a Next-Generation Growth Engine (차세대 성장동력으로서의 해양구조물 및 장비 기술)

  • 홍사영;홍석원
    • Bulletin of the Society of Naval Architects of Korea / v.41 no.2 / pp.25-34 / 2004
  • Since reaching a per-capita national income of USD 10,000 in 1995, Korea has been unable to break through that barrier for eight years; the competitiveness of the key industries that have underpinned national growth is slowing, the technology gap with advanced countries is not narrowing, and latecomers such as China are catching up fast. The country therefore faces the national task of overcoming the internal absence of a firm vision for the future and of "creating new growth engines" for the leap to an advanced economy with a per-capita income of USD 20,000. To this end, at the end of May 2003 the government identified a total of 60 next-generation growth items in three areas (key mainstay industries, promising future industries, and knowledge-based service industries) and established comprehensive development strategies for each industry group. For the shipbuilding and offshore industry, three items were included in the mainstay industry group: high-value-added ships, digital shipbuilding content, and offshore floating steel structures. Although the shipbuilding and offshore industry was not explicitly included in the subsequent selection of the ten next-generation growth engine industries (Table 1), it is partly linked to the intelligent robot and e-Biz/intelligent logistics fields, and the Ministry of Commerce, Industry and Energy is reported to have formed planning groups for the ten mainstay industries, including shipbuilding and offshore, to reflect their research planning, together with the next-generation growth engine planning group, in the five-year industrial technology innovation plan [1]. (Abridged)


A Study on the Robot Vision Control Schemes of N-R and EKF Methods for Tracking the Moving Targets (이동 타겟 추적을 위한 N-R과 EKF방법의 로봇비젼제어기법에 관한 연구)

  • Hong, Sung-Mun;Jang, Wan-Shik;Kim, Jae-Meung
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.23 no.5 / pp.485-497 / 2014
  • This paper presents robot vision control schemes based on the Newton-Raphson (N-R) and Extended Kalman Filter (EKF) methods for tracking moving targets. The vision system model used in this study involves six camera parameters, which represent the uncertainty of the camera's orientation and focal length as well as the unknown relative position between the camera and the robot. Both the N-R and EKF methods are employed to estimate the six camera parameters. Based on the six parameters estimated using three cameras, the robot's joint angles with respect to the moving targets are computed, again with both methods. The two robot vision control schemes are tested experimentally by tracking a moving target, and the results are compared to evaluate their strengths and weaknesses.
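
A single scalar EKF measurement update, the building block of the second scheme, can be sketched as follows. This is the textbook form for one parameter, not the paper's six-parameter, three-camera formulation; variable names are generic:

```python
def ekf_update(x, P, z, H, R):
    """One scalar EKF measurement update for a parameter estimate.
    x : prior estimate        P : prior variance
    z : new measurement       H : measurement Jacobian (linearized model)
    R : measurement noise variance
    Returns the posterior estimate and variance."""
    y = z - H * x           # innovation: measurement minus prediction
    S = H * P * H + R       # innovation variance
    K = P * H / S           # Kalman gain
    return x + K * y, (1.0 - K * H) * P
```

Unlike a batch N-R solve, this update incorporates each new vision measurement incrementally, which is the usual trade-off the paper's comparison explores.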

A Technique Using Ultrasonic Sensors for Human Tracking by a Mobile-Platform-Based Cooperative Robot (모바일 플랫폼 기반 협동로봇의 사용자 추종을 위한 초음파 센서 활용 기법)

  • Yum, Seung-Ho;Eom, Su-Hong;Lee, Eung-Hyuk
    • Journal of IKEEE / v.24 no.2 / pp.638-648 / 2020
  • Current user-following methods for intelligent cooperative robots are usually based on vision systems or LiDAR and show excellent performance. However, in the closed wards of COVID-19, which spread worldwide in 2020, robots cooperating with medical staff were scarce: the staff all wear protective clothing to prevent virus infection, which the existing techniques do not handle well. To solve this problem, this paper separates the ultrasonic sensor into transmitting and receiving parts and, on that basis, proposes estimating the user's position so that the robot can actively follow and cooperate with people. An improved median filter is applied to the ultrasonic readings to reduce errors caused by hard reflections and missed echoes, and a curvature trajectory is applied for smooth operation in small areas. In the results, the median filter reduced the angle and distance errors by 70%, and driving stability was verified on test courses such as 'S' and figure-'8' paths.
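
The outlier-rejection idea behind the median filter can be sketched with a plain sliding-window median over distance readings; the paper's improved variant is not reproduced here, and the window size is an assumption:

```python
def median_filter(readings, k=5):
    """Sliding-window median over ultrasonic distance readings.
    A spike from a hard reflection or a missed echo cannot survive the
    median unless it persists for more than half the window."""
    half = k // 2
    out = []
    for i in range(len(readings)):
        window = sorted(readings[max(0, i - half): i + half + 1])
        out.append(window[len(window) // 2])
    return out
```

A single bad reading is replaced by a plausible neighbor, at the cost of a small lag, which is acceptable for a robot following a walking person.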

Vision-Based Self-Localization of Autonomous Guided Vehicle Using Landmarks of Colored Pentagons (컬러 오각형을 이정표로 사용한 무인자동차의 위치 인식)

  • Kim Youngsam;Park Eunjong;Kim Joonchoel;Lee Joonwhoan
    • The KIPS Transactions:PartB / v.12B no.4 s.100 / pp.387-394 / 2005
  • This paper describes a method for determining self-localization using visual landmarks. The critical geometric dimensions of a pentagon are used to locate the relative position of the mobile robot with respect to the pattern. The method has the advantages of simplicity and flexibility. Each pentagon is also provided with a unique identification, using invariant features and colors, that enables the system to find the absolute location of the patterns. The algorithm determines the correspondence between observed landmarks and a stored sequence, computes the absolute location of the observer from those correspondences, and calculates the relative position from a pentagon using its five vertices. The algorithm has been implemented and tested; over several trials it computed locations accurate to within 5 centimeters in less than 0.3 seconds.
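
The distance component of such a relative-position computation can be sketched with the pinhole model, assuming the pentagon's real side length and the camera focal length (in pixels) are known. This is an illustrative simplification, not the paper's full pose computation:

```python
import math

def estimate_distance(vertices, side_m, focal_px):
    """Estimate camera-to-landmark distance from a pentagon's image vertices
    using the pinhole relation: distance = focal * real_size / apparent_size.
    vertices : five (x, y) image points in order around the pentagon
    side_m   : real side length of the landmark in meters
    focal_px : camera focal length in pixels (assumed calibrated)."""
    sides = [math.dist(vertices[i], vertices[(i + 1) % 5]) for i in range(5)]
    mean_px = sum(sides) / 5.0          # average apparent side length
    return focal_px * side_m / mean_px
```

Averaging over all five sides makes the estimate less sensitive to the perspective foreshortening of any single edge; a full solution would recover orientation from the vertex geometry as well.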