• Title/Summary/Keyword: Robot calibration


A Study on Intelligent Robot Bin-Picking System with CCD Camera and Laser Sensor (CCD카메라와 레이저 센서를 조합한 지능형 로봇 빈-피킹에 관한 연구)

  • Shin, Chan-Bai; Kim, Jin-Dae; Lee, Jeh-Won
    • Proceedings of the KIEE Conference / 2007.04a / pp.231-233 / 2007
  • In this paper, we present a new visual approach to robust bin-picking, based on a two-step concept for a vision-driven automatic handling robot. The technology described here is based on two types of sensors: a 3D laser scanner and a CCD video camera. The geometry and pose (position and orientation) of the bin contents are reconstructed from the camera and laser sensor, and this information can then be employed to guide the robotic arm. A new thinning algorithm and a constrained Hough transform method are also explained in this paper. Consequently, the developed bin-picking system demonstrates successful operation with 3D hole-type objects.

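The constrained Hough transform is not detailed in this abstract; as a rough illustration of the general idea only (a hypothetical NumPy sketch, not the authors' algorithm), fixing the hole radius, which is typically known from the part drawing, collapses the usual 3-D accumulator (cx, cy, r) to a cheap 2-D one:

```python
import numpy as np

def hough_circle_fixed_radius(edge_points, radius, shape, n_angles=90):
    """Vote for circle centres given a known hole radius.

    Constraining the radius reduces the circle Hough transform to a 2-D
    accumulator over candidate centres (cy, cx).
    """
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for (x, y) in edge_points:
        # Each edge point votes for every centre lying `radius` away from it.
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < shape[1]) & (cy >= 0) & (cy < shape[0])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    # Peak of the accumulator is the most-voted centre, as (row, col).
    return np.unravel_index(np.argmax(acc), acc.shape)
```

For example, edge points sampled on a circle of the assumed radius around (40, 30) should produce an accumulator peak at that centre.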

Implementation of Integrated Control Environment for Biped Robot(IWR-III) (이족보행로봇(IWR-III)의 통합 저어 환경 구축)

  • Noh, Gyeong-Gon; Seo, Yeong-Seop; Kim, Jin-Geol
    • Proceedings of the KIEE Conference / 1999.07g / pp.3089-3091 / 1999
  • To control the IWR-III biped walking robot, several complex modules are necessary: concurrent control of multi-axis servo motors, PID and feedforward gain tuning, initial-value calibration, display of the current system status, a user interface for emergency safety, and three-dimensional rendering for graphic visualization. An Integrated Control Environment with a GUI (Graphic User Interface) was developed for various types of gait data[1] and for several control modes (i.e., open/closed loop and pulse/velocity/torque control); it consists of a time-buffered control part using an MMC (Multi-Motion Controller) and a 3D simulation part using the DirectX graphics library.


A Study on the Improvement of Pose Information of Objects by Using Trinocular Vision System (Trinocular Vision System을 이용한 물체 자세정보 인식 향상방안)

  • Kim, Jong Hyeong; Jang, Kyoungjae; Kwon, Hyuk-dong
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.26 no.2 / pp.223-229 / 2017
  • Recently, robotic bin-picking tasks have drawn considerable attention, because flexibility is required in robotic assembly tasks. Stereo camera systems have been widely used for robotic bin-picking, but they have two limitations: first, the computational burden of solving the correspondence problem on stereo images increases calculation time; second, errors in image processing and camera calibration reduce accuracy. Moreover, errors in the robot's kinematic parameters directly affect gripping. In this paper, we propose a method of correcting the bin-picking error by using a trinocular vision system, which consists of two stereo cameras and one hand-eye camera. First, the two stereo cameras, with a wide viewing angle, measure the object's pose roughly. Then, the third hand-eye camera approaches the object and corrects the previous measurement of the stereo camera system. Experimental results show the usefulness of the proposed method.

System for Measuring the Welding Profile Using Vision and Structured Light (비전센서와 구조화빔을 이용한 용접 형상 측정 시스템)

  • Kim, Chang-Hyeon; Choe, Tae-Yong; Lee, Ju-Jang; Seo, Jeong; Park, Gyeong-Taek; Gang, Hui-Sin
    • Proceedings of the Korean Society of Laser Processing Conference / 2005.11a / pp.50-56 / 2005
  • Robot systems are widely used in many industrial fields, including welding. The essential tasks in operating a welding robot are acquiring the position and/or shape of the parent metal. For seam tracking or robot tracking, many kinds of contact and non-contact sensors are used; recently, vision has become the most popular. This paper describes the development of a system that measures the shape of the welding part using a line-type structured laser diode and a vision sensor. It includes correction of the radial distortion often found in images taken by a camera with a short focal length. The Direct Linear Transformation (DLT) method is used for camera calibration, and the three-dimensional shape of the parent metal is obtained after a simple linear transformation. Several demonstrations illustrate the performance of the developed system.

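The DLT calibration step can be illustrated with a short sketch. This is the generic textbook DLT, not the authors' implementation: each 3D-2D correspondence contributes two homogeneous linear equations in the twelve entries of the projection matrix, and the SVD null vector gives the matrix up to scale:

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 camera projection matrix P (up to scale) by the
    Direct Linear Transformation from >= 6 non-coplanar correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        # Two rows per point from u = (p1.X)/(p3.X), v = (p2.X)/(p3.X).
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The null vector of A (smallest singular value) holds the 12 entries of P.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, X):
    """Project a 3D point with P and dehomogenize to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

With noise-free synthetic correspondences from a known camera, the recovered matrix reprojects the points essentially exactly.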

Localization Algorithm for a Mobile Robot using iGS (iGS를 이용한 모바일 로봇의 실내위치추정 알고리즘)

  • Seo, Dae-Geun; Cho, Sung-Ho; Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.14 no.3 / pp.242-247 / 2008
  • As an absolute positioning system, iGS is designed around ultrasonic signals, whose speed can be formulated clearly in terms of time of flight and room temperature, and it is utilized here for mobile robot localization. The iGS is composed of an RFID receiver and an ultrasonic transmitter, where the RFID is designated to synchronize the transmitter and receiver of the ultrasonic signal. The traveling time of the ultrasonic signal is used to calculate the distance between the iGS system and a beacon located at a pre-determined position. This paper suggests an effective operation method for iGS to estimate the position of a mobile robot working in an unstructured environment. To expand the recognition range and improve the accuracy of the system, two strategies are proposed: utilization of beacons belonging to neighboring blocks, and removal of environment-reflected ultrasonic signals. As a result, a ubiquitous localization system based on iGS as a pseudo-satellite system has been developed successfully, with low cost, a high update rate, and relatively high precision.
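The range measurement described above (ultrasonic time of flight with a temperature-dependent speed of sound) and a position fix from several beacons can be sketched as follows. The first-order speed-of-sound formula and the linearized trilateration are standard approximations, not taken from the paper:

```python
import numpy as np

def speed_of_sound(temp_c):
    """First-order approximation of the speed of sound in air (m/s)."""
    return 331.3 + 0.606 * temp_c

def beacon_distance(tof_s, temp_c):
    """Distance to a beacon from one-way ultrasonic time of flight; the
    RFID trigger synchronizes the transmitter and receiver."""
    return speed_of_sound(temp_c) * tof_s

def trilaterate_2d(beacons, dists):
    """Linear least-squares position fix from >= 3 beacon ranges.

    Subtracting the first range equation from the others removes the
    quadratic terms, leaving a small linear system in (x, y).
    """
    (x0, y0), d0 = beacons[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return sol
```

For example, exact ranges to three beacons at known positions recover the robot position directly.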

Study on the Localization Improvement of the Dead Reckoning using the INS Calibrated by the Fusion Sensor Network Information (융합 센서 네트워크 정보로 보정된 관성항법센서를 이용한 추측항법의 위치추정 향상에 관한 연구)

  • Choi, Jae-Young; Kim, Sung-Gaun
    • Journal of Institute of Control, Robotics and Systems / v.18 no.8 / pp.744-749 / 2012
  • In this paper, we suggest how to improve the accuracy of a mobile robot's localization by using sensor network information that fuses a machine-vision camera, encoders, and an IMU sensor. The heading value of the IMU is measured by a terrestrial magnetism sensor based on the magnetic field, which is constantly affected by its surrounding environment. To increase the sensor's accuracy, we isolate a template of the ceiling using the vision camera, measure the angle with a pattern-matching algorithm, and calibrate the IMU by comparing the obtained values with the IMU readings to derive an offset. The values used to estimate the robot's position (from the encoders, the IMU, and the vision camera's angle measurement) are transferred to a host PC over a wireless network, and the host PC estimates the robot's location from all of these values. As a result, we obtained more accurate position estimates than when relying on IMU sensor calibration alone.
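The offset-based heading calibration described above can be sketched minimally (a hypothetical version, not the authors' code): the mean discrepancy between the vision-derived ceiling-pattern angle and the magnetometer heading is taken as a correction offset for subsequent IMU readings:

```python
def wrap_deg(a):
    """Wrap an angle in degrees into the interval [-180, 180)."""
    return (a + 180.0) % 360.0 - 180.0

def imu_heading_offset(vision_angles, imu_angles):
    """Mean wrapped discrepancy between camera-derived angles and the
    magnetometer heading, used as a calibration offset."""
    diffs = [wrap_deg(v - i) for v, i in zip(vision_angles, imu_angles)]
    return sum(diffs) / len(diffs)

def corrected_heading(imu_angle, offset):
    """Apply the calibration offset to a raw IMU heading."""
    return wrap_deg(imu_angle + offset)
```

In practice the offset would be re-estimated whenever the ceiling pattern is visible, since local magnetic disturbances drift with position.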

Design of a Multi-Robot Controller with Sensor Integration Capability (센서 통합 능력을 갖는 다중 로봇 Controller의 설계 기술)

  • 서일홍; 여희주; 엄광식
    • ICROS / v.2 no.3 / pp.81-91 / 1996
  • This article describes the implementation of a cooperative multi-robot control system with multi-sensor fusion capability, built on VxWorks, a multi-tasking real-time OS. The control system was implemented to perform the functions needed to control two robots: obstacle avoidance, conditional motion, concurrent motion, and motion synchronized with external devices (conveyor tracking); its effectiveness was demonstrated through several tasks. Future work related to this study includes: 1) development of a collision-avoidance algorithm for a 6-DOF vertical articulated manipulator, 2) development of an auto-calibration system for the relative position of a two-arm robot, and 3) CAD-based trajectory generation.


Three Examples of Learning Robots

  • Mashiro, Oya; Graefe, Volker
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.147.1-147 / 2001
  • Future robots, especially service and personal robots, will need much more intelligence, robustness and user-friendliness. The ability to learn contributes to these characteristics and is, therefore, becoming more and more important. Three of the numerous varieties of learning are discussed together with results of real-world experiments with three autonomous robots: (1) the acquisition of map knowledge by a mobile robot, allowing it to navigate in a network of corridors, (2) the acquisition of motion control knowledge by a calibration-free manipulator, allowing it to gain task-related experience and improve its manipulation skills while it is working, and (3) the ability to learn how to perform service tasks ...


Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im; Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1099-1110 / 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently active. However, deep learning models are vulnerable to adversarial attacks through modulation of the input data. Attacks on existing multi-sensor-based autonomous driving recognition systems have focused on hindering obstacle detection by lowering the confidence score of the object recognition model, but such attacks work only against the targeted model. For attacks at the sensor fusion stage, errors in vision tasks after fusion can cascade, and this risk needs to be considered. In addition, an attack on LiDAR point cloud data, which is difficult to judge visually, makes it hard to determine whether an attack has occurred. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the points of the input LiDAR. In attack-performance experiments over several scaling sizes, an average fusion error of more than 77% was induced.

Sampling-based Control of SAR System Mounted on A Simple Manipulator (간단한 기구부와 결합한 공간증강현실 시스템의 샘플 기반 제어 방법)

  • Lee, Ahyun; Lee, Joo-Ho; Lee, Joo-Haeng
    • Korean Journal of Computational Design and Engineering / v.19 no.4 / pp.356-367 / 2014
  • A robotic spatial augmented reality (RSAR) system, which combines robotic components with projector-based AR techniques, is unique in its ability to expand the user interaction area by dynamically changing the position and orientation of a projector-camera unit (PCU). For a moving PCU mounted on a conventional robotic device, we can compute its extrinsic parameters using robot kinematics, assuming the link and joint geometry is available. In an RSAR system based on a user-created robot (UCR), however, it is difficult to calibrate or measure the geometric configuration, which limits the applicability of conventional kinematics methods. In this paper, we propose a data-driven kinematics control method for a UCR-based RSAR system. The proposed method utilizes a pre-sampled set of camera calibrations acquired at a sufficient number of kinematic configurations in fixed joint domains; the sampled set is then compactly represented as a set of B-spline surfaces. The proposed method has two merits. First, it does not require any kinematics model, such as link lengths or joint orientations. Second, the computation is simple, since it merely evaluates a few polynomials rather than relying on Jacobian computation. We describe the proposed method and demonstrate results for an experimental RSAR system with a PCU on a simple pan-tilt arm.
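The data-driven idea, fit a smooth surface to pre-sampled calibration values over the joint domain and then evaluate only polynomials at run time, can be sketched as below. A least-squares polynomial surface stands in here for the paper's B-spline surfaces, and the pan/tilt parameterization is an assumption:

```python
import numpy as np

def fit_poly_surface(pan, tilt, values, deg=3):
    """Least-squares bivariate polynomial fit of one calibration quantity
    (e.g., an extrinsic parameter) over sampled (pan, tilt) joint values."""
    # Monomial terms pan^i * tilt^j with total degree <= deg.
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack([pan**i * tilt**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    return terms, coef

def eval_poly_surface(terms, coef, pan, tilt):
    """Run-time evaluation: just a few polynomial terms, no Jacobian."""
    return sum(c * pan**i * tilt**j for (i, j), c in zip(terms, coef))
```

Because run-time use is a plain polynomial evaluation, no link lengths or joint orientations are ever needed, mirroring the model-free property claimed in the abstract.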