• Title/Abstract/Keywords: Robot calibration


CCD카메라와 레이저 센서를 조합한 지능형 로봇 빈-피킹에 관한 연구 (A Study on Intelligent Robot Bin-Picking System with CCD Camera and Laser Sensor)

  • 신찬배;김진대;이재원
    • 대한전기학회:학술대회논문집 / 대한전기학회 2007년도 심포지엄 논문집 정보 및 제어부문 / pp.231-233 / 2007
  • In this paper, we present a new visual approach to robust bin-picking, in a two-step concept for a vision-driven automatic handling robot. The technology described here is based on two types of sensors: a 3D laser scanner and a CCD video camera. The geometry and pose (position and orientation) of the bin contents are reconstructed from the camera and laser sensor, and this information can be employed to guide the robotic arm. A new thinning algorithm and a constrained Hough transform method are also explained in this paper. Consequently, the developed bin-picking system demonstrated successful operation on 3D hole objects.
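The abstract does not specify the constrained Hough variant, but a common constraint is to fix the circle radius (known for a hole of a given part), which collapses the accumulator from 3-D to 2-D. A minimal sketch under that assumption, on synthetic edge points with illustrative parameters:

```python
import numpy as np

def constrained_hough_circle(edge_pts, radius, grid=64, extent=10.0):
    """Vote for circle centers when the radius is known a priori
    (the 'constrained' variant: a 2-D accumulator instead of 3-D)."""
    acc = np.zeros((grid, grid))
    thetas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
    for x, y in edge_pts:
        # every center consistent with edge point (x, y) lies on a
        # circle of the same radius around that point
        cx = x - radius * np.cos(thetas)
        cy = y - radius * np.sin(thetas)
        ix = np.clip((cx / extent * grid).astype(int), 0, grid - 1)
        iy = np.clip((cy / extent * grid).astype(int), 0, grid - 1)
        np.add.at(acc, (iy, ix), 1)
    peak = np.unravel_index(acc.argmax(), acc.shape)
    # return the center of the winning accumulator cell
    return (peak[1] + 0.5) * extent / grid, (peak[0] + 0.5) * extent / grid

# synthetic edge points of a hole with radius 2 centered at (5, 4)
ang = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
pts = np.stack([5 + 2 * np.cos(ang), 4 + 2 * np.sin(ang)], axis=1)
cx, cy = constrained_hough_circle(pts, radius=2.0)
```

The recovered center is accurate to roughly one accumulator cell; a finer grid or sub-cell interpolation would sharpen it.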


이족보행로봇(IWR-III)의 통합 제어 환경 구축 (Implementation of Integrated Control Environment for Biped Robot(IWR-III))

  • 노경곤;서영섭;김진걸
    • 대한전기학회:학술대회논문집 / 대한전기학회 1999년도 하계학술대회 논문집 G / pp.3089-3091 / 1999
  • To control the IWR-III biped walking robot, several complex modules are necessary: concurrent control of multi-axis servo motors, PID and feedforward gain tuning, initial-value calibration, display of the current system status, a user interface for emergency safety, and three-dimensional rendering for graphic visualization. An integrated control environment with a GUI (Graphical User Interface) was developed to support various types of gait data [1] and control modes (i.e., open/closed-loop and pulse/velocity/torque control); it consists of a time-buffered control part using an MMC (Multi-Motion Controller) and a 3D simulation part using the DirectX graphics library.
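The PID-plus-feedforward structure mentioned above can be sketched on an idealized velocity-commanded joint tracking a ramp (gait-like) reference. All gains and the plant model are assumptions for illustration, not the IWR-III's actual tuning:

```python
# Discrete PID + feedforward on an idealized integrator joint.
kp, ki, kd, kff = 4.0, 2.0, 0.05, 1.0   # assumed gains
dt = 0.01
pos = -0.2                               # start with an initial offset error
integ, prev_err = 0.0, 0.0
v_ref = 0.5                              # planned joint velocity of the trajectory

for k in range(1000):                    # 10 s of simulated time
    target = v_ref * k * dt              # ramp reference
    err = target - pos
    integ += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    # feedforward injects the planned velocity; PID trims the residual error
    u = kff * v_ref + kp * err + ki * integ + kd * deriv
    pos += u * dt                        # idealized velocity-commanded joint
```

With an exact feedforward term the PID only has to remove the initial offset, which is why feedforward is commonly paired with gain tuning for trajectory tracking.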


Trinocular Vision System을 이용한 물체 자세정보 인식 향상방안 (A Study on the Improvement of Pose Information of Objects by Using Trinocular Vision System)

  • 김종형;장경재;권혁동
    • 한국생산제조학회지 / Vol. 26, No. 2 / pp.223-229 / 2017
  • Recently, robotic bin-picking tasks have drawn considerable attention, because flexibility is required in robotic assembly tasks. Generally, stereo camera systems have been used widely for robotic bin-picking, but they have two limitations: first, the computational burden of solving the correspondence problem on stereo images increases calculation time; second, errors in image processing and camera calibration reduce accuracy. Moreover, errors in the robot kinematic parameters directly affect robot gripping. In this paper, we propose a method of correcting the bin-picking error by using a trinocular vision system, which consists of two stereo cameras and one hand-eye camera. First, the two stereo cameras, with a wide viewing angle, measure the object's pose roughly. Then, the third hand-eye camera approaches the object and corrects the previous measurement of the stereo camera system. Experimental results show the usefulness of the proposed method.
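The coarse-then-fine idea can be sketched numerically: a rectified stereo pair gives a rough depth from disparity (Z = fB/d), and a close-range measurement then dominates a variance-weighted update. The numbers and variances below are assumptions; the paper's actual correction works on the hand-eye camera image rather than scalar depths:

```python
def stereo_depth(f_px, baseline_m, disparity_px):
    """Coarse depth from a rectified stereo pair: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def refine(coarse, var_coarse, fine, var_fine):
    """Variance-weighted fusion: the close-range hand-eye measurement
    (small variance) pulls the estimate away from the coarse stereo one."""
    w = var_coarse / (var_coarse + var_fine)
    return (1.0 - w) * coarse + w * fine

z_stereo = stereo_depth(f_px=800.0, baseline_m=0.10, disparity_px=40.0)  # 2.0 m
z_handeye = 1.95                       # assumed close-range measurement
z = refine(z_stereo, 1e-2, z_handeye, 1e-4)
```

Because the hand-eye variance is two orders of magnitude smaller, the fused depth lands within a millimeter of the close-range value.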

비전센서와 구조화빔을 이용한 용접 형상 측정 시스템 (System for Measuring the Welding Profile Using Vision and Structured Light)

  • 김창현;최태용;이주장;서정;박경택;강희신
    • 한국레이저가공학회:학술대회논문집 / 한국레이저가공학회 2005년도 추계학술발표대회 논문집 / pp.50-56 / 2005
  • Robot systems are widely used in many industrial fields, including welding manufacturing. The essential tasks in operating a welding robot are the acquisition of the position and/or shape of the parent metal. For seam tracking or robot tracking, many kinds of contact and non-contact sensors are used; recently, vision has become the most popular. In this paper, the development of a system that measures the shape of the welding part is described. The system uses a line-type structured laser diode and a vision sensor. It includes correction of the radial distortion often found in images taken by cameras with short focal lengths. The Direct Linear Transformation (DLT) method is used for camera calibration. The three-dimensional shape of the parent metal is obtained after a simple linear transformation. Demonstrations are presented to describe the performance of the developed system.
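The DLT calibration named in the abstract solves for the 3×4 projection matrix P (with x ~ PX) linearly from six or more 3-D/2-D correspondences. A minimal sketch on synthetic, noise-free data with an assumed camera matrix:

```python
import numpy as np

def dlt_calibrate(X, x):
    """Estimate the 3x4 projection matrix P from n >= 6 correspondences
    via the Direct Linear Transformation (last right-singular vector)."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]
        A.append([*Xh, 0, 0, 0, 0, *[-u * c for c in Xh]])
        A.append([0, 0, 0, 0, *Xh, *[-v * c for c in Xh]])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

# synthetic check with an assumed camera (focal 800 px, center 320x240)
P_true = np.array([[800.0, 0, 320, 10], [0, 800.0, 240, 20], [0, 0, 1, 2]])
Xs = np.array([[0, 0, 1], [1, 0, 2], [0, 1, 2], [1, 1, 3],
               [0.5, 0.2, 1.5], [0.3, 0.8, 2.5], [2, 0.5, 1.2], [0.7, 1.5, 2.2]])
xs = []
for Xw in Xs:
    h = P_true @ np.append(Xw, 1.0)
    xs.append(h[:2] / h[2])          # ideal pixel coordinates

P = dlt_calibrate(Xs, xs)
h = P @ np.array([0.2, 0.4, 1.8, 1.0])   # reproject a held-out point
uv = h[:2] / h[2]
```

Note the calibration points must not be coplanar, and in practice radial distortion (as the abstract mentions) is corrected before the linear solve.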


iGS를 이용한 모바일 로봇의 실내위치추정 알고리즘 (Localization Algorithm for a Mobile Robot using iGS)

  • 서대근;조성호;이장명
    • 제어로봇시스템학회논문지 / Vol. 14, No. 3 / pp.242-247 / 2008
  • As an absolute positioning system, iGS is designed around ultrasonic signals, whose speed can be formulated clearly in terms of travel time and room temperature, and is utilized for mobile robot localization. The iGS is composed of an RFID receiver and an ultrasonic transmitter, where the RFID is designated to synchronize the transmitter and receiver of the ultrasonic signal. The traveling time of the ultrasonic signal is used to calculate the distance between the iGS system and a beacon located at a pre-determined position. This paper suggests an effective operation method for iGS to estimate the position of a mobile robot working in an unstructured environment. To expand the recognition range and improve the accuracy of the system, two strategies are proposed: utilization of beacons belonging to neighboring blocks, and removal of environment-reflected ultrasonic signals. As a result, a ubiquitous localization system based on iGS as a pseudo-satellite system has been developed successfully, with low cost, a high update rate, and relatively high precision.
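The time-of-flight ranging described above reduces to distance = sound speed × travel time, with the speed of sound corrected for room temperature, and the ranges to several beacons then fix the position. A sketch with an assumed beacon layout (the linear approximation 331.3 + 0.606·T m/s is a standard formula, not taken from the paper):

```python
import numpy as np

def sound_speed(temp_c):
    # approximate speed of sound in air (m/s) at room temperature temp_c
    return 331.3 + 0.606 * temp_c

def beacon_distance(tof_s, temp_c):
    # range from the ultrasonic time of flight
    return sound_speed(temp_c) * tof_s

def trilaterate(beacons, dists):
    """Least-squares 2-D position from >= 3 beacon ranges,
    linearized by subtracting the first range equation."""
    b = np.asarray(beacons, float)
    d = np.asarray(dists, float)
    A = 2.0 * (b[1:] - b[0])
    rhs = (b[1:] ** 2).sum(1) - (b[0] ** 2).sum() - d[1:] ** 2 + d[0] ** 2
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

# assumed beacon layout and a robot at (1, 2), in a 20 degC room
beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true_pos = np.array([1.0, 2.0])
tofs = [np.hypot(*(true_pos - b)) / sound_speed(20.0) for b in beacons]
dists = [beacon_distance(t, 20.0) for t in tofs]
pos = trilaterate(beacons, dists)
```

Reflected signals would inflate some of the ranges, which is why the paper's rejection of environment-reflected echoes matters before the solve.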

융합 센서 네트워크 정보로 보정된 관성항법센서를 이용한 추측항법의 위치추정 향상에 관한 연구 (Study on the Localization Improvement of the Dead Reckoning using the INS Calibrated by the Fusion Sensor Network Information)

  • 최재영;김성관
    • 제어로봇시스템학회논문지 / Vol. 18, No. 8 / pp.744-749 / 2012
  • In this paper, we suggest how to improve the accuracy of a mobile robot's localization by using sensor network information that fuses a machine-vision camera, an encoder, and an IMU sensor. The heading value of the IMU is measured by a terrestrial-magnetism sensor based on the magnetic field, which is constantly affected by its surrounding environment. To increase the sensor's accuracy, we isolated a template of the ceiling using the vision camera, measured angles with a pattern-matching algorithm, and calibrated the IMU by comparing the obtained values with the IMU readings and the offset value. The encoder, IMU, and vision-camera angle values used to estimate the robot's position are transferred to the host PC over a wireless network, and the host PC estimates the robot's location from all of these values. As a result, we obtained more accurate position estimates than when using IMU sensor calibration alone.
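The offset-based heading correction can be sketched in a few lines: the ceiling-pattern heading from the camera serves as the absolute reference, the difference to the magnetometer heading is the offset, and dead reckoning then integrates encoder distance along the corrected heading. All readings below are assumed values for illustration:

```python
import math

def heading_offset(imu_deg, cam_deg):
    """Offset between the magnetometer-based IMU heading and the absolute
    ceiling-pattern heading, wrapped to [-180, 180)."""
    return ((cam_deg - imu_deg + 180.0) % 360.0) - 180.0

# assumed readings: the magnetometer is biased by a local magnetic disturbance
offset = heading_offset(imu_deg=95.0, cam_deg=90.0)

# dead reckoning with encoder distance and the corrected heading
x, y = 0.0, 0.0
for dist_m, imu_h in [(0.10, 95.0), (0.10, 95.0)]:
    h = math.radians(imu_h + offset)
    x += dist_m * math.cos(h)
    y += dist_m * math.sin(h)
```

Without the offset, the 5-degree magnetometer bias would accumulate into position drift on every dead-reckoning step.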

센서 통합 능력을 갖는 다중 로봇 Controller의 설계 기술 (Design Technology of a Multi-Robot Controller with Sensor Integration Capability)

  • 서일홍;여희주;엄광식
    • 제어로봇시스템학회지 / Vol. 2, No. 3 / pp.81-91 / 1996
  • This article describes the implementation of a cooperative multi-robot control system with multi-sensor fusion capability, built on VxWorks, a multi-tasking real-time OS. The control system was implemented to perform the obstacle avoidance, conditional motion, concurrent motion, and motion synchronized with external devices (conveyor tracking) required to control two robots, and its effectiveness was demonstrated through several tasks. Future work related to this research includes: 1) development of a collision-avoidance algorithm for a 6-DOF vertical articulated manipulator, 2) development of an auto-calibration system for the relative position of a two-arm robot, and 3) CAD-based trajectory generation.


Three Examples of Learning Robots

  • Mashiro, Oya;Graefe, Volker
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2001년도 ICCAS / pp.147.1-147 / 2001
  • Future robots, especially service and personal robots, will need much more intelligence, robustness and user-friendliness. The ability to learn contributes to these characteristics and is, therefore, becoming more and more important. Three of the numerous varieties of learning are discussed together with results of real-world experiments with three autonomous robots: (1) the acquisition of map knowledge by a mobile robot, allowing it to navigate in a network of corridors, (2) the acquisition of motion control knowledge by a calibration-free manipulator, allowing it to gain task-related experience and improve its manipulation skills while it is working, and (3) the ability to learn how to perform service tasks ...


카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법 (Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model)

  • 임이지;최대선
    • 정보보호학회논문지 / Vol. 33, No. 6 / pp.1099-1110 / 2023
  • The perception systems of autonomous driving and robot navigation fuse multiple sensors (multi-sensor fusion) to improve performance before carrying out vision tasks such as object recognition and tracking and lane detection. Research on deep-learning models based on the fusion of camera and LiDAR sensors is currently very active. However, deep-learning models are vulnerable to adversarial attacks that tamper with the input data. Existing attacks on multi-sensor autonomous-driving perception systems have focused on lowering the confidence score of the object-recognition model to induce obstacle misdetection, but they can only attack the target model. An attack on the sensor-fusion stage, by contrast, can cascade errors into the vision tasks downstream of the fusion, and this risk needs to be considered. In addition, attacking the LiDAR point-cloud data, which is hard to judge visually, makes it difficult to determine whether an attack has occurred. In this study, we propose an attack method that degrades the accuracy of LCCNet, an image-scaling-based camera-LiDAR calibration model. The proposed method applies a scaling attack to the input LiDAR points. Experiments with different scaling algorithms and scale factors induced a fusion (calibration) error of more than 77% on average.
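The core perturbation, scaling the LiDAR points, can be sketched as below. This is an illustrative simplification only; the paper's attack on LCCNet involves scaling algorithms and attack-size sweeps beyond this uniform centroid scaling:

```python
import numpy as np

def scale_points(points, factor):
    """Scale a LiDAR point cloud about its centroid. The cloud's shape is
    preserved, so the change is hard to judge visually, but the altered
    scale disturbs the camera-LiDAR alignment a calibration network predicts."""
    centroid = points.mean(axis=0)
    return centroid + factor * (points - centroid)

rng = np.random.default_rng(0)
cloud = rng.uniform(-5.0, 5.0, size=(1000, 3))   # synthetic point cloud
adv = scale_points(cloud, factor=1.1)            # 10% scaling perturbation
```

Because the centroid is unchanged and all pairwise structure is merely rescaled, a human inspecting the cloud sees the same scene, which is the stealth property the abstract highlights.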

간단한 기구부와 결합한 공간증강현실 시스템의 샘플 기반 제어 방법 (Sampling-based Control of SAR System Mounted on A Simple Manipulator)

  • 이아현;이주호;이주행
    • 한국CDE학회논문집 / Vol. 19, No. 4 / pp.356-367 / 2014
  • A robotic sapatial augmented reality (RSAR) system, which combines robotic components with projector-based AR technique, is unique in its ability to expand the user interaction area by dynamically changing the position and orientation of a projector-camera unit (PCU). For a moving PCU mounted on a conventional robotic device, we can compute its extrinsic parameters using a robot kinematics method assuming a link and joint geometry is available. In a RSAR system based on user-created robot (UCR), however, it is difficult to calibrate or measure the geometric configuration, which limits to apply a conventional kinematics method. In this paper, we propose a data-driven kinematics control method for a UCR-based RSAR system. The proposed method utilized a pre-sampled data set of camera calibration acquired at sufficient instances of kinematics configurations in fixed joint domains. Then, the sampled set is compactly represented as a set of B-spline surfaces. The proposed method have merits in two folds. First, it does not require any kinematics model such as a link length or joint orientation. Secondly, the computation is simple since it just evaluates a several polynomials rather than relying on Jacobian computation. We describe the proposed method and demonstrates the results for an experimental RSAR system with a PCU on a simple pan-tilt arm.