• Title/Summary/Keyword: Robot Calibration

Search Results: 208

Determination of the Position for a Mobile Robot using a Neural Network (신경회로망을 이용한 이동로봇의 위치결정)

  • 이효진;이기성;곽한택
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1996.10a / pp.219-222 / 1996
  • During the navigation of a mobile robot, one of the essential tasks is to determine the robot's absolute location. In this paper, we propose a method to determine the position of the camera relative to a landmark from the visual image of a quadrangle-shaped landmark, using a neural network. In determining the camera position in world coordinates, there is a difference between the real and the calculated values because of pixel uncertainty, incorrect camera calibration, lens distortion, etc. This paper describes a solution to this problem using a BPN (Back Propagation Network). The experimental results show the superiority of the proposed method over the conventional method in determining the camera position.
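A minimal, illustrative sketch of the correction idea in this abstract (not the paper's implementation): a small back-propagation network maps the pixel coordinates of the four landmark corners to a camera position, absorbing pixel uncertainty, calibration error, and lens distortion in its learned mapping. The network size, placeholder data, and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed data: 8 inputs = pixel coords of the 4 landmark corners,
# 3 outputs = camera position (x, y, z) in world coordinates.
X = rng.normal(size=(500, 8))
Y = rng.normal(size=(500, 3))           # placeholders for measured positions

# One-hidden-layer back-propagation network (BPN).
W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 3)); b2 = np.zeros(3)
lr = 0.01

for epoch in range(500):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    P = H @ W2 + b2                     # predicted camera position
    E = P - Y                           # output error
    # Back-propagate the squared-error gradient and update the weights.
    dW2 = H.T @ E / len(X);  db2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1 - H**2)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```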


Research Prospects for Micro Electromechanical Devices (극소형 전자기계장치에 관한 연구전망)

  • 양상식
    • 전기의세계 / v.39 no.6 / pp.14-19 / 1990
  • 1. By interfacing the CAD system with PROPS, surfaces designed in the CAD system can be used, and by organizing the robot kinematics into a graphic library, robot motions for machining operations can be simulated through surface placement, path generation, and animation. 2. Using the Denavit-Hartenberg transformation form, the kinematics of various robots were built into the library in a general format. 3. The mold machining processes were organized into menus and an expert system was introduced, enabling easy interactive operation. 4. The next research goal is to establish a complete off-line programming system through the development and implementation of robot calibration software and the completion of a robot program generator based on the expert system. To this end, more practical tool path generation, determination of machining conditions using the expert system, and a window-based user interface must be developed. 5. The flexibility of the Robotonomic Tool System developed in the first year will be extended, and the process automation system will be expanded based on the experimental results. 6. Standardization of the tools and tool tips essential for polishing process automation, together with an automatic tool changer, will be developed. 7. The interfaces between the components of the mold polishing cell will be integrated under the system controller.
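Point 2 of the summary refers to building robot kinematics into a library via the Denavit-Hartenberg form. A minimal sketch of that idea, with made-up link parameters (not taken from the report):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one Denavit-Hartenberg link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_table, joint_angles):
    """Chain the link transforms for an arbitrary serial robot."""
    T = np.eye(4)
    for (d, a, alpha), theta in zip(dh_table, joint_angles):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Illustrative 3-link arm: one (d, a, alpha) triple per link.
dh_table = [(0.3, 0.0, np.pi / 2), (0.0, 0.4, 0.0), (0.0, 0.3, 0.0)]
print(forward_kinematics(dh_table, [0.1, -0.5, 0.3]))
```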


A Head-Eye Calibration Technique Using Image Rectification (영상 교정을 이용한 헤드-아이 보정 기법)

  • Kim, Nak-Hyun;Kim, Sang-Hyun
    • Journal of the Institute of Electronics Engineers of Korea TC / v.37 no.8 / pp.11-23 / 2000
  • Head-eye calibration is the process of estimating the unknown orientation and position of a camera with respect to a mobile platform, such as a robot wrist. We present a new head-eye calibration technique that can be applied to platforms with rather limited motion capability. In particular, the proposed technique can be used to find the relative orientation of a camera mounted on a linear translation platform that has no rotation capability. The algorithm finds the rotation using calibration data obtained from pure translations of the camera along two different axes. We have derived a calibration algorithm exploiting the rectification technique in such a way that the rectified images satisfy the epipolar constraint. We present the calibration procedure for both the rotation and the translation components of the camera relative to the platform coordinates. The efficacy of the algorithm is demonstrated through simulations and real experiments.
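A hedged sketch of the geometric core of this setup: given the directions of two pure translations expressed both in platform coordinates and in camera coordinates (e.g., recovered from the epipolar geometry of each image pair), the camera-to-platform rotation can be estimated with an orthogonal Procrustes / SVD fit. This illustrates only that step, not the paper's rectification-based procedure; all names and the synthetic check are assumptions.

```python
import numpy as np

def rotation_from_translations(dirs_platform, dirs_camera):
    """Estimate R such that R @ v_platform ~ v_camera, given matched
    unit translation directions observed in both frames (>= 2 pairs)."""
    A = np.asarray(dirs_platform, float)   # rows: directions in platform frame
    B = np.asarray(dirs_camera, float)     # rows: same directions in camera frame
    H = A.T @ B                            # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T                  # proper rotation (det = +1)

# Synthetic check: two platform translation axes seen through a known rotation.
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
axes_platform = np.array([[1, 0, 0], [0, 1, 0]], float)
axes_camera = (R_true @ axes_platform.T).T
print(np.allclose(rotation_from_translations(axes_platform, axes_camera), R_true))
```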


Projection mapping onto multiple objects using a projector robot

  • Yamazoe, Hirotake;Kasetani, Misaki;Noguchi, Tomonobu;Lee, Joo-Ho
    • Advances in robotics research / v.2 no.1 / pp.45-57 / 2018
  • Even though the popularity of projection mapping continues to increase and it is being implemented in more and more settings, most current projection mapping systems are limited to special purposes, such as outdoor events, live theater, and musical performances. This lack of versatility arises from the large number of projectors needed and their proper calibration. Furthermore, we cannot change the positions and poses of the projectors, or of their projection targets, after the projectors have been calibrated. To overcome these problems, we propose a projection mapping method using a projector robot that can perform projection mapping in more general or ubiquitous situations, such as shopping malls. We can estimate a projector's position and pose with the robot's self-localization sensors, but the accuracy of this approach remains inadequate for projection mapping. Consequently, the proposed method solves this problem by combining self-localization by robot sensors with position and pose estimation of projection targets based on a 3D model. We first obtain the projection target's 3D model and then use it to accurately estimate the target's position and pose, and thus achieve accurate projection mapping with a projector robot. In addition, our proposed method performs accurate projection mapping even after a projection target has been moved, which often occurs in shopping malls. In this paper, we employ the Ubiquitous Display (UD), which we are researching as a projector robot, to experimentally evaluate the effectiveness of the proposed method.
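As a rough, hypothetical sketch of the pose bookkeeping such a system needs (the frame names and offsets below are assumptions, not the paper's implementation): the projector pose in the world frame is the robot's self-localized pose composed with a fixed mounting transform, and the projection target's model-registered pose is then re-expressed in the projector frame before the content is warped.

```python
import numpy as np

def se2_to_se3(x, y, yaw):
    """Planar robot pose (x, y, yaw) as a 4x4 homogeneous transform."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:2, 3] = [x, y]
    return T

# Assumed inputs: self-localization gives the robot pose in the world frame,
# calibration gives the fixed projector mounting transform on the robot, and
# 3D-model registration gives the target pose in the world frame.
T_world_robot = se2_to_se3(2.0, 1.5, np.deg2rad(30))
T_robot_projector = np.eye(4); T_robot_projector[2, 3] = 1.2   # 1.2 m above base
T_world_target = se2_to_se3(3.0, 2.0, 0.0)

# Target pose expressed in the projector frame: the transform the projected
# content must be warped against.
T_world_projector = T_world_robot @ T_robot_projector
T_projector_target = np.linalg.inv(T_world_projector) @ T_world_target
print(T_projector_target)
```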

3D Feature Based Tracking using SVM

  • Kim, Se-Hoon;Choi, Seung-Joon;Kim, Sung-Jin;Won, Sang-Chul
    • Proceedings of the Institute of Control, Robotics and Systems Conference (제어로봇시스템학회 학술대회논문집) / 2004.08a / pp.1458-1463 / 2004
  • Tracking is one of the most important prerequisite tasks for many applications such as human-computer interaction through gesture and face recognition, motion analysis, visual servoing, augmented reality, industrial assembly, and robot obstacle avoidance. Recently, 3D information about objects has been required in real time for many of the aforementioned applications. 3D tracking is a difficult problem to solve because explicit 3D information about objects in the scene is lost during the image formation process of the camera. Recently, many vision systems have used stereo cameras, especially for 3D tracking. 3D feature-based tracking (3DFBT), one of the 3D tracking approaches that use stereo vision, has many advantages compared to other tracking methods. If we assume that the correspondence problem, one of the subproblems of 3DFBT, is solved, the accuracy of tracking depends on the accuracy of camera calibration. However, the existing calibration methods are based on an accurate camera model, so modelling error and sensitivity to lens distortion are embedded. Therefore, this paper proposes a 3D feature-based tracking method using an SVM, which is used to solve the reconstruction problem.
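A hedged illustration of the reconstruction idea the abstract gestures at: instead of triangulating through an explicit camera model, a support vector regressor is trained to map matched stereo pixel coordinates directly to 3D coordinates. The data shapes, kernel, and hyperparameters here are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Assumed training set: N matched stereo observations (uL, vL, uR, vR)
# paired with ground-truth 3D points, e.g. from a calibration rig.
pixels = rng.uniform(0, 640, size=(300, 4))
points_3d = rng.uniform(-1, 1, size=(300, 3))   # placeholder targets

# One SVR per output coordinate (X, Y, Z) with an RBF kernel.
model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(pixels, points_3d)

# Reconstruct the 3D position of a newly tracked stereo feature.
print(model.predict(pixels[:1]))
```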


Improved LiDAR-Camera Calibration Using Marker Detection Based on 3D Plane Extraction

  • Yoo, Joong-Sun;Kim, Do-Hyeong;Kim, Gon-Woo
    • Journal of Electrical Engineering and Technology / v.13 no.6 / pp.2530-2544 / 2018
  • In this paper, we propose an enhanced LiDAR-camera calibration method that extracts the marker plane from 3D point cloud information. In previous work, we estimated the straight line of each board edge to obtain the vertices. However, the errors in the point information along the z axis were not considered. These errors are caused by the effects of user selection on the board border. Because of the nature of LiDAR, the point information is separated in the horizontal direction, causing the approximated straight-line model to be erroneous. In the proposed work, we obtain each vertex by estimating a rectangle from a plane rather than obtaining a point from each straight line, in order to locate the vertices more precisely than in the previous study. The advantage of using planes is that it is easier to select the area, and most of the point information on the board is available. We demonstrated through experiments that the proposed method obtains more accurate results than the previous method.
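A minimal sketch of the plane-extraction step only (a least-squares plane fit to the LiDAR points selected on the marker board); the paper's rectangle fitting and vertex recovery are not shown, and the synthetic point cloud is an assumption.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point cloud.
    Returns (unit normal, centroid); the plane is n . (p - c) = 0."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]                     # direction of smallest variance
    return n, c

def project_to_plane(points, n, c):
    """Flatten noisy board points onto the fitted plane before
    estimating the marker rectangle and its vertices."""
    d = (points - c) @ n
    return points - np.outer(d, n)

# Assumed input: LiDAR returns lying roughly on the calibration board.
pts = np.random.default_rng(2).normal(size=(200, 3)) * [1.0, 1.0, 0.02]
n, c = fit_plane(pts)
flat = project_to_plane(pts, n, c)
```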

A Position Measurements of Moving Object in 2D Plane (2차원 평면상에서 이동하는 물체의 위치측정)

  • Ro, Jae-Hee;Lee, Yong-Jung;Choi, Jae-Ha;Ro, Young-Shick;Lee, Yang-Burm
    • The Transactions of the Korean Institute of Electrical Engineers A / v.48 no.12 / pp.1537-1543 / 1999
  • In this paper, a PSD (Position Sensitive Detector) sensor system that estimates the position of moving objects in a 2D plane is developed. The PSD sensor is used to measure the position of an incident light spot in real time. To obtain the position of the light source on the moving target, a new parameter calibration algorithm and a neural network technique are proposed and applied. Real-time position measurements of a mobile robot carrying the light source are carried out to validate the proposed method. It is shown that the proposed technique provides accurate position estimation of the moving object.
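For reference, a hedged sketch of how a 2D lateral-effect PSD yields a light-spot position from its electrode photocurrents; the paper's parameter calibration and neural-network correction would be applied on top of this raw reading, and the numbers below are illustrative.

```python
def psd_spot_position(i_x1, i_x2, i_y1, i_y2, length_x, length_y):
    """Light-spot position on a 2D PSD from its electrode photocurrents.
    Standard lateral-effect relation: the offset along each axis is
    proportional to the normalized current difference on that axis."""
    x = 0.5 * length_x * (i_x2 - i_x1) / (i_x2 + i_x1)
    y = 0.5 * length_y * (i_y2 - i_y1) / (i_y2 + i_y1)
    return x, y

# Example: a spot slightly off-center on a 10 mm x 10 mm PSD.
print(psd_spot_position(1.2e-6, 1.4e-6, 1.3e-6, 1.3e-6, 10.0, 10.0))
```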


Position Measurements of Moving Object in Cartesian Coordinate (직교좌표에서 이동물체의 위치측정)

  • 이용중;노재희;이양범
    • Transactions of the Korean Society of Machine Tool Engineers / v.10 no.1 / pp.36-42 / 2001
  • In this paper, a PSD (Position Sensitive Detector) sensor system that estimates the position of moving objects in a 2D plane is developed. The PSD sensor is used to measure the position of an incident light spot in real time. To obtain the position of the light source on the moving target, a new parameter calibration algorithm and a neural network technique are proposed and applied. Real-time position measurements of a mobile robot carrying the light source are carried out to validate the proposed method. It is shown that the proposed technique provides accurate position estimation of the moving object.


Attitude Estimation of a Foot for Biped Robots Using Multiple Sensors (다중 센서 융합을 통한 이족 보행 로봇 발의 자세 추정)

  • Ryu, Je-Hun;You, Bun-Jae;Park, Min-Yong;Kim, Do-Yoon;Choi, Young-Jin;Oh, Sang-Rok
    • Proceedings of the KIEE Conference / 2004.11c / pp.586-588 / 2004
  • Although a stable control algorithm has been implemented on the biped robot, stability is not guaranteed because of encoder errors and/or rigid-body elasticity. Hence, precise body pose estimation is required for a more natural and longer walk. In particular, pelvis tilting due to gravity and uneven ground at the landing place are the most critical causes of undulating motion. In order to overcome these difficulties, an estimation system for foot position and orientation using PSD sensors and gyro sensors is proposed, along with a calibration algorithm and experimental verification.
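A minimal, assumption-laden sketch of one common way to fuse the two sensor types mentioned above: the gyro rate is integrated for short-term attitude and corrected by the low-frequency inclination implied by the position measurement (a complementary filter). The paper's actual fusion scheme may differ; the gains and toy data are assumptions.

```python
import numpy as np

def complementary_filter(angle, gyro_rate, incl_meas, dt, alpha=0.98):
    """One filter step: trust the integrated gyro at high frequency and
    the noisy but drift-free inclination measurement at low frequency."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * incl_meas

# Toy run: constant true tilt of 2 degrees, biased gyro, noisy inclination.
rng = np.random.default_rng(3)
angle, dt = 0.0, 0.005
for _ in range(2000):
    gyro = 0.0 + 0.05                      # rate (deg/s) with a small bias
    incl = 2.0 + rng.normal(scale=0.3)     # inclination measurement (deg)
    angle = complementary_filter(angle, gyro, incl, dt)
print(angle)                               # settles near 2 degrees
```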


Mobile Robot Localization using Ubiquitous Vision System (시각기반 센서 네트워크를 이용한 이동로봇의 위치 추정)

  • Dao, Nguyen Xuan;Kim, Chi-Ho;You, Bum-Jae
    • Proceedings of the KIEE Conference / 2005.07d / pp.2780-2782 / 2005
  • In this paper, we present a mobile robot localization solution using a Ubiquitous Vision System (UVS). The collective information gathered by multiple strategically placed cameras has many advantages. For example, aggregating information from multiple viewpoints reduces the uncertainty about the robots' positions. We construct the UVS as a multi-agent system by regarding each vision sensor as one vision agent (VA). Each VA performs target segmentation based on color and motion information as well as visual tracking of multiple objects. Our modified identified contract net (ICN) protocol is used for communication between VAs to coordinate multiple tasks. This protocol improves the scalability and modularity of the system because it is independent of the number of VAs and requires no calibration. Furthermore, the handover between VAs using the ICN is seamless. Experimental results show the robustness of the solution over a widespread area. The performance in indoor environments shows the feasibility of the proposed solution in real time.
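A small sketch of the uncertainty-reduction argument in the abstract (not the paper's algorithm): position estimates of the same robot from several vision agents can be fused by inverse-covariance weighting, which shrinks the combined uncertainty below that of any single camera. The 2D state, covariances, and numbers are assumptions.

```python
import numpy as np

def fuse_estimates(positions, covariances):
    """Information-form fusion of independent 2D position estimates
    of the same robot reported by several cameras."""
    info = np.zeros((2, 2))
    info_vec = np.zeros(2)
    for p, C in zip(positions, covariances):
        W = np.linalg.inv(C)      # weight = inverse covariance
        info += W
        info_vec += W @ p
    fused_cov = np.linalg.inv(info)
    return fused_cov @ info_vec, fused_cov

# Two vision agents observing the same robot with different accuracies.
p1, C1 = np.array([1.00, 2.10]), np.diag([0.04, 0.04])
p2, C2 = np.array([1.08, 1.95]), np.diag([0.01, 0.09])
fused, cov = fuse_estimates([p1, p2], [C1, C2])
print(fused, np.diag(cov))        # fused variance is smaller on both axes
```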
