• Title/Summary/Keyword: eye-in-hand camera


A New Hand-eye Calibration Technique to Compensate for the Lens Distortion Effect (렌즈왜곡효과를 보상하는 새로운 Hand-eye 보정기법)

  • Chung, Hoi-Bum
    • Proceedings of the KSME Conference
    • /
    • 2000.11a
    • /
    • pp.596-601
    • /
    • 2000
  • In a robot/vision system, the vision sensor, typically a CCD array sensor, is mounted on the robot hand. The problem of determining the relationship between the camera frame and the robot hand frame is referred to as hand-eye calibration. In the literature, various methods have been suggested for camera calibration and for sensor registration. Recently, a one-step approach that combines camera calibration and sensor registration was suggested by Horaud & Dornaika. In this approach, the camera extrinsic parameters need not be determined at every robot configuration. In this paper, by modifying the camera model and including the lens distortion effect in the perspective transformation matrix, a new one-step approach to hand-eye calibration is proposed. (See the sketch after this entry.)

  • PDF
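
The key modification described in this entry is folding the lens distortion effect into the camera projection model. Below is a minimal sketch of such a distortion-augmented projection, assuming a generic polynomial radial model (coefficients k1, k2) and simple pinhole intrinsics; it illustrates the idea only and is not the paper's parameterization or its one-step calibration procedure.

```python
import numpy as np

def project_with_distortion(X_cam, fx, fy, cx, cy, k1, k2=0.0):
    """Project 3-D points given in the camera frame to pixels with radial distortion.

    X_cam          : (N, 3) array of points expressed in the camera frame.
    fx, fy, cx, cy : pinhole intrinsics (focal lengths and principal point).
    k1, k2         : radial distortion coefficients (illustrative model).
    """
    x = X_cam[:, 0] / X_cam[:, 2]           # normalized image coordinates
    y = X_cam[:, 1] / X_cam[:, 2]
    r2 = x**2 + y**2
    scale = 1.0 + k1 * r2 + k2 * r2**2      # radial distortion factor
    u = fx * scale * x + cx                 # distorted pixel coordinates
    v = fy * scale * y + cy
    return np.stack([u, v], axis=1)

# Example: one point a metre in front of the camera, slightly off-axis.
pts = np.array([[0.05, 0.02, 1.0]])
print(project_with_distortion(pts, fx=800, fy=800, cx=320, cy=240, k1=-0.25))
```

In a one-step scheme, distortion coefficients of this kind would be estimated jointly with the hand-eye transform by minimizing reprojection error over several robot configurations.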

A New Hand-eye Calibration Technique to Compensate for the Lens Distortion Effect (렌즈왜곡효과를 보상하는 새로운 hand-eye 보정기법)

  • Chung, Hoi-Bum
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.19 no.1
    • /
    • pp.172-179
    • /
    • 2002
  • In a robot/vision system, the vision sensor, typically a CCD array sensor, is mounted on the robot hand. The problem of determining the relationship between the camera frame and the robot hand frame is referred to as hand-eye calibration. In the literature, various methods have been suggested for camera calibration and for sensor registration. Recently, a one-step approach that combines camera calibration and sensor registration was suggested by Horaud & Dornaika. In this approach, the camera extrinsic parameters need not be determined at every robot configuration. In this paper, by modifying the camera model and including the lens distortion effect in the perspective transformation matrix, a new one-step approach to hand-eye calibration is proposed.

A New Landmark-Based Visual Servoing with Stereo Camera for Door Opening

  • Han, Myoung-Soo;Lee, Soon-Geul;Park, Sung-Kee;Kim, Munsang
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2002.10a
    • /
    • pp.100.2-100
    • /
    • 2002
  • In this paper we propose a new visual servoing method for door opening with a mobile manipulator. We use an eye-to-hand system in which a stereo camera is mounted on the mobile platform, and we adopt the position-based method. Previous methods for door opening mostly used an eye-in-hand system with a mono camera and required predefined knowledge, such as the radius and position of the door grip, mainly because of the mono camera. This is also a severe constraint on pursuing a general-purpose door-opening algorithm. To overcome this drawback, we use a stereo camera and suggest a new method that detects the door grip and estimates its pose from stereo depth information without predefined knowledge. Al... (See the sketch after this entry.)

  • PDF
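
Because the method in this entry relies on stereo depth rather than prior knowledge of the grip geometry, a back-projection step of the following kind is implied. This is a generic sketch for a rectified stereo pair with assumed intrinsics and baseline, not the authors' actual grip-detection or pose-estimation pipeline.

```python
import numpy as np

def stereo_point(u_left, v_left, disparity, fx, fy, cx, cy, baseline):
    """Back-project a matched pixel from a rectified stereo pair to 3-D.

    disparity : horizontal pixel offset between the left and right views.
    baseline  : distance between the two camera centres (same unit as output).
    """
    z = fx * baseline / disparity           # depth from disparity
    x = (u_left - cx) * z / fx
    y = (v_left - cy) * z / fy
    return np.array([x, y, z])

# Example: a door-grip corner seen at pixel (400, 260) with 25 px disparity.
p = stereo_point(400, 260, 25.0, fx=700, fy=700, cx=320, cy=240, baseline=0.12)
print(p)   # grip corner in the left-camera frame (metres, for a 0.12 m baseline)
```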

A Study on the Improvement of Pose Information of Objects by Using Trinocular Vision System (Trinocular Vision System을 이용한 물체 자세정보 인식 향상방안)

  • Kim, Jong Hyeong;Jang, Kyoungjae;Kwon, Hyuk-dong
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.26 no.2
    • /
    • pp.223-229
    • /
    • 2017
  • Recently, robotic bin-picking tasks have drawn considerable attention because flexibility is required in robotic assembly tasks. Stereo camera systems have been widely used for robotic bin-picking, but they have two limitations: first, the computational burden of solving the correspondence problem on stereo images increases calculation time; second, errors in image processing and camera calibration reduce accuracy. Moreover, errors in the robot kinematic parameters directly affect gripping. In this paper, we propose a method of correcting the bin-picking error by using a trinocular vision system that consists of two stereo cameras and one hand-eye camera. First, the two stereo cameras, with a wide viewing angle, measure the object's pose roughly. Then, the third hand-eye camera approaches the object and corrects the previous measurement of the stereo camera system. Experimental results show the usefulness of the proposed method.

Development of a shape measuring system by hand-eye robot (Hand-Eye Robot에 의한 형상계측 시스템의 개발)

  • 정재문;김선일;양윤모
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1990.10a
    • /
    • pp.586-590
    • /
    • 1990
  • In this paper we describe a shape-measuring technique and system based on a non-contact sensor composed of a slit-ray projector and a solid-state camera. To improve accuracy and avoid measurement dead points, the sensor is attached to the end of the robot, and a measurement is taken after each step of motion. By patching these measurements together, the complete shape data are constructed. The calibration between the sensor and world coordinates is carried out with a specific calibration block using the transformation-matrix method. The experimental results were satisfactory. (See the sketch after this entry.)

  • PDF
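
This entry describes taking one slit-ray measurement per robot step and patching the scans together through a sensor-to-world transformation matrix. A minimal sketch of that merging step, assuming each scan comes with a known 4x4 sensor pose obtained from the robot pose and the calibration block:

```python
import numpy as np

def to_world(points_sensor, T_world_sensor):
    """Map 3-D points from the sensor frame to the world frame.

    points_sensor  : (N, 3) points measured by the slit-ray sensor.
    T_world_sensor : 4x4 homogeneous transform of the sensor in the world.
    """
    n = points_sensor.shape[0]
    homog = np.hstack([points_sensor, np.ones((n, 1))])   # (N, 4)
    return (T_world_sensor @ homog.T).T[:, :3]

# Example: merge two scans taken at two robot poses into one point cloud.
scan_a = np.array([[0.01, 0.00, 0.30], [0.02, 0.00, 0.31]])
scan_b = np.array([[0.00, 0.01, 0.29]])
T_a = np.eye(4)
T_b = np.eye(4); T_b[0, 3] = 0.05          # second pose shifted 5 cm along x
cloud = np.vstack([to_world(scan_a, T_a), to_world(scan_b, T_b)])
print(cloud)
```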

3 Dimensional Position Estimation for Eye-in-Hand Robot Vision (Eye-in-Hand 로보트 비젼을 이용한 3차원 위치 정보의 추정)

  • Jang, Won;Kim, Kyung-Jin;Chung, Myung-Jin;Bien, Zeungnam
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.26 no.8
    • /
    • pp.1152-1160
    • /
    • 1989
  • This paper describes a 3-dimensional position estimation method for an eye-in-hand robot vision system. The camera is mounted on the tip of a robot manipulator and moves without restriction. Sequences of images are considered simultaneously in a nonlinear least-squares formulation, and the best estimate of the 3-dimensional position is found by the simplex search algorithm. The experiments show that the proposed method can provide fairly accurate position information while the robot is executing a given task. (See the sketch after this entry.)

  • PDF
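
A rough sketch of estimating a 3-dimensional point from an image sequence by nonlinear least squares with a simplex search, as outlined in this entry. It assumes a bare pinhole model with known camera poses and uses SciPy's Nelder-Mead simplex as a stand-in for the paper's search; the focal length, poses, and observations are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

F = 800.0  # assumed focal length in pixels

def project(X, T_cam):
    """Pinhole projection of world point X through a camera with pose T_cam."""
    Xc = T_cam[:3, :3] @ X + T_cam[:3, 3]
    return F * Xc[:2] / Xc[2]

def reprojection_cost(X, poses, observations):
    """Sum of squared pixel errors over all images in the sequence."""
    total = 0.0
    for T, uv in zip(poses, observations):
        Xc = T[:3, :3] @ X + T[:3, 3]
        if Xc[2] <= 1e-6:                    # point behind the camera: penalize
            return 1e12
        total += np.sum((F * Xc[:2] / Xc[2] - uv) ** 2)
    return total

# Example: two camera poses (known from robot kinematics) observe one point.
true_X = np.array([0.10, -0.05, 0.80])
T1 = np.eye(4)
T2 = np.eye(4); T2[0, 3] = -0.10             # camera moved 10 cm along x
poses = [T1, T2]
observations = [project(true_X, T) for T in poses]

result = minimize(reprojection_cost, x0=np.array([0.0, 0.0, 1.0]),
                  args=(poses, observations), method="Nelder-Mead")
print(result.x)                              # should land near true_X
```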

Neural Network Based Camera Calibration and 2-D Range Finding (신경회로망을 이용한 카메라 교정과 2차원 거리 측정에 관한 연구)

  • 정우태;고국원;조형석
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1994.10a
    • /
    • pp.510-514
    • /
    • 1994
  • This paper deals with an application of neural networks to camera calibration with a wide-angle lens and to 2-D range finding. A wide-angle lens has the advantage of a wide viewing angle for mobile environment recognition and robot eye-in-hand systems, but it suffers from severe radial distortion. A multilayer neural network is used to calibrate the camera while accounting for lens distortion, and it is trained by the error back-propagation method. The MLP maps between the camera image plane and the plane made by the structured light. In the experiments, camera calibration was carried out with a calibration chart printed on a laser printer at 300 dpi resolution. A high-distortion lens, a COSMICAR 4.2 mm, was used to see whether the neural network could effectively calibrate the camera distortion. The 2-D range of several objects was measured with a laser range-finding system composed of a camera, a frame grabber, and laser structured light. The performance of the range-finding system was evaluated through experiments and analysis of the results. (See the sketch after this entry.)

  • PDF
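
A rough sketch of learning the image-to-plane mapping with a small MLP trained by plain error back-propagation, as in this entry. The synthetic radial distortion, network size, and learning rate are assumptions made for illustration; the paper's chart, lens, and network details differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "calibration chart": distorted pixel coords -> plane (x, y) in mm.
# A mild quadratic radial distortion stands in for the wide-angle lens.
plane = rng.uniform(-50, 50, size=(500, 2))              # points on the light plane
r2 = np.sum((plane / 50.0) ** 2, axis=1, keepdims=True)
pixels = plane * (1.0 - 0.2 * r2) * 4.0 + 320.0          # distorted pixel coords

# Two-layer tanh MLP trained by error back-propagation (full batch).
W1 = rng.normal(0, 0.1, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 2)); b2 = np.zeros(2)
x = (pixels - 320.0) / 200.0                             # normalized inputs
y = plane / 50.0                                         # normalized targets
lr = 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)                             # forward pass
    pred = h @ W2 + b2
    err = pred - y                                       # output error
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)                       # back-propagated error
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

rmse_mm = np.sqrt(np.mean((pred - y) ** 2)) * 50.0       # back to millimetres
print(f"training RMSE ~ {rmse_mm:.2f} mm")
```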

Real-time Depth Estimation for Visual Servoing with Eye-in-Hand Robot (아이인핸드로봇의 영상 추적을 위한 실시간 거리측정)

  • Park, Jong-Cheol;Bien, Zeung-Nam;Ro, Cheol-Rae
    • Proceedings of the KIEE Conference
    • /
    • 1996.07b
    • /
    • pp.1122-1124
    • /
    • 1996
  • The depth between the robot and the target is essential information for robot control. However, in the case of an eye-in-hand robot with a single camera, it is not easy to obtain accurate depth information in real time. In this paper, the techniques of depth-from-motion and depth-from-focus are combined to meet the real-time requirement. Integration of the two approaches is accomplished through the appropriate use of confidence factors, which are evaluated by fuzzy rules. A fuzzy-logic-based calibration technique is also proposed. (See the sketch after this entry.)

  • PDF
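
A minimal sketch of combining depth-from-motion and depth-from-focus through fuzzy confidence factors, in the spirit of this entry. The membership functions and thresholds below are invented for illustration; the paper's actual fuzzy rule base is not reproduced.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership on [a, c] peaking at b."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def fuse_depth(z_motion, image_motion_px, z_focus, focus_measure):
    """Fuse the two depth cues with fuzzy confidence weights.

    Assumption: motion-based depth is trusted when the inter-frame image
    motion is large; focus-based depth is trusted when the focus measure
    (e.g. local contrast) is high.
    """
    w_motion = triangular(image_motion_px, 0.0, 20.0, 40.0)
    w_focus = triangular(focus_measure, 0.0, 1.0, 2.0)
    if w_motion + w_focus == 0.0:
        return None                          # neither cue is reliable
    return (w_motion * z_motion + w_focus * z_focus) / (w_motion + w_focus)

print(fuse_depth(z_motion=0.45, image_motion_px=12.0,
                 z_focus=0.50, focus_measure=0.8))
```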

A Novel Visual Servoing Approach For Keeping Feature Points Within The Field-of-View (특징점이 Field of View를 벗어나지 않는 새로운 Visual Servoing 기법)

  • Park, Do-Hwan;Yeom, Joon-Hyung;Park, Noh-Yong;Ha, In-Joong
    • Proceedings of the KIEE Conference
    • /
    • 2007.04a
    • /
    • pp.322-324
    • /
    • 2007
  • In this paper, an eye-in-hand visual servoing strategy for keeping feature points within the FOV (field of view) is proposed. We first specify the FOV constraint that must be satisfied to keep the feature points within the FOV. It is expressed as an inequality relationship between (i) the LOS (line-of-sight) angle of the center of the feature points from the optical axis of the camera and (ii) the distance between the object and the camera. We then design a nonlinear feedback controller that linearly decouples the translational and rotational control loops. Finally, we show that an appropriate choice of controller gains guarantees satisfaction of the FOV constraint. The main advantage of our approach over previous ones is that the trajectory of the camera is smooth and nearly circular. Furthermore, it can be applied to the large-camera-displacement problem. (See the sketch after this entry.)

  • PDF
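
A minimal sketch of the line-of-sight angle test underlying such an FOV constraint: keep the angle between the camera's optical axis and the line of sight to the feature centroid below the half field of view. The paper's actual constraint couples this angle with the object-camera distance, and neither that inequality nor the decoupled controller is reproduced here.

```python
import numpy as np

def los_angle(p_cam):
    """Angle between the optical axis (+z) and the line of sight to p_cam."""
    p = np.asarray(p_cam, dtype=float)
    return np.arccos(p[2] / np.linalg.norm(p))

def within_fov(feature_centroid_cam, half_fov_rad, margin_rad=0.0):
    """Crude FOV check with an optional safety margin (illustrative only)."""
    return los_angle(feature_centroid_cam) <= half_fov_rad - margin_rad

# Example: centroid 0.6 m ahead and 0.2 m off-axis, lens with a 30 deg half-FOV.
c = [0.2, 0.0, 0.6]
print(np.degrees(los_angle(c)), within_fov(c, np.radians(30.0), np.radians(2.0)))
```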

A New Eye Tracking Method as a Smartphone Interface

  • Lee, Eui Chul;Park, Min Woo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.4
    • /
    • pp.834-848
    • /
    • 2013
  • To use smartphone functions effectively, many kinds of human-phone interfaces are used, such as touch, voice, and gesture. However, the most important one, the touch interface, cannot be used by a hand-disabled person or when both hands are busy. Although eye tracking is a superb human-computer interface method, it has not been applied to smartphones because of the small screen size, the frequently changing geometric position between the user's face and the phone screen, and the low resolution of the frontal cameras. In this paper, a new eye tracking method is proposed to act as a smartphone user interface. To maximize eye image resolution, a zoom lens and three infrared LEDs are adopted. The proposed method has the following novelties. First, the appropriate camera specification and image resolution for smartphone-based gaze tracking are analyzed. Second, facial movement is allowed as long as one eye region is included in the image. Third, the method can operate in both landscape and portrait screen modes. Fourth, only two LED reflection positions are used to calculate the gaze position, based on the 2-D geometric relation between the reflection rectangle and the screen. Fifth, a prototype mock-up module is built to confirm the feasibility of applying the method to an actual smartphone. Experimental results showed that the gaze estimation error was about 31 pixels at a screen resolution of 480×800 and that the average hit ratio on a 5×4 icon grid was 94.6%. (See the sketch below.)
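
A rough sketch of interpolation-style gaze mapping from two corneal glints, in the spirit of the 2-D geometric relation mentioned in this entry. The normalization by glint spacing, the calibration gains, and the screen geometry are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def gaze_from_glints(pupil, glint_left, glint_right,
                     screen_w=480, screen_h=800, gain_x=2.0, gain_y=2.0):
    """Map the pupil centre, expressed relative to two corneal glints, to a
    screen coordinate. The glint midpoint and spacing roughly normalize away
    head distance; gain_x and gain_y are per-user calibration gains.
    """
    pupil = np.asarray(pupil, float)
    g_l = np.asarray(glint_left, float)
    g_r = np.asarray(glint_right, float)
    mid = (g_l + g_r) / 2.0
    spacing = np.linalg.norm(g_r - g_l)
    offset = (pupil - mid) / spacing          # normalized gaze offset
    u = screen_w / 2.0 + gain_x * screen_w * offset[0]
    v = screen_h / 2.0 + gain_y * screen_h * offset[1]
    return np.clip([u, v], [0, 0], [screen_w - 1, screen_h - 1])

# Example: pupil slightly right of and below the glint midpoint.
print(gaze_from_glints(pupil=[105, 82], glint_left=[90, 80], glint_right=[110, 80]))
```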