• Title/Summary/Keyword: 3D Robot Vision

Search Results: 138

A 3-D Vision Sensor Implementation on Multiple DSPs TMS320C31 (다중 TMS320C31 DSP를 사용한 3-D 비젼센서 Implementation)

  • Oksenhendler, V.;Bensrhair, Abdelaziz;Miche, Pierre;Lee, Sang-Goog
    • Journal of Sensor Science and Technology / v.7 no.2 / pp.124-130 / 1998
  • High-speed 3D vision systems are essential for autonomous robot and vehicle control applications. In our study, a stereo vision process has been developed. It consists of three steps: extraction of edges in the right and left images, matching of corresponding edges, and calculation of the 3D map. This process is implemented in a VME 150/40 Imaging Technology vision system, a modular system composed of a display card, an acquisition card, a four-Mbyte image frame memory, and three computational cards. The programmable accelerator computational modules run at 40 MHz and are based on the TMS320C31 DSP with a 64×32-bit instruction cache and two 1024×32-bit internal RAMs. Each is equipped with 512 Kbytes of static RAM, 4 Mbytes of image memory, 1 Mbyte of flash EEPROM, and a serial port. Data transfers and communications between modules are provided by three 8-bit global video buses and three locally configurable 8-bit pipeline video buses. The VME bus is dedicated to system management. Tasks are distributed among the DSPs as follows: two DSPs perform edge detection, one for the right image and the other for the left, while the third processor computes the matching process and the 3D calculation. With 512×512-pixel images, this sensor generates dense 3D maps at a rate of about 1 Hz, depending on scene complexity. Results could surely be improved by using specially suited multiprocessor cards.

  • PDF
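The 3D map calculation in the abstract above rests on standard stereo triangulation. A minimal sketch, assuming a pinhole model with made-up focal length and baseline values (the paper does not give its camera parameters):

```python
def depth_from_disparity(x_left, x_right, f_px=500.0, baseline_m=0.12):
    """Depth Z = f * B / d for a matched edge pixel pair.

    f_px (focal length in pixels) and baseline_m are illustrative
    assumptions, not values from the paper.
    """
    d = x_left - x_right  # disparity in pixels between matched edges
    if d <= 0:
        raise ValueError("non-positive disparity: bad match")
    return f_px * baseline_m / d
```

For example, a matched edge at column 320 in the left image and 310 in the right gives a disparity of 10 pixels and a depth of 6 m under these assumed parameters.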

A Study on Real-time Control of Bead Height and Joint Tracking Using Laser Vision Sensor

  • Kim, H. K.;Park, H.
    • International Journal of Korean Welding Society / v.4 no.1 / pp.30-37 / 2004
  • There have been continuous efforts to automate welding processes. This automation can be said to fall into two categories: weld seam tracking and weld quality evaluation. Recently, attempts to achieve these two functions simultaneously have been on the increase. For the study presented in this paper, a vision sensor was made and a vision system constructed, and, using these, the three-dimensional geometry of the bead is measured on-line. Because welding is a characteristically nonlinear process, a fuzzy controller is designed for the application. With this, an adaptive control system is proposed that acquires the bead height and the coordinates of points on the bead along the horizontal fillet joint, performs seam tracking with those data, and at the same time controls the bead geometry to a uniform shape. A communication system, which enables communication with the industrial robot, is designed to control the bead geometry and to track the weld seam. Experiments were made with varied offset angles from the pre-taught weld path, and they showed that the adaptive system produces favorable results.

  • PDF
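The abstract does not give the fuzzy rule base, but a controller of the kind it describes can be sketched with triangular memberships over the bead-height error; the breakpoints and output gains below are invented purely for illustration:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def travel_speed_correction(height_error_mm):
    """Weighted-average defuzzification over three illustrative rules.

    Positive output means "speed up" (bead too high), negative means
    "slow down" (bead too low); all numbers here are assumptions.
    """
    mu_low  = tri(height_error_mm, -2.0, -1.0, 0.0)   # bead too low
    mu_ok   = tri(height_error_mm, -1.0,  0.0, 1.0)   # bead on target
    mu_high = tri(height_error_mm,  0.0,  1.0, 2.0)   # bead too high
    num = mu_low * (-1.0) + mu_ok * 0.0 + mu_high * (+1.0)
    den = mu_low + mu_ok + mu_high
    return num / den if den > 0 else 0.0
```

A zero error yields no correction, while errors of ±1 mm saturate the rule for that side.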

Development of Automotive Position Measuring Vision System

  • Lee, Chan-Ho;Oh, Jong-Kyu;Hur, Jong-Sung;Han, Chul-Hi;Kim, Young-Su;Lee, Kyu-Ho;Hur, Jin
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.1511-1515 / 2004
  • Machine vision systems play an important role in factory automation. Many of their applications are found in the automobile manufacturing industry, as an eye for robotic automation systems. In this paper, an automobile position measuring vision system (APMVS) applicable to a manufacturing line for under-body painting of a car is introduced. The APMVS measures the position and orientation of the car body to be sealed or painted by the robots. The configuration of the overall robotic sealing/painting system, the design and application procedure, and application examples are described.

  • PDF
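Measuring a body's position and orientation from matched feature points, as the APMVS does, is commonly posed as a least-squares rigid fit. A minimal planar sketch (the point sets and the 2D simplification are assumptions for illustration, not the paper's method):

```python
import numpy as np

def fit_rigid_2d(ref, meas):
    """Return rotation angle (rad) and translation mapping ref points to meas points."""
    ref, meas = np.asarray(ref, float), np.asarray(meas, float)
    rc, mc = ref.mean(axis=0), meas.mean(axis=0)
    H = (ref - rc).T @ (meas - mc)                # 2x2 cross-covariance of centered points
    theta = np.arctan2(H[0, 1] - H[1, 0], H[0, 0] + H[1, 1])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = mc - R @ rc                               # translation after rotation
    return theta, t
```

Given three reference points and their measured images under a 90° rotation plus a (2, 3) shift, the fit recovers exactly that pose.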

Autonomous Calibration of a 2D Laser Displacement Sensor by Matching a Single Point on a Flat Structure (평면 구조물의 단일점 일치를 이용한 2차원 레이저 거리감지센서의 자동 캘리브레이션)

  • Joung, Ji Hoon;Kang, Tae-Sun;Shin, Hyeon-Ho;Kim, SooJong
    • Journal of Institute of Control, Robotics and Systems / v.20 no.2 / pp.218-222 / 2014
  • In this paper, we introduce an autonomous calibration method for a 2D laser displacement sensor (e.g. a laser vision sensor or laser range finder) by matching a single point on a flat structure. Many arc welding robots are fitted with a 2D laser displacement sensor to expand their application by recognizing their environment (e.g. base metal and seam). In such systems, sensing data should be transformed to the robot's coordinates, and the geometric relation (i.e. rotation and translation) between the robot's coordinates and the sensor's coordinates must be known for the transformation. Calibration is the process of inferring this geometric relation between the sensor and the robot. Generally, matching more than 3 points is required to infer the geometric relation. However, we introduce a novel method that calibrates using only a single point match, together with a specific flat structure (i.e. a circular hole) which makes a single-point match sufficient. We fix the rotation component of the calibration result as a constant, so that only a single point is needed, by moving the robot to a specific pose. The flat structure can be installed easily at a manufacturing site, because it has almost no volume (i.e. it is an almost 2D structure). The calibration process is fully autonomous and does not need any manual operation. The robot carrying the sensor moves to the specific pose by sensing features of the circular hole, such as the length of a chord and the center position of the chord. We show the precision of the proposed method through repeated experiments in various situations. Furthermore, we applied the result of the proposed method to sensor-based seam tracking with a robot, and report the difference in the robot's TCP (Tool Center Point) trajectory. This experiment shows that the proposed method ensures precision.
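One geometric step the abstract relies on can be sketched directly: with the hole radius known, the sensed chord length fixes where the circle center sits relative to the chord midpoint. The radius and chord values below are illustrative assumptions:

```python
import math

def circle_center_offset(chord_len, radius):
    """Perpendicular distance from a chord's midpoint to the circle center.

    From the chord-radius relation: offset = sqrt(r^2 - (chord/2)^2).
    """
    half = chord_len / 2.0
    if half > radius:
        raise ValueError("chord longer than diameter")
    return math.sqrt(radius ** 2 - half ** 2)
```

For a hole of radius 5 mm, sensing a 6 mm chord places the center 4 mm from the chord midpoint along the perpendicular bisector.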

Implementation of 3D Moving Target-Tracking System based on MSE and BPEJTC Algorithms

  • Ko, Jung-Hwan;Lee, Maeng-Ho;Kim, Eun-Soo
    • Journal of Information Display / v.5 no.1 / pp.41-46 / 2004
  • In this paper, a new stereo 3D moving-target tracking system using the MSE (mean square error) and BPEJTC (binary phase extraction joint transform correlator) algorithms is proposed. A moving target is extracted from the sequential input stereo images by applying a region-based MSE algorithm, after which the location coordinates of the moving target in each frame are obtained through correlation between the extracted target image and the input stereo image using the BPEJTC algorithm. Through several experiments performed with 20 frames of the stereo image pair at 640×480 pixels, we confirmed that the proposed system is capable of tracking a moving target in real time at a relatively low error ratio of 1.29% on average.
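The region-based MSE step above can be illustrated with a minimal block-matching sketch: slide a template over a search strip and pick the offset with minimum mean squared error. The toy arrays stand in for the paper's 640×480 frames:

```python
import numpy as np

def mse_match(strip, template):
    """Return the column offset in `strip` that minimizes MSE with `template`."""
    h, w = template.shape
    errs = [np.mean((strip[:h, x:x + w] - template) ** 2)
            for x in range(strip.shape[1] - w + 1)]  # MSE at each horizontal offset
    return int(np.argmin(errs))
```

With a bright 2-column target embedded at column 6 of an otherwise dark strip, the matcher recovers offset 6.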

Development of Personal Robot Platform : Designed Approach for Modularization

  • Roh, Se-gon;S. M Baek;Lee, D. H;Park, K. H;T. K Moon;S. W Ryew;T. Y Kuc;Kim, H. S;Lee, H. G H
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2002.10a / pp.117.3-117 / 2002
  • In this paper a new framework is presented for developing a personal robot for use in home environments. We mainly focus on system engineering technology such as modularization and standardization. Effective ways of interfacing among modules are addressed regarding compatibility in hardware and software, and as a result a personal robot platform named DHR I is built. The robot is composed of five modules: brain, mobile, sensor, vision, and user interface modules. Each module can be easily plugged into the system, in a mechanical as well as an electrical sense, by sharing a communication protocol over IEEE 1394 FireWire. …

  • PDF

Development and control of a sensor based quadruped walking robot

  • Bien, Zeungnam;Lee, Yun-Jung;Suh, Il-Hong;Lee, Ji-Hong
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1990.10b / pp.1087-1092 / 1990
  • This paper describes the development and control of a quadruped walking robot named KAISER-II. A control system with a multiprocessor-based hierarchical structure is developed. In order to navigate autonomously on rough terrain, an identification algorithm for the robot's position is proposed using 3-D vision and a guide-mark pattern. Also, a simple attitude control algorithm using force sensors is included. Experimental results show that the robot can not only walk statically on even terrain but also cross over or pass through artificially made obstacles such as stairs, a horizontal bar, and a tunnel-type obstacle.

  • PDF

Control of Robot Manipulators Using LQG Visual Tracking Controller (LQG 시각추종제어기를 이용한 로봇매니퓰레이터의 제어)

  • Lim, Tai-Hun;Jun, Hyang-Sig;Choi, Young-Kiu;Kim, Sung-Shin
    • Proceedings of the KIEE Conference / 1999.07g / pp.2995-2997 / 1999
  • Recently, real-time visual tracking control of robot manipulators has been performed using feedback information from a vision sensor. In this paper, the optical flow is computed based on the eye-in-hand robot configuration. The image Jacobian is employed to calculate the rotational and translational velocity of a 3D moving object. An LQG visual controller generates the real-time visual trajectory. In order to improve the visual tracking performance, a VSC controller is employed to control the robot manipulator. Simulation results show better visual tracking performance than other methods.

  • PDF
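The image Jacobian the abstract mentions is, for a single point feature, the standard 2×6 interaction matrix relating camera twist to image motion. A minimal sketch, with (x, y) as normalized image coordinates and Z the feature depth (the input values are illustrative, not from the paper):

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """2x6 interaction matrix mapping camera twist [vx, vy, vz, wx, wy, wz]
    to the image velocity of a point feature at normalized coords (x, y), depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])
```

At the principal point (x = y = 0) with unit depth, only translation along the optical axes and rotation about them couple into image motion, which is a quick sanity check on the matrix.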

Controlling robot by image-based visual servoing with stereo cameras

  • Fan, Jun-Min;Won, Sang-Chul
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2005.11a / pp.229-232 / 2005
  • In this paper, an image-based "approach-align-grasp" visual servo control design is proposed for the problem of object grasping, based on a stand-alone binocular system. The basic idea is to consider the vision system as a sensor dedicated to a task and included in a servo control loop; automatic grasping then follows the classical approach of splitting the task into preparation and execution stages. During the execution stage, once the image-based control model is established, the control task can be performed automatically. The proposed visual servoing control scheme ensures the convergence of the image features to the desired trajectories by using the Jacobian matrix, which is proved by Lyapunov stability theory. We also stress the importance of projectively invariant object/gripper alignment. The alignment between two solids in 3-D projective space can be represented in a view-invariant way; more precisely, it can be easily mapped into an image set point without any knowledge of the camera parameters. The main feature of this method is that the accuracy of the task to be performed is not affected by discrepancies between the Euclidean setups at the preparation and execution stages. According to the projective alignment, the set point can then be computed, and the robot gripper moves to the desired position under the image-based control law. In this paper we adopt a constant Jacobian online. The method described herein integrates vision, robotics, and automatic control to achieve its goal; it overcomes the disadvantages of discrepancies between different Euclidean setups and proposes a control law for the binocular stand-alone case. Simulation experiments show that this image-based approach is effective in performing precise alignment between the robot end-effector and the object.

  • PDF
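The image-based control law sketched in the abstract, with a constant Jacobian, takes the classical IBVS form v = -λ J⁺ (s - s*). A minimal sketch with toy values (J, the features s, and the gain λ are assumptions for illustration):

```python
import numpy as np

def ibvs_velocity(J, s, s_star, lam=0.5):
    """Camera/gripper velocity that exponentially drives image features s
    toward the set point s_star, using the pseudo-inverse of a (here
    constant) image Jacobian J."""
    error = np.asarray(s, float) - np.asarray(s_star, float)
    return -lam * np.linalg.pinv(np.asarray(J, float)) @ error
```

With an identity Jacobian, a feature error of (2, 0), and λ = 0.5, the commanded velocity is (-1, 0): motion directly opposing the image error, scaled by the gain.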

A Robotic System for Transferring Tobacco Seedlings

  • Lee, D.W.;W.F.McClure
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 1993.10a / pp.850-858 / 1993
  • Germination and early growth of tobacco seedlings in trays containing many cells is increasing in popularity. Since 100% germination is not likely, a major problem is to locate those cells which contain either no seedling or a stunted seedling and replace their contents with a plug containing a viable seedling. Empty cells and seedlings of poor quality take up valuable space in a greenhouse. They may also cause difficulty when transplanting seedlings into the field. Robotic technology, including the implementation of computer vision, appears to be an attractive alternative to manual labor for accomplishing this task. Operating AGBOT, short for Agricultural ROBOT, involved four steps: (1) capturing the image, (2) processing the image, (3) moving the manipulator, and (4) working the gripper. This research addressed seedlings within a cell-grown environment; the configuration of the cell-grown seedling environment dictated the design of a Cartesian robot suitable for working over a flat plane. In experiments on AGBOT's performance in transferring large seedlings, more than 98% of the seedlings in the produced trays survived one week after transfer. In general, the system performed much better than expected.

  • PDF
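The image-processing step of the four-step pipeline above can be sketched as a grid occupancy check: classify each tray cell as viable or empty from the fraction of "plant" pixels in its image region. The grid geometry and fill threshold below are invented for illustration, not taken from the paper:

```python
import numpy as np

def find_empty_cells(mask, rows, cols, min_fill=0.05):
    """Given a binary plant-pixel mask of a tray image, return (row, col)
    indices of cells whose plant-pixel fraction falls below min_fill."""
    h, w = mask.shape
    ch, cw = h // rows, w // cols          # pixel size of one tray cell
    empty = []
    for r in range(rows):
        for c in range(cols):
            cell = mask[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            if cell.mean() < min_fill:     # too few plant pixels: needs a plug
                empty.append((r, c))
    return empty
```

The returned indices would then drive steps (3) and (4): moving the manipulator to each flagged cell and working the gripper to replace its contents.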