• Title/Summary/Keyword: Control Object

Development of the Robot's Gripper Control System using DSP (DSP 를 이용한 로봇의 그리퍼 제어장치의 개발)

  • Kim Gab-Soon
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.23 no.5 s.182
    • /
    • pp.77-84
    • /
    • 2006
  • This paper describes the design and implementation of a robot gripper control system. To safely grasp an unknown object with the gripper, the system must detect the force in the gripping direction and the force in the gravity direction, and perform force control using these measured forces. In this paper, the gripper control system is designed and manufactured using a DSP (Digital Signal Processor), and the gripper is equipped with two 6-axis force/moment sensors, each of which measures the forces Fx, Fy, Fz and the moments Mx, My, Mz simultaneously. A response characteristic test of the system was performed to determine the proportional gain Kp and the integral gain Ki of the PI controller. The results show that the developed gripper control system grasps an unknown object safely.
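
A minimal sketch of the kind of discrete PI force-control loop the abstract describes, assuming hypothetical functions for reading the measured gripping-direction force and commanding the gripper actuator; the gains Kp and Ki stand in for values the paper determines experimentally.

```python
# Sketch of a discrete PI force controller for a gripper.
# read_gripping_force() and set_motor_command() are hypothetical interfaces;
# DT, KP, and KI are placeholders, not the paper's tuned values.

DT = 0.001   # control period in seconds (assumed DSP loop rate)
KP = 0.8     # proportional gain (placeholder)
KI = 5.0     # integral gain (placeholder)

def pi_force_control(target_force, read_gripping_force, set_motor_command, steps=1000):
    """Drive the measured gripping force toward target_force with a PI law."""
    integral = 0.0
    for _ in range(steps):
        error = target_force - read_gripping_force()   # force error [N]
        integral += error * DT                         # accumulate integral term
        command = KP * error + KI * integral           # PI control law
        set_motor_command(command)                     # send command to the gripper actuator
```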

Object Tracking & PTZ camera Control for Intelligent Surveillance System (지능형 감시 시스템을 위한 객체 추적 및 PTZ 카메라 제어)

  • Park, Ho-Sik;Hwang, Suen-Ki;Nam, Kee-Hwan;Bae, Cheol-Soo;Lee, Jin-Ki;Kim, Tae-Woo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.6 no.2
    • /
    • pp.95-100
    • /
    • 2013
  • Smart surveillance is the use of automatic video analysis technologies in video surveillance applications. We present a robust object tracking method using a pan-tilt-zoom (PTZ) camera for an intelligent surveillance system. In an experiment with 78 vehicles, the tracking success rates for moving and non-moving objects were 97.4% and 91%, respectively, and the success rate of PTZ control for capturing license plate images was 84.6%.
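
A hedged sketch of the pan/tilt side of this kind of tracking-driven PTZ control: the tracked object's pixel offset from the image center is converted into relative pan/tilt commands. The tracker output, camera interface, gains, and dead-band are assumptions for illustration, not the paper's method.

```python
# Sketch: keep a tracked object centered by commanding relative pan/tilt moves.
# center_xy comes from a hypothetical tracker; move_relative() is a hypothetical camera API.

IMG_W, IMG_H = 1920, 1080      # assumed image resolution
K_PAN, K_TILT = 0.05, 0.05     # illustrative proportional gains (degrees per pixel)
DEAD_BAND = 20                 # pixels; ignore small offsets to avoid jitter

def center_object(center_xy, move_relative):
    """Convert the pixel offset of the tracked object into pan/tilt commands."""
    cx, cy = center_xy
    dx = cx - IMG_W / 2          # horizontal offset from image center
    dy = cy - IMG_H / 2          # vertical offset from image center
    pan  = K_PAN  * dx if abs(dx) > DEAD_BAND else 0.0
    tilt = -K_TILT * dy if abs(dy) > DEAD_BAND else 0.0
    move_relative(pan, tilt)     # command the PTZ camera (degrees, assumed units)
```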

Development of a 6-axis Robot's Finger Force/Moment Sensor for Stably Grasping an Unknown Object (미지물체를 안전하게 잡기 위한 6축 로봇손가락 힘/모멘트센서의 개발)

  • Kim Gab-Soon
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.20 no.7
    • /
    • pp.105-113
    • /
    • 2003
  • This paper describes the development of a 6-axis robot finger force/moment sensor, which measures the forces Fx, Fy, Fz and the moments Mx, My, Mz simultaneously, for stably grasping an unknown object. In order to safely grasp an unknown object with a robot gripper, the gripper must measure the force in the gripping direction and the force in the gravity direction, and perform force control using the measured forces. The gripper therefore needs a 6-axis finger force/moment sensor that can measure these forces and moments at the same time. In this paper, such a sensor was newly modeled using several parallel-plate beams, then designed and fabricated. A characteristic test of the fabricated sensor shows that its interference errors are less than 3%. A robot gripper equipped with the sensor was also manufactured for force-control testing, and a grasping test with an unknown object was performed; the fabricated gripper could grasp the unknown object stably. Thus, the developed 6-axis robot finger force/moment sensor can be used in robot grippers.
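
A sketch of how a 6-axis force/moment sensor's raw bridge outputs are commonly mapped to Fx, Fy, Fz, Mx, My, Mz through a 6x6 calibration matrix. The identity matrix below is only a placeholder; a real sensor's matrix comes from calibration, and its off-diagonal entries capture the interference (cross-coupling) errors the paper reports as less than 3%.

```python
import numpy as np

# Sketch: convert six strain-gauge bridge voltages into (Fx, Fy, Fz, Mx, My, Mz)
# with a 6x6 calibration matrix. CAL_MATRIX is a placeholder, not the paper's data.

CAL_MATRIX = np.eye(6)      # placeholder calibration matrix [N/V or N*m/V]

def decode_wrench(bridge_voltages):
    """Return the 6-component force/moment vector from the six bridge voltages."""
    v = np.asarray(bridge_voltages, dtype=float)
    return CAL_MATRIX @ v

# Example: a signal only on the first channel maps to a pure Fx reading here.
print(decode_wrench([0.5, 0, 0, 0, 0, 0]))
```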

Intelligent Hexapod Mobile Robot using Image Processing and Sensor Fusion (영상처리와 센서융합을 활용한 지능형 6족 이동 로봇)

  • Lee, Sang-Mu;Kim, Sang-Hoon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.4
    • /
    • pp.365-371
    • /
    • 2009
  • An intelligent hexapod mobile robot with various types of sensors and a wireless camera is introduced. We show that this mobile robot can detect objects well by combining the results of active sensors and an image processing algorithm. First, to detect objects, active sensors such as infrared sensors and ultrasonic sensors are employed together, and the distance between the object and the robot is calculated in real time from the sensor outputs; the difference between the measured and calculated values is less than 5%. This paper also proposes an effective visual detection system for moving objects with specified color and motion information. The proposed method includes an object extraction and definition process that uses color transformation and AWUPC computation to decide whether a moving object exists. Weighting values are applied to the results from each sensor and the camera, and the weighted results are combined into a single value representing the probability that an object is present within the limited distance. The sensor fusion technique improves the detection rate by at least 7% compared with using any individual sensor.
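
A hedged sketch of the weighted-combination step the abstract mentions: each sensor's detection result is given a weight and the results are merged into one probability-like score. The specific weights and the clamping to a 0-1 range are assumptions for illustration, not the paper's values.

```python
# Sketch: fuse infrared, ultrasonic, and camera detection scores into one value
# representing the probability that an object lies within the limited distance.
# The weights below are illustrative assumptions.

WEIGHTS = {"ir": 0.3, "ultrasonic": 0.3, "camera": 0.4}   # assumed weights, summing to 1

def fuse_detections(scores):
    """scores: dict of sensor name -> detection confidence in [0, 1]."""
    fused = sum(WEIGHTS[name] * scores.get(name, 0.0) for name in WEIGHTS)
    return min(max(fused, 0.0), 1.0)   # clamp to a probability-like range

# Example: strong camera evidence, weaker range-sensor evidence.
print(fuse_detections({"ir": 0.6, "ultrasonic": 0.7, "camera": 0.9}))  # ~0.75
```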

Development of 3D Scanner Based on Laser Structured-light Image (레이저 구조광 영상기반 3차원 스캐너 개발)

  • Ko, Young-Jun;Yi, Soo-Yeong;Lee, Jun-O
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.3
    • /
    • pp.186-191
    • /
    • 2016
  • This paper addresses the development of a 3D data acquisition system (3D scanner) based on laser structured-light imaging. The 3D scanner consists of a stripe laser generator, a conventional camera, and a rotation table. The laser stripe projected onto an object is distorted according to the object's 3D shape. By analyzing the distortion of the laser stripe in the camera image, the scanner obtains a set of 3D point data for the object. A simple semiconductor stripe laser diode is adopted instead of an expensive LCD projector for complex structured-light patterns. The camera has an optical filter to remove illumination noise and improve the performance of the distance measurement. Experimental results show that the scanner acquires 3D data with less than 0.2 mm measurement error in 2 minutes, making it possible to reconstruct the 3D shape of an object and to reproduce the object with a commercially available 3D printer.
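
A minimal triangulation sketch of how a laser-stripe scanner of this kind can recover depth from the stripe's displacement in the image. The pinhole/plane-intersection geometry and the baseline, focal length, and laser-angle values are assumptions for illustration, not the scanner's actual calibration.

```python
import math

# Sketch of single-point laser triangulation: a pinhole camera at the origin looks
# along +z, and a stripe-laser source offset by BASELINE along x projects a light
# plane tilted by LASER_ANGLE toward the optical axis. All numbers are placeholders.

FOCAL_PX = 800.0                  # focal length in pixels (assumed)
BASELINE = 0.10                   # camera-laser baseline in meters (assumed)
LASER_ANGLE = math.radians(30.0)  # tilt of the laser plane (assumed)

def depth_from_stripe(u_px):
    """Depth z [m] of a stripe point seen at horizontal pixel offset u_px."""
    # Camera ray: x = z * u_px / FOCAL_PX ; laser plane: x = BASELINE - z * tan(angle)
    return BASELINE * FOCAL_PX / (u_px + FOCAL_PX * math.tan(LASER_ANGLE))

# Example: a stripe point observed 100 px from the image center.
print(f"{depth_from_stripe(100.0):.3f} m")
```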

Person-following of a Mobile Robot using a Complementary Tracker with a Camera-laser Scanner (카메라-레이저스캐너 상호보완 추적기를 이용한 이동 로봇의 사람 추종)

  • Kim, Hyoung-Rae;Cui, Xue-Nan;Lee, Jae-Hong;Lee, Seung-Jun;Kim, Hakil
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.1
    • /
    • pp.78-86
    • /
    • 2014
  • This paper proposes a method of tracking an object for a person-following mobile robot by combining a monocular camera and a laser scanner, where each sensor supplements the weaknesses of the other. For human-robot interaction, a mobile robot needs to maintain a distance between a moving person and itself. Maintaining this distance consists of two parts: object tracking and person-following. Object tracking consists of particle filtering and online learning using shape features extracted from the image. A monocular camera easily fails to track a person because of its narrow field of view and sensitivity to illumination changes, and has therefore been used together with a laser scanner. After constructing the geometric relation between the differently oriented sensors, the proposed method demonstrates its robustness in tracking and following a person with a success rate of 94.7% in indoor environments with varying lighting conditions, even when a moving object passes between the robot and the person.
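
A hedged sketch of the complementary idea behind this combination: when the camera tracker's confidence drops (for example under an illumination change), fall back on the laser-scanner estimate, otherwise blend the two. The confidence threshold and blending rule are assumptions, not the paper's exact fusion.

```python
# Sketch: complementary fusion of a camera tracker and a laser-scanner tracker.
# Each tracker returns (position, confidence); the switching/blending rule below
# is an illustrative assumption.

CONF_THRESHOLD = 0.5   # assumed minimum confidence to trust the camera estimate

def fuse_person_estimate(cam_pos, cam_conf, laser_pos, laser_conf):
    """Return a fused 2D position of the followed person."""
    if cam_conf < CONF_THRESHOLD:            # camera lost (narrow FOV, lighting change)
        return laser_pos                     # rely on the laser scanner
    w = cam_conf / (cam_conf + laser_conf)   # otherwise blend by relative confidence
    return tuple(w * c + (1 - w) * l for c, l in zip(cam_pos, laser_pos))

# Example: camera confident, laser roughly agrees.
print(fuse_person_estimate((1.0, 2.0), 0.8, (1.1, 2.1), 0.6))
```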

A Study on the Determination of 3-D Object's Position Based on Computer Vision Method (컴퓨터 비젼 방법을 이용한 3차원 물체 위치 결정에 관한 연구)

  • Kim, Kyung-Seok
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.8 no.6
    • /
    • pp.26-34
    • /
    • 1999
  • This study presents an alternative method for determining an object's position, based on a computer vision method. The approach develops a vision system model to define the reciprocal relationship between 3-D real space and the 2-D image plane. The developed model involves bilinear six-view parameters, which are estimated using the relationship between camera-space locations and the real coordinates of known positions. Based on the parameters estimated for each independent camera, the position of an unknown object is determined using a sequential estimation scheme that uses data of the unknown points in each camera's 2-D image plane. This vision control method is robust and reliable, overcoming difficulties of conventional approaches such as precise calibration of the vision sensor, exact kinematic modeling of the robot, and correct knowledge of the relative positions and orientations of the robot and CCD camera. Finally, the developed vision control method is tested experimentally by determining object positions in space using the computer vision system. The results show that the presented method is precise and applicable.
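
A sketch of the general idea of fitting a camera model to known 3-D points and their observed 2-D image coordinates by linear least squares. The simple affine model used here is a stand-in for the paper's bilinear six-view parameter model, which is more involved.

```python
import numpy as np

# Sketch: estimate a simplified affine camera model from known 3-D/2-D
# correspondences by least squares, then use it to predict image locations.
# This is an illustrative simplification of the paper's six-view parameter model.

def estimate_affine_camera(points_3d, points_2d):
    """Fit u = p.[X Y Z 1] and v = q.[X Y Z 1] to known correspondences."""
    A = np.hstack([np.asarray(points_3d, dtype=float), np.ones((len(points_3d), 1))])
    uv = np.asarray(points_2d, dtype=float)
    p, *_ = np.linalg.lstsq(A, uv[:, 0], rcond=None)   # parameters for u
    q, *_ = np.linalg.lstsq(A, uv[:, 1], rcond=None)   # parameters for v
    return p, q

def project(p, q, point_3d):
    """Predict the 2-D image location of a 3-D point with the fitted model."""
    h = np.append(np.asarray(point_3d, dtype=float), 1.0)
    return float(p @ h), float(q @ h)
```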

Intelligent Distance Controller for Humanoid Robot Arms Handling a Common Object (휴머노이드 로롯팔의 물체 조작을 위한 지능형 거리 제어기)

  • Bhogadi, Dileep K.;Cho, Hyun-Chan;Kim, Kwang-Sun;Wilson, Sara
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2008.04a
    • /
    • pp.71-74
    • /
    • 2008
  • The main objective of this paper is the distance control of the two robot arms of a humanoid using a Fuzzy Logic Controller (FLC) for handling a common object. Serial-link robot arms are widely used, most significantly in humanoids serving older people and in various industrial applications. A method is proposed that separates the interconnections between the two robot arms so that the resulting two-arm model can be decomposed and handled by a fuzzy-logic-based controller. The distance between the two end effectors is always kept equal to the diameter of the object being handled so that the object does not fall. A mathematical model of the system was obtained to simulate the behavior of the serial robot arms under closed-loop control before applying the fuzzy logic controller; the Lagrangian equations of motion were used to derive this model. The results show some improvement over those obtained by more conventional means.
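
A minimal sketch of a fuzzy controller that nudges the end-effector separation toward the object's diameter. The triangular membership functions, rule outputs, and weighted-average defuzzification are illustrative assumptions, not the controller described in the paper.

```python
# Sketch: a tiny fuzzy controller that corrects the separation between two end
# effectors toward the object's diameter. All numbers are illustrative.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_distance_correction(separation, diameter):
    """Return a correction to the arm separation in meters (positive widens the grip)."""
    e = separation - diameter                 # distance error [m]
    mu_narrow = tri(e, -0.10, -0.05, 0.0)     # membership: "too narrow"
    mu_ok     = tri(e, -0.02,  0.0,  0.02)    # membership: "about right"
    mu_wide   = tri(e,  0.0,   0.05, 0.10)    # membership: "too wide"
    # Rule outputs [m]: widen if too narrow, hold if right, close if too wide.
    num = mu_narrow * 0.01 + mu_ok * 0.0 + mu_wide * (-0.01)
    den = mu_narrow + mu_ok + mu_wide
    return num / den if den > 0 else 0.0      # weighted-average defuzzification

# Example: separation 3 cm wider than the object diameter -> close the grip slightly.
print(fuzzy_distance_correction(0.23, 0.20))
```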

Object Tracking & PTZ camera Control for Intelligent Surveillance System (지능형 감시 시스템을 위한 객체 추적 및 PTZ 카메라 제어)

  • Lee, Young-Sik;Kim, Tae-Woo;Nam, Kee-Hwan;Park, Ho-Sik;Bae, Cheol-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.1 no.2
    • /
    • pp.65-70
    • /
    • 2008
  • Smart surveillance is the use of automatic video analysis technologies in video surveillance applications. We present a robust object tracking method using a pan-tilt-zoom (PTZ) camera for an intelligent surveillance system. In an experiment with 78 vehicles, the tracking success rates for moving and non-moving objects were 97.4% and 91%, respectively, and the success rate of PTZ control for capturing license plate images was 84.6%.
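
A hedged sketch of the zoom side of PTZ control for license-plate capture: zoom is scaled until the detected plate reaches a usable pixel width. The target width, zoom limits, and interfaces are assumptions for illustration, not the paper's method.

```python
# Sketch: adjust zoom so a detected license plate reaches a usable pixel width.
# plate_width_px would come from a hypothetical plate detector; the numbers are
# illustrative placeholders.

TARGET_PLATE_PX = 200          # desired plate width in pixels (assumed)
ZOOM_MIN, ZOOM_MAX = 1.0, 20.0 # assumed zoom range of the camera

def zoom_for_plate(current_zoom, plate_width_px):
    """Return a new zoom factor that brings the plate toward the target width."""
    if plate_width_px <= 0:
        return current_zoom                       # no plate detected; hold zoom
    new_zoom = current_zoom * TARGET_PLATE_PX / plate_width_px
    return min(max(new_zoom, ZOOM_MIN), ZOOM_MAX) # clamp to the camera's zoom range

# Example: plate currently 80 px wide at 2x zoom -> zoom to about 5x.
print(zoom_for_plate(2.0, 80))
```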

3D motion estimation using multisensor data fusion (센서융합을 이용한 3차원 물체의 동작 예측)

  • Yang, Woo-Suk;Jang, Jong-Hwan
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1993.10a
    • /
    • pp.679-684
    • /
    • 1993
  • This article presents an approach to estimating the general 3D motion of a polyhedral object using multiple sensory data, some of which may not provide sufficient information for estimating the object's motion. Motion can be estimated continuously from each sensor through analysis of the instantaneous state of the object. We introduce a method based on Moore-Penrose pseudo-inverse theory to estimate the instantaneous state of the object, and discuss a linear feedback estimation algorithm to estimate its 3D motion. The motion estimated from each sensor is then fused to provide more accurate and reliable information about the motion of the unknown object. Multisensor data fusion techniques can be categorized into three methods: averaging, decision, and guiding. We present a fusion algorithm that combines averaging and decision.
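
A sketch of the two ingredients the abstract names: a least-squares estimate of the instantaneous motion state from one sensor's (possibly insufficient) measurements via the Moore-Penrose pseudo-inverse, followed by a simple averaging fusion of the per-sensor estimates. The measurement-matrix setup below is an illustrative assumption.

```python
import numpy as np

# Sketch: (1) estimate a motion state from one sensor's measurements via the
# Moore-Penrose pseudo-inverse, which still yields a least-squares answer when the
# measurements are insufficient, and (2) fuse per-sensor estimates by averaging.
# The measurement model H is an illustrative assumption.

def estimate_state(H, z):
    """Least-squares motion state x from measurements z = H @ x (+ noise)."""
    return np.linalg.pinv(H) @ z          # Moore-Penrose pseudo-inverse solve

def fuse_by_averaging(estimates):
    """Combine per-sensor state estimates into one by simple averaging."""
    return np.mean(np.stack(estimates), axis=0)

# Example: two sensors, each observing only part of a 3-component velocity.
H1, z1 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]), np.array([0.5, -0.2])
H2, z2 = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]), np.array([-0.1, 0.3])
print(fuse_by_averaging([estimate_state(H1, z1), estimate_state(H2, z2)]))
```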
