• Title/Abstract/Keywords: Control Object


DSP 를 이용한 로봇의 그리퍼 제어장치의 개발 (Development of the Robot's Gripper Control System using DSP)

  • 김갑순
    • 한국정밀공학회지 / Vol. 23, No. 5 / pp. 77-84 / 2006
  • This paper describes the design and implementation of a robot gripper control system. To grasp an unknown object safely, the gripper must sense both the force in the gripping direction and the force in the gravity direction, and perform force control using those measurements. The control system is designed and built around a DSP (Digital Signal Processor), and the gripper is fitted with two 6-axis force/moment sensors that simultaneously measure the forces Fx, Fy, Fz (Fx being the force in the x-direction) and the moments Mx, My, Mz (Mx being the moment about the x-axis). A response characteristic test of the system was performed to determine the proportional gain Kp and the integral gain Ki of the PI controller. The results show that the developed gripper control system grasps an unknown object safely.
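The PI force-control scheme described in the abstract above can be sketched in a few lines; the first-order plant model, time step, and gain values here are illustrative assumptions, not values from the paper:

```python
# Minimal discrete PI force-control loop in the spirit of the abstract.
# The first-order "gripper" response and the gains Kp, Ki are illustrative
# assumptions, not the paper's identified plant or tuned values.

def pi_force_control(f_target, kp=2.0, ki=0.5, dt=0.01, steps=2000):
    """Drive a toy first-order plant toward the target grip force."""
    force = 0.0      # measured grip force (from the 6-axis sensor in the paper)
    integral = 0.0   # accumulated error for the I term
    for _ in range(steps):
        error = f_target - force
        integral += error * dt
        u = kp * error + ki * integral     # PI control effort
        force += (u - force) * dt / 0.05   # toy first-order actuator response
    return force

final = pi_force_control(5.0)   # settle toward a 5 N grip force
```

In a real tuning experiment, Kp and Ki would be chosen from the measured step response rather than fixed in advance as they are here.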

지능형 감시 시스템을 위한 객체 추적 및 PTZ 카메라 제어 (Object Tracking & PTZ camera Control for Intelligent Surveillance System)

  • 박호식;황선기;남기환;배철수;이진기;김태우
    • 한국정보전자통신기술학회논문지 / Vol. 6, No. 2 / pp. 95-100 / 2013
  • This paper proposes an object tracking method and PTZ camera control for an intelligent surveillance system. To demonstrate the performance of the proposed algorithm, experiments were conducted in which vehicles entering a detection area were tracked; when a vehicle stopped within the detection area, the camera's pan-tilt-zoom was controlled to capture an image of the vehicle's license plate. The tracking rate was 97.4% for moving vehicles and 91% for stationary vehicles, and for stationary vehicles accurate PTZ control to the license-plate position was achieved for 65 vehicles (84.6%). These results demonstrate that the proposed algorithm can be used effectively in intelligent video surveillance systems.
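A minimal sketch of the kind of PTZ re-aiming step such a system performs (centering a detected license plate); the image size and field-of-view values are illustrative assumptions, and the plate detection itself is taken as given:

```python
# Sketch of steering a PTZ camera so a detected license plate moves to the
# image center. The resolution and field-of-view figures are illustrative
# assumptions, not parameters from the paper.

def ptz_correction(plate_cx, plate_cy, img_w=640, img_h=480,
                   hfov_deg=60.0, vfov_deg=45.0):
    """Return (pan, tilt) angle corrections in degrees for a plate center."""
    # offset of the plate center from the image center, in pixels
    dx = plate_cx - img_w / 2
    dy = plate_cy - img_h / 2
    # approximate degrees-per-pixel from the field of view
    pan = dx * hfov_deg / img_w
    tilt = -dy * vfov_deg / img_h   # image y grows downward
    return pan, tilt

pan, tilt = ptz_correction(480, 120)   # plate right of and above center
```

The small-angle degrees-per-pixel approximation is adequate near the image center; repeated corrections converge as the plate approaches it.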

미지물체를 안전하게 잡기 위한 6축 로봇손가락 힘/모멘트센서의 개발 (Development of a 6-axis Robot's Finger Force/Moment Sensor for Stably Grasping an Unknown Object)

  • 김갑순
    • 한국정밀공학회지 / Vol. 20, No. 7 / pp. 105-113 / 2003
  • This paper describes the development of a 6-axis robot finger force/moment sensor that measures the forces Fx, Fy, Fz and the moments Mx, My, Mz simultaneously, for stably grasping an unknown object. To grasp an unknown object safely, a robot gripper must measure the force in the gripping direction and the force in the gravity direction, and perform force control using those measurements; it therefore needs a finger sensor capable of measuring all six force/moment components at once. In this paper, such a sensor was newly modeled using several parallel-plate beams, designed, and fabricated. A characteristic test of the fabricated sensor shows that its interference errors are less than 3%. A gripper incorporating the sensor was also manufactured for force-control testing, and it grasped an unknown object stably. The developed 6-axis finger force/moment sensor is thus suitable for use in robot grippers.
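The interference (cross-talk) figure quoted above is, in general, the off-axis response observed when a single axis is loaded, expressed as a percentage of full scale. A sketch with a purely illustrative 3x3 response matrix rather than the paper's data:

```python
# Sketch of how a multi-axis sensor's interference error is evaluated:
# load one axis at a time and report the largest off-axis response as a
# percentage of full scale. The response matrix is illustrative only.

def interference_errors(response, full_scale):
    """Max off-diagonal response per loaded axis, as % of full scale."""
    n = len(response)
    errors = []
    for j in range(n):                       # axis j loaded to full scale
        off = max(abs(response[i][j]) for i in range(n) if i != j)
        errors.append(100.0 * off / full_scale[j])
    return errors

# rows: sensor channels, cols: loaded axis (e.g. Fx, Fy, Fz); units arbitrary
resp = [[10.0, 0.15, 0.20],
        [0.10, 10.0, 0.25],
        [0.20, 0.10, 10.0]]
errs = interference_errors(resp, full_scale=[10.0, 10.0, 10.0])
```

For this illustrative matrix every axis stays under the 3% bound the paper reports for its sensor.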

영상처리와 센서융합을 활용한 지능형 6족 이동 로봇 (Intelligent Hexapod Mobile Robot using Image Processing and Sensor Fusion)

  • 이상무;김상훈
    • 제어로봇시스템학회논문지 / Vol. 15, No. 4 / pp. 365-371 / 2009
  • An intelligent hexapod mobile robot equipped with various sensors and a wireless camera is introduced. We show that this robot can detect objects reliably by combining the outputs of its active sensors with an image processing algorithm. First, active sensors such as infrared and ultrasonic sensors are employed together to calculate the distance between an object and the robot in real time; the difference between the measured and calculated values is less than 5%. The paper also proposes an effective visual detection system for moving objects with specified color and motion information, including an object extraction and definition process that uses color transformation and AWUPC computation to decide whether a moving object is present. Weights are assigned to the results from each sensor and from the camera, and the weighted results are combined into a single value representing the probability that an object lies within the limited distance. This sensor fusion technique improves the detection rate by at least 7% over any individual sensor.
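The weighted fusion step described above can be sketched as a normalized weighted average of per-detector scores; the weights and scores below are illustrative, not the paper's trained values:

```python
# Sketch of weighted sensor/camera fusion: each detector's score is
# multiplied by its weight and the results are combined into one presence
# probability. Weights and scores are illustrative assumptions.

def fuse_detections(scores, weights):
    """Combine per-sensor detection scores (each in [0, 1]) into one value."""
    assert len(scores) == len(weights) and sum(weights) > 0
    total = sum(w * s for w, s in zip(weights, scores))
    return total / sum(weights)   # normalized weighted average in [0, 1]

# e.g. infrared, ultrasonic, and camera detectors with hypothetical weights
p = fuse_detections(scores=[0.9, 0.7, 0.8], weights=[0.2, 0.3, 0.5])
```

Normalizing by the weight sum keeps the fused value a probability even when the weights do not sum to one.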

레이저 구조광 영상기반 3차원 스캐너 개발 (Development of 3D Scanner Based on Laser Structured-light Image)

  • 고영준;이수영;이준오
    • 제어로봇시스템학회논문지 / Vol. 22, No. 3 / pp. 186-191 / 2016
  • This paper addresses the development of a 3D data acquisition system (3D scanner) based on laser structured-light images. The scanner consists of a stripe laser generator, a conventional camera, and a rotation table. The laser stripe projected onto an object is distorted according to the object's 3D shape; by analyzing this distortion in the camera image, the scanner obtains a set of 3D point data for the object. A simple semiconductor stripe laser diode is adopted instead of an expensive LCD projector with a complex structured-light pattern, and the camera is fitted with an optical filter to remove illumination noise and improve distance-measurement performance. Experimental results show that the scanner acquires 3D data with less than 0.2 mm measurement error in 2 minutes, making it possible to reconstruct the 3D shape of an object and reproduce it on a commercially available 3D printer.
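The core of such a stripe-laser scanner is plane-ray triangulation: the stripe's lateral shift in the image determines where the camera ray meets the laser plane. A sketch under assumed (not the paper's) baseline, focal length, and laser angle:

```python
import math

# Sketch of laser-stripe triangulation: intersect the camera ray through a
# stripe pixel with the known laser plane. The camera sits at the origin
# looking along z; the laser plane passes through (baseline, 0, 0) at angle
# theta to the baseline, so points on it satisfy z = (b - x) * tan(theta).
# Baseline, focal length, and laser angle are illustrative assumptions.

def stripe_depth(pixel_x, focal_px=800.0, baseline_m=0.1, laser_angle_deg=60.0):
    """Depth (m) of a stripe point seen at image column offset pixel_x."""
    t = pixel_x / focal_px                 # camera ray slope: x = z * t
    tan_theta = math.tan(math.radians(laser_angle_deg))
    # solve z = (b - z*t) * tan(theta) for z
    return baseline_m * tan_theta / (1.0 + tan_theta * t)
```

A full scanner would sweep this per-pixel depth along the stripe and across rotation-table angles to build the point cloud.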

카메라-레이저스캐너 상호보완 추적기를 이용한 이동 로봇의 사람 추종 (Person-following of a Mobile Robot using a Complementary Tracker with a Camera-laser Scanner)

  • 김형래;최학남;이재홍;이승준;김학일
    • 제어로봇시스템학회논문지 / Vol. 20, No. 1 / pp. 78-86 / 2014
  • This paper proposes a method of tracking an object for a person-following mobile robot by combining a monocular camera and a laser scanner, where each sensor compensates for the weaknesses of the other. For human-robot interaction, a mobile robot needs to maintain a set distance between itself and a moving person. This task consists of two parts: object tracking and person-following. Object tracking combines particle filtering with online learning using shape features extracted from the image. Because a monocular camera easily loses a person due to its narrow field of view and sensitivity to illumination changes, it is used together with a laser scanner. After establishing the geometric relation between the differently oriented sensors, the proposed method demonstrates robust tracking and following of a person, with a success rate of 94.7% in indoor environments with varying lighting conditions, even when another moving object passes between the robot and the person.
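The particle-filter part of such a tracker can be sketched as a predict-weight-resample loop; the 1-D state and the simple likelihood below are illustrative stand-ins for the paper's shape-feature model:

```python
import random

# Sketch of one particle-filter tracking cycle: diffuse particles with
# motion noise, weight them by an appearance likelihood, and resample.
# The 1-D position state and the inverse-quadratic likelihood are
# illustrative assumptions, not the paper's shape-feature observation model.

def pf_step(particles, observation, noise=0.5):
    """One predict-weight-resample cycle over 1-D particle positions."""
    # predict: diffuse each particle with random motion noise
    predicted = [p + random.gauss(0.0, noise) for p in particles]
    # weight: likelihood peaks when a particle matches the observation
    weights = [1.0 / (1.0 + (p - observation) ** 2) for p in predicted]
    # resample: draw particles proportionally to their weights
    return random.choices(predicted, weights=weights, k=len(particles))

random.seed(0)
particles = [0.0] * 200
for obs in [1.0, 2.0, 3.0]:           # person drifting to the right
    particles = pf_step(particles, obs)
estimate = sum(particles) / len(particles)
```

The particle cloud follows the drifting observation; in the paper the laser scanner supplies a second observation stream when the camera loses the person.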

컴퓨터 비젼 방법을 이용한 3차원 물체 위치 결정에 관한 연구 (A Study on the Determination of 3-D Object's Position Based on Computer Vision Method)

  • 김경석
    • 한국생산제조학회지 / Vol. 8, No. 6 / pp. 26-34 / 1999
  • This study presents an alternative method for determining an object's position based on computer vision. The approach develops a vision system model that defines the reciprocal relationship between 3-D real space and the 2-D image plane. The model involves bilinear six-view parameters, which are estimated using the relationship between camera-space locations and the real coordinates of known positions. Based on the parameters estimated for each independent camera, the position of an unknown object is determined by a sequential estimation scheme that uses the data of the unknown points in each camera's 2-D image plane. This vision control method is robust and reliable, overcoming the difficulties of conventional research such as precise calibration of the vision sensor, exact kinematic modeling of the robot, and correct knowledge of the relative positions and orientations of the robot and CCD camera. Finally, the developed method is tested experimentally by determining object positions in space using the computer vision system. The results show that the presented method is precise and suitable.


휴머노이드 로봇팔의 물체 조작을 위한 지능형 거리 제어기 (Intelligent Distance Controller for Humanoid Robot Arms Handling a Common Object)

  • Bhogadi, Dileep K.;Cho, Hyun-Chan;Kim, Kwang-Sun;Wilson, Sara
    • 한국지능시스템학회:학술대회논문집 / Proceedings of the 2008 Spring Conference of the Korean Institute of Intelligent Systems / pp. 71-74 / 2008
  • This paper concentrates on distance control of the two robot arms of a humanoid using a Fuzzy Logic Controller (FLC) for handling a common object. Serial-link robot arms are widely used, most notably in humanoids serving elderly people, and in various industrial applications. A method is proposed that separates the interconnections between the two arms, so that the resulting two-arm model is decomposed and handled by a fuzzy-logic-based controller. The distance between the two end effectors is always kept equal to the diameter of the object being handled, so that the object does not fall. A mathematical model of the system was obtained, using the Lagrangian equation of motion, to simulate the closed-loop behavior of the serial robot arms before applying the fuzzy logic controller. The results show some improvement over those obtained by more conventional means.
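A fuzzy-logic distance controller of the kind described above can be sketched with triangular membership functions and a centroid-style defuzzification; the membership shapes and output levels are illustrative assumptions, not the paper's FLC design:

```python
# Sketch of a fuzzy distance controller: the error between the end-effector
# gap and the object diameter is fuzzified into three triangular sets, and
# the rule outputs are combined by a weighted (centroid-style) average.
# All membership ranges and output levels are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def flc_correction(gap, diameter):
    """Fuzzy correction to apply to the arm gap (positive widens it)."""
    e = diameter - gap               # positive: gap too small
    # fuzzify the error into negative / zero / positive sets
    mu_neg = tri(e, -0.2, -0.1, 0.0)
    mu_zero = tri(e, -0.1, 0.0, 0.1)
    mu_pos = tri(e, 0.0, 0.1, 0.2)
    # rule outputs: close the arms, hold, open the arms
    outs = {-0.05: mu_neg, 0.0: mu_zero, 0.05: mu_pos}
    total = sum(outs.values())
    if total == 0.0:                 # error outside the universe: saturate
        return 0.05 if e > 0 else -0.05
    return sum(u * mu for u, mu in outs.items()) / total
```

Keeping the correction zero exactly when the gap equals the object diameter is what prevents the arms from either crushing or dropping the object.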


지능형 감시 시스템을 위한 객체 추적 및 PTZ 카메라 제어 (Object Tracking & PTZ camera Control for Intelligent Surveillance System)

  • 이영식;김태우;남기환;박호식;배철수
    • 한국정보전자통신기술학회논문지 / Vol. 1, No. 2 / pp. 65-70 / 2008
  • This paper proposes an object tracking method and PTZ camera control for an intelligent surveillance system. To demonstrate the performance of the proposed algorithm, experiments were conducted in which vehicles entering a detection area were tracked; when a vehicle stopped within the detection area, the camera's pan-tilt-zoom was controlled to capture an image of the vehicle's license plate. The tracking rate was 97.4% for moving vehicles and 91% for stationary vehicles, and for stationary vehicles accurate PTZ control to the license-plate position was achieved for 65 vehicles (84.6%). These results demonstrate that the proposed algorithm can be used effectively in intelligent video surveillance systems.


센서융합을 이용한 3차원 물체의 동작 예측 (3D motion estimation using multisensor data fusion)

  • 양우석;장종환
    • 제어로봇시스템학회:학술대회논문집 / Proceedings of the 1993 Korea Automatic Control Conference (Domestic Session); Seoul National University, Seoul; 20-22 Oct. 1993 / pp. 679-684 / 1993
  • This article presents an approach to estimating the general 3D motion of a polyhedral object using multiple sensory data, some of which may not individually provide sufficient information for motion estimation. Motion can be estimated continuously from each sensor through analysis of the instantaneous state of the object. We introduce a method based on Moore-Penrose pseudo-inverse theory to estimate the instantaneous state of the object, and discuss a linear feedback estimation algorithm for estimating the object's 3D motion. The motion estimates from the individual sensors are then fused to provide more accurate and reliable information about the motion of the unknown object. Multisensor data fusion techniques can be categorized into three methods: averaging, decision, and guiding; we present a fusion algorithm that combines averaging and decision.
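The Moore-Penrose step and the "averaging" fusion mentioned above can be sketched with NumPy; the constraint matrix below is illustrative, not the paper's sensor model:

```python
import numpy as np

# Sketch of the pseudo-inverse step: recover an object's instantaneous
# velocity from an overdetermined set of linear point-motion constraints
# A v = b, then average the per-sensor estimates. The matrices are
# illustrative stand-ins for the paper's sensor models.

def pinv_solve(A, b):
    """Least-squares motion estimate v = A+ b (Moore-Penrose pseudo-inverse)."""
    return np.linalg.pinv(np.asarray(A, float)) @ np.asarray(b, float)

def fuse_average(estimates):
    """'Averaging' fusion: mean of the per-sensor motion estimates."""
    return np.mean(np.asarray(estimates, float), axis=0)

# a consistent overdetermined system: three constraints on a 2-D velocity
v = pinv_solve([[1, 0], [0, 1], [1, 1]], [2, 3, 5])
# fuse with a second (hypothetical) sensor's slightly different estimate
fused = fuse_average([v, [2.2, 2.8]])
```

The pseudo-inverse gives the least-squares state even when one sensor alone under- or over-determines the motion, which is the property the abstract relies on.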
