• Title/Summary/Keyword: Control Object


Fast Computation of the Visibility Region Using the Spherical Projection Method

  • Chu, Gil-Whoan;Chung, Myung-Jin
    • Transactions on Control, Automation and Systems Engineering / v.4 no.1 / pp.92-99 / 2002
  • To obtain visual information about a target object, a camera should be placed within the visibility region. Because the visibility region depends on the relative position of the target object and the surrounding objects, a position change of a surrounding object during a task requires recalculation of the visibility region. To compute the visibility region quickly, so that the camera can be repositioned to remain within it, we propose a spherical projection method. After projection onto the sphere, the visibility region is represented in the $\theta$-$\psi$ space of spherical coordinates. The reduction of the calculation space enables fast modification of the camera location according to the motion of the surrounding objects, so that continuous observation of the target object during the task is possible.
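
A minimal sketch (not the authors' code) of the projection step described above: sample points on a surrounding object are mapped to $(\theta, \psi)$ spherical coordinates around the target, and a candidate camera direction is kept only if it is not close to any blocked direction. The sampling, names, and the crude angular-margin test are illustrative assumptions.

```python
# Represent directions around a target object in spherical (theta, psi)
# coordinates and test candidate camera directions against blocked ones.
import numpy as np

def to_spherical(points, target):
    """Project 3D points onto a unit sphere centered at the target and
    return their (theta, psi) angles in radians."""
    d = points - target                      # vectors from target to points
    r = np.linalg.norm(d, axis=1)
    theta = np.arccos(d[:, 2] / r)           # polar angle from the +z axis
    psi = np.arctan2(d[:, 1], d[:, 0])       # azimuth in the x-y plane
    return np.stack([theta, psi], axis=1)

# Directions blocked by a surrounding object (sampled surface points).
target = np.array([0.0, 0.0, 0.0])
obstacle_pts = np.random.rand(100, 3) + np.array([1.0, 0.0, 0.0])
blocked = to_spherical(obstacle_pts, target)

def is_visible(cam_dir, blocked, margin=0.05):
    """Crude test: a direction is visible if no blocked direction lies
    within the angular margin in (theta, psi) space."""
    return np.all(np.linalg.norm(blocked - cam_dir, axis=1) > margin)

print(is_visible(np.array([np.pi / 2, np.pi]), blocked))
```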

Localization of a Mobile Robot Using the Information of a Moving Object (운동물체의 정보를 이용한 이동로봇의 자기 위치 추정)

  • Roh, Dong-Kyu;Kim, Il-Myung;Kim, Byung-Hwa;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.7 no.11 / pp.933-938 / 2001
  • In this paper, we describe a method for localizing a mobile robot using images of a moving object. The method combines the position observed by dead-reckoning sensors with the position estimated from images captured by a fixed camera. Using the a priori known path of a moving object in world coordinates and a perspective camera model, we derive geometric constraint equations that relate the image-frame coordinates of the moving object to the estimated robot position. Since the equations are based on the estimated position, a measurement error may exist at all times. The proposed method utilizes the error between the observed and estimated image coordinates to localize the mobile robot. A Kalman filter scheme is applied to this method. The effectiveness of the proposed method is demonstrated by simulation.
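
The fusion idea in this abstract can be illustrated with a plain linear Kalman filter: dead reckoning supplies the prediction, and the difference between observed and predicted image coordinates supplies the correction. The measurement matrix, noise levels, and state layout below are assumptions for the sketch, not the paper's model.

```python
# Linear Kalman filter sketch: predict with dead-reckoning, correct with the
# observed-minus-predicted image coordinate error.
import numpy as np

x = np.zeros(2)                 # robot position estimate (x, y)
P = np.eye(2)                   # estimate covariance
Q = 0.01 * np.eye(2)            # dead-reckoning (process) noise
R = 0.5 * np.eye(2)             # image measurement noise
H = np.eye(2)                   # assumed linearized camera model

def kalman_step(x, P, odom_delta, z_observed):
    # Predict with dead-reckoning.
    x_pred = x + odom_delta
    P_pred = P + Q
    # Correct with the innovation (observed minus predicted measurement).
    innovation = z_observed - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ innovation
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = kalman_step(x, P, odom_delta=np.array([0.1, 0.0]),
                   z_observed=np.array([0.12, 0.01]))
print(x)
```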


A Study on Visual Servoing Image Information for Stabilization of Line-of-Sight of Unmanned Helicopter (무인헬기의 시선안정화를 위한 시각제어용 영상정보에 관한 연구)

  • 신준영;이현정;이민철
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.600-603 / 2004
  • A UAV (Unmanned Aerial Vehicle) is an aerial vehicle that can accomplish a mission without a pilot. UAVs were initially developed for military purposes such as reconnaissance. Nowadays their usage has expanded into various fields of civil industry, such as map making, broadcasting, and environmental observation. These UAVs need a vision system to offer accurate information to the operator on the ground and to control the UAV itself. In particular, the LOS (Line-of-Sight) system must precisely control the pointing direction of a system that tracks an object using a vision sensor such as a CCD camera, so it is very important in the vision system. In this paper, we propose a method to recognize an object in the image acquired by a camera mounted on a gimbal and to provide the displacement between the center of the monitor and the center of the object.
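
A rough sketch of the displacement measurement the abstract describes: detect the object in the camera frame and report its offset from the image center, which a gimbal controller could then drive toward zero. The simple brightness-threshold detector is an assumed stand-in for the paper's recognition step.

```python
# Compute the offset between the image center and the detected object centroid.
import numpy as np

def center_offset(image, threshold=128):
    """Return (dx, dy) from the image center to the centroid of pixels
    brighter than the threshold (a stand-in for the tracked object)."""
    ys, xs = np.nonzero(image > threshold)
    if len(xs) == 0:
        return None                       # object not found
    cx, cy = xs.mean(), ys.mean()         # object centroid
    h, w = image.shape
    return cx - w / 2.0, cy - h / 2.0     # displacement for the gimbal loop

frame = np.zeros((480, 640), dtype=np.uint8)
frame[200:220, 400:430] = 255             # synthetic bright object
print(center_offset(frame))
```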


Generation of Human-like Arm Motion to Catch a Moving Object

  • Kwon, Oh-Kyu;Park, Poo-Gyeon
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2001.10a / pp.161.5-161 / 2001
  • Robots are required to assist our activities in daily life. In this paper, we focus on arm movement to catch a moving object, one of the important tasks frequently performed by humans. We propose an algorithm that enables a robot to perform human-like arm motion to catch a moving object. First, we analyze human hand trajectories and velocity profiles when catching an object. From the experimental results, we extract some characteristics of the process of approaching and following a moving object and confirm that these are necessary to realize human-like motion. We then adopt an instantaneous optimal control method that evaluates the error and energy cost at each sampling step, and design two time-varying weight matrices to introduce human characteristics into the robot motion. The matrix concerning the error is defined as a time-increasing ...
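
A hedged sketch of an instantaneous optimal control step of the kind the abstract describes: at each sample the command minimizes a weighted sum of tracking error and control energy, with the error weight increasing over time. The dynamics, weight schedules, and catch point below are illustrative assumptions.

```python
# Instantaneous optimal control step with time-varying weight matrices.
import numpy as np

def control_step(x, target, t, dt=0.01):
    W_err = (1.0 + 5.0 * t) * np.eye(3)   # time-increasing error weight
    W_u = 0.1 * np.eye(3)                 # energy (effort) weight
    # Minimize (x + u*dt - target)' W_err (x + u*dt - target) + u' W_u u.
    A = dt * dt * W_err + W_u
    b = dt * W_err @ (target - x)
    return np.linalg.solve(A, b)          # optimal velocity command

x = np.zeros(3)
target = np.array([0.4, 0.2, 0.1])        # assumed catch point
for k in range(200):
    u = control_step(x, target, t=k * 0.01)
    x = x + 0.01 * u                      # integrate the command
print(x)
```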


Pose Estimation of an Object from X-ray Images Based on Principal Axis Analysis

  • Roh, Young-Jun;Cho, Hyung-Suck
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2002.10a / pp.97.4-97 / 2002
  • Pose estimation of a three-dimensional object has been studied in the robot vision area, and it is needed in a number of industrial applications such as process monitoring and control, assembly, and PCB inspection. In this research, we propose a new pose estimation method based on principal axis analysis. Here, it is assumed that the locations of the x-ray source and the image plane are predetermined and that the object geometry is known. To this end, we define a dispersion matrix of an object, which is a discrete form of the inertia matrix of the object. It can be determined from a set of x-ray images; at least three images are required. Then, the pose information is obtained fro...
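
The principal-axis idea can be sketched as follows: a dispersion (inertia-like) matrix is built from object points and its eigenvectors give the principal axes that encode orientation. Recovering the 3D dispersion matrix from several x-ray projections, which is the core of the paper, is not reproduced here; the synthetic point cloud and function names are assumptions.

```python
# Principal axes of a point set via eigen-decomposition of its dispersion matrix.
import numpy as np

def principal_axes(points):
    """Return the principal axes (columns, ordered by decreasing spread)."""
    centered = points - points.mean(axis=0)
    dispersion = centered.T @ centered / len(points)   # discrete inertia-like matrix
    eigvals, eigvecs = np.linalg.eigh(dispersion)      # ascending eigenvalues
    return eigvecs[:, ::-1]                            # reorder to descending

pts = np.random.randn(500, 3) * np.array([5.0, 2.0, 0.5])  # elongated synthetic object
print(principal_axes(pts))
```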


Effective Covariance Tracker based on Adaptive Foreground Segmentation in Tracking Window (적응적인 물체분리를 이용한 효과적인 공분산 추적기)

  • Lee, Jin-Wook;Cho, Jae-Soo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.8 / pp.766-770 / 2010
  • In this paper, we present an effective covariance tracking algorithm based on adaptive resizing of the tracking window. Recent research has advocated the use of a covariance matrix of object image features for tracking, instead of the conventional histogram object models used in popular algorithms. However, the general covariance tracking algorithm cannot deal with scale changes of the moving objects. The scale of a moving object often changes in various tracking environments, and the tracking window (or object kernel) has to be adapted accordingly. In addition, the covariance matrix of a moving object should be adaptively updated in consideration of the tracking window size. We provide a solution to this problem by segmenting the moving object from the background pixels of the tracking window, and thereby improve the tracking performance of the covariance tracking method. Several simulations demonstrate the effectiveness of the proposed method.
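
A sketch of the covariance object model referred to above: each foreground pixel of the tracking window contributes a feature vector, and the object is represented by the covariance matrix of those features. The particular feature set and the crude intensity-based foreground mask are illustrative assumptions.

```python
# Covariance descriptor over foreground pixels of a tracking window.
import numpy as np

def covariance_descriptor(window, fg_mask):
    """Covariance matrix of (x, y, intensity, |dI/dx|, |dI/dy|) over the
    foreground pixels of the tracking window."""
    h, w = window.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(window.astype(float))
    feats = np.stack([xs, ys, window, np.abs(gx), np.abs(gy)], axis=-1)
    fg = feats[fg_mask]                        # keep only segmented object pixels
    return np.cov(fg, rowvar=False)

win = np.random.randint(0, 255, (40, 30)).astype(float)
mask = win > 100                               # crude foreground segmentation
print(covariance_descriptor(win, mask).shape)  # (5, 5)
```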

Object-Based Image Search Using Color and Texture Homogeneous Regions (유사한 색상과 질감영역을 이용한 객체기반 영상검색)

  • 유헌우;장동식;서광규
    • Journal of Institute of Control, Robotics and Systems / v.8 no.6 / pp.455-461 / 2002
  • An object-based image retrieval method is addressed. A new image segmentation algorithm and a method for comparing the segmented objects of two images are proposed. For image segmentation, color and texture features are extracted from each pixel in the image. These features are used as inputs to a VQ (Vector Quantization) clustering method, which yields objects that are homogeneous in terms of color and texture. In this procedure, colors are quantized into a few dominant colors for simple representation and efficient retrieval. For retrieval, two comparison schemes are proposed: comparing a single query object with the multiple objects of a database image, and comparing multiple query objects with the multiple objects of a database image. For fast retrieval, dominant object colors are key-indexed in the database.
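
The segmentation step can be sketched with plain k-means as the vector quantizer: per-pixel color features are clustered so that pixels assigned to the same codeword form homogeneous regions, and the codewords double as dominant colors. Texture features and the retrieval-side indexing are omitted; all parameters are assumptions.

```python
# VQ-style segmentation: cluster per-pixel RGB features into k codewords.
import numpy as np

def vq_segment(image, k=4, iters=20):
    """Return a label map (one codeword index per pixel) and the codebook."""
    h, w, _ = image.shape
    feats = image.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(0)
    codebook = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest codeword.
        d = np.linalg.norm(feats[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update codewords as cluster means.
        for c in range(k):
            if np.any(labels == c):
                codebook[c] = feats[labels == c].mean(axis=0)
    return labels.reshape(h, w), codebook      # region map and dominant colors

img = np.random.randint(0, 255, (64, 64, 3))
labels, dominant_colors = vq_segment(img)
print(labels.shape, dominant_colors.shape)
```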

Design of a Robot's Hand with Two 3-Axis Force Sensors for Grasping an Unknown Object

  • Kim, Gab-Soon
    • International Journal of Precision Engineering and Manufacturing / v.4 no.3 / pp.12-19 / 2003
  • This paper describes the design of a robot's hand with two fingers for stably grasping an unknown object, and the development of a 3-axis force sensor that is necessary for constructing the robot's fingers. In order to safely grasp an unknown object, the fingers should measure the forces in the gripping and gravity directions and control the measured forces. The 3-axis force sensor is used to accurately measure the weight of an unknown object in the gravity direction. Thus, in this paper, a robot's hand with two fingers for stably grasping an unknown object is designed, and the 3-axis force sensor is newly modeled and fabricated using several parallel-plate beams.
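
A small sketch of how readings from two 3-axis fingertip sensors might be combined: the gripping-direction components give the grip force, and the gravity-direction components sum to an estimate of the object's weight. The axis convention and threshold are assumptions, not the paper's design.

```python
# Combine two 3-axis fingertip force readings into grip force and weight.
def grasp_state(f_left, f_right, min_grip=2.0):
    """f_left, f_right: (fx, fy, fz) forces in newtons from each fingertip,
    with x assumed to be the gripping direction and z the gravity direction."""
    grip_force = min(abs(f_left[0]), abs(f_right[0]))
    weight = f_left[2] + f_right[2]            # load carried against gravity
    secure = grip_force >= min_grip            # crude "stable grasp" check
    return {"grip_force_N": grip_force, "object_weight_N": weight, "secure": secure}

print(grasp_state((3.1, 0.0, 0.9), (-3.0, 0.1, 1.0)))
```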

Efficient 3D Scene Labeling using Object Detectors & Location Prior Maps (물체 탐지기와 위치 사전 확률 지도를 이용한 효율적인 3차원 장면 레이블링)

  • Kim, Joo-Hee;Kim, In-Cheol
    • Journal of Institute of Control, Robotics and Systems / v.21 no.11 / pp.996-1002 / 2015
  • In this paper, we present an effective system for 3D scene labeling of objects from RGB-D videos. Our system uses a Markov Random Field (MRF) over a voxel representation of the 3D scene. In order to estimate the correct label of each voxel, the probabilistic graphical model integrates scores from sliding-window object detectors and from object location prior maps. Both the object detectors and the location prior maps are pre-trained on manually labeled RGB-D images. Additionally, the model incorporates geometric constraints between adjacent voxels into the label estimation. We show strong experimental results on the RGB-D Scenes Dataset built by the University of Washington, in which each indoor scene contains tabletop objects.
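
The label-scoring idea can be sketched by combining detector scores and location priors into per-voxel unary scores and adding a simple smoothness term between neighbors. Full MRF inference is replaced here by an ICM-style sweep over a 1D voxel chain; the weights and layout are assumptions, not the paper's model.

```python
# Combine detector and location-prior scores per voxel, then smooth labels
# with a greedy ICM-style sweep over neighboring voxels.
import numpy as np

def label_voxels(det_scores, prior_scores, smooth_w=0.5, sweeps=3):
    """det_scores, prior_scores: arrays of shape (n_voxels, n_labels) on a
    1D chain of voxels for simplicity; returns one label per voxel."""
    unary = np.log(det_scores + 1e-6) + np.log(prior_scores + 1e-6)
    labels = unary.argmax(axis=1)
    n = unary.shape[0]
    for _ in range(sweeps):                       # ICM: greedy local updates
        for i in range(n):
            score = unary[i].copy()
            for j in (i - 1, i + 1):              # neighbors on the chain
                if 0 <= j < n:
                    score[labels[j]] += smooth_w  # reward agreeing with neighbor
            labels[i] = score.argmax()
    return labels

det = np.random.rand(10, 3)
prior = np.random.rand(10, 3)
print(label_voxels(det, prior))
```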

Ultrasound Echolocation Inspired by a Prey Detection Strategy of Big Brown Bats (박쥐의 먹이 탐지 전략을 모방한 초음파 센서의 물체 위치 추정)

  • Park, Sang-Wook;Kim, Dae-Eun
    • Journal of Institute of Control, Robotics and Systems / v.18 no.3 / pp.161-167 / 2012
  • It is known that big brown bats can distinguish the echo of a prey at various angles. In this paper, we suggest a new object localization strategy using ultrasonic echolocation. We calculate the relative energy ratio between a high-frequency component and a low-frequency component of the ultrasound signal reflected from a target object, and we found that this measure depends on the bearing angle of the object in space. We also tested the energy ratio of echoed FM ultrasound signals as a function of frequency, based on cross-correlation. The method can determine the relative angular positions of objects even when the reflected signals from the objects overlap.
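
A sketch of the energy-ratio cue: split an echoed ultrasound signal into low and high frequency bands and compare their energies, the quantity the abstract reports as varying with bearing angle. The band edge, sampling rate, and synthetic echo are illustrative assumptions.

```python
# High-to-low frequency band energy ratio of an echoed ultrasound signal.
import numpy as np

def band_energy_ratio(echo, fs, split_hz=60_000.0):
    """Return E_high / E_low of the echo spectrum split at split_hz."""
    spectrum = np.abs(np.fft.rfft(echo)) ** 2
    freqs = np.fft.rfftfreq(len(echo), d=1.0 / fs)
    e_low = spectrum[freqs < split_hz].sum()
    e_high = spectrum[freqs >= split_hz].sum()
    return e_high / (e_low + 1e-12)

fs = 400_000                                     # 400 kHz sampling rate
t = np.arange(0, 0.002, 1.0 / fs)
echo = np.sin(2 * np.pi * 40_000 * t) + 0.3 * np.sin(2 * np.pi * 80_000 * t)
print(band_energy_ratio(echo, fs))
```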