• Title/Abstract/Keyword: vision-based tracking

405 search results (processing time: 0.03 s)

Vision Based Mobile Robot Control

  • 김진환
    • The Transactions of the Korean Institute of Electrical Engineers P / Vol. 60, No. 2 / pp. 63-67 / 2011
  • This paper presents mobile robot control based on a vision system. The proposed vision-based controller consists of a camera tracking controller and a formation controller. The camera controller uses an adaptive gain based on image-based visual servoing (IBVS). The formation controller, designed in the sense of Lyapunov stability, follows the leader. Simulation results show that the proposed vision-based mobile robot control is valid for indoor mobile robot applications.
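A camera controller of the kind described above can be sketched in a few lines. The interaction-matrix form and the particular gain schedule below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def ibvs_step(features, desired, L, lam0=0.5):
    """One image-based visual servoing (IBVS) step with a simple adaptive gain.

    features, desired: current and target image-feature vectors
    L: interaction matrix (image Jacobian) mapping camera velocity to
       feature velocity. The gain schedule below is an illustrative choice:
       the gain shrinks when the feature error is large, for smoother motion.
    """
    e = features - desired                    # feature error in the image
    lam = lam0 / (1.0 + np.linalg.norm(e))    # adaptive gain
    return -lam * np.linalg.pinv(L) @ e       # camera velocity command
```

Iterating this step drives the image features toward their desired values, which in turn drives the camera to the tracking pose.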

3D Orientation and Position Tracking System of Surgical Instrument with Optical Tracker and Internal Vision Sensor

  • 조영진;오현민;김민영
    • Journal of Institute of Control, Robotics and Systems / Vol. 22, No. 8 / pp. 579-584 / 2016
  • When surgical instruments are tracked in an image-guided surgical navigation system, a high-accuracy stereo vision system called an optical tracker is generally used. However, the optical tracker has the disadvantage that a line of sight between the tracker and the surgical instrument must be maintained. To complement this disadvantage, in this paper an internal vision sensor is attached to the surgical instrument. By monitoring the target marker pattern attached to the patient with this vision sensor, the surgical instrument can be tracked even when the line of sight of the optical tracker is occluded. To verify the system's effectiveness, a series of basic experiments is carried out, followed by an integration experiment. The experimental results show that the rotational error is bounded by a maximum of 1.32° with a mean of 0.35°, and the translational error by a maximum of 1.72 mm with a mean of 0.58 mm. It is confirmed that the proposed tool tracking method using an internal vision sensor is useful and effective for overcoming the occlusion problem of the optical tracker.
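The occlusion handling above amounts to chaining rigid-body transforms: the last valid optical reading of the patient marker, the internal camera's view of that marker, and a fixed camera-to-tool calibration. A generic sketch of that chaining (the frame names and the calibration setup are assumptions, not the paper's pipeline):

```python
import numpy as np

def translation(x, y, z):
    """Homogeneous transform for a pure translation (helper for testing)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def instrument_pose(T_world_marker, T_cam_marker, T_cam_tool):
    """Tool pose in tracker (world) coordinates while the optical
    tracker's line of sight is blocked.

    T_world_marker: patient-marker pose from the last valid optical reading
    T_cam_marker:   marker pose observed by the instrument's internal camera
    T_cam_tool:     fixed tool pose in the camera frame (from calibration)
    All are 4x4 homogeneous transforms; T_a_b maps frame-b coordinates
    into frame a.
    """
    T_world_cam = T_world_marker @ np.linalg.inv(T_cam_marker)
    return T_world_cam @ T_cam_tool
```

As long as the internal camera keeps the patient marker in view, the chain yields a tool pose without the optical tracker's direct line of sight.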

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment

  • 김유진;이호준;이경수
    • Journal of Auto-vehicle Safety Association / Vol. 13, No. 4 / pp. 7-13 / 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm using a vision and LiDAR sensor fusion method for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time image-based object detection algorithm, YOLO, and an object tracking algorithm based on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinates of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance, and the clusters are associated using GNN. Third, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle information of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
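The first step above, projecting a YOLO box into the vehicle's ground-plane frame, is a single homography application. A minimal sketch, assuming a precalibrated image-to-ground homography and taking the box's bottom center as the VRU's ground contact point:

```python
import numpy as np

def bbox_to_vehicle_frame(H, bottom_center):
    """Map a detected bounding box's ground-contact pixel into the
    subject vehicle's local ground-plane coordinates.

    H: 3x3 image-to-ground homography (assumed precalibrated)
    bottom_center: (u, v) pixel at the bottom center of the box,
    taken as the point where the road user touches the ground.
    """
    u, v = bottom_center
    p = H @ np.array([u, v, 1.0])   # homogeneous image point through H
    return p[:2] / p[2]             # dehomogenize to (x, y) on the ground
```

The resulting (x, y) positions are what get angle-matched against the LiDAR tracks in the final fusion step.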

Implementation of Tracking and Grasping the Moving Object Using Visual Feedback

  • 권철;강형진;박민용
    • Proceedings of the KIEE Conference / 1995 KIEE Autumn Conference / pp. 579-582 / 1995
  • Recently, vision systems have found a wide and growing field of application on account of the vast information available from the visual mechanism. In the control field especially, vision systems have been applied to industrial robots. In this paper, an object tracking and grasping task is accomplished by a robot vision system with a camera on the robot hand. A camera setting method is proposed to implement the task in a simple way. In spite of calibration error, a stable grasping task is achieved using a tracking control algorithm based on the vision feature.


Development of Auto Tracking Vision Control System for Video Conference

  • 한병조;황찬길;황영호;양해원
    • Proceedings of the KIEE Conference / 39th KIEE Summer Conference, 2008 / pp. 1712-1713 / 2008
  • In this paper, we develop an auto-tracking vision control system based on image processing techniques for video conferencing. The developed system consists of control hardware including the vision unit, two DC motors, and DC motor drivers. Image processing compares the pixels of two successive images, and a motion detection algorithm is applied to eliminate noise. Experimental results are presented to illustrate the effectiveness and applicability of the proposed approaches.
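The compare-two-frames-then-filter-noise idea can be sketched as simple frame differencing. The threshold and minimum-pixel values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=25, min_pixels=20):
    """Pixel-wise frame differencing with a threshold and a crude noise
    filter that discards tiny changed regions.

    prev_frame, curr_frame: grayscale uint8 arrays of equal shape.
    Returns the (x, y) centroid of the changed pixels, toward which the
    pan/tilt motors would be steered, or None if the change is noise.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if mask.sum() < min_pixels:       # too few changed pixels: treat as noise
        return None
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())
```

Feeding the centroid's offset from the image center to the two DC motors closes the tracking loop.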


Multiple Templates and Weighted Correlation Coefficient-based Object Detection and Tracking for Underwater Robots

  • 김동훈;이동화;명현;최현택
    • The Journal of Korea Robotics Society / Vol. 7, No. 2 / pp. 142-149 / 2012
  • The camera has limited visibility in underwater environments due to the limited light sources and the medium noise of the environment. However, its usefulness at close range has been proved in many studies, especially for navigation. In this paper, vision-based object detection and tracking techniques using artificial objects are studied for underwater robots. We employ template matching and mean-shift algorithms for object detection and tracking. We also propose adaptive-threshold-based and color-region-aided weighted correlation coefficients to enhance detection performance under various illumination conditions. The color information is incorporated into the template-matched area, and the features of the template are used to robustly calculate the correlation coefficients. The objects are recognized using a multi-template matching approach. Finally, water-basin experiments were conducted to demonstrate the performance of the proposed techniques using the underwater robot platform yShark, built by KORDI.
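Multi-template matching with a weighted correlation score can be sketched as below. The per-template weights here stand in for the paper's color-aided, adaptive-threshold weighting, whose exact form the abstract does not give:

```python
import numpy as np

def ncc(template, patch):
    """Normalized correlation coefficient between a template and an
    equally sized image patch (grayscale float arrays)."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t ** 2).sum() * (p ** 2).sum())
    return (t * p).sum() / denom if denom > 0 else 0.0

def best_match(image, templates, weights):
    """Slide several templates over the image and keep the location
    with the highest weighted sum of correlation coefficients."""
    h, w = templates[0].shape
    best, best_score = None, -np.inf
    for y in range(image.shape[0] - h + 1):
        for x in range(image.shape[1] - w + 1):
            patch = image[y:y + h, x:x + w]
            score = sum(wt * ncc(t, patch) for t, wt in zip(templates, weights))
            if score > best_score:
                best, best_score = (x, y), score
    return best, best_score
```

In practice the weights would be raised for templates whose color region agrees with the candidate patch, making the combined score robust to illumination changes.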

Multi-Object Tracking using the Color-Based Particle Filter in ISpace with Distributed Sensor Network

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 5, No. 1 / pp. 46-51 / 2005
  • Intelligent Space (ISpace) is a space in which many intelligent devices, such as computers and sensors, are distributed. For the cooperating devices in such an environment to offer useful services, it is very important that the system knows the location of objects and people. To achieve this goal, we present a method for representing and tracking humans, and for human following, by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd. The article also presents the integration of color distributions into particle filtering, which provides a robust tracking framework under ambiguous conditions. We propose to track the moving objects by generating hypotheses not in the image plane but on a top-view reconstruction of the scene. Comparative results on real video sequences show the advantage of our method for multi-object tracking. Simulations are carried out to evaluate the proposed performance, and the method is applied to the intelligent environment, where its performance is verified by experiments.
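One predict-weight-resample cycle of a color-based particle filter on the top-view plane can be sketched as follows. The patch size, noise level, and Bhattacharyya-coefficient likelihood are illustrative choices, not the authors' exact design:

```python
import numpy as np

rng = np.random.default_rng(0)

def color_histogram(patch, bins=8):
    """Coarse intensity/color histogram used as the appearance model."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def particle_filter_step(particles, frame, target_hist, patch=5, noise=2.0):
    """One cycle of a color-based particle filter on the ground plane."""
    n = len(particles)
    # Predict: diffuse particles with a random-walk motion model
    particles = particles + rng.normal(0.0, noise, particles.shape)
    # Weight: compare the color histogram around each particle to the target
    weights = np.zeros(n)
    for i, (x, y) in enumerate(particles):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi <= frame.shape[0] - patch and 0 <= xi <= frame.shape[1] - patch:
            h = color_histogram(frame[yi:yi + patch, xi:xi + patch])
            weights[i] = np.sum(np.sqrt(h * target_hist))  # Bhattacharyya coeff.
    s = weights.sum()
    weights = weights / s if s > 1e-12 else np.full(n, 1.0 / n)
    # Resample: draw particles in proportion to their weights
    return particles[rng.choice(n, size=n, p=weights)]
```

Working on the top-view plane, as the abstract proposes, lets hypotheses from several cameras be fused in one common coordinate frame.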

Region Based Object Tracking with Snakes

  • 김영섭;한규범;백윤수
    • Proceedings of the KSME Conference / 2001 KSME Spring Conference B / pp. 307-312 / 2001
  • In this paper, we propose an object-tracking algorithm that recognizes and estimates objects of any shape and size using a vision system. A spatio-temporal filter and a signature parsing algorithm are used to extract the objects from the background of the acquired images. In particular, to solve the correspondence problem in multiple-object tracking, we compute the snake energy and position information of the target objects. Through real-time tracking experiments, we verified the effectiveness of the suggested tracking algorithm.
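The snake energy used for correspondence can be sketched as the standard discrete active-contour energy: internal tension and bending terms plus an external term rewarding strong image gradients. The alpha/beta weights are illustrative assumptions:

```python
import numpy as np

def snake_energy(contour, image_grad_mag, alpha=0.1, beta=0.1):
    """Discrete snake (active contour) energy of a closed contour.

    contour: sequence of (x, y) points; image_grad_mag: 2D array of
    image gradient magnitudes. Lower energy means a smoother contour
    lying on stronger edges.
    """
    pts = np.asarray(contour, dtype=float)
    d1 = np.roll(pts, -1, axis=0) - pts                       # tension term
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)  # bending
    e_int = alpha * (d1 ** 2).sum() + beta * (d2 ** 2).sum()
    ij = np.round(pts).astype(int)
    e_ext = -image_grad_mag[ij[:, 1], ij[:, 0]].sum()         # favor strong edges
    return e_int + e_ext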


Visual Tracking Control of Aerial Robotic Systems with Adaptive Depth Estimation

  • Metni, Najib;Hamel, Tarek
    • International Journal of Control, Automation, and Systems / Vol. 5, No. 1 / pp. 51-60 / 2007
  • This paper describes a visual tracking control law for an Unmanned Aerial Vehicle (UAV) intended for the monitoring of structures and the maintenance of bridges. It presents a control law based on computer vision for quasi-stationary flights above a planar target. The first part of the UAV's mission is navigation from an initial position to a final position along a desired trajectory in an unknown 3D environment. The proposed method uses the homography matrix computed from the visual information and derives, using backstepping techniques, an adaptive nonlinear tracking control law that allows effective tracking and depth estimation. The depth represents the desired distance separating the camera from the target.
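The homography the control law builds on is the standard Euclidean relation between two views of a planar target, H = R + (t/d) nᵀ, where d is the depth being estimated. A minimal sketch of that relation (the adaptive control and estimation laws themselves are not reproduced here):

```python
import numpy as np

def euclidean_homography(R, t, n, d):
    """Euclidean homography between two views of a planar target.

    R, t: rotation and translation between the camera frames
    n: unit normal of the target plane in the reference frame
    d: distance from the reference camera to the plane (the depth).
    For any point P on the plane (n·P = d), H @ P equals R @ P + t.
    """
    return R + np.outer(t / d, n)
```

Because d enters H only through the t/d term, estimating the depth online is what lets the controller recover metric translation from the vision data.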