• Title/Abstract/Keywords: vision-based tracking

Search results: 405

Appearance Based Object Identification for Mobile Robot Localization in Intelligent Space with Distributed Vision Sensors

  • Jin, TaeSeok;Morioka, Kazuyuki;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 4, No. 2 / pp.165-171 / 2004
  • Robots will be able to coexist with humans and support them effectively in the near future. One of the most important aspects in the development of human-friendly robots is cooperation between humans and robots. In this paper, we propose a method for multi-object identification and robot localization in an intelligent space, in order to achieve such a human-centered system. The intelligent space is a space in which many intelligent devices, such as computers and sensors, are distributed; it achieves human-centered services by accelerating the physical and psychological interaction between humans and intelligent devices. As an intelligent device of the Intelligent Space, a color CCD camera module that includes processing and networking parts has been chosen. The Intelligent Space requires functions for identifying and tracking multiple objects in order to provide appropriate services to users in a multi-camera environment. Many camera modules are distributed to achieve seamless tracking and location estimation, which causes errors in object identification among the different camera modules. This paper describes an appearance-based object representation for the distributed vision system in the Intelligent Space that achieves consistent labeling of all objects. We then discuss how to learn the object color appearance model and how to achieve multi-object tracking under occlusions.
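
The color-appearance matching the abstract describes can be sketched as histogram intersection over quantized RGB bins. This is a minimal illustration, not the paper's actual appearance model; the function names, the bin count, and the pixel-list input format are all ours.

```python
def color_histogram(pixels, bins=4):
    """Quantize (r, g, b) pixels into a normalized bins**3 color histogram."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1.0
    n = float(len(pixels))
    return [h / n for h in hist]

def histogram_intersection(h1, h2):
    """Appearance similarity in [0, 1]; 1.0 means identical color distribution."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def identify(observation, models):
    """Consistent labeling: return the stored label whose appearance model
    is most similar to the observed histogram."""
    return max(models, key=lambda label: histogram_intersection(observation, models[label]))
```

In a distributed setup, each camera module would keep the same `models` dictionary, so an object re-entering another camera's view is matched back to its original label.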

Vision-Based Hand Gesture Detection and Tracking System

  • 박호식;배철수
    • 한국통신학회논문지 / Vol. 30, No. 12C / pp.1175-1180 / 2005
  • This paper proposes a vision-based hand gesture detection and tracking system. Conventional hand gesture recognition systems detect the hand simply by subtracting the background under a static viewing environment, and are therefore not robust to camera motion or illumination changes. We therefore propose a statistical method that detects the hand by recognizing its shape from its geometric structure. In addition, multiple cameras are used to reduce hand occlusion caused by the camera angle, and asynchronous multi-view observation improves the generality of the system. Experimental results show a recognition rate of 99.28%, a 3.91% improvement over the conventional appearance-based method, demonstrating the effectiveness of the proposed approach.

The Moving Object Gripping Using Vision Systems

  • 조기흠;최병준;전재현;홍석교
    • 대한전기학회 학술대회논문집 / 1998 Summer Conference Proceedings G / pp.2357-2359 / 1998
  • This paper proposes trajectory tracking of a moving object based on a single-camera vision system, together with a method by which a robot manipulator grips the moving object by predicting its coordinates. The trajectory and position coordinates are computed from vision data acquired by the camera, and the robot manipulator tracks and grips the moving object using these data. The proposed vision system uses an algorithm suitable for real-time processing.

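
The coordinate prediction mentioned above can be illustrated with the simplest possible model: assume the object moves at constant velocity between vision samples. The paper's actual predictor is not specified here; `predict_next` and its arguments are hypothetical.

```python
def predict_next(positions, dt=1.0):
    """Predict the object's next (x, y) coordinate from the last two vision
    samples, assuming constant velocity over one sampling interval dt."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx = (x1 - x0) / dt  # estimated velocity from finite differences
    vy = (y1 - y0) / dt
    return (x1 + vx * dt, y1 + vy * dt)
```

The manipulator would be commanded toward the predicted coordinate so that it arrives where the object will be, not where it was last observed.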

A Guideline Tracing Technique Based on a Virtual Tracing Wheel for Effective Navigation of Vision-based AGVs

  • 김민환;변성민
    • 한국멀티미디어학회논문지 / Vol. 19, No. 3 / pp.539-547 / 2016
  • Automated guided vehicles (AGVs) are widely used in industry. Several types of vision-based AGVs have been studied in order to reduce the cost of building infrastructure into the workspace floor and to increase the flexibility of changing the navigation path layout. This paper proposes a practical vision-based guideline tracing method. A virtual tracing wheel is introduced, which enables a vision-based AGV to trace a guideline in diverse ways. The method is also useful for preventing damage to the guideline, since it keeps the AGV's real steering wheel from moving on the guideline. The usefulness of the virtual tracing wheel is analyzed through computer simulations. Navigation tests with a commercial AGV on a typical guideline layout confirmed that the virtual-tracing-wheel-based method works well in practice.
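
The virtual-wheel geometry itself is not given in the abstract; as a rough analogue, a generic pure-pursuit-style follower steers a wheel toward a guideline point observed by the camera at a lookahead distance. The function and parameter names are ours, not the paper's.

```python
import math

def pure_pursuit_steering(lateral_offset, lookahead, wheelbase):
    """Steering angle that aims the (virtual) wheel at a guideline point
    `lateral_offset` to the side and `lookahead` ahead of the vehicle.
    Generic pure-pursuit geometry, not the paper's exact formulation."""
    # Curvature of the arc through the vehicle and the goal point.
    curvature = 2.0 * lateral_offset / (lateral_offset ** 2 + lookahead ** 2)
    # Convert curvature to a front-wheel steering angle (bicycle model).
    return math.atan(wheelbase * curvature)
```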

Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il;Kang, Rae-Won;Lee, Sang-Jun;Ju, Woo-Suk;Lee, Joan-Jae
    • 한국멀티미디어학회논문지 / Vol. 10, No. 6 / pp.716-725 / 2007
  • In this paper we describe a motion tracking algorithm for 3D human animation using a stereo vision system. The motion data of the end effectors of the human body are extracted by following their movement through a segmentation process in the HSI or RGB color model, after which blob analysis is used to detect robust shapes. When two hands or two feet cross at some position and then separate, an adaptive algorithm recognizes which is the left one and which is the right. Real motion is motion in 3D coordinates, whereas a mono image provides only 2D coordinates and no distance from the camera. With stereo vision, as with human vision, we can acquire 3D motion data: left-right and up-down motion as well as the distance of objects from the camera. This requires a depth value, in addition to the x- and y-axis coordinates of the mono image, to transform into 3D coordinates. The depth value (z axis) is calculated from the stereo disparity, using only the end effectors of the images. The positions of the inner joints are then calculated, and the 3D character is visualized using inverse kinematics.

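
The disparity-to-depth step the abstract describes follows the standard rectified-stereo relation z = f·b/d. A minimal sketch, with parameter names of our choosing:

```python
def depth_from_disparity(focal_px, baseline_m, x_left, x_right):
    """Depth (z axis) of a point from its horizontal pixel positions in a
    rectified stereo pair: z = f * b / d, where d = x_left - x_right is
    the disparity, f the focal length in pixels, b the baseline in meters."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_m / d
```

Applied only to the segmented end effectors, this yields the z coordinate needed to lift their 2D image tracks into 3D motion data.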

Controller Design for Object Tracking with an Active Camera

  • 윤수진;최군호
    • 반도체디스플레이기술학회지 / Vol. 10, No. 1 / pp.83-89 / 2011
  • In a tracking system with an active camera, it is very difficult to guarantee real-time processing, because the vision system handles large amounts of data at once and introduces processing delay. The reliability of the result is also degraded by the slow sampling time and by the uncertainty introduced by image processing. In this paper, we characterize the dynamics of pixels projected on the image plane and derive a mathematical model of the vision tracking system that includes both the actuating part and the image processing part. Based on this model, we design a controller that stabilizes the system and enhances tracking performance so that a target can be tracked rapidly. The centroid is used as the position index of the moving object, and the DC motor in the actuating part is controlled to keep the identified centroid at the center of the image plane.
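
The centroid-based position index above is easy to make concrete: compute the centroid of the segmented object and derive a proportional actuating command that drives it toward the image center. This is a sketch of the general idea, not the paper's controller; the gain `kp` and the function names are assumptions.

```python
def centroid(mask):
    """Centroid (x, y) of the nonzero pixels in a binary image (list of rows)."""
    xs = ys = n = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                xs += x
                ys += y
                n += 1
    if n == 0:
        raise ValueError("empty mask: no object detected")
    return xs / n, ys / n

def pan_tilt_command(cx, cy, width, height, kp=0.01):
    """Proportional pan/tilt command that drives the centroid toward the
    image center; the real controller would also compensate for delay."""
    return kp * (cx - width / 2), kp * (cy - height / 2)
```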

Real-Time Objects Tracking using Color Configuration in Intelligent Space with Distributed Multi-Vision

  • 진태석;이장명;하시모토히데키
    • 제어로봇시스템학회논문지 / Vol. 12, No. 9 / pp.843-849 / 2006
  • Intelligent Space defines an environment in which many intelligent devices, such as computers and sensors, are distributed. Through the cooperation of these smart devices, intelligence emerges from the environment. In such a scheme, a crucial task is to obtain the global location of every device in order to offer useful services. Some tracking systems prepare models of the objects in advance, but it is difficult to adopt such a model-based solution when many kinds of objects exist. In this paper, localization is achieved with no prior model, using color properties as the information source. Feature vectors of multiple objects based on color histograms, and the tracking method built on them, are described. The proposed method is applied to the intelligent environment, and its performance is verified by experiments.

A Review of 3D Object Tracking Methods Using Deep Learning

  • 박한훈
    • 융합신호처리학회논문지 / Vol. 22, No. 1 / pp.30-37 / 2021
  • 3D object tracking from camera images is a core technology for augmented reality applications. Spurred by the impressive success of convolutional neural networks (CNNs) in computer vision tasks such as image classification, object detection, and image segmentation, recent research on 3D object tracking has focused on exploiting deep learning. This paper surveys such deep-learning-based 3D object tracking methods: it describes the main approaches and discusses directions for future research.

Bottleneck-based Siam-CNN Algorithm for Object Tracking

  • 임수창;김종찬
    • 한국멀티미디어학회논문지 / Vol. 25, No. 1 / pp.72-81 / 2022
  • Visual object tracking is one of the most fundamental problems in computer vision: the tracker localizes the region of the target object with a bounding box in each video frame. In this paper, a custom CNN is created to extract strong and varied object features. The network is constructed as a Siamese network for use as a feature extractor. The input images are passed through convolution blocks composed of bottleneck layers, which emphasize the features. The feature maps of the target object and the search area, extracted by the Siamese network, are input to a region proposal network, which estimates the object area from the feature maps. The performance of the tracking algorithm was evaluated on the OTB2013 dataset, using the success plot and precision plot as evaluation metrics. In the experiments, scores of 0.611 on the success plot and 0.831 on the precision plot were achieved.
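
The success plot used for evaluation counts the fraction of frames whose predicted box overlaps the ground truth above an IoU threshold; sweeping the threshold gives the plot. A minimal sketch of that metric, assuming boxes in (x, y, w, h) format:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) bounding boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union

def success_rate(pred_boxes, gt_boxes, threshold=0.5):
    """Fraction of frames whose IoU exceeds the threshold (one point on
    the success plot)."""
    hits = sum(1 for p, g in zip(pred_boxes, gt_boxes) if iou(p, g) > threshold)
    return hits / len(gt_boxes)
```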

Using CNN-VGG 16 to Detect Tennis Motion Tracking by Information Entropy and Unascertained Measurement Theory

  • Zhong, Yongfeng;Liang, Xiaojun
    • Advances in nano research / Vol. 12, No. 2 / pp.223-239 / 2022
  • Object detection seeks objects with particular properties or representations and predicts details about them, including position, size, and angle of rotation in the current picture; it is a very important subject of computer vision. While vision-based object tracking strategies for the analysis of competitive sports videos have been developed, it is still difficult to accurately identify and localize a small, fast-moving ball. In this study, a deep learning network was developed to address these obstacles in tennis motion tracking, in order to understand the performance of athletes. The research uses CNN-VGG 16 to track the tennis ball in broadcast videos, where the ball's image is distorted, small, and often invisible: the network not only identifies the ball in a single frame but also learns patterns from consecutive frames. VGG 16 takes images of size 640 × 360 to locate the ball, achieving high accuracy on public videos: 99.6%, 96.63%, and 99.5%, respectively. To avoid overfitting, nine additional videos and a subset of the previous dataset were partly labeled for 10-fold cross-validation. The results show that CNN-VGG 16 outperforms the standard approach by a wide margin and provides excellent ball tracking performance.
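
The 10-fold cross-validation mentioned above partitions the labeled frames into ten disjoint folds, each serving once as the validation set. A minimal index-splitting sketch; the round-robin assignment is our choice, not the paper's:

```python
def k_fold_indices(n, k=10):
    """Split n sample indices into k disjoint folds; in round i of k-fold
    cross-validation, fold i is held out for validation and the rest train."""
    folds = [[] for _ in range(k)]
    for i in range(n):
        folds[i % k].append(i)  # round-robin assignment keeps folds balanced
    return folds
```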