• Title/Summary/Keyword: vision-based tracking


Appearance Based Object Identification for Mobile Robot Localization in Intelligent Space with Distributed Vision Sensors

  • Jin, TaeSeok;Morioka, Kazuyuki;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.2 / pp.165-171 / 2004
  • Robots will be able to coexist with humans and support them effectively in the near future. One of the most important aspects in the development of human-friendly robots is cooperation between humans and robots. In this paper, we propose a method for multi-object identification and robot localization in Intelligent Space, in order to achieve such a human-centered system. Intelligent Space is a space in which many intelligent devices, such as computers and sensors, are distributed. It achieves human-centered services by accelerating the physical and psychological interaction between humans and intelligent devices. As an intelligent device of the Intelligent Space, a color CCD camera module, which includes processing and networking parts, has been chosen. The Intelligent Space requires functions for identifying and tracking multiple objects in order to provide appropriate services to users in a multi-camera environment. Many camera modules are distributed to achieve seamless tracking and location estimation, which causes object identification errors among different camera modules. This paper describes an appearance-based object representation for the distributed vision system in Intelligent Space that achieves consistent labeling of all objects. We then discuss how to learn the object color appearance model and how to achieve multi-object tracking under occlusions.
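A minimal sketch of the kind of color-appearance matching this abstract describes: each object is summarized by a normalized color histogram, and an observation from another camera module is labeled with the closest stored model via the Bhattacharyya coefficient. The bin count and RGB color space are assumptions of this sketch, not details from the paper.

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Normalized 3-D color histogram of an object's pixels (N x 3, values in 0-255)."""
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return hist / hist.sum()

def bhattacharyya(h1, h2):
    """Histogram similarity in [0, 1]; 1 means identical color distributions."""
    return float(np.sum(np.sqrt(h1 * h2)))

def identify(observed_hist, models):
    """Assign the label whose stored appearance model best matches the observation."""
    return max(models, key=lambda label: bhattacharyya(observed_hist, models[label]))
```

Because the histogram depends only on the object's own colors, two camera modules observing the same object from different viewpoints can agree on its label, which is the consistent-labeling problem the abstract raises.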

Vision-based hand Gesture Detection and Tracking System (비전 기반의 손동작 검출 및 추적 시스템)

  • Park Ho-Sik;Bae Cheol-soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.12C / pp.1175-1180 / 2005
  • We present a vision-based hand gesture detection and tracking system. Most conventional hand gesture recognition systems rely on simple hand-detection methods, such as background subtraction under assumed static observation conditions, which are not robust against camera motion, illumination changes, and so on. We therefore propose a statistical method that recognizes and detects hand regions in images using their geometrical structure. Our hand tracking system also employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. In our experiments, the proposed method achieved a recognition rate of 99.28%, an improvement of 3.91% over the conventional appearance-based method.
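The statistical detection the abstract alludes to can be illustrated with a single-Gaussian color model: fit a mean and covariance to training hand pixels, then classify new pixels by squared Mahalanobis distance. This is a generic sketch under assumed parameters, not the authors' actual model.

```python
import numpy as np

def fit_color_model(training_pixels):
    """Fit a single Gaussian to hand-region training pixels (N x 3)."""
    mean = training_pixels.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(training_pixels, rowvar=False))
    return mean, inv_cov

def is_hand_pixel(pixel, mean, inv_cov, threshold=9.0):
    """Classify a pixel by squared Mahalanobis distance to the hand-color model."""
    d = np.asarray(pixel, dtype=float) - mean
    return float(d @ inv_cov @ d) < threshold
```

Unlike background subtraction, this test depends only on the pixel's color statistics, so it is unaffected by camera motion, which is the robustness argument the abstract makes.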

The Moving Object Gripping Using Vision Systems (비젼 시스템을 이용한 이동 물체의 그립핑)

  • Cho, Ki-Heum;Choi, Byong-Joon;Jeon, Jae-Hyun;Hong, Suk-Kyo
    • Proceedings of the KIEE Conference / 1998.07g / pp.2357-2359 / 1998
  • This paper proposes trajectory tracking of a moving object based on a single-camera vision system, together with a method by which a robot manipulator predicts the coordinates of the moving object and grips it. The trajectory and position coordinates are computed from vision data acquired by the camera, and the robot manipulator uses these data to track and grip the moving object. The proposed vision system uses an algorithm designed for real-time processing.
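One simple way to predict the coordinates of a moving object from vision data, as the abstract describes, is constant-velocity extrapolation from the last two measurements. This is a generic sketch, not necessarily the paper's predictor.

```python
def predict_position(p_prev, p_curr, dt, lead_time):
    """Extrapolate the object's (x, y) position lead_time seconds ahead,
    assuming constant velocity between the last two vision measurements."""
    vx = (p_curr[0] - p_prev[0]) / dt
    vy = (p_curr[1] - p_prev[1]) / dt
    return (p_curr[0] + vx * lead_time, p_curr[1] + vy * lead_time)
```

The lead time would be chosen to cover the vision-processing delay plus the manipulator's travel time to the intercept point.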


A Guideline Tracing Technique Based on a Virtual Tracing Wheel for Effective Navigation of Vision-based AGVs (비전 기반 무인반송차의 효과적인 운행을 위한 가상추적륜 기반 유도선 추적 기법)

  • Kim, Minhwan;Byun, Sungmin
    • Journal of Korea Multimedia Society / v.19 no.3 / pp.539-547 / 2016
  • Automated guided vehicles (AGVs) are widely used in industry. Several types of vision-based AGVs have been studied in order to reduce the cost of building infrastructure on the workspace floor and to increase flexibility in changing the navigation path layout. A practical vision-based guideline tracing method is proposed in this paper. The method introduces and adopts a virtual tracing wheel, which enables a vision-based AGV to trace a guideline in diverse ways. The method is also useful for preventing damage to the guideline, because it keeps the AGV's real steering wheel from moving on the guideline. The usefulness of the virtual tracing wheel is analyzed through computer simulations. Several navigation tests with a commercial AGV were also performed on a typical guideline layout, and we confirmed that the virtual-tracing-wheel-based method works well in practice.
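The paper's actual virtual-tracing-wheel rule is not given in the abstract. As a rough illustration of steering a vehicle toward a point detected on a guideline, here is a standard pure-pursuit law; the vehicle-frame convention and the choice of this law are assumptions of this sketch, not the authors' method.

```python
import math

def steering_for_target(target_x, target_y, wheelbase):
    """Pure-pursuit-style steering angle (rad) toward a target point on the
    detected guideline, given in the vehicle frame (x forward, y left).
    A virtual tracing point can be any such target chosen ahead of, and
    laterally offset from, the real steering wheel."""
    lookahead = math.hypot(target_x, target_y)
    alpha = math.atan2(target_y, target_x)  # bearing of the target
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)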

Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il;Kang, Rae-Won;Lee, Sang-Jun;Ju, Woo-Suk;Lee, Joan-Jae
    • Journal of Korea Multimedia Society / v.10 no.6 / pp.716-725 / 2007
  • In this paper we describe a motion tracking algorithm for 3D human animation using a stereo vision system. The motion data of the end effectors of the human body are extracted by following their movement through a segmentation process in the HSI or RGB color model, and blob analysis is then used to detect robust shapes. When two hands or two feet cross at some position and separate again, an adaptive algorithm is presented to recognize which is the left one and which is the right one. Real motion is motion in 3-D coordinates, whereas a mono image provides only 2-D coordinates and no distance from the camera. With stereo vision, as with human vision, we can acquire 3-D motion data, such as left-right movement and the distance of objects from the camera. This requires a depth value, in addition to the x- and y-axis coordinates of the mono image, to transform into 3-D coordinates. This depth value (z axis) is calculated from the stereo disparity, using only the end effectors of the images. The positions of the inner joints are then calculated, and a 3-D character can be visualized using inverse kinematics.
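The depth recovery the abstract outlines, computing a z value from the stereo disparity of a matched end effector, follows standard rectified-stereo triangulation, z = f * B / d. A minimal sketch, with placeholder camera parameters:

```python
def triangulate(focal_px, baseline_m, cx, cy, u_left, v_left, u_right):
    """Recover the 3-D position of a matched point from a rectified stereo pair.
    Depth follows z = f * B / d, where d = u_left - u_right is the horizontal
    disparity; x and y are back-projected from the left image coordinates."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    z = focal_px * baseline_m / disparity
    x = (u_left - cx) * z / focal_px
    y = (v_left - cy) * z / focal_px
    return x, y, z
```

As in the paper, only a few such points (the end effectors) need to be triangulated; the inner joints are then filled in by inverse kinematics.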


Controller Design for Object Tracking with an Active Camera (능동 카메라 기반의 물체 추적 제어기 설계)

  • Youn, Su-Jin;Choi, Goon-Ho
    • Journal of the Semiconductor & Display Technology / v.10 no.1 / pp.83-89 / 2011
  • In a tracking system with an active camera, it is very difficult to guarantee real-time processing, because the vision system handles large amounts of data at once and introduces a processing delay. The reliability of the processed result is also degraded by the slow sampling time and by uncertainty caused by the image processing. In this paper, we analyze the dynamic characteristics of pixels reflected on the image plane and derive a mathematical model of the vision tracking system that includes the actuating part and the image-processing part. Based on this model, we design a controller that stabilizes the system and enhances the tracking performance so that a target is tracked rapidly. The centroid is used as the position index of the moving object, and the DC motor in the actuating part is controlled to keep the identified centroid at the center point of the image plane.
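The control objective, driving the object centroid to the image center, can be sketched as a proportional controller on the pixel error. The gain and sign conventions here are assumptions of this sketch; the paper's controller is designed from the derived system model, not a bare P law.

```python
def pan_tilt_command(centroid, image_size, kp=0.05):
    """P-control sketch: motor commands proportional to the centroid's pixel
    error from the image center. The sign of each command in practice depends
    on how the pan/tilt motors are mounted relative to the image axes."""
    ex = centroid[0] - image_size[0] / 2.0
    ey = centroid[1] - image_size[1] / 2.0
    return kp * ex, kp * ey
```

At the image center both commands vanish, so the centroid at the center point is the closed loop's equilibrium.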

Real-Time Objects Tracking using Color Configuration in Intelligent Space with Distributed Multi-Vision (분산다중센서로 구현된 지능화공간의 색상정보를 이용한 실시간 물체추적)

  • Jin, Tae-Seok;Lee, Jang-Myung;Hashimoto, Hideki
    • Journal of Institute of Control, Robotics and Systems / v.12 no.9 / pp.843-849 / 2006
  • Intelligent Space defines an environment in which many intelligent devices, such as computers and sensors, are distributed, and intelligence emerges from the cooperation between these smart devices. In such a scheme, a crucial task is to obtain the global location of every device in order to offer useful services. Tracking systems often prepare models of the objects in advance, but this model-based solution is difficult to adopt when many kinds of objects exist. In this paper, localization is achieved with no prior model, using color properties as the information source. Feature vectors of multiple objects based on color histograms, and the corresponding tracking method, are described. The proposed method is applied to the intelligent environment, and its performance is verified by experiments.

A Review of 3D Object Tracking Methods Using Deep Learning (딥러닝 기술을 이용한 3차원 객체 추적 기술 리뷰)

  • Park, Hanhoon
    • Journal of the Institute of Convergence Signal Processing / v.22 no.1 / pp.30-37 / 2021
  • Accurate 3D object tracking with camera images is a key enabling technology for augmented reality applications. Motivated by the impressive success of convolutional neural networks (CNNs) in computer vision tasks such as image classification, object detection, and image segmentation, recent studies on 3D object tracking have focused on leveraging deep learning. In this paper, we review deep learning approaches for 3D object tracking. We describe the key methods in this field and discuss potential future research directions.

Bottleneck-based Siam-CNN Algorithm for Object Tracking (객체 추적을 위한 보틀넥 기반 Siam-CNN 알고리즘)

  • Lim, Su-Chang;Kim, Jong-Chan
    • Journal of Korea Multimedia Society / v.25 no.1 / pp.72-81 / 2022
  • Visual object tracking is among the most fundamental problems in computer vision: localizing the region of a target object in a video with a bounding box. In this paper, a custom CNN is created to extract object features that carry strong and varied information. The network is constructed as a Siamese network for use as a feature extractor. The input images pass through convolution blocks composed of bottleneck layers, which emphasize the features. The feature maps of the target object and of the search area, extracted by the Siamese network, are fed into a proposal network, which estimates the object area. The performance of the tracking algorithm was evaluated on the OTB2013 dataset, with success plots and precision plots as evaluation metrics. In the experiments, scores of 0.611 on the success plot and 0.831 on the precision plot were achieved.
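The Siamese matching step the abstract describes, comparing the target's feature map against the search area's feature map, can be sketched as a dense dot-product (cross-correlation) response map whose peak gives the target's offset. The feature shapes here are illustrative, not the paper's network dimensions.

```python
import numpy as np

def response_map(search_feat, template_feat):
    """Slide the template feature map over the search feature map and record
    the dot-product similarity at every offset; the peak marks the target."""
    c, th, tw = template_feat.shape            # (channels, height, width)
    _, sh, sw = search_feat.shape
    resp = np.empty((sh - th + 1, sw - tw + 1))
    for i in range(resp.shape[0]):
        for j in range(resp.shape[1]):
            resp[i, j] = np.sum(search_feat[:, i:i + th, j:j + tw] * template_feat)
    return resp

def locate(search_feat, template_feat):
    """Return the (row, col) offset with the strongest response."""
    resp = response_map(search_feat, template_feat)
    return np.unravel_index(np.argmax(resp), resp.shape)
```

In a full tracker this raw response would be refined by a proposal head that regresses the bounding box, as the abstract describes.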

Using CNN- VGG 16 to detect the tennis motion tracking by information entropy and unascertained measurement theory

  • Zhong, Yongfeng;Liang, Xiaojun
    • Advances in nano research / v.12 no.2 / pp.223-239 / 2022
  • Object detection seeks objects with particular properties or representations and predicts their details, including position, size, and rotation angle, in the current picture; it is a very important subject in computer vision. While vision-based object tracking strategies for the analysis of competitive sports videos have been developed, it is still difficult to accurately identify and position a fast-moving small ball. In this study, a deep learning (DL) network was developed to face these obstacles in tennis motion tracking from a complex perspective, in order to understand the performance of athletes. This research used CNN-VGG16 to track the tennis ball in broadcast videos, in which the ball's image is distorted, thin, and often invisible, not only identifying the ball in a single frame but also learning patterns from consecutive frames; VGG16 takes images of size 640 by 360 to locate the ball and obtains high accuracy on public videos, with accuracies of 99.6%, 96.63%, and 99.5%, respectively. To avoid overfitting, 9 additional videos and a subset of the previous dataset were partly labelled for 10-fold cross-validation. The results show that CNN-VGG16 outperforms the standard approach by a wide margin and provides excellent ball tracking performance.
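Learning patterns from consecutive frames, as the abstract mentions, is commonly implemented by stacking several frames into one network input so the model can exploit the ball's motion. A minimal sketch; the frame count and channel-stacking scheme are assumptions, not details from the paper.

```python
import numpy as np

def stack_consecutive(frames, t, n=3):
    """Build one network input by channel-stacking the n frames ending at
    index t. At the start of the sequence, the first frame is repeated so
    the input shape stays constant."""
    clip = list(frames[max(0, t - n + 1): t + 1])
    while len(clip) < n:
        clip.insert(0, clip[0])  # pad at the sequence start
    return np.concatenate(clip, axis=-1)
```

For 640x360 RGB frames with n=3, each input is a 360x640x9 tensor, letting the network see the ball's trajectory even in frames where it is blurred or nearly invisible.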