• Title/Summary/Keyword: Vision based tracking

405 search results

Tracking of Single Moving Object based on Motion Estimation (움직임 추정에 기반한 단일 이동객체 추적)

  • Oh Myoung-Kwan
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.6 no.4
    • /
    • pp.349-354
    • /
    • 2005
  • The study of computer vision aims to create systems that substitute for the human visual sense; moving-object tracking in particular is becoming an important area of study. In this study, we propose a tracking system for a single moving object based on motion estimation. The system estimates motion using a differential image and then tracks the moving object by controlling the camera's pan/tilt device. The proposed system is divided into an image acquisition and preprocessing phase, a motion estimation phase, and an object tracking phase. Experiments show that the motion of the moving object can be estimated, and that during tracking the object was followed correctly without being lost.

  • PDF
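The differential-image pipeline described above reduces to a few steps: difference two frames, threshold, take the centroid of the changed pixels, and treat its offset from the image centre as the pan/tilt error. A minimal NumPy sketch of those steps (function names and the threshold value are illustrative, not from the paper):

```python
import numpy as np

def estimate_motion(prev_frame, curr_frame, threshold=25):
    """Threshold the absolute frame difference and return the centroid
    (x, y) of the changed pixels, or None if nothing moved."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(diff > threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def pan_tilt_error(centroid, frame_shape):
    """Offset of the motion centroid from the image centre, i.e. the
    error a pan/tilt controller would drive toward zero."""
    h, w = frame_shape
    cx, cy = centroid
    return cx - w / 2.0, cy - h / 2.0
```

Each frame, the camera's pan/tilt device would be stepped so as to reduce this error.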

Implementation of Virtual Instrumentation based Realtime Vision Guided Autopilot System and Onboard Flight Test using Rotary UAV (가상계측기반 실시간 영상유도 자동비행 시스템 구현 및 무인 로터기를 이용한 비행시험)

  • Lee, Byoung-Jin;Yun, Suk-Chang;Lee, Young-Jae;Sung, Sang-Kyung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.9
    • /
    • pp.878-886
    • /
    • 2012
  • This paper investigates the implementation and flight testing of a realtime vision guided autopilot system based on a virtual instrumentation platform. A graphical design process on the virtual instrumentation platform is used throughout for image processing, inter-system communication, vehicle dynamics control, and the vision-coupled guidance algorithms. A significant objective of the algorithm is an autopilot robust to the environment despite wind and irregular image acquisition conditions. For robust vision guided path tracking and hovering performance, the flight path guidance logic is combined, on a multi-conditional basis, with a position estimation algorithm coupled with the vehicle attitude dynamics. An onboard flight test of the developed realtime vision guided autopilot system was conducted using a rotary UAV with full attitude control capability. The outdoor flight test demonstrated that the designed system succeeded in hovering the UAV above a ground target to within several meters under generally windy conditions.
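The abstract does not spell out the guidance logic, but the inner loop of any such hovering autopilot maps a position error to an attitude command. As a rough illustration only, here is a generic saturated PD position-hold law with a toy small-angle simulation; all gains, limits, and names are assumptions, not from the paper:

```python
def pd_hover_command(pos_err, vel, kp=0.8, kd=0.4, limit=0.35):
    """Saturated PD law: horizontal position error (m) and velocity (m/s)
    in, commanded attitude angle (rad) out. Gains are illustrative."""
    cmd = kp * pos_err - kd * vel
    return max(-limit, min(limit, cmd))

def simulate_hover(x0, steps=300, dt=0.05, g=9.81):
    """Toy 1-D hover model: horizontal acceleration = g * attitude angle
    (small-angle approximation), integrated with semi-implicit Euler."""
    x, v = x0, 0.0
    for _ in range(steps):
        theta = pd_hover_command(-x, v)  # error = target (0) - position
        v += g * theta * dt
        x += v * dt
    return x
```

Starting several meters from the target, the toy vehicle settles over the hover point, which is the behaviour the flight test demonstrates at system level.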

A Vision-Based Jig-Saw Puzzle Matching Method (영상처리 시스템을 이용한 그림조각 맞추기에 관한 연구)

  • 이동주;서일홍;오상록
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.27 no.1
    • /
    • pp.96-104
    • /
    • 1990
  • In this paper, a novel method of jig-saw puzzle matching is proposed using a modified boundary matching algorithm that requires no a priori knowledge of the matched puzzle. Specifically, a boundary tracking algorithm is utilized to segment each piece from low-resolution image data. Each segmented piece is described by its corner points, the angle and distance between adjacent corner points, and the convexity or concavity of each corner. The proposed algorithm is implemented and tested on an IBM PC with a PC-based vision system, and applied successfully to real jig-saw puzzles.

  • PDF
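The corner descriptor mentioned above (distance between adjacent corners plus convexity/concavity) can be computed from a piece's corner list with a cross product. A minimal sketch, assuming counter-clockwise corner ordering; the function name is illustrative:

```python
import math

def describe_boundary(corners):
    """For each corner of a closed, counter-clockwise piece boundary,
    return (distance to the next corner, is_convex). Convexity is read
    off the sign of the cross product of the two edges at the corner."""
    n = len(corners)
    desc = []
    for i in range(n):
        px, py = corners[i - 1]            # previous corner
        cx, cy = corners[i]                # this corner
        nx, ny = corners[(i + 1) % n]      # next corner
        cross = (cx - px) * (ny - cy) - (cy - py) * (nx - cx)
        desc.append((math.hypot(nx - cx, ny - cy), cross > 0))
    return desc
```

Two pieces become candidate mates where one boundary's convex run mirrors the other's concave run with matching segment lengths.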

Aspects of a head-mounted eye-tracker based on a bidirectional OLED microdisplay

  • Baumgarten, Judith;Schuchert, Tobias;Voth, Sascha;Wartenberg, Philipp;Richter, Bernd;Vogel, Uwe
    • Journal of Information Display
    • /
    • v.13 no.2
    • /
    • pp.67-71
    • /
    • 2012
  • In today's mobile world, small and lightweight information systems are becoming increasingly important. Microdisplays are the basis for several near-to-eye display devices, and the addition of an integrated image sensor significantly broadens their range of applications. This paper describes the basic building block for such systems: the bidirectional organic light-emitting diode microdisplay. A small and lightweight optical design, an eye-tracking algorithm, and interaction concepts are also presented.

Hand gesture recognition for player control

  • Shi, Lan Yan;Kim, Jin-Gyu;Yeom, Dong-Hae;Joo, Young-Hoon
    • Proceedings of the KIEE Conference
    • /
    • 2011.07a
    • /
    • pp.1908-1909
    • /
    • 2011
  • Hand gesture recognition is widely used in virtual reality and HCI (Human-Computer Interaction) systems, and is a challenging and interesting subject in the vision-based area. Existing approaches to vision-driven interactive user interfaces resort to technologies such as head tracking, face and facial expression recognition, eye tracking, and gesture recognition. The purpose of this paper is to combine a finite state machine (FSM) with a gesture recognition method in order to control Windows Media Player: play/pause, next, previous, and volume up/down.

  • PDF
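Combining an FSM with gesture recognition boils down to a transition table from (state, gesture) to (next state, player action). A toy sketch of that idea; the gesture labels here are hypothetical stand-ins, not the paper's vocabulary:

```python
# (state, gesture) -> (next_state, player_action); gesture labels are
# hypothetical, standing in for the recognized hand gestures.
TRANSITIONS = {
    ("paused",  "open_palm"):   ("playing", "play"),
    ("playing", "open_palm"):   ("paused",  "pause"),
    ("playing", "swipe_right"): ("playing", "next"),
    ("playing", "swipe_left"):  ("playing", "previous"),
    ("playing", "point_up"):    ("playing", "volume_up"),
    ("playing", "point_down"):  ("playing", "volume_down"),
}

def step(state, gesture):
    """One FSM step: unknown (state, gesture) pairs change nothing."""
    return TRANSITIONS.get((state, gesture), (state, None))
```

The FSM's value is exactly this context-dependence: the same gesture issues "play" or "pause" depending on the current state, and spurious gestures are ignored.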

A Study on the optical aspects of machine vision based dimensional measurement system (정밀 좌표측정용 머신비전 시스템의 광학적 해석에 관한 연구)

  • Lee, E.H.
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.11 no.2
    • /
    • pp.149-163
    • /
    • 1994
  • A novel method of dimensional measurement using machine vision, called the Landmark Tracking System (LTS), has been developed. Its advantages come from tracking only bright, standard-shaped "landmarks" made from retroreflective sheets. In designing the LTS, it is essential to know the relationship between optical parameters and their influence on system performance. Such parameters include the brightness of the landmark image, the illumination system design, and the choice of imaging optics; the performance of the retroreflective material also plays an important role. The influence of these optical parameters on the LTS's dimensional measurement characteristics is investigated with respect to the retroreflective material, the imaging optics, and the illumination system, and measurement errors due to parameter variations are analyzed. Experiments were performed with an LTS prototype: the retroreflective characteristics were verified, and the LTS's measurement performance was quantified as repeatability and accuracy. Experimental results show that the LTS has a repeatability better than 1/30,000 of the field of view (30 degrees) and an accuracy better than 1/3,000 of the field of view.

  • PDF
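Sub-pixel repeatability of the order reported (1/30,000 of the field of view) is typically obtained by intensity-weighted centroiding of the bright landmark image. A minimal sketch of that measurement step, assuming an already thresholded grayscale frame (the threshold value is illustrative):

```python
import numpy as np

def landmark_centroid(img, threshold=128):
    """Intensity-weighted (sub-pixel) centroid of the pixels at or above
    threshold; returns (x, y) or None if no landmark is visible."""
    mask = img >= threshold
    if not mask.any():
        return None
    w = img.astype(np.float64) * mask    # keep only bright-landmark pixels
    ys, xs = np.indices(img.shape)
    total = w.sum()
    return float((xs * w).sum() / total), float((ys * w).sum() / total)
```

Because many bright pixels contribute to the average, the centroid resolves far finer than one pixel, which is what makes the quoted repeatability plausible with a retroreflective target.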

Customer Activity Recognition System using Image Processing

  • Waqas, Maria;Nasir, Mauizah;Samdani, Adeel Hussain;Naz, Habiba;Tanveer, Maheen
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.9
    • /
    • pp.63-66
    • /
    • 2021
  • Technological advancement in computer vision has made systems like grab-and-go grocery stores a reality. All shoppers have to do now is walk in, grab their items, and go out, without having to wait in long queues. This paper presents an intelligent retail environment system capable of monitoring and tracking a customer's activity during shopping based on their interaction with the shelf. It aims to develop a system that is low cost, easy to mount, and exhibits adequate performance in a real environment.

Object Tracking of Mobile Robots using Hough Transform (Hough Transform을 이용한 이동 로봇의 물체 추적)

  • Jung, Kyung-Kwon;Shin, Heon-Soo;Lee, Hyun-Kwan;Eom, Ki-Hwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2007.06a
    • /
    • pp.819-822
    • /
    • 2007
  • In this paper, we propose an object-tracking method for mobile robots using the CHT (Circular Hough Transform) algorithm. The proposed method extracts the region of moving objects using a 1-D projection algorithm and detects circular objects using the CHT. To verify the effectiveness of the proposed tracking method, we performed ball-tracking experiments on a mobile robot based on an ARM processor with a CMOS camera.

  • PDF
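The 1-D projection step mentioned above can be sketched as: threshold the difference image, sum along rows and columns, and read the moving region's bounding box off the nonzero spans. A minimal NumPy sketch (names and threshold are illustrative):

```python
import numpy as np

def motion_bbox(diff, threshold=25):
    """Project the thresholded difference image onto the x and y axes
    and return the bounding box (x0, y0, x1, y1) of the moving region."""
    mask = diff > threshold
    xs = np.nonzero(mask.sum(axis=0))[0]   # column (x-axis) projection
    ys = np.nonzero(mask.sum(axis=1))[0]   # row (y-axis) projection
    if xs.size == 0:
        return None
    return int(xs[0]), int(ys[0]), int(xs[-1]), int(ys[-1])
```

Restricting the circular Hough transform's voting to this box is what makes CHT feasible on an embedded ARM platform: the accumulator covers a small window rather than the whole frame.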

Object Tracking using Feature Map from Convolutional Neural Network (컨볼루션 신경망의 특징맵을 사용한 객체 추적)

  • Lim, Suchang;Kim, Do Yeon
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.2
    • /
    • pp.126-133
    • /
    • 2017
  • The conventional hand-crafted features used to track objects have limitations in object representation. Convolutional neural networks (CNNs), which perform well in many areas of computer vision, are emerging as a way to break through these limitations: a CNN extracts image features through multiple layers and learns the kernels used for feature extraction by itself. In this paper, we use the feature map extracted from a convolutional layer of the network to create an outline model of the object and use it for tracking. We also propose a method to adaptively update the outline model to cope with the various environmental changes that affect tracking performance. The proposed algorithm was evaluated on the 11 environmental-change attributes of the CVPR2013 tracking benchmark and showed excellent results on six of them.
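The core of such a tracker, matching a target model against a convolutional feature map and adapting the model over time, can be sketched with a cosine-similarity search and a running-average update. This is a simplified stand-in, not the paper's algorithm, and the feature map here is just a random array:

```python
import numpy as np

def match_template(fmap, tmpl):
    """Exhaustive cosine-similarity search for a template (the target
    model) inside a C x H x W feature map; returns ((x, y), score)."""
    C, H, W = fmap.shape
    _, h, w = tmpl.shape
    tv = tmpl.ravel()
    tn = np.linalg.norm(tv) + 1e-12
    best, best_pos = -np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            pv = fmap[:, y:y + h, x:x + w].ravel()
            s = float(pv @ tv) / (np.linalg.norm(pv) * tn + 1e-12)
            if s > best:
                best, best_pos = s, (x, y)
    return best_pos, best

def update_model(tmpl, patch, alpha=0.1):
    """Running-average model update, the 'adaptive' part of the tracker."""
    return (1.0 - alpha) * tmpl + alpha * patch
```

A small alpha keeps the model stable under occlusion while still absorbing gradual appearance change, the trade-off the adaptive-update proposal addresses.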

Real Time Eye and Gaze Tracking (실시간 눈 및 시선 추적)

  • Min Jin-Kyoung;Cho Hyeon-Seob
    • Proceedings of the KAIS Fall Conference
    • /
    • 2004.11a
    • /
    • pp.234-239
    • /
    • 2004
  • This paper describes preliminary results obtained in developing a computer vision system, based on active IR illumination, for real time gaze tracking for interactive graphic display. Unlike most existing gaze tracking techniques, which often assume a static head to work well and require a cumbersome calibration process for each person, our gaze tracker performs robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function generalizes to individuals not used in training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments involving gaze-contingent interactive graphic display.

  • PDF
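A GRNN is essentially a Gaussian-kernel weighted average of the training targets (the Nadaraya-Watson estimator), which is why the pupil-to-screen mapping need not be an analytical function. A minimal sketch; the bandwidth sigma and the toy data in the usage are illustrative, not from the paper:

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=1.0):
    """GRNN / Nadaraya-Watson regression: predict the target for query x
    as a Gaussian-kernel weighted average of the training targets. Here
    X would hold pupil parameters and Y the screen coordinates."""
    d2 = ((X_train - x) ** 2).sum(axis=1)          # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))           # Gaussian kernel weights
    return (w[:, None] * Y_train).sum(axis=0) / w.sum()
```

Because prediction is just a weighted average over stored exemplars, adding head-pose terms to the input vector folds head movement directly into the mapping, as the abstract describes.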