• Title/Summary/Keyword: video tracking

Design and Implementation of UAV System for Autonomous Tracking

  • Cho, Eunsung;Ryoo, Intae
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.2 / pp.829-842 / 2018
  • Unmanned Aerial Vehicles (UAVs) are used in diverse areas of daily life, such as hobby flying, professional video capture, and disaster-prevention activities, and new applications such as UAV-based delivery have been explored recently. However, most UAV systems are used passively, for real-time video monitoring or for ground-based analysis and storage of recorded footage. More proactive UAV applications require higher-performance UAVs and larger-capacity memory than those presently deployed. Against this backdrop, this study describes a proactive software platform and high-performance UAV hardware for real-time target tracking, presents their design and implementation, and measures and analyzes the core-specific CPU consumption of the implemented platform.

Fuzzy Based Shadow Removal and Integrated Boundary Detection for Video Surveillance

  • Niranjil, Kumar A.;Sureshkumar, C.
    • Journal of Electrical Engineering and Technology / v.9 no.6 / pp.2126-2133 / 2014
  • We present a scalable object-tracking framework capable of removing shadows and tracking people. The framework consists of background subtraction, fuzzy-based shadow removal, and a boundary-tracking algorithm. This work proposes a general-purpose method that combines statistical assumptions with object-level knowledge of the moving objects, apparent objects, and shadows acquired while processing previous frames. Pixels belonging to moving objects and shadows are processed differently in order to supply an object-based selective update. Experimental results demonstrate that the proposed method is able to track object boundaries under significant shadows with noise and background clutter.
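The object-based selective update described above can be sketched in a few lines: foreground (and shadow) pixels are frozen while background pixels adapt. This is a minimal illustration with assumed thresholds and a running-average background model, not the authors' fuzzy shadow-removal method:

```python
import numpy as np

def foreground_mask(bg, frame, thresh=25):
    """Flag pixels that differ from the background model beyond a threshold."""
    return np.abs(frame.astype(float) - bg.astype(float)) > thresh

def update_background(bg, frame, fg_mask, alpha=0.05):
    """Selective update: background pixels blend toward the new frame,
    while pixels marked as moving objects or shadows are left untouched."""
    bg = bg.astype(float).copy()
    bg[~fg_mask] = (1 - alpha) * bg[~fg_mask] + alpha * frame[~fg_mask]
    return bg
```

In a full system the mask would be further split into object vs. shadow classes before the update, which is the role of the fuzzy shadow-removal stage.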

A Robust Multi-part Tracking of Humans in the Video Sequence (비디오 영상내의 사람 추적을 위한 강인한 멀티-파트 추적 방법)

  • 김태현;김진율
    • Proceedings of the IEEK Conference / 2003.07e / pp.2088-2091 / 2003
  • We present a new algorithm for tracking a person in a video sequence that integrates the mean-shift iteration into particle filtering. Exploiting the convergence of the mean-shift iteration to local modes, we show that only a few sample points are sufficient, whereas particle filtering in general requires a large number of samples. Multiple parts of a person are tracked independently of one another based on color; the similarity to the reference color model and the geometric constraints between the parts are then reflected in the sample weights. Computer simulation results are also presented, showing successful tracking even against complex background clutter.
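The resampling step that particle filtering relies on can be sketched as follows. This is a generic systematic-resampling routine, not the authors' mean-shift-augmented sampler; with mean-shift refinement, far fewer particles than usual would feed into it:

```python
import numpy as np

def resample(particles, weights, rng):
    """Systematic resampling: draw particles in proportion to their weights
    using evenly spaced positions on the cumulative weight distribution."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights / weights.sum())
    idx = np.searchsorted(cumulative, positions)
    return particles[idx]
```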

Development of Auto Tracking Vision Control System for Video Conference (화상회의를 위한 자동추적 카메라 제어시스템 개발)

  • Han, Byung-Jo;Hwang, Chan-Gil;Hwang, Young-Ho;Yang, Hai-Won
    • Proceedings of the KIEE Conference / 2008.07a / pp.1712-1713 / 2008
  • In this paper, we develop an auto-tracking vision control system based on image-processing techniques for video conferencing. The developed system consists of control hardware including a vision unit, two DC motors, and DC motor drivers. Image-processing techniques compare the pixels of two consecutive images, and a motion-detection algorithm is applied to eliminate noise. Experimental results are presented to illustrate the effectiveness and applicability of the proposed approach.
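A minimal sketch of the frame-differencing step described above, with an illustrative noise-suppression rule (ignore frames with too few changed pixels); the thresholds are assumptions, and the pan/tilt motor control that would consume the target center is out of scope:

```python
import numpy as np

def motion_pixels(prev, curr, diff_thresh=20, min_region=30):
    """Flag pixels that changed between two frames; suppress noise by
    rejecting detections with too few changed pixels."""
    diff = np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh
    if diff.sum() < min_region:
        return np.zeros_like(diff)
    return diff

def target_center(mask):
    """Centroid of the motion mask; this point would steer the two DC motors."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()
```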

A study on Object Tracking using Color-based Particle Filter

  • Truong, Mai Thanh Nhat;Kim, Sanghoon
    • Proceedings of the Korea Information Processing Society Conference / 2016.04a / pp.743-744 / 2016
  • Object tracking in video sequences is a challenging task with various applications. Particle filtering has proven very successful for non-Gaussian and non-linear estimation problems. In this study, we develop a color-based particle filter in which the color distributions of video frames are integrated into particle filtering. Color distributions are used for their robustness and computational efficiency. The particle filter's model is defined by the color information of the tracked object, and the model is compared with the filter's current hypotheses using the Bhattacharyya coefficient. The proposed tracking method directly incorporates the scale and motion changes of the objects. Experimental results are presented to show the effectiveness of the proposed system.
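The Bhattacharyya coefficient used to compare the target's color model with each particle hypothesis is straightforward to compute from two normalized histograms:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two color histograms:
    1.0 for identical distributions, 0.0 for disjoint ones."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))
```

In a color-based particle filter, each particle's weight is typically derived from this coefficient between the reference histogram and the histogram of the particle's candidate region.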

Real-Time Surveillance of People on an Embedded DSP-Platform

  • Qiao, Qifeng;Peng, Yu;Zhang, Dali
    • Journal of Ubiquitous Convergence Technology / v.1 no.1 / pp.3-8 / 2007
  • This paper presents a set of techniques used in a real-time visual surveillance system implemented on a low-cost embedded DSP platform designed to work with stationary video sources. The system consists of detection, tracking, and classification modules. The detector uses a statistical method to establish the background model and extract foreground pixels. These pixels are grouped into blobs, which are classified into single persons, groups of people, and other objects by dynamic periodicity analysis. The tracking module uses the mean-shift algorithm to locate the target position. The system aims to monitor the human density in the surveilled scene and detect abnormal events. Its major advantages are its real-time capability and that it requires only a video stream, with no additional sensors. We evaluated the system in real applications, such as monitoring a subway entrance and a building hall, and the results demonstrate its strong performance.
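The mean-shift localization step can be sketched as a window repeatedly shifted to the weighted centroid of a likelihood map until it settles on a local mode. How the map is built (e.g., from the target's color histogram) is assumed rather than shown:

```python
import numpy as np

def mean_shift(weight_map, cx, cy, half_win=5, iters=10):
    """Shift a square window over a 2-D likelihood map toward the
    weighted centroid of the pixels it covers, until convergence."""
    h, w = weight_map.shape
    for _ in range(iters):
        x0, x1 = max(int(cx) - half_win, 0), min(int(cx) + half_win + 1, w)
        y0, y1 = max(int(cy) - half_win, 0), min(int(cy) + half_win + 1, h)
        win = weight_map[y0:y1, x0:x1]
        total = win.sum()
        if total == 0:
            break  # no support under the window; give up
        ys, xs = np.mgrid[y0:y1, x0:x1]
        nx = (xs * win).sum() / total
        ny = (ys * win).sum() / total
        converged = abs(nx - cx) < 0.5 and abs(ny - cy) < 0.5
        cx, cy = nx, ny
        if converged:
            break
    return cx, cy
```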

Development of an Integrated Traffic Object Detection Framework for Traffic Data Collection (교통 데이터 수집을 위한 객체 인식 통합 프레임워크 개발)

  • Yang, Inchul;Jeon, Woo Hoon;Lee, Joyoung;Park, Jihyun
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.18 no.6 / pp.191-201 / 2019
  • A fast and accurate integrated traffic-object detection framework was proposed and developed, harnessing a computer-vision-based deep-learning approach for automatic object detection, a multi-object tracking technology, and video pre-processing tools. The proposed method can detect traffic objects such as cars, buses, trucks, and vans in video recordings taken under various external conditions (video stability, weather, camera angle) and count the objects by tracking them on a real-time basis. Experiments with plausible scenarios covering the conditions that affect video quality show that the proposed method performs strongly, achieving 98%~100% accuracy except in rain and snow.
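A counting-by-tracking loop of this kind can be illustrated with greedy IoU matching between detector boxes and existing tracks; this is a simplified stand-in for the paper's multi-object tracker, and the matching threshold is an assumption:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def update_tracks(tracks, detections, next_id, iou_thresh=0.3):
    """Match each detection to the best-overlapping unclaimed track;
    unmatched detections start new tracks, i.e. newly counted vehicles."""
    new_tracks = {}
    for det in detections:
        best_id, best_iou = None, iou_thresh
        for tid, box in tracks.items():
            overlap = iou(det, box)
            if tid not in new_tracks and overlap > best_iou:
                best_id, best_iou = tid, overlap
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
        new_tracks[best_id] = det
    return new_tracks, next_id
```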

Video-based 3-dimensional tracking system (영상을 이용한 3차원 위치 추적 시스템 개발)

  • 박경수;반영환;이안재;임창주
    • Proceedings of the ESK Conference / 1996.10a / pp.160-165 / 1996
  • This paper presents the development of a video-based 3-dimensional tracking system. Measurement of human motion is important in ergonomics applications. The system uses direct video measurement technology: passive retro-reflective markers are attached to a subject, and the markers' movements are observed by two CCD cameras. Infrared light emitted near the CCD cameras is reflected by the markers and detected by the cameras. The images are captured by a Samsung MVB02 board, and the centers of the markers are calculated by a DSP program. The marker positions are transferred from the MVB02 board to the computer over the AT bus. The computer then tracks the position of each marker and saves the data. The system achieves dynamic accuracy within 1% error at a sampling rate of 6-10 Hz, and can analyze the trajectory and speed of each marker. The results of this study can be used for operator motion analysis, task analysis, and hand-movement characteristic analysis.
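Two steps of such a system are easy to sketch: locating a bright marker's centroid in an image, and recovering depth from the disparity between the two cameras. The parallel-axis stereo geometry assumed here is an illustration; the paper does not specify its camera arrangement or calibration:

```python
import numpy as np

def marker_centroid(img, thresh=200):
    """Sub-pixel centroid of a bright retro-reflective marker blob."""
    ys, xs = np.nonzero(img > thresh)
    return xs.mean(), ys.mean()

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Parallel-axis stereo: depth Z = f * B / d, where d is the
    horizontal disparity of the marker between the two images."""
    return focal_px * baseline_m / (x_left - x_right)
```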

Visual Modeling and Content-based Processing for Video Data Storage and Delivery

  • Hwang Jae-Jeong;Cho Sang-Gyu
    • Journal of information and communication convergence engineering / v.3 no.1 / pp.56-61 / 2005
  • In this paper, we present a video rate-control scheme for storage and delivery in which the time-varying viewing interest is driven by human gaze. To track the gaze, the pupil's movement is detected in a three-step process: detecting the face region, the eye region, and the pupil point. To control the bit rate, the quantization parameter (QP) is adjusted using the static parameters, the video-object priority derived from pupil tracking, the target PSNR, and the coder's weighted distortion value. As a result, we achieved a human-interfaced visual model and a corresponding region-of-interest rate-control system.
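A minimal sketch of priority-driven QP adjustment, assuming an H.264-style 0-51 QP range and a linear mapping from region priority to QP offset; both are illustrative assumptions, not the paper's actual control law, which also folds in target PSNR and weighted distortion:

```python
def adjust_qp(base_qp, priority, max_delta=6):
    """Lower the QP (finer quantization, more bits) for high-priority
    gaze regions and raise it elsewhere; priority is in [0, 1]."""
    qp = round(base_qp + max_delta * (1 - 2 * priority))
    return max(0, min(51, qp))  # clamp to an H.264-style QP range
```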

An Iterated Optical Flow Estimation Method for Automatically Tracking and Positioning Homologous Points in Video Image Sequences

  • Tsay, Jaan-Rong;Lee, I-Chien
    • Proceedings of the KSRS Conference / 2003.11a / pp.372-374 / 2003
  • The optical flow theory can be utilized to automatically track and position homologous points in digital video (DV) image sequences. In this paper, the Lucas-Kanade optical flow estimation (LKOFE) method and the normalized cross-correlation (NCC) method are compared and analyzed using DV image sequences acquired by our SONY DCR-PC115 DV camera. An improved optical flow estimation procedure, called 'Iterated Optical Flow Estimation (IOFE)', is then presented. Our test results show that the trackable range of 3~4 pixels in the LKOFE procedure is markedly enlarged, to 30 pixels, in the IOFE.
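The single-window Lucas-Kanade estimate that IOFE builds on solves a small least-squares system over image gradients. A minimal sketch (central-difference gradients, one window, no pyramid or iteration):

```python
import numpy as np

def lucas_kanade(prev, curr, x, y, half_win=2):
    """Estimate optical flow (u, v) at (x, y) by solving the Lucas-Kanade
    least-squares system Ix*u + Iy*v = -It over a small window."""
    prev = prev.astype(float)
    # Central-difference spatial gradients and temporal difference.
    Ix = (np.roll(prev, -1, axis=1) - np.roll(prev, 1, axis=1)) / 2.0
    Iy = (np.roll(prev, -1, axis=0) - np.roll(prev, 1, axis=0)) / 2.0
    It = curr.astype(float) - prev
    win = (slice(y - half_win, y + half_win + 1),
           slice(x - half_win, x + half_win + 1))
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    b = -It[win].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # [u, v]
```

The small trackable range reported for plain LKOFE comes from this linearization being valid only for sub-window motions; iterating the estimate (warping and re-solving), as IOFE does, extends that range.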
