• Title/Summary/Keyword: vision-based tracking


Lane-Level Positioning based on 3D Tracking Path of Traffic Signs (교통 표지판의 3차원 추적 경로를 이용한 자동차의 주행 차로 추정)

  • Park, Soon-Yong;Kim, Sung-ju
    • The Journal of Korea Robotics Society / v.11 no.3 / pp.172-182 / 2016
  • Lane-level vehicle positioning is an important task for enhancing the accuracy of in-vehicle navigation systems and the safety of autonomous vehicles. GPS (Global Positioning System) and DGPS (Differential GPS) are generally used in navigation service systems; however, they only provide an accuracy of about 2~3 m. In this paper, we propose a 3D vision-based lane-level positioning technique which can provide an accurate vehicle position. The proposed method determines the current driving lane of a vehicle by tracking the 3D positions of traffic signs standing at the side of the road. Using a stereo camera, the 3D tracking paths of traffic signs are computed, and their projections onto the 2D road plane are used to determine the distance from the vehicle to the signs. Several experiments are performed to analyze the feasibility of the proposed method on real roads. According to the experimental results, the proposed method achieves 90.9% accuracy in lane-level positioning.
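Below is a minimal sketch of that lane-determination step, offered only as an illustration (the lane width, the function names, and the ceiling-based lane index are assumptions, not the authors' implementation): the tracked 3D sign positions are projected onto the road plane, the lateral offset is averaged, and the offset is converted into a lane count.

```python
import numpy as np

LANE_WIDTH_M = 3.5  # assumed standard lane width; the paper may use a different value

def estimate_lane_from_sign_track(sign_positions_3d, num_lanes):
    """Estimate the driving lane from the 3D tracking path of a roadside sign.

    sign_positions_3d: (N, 3) array of sign positions in the vehicle frame
                       (x forward, y lateral, z up), one per stereo frame.
    Returns a 1-based lane index counted from the side of the road the sign is on.
    """
    track = np.asarray(sign_positions_3d, dtype=float)
    # Project the 3D tracking path onto the 2D road plane by dropping the height.
    road_plane_track = track[:, :2]
    # Lateral distance from the vehicle to the sign, averaged over the whole
    # track to suppress per-frame stereo noise.
    lateral_dist = np.abs(road_plane_track[:, 1]).mean()
    # The sign stands just off the outermost lane, so the distance expressed in
    # lane widths says how many lanes lie between the vehicle and the roadside.
    lane_from_roadside = int(np.ceil(lateral_dist / LANE_WIDTH_M))
    return int(np.clip(lane_from_roadside, 1, num_lanes))

# Example: a sign tracked roughly 7.2 m to the right maps to the 3rd lane from the kerb.
demo_track = np.array([[30.0, -7.3, 2.1], [25.0, -7.2, 2.1], [20.0, -7.1, 2.1]])
print(estimate_lane_from_sign_track(demo_track, num_lanes=4))   # -> 3
```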

POSE-VIEWPOINT ADAPTIVE OBJECT TRACKING VIA ONLINE LEARNING APPROACH

  • Mariappan, Vinayagam;Kim, Hyung-O;Lee, Minwoo;Cho, Juphil;Cha, Jaesang
    • International journal of advanced smart convergence / v.4 no.2 / pp.20-28 / 2015
  • In this paper, we propose an effective tracking algorithm that adapts to posture variation and camera viewpoint change, with an appearance model built from features extracted from video frames using non-adaptive random projections that preserve the structure of the object's image feature space. Existing online tracking algorithms update their models with features from recent video frames, yet numerous issues remain to be addressed despite the improvement in tracking. Data-dependent adaptive appearance models often suffer from drift because the online algorithms do not receive the amount of data required for online learning. To address this, we propose a tracking algorithm whose appearance model is built from randomly projected features of each video frame.
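As an illustration of the non-adaptive random projection idea (a sketch in the spirit of compressive tracking, not the paper's exact model; the names and the sparse projection recipe are assumptions): a fixed sparse random matrix compresses each image patch into a low-dimensional feature that approximately preserves distances, and the online appearance model is then learned on these compressed features.

```python
import numpy as np

def make_random_projection(n_features, n_compressed, seed=0):
    """Fixed (non-adaptive) sparse random projection matrix.
    It is generated once and never updated, so by the Johnson-Lindenstrauss
    lemma it approximately preserves the structure of the feature space."""
    rng = np.random.default_rng(seed)
    # Sparse {-1, 0, +1} entries (Achlioptas-style) keep the projection cheap.
    R = rng.choice([-1.0, 0.0, 1.0], size=(n_compressed, n_features), p=[1/6, 2/3, 1/6])
    return R * np.sqrt(3.0 / n_features)

def compress_patch(patch, R):
    """Map a raw image patch to a low-dimensional appearance feature."""
    return R @ patch.reshape(-1).astype(float)

# The online classifier (e.g. a naive Bayes over these features) is updated
# frame by frame, while R itself stays fixed to avoid data-dependent drift.
R = make_random_projection(n_features=32 * 32, n_compressed=50)
patch = np.random.rand(32, 32)          # stand-in for the tracked object's patch
print(compress_patch(patch, R).shape)   # (50,)
```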

A Development of Video Monitoring System on Real Time (실시간 영상감시 시스템 개발)

  • Cho, Hyun-Seob
    • Journal of the Korea Academia-Industrial cooperation Society / v.8 no.2 / pp.240-244 / 2007
  • Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued those methods is their sensitivity to lighting condition changes, which tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions. By combining the bright-pupil effect resulting from IR light with a conventional appearance-based object recognition technique, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interference. The appearance model is incorporated in both eye detection and tracking via the use of a support vector machine and mean-shift tracking. Additional improvement is achieved by modifying the image acquisition apparatus, including the illuminator and the camera.
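A small sketch of the mean-shift step such a tracker might run on a pupil-likelihood (or histogram back-projection) map; the SVM appearance verification mentioned in the abstract is omitted, and the names and window sizes are assumptions.

```python
import numpy as np

def mean_shift(likelihood_map, window_center, window_size, n_iters=20, eps=0.5):
    """Move a fixed-size window toward the centroid of the likelihood mass
    under it until the shift falls below eps pixels (one mean-shift search)."""
    h, w = likelihood_map.shape
    cy, cx = window_center
    win_h, win_w = window_size
    for _ in range(n_iters):
        y0, y1 = int(max(cy - win_h // 2, 0)), int(min(cy + win_h // 2 + 1, h))
        x0, x1 = int(max(cx - win_w // 2, 0)), int(min(cx + win_w // 2 + 1, w))
        patch = likelihood_map[y0:y1, x0:x1]
        total = patch.sum()
        if total <= 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        new_cy, new_cx = (ys * patch).sum() / total, (xs * patch).sum() / total
        if np.hypot(new_cy - cy, new_cx - cx) < eps:
            cy, cx = new_cy, new_cx
            break
        cy, cx = new_cy, new_cx
    return int(round(cy)), int(round(cx))

# Toy example: a bright-pupil-like blob near (40, 60) is found from a start at (30, 50).
yy, xx = np.mgrid[0:100, 0:100]
blob = np.exp(-((yy - 40) ** 2 + (xx - 60) ** 2) / 50.0)
print(mean_shift(blob, window_center=(30, 50), window_size=(21, 21)))
```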


Experiments of Urban Autonomous Navigation using Lane Tracking Control with Monocular Vision (도심 자율주행을 위한 비전기반 차선 추종주행 실험)

  • Suh, Seung-Beum;Kang, Yeon-Sik;Roh, Chi-Won;Kang, Sung-Chul
    • Journal of Institute of Control, Robotics and Systems / v.15 no.5 / pp.480-487 / 2009
  • Autonomous lane detection with vision is a difficult problem because of varying road conditions, such as shadowed road surfaces, changing light conditions, and signs painted on the road. In this paper we propose a robust lane detection algorithm that overcomes the shadowed-road problem using a statistical method. The algorithm is applied to a vision-based mobile robot system, and the robot follows the lane with a lane-following controller. In parallel with the lane-following controller, the global position of the robot is estimated by the developed localization method to identify the locations where the lane is discontinued. The results of experiments, performed in a region where GPS measurements are unreliable, show good performance in detecting and following the lane under complex conditions with shadows, water marks, and so on.
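The abstract does not spell out the statistical method, so the following is only an illustrative stand-in (the constant k and the row-wise statistic are assumptions): thresholding each image row against its own mean and standard deviation keeps lane paint separable even when a shadow darkens the whole row.

```python
import numpy as np

def lane_mask_rowwise(gray, k=2.0):
    """Boolean mask of pixels that are bright outliers relative to their own row.
    Because the threshold adapts per row, a shadow that darkens an entire row
    does not break the lane/asphalt separation."""
    gray = gray.astype(float)
    row_mean = gray.mean(axis=1, keepdims=True)
    row_std = gray.std(axis=1, keepdims=True) + 1e-6
    return gray > row_mean + k * row_std

# Usage: mask = lane_mask_rowwise(frame_gray); fit the lane model to the masked pixels.
```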

The Recognition of Crack Detection Using Difference Image Analysis Method based on Morphology (모폴로지 기반의 차영상 분석기법을 이용한 균열검출의 인식)

  • Byun Tae-bo;Kim Jang-hyung;Kim Hyung-soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.1 / pp.197-205 / 2006
  • This paper presents a moving-object tracking method using a vision system. To track an object in real time, the image of the moving object has to be kept at the origin of the image coordinate axes. Accordingly, a fuzzy control system is investigated for tracking the moving object, which controls the camera module with a pan/tilt mechanism. Since this system is to be applied to a mobile robot, we design and implement an image processing board for the vision system, and the fuzzy controller is implemented on a StrongARM board. Finally, experiments show that the proposed fuzzy controller is useful for a real-time moving-object tracking system.
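A toy Mamdani-style fuzzy rule base for a single pan (or tilt) axis, shown only to illustrate how such a controller re-centers the tracked object; the triangular memberships, rule consequents, and names are assumptions rather than the paper's actual rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_axis_speed(pixel_error, half_width):
    """Map the target's pixel error from the image center to a pan/tilt speed
    command in [-1, 1]: five fuzzy sets (NB, NS, ZE, PS, PB) on the normalized
    error, each tied to a speed consequent, combined by a weighted average."""
    e = float(np.clip(pixel_error / half_width, -1.0, 1.0))
    centers = [-1.0, -0.5, 0.0, 0.5, 1.0]   # NB, NS, ZE, PS, PB
    speeds = [-1.0, -0.4, 0.0, 0.4, 1.0]    # rule consequents
    weights = [tri(e, c - 0.5, c, c + 0.5) for c in centers]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, speeds)) / total if total else 0.0

# Example: the object sits 120 px right of center in a 640 px wide image,
# so the camera pans right at a moderate speed to re-center it.
print(round(fuzzy_axis_speed(pixel_error=120, half_width=320), 3))   # 0.3
```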

A Task Scheduling Strategy in a Multi-core Processor for Visual Object Tracking Systems (시각물체 추적 시스템을 위한 멀티코어 프로세서 기반 태스크 스케줄링 방법)

  • Lee, Minchae;Jang, Chulhoon;Sunwoo, Myoungho
    • Transactions of the Korean Society of Automotive Engineers / v.24 no.2 / pp.127-136 / 2016
  • Camera-based object detection systems must satisfy recognition performance requirements as well as real-time constraints. Particularly in safety-critical systems such as Autonomous Emergency Braking (AEB), the real-time constraints significantly affect system performance. Recently, multi-core processors and system-on-chip technologies have been widely used to accelerate object detection algorithms by distributing computational loads. However, while the additional hardware improves real-time performance, it also increases the complexity of the system architecture, which makes it difficult to migrate existing algorithms and to develop new ones. In this paper, to improve real-time performance and reduce design complexity, a task scheduling strategy is proposed for visual object tracking systems. The real-time performance of the vision algorithm is increased by applying pipelining to task scheduling on a multi-core processor. Finally, the proposed task scheduling algorithm is applied to a crosswalk detection and tracking system to prove the effectiveness of the proposed strategy.
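A minimal, thread-based sketch of the pipelining idea (the paper targets a multi-core processor; the stage names, queue sizes, and stand-in detector/tracker are assumptions): each stage runs concurrently, so frame N+1 can be captured and detected while frame N is still being tracked.

```python
import queue
import threading

def stage(work, inbox, outbox):
    """One pipeline stage: take an item, process it, pass it downstream.
    A None item is a shutdown signal that is forwarded to the next stage."""
    while True:
        item = inbox.get()
        if item is None:
            outbox.put(None)
            break
        outbox.put(work(item))

capture_q, detect_q, out_q = (queue.Queue(maxsize=4) for _ in range(3))

detect = lambda frame: (frame, f"candidates({frame})")   # stand-in detector task
track = lambda det: f"track state after {det[1]}"        # stand-in tracker task

threads = [
    threading.Thread(target=stage, args=(detect, capture_q, detect_q)),
    threading.Thread(target=stage, args=(track, detect_q, out_q)),
]
for t in threads:
    t.start()

for frame_id in range(5):        # pretend these arrive from the camera
    capture_q.put(frame_id)
capture_q.put(None)

while (result := out_q.get()) is not None:
    print(result)
for t in threads:
    t.join()
```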

Asynchronous Sensor Fusion using Multi-rate Kalman Filter (다중주기 칼만 필터를 이용한 비동기 센서 융합)

  • Son, Young Seop;Kim, Wonhee;Lee, Seung-Hi;Chung, Chung Choo
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.11 / pp.1551-1558 / 2014
  • We propose a multi-rate sensor fusion of vision and radar using a Kalman filter to solve the problems of asynchronous and multi-rate sampling periods in object vehicle tracking. A model-based prediction of object vehicles is performed with a decentralized multi-rate Kalman filter for each sensor (vision and radar). To improve the position prediction performance, a different weighting is applied to each sensor's predicted object position from the multi-rate Kalman filter. The proposed method can provide the estimated positions of the object vehicles at every sampling time of the ECU. The Mahalanobis distance is used to establish correspondence between the measured and predicted objects. Through the experimental results, we validate that the post-processed fusion data give improved tracking performance. The proposed method achieved a twofold improvement in object tracking performance, in terms of root mean square error, compared to a single-sensor method (camera or radar).
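A compressed, single-filter sketch of the fusion idea (the paper runs a decentralized multi-rate Kalman filter per sensor); the constant-velocity model, noise matrices, per-sensor weighting, and the Mahalanobis gate threshold are all assumed values.

```python
import numpy as np

H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)       # both sensors measure (x, y) position

R_BY_SENSOR = {                                 # assumed noise: radar better in range,
    "radar": np.diag([0.3, 1.0]),               # vision better laterally -- this is
    "vision": np.diag([1.5, 0.2]),              # where the per-sensor weighting enters
}

def predict(x, P, dt):
    """Constant-velocity prediction, run at every ECU sampling time."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
    Q = np.eye(4) * 0.05
    return F @ x, F @ P @ F.T + Q

def mahalanobis(z, x, P, R):
    """Distance used to associate a measurement with the predicted object."""
    S = H @ P @ H.T + R
    v = z - H @ x
    return float(np.sqrt(v @ np.linalg.inv(S) @ v))

def update(x, P, z, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

# Predict every ECU cycle; fold in a radar or vision sample whenever one arrives,
# even though the two sensors have different (and unsynchronized) periods.
x, P = np.zeros(4), np.eye(4) * 10.0
for sensor, z in [("radar", np.array([15.1, 3.4])), (None, None), ("vision", np.array([14.6, 3.2]))]:
    x, P = predict(x, P, dt=0.02)
    if sensor and mahalanobis(z, x, P, R_BY_SENSOR[sensor]) < 5.0:
        x, P = update(x, P, z, R_BY_SENSOR[sensor])
print(x[:2])   # fused position estimate at the latest ECU tick
```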

A Real Time Lane Detection Algorithm Using LRF for Autonomous Navigation of a Mobile Robot (LRF 를 이용한 이동로봇의 실시간 차선 인식 및 자율주행)

  • Kim, Hyun Woo;Hawng, Yo-Seup;Kim, Yun-Ki;Lee, Dong-Hyuk;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.19 no.11 / pp.1029-1035 / 2013
  • This paper proposes a real-time lane detection algorithm using an LRF (Laser Range Finder) for autonomous navigation of a mobile robot. There are many technologies for vehicle safety, such as airbags, ABS, EPS, etc. Real-time lane detection is a fundamental requirement for an automotive system that makes use of information from outside the vehicle. Representative methods of lane recognition are vision-based and LRF-based systems. A vision-based system can recognize the three-dimensional environment well only under good image-capturing conditions; unexpected obstacles such as bad illumination, occlusions, and vibrations mean that vision alone cannot satisfy this fundamental requirement. In this paper, we introduce a three-dimensional lane detection algorithm using an LRF, which is very robust against illumination changes. For the three-dimensional lane detection, the difference in laser reflection between the asphalt and the lane markings, which depends on their color and distance, is utilized for the extraction of feature points. A stable tracking algorithm is also introduced empirically in this research. The performance of the proposed lane detection and tracking algorithm has been verified through real experiments.
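As a rough illustration of the reflectance cue (the threshold and statistic are assumptions, not the paper's feature-extraction rule): lane paint returns a noticeably stronger laser intensity than asphalt at a similar distance, so intensity outliers within a scan can be kept as lane candidates.

```python
import numpy as np

def lane_point_indices(intensities, k=2.5):
    """Indices of LRF returns whose intensity is an outlier for the scan,
    i.e. candidate lane-paint points on an otherwise asphalt surface."""
    intensities = np.asarray(intensities, dtype=float)
    mu, sigma = intensities.mean(), intensities.std() + 1e-6
    return np.flatnonzero(intensities > mu + k * sigma)

# Usage: idx = lane_point_indices(scan_intensities); scan_angles[idx] and
# scan_ranges[idx] then give the candidate lane points to feed the tracker.
```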

F-Hessian SIFT-Based Railroad Level-Crossing Vision System (F-Hessian SIFT기반의 철도건널목 영상 감시 시스템)

  • Lim, Hyung-Sup;Yoon, Hak-Sun;Kim, Chel-Huan;Ryu, Deung-Ryeol;Cho, Hwang;Lee, Key-Seo
    • The Journal of the Korea institute of electronic communication sciences / v.5 no.2 / pp.138-144 / 2010
  • This paper presents an experimental analysis of an F-Hessian SIFT-based railroad level-crossing safety vision system. The region of surveillance, regions of interest, and data matching based on extracted feature points have been examined under laboratory conditions using a small-scale model rig. Real-time operation was observed using the F-Hessian-based SIFT feature tracking method and other common algorithms.
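A generic nearest-neighbour matching step with Lowe's ratio test, shown as one plausible way the extracted feature points could be compared between a reference view of the empty crossing and the current frame (the detector, descriptor arrays, and ratio value are assumptions).

```python
import numpy as np

def match_descriptors(des_ref, des_cur, ratio=0.75):
    """Match feature descriptors (float arrays of shape (N, D) and (M, D))
    between a reference view and the current frame; a match is kept only if
    the best candidate is clearly closer than the second best (ratio test)."""
    des_ref = np.asarray(des_ref, dtype=float)
    des_cur = np.asarray(des_cur, dtype=float)
    matches = []
    for i, d in enumerate(des_ref):
        dists = np.linalg.norm(des_cur - d, axis=1)
        if dists.size < 2:
            continue
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

# A sudden drop (or shift) of matches inside the region of interest can then
# flag an obstruction on the level crossing.
```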

Multiple Object Tracking with Color-Based Particle Filter for Intelligent Space (공간지능화를 위한 색상기반 파티클 필터를 이용한 다중물체추적)

  • Jin, Tae-Seok;Hashimoto, Hideki
    • The Journal of Korea Robotics Society / v.2 no.1 / pp.21-28 / 2007
  • The Intelligent Space (ISpace) provides challenging research fields for surveillance, human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. ISpace is a space in which many intelligent devices, such as computers and sensors, are distributed. For these cooperating devices to offer useful services, it is very important that the system knows the location of people and objects in the environment. To achieve this, we present a method for representing, tracking, and following humans by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd. The article also presents the integration of color distributions into particle filtering. Particle filters provide a robust tracking framework under ambiguous conditions. We propose to track the moving objects by generating hypotheses not in the image plane but on the top-view reconstruction of the scene. Comparative results on real video sequences show the advantage of our method for multi-object tracking. The method is also applied to the intelligent environment, and its performance is verified by experiments.
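A minimal single-camera sketch of one color-based particle filter step (the patch size, motion noise, and likelihood sigma are assumed values; the paper additionally fuses multiple distributed cameras and places the hypotheses on the top-view reconstruction rather than in the image).

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Normalized RGB histogram used as the appearance model of a target."""
    hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist.ravel() / (hist.sum() + 1e-9)

def particle_filter_step(particles, frame, target_hist, patch=20, motion_std=5.0, sigma=0.1):
    """One predict-weight-resample cycle of a color-based particle filter.
    particles: (N, 2) array of (x, y) hypotheses for the target location."""
    n = len(particles)
    # 1. Propagate every hypothesis with a random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # 2. Weight each hypothesis by how well its local color histogram matches
    #    the target model (Bhattacharyya similarity turned into a likelihood).
    h, w, _ = frame.shape
    weights = np.empty(n)
    for i, (x, y) in enumerate(particles):
        x0 = int(np.clip(x, 0, w - patch))
        y0 = int(np.clip(y, 0, h - patch))
        hist = color_histogram(frame[y0:y0 + patch, x0:x0 + patch])
        bc = np.sqrt(hist * target_hist).sum()
        weights[i] = np.exp(-(1.0 - bc) / (2 * sigma ** 2))
    weights /= weights.sum()
    estimate = particles.T @ weights          # weighted mean = tracked position
    # 3. Resample hypotheses in proportion to their weights.
    particles = particles[np.random.choice(n, size=n, p=weights)]
    return particles, estimate
```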
