• Title/Summary/Keyword: Vision-based tracking


Human Face Tracking and Modeling using Active Appearance Model with Motion Estimation

  • Tran, Hong Tai;Na, In Seop;Kim, Young Chul;Kim, Soo Hyung
    • Smart Media Journal
    • /
    • v.6 no.3
    • /
    • pp.49-56
    • /
    • 2017
  • Images and videos that include the human face contain a great deal of information, so accurately extracting the human face is an important problem in computer vision. In real life, however, human faces have various shapes and textures. A model-based approach is one of the best ways to adapt to these variations, since unknown data can be represented by the model once it is built. However, the model-based approach is weak when the motion between two frames is large, whether from a sudden change of pose or fast movement. In this paper, we propose an enhanced human face-tracking model. The approach includes face detection and motion estimation using cascaded convolutional neural networks, and continuous face tracking and model-correction steps using the Active Appearance Model. The proposed system detects the human face in the first input frame and initializes the model. On later frames, cascaded CNN face detection estimates the target motion, such as location or pose, before the previous model is applied and fitted to the new target.
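The detect-then-track scheme this abstract describes (a detector re-estimates the target's motion each frame before the appearance model is fitted) can be sketched roughly as follows; the toy detector and "model fit" below are illustrative stand-ins, not the authors' cascaded CNN or AAM:

```python
# Illustrative sketch of a detect-then-track loop: a detector provides a
# coarse location estimate each frame, and the appearance model is moved
# to that estimate before fitting, so large inter-frame motion is
# tolerated. detect() and fit_model() are toy stand-ins for the paper's
# cascaded CNN and Active Appearance Model.

def detect(frame):
    """Toy 'detector': returns the index of the brightest pixel."""
    return max(range(len(frame)), key=lambda i: frame[i])

def fit_model(model_pos, frame, search_radius=2):
    """Toy 'model fit': local search around the current model position."""
    lo = max(0, model_pos - search_radius)
    hi = min(len(frame), model_pos + search_radius + 1)
    return max(range(lo, hi), key=lambda i: frame[i])

def track(frames):
    positions = []
    model_pos = detect(frames[0])        # initialize model on first frame
    positions.append(model_pos)
    for frame in frames[1:]:
        coarse = detect(frame)           # motion estimate from detector
        if abs(coarse - model_pos) > 2:  # large motion: re-seed the model
            model_pos = coarse
        model_pos = fit_model(model_pos, frame)
        positions.append(model_pos)
    return positions

# A bright "face" jumps from index 1 to index 7 between frames; local
# fitting alone would lose it, but the detector re-seeds the model.
frames = [[0, 9, 0, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0, 0, 0, 9]]
print(track(frames))  # [1, 7]
```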

Real-Time Vehicle Detector with Dynamic Segmentation and Rule-based Tracking Reasoning for Complex Traffic Conditions

  • Wu, Bing-Fei;Juang, Jhy-Hong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.12
    • /
    • pp.2355-2373
    • /
    • 2011
  • Vision-based vehicle detector systems are becoming increasingly important in ITS applications. Real-time operation, robustness, precision, accurate estimation of traffic parameters, and ease of setup are important features to consider in developing such systems. Accurate vehicle detection is difficult in varied, complex traffic environments, which include changes in weather as well as challenging traffic conditions such as shadow effects and jams. To meet real-time requirements, the proposed system first applies a color background to extract moving objects, which are then tracked by considering their relative distances and directions. To achieve robustness and precision, the color background is regularly updated by the proposed algorithm to overcome luminance variations. This paper also proposes a feedback-compensation scheme to resolve background convergence errors, which occur when vehicles temporarily park on the roadside while the background image is converging. Next, vehicle occlusion is resolved using the proposed prior-split approach and rule-based tracking reasoning. The approach can automatically detect straight lanes. Following this step, trajectories are applied to derive traffic parameters; finally, to facilitate easy setup, we propose a means to automate the setting of system parameters. Experimental results show that the system operates well under various complex traffic conditions in real time.
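The regularly updated color background the abstract describes can be illustrated with a simple exponential running average; the update rate and threshold below are illustrative choices, not the paper's algorithm:

```python
# Illustrative sketch of background maintenance by exponential running
# average: pixels that differ from the background beyond a threshold are
# flagged as foreground (moving objects), while the background adapts
# slowly so that gradual luminance changes are absorbed. The alpha and
# threshold values are illustrative, not the paper's parameters.

def update_background(bg, frame, alpha=0.05):
    return [b + alpha * (f - b) for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

bg = [100.0, 100.0, 100.0, 100.0]
frame = [102.0, 180.0, 101.0, 99.0]    # a vehicle brightens pixel 1

mask = foreground_mask(bg, frame)
print(mask)                            # [False, True, False, False]
bg = update_background(bg, frame)
print(bg)                              # background drifts toward the frame
```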

Robust Control of Robot Manipulators using Vision Systems

  • Lee, Young-Chan;Jie, Min-Seok;Lee, Kang-Woong
    • Journal of Advanced Navigation Technology
    • /
    • v.7 no.2
    • /
    • pp.162-170
    • /
    • 2003
  • In this paper, we propose a robust controller for trajectory control of n-link robot manipulators using feature-based visual feedback. To reduce the tracking error of the robot manipulator due to parametric uncertainties, integral action is included in the dynamic control part of the inner control loop. The desired trajectory for tracking is generated from features extracted by a camera mounted on the end effector. The stability of the robust state-feedback control system is shown by the Lyapunov method. Simulation and experimental results on a 5-link robot manipulator with two degrees of freedom show that the proposed method has good tracking performance.
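The role of the integral action (driving out steady-state tracking error caused by parametric uncertainty) can be seen in a minimal discrete-time sketch; the plant, gains, and disturbance here are hypothetical, not the paper's manipulator dynamics:

```python
# Minimal sketch of why integral action removes steady-state tracking
# error: a 1-D "joint" with an unknown constant disturbance (standing in
# for parametric uncertainty) is driven to a setpoint. With proportional
# control alone the error settles at a nonzero offset; adding an
# integral term drives it to zero. Gains and dynamics are hypothetical.

def simulate(kp, ki, steps=2000, dt=0.01, disturbance=-2.0, target=1.0):
    x, integral = 0.0, 0.0
    for _ in range(steps):
        err = target - x
        integral += err * dt
        u = kp * err + ki * integral + disturbance  # disturbance = model error
        x += u * dt                                 # simple first-order plant
    return target - x                               # final tracking error

p_only = simulate(kp=5.0, ki=0.0)   # settles at offset -disturbance/kp
pi     = simulate(kp=5.0, ki=10.0)  # integral action removes the offset
print(p_only, pi)
```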

Real time tracking of multiple humans for mobile robot application

  • Park, Joon-Hyuk;Park, Byung-Soo;Lee, Seok;Park, Sung-Kee;Kim, Munsang
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2002.10a
    • /
    • pp.100.3-100
    • /
    • 2002
  • This paper presents a method for robustly detecting and tracking multiple humans from a mobile platform. Human perception is performed in real time by processing images acquired from a moving stereo vision system. We integrate multiple cues, such as human shape, skin color, and depth information, to detect and track each human in a moving background scene. Human shape is measured by edge-based template matching on a distance-transformed image. To improve the robustness of human detection, we apply human face skin color in the HSV color space. We could also increase the accuracy and robustness of both detection and tracking by applying random sampling stochastic estimati...
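The skin-color cue mentioned above amounts to thresholding pixels in HSV space; a minimal sketch using the standard library's `colorsys` follows, where the hue/saturation band is a loose illustrative choice, not the paper's calibrated range:

```python
# Illustrative sketch of skin-color thresholding in HSV space: RGB
# pixels are converted to HSV and kept if hue, saturation, and value
# fall in a loose "skin" band. The band below is an illustrative
# assumption, not the paper's calibrated range.
import colorsys

def is_skin(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # loose band: reddish hue, moderate saturation, not too dark
    return (h < 0.11 or h > 0.94) and 0.15 < s < 0.75 and v > 0.35

pixels = [(224, 172, 105),   # light skin tone
          (141, 85, 36),     # darker skin tone
          (30, 144, 255),    # sky blue
          (34, 139, 34)]     # leaf green
print([is_skin(*p) for p in pixels])  # [True, True, False, False]
```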

Developing Head/Eye Tracking System and Sync Verification

  • Kim, Jeong-Ho;Lee, Dae-Woo;Heo, Se-Jong;Park, Chan-Gook;Baek, Kwang-Yul;Bang, Hyo-Choong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.1
    • /
    • pp.90-95
    • /
    • 2010
  • This paper describes the development of an integrated head and eye tracker system. Vision-based head tracking achieves a 7 mm error over a 300 mm translation. The epipolar method and point matching are used to determine the position and rotation of the head. High-brightness LEDs are installed on the helmet, and the installed pattern is very important for matching the points of the stereo system. The eye tracker also uses LEDs for constant illumination. The position of a gazed object (at a 3 m distance) is determined by pupil tracking, and the eye tracker has a 1-5 pixel error. Integrating the result data of each tracking system is important: RS-232C communication is applied to the integrated system, and a triggering signal is used for synchronization.
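The stereo point-matching step described above ultimately recovers position by triangulation; a minimal rectified-pinhole depth computation can be sketched as follows, with hypothetical focal length and baseline rather than the helmet system's calibration:

```python
# Minimal sketch of stereo triangulation: with a rectified stereo pair,
# depth follows from disparity as Z = f * B / d, and X, Y follow from
# the pinhole model. The focal length and baseline are hypothetical
# values, not the helmet tracker's calibration.

def triangulate(x_left, x_right, y, f=800.0, baseline=0.12):
    """Pixel coords in rectified left/right images -> 3D point (meters)."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    z = f * baseline / disparity
    x = x_left * z / f
    y3 = y * z / f
    return (x, y3, z)

# An LED marker seen 32 px apart between the two cameras:
X, Y, Z = triangulate(x_left=120.0, x_right=88.0, y=-40.0)
print(Z)   # depth in meters: 800 * 0.12 / 32 = 3.0
```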

Implementation of Moving Object Recognition based on Deep Learning

  • Lee, YuKyong;Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology
    • /
    • v.17 no.2
    • /
    • pp.67-70
    • /
    • 2018
  • Object detection and tracking is an exciting and interesting research area in the field of computer vision, and its technologies have been widely used in various application systems such as surveillance, military, and augmented reality. This paper proposes and implements a novel, more robust object recognition and tracking system that localizes and tracks multiple objects from input images, estimating the target state using the likelihoods obtained from multiple CNNs. Experimental results show that the proposed algorithm effectively handles multi-modal target appearances and other exceptions.
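Estimating a target state from multiple CNN likelihoods can be illustrated by combining per-network scores over candidate states; the numbers and the independence-based product rule below are illustrative assumptions, not the authors' estimator:

```python
# Illustrative sketch of fusing likelihoods from multiple networks:
# each "CNN" scores candidate target states; assuming independence, the
# fused posterior is the normalized product of the likelihoods, and the
# target state is its argmax. The scores and the product rule are
# illustrative assumptions, not the paper's estimator.

def fuse(likelihood_maps):
    fused = [1.0] * len(likelihood_maps[0])
    for lm in likelihood_maps:
        fused = [f * l for f, l in zip(fused, lm)]
    total = sum(fused)
    return [f / total for f in fused]

# Three candidate locations scored by two networks: one network is
# ambiguous between states 0 and 1, the other rules out state 0.
cnn_a = [0.45, 0.45, 0.10]
cnn_b = [0.05, 0.80, 0.15]
posterior = fuse([cnn_a, cnn_b])
best = max(range(len(posterior)), key=lambda i: posterior[i])
print(best)   # state 1 wins after fusion
```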

Video Road Vehicle Detection and Tracking based on OpenCV

  • Hou, Wei;Wu, Zhenzhen;Jung, Hoekyung
    • Journal of information and communication convergence engineering
    • /
    • v.20 no.3
    • /
    • pp.226-233
    • /
    • 2022
  • Video surveillance is widely used in security surveillance, military navigation, intelligent transportation, and similar fields; its main research areas are pattern recognition, computer vision, and artificial intelligence. This article uses OpenCV to detect and track vehicles, monitoring them by establishing an adaptive model on a stationary background. Compared with traditional vehicle detection, this approach is not only inexpensive, convenient to install and maintain, and wide in monitoring range, but can also be used directly on the road. Intelligent analysis and processing of the scene image using the CAMSHIFT tracking algorithm can collect various traffic-flow parameters (including the number of vehicles over a period of time) and the specific positions of vehicles at the same time, so as to resolve vehicle offset. The system is reliable in operation and has high practical value.
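The core of the CAMSHIFT tracker mentioned above is a mean-shift iteration on a back-projection (color-probability) map; a minimal mean-shift step over a toy 1-D weight array, not OpenCV's `cv2.CamShift` implementation, can be sketched as:

```python
# Minimal sketch of the mean-shift iteration at the heart of CAMSHIFT:
# the search window repeatedly moves to the weighted centroid of the
# back-projection values inside it until it stops moving. The toy 1-D
# "back-projection" stands in for a real per-pixel color-probability
# image; CAMSHIFT additionally adapts the window size each step.

def mean_shift_1d(backproj, center, half_width=2, max_iter=20):
    for _ in range(max_iter):
        lo = max(0, center - half_width)
        hi = min(len(backproj), center + half_width + 1)
        weights = backproj[lo:hi]
        total = sum(weights)
        if total == 0:
            break
        centroid = sum(i * w for i, w in zip(range(lo, hi), weights)) / total
        new_center = round(centroid)
        if new_center == center:
            break
        center = new_center
    return center

# Probability mass (the "vehicle color") peaks at index 7; the window
# starts at index 3 and climbs to the mode.
backproj = [0, 0, 0, 1, 2, 4, 8, 12, 8, 4, 2]
print(mean_shift_1d(backproj, center=3))   # 7
```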

A Suggestion for Worker Feature Extraction and Multiple-Object Tracking Method in Apartment Construction Sites

  • Kang, Kyung-Su;Cho, Young-Woon;Ryu, Han-Guk
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2021.05a
    • /
    • pp.40-41
    • /
    • 2021
  • The construction industry has the highest rate of occupational accidents and injuries among all industries. The Korean government has installed surveillance camera systems at construction sites to reduce occupational accident rates. Construction safety managers monitor potential hazards at the sites through these surveillance systems; however, monitoring with the human eye alone has critical limitations. Therefore, this study proposes a deep learning-based safety monitoring system that obtains information on the recognition, location, and identification of workers and heavy equipment at construction sites by applying multiple-object tracking with instance segmentation. To evaluate the system's performance, we utilized the MS COCO and MOT Challenge metrics. The results show that the system is well suited to efficiently automating surveillance monitoring tasks at construction sites.
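Multiple-object tracking of workers, as described above, relies on associating detections across frames; a minimal IoU-based greedy association step, an illustrative stand-in for the full instance-segmentation tracker, looks like:

```python
# Minimal sketch of frame-to-frame association for multiple-object
# tracking: each existing track is greedily matched to the unused new
# detection with the highest IoU above a threshold. Boxes are
# (x1, y1, x2, y2). This greedy matcher is an illustrative stand-in for
# the paper's tracker.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, thresh=0.3):
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, thresh
        for d, dbox in enumerate(detections):
            score = iou(tbox, dbox)
            if d not in used and score > best_iou:
                best, best_iou = d, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

# Two worker tracks; detections arrive slightly shifted in the next frame.
tracks = {1: (10, 10, 50, 90), 2: (100, 20, 140, 100)}
detections = [(103, 22, 143, 102), (12, 11, 52, 91)]
print(associate(tracks, detections))   # {1: 1, 2: 0}
```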

Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International journal of advanced smart convergence
    • /
    • v.9 no.3
    • /
    • pp.232-238
    • /
    • 2020
  • In this paper, we study an aerial object detection and position estimation algorithm for the safety of UAVs flying BVLOS. We use a vision sensor and a LiDAR to detect objects: a YOLOv2 architecture based on a CNN detects objects in the 2D image, and a clustering method detects objects in the point-cloud data acquired from the LiDAR. When a single sensor is used, the detection rate can degrade in specific situations depending on the sensor's characteristics, and if the result of a single-sensor detection algorithm is absent or false, its accuracy must be complemented. To complement the accuracy of single-sensor detection, we use a Kalman filter and fuse the results of the individual sensors to improve detection accuracy. We estimate the 3D position of the object using its pixel position and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm in simulations using the Gazebo simulator.
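Fusing a vision range estimate with a LiDAR range via a Kalman filter, as described above, can be illustrated in one dimension; the noise variances and measurements here are made-up numbers, not the paper's:

```python
# Minimal 1-D sketch of Kalman-filter fusion of two sensors: a predicted
# object range is sequentially updated with a vision measurement and a
# (more precise) LiDAR measurement; each update weights the measurement
# by its variance, so the LiDAR pulls the estimate hardest. All numbers
# are illustrative, not the paper's.

def kf_update(x, p, z, r):
    k = p / (p + r)                  # Kalman gain
    return x + k * (z - x), (1 - k) * p

x, p = 50.0, 25.0                    # predicted range (m) and its variance
x, p = kf_update(x, p, z=48.0, r=9.0)   # vision: noisier measurement
x, p = kf_update(x, p, z=47.0, r=1.0)   # LiDAR: more precise measurement
print(x, p)                          # estimate pulled close to the LiDAR value
```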

Lane Detection System Based on Vision Sensors Using a Robust Filter for Inner Edge Detection

  • Shin, Juseok;Jung, Jehan;Kim, Minkyu
    • Journal of Sensor Science and Technology
    • /
    • v.28 no.3
    • /
    • pp.164-170
    • /
    • 2019
  • In this paper, a lane detection and tracking algorithm based on vision sensors, employing a robust filter for inner edge detection, is proposed for developing a lane departure warning system (LDWS). The lateral offset value is precisely calculated by applying the proposed inner-edge-detection filter in the region of interest. The proposed algorithm was compared with an existing algorithm in terms of lateral-offset-based warning alarm occurrence time, and an average error of approximately 15 ms was observed. Tests were also conducted to verify whether a warning alarm is generated when a driver departs from a lane, and an average accuracy of approximately 94% was observed. Additionally, the proposed LDWS was implemented as an embedded system, mounted on a test vehicle, and driven for approximately 100 km to obtain experimental results. The results indicate that the average lane detection rates in the daytime and at night are approximately 97% and 96%, respectively, and the processing rate of the embedded system is approximately 12 fps.
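The lateral-offset computation underlying the warning decision can be sketched simply: given the detected inner lane edges, the camera center's offset from the lane center is compared against a departure threshold. The image width, pixel-to-meter scale, and threshold below are illustrative assumptions, not the paper's calibration:

```python
# Illustrative sketch of the lateral-offset warning decision in an
# LDWS: from the detected inner edges of the left and right lane
# markings, the offset of the camera center from the lane center is
# computed and compared against a departure threshold. Image width,
# meters-per-pixel scale, and the threshold are illustrative
# assumptions, not the paper's calibration.

IMAGE_WIDTH = 640
M_PER_PX = 3.5 / 400     # assumed: a 3.5 m lane spans ~400 px at this row

def lateral_offset(left_edge_px, right_edge_px):
    lane_center = (left_edge_px + right_edge_px) / 2.0
    return (IMAGE_WIDTH / 2.0 - lane_center) * M_PER_PX  # + = right of center

def departure_warning(offset_m, thresh_m=0.6):
    return abs(offset_m) > thresh_m

centered = lateral_offset(120, 520)   # lane centered under the camera
drifting = lateral_offset(40, 440)    # lane shifted left in the image
print(centered, departure_warning(centered))   # 0.0 False
print(drifting, departure_warning(drifting))   # ~0.7 m offset -> warning
```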