• Title/Summary/Keyword: Vehicle Image Tracking


Detecting and Tracking Vehicles at Local Region by using Segmented Regions Information (분할 영역 정보를 이용한 국부 영역에서 차량 검지 및 추적)

  • Lee, Dae-Ho;Park, Young-Tae
    • Journal of KIISE:Software and Applications / v.34 no.10 / pp.929-936 / 2007
  • A novel vision-based scheme for extracting traffic parameters in real time is proposed in this paper. Vehicles are detected and tracked within a local region designated by an operator. The local region is divided into segmented regions using edges and frame differences, and the segmented regions are classified into vehicle, road, shadow, and headlight by statistical and geometrical features. Vehicles are detected from the result of this classification. Traffic parameters such as velocity, length, occupancy, and distance are estimated by tracking with template matching in the local region. Because no background image is used, the scheme can operate under various conditions of weather, time of day, and location. It performed well, with a 90.16% detection rate, on various databases. If the direction, angle, and iris are fitted to operating conditions, we expect it to serve as the core of traffic monitoring systems.
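The frame-difference step that produces the candidate segmented regions can be illustrated with a minimal sketch (hypothetical function name, not the authors' code):

```python
def frame_difference_mask(prev, curr, thresh=25):
    """Binary change mask from two grayscale frames given as lists of rows.

    Pixels whose absolute intensity difference exceeds `thresh` are
    marked 1 (candidate vehicle/shadow/headlight region), else 0.
    """
    return [
        [1 if abs(c - p) > thresh else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 80, 10], [10, 90, 12]]
print(frame_difference_mask(prev, curr))  # [[0, 1, 0], [0, 1, 0]]
```

In the paper's pipeline the resulting mask would be combined with edge information before the region classification step.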

Development of a Vehicle Tracking Algorithm using Automatic Detection Line Calculation (검지라인 자동계산을 이용한 차량추적 알고리즘 개발)

  • Oh, Ju-Taek;Min, Joon-Young;Hur, Byung-Do;Kim, Myung-Seob
    • Journal of Korean Society of Transportation / v.26 no.4 / pp.265-273 / 2008
  • Video Image Processing (VIP) for traffic surveillance has been used not only to gather traffic information but also to detect traffic conflicts and incident conditions. This paper presents the development of a system for gathering traffic information and detecting conflicts based on automatic calculation of pixel length within the detection zone of a Video Detection System (VDS). The algorithm improves the accuracy of traffic information by automatically computing detailed line segments in the detection zone, and the system can be applied to all types of intersections. The experiments were conducted with images from CCTV cameras installed at a Bundang intersection and verified through comparison with a commercial VDS product.
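Once the pixel length of the detection zone is known, traffic parameters follow from simple calibration arithmetic. A hedged sketch (illustrative values and names, assuming a known meters-per-pixel factor for the zone):

```python
def estimate_speed_kmh(pixel_distance, meters_per_pixel, frames_elapsed, fps):
    """Speed of a vehicle crossing a detection zone.

    pixel_distance: pixels travelled between detection lines
    meters_per_pixel: calibration factor for this zone
    frames_elapsed / fps: travel time in seconds
    """
    meters = pixel_distance * meters_per_pixel
    seconds = frames_elapsed / fps
    return meters / seconds * 3.6  # m/s -> km/h

# A vehicle crosses 120 px of a zone calibrated at 0.05 m/px in 9 frames at 30 fps
print(round(estimate_speed_kmh(120, 0.05, 9, 30), 1))  # 72.0
```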

Color Vision Based Close Leading Vehicle Tracking in Stop-and-Go Traffic Condition (저속주행환경에서 컬러비전 기반의 근거리 전방차량추적)

  • Rho, Kwang-Hyun;Han, Min-Hong
    • The Transactions of the Korea Information Processing Society / v.7 no.9 / pp.3037-3047 / 2000
  • This paper describes a method of tracking a close leading vehicle by color image processing, using the pairs of tail and brake lights, which emit red light and are mounted on the rear of the vehicle, in stop-and-go traffic conditions. In the color image converted to an HSV color model, candidate regions of rear lights are identified using the color features of a pair of lights. Then the pair of tail or brake lights is detected by means of the geometrical and location features of the tail- and brake-light pattern. The location of the leading vehicle can be estimated from the location of the detected lights, and the vehicle can be tracked continuously. It is also possible to detect the braking status of the leading vehicle by measuring the change in the HSV color components of the detected pair of lights. In the experiment, this method tracked a leading vehicle successfully in urban road images and was more useful at night than in daylight. The KAV-III (Korea Autonomous Vehicle-III), equipped with a color vision system implementing this algorithm, was able to follow a leading vehicle autonomously at speeds of up to 15 km/h on a paved road at night. This method might be useful for developing an LSA (Low Speed Automation) system that can relieve driver stress in the stop-and-go traffic conditions encountered on urban roads.
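The core color test can be sketched in pure Python with the stdlib `colorsys` module; the hue, saturation, and value thresholds below are illustrative, not taken from the paper:

```python
import colorsys

def is_red_light_pixel(r, g, b, min_s=0.5, min_v=0.4):
    """Classify an RGB pixel (0-255 channels) as a candidate
    tail/brake-light pixel in HSV space.

    Red hue wraps around 0, so hues near either end are accepted.
    Thresholds are illustrative only.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    red_hue = h < 0.05 or h > 0.95
    return red_hue and s >= min_s and v >= min_v

print(is_red_light_pixel(220, 30, 30))  # True: bright saturated red
print(is_red_light_pixel(40, 40, 40))   # False: dark gray road
```

Pixels passing this test would then be grouped into candidate regions and filtered by the geometric pattern of a light pair.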


A Multiple Vehicle Object Detection Algorithm Using Feature Point Matching (특징점 매칭을 이용한 다중 차량 객체 검출 알고리즘)

  • Lee, Kyung-Min;Lin, Chi-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.17 no.1 / pp.123-128 / 2018
  • In this paper, we propose a multi-vehicle object detection algorithm using feature point matching for efficient vehicle object tracking. The proposed algorithm extracts the feature points of the vehicle using the FAST algorithm. The image is partitioned into a 5×5 grid; a cell is marked True if it contains feature points and False otherwise, and the False cells are blacked out to remove unnecessary information other than the vehicle object. The post-processed area is then set as the maximum search window size for the vehicle, and a minimum search window is set from the outermost feature points of the vehicle. Using these search windows, we compensate for the fixed search-window size of the mean-shift algorithm and track the vehicle object. To evaluate the performance of the proposed method, it is compared with the SIFT and SURF algorithms. The result is about four times faster than the SIFT algorithm, and detection is more efficient than with the SURF algorithm.
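The grid check and the minimum search window described above can be sketched as follows (hypothetical helper names; feature extraction itself is left to a FAST implementation):

```python
def grid_has_points(points, width, height, grid=5):
    """Partition a width x height image into grid x grid cells and mark
    each cell True if it contains at least one feature point.
    Cells marked False would be blacked out to discard non-vehicle
    information."""
    cw, ch = width / grid, height / grid
    cells = [[False] * grid for _ in range(grid)]
    for x, y in points:
        cells[min(int(y // ch), grid - 1)][min(int(x // cw), grid - 1)] = True
    return cells

def min_search_window(points):
    """Bounding box of the outermost feature points: (x0, y0, x1, y1)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

pts = [(12, 14), (48, 40), (30, 22)]
print(min_search_window(pts))  # (12, 14, 48, 40)
```

The resulting window bounds would replace the fixed-size kernel of a plain mean-shift tracker.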

Implementation of View Point Tracking System for Outdoor Augmented Reality (옥외 증강현실을 위한 관측점 트래킹 시스템 구현)

  • Choi, Tae-Jong;Kim, Jung-Kuk;Huh, Woong;Jang, Byung-Tae
    • Journal of the Institute of Electronics Engineers of Korea SC / v.41 no.4 / pp.45-54 / 2004
  • In this paper, a view point tracking system is realized for outdoor augmented reality, including broad-area monitoring. Since the surroundings of the moving view point change, the position and observation moment of the view point system must be tracked to maintain consistency between real and virtual images. For this reason, GPS (Global Positioning System) is applied to track the position and direction of the moving system. In addition, an optical position tracking system that can track the view point within a limited area is used, because the local tracking system has to trace the image variation seen by an observer in a moving vehicle at a particular position and time. The realized outdoor augmented reality system, which combines virtual information tracked in real time with the real image, proved practical in various application areas.

Deep Learning Based Emergency Response Traffic Signal Control System

  • Jeong-In, Park
    • Journal of the Korea Society of Computer and Information / v.28 no.2 / pp.121-129 / 2023
  • In this paper, we developed a traffic signal control system for emergencies that can minimize loss of property and life by actively controlling traffic signals over a certain section in response to an emergency. When an emergency vehicle's terminal transmits an emergency signal including identification and GPS information, surrounding images are obtained from cameras and analyzed with deep learning to produce object information such as each object's location, type, and size. After generating tracking information for these objects and detecting the signal system, the signal system is switched to emergency mode; the emergency vehicle is identified and tracked based on the received GPS information, and emergency control signals based on the vehicle's travel route are transmitted to the signal controller. Because the emergency control signal is applied first in response to the emergency signal, the system keeps the emergency vehicle from being blocked, thereby minimizing loss of life and property due to traffic obstacles.

Realization of An Outdoor Augmented Reality System using GPS Tracking Method (GPS 트래킹 방식을 이용한 옥외용 증강현실 시스템 구현)

  • Choi, Tae-Jong;Kim, Jung-Kuk;Huh, Woong;Jang, Byun-Tae
    • Journal of the Institute of Electronics Engineers of Korea CI / v.39 no.5 / pp.45-55 / 2002
  • In this paper, we describe an outdoor augmented reality system that uses GPS tracking for position and attitude information. The system consists of a remote mobile operation unit and a ground operation unit. The remote mobile operation unit includes a real-time image acquisition device, a GPS tracking device, and a wireless data transceiver; the ground operation unit includes a wireless transceiver, a virtual image generating device, and an image superimposing device. The GPS tracking device, which measures the position and attitude of the remote mobile operation unit, was designed with a TANS Vector and an RT-20 for DGPS. The wireless data transceiver handles data transmission between the remote mobile operation unit and the ground operation unit. After the remote mobile operation unit was installed on a vehicle and a helicopter, the system was evaluated to verify its validity in actual applications. The implemented system could be used to obtain real-time remote information for construction simulation, tour guides, broadcasting, disaster observation, or military purposes.

Simultaneous Tracking of Multiple Construction Workers Using Stereo-Vision (다수의 건설인력 위치 추적을 위한 스테레오 비전의 활용)

  • Lee, Yong-Ju;Park, Man-Woo
    • Journal of KIBIM / v.7 no.1 / pp.45-53 / 2017
  • Continuous research efforts have been made on acquiring location data on construction sites. As a result, GPS and RFID are increasingly employed on site to track the location of equipment and materials. However, these systems are based on radio frequency technologies, which require attaching tags to every target entity; implementing them incurs the time and cost of attaching, detaching, and managing the tags or sensors. For this reason, efforts are currently being made to track construction entities using only cameras. Vision-based 3D tracking was presented in previous research, in which the locations of construction manpower, vehicles, and materials were successfully tracked. However, that system is still in its infancy and is not yet ready for practical applications, for two reasons. First, it does not match entities across the two views, and thus cannot track multiple entities simultaneously. Second, the use of a checkerboard in the camera calibration process entails a focus-related problem when the baseline is long and the target entities are located far from the cameras. This paper proposes a vision-based method to track multiple workers simultaneously. An entity matching procedure is added to acquire matching pairs of the same entities across the two views, which is necessary for tracking multiple entities. The proposed method also simplifies the calibration process by avoiding the checkerboard, making it more suitable for realistic deployment on construction sites.
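Once an entity is matched across the two views, its 3D position follows from standard stereo triangulation. A minimal sketch, assuming rectified cameras with focal length `focal_px` and baseline `baseline_m` (illustrative values, not the paper's calibration):

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth of a matched point from a rectified stereo pair.

    disparity = x_left - x_right (pixels); depth = f * B / disparity.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity

# A worker matched at x=640 (left) and x=600 (right), f=800 px, baseline 1.0 m
print(stereo_depth(640, 600, 800, 1.0))  # 20.0 metres
```

Note how a long baseline shrinks the depth error but makes the checkerboard focus problem the paper describes worse, which motivates their calibration change.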

Surf points based Moving Target Detection and Long-term Tracking in Aerial Videos

  • Zhu, Juan-juan;Sun, Wei;Guo, Bao-long;Li, Cheng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.11 / pp.5624-5638 / 2016
  • A novel method based on Surf points is proposed to detect and lock-track a single ground target in aerial videos. Videos captured by moving cameras contain complex motions, which make moving object detection difficult. Our approach has three parts: moving target template detection, search area estimation, and target tracking. Global motion estimation and compensation are first performed by selecting and matching grid-sampled Surf points. Then the single ground target is detected by joint spatial-temporal processing: the temporal process computes the difference between the compensated reference and the current image, and the spatial process applies morphological operations and adaptive binarization. The second part improves a Kalman filter with Surf-point scale information to predict the target position and search area adaptively. Lastly, the local Surf points of the target template are matched within this search region to track the target. Long-term tracking is updated to follow target scaling, occlusion, and large deformation. Experimental results show that the algorithm correctly detects small moving targets in dynamic scenes with complex motions. It is robust to vehicle dithering and to target scale change and rotation, especially under partial occlusion or temporary complete occlusion. Compared with traditional algorithms, our method enables real-time operation, processing 520×390 frames at around 15 fps.
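The prediction step that places the adaptive search area can be sketched with a constant-velocity model; this is a simplified illustration (covariance update omitted, names hypothetical), not the authors' filter:

```python
def kalman_predict(state, dt=1.0):
    """Predict step of a constant-velocity Kalman filter (position part).

    state = (x, y, vx, vy); returns the predicted (x, y, vx, vy),
    whose position is the centre of the next search area.
    """
    x, y, vx, vy = state
    return (x + vx * dt, y + vy * dt, vx, vy)

def search_area(center, scale, margin=1.5):
    """Search window sized from the matched Surf-point scale,
    returned as (x0, y0, x1, y1)."""
    x, y = center[:2]
    half = scale * margin
    return (x - half, y - half, x + half, y + half)

state = (100.0, 50.0, 4.0, -2.0)
pred = kalman_predict(state)
print(pred)                     # (104.0, 48.0, 4.0, -2.0)
print(search_area(pred, 10.0))  # (89.0, 33.0, 119.0, 63.0)
```

Scaling the window with the matched feature scale is what lets the search area grow or shrink as the target changes size.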

Integrated Video Analytics for Drone Captured Video (드론 영상 종합정보처리 및 분석용 시스템 개발)

  • Lim, SongWon;Cho, SungMan;Park, GooMan
    • Journal of Broadcast Engineering / v.24 no.2 / pp.243-250 / 2019
  • In this paper, we propose a system for processing and analyzing drone video that can be applied to various disaster and security situations. The proposed system stores the images acquired from drones on a server and performs image processing and analysis according to various scenarios. For each mission, a deep learning method is used to analyze the images acquired by the drone. Experiments confirm that the system can be applied to traffic volume measurement, suspect and vehicle tracking, survivor identification, and maritime missions.