• Title/Summary/Keyword: Vehicle detection


Deep-learning Sliding Window Based Object Detection and Tracking for Generating Trigger Signal of the LPR System (LPR 시스템 트리거 신호 생성을 위한 딥러닝 슬라이딩 윈도우 방식의 객체 탐지 및 추적)

  • Kim, Jinho
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.17 no.4
    • /
    • pp.85-94
    • /
    • 2021
  • The LPR system's trigger sensor occasionally malfunctions due to the heavy weight of vehicles or aging equipment. Replacing the hardware sensor with a deep-learning-based software sensor for generating the trigger signal would make LPR system maintenance much easier. In this paper, we propose a deep-learning sliding-window-based object detection and tracking algorithm for generating the LPR system's trigger signal. The license plate recognition results for vehicles passing the gate are combined with a conventional tracking algorithm to locate the vehicle on the trigger line. Experimental results show that the deep-learning sliding-window-based trigger signal generation performance was 100% for gate-passing vehicles, including 5.5% trigger signal position errors caused by minimum bounding box location errors in the vehicle detection process.
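
The trigger condition described above — firing when a tracked vehicle's bounding box reaches a virtual trigger line — can be sketched as follows. This is a minimal illustration under assumed conventions (axis-aligned boxes, a horizontal trigger row); the function name and box format are invented, not the paper's implementation:

```python
def box_crosses_trigger(track, trigger_y):
    """Return True when a tracked vehicle's bounding box first
    reaches the virtual trigger line (a horizontal image row).

    track: list of (x, y, w, h) boxes, one per frame, oldest first.
    trigger_y: image row acting as the software trigger line.
    """
    for prev, curr in zip(track, track[1:]):
        prev_bottom = prev[1] + prev[3]   # bottom edge of previous box
        curr_bottom = curr[1] + curr[3]   # bottom edge of current box
        if prev_bottom < trigger_y <= curr_bottom:
            return True
    return False

# A vehicle moving down the frame toward a trigger line at row 400:
track = [(100, 300, 80, 60), (100, 330, 80, 62), (100, 360, 80, 64)]
print(box_crosses_trigger(track, 400))  # True: bottom edge passes 392 -> 424
```

The 5.5% position error the abstract reports would show up here as jitter in the box bottoms, shifting the exact frame at which the crossing test fires.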

An Approach to Video Based Traffic Parameter Extraction (영상을 기반 교통 파라미터 추출에 관한 연구)

  • Yu, Mei;Kim, Yong-Deak
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.38 no.5
    • /
    • pp.42-51
    • /
    • 2001
  • Vehicle detection is the basis of traffic monitoring. Video-based systems have several apparent advantages over other kinds of systems. However, in video-based systems, shadows cause trouble for vehicle detection, especially active shadows cast by moving vehicles. In this paper, a new method that combines background subtraction and edge detection is proposed for vehicle detection and shadow rejection. The method is effective, and the correct rate of vehicle detection was higher than 98% in experiments, during which the passive shadows cast by roadside buildings grew considerably. Based on the proposed vehicle detection method, vehicle tracking, counting, classification, and speed estimation are achieved, so that traffic parameters concerning traffic flow are obtained to describe the load of each lane.
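
The core idea — a pixel counts as vehicle only if it is both foreground (differs from the background) and edge-like, since cast shadows darken the road uniformly and contain few edges — can be sketched in a toy form. The function name, thresholds, and tiny arrays are illustrative, not the paper's method:

```python
def detect_vehicle_mask(frame, background, fg_thresh=30, edge_thresh=40):
    """Combine background subtraction with a simple horizontal
    gradient 'edge' test, keeping only pixels that are both
    foreground and edge-like; uniform cast shadows differ from the
    background but lack edges, so they are rejected.

    frame, background: 2D lists of grayscale values (0-255).
    """
    h, w = len(frame), len(frame[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w):  # start at 1 for the left-difference gradient
            is_fg = abs(frame[y][x] - background[y][x]) > fg_thresh
            is_edge = abs(frame[y][x] - frame[y][x - 1]) > edge_thresh
            mask[y][x] = 1 if (is_fg and is_edge) else 0
    return mask

background = [[100] * 6 for _ in range(2)]
frame = [[100, 100, 200, 210, 100, 100],  # row with a bright, edged vehicle
         [60, 60, 60, 60, 100, 100]]      # row with a uniform shadow
mask = detect_vehicle_mask(frame, background)
print(mask)  # vehicle edge kept in row 0; shadow row 1 is all zeros
```

A real implementation would of course operate on full images (e.g. with OpenCV), but the AND of the two masks is the essential shadow-rejection step.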


Fast Lamp Pairing-based Vehicle Detection Robust to Atypical and Turn Signal Lamps at Night

  • Jeong, Kyeong Min;Song, Byung Cheol
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.4
    • /
    • pp.269-275
    • /
    • 2017
  • Automatic vehicle detection is a very important function for autonomous vehicles. Conventional vehicle detection approaches are based on visible-light images obtained from cameras mounted on a vehicle in the daytime. At night, however, a visible-light image is generally dark and low in contrast, which makes it difficult to recognize a vehicle. As a feature that remains usable in the low-light conditions of nighttime, the rear lamp is virtually unique. However, conventional rear-lamp-based detection methods seldom cope with atypical lamps, such as LED lamps, or with flashing turn signals. In this paper, we detect atypical lamps by blurring the lamp area with a low-pass filter (LPF) to smooth out the lamp shape. We also propose detecting the flicker of a turn signal lamp by vertically projecting the lamp area and examining the maximum difference between the two paired lamps. Experimental results show that the proposed algorithm achieves an F-measure 0.24 higher, on average, than conventional lamp-pairing-based detection methods. In addition, the proposed algorithm runs in a fast 6.4 ms per frame, which verifies its real-time performance.
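
The flicker test — a brake/tail-lamp pair stays symmetric while a blinking indicator makes the left/right difference swing — might be sketched on per-frame lamp intensities like this. The function, threshold, and inputs are invented stand-ins for the paper's vertical-projection comparison:

```python
def is_turn_signal(left_intensity, right_intensity, diff_thresh=50):
    """Flag a flashing turn signal: paired brake lamps stay nearly
    symmetric across frames, while a blinking indicator drives the
    left/right intensity difference above and below a threshold.

    left_intensity, right_intensity: per-frame mean brightness of the
    two lamp regions (e.g. from a vertical projection of each region).
    """
    diffs = [abs(l - r) for l, r in zip(left_intensity, right_intensity)]
    above = any(d > diff_thresh for d in diffs)
    below = any(d <= diff_thresh for d in diffs)
    return above and below  # difference alternates -> blinking lamp

# Symmetric brake lamps vs. a right indicator blinking on and off:
brake = is_turn_signal([200, 200, 200], [195, 198, 202])
blinker = is_turn_signal([120, 120, 120], [120, 240, 120])
print(brake, blinker)  # False True
```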

A Study On the Image Based Traffic Information Extraction Algorithm (영상기반 교통정보 추출 알고리즘에 관한 연구)

  • 하동문;이종민;김용득
    • Journal of Korean Society of Transportation
    • /
    • v.19 no.6
    • /
    • pp.161-170
    • /
    • 2001
  • Vehicle detection is the basis of traffic monitoring. Video-based systems have several apparent advantages over other kinds of systems. However, in video-based systems, shadows cause trouble for vehicle detection, especially active shadows cast by moving vehicles. In this paper, a new method that combines background subtraction and edge detection is proposed for vehicle detection and shadow rejection. The method is effective, and the correct rate of vehicle detection was higher than 98% in experiments, during which the passive shadows cast by roadside buildings grew considerably. Based on the proposed vehicle detection method, vehicle tracking, counting, classification, and speed estimation are achieved, so that traffic information concerning traffic flow is obtained to describe the load of each lane.


A Vehicle Tracking Algorithm Focused on the Initialization of Vehicle Detection-and Distance Estimation (초기 차량 검출 및 거리 추정을 중심으로 한 차량 추적 알고리즘)

  • 이철헌;설성욱;김효성;남기곤;주재흠
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.11
    • /
    • pp.1496-1504
    • /
    • 2004
  • In this paper, we propose an algorithm for initial target vehicle detection, vehicle tracking, and distance estimation on stereo images acquired from a forward-looking stereo camera mounted on a vehicle driving on a road. The vehicle detection process extracts the road region using lane recognition and searches for vehicle features within that region. The distance to the tracked vehicle is estimated by TSS correlogram matching on the stereo images. Simulations show that the proposed method robustly segments, matches, and tracks vehicles in image sequences obtained from a moving stereo camera.
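
The distance step rests on standard stereo geometry: find the horizontal shift (disparity) of a feature between the rectified left and right images, then apply Z = f·B/d. The paper uses TSS correlogram matching; the sketch below substitutes a plain exhaustive sum-of-absolute-differences search on one scanline, purely to illustrate the disparity-to-distance relation:

```python
def stereo_distance(left_row, right_row, x, half, focal_px, baseline_m):
    """Distance from disparity on one rectified stereo scanline.

    A block of width 2*half+1 around column x in the left image is
    matched (sum of absolute differences) against positions in the
    right image; disparity = x - best matching column, and
    Z = focal_px * baseline_m / disparity.
    """
    block = left_row[x - half:x + half + 1]
    best_x, best_sad = None, float("inf")
    for cx in range(half, x + 1):  # disparity is non-negative
        cand = right_row[cx - half:cx + half + 1]
        sad = sum(abs(a - b) for a, b in zip(block, cand))
        if sad < best_sad:
            best_x, best_sad = cx, sad
    disparity = x - best_x
    return focal_px * baseline_m / disparity if disparity > 0 else float("inf")

# A feature at column 20 on the left appears at column 16 on the right
# (disparity 4); with an assumed f = 800 px and 0.2 m baseline:
left = [0] * 40; left[19:22] = [50, 90, 50]
right = [0] * 40; right[15:18] = [50, 90, 50]
print(stereo_distance(left, right, 20, 1, 800.0, 0.2))  # 40.0 (meters)
```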

Temporal matching prior network for vehicle license plate detection and recognition in videos

  • Yoo, Seok Bong;Han, Mikyong
    • ETRI Journal
    • /
    • v.42 no.3
    • /
    • pp.411-419
    • /
    • 2020
  • In real-world intelligent transportation systems, accuracy in vehicle license plate detection and recognition is considered quite critical. Many algorithms have been proposed for still images, but their accuracy on actual videos is not satisfactory. This stems from several problematic conditions in videos, such as vehicle motion blur, variety in viewpoints, outliers, and the lack of publicly available video datasets. In this study, we focus on these challenges and propose a license plate detection and recognition scheme for videos based on a temporal matching prior network. Specifically, to improve the robustness of detection and recognition accuracy in the presence of motion blur and outliers, forward and bidirectional matching priors between consecutive frames are properly combined with layer structures specifically designed for plate detection. We also built our own video dataset for the deep training of the proposed network. During network training, we perform data augmentation based on image rotation to increase robustness regarding the various viewpoints in videos.
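
Rotation-based augmentation, as mentioned above, has to rotate the annotations along with the pixels. A minimal sketch of the label-side transform (rotating plate-corner coordinates about the image center; names and values are hypothetical, not from the paper):

```python
import math

def rotate_points(points, angle_deg, cx, cy):
    """Rotate corner annotations by angle_deg around (cx, cy) —
    the label-side counterpart of rotating the image itself."""
    a = math.radians(angle_deg)
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * math.cos(a) - dy * math.sin(a),
                    cy + dx * math.sin(a) + dy * math.cos(a)))
    return out

# Four corners of a plate box, rotated 90 degrees about (160, 120):
corners = [(100, 50), (180, 50), (180, 80), (100, 80)]
rotated = rotate_points(corners, 90, 160, 120)
print([(round(x, 1), round(y, 1)) for x, y in rotated])
# [(230.0, 60.0), (230.0, 140.0), (200.0, 140.0), (200.0, 60.0)]
```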

Fast, Accurate Vehicle Detection and Distance Estimation

  • Ma, QuanMeng;Jiang, Guang;Lai, DianZhi;Cui, Hua;Song, Huansheng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.2
    • /
    • pp.610-630
    • /
    • 2020
  • A large number of people suffer from traffic accidents each year, so people pay increasing attention to traffic safety. However, traditional methods use laser sensors to calculate vehicle distance at very high cost. In this paper, we propose a deep-learning-based method to calculate vehicle distance with a monocular camera. Our method is inexpensive and quite convenient to deploy on mobile platforms. This paper makes two contributions. First, based on Light-Head RCNN, we propose a new vehicle detection framework called Light-Car Detection that can be used on mobile platforms. Second, the planar homography of projective geometry is used to calculate the distance between the camera and the vehicles ahead. The results show that our detection system achieves a detection speed of 13 FPS and 60.0% mAP on the Adreno 530 GPU of a Samsung Galaxy S7, while requiring only 7.1 MB of storage space. Compared with existing methods, the proposed method achieves better performance.
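
The planar-homography step maps an image point on the road plane (typically the bottom-center of a detected vehicle's box) to ground coordinates, from which distance follows directly. A minimal sketch, with a toy 3×3 matrix standing in for a calibrated homography:

```python
def image_to_ground(H, u, v):
    """Map an image point (u, v), e.g. the bottom-center of a vehicle
    bounding box, to road-plane coordinates via a 3x3 homography H:
    apply H as a projective transform, then divide by the last
    component to leave homogeneous coordinates."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

# Toy homography that simply scales pixels to meters (a real H comes
# from camera calibration and is a full projective matrix):
H = [[0.01, 0.0, 0.0],
     [0.0, 0.01, 0.0],
     [0.0, 0.0, 1.0]]
print(image_to_ground(H, 320, 240))  # ≈ (3.2, 2.4) meters on the road plane
```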

Aerial Dataset Integration For Vehicle Detection Based on YOLOv4

  • Omar, Wael;Oh, Youngon;Chung, Jinwoo;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.4
    • /
    • pp.747-761
    • /
    • 2021
  • With the increasing application of UAVs in intelligent transportation systems, vehicle detection in aerial images has become an essential engineering technology with academic research significance. In this paper, a vehicle detection method for aerial images based on the YOLOv4 deep learning algorithm is presented. At present, the best-known datasets are VOC (The PASCAL Visual Object Classes Challenge), ImageNet, and COCO (Microsoft Common Objects in Context), which can be applied to vehicle detection from UAVs. The value of an integrated dataset lies not only in its quantity and photo quality but also in its diversity, which affects detection accuracy. The method integrates three public aerial image datasets, VAID, UAVD, and DOTA, in a form suitable for YOLOv4. The trained model presents good test results, especially for small, rotating, and compact or dense objects, and meets real-time detection requirements. In future work, we will integrate one more aerial image dataset acquired by our lab to increase the number and diversity of training samples while still meeting the real-time requirements.
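
Integrating datasets for YOLO training typically means converting every annotation to YOLO's normalized `class cx cy w h` label format. A small sketch of that conversion (illustrative of the common format, not the paper's specific pipeline):

```python
def to_yolo(xmin, ymin, xmax, ymax, img_w, img_h, class_id=0):
    """Convert a corner-format box to YOLO's normalized
    'class cx cy w h' label line — the common denominator when
    merging datasets such as VAID, UAVD, and DOTA for training."""
    cx = (xmin + xmax) / 2 / img_w   # box center, normalized to [0, 1]
    cy = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w        # box size, normalized to [0, 1]
    h = (ymax - ymin) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

print(to_yolo(100, 200, 300, 400, 1000, 1000))
# 0 0.200000 0.300000 0.200000 0.200000
```

Note that DOTA annotates rotated quadrilaterals, so merging it this way implies first taking the axis-aligned bounding box of each quadrilateral.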

Radar and Vision Sensor Fusion for Primary Vehicle Detection (레이더와 비전센서 융합을 통한 전방 차량 인식 알고리즘 개발)

  • Yang, Seung-Han;Song, Bong-Sob;Um, Jae-Young
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.7
    • /
    • pp.639-645
    • /
    • 2010
  • This paper presents a sensor fusion algorithm that recognizes the primary vehicle by fusing radar and monocular vision data. In general, most commercial radars may lose track of the primary vehicle, i.e., the closest preceding vehicle in the same lane, when it stops or travels alongside other preceding vehicles in an adjacent lane at similar velocity and range. To mitigate this performance degradation of the radar, vehicle detection information from the vision sensor and the path predicted from ego-vehicle sensors are combined for target classification. The target classification then works with probabilistic association filters to track the primary vehicle. Finally, the performance of the proposed sensor fusion algorithm is validated using field test data from a highway.
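
The selection logic — keep radar targets that vision confirms as vehicles, gate them against the predicted ego path, and pick the closest — can be sketched as follows. This is a heavily simplified stand-in (a boolean flag replaces the paper's probabilistic association filter; all names and the lane half-width are assumptions):

```python
def primary_vehicle(radar_targets, vision_confirmed, lane_half_width=1.8):
    """Pick the primary (closest in-lane preceding) vehicle from radar
    targets, keeping only those that the vision sensor confirmed and
    whose lateral offset lies inside the predicted ego path.

    radar_targets: list of (range_m, lateral_offset_m) tuples.
    vision_confirmed: per-target bool from the vision detector.
    Returns the range of the primary vehicle, or None.
    """
    candidates = [
        rng for (rng, lat), seen in zip(radar_targets, vision_confirmed)
        if seen and abs(lat) <= lane_half_width
    ]
    return min(candidates) if candidates else None

# Three radar targets; the 22 m one sits in the adjacent lane:
targets = [(35.0, -0.4), (22.0, 3.5), (48.0, 0.1)]
print(primary_vehicle(targets, [True, True, True]))  # 35.0
```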

Signal Processing Algorithm of FMCW RADAR using DSP (DSP를 이용한 FMCW 레이다 신호처리 알고리즘)

  • 한성칠;박상진;강성민;구경헌
    • Proceedings of the IEEK Conference
    • /
    • 2001.06a
    • /
    • pp.425-428
    • /
    • 2001
  • In this paper, an FMCW radar signal processing technique for a vehicle detection system is studied, with an FMCW radar sensor used as the detection equipment. To test the performance of the developed algorithm, the signal processing of the vehicle detection system was evaluated by simulation. The radar signal of a driving vehicle was generated using Matlab, and the distance and velocity of vehicles were calculated with the developed algorithm. The signal processing procedure was also applied to virtual data with FM-AM converted noise.
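
For a triangle-chirp FMCW radar, distance and velocity fall out of the up- and down-chirp beat frequencies via the standard relations R = c·f_r·T/(2B) and v = c·f_d/(2f_c). A sketch of that arithmetic (parameter values are illustrative, and the Doppler sign convention assumes an approaching target; this is the textbook relation, not necessarily the paper's exact implementation):

```python
C = 3.0e8  # speed of light, m/s

def fmcw_range_velocity(f_up, f_down, bandwidth, chirp_time, f_carrier):
    """Recover range and radial velocity from the beat frequencies of
    a triangle FMCW chirp. The range term f_r = (f_up + f_down) / 2
    and Doppler term f_d = (f_down - f_up) / 2 give
        R = c * f_r * T / (2 * B),   v = c * f_d / (2 * f_c).
    """
    f_r = (f_up + f_down) / 2.0   # range-induced beat frequency
    f_d = (f_down - f_up) / 2.0   # Doppler shift
    rng = C * f_r * chirp_time / (2.0 * bandwidth)
    vel = C * f_d / (2.0 * f_carrier)
    return rng, vel

# Assumed 77 GHz radar, 150 MHz sweep over 1 ms; beat frequencies for a
# target near 50 m closing at about 10 m/s:
rng, vel = fmcw_range_velocity(44866.67, 55133.33, 150e6, 1e-3, 77e9)
print(round(rng, 2), round(vel, 2))  # 50.0 10.0
```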
