• Title/Summary/Keyword: Image Based Vehicle Detection


Fast Lamp Pairing-based Vehicle Detection Robust to Atypical and Turn Signal Lamps at Night

  • Jeong, Kyeong Min;Song, Byung Cheol
    • IEIE Transactions on Smart Processing and Computing / v.6 no.4 / pp.269-275 / 2017
  • Automatic vehicle detection is a very important function for autonomous vehicles. Conventional vehicle detection approaches are based on visible-light images obtained from cameras mounted on a vehicle in the daytime. However, unlike the daytime, a visible-light image is generally dark at night and its contrast is low, which makes it difficult to recognize a vehicle. The rear lamp is virtually the only feature point that can be used in the low-light conditions of nighttime. However, conventional rear lamp-based detection methods seldom cope with atypical lamps, such as LED lamps, or flashing turn signals. In this paper, we detect atypical lamps by blurring the lamp area with a low-pass filter (LPF) so that the lamp shape can be made out. We also propose detecting flickering of the turn signal lamp by vertically projecting the lamp area and examining the maximum difference between the two paired lamps. Experimental results show that the proposed algorithm achieves an F-measure 0.24 higher, on average, than conventional lamp pairing-based detection methods. In addition, the proposed algorithm has a fast processing time of 6.4 ms per frame, which verifies its real-time performance.
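A minimal sketch of the two ideas in this abstract, assuming OpenCV and NumPy; the threshold, kernel size, and pairing logic are illustrative placeholders rather than the paper's actual values:

    import cv2
    import numpy as np

    def lamp_candidates(gray_night_frame, thresh=200, ksize=9):
        """Blur fragmented LED lamps into solid blobs with a low-pass filter, then binarize."""
        blurred = cv2.GaussianBlur(gray_night_frame, (ksize, ksize), 0)  # LPF step
        _, mask = cv2.threshold(blurred, thresh, 255, cv2.THRESH_BINARY)
        return mask

    def turn_signal_flicker(left_lamp_roi, right_lamp_roi):
        """Vertically project each lamp of a pair and compare their peaks.

        A large peak difference between the two lamps suggests one of them is a
        flashing turn signal.
        """
        left_proj = left_lamp_roi.astype(np.float32).sum(axis=0)   # column-wise sums
        right_proj = right_lamp_roi.astype(np.float32).sum(axis=0)
        return abs(left_proj.max() - right_proj.max())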

A Study On the Image Based Traffic Information Extraction Algorithm (영상기반 교통정보 추출 알고리즘에 관한 연구)

  • 하동문;이종민;김용득
    • Journal of Korean Society of Transportation / v.19 no.6 / pp.161-170 / 2001
  • Vehicle detection is the basis of traffic monitoring. Video-based systems have several apparent advantages over other kinds of systems. However, in video-based systems, shadows cause trouble for vehicle detection, especially active shadows cast by moving vehicles. In this paper, a new method that combines background subtraction and edge detection is proposed for vehicle detection and shadow rejection. The method is effective, and the correct rate of vehicle detection was higher than 98% in experiments, during which the passive shadows cast by roadside buildings grew considerably. Based on the proposed vehicle detection method, vehicle tracking, counting, classification, and speed estimation are achieved, so that traffic information describing the load of each lane is obtained.
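A minimal sketch of combining background subtraction with edge detection for shadow rejection, as described above; OpenCV is assumed, and the fusion rule and thresholds are illustrative:

    import cv2

    def detect_vehicles(frame_gray, background_gray, diff_thresh=30):
        # Background subtraction: pixels that differ strongly from the background model.
        diff = cv2.absdiff(frame_gray, background_gray)
        _, fg_mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        # Edge support: cast shadows tend to be smooth inside, so requiring edges
        # within a foreground blob helps reject them.
        edges = cv2.Canny(frame_gray, 50, 150)
        return cv2.bitwise_and(fg_mask, cv2.dilate(edges, None))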


Template Mask based Parking Car Slots Detection in Aerial Images

  • Wirabudi, Andri Agustav;Han, Heeji;Bang, Junho;Choi, Haechul
    • Journal of Broadcast Engineering / v.27 no.7 / pp.999-1010 / 2022
  • The increase in vehicle purchases worldwide is having a very significant impact on the availability of parking spaces. In particular, since it is difficult to secure a parking space in an urban area, checking vehicle parking information in advance can be of great help to the driver. However, parking lot information is still managed semi-manually, for example through notifications. Therefore, in this study, we propose a system for detecting parking spaces using a relatively simple image processing method based on images taken from the sky, and we evaluate its performance. The proposed method first converts the captured RGB image into a black-and-white binary image, which simplifies the calculation for detection by using discrete information. Next, a morphological operation is applied to increase the clarity of the binary image, and a template mask in the form of a bounding box indicating a parking space is applied to check the parking state. Twelve image samples, comprising 2,181 tests in total, were used for the experiment, and a threshold of 40% was used to detect each parking space. The experimental results showed that information on the availability of parking spaces was provided to parking users with an accuracy of 95%. Although the number of experimental images is somewhat insufficient to establish the generality of this accuracy, the results confirm that parking space detection is possible with a simple image processing method.
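A rough sketch of the binarize, morphology, and template-mask pipeline described above, assuming OpenCV; the 40% occupancy threshold follows the abstract, while the slot coordinates and kernel size are placeholders:

    import cv2
    import numpy as np

    def slot_occupancy(aerial_bgr, slot_boxes, occ_thresh=0.40):
        gray = cv2.cvtColor(aerial_bgr, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = np.ones((5, 5), np.uint8)
        cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)  # clean up the binary image
        status = []
        for (x, y, w, h) in slot_boxes:                  # template mask: one box per slot
            roi = cleaned[y:y + h, x:x + w]
            filled = np.count_nonzero(roi) / float(roi.size)
            status.append(filled > occ_thresh)           # True = occupied
        return status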

Temporal matching prior network for vehicle license plate detection and recognition in videos

  • Yoo, Seok Bong;Han, Mikyong
    • ETRI Journal / v.42 no.3 / pp.411-419 / 2020
  • In real-world intelligent transportation systems, accuracy in vehicle license plate detection and recognition is considered quite critical. Many algorithms have been proposed for still images, but their accuracy on actual videos is not satisfactory. This stems from several problematic conditions in videos, such as vehicle motion blur, variety in viewpoints, outliers, and the lack of publicly available video datasets. In this study, we focus on these challenges and propose a license plate detection and recognition scheme for videos based on a temporal matching prior network. Specifically, to improve the robustness of detection and recognition accuracy in the presence of motion blur and outliers, forward and bidirectional matching priors between consecutive frames are properly combined with layer structures specifically designed for plate detection. We also built our own video dataset for the deep training of the proposed network. During network training, we perform data augmentation based on image rotation to increase robustness regarding the various viewpoints in videos.
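The abstract mentions rotation-based data augmentation for viewpoint robustness; a minimal sketch with OpenCV follows, where the angle range and interpolation mode are assumptions:

    import cv2
    import random

    def rotate_augment(frame, max_angle=10.0):
        """Rotate a video frame by a small random angle around its center."""
        h, w = frame.shape[:2]
        angle = random.uniform(-max_angle, max_angle)
        matrix = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
        return cv2.warpAffine(frame, matrix, (w, h), flags=cv2.INTER_LINEAR)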

Vehicle Detection Scheme Based on a Boosting Classifier with Histogram of Oriented Gradient (HOG) Features and Image Segmentation (HOG 특징 및 영상분할을 이용한 부스팅분류 기반 자동차 검출 기법)

  • Choi, Mi-Soon;Lee, Jeong-Hwan;Roh, Tae-Moon;Shim, Jae-Chang
    • Journal of KIISE: Computing Practices and Letters / v.16 no.10 / pp.955-961 / 2010
  • In this paper, we describe a study of a vehicle detection method based on a boosting classifier that uses Histogram of Oriented Gradient (HOG) features and image segmentation techniques. An input image is segmented by means of a split-and-merge algorithm. Then, the two largest segmented regions are removed in order to reduce the search region and speed up processing. The HOG features are then calculated for each pixel in the search region. To detect the vehicle region, we used AdaBoost (adaptive boosting), which is well known for two-class classification. To evaluate the performance of the proposed method, 537 training images were used to train the classifier, and 500 non-training images were used to measure the recognition rate. In these experiments, the proper image was detected 98.34% of the time on the 500 non-training images. In conclusion, the proposed method can be used for detecting the location of a vehicle in an intelligent vehicle control system.
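A sketch of the HOG-plus-AdaBoost stage described above, assuming scikit-image and scikit-learn; the HOG parameters and classifier settings are illustrative rather than those used in the paper:

    import numpy as np
    from skimage.feature import hog
    from sklearn.ensemble import AdaBoostClassifier

    def hog_features(gray_window):
        return hog(gray_window, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    def train_vehicle_classifier(train_windows, train_labels):
        # train_windows: grayscale patches; train_labels: 1 = vehicle, 0 = background
        X = np.array([hog_features(w) for w in train_windows])
        return AdaBoostClassifier(n_estimators=100).fit(X, np.array(train_labels))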

Night Time Leading Vehicle Detection Using Statistical Feature Based SVM (통계적 특징 기반 SVM을 이용한 야간 전방 차량 검출 기법)

  • Joung, Jung-Eun;Kim, Hyun-Koo;Park, Ju-Hyun;Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications / v.7 no.4 / pp.163-172 / 2012
  • A driver assistance system is critical for improving the convenience and safety of vehicle driving. Several systems have already been commercialized, such as adaptive cruise control and forward collision warning. Efficient vehicle detection is very important for improving such driver assistance systems. Most existing vehicle detection systems are based on radar, which measures the distance between the host and leading (or oncoming) vehicles under various weather conditions; however, radar entails a high deployment cost and becomes complex when many vehicles are present. A camera-based vehicle detection technique is a good alternative because of its low cost and simple implementation. In general, nighttime vehicle detection is more complicated than daytime detection, because it is much more difficult to distinguish vehicle features such as outline and color in a dim environment. This paper proposes a method to detect vehicles at night by analyzing the captured color space while reducing reflections and other light sources in the images. Four color spaces, namely RGB, YCbCr, normalized RGB, and Ruta-RGB, are compared and evaluated. A suboptimal threshold value is determined by the Otsu algorithm and applied to extract candidate taillights of leading vehicles. Statistical features such as mean, variance, skewness, kurtosis, and entropy are extracted from the candidate regions and used as the feature vector for an SVM (Support Vector Machine) classifier. According to our simulation results, the proposed statistical feature-based SVM provides relatively high leading-vehicle detection performance at various distances in variable nighttime environments.
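A sketch of the statistical feature vector (mean, variance, skewness, kurtosis, entropy) computed on a taillight candidate region and fed to an SVM, assuming NumPy, SciPy, and scikit-learn; the paper's color-space handling is not reproduced here:

    import numpy as np
    from scipy.stats import skew, kurtosis, entropy
    from sklearn.svm import SVC

    def region_features(candidate_pixels):
        p = candidate_pixels.astype(np.float64).ravel()
        hist, _ = np.histogram(p, bins=32, density=True)
        return np.array([p.mean(), p.var(), skew(p), kurtosis(p),
                         entropy(hist + 1e-12)])

    # X = np.array([region_features(r) for r in candidate_regions]); y = labels
    # classifier = SVC(kernel='rbf').fit(X, y)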

Optical Camera Communication Based Lateral Vehicle Position Estimation Scheme Using Angle of LED Street Lights (LED 가로등의 각도를 이용한 광카메라통신기반 횡방향 차량 위치추정 기법)

  • Jeon, Hui-Jin;Yun, Soo-Keun;Kim, Byung Wook;Jung, Sung-Yoon
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.9 / pp.1416-1423 / 2017
  • Lane detection technology is one of the most important issues for car safety and the self-driving capability of autonomous vehicles. This paper introduces an accurate lane detection scheme based on OCC (Optical Camera Communication) for moving vehicles. For lane detection, the streetlights and the vehicle's front camera are used as the transmitter and receiver, respectively. Based on the angle information of multiple streetlights in a captured image, the distance from the sidewalk can be calculated using non-linear regression analysis. Simulation results show that the proposed scheme achieves robust and accurate lane detection.
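An illustrative non-linear regression from observed streetlight angles to the lateral distance from the sidewalk, assuming SciPy; the model form and the calibration data here are placeholders, not the paper's:

    import numpy as np
    from scipy.optimize import curve_fit

    def lateral_model(angle_deg, a, b, c):
        # assumed quadratic relation between streetlight angle and lateral offset
        return a * angle_deg ** 2 + b * angle_deg + c

    angles = np.array([10.0, 20.0, 30.0, 40.0])     # placeholder calibration angles (deg)
    distances = np.array([1.2, 2.1, 3.5, 5.2])      # placeholder lateral distances (m)
    params, _ = curve_fit(lateral_model, angles, distances)
    estimate = lateral_model(25.0, *params)         # lateral distance for a new observation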

Vehicle Detection Using Optimal Features for Adaboost (Adaboost 최적 특징점을 이용한 차량 검출)

  • Kim, Gyu-Yeong;Lee, Geun-Hoo;Kim, Jae-Ho;Park, Jang-Sik
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.8 no.8 / pp.1129-1135 / 2013
  • A new vehicle detection algorithm based on multiple Adaboost classifiers with optimal feature selection is proposed. It consists of two major modules: 1) image scaling based on a theoretical DDISF (Distance Dependent Image Scaling Factor) derived from site modeling of the installed cameras, and 2) optimal feature selection by Haar-like feature analysis depending on the distance of the vehicles. The experimental results show that the proposed algorithm improves the recognition rate for vehicles and non-vehicles compared with previous methods. The proposed algorithm achieves a detection rate of about 96.43% and a false alarm rate of about 3.77%, improvements of 3.69% and 1.28%, respectively, over the standard Adaboost algorithm.
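The DDISF formula itself is not given in the abstract, so the following is only a hypothetical illustration of scaling an input frame by an assumed distance-dependent factor before detection; OpenCV is assumed:

    import cv2

    def scale_for_distance(frame, distance_m, ref_distance_m=20.0):
        factor = ref_distance_m / max(distance_m, 1e-3)   # assumed inverse relation to distance
        h, w = frame.shape[:2]
        return cv2.resize(frame, (int(w * factor), int(h * factor)),
                          interpolation=cv2.INTER_LINEAR)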

Methodology for Vehicle Trajectory Detection Using Long Distance Image Tracking (원거리 차량 추적 감지 방법)

  • Oh, Ju-Taek;Min, Joon-Young;Heo, Byung-Do
    • International Journal of Highway Engineering / v.10 no.2 / pp.159-166 / 2008
  • Video image processing systems (VIPS) offer numerous benefits to transportation models and applications, due to their ability to monitor traffic in real time. VIPS based on a wide-area detection algorithm provide traffic parameters such as flow and velocity as well as occupancy and density. However, most current commercial VIPS utilize a tripwire detection algorithm that examines image intensity changes in the detection regions to indicate vehicle presence and passage, i.e., they do not identify individual vehicles as unique targets. If VIPS are developed to track individual vehicles and thus trace vehicle trajectories, many existing transportation models will benefit from more detailed information of individual vehicles. Furthermore, additional information obtained from the vehicle trajectories will improve incident detection by identifying lane change maneuvers and acceleration/deceleration patterns. However, unlike human vision, VIPS cameras have difficulty in recognizing vehicle movements over a detection zone longer than 100 meters. Over such a distance, the camera operators need to zoom in to recognize objects. As a result, vehicle tracking with a single camera is limited to detection zones under 100m. This paper develops a methodology capable of monitoring individual vehicle trajectories based on image processing. To improve traffic flow surveillance, a long distance tracking algorithm for use over 200m is developed with multi-closed circuit television (CCTV) cameras. The algorithm is capable of recognizing individual vehicle maneuvers and increasing the effectiveness of incident detection.
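A toy illustration of handing a vehicle track from one CCTV zone to the next; the abstract does not describe the paper's matching criteria, so the time and lane gating used here are assumptions:

    def hand_off(exiting_track, entering_tracks, max_time_gap=1.0, max_lane_diff=0):
        """Match a track leaving one camera with a track entering the next camera.

        Tracks are dicts with 'time', 'lane', and 'id' keys (illustrative only).
        """
        for cand in entering_tracks:
            gap = cand['time'] - exiting_track['time']
            if 0.0 <= gap <= max_time_gap and abs(cand['lane'] - exiting_track['lane']) <= max_lane_diff:
                return cand['id']        # same physical vehicle across cameras
        return None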


Real Time On-Road Vehicle Detection with Low-Level Visual Features and Boosted Cascade of Haar-Like Features (미약한 시각 특징과 Haar 유사 특징들의 강화 연결에 의한 도로 상의 실 시간 차량 검출)

  • Adhikari, Shyam Prasad;Yoo, Hyeon-Joong;Kim, Hyong-Suk
    • Journal of Institute of Control, Robotics and Systems / v.17 no.1 / pp.17-21 / 2011
  • This paper presents real-time detection of on-road succeeding vehicles based on low-level edge features and a boosted cascade of Haar-like features. First, the candidate vehicle location in an image is found using low-level horizontal edges and the symmetry characteristic of vehicles. A boosted cascade of Haar-like features is then applied to the initial hypothesized vehicle location to extract the refined vehicle location. The initial hypothesis generation using simple edge features speeds up the whole detection process, and applying a trained cascade to the hypothesized location increases detection accuracy. Experimental results on real-world road scenarios, with a processing speed of up to 27 frames per second for 720×480-pixel images, are presented.
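A sketch of the two-stage idea above: a cheap horizontal-edge hypothesis followed by a trained Haar cascade on the hypothesized region, assuming OpenCV; the cascade file name is a placeholder (OpenCV ships no standard vehicle cascade), and the symmetry check is omitted:

    import cv2
    import numpy as np

    def horizontal_edge_strength(gray):
        # strong horizontal edges (bumper/shadow line) hint at rows containing vehicles
        sobel_y = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        return np.abs(sobel_y).sum(axis=1)

    def detect(gray, cascade_path='vehicle_cascade.xml'):
        rows = horizontal_edge_strength(gray)
        y0 = int(rows.argmax())                         # crude hypothesis row
        roi = gray[max(0, y0 - 120):y0 + 120, :]
        cascade = cv2.CascadeClassifier(cascade_path)   # placeholder cascade file
        return cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=3)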