• Title/Summary/Keyword: Night vehicle detection


Implementation and Evaluation of Multiple Target Algorithm for Automotive Radar Sensor (차량용 레이더 센서를 위한 다중 타겟 알고리즘의 구현과 평가)

  • Ryu, In-hwan;Won, In-Su;Kwon, Jang-Woo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.16 no.2 / pp.105-115 / 2017
  • Conventional traffic detection sensors such as loop detectors and image sensors are expensive to install and maintain, require different detection algorithms for day and night, and have the disadvantage that their detection rate varies widely with the weather. In contrast, millimeter-wave radar is unaffected by bad weather and delivers consistent detection performance regardless of day or night. In addition, no traffic blocking is needed for installation and maintenance, and multiple vehicles can be detected simultaneously. In this study, a multi-target detection algorithm for a radar sensor with these advantages was devised and implemented by extending a conventional single-target detection algorithm. We evaluated it and obtained meaningful results.
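The abstract does not spell out the detection algorithm itself. As a rough illustration only, a cell-averaging CFAR (constant false alarm rate) detector is a standard way to pull multiple targets out of a radar range profile in one pass; the sketch below is a generic textbook example, not the authors' implementation, and every parameter value is an assumption.

```python
def ca_cfar(profile, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR over a 1-D radar range profile: a cell is a
    detection when its power exceeds `scale` times the mean of the
    training cells around it (guard cells excluded)."""
    n = len(profile)
    detections = []
    for i in range(n):
        training = [profile[j]
                    for j in range(i - guard - train, i + guard + train + 1)
                    if 0 <= j < n and abs(j - i) > guard]
        if training and profile[i] > scale * (sum(training) / len(training)):
            detections.append(i)
    return detections
```

With a flat noise floor and two separated peaks, both targets come out in a single sweep, which is the multi-target property the study is after.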

Robust vehicle Detection in Rainy Situation with Adaboost Using CLAHE (우천 상황에 강인한 CLAHE를 적용한 Adaboost 기반 차량 검출 방법)

  • Kang, Seokjun;Han, Dong Seog
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.12 / pp.1978-1984 / 2016
  • This paper proposes a robust vehicle detection method using Adaboost and CLAHE (Contrast-Limited Adaptive Histogram Equalization). We propose two ideas for detecting vehicles effectively. First, rainy and nighttime conditions are identified by converting RGB values to brightness. Second, taillights are detected and a ROI (Region Of Interest) designated using CLAHE. We then select Adaboost after comparing it with traditional vehicle detection methods such as the GMM (Gaussian Mixture Model) and optical flow. The proposed method achieves precision and recall of 0.85 and 0.87, better than the GMM and optical flow.
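CLAHE's core idea — clip the histogram before equalizing so the contrast gain is bounded — can be illustrated in a simplified, single-tile form. Real CLAHE works on image tiles with bilinear blending between them; this global version and its parameter values are assumptions made purely for illustration.

```python
def clipped_hist_equalize(pixels, clip_limit, levels=256):
    """Histogram equalization with a contrast limit: clip each histogram
    bin at `clip_limit`, spread the clipped excess over all bins, then
    map pixels through the resulting cumulative distribution."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    excess = 0
    for v in range(levels):
        if hist[v] > clip_limit:
            excess += hist[v] - clip_limit
            hist[v] = clip_limit
    bonus = excess / levels          # uniform redistribution of the excess
    cdf, total = [], 0.0
    for h in hist:
        total += h + bonus
        cdf.append(total)
    n, cmin = cdf[-1], cdf[0]
    span = max(n - cmin, 1e-9)
    return [round((cdf[p] - cmin) / span * (levels - 1)) for p in pixels]
```

Lowering `clip_limit` tames the stretch that plain equalization would apply to a dominant gray level — useful on dark, rainy frames where noise would otherwise be amplified.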

A Study on Drowsy Driving Detection using SURF (SURF를 이용한 졸음운전 검출에 관한 연구)

  • Choi, Na-Ri;Choi, Ki-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.11 no.4 / pp.131-143 / 2012
  • In this paper, we propose a drowsy-driver detection system with a novel eye-state detection method that adapts to varied in-vehicle conditions such as eyeglasses and lighting, using SURF (Speeded-Up Robust Features), which quickly extracts local features from images. Eye-state detection performance is further improved by building three individual eye-state templates for each driver using Bayesian inference. Experiments under various conditions yield average detection rates of 98.1% in the daytime and 96.1% at night, and 97.8% on the public ZJU database, showing that the proposed method outperforms the current state of the art.
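SURF itself needs a full descriptor pipeline, but the template side of the idea — score a candidate eye patch's feature vector against per-driver eye-state templates and pick the best match — can be sketched with normalized cross-correlation. The feature vectors and template values below are invented for illustration; they are not SURF descriptors.

```python
def ncc(a, b):
    """Normalized cross-correlation between two equal-length vectors."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def classify_eye_state(feature, templates):
    """Return the name of the best-matching per-driver template
    (e.g. 'open' / 'half' / 'closed', the three states in the abstract)."""
    return max(templates, key=lambda name: ncc(feature, templates[name]))
```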

Lane Detection System Based on Vision Sensors Using a Robust Filter for Inner Edge Detection (차선 인접 에지 검출에 강인한 필터를 이용한 비전 센서 기반 차선 검출 시스템)

  • Shin, Juseok;Jung, Jehan;Kim, Minkyu
    • Journal of Sensor Science and Technology / v.28 no.3 / pp.164-170 / 2019
  • In this paper, a lane detection and tracking algorithm based on vision sensors, employing a filter robust to inner edge detection, is proposed for developing a lane departure warning system (LDWS). The lateral offset is computed precisely by applying the proposed inner-edge filter within the region of interest. Compared with an existing algorithm in terms of the lateral-offset-based warning alarm occurrence time, the proposed algorithm shows an average error of approximately 15 ms. Tests verifying that a warning is generated when the driver departs from a lane show an average accuracy of approximately 94%. Additionally, the proposed LDWS was implemented as an embedded system, mounted on a test vehicle, and driven for approximately 100 km to obtain experimental results. The average lane detection rates in the daytime and at night are approximately 97% and 96%, respectively, and the embedded system processes approximately 12 frames per second.
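As a rough sketch of inner-edge-based offset measurement (not the paper's actual filter): in a single image row, each lane mark appears as a rise/fall gradient pair, and the lateral offset is the midpoint of the two inner edges relative to the image center. The threshold and geometry below are assumptions.

```python
def inner_lane_edges(row, thresh=50):
    """In one image row, treat each bright lane mark as a rise/fall
    gradient pair, and return the two edges facing the lane centre:
    the fall edge of the leftmost mark, the rise edge of the rightmost."""
    edges = []
    for x in range(1, len(row)):
        d = row[x] - row[x - 1]
        if d > thresh:
            edges.append(("rise", x))
        elif d < -thresh:
            edges.append(("fall", x))
    stripes, i = [], 0
    while i < len(edges) - 1:
        if edges[i][0] == "rise" and edges[i + 1][0] == "fall":
            stripes.append((edges[i][1], edges[i + 1][1]))
            i += 2
        else:
            i += 1
    if len(stripes) < 2:
        return None
    return stripes[0][1], stripes[-1][0]

def lateral_offset(row, thresh=50):
    """Lane-centre position minus image centre, in pixels."""
    found = inner_lane_edges(row, thresh)
    if found is None:
        return None
    inner_left, inner_right = found
    return (inner_left + inner_right) / 2 - (len(row) - 1) / 2
```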

Development of Video-Detection Integration Algorithm on Vehicle Tracking (트래킹 기반 영상검지 통합 알고리즘 개발)

  • Oh, Jutaek;Min, Junyoung;Hu, Byungdo;Hwang, Bohee
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.5D / pp.635-644 / 2009
  • Image processing in outdoor environments is highly sensitive and tends to lose accuracy when conditions change rapidly. Therefore, to calculate accurate traffic information with a traffic monitoring system, we must handle shadow removal at transition times, distortion from vehicle headlights at night, noise from rain, snow, and fog, and occlusion. In this research, we developed a system that measures traffic volume, speed, and time occupancy using image processing under a variety of changing outdoor conditions. The system was tested outdoors at the Gonjiam test site, which is managed by the Korea Institute of Construction Technology (www.kict.re.kr) for performance testing. We evaluated traffic volume, speed, and occupancy time on 4 lanes (2 upstream and 2 downstream) from 16 to 18 December 2008, comparing the image-processing results against radar detector data used as ground truth. The evaluation showed that traffic volume, speed, and time occupancy reach approximately 92-97% accuracy across all periods (day, night, sunrise, sunset).
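One common building block for such a system — not necessarily the authors' exact method — is a running-average background model whose difference against the current frame yields a foreground mask; the slow update helps absorb the gradual lighting changes at sunrise and sunset that the abstract identifies as problem periods. The update rate and threshold here are illustrative.

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running average: slow updates absorb gradual lighting
    changes without absorbing moving vehicles into the background."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Pixels that differ strongly from the background are foreground."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]
```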

Ensemble Deep Network for Dense Vehicle Detection in Large Image

  • Yu, Jae-Hyoung;Han, Youngjoon;Kim, JongKuk;Hahn, Hernsoo
    • Journal of the Korea Society of Computer and Information / v.26 no.1 / pp.45-55 / 2021
  • This paper proposes an algorithm for efficiently detecting dense, small vehicles in large images. It consists of two ensemble deep-learning networks based on a coarse-to-fine method, and detects vehicles precisely within selected sub-images. In the coarse step, each deep-learning network individually produces a voting space; the voting spaces are combined into a voting map, from which sub-regions are selected. In the fine step, the sub-regions selected in the coarse step are passed to a final deep-learning network. Sub-regions are defined by dynamic windows; in this paper, a pre-defined mapping table defines the dynamic windows for perspective road images. The identity of a vehicle moving across sub-regions is determined by the closest center point at the bottom of the detected vehicle's bounding box, and the vehicle is tracked by its box information over consecutive images. The proposed algorithm was evaluated for detection performance and real-time cost using day and night images captured by roadside CCTV.
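The coarse step as described — per-network voting spaces combined into a voting map, then thresholded to pick sub-regions for the fine network — can be sketched on toy grids. The grid size and vote threshold are assumptions for illustration.

```python
def voting_map(spaces):
    """Combine the voting spaces of several detector networks cell-wise."""
    h, w = len(spaces[0]), len(spaces[0][0])
    return [[sum(s[y][x] for s in spaces) for x in range(w)] for y in range(h)]

def select_subregions(vmap, min_votes=2):
    """Coarse step: keep only cells where enough networks agree; the
    fine network then runs on these sub-regions alone."""
    return [(y, x) for y, row in enumerate(vmap)
            for x, v in enumerate(row) if v >= min_votes]
```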

Vehicle Detection and Ship Stability Calculation using Image Processing Technique (영상처리기법을 활용한 차량 검출 및 선박복원성 계산)

  • Kim, Deug-Bong;Heo, Jun-Hyeog;Kim, Ga-Lam;Seo, Chang-Beom;Lee, Woo-Jun
    • Journal of the Korean Society of Marine Environment & Safety / v.27 no.7 / pp.1044-1050 / 2021
  • After the occurrence of several passenger ship accidents in Korea, various systems are being developed for passenger ship safety management. A total of 162 passenger ships operate along the coast of Korea, of which 105 (65%) are car-ferries with open vehicle decks. A car-ferry typically follows a route calling at 2 to 4 islands. Safety inspections at the departure point (home port) are carried out by the crew, the operation supervisor of the operation management office, and the maritime safety supervisor; in some cases, self-inspections are carried out at layovers. As with any system, there are institutional and practical limitations. To address these, this study suggests a method of detecting vehicles using image processing and linking the result to ship stability calculations. For vehicle detection, a difference-image method and a machine-learning method were used. However, these methods could not identify vehicles when the camera was backlit, such as at sunset or at night, owing to strong background lighting from the pier and the ship. Securing sufficient image data and upgrading the program appear necessary for stable image processing.
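The stability side of the linkage can be illustrated with the standard loading formula: each detected vehicle contributes a weight moment that shifts the ship's center of gravity (KG), and the metacentric height GM = KM - KG must stay positive. The numbers below are purely illustrative, not from the paper.

```python
def kg_after_loading(displacement, kg, vehicles):
    """New height of the centre of gravity (KG) after loading vehicles;
    `vehicles` is a list of (weight_t, height_above_keel_m) tuples,
    e.g. a weight estimate per detected vehicle."""
    moment = displacement * kg + sum(w * z for w, z in vehicles)
    weight = displacement + sum(w for w, _ in vehicles)
    return moment / weight

def metacentric_height(km, kg):
    """GM = KM - KG; GM > 0 is the basic intact-stability criterion."""
    return km - kg
```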

Development of A Multi-sensor Fusion-based Traffic Information Acquisition System with Robust to Environmental Changes using Mono Camera, Radar and Infrared Range Finder (환경변화에 강인한 단안카메라 레이더 적외선거리계 센서 융합 기반 교통정보 수집 시스템 개발)

  • Byun, Ki-hoon;Kim, Se-jin;Kwon, Jang-woo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.16 no.2 / pp.36-54 / 2017
  • The purpose of this paper is to develop a multi-sensor fusion-based traffic information acquisition system that is robust to environmental changes. By combining the characteristics of each sensor, it is more robust to environmental changes than a video detector alone; moreover, it is unaffected by the time of day or night and has a lower maintenance cost than an inductive-loop traffic detector. This is accomplished by synthesizing object tracking information from a radar, vehicle classification information from a video detector, and reliable object detections from an infrared range finder. To prove the effectiveness of the proposed system, we conducted experiments for 6 hours a day over 5 days, in the daytime and early evening, on a pedestrian-accessible road. According to the experimental results, the system attains 88.7% classification accuracy and a 95.5% vehicle detection rate. If the parameters of the system are optimized to adapt to changes in the experimental environment, it is expected to contribute to the advancement of ITS.
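A minimal sketch of the fusion idea — attaching video-detector classifications to radar tracks by nearest-neighbor gating — is shown below. The 1-D positions, gate size, and data layout are all assumptions, not the paper's design.

```python
def fuse(radar_tracks, camera_objects, gate=2.0):
    """Attach the nearest camera classification to each radar track;
    a track with no camera object inside the gate stays 'unknown'.
    radar_tracks: (track_id, position); camera_objects: (position, label)."""
    fused = []
    for track_id, rpos in radar_tracks:
        label, best = "unknown", gate
        for cpos, clabel in camera_objects:
            d = abs(rpos - cpos)
            if d < best:
                label, best = clabel, d
        fused.append((track_id, rpos, label))
    return fused
```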

A High-performance Lane Recognition Algorithm Using Word Descriptors and A Selective Hough Transform Algorithm with Four-channel ROI (다중 ROI에서 영상 화질 표준화 및 선택적 허프 변환 알고리즘을 통한 고성능의 차선 인식 알고리즘)

  • Cho, Jae-Hyun;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.2 / pp.148-161 / 2015
  • The use of cameras in vehicles is increasing with the growth of the automotive market, and the importance of image processing techniques is expanding accordingly. In particular, the Lane Departure Warning System (LDWS) and related technologies are under development in various fields. In this paper, to improve the lane recognition rate over conventional methods, we extract a Normalized Luminance Descriptor value and a Normalized Contrast Descriptor value, and adjust the image gamma to normalize image quality using the correlation between the two extracted values. We then apply the Hough transform with optimized accumulator cells to a four-channel ROI. The proposed algorithm was verified at 27 frames/s and 640×480 resolution. As a result, the lane recognition rate averaged above 97% in day, night, and late-night road environments. The proposed method also recognizes lanes successfully in sections with curves or many lane boundaries.
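The Hough step can be sketched in its basic accumulator form; running it separately on the edge points of each of the four ROI channels would give the selective, per-ROI variant the abstract describes. This toy version (integer rho bins, 180 theta bins) is an illustration, not the authors' optimized accumulator.

```python
import math

def hough_strongest_line(points, n_theta=180):
    """Vote every edge point into (rho, theta) accumulator cells and
    return the fullest cell, i.e. the dominant line through the points."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return max(acc, key=acc.get)
```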

Video Based Tail-Lights Status Recognition Algorithm (영상기반 차량 후미등 상태 인식 알고리즘)

  • Kim, Gyu-Yeong;Lee, Geun-Hoo;Do, Jin-Kyu;Park, Keun-Soo;Park, Jang-Sik
    • The Journal of the Korea institute of electronic communication sciences / v.8 no.10 / pp.1443-1449 / 2013
  • Automatic detection of vehicles in front is an integral component of many advanced driver-assistance systems, such as collision mitigation, automatic cruise control, and automatic head-lamp dimming. Day or night, tail-lights play an important role in detecting vehicles in front and recognizing their driving status. However, some drivers do not know the status of their vehicle's tail-lights, so it is necessary to inform drivers of that status automatically. In this paper, a method for recognizing tail-light status based on video processing and recognition technology is proposed. Background estimation, optical flow, and Euclidean distance are used to detect vehicles entering a tollgate. A saliency map is then used to detect the tail-lights and recognize their status in the Lab color space. Experiments with tollgate videos show that the proposed method can be used to report tail-light status.
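A crude stand-in for the Lab-based lamp test — skipping a full RGB-to-Lab conversion — is to call a lamp lit when enough ROI pixels are both bright and strongly red, with redness loosely approximating the +a* axis of the Lab space the paper uses. All thresholds below are assumptions.

```python
def taillight_on(roi_pixels, red_thresh=1.5, lum_thresh=60, min_count=3):
    """Decide lamp state from (r, g, b) ROI pixels: a lit tail-light is
    both bright and strongly red; redness = R / (G + B) crudely mimics
    the +a* channel used in the paper's Lab-space analysis."""
    lit = 0
    for r, g, b in roi_pixels:
        luminance = 0.299 * r + 0.587 * g + 0.114 * b
        redness = r / max(g + b, 1)
        if luminance > lum_thresh and redness > red_thresh:
            lit += 1
    return lit >= min_count
```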