• Title/Abstract/Keyword: Nighttime vehicle detection

Search results: 11 (processing time 0.021 s)

State Machine and Downhill Simplex Approach for Vision-Based Nighttime Vehicle Detection

  • Choi, Kyoung-Ho;Kim, Do-Hyun;Kim, Kwang-Sup;Kwon, Jang-Woo;Lee, Sang-Il;Chen, Ken;Park, Jong-Hyun
    • ETRI Journal
    • /
    • Vol. 36, No. 3
    • /
    • pp.439-449
    • /
    • 2014
  • In this paper, a novel vision-based nighttime vehicle detection approach is presented, combining state machines and downhill simplex optimization. In the proposed approach, vehicle detection is modeled as a sequential state-transition problem; that is, vehicle arrival, movement, and departure at a chosen detection area. More specifically, the number of bright pixels in a chosen area of interest, and its frame-to-frame differences, are calculated and fed into the proposed state machine to detect vehicles. After a vehicle is detected, the location of its headlights is determined using the downhill simplex method: candidate headlight positions on the detected vehicle are evaluated, allowing an optimal headlight position to be located. Simulation results are provided to show the robustness of the proposed approach for nighttime vehicle and headlight detection.
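The sequential state-transition idea can be sketched as a minimal state machine over per-frame bright-pixel counts. The thresholds and the collapse of the arrival/moving/departure sequence into a single MOVING state are illustrative assumptions, not the paper's actual design:

```python
# Minimal sketch (hypothetical thresholds) of vehicle detection as state
# transitions driven by the bright-pixel count in a fixed detection area.

ARRIVE_T = 50   # count that signals a vehicle entering (assumed value)
LEAVE_T = 10    # count below which the vehicle has left (assumed value)

def detect_vehicles(bright_counts):
    """Run the EMPTY -> MOVING -> EMPTY cycle over per-frame bright-pixel
    counts and return the number of completed vehicle passages."""
    state, vehicles = "EMPTY", 0
    for count in bright_counts:
        if state == "EMPTY" and count >= ARRIVE_T:
            state = "MOVING"       # vehicle arrival detected
        elif state == "MOVING" and count <= LEAVE_T:
            state = "EMPTY"        # vehicle departure: one full passage
            vehicles += 1
    return vehicles

counts = [0, 5, 60, 80, 75, 8, 0, 0, 55, 70, 6]
print(detect_vehicles(counts))  # → 2
```

Hysteresis between the two thresholds keeps a flickering count from toggling the state on every frame.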

Vehicle Detection at Night Based on Style Transfer Image Enhancement

  • Jianing Shen;Rong Li
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 5
    • /
    • pp.663-672
    • /
    • 2023
  • Most vehicle detection methods extract vehicle features poorly at night, which reduces their robustness; hence, this study proposes a nighttime vehicle detection method based on style-transfer image enhancement. First, a style transfer model is constructed using cycle-consistent generative adversarial networks (CycleGAN). The daytime data in the BDD100K dataset are converted into nighttime data to form a style dataset, which is then split using its labels. Finally, a YOLOv5s network detects vehicles in the nighttime images for reliable recognition of vehicle information in a complex environment. Experimental results on the BDD100K dataset show that the transferred nighttime vehicle images are clear and meet the requirements. The precision, recall, mAP@.5, and mAP@.5:.95 reached 0.696, 0.292, 0.761, and 0.454, respectively.
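The precision and recall figures quoted above come from matching predicted boxes against ground truth by intersection-over-union. A hedged, pure-Python sketch of that evaluation step (greedy matching at the 0.5 IoU cutoff of mAP@.5, not the authors' exact evaluation code) is:

```python
# Boxes are (x1, y1, x2, y2) tuples in pixel coordinates.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_recall(preds, gts, thr=0.5):
    """Greedily match each prediction to an unused ground-truth box with
    IoU >= thr; matched predictions are true positives."""
    matched, tp = set(), 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= thr:
                matched.add(i)
                tp += 1
                break
    return tp / len(preds), tp / len(gts)
```

mAP additionally averages precision over recall levels and confidence thresholds; the sketch stops at a single operating point.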

Fast Lamp Pairing-based Vehicle Detection Robust to Atypical and Turn Signal Lamps at Night

  • Jeong, Kyeong Min;Song, Byung Cheol
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 6, No. 4
    • /
    • pp.269-275
    • /
    • 2017
  • Automatic vehicle detection is a very important function for autonomous vehicles. Conventional vehicle detection approaches are based on visible-light images obtained from cameras mounted on a vehicle in the daytime. At night, however, a visible-light image is generally dark and its contrast low, which makes it difficult to recognize a vehicle. The rear lamp is virtually the only feature that remains usable in the low-light conditions of nighttime, yet conventional rear-lamp-based detection methods seldom cope with atypical lamps, such as LED lamps, or with flashing turn signals. In this paper, we detect atypical lamps by blurring the lamp area with a low-pass filter (LPF) to recover the lamp shape. We also propose to detect the flickering of a turn-signal lamp by vertically projecting the lamp area and examining the maximum difference between the two paired lamps. Experimental results show that the proposed algorithm achieves an F-measure 0.24 higher, on average, than conventional lamp-pairing-based detection methods. In addition, it runs in 6.4 ms per frame, which verifies its real-time performance.
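The turn-signal check, vertically projecting each lamp region and comparing the paired projections, can be sketched on toy binary patches. The score threshold that would declare a lamp "flashing" is omitted, since the paper's value is not given here:

```python
def vertical_projection(patch):
    """Column-wise sum of a binary lamp patch (list of 0/1 rows)."""
    return [sum(col) for col in zip(*patch)]

def turn_signal_score(left, right):
    """Maximum per-column difference between the paired lamps' vertical
    projections. A large score suggests one lamp of the pair is a
    flashing turn signal; identical lamps score 0."""
    return max(abs(a - b) for a, b in zip(vertical_projection(left),
                                          vertical_projection(right)))
```

In practice the score would be tracked over several frames, since a turn signal alternates between matching and mismatching the opposite lamp.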

Vehicle Detection for Adaptive Head-Lamp Control of Night Vision System

  • 김현구;정호열;박주현
    • 대한임베디드공학회논문지
    • /
    • Vol. 6, No. 1
    • /
    • pp.8-15
    • /
    • 2011
  • This paper presents an effective method for detecting vehicles ahead of a camera-equipped car during nighttime driving. The proposed method detects vehicles by finding their headlights and taillights using image segmentation and clustering techniques. First, to effectively extract spotlights of interest, a pre-processing step based on a camera lens filter and a labeling method is applied to the road-scene images. Second, to spatially cluster the detected lamps into vehicles, a grouping process uses a light-tracking method and locates vehicle lighting patterns. For evaluation, the method was implemented on a Da-vinci 7437 DSP board with a visible-light mono-camera and tested on urban and rural roads. In these tests, classification performance exceeded 89% precision and 94% recall in a real-time environment.
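The labeling step used to extract spotlights of interest is standard connected-component labeling. A minimal 4-connected sketch on a binary mask (not the authors' DSP implementation) is:

```python
from collections import deque

def label_components(mask):
    """Count 4-connected bright blobs in a binary image (list of 0/1 rows);
    each blob is a candidate lamp spot."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                blobs += 1
                # Flood-fill this blob with a breadth-first search.
                q = deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return blobs
```

A real pipeline would also record each blob's bounding box and centroid, which the grouping stage needs to pair lamps into vehicles.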

Vehicle Tracking System using HSV Color Space at Nighttime

  • 박호식
    • 한국정보전자통신기술학회논문지
    • /
    • Vol. 8, No. 4
    • /
    • pp.270-274
    • /
    • 2015
  • This paper proposes a nighttime vehicle detection system using the HSV color space. When monitoring vehicles at the roadside, for example for parking enforcement, extracting the license plate is important. In general, a vehicle is first detected at a distance, and then a Pan-Tilt-Zoom camera magnifies it to a fixed size to acquire an image from which the plate is extracted. Mean-Shift and Optical Flow algorithms are widely used for vehicle detection and tracking; however, although these algorithms detect and track vehicles successfully in the daytime, they struggle at night. This paper therefore converts the input image into the HSV color space, where the positions of a vehicle's headlights or taillights stand out, and uses them to locate the vehicle. Experimental results show detection rates of 93.9% for front-facing vehicles and 97.7% for rear-facing vehicles, demonstrating that the proposed method is effective for nighttime vehicle detection.
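The core observation, that lamps stand out in the HSV value channel, can be sketched with the standard-library colorsys module. The thresholds below are illustrative assumptions, not values from the paper:

```python
import colorsys

def bright_light_mask(rgb_image, v_thresh=0.9, s_thresh=0.3):
    """Flag pixels that look like lamp cores: high value (brightness) and
    low saturation, since an intense lamp saturates the sensor toward
    white. rgb_image is a list of rows of (r, g, b) tuples in 0..255."""
    mask = []
    for row in rgb_image:
        mask_row = []
        for (r, g, b) in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            mask_row.append(1 if v >= v_thresh and s <= s_thresh else 0)
        mask.append(mask_row)
    return mask
```

The separation of value from hue and saturation is exactly why HSV works better here than thresholding raw RGB intensities.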

Night-Time Blind Spot Vehicle Detection Using Visual Property of Head-Lamp

  • 정정은;김현구;박주현;정호열
    • 대한임베디드공학회논문지
    • /
    • Vol. 6, No. 5
    • /
    • pp.311-317
    • /
    • 2011
  • The blind spot is an area that the driver's visibility does not reach. When drivers change to an adjacent lane, they must pay attention to the blind spot; attempting a lane change without noticing a vehicle approaching in the blind spot can cause an accident. This paper proposes camera-based nighttime blind-spot vehicle detection. At night, headlights are the characteristic used to detect vehicles. Headlight candidates are selected by their high luminance, and then a shape filter and a Kalman filter are employed to remove noisy blobs with luminance similar to headlights. In addition, the vehicle position is estimated from the detected headlights using a virtual center line represented by an approximated first-order linear equation. Experiments show that the proposed method has relatively high detection performance in clear weather, independent of road type, but insufficient performance in rainy weather because of the many reflections from the road surface.
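The Kalman filtering mentioned for rejecting noisy blobs can be sketched as a scalar filter smoothing a headlight's horizontal position across frames. The random-walk motion model and the noise values are illustrative assumptions, not the paper's design:

```python
def kalman_track(measurements, q=1.0, r=4.0):
    """Scalar Kalman filter over a headlight's per-frame position.
    q is the process-noise variance, r the measurement-noise variance
    (both assumed values). Returns the filtered position estimates."""
    x, p = measurements[0], 1.0      # initial state and variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                       # predict: random-walk model
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update with measurement z
        p *= (1 - k)
        estimates.append(x)
    return estimates
```

A blob whose measurements keep disagreeing with the filter's prediction (large innovation z - x) is a candidate for rejection as noise.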

Night Time Leading Vehicle Detection Using Statistical Feature Based SVM

  • 정정은;김현구;박주현;정호열
    • 대한임베디드공학회논문지
    • /
    • Vol. 7, No. 4
    • /
    • pp.163-172
    • /
    • 2012
  • A driver assistance system is critical for improving the convenience and safety of driving. Several systems have already been commercialized, such as adaptive cruise control and forward collision warning. Efficient vehicle detection is very important for improving such driver assistance systems. Most existing vehicle detection systems are based on radar, which measures the distance between the host and leading (or oncoming) vehicles under various weather conditions; however, radar carries high deployment cost and suffers complexity overload when many vehicles are present. A camera-based vehicle detection technique is a good alternative because of its low cost and simple implementation. In general, nighttime vehicle detection is more complicated than daytime detection, because vehicle features such as outline and color are much harder to distinguish in a dim environment. This paper proposes a method to detect vehicles at night by analyzing the captured color space while reducing reflections and other light sources in the images. Four color spaces, namely RGB, YCbCr, normalized RGB, and Ruta-RGB, are compared and evaluated. A suboptimal threshold determined by the Otsu algorithm is applied to extract candidate taillights of leading vehicles. Statistical features such as mean, variance, skewness, kurtosis, and entropy are extracted from the candidate regions and used as the feature vector for an SVM (Support Vector Machine) classifier. Simulation results show that the proposed statistical-feature-based SVM provides relatively high leading-vehicle detection performance at various distances in variable nighttime environments.
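The Otsu thresholding step can be sketched directly from its definition, picking the gray level that maximizes between-class variance on the image histogram. This is the textbook algorithm, not the authors' exact implementation:

```python
def otsu_threshold(hist):
    """Otsu's method on a 256-bin grayscale histogram: return the
    threshold t that maximizes the between-class variance of the two
    classes {0..t} and {t+1..255}."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, 0.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]                 # weight of the dark class
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                # dark-class mean
        m1 = (total_sum - sum0) / (total - w0)  # bright-class mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a nighttime frame the bright class is dominated by lamp and reflection pixels, which is why the abstract calls the resulting threshold suboptimal and follows it with statistical features and an SVM.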

Using Optical Flow and HoG for Nighttime PDS

  • 조휘택;유현중;김형석;황젱넹
    • 한국산학기술학회논문지
    • /
    • Vol. 10, No. 7
    • /
    • pp.1556-1567
    • /
    • 2009
  • In Korea, a major automobile-producing country, the pedestrian traffic fatality rate is 5.28 per 100,000 people, about 2.5 times the OECD average. If a system that detects pedestrians and warns the driver can reduce pedestrian accidents even slightly, that alone justifies a pedestrian detection system (PDS), so interest in PDS is growing. Because the pedestrian accident rate is higher at night, major automakers are interested in nighttime pedestrian detection, but they generally adopt expensive night-vision equipment or multi-sensor rigs. This paper proposes a nighttime pedestrian detection technique for PDS that uses a single visible-spectrum monochrome camera with a wide dynamic range instead of night vision. Experiments on nighttime videos recorded in different environments show that the proposed algorithm detects pedestrians accurately whenever edges can be extracted reasonably well.
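The HoG half of the method builds gradient-orientation histograms. A minimal pure-Python sketch of one unsigned-orientation cell (central differences, border pixels skipped, no block normalization, which simplifies the full HoG descriptor) is:

```python
import math

def hog_cell(gray, bins=9):
    """Gradient-orientation histogram of one cell. gray is a list of rows
    of intensities. Orientations are unsigned (0-180 degrees) and each
    pixel votes with its gradient magnitude."""
    h, w = len(gray), len(gray[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]   # horizontal gradient
            gy = gray[y + 1][x] - gray[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180
            hist[int(ang / 180 * bins) % bins] += mag
    return hist
```

A pedestrian's mostly vertical silhouette edges concentrate votes in a few orientation bins, which is the cue the classifier exploits.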

Realization of Object Detection Algorithm and Eight-channel LiDAR Sensor for Autonomous Vehicles

  • 김주영;우승탁;유종호;박영빈;이중희;조현창;최현용
    • 센서학회지
    • /
    • Vol. 28, No. 3
    • /
    • pp.157-163
    • /
    • 2019
  • The LiDAR sensor, widely regarded as one of the most important sensors, has recently undergone active commercialization owing to the significant growth in the production of ADAS and autonomous vehicle components. LiDAR radiates a laser beam at a particular angle and acquires a three-dimensional image by measuring the time elapsed until the beam returns after being reflected. LiDAR sensors have been incorporated into various devices such as drones and robots. This study focuses on object detection and recognition employing sensor fusion. Object detection and recognition can be executed as a single function by incorporating sensors capable of recognition, such as image sensors, optical sensors, and radio-wave sensors. However, a single sensor has limitations with respect to object detection and recognition, and these can be overcome by employing multiple sensors. In this paper, the performance of an eight-channel scanning LiDAR was evaluated and an object detection algorithm based on it was implemented. Furthermore, its object detection characteristics during daytime and nighttime in a real road environment were verified. The experimental results confirm an excellent detection performance of 92.87%.
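The time-of-flight principle described above reduces to a one-line range equation: the pulse travels out and back, so the distance is half the round-trip time multiplied by the speed of light.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s):
    """Range to a reflecting target from the measured round-trip time of
    a laser pulse, in meters."""
    return C * round_trip_s / 2
```

A 200 ns round trip, for example, corresponds to a target roughly 30 m away, which illustrates the sub-nanosecond timing precision LiDAR electronics need for centimeter-level ranging.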

Vanishing Line based Lane Detection for Augmented Reality-aided Driver Induction

  • Yun, Jeong-Rok;Lee, Dong-Kil;Chun, Sung-Kuk;Hong, Sung-Hoon
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 24, No. 1
    • /
    • pp.73-83
    • /
    • 2019
  • In this paper, we propose augmented reality (AR)-based driving navigation built on a lane detection method robust to dynamic environment changes. The proposed technique uses the detected lane position as a marker, a key element for augmenting driving information. We propose a Symmetrical Local Threshold (SLT) algorithm that detects lanes robustly under dynamic illumination changes such as shadows. In addition, morphology operations and a Connected Component Analysis (CCA) algorithm minimize noise in the image, and the Region Of Interest (ROI) is defined by dividing the image with straight lines passing through several vanishing points. We also propose an augmented-reality-aided visualization method for Interchange (IC) and driving navigation using reference-point detection based on the detected lane coordinates inside and outside the ROI. Validation experiments assessed the accuracy and robustness of the proposed system under various environment changes. The average accuracy of the proposed system in daytime, nighttime, rainy, and cloudy conditions is 79.3% on 4,600 images. Results of the proposed system for AR-based IC and driving navigation are also presented. We hope this research opens a new discussion on AR-based driving navigation platforms and thereby enriches autonomous vehicle services in the near future.
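A vanishing point of the kind used to anchor the ROI can be sketched as the intersection of two detected lane borders modeled as slope/intercept lines. This is a simplification for illustration; the paper works with several vanishing points and image-space region division:

```python
def line_intersection(l1, l2):
    """Intersection of two image lines given as (slope, intercept)
    pairs. With the left and right lane borders as input, the result
    approximates the vanishing point. Returns None for parallel lines."""
    (m1, b1), (m2, b2) = l1, l2
    if m1 == m2:
        return None                  # parallel: no vanishing point
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1
```

Because parallel road edges converge to the same image point under perspective projection, the intersection stays stable frame to frame and makes a usable anchor for the ROI.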