• Title/Summary/Keyword: 도로표지 검출 (road sign detection)


Algorithm for Speed Sign Recognition Using Color Attributes and Selective Region of Interest (칼라 특성과 선택적 관심영역을 이용한 속도 표지판 인식 알고리즘)

  • Park, Ki Hun;Kwon, Oh Seol
    • Journal of Broadcast Engineering
    • /
    • v.23 no.1
    • /
    • pp.93-103
    • /
    • 2018
  • This paper presents a method for speed limit sign recognition in images. Conventional sign recognition methods lose recognition accuracy because they are highly sensitive and rely on repeated features. The proposed method emphasizes color attributes based on a weighted YUV color space. Moreover, recognition accuracy can be improved by extracting a local region of interest (ROI) within the candidates. The proposed method uses Haar features and an AdaBoost classifier for recognition. Experimental results confirm that the proposed algorithm is superior to conventional algorithms for various speed signs and conditions.
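
The pipeline in the abstract above (color emphasis in a weighted YUV space, candidate ROIs, then Haar features with an AdaBoost classifier) can be sketched with OpenCV roughly as follows. The channel weights, cascade file name, and detector parameters are illustrative assumptions rather than the authors' values; the cascade itself would have to be trained offline on sign samples.

```python
import cv2
import numpy as np

def emphasize_signs_yuv(bgr, w_y=0.2, w_u=0.3, w_v=1.5):
    """Build a single-channel emphasis map from a weighted YUV decomposition;
    the V (red-difference) channel gets the largest weight so the red rim of
    a speed sign stands out. Weights are illustrative, not the paper's values."""
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
    y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]
    emphasis = w_y * y + w_u * np.abs(u - 128) + w_v * np.abs(v - 128)
    return np.clip(emphasis, 0, 255).astype(np.uint8)

def detect_speed_signs(bgr, cascade_path="speed_sign_cascade.xml"):
    """Run a Haar cascade (trained offline with AdaBoost) on the emphasis map
    and return bounding boxes of speed-sign candidates."""
    cascade = cv2.CascadeClassifier(cascade_path)  # hypothetical cascade file
    return cascade.detectMultiScale(emphasize_signs_yuv(bgr),
                                    scaleFactor=1.1, minNeighbors=3)

frame = cv2.imread("road_scene.jpg")               # hypothetical test frame
for (x, y, w, h) in detect_speed_signs(frame):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```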

A Study on a Detection Method for Traffic Lights Robust to Illumination Changes Using Spotlight and MSER Region Detection (Spotlights와 Maximally Stable Extremal Regions 영역 검출 기반의 조도변화에 강인한 교통신호등 검출 방안)

  • Kim, Jong-Bae;Jiang, Ji-Woog
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2013.11a
    • /
    • pp.1709-1712
    • /
    • 2013
  • Traffic lights are installed with colors, shapes, and textures that distinguish them from the surrounding background as much as possible, so that drivers and pedestrians can see them clearly. Consequently, most existing traffic light detection studies have been based on the color and shape of the lights. However, weather, complex urban scenes, overlap with other objects, and motion blur increase the detection errors of such color- and shape-based methods. This study therefore excludes color information from the input image and instead detects spotlights, which are less sensitive to motion blur and brightness changes and remain highly visible even at long distances, to extract the brightest candidate regions from the input image. Candidate regions that remain roughly circular and whose interior color contrasts sharply with the surrounding color, a characteristic of traffic lights, are then selected using the maximally stable extremal regions (MSER) algorithm. Finally, template matching is applied to the detected regions to identify the traffic light region. Road experiments with the proposed method showed an average detection rate above 94%, with particularly high detection rates at night.
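
The three stages in this abstract (spotlight candidate extraction, MSER region detection, and template-matching verification) can be sketched with OpenCV as below. The brightness threshold, template image, matching score cutoff, and the way spotlight and MSER evidence are combined are assumptions for illustration, not the paper's actual parameters.

```python
import cv2

def spotlight_mask(gray, thresh=220):
    """Stage 1: keep only the brightest pixels as spotlight candidates
    (threshold is an illustrative assumption)."""
    _, bright = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return bright

def mser_boxes(gray):
    """Stage 2: maximally stable extremal regions, returned as bounding boxes."""
    _, boxes = cv2.MSER_create().detectRegions(gray)
    return boxes  # array of (x, y, w, h)

def is_traffic_light(gray, box, template, min_score=0.7):
    """Stage 3: verify a candidate by normalized cross-correlation template matching."""
    x, y, w, h = box
    roi = cv2.resize(gray[y:y + h, x:x + w], template.shape[::-1])
    return cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED).max() >= min_score

gray = cv2.cvtColor(cv2.imread("night_road.jpg"), cv2.COLOR_BGR2GRAY)   # hypothetical frame
template = cv2.imread("light_template.png", cv2.IMREAD_GRAYSCALE)       # hypothetical template
bright = spotlight_mask(gray)
detections = [b for b in mser_boxes(gray)
              if bright[b[1]:b[1] + b[3], b[0]:b[0] + b[2]].any()        # overlaps a spotlight
              and is_traffic_light(gray, b, template)]
```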

An Efficient Lane Detection Algorithm Based on Hough Transform and Quadratic Curve Fitting (Hough 변환과 2차 곡선 근사화에 기반한 효율적인 차선 인식 알고리즘)

  • Kwon, Hwa-Jung;Yi, June-Ho
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.12
    • /
    • pp.3710-3717
    • /
    • 1999
  • For the development of unmanned autonomous vehicles, it is essential to detect obstacles, especially vehicles, in the forward direction of navigation. In order to reliably exclude regions that do not contain obstacles and save a considerable amount of computational effort, it is often necessary to confine computation to ROIs (regions of interest). An ROI is usually chosen as the interior region of the lane. We propose a computationally simple and efficient method for lane detection based on the Hough transform and quadratic curve fitting. The proposed method first employs the Hough transform to obtain approximate locations of the lanes, and then applies quadratic curve fitting to the locations computed by the Hough transform. We have tested the proposed method on real outdoor road scenes. Experimental results show that our method gives accurate detection of straight and curved lanes and is computationally very efficient.

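A minimal sketch of the two-stage idea in the entry above: a probabilistic Hough transform supplies rough straight-line lane locations, and a quadratic curve is then fitted to the edge points near each line so curved lanes are followed. The Canny thresholds, Hough parameters, and band width are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def rough_lane_lines(gray):
    """Stage 1: edge detection plus probabilistic Hough transform
    to obtain approximate straight lane segments."""
    edges = cv2.Canny(gray, 80, 160)  # illustrative thresholds
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=20)
    return edges, ([] if lines is None else lines[:, 0])

def refine_with_quadratic(edges, line, band=10):
    """Stage 2: collect edge points within a band around the (extended) Hough
    line and fit x = a*y**2 + b*y + c, so a curved lane can be represented."""
    x1, y1, x2, y2 = line
    ys, xs = np.nonzero(edges)
    # perpendicular distance of each edge point from the straight line
    d = np.abs((y2 - y1) * xs - (x2 - x1) * ys + x2 * y1 - y2 * x1)
    d = d / (np.hypot(x2 - x1, y2 - y1) + 1e-6)
    keep = d < band
    if keep.sum() < 3:
        return None
    return np.polyfit(ys[keep], xs[keep], 2)  # coefficients (a, b, c)

gray = cv2.cvtColor(cv2.imread("road.jpg"), cv2.COLOR_BGR2GRAY)  # hypothetical image
edges, lines = rough_lane_lines(gray)
curves = [c for c in (refine_with_quadratic(edges, l) for l in lines) if c is not None]
```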

A Driving Information Centric Information Processing Technology Development Based on Image Processing (영상처리 기반의 운전자 중심 정보처리 기술 개발)

  • Yang, Seung-Hoon;Hong, Gwang-Soo;Kim, Byung-Gyu
    • Convergence Security Journal
    • /
    • v.12 no.6
    • /
    • pp.31-37
    • /
    • 2012
  • Today, the core technology of the automobile is shifting toward IT-based convergence system technology. To cope with many kinds of situations and provide convenience for drivers, various IT technologies are being integrated into the automobile system. In this paper, we propose a convergence system, called the Augmented Driving System (ADS), to provide high safety and convenience for drivers based on image information processing. Image data acquired from the imaging sensor are processed by the proposed methods to estimate the distance to the car in front and to detect lanes and traffic sign panels. In addition, a converged interface technology using a camera for gesture recognition and a microphone for speech recognition is provided. Based on this kind of system technology, car accidents can be reduced even when drivers fail to recognize dangerous situations, since the system can recognize the situation or user context and draw attention to the front view. Through the experiments, the proposed methods achieved over 90% recognition accuracy for traffic sign detection, lane detection, and distance measurement to the car in front.
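
The abstract mentions estimating the distance to the car in front from camera images; one common way to do this under a pinhole-camera assumption is d = f * W / w, where W is an assumed real vehicle width and w its width in pixels. The focal length and vehicle width below are illustrative assumptions, not values from the paper.

```python
def distance_to_front_car(box_width_px: float,
                          focal_length_px: float = 700.0,   # assumed focal length in pixels
                          real_width_m: float = 1.8) -> float:
    """Pinhole-camera range estimate: distance = f * W / w.
    All constants are illustrative assumptions, not the paper's calibration."""
    if box_width_px <= 0:
        raise ValueError("bounding-box width must be positive")
    return focal_length_px * real_width_m / box_width_px

# e.g. a detected vehicle 90 px wide would be roughly 14 m ahead
print(round(distance_to_front_car(90.0), 1))  # -> 14.0
```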

Real-time Identification of Traffic Light and Road Sign for the Next Generation Video-Based Navigation System (차세대 실감 내비게이션을 위한 실시간 신호등 및 표지판 객체 인식)

  • Kim, Yong-Kwon;Lee, Ki-Sung;Cho, Seong-Ik;Park, Jeong-Ho;Choi, Kyoung-Ho
    • Journal of Korea Spatial Information System Society
    • /
    • v.10 no.2
    • /
    • pp.13-24
    • /
    • 2008
  • Next-generation video-based car navigation is being researched to supplement the drawbacks of existing 2D-based navigation and to provide various services for safe driving. The components of this navigation system include a road object database, a road lane identification module, and a crossroad identification module. In this paper, we propose a traffic light and road sign recognition method that can be effectively exploited for crossroad recognition in video-based car navigation systems. The method uses object color information and other spatial features in the video image. The results show an average recognition rate of 90% at distances of 30-60 m for traffic lights and 97% at distances of 40-90 m for road signs. The algorithm also achieves a processing time of 46 ms/frame, which indicates its suitability for real-time processing.

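The entry above combines object color information with simple spatial features. A rough stand-in for that idea is HSV color thresholding followed by spatial filters on the resulting blobs (area, aspect ratio, position in the frame); the color ranges and thresholds below are assumptions for illustration, not the paper's model.

```python
import cv2
import numpy as np

# Illustrative HSV ranges for red signal/sign regions (assumed, not the paper's values).
RED_LOW1, RED_HIGH1 = (0, 100, 100), (10, 255, 255)
RED_LOW2, RED_HIGH2 = (170, 100, 100), (180, 255, 255)

def colored_candidates(bgr, min_area=30, max_aspect=2.0, top_fraction=0.6):
    """Color thresholding in HSV plus simple spatial filters:
    keep small, roughly square blobs in the upper part of the frame."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(RED_LOW1), np.array(RED_HIGH1)) | \
           cv2.inRange(hsv, np.array(RED_LOW2), np.array(RED_HIGH2))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    img_height = bgr.shape[0]
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = max(w, h) / max(1, min(w, h))
        if w * h >= min_area and aspect <= max_aspect and y < top_fraction * img_height:
            boxes.append((x, y, w, h))
    return boxes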

Magnetic Markers-based Autonomous Navigation System for a Personal Rapid Transit (PRT) Vehicle (PRT 차량을 위한 자기표지 기반 무인 자율주행 시스템)

  • Byun, Yeun-Sub;Um, Ju-Hwan;Jeong, Rag-Gyo;Kim, Baek-Hyun;Kang, Seok-Won
    • Journal of Digital Convergence
    • /
    • v.13 no.1
    • /
    • pp.297-304
    • /
    • 2015
  • Recently, the demand for Personal Rapid Transit (PRT) systems based on autonomous navigation has been increasing. Accordingly, the applicability of PRT systems on rail tracks and roadways has been widely studied. For unmanned vehicle operation without physical guideways on roadways, monitoring the position of the vehicle in real time is very important for stable, robust, and reliable guidance of an autonomous vehicle. The Global Positioning System (GPS) has been used commercially for vehicle positioning; however, it cannot be applied in environments such as tunnels or the interiors of buildings. This study presents a PRT navigation system based on magnetic marker reference sensing, which can overcome these environmental restrictions, together with the vehicle dynamics model for its H/W configuration. In addition, the design of control S/W dedicated to unmanned operation of a PRT vehicle and its prototype implementation for experimental validation on a pilot network were successfully achieved.
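
The abstract describes real-time positioning from magnetic marker sensing. A heavily simplified 1-D illustration of that general idea is shown below: the along-track position is dead-reckoned from speed and reset to the surveyed marker position whenever a marker crossing is sensed. The marker spacing and the structure of the localizer are assumptions for illustration, not the paper's design.

```python
from dataclasses import dataclass

@dataclass
class MarkerLocalizer:
    """1-D along-track position estimate: integrate wheel odometry and snap to
    the surveyed marker position whenever a magnetic marker is sensed."""
    marker_spacing_m: float = 2.0     # assumed spacing of magnetic markers
    position_m: float = 0.0
    next_marker_index: int = 1

    def predict(self, speed_mps: float, dt_s: float) -> float:
        """Dead-reckoning step from the measured vehicle speed."""
        self.position_m += speed_mps * dt_s
        return self.position_m

    def correct(self) -> float:
        """Marker crossing reported by the magnetic sensor: reset the estimate
        to the marker's known along-track position."""
        self.position_m = self.next_marker_index * self.marker_spacing_m
        self.next_marker_index += 1
        return self.position_m
```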

Road Image Enhancement Method for Vision-based Intelligent Vehicle (비전기반 지능형 자동차를 위한 도로 주행 영상 개선 방법)

  • Kim, Seunggyu;Park, Daeyong;Choi, Yeongwoo
    • Korean Journal of Cognitive Science
    • /
    • v.25 no.1
    • /
    • pp.51-71
    • /
    • 2014
  • This paper presents an image enhancement method for real road traffic scenes. Images captured by a camera on the car cannot maintain color constancy as illumination or weather changes. In real environments, these problems become worse in backlit conditions and at night, which makes vision-based intelligent vehicle applications more difficult. Applying existing image enhancement methods without considering the position and intensity of the light source and their geometric relations can even deteriorate image quality. Thus, this paper presents a fast and effective image enhancement method resembling the human cognitive system, which consists of 1) image preprocessing, 2) color-contrast evaluation, and 3) alpha blending of the over/under-estimated image with the preprocessed image. An input image is first preprocessed by gamma correction and then enhanced by an Automatic Color Enhancement (ACE) method. Finally, the preprocessed image and the ACE image are blended to improve image visibility. The proposed method shows drastically enhanced results visually and improves traffic sign detection performance in vision-based intelligent vehicle applications.
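
The three-step pipeline in this abstract (gamma preprocessing, an ACE-style enhancement, alpha blending of the two results) can be sketched as follows. CLAHE is used here only as a simple stand-in for the ACE step, and the gamma and alpha values are assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def gamma_correct(bgr, gamma=1.5):
    """Preprocessing: gamma correction via a lookup table (gamma is assumed)."""
    lut = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)],
                   dtype=np.uint8)
    return cv2.LUT(bgr, lut)

def contrast_enhance(bgr):
    """Stand-in for the ACE step: CLAHE on the L channel of the Lab color space."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

def enhance_road_image(bgr, alpha=0.6):
    """Alpha-blend the contrast-enhanced image with the gamma-corrected one."""
    pre = gamma_correct(bgr)
    ace_like = contrast_enhance(pre)
    return cv2.addWeighted(ace_like, alpha, pre, 1.0 - alpha, 0)
```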

Information Fusion of Cameras and Laser Radars for Perception Systems of Autonomous Vehicles (영상 및 레이저레이더 정보융합을 통한 자율주행자동차의 주행환경인식 및 추적방법)

  • Lee, Minchae;Han, Jaehyun;Jang, Chulhoon;Sunwoo, Myoungho
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.1
    • /
    • pp.35-45
    • /
    • 2013
  • An autonomous vehicle requires more robust perception systems than the conventional perception systems of intelligent vehicles. In particular, single-sensor-based perception systems using cameras and laser radar sensors, the most representative perception sensors, have been widely studied; these sensors provide object information such as distance and object features. The distance information of the laser radar sensor is used for road environment perception of road structures, vehicles, and pedestrians. The image information of the camera is used for visual recognition of lanes, crosswalks, and traffic signs. However, single-sensor-based perception systems suffer from false positives and false negatives caused by sensor limitations and road environments. Accordingly, information fusion systems are essential to ensure the robustness and stability of perception systems in harsh environments. This paper describes a perception system for autonomous vehicles that performs information fusion to recognize road environments. In particular, vision and laser radar sensors are fused to detect lanes, crosswalks, and obstacles. The proposed perception system was validated on various roads and under various environmental conditions with an autonomous vehicle.
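
A common building block for the camera/laser-radar fusion described in this abstract is projecting range points into the image so that image detections can be tagged with distances. The sketch below does this with a pinhole model; the intrinsic matrix and lidar-to-camera extrinsics are placeholders, not the paper's calibration, and the median-depth assignment is one simple fusion choice among many.

```python
import numpy as np

# Placeholder calibration: intrinsic matrix K and lidar-to-camera extrinsics [R | t].
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, -0.5, 0.0])   # lidar assumed mounted 0.5 m above the camera

def project_lidar_to_image(points_xyz: np.ndarray):
    """Project 3-D laser-radar points (N x 3, lidar frame) into the image plane;
    return pixel coordinates and the depth of each kept point."""
    cam = (R @ points_xyz.T).T + t          # transform into the camera frame
    in_front = cam[:, 2] > 0.5              # keep points ahead of the camera
    cam = cam[in_front]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective division
    return uv, cam[:, 2]

def depth_for_detection(box, uv, depths):
    """Fusion step: give an image detection (x, y, w, h) the median depth of the
    projected points that fall inside its bounding box."""
    x, y, w, h = box
    inside = (uv[:, 0] >= x) & (uv[:, 0] < x + w) & (uv[:, 1] >= y) & (uv[:, 1] < y + h)
    return float(np.median(depths[inside])) if inside.any() else None
```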