• Title/Summary/Keyword: Advanced driver assistance systems


A Study on the Improvement of the ADAS System to Prevent Unprotected Left-Turn Accidents (비보호 좌회전 사고 예방을 위한 ADAS 시스템 개선 방안의 관한 연구)

  • Jun-Young Kim; Kyung-Jun Kim; Se-Young Park; Shin-Hyoung Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.940-942 / 2023
  • According to traffic accident statistics, the accident rate on roads in unprotected zones is about 30% higher than on ordinary roads. Existing advanced driver assistance systems (ADAS: Advanced Driver Assistance Systems) have limitations when applied to unprotected zones, where a wide variety of accident scenarios exist. To address this problem, this paper extends existing ADAS functions and develops a system that uses AI analysis to visually indicate to the driver whether it is safe to proceed in unprotected zones, where prediction and judgment are difficult. By providing warnings and assistance to the driver, the system can help prevent traffic accidents in unprotected zones.

Vehicle Classification and Tracking based on Deep Learning (딥러닝 기반의 자동차 분류 및 추적 알고리즘)

  • Hyochang Ahn; Yong-Hwan Lee
    • Journal of the Semiconductor & Display Technology / v.22 no.3 / pp.161-165 / 2023
  • One of the difficult tasks in an autonomous driving system is detecting road lanes and objects within the road boundaries. Detecting and tracking vehicles plays an important role in providing key information for advanced driver assistance systems, such as identifying road traffic conditions and crime situations. This paper proposes a deep-learning-based vehicle detection scheme for classifying and tracking vehicles in complex and diverse environments. We use a modified YOLO network as the object detector and polynomial regression as the object tracker in driving video. The experimental results show that the YOLO-based deep learning model performs fast, accurate, and robust vehicle tracking in various environments compared to traditional methods.

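The detector-plus-polynomial-regression pipeline summarized in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the detection history stands in for per-frame YOLO outputs, and the polynomial degree is an arbitrary choice.

```python
# Sketch: track a detected vehicle's center across frames with polynomial
# regression. The detection history is a placeholder for per-frame YOLO
# outputs; only the regression-based tracking step is shown concretely.
import numpy as np

def fit_trajectory(times, xs, ys, degree=2):
    """Least-squares polynomial fit of the center path x(t), y(t)."""
    return np.polyfit(times, xs, degree), np.polyfit(times, ys, degree)

def predict_position(px, py, t_next):
    """Extrapolate the fitted trajectory to the next frame."""
    return np.polyval(px, t_next), np.polyval(py, t_next)

# Usage with made-up detections: (frame index, center x, center y)
history = [(0, 320, 240), (1, 325, 238), (2, 331, 236), (3, 338, 233)]
t, x, y = map(np.array, zip(*history))
px, py = fit_trajectory(t, x, y)
print(predict_position(px, py, 4))  # predicted center in frame 4
```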

A Study on Distributed Message Allocation Method of CAN System with Dual Communication Channels (중복 통신 채널을 가진 CAN 시스템에서 분산 메시지 할당 방법에 관한 연구)

  • Kim, Man-Ho; Lee, Jong-Gap; Lee, Suk; Lee, Kyung-Chang
    • Journal of Institute of Control, Robotics and Systems / v.16 no.10 / pp.1018-1023 / 2010
  • The CAN (Controller Area Network) protocol is the most dominant protocol for in-vehicle networking because it provides bounded transmission delay among ECUs (Electronic Control Units) at data rates between 125 kbps and 1 Mbps. Many automotive companies have chosen CAN for in-vehicle networks such as the chassis network because of its excellent communication characteristics. However, the increasing number of ECUs and the need for more intelligent functions such as ADASs (Advanced Driver Assistance Systems) and IVISs (In-Vehicle Information Systems) require a network with more capacity and real-time QoS (Quality of Service). As one approach to enhancing the network capacity of a CAN system, this paper introduces a CAN system with dual communication channels and presents a distributed message allocation method that assigns each message to the more appropriate channel based on the forecast traffic of each channel. Finally, an experimental testbed using commercial off-the-shelf microcontrollers with two CAN protocol controllers was used to demonstrate the feasibility of the dual-channel CAN system with the distributed message allocation method.
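
As a rough illustration of the allocation idea described above, the sketch below assigns each periodic message to whichever channel currently has the lower forecast utilization. The frame-time model and the simple additive load forecast are assumptions, not the paper's forecasting method.

```python
# Illustrative sketch: each new periodic message goes to the channel whose
# forecast bus utilization is currently lower. The CAN frame time model
# (fixed 47-bit overhead, no bit stuffing) is a simplification.

def frame_time_s(dlc, bitrate=500_000, overhead_bits=47):
    """Approximate transmission time of one CAN data frame in seconds."""
    return (overhead_bits + 8 * dlc) / bitrate

class DualChannelAllocator:
    def __init__(self):
        self.forecast_load = {0: 0.0, 1: 0.0}  # forecast utilization per channel

    def allocate(self, dlc, period_s):
        """Assign a periodic message (dlc bytes every period_s) to a channel."""
        utilization = frame_time_s(dlc) / period_s
        channel = min(self.forecast_load, key=self.forecast_load.get)
        self.forecast_load[channel] += utilization
        return channel

# Usage: four periodic messages spread over the two channels
allocator = DualChannelAllocator()
for dlc, period in [(8, 0.01), (8, 0.02), (4, 0.005), (2, 0.1)]:
    print(allocator.allocate(dlc, period), allocator.forecast_load)
```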

Study on Effectiveness of Accident Reduction Depending on Autonomous Emergency Braking System (AEB 장치에 대한 사고경감 효과 연구)

  • Choi, JunYoung; Kang, SeungSu; Park, EunAh; Lee, KangWon; Lee, SiHun; Cho, SooKang; Kwon, YoungGil
    • Journal of Auto-vehicle Safety Association / v.11 no.2 / pp.6-10 / 2019
  • This paper describes the accident-reduction effectiveness of vehicles equipped with AEB, based on accident data from Korea. During the statistical period, we used the number of vehicles covered by auto insurance and the number of accidents. To isolate the reduction effect for accidents caused by driver carelessness, the analysis was limited to Physical Damage Coverage, which covers the cost of repairing or replacing a vehicle damaged through the driver's fault. Because of the Personal Information Protection Law, it was not possible to compare the same vehicle by Vehicle Identification Number in this study; similar vehicles were used instead, so the comparison and analysis have inherent limits. We found that the accident-reduction effect differed by vehicle class, but overall the number of accidents decreased when the vehicle was equipped with an AEB system. Domestic research on the accident-reduction effect of AEB is not yet active, so future work should expand the set of studied models and analyze the effects under various conditions such as driver age, occupation, and gender.
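
The kind of comparison the study performs can be illustrated with a trivially small calculation: accident frequency per insured vehicle with and without AEB, and the implied reduction rate. The counts below are invented placeholders, not the paper's insurance data.

```python
# Placeholder arithmetic for an AEB accident-reduction comparison.
# The counts are hypothetical, not the paper's data.
def accident_rate(accidents, insured_vehicles):
    return accidents / insured_vehicles

rate_without_aeb = accident_rate(1_200, 10_000)  # hypothetical non-AEB fleet
rate_with_aeb = accident_rate(850, 10_000)       # hypothetical AEB fleet
reduction = (rate_without_aeb - rate_with_aeb) / rate_without_aeb
print(f"implied accident reduction with AEB: {reduction:.1%}")
```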

Filtering-Based Method and Hardware Architecture for Drivable Area Detection in Road Environment Including Vegetation (초목을 포함한 도로 환경에서 주행 가능 영역 검출을 위한 필터링 기반 방법 및 하드웨어 구조)

  • Kim, Younghyeon; Ha, Jiseok; Choi, Cheol-Ho; Moon, Byungin
    • KIPS Transactions on Software and Data Engineering / v.11 no.1 / pp.51-58 / 2022
  • Drivable area detection, one of the main functions of advanced driver assistance systems, means detecting the area where a vehicle can drive safely. It is closely related to driver safety and requires high accuracy with real-time operation. To satisfy these conditions, V-disparity-based methods are widely used; they detect a drivable area by calculating the road disparity value in each row of an image. However, a V-disparity-based method can falsely detect a non-road area as road when the disparity value is inaccurate or when the disparity of an object equals the disparity of the road. In road environments that include vegetation, such as highways and country roads, the vegetation area may be falsely detected as drivable because its disparity characteristics are similar to those of the road. Therefore, this paper proposes a drivable area detection method and hardware architecture that achieve high accuracy in road environments including vegetation by reducing the false detections caused by these V-disparity characteristics. When evaluated on the 289 images of the KITTI road dataset, the proposed method shows an accuracy of 90.12% and a recall of 97.96%. In addition, when the proposed hardware architecture is implemented on an FPGA platform, it uses 8925 slice registers and 7066 slice LUTs.
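
For context, a minimal NumPy sketch of the baseline V-disparity idea the paper builds on is shown below: histogram the disparities row by row, take the dominant disparity per row as the road profile, and mark pixels close to that profile as drivable. The paper's vegetation-filtering contribution and hardware architecture are not reproduced here.

```python
# Baseline V-disparity sketch: per-row disparity histogram, dominant-bin road
# profile, and a tolerance-based drivable mask.
import numpy as np

def v_disparity(disp, max_d=128):
    """Row-wise disparity histogram: one row of bins per image row."""
    h, _ = disp.shape
    vd = np.zeros((h, max_d), dtype=np.int32)
    for r in range(h):
        vals = disp[r][(disp[r] > 0) & (disp[r] < max_d)].astype(int)
        np.add.at(vd[r], vals, 1)
    return vd

def road_profile(vd):
    """Assume the road gives the dominant disparity bin in each row."""
    return vd.argmax(axis=1)

def drivable_mask(disp, profile, tol=2):
    """Pixels whose disparity is within tol of the road disparity of their row."""
    return np.abs(disp - profile[:, None]) <= tol

# Usage with a synthetic, smoothly decreasing road disparity map
disp = np.tile(np.linspace(60, 5, 100)[:, None], (1, 120)).astype(np.float32)
mask = drivable_mask(disp, road_profile(v_disparity(disp)))
print(mask.mean())  # fraction of pixels classified as drivable
```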

Development of Data Logging Platform of Multiple Commercial Radars for Sensor Fusion With AVM Cameras (AVM 카메라와 융합을 위한 다중 상용 레이더 데이터 획득 플랫폼 개발)

  • Jin, Youngseok; Jeon, Hyeongcheol; Shin, Young-Nam; Hyun, Eugin
    • IEMEK Journal of Embedded Systems and Applications / v.13 no.4 / pp.169-178 / 2018
  • Various sensors are currently used for advanced driver assistance systems. To overcome the limitations of individual sensors, sensor fusion has recently attracted attention in the field of intelligent vehicles, and vision-radar sensor fusion has become a popular approach. The typical method involves a vision sensor that recognizes targets within ROIs (Regions Of Interest) generated by radar sensors. Because AVM (Around View Monitor) cameras, with their wide-angle lenses, have limited detection performance at close range and around the edges of the field of view, exact ROI extraction from the radar sensor is very important for high-performance fusion of AVM cameras and radar sensors. To address this, we propose a sensor fusion scheme based on commercial radar modules from the vendor Delphi. First, we configured a multi-radar data logging system together with AVM cameras. We also designed radar post-processing algorithms to extract exact ROIs. Finally, using the developed hardware and software platforms, we verified the post-processing algorithm in indoor and outdoor environments.
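
A hedged sketch of the radar-to-camera ROI generation step described above: a radar detection given as range and azimuth is converted to vehicle coordinates and projected into the image to define a region of interest. The ground-plane homography and ROI size below are illustrative placeholders, not the Delphi-module post-processing used in the paper.

```python
# Radar detection (range, azimuth) -> vehicle coordinates -> image ROI.
# The homography H and ROI dimensions are made-up placeholders.
import numpy as np

def radar_to_vehicle(range_m, azimuth_rad):
    """Polar radar measurement to Cartesian x (forward), y (left), in metres."""
    return range_m * np.cos(azimuth_rad), range_m * np.sin(azimuth_rad)

def vehicle_to_image(x, y, H):
    """Project a ground-plane point into the image with a 3x3 homography H."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

def roi_around(u, v, width_px=80, height_px=60):
    """Axis-aligned ROI (x, y, w, h) centred on the projected point."""
    return int(u - width_px / 2), int(v - height_px / 2), width_px, height_px

# Usage with a made-up ground-plane-to-image homography
H = np.array([[30.0, 0.0, 320.0],
              [0.0, -15.0, 400.0],
              [0.05, 0.0, 1.0]])
x, y = radar_to_vehicle(12.0, np.deg2rad(5.0))
u, v = vehicle_to_image(x, y, H)
print(roi_around(u, v))
```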

Reinforcement Learning Strategy for Automatic Control of Real-time Obstacle Avoidance based on Vehicle Dynamics (실시간 장애물 회피 자동 조작을 위한 차량 동역학 기반의 강화학습 전략)

  • Kang, Dong-Hoon; Bong, Jae Hwan; Park, Jooyoung; Park, Shinsuk
    • The Journal of Korea Robotics Society / v.12 no.3 / pp.297-305 / 2017
  • As the development of autonomous vehicles becomes realistic, many automobile manufacturers and component suppliers aim at 'completely autonomous driving'. ADAS (Advanced Driver Assistance Systems), recently applied in production vehicles, supports the driver in lane keeping and in controlling speed and direction within a single lane under limited road conditions. Although obstacle avoidance technologies have been developed, they concentrate on simple avoidance maneuvers and do not consider control of the actual vehicle in real situations, where sudden changes of steering and speed make drivers feel unsafe. For a 'completely autonomous' vehicle that perceives its surroundings and operates by itself, the vehicle should behave the way a human driver does. In this sense, this paper establishes a strategy with which autonomous vehicles behave in a human-friendly way based on vehicle dynamics, through reinforcement learning based on Q-learning, a type of machine learning. The obstacle avoidance reinforcement learning proceeded in five simulations. The reward rule was set so that the car learns by itself from recurring events, giving the experiment an environment similar to the one in which humans drive. A driving simulator was used to verify the results of the reinforcement learning. The ultimate goal of this study is to enable autonomous vehicles to avoid obstacles in a human-friendly way when obstacles appear in their path, using control policies previously learned under various conditions through reinforcement learning.
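
The Q-learning strategy mentioned above can be illustrated with a tabular toy example: a simplified state (lane offset, obstacle ahead) and three steering actions. The environment, reward shaping, and discretization below are assumptions for illustration only, not the paper's vehicle-dynamics simulation.

```python
# Tabular Q-learning toy: learn an evasive steering action when an obstacle
# is ahead. The transition and reward models are invented for illustration.
import random
from collections import defaultdict

ACTIONS = [-1, 0, 1]            # steer left, keep lane, steer right (toy)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

Q = defaultdict(float)          # Q[(state, action)]

def step(lane_offset, obstacle_ahead, action):
    """Toy transition: hitting the obstacle is penalized heavily,
    drifting from the lane centre is penalized mildly."""
    new_offset = max(-2, min(2, lane_offset + action))
    if obstacle_ahead and action == 0:
        return (new_offset, False), -10.0            # collision
    reward = -abs(new_offset) * 0.5                  # prefer staying centred
    return (new_offset, random.random() < 0.3), reward

for _ in range(5000):
    state = (0, random.random() < 0.3)               # (lane offset, obstacle ahead)
    for _ in range(20):
        a = random.choice(ACTIONS) if random.random() < EPS else \
            max(ACTIONS, key=lambda x: Q[(state, x)])
        next_state, r = step(*state, a)
        best_next = max(Q[(next_state, x)] for x in ACTIONS)
        Q[(state, a)] += ALPHA * (r + GAMMA * best_next - Q[(state, a)])
        state = next_state

print(max(ACTIONS, key=lambda a: Q[((0, True), a)]))  # learned evasive action
```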

Traversable Region Detection Algorithm using Lane Information and Texture Analysis (차로 수 정보와 텍스쳐 분석을 활용한 주행가능영역 검출 알고리즘)

  • Hwang, Sung Soo; Kim, Do Hyun
    • Journal of Korea Multimedia Society / v.19 no.6 / pp.979-989 / 2016
  • Traversable region detection is an essential step for advanced driver assistance systems and self-driving cars, and it has typically been performed by detecting lanes in input images. The performance can be unreliable, however, when lighting is poor or there are no lane markings on the road. To solve this problem, this paper proposes an algorithm that utilizes information about the number of lanes together with texture analysis. The proposed algorithm first specifies road region candidates using the lane-count information. Among the candidates, the road region is determined as the region whose texture is homogeneous and whose boundaries coincide with texture discontinuities. The traversable region is finally detected by dividing the estimated road region according to the number of lanes. This paper combines the proposed algorithm with a lane-detection-based method to build a complete system, and simulation results show that the system detects the traversable region even on roads with poor lighting or no lane markings.
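
A rough sketch of the texture-homogeneity cue and the lane-count split described above: a region is treated as road-like when its local intensity variance is low, and the detected road span is then divided evenly by the known number of lanes. The variance measure and the threshold are illustrative assumptions, not the paper's texture analysis.

```python
# Texture homogeneity via local variance, plus an even split by lane count.
import numpy as np

def local_variance(gray, win=5):
    """Local variance via box-filtered E[x^2] - E[x]^2 (integral images)."""
    pad = win // 2
    g = np.pad(gray.astype(np.float64), pad, mode="edge")

    def box_mean(a):
        c = np.cumsum(np.cumsum(a, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))
        return (c[win:, win:] - c[:-win, win:]
                - c[win:, :-win] + c[:-win, :-win]) / win**2

    m = box_mean(g)
    return box_mean(g * g) - m * m

def homogeneous_mask(gray, var_thresh=30.0):
    """True where the local texture is nearly uniform (road-like)."""
    return local_variance(gray) < var_thresh

def split_into_lanes(road_cols, num_lanes):
    """Divide the detected road column span evenly by the lane count."""
    return np.linspace(road_cols.min(), road_cols.max(), num_lanes + 1)

# Usage with a synthetic image: a smooth road strip inside noisy texture
img = np.random.randint(0, 255, (120, 200)).astype(np.float64)
img[60:, 40:160] = 128                       # homogeneous "road"
mask = homogeneous_mask(img)
road_cols = np.where(mask[100])[0]           # road columns in one image row
print(split_into_lanes(road_cols, num_lanes=3))
```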

Dense Optical flow based Moving Object Detection at Dynamic Scenes (동적 배경에서의 고밀도 광류 기반 이동 객체 검출)

  • Lim, Hyojin; Choi, Yeongyu; Nguyen Khac, Cuong; Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications / v.11 no.5 / pp.277-285 / 2016
  • Moving object detection has been an emerging research field for various advanced driver assistance systems (ADAS) and surveillance systems. In this paper, we propose two optical-flow-based moving object detection methods for dynamic scenes. Both methods consist of three successive steps: pre-processing, foreground segmentation, and post-processing. The two methods share the same pre-processing and post-processing steps but differ in foreground segmentation. Pre-processing mainly computes an optical flow map in which each pixel holds the magnitude of its motion vector: dense optical flow is estimated with the Farneback technique, and the motion magnitude, normalized to the range 0 to 255, is assigned to each pixel of the map. In the foreground segmentation step, moving objects and background are classified using the optical flow map. Here we propose two algorithms. One applies Gaussian mixture model (GMM) based background subtraction to the optical flow map; the other is adaptive-thresholding-based foreground segmentation, which classifies each pixel as object or background by updating the threshold value column by column. Simulations show that both optical-flow-based methods achieve good object detection performance in dynamic scenes.
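
The pre-processing step and the GMM variant of the foreground segmentation can be sketched with OpenCV as below: Farneback dense optical flow, a 0-255 magnitude map, and a MOG2 background model applied to that map. Parameter values and the video path are placeholders, not the paper's tuned settings.

```python
# Dense-flow magnitude map + MOG2 background subtraction on that map.
import cv2
import numpy as np

mog2 = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)

def flow_magnitude_map(prev_gray, curr_gray):
    """Dense Farneback flow, with per-pixel motion magnitude scaled to 0-255."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def moving_object_mask(prev_gray, curr_gray):
    """Foreground = pixels whose motion deviates from the background model."""
    return mog2.apply(flow_magnitude_map(prev_gray, curr_gray))

# Usage over a driving video (file name is a placeholder)
cap = cv2.VideoCapture("driving.mp4")
ok, prev = cap.read()
if ok:
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mask = moving_object_mask(prev, gray)
        prev = gray
cap.release()
```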

Lane Detection Based on Inverse Perspective Transformation and Machine Learning in Lightweight Embedded System (경량화된 임베디드 시스템에서 역 원근 변환 및 머신 러닝 기반 차선 검출)

  • Hong, Sunghoon; Park, Daejin
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.1 / pp.41-49 / 2022
  • This paper proposes a novel lane detection algorithm based on inverse perspective transformation and machine learning for a lightweight embedded system. The inverse perspective transformation obtains a bird's-eye view of the scene from a perspective image to remove perspective effects; it requires only the internal and external parameters of the camera, without a homography matrix with 8 degrees of freedom (DoF) that maps points in one image to the corresponding points in another. To improve the accuracy and speed of lane detection in complex road environments, a machine learning algorithm is applied only to regions that pass a first classifier; applying this meaningful first classifier before the machine learning stage improves the detection speed. The first classifier is applied in the bird's-eye-view image to determine candidate lane regions, and a region that passes it is then detected more accurately through machine learning. The system was tested on driving video of a vehicle in an embedded system. The experimental results show that the proposed method works well in various road environments and meets real-time requirements: its lane detection is about 3.85 times faster than edge-based lane detection, and its detection accuracy is better than that of edge-based lane detection.
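
A minimal sketch of an intrinsics/extrinsics-based inverse perspective mapping in the spirit of the abstract: for the ground plane Z = 0, a world point (X, Y) projects to the image as p ~ K [r1 r2 t] (X, Y, 1)^T, so the ground-to-image map comes directly from the calibration K, R, t with no correspondence-based homography estimation. The scaling convention and output size below are assumptions; the paper's first classifier and machine learning stage are not reproduced.

```python
# Inverse perspective mapping from camera intrinsics K and extrinsics (R, t).
import cv2
import numpy as np

def ipm_homography(K, R, t):
    """Ground-plane (metres) to image (pixels) map; R, t map world to camera."""
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def birds_eye(image, K, R, t, out_size=(200, 400), metres_per_px=0.05):
    """Render a bird's-eye view; out_size is (width, height) in pixels."""
    H_ground_to_img = ipm_homography(K, R, t)
    # Bird's-eye pixel -> ground metres: x across (origin at bottom centre),
    # y measured upward from the bottom of the output image.
    w, h = out_size
    S = np.array([[metres_per_px, 0.0, -w * metres_per_px / 2],
                  [0.0, -metres_per_px, h * metres_per_px],
                  [0.0, 0.0, 1.0]])
    # With WARP_INVERSE_MAP, the matrix maps output pixels to source pixels.
    return cv2.warpPerspective(image, H_ground_to_img @ S, out_size,
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```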