• Title/Summary/Keyword: Advanced Driver Assistance Systems


Advanced Channel Estimation Schemes Using CDP based Updated Matrix for IEEE802.11p/WAVE Systems

  • Park, Choeun;Ko, Kyunbyoung
    • International Journal of Contents
    • /
    • v.14 no.1
    • /
    • pp.39-44
    • /
    • 2018
  • Today's cars have developed into intelligent vehicles that combine advanced control equipment and IT technology to provide driving assistance and convenience to users. These vehicles offer infotainment services to the driver, but such services alone do not improve driver safety. Accordingly, V2X communication, which forms a network between vehicles, between a vehicle and infrastructure, or between a vehicle and a person, is drawing attention. Various techniques for improving channel estimation performance without changing the IEEE 802.11p standard have been proposed, but they do not satisfy the packet error rate (PER) performance required by C-ITS services. In this paper, we analyze existing channel estimation techniques and propose a new channel estimation scheme that achieves better performance. It does so by applying an updated matrix for the data pilot symbols to the constructed data pilot (CDP) channel estimation scheme and by additionally performing interpolation in the frequency domain. Finally, through simulations based on the IEEE 802.11p standard, we confirm the coded-PER performance of the existing and proposed channel estimation schemes. A minimal sketch of the decision-directed idea behind CDP estimation follows this entry.
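
The sketch below illustrates, under stated assumptions, the decision-directed idea behind CDP-style channel estimation for an 802.11p-like OFDM frame: the equalized data of each received symbol is sliced to the nearest constellation point and then reused as a "data pilot" to refresh the per-subcarrier channel estimate for the next symbol. The QPSK constellation, frame shape, and simple smoothing update are illustrative assumptions, not the authors' exact updated-matrix formulation or their frequency-domain interpolation step.

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # assumed constellation

def slice_qpsk(symbols):
    """Hard-decide each equalized value to the nearest QPSK point."""
    return QPSK[np.argmin(np.abs(symbols[:, None] - QPSK[None, :]), axis=1)]

def cdp_channel_tracking(rx_symbols, h_init, alpha=0.5):
    """Decision-directed (CDP-style) channel tracking.

    rx_symbols : (num_ofdm_symbols, num_subcarriers) received frequency-domain data
    h_init     : initial channel estimate from the preamble, shape (num_subcarriers,)
    alpha      : smoothing weight for the update (illustrative choice)
    """
    h = h_init.copy()
    estimates = []
    for y in rx_symbols:
        x_hat = slice_qpsk(y / h)            # equalize with current estimate, then slice
        h_new = y / x_hat                    # re-estimate the channel from the "data pilot"
        h = alpha * h + (1 - alpha) * h_new  # smooth across symbols (assumed update rule)
        estimates.append(h.copy())
    return np.array(estimates)
```

In the paper's scheme, the smoothing step would be replaced by the proposed updated matrix, followed by interpolation across subcarriers in the frequency domain.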

Development of Collision Safety Control Logic using ADAS information and Machine Learning (머신러닝/ADAS 정보 활용 충돌안전 제어로직 개발)

  • Park, Hyungwook;Song, Soo Sung;Shin, Jang Ho;Han, Kwang Chul;Choi, Se Kyung;Ha, Heonseok;Yoon, Sungroh
    • Journal of Auto-vehicle Safety Association
    • /
    • v.14 no.3
    • /
    • pp.60-64
    • /
    • 2022
  • In the automotive industry, developing automobiles that meet safety requirements is becoming increasingly complex, because quality evaluation agencies in each country are continually strengthening their vehicle safety standards. Among these requirements, collision safety must be satisfied by controlling airbags, seat belts, etc., and can be defined as post-crash safety. Apart from this passive safety system, Advanced Driver Assistance Systems (ADAS) use advanced detection sensors, GPS, communication, and video equipment to detect hazards and notify the driver before a collision. However, research that uses the active-safety sensors represented by ADAS to improve passenger safety within the existing passive-safety system has been limited to exploiting the sudden-braking signal of the FCA (Forward Collision-Avoidance Assist) system. Therefore, this study develops logic that can improve passenger protection in an accident by using ADAS information and driving information collected before the collision. The proposed logic was built on LSTM deep learning techniques and trained on crash test data; a minimal sketch of such a model appears below.
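
As one plausible shape for such logic, the PyTorch sketch below maps a short pre-crash time series of ADAS and driving signals to a restraint-control class. The feature set (relative distance, relative speed, ego speed, brake pressure), sequence length, and output classes are illustrative assumptions; the paper's actual inputs and labels come from its ADAS information and crash test data.

```python
import torch
import torch.nn as nn

class CrashSeverityLSTM(nn.Module):
    """Maps a pre-crash sensor sequence to a restraint-control class (illustrative)."""
    def __init__(self, num_features=4, hidden_size=64, num_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):            # x: (batch, time_steps, num_features)
        _, (h_n, _) = self.lstm(x)   # h_n: (1, batch, hidden_size), last hidden state
        return self.head(h_n[-1])    # logits over the assumed control classes

# Example: 2 s of pre-crash data sampled at 50 Hz with 4 assumed signals
# (relative distance, relative speed, ego speed, brake pressure).
model = CrashSeverityLSTM()
dummy = torch.randn(8, 100, 4)
logits = model(dummy)                # shape: (8, 3)
```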

A Real-Time Hardware Design of CNN for Vehicle Detection (차량 검출용 CNN 분류기의 실시간 처리를 위한 하드웨어 설계)

  • Bang, Ji-Won;Jeong, Yong-Jin
    • Journal of IKEEE
    • /
    • v.20 no.4
    • /
    • pp.351-360
    • /
    • 2016
  • Recently, machine learning algorithms, especially deep learning-based algorithms, have been receiving attention due to their high classification performance. Among these algorithms, the Convolutional Neural Network (CNN) is known to be efficient for the image processing tasks used in Advanced Driver Assistance Systems (ADAS). However, it is difficult to achieve real-time CNN processing in a vehicle embedded software environment because of the repeated operations contained in each layer of the CNN. In this paper, we propose a hardware accelerator that reduces the execution time of the CNN by parallelizing repeated operations such as convolution. A Xilinx ZC706 evaluation board is used to verify the performance of the proposed accelerator. For 36×36 input images, the hardware execution time of the CNN is 2.812 ms at a 100 MHz clock frequency, which shows that our hardware can run in real time. A reference sketch of the convolution loops that such an accelerator parallelizes follows this entry.
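
For reference, the plain Python/NumPy sketch below spells out the nested loops of one direct convolution layer; the inner multiply-accumulate loops are what an FPGA accelerator unrolls and executes in parallel. The layer dimensions are illustrative and not taken from the paper.

```python
import numpy as np

def conv2d_reference(image, kernels, bias):
    """Naive direct convolution: the multiply-accumulate loops that a
    hardware accelerator unrolls and runs in parallel.

    image   : (C_in, H, W) input feature maps
    kernels : (C_out, C_in, K, K) filter weights
    bias    : (C_out,) per-filter bias
    """
    c_out, c_in, k, _ = kernels.shape
    _, h, w = image.shape
    out = np.zeros((c_out, h - k + 1, w - k + 1))
    for oc in range(c_out):                 # output channel
        for y in range(h - k + 1):          # output row
            for x in range(w - k + 1):      # output column
                acc = bias[oc]
                for ic in range(c_in):      # these inner loops are the
                    for ky in range(k):     # multiply-accumulate work that
                        for kx in range(k): # the FPGA executes in parallel
                            acc += image[ic, y + ky, x + kx] * kernels[oc, ic, ky, kx]
                out[oc, y, x] = acc
    return out

# Example with assumed sizes, in the spirit of a 36x36 input image.
out = conv2d_reference(np.random.rand(1, 36, 36), np.random.rand(8, 1, 5, 5), np.zeros(8))
print(out.shape)  # (8, 32, 32)
```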

Real-time FCWS implementation using CPU-FPGA architecture (CPU-FPGA 구조를 이용한 실시간 FCWS 구현)

  • Han, Sungwoo;Jeong, Yongjin
    • Journal of IKEEE
    • /
    • v.21 no.4
    • /
    • pp.358-367
    • /
    • 2017
  • Advanced Driver Assistance Systems (ADAS), such as the Front Collision Warning System (FCWS), are currently being developed. An FCWS requires high processing speed because it must operate in real time while driving, and low power consumption because it runs on an automotive embedded system. In this paper, an FCWS is implemented on a CPU-FPGA architecture in an embedded system to enable real-time processing. Lane detection uses Inverse Perspective Mapping (IPM) and a sliding-window method to operate at high speed. For vehicle detection, a Convolutional Neural Network (CNN) with a high recognition rate is used, accelerated by parallel processing on the FPGA. The proposed architecture was verified on an Intel Cyclone V SoC (System on Chip), which combines a low-power ARM Cortex-A9 core with an on-chip FPGA. The FCWS achieves 44 FPS at HD resolution, which is real time, and its energy efficiency is about 3.33 times higher than that of a high-performance PC environment. A minimal sketch of the IPM warp that precedes sliding-window lane detection follows this entry.
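
The OpenCV sketch below shows the IPM (bird's-eye-view) warp that typically precedes sliding-window lane search. The four source points outlining the road region and the output size are illustrative assumptions; in practice they come from the camera's mounting geometry or calibration, and the paper's implementation realizes the pipeline on the CPU-FPGA platform rather than in Python.

```python
import cv2
import numpy as np

def birds_eye_view(frame, src_points, out_size=(400, 600)):
    """Warp a forward-camera frame to a top-down (IPM) view.

    frame      : input BGR image from the front camera
    src_points : four (x, y) corners of the road region in the image,
                 ordered top-left, top-right, bottom-right, bottom-left
    out_size   : (width, height) of the warped output
    """
    w, h = out_size
    dst_points = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    m = cv2.getPerspectiveTransform(np.float32(src_points), dst_points)
    return cv2.warpPerspective(frame, m, out_size)

# Example with assumed calibration points for a 1280x720 front camera.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
top_down = birds_eye_view(frame, [(550, 450), (730, 450), (1180, 700), (100, 700)])
```

Sliding-window lane search then operates on the warped image, where lanes appear roughly parallel and vertical.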

A Study of Mobile Edge Computing System Architecture for Connected Car Media Services on Highway

  • Lee, Sangyub;Lee, Jaekyu;Cho, Hyeonjoong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.12
    • /
    • pp.5669-5684
    • /
    • 2018
  • A new mobile edge network architecture is required to cope with increasing traffic, quality requirements, advanced driver assistance systems for autonomous driving, and new cloud computing demands on the highway. This article proposes a hierarchical cloud computing architecture that enhances performance through adaptive data load distribution across buses acting as edge computing servers. The vehicular dynamic cloud is based on a wireless architecture in which Wireless Local Area Network and Long Term Evolution-Advanced communication are used for data transmission between moving buses and cars. The main advantages of the proposed architecture are a reduced data load on the top-layer cloud server and effective data distribution on a congested highway where moving vehicles request video-on-demand (VOD) services from the server. Through NS-2 network simulations modeled on a real environment, we conducted experiments to validate the proposed architecture and show its feasibility and effectiveness for connected-car media services on the highway. A minimal sketch of an edge-server selection policy in this spirit follows this entry.
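
As a rough illustration of adaptive load distribution, the sketch below assigns each car's VOD request either to a nearby bus edge server with spare capacity or, failing that, to the top-layer cloud over LTE-A. The one-dimensional highway model, WLAN range threshold, and capacity bookkeeping are assumptions made for illustration, not the paper's protocol.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BusEdgeServer:
    bus_id: str
    position_m: float            # position along the highway (assumed 1-D model)
    capacity_mbps: float         # total serving capacity of the bus edge server
    load_mbps: float = 0.0       # currently allocated VOD traffic

    def spare(self) -> float:
        return self.capacity_mbps - self.load_mbps

def assign_vod_request(car_position_m: float, demand_mbps: float,
                       buses: List[BusEdgeServer],
                       wlan_range_m: float = 300.0) -> Optional[BusEdgeServer]:
    """Pick the closest in-range bus with enough spare capacity.

    Returns None when no bus can serve the request, meaning the stream
    falls back to the top-layer cloud server over LTE-A.
    """
    candidates = [b for b in buses
                  if abs(b.position_m - car_position_m) <= wlan_range_m
                  and b.spare() >= demand_mbps]
    if not candidates:
        return None                                   # serve from top-layer cloud
    best = min(candidates, key=lambda b: abs(b.position_m - car_position_m))
    best.load_mbps += demand_mbps                     # account for the new stream
    return best
```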

Night-time Vehicle Detection Method Using Convolutional Neural Network (합성곱 신경망 기반 야간 차량 검출 방법)

  • Park, Woong-Kyu;Choi, Yeongyu;Kim, Hyun-Koo;Choi, Gyu-Sang;Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.12 no.2
    • /
    • pp.113-120
    • /
    • 2017
  • In this paper, we present a night-time vehicle detection method using CNN (Convolutional Neural Network) classification. Camera-based night-time vehicle detection plays an important role in various advanced driver assistance systems (ADAS), such as automatic head-lamp control. The method consists mainly of thresholding, labeling, and classification steps, with the classification step implemented by a CNN based on the existing CIFAR-10 model. Through simulations on real road video, we show that CNN classification is a good alternative for night-time vehicle detection. A minimal sketch of the three-stage pipeline follows this entry.
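
The sketch below strings together the three stages named in the abstract (thresholding, connected-component labeling, and CNN classification), using OpenCV for the first two. The brightness threshold, minimum blob area, patch size, and the classify_patch callback standing in for the trained CNN are placeholders for illustration.

```python
import cv2
import numpy as np

def detect_night_vehicle_candidates(gray_frame, bright_thresh=200, min_area=30):
    """Threshold bright regions (head/tail lamps) and label connected blobs."""
    _, binary = cv2.threshold(gray_frame, bright_thresh, 255, cv2.THRESH_BINARY)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for i in range(1, num):                     # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes

def classify_candidates(frame, boxes, classify_patch):
    """Crop each candidate box and pass it to a CNN classifier.

    classify_patch is a placeholder for the trained CNN (e.g., a CIFAR-10-style
    model) that returns True when the patch contains a vehicle.
    """
    return [box for box in boxes
            if classify_patch(cv2.resize(frame[box[1]:box[1]+box[3],
                                               box[0]:box[0]+box[2]], (32, 32)))]
```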

A Study on the Image DB Construction for the Multi-function Front Looking Camera System Development (다기능 전방 카메라 개발을 위한 영상 DB 구축 방법에 관한 연구)

  • Kee, Seok-Cheol
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.25 no.2
    • /
    • pp.219-226
    • /
    • 2017
  • This paper addresses effective and quantitative image DB construction for the development of front-looking camera systems. The automotive industry has expanded the capability of front camera solutions that support ADAS (Advanced Driver Assistance System) applications targeting Euro NCAP functional requirements. These safety functions include AEB (Autonomous Emergency Braking), TSR (Traffic Sign Recognition), LDW (Lane Departure Warning), and FCW (Forward Collision Warning). To guarantee real-road safety performance, a driving image DB logged under various real road conditions should be used to train the core object classifiers and verify the functional performance of the camera system. However, without proper guidelines, building such a driving image DB can become an unreliable and time-consuming task. This paper proposes the standard working procedures and design factors required at each step to build an effective image DB for reliable automotive front-looking camera systems. A minimal sketch of a per-clip metadata record for such a DB follows this entry.
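
To make the idea of quantitative DB design concrete, the sketch below defines a per-clip metadata record covering the kinds of logging conditions (region, road type, weather, illumination, target functions) such a DB would need to index, plus a helper for spotting under-represented scenarios. The field names and categories are illustrative assumptions, not the paper's actual schema or design factors.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DriveClipRecord:
    """Metadata for one logged driving clip in a front-camera image DB (illustrative)."""
    clip_id: str
    country: str                 # region whose signs and lane markings the clip covers
    road_type: str               # e.g., "highway", "urban", "rural"
    weather: str                 # e.g., "clear", "rain", "snow", "fog"
    illumination: str            # e.g., "day", "dusk", "night", "tunnel"
    target_functions: List[str]  # e.g., ["AEB", "TSR", "LDW", "FCW"]
    duration_s: float
    labeled: bool                # whether ground-truth annotation exists

def coverage_by(records: List[DriveClipRecord], attribute: str) -> dict:
    """Sum logged seconds per condition value, to spot under-represented scenarios."""
    totals: dict = {}
    for r in records:
        key = getattr(r, attribute)
        totals[key] = totals.get(key, 0.0) + r.duration_s
    return totals
```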

Research on Cognitive Effects and Responsiveness of Smartphone-based Augmented Reality Navigation (스마트폰 증강현실 내비게이션의 인지능력과 호응도에 관한 연구)

  • Sohn, Min Gook;Lee, Seung Tae;Lee, Jae Yeol
    • Korean Journal of Computational Design and Engineering
    • /
    • v.19 no.3
    • /
    • pp.272-280
    • /
    • 2014
  • Most car navigation systems provide 2D or 3D virtual map-based driving guidance. One of the important issues is how to reduce the cognitive burden on the driver, who must map the abstracted information onto the real-world driving scene. Recently, augmented reality (AR)-based navigation has been considered a new way to reduce cognitive workload by superimposing guidance information onto the real-world scene captured by the camera. In particular, a head-up display (HUD) is a popular way to implement AR navigation, but HUDs are too expensive to install in most cars, so HUD-based AR navigation is currently unrealistic for navigational assistance. Meanwhile, smartphones with advanced computing capability and various sensors have become widespread and also provide navigational assistance. This paper studies the cognitive effect and responsiveness of AR navigation through a comparative study against conventional virtual map-based navigation on the same smartphone, using both quantitative and qualitative methods: the number of eye gazes at the navigation system is used to measure the cognitive effect, and questionnaires are used for qualitative analysis of responsiveness.

Vision-based Real-time Vehicle Detection and Tracking Algorithm for Forward Collision Warning (전방 추돌 경보를 위한 영상 기반 실시간 차량 검출 및 추적 알고리즘)

  • Hong, Sunghoon;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.7
    • /
    • pp.962-970
    • /
    • 2021
  • The majority of vehicle accidents are caused by driver inattention, such as drowsy driving. A forward collision warning system (FCWS) can significantly reduce the number and severity of accidents by detecting the risk of collision with the vehicle ahead and providing an early warning signal to the driver. This paper describes a low-power, embedded-system-based FCWS for safety. The algorithm computes the time to collision (TTC) from single-camera detection, tracking, and distance estimation of the vehicle ahead, combined with the current vehicle speed. Additionally, high-level and low-level program optimization techniques are introduced so that the system operates in real time even on a low-performance embedded system. The system was tested with driving video on the embedded system; with the optimizations applied, execution was about 170 times faster than the non-optimized version. A worked sketch of the TTC computation follows this entry.
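
The sketch below shows the basic TTC computation such a system performs once the distance to the lead vehicle and the closing speed are available. The pinhole distance-from-bounding-box step, the calibration constants, and the 2 s warning threshold are illustrative assumptions, not the paper's calibration.

```python
def estimate_distance_m(box_height_px, real_height_m=1.5, focal_length_px=1000.0):
    """Rough monocular distance from the lead vehicle's bounding-box height
    using the pinhole model: distance = focal_length * real_height / pixel_height.
    real_height_m and focal_length_px are assumed calibration values."""
    return focal_length_px * real_height_m / box_height_px

def time_to_collision_s(distance_m, ego_speed_mps, lead_speed_mps):
    """TTC = distance / closing speed; returns None when the gap is not closing."""
    closing = ego_speed_mps - lead_speed_mps
    if closing <= 0:
        return None
    return distance_m / closing

# Example: lead vehicle 100 px tall, ego at 25 m/s, lead at 15 m/s.
d = estimate_distance_m(100)              # 15.0 m
ttc = time_to_collision_s(d, 25.0, 15.0)  # 1.5 s
warn = ttc is not None and ttc < 2.0      # assumed warning threshold
```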

Real-time Speed Limit Traffic Sign Detection System for Robust Automotive Environments

  • Hoang, Anh-Tuan;Koide, Tetsushi;Yamamoto, Masaharu
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.4 no.4
    • /
    • pp.237-250
    • /
    • 2015
  • This paper describes a hardware-oriented algorithm and its conceptual implementation in a real-time speed limit traffic sign detection system on an automotive-oriented field-programmable gate array (FPGA). It solves the training and color dependence problems found in other research, which saw reduced recognition accuracy under unlearned conditions when color changed. The algorithm is applicable to various platforms, such as color or grayscale cameras, high-resolution (4K) or low-resolution (VGA) cameras, and high-end or low-end FPGAs. It is also robust under various conditions, such as daytime, nighttime, and rainy nights, and is adaptable to the speed limit traffic sign systems of various countries. The speed limit sign candidates on each grayscale video frame are detected through two simple computational stages using global luminosity and local pixel direction. Pipeline implementation with results-sharing on overlap, a RAM-based shift register, and optimized scan window sizes results in a small but high-performance implementation. The proposed system meets the processing speed requirement of a 60 fps system. The recognition system achieves better than 98% accuracy in detection and recognition, even under difficult conditions such as rainy nights, and is implementable on the low-end, low-cost Xilinx Zynq automotive Z7020 FPGA. A rough sketch of the two-stage candidate search follows this entry.
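
As a rough illustration of the two-stage candidate search, the sketch below scans a grayscale frame with a sliding window, first checking global luminosity (enough brightness contrast for a sign plate) and then a local pixel-direction cue (gradient orientations spread as expected around a circular sign rim). The thresholds, window sizes, and circularity heuristic are illustrative assumptions, not the published hardware algorithm.

```python
import numpy as np

def luminosity_pass(window, min_contrast=60):
    """Stage 1: keep windows whose brightness spread suggests a sign plate."""
    return float(window.max()) - float(window.min()) >= min_contrast

def direction_pass(window, min_edge_frac=0.15):
    """Stage 2: keep windows with a broad spread of local gradient directions,
    as expected around a circular speed-limit sign rim (coarse heuristic)."""
    gy, gx = np.gradient(window.astype(float))
    mag = np.hypot(gx, gy)
    strong = mag > mag.mean() + mag.std()
    if strong.mean() < min_edge_frac:
        return False
    angles = np.arctan2(gy[strong], gx[strong])
    hist, _ = np.histogram(angles, bins=8, range=(-np.pi, np.pi))
    return np.count_nonzero(hist) >= 6      # edges point in many directions

def detect_candidates(gray, win=48, stride=16):
    """Slide a window over the frame and apply the two stages in order."""
    h, w = gray.shape
    return [(x, y) for y in range(0, h - win, stride)
                   for x in range(0, w - win, stride)
                   if luminosity_pass(gray[y:y+win, x:x+win])
                   and direction_pass(gray[y:y+win, x:x+win])]
```

In the paper's FPGA design, overlapping windows share intermediate results and scan window sizes are optimized, which is what keeps the hardware small while sustaining 60 fps.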