• Title/Summary/Keyword: Advanced driver assistance system

Implementation of Integrated Controller of ACC/LKS based on OSEK OS (OSEK OS 기반 ACC/LKS 통합제어기 구현)

  • Choi, Dan-Bee;Lee, Kyung-Jung;Ahn, Hyun-Sik
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.5 / pp.1-8 / 2013
  • This paper implements an integrated vehicle chassis controller combining ACC (Adaptive Cruise Control) and LKS (Lane Keeping System) on OSEK OS, a vehicle operating system, and analyzes its performance through experiments. In recent years, active safety and advanced driver assistance systems have been studied as a means of improving vehicle safety. We integrate ACC, which controls the longitudinal velocity of the vehicle, with LKS, which assists the vehicle in maintaining its driving lane, and implement the integrated control system in a vehicle. The implemented control system follows the OSEK/VDX standard, which aims at reusability and safety of vehicle software and at removing the hardware dependence of application software. The control system redesigned on OSEK OS, as specified by OSEK/VDX, can manage real-time tasks, process interrupts, and manage shared resources. ECU-In-the-Loop Simulation (EILS) results show that the OSEK OS-based integrated ACC/LKS controller is equivalent to a conventional integrated ACC/LKS controller.
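
As a rough, language-agnostic illustration of what an integrated ACC/LKS control step computes, the following minimal Python sketch pairs a constant time-gap ACC law with a simple lane-centering law. The gains, signal names, and limits are assumptions for illustration; the paper's controller is implemented as OSEK OS tasks, which this sketch does not reproduce.

```python
# Minimal sketch of one integrated ACC/LKS control step.
# Gains, signal names, and limits are illustrative assumptions,
# not taken from the paper (which targets OSEK OS tasks).

def acc_step(gap_m, rel_speed_mps, ego_speed_mps,
             set_speed_mps=27.0, time_gap_s=1.8,
             kp_gap=0.4, kd_gap=0.6, kp_speed=0.5):
    """Return a longitudinal acceleration command [m/s^2]."""
    desired_gap = 2.0 + time_gap_s * ego_speed_mps   # constant time-gap policy
    gap_error = gap_m - desired_gap
    # Follow the lead vehicle when it constrains us, otherwise track set speed.
    a_follow = kp_gap * gap_error + kd_gap * rel_speed_mps
    a_cruise = kp_speed * (set_speed_mps - ego_speed_mps)
    return max(-3.0, min(min(a_follow, a_cruise), 2.0))  # comfort/safety limits

def lks_step(lateral_offset_m, heading_error_rad, k_lat=0.12, k_head=0.8):
    """Return a steering angle command [rad] to recentre in the lane."""
    steer = -(k_lat * lateral_offset_m + k_head * heading_error_rad)
    return max(-0.5, min(steer, 0.5))

def integrated_controller(sensors):
    """One periodic task body: read shared sensor data, compute both commands."""
    accel = acc_step(sensors["gap"], sensors["rel_speed"], sensors["speed"])
    steer = lks_step(sensors["lat_offset"], sensors["heading_err"])
    return {"accel_cmd": accel, "steer_cmd": steer}

print(integrated_controller(
    {"gap": 35.0, "rel_speed": -1.0, "speed": 25.0,
     "lat_offset": 0.3, "heading_err": 0.02}))
```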

Development of ISO 26262 based Requirements Analysis and Verification Method for Efficient Development of Vehicle Software

  • Kyoung Lak Choi;Min Joong Kim;Young Min Kim
    • International Journal of Internet, Broadcasting and Communication / v.15 no.3 / pp.219-230 / 2023
  • With the development of autonomous driving technology, the use of software in vehicles is increasing, so system complexity and development difficulty also increase. Development that complies with ISO 26262 must be carried out to reduce the malfunctions that may occur in these increasingly complex vehicle systems. ISO 26262, the functional safety standard for the automotive industry, requires functional safety to be considered from the design stage through all stages of development. At the software level in particular, it defines the requirements to be complied with during development and during verification. However, it does not clearly specify concrete design or development methods, so supplementary development guidelines are needed. The importance of requirements analysis and verification is growing due to advancing technology and increasing system complexity, and the vehicle industry must satisfy functional safety requirements while carrying out diverse development activities. We propose a process that reflects the systems-engineering perspective to support the smooth application of ISO 26262 and its development requirements. The proposed safety analysis and verification (FMEA) process for ISO 26262 functional safety was applied to the FCAS (Forward Collision Avoidance Assist System) function used in autonomous vehicles and advanced driver assistance systems, and the results were confirmed.
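
To make the FMEA step concrete, the sketch below fills a toy worksheet and computes classic Risk Priority Numbers. The failure modes, ratings, and the RPN threshold are invented for illustration and are not the paper's work products; ISO 26262 analyses are driven by ASIL classification rather than a simple RPN cut-off.

```python
# Hypothetical FMEA-style worksheet sketch for an FCAS-like function.
# Items and ratings are invented for illustration only.

failure_modes = [
    # (item, failure mode, severity 1-10, occurrence 1-10, detection 1-10)
    ("radar input",   "target not detected",        9, 3, 4),
    ("brake request", "command sent too late",      8, 2, 3),
    ("SW watchdog",   "missed deadline undetected", 7, 2, 6),
]

def rpn(severity, occurrence, detection):
    """Risk Priority Number used in classic FMEA."""
    return severity * occurrence * detection

for item, mode, s, o, d in failure_modes:
    value = rpn(s, o, d)
    flag = "ACTION REQUIRED" if value >= 100 else "acceptable"
    print(f"{item:13s} | {mode:27s} | RPN={value:3d} | {flag}")
```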

Development of a Vehicle Positioning Algorithm Using In-vehicle Sensors and Single Photo Resection and its Performance Evaluation (차량 내장 센서와 단영상 후방 교차법을 이용한 차량 위치 결정 알고리즘 개발 및 성능 평가)

  • Kim, Ho Jun;Lee, Im Pyeong
    • Journal of Korean Society for Geospatial Information Science / v.25 no.2 / pp.21-29 / 2017
  • For the efficient and stable operation of autonomous vehicles or advanced driver assistance systems, which are being actively studied nowadays, it is important to determine the position of the vehicle accurately and economically. A satellite-based navigation system is mainly used for positioning, but it has limitations in signal blockage areas. To overcome this, sensor fusion methods that add sensors such as an inertial navigation system have mainly been proposed, but high sensor cost has been a problem. In this work, we develop a vehicle position estimation algorithm using in-vehicle sensors and a low-cost imaging sensor, without any expensive additional sensor. We determine the vehicle positions using the velocity and yaw rate from the in-vehicle sensors and the position and attitude of the camera obtained by single photo resection. For the evaluation, we built a prototype system, acquired test data with it, and estimated the trajectory. The proposed algorithm shows accuracy about 40% higher than a method using in-vehicle sensors only.
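
The motion-model half of such an approach is plain dead reckoning from velocity and yaw rate; the sketch below shows that propagation plus a naive blend with an absolute fix. The blending weight and variable names are assumptions, and the paper's single photo resection step is only represented here by a stand-in camera fix.

```python
import math

# Dead reckoning from in-vehicle velocity and yaw rate, with a crude
# correction by an absolute position fix (standing in for the camera-based
# single photo resection).  All numbers are illustrative.

def dead_reckoning_step(x, y, heading, speed_mps, yaw_rate_rps, dt):
    """Propagate the planar vehicle pose over one time step."""
    heading += yaw_rate_rps * dt
    x += speed_mps * math.cos(heading) * dt
    y += speed_mps * math.sin(heading) * dt
    return x, y, heading

def fuse_camera_fix(x, y, cam_x, cam_y, weight=0.3):
    """Blend in an absolute position fix (e.g. from photo resection)."""
    return (1 - weight) * x + weight * cam_x, (1 - weight) * y + weight * cam_y

x, y, heading = 0.0, 0.0, 0.0
for _ in range(100):                       # 100 steps at 10 Hz, constant inputs
    x, y, heading = dead_reckoning_step(x, y, heading,
                                        speed_mps=10.0, yaw_rate_rps=0.05, dt=0.1)
x, y = fuse_camera_fix(x, y, cam_x=95.0, cam_y=24.0)
print(round(x, 1), round(y, 1), round(heading, 2))
```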

Technology Acceptance Modeling based on User Experience for Autonomous Vehicles

  • Cho, Yujun;Park, Jaekyu;Park, Sungjun;Jung, Eui S.
    • Journal of the Ergonomics Society of Korea / v.36 no.2 / pp.87-108 / 2017
  • Objective: The purpose of this study was to conduct an acceptance study of ADAS, the core technology of autonomous vehicles, based on automation levels and user experience, which has been lacking in previous research. The first objective was to construct an acceptance model of ADAS technology and to identify the factors that affect behavioral intention through user experience-based evaluation with a driving simulator. The second was to trace how these factors change across the automation levels of autonomous vehicles through UX/UA scores. Background: The number of vehicles equipped with ADAS is increasing, and the interaction between vehicle and driver changes as particular driving functions become automated. For this reason, it is becoming important to study technology acceptance, that is, how willingly drivers give up parts of the driving task and hand authority over to the vehicle. Method: We organized the study model and items through a literature review, constructed scenarios for the four automation levels of autonomous vehicles, and conducted acceptance assessments using a driving simulator. A total of 68 men and women participated in the experiment. Results: Performance Expectancy (PE), Social Influence (SI), Perceived Safety (PS), Anxiety (AX), Trust (T), and Affective Satisfaction (AS) were identified as factors affecting Behavioral Intention (BI). The UX/UA scores of these factors differ significantly across automation levels: UX/UA tends to rise up to level 2, drops to its lowest point at level 3, and increases slightly or stays steady at level 4. Conclusion and Application: First, we present an acceptance model of ADAS, the core technology of autonomous vehicles, verified through user experience-based assessment with a driving simulator, which can serve as a basis for future acceptance studies of ADAS technology. Second, tracing how the factors change and predicting acceptance across automation levels through UX/UA scores can support appropriate ADAS development and help identify and avoid problems that lower acceptance. These results can be used to test the validity of functions before ADAS suppliers launch products; they can also help prevent problems when applying autonomous vehicle technology and support technology that drivers can easily accept, improving driver safety and convenience.
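
For readers unfamiliar with this kind of factor model, the toy sketch below fits behavioral intention from the six reported factors with ordinary least squares on synthetic data. The data and coefficients are random placeholders; the study's actual analysis and results are not reproduced.

```python
import numpy as np

# Toy sketch: predict behavioral intention (BI) from the six factors the
# study reports (PE, SI, PS, AX, T, AS).  The "survey data" is random,
# purely to show the shape of such a model.

rng = np.random.default_rng(0)
n = 68                                    # same participant count as the study
factors = rng.normal(size=(n, 6))         # columns: PE, SI, PS, AX, T, AS
true_w = np.array([0.5, 0.3, 0.4, -0.3, 0.4, 0.2])   # invented weights
bi = factors @ true_w + rng.normal(scale=0.3, size=n)

X = np.column_stack([factors, np.ones(n)])            # add intercept
coef, *_ = np.linalg.lstsq(X, bi, rcond=None)
for name, w in zip(["PE", "SI", "PS", "AX", "T", "AS", "intercept"], coef):
    print(f"{name:9s} {w:+.2f}")
```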

Secure Self-Driving Car System Resistant to the Adversarial Evasion Attacks (적대적 회피 공격에 대응하는 안전한 자율주행 자동차 시스템)

  • Seungyeol Lee;Hyunro Lee;Jaecheol Ha
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.907-917 / 2023
  • Recently, self-driving cars have applied deep learning technology to advanced driver assistance systems to provide convenience to drivers, but deep learning technology has been shown to be vulnerable to adversarial evasion attacks. In this paper, we performed five adversarial evasion attacks, including MI-FGSM (Momentum Iterative Fast Gradient Sign Method), against the object detection algorithm YOLOv5 (You Only Look Once) and measured object detection performance in terms of mAP (mean Average Precision). In particular, we present a method that applies morphology operations, removing noise and extracting boundaries, so that YOLO can detect objects normally. Experimental analysis shows that when an adversarial attack was performed, YOLO's mAP dropped by at least 7.9%, whereas YOLO with the proposed method applied detected objects with mAP performance of up to 87.3%.
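
The defensive idea is a preprocessing pass, sketched below with OpenCV morphological opening and closing to suppress pixel-level perturbations before a frame reaches the detector. The kernel size and operation order are assumptions, and the YOLOv5 call is only indicated as a comment.

```python
import cv2
import numpy as np

# Sketch of morphology-based cleanup: smooth out high-frequency adversarial
# perturbations with opening/closing before the frame reaches the detector.
# Kernel size and the opening-then-closing order are assumptions.

def morphological_cleanup(bgr_image, kernel_size=3):
    """Apply opening then closing to suppress pixel-level noise."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    opened = cv2.morphologyEx(bgr_image, cv2.MORPH_OPEN, kernel)
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
    return closed

if __name__ == "__main__":
    frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in frame
    cleaned = morphological_cleanup(frame)
    # detections = yolov5_model(cleaned)   # hypothetical detector call
    print(cleaned.shape, cleaned.dtype)
```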

Development of Vehicle LDW Application Service using AUTOSAR Platform on Multi-Core MCU (멀티코어 상의 AUTOSAR 플랫폼을 활용한 차량용 LDW 응용 서비스 개발)

  • Park, Mi-Ryong;Kim, Dongwon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.4 / pp.113-120 / 2014
  • In this paper, we examine an asymmetric multi-processing environment to provide an LDW service. The environment consists of a high-speed MCU supporting fast image processing and a low-speed MCU that handles control together with other ECUs in the control domain. We designed the fast image processing application and the LDW application Software Components (SW-Cs) according to the AUTOSAR development process. For communication between the two MCUs, a timer-based polling IPC was designed. To communicate with other ECUs (Electronic Control Units), we designed CAN messages that provide alarm information and receive the turn signal. We confirm the feasibility of developing various ADAS functions using an asymmetric multi-processing environment and the AUTOSAR platform, and we also expect it to support ISO 26262 functional safety.
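
Conceptually, the timer-based polling IPC is a shared mailbox that the control side reads on a fixed period; the sketch below reduces it to two Python threads. The 50 ms period and message fields are assumptions, and the real system consists of AUTOSAR SW-Cs in C exchanging CAN frames, which this sketch does not model.

```python
import threading
import time

# Conceptual sketch of timer-based polling IPC between a vision MCU and a
# control MCU, reduced to two threads and a shared mailbox.

mailbox = {"lane_departure": False}
lock = threading.Lock()

def vision_mcu():
    """High-speed side: writes the latest LDW result into the mailbox."""
    for i in range(10):
        with lock:
            mailbox["lane_departure"] = (i % 4 == 3)   # pretend detection
        time.sleep(0.02)

def control_mcu():
    """Low-speed side: polls the mailbox on a fixed timer and raises alarms."""
    for _ in range(5):
        time.sleep(0.05)                               # 50 ms polling timer
        with lock:
            departed = mailbox["lane_departure"]
        if departed:
            print("LDW alarm -> would send CAN warning frame")

t1, t2 = threading.Thread(target=vision_mcu), threading.Thread(target=control_mcu)
t1.start(); t2.start(); t1.join(); t2.join()
```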

Video Based Tail-Lights Status Recognition Algorithm (영상기반 차량 후미등 상태 인식 알고리즘)

  • Kim, Gyu-Yeong;Lee, Geun-Hoo;Do, Jin-Kyu;Park, Keun-Soo;Park, Jang-Sik
    • The Journal of the Korea institute of electronic communication sciences / v.8 no.10 / pp.1443-1449 / 2013
  • Automatic detection of vehicles in front is an integral component of many advanced driver-assistance systems, such as collision mitigation, automatic cruise control, and automatic head-lamp dimming. Day and night, tail-lights play an important role in detecting vehicles ahead and recognizing their driving status. However, some drivers do not know the status of their vehicle's tail-lights, so it is desirable to inform drivers of that status automatically. In this paper, a method for recognizing tail-light status based on video processing and recognition technology is proposed. Background estimation, optical flow, and Euclidean distance are used to detect vehicles entering a tollgate. A saliency map is then used to detect the tail-lights, and their status is recognized in the Lab color space. Experiments on tollgate videos show that the proposed method can be used to report tail-light status.
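
The colour-space part of such a pipeline can be sketched as thresholding bright, strongly red pixels in Lab, as below. The thresholds are assumptions, and the background estimation, optical flow, and saliency-map stages from the paper are omitted.

```python
import cv2
import numpy as np

# Rough sketch of the Lab colour test: look for bright, reddish blobs in a
# rear-vehicle region of interest.  Thresholds are illustrative assumptions.

def taillight_mask(bgr_roi, min_l=140, min_a=150):
    """Binary mask of pixels that are both bright (L) and strongly red (a)."""
    lab = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2LAB)
    l_chan, a_chan, _ = cv2.split(lab)
    return ((l_chan >= min_l) & (a_chan >= min_a)).astype(np.uint8) * 255

def lights_on(bgr_roi, min_pixels=50):
    """Crude on/off decision from the size of the bright-red region."""
    return cv2.countNonZero(taillight_mask(bgr_roi)) >= min_pixels

roi = (np.random.rand(120, 200, 3) * 255).astype(np.uint8)   # stand-in rear view
print("tail-lights on?", lights_on(roi))
```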

Interactive ADAS development and verification framework based on 3D car simulator (3D 자동차 시뮬레이터 기반 상호작용형 ADAS 개발 및 검증 프레임워크)

  • Cho, Deun-Sol;Jung, Sei-Youl;Kim, Hyeong-Su;Lee, Seung-gi;Kim, Won-Tae
    • Journal of IKEEE / v.22 no.4 / pp.970-977 / 2018
  • The autonomous vehicle is based on an advanced driver assistance system (ADAS) consisting of sensors that collect information about the surrounding environment and control modules that make decisions from the measured data. As interest in autonomous driving technology has grown recently, an easy development framework for ADAS beginners and learners is needed. However, existing development and verification methods are based on high-performance vehicle simulators, which have drawbacks such as complex verification procedures and high cost. Moreover, most schemes do not provide the sensing data required by the ADAS directly from the simulator, which limits verification reliability. In this paper, we present an interactive ADAS development and verification framework using a 3D vehicle simulator that overcomes these problems. An ADAS with image recognition based artificial intelligence was implemented as a virtual sensor in the 3D car simulator, and autonomous driving verification was performed in realistic scenarios.
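
At the interface level, the framework idea is a loop in which the simulator's virtual sensor feeds frames to an ADAS module that returns control commands; the sketch below shows that loop with invented class names. The paper's simulator and its actual API are not reproduced.

```python
import numpy as np

# Interface-level sketch: virtual camera -> ADAS module -> control command.
# Class and method names are invented for illustration only.

class VirtualCamera:
    """Stands in for the simulator's virtual sensor feed."""
    def get_frame(self):
        return (np.random.rand(360, 640, 3) * 255).astype(np.uint8)

class SimpleAdas:
    """Placeholder perception + decision module driven by simulator frames."""
    def step(self, frame):
        brightness = frame.mean()
        # Pretend decision: brake when the (fake) perception flags a hazard.
        return {"throttle": 0.2, "brake": 1.0 if brightness > 200 else 0.0}

camera, adas = VirtualCamera(), SimpleAdas()
for _ in range(3):                      # simulator loop: sense -> decide -> act
    command = adas.step(camera.get_frame())
    print(command)                      # the real framework feeds this back in
```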

Smart Camera Technology to Support High Speed Video Processing in Vehicular Network (차량 네트워크에서 고속 영상처리 기반 스마트 카메라 기술)

  • Son, Sanghyun;Kim, Taewook;Jeon, Yongsu;Baek, Yunju
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.1 / pp.152-164 / 2015
  • Rapid development of semiconductor, sensor, and mobile network technologies has enabled embedded devices for the vehicular environment to include high-sensitivity sensors, wireless communication modules, and a video processing module, and many researchers have been actively studying smart car technology built on such high-performance embedded devices. The number of vehicles grows as society develops, and the risk of accidents is increasing gradually. Thus, advanced driver assistance systems that provide the driver with the vehicle's status and surrounding environment from various sensor data are being actively studied. In this paper, we design and implement a smart vehicular camera device that provides V2X communication and gathers environment information. We also study a method to create metadata from received video and sensor data using a video analysis algorithm. In addition, we introduce the S-ROI and D-ROI methods, which set a region of interest in a video frame to improve computational performance. Performance evaluation of the two ROI methods confirmed video processing speeds 3.0 times (S-ROI) and 4.8 times (D-ROI) faster than full-frame analysis.
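
The ROI idea can be shown in a few lines: S-ROI crops a fixed band of every frame, while D-ROI crops a window around the previous detection, so either way fewer pixels reach the analysis step. The crop geometry and margin below are assumptions.

```python
import numpy as np

# Sketch of the two ROI strategies: S-ROI crops a fixed region of every
# frame, D-ROI crops around the previous detection box.

def static_roi(frame, top=0.4, bottom=0.9):
    """Fixed band of the frame (e.g. the road area)."""
    h = frame.shape[0]
    return frame[int(h * top):int(h * bottom), :]

def dynamic_roi(frame, last_box, margin=40):
    """Window around the previous detection box (x, y, w, h)."""
    x, y, w, h = last_box
    y0, y1 = max(0, y - margin), min(frame.shape[0], y + h + margin)
    x0, x1 = max(0, x - margin), min(frame.shape[1], x + w + margin)
    return frame[y0:y1, x0:x1]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)        # Full HD stand-in frame
print("full  :", frame.shape[0] * frame.shape[1], "pixels")
print("S-ROI :", static_roi(frame).size // 3, "pixels")
print("D-ROI :", dynamic_roi(frame, (900, 500, 120, 90)).size // 3, "pixels")
```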

Traffic Sign Recognition using SVM and Decision Tree for Poor Driving Environment (SVM과 의사결정트리를 이용한 열악한 환경에서의 교통표지판 인식 알고리즘)

  • Jo, Young-Bae;Na, Won-Seob;Eom, Sung-Je;Jeong, Yong-Jin
    • Journal of IKEEE / v.18 no.4 / pp.485-494 / 2014
  • Traffic Sign Recognition (TSR) is an important element of an Advanced Driver Assistance System (ADAS). However, many TSR studies address only normal daytime conditions, because a sign's distinctive color does not appear in poor environments such as nighttime, snow, rain, or fog. In this paper, we propose a new TSR algorithm based on machine learning for daytime as well as poor environments. In poor environments, traditional methods that rely on the RGB color space do not perform well, so we extract sign characteristics using HoG features and detect signs using a Support Vector Machine (SVM). Detected signs are then recognized by a decision tree based on 25 reference points in a normalized RGB space. The detection rate of the proposed system is 96.4% and the recognition rate is 94% in poor environments. Testing was performed on an Intel i5 processor at 3.4 GHz using Full HD resolution images. The results show that machine-learning-based detection and recognition can be used effectively for TSR even in poor driving environments.
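
A minimal version of the detection stage is HoG features fed to an SVM, as sketched below. The training patches are random stand-ins and the HoG parameters are assumptions; the decision-tree recognition stage on 25 normalized-RGB reference points is not shown.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

# HoG + SVM detection sketch.  The "dataset" is random noise standing in
# for sign / non-sign grayscale patches; parameters are illustrative.

def hog_feature(gray_patch):
    return hog(gray_patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

rng = np.random.default_rng(1)
patches = rng.random((40, 64, 64))            # 64x64 grayscale stand-in patches
labels = np.array([1] * 20 + [0] * 20)        # 1 = sign, 0 = background

X = np.array([hog_feature(p) for p in patches])
clf = SVC(kernel="linear").fit(X, labels)     # sign / non-sign detector

test_patch = rng.random((64, 64))
print("sign detected" if clf.predict([hog_feature(test_patch)])[0] else "no sign")
```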