• Title/Summary/Keyword: 자율주행시스템 (Autonomous Driving System)

Search Results: 739

A Study on the Introduction for Automated Vehicle-based Mobility Service Considering the Level Of Service of Road Infrastructure (도로 인프라 수준을 고려한 자율주행 기반 모빌리티 서비스 도입 방향 고찰)

  • Tak, Sehyun; Kim, Haegon; Kang, Kyeongpyo; Lee, Donghoun
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.18 no.5 / pp.19-33 / 2019
  • There have been enormous efforts to develop innovative public transport bus services that enhance operational efficiency based on Automated Vehicles (AV). However, since the vehicle operating environment in the public transport system varies with the purpose and method of the mobility service, it is necessary to first evaluate the current roadworthiness of the road infrastructure in order to introduce AVs effectively. Therefore, this study classified and redefined AV-based mobility services based on literature reviews, and conducted roadworthiness tests to check the feasibility of those services. Furthermore, we suggested deployment strategies for AV-based mobility services considering the Level-Of-Service (LOS) of road infrastructure, based on the results of the roadworthiness tests. The proposed direction has great potential for introducing AV-based public transport systems in the near future.
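
As a rough illustration of the deployment logic the abstract describes, the sketch below maps a road segment's infrastructure LOS grade to the mobility services considered deployable there. The grades, service names, and mapping are hypothetical placeholders, not the classification defined in the paper.

```python
# Illustrative sketch: matching AV-based mobility services to a road segment's
# infrastructure Level-Of-Service (LOS). The grades and service names below are
# hypothetical placeholders, not the paper's classification.
DEPLOYABLE_SERVICES = {
    "A": ["fixed-route shuttle", "on-demand transit", "BRT feeder"],
    "B": ["fixed-route shuttle", "on-demand transit"],
    "C": ["fixed-route shuttle"],
    "D": [],  # infrastructure not yet roadworthy for the service
}

def candidate_services(segment_los: str) -> list[str]:
    """Return the AV mobility services considered deployable at the given LOS grade."""
    return DEPLOYABLE_SERVICES.get(segment_los.upper(), [])

if __name__ == "__main__":
    for grade in ("A", "C", "D"):
        print(grade, "->", candidate_services(grade))
```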

Autonomous Driving Platform using Hybrid Camera System (복합형 카메라 시스템을 이용한 자율주행 차량 플랫폼)

  • Eun-Kyung Lee
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.18 no.6 / pp.1307-1312 / 2023
  • In this paper, we propose a hybrid camera system that combines cameras with different focal lengths and LiDAR (Light Detection and Ranging) sensors to address the core components of autonomous driving perception: object recognition and distance measurement. Using the proposed hybrid camera system, we extract objects within the scene and generate precise location and distance information for them. First, we employ the YOLOv7 algorithm, widely used in the autonomous driving field for its fast computation, high accuracy, and real-time processing, for object recognition within the scene. We then use the multi-focal cameras to create depth maps that provide object positions and distance information. To improve distance accuracy, we integrate the 3D distance information obtained from the LiDAR sensors with the generated depth maps. This paper thus introduces an autonomous vehicle platform that perceives its surroundings more accurately during operation based on the proposed hybrid camera system and provides precise 3D spatial location and distance information. We anticipate that this will improve the safety and efficiency of autonomous vehicles.
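
The following sketch illustrates the kind of camera/LiDAR fusion the abstract describes: for each detected object, a depth-map distance inside the detection box is blended with the LiDAR ranges that project into the same box. The median statistics and the weighting are assumptions for illustration, not the paper's actual fusion rule.

```python
import numpy as np

def fuse_object_distance(box, depth_map, lidar_points_uv, lidar_ranges,
                         lidar_weight=0.7):
    """box = (x1, y1, x2, y2) in pixels; depth_map in meters; lidar_points_uv are
    projected LiDAR pixel coordinates with matching lidar_ranges in meters."""
    x1, y1, x2, y2 = box
    # Median depth-map distance inside the detection box.
    cam_dist = float(np.median(depth_map[y1:y2, x1:x2]))

    # LiDAR returns whose projection falls inside the same box.
    u, v = lidar_points_uv[:, 0], lidar_points_uv[:, 1]
    mask = (u >= x1) & (u < x2) & (v >= y1) & (v < y2)
    if not mask.any():
        return cam_dist  # no LiDAR support: fall back to the depth map
    lidar_dist = float(np.median(lidar_ranges[mask]))

    # Weighted blend, trusting LiDAR more for absolute range (assumed weighting).
    return lidar_weight * lidar_dist + (1.0 - lidar_weight) * cam_dist
```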

Direction detection and autonomous mobile robot using LED lighting-based indoor location recognition system (LED 조명 기반 실내위치 인식 시스템을 이용한 이동로봇의 방향 검출 및 자율주행)

  • Bang, Jae Hyeok; Park, Su Man; Yi, Keon Young
    • Proceedings of the KIEE Conference / 2015.07a / pp.1298-1299 / 2015
  • GPS is widely used for self-localization of mobile robots, but it is difficult to use indoors because satellite signal reception is obstructed inside buildings. As an alternative, various indoor positioning techniques have been studied. Recently, WiFi-based methods have been partially commercialized, but their accuracy is limited to about 3~5 m; LED lighting-based methods have not yet reached practical use but are being actively researched. Our laboratory has also developed an LED lighting-based indoor positioning system, and in previous work we studied autonomous driving of a mobile robot using it. In this study, building on that work, we use two receivers to reduce the robot's heading-recognition errors and demonstrate autonomous driving of the mobile robot. The proposed system consists of a mobile robot, a lighting control device, and a computer. The mobile robot senses its own position and heading through a lighting-signal receiver mounted on a commercial micromouse, and transmits its position to the computer or receives position commands over Wi-Fi. The computer displays the received robot position on screen in real time and lets the user enter position commands to send to the robot. When the user sets a travel path and sends the command, the robot compares its own position with the destination and drives autonomously. Experiments confirmed that the heading-recognition problem of the previous study was resolved and that the proposed system enables smooth autonomous driving of the mobile robot even in indoor spaces.
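
A minimal sketch of the two-receiver idea described above: the heading follows from the positions of two LED-positioning receivers mounted front and rear, and a simple proportional steering law drives the robot toward a commanded waypoint. The control law and gain are illustrative assumptions, not the system's actual implementation.

```python
import math

def heading_from_receivers(front_xy, rear_xy):
    """Heading (radians) of the line from the rear receiver to the front receiver."""
    return math.atan2(front_xy[1] - rear_xy[1], front_xy[0] - rear_xy[0])

def steering_command(front_xy, rear_xy, goal_xy, k_p=1.5):
    """Proportional steering toward the goal based on the heading error (assumed law)."""
    heading = heading_from_receivers(front_xy, rear_xy)
    # Robot center taken as the midpoint between the two receivers.
    cx = (front_xy[0] + rear_xy[0]) / 2.0
    cy = (front_xy[1] + rear_xy[1]) / 2.0
    bearing = math.atan2(goal_xy[1] - cy, goal_xy[0] - cx)
    # Wrap the heading error to [-pi, pi] before applying the gain.
    error = math.atan2(math.sin(bearing - heading), math.cos(bearing - heading))
    return k_p * error  # angular-velocity command

if __name__ == "__main__":
    print(steering_command((1.0, 0.0), (0.8, 0.0), (2.0, 1.0)))
```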


Ontology-based Navigational Planning for Autonomous Robots (온톨로지에 기반한 자율주행 로봇의 운항)

  • Lee, In-K.; Seo, Suk-T.; Jeong, Hye-C.; Kwon, Soon-H.
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.5 / pp.626-631 / 2007
  • Autonomous robots performing desired tasks in rough, changing, unstructured environments without continuous human assistance must be able to cope with their surroundings, whether these are certain or not. Developing algorithms that derive useful conclusions from the uncertain information obtained by various sensors is a first step toward this goal. Recently, ontology has attracted great attention as a useful method for representing and processing knowledge. In this paper, we propose an ontology-based navigation algorithm for autonomous robots and provide computer simulation results to show the validity of the proposed algorithm.
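
As a toy illustration of ontology-style reasoning for navigation, the sketch below stores facts as (subject, predicate, object) triples and treats anything in the cell ahead whose class is a subclass of Obstacle as something to avoid. The vocabulary and rule are invented for illustration; the paper's ontology and algorithm are not reproduced here.

```python
# Facts as (subject, predicate, object) triples; names are invented for illustration.
TRIPLES = {
    ("Chair", "subClassOf", "Obstacle"),
    ("Person", "subClassOf", "DynamicObstacle"),
    ("DynamicObstacle", "subClassOf", "Obstacle"),
    ("object42", "isA", "Chair"),
    ("object42", "locatedIn", "cellAhead"),
}

def is_subclass(cls, ancestor):
    """Follow subClassOf links upward to decide class membership."""
    if cls == ancestor:
        return True
    return any(is_subclass(o, ancestor)
               for s, p, o in TRIPLES if s == cls and p == "subClassOf")

def obstacle_ahead():
    """True if any entity located in the cell ahead is (a subclass of) an Obstacle."""
    for s, p, o in TRIPLES:
        if p == "locatedIn" and o == "cellAhead":
            cls = next(o2 for s2, p2, o2 in TRIPLES if s2 == s and p2 == "isA")
            if is_subclass(cls, "Obstacle"):
                return True
    return False

print("avoid" if obstacle_ahead() else "go straight")
```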

An Overview of Control Methods for the Electric Power Steering System of Autonomous Vehicles (자율주행 자동차의 전기적 파워 조향 시스템을 위한 제어 기법의 개관)

  • Son, Yeong-Seop; Kim, Won-Hui; Jeong, Jeong-Ju
    • ICROS / v.21 no.1 / pp.31-36 / 2015
  • Advanced driver assistance systems (ADAS) related to vehicle driving, which provide convenience to the driver, require controllers for the longitudinal and lateral motion of the vehicle. Lateral control requires steering-angle control of the steering system, and the electric power steering (EPS) system, which is structurally simple and offers improved fuel economy, reduced vehicle weight, and fast response, has recently been widely adopted in the automotive industry. When driving autonomously with such ADAS functions, the EPS system must control the steering-wheel angle so that it tracks the required steering angle computed by a higher-level controller. However, a conventional EPS system only outputs an assist torque for the torque the driver applies to the steering wheel. This paper describes methods for solving this problem. We first explain the basic functions of the EPS system and then review various control algorithms for steering-wheel angle control in autonomous vehicles, such as proportional-integral (PI) control, sliding mode control, and observer-based nonlinear damping control.
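
A minimal sketch of one of the surveyed schemes: a discrete-time PI steering-angle controller that outputs an EPS motor torque to track the reference angle from a higher-level controller. The gains, torque limit, and anti-windup handling are illustrative assumptions, not values from the paper.

```python
class PISteeringController:
    def __init__(self, k_p=8.0, k_i=2.0, torque_limit=5.0, dt=0.01):
        # Gains, saturation limit, and sample time are assumed for illustration.
        self.k_p, self.k_i = k_p, k_i
        self.torque_limit = torque_limit
        self.dt = dt
        self.integral = 0.0

    def update(self, angle_ref, angle_meas):
        """Return an EPS motor torque command tracking the reference steering angle."""
        error = angle_ref - angle_meas
        self.integral += error * self.dt
        torque = self.k_p * error + self.k_i * self.integral
        # Saturate the output and apply simple anti-windup by undoing the
        # integrator step whenever the command is clipped.
        if abs(torque) > self.torque_limit:
            self.integral -= error * self.dt
            torque = max(-self.torque_limit, min(self.torque_limit, torque))
        return torque

if __name__ == "__main__":
    ctrl = PISteeringController()
    print(ctrl.update(angle_ref=0.2, angle_meas=0.0))
```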


Performance Evaluation Using Neural Network Learning of Indoor Autonomous Vehicle Based on LiDAR (라이다 기반 실내 자율주행 차량에서 신경망 학습을 사용한 성능평가)

  • Yonghun Kwon; Inbum Jung
    • KIPS Transactions on Computer and Communication Systems / v.12 no.3 / pp.93-102 / 2023
  • Processing data through the cloud causes problems such as latency and increased communication costs. Therefore, many researchers study edge computing in the IoT, and autonomous driving is a representative application. In indoor self-driving, unlike outdoors, GPS and traffic information cannot be used, so the surrounding environment must be recognized using sensors. Because the vehicle is a mobile platform with resource constraints, an efficient autonomous driving system is required. This paper proposes a machine-learning method using neural networks for autonomous driving in an indoor environment. The neural network model predicts the most appropriate driving command for the current location based on the distance data measured by the LiDAR sensor. We designed six learning models to evaluate the effect of the number of inputs to the proposed neural networks. In addition, we built an autonomous vehicle based on a Raspberry Pi for driving and learning, and produced an indoor driving track for data collection and evaluation. Finally, we compared the six neural network models in terms of accuracy, response time, and battery consumption, and confirmed the effect of the number of input data on performance.
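
A minimal sketch of the kind of model the abstract describes: a small feed-forward network mapping a vector of LiDAR distance readings to a discrete driving command. The layer sizes, number of input beams, and command set are assumptions for illustration, not the six configurations evaluated in the paper.

```python
import torch
import torch.nn as nn

# Assumed command set for illustration.
COMMANDS = ["forward", "left", "right", "stop"]

class LidarDrivingNet(nn.Module):
    def __init__(self, n_beams=36, hidden=64, n_commands=len(COMMANDS)):
        super().__init__()
        # Small MLP: LiDAR distances in, logits over driving commands out.
        self.net = nn.Sequential(
            nn.Linear(n_beams, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_commands),
        )

    def forward(self, distances):
        return self.net(distances)  # logits over driving commands

if __name__ == "__main__":
    model = LidarDrivingNet()
    scan = torch.rand(1, 36) * 4.0          # fake 36-beam scan, meters
    command = COMMANDS[model(scan).argmax(dim=1).item()]
    print("predicted command:", command)
```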

Experimental Setup for Autonomous Navigation of Robotic Vehicle for University Campus (대학 캠퍼스용 로봇차량의 자율주행을 위한 실험환경 구축)

  • Cho, Sung Taek; Park, Young Jun; Jung, Seul
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.2 / pp.105-112 / 2016
  • This paper presents an experimental setup for autonomous navigation of a robotic vehicle for touring a university campus. The robotic vehicle is developed for navigating specific areas such as a university campus or amusement parks, and can carry two passengers over short distances. To let the robotic vehicle autonomously navigate the route from the main gate to the administrative building of the university, an experimental setup for SLAM is presented. As an initial step, a simple method of following the line detected by a single camera is implemented for part of the route. The central line on the pavement, painted in two colors, red and yellow, is detected by image processing, and the robotic vehicle is commanded to follow it. Experimental studies are conducted to demonstrate the navigation performance as a possible touring vehicle.
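
A minimal sketch of the line-following step described above: threshold the red/yellow center line in HSV, find its centroid near the bottom of the image, and steer proportionally to the horizontal offset. The HSV ranges and gain are illustrative assumptions, not the values used on the actual vehicle.

```python
import cv2
import numpy as np

def line_offset(bgr_frame):
    """Normalized horizontal offset of the colored center line, or None if not visible."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Assumed HSV ranges for the red and yellow pavement markings.
    red = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    yellow = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))
    mask = cv2.bitwise_or(red, yellow)

    h, w = mask.shape
    roi = mask[int(0.6 * h):, :]            # look only at the road near the vehicle
    m = cv2.moments(roi, binaryImage=True)
    if m["m00"] == 0:
        return None                          # line not visible
    cx = m["m10"] / m["m00"]
    return (cx - w / 2.0) / (w / 2.0)        # offset in [-1, 1]

def steering_angle(offset, k_p=0.5):
    """Proportional steering toward the line; sign convention is arbitrary."""
    return 0.0 if offset is None else -k_p * offset
```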

Spatial Factors' Analysis of Affecting on Automated Driving Safety Using Spatial Information Analysis Based on Level 4 ODD Elements (Level 4 자율주행서비스 ODD 구성요소 기반 공간정보분석을 통한 자율주행의 안전성에 영향을 미치는 공간적 요인 분석)

  • Tagyoung Kim; Jooyoung Maeng; Kyeong-Pyo Kang; SangHoon Bae
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.5 / pp.182-199 / 2023
  • Since 2021, government departments have been promoting the Automated Driving Technology Development and Innovation Project as a national research and development (R&D) project. The automated vehicles and service technologies developed in these projects are planned to be provided to the public in selected Living Lab cities. It is therefore important to determine a spatial area and operating section that enable safe and stable automated driving, depending on the purpose and characteristics of the target service. In this study, the static Operational Design Domain (ODD) elements for Level 4 automated driving services were reclassified by reviewing previously published papers and related literature and by investigating field data. Spatial analysis techniques were then used to apply the reclassified Level 4 ODD elements to the real operating area of Level 3 automated driving services, since it is important to reflect the spatial factors affecting the safety of real automated driving technologies and services. Consequently, a total of six driving mode changes (disengagements) were derived through spatial information analysis, and the factors affecting the safety of automated driving were crosswalks, traffic lights, intersections, bicycle roads, pocket lanes, caution signs, and median strips. This spatial factor analysis method is expected to be useful for determining special areas for automated driving services.
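
A toy sketch of the spatial-join idea behind such an analysis: count how many disengagement points fall within a buffer distance of each type of road feature. Coordinates are assumed to be in a projected, meter-based CRS; the buffer radius and sample data are illustrative, not results from the study.

```python
import math
from collections import Counter

def within(p, q, radius_m):
    """True if two (x, y) points in a meter-based CRS are within radius_m of each other."""
    return math.hypot(p[0] - q[0], p[1] - q[1]) <= radius_m

def disengagements_near_features(disengagements, features, radius_m=30.0):
    """features: list of (feature_type, (x, y)); returns disengagement counts per type."""
    counts = Counter()
    for point in disengagements:
        for ftype, fxy in features:
            if within(point, fxy, radius_m):
                counts[ftype] += 1
    return counts

if __name__ == "__main__":
    events = [(10.0, 5.0), (120.0, 40.0)]                      # illustrative points
    features = [("crosswalk", (12.0, 7.0)), ("intersection", (118.0, 42.0))]
    print(disengagements_near_features(events, features))
```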

Development of the autonomous vehicle model with embedded 32-bit microprocessor (32비트 마이크로프로세서를 이용한 모형 자율주행차량 개발)

  • Lee, Bom-Seok; Kim, June-sik; Kim, Byeong-Hwa; Kim, Dong-Gyu; Lee, Woo-Haeng; Park, Ju-Hyun; Lee, Seul-Ki
    • Proceedings of the KIEE Conference / 2015.07a / pp.103-104 / 2015
  • In this paper, we implement an autonomous driving system using a model vehicle. The goal of the system is to drive autonomously by continuously keeping the lane. To this end, we design a model vehicle based on a 32-bit microprocessor and examine it through simulation in an environment consisting of a white background and black lane lines. A line-scan camera (TSL1401CL) is used to recognize the lane, and an autonomous driving system that follows it is implemented.
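
A minimal sketch of the lane-keeping logic described above, assuming the TSL1401CL returns a 1x128 brightness array in which the black lane line appears as a dark dip: the steering command is made proportional to how far that dip sits from the image center. The threshold and gain are illustrative assumptions.

```python
N_PIXELS = 128  # TSL1401CL line-scan resolution

def find_line_center(scan, dark_threshold=60):
    """Return the average index of the dark lane-line pixels, or None if not found."""
    dark = [i for i, v in enumerate(scan) if v < dark_threshold]
    if not dark:
        return None
    return sum(dark) / len(dark)

def steering_from_scan(scan, k_p=0.8):
    """Normalized steering command proportional to the line's offset from center."""
    center = find_line_center(scan)
    if center is None:
        return 0.0                       # go straight when the line is lost
    offset = (center - N_PIXELS / 2) / (N_PIXELS / 2)
    return -k_p * offset

if __name__ == "__main__":
    scan = [200] * N_PIXELS
    scan[40:48] = [30] * 8               # simulated dark line left of center
    print(steering_from_scan(scan))
```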


Self-driving quarantine robot with chlorine dioxide system (이산화염소 시스템을 적용한 자율주행 방역 로봇)

  • Bang, Gul-Won
    • Journal of Digital Convergence / v.19 no.12 / pp.145-150 / 2021
  • It is not easy to secure the manpower to perform quarantine continuously in public places, but using self-driving robots can solve the problems caused by this manpower shortage. A self-driving quarantine robot can continuously prevent the spread of harmful viruses and diseases in public institutions and hospitals without additional manpower. The robot's position for autonomous driving was estimated by applying a particle filter algorithm, and a UV sterilization system and a chlorine dioxide injection system were applied for quarantine. The driving time is more than 3 hours and the position error is 0.5 m. The stop-and-avoid function operated at a 95% rate, the obstacle detection distance was 1.5 m, and for automatic charge recovery the robot moved to the charging cradle when 10% of the battery capacity remained. As a result of disinfection with the unmanned quarantine system, UV sterilization reached 99% and chlorine dioxide sterilization exceeded 95%, which can contribute to reducing enormous social costs.
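
A minimal localization sketch, assuming the filter referred to in the abstract is a standard particle filter: predict particles with odometry, weight them against a position measurement, resample, and average for the pose estimate. The motion and measurement models and noise values are generic placeholders, not the robot's implementation.

```python
import math
import random

def predict(particles, dx, dy, motion_noise=0.05):
    """Move every particle by the odometry increment plus Gaussian motion noise."""
    return [(x + dx + random.gauss(0, motion_noise),
             y + dy + random.gauss(0, motion_noise)) for x, y in particles]

def update(particles, measured_xy, meas_noise=0.3):
    """Weight particles by how well they explain a position measurement, then resample."""
    weights = []
    for x, y in particles:
        d = math.hypot(x - measured_xy[0], y - measured_xy[1])
        weights.append(math.exp(-0.5 * (d / meas_noise) ** 2) + 1e-12)
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

def estimate(particles):
    """Pose estimate as the mean of the particle set."""
    n = len(particles)
    return (sum(x for x, _ in particles) / n, sum(y for _, y in particles) / n)

if __name__ == "__main__":
    particles = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(500)]
    particles = predict(particles, dx=0.1, dy=0.0)
    particles = update(particles, measured_xy=(0.6, 0.5))
    print(estimate(particles))
```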