• Title/Summary/Keyword: 카메라 기반 주행 (camera-based driving)


Integration of Visually Detected Lane Information into Costmap (비전 기반 차선 인식 정보의 Costmap 반영 연구)

  • Jihoon Ha;Kyunam Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.1135-1136 / 2023
  • A costmap can be used for path planning in autonomous driving. Based on map information and sensor data, a costmap assigns a cost representing the risk of passing through each area. However, a local costmap only accounts for obstacles detected by the sensors, so additional processing is required to include lane information in path planning. This study explores an integrated decision-making methodology by incorporating camera-detected lane information into the costmap and presents its potential use in localization and path planning.
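
The abstract above describes folding camera-detected lane information into a local costmap. As a minimal sketch of that idea (not the authors' implementation), the following Python snippet stamps hypothetical lane points into a 2D cost grid alongside obstacle costs; the grid resolution, the sub-lethal lane cost, and the `lane_points_m` input are assumptions.

```python
import numpy as np

def add_lane_cost(costmap, lane_points_m, resolution=0.05, lane_cost=150):
    """Stamp camera-detected lane points into a local costmap.

    costmap       : 2D uint8 array, 0 = free, 255 = lethal obstacle
    lane_points_m : iterable of (x, y) lane points in the costmap frame, metres
    resolution    : metres per cell (assumed)
    lane_cost     : sub-lethal cost discouraging lane crossing (assumed)
    """
    h, w = costmap.shape
    for x, y in lane_points_m:
        col = int(round(x / resolution))
        row = int(round(y / resolution))
        if 0 <= row < h and 0 <= col < w:
            # Keep the higher of the existing (obstacle) cost and the lane cost,
            # so lane marks never overwrite lethal obstacles.
            costmap[row, col] = max(costmap[row, col], lane_cost)
    return costmap

# Example: a 4 m x 4 m local costmap at 5 cm resolution with one detected lane line.
grid = np.zeros((80, 80), dtype=np.uint8)
lane = [(1.0, 0.1 * i) for i in range(40)]   # a straight lane segment (hypothetical)
grid = add_lane_cost(grid, lane)
```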

The Road condition-based Braking Strength Calculation System for a fully autonomous driving vehicle (완전 자율주행을 위한 도로 상태 기반 제동 강도 계산 시스템)

  • Son, Su-Rak;Jeong, Yi-Na
    • Journal of Internet Computing and Services / v.23 no.2 / pp.53-59 / 2022
  • Beyond Level 3 autonomous driving, Level 4 and Level 5 technologies aim to keep passengers in optimal condition as well as to drive the vehicle flawlessly. However, current autonomous driving technology depends heavily on visual information from sensors such as LiDAR and front cameras, which makes fully autonomous driving difficult on roads other than designated ones. This paper therefore proposes a Braking Strength Calculation System (BSCS), in which a vehicle classifies road conditions using data other than visual information and calculates the optimal braking strength according to road and driving conditions. The BSCS consists of the RCDM (Road Condition Definition Module), which classifies road conditions based on the KNN algorithm, and the BSCM (Braking Strength Calculation Module), which calculates the optimal braking strength while driving based on the current driving and road conditions. The experiments in this paper identified the most suitable value of K for the KNN algorithm and showed that the proposed RCDM is more accurate than the unsupervised K-means algorithm. By using vibration data applied to the suspension in addition to visual information, the proposed BSCS can make the braking of autonomous vehicles smoother in environments where visual information is limited.
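
As a rough illustration of the RCDM idea described above (not the paper's code), the sketch below classifies a road condition with a KNN model trained on hypothetical suspension-vibration features and looks up a toy base braking strength; the feature set, labels, and braking table are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: per-sample suspension-vibration features
# (e.g. RMS amplitude, dominant frequency) with road-condition labels.
X_train = np.array([
    [0.05, 2.0],   # dry asphalt
    [0.06, 2.2],   # dry asphalt
    [0.20, 5.5],   # gravel
    [0.22, 5.8],   # gravel
    [0.12, 3.1],   # wet asphalt
    [0.11, 3.0],   # wet asphalt
])
y_train = ["dry", "dry", "gravel", "gravel", "wet", "wet"]

# K is a tuning parameter; the paper reports searching for the most suitable value of K.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

road_condition = knn.predict([[0.13, 3.2]])[0]

# Toy braking-strength lookup keyed by road condition (illustrative only).
base_braking = {"dry": 1.0, "wet": 0.7, "gravel": 0.6}
print(road_condition, base_braking[road_condition])
```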

A vision sensor based on structured light for active sensor network (자율 센서 네트워크를 위한 스트럭쳐 라이트 기반 비전 센서)

  • Park, Joon-Suk;Song, Ha-Yoon;Park, Jun
    • Proceedings of the Korea Information Processing Society Conference / 2007.05a / pp.349-352 / 2007
  • For a sensor node in an autonomous sensor network to drive autonomously, it must avoid obstacles along its path. To this end, we implemented a low-cost obstacle-detection vision sensor that remains effective in dynamic situations. The study uses a structured-light approach: a line-pattern infrared laser serves as the structured light, and an infrared filter mounted on the camera makes the sensor insensitive to ambient lighting. Noise is removed by two-stage thresholding on intensity and over time. In experiments, the X and Y coordinates of obstacles could be found with a maximum error of 10 mm in a 2D coordinate system referenced to the sensor node, and the vision-sensor program, implemented as an object that interoperates with the sensor-node program, provides obstacle information for localization and map building.
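
The abstract outlines two-stage thresholding (on intensity and over time) followed by obstacle localization from a line-laser image. The sketch below illustrates those steps under strongly simplified, assumed geometry (the focal length, baseline, and reference row are made up); it is not the authors' sensor code.

```python
import numpy as np

def detect_laser_pixels(ir_frame, intensity_thresh=200):
    """First-stage threshold on intensity: keep bright IR-laser pixels."""
    return ir_frame >= intensity_thresh

def temporal_filter(masks, min_hits=3):
    """Second-stage threshold over time: keep pixels lit in several consecutive frames."""
    return np.sum(np.stack(masks), axis=0) >= min_hits

def triangulate_row(row_px, f_px=600.0, baseline_m=0.10, row0_px=240.0):
    """Very simplified line-laser triangulation (assumed geometry):
    depth is inversely proportional to the laser line's vertical offset."""
    disparity = row_px - row0_px
    if disparity <= 0:
        return None  # no obstacle along this column
    return f_px * baseline_m / disparity

# Toy usage with synthetic frames.
frames = [np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in range(5)]
masks = [detect_laser_pixels(f) for f in frames]
stable = temporal_filter(masks)
print(triangulate_row(300.0))   # depth for a laser line seen 60 px below the reference row
```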

The research of implementing safety driving system based on camera vision system (Camera Vision 기반 주행안전 시스템 구현에 관한 연구)

  • Park, Hwa-Beom;Kim, Young-Kil
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.9 / pp.1088-1095 / 2019
  • Recent developments in information and communication technology have strongly influenced the automobile market, and devices incorporating IT have been installed in vehicles for driver safety and convenience. Such devices increase convenience, but they can also increase traffic accidents caused by driver distraction. Preventing these accidents requires safety systems of various types. This paper implements a platform that recognizes LDWS, FCWS, and PDWS using a single camera, without radar-camera fusion or stereo-camera methods that require two or more sensors, and proposes a multi-function driving safety platform based on a single camera by evaluating its recognition rate and validity on a vehicle.
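
As a toy illustration of a single-camera lane-departure check in the spirit of the LDWS function mentioned above (not the paper's platform), the following OpenCV sketch detects line segments in the lower half of a frame and warns when the apparent lane centre drifts from the image centre; the thresholds and tolerance are assumptions.

```python
import cv2
import numpy as np

def lane_departure_warning(frame_gray, center_tol_px=60):
    """Toy lane-departure check: detect line segments near the bottom of the
    image and warn when the lane centre drifts away from the image centre."""
    edges = cv2.Canny(frame_gray, 80, 160)
    roi = edges[edges.shape[0] // 2:, :]           # lower half of the image
    lines = cv2.HoughLinesP(roi, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return False
    xs = [x for l in lines for x in (l[0][0], l[0][2])]
    lane_center = float(np.mean(xs))
    image_center = frame_gray.shape[1] / 2.0
    return abs(lane_center - image_center) > center_tol_px

# Synthetic frame with two painted "lane" lines for a quick self-test.
img = np.zeros((480, 640), dtype=np.uint8)
cv2.line(img, (180, 479), (300, 240), 255, 3)
cv2.line(img, (460, 479), (340, 240), 255, 3)
print(lane_departure_warning(img))
```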

Real-Time Individual Tracking of Multiple Moving Objects for Projection based Augmented Visualization (다중 동적객체의 실시간 독립추적을 통한 프로젝션 증강가시화)

  • Lee, June-Hyung;Kim, Ki-Hong
    • Journal of Digital Convergence / v.12 no.11 / pp.357-364 / 2014
  • AR content flickers while camera images are updated if the markers being tracked move quickly. Conventional methods that employ image-based markers and SLAM algorithms have the limitation that no more than two objects can be tracked simultaneously and made to interact with each other within the same camera scene. This paper proposes and investigates an improved SLAM-type algorithm for tracking dynamic objects to solve this problem. To this end, two virtual cameras are used for one physical camera, which allows the two tracked objects to interact with each other, because the two objects are perceived separately through the single physical camera. Mobile robots used as dynamic objects are synchronized with virtual robots in the designed content, demonstrating the usefulness of applying individual tracking of multiple moving objects to projection-based augmented visualization.
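
The abstract's key idea is perceiving two objects separately through one physical camera by means of two virtual cameras. A very loose interpretation is sketched below: the frame is cropped into two "virtual views" and a trivial brightness-centroid tracker runs in each; the region boxes and the tracker itself are assumptions, not the paper's method.

```python
import numpy as np

def split_virtual_views(frame, boxes):
    """Emulate two 'virtual cameras' by cropping two regions of interest
    from a single physical-camera frame (assumed interpretation)."""
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in boxes]

def track_bright_centroid(view):
    """Toy per-view tracker: centroid of the brightest pixels."""
    ys, xs = np.nonzero(view > view.max() * 0.9)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:110, 50:60] = 255      # object seen by virtual camera 1
frame[300:310, 500:510] = 255    # object seen by virtual camera 2
views = split_virtual_views(frame, [(0, 0, 320, 240), (320, 240, 320, 240)])
print([track_bright_centroid(v) for v in views])
```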

Comparing State Representation Techniques for Reinforcement Learning in Autonomous Driving (자율주행 차량 시뮬레이션에서의 강화학습을 위한 상태표현 성능 비교)

  • Jihwan Ahn;Taesoo Kwon
    • Journal of the Korea Computer Graphics Society / v.30 no.3 / pp.109-123 / 2024
  • Research into vision-based end-to-end autonomous driving systems utilizing deep learning and reinforcement learning has been steadily increasing. These systems typically encode continuous and high-dimensional vehicle states, such as location, velocity, orientation, and sensor data, into latent features, which are then decoded into a vehicular control policy. The complexity of urban driving environments necessitates the use of state representation learning through networks like Variational Autoencoders (VAEs) or Convolutional Neural Networks (CNNs). This paper analyzes the impact of different image state encoding methods on reinforcement learning performance in autonomous driving. Experiments were conducted in the CARLA simulator using RGB images and semantically segmented images captured by the vehicle's front camera. These images were encoded using VAE and Vision Transformer (ViT) networks. The study examines how these networks influence the agents' learning outcomes and experimentally demonstrates the role of each state representation technique in enhancing the learning efficiency and decision-making capabilities of autonomous driving systems.
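
To make the state-encoding step concrete, the following PyTorch sketch shows a minimal VAE-style encoder that maps a front-camera frame to a latent state vector an RL policy could consume; the architecture, image size, and latent dimension are assumptions and do not reproduce the paper's networks.

```python
import torch
import torch.nn as nn

class ImageStateEncoder(nn.Module):
    """Minimal VAE-style encoder: maps a front-camera RGB frame to a latent
    state vector that an RL policy network could consume (illustrative only)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
        )
        self.mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.logvar = nn.Linear(128 * 8 * 8, latent_dim)

    def forward(self, x):
        h = self.conv(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample a latent state during training.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

# A 64x64 camera frame (batch of 1) encoded into a 64-D latent state.
frame = torch.rand(1, 3, 64, 64)
state, _, _ = ImageStateEncoder()(frame)
print(state.shape)   # torch.Size([1, 64])
```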

Real-time Simultaneous Localization and Mapping (SLAM) for Vision-based Autonomous Navigation (영상기반 자동항법을 위한 실시간 위치인식 및 지도작성)

  • Lim, Hyon;Lim, Jongwoo;Kim, H. Jin
    • Transactions of the Korean Society of Mechanical Engineers A / v.39 no.5 / pp.483-489 / 2015
  • In this paper, we propose monocular visual simultaneous localization and mapping (SLAM) for large-scale environments. The proposed method continuously computes the current 6-DoF camera pose and 3D landmark positions from video input, and it successfully builds consistent maps from challenging outdoor sequences using a monocular camera as the only sensor. By using a binary descriptor and metric-topological mapping, the system demonstrates real-time performance on a large-scale outdoor dataset without utilizing GPUs or reducing the input image size. The effectiveness of the proposed method is demonstrated on various challenging video sequences.
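
The abstract credits binary descriptors for the system's real-time performance. As a generic illustration of a binary-descriptor front end (not the authors' SLAM system), the OpenCV sketch below extracts ORB features from two frames and matches them with Hamming distance.

```python
import cv2
import numpy as np

def match_binary_features(img1, img2, n_features=1000):
    """Extract ORB binary descriptors from two frames and match them with
    Hamming distance, as in a typical monocular-SLAM front end (illustrative)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]

# Synthetic pair: the same random texture shifted a few pixels to the right.
base = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
shifted = np.roll(base, 5, axis=1)
print(len(match_binary_features(base, shifted)))
```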

A Comparison of Korea Standard HD Map for Actual Driving Support of Autonomous Vehicles and Analysis of Application Layers (자율주행자동차 실주행 지원을 위한 표준 정밀도로지도 비교 및 활용 레이어 분석)

  • WON, Sang-Yeon;JEON, Young-Jae;JEONG, Hyun-Woo;KWON, Chan-Oh
    • Journal of the Korean Association of Geographic Information Studies / v.23 no.3 / pp.132-145 / 2020
  • With the arrival of the 4th industrial revolution, HD maps have become a key infrastructure for determining the precise location of autonomous vehicles in the fields of future cars, logistics, and robots. Autonomous vehicles increasingly rely on HD maps to determine their own position and the exact location of objects detected by sensors such as LiDAR, GNSS, Radar, and stereo cameras. As autonomous driving and C-ITS technologies are realized, demand has grown for precise information in HD maps and for new information created by fusing diverse changes with real-time data. In this study, domestic and international HD map standards and related environments are analyzed. On this basis, usability is examined by comparing standard HD maps established by various institutions, and the applicability of standard HD maps to actual autonomous vehicles is studied by reworking the maps. The results show that standard HD maps are well established for use by various institutions. If further research on layer classification and definition by institution is carried out based on this study, efficient establishment and updating of HD maps can be expected.

Real Time Pothole Detection System based on Video Data for Automatic Maintenance of Road Surface Distress (도로의 파손 상태를 자동관리하기 위한 동영상 기반 실시간 포트홀 탐지 시스템)

  • Jo, Youngtae;Ryu, Seungki
    • KIISE Transactions on Computing Practices / v.22 no.1 / pp.8-19 / 2016
  • Potholes are caused by the presence of water in the underlying soil structure, which weakens the road pavement through the expansion and contraction of water at freezing and thawing temperatures. Recently, automatic pothole detection systems such as vibration-based methods and laser-scanning methods have been studied. However, vibration-based methods have low detection accuracy and a limited detection area, and the cost of laser-scanning methods is significantly high. Thus, in this paper, we propose a new pothole detection system using a commercial black-box camera. Because the computing power of a commercial black-box camera is limited, the pothole detection algorithm must be designed to work within the camera's embedded computing environment. The designed algorithm was tested by implementing it in a black-box camera, and the experimental results are analyzed with evaluation metrics such as sensitivity and precision. Our studies confirm that the proposed pothole detection system can be used to gather pothole information in real time.
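
The evaluation above uses sensitivity and precision. A minimal sketch of those metrics, with hypothetical detection counts, is given below.

```python
def detection_metrics(true_positives, false_positives, false_negatives):
    """Sensitivity (recall) and precision as used to evaluate a pothole detector."""
    sensitivity = true_positives / (true_positives + false_negatives)
    precision = true_positives / (true_positives + false_positives)
    return sensitivity, precision

# Hypothetical counts from reviewing detections against annotated video frames.
sens, prec = detection_metrics(true_positives=42, false_positives=8, false_negatives=6)
print(f"sensitivity={sens:.2f}, precision={prec:.2f}")
```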

A Study on Estimation of Traffic Flow Using Image-based Vehicle Identification Technology (영상기반 차량인식 기법을 이용한 교통류 추정에 관한 연구)

  • Kim, Minjeong;Jeong, Daehan;Kim, Hoe Kyoung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.18 no.6 / pp.110-123 / 2019
  • Traffic data are the most basic elements needed for transportation planning and traffic system operation. Recently, methods have been attempted that estimate traffic flow characteristics from the distance to a leading vehicle measured by an ADAS camera. Using the microscopic simulation model VISSIM, this study investigated the feasibility of estimating traffic flow with ADAS vehicles, reflecting the distance error of image-based vehicle identification, and evaluated the estimates with the normalized root mean square error (NRMSE) under varying numbers of lanes, traffic demands, probe-vehicle penetration rates, and time-space estimation areas. The results show that estimates for low-density traffic flow (i.e., LOS A and LOS B) are unreliable because of the limited maximum identification distance of the ADAS camera. The reliability of the estimates improves with more lanes, higher traffic demand, and higher penetration rates, but artificially raising penetration rates is unrealistic. Reliability can also be improved by extending the time dimension of the estimation area, although the most influential factor is the driving behavior of the ADAS vehicle. In conclusion, although traffic flow cannot yet be estimated accurately with an ADAS camera alone, its applicability will expand as its performance and functions improve.
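
As a small illustration of the evaluation described above (not the study's pipeline), the sketch below derives a density estimate from assumed leading-vehicle spacings and scores it with an NRMSE; normalizing by the ground-truth mean is an assumption, since NRMSE normalizations vary.

```python
import numpy as np

def estimate_density(spacings_m):
    """Traffic density (veh/km) from leading-vehicle spacings measured by an
    ADAS camera: density is roughly the inverse of the mean spacing."""
    return 1000.0 / np.mean(spacings_m)

def nrmse(estimates, ground_truth):
    """Normalized root mean square error; here normalized by the mean of the
    ground truth (the normalization choice is an assumption)."""
    rmse = np.sqrt(np.mean((np.asarray(estimates) - np.asarray(ground_truth)) ** 2))
    return rmse / np.mean(ground_truth)

# Hypothetical per-interval density estimates from probe vehicles vs. detector truth.
est = [estimate_density([44.0, 40.5]), 30.1, 41.0, 55.2]
true = [24.0, 29.0, 44.0, 52.0]
print(nrmse(est, true))
```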