• Title/Summary/Keyword: camera-based driving (카메라 기반 주행)


Performance Improvement of Pedestrian Detection using a GM-PHD Filter (GM-PHD 필터를 이용한 보행자 탐지 성능 향상 방법)

  • Lee, Yeon-Jun;Seo, Seung-Woo
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.12 / pp.150-157 / 2015
  • Pedestrian detection has been widely researched as one of the key technologies for autonomous vehicles and accident prevention. Pedestrian detection methods fall into two categories, camera-based and LIDAR-based. LIDAR-based methods have the advantages of a wide field of view and insensitivity to illumination change, which camera-based methods lack. However, 3D LIDAR suffers from several problems, such as insufficient resolution for detecting distant pedestrians and a drop in detection rate in complex scenes caused by segmentation errors and occlusion. In this paper, two methods using the GM-PHD filter are proposed to improve the detection rate of pedestrian detection algorithms based on 3D LIDAR. The first improves detection performance and object resolution by automatically accumulating points from previous frames onto the current objects. The second further enhances the detection results by applying a GM-PHD filter modified to handle difficult multi-target classification situations. A quantitative evaluation with road environment data acquired during autonomous driving shows that the proposed methods substantially increase the performance of existing pedestrian detection algorithms.
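
The GM-PHD recursion the abstract builds on propagates a Gaussian-mixture intensity through a predict/update cycle. A minimal, generic sketch of one such cycle is given below; it is not the authors' modified filter, and the linear motion model, the probability values, and the omission of birth, pruning, and merging steps are illustrative assumptions.

```python
# A minimal, illustrative GM-PHD predict/update step for 2D target positions.
# Generic sketch of the standard GM-PHD recursion, not the authors' modified
# filter; all parameter values are assumptions, and birth/pruning/merging of
# components are omitted for brevity.
import numpy as np

def gaussian_likelihood(z, mean, cov):
    """Evaluate N(z; mean, cov) for a 2D measurement."""
    diff = z - mean
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)

def gmphd_step(components, measurements, F, Q, H, R,
               p_survive=0.99, p_detect=0.9, clutter_density=1e-4):
    """One predict/update cycle over a list of (weight, mean, cov) components."""
    # Prediction: propagate every component through the motion model
    predicted = [(p_survive * w, F @ m, F @ P @ F.T + Q) for w, m, P in components]

    # Missed-detection terms: keep predicted components with reduced weight
    updated = [((1.0 - p_detect) * w, m, P) for w, m, P in predicted]

    # Measurement update: one weighted component per (measurement, component) pair
    for z in measurements:
        new_comps, likelihood_sum = [], 0.0
        for w, m, P in predicted:
            S = H @ P @ H.T + R                    # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            q = gaussian_likelihood(z, H @ m, S)   # measurement likelihood
            new_comps.append((p_detect * w * q,
                              m + K @ (z - H @ m),
                              (np.eye(len(m)) - K @ H) @ P))
            likelihood_sum += p_detect * w * q
        # Normalise against clutter intensity plus the total detection likelihood
        updated += [(wq / (clutter_density + likelihood_sum), m, P)
                    for wq, m, P in new_comps]
    return updated
```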

Development of LiDAR-Based MRM Algorithm for LKS System (LKS 시스템을 위한 라이다 기반 MRM 알고리즘 개발)

  • Son, Weon Il;Oh, Tae Young;Park, Kihong
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.20 no.1 / pp.174-192 / 2021
  • The LIDAR sensor provides higher perception performance than cameras and radar, but its high price has made it difficult to apply to ADAS or autonomous driving. On the other hand, as its price drops rapidly, expectations are rising that existing autonomous driving functions can be improved by taking advantage of the LIDAR sensor. In a level 3 autonomous vehicle, when a dangerous situation arises in the perception module due to a sensor defect or sensor limitation, the driver must take back control of the vehicle for manual driving. If the driver does not respond to the request, the system must automatically intervene and execute a minimum risk maneuver to keep the risk within a tolerable level. Against this background, this study developed a LIDAR-based LKS MRM algorithm for the case in which normal operation of the LKS is not possible because of trouble in the perception system. From point cloud data collected by the LIDAR, the algorithm generates the trajectory of the vehicle ahead through object clustering and converts it into target waypoints for the ego vehicle. Hence, if the camera-based LKS is not operating normally, LIDAR-based path tracking control is performed as the MRM. The HAZOP method was used to identify risk sources in the LKS perception system, and test scenarios were derived from them and used in a simulation-based validation process. The simulation results indicated that the proposed LIDAR-based LKS MRM algorithm prevents lane departure in dangerous situations caused by various problems in the LKS perception system and could prevent possible traffic accidents.
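
The pipeline described in the abstract, clustering the point cloud, finding the preceding vehicle, and accumulating its positions as ego waypoints, could be sketched roughly as below; the DBSCAN clustering, the "ahead of ego" test, and the waypoint spacing are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of the MRM pipeline named in the abstract: cluster 2D LIDAR
# points, take the nearest cluster ahead of the ego vehicle as the preceding
# car, and accumulate its centroids as target waypoints for path tracking.
# Clustering method and all thresholds are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

def preceding_vehicle_centroid(points_xy):
    """Cluster 2D LIDAR points and return the centroid of the nearest cluster ahead."""
    labels = DBSCAN(eps=0.5, min_samples=5).fit(points_xy).labels_
    best, best_dist = None, np.inf
    for label in set(labels) - {-1}:                       # -1 marks noise points
        centroid = points_xy[labels == label].mean(axis=0)
        if 0.0 < centroid[0] < best_dist:                  # x > 0: ahead of the ego vehicle
            best, best_dist = centroid, centroid[0]
    return best

class WaypointBuffer:
    """Accumulate preceding-vehicle centroids as target waypoints for MRM tracking."""
    def __init__(self, spacing=1.0):
        self.spacing, self.waypoints = spacing, []

    def update(self, centroid):
        if centroid is None:
            return
        if not self.waypoints or np.linalg.norm(centroid - self.waypoints[-1]) > self.spacing:
            self.waypoints.append(centroid)
```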

A Study on Modeling and Traveling Controller Design for an RTGC (RTGC의 모델링 및 주행제어기 설계에 관한 연구)

  • Lee, Dong-Seok;Kim, Yeong-Bok;Jeong, Jeong-Sun;Jeong, Ji-Hyeon
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2010.04a / pp.280-281 / 2010
  • Efficient container transfer is the most important factor at ports, where working time translates directly into cost. Since the mid-1990s, ports around the world have therefore developed yard cranes such as the RMGC (Rail-Mounted Gantry Crane) and RTGC (Rubber-Tired Gantry Crane) and widely used them for stacking containers in the yard. In particular, many recent studies have pursued unmanned operation of transfer cranes using CCD cameras and various sensors, and many of the resulting technologies are already in practical use in the field, but a number of issues still require research. For the RTGC in particular, research and development on unmanned automation remains insufficient compared with the RMGC. This paper therefore designs a high-precision traveling controller based on mathematical modeling, which can be regarded as the most basic step toward unmanned automated operation of an RTGC.


Applying SIFT Feature to Occlusion, Damage and Rotation Invariant Traffic Sign Recognition (겹침과 훼손, 회전에 강건한 교통표지판 인식을 위한 SIFT 적용 방법)

  • Kim, Sang-Chul;Lee, Je-Min;Kim, Dae-Youn;Nang, Jong-Ho
    • Proceedings of the Korean Information Science Society Conference / 2012.06b / pp.351-353 / 2012
  • Traffic signs provide discriminative information for road driving. While driving, however, traffic signs are often occluded by street trees or other vehicles, or are damaged. In addition, when the vehicle turns, the sign appears as a rotated object in the camera image. Because traffic sign recognition is difficult in these cases, this paper proposes a matching method that uses features robust to all of these problems. Based on the proposed method, we expect that more discriminative information can be obtained from driving images and applied to a wider range of applications.
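
A generic OpenCV sketch of the kind of SIFT matching the abstract relies on is shown below; the Lowe ratio threshold, the match-count criterion, and the file paths are illustrative assumptions rather than the paper's parameters.

```python
# A generic SIFT keypoint-matching sketch (OpenCV >= 4.4), illustrating the kind
# of occlusion/rotation-tolerant matching the abstract builds on. File paths,
# the Lowe ratio threshold, and the acceptance count are assumptions.
import cv2

def match_sign(template_path, scene_path, ratio=0.75, min_matches=10):
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)   # reference sign image
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)         # driving frame
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(template, None)
    kp2, des2 = sift.detectAndCompute(scene, None)

    # Brute-force matcher with Lowe's ratio test to discard ambiguous matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]

    # Enough consistent keypoints -> the sign is considered recognized even if
    # it is partially occluded, damaged, or rotated in the scene image.
    return len(good) >= min_matches, good
```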

Image processing algorithm for preceding vehicle detection based on DLI (선형차량 인식을 위한 DLI 기반의 영상처리 알고리즘)

  • Hwang, H.J.;Baek, H.R.;Yi, U.K.
    • Proceedings of the KIEE Conference / 2003.07d / pp.2459-2461 / 2003
  • This paper presents a new algorithm for recognizing obstacles in the driving lane from road images captured by two CCD cameras installed in a vehicle. The proposed algorithm defines the DLI (Disparity of Lane-related Information), a disparity function that extracts disparity from a stereo image pair using only the lane information related to the driving lane. Exploiting the property that obstacles such as a preceding vehicle have relatively larger edge values than their surroundings, the DLI detects the presence of an obstacle in the driving lane and infers its position. The proposed method has the advantage of greatly reducing the search space for feature points, making real-time processing feasible. To verify the performance of the DLI-based preceding vehicle detection technique, the algorithm was applied to road images from various environments, confirming the effectiveness of the proposed method.
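
The DLI function itself is specific to this paper, but the underlying idea of computing stereo disparity only from the lane-related region can be approximated with a generic sketch such as the one below; the StereoBM block matcher, the lane ROI polygon, and the obstacle threshold are stand-ins and are not the DLI.

```python
# A generic stereo-disparity sketch restricted to a lane region of interest.
# This is NOT the paper's DLI function; it only illustrates computing disparity
# from lane-related image regions. The ROI polygon, StereoBM parameters, and
# obstacle threshold are all assumptions.
import cv2
import numpy as np

def lane_roi_disparity(left_gray, right_gray, lane_polygon):
    """Compute disparity only inside the driving-lane polygon (8-bit gray inputs)."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

    mask = np.zeros_like(left_gray)
    cv2.fillPoly(mask, [lane_polygon.astype(np.int32)], 255)   # keep the lane area only
    disparity[mask == 0] = 0.0
    return disparity

def obstacle_present(disparity, min_disparity=5.0, min_pixels=500):
    """Flag an obstacle if enough lane pixels are closer than a disparity threshold."""
    return int((disparity > min_disparity).sum()) >= min_pixels
```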


Implementation of Autonomous Ride-On Toy Car Algorithm Based on Arduino (아두이노 기반의 자율주행 유아용전동차 알고리즘 구현)

  • Jiye Choi;Minseo Lee;Nari Hong;Hyeyeon Lee;Il Yong Chun
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.1153-1154 / 2023
  • This paper seeks a way for a ride-on toy car to drive autonomously, using an Arduino, on a track resembling a real road environment. Using a LiDAR and a camera, we completed an algorithm that follows the lane, avoids obstacles, stops and starts according to traffic light signals, and completes reverse parking.

Hardware implementation of automated haze removal method capable of real-time processing based on Hazy Particle Map (Hazy Particle Map 기반 실시간 처리 가능한 자동화 안개 제거방법의 하드웨어 구현)

  • Sim, Hwi-Bo;Kang, Bong-Soon
    • Journal of IKEEE / v.26 no.3 / pp.401-407 / 2022
  • Recently, image processing technology that recognizes objects and lanes from camera images has been studied to realize autonomous vehicles. Haze reduces the visibility of images captured by the camera and can cause malfunctions in autonomous vehicles. To solve this, a haze removal function that can be processed in real time needs to be applied to the camera. In this paper, therefore, the high-performance haze removal method of Sim is implemented in hardware capable of real-time processing. The proposed hardware was designed in Verilog HDL, and the FPGA implementation targeted Xilinx's xc7z045-2ffg900 device. Logic synthesis with the Xilinx Vivado tool shows a maximum operating frequency of 276.932 MHz and a maximum processing speed of 31.279 fps in a 4K (4096×2160) high-resolution environment, satisfying the real-time processing requirement.
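
The reported figures are consistent with a roughly one-pixel-per-clock pipeline. Assuming that throughput (the abstract does not state the pipeline width), the frame rate follows directly from the clock frequency and the frame size:

```python
# Quick consistency check of the reported figures, assuming a throughput of
# one pixel per clock cycle (an assumption; the abstract does not state it).
clock_hz = 276.932e6            # reported maximum operating frequency
pixels_per_frame = 4096 * 2160  # 4K frame size used in the paper

fps = clock_hz / pixels_per_frame
print(f"{fps:.3f} fps")         # ~31.30 fps, close to the reported 31.279 fps
```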

Development of autonomous mobile patrol robot using SLAM (SLAM을 이용한 자율주행 순찰 로봇 개발)

  • Yun, Tae-Jin;Woo, Seon-jin;Kim, Cheol-jin;Kim, Ill-kwon;Lee, Sang-yoon
    • Proceedings of the Korean Society of Computer Information Conference / 2019.07a / pp.437-438 / 2019
  • In this paper, a laser range sensor (LiDAR) is mounted on a ROS (Robot Operating System)-based robot, and map information is acquired and stored with SLAM (Simultaneous Localization and Mapping), so that the robot can patrol the indoor corridors of a building safely and accurately among obstacles. In addition, using a Raspberry Pi camera mounted on the patrol robot and OpenCV image recognition, a system was developed that patrols indoor corridors with real-time video and detects and records predefined anomalies when they are found.


A Study on Transport Robot for Autonomous Driving to a Destination Based on QR Code in an Indoor Environment (실내 환경에서 QR 코드 기반 목적지 자율주행을 위한 운반 로봇에 관한 연구)

  • Se-Jun Park
    • Journal of Platform Technology / v.11 no.2 / pp.26-38 / 2023
  • This paper studies a transport robot capable of driving autonomously to a destination using QR codes in an indoor environment. The transport robot was designed and built with a lidar sensor, in addition to the camera used for recognizing the QR codes, so that it can keep a constant distance from the left and right walls while moving. For the location information of the transport robot, the QR code image was enlarged with Lanczos resampling interpolation, binarized with the Otsu algorithm, and then detected and decoded using the Zbar library. The QR code recognition experiment was performed while varying the QR code size and the traveling speed of the robot, with the camera position of the robot and the height of the QR code fixed at 192 cm. When the QR code size was 9 cm × 9 cm, the recognition rate was 99.7%, and it was almost 100% when the traveling speed of the robot was below about 0.5 m/s. Based on this recognition rate, autonomous driving to the destination was tested, in the absence of obstacles, for two cases: a destination reachable by going straight only, and a destination requiring both straight driving and turning. When the destination only required going straight, the robot reached it quickly because little position correction was needed; when the route included a turn, arrival was relatively delayed because position correction was required. The experiments showed that the transport robot reached its destination fairly accurately despite small position errors during driving, confirming the applicability of a QR code-based self-driving transport robot.
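
The preprocessing chain named in the abstract (Lanczos upscaling, Otsu binarization, Zbar decoding) can be sketched as below; the 2x scale factor and the use of the pyzbar wrapper for the Zbar library are assumptions not specified in the abstract.

```python
# A minimal sketch of the QR preprocessing/decoding chain named in the abstract:
# Lanczos upscaling, Otsu binarization, then Zbar decoding (via the pyzbar
# wrapper). The 2x scale factor is an assumption; the paper does not state it.
import cv2
from pyzbar import pyzbar

def decode_qr(gray_frame, scale=2.0):
    # Enlarge with Lanczos resampling to help with small or distant QR codes
    enlarged = cv2.resize(gray_frame, None, fx=scale, fy=scale,
                          interpolation=cv2.INTER_LANCZOS4)
    # Binarize with Otsu's automatically chosen threshold
    _, binary = cv2.threshold(enlarged, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Detect and decode with the Zbar library; return the payload strings
    return [d.data.decode("utf-8") for d in pyzbar.decode(binary)]
```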


3D Terrain Reconstruction Using 2D Laser Range Finder and Camera Based on Cubic Grid for UGV Navigation (무인 차량의 자율 주행을 위한 2차원 레이저 거리 센서와 카메라를 이용한 입방형 격자 기반의 3차원 지형형상 복원)

  • Joung, Ji-Hoon;An, Kwang-Ho;Kang, Jung-Won;Kim, Woo-Hyun;Chung, Myung-Jin
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.6 / pp.26-34 / 2008
  • Traversability information and path planning are essential for UGV (Unmanned Ground Vehicle) navigation, and such information can be obtained by analyzing 3D terrain. In this paper, we present a method for 3D terrain modeling that combines color information from a camera, precise distance information from a 2D Laser Range Finder (LRF), and wheel encoder information from a mobile robot, using a reduced amount of data. We also present a method of 3D terrain modeling from GPS/IMU and 2D LRF information, again with a reduced amount of data. To fuse the color information from the camera with the distance information from the 2D LRF, we estimate the extrinsic parameters between the camera and the LRF using a planar pattern. We mount the fused system on a mobile robot and experiment in an indoor environment, and we also experiment in an outdoor environment to reconstruct 3D terrain with the 2D LRF and GPS/IMU (Inertial Measurement Unit). The resulting 3D terrain model is point-based and requires a large amount of data, so we use a cubic grid-based model instead of the point-based model to reduce the data volume.
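
The cubic grid idea amounts to replacing the many points that fall in one grid cell with a single representative point and color. A minimal numpy sketch under that reading is given below; the 0.2 m cell size and the mean-based representative are illustrative assumptions.

```python
# A minimal sketch of the cubic-grid idea in the abstract: replace a dense,
# colored point cloud with one representative (mean) point and color per voxel
# to cut the amount of data. The 0.2 m cell size is an illustrative assumption.
import numpy as np

def cubic_grid_downsample(points, colors, cell_size=0.2):
    """points: (N,3) xyz in meters; colors: (N,3) RGB. Returns per-voxel means."""
    voxel_idx = np.floor(points / cell_size).astype(np.int64)   # integer voxel coordinates
    _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)

    n_voxels = inverse.max() + 1
    counts = np.bincount(inverse, minlength=n_voxels).astype(np.float64)

    # Accumulate per-voxel sums of positions and colors, then average
    mean_pts = np.zeros((n_voxels, 3))
    mean_rgb = np.zeros((n_voxels, 3))
    for axis in range(3):
        mean_pts[:, axis] = np.bincount(inverse, weights=points[:, axis], minlength=n_voxels) / counts
        mean_rgb[:, axis] = np.bincount(inverse, weights=colors[:, axis], minlength=n_voxels) / counts
    return mean_pts, mean_rgb
```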