• Title/Summary/Keyword: 카메라 기반 주행 (camera-based driving)

Search Results: 165

3D Terrain Reconstruction Using 2D Laser Range Finder and Camera Based on Cubic Grid for UGV Navigation (무인 차량의 자율 주행을 위한 2차원 레이저 거리 센서와 카메라를 이용한 입방형 격자 기반의 3차원 지형형상 복원)

  • Joung, Ji-Hoon;An, Kwang-Ho;Kang, Jung-Won;Kim, Woo-Hyun;Chung, Myung-Jin
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.6 / pp.26-34 / 2008
  • Traversability and path-planning information is essential for UGV (Unmanned Ground Vehicle) navigation, and it can be obtained by analyzing 3D terrain. In this paper, we present a method of 3D terrain modeling that uses less data by combining color information from a camera, precise distance information from a 2D Laser Range Finder (LRF), and wheel-encoder information from a mobile robot. We also present a method of 3D terrain modeling, again with less data, using information from a GPS/IMU and a 2D LRF. To fuse the color information from the camera with the distance information from the 2D LRF, we obtain the extrinsic parameters between the camera and the LRF using a planar pattern. We set up the fused system on a mobile robot and conduct an indoor experiment, and we conduct an outdoor experiment to reconstruct 3D terrain with the 2D LRF and GPS/IMU (Inertial Measurement Unit). The resulting 3D terrain model is point-based and requires a large amount of data; to reduce the amount of data, we use a cubic grid-based model instead of a point-based model.
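The cubic grid-based reduction this abstract describes amounts to voxel downsampling: every point falling in the same cubic cell is replaced by the cell centroid. A minimal sketch (not the authors' implementation; the cell size and point layout are illustrative assumptions):

```python
import numpy as np

def cubic_grid_downsample(points, cell=0.1):
    """Replace all points that fall in the same cubic cell with their centroid.

    points: (N, 3) array of XYZ coordinates; cell: grid edge length in metres.
    """
    keys = np.floor(points / cell).astype(np.int64)        # cell index per point
    # Group points by cell index and average each group.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_cells = inverse.max() + 1
    sums = np.zeros((n_cells, 3))
    counts = np.zeros(n_cells)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]
```

With a 0.1 m cell, two nearby LRF returns collapse to one stored point while a distant return survives unchanged, which is the data reduction the paper is after.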

A Study on Radar Video Fusion Systems for Pedestrian and Vehicle Detection (보행자 및 차량 검지를 위한 레이더 영상 융복합 시스템 연구)

  • Sung-Youn Cho;Yeo-Hwan Yoon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.197-205 / 2024
  • At a time when securing driving safety is the most important issue in the development and commercialization of autonomous vehicles, AI and big data-based algorithms are being studied to advance and optimize the recognition and detection of the various static and dynamic vehicles in front of and around the ego vehicle. Many studies recognize the same vehicle by exploiting the complementary advantages of radar and camera, but they either do not use deep-learning image processing or, due to radar performance limitations, can match targets only at short range. Therefore, a fusion-based vehicle recognition method is needed that builds datasets collectable from radar and camera equipment, calculates the error between those datasets, and recognizes detections as the same target. Because data errors arise when objects are judged to be identical depending on the installation locations of the radar and the CCTV camera, in this paper we aim to develop a technology that can associate location information according to the installation location.
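The same-target association step the abstract calls for can be sketched as a greedy nearest-neighbour match between radar and camera detections on the ground plane, with a distance gate rejecting pairs whose positional error is too large. This is a generic illustration under assumed coordinates, not the paper's method:

```python
import math

def associate(radar_dets, camera_dets, gate=2.0):
    """Greedy nearest-neighbour association of radar and camera detections.

    Each detection is an (x, y) ground-plane position in metres; pairs farther
    apart than `gate` are left unmatched. Returns (radar_idx, cam_idx, error).
    """
    pairs, used = [], set()
    for i, r in enumerate(radar_dets):
        best, best_d = None, gate
        for j, c in enumerate(camera_dets):
            if j in used:
                continue
            d = math.dist(r, c)           # positional error between the two sensors
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best, best_d))
            used.add(best)
    return pairs
```

Here the gate plays the role of the "dataset error" bound: a radar track 0.5 m from a camera detection is fused, while one 20 m away is treated as a different target.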

Development of Simulation Method to Design Rover's Camera System for Extreme Region Exploration (극한지 탐사 로버의 카메라 시스템 설계를 위한 시뮬레이션 기법 개발)

  • Kim, Changjae;Park, Jaemin;Choi, Kanghyuk;Shin, Hyu-Soung;Hong, Sungchul
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.12 / pp.271-279 / 2019
  • In extreme environments, unmanned rovers equipped with various sensors and devices are being developed for long-term exploration on behalf of humans. However, due to harsh weather conditions and rough terrain, a rover camera has a limited visible distance and field of view, so the rover's cameras should be positioned to support safe navigation and efficient terrain mapping. Accordingly, to minimize the cost and time of manufacturing the camera system on a rover, a simulation method based on the rover design is presented to efficiently optimize the camera locations. In the simulation, a simulated terrain was imaged by cameras at different locations and angles, and the visible distance, the overlapping extent of the camera images, and the terrain-data accuracy calculated from the simulation were compared to determine the optimal locations of the rover's cameras. The simulated results will be used to manufacture a rover and camera system. In addition, self- and system calibrations will be conducted to calculate the accurate position of the camera system on the rover.
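The visible-distance quantity compared in such a simulation follows from simple mount geometry: a camera at height h, tilted down by a pitch angle, sees the ground between the intersections of its upper and lower field-of-view rays. A hedged sketch with assumed mount parameters (not the paper's simulator):

```python
import math

def ground_footprint(height, pitch_deg, vfov_deg):
    """Nearest and farthest visible ground distance for a downward-tilted camera.

    height: mount height (m); pitch_deg: downward tilt of the optical axis;
    vfov_deg: vertical field of view. Returns (near, far); far is infinite when
    the upper ray points at or above the horizon.
    """
    pitch = math.radians(pitch_deg)
    half = math.radians(vfov_deg) / 2.0
    near = height / math.tan(pitch + half)   # steepest (lowest) ray
    upper = pitch - half                     # shallowest ray, toward the horizon
    far = height / math.tan(upper) if upper > 0 else math.inf
    return near, far
```

Sweeping `height` and `pitch_deg` over candidate mounting points and comparing the resulting footprints (and their overlaps between cameras) is the kind of trade-off the simulation evaluates.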

Performance Analysis of Vision-based Positioning Assistance Algorithm (비전 기반 측위 보조 알고리즘의 성능 분석)

  • Park, Jong Soo;Lee, Yong;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.3 / pp.101-108 / 2019
  • Due to recent improvements in computer processing speed and image processing technology, research is being actively carried out on combining information from cameras with existing GNSS (Global Navigation Satellite System) and dead reckoning. In this study, we developed a vision-based positioning assistance algorithm that estimates the distance to objects from stereo images. In addition, a GNSS/on-board vehicle sensor/vision-based positioning algorithm was developed by combining the vision-based algorithm with an existing positioning algorithm. For the performance analysis, the velocity calculated from an actual driving test was used to correct the navigation solution, and simulation tests were performed to analyze the effect of velocity precision. The analysis confirmed that position accuracy improves by about 4% when vision information is added, compared to the existing GNSS/on-board sensor-based positioning algorithm.
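The core stereo measurement behind such an algorithm is the standard rectified-stereo range equation, Z = f·B/d (focal length in pixels, baseline in metres, disparity in pixels). A minimal sketch with illustrative numbers, not the authors' calibration:

```python
def stereo_distance(focal_px, baseline_m, disparity_px):
    """Distance to a point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def velocity_from_ranges(z1, z2, dt):
    """Relative closing speed from two range measurements dt seconds apart."""
    return (z1 - z2) / dt
```

Differencing successive ranges to a static object gives the vision-derived velocity that can then be fed into the navigation-solution correction described above.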

Object Detection of AGV in Manufacturing Plants using Deep Learning (딥러닝 기반 제조 공장 내 AGV 객체 인식에 대한 연구)

  • Lee, Gil-Won;Lee, Hwally;Cheong, Hee-Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.1 / pp.36-43 / 2021
  • In this research, the accuracy of the YOLO v3 algorithm for object detection during AGV (Automated Guided Vehicle) operation was investigated. First, an AGV with a 2D LiDAR and a stereo camera was prepared. The AGV was driven along a route scanned with SLAM (Simultaneous Localization and Mapping) using the 2D LiDAR, while objects in front were detected through the stereo camera. To evaluate the accuracy of the YOLO v3 algorithm, its recall, AP (Average Precision), and mAP (mean Average Precision) were measured as training progressed. Experimental results show that mAP, precision, and recall improve by 10%, 6.8%, and 16.4%, respectively, when YOLO v3, initially fitted with 4,000 training images and 500 test images collected through online search, is additionally trained with 1,200 images collected from the stereo camera on the AGV.
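The precision and recall figures reported here come from matching predicted boxes to ground truth at an IoU threshold. A generic sketch of that evaluation step (greedy one-to-one matching; not the authors' evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, thr=0.5):
    """Greedy one-to-one matching of predictions to ground truth at IoU >= thr."""
    matched, tp = set(), 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= thr:
                matched.add(i)
                tp += 1
                break
    fp = len(preds) - tp        # unmatched predictions
    fn = len(gts) - tp          # missed ground-truth objects
    return tp / (tp + fp), tp / (tp + fn)
```

AP then integrates precision over recall as the confidence threshold varies, and mAP averages AP over classes.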

Design and Implementation of ontology based context-awareness platform using driver intent information (운전자 의도정보를 이용한 온톨로지 기반 지능형자동차 상황인식 플랫폼 설계 및 구현)

  • Ko, Jae-Jin;Choi, Ki-Ho
    • Journal of Advanced Navigation Technology / v.18 no.1 / pp.14-21 / 2014
  • In this paper, we devise a new ontology-based context-aware system for smart-car information, in which the driver's intent is inferred from information about the car, the driver, and the environment, as well as the driving state and driver state. The proposed system can therefore handle dynamic risk changes by adding real-time situational-awareness information. We use camera-based image recognition to obtain context-aware intelligent-vehicle driving information, and implement information acquisition through the OBD-II protocol to obtain the vehicle's information. Experiments confirm that the proposed advanced driver-safety assistance system outperforms a conventional system, which uses only vehicle, driver, and environmental information, in supporting high-speed driving, lane-departure, and emergency-braking situation-awareness services.
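The benefit of adding driver intent to the context model can be illustrated with a toy rule set: a lane departure triggers a warning only when the inferred intent is not a deliberate lane change. This is a simplified illustration of the idea, not the paper's ontology or its actual rules:

```python
def assess_risk(context):
    """Toy context-awareness rules combining vehicle state and driver intent."""
    if context.get("emergency_braking_ahead"):
        return "brake_warning"
    # Suppress the lane-departure alarm when the driver intends to change lanes.
    if context.get("lane_departure") and context.get("driver_intent") != "lane_change":
        return "lane_departure_warning"
    if context.get("speed_kmh", 0) > context.get("speed_limit_kmh", 100):
        return "speed_warning"
    return "normal"
```

An ontology-based system expresses the same kind of rules declaratively over classes and relations, which lets new context sources be added without rewriting the inference code.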

A study on stand-alone autonomous mobile robot using mono camera (단일 카메라를 사용한 독립형 자율이동로봇 개발)

  • 정성보;이경복;장동식
    • Journal of the Institute of Convergence Signal Processing / v.4 no.1 / pp.56-63 / 2003
  • This paper introduces a vision-based autonomous mini mobile robot as an approach to building a real autonomous vehicle. Previous autonomous vehicles depend on a PC because of the complexity of hardware design, the difficulty of installation, and heavy computation. In this paper, we present an autonomous mobile robot that, as a stand-alone system using a mono camera, is capable of accurate steering, quick movement at high speed, and intelligent recognition. The proposed system was implemented on a mini track 25~30 cm wide and about 200 cm long. The test robot runs at an average speed of 32.9 km/h on straight lanes and 22.3 km/h on curved lanes with a 30~40 m radius. This system provides a model of an autonomous mobile robot with a lane-recognition algorithm, in order to make real autonomous vehicles easier to build.
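A mono-camera lane follower of this kind typically steers toward the centroid of the lane pixels in a scan row of the binarized image. A hedged sketch of that control law (illustrative, not the paper's algorithm; the gain and row format are assumptions):

```python
def steering_from_lane_row(row, gain=1.0):
    """Steering command from one binarized image row (1 = lane pixel).

    Returns a value in [-1, 1]: negative steers left, positive steers right,
    proportional to the lane centroid's offset from the image centre.
    """
    lane = [i for i, v in enumerate(row) if v]
    if not lane:
        return 0.0                         # no lane found: hold course
    centre = (len(row) - 1) / 2.0
    centroid = sum(lane) / len(lane)
    return max(-1.0, min(1.0, gain * (centroid - centre) / centre))
```

Keeping the computation to a handful of arithmetic operations per frame is what makes such a controller feasible on a stand-alone (non-PC) platform.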


A Camera Based Traffic Signal Generating Algorithm for Safety Entrance of the Vehicle into the Joining Road (차량의 안전한 합류도로 진입을 위한 단일 카메라 기반 교통신호 발생 알고리즘)

  • Jeong Jun-Ik;Rho Do-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.4 s.310 / pp.66-73 / 2006
  • Safety is the most important concern for all traffic management and control technology. This paper focuses on developing a flexible, reliable, real-time algorithm that generates a signal for vehicles entering a joining road, using a camera and image processing techniques. The images obtained from a camera located beside and above the road can be used for traffic surveillance, measurement of vehicle travel speed, and prediction of arrival time in the merge area between the main road and the joining road. The proposed algorithm displays the confluence safety signal as a red, blue, or yellow sign. Three methods are used to detect a vehicle driving in the designated detection area: a gray-scale normalized correlation algorithm, an edge-magnitude-ratio change algorithm, and an average-intensity change algorithm. The real-time prototype confluence safety-signal generation algorithm was implemented as a program running on stored digital image sequences of real traffic, with good experimental results.
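The first of the three detection methods, gray-scale normalized correlation, compares the current detection-area patch against a reference template. A minimal sketch of zero-mean normalized cross-correlation with an assumed threshold (not the paper's tuned parameters):

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equal-size patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def vehicle_present(patch, template, thr=0.8):
    """Flag a vehicle in the detection area when correlation exceeds a threshold."""
    return ncc(patch, template) >= thr
```

The edge-magnitude-ratio and average-intensity methods follow the same pattern: compute a scalar statistic of the detection area and threshold its change against the empty-road reference.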

CNN-LSTM based Autonomous Driving Technology (CNN-LSTM 기반의 자율주행 기술)

  • Ga-Eun Park;Chi Un Hwang;Lim Se Ryung;Han Seung Jang
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.6 / pp.1259-1268 / 2023
  • This study proposes a throttle and steering control technology that uses visual sensors with deep-learning convolutional and recurrent neural networks. Camera images and control-value data are collected while driving a training track in clockwise and counterclockwise directions, and a model that predicts throttle and steering is generated through data sampling and preprocessing for efficient learning. The model was then validated on a test track in a different environment, not used for training, to find the optimal model and compare it with a plain CNN (Convolutional Neural Network). As a result, we found that the proposed deep-learning model shows excellent performance.
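The recurrent half of a CNN-LSTM controller consumes one CNN feature vector per frame and carries a hidden state across the frame sequence. A bare-bones NumPy sketch of a single LSTM step, with assumed dimensions and zero-initialized weights purely for illustration (any real model would use a deep-learning framework and trained weights):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: x is the CNN feature vector for the current frame.

    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias;
    the four gate blocks (input, forget, output, candidate) are stacked.
    """
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))
    z = W @ x + U @ h + b                # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new
```

A small linear head on the final hidden state then maps h to the (throttle, steering) pair; the CNN-only baseline omits the recurrence and predicts from each frame independently.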

Development of Multi-Camera based Mobile Mapping System for HD Map Production (정밀지도 구축을 위한 다중카메라기반 모바일매핑시스템 개발)

  • Hong, Ju Seok;Shin, Jin Soo;Shin, Dae Man
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.587-598 / 2021
  • This study aims to develop a multi-camera based MMS (Mobile Mapping System) technology for building, and quickly updating, an HD (High Definition) map for autonomous driving. To replace expensive lidar sensors and reduce long processing times, we develop a low-cost, efficient MMS by applying multiple cameras and real-time data pre-processing. To this end, multi-camera storage technology, multi-camera time-synchronization technology, and an MMS prototype were developed. We developed a storage module for real-time JPG compression of the high-speed images acquired from multiple cameras, and developed an event-signal and GNSS (Global Navigation Satellite System) time-server-based synchronization method to record the exposure times of multiple images taken in real time. Based on the requirements of each component, the MMS was designed and prototypes were produced. Finally, to verify the performance of the manufactured multi-camera based MMS, data were acquired over an actual 1,000 km of road and a quantitative evaluation was performed. The evaluation showed a time-synchronization performance of better than 1/1000 second, and the position accuracy of the point cloud obtained through SFM (Structure from Motion) image processing was around 5 cm. These results show that the multi-camera based MMS technology developed in this study satisfies the criteria for building an HD map.
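The 1/1000-second synchronization criterion can be checked by pairing each recorded exposure timestamp with the nearest GNSS-time-stamped event and testing the offset against the tolerance. A simple sketch of that verification step, with illustrative timestamps (not the authors' evaluation code):

```python
def sync_offsets(exposure_times, gnss_times, tol=0.001):
    """Pair each exposure timestamp with the nearest GNSS-stamped event.

    Returns (exposure_t, gnss_t, offset, within_tol) per exposure, where
    within_tol applies the 1/1000-second synchronization criterion.
    """
    pairs = []
    for t in exposure_times:
        nearest = min(gnss_times, key=lambda g: abs(g - t))
        offset = abs(nearest - t)
        pairs.append((t, nearest, offset, offset <= tol))
    return pairs
```

Running this over a full acquisition log gives the worst-case offset, which is the figure compared against the 1/1000-second requirement above.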