• Title/Summary/Keyword: 영상 주행기록계 (visual odometry)


The Estimation of Operating Speed Classified by Design Speed Using Moving Image (동영상을 이용한 설계속도별 주행속도 산정)

  • Lee, Jong-Chool;Seo, Dong-Joo;Kim, Jin-Soo;Kim, Sung-Ho
    • 한국공간정보시스템학회 학술대회논문집 (Proceedings of the Korean Spatial Information System Society Conference) / 2005.05a / pp.413-417 / 2005
  • In this study, target roads with uninterrupted traffic flow were selected for each design speed, off-peak periods on those roads were set, and video recording was carried out to extract section operating speeds. The length of each target section was measured using digital maps, surveying, an odometer, and similar means, and vehicle section transit times were computed through video analysis to derive operating speeds by design speed. For verification, a vehicle equipped with DGPS was driven along the target roads and the video-derived operating speeds were compared against the DGPS measurements. (A minimal sketch of the section-speed calculation appears after this entry.)

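The core of the method above is a simple section-speed computation: the measured section length divided by the video-derived transit time. Below is a minimal sketch of that calculation; the function name, frame rate, and example numbers are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: section operating speed from video frame indices.
# Section length comes from digital maps/surveying; the frame rate is assumed.

def section_speed_kmh(section_length_m: float,
                      entry_frame: int,
                      exit_frame: int,
                      fps: float = 30.0) -> float:
    """Operating speed over a section, from the frames at which a vehicle enters and leaves it."""
    transit_time_s = (exit_frame - entry_frame) / fps   # section transit time
    return (section_length_m / transit_time_s) * 3.6    # m/s -> km/h

# Example: a 413 m section traversed in 450 frames at 30 fps -> about 99 km/h
print(section_speed_kmh(413.0, 1000, 1450))
```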

A Study on the Visual Odometer using Ground Feature Point (지면 특징점을 이용한 영상 주행기록계에 관한 연구)

  • Lee, Yoon-Sub;Noh, Gyung-Gon;Kim, Jin-Geol
    • Journal of the Korean Society for Precision Engineering / v.28 no.3 / pp.330-338 / 2011
  • Odometry is a critical factor in estimating the location of a robot. In a wheeled mobile robot, odometry can be performed using information from the encoders. However, encoder-based location information is inaccurate because of errors caused by wheel misalignment or slip. In general, visual odometry has been used to compensate for the kinetic errors of the robot. When visual odometry is applied to a particular robot system, a kinetic analysis is required to compensate for these errors, which means that conventional visual odometry cannot easily be transferred to other types of robot systems. In this paper, a novel visual odometry method is proposed that employs only a single camera facing the ground, mounted at the center of the bottom of the mobile robot. Feature points of the ground image are extracted using a median filter and a color contrast filter. The linear and angular vectors of the mobile robot are then calculated by matching these feature points, and odometry is performed using the resulting vectors. The proposed odometry is verified through driving tests that compare the encoder readings with the new visual odometry. (A rough code sketch of such a ground-feature pipeline appears after this entry.)
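
As a rough illustration of a ground-facing, single-camera odometer of the kind described above, the sketch below matches feature points between consecutive ground images and recovers the robot's linear and angular increments. OpenCV ORB matching and a RANSAC partial-affine fit stand in for the paper's median/color-contrast filtering and matching; the pixel-to-metre scale is an assumed calibration constant.

```python
import cv2
import numpy as np

M_PER_PIXEL = 0.0005  # assumed calibration: metres per ground-plane pixel

def frame_motion(prev_gray: np.ndarray, curr_gray: np.ndarray):
    """Return (dx_m, dy_m, dtheta_rad) between two consecutive ground frames."""
    orb = cv2.ORB_create(500)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Rigid-body (rotation + translation) fit; RANSAC rejects bad matches.
    M, _ = cv2.estimateAffinePartial2D(pts1, pts2, method=cv2.RANSAC)
    if M is None:
        return 0.0, 0.0, 0.0                      # no reliable motion estimate
    dtheta = np.arctan2(M[1, 0], M[0, 0])         # angular increment
    dx, dy = M[0, 2] * M_PER_PIXEL, M[1, 2] * M_PER_PIXEL  # linear increment
    return dx, dy, dtheta
```

Integrating these increments frame by frame yields the pose estimate that would otherwise come from the wheel encoders.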

Benchmark for Deep Learning based Visual Odometry and Monocular Depth Estimation (딥러닝 기반 영상 주행기록계와 단안 깊이 추정 및 기술을 위한 벤치마크)

  • Choi, Hyukdoo
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.114-121 / 2019
  • This paper presents a new benchmark system for visual odometry (VO) and monocular depth estimation (MDE). As deep learning has become a key technology in computer vision, many researchers are trying to apply deep learning to VO and MDE. Just a couple of years ago, the two problems were studied independently in a supervised way, but now they are coupled and trained together in an unsupervised way. However, before designing fancy models and losses, datasets have to be customized for training and testing, and after training, the model has to be compared with existing models, which is also a huge burden. The benchmark provides an input dataset, ready to use for VO and MDE research in 'tfrecords' format, and an output dataset that includes model checkpoints and inference results of the existing models. It also provides various tools for data formatting, training, and evaluation. In the experiments, the existing models were evaluated to verify the performance reported in the corresponding papers, and we found that the evaluation results fall short of the reported performance. (An illustrative snippet for reading a tfrecords-style dataset appears after this entry.)
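
For context on the 'tfrecords' input format mentioned above, the snippet below shows how such a dataset could be read with TensorFlow. The feature keys, shapes, and file name are hypothetical and do not reflect the benchmark's actual schema.

```python
import tensorflow as tf

# Hypothetical per-example schema: a JPEG-encoded frame and a 6-DoF ground-truth pose.
feature_spec = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "pose":  tf.io.FixedLenFeature([6], tf.float32),
}

def parse_example(serialized):
    ex = tf.io.parse_single_example(serialized, feature_spec)
    image = tf.io.decode_jpeg(ex["image"], channels=3)
    return image, ex["pose"]

# 'train.tfrecords' is a placeholder file name.
dataset = (tf.data.TFRecordDataset("train.tfrecords")
           .map(parse_example)
           .batch(4))
```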

Development of 3D Point Cloud Mapping System Using 2D LiDAR and Commercial Visual-inertial Odometry Sensor (2차원 라이다와 상업용 영상-관성 기반 주행 거리 기록계를 이용한 3차원 점 구름 지도 작성 시스템 개발)

  • Moon, Jongsik;Lee, Byung-Yoon
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.3 / pp.107-111 / 2021
  • A 3D point cloud map is an essential element in various fields, including precise autonomous navigation systems. However, generating a 3D point cloud map with a single sensor is limited by the high price of such a sensor. To solve this problem, we propose a precise 3D mapping system using low-cost sensor fusion. Generating a point cloud map requires estimating the current position and attitude and describing the surrounding environment. In this paper, we utilized a commercial visual-inertial odometry sensor to estimate the current position and attitude states. Based on these state values, the 2D LiDAR measurements describe the surrounding environment to create the point cloud map. To analyze the performance of the proposed algorithm, we compared it with a 3D LiDAR-based SLAM (simultaneous localization and mapping) algorithm. As a result, it was confirmed that a precise 3D point cloud map can be generated with the low-cost sensor fusion system proposed in this paper. (A minimal sketch of the scan-projection step appears after this entry.)
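
The mapping step described above can be illustrated as follows: each 2D LiDAR scan is projected into the world frame using the position and attitude reported by the visual-inertial odometry sensor, and the transformed points are accumulated into the cloud. Frame conventions, variable names, and the scan format are assumptions made for illustration.

```python
import numpy as np

def scan_to_world(ranges: np.ndarray, angles: np.ndarray,
                  position: np.ndarray, rotation: np.ndarray) -> np.ndarray:
    """Transform one 2D LiDAR scan (ranges [m], bearings [rad]) into world coordinates.

    position: (3,) world position from the VIO sensor
    rotation: (3, 3) world-from-sensor rotation matrix from the VIO attitude
    """
    # Scan points lie in the sensor's own plane (z = 0 in the sensor frame).
    pts_sensor = np.stack([ranges * np.cos(angles),
                           ranges * np.sin(angles),
                           np.zeros_like(ranges)], axis=1)
    return pts_sensor @ rotation.T + position   # rotate, then translate

# Accumulating transformed scans over the trajectory yields the 3D point cloud map:
# cloud = np.vstack([scan_to_world(r, a, p, R) for r, a, p, R in synced_measurements])
```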

Development of a Data-logger Classifying Dangerous Drive Behaviors (위험 운전 유형 분류 및 데이터 로거 개발)

  • Oh, Ju-Taek;Cho, Jun-Hee;Lee, Sang-Yong;Kim, Young-Sam
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.7 no.3 / pp.15-28 / 2008
  • According to the accident statistics published by the National Police Agency in 2006, drivers' characteristics and driving behaviors are the main causal factors in traffic accidents. Although many recording tools, such as digital speedometers and black boxes, are on the market to meet the social demand for fewer traffic accidents and safer driving, clear categories of dangerous driving types are still lacking, and the effectiveness of the categories studied so far has been low. In this study, dangerous driving types are redefined: they are grouped into seven classes at the first level, and those seven classes are subdivided into 16 detailed types. To verify the redefined dangerous driving types, a data-logger is developed to receive and analyze the data produced by the driving behavior of a test vehicle. The developed data-logger can be used to build a real-time warning system and a safe-driving management system based on dangerous driving patterns derived from acceleration, deceleration, yaw rate, image data, etc. (An illustrative thresholding sketch appears after this entry.)

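As a loose illustration of how a data-logger might flag dangerous driving events from acceleration and yaw-rate samples, the sketch below applies simple thresholds. The thresholds and event names are invented for illustration and do not reproduce the paper's 7-class/16-type scheme.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    accel_ms2: float      # longitudinal acceleration [m/s^2], negative = braking
    yaw_rate_dps: float   # yaw rate [deg/s]
    speed_kmh: float      # vehicle speed [km/h]

def classify(s: Sample) -> str:
    """Map one sensor sample to an (illustrative) dangerous-driving label."""
    if s.accel_ms2 < -4.0:
        return "hard_braking"
    if s.accel_ms2 > 3.0:
        return "rapid_acceleration"
    if abs(s.yaw_rate_dps) > 20.0 and s.speed_kmh > 60.0:
        return "sharp_turn_at_speed"
    return "normal"

print(classify(Sample(accel_ms2=-5.2, yaw_rate_dps=3.0, speed_kmh=45.0)))  # hard_braking
```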

A Review on Deep Learning Platform for Artificial Intelligence (인공지능 딥러링 학습 플랫폼에 관한 선행연구 고찰)

  • Jin, Chan-Yong;Shin, Seong-Yoon;Nam, Soo-Tai
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2019.05a / pp.169-170 / 2019
  • Lately, as artificial intelligence has become a source of global competitiveness, the government is strategically fostering artificial intelligence as the base technology of new future industries such as autonomous vehicles, drones, and robots. Domestic artificial intelligence research and services have been launched mainly by Naver and Kakao, but their scale and level are weak compared with those overseas. In recent years, deep learning has achieved groundbreaking performance in various pattern recognition fields, including speech recognition and image recognition. Deep learning has also attracted great interest from industry since its inception, and global information technology companies such as Google, Microsoft, and Samsung have successfully applied deep learning technology to commercial products and are continuing research and development. Therefore, we review prior research on artificial intelligence, which is attracting this attention.
