• Title/Summary/Keyword: Camera Sensor Fusion (카메라 센서 퓨전)


A Fusion Sensor System for Efficient Road Surface Monitoring on UGV (UGV에서 효율적인 노면 모니터링을 위한 퓨전 센서 시스템)

  • Seonghwan Ryu;Seoyeon Kim;Jiwoo Shin;Taesik Kim;Jinman Jung
    • Smart Media Journal
    • /
    • v.13 no.3
    • /
    • pp.18-26
    • /
    • 2024
  • Road surface monitoring is essential for maintaining road safety by managing risk factors such as rutting and cracking. Autonomous-driving-based UGVs equipped with high-performance 2D laser sensors enable more precise measurements; however, the increased energy consumption of these sensors conflicts with constrained battery capacity. In this paper, we propose a fusion sensor system for efficient road surface monitoring with UGVs. The proposed system combines color information from cameras with depth information from line laser sensors to accurately detect surface displacement. Furthermore, a dynamic sampling algorithm controls the scanning frequency of the line laser sensors based on whether the camera currently detects a monitoring target, reducing unnecessary energy consumption. A power consumption model of the fusion sensor system analyzes its energy efficiency under various crack distributions and sensor characteristics in different mission environments. Performance analysis demonstrates that setting the line laser sensor's active-state power consumption to twice its saving-state power consumption increases power efficiency by 13.3% compared to fixed sampling under the condition λ=10, µ=10.
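The camera-gated sampling and power model described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the state names, rates, and the active/saving power values are assumptions (the defaults mirror the abstract's best case of active power being twice saving power).

```python
# Hypothetical sketch of the dynamic sampling idea: the line laser scans at a
# high rate only while the camera reports a monitoring target (e.g. a crack),
# and drops to a power-saving rate otherwise.

def laser_scan_rate(camera_detects_target: bool,
                    active_hz: float = 100.0,
                    saving_hz: float = 10.0) -> float:
    """Return the line-laser scanning frequency for the current frame."""
    return active_hz if camera_detects_target else saving_hz

def average_power(duty_active: float,
                  p_active: float = 2.0,
                  p_saving: float = 1.0) -> float:
    """Mean laser power when a fraction `duty_active` of time is spent in the
    active state and the remainder in the saving state."""
    return duty_active * p_active + (1.0 - duty_active) * p_saving
```

With the illustrative defaults, a mission in which targets are visible 25% of the time averages 1.25 power units instead of the 2.0 units of an always-active laser.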

An Automatic Data Collection System for Human Pose using Edge Devices and Camera-Based Sensor Fusion (엣지 디바이스와 카메라 센서 퓨전을 활용한 사람 자세 데이터 자동 수집 시스템)

  • Young-Geun Kim;Seung-Hyeon Kim;Jung-Kon Kim;Won-Jung Kim
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.189-196
    • /
    • 2024
  • Frequent false alarms from the Intelligent Selective Control System have raised significant concerns. These persistent issues have led to declines in operational efficiency and in market credibility among agents. Because developing a new model or replacing the existing one to mitigate false alarms entails substantial opportunity costs, improving the quality of the training dataset is the pragmatic choice. However, smaller organizations face challenges due to inadequate capabilities in dataset collection and refinement. This paper proposes an automatic human pose data collection system centered on a human pose estimation model, utilizing camera-based sensor fusion techniques and edge devices. The system enables direct collection and real-time processing of field data at the network periphery, distributing a computational load that is typically centralized. Additionally, by labeling field data directly, it aids in constructing new training datasets.
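The edge-side collection loop outlined above might look like the following sketch. It is purely illustrative: `estimate_pose` stands in for whatever pose model the system actually deploys, and the confidence-gating policy is an assumption, not taken from the paper.

```python
# Hypothetical edge-device loop: run pose estimation on each camera frame,
# keep only confident detections, and store the keypoints as labels so a
# training dataset accumulates without manual annotation.

def collect_labeled_frames(frames, estimate_pose, min_confidence=0.8):
    """Return (frame, keypoints) pairs for confidently detected poses.

    `estimate_pose` is a stand-in callable returning (keypoints, confidence).
    """
    dataset = []
    for frame in frames:
        keypoints, confidence = estimate_pose(frame)
        if confidence >= min_confidence:
            dataset.append((frame, keypoints))  # frame auto-labeled at the edge
    return dataset
```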

A Study on Deep Learning-Based Detection and Tracking of Maritime Obstacles Using Multi-Camera Images (딥 러닝 기반 다중 카메라 영상을 이용한 해상 장애물 탐지 추적에 관한 연구)

  • 박정호;노명일;이혜원;조영민;손남선
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2022.11a
    • /
    • pp.186-186
    • /
    • 2022
  • In the past, operating a ship required a large crew, but the number of personnel needed has been decreasing, and research toward fully autonomous ships is actively under way. Among the components of an autonomous ship, an autonomous perception system to replace human vision is one of the research areas that must be developed first. Studies using traditional perception sensors such as RADAR (RAdio Detection And Ranging) and AIS (Automatic Identification System) are in progress, but these have limitations such as blind spots and detection intervals. This study therefore devised a new perception system that compensates for the limitations of traditional perception sensors by using multiple cameras (optical, thermal, and panoramic), and used it to track maritime obstacles and obtain their dynamic motion information. First, a maritime obstacle detection dataset was constructed from images collected in real sea areas, and a deep learning-based detection model was trained on it. The detection results were passed through a custom-designed Kalman filter-based adaptive tracking filter and used to compute the motion information (trajectory, speed, and heading) of maritime obstacles. In addition, to compensate for the limitations of cameras as sensors, this study fused the tracks obtained simultaneously from the multiple cameras. As a result, with a single camera the tracking results converged within the error range of the RADAR, and with multiple cameras tracking was confirmed to be more accurate than with a single camera.
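A generic filter of the kind detections are passed through can be sketched as a constant-velocity Kalman filter per axis. This is not the paper's adaptive variant, only the standard building block; the time step and noise values below are illustrative assumptions.

```python
# Minimal 1D constant-velocity Kalman filter: estimates position and velocity
# of a tracked obstacle from noisy position detections.

class CvKalman1D:
    def __init__(self, pos: float, dt: float = 0.1,
                 q: float = 1e-2, r: float = 1.0):
        self.x = [pos, 0.0]                 # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.dt, self.q, self.r = dt, q, r  # step, process noise, meas. noise

    def step(self, z: float) -> float:
        dt, q, r = self.dt, self.q, self.r
        # Predict with the constant-velocity model: x' = x + v*dt.
        x0 = self.x[0] + self.x[1] * dt
        x1 = self.x[1]
        P = self.P
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + q
        # Update with the position measurement z.
        s = p00 + r                    # innovation covariance
        k0, k1 = p00 / s, p10 / s      # Kalman gain
        y = z - x0                     # innovation
        self.x = [x0 + k0 * y, x1 + k1 * y]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]               # filtered position
```

Running one such filter per coordinate yields the trajectory, and the estimated velocity components give speed and heading.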


Development of the Driving path Estimation Algorithm for Adaptive Cruise Control System and Advanced Emergency Braking System Using Multi-sensor Fusion (ACC/AEBS 시스템용 센서퓨전을 통한 주행경로 추정 알고리즘)

  • Lee, Dongwoo;Yi, Kyongsu;Lee, Jaewan
    • Journal of Auto-vehicle Safety Association
    • /
    • v.3 no.2
    • /
    • pp.28-33
    • /
    • 2011
  • This paper presents a driving path estimation algorithm for an adaptive cruise control system and an advanced emergency braking system using multi-sensor fusion. Through data collection, the characteristics of yaw-rate-based road curvature and vision-sensor road curvature are analyzed. The two curvature estimates are then fused into a single curvature using weighting factors that reflect the characteristics of each data source. The proposed driving path estimation algorithm has been investigated via simulation using the vehicle simulation package CarSim together with Matlab/Simulink. Simulation results show that the proposed algorithm improves the primary target detection rate.
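The fusion step described above can be sketched as a weighted average of the two curvature sources. The inverse-variance weighting below is an assumption chosen to illustrate "weighting factors that reflect each source's characteristics"; the paper derives its own factor. The yaw-rate helper uses the standard kinematic relation κ = ψ̇ / v.

```python
# Sketch: fuse yaw-rate-based and vision-based road-curvature estimates.

def curvature_from_yaw_rate(yaw_rate: float, speed: float) -> float:
    """Kinematic road curvature: kappa = yaw_rate / speed (1/m)."""
    return yaw_rate / speed

def fuse_curvature(kappa_yaw: float, kappa_vision: float,
                   var_yaw: float, var_vision: float) -> float:
    """Inverse-variance weighted average: the noisier source (larger
    variance) contributes less to the fused curvature."""
    w_yaw = 1.0 / var_yaw
    w_vis = 1.0 / var_vision
    return (w_yaw * kappa_yaw + w_vis * kappa_vision) / (w_yaw + w_vis)
```

For example, at 20 m/s with a yaw rate of 0.1 rad/s the kinematic curvature is 0.005 1/m; if the vision sensor is twice as noisy, its reading is weighted half as much in the fused value.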

Real-time Speed Sign Recognition Method Using Virtual Environments and Camera Images (가상환경 및 카메라 이미지를 활용한 실시간 속도 표지판 인식 방법)

  • Eunji Song;Taeyun Kim;Hyobin Kim;Kyung-Ho Kim;Sung-Ho Hwang
    • Journal of Drive and Control
    • /
    • v.20 no.4
    • /
    • pp.92-99
    • /
    • 2023
  • Autonomous vehicles must recognize and respond to the posted speed limit to drive in compliance with regulations. The most representative way to recognize the posted speed is to detect speed signs in the front camera image and read the numbers on them. This study proposes a method that utilizes a YOLO-Labeling-Labeling-EfficientNet pipeline. The sign box is first detected with YOLO, and the numeric digits are extracted from the detected box according to pixel values through two labeling stages. The number in each digit position is then recognized using an EfficientNet (CNN) trained on a virtual-environment dataset we produced ourselves. In addition, we estimated depth information from the height of the recognized sign through regression analysis. We verified the proposed algorithm using a virtual racing environment and GTSRB, demonstrating its real-time operation and efficient recognition performance.
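The depth-from-height regression mentioned above can be sketched as follows. Under a pinhole-camera assumption the relation is inverse, depth ≈ a / pixel_height + b, so a least-squares fit on (1/height, depth) calibration pairs recovers it. The model form and all data are illustrative assumptions, not the paper's fitted coefficients.

```python
# Sketch: fit depth = a / h + b from calibration samples, then predict depth
# for a newly detected sign from its pixel height.

def fit_inverse_height_model(heights_px, depths_m):
    """Ordinary least squares on x = 1/h versus y = depth; returns (a, b)."""
    xs = [1.0 / h for h in heights_px]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(depths_m) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, depths_m)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def estimate_depth(height_px, a, b):
    """Predicted distance to a sign whose bounding box is height_px tall."""
    return a / height_px + b
```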

Kalman Filter-based Sensor Fusion for Posture Stabilization of a Mobile Robot (모바일 로봇 자세 안정화를 위한 칼만 필터 기반 센서 퓨전)

  • Jang, Taeho;Kim, Youngshik;Kyoung, Minyoung;Yi, Hyunbean;Hwan, Yoondong
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.40 no.8
    • /
    • pp.703-710
    • /
    • 2016
  • In robotics research, accurate estimation of the current robot position is important for achieving motion control of a robot. In this research, we focus on a sensor fusion method that provides improved position estimation for a wheeled mobile robot by considering two different sensor measurements. Specifically, we fuse camera-based vision data and encoder-based odometry data using Kalman filter techniques to improve the robot's position estimate. An external camera-based vision system provides global position coordinates (x, y) for the mobile robot in an indoor environment, while internal encoder-based odometry provides the robot's linear and angular velocities. We then use the position estimated by the Kalman filter as input to the motion controller, which significantly improves its performance. Finally, we experimentally verify the performance of the proposed sensor-fused position estimation and motion controller on an actual mobile robot system. In our experiments, we also compare the Kalman filter-based sensor-fused estimation with two single-sensor-based estimations (vision-based and odometry-based).
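The predict-with-odometry, correct-with-vision structure described above can be sketched as follows. For brevity this uses a unicycle motion model and a fixed scalar blending gain in place of the paper's full Kalman gain; the gain value and model are illustrative assumptions.

```python
import math

# Sketch of the fusion loop: encoder odometry (v, w) drives the prediction,
# and the external camera's global (x, y) fix drives the correction.

def predict(state, v, w, dt):
    """Dead-reckon a unicycle-model robot; state = (x, y, theta)."""
    x, y, th = state
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)

def correct(state, vision_xy, gain=0.5):
    """Blend the predicted (x, y) toward the camera's global position fix.
    A Kalman filter would compute `gain` from the covariances instead."""
    x, y, th = state
    vx, vy = vision_xy
    return (x + gain * (vx - x), y + gain * (vy - y), th)
```

Each control cycle calls `predict` with the encoder velocities and, whenever a camera fix arrives, `correct` with the global coordinates; the corrected state then feeds the motion controller.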