• Title/Summary/Keyword: Multi-sensor fusion

Search Results: 199

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.1099-1110
    • /
    • 2023
  • The recognition systems used in autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently being actively conducted. However, deep learning models are vulnerable to adversarial attacks that perturb the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on suppressing obstacle detection by lowering the confidence score of the object recognition model, but they have the limitation of working only on the targeted model. For attacks on the sensor fusion stage, errors can cascade into the vision tasks performed after fusion, and this risk needs to be considered. In addition, an attack on LiDAR point cloud data, which is difficult to judge visually, makes it hard to even determine that an attack has occurred. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the points of the input LiDAR point cloud. In attack performance experiments across scaling sizes and algorithms, the attack induced fusion errors in more than 77% of cases on average.
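
The core perturbation described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual attack pipeline (which targets LCCNet through an image-scaling algorithm); it only shows the basic idea of scaling the LiDAR points fed into a calibration model. The function name and scaling factor are assumptions for illustration.

```python
# Hypothetical sketch: uniformly scaling LiDAR points before camera-LiDAR
# fusion, which would corrupt the extrinsics a calibration model estimates.
def scaling_attack(points, factor):
    """Scale each 3D point by `factor`; `points` is a list of (x, y, z)."""
    return [(x * factor, y * factor, z * factor) for (x, y, z) in points]

clean = [(1.0, 2.0, 0.5), (3.0, -1.0, 2.0)]
attacked = scaling_attack(clean, 1.2)  # 20% scale-up of every point
```

Because the scaled cloud keeps its shape, such a perturbation is hard to spot by eye, which matches the abstract's point about LiDAR attacks being visually difficult to detect.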

Attitude Estimation for the Biped Robot with Vision and Gyro Sensor Fusion (비전 센서와 자이로 센서의 융합을 통한 보행 로봇의 자세 추정)

  • Park, Jin-Seong;Park, Young-Jin;Park, Youn-Sik;Hong, Deok-Hwa
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.6
    • /
    • pp.546-551
    • /
    • 2011
  • A tilt sensor is required to control the attitude of a biped robot walking on uneven terrain. A vision sensor, normally used for recognizing humans or detecting obstacles, can also serve as a tilt sensor by comparing the current image with a reference image. However, the vision sensor alone has technological limitations for biped robot control, such as a low sampling frequency and estimation time delay. To verify these limitations, an experimental inverted pendulum, which represents the pitch motion of a walking or running robot, is used, and it is shown that the vision sensor alone cannot stabilize the pendulum, mainly because of the time delay. In this paper, to overcome these limitations, a Kalman filter for multi-rate sensor fusion is applied together with a low-cost gyro sensor. This resolves the limitations of the vision sensor and also eliminates the drift of the gyro sensor. Through inverted pendulum control experiments, the tilt estimation performance of the fused sensors is found to be improved enough to control the attitude of the pendulum.
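
The multi-rate fusion idea above can be sketched with a one-state Kalman filter: the biased gyro drives the prediction at a fast rate, and the slower vision measurement corrects the drift whenever a frame arrives. The noise values, rates, and bias below are assumptions for illustration, not the paper's parameters.

```python
# Illustrative 1-state multi-rate Kalman filter: fast drifting gyro,
# slow vision tilt measurement. Numbers are assumed, not from the paper.
class TiltEstimator:
    def __init__(self, q=1e-4, r=1e-2):
        self.angle, self.p = 0.0, 1.0
        self.q, self.r = q, r              # process / measurement noise

    def predict(self, gyro_rate, dt):      # runs at the gyro rate
        self.angle += gyro_rate * dt
        self.p += self.q

    def update(self, vision_angle):        # runs only when a vision frame arrives
        k = self.p / (self.p + self.r)     # Kalman gain
        self.angle += k * (vision_angle - self.angle)
        self.p *= (1.0 - k)

est = TiltEstimator()
true_rate, bias = 0.1, 0.02                # constant gyro bias causes drift
for step in range(100):
    est.predict(true_rate + bias, 0.01)    # gyro at 100 Hz
    if step % 10 == 9:                     # vision at 10 Hz
        est.update(true_rate * 0.01 * (step + 1))  # exact tilt from vision
```

Gyro integration alone would end 0.02 rad off after one second; the periodic vision updates keep the fused estimate much closer to the true 0.1 rad, which is the drift-elimination effect the abstract describes.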

Experimental Verification of Multi-Sensor Geolocation Algorithm using Sequential Kalman Filter (순차적 칼만 필터를 적용한 다중센서 위치추정 알고리즘 실험적 검증)

  • Lee, Seongheon;Kim, Youngjoo;Bang, Hyochoong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.1
    • /
    • pp.7-13
    • /
    • 2015
  • Unmanned aerial vehicles (UAVs) are becoming popular not only for private uses such as aerial photography but also for military surveillance, reconnaissance, and supply missions. For a UAV to accomplish such missions, geolocation (localization) must be employed to track a target of interest or to fly by reference. In this research, we adopted a multi-sensor fusion (MSF) algorithm to increase geolocation accuracy and verified the algorithm using two multicopter UAVs: one equipped with an optical camera, and the other with an optical camera and a laser range finder. Throughout the experiment, we obtained measurements of a fixed ground target and estimated the target position through a series of coordinate transformations and a sequential Kalman filter. The results showed that MSF estimates the target location more accurately than a single sensor. Moreover, the experimental results imply that the multi-sensor geolocation algorithm allows further improvements in localization accuracy and is feasible for more complicated applications such as moving-target and multiple-target tracking.
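
The sequential Kalman filter named in the title processes each sensor's measurement one at a time rather than stacking them into one vector. A minimal scalar sketch, with sensor noise variances chosen purely for illustration:

```python
# Minimal scalar sequential Kalman update: fold in measurements one sensor
# at a time. Noise variances below are illustrative assumptions.
def sequential_update(x, p, measurements):
    """x, p: prior estimate and variance; measurements: list of (z, r)."""
    for z, r in measurements:
        k = p / (p + r)            # scalar Kalman gain for this sensor
        x = x + k * (z - x)
        p = (1.0 - k) * p
    return x, p

# Camera only vs. camera + laser range finder on a fixed ground target.
x_cam, p_cam = sequential_update(0.0, 100.0, [(10.2, 4.0)])
x_msf, p_msf = sequential_update(0.0, 100.0, [(10.2, 4.0), (9.9, 1.0)])
```

Adding the second, more accurate sensor shrinks the posterior variance, mirroring the abstract's finding that MSF outperforms a single sensor.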

Implementation of a Real-time Data fusion Algorithm for Flight Test Computer (비행시험통제컴퓨터용 실시간 데이터 융합 알고리듬의 구현)

  • Lee, Yong-Jae;Won, Jong-Hoon;Lee, Ja-Sung
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.8 no.4 s.23
    • /
    • pp.24-31
    • /
    • 2005
  • This paper presents an implementation of a real-time multi-sensor data fusion algorithm for a flight test computer. The sensor data consist of positional information of the target from a radar, a GPS receiver, and an INS. The data fusion algorithm is designed as a 21st-order distributed Kalman filter based on a PVA (position-velocity-acceleration) model with sensor bias states. Fault detection and correction logic is included in the algorithm to handle bad measurements and sensor faults. The statistical parameters of the states are obtained from Monte Carlo simulations and covariance analysis using test tracking data. The designed filter is verified using real data in both post-processing and real-time processing.
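
The PVA model mentioned above has a standard per-axis state transition. The sketch below shows that one block only; the paper's 21st-order filter stacks such blocks across axes and adds sensor bias states, and the step size here is an assumption.

```python
import numpy as np

# One-axis position-velocity-acceleration (PVA) transition matrix for a
# constant-acceleration model over a step of dt seconds.
def pva_transition(dt):
    return np.array([[1.0, dt, 0.5 * dt * dt],
                     [0.0, 1.0, dt],
                     [0.0, 0.0, 1.0]])

F = pva_transition(0.1)
x = np.array([0.0, 5.0, 2.0])   # position, velocity, acceleration
x_next = F @ x                   # one prediction step
```

One step advances position by v*dt + 0.5*a*dt^2 and velocity by a*dt, while the acceleration state carries over unchanged.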

Hierarchical Behavior Control of Mobile Robot Based on Space & Time Sensor Fusion(STSF)

  • Han, Ho-Tack
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.6 no.4
    • /
    • pp.314-320
    • /
    • 2006
  • Navigation in environments that are densely cluttered with obstacles is still a challenge for Autonomous Ground Vehicles (AGVs), especially when the configuration of obstacles is not known a priori. Reactive local navigation schemes that tightly couple the robot actions to the sensor information have proved to be effective in these environments, and because of the environmental uncertainties, STSF(Space and Time Sensor Fusion)-based fuzzy behavior systems have been proposed. Realization of autonomous behavior in mobile robots, using STSF control based on spatial data fusion, requires formulation of rules which are collectively responsible for necessary levels of intelligence. This collection of rules can be conveniently decomposed and efficiently implemented as a hierarchy of fuzzy-behaviors. This paper describes how this can be done using a behavior-based architecture. The approach is motivated by ethological models which suggest hierarchical organizations of behavior. Experimental results show that the proposed method can smoothly and effectively guide a robot through cluttered environments such as dense forests.
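
The hierarchical fuzzy-behavior idea can be sketched as two behaviors blended by a membership value: an "avoid" behavior increasingly suppresses a "seek goal" behavior as an obstacle gets closer. Membership shape, thresholds, and gains are assumptions for illustration, not the STSF rule base.

```python
# Toy two-level fuzzy behavior blend: avoidance dominates as proximity rises.
def mu_near(distance, full=0.5, none=2.0):
    """Membership of 'near': 1 inside `full` m, 0 beyond `none` m, linear between."""
    if distance <= full:
        return 1.0
    if distance >= none:
        return 0.0
    return (none - distance) / (none - full)

def steer(goal_heading, obstacle_heading, obstacle_distance):
    w = mu_near(obstacle_distance)              # activation of the avoid behavior
    avoid_heading = obstacle_heading + 3.14159  # turn away from the obstacle
    return (1.0 - w) * goal_heading + w * avoid_heading
```

With no nearby obstacle the robot steers to the goal; with an obstacle inside the "near" band the avoid behavior takes over smoothly, which is the graceful arbitration the hierarchy is meant to provide.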

Mobile Robot Navigation using a Dynamic Multi-sensor Fusion

  • Kim, San-Ju;Jin, Tae-Seok;Lee, Oh-Keol;Lee, Jang-Myung
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.240-243
    • /
    • 2003
  • This study is a preliminary step toward developing a multi-purpose, robust autonomous carrier mobile robot that can transport trolleys or heavy goods and serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, such as sonar and IR sensors, for map building and navigation, and to present an experimental mobile robot designed to operate autonomously in both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We explain the robot system architecture designed and implemented in this study and give a short review of existing techniques, since several recent thorough books and review papers cover this topic. We first address the general principles of the navigation and guidance architecture, and then the detailed functions of environment recognition and updating, obstacle detection, and motion assessment, together with first results from the simulation runs.
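
A common way to fuse sonar and IR range readings into a map, consistent with the map-building goal above, is a log-odds occupancy grid. This is a generic sketch, not the paper's system; the sensor-model probabilities are assumptions.

```python
import math

# Minimal occupancy-grid cell update for sonar/IR map building: each cell
# holds a log-odds occupancy value nudged by range hits and misses.
L_HIT = math.log(0.7 / 0.3)    # evidence a hit adds ("occupied")
L_MISS = math.log(0.3 / 0.7)   # evidence a miss adds ("free")

def update_cell(log_odds, hit):
    return log_odds + (L_HIT if hit else L_MISS)

def occupancy(log_odds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

cell = 0.0                      # prior: unknown, p = 0.5
for hit in (True, True, False, True):
    cell = update_cell(cell, hit)
```

Log-odds makes fusing repeated, noisy readings from heterogeneous sensors a simple addition per cell, which is why the representation suits multi-sensor map building.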


Spatio-spectral Fusion of Multi-sensor Satellite Images Based on Area-to-point Regression Kriging: An Experiment on the Generation of High Spatial Resolution Red-edge and Short-wave Infrared Bands (영역-점 회귀 크리깅 기반 다중센서 위성영상의 공간-분광 융합: 고해상도 적색 경계 및 단파 적외선 밴드 생성 실험)

  • Park, Soyeon;Kang, Sol A;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_1
    • /
    • pp.523-533
    • /
    • 2022
  • This paper presents a two-stage spatio-spectral fusion method (2SSFM) based on area-to-point regression kriging (ATPRK) to enhance spatial and spectral resolutions using multi-sensor satellite images with complementary spatial and spectral resolutions. 2SSFM combines ATPRK and random forest regression to predict spectral bands at high spatial resolution from multi-sensor satellite images. In the first stage, ATPRK-based spatial downscaling is performed to reduce the differences in spatial resolution between the multi-sensor satellite images. In the second stage, regression modeling using random forest is applied to quantify the relationship of spectral bands between the multi-sensor satellite images. The prediction performance of 2SSFM was evaluated through a case study on the generation of red-edge and short-wave infrared bands, in which these bands of PlanetScope images were predicted from Sentinel-2 images using 2SSFM. In the case study, 2SSFM generated red-edge and short-wave infrared bands with improved spatial resolution and spectral patterns similar to the actual bands, confirming the feasibility of 2SSFM for generating spectral bands not provided by high spatial resolution satellite images. Thus, 2SSFM can be applied to compute various spectral indices using predicted bands that are not actually available but are effective for environmental monitoring.
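
The second stage described above can be sketched with scikit-learn's RandomForestRegressor standing in for the paper's regression model. The synthetic data, band weights, and sizes below are assumptions; the real method trains on downscaled Sentinel-2 bands, not random numbers.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative 2SSFM second stage: learn a mapping from available
# multispectral bands to a band missing at high resolution.
rng = np.random.default_rng(0)
bands = rng.random((200, 4))                            # e.g. blue/green/red/NIR
red_edge = bands @ np.array([0.1, 0.2, 0.4, 0.3])       # synthetic target band

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(bands, red_edge)
predicted = model.predict(rng.random((10, 4)))          # "high-res" pixels
```

Once trained at the coarse resolution, the regressor is applied per high-resolution pixel to synthesize the band that the high-resolution sensor does not provide.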

A Novel Method of Basic Probability Assignment Calculation with Signal Variation Rate (구간변화율을 고려한 기본확률배정함수 결정)

  • Suh, Dong-Hyok;Park, Chan-Bong
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.8 no.3
    • /
    • pp.465-470
    • /
    • 2013
  • Dempster-Shafer evidence theory is applicable to multi-sensor data fusion, and a basic probability assignment (BPA) is essential for fusion using the theory. In this paper, we propose a novel method of BPA calculation based on signal assessment. We focus on the signal reported by a sensor mote at each time slot and assess the variation rate of the reported signal, since the trend of this variation carries a significant component of the context. We therefore calculate the signal's variation rate to reveal the relation between the variation and the context, and context inference can then be performed with a BPA computed from that variation rate.
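
Once each sensor's BPA is formed, Dempster's rule combines them. The sketch below implements the standard combination rule; the hypotheses and mass values are invented for illustration and have nothing to do with the paper's variation-rate BPA construction.

```python
from itertools import product

# Dempster's rule of combination for two BPAs, each a dict mapping
# frozenset hypotheses to mass.
def combine(m1, m2):
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to the empty set
    norm = 1.0 - conflict                # renormalize by non-conflicting mass
    return {k: v / norm for k, v in fused.items()}

A = frozenset({"fire"})
theta = A | frozenset({"no_fire"})       # frame of discernment
m_temp = {A: 0.6, theta: 0.4}            # temperature sensor's BPA
m_smoke = {A: 0.7, theta: 0.3}           # smoke sensor's BPA
fused = combine(m_temp, m_smoke)
```

Two weakly confident sensors combine into a strongly confident fused belief (0.88 on "fire" here), which is exactly the evidence-pooling behavior that makes the theory attractive for multi-sensor fusion.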

The Performance Analysis of MPDA in Out of Sequence Measurement Environment (Out of Sequence Measurement 환경에서의 MPDA 성능 분석)

  • Seo, Il-Hwan;Lim, Young-Taek;Song, Taek-Lyul
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.55 no.9
    • /
    • pp.401-408
    • /
    • 2006
  • In a multi-sensor, multi-target tracking system, the local sensors track the targets and transfer their measurements to the fusion center. Measurements from the same target can arrive out of order; these are called out-of-sequence measurements (OOSMs). OOSMs arise at the fusion center because of communication delays and varying preprocessing times on different sensor platforms. In general, track fusion is performed at the fusion center to enhance tracking performance using the measurements from the sensors. In a cluttered environment, target information can arrive at the fusion center mixed with clutter. In this paper, an OOSM update step with MPDA (Most Probable Data Association) is introduced and tested in several cases with various clutter densities through Monte Carlo simulation. The performance of MPDA with the OOSM update step is compared with the existing NN, PDA, and PDA-AI approaches for air target tracking in a cluttered, out-of-sequence measurement environment. Simulation results show that MPDA with the OOSM update achieves root-mean-square errors comparable to the out-of-sequence PDA-AI filter, and that MPDA is adequate for use in out-of-sequence environments.
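
For contrast with the direct OOSM update step studied above, the naive baseline is to buffer measurements and reprocess them in timestamp order when a delayed one arrives. The scalar filter, noise values, and static-target model below are illustrative assumptions, not the paper's tracker.

```python
# Buffered-reprocessing baseline for out-of-sequence measurements: when a
# delayed measurement arrives, re-run the filter in timestamp order.
def run_filter(measurements, x0=0.0, p0=10.0, q=0.1, r=1.0):
    x, p = x0, p0
    for _, z in sorted(measurements):   # process in timestamp order
        p += q                           # predict (static-target model)
        k = p / (p + r)
        x += k * (z - x)
        p *= (1.0 - k)
    return x, p

in_sequence = [(0, 1.1), (2, 0.9)]               # measurements already seen
x_late, _ = run_filter(in_sequence + [(1, 1.0)]) # delayed t=1 arrives last
x_ordered, _ = run_filter([(0, 1.1), (1, 1.0), (2, 0.9)])
```

Reprocessing recovers exactly the in-order result but costs a full re-run per late measurement; direct OOSM update steps like the one studied in the paper avoid that by folding the delayed measurement into the current state.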