• Title/Summary/Keyword: Multi-sensor data fusion


Development of Multi-sensor Image Fusion Software (InFusion) for Value-added Applications (고부가 활용을 위한 이종영상 융합 소프트웨어(InFusion) 개발)

  • Choi, Myung-jin; Chung, Inhyup; Ko, Hyeong Ghun; Jang, Sumin
    • Journal of Satellite, Information and Communications / v.12 no.3 / pp.15-21 / 2017
  • Following the successful launch of KOMPSAT-3 in May 2012, the launches of KOMPSAT-5 in August 2013 and KOMPSAT-3A in March 2015 brought optical, radar, and thermal infrared sensors into integrated operation in Korea, establishing a foundation for exploiting the characteristics of each sensor. To overcome the limits on application range and accuracy of any single sensor, multi-sensor image fusion techniques have been developed that take advantage of multiple sensors and allow them to complement each other. In this paper, we introduce the development of software (InFusion) for multi-sensor image fusion and value-added product generation using the KOMPSAT series. We first describe the characteristics of each sensor and the need for fusion software, and then describe the entire development process. The work aims to increase the data utilization of the KOMPSAT series and to demonstrate the competitiveness of domestic software through the creation of high value-added products.

Sensor Fusion for Underwater Navigation of Unmanned Underwater Vehicle (무인잠수정의 수중항법을 위한 센서융합)

  • Sur, Joo-No
    • Journal of the Korea Institute of Military Science and Technology / v.8 no.4 s.23 / pp.14-23 / 2005
  • In this paper we propose a sensor fusion method for a navigation algorithm that estimates state vectors, such as position and velocity, for motion control from multi-sensor output measurements. The measurements used to estimate the state are a series of known, asynchronous multi-sensor outputs corrupted by measurement noise. The paper investigates Extended Kalman Filtering to merge asynchronous heading, heading rate, DVL velocity, and SSBL information into a single state estimate. Kalman filters of different complexity, including biases and measurement noise, are investigated using simulated data from MOERI's SAUV. All of the Kalman filter variants are shown to follow the real trajectories much more closely and smoothly than the basic underwater acoustic navigation system commonly used aboard underwater vehicles.
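
A minimal sketch of how asynchronous multi-sensor measurements can be merged into a single state estimate with an extended Kalman filter, in the spirit of the abstract above. The state layout, noise values, and sensor models (compass heading, DVL speed, SSBL position) are illustrative assumptions, not the paper's actual filter design.

```python
import numpy as np

class AsyncEKF:
    """EKF over state [north, east, heading, speed]; each sensor updates it
    whenever its (asynchronous) measurement arrives."""
    def __init__(self):
        self.x = np.zeros(4)
        self.P = np.eye(4)
        self.Q = np.diag([0.01, 0.01, 0.001, 0.01])   # assumed process noise

    def predict(self, dt):
        n, e, psi, v = self.x
        # constant-heading, constant-speed kinematics
        self.x = np.array([n + v * np.cos(psi) * dt,
                           e + v * np.sin(psi) * dt, psi, v])
        F = np.array([[1, 0, -v * np.sin(psi) * dt, np.cos(psi) * dt],
                      [0, 1,  v * np.cos(psi) * dt, np.sin(psi) * dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])
        self.P = F @ self.P @ F.T + self.Q * dt

    def update(self, z, H, R):
        # generic Kalman update; each sensor brings its own H and R
        y = np.atleast_1d(z) - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

ekf = AsyncEKF()
ekf.predict(dt=0.1)
ekf.update([0.05], H=np.array([[0, 0, 1, 0]]), R=np.array([[0.01]]))   # compass heading
ekf.update([1.2],  H=np.array([[0, 0, 0, 1]]), R=np.array([[0.05]]))   # DVL speed
ekf.update([0.3, 0.1], H=np.eye(2, 4), R=0.5 * np.eye(2))              # SSBL position fix
```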

A Novel Method of Basic Probability Assignment Calculation with Signal Variation Rate (구간변화율을 고려한 기본확률배정함수 결정)

  • Suh, Dong-Hyok; Park, Chan-Bong
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.8 no.3 / pp.465-470 / 2013
  • Dempster-Shafer evidence theory is well suited to multi-sensor data fusion, and a basic probability assignment (BPA) is essential for fusion based on it. In this paper, we propose a novel method of BPA calculation based on signal assessment. We focus on the signal reported by a sensor mote in each time slot and assess the variation rate of the reported signal, since the trend of variation carries a significant component of the context. We calculate the signal variation rate to reveal the relation between the variation and the context, and context inference is then achieved with a BPA calculated from that variation rate.
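
A minimal sketch of Dempster's rule of combination applied to BPAs derived from a signal variation rate, in the spirit of the abstract above. The mapping from variation rate to BPA (bpa_from_variation) and the two-hypothesis frame are illustrative assumptions, not the paper's actual assignment function.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two BPAs defined over frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    # normalise by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

def bpa_from_variation(rate, frame=("fire", "normal")):
    """Toy BPA: a fast-rising signal supports 'fire'; some mass stays uncommitted."""
    support = min(max(rate, 0.0), 1.0)            # clamp variation rate to [0, 1]
    return {frozenset({frame[0]}): 0.8 * support,
            frozenset({frame[1]}): 0.8 * (1.0 - support),
            frozenset(frame): 0.2}                # ignorance mass on the whole frame

m_temp = bpa_from_variation(0.7)    # temperature sensor, rising quickly
m_smoke = bpa_from_variation(0.4)   # smoke sensor, rising moderately
print(combine(m_temp, m_smoke))
```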

New Medical Image Fusion Approach with Coding Based on SCD in Wireless Sensor Network

  • Zhang, De-gan; Wang, Xiang; Song, Xiao-dong
    • Journal of Electrical Engineering and Technology / v.10 no.6 / pp.2384-2392 / 2015
  • The technical development and practical application of big data for health is a hot topic, and big-data medical image fusion is one of its key problems. A new fusion approach with coding based on the Spherical Coordinate Domain (SCD) in a Wireless Sensor Network (WSN) for big-data medical images is proposed in this paper. In this approach, the three high-frequency coefficient subbands in the wavelet domain of a medical image are pre-processed; this pre-processing strategy reduces the redundancy of big-data medical images. First, the high-frequency coefficients are transformed to the spherical coordinate domain to reduce the correlation within the same scale. Then, a multi-scale model product (MSMP) is used to control the shrinkage function so that small wavelet coefficients and some noise are removed. The high-frequency parts in the spherical coordinate domain are coded with an improved SPIHT algorithm. Finally, based on the multi-scale edges of the medical image, the image can be fused and reconstructed. Experimental results indicate that the novel approach is effective and very useful for the transmission of big-data medical images, especially in wireless environments.
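
A minimal sketch of the spherical-coordinate pre-processing step described in the abstract: the three high-frequency subbands of one wavelet scale are mapped per pixel to (r, theta, phi), the radius is shrunk to suppress small coefficients and noise, and the subbands are mapped back. The soft-threshold rule stands in for the paper's multi-scale model product (MSMP), and the SPIHT coding stage is omitted.

```python
import numpy as np

def to_spherical(hl, lh, hh):
    # treat the (HL, LH, HH) coefficients at each pixel as a 3-D vector
    r = np.sqrt(hl**2 + lh**2 + hh**2)
    theta = np.arccos(np.divide(hh, r, out=np.zeros_like(r), where=r > 0))
    phi = np.arctan2(lh, hl)
    return r, theta, phi

def from_spherical(r, theta, phi):
    hl = r * np.sin(theta) * np.cos(phi)
    lh = r * np.sin(theta) * np.sin(phi)
    hh = r * np.cos(theta)
    return hl, lh, hh

def shrink_radius(r, threshold):
    # soft-threshold the radius: small coefficients (mostly noise) go to zero
    return np.maximum(r - threshold, 0.0)

rng = np.random.default_rng(0)
hl, lh, hh = (rng.normal(size=(64, 64)) for _ in range(3))   # toy subbands
r, theta, phi = to_spherical(hl, lh, hh)
hl2, lh2, hh2 = from_spherical(shrink_radius(r, threshold=0.5), theta, phi)
```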

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im; Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1099-1110 / 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance, and research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently active. However, deep learning models are vulnerable to adversarial attacks that modulate the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on hindering obstacle detection by lowering the confidence score of the object recognition model, but they work only on the targeted model. An attack at the sensor fusion stage, by contrast, can cascade errors into the vision tasks that follow fusion, a risk that needs to be considered; moreover, an attack on LiDAR point cloud data, which is difficult to inspect visually, is hard to recognize as an attack. In this study, we propose an image-scaling-based attack that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model, by applying a scaling attack to the input LiDAR points. In attack-performance experiments with scaling algorithms at various sizes, the attack caused fusion errors at an average rate of more than 77%.
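
A minimal sketch of a scaling perturbation applied to an input LiDAR point cloud, in the spirit of the attack described above. The scale factor and the idea of rescaling raw XYZ coordinates are illustrative assumptions; LCCNet itself and the resulting calibration error are not modelled here.

```python
import numpy as np

def scaling_attack(points, scale=1.05, axes=(0, 1, 2)):
    """Return a copy of an (N, 3) XYZ point cloud with selected axes rescaled."""
    attacked = points.copy()
    attacked[:, list(axes)] *= scale
    return attacked

points = np.random.rand(1000, 3) * 50.0           # toy point cloud in metres
attacked = scaling_attack(points, scale=1.1)      # 10 % scaling on all axes
print(np.abs(attacked - points).max())            # worst-case displacement introduced
```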

Multi Sources Track Management Method for Naval Combat Systems (다중 센서 및 다중 전술데이터링크 환경 하에서의 표적정보 처리 기법)

  • Lee, Ho Chul; Kim, Tae Su; Shin, Hyung Jo
    • Journal of Institute of Control, Robotics and Systems / v.20 no.2 / pp.126-131 / 2014
  • This paper is concerned with a track management method for a naval combat system that receives track information from multiple sensors and multiple tactical datalinks. Since managing track information from diverse sources can be formulated as a data fusion problem, this paper deals with the data fusion architecture, track association, and track information determination algorithms for track management in naval combat systems.
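
A minimal sketch of one ingredient of multi-source track management: greedy nearest-neighbour association between tracks held by the combat system and tracks received over a tactical datalink, with a simple gating distance. The gate value and the greedy strategy are illustrative assumptions, not the paper's association algorithm.

```python
import numpy as np

def associate_tracks(local_tracks, link_tracks, gate=2.0):
    """Pair (N, 2) local positions with (M, 2) datalink positions within a gate."""
    pairs, used = [], set()
    for i, p in enumerate(local_tracks):
        d = np.linalg.norm(link_tracks - p, axis=1)
        d[list(used)] = np.inf                    # each link track may be used once
        j = int(np.argmin(d))
        if d[j] < gate:
            pairs.append((i, j))
            used.add(j)
    return pairs

local = np.array([[0.0, 0.0], [10.0, 5.0]])       # tracks from own sensors
link = np.array([[0.3, -0.2], [30.0, 30.0]])      # tracks from the tactical datalink
print(associate_tracks(local, link))              # -> [(0, 0)]
```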

Localization and Control of an Outdoor Mobile Robot Based on an Estimator with Sensor Fusion (센서 융합기반의 추측항법을 통한 야지 주행 이동로봇의 위치 추정 및 제어)

  • Jeon, Sang Woon; Jeong, Seul
    • IEMEK Journal of Embedded Systems and Applications / v.4 no.2 / pp.69-78 / 2009
  • Localization is a very important capability for a mobile robot navigating in outdoor environments. In this paper, the development of a sensor fusion algorithm for controlling mobile robots outdoors is presented. A multi-sensor dead-reckoning subsystem is established by optimal filtering: heading readings from a magnetic compass, a rate gyro, and two encoders mounted on the robot wheels are first fused to compute the dead-reckoned location. These data and the position data provided by a global sensing system are then fused by means of an extended Kalman filter. The proposed algorithm is verified in simulation studies in which the mobile robot is driven by a backstepping controller and by a cascaded controller, and the performances of the two controllers are compared.
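
A minimal sketch of the dead-reckoning part described above: encoder odometry provides the travelled distance, while the heading blends an integrated rate gyro, the encoder differential, and a magnetic compass. The blend weights and wheel base are illustrative assumptions, not the paper's optimal filter; the subsequent EKF fusion with the global position sensor is omitted.

```python
import math

def dead_reckon(x, y, heading, d_left, d_right, gyro_rate, compass, dt,
                wheel_base=0.5, w_gyro=0.6, w_enc=0.2, w_compass=0.2):
    """Advance the pose by one time step; distances in metres, angles in radians."""
    distance = 0.5 * (d_left + d_right)               # mean wheel travel
    # three heading estimates: integrated gyro, encoder differential, compass
    h_gyro = heading + gyro_rate * dt
    h_enc = heading + (d_right - d_left) / wheel_base
    heading = w_gyro * h_gyro + w_enc * h_enc + w_compass * compass
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return x, y, heading

pose = (0.0, 0.0, 0.0)
pose = dead_reckon(*pose, d_left=0.10, d_right=0.12, gyro_rate=0.02,
                   compass=0.05, dt=0.1)
print(pose)
```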


Hierarchical Behavior Control of Mobile Robot Based on Space & Time Sensor Fusion(STSF)

  • Han, Ho-Tack
    • International Journal of Fuzzy Logic and Intelligent Systems / v.6 no.4 / pp.314-320 / 2006
  • Navigation in environments that are densely cluttered with obstacles is still a challenge for Autonomous Ground Vehicles (AGVs), especially when the configuration of obstacles is not known a priori. Reactive local navigation schemes that tightly couple the robot actions to the sensor information have proved to be effective in these environments, and because of the environmental uncertainties, STSF (Space and Time Sensor Fusion)-based fuzzy behavior systems have been proposed. Realization of autonomous behavior in mobile robots, using STSF control based on spatial data fusion, requires formulation of rules which are collectively responsible for necessary levels of intelligence. This collection of rules can be conveniently decomposed and efficiently implemented as a hierarchy of fuzzy behaviors. This paper describes how this can be done using a behavior-based architecture. The approach is motivated by ethological models which suggest hierarchical organizations of behavior. Experimental results show that the proposed method can smoothly and effectively guide a robot through cluttered environments such as dense forests.
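
A minimal sketch of blending two behaviours in a hierarchy, in the spirit of the fuzzy behaviour-based architecture above: a low-level obstacle-avoidance behaviour and a higher-level goal-seeking behaviour each propose a steering command, and a ramp-shaped activation on obstacle distance arbitrates between them. The membership shape and gains are illustrative assumptions.

```python
def avoid_obstacle(obstacle_bearing):
    return -1.0 if obstacle_bearing > 0 else 1.0    # steer away from the obstacle side

def seek_goal(goal_bearing):
    return max(-1.0, min(1.0, goal_bearing))        # steer toward the goal, clipped

def activation(obstacle_distance, near=0.5, far=2.0):
    """Ramp membership: 1 when an obstacle is near, 0 when it is far."""
    if obstacle_distance <= near:
        return 1.0
    if obstacle_distance >= far:
        return 0.0
    return (far - obstacle_distance) / (far - near)

def blended_steering(obstacle_distance, obstacle_bearing, goal_bearing):
    w = activation(obstacle_distance)
    return w * avoid_obstacle(obstacle_bearing) + (1 - w) * seek_goal(goal_bearing)

print(blended_steering(obstacle_distance=0.8, obstacle_bearing=0.3, goal_bearing=-0.4))
```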

Mobile Robot Navigation using a Dynamic Multi-sensor Fusion

  • Kim, San-Ju; Jin, Tae-Seok; Lee, Oh-Keol; Lee, Jang-Myung
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.240-243 / 2003
  • This study is a preliminary step toward developing a multi-purpose, robust autonomous carrier mobile robot that can transport trolleys or heavy goods and serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, such as sonar and IR sensors, for map building and navigation, and to present an experimental mobile robot designed to operate autonomously in both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We explain the robot system architecture designed and implemented in this study and give a short review of existing techniques, since several recent thorough books and review papers cover this topic. We first deal with the general principles of the navigation and guidance architecture, and then with the detailed functions of environment recognition and map updating, obstacle detection, and motion assessment, together with the first results from the simulation runs.
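
A minimal sketch of map building from fused sonar/IR range readings, roughly in the spirit of the abstract above: each reading updates the log-odds of the grid cells along its beam. The grid size, inverse sensor model, and log-odds increments are illustrative assumptions, not the system described in the paper.

```python
import math
import numpy as np

GRID = np.zeros((100, 100))          # log-odds occupancy grid, 0 = unknown
CELL = 0.1                           # cell size in metres
L_OCC, L_FREE = 0.9, -0.4            # assumed log-odds increments

def update_grid(robot_xy, bearing, measured_range, max_range=4.0):
    """Mark cells along the beam as free and the cell at the measured range as occupied."""
    steps = int(min(measured_range, max_range) / CELL)
    for k in range(steps):
        x = robot_xy[0] + k * CELL * math.cos(bearing)
        y = robot_xy[1] + k * CELL * math.sin(bearing)
        i, j = int(x / CELL), int(y / CELL)
        if 0 <= i < GRID.shape[0] and 0 <= j < GRID.shape[1]:
            occupied = (k == steps - 1) and (measured_range < max_range)
            GRID[i, j] += L_OCC if occupied else L_FREE

# fuse one sonar and one IR reading taken in the same direction
update_grid(robot_xy=(5.0, 5.0), bearing=0.3, measured_range=2.1)   # sonar
update_grid(robot_xy=(5.0, 5.0), bearing=0.3, measured_range=2.0)   # IR
```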


Combining Geostatistical Indicator Kriging with Bayesian Approach for Supervised Classification

  • Park, No-Wook; Chi, Kwang-Hoon; Moon, Wooil-M.; Kwon, Byung-Doo
    • Proceedings of the KSRS Conference / 2002.10a / pp.382-387 / 2002
  • In this paper, we propose a geostatistical approach incorporated into a Bayesian data fusion technique for supervised classification of multi-sensor remote sensing data. Traditional spectral-based classification cannot account for spatial information and may produce unrealistic classification results. To obtain accurate spatial/contextual information, indicator kriging, which estimates the probability of occurrence of each class on the basis of surrounding observations, is incorporated into the Bayesian framework. This approach has the merit of incorporating both spectral and spatial information and improves the confidence level of the final data fusion task. To illustrate the proposed scheme, supervised classification of a multi-sensor remote sensing test data set was carried out.
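
A minimal sketch of the fusion step described above: per-pixel spectral likelihoods are combined with class prior probabilities estimated from surrounding samples, which the paper obtains by indicator kriging (a fixed stand-in prior is used here). The class values and the prior estimate are illustrative assumptions.

```python
import numpy as np

def bayes_fuse(spectral_likelihood, spatial_prior):
    """Posterior is proportional to likelihood times prior, normalised over classes (last axis)."""
    post = spectral_likelihood * spatial_prior
    return post / post.sum(axis=-1, keepdims=True)

# two classes, one pixel: spectral evidence slightly favours class 0,
# but neighbouring labelled samples strongly suggest class 1
likelihood = np.array([0.6, 0.4])
prior = np.array([0.2, 0.8])           # stand-in for the indicator-kriging estimate
print(bayes_fuse(likelihood, prior))   # posterior shifts toward class 1
```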
