• Title/Summary/Keyword: multi-sensor fusion

Search results: 202

Intruder Tracking and Collision Avoidance Algorithm Design for Unmanned Aerial Vehicles using a Model-based Design Method (모델 기반 설계 기법을 이용한 무인항공기의 침입기 추적 및 충돌회피 알고리즘 설계)

  • Choi, Hyunjin;Yoo, Chang-Sun;Ryu, Hyeok;Kim, Sungwook;Ahn, Seokmin
    • Journal of the Korean Society for Aviation and Aeronautics, v.25 no.4, pp.83-90, 2017
  • Unmanned Aerial Vehicles (UAVs) require collision avoidance capabilities equivalent to those of manned aircraft before they can enter manned airspace. Under Visual Flight Rules, pilots of manned aircraft avoid collisions by 'See-and-Avoid'. To give UAVs the corresponding capability, known as 'Sense-and-Avoid', sensor-based intruder tracking and collision avoidance methods are required. In this study, a multi-sensor tracking, data fusion, and collision avoidance algorithm is designed with the model-based design tool MATLAB/SIMULINK, and the designed model and code are validated through numerical simulations and processor-in-the-loop simulations.
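
The paper's algorithm is built in MATLAB/SIMULINK; the following is only a minimal Python sketch of the general idea of fusing intruder position measurements from two hypothetical sensors into a constant-velocity Kalman-filter track. All matrices, noise levels, and sensor names are illustrative assumptions, not the authors' design.

```python
# Minimal sketch: inverse-covariance fusion of two position measurements
# feeding a constant-velocity Kalman filter (illustrative values only).
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])            # constant-velocity state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]])            # position-only observation
Q = 0.5 * np.eye(4)                     # assumed process noise
x = np.zeros(4)                         # state: [px, py, vx, vy]
P = 100.0 * np.eye(4)

def fuse_measurements(z1, R1, z2, R2):
    """Inverse-covariance (weighted) fusion of two position measurements."""
    W = np.linalg.inv(np.linalg.inv(R1) + np.linalg.inv(R2))
    z = W @ (np.linalg.inv(R1) @ z1 + np.linalg.inv(R2) @ z2)
    return z, W

def kalman_step(x, P, z, R):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the fused measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# one fusion/tracking cycle with dummy measurements from two sensors
z, R = fuse_measurements(np.array([100.0, 50.0]), 4.0 * np.eye(2),
                         np.array([101.0, 49.0]), 9.0 * np.eye(2))
x, P = kalman_step(x, P, z, R)
print("fused intruder track state:", x)
```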

Matching and Geometric Correction of Multi-Resolution Satellite SAR Images Using SURF Technique (SURF 기법을 활용한 위성 SAR 다중해상도 영상의 정합 및 기하보정)

  • Kim, Ah-Leum;Song, Jung-Hwan;Kang, Seo-Li;Lee, Woo-Kyung
    • Korean Journal of Remote Sensing, v.30 no.4, pp.431-444, 2014
  • As applications of spaceborne SAR imagery expand, there is growing demand for accurate registration to support better interpretation and fusion of radar images, and multi-resolution SAR images are increasingly adopted for wide-area reconnaissance. Geometric correction of SAR images can be performed using satellite orbit and attitude information; however, inherent errors in the SAR sensor's attitude and in ground geographical data tend to introduce geometric errors into the produced image. These errors must be corrected when SAR images are used for multi-temporal analysis, change detection, and fusion with images from other sensors, and the undesirable registration errors can be removed with respect to true ground control points to produce complete SAR products. The Speeded-Up Robust Features (SURF) technique is an efficient algorithm for extracting control points from images but is often considered unsuitable for SAR images because of strong speckle noise. In this paper, an attempt is made to apply the SURF algorithm to SAR images for registration and fusion. Matched points are extracted while varying the Hessian and SURF matching thresholds, and performance is analyzed by measuring image-matching accuracy. A number of registration performance measures are suggested to validate the use of SURF for spaceborne SAR images, and various simulation methodologies are proposed for geometric correction and registration. The results show that, with a careful choice of input parameters, the SURF algorithm can be applied to spaceborne SAR images of moderate resolution.
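
For orientation, a hedged sketch of SURF-based control-point matching between two SAR images is shown below; it is not the paper's exact pipeline, the file names and threshold values are placeholders, and SURF is only available in OpenCV builds compiled with the contrib modules and OPENCV_ENABLE_NONFREE.

```python
# Illustrative SURF matching and registration between two SAR images.
import cv2
import numpy as np

ref = cv2.imread("sar_reference.tif", cv2.IMREAD_GRAYSCALE)   # hypothetical files
tgt = cv2.imread("sar_target.tif", cv2.IMREAD_GRAYSCALE)

# The Hessian threshold controls how many (and how strong) keypoints survive;
# the paper studies how this and the matching threshold affect SAR accuracy.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=800)
kp1, des1 = surf.detectAndCompute(ref, None)
kp2, des2 = surf.detectAndCompute(tgt, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # ratio test

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Robustly estimate the geometric transform and warp the target to the reference.
M, inliers = cv2.estimateAffinePartial2D(dst, src, method=cv2.RANSAC)
registered = cv2.warpAffine(tgt, M, (ref.shape[1], ref.shape[0]))
```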

Land cover classification of a non-accessible area using multi-sensor images and GIS data (다중센서와 GIS 자료를 이용한 접근불능지역의 토지피복 분류)

  • Kim, Yong-Min;Park, Wan-Yong;Eo, Yang-Dam;Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.28 no.5, pp.493-504, 2010
  • This study proposes a classification method based on an automated training-sample extraction procedure that can be used with very high resolution (VHR) images of non-accessible areas. The proposed method overcomes the scale difference between VHR images and geographic information system (GIS) data through filtering and the use of a Landsat image. To automate maximum likelihood classification (MLC), GIS data were used as input to the MLC of a Landsat image, and a binary edge map and the normalized difference vegetation index (NDVI) were used to increase the purity of the training samples. Thresholds of the NDVI and binary edge appropriate for obtaining pure samples of each class were identified. The proposed method was then applied to QuickBird and SPOT-5 images. To validate the method, visual interpretation and quantitative assessment of the results were compared with products of a manual method. The results showed that the proposed method can classify VHR images and efficiently update GIS data.
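
A minimal sketch of the training-sample purification idea follows: keep only candidate pixels whose NDVI falls inside a class-specific range and that do not lie on edges. The band arrays, edge map, thresholds, and class ranges are hypothetical placeholders, not the paper's calibrated values.

```python
# Purify candidate training samples with NDVI and binary-edge filtering.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-6)

def pure_training_mask(nir, red, edge_binary, ndvi_min, ndvi_max):
    """edge_binary: 1 on detected edges (likely mixed pixels), 0 elsewhere."""
    v = ndvi(nir, red)
    return (v >= ndvi_min) & (v <= ndvi_max) & (edge_binary == 0)

# e.g., candidate 'forest' samples from GIS polygons, refined before MLC training
nir = np.random.rand(512, 512).astype(np.float32)       # placeholder bands
red = np.random.rand(512, 512).astype(np.float32)
edges = np.zeros((512, 512), dtype=np.uint8)            # placeholder edge map
forest_mask = pure_training_mask(nir, red, edges, ndvi_min=0.4, ndvi_max=1.0)
```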

Intelligent Traffic Prediction by Multi-sensor Fusion using Multi-threaded Machine Learning

  • Aung, Swe Sw;Nagayama, Itaru;Tamaki, Shiro
    • IEIE Transactions on Smart Processing and Computing, v.5 no.6, pp.430-439, 2016
  • Estimation and analysis of traffic jams play a vital role in an intelligent transportation system, improving safety as well as mobility and environmental impact. For these reasons, many researchers currently focus on machine learning-based approaches for traffic prediction systems. This paper primarily addresses the analysis and comparison of prediction accuracy between two machine learning algorithms: Naïve Bayes and K-Nearest Neighbor (K-NN). Because the estimation accuracy of these methods depends on a large amount of historical data, and because they take considerable time to compute the same function heuristically for each action, we propose applying multi-threading to both methods. Clearly, the greater the amount of historical data, the more processing time is required; for a real-time system, operational response time is vital, so the proposed system considers time complexity as well as computational complexity. It is experimentally confirmed that K-NN outperforms Naïve Bayes not only in prediction accuracy but also in processing time: multi-threaded K-NN computed about four times faster than classical K-NN, whereas multi-threaded Naïve Bayes processed only about twice as fast as classical Naïve Bayes.
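
A rough sketch of the multi-threaded K-NN idea is given below: partition the historical data, compute nearest neighbours per partition in parallel, then merge. The data shapes, feature meanings, and thread count are assumptions; note that in CPython, real speedups for this CPU-bound step would typically require processes or a GIL-releasing numeric backend, so the sketch mainly shows the partitioning scheme.

```python
# Partitioned, thread-parallel K-NN with a global top-k merge and majority vote.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def knn_partial(chunk_X, chunk_y, query, k):
    d = np.linalg.norm(chunk_X - query, axis=1)
    idx = np.argsort(d)[:k]
    return list(zip(d[idx], chunk_y[idx]))          # (distance, label) pairs

def knn_predict(X, y, query, k=5, n_threads=4):
    chunks = np.array_split(np.arange(len(X)), n_threads)
    with ThreadPoolExecutor(max_workers=n_threads) as ex:
        parts = ex.map(lambda c: knn_partial(X[c], y[c], query, k), chunks)
    best = sorted(p for part in parts for p in part)[:k]      # global top-k
    labels = [lbl for _, lbl in best]
    return max(set(labels), key=labels.count)                 # majority vote

X = np.random.rand(10000, 4)             # e.g., [hour, weekday, flow, speed] (assumed)
y = np.random.randint(0, 3, size=10000)  # congestion-level labels (assumed)
print(knn_predict(X, y, query=np.array([0.5, 0.2, 0.8, 0.3])))
```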

Target Tracking based on Kernelized Correlation Filter Using MWIR and SWIR Sensors (MWIR 및 SWIR 센서를 이용한 커널상관필터기반의 표적추적)

  • Sungu Sun;Yuri Lee;Daekyo Seo
    • Journal of the Korea Institute of Military Science and Technology, v.26 no.1, pp.22-30, 2023
  • When tracking small UAV and drone targets in cloud-clutter environments, MWIR sensors are often unable to track the target continuously. To overcome this problem, an SWIR sensor is mounted on the same gimbal. Target tracking then uses sensor-information fusion or selectively applies the information from each sensor, in which case parallax correction using the target distance is commonly applied. However, this existing approach is difficult to apply to small UAV and drone targets because the laser rangefinder's small beam divergence angle makes it hard to measure the distance to such targets. We propose a tracking method that does not require parallax correction between the sensors: images from the MWIR and SWIR sensors are captured simultaneously, and the tracking error used to drive the gimbal is chosen by an effectiveness measure. To validate the method, tracking performance was demonstrated for UAV and drone targets against a real sky background using MWIR and SWIR image sensors.
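
A hedged sketch of the dual-band idea is shown below: run a kernelized correlation filter (KCF) tracker on the MWIR and SWIR frames independently and drive the gimbal with whichever result the effectiveness measure currently favours. The effectiveness measure here is reduced to a simple success flag as a placeholder for the paper's criterion, and the code requires an opencv-contrib build providing TrackerKCF.

```python
# Dual-band KCF tracking with a placeholder effectiveness-based selection.
import cv2

def make_tracker(frame, bbox):
    """bbox: (x, y, w, h) of the target in the first frame."""
    t = cv2.TrackerKCF_create()
    t.init(frame, bbox)
    return t

def dual_band_track(tracker_mwir, tracker_swir, frame_mwir, frame_swir):
    ok_m, box_m = tracker_mwir.update(frame_mwir)
    ok_s, box_s = tracker_swir.update(frame_swir)
    # Placeholder effectiveness measure: prefer MWIR when it still holds the
    # target, fall back to SWIR when MWIR loses it (e.g., in cloud clutter).
    if ok_m:
        return "MWIR", box_m
    if ok_s:
        return "SWIR", box_s
    return None, None
```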

Visible and SWIR Satellite Image Fusion Using Multi-Resolution Transform Method Based on Haze-Guided Weight Map (Haze-Guided Weight Map 기반 다중해상도 변환 기법을 활용한 가시광 및 SWIR 위성영상 융합)

  • Taehong Kwak;Yongil Kim
    • Korean Journal of Remote Sensing, v.39 no.3, pp.283-295, 2023
  • With the development of sensor and satellite technology, numerous high-resolution, multi-spectral satellite images have become available. Owing to their wavelength-dependent reflection, transmission, and scattering characteristics, multi-spectral satellite images provide complementary information for Earth observation. In particular, the short-wave infrared (SWIR) band can penetrate certain types of atmospheric aerosols thanks to its reduced Rayleigh scattering, allowing a clearer view and more detailed information to be captured over hazy surfaces than the visible band. In this study, we propose a multi-resolution transform-based image fusion method that combines visible and SWIR satellite images. The goal is to generate a single integrated image that incorporates complementary information, such as detailed background information from the visible band and land-cover information in haze regions from the SWIR band. To this end, the Laplacian pyramid-based multi-resolution transform, a representative image-decomposition approach for image fusion, is applied and modified by incorporating a haze-guided weight map built on the prior knowledge that SWIR pixels carry more information in haze regions. The proposed method was validated using very high resolution Worldview-3 satellite images containing multi-spectral visible and SWIR bands; the experimental data include hazy areas with limited visibility caused by wildfire smoke, which allows the penetration properties of the fusion method to be examined. Both quantitative and visual evaluations were conducted using image quality assessment indices. The results show that the bright features of the SWIR bands in the hazy areas were successfully fused into the integrated images without loss of the detailed information from the visible bands.
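
Below is a minimal Laplacian-pyramid fusion sketch driven by a per-pixel weight map. It assumes single-band (grayscale) inputs, and the haze-guided weight itself is left as an externally supplied map in [0, 1]; how that map is derived in the paper is not reproduced here.

```python
# Weight-map-guided Laplacian-pyramid fusion of visible and SWIR images.
import cv2
import numpy as np

def lap_pyramid(img, levels=4):
    g = [img.astype(np.float32)]
    for _ in range(levels):
        g.append(cv2.pyrDown(g[-1]))
    lp = [g[i] - cv2.pyrUp(g[i + 1], dstsize=(g[i].shape[1], g[i].shape[0]))
          for i in range(levels)]
    lp.append(g[-1])                       # coarsest Gaussian level
    return lp

def fuse(visible, swir, weight, levels=4):
    """weight in [0,1]: 1 -> take SWIR content (hazy pixel), 0 -> keep visible."""
    lv, ls = lap_pyramid(visible, levels), lap_pyramid(swir, levels)
    w = weight.astype(np.float32)
    fused = []
    for a, b in zip(lv, ls):
        wr = cv2.resize(w, (a.shape[1], a.shape[0]))   # weight at this level
        fused.append((1 - wr) * a + wr * b)
    out = fused[-1]
    for lap in reversed(fused[:-1]):                   # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```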

Development of Low-Power IoT Sensor and Cloud-Based Data Fusion Displacement Estimation Method for Ambient Bridge Monitoring (상시 교량 모니터링을 위한 저전력 IoT 센서 및 클라우드 기반 데이터 융합 변위 측정 기법 개발)

  • Park, Jun-Young;Shin, Jun-Sik;Won, Jong-Bin;Park, Jong-Woong;Park, Min-Yong
    • Journal of the Computational Structural Engineering Institute of Korea, v.34 no.5, pp.301-308, 2021
  • Developing a digital SOC (social overhead capital) maintenance system is important for preemptive maintenance in response to the rapid aging of social infrastructure. With IoT sensors deployed on structures, abnormal signals can be detected quickly and optimal decisions can be made promptly. In this study, a digital SOC monitoring system incorporating a multi-metric IoT sensor, a cloud-computing server for automated data analysis, and a database was developed for long-term monitoring, providing (1) multi-metric sensing, (2) long-term operation, and (3) LTE-based direct communication. The developed sensor has three acceleration channels and five strain-sensing channels for multi-metric sensing, together with an event-driven power-management system that activates the sensors only when vibration exceeds a predetermined limit or a timer is triggered. This power management reduces power consumption, and additional solar-panel charging enables long-term operation. Data from the sensor are transmitted to the server in real time via low-power LTE-CAT M1 communication, which requires no additional gateway device. Furthermore, the cloud server receives the multi-metric data and runs a displacement-fusion algorithm to obtain reference-free structural displacement for ambient structural assessment. The proposed digital SOC system was experimentally validated on a steel railroad bridge and a concrete girder bridge.
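
As a generic illustration of acceleration/strain data fusion for reference-free displacement (a complementary-filter style sketch, not the paper's actual algorithm), low-frequency displacement can come from strain while high-frequency displacement comes from double-integrated acceleration. The sampling rate, cutoff, and strain-to-displacement calibration are assumed.

```python
# Complementary-filter style fusion of strain-derived and acceleration-derived
# displacement; all parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import cumulative_trapezoid

def fuse_displacement(accel, strain_disp, fs=100.0, fc=1.0):
    """accel: acceleration [m/s^2]; strain_disp: displacement estimated from
    strain via a calibrated strain-to-displacement factor [m]."""
    t = np.arange(len(accel)) / fs
    vel = cumulative_trapezoid(accel, t, initial=0.0)
    disp_acc = cumulative_trapezoid(vel, t, initial=0.0)     # drift-prone
    b_hi, a_hi = butter(2, fc / (fs / 2), btype="high")
    b_lo, a_lo = butter(2, fc / (fs / 2), btype="low")
    # keep only the high-frequency part of the integrated displacement and the
    # low-frequency part of the strain-based displacement
    return filtfilt(b_hi, a_hi, disp_acc) + filtfilt(b_lo, a_lo, strain_disp)
```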

Particle Filter Based Robust Multi-Human 3D Pose Estimation for Vehicle Safety Control (차량 안전 제어를 위한 파티클 필터 기반의 강건한 다중 인체 3차원 자세 추정)

  • Park, Joonsang;Park, Hyungwook
    • Journal of Auto-vehicle Safety Association, v.14 no.3, pp.71-76, 2022
  • In autonomous vehicles, 3D pose estimation can be an effective way to enhance safety control for out-of-position (OOP) passengers. There have been many studies on camera-based human pose estimation, but previous methods have limitations in automotive applications: CNN methods fail in ways that are hard to explain, making them unreliable, and other methods perform poorly. This paper proposes a robust, real-time, multi-human 3D pose estimation architecture for in-vehicle use with a monocular RGB camera. Using a particle filter, the approach integrates CNN 2D/3D pose measurements with information available in the vehicle. Computer simulations were performed to confirm the accuracy and robustness of the proposed algorithm.
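
The following toy particle-filter update for a single joint position illustrates how noisy CNN pose measurements could be integrated over time. The dimensions, noise levels, motion model, and resampling scheme are assumptions for illustration, not the paper's design.

```python
# Toy particle filter: random-walk prediction, Gaussian measurement update,
# multinomial resampling (kept deliberately simple).
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.normal(0.0, 0.5, size=(N, 3))   # 3D joint-position hypotheses
weights = np.full(N, 1.0 / N)

def pf_step(particles, weights, z, motion_std=0.02, meas_std=0.2):
    # predict: random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # update: weight by likelihood of the CNN 3D measurement z
    d2 = np.sum((particles - z) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()
    # resample (systematic resampling would be better; multinomial keeps it short)
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

z = np.array([0.1, -0.2, 0.9])                   # dummy CNN joint measurement
particles, weights = pf_step(particles, weights, z)
print("estimated joint position:", particles.mean(axis=0))
```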

Implementation of Multiple Sensor Data Fusion Algorithm for Fire Detection System

  • Park, Jung Kyu;Nam, Kihun
    • Journal of the Korea Society of Computer and Information, v.25 no.7, pp.9-16, 2020
  • In this paper, we propose a prototype design and implementation of a fire-detection algorithm that uses multiple sensors. The proposed fire-detection system determines whether a fire has occurred by applying rules to the data from the multiple sensors. A fire typically takes about 3 to 5 minutes to develop, and this window is the optimal time for detection, so timely identification of potential fires is important for fire management. Current fire-detection devices, however, are very vulnerable to false alarms because they rely on a single sensor to detect smoke or heat. With the recent development of IoT technology, it has become possible to integrate multiple sensors into a fire detector and to make the detector smart enough to communicate with other devices and perform programmed tasks. In 10 real-world experiments, the prototype achieved a success rate of 90% and a false-alarm rate of 10%.
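
A hedged sketch of a rule-based multi-sensor fire decision follows; the sensor set, thresholds, and two-vote rule are illustrative assumptions, not the paper's calibrated rules.

```python
# Rule-based fusion of multiple sensor readings into a fire/no-fire decision.
from dataclasses import dataclass

@dataclass
class SensorReading:
    temperature_c: float      # ambient temperature
    smoke_ppm: float          # smoke/particulate concentration
    co_ppm: float             # carbon monoxide concentration
    flame_ir: float           # normalized IR flame-sensor output in [0, 1]

def is_fire(r: SensorReading) -> bool:
    votes = 0
    votes += r.temperature_c > 57.0      # heat rule (assumed threshold)
    votes += r.smoke_ppm > 300.0         # smoke rule (assumed threshold)
    votes += r.co_ppm > 50.0             # CO rule (assumed threshold)
    votes += r.flame_ir > 0.6            # flame rule (assumed threshold)
    # require agreement of at least two sensors to suppress single-sensor false alarms
    return votes >= 2

print(is_fire(SensorReading(temperature_c=65.0, smoke_ppm=420.0,
                            co_ppm=12.0, flame_ir=0.1)))
```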

Analysis of 3D Reconstruction Accuracy by ToF-Stereo Fusion (ToF와 스테레오 융합을 이용한 3차원 복원 데이터 정밀도 분석 기법)

  • Jung, Sukwoo;Lee, Youn-Sung;Lee, KyungTaek
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.10a, pp.466-468, 2022
  • 3D reconstruction is an important issue in many applications such as Augmented Reality (AR), eXtended Reality (XR), and the Metaverse. For 3D reconstruction, depth maps can be acquired with a stereo camera and a time-of-flight (ToF) sensor. We used both sensors complementarily to improve the accuracy of the 3D data. First, a general multi-camera calibration technique that uses both color and depth information was applied. Next, the depth maps of the two sensors were fused through 3D registration and a reprojection approach. The fused data were compared with ground-truth data reconstructed with an RTC360 scanner, and Geomagic Wrap was used to analyze the average RMSE between the two datasets. The proposed procedure was implemented and tested with real-world data.
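
A simplified depth-map fusion sketch is given below: back-project the ToF depth to 3D points, apply the extrinsic transform obtained from calibration, reproject into the stereo camera, and merge where both sensors agree. The intrinsics, extrinsics, and agreement threshold are placeholders; occlusion handling and hole filling are omitted.

```python
# Reproject ToF depth into the stereo camera frame and fuse the two depth maps.
import numpy as np

def backproject(depth, K):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=-1)              # (h, w, 3) points

def reproject_tof_to_stereo(tof_depth, K_tof, K_stereo, T, out_shape):
    pts = backproject(tof_depth, K_tof).reshape(-1, 3)
    pts = (T[:3, :3] @ pts.T + T[:3, 3:4]).T          # ToF frame -> stereo frame
    z = pts[:, 2]
    valid = z > 0
    u = np.round(K_stereo[0, 0] * pts[valid, 0] / z[valid] + K_stereo[0, 2]).astype(int)
    v = np.round(K_stereo[1, 1] * pts[valid, 1] / z[valid] + K_stereo[1, 2]).astype(int)
    out = np.zeros(out_shape, dtype=np.float32)
    inside = (u >= 0) & (u < out_shape[1]) & (v >= 0) & (v < out_shape[0])
    out[v[inside], u[inside]] = z[valid][inside]
    return out

def fuse_depths(stereo_depth, tof_in_stereo, max_diff=0.05):
    both = (stereo_depth > 0) & (tof_in_stereo > 0) & \
           (np.abs(stereo_depth - tof_in_stereo) < max_diff)
    fused = np.where(tof_in_stereo > 0, tof_in_stereo, stereo_depth)
    fused[both] = 0.5 * (stereo_depth[both] + tof_in_stereo[both])
    return fused
```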
