• Title/Summary/Keyword: Multisensor


Improvement of Land Cover Classification Accuracy by Optimal Fusion of Aerial Multi-Sensor Data

  • Choi, Byoung Gil;Na, Young Woo;Kwon, Oh Seob;Kim, Se Hun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.36 no.3
    • /
    • pp.135-152
    • /
    • 2018
  • The purpose of this study is to propose an optimal fusion method for aerial multi-sensor data to improve the accuracy of land cover classification. Recently, in the fields of environmental impact assessment and land monitoring, high-resolution image data have been acquired over many regions with aerial multi-sensors for quantitative land management, but most of the data are used only for the purpose of the original project. Hyperspectral sensor data, which are mainly used for land cover classification, have the advantage of high classification accuracy, but because only visible and near-infrared wavelengths are acquired, and at low spatial resolution, it is difficult to classify the land cover state accurately. Therefore, research is needed on improving the accuracy of land cover classification by fusing hyperspectral sensor data with multispectral sensor and aerial laser sensor data. As fusion methods for aerial multisensor data, we propose a pixel ratio adjustment method, a band accumulation method, and a spectral graph adjustment method. Fusion parameters such as the fusion rate, band accumulation, and spectral graph expansion ratio were selected according to the fusion method, and fusion data were generated and the land cover classification accuracy calculated while applying incremental changes to the fusion variables. Optimal fusion variables for the hyperspectral, multispectral, and aerial laser data were derived by considering the correlation between land cover classification accuracy and the fusion variables.
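The band accumulation method described above amounts to stacking co-registered rasters from the different sensors into a single feature cube for classification. A minimal sketch; the array shapes and the assumption that all rasters are already resampled to one common grid are illustrative, not the paper's implementation:

```python
import numpy as np

def band_accumulation_fusion(hyper, multi, lidar_dsm):
    """Stack co-registered rasters from three sensors into one feature cube.

    hyper:     (H, W, Bh) hyperspectral bands
    multi:     (H, W, Bm) multispectral bands resampled to the same grid
    lidar_dsm: (H, W)     aerial-laser surface height
    """
    return np.concatenate([hyper, multi, lidar_dsm[..., None]], axis=-1)

# toy example: 3 hyperspectral bands + 2 multispectral bands + 1 height band
fused = band_accumulation_fusion(
    np.zeros((4, 4, 3)), np.zeros((4, 4, 2)), np.zeros((4, 4)))
print(fused.shape)  # (4, 4, 6)
```

Each pixel of the fused cube then carries all sensors' measurements, so any per-pixel classifier can use them jointly.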

Radio Beacon-based Seamless Indoor and Outdoor Positioning for Personal Navigation Systems (개인 휴대용 네비게이션을 위한 라디오 비컨 기반 실내외 연속측위 시스템)

  • Kim, Sang-Kyoon;Jang, Yoon-Ho;Bae, Sang-Jun;Kwak, Kyung-Sup
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.8 no.4
    • /
    • pp.84-92
    • /
    • 2009
  • In this paper, we propose a Place Lab-based positioning system that covers both indoor and outdoor environments, using the received signal strength of radio beacons such as Wi-Fi, Bluetooth and CDMA together with GPS signals from satellites. Conventional Place Lab utilizes various positioning parameters to estimate indoor location, but this conventional system is limited in the range and efficiency of its usage. We therefore define a converged model of multisensor data and reorganize Place Lab to overcome these limitations. The proposed system uses radio beacon signals and GPS signals together to estimate location. Furthermore, because it is realized as an OSGi bundle, it provides a seamless PNS service on many mobile devices. We evaluated the performance of the proposed system on a SAMSUNG T*OMNIA SCH-M490 smartphone, and the results show that the system can support the PNS service.
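One common way to turn beacon received-signal-strength readings into a position estimate is a weighted centroid of the known beacon locations. The abstract does not publish its estimator, so the sketch below, including the RSSI-to-weight mapping, is only an illustrative assumption:

```python
def weighted_centroid(beacons):
    """Estimate a position from beacon fixes weighted by signal strength.

    beacons: list of (lat, lon, rssi_dbm). Stronger (less negative) RSSI
    suggests the beacon is closer, so it receives a larger weight.
    """
    weights = [10 ** (rssi / 20.0) for _, _, rssi in beacons]
    total = sum(weights)
    lat = sum(w * b[0] for w, b in zip(weights, beacons)) / total
    lon = sum(w * b[1] for w, b in zip(weights, beacons)) / total
    return lat, lon

# hypothetical fixes: a strong Wi-Fi beacon and a weak CDMA cell
pos = weighted_centroid([
    (37.45, 126.65, -40),
    (37.46, 126.66, -80),
])
```

The estimate is pulled strongly toward the beacon with the better signal, which is the intended behavior when nearer beacons dominate.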


Image Georeferencing using AT without GCPs for a UAV-based Low-Cost Multisensor System (UAV 기반 저가 멀티센서시스템을 위한 무기준점 AT를 이용한 영상의 Georeferencing)

  • Choi, Kyoung-Ah;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.2
    • /
    • pp.249-260
    • /
    • 2009
  • The georeferencing accuracy of the sensory data acquired by an aerial monitoring system depends heavily on the performance of the GPS/IMU mounted on the system. Employing a high-performance but expensive GPS/IMU unit increases the development cost of the overall system. In this study, we simulate the images and GPS/IMU data acquired by a UAV-based aerial monitoring system using an inexpensive MEMS-type integrated GPS/IMU, and perform image georeferencing by applying aerial triangulation to the simulated sensory data without any GCPs. The georeferencing results are then analyzed to assess the accuracy of the estimated exterior orientation parameters of the images and the ground point coordinates. The analysis indicates that the RMSEs of the exterior orientation parameters and ground point coordinates are decreased significantly, by about 90%, in comparison with those obtained from direct georeferencing without aerial triangulation. From this study, we confirmed the strong feasibility of developing a low-cost real-time aerial monitoring system.
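The accuracy comparison above rests on computing RMSEs of the estimated quantities against simulated truth. A toy illustration of that comparison; the error values below are hypothetical, not the paper's:

```python
import numpy as np

def rmse(estimated, truth):
    """Root-mean-square error between estimated and true coordinates."""
    d = np.asarray(estimated) - np.asarray(truth)
    return float(np.sqrt(np.mean(d ** 2)))

# hypothetical ground-point errors (m): direct georeferencing vs. AT-refined
truth = np.zeros(5)
direct = np.array([2.1, -1.8, 2.4, -2.0, 1.9])   # GPS/IMU only
refined = direct * 0.1                            # ~90% reduction after AT

reduction = 1.0 - rmse(refined, truth) / rmse(direct, truth)
print(f"RMSE reduced by {reduction:.0%}")  # RMSE reduced by 90%
```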

Information Fusion of Photogrammetric Imagery and Lidar for Reliable Building Extraction (광학 영상과 Lidar의 정보 융합에 의한 신뢰성 있는 구조물 검출)

  • Lee, Dong-Hyuk;Lee, Kyoung-Mu;Lee, Sang-Uk
    • Journal of Broadcast Engineering
    • /
    • v.13 no.2
    • /
    • pp.236-244
    • /
    • 2008
  • We propose a new building detection and description algorithm for Lidar data and photogrammetric imagery using color segmentation, line segment matching, and perceptual grouping. The algorithm consists of two steps. In the first step, from the initial building regions extracted from the Lidar data and the color segmentation results from the photogrammetric imagery, we extract coarse building boundaries by applying a split-and-merge technique to the aerial imagery, guided by the Lidar results. In the second step, we extract precise building boundaries from the coarse boundaries and the edges in the aerial imagery using line segment matching and perceptual grouping. The contribution of this algorithm is that color information in the photogrammetric imagery is used to complement collapsed building boundaries obtained from Lidar. Moreover, the linearity of the edges and the construction of closed roof forms are used to reflect the characteristics of man-made objects. Experimental results on multisensor data demonstrate that the proposed algorithm produces more accurate and reliable results than using the Lidar sensor alone.
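The first step, letting Lidar height evidence select which optical color segments belong to buildings, can be sketched as a per-segment majority vote. The height threshold and vote fraction below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def refine_building_mask(lidar_height, color_labels, min_height=3.0):
    """Intersect a Lidar height mask with color-segmentation regions.

    A segment from the optical image is kept as 'building' when most of
    its pixels exceed the height threshold, recovering boundary pixels
    that a coarse Lidar mask alone might collapse or miss.
    """
    tall = lidar_height > min_height
    mask = np.zeros_like(tall)
    for label in np.unique(color_labels):
        region = color_labels == label
        if tall[region].mean() > 0.5:       # majority of segment is tall
            mask |= region
    return mask

# toy scene: segment 0 is a roof, segment 1 is ground
lidar = np.array([[5., 5., 0., 0.], [5., 0., 0., 0.]])
labels = np.array([[0, 0, 1, 1], [0, 1, 1, 1]])
mask = refine_building_mask(lidar, labels)
```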

Dempster-Shafer Fusion of Multisensor Imagery Using Gaussian Mass Function (Gaussian분포의 질량함수를 사용하는 Dempster-Shafer영상융합)

  • Lee Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.20 no.6
    • /
    • pp.419-425
    • /
    • 2004
  • This study proposes a data fusion method based on the Dempster-Shafer evidence theory. The Dempster-Shafer fusion uses mass functions obtained under a class-independent Gaussian assumption. In the Dempster-Shafer approach, uncertainty is represented by the 'belief interval', the difference between the values of the 'belief' and 'plausibility' functions, which measure imprecision and uncertainty. By using the Dempster-Shafer scheme to fuse data from multiple sensors, the classification results can be improved. It also lets users include regions with mixed classes in the training process; in most practical cases, it is hard to find regions with a pure class. In this study, the proposed method was applied to a KOMPSAT-EOC panchromatic image and LANDSAT ETM+ NDVI data acquired over the Yongin/Nuengpyung area of Kyunggi-do. The results show that the method has potential for effective fusion of multiple-sensor imagery.
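Restricted to singleton class hypotheses, the scheme above reduces to building a mass function from class-conditional Gaussian likelihoods for each sensor and combining the sensors with Dempster's rule (pointwise product renormalized by one minus the conflict). A minimal sketch with hypothetical class statistics, not the paper's:

```python
import math

def gaussian_mass(x, means, sigmas):
    """Mass function from class-conditional Gaussian likelihoods,
    normalized so the masses over the singleton classes sum to 1."""
    lik = [math.exp(-0.5 * ((x - m) / s) ** 2) / s
           for m, s in zip(means, sigmas)]
    total = sum(lik)
    return [v / total for v in lik]

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over the same singleton
    classes: the product mass on each class, renormalized by the total
    non-conflicting mass."""
    joint = [a * b for a, b in zip(m1, m2)]
    conflict = 1.0 - sum(joint)             # mass on disagreeing class pairs
    return [v / (1.0 - conflict) for v in joint]

# two sensors, two classes (hypothetical Gaussian statistics)
m_pan = gaussian_mass(0.40, means=[0.3, 0.8], sigmas=[0.1, 0.1])
m_ndvi = gaussian_mass(0.35, means=[0.3, 0.8], sigmas=[0.1, 0.1])
fused = dempster_combine(m_pan, m_ndvi)
```

Both sensors' evidence favors the first class here, so the fused mass concentrates on it more sharply than either sensor alone.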

Quantitative Flood Forecasting Using Remotely-Sensed Data and Neural Networks

  • Kim, Gwangseob
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2002.05a
    • /
    • pp.43-50
    • /
    • 2002
  • Accurate quantitative forecasting of rainfall for basins with a short response time is essential to predict streamflow and flash floods. Previously, neural networks were used to develop a Quantitative Precipitation Forecasting (QPF) model that greatly improved forecasting skill at specific locations in Pennsylvania, using Numerical Weather Prediction (NWP) output together with rainfall and radiosonde data. The objective of this study was to improve an existing artificial neural network model and incorporate the evolving structure and frequency of intense weather systems in the mid-Atlantic region of the United States for improved flood forecasting. Besides radiosonde and rainfall data, the model also used satellite-derived characteristics of storm systems, such as tropical cyclones, mesoscale convective complexes and convective cloud clusters, as input. The convective classification and tracking system (CCATS) was used to identify and quantify storm properties such as lifetime, area, eccentricity, and track. As in standard expert prediction systems, the fundamental structure of the neural network model was learned from the hydroclimatology of the relationships between weather systems, rainfall production and streamflow response in the study area. The new Quantitative Flood Forecasting (QFF) model was applied to predict streamflow peaks with lead-times of 18 and 24 hours over a five-year period in four watersheds on the leeward side of the Appalachian mountains in the mid-Atlantic region. Threat scores consistently above 0.6 and close to 0.8-0.9 were obtained for the 18-hour lead-time forecasts, and skill scores of at least 4% and up to 6% were attained for the 24-hour lead-time forecasts. This work demonstrates that multisensor data cast into an expert information system such as a neural network, if built upon scientific understanding of regional hydrometeorology, can lead to significant gains in the forecast skill of extreme rainfall and associated floods.
In particular, this study validates our hypothesis that accurate and extended flood forecast lead-times can be attained by taking into consideration the synoptic evolution of atmospheric conditions extracted from the analysis of large-area remotely sensed imagery. Because physically-based numerical weather prediction and river routing models cannot accurately depict complex natural non-linear processes, and thus have difficulty simulating extreme events such as heavy rainfall and floods, data-driven approaches should be viewed as a strong alternative in operational hydrology. This is all the more pertinent at a time when the diversity of sensors in satellites and ground-based operational weather monitoring systems provides large volumes of data on a real-time basis.
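The threat score (also called the critical success index) used in the verification above is simply the number of hits divided by the sum of hits, misses and false alarms; the counts below are hypothetical:

```python
def threat_score(hits, misses, false_alarms):
    """Threat score (critical success index): the fraction of all
    observed-or-forecast events that were correctly forecast."""
    return hits / (hits + misses + false_alarms)

# hypothetical 18-hour lead-time verification counts
ts = threat_score(hits=16, misses=2, false_alarms=2)
print(ts)  # 0.8
```

A score of 0.8, as in this toy count, corresponds to the upper end of the range reported for the 18-hour forecasts.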


A Fusion Algorithm considering Error Characteristics of the Multi-Sensor (다중센서 오차특성을 고려한 융합 알고리즘)

  • Hyun, Dae-Hwan;Yoon, Hee-Byung
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.36 no.4
    • /
    • pp.274-282
    • /
    • 2009
  • Various location tracking sensors, such as GPS, INS, radar, and optical equipment, are used for tracking moving targets. To track moving targets effectively, an effective fusion method for these heterogeneous devices is needed. Previous studies treated the estimated values from each sensor as different models and fused them, taking the sensors' different error characteristics into account to improve tracking performance with heterogeneous multi-sensors. However, when sensor errors increase sharply, the errors of the estimated values from the other sensors also increase, and approaches that substitute changed sensor estimates for the sensor probability cannot be applied in real time. In this study, the sensor probability is obtained by comparing the RMSE (Root Mean Square Error) of the difference between the updated and measured values of the Kalman filter for each sensor. The step of substituting the newly combined values back in as the Kalman filter inputs for each sensor is excluded. This improves both the real-time applicability of the estimated sensor values and the tracking performance in areas where a sensor's performance has decreased rapidly. The proposed algorithm adds the error characteristic of each sensor as a conditional probability value, and achieves greater accuracy by performing the track fusion with the most reliable sensors. In an experiment, the trajectory of a UAV is generated and the performance is analyzed against other fusion algorithms.
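The core idea, weighting each sensor's track by a probability derived from its Kalman-filter innovation RMSE, can be sketched as follows. The inverse-RMSE weighting and all numbers are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fuse_tracks(estimates, rmses):
    """Fuse per-sensor position estimates, weighting each sensor by a
    probability derived from its innovation RMSE: sensors whose Kalman
    updates deviate more from their measurements get less weight."""
    inv = 1.0 / np.asarray(rmses)
    prob = inv / inv.sum()                  # sensor probabilities, sum to 1
    return prob @ np.asarray(estimates), prob

# hypothetical 2-D position estimates from GPS, INS and radar
est = [[100.0, 50.0], [103.0, 52.0], [101.0, 49.0]]
fused, prob = fuse_tracks(est, rmses=[1.0, 4.0, 2.0])
```

The fused track stays close to the low-RMSE sensor (GPS here), so a sensor whose errors suddenly grow is automatically down-weighted on the next update.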

Availability Evaluation of Object Detection Based on Deep Learning Method by Using Multitemporal and Multisensor Data for Nuclear Activity Analysis (핵 활동 분석을 위한 다시기·다종 위성영상의 딥러닝 모델 기반 객체탐지의 활용성 평가)

  • Seong, Seon-kyeong;Choi, Ho-seong;Mo, Jun-sang;Choi, Jae-wan
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1083-1094
    • /
    • 2021
  • To monitor nuclear activity in inaccessible areas, it is necessary to establish a methodology for analyzing changes in nuclear activity-related objects using high-resolution satellite images. However, traditional object detection and change detection techniques using satellite images are difficult to apply across various fields because of the effects of season and weather at the time of image acquisition. Therefore, in this paper, objects of interest were detected in satellite images using a deep learning model, and object changes were analyzed based on the detection results. Initial training of the deep learning model was performed using an open dataset for object detection, and an additional training dataset for the region of interest was generated and applied through transfer learning. After detecting objects in multitemporal and multisensor satellite images, we used the detections to find changes in the objects between images. The experiments confirmed that object detection results from various satellite images can be used directly for change detection in nuclear activity-related monitoring of inaccessible areas.
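Change analysis from per-date detection results can be sketched by matching detected boxes across acquisition dates with an IoU threshold and flagging unmatched detections as changes. The threshold and box coordinates below are illustrative, not from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def changed_objects(boxes_t1, boxes_t2, thresh=0.5):
    """Objects detected at time t2 with no matching detection at t1."""
    return [b for b in boxes_t2
            if all(iou(b, a) < thresh for a in boxes_t1)]

# toy detections: one persistent object, one newly appeared object
t1 = [(10, 10, 30, 30)]
t2 = [(11, 10, 30, 31), (50, 50, 70, 70)]
new = changed_objects(t1, t2)  # [(50, 50, 70, 70)]
```

Because matching is done on detected objects rather than raw pixels, seasonal and radiometric differences between the multitemporal, multisensor images matter far less than in pixel-based change detection.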

Monitoring the Coastal Waters of the Yellow Sea Using Ferry Box and SeaWiFS Data (정기여객선 현장관측 시스템과 SeaWiFS 자료를 이용한 서해 연안 해수환경 모니터링)

  • Ryu, Joo-Hyung;Moon, Jeong-Eon;Min, Jee-Eun;Ahn, Yu-Hwan
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.4
    • /
    • pp.323-334
    • /
    • 2007
  • We analyzed ocean environmental data from water samples and automatic measurement instruments aboard the Incheon-Jeju passenger ship, collected 18 times over the four years from 2001 to 2004. The objectives of this study were to monitor the spatial and temporal variations of ocean environmental parameters in the coastal waters of the Yellow Sea using water sample analysis, and to compare and analyze the reliability of automatic chlorophyll and turbidity sensors against in situ measurements. The chlorophyll concentration ranged from 0.1 to $6.0mg/m^3$. High concentrations occurred in Gyeonggi Bay on all cruises, with a maximum of $16.5mg/m^3$ in this area during September 2004. The absorption coefficients of dissolved organic matter at 400 nm were below $0.5m^{-1}$ except in August 2001. During 2002-2003, they showed no distinct seasonal variation, ranging from 0.1 to $0.4m^{-1}$. The suspended sediment (SS) concentration was below $20g/m^3$ in most of the area through all seasons, except in Gyeonggi Bay and around the Mokpo area. In general, the SS concentration in autumn and winter was higher than in summer, and the central area of the Yellow Sea showed lower values, below $10g/m^3$. Compared with the water sampling method over the four measurement campaigns, the YSI fluorometer for chlorophyll concentration had very low reliability, while the turbidity sensor had an $R^2$ value of 0.77. For automatic measurement of chlorophyll and suspended sediment concentration, the McVan and Choses sensors performed better than the YSI multisensor. The SeaWiFS SS distribution map matched the in situ measurements well spatially, although there were small differences in the quantitative concentrations.
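The sensor-reliability check above reduces to computing $R^2$ between paired automatic-sensor readings and water-sample reference values. A minimal sketch using the squared Pearson correlation; the paired measurements below are made up for illustration:

```python
import numpy as np

def r_squared(sensor, reference):
    """Squared Pearson correlation between automatic-sensor readings
    and water-sample reference values for the same stations."""
    r = np.corrcoef(sensor, reference)[0, 1]
    return float(r ** 2)

# hypothetical paired turbidity measurements (sensor vs. sampled)
r2 = r_squared([3.1, 8.0, 14.9, 20.2], [3.0, 8.5, 15.0, 19.5])
```

An $R^2$ near 1 indicates a sensor that tracks the reference closely, while the very low value found for the YSI fluorometer would show up here as $R^2$ near 0.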