• Title/Summary/Keyword: Satellite image fusion

Similarity Measurement using Gabor Energy Feature and Mutual Information for Image Registration

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing / v.27 no.6 / pp.693-701 / 2011
  • Image registration is an essential step in analyzing time series of satellite images for image fusion and change detection. Mutual Information (MI) is commonly used as a similarity measure for image registration because of its robustness to noise. Due to radiometric differences, however, it is difficult to apply MI to multi-temporal satellite images using pixel intensity directly. Richer image features for MI can be obtained by employing a Gabor filter whose characteristics, such as filter size, frequency, and orientation, vary adaptively for each pixel. In this paper we employ the Bidirectional Gabor Filter Energy (BGFE), defined from Gabor filter features, and use the BGFE as the image feature in the MI similarity calculation. The experimental results show that the proposed method is more robust than the conventional MI method based on intensity or gradient magnitude.
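
As context for the similarity measure described above, the following is a minimal sketch of mutual information computed from the joint histogram of two co-registered feature images (for example, Gabor-energy images); NumPy is assumed, and the paper's BGFE feature itself is not reproduced here.

    import numpy as np

    def mutual_information(a, b, bins=64):
        """Mutual information between two co-registered 2-D feature images."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()                 # joint probability p(a, b)
        px = pxy.sum(axis=1, keepdims=True)       # marginal p(a)
        py = pxy.sum(axis=0, keepdims=True)       # marginal p(b)
        nz = pxy > 0                              # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))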

Cloud Detection and Restoration of Landsat-8 using STARFM (A Study on Cloud Detection and Restoration of Landsat-8 Imagery for Disaster Monitoring)

  • Lee, Mi Hee;Cheon, Eun Ji;Eo, Yang Dam
    • Korean Journal of Remote Sensing / v.35 no.5_2 / pp.861-871 / 2019
  • Landsat satellite images are increasingly used for disaster damage analysis and disaster monitoring because they allow periodic, wide-area observation of damaged regions. However, periodic disaster monitoring is limited by areas of missing data caused by clouds, an inherent characteristic of optical satellite images, so the missing areas need to be restored. This study detected and removed clouds and cloud shadows using the quality assessment (QA) band provided with Landsat-8 images, and restored the removed areas with the spatial and temporal adaptive reflectance fusion model (STARFM) algorithm. The image restored by the proposed method was compared, via the maximum likelihood classification (MLC) method, with an image restored by a conventional restoration method. As a result, the STARFM-based restoration showed an overall accuracy of 89.40% and was confirmed to be more effective than the conventional image restoration method. The results of this study are therefore expected to increase the utility of Landsat satellite images for disaster analysis.
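
The cloud and cloud-shadow masking step described above can be sketched as follows; the bit positions assume the Landsat-8 Collection 2 QA_PIXEL convention (bit 3 cloud, bit 4 cloud shadow) and should be checked against the product documentation for the collection actually used, and the STARFM gap-filling step itself is not shown.

    import numpy as np

    CLOUD_BIT = 3          # assumed Collection 2 QA_PIXEL layout
    CLOUD_SHADOW_BIT = 4

    def cloud_shadow_mask(qa_band):
        """True where the QA band flags a pixel as cloud or cloud shadow."""
        cloud = (qa_band >> CLOUD_BIT) & 1
        shadow = (qa_band >> CLOUD_SHADOW_BIT) & 1
        return (cloud | shadow).astype(bool)

    def remove_flagged_pixels(image, qa_band):
        """Blank out flagged pixels; the gaps would then be predicted from a
        clear reference date by a STARFM-style fusion step."""
        cleaned = image.astype(float).copy()
        cleaned[cloud_shadow_mask(qa_band)] = np.nan
        return cleaned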

Evidential Fusion of Multisensor Multichannel Imagery

  • Lee Sang-Hoon
    • Korean Journal of Remote Sensing / v.22 no.1 / pp.75-85 / 2006
  • This paper deals with data fusion for land-cover classification using multisensor imagery. Dempster-Shafer evidence theory is employed to combine the information extracted from multiple data sets of the same site. The Dempster-Shafer approach has two important advantages for remote sensing applications: it makes it possible to consider compound classes consisting of several land-cover types, and the incompleteness of each sensor's data due to cloud cover can be modeled in the fusion process. Image classification based on Dempster-Shafer theory usually assumes that each sensor is represented by a single channel. Here, an evidential approach to image classification, which uses a mass function obtained under the assumption of a class-independent beta distribution, is discussed for multiple sets of multichannel data acquired from different sensors. The proposed method was applied to KOMPSAT-1 EOC panchromatic imagery and LANDSAT ETM+ data acquired over the Yongin/Nuengpyung area of the Korean peninsula. The experiment showed that the method is highly effective for applications in which it is hard to find homogeneous regions represented by a single land-cover type during training.
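
Dempster's rule of combination, on which the fusion above rests, can be sketched as below; focal elements are represented as frozensets of land-cover labels so that compound classes are supported, while the paper's beta-distribution mass function is not reproduced and the example masses are hypothetical.

    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions (dict: frozenset of classes -> mass)."""
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb              # mass assigned to the empty set
        if conflict >= 1.0:
            raise ValueError("sources are in total conflict")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # Hypothetical example: the optical evidence supports 'forest', while the
    # cloud-affected SAR evidence cannot separate 'forest' from 'water'.
    m_optical = {frozenset({"forest"}): 0.7,
                 frozenset({"forest", "water", "urban"}): 0.3}
    m_sar = {frozenset({"forest", "water"}): 0.6,
             frozenset({"forest", "water", "urban"}): 0.4}
    print(dempster_combine(m_optical, m_sar))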

Accurate Classification of Water Area with Fusion of RADARSAT and SPOT Satellite Imagery (Improving the Classification Accuracy of Water Areas Using Image Fusion of RADARSAT and SPOT Satellite Imagery)

  • 손홍규;송영선;박정환;유환희
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2003.04a / pp.277-281 / 2003
  • We fused a RADARSAT image and a SPOT panchromatic image by wavelet transform in order to improve the accuracy of classification of water areas. In water, the fused image not only maintained the characteristic low pixel values of the SAR image but also had improved boundary information, which leads to a more accurate classification of water areas.
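
A minimal sketch of wavelet-domain fusion of two co-registered images is given below, assuming PyWavelets; the substitution rule (SAR approximation band plus panchromatic detail bands) is an illustrative choice consistent with the goal above, not necessarily the exact rule used in the paper.

    import numpy as np
    import pywt

    def wavelet_fuse(sar, pan, wavelet="db2", level=2):
        """Fuse two co-registered images of identical shape in the wavelet domain."""
        sar_coeffs = pywt.wavedec2(sar.astype(float), wavelet, level=level)
        pan_coeffs = pywt.wavedec2(pan.astype(float), wavelet, level=level)
        # Keep the SAR approximation (low backscatter of water) and take the
        # detail bands from the panchromatic image (sharper boundaries).
        fused = [sar_coeffs[0]] + list(pan_coeffs[1:])
        return pywt.waverec2(fused, wavelet)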

Unsupervised Image Classification through Multisensor Fusion using Fuzzy Class Vector

  • 이상훈
    • Korean Journal of Remote Sensing / v.19 no.4 / pp.329-339 / 2003
  • In this study, a decision-level image fusion approach is proposed for unsupervised classification of images acquired from multiple sensors with different characteristics. The proposed method applies, separately for each sensor, an unsupervised classification scheme based on spatial region-growing segmentation that makes use of hierarchical clustering, and iteratively computes maximum likelihood estimates of fuzzy class vectors for the segmented regions with the EM (expectation-maximization) algorithm. The fuzzy class vector is an indicator vector whose elements represent the probabilities that the region belongs to each of the existing classes. The classification results of the individual sensors are then combined using these fuzzy class vectors. This approach does not require as high a precision of spatial co-registration between the images of different sensors as pixel-level image fusion does. The proposed method was applied to multispectral SPOT and AIRSAR data observed over the north-eastern area of Jeollabuk-do, and the experimental results show that it provides more accurate information for classification than the augmented-vector technique, the most conventional approach to pixel-level image fusion.
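
Decision-level fusion of per-sensor fuzzy class vectors for a segmented region can be sketched as follows; the normalized element-wise product used here is an illustrative combination rule, not necessarily the one used in the paper, NumPy is assumed, and the example probabilities are hypothetical.

    import numpy as np

    def fuse_fuzzy_class_vectors(vectors, eps=1e-12):
        """Combine one fuzzy class vector per sensor into a single class vector."""
        fused = np.ones(len(vectors[0]))
        for v in vectors:
            fused *= np.asarray(v, dtype=float) + eps   # naive-Bayes-style product
        return fused / fused.sum()

    # Hypothetical example: SPOT favours class 0, AIRSAR is ambiguous (0 vs 2).
    fused = fuse_fuzzy_class_vectors([[0.7, 0.2, 0.1], [0.4, 0.1, 0.5]])
    print(fused, "->", int(np.argmax(fused)))           # class 0 wins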

Pattern Classification of Multi-Spectral Satellite Images based on Fusion of Fuzzy Algorithms

  • Jeon, Young-Joon;Kim, Jin-Il
    • Journal of KIISE: Software and Applications / v.32 no.7 / pp.674-682 / 2005
  • This paper proposes a classification method for multi-spectral satellite images based on the fusion of the fuzzy Gustafson-Kessel (G-K) algorithm and the PCM algorithm. The proposed algorithm establishes initial cluster centers by selecting training data from each category and then executes the fuzzy G-K algorithm; the PCM algorithm is then run using the classification result of the fuzzy G-K algorithm. A pixel is assigned to a category when the fuzzy G-K and PCM results agree on that category. If the two algorithms disagree, the pixel is assigned by a Bayesian maximum likelihood algorithm, which uses only the data lying within the average intra-cluster distance. Because the pixels within the average intra-cluster distance approximately follow a normal distribution, restricting the Bayesian maximum likelihood step to them has a positive effect on the classification result. The proposed method was tested on IKONOS and Landsat TM remote sensing satellite images. As a result, the overall accuracy was better than that of the individual fuzzy G-K and PCM algorithms and of the conventional maximum likelihood classification algorithm.
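
The agreement/fallback logic described above can be sketched as follows; labels on which the two clusterings agree are kept, and the remaining pixels are assigned by a Gaussian maximum likelihood rule whose per-class parameters are assumed to be estimated from pixels within the average intra-cluster distance (the G-K and PCM clustering steps themselves are not shown).

    import numpy as np

    def fuse_labels(gk_labels, pcm_labels, pixels, means, covs, priors):
        """Keep agreeing labels; classify disagreements by Gaussian maximum likelihood."""
        labels = np.where(gk_labels == pcm_labels, gk_labels, -1)
        disagree = labels == -1
        if disagree.any():
            scores = []
            for mean, cov, prior in zip(means, covs, priors):
                diff = pixels[disagree] - mean
                maha = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
                scores.append(-0.5 * (maha + np.log(np.linalg.det(cov))) + np.log(prior))
            labels[disagree] = np.argmax(np.vstack(scores), axis=0)
        return labels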

Red Tide Detection through Image Fusion of GOCI and Landsat OLI

  • Shin, Jisun;Kim, Keunyong;Min, Jee-Eun;Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing / v.34 no.2_2 / pp.377-391 / 2018
  • To monitor red tide efficiently over a wide area, the need for red tide detection using remote sensing is increasing. Previous studies, however, have focused on developing red tide detection algorithms for ocean colour sensors alone. In this study, we propose the use of multiple sensors to address two limitations of satellite-based red tide monitoring: inaccurate red tide detection and poor remote sensing data quality in coastal areas with high turbidity. The study area was selected based on red tide information provided by the National Institute of Fisheries Science, and spatial fusion and spectral-based fusion were attempted using GOCI images as the ocean colour sensor and Landsat OLI images as the terrestrial sensor. Through spatial fusion of the two images, detection was improved both for red tide in coastal areas, which cannot be observed in GOCI images, and in outer sea areas, where the quality of the Landsat OLI image was low. Spectral-based fusion was performed at the feature level and at the raw-data level, and there was no significant difference in the red tide distribution patterns derived by the two methods. However, the feature-level method tends to overestimate the red tide area as the spatial resolution of the image decreases; pixel decomposition by the linear spectral unmixing method showed that the difference in red tide area increases as the number of pixels with a low red tide fraction increases. At the raw-data level, the Gram-Schmidt sharpening method estimated a somewhat larger area than the PC spectral sharpening method, but the difference was not significant. This study shows that red tide in highly turbid coastal waters as well as in outer sea areas can be detected through spatial fusion of ocean colour and terrestrial sensors, and, by presenting various spectral-based fusion methods, it suggests a more accurate way of estimating the red tide area. These results are expected to provide more precise detection of red tide around the Korean peninsula and the accurate red tide area information needed to determine countermeasures for effective red tide control.
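
The linear spectral unmixing step mentioned above can be sketched as a constrained least-squares problem; the endmember spectra and the two-band example below are hypothetical, and SciPy's non-negative least squares is used as a simple stand-in for a fully constrained solver.

    import numpy as np
    from scipy.optimize import nnls

    def unmix_pixel(pixel, endmembers):
        """Fractional abundances of the endmember spectra (columns) in one pixel."""
        fractions, _ = nnls(endmembers, pixel)    # non-negative least squares
        total = fractions.sum()
        return fractions / total if total > 0 else fractions

    # Hypothetical two-band mixture: 60 % red tide, 40 % seawater.
    E = np.array([[0.08, 0.02],
                  [0.03, 0.05]])                  # columns: red tide, seawater
    mixed = 0.6 * E[:, 0] + 0.4 * E[:, 1]
    print(unmix_pixel(mixed, E))                  # -> approximately [0.6, 0.4]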

Impact Assessment of Forest Development on Net Primary Production using Satellite Image Spatial-temporal Fusion and CASA-Model

  • Jin, Yi-Hua;Zhu, Jing-Rong;Sung, Sun-Yong;Lee, Dong-Ku
    • Journal of the Korean Society of Environmental Restoration Technology / v.20 no.4 / pp.29-42 / 2017
  • With the revision of the "Guidelines for GHG Environmental Assessment," developers are now required to evaluate the greenhouse gas sequestration and storage of a development site. However, the current guidelines only take into account the quantitative loss within the development site and do not consider the qualitative decrease in the carbon sequestration capacity of the forest edge created by development. In order to assess both the quantitative and qualitative effects on vegetation carbon uptake, the CASA NPP model and satellite image spatial-temporal fusion were used to estimate annual net primary production (NPP) in 2005 and 2015. Development projects between 2006 and 2014 were examined to evaluate quantitative changes within the development sites and qualitative changes in their surroundings by development type. The RMSE of the satellite image fusion results is less than 0.1 and close to 0, and the correlation coefficient is above 0.6, indicating relatively high prediction accuracy. The estimated NPP ranges from 0 to 1335.53 g C/m²/year before development and from 0 to 1333.77 g C/m²/year after development. Analysis of the NPP reduction within the development areas by type of forest development shows no significant difference between types, with the smallest change for sports facility development. Edge vegetation was found to be most affected by industrial development, which suggests that industrial development induces additional development in the surrounding area and indirectly impairs the carbon sequestration function of edge vegetation through the increase of edges and the influx of disturbed species. The NPP calculation method and results presented in this study can be applied to quantitative and qualitative impact assessment before and after development, and to greenhouse-gas-related policies in environmental impact assessment.
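
For reference, the general form of the CASA model used above is NPP = APAR × ε, with APAR = PAR × FPAR and ε = ε_max × T1 × T2 × W; the sketch below uses a commonly cited global default for ε_max, which is not necessarily the value used in the paper.

    def casa_npp(par, fpar, t1, t2, w, eps_max=0.389):
        """Net primary production (g C/m^2 per time step), CASA form NPP = APAR * epsilon.

        par     : incident photosynthetically active radiation (MJ/m^2)
        fpar    : fraction of PAR absorbed by vegetation (typically from NDVI)
        t1, t2  : temperature stress scalars in [0, 1]
        w       : water stress scalar in [0, 1]
        eps_max : maximum light-use efficiency (g C/MJ); 0.389 is a common default
        """
        apar = par * fpar                  # absorbed PAR
        epsilon = eps_max * t1 * t2 * w    # actual light-use efficiency
        return apar * epsilon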

A Study on Fusion and Visualization using Multibeam Sonar Data with Various Spatial Data Sets for Marine GIS

  • Kong, Seong-Kyu
    • Journal of Advanced Marine Engineering and Technology / v.34 no.3 / pp.407-412 / 2010
  • Thanks to remarkable advances in sonar technology, positioning capabilities, and computer processing power, the seafloor can now be imaged and explored accurately in hydrography. In particular, multibeam echo sounders can provide nearly complete coverage of the seafloor at high resolution, and they have been used for hydrographic surveying in Korea since the mid-1990s. In this study, a new marine data set intended as an effective decision-making tool in various fields was produced by visualizing and combining multibeam sonar data with marine spatial data sets such as satellite images and digital nautical charts. The proposed method was tested around the port of PyeongTaek-DangJin on the west coast of Korea, and the visualization and fusion methods are described together with the processing of the various marine data sets. We demonstrate that the new marine GIS data set is useful as an efficient decision-making tool for safe navigation and port management.

Development of a Vehicle Positioning Algorithm Using Reference Images

  • Kim, Hojun;Lee, Impyeong
    • Korean Journal of Remote Sensing / v.34 no.6_1 / pp.1131-1142 / 2018
  • Autonomous vehicles are being developed and operated widely because of their advantages in reducing traffic accidents and saving driving time and cost, and vehicle localization is an essential component of autonomous vehicle operation. In this paper, a sensor fusion based localization algorithm is developed for cost-effective localization using in-vehicle sensors, GNSS, an image sensor, and reference images prepared in advance. Information from the reference images can overcome the low positioning accuracy that occurs when only the sensor information is used, and it also provides stable position estimates even when the vehicle is in an area where the satellite signal is blocked. A particle filter is used for sensor fusion because it can reflect the various probability density distributions of the individual sensors. To evaluate the performance of the algorithm, a data acquisition system was built, and driving data and reference image data were acquired. Finally, we verify that vehicle positioning can be performed with an accuracy of about 0.7 m when the route images and the reference image information are integrated along a route path that has a relatively large error from the satellite sensor.
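
A generic predict/update/resample cycle of the kind of particle filter used for sensor fusion above can be sketched as follows; the motion and measurement models and the noise parameters are placeholders rather than the paper's own, and NumPy is assumed.

    import numpy as np

    def particle_filter_step(particles, weights, motion, motion_noise,
                             measurement, measurement_fn, measurement_noise):
        """One cycle of a particle filter fusing odometry with an absolute position fix.

        particles      : (N, d) state hypotheses, e.g. [x, y] positions.
        weights        : (N,) normalized particle weights.
        motion         : (d,) odometry increment applied to every particle.
        measurement    : observed vector, e.g. a GNSS fix or an image-matched position.
        measurement_fn : maps a particle state to the expected measurement vector.
        """
        # Predict: propagate every particle with the odometry increment plus noise.
        particles = particles + motion + np.random.normal(0.0, motion_noise, particles.shape)

        # Update: weight particles by the Gaussian likelihood of the measurement.
        predicted = np.apply_along_axis(measurement_fn, 1, particles)
        dist2 = np.sum((predicted - measurement) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * dist2 / measurement_noise ** 2)
        weights = weights / weights.sum()

        # Resample: draw particles in proportion to their weights.
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))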