• Title/Summary/Keyword: High resolution multi-sensor images

KOMPSAT Imagery Application Status (다목적실용위성 영상자료 활용 현황)

  • Lee, Kwangjae;Kim, Younsoo;Chae, Taebyeong
    • Korean Journal of Remote Sensing, v.34 no.6_3, pp.1311-1317, 2018
  • The ultimate goal of satellite development is to use the information obtained from satellites. A national-level satellite development program should therefore include not only hardware development but also infrastructure establishment and application technology development for information utilization. Korea has developed a variety of satellites that have proven very useful for weather and maritime surveillance as well as for responding to various disasters. In particular, KOMPSAT (Korea Multi-Purpose Satellite) images have been used extensively in the agriculture, forestry, and marine fields owing to their high spatial resolution, and have been widely used in research on precision mapping and change detection. This special issue introduces a variety of recent studies conducted with KOMPSAT optical and SAR (Synthetic Aperture Radar) images and aims to disseminate the related satellite image application technologies to the public sector.

The Study of Land Surface Change Detection Using Long-Term SPOT/VEGETATION NDVI (장기간 SPOT/VEGETATION 정규화 식생지수를 이용한 지면 변화 탐지 개선에 관한 연구)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, In-Hwan
    • Journal of the Korean Association of Geographic Information Studies, v.13 no.4, pp.111-124, 2010
  • Monitoring land surface change is an important research field because the associated parameters relate to land use, climate change, meteorology, agricultural management, the surface energy balance, and the surface environmental system. Many change detection methods have been presented to provide more detailed information, using tools ranging from ground-based measurements to satellite multi-spectral sensors. High resolution satellite data are now considered the most efficient way to monitor extensive land environmental systems, particularly when both high spatial and high temporal resolution are required. This study uses satellites with two different spatial resolutions: SPOT/VEGETATION, with 1 km resolution, to detect change at coarse resolution and to determine an objective threshold, and Landsat, with higher resolution, to resolve detailed land environmental change. Because of their different spatial resolutions, the two systems have different observation characteristics such as repeat cycle and coverage, and by relating them land surface change can be detected from medium to high resolution. A K-means clustering algorithm is applied to detect changed areas between two images from different dates. Solar spectral bands exhibit complicated surface reflectance scattering characteristics that make surface change detection difficult and can seriously distort the interpretation of surface characteristics; for example, even when the intrinsic surface reflectance is constant, the observed value changes with the relative solar and sensor viewing geometry. To reduce these effects, this study uses long-term Normalized Difference Vegetation Index (NDVI) data derived from atmospherically and bi-directionally corrected SPOT/VEGETATION solar channels to provide an objective threshold for detecting land surface change, since the NDVI is less sensitive to solar geometry than the individual solar channels. Change detection based on the long-term NDVI shows improved results compared with using Landsat alone.
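
A minimal sketch of the general idea described above: compute NDVI from red and NIR reflectance, difference two dates, and flag change where the difference exceeds a threshold taken from the variability of a long-term NDVI record. The synthetic arrays and the 2-sigma rule are illustrative assumptions, not the authors' exact procedure (which also uses K-means clustering and atmospherically and bi-directionally corrected SPOT/VEGETATION data).

```python
# NDVI-difference change detection with a threshold from a long-term NDVI record.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + eps)

rng = np.random.default_rng(0)

# Long-term NDVI stack (time, rows, cols) standing in for SPOT/VEGETATION data.
ndvi_series = rng.normal(0.45, 0.05, size=(120, 50, 50))

# NDVI for the two dates being compared (synthetic reflectances).
ndvi_t1 = ndvi(rng.uniform(0.3, 0.6, (50, 50)), rng.uniform(0.05, 0.15, (50, 50)))
ndvi_t2 = ndvi(rng.uniform(0.3, 0.6, (50, 50)), rng.uniform(0.05, 0.15, (50, 50)))

# "Objective" threshold taken from the natural variability of the long record.
diff = ndvi_t2 - ndvi_t1
sigma = ndvi_series.std(axis=0)
changed = np.abs(diff) > 2.0 * sigma          # per-pixel change mask

print("changed pixels:", int(changed.sum()))
```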

Hierarchical Land Cover Classification using IKONOS and AIRSAR Images (IKONOS와 AIRSAR 영상을 이용한 계층적 토지 피복 분류)

  • Yeom, Jun-Ho;Lee, Jeong-Ho;Kim, Duk-Jin;Kim, Yong-Il
    • Korean Journal of Remote Sensing, v.27 no.4, pp.435-444, 2011
  • Land cover maps derived from the spectral features of high resolution optical images suffer from low spectral resolution and within-class heterogeneity, so pixels belonging to the same land cover class, especially in vegetated areas, are often assigned to different classes. To overcome this problem, a detailed vegetation classification is applied to data that integrate the optical satellite image with SAR (Synthetic Aperture Radar) data, restricted to the vegetated area obtained from a pre-classification of the optical image. Both the pre-classification and the vegetation classification use the MLC (Maximum Likelihood Classification) method, and the hierarchical land cover classification is produced by fusing the detailed vegetation classes with the non-vegetation classes of the pre-classification. The proposed method achieves higher accuracy than methods that integrate general SAR data or GLCM (Gray Level Co-occurrence Matrix) texture, as well as a hierarchical GLCM-integrated method, and it is accurate for both vegetation and non-vegetation classes.
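
A minimal sketch of maximum likelihood classification (MLC) as used in the pre-classification step: each class is modeled as a Gaussian fitted to training samples, and pixels are assigned to the class with the highest log-likelihood. The two-feature toy data and class names are illustrative assumptions, not the study's IKONOS/AIRSAR inputs.

```python
# Per-class Gaussian maximum likelihood classification on toy two-feature pixels.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Toy training samples: two features per pixel (e.g., a spectral band and a backscatter value).
train = {
    "vegetation":     rng.normal([0.6, -8.0], [0.05, 1.0], size=(200, 2)),
    "non-vegetation": rng.normal([0.2, -3.0], [0.05, 1.0], size=(200, 2)),
}

# Fit a Gaussian (mean vector, covariance matrix) per class.
models = {c: multivariate_normal(x.mean(0), np.cov(x.T)) for c, x in train.items()}

def classify(pixels):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    classes = list(models)
    ll = np.stack([models[c].logpdf(pixels) for c in classes], axis=-1)
    return np.array(classes)[ll.argmax(-1)]

test = rng.normal([0.55, -7.5], [0.05, 1.0], size=(5, 2))
print(classify(test))
```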

Epipolar Image Resampling from Kompsat-3 In-track Stereo Images (아리랑3호 스테레오 영상의 에피폴라 기하 분석 및 영상 리샘플링)

  • Oh, Jae Hong;Seo, Doo Chun;Lee, Chang No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.31 no.6_1, pp.455-461, 2013
  • Kompsat-3 is an optical high-resolution Earth observation satellite launched in May 2012. The AEISS sensor of the Korean satellite provides 0.7 m panchromatic and 2.8 m multi-spectral images with a 16.8 km swath width from a sun-synchronous, near-circular orbit at 685 km altitude. Kompsat-3 is more advanced than Kompsat-2, and the improvements include better agility, such as an in-track stereo acquisition capability. This study investigated the characteristics of the epipolar curves of in-track Kompsat-3 stereo images. To this end we used the RPCs (Rational Polynomial Coefficients) to derive the epipolar curves over the entire image area and found that a third-order polynomial equation is required to model the curves. In addition, we observed two different groups of curve patterns caused by the dual CCDs of the AEISS sensor. From the experiment we concluded that a third-order polynomial-based RPCs update is required to minimize the sample-direction image distortion. Finally, we carried out the epipolar resampling experiment, and the result showed that the third-order polynomial image transformation produced y-parallax of less than 0.7 pixels.
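
A minimal sketch of the curve-fitting step the abstract refers to: a third-order polynomial is fitted to sampled epipolar curve points, and the fit residual stands in for the remaining y-parallax. The synthetic points below are an assumption; in the actual workflow the points would be generated by projecting conjugate locations through the RPCs of both stereo images.

```python
# Third-order polynomial fit to sampled epipolar curve points.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic (sample, line) points along one epipolar curve with mild curvature.
s = np.linspace(0, 24000, 50)                                  # sample coordinate (pixels)
l = 1.2e-12 * s**3 - 3.0e-8 * s**2 + 0.01 * s + 500 + rng.normal(0, 0.1, s.size)

# Third-order polynomial model of the curve (fit is internally rescaled for stability).
p = np.polynomial.Polynomial.fit(s, l, deg=3)
residual = l - p(s)

print("max |residual| (pixels):", np.abs(residual).max())      # stand-in for y-parallax
```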

Dimensionality Reduction Methods Analysis of Hyperspectral Imagery for Unsupervised Change Detection of Multi-sensor Images (이종 영상 간의 무감독 변화탐지를 위한 초분광 영상의 차원 축소 방법 분석)

  • PARK, Hong-Lyun;PARK, Wan-Yong;PARK, Hyun-Chun;CHOI, Seok-Keun;CHOI, Jae-Wan;IM, Hon-Ryang
    • Journal of the Korean Association of Geographic Information Studies, v.22 no.4, pp.1-11, 2019
  • With the development of remote sensing sensor technology, it has become possible to acquire satellite images with diverse spectral information. In particular, since a hyperspectral image is composed of continuous, narrow spectral bands, it can be used effectively in fields such as land cover classification, target detection, and environmental monitoring. Change detection techniques using remote sensing data are generally performed by differencing data of the same dimensionality, so they are difficult to apply to heterogeneous sensors with different dimensionality. In this study, we developed a change detection method applicable to a hyperspectral image and a high spatial resolution satellite image of different dimensionality, and confirmed its applicability to change detection between heterogeneous images. To apply the method, the dimensionality of the hyperspectral image was reduced using correlation analysis and principal component analysis, and change vector analysis (CVA) was used as the change detection algorithm. The ROC curve and the AUC were calculated using reference data to evaluate change detection performance. Experimental results show that change detection performance is higher when using the image generated by adequate dimensionality reduction than when using the original hyperspectral image.
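
A minimal sketch, on assumed synthetic data, of PCA dimensionality reduction followed by a change vector analysis (CVA) magnitude: the hyperspectral cube is reduced to the same number of bands as the multispectral image before differencing. The correlation-analysis variant and the ROC/AUC evaluation used in the paper are not shown.

```python
# Reduce a hyperspectral cube with PCA, then compute the CVA change magnitude.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
rows, cols, hs_bands, ms_bands = 40, 40, 120, 4

hyper = rng.normal(size=(rows, cols, hs_bands))      # hyperspectral image (date 1)
multi = rng.normal(size=(rows, cols, ms_bands))      # high-resolution multispectral image (date 2)

# Reduce the hyperspectral cube to the same number of bands as the multispectral image.
pca = PCA(n_components=ms_bands)
hyper_pc = pca.fit_transform(hyper.reshape(-1, hs_bands)).reshape(rows, cols, ms_bands)

# CVA: magnitude of the spectral change vector at every pixel, then a simple threshold.
magnitude = np.linalg.norm(multi - hyper_pc, axis=-1)
changed = magnitude > magnitude.mean() + 2 * magnitude.std()
print("changed pixels:", int(changed.sum()))
```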

Object-based Building Change Detection Using Azimuth and Elevation Angles of Sun and Platform in the Multi-sensor Images (태양과 플랫폼의 방위각 및 고도각을 이용한 이종 센서 영상에서의 객체기반 건물 변화탐지)

  • Jung, Sejung;Park, Jueon;Lee, Won Hee;Han, Youkyung
    • Korean Journal of Remote Sensing, v.36 no.5_2, pp.989-1006, 2020
  • Building change monitoring based on building detection is one of the most important applications of monitoring artificial structures using high-resolution multi-temporal images such as those from CAS500-1 and 2, which are scheduled for launch. However, the varied shapes and sizes of buildings on the Earth's surface, together with the shadows and trees around them, make accurate building detection difficult, and many misdetections are caused by relief displacement that depends on the azimuth and elevation angles of the platform. In this study, object-based building detection was performed using the azimuth angle of the Sun and the corresponding main direction of shadows to improve building change detection, after which the platform's azimuth and elevation angles were used to detect changed buildings. Object-based segmentation was performed on the high-resolution imagery, shadow objects were classified through shadow intensity, and feature information such as rectangular fit, Gray-Level Co-occurrence Matrix (GLCM) homogeneity, and area was calculated for each object to detect building candidates. The final buildings were then detected using the direction and distance relationship between the center of each building candidate object and its shadow according to the azimuth angle of the Sun. Three methods were proposed for detecting changes between the building objects detected in each image: simple overlay between objects, comparison of object sizes according to the elevation angle of the platform, and consideration of the direction between objects according to the azimuth angle of the platform. A residential area was selected as the study area, using high-resolution imagery acquired from KOMPSAT-3 and an Unmanned Aerial Vehicle (UAV). Experimental results showed that the F1-scores of building detection using feature information alone were 0.488 and 0.696 in the KOMPSAT-3 and UAV images, respectively, whereas the F1-scores of building detection considering shadows were 0.876 and 0.867, indicating that the shadow-aware building detection method is more accurate. Among the three proposed building change detection methods, the one considering the direction between objects according to the platform azimuth angle achieved the highest F1-score, 0.891.
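
A minimal sketch of only the geometric relationship the method exploits: the expected shadow direction is roughly opposite the Sun's azimuth, so a building candidate is kept when a nearby shadow object lies along that bearing from its centroid. The centroids, azimuth value, and angular tolerance below are illustrative assumptions; the full method also uses shadow intensity, rectangular fit, GLCM homogeneity, and object area.

```python
# Check whether a shadow object lies in the direction expected from the Sun's azimuth.
import numpy as np

def bearing(p_from, p_to):
    """Bearing in degrees clockwise from north (image 'up'); image rows grow downward."""
    dx, dy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    return np.degrees(np.arctan2(dx, -dy)) % 360.0

sun_azimuth = 145.0                                  # degrees, assumed metadata value
expected_shadow_dir = (sun_azimuth + 180.0) % 360.0  # shadows fall away from the Sun

building_centroid = np.array([120.0, 80.0])          # (col, row) of a candidate object
shadow_centroid = np.array([105.0, 60.0])            # (col, row) of a nearby shadow object

diff = (bearing(building_centroid, shadow_centroid) - expected_shadow_dir + 180) % 360 - 180
is_building = abs(diff) < 30.0                       # assumed angular tolerance
print("candidate kept as building:", bool(is_building))
```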

Characteristics of the Electro-Optical Camera(EOC)

  • Lee, Seung-Hoon;Shim, Hyung-Sik;Paik, Hong-Yul
    • Proceedings of the KSRS Conference, 1998.09a, pp.313-318, 1998
  • The Electro-Optical Camera (EOC) is the main payload of the Korea Multi-Purpose Satellite (KOMPSAT), with a cartographic mission to build a digital map of Korean territory, including a Digital Terrain Elevation Map (DTEM). The instrument, which comprises the EOC Sensor Assembly and the EOC Electronics Assembly, produces panchromatic images of 6.6 m GSD with a swath wider than 17 km by push-broom scanning and spacecraft body pointing, in the visible wavelength range of 510~730 nm. The high resolution panchromatic imagery is collected for 2 minutes during the 98-minute orbit cycle, covering about 800 km along the ground track, over a mission lifetime of 3 years, with programmable gain/offset and on-board image data storage. The 8-bit digitized image, collected by an unobscured all-reflective F8.3 triplet, is transmitted to the ground station at a rate of less than 25 Mbps. EOC was built to meet or surpass its design-phase performance requirements. The spectral response, modulation transfer function, and uniformity of all 2592 CCD pixels of the EOC are presented as measured, for the convenience of end users. The spectral response was measured for each gain setup of the EOC, which should allow EOC data users to generate more accurate panchromatic images. The modulation transfer function of the EOC was measured as greater than 16% at the Nyquist frequency over the entire field of view, exceeding its requirement of greater than 10%. The uniformity, which shows the relative response of each pixel, was measured at every pixel of the EOC Focal Plane Array and is presented for data processing.

Analysis of UAV-based Multispectral Reflectance Variability for Agriculture Monitoring (농업관측을 위한 다중분광 무인기 반사율 변동성 분석)

  • Ahn, Ho-yong;Na, Sang-il;Park, Chan-won;Hong, Suk-young;So, Kyu-ho;Lee, Kyung-do
    • Korean Journal of Remote Sensing, v.36 no.6_1, pp.1379-1391, 2020
  • UAVs in agricultural applications can collect ultra-high resolution images and provide timely imagery for each phenological phase of a crop. However, UAV campaigns use a variety of sensors and produce multi-temporal images under varying conditions, so normalized image data are essential for time-series crop monitoring. This study analyzed the variability of UAV reflectance and vegetation index values according to the image acquisition environment in order to use UAV multispectral time series for agricultural monitoring. The variability of reflectance with environmental factors such as altitude, direction, time, and cloud cover was large, at 8% to 11%, whereas the vegetation index variability was stable, at 1% to 5%. This behavior likely has several causes, including the characteristics of the UAV multispectral sensor and the normalization applied by the post-processing software. For time-series use of UAV imagery, it is recommended to rely on ratio-based quantities such as vegetation indices and to minimize the variability of time-series images by acquiring them at the same time, altitude, and direction whenever possible.
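
A minimal illustration of why ratio-based indices are more stable than individual band reflectances: if repeat acquisitions mainly differ by a common multiplicative illumination or processing factor, that factor cancels in the NDVI. The synthetic ten-flight example below is an assumption made to show the effect, not the study's data.

```python
# Coefficient of variation of band reflectance vs. NDVI across repeat acquisitions.
import numpy as np

rng = np.random.default_rng(4)
true_red, true_nir = 0.08, 0.45

# Ten repeat acquisitions with a shared multiplicative illumination/processing factor.
factor = rng.normal(1.0, 0.10, size=10)
red = true_red * factor
nir = true_nir * factor
ndvi = (nir - red) / (nir + red)

def cv(x):
    """Coefficient of variation in percent."""
    return 100.0 * x.std() / x.mean()

print(f"red CV: {cv(red):.1f}%  nir CV: {cv(nir):.1f}%  NDVI CV: {cv(ndvi):.1f}%")
```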

Red Tide Detection through Image Fusion of GOCI and Landsat OLI (GOCI와 Landsat OLI 영상 융합을 통한 적조 탐지)

  • Shin, Jisun;Kim, Keunyong;Min, Jee-Eun;Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing, v.34 no.2_2, pp.377-391, 2018
  • To monitor red tide efficiently over a wide area, red tide detection using remote sensing is increasingly needed, but previous studies have focused on developing red tide detection algorithms for ocean colour sensors. In this study, we propose using multiple sensors to address the limitations of satellite-based red tide monitoring, namely inaccurate detection and poor remote sensing data in coastal areas with high turbidity. The study area was selected based on the red tide information provided by the National Institute of Fisheries Science, and spatial fusion and spectral-based fusion were attempted using a GOCI image as the ocean colour sensor and a Landsat OLI image as the terrestrial sensor. Through spatial fusion of the two images, improved detection was obtained both for the coastal red tide, which could not be observed in the GOCI image, and for the outer sea areas, where the quality of the Landsat OLI image was low. Spectral-based fusion was performed at the feature level and the raw-data level, and the red tide distribution patterns derived from the two methods did not differ significantly. However, the feature-level method tends to overestimate the red tide area as the spatial resolution of the image decreases; pixel decomposition by the linear spectral unmixing method showed that the difference in red tide area grows as the number of pixels with a low red tide fraction increases. At the raw-data level, the Gram-Schmidt sharpening method estimated a somewhat larger area than the PC spectral sharpening method, but the difference was not significant. This study shows that coastal red tide in highly turbid water, as well as red tide in the outer sea, can be detected through spatial fusion of ocean colour and terrestrial sensors, and the spectral-based fusion methods presented suggest more accurate ways to estimate red tide area. These results are expected to support more precise detection of red tide around the Korean peninsula and to provide the accurate red tide area information needed to plan effective countermeasures.
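
A minimal sketch of linear spectral unmixing, the pixel decomposition mentioned above: per-pixel abundances are solved by non-negative least squares against known endmember spectra. The two endmembers and the four-band spectra are illustrative assumptions, not the study's actual endmembers or GOCI/Landsat bands.

```python
# Linear spectral unmixing of one mixed pixel with non-negative least squares.
import numpy as np
from scipy.optimize import nnls

# Endmember spectra over 4 bands (columns: red tide, seawater).
E = np.array([
    [0.020, 0.045],
    [0.035, 0.040],
    [0.060, 0.020],
    [0.015, 0.005],
])

pixel = 0.7 * E[:, 0] + 0.3 * E[:, 1]          # a mixed pixel (70% red tide by construction)
abundance, _ = nnls(E, pixel)                  # non-negative least-squares abundances
abundance /= abundance.sum()                   # normalize to fractional abundance

print("red tide fraction:", round(abundance[0], 3))
```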

Validation of Surface Reflectance Product of KOMPSAT-3A Image Data: Application of RadCalNet Baotou (BTCN) Data (다목적실용위성 3A 영상 자료의 지표 반사도 성과 검증: RadCalNet Baotou(BTCN) 자료 적용 사례)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing, v.36 no.6_2, pp.1509-1521, 2020
  • Experiments to validate the surface reflectance produced from Korea Multi-Purpose Satellite (KOMPSAT-3A) imagery were conducted using data from the Chinese Baotou (BTCN) site, one of the four sites of the Radiometric Calibration Network (RadCalNet), a portal that provides measured spectral surface reflectance. The atmospheric reflectance and surface reflectance products were generated with an extension program of the open-source Orfeo ToolBox (OTB), redesigned and implemented to extract these reflectance products in batches. Three image data sets from 2016, 2017, and 2018 were processed with two versions of the sensor model, ver. 1.4 released in 2017 and ver. 1.5 released in 2019, which define the gain and offset applied in the absolute atmospheric correction. The reflectance products obtained with ver. 1.4 matched the RadCalNet BTCN data relatively well compared with those from ver. 1.5. In addition, surface reflectance products obtained from Landsat-8 imagery with the USGS LaSRC algorithm and from Sentinel-2B imagery with the SNAP Sen2Cor program were used to quantitatively assess the differences from KOMPSAT-3A. Relative to the RadCalNet BTCN data, the differences in KOMPSAT-3A surface reflectance were highly consistent: -0.031 to 0.034 in the B band, -0.001 to 0.055 in the G band, -0.072 to 0.037 in the R band, and -0.060 to 0.022 in the NIR band. The KOMPSAT-3A surface reflectance also reached an accuracy level suitable for further applications when compared with that of the Landsat-8 and Sentinel-2B images. These results confirm the applicability of Analysis Ready Data (ARD) surface reflectance for high-resolution satellites.
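
A minimal sketch of the standard absolute-calibration step that the sensor model versions feed into: digital numbers are converted to at-sensor radiance via gain and offset, then to top-of-atmosphere reflectance. All coefficient values below (gain, offset, solar irradiance, geometry) are illustrative placeholders, not the KOMPSAT-3A ver. 1.4 or 1.5 values, and the atmospheric correction down to surface reflectance is not shown.

```python
# DN -> at-sensor radiance -> top-of-atmosphere (TOA) reflectance for one band.
import numpy as np

dn = np.array([812, 930, 1105], dtype=float)      # digital numbers for one band

gain, offset = 0.012, -1.5                        # assumed band calibration coefficients
radiance = gain * dn + offset                     # W / (m^2 sr um)

esun = 1550.0                                     # assumed exo-atmospheric solar irradiance
d = 1.0                                           # Earth-Sun distance (AU)
sun_zenith = np.radians(35.0)                     # assumed solar zenith angle

toa_reflectance = np.pi * radiance * d**2 / (esun * np.cos(sun_zenith))
print(np.round(toa_reflectance, 4))
```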