• Title/Summary/Keyword: SATELLITE IMAGE


Performance Analysis of Cloud-Net with Cross-sensor Training Dataset for Satellite Image-based Cloud Detection

  • Kim, Mi-Jeong;Ko, Yun-Ho
    • Korean Journal of Remote Sensing / v.38 no.1 / pp.103-110 / 2022
  • Since satellite images generally include clouds, it is essential to detect or mask clouds before satellite image processing. Earlier research detected clouds using their physical characteristics; more recently, cloud detection methods using deep learning techniques from the image segmentation field, such as CNNs or modified U-Net architectures, have been studied. Since image segmentation assigns a label to every pixel in an image, a precise pixel-based dataset is required for cloud detection, and obtaining accurate training datasets is more important than the network configuration. Existing deep learning techniques have used different training datasets, with test data extracted from the same intra-dataset, i.e. acquired by the same sensor and procedure as the training data. Such differing datasets make it difficult to determine which network shows better overall performance. To verify the effectiveness of a cloud detection network such as Cloud-Net, two networks were trained: one on the cloud dataset from KOMPSAT-3 images provided by the AIHUB site, and one on the L8-Cloud dataset from Landsat-8 images released publicly by a Cloud-Net author. Test data from the KOMPSAT-3 intra-dataset were used for validating the networks. The simulation results show that the network trained on the KOMPSAT-3 cloud dataset outperforms the network trained on the L8-Cloud dataset, because Landsat-8 and KOMPSAT-3 satellite images have different GSDs, which makes it difficult to achieve good results in cross-sensor validation. A network can be superior on its intra-dataset yet inferior on cross-sensor data, so techniques that perform well on cross-sensor validation datasets need to be studied in the future.
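
Cross-sensor validation like the above ultimately comes down to comparing predicted cloud masks with reference masks pixel by pixel. A minimal sketch of one common segmentation metric, intersection-over-union, on toy masks (all values made up, not from either dataset):

```python
import numpy as np

def cloud_iou(pred, truth):
    """Intersection-over-union of binary cloud masks (1 = cloud)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Toy 4x4 masks standing in for network output and reference labels.
pred  = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1]])
truth = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1]])
print(round(float(cloud_iou(pred, truth)), 3))
```

Intra-dataset and cross-sensor test sets would simply be scored with the same metric and compared.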

Integration of GIS-based RUSLE model and SPOT 5 Image to analyze the main source region of soil erosion

  • LEE Geun-Sang;PARK Jin-Hyeog;HWANG Eui-Ho;CHAE Hyo-Sok
    • Proceedings of the KSRS Conference / 2005.10a / pp.357-360 / 2005
  • Soil loss is widely recognized as a threat to farm livelihoods and ecosystem integrity worldwide. Soil loss prediction models can help address long-range land management planning under natural and agricultural conditions. Even though it is hard to find a model that considers all forms of erosion, some models were developed specifically to aid conservation planners in identifying areas where introducing soil conservation measures will have the most impact on reducing soil loss. The Revised Universal Soil Loss Equation (RUSLE) computes the average annual erosion expected on hillslopes by multiplying several factors together: rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C), and support practice (P). The values of these factors are determined from field and laboratory experiments. This study calculated soil erosion using a GIS-based RUSLE model in the Imha basin and examined soil erosion source areas using a SPOT 5 high-resolution satellite image and a land cover map. As a result of the analysis, dry fields showed high-density soil erosion, and the source areas could easily be investigated using the satellite image. The suitability of the identified soil erosion areas was also examined by field survey in common areas (dry field and orchard areas) where the source area is difficult to confirm from the satellite image alone.
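
The RUSLE computation the abstract describes is a cell-wise product of factor grids, A = R · K · LS · C · P. A minimal sketch with invented 2x2 factor values (illustrative only, not from the Imha basin study):

```python
import numpy as np

# RUSLE: A = R * K * LS * C * P, evaluated cell by cell on co-registered grids.
# All factor values below are made up purely for illustration.
R  = np.array([[4500.0, 4500.0], [4300.0, 4300.0]])  # rainfall erosivity
K  = np.array([[0.28, 0.30], [0.25, 0.27]])          # soil erodibility
LS = np.array([[1.2, 3.5], [0.8, 2.1]])              # slope length/steepness
C  = np.array([[0.35, 0.35], [0.05, 0.05]])          # cover management
P  = np.ones((2, 2))                                 # support practice (none)

A = R * K * LS * C * P   # average annual soil loss per cell
print(A)
```

In a GIS workflow each array would be a raster layer resampled to a common grid before the multiplication.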

AUTOMATIC ORTHORECTIFICATION OF AIRBORNE IMAGERY USING GPS/INS DATA

  • Jang, Jae-Dong;Kim, Young-Seup;Yoon, Hong-Joo
    • Proceedings of the KSRS Conference / v.2 / pp.684-687 / 2006
  • Airborne imagery must be precisely orthorectified to be used as geographical information data. GPS/INS (Global Positioning System/Inertial Navigation System) and LIDAR (LIght Detection And Ranging) data were employed to automatically orthorectify airborne images. In this study, 154 frames of airborne imagery and LIDAR vector data were acquired. The LIDAR vector data were converted to a raster image to serve as reference data. To derive images with constant brightness, flat-field correction was applied to all images. The airborne images were geometrically corrected by calculating interior and exterior orientation using the GPS/INS data and then orthorectified using the LIDAR digital elevation model image. The precision of the orthorectified images was validated using 50 ground control points collected in five arbitrarily selected images and the LIDAR intensity image. In the validation results, the RMSE (Root Mean Square Error) was 0.365, smaller than twice the pixel spatial resolution at the surface. The mosaicked airborne image derived by this automatic orthorectification method can therefore be employed as geographical information data.
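
The GCP-based precision check described above reduces to an RMSE over control-point residuals. A sketch with hypothetical coordinates (not the study's actual GCPs):

```python
import numpy as np

def rmse(measured, reference):
    """Root mean square error over 2-D control-point coordinates."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))

# Hypothetical residuals for a few ground control points.
ref = [(100.0, 200.0), (150.0, 250.0), (120.0, 260.0)]
img = [(100.2, 199.8), (150.3, 250.1), (119.7, 260.2)]
print(round(rmse(img, ref), 3))
```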

A Selection of Atmospheric Correction Methods for Water Quality Factors Extraction from Landsat TM Image (Landsat TM 영상으로부터 수질인자 추출을 위한 대기 보정 방법의 선정)

  • Yang, In-Tae;Kim, Eung-Nam;Choi, Youn-Kwan;Kim, Uk-Nam
    • Journal of Korean Society for Geospatial Information Science / v.7 no.2 s.14 / pp.101-110 / 1999
  • Recently, there have been many studies using satellite image data to investigate simultaneous changes over a wide area such as a lake. In many water quality studies, however, a problem arises when extracting water quality factors from satellite image data because atmospheric scattering adversely affects the analysis results. In this study, an attempt was made to select a relative atmospheric correction method and to extract the water quality factors from the satellite image data. A time-series analysis of the water quality factors was also performed using the multi-temporal image data.
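
The abstract does not name the correction method finally selected; as one example of a simple relative atmospheric correction, a dark-object subtraction sketch (synthetic band values, purely illustrative):

```python
import numpy as np

def dark_object_subtract(band, percentile=0.1):
    """Subtract the darkest-pixel value, attributed to path radiance,
    then clip so no negative reflectances remain."""
    dark = np.percentile(band, percentile)
    return np.clip(band - dark, 0, None)

# Toy 3x3 band with a haze-like offset on the dark pixels.
band = np.array([[12, 14, 80], [13, 55, 90], [12, 30, 60]], float)
corrected = dark_object_subtract(band)
print(corrected.min())
```

A relative method like this only normalizes scenes against each other; absolute correction would need radiative transfer inputs.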

Analysis of Image Integration Methods for Applying of Multiresolution Satellite Images (다중 위성영상 활용을 위한 영상 통합 기법 분석)

  • Lee Jee Kee;Han Dong Seok
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.22 no.4 / pp.359-365 / 2004
  • Data integration techniques are becoming increasingly important for overcoming the limitations of a single data source. Image fusion, which improves the spatial and spectral resolution from a set of images with different spatial and spectral resolutions, and image registration, which matches two images so that corresponding coordinate points refer to the same physical region of the imaged scene, have both been researched. In this paper, we compared six image fusion methods (Brovey, IHS, PCA, HPF, CN, and MWD) on panchromatic and multispectral IKONOS images and developed a registration method for SPOT-5 and RADARSAT SAR satellite images. In the tests on image fusion and image registration, the MWD and HPF methods gave the best results in terms of visual comparison and statistical analysis. We could also extract patches depicting detailed topographic information from SPOT-5 and RADARSAT and obtained encouraging image registration results.
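
Of the six fusion methods compared, the Brovey transform is the simplest to sketch: each MS band is scaled by the ratio of the PAN band to the MS intensity. A toy example (synthetic values; the MS bands are assumed already resampled to the PAN grid):

```python
import numpy as np

def brovey(ms, pan, eps=1e-6):
    """Brovey fusion: scale each MS band by PAN / mean(MS bands)."""
    intensity = ms.mean(axis=0)            # per-pixel intensity
    return ms * (pan / (intensity + eps))  # broadcast over bands

# Synthetic 3-band MS cube (bands, rows, cols) and PAN image.
ms  = np.stack([np.full((2, 2), v, float) for v in (10.0, 20.0, 30.0)])
pan = np.full((2, 2), 40.0)
fused = brovey(ms, pan)
print(fused[:, 0, 0])
```

Note that the Brovey ratio preserves band proportions but distorts absolute radiometry, which is one reason the paper's statistical comparison matters.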

Spectral Quality Enhancement of Pan-Sharpened Satellite Image by Using Modified Induction Technique (수정된 영상 유도 기법을 통한 융합영상의 분광정보 향상 알고리즘)

  • Choi, Jae-Wan;Kim, Hyung-Tae
    • Journal of Korean Society for Geospatial Information Science / v.16 no.3 / pp.15-20 / 2008
  • High-spatial-resolution remote sensing satellites (IKONOS-2, QuickBird, and KOMPSAT-2) provide low-spatial-resolution multispectral images and high-spatial-resolution panchromatic images. Image fusion, or pan-sharpening, is important because it allows a satellite image to be used in various applications such as visualization and feature extraction by combining images that have different spectral and spatial resolutions. Although many image fusion algorithms have been proposed, most methods cannot preserve the spectral information of the original multispectral image after fusion. To solve this problem, a modified induction technique which reduces the spectral distortion of the fused image is developed. The spectral distortion is adjusted by comparing the spatially degraded pan-sharpened image with the original multispectral image, and the algorithm is evaluated on QuickBird satellite imagery. In the experiments, pan-sharpened images produced by various methods show reduced spectral distortion when the proposed algorithm is applied.
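
A rough sketch of the induction idea described, degrade the fused band, compare it with the original MS band, and feed the difference back, using block averaging as a stand-in for the actual degradation filter (synthetic values; this is not the authors' exact algorithm):

```python
import numpy as np

def induction_correct(fused, ms_orig, scale=4):
    """One induction pass: degrade the fused band to the MS grid, compare
    with the original MS band, and add the upsampled difference back so the
    degraded result matches the MS image."""
    h, w = ms_orig.shape
    # Spatial degradation by block averaging (stand-in for the real filter).
    degraded = fused.reshape(h, scale, w, scale).mean(axis=(1, 3))
    diff = ms_orig - degraded
    # Nearest-neighbour upsampling of the correction term.
    correction = np.repeat(np.repeat(diff, scale, axis=0), scale, axis=1)
    return fused + correction

ms    = np.array([[100.0, 120.0], [110.0, 130.0]])        # 2x2 MS band
fused = np.random.default_rng(0).normal(115, 10, (8, 8))  # hypothetical pan-sharpened band
corrected = induction_correct(fused, ms, scale=4)

# After correction, degrading the fused band reproduces the MS band exactly.
redegraded = corrected.reshape(2, 4, 2, 4).mean(axis=(1, 3))
print(np.allclose(redegraded, ms))
```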

INTRODUCTION TO AN EFFICIENT IMPLEMENTATION OF THE SUBSTITUTE WAVELET INTENSITY METHOD FOR PANSHARPENING

  • Choi, Myung-Jin;Song, Jeong-Heon;Seo, Du-Chun;Lee, Dong-Han;Lim, Hyo-Suk
    • Proceedings of the KSRS Conference / 2007.10a / pp.620-624 / 2007
  • Recently, Gonzalez-Audicana et al. proposed the substitute wavelet intensity (SWI) method, which provides a solution based on the intensity-hue-saturation (IHS) method for fusing panchromatic (PAN) and multispectral (MS) images. Although the spectral quality of the fused MS images is enhanced, this method is not efficient enough to quickly merge massive volumes of satellite data. To overcome this problem, we introduce a new SWI method based on a fast IHS transform as an efficient alternative procedure. In addition, we show that the method is well suited to fusing IKONOS PAN and MS images.
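
The fast IHS transform that this line of work builds on avoids an explicit forward and inverse IHS conversion: the fused bands are obtained by adding (PAN − I) to every MS band, where I is the per-pixel band mean. A toy sketch of that core step only (synthetic values; the SWI method additionally substitutes wavelet detail into I, which is omitted here):

```python
import numpy as np

def fast_ihs_fuse(ms, pan):
    """Fast IHS pan-sharpening: add (PAN - I) to every band,
    where I is the mean of the MS bands."""
    intensity = ms.mean(axis=0)
    return ms + (pan - intensity)   # one addition per band, no inverse transform

ms  = np.stack([np.full((2, 2), v, float) for v in (10.0, 20.0, 30.0)])
pan = np.full((2, 2), 25.0)
fused = fast_ihs_fuse(ms, pan)
print(fused[:, 0, 0])
```

By construction the mean of the fused bands equals the PAN image, which is what makes the shortcut equivalent to intensity substitution.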

The Ground Checkout Test of OSMI(Ocean Scanning Multispectral Imager) on KOMPSAT-1

  • Yong, Sang-Soon;Shim, Hyung-Sik;Heo, Haeng-Pal;Cho, Young-Min;Oh, Kyoung-Hwan;Woo, Sun-Hee;Paik, Hong-Yul
    • Proceedings of the KSRS Conference / 1999.11a / pp.375-380 / 1999
  • The Ocean Scanning Multispectral Imager (OSMI) is a payload on the KOMPSAT satellite that performs worldwide ocean color monitoring for the study of biological oceanography. The instrument images the ocean surface using a whisk-broom motion with a swath width of 800 km and a ground sample distance (GSD) of <1 km over the entire field of view (FOV). The instrument is designed for an on-orbit operation duty cycle of 20% over a mission lifetime of 3 years, with programmable gain/offset and on-board image data compression/storage. The instrument also performs sun and dark calibration for on-board instrument calibration. OSMI is a multi-spectral imager covering the spectral range from 400 nm to 900 nm using a CCD Focal Plane Array (FPA). Ocean colors are monitored using 6 spectral channels that can be selected via ground commands. The KOMPSAT satellite with OSMI was integrated, and satellite-level environment tests such as launch environment, on-orbit environment (thermal/vacuum), and EMI/EMC tests, as well as instrument aliveness/functional tests, were performed at KARI. The test results met the requirements, and OSMI data were collected and analyzed during each test phase. The instrument was launched on the KOMPSAT satellite in late 1999 and is scheduled to start collecting ocean color data in early 2000 upon completion of on-orbit instrument checkout.

Potential for Image Fusion Quality Improvement through Shadow Effects Correction (그림자효과 보정을 통한 영상융합 품질 향상 가능성)

  • 손홍규;윤공현
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2003.10a / pp.397-402 / 2003
  • This study aims to improve the quality of image fusion results through shadow effects correction. A shadow effects correction algorithm is proposed, and visual comparisons were made to estimate the quality of the image fusion results. The following four steps were performed to improve the image fusion quality. First, the shadow regions of the satellite image are precisely located. Subsequently, segmentation of context regions is manually implemented for accurate correction. In the next step, the correction factor is calculated by comparing each shadowed context region with the corresponding non-shadow context region. Finally, image fusion is implemented using the corrected images. The result presented here helps to accurately extract and interpret geo-spatial information from satellite imagery.
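
The correction-factor step can be sketched as a simple gain between matched context regions: the ratio of mean brightness in a sunlit region to the same land-cover type in shadow. A toy example (hypothetical digital numbers, not the paper's data):

```python
import numpy as np

# Matched context regions of the same land-cover type.
sunlit = np.array([140.0, 150.0, 145.0, 155.0])   # hypothetical DNs, lit
shadow = np.array([60.0, 70.0, 65.0, 75.0])       # hypothetical DNs, shadowed

gain = sunlit.mean() / shadow.mean()   # multiplicative correction factor
restored = shadow * gain               # brighten the shadow region
print(round(float(gain), 3), round(float(restored.mean()), 1))
```

Applying one gain per context region is what makes the manual segmentation step necessary.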

ANALYSIS OF OCEAN WAVE BY AIRBORNE PI-SAR X-BAND IMAGES

  • Yang, Chan-Su;Ouchi, Kazuo
    • Proceedings of the KSRS Conference / 2008.10a / pp.240-242 / 2008
  • In the present article, we analyze airborne Pi-SAR (Polarimetric-Interferometric SAR) X-band images of ocean waves around Miyake Island, approximately 180 km south of Tokyo, Japan. Two images of the same scene were produced at an interval of approximately 40 min from two directions at right angles. One image shows dominant range-travelling waves, but the other shows a different wave pattern. This difference can be caused by the different image modulations of RCS and velocity bunching. We estimated the dominant wavelength from the image of range waves and, using the wave phase velocity computed from the dispersion relation (though no wave height data were available), computed the image intensity with the velocity bunching model. Comparison of the result with the second image at right angles strongly suggests the presence of velocity bunching.
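
The phase velocity used in such an analysis follows from the deep-water dispersion relation, ω² = gk, giving c = √(gλ/2π). A small numerical check (the 150 m wavelength is illustrative only, not a value from the paper):

```python
import math

def deep_water_phase_speed(wavelength_m, g=9.81):
    """Phase speed from the deep-water dispersion relation:
    omega^2 = g * k  =>  c = sqrt(g * lambda / (2 * pi))."""
    return math.sqrt(g * wavelength_m / (2 * math.pi))

# e.g. a 150 m dominant wavelength estimated from a range-wave image
print(round(deep_water_phase_speed(150.0), 2))
```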
