Land Use and Land Cover Mapping from Kompsat-5 X-band Co-polarized Data Using Conditional Generative Adversarial Network

  • Jang, Jae-Cheol (Department of Science Education, Seoul National University) ;
  • Park, Kyung-Ae (Department of Earth Science Education, Seoul National University)
  • Received : 2022.02.05
  • Accepted : 2022.02.23
  • Published : 2022.02.28

Abstract

Land use and land cover (LULC) mapping is an important factor in geospatial analysis. Although highly precise ground-based LULC monitoring is possible, it is time consuming and costly. Conversely, because the synthetic aperture radar (SAR) sensor is an all-weather sensor with high resolution, it could replace field-based LULC monitoring systems at lower cost and in less time. Thus, LULC is one of the major areas in SAR applications. We developed a LULC model using only KOMPSAT-5 single co-polarized data and digital elevation model (DEM) data. Twelve HH-polarized images and 18 VV-polarized images were collected, and two HH-polarized images and four VV-polarized images were selected for model testing. To train the LULC model, we applied the conditional generative adversarial network (cGAN) method, using a U-Net combined with residual units (ResUNet) as the generator. When analyzing the training history, the ResUNet model showed a maximum overall accuracy (OA) of 93.89% and a Kappa coefficient of 0.91 at 1732 epochs. The model exhibited high performance on the test datasets, with an OA greater than 90%. The model accurately distinguished water body areas but showed lower accuracy in wetlands than in the other LULC types. The effect of the DEM on the accuracy of LULC was also analyzed. When assessing the accuracy with respect to the incidence angle, the OA tended to decrease as the incidence angle increased, owing to the radar shadow caused by the side-looking geometry of the SAR sensor. This study is the first to use only KOMPSAT-5 single co-polarized data and deep learning methods to demonstrate the possibility of high-performance LULC monitoring. It contributes to Earth surface monitoring and the development of deep learning approaches using KOMPSAT-5 data.

1. Introduction

Land use and land cover (LULC) mapping plays a fundamental role in geospatial analysis and is one of the major applications in remote sensing (Dwivedi et al., 2005). LULC maps have been used for various applications, such as natural environmental monitoring, urban planning and urbanization, and hazard assessment (Liu et al., 2017). For local-and global-scale LULC analyses, remotely sensed data have been efficiently used in many studies (Friedl et al., 2010).

As LULC types can exhibit various combinations of multispectral characteristics, it is generally possible to utilize optical and infrared images, including Landsat, MODIS, and VIIRS (Phiri and Morgenroth, 2017). However, these optical and infrared sensors can be affected by cloud cover and weather conditions; thus, they are available only in limited conditions, such as clear skies (Lu and Weng, 2007; Kumar et al., 2015). In contrast, because synthetic aperture radar (SAR) is an all-weather sensor that is less affected by atmospheric conditions and can observe 24 hours a day, it has been considered more advantageous for LULC mapping. Because the scattering characteristics differ with respect to the LULC type, polarimetric SAR (pol-SAR) data are generally used for LULC classification (Alberga, 2007).

Mapping LULC using satellite images has been increasingly conducted since the launch of Landsat in the 1970s. However, LULC classification from remotely sensed images has been recognized as a challenge because of the nonlinear relationship between LULC types and spectral intensity (Zhang et al., 2019). Pol-SAR methods are usually used for LULC classification when using SAR images as remote sensing data. However, owing to the inherent speckle noise and geometry distortion, object shadows, and layover, LULC mapping from SAR images is considered a complicated problem (Moreira et al., 2013). In particular, because urban environments consist of numerous natural and artificial objects and show complex structures, the problems caused by the characteristics of SAR images are remarkable (Niu and Ban, 2013). To resolve the problems associated with remote sensing data, several classification methods have been proposed, such as maximum likelihood classifiers, k-nearest neighbor, support vector machines, random forest, and artificial neural networks (Kavzoglu and Mather, 2003; Shiraishi et al., 2014; Kumar et al., 2015).

Optical and infrared satellite images, including Landsat, MODIS, and VIIRS, are highly accessible owing to data availability and have been acquired over a long period of four decades. The KOMPSAT-5 satellite, Korea's first satellite equipped with an X-band SAR instrument, was launched on August 22, 2013 (Jang et al., 2019). Its central frequency is 9.66 GHz, and its revisit time is approximately 28 days at an altitude of 550 km. The spatial resolution and swath width change with respect to the acquisition mode and incidence angle. There are three acquisition modes: the high-resolution mode, standard mode (ST), and wide swath mode. Four extended acquisition modes have been added since 2015, namely the enhanced high-resolution mode, ultra-high-resolution mode, enhanced standard mode (ES), and enhanced wide swath mode. KOMPSAT-5 data have been used for diverse applications, including target detection (Park et al., 2020) and sea surface winds (Jang et al., 2018); however, no studies have been conducted on LULC. In this study, we developed LULC classification methods based on a deep learning method using KOMPSAT-5 single co-polarized and digital elevation model (DEM) data. This paper is organized as follows: Section 2 describes the materials, including the study area and data; Section 3 introduces the data preprocessing and the deep learning model based on the conditional generative adversarial network (cGAN) method; Section 4 presents the accuracy of the LULC classification model and discusses the error factors; and Section 5 provides a summary and conclusions.

2. Materials

1) Study Area

South Korea is located on the margin of Northeast Asia bordering the northwest Pacific Ocean (Fig. 1(a)).


Fig. 1. MODIS land cover from the Annual International Geosphere-Biosphere Programme over (a) Northeast Asia and (b) South Korea in 2019.

Because it is located in a monsoon region, its LULC types are highly diverse (Fig. 1(b)). Owing to this diverse LULC over a wide area, it is necessary to monitor LULC using remote-sensing data. However, because the region shows seasonal weather characteristics, the frequency of satellite-based land surface observations varies with the season and atmospheric conditions. During the summer monsoon season, acquiring images of the land surface with optical and infrared satellites is challenging because of frequent cloud cover and rainfall (Jang et al., 2013).

2) KOMPSAT-5 and Elevation Model Data

To develop the LULC mapping model, we used KOMPSAT-5 single co-polarized (HH or VV) data. The spatial resolution and swath width were 3 m and 30 km, respectively, at an incidence angle of 45°. For model development, we collected KOMPSAT-5 Level-1 data comprising seven ST mode images and 23 ES mode images (Fig. 2) and divided the datasets into training and testing images.


Fig. 2. A series of KOMPSAT-5 single co-polarized images used for training and testing the land use and land cover (LULC) classification model.

For the HH-polarized data, two KOMPSAT-5 images observed on August 21, 2014, and January 2, 2015, were used for model testing. For the VV-polarized data, four KOMPSAT-5 images observed on April 14, 2017, May 12, 2017, March 6, 2018, and May 5, 2018, were used for model testing. In addition, we used Shuttle Radar Topography Mission (SRTM) DEM data as auxiliary data for LULC classification. The SRTM DEM data, produced by the National Aeronautics and Space Administration (NASA), cover 56°S to 60°N at a spatial resolution of one arc-second (approximately 30 m).

3) LULC Data

Owing to the high resolution of KOMPSAT-5, LULC data with high spatial resolution are necessary. We therefore used the main category-type LULC data produced by the Ministry of Environment (ME), South Korea (https://egis.me.go.kr/main.do). The ME collects Landsat data over long periods and produces the main category-type LULC data every 10 years. Its spatial resolution is 30 m, and there is no operational service for areas adjacent to North Korea. The main category types originally comprise built-up, cropland, forest, grassland, wetland, bare land, and water bodies. We merged cropland, forest, grassland, and bare land into vegetation and classified the LULC types into built-up, vegetation, wetland, and water bodies (Fig. 3).


Fig. 3. Land use and land cover (LULC) map produced by the Ministry of Environment (ME), South Korea.

3. Method

1) Data Preprocessing

(1) Radiometric Calibration

The KOMPSAT-5 image processing program version was [K5-SARP2017/TAS-I-R131-2017.1.3-CW64OIV2-2017.05.14]. In this version, the digital number is converted to Sigma0 (sigma naught, i.e., the backscattering coefficient) as follows:

\(\sigma^{0}[\mathrm{dB}]=10 \log _{10}\left[\frac{CALCO}{N\left(\delta_{a} \delta_{s}\right)} \sum_{(i, j) \in D}^{N}\left\{\left(\left(I_{i, j} \times RF\right)^{2}+\left(Q_{i, j} \times RF\right)^{2}\right) \sin \left(\theta_{i, j}\right)\right\}\right]\)       (1)

where N indicates the number of pixels in domain D; CALCO and RF denote the calibration constant and rescaling factor, respectively; δs is \(\frac{c}{2 \cdot BW_{rg}}\), where c and BWrg represent the speed of light and the range focusing bandwidth, respectively; δa indicates the azimuth instrument geometric resolution; ρC and ρL denote the column spacing and line spacing, respectively; Ii,j and Qi,j represent the real and imaginary parts of the pixel in the ith row and jth column, respectively; and θi,j is the local incidence angle of the pixel in the ith row and jth column, calculated as follows:

θ[deg] = GIM × GIMRF – GIMoff       (2)

where GIM indicates the digital number of the GIM layer, and GIMRF and GIMoff represent the rescaling factor and offset, respectively. The parameters required for calculating Sigma0 are included in the KOMPSAT-5 data.
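Assuming a focused single-look complex scene held as NumPy arrays, Eqs. (1) and (2) can be sketched as follows; the function and argument names (`sigma0_db`, `gim_dn`, etc.) are illustrative and not part of the KOMPSAT-5 processing software:

```python
import numpy as np

def incidence_angle_deg(gim_dn, gim_rf, gim_off):
    """Eq. (2): convert GIM-layer digital numbers to local incidence angle (deg)."""
    return gim_dn * gim_rf - gim_off

def sigma0_db(i, q, rf, calco, delta_a, delta_s, theta_deg):
    """Eq. (1): backscattering coefficient (dB) over a pixel domain D.

    i, q            : real and imaginary parts of the focused pixels (arrays)
    rf, calco       : rescaling factor and calibration constant
    delta_a/delta_s : azimuth / slant-range geometric resolutions
    theta_deg       : local incidence angle per pixel (degrees)
    """
    n = i.size
    # Calibrated power of each pixel, weighted by the local incidence angle
    power = ((i * rf) ** 2 + (q * rf) ** 2) * np.sin(np.deg2rad(theta_deg))
    return 10.0 * np.log10(calco / (n * delta_a * delta_s) * power.sum())
```

In practice the calibration constants (CALCO, RF, GIMRF, GIMoff) are read from the KOMPSAT-5 product metadata, as the text notes.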

(2) Construction of Dataset

The spatial resolution of the KOMPSAT-5 ST data is 3 m at an incidence angle of 45°, and the resolution changes depending on the incidence angle. However, the spatial resolution of the SRTM DEM and ME LULC data is 30 m. To construct the LULC model from the KOMPSAT-5 data, it was necessary to match the spatial resolutions. Thus, we conducted a multi-looking process on the KOMPSAT-5 data; the final pixel resolution of the multi-looked KOMPSAT-5 data was approximately 30 m. Radiometric calibration was performed for the KOMPSAT-5 data, and Sigma0 was calculated for the single-polarized data.
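The multi-looking step can be sketched as a simple block average of the intensity image. The 10 × 10 window below is an assumption chosen to bring ~3 m pixels to the ~30 m grid of the auxiliary data; the paper does not state the number of looks used:

```python
import numpy as np

def multilook(intensity, looks_az=10, looks_rg=10):
    """Average non-overlapping looks_az x looks_rg blocks of a SAR intensity image.

    Averaging in intensity (not in dB) reduces the pixel spacing toward the
    30 m grid of the SRTM DEM and ME LULC data and also suppresses speckle.
    Edge rows/columns that do not fill a whole block are trimmed.
    """
    rows = intensity.shape[0] // looks_az * looks_az
    cols = intensity.shape[1] // looks_rg * looks_rg
    trimmed = intensity[:rows, :cols]
    return trimmed.reshape(rows // looks_az, looks_az,
                           cols // looks_rg, looks_rg).mean(axis=(1, 3))
```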

We used KOMPSAT-5 single co-polarized data and SRTM DEM data as the input data for the LULC model. The input parameters of the model were Sigma0, incidence angle, and DEM. Finally, for image translation, we created pairs of input data and reference LULC data, and the pairs were subsampled into patches of 256 pixels × 256 pixels.
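Pair construction might look like the following sketch. The three-channel input stack [Sigma0, incidence angle, DEM] follows the description above, while the helper name and the non-overlapping stride are assumptions:

```python
import numpy as np

def make_patch_pairs(inputs, reference, size=256, stride=256):
    """Cut a co-registered input stack and reference LULC map into paired tiles.

    inputs    : (H, W, C) array, e.g. C = 3 for [Sigma0, incidence angle, DEM]
    reference : (H, W) LULC class map on the same 30 m grid
    Returns two aligned lists of (size x size) patches; stride == size gives
    non-overlapping tiles (an assumption; the paper states only the patch size).
    """
    x_patches, y_patches = [], []
    h, w = reference.shape
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            x_patches.append(inputs[r:r + size, c:c + size, :])
            y_patches.append(reference[r:r + size, c:c + size])
    return x_patches, y_patches
```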

2) Image Translation

(1) Conditional Generative Adversarial Network (cGAN)

For LULC mapping using SAR data, we used the cGAN method with a Pix2Pix structure. The cGAN method was developed from the GAN. cGAN methods, including the Pix2Pix structure, have been widely used for SAR applications with high accuracy, such as image translation and target recognition (Turnes et al., 2020; Niu et al., 2021). A GAN is described as a minimum-maximum problem between generative and discriminative models. The generative model generates fake imagery similar to real imagery from input data, while the discriminative model distinguishes the fake imagery produced by the generative model from real imagery. In this study, to train the cGAN method, we used KOMPSAT-5 single co-polarized and SRTM DEM data as input data and the ME LULC data as real imagery. The cGAN method with the Pix2Pix structure uses the U-Net method for the generative model and the PatchGAN for the discriminative model (Ronneberger et al., 2015; Li and Wand, 2016). The minimum-maximum function of the cGAN method is as follows:

\(G^{*}=\arg \min _{G} \max _{D} L_{1}(G, D)+\lambda L_{2}(G)\)       (3)

where L1 and L2 are the cGAN loss function (adversarial loss) and the reconstruction loss (CNN loss), respectively; G and D indicate the generative and discriminative models, respectively; and λ denotes the parameter that represents the trade-off between L1 and L2. The generative and discriminative models provide adversarial feedback to minimize G*. L1 and L2 can be described as follows:

L1(G, D) = E(log D(x, y)) + E(log(1 – D(x, G(x, y′))))       (4)

L2(G) = E(|| y – G(x, y′) ||)       (5)

where E(log D(x, y)) and E(log(1 – D(x, G(x, y′)))) represent the discriminator maximizing the probability of the training data and minimizing the probability of the data from the generative model, respectively; x, y, and y′ denote the input data, real imagery, and fake imagery, respectively; and E(|| y – G(x, y′) ||) indicates the difference between the real and fake imagery.
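The loss terms of Eqs. (3)-(5) can be written down directly with NumPy for illustration. The discriminator outputs `d_real` and `d_fake` are assumed to be probabilities in (0, 1), e.g. the per-patch outputs of a PatchGAN, and the weight λ = 100 is the default of the original Pix2Pix work, not a value stated in this study:

```python
import numpy as np

def adversarial_loss(d_real, d_fake):
    """Eq. (4): E[log D(x, y)] + E[log(1 - D(x, G(x, y')))]."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def reconstruction_loss(y_real, y_fake):
    """Eq. (5): mean absolute difference between real and generated imagery."""
    return np.mean(np.abs(y_real - y_fake))

def generator_objective(d_fake, y_real, y_fake, lam=100.0):
    """Generator side of Eq. (3): fool the discriminator while staying close
    to the reference LULC map. lam is the trade-off parameter lambda."""
    return np.mean(np.log(1.0 - d_fake)) + lam * reconstruction_loss(y_real, y_fake)
```

In a real training loop these quantities would be computed on the network outputs and differentiated by a deep learning framework; the NumPy form only makes the algebra of Eqs. (3)-(5) concrete.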

(2) Residual Unit

When increasing the depth of a neural network, the performance generally improves; however, blindly deepening the network can cause gradient diffusion and explosion, leading to a degradation problem. Residual learning has therefore been proposed to resolve the gradient diffusion and explosion caused by deepening the network structure (He et al., 2016). Each residual unit can be expressed as follows:

yi = h(xi) + F(xi)       (6)

xi+1 = f(yi)       (7)

where xi denotes the input of the ith residual unit; xi+1 indicates both the output of the ith residual unit and the input of the (i+1)th residual unit; F(·) and f(·) represent the residual function and activation function, respectively; and h(·) is an identity mapping function. Fig. 4(a) illustrates the residual block comprising the convolutional neural network, batch normalization, and rectified linear unit activation function. In this study, we used the deep ResUNet, which combines U-Net with residual units (Fig. 4(b)). The residual unit and skip connection ease the training of the model and facilitate information propagation without degradation, enabling the efficient construction of a model with better performance and fewer parameters (Zhang et al., 2018).
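Eqs. (6) and (7) amount to very little code. The sketch below stands in an arbitrary callable for the residual function F, so it shows only the structure of a residual unit, not the actual convolution/batch-normalization layers of Fig. 4(a):

```python
import numpy as np

def relu(x):
    """Rectified linear unit, the activation f in Eq. (7)."""
    return np.maximum(x, 0.0)

def residual_unit(x, residual_fn, identity_fn=lambda v: v, activation=relu):
    """Eqs. (6)-(7): y_i = h(x_i) + F(x_i), then x_{i+1} = f(y_i).

    residual_fn plays the role of F (the stacked layers of Fig. 4(a));
    identity_fn is the identity mapping h, kept as a plain pass-through here.
    """
    y = identity_fn(x) + residual_fn(x)   # Eq. (6)
    return activation(y)                  # Eq. (7)
```

The skip connection is exactly the `identity_fn(x) +` term: gradients can flow through it unchanged, which is why deep ResUNet trains without the degradation problem described above.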


Fig. 4. Flow diagram of (a) residual unit and (b) ResUNet.

3) Accuracy Analysis

The LULC results translated from the KOMPSAT-5 data by the cGAN model were compared with the ME LULC data (reference data). To quantitatively assess the accuracy, it is common to use the error matrix (confusion matrix), as shown in Table 1. We used the producer's accuracy (PA), user's accuracy (UA), overall accuracy (OA), and kappa coefficient, calculated as follows:

\(P A_{i}=\frac{x_{i i}}{x_{+i}}\)       (8)

\(U A_{i}=\frac{x_{i i}}{x_{i+}}\)       (9)

\(O A=\frac{\sum_{i=1}^{r} x_{i i}}{N}\)       (10)

\(\text { kappa }=\frac{N \times \sum_{i=1}^{r} x_{i i}-\sum_{i=1}^{r} x_{i+} \times x_{+i}}{N^{2}-\sum_{i=1}^{r}\left(x_{i+} \times x_{+i}\right)}\)       (11)

where N and r are the total number of pixels and the total number of classes, respectively; xij indicates the number of pixels with estimated class i and reference class j; xi+ denotes the number of pixels with estimated class i; and x+i denotes the number of pixels with reference class i (Table 1).
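Given an r × r error matrix laid out as in Table 1 (rows = estimated classes, columns = reference classes), Eqs. (8)-(11) can be computed as follows (a minimal sketch, not the authors' code):

```python
import numpy as np

def accuracy_metrics(cm):
    """PA, UA, OA, and kappa (Eqs. 8-11) from an r x r error matrix whose
    rows are the estimated classes and columns the reference classes."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()                    # N: total number of pixels
    diag = np.diag(cm)              # x_ii: correctly classified pixels
    row = cm.sum(axis=1)            # x_{i+}: estimated-class totals
    col = cm.sum(axis=0)            # x_{+i}: reference-class totals
    pa = diag / col                 # Eq. (8): producer's accuracy per class
    ua = diag / row                 # Eq. (9): user's accuracy per class
    oa = diag.sum() / n             # Eq. (10): overall accuracy
    chance = (row * col).sum()
    kappa = (n * diag.sum() - chance) / (n ** 2 - chance)   # Eq. (11)
    return pa, ua, oa, kappa
```

For example, a two-class matrix [[40, 10], [10, 40]] gives OA = 0.8 and kappa = 0.6, showing how kappa discounts chance agreement relative to OA.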

Table 1. Layout of a typical error matrix (confusion matrix)


4. Results and Discussions

1) Training History

Fig. 5 presents the training history of the ResUNet model. Training the model involves minimizing the loss function by optimizing the model parameters. Overall, the accuracy improved with increasing training epochs; however, it dropped sharply at around 380 epochs. After 390 epochs, the performance recovered rapidly and then improved only slightly beyond 1000 epochs. The model showed a maximum OA of 93.89% and a kappa coefficient of 0.91 at 1732 epochs. We therefore adopted the ResUNet model trained for 1732 epochs to estimate LULC from the KOMPSAT-5 single co-polarized data.
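The checkpoint-selection rule used here (keep the epoch with the highest validation OA) can be expressed as a one-liner; the `(epoch, oa, kappa)` record format is a hypothetical logging convention, not something specified in the paper:

```python
def best_epoch(history):
    """Return the logged record with the highest validation overall accuracy.

    history: iterable of (epoch, oa, kappa) tuples collected during training.
    """
    return max(history, key=lambda rec: rec[1])
```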


Fig. 5. Changes in overall accuracy and kappa coefficient with respect to the training epochs of the ResUNet model, where the red and blue lines indicate the overall accuracy and kappa coefficient from the validation datasets, respectively.

2) HH-pol image

Fig. 6(a-c) shows the LULC result derived from the KOMPSAT-5 HH-pol image over an inland area observed on August 21, 2014, at 09:22 UTC. Because the image size was 256 × 256 pixels, the total number of pixels was 65,536. The case showed an OA of 94.8% and a kappa coefficient of 0.92. By individual class, built-up showed a UA of 95.0% and PA of 96.9%, vegetation a UA of 94.4% and PA of 93.2%, wetland a UA of 92.2% and PA of 79.3%, and water body a UA of 96.3% and PA of 97.0% (Table 2). The bridge in the southwestern area of the KOMPSAT-5 image was partially misclassified as a water body, and the water body near the center of the image was misclassified as vegetation.


Fig. 6. (a, d) KOMPSAT-5 HH-pol backscattering coefficient image observed on (a-c) August 21, 2014 at 09:22 UTC and (d-f) January 2, 2015 at 20:48 UTC, (b, e) result of land use and land cover (LULC) classification derived from KOMPSAT-5 single co-polarized data using ResUNet, and (c, f) the actual LULC.

Table 2. Error matrix on the classified map of the KOMPSAT-5 HH-pol image observed on August 21, 2014 at 09:22 UTC


Fig. 6(d-f) shows the LULC results derived from the KOMPSAT-5 HH-pol image over the Nakdong River delta observed on January 2, 2015, at 20:48 UTC. Because the image size was 256 × 256 pixels, the total number of pixels was 65,536. The case showed an OA of 94.2% and a kappa coefficient of 0.92. By class, built-up showed a UA of 92.9% and PA of 96.5%, vegetation a UA of 93.5% and PA of 86.7%, wetland a UA of 93.2% and PA of 88.2%, and water body a UA of 96.2% and PA of 98.1% (Table 3). Bridges in the southeastern and eastern areas of the KOMPSAT-5 image were partially misclassified as water bodies. The overall structure of the river delta was detected well; however, its detailed structure was partially misclassified.

Table 3. Error matrix on the classified map of the KOMPSAT-5 HH-pol image observed on January 2, 2015 at 20:48 UTC


3) VV-pol image

Fig. 7(a-c) shows the LULC results derived from the KOMPSAT-5 VV-pol image over an inland area observed on May 12, 2017, at 21:03 UTC. Because the image size was 256 × 256 pixels, the total number of pixels was 65,536. The case showed an OA of 95.1% and a kappa coefficient of 0.91. By class, built-up showed a UA of 93.4% and PA of 96.4%, vegetation a UA of 96.7% and PA of 95.4%, wetland a UA of 88.8% and PA of 70.8%, and water body a UA of 93.4% and PA of 94.1% (Table 4). The bridge in the northeastern area of the KOMPSAT-5 image was partially misclassified as a water body. Large rivers were detected well; however, relatively small rivers were not, as shown in the southeastern areas.


Fig. 7. (a, d) KOMPSAT-5 VV-pol backscattering coefficient image observed on (a-c) May 12, 2017 at 21:03 UTC and (d-f) March 6, 2018 at 09:20 UTC, (b, e) result of land use and land cover (LULC) classification derived from KOMPSAT-5 single co-polarized data using ResUNet, and (c, f) the actual LULC.

Fig. 7(d-f) shows the LULC result derived from the KOMPSAT-5 VV-pol image over an inland area observed on March 6, 2018, at 09:20 UTC. Because the image size was 256 × 256 pixels, the total number of pixels was 65,536. The case showed an OA of 96.0% and a kappa coefficient of 0.92. By class, built-up showed a UA of 88.8% and PA of 87.8%, vegetation a UA of 94.2% and PA of 91.7%, wetland a UA of 89.4% and PA of 85.1%, and water body a UA of 97.9% and PA of 99.3% (Table 5). The bridge in the northwestern area of the KOMPSAT-5 image was partially misclassified as vegetation. Islands near the center were not detected, and their vegetation was misclassified as water bodies.

Table 4. Error matrix on the classified map of the KOMPSAT-5 VV-pol image observed on May 12, 2017 at 21:03 UTC


Table 5. Error matrix on the classified map of the KOMPSAT-5 VV-pol image observed on March 6, 2018 at 09:20 UTC


4) Effect of DEM

To study the effect of the DEM on the accuracy of the LULC classification model, we trained another model using the KOMPSAT-5 single co-polarized data without the DEM, for up to 2000 epochs. This model showed a maximum OA of 93.41% and a kappa coefficient of 0.90 at 1395 epochs, and the model trained for 1395 epochs was adopted. The models including and excluding the DEM showed similar OA and kappa coefficients in the training history.

For qualitative and quantitative validation, we compared the LULC classification results with the actual LULC. Fig. 8 shows a comparison between the KOMPSAT-5 LULC models including and excluding the DEM. For the KOMPSAT-5 image observed on August 21, 2014, at 09:22 UTC, the model excluding the DEM showed an OA of 90.11% and a kappa coefficient of 0.84 (Fig. 8(a, b)). For the image observed on January 2, 2015, at 20:48 UTC, it showed an OA of 89.59% and a kappa coefficient of 0.85 (Fig. 8(c, d)). For the image observed on May 12, 2017, at 21:03 UTC, it showed an OA of 90.40% and a kappa coefficient of 0.82 (Fig. 8(e, f)). For the image observed on March 6, 2018, at 09:20 UTC, it showed an OA of 93.99% and a kappa coefficient of 0.88 (Fig. 8(g, h)). The model including the DEM showed better accuracy in both OA and kappa coefficient than the model excluding the DEM. The model excluding the DEM tended to smooth the LULC result and, in particular, had difficulty classifying vegetation on wetlands. Over the boundary areas between LULC types, the model including the DEM agreed well with the actual LULC data, whereas the model excluding the DEM had difficulty classifying the land type.


Fig. 8. (a, c, e, g) The result of land use and land cover (LULC) classification and (b, d, f, h) the comparison result of actual LULC classification derived from KOMPSAT-5 single co-polarized data including DEM and excluding DEM observed on (a, b) August 21, 2014 at 09:22 UTC, (c, d) January 2, 2015 at 20:48 UTC, (e, f) May 12, 2017 at 21:03 UTC, and (g, h) March 6, 2018 at 09:20 UTC.

5) Error factor

To investigate the effect of the incidence angle on the accuracy of LULC, we compared the OA with the incidence angle of the KOMPSAT-5 data (Fig. 9). As the incidence angle increased, the OA tended to decrease: when the incidence angle was 28-30°, the OA was 92.95%, whereas at 40-42° it was 92.54%. The accuracy of the LULC decreased as the incidence angle increased because of the observation method of the SAR sensor. The SAR sensor scans using a side-looking system and produces images from the magnitude and phase of the received signals (echoes). As an active, side-looking sensor, it estimates distance from the travel time of the echoes: the longer the travel time, the farther the ground target, and the shorter the travel time, the closer the ground target. Accordingly, geometric distortion is produced in the SAR image depending on the geometry of the SAR sensor and the ground target (Fig. 10). Foreshortening occurs as long as the terrain slope is smaller than the local incidence angle and causes the projected distance in the slant range direction to be shortened relative to the actual distance. Layover occurs when the terrain slope exceeds the local incidence angle, so that the ground target is imaged in reverse order and superimposed on the contributions from other areas. Radar shadow occurs when a ground target (i.e., ground height variation, terrain effect, or artificial built-up structure) blocks the radar signal from reaching other parts of the SAR scene (Zhang et al., 2010; Goel, 2014). These geometric distortions cause inherent errors in SAR image quality and differences between the actual coordinates and the SAR image coordinates. As shown in Fig. 10, because the SAR sensor uses a side-looking system, vegetation (V1) masks the built-up area (B1); the SAR sensor cannot observe B1, and the pixels corresponding to B1 may be classified as vegetation. However, the built-up area (B2), water body (W1), and vegetation (V2) are not masked by V1 and can be observed by the SAR system. Likewise, the built-up area (B3) masks the vegetation (V3) and water body (W2); the SAR sensor cannot observe V3 and W2, and the pixels corresponding to them may be classified as built-up. The radar shadow changes depending on the look direction relative to the sensor, the geometry of the surrounding pixels, and the incidence angle (Kropatsch and Strobl, 1990; Horritt et al., 2003; Prasath and Haddad, 2014). In general, the area of the radar shadow increases as the height difference between neighboring pixels increases. Furthermore, the radar shadow increases as the incidence angle of the SAR sensor increases, owing to its side-looking system (Hasselmann et al., 1985). Insofar as radar shadow, layover, and foreshortening distort the SAR image coordinates relative to the actual coordinates, they cause differences between the actual LULC and the result derived from the SAR image. These geometric distortions can therefore degrade the accuracy of LULC.
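The slope-versus-incidence-angle rules for foreshortening and layover described above can be captured in a toy classifier. This is a simplification for radar-facing slopes only; shadowing on slopes facing away from the sensor additionally depends on the surrounding relief and is not modeled here:

```python
def distortion_type(terrain_slope_deg, incidence_deg):
    """Classify the sensor-facing geometric distortion described in Fig. 10.

    Foreshortening: 0 < terrain slope < local incidence angle.
    Layover:        terrain slope exceeds the local incidence angle, so the
                    target is imaged in reverse order.
    Flat terrain (slope <= 0 toward the sensor) is returned as 'none'.
    """
    if terrain_slope_deg <= 0:
        return "none"
    if terrain_slope_deg < incidence_deg:
        return "foreshortening"
    return "layover"
```

Because layover and shadow both grow with the incidence angle for a given relief, this rule is consistent with the decrease in OA from 28-30° to 40-42° reported above.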


Fig. 9. Comparison of overall accuracy with the incidence angle of the KOMPSAT-5 data.


Fig. 10. Illustration of geometric effects (i.e., foreshortening, layover, and radar shadow) on a given row of the synthetic aperture radar (SAR) image, where h and θ indicate the height and incidence angle, respectively, and V, B, and W represent vegetation, built-up, and water bodies, respectively.

5. Conclusions

This paper presented a ResUNet model that maps LULC using only KOMPSAT-5 single co-polarized data and DEM data. We divided 30 KOMPSAT-5 images into 24 images for model training and six images for model testing. We used the ResUNet model for LULC classification using only KOMPSAT-5 single co-polarized data and analyzed the training history of the model. At 1732 epochs, the model showed a maximum OA of 93.89% and a kappa coefficient of 0.91; thus, we adopted the model trained for 1732 epochs as the reference LULC model.

The accuracy of each polarization mode was verified. There was no significant difference in accuracy between polarizations, and the model showed high performance for HH- and VV-polarized data, with an OA greater than 90%. Overall, the LULC model accurately distinguished water body areas from other LULC types. However, the UA and PA of wetlands were lower than those of the other LULC types. The relatively low accuracy over wetlands was caused by tidal conditions: because South Korea experiences strong tidal effects, some wetland areas can be submerged at high tide. These areas thus showed different backscattering coefficient characteristics depending on the satellite observation time and tidal conditions, which contributed to the error. Furthermore, the model excluding the DEM tended to smooth the LULC result, had difficulty classifying LULC over the boundary areas between LULC types and vegetation on wetlands, and showed lower accuracy than the model including the DEM. When assessing the accuracy with respect to the incidence angle, the OA tended to decrease as the incidence angle increased because of the radar shadow caused by the side-looking system of the SAR sensor.

Field-based LULC monitoring systems are time-consuming, costly, and difficult to operate in real time and will play only a limited role in the future. In contrast, LULC mapping based on satellite data saves time and cost and can be operated in real time. Among satellite data in particular, because the SAR sensor is little affected by atmospheric conditions and can observe 24 hours a day, it is useful for monitoring the Earth's surface. This study is the first LULC application that uses only KOMPSAT-5 single co-polarized data and deep learning methods, demonstrating the possibility of a high-performance LULC monitoring system based on these data alone. The accuracy and application area of the LULC model are expected to improve further with more training data. This study contributes to the application of KOMPSAT-5 data and the development of deep learning approaches using KOMPSAT-5 data.

References

  1. Alberga, V., 2007. A study of land cover classification using polarimetric SAR parameters, International Journal of Remote Sensing, 28(17): 3851-3870. https://doi.org/10.1080/01431160601075541
  2. Dwivedi, R.S., K. Sreenivas, and K.V. Ramana, 2005. Cover: Land-use/land-cover change analysis in part of Ethiopia using Landsat Thematic Mapper data, International Journal of Remote Sensing, 26(7): 1285-1287. https://doi.org/10.1080/01431160512331337763
  3. Friedl, M.A., D. Sulla-Menashe, B. Tan, A. Schneider, N. Ramankutty, A. Sibley, and X. Huang, 2010. MODIS Collection 5 global land cover: algorithm refinements and characterization of new datasets, Remote Sensing of Environment, 114(1): 168-182. https://doi.org/10.1016/j.rse.2009.08.016
  4. Goel, K., 2014. Advanced stacking techniques and applications in high resolution SAR interferometry, Technical University of Munich, Munich, Germany.
  5. Hasselmann, K., R.K. Raney, W.J. Plant, W. Alpers, R.A. Shuchman, D.R. Lyzenga, C.L. Rufenach, and M.J. Tucker, 1985. Theory of synthetic aperture radar ocean imaging: A MARSEN view, Journal of Geophysical Research: Oceans, 90(C3): 4659-4686. https://doi.org/10.1029/JC090iC03p04659
  6. He, K., X. Zhang, S. Ren, and J. Sun, 2016. Deep residual learning for image recognition, Proc. of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, Jun. 27-30, pp. 770-778.
  7. Horritt, M.S., D.C. Mason, D.M. Cobby, I.J. Davenport, and P.D. Bates, 2003. Waterline mapping in flooded vegetation from airborne SAR imagery, Remote Sensing of Environment, 85(3): 271-281. https://doi.org/10.1016/S0034-4257(03)00006-3
  8. Jang, J.C., K. Park, and D. Yang, 2018. Validation of sea surface wind estimated from KOMPSAT-5 backscattering coefficient data, Korean Journal of Remote Sensing, 34(6-3): 1383-1398 (in Korean with English abstract). https://doi.org/10.7780/KJRS.2018.34.6.3.6
  9. Jang, J.C., K. Park, D. Yang, and S.G. Lee, 2019. Improvement of KOMPSAT-5 sea surface wind with correction equation retrieval and application of backscattering coefficient, Korean Journal of Remote Sensing, 35(6-4): 1373-1389 (in Korean with English abstract). https://doi.org/10.7780/KJRS.2019.35.6.4.7
  10. Kavzoglu, T. and P.M. Mather, 2003. The use of backpropagating artificial neural networks in land cover classification, International Journal of Remote Sensing, 24(23): 4907-4938. https://doi.org/10.1080/0143116031000114851
  11. Kropatsch, W.G. and D. Strobl, 1990. The generation of SAR layover and shadow maps from digital elevation models, IEEE Transactions on Geoscience and Remote Sensing, 28(1): 98-107. https://doi.org/10.1109/36.45752
  12. Kumar, P., D.K. Gupta, V.N. Mishra, and R. Prasad, 2015. Comparison of support vector machine, artificial neural network, and spectral angle mapper algorithms for crop classification using LISS IV data, International Journal of Remote Sensing, 36(6): 1604-1617. https://doi.org/10.1080/2150704X.2015.1019015
  13. Li, C. and M. Wand, 2016. Precomputed real-time texture synthesis with Markovian generative adversarial networks, Proc. of 2016 European Conference on Computer Vision, Amsterdam, Netherlands, Oct. 11-14, pp. 702-716.
  14. Liu, X., J. He, Y. Yao, J. Zhang, H. Liang, H. Wang, and Y. Hong, 2017. Classifying urban land use by integrating remote sensing and social media data, International Journal of Geographical Information Science, 31(8): 1675-1696. https://doi.org/10.1080/13658816.2017.1324976
  15. Lu, D. and Q. Weng, 2007. A survey of image classification methods and techniques for improving classification performance, International Journal of Remote Sensing, 28(5): 823-870. https://doi.org/10.1080/01431160600746456
  16. Niu, X., D. Yang, K. Yang, H. Pan, Y. Dou, and F. Xia, 2021. Image translation between high-resolution optical and synthetic aperture radar (SAR) data, International Journal of Remote Sensing, 42(12): 4758-4784. https://doi.org/10.1080/01431161.2020.1836426
  17. Niu, X. and Y. Ban, 2013. Multi-temporal RADARSAT-2 polarimetric SAR data for urban land-cover classification using an object-based support vector machine and a rule-based approach, International Journal of Remote Sensing, 34(1): 1-26. https://doi.org/10.1080/01431161.2012.700133
  18. Park, W., W.K. Baek, J.S. Won, and H.S. Jung, 2020. Comparison of input image dimensions for ship detection from KOMPSAT-5 SAR image using deep neural network, Journal of Coastal Research, 102(SI): 208-217.
  19. Phiri, D. and J. Morgenroth, 2017. Developments in Landsat land cover classification methods: A review, Remote Sensing, 9(9): 967. https://doi.org/10.3390/rs9090967
  20. Prasath, V.S. and O. Haddad, 2014. Radar shadow detection in synthetic aperture radar images using digital elevation model and projections, Journal of Applied Remote Sensing, 8(1): 083628. https://doi.org/10.1117/1.JRS.8.083628
  21. Ronneberger, O., P. Fischer, and T. Brox, 2015. U-net: Convolutional networks for biomedical image segmentation, Proc. of 2015 International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, Oct. 5-9, pp. 234-241.
  22. Shiraishi, T., T. Motohka, R.B. Thapa, M. Watanabe, and M. Shimada, 2014. Comparative assessment of supervised classifiers for land use-land cover classification in a tropical region using time-series PALSAR mosaic data, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7(4): 1186-1199. https://doi.org/10.1109/JSTARS.2014.2313572
  23. Turnes, J.N., J.D.B. Castro, D.L. Torres, P.J.S. Vega, R.Q. Feitosa, and P.N. Happ, 2020. Atrous cGAN for SAR to optical image translation, IEEE Geoscience and Remote Sensing Letters, 19: 1-5.
  24. Zhang, C., I. Sargent, X. Pan, H. Li, A. Gardiner, J. Hare, and P.M. Atkinson, 2019. Joint deep learning for land cover and land use classification, Remote Sensing of Environment, 221: 173-187. https://doi.org/10.1016/j.rse.2018.11.014
  25. Zhang, G., W.B. Fei, Z. Li, X. Zhu, and D.R. Li, 2010. Evaluation of the RPC model for spaceborne SAR imagery, Photogrammetric Engineering and Remote Sensing, 76(6): 727-733. https://doi.org/10.14358/PERS.76.6.727
  26. Zhang, Z., Q. Liu, and Y. Wang, 2018. Road extraction by deep residual U-Net, IEEE Geoscience and Remote Sensing Letters, 15(5): 749-753. https://doi.org/10.1109/lgrs.2018.2802944