1. Introduction
Synthetic aperture radar (SAR) is an active imaging system that provides repeated surface coverage independent of sunlight, cloud cover, and weather conditions [1-3]. Despite these merits, the use of radar images has been limited because they are difficult both to acquire and to interpret. Unlike optical images, radar images are hard for untrained people to interpret. Moreover, capturing SAR images is itself a difficult task that has traditionally required an airplane or a satellite. Because of these difficulties, SAR imagery was traditionally used only in special domains such as the military. These difficulties, however, can be overcome with new technologies. With the development of radar hardware and carrier vehicles, SAR images can now be captured with a car or a consumer drone [4,5]. When a car or a drone is used, a set of SAR images covering a narrow zone at high resolution can be obtained. Such SAR images can be used for remote sensing of areas where continuous observation is necessary but physical access is difficult. For example, measuring the size of a water region may be necessary for water-resource management, and illegal buildings in a national park can be observed with SAR images. Recognizing information in a SAR image is difficult for untrained people such as officers of a water-resource office or a national park, but image-analysis technologies can help them. To encourage various applications of SAR data, a system that arranges and displays the information extracted by analyzing SAR images is necessary. Such a system will help to popularize SAR technology.
Image analysis of SAR data has been researched for a long time, and it has recently advanced considerably thanks to the acquisition of high-quality SAR images [6-33]. It is applied in various areas, and the applications of SAR imagery can be summarized in the following three categories: land cover classification, object detection, and change detection. First, by analyzing a SAR image, information on land cover or land use can be obtained [6-19]. Since SAR data are sensitive to the geometric configuration and soil moisture of the land surface, SAR image values provide cues for classifying land cover [9]. The classes used in SAR-based land cover classification vary with the application field; generally, urban region, water, forest, and farmland are the main classes of interest. Crop classification is also regarded as an application of SAR-based land cover classification [12, 19]. In large-scale farmland where farmers cannot check crops individually, information from remote sensing based on SAR data is helpful. Second, objects of interest can be detected in SAR images [20-29]. Object detection in SAR images differs from object detection in optical images. With the progress of deep-learning technology, object detection in optical images has achieved high detection rates for various objects [36]. Owing to limitations in color and resolution, SAR images are limited in the variety of objects they can support. For special cases, however, object detection in SAR images is useful: because of imaging capability irrespective of the time of day and weather conditions, target detection using SAR has played an important role [28]. Objects of interest in SAR applications include military targets [26-28], ships [23-25], oil tanks [21, 22], and vehicles [20, 29]. Third, by analyzing a temporal series of SAR images, changes of the land can be detected; this is called SAR image change detection [30-33]. SAR image change detection can be widely used in applications such as agricultural surveys, natural disaster monitoring, forest monitoring, shoreline monitoring, and urban change analysis [32]. Land use in agriculture can be extracted from SAR images, and a sequence of such land-use information helps to discover agricultural trends and changes. When a disaster such as a flood occurs, the changed region can be detected by comparing two SAR images, one before and one after the disaster. Similarly, with a sequence of SAR images of a fixed region, changes of forest, water, and urban regions can be monitored. Among these three applications, land cover classification is likely the most widely used, and it is our main concern.
In this paper, a web-based management system for SAR images is proposed, and the system includes land cover classification modules. In the system, a land cover classifier can be trained and then applied to newly uploaded SAR data. Uploaded SAR images and their classification results are displayed on a map and managed by the data owner. Since SAR carries information about the land, it is natural to view it on a map; map-based visualization of such information is very effective [39]. Hereafter, the system is called the SAR data management (SARDM) system. Since it is developed as a web service, users can easily access the SARDM system. In the SARDM system, users can upload their own SAR images and the associated side information. The land cover classification module is based on a convolutional neural network (CNN) and is designed for greyscale SAR images. In some works, land cover classification is performed after fusing optical and SAR images or after combining several polarizations of SAR data [13, 17, 18]; such combined input data are not considered in the SARDM system. For a greyscale image, a classifier with at most seven classes can be trained and used in the system.
The contribution of the SARDM system is that it helps users who are not familiar with SAR images to work with them. The system can be used by urban engineers, geologic researchers, and forest-vegetation researchers to extract valuable information from the land cover of specific regions. When the system is first used, the effort of making training data is inevitable, and help from SAR experts is necessary at that stage. The remainder of this paper is organized as follows. Section 2 explains the structure and interface of the SARDM system. Section 3 explains the land cover classification module embedded in the system, and the experimental results for the module are shown in Section 4. Section 5 concludes this paper.
2. SARDM System
2.1 Structure
In the SARDM system, users manage SAR data and land cover classification models. A classification model can be trained from uploaded SAR data, and land cover classification of a newly uploaded SAR image can be performed if a classification model exists. The process flow diagram is shown in Fig. 1. The system stores uploaded data and runs the training/classification process on the user's request. The classification result, which is either produced by the classifier or uploaded as a training ground truth, is stored in CSV file format. The size of a CSV file, which is matched 1:1 to a SAR image, equals the size of that SAR image, and pixel-level labels are stored in it. The label '0' in a CSV file denotes the 'uncertain region.' Besides the 'uncertain region,' up to seven user-defined labels are allowed, so the values '0' to '7' may appear in the CSV file. A ground-truth CSV file can be created with the labeling tool. A CNN model is trained to build a land cover classifier from the user's own data. The properties of users' data and the goals of their classification tasks vary: depending on the capturing method and device, there are large differences in quality, amount, and type of information. Variation in band, polarization, resolution, and site makes it difficult to develop a single classifier that works well for all types of SAR data. Therefore, the SARDM system lets users create their own classifiers instead of applying a fixed one. The CNN model is written with Keras [37], so the trained model is compatible with code written in Keras. The training process is explained in the next section. SAR images and classification results are displayed on a Google map.
Fig. 1. Process flow diagram of SARDM system
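As a concrete illustration, the following minimal sketch shows how an uploaded ground-truth CSV could be checked against its SAR image as described above (a 1:1 size match and label values restricted to 0-7). The function name and the use of NumPy/Pillow are our own assumptions for the sketch, not the deployed implementation of the SARDM system.

```python
import numpy as np
from PIL import Image

def validate_ground_truth(sar_path: str, csv_path: str, max_label: int = 7) -> np.ndarray:
    """Hypothetical check of a pixel-level label CSV against its SAR image."""
    image = np.asarray(Image.open(sar_path).convert("L"))    # greyscale SAR image
    labels = np.loadtxt(csv_path, delimiter=",", dtype=int)  # pixel-level label matrix

    # The CSV must be matched 1:1 to the SAR image size.
    if labels.shape != image.shape:
        raise ValueError(f"label size {labels.shape} != image size {image.shape}")

    # Only '0' (uncertain region) and user-defined labels '1'..'7' are allowed.
    if labels.min() < 0 or labels.max() > max_label:
        raise ValueError("labels must lie in the range 0..7")

    return labels
```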
2.2 SAR image and data
The side information coupled with a SAR image consists of the GPS information of the image, the band/polarization information, the capture date, and a code. The GPS information is needed to locate the SAR data. The classification module depends on the band and polarization, so they are also required. The date information is needed to display SAR data and classification results for a requested period. The code, which is arbitrarily created by the user for area management, is attached to the SAR image; if SAR images of specific areas are uploaded continuously, the code is used to find a specific set of SAR data in the database.
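For illustration, the side information described above could be represented by a small record such as the following; the field names and types are our own assumptions for the sketch, not the exact database schema of the SARDM system.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SarSideInfo:
    """Hypothetical side-information record attached to one SAR image."""
    corner_coords: list[tuple[float, float]]  # GPS (lat, lon) of the image end points
    band: str                                 # e.g. "C" or "Ku"
    polarization: str                         # e.g. "VV" or "HH"
    capture_date: date                        # date of capturing
    area_code: str                            # user-defined code for area management
    is_public: bool = False                   # opened to unspecified users or not
```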
2.3 User interface
There are four pages in the SARDM system: log-in, data upload, data management, and the main page. The system is developed as a web service, so the first page after accessing the system is the log-in page, where approved users enter their ID and password. On the data upload page, users can upload a SAR image and its side information; Fig. 2 shows the data upload page. Uploading a SAR image is mandatory, while uploading a CSV file is optional. When a CSV file is uploaded, its size is compared with the SAR image size. Band/polarization information is selected from lists. The GPS coordinates of the image end points must be entered, and the image is localized on the map using this GPS information. Users can choose whether the data is open to unspecified users or not. After viewing the image on the map, the transparency of the image can be adjusted, which is used when the image is overlaid on the map. Since the GPS information uploaded by the user can be inaccurate, an interface for adjusting the position on the map is also provided.
Fig. 2. Page of data upload
The main roles of the data management page, shown in Fig. 3, are data editing and requesting analysis. As in the figure, there are two tabs, SAR and model, and the layouts of the two tabs are very similar. Users can edit the side information of SAR data and delete SAR data; on the right side of Fig. 3, edit and delete buttons exist for every item. Similarly, trained models can also be edited and deleted. After selecting a set of data, a classification model is trained by clicking the 'Train' button. If one or more trained models exist, a set of SAR images can be classified by selecting the images and clicking the 'Classify' button; a pop-up window for selecting a classifier model appears, and classification starts when a classifier is selected. Classification and training are time-consuming, so the sign of completion appears on the data management page after a certain amount of time. After classification, the 'cla.' column in Fig. 3 changes from 'False' to 'True,' and a similar change can be found in the model management tab.
Fig. 3. Data management menu
On the main page, users can view all open SAR data and analysis results on the map. A detailed explanation of the display is given in the next subsection.
2.4 Display of analysis results
This subsection explains how results are displayed on the main page. As explained in the previous subsection, the main page displays classification results. Every user can choose whether an uploaded SAR image is open to unspecified users, so on the main page users can see their own data and the data of others if they have been opened. There are two display options: an overlay style option and a time option. The overlay style can be chosen from three styles: overlaying both the SAR image and the analysis result on the map, overlaying only the SAR image, or showing just the map. Using the time option, users can check the analysis results of SAR data acquired at a specific time; only the SAR data and classification results before the set-up time are shown.
2.5 Labeling tool for training data
To create a user's own land cover classifier, training data is necessary, so a tool that helps label SAR images is also developed. The tool is not embedded in the SARDM system but works standalone. To make training data with little effort, a semi-automatic SAR image land-cover labeling tool is developed [34]. Labeling land cover in a SAR image is not easy; even for a trained researcher who is familiar with SAR images, it is time-consuming work. To save labeling effort, a semi-automatic labeling system based on external information is developed. An open API provided by the Korea National Spatial Data Infrastructure Portal (NSDI) returns the local land category for a given GPS position [35]. The land category is one of 28 categories legally specified by national law [34], and this category information is available only in the Republic of Korea. These categories can be grouped into the user's land cover classes. Fig. 4 shows the sequential processes of the semi-automatic SAR labeling system.
Fig. 4. The overall structure of semi-automatic labeling tool.
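A minimal sketch of the category grouping described above is given below. The particular mapping between NSDI categories and the four user classes used later in this paper is our own illustrative assumption, not the full 28-category table of the labeling tool.

```python
# Hypothetical grouping of NSDI legal land categories into user-defined
# land cover classes (labels follow the CSV convention: 0 = uncertain region).
USER_CLASSES = {"uncertain": 0, "forest": 1, "building": 2, "water": 3, "farmland": 4}

# Illustrative subset of the 28 legal categories; the real grouping is user-defined.
NSDI_TO_CLASS = {
    "forest land": "forest",
    "building site": "building",
    "river": "water",
    "dry field": "farmland",
    "paddy field": "farmland",
    "park": "uncertain",   # mixed regions are left for manual labeling
}

def category_to_label(nsdi_category: str) -> int:
    """Map an NSDI category name to a class label; unknown categories stay uncertain."""
    return USER_CLASSES[NSDI_TO_CLASS.get(nsdi_category, "uncertain")]
```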
The data from NSDI give only a rough label, so subsequent manual labeling is necessary for two major reasons: the difference between legal use and actual use, and the wide scope of some categories. The categories are designed to clarify the legal use of land, so the category information can differ from the actual use at SAR capturing time. For example, there is a 'dry field' category, which would naturally be labeled as 'farmland'; but sometimes a vinyl greenhouse stands on a 'dry field,' and the greenhouse region should not be labeled as 'farmland.' In addition, since a legal category can cover a wide scope, one category can include regions of different land cover labels. An example is the 'park' category: within a 'park' there are buildings, forest, and water regions. Therefore, our labeling system should not rely completely on NSDI information, and additional manual labeling is necessary.
A manual labeling tool, which aims to modify the label data after labeling with NSDI information, is also developed. After automatic labeling, the output CSV file has the same size as the input image. In the labeling tool, the input image and the generated CSV file are loaded, and the user can manually modify the label information for a specific area [34]. The NSDI information may be useless in some cases such as crop classification or forest classification, so it is also possible to use the manual labeling tool without auto-labeling. Detailed explanations can be found in [34].
3. Land cover classification module
3.1 Classifier development
3.1.1. Previous works
Land cover classification based on SAR images has recently been actively researched with the acquisition of high-quality SAR images, and various methods have been tried. Early studies focus on feature extraction based on properties of the SAR image. In [6], a multi-level local pattern histogram and a support vector machine are used. In [7], land use and land cover are classified using a decision tree and polarimetric parameters. Similarly, a decision-tree-based adaptive land cover classification technique is proposed in [8]. In [9], the impact of feature normalization when fusing optical and SAR images is presented. In [10], spatial features are used together with polarimetric features. Since 2015, neural networks have been used for SAR land cover classification [11, 12, 14, 15, 18]. In [11] and [12], a CNN with two convolutional layers is used to classify SAR images. In [14], a CNN with three convolutional layers and two fully connected layers is used. In [15], features extracted with three different filters are combined and used as the input of a two-layer neural network for land cover classification. Recently, the use of multispectral SAR data or the fusion of SAR and optical images has become a trend in SAR land cover classification research to enhance classification accuracy [13, 17, 18].
3.1.2. Structure of classifier
Our classification structure is based on a CNN, and there is no separate feature extraction step. The input of the classifier is a 32 by 32 greyscale image block, and the output is a class label among N classes. In the SARDM system, a classifier is created separately for each user's data and classification problem, so the number of classes is determined by the user's ground-truth CSV files. For clarity of display, N is limited to 7. The input block size is chosen considering the trade-off between accuracy and interpretability [14]: a larger input can improve classification accuracy, but it makes converting block-level results into image-level results more difficult.
When the user starts the training process, the SAR files and their matched CSV files are checked. The CSV files must contain values from '0' to '7,' and their size must equal the size of the matched SAR image. Although the size check is already done during data uploading, it is performed once again. After that, 32 by 32 training blocks are extracted by exhaustive search: a 32 by 32 window is moved over the label data, and blocks in which all points have the same label are extracted. Training blocks are not extracted for the 'uncertain region,' since it is assumed to be a mixed region or a region that cannot be defined, and including it in the training set is not considered proper. Using the training blocks, the CNN model is trained and saved. These processes are shown in Fig. 5.
Fig. 5. Training process
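The following minimal sketch illustrates the block extraction described above: a 32 by 32 window is moved over the label matrix, and only blocks with a single non-zero label are kept. The non-overlapping stride of 32 and the NumPy-based implementation are our own assumptions for the sketch.

```python
import numpy as np

def extract_training_blocks(image: np.ndarray, labels: np.ndarray,
                            block: int = 32, stride: int = 32):
    """Collect homogeneous 32x32 blocks and their labels (label 0 = uncertain, skipped)."""
    blocks, block_labels = [], []
    h, w = labels.shape
    for y in range(0, h - block + 1, stride):
        for x in range(0, w - block + 1, stride):
            patch = labels[y:y + block, x:x + block]
            first = patch[0, 0]
            # Keep the block only if every pixel shares one user-defined label.
            if first != 0 and np.all(patch == first):
                blocks.append(image[y:y + block, x:x + block])
                block_labels.append(first)
    return np.stack(blocks), np.array(block_labels)
```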
The CNN model in SARDM is a simplified version of the VGG-16 structure [38] and is presented in Fig. 6. All layers are zero-initialized and trained for 20 epochs. Batch normalization is performed after every convolutional layer; it is omitted in the figure. The kernel size of the 2D convolutional layers, denoted Conv2D in Fig. 6, is 3 by 3. Max pooling is performed over a 2 by 2 pixel window with stride 2. Fully connected layers, denoted FC in Fig. 6, follow the convolutional layers, and the output of the final layer has size N. The land cover classification task is simpler than the image classification task that VGG-16 targets, so the structure is simplified before use. A classifier with a large number of parameters takes a long time to train, which is not suitable for a web service, so the number of CNN parameters should be limited. In our experiments, the original VGG-16 structure was also tested, but it did not perform well on our land cover classification dataset. Thus, the structure shown in Fig. 6 was determined experimentally. The total number of parameters is 126,712.
Fig. 6. CNN model structure
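A minimal Keras sketch of a simplified VGG-style model of this kind is given below. The number of convolutional blocks and the filter widths are assumptions made for illustration; they are not guaranteed to reproduce the 126,712-parameter model of Fig. 6.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_classifier(num_classes: int) -> keras.Model:
    """Simplified VGG-style CNN for 32x32 greyscale blocks (layer widths assumed)."""
    inputs = keras.Input(shape=(32, 32, 1))
    x = inputs
    for filters in (16, 32, 64):   # assumed widths; each block: 3x3 conv + BN + 2x2 max pool
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(pool_size=2, strides=2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)                 # FC layer
    outputs = layers.Dense(num_classes, activation="softmax")(x)  # final output of size N
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (class labels remapped to 0..N-1):
# model = build_classifier(num_classes=4); model.fit(x_train, y_train, epochs=20)
```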
3.2 Classification for SAR image
As described in the previous subsection, the classifier is designed to work on 32 by 32 image blocks. For research purposes, block-based image data may be used [16], but real images are not block-based, so a module for handling whole images is necessary. Classification is therefore performed per superpixel using a superpixel algorithm [40]. Fig. 7 shows the flow chart of the classification process. First, the SAR image is divided into superpixels using the SEEDS algorithm [40]. For every superpixel, a class label is determined by applying the CNN model after padding. Based on the size of each superpixel, a virtual rectangular block is set; this block contains many positions without label data, so it is padded by overlapping the superpixel. After that, block-based classification is performed sequentially over the virtual rectangular block, and the classification result of the superpixel is determined by voting over the block results. If there is no dominant voting result or the virtual rectangular block is smaller than 32 by 32, the superpixel is labeled 'uncertain region.' These processes are performed for all superpixels, and a CSV file is created by gathering the classification results.
Fig. 7. Flow chart of classification for SAR image
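A condensed sketch of this superpixel-based procedure is shown below, using the SEEDS implementation in OpenCV's ximgproc module. The SEEDS parameters, the majority-vote rule, and the mapping of predictions back to labels 1..N are assumptions for illustration only, and the padding of the virtual rectangle is omitted for brevity.

```python
import numpy as np
import cv2  # requires opencv-contrib-python for cv2.ximgproc

def classify_image(image: np.ndarray, model, num_superpixels: int = 400) -> np.ndarray:
    """Superpixel-wise classification of a greyscale SAR image (0 = uncertain region)."""
    h, w = image.shape
    seeds = cv2.ximgproc.createSuperpixelSEEDS(w, h, 1, num_superpixels, 4)
    seeds.iterate(image, 10)
    sp_labels = seeds.getLabels()                        # per-pixel superpixel index

    result = np.zeros((h, w), dtype=np.uint8)
    for sp in range(seeds.getNumberOfSuperpixels()):
        ys, xs = np.nonzero(sp_labels == sp)
        y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
        if y1 - y0 < 32 or x1 - x0 < 32:                 # too small: stays 'uncertain'
            continue
        # Cut the virtual rectangle and classify its 32x32 sub-blocks.
        rect = image[y0:y1, x0:x1]
        blocks = [rect[i:i + 32, j:j + 32]
                  for i in range(0, rect.shape[0] - 31, 32)
                  for j in range(0, rect.shape[1] - 31, 32)]
        preds = model.predict(np.stack(blocks)[..., None], verbose=0).argmax(axis=1)
        votes = np.bincount(preds)
        if votes.max() > preds.size // 2:                # assumed majority-vote rule
            result[ys, xs] = votes.argmax() + 1          # shift back to labels 1..N
    return result
```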
4. Experimental results
4.1 Developments of training data
To demonstrate the usability of the land cover classification embedded in the SARDM system, experimental results for two very different sets of SAR data are shown: one captured from a satellite and the other captured from a car. The classification task is to find four classes: forest, building, water, and farmland. We consider these four classes applicable to many land cover problems. Most regions belong to these four classes, but ambiguous regions sometimes exist, and regions that cannot be assigned to any of the four classes, such as roads, also exist; such regions are labeled 'uncertain region.' The training data are labeled with the tool explained above; since four classes are used, the 28 categories of the semi-automatic labeling tool are mapped to forest, building, water, farmland, and 'uncertain region.' As satellite SAR data, data from Sentinel-1A are used: C-band data captured in May 2020 over Jeollanam-do, Korea. The second data set was acquired from a car on a highway and is hereafter called 'car-borne SAR': Ku-band data captured in January 2021 over Chungcheongnam-do, Korea. Contrary to the satellite SAR data, the car-borne SAR image covers a very narrow region and is strongly affected by shadows because the radar position is low. The quality of the car-borne SAR image is relatively poor, and the amount of data is small. The data used in our experiments are shown in Fig. 8.
Fig. 8. SAR data used in experiments
The numbers of training and test image blocks are given in Table 1. When the SARDM system is used, a balanced number of training samples among classes is not guaranteed, so in our experiment the training data are not forced to be balanced. To report classification results, however, the same number of test samples is used for all classes. The class imbalance is especially severe in the car-borne SAR data because the water region is very narrow; consequently, the number of test samples for the car-borne SAR data is very small.
Table 1. The number of data
4.2 Classification results
The classification accuracies for the two datasets are given in Table 2. In the case of the car-borne SAR, the lack of training data and the poor image quality lead to relatively low accuracy.
Table 2. Classification accuracy
4.3 Display in SARDM
Fig. 9 shows an example of SAR land cover classification displayed on the main page of the SARDM system. In the figure, the red rectangle shows the classification results, and the other data were uploaded as training data. As shown in the figure, the land cover classification results for the uploaded SAR data are displayed successfully.
Fig. 9. Display example
5. Conclusion
In this paper, a web-based system that manages SAR data is proposed, together with a SAR-based land cover classification module embedded in the system. Using the system, a land cover classifier is trained with the user's own data and applied to newly captured SAR images. Since it is difficult for untrained people to interpret SAR images, SAR imagery has traditionally been used only for special cases. The proposed system is designed to make SAR images easy to understand and will help such people apply SAR technology. The land cover classification module is based on a CNN and is verified in our experiments with two SAR datasets that differ greatly from each other. With this system, users who need to classify SAR images consistently can manage their SAR data and display the classification results easily.
Acknowledgement
This work was supported by the Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (No. 20002733, Development of small-sized 0.3 m resolution airborne multi-band SAR and big data analysis system).
References
[1] A. Moreira, P. Prats-Iraola, M. Younis, G. Krieger, I. Hajnsek, and K. P. Papathanassiou, "A tutorial on synthetic aperture radar," IEEE Geosci. Remote Sens. Mag., vol. 1, no. 1, pp. 6-43, Mar. 2013. https://doi.org/10.1109/MGRS.2013.2248301
[2] J. Geng, H. Wang, J. Fan, and X. Ma, "SAR Image Classification via Deep Recurrent Encoding Neural Networks," IEEE Trans. on Geoscience and Remote Sensing, vol. 56, no. 4, pp. 2255-2269, Apr. 2018. https://doi.org/10.1109/TGRS.2017.2777868
[3] J. R. Jensen, Introductory Digital Image Processing: A Remote Sensing Perspective, London, U.K.: Pearson, 2005.
[4] O. Frey, C. L. Werner, and R. Coscione, "Car-borne and UAV-borne mobile mapping of surface displacements with a compact repeat-pass interferometric SAR system at L-band," in Proc. of IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), Yokohama, Japan, pp. 274-277, Jul. 2019.
[5] C. J. Li and H. Ling, "Synthetic aperture radar imaging using a small consumer drone," in Proc. of IEEE Int. Symp. Antennas Propag. USNC/URSI Nat. Radio Sci. Meeting, pp. 685-686, 2015.
[6] D. Dai, W. Yang, and H. Sun, "Multilevel Local Pattern Histogram for SAR Image Classification," IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 2, pp. 225-229, Mar. 2011. https://doi.org/10.1109/LGRS.2010.2058997
[7] Z. Qi, A. Yeh, X. Li, and Z. Lin, "A novel algorithm for land use and land cover classification using RADARSAT-2 polarimetric SAR data," Remote Sens. Environ., vol. 118, pp. 21-39, Mar. 2012. https://doi.org/10.1016/j.rse.2011.11.001
[8] P. Mishra and D. Singh, "A Statistical-Measure-Based Adaptive Land Cover Classification Algorithm by Efficient Utilization of Polarimetric SAR Observables," IEEE Trans. on Geoscience and Remote Sensing, vol. 52, no. 5, pp. 2889-2900, May 2014. https://doi.org/10.1109/TGRS.2013.2267548
[9] H. Zhang, H. Lin, and Y. Li, "Impacts of Feature Normalization on Optical and SAR Data Fusion for Land Use/Land Cover Classification," IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 5, pp. 1061-1065, May 2015. https://doi.org/10.1109/LGRS.2014.2377722
[10] P. Du, A. Samat, B. Waske, S. Liu, and Z. Li, "Random forest and rotation forest for fully polarized SAR image classification using polarimetric and spatial features," ISPRS J. Photogramm. Remote Sens., vol. 105, pp. 38-53, Jul. 2015. https://doi.org/10.1016/j.isprsjprs.2015.03.002
[11] Y. Zhou, H. Wang, F. Xu, and Y.-Q. Jin, "Polarimetric SAR Image Classification Using Deep Convolutional Neural Networks," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 12, pp. 1935-1939, Dec. 2016. https://doi.org/10.1109/LGRS.2016.2618840
[12] Z. Zhang, H. Wang, F. Xu, and Y.-Q. Jin, "Complex-Valued Convolutional Neural Network and Its Application in Polarimetric SAR Image Classification," IEEE Trans. on Geoscience and Remote Sensing, vol. 55, no. 12, pp. 7177-7188, Dec. 2017. https://doi.org/10.1109/TGRS.2017.2743222
[13] C. Sukawattanavijit, J. Chen, and H. Zhang, "GA-SVM Algorithm for Improving Land-Cover Classification Using SAR and Optical Remote Sensing Data," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 3, pp. 284-288, Mar. 2017. https://doi.org/10.1109/LGRS.2016.2628406
[14] J. Li, C. Wang, S. Wang, H. Zhang, and B. Zhang, "Classification of very high resolution SAR image based on convolutional neural network," in Proc. of Int. Workshop Remote Sens. Intell. Process., May 2017.
[15] J. Geng, H. Wang, J. Fan, and X. Ma, "Deep Supervised and Contractive Neural Network for SAR Image Classification," IEEE Trans. on Geoscience and Remote Sensing, vol. 55, no. 4, pp. 2442-2459, Apr. 2017. https://doi.org/10.1109/TGRS.2016.2645226
[16] C. O. Dumitru, G. Schwarz, and M. Datcu, "SAR Image Land Cover Datasets for Classification Benchmarking of Temporal Changes," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 5, pp. 1571-1592, May 2018. https://doi.org/10.1109/JSTARS.2018.2803260
[17] B. Hu, Y. Xu, X. Huang, Q. Cheng, Q. Ding, L. Bai, and Y. Li, "Improving Urban Land Cover Classification with Combined Use of Sentinel-2 and Sentinel-1 Imagery," ISPRS Int. J. Geo-Inf., vol. 10, p. 533, 2021.
[18] A. Sebastianelli, M. P. Del Rosso, P. P. Mathieu, and S. L. Ullo, "Paradigm selection for data fusion of SAR and multispectral sentinel data applied to land-cover classification," arXiv preprint arXiv:2106.11056, 2021.
[19] H. Skriver, F. Mattia, G. Satalino, A. Balenzano, V. R. N. Pauwels, N. E. C. Verhoest, and M. Davidson, "Crop Classification Using Short-Revisit Multitemporal SAR Data," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 4, no. 2, pp. 423-431, Jun. 2011. https://doi.org/10.1109/JSTARS.2011.2106198
[20] Y. Huang and F. Liu, "Detecting Cars in VHR SAR Images via Semantic CFAR Algorithm," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 6, pp. 801-805, Jun. 2016. https://doi.org/10.1109/LGRS.2016.2546309
[21] Y. Wang, M. Tang, T. Tan, and X. Tai, "Detection of Circular Oil Tanks Based on the Fusion of SAR and Optical Images," in Proc. of International Conference on Image and Graphics (ICIG), 2004.
[22] L. Zhang, S. Wang, C. Liu, and Y. Wang, "Saliency-Driven Oil Tank Detection Based on Multidimensional Feature Vector Clustering for SAR Images," IEEE Geoscience and Remote Sensing Letters, vol. 16, no. 4, pp. 653-657, Apr. 2019. https://doi.org/10.1109/LGRS.2018.2878106
[23] H. Dai, L. Du, Y. Wang, and Z. Wang, "A Modified CFAR Algorithm Based on Object Proposals for Ship Target Detection in SAR Images," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 12, pp. 1925-1929, Dec. 2016. https://doi.org/10.1109/LGRS.2016.2618604
[24] T. Zhang, X. Zhang, J. Li, X. Xu, B. Wang, X. Zhan, Y. Xu, X. Ke, T. Zeng, H. Su, I. Ahmad, D. Pan, C. Liu, Y. Zhou, J. Shi, and S. Wei, "SAR Ship Detection Dataset (SSDD): Official Release and Comprehensive Data Analysis," Remote Sensing, vol. 13, 3690, 2021.
[25] J. Li, C. Qu, and J. Shao, "Ship detection in SAR images based on an improved faster R-CNN," in Proc. of BIGSARDATA, Beijing, China, pp. 1-6, Nov. 2017.
[26] G. Gao, L. Liu, L. Zhao, G. Shi, and G. Kuang, "An Adaptive and Fast CFAR Algorithm Based on Automatic Censoring for Target Detection in High-Resolution SAR Images," IEEE Trans. on Geoscience and Remote Sensing, vol. 47, no. 6, pp. 1685-1697, Jun. 2009. https://doi.org/10.1109/TGRS.2008.2006504
[27] Y. Cui, G. Zhou, J. Yang, and Y. Yamaguchi, "On the Iterative Censoring for Target Detection in SAR Images," IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 4, pp. 641-645, Jul. 2011. https://doi.org/10.1109/LGRS.2010.2098434
[28] A. Agrawal, P. Mangalraj, and M. A. Bisherwal, "Target detection in SAR images using SIFT," in Proc. of IEEE Int. Symp. Signal Process. Inf. Technol. (ISSPIT), pp. 90-94, Dec. 2015.
[29] X. Dai, J. Yin, J. Yang, and L. Zhou, "Vehicle detection via polarimetric SAR image," in Proc. of IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2021.
[30] Y. Bazi, L. Bruzzone, and F. Melgani, "An Unsupervised Approach Based on the Generalized Gaussian Model to Automatic Change Detection in Multitemporal SAR Images," IEEE Trans. on Geoscience and Remote Sensing, vol. 43, no. 4, pp. 874-887, Apr. 2005. https://doi.org/10.1109/TGRS.2004.842441
[31] O. Yousif and Y. Ban, "Improving Urban Change Detection from Multitemporal SAR Images Using PCA-NLM," IEEE Trans. on Geoscience and Remote Sensing, vol. 51, no. 4, pp. 2032-2041, Apr. 2013. https://doi.org/10.1109/TGRS.2013.2245900
[32] Y. Zheng, X. Zhang, B. Hou, and G. Liu, "Using Combined Difference Image and k-Means Clustering for SAR Image Change Detection," IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 3, pp. 691-695, Mar. 2014. https://doi.org/10.1109/LGRS.2013.2275738
[33] J. Geng, X. Ma, X. Zhou, and H. Wang, "Saliency-Guided Deep Neural Networks for SAR Image Change Detection," IEEE Trans. on Geoscience and Remote Sensing, vol. 57, no. 10, pp. 7365-7377, Oct. 2019. https://doi.org/10.1109/TGRS.2019.2913095
[34] J. Lee, D. Jang, and J.-S. Lee, "Semi-Automatic SAR Image Land Cover Labelling Pipeline," in Proc. of International Conference on ICT Convergence (ICTC), Oct. 2020.
[35] Korea National Spatial Data Infrastructure Portal (NSDI). [Online]. Available: https://www.nsdi.go.kr
[36] L. Jiao, F. Zhang, F. Liu, S. Yang, L. Li, Z. Feng, and R. Qu, "A survey of deep learning-based object detection," IEEE Access, vol. 7, pp. 128837-128868, 2019. https://doi.org/10.1109/ACCESS.2019.2939201
[37] F. Chollet and others, "Keras," 2015. [Online]. Available: https://keras.io
[38] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv:1409.1556, 2014.
[39] J. D. Blower, A. L. Gemmell, G. H. Griffiths, K. Haines, A. Santokhee, and X. Yang, "A web map service implementation for the visualization of multidimensional gridded environmental data," Environmental Modelling and Software, vol. 47, pp. 218-224, 2013. https://doi.org/10.1016/j.envsoft.2013.04.002
[40] M. Van den Bergh, X. Boix, G. Roig, and L. Van Gool, "SEEDS: Superpixels extracted via energy-driven sampling," in Proc. of European Conference on Computer Vision (ECCV), pp. 13-26, 2013.