• Title/Summary/Keyword: 훈련지역 취득기법 (training area acquisition techniques)


Estimation of Classification Accuracy of JERS-1 Satellite Imagery according to the Acquisition Method and Size of Training Reference Data (훈련지역의 취득방법 및 규모에 따른 JERS-1위성영상의 토지피복분류 정확도 평가)

  • Ha, Sung-Ryong; Kyoung, Chon-Ku; Park, Sang-Young; Park, Dae-Hee
    • Journal of the Korean Association of Geographic Information Studies, v.5 no.1, pp.27-37, 2002
  • The classification accuracy of land cover has been considered one of the major issues in estimating pollution loads generated from diffuse land-use patterns in a watershed. This research aimed to assess the effects of the acquisition method and sample size of training reference data on the land cover classification accuracy of imagery acquired by the optical sensor (OPS) on JERS-1. Two data acquisition methods were considered for preparing the training data: the first assigned a land cover type to a specific pixel based on the researcher's subjective discrimination of current land use, and the second was based on an aerial photograph combined with digital maps in a GIS. Three sample sizes, 0.3%, 0.5%, and 1.0% of all pixels, were applied to examine the consistency of the classified land cover with the training data of the corresponding pixels. A maximum likelihood scheme was applied to classify the land use patterns in the JERS-1 imagery. The classification run using the aerial photograph achieved 18% higher consistency with the training data than the run using the researcher's subjective discrimination. Regarding sample size, it was proposed that the training area should cover at least 1% of all pixels in the study area to obtain 95% accuracy for JERS-1 satellite imagery over a typical small-to-medium-size urbanized area.

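The paper above applies a maximum likelihood (Gaussian) classifier to training pixels sampled at different rates. The following is a minimal sketch of that general technique, not the authors' code; the band values, class labels, and 0.5% sampling rate shown are hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_mlc(train_pixels, train_labels):
    """Estimate per-class mean vectors and covariance matrices from training pixels."""
    params = {}
    for c in np.unique(train_labels):
        x = train_pixels[train_labels == c]
        params[c] = (x.mean(axis=0), np.cov(x, rowvar=False))
    return params

def classify_mlc(pixels, params):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    classes = list(params.keys())
    scores = np.column_stack([
        multivariate_normal.logpdf(pixels, mean=m, cov=S, allow_singular=True)
        for m, S in params.values()
    ])
    return np.array(classes)[scores.argmax(axis=1)]

# Hypothetical example: 3-band pixels, ~0.5% of pixels sampled as training data.
rng = np.random.default_rng(0)
image = rng.random((10000, 3))              # flattened image, 3 spectral bands
labels = rng.integers(0, 4, size=10000)     # reference land cover labels (random, for illustration only)
sample = rng.random(10000) < 0.005          # ~0.5% training sample
params = fit_mlc(image[sample], labels[sample])
predicted = classify_mlc(image, params)
consistency = (predicted[sample] == labels[sample]).mean()
```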

Performance Evaluation of Deep Learning Model according to the Ratio of Cultivation Area in Training Data (훈련자료 내 재배지역의 비율에 따른 딥러닝 모델의 성능 평가)

  • Seong, Seonkyeong; Choi, Jaewan
    • Korean Journal of Remote Sensing, v.38 no.6_1, pp.1007-1014, 2022
  • Compact Advanced Satellite 500 (CAS500) can be used for various purposes, including vegetation, forestry, and agricultural applications, and is expected to enable rapid acquisition of satellite images over various areas. To use satellite images acquired by CAS500 in the agricultural field, a satellite image-based technique for extracting crop cultivation areas needs to be developed. In particular, as research on deep learning has become active in recent years, it is necessary to develop deep learning models for extracting crop cultivation areas and to generate training data for them. This manuscript classified the onion and garlic cultivation areas in Hapcheon-gun using PlanetScope satellite images and farm maps. For effective model training, model performance was analyzed according to the proportion of crop-cultivated area in the training data. As the deep learning model for the experiment, the Fully Convolutional Densely Connected Convolutional Network (FC-DenseNet) was restructured to suit crop cultivation area classification. The experiment showed that the ratio of crop cultivation area in the training data affected the performance of the deep learning model.
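The study above analyzes model performance as a function of how much of each training patch is covered by crop cultivation. A minimal sketch of one way to compute and filter on that ratio is shown below; the patch size, threshold, and array names are hypothetical and not taken from the paper.

```python
import numpy as np

def cultivation_ratio(label_patch, crop_value=1):
    """Fraction of pixels in a label patch that belong to the crop class."""
    return float(np.mean(label_patch == crop_value))

def select_patches(label_map, patch_size=64, min_ratio=0.1):
    """Split a label map into patches and keep those above a cultivation-ratio threshold."""
    h, w = label_map.shape
    selected = []
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            patch = label_map[i:i + patch_size, j:j + patch_size]
            if cultivation_ratio(patch) >= min_ratio:
                selected.append((i, j))
    return selected

# Hypothetical usage with a random binary crop mask.
rng = np.random.default_rng(1)
mask = (rng.random((512, 512)) < 0.2).astype(np.uint8)
patches = select_patches(mask, patch_size=64, min_ratio=0.1)
```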

Application of GIS Technique for Fire Drill in Hillside Area

  • Chung, Yeong-Jin
    • Spatial Information Research, v.12 no.4 s.31, pp.321-328, 2004
  • The purpose of this study is to describe a 3-dimensional technique for obtaining spatial information automatically using PhotoModeler Pro. PhotoModeler Pro is excellent software for three-dimensional measurement on a personal computer running the Windows operating system. However, it does not sufficiently support automatic matching between two stereo images, which is a significant bottleneck for a 3-D measurement package. In this study, automatic stereo matching was attempted using a self-developed program and the DDE interface of PhotoModeler Pro. The experiment field was the hillside and stairway zone of the Tateyama region, Nagasaki City. The automatic stereo matching results were very good, with a 100% target hit ratio.

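The abstract above concerns automatic stereo matching of target points between two images. As a generic illustration of that kind of matching (not PhotoModeler's DDE interface, which is not shown in the source), the sketch below locates a small window from the left image in the right image using normalized cross-correlation with OpenCV; the file names, window size, and point coordinates are hypothetical.

```python
import cv2

# Hypothetical file names; any grayscale stereo pair would do.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

def match_point(left_img, right_img, x, y, half=15):
    """Find the best match in the right image for a window centered at (x, y) in the left image."""
    template = left_img[y - half:y + half + 1, x - half:x + half + 1]
    response = cv2.matchTemplate(right_img, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, (bx, by) = cv2.minMaxLoc(response)   # location of the maximum correlation
    return (bx + half, by + half), score              # matched window center and correlation score

matched_xy, score = match_point(left, right, x=200, y=150)
```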

Extracting Flooded Areas in Southeast Asia Using SegNet and U-Net (SegNet과 U-Net을 활용한 동남아시아 지역 홍수탐지)

  • Kim, Junwoo; Jeon, Hyungyun; Kim, Duk-jin
    • Korean Journal of Remote Sensing, v.36 no.5_3, pp.1095-1107, 2020
  • Flood monitoring using satellite data has been constrained by the difficulty of obtaining satellite images at the flood peak and of accurately extracting flooded areas from satellite data. Deep learning is a promising method for satellite image classification, yet the potential of deep learning-based flooded area extraction from SAR data, which has advantages in data acquisition compared to optical satellite data, remained uncertain. This research explores the image segmentation performance of SegNet and U-Net by extracting flooded areas in the Khorat basin, Mekong river basin, and Cagayan river basin in Thailand, Laos, and the Philippines from Sentinel-1 A/B satellite data. The results show that the Global Accuracy, Mean IoU, and Mean BF Score of SegNet are 0.9847, 0.6016, and 0.6467, respectively, whereas those of U-Net are 0.9937, 0.7022, and 0.7125. Visual interpretation shows that the classification accuracy of U-Net is higher than that of SegNet, but the overall processing time of SegNet is around three times shorter than that of U-Net. The results of this research are expected to be useful for developing deep learning-based flood monitoring models and fully automated flooded area extraction models.
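The comparison above is reported in terms of Global Accuracy and Mean IoU (the BF score is omitted here). A minimal sketch of how these two segmentation metrics can be computed from predicted and reference masks is given below; the array shapes and class count are hypothetical.

```python
import numpy as np

def confusion_matrix(pred, ref, n_classes):
    """Accumulate a confusion matrix over flattened label maps."""
    idx = ref.ravel() * n_classes + pred.ravel()
    return np.bincount(idx, minlength=n_classes * n_classes).reshape(n_classes, n_classes)

def global_accuracy(cm):
    """Fraction of pixels whose predicted class matches the reference."""
    return cm.trace() / cm.sum()

def mean_iou(cm):
    """Average intersection-over-union across classes."""
    intersection = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - intersection
    return np.mean(intersection / np.maximum(union, 1))

# Hypothetical binary flood / non-flood masks.
rng = np.random.default_rng(2)
reference = rng.integers(0, 2, size=(256, 256))
predicted = rng.integers(0, 2, size=(256, 256))
cm = confusion_matrix(predicted, reference, n_classes=2)
print(global_accuracy(cm), mean_iou(cm))
```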

Unsupervised Classification of Landsat-8 OLI Satellite Imagery Based on Iterative Spectral Mixture Model (자동화된 훈련 자료를 활용한 Landsat-8 OLI 위성영상의 반복적 분광혼합모델 기반 무감독 분류)

  • Choi, Jae Wan; Noh, Sin Taek; Choi, Seok Keun
    • Journal of Korean Society for Geospatial Information Science, v.22 no.4, pp.53-61, 2014
  • Landsat OLI satellite imagery can be applied to various remote sensing applications, such as land cover mapping, urban area analysis, vegetation index extraction, and change detection, because it includes various multispectral bands. In addition, a land cover map is important information for monitoring and analyzing land cover in a GIS. In this paper, a land cover map is generated using Landsat OLI imagery and an existing land cover map. First, a training dataset is obtained automatically, using the correlation between the existing land cover map and an unsupervised K-means classification result. Then, the spectral signature of each class is determined from the training data. Finally, an abundance map and a land cover map are generated using an iterative spectral mixture model. The experiment was carried out on Landsat OLI imagery of the Cheongju area. It shows, quantitatively and visually, that the proposed method can produce a land cover map without a manually generated training dataset, compared against the existing land cover map and a supervised SVM classification result.
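The abstract above combines automatically derived class spectra with a spectral mixture model. The sketch below illustrates the two generic building blocks, K-means clustering to obtain candidate class spectra and non-negative least squares to estimate per-pixel abundances; it is not the authors' iterative scheme, and all array names and sizes are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import nnls

# Hypothetical flattened multispectral image: pixels x bands.
rng = np.random.default_rng(3)
pixels = rng.random((5000, 7))

# Step 1: unsupervised clustering; cluster centers serve as candidate class spectra (endmembers).
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
endmembers = kmeans.cluster_centers_          # shape (5, 7)

# Step 2: per-pixel abundance estimation with a non-negativity constraint.
def unmix(pixel, endmembers):
    """Solve pixel ~= endmembers.T @ abundances with abundances >= 0, normalized to sum to one."""
    abundances, _ = nnls(endmembers.T, pixel)
    return abundances / max(abundances.sum(), 1e-12)

abundance_map = np.array([unmix(p, endmembers) for p in pixels])
land_cover = abundance_map.argmax(axis=1)     # hard label = dominant abundance
```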

Change Detection for High-resolution Satellite Images Using Transfer Learning and Deep Learning Network (전이학습과 딥러닝 네트워크를 활용한 고해상도 위성영상의 변화탐지)

  • Song, Ah Ram; Choi, Jae Wan; Kim, Yong Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.37 no.3, pp.199-208, 2019
  • As the number of available satellites increases and technology advances, image information products are becoming increasingly diverse and large amounts of data are accumulating. In this study, we propose a change detection method for high-resolution satellite images that uses transfer learning and a deep learning network to overcome the limitation caused by insufficient training data through the use of pre-trained information. The deep learning network used in this study comprises convolutional layers to extract spatial and spectral information and convolutional long short-term memory layers to analyze time series information. To reuse the learned information, the first two convolutional layers of the change detection network are initialized with values learned from 40,000 patches of the ISPRS (International Society for Photogrammetry and Remote Sensing) dataset. In addition, 2D (2-dimensional) and 3D (3-dimensional) kernels were compared to find the optimal structure for high-resolution satellite images. The experimental results for KOMPSAT-3A (KOrean Multi-Purpose SATellite-3A) images show that this change detection method can effectively extract changed/unchanged pixels while being less sensitive to changes caused by shadow and relief displacement. In addition, the change detection accuracy at two sites was improved by using 3D kernels, because a 3D kernel can consider not only spatial information but also spectral information. This study indicates that changes in high-resolution satellite images can be detected effectively using the constructed image information and the deep learning network. In future work, the pre-trained change detection network will be applied to newly obtained images to extend the scope of application.
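The method above initializes the first two convolutional layers of the change detection network with weights pre-trained on another dataset. The sketch below shows that generic weight-transfer step in PyTorch; the layer names, channel sizes, and checkpoint path are hypothetical and do not reproduce the authors' architecture.

```python
import torch
import torch.nn as nn

class ChangeDetectionNet(nn.Module):
    """Toy network whose first two conv layers will receive pre-trained weights."""
    def __init__(self, in_bands=4):
        super().__init__()
        self.conv1 = nn.Conv2d(in_bands, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.head = nn.Conv2d(64, 2, kernel_size=1)   # changed / unchanged

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        return self.head(x)

net = ChangeDetectionNet()

# Hypothetical checkpoint trained on patches from another dataset.
pretrained = torch.load("pretrained_patches.pth", map_location="cpu")

# Copy only the weights of the first two convolutional layers, then optionally freeze them.
transfer = {k: v for k, v in pretrained.items() if k.startswith(("conv1.", "conv2."))}
net.load_state_dict(transfer, strict=False)
for name, param in net.named_parameters():
    if name.startswith(("conv1.", "conv2.")):
        param.requires_grad = False
```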

Quantitative Evaluations of Deep Learning Models for Rapid Building Damage Detection in Disaster Areas (재난지역에서의 신속한 건물 피해 정도 감지를 위한 딥러닝 모델의 정량 평가)

  • Ser, Junho; Yang, Byungyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.40 no.5, pp.381-391, 2022
  • This paper aims to identify, among prevailing deep learning models, one that helps rapidly detect damaged buildings in areas where disasters occur. The models selected are SSD-512, RetinaNet, and YOLOv3, which have been widely used for object detection in recent years. These models are based on one-stage detector networks suitable for rapid object detection; they are often chosen for their structural advantages and high speed, but have rarely been applied to damaged building detection in disaster management. In this study, we first trained each algorithm on the xBD dataset, which provides post-disaster imagery with damage classification labels. Next, the three models were quantitatively evaluated with mAP (mean Average Precision) and FPS (Frames Per Second). The mAP of YOLOv3 was 34.39% and its FPS reached 46. The mAP of RetinaNet was 36.06%, 1.67 percentage points higher than YOLOv3, but its FPS was one third that of YOLOv3. SSD-512 scored considerably lower than YOLOv3 on both quantitative indicators. In a disaster situation, a rapid and precise investigation of damaged buildings is essential for effective disaster response. Accordingly, the results obtained through this study are expected to be useful for rapid response in disaster management.
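The evaluation above reports mAP and FPS for each detector. As an illustration of the FPS part of such a benchmark, the sketch below times a detector callable over a batch of images; the run_detector function and input images are hypothetical placeholders for any of the three models.

```python
import time
import numpy as np

def measure_fps(run_detector, images, warmup=5):
    """Average frames per second of a detector callable over a list of images."""
    for img in images[:warmup]:          # warm-up runs are excluded from timing
        run_detector(img)
    start = time.perf_counter()
    for img in images:
        run_detector(img)
    elapsed = time.perf_counter() - start
    return len(images) / elapsed

# Hypothetical stand-in detector and input images.
def run_detector(image):
    return image.mean()                  # placeholder for an actual forward pass

frames = [np.random.rand(512, 512, 3).astype(np.float32) for _ in range(50)]
print(f"FPS: {measure_fps(run_detector, frames):.1f}")
```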

Estimation of Fractional Urban Tree Canopy Cover through Machine Learning Using Optical Satellite Images (기계학습을 이용한 광학 위성 영상 기반의 도시 내 수목 피복률 추정)

  • Sejeong Bae; Bokyung Son; Taejun Sung; Yeonsu Lee; Jungho Im; Yoojin Kang
    • Korean Journal of Remote Sensing, v.39 no.5_3, pp.1009-1029, 2023
  • Urban trees play a vital role in urban ecosystems, reducing impervious surfaces and influencing carbon cycling within the city. Although previous research has demonstrated the efficacy of combining artificial intelligence with airborne light detection and ranging (LiDAR) data to generate urban tree information, the availability and cost constraints associated with LiDAR data pose limitations. Consequently, this study employed freely accessible, high-resolution multispectral satellite imagery (i.e., Sentinel-2 data) and machine learning to estimate fractional tree canopy cover (FTC) within the urban area of Suwon, South Korea. The study leveraged a median composite image derived from a time series of Sentinel-2 images. To account for the diverse land cover found in urban areas, the model incorporated three types of input variables: the mean and standard deviation (std), within 30 m grids, of 10 m resolution optical indices from Sentinel-2, and the fractional coverage of distinct land cover classes within the same 30 m grids from the existing level-3 land cover map. Four schemes with different combinations of input variables were compared. Notably, when all three factors (mean, std, and fractional cover) were used to account for land cover variation in urban areas (Scheme 4, S4), the machine learning model performed better than when only the mean of the optical indices was used (Scheme 1). Of the models tested, the random forest (RF) model with S4 performed best, achieving an R2 of 0.8196, a mean absolute error (MAE) of 0.0749, and a root mean squared error (RMSE) of 0.1022. Based on the variable importance analysis, the std variables had the greatest impact on model outputs over heterogeneous land covers. The trained RF model with S4 was then applied to the entire Suwon region, consistently delivering robust results with an R2 of 0.8702, an MAE of 0.0873, and an RMSE of 0.1335. The FTC estimation method developed in this study is expected to be applicable to various regions, providing fundamental data for a better understanding of carbon dynamics in urban ecosystems.
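The study above feeds grid-level mean, standard deviation, and fractional land cover features into a random forest regressor. A minimal sketch of that modelling step with scikit-learn follows; the feature arrays, grid counts, and hyperparameters are hypothetical and not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Hypothetical grid-level features: mean and std of optical indices plus land cover fractions.
rng = np.random.default_rng(4)
n_grids = 2000
index_mean = rng.random((n_grids, 4))             # per-grid means of 4 optical indices
index_std = rng.random((n_grids, 4))              # within-grid standard deviations of the same indices
cover_frac = rng.dirichlet(np.ones(5), n_grids)   # fractional cover of 5 land cover classes
X = np.hstack([index_mean, index_std, cover_frac])
y = rng.random(n_grids)                           # reference fractional tree canopy cover (0-1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

pred = rf.predict(X_test)
print("R2:", r2_score(y_test, pred))
print("MAE:", mean_absolute_error(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```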