• Title/Summary/Keyword: Pixel-Based

Search Results: 1,744

Land Cover Classification of Coastal Area by SAM from Airborne Hyperspectral Images (항공 초분광 영상으로부터 연안지역의 SAM 토지피복분류)

  • LEE, Jin-Duk;BANG, Kon-Joon;KIM, Hyun-Ho
    • Journal of the Korean Association of Geographic Information Studies / v.21 no.1 / pp.35-45 / 2018
  • Image data collected by an airborne hyperspectral camera system have great usability for coastline mapping, detection of facilities composed of specific materials, detailed land use analysis, change monitoring, and so forth in a complex coastal area, because the system provides almost complete spectral and spatial information across tens to hundreds of spectral bands for each image pixel. Several approaches based on SAM (Spectral Angle Mapper) supervised classification were applied to extract optimal land cover information from hyperspectral images acquired by the CASI-1500 airborne hyperspectral camera over a coastal area including both land and sea water. We applied and compared three approaches: first, classification of the combined land and sea areas; second, reclassification after decomposition of the combined land-and-sea classification result into land and sea areas; and third, classification of the land area only, using atmospherically corrected images. Land cover classification was also conducted after selecting, from the 48 hyperspectral bands, four band images with the same wavelength ranges as the IKONOS, QuickBird, KOMPSAT, and GeoEye satellite images, as well as eight band images with the same wavelength ranges as WorldView-2, and the results were compared with the classification conducted with all 48 bands. As a result, reclassification after decomposition of the combined land-and-sea classification result was more effective than classification of the combined land and sea areas. In that reclassification approach, the larger the number of bands, the higher the accuracy and reliability. With higher spectral resolution, asphalt and concrete roads could be classified more accurately.
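The SAM classifier named in the abstract assigns each pixel to the reference (endmember) spectrum with the smallest spectral angle, the arccosine of the normalized dot product between the two spectra. A minimal sketch in Python with NumPy, with function names of our own choosing (the paper's actual implementation is not given):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference spectrum."""
    cos_theta = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def sam_classify(image, references):
    """Assign each pixel to the class whose reference spectrum has the smallest angle.

    image:      (rows, cols, bands) array of pixel spectra
    references: (classes, bands) array of endmember spectra
    returns:    (rows, cols) array of class indices
    """
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands)
    # Normalize rows so the dot product equals the cosine of the spectral angle
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = references / np.linalg.norm(references, axis=1, keepdims=True)
    angles = np.arccos(np.clip(p @ r.T, -1.0, 1.0))  # (n_pixels, n_classes)
    return angles.argmin(axis=1).reshape(rows, cols)
```

Because the angle depends only on spectral shape, not magnitude, SAM is relatively insensitive to illumination differences, which is one reason it is popular for supervised classification of hyperspectral imagery.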

The Evaluation of Meteorological Inputs retrieved from MODIS for Estimation of Gross Primary Productivity in the US Corn Belt Region (MODIS 위성 영상 기반의 일차생산성 알고리즘 입력 기상 자료의 신뢰도 평가: 미국 Corn Belt 지역을 중심으로)

  • Lee, Ji-Hye;Kang, Sin-Kyu;Jang, Keun-Chang;Ko, Jong-Han;Hong, Suk-Young
    • Korean Journal of Remote Sensing / v.27 no.4 / pp.481-494 / 2011
  • Investigation of CO₂ exchange between the biosphere and atmosphere at regional, continental, and global scales can be approached by combining remote sensing with carbon cycle processes to estimate vegetation productivity. The NASA Earth Observing System (EOS) currently produces regular global estimates of gross primary productivity (GPP) and annual net primary productivity (NPP) of the entire terrestrial earth surface at 1 km spatial resolution. While the MODIS GPP algorithm uses meteorological data provided by the NASA Data Assimilation Office (DAO), sub-pixel heterogeneity and complex terrain are generally not reflected, owing to the coarse spatial resolution of the DAO data (1° × 1.25°). In this study, we estimated inputs retrieved from MODIS products of the AQUA and TERRA satellites at 5 km spatial resolution for the purpose of finer GPP and/or NPP determination. The derived variables included temperature, vapor pressure deficit (VPD), and solar radiation. Data from seven AmeriFlux towers located in the Corn Belt region were used to evaluate the MODIS-derived inputs. MODIS-derived air temperature showed good agreement with ground-based observations; the mean error (ME) and correlation coefficient (R) ranged from -0.9°C to +5.2°C and from 0.83 to 0.98, respectively. VPD agreed somewhat more coarsely with tower observations (ME = -183.8 Pa to +382.1 Pa; R = 0.51 to 0.92). While MODIS-derived shortwave radiation showed a good correlation with observations, it was slightly overestimated (ME = -0.4 MJ day⁻¹ to +7.9 MJ day⁻¹; R = 0.67 to 0.97). Our results indicate that inputs derived from MODIS atmosphere and land products can provide a useful tool for estimating crop GPP.
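The evaluation statistics quoted in the abstract, mean error (ME, a bias measure) and the Pearson correlation coefficient (R), can be sketched as follows; the data here are invented for illustration and the function names are our own, not the authors':

```python
import numpy as np

def mean_error(estimated, observed):
    """Mean error (bias): positive values mean the estimate runs high on average."""
    return float(np.mean(np.asarray(estimated, dtype=float) - np.asarray(observed, dtype=float)))

def correlation(estimated, observed):
    """Pearson correlation coefficient R between estimate and observation."""
    return float(np.corrcoef(estimated, observed)[0, 1])

# Illustrative only: hypothetical tower vs. satellite-derived air temperature (deg C)
tower_temp = [10.0, 15.0, 20.0, 25.0]
modis_temp = [11.0, 16.0, 21.0, 26.0]
```

Reporting both statistics is useful because they are complementary: a retrieval can correlate almost perfectly with tower data (high R) while still carrying a systematic offset, which only the ME reveals.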

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan;Kim, Hongrae;Hong, Min
    • Journal of Internet Computing and Services / v.16 no.2 / pp.49-55 / 2015
  • According to statistics on traffic accidents over the recent five years, more traffic accidents happened during the night than during the day. Traffic accidents have various causes, and one of the major causes is inappropriate or missing street lights, which confuse the driver's sight. In this paper, we designed and implemented a smartphone application for lane luminance measurement that stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inappropriate street light facilities and areas without any street lights. The application is implemented in a native C/C++ environment using the Android NDK, which improves the operation speed over code written in Java or other languages. To measure road luminance, the input image in RGB color space is converted to YCbCr color space, and the Y value gives the luminance of the road. The application detects the road lanes and uploads the calculated lane luminance to the database server. It receives road video through the smartphone camera and reduces computational cost by allocating an ROI (region of interest) in the input images. The ROI is converted to a grayscale image, and the Canny edge detector is applied to extract the outlines of lanes. A Hough line transform is then applied to obtain the candidate lane group, and the two sides of the lane are selected by a lane detection algorithm that uses the gradients of the candidate lanes. When both road lanes are detected, a triangular area is set with a height of 20 pixels down from the intersection of the lanes, and the luminance of the road is estimated from this triangle: the Y value is calculated from the R, G, and B values of each pixel in the triangle. The average Y value of the pixels is scaled to a range from 0 to 100 to express the road luminance, and pixel values are rendered in colors between black and green. After analyzing the road lane video for the luminance of the road about 60 meters ahead, the application stores the car location from the smartphone GPS sensor in the database server by wireless communication every 10 minutes. We expect that the collected road luminance information can warn drivers for safe driving or effectively improve renovation plans for road luminance management.
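The luminance step described above, deriving Y from each pixel's R, G, B values and scaling the average to 0-100, can be sketched as below. This is a minimal illustration assuming the standard BT.601 luma weights for an RGB-to-YCbCr conversion; the paper's own C/C++ code and exact scaling are not given, and the function name is ours:

```python
import numpy as np

def luminance_score(rgb_roi):
    """Average luminance of an RGB region, scaled to the 0-100 range.

    rgb_roi: (h, w, 3) uint8 array in R, G, B channel order
             (e.g. the triangular road area, passed as its bounding patch).
    """
    rgb = rgb_roi.astype(np.float64)
    # BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B, per pixel
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Average over the region, then rescale from 0..255 to 0..100
    return float(y.mean() / 255.0 * 100.0)
```

A pure white patch scores near 100 and a black patch scores 0, matching the 0-100 road luminance scale the paper reports to the database.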

Development of Cloud and Shadow Detection Algorithm for Periodic Composite of Sentinel-2A/B Satellite Images (Sentinel-2A/B 위성영상의 주기합성을 위한 구름 및 구름 그림자 탐지 기법 개발)

  • Kim, Sun-Hwa;Eun, Jeong
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.989-998 / 2021
  • In the utilization of optical satellite imagery, which is greatly affected by clouds, periodic compositing is a useful technique to minimize the influence of clouds. Recently, a technique has been proposed for selecting the optimal pixel least affected by cloud and cloud shadow during a certain period, by directly inputting cloud and cloud shadow information during periodic compositing. Accurate extraction of clouds and cloud shadows is essential in order to derive optimal composite results. Also, for surface targets for which spectral information is important, such as crops, the loss of spectral information should be minimized during cloud-free compositing. In this study, two spectral indices (Haze Optimized Transformation, HOT, and MeanVis) were used to derive a detection technique with low loss of spectral information while maintaining high detection accuracy for clouds and cloud shadows over cabbage fields in the highlands of Gangwon-do. These detection results were compared and analyzed against the cloud and cloud shadow information provided with Sentinel-2A/B products. In an analysis of data from 2019 to 2021, the cloud information from the Sentinel-2A/B satellites showed a detection accuracy with an F1 value of 0.91, but bright artifacts were falsely detected as clouds. On the other hand, the cloud detection result obtained by applying a threshold (0.05) to HOT showed relatively low detection accuracy (F1 = 0.72), but the loss of spectral information was minimized owing to the small number of false positives. In the case of cloud shadows, only minimal shadows were detected in the Sentinel-2A/B additional layer, but when a threshold (0.015) was applied to MeanVis, cloud shadows could be detected and distinguished from topographically generated shadows. By inputting the spectral-index-based cloud and shadow information, stable monthly cloud-free composited vegetation index results were obtained; in the future, high-accuracy Sentinel-2A/B cloud information will be input into periodic cloud-free compositing for comparison.
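The thresholding step above can be sketched as a simple per-pixel mask. This assumes, plausibly but without confirmation from the abstract, that HOT values above the 0.05 threshold indicate cloud and that low MeanVis (mean visible reflectance) values among non-cloud pixels indicate shadow; the index formulas themselves (the band combinations behind HOT and MeanVis) are not given in the abstract and are taken as precomputed inputs here:

```python
import numpy as np

# Thresholds reported in the abstract
HOT_CLOUD_THRESHOLD = 0.05
MEANVIS_SHADOW_THRESHOLD = 0.015

def cloud_shadow_mask(hot, meanvis):
    """Flag clouds and cloud shadows from two precomputed spectral index images.

    hot:     Haze Optimized Transformation index image
    meanvis: mean visible reflectance image (same shape as hot)
    Returns a pair of boolean masks (cloud, shadow): pixels above the HOT
    threshold are clouds, and non-cloud pixels below the MeanVis threshold
    are shadows.
    """
    hot = np.asarray(hot, dtype=float)
    meanvis = np.asarray(meanvis, dtype=float)
    cloud = hot > HOT_CLOUD_THRESHOLD
    shadow = (~cloud) & (meanvis < MEANVIS_SHADOW_THRESHOLD)
    return cloud, shadow
```

In a periodic composite, masks like these would be used to exclude flagged pixels when picking, for each location, the best observation of the month.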